Sep 30 16:56:43 localhost kernel: Linux version 5.14.0-617.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Mon Sep 15 21:46:13 UTC 2025
Sep 30 16:56:43 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Sep 30 16:56:43 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-617.el9.x86_64 root=UUID=d6a81468-b74c-4055-b485-def635ab40f8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Sep 30 16:56:43 localhost kernel: BIOS-provided physical RAM map:
Sep 30 16:56:43 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 30 16:56:43 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 30 16:56:43 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 30 16:56:43 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Sep 30 16:56:43 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Sep 30 16:56:43 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 30 16:56:43 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 30 16:56:43 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Sep 30 16:56:43 localhost kernel: NX (Execute Disable) protection: active
Sep 30 16:56:43 localhost kernel: APIC: Static calls initialized
Sep 30 16:56:43 localhost kernel: SMBIOS 2.8 present.
Sep 30 16:56:43 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Sep 30 16:56:43 localhost kernel: Hypervisor detected: KVM
Sep 30 16:56:43 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 30 16:56:43 localhost kernel: kvm-clock: using sched offset of 4382299040 cycles
Sep 30 16:56:43 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 30 16:56:43 localhost kernel: tsc: Detected 2800.000 MHz processor
Sep 30 16:56:43 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 30 16:56:43 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 30 16:56:43 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Sep 30 16:56:43 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 30 16:56:43 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Sep 30 16:56:43 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Sep 30 16:56:43 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Sep 30 16:56:43 localhost kernel: Using GB pages for direct mapping
Sep 30 16:56:43 localhost kernel: RAMDISK: [mem 0x2d7d0000-0x32bdffff]
Sep 30 16:56:43 localhost kernel: ACPI: Early table checksum verification disabled
Sep 30 16:56:43 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Sep 30 16:56:43 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Sep 30 16:56:43 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Sep 30 16:56:43 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Sep 30 16:56:43 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Sep 30 16:56:43 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Sep 30 16:56:43 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Sep 30 16:56:43 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Sep 30 16:56:43 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Sep 30 16:56:43 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Sep 30 16:56:43 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Sep 30 16:56:43 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Sep 30 16:56:43 localhost kernel: No NUMA configuration found
Sep 30 16:56:43 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Sep 30 16:56:43 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Sep 30 16:56:43 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Sep 30 16:56:43 localhost kernel: Zone ranges:
Sep 30 16:56:43 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Sep 30 16:56:43 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Sep 30 16:56:43 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Sep 30 16:56:43 localhost kernel:   Device   empty
Sep 30 16:56:43 localhost kernel: Movable zone start for each node
Sep 30 16:56:43 localhost kernel: Early memory node ranges
Sep 30 16:56:43 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Sep 30 16:56:43 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Sep 30 16:56:43 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Sep 30 16:56:43 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Sep 30 16:56:43 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 30 16:56:43 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 30 16:56:43 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Sep 30 16:56:43 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Sep 30 16:56:43 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 30 16:56:43 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 30 16:56:43 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 30 16:56:43 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 30 16:56:43 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 30 16:56:43 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 30 16:56:43 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 30 16:56:43 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 30 16:56:43 localhost kernel: TSC deadline timer available
Sep 30 16:56:43 localhost kernel: CPU topo: Max. logical packages:   8
Sep 30 16:56:43 localhost kernel: CPU topo: Max. logical dies:       8
Sep 30 16:56:43 localhost kernel: CPU topo: Max. dies per package:   1
Sep 30 16:56:43 localhost kernel: CPU topo: Max. threads per core:   1
Sep 30 16:56:43 localhost kernel: CPU topo: Num. cores per package:     1
Sep 30 16:56:43 localhost kernel: CPU topo: Num. threads per package:   1
Sep 30 16:56:43 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Sep 30 16:56:43 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 30 16:56:43 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Sep 30 16:56:43 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Sep 30 16:56:43 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Sep 30 16:56:43 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Sep 30 16:56:43 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Sep 30 16:56:43 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Sep 30 16:56:43 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Sep 30 16:56:43 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Sep 30 16:56:43 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Sep 30 16:56:43 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Sep 30 16:56:43 localhost kernel: Booting paravirtualized kernel on KVM
Sep 30 16:56:43 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 30 16:56:43 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Sep 30 16:56:43 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Sep 30 16:56:43 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Sep 30 16:56:43 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Sep 30 16:56:43 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Sep 30 16:56:43 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-617.el9.x86_64 root=UUID=d6a81468-b74c-4055-b485-def635ab40f8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Sep 30 16:56:43 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-617.el9.x86_64", will be passed to user space.
Sep 30 16:56:43 localhost kernel: random: crng init done
Sep 30 16:56:43 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Sep 30 16:56:43 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 30 16:56:43 localhost kernel: Fallback order for Node 0: 0 
Sep 30 16:56:43 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Sep 30 16:56:43 localhost kernel: Policy zone: Normal
Sep 30 16:56:43 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 30 16:56:43 localhost kernel: software IO TLB: area num 8.
Sep 30 16:56:43 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Sep 30 16:56:43 localhost kernel: ftrace: allocating 49329 entries in 193 pages
Sep 30 16:56:43 localhost kernel: ftrace: allocated 193 pages with 3 groups
Sep 30 16:56:43 localhost kernel: Dynamic Preempt: voluntary
Sep 30 16:56:43 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 30 16:56:43 localhost kernel: rcu:         RCU event tracing is enabled.
Sep 30 16:56:43 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Sep 30 16:56:43 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Sep 30 16:56:43 localhost kernel:         Rude variant of Tasks RCU enabled.
Sep 30 16:56:43 localhost kernel:         Tracing variant of Tasks RCU enabled.
Sep 30 16:56:43 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 30 16:56:43 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Sep 30 16:56:43 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Sep 30 16:56:43 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Sep 30 16:56:43 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Sep 30 16:56:43 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Sep 30 16:56:43 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 30 16:56:43 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Sep 30 16:56:43 localhost kernel: Console: colour VGA+ 80x25
Sep 30 16:56:43 localhost kernel: printk: console [ttyS0] enabled
Sep 30 16:56:43 localhost kernel: ACPI: Core revision 20230331
Sep 30 16:56:43 localhost kernel: APIC: Switch to symmetric I/O mode setup
Sep 30 16:56:43 localhost kernel: x2apic enabled
Sep 30 16:56:43 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Sep 30 16:56:43 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 30 16:56:43 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Sep 30 16:56:43 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 30 16:56:43 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 30 16:56:43 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 30 16:56:43 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 30 16:56:43 localhost kernel: Spectre V2 : Mitigation: Retpolines
Sep 30 16:56:43 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 30 16:56:43 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 30 16:56:43 localhost kernel: RETBleed: Mitigation: untrained return thunk
Sep 30 16:56:43 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 30 16:56:43 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 30 16:56:43 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 30 16:56:43 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 30 16:56:43 localhost kernel: x86/bugs: return thunk changed
Sep 30 16:56:43 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 30 16:56:43 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 30 16:56:43 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 30 16:56:43 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 30 16:56:43 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Sep 30 16:56:43 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 30 16:56:43 localhost kernel: Freeing SMP alternatives memory: 40K
Sep 30 16:56:43 localhost kernel: pid_max: default: 32768 minimum: 301
Sep 30 16:56:43 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Sep 30 16:56:43 localhost kernel: landlock: Up and running.
Sep 30 16:56:43 localhost kernel: Yama: becoming mindful.
Sep 30 16:56:43 localhost kernel: SELinux:  Initializing.
Sep 30 16:56:43 localhost kernel: LSM support for eBPF active
Sep 30 16:56:43 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 30 16:56:43 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 30 16:56:43 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 30 16:56:43 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 30 16:56:43 localhost kernel: ... version:                0
Sep 30 16:56:43 localhost kernel: ... bit width:              48
Sep 30 16:56:43 localhost kernel: ... generic registers:      6
Sep 30 16:56:43 localhost kernel: ... value mask:             0000ffffffffffff
Sep 30 16:56:43 localhost kernel: ... max period:             00007fffffffffff
Sep 30 16:56:43 localhost kernel: ... fixed-purpose events:   0
Sep 30 16:56:43 localhost kernel: ... event mask:             000000000000003f
Sep 30 16:56:43 localhost kernel: signal: max sigframe size: 1776
Sep 30 16:56:43 localhost kernel: rcu: Hierarchical SRCU implementation.
Sep 30 16:56:43 localhost kernel: rcu:         Max phase no-delay instances is 400.
Sep 30 16:56:43 localhost kernel: smp: Bringing up secondary CPUs ...
Sep 30 16:56:43 localhost kernel: smpboot: x86: Booting SMP configuration:
Sep 30 16:56:43 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Sep 30 16:56:43 localhost kernel: smp: Brought up 1 node, 8 CPUs
Sep 30 16:56:43 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Sep 30 16:56:43 localhost kernel: node 0 deferred pages initialised in 24ms
Sep 30 16:56:43 localhost kernel: Memory: 7765596K/8388068K available (16384K kernel code, 5784K rwdata, 13988K rodata, 4072K init, 7304K bss, 616480K reserved, 0K cma-reserved)
Sep 30 16:56:43 localhost kernel: devtmpfs: initialized
Sep 30 16:56:43 localhost kernel: x86/mm: Memory block size: 128MB
Sep 30 16:56:43 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 30 16:56:43 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Sep 30 16:56:43 localhost kernel: pinctrl core: initialized pinctrl subsystem
Sep 30 16:56:43 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 30 16:56:43 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Sep 30 16:56:43 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 30 16:56:43 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 30 16:56:43 localhost kernel: audit: initializing netlink subsys (disabled)
Sep 30 16:56:43 localhost kernel: audit: type=2000 audit(1759251402.208:1): state=initialized audit_enabled=0 res=1
Sep 30 16:56:43 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Sep 30 16:56:43 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 30 16:56:43 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 30 16:56:43 localhost kernel: cpuidle: using governor menu
Sep 30 16:56:43 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 30 16:56:43 localhost kernel: PCI: Using configuration type 1 for base access
Sep 30 16:56:43 localhost kernel: PCI: Using configuration type 1 for extended access
Sep 30 16:56:43 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 30 16:56:43 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 30 16:56:43 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 30 16:56:43 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 30 16:56:43 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 30 16:56:43 localhost kernel: Demotion targets for Node 0: null
Sep 30 16:56:43 localhost kernel: cryptd: max_cpu_qlen set to 1000
Sep 30 16:56:43 localhost kernel: ACPI: Added _OSI(Module Device)
Sep 30 16:56:43 localhost kernel: ACPI: Added _OSI(Processor Device)
Sep 30 16:56:43 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Sep 30 16:56:43 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 30 16:56:43 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 30 16:56:43 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 30 16:56:43 localhost kernel: ACPI: Interpreter enabled
Sep 30 16:56:43 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Sep 30 16:56:43 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Sep 30 16:56:43 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 30 16:56:43 localhost kernel: PCI: Using E820 reservations for host bridge windows
Sep 30 16:56:43 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 30 16:56:43 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 30 16:56:43 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Sep 30 16:56:43 localhost kernel: acpiphp: Slot [3] registered
Sep 30 16:56:43 localhost kernel: acpiphp: Slot [4] registered
Sep 30 16:56:43 localhost kernel: acpiphp: Slot [5] registered
Sep 30 16:56:43 localhost kernel: acpiphp: Slot [6] registered
Sep 30 16:56:43 localhost kernel: acpiphp: Slot [7] registered
Sep 30 16:56:43 localhost kernel: acpiphp: Slot [8] registered
Sep 30 16:56:43 localhost kernel: acpiphp: Slot [9] registered
Sep 30 16:56:43 localhost kernel: acpiphp: Slot [10] registered
Sep 30 16:56:43 localhost kernel: acpiphp: Slot [11] registered
Sep 30 16:56:43 localhost kernel: acpiphp: Slot [12] registered
Sep 30 16:56:43 localhost kernel: acpiphp: Slot [13] registered
Sep 30 16:56:43 localhost kernel: acpiphp: Slot [14] registered
Sep 30 16:56:43 localhost kernel: acpiphp: Slot [15] registered
Sep 30 16:56:43 localhost kernel: acpiphp: Slot [16] registered
Sep 30 16:56:43 localhost kernel: acpiphp: Slot [17] registered
Sep 30 16:56:43 localhost kernel: acpiphp: Slot [18] registered
Sep 30 16:56:43 localhost kernel: acpiphp: Slot [19] registered
Sep 30 16:56:43 localhost kernel: acpiphp: Slot [20] registered
Sep 30 16:56:43 localhost kernel: acpiphp: Slot [21] registered
Sep 30 16:56:43 localhost kernel: acpiphp: Slot [22] registered
Sep 30 16:56:43 localhost kernel: acpiphp: Slot [23] registered
Sep 30 16:56:43 localhost kernel: acpiphp: Slot [24] registered
Sep 30 16:56:43 localhost kernel: acpiphp: Slot [25] registered
Sep 30 16:56:43 localhost kernel: acpiphp: Slot [26] registered
Sep 30 16:56:43 localhost kernel: acpiphp: Slot [27] registered
Sep 30 16:56:43 localhost kernel: acpiphp: Slot [28] registered
Sep 30 16:56:43 localhost kernel: acpiphp: Slot [29] registered
Sep 30 16:56:43 localhost kernel: acpiphp: Slot [30] registered
Sep 30 16:56:43 localhost kernel: acpiphp: Slot [31] registered
Sep 30 16:56:43 localhost kernel: PCI host bridge to bus 0000:00
Sep 30 16:56:43 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Sep 30 16:56:43 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Sep 30 16:56:43 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 30 16:56:43 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 30 16:56:43 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Sep 30 16:56:43 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 30 16:56:43 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Sep 30 16:56:43 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Sep 30 16:56:43 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Sep 30 16:56:43 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Sep 30 16:56:43 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Sep 30 16:56:43 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Sep 30 16:56:43 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Sep 30 16:56:43 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Sep 30 16:56:43 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Sep 30 16:56:43 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Sep 30 16:56:43 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Sep 30 16:56:43 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Sep 30 16:56:43 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Sep 30 16:56:43 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Sep 30 16:56:43 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Sep 30 16:56:43 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Sep 30 16:56:43 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Sep 30 16:56:43 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Sep 30 16:56:43 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 30 16:56:43 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 30 16:56:43 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Sep 30 16:56:43 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Sep 30 16:56:43 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Sep 30 16:56:43 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Sep 30 16:56:43 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 30 16:56:43 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Sep 30 16:56:43 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Sep 30 16:56:43 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Sep 30 16:56:43 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Sep 30 16:56:43 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Sep 30 16:56:43 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Sep 30 16:56:43 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 30 16:56:43 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Sep 30 16:56:43 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Sep 30 16:56:43 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 30 16:56:43 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 30 16:56:43 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 30 16:56:43 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 30 16:56:43 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 30 16:56:43 localhost kernel: iommu: Default domain type: Translated
Sep 30 16:56:43 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 30 16:56:43 localhost kernel: SCSI subsystem initialized
Sep 30 16:56:43 localhost kernel: ACPI: bus type USB registered
Sep 30 16:56:43 localhost kernel: usbcore: registered new interface driver usbfs
Sep 30 16:56:43 localhost kernel: usbcore: registered new interface driver hub
Sep 30 16:56:43 localhost kernel: usbcore: registered new device driver usb
Sep 30 16:56:43 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 30 16:56:43 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Sep 30 16:56:43 localhost kernel: PTP clock support registered
Sep 30 16:56:43 localhost kernel: EDAC MC: Ver: 3.0.0
Sep 30 16:56:43 localhost kernel: NetLabel: Initializing
Sep 30 16:56:43 localhost kernel: NetLabel:  domain hash size = 128
Sep 30 16:56:43 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Sep 30 16:56:43 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Sep 30 16:56:43 localhost kernel: PCI: Using ACPI for IRQ routing
Sep 30 16:56:43 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 30 16:56:43 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 30 16:56:43 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Sep 30 16:56:43 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Sep 30 16:56:43 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Sep 30 16:56:43 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 30 16:56:43 localhost kernel: vgaarb: loaded
Sep 30 16:56:43 localhost kernel: clocksource: Switched to clocksource kvm-clock
Sep 30 16:56:43 localhost kernel: VFS: Disk quotas dquot_6.6.0
Sep 30 16:56:43 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 30 16:56:43 localhost kernel: pnp: PnP ACPI init
Sep 30 16:56:43 localhost kernel: pnp 00:03: [dma 2]
Sep 30 16:56:43 localhost kernel: pnp: PnP ACPI: found 5 devices
Sep 30 16:56:43 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 30 16:56:43 localhost kernel: NET: Registered PF_INET protocol family
Sep 30 16:56:43 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 30 16:56:43 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Sep 30 16:56:43 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 30 16:56:43 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 30 16:56:43 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Sep 30 16:56:43 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Sep 30 16:56:43 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Sep 30 16:56:43 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 30 16:56:43 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 30 16:56:43 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 30 16:56:43 localhost kernel: NET: Registered PF_XDP protocol family
Sep 30 16:56:43 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Sep 30 16:56:43 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Sep 30 16:56:43 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 30 16:56:43 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Sep 30 16:56:43 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Sep 30 16:56:43 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Sep 30 16:56:43 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 30 16:56:43 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 30 16:56:43 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 95863 usecs
Sep 30 16:56:43 localhost kernel: PCI: CLS 0 bytes, default 64
Sep 30 16:56:43 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 30 16:56:43 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Sep 30 16:56:43 localhost kernel: ACPI: bus type thunderbolt registered
Sep 30 16:56:43 localhost kernel: Trying to unpack rootfs image as initramfs...
Sep 30 16:56:43 localhost kernel: Initialise system trusted keyrings
Sep 30 16:56:43 localhost kernel: Key type blacklist registered
Sep 30 16:56:43 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Sep 30 16:56:43 localhost kernel: zbud: loaded
Sep 30 16:56:43 localhost kernel: integrity: Platform Keyring initialized
Sep 30 16:56:43 localhost kernel: integrity: Machine keyring initialized
Sep 30 16:56:43 localhost kernel: Freeing initrd memory: 86080K
Sep 30 16:56:43 localhost kernel: NET: Registered PF_ALG protocol family
Sep 30 16:56:43 localhost kernel: xor: automatically using best checksumming function   avx       
Sep 30 16:56:43 localhost kernel: Key type asymmetric registered
Sep 30 16:56:43 localhost kernel: Asymmetric key parser 'x509' registered
Sep 30 16:56:43 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Sep 30 16:56:43 localhost kernel: io scheduler mq-deadline registered
Sep 30 16:56:43 localhost kernel: io scheduler kyber registered
Sep 30 16:56:43 localhost kernel: io scheduler bfq registered
Sep 30 16:56:43 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Sep 30 16:56:43 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Sep 30 16:56:43 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Sep 30 16:56:43 localhost kernel: ACPI: button: Power Button [PWRF]
Sep 30 16:56:43 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Sep 30 16:56:43 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 30 16:56:43 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 30 16:56:43 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 30 16:56:43 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 30 16:56:43 localhost kernel: Non-volatile memory driver v1.3
Sep 30 16:56:43 localhost kernel: rdac: device handler registered
Sep 30 16:56:43 localhost kernel: hp_sw: device handler registered
Sep 30 16:56:43 localhost kernel: emc: device handler registered
Sep 30 16:56:43 localhost kernel: alua: device handler registered
Sep 30 16:56:43 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Sep 30 16:56:43 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Sep 30 16:56:43 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Sep 30 16:56:43 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Sep 30 16:56:43 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Sep 30 16:56:43 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Sep 30 16:56:43 localhost kernel: usb usb1: Product: UHCI Host Controller
Sep 30 16:56:43 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-617.el9.x86_64 uhci_hcd
Sep 30 16:56:43 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Sep 30 16:56:43 localhost kernel: hub 1-0:1.0: USB hub found
Sep 30 16:56:43 localhost kernel: hub 1-0:1.0: 2 ports detected
Sep 30 16:56:43 localhost kernel: usbcore: registered new interface driver usbserial_generic
Sep 30 16:56:43 localhost kernel: usbserial: USB Serial support registered for generic
Sep 30 16:56:43 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 30 16:56:43 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 30 16:56:43 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 30 16:56:43 localhost kernel: mousedev: PS/2 mouse device common for all mice
Sep 30 16:56:43 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 30 16:56:43 localhost kernel: rtc_cmos 00:04: registered as rtc0
Sep 30 16:56:43 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-09-30T16:56:42 UTC (1759251402)
Sep 30 16:56:43 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Sep 30 16:56:43 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 30 16:56:43 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Sep 30 16:56:43 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 30 16:56:43 localhost kernel: usbcore: registered new interface driver usbhid
Sep 30 16:56:43 localhost kernel: usbhid: USB HID core driver
Sep 30 16:56:43 localhost kernel: drop_monitor: Initializing network drop monitor service
Sep 30 16:56:43 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Sep 30 16:56:43 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Sep 30 16:56:43 localhost kernel: Initializing XFRM netlink socket
Sep 30 16:56:43 localhost kernel: NET: Registered PF_INET6 protocol family
Sep 30 16:56:43 localhost kernel: Segment Routing with IPv6
Sep 30 16:56:43 localhost kernel: NET: Registered PF_PACKET protocol family
Sep 30 16:56:43 localhost kernel: mpls_gso: MPLS GSO support
Sep 30 16:56:43 localhost kernel: IPI shorthand broadcast: enabled
Sep 30 16:56:43 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Sep 30 16:56:43 localhost kernel: AES CTR mode by8 optimization enabled
Sep 30 16:56:43 localhost kernel: sched_clock: Marking stable (1335002260, 140904620)->(1585146780, -109239900)
Sep 30 16:56:43 localhost kernel: registered taskstats version 1
Sep 30 16:56:43 localhost kernel: Loading compiled-in X.509 certificates
Sep 30 16:56:43 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bb2966091bafcba340f8183756023c985dcc8fe9'
Sep 30 16:56:43 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Sep 30 16:56:43 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Sep 30 16:56:43 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Sep 30 16:56:43 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Sep 30 16:56:43 localhost kernel: Demotion targets for Node 0: null
Sep 30 16:56:43 localhost kernel: page_owner is disabled
Sep 30 16:56:43 localhost kernel: Key type .fscrypt registered
Sep 30 16:56:43 localhost kernel: Key type fscrypt-provisioning registered
Sep 30 16:56:43 localhost kernel: Key type big_key registered
Sep 30 16:56:43 localhost kernel: Key type encrypted registered
Sep 30 16:56:43 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 30 16:56:43 localhost kernel: Loading compiled-in module X.509 certificates
Sep 30 16:56:43 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bb2966091bafcba340f8183756023c985dcc8fe9'
Sep 30 16:56:43 localhost kernel: ima: Allocated hash algorithm: sha256
Sep 30 16:56:43 localhost kernel: ima: No architecture policies found
Sep 30 16:56:43 localhost kernel: evm: Initialising EVM extended attributes:
Sep 30 16:56:43 localhost kernel: evm: security.selinux
Sep 30 16:56:43 localhost kernel: evm: security.SMACK64 (disabled)
Sep 30 16:56:43 localhost kernel: evm: security.SMACK64EXEC (disabled)
Sep 30 16:56:43 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Sep 30 16:56:43 localhost kernel: evm: security.SMACK64MMAP (disabled)
Sep 30 16:56:43 localhost kernel: evm: security.apparmor (disabled)
Sep 30 16:56:43 localhost kernel: evm: security.ima
Sep 30 16:56:43 localhost kernel: evm: security.capability
Sep 30 16:56:43 localhost kernel: evm: HMAC attrs: 0x1
Sep 30 16:56:43 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Sep 30 16:56:43 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Sep 30 16:56:43 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Sep 30 16:56:43 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Sep 30 16:56:43 localhost kernel: usb 1-1: Manufacturer: QEMU
Sep 30 16:56:43 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Sep 30 16:56:43 localhost kernel: Running certificate verification RSA selftest
Sep 30 16:56:43 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Sep 30 16:56:43 localhost kernel: Running certificate verification ECDSA selftest
Sep 30 16:56:43 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Sep 30 16:56:43 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Sep 30 16:56:43 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Sep 30 16:56:43 localhost kernel: clk: Disabling unused clocks
Sep 30 16:56:43 localhost kernel: Freeing unused decrypted memory: 2028K
Sep 30 16:56:43 localhost kernel: Freeing unused kernel image (initmem) memory: 4072K
Sep 30 16:56:43 localhost kernel: Write protecting the kernel read-only data: 30720k
Sep 30 16:56:43 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 348K
Sep 30 16:56:43 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Sep 30 16:56:43 localhost kernel: Run /init as init process
Sep 30 16:56:43 localhost kernel:   with arguments:
Sep 30 16:56:43 localhost kernel:     /init
Sep 30 16:56:43 localhost kernel:   with environment:
Sep 30 16:56:43 localhost kernel:     HOME=/
Sep 30 16:56:43 localhost kernel:     TERM=linux
Sep 30 16:56:43 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-617.el9.x86_64
Sep 30 16:56:43 localhost systemd[1]: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 30 16:56:43 localhost systemd[1]: Detected virtualization kvm.
Sep 30 16:56:43 localhost systemd[1]: Detected architecture x86-64.
Sep 30 16:56:43 localhost systemd[1]: Running in initrd.
Sep 30 16:56:43 localhost systemd[1]: No hostname configured, using default hostname.
Sep 30 16:56:43 localhost systemd[1]: Hostname set to <localhost>.
Sep 30 16:56:43 localhost systemd[1]: Initializing machine ID from VM UUID.
Sep 30 16:56:43 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Sep 30 16:56:43 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Sep 30 16:56:43 localhost systemd[1]: Reached target Local Encrypted Volumes.
Sep 30 16:56:43 localhost systemd[1]: Reached target Initrd /usr File System.
Sep 30 16:56:43 localhost systemd[1]: Reached target Local File Systems.
Sep 30 16:56:43 localhost systemd[1]: Reached target Path Units.
Sep 30 16:56:43 localhost systemd[1]: Reached target Slice Units.
Sep 30 16:56:43 localhost systemd[1]: Reached target Swaps.
Sep 30 16:56:43 localhost systemd[1]: Reached target Timer Units.
Sep 30 16:56:43 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Sep 30 16:56:43 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Sep 30 16:56:43 localhost systemd[1]: Listening on Journal Socket.
Sep 30 16:56:43 localhost systemd[1]: Listening on udev Control Socket.
Sep 30 16:56:43 localhost systemd[1]: Listening on udev Kernel Socket.
Sep 30 16:56:43 localhost systemd[1]: Reached target Socket Units.
Sep 30 16:56:43 localhost systemd[1]: Starting Create List of Static Device Nodes...
Sep 30 16:56:43 localhost systemd[1]: Starting Journal Service...
Sep 30 16:56:43 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Sep 30 16:56:43 localhost systemd[1]: Starting Apply Kernel Variables...
Sep 30 16:56:43 localhost systemd[1]: Starting Create System Users...
Sep 30 16:56:43 localhost systemd[1]: Starting Setup Virtual Console...
Sep 30 16:56:43 localhost systemd[1]: Finished Create List of Static Device Nodes.
Sep 30 16:56:43 localhost systemd[1]: Finished Apply Kernel Variables.
Sep 30 16:56:43 localhost systemd[1]: Finished Create System Users.
Sep 30 16:56:43 localhost systemd-journald[311]: Journal started
Sep 30 16:56:43 localhost systemd-journald[311]: Runtime Journal (/run/log/journal/889dcbce7e29433f8b4dbf5603fcc4a5) is 8.0M, max 153.5M, 145.5M free.
Sep 30 16:56:43 localhost systemd-sysusers[315]: Creating group 'users' with GID 100.
Sep 30 16:56:43 localhost systemd-sysusers[315]: Creating group 'dbus' with GID 81.
Sep 30 16:56:43 localhost systemd-sysusers[315]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Sep 30 16:56:43 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Sep 30 16:56:43 localhost systemd[1]: Started Journal Service.
Sep 30 16:56:43 localhost systemd[1]: Starting Create Volatile Files and Directories...
Sep 30 16:56:43 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Sep 30 16:56:43 localhost systemd[1]: Finished Create Volatile Files and Directories.
Sep 30 16:56:43 localhost systemd[1]: Finished Setup Virtual Console.
Sep 30 16:56:43 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Sep 30 16:56:43 localhost systemd[1]: Starting dracut cmdline hook...
Sep 30 16:56:43 localhost dracut-cmdline[329]: dracut-9 dracut-057-102.git20250818.el9
Sep 30 16:56:43 localhost dracut-cmdline[329]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-617.el9.x86_64 root=UUID=d6a81468-b74c-4055-b485-def635ab40f8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Sep 30 16:56:43 localhost systemd[1]: Finished dracut cmdline hook.
Sep 30 16:56:43 localhost systemd[1]: Starting dracut pre-udev hook...
Sep 30 16:56:43 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 30 16:56:43 localhost kernel: device-mapper: uevent: version 1.0.3
Sep 30 16:56:43 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Sep 30 16:56:43 localhost kernel: RPC: Registered named UNIX socket transport module.
Sep 30 16:56:43 localhost kernel: RPC: Registered udp transport module.
Sep 30 16:56:43 localhost kernel: RPC: Registered tcp transport module.
Sep 30 16:56:43 localhost kernel: RPC: Registered tcp-with-tls transport module.
Sep 30 16:56:43 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Sep 30 16:56:43 localhost rpc.statd[446]: Version 2.5.4 starting
Sep 30 16:56:43 localhost rpc.statd[446]: Initializing NSM state
Sep 30 16:56:44 localhost rpc.idmapd[451]: Setting log level to 0
Sep 30 16:56:44 localhost systemd[1]: Finished dracut pre-udev hook.
Sep 30 16:56:44 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Sep 30 16:56:44 localhost systemd-udevd[464]: Using default interface naming scheme 'rhel-9.0'.
Sep 30 16:56:44 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Sep 30 16:56:44 localhost systemd[1]: Starting dracut pre-trigger hook...
Sep 30 16:56:44 localhost systemd[1]: Finished dracut pre-trigger hook.
Sep 30 16:56:44 localhost systemd[1]: Starting Coldplug All udev Devices...
Sep 30 16:56:44 localhost systemd[1]: Created slice Slice /system/modprobe.
Sep 30 16:56:44 localhost systemd[1]: Starting Load Kernel Module configfs...
Sep 30 16:56:44 localhost systemd[1]: Finished Coldplug All udev Devices.
Sep 30 16:56:44 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 30 16:56:44 localhost systemd[1]: Finished Load Kernel Module configfs.
Sep 30 16:56:44 localhost systemd[1]: Mounting Kernel Configuration File System...
Sep 30 16:56:44 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Sep 30 16:56:44 localhost systemd[1]: Reached target Network.
Sep 30 16:56:44 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Sep 30 16:56:44 localhost systemd[1]: Starting dracut initqueue hook...
Sep 30 16:56:44 localhost systemd[1]: Mounted Kernel Configuration File System.
Sep 30 16:56:44 localhost systemd[1]: Reached target System Initialization.
Sep 30 16:56:44 localhost systemd[1]: Reached target Basic System.
Sep 30 16:56:44 localhost kernel: libata version 3.00 loaded.
Sep 30 16:56:44 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Sep 30 16:56:44 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Sep 30 16:56:44 localhost systemd-udevd[466]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 16:56:44 localhost kernel: scsi host0: ata_piix
Sep 30 16:56:44 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Sep 30 16:56:44 localhost kernel: scsi host1: ata_piix
Sep 30 16:56:44 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Sep 30 16:56:44 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Sep 30 16:56:44 localhost kernel:  vda: vda1
Sep 30 16:56:44 localhost kernel: ata1: found unknown device (class 0)
Sep 30 16:56:44 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 30 16:56:44 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Sep 30 16:56:44 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Sep 30 16:56:44 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 30 16:56:44 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 30 16:56:44 localhost systemd[1]: Found device /dev/disk/by-uuid/d6a81468-b74c-4055-b485-def635ab40f8.
Sep 30 16:56:44 localhost systemd[1]: Reached target Initrd Root Device.
Sep 30 16:56:44 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Sep 30 16:56:44 localhost systemd[1]: Finished dracut initqueue hook.
Sep 30 16:56:44 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Sep 30 16:56:44 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Sep 30 16:56:44 localhost systemd[1]: Reached target Remote File Systems.
Sep 30 16:56:44 localhost systemd[1]: Starting dracut pre-mount hook...
Sep 30 16:56:44 localhost systemd[1]: Finished dracut pre-mount hook.
Sep 30 16:56:44 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/d6a81468-b74c-4055-b485-def635ab40f8...
Sep 30 16:56:44 localhost systemd-fsck[559]: /usr/sbin/fsck.xfs: XFS file system.
Sep 30 16:56:44 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/d6a81468-b74c-4055-b485-def635ab40f8.
Sep 30 16:56:44 localhost systemd[1]: Mounting /sysroot...
Sep 30 16:56:45 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Sep 30 16:56:45 localhost kernel: XFS (vda1): Mounting V5 Filesystem d6a81468-b74c-4055-b485-def635ab40f8
Sep 30 16:56:45 localhost kernel: XFS (vda1): Ending clean mount
Sep 30 16:56:45 localhost systemd[1]: Mounted /sysroot.
Sep 30 16:56:45 localhost systemd[1]: Reached target Initrd Root File System.
Sep 30 16:56:45 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Sep 30 16:56:45 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 30 16:56:45 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Sep 30 16:56:45 localhost systemd[1]: Reached target Initrd File Systems.
Sep 30 16:56:45 localhost systemd[1]: Reached target Initrd Default Target.
Sep 30 16:56:45 localhost systemd[1]: Starting dracut mount hook...
Sep 30 16:56:45 localhost systemd[1]: Finished dracut mount hook.
Sep 30 16:56:45 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Sep 30 16:56:45 localhost rpc.idmapd[451]: exiting on signal 15
Sep 30 16:56:45 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Sep 30 16:56:45 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Sep 30 16:56:45 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Sep 30 16:56:45 localhost systemd[1]: Stopped target Network.
Sep 30 16:56:45 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Sep 30 16:56:45 localhost systemd[1]: Stopped target Timer Units.
Sep 30 16:56:45 localhost systemd[1]: dbus.socket: Deactivated successfully.
Sep 30 16:56:45 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Sep 30 16:56:45 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 30 16:56:45 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Sep 30 16:56:45 localhost systemd[1]: Stopped target Initrd Default Target.
Sep 30 16:56:45 localhost systemd[1]: Stopped target Basic System.
Sep 30 16:56:45 localhost systemd[1]: Stopped target Initrd Root Device.
Sep 30 16:56:45 localhost systemd[1]: Stopped target Initrd /usr File System.
Sep 30 16:56:45 localhost systemd[1]: Stopped target Path Units.
Sep 30 16:56:45 localhost systemd[1]: Stopped target Remote File Systems.
Sep 30 16:56:45 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Sep 30 16:56:45 localhost systemd[1]: Stopped target Slice Units.
Sep 30 16:56:45 localhost systemd[1]: Stopped target Socket Units.
Sep 30 16:56:45 localhost systemd[1]: Stopped target System Initialization.
Sep 30 16:56:45 localhost systemd[1]: Stopped target Local File Systems.
Sep 30 16:56:45 localhost systemd[1]: Stopped target Swaps.
Sep 30 16:56:45 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Sep 30 16:56:45 localhost systemd[1]: Stopped dracut mount hook.
Sep 30 16:56:45 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 30 16:56:45 localhost systemd[1]: Stopped dracut pre-mount hook.
Sep 30 16:56:45 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Sep 30 16:56:45 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 30 16:56:45 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Sep 30 16:56:45 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 30 16:56:45 localhost systemd[1]: Stopped dracut initqueue hook.
Sep 30 16:56:45 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 30 16:56:45 localhost systemd[1]: Stopped Apply Kernel Variables.
Sep 30 16:56:45 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 30 16:56:45 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Sep 30 16:56:45 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 30 16:56:45 localhost systemd[1]: Stopped Coldplug All udev Devices.
Sep 30 16:56:45 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 30 16:56:45 localhost systemd[1]: Stopped dracut pre-trigger hook.
Sep 30 16:56:45 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Sep 30 16:56:45 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 30 16:56:45 localhost systemd[1]: Stopped Setup Virtual Console.
Sep 30 16:56:45 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 30 16:56:45 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 30 16:56:45 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 30 16:56:45 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Sep 30 16:56:45 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 30 16:56:45 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Sep 30 16:56:45 localhost systemd[1]: systemd-udevd.service: Consumed 1.013s CPU time.
Sep 30 16:56:45 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 30 16:56:45 localhost systemd[1]: Closed udev Control Socket.
Sep 30 16:56:45 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 30 16:56:45 localhost systemd[1]: Closed udev Kernel Socket.
Sep 30 16:56:45 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 30 16:56:45 localhost systemd[1]: Stopped dracut pre-udev hook.
Sep 30 16:56:45 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 30 16:56:45 localhost systemd[1]: Stopped dracut cmdline hook.
Sep 30 16:56:45 localhost systemd[1]: Starting Cleanup udev Database...
Sep 30 16:56:45 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 30 16:56:45 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Sep 30 16:56:45 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 30 16:56:45 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Sep 30 16:56:45 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Sep 30 16:56:45 localhost systemd[1]: Stopped Create System Users.
Sep 30 16:56:45 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 30 16:56:45 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Sep 30 16:56:45 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 30 16:56:45 localhost systemd[1]: Finished Cleanup udev Database.
Sep 30 16:56:45 localhost systemd[1]: Reached target Switch Root.
Sep 30 16:56:45 localhost systemd[1]: Starting Switch Root...
Sep 30 16:56:45 localhost systemd[1]: Switching root.
Sep 30 16:56:46 localhost systemd-journald[311]: Journal stopped
Sep 30 16:56:47 localhost systemd-journald[311]: Received SIGTERM from PID 1 (systemd).
Sep 30 16:56:47 localhost kernel: audit: type=1404 audit(1759251406.187:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Sep 30 16:56:47 localhost kernel: SELinux:  policy capability network_peer_controls=1
Sep 30 16:56:47 localhost kernel: SELinux:  policy capability open_perms=1
Sep 30 16:56:47 localhost kernel: SELinux:  policy capability extended_socket_class=1
Sep 30 16:56:47 localhost kernel: SELinux:  policy capability always_check_network=0
Sep 30 16:56:47 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Sep 30 16:56:47 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Sep 30 16:56:47 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Sep 30 16:56:47 localhost kernel: audit: type=1403 audit(1759251406.395:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 30 16:56:47 localhost systemd[1]: Successfully loaded SELinux policy in 216.577ms.
Sep 30 16:56:47 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.754ms.
Sep 30 16:56:47 localhost systemd[1]: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 30 16:56:47 localhost systemd[1]: Detected virtualization kvm.
Sep 30 16:56:47 localhost systemd[1]: Detected architecture x86-64.
Sep 30 16:56:47 localhost systemd-rc-local-generator[639]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 16:56:47 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 30 16:56:47 localhost systemd[1]: Stopped Switch Root.
Sep 30 16:56:47 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 30 16:56:47 localhost systemd[1]: Created slice Slice /system/getty.
Sep 30 16:56:47 localhost systemd[1]: Created slice Slice /system/serial-getty.
Sep 30 16:56:47 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Sep 30 16:56:47 localhost systemd[1]: Created slice User and Session Slice.
Sep 30 16:56:47 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Sep 30 16:56:47 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Sep 30 16:56:47 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Sep 30 16:56:47 localhost systemd[1]: Reached target Local Encrypted Volumes.
Sep 30 16:56:47 localhost systemd[1]: Stopped target Switch Root.
Sep 30 16:56:47 localhost systemd[1]: Stopped target Initrd File Systems.
Sep 30 16:56:47 localhost systemd[1]: Stopped target Initrd Root File System.
Sep 30 16:56:47 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Sep 30 16:56:47 localhost systemd[1]: Reached target Path Units.
Sep 30 16:56:47 localhost systemd[1]: Reached target rpc_pipefs.target.
Sep 30 16:56:47 localhost systemd[1]: Reached target Slice Units.
Sep 30 16:56:47 localhost systemd[1]: Reached target Swaps.
Sep 30 16:56:47 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Sep 30 16:56:47 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Sep 30 16:56:47 localhost systemd[1]: Reached target RPC Port Mapper.
Sep 30 16:56:47 localhost systemd[1]: Listening on Process Core Dump Socket.
Sep 30 16:56:47 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Sep 30 16:56:47 localhost systemd[1]: Listening on udev Control Socket.
Sep 30 16:56:47 localhost systemd[1]: Listening on udev Kernel Socket.
Sep 30 16:56:47 localhost systemd[1]: Mounting Huge Pages File System...
Sep 30 16:56:47 localhost systemd[1]: Mounting POSIX Message Queue File System...
Sep 30 16:56:47 localhost systemd[1]: Mounting Kernel Debug File System...
Sep 30 16:56:47 localhost systemd[1]: Mounting Kernel Trace File System...
Sep 30 16:56:47 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Sep 30 16:56:47 localhost systemd[1]: Starting Create List of Static Device Nodes...
Sep 30 16:56:47 localhost systemd[1]: Starting Load Kernel Module configfs...
Sep 30 16:56:47 localhost systemd[1]: Starting Load Kernel Module drm...
Sep 30 16:56:47 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Sep 30 16:56:47 localhost systemd[1]: Starting Load Kernel Module fuse...
Sep 30 16:56:47 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Sep 30 16:56:47 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 30 16:56:47 localhost systemd[1]: Stopped File System Check on Root Device.
Sep 30 16:56:47 localhost systemd[1]: Stopped Journal Service.
Sep 30 16:56:47 localhost systemd[1]: Starting Journal Service...
Sep 30 16:56:47 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Sep 30 16:56:47 localhost systemd[1]: Starting Generate network units from Kernel command line...
Sep 30 16:56:47 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 30 16:56:47 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Sep 30 16:56:47 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 30 16:56:47 localhost systemd[1]: Starting Apply Kernel Variables...
Sep 30 16:56:47 localhost systemd[1]: Starting Coldplug All udev Devices...
Sep 30 16:56:47 localhost kernel: fuse: init (API version 7.37)
Sep 30 16:56:47 localhost systemd[1]: Mounted Huge Pages File System.
Sep 30 16:56:47 localhost systemd[1]: Mounted POSIX Message Queue File System.
Sep 30 16:56:47 localhost systemd[1]: Mounted Kernel Debug File System.
Sep 30 16:56:47 localhost systemd[1]: Mounted Kernel Trace File System.
Sep 30 16:56:47 localhost systemd[1]: Finished Create List of Static Device Nodes.
Sep 30 16:56:47 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Sep 30 16:56:47 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 30 16:56:47 localhost systemd[1]: Finished Load Kernel Module configfs.
Sep 30 16:56:47 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 30 16:56:47 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Sep 30 16:56:47 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 30 16:56:47 localhost systemd[1]: Finished Load Kernel Module fuse.
Sep 30 16:56:47 localhost systemd-journald[680]: Journal started
Sep 30 16:56:47 localhost systemd-journald[680]: Runtime Journal (/run/log/journal/21983c68f36a73745cc172a394ebc51d) is 8.0M, max 153.5M, 145.5M free.
Sep 30 16:56:46 localhost systemd[1]: Queued start job for default target Multi-User System.
Sep 30 16:56:46 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 30 16:56:47 localhost systemd[1]: Started Journal Service.
Sep 30 16:56:47 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Sep 30 16:56:47 localhost systemd[1]: Finished Generate network units from Kernel command line.
Sep 30 16:56:47 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Sep 30 16:56:47 localhost systemd[1]: Finished Apply Kernel Variables.
Sep 30 16:56:47 localhost kernel: ACPI: bus type drm_connector registered
Sep 30 16:56:47 localhost systemd[1]: Mounting FUSE Control File System...
Sep 30 16:56:47 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Sep 30 16:56:47 localhost systemd[1]: Starting Rebuild Hardware Database...
Sep 30 16:56:47 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Sep 30 16:56:47 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 30 16:56:47 localhost systemd[1]: Starting Load/Save OS Random Seed...
Sep 30 16:56:47 localhost systemd[1]: Starting Create System Users...
Sep 30 16:56:47 localhost systemd-journald[680]: Runtime Journal (/run/log/journal/21983c68f36a73745cc172a394ebc51d) is 8.0M, max 153.5M, 145.5M free.
Sep 30 16:56:47 localhost systemd-journald[680]: Received client request to flush runtime journal.
Sep 30 16:56:47 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 30 16:56:47 localhost systemd[1]: Finished Load Kernel Module drm.
Sep 30 16:56:47 localhost systemd[1]: Mounted FUSE Control File System.
Sep 30 16:56:47 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Sep 30 16:56:47 localhost systemd[1]: Finished Load/Save OS Random Seed.
Sep 30 16:56:47 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Sep 30 16:56:47 localhost systemd[1]: Finished Create System Users.
Sep 30 16:56:47 localhost systemd[1]: Finished Coldplug All udev Devices.
Sep 30 16:56:47 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Sep 30 16:56:47 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Sep 30 16:56:47 localhost systemd[1]: Reached target Preparation for Local File Systems.
Sep 30 16:56:47 localhost systemd[1]: Reached target Local File Systems.
Sep 30 16:56:47 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Sep 30 16:56:47 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Sep 30 16:56:47 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 30 16:56:47 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Sep 30 16:56:47 localhost systemd[1]: Starting Automatic Boot Loader Update...
Sep 30 16:56:47 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Sep 30 16:56:47 localhost systemd[1]: Starting Create Volatile Files and Directories...
Sep 30 16:56:47 localhost bootctl[699]: Couldn't find EFI system partition, skipping.
Sep 30 16:56:47 localhost systemd[1]: Finished Automatic Boot Loader Update.
Sep 30 16:56:47 localhost systemd[1]: Finished Create Volatile Files and Directories.
Sep 30 16:56:47 localhost systemd[1]: Starting Security Auditing Service...
Sep 30 16:56:47 localhost systemd[1]: Starting RPC Bind...
Sep 30 16:56:47 localhost systemd[1]: Starting Rebuild Journal Catalog...
Sep 30 16:56:47 localhost auditd[705]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Sep 30 16:56:47 localhost auditd[705]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Sep 30 16:56:47 localhost systemd[1]: Finished Rebuild Journal Catalog.
Sep 30 16:56:47 localhost systemd[1]: Started RPC Bind.
Sep 30 16:56:47 localhost augenrules[710]: /sbin/augenrules: No change
Sep 30 16:56:47 localhost augenrules[725]: No rules
Sep 30 16:56:47 localhost augenrules[725]: enabled 1
Sep 30 16:56:47 localhost augenrules[725]: failure 1
Sep 30 16:56:47 localhost augenrules[725]: pid 705
Sep 30 16:56:47 localhost augenrules[725]: rate_limit 0
Sep 30 16:56:47 localhost augenrules[725]: backlog_limit 8192
Sep 30 16:56:47 localhost augenrules[725]: lost 0
Sep 30 16:56:47 localhost augenrules[725]: backlog 0
Sep 30 16:56:47 localhost augenrules[725]: backlog_wait_time 60000
Sep 30 16:56:47 localhost augenrules[725]: backlog_wait_time_actual 0
Sep 30 16:56:47 localhost systemd[1]: Started Security Auditing Service.
Sep 30 16:56:47 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Sep 30 16:56:47 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Sep 30 16:56:47 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Sep 30 16:56:47 localhost systemd[1]: Finished Rebuild Hardware Database.
Sep 30 16:56:47 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Sep 30 16:56:47 localhost systemd[1]: Starting Update is Completed...
Sep 30 16:56:47 localhost systemd[1]: Finished Update is Completed.
Sep 30 16:56:47 localhost systemd-udevd[733]: Using default interface naming scheme 'rhel-9.0'.
Sep 30 16:56:47 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Sep 30 16:56:47 localhost systemd[1]: Reached target System Initialization.
Sep 30 16:56:47 localhost systemd[1]: Started dnf makecache --timer.
Sep 30 16:56:47 localhost systemd[1]: Started Daily rotation of log files.
Sep 30 16:56:47 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Sep 30 16:56:47 localhost systemd[1]: Reached target Timer Units.
Sep 30 16:56:47 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Sep 30 16:56:47 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Sep 30 16:56:47 localhost systemd[1]: Reached target Socket Units.
Sep 30 16:56:47 localhost systemd-udevd[743]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 16:56:47 localhost systemd[1]: Starting D-Bus System Message Bus...
Sep 30 16:56:47 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 30 16:56:47 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Sep 30 16:56:47 localhost systemd[1]: Starting Load Kernel Module configfs...
Sep 30 16:56:47 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 30 16:56:47 localhost systemd[1]: Finished Load Kernel Module configfs.
Sep 30 16:56:48 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Sep 30 16:56:48 localhost dbus-broker-lau[771]: Ready
Sep 30 16:56:48 localhost systemd[1]: Started D-Bus System Message Bus.
Sep 30 16:56:48 localhost systemd[1]: Reached target Basic System.
Sep 30 16:56:48 localhost systemd[1]: Starting NTP client/server...
Sep 30 16:56:48 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Sep 30 16:56:48 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Sep 30 16:56:48 localhost systemd[1]: Starting IPv4 firewall with iptables...
Sep 30 16:56:48 localhost systemd[1]: Started irqbalance daemon.
Sep 30 16:56:48 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Sep 30 16:56:48 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Sep 30 16:56:48 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 30 16:56:48 localhost kernel: kvm_amd: TSC scaling supported
Sep 30 16:56:48 localhost kernel: kvm_amd: Nested Virtualization enabled
Sep 30 16:56:48 localhost kernel: kvm_amd: Nested Paging enabled
Sep 30 16:56:48 localhost kernel: kvm_amd: LBR virtualization supported
Sep 30 16:56:48 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Sep 30 16:56:48 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Sep 30 16:56:48 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Sep 30 16:56:48 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Sep 30 16:56:48 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Sep 30 16:56:48 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Sep 30 16:56:48 localhost systemd[1]: Reached target sshd-keygen.target.
Sep 30 16:56:48 localhost kernel: Console: switching to colour dummy device 80x25
Sep 30 16:56:48 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Sep 30 16:56:48 localhost kernel: [drm] features: -context_init
Sep 30 16:56:48 localhost kernel: [drm] number of scanouts: 1
Sep 30 16:56:48 localhost kernel: [drm] number of cap sets: 0
Sep 30 16:56:48 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Sep 30 16:56:48 localhost chronyd[808]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Sep 30 16:56:48 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Sep 30 16:56:48 localhost kernel: Console: switching to colour frame buffer device 128x48
Sep 30 16:56:48 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Sep 30 16:56:48 localhost chronyd[808]: Loaded 0 symmetric keys
Sep 30 16:56:48 localhost chronyd[808]: Using right/UTC timezone to obtain leap second data
Sep 30 16:56:48 localhost chronyd[808]: Loaded seccomp filter (level 2)
Sep 30 16:56:48 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Sep 30 16:56:48 localhost systemd[1]: Reached target User and Group Name Lookups.
Sep 30 16:56:48 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Sep 30 16:56:48 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Sep 30 16:56:48 localhost systemd[1]: Starting User Login Management...
Sep 30 16:56:48 localhost systemd[1]: Started NTP client/server.
Sep 30 16:56:48 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Sep 30 16:56:48 localhost systemd-logind[811]: New seat seat0.
Sep 30 16:56:48 localhost systemd-logind[811]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 30 16:56:48 localhost systemd-logind[811]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Sep 30 16:56:48 localhost systemd[1]: Started User Login Management.
Sep 30 16:56:48 localhost iptables.init[791]: iptables: Applying firewall rules: [  OK  ]
Sep 30 16:56:48 localhost systemd[1]: Finished IPv4 firewall with iptables.
Sep 30 16:56:48 localhost cloud-init[842]: Cloud-init v. 24.4-7.el9 running 'init-local' at Tue, 30 Sep 2025 16:56:48 +0000. Up 7.69 seconds.
Sep 30 16:56:49 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Sep 30 16:56:49 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Sep 30 16:56:49 localhost systemd[1]: run-cloud\x2dinit-tmp-tmp6iyne0wu.mount: Deactivated successfully.
Sep 30 16:56:49 localhost systemd[1]: Starting Hostname Service...
Sep 30 16:56:49 localhost systemd[1]: Started Hostname Service.
Sep 30 16:56:49 np0005463147.novalocal systemd-hostnamed[856]: Hostname set to <np0005463147.novalocal> (static)
Sep 30 16:56:49 np0005463147.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Sep 30 16:56:49 np0005463147.novalocal systemd[1]: Reached target Preparation for Network.
Sep 30 16:56:49 np0005463147.novalocal systemd[1]: Starting Network Manager...
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.7531] NetworkManager (version 1.54.1-1.el9) is starting... (boot:17653e5b-2838-4126-901c-d60259dd6a79)
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.7539] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.7820] manager[0x55a9972a6080]: monitoring kernel firmware directory '/lib/firmware'.
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.7893] hostname: hostname: using hostnamed
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.7894] hostname: static hostname changed from (none) to "np0005463147.novalocal"
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.7903] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.8215] manager[0x55a9972a6080]: rfkill: Wi-Fi hardware radio set enabled
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.8218] manager[0x55a9972a6080]: rfkill: WWAN hardware radio set enabled
Sep 30 16:56:49 np0005463147.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.8359] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.8360] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.8361] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.8362] manager: Networking is enabled by state file
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.8371] settings: Loaded settings plugin: keyfile (internal)
Sep 30 16:56:49 np0005463147.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.8629] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.8653] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.8678] dhcp: init: Using DHCP client 'internal'
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.8680] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.8692] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.8703] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.8710] device (lo): Activation: starting connection 'lo' (fc4e626b-e829-4908-a3d3-ce552dfc6be3)
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.8718] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.8721] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Sep 30 16:56:49 np0005463147.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Sep 30 16:56:49 np0005463147.novalocal systemd[1]: Started Network Manager.
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.8774] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.8777] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.8779] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.8782] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.8786] device (eth0): carrier: link connected
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.8791] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Sep 30 16:56:49 np0005463147.novalocal systemd[1]: Reached target Network.
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.8798] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.8822] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.8825] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.8831] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.8832] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.8841] device (lo): Activation: successful, device activated.
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.8852] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.8856] manager: NetworkManager state is now CONNECTING
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.8860] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.8873] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Sep 30 16:56:49 np0005463147.novalocal NetworkManager[860]: <info>  [1759251409.8879] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Sep 30 16:56:49 np0005463147.novalocal systemd[1]: Starting Network Manager Wait Online...
Sep 30 16:56:49 np0005463147.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Sep 30 16:56:49 np0005463147.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Sep 30 16:56:49 np0005463147.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Sep 30 16:56:49 np0005463147.novalocal systemd[1]: Reached target NFS client services.
Sep 30 16:56:49 np0005463147.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Sep 30 16:56:49 np0005463147.novalocal systemd[1]: Reached target Remote File Systems.
Sep 30 16:56:49 np0005463147.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 30 16:56:51 np0005463147.novalocal NetworkManager[860]: <info>  [1759251411.5901] dhcp4 (eth0): state changed new lease, address=38.102.83.202
Sep 30 16:56:51 np0005463147.novalocal NetworkManager[860]: <info>  [1759251411.5915] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Sep 30 16:56:51 np0005463147.novalocal NetworkManager[860]: <info>  [1759251411.5944] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Sep 30 16:56:51 np0005463147.novalocal NetworkManager[860]: <info>  [1759251411.5984] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Sep 30 16:56:51 np0005463147.novalocal NetworkManager[860]: <info>  [1759251411.5986] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Sep 30 16:56:51 np0005463147.novalocal NetworkManager[860]: <info>  [1759251411.5991] manager: NetworkManager state is now CONNECTED_SITE
Sep 30 16:56:51 np0005463147.novalocal NetworkManager[860]: <info>  [1759251411.5997] device (eth0): Activation: successful, device activated.
Sep 30 16:56:51 np0005463147.novalocal NetworkManager[860]: <info>  [1759251411.6005] manager: NetworkManager state is now CONNECTED_GLOBAL
Sep 30 16:56:51 np0005463147.novalocal NetworkManager[860]: <info>  [1759251411.6009] manager: startup complete
Sep 30 16:56:51 np0005463147.novalocal systemd[1]: Finished Network Manager Wait Online.
Sep 30 16:56:51 np0005463147.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Sep 30 16:56:51 np0005463147.novalocal cloud-init[923]: Cloud-init v. 24.4-7.el9 running 'init' at Tue, 30 Sep 2025 16:56:51 +0000. Up 10.63 seconds.
Sep 30 16:56:51 np0005463147.novalocal cloud-init[923]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Sep 30 16:56:51 np0005463147.novalocal cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Sep 30 16:56:51 np0005463147.novalocal cloud-init[923]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Sep 30 16:56:51 np0005463147.novalocal cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Sep 30 16:56:51 np0005463147.novalocal cloud-init[923]: ci-info: |  eth0  | True |        38.102.83.202         | 255.255.255.0 | global | fa:16:3e:a0:74:43 |
Sep 30 16:56:51 np0005463147.novalocal cloud-init[923]: ci-info: |  eth0  | True | fe80::f816:3eff:fea0:7443/64 |       .       |  link  | fa:16:3e:a0:74:43 |
Sep 30 16:56:51 np0005463147.novalocal cloud-init[923]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Sep 30 16:56:51 np0005463147.novalocal cloud-init[923]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Sep 30 16:56:51 np0005463147.novalocal cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Sep 30 16:56:51 np0005463147.novalocal cloud-init[923]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Sep 30 16:56:51 np0005463147.novalocal cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Sep 30 16:56:51 np0005463147.novalocal cloud-init[923]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Sep 30 16:56:51 np0005463147.novalocal cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Sep 30 16:56:51 np0005463147.novalocal cloud-init[923]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Sep 30 16:56:51 np0005463147.novalocal cloud-init[923]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Sep 30 16:56:51 np0005463147.novalocal cloud-init[923]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Sep 30 16:56:51 np0005463147.novalocal cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Sep 30 16:56:51 np0005463147.novalocal cloud-init[923]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Sep 30 16:56:51 np0005463147.novalocal cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Sep 30 16:56:51 np0005463147.novalocal cloud-init[923]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Sep 30 16:56:51 np0005463147.novalocal cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Sep 30 16:56:51 np0005463147.novalocal cloud-init[923]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Sep 30 16:56:51 np0005463147.novalocal cloud-init[923]: ci-info: |   3   |    local    |    ::   |    eth0   |   U   |
Sep 30 16:56:51 np0005463147.novalocal cloud-init[923]: ci-info: |   4   |  multicast  |    ::   |    eth0   |   U   |
Sep 30 16:56:51 np0005463147.novalocal cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Sep 30 16:56:52 np0005463147.novalocal useradd[992]: new group: name=cloud-user, GID=1001
Sep 30 16:56:52 np0005463147.novalocal useradd[992]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Sep 30 16:56:52 np0005463147.novalocal useradd[992]: add 'cloud-user' to group 'adm'
Sep 30 16:56:52 np0005463147.novalocal useradd[992]: add 'cloud-user' to group 'systemd-journal'
Sep 30 16:56:52 np0005463147.novalocal useradd[992]: add 'cloud-user' to shadow group 'adm'
Sep 30 16:56:52 np0005463147.novalocal useradd[992]: add 'cloud-user' to shadow group 'systemd-journal'
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: Generating public/private rsa key pair.
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: The key fingerprint is:
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: SHA256:sUbA/QSF0TCIxMBCEg6cngzozrSjHKGl1Ig4JaJ+610 root@np0005463147.novalocal
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: The key's randomart image is:
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: +---[RSA 3072]----+
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: |B+o+.o.o=B.      |
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: |Xoo o o.o.o      |
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: |X=+     oo       |
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: |=X..   . o.      |
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: |B+o     S        |
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: |oB .   .         |
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: |o + .  E         |
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: |.. .. .          |
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: |  .. .           |
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: +----[SHA256]-----+
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: Generating public/private ecdsa key pair.
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: The key fingerprint is:
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: SHA256:JtqgnivYkqQi4HdbTbGAyCnbiV/kyqR0jUXNBpGmMn8 root@np0005463147.novalocal
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: The key's randomart image is:
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: +---[ECDSA 256]---+
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: |     +*          |
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: |  . +o.+         |
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: | . +o+.. .       |
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: | o=.B   . o      |
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: | ++*.+. So       |
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: |o.*ooE oo        |
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: |*+.+o .. .       |
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: |Oo.o ..          |
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: |oo=....          |
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: +----[SHA256]-----+
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: Generating public/private ed25519 key pair.
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: The key fingerprint is:
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: SHA256:nElo/tg4WqqW7q887B7Vz3Tdxal8nLeZpD0qrdyjCaE root@np0005463147.novalocal
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: The key's randomart image is:
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: +--[ED25519 256]--+
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: |                 |
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: |       .       ..|
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: |      o .      .o|
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: |     + o o ...o..|
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: |    . o S.. .o.=.|
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: |   .   O...   = =|
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: | ...  =E=.  .. * |
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: | .=. + .  o.oo. .|
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: | *O=+      =+o.  |
Sep 30 16:56:53 np0005463147.novalocal cloud-init[923]: +----[SHA256]-----+
Sep 30 16:56:53 np0005463147.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Sep 30 16:56:53 np0005463147.novalocal systemd[1]: Reached target Cloud-config availability.
Sep 30 16:56:53 np0005463147.novalocal systemd[1]: Reached target Network is Online.
Sep 30 16:56:53 np0005463147.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Sep 30 16:56:53 np0005463147.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Sep 30 16:56:53 np0005463147.novalocal systemd[1]: Starting System Logging Service...
Sep 30 16:56:53 np0005463147.novalocal sm-notify[1008]: Version 2.5.4 starting
Sep 30 16:56:53 np0005463147.novalocal systemd[1]: Starting OpenSSH server daemon...
Sep 30 16:56:53 np0005463147.novalocal systemd[1]: Starting Permit User Sessions...
Sep 30 16:56:53 np0005463147.novalocal systemd[1]: Started Notify NFS peers of a restart.
Sep 30 16:56:53 np0005463147.novalocal sshd[1010]: Server listening on 0.0.0.0 port 22.
Sep 30 16:56:53 np0005463147.novalocal sshd[1010]: Server listening on :: port 22.
Sep 30 16:56:53 np0005463147.novalocal systemd[1]: Started OpenSSH server daemon.
Sep 30 16:56:53 np0005463147.novalocal systemd[1]: Finished Permit User Sessions.
Sep 30 16:56:53 np0005463147.novalocal systemd[1]: Started Command Scheduler.
Sep 30 16:56:53 np0005463147.novalocal systemd[1]: Started Getty on tty1.
Sep 30 16:56:53 np0005463147.novalocal systemd[1]: Started Serial Getty on ttyS0.
Sep 30 16:56:53 np0005463147.novalocal crond[1012]: (CRON) STARTUP (1.5.7)
Sep 30 16:56:53 np0005463147.novalocal crond[1012]: (CRON) INFO (Syslog will be used instead of sendmail.)
Sep 30 16:56:53 np0005463147.novalocal systemd[1]: Reached target Login Prompts.
Sep 30 16:56:53 np0005463147.novalocal crond[1012]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 40% if used.)
Sep 30 16:56:53 np0005463147.novalocal crond[1012]: (CRON) INFO (running with inotify support)
Sep 30 16:56:53 np0005463147.novalocal rsyslogd[1009]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1009" x-info="https://www.rsyslog.com"] start
Sep 30 16:56:53 np0005463147.novalocal rsyslogd[1009]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Sep 30 16:56:53 np0005463147.novalocal systemd[1]: Started System Logging Service.
Sep 30 16:56:53 np0005463147.novalocal systemd[1]: Reached target Multi-User System.
Sep 30 16:56:53 np0005463147.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Sep 30 16:56:53 np0005463147.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Sep 30 16:56:53 np0005463147.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Sep 30 16:56:53 np0005463147.novalocal rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Sep 30 16:56:53 np0005463147.novalocal cloud-init[1021]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Tue, 30 Sep 2025 16:56:53 +0000. Up 12.35 seconds.
Sep 30 16:56:53 np0005463147.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Sep 30 16:56:53 np0005463147.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Sep 30 16:56:54 np0005463147.novalocal cloud-init[1025]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Tue, 30 Sep 2025 16:56:54 +0000. Up 12.75 seconds.
Sep 30 16:56:54 np0005463147.novalocal cloud-init[1027]: #############################################################
Sep 30 16:56:54 np0005463147.novalocal cloud-init[1028]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Sep 30 16:56:54 np0005463147.novalocal cloud-init[1030]: 256 SHA256:JtqgnivYkqQi4HdbTbGAyCnbiV/kyqR0jUXNBpGmMn8 root@np0005463147.novalocal (ECDSA)
Sep 30 16:56:54 np0005463147.novalocal cloud-init[1032]: 256 SHA256:nElo/tg4WqqW7q887B7Vz3Tdxal8nLeZpD0qrdyjCaE root@np0005463147.novalocal (ED25519)
Sep 30 16:56:54 np0005463147.novalocal cloud-init[1034]: 3072 SHA256:sUbA/QSF0TCIxMBCEg6cngzozrSjHKGl1Ig4JaJ+610 root@np0005463147.novalocal (RSA)
Sep 30 16:56:54 np0005463147.novalocal cloud-init[1035]: -----END SSH HOST KEY FINGERPRINTS-----
Sep 30 16:56:54 np0005463147.novalocal cloud-init[1036]: #############################################################
Sep 30 16:56:54 np0005463147.novalocal cloud-init[1025]: Cloud-init v. 24.4-7.el9 finished at Tue, 30 Sep 2025 16:56:54 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 12.97 seconds
Sep 30 16:56:54 np0005463147.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Sep 30 16:56:54 np0005463147.novalocal systemd[1]: Reached target Cloud-init target.
Sep 30 16:56:54 np0005463147.novalocal systemd[1]: Startup finished in 1.771s (kernel) + 3.152s (initrd) + 8.115s (userspace) = 13.040s.
Sep 30 16:56:54 np0005463147.novalocal sshd-session[1040]: Connection reset by 38.102.83.114 port 47816 [preauth]
Sep 30 16:56:54 np0005463147.novalocal sshd-session[1042]: Unable to negotiate with 38.102.83.114 port 47830: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Sep 30 16:56:54 np0005463147.novalocal sshd-session[1046]: Unable to negotiate with 38.102.83.114 port 47848: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Sep 30 16:56:54 np0005463147.novalocal sshd-session[1048]: Unable to negotiate with 38.102.83.114 port 47856: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Sep 30 16:56:54 np0005463147.novalocal sshd-session[1054]: Unable to negotiate with 38.102.83.114 port 47882: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Sep 30 16:56:54 np0005463147.novalocal sshd-session[1044]: Connection closed by 38.102.83.114 port 47840 [preauth]
Sep 30 16:56:54 np0005463147.novalocal sshd-session[1056]: Unable to negotiate with 38.102.83.114 port 47884: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Sep 30 16:56:54 np0005463147.novalocal sshd-session[1050]: Connection closed by 38.102.83.114 port 47872 [preauth]
Sep 30 16:56:54 np0005463147.novalocal sshd-session[1052]: Connection closed by 38.102.83.114 port 47880 [preauth]
Sep 30 16:56:56 np0005463147.novalocal chronyd[808]: Selected source 206.108.0.132 (2.centos.pool.ntp.org)
Sep 30 16:56:56 np0005463147.novalocal chronyd[808]: System clock TAI offset set to 37 seconds
Sep 30 16:56:58 np0005463147.novalocal irqbalance[792]: Cannot change IRQ 35 affinity: Operation not permitted
Sep 30 16:56:58 np0005463147.novalocal irqbalance[792]: IRQ 35 affinity is now unmanaged
Sep 30 16:56:58 np0005463147.novalocal irqbalance[792]: Cannot change IRQ 33 affinity: Operation not permitted
Sep 30 16:56:58 np0005463147.novalocal irqbalance[792]: IRQ 33 affinity is now unmanaged
Sep 30 16:56:58 np0005463147.novalocal irqbalance[792]: Cannot change IRQ 31 affinity: Operation not permitted
Sep 30 16:56:58 np0005463147.novalocal irqbalance[792]: IRQ 31 affinity is now unmanaged
Sep 30 16:56:58 np0005463147.novalocal irqbalance[792]: Cannot change IRQ 28 affinity: Operation not permitted
Sep 30 16:56:58 np0005463147.novalocal irqbalance[792]: IRQ 28 affinity is now unmanaged
Sep 30 16:56:58 np0005463147.novalocal irqbalance[792]: Cannot change IRQ 34 affinity: Operation not permitted
Sep 30 16:56:58 np0005463147.novalocal irqbalance[792]: IRQ 34 affinity is now unmanaged
Sep 30 16:56:58 np0005463147.novalocal irqbalance[792]: Cannot change IRQ 32 affinity: Operation not permitted
Sep 30 16:56:58 np0005463147.novalocal irqbalance[792]: IRQ 32 affinity is now unmanaged
Sep 30 16:56:58 np0005463147.novalocal irqbalance[792]: Cannot change IRQ 30 affinity: Operation not permitted
Sep 30 16:56:58 np0005463147.novalocal irqbalance[792]: IRQ 30 affinity is now unmanaged
Sep 30 16:56:58 np0005463147.novalocal irqbalance[792]: Cannot change IRQ 29 affinity: Operation not permitted
Sep 30 16:56:58 np0005463147.novalocal irqbalance[792]: IRQ 29 affinity is now unmanaged
Sep 30 16:57:01 np0005463147.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Sep 30 16:57:15 np0005463147.novalocal sshd-session[1058]: Accepted publickey for zuul from 38.102.83.114 port 35586 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Sep 30 16:57:15 np0005463147.novalocal systemd[1]: Created slice User Slice of UID 1000.
Sep 30 16:57:15 np0005463147.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Sep 30 16:57:15 np0005463147.novalocal systemd-logind[811]: New session 1 of user zuul.
Sep 30 16:57:15 np0005463147.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Sep 30 16:57:15 np0005463147.novalocal systemd[1]: Starting User Manager for UID 1000...
Sep 30 16:57:15 np0005463147.novalocal systemd[1062]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 16:57:15 np0005463147.novalocal systemd[1062]: Queued start job for default target Main User Target.
Sep 30 16:57:15 np0005463147.novalocal systemd[1062]: Created slice User Application Slice.
Sep 30 16:57:15 np0005463147.novalocal systemd[1062]: Started Mark boot as successful after the user session has run 2 minutes.
Sep 30 16:57:15 np0005463147.novalocal systemd[1062]: Started Daily Cleanup of User's Temporary Directories.
Sep 30 16:57:15 np0005463147.novalocal systemd[1062]: Reached target Paths.
Sep 30 16:57:15 np0005463147.novalocal systemd[1062]: Reached target Timers.
Sep 30 16:57:15 np0005463147.novalocal systemd[1062]: Starting D-Bus User Message Bus Socket...
Sep 30 16:57:15 np0005463147.novalocal systemd[1062]: Starting Create User's Volatile Files and Directories...
Sep 30 16:57:15 np0005463147.novalocal systemd[1062]: Finished Create User's Volatile Files and Directories.
Sep 30 16:57:15 np0005463147.novalocal systemd[1062]: Listening on D-Bus User Message Bus Socket.
Sep 30 16:57:15 np0005463147.novalocal systemd[1062]: Reached target Sockets.
Sep 30 16:57:15 np0005463147.novalocal systemd[1062]: Reached target Basic System.
Sep 30 16:57:15 np0005463147.novalocal systemd[1062]: Reached target Main User Target.
Sep 30 16:57:15 np0005463147.novalocal systemd[1062]: Startup finished in 156ms.
Sep 30 16:57:15 np0005463147.novalocal systemd[1]: Started User Manager for UID 1000.
Sep 30 16:57:15 np0005463147.novalocal systemd[1]: Started Session 1 of User zuul.
Sep 30 16:57:15 np0005463147.novalocal sshd-session[1058]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 16:57:16 np0005463147.novalocal python3[1144]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 16:57:18 np0005463147.novalocal python3[1172]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 16:57:19 np0005463147.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 30 16:57:25 np0005463147.novalocal python3[1232]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 16:57:26 np0005463147.novalocal python3[1272]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Sep 30 16:57:28 np0005463147.novalocal python3[1298]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDJZRnlw5lxA7UBiwhIWHL9fYC6NNPpslBJpmHQk/tT7YcFtPXtFCFVX5DH+DwaLlXqqjlCzUzh7V1c8ItxRgp2aE1CNn5dfciCNfCvpELE5UvrVEHMg+Yn9jZdXBENoj2Ph3y9cVfl6lDKws6pKufo8fW/z65wwOVxAiJyYhDb4BueCFuOA8UT9u+O3aB1TSXMJe9jxldV6kUwN5sJ2cJkm9SBDd++KtEKnG7yuw6SOhh3PCNwlzKOy4McnzAXF1P2vuYvJBS+53c221epEc5ZDxcTkCBndN/OSDxnL7pVGjWkNS1eplYJ03PmNPFNRsjyrhlShMEiKrNoTroSY1HLsSdFpCfK2roJTQHnzkl4QnsZXI76ZldD/rU370gz4wDHAJm7TrUTz0scMRAOgIIH5hD7XqiVcOH9+Y2KVHFvvKXZAWgvpuozlqalQbU3/Cnb6dA7NP6tinef3MvzTyxR5BhyowXb1gha8eX9XmFoQlc9ndpgKD1dSgHYy2pQxas= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 16:57:28 np0005463147.novalocal python3[1322]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 16:57:29 np0005463147.novalocal python3[1421]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 16:57:29 np0005463147.novalocal python3[1492]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759251449.1832564-229-33604260396550/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=9a7c1f9fca9042f6bc5bf6667fbd0ac7_id_rsa follow=False checksum=67b7651487d5caa423673a36898f80db9e6af5ef backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 16:57:30 np0005463147.novalocal python3[1615]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 16:57:30 np0005463147.novalocal python3[1686]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759251450.3148377-273-17629045760089/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=9a7c1f9fca9042f6bc5bf6667fbd0ac7_id_rsa.pub follow=False checksum=f21d6bceda5d43e7671c6bab343ac33d3c1e3c12 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 16:57:32 np0005463147.novalocal python3[1734]: ansible-ping Invoked with data=pong
Sep 30 16:57:33 np0005463147.novalocal python3[1758]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 16:57:35 np0005463147.novalocal python3[1816]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Sep 30 16:57:36 np0005463147.novalocal python3[1848]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 16:57:36 np0005463147.novalocal python3[1872]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 16:57:37 np0005463147.novalocal python3[1896]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 16:57:37 np0005463147.novalocal python3[1920]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 16:57:37 np0005463147.novalocal python3[1944]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 16:57:38 np0005463147.novalocal python3[1968]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 16:57:39 np0005463147.novalocal sudo[1992]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfdrzlewxxtosxhsqtiynkqzqjpbrqkc ; /usr/bin/python3'
Sep 30 16:57:39 np0005463147.novalocal sudo[1992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 16:57:39 np0005463147.novalocal python3[1994]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 16:57:39 np0005463147.novalocal sudo[1992]: pam_unix(sudo:session): session closed for user root
Sep 30 16:57:40 np0005463147.novalocal sudo[2070]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwqngbiycwbllevjnhoqqcvlzjqsfvon ; /usr/bin/python3'
Sep 30 16:57:40 np0005463147.novalocal sudo[2070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 16:57:40 np0005463147.novalocal python3[2072]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 16:57:40 np0005463147.novalocal sudo[2070]: pam_unix(sudo:session): session closed for user root
Sep 30 16:57:40 np0005463147.novalocal sudo[2143]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faujtzdntfjzhwngiwurbvprqgthhhbq ; /usr/bin/python3'
Sep 30 16:57:40 np0005463147.novalocal sudo[2143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 16:57:41 np0005463147.novalocal python3[2145]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759251460.0561914-26-235467434813914/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 16:57:41 np0005463147.novalocal sudo[2143]: pam_unix(sudo:session): session closed for user root
Sep 30 16:57:41 np0005463147.novalocal python3[2193]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 16:57:42 np0005463147.novalocal python3[2217]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 16:57:42 np0005463147.novalocal python3[2241]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 16:57:42 np0005463147.novalocal python3[2265]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 16:57:43 np0005463147.novalocal python3[2289]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 16:57:43 np0005463147.novalocal python3[2313]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 16:57:43 np0005463147.novalocal python3[2337]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 16:57:44 np0005463147.novalocal python3[2361]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 16:57:44 np0005463147.novalocal python3[2385]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 16:57:44 np0005463147.novalocal python3[2409]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 16:57:45 np0005463147.novalocal python3[2433]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 16:57:45 np0005463147.novalocal python3[2457]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 16:57:45 np0005463147.novalocal python3[2481]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 16:57:46 np0005463147.novalocal python3[2505]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 16:57:46 np0005463147.novalocal python3[2529]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 16:57:46 np0005463147.novalocal python3[2553]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 16:57:47 np0005463147.novalocal python3[2577]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 16:57:47 np0005463147.novalocal python3[2601]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 16:57:48 np0005463147.novalocal python3[2625]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 16:57:48 np0005463147.novalocal python3[2649]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 16:57:48 np0005463147.novalocal python3[2673]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 16:57:49 np0005463147.novalocal python3[2697]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 16:57:49 np0005463147.novalocal python3[2721]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 16:57:49 np0005463147.novalocal python3[2745]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 16:57:49 np0005463147.novalocal python3[2769]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 16:57:50 np0005463147.novalocal python3[2793]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 16:57:51 np0005463147.novalocal sudo[2817]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkrrzerbjatqdiuebfniufzwthkmozig ; /usr/bin/python3'
Sep 30 16:57:51 np0005463147.novalocal sudo[2817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 16:57:51 np0005463147.novalocal python3[2819]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Sep 30 16:57:51 np0005463147.novalocal systemd[1]: Starting Time & Date Service...
Sep 30 16:57:51 np0005463147.novalocal systemd[1]: Started Time & Date Service.
Sep 30 16:57:52 np0005463147.novalocal systemd-timedated[2821]: Changed time zone to 'UTC' (UTC).
Sep 30 16:57:52 np0005463147.novalocal sudo[2817]: pam_unix(sudo:session): session closed for user root
Sep 30 16:57:52 np0005463147.novalocal sudo[2848]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drdivefkegxzqpdfqlshqvbekdhzsdqm ; /usr/bin/python3'
Sep 30 16:57:52 np0005463147.novalocal sudo[2848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 16:57:52 np0005463147.novalocal python3[2850]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 16:57:52 np0005463147.novalocal sudo[2848]: pam_unix(sudo:session): session closed for user root
Sep 30 16:57:52 np0005463147.novalocal python3[2926]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 16:57:53 np0005463147.novalocal python3[2997]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1759251472.5683446-202-228258748528802/source _original_basename=tmp4z_fl3nt follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 16:57:53 np0005463147.novalocal python3[3097]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 16:57:54 np0005463147.novalocal python3[3168]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759251473.4321184-242-61122749827476/source _original_basename=tmpt881hzqc follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 16:57:54 np0005463147.novalocal sudo[3268]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdqewqbefzfgbkvfbovkhzegjbokrrfn ; /usr/bin/python3'
Sep 30 16:57:54 np0005463147.novalocal sudo[3268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 16:57:54 np0005463147.novalocal python3[3270]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 16:57:54 np0005463147.novalocal sudo[3268]: pam_unix(sudo:session): session closed for user root
Sep 30 16:57:55 np0005463147.novalocal sudo[3341]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-conzvivmgifeqxeasoojeffrxixlfddu ; /usr/bin/python3'
Sep 30 16:57:55 np0005463147.novalocal sudo[3341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 16:57:55 np0005463147.novalocal python3[3343]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759251474.5575066-306-178640130120889/source _original_basename=tmpnxz_akk8 follow=False checksum=a70bebf2c8ca4f48b35db326a6da932097539870 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 16:57:55 np0005463147.novalocal sudo[3341]: pam_unix(sudo:session): session closed for user root
Sep 30 16:57:55 np0005463147.novalocal python3[3391]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 16:57:55 np0005463147.novalocal python3[3417]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 16:57:56 np0005463147.novalocal sudo[3495]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dypzymhtqwmgtkqauxgpzpilmztpkxbt ; /usr/bin/python3'
Sep 30 16:57:56 np0005463147.novalocal sudo[3495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 16:57:56 np0005463147.novalocal python3[3497]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 16:57:56 np0005463147.novalocal sudo[3495]: pam_unix(sudo:session): session closed for user root
Sep 30 16:57:56 np0005463147.novalocal sudo[3568]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pswapubzojkcorwbvpbdvajogbedngbt ; /usr/bin/python3'
Sep 30 16:57:56 np0005463147.novalocal sudo[3568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 16:57:56 np0005463147.novalocal python3[3570]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1759251476.1912389-362-864800383670/source _original_basename=tmpt_hwq8xh follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 16:57:56 np0005463147.novalocal sudo[3568]: pam_unix(sudo:session): session closed for user root
Sep 30 16:57:57 np0005463147.novalocal sudo[3619]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcqtoewfsqkxxclxdyvilrucckcztqvq ; /usr/bin/python3'
Sep 30 16:57:57 np0005463147.novalocal sudo[3619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 16:57:57 np0005463147.novalocal python3[3621]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-e63c-10db-00000000001e-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 16:57:57 np0005463147.novalocal sudo[3619]: pam_unix(sudo:session): session closed for user root
Sep 30 16:57:58 np0005463147.novalocal python3[3649]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-e63c-10db-00000000001f-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Sep 30 16:57:59 np0005463147.novalocal python3[3677]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 16:58:16 np0005463147.novalocal sudo[3701]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcdepcupnarzlstmdmbcshcbxqznvguz ; /usr/bin/python3'
Sep 30 16:58:16 np0005463147.novalocal sudo[3701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 16:58:16 np0005463147.novalocal python3[3703]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 16:58:16 np0005463147.novalocal sudo[3701]: pam_unix(sudo:session): session closed for user root
Sep 30 16:58:22 np0005463147.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Sep 30 16:58:54 np0005463147.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 30 16:58:54 np0005463147.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Sep 30 16:58:54 np0005463147.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Sep 30 16:58:54 np0005463147.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Sep 30 16:58:54 np0005463147.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Sep 30 16:58:54 np0005463147.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Sep 30 16:58:54 np0005463147.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Sep 30 16:58:54 np0005463147.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Sep 30 16:58:54 np0005463147.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Sep 30 16:58:54 np0005463147.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Sep 30 16:58:54 np0005463147.novalocal NetworkManager[860]: <info>  [1759251534.7896] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Sep 30 16:58:54 np0005463147.novalocal systemd-udevd[3707]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 16:58:54 np0005463147.novalocal NetworkManager[860]: <info>  [1759251534.8139] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Sep 30 16:58:54 np0005463147.novalocal NetworkManager[860]: <info>  [1759251534.8176] settings: (eth1): created default wired connection 'Wired connection 1'
Sep 30 16:58:54 np0005463147.novalocal NetworkManager[860]: <info>  [1759251534.8181] device (eth1): carrier: link connected
Sep 30 16:58:54 np0005463147.novalocal NetworkManager[860]: <info>  [1759251534.8184] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Sep 30 16:58:54 np0005463147.novalocal NetworkManager[860]: <info>  [1759251534.8191] policy: auto-activating connection 'Wired connection 1' (3ae83a79-f376-35d9-8615-d9dcf6285ffc)
Sep 30 16:58:54 np0005463147.novalocal NetworkManager[860]: <info>  [1759251534.8196] device (eth1): Activation: starting connection 'Wired connection 1' (3ae83a79-f376-35d9-8615-d9dcf6285ffc)
Sep 30 16:58:54 np0005463147.novalocal NetworkManager[860]: <info>  [1759251534.8197] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Sep 30 16:58:54 np0005463147.novalocal NetworkManager[860]: <info>  [1759251534.8200] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Sep 30 16:58:54 np0005463147.novalocal NetworkManager[860]: <info>  [1759251534.8206] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Sep 30 16:58:54 np0005463147.novalocal NetworkManager[860]: <info>  [1759251534.8212] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Sep 30 16:58:55 np0005463147.novalocal python3[3733]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-574d-0424-000000000112-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 16:59:05 np0005463147.novalocal sudo[3811]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctqkymnkibkbacsrspbvsdvnxsqbnqvy ; OS_CLOUD=vexxhost /usr/bin/python3'
Sep 30 16:59:05 np0005463147.novalocal sudo[3811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 16:59:05 np0005463147.novalocal python3[3813]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 16:59:05 np0005463147.novalocal sudo[3811]: pam_unix(sudo:session): session closed for user root
Sep 30 16:59:05 np0005463147.novalocal sudo[3884]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqwuiidehqgwutikawolhwfxtkjmoiad ; OS_CLOUD=vexxhost /usr/bin/python3'
Sep 30 16:59:05 np0005463147.novalocal sudo[3884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 16:59:05 np0005463147.novalocal python3[3886]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759251545.1346192-103-28729648269503/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=6149de442a225194173a95bce2d7b7220941bf8c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 16:59:05 np0005463147.novalocal sudo[3884]: pam_unix(sudo:session): session closed for user root
Sep 30 16:59:06 np0005463147.novalocal sudo[3934]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcjxqsgpyiiwauzekgotadtgugohssmk ; OS_CLOUD=vexxhost /usr/bin/python3'
Sep 30 16:59:06 np0005463147.novalocal sudo[3934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 16:59:06 np0005463147.novalocal python3[3936]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 16:59:06 np0005463147.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Sep 30 16:59:06 np0005463147.novalocal systemd[1]: Stopped Network Manager Wait Online.
Sep 30 16:59:06 np0005463147.novalocal systemd[1]: Stopping Network Manager Wait Online...
Sep 30 16:59:06 np0005463147.novalocal systemd[1]: Stopping Network Manager...
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[860]: <info>  [1759251546.7935] caught SIGTERM, shutting down normally.
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[860]: <info>  [1759251546.7945] dhcp4 (eth0): canceled DHCP transaction
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[860]: <info>  [1759251546.7945] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[860]: <info>  [1759251546.7945] dhcp4 (eth0): state changed no lease
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[860]: <info>  [1759251546.7947] manager: NetworkManager state is now CONNECTING
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[860]: <info>  [1759251546.8044] dhcp4 (eth1): canceled DHCP transaction
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[860]: <info>  [1759251546.8044] dhcp4 (eth1): state changed no lease
Sep 30 16:59:06 np0005463147.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[860]: <info>  [1759251546.8096] exiting (success)
Sep 30 16:59:06 np0005463147.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Sep 30 16:59:06 np0005463147.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Sep 30 16:59:06 np0005463147.novalocal systemd[1]: Stopped Network Manager.
Sep 30 16:59:06 np0005463147.novalocal systemd[1]: NetworkManager.service: Consumed 1.077s CPU time, 9.8M memory peak.
Sep 30 16:59:06 np0005463147.novalocal systemd[1]: Starting Network Manager...
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.8715] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:17653e5b-2838-4126-901c-d60259dd6a79)
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.8716] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.8782] manager[0x564d25e79070]: monitoring kernel firmware directory '/lib/firmware'.
Sep 30 16:59:06 np0005463147.novalocal systemd[1]: Starting Hostname Service...
Sep 30 16:59:06 np0005463147.novalocal systemd[1]: Started Hostname Service.
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9506] hostname: hostname: using hostnamed
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9511] hostname: static hostname changed from (none) to "np0005463147.novalocal"
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9519] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9527] manager[0x564d25e79070]: rfkill: Wi-Fi hardware radio set enabled
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9528] manager[0x564d25e79070]: rfkill: WWAN hardware radio set enabled
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9576] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9577] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9578] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9578] manager: Networking is enabled by state file
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9582] settings: Loaded settings plugin: keyfile (internal)
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9590] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9638] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9655] dhcp: init: Using DHCP client 'internal'
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9660] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9669] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9680] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9694] device (lo): Activation: starting connection 'lo' (fc4e626b-e829-4908-a3d3-ce552dfc6be3)
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9705] device (eth0): carrier: link connected
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9711] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9720] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9721] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9733] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9744] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9755] device (eth1): carrier: link connected
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9762] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9772] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (3ae83a79-f376-35d9-8615-d9dcf6285ffc) (indicated)
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9772] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9784] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9797] device (eth1): Activation: starting connection 'Wired connection 1' (3ae83a79-f376-35d9-8615-d9dcf6285ffc)
Sep 30 16:59:06 np0005463147.novalocal systemd[1]: Started Network Manager.
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9809] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9821] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9829] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9835] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9839] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9847] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9853] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9858] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9866] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9879] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9884] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9901] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9906] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9939] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9948] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9957] device (lo): Activation: successful, device activated.
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9969] dhcp4 (eth0): state changed new lease, address=38.102.83.202
Sep 30 16:59:06 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251546.9980] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Sep 30 16:59:07 np0005463147.novalocal systemd[1]: Starting Network Manager Wait Online...
Sep 30 16:59:07 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251547.0087] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Sep 30 16:59:07 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251547.0108] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Sep 30 16:59:07 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251547.0110] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Sep 30 16:59:07 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251547.0115] manager: NetworkManager state is now CONNECTED_SITE
Sep 30 16:59:07 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251547.0122] device (eth0): Activation: successful, device activated.
Sep 30 16:59:07 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251547.0132] manager: NetworkManager state is now CONNECTED_GLOBAL
Sep 30 16:59:07 np0005463147.novalocal sudo[3934]: pam_unix(sudo:session): session closed for user root
Sep 30 16:59:07 np0005463147.novalocal python3[4020]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-574d-0424-0000000000b2-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 16:59:17 np0005463147.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Sep 30 16:59:25 np0005463147.novalocal systemd[1062]: Starting Mark boot as successful...
Sep 30 16:59:25 np0005463147.novalocal systemd[1062]: Finished Mark boot as successful.
Sep 30 16:59:36 np0005463147.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 30 16:59:52 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251592.2481] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Sep 30 16:59:52 np0005463147.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Sep 30 16:59:52 np0005463147.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Sep 30 16:59:52 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251592.2749] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Sep 30 16:59:52 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251592.2752] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Sep 30 16:59:52 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251592.2763] device (eth1): Activation: successful, device activated.
Sep 30 16:59:52 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251592.2772] manager: startup complete
Sep 30 16:59:52 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251592.2775] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Sep 30 16:59:52 np0005463147.novalocal NetworkManager[3947]: <warn>  [1759251592.2783] device (eth1): Activation: failed for connection 'Wired connection 1'
Sep 30 16:59:52 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251592.2792] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Sep 30 16:59:52 np0005463147.novalocal systemd[1]: Finished Network Manager Wait Online.
Sep 30 16:59:52 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251592.2874] dhcp4 (eth1): canceled DHCP transaction
Sep 30 16:59:52 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251592.2874] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Sep 30 16:59:52 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251592.2875] dhcp4 (eth1): state changed no lease
Sep 30 16:59:52 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251592.2890] policy: auto-activating connection 'ci-private-network' (69237a25-976c-5886-ad18-fe086b52d43a)
Sep 30 16:59:52 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251592.2894] device (eth1): Activation: starting connection 'ci-private-network' (69237a25-976c-5886-ad18-fe086b52d43a)
Sep 30 16:59:52 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251592.2895] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Sep 30 16:59:52 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251592.2897] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Sep 30 16:59:52 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251592.2903] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Sep 30 16:59:52 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251592.2911] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Sep 30 16:59:52 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251592.3063] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Sep 30 16:59:52 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251592.3064] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Sep 30 16:59:52 np0005463147.novalocal NetworkManager[3947]: <info>  [1759251592.3069] device (eth1): Activation: successful, device activated.
Sep 30 17:00:02 np0005463147.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Sep 30 17:00:07 np0005463147.novalocal sshd-session[1071]: Received disconnect from 38.102.83.114 port 35586:11: disconnected by user
Sep 30 17:00:07 np0005463147.novalocal sshd-session[1071]: Disconnected from user zuul 38.102.83.114 port 35586
Sep 30 17:00:07 np0005463147.novalocal sshd-session[1058]: pam_unix(sshd:session): session closed for user zuul
Sep 30 17:00:07 np0005463147.novalocal systemd-logind[811]: Session 1 logged out. Waiting for processes to exit.
Sep 30 17:00:19 np0005463147.novalocal sshd-session[4049]: Invalid user admin from 185.156.73.233 port 53022
Sep 30 17:00:20 np0005463147.novalocal sshd-session[4049]: Connection closed by invalid user admin 185.156.73.233 port 53022 [preauth]
Sep 30 17:00:36 np0005463147.novalocal sshd-session[4051]: Accepted publickey for zuul from 38.102.83.114 port 54680 ssh2: RSA SHA256:DFNImqpR4L6Frzap1o3GpslEX6xER8N06/GWUjaeSng
Sep 30 17:00:36 np0005463147.novalocal systemd-logind[811]: New session 3 of user zuul.
Sep 30 17:00:36 np0005463147.novalocal systemd[1]: Started Session 3 of User zuul.
Sep 30 17:00:37 np0005463147.novalocal sshd-session[4051]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 17:00:37 np0005463147.novalocal sudo[4130]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnriwpgpepyenqobnptokkxsckbtkxuw ; OS_CLOUD=vexxhost /usr/bin/python3'
Sep 30 17:00:37 np0005463147.novalocal sudo[4130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:00:37 np0005463147.novalocal python3[4132]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 17:00:37 np0005463147.novalocal sudo[4130]: pam_unix(sudo:session): session closed for user root
Sep 30 17:00:37 np0005463147.novalocal sudo[4203]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drktxhadkxfjlfujniyjyvubgvshlfoe ; OS_CLOUD=vexxhost /usr/bin/python3'
Sep 30 17:00:37 np0005463147.novalocal sudo[4203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:00:37 np0005463147.novalocal python3[4205]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759251637.129337-320-14881640481549/source _original_basename=tmpeztuxhse follow=False checksum=691c6e2d9962c1bc8a148fd3530a3221907fba75 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:00:37 np0005463147.novalocal sudo[4203]: pam_unix(sudo:session): session closed for user root
Sep 30 17:00:41 np0005463147.novalocal sshd-session[4054]: Connection closed by 38.102.83.114 port 54680
Sep 30 17:00:41 np0005463147.novalocal sshd-session[4051]: pam_unix(sshd:session): session closed for user zuul
Sep 30 17:00:41 np0005463147.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Sep 30 17:00:41 np0005463147.novalocal systemd-logind[811]: Session 3 logged out. Waiting for processes to exit.
Sep 30 17:00:41 np0005463147.novalocal systemd-logind[811]: Removed session 3.
Sep 30 17:01:01 np0005463147.novalocal CROND[4231]: (root) CMD (run-parts /etc/cron.hourly)
Sep 30 17:01:01 np0005463147.novalocal run-parts[4234]: (/etc/cron.hourly) starting 0anacron
Sep 30 17:01:01 np0005463147.novalocal anacron[4242]: Anacron started on 2025-09-30
Sep 30 17:01:01 np0005463147.novalocal anacron[4242]: Will run job `cron.daily' in 45 min.
Sep 30 17:01:01 np0005463147.novalocal anacron[4242]: Will run job `cron.weekly' in 65 min.
Sep 30 17:01:01 np0005463147.novalocal anacron[4242]: Will run job `cron.monthly' in 85 min.
Sep 30 17:01:01 np0005463147.novalocal anacron[4242]: Jobs will be executed sequentially
Sep 30 17:01:01 np0005463147.novalocal run-parts[4244]: (/etc/cron.hourly) finished 0anacron
Sep 30 17:01:01 np0005463147.novalocal CROND[4230]: (root) CMDEND (run-parts /etc/cron.hourly)
Sep 30 17:02:25 np0005463147.novalocal systemd[1062]: Created slice User Background Tasks Slice.
Sep 30 17:02:25 np0005463147.novalocal systemd[1062]: Starting Cleanup of User's Temporary Files and Directories...
Sep 30 17:02:25 np0005463147.novalocal systemd[1062]: Finished Cleanup of User's Temporary Files and Directories.
Sep 30 17:03:54 np0005463147.novalocal sshd-session[4248]: Invalid user admin from 185.156.73.233 port 44848
Sep 30 17:03:54 np0005463147.novalocal sshd-session[4248]: Connection closed by invalid user admin 185.156.73.233 port 44848 [preauth]
Sep 30 17:06:13 np0005463147.novalocal sshd-session[4253]: Accepted publickey for zuul from 38.102.83.114 port 41214 ssh2: RSA SHA256:DFNImqpR4L6Frzap1o3GpslEX6xER8N06/GWUjaeSng
Sep 30 17:06:13 np0005463147.novalocal systemd-logind[811]: New session 4 of user zuul.
Sep 30 17:06:13 np0005463147.novalocal systemd[1]: Started Session 4 of User zuul.
Sep 30 17:06:13 np0005463147.novalocal sshd-session[4253]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 17:06:13 np0005463147.novalocal sudo[4280]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gitiqosvfitjdrcztczxjhnkdomjiwqc ; /usr/bin/python3'
Sep 30 17:06:13 np0005463147.novalocal sudo[4280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:06:14 np0005463147.novalocal python3[4282]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-fa1f-9b67-000000001cf3-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:06:14 np0005463147.novalocal sudo[4280]: pam_unix(sudo:session): session closed for user root
Sep 30 17:06:14 np0005463147.novalocal sudo[4308]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzjljhzoztwllkiqgnothehomhgvakni ; /usr/bin/python3'
Sep 30 17:06:14 np0005463147.novalocal sudo[4308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:06:14 np0005463147.novalocal python3[4310]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:06:14 np0005463147.novalocal sudo[4308]: pam_unix(sudo:session): session closed for user root
Sep 30 17:06:14 np0005463147.novalocal sudo[4335]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spqkwcpezetdiadtxgcnrdbymsbwihei ; /usr/bin/python3'
Sep 30 17:06:14 np0005463147.novalocal sudo[4335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:06:14 np0005463147.novalocal python3[4337]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:06:14 np0005463147.novalocal sudo[4335]: pam_unix(sudo:session): session closed for user root
Sep 30 17:06:14 np0005463147.novalocal sudo[4361]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dogukedsqcbgmvmkpjsxcgrieznonbkj ; /usr/bin/python3'
Sep 30 17:06:14 np0005463147.novalocal sudo[4361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:06:14 np0005463147.novalocal python3[4363]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:06:14 np0005463147.novalocal sudo[4361]: pam_unix(sudo:session): session closed for user root
Sep 30 17:06:15 np0005463147.novalocal sudo[4387]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiuwhcurzocibkyjendtuxpnjtrhehfe ; /usr/bin/python3'
Sep 30 17:06:15 np0005463147.novalocal sudo[4387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:06:15 np0005463147.novalocal python3[4389]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:06:15 np0005463147.novalocal sudo[4387]: pam_unix(sudo:session): session closed for user root
Sep 30 17:06:15 np0005463147.novalocal sudo[4413]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idjnyvwzypnwlncduykrwsexmxtztkzq ; /usr/bin/python3'
Sep 30 17:06:15 np0005463147.novalocal sudo[4413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:06:15 np0005463147.novalocal python3[4415]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:06:15 np0005463147.novalocal python3[4415]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Sep 30 17:06:15 np0005463147.novalocal sudo[4413]: pam_unix(sudo:session): session closed for user root
Sep 30 17:06:16 np0005463147.novalocal sudo[4439]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjpkhlzesoiwxocrqkiqctrxskbiezpl ; /usr/bin/python3'
Sep 30 17:06:16 np0005463147.novalocal sudo[4439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:06:16 np0005463147.novalocal python3[4441]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Sep 30 17:06:16 np0005463147.novalocal systemd[1]: Reloading.
Sep 30 17:06:16 np0005463147.novalocal systemd-rc-local-generator[4461]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:06:16 np0005463147.novalocal sudo[4439]: pam_unix(sudo:session): session closed for user root
Sep 30 17:06:18 np0005463147.novalocal sudo[4495]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tepwioydtichuzlisbroyqtqeiahrfhp ; /usr/bin/python3'
Sep 30 17:06:18 np0005463147.novalocal sudo[4495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:06:18 np0005463147.novalocal python3[4497]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Sep 30 17:06:18 np0005463147.novalocal sudo[4495]: pam_unix(sudo:session): session closed for user root
Sep 30 17:06:18 np0005463147.novalocal sudo[4521]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psrjhbqdaeprevsjthfwpmkcirodmnjv ; /usr/bin/python3'
Sep 30 17:06:18 np0005463147.novalocal sudo[4521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:06:18 np0005463147.novalocal python3[4523]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:06:18 np0005463147.novalocal sudo[4521]: pam_unix(sudo:session): session closed for user root
Sep 30 17:06:19 np0005463147.novalocal sudo[4550]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqzhksjzjmgcntjvtrjnjpcxuiglblib ; /usr/bin/python3'
Sep 30 17:06:19 np0005463147.novalocal sudo[4550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:06:19 np0005463147.novalocal python3[4552]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:06:19 np0005463147.novalocal sudo[4550]: pam_unix(sudo:session): session closed for user root
Sep 30 17:06:19 np0005463147.novalocal sudo[4578]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yeogvdtnmseoqqwwrnjzqxorattnthix ; /usr/bin/python3'
Sep 30 17:06:19 np0005463147.novalocal sudo[4578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:06:19 np0005463147.novalocal python3[4580]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:06:19 np0005463147.novalocal sudo[4578]: pam_unix(sudo:session): session closed for user root
Sep 30 17:06:19 np0005463147.novalocal sudo[4606]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-secfiqnnlaasjtmlqnhxkijijnbydqja ; /usr/bin/python3'
Sep 30 17:06:19 np0005463147.novalocal sudo[4606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:06:19 np0005463147.novalocal python3[4608]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:06:19 np0005463147.novalocal sudo[4606]: pam_unix(sudo:session): session closed for user root
Sep 30 17:06:20 np0005463147.novalocal python3[4635]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-fa1f-9b67-000000001cf9-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:06:20 np0005463147.novalocal python3[4665]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:06:23 np0005463147.novalocal sshd-session[4256]: Connection closed by 38.102.83.114 port 41214
Sep 30 17:06:23 np0005463147.novalocal sshd-session[4253]: pam_unix(sshd:session): session closed for user zuul
Sep 30 17:06:23 np0005463147.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Sep 30 17:06:23 np0005463147.novalocal systemd[1]: session-4.scope: Consumed 3.603s CPU time.
Sep 30 17:06:23 np0005463147.novalocal systemd-logind[811]: Session 4 logged out. Waiting for processes to exit.
Sep 30 17:06:23 np0005463147.novalocal systemd-logind[811]: Removed session 4.
Sep 30 17:06:24 np0005463147.novalocal sshd-session[4670]: Accepted publickey for zuul from 38.102.83.114 port 42214 ssh2: RSA SHA256:DFNImqpR4L6Frzap1o3GpslEX6xER8N06/GWUjaeSng
Sep 30 17:06:24 np0005463147.novalocal systemd-logind[811]: New session 5 of user zuul.
Sep 30 17:06:24 np0005463147.novalocal systemd[1]: Started Session 5 of User zuul.
Sep 30 17:06:24 np0005463147.novalocal sshd-session[4670]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 17:06:24 np0005463147.novalocal sudo[4697]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lomusglpxegyocjcfeeimpdkrikbsmvd ; /usr/bin/python3'
Sep 30 17:06:24 np0005463147.novalocal sudo[4697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:06:25 np0005463147.novalocal python3[4699]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Sep 30 17:06:41 np0005463147.novalocal kernel: SELinux:  Converting 364 SID table entries...
Sep 30 17:06:41 np0005463147.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Sep 30 17:06:41 np0005463147.novalocal kernel: SELinux:  policy capability open_perms=1
Sep 30 17:06:41 np0005463147.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Sep 30 17:06:41 np0005463147.novalocal kernel: SELinux:  policy capability always_check_network=0
Sep 30 17:06:41 np0005463147.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Sep 30 17:06:41 np0005463147.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Sep 30 17:06:41 np0005463147.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Sep 30 17:06:52 np0005463147.novalocal kernel: SELinux:  Converting 364 SID table entries...
Sep 30 17:06:52 np0005463147.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Sep 30 17:06:52 np0005463147.novalocal kernel: SELinux:  policy capability open_perms=1
Sep 30 17:06:52 np0005463147.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Sep 30 17:06:52 np0005463147.novalocal kernel: SELinux:  policy capability always_check_network=0
Sep 30 17:06:52 np0005463147.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Sep 30 17:06:52 np0005463147.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Sep 30 17:06:52 np0005463147.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Sep 30 17:06:54 np0005463147.novalocal sshd[1010]: Timeout before authentication for connection from 223.71.210.61 to 38.102.83.202, pid = 4251
Sep 30 17:07:02 np0005463147.novalocal kernel: SELinux:  Converting 364 SID table entries...
Sep 30 17:07:02 np0005463147.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Sep 30 17:07:02 np0005463147.novalocal kernel: SELinux:  policy capability open_perms=1
Sep 30 17:07:02 np0005463147.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Sep 30 17:07:02 np0005463147.novalocal kernel: SELinux:  policy capability always_check_network=0
Sep 30 17:07:02 np0005463147.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Sep 30 17:07:02 np0005463147.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Sep 30 17:07:02 np0005463147.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Sep 30 17:07:04 np0005463147.novalocal setsebool[4762]: The virt_use_nfs policy boolean was changed to 1 by root
Sep 30 17:07:04 np0005463147.novalocal setsebool[4762]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Sep 30 17:07:17 np0005463147.novalocal kernel: SELinux:  Converting 367 SID table entries...
Sep 30 17:07:17 np0005463147.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Sep 30 17:07:17 np0005463147.novalocal kernel: SELinux:  policy capability open_perms=1
Sep 30 17:07:17 np0005463147.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Sep 30 17:07:17 np0005463147.novalocal kernel: SELinux:  policy capability always_check_network=0
Sep 30 17:07:17 np0005463147.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Sep 30 17:07:17 np0005463147.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Sep 30 17:07:17 np0005463147.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Sep 30 17:07:38 np0005463147.novalocal dbus-broker-launch[781]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Sep 30 17:07:38 np0005463147.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Sep 30 17:07:38 np0005463147.novalocal systemd[1]: Starting man-db-cache-update.service...
Sep 30 17:07:38 np0005463147.novalocal systemd[1]: Reloading.
Sep 30 17:07:38 np0005463147.novalocal systemd-rc-local-generator[5521]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:07:38 np0005463147.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Sep 30 17:07:39 np0005463147.novalocal systemd[1]: Starting PackageKit Daemon...
Sep 30 17:07:39 np0005463147.novalocal PackageKit[6228]: daemon start
Sep 30 17:07:39 np0005463147.novalocal systemd[1]: Starting Authorization Manager...
Sep 30 17:07:39 np0005463147.novalocal polkitd[6289]: Started polkitd version 0.117
Sep 30 17:07:39 np0005463147.novalocal polkitd[6289]: Loading rules from directory /etc/polkit-1/rules.d
Sep 30 17:07:39 np0005463147.novalocal polkitd[6289]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 30 17:07:39 np0005463147.novalocal polkitd[6289]: Finished loading, compiling and executing 3 rules
Sep 30 17:07:39 np0005463147.novalocal polkitd[6289]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep 30 17:07:39 np0005463147.novalocal systemd[1]: Started Authorization Manager.
Sep 30 17:07:39 np0005463147.novalocal systemd[1]: Started PackageKit Daemon.
Sep 30 17:07:39 np0005463147.novalocal sudo[4697]: pam_unix(sudo:session): session closed for user root
Sep 30 17:07:40 np0005463147.novalocal python3[6915]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-d4b9-f2f5-00000000000b-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:07:41 np0005463147.novalocal kernel: evm: overlay not supported
Sep 30 17:07:41 np0005463147.novalocal systemd[1062]: Starting D-Bus User Message Bus...
Sep 30 17:07:41 np0005463147.novalocal dbus-broker-launch[8036]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Sep 30 17:07:41 np0005463147.novalocal dbus-broker-launch[8036]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Sep 30 17:07:41 np0005463147.novalocal systemd[1062]: Started D-Bus User Message Bus.
Sep 30 17:07:41 np0005463147.novalocal dbus-broker-lau[8036]: Ready
Sep 30 17:07:41 np0005463147.novalocal systemd[1062]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Sep 30 17:07:41 np0005463147.novalocal systemd[1062]: Created slice Slice /user.
Sep 30 17:07:41 np0005463147.novalocal systemd[1062]: podman-7752.scope: unit configures an IP firewall, but not running as root.
Sep 30 17:07:41 np0005463147.novalocal systemd[1062]: (This warning is only shown for the first unit using IP firewalling.)
Sep 30 17:07:41 np0005463147.novalocal systemd[1062]: Started podman-7752.scope.
Sep 30 17:07:42 np0005463147.novalocal systemd[1062]: Started podman-pause-5b480bf9.scope.
Sep 30 17:07:42 np0005463147.novalocal sudo[8640]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amgjwitpysaaaijkoxibfevctispzpnd ; /usr/bin/python3'
Sep 30 17:07:42 np0005463147.novalocal sudo[8640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:07:42 np0005463147.novalocal python3[8651]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                      location = "38.129.56.221:5001"
                                                      insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                      location = "38.129.56.221:5001"
                                                      insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:07:42 np0005463147.novalocal sudo[8640]: pam_unix(sudo:session): session closed for user root
Sep 30 17:07:43 np0005463147.novalocal sshd-session[4673]: Connection closed by 38.102.83.114 port 42214
Sep 30 17:07:43 np0005463147.novalocal sshd-session[4670]: pam_unix(sshd:session): session closed for user zuul
Sep 30 17:07:43 np0005463147.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Sep 30 17:07:43 np0005463147.novalocal systemd[1]: session-5.scope: Consumed 1min 5.004s CPU time.
Sep 30 17:07:43 np0005463147.novalocal systemd-logind[811]: Session 5 logged out. Waiting for processes to exit.
Sep 30 17:07:43 np0005463147.novalocal systemd-logind[811]: Removed session 5.
Sep 30 17:08:01 np0005463147.novalocal sshd-session[16036]: Connection closed by 38.102.83.36 port 57700 [preauth]
Sep 30 17:08:01 np0005463147.novalocal sshd-session[16042]: Connection closed by 38.102.83.36 port 57710 [preauth]
Sep 30 17:08:01 np0005463147.novalocal sshd-session[16040]: Unable to negotiate with 38.102.83.36 port 57720: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Sep 30 17:08:01 np0005463147.novalocal sshd-session[16034]: Unable to negotiate with 38.102.83.36 port 57724: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Sep 30 17:08:01 np0005463147.novalocal sshd-session[16037]: Unable to negotiate with 38.102.83.36 port 57736: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Sep 30 17:08:06 np0005463147.novalocal sshd-session[17574]: Accepted publickey for zuul from 38.102.83.114 port 42400 ssh2: RSA SHA256:DFNImqpR4L6Frzap1o3GpslEX6xER8N06/GWUjaeSng
Sep 30 17:08:06 np0005463147.novalocal systemd-logind[811]: New session 6 of user zuul.
Sep 30 17:08:06 np0005463147.novalocal systemd[1]: Started Session 6 of User zuul.
Sep 30 17:08:06 np0005463147.novalocal sshd-session[17574]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 17:08:07 np0005463147.novalocal python3[17666]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLdfvsAXcCq7Fq0F3Kitiz5SuHbqf8SH0K2JwdxHA7U8WZUl9eCqhG5JROBTXolfOebEF70oH6VJjv6QTxswAaY= zuul@np0005463146.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 17:08:07 np0005463147.novalocal sudo[17791]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgejjcbeyvxxehqskmdarqazsbuereih ; /usr/bin/python3'
Sep 30 17:08:07 np0005463147.novalocal sudo[17791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:08:07 np0005463147.novalocal python3[17801]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLdfvsAXcCq7Fq0F3Kitiz5SuHbqf8SH0K2JwdxHA7U8WZUl9eCqhG5JROBTXolfOebEF70oH6VJjv6QTxswAaY= zuul@np0005463146.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 17:08:07 np0005463147.novalocal sudo[17791]: pam_unix(sudo:session): session closed for user root
Sep 30 17:08:08 np0005463147.novalocal sudo[18063]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edfjkvbtpibvpabbtibfibsfuxjcxfhb ; /usr/bin/python3'
Sep 30 17:08:08 np0005463147.novalocal sudo[18063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:08:08 np0005463147.novalocal python3[18071]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005463147.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Sep 30 17:08:08 np0005463147.novalocal useradd[18140]: new group: name=cloud-admin, GID=1002
Sep 30 17:08:08 np0005463147.novalocal useradd[18140]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Sep 30 17:08:09 np0005463147.novalocal sudo[18063]: pam_unix(sudo:session): session closed for user root
Sep 30 17:08:09 np0005463147.novalocal sudo[18376]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdbzeeyykanueitwtzupeqxapovdnlqg ; /usr/bin/python3'
Sep 30 17:08:09 np0005463147.novalocal sudo[18376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:08:09 np0005463147.novalocal python3[18384]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLdfvsAXcCq7Fq0F3Kitiz5SuHbqf8SH0K2JwdxHA7U8WZUl9eCqhG5JROBTXolfOebEF70oH6VJjv6QTxswAaY= zuul@np0005463146.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 17:08:09 np0005463147.novalocal sudo[18376]: pam_unix(sudo:session): session closed for user root
Sep 30 17:08:09 np0005463147.novalocal sudo[18613]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nassidgdrasodshmywwpyskgukzvxypp ; /usr/bin/python3'
Sep 30 17:08:09 np0005463147.novalocal sudo[18613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:08:09 np0005463147.novalocal python3[18624]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 17:08:09 np0005463147.novalocal sudo[18613]: pam_unix(sudo:session): session closed for user root
Sep 30 17:08:10 np0005463147.novalocal sudo[18908]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oncgpxziefleguaokdpditejphdlzwuq ; /usr/bin/python3'
Sep 30 17:08:10 np0005463147.novalocal sudo[18908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:08:10 np0005463147.novalocal python3[18918]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759252089.6190555-151-25768736108758/source _original_basename=tmplkra0vts follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:08:10 np0005463147.novalocal sudo[18908]: pam_unix(sudo:session): session closed for user root
Sep 30 17:08:11 np0005463147.novalocal sudo[19186]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cazvzwxcyobqrzzmidszrgtkofxchxgq ; /usr/bin/python3'
Sep 30 17:08:11 np0005463147.novalocal sudo[19186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:08:11 np0005463147.novalocal python3[19192]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Sep 30 17:08:11 np0005463147.novalocal systemd[1]: Starting Hostname Service...
Sep 30 17:08:11 np0005463147.novalocal systemd[1]: Started Hostname Service.
Sep 30 17:08:11 np0005463147.novalocal systemd-hostnamed[19295]: Changed pretty hostname to 'compute-0'
Sep 30 17:08:11 compute-0 systemd-hostnamed[19295]: Hostname set to <compute-0> (static)
Sep 30 17:08:11 compute-0 NetworkManager[3947]: <info>  [1759252091.4964] hostname: static hostname changed from "np0005463147.novalocal" to "compute-0"
Sep 30 17:08:11 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Sep 30 17:08:11 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Sep 30 17:08:11 compute-0 sudo[19186]: pam_unix(sudo:session): session closed for user root
Sep 30 17:08:12 compute-0 sshd-session[17616]: Connection closed by 38.102.83.114 port 42400
Sep 30 17:08:12 compute-0 sshd-session[17574]: pam_unix(sshd:session): session closed for user zuul
Sep 30 17:08:12 compute-0 systemd-logind[811]: Session 6 logged out. Waiting for processes to exit.
Sep 30 17:08:12 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Sep 30 17:08:12 compute-0 systemd[1]: session-6.scope: Consumed 2.484s CPU time.
Sep 30 17:08:12 compute-0 systemd-logind[811]: Removed session 6.
Sep 30 17:08:21 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Sep 30 17:08:37 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Sep 30 17:08:37 compute-0 systemd[1]: Finished man-db-cache-update.service.
Sep 30 17:08:37 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1min 1.704s CPU time.
Sep 30 17:08:37 compute-0 systemd[1]: run-r583537e84c8646b4ba0b82aebe862cab.service: Deactivated successfully.
Sep 30 17:08:41 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 30 17:10:24 compute-0 sshd-session[26549]: Connection closed by 14.103.235.143 port 35034 [preauth]
Sep 30 17:10:38 compute-0 sshd-session[26551]: Invalid user teste from 80.94.95.115 port 37856
Sep 30 17:10:38 compute-0 sshd-session[26551]: Connection closed by invalid user teste 80.94.95.115 port 37856 [preauth]
Sep 30 17:12:25 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Sep 30 17:12:25 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Sep 30 17:12:25 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Sep 30 17:12:25 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Sep 30 17:12:30 compute-0 sshd-session[26557]: Accepted publickey for zuul from 38.102.83.36 port 53960 ssh2: RSA SHA256:DFNImqpR4L6Frzap1o3GpslEX6xER8N06/GWUjaeSng
Sep 30 17:12:30 compute-0 systemd-logind[811]: New session 7 of user zuul.
Sep 30 17:12:30 compute-0 systemd[1]: Started Session 7 of User zuul.
Sep 30 17:12:30 compute-0 sshd-session[26557]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 17:12:30 compute-0 python3[26633]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:12:32 compute-0 sudo[26747]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvxbfajhwdtsmixhofhyjgelehgoaqew ; /usr/bin/python3'
Sep 30 17:12:32 compute-0 sudo[26747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:12:32 compute-0 python3[26749]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 17:12:32 compute-0 sudo[26747]: pam_unix(sudo:session): session closed for user root
Sep 30 17:12:32 compute-0 sudo[26820]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoluqixabzosumejllvjhonlamrhwdmc ; /usr/bin/python3'
Sep 30 17:12:32 compute-0 sudo[26820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:12:32 compute-0 python3[26822]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759252351.96422-30442-251288795335558/source mode=0755 _original_basename=delorean.repo follow=False checksum=6543f0d49313391d10c7b4b619155c98ddf76b9b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:12:32 compute-0 sudo[26820]: pam_unix(sudo:session): session closed for user root
Sep 30 17:12:32 compute-0 sudo[26846]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyilcnklefqjmntqfmhvdcczasaejvor ; /usr/bin/python3'
Sep 30 17:12:32 compute-0 sudo[26846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:12:32 compute-0 python3[26848]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-master-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 17:12:32 compute-0 sudo[26846]: pam_unix(sudo:session): session closed for user root
Sep 30 17:12:33 compute-0 sudo[26919]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmodqfuxjvyjswfqiwbcmkweelctgpit ; /usr/bin/python3'
Sep 30 17:12:33 compute-0 sudo[26919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:12:33 compute-0 python3[26921]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759252351.96422-30442-251288795335558/source mode=0755 _original_basename=delorean-master-testing.repo follow=False checksum=c22157e85d05af7ffbafa054f80958446d397a41 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:12:33 compute-0 sudo[26919]: pam_unix(sudo:session): session closed for user root
Sep 30 17:12:33 compute-0 sudo[26945]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjchwmvwucfbfqfqfnmullrozmbhlfan ; /usr/bin/python3'
Sep 30 17:12:33 compute-0 sudo[26945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:12:33 compute-0 python3[26947]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 17:12:33 compute-0 sudo[26945]: pam_unix(sudo:session): session closed for user root
Sep 30 17:12:33 compute-0 sudo[27018]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scqkkcxzqwaurbalupizvgnupfjlzmzd ; /usr/bin/python3'
Sep 30 17:12:33 compute-0 sudo[27018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:12:33 compute-0 python3[27020]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759252351.96422-30442-251288795335558/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:12:33 compute-0 sudo[27018]: pam_unix(sudo:session): session closed for user root
Sep 30 17:12:33 compute-0 sudo[27044]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unbaulhrnfzlbmoxzvvmqnrvvuqntbbc ; /usr/bin/python3'
Sep 30 17:12:33 compute-0 sudo[27044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:12:34 compute-0 python3[27046]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 17:12:34 compute-0 sudo[27044]: pam_unix(sudo:session): session closed for user root
Sep 30 17:12:34 compute-0 sudo[27117]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trjvfafmgiuuppbomfpkhsoaemgjjwuo ; /usr/bin/python3'
Sep 30 17:12:34 compute-0 sudo[27117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:12:34 compute-0 python3[27119]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759252351.96422-30442-251288795335558/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:12:34 compute-0 sudo[27117]: pam_unix(sudo:session): session closed for user root
Sep 30 17:12:34 compute-0 sudo[27143]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrwuwkaixfkhiwvaxpyleqnrcdtwzlkv ; /usr/bin/python3'
Sep 30 17:12:34 compute-0 sudo[27143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:12:34 compute-0 python3[27145]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 17:12:34 compute-0 sudo[27143]: pam_unix(sudo:session): session closed for user root
Sep 30 17:12:34 compute-0 sudo[27216]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtmfjyotoxdwboscsmncsonvvdwojwbp ; /usr/bin/python3'
Sep 30 17:12:34 compute-0 sudo[27216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:12:35 compute-0 python3[27218]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759252351.96422-30442-251288795335558/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:12:35 compute-0 sudo[27216]: pam_unix(sudo:session): session closed for user root
Sep 30 17:12:35 compute-0 sudo[27242]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmvrvzzxjtrnuwnpvjtburizunprzujq ; /usr/bin/python3'
Sep 30 17:12:35 compute-0 sudo[27242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:12:35 compute-0 python3[27244]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 17:12:35 compute-0 sudo[27242]: pam_unix(sudo:session): session closed for user root
Sep 30 17:12:35 compute-0 sudo[27315]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pixyhmccighosefhdfvjmjamwtclirjz ; /usr/bin/python3'
Sep 30 17:12:35 compute-0 sudo[27315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:12:35 compute-0 python3[27317]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759252351.96422-30442-251288795335558/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:12:35 compute-0 sudo[27315]: pam_unix(sudo:session): session closed for user root
Sep 30 17:12:35 compute-0 sudo[27341]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvarswiiuyefyninliksomyjxpbathaa ; /usr/bin/python3'
Sep 30 17:12:35 compute-0 sudo[27341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:12:35 compute-0 python3[27343]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 17:12:35 compute-0 sudo[27341]: pam_unix(sudo:session): session closed for user root
Sep 30 17:12:36 compute-0 sudo[27414]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hinzmkncqkmghtjjmgxqvcivugruchfw ; /usr/bin/python3'
Sep 30 17:12:36 compute-0 sudo[27414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:12:36 compute-0 python3[27416]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759252351.96422-30442-251288795335558/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=039facbb479fa58856d4f56208cb1f104e804408 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:12:36 compute-0 sudo[27414]: pam_unix(sudo:session): session closed for user root
Sep 30 17:12:36 compute-0 sudo[27440]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilsommlldqzjwdkrejngrohqdeiswjcv ; /usr/bin/python3'
Sep 30 17:12:36 compute-0 sudo[27440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:12:36 compute-0 python3[27442]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/gating.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 17:12:36 compute-0 sudo[27440]: pam_unix(sudo:session): session closed for user root
Sep 30 17:12:36 compute-0 sudo[27513]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovzvpoaczvxpvlkrpclzjpszisjqocbv ; /usr/bin/python3'
Sep 30 17:12:36 compute-0 sudo[27513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:12:36 compute-0 python3[27515]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759252351.96422-30442-251288795335558/source mode=0755 _original_basename=gating.repo follow=False checksum=e31dc74caa36ffb4db145632be353eaf0e546b82 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:12:36 compute-0 sudo[27513]: pam_unix(sudo:session): session closed for user root
Sep 30 17:12:39 compute-0 sshd-session[27540]: Connection closed by 192.168.122.11 port 42984 [preauth]
Sep 30 17:12:39 compute-0 sshd-session[27541]: Connection closed by 192.168.122.11 port 42988 [preauth]
Sep 30 17:12:39 compute-0 sshd-session[27542]: Unable to negotiate with 192.168.122.11 port 42990: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Sep 30 17:12:39 compute-0 sshd-session[27543]: Unable to negotiate with 192.168.122.11 port 42998: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Sep 30 17:12:39 compute-0 sshd-session[27544]: Unable to negotiate with 192.168.122.11 port 43014: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Sep 30 17:12:45 compute-0 PackageKit[6228]: daemon quit
Sep 30 17:12:45 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Sep 30 17:13:59 compute-0 python3[27577]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:14:31 compute-0 sshd-session[27579]: Received disconnect from 106.36.198.78 port 48358:11:  [preauth]
Sep 30 17:14:31 compute-0 sshd-session[27579]: Disconnected from authenticating user root 106.36.198.78 port 48358 [preauth]
Sep 30 17:14:46 compute-0 sshd[1010]: Timeout before authentication for connection from 14.103.233.27 to 38.102.83.202, pid = 27550
Sep 30 17:15:43 compute-0 sshd[1010]: drop connection #0 from [14.103.233.27]:48520 on [38.102.83.202]:22 penalty: exceeded LoginGraceTime
Sep 30 17:18:30 compute-0 sshd[1010]: Timeout before authentication for connection from 14.103.233.27 to 38.102.83.202, pid = 27582
Sep 30 17:18:59 compute-0 sshd-session[26560]: Received disconnect from 38.102.83.36 port 53960:11: disconnected by user
Sep 30 17:18:59 compute-0 sshd-session[26560]: Disconnected from user zuul 38.102.83.36 port 53960
Sep 30 17:18:59 compute-0 sshd-session[26557]: pam_unix(sshd:session): session closed for user zuul
Sep 30 17:18:59 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Sep 30 17:18:59 compute-0 systemd[1]: session-7.scope: Consumed 5.119s CPU time.
Sep 30 17:18:59 compute-0 systemd-logind[811]: Session 7 logged out. Waiting for processes to exit.
Sep 30 17:18:59 compute-0 systemd-logind[811]: Removed session 7.
Sep 30 17:20:02 compute-0 sshd-session[27586]: Invalid user test from 185.156.73.233 port 38378
Sep 30 17:20:03 compute-0 sshd-session[27586]: Connection closed by invalid user test 185.156.73.233 port 38378 [preauth]
Sep 30 17:22:07 compute-0 sshd[1010]: Timeout before authentication for connection from 14.103.233.27 to 38.102.83.202, pid = 27588
Sep 30 17:22:55 compute-0 sshd[1010]: drop connection #0 from [14.103.233.27]:34916 on [38.102.83.202]:22 penalty: exceeded LoginGraceTime
Sep 30 17:25:44 compute-0 sshd[1010]: Timeout before authentication for connection from 14.103.233.27 to 38.102.83.202, pid = 27594
Sep 30 17:25:57 compute-0 sshd[1010]: drop connection #0 from [14.103.233.27]:37232 on [38.102.83.202]:22 penalty: exceeded LoginGraceTime
Sep 30 17:27:00 compute-0 sshd-session[27598]: Accepted publickey for zuul from 192.168.122.30 port 53832 ssh2: ECDSA SHA256:COqAmbdBUMx+UyDa5XAop4Bi6qvwvYhJQ16rCseuHkQ
Sep 30 17:27:00 compute-0 systemd-logind[811]: New session 8 of user zuul.
Sep 30 17:27:00 compute-0 systemd[1]: Started Session 8 of User zuul.
Sep 30 17:27:00 compute-0 sshd-session[27598]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 17:27:01 compute-0 python3.9[27751]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:27:02 compute-0 sudo[27930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhryhwkcrdqadczsnmmolkhmpzkwwdmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253221.8397176-44-104248079445707/AnsiballZ_command.py'
Sep 30 17:27:02 compute-0 sudo[27930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:27:02 compute-0 python3.9[27932]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:27:10 compute-0 sudo[27930]: pam_unix(sudo:session): session closed for user root
Sep 30 17:27:10 compute-0 sshd-session[27601]: Connection closed by 192.168.122.30 port 53832
Sep 30 17:27:10 compute-0 sshd-session[27598]: pam_unix(sshd:session): session closed for user zuul
Sep 30 17:27:10 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Sep 30 17:27:10 compute-0 systemd[1]: session-8.scope: Consumed 8.171s CPU time.
Sep 30 17:27:10 compute-0 systemd-logind[811]: Session 8 logged out. Waiting for processes to exit.
Sep 30 17:27:10 compute-0 systemd-logind[811]: Removed session 8.
Sep 30 17:27:26 compute-0 sshd-session[27989]: Accepted publickey for zuul from 192.168.122.30 port 43088 ssh2: ECDSA SHA256:COqAmbdBUMx+UyDa5XAop4Bi6qvwvYhJQ16rCseuHkQ
Sep 30 17:27:26 compute-0 systemd-logind[811]: New session 9 of user zuul.
Sep 30 17:27:26 compute-0 systemd[1]: Started Session 9 of User zuul.
Sep 30 17:27:26 compute-0 sshd-session[27989]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 17:27:27 compute-0 python3.9[28142]: ansible-ansible.legacy.ping Invoked with data=pong
Sep 30 17:27:28 compute-0 python3.9[28316]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:27:29 compute-0 sudo[28466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqmxksznjxzepoywtkuhipqhgbounjrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253249.1852338-69-95401665252329/AnsiballZ_command.py'
Sep 30 17:27:29 compute-0 sudo[28466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:27:29 compute-0 python3.9[28468]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:27:29 compute-0 sudo[28466]: pam_unix(sudo:session): session closed for user root
Sep 30 17:27:30 compute-0 sudo[28619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymqmdoyzupuhxwipzibovmfxnrvqoeks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253250.118732-93-83672583063266/AnsiballZ_stat.py'
Sep 30 17:27:30 compute-0 sudo[28619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:27:30 compute-0 python3.9[28621]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:27:30 compute-0 sudo[28619]: pam_unix(sudo:session): session closed for user root
Sep 30 17:27:31 compute-0 sudo[28771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mujtifqubchufqaaxctmfrirfhumkofb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253250.9300916-109-257333458903250/AnsiballZ_file.py'
Sep 30 17:27:31 compute-0 sudo[28771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:27:31 compute-0 python3.9[28773]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:27:31 compute-0 sudo[28771]: pam_unix(sudo:session): session closed for user root
Sep 30 17:27:32 compute-0 sudo[28923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzaixwigrwenngaixtfdyczjmoxjvhxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253251.7783651-125-235754479113474/AnsiballZ_stat.py'
Sep 30 17:27:32 compute-0 sudo[28923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:27:32 compute-0 python3.9[28925]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:27:32 compute-0 sudo[28923]: pam_unix(sudo:session): session closed for user root
Sep 30 17:27:32 compute-0 sudo[29046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyekysuzxzathsfrljhjtxqzfetnyabg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253251.7783651-125-235754479113474/AnsiballZ_copy.py'
Sep 30 17:27:32 compute-0 sudo[29046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:27:33 compute-0 python3.9[29048]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759253251.7783651-125-235754479113474/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:27:33 compute-0 sudo[29046]: pam_unix(sudo:session): session closed for user root
Sep 30 17:27:33 compute-0 sudo[29198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlpwfabqkzmymnhasjvzfbvylboeoynm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253253.2732975-155-171848301926510/AnsiballZ_setup.py'
Sep 30 17:27:33 compute-0 sudo[29198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:27:33 compute-0 python3.9[29200]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:27:34 compute-0 sudo[29198]: pam_unix(sudo:session): session closed for user root
Sep 30 17:27:34 compute-0 sudo[29354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvedebjjflnywstjekasdpoojdrocjgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253254.1988192-171-39191230674504/AnsiballZ_file.py'
Sep 30 17:27:34 compute-0 sudo[29354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:27:34 compute-0 python3.9[29356]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:27:34 compute-0 sudo[29354]: pam_unix(sudo:session): session closed for user root
Sep 30 17:27:35 compute-0 python3.9[29506]: ansible-ansible.builtin.service_facts Invoked
Sep 30 17:27:40 compute-0 python3.9[29761]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:27:41 compute-0 python3.9[29911]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:27:42 compute-0 python3.9[30065]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:27:42 compute-0 sudo[30221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bseasyghlpukzzldmldwxynvrrjwxulj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253262.6910048-267-263548873832367/AnsiballZ_setup.py'
Sep 30 17:27:42 compute-0 sudo[30221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:27:43 compute-0 python3.9[30223]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 17:27:43 compute-0 sudo[30221]: pam_unix(sudo:session): session closed for user root
Sep 30 17:27:43 compute-0 sudo[30305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqewtmdhqsckfgkjokibzsbjbmrkfdpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253262.6910048-267-263548873832367/AnsiballZ_dnf.py'
Sep 30 17:27:43 compute-0 sudo[30305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:27:44 compute-0 python3.9[30307]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 17:28:27 compute-0 systemd[1]: Reloading.
Sep 30 17:28:27 compute-0 systemd-rc-local-generator[30506]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:28:27 compute-0 systemd[1]: Starting dnf makecache...
Sep 30 17:28:27 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Sep 30 17:28:28 compute-0 dnf[30515]: Repository 'gating-repo' is missing name in configuration, using id.
Sep 30 17:28:28 compute-0 dnf[30515]: Failed determining last makecache time.
Sep 30 17:28:28 compute-0 dnf[30515]: delorean-openstack-barbican-42b4c41831408a8e323 144 kB/s | 3.0 kB     00:00
Sep 30 17:28:28 compute-0 dnf[30515]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 152 kB/s | 3.0 kB     00:00
Sep 30 17:28:28 compute-0 dnf[30515]: delorean-openstack-cinder-1c00d6490d88e436f26ef 161 kB/s | 3.0 kB     00:00
Sep 30 17:28:28 compute-0 systemd[1]: Reloading.
Sep 30 17:28:28 compute-0 dnf[30515]: delorean-python-stevedore-c4acc5639fd2329372142 178 kB/s | 3.0 kB     00:00
Sep 30 17:28:28 compute-0 dnf[30515]: delorean-python-cloudkitty-tests-tempest-3961dc 167 kB/s | 3.0 kB     00:00
Sep 30 17:28:28 compute-0 systemd-rc-local-generator[30550]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:28:28 compute-0 dnf[30515]: delorean-os-net-config-a7aafa88064e25852eddee77 160 kB/s | 3.0 kB     00:00
Sep 30 17:28:28 compute-0 dnf[30515]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 189 kB/s | 3.0 kB     00:00
Sep 30 17:28:28 compute-0 dnf[30515]: delorean-python-designate-tests-tempest-347fdbc 180 kB/s | 3.0 kB     00:00
Sep 30 17:28:28 compute-0 dnf[30515]: delorean-openstack-glance-1fd12c29b339f30fe823e 176 kB/s | 3.0 kB     00:00
Sep 30 17:28:28 compute-0 dnf[30515]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 168 kB/s | 3.0 kB     00:00
Sep 30 17:28:28 compute-0 dnf[30515]: delorean-openstack-manila-3c01b7181572c95dac462 177 kB/s | 3.0 kB     00:00
Sep 30 17:28:28 compute-0 dnf[30515]: delorean-python-whitebox-neutron-tests-tempest- 167 kB/s | 3.0 kB     00:00
Sep 30 17:28:28 compute-0 dnf[30515]: delorean-openstack-octavia-ba397f07a7331190208c 172 kB/s | 3.0 kB     00:00
Sep 30 17:28:28 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Sep 30 17:28:28 compute-0 dnf[30515]: delorean-openstack-watcher-c014f81a8647287f6dcc 170 kB/s | 3.0 kB     00:00
Sep 30 17:28:28 compute-0 dnf[30515]: delorean-python-tcib-c895740e59940c0bad2e206b0f 157 kB/s | 3.0 kB     00:00
Sep 30 17:28:28 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Sep 30 17:28:28 compute-0 dnf[30515]: delorean-puppet-ceph-b0c245ccde541a63fde0564366 168 kB/s | 3.0 kB     00:00
Sep 30 17:28:28 compute-0 dnf[30515]: delorean-openstack-swift-dc98a8463506ac520c469a 170 kB/s | 3.0 kB     00:00
Sep 30 17:28:28 compute-0 systemd[1]: Reloading.
Sep 30 17:28:28 compute-0 dnf[30515]: delorean-python-tempestconf-8515371b7cceebd4282 183 kB/s | 3.0 kB     00:00
Sep 30 17:28:28 compute-0 dnf[30515]: delorean-openstack-heat-ui-013accbfd179753bc3f0 176 kB/s | 3.0 kB     00:00
Sep 30 17:28:28 compute-0 dnf[30515]: gating-repo                                     420 kB/s | 1.5 kB     00:00
Sep 30 17:28:28 compute-0 systemd-rc-local-generator[30603]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:28:28 compute-0 dnf[30515]: CentOS Stream 9 - BaseOS                         60 kB/s | 7.0 kB     00:00
Sep 30 17:28:28 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Sep 30 17:28:28 compute-0 dnf[30515]: CentOS Stream 9 - AppStream                      85 kB/s | 7.1 kB     00:00
Sep 30 17:28:28 compute-0 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Sep 30 17:28:28 compute-0 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Sep 30 17:28:28 compute-0 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Sep 30 17:28:28 compute-0 dnf[30515]: CentOS Stream 9 - CRB                            79 kB/s | 6.9 kB     00:00
Sep 30 17:28:29 compute-0 dnf[30515]: CentOS Stream 9 - Extras packages                79 kB/s | 8.0 kB     00:00
Sep 30 17:28:29 compute-0 dnf[30515]: dlrn-antelope-testing                           178 kB/s | 3.0 kB     00:00
Sep 30 17:28:29 compute-0 dnf[30515]: dlrn-antelope-build-deps                        180 kB/s | 3.0 kB     00:00
Sep 30 17:28:29 compute-0 dnf[30515]: centos9-rabbitmq                                113 kB/s | 3.0 kB     00:00
Sep 30 17:28:29 compute-0 dnf[30515]: centos9-storage                                 108 kB/s | 3.0 kB     00:00
Sep 30 17:28:29 compute-0 dnf[30515]: centos9-opstools                                109 kB/s | 3.0 kB     00:00
Sep 30 17:28:29 compute-0 dnf[30515]: NFV SIG OpenvSwitch                              39 kB/s | 3.0 kB     00:00
Sep 30 17:28:29 compute-0 dnf[30515]: repo-setup-centos-appstream                     177 kB/s | 4.4 kB     00:00
Sep 30 17:28:29 compute-0 dnf[30515]: repo-setup-centos-baseos                        161 kB/s | 3.9 kB     00:00
Sep 30 17:28:29 compute-0 dnf[30515]: repo-setup-centos-highavailability              155 kB/s | 3.9 kB     00:00
Sep 30 17:28:29 compute-0 dnf[30515]: repo-setup-centos-powertools                    185 kB/s | 4.3 kB     00:00
Sep 30 17:28:29 compute-0 dnf[30515]: Extra Packages for Enterprise Linux 9 - x86_64  285 kB/s |  34 kB     00:00
Sep 30 17:28:30 compute-0 dnf[30515]: Metadata cache created.
Sep 30 17:28:30 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Sep 30 17:28:30 compute-0 systemd[1]: Finished dnf makecache.
Sep 30 17:28:30 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.837s CPU time.
Sep 30 17:29:35 compute-0 kernel: SELinux:  Converting 2714 SID table entries...
Sep 30 17:29:35 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Sep 30 17:29:35 compute-0 kernel: SELinux:  policy capability open_perms=1
Sep 30 17:29:35 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Sep 30 17:29:35 compute-0 kernel: SELinux:  policy capability always_check_network=0
Sep 30 17:29:35 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Sep 30 17:29:35 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Sep 30 17:29:35 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Sep 30 17:29:35 compute-0 dbus-broker-launch[781]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Sep 30 17:29:35 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Sep 30 17:29:35 compute-0 systemd[1]: Starting man-db-cache-update.service...
Sep 30 17:29:35 compute-0 systemd[1]: Reloading.
Sep 30 17:29:35 compute-0 systemd-rc-local-generator[30937]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:29:36 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Sep 30 17:29:36 compute-0 systemd[1]: Starting PackageKit Daemon...
Sep 30 17:29:36 compute-0 PackageKit[31204]: daemon start
Sep 30 17:29:36 compute-0 systemd[1]: Started PackageKit Daemon.
Sep 30 17:29:36 compute-0 sudo[30305]: pam_unix(sudo:session): session closed for user root
Sep 30 17:29:37 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Sep 30 17:29:37 compute-0 systemd[1]: Finished man-db-cache-update.service.
Sep 30 17:29:37 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.083s CPU time.
Sep 30 17:29:37 compute-0 systemd[1]: run-r0f8e93bb3e214984af98cf2abdcdb402.service: Deactivated successfully.
Sep 30 17:29:37 compute-0 sudo[31860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rinkknufwroebjicouyyaflnencwifbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253376.8460748-291-144235160932638/AnsiballZ_command.py'
Sep 30 17:29:37 compute-0 sudo[31860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:29:37 compute-0 python3.9[31862]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:29:38 compute-0 sudo[31860]: pam_unix(sudo:session): session closed for user root
Sep 30 17:29:39 compute-0 sudo[32141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvuyahdgdnzpdlzmwqvjkqeywhksiadi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253378.5625684-307-57303331996217/AnsiballZ_selinux.py'
Sep 30 17:29:39 compute-0 sudo[32141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:29:39 compute-0 python3.9[32143]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Sep 30 17:29:39 compute-0 sudo[32141]: pam_unix(sudo:session): session closed for user root
Sep 30 17:29:40 compute-0 sudo[32293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smykrlhbbusaqvbvhhpchqeqmvhelbsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253379.899519-329-202102275988542/AnsiballZ_command.py'
Sep 30 17:29:40 compute-0 sudo[32293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:29:40 compute-0 python3.9[32295]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Sep 30 17:29:41 compute-0 sudo[32293]: pam_unix(sudo:session): session closed for user root
Sep 30 17:29:41 compute-0 sudo[32446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnutcnqyjnzqmbtqjsqezlfpbuhlrsda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253381.5709648-345-186092993331497/AnsiballZ_file.py'
Sep 30 17:29:41 compute-0 sudo[32446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:29:43 compute-0 python3.9[32448]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:29:43 compute-0 sudo[32446]: pam_unix(sudo:session): session closed for user root
Sep 30 17:29:44 compute-0 sudo[32599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qebqfaswvbnndiimrhlgtgiezgkeubef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253384.1280391-361-108743148701384/AnsiballZ_mount.py'
Sep 30 17:29:44 compute-0 sudo[32599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:29:44 compute-0 python3.9[32601]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Sep 30 17:29:44 compute-0 sudo[32599]: pam_unix(sudo:session): session closed for user root
Sep 30 17:29:45 compute-0 sudo[32751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zntzazwzxlzoqofdxofydnasxrmajtxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253385.5299702-417-102628480995820/AnsiballZ_file.py'
Sep 30 17:29:45 compute-0 sudo[32751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:29:45 compute-0 python3.9[32753]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:29:46 compute-0 sudo[32751]: pam_unix(sudo:session): session closed for user root
Sep 30 17:29:48 compute-0 sudo[32903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sidernzctmhylpohxikbptfmramsquuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253387.9047754-433-119522251189708/AnsiballZ_stat.py'
Sep 30 17:29:48 compute-0 sudo[32903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:29:50 compute-0 python3.9[32905]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:29:50 compute-0 sudo[32903]: pam_unix(sudo:session): session closed for user root
Sep 30 17:29:50 compute-0 sudo[33026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpzmtwpsodgivbkrbctalxbhktsvarem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253387.9047754-433-119522251189708/AnsiballZ_copy.py'
Sep 30 17:29:50 compute-0 sudo[33026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:29:51 compute-0 python3.9[33028]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759253387.9047754-433-119522251189708/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=77fa443739ff2ff1b18352a001fa075b3190ad3c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:29:51 compute-0 sudo[33026]: pam_unix(sudo:session): session closed for user root
Sep 30 17:29:52 compute-0 sudo[33178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrbmucmwuqaytuvwechqfmqwldemqmjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253391.9279377-487-174468919763484/AnsiballZ_getent.py'
Sep 30 17:29:52 compute-0 sudo[33178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:29:52 compute-0 python3.9[33180]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Sep 30 17:29:52 compute-0 sudo[33178]: pam_unix(sudo:session): session closed for user root
Sep 30 17:29:53 compute-0 sudo[33331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufiduraofjjumsqksbodfwmlwflimqdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253392.9768026-503-111382775238441/AnsiballZ_group.py'
Sep 30 17:29:53 compute-0 sudo[33331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:29:53 compute-0 python3.9[33333]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Sep 30 17:29:53 compute-0 groupadd[33334]: group added to /etc/group: name=qemu, GID=107
Sep 30 17:29:53 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Sep 30 17:29:53 compute-0 groupadd[33334]: group added to /etc/gshadow: name=qemu
Sep 30 17:29:53 compute-0 groupadd[33334]: new group: name=qemu, GID=107
Sep 30 17:29:53 compute-0 sudo[33331]: pam_unix(sudo:session): session closed for user root
Sep 30 17:29:54 compute-0 sudo[33490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwnfyzxrfmtigertnewutkqbyzcgcaca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253393.9737318-519-138698246680875/AnsiballZ_user.py'
Sep 30 17:29:54 compute-0 sudo[33490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:29:54 compute-0 python3.9[33492]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Sep 30 17:29:54 compute-0 useradd[33494]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Sep 30 17:29:54 compute-0 sudo[33490]: pam_unix(sudo:session): session closed for user root
Sep 30 17:29:55 compute-0 sudo[33650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkosktljoheevsghidwqdhygecwrkmsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253395.0740042-535-156460933047895/AnsiballZ_getent.py'
Sep 30 17:29:55 compute-0 sudo[33650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:29:55 compute-0 python3.9[33652]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Sep 30 17:29:55 compute-0 sudo[33650]: pam_unix(sudo:session): session closed for user root
Sep 30 17:29:55 compute-0 sudo[33803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kltkessuxfgqgxqfqoeycfyqyrfqyxqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253395.7134829-551-88177811856386/AnsiballZ_group.py'
Sep 30 17:29:55 compute-0 sudo[33803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:29:56 compute-0 python3.9[33805]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Sep 30 17:29:56 compute-0 groupadd[33806]: group added to /etc/group: name=hugetlbfs, GID=42477
Sep 30 17:29:56 compute-0 groupadd[33806]: group added to /etc/gshadow: name=hugetlbfs
Sep 30 17:29:56 compute-0 groupadd[33806]: new group: name=hugetlbfs, GID=42477
Sep 30 17:29:56 compute-0 sudo[33803]: pam_unix(sudo:session): session closed for user root
Sep 30 17:29:56 compute-0 sudo[33961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjffojghtydkqnvylpniaoyubimsztsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253396.5252206-569-214767694730717/AnsiballZ_file.py'
Sep 30 17:29:56 compute-0 sudo[33961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:29:56 compute-0 python3.9[33963]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Sep 30 17:29:56 compute-0 sudo[33961]: pam_unix(sudo:session): session closed for user root
Sep 30 17:29:57 compute-0 sudo[34113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyyeztffqivnwwygpeycuqbvfamenprm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253397.3903658-591-201721842776369/AnsiballZ_dnf.py'
Sep 30 17:29:57 compute-0 sudo[34113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:29:57 compute-0 python3.9[34115]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 17:29:59 compute-0 sudo[34113]: pam_unix(sudo:session): session closed for user root
Sep 30 17:30:00 compute-0 sudo[34266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtcerzqktawkssiapcpfsshblikoazey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253399.8570511-607-53207921730595/AnsiballZ_file.py'
Sep 30 17:30:00 compute-0 sudo[34266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:30:00 compute-0 python3.9[34268]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:30:00 compute-0 sudo[34266]: pam_unix(sudo:session): session closed for user root
Sep 30 17:30:00 compute-0 sudo[34418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqpcftpzodryjouoqhbflyykxatpibmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253400.4935825-623-219682398056290/AnsiballZ_stat.py'
Sep 30 17:30:00 compute-0 sudo[34418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:30:01 compute-0 python3.9[34420]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:30:01 compute-0 sudo[34418]: pam_unix(sudo:session): session closed for user root
Sep 30 17:30:01 compute-0 sudo[34541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqcksszjbugkzrdeygtsvlrnzklcfzbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253400.4935825-623-219682398056290/AnsiballZ_copy.py'
Sep 30 17:30:01 compute-0 sudo[34541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:30:01 compute-0 python3.9[34543]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759253400.4935825-623-219682398056290/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:30:01 compute-0 sudo[34541]: pam_unix(sudo:session): session closed for user root
Sep 30 17:30:02 compute-0 sudo[34693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuyivwyropafnaxqegjwowlzyfgvpsky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253401.792428-653-207669164489679/AnsiballZ_systemd.py'
Sep 30 17:30:02 compute-0 sudo[34693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:30:02 compute-0 python3.9[34695]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 17:30:02 compute-0 systemd[1]: Starting Load Kernel Modules...
Sep 30 17:30:02 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 30 17:30:02 compute-0 kernel: Bridge firewalling registered
Sep 30 17:30:02 compute-0 systemd-modules-load[34699]: Inserted module 'br_netfilter'
Sep 30 17:30:02 compute-0 systemd[1]: Finished Load Kernel Modules.
Sep 30 17:30:02 compute-0 sudo[34693]: pam_unix(sudo:session): session closed for user root
Sep 30 17:30:03 compute-0 sudo[34853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gefxbffrewktowhaojxupmauidkyqayc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253403.0197344-669-161394860304302/AnsiballZ_stat.py'
Sep 30 17:30:03 compute-0 sudo[34853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:30:03 compute-0 python3.9[34855]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:30:03 compute-0 sudo[34853]: pam_unix(sudo:session): session closed for user root
Sep 30 17:30:03 compute-0 sudo[34976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlifbkmogmtfewscbzyqmvyfmvlwztof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253403.0197344-669-161394860304302/AnsiballZ_copy.py'
Sep 30 17:30:03 compute-0 sudo[34976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:30:03 compute-0 python3.9[34978]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759253403.0197344-669-161394860304302/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:30:04 compute-0 sudo[34976]: pam_unix(sudo:session): session closed for user root
Sep 30 17:30:04 compute-0 sudo[35128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avgderfvpnxytnzlmdppzbubpjbdlfqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253404.5010886-705-60737073043667/AnsiballZ_dnf.py'
Sep 30 17:30:04 compute-0 sudo[35128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:30:05 compute-0 python3.9[35130]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 17:30:09 compute-0 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Sep 30 17:30:09 compute-0 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Sep 30 17:30:09 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Sep 30 17:30:09 compute-0 systemd[1]: Starting man-db-cache-update.service...
Sep 30 17:30:09 compute-0 systemd[1]: Reloading.
Sep 30 17:30:09 compute-0 systemd-rc-local-generator[35194]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:30:09 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Sep 30 17:30:10 compute-0 sudo[35128]: pam_unix(sudo:session): session closed for user root
Sep 30 17:30:11 compute-0 python3.9[36422]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:30:12 compute-0 python3.9[37654]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Sep 30 17:30:12 compute-0 python3.9[38593]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:30:13 compute-0 sudo[39334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdssgfmnwfuqnmanyqotkqkvmcgscfuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253413.1152208-783-99073276225899/AnsiballZ_command.py'
Sep 30 17:30:13 compute-0 sudo[39334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:30:13 compute-0 python3.9[39336]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:30:13 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Sep 30 17:30:13 compute-0 systemd[1]: Finished man-db-cache-update.service.
Sep 30 17:30:13 compute-0 systemd[1]: man-db-cache-update.service: Consumed 4.165s CPU time.
Sep 30 17:30:13 compute-0 systemd[1]: run-rf9e3d591afed4ec5a28a973dd541897b.service: Deactivated successfully.
Sep 30 17:30:13 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Sep 30 17:30:14 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Sep 30 17:30:14 compute-0 sudo[39334]: pam_unix(sudo:session): session closed for user root
Sep 30 17:30:14 compute-0 sudo[39711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bphvrauiflrvnmtwkwaukprlpjqbeook ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253414.322236-801-88090908575233/AnsiballZ_systemd.py'
Sep 30 17:30:14 compute-0 sudo[39711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:30:14 compute-0 python3.9[39713]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:30:14 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Sep 30 17:30:14 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Sep 30 17:30:14 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Sep 30 17:30:14 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Sep 30 17:30:15 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Sep 30 17:30:15 compute-0 sudo[39711]: pam_unix(sudo:session): session closed for user root
Sep 30 17:30:15 compute-0 python3.9[39874]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Sep 30 17:30:18 compute-0 sudo[40024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnwsduzvlfjrzkojgurpghlrbmhurxck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253418.23048-915-8808578632702/AnsiballZ_systemd.py'
Sep 30 17:30:18 compute-0 sudo[40024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:30:18 compute-0 python3.9[40026]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:30:18 compute-0 systemd[1]: Reloading.
Sep 30 17:30:18 compute-0 systemd-rc-local-generator[40057]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:30:19 compute-0 sudo[40024]: pam_unix(sudo:session): session closed for user root
Sep 30 17:30:19 compute-0 sudo[40213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhebaigiiwitweumiymvpufenhvekszi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253419.2679033-915-227795122364224/AnsiballZ_systemd.py'
Sep 30 17:30:19 compute-0 sudo[40213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:30:19 compute-0 python3.9[40215]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:30:20 compute-0 systemd[1]: Reloading.
Sep 30 17:30:20 compute-0 systemd-rc-local-generator[40246]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:30:20 compute-0 sudo[40213]: pam_unix(sudo:session): session closed for user root
Sep 30 17:30:20 compute-0 sudo[40402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzoznmdgjmgeuzxhsczefmsoqpltziul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253420.545924-947-156715758523751/AnsiballZ_command.py'
Sep 30 17:30:20 compute-0 sudo[40402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:30:21 compute-0 python3.9[40404]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:30:21 compute-0 sudo[40402]: pam_unix(sudo:session): session closed for user root
Sep 30 17:30:21 compute-0 sudo[40555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juaojlozivoilvwjrmivjscmzqtejnqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253421.3291948-963-78684065047618/AnsiballZ_command.py'
Sep 30 17:30:21 compute-0 sudo[40555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:30:21 compute-0 python3.9[40557]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:30:22 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Sep 30 17:30:22 compute-0 sudo[40555]: pam_unix(sudo:session): session closed for user root
Sep 30 17:30:22 compute-0 sudo[40708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dddqvwbkdhxogjoemrglycrhaecspnsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253422.1855018-979-185407192280861/AnsiballZ_command.py'
Sep 30 17:30:22 compute-0 sudo[40708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:30:22 compute-0 python3.9[40710]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:30:23 compute-0 sudo[40708]: pam_unix(sudo:session): session closed for user root
Sep 30 17:30:24 compute-0 sudo[40870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbkrmhsyqmcmzkqektrlmoietezrwpqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253424.3261015-995-64470267091416/AnsiballZ_command.py'
Sep 30 17:30:24 compute-0 sudo[40870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:30:24 compute-0 python3.9[40872]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:30:24 compute-0 sudo[40870]: pam_unix(sudo:session): session closed for user root
Sep 30 17:30:25 compute-0 sudo[41023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsrmvnvvomsfpnqcvvzekbbanhwplgpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253425.0766137-1011-24773906194661/AnsiballZ_systemd.py'
Sep 30 17:30:25 compute-0 sudo[41023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:30:25 compute-0 python3.9[41025]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 17:30:25 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 30 17:30:25 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Sep 30 17:30:25 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Sep 30 17:30:25 compute-0 systemd[1]: Starting Apply Kernel Variables...
Sep 30 17:30:25 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 30 17:30:25 compute-0 systemd[1]: Finished Apply Kernel Variables.
Sep 30 17:30:25 compute-0 sudo[41023]: pam_unix(sudo:session): session closed for user root
Sep 30 17:30:26 compute-0 sshd-session[27992]: Connection closed by 192.168.122.30 port 43088
Sep 30 17:30:26 compute-0 sshd-session[27989]: pam_unix(sshd:session): session closed for user zuul
Sep 30 17:30:26 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Sep 30 17:30:26 compute-0 systemd[1]: session-9.scope: Consumed 2min 10.074s CPU time.
Sep 30 17:30:26 compute-0 systemd-logind[811]: Session 9 logged out. Waiting for processes to exit.
Sep 30 17:30:26 compute-0 systemd-logind[811]: Removed session 9.
Sep 30 17:30:32 compute-0 sshd-session[41055]: Accepted publickey for zuul from 192.168.122.30 port 57576 ssh2: ECDSA SHA256:COqAmbdBUMx+UyDa5XAop4Bi6qvwvYhJQ16rCseuHkQ
Sep 30 17:30:32 compute-0 systemd-logind[811]: New session 10 of user zuul.
Sep 30 17:30:32 compute-0 systemd[1]: Started Session 10 of User zuul.
Sep 30 17:30:32 compute-0 sshd-session[41055]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 17:30:33 compute-0 python3.9[41208]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:30:34 compute-0 sudo[41362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dldumkmxkwnyvfoyenfixwawkrfccunu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253434.2943985-52-257766164581038/AnsiballZ_getent.py'
Sep 30 17:30:34 compute-0 sudo[41362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:30:34 compute-0 python3.9[41364]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Sep 30 17:30:34 compute-0 sudo[41362]: pam_unix(sudo:session): session closed for user root
Sep 30 17:30:35 compute-0 sudo[41515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgfreqzylukpeuvucboyjmaiixhudltp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253435.0901017-68-128295224004203/AnsiballZ_group.py'
Sep 30 17:30:35 compute-0 sudo[41515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:30:35 compute-0 python3.9[41517]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Sep 30 17:30:35 compute-0 groupadd[41518]: group added to /etc/group: name=openvswitch, GID=42476
Sep 30 17:30:35 compute-0 groupadd[41518]: group added to /etc/gshadow: name=openvswitch
Sep 30 17:30:35 compute-0 groupadd[41518]: new group: name=openvswitch, GID=42476
Sep 30 17:30:35 compute-0 sudo[41515]: pam_unix(sudo:session): session closed for user root
Sep 30 17:30:36 compute-0 sudo[41673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iaxerhoymqutgwshzjxvxfpndidvzfwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253435.8931427-84-241652729807732/AnsiballZ_user.py'
Sep 30 17:30:36 compute-0 sudo[41673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:30:36 compute-0 python3.9[41675]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Sep 30 17:30:36 compute-0 useradd[41677]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Sep 30 17:30:36 compute-0 useradd[41677]: add 'openvswitch' to group 'hugetlbfs'
Sep 30 17:30:36 compute-0 useradd[41677]: add 'openvswitch' to shadow group 'hugetlbfs'
Sep 30 17:30:36 compute-0 sudo[41673]: pam_unix(sudo:session): session closed for user root
Sep 30 17:30:37 compute-0 sudo[41833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfzurpgpmhemzfqjegymhbihmhoutjas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253437.175553-104-148573100078732/AnsiballZ_setup.py'
Sep 30 17:30:37 compute-0 sudo[41833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:30:37 compute-0 python3.9[41835]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 17:30:38 compute-0 sudo[41833]: pam_unix(sudo:session): session closed for user root
Sep 30 17:30:38 compute-0 sudo[41917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dazavqtuissfnmyldmkymvandorfqeju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253437.175553-104-148573100078732/AnsiballZ_dnf.py'
Sep 30 17:30:38 compute-0 sudo[41917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:30:38 compute-0 python3.9[41919]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Sep 30 17:30:40 compute-0 sudo[41917]: pam_unix(sudo:session): session closed for user root
Sep 30 17:30:41 compute-0 sudo[42081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeqkkbosydkirbzrxrppjmrljzrutfqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253441.2085383-132-171297136539114/AnsiballZ_dnf.py'
Sep 30 17:30:41 compute-0 sudo[42081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:30:41 compute-0 python3.9[42083]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 17:30:52 compute-0 kernel: SELinux:  Converting 2724 SID table entries...
Sep 30 17:30:52 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Sep 30 17:30:52 compute-0 kernel: SELinux:  policy capability open_perms=1
Sep 30 17:30:52 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Sep 30 17:30:52 compute-0 kernel: SELinux:  policy capability always_check_network=0
Sep 30 17:30:52 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Sep 30 17:30:52 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Sep 30 17:30:52 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Sep 30 17:30:53 compute-0 groupadd[42106]: group added to /etc/group: name=unbound, GID=993
Sep 30 17:30:53 compute-0 groupadd[42106]: group added to /etc/gshadow: name=unbound
Sep 30 17:30:53 compute-0 groupadd[42106]: new group: name=unbound, GID=993
Sep 30 17:30:53 compute-0 useradd[42113]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Sep 30 17:30:53 compute-0 dbus-broker-launch[781]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Sep 30 17:30:53 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Sep 30 17:30:54 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Sep 30 17:30:54 compute-0 systemd[1]: Starting man-db-cache-update.service...
Sep 30 17:30:54 compute-0 systemd[1]: Reloading.
Sep 30 17:30:54 compute-0 systemd-rc-local-generator[42610]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:30:54 compute-0 systemd-sysv-generator[42613]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:30:54 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Sep 30 17:30:55 compute-0 sudo[42081]: pam_unix(sudo:session): session closed for user root
Sep 30 17:30:55 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Sep 30 17:30:55 compute-0 systemd[1]: Finished man-db-cache-update.service.
Sep 30 17:30:55 compute-0 systemd[1]: run-r7ac571bdf02146b8bb35476803c3dcf6.service: Deactivated successfully.
Sep 30 17:30:55 compute-0 sudo[43182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evevtacxvvmkwsydjjfveqprupedektd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253455.3228953-148-195032693852145/AnsiballZ_systemd.py'
Sep 30 17:30:55 compute-0 sudo[43182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:30:56 compute-0 python3.9[43184]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Sep 30 17:30:56 compute-0 systemd[1]: Reloading.
Sep 30 17:30:56 compute-0 systemd-rc-local-generator[43213]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:30:56 compute-0 systemd-sysv-generator[43217]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:30:56 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Sep 30 17:30:56 compute-0 chown[43226]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Sep 30 17:30:56 compute-0 ovs-ctl[43231]: /etc/openvswitch/conf.db does not exist ... (warning).
Sep 30 17:30:56 compute-0 ovs-ctl[43231]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Sep 30 17:30:56 compute-0 ovs-ctl[43231]: Starting ovsdb-server [  OK  ]
Sep 30 17:30:56 compute-0 ovs-vsctl[43280]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Sep 30 17:30:56 compute-0 ovs-vsctl[43300]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"b0398922-aff5-46ba-afa7-58d09e28293c\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Sep 30 17:30:56 compute-0 ovs-ctl[43231]: Configuring Open vSwitch system IDs [  OK  ]
Sep 30 17:30:56 compute-0 ovs-ctl[43231]: Enabling remote OVSDB managers [  OK  ]
Sep 30 17:30:56 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Sep 30 17:30:56 compute-0 ovs-vsctl[43306]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Sep 30 17:30:56 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Sep 30 17:30:56 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Sep 30 17:30:56 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Sep 30 17:30:57 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Sep 30 17:30:57 compute-0 ovs-ctl[43350]: Inserting openvswitch module [  OK  ]
Sep 30 17:30:57 compute-0 ovs-ctl[43319]: Starting ovs-vswitchd [  OK  ]
Sep 30 17:30:57 compute-0 ovs-vsctl[43367]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Sep 30 17:30:57 compute-0 ovs-ctl[43319]: Enabling remote OVSDB managers [  OK  ]
Sep 30 17:30:57 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Sep 30 17:30:57 compute-0 systemd[1]: Starting Open vSwitch...
Sep 30 17:30:57 compute-0 systemd[1]: Finished Open vSwitch.
Sep 30 17:30:57 compute-0 sudo[43182]: pam_unix(sudo:session): session closed for user root
Sep 30 17:30:58 compute-0 python3.9[43519]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:30:58 compute-0 sudo[43669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqrmyqjbtienekxuspuotlrcoiihqfds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253458.4746294-184-229386257100510/AnsiballZ_sefcontext.py'
Sep 30 17:30:58 compute-0 sudo[43669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:30:59 compute-0 python3.9[43671]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Sep 30 17:31:00 compute-0 kernel: SELinux:  Converting 2738 SID table entries...
Sep 30 17:31:00 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Sep 30 17:31:00 compute-0 kernel: SELinux:  policy capability open_perms=1
Sep 30 17:31:00 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Sep 30 17:31:00 compute-0 kernel: SELinux:  policy capability always_check_network=0
Sep 30 17:31:00 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Sep 30 17:31:00 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Sep 30 17:31:00 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Sep 30 17:31:00 compute-0 sudo[43669]: pam_unix(sudo:session): session closed for user root
Sep 30 17:31:01 compute-0 python3.9[43826]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:31:01 compute-0 sudo[43982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcbllexqiqwkunzubnemduhoveyywqwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253461.747508-220-223116087171069/AnsiballZ_dnf.py'
Sep 30 17:31:01 compute-0 dbus-broker-launch[781]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Sep 30 17:31:01 compute-0 sudo[43982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:31:02 compute-0 python3.9[43984]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 17:31:03 compute-0 sudo[43982]: pam_unix(sudo:session): session closed for user root
Sep 30 17:31:04 compute-0 sudo[44135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-extrdflqacfowfdxbljybqlmvvpchntl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253463.8069382-236-264844790194953/AnsiballZ_command.py'
Sep 30 17:31:04 compute-0 sudo[44135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:31:04 compute-0 python3.9[44137]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:31:05 compute-0 sudo[44135]: pam_unix(sudo:session): session closed for user root
Sep 30 17:31:05 compute-0 sudo[44422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hblxkxpemewaexvciqtjwxksoksavdtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253465.430953-252-219062778862307/AnsiballZ_file.py'
Sep 30 17:31:05 compute-0 sudo[44422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:31:06 compute-0 python3.9[44424]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Sep 30 17:31:06 compute-0 sudo[44422]: pam_unix(sudo:session): session closed for user root
Sep 30 17:31:06 compute-0 python3.9[44574]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:31:07 compute-0 sudo[44726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izluyfodqnkmfqyljzblkclcvedtwqlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253467.2484078-284-111159117572622/AnsiballZ_dnf.py'
Sep 30 17:31:07 compute-0 sudo[44726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:31:07 compute-0 python3.9[44728]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 17:31:09 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Sep 30 17:31:09 compute-0 systemd[1]: Starting man-db-cache-update.service...
Sep 30 17:31:09 compute-0 systemd[1]: Reloading.
Sep 30 17:31:09 compute-0 systemd-sysv-generator[44771]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:31:09 compute-0 systemd-rc-local-generator[44768]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:31:09 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Sep 30 17:31:10 compute-0 sudo[44726]: pam_unix(sudo:session): session closed for user root
Sep 30 17:31:10 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Sep 30 17:31:10 compute-0 systemd[1]: Finished man-db-cache-update.service.
Sep 30 17:31:10 compute-0 systemd[1]: run-r3f72921ea078464481cb202d783eb072.service: Deactivated successfully.
Sep 30 17:31:10 compute-0 sudo[45044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sewgrqirwsylxixzusbdghjpiqaxkmiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253470.5152533-300-109555466413310/AnsiballZ_systemd.py'
Sep 30 17:31:10 compute-0 sudo[45044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:31:11 compute-0 python3.9[45046]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 17:31:11 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Sep 30 17:31:11 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Sep 30 17:31:11 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Sep 30 17:31:11 compute-0 systemd[1]: Stopping Network Manager...
Sep 30 17:31:11 compute-0 NetworkManager[3947]: <info>  [1759253471.1006] caught SIGTERM, shutting down normally.
Sep 30 17:31:11 compute-0 NetworkManager[3947]: <info>  [1759253471.1019] dhcp4 (eth0): canceled DHCP transaction
Sep 30 17:31:11 compute-0 NetworkManager[3947]: <info>  [1759253471.1019] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Sep 30 17:31:11 compute-0 NetworkManager[3947]: <info>  [1759253471.1019] dhcp4 (eth0): state changed no lease
Sep 30 17:31:11 compute-0 NetworkManager[3947]: <info>  [1759253471.1021] manager: NetworkManager state is now CONNECTED_SITE
Sep 30 17:31:11 compute-0 NetworkManager[3947]: <info>  [1759253471.1114] exiting (success)
Sep 30 17:31:11 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Sep 30 17:31:11 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Sep 30 17:31:11 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Sep 30 17:31:11 compute-0 systemd[1]: Stopped Network Manager.
Sep 30 17:31:11 compute-0 systemd[1]: NetworkManager.service: Consumed 13.888s CPU time, 4.1M memory peak, read 0B from disk, written 15.0K to disk.
Sep 30 17:31:11 compute-0 systemd[1]: Starting Network Manager...
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.1761] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:17653e5b-2838-4126-901c-d60259dd6a79)
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.1762] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.1816] manager[0x55dc0bb15090]: monitoring kernel firmware directory '/lib/firmware'.
Sep 30 17:31:11 compute-0 systemd[1]: Starting Hostname Service...
Sep 30 17:31:11 compute-0 systemd[1]: Started Hostname Service.
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2470] hostname: hostname: using hostnamed
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2471] hostname: static hostname changed from (none) to "compute-0"
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2476] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2480] manager[0x55dc0bb15090]: rfkill: Wi-Fi hardware radio set enabled
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2481] manager[0x55dc0bb15090]: rfkill: WWAN hardware radio set enabled
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2498] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2505] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2506] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2506] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2506] manager: Networking is enabled by state file
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2508] settings: Loaded settings plugin: keyfile (internal)
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2513] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2534] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
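The deprecation warning above comes from NetworkManager itself and names the remedy: converting the remaining ifcfg-rh profiles to the keyfile format with nmcli. A one-line wrapper (illustrative only; run as root) would be:

    import subprocess

    # Converts every ifcfg-rh profile to the keyfile format NetworkManager prefers.
    subprocess.run(["nmcli", "connection", "migrate"], check=True)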
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2542] dhcp: init: Using DHCP client 'internal'
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2544] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2548] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2552] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2558] device (lo): Activation: starting connection 'lo' (fc4e626b-e829-4908-a3d3-ce552dfc6be3)
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2563] device (eth0): carrier: link connected
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2566] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2570] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2570] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2577] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2582] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2587] device (eth1): carrier: link connected
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2590] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2595] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (69237a25-976c-5886-ad18-fe086b52d43a) (indicated)
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2596] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2602] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2608] device (eth1): Activation: starting connection 'ci-private-network' (69237a25-976c-5886-ad18-fe086b52d43a)
Sep 30 17:31:11 compute-0 systemd[1]: Started Network Manager.
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2618] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2624] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2626] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2628] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2630] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2635] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2636] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2639] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2646] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2650] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2652] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2660] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2671] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2677] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2679] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2683] device (lo): Activation: successful, device activated.
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2690] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2691] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2694] manager: NetworkManager state is now CONNECTED_LOCAL
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2696] device (eth1): Activation: successful, device activated.
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2704] dhcp4 (eth0): state changed new lease, address=38.102.83.202
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.2709] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Sep 30 17:31:11 compute-0 systemd[1]: Starting Network Manager Wait Online...
Sep 30 17:31:11 compute-0 sudo[45044]: pam_unix(sudo:session): session closed for user root
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.3221] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.3260] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.3261] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.3265] manager: NetworkManager state is now CONNECTED_SITE
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.3268] device (eth0): Activation: successful, device activated.
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.3272] manager: NetworkManager state is now CONNECTED_GLOBAL
Sep 30 17:31:11 compute-0 NetworkManager[45059]: <info>  [1759253471.3275] manager: startup complete
Sep 30 17:31:11 compute-0 systemd[1]: Finished Network Manager Wait Online.
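At this point the restart requested by the ansible.builtin.systemd task is complete: NetworkManager has re-assumed lo, eth0 (fresh DHCP lease 38.102.83.202 and the default IPv4 route) and eth1 (ci-private-network), reached CONNECTED_GLOBAL, and NetworkManager-wait-online has returned. The same state can be read programmatically through the libnm GObject bindings; this sketch assumes python3-gobject and the NM-1.0 typelib are installed on the host.

    import gi
    gi.require_version("NM", "1.0")
    from gi.repository import NM

    client = NM.Client.new(None)             # synchronous D-Bus connection to NetworkManager
    print(client.get_state())                # NM_STATE_CONNECTED_GLOBAL once startup completes
    for dev in client.get_devices():
        print(dev.get_iface(), dev.get_state().value_nick)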
Sep 30 17:31:11 compute-0 sudo[45270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asotaevygquuwkebswfbgjkdvcpnkfgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253471.6491816-316-4554403172613/AnsiballZ_dnf.py'
Sep 30 17:31:11 compute-0 sudo[45270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:31:12 compute-0 python3.9[45272]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 17:31:16 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Sep 30 17:31:16 compute-0 systemd[1]: Starting man-db-cache-update.service...
Sep 30 17:31:16 compute-0 systemd[1]: Reloading.
Sep 30 17:31:16 compute-0 systemd-rc-local-generator[45324]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:31:16 compute-0 systemd-sysv-generator[45327]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:31:16 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Sep 30 17:31:17 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Sep 30 17:31:17 compute-0 systemd[1]: Finished man-db-cache-update.service.
Sep 30 17:31:17 compute-0 systemd[1]: run-r80341cbe0d224371a14af8ff4089686d.service: Deactivated successfully.
Sep 30 17:31:17 compute-0 sudo[45270]: pam_unix(sudo:session): session closed for user root
Sep 30 17:31:18 compute-0 sudo[45732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgtdryvtofvyspajufhyjsvnzybyiilb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253478.4689472-340-204326931405234/AnsiballZ_stat.py'
Sep 30 17:31:18 compute-0 sudo[45732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:31:18 compute-0 python3.9[45734]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:31:18 compute-0 sudo[45732]: pam_unix(sudo:session): session closed for user root
Sep 30 17:31:19 compute-0 sudo[45884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkisgekgcbqknbqpygizgsaaiaynckyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253479.212631-358-185059800291809/AnsiballZ_ini_file.py'
Sep 30 17:31:19 compute-0 sudo[45884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:31:19 compute-0 python3.9[45886]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:31:19 compute-0 sudo[45884]: pam_unix(sudo:session): session closed for user root
Sep 30 17:31:20 compute-0 sudo[46038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuiwimskdvgabxmojfuqofhevzmcxagd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253480.1670125-378-18195496740944/AnsiballZ_ini_file.py'
Sep 30 17:31:20 compute-0 sudo[46038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:31:20 compute-0 python3.9[46040]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:31:20 compute-0 sudo[46038]: pam_unix(sudo:session): session closed for user root
Sep 30 17:31:21 compute-0 sudo[46190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iaqsdwenpwbbjgdwsfkzeohzmsjpbzgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253480.8104775-378-186509102842921/AnsiballZ_ini_file.py'
Sep 30 17:31:21 compute-0 sudo[46190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:31:21 compute-0 python3.9[46192]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:31:21 compute-0 sudo[46190]: pam_unix(sudo:session): session closed for user root
Sep 30 17:31:21 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Sep 30 17:31:21 compute-0 sudo[46343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgbshhxqqckfaesrytwftukscsyeoptb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253481.5965478-408-192463979042049/AnsiballZ_ini_file.py'
Sep 30 17:31:21 compute-0 sudo[46343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:31:21 compute-0 python3.9[46345]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:31:22 compute-0 sudo[46343]: pam_unix(sudo:session): session closed for user root
Sep 30 17:31:22 compute-0 sudo[46495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntpcbsxbaewfzjyilytbyreoapgtzeku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253482.127901-408-139991279269787/AnsiballZ_ini_file.py'
Sep 30 17:31:22 compute-0 sudo[46495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:31:22 compute-0 python3.9[46497]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:31:22 compute-0 sudo[46495]: pam_unix(sudo:session): session closed for user root
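Taken together, the five community.general.ini_file tasks above leave /etc/NetworkManager/NetworkManager.conf with no-auto-default=* in [main] (so NetworkManager never fabricates "Wired connection N" profiles for unconfigured NICs) and strip any dns=none / rc-manager=unmanaged overrides from both NetworkManager.conf and the cloud-init drop-in, handing resolv.conf management back to NetworkManager. A rough single-file equivalent using configparser is shown below; unlike the module it keeps no backup and drops comments, so treat it as a sketch only.

    import configparser

    path = "/etc/NetworkManager/NetworkManager.conf"
    cfg = configparser.ConfigParser()
    cfg.optionxform = str                       # NetworkManager keys are case-sensitive
    cfg.read(path)

    if not cfg.has_section("main"):
        cfg.add_section("main")
    cfg.set("main", "no-auto-default", "*")     # the state=present task
    cfg.remove_option("main", "dns")            # the state=absent tasks
    cfg.remove_option("main", "rc-manager")

    with open(path, "w") as fh:
        cfg.write(fh, space_around_delimiters=False)   # mirrors no_extra_spaces=True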
Sep 30 17:31:23 compute-0 sudo[46647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-paerkvwsehezifyrcaqoxejxfxtiqlbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253482.8570275-438-19489529989006/AnsiballZ_stat.py'
Sep 30 17:31:23 compute-0 sudo[46647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:31:23 compute-0 python3.9[46649]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:31:23 compute-0 sudo[46647]: pam_unix(sudo:session): session closed for user root
Sep 30 17:31:23 compute-0 sudo[46770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjyefxwegnnviecbxznygnvpeabmlfzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253482.8570275-438-19489529989006/AnsiballZ_copy.py'
Sep 30 17:31:23 compute-0 sudo[46770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:31:23 compute-0 python3.9[46772]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759253482.8570275-438-19489529989006/.source _original_basename=.wluh8e6n follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:31:23 compute-0 sudo[46770]: pam_unix(sudo:session): session closed for user root
Sep 30 17:31:24 compute-0 sudo[46922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itnpeqscotpupngybfbbzhhahgpkawwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253484.1400347-468-119286995320297/AnsiballZ_file.py'
Sep 30 17:31:24 compute-0 sudo[46922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:31:24 compute-0 python3.9[46924]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:31:24 compute-0 sudo[46922]: pam_unix(sudo:session): session closed for user root
Sep 30 17:31:25 compute-0 sudo[47074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqwxkxursrhojlznsrxvqhabvxpayyyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253484.8228922-484-33309090901687/AnsiballZ_edpm_os_net_config_mappings.py'
Sep 30 17:31:25 compute-0 sudo[47074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:31:25 compute-0 python3.9[47076]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Sep 30 17:31:25 compute-0 sudo[47074]: pam_unix(sudo:session): session closed for user root
Sep 30 17:31:25 compute-0 sudo[47226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdjkuxotttbarpgtgmuksxginmxbfewd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253485.6655095-502-86783764699544/AnsiballZ_file.py'
Sep 30 17:31:25 compute-0 sudo[47226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:31:26 compute-0 python3.9[47228]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:31:26 compute-0 sudo[47226]: pam_unix(sudo:session): session closed for user root
Sep 30 17:31:26 compute-0 sudo[47378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfjxdhgjxyupmypzjujngqlxgfhsabyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253486.4893699-522-152330977283659/AnsiballZ_stat.py'
Sep 30 17:31:26 compute-0 sudo[47378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:31:26 compute-0 sudo[47378]: pam_unix(sudo:session): session closed for user root
Sep 30 17:31:27 compute-0 sudo[47501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbgcwsuridcucspkuofmzecdjxjyxkgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253486.4893699-522-152330977283659/AnsiballZ_copy.py'
Sep 30 17:31:27 compute-0 sudo[47501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:31:27 compute-0 sudo[47501]: pam_unix(sudo:session): session closed for user root
Sep 30 17:31:28 compute-0 sudo[47653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxpjetuhmtwrbeysrhfnswnquyrztkdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253487.7688105-552-236011260286894/AnsiballZ_slurp.py'
Sep 30 17:31:28 compute-0 sudo[47653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:31:28 compute-0 python3.9[47655]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Sep 30 17:31:28 compute-0 sudo[47653]: pam_unix(sudo:session): session closed for user root
Sep 30 17:31:29 compute-0 sudo[47828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmhamnginnrcxjkddrejnqzupmvilnjg ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253488.6007376-570-31570836861683/async_wrapper.py j98807834917 300 /home/zuul/.ansible/tmp/ansible-tmp-1759253488.6007376-570-31570836861683/AnsiballZ_edpm_os_net_config.py _'
Sep 30 17:31:29 compute-0 sudo[47828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:31:29 compute-0 ansible-async_wrapper.py[47830]: Invoked with j98807834917 300 /home/zuul/.ansible/tmp/ansible-tmp-1759253488.6007376-570-31570836861683/AnsiballZ_edpm_os_net_config.py _
Sep 30 17:31:29 compute-0 ansible-async_wrapper.py[47833]: Starting module and watcher
Sep 30 17:31:29 compute-0 ansible-async_wrapper.py[47833]: Start watching 47834 (300)
Sep 30 17:31:29 compute-0 ansible-async_wrapper.py[47834]: Start module (47834)
Sep 30 17:31:29 compute-0 ansible-async_wrapper.py[47830]: Return async_wrapper task started.
Sep 30 17:31:29 compute-0 sudo[47828]: pam_unix(sudo:session): session closed for user root
Sep 30 17:31:29 compute-0 python3.9[47835]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
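The async edpm_os_net_config module is effectively a wrapper around the os-net-config tool, pointed at the config.yaml slurped just above; use_nmstate=True presumably selects the nmstate/NetworkManager provider. A sketch of the equivalent direct invocation follows; the flag spellings are assumptions mapped from the module arguments in the log, not taken from the module source.

    import subprocess

    cmd = [
        "os-net-config",
        "-c", "/etc/os-net-config/config.yaml",  # config_file=... above
        "--detailed-exit-codes",                  # detailed_exit_codes=True
        "--cleanup",                              # cleanup=True
        "--debug",                                # debug=True
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    # With detailed exit codes a small non-zero value signals "changes applied"
    # rather than failure; the role persists it as
    # /var/lib/edpm-config/os-net-config.returncode (stat'ed earlier in this log).
    print(result.returncode)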
Sep 30 17:31:30 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Sep 30 17:31:30 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Sep 30 17:31:30 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Sep 30 17:31:30 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Sep 30 17:31:30 compute-0 kernel: cfg80211: failed to load regulatory.db
Sep 30 17:31:30 compute-0 sshd[1010]: Timeout before authentication for connection from 14.103.233.27 to 38.102.83.202, pid = 30833
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.4871] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47836 uid=0 result="success"
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.4892] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47836 uid=0 result="success"
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5382] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5384] audit: op="connection-add" uuid="fea3ed2c-3768-4414-960b-2f5a01c84e15" name="br-ex-br" pid=47836 uid=0 result="success"
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5397] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5398] audit: op="connection-add" uuid="a0f001b5-64f3-4cb4-b46f-e6bd6a33e622" name="br-ex-port" pid=47836 uid=0 result="success"
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5411] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5412] audit: op="connection-add" uuid="0d72f04b-a838-4409-b267-f810e63da9ab" name="eth1-port" pid=47836 uid=0 result="success"
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5427] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5428] audit: op="connection-add" uuid="de97852d-4d25-45df-9e58-09514ba92f58" name="vlan20-port" pid=47836 uid=0 result="success"
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5440] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5441] audit: op="connection-add" uuid="d3f521c2-b968-43aa-a3fc-7ec1e8dbd9ca" name="vlan21-port" pid=47836 uid=0 result="success"
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5452] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5453] audit: op="connection-add" uuid="b4c001be-ab53-405e-a846-1bb5b4cf7130" name="vlan22-port" pid=47836 uid=0 result="success"
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5466] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5467] audit: op="connection-add" uuid="b6142462-46b6-4acb-a888-cbd1156d957d" name="vlan23-port" pid=47836 uid=0 result="success"
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5486] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.method,ipv6.addr-gen-mode,ipv6.dhcp-timeout,802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp" pid=47836 uid=0 result="success"
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5501] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5503] audit: op="connection-add" uuid="dc4a59e7-8d16-46e9-9cd1-9ed16c16f279" name="br-ex-if" pid=47836 uid=0 result="success"
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5531] audit: op="connection-update" uuid="69237a25-976c-5886-ad18-fe086b52d43a" name="ci-private-network" args="ovs-interface.type,ipv4.never-default,ipv4.routing-rules,ipv4.method,ipv4.dns,ipv4.routes,ipv4.addresses,ipv6.routing-rules,ipv6.method,ipv6.addr-gen-mode,ipv6.dns,ipv6.routes,ipv6.addresses,connection.controller,connection.slave-type,connection.timestamp,connection.master,connection.port-type,ovs-external-ids.data" pid=47836 uid=0 result="success"
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5549] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5551] audit: op="connection-add" uuid="70865d4f-ce4e-4a6f-9afb-b7292f0237f2" name="vlan20-if" pid=47836 uid=0 result="success"
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5566] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5567] audit: op="connection-add" uuid="15346b95-595c-4846-89e5-559d4a1b5cd9" name="vlan21-if" pid=47836 uid=0 result="success"
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5582] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5583] audit: op="connection-add" uuid="37a19487-e54a-4c9b-bd68-34fe459b2333" name="vlan22-if" pid=47836 uid=0 result="success"
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5600] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5601] audit: op="connection-add" uuid="83fc685c-4791-4f2c-be8f-cab3c0d6dc18" name="vlan23-if" pid=47836 uid=0 result="success"
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5615] audit: op="connection-delete" uuid="3ae83a79-f376-35d9-8615-d9dcf6285ffc" name="Wired connection 1" pid=47836 uid=0 result="success"
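The audit trail above shows os-net-config asking NetworkManager (inside checkpoint /org/freedesktop/NetworkManager/Checkpoint/1) for a single Open vSwitch bridge, br-ex, with an ovs-port and ovs-interface pair for the bridge itself and for each member: eth1 plus the internal vlan20-vlan23 interfaces, while the now-redundant "Wired connection 1" profile is deleted. Once activation settles, the resulting profiles can be listed with nmcli; the snippet below is an illustrative check, and nothing in it beyond the interface names comes from the log.

    import subprocess

    out = subprocess.run(
        ["nmcli", "-t", "-f", "NAME,TYPE,DEVICE", "connection", "show"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        name, ctype, device = line.split(":", 2)
        if ctype.startswith("ovs-") or device in ("br-ex", "eth1"):
            print(f"{name:20} {ctype:15} {device}")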
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5626] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5636] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5640] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (fea3ed2c-3768-4414-960b-2f5a01c84e15)
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5641] audit: op="connection-activate" uuid="fea3ed2c-3768-4414-960b-2f5a01c84e15" name="br-ex-br" pid=47836 uid=0 result="success"
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5643] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5649] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5652] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (a0f001b5-64f3-4cb4-b46f-e6bd6a33e622)
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5654] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5658] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5663] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (0d72f04b-a838-4409-b267-f810e63da9ab)
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5664] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5670] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5674] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (de97852d-4d25-45df-9e58-09514ba92f58)
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5676] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5682] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5686] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (d3f521c2-b968-43aa-a3fc-7ec1e8dbd9ca)
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5687] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5692] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5697] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (b4c001be-ab53-405e-a846-1bb5b4cf7130)
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5699] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5706] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5710] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (b6142462-46b6-4acb-a888-cbd1156d957d)
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5711] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5713] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5715] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5720] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5723] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5727] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (dc4a59e7-8d16-46e9-9cd1-9ed16c16f279)
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5727] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5730] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5732] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5734] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5735] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5744] device (eth1): disconnecting for new activation request.
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5745] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5747] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5749] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5750] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5752] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5756] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5759] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (70865d4f-ce4e-4a6f-9afb-b7292f0237f2)
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5760] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5763] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5765] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5766] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5769] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5772] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5776] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (15346b95-595c-4846-89e5-559d4a1b5cd9)
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5776] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5779] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5780] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5781] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5783] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5787] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5790] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (37a19487-e54a-4c9b-bd68-34fe459b2333)
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5791] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5793] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5795] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5796] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5799] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5802] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5806] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (83fc685c-4791-4f2c-be8f-cab3c0d6dc18)
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5806] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5809] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5811] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5812] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5814] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5826] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.method,ipv6.addr-gen-mode,802-3-ethernet.mtu,connection.autoconnect-priority" pid=47836 uid=0 result="success"
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5827] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5830] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5832] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5838] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5841] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5843] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5846] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5847] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5850] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5853] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5855] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5856] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5860] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5862] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5865] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5866] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5869] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5872] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5874] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5875] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5879] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5881] dhcp4 (eth0): canceled DHCP transaction
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5882] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5882] dhcp4 (eth0): state changed no lease
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5883] dhcp4 (eth0): activation: beginning transaction (no timeout)
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.5898] audit: op="device-reapply" interface="eth1" ifindex=3 pid=47836 uid=0 result="fail" reason="Device is not activated"
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6230] dhcp4 (eth0): state changed new lease, address=38.102.83.202
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6236] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Sep 30 17:31:31 compute-0 kernel: ovs-system: entered promiscuous mode
Sep 30 17:31:31 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Sep 30 17:31:31 compute-0 systemd-udevd[47842]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 17:31:31 compute-0 kernel: Timeout policy base is empty
Sep 30 17:31:31 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6455] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6460] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Sep 30 17:31:31 compute-0 kernel: br-ex: entered promiscuous mode
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6601] device (eth1): Activation: starting connection 'ci-private-network' (69237a25-976c-5886-ad18-fe086b52d43a)
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6609] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6610] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6611] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6613] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6615] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6616] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6617] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6620] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6629] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6632] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6639] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6645] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6649] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 kernel: vlan20: entered promiscuous mode
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6655] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6659] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 systemd-udevd[47841]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6663] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6667] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6671] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6677] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6681] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6684] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6688] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6692] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6696] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6709] device (eth1): state change: config -> deactivating (reason 'new-activation', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6710] device (eth1): released from controller device eth1
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6717] device (eth1): disconnecting for new activation request.
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6717] audit: op="connection-activate" uuid="69237a25-976c-5886-ad18-fe086b52d43a" name="ci-private-network" pid=47836 uid=0 result="success"
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6721] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Sep 30 17:31:31 compute-0 kernel: vlan21: entered promiscuous mode
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6729] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6739] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6746] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6753] device (eth1): Activation: starting connection 'ci-private-network' (69237a25-976c-5886-ad18-fe086b52d43a)
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6776] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6779] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6786] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47836 uid=0 result="success"
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6794] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 kernel: vlan22: entered promiscuous mode
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6804] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6806] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6821] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6835] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6843] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6858] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Sep 30 17:31:31 compute-0 kernel: vlan23: entered promiscuous mode
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6875] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6926] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6929] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6938] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6942] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6949] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6950] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6951] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6957] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6963] device (eth1): Activation: successful, device activated.
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6967] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6972] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6977] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6981] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.6991] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.7001] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.7010] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.7014] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.7019] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.7035] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.7069] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.7071] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Sep 30 17:31:31 compute-0 NetworkManager[45059]: <info>  [1759253491.7082] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Sep 30 17:31:32 compute-0 NetworkManager[45059]: <info>  [1759253492.8410] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47836 uid=0 result="success"
Sep 30 17:31:33 compute-0 NetworkManager[45059]: <info>  [1759253493.0616] checkpoint[0x55dc0baec950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Sep 30 17:31:33 compute-0 NetworkManager[45059]: <info>  [1759253493.0619] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47836 uid=0 result="success"
Sep 30 17:31:33 compute-0 sudo[48196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuytqrlvtknoxqdmgzukulqcxdraccog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253492.6070867-570-93120611386207/AnsiballZ_async_status.py'
Sep 30 17:31:33 compute-0 sudo[48196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:31:33 compute-0 python3.9[48199]: ansible-ansible.legacy.async_status Invoked with jid=j98807834917.47830 mode=status _async_dir=/root/.ansible_async
Sep 30 17:31:33 compute-0 sudo[48196]: pam_unix(sudo:session): session closed for user root
Sep 30 17:31:33 compute-0 NetworkManager[45059]: <info>  [1759253493.3534] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47836 uid=0 result="success"
Sep 30 17:31:33 compute-0 NetworkManager[45059]: <info>  [1759253493.3546] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47836 uid=0 result="success"
Sep 30 17:31:33 compute-0 NetworkManager[45059]: <info>  [1759253493.5540] audit: op="networking-control" arg="global-dns-configuration" pid=47836 uid=0 result="success"
Sep 30 17:31:33 compute-0 NetworkManager[45059]: <info>  [1759253493.5632] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Sep 30 17:31:33 compute-0 NetworkManager[45059]: <info>  [1759253493.5684] audit: op="networking-control" arg="global-dns-configuration" pid=47836 uid=0 result="success"
Sep 30 17:31:33 compute-0 NetworkManager[45059]: <info>  [1759253493.5710] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47836 uid=0 result="success"
Sep 30 17:31:33 compute-0 NetworkManager[45059]: <info>  [1759253493.7234] checkpoint[0x55dc0baeca20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Sep 30 17:31:33 compute-0 NetworkManager[45059]: <info>  [1759253493.7239] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47836 uid=0 result="success"
Sep 30 17:31:33 compute-0 ansible-async_wrapper.py[47834]: Module complete (47834)
Sep 30 17:31:34 compute-0 ansible-async_wrapper.py[47833]: Done in kid B.
Sep 30 17:31:36 compute-0 sudo[48302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bldaglkfgwarixzbulnmrmtlaiiqjgdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253492.6070867-570-93120611386207/AnsiballZ_async_status.py'
Sep 30 17:31:36 compute-0 sudo[48302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:31:36 compute-0 python3.9[48304]: ansible-ansible.legacy.async_status Invoked with jid=j98807834917.47830 mode=status _async_dir=/root/.ansible_async
Sep 30 17:31:36 compute-0 sudo[48302]: pam_unix(sudo:session): session closed for user root
Sep 30 17:31:36 compute-0 sudo[48402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adqtiecyjaeqyfdqjrpaagzivirmxqef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253492.6070867-570-93120611386207/AnsiballZ_async_status.py'
Sep 30 17:31:36 compute-0 sudo[48402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:31:37 compute-0 python3.9[48404]: ansible-ansible.legacy.async_status Invoked with jid=j98807834917.47830 mode=cleanup _async_dir=/root/.ansible_async
Sep 30 17:31:37 compute-0 sudo[48402]: pam_unix(sudo:session): session closed for user root
Sep 30 17:31:37 compute-0 sudo[48554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umjscdamhbfigpeuliraraqjcseurstd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253497.5287194-624-252128496501593/AnsiballZ_stat.py'
Sep 30 17:31:37 compute-0 sudo[48554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:31:38 compute-0 python3.9[48556]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:31:38 compute-0 sudo[48554]: pam_unix(sudo:session): session closed for user root
Sep 30 17:31:38 compute-0 sudo[48677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygatrbbsqymdlqfhogtsdrnjyqszranj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253497.5287194-624-252128496501593/AnsiballZ_copy.py'
Sep 30 17:31:38 compute-0 sudo[48677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:31:38 compute-0 python3.9[48679]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759253497.5287194-624-252128496501593/.source.returncode _original_basename=.3gek6weo follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:31:38 compute-0 sudo[48677]: pam_unix(sudo:session): session closed for user root
Sep 30 17:31:39 compute-0 sudo[48829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkrqlhtkobceipwoatufuwktjpztifer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253499.094573-656-170560520320638/AnsiballZ_stat.py'
Sep 30 17:31:39 compute-0 sudo[48829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:31:39 compute-0 python3.9[48831]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:31:39 compute-0 sudo[48829]: pam_unix(sudo:session): session closed for user root
Sep 30 17:31:39 compute-0 sudo[48952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjuobzvkykmtajoxbyaoknvqijpiutbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253499.094573-656-170560520320638/AnsiballZ_copy.py'
Sep 30 17:31:39 compute-0 sudo[48952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:31:40 compute-0 python3.9[48954]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759253499.094573-656-170560520320638/.source.cfg _original_basename=.h1zdnk7l follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:31:40 compute-0 sudo[48952]: pam_unix(sudo:session): session closed for user root
Sep 30 17:31:40 compute-0 sudo[49105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjmdiwmvpsmrchpkfurgycxhyiitmrmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253500.3694398-686-169498793718055/AnsiballZ_systemd.py'
Sep 30 17:31:40 compute-0 sudo[49105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:31:40 compute-0 python3.9[49107]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 17:31:41 compute-0 systemd[1]: Reloading Network Manager...
Sep 30 17:31:41 compute-0 NetworkManager[45059]: <info>  [1759253501.0259] audit: op="reload" arg="0" pid=49111 uid=0 result="success"
Sep 30 17:31:41 compute-0 NetworkManager[45059]: <info>  [1759253501.0265] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Sep 30 17:31:41 compute-0 systemd[1]: Reloaded Network Manager.
Sep 30 17:31:41 compute-0 sudo[49105]: pam_unix(sudo:session): session closed for user root
Sep 30 17:31:41 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 30 17:31:41 compute-0 sshd-session[41058]: Connection closed by 192.168.122.30 port 57576
Sep 30 17:31:41 compute-0 sshd-session[41055]: pam_unix(sshd:session): session closed for user zuul
Sep 30 17:31:41 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Sep 30 17:31:41 compute-0 systemd[1]: session-10.scope: Consumed 47.587s CPU time.
Sep 30 17:31:41 compute-0 systemd-logind[811]: Session 10 logged out. Waiting for processes to exit.
Sep 30 17:31:41 compute-0 systemd-logind[811]: Removed session 10.
Sep 30 17:31:46 compute-0 sshd-session[49143]: Accepted publickey for zuul from 192.168.122.30 port 36888 ssh2: ECDSA SHA256:COqAmbdBUMx+UyDa5XAop4Bi6qvwvYhJQ16rCseuHkQ
Sep 30 17:31:46 compute-0 systemd-logind[811]: New session 11 of user zuul.
Sep 30 17:31:46 compute-0 systemd[1]: Started Session 11 of User zuul.
Sep 30 17:31:46 compute-0 sshd-session[49143]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 17:31:47 compute-0 python3.9[49297]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:31:48 compute-0 python3.9[49451]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 17:31:49 compute-0 python3.9[49644]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:31:50 compute-0 sshd-session[49147]: Connection closed by 192.168.122.30 port 36888
Sep 30 17:31:50 compute-0 sshd-session[49143]: pam_unix(sshd:session): session closed for user zuul
Sep 30 17:31:50 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Sep 30 17:31:50 compute-0 systemd[1]: session-11.scope: Consumed 2.175s CPU time.
Sep 30 17:31:50 compute-0 systemd-logind[811]: Session 11 logged out. Waiting for processes to exit.
Sep 30 17:31:50 compute-0 systemd-logind[811]: Removed session 11.
Sep 30 17:31:51 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Sep 30 17:31:55 compute-0 sshd-session[49673]: Accepted publickey for zuul from 192.168.122.30 port 35488 ssh2: ECDSA SHA256:COqAmbdBUMx+UyDa5XAop4Bi6qvwvYhJQ16rCseuHkQ
Sep 30 17:31:55 compute-0 systemd-logind[811]: New session 12 of user zuul.
Sep 30 17:31:55 compute-0 systemd[1]: Started Session 12 of User zuul.
Sep 30 17:31:55 compute-0 sshd-session[49673]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 17:31:56 compute-0 python3.9[49827]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:31:57 compute-0 python3.9[49981]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:31:58 compute-0 sudo[50135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxuawsynyhdumhzvxdmxhllwhapysiwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253518.2048385-60-214128942681831/AnsiballZ_setup.py'
Sep 30 17:31:58 compute-0 sudo[50135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:31:58 compute-0 python3.9[50137]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 17:31:58 compute-0 sudo[50135]: pam_unix(sudo:session): session closed for user root
Sep 30 17:31:59 compute-0 sudo[50220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hshsbkidwwxadlsxlplyvjhgdlywlgun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253518.2048385-60-214128942681831/AnsiballZ_dnf.py'
Sep 30 17:31:59 compute-0 sudo[50220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:31:59 compute-0 python3.9[50222]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 17:32:00 compute-0 sudo[50220]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:01 compute-0 sudo[50373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfhupvgaagzmnrbznxpiodhxktqegycd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253521.0058904-84-68954800942777/AnsiballZ_setup.py'
Sep 30 17:32:01 compute-0 sudo[50373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:01 compute-0 python3.9[50375]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 17:32:01 compute-0 sudo[50373]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:02 compute-0 sudo[50569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-andujmyekfrblvkhpdyujworegnclsye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253522.1936336-106-216877295568885/AnsiballZ_file.py'
Sep 30 17:32:02 compute-0 sudo[50569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:02 compute-0 python3.9[50571]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:32:02 compute-0 sudo[50569]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:03 compute-0 sudo[50721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxxfvgsrmmyomvhimhaghojwaoyvaxtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253522.9787822-122-90070946459969/AnsiballZ_command.py'
Sep 30 17:32:03 compute-0 sudo[50721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:03 compute-0 python3.9[50723]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:32:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat2453425133-merged.mount: Deactivated successfully.
Sep 30 17:32:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck3904798512-merged.mount: Deactivated successfully.
Sep 30 17:32:03 compute-0 podman[50724]: 2025-09-30 17:32:03.72114454 +0000 UTC m=+0.045455135 system refresh
Sep 30 17:32:03 compute-0 sudo[50721]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:04 compute-0 sudo[50883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcvpjnnmdrgurcwhyqruavlwhwsyztxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253524.0186293-138-189821606730472/AnsiballZ_stat.py'
Sep 30 17:32:04 compute-0 sudo[50883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:04 compute-0 python3.9[50885]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:32:04 compute-0 sudo[50883]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:04 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Sep 30 17:32:05 compute-0 sudo[51006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghnqjjclqrpqahjabwwgwprljnoesjmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253524.0186293-138-189821606730472/AnsiballZ_copy.py'
Sep 30 17:32:05 compute-0 sudo[51006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:05 compute-0 python3.9[51008]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759253524.0186293-138-189821606730472/.source.json follow=False _original_basename=podman_network_config.j2 checksum=cf5c36ddbb28c028040278d994d82f89a78a0293 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:32:05 compute-0 sudo[51006]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:05 compute-0 sudo[51158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efpqaormeefsrpyuvcjlanenhyaewruo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253525.611855-168-126665316181667/AnsiballZ_stat.py'
Sep 30 17:32:05 compute-0 sudo[51158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:06 compute-0 python3.9[51160]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:32:06 compute-0 sudo[51158]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:06 compute-0 sudo[51281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwwbflrtmzmgcmkasslrcnyheyylbhgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253525.611855-168-126665316181667/AnsiballZ_copy.py'
Sep 30 17:32:06 compute-0 sudo[51281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:06 compute-0 python3.9[51283]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759253525.611855-168-126665316181667/.source.conf follow=False _original_basename=registries.conf.j2 checksum=648aa3ef81d3efca9d74ba4d007f7d21b2d62a41 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:32:06 compute-0 sudo[51281]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:07 compute-0 sudo[51433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnnvyxgdjzqbkfovjzuujuiukbfspmto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253526.9597518-200-140914155507131/AnsiballZ_ini_file.py'
Sep 30 17:32:07 compute-0 sudo[51433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:07 compute-0 python3.9[51435]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:32:07 compute-0 sudo[51433]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:07 compute-0 sudo[51585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahsvzldkkcwkqykfdzzwqwksnqohqsvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253527.709932-200-266053154120971/AnsiballZ_ini_file.py'
Sep 30 17:32:07 compute-0 sudo[51585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:08 compute-0 python3.9[51587]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:32:08 compute-0 sudo[51585]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:08 compute-0 sudo[51737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guqdognegjvnrwghdylsxqauzyzhiwws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253528.29002-200-242932269578700/AnsiballZ_ini_file.py'
Sep 30 17:32:08 compute-0 sudo[51737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:08 compute-0 python3.9[51739]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:32:08 compute-0 sudo[51737]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:09 compute-0 sudo[51889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-facyaieqmgcjqtsimmkvyjmamlmpejlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253529.0284262-200-78534443501392/AnsiballZ_ini_file.py'
Sep 30 17:32:09 compute-0 sudo[51889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:09 compute-0 python3.9[51891]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:32:09 compute-0 sudo[51889]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:09 compute-0 sudo[52041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojgmtfmdivvntfkpfoqhiwolejefuwvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253529.7346826-262-143908048866300/AnsiballZ_dnf.py'
Sep 30 17:32:09 compute-0 sudo[52041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:10 compute-0 python3.9[52043]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 17:32:11 compute-0 sudo[52041]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:12 compute-0 sudo[52195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhbsnhqmronskggqssdrmyubqhcgcuqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253531.9288938-284-132019334060197/AnsiballZ_setup.py'
Sep 30 17:32:12 compute-0 sudo[52195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:12 compute-0 python3.9[52197]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:32:12 compute-0 sudo[52195]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:12 compute-0 sudo[52350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epgzjfxwzgesbexcdufiblvicbtxwolw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253532.7744262-300-86885714730009/AnsiballZ_stat.py'
Sep 30 17:32:12 compute-0 sudo[52350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:13 compute-0 python3.9[52352]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:32:13 compute-0 sudo[52350]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:13 compute-0 sshd-session[52045]: Connection closed by authenticating user operator 80.94.95.116 port 34912 [preauth]
Sep 30 17:32:13 compute-0 sudo[52502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgljcgczqjnvziaxopudczvlpiournrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253533.4676194-318-87185170940867/AnsiballZ_stat.py'
Sep 30 17:32:13 compute-0 sudo[52502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:13 compute-0 python3.9[52504]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:32:13 compute-0 sudo[52502]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:14 compute-0 sudo[52654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dypyhackaqyjqqxoqmkilvwywwktrlgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253534.2811153-338-30030581773173/AnsiballZ_service_facts.py'
Sep 30 17:32:14 compute-0 sudo[52654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:14 compute-0 python3.9[52656]: ansible-service_facts Invoked
Sep 30 17:32:14 compute-0 sshd[1010]: Timeout before authentication for connection from 14.103.233.27 to 38.102.83.202, pid = 39684
Sep 30 17:32:14 compute-0 network[52673]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Sep 30 17:32:14 compute-0 network[52674]: 'network-scripts' will be removed from distribution in near future.
Sep 30 17:32:14 compute-0 network[52675]: It is advised to switch to 'NetworkManager' instead for network management.
Sep 30 17:32:17 compute-0 sudo[52654]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:19 compute-0 sudo[52960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbjusenpgelsdoevdvmzdwaamyuwzzal ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1759253539.550918-364-37885742458221/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1759253539.550918-364-37885742458221/args'
Sep 30 17:32:19 compute-0 sudo[52960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:19 compute-0 sudo[52960]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:20 compute-0 sudo[53127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlamozplzqjhtkihwhxrblxdbjujwdng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253540.2830331-386-128698342522898/AnsiballZ_dnf.py'
Sep 30 17:32:20 compute-0 sudo[53127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:20 compute-0 python3.9[53129]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 17:32:22 compute-0 sudo[53127]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:23 compute-0 sudo[53280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydmnwiapwkybfvfyzzpftsjfjhizbwzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253542.514285-412-205446314881080/AnsiballZ_package_facts.py'
Sep 30 17:32:23 compute-0 sudo[53280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:23 compute-0 python3.9[53282]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Sep 30 17:32:23 compute-0 sudo[53280]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:24 compute-0 sudo[53432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezidmqlaugasekvzlbgbrrbxfjmjmwvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253544.2461472-432-185424117395487/AnsiballZ_stat.py'
Sep 30 17:32:24 compute-0 sudo[53432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:24 compute-0 python3.9[53434]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:32:24 compute-0 sudo[53432]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:25 compute-0 sudo[53557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noehohmzfbdsxdozttoledfezevdihvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253544.2461472-432-185424117395487/AnsiballZ_copy.py'
Sep 30 17:32:25 compute-0 sudo[53557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:25 compute-0 sshd[1010]: drop connection #0 from [14.103.233.27]:33488 on [38.102.83.202]:22 penalty: exceeded LoginGraceTime
Sep 30 17:32:25 compute-0 python3.9[53559]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759253544.2461472-432-185424117395487/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:32:25 compute-0 sudo[53557]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:26 compute-0 sudo[53711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipywejqeaholxsgpuychzkzibhfpchyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253545.7330294-462-268043433477129/AnsiballZ_stat.py'
Sep 30 17:32:26 compute-0 sudo[53711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:26 compute-0 python3.9[53713]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:32:26 compute-0 sudo[53711]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:26 compute-0 sudo[53836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aidxzfycdqcropvpuluavrmbmgauygdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253545.7330294-462-268043433477129/AnsiballZ_copy.py'
Sep 30 17:32:26 compute-0 sudo[53836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:26 compute-0 python3.9[53838]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759253545.7330294-462-268043433477129/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:32:26 compute-0 sudo[53836]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:28 compute-0 sudo[53990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icgfmlloulqeekevbtttyumutzmoafgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253547.6525557-504-226336146348631/AnsiballZ_lineinfile.py'
Sep 30 17:32:28 compute-0 sudo[53990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:28 compute-0 python3.9[53992]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:32:28 compute-0 sudo[53990]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:29 compute-0 sudo[54144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leoyofhnrkpungzaknyibvjtsxfmnwvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253549.2543106-534-109443993412604/AnsiballZ_setup.py'
Sep 30 17:32:29 compute-0 sudo[54144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:29 compute-0 python3.9[54146]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 17:32:30 compute-0 sudo[54144]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:30 compute-0 sudo[54228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsztsqivxmtobanmlxxrjjzythzmwqye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253549.2543106-534-109443993412604/AnsiballZ_systemd.py'
Sep 30 17:32:30 compute-0 sudo[54228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:30 compute-0 python3.9[54230]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:32:30 compute-0 sudo[54228]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:31 compute-0 sudo[54382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mybkiuicmaeqmtblgcabearmosegsfun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253551.610014-566-189383412001378/AnsiballZ_setup.py'
Sep 30 17:32:31 compute-0 sudo[54382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:32 compute-0 python3.9[54384]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 17:32:32 compute-0 sudo[54382]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:32 compute-0 sudo[54466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbamfteckyyayxavapwykqqikhneiogk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253551.610014-566-189383412001378/AnsiballZ_systemd.py'
Sep 30 17:32:32 compute-0 sudo[54466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:32 compute-0 python3.9[54468]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 17:32:32 compute-0 chronyd[808]: chronyd exiting
Sep 30 17:32:32 compute-0 systemd[1]: Stopping NTP client/server...
Sep 30 17:32:32 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Sep 30 17:32:32 compute-0 systemd[1]: Stopped NTP client/server.
Sep 30 17:32:32 compute-0 systemd[1]: Starting NTP client/server...
Sep 30 17:32:32 compute-0 chronyd[54477]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Sep 30 17:32:32 compute-0 chronyd[54477]: Frequency -26.003 +/- 0.230 ppm read from /var/lib/chrony/drift
Sep 30 17:32:32 compute-0 chronyd[54477]: Loaded seccomp filter (level 2)
Sep 30 17:32:32 compute-0 systemd[1]: Started NTP client/server.
Sep 30 17:32:33 compute-0 sudo[54466]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:33 compute-0 sshd-session[49676]: Connection closed by 192.168.122.30 port 35488
Sep 30 17:32:33 compute-0 sshd-session[49673]: pam_unix(sshd:session): session closed for user zuul
Sep 30 17:32:33 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Sep 30 17:32:33 compute-0 systemd[1]: session-12.scope: Consumed 23.371s CPU time.
Sep 30 17:32:33 compute-0 systemd-logind[811]: Session 12 logged out. Waiting for processes to exit.
Sep 30 17:32:33 compute-0 systemd-logind[811]: Removed session 12.
Sep 30 17:32:39 compute-0 sshd-session[54503]: Accepted publickey for zuul from 192.168.122.30 port 58414 ssh2: ECDSA SHA256:COqAmbdBUMx+UyDa5XAop4Bi6qvwvYhJQ16rCseuHkQ
Sep 30 17:32:39 compute-0 systemd-logind[811]: New session 13 of user zuul.
Sep 30 17:32:39 compute-0 systemd[1]: Started Session 13 of User zuul.
Sep 30 17:32:39 compute-0 sshd-session[54503]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 17:32:40 compute-0 sudo[54656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkzqfpaieewmxxwabjepcqldoqmwjkts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253559.7130883-24-102270029059289/AnsiballZ_file.py'
Sep 30 17:32:40 compute-0 sudo[54656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:40 compute-0 python3.9[54658]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:32:40 compute-0 sudo[54656]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:41 compute-0 sudo[54808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgvhgwdpjyjyqqwbhazjgnsmseonckmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253560.6174135-48-26982239577702/AnsiballZ_stat.py'
Sep 30 17:32:41 compute-0 sudo[54808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:41 compute-0 python3.9[54810]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:32:41 compute-0 sudo[54808]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:41 compute-0 sudo[54931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffwsvuwpvzxptyukxsgbdcvxsdpdoqrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253560.6174135-48-26982239577702/AnsiballZ_copy.py'
Sep 30 17:32:41 compute-0 sudo[54931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:41 compute-0 python3.9[54933]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759253560.6174135-48-26982239577702/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:32:41 compute-0 sudo[54931]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:42 compute-0 sshd-session[54506]: Connection closed by 192.168.122.30 port 58414
Sep 30 17:32:42 compute-0 sshd-session[54503]: pam_unix(sshd:session): session closed for user zuul
Sep 30 17:32:42 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Sep 30 17:32:42 compute-0 systemd[1]: session-13.scope: Consumed 1.467s CPU time.
Sep 30 17:32:42 compute-0 systemd-logind[811]: Session 13 logged out. Waiting for processes to exit.
Sep 30 17:32:42 compute-0 systemd-logind[811]: Removed session 13.
Sep 30 17:32:48 compute-0 sshd-session[54958]: Accepted publickey for zuul from 192.168.122.30 port 46776 ssh2: ECDSA SHA256:COqAmbdBUMx+UyDa5XAop4Bi6qvwvYhJQ16rCseuHkQ
Sep 30 17:32:48 compute-0 systemd-logind[811]: New session 14 of user zuul.
Sep 30 17:32:48 compute-0 systemd[1]: Started Session 14 of User zuul.
Sep 30 17:32:48 compute-0 sshd-session[54958]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 17:32:49 compute-0 python3.9[55111]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:32:50 compute-0 sudo[55265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsanujfbteekxngxctlrpwpcanlrgepf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253569.8821259-46-258652736304257/AnsiballZ_file.py'
Sep 30 17:32:50 compute-0 sudo[55265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:50 compute-0 python3.9[55267]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:32:50 compute-0 sudo[55265]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:51 compute-0 sudo[55440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnrsassxedxdycalbimqncnjnssbcjoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253570.680369-62-40143729966660/AnsiballZ_stat.py'
Sep 30 17:32:51 compute-0 sudo[55440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:51 compute-0 python3.9[55442]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:32:51 compute-0 sudo[55440]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:52 compute-0 sudo[55563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfslkpbgtoznwjxvlmrybbqjsanqugdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253570.680369-62-40143729966660/AnsiballZ_copy.py'
Sep 30 17:32:52 compute-0 sudo[55563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:52 compute-0 python3.9[55565]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1759253570.680369-62-40143729966660/.source.json _original_basename=.u4a_wwsn follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:32:52 compute-0 sudo[55563]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:53 compute-0 sudo[55715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecodllrndkwpqrmrecywtsfrofnvaqtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253573.1492953-108-170101783159147/AnsiballZ_stat.py'
Sep 30 17:32:53 compute-0 sudo[55715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:53 compute-0 python3.9[55717]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:32:53 compute-0 sudo[55715]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:53 compute-0 sudo[55838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srrnsvrgwifhpalclwkcjooormiqvjxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253573.1492953-108-170101783159147/AnsiballZ_copy.py'
Sep 30 17:32:53 compute-0 sudo[55838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:54 compute-0 python3.9[55840]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759253573.1492953-108-170101783159147/.source _original_basename=.9xt40a72 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:32:54 compute-0 sudo[55838]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:54 compute-0 sudo[55990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwvyuunzznvavtrlsmilgqhftmcabmsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253574.435355-140-116171867272994/AnsiballZ_file.py'
Sep 30 17:32:54 compute-0 sudo[55990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:55 compute-0 python3.9[55992]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:32:55 compute-0 sudo[55990]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:55 compute-0 sudo[56142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-istzzpjdrvsuqgvyepffwpognhuwhsow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253575.2493286-156-266925587304851/AnsiballZ_stat.py'
Sep 30 17:32:55 compute-0 sudo[56142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:55 compute-0 python3.9[56144]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:32:55 compute-0 sudo[56142]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:56 compute-0 sudo[56265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvflipvkpkgxuzedbuvncgcqcgwfasfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253575.2493286-156-266925587304851/AnsiballZ_copy.py'
Sep 30 17:32:56 compute-0 sudo[56265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:56 compute-0 python3.9[56267]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759253575.2493286-156-266925587304851/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:32:56 compute-0 sudo[56265]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:56 compute-0 sudo[56417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymzyghgittnciqdkjemxzprokmypjaau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253576.550903-156-150096007816520/AnsiballZ_stat.py'
Sep 30 17:32:56 compute-0 sudo[56417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:57 compute-0 python3.9[56419]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:32:57 compute-0 sudo[56417]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:57 compute-0 sudo[56540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysywxeimwyhttbljjjgcisqcbaamynck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253576.550903-156-150096007816520/AnsiballZ_copy.py'
Sep 30 17:32:57 compute-0 sudo[56540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:57 compute-0 python3.9[56542]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759253576.550903-156-150096007816520/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:32:57 compute-0 sudo[56540]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:58 compute-0 sudo[56692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofebjxurbzqnhdkooxffxoluwueialdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253577.969273-214-147924458547688/AnsiballZ_file.py'
Sep 30 17:32:58 compute-0 sudo[56692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:58 compute-0 python3.9[56694]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:32:58 compute-0 sudo[56692]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:59 compute-0 sudo[56844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-andumyhfvmibljvsurddqsdptlrlftbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253578.7739062-230-126017541001936/AnsiballZ_stat.py'
Sep 30 17:32:59 compute-0 sudo[56844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:59 compute-0 python3.9[56846]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:32:59 compute-0 sudo[56844]: pam_unix(sudo:session): session closed for user root
Sep 30 17:32:59 compute-0 sudo[56967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nttmbgoftakcrelwsroulttomvcciosu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253578.7739062-230-126017541001936/AnsiballZ_copy.py'
Sep 30 17:32:59 compute-0 sudo[56967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:32:59 compute-0 python3.9[56969]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759253578.7739062-230-126017541001936/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:32:59 compute-0 sudo[56967]: pam_unix(sudo:session): session closed for user root
Sep 30 17:33:00 compute-0 sudo[57119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqmmivwntlsncyipdzxqqwzixsgfhosp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253580.0537636-260-32329165839133/AnsiballZ_stat.py'
Sep 30 17:33:00 compute-0 sudo[57119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:33:00 compute-0 python3.9[57121]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:33:00 compute-0 sudo[57119]: pam_unix(sudo:session): session closed for user root
Sep 30 17:33:00 compute-0 sudo[57242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkanqglabuzlhldbdfrpdmuxatewpfcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253580.0537636-260-32329165839133/AnsiballZ_copy.py'
Sep 30 17:33:00 compute-0 sudo[57242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:33:01 compute-0 python3.9[57244]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759253580.0537636-260-32329165839133/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:33:01 compute-0 sudo[57242]: pam_unix(sudo:session): session closed for user root
Sep 30 17:33:01 compute-0 sudo[57394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjioiybwiijlkybredegoprxrrrsnapk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253581.3943436-290-269360369215823/AnsiballZ_systemd.py'
Sep 30 17:33:01 compute-0 sudo[57394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:33:02 compute-0 python3.9[57396]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:33:02 compute-0 systemd[1]: Reloading.
Sep 30 17:33:02 compute-0 systemd-rc-local-generator[57422]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:33:02 compute-0 systemd-sysv-generator[57426]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:33:02 compute-0 systemd[1]: Reloading.
Sep 30 17:33:02 compute-0 systemd-rc-local-generator[57458]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:33:02 compute-0 systemd-sysv-generator[57463]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:33:02 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Sep 30 17:33:02 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Sep 30 17:33:02 compute-0 sudo[57394]: pam_unix(sudo:session): session closed for user root
Sep 30 17:33:03 compute-0 sudo[57621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvopeixzvhbsbmuahcbebxggcsrrvskm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253583.131282-306-134808553334592/AnsiballZ_stat.py'
Sep 30 17:33:03 compute-0 sudo[57621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:33:03 compute-0 python3.9[57623]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:33:03 compute-0 sudo[57621]: pam_unix(sudo:session): session closed for user root
Sep 30 17:33:03 compute-0 sudo[57744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbuqhycydoawasqqzpxhtnjajzaserbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253583.131282-306-134808553334592/AnsiballZ_copy.py'
Sep 30 17:33:03 compute-0 sudo[57744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:33:04 compute-0 python3.9[57746]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759253583.131282-306-134808553334592/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:33:04 compute-0 sudo[57744]: pam_unix(sudo:session): session closed for user root
Sep 30 17:33:04 compute-0 sudo[57896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sukfylrgpklsugoqnruwaoreesrhlmyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253584.3501031-336-158636513700342/AnsiballZ_stat.py'
Sep 30 17:33:04 compute-0 sudo[57896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:33:04 compute-0 python3.9[57898]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:33:04 compute-0 sudo[57896]: pam_unix(sudo:session): session closed for user root
Sep 30 17:33:05 compute-0 sudo[58019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqzgsldsowianwdnvxeqkecwqgfmkril ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253584.3501031-336-158636513700342/AnsiballZ_copy.py'
Sep 30 17:33:05 compute-0 sudo[58019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:33:05 compute-0 python3.9[58021]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759253584.3501031-336-158636513700342/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:33:05 compute-0 sudo[58019]: pam_unix(sudo:session): session closed for user root
Sep 30 17:33:05 compute-0 sudo[58171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seaaxapakruxqmjlzllfnmvgwkznhfxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253585.707749-366-208077720985577/AnsiballZ_systemd.py'
Sep 30 17:33:05 compute-0 sudo[58171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:33:06 compute-0 python3.9[58173]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:33:06 compute-0 systemd[1]: Reloading.
Sep 30 17:33:06 compute-0 systemd-rc-local-generator[58202]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:33:06 compute-0 systemd-sysv-generator[58206]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:33:06 compute-0 systemd[1]: Reloading.
Sep 30 17:33:06 compute-0 systemd-sysv-generator[58240]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:33:06 compute-0 systemd-rc-local-generator[58237]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:33:06 compute-0 systemd[1]: Starting Create netns directory...
Sep 30 17:33:06 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Sep 30 17:33:06 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Sep 30 17:33:06 compute-0 systemd[1]: Finished Create netns directory.
Sep 30 17:33:06 compute-0 sudo[58171]: pam_unix(sudo:session): session closed for user root
Sep 30 17:33:07 compute-0 python3.9[58398]: ansible-ansible.builtin.service_facts Invoked
Sep 30 17:33:07 compute-0 network[58415]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Sep 30 17:33:07 compute-0 network[58416]: 'network-scripts' will be removed from distribution in near future.
Sep 30 17:33:07 compute-0 network[58417]: It is advised to switch to 'NetworkManager' instead for network management.
Sep 30 17:33:11 compute-0 sudo[58679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwgviqzidowzbwfiyqqjgehbzsxevwxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253590.9308612-398-250207916201860/AnsiballZ_systemd.py'
Sep 30 17:33:11 compute-0 sudo[58679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:33:11 compute-0 python3.9[58681]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:33:11 compute-0 systemd[1]: Reloading.
Sep 30 17:33:11 compute-0 systemd-rc-local-generator[58709]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:33:11 compute-0 systemd-sysv-generator[58712]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:33:11 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Sep 30 17:33:12 compute-0 iptables.init[58721]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Sep 30 17:33:12 compute-0 iptables.init[58721]: iptables: Flushing firewall rules: [  OK  ]
Sep 30 17:33:12 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Sep 30 17:33:12 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Sep 30 17:33:12 compute-0 sudo[58679]: pam_unix(sudo:session): session closed for user root
Sep 30 17:33:12 compute-0 sudo[58915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejebtkczhxwlxjrxotchqgbdgdflttny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253592.2752883-398-118728741544133/AnsiballZ_systemd.py'
Sep 30 17:33:12 compute-0 sudo[58915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:33:12 compute-0 python3.9[58917]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:33:12 compute-0 sudo[58915]: pam_unix(sudo:session): session closed for user root
Sep 30 17:33:13 compute-0 sudo[59069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eafzaaxxjziliiyhoxxgbchtyywvpbsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253593.5718098-430-79346889257302/AnsiballZ_systemd.py'
Sep 30 17:33:13 compute-0 sudo[59069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:33:14 compute-0 python3.9[59071]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:33:14 compute-0 systemd[1]: Reloading.
Sep 30 17:33:14 compute-0 systemd-rc-local-generator[59102]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:33:14 compute-0 systemd-sysv-generator[59105]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:33:14 compute-0 systemd[1]: Starting Netfilter Tables...
Sep 30 17:33:14 compute-0 systemd[1]: Finished Netfilter Tables.
Sep 30 17:33:14 compute-0 sudo[59069]: pam_unix(sudo:session): session closed for user root
Sep 30 17:33:15 compute-0 sudo[59262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqyxqplzpiprgnoebekhvjhpnddtegfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253594.8429768-446-93323591366190/AnsiballZ_command.py'
Sep 30 17:33:15 compute-0 sudo[59262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:33:15 compute-0 python3.9[59264]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:33:15 compute-0 sudo[59262]: pam_unix(sudo:session): session closed for user root
Sep 30 17:33:16 compute-0 sudo[59415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alsfwyelrvnekspxaevdjscfuqoxvvph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253595.9682703-474-117960229432804/AnsiballZ_stat.py'
Sep 30 17:33:16 compute-0 sudo[59415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:33:16 compute-0 python3.9[59417]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:33:16 compute-0 sudo[59415]: pam_unix(sudo:session): session closed for user root
Sep 30 17:33:16 compute-0 sudo[59540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqpzlitfccktkdncpshhvngwgvmpzzlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253595.9682703-474-117960229432804/AnsiballZ_copy.py'
Sep 30 17:33:16 compute-0 sudo[59540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:33:17 compute-0 python3.9[59542]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759253595.9682703-474-117960229432804/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:33:17 compute-0 sudo[59540]: pam_unix(sudo:session): session closed for user root
Sep 30 17:33:17 compute-0 python3.9[59693]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 17:33:17 compute-0 polkitd[6289]: Registered Authentication Agent for unix-process:59695:219662 (system bus name :1.526 [/usr/bin/pkttyagent --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Sep 30 17:33:42 compute-0 polkitd[6289]: Unregistered Authentication Agent for unix-process:59695:219662 (system bus name :1.526, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Sep 30 17:33:42 compute-0 polkit-agent-helper-1[59707]: pam_unix(polkit-1:auth): conversation failed
Sep 30 17:33:42 compute-0 polkit-agent-helper-1[59707]: pam_unix(polkit-1:auth): auth could not identify password for [root]
Sep 30 17:33:42 compute-0 polkitd[6289]: Operator of unix-process:59695:219662 FAILED to authenticate to gain authorization for action org.freedesktop.systemd1.manage-units for system-bus-name::1.525 [<unknown>] (owned by unix-user:zuul)
Sep 30 17:33:43 compute-0 sshd-session[54961]: Connection closed by 192.168.122.30 port 46776
Sep 30 17:33:43 compute-0 sshd-session[54958]: pam_unix(sshd:session): session closed for user zuul
Sep 30 17:33:43 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Sep 30 17:33:43 compute-0 systemd[1]: session-14.scope: Consumed 18.093s CPU time.
Sep 30 17:33:43 compute-0 systemd-logind[811]: Session 14 logged out. Waiting for processes to exit.
Sep 30 17:33:43 compute-0 systemd-logind[811]: Removed session 14.
Sep 30 17:33:53 compute-0 sshd[1010]: drop connection #0 from [14.103.233.27]:52794 on [38.102.83.202]:22 penalty: exceeded LoginGraceTime
Sep 30 17:33:56 compute-0 sshd-session[59733]: Accepted publickey for zuul from 192.168.122.30 port 45358 ssh2: ECDSA SHA256:COqAmbdBUMx+UyDa5XAop4Bi6qvwvYhJQ16rCseuHkQ
Sep 30 17:33:56 compute-0 systemd-logind[811]: New session 15 of user zuul.
Sep 30 17:33:56 compute-0 systemd[1]: Started Session 15 of User zuul.
Sep 30 17:33:56 compute-0 sshd-session[59733]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 17:33:57 compute-0 python3.9[59886]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:33:58 compute-0 sudo[60040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwnjhpqqwgzdpfxvnowymikutdrqwhtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253637.792362-46-169745874957180/AnsiballZ_file.py'
Sep 30 17:33:58 compute-0 sudo[60040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:33:58 compute-0 python3.9[60042]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:33:58 compute-0 sudo[60040]: pam_unix(sudo:session): session closed for user root
Sep 30 17:33:59 compute-0 sudo[60215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyrithykhtojoatcvyfbljekglavvsdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253638.6415198-62-206230008666657/AnsiballZ_stat.py'
Sep 30 17:33:59 compute-0 sudo[60215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:33:59 compute-0 python3.9[60217]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:33:59 compute-0 sudo[60215]: pam_unix(sudo:session): session closed for user root
Sep 30 17:33:59 compute-0 sudo[60293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njfaewhnhlbeuzbemqgkzhkcdsonvdov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253638.6415198-62-206230008666657/AnsiballZ_file.py'
Sep 30 17:33:59 compute-0 sudo[60293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:33:59 compute-0 python3.9[60295]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.v9azj9kt recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:33:59 compute-0 sudo[60293]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:00 compute-0 sudo[60445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oexyttnypsnksjrletsqjvwqjzpygfrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253640.240937-102-175480078888843/AnsiballZ_stat.py'
Sep 30 17:34:00 compute-0 sudo[60445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:00 compute-0 python3.9[60447]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:34:00 compute-0 sudo[60445]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:00 compute-0 sudo[60523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihlzntakvnhvgtlbxynwergimbhndklo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253640.240937-102-175480078888843/AnsiballZ_file.py'
Sep 30 17:34:00 compute-0 sudo[60523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:01 compute-0 python3.9[60525]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.kuqqh8af recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:34:01 compute-0 sudo[60523]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:01 compute-0 sudo[60675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thddjijlsrtfxugincmurewrlielnycr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253641.4926188-128-166820842274569/AnsiballZ_file.py'
Sep 30 17:34:01 compute-0 sudo[60675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:01 compute-0 python3.9[60677]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:34:01 compute-0 sudo[60675]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:02 compute-0 sudo[60827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdlhgwoyviqgtyozowqexraqeicavtue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253642.2382824-144-126952714523878/AnsiballZ_stat.py'
Sep 30 17:34:02 compute-0 sudo[60827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:02 compute-0 python3.9[60829]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:34:02 compute-0 sudo[60827]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:02 compute-0 sudo[60905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kibcnyrxvsplyfyghnhjtnpmlpalkisp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253642.2382824-144-126952714523878/AnsiballZ_file.py'
Sep 30 17:34:02 compute-0 sudo[60905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:03 compute-0 python3.9[60907]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:34:03 compute-0 sudo[60905]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:03 compute-0 sudo[61057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmosftjncygniwzzqpchmbwuymbpjzqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253643.362895-144-30781617158403/AnsiballZ_stat.py'
Sep 30 17:34:03 compute-0 sudo[61057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:03 compute-0 python3.9[61059]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:34:03 compute-0 sudo[61057]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:04 compute-0 sudo[61135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zikymajluwgccobyvzjgzktqiicazctd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253643.362895-144-30781617158403/AnsiballZ_file.py'
Sep 30 17:34:04 compute-0 sudo[61135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:04 compute-0 python3.9[61137]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:34:04 compute-0 sudo[61135]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:04 compute-0 sudo[61287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrspmwicrfvvhziwqoakorbhqiexjvpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253644.602366-190-178508817060486/AnsiballZ_file.py'
Sep 30 17:34:04 compute-0 sudo[61287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:05 compute-0 python3.9[61289]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:34:05 compute-0 sudo[61287]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:05 compute-0 sudo[61439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugmjgbuymdodvfrpcqqqihwxedvosbsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253645.341537-206-53963412148967/AnsiballZ_stat.py'
Sep 30 17:34:05 compute-0 sudo[61439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:05 compute-0 python3.9[61441]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:34:05 compute-0 sudo[61439]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:06 compute-0 sudo[61517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsdeftkdkvqfutgopyluetvxidomrwyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253645.341537-206-53963412148967/AnsiballZ_file.py'
Sep 30 17:34:06 compute-0 sudo[61517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:06 compute-0 python3.9[61519]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:34:06 compute-0 sudo[61517]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:06 compute-0 sudo[61669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnqatmxpsngxkehydhxcljknngesffux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253646.6633687-230-108530124304843/AnsiballZ_stat.py'
Sep 30 17:34:06 compute-0 sudo[61669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:07 compute-0 python3.9[61671]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:34:07 compute-0 sudo[61669]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:07 compute-0 sudo[61747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmziijwafmfgcgfanzxuidtcdszrgnkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253646.6633687-230-108530124304843/AnsiballZ_file.py'
Sep 30 17:34:07 compute-0 sudo[61747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:07 compute-0 python3.9[61749]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:34:07 compute-0 sudo[61747]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:08 compute-0 sudo[61899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehmczpteiifcqfwbojelcerywwhlgdur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253648.0596876-254-13444313986930/AnsiballZ_systemd.py'
Sep 30 17:34:08 compute-0 sudo[61899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:08 compute-0 python3.9[61901]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:34:08 compute-0 systemd[1]: Reloading.
Sep 30 17:34:09 compute-0 systemd-rc-local-generator[61926]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:34:09 compute-0 systemd-sysv-generator[61931]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:34:09 compute-0 sudo[61899]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:09 compute-0 sudo[62087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzqbotbugvesgwrvvqznjxolssaipnit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253649.526467-270-278471198757603/AnsiballZ_stat.py'
Sep 30 17:34:09 compute-0 sudo[62087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:10 compute-0 python3.9[62089]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:34:10 compute-0 sudo[62087]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:10 compute-0 sudo[62165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-surcmrtwhvvwamlbunshftqtmdnnhhno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253649.526467-270-278471198757603/AnsiballZ_file.py'
Sep 30 17:34:10 compute-0 sudo[62165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:10 compute-0 python3.9[62167]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:34:10 compute-0 sudo[62165]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:11 compute-0 sudo[62317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhuhklywbgqlpnybkokgxpsshufqysbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253650.7948477-294-7390195169881/AnsiballZ_stat.py'
Sep 30 17:34:11 compute-0 sudo[62317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:11 compute-0 python3.9[62319]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:34:11 compute-0 sudo[62317]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:11 compute-0 sudo[62395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ramtlxbwldkoffphregaqiydsgaskehy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253650.7948477-294-7390195169881/AnsiballZ_file.py'
Sep 30 17:34:11 compute-0 sudo[62395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:11 compute-0 python3.9[62397]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:34:11 compute-0 sudo[62395]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:12 compute-0 sudo[62547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-espxnziuflbqfgakfkhrfgvjhlszqdee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253652.0911908-318-229330257988451/AnsiballZ_systemd.py'
Sep 30 17:34:12 compute-0 sudo[62547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:12 compute-0 python3.9[62549]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:34:12 compute-0 systemd[1]: Reloading.
Sep 30 17:34:12 compute-0 systemd-rc-local-generator[62577]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:34:12 compute-0 systemd-sysv-generator[62580]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:34:13 compute-0 systemd[1]: Starting Create netns directory...
Sep 30 17:34:13 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Sep 30 17:34:13 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Sep 30 17:34:13 compute-0 systemd[1]: Finished Create netns directory.
Sep 30 17:34:13 compute-0 sudo[62547]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:13 compute-0 python3.9[62741]: ansible-ansible.builtin.service_facts Invoked
Sep 30 17:34:13 compute-0 network[62758]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Sep 30 17:34:13 compute-0 network[62759]: 'network-scripts' will be removed from distribution in near future.
Sep 30 17:34:13 compute-0 network[62760]: It is advised to switch to 'NetworkManager' instead for network management.
Sep 30 17:34:19 compute-0 sudo[63021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmgkmeyrsjbtiiwxkrjmlgmjfembkrrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253659.1399794-370-222051965092260/AnsiballZ_stat.py'
Sep 30 17:34:19 compute-0 sudo[63021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:19 compute-0 python3.9[63023]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:34:19 compute-0 sudo[63021]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:19 compute-0 sudo[63099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umqggnwcuoeznfimrbgxgsfvgfxqlkty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253659.1399794-370-222051965092260/AnsiballZ_file.py'
Sep 30 17:34:19 compute-0 sudo[63099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:20 compute-0 python3.9[63101]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:34:20 compute-0 sudo[63099]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:20 compute-0 sudo[63251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpwyemhcmizakugslqsjlfezxjpganpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253660.6058729-396-244666054549334/AnsiballZ_file.py'
Sep 30 17:34:20 compute-0 sudo[63251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:21 compute-0 python3.9[63253]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:34:21 compute-0 sudo[63251]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:21 compute-0 sudo[63403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bshbrpaosshsktaaybmilzywuoqjeeyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253661.458169-412-255595512945141/AnsiballZ_stat.py'
Sep 30 17:34:21 compute-0 sudo[63403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:22 compute-0 python3.9[63405]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:34:22 compute-0 sudo[63403]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:22 compute-0 sudo[63526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmwcqksnrvukvftrftmebnbpuictflai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253661.458169-412-255595512945141/AnsiballZ_copy.py'
Sep 30 17:34:22 compute-0 sudo[63526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:22 compute-0 python3.9[63528]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759253661.458169-412-255595512945141/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:34:22 compute-0 sudo[63526]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:23 compute-0 sudo[63678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioahxpngdugqikvskaykxrgazlpdrhwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253663.0484185-448-280961061698303/AnsiballZ_timezone.py'
Sep 30 17:34:23 compute-0 sudo[63678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:23 compute-0 python3.9[63680]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Sep 30 17:34:23 compute-0 systemd[1]: Starting Time & Date Service...
Sep 30 17:34:23 compute-0 systemd[1]: Started Time & Date Service.
Sep 30 17:34:23 compute-0 sudo[63678]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:24 compute-0 sudo[63834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adxyugvqpeyyqdovebepqwkqonmamrox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253664.1826298-466-127226027781774/AnsiballZ_file.py'
Sep 30 17:34:24 compute-0 sudo[63834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:24 compute-0 python3.9[63836]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:34:24 compute-0 sudo[63834]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:25 compute-0 sudo[63986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpmjpjlgxslstlzrgibeighbladtzdxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253664.9123478-482-86302157270596/AnsiballZ_stat.py'
Sep 30 17:34:25 compute-0 sudo[63986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:25 compute-0 python3.9[63988]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:34:25 compute-0 sudo[63986]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:25 compute-0 sudo[64109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqxvoqlixcofgcvavwomlmwkcbfowjgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253664.9123478-482-86302157270596/AnsiballZ_copy.py'
Sep 30 17:34:25 compute-0 sudo[64109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:25 compute-0 python3.9[64111]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759253664.9123478-482-86302157270596/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:34:25 compute-0 sudo[64109]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:26 compute-0 sudo[64261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpwauatkktschytnsmhrjqntqfdqluul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253666.2859123-512-142940758730200/AnsiballZ_stat.py'
Sep 30 17:34:26 compute-0 sudo[64261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:26 compute-0 python3.9[64263]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:34:26 compute-0 sudo[64261]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:27 compute-0 sudo[64384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-timpvbbgvocvjthdnbhkxveyixlyeuzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253666.2859123-512-142940758730200/AnsiballZ_copy.py'
Sep 30 17:34:27 compute-0 sudo[64384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:27 compute-0 python3.9[64386]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759253666.2859123-512-142940758730200/.source.yaml _original_basename=.nqsb8abc follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:34:27 compute-0 sudo[64384]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:28 compute-0 sudo[64536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqhvggauwwgqxwdvrqgxitkcbxheefbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253667.8172517-542-218538747031483/AnsiballZ_stat.py'
Sep 30 17:34:28 compute-0 sudo[64536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:28 compute-0 python3.9[64538]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:34:28 compute-0 sudo[64536]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:28 compute-0 sudo[64659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grjxsvivprqiihgqwtreoxuguoohivgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253667.8172517-542-218538747031483/AnsiballZ_copy.py'
Sep 30 17:34:28 compute-0 sudo[64659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:28 compute-0 python3.9[64661]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759253667.8172517-542-218538747031483/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:34:29 compute-0 sudo[64659]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:29 compute-0 sudo[64811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbjbeggojvunmkaxpmxqrpcefhjismbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253669.2591994-572-220489216659622/AnsiballZ_command.py'
Sep 30 17:34:29 compute-0 sudo[64811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:29 compute-0 python3.9[64813]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:34:29 compute-0 sudo[64811]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:30 compute-0 sudo[64964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnghboccfhzydppfnvoysqxruuzlcsgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253670.1312582-588-68262114702285/AnsiballZ_command.py'
Sep 30 17:34:30 compute-0 sudo[64964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:30 compute-0 python3.9[64966]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:34:30 compute-0 sudo[64964]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:31 compute-0 sudo[65117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwpaagtccljhcndwuwkrkehjlppodeqc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759253670.837761-604-150077835814537/AnsiballZ_edpm_nftables_from_files.py'
Sep 30 17:34:31 compute-0 sudo[65117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:31 compute-0 python3[65119]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Sep 30 17:34:31 compute-0 sudo[65117]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:32 compute-0 sudo[65269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yabodebyjusxafgybfvocmbjugwlppxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253671.808235-620-31282026879887/AnsiballZ_stat.py'
Sep 30 17:34:32 compute-0 sudo[65269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:32 compute-0 python3.9[65271]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:34:32 compute-0 sudo[65269]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:32 compute-0 sudo[65392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsqlzghkvslhrmbaixoabuamebzxeppg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253671.808235-620-31282026879887/AnsiballZ_copy.py'
Sep 30 17:34:32 compute-0 sudo[65392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:32 compute-0 python3.9[65394]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759253671.808235-620-31282026879887/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:34:32 compute-0 sudo[65392]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:33 compute-0 sudo[65544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwhwkowkgjdmhpbkuejpbkswaplbtrci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253673.2191668-650-105005753805609/AnsiballZ_stat.py'
Sep 30 17:34:33 compute-0 sudo[65544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:33 compute-0 python3.9[65546]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:34:33 compute-0 sudo[65544]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:34 compute-0 sudo[65667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixkimhiuwanqxjysxdgyozgrrwcfpylv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253673.2191668-650-105005753805609/AnsiballZ_copy.py'
Sep 30 17:34:34 compute-0 sudo[65667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:34 compute-0 python3.9[65669]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759253673.2191668-650-105005753805609/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:34:34 compute-0 sudo[65667]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:34 compute-0 sudo[65819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcdhmiqruzluocsxzuuqmrbsrxfjkqfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253674.5080261-680-96648050167918/AnsiballZ_stat.py'
Sep 30 17:34:34 compute-0 sudo[65819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:34 compute-0 python3.9[65821]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:34:35 compute-0 sudo[65819]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:35 compute-0 sudo[65942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrxknrpsrfseasosabayczdewolbavns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253674.5080261-680-96648050167918/AnsiballZ_copy.py'
Sep 30 17:34:35 compute-0 sudo[65942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:35 compute-0 python3.9[65944]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759253674.5080261-680-96648050167918/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:34:35 compute-0 sudo[65942]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:36 compute-0 sudo[66094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgvkjtbzklgulqoyaexwrzgbutymjhga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253675.9991455-710-39713903907548/AnsiballZ_stat.py'
Sep 30 17:34:36 compute-0 sudo[66094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:36 compute-0 python3.9[66096]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:34:36 compute-0 sudo[66094]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:36 compute-0 sudo[66217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccxnittszonvibleoheekfjsxcmrwsbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253675.9991455-710-39713903907548/AnsiballZ_copy.py'
Sep 30 17:34:36 compute-0 sudo[66217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:37 compute-0 python3.9[66219]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759253675.9991455-710-39713903907548/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:34:37 compute-0 sudo[66217]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:37 compute-0 sudo[66369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbpqhgqchpsrdolhdiqebgfbzwpiyipk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253677.4253345-740-38192013370246/AnsiballZ_stat.py'
Sep 30 17:34:37 compute-0 sudo[66369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:37 compute-0 python3.9[66371]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:34:37 compute-0 sudo[66369]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:38 compute-0 sudo[66492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkrtjqwajruozequfizhcugrhnbxfvox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253677.4253345-740-38192013370246/AnsiballZ_copy.py'
Sep 30 17:34:38 compute-0 sudo[66492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:38 compute-0 python3.9[66494]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759253677.4253345-740-38192013370246/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:34:38 compute-0 sudo[66492]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:39 compute-0 sudo[66644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fidqanvypfamdicxuavxqybxcllvnlca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253678.7880359-770-204421702817740/AnsiballZ_file.py'
Sep 30 17:34:39 compute-0 sudo[66644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:39 compute-0 python3.9[66646]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:34:39 compute-0 sudo[66644]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:39 compute-0 sudo[66796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udjlwnmsvokitcnbfptfdspqpdfmljgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253679.4508286-786-57091328488670/AnsiballZ_command.py'
Sep 30 17:34:39 compute-0 sudo[66796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:39 compute-0 python3.9[66798]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:34:39 compute-0 sudo[66796]: pam_unix(sudo:session): session closed for user root
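The task above only validates: all five edpm nft fragments are concatenated and parsed with `nft -c`, so nothing is loaded into the kernel yet. A plain-shell restatement of the logged command:

    set -o pipefail
    # Parse-check the assembled ruleset; -c means check only, do not apply.
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -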
Sep 30 17:34:40 compute-0 sudo[66955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-punkppzthdzoygmexstaqomuhvomlcka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253680.234114-802-218652220592005/AnsiballZ_blockinfile.py'
Sep 30 17:34:40 compute-0 sudo[66955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:40 compute-0 python3.9[66957]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:34:40 compute-0 sudo[66955]: pam_unix(sudo:session): session closed for user root
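For persistence across reboots, the blockinfile task above maintains a marked block of include statements in /etc/sysconfig/nftables.conf and validates the result with its validate=nft -c -f %s option. A minimal sketch of the block it manages, using the markers and include lines from the logged parameters (appended here with a heredoc for simplicity; the real module replaces any existing marked block in place rather than appending):

    cat >> /etc/sysconfig/nftables.conf <<'EOF'
    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK
    EOF
    # Same syntax check the task runs through its validate= option.
    nft -c -f /etc/sysconfig/nftables.conf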
Sep 30 17:34:41 compute-0 sudo[67108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdpwhmrtlxllhaaxiamkfgzkbhsjqbfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253681.2169232-820-155995090714589/AnsiballZ_file.py'
Sep 30 17:34:41 compute-0 sudo[67108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:41 compute-0 python3.9[67110]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:34:41 compute-0 sudo[67108]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:42 compute-0 sudo[67260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxbdsrtolmuqlhinjrbmneyjnxaeremx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253681.8014617-820-84113609373562/AnsiballZ_file.py'
Sep 30 17:34:42 compute-0 sudo[67260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:42 compute-0 chronyd[54477]: Selected source 162.159.200.1 (pool.ntp.org)
Sep 30 17:34:42 compute-0 python3.9[67262]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:34:42 compute-0 sudo[67260]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:43 compute-0 sudo[67412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kubdqlusmmuovncwcvubvqlidsbktxlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253682.6578507-850-166542412412693/AnsiballZ_mount.py'
Sep 30 17:34:43 compute-0 sudo[67412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:43 compute-0 python3.9[67414]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Sep 30 17:34:43 compute-0 sudo[67412]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:43 compute-0 sudo[67565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmpjaspotwraalyhpdklnaikhtekcngx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253683.4525328-850-269362549627100/AnsiballZ_mount.py'
Sep 30 17:34:43 compute-0 sudo[67565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:43 compute-0 python3.9[67567]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Sep 30 17:34:43 compute-0 sudo[67565]: pam_unix(sudo:session): session closed for user root
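The two ansible.posix.mount tasks above (state=mounted, boot=True) each write an fstab entry and mount it; the mountpoints were created just before with owner zuul and group hugetlbfs. A rough shell equivalent built only from the logged paths, pagesize options, and dump/passno values:

    # 1G and 2M hugepage mounts, persisted in fstab then mounted.
    grep -q ' /dev/hugepages1G ' /etc/fstab || \
        echo 'none /dev/hugepages1G hugetlbfs pagesize=1G 0 0' >> /etc/fstab
    grep -q ' /dev/hugepages2M ' /etc/fstab || \
        echo 'none /dev/hugepages2M hugetlbfs pagesize=2M 0 0' >> /etc/fstab
    mount /dev/hugepages1G
    mount /dev/hugepages2M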
Sep 30 17:34:44 compute-0 sshd-session[59736]: Connection closed by 192.168.122.30 port 45358
Sep 30 17:34:44 compute-0 sshd-session[59733]: pam_unix(sshd:session): session closed for user zuul
Sep 30 17:34:44 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Sep 30 17:34:44 compute-0 systemd[1]: session-15.scope: Consumed 29.834s CPU time.
Sep 30 17:34:44 compute-0 systemd-logind[811]: Session 15 logged out. Waiting for processes to exit.
Sep 30 17:34:44 compute-0 systemd-logind[811]: Removed session 15.
Sep 30 17:34:49 compute-0 sshd-session[67593]: Accepted publickey for zuul from 192.168.122.30 port 44824 ssh2: ECDSA SHA256:COqAmbdBUMx+UyDa5XAop4Bi6qvwvYhJQ16rCseuHkQ
Sep 30 17:34:50 compute-0 systemd-logind[811]: New session 16 of user zuul.
Sep 30 17:34:50 compute-0 systemd[1]: Started Session 16 of User zuul.
Sep 30 17:34:50 compute-0 sshd-session[67593]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 17:34:50 compute-0 sudo[67746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlalewxwlioquoidwsfnjnpgsnpaceiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253690.104752-17-191868706728619/AnsiballZ_tempfile.py'
Sep 30 17:34:50 compute-0 sudo[67746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:50 compute-0 python3.9[67748]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Sep 30 17:34:50 compute-0 sudo[67746]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:51 compute-0 sudo[67898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njbvhfwwtrpazmrwtdrcdhkewbwrsqsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253690.9888465-41-110318895450638/AnsiballZ_stat.py'
Sep 30 17:34:51 compute-0 sudo[67898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:51 compute-0 python3.9[67900]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:34:51 compute-0 sudo[67898]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:52 compute-0 sudo[68050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvlkktlhzgiuftyottaiqxxhxqdbacap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253691.9936254-61-157912307280193/AnsiballZ_setup.py'
Sep 30 17:34:52 compute-0 sudo[68050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:52 compute-0 python3.9[68052]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:34:52 compute-0 sudo[68050]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:53 compute-0 sudo[68202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovzixhvjvpypaqrzdxvpxxpdnxqpohaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253693.038192-78-159688998751737/AnsiballZ_blockinfile.py'
Sep 30 17:34:53 compute-0 sudo[68202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:53 compute-0 python3.9[68204]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCc3938ID3mnGjsgZen6kQCNM5mkVWqANJocuRy3sXOMN63dtyjhBgL73dvNVc4/MHyzxPDQuzK8tshXHOcqwYvNyllWa9UhEuAdhcNXcRKSELVxmBLRWZx/tsxp7Ws5/jqm87BYWYBOH23DCI96hjzPNZvDj8g24u1gnFFIDlGQELa7bj3YLXw2mWWadQeLxX35z9zMP39YZLf/2F8zAFy27zfi5U4Ni1I6YXvTL+DNwg7Ulluud4fY+sf3ds4pU5htK63pEPYw1f4eI/82wYgnmmEjUqBXGUraTbHG7EoY0kg8bnebUO02l1uSbV+YM/5LNKomXhUy/kUhb9l1uqNuqXvimRH4xVgJ9Mn0cJ2WGhlnkU1gqx0p1FNE01EWx7Spbz4uwVESHAmr67aymcw0Da5R+P9sI5lMqVNJHUeQiAq9bA3X9EbU9oIBIzoZCm7x5N8UpcvzrK0tNMaVLymDnsI8Rkc1MJpuTboQqnsrWs1q2SxaKY2vfqidEBk+Xs=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILy+MaglT2Cqq/Z1fTckQQdU2y2eh3D3Okv7pfMd4ZvV
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKG527neZJvIIF0UdmoBKFMIwvlh64Ua1Pir0KM8tM5Fy8tZbjiOY/Dz3agm+i5OWkd7fXEaYOfPR/rFSi9+L8s=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/5i5Z2213BrqRzlkXUPKi2V6ft/sZowq7ddp+Dq/QqjnkediXByZsJmLLkuA+smrhUZwo5vyubq2HeUmZ3U1EenWRQdC11cJP4ll9+UV0iP2vlc4cMh+DV62ujsM8T15I5/7JnPBcvrGrTJTmpQSQoCm2yD2q/v4Kx2V27sLj8ZlX64zDSBOYy+KhjhBuUM4gEbyrRzO2PqNsMeDrDGr3QFiyGNe8qS2KHmuEa4QFJnumNPJrxYBdTjcsKMZOeuVw2a33JPia0kDgKtaNDV7Izq8h9DlidYk1/aPo6MhfwzYDkRUaKSVhqM1oEDQWc40AK7EX4S00KLr5Nix8bd2nqEZsbD5lk/6wKNR1xdutyZt0GcnOEVJB7+VWN6Y3COvwe9Q1GSKCAhMthkn0Vd9ZvrwiFVKpMUyWD1b74vjHcDu8UOcJlVoqol0jJYEqDCy6mRh0l4Q2PfmyFpVMJ1ib1hV4dPIfzJIkuON6jMedqsKPGZnio8U1E/EMWBlaVn8=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICeWw7E9xsgcxKn1cBOcDfvvFIX4M5Blc+gMQNI96O43
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG/CY8xOSIOy2V9qTWOkLlPGEg36qW1s4MO9P37ZVKfdA8ded8m++iIKGFCGxQiTUk0W+13bPq0LIPsJgw+4osM=
                                             create=True mode=0644 path=/tmp/ansible.wkc4mkeb state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:34:53 compute-0 sudo[68202]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:53 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Sep 30 17:34:54 compute-0 sudo[68356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utvtdkwodtntesjdnwmxdtotacqorvjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253693.8351827-94-88567800492718/AnsiballZ_command.py'
Sep 30 17:34:54 compute-0 sudo[68356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:54 compute-0 python3.9[68358]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.wkc4mkeb' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:34:54 compute-0 sudo[68356]: pam_unix(sudo:session): session closed for user root
Sep 30 17:34:55 compute-0 sudo[68510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsgelpnyhouvccibirzkfkcyoeohkznu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253694.5976706-110-239863020501806/AnsiballZ_file.py'
Sep 30 17:34:55 compute-0 sudo[68510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:34:55 compute-0 python3.9[68512]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.wkc4mkeb state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:34:55 compute-0 sudo[68510]: pam_unix(sudo:session): session closed for user root
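The known-hosts sequence above is: create a temp file, write the marked block of host keys into it, overwrite /etc/ssh/ssh_known_hosts from that file, then delete the temp file. A compact shell sketch of the same flow (the actual key lines are the ones shown verbatim in the logged block; the temp file name is whatever ansible.builtin.tempfile returns):

    tmp=$(mktemp /tmp/ansible.XXXXXXXX)
    {
        echo '# BEGIN ANSIBLE MANAGED BLOCK'
        # ... one "<host-aliases> <key-type> <key>" line per entry, as logged above ...
        echo '# END ANSIBLE MANAGED BLOCK'
    } > "$tmp"
    cat "$tmp" > /etc/ssh/ssh_known_hosts
    rm -f "$tmp"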
Sep 30 17:34:55 compute-0 sshd-session[67596]: Connection closed by 192.168.122.30 port 44824
Sep 30 17:34:55 compute-0 sshd-session[67593]: pam_unix(sshd:session): session closed for user zuul
Sep 30 17:34:55 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Sep 30 17:34:55 compute-0 systemd[1]: session-16.scope: Consumed 3.256s CPU time.
Sep 30 17:34:55 compute-0 systemd-logind[811]: Session 16 logged out. Waiting for processes to exit.
Sep 30 17:34:55 compute-0 systemd-logind[811]: Removed session 16.
Sep 30 17:35:01 compute-0 sshd-session[68538]: Accepted publickey for zuul from 192.168.122.30 port 41088 ssh2: ECDSA SHA256:COqAmbdBUMx+UyDa5XAop4Bi6qvwvYhJQ16rCseuHkQ
Sep 30 17:35:01 compute-0 systemd-logind[811]: New session 17 of user zuul.
Sep 30 17:35:01 compute-0 systemd[1]: Started Session 17 of User zuul.
Sep 30 17:35:01 compute-0 sshd-session[68538]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 17:35:02 compute-0 python3.9[68691]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:35:03 compute-0 sudo[68845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swalsaqgtxtrtvohbgrmsuzsghwcrhnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253703.0324068-44-51415105653585/AnsiballZ_systemd.py'
Sep 30 17:35:03 compute-0 sudo[68845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:03 compute-0 python3.9[68847]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Sep 30 17:35:03 compute-0 sudo[68845]: pam_unix(sudo:session): session closed for user root
Sep 30 17:35:04 compute-0 sudo[68999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsqbporwqmeqcuedlecqtxixqoncbwfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253704.154021-60-222847514197113/AnsiballZ_systemd.py'
Sep 30 17:35:04 compute-0 sudo[68999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:04 compute-0 python3.9[69001]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 17:35:04 compute-0 sudo[68999]: pam_unix(sudo:session): session closed for user root
Sep 30 17:35:05 compute-0 sudo[69152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwjgwimivsehwhiywljxuboewxjcxfgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253704.9762256-78-53224278164677/AnsiballZ_command.py'
Sep 30 17:35:05 compute-0 sudo[69152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:05 compute-0 python3.9[69154]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:35:05 compute-0 sudo[69152]: pam_unix(sudo:session): session closed for user root
Sep 30 17:35:06 compute-0 sudo[69305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejqufnwgavcjrydlrtgbqxnhypzyikrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253705.8683627-94-273588429349854/AnsiballZ_stat.py'
Sep 30 17:35:06 compute-0 sudo[69305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:06 compute-0 python3.9[69307]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:35:06 compute-0 sudo[69305]: pam_unix(sudo:session): session closed for user root
Sep 30 17:35:06 compute-0 sudo[69459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veisuqezqbhxhebrivzwojnavzmkfymo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253706.656973-110-130277509821497/AnsiballZ_command.py'
Sep 30 17:35:06 compute-0 sudo[69459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:07 compute-0 python3.9[69461]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:35:07 compute-0 sudo[69459]: pam_unix(sudo:session): session closed for user root
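This session is the apply path: the chains file is loaded on its own, and the flush/rules/update-jumps fragments are re-applied before the edpm-rules.nft.changed marker is removed. The conditional below is inferred from the stat/remove pair surrounding the pipeline, so treat it as an assumption rather than the task's literal logic:

    nft -f /etc/nftables/edpm-chains.nft
    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        set -o pipefail
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
        rm -f /etc/nftables/edpm-rules.nft.changed
    fi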
Sep 30 17:35:07 compute-0 sudo[69614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gooocqcyaluzpgqziovabrgnbdmpjnyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253707.3629599-126-215705764420311/AnsiballZ_file.py'
Sep 30 17:35:07 compute-0 sudo[69614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:07 compute-0 python3.9[69616]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:35:07 compute-0 sudo[69614]: pam_unix(sudo:session): session closed for user root
Sep 30 17:35:08 compute-0 sshd-session[68541]: Connection closed by 192.168.122.30 port 41088
Sep 30 17:35:08 compute-0 sshd-session[68538]: pam_unix(sshd:session): session closed for user zuul
Sep 30 17:35:08 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Sep 30 17:35:08 compute-0 systemd[1]: session-17.scope: Consumed 4.401s CPU time.
Sep 30 17:35:08 compute-0 systemd-logind[811]: Session 17 logged out. Waiting for processes to exit.
Sep 30 17:35:08 compute-0 systemd-logind[811]: Removed session 17.
Sep 30 17:35:13 compute-0 sshd-session[69641]: Accepted publickey for zuul from 192.168.122.30 port 57064 ssh2: ECDSA SHA256:COqAmbdBUMx+UyDa5XAop4Bi6qvwvYhJQ16rCseuHkQ
Sep 30 17:35:13 compute-0 systemd-logind[811]: New session 18 of user zuul.
Sep 30 17:35:13 compute-0 systemd[1]: Started Session 18 of User zuul.
Sep 30 17:35:13 compute-0 sshd-session[69641]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 17:35:14 compute-0 python3.9[69794]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:35:15 compute-0 sudo[69948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcsrzxmkawgzgyicqzxdmtvskusaykcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253715.4407644-48-99460532238847/AnsiballZ_setup.py'
Sep 30 17:35:15 compute-0 sudo[69948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:16 compute-0 python3.9[69950]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 17:35:16 compute-0 sudo[69948]: pam_unix(sudo:session): session closed for user root
Sep 30 17:35:16 compute-0 sudo[70032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqjbqoeqqqbftaqbcemjzvpxcvgtlslt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759253715.4407644-48-99460532238847/AnsiballZ_dnf.py'
Sep 30 17:35:16 compute-0 sudo[70032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:16 compute-0 python3.9[70034]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Sep 30 17:35:18 compute-0 sudo[70032]: pam_unix(sudo:session): session closed for user root
Sep 30 17:35:19 compute-0 python3.9[70185]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:35:20 compute-0 python3.9[70336]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Sep 30 17:35:21 compute-0 python3.9[70486]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:35:21 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Sep 30 17:35:21 compute-0 python3.9[70637]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:35:22 compute-0 sshd-session[69644]: Connection closed by 192.168.122.30 port 57064
Sep 30 17:35:22 compute-0 sshd-session[69641]: pam_unix(sshd:session): session closed for user zuul
Sep 30 17:35:22 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Sep 30 17:35:22 compute-0 systemd[1]: session-18.scope: Consumed 5.902s CPU time.
Sep 30 17:35:22 compute-0 systemd-logind[811]: Session 18 logged out. Waiting for processes to exit.
Sep 30 17:35:22 compute-0 systemd-logind[811]: Removed session 18.
Sep 30 17:35:30 compute-0 sshd-session[70662]: Accepted publickey for zuul from 38.102.83.36 port 51828 ssh2: RSA SHA256:DFNImqpR4L6Frzap1o3GpslEX6xER8N06/GWUjaeSng
Sep 30 17:35:30 compute-0 systemd-logind[811]: New session 19 of user zuul.
Sep 30 17:35:30 compute-0 systemd[1]: Started Session 19 of User zuul.
Sep 30 17:35:30 compute-0 sshd-session[70662]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 17:35:30 compute-0 sudo[70738]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yncrftkayomxxffewbwoeegdimfunjpy ; /usr/bin/python3'
Sep 30 17:35:30 compute-0 sudo[70738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:30 compute-0 useradd[70742]: new group: name=ceph-admin, GID=42478
Sep 30 17:35:30 compute-0 useradd[70742]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Sep 30 17:35:30 compute-0 sudo[70738]: pam_unix(sudo:session): session closed for user root
Sep 30 17:35:30 compute-0 sudo[70824]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofjmcujjnxbbopoujhsamwzdxxbjvdds ; /usr/bin/python3'
Sep 30 17:35:30 compute-0 sudo[70824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:31 compute-0 sudo[70824]: pam_unix(sudo:session): session closed for user root
Sep 30 17:35:31 compute-0 sudo[70897]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzfctnrinzaecdgzfqsetlsxlphfhzbm ; /usr/bin/python3'
Sep 30 17:35:31 compute-0 sudo[70897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:31 compute-0 sudo[70897]: pam_unix(sudo:session): session closed for user root
Sep 30 17:35:32 compute-0 sudo[70947]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipaatiistfgenqsfguhnvfccbirzuqet ; /usr/bin/python3'
Sep 30 17:35:32 compute-0 sudo[70947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:32 compute-0 sudo[70947]: pam_unix(sudo:session): session closed for user root
Sep 30 17:35:32 compute-0 sudo[70973]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udemglukfxspbinbypojmqaitdjwmjvi ; /usr/bin/python3'
Sep 30 17:35:32 compute-0 sudo[70973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:32 compute-0 sudo[70973]: pam_unix(sudo:session): session closed for user root
Sep 30 17:35:32 compute-0 sudo[70999]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jonrzzktzugvuianpksicusqrqrjmmem ; /usr/bin/python3'
Sep 30 17:35:32 compute-0 sudo[70999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:32 compute-0 sudo[70999]: pam_unix(sudo:session): session closed for user root
Sep 30 17:35:33 compute-0 sudo[71025]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qytlbbsczblbfrihgpqjfyuvfombykdf ; /usr/bin/python3'
Sep 30 17:35:33 compute-0 sudo[71025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:33 compute-0 sudo[71025]: pam_unix(sudo:session): session closed for user root
Sep 30 17:35:33 compute-0 sudo[71103]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajdjwfwyzszhatvhwntmrxjkkrdvteaz ; /usr/bin/python3'
Sep 30 17:35:33 compute-0 sudo[71103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:33 compute-0 sudo[71103]: pam_unix(sudo:session): session closed for user root
Sep 30 17:35:34 compute-0 sudo[71176]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwoesphtzvchdikfibgotbxdohkmlsua ; /usr/bin/python3'
Sep 30 17:35:34 compute-0 sudo[71176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:34 compute-0 sudo[71176]: pam_unix(sudo:session): session closed for user root
Sep 30 17:35:34 compute-0 sudo[71278]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sldhzdxzgaztokijceybmjkftpxixeij ; /usr/bin/python3'
Sep 30 17:35:34 compute-0 sudo[71278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:34 compute-0 sudo[71278]: pam_unix(sudo:session): session closed for user root
Sep 30 17:35:34 compute-0 sudo[71351]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvxubrymeeevlufvwktskloowmvzjbqd ; /usr/bin/python3'
Sep 30 17:35:34 compute-0 sudo[71351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:35 compute-0 sudo[71351]: pam_unix(sudo:session): session closed for user root
Sep 30 17:35:35 compute-0 sudo[71401]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbenitpcfyqvwcoagdvwqoudwzrlxjcs ; /usr/bin/python3'
Sep 30 17:35:35 compute-0 sudo[71401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:35 compute-0 python3[71403]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:35:36 compute-0 sudo[71401]: pam_unix(sudo:session): session closed for user root
Sep 30 17:35:37 compute-0 sudo[71496]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amyacfraenhqoabfcfjwqvklpawnmpfv ; /usr/bin/python3'
Sep 30 17:35:37 compute-0 sudo[71496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:37 compute-0 python3[71498]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Sep 30 17:35:38 compute-0 sudo[71496]: pam_unix(sudo:session): session closed for user root
Sep 30 17:35:39 compute-0 sudo[71523]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooopbyiajrhltdnurzpontauuupedynk ; /usr/bin/python3'
Sep 30 17:35:39 compute-0 sudo[71523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:39 compute-0 python3[71525]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:35:39 compute-0 sudo[71523]: pam_unix(sudo:session): session closed for user root
Sep 30 17:35:39 compute-0 sudo[71549]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsmpnijvukzkfafzcyqbwxxfumbhoqfi ; /usr/bin/python3'
Sep 30 17:35:39 compute-0 sudo[71549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:39 compute-0 python3[71551]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:35:39 compute-0 kernel: loop: module loaded
Sep 30 17:35:39 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Sep 30 17:35:39 compute-0 sudo[71549]: pam_unix(sudo:session): session closed for user root
Sep 30 17:35:39 compute-0 sudo[71584]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evxzsnqydxgciykdqxwdguaqmmtetine ; /usr/bin/python3'
Sep 30 17:35:39 compute-0 sudo[71584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:39 compute-0 python3[71586]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:35:40 compute-0 lvm[71589]: PV /dev/loop3 not used.
Sep 30 17:35:40 compute-0 lvm[71591]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 17:35:40 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Sep 30 17:35:40 compute-0 lvm[71596]:   1 logical volume(s) in volume group "ceph_vg0" now active
Sep 30 17:35:40 compute-0 lvm[71601]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 17:35:40 compute-0 lvm[71601]: VG ceph_vg0 finished
Sep 30 17:35:40 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Sep 30 17:35:40 compute-0 sudo[71584]: pam_unix(sudo:session): session closed for user root
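The two shell tasks above build a loop-backed LVM volume to stand in for a Ceph OSD disk; restated as plain shell, exactly as logged:

    # Sparse 20G backing file (count=0 seek=20G allocates no data blocks).
    dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
    losetup /dev/loop3 /var/lib/ceph-osd-0.img
    lsblk
    # One PV/VG/LV spanning the whole loop device.
    pvcreate /dev/loop3
    vgcreate ceph_vg0 /dev/loop3
    lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
    lvs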
Sep 30 17:35:40 compute-0 sudo[71678]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exvedbvflfmuourljcbgiqydcwsrkwko ; /usr/bin/python3'
Sep 30 17:35:40 compute-0 sudo[71678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:40 compute-0 python3[71680]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 17:35:40 compute-0 sudo[71678]: pam_unix(sudo:session): session closed for user root
Sep 30 17:35:40 compute-0 sudo[71751]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xskeytkurgocorcsgfguzadvdcujmrti ; /usr/bin/python3'
Sep 30 17:35:40 compute-0 sudo[71751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:41 compute-0 python3[71753]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759253740.5120032-33296-86947175139511/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:35:41 compute-0 sudo[71751]: pam_unix(sudo:session): session closed for user root
Sep 30 17:35:41 compute-0 sudo[71801]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzpgvmgntogvrvyrnenhkurzubtuwyls ; /usr/bin/python3'
Sep 30 17:35:41 compute-0 sudo[71801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:42 compute-0 python3[71803]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:35:42 compute-0 systemd[1]: Reloading.
Sep 30 17:35:42 compute-0 systemd-rc-local-generator[71830]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:35:42 compute-0 systemd-sysv-generator[71835]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:35:42 compute-0 systemd[1]: Starting Ceph OSD losetup...
Sep 30 17:35:42 compute-0 bash[71844]: /dev/loop3: [64513]:4194938 (/var/lib/ceph-osd-0.img)
Sep 30 17:35:42 compute-0 systemd[1]: Finished Ceph OSD losetup.
Sep 30 17:35:42 compute-0 lvm[71845]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 17:35:42 compute-0 lvm[71845]: VG ceph_vg0 finished
Sep 30 17:35:42 compute-0 sudo[71801]: pam_unix(sudo:session): session closed for user root
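The unit itself is rendered from ceph-osd-losetup.service.j2 and its content is not in the log; judging from the oneshot start/finish messages and the losetup output above, it most likely just re-attaches /dev/loop3 to the backing file at boot. A purely illustrative guess, not the actual template:

    # Hypothetical reconstruction -- the real content comes from ceph-osd-losetup.service.j2.
    cat > /etc/systemd/system/ceph-osd-losetup-0.service <<'EOF'
    [Unit]
    Description=Ceph OSD losetup
    After=local-fs.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/bin/bash -c '/usr/sbin/losetup /dev/loop3 || /usr/sbin/losetup /dev/loop3 /var/lib/ceph-osd-0.img'

    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload
    systemctl enable --now ceph-osd-losetup-0.service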
Sep 30 17:35:44 compute-0 python3[71869]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:35:46 compute-0 sudo[71960]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yytqlprmacvwcizyqmbllrjeibkxxrot ; /usr/bin/python3'
Sep 30 17:35:46 compute-0 sudo[71960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:47 compute-0 python3[71962]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-squid'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Sep 30 17:35:50 compute-0 sudo[71960]: pam_unix(sudo:session): session closed for user root
Sep 30 17:35:51 compute-0 sudo[72018]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiogvpfaolsrrqgrzlwlnpgarkkehcyb ; /usr/bin/python3'
Sep 30 17:35:51 compute-0 sudo[72018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:51 compute-0 python3[72020]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Sep 30 17:35:54 compute-0 groupadd[72030]: group added to /etc/group: name=cephadm, GID=992
Sep 30 17:35:54 compute-0 groupadd[72030]: group added to /etc/gshadow: name=cephadm
Sep 30 17:35:54 compute-0 groupadd[72030]: new group: name=cephadm, GID=992
Sep 30 17:35:54 compute-0 useradd[72037]: new user: name=cephadm, UID=992, GID=992, home=/var/lib/cephadm, shell=/bin/bash, from=none
Sep 30 17:35:55 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Sep 30 17:35:55 compute-0 systemd[1]: Starting man-db-cache-update.service...
Sep 30 17:35:55 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Sep 30 17:35:55 compute-0 systemd[1]: Finished man-db-cache-update.service.
Sep 30 17:35:55 compute-0 systemd[1]: run-r08ed6f9335794ed4aad527f7f17f764b.service: Deactivated successfully.
Sep 30 17:35:55 compute-0 sudo[72018]: pam_unix(sudo:session): session closed for user root
Sep 30 17:35:55 compute-0 sudo[72137]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmmlffguzzcdgioyyiwexfuzdnoypzpe ; /usr/bin/python3'
Sep 30 17:35:55 compute-0 sudo[72137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:56 compute-0 python3[72139]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:35:56 compute-0 sudo[72137]: pam_unix(sudo:session): session closed for user root
Sep 30 17:35:56 compute-0 sudo[72165]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxlfpsuwieslkqfbnewprknvwhzghadq ; /usr/bin/python3'
Sep 30 17:35:56 compute-0 sudo[72165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:56 compute-0 python3[72167]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:35:56 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Sep 30 17:35:56 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Sep 30 17:35:56 compute-0 sudo[72165]: pam_unix(sudo:session): session closed for user root
Sep 30 17:35:57 compute-0 sudo[72229]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvxknmbdskwrguaxavzrerrltguqcytj ; /usr/bin/python3'
Sep 30 17:35:57 compute-0 sudo[72229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:57 compute-0 python3[72231]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:35:57 compute-0 sudo[72229]: pam_unix(sudo:session): session closed for user root
Sep 30 17:35:57 compute-0 sudo[72255]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzphjlhtqfeppxjbhgmcziavxrkcxcdq ; /usr/bin/python3'
Sep 30 17:35:57 compute-0 sudo[72255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:57 compute-0 python3[72257]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:35:57 compute-0 sudo[72255]: pam_unix(sudo:session): session closed for user root
Sep 30 17:35:57 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Sep 30 17:35:58 compute-0 sudo[72333]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naxjqncialbetctxexrojpiopqfdxrlg ; /usr/bin/python3'
Sep 30 17:35:58 compute-0 sudo[72333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:58 compute-0 python3[72335]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 17:35:58 compute-0 sudo[72333]: pam_unix(sudo:session): session closed for user root
Sep 30 17:35:58 compute-0 sudo[72406]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hamrmlpwqntxwthkkeocbzlkbalnnrpk ; /usr/bin/python3'
Sep 30 17:35:58 compute-0 sudo[72406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:58 compute-0 python3[72408]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759253758.0541382-33466-231363624901942/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=f7c182f47a3c4591d892e261f9c145b62baac781 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:35:58 compute-0 sudo[72406]: pam_unix(sudo:session): session closed for user root
Sep 30 17:35:59 compute-0 sudo[72508]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztzrrhfhumzckiolztvbpnowsptvfote ; /usr/bin/python3'
Sep 30 17:35:59 compute-0 sudo[72508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:59 compute-0 python3[72510]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 17:35:59 compute-0 sudo[72508]: pam_unix(sudo:session): session closed for user root
Sep 30 17:35:59 compute-0 sudo[72581]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlcljxapjmnmlfmzwkcsomfgjurrqqww ; /usr/bin/python3'
Sep 30 17:35:59 compute-0 sudo[72581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:35:59 compute-0 python3[72583]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759253759.2275395-33484-244069283748176/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:35:59 compute-0 sudo[72581]: pam_unix(sudo:session): session closed for user root
Sep 30 17:36:00 compute-0 sudo[72631]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdrihiuhmwmmgspidkxtzzbccnqeylqi ; /usr/bin/python3'
Sep 30 17:36:00 compute-0 sudo[72631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:36:00 compute-0 python3[72633]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:36:00 compute-0 sudo[72631]: pam_unix(sudo:session): session closed for user root
Sep 30 17:36:00 compute-0 sudo[72659]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imsszallojztdcorcicsqioescjnkxoo ; /usr/bin/python3'
Sep 30 17:36:00 compute-0 sudo[72659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:36:00 compute-0 python3[72661]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:36:00 compute-0 sudo[72659]: pam_unix(sudo:session): session closed for user root
Sep 30 17:36:00 compute-0 sudo[72687]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnuxurbiaylhhujeglqorvcenlupxhky ; /usr/bin/python3'
Sep 30 17:36:00 compute-0 sudo[72687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:36:00 compute-0 python3[72689]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:36:01 compute-0 sudo[72687]: pam_unix(sudo:session): session closed for user root
Sep 30 17:36:01 compute-0 sudo[72715]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eantqajbbdgldjaapqaracabjrntrlrk ; /usr/bin/python3'
Sep 30 17:36:01 compute-0 sudo[72715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:36:01 compute-0 python3[72717]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config /home/ceph-admin/assimilate_ceph.conf --skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:36:01 compute-0 sshd-session[72721]: Accepted publickey for ceph-admin from 192.168.122.100 port 57972 ssh2: RSA SHA256:VErvvXRx5E6TZRj2L+dQwgZehzW+L2wAETKKYOgEi0M
Sep 30 17:36:01 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Sep 30 17:36:01 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Sep 30 17:36:01 compute-0 systemd-logind[811]: New session 20 of user ceph-admin.
Sep 30 17:36:01 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Sep 30 17:36:01 compute-0 systemd[1]: Starting User Manager for UID 42477...
Sep 30 17:36:01 compute-0 systemd[72725]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 17:36:01 compute-0 systemd[72725]: Queued start job for default target Main User Target.
Sep 30 17:36:01 compute-0 systemd[72725]: Created slice User Application Slice.
Sep 30 17:36:01 compute-0 systemd[72725]: Started Mark boot as successful after the user session has run 2 minutes.
Sep 30 17:36:01 compute-0 systemd[72725]: Started Daily Cleanup of User's Temporary Directories.
Sep 30 17:36:01 compute-0 systemd[72725]: Reached target Paths.
Sep 30 17:36:01 compute-0 systemd[72725]: Reached target Timers.
Sep 30 17:36:01 compute-0 systemd[72725]: Starting D-Bus User Message Bus Socket...
Sep 30 17:36:01 compute-0 systemd[72725]: Starting Create User's Volatile Files and Directories...
Sep 30 17:36:01 compute-0 systemd[72725]: Listening on D-Bus User Message Bus Socket.
Sep 30 17:36:01 compute-0 systemd[72725]: Reached target Sockets.
Sep 30 17:36:01 compute-0 systemd[72725]: Finished Create User's Volatile Files and Directories.
Sep 30 17:36:01 compute-0 systemd[72725]: Reached target Basic System.
Sep 30 17:36:01 compute-0 systemd[72725]: Reached target Main User Target.
Sep 30 17:36:01 compute-0 systemd[72725]: Startup finished in 126ms.
Sep 30 17:36:01 compute-0 systemd[1]: Started User Manager for UID 42477.
Sep 30 17:36:01 compute-0 systemd[1]: Started Session 20 of User ceph-admin.
Sep 30 17:36:01 compute-0 sshd-session[72721]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 17:36:01 compute-0 sudo[72741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Sep 30 17:36:01 compute-0 sudo[72741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:36:01 compute-0 sudo[72741]: pam_unix(sudo:session): session closed for user root
Sep 30 17:36:01 compute-0 sshd-session[72740]: Received disconnect from 192.168.122.100 port 57972:11: disconnected by user
Sep 30 17:36:01 compute-0 sshd-session[72740]: Disconnected from user ceph-admin 192.168.122.100 port 57972
Sep 30 17:36:01 compute-0 sshd-session[72721]: pam_unix(sshd:session): session closed for user ceph-admin
Sep 30 17:36:01 compute-0 systemd[1]: session-20.scope: Deactivated successfully.
Sep 30 17:36:01 compute-0 systemd-logind[811]: Session 20 logged out. Waiting for processes to exit.
Sep 30 17:36:01 compute-0 systemd-logind[811]: Removed session 20.
Sep 30 17:36:01 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Sep 30 17:36:02 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Sep 30 17:36:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat3987574875-lower\x2dmapped.mount: Deactivated successfully.
Sep 30 17:36:12 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Sep 30 17:36:12 compute-0 systemd[72725]: Activating special unit Exit the Session...
Sep 30 17:36:12 compute-0 systemd[72725]: Stopped target Main User Target.
Sep 30 17:36:12 compute-0 systemd[72725]: Stopped target Basic System.
Sep 30 17:36:12 compute-0 systemd[72725]: Stopped target Paths.
Sep 30 17:36:12 compute-0 systemd[72725]: Stopped target Sockets.
Sep 30 17:36:12 compute-0 systemd[72725]: Stopped target Timers.
Sep 30 17:36:12 compute-0 systemd[72725]: Stopped Mark boot as successful after the user session has run 2 minutes.
Sep 30 17:36:12 compute-0 systemd[72725]: Stopped Daily Cleanup of User's Temporary Directories.
Sep 30 17:36:12 compute-0 systemd[72725]: Closed D-Bus User Message Bus Socket.
Sep 30 17:36:12 compute-0 systemd[72725]: Stopped Create User's Volatile Files and Directories.
Sep 30 17:36:12 compute-0 systemd[72725]: Removed slice User Application Slice.
Sep 30 17:36:12 compute-0 systemd[72725]: Reached target Shutdown.
Sep 30 17:36:12 compute-0 systemd[72725]: Finished Exit the Session.
Sep 30 17:36:12 compute-0 systemd[72725]: Reached target Exit the Session.
Sep 30 17:36:12 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Sep 30 17:36:12 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Sep 30 17:36:12 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Sep 30 17:36:12 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Sep 30 17:36:12 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Sep 30 17:36:12 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Sep 30 17:36:12 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Sep 30 17:36:23 compute-0 podman[72819]: 2025-09-30 17:36:23.15606394 +0000 UTC m=+20.982273716 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:36:23 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Sep 30 17:36:23 compute-0 podman[72880]: 2025-09-30 17:36:23.289015132 +0000 UTC m=+0.108618581 container create cb30aabe5a2d50e50be2162cbaf78f16e0516340e5ef39b8756eb1e8402aa1e8 (image=quay.io/ceph/ceph:v19, name=crazy_leakey, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Sep 30 17:36:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck3241990612-merged.mount: Deactivated successfully.
Sep 30 17:36:23 compute-0 podman[72880]: 2025-09-30 17:36:23.205091805 +0000 UTC m=+0.024695114 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:36:23 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Sep 30 17:36:23 compute-0 systemd[1]: Started libpod-conmon-cb30aabe5a2d50e50be2162cbaf78f16e0516340e5ef39b8756eb1e8402aa1e8.scope.
Sep 30 17:36:23 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:36:23 compute-0 podman[72880]: 2025-09-30 17:36:23.564678425 +0000 UTC m=+0.384281714 container init cb30aabe5a2d50e50be2162cbaf78f16e0516340e5ef39b8756eb1e8402aa1e8 (image=quay.io/ceph/ceph:v19, name=crazy_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:36:23 compute-0 podman[72880]: 2025-09-30 17:36:23.577024726 +0000 UTC m=+0.396628015 container start cb30aabe5a2d50e50be2162cbaf78f16e0516340e5ef39b8756eb1e8402aa1e8 (image=quay.io/ceph/ceph:v19, name=crazy_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:36:23 compute-0 crazy_leakey[72896]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Sep 30 17:36:23 compute-0 systemd[1]: libpod-cb30aabe5a2d50e50be2162cbaf78f16e0516340e5ef39b8756eb1e8402aa1e8.scope: Deactivated successfully.
Sep 30 17:36:23 compute-0 podman[72880]: 2025-09-30 17:36:23.68506829 +0000 UTC m=+0.504671579 container attach cb30aabe5a2d50e50be2162cbaf78f16e0516340e5ef39b8756eb1e8402aa1e8 (image=quay.io/ceph/ceph:v19, name=crazy_leakey, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:36:23 compute-0 podman[72880]: 2025-09-30 17:36:23.686005767 +0000 UTC m=+0.505609066 container died cb30aabe5a2d50e50be2162cbaf78f16e0516340e5ef39b8756eb1e8402aa1e8 (image=quay.io/ceph/ceph:v19, name=crazy_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Sep 30 17:36:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-6040b054031dc9120b9aa0cf6a519c655a4f0911be90708ce796e44561cb7dc9-merged.mount: Deactivated successfully.
Sep 30 17:36:24 compute-0 podman[72880]: 2025-09-30 17:36:24.027897414 +0000 UTC m=+0.847500743 container remove cb30aabe5a2d50e50be2162cbaf78f16e0516340e5ef39b8756eb1e8402aa1e8 (image=quay.io/ceph/ceph:v19, name=crazy_leakey, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:36:24 compute-0 systemd[1]: libpod-conmon-cb30aabe5a2d50e50be2162cbaf78f16e0516340e5ef39b8756eb1e8402aa1e8.scope: Deactivated successfully.
Sep 30 17:36:24 compute-0 podman[72914]: 2025-09-30 17:36:24.091507154 +0000 UTC m=+0.043552130 container create d005d66b27637e01a9d65da8ac89396c9224f436f87c214aaeeb1e65190713ef (image=quay.io/ceph/ceph:v19, name=quirky_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:36:24 compute-0 systemd[1]: Started libpod-conmon-d005d66b27637e01a9d65da8ac89396c9224f436f87c214aaeeb1e65190713ef.scope.
Sep 30 17:36:24 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:36:24 compute-0 podman[72914]: 2025-09-30 17:36:24.07027579 +0000 UTC m=+0.022320806 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:36:24 compute-0 podman[72914]: 2025-09-30 17:36:24.178606542 +0000 UTC m=+0.130651548 container init d005d66b27637e01a9d65da8ac89396c9224f436f87c214aaeeb1e65190713ef (image=quay.io/ceph/ceph:v19, name=quirky_taussig, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:36:24 compute-0 podman[72914]: 2025-09-30 17:36:24.185183209 +0000 UTC m=+0.137228185 container start d005d66b27637e01a9d65da8ac89396c9224f436f87c214aaeeb1e65190713ef (image=quay.io/ceph/ceph:v19, name=quirky_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:36:24 compute-0 quirky_taussig[72930]: 167 167
Sep 30 17:36:24 compute-0 systemd[1]: libpod-d005d66b27637e01a9d65da8ac89396c9224f436f87c214aaeeb1e65190713ef.scope: Deactivated successfully.
Sep 30 17:36:24 compute-0 podman[72914]: 2025-09-30 17:36:24.190306085 +0000 UTC m=+0.142351101 container attach d005d66b27637e01a9d65da8ac89396c9224f436f87c214aaeeb1e65190713ef (image=quay.io/ceph/ceph:v19, name=quirky_taussig, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Sep 30 17:36:24 compute-0 podman[72914]: 2025-09-30 17:36:24.19084861 +0000 UTC m=+0.142893606 container died d005d66b27637e01a9d65da8ac89396c9224f436f87c214aaeeb1e65190713ef (image=quay.io/ceph/ceph:v19, name=quirky_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Sep 30 17:36:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d2e46f385dd0a156b98e52544548a051b98f4524ea4d2ba9ef92efd0ce1853d-merged.mount: Deactivated successfully.
Sep 30 17:36:24 compute-0 podman[72914]: 2025-09-30 17:36:24.223790488 +0000 UTC m=+0.175835464 container remove d005d66b27637e01a9d65da8ac89396c9224f436f87c214aaeeb1e65190713ef (image=quay.io/ceph/ceph:v19, name=quirky_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Sep 30 17:36:24 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Sep 30 17:36:24 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Sep 30 17:36:24 compute-0 systemd[1]: libpod-conmon-d005d66b27637e01a9d65da8ac89396c9224f436f87c214aaeeb1e65190713ef.scope: Deactivated successfully.
Sep 30 17:36:24 compute-0 podman[72948]: 2025-09-30 17:36:24.291921226 +0000 UTC m=+0.047070210 container create b1882c8e9c4a36e7ec49dbc67eeef4bc73cd5cb84fa555388e78d698d44c58d8 (image=quay.io/ceph/ceph:v19, name=romantic_engelbart, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:36:24 compute-0 systemd[1]: Started libpod-conmon-b1882c8e9c4a36e7ec49dbc67eeef4bc73cd5cb84fa555388e78d698d44c58d8.scope.
Sep 30 17:36:24 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:36:24 compute-0 podman[72948]: 2025-09-30 17:36:24.354250709 +0000 UTC m=+0.109399703 container init b1882c8e9c4a36e7ec49dbc67eeef4bc73cd5cb84fa555388e78d698d44c58d8 (image=quay.io/ceph/ceph:v19, name=romantic_engelbart, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:36:24 compute-0 podman[72948]: 2025-09-30 17:36:24.359491479 +0000 UTC m=+0.114640493 container start b1882c8e9c4a36e7ec49dbc67eeef4bc73cd5cb84fa555388e78d698d44c58d8 (image=quay.io/ceph/ceph:v19, name=romantic_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 17:36:24 compute-0 podman[72948]: 2025-09-30 17:36:24.363350938 +0000 UTC m=+0.118499922 container attach b1882c8e9c4a36e7ec49dbc67eeef4bc73cd5cb84fa555388e78d698d44c58d8 (image=quay.io/ceph/ceph:v19, name=romantic_engelbart, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Sep 30 17:36:24 compute-0 podman[72948]: 2025-09-30 17:36:24.269805807 +0000 UTC m=+0.024954821 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:36:24 compute-0 romantic_engelbart[72965]: AQAYFdxoiFSWFhAAt29GAdmVPIvOR+aihu3ofw==
Sep 30 17:36:24 compute-0 systemd[1]: libpod-b1882c8e9c4a36e7ec49dbc67eeef4bc73cd5cb84fa555388e78d698d44c58d8.scope: Deactivated successfully.
Sep 30 17:36:24 compute-0 podman[72948]: 2025-09-30 17:36:24.382586626 +0000 UTC m=+0.137735620 container died b1882c8e9c4a36e7ec49dbc67eeef4bc73cd5cb84fa555388e78d698d44c58d8 (image=quay.io/ceph/ceph:v19, name=romantic_engelbart, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:36:24 compute-0 podman[72948]: 2025-09-30 17:36:24.418042144 +0000 UTC m=+0.173191128 container remove b1882c8e9c4a36e7ec49dbc67eeef4bc73cd5cb84fa555388e78d698d44c58d8 (image=quay.io/ceph/ceph:v19, name=romantic_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:36:24 compute-0 systemd[1]: libpod-conmon-b1882c8e9c4a36e7ec49dbc67eeef4bc73cd5cb84fa555388e78d698d44c58d8.scope: Deactivated successfully.
Sep 30 17:36:24 compute-0 podman[72983]: 2025-09-30 17:36:24.471569017 +0000 UTC m=+0.036199421 container create 6c27527719618d0f469c8a2bc138deaa757f2a36d70a63ef875c3212f3e37303 (image=quay.io/ceph/ceph:v19, name=relaxed_carson, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:36:24 compute-0 systemd[1]: Started libpod-conmon-6c27527719618d0f469c8a2bc138deaa757f2a36d70a63ef875c3212f3e37303.scope.
Sep 30 17:36:24 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:36:24 compute-0 podman[72983]: 2025-09-30 17:36:24.533635793 +0000 UTC m=+0.098266227 container init 6c27527719618d0f469c8a2bc138deaa757f2a36d70a63ef875c3212f3e37303 (image=quay.io/ceph/ceph:v19, name=relaxed_carson, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:36:24 compute-0 podman[72983]: 2025-09-30 17:36:24.538417319 +0000 UTC m=+0.103047723 container start 6c27527719618d0f469c8a2bc138deaa757f2a36d70a63ef875c3212f3e37303 (image=quay.io/ceph/ceph:v19, name=relaxed_carson, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:36:24 compute-0 podman[72983]: 2025-09-30 17:36:24.551216113 +0000 UTC m=+0.115846517 container attach 6c27527719618d0f469c8a2bc138deaa757f2a36d70a63ef875c3212f3e37303 (image=quay.io/ceph/ceph:v19, name=relaxed_carson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Sep 30 17:36:24 compute-0 podman[72983]: 2025-09-30 17:36:24.456281722 +0000 UTC m=+0.020912146 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:36:24 compute-0 relaxed_carson[73000]: AQAYFdxozEpgIRAA+NeLWjJZ23vmC+qvWsKFAw==
Sep 30 17:36:24 compute-0 systemd[1]: libpod-6c27527719618d0f469c8a2bc138deaa757f2a36d70a63ef875c3212f3e37303.scope: Deactivated successfully.
Sep 30 17:36:24 compute-0 podman[72983]: 2025-09-30 17:36:24.56374666 +0000 UTC m=+0.128377074 container died 6c27527719618d0f469c8a2bc138deaa757f2a36d70a63ef875c3212f3e37303 (image=quay.io/ceph/ceph:v19, name=relaxed_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Sep 30 17:36:24 compute-0 podman[72983]: 2025-09-30 17:36:24.642999785 +0000 UTC m=+0.207630189 container remove 6c27527719618d0f469c8a2bc138deaa757f2a36d70a63ef875c3212f3e37303 (image=quay.io/ceph/ceph:v19, name=relaxed_carson, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:36:24 compute-0 systemd[1]: libpod-conmon-6c27527719618d0f469c8a2bc138deaa757f2a36d70a63ef875c3212f3e37303.scope: Deactivated successfully.
Sep 30 17:36:24 compute-0 podman[73018]: 2025-09-30 17:36:24.712735479 +0000 UTC m=+0.049593772 container create c3b95c1ce156bfdf4b7638e650045fe76ea8acce2ba2ab2a23dc7eabb463aac3 (image=quay.io/ceph/ceph:v19, name=cool_wing, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Sep 30 17:36:24 compute-0 systemd[1]: Started libpod-conmon-c3b95c1ce156bfdf4b7638e650045fe76ea8acce2ba2ab2a23dc7eabb463aac3.scope.
Sep 30 17:36:24 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:36:24 compute-0 podman[73018]: 2025-09-30 17:36:24.691996589 +0000 UTC m=+0.028854882 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:36:24 compute-0 podman[73018]: 2025-09-30 17:36:24.84604031 +0000 UTC m=+0.182898613 container init c3b95c1ce156bfdf4b7638e650045fe76ea8acce2ba2ab2a23dc7eabb463aac3 (image=quay.io/ceph/ceph:v19, name=cool_wing, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:36:24 compute-0 podman[73018]: 2025-09-30 17:36:24.851745103 +0000 UTC m=+0.188603376 container start c3b95c1ce156bfdf4b7638e650045fe76ea8acce2ba2ab2a23dc7eabb463aac3 (image=quay.io/ceph/ceph:v19, name=cool_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Sep 30 17:36:24 compute-0 podman[73018]: 2025-09-30 17:36:24.879000949 +0000 UTC m=+0.215859212 container attach c3b95c1ce156bfdf4b7638e650045fe76ea8acce2ba2ab2a23dc7eabb463aac3 (image=quay.io/ceph/ceph:v19, name=cool_wing, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:36:24 compute-0 cool_wing[73034]: AQAYFdxoaXL8NBAAJBRg4+2oxYWLJaLrC2Parw==
Sep 30 17:36:24 compute-0 systemd[1]: libpod-c3b95c1ce156bfdf4b7638e650045fe76ea8acce2ba2ab2a23dc7eabb463aac3.scope: Deactivated successfully.
Sep 30 17:36:24 compute-0 podman[73018]: 2025-09-30 17:36:24.89517386 +0000 UTC m=+0.232032133 container died c3b95c1ce156bfdf4b7638e650045fe76ea8acce2ba2ab2a23dc7eabb463aac3 (image=quay.io/ceph/ceph:v19, name=cool_wing, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 17:36:24 compute-0 podman[73018]: 2025-09-30 17:36:24.931474653 +0000 UTC m=+0.268332916 container remove c3b95c1ce156bfdf4b7638e650045fe76ea8acce2ba2ab2a23dc7eabb463aac3 (image=quay.io/ceph/ceph:v19, name=cool_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Sep 30 17:36:24 compute-0 systemd[1]: libpod-conmon-c3b95c1ce156bfdf4b7638e650045fe76ea8acce2ba2ab2a23dc7eabb463aac3.scope: Deactivated successfully.
Sep 30 17:36:25 compute-0 podman[73055]: 2025-09-30 17:36:25.024004355 +0000 UTC m=+0.073248925 container create c7c278d0174019b54d81d0fcc03b1925b638115b959da622eb2ef5ae3d42e2a5 (image=quay.io/ceph/ceph:v19, name=gifted_swanson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Sep 30 17:36:25 compute-0 systemd[1]: Started libpod-conmon-c7c278d0174019b54d81d0fcc03b1925b638115b959da622eb2ef5ae3d42e2a5.scope.
Sep 30 17:36:25 compute-0 podman[73055]: 2025-09-30 17:36:24.971271164 +0000 UTC m=+0.020515754 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:36:25 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:36:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bb6ce879652d51ba585034a8e5da983dd4889c4c54492ca9b0b77aed086b373/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:25 compute-0 podman[73055]: 2025-09-30 17:36:25.087939873 +0000 UTC m=+0.137184443 container init c7c278d0174019b54d81d0fcc03b1925b638115b959da622eb2ef5ae3d42e2a5 (image=quay.io/ceph/ceph:v19, name=gifted_swanson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Sep 30 17:36:25 compute-0 podman[73055]: 2025-09-30 17:36:25.093587454 +0000 UTC m=+0.142832024 container start c7c278d0174019b54d81d0fcc03b1925b638115b959da622eb2ef5ae3d42e2a5 (image=quay.io/ceph/ceph:v19, name=gifted_swanson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:36:25 compute-0 podman[73055]: 2025-09-30 17:36:25.098175224 +0000 UTC m=+0.147419804 container attach c7c278d0174019b54d81d0fcc03b1925b638115b959da622eb2ef5ae3d42e2a5 (image=quay.io/ceph/ceph:v19, name=gifted_swanson, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:36:25 compute-0 gifted_swanson[73071]: /usr/bin/monmaptool: monmap file /tmp/monmap
Sep 30 17:36:25 compute-0 gifted_swanson[73071]: setting min_mon_release = quincy
Sep 30 17:36:25 compute-0 gifted_swanson[73071]: /usr/bin/monmaptool: set fsid to 63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:36:25 compute-0 gifted_swanson[73071]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Sep 30 17:36:25 compute-0 systemd[1]: libpod-c7c278d0174019b54d81d0fcc03b1925b638115b959da622eb2ef5ae3d42e2a5.scope: Deactivated successfully.
Sep 30 17:36:25 compute-0 podman[73078]: 2025-09-30 17:36:25.162457603 +0000 UTC m=+0.025441305 container died c7c278d0174019b54d81d0fcc03b1925b638115b959da622eb2ef5ae3d42e2a5 (image=quay.io/ceph/ceph:v19, name=gifted_swanson, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:36:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bb6ce879652d51ba585034a8e5da983dd4889c4c54492ca9b0b77aed086b373-merged.mount: Deactivated successfully.
Sep 30 17:36:25 compute-0 podman[73078]: 2025-09-30 17:36:25.202711398 +0000 UTC m=+0.065695090 container remove c7c278d0174019b54d81d0fcc03b1925b638115b959da622eb2ef5ae3d42e2a5 (image=quay.io/ceph/ceph:v19, name=gifted_swanson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:36:25 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Sep 30 17:36:25 compute-0 systemd[1]: libpod-conmon-c7c278d0174019b54d81d0fcc03b1925b638115b959da622eb2ef5ae3d42e2a5.scope: Deactivated successfully.
Sep 30 17:36:25 compute-0 podman[73093]: 2025-09-30 17:36:25.261939653 +0000 UTC m=+0.035220473 container create a0a1978a18e6359886afee38e22063e2843373d106fc7a9f68e6dd2ffe2367ae (image=quay.io/ceph/ceph:v19, name=amazing_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:36:25 compute-0 systemd[1]: Started libpod-conmon-a0a1978a18e6359886afee38e22063e2843373d106fc7a9f68e6dd2ffe2367ae.scope.
Sep 30 17:36:25 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:36:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf66fb85b24aad68dd893f15a401f0f83da9a309056434c52a87f4b4914d42f0/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf66fb85b24aad68dd893f15a401f0f83da9a309056434c52a87f4b4914d42f0/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf66fb85b24aad68dd893f15a401f0f83da9a309056434c52a87f4b4914d42f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf66fb85b24aad68dd893f15a401f0f83da9a309056434c52a87f4b4914d42f0/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:25 compute-0 podman[73093]: 2025-09-30 17:36:25.313227303 +0000 UTC m=+0.086508153 container init a0a1978a18e6359886afee38e22063e2843373d106fc7a9f68e6dd2ffe2367ae (image=quay.io/ceph/ceph:v19, name=amazing_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Sep 30 17:36:25 compute-0 podman[73093]: 2025-09-30 17:36:25.320024276 +0000 UTC m=+0.093305106 container start a0a1978a18e6359886afee38e22063e2843373d106fc7a9f68e6dd2ffe2367ae (image=quay.io/ceph/ceph:v19, name=amazing_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:36:25 compute-0 podman[73093]: 2025-09-30 17:36:25.325223784 +0000 UTC m=+0.098504624 container attach a0a1978a18e6359886afee38e22063e2843373d106fc7a9f68e6dd2ffe2367ae (image=quay.io/ceph/ceph:v19, name=amazing_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Sep 30 17:36:25 compute-0 podman[73093]: 2025-09-30 17:36:25.246174635 +0000 UTC m=+0.019455495 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:36:25 compute-0 systemd[1]: libpod-a0a1978a18e6359886afee38e22063e2843373d106fc7a9f68e6dd2ffe2367ae.scope: Deactivated successfully.
Sep 30 17:36:25 compute-0 podman[73093]: 2025-09-30 17:36:25.406903137 +0000 UTC m=+0.180183987 container died a0a1978a18e6359886afee38e22063e2843373d106fc7a9f68e6dd2ffe2367ae (image=quay.io/ceph/ceph:v19, name=amazing_boyd, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:36:25 compute-0 podman[73093]: 2025-09-30 17:36:25.443268552 +0000 UTC m=+0.216549382 container remove a0a1978a18e6359886afee38e22063e2843373d106fc7a9f68e6dd2ffe2367ae (image=quay.io/ceph/ceph:v19, name=amazing_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Sep 30 17:36:25 compute-0 systemd[1]: libpod-conmon-a0a1978a18e6359886afee38e22063e2843373d106fc7a9f68e6dd2ffe2367ae.scope: Deactivated successfully.
Sep 30 17:36:25 compute-0 systemd[1]: Reloading.
Sep 30 17:36:25 compute-0 systemd-rc-local-generator[73175]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:36:25 compute-0 systemd-sysv-generator[73179]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:36:25 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Sep 30 17:36:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf66fb85b24aad68dd893f15a401f0f83da9a309056434c52a87f4b4914d42f0-merged.mount: Deactivated successfully.
Sep 30 17:36:25 compute-0 systemd[1]: Reloading.
Sep 30 17:36:25 compute-0 systemd-rc-local-generator[73212]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:36:25 compute-0 systemd-sysv-generator[73215]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:36:25 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Sep 30 17:36:25 compute-0 systemd[1]: Reloading.
Sep 30 17:36:25 compute-0 systemd-rc-local-generator[73252]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:36:26 compute-0 systemd-sysv-generator[73256]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:36:26 compute-0 systemd[1]: Reached target Ceph cluster 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:36:26 compute-0 systemd[1]: Reloading.
Sep 30 17:36:26 compute-0 systemd-rc-local-generator[73288]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:36:26 compute-0 systemd-sysv-generator[73292]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:36:26 compute-0 systemd[1]: Reloading.
Sep 30 17:36:26 compute-0 systemd-rc-local-generator[73330]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:36:26 compute-0 systemd-sysv-generator[73333]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:36:26 compute-0 systemd[1]: Created slice Slice /system/ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:36:26 compute-0 systemd[1]: Reached target System Time Set.
Sep 30 17:36:26 compute-0 systemd[1]: Reached target System Time Synchronized.
Sep 30 17:36:26 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 17:36:26 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Sep 30 17:36:26 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Sep 30 17:36:26 compute-0 podman[73385]: 2025-09-30 17:36:26.950758531 +0000 UTC m=+0.035872772 container create 75febbdcf57f1445796b17825a0af2a33fe519dda3e5164dea316fe9b6d76335 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:36:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b8301e99e3b57703e5387cd0c80f731c88b836b5205fbc33a8b0e70ea18ee80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b8301e99e3b57703e5387cd0c80f731c88b836b5205fbc33a8b0e70ea18ee80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b8301e99e3b57703e5387cd0c80f731c88b836b5205fbc33a8b0e70ea18ee80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b8301e99e3b57703e5387cd0c80f731c88b836b5205fbc33a8b0e70ea18ee80/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:27 compute-0 podman[73385]: 2025-09-30 17:36:27.00591251 +0000 UTC m=+0.091026771 container init 75febbdcf57f1445796b17825a0af2a33fe519dda3e5164dea316fe9b6d76335 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:36:27 compute-0 podman[73385]: 2025-09-30 17:36:27.011785467 +0000 UTC m=+0.096899708 container start 75febbdcf57f1445796b17825a0af2a33fe519dda3e5164dea316fe9b6d76335 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:36:27 compute-0 bash[73385]: 75febbdcf57f1445796b17825a0af2a33fe519dda3e5164dea316fe9b6d76335
Sep 30 17:36:27 compute-0 podman[73385]: 2025-09-30 17:36:26.93597891 +0000 UTC m=+0.021093171 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:36:27 compute-0 systemd[1]: Started Ceph mon.compute-0 for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:36:27 compute-0 ceph-mon[73405]: set uid:gid to 167:167 (ceph:ceph)
Sep 30 17:36:27 compute-0 ceph-mon[73405]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Sep 30 17:36:27 compute-0 ceph-mon[73405]: pidfile_write: ignore empty --pid-file
Sep 30 17:36:27 compute-0 ceph-mon[73405]: load: jerasure load: lrc 
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: RocksDB version: 7.9.2
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: Git sha 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: Compile date 2025-07-17 03:12:14
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: DB SUMMARY
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: DB Session ID:  JJIPTOSGLZNIGINW79E7
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: CURRENT file:  CURRENT
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: IDENTITY file:  IDENTITY
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                         Options.error_if_exists: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                       Options.create_if_missing: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                         Options.paranoid_checks: 1
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:             Options.flush_verify_memtable_count: 1
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                                     Options.env: 0x55ab8e14ec20
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                                      Options.fs: PosixFileSystem
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                                Options.info_log: 0x55ab8eb58d60
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                Options.max_file_opening_threads: 16
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                              Options.statistics: (nil)
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                               Options.use_fsync: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                       Options.max_log_file_size: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                   Options.log_file_time_to_roll: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                       Options.keep_log_file_num: 1000
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                    Options.recycle_log_file_num: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                         Options.allow_fallocate: 1
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                        Options.allow_mmap_reads: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                       Options.allow_mmap_writes: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                        Options.use_direct_reads: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:          Options.create_missing_column_families: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                              Options.db_log_dir: 
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                                 Options.wal_dir: 
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                Options.table_cache_numshardbits: 6
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                         Options.WAL_ttl_seconds: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                       Options.WAL_size_limit_MB: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:             Options.manifest_preallocation_size: 4194304
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                     Options.is_fd_close_on_exec: 1
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                   Options.advise_random_on_open: 1
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                    Options.db_write_buffer_size: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                    Options.write_buffer_manager: 0x55ab8eb5d900
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:         Options.access_hint_on_compaction_start: 1
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                      Options.use_adaptive_mutex: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                            Options.rate_limiter: (nil)
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                       Options.wal_recovery_mode: 2
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                  Options.enable_thread_tracking: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                  Options.enable_pipelined_write: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                  Options.unordered_write: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:             Options.write_thread_max_yield_usec: 100
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                               Options.row_cache: None
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                              Options.wal_filter: None
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:             Options.avoid_flush_during_recovery: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:             Options.allow_ingest_behind: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:             Options.two_write_queues: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:             Options.manual_wal_flush: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:             Options.wal_compression: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:             Options.atomic_flush: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                 Options.persist_stats_to_disk: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                 Options.write_dbid_to_manifest: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                 Options.log_readahead_size: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                 Options.best_efforts_recovery: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:             Options.allow_data_in_errors: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:             Options.db_host_id: __hostname__
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:             Options.enforce_single_del_contracts: true
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:             Options.max_background_jobs: 2
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:             Options.max_background_compactions: -1
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:             Options.max_subcompactions: 1
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:             Options.delayed_write_rate : 16777216
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:             Options.max_total_wal_size: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                   Options.stats_dump_period_sec: 600
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                 Options.stats_persist_period_sec: 600
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                          Options.max_open_files: -1
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                          Options.bytes_per_sync: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                      Options.wal_bytes_per_sync: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                   Options.strict_bytes_per_sync: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:       Options.compaction_readahead_size: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                  Options.max_background_flushes: -1
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: Compression algorithms supported:
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:         kZSTD supported: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:         kXpressCompression supported: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:         kBZip2Compression supported: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:         kZSTDNotFinalCompression supported: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:         kLZ4Compression supported: 1
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:         kZlibCompression supported: 1
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:         kLZ4HCCompression supported: 1
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:         kSnappyCompression supported: 1
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: Fast CRC32 supported: Supported on x86
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: DMutex implementation: pthread_mutex_t
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:           Options.merge_operator: 
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:        Options.compaction_filter: None
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab8eb58500)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ab8eb7d350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:        Options.write_buffer_size: 33554432
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:  Options.max_write_buffer_number: 2
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:          Options.compression: NoCompression
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:             Options.num_levels: 7
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                           Options.bloom_locality: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                               Options.ttl: 2592000
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                       Options.enable_blob_files: false
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                           Options.min_blob_size: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 6f6abc8b-413d-4a47-b5c7-406ff28f77d6
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759253787059510, "job": 1, "event": "recovery_started", "wal_files": [4]}
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759253787061780, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "JJIPTOSGLZNIGINW79E7", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759253787061875, "job": 1, "event": "recovery_finished"}
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55ab8eb7ee00
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: DB pointer 0x55ab8ec88000
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 17:36:27 compute-0 ceph-mon[73405]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.19 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.19 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab8eb7d350#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.95 KB,0.000181794%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Sep 30 17:36:27 compute-0 ceph-mon[73405]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:36:27 compute-0 ceph-mon[73405]: mon.compute-0@-1(???) e0 preinit fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:36:27 compute-0 ceph-mon[73405]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Sep 30 17:36:27 compute-0 ceph-mon[73405]: mon.compute-0@0(probing) e0 win_standalone_election
Sep 30 17:36:27 compute-0 ceph-mon[73405]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Sep 30 17:36:27 compute-0 ceph-mon[73405]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Sep 30 17:36:27 compute-0 ceph-mon[73405]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Sep 30 17:36:27 compute-0 ceph-mon[73405]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Sep 30 17:36:27 compute-0 ceph-mon[73405]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Sep 30 17:36:27 compute-0 ceph-mon[73405]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Sep 30 17:36:27 compute-0 ceph-mon[73405]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Sep 30 17:36:27 compute-0 ceph-mon[73405]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Sep 30 17:36:27 compute-0 ceph-mon[73405]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Sep 30 17:36:27 compute-0 ceph-mon[73405]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: mon.compute-0@0(probing) e1 win_standalone_election
Sep 30 17:36:27 compute-0 ceph-mon[73405]: paxos.0).electionLogic(2) init, last seen epoch 2
Sep 30 17:36:27 compute-0 podman[73406]: 2025-09-30 17:36:27.091549686 +0000 UTC m=+0.045927037 container create 66877fe8f6a2f70fbaf475acf7e81ff861078bce480f8bc7d8980a99c09e8425 (image=quay.io/ceph/ceph:v19, name=naughty_hugle, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:36:27 compute-0 ceph-mon[73405]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Sep 30 17:36:27 compute-0 ceph-mon[73405]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Sep 30 17:36:27 compute-0 ceph-mon[73405]: log_channel(cluster) log [DBG] : monmap epoch 1
Sep 30 17:36:27 compute-0 ceph-mon[73405]: log_channel(cluster) log [DBG] : fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:36:27 compute-0 ceph-mon[73405]: log_channel(cluster) log [DBG] : last_changed 2025-09-30T17:36:25.121133+0000
Sep 30 17:36:27 compute-0 ceph-mon[73405]: log_channel(cluster) log [DBG] : created 2025-09-30T17:36:25.121133+0000
Sep 30 17:36:27 compute-0 ceph-mon[73405]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Sep 30 17:36:27 compute-0 ceph-mon[73405]: log_channel(cluster) log [DBG] : election_strategy: 1
Sep 30 17:36:27 compute-0 ceph-mon[73405]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Sep 30 17:36:27 compute-0 ceph-mon[73405]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v19,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Mon Sep 15 21:46:13 UTC 2025,kernel_version=5.14.0-617.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864116,os=Linux}
Sep 30 17:36:27 compute-0 ceph-mon[73405]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Sep 30 17:36:27 compute-0 ceph-mon[73405]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Sep 30 17:36:27 compute-0 ceph-mon[73405]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Sep 30 17:36:27 compute-0 ceph-mon[73405]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Sep 30 17:36:27 compute-0 ceph-mon[73405]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Sep 30 17:36:27 compute-0 ceph-mon[73405]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Sep 30 17:36:27 compute-0 ceph-mon[73405]: mon.compute-0@0(leader).mds e1 new map
Sep 30 17:36:27 compute-0 ceph-mon[73405]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           btime 2025-09-30T17:36:27.095553+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Sep 30 17:36:27 compute-0 ceph-mon[73405]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Sep 30 17:36:27 compute-0 ceph-mon[73405]: log_channel(cluster) log [DBG] : fsmap 
Sep 30 17:36:27 compute-0 ceph-mon[73405]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Sep 30 17:36:27 compute-0 ceph-mon[73405]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Sep 30 17:36:27 compute-0 ceph-mon[73405]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Sep 30 17:36:27 compute-0 ceph-mon[73405]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Sep 30 17:36:27 compute-0 ceph-mon[73405]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Sep 30 17:36:27 compute-0 ceph-mon[73405]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Sep 30 17:36:27 compute-0 ceph-mon[73405]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Sep 30 17:36:27 compute-0 ceph-mon[73405]: mkfs 63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:36:27 compute-0 ceph-mon[73405]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Sep 30 17:36:27 compute-0 ceph-mon[73405]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Sep 30 17:36:27 compute-0 ceph-mon[73405]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Sep 30 17:36:27 compute-0 ceph-mon[73405]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Sep 30 17:36:27 compute-0 systemd[1]: Started libpod-conmon-66877fe8f6a2f70fbaf475acf7e81ff861078bce480f8bc7d8980a99c09e8425.scope.
Sep 30 17:36:27 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:36:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d7d33e5eefe3648ded338f77f77295eb8522c07049917fdfa07560b315fceca/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d7d33e5eefe3648ded338f77f77295eb8522c07049917fdfa07560b315fceca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d7d33e5eefe3648ded338f77f77295eb8522c07049917fdfa07560b315fceca/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:27 compute-0 podman[73406]: 2025-09-30 17:36:27.068854261 +0000 UTC m=+0.023231642 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:36:27 compute-0 podman[73406]: 2025-09-30 17:36:27.172388676 +0000 UTC m=+0.126766037 container init 66877fe8f6a2f70fbaf475acf7e81ff861078bce480f8bc7d8980a99c09e8425 (image=quay.io/ceph/ceph:v19, name=naughty_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Sep 30 17:36:27 compute-0 podman[73406]: 2025-09-30 17:36:27.178015086 +0000 UTC m=+0.132392437 container start 66877fe8f6a2f70fbaf475acf7e81ff861078bce480f8bc7d8980a99c09e8425 (image=quay.io/ceph/ceph:v19, name=naughty_hugle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:36:27 compute-0 podman[73406]: 2025-09-30 17:36:27.181931898 +0000 UTC m=+0.136309269 container attach 66877fe8f6a2f70fbaf475acf7e81ff861078bce480f8bc7d8980a99c09e8425 (image=quay.io/ceph/ceph:v19, name=naughty_hugle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid)
Sep 30 17:36:27 compute-0 ceph-mon[73405]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Sep 30 17:36:27 compute-0 ceph-mon[73405]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2014316848' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Sep 30 17:36:27 compute-0 naughty_hugle[73460]:   cluster:
Sep 30 17:36:27 compute-0 naughty_hugle[73460]:     id:     63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:36:27 compute-0 naughty_hugle[73460]:     health: HEALTH_OK
Sep 30 17:36:27 compute-0 naughty_hugle[73460]:  
Sep 30 17:36:27 compute-0 naughty_hugle[73460]:   services:
Sep 30 17:36:27 compute-0 naughty_hugle[73460]:     mon: 1 daemons, quorum compute-0 (age 0.26836s)
Sep 30 17:36:27 compute-0 naughty_hugle[73460]:     mgr: no daemons active
Sep 30 17:36:27 compute-0 naughty_hugle[73460]:     osd: 0 osds: 0 up, 0 in
Sep 30 17:36:27 compute-0 naughty_hugle[73460]:  
Sep 30 17:36:27 compute-0 naughty_hugle[73460]:   data:
Sep 30 17:36:27 compute-0 naughty_hugle[73460]:     pools:   0 pools, 0 pgs
Sep 30 17:36:27 compute-0 naughty_hugle[73460]:     objects: 0 objects, 0 B
Sep 30 17:36:27 compute-0 naughty_hugle[73460]:     usage:   0 B used, 0 B / 0 B avail
Sep 30 17:36:27 compute-0 naughty_hugle[73460]:     pgs:     
Sep 30 17:36:27 compute-0 naughty_hugle[73460]:  
Sep 30 17:36:27 compute-0 systemd[1]: libpod-66877fe8f6a2f70fbaf475acf7e81ff861078bce480f8bc7d8980a99c09e8425.scope: Deactivated successfully.
Sep 30 17:36:27 compute-0 podman[73406]: 2025-09-30 17:36:27.378702226 +0000 UTC m=+0.333079577 container died 66877fe8f6a2f70fbaf475acf7e81ff861078bce480f8bc7d8980a99c09e8425 (image=quay.io/ceph/ceph:v19, name=naughty_hugle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Sep 30 17:36:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d7d33e5eefe3648ded338f77f77295eb8522c07049917fdfa07560b315fceca-merged.mount: Deactivated successfully.
Sep 30 17:36:28 compute-0 podman[73406]: 2025-09-30 17:36:28.012872658 +0000 UTC m=+0.967250009 container remove 66877fe8f6a2f70fbaf475acf7e81ff861078bce480f8bc7d8980a99c09e8425 (image=quay.io/ceph/ceph:v19, name=naughty_hugle, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Sep 30 17:36:28 compute-0 systemd[1]: libpod-conmon-66877fe8f6a2f70fbaf475acf7e81ff861078bce480f8bc7d8980a99c09e8425.scope: Deactivated successfully.
Sep 30 17:36:28 compute-0 podman[73498]: 2025-09-30 17:36:28.056142499 +0000 UTC m=+0.021971536 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:36:28 compute-0 podman[73498]: 2025-09-30 17:36:28.211771297 +0000 UTC m=+0.177600304 container create f6ca09b045d8a6a00cf145b58be4c78902798a3582302a2bcd8f4da302c728b5 (image=quay.io/ceph/ceph:v19, name=exciting_gates, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:36:28 compute-0 ceph-mon[73405]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Sep 30 17:36:28 compute-0 ceph-mon[73405]: monmap epoch 1
Sep 30 17:36:28 compute-0 ceph-mon[73405]: fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:36:28 compute-0 ceph-mon[73405]: last_changed 2025-09-30T17:36:25.121133+0000
Sep 30 17:36:28 compute-0 ceph-mon[73405]: created 2025-09-30T17:36:25.121133+0000
Sep 30 17:36:28 compute-0 ceph-mon[73405]: min_mon_release 19 (squid)
Sep 30 17:36:28 compute-0 ceph-mon[73405]: election_strategy: 1
Sep 30 17:36:28 compute-0 ceph-mon[73405]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Sep 30 17:36:28 compute-0 ceph-mon[73405]: fsmap 
Sep 30 17:36:28 compute-0 ceph-mon[73405]: osdmap e1: 0 total, 0 up, 0 in
Sep 30 17:36:28 compute-0 ceph-mon[73405]: mgrmap e1: no daemons active
Sep 30 17:36:28 compute-0 ceph-mon[73405]: from='client.? 192.168.122.100:0/2014316848' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Sep 30 17:36:28 compute-0 systemd[1]: Started libpod-conmon-f6ca09b045d8a6a00cf145b58be4c78902798a3582302a2bcd8f4da302c728b5.scope.
Sep 30 17:36:28 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:36:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e2a65a00943d1947d5c10841fde474658cfb4801c4ac4a5705a6790431c5de4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e2a65a00943d1947d5c10841fde474658cfb4801c4ac4a5705a6790431c5de4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e2a65a00943d1947d5c10841fde474658cfb4801c4ac4a5705a6790431c5de4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e2a65a00943d1947d5c10841fde474658cfb4801c4ac4a5705a6790431c5de4/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:28 compute-0 podman[73498]: 2025-09-30 17:36:28.497857067 +0000 UTC m=+0.463686094 container init f6ca09b045d8a6a00cf145b58be4c78902798a3582302a2bcd8f4da302c728b5 (image=quay.io/ceph/ceph:v19, name=exciting_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 17:36:28 compute-0 podman[73498]: 2025-09-30 17:36:28.504084284 +0000 UTC m=+0.469913331 container start f6ca09b045d8a6a00cf145b58be4c78902798a3582302a2bcd8f4da302c728b5 (image=quay.io/ceph/ceph:v19, name=exciting_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid)
Sep 30 17:36:28 compute-0 podman[73498]: 2025-09-30 17:36:28.550588466 +0000 UTC m=+0.516417503 container attach f6ca09b045d8a6a00cf145b58be4c78902798a3582302a2bcd8f4da302c728b5 (image=quay.io/ceph/ceph:v19, name=exciting_gates, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:36:28 compute-0 ceph-mon[73405]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Sep 30 17:36:28 compute-0 ceph-mon[73405]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1596877054' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Sep 30 17:36:28 compute-0 ceph-mon[73405]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1596877054' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Sep 30 17:36:28 compute-0 exciting_gates[73514]: 
Sep 30 17:36:28 compute-0 exciting_gates[73514]: [global]
Sep 30 17:36:28 compute-0 exciting_gates[73514]:         fsid = 63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:36:28 compute-0 exciting_gates[73514]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Sep 30 17:36:28 compute-0 systemd[1]: libpod-f6ca09b045d8a6a00cf145b58be4c78902798a3582302a2bcd8f4da302c728b5.scope: Deactivated successfully.
Sep 30 17:36:28 compute-0 podman[73498]: 2025-09-30 17:36:28.856641713 +0000 UTC m=+0.822470740 container died f6ca09b045d8a6a00cf145b58be4c78902798a3582302a2bcd8f4da302c728b5 (image=quay.io/ceph/ceph:v19, name=exciting_gates, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:36:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e2a65a00943d1947d5c10841fde474658cfb4801c4ac4a5705a6790431c5de4-merged.mount: Deactivated successfully.
Sep 30 17:36:29 compute-0 ceph-mon[73405]: from='client.? 192.168.122.100:0/1596877054' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Sep 30 17:36:29 compute-0 ceph-mon[73405]: from='client.? 192.168.122.100:0/1596877054' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Sep 30 17:36:29 compute-0 podman[73498]: 2025-09-30 17:36:29.49480382 +0000 UTC m=+1.460632827 container remove f6ca09b045d8a6a00cf145b58be4c78902798a3582302a2bcd8f4da302c728b5 (image=quay.io/ceph/ceph:v19, name=exciting_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default)
Sep 30 17:36:29 compute-0 systemd[1]: libpod-conmon-f6ca09b045d8a6a00cf145b58be4c78902798a3582302a2bcd8f4da302c728b5.scope: Deactivated successfully.
Sep 30 17:36:29 compute-0 podman[73552]: 2025-09-30 17:36:29.622172013 +0000 UTC m=+0.112658666 container create 437c5757c1294a29b348f25f4cd2a79668d5239eb92057f6145f8fddd9b18ae0 (image=quay.io/ceph/ceph:v19, name=pensive_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Sep 30 17:36:29 compute-0 podman[73552]: 2025-09-30 17:36:29.531985428 +0000 UTC m=+0.022472101 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:36:29 compute-0 systemd[1]: Started libpod-conmon-437c5757c1294a29b348f25f4cd2a79668d5239eb92057f6145f8fddd9b18ae0.scope.
Sep 30 17:36:29 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:36:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77cbd60ff87cabfe732b7382f051ba00e4948228a9e507a23fb8b9f1e828f22a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77cbd60ff87cabfe732b7382f051ba00e4948228a9e507a23fb8b9f1e828f22a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77cbd60ff87cabfe732b7382f051ba00e4948228a9e507a23fb8b9f1e828f22a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77cbd60ff87cabfe732b7382f051ba00e4948228a9e507a23fb8b9f1e828f22a/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:30 compute-0 podman[73552]: 2025-09-30 17:36:30.108770748 +0000 UTC m=+0.599257491 container init 437c5757c1294a29b348f25f4cd2a79668d5239eb92057f6145f8fddd9b18ae0 (image=quay.io/ceph/ceph:v19, name=pensive_murdock, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Sep 30 17:36:30 compute-0 podman[73552]: 2025-09-30 17:36:30.115015155 +0000 UTC m=+0.605501798 container start 437c5757c1294a29b348f25f4cd2a79668d5239eb92057f6145f8fddd9b18ae0 (image=quay.io/ceph/ceph:v19, name=pensive_murdock, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Sep 30 17:36:30 compute-0 podman[73552]: 2025-09-30 17:36:30.259707112 +0000 UTC m=+0.750193785 container attach 437c5757c1294a29b348f25f4cd2a79668d5239eb92057f6145f8fddd9b18ae0 (image=quay.io/ceph/ceph:v19, name=pensive_murdock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Sep 30 17:36:30 compute-0 ceph-mon[73405]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:36:30 compute-0 ceph-mon[73405]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1017224872' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:36:30 compute-0 systemd[1]: libpod-437c5757c1294a29b348f25f4cd2a79668d5239eb92057f6145f8fddd9b18ae0.scope: Deactivated successfully.
Sep 30 17:36:30 compute-0 podman[73552]: 2025-09-30 17:36:30.313876103 +0000 UTC m=+0.804362766 container died 437c5757c1294a29b348f25f4cd2a79668d5239eb92057f6145f8fddd9b18ae0 (image=quay.io/ceph/ceph:v19, name=pensive_murdock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:36:30 compute-0 ceph-mon[73405]: from='client.? 192.168.122.100:0/1017224872' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:36:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-77cbd60ff87cabfe732b7382f051ba00e4948228a9e507a23fb8b9f1e828f22a-merged.mount: Deactivated successfully.
Sep 30 17:36:30 compute-0 podman[73552]: 2025-09-30 17:36:30.982374083 +0000 UTC m=+1.472860756 container remove 437c5757c1294a29b348f25f4cd2a79668d5239eb92057f6145f8fddd9b18ae0 (image=quay.io/ceph/ceph:v19, name=pensive_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Sep 30 17:36:31 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 17:36:31 compute-0 systemd[1]: libpod-conmon-437c5757c1294a29b348f25f4cd2a79668d5239eb92057f6145f8fddd9b18ae0.scope: Deactivated successfully.
Sep 30 17:36:31 compute-0 ceph-mon[73405]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Sep 30 17:36:31 compute-0 ceph-mon[73405]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Sep 30 17:36:31 compute-0 ceph-mon[73405]: mon.compute-0@0(leader) e1 shutdown
Sep 30 17:36:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0[73401]: 2025-09-30T17:36:31.205+0000 7f4189a02640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Sep 30 17:36:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0[73401]: 2025-09-30T17:36:31.205+0000 7f4189a02640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Sep 30 17:36:31 compute-0 ceph-mon[73405]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Sep 30 17:36:31 compute-0 ceph-mon[73405]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Sep 30 17:36:31 compute-0 podman[73636]: 2025-09-30 17:36:31.266805116 +0000 UTC m=+0.152409238 container died 75febbdcf57f1445796b17825a0af2a33fe519dda3e5164dea316fe9b6d76335 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Sep 30 17:36:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b8301e99e3b57703e5387cd0c80f731c88b836b5205fbc33a8b0e70ea18ee80-merged.mount: Deactivated successfully.
Sep 30 17:36:31 compute-0 podman[73636]: 2025-09-30 17:36:31.755012416 +0000 UTC m=+0.640616548 container remove 75febbdcf57f1445796b17825a0af2a33fe519dda3e5164dea316fe9b6d76335 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Sep 30 17:36:31 compute-0 bash[73636]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0
Sep 30 17:36:31 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Sep 30 17:36:31 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Sep 30 17:36:31 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Sep 30 17:36:31 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@mon.compute-0.service: Deactivated successfully.
Sep 30 17:36:31 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:36:31 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 17:36:32 compute-0 podman[73736]: 2025-09-30 17:36:32.069936025 +0000 UTC m=+0.020439162 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:36:32 compute-0 podman[73736]: 2025-09-30 17:36:32.176520277 +0000 UTC m=+0.127023424 container create 28cb2903608cec8e7c5a39aa3f398cadea7d807beef2cb8cca333795d18a1341 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf5a3761bdf784c09ae4a5c3b1c07ba6e24ad070f488f20760181970b093c97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf5a3761bdf784c09ae4a5c3b1c07ba6e24ad070f488f20760181970b093c97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf5a3761bdf784c09ae4a5c3b1c07ba6e24ad070f488f20760181970b093c97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf5a3761bdf784c09ae4a5c3b1c07ba6e24ad070f488f20760181970b093c97/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:32 compute-0 podman[73736]: 2025-09-30 17:36:32.666646801 +0000 UTC m=+0.617149948 container init 28cb2903608cec8e7c5a39aa3f398cadea7d807beef2cb8cca333795d18a1341 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Sep 30 17:36:32 compute-0 podman[73736]: 2025-09-30 17:36:32.672017274 +0000 UTC m=+0.622520391 container start 28cb2903608cec8e7c5a39aa3f398cadea7d807beef2cb8cca333795d18a1341 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:36:32 compute-0 ceph-mon[73755]: set uid:gid to 167:167 (ceph:ceph)
Sep 30 17:36:32 compute-0 ceph-mon[73755]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Sep 30 17:36:32 compute-0 ceph-mon[73755]: pidfile_write: ignore empty --pid-file
Sep 30 17:36:32 compute-0 ceph-mon[73755]: load: jerasure load: lrc 
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: RocksDB version: 7.9.2
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: Git sha 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: Compile date 2025-07-17 03:12:14
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: DB SUMMARY
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: DB Session ID:  VEKEW5JSCHGYP3QRRVQQ
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: CURRENT file:  CURRENT
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: IDENTITY file:  IDENTITY
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 60447 ; 
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                         Options.error_if_exists: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                       Options.create_if_missing: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                         Options.paranoid_checks: 1
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:             Options.flush_verify_memtable_count: 1
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                                     Options.env: 0x55e76c629c20
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                                      Options.fs: PosixFileSystem
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                                Options.info_log: 0x55e76de12e20
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                Options.max_file_opening_threads: 16
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                              Options.statistics: (nil)
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                               Options.use_fsync: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                       Options.max_log_file_size: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                   Options.log_file_time_to_roll: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                       Options.keep_log_file_num: 1000
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                    Options.recycle_log_file_num: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                         Options.allow_fallocate: 1
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                        Options.allow_mmap_reads: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                       Options.allow_mmap_writes: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                        Options.use_direct_reads: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:          Options.create_missing_column_families: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                              Options.db_log_dir: 
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                                 Options.wal_dir: 
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                Options.table_cache_numshardbits: 6
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                         Options.WAL_ttl_seconds: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                       Options.WAL_size_limit_MB: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:             Options.manifest_preallocation_size: 4194304
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                     Options.is_fd_close_on_exec: 1
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                   Options.advise_random_on_open: 1
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                    Options.db_write_buffer_size: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                    Options.write_buffer_manager: 0x55e76de17900
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:         Options.access_hint_on_compaction_start: 1
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                      Options.use_adaptive_mutex: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                            Options.rate_limiter: (nil)
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                       Options.wal_recovery_mode: 2
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                  Options.enable_thread_tracking: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                  Options.enable_pipelined_write: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                  Options.unordered_write: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:             Options.write_thread_max_yield_usec: 100
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                               Options.row_cache: None
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                              Options.wal_filter: None
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:             Options.avoid_flush_during_recovery: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:             Options.allow_ingest_behind: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:             Options.two_write_queues: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:             Options.manual_wal_flush: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:             Options.wal_compression: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:             Options.atomic_flush: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                 Options.persist_stats_to_disk: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                 Options.write_dbid_to_manifest: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                 Options.log_readahead_size: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                 Options.best_efforts_recovery: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:             Options.allow_data_in_errors: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:             Options.db_host_id: __hostname__
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:             Options.enforce_single_del_contracts: true
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:             Options.max_background_jobs: 2
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:             Options.max_background_compactions: -1
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:             Options.max_subcompactions: 1
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:             Options.delayed_write_rate : 16777216
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:             Options.max_total_wal_size: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                   Options.stats_dump_period_sec: 600
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                 Options.stats_persist_period_sec: 600
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                          Options.max_open_files: -1
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                          Options.bytes_per_sync: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                      Options.wal_bytes_per_sync: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                   Options.strict_bytes_per_sync: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:       Options.compaction_readahead_size: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                  Options.max_background_flushes: -1
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: Compression algorithms supported:
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:         kZSTD supported: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:         kXpressCompression supported: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:         kBZip2Compression supported: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:         kZSTDNotFinalCompression supported: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:         kLZ4Compression supported: 1
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:         kZlibCompression supported: 1
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:         kLZ4HCCompression supported: 1
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:         kSnappyCompression supported: 1
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: Fast CRC32 supported: Supported on x86
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: DMutex implementation: pthread_mutex_t
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:           Options.merge_operator: 
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:        Options.compaction_filter: None
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e76de12aa0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e76de37350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:        Options.write_buffer_size: 33554432
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:  Options.max_write_buffer_number: 2
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:          Options.compression: NoCompression
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:             Options.num_levels: 7
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                           Options.bloom_locality: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                               Options.ttl: 2592000
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                       Options.enable_blob_files: false
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                           Options.min_blob_size: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 6f6abc8b-413d-4a47-b5c7-406ff28f77d6
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759253792709276, "job": 1, "event": "recovery_started", "wal_files": [9]}
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759253792951212, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 59947, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 150, "table_properties": {"data_size": 58402, "index_size": 187, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3267, "raw_average_key_size": 29, "raw_value_size": 55820, "raw_average_value_size": 512, "num_data_blocks": 9, "num_entries": 109, "num_filter_entries": 109, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253792, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759253792951445, "job": 1, "event": "recovery_finished"}
Sep 30 17:36:32 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Sep 30 17:36:32 compute-0 bash[73736]: 28cb2903608cec8e7c5a39aa3f398cadea7d807beef2cb8cca333795d18a1341
Sep 30 17:36:33 compute-0 systemd[1]: Started Ceph mon.compute-0 for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:36:33 compute-0 podman[73777]: 2025-09-30 17:36:33.048805614 +0000 UTC m=+0.023972863 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:36:33 compute-0 podman[73777]: 2025-09-30 17:36:33.220720386 +0000 UTC m=+0.195887615 container create 1ad3bab2ea83a7dfb726a1a3676c3ebd9a83660b4e8a033868021c61290541e6 (image=quay.io/ceph/ceph:v19, name=compassionate_blackwell, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Sep 30 17:36:33 compute-0 systemd[1]: Started libpod-conmon-1ad3bab2ea83a7dfb726a1a3676c3ebd9a83660b4e8a033868021c61290541e6.scope.
Sep 30 17:36:33 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:36:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/376b15efa604f6ba9f7d9ae37279ceedf059e610221cd92944895ac44843e39b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/376b15efa604f6ba9f7d9ae37279ceedf059e610221cd92944895ac44843e39b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/376b15efa604f6ba9f7d9ae37279ceedf059e610221cd92944895ac44843e39b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:33 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 17:36:33 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55e76de38e00
Sep 30 17:36:33 compute-0 ceph-mon[73755]: rocksdb: DB pointer 0x55e76df42000
Sep 30 17:36:33 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 17:36:33 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1.3 total, 1.3 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   60.44 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.24              0.00         1    0.242       0      0       0.0       0.0
                                            Sum      2/0   60.44 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.24              0.00         1    0.242       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.24              0.00         1    0.242       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.24              0.00         1    0.242       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.3 total, 1.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.04 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.04 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e76de37350#2 capacity: 512.00 MB usage: 0.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.38 KB,7.15256e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Sep 30 17:36:33 compute-0 ceph-mon[73755]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:36:33 compute-0 ceph-mon[73755]: mon.compute-0@-1(???) e1 preinit fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:36:33 compute-0 ceph-mon[73755]: mon.compute-0@-1(???).mds e1 new map
Sep 30 17:36:33 compute-0 ceph-mon[73755]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           btime 2025-09-30T17:36:27.095553+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Sep 30 17:36:33 compute-0 ceph-mon[73755]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Sep 30 17:36:33 compute-0 ceph-mon[73755]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Sep 30 17:36:33 compute-0 ceph-mon[73755]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Sep 30 17:36:33 compute-0 ceph-mon[73755]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Sep 30 17:36:33 compute-0 ceph-mon[73755]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Sep 30 17:36:33 compute-0 ceph-mon[73755]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Sep 30 17:36:33 compute-0 ceph-mon[73755]: mon.compute-0@0(probing) e1 win_standalone_election
Sep 30 17:36:33 compute-0 ceph-mon[73755]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Sep 30 17:36:34 compute-0 podman[73777]: 2025-09-30 17:36:34.88550932 +0000 UTC m=+1.860676559 container init 1ad3bab2ea83a7dfb726a1a3676c3ebd9a83660b4e8a033868021c61290541e6 (image=quay.io/ceph/ceph:v19, name=compassionate_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Sep 30 17:36:34 compute-0 podman[73777]: 2025-09-30 17:36:34.898739227 +0000 UTC m=+1.873906496 container start 1ad3bab2ea83a7dfb726a1a3676c3ebd9a83660b4e8a033868021c61290541e6 (image=quay.io/ceph/ceph:v19, name=compassionate_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 17:36:35 compute-0 ceph-mon[73755]: mon.compute-0@0(probing) e1 handle_auth_request failed to assign global_id
Sep 30 17:36:35 compute-0 ceph-mon[73755]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Sep 30 17:36:35 compute-0 ceph-mon[73755]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Sep 30 17:36:35 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : monmap epoch 1
Sep 30 17:36:35 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:36:35 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : last_changed 2025-09-30T17:36:25.121133+0000
Sep 30 17:36:35 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : created 2025-09-30T17:36:25.121133+0000
Sep 30 17:36:35 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Sep 30 17:36:35 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : election_strategy: 1
Sep 30 17:36:35 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Sep 30 17:36:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Sep 30 17:36:35 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : fsmap 
Sep 30 17:36:35 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Sep 30 17:36:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_auth_request failed to assign global_id
Sep 30 17:36:35 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Sep 30 17:36:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_auth_request failed to assign global_id
Sep 30 17:36:35 compute-0 podman[73777]: 2025-09-30 17:36:35.61588709 +0000 UTC m=+2.591054319 container attach 1ad3bab2ea83a7dfb726a1a3676c3ebd9a83660b4e8a033868021c61290541e6 (image=quay.io/ceph/ceph:v19, name=compassionate_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:36:35 compute-0 ceph-mon[73755]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Sep 30 17:36:35 compute-0 ceph-mon[73755]: monmap epoch 1
Sep 30 17:36:35 compute-0 ceph-mon[73755]: fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:36:35 compute-0 ceph-mon[73755]: last_changed 2025-09-30T17:36:25.121133+0000
Sep 30 17:36:35 compute-0 ceph-mon[73755]: created 2025-09-30T17:36:25.121133+0000
Sep 30 17:36:35 compute-0 ceph-mon[73755]: min_mon_release 19 (squid)
Sep 30 17:36:35 compute-0 ceph-mon[73755]: election_strategy: 1
Sep 30 17:36:35 compute-0 ceph-mon[73755]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Sep 30 17:36:35 compute-0 ceph-mon[73755]: fsmap 
Sep 30 17:36:35 compute-0 ceph-mon[73755]: osdmap e1: 0 total, 0 up, 0 in
Sep 30 17:36:35 compute-0 ceph-mon[73755]: mgrmap e1: no daemons active
Sep 30 17:36:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Sep 30 17:36:36 compute-0 systemd[1]: libpod-1ad3bab2ea83a7dfb726a1a3676c3ebd9a83660b4e8a033868021c61290541e6.scope: Deactivated successfully.
Sep 30 17:36:36 compute-0 podman[73777]: 2025-09-30 17:36:36.526300222 +0000 UTC m=+3.501467451 container died 1ad3bab2ea83a7dfb726a1a3676c3ebd9a83660b4e8a033868021c61290541e6 (image=quay.io/ceph/ceph:v19, name=compassionate_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:36:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-376b15efa604f6ba9f7d9ae37279ceedf059e610221cd92944895ac44843e39b-merged.mount: Deactivated successfully.
Sep 30 17:36:36 compute-0 podman[73777]: 2025-09-30 17:36:36.939024195 +0000 UTC m=+3.914191424 container remove 1ad3bab2ea83a7dfb726a1a3676c3ebd9a83660b4e8a033868021c61290541e6 (image=quay.io/ceph/ceph:v19, name=compassionate_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Sep 30 17:36:36 compute-0 systemd[1]: libpod-conmon-1ad3bab2ea83a7dfb726a1a3676c3ebd9a83660b4e8a033868021c61290541e6.scope: Deactivated successfully.
Sep 30 17:36:37 compute-0 podman[73851]: 2025-09-30 17:36:37.002265664 +0000 UTC m=+0.043649943 container create 7be3df20a5b24f76bbb7400a6a7c1f9e0228647998058720756765374ac787fb (image=quay.io/ceph/ceph:v19, name=jovial_gates, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Sep 30 17:36:37 compute-0 systemd[1]: Started libpod-conmon-7be3df20a5b24f76bbb7400a6a7c1f9e0228647998058720756765374ac787fb.scope.
Sep 30 17:36:37 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d73656e3e93afe4f1208329185d26fb98707577287a521a06b437762051e53f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d73656e3e93afe4f1208329185d26fb98707577287a521a06b437762051e53f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d73656e3e93afe4f1208329185d26fb98707577287a521a06b437762051e53f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:37 compute-0 podman[73851]: 2025-09-30 17:36:36.980753602 +0000 UTC m=+0.022137911 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:36:37 compute-0 podman[73851]: 2025-09-30 17:36:37.090648569 +0000 UTC m=+0.132032868 container init 7be3df20a5b24f76bbb7400a6a7c1f9e0228647998058720756765374ac787fb (image=quay.io/ceph/ceph:v19, name=jovial_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid)
Sep 30 17:36:37 compute-0 podman[73851]: 2025-09-30 17:36:37.097404651 +0000 UTC m=+0.138788950 container start 7be3df20a5b24f76bbb7400a6a7c1f9e0228647998058720756765374ac787fb (image=quay.io/ceph/ceph:v19, name=jovial_gates, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Sep 30 17:36:37 compute-0 podman[73851]: 2025-09-30 17:36:37.10439611 +0000 UTC m=+0.145780409 container attach 7be3df20a5b24f76bbb7400a6a7c1f9e0228647998058720756765374ac787fb (image=quay.io/ceph/ceph:v19, name=jovial_gates, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Sep 30 17:36:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Sep 30 17:36:37 compute-0 systemd[1]: libpod-7be3df20a5b24f76bbb7400a6a7c1f9e0228647998058720756765374ac787fb.scope: Deactivated successfully.
Sep 30 17:36:37 compute-0 podman[73851]: 2025-09-30 17:36:37.330016169 +0000 UTC m=+0.371400448 container died 7be3df20a5b24f76bbb7400a6a7c1f9e0228647998058720756765374ac787fb (image=quay.io/ceph/ceph:v19, name=jovial_gates, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 17:36:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d73656e3e93afe4f1208329185d26fb98707577287a521a06b437762051e53f-merged.mount: Deactivated successfully.
Sep 30 17:36:37 compute-0 podman[73851]: 2025-09-30 17:36:37.6913785 +0000 UTC m=+0.732762779 container remove 7be3df20a5b24f76bbb7400a6a7c1f9e0228647998058720756765374ac787fb (image=quay.io/ceph/ceph:v19, name=jovial_gates, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:36:37 compute-0 systemd[1]: libpod-conmon-7be3df20a5b24f76bbb7400a6a7c1f9e0228647998058720756765374ac787fb.scope: Deactivated successfully.
Sep 30 17:36:37 compute-0 systemd[1]: Reloading.
Sep 30 17:36:37 compute-0 systemd-rc-local-generator[73932]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:36:37 compute-0 systemd-sysv-generator[73936]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:36:38 compute-0 systemd[1]: Reloading.
Sep 30 17:36:38 compute-0 systemd-rc-local-generator[73966]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:36:38 compute-0 systemd-sysv-generator[73972]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:36:38 compute-0 systemd[1]: Starting Ceph mgr.compute-0.efvthf for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 17:36:38 compute-0 podman[74031]: 2025-09-30 17:36:38.632452825 +0000 UTC m=+0.020145504 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:36:38 compute-0 podman[74031]: 2025-09-30 17:36:38.756924415 +0000 UTC m=+0.144617064 container create 41da77fe255c30263fc822234ab302b835812432b5cc5858c17c80fd6c4f751f (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Sep 30 17:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e35a467b50e5a000da7b12af6d87af7b6bba6c05d9d8d837397a14710c9c218b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e35a467b50e5a000da7b12af6d87af7b6bba6c05d9d8d837397a14710c9c218b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e35a467b50e5a000da7b12af6d87af7b6bba6c05d9d8d837397a14710c9c218b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e35a467b50e5a000da7b12af6d87af7b6bba6c05d9d8d837397a14710c9c218b/merged/var/lib/ceph/mgr/ceph-compute-0.efvthf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:39 compute-0 podman[74031]: 2025-09-30 17:36:39.033821264 +0000 UTC m=+0.421514003 container init 41da77fe255c30263fc822234ab302b835812432b5cc5858c17c80fd6c4f751f (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:36:39 compute-0 podman[74031]: 2025-09-30 17:36:39.038985751 +0000 UTC m=+0.426678400 container start 41da77fe255c30263fc822234ab302b835812432b5cc5858c17c80fd6c4f751f (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:36:39 compute-0 ceph-mgr[74051]: set uid:gid to 167:167 (ceph:ceph)
Sep 30 17:36:39 compute-0 ceph-mgr[74051]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Sep 30 17:36:39 compute-0 ceph-mgr[74051]: pidfile_write: ignore empty --pid-file
Sep 30 17:36:39 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'alerts'
Sep 30 17:36:39 compute-0 ceph-mgr[74051]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Sep 30 17:36:39 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'balancer'
Sep 30 17:36:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:39.232+0000 7f58a9d90140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Sep 30 17:36:39 compute-0 ceph-mgr[74051]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Sep 30 17:36:39 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'cephadm'
Sep 30 17:36:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:39.318+0000 7f58a9d90140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Sep 30 17:36:39 compute-0 bash[74031]: 41da77fe255c30263fc822234ab302b835812432b5cc5858c17c80fd6c4f751f
Sep 30 17:36:39 compute-0 systemd[1]: Started Ceph mgr.compute-0.efvthf for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:36:39 compute-0 podman[74083]: 2025-09-30 17:36:39.801505454 +0000 UTC m=+0.020711630 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:36:39 compute-0 podman[74083]: 2025-09-30 17:36:39.940775727 +0000 UTC m=+0.159981883 container create ce479323c99757ea8f9654248e773288c9a3cfa9573d7b7343e5f44574c11798 (image=quay.io/ceph/ceph:v19, name=jovial_volhard, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:36:39 compute-0 systemd[1]: Started libpod-conmon-ce479323c99757ea8f9654248e773288c9a3cfa9573d7b7343e5f44574c11798.scope.
Sep 30 17:36:40 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:36:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/472d55cc597dfeddd04db1ceb19a36e3dc610ad8c24a3ad3064eba56d946f641/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/472d55cc597dfeddd04db1ceb19a36e3dc610ad8c24a3ad3064eba56d946f641/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/472d55cc597dfeddd04db1ceb19a36e3dc610ad8c24a3ad3064eba56d946f641/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:40 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'crash'
Sep 30 17:36:40 compute-0 podman[74083]: 2025-09-30 17:36:40.122114266 +0000 UTC m=+0.341320422 container init ce479323c99757ea8f9654248e773288c9a3cfa9573d7b7343e5f44574c11798 (image=quay.io/ceph/ceph:v19, name=jovial_volhard, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Sep 30 17:36:40 compute-0 podman[74083]: 2025-09-30 17:36:40.131453232 +0000 UTC m=+0.350659388 container start ce479323c99757ea8f9654248e773288c9a3cfa9573d7b7343e5f44574c11798 (image=quay.io/ceph/ceph:v19, name=jovial_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:36:40 compute-0 podman[74083]: 2025-09-30 17:36:40.171068629 +0000 UTC m=+0.390274805 container attach ce479323c99757ea8f9654248e773288c9a3cfa9573d7b7343e5f44574c11798 (image=quay.io/ceph/ceph:v19, name=jovial_volhard, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:36:40 compute-0 ceph-mgr[74051]: mgr[py] Module crash has missing NOTIFY_TYPES member
Sep 30 17:36:40 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'dashboard'
Sep 30 17:36:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:40.173+0000 7f58a9d90140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Sep 30 17:36:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Sep 30 17:36:40 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2758311494' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Sep 30 17:36:40 compute-0 jovial_volhard[74100]: 
Sep 30 17:36:40 compute-0 jovial_volhard[74100]: {
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:     "fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:     "health": {
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:         "status": "HEALTH_OK",
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:         "checks": {},
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:         "mutes": []
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:     },
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:     "election_epoch": 5,
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:     "quorum": [
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:         0
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:     ],
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:     "quorum_names": [
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:         "compute-0"
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:     ],
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:     "quorum_age": 5,
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:     "monmap": {
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:         "epoch": 1,
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:         "min_mon_release_name": "squid",
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:         "num_mons": 1
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:     },
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:     "osdmap": {
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:         "epoch": 1,
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:         "num_osds": 0,
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:         "num_up_osds": 0,
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:         "osd_up_since": 0,
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:         "num_in_osds": 0,
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:         "osd_in_since": 0,
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:         "num_remapped_pgs": 0
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:     },
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:     "pgmap": {
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:         "pgs_by_state": [],
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:         "num_pgs": 0,
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:         "num_pools": 0,
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:         "num_objects": 0,
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:         "data_bytes": 0,
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:         "bytes_used": 0,
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:         "bytes_avail": 0,
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:         "bytes_total": 0
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:     },
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:     "fsmap": {
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:         "epoch": 1,
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:         "btime": "2025-09-30T17:36:27:095553+0000",
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:         "by_rank": [],
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:         "up:standby": 0
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:     },
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:     "mgrmap": {
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:         "available": false,
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:         "num_standbys": 0,
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:         "modules": [
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:             "iostat",
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:             "nfs",
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:             "restful"
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:         ],
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:         "services": {}
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:     },
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:     "servicemap": {
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:         "epoch": 1,
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:         "modified": "2025-09-30T17:36:27.098259+0000",
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:         "services": {}
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:     },
Sep 30 17:36:40 compute-0 jovial_volhard[74100]:     "progress_events": {}
Sep 30 17:36:40 compute-0 jovial_volhard[74100]: }
Sep 30 17:36:40 compute-0 systemd[1]: libpod-ce479323c99757ea8f9654248e773288c9a3cfa9573d7b7343e5f44574c11798.scope: Deactivated successfully.
Sep 30 17:36:40 compute-0 podman[74083]: 2025-09-30 17:36:40.33137085 +0000 UTC m=+0.550577096 container died ce479323c99757ea8f9654248e773288c9a3cfa9573d7b7343e5f44574c11798 (image=quay.io/ceph/ceph:v19, name=jovial_volhard, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 17:36:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-472d55cc597dfeddd04db1ceb19a36e3dc610ad8c24a3ad3064eba56d946f641-merged.mount: Deactivated successfully.
Sep 30 17:36:40 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2758311494' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Sep 30 17:36:40 compute-0 podman[74083]: 2025-09-30 17:36:40.420768873 +0000 UTC m=+0.639975029 container remove ce479323c99757ea8f9654248e773288c9a3cfa9573d7b7343e5f44574c11798 (image=quay.io/ceph/ceph:v19, name=jovial_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:36:40 compute-0 systemd[1]: libpod-conmon-ce479323c99757ea8f9654248e773288c9a3cfa9573d7b7343e5f44574c11798.scope: Deactivated successfully.
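The JSON block above is the output of a one-shot "ceph status --format json-pretty" that cephadm ran in the short-lived jovial_volhard container: a single monitor (compute-0) in quorum, HEALTH_OK, zero OSDs, zero pools, and no active mgr yet. Below is a minimal sketch of how the same status document could be polled and checked from the host; it assumes the ceph CLI and an admin keyring are reachable (cephadm bind-mounts /etc/ceph into the quay.io/ceph/ceph:v19 containers for exactly this), and the wait_for_active_mgr helper name is purely illustrative, not part of cephadm.

# Illustrative sketch: poll `ceph status` until a mon quorum and an active mgr
# are reported, using the same fields visible in the JSON logged above.
import json
import subprocess
import time

def ceph_status():
    # Same request the bootstrap container issued:
    # {"prefix": "status", "format": "json-pretty"}
    out = subprocess.check_output(
        ["ceph", "status", "--format", "json-pretty"], text=True
    )
    return json.loads(out)

def wait_for_active_mgr(timeout=120):  # hypothetical helper, for illustration only
    deadline = time.time() + timeout
    while time.time() < deadline:
        s = ceph_status()
        in_quorum = len(s.get("quorum_names", [])) > 0
        mgr_up = s.get("mgrmap", {}).get("available", False)
        print(s["health"]["status"], s.get("quorum_names"), "mgr available:", mgr_up)
        if in_quorum and mgr_up:
            return s
        time.sleep(5)
    raise TimeoutError("cluster did not reach mon quorum plus active mgr in time")

if __name__ == "__main__":
    wait_for_active_mgr()

Run on the bootstrap host (or inside a ceph container with /etc/ceph mounted), such a loop would exit once the mgrmap reports available: true, which in this log happens shortly after 17:36:46 when mon.compute-0 records "Manager daemon compute-0.efvthf is now available".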
Sep 30 17:36:40 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'devicehealth'
Sep 30 17:36:40 compute-0 ceph-mgr[74051]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Sep 30 17:36:40 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'diskprediction_local'
Sep 30 17:36:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:40.854+0000 7f58a9d90140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Sep 30 17:36:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Sep 30 17:36:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Sep 30 17:36:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]:   from numpy import show_config as show_numpy_config
Sep 30 17:36:41 compute-0 ceph-mgr[74051]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Sep 30 17:36:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:41.027+0000 7f58a9d90140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Sep 30 17:36:41 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'influx'
Sep 30 17:36:41 compute-0 ceph-mgr[74051]: mgr[py] Module influx has missing NOTIFY_TYPES member
Sep 30 17:36:41 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'insights'
Sep 30 17:36:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:41.100+0000 7f58a9d90140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Sep 30 17:36:41 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'iostat'
Sep 30 17:36:41 compute-0 ceph-mgr[74051]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Sep 30 17:36:41 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'k8sevents'
Sep 30 17:36:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:41.243+0000 7f58a9d90140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Sep 30 17:36:41 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'localpool'
Sep 30 17:36:41 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'mds_autoscaler'
Sep 30 17:36:41 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'mirroring'
Sep 30 17:36:42 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'nfs'
Sep 30 17:36:42 compute-0 ceph-mgr[74051]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Sep 30 17:36:42 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'orchestrator'
Sep 30 17:36:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:42.307+0000 7f58a9d90140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Sep 30 17:36:42 compute-0 ceph-mgr[74051]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Sep 30 17:36:42 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'osd_perf_query'
Sep 30 17:36:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:42.550+0000 7f58a9d90140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Sep 30 17:36:42 compute-0 podman[74138]: 2025-09-30 17:36:42.467770373 +0000 UTC m=+0.024292063 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:36:42 compute-0 podman[74138]: 2025-09-30 17:36:42.622166625 +0000 UTC m=+0.178688295 container create 78fb1eac8094b4a83a5a2b803ffe014b4162dbc8515d4694d81f1bf18577d142 (image=quay.io/ceph/ceph:v19, name=frosty_black, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Sep 30 17:36:42 compute-0 ceph-mgr[74051]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Sep 30 17:36:42 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'osd_support'
Sep 30 17:36:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:42.631+0000 7f58a9d90140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Sep 30 17:36:42 compute-0 ceph-mgr[74051]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Sep 30 17:36:42 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'pg_autoscaler'
Sep 30 17:36:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:42.701+0000 7f58a9d90140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Sep 30 17:36:42 compute-0 systemd[1]: Started libpod-conmon-78fb1eac8094b4a83a5a2b803ffe014b4162dbc8515d4694d81f1bf18577d142.scope.
Sep 30 17:36:42 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:36:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a219ed6f7c3562d64869b597272288f52896691722be2c7dc75914e27c1ccf1d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a219ed6f7c3562d64869b597272288f52896691722be2c7dc75914e27c1ccf1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a219ed6f7c3562d64869b597272288f52896691722be2c7dc75914e27c1ccf1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:42 compute-0 ceph-mgr[74051]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Sep 30 17:36:42 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'progress'
Sep 30 17:36:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:42.786+0000 7f58a9d90140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Sep 30 17:36:42 compute-0 podman[74138]: 2025-09-30 17:36:42.798793111 +0000 UTC m=+0.355314821 container init 78fb1eac8094b4a83a5a2b803ffe014b4162dbc8515d4694d81f1bf18577d142 (image=quay.io/ceph/ceph:v19, name=frosty_black, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:36:42 compute-0 podman[74138]: 2025-09-30 17:36:42.807398665 +0000 UTC m=+0.363920335 container start 78fb1eac8094b4a83a5a2b803ffe014b4162dbc8515d4694d81f1bf18577d142 (image=quay.io/ceph/ceph:v19, name=frosty_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Sep 30 17:36:42 compute-0 podman[74138]: 2025-09-30 17:36:42.816370881 +0000 UTC m=+0.372892571 container attach 78fb1eac8094b4a83a5a2b803ffe014b4162dbc8515d4694d81f1bf18577d142 (image=quay.io/ceph/ceph:v19, name=frosty_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid)
Sep 30 17:36:42 compute-0 ceph-mgr[74051]: mgr[py] Module progress has missing NOTIFY_TYPES member
Sep 30 17:36:42 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'prometheus'
Sep 30 17:36:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:42.862+0000 7f58a9d90140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Sep 30 17:36:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Sep 30 17:36:42 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2075675291' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Sep 30 17:36:42 compute-0 frosty_black[74155]: 
Sep 30 17:36:42 compute-0 frosty_black[74155]: {
Sep 30 17:36:42 compute-0 frosty_black[74155]:     "fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 17:36:42 compute-0 frosty_black[74155]:     "health": {
Sep 30 17:36:42 compute-0 frosty_black[74155]:         "status": "HEALTH_OK",
Sep 30 17:36:42 compute-0 frosty_black[74155]:         "checks": {},
Sep 30 17:36:42 compute-0 frosty_black[74155]:         "mutes": []
Sep 30 17:36:42 compute-0 frosty_black[74155]:     },
Sep 30 17:36:42 compute-0 frosty_black[74155]:     "election_epoch": 5,
Sep 30 17:36:42 compute-0 frosty_black[74155]:     "quorum": [
Sep 30 17:36:42 compute-0 frosty_black[74155]:         0
Sep 30 17:36:42 compute-0 frosty_black[74155]:     ],
Sep 30 17:36:42 compute-0 frosty_black[74155]:     "quorum_names": [
Sep 30 17:36:42 compute-0 frosty_black[74155]:         "compute-0"
Sep 30 17:36:42 compute-0 frosty_black[74155]:     ],
Sep 30 17:36:42 compute-0 frosty_black[74155]:     "quorum_age": 7,
Sep 30 17:36:42 compute-0 frosty_black[74155]:     "monmap": {
Sep 30 17:36:42 compute-0 frosty_black[74155]:         "epoch": 1,
Sep 30 17:36:42 compute-0 frosty_black[74155]:         "min_mon_release_name": "squid",
Sep 30 17:36:42 compute-0 frosty_black[74155]:         "num_mons": 1
Sep 30 17:36:42 compute-0 frosty_black[74155]:     },
Sep 30 17:36:42 compute-0 frosty_black[74155]:     "osdmap": {
Sep 30 17:36:42 compute-0 frosty_black[74155]:         "epoch": 1,
Sep 30 17:36:42 compute-0 frosty_black[74155]:         "num_osds": 0,
Sep 30 17:36:42 compute-0 frosty_black[74155]:         "num_up_osds": 0,
Sep 30 17:36:42 compute-0 frosty_black[74155]:         "osd_up_since": 0,
Sep 30 17:36:42 compute-0 frosty_black[74155]:         "num_in_osds": 0,
Sep 30 17:36:42 compute-0 frosty_black[74155]:         "osd_in_since": 0,
Sep 30 17:36:42 compute-0 frosty_black[74155]:         "num_remapped_pgs": 0
Sep 30 17:36:42 compute-0 frosty_black[74155]:     },
Sep 30 17:36:42 compute-0 frosty_black[74155]:     "pgmap": {
Sep 30 17:36:42 compute-0 frosty_black[74155]:         "pgs_by_state": [],
Sep 30 17:36:42 compute-0 frosty_black[74155]:         "num_pgs": 0,
Sep 30 17:36:42 compute-0 frosty_black[74155]:         "num_pools": 0,
Sep 30 17:36:42 compute-0 frosty_black[74155]:         "num_objects": 0,
Sep 30 17:36:42 compute-0 frosty_black[74155]:         "data_bytes": 0,
Sep 30 17:36:42 compute-0 frosty_black[74155]:         "bytes_used": 0,
Sep 30 17:36:42 compute-0 frosty_black[74155]:         "bytes_avail": 0,
Sep 30 17:36:42 compute-0 frosty_black[74155]:         "bytes_total": 0
Sep 30 17:36:42 compute-0 frosty_black[74155]:     },
Sep 30 17:36:42 compute-0 frosty_black[74155]:     "fsmap": {
Sep 30 17:36:42 compute-0 frosty_black[74155]:         "epoch": 1,
Sep 30 17:36:42 compute-0 frosty_black[74155]:         "btime": "2025-09-30T17:36:27:095553+0000",
Sep 30 17:36:42 compute-0 frosty_black[74155]:         "by_rank": [],
Sep 30 17:36:42 compute-0 frosty_black[74155]:         "up:standby": 0
Sep 30 17:36:42 compute-0 frosty_black[74155]:     },
Sep 30 17:36:42 compute-0 frosty_black[74155]:     "mgrmap": {
Sep 30 17:36:42 compute-0 frosty_black[74155]:         "available": false,
Sep 30 17:36:42 compute-0 frosty_black[74155]:         "num_standbys": 0,
Sep 30 17:36:42 compute-0 frosty_black[74155]:         "modules": [
Sep 30 17:36:42 compute-0 frosty_black[74155]:             "iostat",
Sep 30 17:36:42 compute-0 frosty_black[74155]:             "nfs",
Sep 30 17:36:42 compute-0 frosty_black[74155]:             "restful"
Sep 30 17:36:42 compute-0 frosty_black[74155]:         ],
Sep 30 17:36:42 compute-0 frosty_black[74155]:         "services": {}
Sep 30 17:36:42 compute-0 frosty_black[74155]:     },
Sep 30 17:36:42 compute-0 frosty_black[74155]:     "servicemap": {
Sep 30 17:36:42 compute-0 frosty_black[74155]:         "epoch": 1,
Sep 30 17:36:42 compute-0 frosty_black[74155]:         "modified": "2025-09-30T17:36:27.098259+0000",
Sep 30 17:36:42 compute-0 frosty_black[74155]:         "services": {}
Sep 30 17:36:42 compute-0 frosty_black[74155]:     },
Sep 30 17:36:42 compute-0 frosty_black[74155]:     "progress_events": {}
Sep 30 17:36:42 compute-0 frosty_black[74155]: }
Sep 30 17:36:43 compute-0 systemd[1]: libpod-78fb1eac8094b4a83a5a2b803ffe014b4162dbc8515d4694d81f1bf18577d142.scope: Deactivated successfully.
Sep 30 17:36:43 compute-0 podman[74138]: 2025-09-30 17:36:43.013628972 +0000 UTC m=+0.570150642 container died 78fb1eac8094b4a83a5a2b803ffe014b4162dbc8515d4694d81f1bf18577d142 (image=quay.io/ceph/ceph:v19, name=frosty_black, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:36:43 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2075675291' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Sep 30 17:36:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-a219ed6f7c3562d64869b597272288f52896691722be2c7dc75914e27c1ccf1d-merged.mount: Deactivated successfully.
Sep 30 17:36:43 compute-0 podman[74138]: 2025-09-30 17:36:43.112846025 +0000 UTC m=+0.669367695 container remove 78fb1eac8094b4a83a5a2b803ffe014b4162dbc8515d4694d81f1bf18577d142 (image=quay.io/ceph/ceph:v19, name=frosty_black, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:36:43 compute-0 systemd[1]: libpod-conmon-78fb1eac8094b4a83a5a2b803ffe014b4162dbc8515d4694d81f1bf18577d142.scope: Deactivated successfully.
Sep 30 17:36:43 compute-0 ceph-mgr[74051]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Sep 30 17:36:43 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'rbd_support'
Sep 30 17:36:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:43.232+0000 7f58a9d90140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Sep 30 17:36:43 compute-0 ceph-mgr[74051]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Sep 30 17:36:43 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'restful'
Sep 30 17:36:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:43.333+0000 7f58a9d90140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Sep 30 17:36:43 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'rgw'
Sep 30 17:36:43 compute-0 ceph-mgr[74051]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Sep 30 17:36:43 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'rook'
Sep 30 17:36:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:43.805+0000 7f58a9d90140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Sep 30 17:36:44 compute-0 ceph-mgr[74051]: mgr[py] Module rook has missing NOTIFY_TYPES member
Sep 30 17:36:44 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'selftest'
Sep 30 17:36:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:44.393+0000 7f58a9d90140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Sep 30 17:36:44 compute-0 ceph-mgr[74051]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Sep 30 17:36:44 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'snap_schedule'
Sep 30 17:36:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:44.477+0000 7f58a9d90140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Sep 30 17:36:44 compute-0 ceph-mgr[74051]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Sep 30 17:36:44 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'stats'
Sep 30 17:36:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:44.585+0000 7f58a9d90140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Sep 30 17:36:44 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'status'
Sep 30 17:36:44 compute-0 ceph-mgr[74051]: mgr[py] Module status has missing NOTIFY_TYPES member
Sep 30 17:36:44 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'telegraf'
Sep 30 17:36:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:44.762+0000 7f58a9d90140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Sep 30 17:36:44 compute-0 ceph-mgr[74051]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Sep 30 17:36:44 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'telemetry'
Sep 30 17:36:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:44.839+0000 7f58a9d90140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Sep 30 17:36:45 compute-0 ceph-mgr[74051]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Sep 30 17:36:45 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'test_orchestrator'
Sep 30 17:36:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:45.004+0000 7f58a9d90140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Sep 30 17:36:45 compute-0 ceph-mgr[74051]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Sep 30 17:36:45 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'volumes'
Sep 30 17:36:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:45.232+0000 7f58a9d90140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Sep 30 17:36:45 compute-0 podman[74193]: 2025-09-30 17:36:45.158719982 +0000 UTC m=+0.021701718 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:36:45 compute-0 ceph-mgr[74051]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Sep 30 17:36:45 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'zabbix'
Sep 30 17:36:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:45.513+0000 7f58a9d90140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Sep 30 17:36:45 compute-0 ceph-mgr[74051]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Sep 30 17:36:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:45.585+0000 7f58a9d90140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
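Each "Module <name> has missing NOTIFY_TYPES member" line above is emitted (at level -1, i.e. a warning) while ceph-mgr imports its bundled Python modules: the module class does not declare which cluster-map notifications it wants to receive, so the daemon logs the warning and keeps loading, which is why the bootstrap proceeds normally despite the noise. For orientation only, here is a minimal sketch of a mgr module that does declare the member, assuming the mgr_module API (MgrModule, NotifyType) shipped in this 19.2.3 Squid image; the module itself ("hello") and its behaviour are hypothetical and not taken from the log.

# Illustrative-only manager module; the class layout follows Ceph's mgr_module
# API (MgrModule, NotifyType), but this "hello" module is hypothetical and is
# not one of the modules loaded in the log above.
from mgr_module import MgrModule, NotifyType

class Module(MgrModule):
    # Declaring NOTIFY_TYPES tells ceph-mgr which notifications to deliver;
    # module classes without it trigger the "missing NOTIFY_TYPES member" message.
    NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

    def notify(self, notify_type, notify_id):
        # Invoked by ceph-mgr for each notification type the module subscribed to.
        self.log.info("received %s notification", notify_type)

With the attribute present, ceph-mgr should only dispatch the listed notification types to notify() and the warning is no longer logged for that module.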
Sep 30 17:36:45 compute-0 ceph-mgr[74051]: ms_deliver_dispatch: unhandled message 0x5573520b49c0 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Sep 30 17:36:45 compute-0 ceph-mon[73755]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.efvthf
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: mgr handle_mgr_map Activating!
Sep 30 17:36:46 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.efvthf(active, starting, since 0.732681s)
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: mgr handle_mgr_map I am now activating
Sep 30 17:36:46 compute-0 podman[74193]: 2025-09-30 17:36:46.597834616 +0000 UTC m=+1.460816322 container create 4e791da972c36c05cadb914bbf7b4690a7ff4cf10d7865c9e66d2872c923b0d4 (image=quay.io/ceph/ceph:v19, name=confident_hypatia, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:36:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Sep 30 17:36:46 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1297447582' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mds metadata"}]: dispatch
Sep 30 17:36:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).mds e1 all = 1
Sep 30 17:36:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Sep 30 17:36:46 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1297447582' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata"}]: dispatch
Sep 30 17:36:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Sep 30 17:36:46 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1297447582' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata"}]: dispatch
Sep 30 17:36:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Sep 30 17:36:46 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1297447582' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Sep 30 17:36:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.efvthf", "id": "compute-0.efvthf"} v 0)
Sep 30 17:36:46 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1297447582' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mgr metadata", "who": "compute-0.efvthf", "id": "compute-0.efvthf"}]: dispatch
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: balancer
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: [balancer INFO root] Starting
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: crash
Sep 30 17:36:46 compute-0 ceph-mon[73755]: log_channel(cluster) log [INF] : Manager daemon compute-0.efvthf is now available
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_17:36:46
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: [balancer INFO root] No pools available
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: devicehealth
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: [devicehealth INFO root] Starting
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: iostat
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: nfs
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: orchestrator
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: pg_autoscaler
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: progress
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: [progress INFO root] Loading...
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: [progress INFO root] No stored events to load
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: [progress INFO root] Loaded [] historic events
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: [progress INFO root] Loaded OSDMap, ready.
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: [rbd_support INFO root] recovery thread starting
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: [rbd_support INFO root] starting setup
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: rbd_support
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: restful
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: [restful INFO root] server_addr: :: server_port: 8003
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: [restful WARNING root] server not running: no certificate configured
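
The restful module keeps its HTTPS endpoint down until a certificate is configured, which is what the warning above records. A hedged sketch of one way to supply one, assuming the ceph CLI and admin keyring are reachable (for example from a cephadm shell); the helper name is invented:

    # Hedged sketch: generate a self-signed cert for the restful module and
    # cycle the module so it picks the certificate up. Assumes `ceph` CLI
    # access with admin credentials.
    import subprocess

    def enable_restful_tls() -> None:
        subprocess.run(["ceph", "restful", "create-self-signed-cert"], check=True)
        subprocess.run(["ceph", "mgr", "module", "disable", "restful"], check=True)
        subprocess.run(["ceph", "mgr", "module", "enable", "restful"], check=True)

    if __name__ == "__main__":
        enable_restful_tls()
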
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: status
Sep 30 17:36:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.efvthf/mirror_snapshot_schedule"} v 0)
Sep 30 17:36:46 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1297447582' entity='mgr.compute-0.efvthf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.efvthf/mirror_snapshot_schedule"}]: dispatch
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: telemetry
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: [rbd_support INFO root] PerfHandler: starting
Sep 30 17:36:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TaskHandler: starting
Sep 30 17:36:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.efvthf/trash_purge_schedule"} v 0)
Sep 30 17:36:46 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1297447582' entity='mgr.compute-0.efvthf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.efvthf/trash_purge_schedule"}]: dispatch
Sep 30 17:36:46 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: volumes
Sep 30 17:36:47 compute-0 ceph-mon[73755]: Activating manager daemon compute-0.efvthf
Sep 30 17:36:47 compute-0 systemd[1]: Started libpod-conmon-4e791da972c36c05cadb914bbf7b4690a7ff4cf10d7865c9e66d2872c923b0d4.scope.
Sep 30 17:36:47 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:36:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05635071bf621f702d4c84d1a0c344a889db07ddf6c882604e7e26c35593652d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05635071bf621f702d4c84d1a0c344a889db07ddf6c882604e7e26c35593652d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05635071bf621f702d4c84d1a0c344a889db07ddf6c882604e7e26c35593652d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:48 compute-0 ceph-mgr[74051]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Sep 30 17:36:48 compute-0 podman[74193]: 2025-09-30 17:36:48.460138931 +0000 UTC m=+3.323120637 container init 4e791da972c36c05cadb914bbf7b4690a7ff4cf10d7865c9e66d2872c923b0d4 (image=quay.io/ceph/ceph:v19, name=confident_hypatia, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Sep 30 17:36:48 compute-0 podman[74193]: 2025-09-30 17:36:48.472705269 +0000 UTC m=+3.335686975 container start 4e791da972c36c05cadb914bbf7b4690a7ff4cf10d7865c9e66d2872c923b0d4 (image=quay.io/ceph/ceph:v19, name=confident_hypatia, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:36:48 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1297447582' entity='mgr.compute-0.efvthf' 
Sep 30 17:36:48 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 17:36:48 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Sep 30 17:36:48 compute-0 ceph-mgr[74051]: [rbd_support INFO root] setup complete
Sep 30 17:36:48 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Sep 30 17:36:48 compute-0 podman[74193]: 2025-09-30 17:36:48.502378193 +0000 UTC m=+3.365359949 container attach 4e791da972c36c05cadb914bbf7b4690a7ff4cf10d7865c9e66d2872c923b0d4 (image=quay.io/ceph/ceph:v19, name=confident_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:36:48 compute-0 ceph-mon[73755]: mgrmap e2: compute-0.efvthf(active, starting, since 0.732681s)
Sep 30 17:36:48 compute-0 ceph-mon[73755]: from='mgr.14102 192.168.122.100:0/1297447582' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mds metadata"}]: dispatch
Sep 30 17:36:48 compute-0 ceph-mon[73755]: from='mgr.14102 192.168.122.100:0/1297447582' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata"}]: dispatch
Sep 30 17:36:48 compute-0 ceph-mon[73755]: from='mgr.14102 192.168.122.100:0/1297447582' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata"}]: dispatch
Sep 30 17:36:48 compute-0 ceph-mon[73755]: from='mgr.14102 192.168.122.100:0/1297447582' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Sep 30 17:36:48 compute-0 ceph-mon[73755]: from='mgr.14102 192.168.122.100:0/1297447582' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mgr metadata", "who": "compute-0.efvthf", "id": "compute-0.efvthf"}]: dispatch
Sep 30 17:36:48 compute-0 ceph-mon[73755]: Manager daemon compute-0.efvthf is now available
Sep 30 17:36:48 compute-0 ceph-mon[73755]: from='mgr.14102 192.168.122.100:0/1297447582' entity='mgr.compute-0.efvthf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.efvthf/mirror_snapshot_schedule"}]: dispatch
Sep 30 17:36:48 compute-0 ceph-mon[73755]: from='mgr.14102 192.168.122.100:0/1297447582' entity='mgr.compute-0.efvthf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.efvthf/trash_purge_schedule"}]: dispatch
Sep 30 17:36:48 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.efvthf(active, since 2s)
Sep 30 17:36:48 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1297447582' entity='mgr.compute-0.efvthf' 
Sep 30 17:36:48 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Sep 30 17:36:48 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1297447582' entity='mgr.compute-0.efvthf' 
Sep 30 17:36:48 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Sep 30 17:36:48 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1272233750' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Sep 30 17:36:48 compute-0 confident_hypatia[74290]: 
Sep 30 17:36:48 compute-0 confident_hypatia[74290]: {
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:     "fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:     "health": {
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:         "status": "HEALTH_OK",
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:         "checks": {},
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:         "mutes": []
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:     },
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:     "election_epoch": 5,
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:     "quorum": [
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:         0
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:     ],
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:     "quorum_names": [
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:         "compute-0"
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:     ],
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:     "quorum_age": 13,
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:     "monmap": {
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:         "epoch": 1,
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:         "min_mon_release_name": "squid",
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:         "num_mons": 1
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:     },
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:     "osdmap": {
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:         "epoch": 1,
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:         "num_osds": 0,
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:         "num_up_osds": 0,
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:         "osd_up_since": 0,
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:         "num_in_osds": 0,
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:         "osd_in_since": 0,
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:         "num_remapped_pgs": 0
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:     },
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:     "pgmap": {
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:         "pgs_by_state": [],
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:         "num_pgs": 0,
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:         "num_pools": 0,
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:         "num_objects": 0,
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:         "data_bytes": 0,
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:         "bytes_used": 0,
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:         "bytes_avail": 0,
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:         "bytes_total": 0
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:     },
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:     "fsmap": {
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:         "epoch": 1,
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:         "btime": "2025-09-30T17:36:27.095553+0000",
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:         "by_rank": [],
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:         "up:standby": 0
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:     },
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:     "mgrmap": {
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:         "available": true,
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:         "num_standbys": 0,
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:         "modules": [
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:             "iostat",
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:             "nfs",
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:             "restful"
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:         ],
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:         "services": {}
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:     },
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:     "servicemap": {
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:         "epoch": 1,
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:         "modified": "2025-09-30T17:36:27.098259+0000",
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:         "services": {}
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:     },
Sep 30 17:36:48 compute-0 confident_hypatia[74290]:     "progress_events": {}
Sep 30 17:36:48 compute-0 confident_hypatia[74290]: }
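
The JSON above is the response to the "status --format json-pretty" command the audit log records at 17:36:48, evidently run inside the short-lived confident_hypatia container. A small, hypothetical check of that document, using only field names visible in the output above:

    # Hedged sketch: inspect a `ceph status --format json-pretty` document
    # like the one printed above. Field paths follow the log output.
    import json

    def summarize(status_json: str) -> str:
        s = json.loads(status_json)
        health = s["health"]["status"]        # "HEALTH_OK" in this log
        mons = s["monmap"]["num_mons"]        # 1 during this bootstrap
        osds_up = s["osdmap"]["num_up_osds"]  # still 0, no OSDs deployed yet
        mgr_ok = s["mgrmap"]["available"]     # true once the mgr is active
        return f"health={health} mons={mons} osds_up={osds_up} mgr={mgr_ok}"

    # Example: print(summarize(open("status.json").read()))
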
Sep 30 17:36:48 compute-0 systemd[1]: libpod-4e791da972c36c05cadb914bbf7b4690a7ff4cf10d7865c9e66d2872c923b0d4.scope: Deactivated successfully.
Sep 30 17:36:48 compute-0 podman[74193]: 2025-09-30 17:36:48.945532471 +0000 UTC m=+3.808514187 container died 4e791da972c36c05cadb914bbf7b4690a7ff4cf10d7865c9e66d2872c923b0d4 (image=quay.io/ceph/ceph:v19, name=confident_hypatia, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:36:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-05635071bf621f702d4c84d1a0c344a889db07ddf6c882604e7e26c35593652d-merged.mount: Deactivated successfully.
Sep 30 17:36:49 compute-0 podman[74193]: 2025-09-30 17:36:49.151448479 +0000 UTC m=+4.014430185 container remove 4e791da972c36c05cadb914bbf7b4690a7ff4cf10d7865c9e66d2872c923b0d4 (image=quay.io/ceph/ceph:v19, name=confident_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid)
Sep 30 17:36:49 compute-0 systemd[1]: libpod-conmon-4e791da972c36c05cadb914bbf7b4690a7ff4cf10d7865c9e66d2872c923b0d4.scope: Deactivated successfully.
Sep 30 17:36:49 compute-0 podman[74331]: 2025-09-30 17:36:49.269430486 +0000 UTC m=+0.097497495 container create ebb9e1d7432221dcea02a72cdb7fe3b51ae4a4184bacae9fd57ba52c86917445 (image=quay.io/ceph/ceph:v19, name=intelligent_banzai, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:36:49 compute-0 podman[74331]: 2025-09-30 17:36:49.194216966 +0000 UTC m=+0.022284005 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:36:49 compute-0 systemd[1]: Started libpod-conmon-ebb9e1d7432221dcea02a72cdb7fe3b51ae4a4184bacae9fd57ba52c86917445.scope.
Sep 30 17:36:49 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:36:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba125cb3749c7b36f8225bde3b4e0cace91b51f35d984d85b6a3a0ebf1f76e32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba125cb3749c7b36f8225bde3b4e0cace91b51f35d984d85b6a3a0ebf1f76e32/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba125cb3749c7b36f8225bde3b4e0cace91b51f35d984d85b6a3a0ebf1f76e32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba125cb3749c7b36f8225bde3b4e0cace91b51f35d984d85b6a3a0ebf1f76e32/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:49 compute-0 podman[74331]: 2025-09-30 17:36:49.340008894 +0000 UTC m=+0.168075923 container init ebb9e1d7432221dcea02a72cdb7fe3b51ae4a4184bacae9fd57ba52c86917445 (image=quay.io/ceph/ceph:v19, name=intelligent_banzai, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Sep 30 17:36:49 compute-0 podman[74331]: 2025-09-30 17:36:49.34513193 +0000 UTC m=+0.173198939 container start ebb9e1d7432221dcea02a72cdb7fe3b51ae4a4184bacae9fd57ba52c86917445 (image=quay.io/ceph/ceph:v19, name=intelligent_banzai, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:36:49 compute-0 podman[74331]: 2025-09-30 17:36:49.399357443 +0000 UTC m=+0.227424472 container attach ebb9e1d7432221dcea02a72cdb7fe3b51ae4a4184bacae9fd57ba52c86917445 (image=quay.io/ceph/ceph:v19, name=intelligent_banzai, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Sep 30 17:36:49 compute-0 ceph-mon[73755]: from='mgr.14102 192.168.122.100:0/1297447582' entity='mgr.compute-0.efvthf' 
Sep 30 17:36:49 compute-0 ceph-mon[73755]: mgrmap e3: compute-0.efvthf(active, since 2s)
Sep 30 17:36:49 compute-0 ceph-mon[73755]: from='mgr.14102 192.168.122.100:0/1297447582' entity='mgr.compute-0.efvthf' 
Sep 30 17:36:49 compute-0 ceph-mon[73755]: from='mgr.14102 192.168.122.100:0/1297447582' entity='mgr.compute-0.efvthf' 
Sep 30 17:36:49 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1272233750' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Sep 30 17:36:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Sep 30 17:36:49 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/488728979' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Sep 30 17:36:49 compute-0 intelligent_banzai[74347]: 
Sep 30 17:36:49 compute-0 intelligent_banzai[74347]: [global]
Sep 30 17:36:49 compute-0 intelligent_banzai[74347]:         fsid = 63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:36:49 compute-0 intelligent_banzai[74347]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
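
The two settings echoed above appear to be the residue of the "config assimilate-conf" call dispatched just before: options that can live in the monitors' central config store are absorbed, while the fsid and mon addresses a client needs before it can reach the cluster are left for the on-disk ceph.conf. A hedged illustration of writing an equivalent minimal file (values copied from the log, helper name invented):

    # Hedged sketch: write a minimal client ceph.conf like the one echoed
    # above. Values come from this log; the helper itself is illustrative.
    from configparser import ConfigParser

    def write_minimal_conf(path: str, fsid: str, mon_host: str) -> None:
        conf = ConfigParser()
        conf["global"] = {"fsid": fsid, "mon_host": mon_host}
        with open(path, "w") as f:
            conf.write(f)

    write_minimal_conf(
        "ceph.conf",
        "63d32c6a-fa18-54ed-8711-9a3915cc367b",
        "[v2:192.168.122.100:3300,v1:192.168.122.100:6789]",
    )
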
Sep 30 17:36:49 compute-0 systemd[1]: libpod-ebb9e1d7432221dcea02a72cdb7fe3b51ae4a4184bacae9fd57ba52c86917445.scope: Deactivated successfully.
Sep 30 17:36:49 compute-0 podman[74331]: 2025-09-30 17:36:49.698514954 +0000 UTC m=+0.526581973 container died ebb9e1d7432221dcea02a72cdb7fe3b51ae4a4184bacae9fd57ba52c86917445 (image=quay.io/ceph/ceph:v19, name=intelligent_banzai, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Sep 30 17:36:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba125cb3749c7b36f8225bde3b4e0cace91b51f35d984d85b6a3a0ebf1f76e32-merged.mount: Deactivated successfully.
Sep 30 17:36:50 compute-0 podman[74331]: 2025-09-30 17:36:50.133996903 +0000 UTC m=+0.962063952 container remove ebb9e1d7432221dcea02a72cdb7fe3b51ae4a4184bacae9fd57ba52c86917445 (image=quay.io/ceph/ceph:v19, name=intelligent_banzai, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:36:50 compute-0 systemd[1]: libpod-conmon-ebb9e1d7432221dcea02a72cdb7fe3b51ae4a4184bacae9fd57ba52c86917445.scope: Deactivated successfully.
Sep 30 17:36:50 compute-0 podman[74385]: 2025-09-30 17:36:50.194561026 +0000 UTC m=+0.042975654 container create 9c81e815154060d74cca2c2a7ffedfd82ccc6610d4ed9f14cdbc47c460ce18d2 (image=quay.io/ceph/ceph:v19, name=nervous_bartik, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Sep 30 17:36:50 compute-0 systemd[1]: Started libpod-conmon-9c81e815154060d74cca2c2a7ffedfd82ccc6610d4ed9f14cdbc47c460ce18d2.scope.
Sep 30 17:36:50 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:36:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e289a63693194517788d36091a9d1783de9a8dde0ceb7d289f530760e5068152/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e289a63693194517788d36091a9d1783de9a8dde0ceb7d289f530760e5068152/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e289a63693194517788d36091a9d1783de9a8dde0ceb7d289f530760e5068152/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:50 compute-0 podman[74385]: 2025-09-30 17:36:50.171943163 +0000 UTC m=+0.020357821 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:36:50 compute-0 podman[74385]: 2025-09-30 17:36:50.283329502 +0000 UTC m=+0.131744170 container init 9c81e815154060d74cca2c2a7ffedfd82ccc6610d4ed9f14cdbc47c460ce18d2 (image=quay.io/ceph/ceph:v19, name=nervous_bartik, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Sep 30 17:36:50 compute-0 podman[74385]: 2025-09-30 17:36:50.289413505 +0000 UTC m=+0.137828143 container start 9c81e815154060d74cca2c2a7ffedfd82ccc6610d4ed9f14cdbc47c460ce18d2 (image=quay.io/ceph/ceph:v19, name=nervous_bartik, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Sep 30 17:36:50 compute-0 podman[74385]: 2025-09-30 17:36:50.302296221 +0000 UTC m=+0.150710879 container attach 9c81e815154060d74cca2c2a7ffedfd82ccc6610d4ed9f14cdbc47c460ce18d2 (image=quay.io/ceph/ceph:v19, name=nervous_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Sep 30 17:36:50 compute-0 ceph-mgr[74051]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Sep 30 17:36:50 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/488728979' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Sep 30 17:36:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Sep 30 17:36:50 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3809037119' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Sep 30 17:36:52 compute-0 ceph-mgr[74051]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Sep 30 17:36:52 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3809037119' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Sep 30 17:36:53 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3809037119' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Sep 30 17:36:53 compute-0 ceph-mgr[74051]: mgr handle_mgr_map respawning because set of enabled modules changed!
Sep 30 17:36:53 compute-0 ceph-mgr[74051]: mgr respawn  e: '/usr/bin/ceph-mgr'
Sep 30 17:36:53 compute-0 ceph-mgr[74051]: mgr respawn  0: '/usr/bin/ceph-mgr'
Sep 30 17:36:53 compute-0 ceph-mgr[74051]: mgr respawn  1: '-n'
Sep 30 17:36:53 compute-0 ceph-mgr[74051]: mgr respawn  2: 'mgr.compute-0.efvthf'
Sep 30 17:36:53 compute-0 ceph-mgr[74051]: mgr respawn  3: '-f'
Sep 30 17:36:53 compute-0 ceph-mgr[74051]: mgr respawn  4: '--setuser'
Sep 30 17:36:53 compute-0 ceph-mgr[74051]: mgr respawn  5: 'ceph'
Sep 30 17:36:53 compute-0 ceph-mgr[74051]: mgr respawn  6: '--setgroup'
Sep 30 17:36:53 compute-0 ceph-mgr[74051]: mgr respawn  7: 'ceph'
Sep 30 17:36:53 compute-0 ceph-mgr[74051]: mgr respawn  8: '--default-log-to-file=false'
Sep 30 17:36:53 compute-0 ceph-mgr[74051]: mgr respawn  9: '--default-log-to-journald=true'
Sep 30 17:36:53 compute-0 ceph-mgr[74051]: mgr respawn  10: '--default-log-to-stderr=false'
Sep 30 17:36:53 compute-0 ceph-mgr[74051]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Sep 30 17:36:53 compute-0 ceph-mgr[74051]: mgr respawn  exe_path /proc/self/exe
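
The "respawn" lines show the mgr re-executing itself after the set of enabled modules changed (cephadm has just been enabled): it replays its original argument vector through /proc/self/exe rather than resolving the binary path again, which also works when the daemon runs inside a container. A rough, hypothetical illustration of that re-exec pattern:

    # Hedged sketch of the re-exec pattern the mgr logs above: replace the
    # current process image with itself, keeping the original argv.
    import os
    import sys

    def respawn() -> None:
        argv = list(sys.argv)
        # /proc/self/exe always points at the running executable, even if
        # the on-disk binary has been replaced since the process started.
        os.execv("/proc/self/exe", argv)
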
Sep 30 17:36:53 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.efvthf(active, since 7s)
Sep 30 17:36:53 compute-0 systemd[1]: libpod-9c81e815154060d74cca2c2a7ffedfd82ccc6610d4ed9f14cdbc47c460ce18d2.scope: Deactivated successfully.
Sep 30 17:36:53 compute-0 podman[74385]: 2025-09-30 17:36:53.575856073 +0000 UTC m=+3.424270711 container died 9c81e815154060d74cca2c2a7ffedfd82ccc6610d4ed9f14cdbc47c460ce18d2 (image=quay.io/ceph/ceph:v19, name=nervous_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:36:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ignoring --setuser ceph since I am not root
Sep 30 17:36:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ignoring --setgroup ceph since I am not root
Sep 30 17:36:53 compute-0 ceph-mgr[74051]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Sep 30 17:36:53 compute-0 ceph-mgr[74051]: pidfile_write: ignore empty --pid-file
Sep 30 17:36:53 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'alerts'
Sep 30 17:36:53 compute-0 ceph-mgr[74051]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Sep 30 17:36:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:53.774+0000 7f32dd348140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Sep 30 17:36:53 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'balancer'
Sep 30 17:36:53 compute-0 ceph-mgr[74051]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Sep 30 17:36:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:53.855+0000 7f32dd348140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Sep 30 17:36:53 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'cephadm'
Sep 30 17:36:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-e289a63693194517788d36091a9d1783de9a8dde0ceb7d289f530760e5068152-merged.mount: Deactivated successfully.
Sep 30 17:36:54 compute-0 podman[74385]: 2025-09-30 17:36:54.565304384 +0000 UTC m=+4.413719022 container remove 9c81e815154060d74cca2c2a7ffedfd82ccc6610d4ed9f14cdbc47c460ce18d2 (image=quay.io/ceph/ceph:v19, name=nervous_bartik, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Sep 30 17:36:54 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3809037119' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Sep 30 17:36:54 compute-0 ceph-mon[73755]: mgrmap e4: compute-0.efvthf(active, since 7s)
Sep 30 17:36:54 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'crash'
Sep 30 17:36:54 compute-0 podman[74472]: 2025-09-30 17:36:54.690214231 +0000 UTC m=+0.104272689 container create d7912a80b11006ea288669aba040e86786edd96bd5b12ed531fd6864d75bc7f6 (image=quay.io/ceph/ceph:v19, name=kind_faraday, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:36:54 compute-0 podman[74472]: 2025-09-30 17:36:54.611292269 +0000 UTC m=+0.025350757 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:36:54 compute-0 ceph-mgr[74051]: mgr[py] Module crash has missing NOTIFY_TYPES member
Sep 30 17:36:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:54.712+0000 7f32dd348140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Sep 30 17:36:54 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'dashboard'
Sep 30 17:36:54 compute-0 systemd[1]: libpod-conmon-9c81e815154060d74cca2c2a7ffedfd82ccc6610d4ed9f14cdbc47c460ce18d2.scope: Deactivated successfully.
Sep 30 17:36:54 compute-0 systemd[1]: Started libpod-conmon-d7912a80b11006ea288669aba040e86786edd96bd5b12ed531fd6864d75bc7f6.scope.
Sep 30 17:36:54 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:36:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88c20900c3425c76d423ac9cc3629cc059fb2f8b75045c7a0c5ee73f0341126c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88c20900c3425c76d423ac9cc3629cc059fb2f8b75045c7a0c5ee73f0341126c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88c20900c3425c76d423ac9cc3629cc059fb2f8b75045c7a0c5ee73f0341126c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:54 compute-0 podman[74472]: 2025-09-30 17:36:54.800798747 +0000 UTC m=+0.214857205 container init d7912a80b11006ea288669aba040e86786edd96bd5b12ed531fd6864d75bc7f6 (image=quay.io/ceph/ceph:v19, name=kind_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:36:54 compute-0 podman[74472]: 2025-09-30 17:36:54.806423658 +0000 UTC m=+0.220482116 container start d7912a80b11006ea288669aba040e86786edd96bd5b12ed531fd6864d75bc7f6 (image=quay.io/ceph/ceph:v19, name=kind_faraday, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:36:54 compute-0 podman[74472]: 2025-09-30 17:36:54.836868533 +0000 UTC m=+0.250927011 container attach d7912a80b11006ea288669aba040e86786edd96bd5b12ed531fd6864d75bc7f6 (image=quay.io/ceph/ceph:v19, name=kind_faraday, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325)
Sep 30 17:36:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Sep 30 17:36:55 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2972308356' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Sep 30 17:36:55 compute-0 kind_faraday[74489]: {
Sep 30 17:36:55 compute-0 kind_faraday[74489]:     "epoch": 4,
Sep 30 17:36:55 compute-0 kind_faraday[74489]:     "available": true,
Sep 30 17:36:55 compute-0 kind_faraday[74489]:     "active_name": "compute-0.efvthf",
Sep 30 17:36:55 compute-0 kind_faraday[74489]:     "num_standby": 0
Sep 30 17:36:55 compute-0 kind_faraday[74489]: }
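
The "mgr stat" output above, produced from yet another throw-away container (kind_faraday), is how the bootstrap confirms an active, available mgr before moving on. A hedged sketch of such a wait loop, invoking the CLI via subprocess (function name invented):

    # Hedged sketch: poll `ceph mgr stat --format json` until an active mgr
    # is reported, roughly what the repeated containers above are doing.
    import json
    import subprocess
    import time

    def wait_for_active_mgr(timeout: float = 60.0, interval: float = 2.0) -> str:
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            out = subprocess.run(
                ["ceph", "mgr", "stat", "--format", "json"],
                check=True, capture_output=True, text=True,
            ).stdout
            stat = json.loads(out)
            if stat.get("available"):
                return stat["active_name"]  # e.g. "compute-0.efvthf"
            time.sleep(interval)
        raise TimeoutError("no active mgr within timeout")
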
Sep 30 17:36:55 compute-0 systemd[1]: libpod-d7912a80b11006ea288669aba040e86786edd96bd5b12ed531fd6864d75bc7f6.scope: Deactivated successfully.
Sep 30 17:36:55 compute-0 podman[74472]: 2025-09-30 17:36:55.224630359 +0000 UTC m=+0.638688817 container died d7912a80b11006ea288669aba040e86786edd96bd5b12ed531fd6864d75bc7f6 (image=quay.io/ceph/ceph:v19, name=kind_faraday, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:36:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-88c20900c3425c76d423ac9cc3629cc059fb2f8b75045c7a0c5ee73f0341126c-merged.mount: Deactivated successfully.
Sep 30 17:36:55 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'devicehealth'
Sep 30 17:36:55 compute-0 podman[74472]: 2025-09-30 17:36:55.380281777 +0000 UTC m=+0.794340235 container remove d7912a80b11006ea288669aba040e86786edd96bd5b12ed531fd6864d75bc7f6 (image=quay.io/ceph/ceph:v19, name=kind_faraday, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Sep 30 17:36:55 compute-0 systemd[1]: libpod-conmon-d7912a80b11006ea288669aba040e86786edd96bd5b12ed531fd6864d75bc7f6.scope: Deactivated successfully.
Sep 30 17:36:55 compute-0 ceph-mgr[74051]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Sep 30 17:36:55 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'diskprediction_local'
Sep 30 17:36:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:55.436+0000 7f32dd348140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Sep 30 17:36:55 compute-0 podman[74529]: 2025-09-30 17:36:55.51384641 +0000 UTC m=+0.109061379 container create 5ef1aa5b00903296432d9f62dac8b01b1405f62f43c03deb89186f3488040d8f (image=quay.io/ceph/ceph:v19, name=bold_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 17:36:55 compute-0 podman[74529]: 2025-09-30 17:36:55.42582106 +0000 UTC m=+0.021036049 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:36:55 compute-0 systemd[1]: Started libpod-conmon-5ef1aa5b00903296432d9f62dac8b01b1405f62f43c03deb89186f3488040d8f.scope.
Sep 30 17:36:55 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfb9c8bf60b4d7ed74fdd71fe0f89e035cbf9f62d5721bae674b74ee778fd40a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfb9c8bf60b4d7ed74fdd71fe0f89e035cbf9f62d5721bae674b74ee778fd40a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfb9c8bf60b4d7ed74fdd71fe0f89e035cbf9f62d5721bae674b74ee778fd40a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:36:55 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2972308356' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Sep 30 17:36:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Sep 30 17:36:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Sep 30 17:36:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]:   from numpy import show_config as show_numpy_config
Sep 30 17:36:55 compute-0 podman[74529]: 2025-09-30 17:36:55.618533399 +0000 UTC m=+0.213748378 container init 5ef1aa5b00903296432d9f62dac8b01b1405f62f43c03deb89186f3488040d8f (image=quay.io/ceph/ceph:v19, name=bold_dewdney, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:36:55 compute-0 podman[74529]: 2025-09-30 17:36:55.624616911 +0000 UTC m=+0.219831880 container start 5ef1aa5b00903296432d9f62dac8b01b1405f62f43c03deb89186f3488040d8f (image=quay.io/ceph/ceph:v19, name=bold_dewdney, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:36:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:55.625+0000 7f32dd348140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Sep 30 17:36:55 compute-0 ceph-mgr[74051]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Sep 30 17:36:55 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'influx'
Sep 30 17:36:55 compute-0 podman[74529]: 2025-09-30 17:36:55.65046879 +0000 UTC m=+0.245683789 container attach 5ef1aa5b00903296432d9f62dac8b01b1405f62f43c03deb89186f3488040d8f (image=quay.io/ceph/ceph:v19, name=bold_dewdney, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Sep 30 17:36:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:55.726+0000 7f32dd348140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Sep 30 17:36:55 compute-0 ceph-mgr[74051]: mgr[py] Module influx has missing NOTIFY_TYPES member
Sep 30 17:36:55 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'insights'
Sep 30 17:36:55 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'iostat'
Sep 30 17:36:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:55.898+0000 7f32dd348140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Sep 30 17:36:55 compute-0 ceph-mgr[74051]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Sep 30 17:36:55 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'k8sevents'
Sep 30 17:36:56 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'localpool'
Sep 30 17:36:56 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'mds_autoscaler'
Sep 30 17:36:56 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'mirroring'
Sep 30 17:36:56 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'nfs'
Sep 30 17:36:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:57.011+0000 7f32dd348140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Sep 30 17:36:57 compute-0 ceph-mgr[74051]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Sep 30 17:36:57 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'orchestrator'
Sep 30 17:36:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:57.258+0000 7f32dd348140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Sep 30 17:36:57 compute-0 ceph-mgr[74051]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Sep 30 17:36:57 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'osd_perf_query'
Sep 30 17:36:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:57.337+0000 7f32dd348140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Sep 30 17:36:57 compute-0 ceph-mgr[74051]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Sep 30 17:36:57 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'osd_support'
Sep 30 17:36:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:57.430+0000 7f32dd348140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Sep 30 17:36:57 compute-0 ceph-mgr[74051]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Sep 30 17:36:57 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'pg_autoscaler'
Sep 30 17:36:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:57.520+0000 7f32dd348140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Sep 30 17:36:57 compute-0 ceph-mgr[74051]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Sep 30 17:36:57 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'progress'
Sep 30 17:36:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:57.612+0000 7f32dd348140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Sep 30 17:36:57 compute-0 ceph-mgr[74051]: mgr[py] Module progress has missing NOTIFY_TYPES member
Sep 30 17:36:57 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'prometheus'
Sep 30 17:36:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:57.973+0000 7f32dd348140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Sep 30 17:36:57 compute-0 ceph-mgr[74051]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Sep 30 17:36:57 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'rbd_support'
Sep 30 17:36:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:58.084+0000 7f32dd348140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Sep 30 17:36:58 compute-0 ceph-mgr[74051]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Sep 30 17:36:58 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'restful'
Sep 30 17:36:58 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'rgw'
Sep 30 17:36:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:58.592+0000 7f32dd348140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Sep 30 17:36:58 compute-0 ceph-mgr[74051]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Sep 30 17:36:58 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'rook'
Sep 30 17:36:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:59.220+0000 7f32dd348140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Sep 30 17:36:59 compute-0 ceph-mgr[74051]: mgr[py] Module rook has missing NOTIFY_TYPES member
Sep 30 17:36:59 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'selftest'
Sep 30 17:36:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:59.299+0000 7f32dd348140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Sep 30 17:36:59 compute-0 ceph-mgr[74051]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Sep 30 17:36:59 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'snap_schedule'
Sep 30 17:36:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:59.386+0000 7f32dd348140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Sep 30 17:36:59 compute-0 ceph-mgr[74051]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Sep 30 17:36:59 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'stats'
Sep 30 17:36:59 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'status'
Sep 30 17:36:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:59.540+0000 7f32dd348140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Sep 30 17:36:59 compute-0 ceph-mgr[74051]: mgr[py] Module status has missing NOTIFY_TYPES member
Sep 30 17:36:59 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'telegraf'
Sep 30 17:36:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:59.612+0000 7f32dd348140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Sep 30 17:36:59 compute-0 ceph-mgr[74051]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Sep 30 17:36:59 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'telemetry'
Sep 30 17:36:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:36:59.778+0000 7f32dd348140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Sep 30 17:36:59 compute-0 ceph-mgr[74051]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Sep 30 17:36:59 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'test_orchestrator'
Sep 30 17:37:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:37:00.006+0000 7f32dd348140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'volumes'
Sep 30 17:37:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:37:00.276+0000 7f32dd348140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'zabbix'
Sep 30 17:37:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:37:00.350+0000 7f32dd348140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Sep 30 17:37:00 compute-0 ceph-mon[73755]: log_channel(cluster) log [INF] : Active manager daemon compute-0.efvthf restarted
Sep 30 17:37:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Sep 30 17:37:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Sep 30 17:37:00 compute-0 ceph-mon[73755]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.efvthf
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: ms_deliver_dispatch: unhandled message 0x561e73b70d00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Sep 30 17:37:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Sep 30 17:37:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Sep 30 17:37:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Sep 30 17:37:00 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Sep 30 17:37:00 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.efvthf(active, starting, since 0.330908s)
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: mgr handle_mgr_map Activating!
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: mgr handle_mgr_map I am now activating
Sep 30 17:37:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Sep 30 17:37:00 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Sep 30 17:37:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.efvthf", "id": "compute-0.efvthf"} v 0)
Sep 30 17:37:00 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mgr metadata", "who": "compute-0.efvthf", "id": "compute-0.efvthf"}]: dispatch
Sep 30 17:37:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Sep 30 17:37:00 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mds metadata"}]: dispatch
Sep 30 17:37:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).mds e1 all = 1
Sep 30 17:37:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Sep 30 17:37:00 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata"}]: dispatch
Sep 30 17:37:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Sep 30 17:37:00 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata"}]: dispatch
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: balancer
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [balancer INFO root] Starting
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:37:00 compute-0 ceph-mon[73755]: log_channel(cluster) log [INF] : Manager daemon compute-0.efvthf is now available
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_17:37:00
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [balancer INFO root] No pools available
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Sep 30 17:37:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Sep 30 17:37:00 compute-0 ceph-mon[73755]: Active manager daemon compute-0.efvthf restarted
Sep 30 17:37:00 compute-0 ceph-mon[73755]: Activating manager daemon compute-0.efvthf
Sep 30 17:37:00 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Sep 30 17:37:00 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: cephadm
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: crash
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: devicehealth
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: iostat
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [devicehealth INFO root] Starting
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: nfs
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: orchestrator
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: pg_autoscaler
Sep 30 17:37:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Sep 30 17:37:00 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: progress
Sep 30 17:37:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Sep 30 17:37:00 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [progress INFO root] Loading...
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [progress INFO root] No stored events to load
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [progress INFO root] Loaded [] historic events
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [progress INFO root] Loaded OSDMap, ready.
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [rbd_support INFO root] recovery thread starting
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [rbd_support INFO root] starting setup
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: rbd_support
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: restful
Sep 30 17:37:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.efvthf/mirror_snapshot_schedule"} v 0)
Sep 30 17:37:00 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.efvthf/mirror_snapshot_schedule"}]: dispatch
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [restful INFO root] server_addr: :: server_port: 8003
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: status
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [restful WARNING root] server not running: no certificate configured
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: telemetry
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [rbd_support INFO root] PerfHandler: starting
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TaskHandler: starting
Sep 30 17:37:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.efvthf/trash_purge_schedule"} v 0)
Sep 30 17:37:00 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.efvthf/trash_purge_schedule"}]: dispatch
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: [rbd_support INFO root] setup complete
Sep 30 17:37:00 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: volumes
Sep 30 17:37:01 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.efvthf(active, since 1.49594s)
Sep 30 17:37:01 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.14124 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Sep 30 17:37:01 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.14124 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Sep 30 17:37:01 compute-0 bold_dewdney[74545]: {
Sep 30 17:37:01 compute-0 bold_dewdney[74545]:     "mgrmap_epoch": 6,
Sep 30 17:37:01 compute-0 bold_dewdney[74545]:     "initialized": true
Sep 30 17:37:01 compute-0 bold_dewdney[74545]: }
Sep 30 17:37:01 compute-0 systemd[1]: libpod-5ef1aa5b00903296432d9f62dac8b01b1405f62f43c03deb89186f3488040d8f.scope: Deactivated successfully.
Sep 30 17:37:01 compute-0 podman[74529]: 2025-09-30 17:37:01.872412756 +0000 UTC m=+6.467627725 container died 5ef1aa5b00903296432d9f62dac8b01b1405f62f43c03deb89186f3488040d8f (image=quay.io/ceph/ceph:v19, name=bold_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:37:01 compute-0 ceph-mon[73755]: osdmap e2: 0 total, 0 up, 0 in
Sep 30 17:37:01 compute-0 ceph-mon[73755]: mgrmap e5: compute-0.efvthf(active, starting, since 0.330908s)
Sep 30 17:37:01 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Sep 30 17:37:01 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mgr metadata", "who": "compute-0.efvthf", "id": "compute-0.efvthf"}]: dispatch
Sep 30 17:37:01 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mds metadata"}]: dispatch
Sep 30 17:37:01 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata"}]: dispatch
Sep 30 17:37:01 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata"}]: dispatch
Sep 30 17:37:01 compute-0 ceph-mon[73755]: Manager daemon compute-0.efvthf is now available
Sep 30 17:37:01 compute-0 ceph-mon[73755]: Found migration_current of "None". Setting to last migration.
Sep 30 17:37:01 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:01 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:01 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Sep 30 17:37:01 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Sep 30 17:37:01 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.efvthf/mirror_snapshot_schedule"}]: dispatch
Sep 30 17:37:01 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.efvthf/trash_purge_schedule"}]: dispatch
Sep 30 17:37:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-bfb9c8bf60b4d7ed74fdd71fe0f89e035cbf9f62d5721bae674b74ee778fd40a-merged.mount: Deactivated successfully.
Sep 30 17:37:02 compute-0 podman[74529]: 2025-09-30 17:37:02.479606261 +0000 UTC m=+7.074821230 container remove 5ef1aa5b00903296432d9f62dac8b01b1405f62f43c03deb89186f3488040d8f (image=quay.io/ceph/ceph:v19, name=bold_dewdney, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:37:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.agent_endpoint_root_cert}] v 0)
Sep 30 17:37:02 compute-0 systemd[1]: libpod-conmon-5ef1aa5b00903296432d9f62dac8b01b1405f62f43c03deb89186f3488040d8f.scope: Deactivated successfully.
Sep 30 17:37:02 compute-0 podman[74695]: 2025-09-30 17:37:02.530748686 +0000 UTC m=+0.030288052 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:37:02 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.agent_endpoint_key}] v 0)
Sep 30 17:37:02 compute-0 ceph-mgr[74051]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Sep 30 17:37:02 compute-0 podman[74695]: 2025-09-30 17:37:02.751528629 +0000 UTC m=+0.251067955 container create 1daa33ee335184bb0383c6e5ae4bf0d2691c705be6ef4499017ddae5fa137094 (image=quay.io/ceph/ceph:v19, name=clever_keller, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1)
Sep 30 17:37:02 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:02 compute-0 systemd[1]: Started libpod-conmon-1daa33ee335184bb0383c6e5ae4bf0d2691c705be6ef4499017ddae5fa137094.scope.
Sep 30 17:37:02 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:37:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27203a5698b5e6964cd2029e7946fe29f649e583ab13d307a1e2f7365a903969/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27203a5698b5e6964cd2029e7946fe29f649e583ab13d307a1e2f7365a903969/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27203a5698b5e6964cd2029e7946fe29f649e583ab13d307a1e2f7365a903969/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:02 compute-0 podman[74695]: 2025-09-30 17:37:02.855629522 +0000 UTC m=+0.355168878 container init 1daa33ee335184bb0383c6e5ae4bf0d2691c705be6ef4499017ddae5fa137094 (image=quay.io/ceph/ceph:v19, name=clever_keller, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Sep 30 17:37:02 compute-0 podman[74695]: 2025-09-30 17:37:02.86111986 +0000 UTC m=+0.360659196 container start 1daa33ee335184bb0383c6e5ae4bf0d2691c705be6ef4499017ddae5fa137094 (image=quay.io/ceph/ceph:v19, name=clever_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Sep 30 17:37:02 compute-0 podman[74695]: 2025-09-30 17:37:02.871573763 +0000 UTC m=+0.371113099 container attach 1daa33ee335184bb0383c6e5ae4bf0d2691c705be6ef4499017ddae5fa137094 (image=quay.io/ceph/ceph:v19, name=clever_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:37:02 compute-0 ceph-mon[73755]: mgrmap e6: compute-0.efvthf(active, since 1.49594s)
Sep 30 17:37:02 compute-0 ceph-mon[73755]: from='client.14124 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Sep 30 17:37:02 compute-0 ceph-mon[73755]: from='client.14124 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Sep 30 17:37:02 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:02 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:02 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.efvthf(active, since 2s)
Sep 30 17:37:03 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.14132 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:37:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Sep 30 17:37:03 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Sep 30 17:37:03 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Sep 30 17:37:03 compute-0 systemd[1]: libpod-1daa33ee335184bb0383c6e5ae4bf0d2691c705be6ef4499017ddae5fa137094.scope: Deactivated successfully.
Sep 30 17:37:03 compute-0 podman[74695]: 2025-09-30 17:37:03.238551787 +0000 UTC m=+0.738091113 container died 1daa33ee335184bb0383c6e5ae4bf0d2691c705be6ef4499017ddae5fa137094 (image=quay.io/ceph/ceph:v19, name=clever_keller, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Sep 30 17:37:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-27203a5698b5e6964cd2029e7946fe29f649e583ab13d307a1e2f7365a903969-merged.mount: Deactivated successfully.
Sep 30 17:37:03 compute-0 podman[74695]: 2025-09-30 17:37:03.3235101 +0000 UTC m=+0.823049436 container remove 1daa33ee335184bb0383c6e5ae4bf0d2691c705be6ef4499017ddae5fa137094 (image=quay.io/ceph/ceph:v19, name=clever_keller, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Sep 30 17:37:03 compute-0 systemd[1]: libpod-conmon-1daa33ee335184bb0383c6e5ae4bf0d2691c705be6ef4499017ddae5fa137094.scope: Deactivated successfully.
Sep 30 17:37:03 compute-0 podman[74752]: 2025-09-30 17:37:03.419958331 +0000 UTC m=+0.081165038 container create 464c7cd1d7efb8cf6f7b81f4fd55f74f5285510da945ffc9c16275f9167fa071 (image=quay.io/ceph/ceph:v19, name=flamboyant_galois, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Sep 30 17:37:03 compute-0 systemd[1]: Started libpod-conmon-464c7cd1d7efb8cf6f7b81f4fd55f74f5285510da945ffc9c16275f9167fa071.scope.
Sep 30 17:37:03 compute-0 podman[74752]: 2025-09-30 17:37:03.357318079 +0000 UTC m=+0.018524806 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:37:03 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:37:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43dcb6d89fcdeef460d4a5fae1c7c97b63db534ddee809563e522c54271b3d3e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43dcb6d89fcdeef460d4a5fae1c7c97b63db534ddee809563e522c54271b3d3e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43dcb6d89fcdeef460d4a5fae1c7c97b63db534ddee809563e522c54271b3d3e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:03 compute-0 podman[74752]: 2025-09-30 17:37:03.53939378 +0000 UTC m=+0.200600487 container init 464c7cd1d7efb8cf6f7b81f4fd55f74f5285510da945ffc9c16275f9167fa071 (image=quay.io/ceph/ceph:v19, name=flamboyant_galois, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 17:37:03 compute-0 podman[74752]: 2025-09-30 17:37:03.544729864 +0000 UTC m=+0.205936571 container start 464c7cd1d7efb8cf6f7b81f4fd55f74f5285510da945ffc9c16275f9167fa071 (image=quay.io/ceph/ceph:v19, name=flamboyant_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Sep 30 17:37:03 compute-0 podman[74752]: 2025-09-30 17:37:03.614375453 +0000 UTC m=+0.275582170 container attach 464c7cd1d7efb8cf6f7b81f4fd55f74f5285510da945ffc9c16275f9167fa071 (image=quay.io/ceph/ceph:v19, name=flamboyant_galois, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:37:03 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:37:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Sep 30 17:37:03 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:03 compute-0 ceph-mgr[74051]: [cephadm INFO root] Set ssh ssh_user
Sep 30 17:37:03 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Sep 30 17:37:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Sep 30 17:37:03 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:03 compute-0 ceph-mgr[74051]: [cephadm INFO root] Set ssh ssh_config
Sep 30 17:37:03 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Sep 30 17:37:03 compute-0 ceph-mgr[74051]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Sep 30 17:37:03 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Sep 30 17:37:03 compute-0 flamboyant_galois[74769]: ssh user set to ceph-admin. sudo will be used
Sep 30 17:37:03 compute-0 systemd[1]: libpod-464c7cd1d7efb8cf6f7b81f4fd55f74f5285510da945ffc9c16275f9167fa071.scope: Deactivated successfully.
Sep 30 17:37:03 compute-0 podman[74752]: 2025-09-30 17:37:03.920751695 +0000 UTC m=+0.581958402 container died 464c7cd1d7efb8cf6f7b81f4fd55f74f5285510da945ffc9c16275f9167fa071 (image=quay.io/ceph/ceph:v19, name=flamboyant_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:37:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-43dcb6d89fcdeef460d4a5fae1c7c97b63db534ddee809563e522c54271b3d3e-merged.mount: Deactivated successfully.
Sep 30 17:37:03 compute-0 podman[74752]: 2025-09-30 17:37:03.968419922 +0000 UTC m=+0.629626629 container remove 464c7cd1d7efb8cf6f7b81f4fd55f74f5285510da945ffc9c16275f9167fa071 (image=quay.io/ceph/ceph:v19, name=flamboyant_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:37:03 compute-0 ceph-mon[73755]: mgrmap e7: compute-0.efvthf(active, since 2s)
Sep 30 17:37:03 compute-0 ceph-mon[73755]: from='client.14132 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:37:03 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:03 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Sep 30 17:37:03 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:03 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:03 compute-0 systemd[1]: libpod-conmon-464c7cd1d7efb8cf6f7b81f4fd55f74f5285510da945ffc9c16275f9167fa071.scope: Deactivated successfully.
Sep 30 17:37:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019920487 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:37:04 compute-0 podman[74807]: 2025-09-30 17:37:04.027442804 +0000 UTC m=+0.039153534 container create f522eb9341e0563f2e86428c657cf02c77861bba05eec7767bd5ddc19947300f (image=quay.io/ceph/ceph:v19, name=sharp_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:37:04 compute-0 podman[74807]: 2025-09-30 17:37:04.008846047 +0000 UTC m=+0.020556767 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:37:04 compute-0 systemd[1]: Started libpod-conmon-f522eb9341e0563f2e86428c657cf02c77861bba05eec7767bd5ddc19947300f.scope.
Sep 30 17:37:04 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:37:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ec20d9afad18152e961b7b76986e5292b14d67c88230153e0be24e4a90db725/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ec20d9afad18152e961b7b76986e5292b14d67c88230153e0be24e4a90db725/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ec20d9afad18152e961b7b76986e5292b14d67c88230153e0be24e4a90db725/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ec20d9afad18152e961b7b76986e5292b14d67c88230153e0be24e4a90db725/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ec20d9afad18152e961b7b76986e5292b14d67c88230153e0be24e4a90db725/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:04 compute-0 ceph-mgr[74051]: [cephadm INFO cherrypy.error] [30/Sep/2025:17:37:04] ENGINE Bus STARTING
Sep 30 17:37:04 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : [30/Sep/2025:17:37:04] ENGINE Bus STARTING
Sep 30 17:37:04 compute-0 podman[74807]: 2025-09-30 17:37:04.192481858 +0000 UTC m=+0.204192558 container init f522eb9341e0563f2e86428c657cf02c77861bba05eec7767bd5ddc19947300f (image=quay.io/ceph/ceph:v19, name=sharp_kapitsa, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:37:04 compute-0 podman[74807]: 2025-09-30 17:37:04.198108979 +0000 UTC m=+0.209819689 container start f522eb9341e0563f2e86428c657cf02c77861bba05eec7767bd5ddc19947300f (image=quay.io/ceph/ceph:v19, name=sharp_kapitsa, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Sep 30 17:37:04 compute-0 podman[74807]: 2025-09-30 17:37:04.226618815 +0000 UTC m=+0.238329515 container attach f522eb9341e0563f2e86428c657cf02c77861bba05eec7767bd5ddc19947300f (image=quay.io/ceph/ceph:v19, name=sharp_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Sep 30 17:37:04 compute-0 ceph-mgr[74051]: [cephadm INFO cherrypy.error] [30/Sep/2025:17:37:04] ENGINE Serving on https://192.168.122.100:7150
Sep 30 17:37:04 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : [30/Sep/2025:17:37:04] ENGINE Serving on https://192.168.122.100:7150
Sep 30 17:37:04 compute-0 ceph-mgr[74051]: [cephadm INFO cherrypy.error] [30/Sep/2025:17:37:04] ENGINE Client ('192.168.122.100', 49414) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Sep 30 17:37:04 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : [30/Sep/2025:17:37:04] ENGINE Client ('192.168.122.100', 49414) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Sep 30 17:37:04 compute-0 ceph-mgr[74051]: [cephadm INFO cherrypy.error] [30/Sep/2025:17:37:04] ENGINE Serving on http://192.168.122.100:8765
Sep 30 17:37:04 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : [30/Sep/2025:17:37:04] ENGINE Serving on http://192.168.122.100:8765
Sep 30 17:37:04 compute-0 ceph-mgr[74051]: [cephadm INFO cherrypy.error] [30/Sep/2025:17:37:04] ENGINE Bus STARTED
Sep 30 17:37:04 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : [30/Sep/2025:17:37:04] ENGINE Bus STARTED
Sep 30 17:37:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Sep 30 17:37:04 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Sep 30 17:37:04 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:37:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Sep 30 17:37:04 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:04 compute-0 ceph-mgr[74051]: [cephadm INFO root] Set ssh ssh_identity_key
Sep 30 17:37:04 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Sep 30 17:37:04 compute-0 ceph-mgr[74051]: [cephadm INFO root] Set ssh private key
Sep 30 17:37:04 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Set ssh private key
Sep 30 17:37:04 compute-0 systemd[1]: libpod-f522eb9341e0563f2e86428c657cf02c77861bba05eec7767bd5ddc19947300f.scope: Deactivated successfully.
Sep 30 17:37:04 compute-0 podman[74807]: 2025-09-30 17:37:04.549682365 +0000 UTC m=+0.561393085 container died f522eb9341e0563f2e86428c657cf02c77861bba05eec7767bd5ddc19947300f (image=quay.io/ceph/ceph:v19, name=sharp_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Sep 30 17:37:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ec20d9afad18152e961b7b76986e5292b14d67c88230153e0be24e4a90db725-merged.mount: Deactivated successfully.
Sep 30 17:37:04 compute-0 podman[74807]: 2025-09-30 17:37:04.595897485 +0000 UTC m=+0.607608195 container remove f522eb9341e0563f2e86428c657cf02c77861bba05eec7767bd5ddc19947300f (image=quay.io/ceph/ceph:v19, name=sharp_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid)
Sep 30 17:37:04 compute-0 systemd[1]: libpod-conmon-f522eb9341e0563f2e86428c657cf02c77861bba05eec7767bd5ddc19947300f.scope: Deactivated successfully.
Sep 30 17:37:04 compute-0 podman[74887]: 2025-09-30 17:37:04.672080118 +0000 UTC m=+0.058110330 container create 3882e404724f69c790e23535650899bed863c5a28038beb0225fc69afde88407 (image=quay.io/ceph/ceph:v19, name=beautiful_robinson, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:37:04 compute-0 systemd[1]: Started libpod-conmon-3882e404724f69c790e23535650899bed863c5a28038beb0225fc69afde88407.scope.
Sep 30 17:37:04 compute-0 ceph-mgr[74051]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Sep 30 17:37:04 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:37:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a59c8209d76072488a721ac46b3ee897115c3fa7be4df1984e5297c5ed7231e/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a59c8209d76072488a721ac46b3ee897115c3fa7be4df1984e5297c5ed7231e/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a59c8209d76072488a721ac46b3ee897115c3fa7be4df1984e5297c5ed7231e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a59c8209d76072488a721ac46b3ee897115c3fa7be4df1984e5297c5ed7231e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a59c8209d76072488a721ac46b3ee897115c3fa7be4df1984e5297c5ed7231e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:04 compute-0 podman[74887]: 2025-09-30 17:37:04.644565687 +0000 UTC m=+0.030595929 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:37:04 compute-0 podman[74887]: 2025-09-30 17:37:04.74542548 +0000 UTC m=+0.131455722 container init 3882e404724f69c790e23535650899bed863c5a28038beb0225fc69afde88407 (image=quay.io/ceph/ceph:v19, name=beautiful_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:37:04 compute-0 podman[74887]: 2025-09-30 17:37:04.752458556 +0000 UTC m=+0.138488778 container start 3882e404724f69c790e23535650899bed863c5a28038beb0225fc69afde88407 (image=quay.io/ceph/ceph:v19, name=beautiful_robinson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True)
Sep 30 17:37:04 compute-0 podman[74887]: 2025-09-30 17:37:04.757967165 +0000 UTC m=+0.143997387 container attach 3882e404724f69c790e23535650899bed863c5a28038beb0225fc69afde88407 (image=quay.io/ceph/ceph:v19, name=beautiful_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:37:04 compute-0 ceph-mon[73755]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:37:04 compute-0 ceph-mon[73755]: Set ssh ssh_user
Sep 30 17:37:04 compute-0 ceph-mon[73755]: Set ssh ssh_config
Sep 30 17:37:04 compute-0 ceph-mon[73755]: ssh user set to ceph-admin. sudo will be used
Sep 30 17:37:04 compute-0 ceph-mon[73755]: [30/Sep/2025:17:37:04] ENGINE Bus STARTING
Sep 30 17:37:04 compute-0 ceph-mon[73755]: [30/Sep/2025:17:37:04] ENGINE Serving on https://192.168.122.100:7150
Sep 30 17:37:04 compute-0 ceph-mon[73755]: [30/Sep/2025:17:37:04] ENGINE Client ('192.168.122.100', 49414) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Sep 30 17:37:04 compute-0 ceph-mon[73755]: [30/Sep/2025:17:37:04] ENGINE Serving on http://192.168.122.100:8765
Sep 30 17:37:04 compute-0 ceph-mon[73755]: [30/Sep/2025:17:37:04] ENGINE Bus STARTED
Sep 30 17:37:04 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Sep 30 17:37:04 compute-0 ceph-mon[73755]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:37:04 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:04 compute-0 ceph-mon[73755]: Set ssh ssh_identity_key
Sep 30 17:37:04 compute-0 ceph-mon[73755]: Set ssh private key
Sep 30 17:37:05 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:37:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Sep 30 17:37:05 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:05 compute-0 ceph-mgr[74051]: [cephadm INFO root] Set ssh ssh_identity_pub
Sep 30 17:37:05 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Sep 30 17:37:05 compute-0 systemd[1]: libpod-3882e404724f69c790e23535650899bed863c5a28038beb0225fc69afde88407.scope: Deactivated successfully.
Sep 30 17:37:05 compute-0 podman[74887]: 2025-09-30 17:37:05.136292543 +0000 UTC m=+0.522322765 container died 3882e404724f69c790e23535650899bed863c5a28038beb0225fc69afde88407 (image=quay.io/ceph/ceph:v19, name=beautiful_robinson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:37:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a59c8209d76072488a721ac46b3ee897115c3fa7be4df1984e5297c5ed7231e-merged.mount: Deactivated successfully.
Sep 30 17:37:05 compute-0 podman[74887]: 2025-09-30 17:37:05.174316028 +0000 UTC m=+0.560346250 container remove 3882e404724f69c790e23535650899bed863c5a28038beb0225fc69afde88407 (image=quay.io/ceph/ceph:v19, name=beautiful_robinson, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Sep 30 17:37:05 compute-0 systemd[1]: libpod-conmon-3882e404724f69c790e23535650899bed863c5a28038beb0225fc69afde88407.scope: Deactivated successfully.
Sep 30 17:37:05 compute-0 podman[74940]: 2025-09-30 17:37:05.227237277 +0000 UTC m=+0.035733038 container create 10d54f14f586d213da4c98843713cb2feb0d1f9bc42af54e4fd1a832fc260d0c (image=quay.io/ceph/ceph:v19, name=hardcore_bassi, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:37:05 compute-0 systemd[1]: Started libpod-conmon-10d54f14f586d213da4c98843713cb2feb0d1f9bc42af54e4fd1a832fc260d0c.scope.
Sep 30 17:37:05 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:37:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6ba7f4fec716ace21cce840c3898dee5e40837abc0a0d82aa83bba30305a1da/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6ba7f4fec716ace21cce840c3898dee5e40837abc0a0d82aa83bba30305a1da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6ba7f4fec716ace21cce840c3898dee5e40837abc0a0d82aa83bba30305a1da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:05 compute-0 podman[74940]: 2025-09-30 17:37:05.286244538 +0000 UTC m=+0.094740319 container init 10d54f14f586d213da4c98843713cb2feb0d1f9bc42af54e4fd1a832fc260d0c (image=quay.io/ceph/ceph:v19, name=hardcore_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:37:05 compute-0 podman[74940]: 2025-09-30 17:37:05.29147461 +0000 UTC m=+0.099970371 container start 10d54f14f586d213da4c98843713cb2feb0d1f9bc42af54e4fd1a832fc260d0c (image=quay.io/ceph/ceph:v19, name=hardcore_bassi, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Sep 30 17:37:05 compute-0 podman[74940]: 2025-09-30 17:37:05.295016799 +0000 UTC m=+0.103512570 container attach 10d54f14f586d213da4c98843713cb2feb0d1f9bc42af54e4fd1a832fc260d0c (image=quay.io/ceph/ceph:v19, name=hardcore_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1)
Sep 30 17:37:05 compute-0 podman[74940]: 2025-09-30 17:37:05.212189869 +0000 UTC m=+0.020685660 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:37:05 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:37:05 compute-0 hardcore_bassi[74956]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCNQ286oC14+6ejinYiXaimUHnQcDIIVZxAaVlJCMnzwNeh3KrexrzVYvvwGTLklqR7pD3i3K5n7Aj+KA3Q8l6FoyKhCYZtTVfT/ggh1B8dRZyOU/v7y3q7Ug1SU0+lKbL8m3iqH4h5+m3KJtcxzOKjZL7jSC0zexXpAzkc3OzVGNO4cmxwA5+sbG3xTUpN4t3iqQqIrmo7J+hCWqjDOrX4PZG75pw9Id8Zjq7x6GvhIGx5uUORMvP8bLmj4hlr4VqNa6cQAPOkM+DmGz+vtNO9Gf8uu4RApv1ngYsPD08k3WlmbYYhgi98MBvTDuk9kjx48Vicsu+TGIuxIbEBjxYRVHt+TybERVMsVOc5JuDQkRqPXPMBfSf/8W2nJhaQzCcJfu0XQ/IVpP6lXOhEBPmSTi3rYm9p4pmIY5dfX1q6tjRP1/pFm5KMk3uvAK6t7zUyhCI8ZFU5GdpqgDUuUnj+XE356Eptsbymb8ud9Vt1sApJLkJO9LwH2IsjvsuAYgk= zuul@controller
Sep 30 17:37:05 compute-0 systemd[1]: libpod-10d54f14f586d213da4c98843713cb2feb0d1f9bc42af54e4fd1a832fc260d0c.scope: Deactivated successfully.
Sep 30 17:37:05 compute-0 podman[74940]: 2025-09-30 17:37:05.634591565 +0000 UTC m=+0.443087336 container died 10d54f14f586d213da4c98843713cb2feb0d1f9bc42af54e4fd1a832fc260d0c (image=quay.io/ceph/ceph:v19, name=hardcore_bassi, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:37:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6ba7f4fec716ace21cce840c3898dee5e40837abc0a0d82aa83bba30305a1da-merged.mount: Deactivated successfully.
Sep 30 17:37:05 compute-0 podman[74940]: 2025-09-30 17:37:05.832051452 +0000 UTC m=+0.640547213 container remove 10d54f14f586d213da4c98843713cb2feb0d1f9bc42af54e4fd1a832fc260d0c (image=quay.io/ceph/ceph:v19, name=hardcore_bassi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:37:05 compute-0 systemd[1]: libpod-conmon-10d54f14f586d213da4c98843713cb2feb0d1f9bc42af54e4fd1a832fc260d0c.scope: Deactivated successfully.
Sep 30 17:37:05 compute-0 podman[74994]: 2025-09-30 17:37:05.903073075 +0000 UTC m=+0.050983771 container create c5667735e5edef3e8ea39b00aeb0ebe461842ca979d0090a722f93fe9d58d4c8 (image=quay.io/ceph/ceph:v19, name=adoring_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 17:37:05 compute-0 systemd[1]: Started libpod-conmon-c5667735e5edef3e8ea39b00aeb0ebe461842ca979d0090a722f93fe9d58d4c8.scope.
Sep 30 17:37:05 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:37:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c50f12cb5668debd76df8d305a0bddd0e53163d1530692254c392c0b67e8ca40/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c50f12cb5668debd76df8d305a0bddd0e53163d1530692254c392c0b67e8ca40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c50f12cb5668debd76df8d305a0bddd0e53163d1530692254c392c0b67e8ca40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:05 compute-0 podman[74994]: 2025-09-30 17:37:05.873184705 +0000 UTC m=+0.021095431 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:37:05 compute-0 podman[74994]: 2025-09-30 17:37:05.985240369 +0000 UTC m=+0.133151075 container init c5667735e5edef3e8ea39b00aeb0ebe461842ca979d0090a722f93fe9d58d4c8 (image=quay.io/ceph/ceph:v19, name=adoring_cray, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Sep 30 17:37:05 compute-0 podman[74994]: 2025-09-30 17:37:05.992125641 +0000 UTC m=+0.140036337 container start c5667735e5edef3e8ea39b00aeb0ebe461842ca979d0090a722f93fe9d58d4c8 (image=quay.io/ceph/ceph:v19, name=adoring_cray, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:37:05 compute-0 podman[74994]: 2025-09-30 17:37:05.996668375 +0000 UTC m=+0.144579081 container attach c5667735e5edef3e8ea39b00aeb0ebe461842ca979d0090a722f93fe9d58d4c8 (image=quay.io/ceph/ceph:v19, name=adoring_cray, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Sep 30 17:37:06 compute-0 ceph-mon[73755]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:37:06 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:06 compute-0 ceph-mon[73755]: Set ssh ssh_identity_pub
Sep 30 17:37:06 compute-0 ceph-mon[73755]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:37:06 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:37:06 compute-0 sshd-session[75036]: Accepted publickey for ceph-admin from 192.168.122.100 port 45630 ssh2: RSA SHA256:VErvvXRx5E6TZRj2L+dQwgZehzW+L2wAETKKYOgEi0M
Sep 30 17:37:06 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Sep 30 17:37:06 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Sep 30 17:37:06 compute-0 systemd-logind[811]: New session 22 of user ceph-admin.
Sep 30 17:37:06 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Sep 30 17:37:06 compute-0 systemd[1]: Starting User Manager for UID 42477...
Sep 30 17:37:06 compute-0 systemd[75040]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 17:37:06 compute-0 ceph-mgr[74051]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Sep 30 17:37:06 compute-0 systemd[75040]: Queued start job for default target Main User Target.
Sep 30 17:37:06 compute-0 sshd-session[75053]: Accepted publickey for ceph-admin from 192.168.122.100 port 45642 ssh2: RSA SHA256:VErvvXRx5E6TZRj2L+dQwgZehzW+L2wAETKKYOgEi0M
Sep 30 17:37:06 compute-0 systemd-logind[811]: New session 24 of user ceph-admin.
Sep 30 17:37:06 compute-0 systemd[75040]: Created slice User Application Slice.
Sep 30 17:37:06 compute-0 systemd[75040]: Started Mark boot as successful after the user session has run 2 minutes.
Sep 30 17:37:06 compute-0 systemd[75040]: Started Daily Cleanup of User's Temporary Directories.
Sep 30 17:37:06 compute-0 systemd[75040]: Reached target Paths.
Sep 30 17:37:06 compute-0 systemd[75040]: Reached target Timers.
Sep 30 17:37:06 compute-0 systemd[75040]: Starting D-Bus User Message Bus Socket...
Sep 30 17:37:06 compute-0 systemd[75040]: Starting Create User's Volatile Files and Directories...
Sep 30 17:37:06 compute-0 systemd[75040]: Listening on D-Bus User Message Bus Socket.
Sep 30 17:37:06 compute-0 systemd[75040]: Reached target Sockets.
Sep 30 17:37:06 compute-0 systemd[75040]: Finished Create User's Volatile Files and Directories.
Sep 30 17:37:06 compute-0 systemd[75040]: Reached target Basic System.
Sep 30 17:37:06 compute-0 systemd[75040]: Reached target Main User Target.
Sep 30 17:37:06 compute-0 systemd[75040]: Startup finished in 141ms.
Sep 30 17:37:06 compute-0 systemd[1]: Started User Manager for UID 42477.
Sep 30 17:37:06 compute-0 systemd[1]: Started Session 22 of User ceph-admin.
Sep 30 17:37:06 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Sep 30 17:37:06 compute-0 sshd-session[75036]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 17:37:06 compute-0 sshd-session[75053]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 17:37:06 compute-0 sudo[75060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:37:06 compute-0 sudo[75060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:06 compute-0 sudo[75060]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:07 compute-0 sshd-session[75085]: Accepted publickey for ceph-admin from 192.168.122.100 port 45648 ssh2: RSA SHA256:VErvvXRx5E6TZRj2L+dQwgZehzW+L2wAETKKYOgEi0M
Sep 30 17:37:07 compute-0 systemd-logind[811]: New session 25 of user ceph-admin.
Sep 30 17:37:07 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Sep 30 17:37:07 compute-0 sshd-session[75085]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 17:37:07 compute-0 ceph-mon[73755]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:37:07 compute-0 sudo[75089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Sep 30 17:37:07 compute-0 sudo[75089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:07 compute-0 sudo[75089]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:07 compute-0 sshd-session[75114]: Accepted publickey for ceph-admin from 192.168.122.100 port 45662 ssh2: RSA SHA256:VErvvXRx5E6TZRj2L+dQwgZehzW+L2wAETKKYOgEi0M
Sep 30 17:37:07 compute-0 systemd-logind[811]: New session 26 of user ceph-admin.
Sep 30 17:37:07 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Sep 30 17:37:07 compute-0 sshd-session[75114]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 17:37:07 compute-0 sudo[75118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Sep 30 17:37:07 compute-0 sudo[75118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:07 compute-0 sudo[75118]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:07 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Sep 30 17:37:07 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Sep 30 17:37:07 compute-0 sshd-session[75143]: Accepted publickey for ceph-admin from 192.168.122.100 port 45670 ssh2: RSA SHA256:VErvvXRx5E6TZRj2L+dQwgZehzW+L2wAETKKYOgEi0M
Sep 30 17:37:07 compute-0 systemd-logind[811]: New session 27 of user ceph-admin.
Sep 30 17:37:07 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Sep 30 17:37:07 compute-0 sshd-session[75143]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 17:37:07 compute-0 sudo[75147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:37:07 compute-0 sudo[75147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:07 compute-0 sudo[75147]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:08 compute-0 sshd-session[75172]: Accepted publickey for ceph-admin from 192.168.122.100 port 45674 ssh2: RSA SHA256:VErvvXRx5E6TZRj2L+dQwgZehzW+L2wAETKKYOgEi0M
Sep 30 17:37:08 compute-0 systemd-logind[811]: New session 28 of user ceph-admin.
Sep 30 17:37:08 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Sep 30 17:37:08 compute-0 sshd-session[75172]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 17:37:08 compute-0 sudo[75176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:37:08 compute-0 sudo[75176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:08 compute-0 sudo[75176]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:08 compute-0 ceph-mon[73755]: Deploying cephadm binary to compute-0
Sep 30 17:37:08 compute-0 sshd-session[75201]: Accepted publickey for ceph-admin from 192.168.122.100 port 45686 ssh2: RSA SHA256:VErvvXRx5E6TZRj2L+dQwgZehzW+L2wAETKKYOgEi0M
Sep 30 17:37:08 compute-0 systemd-logind[811]: New session 29 of user ceph-admin.
Sep 30 17:37:08 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Sep 30 17:37:08 compute-0 sshd-session[75201]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 17:37:08 compute-0 sudo[75205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Sep 30 17:37:08 compute-0 sudo[75205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:08 compute-0 sudo[75205]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:08 compute-0 ceph-mgr[74051]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Sep 30 17:37:08 compute-0 sshd-session[75230]: Accepted publickey for ceph-admin from 192.168.122.100 port 45690 ssh2: RSA SHA256:VErvvXRx5E6TZRj2L+dQwgZehzW+L2wAETKKYOgEi0M
Sep 30 17:37:08 compute-0 systemd-logind[811]: New session 30 of user ceph-admin.
Sep 30 17:37:08 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Sep 30 17:37:08 compute-0 sshd-session[75230]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 17:37:08 compute-0 sudo[75234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:37:08 compute-0 sudo[75234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:08 compute-0 sudo[75234]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053024 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:37:09 compute-0 sshd-session[75259]: Accepted publickey for ceph-admin from 192.168.122.100 port 45706 ssh2: RSA SHA256:VErvvXRx5E6TZRj2L+dQwgZehzW+L2wAETKKYOgEi0M
Sep 30 17:37:09 compute-0 systemd-logind[811]: New session 31 of user ceph-admin.
Sep 30 17:37:09 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Sep 30 17:37:09 compute-0 sshd-session[75259]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 17:37:09 compute-0 sudo[75263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Sep 30 17:37:09 compute-0 sudo[75263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:09 compute-0 sudo[75263]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:09 compute-0 sshd-session[75288]: Accepted publickey for ceph-admin from 192.168.122.100 port 45712 ssh2: RSA SHA256:VErvvXRx5E6TZRj2L+dQwgZehzW+L2wAETKKYOgEi0M
Sep 30 17:37:09 compute-0 systemd-logind[811]: New session 32 of user ceph-admin.
Sep 30 17:37:09 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Sep 30 17:37:09 compute-0 sshd-session[75288]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 17:37:10 compute-0 sshd-session[75315]: Accepted publickey for ceph-admin from 192.168.122.100 port 45728 ssh2: RSA SHA256:VErvvXRx5E6TZRj2L+dQwgZehzW+L2wAETKKYOgEi0M
Sep 30 17:37:10 compute-0 systemd-logind[811]: New session 33 of user ceph-admin.
Sep 30 17:37:10 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Sep 30 17:37:10 compute-0 sshd-session[75315]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 17:37:10 compute-0 sudo[75319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Sep 30 17:37:10 compute-0 sudo[75319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:10 compute-0 sudo[75319]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:10 compute-0 ceph-mgr[74051]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Sep 30 17:37:10 compute-0 sshd-session[75344]: Accepted publickey for ceph-admin from 192.168.122.100 port 45732 ssh2: RSA SHA256:VErvvXRx5E6TZRj2L+dQwgZehzW+L2wAETKKYOgEi0M
Sep 30 17:37:10 compute-0 systemd-logind[811]: New session 34 of user ceph-admin.
Sep 30 17:37:10 compute-0 systemd[1]: Started Session 34 of User ceph-admin.
Sep 30 17:37:10 compute-0 sshd-session[75344]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 17:37:10 compute-0 sudo[75348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Sep 30 17:37:10 compute-0 sudo[75348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:11 compute-0 sudo[75348]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Sep 30 17:37:11 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:11 compute-0 ceph-mgr[74051]: [cephadm INFO root] Added host compute-0
Sep 30 17:37:11 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Added host compute-0
Sep 30 17:37:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Sep 30 17:37:11 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Sep 30 17:37:11 compute-0 adoring_cray[75010]: Added host 'compute-0' with addr '192.168.122.100'
Sep 30 17:37:11 compute-0 systemd[1]: libpod-c5667735e5edef3e8ea39b00aeb0ebe461842ca979d0090a722f93fe9d58d4c8.scope: Deactivated successfully.
Sep 30 17:37:11 compute-0 podman[74994]: 2025-09-30 17:37:11.298940563 +0000 UTC m=+5.446851259 container died c5667735e5edef3e8ea39b00aeb0ebe461842ca979d0090a722f93fe9d58d4c8 (image=quay.io/ceph/ceph:v19, name=adoring_cray, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:37:11 compute-0 sudo[75394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:37:11 compute-0 sudo[75394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:11 compute-0 sudo[75394]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-c50f12cb5668debd76df8d305a0bddd0e53163d1530692254c392c0b67e8ca40-merged.mount: Deactivated successfully.
Sep 30 17:37:11 compute-0 sudo[75430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 pull
Sep 30 17:37:11 compute-0 sudo[75430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:11 compute-0 podman[74994]: 2025-09-30 17:37:11.507804516 +0000 UTC m=+5.655715212 container remove c5667735e5edef3e8ea39b00aeb0ebe461842ca979d0090a722f93fe9d58d4c8 (image=quay.io/ceph/ceph:v19, name=adoring_cray, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:37:11 compute-0 systemd[1]: libpod-conmon-c5667735e5edef3e8ea39b00aeb0ebe461842ca979d0090a722f93fe9d58d4c8.scope: Deactivated successfully.
Sep 30 17:37:11 compute-0 podman[75455]: 2025-09-30 17:37:11.552850797 +0000 UTC m=+0.023065170 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:37:11 compute-0 podman[75455]: 2025-09-30 17:37:11.867730282 +0000 UTC m=+0.337944675 container create d2f933a1f63cdfca9516afe7f3e6de29c184000634ae75f9b8dc564a0f8749b5 (image=quay.io/ceph/ceph:v19, name=laughing_lederberg, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Sep 30 17:37:11 compute-0 systemd[1]: Started libpod-conmon-d2f933a1f63cdfca9516afe7f3e6de29c184000634ae75f9b8dc564a0f8749b5.scope.
Sep 30 17:37:11 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:37:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1edb05241d5e72f0238ff38e5a4ba0d308a7830df40169a13b2f26e95efd4c3e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1edb05241d5e72f0238ff38e5a4ba0d308a7830df40169a13b2f26e95efd4c3e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1edb05241d5e72f0238ff38e5a4ba0d308a7830df40169a13b2f26e95efd4c3e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:11 compute-0 podman[75455]: 2025-09-30 17:37:11.996382512 +0000 UTC m=+0.466596885 container init d2f933a1f63cdfca9516afe7f3e6de29c184000634ae75f9b8dc564a0f8749b5 (image=quay.io/ceph/ceph:v19, name=laughing_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Sep 30 17:37:12 compute-0 podman[75455]: 2025-09-30 17:37:12.002878066 +0000 UTC m=+0.473092409 container start d2f933a1f63cdfca9516afe7f3e6de29c184000634ae75f9b8dc564a0f8749b5 (image=quay.io/ceph/ceph:v19, name=laughing_lederberg, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:37:12 compute-0 podman[75455]: 2025-09-30 17:37:12.005922992 +0000 UTC m=+0.476137425 container attach d2f933a1f63cdfca9516afe7f3e6de29c184000634ae75f9b8dc564a0f8749b5 (image=quay.io/ceph/ceph:v19, name=laughing_lederberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:37:12 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:12 compute-0 ceph-mon[73755]: Added host compute-0
Sep 30 17:37:12 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Sep 30 17:37:12 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:37:12 compute-0 ceph-mgr[74051]: [cephadm INFO root] Saving service mon spec with placement count:5
Sep 30 17:37:12 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Sep 30 17:37:12 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Sep 30 17:37:12 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:12 compute-0 laughing_lederberg[75483]: Scheduled mon update...
Sep 30 17:37:12 compute-0 systemd[1]: libpod-d2f933a1f63cdfca9516afe7f3e6de29c184000634ae75f9b8dc564a0f8749b5.scope: Deactivated successfully.
Sep 30 17:37:12 compute-0 podman[75455]: 2025-09-30 17:37:12.383606795 +0000 UTC m=+0.853821148 container died d2f933a1f63cdfca9516afe7f3e6de29c184000634ae75f9b8dc564a0f8749b5 (image=quay.io/ceph/ceph:v19, name=laughing_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:37:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-1edb05241d5e72f0238ff38e5a4ba0d308a7830df40169a13b2f26e95efd4c3e-merged.mount: Deactivated successfully.
Sep 30 17:37:12 compute-0 podman[75455]: 2025-09-30 17:37:12.424246365 +0000 UTC m=+0.894460768 container remove d2f933a1f63cdfca9516afe7f3e6de29c184000634ae75f9b8dc564a0f8749b5 (image=quay.io/ceph/ceph:v19, name=laughing_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:37:12 compute-0 systemd[1]: libpod-conmon-d2f933a1f63cdfca9516afe7f3e6de29c184000634ae75f9b8dc564a0f8749b5.scope: Deactivated successfully.
Sep 30 17:37:12 compute-0 podman[75535]: 2025-09-30 17:37:12.498027648 +0000 UTC m=+0.050358936 container create ca1bc9c07bf9b233e781fd24c768489479bd452b1a720d389dc5761e49973b79 (image=quay.io/ceph/ceph:v19, name=great_kalam, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Sep 30 17:37:12 compute-0 podman[75488]: 2025-09-30 17:37:12.519088356 +0000 UTC m=+0.482667339 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:37:12 compute-0 systemd[1]: Started libpod-conmon-ca1bc9c07bf9b233e781fd24c768489479bd452b1a720d389dc5761e49973b79.scope.
Sep 30 17:37:12 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:37:12 compute-0 podman[75535]: 2025-09-30 17:37:12.470955748 +0000 UTC m=+0.023287056 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:37:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a839ea3b216bb4267a4d67f6371b1df1e215170594e46829e84c32a29d0fb8b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a839ea3b216bb4267a4d67f6371b1df1e215170594e46829e84c32a29d0fb8b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a839ea3b216bb4267a4d67f6371b1df1e215170594e46829e84c32a29d0fb8b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:12 compute-0 podman[75535]: 2025-09-30 17:37:12.583264028 +0000 UTC m=+0.135595336 container init ca1bc9c07bf9b233e781fd24c768489479bd452b1a720d389dc5761e49973b79 (image=quay.io/ceph/ceph:v19, name=great_kalam, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:37:12 compute-0 podman[75535]: 2025-09-30 17:37:12.58891879 +0000 UTC m=+0.141250078 container start ca1bc9c07bf9b233e781fd24c768489479bd452b1a720d389dc5761e49973b79 (image=quay.io/ceph/ceph:v19, name=great_kalam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Sep 30 17:37:12 compute-0 podman[75535]: 2025-09-30 17:37:12.626285068 +0000 UTC m=+0.178616376 container attach ca1bc9c07bf9b233e781fd24c768489479bd452b1a720d389dc5761e49973b79 (image=quay.io/ceph/ceph:v19, name=great_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:37:12 compute-0 podman[75572]: 2025-09-30 17:37:12.670473947 +0000 UTC m=+0.067269960 container create 10c02e34fc8a17d393a098fac0a3d249dfa967cb01f967af2fa506ed8d20f05f (image=quay.io/ceph/ceph:v19, name=flamboyant_tharp, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Sep 30 17:37:12 compute-0 ceph-mgr[74051]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Sep 30 17:37:12 compute-0 systemd[1]: Started libpod-conmon-10c02e34fc8a17d393a098fac0a3d249dfa967cb01f967af2fa506ed8d20f05f.scope.
Sep 30 17:37:12 compute-0 podman[75572]: 2025-09-30 17:37:12.633173341 +0000 UTC m=+0.029969374 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:37:12 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:37:12 compute-0 podman[75572]: 2025-09-30 17:37:12.827286695 +0000 UTC m=+0.224082728 container init 10c02e34fc8a17d393a098fac0a3d249dfa967cb01f967af2fa506ed8d20f05f (image=quay.io/ceph/ceph:v19, name=flamboyant_tharp, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:37:12 compute-0 podman[75572]: 2025-09-30 17:37:12.834899286 +0000 UTC m=+0.231695299 container start 10c02e34fc8a17d393a098fac0a3d249dfa967cb01f967af2fa506ed8d20f05f (image=quay.io/ceph/ceph:v19, name=flamboyant_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:37:12 compute-0 flamboyant_tharp[75607]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Sep 30 17:37:12 compute-0 podman[75572]: 2025-09-30 17:37:12.948624181 +0000 UTC m=+0.345420214 container attach 10c02e34fc8a17d393a098fac0a3d249dfa967cb01f967af2fa506ed8d20f05f (image=quay.io/ceph/ceph:v19, name=flamboyant_tharp, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Sep 30 17:37:12 compute-0 systemd[1]: libpod-10c02e34fc8a17d393a098fac0a3d249dfa967cb01f967af2fa506ed8d20f05f.scope: Deactivated successfully.
Sep 30 17:37:12 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:37:12 compute-0 ceph-mgr[74051]: [cephadm INFO root] Saving service mgr spec with placement count:2
Sep 30 17:37:12 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Sep 30 17:37:12 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Sep 30 17:37:12 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:12 compute-0 great_kalam[75555]: Scheduled mgr update...
Sep 30 17:37:12 compute-0 podman[75612]: 2025-09-30 17:37:12.985149338 +0000 UTC m=+0.023003118 container died 10c02e34fc8a17d393a098fac0a3d249dfa967cb01f967af2fa506ed8d20f05f (image=quay.io/ceph/ceph:v19, name=flamboyant_tharp, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Sep 30 17:37:12 compute-0 systemd[1]: libpod-ca1bc9c07bf9b233e781fd24c768489479bd452b1a720d389dc5761e49973b79.scope: Deactivated successfully.
Sep 30 17:37:12 compute-0 conmon[75555]: conmon ca1bc9c07bf9b233e781 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ca1bc9c07bf9b233e781fd24c768489479bd452b1a720d389dc5761e49973b79.scope/container/memory.events
Sep 30 17:37:13 compute-0 podman[75535]: 2025-09-30 17:37:13.040490928 +0000 UTC m=+0.592822226 container died ca1bc9c07bf9b233e781fd24c768489479bd452b1a720d389dc5761e49973b79 (image=quay.io/ceph/ceph:v19, name=great_kalam, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True)
Sep 30 17:37:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc3058da05dc0268b647f26663dd552ef08dfcc0016dbb0edc39d7a13279ef80-merged.mount: Deactivated successfully.
Sep 30 17:37:13 compute-0 podman[75612]: 2025-09-30 17:37:13.165312612 +0000 UTC m=+0.203166372 container remove 10c02e34fc8a17d393a098fac0a3d249dfa967cb01f967af2fa506ed8d20f05f (image=quay.io/ceph/ceph:v19, name=flamboyant_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:37:13 compute-0 systemd[1]: libpod-conmon-10c02e34fc8a17d393a098fac0a3d249dfa967cb01f967af2fa506ed8d20f05f.scope: Deactivated successfully.
Sep 30 17:37:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a839ea3b216bb4267a4d67f6371b1df1e215170594e46829e84c32a29d0fb8b-merged.mount: Deactivated successfully.
Sep 30 17:37:13 compute-0 sudo[75430]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:13 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Sep 30 17:37:13 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:13 compute-0 podman[75535]: 2025-09-30 17:37:13.218933868 +0000 UTC m=+0.771265156 container remove ca1bc9c07bf9b233e781fd24c768489479bd452b1a720d389dc5761e49973b79 (image=quay.io/ceph/ceph:v19, name=great_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:37:13 compute-0 systemd[1]: libpod-conmon-ca1bc9c07bf9b233e781fd24c768489479bd452b1a720d389dc5761e49973b79.scope: Deactivated successfully.
Sep 30 17:37:13 compute-0 sudo[75644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:37:13 compute-0 sudo[75644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:13 compute-0 sudo[75644]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:13 compute-0 podman[75642]: 2025-09-30 17:37:13.31618589 +0000 UTC m=+0.077362163 container create cd42a7326a559c2a2cb726be4aba817055c7a8d64472c1d706ca6d694f7cb0a7 (image=quay.io/ceph/ceph:v19, name=kind_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Sep 30 17:37:13 compute-0 sudo[75681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Sep 30 17:37:13 compute-0 sudo[75681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:13 compute-0 podman[75642]: 2025-09-30 17:37:13.261446506 +0000 UTC m=+0.022622799 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:37:13 compute-0 systemd[1]: Started libpod-conmon-cd42a7326a559c2a2cb726be4aba817055c7a8d64472c1d706ca6d694f7cb0a7.scope.
Sep 30 17:37:13 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:37:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/609384865eea5fbdce9e95819eba583caf604013aa005bf3ee406246c8c1dbcb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/609384865eea5fbdce9e95819eba583caf604013aa005bf3ee406246c8c1dbcb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/609384865eea5fbdce9e95819eba583caf604013aa005bf3ee406246c8c1dbcb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:13 compute-0 ceph-mon[73755]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:37:13 compute-0 ceph-mon[73755]: Saving service mon spec with placement count:5
Sep 30 17:37:13 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:13 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:13 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:13 compute-0 podman[75642]: 2025-09-30 17:37:13.535673501 +0000 UTC m=+0.296849794 container init cd42a7326a559c2a2cb726be4aba817055c7a8d64472c1d706ca6d694f7cb0a7 (image=quay.io/ceph/ceph:v19, name=kind_dubinsky, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Sep 30 17:37:13 compute-0 podman[75642]: 2025-09-30 17:37:13.541067906 +0000 UTC m=+0.302244179 container start cd42a7326a559c2a2cb726be4aba817055c7a8d64472c1d706ca6d694f7cb0a7 (image=quay.io/ceph/ceph:v19, name=kind_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Sep 30 17:37:13 compute-0 sudo[75681]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:13 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:37:13 compute-0 podman[75642]: 2025-09-30 17:37:13.651076308 +0000 UTC m=+0.412252581 container attach cd42a7326a559c2a2cb726be4aba817055c7a8d64472c1d706ca6d694f7cb0a7 (image=quay.io/ceph/ceph:v19, name=kind_dubinsky, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Sep 30 17:37:13 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:13 compute-0 sudo[75752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:37:13 compute-0 sudo[75752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:13 compute-0 sudo[75752]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:13 compute-0 sudo[75777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Sep 30 17:37:13 compute-0 sudo[75777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:13 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:37:13 compute-0 ceph-mgr[74051]: [cephadm INFO root] Saving service crash spec with placement *
Sep 30 17:37:13 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Sep 30 17:37:13 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Sep 30 17:37:13 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:13 compute-0 kind_dubinsky[75708]: Scheduled crash update...
Sep 30 17:37:13 compute-0 systemd[1]: libpod-cd42a7326a559c2a2cb726be4aba817055c7a8d64472c1d706ca6d694f7cb0a7.scope: Deactivated successfully.
Sep 30 17:37:13 compute-0 podman[75642]: 2025-09-30 17:37:13.95703354 +0000 UTC m=+0.718209813 container died cd42a7326a559c2a2cb726be4aba817055c7a8d64472c1d706ca6d694f7cb0a7 (image=quay.io/ceph/ceph:v19, name=kind_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 17:37:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-609384865eea5fbdce9e95819eba583caf604013aa005bf3ee406246c8c1dbcb-merged.mount: Deactivated successfully.
Sep 30 17:37:13 compute-0 podman[75642]: 2025-09-30 17:37:13.994901371 +0000 UTC m=+0.756077644 container remove cd42a7326a559c2a2cb726be4aba817055c7a8d64472c1d706ca6d694f7cb0a7 (image=quay.io/ceph/ceph:v19, name=kind_dubinsky, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Sep 30 17:37:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054710 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:37:14 compute-0 systemd[1]: libpod-conmon-cd42a7326a559c2a2cb726be4aba817055c7a8d64472c1d706ca6d694f7cb0a7.scope: Deactivated successfully.
Sep 30 17:37:14 compute-0 podman[75827]: 2025-09-30 17:37:14.052177299 +0000 UTC m=+0.037666507 container create 959720e10987a0a572078f8872dbc1f49dd26b6f190d66f1e040d09aefe74c53 (image=quay.io/ceph/ceph:v19, name=quizzical_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:37:14 compute-0 systemd[1]: Started libpod-conmon-959720e10987a0a572078f8872dbc1f49dd26b6f190d66f1e040d09aefe74c53.scope.
Sep 30 17:37:14 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:37:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d552ad790ed0d19a1821b31042a25acdc8d7a98969ce02ddacd38eadf48b73e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d552ad790ed0d19a1821b31042a25acdc8d7a98969ce02ddacd38eadf48b73e9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d552ad790ed0d19a1821b31042a25acdc8d7a98969ce02ddacd38eadf48b73e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:14 compute-0 podman[75827]: 2025-09-30 17:37:14.034405803 +0000 UTC m=+0.019895031 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:37:14 compute-0 podman[75827]: 2025-09-30 17:37:14.136783843 +0000 UTC m=+0.122273081 container init 959720e10987a0a572078f8872dbc1f49dd26b6f190d66f1e040d09aefe74c53 (image=quay.io/ceph/ceph:v19, name=quizzical_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid)
Sep 30 17:37:14 compute-0 podman[75827]: 2025-09-30 17:37:14.14223277 +0000 UTC m=+0.127721978 container start 959720e10987a0a572078f8872dbc1f49dd26b6f190d66f1e040d09aefe74c53 (image=quay.io/ceph/ceph:v19, name=quizzical_pasteur, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Sep 30 17:37:14 compute-0 podman[75827]: 2025-09-30 17:37:14.146316723 +0000 UTC m=+0.131805931 container attach 959720e10987a0a572078f8872dbc1f49dd26b6f190d66f1e040d09aefe74c53 (image=quay.io/ceph/ceph:v19, name=quizzical_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:37:14 compute-0 podman[75926]: 2025-09-30 17:37:14.347706249 +0000 UTC m=+0.052350065 container exec 28cb2903608cec8e7c5a39aa3f398cadea7d807beef2cb8cca333795d18a1341 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:37:14 compute-0 podman[75926]: 2025-09-30 17:37:14.449744711 +0000 UTC m=+0.154388517 container exec_died 28cb2903608cec8e7c5a39aa3f398cadea7d807beef2cb8cca333795d18a1341 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:37:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Sep 30 17:37:14 compute-0 ceph-mon[73755]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:37:14 compute-0 ceph-mon[73755]: Saving service mgr spec with placement count:2
Sep 30 17:37:14 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:14 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:14 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1205059426' entity='client.admin' 
Sep 30 17:37:14 compute-0 systemd[1]: libpod-959720e10987a0a572078f8872dbc1f49dd26b6f190d66f1e040d09aefe74c53.scope: Deactivated successfully.
Sep 30 17:37:14 compute-0 podman[75827]: 2025-09-30 17:37:14.548816229 +0000 UTC m=+0.534305457 container died 959720e10987a0a572078f8872dbc1f49dd26b6f190d66f1e040d09aefe74c53 (image=quay.io/ceph/ceph:v19, name=quizzical_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid)
Sep 30 17:37:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-d552ad790ed0d19a1821b31042a25acdc8d7a98969ce02ddacd38eadf48b73e9-merged.mount: Deactivated successfully.
Sep 30 17:37:14 compute-0 podman[75827]: 2025-09-30 17:37:14.592587448 +0000 UTC m=+0.578076656 container remove 959720e10987a0a572078f8872dbc1f49dd26b6f190d66f1e040d09aefe74c53 (image=quay.io/ceph/ceph:v19, name=quizzical_pasteur, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 17:37:14 compute-0 systemd[1]: libpod-conmon-959720e10987a0a572078f8872dbc1f49dd26b6f190d66f1e040d09aefe74c53.scope: Deactivated successfully.
Sep 30 17:37:14 compute-0 sudo[75777]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:37:14 compute-0 podman[75994]: 2025-09-30 17:37:14.655321183 +0000 UTC m=+0.039415001 container create 9b529e87e2b224f1db3f16315e917823b7958068ecf6c597dbe0b55dfdf6b679 (image=quay.io/ceph/ceph:v19, name=hopeful_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True)
Sep 30 17:37:14 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:14 compute-0 systemd[1]: Started libpod-conmon-9b529e87e2b224f1db3f16315e917823b7958068ecf6c597dbe0b55dfdf6b679.scope.
Sep 30 17:37:14 compute-0 ceph-mgr[74051]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Sep 30 17:37:14 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:37:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fa3184f0611d5faecf9bc321c16bab39f5dce67680de595145b208988759a36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fa3184f0611d5faecf9bc321c16bab39f5dce67680de595145b208988759a36/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fa3184f0611d5faecf9bc321c16bab39f5dce67680de595145b208988759a36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:14 compute-0 sudo[76008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:37:14 compute-0 podman[75994]: 2025-09-30 17:37:14.635186677 +0000 UTC m=+0.019280515 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:37:14 compute-0 sudo[76008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:14 compute-0 sudo[76008]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:14 compute-0 podman[75994]: 2025-09-30 17:37:14.739979838 +0000 UTC m=+0.124073676 container init 9b529e87e2b224f1db3f16315e917823b7958068ecf6c597dbe0b55dfdf6b679 (image=quay.io/ceph/ceph:v19, name=hopeful_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:37:14 compute-0 podman[75994]: 2025-09-30 17:37:14.746078461 +0000 UTC m=+0.130172279 container start 9b529e87e2b224f1db3f16315e917823b7958068ecf6c597dbe0b55dfdf6b679 (image=quay.io/ceph/ceph:v19, name=hopeful_roentgen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:37:14 compute-0 podman[75994]: 2025-09-30 17:37:14.750590745 +0000 UTC m=+0.134684583 container attach 9b529e87e2b224f1db3f16315e917823b7958068ecf6c597dbe0b55dfdf6b679 (image=quay.io/ceph/ceph:v19, name=hopeful_roentgen, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:37:14 compute-0 sudo[76039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 17:37:14 compute-0 sudo[76039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:15 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 76098 (sysctl)
Sep 30 17:37:15 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Sep 30 17:37:15 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:37:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Sep 30 17:37:15 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Sep 30 17:37:15 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:15 compute-0 systemd[1]: libpod-9b529e87e2b224f1db3f16315e917823b7958068ecf6c597dbe0b55dfdf6b679.scope: Deactivated successfully.
Sep 30 17:37:15 compute-0 podman[76105]: 2025-09-30 17:37:15.148759871 +0000 UTC m=+0.023719557 container died 9b529e87e2b224f1db3f16315e917823b7958068ecf6c597dbe0b55dfdf6b679 (image=quay.io/ceph/ceph:v19, name=hopeful_roentgen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 17:37:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-0fa3184f0611d5faecf9bc321c16bab39f5dce67680de595145b208988759a36-merged.mount: Deactivated successfully.
Sep 30 17:37:15 compute-0 podman[76105]: 2025-09-30 17:37:15.191413972 +0000 UTC m=+0.066373648 container remove 9b529e87e2b224f1db3f16315e917823b7958068ecf6c597dbe0b55dfdf6b679 (image=quay.io/ceph/ceph:v19, name=hopeful_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Sep 30 17:37:15 compute-0 systemd[1]: libpod-conmon-9b529e87e2b224f1db3f16315e917823b7958068ecf6c597dbe0b55dfdf6b679.scope: Deactivated successfully.
Sep 30 17:37:15 compute-0 podman[76123]: 2025-09-30 17:37:15.256168037 +0000 UTC m=+0.037314927 container create 6ae333501237330a7626a7e09456eb44340e606375567e26dffbfa6f8f2a71fa (image=quay.io/ceph/ceph:v19, name=sharp_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:37:15 compute-0 systemd[1]: Started libpod-conmon-6ae333501237330a7626a7e09456eb44340e606375567e26dffbfa6f8f2a71fa.scope.
Sep 30 17:37:15 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39c7e17e7c6913afabacd8df01c45b977a9e63d2f6f148c88aef0f57e89b8cbb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39c7e17e7c6913afabacd8df01c45b977a9e63d2f6f148c88aef0f57e89b8cbb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39c7e17e7c6913afabacd8df01c45b977a9e63d2f6f148c88aef0f57e89b8cbb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:15 compute-0 podman[76123]: 2025-09-30 17:37:15.240810532 +0000 UTC m=+0.021957452 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:37:15 compute-0 podman[76123]: 2025-09-30 17:37:15.34229658 +0000 UTC m=+0.123443490 container init 6ae333501237330a7626a7e09456eb44340e606375567e26dffbfa6f8f2a71fa (image=quay.io/ceph/ceph:v19, name=sharp_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Sep 30 17:37:15 compute-0 podman[76123]: 2025-09-30 17:37:15.350238859 +0000 UTC m=+0.131385749 container start 6ae333501237330a7626a7e09456eb44340e606375567e26dffbfa6f8f2a71fa (image=quay.io/ceph/ceph:v19, name=sharp_newton, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Sep 30 17:37:15 compute-0 podman[76123]: 2025-09-30 17:37:15.353918112 +0000 UTC m=+0.135065042 container attach 6ae333501237330a7626a7e09456eb44340e606375567e26dffbfa6f8f2a71fa (image=quay.io/ceph/ceph:v19, name=sharp_newton, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:37:15 compute-0 sudo[76039]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:15 compute-0 sudo[76159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:37:15 compute-0 sudo[76159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:15 compute-0 sudo[76159]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:15 compute-0 sudo[76184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Sep 30 17:37:15 compute-0 sudo[76184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:15 compute-0 ceph-mon[73755]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:37:15 compute-0 ceph-mon[73755]: Saving service crash spec with placement *
Sep 30 17:37:15 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1205059426' entity='client.admin' 
Sep 30 17:37:15 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:15 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:15 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:37:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Sep 30 17:37:15 compute-0 sudo[76184]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:15 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:15 compute-0 ceph-mgr[74051]: [cephadm INFO root] Added label _admin to host compute-0
Sep 30 17:37:15 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Sep 30 17:37:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:37:15 compute-0 sharp_newton[76145]: Added label _admin to host compute-0
Sep 30 17:37:15 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:15 compute-0 systemd[1]: libpod-6ae333501237330a7626a7e09456eb44340e606375567e26dffbfa6f8f2a71fa.scope: Deactivated successfully.
Sep 30 17:37:15 compute-0 podman[76123]: 2025-09-30 17:37:15.7724658 +0000 UTC m=+0.553612690 container died 6ae333501237330a7626a7e09456eb44340e606375567e26dffbfa6f8f2a71fa (image=quay.io/ceph/ceph:v19, name=sharp_newton, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Sep 30 17:37:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-39c7e17e7c6913afabacd8df01c45b977a9e63d2f6f148c88aef0f57e89b8cbb-merged.mount: Deactivated successfully.
Sep 30 17:37:15 compute-0 sudo[76247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:37:15 compute-0 sudo[76247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:15 compute-0 sudo[76247]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:15 compute-0 podman[76123]: 2025-09-30 17:37:15.831569764 +0000 UTC m=+0.612716654 container remove 6ae333501237330a7626a7e09456eb44340e606375567e26dffbfa6f8f2a71fa (image=quay.io/ceph/ceph:v19, name=sharp_newton, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:37:15 compute-0 systemd[1]: libpod-conmon-6ae333501237330a7626a7e09456eb44340e606375567e26dffbfa6f8f2a71fa.scope: Deactivated successfully.
Sep 30 17:37:15 compute-0 sudo[76284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- inventory --format=json-pretty --filter-for-batch
Sep 30 17:37:15 compute-0 sudo[76284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:15 compute-0 podman[76303]: 2025-09-30 17:37:15.890163576 +0000 UTC m=+0.037632956 container create 0f6f8d2c02bd6d88e95344288f904af285d5933cd09ca171eb454fe833fdc23e (image=quay.io/ceph/ceph:v19, name=hopeful_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 17:37:15 compute-0 systemd[1]: Started libpod-conmon-0f6f8d2c02bd6d88e95344288f904af285d5933cd09ca171eb454fe833fdc23e.scope.
Sep 30 17:37:15 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06bac85f86815b0496550f0774cb18b423b986b649f352139fb626a86333e231/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06bac85f86815b0496550f0774cb18b423b986b649f352139fb626a86333e231/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06bac85f86815b0496550f0774cb18b423b986b649f352139fb626a86333e231/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:15 compute-0 podman[76303]: 2025-09-30 17:37:15.872582404 +0000 UTC m=+0.020051804 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:37:15 compute-0 podman[76303]: 2025-09-30 17:37:15.983442388 +0000 UTC m=+0.130911778 container init 0f6f8d2c02bd6d88e95344288f904af285d5933cd09ca171eb454fe833fdc23e (image=quay.io/ceph/ceph:v19, name=hopeful_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:37:15 compute-0 podman[76303]: 2025-09-30 17:37:15.991242563 +0000 UTC m=+0.138711953 container start 0f6f8d2c02bd6d88e95344288f904af285d5933cd09ca171eb454fe833fdc23e (image=quay.io/ceph/ceph:v19, name=hopeful_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Sep 30 17:37:15 compute-0 podman[76303]: 2025-09-30 17:37:15.994711781 +0000 UTC m=+0.142181161 container attach 0f6f8d2c02bd6d88e95344288f904af285d5933cd09ca171eb454fe833fdc23e (image=quay.io/ceph/ceph:v19, name=hopeful_franklin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:37:16 compute-0 podman[76386]: 2025-09-30 17:37:16.295544764 +0000 UTC m=+0.105388217 container create 9f15cbe1e47315cdcabb7c6cb56a21e91f0eb146b76f9b985e29cff12fd745dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True)
Sep 30 17:37:16 compute-0 podman[76386]: 2025-09-30 17:37:16.209243667 +0000 UTC m=+0.019087150 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:37:16 compute-0 systemd[1]: Started libpod-conmon-9f15cbe1e47315cdcabb7c6cb56a21e91f0eb146b76f9b985e29cff12fd745dc.scope.
Sep 30 17:37:16 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:37:16 compute-0 podman[76386]: 2025-09-30 17:37:16.374067915 +0000 UTC m=+0.183911388 container init 9f15cbe1e47315cdcabb7c6cb56a21e91f0eb146b76f9b985e29cff12fd745dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:37:16 compute-0 podman[76386]: 2025-09-30 17:37:16.378927647 +0000 UTC m=+0.188771100 container start 9f15cbe1e47315cdcabb7c6cb56a21e91f0eb146b76f9b985e29cff12fd745dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_hertz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:37:16 compute-0 relaxed_hertz[76402]: 167 167
Sep 30 17:37:16 compute-0 systemd[1]: libpod-9f15cbe1e47315cdcabb7c6cb56a21e91f0eb146b76f9b985e29cff12fd745dc.scope: Deactivated successfully.
Sep 30 17:37:16 compute-0 podman[76386]: 2025-09-30 17:37:16.412117311 +0000 UTC m=+0.221960784 container attach 9f15cbe1e47315cdcabb7c6cb56a21e91f0eb146b76f9b985e29cff12fd745dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_hertz, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:37:16 compute-0 podman[76386]: 2025-09-30 17:37:16.412842989 +0000 UTC m=+0.222686442 container died 9f15cbe1e47315cdcabb7c6cb56a21e91f0eb146b76f9b985e29cff12fd745dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:37:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-8bef5b67353bc28443a791d9bea658548a15ad7f35d26a7a40613bc56d9175c7-merged.mount: Deactivated successfully.
Sep 30 17:37:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Sep 30 17:37:16 compute-0 podman[76386]: 2025-09-30 17:37:16.478155009 +0000 UTC m=+0.287998462 container remove 9f15cbe1e47315cdcabb7c6cb56a21e91f0eb146b76f9b985e29cff12fd745dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Sep 30 17:37:16 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/796881462' entity='client.admin' 
Sep 30 17:37:16 compute-0 hopeful_franklin[76327]: set mgr/dashboard/cluster/status
Sep 30 17:37:16 compute-0 systemd[1]: libpod-0f6f8d2c02bd6d88e95344288f904af285d5933cd09ca171eb454fe833fdc23e.scope: Deactivated successfully.
Sep 30 17:37:16 compute-0 podman[76303]: 2025-09-30 17:37:16.49692483 +0000 UTC m=+0.644394210 container died 0f6f8d2c02bd6d88e95344288f904af285d5933cd09ca171eb454fe833fdc23e (image=quay.io/ceph/ceph:v19, name=hopeful_franklin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Sep 30 17:37:16 compute-0 systemd[1]: libpod-conmon-9f15cbe1e47315cdcabb7c6cb56a21e91f0eb146b76f9b985e29cff12fd745dc.scope: Deactivated successfully.
Sep 30 17:37:16 compute-0 ceph-mon[73755]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:37:16 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:16 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:16 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/796881462' entity='client.admin' 
Sep 30 17:37:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-06bac85f86815b0496550f0774cb18b423b986b649f352139fb626a86333e231-merged.mount: Deactivated successfully.
Sep 30 17:37:16 compute-0 podman[76303]: 2025-09-30 17:37:16.658034405 +0000 UTC m=+0.805503785 container remove 0f6f8d2c02bd6d88e95344288f904af285d5933cd09ca171eb454fe833fdc23e (image=quay.io/ceph/ceph:v19, name=hopeful_franklin, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Sep 30 17:37:16 compute-0 systemd[1]: libpod-conmon-0f6f8d2c02bd6d88e95344288f904af285d5933cd09ca171eb454fe833fdc23e.scope: Deactivated successfully.
Sep 30 17:37:16 compute-0 ceph-mgr[74051]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Sep 30 17:37:16 compute-0 sudo[72715]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:16 compute-0 podman[76441]: 2025-09-30 17:37:16.840067186 +0000 UTC m=+0.049699449 container create 0a82607cd97915d034e1745cc70d5e17c738dbbe8ce73194ec1c61358003308b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:37:16 compute-0 systemd[1]: Started libpod-conmon-0a82607cd97915d034e1745cc70d5e17c738dbbe8ce73194ec1c61358003308b.scope.
Sep 30 17:37:16 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:37:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7825174ff206cf60267af731246fa655440afa96387429cb8ce630fb6247fe45/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:16 compute-0 podman[76441]: 2025-09-30 17:37:16.810058282 +0000 UTC m=+0.019690555 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:37:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7825174ff206cf60267af731246fa655440afa96387429cb8ce630fb6247fe45/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7825174ff206cf60267af731246fa655440afa96387429cb8ce630fb6247fe45/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7825174ff206cf60267af731246fa655440afa96387429cb8ce630fb6247fe45/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:16 compute-0 podman[76441]: 2025-09-30 17:37:16.918871294 +0000 UTC m=+0.128503577 container init 0a82607cd97915d034e1745cc70d5e17c738dbbe8ce73194ec1c61358003308b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_pasteur, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 17:37:16 compute-0 podman[76441]: 2025-09-30 17:37:16.926875615 +0000 UTC m=+0.136507868 container start 0a82607cd97915d034e1745cc70d5e17c738dbbe8ce73194ec1c61358003308b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 17:37:16 compute-0 podman[76441]: 2025-09-30 17:37:16.942035556 +0000 UTC m=+0.151667829 container attach 0a82607cd97915d034e1745cc70d5e17c738dbbe8ce73194ec1c61358003308b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_pasteur, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:37:17 compute-0 sudo[76485]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyqftianwzmntvmviayrpglwczyhmxcn ; /usr/bin/python3'
Sep 30 17:37:17 compute-0 sudo[76485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:37:17 compute-0 python3[76487]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:37:17 compute-0 podman[76494]: 2025-09-30 17:37:17.23017561 +0000 UTC m=+0.039098232 container create 8c104a4bef36ec674462393c093feddf6a60072554a7ca7808d774e1284d3256 (image=quay.io/ceph/ceph:v19, name=vigorous_benz, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:37:17 compute-0 systemd[1]: Started libpod-conmon-8c104a4bef36ec674462393c093feddf6a60072554a7ca7808d774e1284d3256.scope.
Sep 30 17:37:17 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:37:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88319508945bf03703a9eb51ea084e70f7e508d89a69ffdbf55a1f4899294683/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88319508945bf03703a9eb51ea084e70f7e508d89a69ffdbf55a1f4899294683/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:17 compute-0 podman[76494]: 2025-09-30 17:37:17.298066695 +0000 UTC m=+0.106989337 container init 8c104a4bef36ec674462393c093feddf6a60072554a7ca7808d774e1284d3256 (image=quay.io/ceph/ceph:v19, name=vigorous_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1)
Sep 30 17:37:17 compute-0 podman[76494]: 2025-09-30 17:37:17.306717602 +0000 UTC m=+0.115640254 container start 8c104a4bef36ec674462393c093feddf6a60072554a7ca7808d774e1284d3256 (image=quay.io/ceph/ceph:v19, name=vigorous_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:37:17 compute-0 podman[76494]: 2025-09-30 17:37:17.211559313 +0000 UTC m=+0.020481945 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:37:17 compute-0 podman[76494]: 2025-09-30 17:37:17.311549083 +0000 UTC m=+0.120471705 container attach 8c104a4bef36ec674462393c093feddf6a60072554a7ca7808d774e1284d3256 (image=quay.io/ceph/ceph:v19, name=vigorous_benz, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:37:17 compute-0 ceph-mon[73755]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:37:17 compute-0 ceph-mon[73755]: Added label _admin to host compute-0
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]: [
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:     {
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:         "available": false,
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:         "being_replaced": false,
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:         "ceph_device_lvm": false,
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:         "device_id": "QEMU_DVD-ROM_QM00001",
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:         "lsm_data": {},
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:         "lvs": [],
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:         "path": "/dev/sr0",
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:         "rejected_reasons": [
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:             "Has a FileSystem",
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:             "Insufficient space (<5GB)"
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:         ],
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:         "sys_api": {
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:             "actuators": null,
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:             "device_nodes": [
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:                 "sr0"
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:             ],
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:             "devname": "sr0",
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:             "human_readable_size": "482.00 KB",
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:             "id_bus": "ata",
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:             "model": "QEMU DVD-ROM",
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:             "nr_requests": "2",
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:             "parent": "/dev/sr0",
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:             "partitions": {},
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:             "path": "/dev/sr0",
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:             "removable": "1",
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:             "rev": "2.5+",
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:             "ro": "0",
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:             "rotational": "0",
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:             "sas_address": "",
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:             "sas_device_handle": "",
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:             "scheduler_mode": "mq-deadline",
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:             "sectors": 0,
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:             "sectorsize": "2048",
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:             "size": 493568.0,
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:             "support_discard": "2048",
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:             "type": "disk",
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:             "vendor": "QEMU"
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:         }
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]:     }
Sep 30 17:37:17 compute-0 peaceful_pasteur[76457]: ]
Sep 30 17:37:17 compute-0 systemd[1]: libpod-0a82607cd97915d034e1745cc70d5e17c738dbbe8ce73194ec1c61358003308b.scope: Deactivated successfully.
Sep 30 17:37:17 compute-0 podman[76441]: 2025-09-30 17:37:17.615382302 +0000 UTC m=+0.825014565 container died 0a82607cd97915d034e1745cc70d5e17c738dbbe8ce73194ec1c61358003308b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:37:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-7825174ff206cf60267af731246fa655440afa96387429cb8ce630fb6247fe45-merged.mount: Deactivated successfully.
Sep 30 17:37:17 compute-0 podman[76441]: 2025-09-30 17:37:17.656881474 +0000 UTC m=+0.866513727 container remove 0a82607cd97915d034e1745cc70d5e17c738dbbe8ce73194ec1c61358003308b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 17:37:17 compute-0 systemd[1]: libpod-conmon-0a82607cd97915d034e1745cc70d5e17c738dbbe8ce73194ec1c61358003308b.scope: Deactivated successfully.
Sep 30 17:37:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Sep 30 17:37:17 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/301093757' entity='client.admin' 
Sep 30 17:37:17 compute-0 systemd[1]: libpod-8c104a4bef36ec674462393c093feddf6a60072554a7ca7808d774e1284d3256.scope: Deactivated successfully.
Sep 30 17:37:17 compute-0 podman[76494]: 2025-09-30 17:37:17.717585918 +0000 UTC m=+0.526508540 container died 8c104a4bef36ec674462393c093feddf6a60072554a7ca7808d774e1284d3256 (image=quay.io/ceph/ceph:v19, name=vigorous_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:37:17 compute-0 sudo[76284]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:37:17 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:37:17 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:37:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-88319508945bf03703a9eb51ea084e70f7e508d89a69ffdbf55a1f4899294683-merged.mount: Deactivated successfully.
Sep 30 17:37:17 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:37:17 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Sep 30 17:37:17 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Sep 30 17:37:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:37:17 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:37:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 17:37:17 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:37:17 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Sep 30 17:37:17 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Sep 30 17:37:17 compute-0 podman[76494]: 2025-09-30 17:37:17.76745716 +0000 UTC m=+0.576379782 container remove 8c104a4bef36ec674462393c093feddf6a60072554a7ca7808d774e1284d3256 (image=quay.io/ceph/ceph:v19, name=vigorous_benz, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Sep 30 17:37:17 compute-0 systemd[1]: libpod-conmon-8c104a4bef36ec674462393c093feddf6a60072554a7ca7808d774e1284d3256.scope: Deactivated successfully.
Sep 30 17:37:17 compute-0 sudo[76485]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:17 compute-0 sudo[77697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Sep 30 17:37:17 compute-0 sudo[77697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:17 compute-0 sudo[77697]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:17 compute-0 sudo[77722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph
Sep 30 17:37:17 compute-0 sudo[77722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:17 compute-0 sudo[77722]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:17 compute-0 sudo[77747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.conf.new
Sep 30 17:37:17 compute-0 sudo[77747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:17 compute-0 sudo[77747]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:18 compute-0 sudo[77772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:37:18 compute-0 sudo[77772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:18 compute-0 sudo[77772]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:18 compute-0 sudo[77797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.conf.new
Sep 30 17:37:18 compute-0 sudo[77797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:18 compute-0 sudo[77797]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:18 compute-0 sudo[77853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.conf.new
Sep 30 17:37:18 compute-0 sudo[77853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:18 compute-0 sudo[77853]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:18 compute-0 sudo[77911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.conf.new
Sep 30 17:37:18 compute-0 sudo[77911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:18 compute-0 sudo[77911]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:18 compute-0 sudo[77951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Sep 30 17:37:18 compute-0 sudo[77951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:18 compute-0 sudo[77951]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:18 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf
Sep 30 17:37:18 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf
Sep 30 17:37:18 compute-0 sudo[77995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config
Sep 30 17:37:18 compute-0 sudo[77995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:18 compute-0 sudo[77995]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:18 compute-0 sudo[78020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config
Sep 30 17:37:18 compute-0 sudo[78020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:18 compute-0 sudo[78020]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:18 compute-0 sudo[78045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf.new
Sep 30 17:37:18 compute-0 sudo[78045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:18 compute-0 sudo[78045]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:18 compute-0 sudo[78073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:37:18 compute-0 sudo[78073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:18 compute-0 sudo[78073]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:18 compute-0 sudo[78119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf.new
Sep 30 17:37:18 compute-0 sudo[78119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:18 compute-0 sudo[78119]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:18 compute-0 sudo[78215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afhtlwhifnhhsvbhebisybzhsghwojld ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759253838.1661227-33526-43312163719361/async_wrapper.py j591878814081 30 /home/zuul/.ansible/tmp/ansible-tmp-1759253838.1661227-33526-43312163719361/AnsiballZ_command.py _'
Sep 30 17:37:18 compute-0 sudo[78215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:37:18 compute-0 sudo[78216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf.new
Sep 30 17:37:18 compute-0 sudo[78216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:18 compute-0 sudo[78216]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:18 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/301093757' entity='client.admin' 
Sep 30 17:37:18 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:18 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:18 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:18 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:18 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Sep 30 17:37:18 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:37:18 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:37:18 compute-0 ceph-mon[73755]: Updating compute-0:/etc/ceph/ceph.conf
Sep 30 17:37:18 compute-0 ceph-mon[73755]: Updating compute-0:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf
Sep 30 17:37:18 compute-0 ceph-mgr[74051]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Sep 30 17:37:18 compute-0 sudo[78243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf.new
Sep 30 17:37:18 compute-0 sudo[78243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:18 compute-0 sudo[78243]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:18 compute-0 sudo[78268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf.new /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf
Sep 30 17:37:18 compute-0 sudo[78268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:18 compute-0 ansible-async_wrapper.py[78229]: Invoked with j591878814081 30 /home/zuul/.ansible/tmp/ansible-tmp-1759253838.1661227-33526-43312163719361/AnsiballZ_command.py _
Sep 30 17:37:18 compute-0 sudo[78268]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:18 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Sep 30 17:37:18 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Sep 30 17:37:18 compute-0 ansible-async_wrapper.py[78295]: Starting module and watcher
Sep 30 17:37:18 compute-0 ansible-async_wrapper.py[78295]: Start watching 78297 (30)
Sep 30 17:37:18 compute-0 ansible-async_wrapper.py[78297]: Start module (78297)
Sep 30 17:37:18 compute-0 ansible-async_wrapper.py[78229]: Return async_wrapper task started.
Sep 30 17:37:18 compute-0 sudo[78215]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:18 compute-0 sudo[78296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Sep 30 17:37:18 compute-0 sudo[78296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:18 compute-0 sudo[78296]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:18 compute-0 sudo[78323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph
Sep 30 17:37:18 compute-0 sudo[78323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:18 compute-0 sudo[78323]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:18 compute-0 python3[78301]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:37:18 compute-0 sudo[78348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.client.admin.keyring.new
Sep 30 17:37:18 compute-0 sudo[78348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:18 compute-0 sudo[78348]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:37:19 compute-0 podman[78372]: 2025-09-30 17:37:19.029986888 +0000 UTC m=+0.039232006 container create 37082087d14cd636339d2cefed40973226a5936c8851f4c754da33544b8b7084 (image=quay.io/ceph/ceph:v19, name=sharp_allen, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid)
Sep 30 17:37:19 compute-0 sudo[78379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:37:19 compute-0 sudo[78379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:19 compute-0 sudo[78379]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:19 compute-0 systemd[1]: Started libpod-conmon-37082087d14cd636339d2cefed40973226a5936c8851f4c754da33544b8b7084.scope.
Sep 30 17:37:19 compute-0 sudo[78411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.client.admin.keyring.new
Sep 30 17:37:19 compute-0 sudo[78411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:19 compute-0 sudo[78411]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:19 compute-0 podman[78372]: 2025-09-30 17:37:19.014321325 +0000 UTC m=+0.023566453 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:37:19 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:37:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cabab0397def0cdccfc023564eed7f5bc37de0fe6ffb9e62813e5535b3fba279/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cabab0397def0cdccfc023564eed7f5bc37de0fe6ffb9e62813e5535b3fba279/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:19 compute-0 podman[78372]: 2025-09-30 17:37:19.139772865 +0000 UTC m=+0.149018003 container init 37082087d14cd636339d2cefed40973226a5936c8851f4c754da33544b8b7084 (image=quay.io/ceph/ceph:v19, name=sharp_allen, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:37:19 compute-0 podman[78372]: 2025-09-30 17:37:19.150202717 +0000 UTC m=+0.159447835 container start 37082087d14cd636339d2cefed40973226a5936c8851f4c754da33544b8b7084 (image=quay.io/ceph/ceph:v19, name=sharp_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Sep 30 17:37:19 compute-0 podman[78372]: 2025-09-30 17:37:19.154593947 +0000 UTC m=+0.163839075 container attach 37082087d14cd636339d2cefed40973226a5936c8851f4c754da33544b8b7084 (image=quay.io/ceph/ceph:v19, name=sharp_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:37:19 compute-0 sudo[78465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.client.admin.keyring.new
Sep 30 17:37:19 compute-0 sudo[78465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:19 compute-0 sudo[78465]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:19 compute-0 sudo[78490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.client.admin.keyring.new
Sep 30 17:37:19 compute-0 sudo[78490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:19 compute-0 sudo[78490]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:19 compute-0 sudo[78534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Sep 30 17:37:19 compute-0 sudo[78534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:19 compute-0 sudo[78534]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:19 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring
Sep 30 17:37:19 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring
Sep 30 17:37:19 compute-0 sudo[78559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config
Sep 30 17:37:19 compute-0 sudo[78559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:19 compute-0 sudo[78559]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:19 compute-0 sudo[78584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config
Sep 30 17:37:19 compute-0 sudo[78584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:19 compute-0 sudo[78584]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:19 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Sep 30 17:37:19 compute-0 sharp_allen[78437]: 
Sep 30 17:37:19 compute-0 sharp_allen[78437]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Sep 30 17:37:19 compute-0 systemd[1]: libpod-37082087d14cd636339d2cefed40973226a5936c8851f4c754da33544b8b7084.scope: Deactivated successfully.
Sep 30 17:37:19 compute-0 podman[78372]: 2025-09-30 17:37:19.551153294 +0000 UTC m=+0.560398412 container died 37082087d14cd636339d2cefed40973226a5936c8851f4c754da33544b8b7084 (image=quay.io/ceph/ceph:v19, name=sharp_allen, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Sep 30 17:37:19 compute-0 sudo[78609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring.new
Sep 30 17:37:19 compute-0 sudo[78609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:19 compute-0 sudo[78609]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-cabab0397def0cdccfc023564eed7f5bc37de0fe6ffb9e62813e5535b3fba279-merged.mount: Deactivated successfully.
Sep 30 17:37:19 compute-0 podman[78372]: 2025-09-30 17:37:19.58965553 +0000 UTC m=+0.598900648 container remove 37082087d14cd636339d2cefed40973226a5936c8851f4c754da33544b8b7084 (image=quay.io/ceph/ceph:v19, name=sharp_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325)
Sep 30 17:37:19 compute-0 systemd[1]: libpod-conmon-37082087d14cd636339d2cefed40973226a5936c8851f4c754da33544b8b7084.scope: Deactivated successfully.
Sep 30 17:37:19 compute-0 ansible-async_wrapper.py[78297]: Module complete (78297)
Sep 30 17:37:19 compute-0 sudo[78643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:37:19 compute-0 sudo[78643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:19 compute-0 sudo[78643]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:19 compute-0 sudo[78673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring.new
Sep 30 17:37:19 compute-0 sudo[78673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:19 compute-0 sudo[78673]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:19 compute-0 ceph-mon[73755]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Sep 30 17:37:19 compute-0 ceph-mon[73755]: Updating compute-0:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring
Sep 30 17:37:19 compute-0 ceph-mon[73755]: from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Sep 30 17:37:19 compute-0 sudo[78721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring.new
Sep 30 17:37:19 compute-0 sudo[78721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:19 compute-0 sudo[78721]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:19 compute-0 sudo[78746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring.new
Sep 30 17:37:19 compute-0 sudo[78746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:19 compute-0 sudo[78746]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:19 compute-0 sudo[78771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring.new /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring
Sep 30 17:37:19 compute-0 sudo[78771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:19 compute-0 sudo[78771]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:37:19 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:37:19 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 17:37:19 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:19 compute-0 ceph-mgr[74051]: [progress INFO root] update: starting ev d3dfbf5d-f3ec-4eec-918e-d60616cc3497 (Updating crash deployment (+1 -> 1))
Sep 30 17:37:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Sep 30 17:37:19 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Sep 30 17:37:20 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Sep 30 17:37:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:37:20 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:37:20 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Sep 30 17:37:20 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Sep 30 17:37:20 compute-0 sudo[78848]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfbbqpdudqguewqdikyswimjoicxcdif ; /usr/bin/python3'
Sep 30 17:37:20 compute-0 sudo[78848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:37:20 compute-0 sudo[78839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:37:20 compute-0 sudo[78839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:20 compute-0 sudo[78839]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:20 compute-0 sudo[78870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:37:20 compute-0 sudo[78870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:20 compute-0 python3[78865]: ansible-ansible.legacy.async_status Invoked with jid=j591878814081.78229 mode=status _async_dir=/root/.ansible_async
Sep 30 17:37:20 compute-0 sudo[78848]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:20 compute-0 sudo[78941]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntefqujqaewphoiqjsuqodylahavunrx ; /usr/bin/python3'
Sep 30 17:37:20 compute-0 sudo[78941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:37:20 compute-0 python3[78943]: ansible-ansible.legacy.async_status Invoked with jid=j591878814081.78229 mode=cleanup _async_dir=/root/.ansible_async
Sep 30 17:37:20 compute-0 sudo[78941]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:20 compute-0 podman[78986]: 2025-09-30 17:37:20.627645432 +0000 UTC m=+0.037982895 container create 1102cfc6d00307841e0e1847b77c193057dcceeb11033442cc76e8a072dfa418 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_napier, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Sep 30 17:37:20 compute-0 systemd[1]: Started libpod-conmon-1102cfc6d00307841e0e1847b77c193057dcceeb11033442cc76e8a072dfa418.scope.
Sep 30 17:37:20 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:37:20 compute-0 podman[78986]: 2025-09-30 17:37:20.692799518 +0000 UTC m=+0.103137001 container init 1102cfc6d00307841e0e1847b77c193057dcceeb11033442cc76e8a072dfa418 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_napier, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Sep 30 17:37:20 compute-0 podman[78986]: 2025-09-30 17:37:20.699536577 +0000 UTC m=+0.109874040 container start 1102cfc6d00307841e0e1847b77c193057dcceeb11033442cc76e8a072dfa418 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Sep 30 17:37:20 compute-0 podman[78986]: 2025-09-30 17:37:20.70364876 +0000 UTC m=+0.113986253 container attach 1102cfc6d00307841e0e1847b77c193057dcceeb11033442cc76e8a072dfa418 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Sep 30 17:37:20 compute-0 loving_napier[79002]: 167 167
Sep 30 17:37:20 compute-0 systemd[1]: libpod-1102cfc6d00307841e0e1847b77c193057dcceeb11033442cc76e8a072dfa418.scope: Deactivated successfully.
Sep 30 17:37:20 compute-0 podman[78986]: 2025-09-30 17:37:20.705316692 +0000 UTC m=+0.115654155 container died 1102cfc6d00307841e0e1847b77c193057dcceeb11033442cc76e8a072dfa418 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Sep 30 17:37:20 compute-0 podman[78986]: 2025-09-30 17:37:20.61205781 +0000 UTC m=+0.022395293 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:37:20 compute-0 ceph-mgr[74051]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Sep 30 17:37:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:20 compute-0 ceph-mon[73755]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Sep 30 17:37:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-25a87552c2b0adb9f8730e93728b8ddde71d5436499e248e28f1582b64b4f39f-merged.mount: Deactivated successfully.
Sep 30 17:37:20 compute-0 podman[78986]: 2025-09-30 17:37:20.740788163 +0000 UTC m=+0.151125626 container remove 1102cfc6d00307841e0e1847b77c193057dcceeb11033442cc76e8a072dfa418 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:37:20 compute-0 systemd[1]: libpod-conmon-1102cfc6d00307841e0e1847b77c193057dcceeb11033442cc76e8a072dfa418.scope: Deactivated successfully.
Sep 30 17:37:20 compute-0 systemd[1]: Reloading.
Sep 30 17:37:20 compute-0 systemd-rc-local-generator[79054]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:37:20 compute-0 systemd-sysv-generator[79061]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:37:20 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:20 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:20 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:20 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Sep 30 17:37:20 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Sep 30 17:37:20 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:37:20 compute-0 ceph-mon[73755]: Deploying daemon crash.compute-0 on compute-0
Sep 30 17:37:20 compute-0 ceph-mon[73755]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Sep 30 17:37:21 compute-0 sudo[79078]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xggswtxrknnfjikuxjrdgchjykvdnmzs ; /usr/bin/python3'
Sep 30 17:37:21 compute-0 sudo[79078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:37:21 compute-0 systemd[1]: Reloading.
Sep 30 17:37:21 compute-0 systemd-rc-local-generator[79110]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:37:21 compute-0 systemd-sysv-generator[79114]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:37:21 compute-0 python3[79082]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:37:21 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 17:37:21 compute-0 sudo[79078]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:21 compute-0 podman[79171]: 2025-09-30 17:37:21.521426163 +0000 UTC m=+0.037036261 container create edbad5d01b7a4dcc6f9c1e9d023898f7130d563d6aabd63d45ee99147c8fe2f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-crash-compute-0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325)
Sep 30 17:37:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/333494a71217147006f1f4e78146f4c9e0039b997ebe5d51f8719aae75905fa3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/333494a71217147006f1f4e78146f4c9e0039b997ebe5d51f8719aae75905fa3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/333494a71217147006f1f4e78146f4c9e0039b997ebe5d51f8719aae75905fa3/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/333494a71217147006f1f4e78146f4c9e0039b997ebe5d51f8719aae75905fa3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:21 compute-0 podman[79171]: 2025-09-30 17:37:21.57391505 +0000 UTC m=+0.089525178 container init edbad5d01b7a4dcc6f9c1e9d023898f7130d563d6aabd63d45ee99147c8fe2f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-crash-compute-0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:37:21 compute-0 podman[79171]: 2025-09-30 17:37:21.579947962 +0000 UTC m=+0.095558060 container start edbad5d01b7a4dcc6f9c1e9d023898f7130d563d6aabd63d45ee99147c8fe2f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:37:21 compute-0 bash[79171]: edbad5d01b7a4dcc6f9c1e9d023898f7130d563d6aabd63d45ee99147c8fe2f6
Sep 30 17:37:21 compute-0 podman[79171]: 2025-09-30 17:37:21.505301528 +0000 UTC m=+0.020911656 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:37:21 compute-0 systemd[1]: Started Ceph crash.compute-0 for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:37:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-crash-compute-0[79187]: INFO:ceph-crash:pinging cluster to exercise our key
Sep 30 17:37:21 compute-0 sudo[78870]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:21 compute-0 sudo[79215]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flnrjojgvtxhyhwgltbfywhuosqbgkqj ; /usr/bin/python3'
Sep 30 17:37:21 compute-0 sudo[79215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:37:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:37:21 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:37:21 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Sep 30 17:37:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-crash-compute-0[79187]: 2025-09-30T17:37:21.717+0000 7fe718afd640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Sep 30 17:37:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-crash-compute-0[79187]: 2025-09-30T17:37:21.717+0000 7fe718afd640 -1 AuthRegistry(0x7fe7140698f0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Sep 30 17:37:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-crash-compute-0[79187]: 2025-09-30T17:37:21.718+0000 7fe718afd640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Sep 30 17:37:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-crash-compute-0[79187]: 2025-09-30T17:37:21.718+0000 7fe718afd640 -1 AuthRegistry(0x7fe718afbff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Sep 30 17:37:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-crash-compute-0[79187]: 2025-09-30T17:37:21.719+0000 7fe712575640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Sep 30 17:37:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-crash-compute-0[79187]: 2025-09-30T17:37:21.719+0000 7fe718afd640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Sep 30 17:37:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-crash-compute-0[79187]: [errno 13] RADOS permission denied (error connecting to the cluster)
Sep 30 17:37:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-crash-compute-0[79187]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Sep 30 17:37:21 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:21 compute-0 ceph-mgr[74051]: [progress INFO root] complete: finished ev d3dfbf5d-f3ec-4eec-918e-d60616cc3497 (Updating crash deployment (+1 -> 1))
Sep 30 17:37:21 compute-0 ceph-mgr[74051]: [progress INFO root] Completed event d3dfbf5d-f3ec-4eec-918e-d60616cc3497 (Updating crash deployment (+1 -> 1)) in 2 seconds
Sep 30 17:37:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Sep 30 17:37:21 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:21 compute-0 python3[79219]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:37:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Sep 30 17:37:21 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Sep 30 17:37:21 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:21 compute-0 podman[79230]: 2025-09-30 17:37:21.859341047 +0000 UTC m=+0.045532184 container create 64257c81923a440590c8409a6552ac80204fe5b9255133c717dbd1f344507e65 (image=quay.io/ceph/ceph:v19, name=musing_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:37:21 compute-0 sudo[79243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 17:37:21 compute-0 sudo[79243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:21 compute-0 sudo[79243]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:21 compute-0 ceph-mon[73755]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:21 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:21 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:21 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:21 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:21 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:21 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:21 compute-0 podman[79230]: 2025-09-30 17:37:21.83914386 +0000 UTC m=+0.025335017 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:37:21 compute-0 systemd[1]: Started libpod-conmon-64257c81923a440590c8409a6552ac80204fe5b9255133c717dbd1f344507e65.scope.
Sep 30 17:37:21 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:37:21 compute-0 sudo[79268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:37:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d139191306dc126fc0c48443d4ad4717dc9620eb5a436ac7ad5e2ddc69521e86/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d139191306dc126fc0c48443d4ad4717dc9620eb5a436ac7ad5e2ddc69521e86/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d139191306dc126fc0c48443d4ad4717dc9620eb5a436ac7ad5e2ddc69521e86/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:21 compute-0 sudo[79268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:21 compute-0 sudo[79268]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:22 compute-0 sudo[79298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Sep 30 17:37:22 compute-0 sudo[79298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:22 compute-0 podman[79230]: 2025-09-30 17:37:22.117084638 +0000 UTC m=+0.303275795 container init 64257c81923a440590c8409a6552ac80204fe5b9255133c717dbd1f344507e65 (image=quay.io/ceph/ceph:v19, name=musing_curie, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:37:22 compute-0 podman[79230]: 2025-09-30 17:37:22.123400057 +0000 UTC m=+0.309591194 container start 64257c81923a440590c8409a6552ac80204fe5b9255133c717dbd1f344507e65 (image=quay.io/ceph/ceph:v19, name=musing_curie, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Sep 30 17:37:22 compute-0 podman[79230]: 2025-09-30 17:37:22.160291203 +0000 UTC m=+0.346482330 container attach 64257c81923a440590c8409a6552ac80204fe5b9255133c717dbd1f344507e65 (image=quay.io/ceph/ceph:v19, name=musing_curie, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:37:22 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Sep 30 17:37:22 compute-0 musing_curie[79293]: 
Sep 30 17:37:22 compute-0 musing_curie[79293]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Sep 30 17:37:22 compute-0 systemd[1]: libpod-64257c81923a440590c8409a6552ac80204fe5b9255133c717dbd1f344507e65.scope: Deactivated successfully.
Sep 30 17:37:22 compute-0 podman[79230]: 2025-09-30 17:37:22.535093883 +0000 UTC m=+0.721285020 container died 64257c81923a440590c8409a6552ac80204fe5b9255133c717dbd1f344507e65 (image=quay.io/ceph/ceph:v19, name=musing_curie, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:37:22 compute-0 podman[79415]: 2025-09-30 17:37:22.554083609 +0000 UTC m=+0.113294515 container exec 28cb2903608cec8e7c5a39aa3f398cadea7d807beef2cb8cca333795d18a1341 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:37:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-d139191306dc126fc0c48443d4ad4717dc9620eb5a436ac7ad5e2ddc69521e86-merged.mount: Deactivated successfully.
Sep 30 17:37:22 compute-0 podman[79230]: 2025-09-30 17:37:22.599242143 +0000 UTC m=+0.785433270 container remove 64257c81923a440590c8409a6552ac80204fe5b9255133c717dbd1f344507e65 (image=quay.io/ceph/ceph:v19, name=musing_curie, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325)
Sep 30 17:37:22 compute-0 sudo[79215]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:22 compute-0 systemd[1]: libpod-conmon-64257c81923a440590c8409a6552ac80204fe5b9255133c717dbd1f344507e65.scope: Deactivated successfully.
Sep 30 17:37:22 compute-0 podman[79415]: 2025-09-30 17:37:22.653768252 +0000 UTC m=+0.212979158 container exec_died 28cb2903608cec8e7c5a39aa3f398cadea7d807beef2cb8cca333795d18a1341 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True)
Sep 30 17:37:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:22 compute-0 sudo[79298]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:37:22 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:37:22 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:37:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:37:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 17:37:22 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:37:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 17:37:22 compute-0 sudo[79520]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itushwqxyltylhqbkvcvlqkpzpyoaczb ; /usr/bin/python3'
Sep 30 17:37:22 compute-0 sudo[79520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:37:22 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:22 compute-0 ceph-mon[73755]: from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Sep 30 17:37:22 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:22 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:22 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:37:22 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:37:22 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:22 compute-0 sudo[79523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 17:37:22 compute-0 sudo[79523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:22 compute-0 sudo[79523]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0)
Sep 30 17:37:22 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0)
Sep 30 17:37:22 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0)
Sep 30 17:37:22 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0)
Sep 30 17:37:23 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:23 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Sep 30 17:37:23 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Sep 30 17:37:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Sep 30 17:37:23 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Sep 30 17:37:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Sep 30 17:37:23 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Sep 30 17:37:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:37:23 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:37:23 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Sep 30 17:37:23 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Sep 30 17:37:23 compute-0 sudo[79548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:37:23 compute-0 sudo[79548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:23 compute-0 python3[79522]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:37:23 compute-0 sudo[79548]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:23 compute-0 podman[79573]: 2025-09-30 17:37:23.106581881 +0000 UTC m=+0.033810380 container create a4840a1e0e3409aced31d531a6c0ac00fdbe10c271fdb6dec96ef43cadbc662d (image=quay.io/ceph/ceph:v19, name=romantic_haibt, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:37:23 compute-0 sudo[79574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:37:23 compute-0 sudo[79574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:23 compute-0 systemd[1]: Started libpod-conmon-a4840a1e0e3409aced31d531a6c0ac00fdbe10c271fdb6dec96ef43cadbc662d.scope.
Sep 30 17:37:23 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:37:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/880f41bc2cbb6287f9356d905605483d6a5ce65f3f3e4da0e96f2f3991fea153/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/880f41bc2cbb6287f9356d905605483d6a5ce65f3f3e4da0e96f2f3991fea153/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/880f41bc2cbb6287f9356d905605483d6a5ce65f3f3e4da0e96f2f3991fea153/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:23 compute-0 podman[79573]: 2025-09-30 17:37:23.177595844 +0000 UTC m=+0.104824363 container init a4840a1e0e3409aced31d531a6c0ac00fdbe10c271fdb6dec96ef43cadbc662d (image=quay.io/ceph/ceph:v19, name=romantic_haibt, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:37:23 compute-0 podman[79573]: 2025-09-30 17:37:23.184977399 +0000 UTC m=+0.112205908 container start a4840a1e0e3409aced31d531a6c0ac00fdbe10c271fdb6dec96ef43cadbc662d (image=quay.io/ceph/ceph:v19, name=romantic_haibt, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:37:23 compute-0 podman[79573]: 2025-09-30 17:37:23.091770379 +0000 UTC m=+0.018998888 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:37:23 compute-0 podman[79573]: 2025-09-30 17:37:23.190672282 +0000 UTC m=+0.117900801 container attach a4840a1e0e3409aced31d531a6c0ac00fdbe10c271fdb6dec96ef43cadbc662d (image=quay.io/ceph/ceph:v19, name=romantic_haibt, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:37:23 compute-0 podman[79652]: 2025-09-30 17:37:23.388418627 +0000 UTC m=+0.034114897 container create 9c13fbea080a07a34eb66b9e64b4327afda82177fe5276c965b9a549d194c663 (image=quay.io/ceph/ceph:v19, name=quizzical_panini, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True)
Sep 30 17:37:23 compute-0 systemd[1]: Started libpod-conmon-9c13fbea080a07a34eb66b9e64b4327afda82177fe5276c965b9a549d194c663.scope.
Sep 30 17:37:23 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:37:23 compute-0 podman[79652]: 2025-09-30 17:37:23.443958722 +0000 UTC m=+0.089655012 container init 9c13fbea080a07a34eb66b9e64b4327afda82177fe5276c965b9a549d194c663 (image=quay.io/ceph/ceph:v19, name=quizzical_panini, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:37:23 compute-0 podman[79652]: 2025-09-30 17:37:23.448726502 +0000 UTC m=+0.094422762 container start 9c13fbea080a07a34eb66b9e64b4327afda82177fe5276c965b9a549d194c663 (image=quay.io/ceph/ceph:v19, name=quizzical_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Sep 30 17:37:23 compute-0 quizzical_panini[79666]: 167 167
Sep 30 17:37:23 compute-0 podman[79652]: 2025-09-30 17:37:23.451608024 +0000 UTC m=+0.097304324 container attach 9c13fbea080a07a34eb66b9e64b4327afda82177fe5276c965b9a549d194c663 (image=quay.io/ceph/ceph:v19, name=quizzical_panini, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:37:23 compute-0 systemd[1]: libpod-9c13fbea080a07a34eb66b9e64b4327afda82177fe5276c965b9a549d194c663.scope: Deactivated successfully.
Sep 30 17:37:23 compute-0 podman[79652]: 2025-09-30 17:37:23.452345562 +0000 UTC m=+0.098041822 container died 9c13fbea080a07a34eb66b9e64b4327afda82177fe5276c965b9a549d194c663 (image=quay.io/ceph/ceph:v19, name=quizzical_panini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:37:23 compute-0 podman[79652]: 2025-09-30 17:37:23.371158024 +0000 UTC m=+0.016854324 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:37:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-265eecc290dc0d3352476fb9cfc6a748af913c0cca2042f8ae18ec15243fd8aa-merged.mount: Deactivated successfully.
Sep 30 17:37:23 compute-0 podman[79652]: 2025-09-30 17:37:23.483443503 +0000 UTC m=+0.129139773 container remove 9c13fbea080a07a34eb66b9e64b4327afda82177fe5276c965b9a549d194c663 (image=quay.io/ceph/ceph:v19, name=quizzical_panini, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Sep 30 17:37:23 compute-0 systemd[1]: libpod-conmon-9c13fbea080a07a34eb66b9e64b4327afda82177fe5276c965b9a549d194c663.scope: Deactivated successfully.
Sep 30 17:37:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Sep 30 17:37:23 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3919087603' entity='client.admin' 
Sep 30 17:37:23 compute-0 sudo[79574]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:37:23 compute-0 systemd[1]: libpod-a4840a1e0e3409aced31d531a6c0ac00fdbe10c271fdb6dec96ef43cadbc662d.scope: Deactivated successfully.
Sep 30 17:37:23 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:23 compute-0 conmon[79613]: conmon a4840a1e0e3409aced31 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a4840a1e0e3409aced31d531a6c0ac00fdbe10c271fdb6dec96ef43cadbc662d.scope/container/memory.events
Sep 30 17:37:23 compute-0 podman[79573]: 2025-09-30 17:37:23.538236269 +0000 UTC m=+0.465464778 container died a4840a1e0e3409aced31d531a6c0ac00fdbe10c271fdb6dec96ef43cadbc662d (image=quay.io/ceph/ceph:v19, name=romantic_haibt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:37:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:37:23 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:23 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.efvthf (unknown last config time)...
Sep 30 17:37:23 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.efvthf (unknown last config time)...
Sep 30 17:37:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.efvthf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Sep 30 17:37:23 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.efvthf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Sep 30 17:37:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Sep 30 17:37:23 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 17:37:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:37:23 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:37:23 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.efvthf on compute-0
Sep 30 17:37:23 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.efvthf on compute-0
Sep 30 17:37:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-880f41bc2cbb6287f9356d905605483d6a5ce65f3f3e4da0e96f2f3991fea153-merged.mount: Deactivated successfully.
Sep 30 17:37:23 compute-0 podman[79573]: 2025-09-30 17:37:23.573272759 +0000 UTC m=+0.500501258 container remove a4840a1e0e3409aced31d531a6c0ac00fdbe10c271fdb6dec96ef43cadbc662d (image=quay.io/ceph/ceph:v19, name=romantic_haibt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 17:37:23 compute-0 systemd[1]: libpod-conmon-a4840a1e0e3409aced31d531a6c0ac00fdbe10c271fdb6dec96ef43cadbc662d.scope: Deactivated successfully.
Sep 30 17:37:23 compute-0 sudo[79520]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:23 compute-0 sudo[79691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:37:23 compute-0 sudo[79691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:23 compute-0 sudo[79691]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:23 compute-0 sudo[79721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:37:23 compute-0 sudo[79721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:23 compute-0 sudo[79769]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgyjzxoafbvvxnrwphfkvnklgytypgzl ; /usr/bin/python3'
Sep 30 17:37:23 compute-0 sudo[79769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:37:23 compute-0 ansible-async_wrapper.py[78295]: Done in kid B.
Sep 30 17:37:23 compute-0 python3[79771]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:37:23 compute-0 podman[79783]: 2025-09-30 17:37:23.898017992 +0000 UTC m=+0.040676472 container create a096455fef97e2e7dd46e6b606bd372396b9173d8da979a5ee71770bcec6545a (image=quay.io/ceph/ceph:v19, name=pensive_spence, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:37:23 compute-0 podman[79798]: 2025-09-30 17:37:23.928028616 +0000 UTC m=+0.040104878 container create b09f6bae115a8f86c643333098ab47c722e9cb6c014f9bdf28dbd1e1a8f32f32 (image=quay.io/ceph/ceph:v19, name=serene_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:37:23 compute-0 systemd[1]: Started libpod-conmon-a096455fef97e2e7dd46e6b606bd372396b9173d8da979a5ee71770bcec6545a.scope.
Sep 30 17:37:23 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:37:23 compute-0 systemd[1]: Started libpod-conmon-b09f6bae115a8f86c643333098ab47c722e9cb6c014f9bdf28dbd1e1a8f32f32.scope.
Sep 30 17:37:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5742baa309ec3a72c5ad57887eed4c9836d727f2f4253f64d0afd9d201f450bc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5742baa309ec3a72c5ad57887eed4c9836d727f2f4253f64d0afd9d201f450bc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5742baa309ec3a72c5ad57887eed4c9836d727f2f4253f64d0afd9d201f450bc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:23 compute-0 podman[79783]: 2025-09-30 17:37:23.964466691 +0000 UTC m=+0.107125201 container init a096455fef97e2e7dd46e6b606bd372396b9173d8da979a5ee71770bcec6545a (image=quay.io/ceph/ceph:v19, name=pensive_spence, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Sep 30 17:37:23 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:37:23 compute-0 podman[79783]: 2025-09-30 17:37:23.969730433 +0000 UTC m=+0.112388923 container start a096455fef97e2e7dd46e6b606bd372396b9173d8da979a5ee71770bcec6545a (image=quay.io/ceph/ceph:v19, name=pensive_spence, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Sep 30 17:37:23 compute-0 podman[79783]: 2025-09-30 17:37:23.973134038 +0000 UTC m=+0.115792528 container attach a096455fef97e2e7dd46e6b606bd372396b9173d8da979a5ee71770bcec6545a (image=quay.io/ceph/ceph:v19, name=pensive_spence, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 17:37:23 compute-0 podman[79783]: 2025-09-30 17:37:23.880872892 +0000 UTC m=+0.023531402 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:37:23 compute-0 podman[79798]: 2025-09-30 17:37:23.980066672 +0000 UTC m=+0.092142934 container init b09f6bae115a8f86c643333098ab47c722e9cb6c014f9bdf28dbd1e1a8f32f32 (image=quay.io/ceph/ceph:v19, name=serene_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:37:23 compute-0 podman[79798]: 2025-09-30 17:37:23.98554884 +0000 UTC m=+0.097625102 container start b09f6bae115a8f86c643333098ab47c722e9cb6c014f9bdf28dbd1e1a8f32f32 (image=quay.io/ceph/ceph:v19, name=serene_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:37:23 compute-0 serene_kowalevski[79821]: 167 167
Sep 30 17:37:23 compute-0 systemd[1]: libpod-b09f6bae115a8f86c643333098ab47c722e9cb6c014f9bdf28dbd1e1a8f32f32.scope: Deactivated successfully.
Sep 30 17:37:23 compute-0 ceph-mon[73755]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:23 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:23 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:23 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:23 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:23 compute-0 ceph-mon[73755]: Reconfiguring mon.compute-0 (unknown last config time)...
Sep 30 17:37:23 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Sep 30 17:37:23 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Sep 30 17:37:23 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:37:23 compute-0 ceph-mon[73755]: Reconfiguring daemon mon.compute-0 on compute-0
Sep 30 17:37:23 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3919087603' entity='client.admin' 
Sep 30 17:37:23 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:23 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:23 compute-0 ceph-mon[73755]: Reconfiguring mgr.compute-0.efvthf (unknown last config time)...
Sep 30 17:37:23 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.efvthf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Sep 30 17:37:23 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 17:37:23 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:37:23 compute-0 ceph-mon[73755]: Reconfiguring daemon mgr.compute-0.efvthf on compute-0
Sep 30 17:37:23 compute-0 podman[79798]: 2025-09-30 17:37:23.990857333 +0000 UTC m=+0.102933615 container attach b09f6bae115a8f86c643333098ab47c722e9cb6c014f9bdf28dbd1e1a8f32f32 (image=quay.io/ceph/ceph:v19, name=serene_kowalevski, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Sep 30 17:37:23 compute-0 podman[79798]: 2025-09-30 17:37:23.991193672 +0000 UTC m=+0.103269954 container died b09f6bae115a8f86c643333098ab47c722e9cb6c014f9bdf28dbd1e1a8f32f32 (image=quay.io/ceph/ceph:v19, name=serene_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 17:37:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:37:24 compute-0 podman[79798]: 2025-09-30 17:37:23.911137182 +0000 UTC m=+0.023213474 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:37:24 compute-0 podman[79798]: 2025-09-30 17:37:24.022703343 +0000 UTC m=+0.134779605 container remove b09f6bae115a8f86c643333098ab47c722e9cb6c014f9bdf28dbd1e1a8f32f32 (image=quay.io/ceph/ceph:v19, name=serene_kowalevski, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:37:24 compute-0 systemd[1]: libpod-conmon-b09f6bae115a8f86c643333098ab47c722e9cb6c014f9bdf28dbd1e1a8f32f32.scope: Deactivated successfully.
Sep 30 17:37:24 compute-0 sudo[79721]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:37:24 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:37:24 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:37:24 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:37:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 17:37:24 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:37:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 17:37:24 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:24 compute-0 sudo[79857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 17:37:24 compute-0 sudo[79857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:24 compute-0 sudo[79857]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Sep 30 17:37:24 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1882233538' entity='client.admin' 
Sep 30 17:37:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:37:24 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:37:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 17:37:24 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:37:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 17:37:24 compute-0 systemd[1]: libpod-a096455fef97e2e7dd46e6b606bd372396b9173d8da979a5ee71770bcec6545a.scope: Deactivated successfully.
Sep 30 17:37:24 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:24 compute-0 podman[79884]: 2025-09-30 17:37:24.373120391 +0000 UTC m=+0.029949823 container died a096455fef97e2e7dd46e6b606bd372396b9173d8da979a5ee71770bcec6545a (image=quay.io/ceph/ceph:v19, name=pensive_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:37:24 compute-0 sudo[79885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 17:37:24 compute-0 sudo[79885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:24 compute-0 sudo[79885]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-5742baa309ec3a72c5ad57887eed4c9836d727f2f4253f64d0afd9d201f450bc-merged.mount: Deactivated successfully.
Sep 30 17:37:24 compute-0 podman[79884]: 2025-09-30 17:37:24.409149736 +0000 UTC m=+0.065979148 container remove a096455fef97e2e7dd46e6b606bd372396b9173d8da979a5ee71770bcec6545a (image=quay.io/ceph/ceph:v19, name=pensive_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:37:24 compute-0 systemd[1]: libpod-conmon-a096455fef97e2e7dd46e6b606bd372396b9173d8da979a5ee71770bcec6545a.scope: Deactivated successfully.
Sep 30 17:37:24 compute-0 sudo[79769]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:24 compute-0 sudo[79947]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umozozxgotcauueshfbvazhrytmnimph ; /usr/bin/python3'
Sep 30 17:37:24 compute-0 sudo[79947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:37:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:24 compute-0 python3[79949]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
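The task above runs the ceph CLI in a throwaway container to raise the cluster's minimum client compatibility level; the confirmation "set require_min_compat_client to mimic" appears in the container output a few lines below. A minimal sketch for re-checking the setting afterwards, assuming the same image, conf, and keyring paths as in the log:

  podman run --rm --net=host --volume /etc/ceph:/etc/ceph:z \
    --entrypoint ceph quay.io/ceph/ceph:v19 \
    -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
    osd dump | grep require_min_compat_client
  # expected once the change has applied: require_min_compat_client mimic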
Sep 30 17:37:24 compute-0 podman[79950]: 2025-09-30 17:37:24.811052677 +0000 UTC m=+0.054103180 container create 1354821337c91dba3c66a37977f11271e78488bb2a26c39d57fe5c72ce20f8c6 (image=quay.io/ceph/ceph:v19, name=pedantic_wescoff, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:37:24 compute-0 systemd[1]: Started libpod-conmon-1354821337c91dba3c66a37977f11271e78488bb2a26c39d57fe5c72ce20f8c6.scope.
Sep 30 17:37:24 compute-0 podman[79950]: 2025-09-30 17:37:24.779461474 +0000 UTC m=+0.022511997 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:37:24 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:37:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/388eca4123e71ebf7456883334d9304c277bf28dcb53ec2efc270d354be034ef/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/388eca4123e71ebf7456883334d9304c277bf28dcb53ec2efc270d354be034ef/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/388eca4123e71ebf7456883334d9304c277bf28dcb53ec2efc270d354be034ef/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:24 compute-0 podman[79950]: 2025-09-30 17:37:24.931593473 +0000 UTC m=+0.174644036 container init 1354821337c91dba3c66a37977f11271e78488bb2a26c39d57fe5c72ce20f8c6 (image=quay.io/ceph/ceph:v19, name=pedantic_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True)
Sep 30 17:37:24 compute-0 podman[79950]: 2025-09-30 17:37:24.93944558 +0000 UTC m=+0.182496093 container start 1354821337c91dba3c66a37977f11271e78488bb2a26c39d57fe5c72ce20f8c6 (image=quay.io/ceph/ceph:v19, name=pedantic_wescoff, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Sep 30 17:37:24 compute-0 podman[79950]: 2025-09-30 17:37:24.950011116 +0000 UTC m=+0.193061639 container attach 1354821337c91dba3c66a37977f11271e78488bb2a26c39d57fe5c72ce20f8c6 (image=quay.io/ceph/ceph:v19, name=pedantic_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:37:25 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:25 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:25 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:37:25 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:37:25 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:25 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1882233538' entity='client.admin' 
Sep 30 17:37:25 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:37:25 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:37:25 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Sep 30 17:37:25 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1376759335' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Sep 30 17:37:25 compute-0 ceph-mgr[74051]: [progress INFO root] Writing back 1 completed events
Sep 30 17:37:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Sep 30 17:37:25 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Sep 30 17:37:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Sep 30 17:37:26 compute-0 ceph-mon[73755]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:26 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1376759335' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Sep 30 17:37:26 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:26 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1376759335' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Sep 30 17:37:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Sep 30 17:37:26 compute-0 pedantic_wescoff[79965]: set require_min_compat_client to mimic
Sep 30 17:37:26 compute-0 systemd[1]: libpod-1354821337c91dba3c66a37977f11271e78488bb2a26c39d57fe5c72ce20f8c6.scope: Deactivated successfully.
Sep 30 17:37:26 compute-0 podman[79950]: 2025-09-30 17:37:26.419278245 +0000 UTC m=+1.662328838 container died 1354821337c91dba3c66a37977f11271e78488bb2a26c39d57fe5c72ce20f8c6 (image=quay.io/ceph/ceph:v19, name=pedantic_wescoff, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid)
Sep 30 17:37:26 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Sep 30 17:37:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-388eca4123e71ebf7456883334d9304c277bf28dcb53ec2efc270d354be034ef-merged.mount: Deactivated successfully.
Sep 30 17:37:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:26 compute-0 podman[79950]: 2025-09-30 17:37:26.721247516 +0000 UTC m=+1.964298019 container remove 1354821337c91dba3c66a37977f11271e78488bb2a26c39d57fe5c72ce20f8c6 (image=quay.io/ceph/ceph:v19, name=pedantic_wescoff, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Sep 30 17:37:26 compute-0 sudo[79947]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:26 compute-0 systemd[1]: libpod-conmon-1354821337c91dba3c66a37977f11271e78488bb2a26c39d57fe5c72ce20f8c6.scope: Deactivated successfully.
Sep 30 17:37:27 compute-0 sudo[80025]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkphmzbljlbdfycoonjeiyggeodpedmn ; /usr/bin/python3'
Sep 30 17:37:27 compute-0 sudo[80025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:37:27 compute-0 python3[80027]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
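This task applies the service specification at /home/ceph-admin/specs/ceph_spec.yaml with "ceph orch apply". The file itself is not shown in the log; judging from the host additions and the "Saving service ... spec with placement compute-0;compute-1" messages that follow, it plausibly resembles the hypothetical sketch below (host names and addresses are taken from the log; the data_devices selector is an assumption):

  # hypothetical ceph_spec.yaml reconstructed from the log, not the actual file
  service_type: host
  hostname: compute-0
  addr: 192.168.122.100
  ---
  service_type: host
  hostname: compute-1
  addr: 192.168.122.101
  ---
  service_type: mon
  placement:
    hosts:
      - compute-0
      - compute-1
  ---
  service_type: mgr
  placement:
    hosts:
      - compute-0
      - compute-1
  ---
  service_type: osd
  service_id: default_drive_group
  placement:
    hosts:
      - compute-0
      - compute-1
  spec:
    data_devices:
      all: true   # assumption; the log does not show which devices the drive group selects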
Sep 30 17:37:27 compute-0 podman[80028]: 2025-09-30 17:37:27.329507948 +0000 UTC m=+0.044631201 container create 1841c2625e16f1f5629d4f431a41bd723dfea5a2d7eb751b9c30f0d1c879bbc5 (image=quay.io/ceph/ceph:v19, name=pensive_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:37:27 compute-0 systemd[1]: Started libpod-conmon-1841c2625e16f1f5629d4f431a41bd723dfea5a2d7eb751b9c30f0d1c879bbc5.scope.
Sep 30 17:37:27 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:37:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4649dd9823cf76eeb7276cd88972f3c65ca642aec1b6b45fa74087692744c174/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4649dd9823cf76eeb7276cd88972f3c65ca642aec1b6b45fa74087692744c174/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4649dd9823cf76eeb7276cd88972f3c65ca642aec1b6b45fa74087692744c174/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:27 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1376759335' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Sep 30 17:37:27 compute-0 ceph-mon[73755]: osdmap e3: 0 total, 0 up, 0 in
Sep 30 17:37:27 compute-0 podman[80028]: 2025-09-30 17:37:27.404783928 +0000 UTC m=+0.119907181 container init 1841c2625e16f1f5629d4f431a41bd723dfea5a2d7eb751b9c30f0d1c879bbc5 (image=quay.io/ceph/ceph:v19, name=pensive_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Sep 30 17:37:27 compute-0 podman[80028]: 2025-09-30 17:37:27.31126885 +0000 UTC m=+0.026392133 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:37:27 compute-0 podman[80028]: 2025-09-30 17:37:27.410955063 +0000 UTC m=+0.126078316 container start 1841c2625e16f1f5629d4f431a41bd723dfea5a2d7eb751b9c30f0d1c879bbc5 (image=quay.io/ceph/ceph:v19, name=pensive_shannon, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:37:27 compute-0 podman[80028]: 2025-09-30 17:37:27.415187949 +0000 UTC m=+0.130311202 container attach 1841c2625e16f1f5629d4f431a41bd723dfea5a2d7eb751b9c30f0d1c879bbc5 (image=quay.io/ceph/ceph:v19, name=pensive_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:37:27 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.14170 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:37:27 compute-0 sudo[80068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:37:27 compute-0 sudo[80068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:27 compute-0 sudo[80068]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:27 compute-0 sudo[80093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Sep 30 17:37:27 compute-0 sudo[80093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:28 compute-0 sudo[80093]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:28 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Sep 30 17:37:28 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:28 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Sep 30 17:37:28 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:28 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Sep 30 17:37:28 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:28 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Sep 30 17:37:28 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:28 compute-0 ceph-mgr[74051]: [cephadm INFO root] Added host compute-0
Sep 30 17:37:28 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Added host compute-0
Sep 30 17:37:28 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:37:28 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:37:28 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 17:37:28 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:37:28 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 17:37:28 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:28 compute-0 sudo[80138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 17:37:28 compute-0 sudo[80138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:37:28 compute-0 sudo[80138]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:28 compute-0 ceph-mon[73755]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:28 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:28 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:28 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:28 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:28 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:37:28 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:37:28 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:37:29 compute-0 ceph-mon[73755]: from='client.14170 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:37:29 compute-0 ceph-mon[73755]: Added host compute-0
Sep 30 17:37:29 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Sep 30 17:37:29 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Sep 30 17:37:30 compute-0 ceph-mon[73755]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:30 compute-0 ceph-mon[73755]: Deploying cephadm binary to compute-1
Sep 30 17:37:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:30 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:37:30 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:37:30 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:37:30 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:37:30 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:37:30 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:37:32 compute-0 ceph-mon[73755]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Sep 30 17:37:33 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:33 compute-0 ceph-mgr[74051]: [cephadm INFO root] Added host compute-1
Sep 30 17:37:33 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Added host compute-1
Sep 30 17:37:33 compute-0 ceph-mgr[74051]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1
Sep 30 17:37:33 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1
Sep 30 17:37:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Sep 30 17:37:33 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:33 compute-0 ceph-mgr[74051]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1
Sep 30 17:37:33 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1
Sep 30 17:37:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Sep 30 17:37:33 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:33 compute-0 ceph-mgr[74051]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Sep 30 17:37:33 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Sep 30 17:37:33 compute-0 ceph-mgr[74051]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1
Sep 30 17:37:33 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1
Sep 30 17:37:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Sep 30 17:37:33 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:33 compute-0 pensive_shannon[80044]: Added host 'compute-0' with addr '192.168.122.100'
Sep 30 17:37:33 compute-0 pensive_shannon[80044]: Added host 'compute-1' with addr '192.168.122.101'
Sep 30 17:37:33 compute-0 pensive_shannon[80044]: Scheduled mon update...
Sep 30 17:37:33 compute-0 pensive_shannon[80044]: Scheduled mgr update...
Sep 30 17:37:33 compute-0 pensive_shannon[80044]: Scheduled osd.default_drive_group update...
Sep 30 17:37:33 compute-0 systemd[1]: libpod-1841c2625e16f1f5629d4f431a41bd723dfea5a2d7eb751b9c30f0d1c879bbc5.scope: Deactivated successfully.
Sep 30 17:37:33 compute-0 podman[80028]: 2025-09-30 17:37:33.68468027 +0000 UTC m=+6.399803523 container died 1841c2625e16f1f5629d4f431a41bd723dfea5a2d7eb751b9c30f0d1c879bbc5 (image=quay.io/ceph/ceph:v19, name=pensive_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Sep 30 17:37:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-4649dd9823cf76eeb7276cd88972f3c65ca642aec1b6b45fa74087692744c174-merged.mount: Deactivated successfully.
Sep 30 17:37:33 compute-0 podman[80028]: 2025-09-30 17:37:33.718048408 +0000 UTC m=+6.433171661 container remove 1841c2625e16f1f5629d4f431a41bd723dfea5a2d7eb751b9c30f0d1c879bbc5 (image=quay.io/ceph/ceph:v19, name=pensive_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default)
Sep 30 17:37:33 compute-0 systemd[1]: libpod-conmon-1841c2625e16f1f5629d4f431a41bd723dfea5a2d7eb751b9c30f0d1c879bbc5.scope: Deactivated successfully.
Sep 30 17:37:33 compute-0 sudo[80025]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:33 compute-0 sudo[80198]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gywpyutfstecjhnunfbrbbmisugfaoll ; /usr/bin/python3'
Sep 30 17:37:34 compute-0 sudo[80198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:37:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:37:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:37:34 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:34 compute-0 python3[80200]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:37:34 compute-0 podman[80202]: 2025-09-30 17:37:34.217997801 +0000 UTC m=+0.068847620 container create 938710dff9365643a3a34f42ce1bdf079192ec7c6c7c9b180032516c21f924bb (image=quay.io/ceph/ceph:v19, name=boring_edison, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325)
Sep 30 17:37:34 compute-0 systemd[1]: Started libpod-conmon-938710dff9365643a3a34f42ce1bdf079192ec7c6c7c9b180032516c21f924bb.scope.
Sep 30 17:37:34 compute-0 podman[80202]: 2025-09-30 17:37:34.169537294 +0000 UTC m=+0.020387143 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:37:34 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:37:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12372dfadf8d51c83e5091795149a26bfe2aeabfa1780230d452e7f6455d12b0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12372dfadf8d51c83e5091795149a26bfe2aeabfa1780230d452e7f6455d12b0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12372dfadf8d51c83e5091795149a26bfe2aeabfa1780230d452e7f6455d12b0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 17:37:34 compute-0 podman[80202]: 2025-09-30 17:37:34.31474646 +0000 UTC m=+0.165596299 container init 938710dff9365643a3a34f42ce1bdf079192ec7c6c7c9b180032516c21f924bb (image=quay.io/ceph/ceph:v19, name=boring_edison, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:37:34 compute-0 podman[80202]: 2025-09-30 17:37:34.32154522 +0000 UTC m=+0.172395039 container start 938710dff9365643a3a34f42ce1bdf079192ec7c6c7c9b180032516c21f924bb (image=quay.io/ceph/ceph:v19, name=boring_edison, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:37:34 compute-0 podman[80202]: 2025-09-30 17:37:34.333118891 +0000 UTC m=+0.183968730 container attach 938710dff9365643a3a34f42ce1bdf079192ec7c6c7c9b180032516c21f924bb (image=quay.io/ceph/ceph:v19, name=boring_edison, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Sep 30 17:37:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:37:34 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:34 compute-0 ceph-mon[73755]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:34 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:34 compute-0 ceph-mon[73755]: Added host compute-1
Sep 30 17:37:34 compute-0 ceph-mon[73755]: Saving service mon spec with placement compute-0;compute-1
Sep 30 17:37:34 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:34 compute-0 ceph-mon[73755]: Saving service mgr spec with placement compute-0;compute-1
Sep 30 17:37:34 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:34 compute-0 ceph-mon[73755]: Marking host: compute-0 for OSDSpec preview refresh.
Sep 30 17:37:34 compute-0 ceph-mon[73755]: Saving service osd.default_drive_group spec with placement compute-0;compute-1
Sep 30 17:37:34 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:34 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:34 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Sep 30 17:37:34 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/303758606' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Sep 30 17:37:34 compute-0 boring_edison[80220]: 
Sep 30 17:37:34 compute-0 boring_edison[80220]: {"fsid":"63d32c6a-fa18-54ed-8711-9a3915cc367b","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":59,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-09-30T17:36:27:095553+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-09-30T17:36:27.098259+0000","services":{}},"progress_events":{}}
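The command invoked at 17:37:34 pipes this status JSON through jq to read .osdmap.num_up_osds; against the output above the filter yields 0, consistent with the TOO_FEW_OSDS health warning (no OSDs deployed yet). Equivalent extraction against a saved copy of the JSON (file name is illustrative):

  jq .osdmap.num_up_osds status.json
  # -> 0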
Sep 30 17:37:34 compute-0 systemd[1]: libpod-938710dff9365643a3a34f42ce1bdf079192ec7c6c7c9b180032516c21f924bb.scope: Deactivated successfully.
Sep 30 17:37:34 compute-0 podman[80202]: 2025-09-30 17:37:34.837775522 +0000 UTC m=+0.688625341 container died 938710dff9365643a3a34f42ce1bdf079192ec7c6c7c9b180032516c21f924bb (image=quay.io/ceph/ceph:v19, name=boring_edison, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:37:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-12372dfadf8d51c83e5091795149a26bfe2aeabfa1780230d452e7f6455d12b0-merged.mount: Deactivated successfully.
Sep 30 17:37:35 compute-0 podman[80202]: 2025-09-30 17:37:35.005292768 +0000 UTC m=+0.856142587 container remove 938710dff9365643a3a34f42ce1bdf079192ec7c6c7c9b180032516c21f924bb (image=quay.io/ceph/ceph:v19, name=boring_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:37:35 compute-0 systemd[1]: libpod-conmon-938710dff9365643a3a34f42ce1bdf079192ec7c6c7c9b180032516c21f924bb.scope: Deactivated successfully.
Sep 30 17:37:35 compute-0 sudo[80198]: pam_unix(sudo:session): session closed for user root
Sep 30 17:37:35 compute-0 sshd-session[80204]: Invalid user ftp_client from 14.103.233.27 port 59026
Sep 30 17:37:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:37:35 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/303758606' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Sep 30 17:37:35 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:36 compute-0 ceph-mon[73755]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:36 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:37 compute-0 ceph-mon[73755]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:37:39 compute-0 ceph-mon[73755]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:41 compute-0 ceph-mon[73755]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:43 compute-0 ceph-mon[73755]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:37:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:45 compute-0 ceph-mon[73755]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:47 compute-0 ceph-mon[73755]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:37:49 compute-0 ceph-mon[73755]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:51 compute-0 ceph-mon[73755]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:53 compute-0 ceph-mon[73755]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:53 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:37:53 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:53 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:37:53 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:53 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:37:53 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:53 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:37:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:37:54 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Sep 30 17:37:54 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Sep 30 17:37:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:37:54 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:37:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 17:37:54 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:37:54 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Sep 30 17:37:54 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Sep 30 17:37:54 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf
Sep 30 17:37:54 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf
Sep 30 17:37:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:55 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:55 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:55 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:55 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:55 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Sep 30 17:37:55 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:37:55 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:37:55 compute-0 ceph-mon[73755]: Updating compute-1:/etc/ceph/ceph.conf
Sep 30 17:37:55 compute-0 ceph-mon[73755]: Updating compute-1:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf
Sep 30 17:37:55 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Sep 30 17:37:55 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Sep 30 17:37:55 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring
Sep 30 17:37:55 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring
Sep 30 17:37:56 compute-0 ceph-mon[73755]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:56 compute-0 ceph-mon[73755]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Sep 30 17:37:56 compute-0 ceph-mon[73755]: Updating compute-1:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring
Sep 30 17:37:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:37:56 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:37:56 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 17:37:56 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:56 compute-0 ceph-mgr[74051]: [progress INFO root] update: starting ev af8398a5-5bed-40aa-926f-44983f7b70a9 (Updating mon deployment (+1 -> 2))
Sep 30 17:37:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Sep 30 17:37:56 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Sep 30 17:37:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Sep 30 17:37:56 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Sep 30 17:37:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:37:56 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:37:56 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Sep 30 17:37:56 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Sep 30 17:37:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:57 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:57 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:57 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:57 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Sep 30 17:37:57 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Sep 30 17:37:57 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:37:57 compute-0 ceph-mon[73755]: Deploying daemon mon.compute-1 on compute-1
Sep 30 17:37:58 compute-0 ceph-mon[73755]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:37:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:37:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:37:59 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:37:59 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Sep 30 17:37:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Sep 30 17:37:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Sep 30 17:37:59 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:59 compute-0 ceph-mgr[74051]: [progress INFO root] complete: finished ev af8398a5-5bed-40aa-926f-44983f7b70a9 (Updating mon deployment (+1 -> 2))
Sep 30 17:37:59 compute-0 ceph-mgr[74051]: [progress INFO root] Completed event af8398a5-5bed-40aa-926f-44983f7b70a9 (Updating mon deployment (+1 -> 2)) in 3 seconds
Sep 30 17:37:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Sep 30 17:37:59 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:37:59 compute-0 ceph-mgr[74051]: [progress INFO root] update: starting ev dd649570-1fae-499e-91d6-f3315134d650 (Updating mgr deployment (+1 -> 2))
Sep 30 17:37:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.glbusf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Sep 30 17:37:59 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.glbusf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Sep 30 17:37:59 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.glbusf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Sep 30 17:37:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Sep 30 17:37:59 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 17:37:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:37:59 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:37:59 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.glbusf on compute-1
Sep 30 17:37:59 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.glbusf on compute-1
Sep 30 17:37:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Sep 30 17:37:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).monmap v1 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Sep 30 17:37:59 compute-0 ceph-mgr[74051]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1710035670; not ready for session (expect reconnect)
Sep 30 17:37:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Sep 30 17:37:59 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 17:37:59 compute-0 ceph-mgr[74051]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Sep 30 17:37:59 compute-0 ceph-mon[73755]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Sep 30 17:37:59 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Sep 30 17:37:59 compute-0 ceph-mon[73755]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Sep 30 17:37:59 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 17:37:59 compute-0 ceph-mon[73755]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Sep 30 17:37:59 compute-0 ceph-mon[73755]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Sep 30 17:37:59 compute-0 ceph-mgr[74051]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Sep 30 17:37:59 compute-0 ceph-mon[73755]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Sep 30 17:38:00 compute-0 ceph-mgr[74051]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1710035670; not ready for session (expect reconnect)
Sep 30 17:38:00 compute-0 ceph-mon[73755]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Sep 30 17:38:00 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 17:38:00 compute-0 ceph-mgr[74051]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Sep 30 17:38:00 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_17:38:00
Sep 30 17:38:00 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 17:38:00 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 17:38:00 compute-0 ceph-mgr[74051]: [balancer INFO root] No pools available
Sep 30 17:38:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:38:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 17:38:00 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 17:38:00 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:38:00 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:38:00 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 17:38:00 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:38:00 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:38:00 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:38:00 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:38:00 compute-0 ceph-mgr[74051]: [progress INFO root] Writing back 2 completed events
Sep 30 17:38:00 compute-0 ceph-mon[73755]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Sep 30 17:38:01 compute-0 ceph-mon[73755]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Sep 30 17:38:01 compute-0 ceph-mon[73755]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:38:01 compute-0 ceph-mon[73755]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Sep 30 17:38:01 compute-0 ceph-mgr[74051]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1710035670; not ready for session (expect reconnect)
Sep 30 17:38:01 compute-0 ceph-mon[73755]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Sep 30 17:38:01 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 17:38:01 compute-0 ceph-mgr[74051]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Sep 30 17:38:01 compute-0 ceph-mon[73755]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Sep 30 17:38:02 compute-0 ceph-mon[73755]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Sep 30 17:38:02 compute-0 ceph-mgr[74051]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1710035670; not ready for session (expect reconnect)
Sep 30 17:38:02 compute-0 ceph-mon[73755]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Sep 30 17:38:02 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 17:38:02 compute-0 ceph-mgr[74051]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Sep 30 17:38:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:38:03 compute-0 ceph-mgr[74051]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1710035670; not ready for session (expect reconnect)
Sep 30 17:38:03 compute-0 ceph-mon[73755]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Sep 30 17:38:03 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 17:38:03 compute-0 ceph-mgr[74051]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Sep 30 17:38:04 compute-0 ceph-mon[73755]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Sep 30 17:38:04 compute-0 ceph-mon[73755]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Sep 30 17:38:04 compute-0 ceph-mgr[74051]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1710035670; not ready for session (expect reconnect)
Sep 30 17:38:04 compute-0 ceph-mon[73755]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Sep 30 17:38:04 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 17:38:04 compute-0 ceph-mgr[74051]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Sep 30 17:38:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:38:04 compute-0 ceph-mon[73755]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Sep 30 17:38:04 compute-0 ceph-mon[73755]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Sep 30 17:38:04 compute-0 ceph-mon[73755]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-1 in quorum (ranks 0,1)
Sep 30 17:38:04 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : monmap epoch 2
Sep 30 17:38:04 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:38:04 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : last_changed 2025-09-30T17:37:59.709954+0000
Sep 30 17:38:04 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : created 2025-09-30T17:36:25.121133+0000
Sep 30 17:38:04 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Sep 30 17:38:04 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : election_strategy: 1
Sep 30 17:38:04 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Sep 30 17:38:04 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Sep 30 17:38:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Sep 30 17:38:04 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : fsmap 
Sep 30 17:38:04 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Sep 30 17:38:04 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.efvthf(active, since 64s)
Sep 30 17:38:04 compute-0 ceph-mon[73755]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN OSD count 0 < osd_pool_default_size 1
Sep 30 17:38:04 compute-0 ceph-mon[73755]: log_channel(cluster) log [WRN] : [WRN] TOO_FEW_OSDS: OSD count 0 < osd_pool_default_size 1
Sep 30 17:38:04 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:04 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:38:04 compute-0 ceph-mon[73755]: Deploying daemon mgr.compute-1.glbusf on compute-1
Sep 30 17:38:04 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Sep 30 17:38:04 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 17:38:04 compute-0 ceph-mon[73755]: mon.compute-0 calling monitor election
Sep 30 17:38:04 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 17:38:04 compute-0 ceph-mon[73755]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:38:04 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 17:38:04 compute-0 ceph-mon[73755]: mon.compute-1 calling monitor election
Sep 30 17:38:04 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 17:38:04 compute-0 ceph-mon[73755]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:38:04 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 17:38:04 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 17:38:04 compute-0 ceph-mon[73755]: mon.compute-0 is new leader, mons compute-0,compute-1 in quorum (ranks 0,1)
Sep 30 17:38:04 compute-0 ceph-mon[73755]: monmap epoch 2
Sep 30 17:38:04 compute-0 ceph-mon[73755]: fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:38:04 compute-0 ceph-mon[73755]: last_changed 2025-09-30T17:37:59.709954+0000
Sep 30 17:38:04 compute-0 ceph-mon[73755]: created 2025-09-30T17:36:25.121133+0000
Sep 30 17:38:04 compute-0 ceph-mon[73755]: min_mon_release 19 (squid)
Sep 30 17:38:04 compute-0 ceph-mon[73755]: election_strategy: 1
Sep 30 17:38:04 compute-0 ceph-mon[73755]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Sep 30 17:38:04 compute-0 ceph-mon[73755]: 1: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Sep 30 17:38:04 compute-0 ceph-mon[73755]: fsmap 
Sep 30 17:38:04 compute-0 ceph-mon[73755]: osdmap e3: 0 total, 0 up, 0 in
Sep 30 17:38:04 compute-0 ceph-mon[73755]: mgrmap e7: compute-0.efvthf(active, since 64s)
Sep 30 17:38:04 compute-0 ceph-mon[73755]: Health detail: HEALTH_WARN OSD count 0 < osd_pool_default_size 1
Sep 30 17:38:04 compute-0 ceph-mon[73755]: [WRN] TOO_FEW_OSDS: OSD count 0 < osd_pool_default_size 1
Sep 30 17:38:04 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:04 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Sep 30 17:38:04 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:04 compute-0 ceph-mgr[74051]: [progress INFO root] complete: finished ev dd649570-1fae-499e-91d6-f3315134d650 (Updating mgr deployment (+1 -> 2))
Sep 30 17:38:04 compute-0 ceph-mgr[74051]: [progress INFO root] Completed event dd649570-1fae-499e-91d6-f3315134d650 (Updating mgr deployment (+1 -> 2)) in 5 seconds
Sep 30 17:38:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Sep 30 17:38:05 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:05 compute-0 ceph-mgr[74051]: [progress INFO root] update: starting ev 13d94276-5d13-453b-9a27-7000ebfbb91a (Updating crash deployment (+1 -> 2))
Sep 30 17:38:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Sep 30 17:38:05 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Sep 30 17:38:05 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Sep 30 17:38:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:38:05 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:38:05 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Sep 30 17:38:05 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Sep 30 17:38:05 compute-0 sudo[80280]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odohtobfcxxksjfsstbplaccqwdizlse ; /usr/bin/python3'
Sep 30 17:38:05 compute-0 sudo[80280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:38:05 compute-0 python3[80282]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:38:05 compute-0 podman[80284]: 2025-09-30 17:38:05.355372348 +0000 UTC m=+0.087819238 container create c3cf408c86a644e6e03df483ce067fe7aed7cd01ecc79d811d5b4df909e392d8 (image=quay.io/ceph/ceph:v19, name=exciting_black, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True)
Sep 30 17:38:05 compute-0 podman[80284]: 2025-09-30 17:38:05.28880976 +0000 UTC m=+0.021256680 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:38:05 compute-0 systemd[1]: Started libpod-conmon-c3cf408c86a644e6e03df483ce067fe7aed7cd01ecc79d811d5b4df909e392d8.scope.
Sep 30 17:38:05 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c92667c2e437bb31dde261d12c39ed37fd2fa5fc626eb7c04e2ed1cf0d59595e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c92667c2e437bb31dde261d12c39ed37fd2fa5fc626eb7c04e2ed1cf0d59595e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c92667c2e437bb31dde261d12c39ed37fd2fa5fc626eb7c04e2ed1cf0d59595e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:05 compute-0 podman[80284]: 2025-09-30 17:38:05.520716231 +0000 UTC m=+0.253163121 container init c3cf408c86a644e6e03df483ce067fe7aed7cd01ecc79d811d5b4df909e392d8 (image=quay.io/ceph/ceph:v19, name=exciting_black, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Sep 30 17:38:05 compute-0 podman[80284]: 2025-09-30 17:38:05.53406 +0000 UTC m=+0.266506920 container start c3cf408c86a644e6e03df483ce067fe7aed7cd01ecc79d811d5b4df909e392d8 (image=quay.io/ceph/ceph:v19, name=exciting_black, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:38:05 compute-0 podman[80284]: 2025-09-30 17:38:05.586735576 +0000 UTC m=+0.319182506 container attach c3cf408c86a644e6e03df483ce067fe7aed7cd01ecc79d811d5b4df909e392d8 (image=quay.io/ceph/ceph:v19, name=exciting_black, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 17:38:05 compute-0 ceph-mgr[74051]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1710035670; not ready for session (expect reconnect)
Sep 30 17:38:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Sep 30 17:38:05 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 17:38:05 compute-0 ceph-mon[73755]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:38:05 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:05 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:05 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:05 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:05 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Sep 30 17:38:05 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Sep 30 17:38:05 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:38:05 compute-0 ceph-mon[73755]: Deploying daemon crash.compute-1 on compute-1
Sep 30 17:38:05 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 17:38:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Sep 30 17:38:06 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2881542026' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Sep 30 17:38:06 compute-0 exciting_black[80300]: 
Sep 30 17:38:06 compute-0 exciting_black[80300]: {"fsid":"63d32c6a-fa18-54ed-8711-9a3915cc367b","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":10,"quorum":[0,1],"quorum_names":["compute-0","compute-1"],"quorum_age":1,"monmap":{"epoch":2,"min_mon_release_name":"squid","num_mons":2},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-09-30T17:36:27:095553+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-09-30T17:38:02.720729+0000","services":{}},"progress_events":{"dd649570-1fae-499e-91d6-f3315134d650":{"message":"Updating mgr deployment (+1 -> 2) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Sep 30 17:38:06 compute-0 systemd[1]: libpod-c3cf408c86a644e6e03df483ce067fe7aed7cd01ecc79d811d5b4df909e392d8.scope: Deactivated successfully.
Sep 30 17:38:06 compute-0 podman[80284]: 2025-09-30 17:38:06.04981823 +0000 UTC m=+0.782265120 container died c3cf408c86a644e6e03df483ce067fe7aed7cd01ecc79d811d5b4df909e392d8 (image=quay.io/ceph/ceph:v19, name=exciting_black, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:38:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-c92667c2e437bb31dde261d12c39ed37fd2fa5fc626eb7c04e2ed1cf0d59595e-merged.mount: Deactivated successfully.
Sep 30 17:38:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:38:06 compute-0 ceph-mgr[74051]: mgr.server handle_report got status from non-daemon mon.compute-1
Sep 30 17:38:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:38:06.711+0000 7f32797c1640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Sep 30 17:38:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:38:06 compute-0 podman[80284]: 2025-09-30 17:38:06.762276389 +0000 UTC m=+1.494723269 container remove c3cf408c86a644e6e03df483ce067fe7aed7cd01ecc79d811d5b4df909e392d8 (image=quay.io/ceph/ceph:v19, name=exciting_black, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:38:06 compute-0 sudo[80280]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:06 compute-0 systemd[1]: libpod-conmon-c3cf408c86a644e6e03df483ce067fe7aed7cd01ecc79d811d5b4df909e392d8.scope: Deactivated successfully.
Sep 30 17:38:06 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:38:06 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Sep 30 17:38:06 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:06 compute-0 ceph-mgr[74051]: [progress INFO root] complete: finished ev 13d94276-5d13-453b-9a27-7000ebfbb91a (Updating crash deployment (+1 -> 2))
Sep 30 17:38:06 compute-0 ceph-mgr[74051]: [progress INFO root] Completed event 13d94276-5d13-453b-9a27-7000ebfbb91a (Updating crash deployment (+1 -> 2)) in 2 seconds
Sep 30 17:38:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Sep 30 17:38:06 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 17:38:06 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:38:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 17:38:06 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:38:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:38:06 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:38:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 17:38:06 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:38:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:38:06 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:38:06 compute-0 sudo[80339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:38:07 compute-0 sudo[80339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:07 compute-0 sudo[80339]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:07 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2881542026' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Sep 30 17:38:07 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:07 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:07 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:07 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:07 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:38:07 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:38:07 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:38:07 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:38:07 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:38:07 compute-0 sudo[80364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 17:38:07 compute-0 sudo[80364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:07 compute-0 podman[80428]: 2025-09-30 17:38:07.433926833 +0000 UTC m=+0.064288462 container create 046821dd5d3e4a8d81d906a02bafbfd9ef6a034bfc808e72c702632b8b980cb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:38:07 compute-0 podman[80428]: 2025-09-30 17:38:07.390470291 +0000 UTC m=+0.020831930 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:38:07 compute-0 systemd[1]: Started libpod-conmon-046821dd5d3e4a8d81d906a02bafbfd9ef6a034bfc808e72c702632b8b980cb3.scope.
Sep 30 17:38:07 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:07 compute-0 podman[80428]: 2025-09-30 17:38:07.571186024 +0000 UTC m=+0.201547653 container init 046821dd5d3e4a8d81d906a02bafbfd9ef6a034bfc808e72c702632b8b980cb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Sep 30 17:38:07 compute-0 podman[80428]: 2025-09-30 17:38:07.576724614 +0000 UTC m=+0.207086263 container start 046821dd5d3e4a8d81d906a02bafbfd9ef6a034bfc808e72c702632b8b980cb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:38:07 compute-0 unruffled_johnson[80444]: 167 167
Sep 30 17:38:07 compute-0 systemd[1]: libpod-046821dd5d3e4a8d81d906a02bafbfd9ef6a034bfc808e72c702632b8b980cb3.scope: Deactivated successfully.
Sep 30 17:38:07 compute-0 podman[80428]: 2025-09-30 17:38:07.682029425 +0000 UTC m=+0.312391094 container attach 046821dd5d3e4a8d81d906a02bafbfd9ef6a034bfc808e72c702632b8b980cb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Sep 30 17:38:07 compute-0 podman[80428]: 2025-09-30 17:38:07.68382107 +0000 UTC m=+0.314182709 container died 046821dd5d3e4a8d81d906a02bafbfd9ef6a034bfc808e72c702632b8b980cb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_johnson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Sep 30 17:38:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-8653911c0935aa21cbc59d69d2b6a44ada7df5752e5000c9152debe9e3256651-merged.mount: Deactivated successfully.
Sep 30 17:38:08 compute-0 ceph-mon[73755]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:38:08 compute-0 podman[80428]: 2025-09-30 17:38:08.067792888 +0000 UTC m=+0.698154527 container remove 046821dd5d3e4a8d81d906a02bafbfd9ef6a034bfc808e72c702632b8b980cb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:38:08 compute-0 systemd[1]: libpod-conmon-046821dd5d3e4a8d81d906a02bafbfd9ef6a034bfc808e72c702632b8b980cb3.scope: Deactivated successfully.
Sep 30 17:38:08 compute-0 podman[80468]: 2025-09-30 17:38:08.213420001 +0000 UTC m=+0.034662550 container create c72700238209dd298b69ff270ad03b2e861e1ccca17c254db7c38b6c5b035b9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_almeida, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Sep 30 17:38:08 compute-0 systemd[1]: Started libpod-conmon-c72700238209dd298b69ff270ad03b2e861e1ccca17c254db7c38b6c5b035b9d.scope.
Sep 30 17:38:08 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b128d3c6e5691ab4ba94fcce592db427d38274c17a0754c87b421f30a155ba2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b128d3c6e5691ab4ba94fcce592db427d38274c17a0754c87b421f30a155ba2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b128d3c6e5691ab4ba94fcce592db427d38274c17a0754c87b421f30a155ba2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b128d3c6e5691ab4ba94fcce592db427d38274c17a0754c87b421f30a155ba2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b128d3c6e5691ab4ba94fcce592db427d38274c17a0754c87b421f30a155ba2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:08 compute-0 podman[80468]: 2025-09-30 17:38:08.290775093 +0000 UTC m=+0.112017672 container init c72700238209dd298b69ff270ad03b2e861e1ccca17c254db7c38b6c5b035b9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:38:08 compute-0 podman[80468]: 2025-09-30 17:38:08.198028121 +0000 UTC m=+0.019270690 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:38:08 compute-0 podman[80468]: 2025-09-30 17:38:08.29814166 +0000 UTC m=+0.119384209 container start c72700238209dd298b69ff270ad03b2e861e1ccca17c254db7c38b6c5b035b9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:38:08 compute-0 podman[80468]: 2025-09-30 17:38:08.301914146 +0000 UTC m=+0.123156715 container attach c72700238209dd298b69ff270ad03b2e861e1ccca17c254db7c38b6c5b035b9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_almeida, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:38:08 compute-0 affectionate_almeida[80485]: --> passed data devices: 0 physical, 1 LVM
Sep 30 17:38:08 compute-0 affectionate_almeida[80485]: Running command: /usr/bin/ceph-authtool --gen-print-key
Sep 30 17:38:08 compute-0 affectionate_almeida[80485]: Running command: /usr/bin/ceph-authtool --gen-print-key
Sep 30 17:38:08 compute-0 affectionate_almeida[80485]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new a6bb176f-a2ce-4022-8226-399d42b79f3f
Sep 30 17:38:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:38:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:38:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd new", "uuid": "a6bb176f-a2ce-4022-8226-399d42b79f3f"} v 0)
Sep 30 17:38:09 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2393315832' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a6bb176f-a2ce-4022-8226-399d42b79f3f"}]: dispatch
Sep 30 17:38:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Sep 30 17:38:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Sep 30 17:38:09 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2393315832' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a6bb176f-a2ce-4022-8226-399d42b79f3f"}]': finished
Sep 30 17:38:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Sep 30 17:38:09 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Sep 30 17:38:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 17:38:09 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:09 compute-0 ceph-mgr[74051]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Sep 30 17:38:09 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2393315832' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a6bb176f-a2ce-4022-8226-399d42b79f3f"}]: dispatch
Sep 30 17:38:09 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2393315832' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a6bb176f-a2ce-4022-8226-399d42b79f3f"}]': finished
Sep 30 17:38:09 compute-0 ceph-mon[73755]: osdmap e4: 1 total, 0 up, 1 in
Sep 30 17:38:09 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:09 compute-0 affectionate_almeida[80485]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Sep 30 17:38:09 compute-0 affectionate_almeida[80485]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Sep 30 17:38:09 compute-0 affectionate_almeida[80485]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Sep 30 17:38:09 compute-0 affectionate_almeida[80485]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Sep 30 17:38:09 compute-0 lvm[80546]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 17:38:09 compute-0 lvm[80546]: VG ceph_vg0 finished
Sep 30 17:38:09 compute-0 affectionate_almeida[80485]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Sep 30 17:38:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd new", "uuid": "46225d03-2655-4a6e-a371-e1f19c340cf7"} v 0)
Sep 30 17:38:09 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/891903981' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "46225d03-2655-4a6e-a371-e1f19c340cf7"}]: dispatch
Sep 30 17:38:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Sep 30 17:38:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Sep 30 17:38:09 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/891903981' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "46225d03-2655-4a6e-a371-e1f19c340cf7"}]': finished
Sep 30 17:38:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Sep 30 17:38:09 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Sep 30 17:38:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 17:38:09 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 17:38:09 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 17:38:09 compute-0 ceph-mgr[74051]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Sep 30 17:38:09 compute-0 ceph-mgr[74051]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Sep 30 17:38:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon getmap"} v 0)
Sep 30 17:38:09 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2319601444' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Sep 30 17:38:09 compute-0 affectionate_almeida[80485]:  stderr: got monmap epoch 2
Sep 30 17:38:09 compute-0 affectionate_almeida[80485]: --> Creating keyring file for osd.0
Sep 30 17:38:09 compute-0 affectionate_almeida[80485]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Sep 30 17:38:09 compute-0 affectionate_almeida[80485]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Sep 30 17:38:09 compute-0 affectionate_almeida[80485]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid a6bb176f-a2ce-4022-8226-399d42b79f3f --setuser ceph --setgroup ceph
Sep 30 17:38:09 compute-0 ceph-mgr[74051]: [progress INFO root] Writing back 4 completed events
Sep 30 17:38:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Sep 30 17:38:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon getmap"} v 0)
Sep 30 17:38:09 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4199176611' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Sep 30 17:38:10 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:10 compute-0 ceph-mon[73755]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:38:10 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/891903981' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "46225d03-2655-4a6e-a371-e1f19c340cf7"}]: dispatch
Sep 30 17:38:10 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/891903981' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "46225d03-2655-4a6e-a371-e1f19c340cf7"}]': finished
Sep 30 17:38:10 compute-0 ceph-mon[73755]: osdmap e5: 2 total, 0 up, 2 in
Sep 30 17:38:10 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:10 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 17:38:10 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2319601444' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Sep 30 17:38:10 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/4199176611' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Sep 30 17:38:10 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:38:11 compute-0 ceph-mon[73755]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Sep 30 17:38:11 compute-0 ceph-mon[73755]: log_channel(cluster) log [INF] : Cluster is now healthy
Sep 30 17:38:11 compute-0 ceph-mon[73755]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Sep 30 17:38:11 compute-0 ceph-mon[73755]: Cluster is now healthy
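The TOO_FEW_OSDS warning clears here because the first OSDs are now registered in the osdmap. A minimal sketch of watching for that transition from the host, assuming the ceph CLI and an admin keyring are available (the polling loop is illustrative and not part of cephadm; older releases return the health status as a bare string, so the parsing is defensive):

#!/usr/bin/env python3
# Hedged sketch: poll `ceph health` until HEALTH_OK, matching the transition
# logged above when TOO_FEW_OSDS clears. Assumes a working `ceph` CLI and an
# admin keyring on this host; not cephadm code.
import json
import subprocess
import time

def cluster_health() -> str:
    out = subprocess.run(
        ["ceph", "health", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    data = json.loads(out)
    # Recent releases return an object with a "status" key; very old ones
    # returned a plain string, so handle both defensively.
    return data.get("status", "") if isinstance(data, dict) else str(data)

if __name__ == "__main__":
    while (status := cluster_health()) != "HEALTH_OK":
        print("waiting, current status:", status)
        time.sleep(5)
    print("Cluster is now healthy")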
Sep 30 17:38:11 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.glbusf started
Sep 30 17:38:11 compute-0 ceph-mgr[74051]: mgr.server handle_open ignoring open from mgr.compute-1.glbusf 192.168.122.101:0/2502795071; not ready for session (expect reconnect)
Sep 30 17:38:12 compute-0 ceph-mon[73755]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:38:12 compute-0 ceph-mon[73755]: Standby manager daemon compute-1.glbusf started
Sep 30 17:38:12 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.efvthf(active, since 71s), standbys: compute-1.glbusf
Sep 30 17:38:12 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.glbusf", "id": "compute-1.glbusf"} v 0)
Sep 30 17:38:12 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mgr metadata", "who": "compute-1.glbusf", "id": "compute-1.glbusf"}]: dispatch
Sep 30 17:38:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:38:12 compute-0 affectionate_almeida[80485]:  stderr: 2025-09-30T17:38:09.725+0000 7f81b0fff740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Sep 30 17:38:12 compute-0 affectionate_almeida[80485]:  stderr: 2025-09-30T17:38:09.989+0000 7f81b0fff740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Sep 30 17:38:12 compute-0 affectionate_almeida[80485]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Sep 30 17:38:13 compute-0 affectionate_almeida[80485]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Sep 30 17:38:13 compute-0 affectionate_almeida[80485]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Sep 30 17:38:13 compute-0 affectionate_almeida[80485]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Sep 30 17:38:13 compute-0 affectionate_almeida[80485]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Sep 30 17:38:13 compute-0 affectionate_almeida[80485]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Sep 30 17:38:13 compute-0 affectionate_almeida[80485]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Sep 30 17:38:13 compute-0 affectionate_almeida[80485]: --> ceph-volume lvm activate successful for osd ID: 0
Sep 30 17:38:13 compute-0 affectionate_almeida[80485]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
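ceph-volume has now finished its create flow for osd.0: "osd new" in the monitor, the tmpfs mount of the data directory, the block symlink to /dev/ceph_vg0/ceph_lv0, keyring creation, and the bluestore mkfs. A minimal post-activation sanity check over those same paths, as a sketch only (this is not a ceph-volume or cephadm API; the paths are the ones printed above):

#!/usr/bin/env python3
# Hedged sketch: verify the osd.0 data dir that ceph-volume just activated,
# using the paths from the log above. Not part of ceph-volume itself.
import os
import sys

OSD_DIR = "/var/lib/ceph/osd/ceph-0"
EXPECTED_LV = "/dev/ceph_vg0/ceph_lv0"

def check() -> int:
    block = os.path.join(OSD_DIR, "block")
    keyring = os.path.join(OSD_DIR, "keyring")
    if not os.path.islink(block):
        print("missing block symlink:", block)
        return 1
    target = os.readlink(block)
    if target != EXPECTED_LV:
        print("block points at", target, "expected", EXPECTED_LV)
        return 1
    if not os.path.isfile(keyring):
        print("missing keyring:", keyring)
        return 1
    print("osd.0 data dir is consistent with the ceph-volume output above")
    return 0

if __name__ == "__main__":
    sys.exit(check())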
Sep 30 17:38:13 compute-0 systemd[1]: libpod-c72700238209dd298b69ff270ad03b2e861e1ccca17c254db7c38b6c5b035b9d.scope: Deactivated successfully.
Sep 30 17:38:13 compute-0 systemd[1]: libpod-c72700238209dd298b69ff270ad03b2e861e1ccca17c254db7c38b6c5b035b9d.scope: Consumed 1.877s CPU time.
Sep 30 17:38:13 compute-0 podman[80468]: 2025-09-30 17:38:13.378696716 +0000 UTC m=+5.199939275 container died c72700238209dd298b69ff270ad03b2e861e1ccca17c254db7c38b6c5b035b9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_almeida, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Sep 30 17:38:13 compute-0 ceph-mon[73755]: mgrmap e8: compute-0.efvthf(active, since 71s), standbys: compute-1.glbusf
Sep 30 17:38:13 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mgr metadata", "who": "compute-1.glbusf", "id": "compute-1.glbusf"}]: dispatch
Sep 30 17:38:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b128d3c6e5691ab4ba94fcce592db427d38274c17a0754c87b421f30a155ba2-merged.mount: Deactivated successfully.
Sep 30 17:38:13 compute-0 podman[80468]: 2025-09-30 17:38:13.543879525 +0000 UTC m=+5.365122074 container remove c72700238209dd298b69ff270ad03b2e861e1ccca17c254db7c38b6c5b035b9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Sep 30 17:38:13 compute-0 systemd[1]: libpod-conmon-c72700238209dd298b69ff270ad03b2e861e1ccca17c254db7c38b6c5b035b9d.scope: Deactivated successfully.
Sep 30 17:38:13 compute-0 sudo[80364]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:13 compute-0 sudo[81467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:38:13 compute-0 sudo[81467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:13 compute-0 sudo[81467]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:13 compute-0 sudo[81492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 17:38:13 compute-0 sudo[81492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
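The sudo line above shows how cephadm is being driven here: a plain python3 script run as root, which in turn launches a short-lived container to execute ceph-volume lvm list --format json. A sketch of issuing that same inventory query from a script, with the script path, image digest, fsid and timeout copied verbatim from the logged command (run_cephadm_lvm_list is a hypothetical helper, and it assumes the command's stdout carries only the JSON report shown further below; any extra banner lines would have to be stripped first):

#!/usr/bin/env python3
# Hedged sketch: re-run the cephadm invocation from the sudo log line and
# capture its JSON output. run_cephadm_lvm_list() is a hypothetical helper,
# not a cephadm API; the flags mirror the command logged above.
import json
import subprocess

CEPHADM = "/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36"
IMAGE = "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"
FSID = "63d32c6a-fa18-54ed-8711-9a3915cc367b"

def run_cephadm_lvm_list() -> dict:
    cmd = [
        "sudo", "/bin/python3", CEPHADM,
        "--image", IMAGE,
        "--timeout", "895",
        "ceph-volume", "--fsid", FSID, "--",
        "lvm", "list", "--format", "json",
    ]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    # Assumption: stdout is the bare JSON document; filter it first if the
    # wrapper emits additional log lines.
    return json.loads(out)

if __name__ == "__main__":
    print(json.dumps(run_cephadm_lvm_list(), indent=2))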
Sep 30 17:38:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:38:14 compute-0 podman[81556]: 2025-09-30 17:38:14.025974842 +0000 UTC m=+0.020290026 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:38:14 compute-0 podman[81556]: 2025-09-30 17:38:14.280394714 +0000 UTC m=+0.274709878 container create fd5e431dde78d05faf07581e329f29e3bd9ae3c7b9b6d5cf03a2dd611daafd4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_blackburn, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Sep 30 17:38:14 compute-0 systemd[1]: Started libpod-conmon-fd5e431dde78d05faf07581e329f29e3bd9ae3c7b9b6d5cf03a2dd611daafd4c.scope.
Sep 30 17:38:14 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:14 compute-0 podman[81556]: 2025-09-30 17:38:14.438263068 +0000 UTC m=+0.432578252 container init fd5e431dde78d05faf07581e329f29e3bd9ae3c7b9b6d5cf03a2dd611daafd4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_blackburn, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Sep 30 17:38:14 compute-0 podman[81556]: 2025-09-30 17:38:14.444014204 +0000 UTC m=+0.438329368 container start fd5e431dde78d05faf07581e329f29e3bd9ae3c7b9b6d5cf03a2dd611daafd4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_blackburn, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Sep 30 17:38:14 compute-0 admiring_blackburn[81572]: 167 167
Sep 30 17:38:14 compute-0 systemd[1]: libpod-fd5e431dde78d05faf07581e329f29e3bd9ae3c7b9b6d5cf03a2dd611daafd4c.scope: Deactivated successfully.
Sep 30 17:38:14 compute-0 ceph-mon[73755]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:38:14 compute-0 podman[81556]: 2025-09-30 17:38:14.451904054 +0000 UTC m=+0.446219218 container attach fd5e431dde78d05faf07581e329f29e3bd9ae3c7b9b6d5cf03a2dd611daafd4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_blackburn, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Sep 30 17:38:14 compute-0 podman[81556]: 2025-09-30 17:38:14.452387476 +0000 UTC m=+0.446702640 container died fd5e431dde78d05faf07581e329f29e3bd9ae3c7b9b6d5cf03a2dd611daafd4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_blackburn, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:38:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-78c6779ffe296f9e743c5ff36ffe47060a8ec5682cc2d6eba3b42e33fa542633-merged.mount: Deactivated successfully.
Sep 30 17:38:14 compute-0 podman[81556]: 2025-09-30 17:38:14.489940839 +0000 UTC m=+0.484256003 container remove fd5e431dde78d05faf07581e329f29e3bd9ae3c7b9b6d5cf03a2dd611daafd4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_blackburn, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Sep 30 17:38:14 compute-0 systemd[1]: libpod-conmon-fd5e431dde78d05faf07581e329f29e3bd9ae3c7b9b6d5cf03a2dd611daafd4c.scope: Deactivated successfully.
Sep 30 17:38:14 compute-0 podman[81596]: 2025-09-30 17:38:14.626904202 +0000 UTC m=+0.038036855 container create 5429bfdee954cc7dbdfc08d547b776f7ee923b6d625839d282ad9dd69d2e3561 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_jennings, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:38:14 compute-0 systemd[1]: Started libpod-conmon-5429bfdee954cc7dbdfc08d547b776f7ee923b6d625839d282ad9dd69d2e3561.scope.
Sep 30 17:38:14 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddbb18df962d66e207e45fd4aff02ddca44d8b8ec17c07db1d3dad1b412bd90a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddbb18df962d66e207e45fd4aff02ddca44d8b8ec17c07db1d3dad1b412bd90a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddbb18df962d66e207e45fd4aff02ddca44d8b8ec17c07db1d3dad1b412bd90a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddbb18df962d66e207e45fd4aff02ddca44d8b8ec17c07db1d3dad1b412bd90a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:14 compute-0 podman[81596]: 2025-09-30 17:38:14.61024821 +0000 UTC m=+0.021380903 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:38:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:38:14 compute-0 podman[81596]: 2025-09-30 17:38:14.736080971 +0000 UTC m=+0.147213644 container init 5429bfdee954cc7dbdfc08d547b776f7ee923b6d625839d282ad9dd69d2e3561 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Sep 30 17:38:14 compute-0 podman[81596]: 2025-09-30 17:38:14.742010791 +0000 UTC m=+0.153143454 container start 5429bfdee954cc7dbdfc08d547b776f7ee923b6d625839d282ad9dd69d2e3561 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_jennings, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:38:14 compute-0 podman[81596]: 2025-09-30 17:38:14.783812002 +0000 UTC m=+0.194944665 container attach 5429bfdee954cc7dbdfc08d547b776f7ee923b6d625839d282ad9dd69d2e3561 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_jennings, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:38:15 compute-0 serene_jennings[81612]: {
Sep 30 17:38:15 compute-0 serene_jennings[81612]:     "0": [
Sep 30 17:38:15 compute-0 serene_jennings[81612]:         {
Sep 30 17:38:15 compute-0 serene_jennings[81612]:             "devices": [
Sep 30 17:38:15 compute-0 serene_jennings[81612]:                 "/dev/loop3"
Sep 30 17:38:15 compute-0 serene_jennings[81612]:             ],
Sep 30 17:38:15 compute-0 serene_jennings[81612]:             "lv_name": "ceph_lv0",
Sep 30 17:38:15 compute-0 serene_jennings[81612]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:38:15 compute-0 serene_jennings[81612]:             "lv_size": "21470642176",
Sep 30 17:38:15 compute-0 serene_jennings[81612]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 17:38:15 compute-0 serene_jennings[81612]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:38:15 compute-0 serene_jennings[81612]:             "name": "ceph_lv0",
Sep 30 17:38:15 compute-0 serene_jennings[81612]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:38:15 compute-0 serene_jennings[81612]:             "tags": {
Sep 30 17:38:15 compute-0 serene_jennings[81612]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:38:15 compute-0 serene_jennings[81612]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:38:15 compute-0 serene_jennings[81612]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 17:38:15 compute-0 serene_jennings[81612]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 17:38:15 compute-0 serene_jennings[81612]:                 "ceph.cluster_name": "ceph",
Sep 30 17:38:15 compute-0 serene_jennings[81612]:                 "ceph.crush_device_class": "",
Sep 30 17:38:15 compute-0 serene_jennings[81612]:                 "ceph.encrypted": "0",
Sep 30 17:38:15 compute-0 serene_jennings[81612]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 17:38:15 compute-0 serene_jennings[81612]:                 "ceph.osd_id": "0",
Sep 30 17:38:15 compute-0 serene_jennings[81612]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 17:38:15 compute-0 serene_jennings[81612]:                 "ceph.type": "block",
Sep 30 17:38:15 compute-0 serene_jennings[81612]:                 "ceph.vdo": "0",
Sep 30 17:38:15 compute-0 serene_jennings[81612]:                 "ceph.with_tpm": "0"
Sep 30 17:38:15 compute-0 serene_jennings[81612]:             },
Sep 30 17:38:15 compute-0 serene_jennings[81612]:             "type": "block",
Sep 30 17:38:15 compute-0 serene_jennings[81612]:             "vg_name": "ceph_vg0"
Sep 30 17:38:15 compute-0 serene_jennings[81612]:         }
Sep 30 17:38:15 compute-0 serene_jennings[81612]:     ]
Sep 30 17:38:15 compute-0 serene_jennings[81612]: }
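The lvm list report above is keyed by OSD id ("0"), and each entry carries the LV tags (ceph.osd_fsid, ceph.osd_id, ceph.cluster_fsid, ...) that identify the device to the cluster. A minimal parse of that structure, using only keys visible in the output above (a sketch, not cephadm code):

#!/usr/bin/env python3
# Hedged sketch: pull the interesting fields out of a `ceph-volume lvm list
# --format json` report, using only keys visible in the output logged above.
import json
import sys

def summarize(report: dict) -> list[dict]:
    rows = []
    for osd_id, entries in report.items():
        for entry in entries:
            tags = entry.get("tags", {})
            rows.append({
                "osd_id": osd_id,
                "osd_fsid": tags.get("ceph.osd_fsid"),
                "lv_path": entry.get("lv_path"),
                "devices": entry.get("devices", []),
                "encrypted": tags.get("ceph.encrypted") == "1",
            })
    return rows

if __name__ == "__main__":
    # Expects the JSON document on stdin, e.g. piped from the command above.
    for row in summarize(json.load(sys.stdin)):
        print(row)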
Sep 30 17:38:15 compute-0 systemd[1]: libpod-5429bfdee954cc7dbdfc08d547b776f7ee923b6d625839d282ad9dd69d2e3561.scope: Deactivated successfully.
Sep 30 17:38:15 compute-0 podman[81596]: 2025-09-30 17:38:15.052178518 +0000 UTC m=+0.463311181 container died 5429bfdee954cc7dbdfc08d547b776f7ee923b6d625839d282ad9dd69d2e3561 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:38:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Sep 30 17:38:15 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Sep 30 17:38:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:38:15 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:38:15 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-1
Sep 30 17:38:15 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-1
Sep 30 17:38:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-ddbb18df962d66e207e45fd4aff02ddca44d8b8ec17c07db1d3dad1b412bd90a-merged.mount: Deactivated successfully.
Sep 30 17:38:15 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Sep 30 17:38:15 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:38:16 compute-0 podman[81596]: 2025-09-30 17:38:16.173576895 +0000 UTC m=+1.584709568 container remove 5429bfdee954cc7dbdfc08d547b776f7ee923b6d625839d282ad9dd69d2e3561 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:38:16 compute-0 systemd[1]: libpod-conmon-5429bfdee954cc7dbdfc08d547b776f7ee923b6d625839d282ad9dd69d2e3561.scope: Deactivated successfully.
Sep 30 17:38:16 compute-0 sudo[81492]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Sep 30 17:38:16 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Sep 30 17:38:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:38:16 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:38:16 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Sep 30 17:38:16 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Sep 30 17:38:16 compute-0 sudo[81635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:38:16 compute-0 sudo[81635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:16 compute-0 sudo[81635]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:16 compute-0 sudo[81660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:38:16 compute-0 sudo[81660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:38:16 compute-0 ceph-mon[73755]: pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:38:16 compute-0 ceph-mon[73755]: Deploying daemon osd.1 on compute-1
Sep 30 17:38:16 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Sep 30 17:38:16 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:38:16 compute-0 podman[81727]: 2025-09-30 17:38:16.740210077 +0000 UTC m=+0.022110612 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:38:17 compute-0 podman[81727]: 2025-09-30 17:38:17.084505237 +0000 UTC m=+0.366405782 container create bf73d129efde10d395e522373678536bf4c483377c800befbcb1117c57a5d586 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_wu, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Sep 30 17:38:17 compute-0 systemd[1]: Started libpod-conmon-bf73d129efde10d395e522373678536bf4c483377c800befbcb1117c57a5d586.scope.
Sep 30 17:38:17 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:17 compute-0 podman[81727]: 2025-09-30 17:38:17.596273707 +0000 UTC m=+0.878174232 container init bf73d129efde10d395e522373678536bf4c483377c800befbcb1117c57a5d586 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_wu, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:38:17 compute-0 podman[81727]: 2025-09-30 17:38:17.606424684 +0000 UTC m=+0.888325229 container start bf73d129efde10d395e522373678536bf4c483377c800befbcb1117c57a5d586 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_wu, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:38:17 compute-0 nostalgic_wu[81743]: 167 167
Sep 30 17:38:17 compute-0 systemd[1]: libpod-bf73d129efde10d395e522373678536bf4c483377c800befbcb1117c57a5d586.scope: Deactivated successfully.
Sep 30 17:38:17 compute-0 podman[81727]: 2025-09-30 17:38:17.707403085 +0000 UTC m=+0.989303650 container attach bf73d129efde10d395e522373678536bf4c483377c800befbcb1117c57a5d586 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_wu, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Sep 30 17:38:17 compute-0 podman[81727]: 2025-09-30 17:38:17.708592995 +0000 UTC m=+0.990493560 container died bf73d129efde10d395e522373678536bf4c483377c800befbcb1117c57a5d586 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_wu, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:38:17 compute-0 ceph-mon[73755]: Deploying daemon osd.0 on compute-0
Sep 30 17:38:17 compute-0 ceph-mon[73755]: pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:38:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0fbbf1fcc03d9d0ac24f7df5d921435be2984fd3554e7a88d3585d507064595-merged.mount: Deactivated successfully.
Sep 30 17:38:18 compute-0 podman[81727]: 2025-09-30 17:38:18.186633719 +0000 UTC m=+1.468534244 container remove bf73d129efde10d395e522373678536bf4c483377c800befbcb1117c57a5d586 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_wu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Sep 30 17:38:18 compute-0 systemd[1]: libpod-conmon-bf73d129efde10d395e522373678536bf4c483377c800befbcb1117c57a5d586.scope: Deactivated successfully.
Sep 30 17:38:18 compute-0 podman[81773]: 2025-09-30 17:38:18.516242948 +0000 UTC m=+0.109559149 container create 237bb08e6cb53522d8c43f036c37d82b4ee12c6ea89e10dda1f15b6ead00eb41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-osd-0-activate-test, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:38:18 compute-0 podman[81773]: 2025-09-30 17:38:18.426322218 +0000 UTC m=+0.019638439 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:38:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:38:18 compute-0 systemd[1]: Started libpod-conmon-237bb08e6cb53522d8c43f036c37d82b4ee12c6ea89e10dda1f15b6ead00eb41.scope.
Sep 30 17:38:18 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be893331f77fdaf314cd0ecdc057cae80b8464b5e82201c8953ec1f2335e7e58/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be893331f77fdaf314cd0ecdc057cae80b8464b5e82201c8953ec1f2335e7e58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be893331f77fdaf314cd0ecdc057cae80b8464b5e82201c8953ec1f2335e7e58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be893331f77fdaf314cd0ecdc057cae80b8464b5e82201c8953ec1f2335e7e58/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be893331f77fdaf314cd0ecdc057cae80b8464b5e82201c8953ec1f2335e7e58/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:38:19 compute-0 podman[81773]: 2025-09-30 17:38:19.207455288 +0000 UTC m=+0.800771519 container init 237bb08e6cb53522d8c43f036c37d82b4ee12c6ea89e10dda1f15b6ead00eb41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-osd-0-activate-test, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Sep 30 17:38:19 compute-0 podman[81773]: 2025-09-30 17:38:19.215783809 +0000 UTC m=+0.809100010 container start 237bb08e6cb53522d8c43f036c37d82b4ee12c6ea89e10dda1f15b6ead00eb41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-osd-0-activate-test, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:38:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-osd-0-activate-test[81791]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Sep 30 17:38:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-osd-0-activate-test[81791]:                             [--no-systemd] [--no-tmpfs]
Sep 30 17:38:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-osd-0-activate-test[81791]: ceph-volume activate: error: unrecognized arguments: --bad-option
Sep 30 17:38:19 compute-0 systemd[1]: libpod-237bb08e6cb53522d8c43f036c37d82b4ee12c6ea89e10dda1f15b6ead00eb41.scope: Deactivated successfully.
Sep 30 17:38:19 compute-0 podman[81773]: 2025-09-30 17:38:19.625710024 +0000 UTC m=+1.219026235 container attach 237bb08e6cb53522d8c43f036c37d82b4ee12c6ea89e10dda1f15b6ead00eb41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:38:19 compute-0 podman[81773]: 2025-09-30 17:38:19.626598456 +0000 UTC m=+1.219914677 container died 237bb08e6cb53522d8c43f036c37d82b4ee12c6ea89e10dda1f15b6ead00eb41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:38:19 compute-0 sshd-session[81787]: Invalid user prashanth from 14.103.233.27 port 50388
Sep 30 17:38:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:38:20 compute-0 sshd-session[81787]: Received disconnect from 14.103.233.27 port 50388:11: Bye Bye [preauth]
Sep 30 17:38:20 compute-0 sshd-session[81787]: Disconnected from invalid user prashanth 14.103.233.27 port 50388 [preauth]
Sep 30 17:38:20 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:38:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-be893331f77fdaf314cd0ecdc057cae80b8464b5e82201c8953ec1f2335e7e58-merged.mount: Deactivated successfully.
Sep 30 17:38:20 compute-0 ceph-mon[73755]: pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:38:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:38:20 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:20 compute-0 podman[81773]: 2025-09-30 17:38:20.848215588 +0000 UTC m=+2.441531789 container remove 237bb08e6cb53522d8c43f036c37d82b4ee12c6ea89e10dda1f15b6ead00eb41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-osd-0-activate-test, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2)
Sep 30 17:38:20 compute-0 systemd[1]: libpod-conmon-237bb08e6cb53522d8c43f036c37d82b4ee12c6ea89e10dda1f15b6ead00eb41.scope: Deactivated successfully.
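The osd-0-activate-test run above feeds ceph-volume activate a deliberately invalid --bad-option; the usage text it prints back lists the activate arguments this build accepts (--osd-id, --osd-uuid, --no-systemd, --no-tmpfs), which is what makes the probe useful before the real activation that follows. A sketch of the same capability probe, assuming ceph-volume is on the PATH (activate_supports is a hypothetical helper, not cephadm's implementation):

#!/usr/bin/env python3
# Hedged sketch: probe which flags `ceph-volume activate` accepts by passing a
# deliberately bad option and reading the argparse usage text, mirroring the
# osd-0-activate-test container in the log. Not cephadm's own code.
import subprocess

def activate_supports(flag: str) -> bool:
    proc = subprocess.run(
        ["ceph-volume", "activate", "--bad-option"],
        capture_output=True, text=True,
    )
    # argparse prints the usage line (listing valid options) followed by the
    # "unrecognized arguments" error, normally on stderr.
    return flag in (proc.stderr + proc.stdout)

if __name__ == "__main__":
    for flag in ("--no-systemd", "--no-tmpfs"):
        print(flag, "supported:", activate_supports(flag))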
Sep 30 17:38:21 compute-0 systemd[1]: Reloading.
Sep 30 17:38:21 compute-0 systemd-rc-local-generator[81851]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:38:21 compute-0 systemd-sysv-generator[81855]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:38:21 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:21 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:21 compute-0 systemd[1]: Reloading.
Sep 30 17:38:21 compute-0 systemd-rc-local-generator[81892]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:38:21 compute-0 systemd-sysv-generator[81895]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:38:22 compute-0 systemd[1]: Starting Ceph osd.0 for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 17:38:22 compute-0 podman[81954]: 2025-09-30 17:38:22.272298964 +0000 UTC m=+0.027522049 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:38:22 compute-0 podman[81954]: 2025-09-30 17:38:22.558773469 +0000 UTC m=+0.313996534 container create d32e6e17ae78c81543bc25ff0e1c61537cbd1a71bb5a12067c9b2bb7eab3398b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-osd-0-activate, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Sep 30 17:38:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:38:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:38:22 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e8733d9ff72fb0344c865346a6c38e3b8932ab48106823f0fa59eb3adadfdfa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e8733d9ff72fb0344c865346a6c38e3b8932ab48106823f0fa59eb3adadfdfa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e8733d9ff72fb0344c865346a6c38e3b8932ab48106823f0fa59eb3adadfdfa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e8733d9ff72fb0344c865346a6c38e3b8932ab48106823f0fa59eb3adadfdfa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e8733d9ff72fb0344c865346a6c38e3b8932ab48106823f0fa59eb3adadfdfa/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:23 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:38:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Sep 30 17:38:23 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Sep 30 17:38:23 compute-0 podman[81954]: 2025-09-30 17:38:23.220764596 +0000 UTC m=+0.975987671 container init d32e6e17ae78c81543bc25ff0e1c61537cbd1a71bb5a12067c9b2bb7eab3398b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Sep 30 17:38:23 compute-0 podman[81954]: 2025-09-30 17:38:23.232615187 +0000 UTC m=+0.987838252 container start d32e6e17ae78c81543bc25ff0e1c61537cbd1a71bb5a12067c9b2bb7eab3398b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-osd-0-activate, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:38:23 compute-0 ceph-mon[73755]: pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:38:23 compute-0 podman[81954]: 2025-09-30 17:38:23.343695565 +0000 UTC m=+1.098918630 container attach d32e6e17ae78c81543bc25ff0e1c61537cbd1a71bb5a12067c9b2bb7eab3398b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-osd-0-activate, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:38:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-osd-0-activate[81970]: Running command: /usr/bin/ceph-authtool --gen-print-key
Sep 30 17:38:23 compute-0 bash[81954]: Running command: /usr/bin/ceph-authtool --gen-print-key
Sep 30 17:38:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-osd-0-activate[81970]: Running command: /usr/bin/ceph-authtool --gen-print-key
Sep 30 17:38:23 compute-0 bash[81954]: Running command: /usr/bin/ceph-authtool --gen-print-key
Sep 30 17:38:23 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:23 compute-0 lvm[82051]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 17:38:23 compute-0 lvm[82051]: VG ceph_vg0 finished
Sep 30 17:38:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-osd-0-activate[81970]: --> Failed to activate via raw: did not find any matching OSD to activate
Sep 30 17:38:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-osd-0-activate[81970]: Running command: /usr/bin/ceph-authtool --gen-print-key
Sep 30 17:38:23 compute-0 bash[81954]: --> Failed to activate via raw: did not find any matching OSD to activate
Sep 30 17:38:23 compute-0 bash[81954]: Running command: /usr/bin/ceph-authtool --gen-print-key
Sep 30 17:38:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-osd-0-activate[81970]: Running command: /usr/bin/ceph-authtool --gen-print-key
Sep 30 17:38:23 compute-0 bash[81954]: Running command: /usr/bin/ceph-authtool --gen-print-key
Sep 30 17:38:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:38:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Sep 30 17:38:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Sep 30 17:38:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-osd-0-activate[81970]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Sep 30 17:38:24 compute-0 bash[81954]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Sep 30 17:38:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-osd-0-activate[81970]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Sep 30 17:38:24 compute-0 bash[81954]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Sep 30 17:38:24 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Sep 30 17:38:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Sep 30 17:38:24 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Sep 30 17:38:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 17:38:24 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 17:38:24 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 17:38:24 compute-0 ceph-mgr[74051]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Sep 30 17:38:24 compute-0 ceph-mgr[74051]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Sep 30 17:38:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]} v 0)
Sep 30 17:38:24 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Sep 30 17:38:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-1,root=default}
Sep 30 17:38:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-osd-0-activate[81970]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Sep 30 17:38:24 compute-0 bash[81954]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Sep 30 17:38:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-osd-0-activate[81970]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Sep 30 17:38:24 compute-0 bash[81954]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Sep 30 17:38:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-osd-0-activate[81970]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Sep 30 17:38:24 compute-0 bash[81954]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Sep 30 17:38:24 compute-0 ceph-mon[73755]: pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:38:24 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:24 compute-0 ceph-mon[73755]: from='osd.1 [v2:192.168.122.101:6800/4235751489,v1:192.168.122.101:6801/4235751489]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Sep 30 17:38:24 compute-0 ceph-mon[73755]: from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Sep 30 17:38:24 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:24 compute-0 ceph-mon[73755]: from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Sep 30 17:38:24 compute-0 ceph-mon[73755]: from='osd.1 [v2:192.168.122.101:6800/4235751489,v1:192.168.122.101:6801/4235751489]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Sep 30 17:38:24 compute-0 ceph-mon[73755]: osdmap e6: 2 total, 0 up, 2 in
Sep 30 17:38:24 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:24 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 17:38:24 compute-0 ceph-mon[73755]: from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Sep 30 17:38:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-osd-0-activate[81970]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Sep 30 17:38:24 compute-0 bash[81954]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Sep 30 17:38:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-osd-0-activate[81970]: --> ceph-volume lvm activate successful for osd ID: 0
Sep 30 17:38:24 compute-0 bash[81954]: --> ceph-volume lvm activate successful for osd ID: 0
Sep 30 17:38:24 compute-0 systemd[1]: libpod-d32e6e17ae78c81543bc25ff0e1c61537cbd1a71bb5a12067c9b2bb7eab3398b.scope: Deactivated successfully.
Sep 30 17:38:24 compute-0 podman[81954]: 2025-09-30 17:38:24.427710346 +0000 UTC m=+2.182933411 container died d32e6e17ae78c81543bc25ff0e1c61537cbd1a71bb5a12067c9b2bb7eab3398b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Sep 30 17:38:24 compute-0 systemd[1]: libpod-d32e6e17ae78c81543bc25ff0e1c61537cbd1a71bb5a12067c9b2bb7eab3398b.scope: Consumed 1.281s CPU time.
Sep 30 17:38:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:38:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e8733d9ff72fb0344c865346a6c38e3b8932ab48106823f0fa59eb3adadfdfa-merged.mount: Deactivated successfully.
Sep 30 17:38:24 compute-0 podman[81954]: 2025-09-30 17:38:24.965231028 +0000 UTC m=+2.720454093 container remove d32e6e17ae78c81543bc25ff0e1c61537cbd1a71bb5a12067c9b2bb7eab3398b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-osd-0-activate, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Sep 30 17:38:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Sep 30 17:38:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Sep 30 17:38:25 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Sep 30 17:38:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Sep 30 17:38:25 compute-0 ceph-mgr[74051]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/4235751489; not ready for session (expect reconnect)
Sep 30 17:38:25 compute-0 podman[82222]: 2025-09-30 17:38:25.159993108 +0000 UTC m=+0.023059126 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:38:25 compute-0 podman[82222]: 2025-09-30 17:38:25.282207127 +0000 UTC m=+0.145273115 container create fd4d46f7f2f9caa5a9b02e62d243bc45a011cc47cd454a47cd5dcd4e430678de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-osd-0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:38:25 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Sep 30 17:38:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 17:38:25 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 17:38:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 17:38:25 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:25 compute-0 ceph-mgr[74051]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Sep 30 17:38:25 compute-0 ceph-mgr[74051]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Sep 30 17:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e54916150f308b465648cbcb7db403d21dcc3277125f63d59cb0f6ff2dab5661/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e54916150f308b465648cbcb7db403d21dcc3277125f63d59cb0f6ff2dab5661/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e54916150f308b465648cbcb7db403d21dcc3277125f63d59cb0f6ff2dab5661/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e54916150f308b465648cbcb7db403d21dcc3277125f63d59cb0f6ff2dab5661/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e54916150f308b465648cbcb7db403d21dcc3277125f63d59cb0f6ff2dab5661/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:25 compute-0 podman[82222]: 2025-09-30 17:38:25.439936358 +0000 UTC m=+0.303002366 container init fd4d46f7f2f9caa5a9b02e62d243bc45a011cc47cd454a47cd5dcd4e430678de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-osd-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Sep 30 17:38:25 compute-0 podman[82222]: 2025-09-30 17:38:25.445735375 +0000 UTC m=+0.308801373 container start fd4d46f7f2f9caa5a9b02e62d243bc45a011cc47cd454a47cd5dcd4e430678de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-osd-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Sep 30 17:38:25 compute-0 ceph-osd[82241]: set uid:gid to 167:167 (ceph:ceph)
Sep 30 17:38:25 compute-0 ceph-osd[82241]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Sep 30 17:38:25 compute-0 ceph-osd[82241]: pidfile_write: ignore empty --pid-file
Sep 30 17:38:25 compute-0 ceph-osd[82241]: bdev(0x5631b1d09c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 30 17:38:25 compute-0 ceph-osd[82241]: bdev(0x5631b1d09c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Sep 30 17:38:25 compute-0 ceph-osd[82241]: bdev(0x5631b1d09c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Sep 30 17:38:25 compute-0 ceph-osd[82241]: bdev(0x5631b1d09c00 /var/lib/ceph/osd/ceph-0/block) close
Sep 30 17:38:25 compute-0 bash[82222]: fd4d46f7f2f9caa5a9b02e62d243bc45a011cc47cd454a47cd5dcd4e430678de
Sep 30 17:38:25 compute-0 systemd[1]: Started Ceph osd.0 for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:38:25 compute-0 sudo[81660]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:38:25 compute-0 ceph-osd[82241]: bdev(0x5631b1d09c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 30 17:38:25 compute-0 ceph-osd[82241]: bdev(0x5631b1d09c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Sep 30 17:38:25 compute-0 ceph-osd[82241]: bdev(0x5631b1d09c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Sep 30 17:38:25 compute-0 ceph-osd[82241]: bdev(0x5631b1d09c00 /var/lib/ceph/osd/ceph-0/block) close
Sep 30 17:38:25 compute-0 ceph-osd[82241]: bdev(0x5631b1d09c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 30 17:38:25 compute-0 ceph-osd[82241]: bdev(0x5631b1d09c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Sep 30 17:38:25 compute-0 ceph-osd[82241]: bdev(0x5631b1d09c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Sep 30 17:38:25 compute-0 ceph-osd[82241]: bdev(0x5631b1d09c00 /var/lib/ceph/osd/ceph-0/block) close
Sep 30 17:38:25 compute-0 ceph-osd[82241]: bdev(0x5631b1d09c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 30 17:38:25 compute-0 ceph-osd[82241]: bdev(0x5631b1d09c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Sep 30 17:38:25 compute-0 ceph-osd[82241]: bdev(0x5631b1d09c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Sep 30 17:38:25 compute-0 ceph-osd[82241]: bdev(0x5631b1d09c00 /var/lib/ceph/osd/ceph-0/block) close
Sep 30 17:38:26 compute-0 ceph-osd[82241]: bdev(0x5631b1d09c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 30 17:38:26 compute-0 ceph-osd[82241]: bdev(0x5631b1d09c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Sep 30 17:38:26 compute-0 ceph-osd[82241]: bdev(0x5631b1d09c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Sep 30 17:38:26 compute-0 ceph-osd[82241]: bdev(0x5631b1d09c00 /var/lib/ceph/osd/ceph-0/block) close
Sep 30 17:38:26 compute-0 ceph-osd[82241]: bdev(0x5631b1d09c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 30 17:38:26 compute-0 ceph-osd[82241]: bdev(0x5631b1d09c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Sep 30 17:38:26 compute-0 ceph-osd[82241]: bdev(0x5631b1d09c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Sep 30 17:38:26 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Sep 30 17:38:26 compute-0 ceph-osd[82241]: bdev(0x5631b1d09800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 30 17:38:26 compute-0 ceph-osd[82241]: bdev(0x5631b1d09800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Sep 30 17:38:26 compute-0 ceph-osd[82241]: bdev(0x5631b1d09800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Sep 30 17:38:26 compute-0 ceph-osd[82241]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Sep 30 17:38:26 compute-0 ceph-osd[82241]: bdev(0x5631b1d09800 /var/lib/ceph/osd/ceph-0/block) close
Sep 30 17:38:26 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:38:26 compute-0 ceph-mgr[74051]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/4235751489; not ready for session (expect reconnect)
Sep 30 17:38:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 17:38:26 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 17:38:26 compute-0 ceph-mgr[74051]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Sep 30 17:38:26 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:26 compute-0 ceph-osd[82241]: bdev(0x5631b1d09c00 /var/lib/ceph/osd/ceph-0/block) close
Sep 30 17:38:26 compute-0 sudo[82270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:38:26 compute-0 sudo[82270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:26 compute-0 sudo[82270]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:26 compute-0 sudo[82295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 17:38:26 compute-0 sudo[82295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:26 compute-0 ceph-osd[82241]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Sep 30 17:38:26 compute-0 ceph-osd[82241]: load: jerasure load: lrc 
Sep 30 17:38:26 compute-0 ceph-osd[82241]: bdev(0x5631b2b7ac00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 30 17:38:26 compute-0 ceph-osd[82241]: bdev(0x5631b2b7ac00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Sep 30 17:38:26 compute-0 ceph-osd[82241]: bdev(0x5631b2b7ac00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Sep 30 17:38:26 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Sep 30 17:38:26 compute-0 ceph-osd[82241]: bdev(0x5631b2b7ac00 /var/lib/ceph/osd/ceph-0/block) close
Sep 30 17:38:26 compute-0 ceph-mon[73755]: purged_snaps scrub starts
Sep 30 17:38:26 compute-0 ceph-mon[73755]: purged_snaps scrub ok
Sep 30 17:38:26 compute-0 ceph-mon[73755]: pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:38:26 compute-0 ceph-mon[73755]: from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Sep 30 17:38:26 compute-0 ceph-mon[73755]: osdmap e7: 2 total, 0 up, 2 in
Sep 30 17:38:26 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 17:38:26 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:26 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:38:26 compute-0 podman[82366]: 2025-09-30 17:38:26.824327947 +0000 UTC m=+0.037268027 container create b8052724a2fece1741d1be45414fa8ae725a98bd043faca6c87731409f30163f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_cohen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Sep 30 17:38:26 compute-0 systemd[1]: Started libpod-conmon-b8052724a2fece1741d1be45414fa8ae725a98bd043faca6c87731409f30163f.scope.
Sep 30 17:38:26 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:26 compute-0 ceph-osd[82241]: bdev(0x5631b2b7ac00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 30 17:38:26 compute-0 ceph-osd[82241]: bdev(0x5631b2b7ac00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Sep 30 17:38:26 compute-0 ceph-osd[82241]: bdev(0x5631b2b7ac00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Sep 30 17:38:26 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Sep 30 17:38:26 compute-0 ceph-osd[82241]: bdev(0x5631b2b7ac00 /var/lib/ceph/osd/ceph-0/block) close
Sep 30 17:38:26 compute-0 podman[82366]: 2025-09-30 17:38:26.895461811 +0000 UTC m=+0.108401901 container init b8052724a2fece1741d1be45414fa8ae725a98bd043faca6c87731409f30163f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_cohen, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 17:38:26 compute-0 podman[82366]: 2025-09-30 17:38:26.901386231 +0000 UTC m=+0.114326311 container start b8052724a2fece1741d1be45414fa8ae725a98bd043faca6c87731409f30163f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Sep 30 17:38:26 compute-0 podman[82366]: 2025-09-30 17:38:26.806745981 +0000 UTC m=+0.019686081 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:38:26 compute-0 podman[82366]: 2025-09-30 17:38:26.905632608 +0000 UTC m=+0.118572938 container attach b8052724a2fece1741d1be45414fa8ae725a98bd043faca6c87731409f30163f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Sep 30 17:38:26 compute-0 practical_cohen[82383]: 167 167
Sep 30 17:38:26 compute-0 systemd[1]: libpod-b8052724a2fece1741d1be45414fa8ae725a98bd043faca6c87731409f30163f.scope: Deactivated successfully.
Sep 30 17:38:26 compute-0 conmon[82383]: conmon b8052724a2fece1741d1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b8052724a2fece1741d1be45414fa8ae725a98bd043faca6c87731409f30163f.scope/container/memory.events
Sep 30 17:38:26 compute-0 podman[82366]: 2025-09-30 17:38:26.909854045 +0000 UTC m=+0.122794135 container died b8052724a2fece1741d1be45414fa8ae725a98bd043faca6c87731409f30163f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_cohen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:38:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-f359a4d7c93d9c40d2f7df9e5020b3cd89da7b163a905368d50bd30b94e270b2-merged.mount: Deactivated successfully.
Sep 30 17:38:26 compute-0 podman[82366]: 2025-09-30 17:38:26.946263089 +0000 UTC m=+0.159203159 container remove b8052724a2fece1741d1be45414fa8ae725a98bd043faca6c87731409f30163f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_cohen, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 17:38:26 compute-0 systemd[1]: libpod-conmon-b8052724a2fece1741d1be45414fa8ae725a98bd043faca6c87731409f30163f.scope: Deactivated successfully.
Sep 30 17:38:27 compute-0 podman[82411]: 2025-09-30 17:38:27.086444804 +0000 UTC m=+0.037058051 container create 76216fd246e2cd451ae48343553aab75fb38d839cfae5b377f8094904454ded3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_roentgen, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:38:27 compute-0 systemd[1]: Started libpod-conmon-76216fd246e2cd451ae48343553aab75fb38d839cfae5b377f8094904454ded3.scope.
Sep 30 17:38:27 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54fd0dde342c9a8525ac216fb717fc7fc20f2cea6c44193bc86825510dc91d13/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54fd0dde342c9a8525ac216fb717fc7fc20f2cea6c44193bc86825510dc91d13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54fd0dde342c9a8525ac216fb717fc7fc20f2cea6c44193bc86825510dc91d13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54fd0dde342c9a8525ac216fb717fc7fc20f2cea6c44193bc86825510dc91d13/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:27 compute-0 podman[82411]: 2025-09-30 17:38:27.069673759 +0000 UTC m=+0.020287026 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:38:27 compute-0 podman[82411]: 2025-09-30 17:38:27.169723056 +0000 UTC m=+0.120336303 container init 76216fd246e2cd451ae48343553aab75fb38d839cfae5b377f8094904454ded3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_roentgen, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:38:27 compute-0 podman[82411]: 2025-09-30 17:38:27.183397183 +0000 UTC m=+0.134010420 container start 76216fd246e2cd451ae48343553aab75fb38d839cfae5b377f8094904454ded3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Sep 30 17:38:27 compute-0 podman[82411]: 2025-09-30 17:38:27.186564943 +0000 UTC m=+0.137178200 container attach 76216fd246e2cd451ae48343553aab75fb38d839cfae5b377f8094904454ded3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_roentgen, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Sep 30 17:38:27 compute-0 ceph-osd[82241]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Sep 30 17:38:27 compute-0 ceph-osd[82241]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bdev(0x5631b2b7ac00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bdev(0x5631b2b7ac00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bdev(0x5631b2b7ac00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bdev(0x5631b2b7ac00 /var/lib/ceph/osd/ceph-0/block) close
Sep 30 17:38:27 compute-0 ceph-mgr[74051]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/4235751489; not ready for session (expect reconnect)
Sep 30 17:38:27 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 17:38:27 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 17:38:27 compute-0 ceph-mgr[74051]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bdev(0x5631b2b7ac00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bdev(0x5631b2b7ac00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bdev(0x5631b2b7ac00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bdev(0x5631b2b7ac00 /var/lib/ceph/osd/ceph-0/block) close
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bdev(0x5631b2b7ac00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bdev(0x5631b2b7ac00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bdev(0x5631b2b7ac00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bdev(0x5631b2b7ac00 /var/lib/ceph/osd/ceph-0/block) close
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bdev(0x5631b2b7ac00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bdev(0x5631b2b7ac00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bdev(0x5631b2b7ac00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bdev(0x5631b2b7b000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bdev(0x5631b2b7b000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bdev(0x5631b2b7b000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bluefs mount
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bluefs mount shared_bdev_used = 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: RocksDB version: 7.9.2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Git sha 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Compile date 2025-07-17 03:12:14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: DB SUMMARY
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: DB Session ID:  FRGWLLECGW6F45Z8KP4E
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: CURRENT file:  CURRENT
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: IDENTITY file:  IDENTITY
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                         Options.error_if_exists: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                       Options.create_if_missing: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                         Options.paranoid_checks: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.flush_verify_memtable_count: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                                     Options.env: 0x5631b1d5d6c0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                                      Options.fs: LegacyFileSystem
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                                Options.info_log: 0x5631b2b7f760
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.max_file_opening_threads: 16
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                              Options.statistics: (nil)
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                               Options.use_fsync: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                       Options.max_log_file_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.log_file_time_to_roll: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                       Options.keep_log_file_num: 1000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.recycle_log_file_num: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                         Options.allow_fallocate: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.allow_mmap_reads: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                       Options.allow_mmap_writes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.use_direct_reads: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.create_missing_column_families: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                              Options.db_log_dir: 
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                                 Options.wal_dir: db.wal
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.table_cache_numshardbits: 6
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                         Options.WAL_ttl_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                       Options.WAL_size_limit_MB: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.manifest_preallocation_size: 4194304
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                     Options.is_fd_close_on_exec: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.advise_random_on_open: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.db_write_buffer_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.write_buffer_manager: 0x5631b2c74a00
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.access_hint_on_compaction_start: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                      Options.use_adaptive_mutex: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                            Options.rate_limiter: (nil)
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                       Options.wal_recovery_mode: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.enable_thread_tracking: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.enable_pipelined_write: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.unordered_write: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.write_thread_max_yield_usec: 100
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                               Options.row_cache: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                              Options.wal_filter: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.avoid_flush_during_recovery: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.allow_ingest_behind: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.two_write_queues: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.manual_wal_flush: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.wal_compression: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.atomic_flush: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                 Options.persist_stats_to_disk: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                 Options.write_dbid_to_manifest: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                 Options.log_readahead_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                 Options.best_efforts_recovery: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.allow_data_in_errors: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.db_host_id: __hostname__
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.enforce_single_del_contracts: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.max_background_jobs: 4
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.max_background_compactions: -1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.max_subcompactions: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:           Options.writable_file_max_buffer_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.delayed_write_rate : 16777216
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.max_total_wal_size: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.stats_dump_period_sec: 600
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                 Options.stats_persist_period_sec: 600
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.max_open_files: -1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.bytes_per_sync: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                      Options.wal_bytes_per_sync: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.strict_bytes_per_sync: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.compaction_readahead_size: 2097152
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.max_background_flushes: -1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Compression algorithms supported:
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         kZSTD supported: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         kXpressCompression supported: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         kBZip2Compression supported: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         kZSTDNotFinalCompression supported: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         kLZ4Compression supported: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         kZlibCompression supported: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         kLZ4HCCompression supported: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         kSnappyCompression supported: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Fast CRC32 supported: Supported on x86
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: DMutex implementation: pthread_mutex_t
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5631b2b7fb20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5631b1d9f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.compression: LZ4
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.num_levels: 7
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.bloom_locality: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                               Options.ttl: 2592000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                       Options.enable_blob_files: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.min_blob_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:           Options.merge_operator: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5631b2b7fb20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5631b1d9f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.compression: LZ4
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.num_levels: 7
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.bloom_locality: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                               Options.ttl: 2592000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                       Options.enable_blob_files: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.min_blob_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:           Options.merge_operator: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5631b2b7fb20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5631b1d9f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.compression: LZ4
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.num_levels: 7
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.bloom_locality: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                               Options.ttl: 2592000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                       Options.enable_blob_files: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.min_blob_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:           Options.merge_operator: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5631b2b7fb20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5631b1d9f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.compression: LZ4
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.num_levels: 7
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.bloom_locality: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                               Options.ttl: 2592000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                       Options.enable_blob_files: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.min_blob_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:           Options.merge_operator: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5631b2b7fb20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5631b1d9f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.compression: LZ4
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.num_levels: 7
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.bloom_locality: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                               Options.ttl: 2592000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                       Options.enable_blob_files: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.min_blob_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:           Options.merge_operator: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5631b2b7fb20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5631b1d9f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.compression: LZ4
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.num_levels: 7
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.bloom_locality: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                               Options.ttl: 2592000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                       Options.enable_blob_files: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.min_blob_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:           Options.merge_operator: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5631b2b7fb20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5631b1d9f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.compression: LZ4
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.num_levels: 7
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.bloom_locality: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                               Options.ttl: 2592000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                       Options.enable_blob_files: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.min_blob_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:           Options.merge_operator: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5631b2b7fb40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5631b1d9e9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.compression: LZ4
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.num_levels: 7
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.bloom_locality: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                               Options.ttl: 2592000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                       Options.enable_blob_files: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.min_blob_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:           Options.merge_operator: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5631b2b7fb40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5631b1d9e9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.compression: LZ4
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.num_levels: 7
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.bloom_locality: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                               Options.ttl: 2592000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                       Options.enable_blob_files: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.min_blob_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:           Options.merge_operator: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5631b2b7fb40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5631b1d9e9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.compression: LZ4
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.num_levels: 7
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.bloom_locality: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                               Options.ttl: 2592000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                       Options.enable_blob_files: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.min_blob_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3cce9ace-b720-4aeb-9489-ddd1becac36c
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759253907560091, "job": 1, "event": "recovery_started", "wal_files": [31]}
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759253907560276, "job": 1, "event": "recovery_finished"}
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: freelist init
Sep 30 17:38:27 compute-0 ceph-osd[82241]: freelist _read_cfg
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bluefs umount
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bdev(0x5631b2b7b000 /var/lib/ceph/osd/ceph-0/block) close
Sep 30 17:38:27 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 17:38:27 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:27 compute-0 ceph-mon[73755]: pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:38:27 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 17:38:27 compute-0 lvm[82700]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 17:38:27 compute-0 lvm[82700]: VG ceph_vg0 finished
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bdev(0x5631b2b7b000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bdev(0x5631b2b7b000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bdev(0x5631b2b7b000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bluefs mount
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bluefs mount shared_bdev_used = 4718592
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Sep 30 17:38:27 compute-0 lvm[82702]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 17:38:27 compute-0 lvm[82702]: VG ceph_vg0 finished
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: RocksDB version: 7.9.2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Git sha 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Compile date 2025-07-17 03:12:14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: DB SUMMARY
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: DB Session ID:  FRGWLLECGW6F45Z8KP4F
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: CURRENT file:  CURRENT
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: IDENTITY file:  IDENTITY
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                         Options.error_if_exists: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                       Options.create_if_missing: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                         Options.paranoid_checks: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.flush_verify_memtable_count: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                                     Options.env: 0x5631b1d5d1f0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                                      Options.fs: LegacyFileSystem
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                                Options.info_log: 0x5631b2b7f900
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.max_file_opening_threads: 16
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                              Options.statistics: (nil)
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                               Options.use_fsync: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                       Options.max_log_file_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.log_file_time_to_roll: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                       Options.keep_log_file_num: 1000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.recycle_log_file_num: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                         Options.allow_fallocate: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.allow_mmap_reads: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                       Options.allow_mmap_writes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.use_direct_reads: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.create_missing_column_families: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                              Options.db_log_dir: 
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                                 Options.wal_dir: db.wal
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.table_cache_numshardbits: 6
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                         Options.WAL_ttl_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                       Options.WAL_size_limit_MB: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.manifest_preallocation_size: 4194304
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                     Options.is_fd_close_on_exec: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.advise_random_on_open: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.db_write_buffer_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.write_buffer_manager: 0x5631b2c74a00
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.access_hint_on_compaction_start: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                      Options.use_adaptive_mutex: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                            Options.rate_limiter: (nil)
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                       Options.wal_recovery_mode: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.enable_thread_tracking: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.enable_pipelined_write: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.unordered_write: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.write_thread_max_yield_usec: 100
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                               Options.row_cache: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                              Options.wal_filter: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.avoid_flush_during_recovery: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.allow_ingest_behind: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.two_write_queues: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.manual_wal_flush: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.wal_compression: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.atomic_flush: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                 Options.persist_stats_to_disk: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                 Options.write_dbid_to_manifest: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                 Options.log_readahead_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                 Options.best_efforts_recovery: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.allow_data_in_errors: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.db_host_id: __hostname__
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.enforce_single_del_contracts: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.max_background_jobs: 4
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.max_background_compactions: -1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.max_subcompactions: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:           Options.writable_file_max_buffer_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.delayed_write_rate : 16777216
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.max_total_wal_size: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.stats_dump_period_sec: 600
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                 Options.stats_persist_period_sec: 600
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.max_open_files: -1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.bytes_per_sync: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                      Options.wal_bytes_per_sync: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.strict_bytes_per_sync: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.compaction_readahead_size: 2097152
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.max_background_flushes: -1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Compression algorithms supported:
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         kZSTD supported: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         kXpressCompression supported: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         kBZip2Compression supported: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         kZSTDNotFinalCompression supported: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         kLZ4Compression supported: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         kZlibCompression supported: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         kLZ4HCCompression supported: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         kSnappyCompression supported: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Fast CRC32 supported: Supported on x86
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: DMutex implementation: pthread_mutex_t
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5631b2b7f640)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5631b1d9f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.compression: LZ4
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.num_levels: 7
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.bloom_locality: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                               Options.ttl: 2592000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                       Options.enable_blob_files: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.min_blob_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:           Options.merge_operator: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5631b2b7f640)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5631b1d9f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.compression: LZ4
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.num_levels: 7
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.bloom_locality: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                               Options.ttl: 2592000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                       Options.enable_blob_files: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.min_blob_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
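[Editor's note] The option block above is repeated verbatim by the OSD for every BlueStore RocksDB column-family shard that follows (m-1, m-2, p-0, p-1, p-2). A minimal sketch, assuming journalctl-style lines exactly like the ones in this capture, of how the "rocksdb: Options.<key>: <value>" lines could be folded into one dictionary per column family so the blocks can be diffed; the script, regexes, and usage below are illustrative editor additions, not part of Ceph or RocksDB.

#!/usr/bin/env python3
# Sketch (editor's addition): group "rocksdb: Options.<key>: <value>" lines
# from a journal capture like this one into per-column-family dictionaries.
# Continuation lines without the "rocksdb:" prefix (the table_factory block)
# are ignored by this simple parser.
import re
import sys
from collections import OrderedDict

CF_HEADER = re.compile(r"Options for column family \[(?P<cf>[^\]]+)\]")
OPTION = re.compile(r"rocksdb:\s+Options\.(?P<key>\S+):\s+(?P<value>.*\S)")

def parse(lines):
    """Return {column_family: {option_name: value}} from journal lines."""
    families, current = OrderedDict(), None
    for line in lines:
        header = CF_HEADER.search(line)
        if header:
            current = header.group("cf")
            families.setdefault(current, OrderedDict())
            continue
        option = OPTION.search(line)
        if option and current is not None:
            families[current][option.group("key")] = option.group("value")
    return families

if __name__ == "__main__":
    # e.g.: journalctl -u <osd unit> | python3 this_script.py   (hypothetical usage)
    cfs = parse(sys.stdin)
    for name, opts in cfs.items():
        print(f"{name}: {len(opts)} options, write_buffer_size={opts.get('write_buffer_size')}")

Running such a parser over this log would confirm that every shard listed here carries the same option values (16 MiB write buffers, LZ4 compression, level-style compaction), which is expected since they all come from the same OSD-wide RocksDB settings.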
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:           Options.merge_operator: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5631b2b7f640)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5631b1d9f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.compression: LZ4
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.num_levels: 7
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.bloom_locality: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                               Options.ttl: 2592000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                       Options.enable_blob_files: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.min_blob_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:           Options.merge_operator: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5631b2b7f640)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5631b1d9f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.compression: LZ4
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.num_levels: 7
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.bloom_locality: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                               Options.ttl: 2592000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                       Options.enable_blob_files: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.min_blob_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:           Options.merge_operator: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5631b2b7f640)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5631b1d9f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.compression: LZ4
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.num_levels: 7
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.bloom_locality: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                               Options.ttl: 2592000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                       Options.enable_blob_files: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.min_blob_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:           Options.merge_operator: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5631b2b7f640)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5631b1d9f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.compression: LZ4
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.num_levels: 7
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.bloom_locality: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                               Options.ttl: 2592000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                       Options.enable_blob_files: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.min_blob_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:           Options.merge_operator: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5631b2b7f640)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5631b1d9f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.compression: LZ4
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.num_levels: 7
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.bloom_locality: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                               Options.ttl: 2592000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                       Options.enable_blob_files: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.min_blob_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:           Options.merge_operator: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5631b2b7fa80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5631b1d9e9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.compression: LZ4
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.num_levels: 7
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.bloom_locality: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                               Options.ttl: 2592000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                       Options.enable_blob_files: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.min_blob_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:           Options.merge_operator: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5631b2b7fa80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5631b1d9e9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.compression: LZ4
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.num_levels: 7
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.bloom_locality: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                               Options.ttl: 2592000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                       Options.enable_blob_files: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.min_blob_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:           Options.merge_operator: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5631b2b7fa80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5631b1d9e9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.compression: LZ4
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.num_levels: 7
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.bloom_locality: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                               Options.ttl: 2592000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                       Options.enable_blob_files: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                           Options.min_blob_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3cce9ace-b720-4aeb-9489-ddd1becac36c
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759253907829325, "job": 1, "event": "recovery_started", "wal_files": [31]}
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759253907832761, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253907, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3cce9ace-b720-4aeb-9489-ddd1becac36c", "db_session_id": "FRGWLLECGW6F45Z8KP4F", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Sep 30 17:38:27 compute-0 silly_roentgen[82427]: {}
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759253907835531, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253907, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3cce9ace-b720-4aeb-9489-ddd1becac36c", "db_session_id": "FRGWLLECGW6F45Z8KP4F", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759253907839671, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253907, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3cce9ace-b720-4aeb-9489-ddd1becac36c", "db_session_id": "FRGWLLECGW6F45Z8KP4F", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759253907847762, "job": 1, "event": "recovery_finished"}
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Sep 30 17:38:27 compute-0 systemd[1]: libpod-76216fd246e2cd451ae48343553aab75fb38d839cfae5b377f8094904454ded3.scope: Deactivated successfully.
Sep 30 17:38:27 compute-0 podman[82411]: 2025-09-30 17:38:27.860084214 +0000 UTC m=+0.810697481 container died 76216fd246e2cd451ae48343553aab75fb38d839cfae5b377f8094904454ded3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Sep 30 17:38:27 compute-0 systemd[1]: libpod-76216fd246e2cd451ae48343553aab75fb38d839cfae5b377f8094904454ded3.scope: Consumed 1.053s CPU time.
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5631b2d7a000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: DB pointer 0x5631b2d24000
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Sep 30 17:38:27 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 17:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9f350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9f350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9f350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9f350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9f350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9f350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9f350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9e9b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9e9b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9e9b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.008       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.008       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9f350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9f350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Sep 30 17:38:27 compute-0 ceph-osd[82241]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Sep 30 17:38:27 compute-0 ceph-osd[82241]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Sep 30 17:38:27 compute-0 ceph-osd[82241]: _get_class not permitted to load lua
Sep 30 17:38:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-54fd0dde342c9a8525ac216fb717fc7fc20f2cea6c44193bc86825510dc91d13-merged.mount: Deactivated successfully.
Sep 30 17:38:27 compute-0 ceph-osd[82241]: _get_class not permitted to load sdk
Sep 30 17:38:27 compute-0 ceph-osd[82241]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Sep 30 17:38:27 compute-0 ceph-osd[82241]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Sep 30 17:38:27 compute-0 ceph-osd[82241]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Sep 30 17:38:27 compute-0 ceph-osd[82241]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Sep 30 17:38:27 compute-0 ceph-osd[82241]: osd.0 0 load_pgs
Sep 30 17:38:27 compute-0 ceph-osd[82241]: osd.0 0 load_pgs opened 0 pgs
Sep 30 17:38:27 compute-0 ceph-osd[82241]: osd.0 0 log_to_monitors true
Sep 30 17:38:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-osd-0[82237]: 2025-09-30T17:38:27.893+0000 7f31bacf4740 -1 osd.0 0 log_to_monitors true
Sep 30 17:38:27 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Sep 30 17:38:27 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Sep 30 17:38:27 compute-0 podman[82411]: 2025-09-30 17:38:27.909646201 +0000 UTC m=+0.860259448 container remove 76216fd246e2cd451ae48343553aab75fb38d839cfae5b377f8094904454ded3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:38:27 compute-0 systemd[1]: libpod-conmon-76216fd246e2cd451ae48343553aab75fb38d839cfae5b377f8094904454ded3.scope: Deactivated successfully.
Sep 30 17:38:27 compute-0 sudo[82295]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:27 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:38:27 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:27 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:38:27 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:28 compute-0 sudo[82931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 17:38:28 compute-0 sudo[82931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:28 compute-0 sudo[82931]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:28 compute-0 sudo[82956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:38:28 compute-0 sudo[82956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:28 compute-0 sudo[82956]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:28 compute-0 ceph-mgr[74051]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/4235751489; not ready for session (expect reconnect)
Sep 30 17:38:28 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 17:38:28 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 17:38:28 compute-0 ceph-mgr[74051]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Sep 30 17:38:28 compute-0 sudo[82981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Sep 30 17:38:28 compute-0 sudo[82981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:28 compute-0 podman[83077]: 2025-09-30 17:38:28.701219976 +0000 UTC m=+0.056087233 container exec 28cb2903608cec8e7c5a39aa3f398cadea7d807beef2cb8cca333795d18a1341 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 17:38:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:38:28 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Sep 30 17:38:28 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Sep 30 17:38:28 compute-0 ceph-mon[73755]: from='osd.0 [v2:192.168.122.100:6802/129916338,v1:192.168.122.100:6803/129916338]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Sep 30 17:38:28 compute-0 ceph-mon[73755]: from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Sep 30 17:38:28 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:28 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:28 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 17:38:28 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Sep 30 17:38:28 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e8 e8: 2 total, 0 up, 2 in
Sep 30 17:38:28 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 0 up, 2 in
Sep 30 17:38:28 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 17:38:28 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:28 compute-0 ceph-mgr[74051]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Sep 30 17:38:28 compute-0 ceph-mgr[74051]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Sep 30 17:38:28 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 17:38:28 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 17:38:28 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Sep 30 17:38:28 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Sep 30 17:38:28 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e8 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Sep 30 17:38:28 compute-0 podman[83077]: 2025-09-30 17:38:28.815754381 +0000 UTC m=+0.170621628 container exec_died 28cb2903608cec8e7c5a39aa3f398cadea7d807beef2cb8cca333795d18a1341 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:38:28 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Sep 30 17:38:28 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Sep 30 17:38:29 compute-0 sudo[82981]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:38:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:38:29 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:38:29 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:29 compute-0 ceph-mgr[74051]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/4235751489; not ready for session (expect reconnect)
Sep 30 17:38:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 17:38:29 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 17:38:29 compute-0 ceph-mgr[74051]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Sep 30 17:38:29 compute-0 sudo[83163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:38:29 compute-0 sudo[83163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:29 compute-0 sudo[83163]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:29 compute-0 sudo[83188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 17:38:29 compute-0 sudo[83188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:38:29 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:38:29 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:29 compute-0 sudo[83188]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Sep 30 17:38:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Sep 30 17:38:29 compute-0 ceph-mon[73755]: pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 17:38:29 compute-0 ceph-mon[73755]: from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Sep 30 17:38:29 compute-0 ceph-mon[73755]: osdmap e8: 2 total, 0 up, 2 in
Sep 30 17:38:29 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:29 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 17:38:29 compute-0 ceph-mon[73755]: from='osd.0 [v2:192.168.122.100:6802/129916338,v1:192.168.122.100:6803/129916338]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Sep 30 17:38:29 compute-0 ceph-mon[73755]: from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Sep 30 17:38:29 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:29 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:29 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 17:38:29 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:29 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:29 compute-0 sudo[83243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:38:29 compute-0 sudo[83243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:29 compute-0 sudo[83243]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:29 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Sep 30 17:38:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e9 e9: 2 total, 1 up, 2 in
Sep 30 17:38:29 compute-0 ceph-mon[73755]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.101:6800/4235751489,v1:192.168.122.101:6801/4235751489] boot
Sep 30 17:38:29 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 1 up, 2 in
Sep 30 17:38:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 17:38:29 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 17:38:29 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 17:38:29 compute-0 ceph-mgr[74051]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Sep 30 17:38:29 compute-0 ceph-osd[82241]: osd.0 0 done with init, starting boot process
Sep 30 17:38:29 compute-0 ceph-osd[82241]: osd.0 0 start_boot
Sep 30 17:38:29 compute-0 ceph-osd[82241]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Sep 30 17:38:29 compute-0 ceph-osd[82241]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Sep 30 17:38:29 compute-0 ceph-osd[82241]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Sep 30 17:38:29 compute-0 ceph-osd[82241]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Sep 30 17:38:29 compute-0 ceph-osd[82241]: osd.0 0  bench count 12288000 bsize 4 KiB
Sep 30 17:38:29 compute-0 ceph-mgr[74051]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/129916338; not ready for session (expect reconnect)
Sep 30 17:38:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 17:38:29 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:29 compute-0 ceph-mgr[74051]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Sep 30 17:38:29 compute-0 sudo[83268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- inventory --format=json-pretty --filter-for-batch
Sep 30 17:38:29 compute-0 sudo[83268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:30 compute-0 podman[83334]: 2025-09-30 17:38:30.242387191 +0000 UTC m=+0.062148277 container create 674a0052fb34520892db9ea86624792fba1483e705484e852096a06d4ca72af4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Sep 30 17:38:30 compute-0 podman[83334]: 2025-09-30 17:38:30.201872624 +0000 UTC m=+0.021633730 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:38:30 compute-0 systemd[1]: Started libpod-conmon-674a0052fb34520892db9ea86624792fba1483e705484e852096a06d4ca72af4.scope.
Sep 30 17:38:30 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:30 compute-0 podman[83334]: 2025-09-30 17:38:30.340480658 +0000 UTC m=+0.160241774 container init 674a0052fb34520892db9ea86624792fba1483e705484e852096a06d4ca72af4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_shaw, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Sep 30 17:38:30 compute-0 podman[83334]: 2025-09-30 17:38:30.34843171 +0000 UTC m=+0.168192796 container start 674a0052fb34520892db9ea86624792fba1483e705484e852096a06d4ca72af4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_shaw, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:38:30 compute-0 focused_shaw[83350]: 167 167
Sep 30 17:38:30 compute-0 systemd[1]: libpod-674a0052fb34520892db9ea86624792fba1483e705484e852096a06d4ca72af4.scope: Deactivated successfully.
Sep 30 17:38:30 compute-0 podman[83334]: 2025-09-30 17:38:30.370714415 +0000 UTC m=+0.190475551 container attach 674a0052fb34520892db9ea86624792fba1483e705484e852096a06d4ca72af4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_shaw, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Sep 30 17:38:30 compute-0 podman[83334]: 2025-09-30 17:38:30.371154046 +0000 UTC m=+0.190915152 container died 674a0052fb34520892db9ea86624792fba1483e705484e852096a06d4ca72af4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:38:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a0bcf01bbffbb0b174cfe3ef19c91b99e9abf3a894ae4ceadf04e081cd8704a-merged.mount: Deactivated successfully.
Sep 30 17:38:30 compute-0 podman[83334]: 2025-09-30 17:38:30.506098008 +0000 UTC m=+0.325859094 container remove 674a0052fb34520892db9ea86624792fba1483e705484e852096a06d4ca72af4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:38:30 compute-0 systemd[1]: libpod-conmon-674a0052fb34520892db9ea86624792fba1483e705484e852096a06d4ca72af4.scope: Deactivated successfully.
Sep 30 17:38:30 compute-0 podman[83376]: 2025-09-30 17:38:30.662410993 +0000 UTC m=+0.053368885 container create f764087f7eaa087c33c54408d1bce8dd7fedefe4bec19de17b521e967086a506 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_babbage, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Sep 30 17:38:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v45: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Sep 30 17:38:30 compute-0 podman[83376]: 2025-09-30 17:38:30.631243462 +0000 UTC m=+0.022201274 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:38:30 compute-0 systemd[1]: Started libpod-conmon-f764087f7eaa087c33c54408d1bce8dd7fedefe4bec19de17b521e967086a506.scope.
Sep 30 17:38:30 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e61ae8c2d1c4d7ec45a6f0cced6e615de7d3ab8fa3078ff6abb7dfcd4f2e1294/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e61ae8c2d1c4d7ec45a6f0cced6e615de7d3ab8fa3078ff6abb7dfcd4f2e1294/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e61ae8c2d1c4d7ec45a6f0cced6e615de7d3ab8fa3078ff6abb7dfcd4f2e1294/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e61ae8c2d1c4d7ec45a6f0cced6e615de7d3ab8fa3078ff6abb7dfcd4f2e1294/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:30 compute-0 podman[83376]: 2025-09-30 17:38:30.789152257 +0000 UTC m=+0.180110109 container init f764087f7eaa087c33c54408d1bce8dd7fedefe4bec19de17b521e967086a506 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_babbage, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:38:30 compute-0 podman[83376]: 2025-09-30 17:38:30.795615611 +0000 UTC m=+0.186573413 container start f764087f7eaa087c33c54408d1bce8dd7fedefe4bec19de17b521e967086a506 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_babbage, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Sep 30 17:38:30 compute-0 podman[83376]: 2025-09-30 17:38:30.813468533 +0000 UTC m=+0.204426385 container attach f764087f7eaa087c33c54408d1bce8dd7fedefe4bec19de17b521e967086a506 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_babbage, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325)
Sep 30 17:38:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Sep 30 17:38:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Sep 30 17:38:30 compute-0 ceph-mon[73755]: OSD bench result of 9804.413295 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Sep 30 17:38:30 compute-0 ceph-mon[73755]: from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Sep 30 17:38:30 compute-0 ceph-mon[73755]: osd.1 [v2:192.168.122.101:6800/4235751489,v1:192.168.122.101:6801/4235751489] boot
Sep 30 17:38:30 compute-0 ceph-mon[73755]: osdmap e9: 2 total, 1 up, 2 in
Sep 30 17:38:30 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:30 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 17:38:30 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:30 compute-0 ceph-mgr[74051]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/129916338; not ready for session (expect reconnect)
Sep 30 17:38:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 17:38:30 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:30 compute-0 ceph-mgr[74051]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Sep 30 17:38:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e10 e10: 2 total, 1 up, 2 in
Sep 30 17:38:30 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 1 up, 2 in
Sep 30 17:38:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 17:38:30 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:30 compute-0 ceph-mgr[74051]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Sep 30 17:38:30 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:38:30 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:38:30 compute-0 ceph-mgr[74051]: [devicehealth INFO root] creating mgr pool
Sep 30 17:38:30 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:38:30 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:38:30 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:38:30 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:38:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Sep 30 17:38:30 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Sep 30 17:38:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:38:31 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]: [
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:     {
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:         "available": false,
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:         "being_replaced": false,
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:         "ceph_device_lvm": false,
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:         "device_id": "QEMU_DVD-ROM_QM00001",
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:         "lsm_data": {},
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:         "lvs": [],
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:         "path": "/dev/sr0",
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:         "rejected_reasons": [
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:             "Has a FileSystem",
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:             "Insufficient space (<5GB)"
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:         ],
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:         "sys_api": {
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:             "actuators": null,
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:             "device_nodes": [
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:                 "sr0"
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:             ],
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:             "devname": "sr0",
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:             "human_readable_size": "482.00 KB",
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:             "id_bus": "ata",
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:             "model": "QEMU DVD-ROM",
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:             "nr_requests": "2",
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:             "parent": "/dev/sr0",
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:             "partitions": {},
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:             "path": "/dev/sr0",
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:             "removable": "1",
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:             "rev": "2.5+",
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:             "ro": "0",
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:             "rotational": "0",
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:             "sas_address": "",
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:             "sas_device_handle": "",
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:             "scheduler_mode": "mq-deadline",
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:             "sectors": 0,
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:             "sectorsize": "2048",
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:             "size": 493568.0,
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:             "support_discard": "2048",
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:             "type": "disk",
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:             "vendor": "QEMU"
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:         }
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]:     }
Sep 30 17:38:31 compute-0 optimistic_babbage[83393]: ]
Sep 30 17:38:31 compute-0 systemd[1]: libpod-f764087f7eaa087c33c54408d1bce8dd7fedefe4bec19de17b521e967086a506.scope: Deactivated successfully.
Sep 30 17:38:31 compute-0 podman[83376]: 2025-09-30 17:38:31.460044361 +0000 UTC m=+0.851002163 container died f764087f7eaa087c33c54408d1bce8dd7fedefe4bec19de17b521e967086a506 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_babbage, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:38:31 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Sep 30 17:38:31 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Sep 30 17:38:31 compute-0 ceph-mgr[74051]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to 127.8M
Sep 30 17:38:31 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to 127.8M
Sep 30 17:38:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Sep 30 17:38:31 compute-0 ceph-mgr[74051]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-1 to 134071500: error parsing value: Value '134071500' is below minimum 939524096
Sep 30 17:38:31 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-1 to 134071500: error parsing value: Value '134071500' is below minimum 939524096
Sep 30 17:38:31 compute-0 ceph-mgr[74051]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/129916338; not ready for session (expect reconnect)
Sep 30 17:38:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Sep 30 17:38:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Sep 30 17:38:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 17:38:31 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:31 compute-0 ceph-mgr[74051]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Sep 30 17:38:31 compute-0 ceph-mon[73755]: purged_snaps scrub starts
Sep 30 17:38:31 compute-0 ceph-mon[73755]: purged_snaps scrub ok
Sep 30 17:38:31 compute-0 ceph-mon[73755]: pgmap v45: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Sep 30 17:38:31 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:31 compute-0 ceph-mon[73755]: osdmap e10: 2 total, 1 up, 2 in
Sep 30 17:38:31 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:31 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Sep 30 17:38:31 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:31 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:31 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Sep 30 17:38:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-e61ae8c2d1c4d7ec45a6f0cced6e615de7d3ab8fa3078ff6abb7dfcd4f2e1294-merged.mount: Deactivated successfully.
Sep 30 17:38:32 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Sep 30 17:38:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e11 e11: 2 total, 1 up, 2 in
Sep 30 17:38:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Sep 30 17:38:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Sep 30 17:38:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Sep 30 17:38:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Sep 30 17:38:32 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 1 up, 2 in
Sep 30 17:38:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 17:38:32 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:32 compute-0 ceph-mgr[74051]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Sep 30 17:38:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Sep 30 17:38:32 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Sep 30 17:38:32 compute-0 podman[83376]: 2025-09-30 17:38:32.641532775 +0000 UTC m=+2.032490577 container remove f764087f7eaa087c33c54408d1bce8dd7fedefe4bec19de17b521e967086a506 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Sep 30 17:38:32 compute-0 sudo[83268]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:38:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Sep 30 17:38:32 compute-0 systemd[1]: libpod-conmon-f764087f7eaa087c33c54408d1bce8dd7fedefe4bec19de17b521e967086a506.scope: Deactivated successfully.
Sep 30 17:38:32 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:38:32 compute-0 ceph-mgr[74051]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/129916338; not ready for session (expect reconnect)
Sep 30 17:38:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 17:38:32 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:32 compute-0 ceph-mgr[74051]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Sep 30 17:38:33 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:38:33 compute-0 ceph-mon[73755]: Adjusting osd_memory_target on compute-1 to 127.8M
Sep 30 17:38:33 compute-0 ceph-mon[73755]: Unable to set osd_memory_target on compute-1 to 134071500: error parsing value: Value '134071500' is below minimum 939524096
Sep 30 17:38:33 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:33 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Sep 30 17:38:33 compute-0 ceph-mon[73755]: osdmap e11: 2 total, 1 up, 2 in
Sep 30 17:38:33 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:33 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Sep 30 17:38:33 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:33 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:33 compute-0 ceph-mon[73755]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Sep 30 17:38:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Sep 30 17:38:33 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:38:33 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Sep 30 17:38:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e12 e12: 2 total, 1 up, 2 in
Sep 30 17:38:33 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 1 up, 2 in
Sep 30 17:38:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 17:38:33 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:33 compute-0 ceph-mgr[74051]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Sep 30 17:38:33 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Sep 30 17:38:33 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Sep 30 17:38:33 compute-0 ceph-mgr[74051]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.8M
Sep 30 17:38:33 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.8M
Sep 30 17:38:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Sep 30 17:38:33 compute-0 ceph-mgr[74051]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134071500: error parsing value: Value '134071500' is below minimum 939524096
Sep 30 17:38:33 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134071500: error parsing value: Value '134071500' is below minimum 939524096
Sep 30 17:38:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:38:33 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:38:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 17:38:33 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:38:33 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Sep 30 17:38:33 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Sep 30 17:38:33 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Sep 30 17:38:33 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Sep 30 17:38:33 compute-0 sudo[84366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Sep 30 17:38:33 compute-0 sudo[84366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:33 compute-0 sudo[84366]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:33 compute-0 sudo[84391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph
Sep 30 17:38:33 compute-0 sudo[84391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:33 compute-0 sudo[84391]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:33 compute-0 sudo[84416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.conf.new
Sep 30 17:38:33 compute-0 sudo[84416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:33 compute-0 sudo[84416]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:33 compute-0 sudo[84441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:38:33 compute-0 sudo[84441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:33 compute-0 sudo[84441]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:33 compute-0 sudo[84466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.conf.new
Sep 30 17:38:33 compute-0 sudo[84466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:33 compute-0 sudo[84466]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:33 compute-0 ceph-mgr[74051]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/129916338; not ready for session (expect reconnect)
Sep 30 17:38:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 17:38:33 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:33 compute-0 ceph-mgr[74051]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Sep 30 17:38:33 compute-0 sudo[84514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.conf.new
Sep 30 17:38:33 compute-0 sudo[84514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:33 compute-0 sudo[84514]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:33 compute-0 sudo[84539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.conf.new
Sep 30 17:38:33 compute-0 sudo[84539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:33 compute-0 sudo[84539]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:34 compute-0 sudo[84564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Sep 30 17:38:34 compute-0 sudo[84564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:34 compute-0 sudo[84564]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:34 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf
Sep 30 17:38:34 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf
Sep 30 17:38:34 compute-0 sudo[84589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config
Sep 30 17:38:34 compute-0 sudo[84589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:34 compute-0 sudo[84589]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:38:34 compute-0 sudo[84614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config
Sep 30 17:38:34 compute-0 sudo[84614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:34 compute-0 sudo[84614]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:34 compute-0 sudo[84639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf.new
Sep 30 17:38:34 compute-0 sudo[84639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:34 compute-0 sudo[84639]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:34 compute-0 sudo[84664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:38:34 compute-0 sudo[84664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:34 compute-0 sudo[84664]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:34 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf
Sep 30 17:38:34 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf
Sep 30 17:38:34 compute-0 sudo[84689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf.new
Sep 30 17:38:34 compute-0 sudo[84689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:34 compute-0 sudo[84689]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:34 compute-0 ceph-mon[73755]: pgmap v48: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Sep 30 17:38:34 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:34 compute-0 ceph-mon[73755]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Sep 30 17:38:34 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:34 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Sep 30 17:38:34 compute-0 ceph-mon[73755]: osdmap e12: 2 total, 1 up, 2 in
Sep 30 17:38:34 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:34 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:34 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Sep 30 17:38:34 compute-0 ceph-mon[73755]: Adjusting osd_memory_target on compute-0 to 127.8M
Sep 30 17:38:34 compute-0 ceph-mon[73755]: Unable to set osd_memory_target on compute-0 to 134071500: error parsing value: Value '134071500' is below minimum 939524096
Sep 30 17:38:34 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:38:34 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:38:34 compute-0 ceph-mon[73755]: Updating compute-0:/etc/ceph/ceph.conf
Sep 30 17:38:34 compute-0 ceph-mon[73755]: Updating compute-1:/etc/ceph/ceph.conf
Sep 30 17:38:34 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:34 compute-0 sudo[84737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf.new
Sep 30 17:38:34 compute-0 sudo[84737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:34 compute-0 sudo[84737]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:34 compute-0 sudo[84762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf.new
Sep 30 17:38:34 compute-0 sudo[84762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:34 compute-0 sudo[84762]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:34 compute-0 sudo[84787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf.new /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf
Sep 30 17:38:34 compute-0 sudo[84787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:34 compute-0 sudo[84787]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:38:34 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:38:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Sep 30 17:38:34 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:34 compute-0 ceph-mgr[74051]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/129916338; not ready for session (expect reconnect)
Sep 30 17:38:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 17:38:34 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:34 compute-0 ceph-mgr[74051]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Sep 30 17:38:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:38:35 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:38:35 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 17:38:35 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 17:38:35 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:38:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 17:38:35 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:38:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:38:35 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:38:35 compute-0 sudo[84812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:38:35 compute-0 sudo[84812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:35 compute-0 sudo[84812]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:35 compute-0 sudo[84837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 17:38:35 compute-0 sudo[84837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:35 compute-0 ceph-mon[73755]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Sep 30 17:38:35 compute-0 ceph-mon[73755]: log_channel(cluster) log [INF] : Cluster is now healthy
Sep 30 17:38:35 compute-0 podman[84900]: 2025-09-30 17:38:35.673295852 +0000 UTC m=+0.072894390 container create a98d23f9b270983775e15c6d455730fe561d301d46094d692d710fbbb8e55df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_perlman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:38:35 compute-0 podman[84900]: 2025-09-30 17:38:35.620250027 +0000 UTC m=+0.019848605 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:38:35 compute-0 systemd[1]: Started libpod-conmon-a98d23f9b270983775e15c6d455730fe561d301d46094d692d710fbbb8e55df4.scope.
Sep 30 17:38:35 compute-0 ceph-mon[73755]: Updating compute-0:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf
Sep 30 17:38:35 compute-0 ceph-mon[73755]: Updating compute-1:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf
Sep 30 17:38:35 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:35 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:35 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:35 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:35 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:35 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:35 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:38:35 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:38:35 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:38:35 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:35 compute-0 podman[84900]: 2025-09-30 17:38:35.840840021 +0000 UTC m=+0.240438569 container init a98d23f9b270983775e15c6d455730fe561d301d46094d692d710fbbb8e55df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Sep 30 17:38:35 compute-0 ceph-mgr[74051]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/129916338; not ready for session (expect reconnect)
Sep 30 17:38:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 17:38:35 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:35 compute-0 ceph-mgr[74051]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Sep 30 17:38:35 compute-0 podman[84900]: 2025-09-30 17:38:35.848093545 +0000 UTC m=+0.247692093 container start a98d23f9b270983775e15c6d455730fe561d301d46094d692d710fbbb8e55df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_perlman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:38:35 compute-0 clever_perlman[84916]: 167 167
Sep 30 17:38:35 compute-0 systemd[1]: libpod-a98d23f9b270983775e15c6d455730fe561d301d46094d692d710fbbb8e55df4.scope: Deactivated successfully.
Sep 30 17:38:35 compute-0 podman[84900]: 2025-09-30 17:38:35.893694521 +0000 UTC m=+0.293293069 container attach a98d23f9b270983775e15c6d455730fe561d301d46094d692d710fbbb8e55df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_perlman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:38:35 compute-0 podman[84900]: 2025-09-30 17:38:35.895139508 +0000 UTC m=+0.294738056 container died a98d23f9b270983775e15c6d455730fe561d301d46094d692d710fbbb8e55df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_perlman, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Sep 30 17:38:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ec47f5c588c53fafa2f12a506fb7cd013023b68597022fc79cb21fcf8382234-merged.mount: Deactivated successfully.
Sep 30 17:38:36 compute-0 podman[84900]: 2025-09-30 17:38:36.061557788 +0000 UTC m=+0.461156336 container remove a98d23f9b270983775e15c6d455730fe561d301d46094d692d710fbbb8e55df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Sep 30 17:38:36 compute-0 systemd[1]: libpod-conmon-a98d23f9b270983775e15c6d455730fe561d301d46094d692d710fbbb8e55df4.scope: Deactivated successfully.
Sep 30 17:38:36 compute-0 podman[84941]: 2025-09-30 17:38:36.288475403 +0000 UTC m=+0.072215872 container create b4afa2272bc0ed626ff23b9e5dc72b8a1d32074a0123607a1faed075c3883923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_cori, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 17:38:36 compute-0 podman[84941]: 2025-09-30 17:38:36.242028606 +0000 UTC m=+0.025769125 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:38:36 compute-0 systemd[1]: Started libpod-conmon-b4afa2272bc0ed626ff23b9e5dc72b8a1d32074a0123607a1faed075c3883923.scope.
Sep 30 17:38:36 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/338c5312821707de96b6428cdfe591032bb71941a7d8c1707aa03e72068a54eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/338c5312821707de96b6428cdfe591032bb71941a7d8c1707aa03e72068a54eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/338c5312821707de96b6428cdfe591032bb71941a7d8c1707aa03e72068a54eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/338c5312821707de96b6428cdfe591032bb71941a7d8c1707aa03e72068a54eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/338c5312821707de96b6428cdfe591032bb71941a7d8c1707aa03e72068a54eb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:36 compute-0 podman[84941]: 2025-09-30 17:38:36.471429743 +0000 UTC m=+0.255170232 container init b4afa2272bc0ed626ff23b9e5dc72b8a1d32074a0123607a1faed075c3883923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_cori, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Sep 30 17:38:36 compute-0 podman[84941]: 2025-09-30 17:38:36.478450361 +0000 UTC m=+0.262190830 container start b4afa2272bc0ed626ff23b9e5dc72b8a1d32074a0123607a1faed075c3883923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Sep 30 17:38:36 compute-0 podman[84941]: 2025-09-30 17:38:36.579573896 +0000 UTC m=+0.363314365 container attach b4afa2272bc0ed626ff23b9e5dc72b8a1d32074a0123607a1faed075c3883923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_cori, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Sep 30 17:38:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Sep 30 17:38:36 compute-0 intelligent_cori[84958]: --> passed data devices: 0 physical, 1 LVM
Sep 30 17:38:36 compute-0 intelligent_cori[84958]: --> All data devices are unavailable
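[editor's note] intelligent_cori is the ceph-volume container from the "lvm batch --no-auto /dev/ceph_vg0/ceph_lv0" call logged a few lines earlier; "0 physical, 1 LVM" plus "All data devices are unavailable" means the only candidate LV is already tagged as an OSD, so batch has nothing left to create. One way to confirm that from the host is to read the LV's LVM tags; a minimal sketch, assuming lvm2's JSON report format and the ceph.* tag layout shown in the lvm list output further down:

    #!/usr/bin/env python3
    # Illustrative check: does an LV already carry ceph-volume tags (ceph.osd_id)?
    import json
    import subprocess

    def lv_is_ceph_osd(vg: str, lv: str) -> bool:
        out = subprocess.run(
            ["lvs", "--reportformat", "json", "-o", "lv_name,vg_name,lv_tags",
             f"{vg}/{lv}"],
            check=True, capture_output=True, text=True).stdout
        rows = json.loads(out)["report"][0]["lv"]
        tags = rows[0]["lv_tags"] if rows else ""
        return "ceph.osd_id=" in tags

    print(lv_is_ceph_osd("ceph_vg0", "ceph_lv0"))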
Sep 30 17:38:36 compute-0 systemd[1]: libpod-b4afa2272bc0ed626ff23b9e5dc72b8a1d32074a0123607a1faed075c3883923.scope: Deactivated successfully.
Sep 30 17:38:36 compute-0 podman[84941]: 2025-09-30 17:38:36.804537031 +0000 UTC m=+0.588277490 container died b4afa2272bc0ed626ff23b9e5dc72b8a1d32074a0123607a1faed075c3883923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_cori, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 17:38:36 compute-0 ceph-mgr[74051]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/129916338; not ready for session (expect reconnect)
Sep 30 17:38:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 17:38:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:36 compute-0 ceph-mgr[74051]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Sep 30 17:38:36 compute-0 sudo[85008]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dietupnpftiajgwtxffawhuuqknmlowe ; /usr/bin/python3'
Sep 30 17:38:36 compute-0 sudo[85008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:38:36 compute-0 ceph-mon[73755]: pgmap v50: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Sep 30 17:38:36 compute-0 ceph-mon[73755]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Sep 30 17:38:36 compute-0 ceph-mon[73755]: Cluster is now healthy
Sep 30 17:38:36 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-338c5312821707de96b6428cdfe591032bb71941a7d8c1707aa03e72068a54eb-merged.mount: Deactivated successfully.
Sep 30 17:38:37 compute-0 podman[84941]: 2025-09-30 17:38:37.032197255 +0000 UTC m=+0.815937724 container remove b4afa2272bc0ed626ff23b9e5dc72b8a1d32074a0123607a1faed075c3883923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_cori, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Sep 30 17:38:37 compute-0 systemd[1]: libpod-conmon-b4afa2272bc0ed626ff23b9e5dc72b8a1d32074a0123607a1faed075c3883923.scope: Deactivated successfully.
Sep 30 17:38:37 compute-0 sudo[84837]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:37 compute-0 python3[85010]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:38:37 compute-0 sudo[85015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:38:37 compute-0 sudo[85015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:37 compute-0 sudo[85015]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:37 compute-0 podman[85013]: 2025-09-30 17:38:37.146680838 +0000 UTC m=+0.060995267 container create 14e21d664f4f4fcb03a62bffe65b76e4a4f586b3c2f577f5a3752d7d01f0b1ab (image=quay.io/ceph/ceph:v19, name=reverent_jepsen, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:38:37 compute-0 systemd[1]: Started libpod-conmon-14e21d664f4f4fcb03a62bffe65b76e4a4f586b3c2f577f5a3752d7d01f0b1ab.scope.
Sep 30 17:38:37 compute-0 sudo[85049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 17:38:37 compute-0 sudo[85049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:37 compute-0 podman[85013]: 2025-09-30 17:38:37.107525755 +0000 UTC m=+0.021840204 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:38:37 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbbdb1152034f0b596e5aa4c588767e715030587ba295a637a4b4a965e44e854/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbbdb1152034f0b596e5aa4c588767e715030587ba295a637a4b4a965e44e854/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbbdb1152034f0b596e5aa4c588767e715030587ba295a637a4b4a965e44e854/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:37 compute-0 podman[85013]: 2025-09-30 17:38:37.260165936 +0000 UTC m=+0.174480405 container init 14e21d664f4f4fcb03a62bffe65b76e4a4f586b3c2f577f5a3752d7d01f0b1ab (image=quay.io/ceph/ceph:v19, name=reverent_jepsen, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default)
Sep 30 17:38:37 compute-0 podman[85013]: 2025-09-30 17:38:37.267923353 +0000 UTC m=+0.182237782 container start 14e21d664f4f4fcb03a62bffe65b76e4a4f586b3c2f577f5a3752d7d01f0b1ab (image=quay.io/ceph/ceph:v19, name=reverent_jepsen, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Sep 30 17:38:37 compute-0 podman[85013]: 2025-09-30 17:38:37.290106856 +0000 UTC m=+0.204421305 container attach 14e21d664f4f4fcb03a62bffe65b76e4a4f586b3c2f577f5a3752d7d01f0b1ab (image=quay.io/ceph/ceph:v19, name=reverent_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 17:38:37 compute-0 podman[85139]: 2025-09-30 17:38:37.646194976 +0000 UTC m=+0.113676464 container create 67d262341feb6886f3d44826506e7f3e1e98a55cd21e7ff24c85a60fe1681d83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_carson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Sep 30 17:38:37 compute-0 podman[85139]: 2025-09-30 17:38:37.555905456 +0000 UTC m=+0.023386954 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:38:37 compute-0 systemd[1]: Started libpod-conmon-67d262341feb6886f3d44826506e7f3e1e98a55cd21e7ff24c85a60fe1681d83.scope.
Sep 30 17:38:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Sep 30 17:38:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3985569669' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Sep 30 17:38:37 compute-0 reverent_jepsen[85076]: 
Sep 30 17:38:37 compute-0 reverent_jepsen[85076]: {"fsid":"63d32c6a-fa18-54ed-8711-9a3915cc367b","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":10,"quorum":[0,1],"quorum_names":["compute-0","compute-1"],"quorum_age":32,"monmap":{"epoch":2,"min_mon_release_name":"squid","num_mons":2},"osdmap":{"epoch":12,"num_osds":2,"num_up_osds":1,"osd_up_since":1759253909,"num_in_osds":2,"osd_in_since":1759253889,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"unknown","count":1}],"num_pgs":1,"num_pools":1,"num_objects":0,"data_bytes":0,"bytes_used":27598848,"bytes_avail":21443043328,"bytes_total":21470642176,"unknown_pgs_ratio":1},"fsmap":{"epoch":1,"btime":"2025-09-30T17:36:27:095553+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":1,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-09-30T17:38:02.720729+0000","services":{}},"progress_events":{}}
Sep 30 17:38:37 compute-0 podman[85013]: 2025-09-30 17:38:37.704537365 +0000 UTC m=+0.618851784 container died 14e21d664f4f4fcb03a62bffe65b76e4a4f586b3c2f577f5a3752d7d01f0b1ab (image=quay.io/ceph/ceph:v19, name=reverent_jepsen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:38:37 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:37 compute-0 systemd[1]: libpod-14e21d664f4f4fcb03a62bffe65b76e4a4f586b3c2f577f5a3752d7d01f0b1ab.scope: Deactivated successfully.
Sep 30 17:38:37 compute-0 podman[85139]: 2025-09-30 17:38:37.736479895 +0000 UTC m=+0.203961393 container init 67d262341feb6886f3d44826506e7f3e1e98a55cd21e7ff24c85a60fe1681d83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_carson, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Sep 30 17:38:37 compute-0 podman[85139]: 2025-09-30 17:38:37.744186531 +0000 UTC m=+0.211667999 container start 67d262341feb6886f3d44826506e7f3e1e98a55cd21e7ff24c85a60fe1681d83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_carson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:38:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbbdb1152034f0b596e5aa4c588767e715030587ba295a637a4b4a965e44e854-merged.mount: Deactivated successfully.
Sep 30 17:38:37 compute-0 objective_carson[85156]: 167 167
Sep 30 17:38:37 compute-0 systemd[1]: libpod-67d262341feb6886f3d44826506e7f3e1e98a55cd21e7ff24c85a60fe1681d83.scope: Deactivated successfully.
Sep 30 17:38:37 compute-0 podman[85139]: 2025-09-30 17:38:37.752089741 +0000 UTC m=+0.219571219 container attach 67d262341feb6886f3d44826506e7f3e1e98a55cd21e7ff24c85a60fe1681d83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Sep 30 17:38:37 compute-0 podman[85139]: 2025-09-30 17:38:37.752374668 +0000 UTC m=+0.219856146 container died 67d262341feb6886f3d44826506e7f3e1e98a55cd21e7ff24c85a60fe1681d83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Sep 30 17:38:37 compute-0 ceph-osd[82241]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 15.374 iops: 3935.747 elapsed_sec: 0.762
Sep 30 17:38:37 compute-0 ceph-osd[82241]: log_channel(cluster) log [WRN] : OSD bench result of 3935.746627 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Sep 30 17:38:37 compute-0 ceph-osd[82241]: osd.0 0 waiting for initial osdmap
Sep 30 17:38:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-osd-0[82237]: 2025-09-30T17:38:37.763+0000 7f31b6c77640 -1 osd.0 0 waiting for initial osdmap
Sep 30 17:38:37 compute-0 ceph-osd[82241]: osd.0 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Sep 30 17:38:37 compute-0 ceph-osd[82241]: osd.0 12 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Sep 30 17:38:37 compute-0 ceph-osd[82241]: osd.0 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Sep 30 17:38:37 compute-0 ceph-osd[82241]: osd.0 12 check_osdmap_features require_osd_release unknown -> squid
Sep 30 17:38:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-17c45f1d3c497d8feede1bcd015752db78237d0538a1e30e4cecdfac6296bcee-merged.mount: Deactivated successfully.
Sep 30 17:38:37 compute-0 podman[85013]: 2025-09-30 17:38:37.786545875 +0000 UTC m=+0.700860314 container remove 14e21d664f4f4fcb03a62bffe65b76e4a4f586b3c2f577f5a3752d7d01f0b1ab (image=quay.io/ceph/ceph:v19, name=reverent_jepsen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:38:37 compute-0 ceph-osd[82241]: osd.0 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Sep 30 17:38:37 compute-0 ceph-osd[82241]: osd.0 12 set_numa_affinity not setting numa affinity
Sep 30 17:38:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-osd-0[82237]: 2025-09-30T17:38:37.787+0000 7f31b229f640 -1 osd.0 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Sep 30 17:38:37 compute-0 ceph-osd[82241]: osd.0 12 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Sep 30 17:38:37 compute-0 systemd[1]: libpod-conmon-14e21d664f4f4fcb03a62bffe65b76e4a4f586b3c2f577f5a3752d7d01f0b1ab.scope: Deactivated successfully.
Sep 30 17:38:37 compute-0 podman[85139]: 2025-09-30 17:38:37.80013143 +0000 UTC m=+0.267612908 container remove 67d262341feb6886f3d44826506e7f3e1e98a55cd21e7ff24c85a60fe1681d83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Sep 30 17:38:37 compute-0 sudo[85008]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:37 compute-0 systemd[1]: libpod-conmon-67d262341feb6886f3d44826506e7f3e1e98a55cd21e7ff24c85a60fe1681d83.scope: Deactivated successfully.
Sep 30 17:38:37 compute-0 ceph-mgr[74051]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/129916338; not ready for session (expect reconnect)
Sep 30 17:38:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 17:38:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:37 compute-0 ceph-mgr[74051]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Sep 30 17:38:37 compute-0 ceph-mon[73755]: pgmap v51: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Sep 30 17:38:37 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3985569669' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Sep 30 17:38:37 compute-0 ceph-mon[73755]: OSD bench result of 3935.746627 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
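[editor's note] The bench warning repeated here means osd.0 measured ~3936 IOPS, outside the 50-500 IOPS sanity window, so the mclock scheduler keeps the default 315 IOPS capacity. The message itself recommends benchmarking externally (e.g. with fio) and overriding osd_mclock_max_capacity_iops_[hdd|ssd]; a minimal sketch of that override, assuming an ssd device class and a placeholder measurement of 3000 IOPS (both assumptions, not values from this log):

    #!/usr/bin/env python3
    # Illustrative override of the mclock IOPS capacity for one OSD, per the
    # recommendation in the warning above. 3000 is a placeholder measurement.
    import subprocess

    subprocess.run(
        ["ceph", "config", "set", "osd.0",
         "osd_mclock_max_capacity_iops_ssd", "3000"],
        check=True)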
Sep 30 17:38:37 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:37 compute-0 podman[85193]: 2025-09-30 17:38:37.948838261 +0000 UTC m=+0.042815027 container create 028b32035fc71ac1c0ddaba94f448a46a41d4302db6a15c9922a2b51150dfd75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_kare, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:38:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Sep 30 17:38:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Sep 30 17:38:37 compute-0 ceph-mon[73755]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/129916338,v1:192.168.122.100:6803/129916338] boot
Sep 30 17:38:37 compute-0 systemd[1]: Started libpod-conmon-028b32035fc71ac1c0ddaba94f448a46a41d4302db6a15c9922a2b51150dfd75.scope.
Sep 30 17:38:37 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Sep 30 17:38:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 17:38:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:38 compute-0 ceph-osd[82241]: osd.0 13 state: booting -> active
Sep 30 17:38:38 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 13 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [0] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:38:38 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/612d568281cd940f398e3278272f7b89db439a6c780cd7a6c3cb4c0e6decf3fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/612d568281cd940f398e3278272f7b89db439a6c780cd7a6c3cb4c0e6decf3fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/612d568281cd940f398e3278272f7b89db439a6c780cd7a6c3cb4c0e6decf3fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/612d568281cd940f398e3278272f7b89db439a6c780cd7a6c3cb4c0e6decf3fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:38 compute-0 podman[85193]: 2025-09-30 17:38:37.931642195 +0000 UTC m=+0.025618981 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:38:38 compute-0 podman[85193]: 2025-09-30 17:38:38.034403511 +0000 UTC m=+0.128380287 container init 028b32035fc71ac1c0ddaba94f448a46a41d4302db6a15c9922a2b51150dfd75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_kare, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Sep 30 17:38:38 compute-0 podman[85193]: 2025-09-30 17:38:38.041524601 +0000 UTC m=+0.135501367 container start 028b32035fc71ac1c0ddaba94f448a46a41d4302db6a15c9922a2b51150dfd75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_kare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:38:38 compute-0 podman[85193]: 2025-09-30 17:38:38.045038171 +0000 UTC m=+0.139014947 container attach 028b32035fc71ac1c0ddaba94f448a46a41d4302db6a15c9922a2b51150dfd75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_kare, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:38:38 compute-0 sudo[85237]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqcoophqwpzipkqoqvztrxwwphysniim ; /usr/bin/python3'
Sep 30 17:38:38 compute-0 sudo[85237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:38:38 compute-0 python3[85239]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
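[editor's note] This task creates the `vms` pool with the replicated_rule CRUSH rule and autoscaling on; the POOL_APP_NOT_ENABLED check earlier in this log is why a new pool normally also gets an application tag, as was done for .mgr above. A minimal sketch of that pair of commands, assuming the pool will be consumed over RBD (the "rbd" application name is an assumption, not something this log states):

    #!/usr/bin/env python3
    # Illustrative: create the pool as the task above does, then tag it so the
    # POOL_APP_NOT_ENABLED health check does not fire for it.
    import subprocess

    def ceph(*args: str) -> None:
        subprocess.run(["ceph", *args], check=True)

    ceph("osd", "pool", "create", "vms", "replicated_rule",
         "--autoscale-mode", "on")
    ceph("osd", "pool", "application", "enable", "vms", "rbd")  # app name assumed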
Sep 30 17:38:38 compute-0 podman[85242]: 2025-09-30 17:38:38.31275661 +0000 UTC m=+0.045069704 container create 635c390f2a31ec82131e826493be9ebe9bb4ca4f002e6993f039fec20fc642ab (image=quay.io/ceph/ceph:v19, name=bold_meitner, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Sep 30 17:38:38 compute-0 optimistic_kare[85209]: {
Sep 30 17:38:38 compute-0 optimistic_kare[85209]:     "0": [
Sep 30 17:38:38 compute-0 optimistic_kare[85209]:         {
Sep 30 17:38:38 compute-0 optimistic_kare[85209]:             "devices": [
Sep 30 17:38:38 compute-0 optimistic_kare[85209]:                 "/dev/loop3"
Sep 30 17:38:38 compute-0 optimistic_kare[85209]:             ],
Sep 30 17:38:38 compute-0 optimistic_kare[85209]:             "lv_name": "ceph_lv0",
Sep 30 17:38:38 compute-0 optimistic_kare[85209]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:38:38 compute-0 optimistic_kare[85209]:             "lv_size": "21470642176",
Sep 30 17:38:38 compute-0 optimistic_kare[85209]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 17:38:38 compute-0 optimistic_kare[85209]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:38:38 compute-0 optimistic_kare[85209]:             "name": "ceph_lv0",
Sep 30 17:38:38 compute-0 optimistic_kare[85209]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:38:38 compute-0 optimistic_kare[85209]:             "tags": {
Sep 30 17:38:38 compute-0 optimistic_kare[85209]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:38:38 compute-0 optimistic_kare[85209]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:38:38 compute-0 optimistic_kare[85209]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 17:38:38 compute-0 optimistic_kare[85209]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 17:38:38 compute-0 optimistic_kare[85209]:                 "ceph.cluster_name": "ceph",
Sep 30 17:38:38 compute-0 optimistic_kare[85209]:                 "ceph.crush_device_class": "",
Sep 30 17:38:38 compute-0 optimistic_kare[85209]:                 "ceph.encrypted": "0",
Sep 30 17:38:38 compute-0 optimistic_kare[85209]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 17:38:38 compute-0 optimistic_kare[85209]:                 "ceph.osd_id": "0",
Sep 30 17:38:38 compute-0 optimistic_kare[85209]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 17:38:38 compute-0 optimistic_kare[85209]:                 "ceph.type": "block",
Sep 30 17:38:38 compute-0 optimistic_kare[85209]:                 "ceph.vdo": "0",
Sep 30 17:38:38 compute-0 optimistic_kare[85209]:                 "ceph.with_tpm": "0"
Sep 30 17:38:38 compute-0 optimistic_kare[85209]:             },
Sep 30 17:38:38 compute-0 optimistic_kare[85209]:             "type": "block",
Sep 30 17:38:38 compute-0 optimistic_kare[85209]:             "vg_name": "ceph_vg0"
Sep 30 17:38:38 compute-0 optimistic_kare[85209]:         }
Sep 30 17:38:38 compute-0 optimistic_kare[85209]:     ]
Sep 30 17:38:38 compute-0 optimistic_kare[85209]: }
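[editor's note] optimistic_kare is the `cephadm ... ceph-volume lvm list --format json` call from a few lines up; its output maps OSD id 0 to /dev/ceph_vg0/ceph_lv0 (backed by /dev/loop3) via the LV tags. A small sketch of pulling the id-to-device mapping out of that JSON shape, offered as an illustration of the structure printed above rather than a general ceph-volume client:

    #!/usr/bin/env python3
    # Illustrative parse of `ceph-volume lvm list --format json` output, whose
    # shape is shown above: {osd_id: [ {devices, lv_path, tags, ...}, ... ]}.
    import json

    def osd_devices(lvm_list_json: str) -> dict[str, list[str]]:
        data = json.loads(lvm_list_json)
        return {osd_id: [dev for lv in lvs for dev in lv.get("devices", [])]
                for osd_id, lvs in data.items()}

    example = '{"0": [{"devices": ["/dev/loop3"], "lv_path": "/dev/ceph_vg0/ceph_lv0"}]}'
    print(osd_devices(example))   # {'0': ['/dev/loop3']}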
Sep 30 17:38:38 compute-0 systemd[1]: libpod-028b32035fc71ac1c0ddaba94f448a46a41d4302db6a15c9922a2b51150dfd75.scope: Deactivated successfully.
Sep 30 17:38:38 compute-0 podman[85193]: 2025-09-30 17:38:38.347962533 +0000 UTC m=+0.441939299 container died 028b32035fc71ac1c0ddaba94f448a46a41d4302db6a15c9922a2b51150dfd75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_kare, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Sep 30 17:38:38 compute-0 systemd[1]: Started libpod-conmon-635c390f2a31ec82131e826493be9ebe9bb4ca4f002e6993f039fec20fc642ab.scope.
Sep 30 17:38:38 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-612d568281cd940f398e3278272f7b89db439a6c780cd7a6c3cb4c0e6decf3fe-merged.mount: Deactivated successfully.
Sep 30 17:38:38 compute-0 podman[85242]: 2025-09-30 17:38:38.293115312 +0000 UTC m=+0.025428446 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:38:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab3c9be66e475123231c267b3a6f7b97c7ef95fd15405d1f3158ba3b30b9377d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab3c9be66e475123231c267b3a6f7b97c7ef95fd15405d1f3158ba3b30b9377d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:38 compute-0 podman[85193]: 2025-09-30 17:38:38.396568416 +0000 UTC m=+0.490545182 container remove 028b32035fc71ac1c0ddaba94f448a46a41d4302db6a15c9922a2b51150dfd75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_kare, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:38:38 compute-0 systemd[1]: libpod-conmon-028b32035fc71ac1c0ddaba94f448a46a41d4302db6a15c9922a2b51150dfd75.scope: Deactivated successfully.
Sep 30 17:38:38 compute-0 podman[85242]: 2025-09-30 17:38:38.410111729 +0000 UTC m=+0.142424853 container init 635c390f2a31ec82131e826493be9ebe9bb4ca4f002e6993f039fec20fc642ab (image=quay.io/ceph/ceph:v19, name=bold_meitner, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Sep 30 17:38:38 compute-0 podman[85242]: 2025-09-30 17:38:38.416765028 +0000 UTC m=+0.149078122 container start 635c390f2a31ec82131e826493be9ebe9bb4ca4f002e6993f039fec20fc642ab (image=quay.io/ceph/ceph:v19, name=bold_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:38:38 compute-0 podman[85242]: 2025-09-30 17:38:38.422933524 +0000 UTC m=+0.155246618 container attach 635c390f2a31ec82131e826493be9ebe9bb4ca4f002e6993f039fec20fc642ab (image=quay.io/ceph/ceph:v19, name=bold_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:38:38 compute-0 sudo[85049]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:38 compute-0 sudo[85276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:38:38 compute-0 sudo[85276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:38 compute-0 sudo[85276]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:38 compute-0 sudo[85301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 17:38:38 compute-0 sudo[85301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
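[editor's note] The sudo entry above shows cephadm running "ceph-volume raw list" inside a Ceph container to inventory devices for this cluster fsid. A rough hand-run equivalent, assuming the same cephadm script and fsid as in the log (cephadm supplies the container image and bind mounts itself):

    # Inventory prepared raw/LVM devices as JSON for the cluster fsid seen above
    sudo cephadm ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json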
Sep 30 17:38:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Sep 30 17:38:38 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Sep 30 17:38:38 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4127359789' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Sep 30 17:38:38 compute-0 podman[85387]: 2025-09-30 17:38:38.889868016 +0000 UTC m=+0.036568968 container create e6cad81bd61752c3ec0ef172a70cd3d855797a4c350f6b1031f7913c0176d5a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:38:38 compute-0 systemd[1]: Started libpod-conmon-e6cad81bd61752c3ec0ef172a70cd3d855797a4c350f6b1031f7913c0176d5a3.scope.
Sep 30 17:38:38 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:38 compute-0 podman[85387]: 2025-09-30 17:38:38.956603109 +0000 UTC m=+0.103304061 container init e6cad81bd61752c3ec0ef172a70cd3d855797a4c350f6b1031f7913c0176d5a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_hertz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Sep 30 17:38:38 compute-0 podman[85387]: 2025-09-30 17:38:38.962596771 +0000 UTC m=+0.109297723 container start e6cad81bd61752c3ec0ef172a70cd3d855797a4c350f6b1031f7913c0176d5a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_hertz, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:38:38 compute-0 eager_hertz[85403]: 167 167
Sep 30 17:38:38 compute-0 systemd[1]: libpod-e6cad81bd61752c3ec0ef172a70cd3d855797a4c350f6b1031f7913c0176d5a3.scope: Deactivated successfully.
Sep 30 17:38:38 compute-0 podman[85387]: 2025-09-30 17:38:38.966164281 +0000 UTC m=+0.112865313 container attach e6cad81bd61752c3ec0ef172a70cd3d855797a4c350f6b1031f7913c0176d5a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Sep 30 17:38:38 compute-0 podman[85387]: 2025-09-30 17:38:38.966451208 +0000 UTC m=+0.113152160 container died e6cad81bd61752c3ec0ef172a70cd3d855797a4c350f6b1031f7913c0176d5a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_hertz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Sep 30 17:38:38 compute-0 podman[85387]: 2025-09-30 17:38:38.873427399 +0000 UTC m=+0.020128371 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:38:38 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Sep 30 17:38:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f9090328c5a3586592f6d06547cb4680b0935a9f2f76aa5dbfa495042a193b3-merged.mount: Deactivated successfully.
Sep 30 17:38:38 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4127359789' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Sep 30 17:38:38 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Sep 30 17:38:38 compute-0 bold_meitner[85267]: pool 'vms' created
Sep 30 17:38:38 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
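[editor's note] The lines above are one "osd pool create" round trip: the monitor logs the command at dispatch, commits a new osdmap epoch (e13 -> e14), and the client container (bold_meitner) prints the confirmation. A hedged sketch of verifying the result from any node holding the admin keyring:

    # Confirm the pool exists and inspect its replication and autoscale settings
    sudo ceph osd pool ls detail
    sudo ceph osd pool autoscale-status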
Sep 30 17:38:39 compute-0 podman[85387]: 2025-09-30 17:38:39.001548969 +0000 UTC m=+0.148249921 container remove e6cad81bd61752c3ec0ef172a70cd3d855797a4c350f6b1031f7913c0176d5a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_hertz, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:38:39 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 14 pg[2.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [0] r=0 lpr=14 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:38:39 compute-0 ceph-mon[73755]: osd.0 [v2:192.168.122.100:6802/129916338,v1:192.168.122.100:6803/129916338] boot
Sep 30 17:38:39 compute-0 ceph-mon[73755]: osdmap e13: 2 total, 2 up, 2 in
Sep 30 17:38:39 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:38:39 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/4127359789' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Sep 30 17:38:39 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 14 pg[1.0( empty local-lis/les=13/14 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [0] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:38:39 compute-0 systemd[1]: libpod-conmon-e6cad81bd61752c3ec0ef172a70cd3d855797a4c350f6b1031f7913c0176d5a3.scope: Deactivated successfully.
Sep 30 17:38:39 compute-0 systemd[1]: libpod-635c390f2a31ec82131e826493be9ebe9bb4ca4f002e6993f039fec20fc642ab.scope: Deactivated successfully.
Sep 30 17:38:39 compute-0 conmon[85267]: conmon 635c390f2a31ec82131e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-635c390f2a31ec82131e826493be9ebe9bb4ca4f002e6993f039fec20fc642ab.scope/container/memory.events
Sep 30 17:38:39 compute-0 podman[85242]: 2025-09-30 17:38:39.021816603 +0000 UTC m=+0.754129697 container died 635c390f2a31ec82131e826493be9ebe9bb4ca4f002e6993f039fec20fc642ab (image=quay.io/ceph/ceph:v19, name=bold_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:38:39 compute-0 ceph-mgr[74051]: [devicehealth INFO root] creating main.db for devicehealth
Sep 30 17:38:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab3c9be66e475123231c267b3a6f7b97c7ef95fd15405d1f3158ba3b30b9377d-merged.mount: Deactivated successfully.
Sep 30 17:38:39 compute-0 podman[85242]: 2025-09-30 17:38:39.063933041 +0000 UTC m=+0.796246135 container remove 635c390f2a31ec82131e826493be9ebe9bb4ca4f002e6993f039fec20fc642ab (image=quay.io/ceph/ceph:v19, name=bold_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 17:38:39 compute-0 systemd[1]: libpod-conmon-635c390f2a31ec82131e826493be9ebe9bb4ca4f002e6993f039fec20fc642ab.scope: Deactivated successfully.
Sep 30 17:38:39 compute-0 sudo[85237]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:39 compute-0 ceph-mgr[74051]: [devicehealth INFO root] Check health
Sep 30 17:38:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:38:39 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Sep 30 17:38:39 compute-0 sudo[85454]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Sep 30 17:38:39 compute-0 sudo[85454]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Sep 30 17:38:39 compute-0 sudo[85454]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Sep 30 17:38:39 compute-0 sudo[85454]: pam_unix(sudo:session): session closed for user root
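[editor's note] The 'smart' admin-socket command and the sudo entry above come from the mgr devicehealth module, which shells out to smartctl as the ceph user (uid 167) to collect device metrics; the pam_systemd "Failed to connect to system bus" warning is harmless here because no login session is needed. The logged invocation can be reproduced by hand:

    # Full SMART report for the backing disk, as JSON plus the original text output
    sudo smartctl -x --json=o /dev/vda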
Sep 30 17:38:39 compute-0 podman[85447]: 2025-09-30 17:38:39.157481453 +0000 UTC m=+0.041870143 container create cf86ef45a15c45da7cb0e3e9aef8ca05f12ab32cd46bf241ee0c822e8c95d54f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_chandrasekhar, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:38:39 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Sep 30 17:38:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Sep 30 17:38:39 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Sep 30 17:38:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Sep 30 17:38:39 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 17:38:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Sep 30 17:38:39 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Sep 30 17:38:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Sep 30 17:38:39 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 17:38:39 compute-0 systemd[1]: Started libpod-conmon-cf86ef45a15c45da7cb0e3e9aef8ca05f12ab32cd46bf241ee0c822e8c95d54f.scope.
Sep 30 17:38:39 compute-0 sudo[85490]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvxnjytfqieweblornzjvwyvelbuhope ; /usr/bin/python3'
Sep 30 17:38:39 compute-0 sudo[85490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:38:39 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:39 compute-0 podman[85447]: 2025-09-30 17:38:39.135957537 +0000 UTC m=+0.020346247 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b22835ec94353ca7088d866b629403420207956b656b5bfc495c5d38dca4ccff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b22835ec94353ca7088d866b629403420207956b656b5bfc495c5d38dca4ccff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b22835ec94353ca7088d866b629403420207956b656b5bfc495c5d38dca4ccff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b22835ec94353ca7088d866b629403420207956b656b5bfc495c5d38dca4ccff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:39 compute-0 podman[85447]: 2025-09-30 17:38:39.265184615 +0000 UTC m=+0.149573325 container init cf86ef45a15c45da7cb0e3e9aef8ca05f12ab32cd46bf241ee0c822e8c95d54f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Sep 30 17:38:39 compute-0 podman[85447]: 2025-09-30 17:38:39.27207917 +0000 UTC m=+0.156467860 container start cf86ef45a15c45da7cb0e3e9aef8ca05f12ab32cd46bf241ee0c822e8c95d54f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_chandrasekhar, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Sep 30 17:38:39 compute-0 podman[85447]: 2025-09-30 17:38:39.275899586 +0000 UTC m=+0.160288276 container attach cf86ef45a15c45da7cb0e3e9aef8ca05f12ab32cd46bf241ee0c822e8c95d54f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Sep 30 17:38:39 compute-0 python3[85495]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
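[editor's note] The ansible-ansible.legacy.command entry above flattens the podman invocation into a single _raw_params string. Reconstructed as it would be typed on the host (image, paths, and fsid taken from the log itself), it is roughly:

    # Create the 'volumes' pool via a throwaway ceph client container
    podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z \
      --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      osd pool create volumes replicated_rule --autoscale-mode on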
Sep 30 17:38:39 compute-0 podman[85498]: 2025-09-30 17:38:39.409771462 +0000 UTC m=+0.037500243 container create 61c9eb97ddae0ab56afd4e380fe344dfd421c1f9cd641b2074ea68cfcb5ba4b7 (image=quay.io/ceph/ceph:v19, name=cranky_cerf, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:38:39 compute-0 systemd[1]: Started libpod-conmon-61c9eb97ddae0ab56afd4e380fe344dfd421c1f9cd641b2074ea68cfcb5ba4b7.scope.
Sep 30 17:38:39 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0c5811d7c4374ea072311e43c92c72c8428ed886e6ad9cda7e8953ea56c115b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0c5811d7c4374ea072311e43c92c72c8428ed886e6ad9cda7e8953ea56c115b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:39 compute-0 podman[85498]: 2025-09-30 17:38:39.39392637 +0000 UTC m=+0.021655171 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:38:39 compute-0 podman[85498]: 2025-09-30 17:38:39.500181654 +0000 UTC m=+0.127910475 container init 61c9eb97ddae0ab56afd4e380fe344dfd421c1f9cd641b2074ea68cfcb5ba4b7 (image=quay.io/ceph/ceph:v19, name=cranky_cerf, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:38:39 compute-0 podman[85498]: 2025-09-30 17:38:39.505550121 +0000 UTC m=+0.133278902 container start 61c9eb97ddae0ab56afd4e380fe344dfd421c1f9cd641b2074ea68cfcb5ba4b7 (image=quay.io/ceph/ceph:v19, name=cranky_cerf, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Sep 30 17:38:39 compute-0 podman[85498]: 2025-09-30 17:38:39.508617638 +0000 UTC m=+0.136346439 container attach 61c9eb97ddae0ab56afd4e380fe344dfd421c1f9cd641b2074ea68cfcb5ba4b7 (image=quay.io/ceph/ceph:v19, name=cranky_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid)
Sep 30 17:38:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Sep 30 17:38:39 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2260967211' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Sep 30 17:38:39 compute-0 lvm[85605]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 17:38:39 compute-0 lvm[85605]: VG ceph_vg0 finished
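[editor's note] These two lvm messages are event-based autoactivation: udev reports /dev/loop3, pvscan marks the PV online, and once every PV of ceph_vg0 is present the VG is activated. A quick hedged check that the VG and its OSD LV are visible, assuming the names from the log:

    # Both commands should list ceph_vg0, the second showing its single LV (ceph_lv0)
    sudo vgs ceph_vg0
    sudo lvs ceph_vg0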
Sep 30 17:38:39 compute-0 suspicious_chandrasekhar[85492]: {}
Sep 30 17:38:39 compute-0 systemd[1]: libpod-cf86ef45a15c45da7cb0e3e9aef8ca05f12ab32cd46bf241ee0c822e8c95d54f.scope: Deactivated successfully.
Sep 30 17:38:39 compute-0 systemd[1]: libpod-cf86ef45a15c45da7cb0e3e9aef8ca05f12ab32cd46bf241ee0c822e8c95d54f.scope: Consumed 1.037s CPU time.
Sep 30 17:38:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Sep 30 17:38:39 compute-0 podman[85612]: 2025-09-30 17:38:39.99920953 +0000 UTC m=+0.025259901 container died cf86ef45a15c45da7cb0e3e9aef8ca05f12ab32cd46bf241ee0c822e8c95d54f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:38:40 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2260967211' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Sep 30 17:38:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Sep 30 17:38:40 compute-0 cranky_cerf[85514]: pool 'volumes' created
Sep 30 17:38:40 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Sep 30 17:38:40 compute-0 ceph-mon[73755]: pgmap v53: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Sep 30 17:38:40 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/4127359789' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Sep 30 17:38:40 compute-0 ceph-mon[73755]: osdmap e14: 2 total, 2 up, 2 in
Sep 30 17:38:40 compute-0 ceph-mon[73755]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Sep 30 17:38:40 compute-0 ceph-mon[73755]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Sep 30 17:38:40 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Sep 30 17:38:40 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 17:38:40 compute-0 ceph-mon[73755]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Sep 30 17:38:40 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Sep 30 17:38:40 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 17:38:40 compute-0 ceph-mon[73755]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Sep 30 17:38:40 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2260967211' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Sep 30 17:38:40 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2260967211' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Sep 30 17:38:40 compute-0 ceph-mon[73755]: osdmap e15: 2 total, 2 up, 2 in
Sep 30 17:38:40 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 15 pg[2.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [0] r=0 lpr=14 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:38:40 compute-0 systemd[1]: libpod-61c9eb97ddae0ab56afd4e380fe344dfd421c1f9cd641b2074ea68cfcb5ba4b7.scope: Deactivated successfully.
Sep 30 17:38:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-b22835ec94353ca7088d866b629403420207956b656b5bfc495c5d38dca4ccff-merged.mount: Deactivated successfully.
Sep 30 17:38:40 compute-0 podman[85612]: 2025-09-30 17:38:40.040269882 +0000 UTC m=+0.066320233 container remove cf86ef45a15c45da7cb0e3e9aef8ca05f12ab32cd46bf241ee0c822e8c95d54f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_chandrasekhar, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Sep 30 17:38:40 compute-0 systemd[1]: libpod-conmon-cf86ef45a15c45da7cb0e3e9aef8ca05f12ab32cd46bf241ee0c822e8c95d54f.scope: Deactivated successfully.
Sep 30 17:38:40 compute-0 podman[85627]: 2025-09-30 17:38:40.068657651 +0000 UTC m=+0.028229846 container died 61c9eb97ddae0ab56afd4e380fe344dfd421c1f9cd641b2074ea68cfcb5ba4b7 (image=quay.io/ceph/ceph:v19, name=cranky_cerf, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:38:40 compute-0 sudo[85301]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:38:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0c5811d7c4374ea072311e43c92c72c8428ed886e6ad9cda7e8953ea56c115b-merged.mount: Deactivated successfully.
Sep 30 17:38:40 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:38:40 compute-0 podman[85627]: 2025-09-30 17:38:40.114388621 +0000 UTC m=+0.073960806 container remove 61c9eb97ddae0ab56afd4e380fe344dfd421c1f9cd641b2074ea68cfcb5ba4b7 (image=quay.io/ceph/ceph:v19, name=cranky_cerf, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:38:40 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:40 compute-0 systemd[1]: libpod-conmon-61c9eb97ddae0ab56afd4e380fe344dfd421c1f9cd641b2074ea68cfcb5ba4b7.scope: Deactivated successfully.
Sep 30 17:38:40 compute-0 sudo[85490]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:40 compute-0 sudo[85642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 17:38:40 compute-0 sudo[85642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:40 compute-0 sudo[85642]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:40 compute-0 sudo[85690]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmjwpyswwmzokrrqwtlmpxjbsagifbyw ; /usr/bin/python3'
Sep 30 17:38:40 compute-0 sudo[85690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:38:40 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Sep 30 17:38:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Sep 30 17:38:40 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Sep 30 17:38:40 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Sep 30 17:38:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Sep 30 17:38:40 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Sep 30 17:38:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:38:40 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:38:40 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Sep 30 17:38:40 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
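[editor's note] The reconfigure pass above is cephadm regenerating the daemon's config after a monmap change: it fetches the mon keyring ("auth get"), reads the configured public_network, and asks the monitor for a minimal ceph.conf. The last step can be reproduced directly and is handy when hand-writing client configs:

    # Print the minimal ceph.conf (fsid + mon_host) that cephadm distributes to daemons and clients
    sudo ceph config generate-minimal-conf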
Sep 30 17:38:40 compute-0 sudo[85693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:38:40 compute-0 sudo[85693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:40 compute-0 sudo[85693]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:40 compute-0 sudo[85718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:38:40 compute-0 sudo[85718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
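[editor's note] "_orch deploy" is the internal cephadm entry point that the mgr/cephadm module drives over SSH; the deployment spec itself is passed out of band, so it does not appear in the sudo line above. A hedged follow-up to see what such a pass actually (re)deployed on this host:

    # List cephadm-managed daemons on this host with their container IDs and versions
    sudo cephadm ls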
Sep 30 17:38:40 compute-0 python3[85692]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:38:40 compute-0 podman[85743]: 2025-09-30 17:38:40.435825763 +0000 UTC m=+0.034662250 container create daea436e35600bf882030d968027c529f6a7c8f1cee76e307c64a422fc743700 (image=quay.io/ceph/ceph:v19, name=thirsty_easley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Sep 30 17:38:40 compute-0 systemd[1]: Started libpod-conmon-daea436e35600bf882030d968027c529f6a7c8f1cee76e307c64a422fc743700.scope.
Sep 30 17:38:40 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6f889e479feaf5058b91f2ecd6615474793863e79e5f9522c95f605e6108157/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6f889e479feaf5058b91f2ecd6615474793863e79e5f9522c95f605e6108157/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:40 compute-0 podman[85743]: 2025-09-30 17:38:40.421759636 +0000 UTC m=+0.020596143 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:38:40 compute-0 podman[85743]: 2025-09-30 17:38:40.519191117 +0000 UTC m=+0.118027634 container init daea436e35600bf882030d968027c529f6a7c8f1cee76e307c64a422fc743700 (image=quay.io/ceph/ceph:v19, name=thirsty_easley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:38:40 compute-0 podman[85743]: 2025-09-30 17:38:40.528428582 +0000 UTC m=+0.127265079 container start daea436e35600bf882030d968027c529f6a7c8f1cee76e307c64a422fc743700 (image=quay.io/ceph/ceph:v19, name=thirsty_easley, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:38:40 compute-0 podman[85743]: 2025-09-30 17:38:40.535101141 +0000 UTC m=+0.133937658 container attach daea436e35600bf882030d968027c529f6a7c8f1cee76e307c64a422fc743700 (image=quay.io/ceph/ceph:v19, name=thirsty_easley, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Sep 30 17:38:40 compute-0 podman[85778]: 2025-09-30 17:38:40.663288972 +0000 UTC m=+0.037364979 container create 913fd81ef2b6fe57e6e259482fc0260188309d4f47493931e2b298d5a888681d (image=quay.io/ceph/ceph:v19, name=goofy_yonath, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:38:40 compute-0 systemd[1]: Started libpod-conmon-913fd81ef2b6fe57e6e259482fc0260188309d4f47493931e2b298d5a888681d.scope.
Sep 30 17:38:40 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v56: 3 pgs: 2 unknown, 1 creating+peering; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:38:40 compute-0 podman[85778]: 2025-09-30 17:38:40.730092996 +0000 UTC m=+0.104169003 container init 913fd81ef2b6fe57e6e259482fc0260188309d4f47493931e2b298d5a888681d (image=quay.io/ceph/ceph:v19, name=goofy_yonath, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Sep 30 17:38:40 compute-0 podman[85778]: 2025-09-30 17:38:40.735821391 +0000 UTC m=+0.109897398 container start 913fd81ef2b6fe57e6e259482fc0260188309d4f47493931e2b298d5a888681d (image=quay.io/ceph/ceph:v19, name=goofy_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 17:38:40 compute-0 goofy_yonath[85813]: 167 167
Sep 30 17:38:40 compute-0 systemd[1]: libpod-913fd81ef2b6fe57e6e259482fc0260188309d4f47493931e2b298d5a888681d.scope: Deactivated successfully.
Sep 30 17:38:40 compute-0 conmon[85813]: conmon 913fd81ef2b6fe57e6e2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-913fd81ef2b6fe57e6e259482fc0260188309d4f47493931e2b298d5a888681d.scope/container/memory.events
Sep 30 17:38:40 compute-0 podman[85778]: 2025-09-30 17:38:40.740587312 +0000 UTC m=+0.114663349 container attach 913fd81ef2b6fe57e6e259482fc0260188309d4f47493931e2b298d5a888681d (image=quay.io/ceph/ceph:v19, name=goofy_yonath, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:38:40 compute-0 podman[85778]: 2025-09-30 17:38:40.740847539 +0000 UTC m=+0.114923546 container died 913fd81ef2b6fe57e6e259482fc0260188309d4f47493931e2b298d5a888681d (image=quay.io/ceph/ceph:v19, name=goofy_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Sep 30 17:38:40 compute-0 podman[85778]: 2025-09-30 17:38:40.648133717 +0000 UTC m=+0.022209744 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:38:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-53708746dd6916cf575e430db528bc1f5278d877c0ae06c31f69387a89e44874-merged.mount: Deactivated successfully.
Sep 30 17:38:40 compute-0 podman[85778]: 2025-09-30 17:38:40.784235389 +0000 UTC m=+0.158311396 container remove 913fd81ef2b6fe57e6e259482fc0260188309d4f47493931e2b298d5a888681d (image=quay.io/ceph/ceph:v19, name=goofy_yonath, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:38:40 compute-0 systemd[1]: libpod-conmon-913fd81ef2b6fe57e6e259482fc0260188309d4f47493931e2b298d5a888681d.scope: Deactivated successfully.
Sep 30 17:38:40 compute-0 sudo[85718]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:38:40 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:38:40 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:40 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.efvthf (monmap changed)...
Sep 30 17:38:40 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.efvthf (monmap changed)...
Sep 30 17:38:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.efvthf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Sep 30 17:38:40 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.efvthf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
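[editor's note] As part of reconfiguring the mgr daemon, cephadm re-asserts its keyring with "auth get-or-create", which returns the existing key if the entity already exists rather than minting a new one. A CLI equivalent of the logged mon_command, using the same capability profile, would be roughly:

    # Fetch (or create) the mgr keyring with the standard cephadm capability set
    sudo ceph auth get-or-create mgr.compute-0.efvthf \
      mon 'profile mgr' osd 'allow *' mds 'allow *'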
Sep 30 17:38:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0)
Sep 30 17:38:40 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 17:38:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:38:40 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:38:40 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.efvthf on compute-0
Sep 30 17:38:40 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.efvthf on compute-0
Sep 30 17:38:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Sep 30 17:38:40 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/411356327' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Sep 30 17:38:40 compute-0 sudo[85830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:38:40 compute-0 sudo[85830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:40 compute-0 sudo[85830]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:40 compute-0 sudo[85858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:38:40 compute-0 sudo[85858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Sep 30 17:38:41 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/411356327' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Sep 30 17:38:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Sep 30 17:38:41 compute-0 thirsty_easley[85759]: pool 'backups' created
Sep 30 17:38:41 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Sep 30 17:38:41 compute-0 systemd[1]: libpod-daea436e35600bf882030d968027c529f6a7c8f1cee76e307c64a422fc743700.scope: Deactivated successfully.
Sep 30 17:38:41 compute-0 podman[85743]: 2025-09-30 17:38:41.048270134 +0000 UTC m=+0.647106651 container died daea436e35600bf882030d968027c529f6a7c8f1cee76e307c64a422fc743700 (image=quay.io/ceph/ceph:v19, name=thirsty_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Sep 30 17:38:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6f889e479feaf5058b91f2ecd6615474793863e79e5f9522c95f605e6108157-merged.mount: Deactivated successfully.
Sep 30 17:38:41 compute-0 podman[85743]: 2025-09-30 17:38:41.082175164 +0000 UTC m=+0.681011651 container remove daea436e35600bf882030d968027c529f6a7c8f1cee76e307c64a422fc743700 (image=quay.io/ceph/ceph:v19, name=thirsty_easley, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Sep 30 17:38:41 compute-0 systemd[1]: libpod-conmon-daea436e35600bf882030d968027c529f6a7c8f1cee76e307c64a422fc743700.scope: Deactivated successfully.
Sep 30 17:38:41 compute-0 sudo[85690]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:41 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:41 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:41 compute-0 ceph-mon[73755]: Reconfiguring mon.compute-0 (monmap changed)...
Sep 30 17:38:41 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Sep 30 17:38:41 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Sep 30 17:38:41 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:38:41 compute-0 ceph-mon[73755]: Reconfiguring daemon mon.compute-0 on compute-0
Sep 30 17:38:41 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:41 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:41 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.efvthf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Sep 30 17:38:41 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 17:38:41 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:38:41 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/411356327' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Sep 30 17:38:41 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/411356327' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Sep 30 17:38:41 compute-0 ceph-mon[73755]: osdmap e16: 2 total, 2 up, 2 in
Sep 30 17:38:41 compute-0 ceph-mon[73755]: log_channel(cluster) log [WRN] : Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Sep 30 17:38:41 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.efvthf(active, since 100s), standbys: compute-1.glbusf
Sep 30 17:38:41 compute-0 sudo[85931]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkodoelfpqvfcagcpvxpqlmttowpizut ; /usr/bin/python3'
Sep 30 17:38:41 compute-0 sudo[85931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:38:41 compute-0 podman[85938]: 2025-09-30 17:38:41.282541946 +0000 UTC m=+0.039460872 container create 399c4f880ae2a96059399c1744cdeaebfaf2f25a2da42018a67093ae56bd411b (image=quay.io/ceph/ceph:v19, name=practical_hoover, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Sep 30 17:38:41 compute-0 systemd[1]: Started libpod-conmon-399c4f880ae2a96059399c1744cdeaebfaf2f25a2da42018a67093ae56bd411b.scope.
Sep 30 17:38:41 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:41 compute-0 podman[85938]: 2025-09-30 17:38:41.338723721 +0000 UTC m=+0.095642657 container init 399c4f880ae2a96059399c1744cdeaebfaf2f25a2da42018a67093ae56bd411b (image=quay.io/ceph/ceph:v19, name=practical_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 17:38:41 compute-0 podman[85938]: 2025-09-30 17:38:41.344721473 +0000 UTC m=+0.101640399 container start 399c4f880ae2a96059399c1744cdeaebfaf2f25a2da42018a67093ae56bd411b (image=quay.io/ceph/ceph:v19, name=practical_hoover, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Sep 30 17:38:41 compute-0 practical_hoover[85954]: 167 167
Sep 30 17:38:41 compute-0 podman[85938]: 2025-09-30 17:38:41.348266013 +0000 UTC m=+0.105184949 container attach 399c4f880ae2a96059399c1744cdeaebfaf2f25a2da42018a67093ae56bd411b (image=quay.io/ceph/ceph:v19, name=practical_hoover, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:38:41 compute-0 systemd[1]: libpod-399c4f880ae2a96059399c1744cdeaebfaf2f25a2da42018a67093ae56bd411b.scope: Deactivated successfully.
Sep 30 17:38:41 compute-0 python3[85935]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:38:41 compute-0 podman[85938]: 2025-09-30 17:38:41.350452928 +0000 UTC m=+0.107371854 container died 399c4f880ae2a96059399c1744cdeaebfaf2f25a2da42018a67093ae56bd411b (image=quay.io/ceph/ceph:v19, name=practical_hoover, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Sep 30 17:38:41 compute-0 podman[85938]: 2025-09-30 17:38:41.262943599 +0000 UTC m=+0.019862555 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:38:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4d8e3c9c8d915506dc1afbadfc3774b34d7874700137ce3dfa8c4738857d009-merged.mount: Deactivated successfully.
Sep 30 17:38:41 compute-0 podman[85938]: 2025-09-30 17:38:41.384989194 +0000 UTC m=+0.141908120 container remove 399c4f880ae2a96059399c1744cdeaebfaf2f25a2da42018a67093ae56bd411b (image=quay.io/ceph/ceph:v19, name=practical_hoover, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:38:41 compute-0 systemd[1]: libpod-conmon-399c4f880ae2a96059399c1744cdeaebfaf2f25a2da42018a67093ae56bd411b.scope: Deactivated successfully.
Sep 30 17:38:41 compute-0 podman[85959]: 2025-09-30 17:38:41.404720204 +0000 UTC m=+0.042556990 container create d14978f023033f006e7d0c53065e5e0e9b806b6adb47e106f6028fe86b30eee7 (image=quay.io/ceph/ceph:v19, name=busy_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:38:41 compute-0 sudo[85858]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:41 compute-0 systemd[1]: Started libpod-conmon-d14978f023033f006e7d0c53065e5e0e9b806b6adb47e106f6028fe86b30eee7.scope.
Sep 30 17:38:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:38:41 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:38:41 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fe2c0b8a82c522aebd6d286dd5d3ffff0c69f894c2beb9a57a45a9f47ef8c0e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fe2c0b8a82c522aebd6d286dd5d3ffff0c69f894c2beb9a57a45a9f47ef8c0e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:41 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:41 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Sep 30 17:38:41 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Sep 30 17:38:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Sep 30 17:38:41 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Sep 30 17:38:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:38:41 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:38:41 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Sep 30 17:38:41 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Sep 30 17:38:41 compute-0 podman[85959]: 2025-09-30 17:38:41.47750972 +0000 UTC m=+0.115346536 container init d14978f023033f006e7d0c53065e5e0e9b806b6adb47e106f6028fe86b30eee7 (image=quay.io/ceph/ceph:v19, name=busy_hopper, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Sep 30 17:38:41 compute-0 podman[85959]: 2025-09-30 17:38:41.384798599 +0000 UTC m=+0.022635415 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:38:41 compute-0 podman[85959]: 2025-09-30 17:38:41.483193924 +0000 UTC m=+0.121030710 container start d14978f023033f006e7d0c53065e5e0e9b806b6adb47e106f6028fe86b30eee7 (image=quay.io/ceph/ceph:v19, name=busy_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 17:38:41 compute-0 podman[85959]: 2025-09-30 17:38:41.486510009 +0000 UTC m=+0.124346925 container attach d14978f023033f006e7d0c53065e5e0e9b806b6adb47e106f6028fe86b30eee7 (image=quay.io/ceph/ceph:v19, name=busy_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:38:41 compute-0 sudo[85990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:38:41 compute-0 sudo[85990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:41 compute-0 sudo[85990]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:41 compute-0 sudo[86015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:38:41 compute-0 sudo[86015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Sep 30 17:38:41 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3276171411' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Sep 30 17:38:41 compute-0 podman[86075]: 2025-09-30 17:38:41.870552478 +0000 UTC m=+0.043825152 container create 6b17e67c40eb32adabda63d5dd79d2a97ecfb9b84cacfd5c2dd97b6d12ea0589 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:38:41 compute-0 systemd[1]: Started libpod-conmon-6b17e67c40eb32adabda63d5dd79d2a97ecfb9b84cacfd5c2dd97b6d12ea0589.scope.
Sep 30 17:38:41 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:41 compute-0 podman[86075]: 2025-09-30 17:38:41.847398781 +0000 UTC m=+0.020671465 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:38:41 compute-0 podman[86075]: 2025-09-30 17:38:41.94946634 +0000 UTC m=+0.122739034 container init 6b17e67c40eb32adabda63d5dd79d2a97ecfb9b84cacfd5c2dd97b6d12ea0589 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_rosalind, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Sep 30 17:38:41 compute-0 podman[86075]: 2025-09-30 17:38:41.95655981 +0000 UTC m=+0.129832474 container start 6b17e67c40eb32adabda63d5dd79d2a97ecfb9b84cacfd5c2dd97b6d12ea0589 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_rosalind, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Sep 30 17:38:41 compute-0 exciting_rosalind[86092]: 167 167
Sep 30 17:38:41 compute-0 systemd[1]: libpod-6b17e67c40eb32adabda63d5dd79d2a97ecfb9b84cacfd5c2dd97b6d12ea0589.scope: Deactivated successfully.
Sep 30 17:38:41 compute-0 podman[86075]: 2025-09-30 17:38:41.960896589 +0000 UTC m=+0.134169263 container attach 6b17e67c40eb32adabda63d5dd79d2a97ecfb9b84cacfd5c2dd97b6d12ea0589 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Sep 30 17:38:41 compute-0 podman[86075]: 2025-09-30 17:38:41.961368591 +0000 UTC m=+0.134641285 container died 6b17e67c40eb32adabda63d5dd79d2a97ecfb9b84cacfd5c2dd97b6d12ea0589 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_rosalind, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Sep 30 17:38:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-810b560aa2f1569237cceb2be765e8847a1ba70bcc4a6b8299454b392051d990-merged.mount: Deactivated successfully.
Sep 30 17:38:41 compute-0 podman[86075]: 2025-09-30 17:38:41.999446697 +0000 UTC m=+0.172719371 container remove 6b17e67c40eb32adabda63d5dd79d2a97ecfb9b84cacfd5c2dd97b6d12ea0589 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_rosalind, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Sep 30 17:38:42 compute-0 systemd[1]: libpod-conmon-6b17e67c40eb32adabda63d5dd79d2a97ecfb9b84cacfd5c2dd97b6d12ea0589.scope: Deactivated successfully.
Sep 30 17:38:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Sep 30 17:38:42 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3276171411' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Sep 30 17:38:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e17 e17: 2 total, 2 up, 2 in
Sep 30 17:38:42 compute-0 busy_hopper[85986]: pool 'images' created
Sep 30 17:38:42 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Sep 30 17:38:42 compute-0 sudo[86015]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:38:42 compute-0 systemd[1]: libpod-d14978f023033f006e7d0c53065e5e0e9b806b6adb47e106f6028fe86b30eee7.scope: Deactivated successfully.
Sep 30 17:38:42 compute-0 conmon[85986]: conmon d14978f023033f006e7d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d14978f023033f006e7d0c53065e5e0e9b806b6adb47e106f6028fe86b30eee7.scope/container/memory.events
Sep 30 17:38:42 compute-0 podman[85959]: 2025-09-30 17:38:42.06263049 +0000 UTC m=+0.700467276 container died d14978f023033f006e7d0c53065e5e0e9b806b6adb47e106f6028fe86b30eee7 (image=quay.io/ceph/ceph:v19, name=busy_hopper, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Sep 30 17:38:42 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:38:42 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:42 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Sep 30 17:38:42 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Sep 30 17:38:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Sep 30 17:38:42 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Sep 30 17:38:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Sep 30 17:38:42 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Sep 30 17:38:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:38:42 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:38:42 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Sep 30 17:38:42 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Sep 30 17:38:42 compute-0 ceph-mon[73755]: pgmap v56: 3 pgs: 2 unknown, 1 creating+peering; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:38:42 compute-0 ceph-mon[73755]: Reconfiguring mgr.compute-0.efvthf (monmap changed)...
Sep 30 17:38:42 compute-0 ceph-mon[73755]: Reconfiguring daemon mgr.compute-0.efvthf on compute-0
Sep 30 17:38:42 compute-0 ceph-mon[73755]: Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Sep 30 17:38:42 compute-0 ceph-mon[73755]: mgrmap e9: compute-0.efvthf(active, since 100s), standbys: compute-1.glbusf
Sep 30 17:38:42 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:42 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:42 compute-0 ceph-mon[73755]: Reconfiguring crash.compute-0 (monmap changed)...
Sep 30 17:38:42 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Sep 30 17:38:42 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:38:42 compute-0 ceph-mon[73755]: Reconfiguring daemon crash.compute-0 on compute-0
Sep 30 17:38:42 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3276171411' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Sep 30 17:38:42 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3276171411' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Sep 30 17:38:42 compute-0 ceph-mon[73755]: osdmap e17: 2 total, 2 up, 2 in
Sep 30 17:38:42 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:42 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:42 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Sep 30 17:38:42 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Sep 30 17:38:42 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:38:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fe2c0b8a82c522aebd6d286dd5d3ffff0c69f894c2beb9a57a45a9f47ef8c0e-merged.mount: Deactivated successfully.
Sep 30 17:38:42 compute-0 podman[85959]: 2025-09-30 17:38:42.288071767 +0000 UTC m=+0.925908563 container remove d14978f023033f006e7d0c53065e5e0e9b806b6adb47e106f6028fe86b30eee7 (image=quay.io/ceph/ceph:v19, name=busy_hopper, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Sep 30 17:38:42 compute-0 systemd[1]: libpod-conmon-d14978f023033f006e7d0c53065e5e0e9b806b6adb47e106f6028fe86b30eee7.scope: Deactivated successfully.
Sep 30 17:38:42 compute-0 sudo[85931]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:42 compute-0 sudo[86143]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjsssyxgcqipfsujomqlurkzxigqaeat ; /usr/bin/python3'
Sep 30 17:38:42 compute-0 sudo[86143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:38:42 compute-0 python3[86145]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:38:42 compute-0 podman[86146]: 2025-09-30 17:38:42.605194619 +0000 UTC m=+0.036550708 container create cb80f106f140ed34d9e06f64b9d2ecc5cd5d15a7d810fed7b81388658e6171af (image=quay.io/ceph/ceph:v19, name=inspiring_hopper, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Sep 30 17:38:42 compute-0 systemd[1]: Started libpod-conmon-cb80f106f140ed34d9e06f64b9d2ecc5cd5d15a7d810fed7b81388658e6171af.scope.
Sep 30 17:38:42 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f60322fa406acff18cc84a3f7afb930d6c127855e8fadbd937abec91c50a12a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f60322fa406acff18cc84a3f7afb930d6c127855e8fadbd937abec91c50a12a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:42 compute-0 podman[86146]: 2025-09-30 17:38:42.675251266 +0000 UTC m=+0.106607375 container init cb80f106f140ed34d9e06f64b9d2ecc5cd5d15a7d810fed7b81388658e6171af (image=quay.io/ceph/ceph:v19, name=inspiring_hopper, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:38:42 compute-0 podman[86146]: 2025-09-30 17:38:42.680220512 +0000 UTC m=+0.111576601 container start cb80f106f140ed34d9e06f64b9d2ecc5cd5d15a7d810fed7b81388658e6171af (image=quay.io/ceph/ceph:v19, name=inspiring_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:38:42 compute-0 podman[86146]: 2025-09-30 17:38:42.68368374 +0000 UTC m=+0.115039829 container attach cb80f106f140ed34d9e06f64b9d2ecc5cd5d15a7d810fed7b81388658e6171af (image=quay.io/ceph/ceph:v19, name=inspiring_hopper, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:38:42 compute-0 podman[86146]: 2025-09-30 17:38:42.588088386 +0000 UTC m=+0.019444505 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:38:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v59: 5 pgs: 4 unknown, 1 creating+peering; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:38:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:38:42 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:38:42 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:42 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-1.glbusf (monmap changed)...
Sep 30 17:38:42 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-1.glbusf (monmap changed)...
Sep 30 17:38:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.glbusf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Sep 30 17:38:42 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.glbusf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Sep 30 17:38:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0)
Sep 30 17:38:42 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 17:38:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:38:42 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:38:42 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-1.glbusf on compute-1
Sep 30 17:38:42 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-1.glbusf on compute-1
Sep 30 17:38:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Sep 30 17:38:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e18 e18: 2 total, 2 up, 2 in
Sep 30 17:38:43 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 2 up, 2 in
Sep 30 17:38:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Sep 30 17:38:43 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1184914315' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Sep 30 17:38:43 compute-0 ceph-mon[73755]: Reconfiguring mon.compute-1 (monmap changed)...
Sep 30 17:38:43 compute-0 ceph-mon[73755]: Reconfiguring daemon mon.compute-1 on compute-1
Sep 30 17:38:43 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:43 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:43 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.glbusf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Sep 30 17:38:43 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 17:38:43 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:38:43 compute-0 ceph-mon[73755]: osdmap e18: 2 total, 2 up, 2 in
Sep 30 17:38:43 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1184914315' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Sep 30 17:38:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:38:43 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:38:43 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Sep 30 17:38:44 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1184914315' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Sep 30 17:38:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e19 e19: 2 total, 2 up, 2 in
Sep 30 17:38:44 compute-0 inspiring_hopper[86162]: pool 'cephfs.cephfs.meta' created
Sep 30 17:38:44 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e19: 2 total, 2 up, 2 in
Sep 30 17:38:44 compute-0 systemd[1]: libpod-cb80f106f140ed34d9e06f64b9d2ecc5cd5d15a7d810fed7b81388658e6171af.scope: Deactivated successfully.
Sep 30 17:38:44 compute-0 conmon[86162]: conmon cb80f106f140ed34d9e0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cb80f106f140ed34d9e06f64b9d2ecc5cd5d15a7d810fed7b81388658e6171af.scope/container/memory.events
Sep 30 17:38:44 compute-0 podman[86146]: 2025-09-30 17:38:44.096782607 +0000 UTC m=+1.528138706 container died cb80f106f140ed34d9e06f64b9d2ecc5cd5d15a7d810fed7b81388658e6171af (image=quay.io/ceph/ceph:v19, name=inspiring_hopper, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:38:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e19 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:38:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f60322fa406acff18cc84a3f7afb930d6c127855e8fadbd937abec91c50a12a-merged.mount: Deactivated successfully.
Sep 30 17:38:44 compute-0 podman[86146]: 2025-09-30 17:38:44.206020098 +0000 UTC m=+1.637376187 container remove cb80f106f140ed34d9e06f64b9d2ecc5cd5d15a7d810fed7b81388658e6171af (image=quay.io/ceph/ceph:v19, name=inspiring_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:38:44 compute-0 systemd[1]: libpod-conmon-cb80f106f140ed34d9e06f64b9d2ecc5cd5d15a7d810fed7b81388658e6171af.scope: Deactivated successfully.
Sep 30 17:38:44 compute-0 sudo[86143]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:44 compute-0 sudo[86225]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjwbzsizvqxehkvunyfugdatigfksweu ; /usr/bin/python3'
Sep 30 17:38:44 compute-0 sudo[86225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:38:44 compute-0 python3[86227]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:38:44 compute-0 podman[86229]: 2025-09-30 17:38:44.535067543 +0000 UTC m=+0.047343962 container create 8981703f9ea6d2eff6ba12b922e11a048df3db26adde56721230f88ce29acb18 (image=quay.io/ceph/ceph:v19, name=kind_tu, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:38:44 compute-0 ceph-mon[73755]: pgmap v59: 5 pgs: 4 unknown, 1 creating+peering; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:38:44 compute-0 ceph-mon[73755]: Reconfiguring mgr.compute-1.glbusf (monmap changed)...
Sep 30 17:38:44 compute-0 ceph-mon[73755]: Reconfiguring daemon mgr.compute-1.glbusf on compute-1
Sep 30 17:38:44 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:44 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:44 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1184914315' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Sep 30 17:38:44 compute-0 ceph-mon[73755]: osdmap e19: 2 total, 2 up, 2 in
Sep 30 17:38:44 compute-0 systemd[1]: Started libpod-conmon-8981703f9ea6d2eff6ba12b922e11a048df3db26adde56721230f88ce29acb18.scope.
Sep 30 17:38:44 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:44 compute-0 podman[86229]: 2025-09-30 17:38:44.508522939 +0000 UTC m=+0.020799378 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:38:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5a510ee006b00e7e6e1b0db31725b171658c1cd635f7a131251b63975b32b62/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5a510ee006b00e7e6e1b0db31725b171658c1cd635f7a131251b63975b32b62/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:44 compute-0 podman[86229]: 2025-09-30 17:38:44.620474018 +0000 UTC m=+0.132750467 container init 8981703f9ea6d2eff6ba12b922e11a048df3db26adde56721230f88ce29acb18 (image=quay.io/ceph/ceph:v19, name=kind_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:38:44 compute-0 podman[86229]: 2025-09-30 17:38:44.632918133 +0000 UTC m=+0.145194552 container start 8981703f9ea6d2eff6ba12b922e11a048df3db26adde56721230f88ce29acb18 (image=quay.io/ceph/ceph:v19, name=kind_tu, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Sep 30 17:38:44 compute-0 podman[86229]: 2025-09-30 17:38:44.636999637 +0000 UTC m=+0.149276266 container attach 8981703f9ea6d2eff6ba12b922e11a048df3db26adde56721230f88ce29acb18 (image=quay.io/ceph/ceph:v19, name=kind_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:38:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v62: 6 pgs: 2 active+clean, 4 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:38:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:38:44 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:38:44 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Sep 30 17:38:44 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4068993854' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Sep 30 17:38:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Sep 30 17:38:45 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4068993854' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Sep 30 17:38:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e20 e20: 2 total, 2 up, 2 in
Sep 30 17:38:45 compute-0 kind_tu[86245]: pool 'cephfs.cephfs.data' created
Sep 30 17:38:45 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e20: 2 total, 2 up, 2 in
Sep 30 17:38:45 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 20 pg[7.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:38:45 compute-0 systemd[1]: libpod-8981703f9ea6d2eff6ba12b922e11a048df3db26adde56721230f88ce29acb18.scope: Deactivated successfully.
Sep 30 17:38:45 compute-0 podman[86229]: 2025-09-30 17:38:45.090096028 +0000 UTC m=+0.602372447 container died 8981703f9ea6d2eff6ba12b922e11a048df3db26adde56721230f88ce29acb18 (image=quay.io/ceph/ceph:v19, name=kind_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True)
Sep 30 17:38:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5a510ee006b00e7e6e1b0db31725b171658c1cd635f7a131251b63975b32b62-merged.mount: Deactivated successfully.
Sep 30 17:38:45 compute-0 podman[86229]: 2025-09-30 17:38:45.122221512 +0000 UTC m=+0.634497931 container remove 8981703f9ea6d2eff6ba12b922e11a048df3db26adde56721230f88ce29acb18 (image=quay.io/ceph/ceph:v19, name=kind_tu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:38:45 compute-0 systemd[1]: libpod-conmon-8981703f9ea6d2eff6ba12b922e11a048df3db26adde56721230f88ce29acb18.scope: Deactivated successfully.
Sep 30 17:38:45 compute-0 sudo[86225]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:45 compute-0 sshd-session[86188]: Connection closed by authenticating user root 185.156.73.233 port 28538 [preauth]
Sep 30 17:38:45 compute-0 sudo[86307]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgjixrhjifxcwvqdrxqrdbdshkatcrvr ; /usr/bin/python3'
Sep 30 17:38:45 compute-0 sudo[86307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:38:45 compute-0 python3[86309]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:38:45 compute-0 podman[86310]: 2025-09-30 17:38:45.4616235 +0000 UTC m=+0.042700144 container create e37be6397c44eb19291201760fa1fe16d930fd65d166de3a937af0c6543cac42 (image=quay.io/ceph/ceph:v19, name=xenodochial_franklin, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:38:45 compute-0 systemd[1]: Started libpod-conmon-e37be6397c44eb19291201760fa1fe16d930fd65d166de3a937af0c6543cac42.scope.
Sep 30 17:38:45 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97e207e37b36a6c4fc90ae4c95781e5a0cd015da445e238b1bf1077414f305ad/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97e207e37b36a6c4fc90ae4c95781e5a0cd015da445e238b1bf1077414f305ad/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:45 compute-0 podman[86310]: 2025-09-30 17:38:45.516473001 +0000 UTC m=+0.097549645 container init e37be6397c44eb19291201760fa1fe16d930fd65d166de3a937af0c6543cac42 (image=quay.io/ceph/ceph:v19, name=xenodochial_franklin, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:38:45 compute-0 podman[86310]: 2025-09-30 17:38:45.522443723 +0000 UTC m=+0.103520367 container start e37be6397c44eb19291201760fa1fe16d930fd65d166de3a937af0c6543cac42 (image=quay.io/ceph/ceph:v19, name=xenodochial_franklin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Sep 30 17:38:45 compute-0 podman[86310]: 2025-09-30 17:38:45.52551237 +0000 UTC m=+0.106589044 container attach e37be6397c44eb19291201760fa1fe16d930fd65d166de3a937af0c6543cac42 (image=quay.io/ceph/ceph:v19, name=xenodochial_franklin, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:38:45 compute-0 podman[86310]: 2025-09-30 17:38:45.441051078 +0000 UTC m=+0.022127782 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:38:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:38:45 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:38:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 17:38:45 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:38:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 17:38:45 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 17:38:45 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:38:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 17:38:45 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:38:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:38:45 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:38:45 compute-0 sudo[86349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:38:45 compute-0 sudo[86349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:45 compute-0 sudo[86349]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:45 compute-0 sudo[86374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 17:38:45 compute-0 sudo[86374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Sep 30 17:38:45 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/474181382' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Sep 30 17:38:45 compute-0 ceph-mon[73755]: pgmap v62: 6 pgs: 2 active+clean, 4 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:38:45 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:45 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:45 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/4068993854' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Sep 30 17:38:45 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/4068993854' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Sep 30 17:38:45 compute-0 ceph-mon[73755]: osdmap e20: 2 total, 2 up, 2 in
Sep 30 17:38:45 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:38:45 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:38:45 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:45 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:38:45 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:38:45 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:38:45 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/474181382' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Sep 30 17:38:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Sep 30 17:38:46 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/474181382' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Sep 30 17:38:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e21 e21: 2 total, 2 up, 2 in
Sep 30 17:38:46 compute-0 xenodochial_franklin[86326]: enabled application 'rbd' on pool 'vms'
Sep 30 17:38:46 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e21: 2 total, 2 up, 2 in
Sep 30 17:38:46 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 21 pg[7.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:38:46 compute-0 systemd[1]: libpod-e37be6397c44eb19291201760fa1fe16d930fd65d166de3a937af0c6543cac42.scope: Deactivated successfully.
Sep 30 17:38:46 compute-0 podman[86310]: 2025-09-30 17:38:46.105572061 +0000 UTC m=+0.686648705 container died e37be6397c44eb19291201760fa1fe16d930fd65d166de3a937af0c6543cac42 (image=quay.io/ceph/ceph:v19, name=xenodochial_franklin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 17:38:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-97e207e37b36a6c4fc90ae4c95781e5a0cd015da445e238b1bf1077414f305ad-merged.mount: Deactivated successfully.
Sep 30 17:38:46 compute-0 podman[86310]: 2025-09-30 17:38:46.181267191 +0000 UTC m=+0.762343835 container remove e37be6397c44eb19291201760fa1fe16d930fd65d166de3a937af0c6543cac42 (image=quay.io/ceph/ceph:v19, name=xenodochial_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Sep 30 17:38:46 compute-0 systemd[1]: libpod-conmon-e37be6397c44eb19291201760fa1fe16d930fd65d166de3a937af0c6543cac42.scope: Deactivated successfully.
Sep 30 17:38:46 compute-0 sudo[86307]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:46 compute-0 podman[86449]: 2025-09-30 17:38:46.276780763 +0000 UTC m=+0.067485362 container create 6f52c05e4b9a74ef400c3250b3c7c9c0504d973b80db6b97e4e7dff461336b82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_galileo, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 17:38:46 compute-0 systemd[1]: Started libpod-conmon-6f52c05e4b9a74ef400c3250b3c7c9c0504d973b80db6b97e4e7dff461336b82.scope.
Sep 30 17:38:46 compute-0 podman[86449]: 2025-09-30 17:38:46.231736001 +0000 UTC m=+0.022440640 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:38:46 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:46 compute-0 sudo[86491]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dethcljbdchqpjynlajqkdossbifhoas ; /usr/bin/python3'
Sep 30 17:38:46 compute-0 sudo[86491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:38:46 compute-0 podman[86449]: 2025-09-30 17:38:46.375103737 +0000 UTC m=+0.165808346 container init 6f52c05e4b9a74ef400c3250b3c7c9c0504d973b80db6b97e4e7dff461336b82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_galileo, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:38:46 compute-0 podman[86449]: 2025-09-30 17:38:46.384451874 +0000 UTC m=+0.175156473 container start 6f52c05e4b9a74ef400c3250b3c7c9c0504d973b80db6b97e4e7dff461336b82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:38:46 compute-0 podman[86449]: 2025-09-30 17:38:46.388002814 +0000 UTC m=+0.178707413 container attach 6f52c05e4b9a74ef400c3250b3c7c9c0504d973b80db6b97e4e7dff461336b82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_galileo, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:38:46 compute-0 cool_galileo[86489]: 167 167
Sep 30 17:38:46 compute-0 systemd[1]: libpod-6f52c05e4b9a74ef400c3250b3c7c9c0504d973b80db6b97e4e7dff461336b82.scope: Deactivated successfully.
Sep 30 17:38:46 compute-0 conmon[86489]: conmon 6f52c05e4b9a74ef400c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6f52c05e4b9a74ef400c3250b3c7c9c0504d973b80db6b97e4e7dff461336b82.scope/container/memory.events
Sep 30 17:38:46 compute-0 podman[86449]: 2025-09-30 17:38:46.391237816 +0000 UTC m=+0.181942415 container died 6f52c05e4b9a74ef400c3250b3c7c9c0504d973b80db6b97e4e7dff461336b82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_galileo, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:38:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a3e0e543546afdf9a8c572240bd61992600159687e0d236dee850ad1085bdfe-merged.mount: Deactivated successfully.
Sep 30 17:38:46 compute-0 podman[86449]: 2025-09-30 17:38:46.428147282 +0000 UTC m=+0.218851881 container remove 6f52c05e4b9a74ef400c3250b3c7c9c0504d973b80db6b97e4e7dff461336b82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 17:38:46 compute-0 systemd[1]: libpod-conmon-6f52c05e4b9a74ef400c3250b3c7c9c0504d973b80db6b97e4e7dff461336b82.scope: Deactivated successfully.
Sep 30 17:38:46 compute-0 python3[86494]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:38:46 compute-0 podman[86512]: 2025-09-30 17:38:46.618591442 +0000 UTC m=+0.103715121 container create 25bf2dcb99a4fffee70741f7e544b86e5e5746436b626b1d5a08842b8e415ffc (image=quay.io/ceph/ceph:v19, name=wonderful_poitras, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:38:46 compute-0 podman[86512]: 2025-09-30 17:38:46.534186221 +0000 UTC m=+0.019309910 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:38:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v65: 7 pgs: 6 active+clean, 1 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:38:46 compute-0 systemd[1]: Started libpod-conmon-25bf2dcb99a4fffee70741f7e544b86e5e5746436b626b1d5a08842b8e415ffc.scope.
Sep 30 17:38:46 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/391616b9a2f18292fcbf1a186413dbfbec5e1cbc1264759153f14726ce879cf7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/391616b9a2f18292fcbf1a186413dbfbec5e1cbc1264759153f14726ce879cf7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:46 compute-0 podman[86522]: 2025-09-30 17:38:46.777186604 +0000 UTC m=+0.247493688 container create 75dbb3a7afabdd87852c700b62fbe08a0da4784394111a192301a5a8fa263dd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_visvesvaraya, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:38:46 compute-0 podman[86512]: 2025-09-30 17:38:46.782007736 +0000 UTC m=+0.267131475 container init 25bf2dcb99a4fffee70741f7e544b86e5e5746436b626b1d5a08842b8e415ffc (image=quay.io/ceph/ceph:v19, name=wonderful_poitras, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:38:46 compute-0 podman[86512]: 2025-09-30 17:38:46.789861905 +0000 UTC m=+0.274985584 container start 25bf2dcb99a4fffee70741f7e544b86e5e5746436b626b1d5a08842b8e415ffc (image=quay.io/ceph/ceph:v19, name=wonderful_poitras, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:38:46 compute-0 podman[86512]: 2025-09-30 17:38:46.794455282 +0000 UTC m=+0.279578961 container attach 25bf2dcb99a4fffee70741f7e544b86e5e5746436b626b1d5a08842b8e415ffc (image=quay.io/ceph/ceph:v19, name=wonderful_poitras, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Sep 30 17:38:46 compute-0 systemd[1]: Started libpod-conmon-75dbb3a7afabdd87852c700b62fbe08a0da4784394111a192301a5a8fa263dd9.scope.
Sep 30 17:38:46 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8054aae0b3c93df188c90589cb6885535c8a2026b897633303d5a89d8732bebd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8054aae0b3c93df188c90589cb6885535c8a2026b897633303d5a89d8732bebd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8054aae0b3c93df188c90589cb6885535c8a2026b897633303d5a89d8732bebd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8054aae0b3c93df188c90589cb6885535c8a2026b897633303d5a89d8732bebd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8054aae0b3c93df188c90589cb6885535c8a2026b897633303d5a89d8732bebd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:46 compute-0 podman[86522]: 2025-09-30 17:38:46.849358004 +0000 UTC m=+0.319665108 container init 75dbb3a7afabdd87852c700b62fbe08a0da4784394111a192301a5a8fa263dd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_visvesvaraya, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:38:46 compute-0 podman[86522]: 2025-09-30 17:38:46.759846704 +0000 UTC m=+0.230153808 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:38:46 compute-0 podman[86522]: 2025-09-30 17:38:46.859874881 +0000 UTC m=+0.330181965 container start 75dbb3a7afabdd87852c700b62fbe08a0da4784394111a192301a5a8fa263dd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_visvesvaraya, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Sep 30 17:38:46 compute-0 podman[86522]: 2025-09-30 17:38:46.876187405 +0000 UTC m=+0.346494489 container attach 75dbb3a7afabdd87852c700b62fbe08a0da4784394111a192301a5a8fa263dd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Sep 30 17:38:47 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Sep 30 17:38:47 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/474181382' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Sep 30 17:38:47 compute-0 ceph-mon[73755]: osdmap e21: 2 total, 2 up, 2 in
Sep 30 17:38:47 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e22 e22: 2 total, 2 up, 2 in
Sep 30 17:38:47 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e22: 2 total, 2 up, 2 in
Sep 30 17:38:47 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Sep 30 17:38:47 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/449678829' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Sep 30 17:38:47 compute-0 romantic_visvesvaraya[86553]: --> passed data devices: 0 physical, 1 LVM
Sep 30 17:38:47 compute-0 romantic_visvesvaraya[86553]: --> All data devices are unavailable
Sep 30 17:38:47 compute-0 systemd[1]: libpod-75dbb3a7afabdd87852c700b62fbe08a0da4784394111a192301a5a8fa263dd9.scope: Deactivated successfully.
Sep 30 17:38:47 compute-0 podman[86522]: 2025-09-30 17:38:47.174003388 +0000 UTC m=+0.644310472 container died 75dbb3a7afabdd87852c700b62fbe08a0da4784394111a192301a5a8fa263dd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_visvesvaraya, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Sep 30 17:38:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-8054aae0b3c93df188c90589cb6885535c8a2026b897633303d5a89d8732bebd-merged.mount: Deactivated successfully.
Sep 30 17:38:47 compute-0 podman[86522]: 2025-09-30 17:38:47.218014854 +0000 UTC m=+0.688321928 container remove 75dbb3a7afabdd87852c700b62fbe08a0da4784394111a192301a5a8fa263dd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:38:47 compute-0 systemd[1]: libpod-conmon-75dbb3a7afabdd87852c700b62fbe08a0da4784394111a192301a5a8fa263dd9.scope: Deactivated successfully.
Sep 30 17:38:47 compute-0 sudo[86374]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:47 compute-0 sudo[86599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:38:47 compute-0 sudo[86599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:47 compute-0 sudo[86599]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:47 compute-0 sudo[86624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 17:38:47 compute-0 sudo[86624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:47 compute-0 podman[86687]: 2025-09-30 17:38:47.729476485 +0000 UTC m=+0.035266965 container create 665dcb31131a97dda3862db69e7e3c79e77579f4e839d22a3fd313f6c07415df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_murdock, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:38:47 compute-0 systemd[1]: Started libpod-conmon-665dcb31131a97dda3862db69e7e3c79e77579f4e839d22a3fd313f6c07415df.scope.
Sep 30 17:38:47 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:47 compute-0 podman[86687]: 2025-09-30 17:38:47.808021867 +0000 UTC m=+0.113812367 container init 665dcb31131a97dda3862db69e7e3c79e77579f4e839d22a3fd313f6c07415df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_murdock, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Sep 30 17:38:47 compute-0 podman[86687]: 2025-09-30 17:38:47.71430739 +0000 UTC m=+0.020097890 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:38:47 compute-0 podman[86687]: 2025-09-30 17:38:47.813854125 +0000 UTC m=+0.119644605 container start 665dcb31131a97dda3862db69e7e3c79e77579f4e839d22a3fd313f6c07415df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Sep 30 17:38:47 compute-0 sharp_murdock[86703]: 167 167
Sep 30 17:38:47 compute-0 systemd[1]: libpod-665dcb31131a97dda3862db69e7e3c79e77579f4e839d22a3fd313f6c07415df.scope: Deactivated successfully.
Sep 30 17:38:47 compute-0 podman[86687]: 2025-09-30 17:38:47.818319858 +0000 UTC m=+0.124110348 container attach 665dcb31131a97dda3862db69e7e3c79e77579f4e839d22a3fd313f6c07415df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Sep 30 17:38:47 compute-0 podman[86687]: 2025-09-30 17:38:47.819374005 +0000 UTC m=+0.125164495 container died 665dcb31131a97dda3862db69e7e3c79e77579f4e839d22a3fd313f6c07415df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:38:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8735e96a17cff76cc0e41eb905a9a41f9113e320df119ce85eeb2b73e989389-merged.mount: Deactivated successfully.
Sep 30 17:38:47 compute-0 podman[86687]: 2025-09-30 17:38:47.88622766 +0000 UTC m=+0.192018140 container remove 665dcb31131a97dda3862db69e7e3c79e77579f4e839d22a3fd313f6c07415df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Sep 30 17:38:47 compute-0 systemd[1]: libpod-conmon-665dcb31131a97dda3862db69e7e3c79e77579f4e839d22a3fd313f6c07415df.scope: Deactivated successfully.
Sep 30 17:38:48 compute-0 podman[86728]: 2025-09-30 17:38:48.027887173 +0000 UTC m=+0.041058822 container create 9b24bb4d1913fda20551efff8efdf4e299bf7d2f3b2c577b8f8465131e1ae6bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_dubinsky, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:38:48 compute-0 systemd[1]: Started libpod-conmon-9b24bb4d1913fda20551efff8efdf4e299bf7d2f3b2c577b8f8465131e1ae6bf.scope.
Sep 30 17:38:48 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:48 compute-0 podman[86728]: 2025-09-30 17:38:48.0096326 +0000 UTC m=+0.022804269 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba156a909c43c6263da64b4585bc0abdec1bed1ba2be6268d91eb335fddd1d14/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba156a909c43c6263da64b4585bc0abdec1bed1ba2be6268d91eb335fddd1d14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba156a909c43c6263da64b4585bc0abdec1bed1ba2be6268d91eb335fddd1d14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba156a909c43c6263da64b4585bc0abdec1bed1ba2be6268d91eb335fddd1d14/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:48 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Sep 30 17:38:48 compute-0 ceph-mon[73755]: pgmap v65: 7 pgs: 6 active+clean, 1 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:38:48 compute-0 ceph-mon[73755]: osdmap e22: 2 total, 2 up, 2 in
Sep 30 17:38:48 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/449678829' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Sep 30 17:38:48 compute-0 podman[86728]: 2025-09-30 17:38:48.131677355 +0000 UTC m=+0.144849034 container init 9b24bb4d1913fda20551efff8efdf4e299bf7d2f3b2c577b8f8465131e1ae6bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:38:48 compute-0 podman[86728]: 2025-09-30 17:38:48.13898743 +0000 UTC m=+0.152159079 container start 9b24bb4d1913fda20551efff8efdf4e299bf7d2f3b2c577b8f8465131e1ae6bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_dubinsky, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:38:48 compute-0 podman[86728]: 2025-09-30 17:38:48.142601001 +0000 UTC m=+0.155772670 container attach 9b24bb4d1913fda20551efff8efdf4e299bf7d2f3b2c577b8f8465131e1ae6bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_dubinsky, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True)
Sep 30 17:38:48 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/449678829' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Sep 30 17:38:48 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e23 e23: 2 total, 2 up, 2 in
Sep 30 17:38:48 compute-0 wonderful_poitras[86544]: enabled application 'rbd' on pool 'volumes'
Sep 30 17:38:48 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e23: 2 total, 2 up, 2 in
Sep 30 17:38:48 compute-0 systemd[1]: libpod-25bf2dcb99a4fffee70741f7e544b86e5e5746436b626b1d5a08842b8e415ffc.scope: Deactivated successfully.
Sep 30 17:38:48 compute-0 podman[86512]: 2025-09-30 17:38:48.164713702 +0000 UTC m=+1.649837381 container died 25bf2dcb99a4fffee70741f7e544b86e5e5746436b626b1d5a08842b8e415ffc (image=quay.io/ceph/ceph:v19, name=wonderful_poitras, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:38:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-391616b9a2f18292fcbf1a186413dbfbec5e1cbc1264759153f14726ce879cf7-merged.mount: Deactivated successfully.
Sep 30 17:38:48 compute-0 podman[86512]: 2025-09-30 17:38:48.234735068 +0000 UTC m=+1.719858747 container remove 25bf2dcb99a4fffee70741f7e544b86e5e5746436b626b1d5a08842b8e415ffc (image=quay.io/ceph/ceph:v19, name=wonderful_poitras, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Sep 30 17:38:48 compute-0 systemd[1]: libpod-conmon-25bf2dcb99a4fffee70741f7e544b86e5e5746436b626b1d5a08842b8e415ffc.scope: Deactivated successfully.
Sep 30 17:38:48 compute-0 sudo[86491]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:48 compute-0 sudo[86787]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkjejrnhiptacrjeuetfxvffhrpgxhnx ; /usr/bin/python3'
Sep 30 17:38:48 compute-0 sudo[86787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:38:48 compute-0 heuristic_dubinsky[86744]: {
Sep 30 17:38:48 compute-0 heuristic_dubinsky[86744]:     "0": [
Sep 30 17:38:48 compute-0 heuristic_dubinsky[86744]:         {
Sep 30 17:38:48 compute-0 heuristic_dubinsky[86744]:             "devices": [
Sep 30 17:38:48 compute-0 heuristic_dubinsky[86744]:                 "/dev/loop3"
Sep 30 17:38:48 compute-0 heuristic_dubinsky[86744]:             ],
Sep 30 17:38:48 compute-0 heuristic_dubinsky[86744]:             "lv_name": "ceph_lv0",
Sep 30 17:38:48 compute-0 heuristic_dubinsky[86744]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:38:48 compute-0 heuristic_dubinsky[86744]:             "lv_size": "21470642176",
Sep 30 17:38:48 compute-0 heuristic_dubinsky[86744]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 17:38:48 compute-0 heuristic_dubinsky[86744]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:38:48 compute-0 heuristic_dubinsky[86744]:             "name": "ceph_lv0",
Sep 30 17:38:48 compute-0 heuristic_dubinsky[86744]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:38:48 compute-0 heuristic_dubinsky[86744]:             "tags": {
Sep 30 17:38:48 compute-0 heuristic_dubinsky[86744]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:38:48 compute-0 heuristic_dubinsky[86744]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:38:48 compute-0 heuristic_dubinsky[86744]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 17:38:48 compute-0 heuristic_dubinsky[86744]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 17:38:48 compute-0 heuristic_dubinsky[86744]:                 "ceph.cluster_name": "ceph",
Sep 30 17:38:48 compute-0 heuristic_dubinsky[86744]:                 "ceph.crush_device_class": "",
Sep 30 17:38:48 compute-0 heuristic_dubinsky[86744]:                 "ceph.encrypted": "0",
Sep 30 17:38:48 compute-0 heuristic_dubinsky[86744]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 17:38:48 compute-0 heuristic_dubinsky[86744]:                 "ceph.osd_id": "0",
Sep 30 17:38:48 compute-0 heuristic_dubinsky[86744]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 17:38:48 compute-0 heuristic_dubinsky[86744]:                 "ceph.type": "block",
Sep 30 17:38:48 compute-0 heuristic_dubinsky[86744]:                 "ceph.vdo": "0",
Sep 30 17:38:48 compute-0 heuristic_dubinsky[86744]:                 "ceph.with_tpm": "0"
Sep 30 17:38:48 compute-0 heuristic_dubinsky[86744]:             },
Sep 30 17:38:48 compute-0 heuristic_dubinsky[86744]:             "type": "block",
Sep 30 17:38:48 compute-0 heuristic_dubinsky[86744]:             "vg_name": "ceph_vg0"
Sep 30 17:38:48 compute-0 heuristic_dubinsky[86744]:         }
Sep 30 17:38:48 compute-0 heuristic_dubinsky[86744]:     ]
Sep 30 17:38:48 compute-0 heuristic_dubinsky[86744]: }
Sep 30 17:38:48 compute-0 systemd[1]: libpod-9b24bb4d1913fda20551efff8efdf4e299bf7d2f3b2c577b8f8465131e1ae6bf.scope: Deactivated successfully.
Sep 30 17:38:48 compute-0 podman[86728]: 2025-09-30 17:38:48.447327439 +0000 UTC m=+0.460499088 container died 9b24bb4d1913fda20551efff8efdf4e299bf7d2f3b2c577b8f8465131e1ae6bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:38:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba156a909c43c6263da64b4585bc0abdec1bed1ba2be6268d91eb335fddd1d14-merged.mount: Deactivated successfully.
Sep 30 17:38:48 compute-0 python3[86790]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:38:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v68: 7 pgs: 6 active+clean, 1 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:38:48 compute-0 podman[86728]: 2025-09-30 17:38:48.733395794 +0000 UTC m=+0.746567443 container remove 9b24bb4d1913fda20551efff8efdf4e299bf7d2f3b2c577b8f8465131e1ae6bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:38:48 compute-0 sudo[86624]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:48 compute-0 podman[86802]: 2025-09-30 17:38:48.824970427 +0000 UTC m=+0.292449888 container create d9e349bcf9e2873e77baa68af8023cc66d033b203dd72ec6d17efe88e76fa76c (image=quay.io/ceph/ceph:v19, name=boring_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:38:48 compute-0 sudo[86813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:38:48 compute-0 sudo[86813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:48 compute-0 sudo[86813]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:48 compute-0 podman[86802]: 2025-09-30 17:38:48.785301551 +0000 UTC m=+0.252781032 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:38:48 compute-0 sudo[86840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 17:38:48 compute-0 sudo[86840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:48 compute-0 systemd[1]: Started libpod-conmon-d9e349bcf9e2873e77baa68af8023cc66d033b203dd72ec6d17efe88e76fa76c.scope.
Sep 30 17:38:48 compute-0 systemd[1]: libpod-conmon-9b24bb4d1913fda20551efff8efdf4e299bf7d2f3b2c577b8f8465131e1ae6bf.scope: Deactivated successfully.
Sep 30 17:38:48 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ef81f8fdd18a67d40f5d17d142096d84b9631044ac6e0500c93f7ae5b98502c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ef81f8fdd18a67d40f5d17d142096d84b9631044ac6e0500c93f7ae5b98502c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:48 compute-0 podman[86802]: 2025-09-30 17:38:48.969319078 +0000 UTC m=+0.436798539 container init d9e349bcf9e2873e77baa68af8023cc66d033b203dd72ec6d17efe88e76fa76c (image=quay.io/ceph/ceph:v19, name=boring_hamilton, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:38:48 compute-0 podman[86802]: 2025-09-30 17:38:48.976608173 +0000 UTC m=+0.444087634 container start d9e349bcf9e2873e77baa68af8023cc66d033b203dd72ec6d17efe88e76fa76c (image=quay.io/ceph/ceph:v19, name=boring_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:38:49 compute-0 podman[86802]: 2025-09-30 17:38:49.030011487 +0000 UTC m=+0.497490958 container attach d9e349bcf9e2873e77baa68af8023cc66d033b203dd72ec6d17efe88e76fa76c (image=quay.io/ceph/ceph:v19, name=boring_hamilton, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Sep 30 17:38:49 compute-0 ceph-mon[73755]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Sep 30 17:38:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e23 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:38:49 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/449678829' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Sep 30 17:38:49 compute-0 ceph-mon[73755]: osdmap e23: 2 total, 2 up, 2 in
Sep 30 17:38:49 compute-0 ceph-mon[73755]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Sep 30 17:38:49 compute-0 podman[86930]: 2025-09-30 17:38:49.249468323 +0000 UTC m=+0.035386799 container create fbc57e20cff1bfeb76444987af72e30ad5071569ac28c7f935284a3fc7dd1730 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_poitras, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:38:49 compute-0 systemd[1]: Started libpod-conmon-fbc57e20cff1bfeb76444987af72e30ad5071569ac28c7f935284a3fc7dd1730.scope.
Sep 30 17:38:49 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:49 compute-0 podman[86930]: 2025-09-30 17:38:49.325184803 +0000 UTC m=+0.111103359 container init fbc57e20cff1bfeb76444987af72e30ad5071569ac28c7f935284a3fc7dd1730 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:38:49 compute-0 podman[86930]: 2025-09-30 17:38:49.233665542 +0000 UTC m=+0.019584038 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:38:49 compute-0 podman[86930]: 2025-09-30 17:38:49.330532938 +0000 UTC m=+0.116451414 container start fbc57e20cff1bfeb76444987af72e30ad5071569ac28c7f935284a3fc7dd1730 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_poitras, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:38:49 compute-0 podman[86930]: 2025-09-30 17:38:49.334315374 +0000 UTC m=+0.120233880 container attach fbc57e20cff1bfeb76444987af72e30ad5071569ac28c7f935284a3fc7dd1730 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_poitras, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:38:49 compute-0 objective_poitras[86947]: 167 167
Sep 30 17:38:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Sep 30 17:38:49 compute-0 systemd[1]: libpod-fbc57e20cff1bfeb76444987af72e30ad5071569ac28c7f935284a3fc7dd1730.scope: Deactivated successfully.
Sep 30 17:38:49 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3927088647' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Sep 30 17:38:49 compute-0 podman[86953]: 2025-09-30 17:38:49.371368304 +0000 UTC m=+0.020537952 container died fbc57e20cff1bfeb76444987af72e30ad5071569ac28c7f935284a3fc7dd1730 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_poitras, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Sep 30 17:38:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-38bd59906a0ee6538ad1cb5f9b0442f4bcdebc2d1f2cf44127d2af2753ef18c3-merged.mount: Deactivated successfully.
Sep 30 17:38:49 compute-0 podman[86953]: 2025-09-30 17:38:49.406552766 +0000 UTC m=+0.055722384 container remove fbc57e20cff1bfeb76444987af72e30ad5071569ac28c7f935284a3fc7dd1730 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_poitras, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:38:49 compute-0 systemd[1]: libpod-conmon-fbc57e20cff1bfeb76444987af72e30ad5071569ac28c7f935284a3fc7dd1730.scope: Deactivated successfully.
Sep 30 17:38:49 compute-0 podman[86975]: 2025-09-30 17:38:49.539601071 +0000 UTC m=+0.034910097 container create f7ebc0c25844879ec74a930a0af931aa37bb92d5d30c5c241605cef4cf461071 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 17:38:49 compute-0 systemd[1]: Started libpod-conmon-f7ebc0c25844879ec74a930a0af931aa37bb92d5d30c5c241605cef4cf461071.scope.
Sep 30 17:38:49 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d8cc2900becee5c9259eb57019adf1ef083cca57afa1f72dd70973000c76a63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d8cc2900becee5c9259eb57019adf1ef083cca57afa1f72dd70973000c76a63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d8cc2900becee5c9259eb57019adf1ef083cca57afa1f72dd70973000c76a63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d8cc2900becee5c9259eb57019adf1ef083cca57afa1f72dd70973000c76a63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:49 compute-0 podman[86975]: 2025-09-30 17:38:49.617549157 +0000 UTC m=+0.112858203 container init f7ebc0c25844879ec74a930a0af931aa37bb92d5d30c5c241605cef4cf461071 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_meitner, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Sep 30 17:38:49 compute-0 podman[86975]: 2025-09-30 17:38:49.524480847 +0000 UTC m=+0.019789913 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:38:49 compute-0 podman[86975]: 2025-09-30 17:38:49.623975861 +0000 UTC m=+0.119284887 container start f7ebc0c25844879ec74a930a0af931aa37bb92d5d30c5c241605cef4cf461071 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_meitner, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Sep 30 17:38:49 compute-0 podman[86975]: 2025-09-30 17:38:49.627450649 +0000 UTC m=+0.122759675 container attach f7ebc0c25844879ec74a930a0af931aa37bb92d5d30c5c241605cef4cf461071 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_meitner, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:38:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Sep 30 17:38:50 compute-0 ceph-mon[73755]: pgmap v68: 7 pgs: 6 active+clean, 1 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:38:50 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3927088647' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Sep 30 17:38:50 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3927088647' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Sep 30 17:38:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e24 e24: 2 total, 2 up, 2 in
Sep 30 17:38:50 compute-0 boring_hamilton[86868]: enabled application 'rbd' on pool 'backups'
Sep 30 17:38:50 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e24: 2 total, 2 up, 2 in
Sep 30 17:38:50 compute-0 systemd[1]: libpod-d9e349bcf9e2873e77baa68af8023cc66d033b203dd72ec6d17efe88e76fa76c.scope: Deactivated successfully.
Sep 30 17:38:50 compute-0 podman[86802]: 2025-09-30 17:38:50.222983512 +0000 UTC m=+1.690462983 container died d9e349bcf9e2873e77baa68af8023cc66d033b203dd72ec6d17efe88e76fa76c (image=quay.io/ceph/ceph:v19, name=boring_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid)
Sep 30 17:38:50 compute-0 lvm[87069]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 17:38:50 compute-0 lvm[87069]: VG ceph_vg0 finished
Sep 30 17:38:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ef81f8fdd18a67d40f5d17d142096d84b9631044ac6e0500c93f7ae5b98502c-merged.mount: Deactivated successfully.
Sep 30 17:38:50 compute-0 jovial_meitner[86993]: {}
Sep 30 17:38:50 compute-0 podman[86802]: 2025-09-30 17:38:50.282186183 +0000 UTC m=+1.749665654 container remove d9e349bcf9e2873e77baa68af8023cc66d033b203dd72ec6d17efe88e76fa76c (image=quay.io/ceph/ceph:v19, name=boring_hamilton, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 17:38:50 compute-0 systemd[1]: libpod-conmon-d9e349bcf9e2873e77baa68af8023cc66d033b203dd72ec6d17efe88e76fa76c.scope: Deactivated successfully.
Sep 30 17:38:50 compute-0 sudo[86787]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:50 compute-0 systemd[1]: libpod-f7ebc0c25844879ec74a930a0af931aa37bb92d5d30c5c241605cef4cf461071.scope: Deactivated successfully.
Sep 30 17:38:50 compute-0 podman[86975]: 2025-09-30 17:38:50.308228674 +0000 UTC m=+0.803537720 container died f7ebc0c25844879ec74a930a0af931aa37bb92d5d30c5c241605cef4cf461071 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_meitner, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Sep 30 17:38:50 compute-0 systemd[1]: libpod-f7ebc0c25844879ec74a930a0af931aa37bb92d5d30c5c241605cef4cf461071.scope: Consumed 1.099s CPU time.
Sep 30 17:38:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d8cc2900becee5c9259eb57019adf1ef083cca57afa1f72dd70973000c76a63-merged.mount: Deactivated successfully.
Sep 30 17:38:50 compute-0 podman[86975]: 2025-09-30 17:38:50.357106293 +0000 UTC m=+0.852415309 container remove f7ebc0c25844879ec74a930a0af931aa37bb92d5d30c5c241605cef4cf461071 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_meitner, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Sep 30 17:38:50 compute-0 systemd[1]: libpod-conmon-f7ebc0c25844879ec74a930a0af931aa37bb92d5d30c5c241605cef4cf461071.scope: Deactivated successfully.
Sep 30 17:38:50 compute-0 sudo[86840]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:38:50 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:38:50 compute-0 sudo[87120]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtagvaedtohxmtfcfrrevecfxufoxxfy ; /usr/bin/python3'
Sep 30 17:38:50 compute-0 sudo[87120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:38:50 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:50 compute-0 sudo[87123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 17:38:50 compute-0 sudo[87123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:50 compute-0 sudo[87123]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:50 compute-0 python3[87122]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:38:50 compute-0 podman[87148]: 2025-09-30 17:38:50.646950084 +0000 UTC m=+0.039947314 container create 8da37fc0d6d4b2b1155deea296abdf23960f184b4ba2d6a9bc2733344f95b061 (image=quay.io/ceph/ceph:v19, name=elegant_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Sep 30 17:38:50 compute-0 systemd[1]: Started libpod-conmon-8da37fc0d6d4b2b1155deea296abdf23960f184b4ba2d6a9bc2733344f95b061.scope.
Sep 30 17:38:50 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9e0a480e9d413986ac87f559211296e95696fd3b581a108b685db73bff523d6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9e0a480e9d413986ac87f559211296e95696fd3b581a108b685db73bff523d6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:50 compute-0 podman[87148]: 2025-09-30 17:38:50.722914861 +0000 UTC m=+0.115912111 container init 8da37fc0d6d4b2b1155deea296abdf23960f184b4ba2d6a9bc2733344f95b061 (image=quay.io/ceph/ceph:v19, name=elegant_panini, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 17:38:50 compute-0 podman[87148]: 2025-09-30 17:38:50.628875296 +0000 UTC m=+0.021872556 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:38:50 compute-0 podman[87148]: 2025-09-30 17:38:50.729553739 +0000 UTC m=+0.122550979 container start 8da37fc0d6d4b2b1155deea296abdf23960f184b4ba2d6a9bc2733344f95b061 (image=quay.io/ceph/ceph:v19, name=elegant_panini, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:38:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:38:50 compute-0 podman[87148]: 2025-09-30 17:38:50.734364061 +0000 UTC m=+0.127361331 container attach 8da37fc0d6d4b2b1155deea296abdf23960f184b4ba2d6a9bc2733344f95b061 (image=quay.io/ceph/ceph:v19, name=elegant_panini, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:38:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Sep 30 17:38:51 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3683221879' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Sep 30 17:38:51 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3927088647' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Sep 30 17:38:51 compute-0 ceph-mon[73755]: osdmap e24: 2 total, 2 up, 2 in
Sep 30 17:38:51 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:51 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:51 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3683221879' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Sep 30 17:38:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Sep 30 17:38:51 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3683221879' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Sep 30 17:38:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e25 e25: 2 total, 2 up, 2 in
Sep 30 17:38:51 compute-0 elegant_panini[87163]: enabled application 'rbd' on pool 'images'
Sep 30 17:38:51 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e25: 2 total, 2 up, 2 in
Sep 30 17:38:51 compute-0 systemd[1]: libpod-8da37fc0d6d4b2b1155deea296abdf23960f184b4ba2d6a9bc2733344f95b061.scope: Deactivated successfully.
Sep 30 17:38:51 compute-0 podman[87148]: 2025-09-30 17:38:51.451422716 +0000 UTC m=+0.844419966 container died 8da37fc0d6d4b2b1155deea296abdf23960f184b4ba2d6a9bc2733344f95b061 (image=quay.io/ceph/ceph:v19, name=elegant_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Sep 30 17:38:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9e0a480e9d413986ac87f559211296e95696fd3b581a108b685db73bff523d6-merged.mount: Deactivated successfully.
Sep 30 17:38:51 compute-0 podman[87148]: 2025-09-30 17:38:51.491173764 +0000 UTC m=+0.884170994 container remove 8da37fc0d6d4b2b1155deea296abdf23960f184b4ba2d6a9bc2733344f95b061 (image=quay.io/ceph/ceph:v19, name=elegant_panini, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:38:51 compute-0 systemd[1]: libpod-conmon-8da37fc0d6d4b2b1155deea296abdf23960f184b4ba2d6a9bc2733344f95b061.scope: Deactivated successfully.
Sep 30 17:38:51 compute-0 sudo[87120]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:51 compute-0 sudo[87224]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjdnvxvwqqpfjghoavholysnxsbfnvot ; /usr/bin/python3'
Sep 30 17:38:51 compute-0 sudo[87224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:38:51 compute-0 python3[87226]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:38:51 compute-0 podman[87227]: 2025-09-30 17:38:51.854406016 +0000 UTC m=+0.064135808 container create 5363ce4e172faba783a9a64f4a6a7aa7b50f470f22c0cd79dc43b5c4beb931c4 (image=quay.io/ceph/ceph:v19, name=optimistic_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid)
Sep 30 17:38:51 compute-0 systemd[1]: Started libpod-conmon-5363ce4e172faba783a9a64f4a6a7aa7b50f470f22c0cd79dc43b5c4beb931c4.scope.
Sep 30 17:38:51 compute-0 podman[87227]: 2025-09-30 17:38:51.81359009 +0000 UTC m=+0.023319902 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:38:51 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331ba95bda25b2a8fdc6dcd34ae0e57c9d88446527f7acfe26d3bad1cf5800b5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331ba95bda25b2a8fdc6dcd34ae0e57c9d88446527f7acfe26d3bad1cf5800b5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:51 compute-0 podman[87227]: 2025-09-30 17:38:51.961747388 +0000 UTC m=+0.171477210 container init 5363ce4e172faba783a9a64f4a6a7aa7b50f470f22c0cd79dc43b5c4beb931c4 (image=quay.io/ceph/ceph:v19, name=optimistic_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Sep 30 17:38:51 compute-0 podman[87227]: 2025-09-30 17:38:51.969066003 +0000 UTC m=+0.178795805 container start 5363ce4e172faba783a9a64f4a6a7aa7b50f470f22c0cd79dc43b5c4beb931c4 (image=quay.io/ceph/ceph:v19, name=optimistic_einstein, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Sep 30 17:38:51 compute-0 podman[87227]: 2025-09-30 17:38:51.9764341 +0000 UTC m=+0.186163892 container attach 5363ce4e172faba783a9a64f4a6a7aa7b50f470f22c0cd79dc43b5c4beb931c4 (image=quay.io/ceph/ceph:v19, name=optimistic_einstein, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:38:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Sep 30 17:38:52 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4246258036' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Sep 30 17:38:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Sep 30 17:38:52 compute-0 ceph-mon[73755]: pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:38:52 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3683221879' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Sep 30 17:38:52 compute-0 ceph-mon[73755]: osdmap e25: 2 total, 2 up, 2 in
Sep 30 17:38:52 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/4246258036' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Sep 30 17:38:52 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4246258036' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Sep 30 17:38:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e26 e26: 2 total, 2 up, 2 in
Sep 30 17:38:52 compute-0 optimistic_einstein[87242]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Sep 30 17:38:52 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e26: 2 total, 2 up, 2 in
Sep 30 17:38:52 compute-0 systemd[1]: libpod-5363ce4e172faba783a9a64f4a6a7aa7b50f470f22c0cd79dc43b5c4beb931c4.scope: Deactivated successfully.
Sep 30 17:38:52 compute-0 podman[87227]: 2025-09-30 17:38:52.615881248 +0000 UTC m=+0.825611070 container died 5363ce4e172faba783a9a64f4a6a7aa7b50f470f22c0cd79dc43b5c4beb931c4 (image=quay.io/ceph/ceph:v19, name=optimistic_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:38:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-331ba95bda25b2a8fdc6dcd34ae0e57c9d88446527f7acfe26d3bad1cf5800b5-merged.mount: Deactivated successfully.
Sep 30 17:38:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:38:52 compute-0 podman[87227]: 2025-09-30 17:38:52.749602259 +0000 UTC m=+0.959332051 container remove 5363ce4e172faba783a9a64f4a6a7aa7b50f470f22c0cd79dc43b5c4beb931c4 (image=quay.io/ceph/ceph:v19, name=optimistic_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:38:52 compute-0 systemd[1]: libpod-conmon-5363ce4e172faba783a9a64f4a6a7aa7b50f470f22c0cd79dc43b5c4beb931c4.scope: Deactivated successfully.
Sep 30 17:38:52 compute-0 sudo[87224]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:52 compute-0 sudo[87304]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctiuspjghlzaxsemymgbdeonusycnywf ; /usr/bin/python3'
Sep 30 17:38:52 compute-0 sudo[87304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:38:53 compute-0 python3[87306]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:38:53 compute-0 podman[87307]: 2025-09-30 17:38:53.138559703 +0000 UTC m=+0.090498066 container create b63fb31dfcbbc08407ba024213e4dc8e3ee6f0b9f417b6aa494a81ff0984bcd2 (image=quay.io/ceph/ceph:v19, name=interesting_kirch, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:38:53 compute-0 podman[87307]: 2025-09-30 17:38:53.077992157 +0000 UTC m=+0.029930540 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:38:53 compute-0 systemd[1]: Started libpod-conmon-b63fb31dfcbbc08407ba024213e4dc8e3ee6f0b9f417b6aa494a81ff0984bcd2.scope.
Sep 30 17:38:53 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f285805a0489faa4ee8f016a5eccf45584868c2db406faeb14b2402dbe87642/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f285805a0489faa4ee8f016a5eccf45584868c2db406faeb14b2402dbe87642/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:53 compute-0 podman[87307]: 2025-09-30 17:38:53.242775776 +0000 UTC m=+0.194714159 container init b63fb31dfcbbc08407ba024213e4dc8e3ee6f0b9f417b6aa494a81ff0984bcd2 (image=quay.io/ceph/ceph:v19, name=interesting_kirch, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Sep 30 17:38:53 compute-0 podman[87307]: 2025-09-30 17:38:53.248220294 +0000 UTC m=+0.200158667 container start b63fb31dfcbbc08407ba024213e4dc8e3ee6f0b9f417b6aa494a81ff0984bcd2 (image=quay.io/ceph/ceph:v19, name=interesting_kirch, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 17:38:53 compute-0 podman[87307]: 2025-09-30 17:38:53.252448741 +0000 UTC m=+0.204387124 container attach b63fb31dfcbbc08407ba024213e4dc8e3ee6f0b9f417b6aa494a81ff0984bcd2 (image=quay.io/ceph/ceph:v19, name=interesting_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Sep 30 17:38:53 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Sep 30 17:38:53 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1098035916' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Sep 30 17:38:53 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/4246258036' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Sep 30 17:38:53 compute-0 ceph-mon[73755]: osdmap e26: 2 total, 2 up, 2 in
Sep 30 17:38:53 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Sep 30 17:38:53 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1098035916' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Sep 30 17:38:53 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e27 e27: 2 total, 2 up, 2 in
Sep 30 17:38:53 compute-0 interesting_kirch[87323]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Sep 30 17:38:53 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e27: 2 total, 2 up, 2 in
Sep 30 17:38:53 compute-0 systemd[1]: libpod-b63fb31dfcbbc08407ba024213e4dc8e3ee6f0b9f417b6aa494a81ff0984bcd2.scope: Deactivated successfully.
Sep 30 17:38:53 compute-0 podman[87307]: 2025-09-30 17:38:53.76224516 +0000 UTC m=+0.714183523 container died b63fb31dfcbbc08407ba024213e4dc8e3ee6f0b9f417b6aa494a81ff0984bcd2 (image=quay.io/ceph/ceph:v19, name=interesting_kirch, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Sep 30 17:38:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f285805a0489faa4ee8f016a5eccf45584868c2db406faeb14b2402dbe87642-merged.mount: Deactivated successfully.
Sep 30 17:38:53 compute-0 podman[87307]: 2025-09-30 17:38:53.858555903 +0000 UTC m=+0.810494266 container remove b63fb31dfcbbc08407ba024213e4dc8e3ee6f0b9f417b6aa494a81ff0984bcd2 (image=quay.io/ceph/ceph:v19, name=interesting_kirch, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Sep 30 17:38:53 compute-0 systemd[1]: libpod-conmon-b63fb31dfcbbc08407ba024213e4dc8e3ee6f0b9f417b6aa494a81ff0984bcd2.scope: Deactivated successfully.
Sep 30 17:38:53 compute-0 sudo[87304]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:54 compute-0 ceph-mon[73755]: log_channel(cluster) log [WRN] : Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Sep 30 17:38:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:38:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:38:54 compute-0 python3[87435]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 17:38:54 compute-0 ceph-mon[73755]: pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:38:54 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1098035916' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Sep 30 17:38:54 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1098035916' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Sep 30 17:38:54 compute-0 ceph-mon[73755]: osdmap e27: 2 total, 2 up, 2 in
Sep 30 17:38:54 compute-0 ceph-mon[73755]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Sep 30 17:38:54 compute-0 ceph-mon[73755]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Sep 30 17:38:54 compute-0 ceph-mon[73755]: log_channel(cluster) log [INF] : Cluster is now healthy
Sep 30 17:38:55 compute-0 python3[87506]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759253934.5269816-33645-124911772358596/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=8a900d20291ff3adbd378d89da08616d10603f6d backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:38:55 compute-0 sudo[87606]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efesmvvqgzpsuomxdzqgxadtvdqnwaxy ; /usr/bin/python3'
Sep 30 17:38:55 compute-0 sudo[87606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:38:55 compute-0 python3[87608]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 17:38:55 compute-0 sudo[87606]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:55 compute-0 ceph-mon[73755]: pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:38:55 compute-0 ceph-mon[73755]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Sep 30 17:38:55 compute-0 ceph-mon[73755]: Cluster is now healthy
Sep 30 17:38:55 compute-0 sudo[87681]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwfmtzghpqhsecmxfbdebcviqxesuwjt ; /usr/bin/python3'
Sep 30 17:38:55 compute-0 sudo[87681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:38:56 compute-0 python3[87683]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759253935.4341118-33659-69340220498161/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=3a9ea9094e29b790eaa114a3d2045d62b99d0eea backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:38:56 compute-0 sudo[87681]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:56 compute-0 sudo[87731]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybhdbkzcgrfyoujpeedskiiattwczhcv ; /usr/bin/python3'
Sep 30 17:38:56 compute-0 sudo[87731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:38:56 compute-0 python3[87733]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:38:56 compute-0 podman[87734]: 2025-09-30 17:38:56.527643443 +0000 UTC m=+0.048628855 container create 6dccf4309b1710744599e376c8caacdada17c5f5c246226ca776618a1cbdec61 (image=quay.io/ceph/ceph:v19, name=keen_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325)
Sep 30 17:38:56 compute-0 systemd[1]: Started libpod-conmon-6dccf4309b1710744599e376c8caacdada17c5f5c246226ca776618a1cbdec61.scope.
Sep 30 17:38:56 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00bc90cb4783b3dcc4a7600b7bd0c770b7bdaba8766d8d219067a657689b8d1f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00bc90cb4783b3dcc4a7600b7bd0c770b7bdaba8766d8d219067a657689b8d1f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00bc90cb4783b3dcc4a7600b7bd0c770b7bdaba8766d8d219067a657689b8d1f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:56 compute-0 podman[87734]: 2025-09-30 17:38:56.506740662 +0000 UTC m=+0.027726094 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:38:56 compute-0 podman[87734]: 2025-09-30 17:38:56.618428845 +0000 UTC m=+0.139414277 container init 6dccf4309b1710744599e376c8caacdada17c5f5c246226ca776618a1cbdec61 (image=quay.io/ceph/ceph:v19, name=keen_fermat, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Sep 30 17:38:56 compute-0 podman[87734]: 2025-09-30 17:38:56.624841418 +0000 UTC m=+0.145826830 container start 6dccf4309b1710744599e376c8caacdada17c5f5c246226ca776618a1cbdec61 (image=quay.io/ceph/ceph:v19, name=keen_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:38:56 compute-0 podman[87734]: 2025-09-30 17:38:56.628267634 +0000 UTC m=+0.149253076 container attach 6dccf4309b1710744599e376c8caacdada17c5f5c246226ca776618a1cbdec61 (image=quay.io/ceph/ceph:v19, name=keen_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:38:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:38:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Sep 30 17:38:57 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3634785233' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Sep 30 17:38:57 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3634785233' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Sep 30 17:38:57 compute-0 keen_fermat[87750]: 
Sep 30 17:38:57 compute-0 keen_fermat[87750]: [global]
Sep 30 17:38:57 compute-0 keen_fermat[87750]:         fsid = 63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:38:57 compute-0 keen_fermat[87750]:         mon_host = 192.168.122.100
Sep 30 17:38:57 compute-0 systemd[1]: libpod-6dccf4309b1710744599e376c8caacdada17c5f5c246226ca776618a1cbdec61.scope: Deactivated successfully.
Sep 30 17:38:57 compute-0 conmon[87750]: conmon 6dccf4309b1710744599 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6dccf4309b1710744599e376c8caacdada17c5f5c246226ca776618a1cbdec61.scope/container/memory.events
Sep 30 17:38:57 compute-0 podman[87734]: 2025-09-30 17:38:57.055335085 +0000 UTC m=+0.576320487 container died 6dccf4309b1710744599e376c8caacdada17c5f5c246226ca776618a1cbdec61 (image=quay.io/ceph/ceph:v19, name=keen_fermat, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:38:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-00bc90cb4783b3dcc4a7600b7bd0c770b7bdaba8766d8d219067a657689b8d1f-merged.mount: Deactivated successfully.
Sep 30 17:38:57 compute-0 podman[87734]: 2025-09-30 17:38:57.093159625 +0000 UTC m=+0.614145037 container remove 6dccf4309b1710744599e376c8caacdada17c5f5c246226ca776618a1cbdec61 (image=quay.io/ceph/ceph:v19, name=keen_fermat, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Sep 30 17:38:57 compute-0 systemd[1]: libpod-conmon-6dccf4309b1710744599e376c8caacdada17c5f5c246226ca776618a1cbdec61.scope: Deactivated successfully.
Sep 30 17:38:57 compute-0 sudo[87731]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:57 compute-0 sudo[87811]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esakkxxszahkmjhftrxowsmeoydtnahf ; /usr/bin/python3'
Sep 30 17:38:57 compute-0 sudo[87811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:38:57 compute-0 python3[87813]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:38:57 compute-0 podman[87814]: 2025-09-30 17:38:57.447634674 +0000 UTC m=+0.048046429 container create e804d11dedfe1ea217a8e4c4682df23bb0070e634492b12a5893a02bcf8f42f2 (image=quay.io/ceph/ceph:v19, name=wonderful_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Sep 30 17:38:57 compute-0 systemd[1]: Started libpod-conmon-e804d11dedfe1ea217a8e4c4682df23bb0070e634492b12a5893a02bcf8f42f2.scope.
Sep 30 17:38:57 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41203053ac015feb10d105a9ede8ab17389072fabdb4514780d53452e49abe0a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41203053ac015feb10d105a9ede8ab17389072fabdb4514780d53452e49abe0a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41203053ac015feb10d105a9ede8ab17389072fabdb4514780d53452e49abe0a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:57 compute-0 podman[87814]: 2025-09-30 17:38:57.421142283 +0000 UTC m=+0.021554078 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:38:57 compute-0 podman[87814]: 2025-09-30 17:38:57.519310782 +0000 UTC m=+0.119722577 container init e804d11dedfe1ea217a8e4c4682df23bb0070e634492b12a5893a02bcf8f42f2 (image=quay.io/ceph/ceph:v19, name=wonderful_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Sep 30 17:38:57 compute-0 podman[87814]: 2025-09-30 17:38:57.525587382 +0000 UTC m=+0.125999147 container start e804d11dedfe1ea217a8e4c4682df23bb0070e634492b12a5893a02bcf8f42f2 (image=quay.io/ceph/ceph:v19, name=wonderful_joliot, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Sep 30 17:38:57 compute-0 podman[87814]: 2025-09-30 17:38:57.529768258 +0000 UTC m=+0.130180043 container attach e804d11dedfe1ea217a8e4c4682df23bb0070e634492b12a5893a02bcf8f42f2 (image=quay.io/ceph/ceph:v19, name=wonderful_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:38:57 compute-0 ceph-mon[73755]: pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:38:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3634785233' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Sep 30 17:38:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3634785233' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Sep 30 17:38:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Sep 30 17:38:58 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3768454336' entity='client.admin' 
Sep 30 17:38:58 compute-0 wonderful_joliot[87830]: set ssl_option
Sep 30 17:38:58 compute-0 systemd[1]: libpod-e804d11dedfe1ea217a8e4c4682df23bb0070e634492b12a5893a02bcf8f42f2.scope: Deactivated successfully.
Sep 30 17:38:58 compute-0 conmon[87830]: conmon e804d11dedfe1ea217a8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e804d11dedfe1ea217a8e4c4682df23bb0070e634492b12a5893a02bcf8f42f2.scope/container/memory.events
Sep 30 17:38:58 compute-0 podman[87814]: 2025-09-30 17:38:58.032654281 +0000 UTC m=+0.633066046 container died e804d11dedfe1ea217a8e4c4682df23bb0070e634492b12a5893a02bcf8f42f2 (image=quay.io/ceph/ceph:v19, name=wonderful_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:38:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-41203053ac015feb10d105a9ede8ab17389072fabdb4514780d53452e49abe0a-merged.mount: Deactivated successfully.
Sep 30 17:38:58 compute-0 podman[87814]: 2025-09-30 17:38:58.070504801 +0000 UTC m=+0.670916556 container remove e804d11dedfe1ea217a8e4c4682df23bb0070e634492b12a5893a02bcf8f42f2 (image=quay.io/ceph/ceph:v19, name=wonderful_joliot, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 17:38:58 compute-0 systemd[1]: libpod-conmon-e804d11dedfe1ea217a8e4c4682df23bb0070e634492b12a5893a02bcf8f42f2.scope: Deactivated successfully.
Sep 30 17:38:58 compute-0 sudo[87811]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:38:58 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:38:58 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:38:58 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:38:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 17:38:58 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:38:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 17:38:58 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 17:38:58 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:38:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 17:38:58 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:38:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:38:58 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:38:58 compute-0 sudo[87903]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zamzbflkienmdyhagvomigjfcojkyvyw ; /usr/bin/python3'
Sep 30 17:38:58 compute-0 sudo[87903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:38:58 compute-0 sudo[87877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:38:58 compute-0 sudo[87877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:58 compute-0 sudo[87877]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:58 compute-0 sudo[87919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 17:38:58 compute-0 sudo[87919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:58 compute-0 python3[87916]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:38:58 compute-0 podman[87944]: 2025-09-30 17:38:58.415105941 +0000 UTC m=+0.047579918 container create 68b6993dc292194c3bfb148a266354ba98a4aed2e4745fea089b005e6af5b012 (image=quay.io/ceph/ceph:v19, name=unruffled_noether, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:38:58 compute-0 systemd[1]: Started libpod-conmon-68b6993dc292194c3bfb148a266354ba98a4aed2e4745fea089b005e6af5b012.scope.
Sep 30 17:38:58 compute-0 podman[87944]: 2025-09-30 17:38:58.392077617 +0000 UTC m=+0.024551614 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:38:58 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5d7ca4d2ec208b96fb803cf204966808b1418b31f3e7de4e33569648894e5ce/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5d7ca4d2ec208b96fb803cf204966808b1418b31f3e7de4e33569648894e5ce/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5d7ca4d2ec208b96fb803cf204966808b1418b31f3e7de4e33569648894e5ce/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:58 compute-0 podman[87944]: 2025-09-30 17:38:58.500861886 +0000 UTC m=+0.133335883 container init 68b6993dc292194c3bfb148a266354ba98a4aed2e4745fea089b005e6af5b012 (image=quay.io/ceph/ceph:v19, name=unruffled_noether, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:38:58 compute-0 podman[87944]: 2025-09-30 17:38:58.506110029 +0000 UTC m=+0.138584006 container start 68b6993dc292194c3bfb148a266354ba98a4aed2e4745fea089b005e6af5b012 (image=quay.io/ceph/ceph:v19, name=unruffled_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Sep 30 17:38:58 compute-0 podman[87944]: 2025-09-30 17:38:58.509912635 +0000 UTC m=+0.142386612 container attach 68b6993dc292194c3bfb148a266354ba98a4aed2e4745fea089b005e6af5b012 (image=quay.io/ceph/ceph:v19, name=unruffled_noether, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:38:58 compute-0 podman[88003]: 2025-09-30 17:38:58.631273643 +0000 UTC m=+0.032682080 container create df20297e8e35bfab0fc820355b088f27333c5f258b498854cf37e658df621b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_hawking, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Sep 30 17:38:58 compute-0 systemd[1]: Started libpod-conmon-df20297e8e35bfab0fc820355b088f27333c5f258b498854cf37e658df621b85.scope.
Sep 30 17:38:58 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:58 compute-0 podman[88003]: 2025-09-30 17:38:58.703362351 +0000 UTC m=+0.104770818 container init df20297e8e35bfab0fc820355b088f27333c5f258b498854cf37e658df621b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_hawking, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:38:58 compute-0 podman[88003]: 2025-09-30 17:38:58.708947903 +0000 UTC m=+0.110356340 container start df20297e8e35bfab0fc820355b088f27333c5f258b498854cf37e658df621b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_hawking, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Sep 30 17:38:58 compute-0 sharp_hawking[88037]: 167 167
Sep 30 17:38:58 compute-0 podman[88003]: 2025-09-30 17:38:58.712158994 +0000 UTC m=+0.113567461 container attach df20297e8e35bfab0fc820355b088f27333c5f258b498854cf37e658df621b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_hawking, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Sep 30 17:38:58 compute-0 systemd[1]: libpod-df20297e8e35bfab0fc820355b088f27333c5f258b498854cf37e658df621b85.scope: Deactivated successfully.
Sep 30 17:38:58 compute-0 conmon[88037]: conmon df20297e8e35bfab0fc8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-df20297e8e35bfab0fc820355b088f27333c5f258b498854cf37e658df621b85.scope/container/memory.events
Sep 30 17:38:58 compute-0 podman[88003]: 2025-09-30 17:38:58.713661502 +0000 UTC m=+0.115069939 container died df20297e8e35bfab0fc820355b088f27333c5f258b498854cf37e658df621b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_hawking, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:38:58 compute-0 podman[88003]: 2025-09-30 17:38:58.618020847 +0000 UTC m=+0.019429304 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:38:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:38:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e62145b06b60978506ad09c700facd4595d6dce527711e1d4aa626e61969c17-merged.mount: Deactivated successfully.
Sep 30 17:38:58 compute-0 podman[88003]: 2025-09-30 17:38:58.743427007 +0000 UTC m=+0.144835444 container remove df20297e8e35bfab0fc820355b088f27333c5f258b498854cf37e658df621b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_hawking, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:38:58 compute-0 systemd[1]: libpod-conmon-df20297e8e35bfab0fc820355b088f27333c5f258b498854cf37e658df621b85.scope: Deactivated successfully.
Sep 30 17:38:58 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.14272 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:38:58 compute-0 ceph-mgr[74051]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1
Sep 30 17:38:58 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1
Sep 30 17:38:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Sep 30 17:38:58 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:58 compute-0 ceph-mgr[74051]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Sep 30 17:38:58 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Sep 30 17:38:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Sep 30 17:38:58 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:58 compute-0 unruffled_noether[87959]: Scheduled rgw.rgw update...
Sep 30 17:38:58 compute-0 unruffled_noether[87959]: Scheduled ingress.rgw.default update...
Sep 30 17:38:58 compute-0 podman[88061]: 2025-09-30 17:38:58.888473475 +0000 UTC m=+0.044223142 container create a8c47e6d3e9d65f76cc08adeeec9c861387c5590f40356e9dc150e8ffaf7798a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_curie, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:38:58 compute-0 systemd[1]: libpod-68b6993dc292194c3bfb148a266354ba98a4aed2e4745fea089b005e6af5b012.scope: Deactivated successfully.
Sep 30 17:38:58 compute-0 podman[87944]: 2025-09-30 17:38:58.90009933 +0000 UTC m=+0.532573307 container died 68b6993dc292194c3bfb148a266354ba98a4aed2e4745fea089b005e6af5b012 (image=quay.io/ceph/ceph:v19, name=unruffled_noether, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Sep 30 17:38:58 compute-0 systemd[1]: Started libpod-conmon-a8c47e6d3e9d65f76cc08adeeec9c861387c5590f40356e9dc150e8ffaf7798a.scope.
Sep 30 17:38:58 compute-0 podman[87944]: 2025-09-30 17:38:58.943300255 +0000 UTC m=+0.575774232 container remove 68b6993dc292194c3bfb148a266354ba98a4aed2e4745fea089b005e6af5b012 (image=quay.io/ceph/ceph:v19, name=unruffled_noether, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Sep 30 17:38:58 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:58 compute-0 systemd[1]: libpod-conmon-68b6993dc292194c3bfb148a266354ba98a4aed2e4745fea089b005e6af5b012.scope: Deactivated successfully.
Sep 30 17:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f19188e5e3d7f3275057752375bcdf66453336ea1e1a2795d6339ece151ec13/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f19188e5e3d7f3275057752375bcdf66453336ea1e1a2795d6339ece151ec13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f19188e5e3d7f3275057752375bcdf66453336ea1e1a2795d6339ece151ec13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f19188e5e3d7f3275057752375bcdf66453336ea1e1a2795d6339ece151ec13/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f19188e5e3d7f3275057752375bcdf66453336ea1e1a2795d6339ece151ec13/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:38:58 compute-0 sudo[87903]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:58 compute-0 podman[88061]: 2025-09-30 17:38:58.870001407 +0000 UTC m=+0.025751084 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:38:58 compute-0 podman[88061]: 2025-09-30 17:38:58.978951959 +0000 UTC m=+0.134701616 container init a8c47e6d3e9d65f76cc08adeeec9c861387c5590f40356e9dc150e8ffaf7798a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True)
Sep 30 17:38:58 compute-0 podman[88061]: 2025-09-30 17:38:58.985808023 +0000 UTC m=+0.141557670 container start a8c47e6d3e9d65f76cc08adeeec9c861387c5590f40356e9dc150e8ffaf7798a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_curie, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:38:58 compute-0 podman[88061]: 2025-09-30 17:38:58.99001288 +0000 UTC m=+0.145762547 container attach a8c47e6d3e9d65f76cc08adeeec9c861387c5590f40356e9dc150e8ffaf7798a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Sep 30 17:38:59 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3768454336' entity='client.admin' 
Sep 30 17:38:59 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:59 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:59 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:38:59 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:38:59 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:59 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:38:59 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:38:59 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:38:59 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:59 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:38:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5d7ca4d2ec208b96fb803cf204966808b1418b31f3e7de4e33569648894e5ce-merged.mount: Deactivated successfully.
Sep 30 17:38:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:38:59 compute-0 friendly_curie[88092]: --> passed data devices: 0 physical, 1 LVM
Sep 30 17:38:59 compute-0 friendly_curie[88092]: --> All data devices are unavailable
Sep 30 17:38:59 compute-0 systemd[1]: libpod-a8c47e6d3e9d65f76cc08adeeec9c861387c5590f40356e9dc150e8ffaf7798a.scope: Deactivated successfully.
Sep 30 17:38:59 compute-0 podman[88061]: 2025-09-30 17:38:59.315089034 +0000 UTC m=+0.470838701 container died a8c47e6d3e9d65f76cc08adeeec9c861387c5590f40356e9dc150e8ffaf7798a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_curie, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:38:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f19188e5e3d7f3275057752375bcdf66453336ea1e1a2795d6339ece151ec13-merged.mount: Deactivated successfully.
Sep 30 17:38:59 compute-0 podman[88061]: 2025-09-30 17:38:59.360051514 +0000 UTC m=+0.515801161 container remove a8c47e6d3e9d65f76cc08adeeec9c861387c5590f40356e9dc150e8ffaf7798a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:38:59 compute-0 python3[88178]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_dashboard.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 17:38:59 compute-0 systemd[1]: libpod-conmon-a8c47e6d3e9d65f76cc08adeeec9c861387c5590f40356e9dc150e8ffaf7798a.scope: Deactivated successfully.
Sep 30 17:38:59 compute-0 sudo[87919]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:59 compute-0 sudo[88196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:38:59 compute-0 sudo[88196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:59 compute-0 sudo[88196]: pam_unix(sudo:session): session closed for user root
Sep 30 17:38:59 compute-0 sudo[88249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 17:38:59 compute-0 sudo[88249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:38:59 compute-0 python3[88315]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759253939.0925965-33678-102030395936333/source dest=/tmp/ceph_dashboard.yml mode=0644 force=True follow=False _original_basename=ceph_monitoring_stack.yml.j2 checksum=2701faaa92cae31b5bbad92984c27e2af7a44b84 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:38:59 compute-0 podman[88380]: 2025-09-30 17:38:59.852688958 +0000 UTC m=+0.035245325 container create e5b8449bd392f29acc83f7124fb6f6d66eba77d56ab3ae046ab3bc10264ff7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_poincare, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:38:59 compute-0 systemd[1]: Started libpod-conmon-e5b8449bd392f29acc83f7124fb6f6d66eba77d56ab3ae046ab3bc10264ff7a9.scope.
Sep 30 17:38:59 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:38:59 compute-0 podman[88380]: 2025-09-30 17:38:59.908818592 +0000 UTC m=+0.091374969 container init e5b8449bd392f29acc83f7124fb6f6d66eba77d56ab3ae046ab3bc10264ff7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_poincare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Sep 30 17:38:59 compute-0 podman[88380]: 2025-09-30 17:38:59.915707256 +0000 UTC m=+0.098263623 container start e5b8449bd392f29acc83f7124fb6f6d66eba77d56ab3ae046ab3bc10264ff7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_poincare, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:38:59 compute-0 podman[88380]: 2025-09-30 17:38:59.918880437 +0000 UTC m=+0.101436804 container attach e5b8449bd392f29acc83f7124fb6f6d66eba77d56ab3ae046ab3bc10264ff7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_poincare, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Sep 30 17:38:59 compute-0 modest_poincare[88396]: 167 167
Sep 30 17:38:59 compute-0 systemd[1]: libpod-e5b8449bd392f29acc83f7124fb6f6d66eba77d56ab3ae046ab3bc10264ff7a9.scope: Deactivated successfully.
Sep 30 17:38:59 compute-0 podman[88380]: 2025-09-30 17:38:59.921018641 +0000 UTC m=+0.103575008 container died e5b8449bd392f29acc83f7124fb6f6d66eba77d56ab3ae046ab3bc10264ff7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Sep 30 17:38:59 compute-0 podman[88380]: 2025-09-30 17:38:59.837261257 +0000 UTC m=+0.019817654 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:38:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf1af313f6d8f921e9e19b20a3036152aca1153feae52e1cc93c7c2e1b45c71f-merged.mount: Deactivated successfully.
Sep 30 17:38:59 compute-0 podman[88380]: 2025-09-30 17:38:59.95725 +0000 UTC m=+0.139806367 container remove e5b8449bd392f29acc83f7124fb6f6d66eba77d56ab3ae046ab3bc10264ff7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Sep 30 17:38:59 compute-0 systemd[1]: libpod-conmon-e5b8449bd392f29acc83f7124fb6f6d66eba77d56ab3ae046ab3bc10264ff7a9.scope: Deactivated successfully.
Sep 30 17:39:00 compute-0 podman[88420]: 2025-09-30 17:39:00.103767465 +0000 UTC m=+0.040538499 container create 8ade3cc2e286209e0478b4df64772f26fb1dc305affd73eebe9a5c982b47ff17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_jang, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 17:39:00 compute-0 systemd[1]: Started libpod-conmon-8ade3cc2e286209e0478b4df64772f26fb1dc305affd73eebe9a5c982b47ff17.scope.
Sep 30 17:39:00 compute-0 sudo[88457]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epgnuhsaxehnvytkbyyxcvftywbrvhrp ; /usr/bin/python3'
Sep 30 17:39:00 compute-0 sudo[88457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:39:00 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:00 compute-0 podman[88420]: 2025-09-30 17:39:00.08581379 +0000 UTC m=+0.022584844 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08cc457fcb44e26d85d89addd349c8224e091cc3085d45c1861d94cc7fe6c0f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08cc457fcb44e26d85d89addd349c8224e091cc3085d45c1861d94cc7fe6c0f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08cc457fcb44e26d85d89addd349c8224e091cc3085d45c1861d94cc7fe6c0f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08cc457fcb44e26d85d89addd349c8224e091cc3085d45c1861d94cc7fe6c0f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:00 compute-0 ceph-mon[73755]: pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:39:00 compute-0 ceph-mon[73755]: from='client.14272 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:39:00 compute-0 ceph-mon[73755]: Saving service rgw.rgw spec with placement compute-0;compute-1
Sep 30 17:39:00 compute-0 ceph-mon[73755]: Saving service ingress.rgw.default spec with placement count:2
Sep 30 17:39:00 compute-0 podman[88420]: 2025-09-30 17:39:00.20571088 +0000 UTC m=+0.142481944 container init 8ade3cc2e286209e0478b4df64772f26fb1dc305affd73eebe9a5c982b47ff17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_jang, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:39:00 compute-0 podman[88420]: 2025-09-30 17:39:00.213169479 +0000 UTC m=+0.149940523 container start 8ade3cc2e286209e0478b4df64772f26fb1dc305affd73eebe9a5c982b47ff17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_jang, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Sep 30 17:39:00 compute-0 podman[88420]: 2025-09-30 17:39:00.216404661 +0000 UTC m=+0.153175705 container attach 8ade3cc2e286209e0478b4df64772f26fb1dc305affd73eebe9a5c982b47ff17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:39:00 compute-0 python3[88463]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_dashboard.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:39:00 compute-0 podman[88467]: 2025-09-30 17:39:00.344555762 +0000 UTC m=+0.039487633 container create 89dc031e6248e2790daa1ae533e741aa4932d2da847d48011b5a263912794bad (image=quay.io/ceph/ceph:v19, name=infallible_noyce, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Sep 30 17:39:00 compute-0 systemd[1]: Started libpod-conmon-89dc031e6248e2790daa1ae533e741aa4932d2da847d48011b5a263912794bad.scope.
Sep 30 17:39:00 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bba1992980e7cb54e6d2ea7213dc818e3ccd42fa8173c5f46a45071097eb7d7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bba1992980e7cb54e6d2ea7213dc818e3ccd42fa8173c5f46a45071097eb7d7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bba1992980e7cb54e6d2ea7213dc818e3ccd42fa8173c5f46a45071097eb7d7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:00 compute-0 podman[88467]: 2025-09-30 17:39:00.416788023 +0000 UTC m=+0.111719914 container init 89dc031e6248e2790daa1ae533e741aa4932d2da847d48011b5a263912794bad (image=quay.io/ceph/ceph:v19, name=infallible_noyce, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:39:00 compute-0 podman[88467]: 2025-09-30 17:39:00.423458233 +0000 UTC m=+0.118390104 container start 89dc031e6248e2790daa1ae533e741aa4932d2da847d48011b5a263912794bad (image=quay.io/ceph/ceph:v19, name=infallible_noyce, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Sep 30 17:39:00 compute-0 podman[88467]: 2025-09-30 17:39:00.329635933 +0000 UTC m=+0.024567834 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:39:00 compute-0 podman[88467]: 2025-09-30 17:39:00.426927461 +0000 UTC m=+0.121859352 container attach 89dc031e6248e2790daa1ae533e741aa4932d2da847d48011b5a263912794bad (image=quay.io/ceph/ceph:v19, name=infallible_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Sep 30 17:39:00 compute-0 dazzling_jang[88461]: {
Sep 30 17:39:00 compute-0 dazzling_jang[88461]:     "0": [
Sep 30 17:39:00 compute-0 dazzling_jang[88461]:         {
Sep 30 17:39:00 compute-0 dazzling_jang[88461]:             "devices": [
Sep 30 17:39:00 compute-0 dazzling_jang[88461]:                 "/dev/loop3"
Sep 30 17:39:00 compute-0 dazzling_jang[88461]:             ],
Sep 30 17:39:00 compute-0 dazzling_jang[88461]:             "lv_name": "ceph_lv0",
Sep 30 17:39:00 compute-0 dazzling_jang[88461]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:39:00 compute-0 dazzling_jang[88461]:             "lv_size": "21470642176",
Sep 30 17:39:00 compute-0 dazzling_jang[88461]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 17:39:00 compute-0 dazzling_jang[88461]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:39:00 compute-0 dazzling_jang[88461]:             "name": "ceph_lv0",
Sep 30 17:39:00 compute-0 dazzling_jang[88461]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:39:00 compute-0 dazzling_jang[88461]:             "tags": {
Sep 30 17:39:00 compute-0 dazzling_jang[88461]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:39:00 compute-0 dazzling_jang[88461]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:39:00 compute-0 dazzling_jang[88461]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 17:39:00 compute-0 dazzling_jang[88461]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 17:39:00 compute-0 dazzling_jang[88461]:                 "ceph.cluster_name": "ceph",
Sep 30 17:39:00 compute-0 dazzling_jang[88461]:                 "ceph.crush_device_class": "",
Sep 30 17:39:00 compute-0 dazzling_jang[88461]:                 "ceph.encrypted": "0",
Sep 30 17:39:00 compute-0 dazzling_jang[88461]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 17:39:00 compute-0 dazzling_jang[88461]:                 "ceph.osd_id": "0",
Sep 30 17:39:00 compute-0 dazzling_jang[88461]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 17:39:00 compute-0 dazzling_jang[88461]:                 "ceph.type": "block",
Sep 30 17:39:00 compute-0 dazzling_jang[88461]:                 "ceph.vdo": "0",
Sep 30 17:39:00 compute-0 dazzling_jang[88461]:                 "ceph.with_tpm": "0"
Sep 30 17:39:00 compute-0 dazzling_jang[88461]:             },
Sep 30 17:39:00 compute-0 dazzling_jang[88461]:             "type": "block",
Sep 30 17:39:00 compute-0 dazzling_jang[88461]:             "vg_name": "ceph_vg0"
Sep 30 17:39:00 compute-0 dazzling_jang[88461]:         }
Sep 30 17:39:00 compute-0 dazzling_jang[88461]:     ]
Sep 30 17:39:00 compute-0 dazzling_jang[88461]: }
Sep 30 17:39:00 compute-0 systemd[1]: libpod-8ade3cc2e286209e0478b4df64772f26fb1dc305affd73eebe9a5c982b47ff17.scope: Deactivated successfully.
Sep 30 17:39:00 compute-0 podman[88420]: 2025-09-30 17:39:00.531203765 +0000 UTC m=+0.467974809 container died 8ade3cc2e286209e0478b4df64772f26fb1dc305affd73eebe9a5c982b47ff17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_jang, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:39:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-08cc457fcb44e26d85d89addd349c8224e091cc3085d45c1861d94cc7fe6c0f2-merged.mount: Deactivated successfully.
Sep 30 17:39:00 compute-0 podman[88420]: 2025-09-30 17:39:00.568532702 +0000 UTC m=+0.505303746 container remove 8ade3cc2e286209e0478b4df64772f26fb1dc305affd73eebe9a5c982b47ff17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_jang, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:39:00 compute-0 systemd[1]: libpod-conmon-8ade3cc2e286209e0478b4df64772f26fb1dc305affd73eebe9a5c982b47ff17.scope: Deactivated successfully.
Sep 30 17:39:00 compute-0 sudo[88249]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:00 compute-0 sudo[88522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:39:00 compute-0 sudo[88522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:00 compute-0 sudo[88522]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:00 compute-0 sudo[88547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 17:39:00 compute-0 sudo[88547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_17:39:00
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'backups', 'volumes', '.mgr', 'vms']
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.14276 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [cephadm INFO root] Saving service node-exporter spec with placement *
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Saving service node-exporter spec with placement *
Sep 30 17:39:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Sep 30 17:39:00 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [cephadm INFO root] Saving service grafana spec with placement compute-0;count:1
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Saving service grafana spec with placement compute-0;count:1
Sep 30 17:39:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Sep 30 17:39:00 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [cephadm INFO root] Saving service prometheus spec with placement compute-0;count:1
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Saving service prometheus spec with placement compute-0;count:1
Sep 30 17:39:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Sep 30 17:39:00 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [cephadm INFO root] Saving service alertmanager spec with placement compute-0;count:1
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Saving service alertmanager spec with placement compute-0;count:1
Sep 30 17:39:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Sep 30 17:39:00 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:00 compute-0 infallible_noyce[88483]: Scheduled node-exporter update...
Sep 30 17:39:00 compute-0 infallible_noyce[88483]: Scheduled grafana update...
Sep 30 17:39:00 compute-0 infallible_noyce[88483]: Scheduled prometheus update...
Sep 30 17:39:00 compute-0 infallible_noyce[88483]: Scheduled alertmanager update...
Sep 30 17:39:00 compute-0 systemd[1]: libpod-89dc031e6248e2790daa1ae533e741aa4932d2da847d48011b5a263912794bad.scope: Deactivated successfully.
Sep 30 17:39:00 compute-0 podman[88467]: 2025-09-30 17:39:00.885296625 +0000 UTC m=+0.580228516 container died 89dc031e6248e2790daa1ae533e741aa4932d2da847d48011b5a263912794bad (image=quay.io/ceph/ceph:v19, name=infallible_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:39:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bba1992980e7cb54e6d2ea7213dc818e3ccd42fa8173c5f46a45071097eb7d7-merged.mount: Deactivated successfully.
Sep 30 17:39:00 compute-0 podman[88467]: 2025-09-30 17:39:00.924213709 +0000 UTC m=+0.619145580 container remove 89dc031e6248e2790daa1ae533e741aa4932d2da847d48011b5a263912794bad (image=quay.io/ceph/ceph:v19, name=infallible_noyce, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Sep 30 17:39:00 compute-0 systemd[1]: libpod-conmon-89dc031e6248e2790daa1ae533e741aa4932d2da847d48011b5a263912794bad.scope: Deactivated successfully.
Sep 30 17:39:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Sep 30 17:39:00 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:39:00 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:39:00 compute-0 sudo[88457]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:01 compute-0 podman[88626]: 2025-09-30 17:39:01.07744791 +0000 UTC m=+0.038211096 container create 39ce4cfa73dba2e025fb745af377721bfc8abcee95a3b2591c1e1c1ad77dfd7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_shirley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:39:01 compute-0 systemd[1]: Started libpod-conmon-39ce4cfa73dba2e025fb745af377721bfc8abcee95a3b2591c1e1c1ad77dfd7e.scope.
Sep 30 17:39:01 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:01 compute-0 podman[88626]: 2025-09-30 17:39:01.147029074 +0000 UTC m=+0.107792290 container init 39ce4cfa73dba2e025fb745af377721bfc8abcee95a3b2591c1e1c1ad77dfd7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Sep 30 17:39:01 compute-0 podman[88626]: 2025-09-30 17:39:01.153108131 +0000 UTC m=+0.113871317 container start 39ce4cfa73dba2e025fb745af377721bfc8abcee95a3b2591c1e1c1ad77dfd7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_shirley, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Sep 30 17:39:01 compute-0 silly_shirley[88642]: 167 167
Sep 30 17:39:01 compute-0 podman[88626]: 2025-09-30 17:39:01.156491888 +0000 UTC m=+0.117255094 container attach 39ce4cfa73dba2e025fb745af377721bfc8abcee95a3b2591c1e1c1ad77dfd7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_shirley, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:39:01 compute-0 systemd[1]: libpod-39ce4cfa73dba2e025fb745af377721bfc8abcee95a3b2591c1e1c1ad77dfd7e.scope: Deactivated successfully.
Sep 30 17:39:01 compute-0 podman[88626]: 2025-09-30 17:39:01.061608961 +0000 UTC m=+0.022372157 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:39:01 compute-0 conmon[88642]: conmon 39ce4cfa73dba2e025fb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-39ce4cfa73dba2e025fb745af377721bfc8abcee95a3b2591c1e1c1ad77dfd7e.scope/container/memory.events
Sep 30 17:39:01 compute-0 podman[88626]: 2025-09-30 17:39:01.158041468 +0000 UTC m=+0.118804654 container died 39ce4cfa73dba2e025fb745af377721bfc8abcee95a3b2591c1e1c1ad77dfd7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:39:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbe4914006aa3a108ac59c0d7e50bca22d5d677319b07f9c728be6a5ef1e5de5-merged.mount: Deactivated successfully.
Sep 30 17:39:01 compute-0 podman[88626]: 2025-09-30 17:39:01.19494893 +0000 UTC m=+0.155712116 container remove 39ce4cfa73dba2e025fb745af377721bfc8abcee95a3b2591c1e1c1ad77dfd7e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_shirley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Sep 30 17:39:01 compute-0 systemd[1]: libpod-conmon-39ce4cfa73dba2e025fb745af377721bfc8abcee95a3b2591c1e1c1ad77dfd7e.scope: Deactivated successfully.
Sep 30 17:39:01 compute-0 sudo[88683]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alfwimietepixfyjxdttwyqgdusiuoeb ; /usr/bin/python3'
Sep 30 17:39:01 compute-0 sudo[88683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:39:01 compute-0 podman[88691]: 2025-09-30 17:39:01.366632617 +0000 UTC m=+0.066383823 container create cda5b2721c772c5e69ba1a5bfe3789d155e6743ef064481976c20f410496f6a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_lederberg, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:39:01 compute-0 python3[88685]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:39:01 compute-0 systemd[1]: Started libpod-conmon-cda5b2721c772c5e69ba1a5bfe3789d155e6743ef064481976c20f410496f6a8.scope.
Sep 30 17:39:01 compute-0 podman[88691]: 2025-09-30 17:39:01.321514953 +0000 UTC m=+0.021266179 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:39:01 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5b0fc0ada3e4c3af1fb99c6c228dca74d420505dd153b0ee03080dc7b052450/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5b0fc0ada3e4c3af1fb99c6c228dca74d420505dd153b0ee03080dc7b052450/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5b0fc0ada3e4c3af1fb99c6c228dca74d420505dd153b0ee03080dc7b052450/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5b0fc0ada3e4c3af1fb99c6c228dca74d420505dd153b0ee03080dc7b052450/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:01 compute-0 podman[88707]: 2025-09-30 17:39:01.480640496 +0000 UTC m=+0.059760562 container create c829874e588990056452227bac3f0a85855c61e51f95f50ad721bf24a248a6d3 (image=quay.io/ceph/ceph:v19, name=distracted_pare, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:39:01 compute-0 podman[88691]: 2025-09-30 17:39:01.514052678 +0000 UTC m=+0.213804094 container init cda5b2721c772c5e69ba1a5bfe3789d155e6743ef064481976c20f410496f6a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:39:01 compute-0 systemd[1]: Started libpod-conmon-c829874e588990056452227bac3f0a85855c61e51f95f50ad721bf24a248a6d3.scope.
Sep 30 17:39:01 compute-0 podman[88691]: 2025-09-30 17:39:01.521081989 +0000 UTC m=+0.220833185 container start cda5b2721c772c5e69ba1a5bfe3789d155e6743ef064481976c20f410496f6a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_lederberg, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:39:01 compute-0 podman[88707]: 2025-09-30 17:39:01.439327501 +0000 UTC m=+0.018447587 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:39:01 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df816cac2417e5e5cc80d2552b5f526b4e64f2ab33dc0035a789b5e2f5f8b522/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df816cac2417e5e5cc80d2552b5f526b4e64f2ab33dc0035a789b5e2f5f8b522/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df816cac2417e5e5cc80d2552b5f526b4e64f2ab33dc0035a789b5e2f5f8b522/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:01 compute-0 podman[88691]: 2025-09-30 17:39:01.591462074 +0000 UTC m=+0.291213310 container attach cda5b2721c772c5e69ba1a5bfe3789d155e6743ef064481976c20f410496f6a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_lederberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:39:01 compute-0 podman[88707]: 2025-09-30 17:39:01.657160078 +0000 UTC m=+0.236280164 container init c829874e588990056452227bac3f0a85855c61e51f95f50ad721bf24a248a6d3 (image=quay.io/ceph/ceph:v19, name=distracted_pare, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:39:01 compute-0 podman[88707]: 2025-09-30 17:39:01.663270286 +0000 UTC m=+0.242390352 container start c829874e588990056452227bac3f0a85855c61e51f95f50ad721bf24a248a6d3 (image=quay.io/ceph/ceph:v19, name=distracted_pare, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Sep 30 17:39:01 compute-0 podman[88707]: 2025-09-30 17:39:01.702769194 +0000 UTC m=+0.281889260 container attach c829874e588990056452227bac3f0a85855c61e51f95f50ad721bf24a248a6d3 (image=quay.io/ceph/ceph:v19, name=distracted_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:39:01 compute-0 ceph-mon[73755]: pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:39:01 compute-0 ceph-mon[73755]: from='client.14276 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:39:01 compute-0 ceph-mon[73755]: Saving service node-exporter spec with placement *
Sep 30 17:39:01 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:01 compute-0 ceph-mon[73755]: Saving service grafana spec with placement compute-0;count:1
Sep 30 17:39:01 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:01 compute-0 ceph-mon[73755]: Saving service prometheus spec with placement compute-0;count:1
Sep 30 17:39:01 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:01 compute-0 ceph-mon[73755]: Saving service alertmanager spec with placement compute-0;count:1
Sep 30 17:39:01 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:01 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 17:39:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Sep 30 17:39:01 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Sep 30 17:39:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e28 e28: 2 total, 2 up, 2 in
Sep 30 17:39:01 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e28: 2 total, 2 up, 2 in
Sep 30 17:39:01 compute-0 ceph-mgr[74051]: [progress INFO root] update: starting ev 2f5f3a9f-a9d0-44c1-a283-128f981724ad (PG autoscaler increasing pool 2 PGs from 1 to 32)
Sep 30 17:39:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Sep 30 17:39:01 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 17:39:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config set, name=mgr/dashboard/server_port}] v 0)
Sep 30 17:39:02 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3197899215' entity='client.admin' 
Sep 30 17:39:02 compute-0 systemd[1]: libpod-c829874e588990056452227bac3f0a85855c61e51f95f50ad721bf24a248a6d3.scope: Deactivated successfully.
Sep 30 17:39:02 compute-0 podman[88707]: 2025-09-30 17:39:02.060412477 +0000 UTC m=+0.639532573 container died c829874e588990056452227bac3f0a85855c61e51f95f50ad721bf24a248a6d3 (image=quay.io/ceph/ceph:v19, name=distracted_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Sep 30 17:39:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-df816cac2417e5e5cc80d2552b5f526b4e64f2ab33dc0035a789b5e2f5f8b522-merged.mount: Deactivated successfully.
Sep 30 17:39:02 compute-0 podman[88707]: 2025-09-30 17:39:02.099042603 +0000 UTC m=+0.678162669 container remove c829874e588990056452227bac3f0a85855c61e51f95f50ad721bf24a248a6d3 (image=quay.io/ceph/ceph:v19, name=distracted_pare, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:39:02 compute-0 systemd[1]: libpod-conmon-c829874e588990056452227bac3f0a85855c61e51f95f50ad721bf24a248a6d3.scope: Deactivated successfully.
Sep 30 17:39:02 compute-0 sudo[88683]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:02 compute-0 lvm[88832]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 17:39:02 compute-0 lvm[88832]: VG ceph_vg0 finished
Sep 30 17:39:02 compute-0 wizardly_lederberg[88708]: {}
Sep 30 17:39:02 compute-0 systemd[1]: libpod-cda5b2721c772c5e69ba1a5bfe3789d155e6743ef064481976c20f410496f6a8.scope: Deactivated successfully.
Sep 30 17:39:02 compute-0 systemd[1]: libpod-cda5b2721c772c5e69ba1a5bfe3789d155e6743ef064481976c20f410496f6a8.scope: Consumed 1.092s CPU time.
Sep 30 17:39:02 compute-0 podman[88691]: 2025-09-30 17:39:02.230251776 +0000 UTC m=+0.930002992 container died cda5b2721c772c5e69ba1a5bfe3789d155e6743ef064481976c20f410496f6a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Sep 30 17:39:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5b0fc0ada3e4c3af1fb99c6c228dca74d420505dd153b0ee03080dc7b052450-merged.mount: Deactivated successfully.
Sep 30 17:39:02 compute-0 sudo[88866]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knlzqebvrlapqxarvbemnnpfyssrcogv ; /usr/bin/python3'
Sep 30 17:39:02 compute-0 podman[88691]: 2025-09-30 17:39:02.274196759 +0000 UTC m=+0.973947965 container remove cda5b2721c772c5e69ba1a5bfe3789d155e6743ef064481976c20f410496f6a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_lederberg, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Sep 30 17:39:02 compute-0 sudo[88866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:39:02 compute-0 systemd[1]: libpod-conmon-cda5b2721c772c5e69ba1a5bfe3789d155e6743ef064481976c20f410496f6a8.scope: Deactivated successfully.
Sep 30 17:39:02 compute-0 sudo[88547]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:39:02 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:39:02 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:02 compute-0 sudo[88872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 17:39:02 compute-0 sudo[88872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:02 compute-0 sudo[88872]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:02 compute-0 python3[88871]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl_server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:39:02 compute-0 podman[88897]: 2025-09-30 17:39:02.476131935 +0000 UTC m=+0.038363199 container create fba816c03feba14526907878cad906e3ab428cc9b163f0d2420858d741650b18 (image=quay.io/ceph/ceph:v19, name=lucid_perlman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:39:02 compute-0 systemd[1]: Started libpod-conmon-fba816c03feba14526907878cad906e3ab428cc9b163f0d2420858d741650b18.scope.
Sep 30 17:39:02 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/244043b41111d1e8fe6b622410fad041d96256a706743019aa4af5e4889b4a15/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/244043b41111d1e8fe6b622410fad041d96256a706743019aa4af5e4889b4a15/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/244043b41111d1e8fe6b622410fad041d96256a706743019aa4af5e4889b4a15/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:02 compute-0 podman[88897]: 2025-09-30 17:39:02.534632464 +0000 UTC m=+0.096863758 container init fba816c03feba14526907878cad906e3ab428cc9b163f0d2420858d741650b18 (image=quay.io/ceph/ceph:v19, name=lucid_perlman, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Sep 30 17:39:02 compute-0 podman[88897]: 2025-09-30 17:39:02.540447784 +0000 UTC m=+0.102679048 container start fba816c03feba14526907878cad906e3ab428cc9b163f0d2420858d741650b18 (image=quay.io/ceph/ceph:v19, name=lucid_perlman, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Sep 30 17:39:02 compute-0 podman[88897]: 2025-09-30 17:39:02.543645476 +0000 UTC m=+0.105876740 container attach fba816c03feba14526907878cad906e3ab428cc9b163f0d2420858d741650b18 (image=quay.io/ceph/ceph:v19, name=lucid_perlman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Sep 30 17:39:02 compute-0 podman[88897]: 2025-09-30 17:39:02.459966088 +0000 UTC m=+0.022197372 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:39:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v80: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:39:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Sep 30 17:39:02 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0)
Sep 30 17:39:02 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3474613208' entity='client.admin' 
Sep 30 17:39:02 compute-0 systemd[1]: libpod-fba816c03feba14526907878cad906e3ab428cc9b163f0d2420858d741650b18.scope: Deactivated successfully.
Sep 30 17:39:02 compute-0 podman[88897]: 2025-09-30 17:39:02.895279773 +0000 UTC m=+0.457511067 container died fba816c03feba14526907878cad906e3ab428cc9b163f0d2420858d741650b18 (image=quay.io/ceph/ceph:v19, name=lucid_perlman, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Sep 30 17:39:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Sep 30 17:39:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-244043b41111d1e8fe6b622410fad041d96256a706743019aa4af5e4889b4a15-merged.mount: Deactivated successfully.
Sep 30 17:39:02 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Sep 30 17:39:02 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 17:39:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e29 e29: 2 total, 2 up, 2 in
Sep 30 17:39:02 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e29: 2 total, 2 up, 2 in
Sep 30 17:39:02 compute-0 ceph-mgr[74051]: [progress INFO root] update: starting ev 306d2c0d-80b6-4839-a0ab-6e1199884a43 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Sep 30 17:39:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Sep 30 17:39:02 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 17:39:02 compute-0 podman[88897]: 2025-09-30 17:39:02.929334691 +0000 UTC m=+0.491565955 container remove fba816c03feba14526907878cad906e3ab428cc9b163f0d2420858d741650b18 (image=quay.io/ceph/ceph:v19, name=lucid_perlman, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:39:02 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Sep 30 17:39:02 compute-0 ceph-mon[73755]: osdmap e28: 2 total, 2 up, 2 in
Sep 30 17:39:02 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 17:39:02 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3197899215' entity='client.admin' 
Sep 30 17:39:02 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:02 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:02 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:02 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3474613208' entity='client.admin' 
Sep 30 17:39:02 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 29 pg[2.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=29 pruub=9.089283943s) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active pruub 44.137817383s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:39:02 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 29 pg[2.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=29 pruub=9.089283943s) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown pruub 44.137817383s@ mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:02 compute-0 systemd[1]: libpod-conmon-fba816c03feba14526907878cad906e3ab428cc9b163f0d2420858d741650b18.scope: Deactivated successfully.
Sep 30 17:39:02 compute-0 sudo[88866]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:03 compute-0 sudo[88973]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itpdihkfmjpcxsazrcuhddrypwogfvqd ; /usr/bin/python3'
Sep 30 17:39:03 compute-0 sudo[88973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:39:03 compute-0 python3[88975]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:39:03 compute-0 podman[88976]: 2025-09-30 17:39:03.283435202 +0000 UTC m=+0.036745979 container create 4965fbfc772de2317f172b84776a4e723b80d8af5c6e62c0e22851fee56be16e (image=quay.io/ceph/ceph:v19, name=tender_dubinsky, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Sep 30 17:39:03 compute-0 systemd[1]: Started libpod-conmon-4965fbfc772de2317f172b84776a4e723b80d8af5c6e62c0e22851fee56be16e.scope.
Sep 30 17:39:03 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de64b2b0a93fac072c4efc4d9d993844db0915b94af1823889ab007780a49a1b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de64b2b0a93fac072c4efc4d9d993844db0915b94af1823889ab007780a49a1b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de64b2b0a93fac072c4efc4d9d993844db0915b94af1823889ab007780a49a1b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:03 compute-0 podman[88976]: 2025-09-30 17:39:03.355451049 +0000 UTC m=+0.108761856 container init 4965fbfc772de2317f172b84776a4e723b80d8af5c6e62c0e22851fee56be16e (image=quay.io/ceph/ceph:v19, name=tender_dubinsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:39:03 compute-0 podman[88976]: 2025-09-30 17:39:03.361953107 +0000 UTC m=+0.115263884 container start 4965fbfc772de2317f172b84776a4e723b80d8af5c6e62c0e22851fee56be16e (image=quay.io/ceph/ceph:v19, name=tender_dubinsky, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Sep 30 17:39:03 compute-0 podman[88976]: 2025-09-30 17:39:03.269068581 +0000 UTC m=+0.022379378 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:39:03 compute-0 podman[88976]: 2025-09-30 17:39:03.365163859 +0000 UTC m=+0.118474666 container attach 4965fbfc772de2317f172b84776a4e723b80d8af5c6e62c0e22851fee56be16e (image=quay.io/ceph/ceph:v19, name=tender_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True)
Sep 30 17:39:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl}] v 0)
Sep 30 17:39:03 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2657039316' entity='client.admin' 
Sep 30 17:39:03 compute-0 systemd[1]: libpod-4965fbfc772de2317f172b84776a4e723b80d8af5c6e62c0e22851fee56be16e.scope: Deactivated successfully.
Sep 30 17:39:03 compute-0 podman[88976]: 2025-09-30 17:39:03.808238274 +0000 UTC m=+0.561549051 container died 4965fbfc772de2317f172b84776a4e723b80d8af5c6e62c0e22851fee56be16e (image=quay.io/ceph/ceph:v19, name=tender_dubinsky, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Sep 30 17:39:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:39:03 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:39:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Sep 30 17:39:04 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-de64b2b0a93fac072c4efc4d9d993844db0915b94af1823889ab007780a49a1b-merged.mount: Deactivated successfully.
Sep 30 17:39:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:39:04 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:39:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 17:39:04 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:39:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 17:39:04 compute-0 podman[88976]: 2025-09-30 17:39:04.040924864 +0000 UTC m=+0.794235641 container remove 4965fbfc772de2317f172b84776a4e723b80d8af5c6e62c0e22851fee56be16e (image=quay.io/ceph/ceph:v19, name=tender_dubinsky, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:39:04 compute-0 sudo[88973]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:04 compute-0 systemd[1]: libpod-conmon-4965fbfc772de2317f172b84776a4e723b80d8af5c6e62c0e22851fee56be16e.scope: Deactivated successfully.
Sep 30 17:39:04 compute-0 sudo[89052]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxlnlbxosrmfonneqgbwqflzkqbhwigf ; /usr/bin/python3'
Sep 30 17:39:04 compute-0 sudo[89052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:39:04 compute-0 python3[89054]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:39:04 compute-0 sudo[89052]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v82: 38 pgs: 1 peering, 31 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:39:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Sep 30 17:39:04 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:04 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Sep 30 17:39:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e30 e30: 2 total, 2 up, 2 in
Sep 30 17:39:04 compute-0 ceph-mon[73755]: pgmap v80: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:39:04 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Sep 30 17:39:04 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 17:39:04 compute-0 ceph-mon[73755]: osdmap e29: 2 total, 2 up, 2 in
Sep 30 17:39:04 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 17:39:04 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2657039316' entity='client.admin' 
Sep 30 17:39:04 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:04 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e30: 2 total, 2 up, 2 in
Sep 30 17:39:04 compute-0 ceph-mgr[74051]: [progress INFO root] update: starting ev 8f1aa554-2407-4539-89f2-70b86fb05d08 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Sep 30 17:39:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Sep 30 17:39:04 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.1f( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.1c( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.1d( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.1b( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.9( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.a( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.1e( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.8( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.7( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.6( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.4( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.2( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.1( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.5( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.3( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.b( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.c( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.d( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.e( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.f( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.10( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.11( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.12( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.13( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.14( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.15( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.16( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.17( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.1a( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.19( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.18( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:04 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.1c( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.1f( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 17:39:04 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.1d( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.1b( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.9( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.a( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.6( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.8( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.7( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.1e( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.4( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.2( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.1( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.0( empty local-lis/les=29/30 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.3( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.b( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.c( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.e( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.d( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.10( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.f( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.12( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.11( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.14( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.13( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.15( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.17( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.19( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.1a( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.16( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.18( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 30 pg[2.5( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 17:39:04 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:39:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:39:04 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:39:04 compute-0 sudo[89068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:39:04 compute-0 sudo[89068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:04 compute-0 sudo[89068]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:04 compute-0 sudo[89093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 17:39:04 compute-0 sudo[89093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:04 compute-0 sudo[89139]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqwgulbmpptcvisfdxrxzsnfwcthhkiv ; /usr/bin/python3'
Sep 30 17:39:04 compute-0 sudo[89139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:39:05 compute-0 ceph-mgr[74051]: [progress WARNING root] Starting Global Recovery Event,32 pgs not in active + clean state
Sep 30 17:39:05 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Sep 30 17:39:05 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Sep 30 17:39:05 compute-0 python3[89143]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-0.efvthf/server_addr 192.168.122.100 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:39:05 compute-0 podman[89144]: 2025-09-30 17:39:05.163176551 +0000 UTC m=+0.033126145 container create 2040f75ddf4a01b59c629dcb0bbb0c54ffed017894a664aae2b6bc29f94ea5db (image=quay.io/ceph/ceph:v19, name=silly_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 17:39:05 compute-0 systemd[1]: Started libpod-conmon-2040f75ddf4a01b59c629dcb0bbb0c54ffed017894a664aae2b6bc29f94ea5db.scope.
Sep 30 17:39:05 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb5fe7b4ac70f0234d84128068ae8773f1fa2a592e711ea684ccd71e23a3b74c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb5fe7b4ac70f0234d84128068ae8773f1fa2a592e711ea684ccd71e23a3b74c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb5fe7b4ac70f0234d84128068ae8773f1fa2a592e711ea684ccd71e23a3b74c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:05 compute-0 podman[89144]: 2025-09-30 17:39:05.228027744 +0000 UTC m=+0.097977358 container init 2040f75ddf4a01b59c629dcb0bbb0c54ffed017894a664aae2b6bc29f94ea5db (image=quay.io/ceph/ceph:v19, name=silly_hopper, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True)
Sep 30 17:39:05 compute-0 podman[89144]: 2025-09-30 17:39:05.233493855 +0000 UTC m=+0.103443449 container start 2040f75ddf4a01b59c629dcb0bbb0c54ffed017894a664aae2b6bc29f94ea5db (image=quay.io/ceph/ceph:v19, name=silly_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Sep 30 17:39:05 compute-0 podman[89144]: 2025-09-30 17:39:05.236742978 +0000 UTC m=+0.106692572 container attach 2040f75ddf4a01b59c629dcb0bbb0c54ffed017894a664aae2b6bc29f94ea5db (image=quay.io/ceph/ceph:v19, name=silly_hopper, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Sep 30 17:39:05 compute-0 podman[89144]: 2025-09-30 17:39:05.148168585 +0000 UTC m=+0.018118199 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:39:05 compute-0 podman[89203]: 2025-09-30 17:39:05.333190905 +0000 UTC m=+0.034604643 container create 05040beba0cc01f01ef2d6eec4b5b7dc25ab47b7200a7e09098f02b49807f128 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 17:39:05 compute-0 systemd[1]: Started libpod-conmon-05040beba0cc01f01ef2d6eec4b5b7dc25ab47b7200a7e09098f02b49807f128.scope.
Sep 30 17:39:05 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:05 compute-0 podman[89203]: 2025-09-30 17:39:05.318078206 +0000 UTC m=+0.019491964 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:39:05 compute-0 podman[89203]: 2025-09-30 17:39:05.458749333 +0000 UTC m=+0.160163101 container init 05040beba0cc01f01ef2d6eec4b5b7dc25ab47b7200a7e09098f02b49807f128 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_mclaren, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Sep 30 17:39:05 compute-0 podman[89203]: 2025-09-30 17:39:05.46446325 +0000 UTC m=+0.165876988 container start 05040beba0cc01f01ef2d6eec4b5b7dc25ab47b7200a7e09098f02b49807f128 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:39:05 compute-0 happy_mclaren[89238]: 167 167
Sep 30 17:39:05 compute-0 systemd[1]: libpod-05040beba0cc01f01ef2d6eec4b5b7dc25ab47b7200a7e09098f02b49807f128.scope: Deactivated successfully.
Sep 30 17:39:05 compute-0 conmon[89238]: conmon 05040beba0cc01f01ef2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-05040beba0cc01f01ef2d6eec4b5b7dc25ab47b7200a7e09098f02b49807f128.scope/container/memory.events
Sep 30 17:39:05 compute-0 podman[89203]: 2025-09-30 17:39:05.485598755 +0000 UTC m=+0.187012513 container attach 05040beba0cc01f01ef2d6eec4b5b7dc25ab47b7200a7e09098f02b49807f128 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Sep 30 17:39:05 compute-0 podman[89203]: 2025-09-30 17:39:05.486219561 +0000 UTC m=+0.187633289 container died 05040beba0cc01f01ef2d6eec4b5b7dc25ab47b7200a7e09098f02b49807f128 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_mclaren, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Sep 30 17:39:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-0.efvthf/server_addr}] v 0)
Sep 30 17:39:05 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/252688255' entity='client.admin' 
Sep 30 17:39:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-41be0e224f961759b7a728d969492ed777f43646a20319e225f0d3e5120c4e44-merged.mount: Deactivated successfully.
Sep 30 17:39:05 compute-0 systemd[1]: libpod-2040f75ddf4a01b59c629dcb0bbb0c54ffed017894a664aae2b6bc29f94ea5db.scope: Deactivated successfully.
Sep 30 17:39:05 compute-0 podman[89203]: 2025-09-30 17:39:05.733473877 +0000 UTC m=+0.434887615 container remove 05040beba0cc01f01ef2d6eec4b5b7dc25ab47b7200a7e09098f02b49807f128 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Sep 30 17:39:05 compute-0 systemd[1]: libpod-conmon-05040beba0cc01f01ef2d6eec4b5b7dc25ab47b7200a7e09098f02b49807f128.scope: Deactivated successfully.
Sep 30 17:39:05 compute-0 podman[89144]: 2025-09-30 17:39:05.745516557 +0000 UTC m=+0.615466141 container died 2040f75ddf4a01b59c629dcb0bbb0c54ffed017894a664aae2b6bc29f94ea5db (image=quay.io/ceph/ceph:v19, name=silly_hopper, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:39:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Sep 30 17:39:05 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:05 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:39:05 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:39:05 compute-0 ceph-mon[73755]: pgmap v82: 38 pgs: 1 peering, 31 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:39:05 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:05 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Sep 30 17:39:05 compute-0 ceph-mon[73755]: osdmap e30: 2 total, 2 up, 2 in
Sep 30 17:39:05 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 17:39:05 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:05 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:39:05 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:39:05 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:39:05 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/252688255' entity='client.admin' 
Sep 30 17:39:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb5fe7b4ac70f0234d84128068ae8773f1fa2a592e711ea684ccd71e23a3b74c-merged.mount: Deactivated successfully.
Sep 30 17:39:06 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Sep 30 17:39:06 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Sep 30 17:39:06 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 17:39:06 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Sep 30 17:39:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e31 e31: 2 total, 2 up, 2 in
Sep 30 17:39:06 compute-0 podman[89144]: 2025-09-30 17:39:06.157652203 +0000 UTC m=+1.027601797 container remove 2040f75ddf4a01b59c629dcb0bbb0c54ffed017894a664aae2b6bc29f94ea5db (image=quay.io/ceph/ceph:v19, name=silly_hopper, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:39:06 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e31: 2 total, 2 up, 2 in
Sep 30 17:39:06 compute-0 ceph-mgr[74051]: [progress INFO root] update: starting ev 052d8f4c-d17b-41fa-8ecd-77c0986570db (PG autoscaler increasing pool 5 PGs from 1 to 32)
Sep 30 17:39:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0)
Sep 30 17:39:06 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 17:39:06 compute-0 systemd[1]: libpod-conmon-2040f75ddf4a01b59c629dcb0bbb0c54ffed017894a664aae2b6bc29f94ea5db.scope: Deactivated successfully.
Sep 30 17:39:06 compute-0 sudo[89139]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:06 compute-0 podman[89275]: 2025-09-30 17:39:06.228703696 +0000 UTC m=+0.383970021 container create 8896d3538d35a2f6100563bb7c8b407eec9ed1fa88ac3cc99535c8e4f72451d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_ramanujan, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:39:06 compute-0 podman[89275]: 2025-09-30 17:39:06.19979426 +0000 UTC m=+0.355060605 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:39:06 compute-0 systemd[1]: Started libpod-conmon-8896d3538d35a2f6100563bb7c8b407eec9ed1fa88ac3cc99535c8e4f72451d2.scope.
Sep 30 17:39:06 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29042513c47b494727f809da84b0e4e642ce898caeb42a943e26d2c1a71293ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29042513c47b494727f809da84b0e4e642ce898caeb42a943e26d2c1a71293ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29042513c47b494727f809da84b0e4e642ce898caeb42a943e26d2c1a71293ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29042513c47b494727f809da84b0e4e642ce898caeb42a943e26d2c1a71293ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29042513c47b494727f809da84b0e4e642ce898caeb42a943e26d2c1a71293ee/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:06 compute-0 podman[89275]: 2025-09-30 17:39:06.369754193 +0000 UTC m=+0.525020538 container init 8896d3538d35a2f6100563bb7c8b407eec9ed1fa88ac3cc99535c8e4f72451d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_ramanujan, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:39:06 compute-0 podman[89275]: 2025-09-30 17:39:06.376758953 +0000 UTC m=+0.532025278 container start 8896d3538d35a2f6100563bb7c8b407eec9ed1fa88ac3cc99535c8e4f72451d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_ramanujan, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Sep 30 17:39:06 compute-0 podman[89275]: 2025-09-30 17:39:06.398400451 +0000 UTC m=+0.553666796 container attach 8896d3538d35a2f6100563bb7c8b407eec9ed1fa88ac3cc99535c8e4f72451d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_ramanujan, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:39:06 compute-0 wonderful_ramanujan[89293]: --> passed data devices: 0 physical, 1 LVM
Sep 30 17:39:06 compute-0 wonderful_ramanujan[89293]: --> All data devices are unavailable
Sep 30 17:39:06 compute-0 systemd[1]: libpod-8896d3538d35a2f6100563bb7c8b407eec9ed1fa88ac3cc99535c8e4f72451d2.scope: Deactivated successfully.
Sep 30 17:39:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v85: 69 pgs: 1 peering, 62 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:39:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Sep 30 17:39:06 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Sep 30 17:39:06 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:06 compute-0 podman[89308]: 2025-09-30 17:39:06.768036572 +0000 UTC m=+0.024225635 container died 8896d3538d35a2f6100563bb7c8b407eec9ed1fa88ac3cc99535c8e4f72451d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_ramanujan, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:39:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-29042513c47b494727f809da84b0e4e642ce898caeb42a943e26d2c1a71293ee-merged.mount: Deactivated successfully.
Sep 30 17:39:06 compute-0 systemd[75040]: Starting Mark boot as successful...
Sep 30 17:39:06 compute-0 sudo[89346]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhbzxmdyjenwkibdyxlpchmijemxqhbk ; /usr/bin/python3'
Sep 30 17:39:06 compute-0 systemd[75040]: Finished Mark boot as successful.
Sep 30 17:39:06 compute-0 sudo[89346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:39:07 compute-0 podman[89308]: 2025-09-30 17:39:07.012427164 +0000 UTC m=+0.268616207 container remove 8896d3538d35a2f6100563bb7c8b407eec9ed1fa88ac3cc99535c8e4f72451d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_ramanujan, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Sep 30 17:39:07 compute-0 systemd[1]: libpod-conmon-8896d3538d35a2f6100563bb7c8b407eec9ed1fa88ac3cc99535c8e4f72451d2.scope: Deactivated successfully.
Sep 30 17:39:07 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Sep 30 17:39:07 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Sep 30 17:39:07 compute-0 ceph-mon[73755]: 2.1c scrub starts
Sep 30 17:39:07 compute-0 ceph-mon[73755]: 2.1c scrub ok
Sep 30 17:39:07 compute-0 ceph-mon[73755]: 2.1f scrub starts
Sep 30 17:39:07 compute-0 ceph-mon[73755]: 2.1f scrub ok
Sep 30 17:39:07 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 17:39:07 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Sep 30 17:39:07 compute-0 ceph-mon[73755]: osdmap e31: 2 total, 2 up, 2 in
Sep 30 17:39:07 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 17:39:07 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:07 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:07 compute-0 sudo[89093]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:07 compute-0 python3[89349]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-1.glbusf/server_addr 192.168.122.101 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:39:07 compute-0 sudo[89350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:39:07 compute-0 sudo[89350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:07 compute-0 sudo[89350]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Sep 30 17:39:07 compute-0 sudo[89387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 17:39:07 compute-0 sudo[89387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:07 compute-0 podman[89359]: 2025-09-30 17:39:07.170714755 +0000 UTC m=+0.080713702 container create 9fc64ab36a348a4a6e587f6df815919016d4b95fe2bac5bc21241aaafc44f5e4 (image=quay.io/ceph/ceph:v19, name=nice_blackwell, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid)
Sep 30 17:39:07 compute-0 podman[89359]: 2025-09-30 17:39:07.117246837 +0000 UTC m=+0.027245804 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:39:07 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Sep 30 17:39:07 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 17:39:07 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 17:39:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e32 e32: 2 total, 2 up, 2 in
Sep 30 17:39:07 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e32: 2 total, 2 up, 2 in
Sep 30 17:39:07 compute-0 ceph-mgr[74051]: [progress INFO root] update: starting ev 4a539c7f-e241-466a-9f42-0a6fcd2b1db9 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Sep 30 17:39:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Sep 30 17:39:07 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 17:39:07 compute-0 systemd[1]: Started libpod-conmon-9fc64ab36a348a4a6e587f6df815919016d4b95fe2bac5bc21241aaafc44f5e4.scope.
Sep 30 17:39:07 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/117b60f41ff88e5589e8c4ac3f6c63508ee48bb2264e5a147399ce1245974e4a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/117b60f41ff88e5589e8c4ac3f6c63508ee48bb2264e5a147399ce1245974e4a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/117b60f41ff88e5589e8c4ac3f6c63508ee48bb2264e5a147399ce1245974e4a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:07 compute-0 podman[89359]: 2025-09-30 17:39:07.386309944 +0000 UTC m=+0.296308911 container init 9fc64ab36a348a4a6e587f6df815919016d4b95fe2bac5bc21241aaafc44f5e4 (image=quay.io/ceph/ceph:v19, name=nice_blackwell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid)
Sep 30 17:39:07 compute-0 podman[89359]: 2025-09-30 17:39:07.393965802 +0000 UTC m=+0.303964749 container start 9fc64ab36a348a4a6e587f6df815919016d4b95fe2bac5bc21241aaafc44f5e4 (image=quay.io/ceph/ceph:v19, name=nice_blackwell, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Sep 30 17:39:07 compute-0 podman[89359]: 2025-09-30 17:39:07.436818667 +0000 UTC m=+0.346817614 container attach 9fc64ab36a348a4a6e587f6df815919016d4b95fe2bac5bc21241aaafc44f5e4 (image=quay.io/ceph/ceph:v19, name=nice_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid)
Sep 30 17:39:07 compute-0 podman[89468]: 2025-09-30 17:39:07.557786626 +0000 UTC m=+0.037271512 container create 7fe5fc1ef6227670b69c6ac30f2a682b1cd3ed1c9d3abfa3e7f31fccbc44de8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_kilby, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:39:07 compute-0 systemd[1]: Started libpod-conmon-7fe5fc1ef6227670b69c6ac30f2a682b1cd3ed1c9d3abfa3e7f31fccbc44de8e.scope.
Sep 30 17:39:07 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:07 compute-0 podman[89468]: 2025-09-30 17:39:07.541115406 +0000 UTC m=+0.020600312 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:39:07 compute-0 podman[89468]: 2025-09-30 17:39:07.651144913 +0000 UTC m=+0.130629829 container init 7fe5fc1ef6227670b69c6ac30f2a682b1cd3ed1c9d3abfa3e7f31fccbc44de8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Sep 30 17:39:07 compute-0 podman[89468]: 2025-09-30 17:39:07.657920018 +0000 UTC m=+0.137404924 container start 7fe5fc1ef6227670b69c6ac30f2a682b1cd3ed1c9d3abfa3e7f31fccbc44de8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_kilby, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:39:07 compute-0 podman[89468]: 2025-09-30 17:39:07.661326896 +0000 UTC m=+0.140811812 container attach 7fe5fc1ef6227670b69c6ac30f2a682b1cd3ed1c9d3abfa3e7f31fccbc44de8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 17:39:07 compute-0 mystifying_kilby[89490]: 167 167
Sep 30 17:39:07 compute-0 systemd[1]: libpod-7fe5fc1ef6227670b69c6ac30f2a682b1cd3ed1c9d3abfa3e7f31fccbc44de8e.scope: Deactivated successfully.
Sep 30 17:39:07 compute-0 conmon[89490]: conmon 7fe5fc1ef6227670b69c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7fe5fc1ef6227670b69c6ac30f2a682b1cd3ed1c9d3abfa3e7f31fccbc44de8e.scope/container/memory.events
Sep 30 17:39:07 compute-0 podman[89468]: 2025-09-30 17:39:07.665222596 +0000 UTC m=+0.144707482 container died 7fe5fc1ef6227670b69c6ac30f2a682b1cd3ed1c9d3abfa3e7f31fccbc44de8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_kilby, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Sep 30 17:39:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee2be0452f0a78fa4c956bf9414f1ee69c31dae26bb20a5ae222a9c3e13c2fc6-merged.mount: Deactivated successfully.
Sep 30 17:39:07 compute-0 podman[89468]: 2025-09-30 17:39:07.708882862 +0000 UTC m=+0.188367748 container remove 7fe5fc1ef6227670b69c6ac30f2a682b1cd3ed1c9d3abfa3e7f31fccbc44de8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_kilby, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:39:07 compute-0 systemd[1]: libpod-conmon-7fe5fc1ef6227670b69c6ac30f2a682b1cd3ed1c9d3abfa3e7f31fccbc44de8e.scope: Deactivated successfully.
Sep 30 17:39:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-1.glbusf/server_addr}] v 0)
Sep 30 17:39:07 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2268718532' entity='client.admin' 
Sep 30 17:39:07 compute-0 systemd[1]: libpod-9fc64ab36a348a4a6e587f6df815919016d4b95fe2bac5bc21241aaafc44f5e4.scope: Deactivated successfully.
Sep 30 17:39:07 compute-0 podman[89359]: 2025-09-30 17:39:07.82592399 +0000 UTC m=+0.735922947 container died 9fc64ab36a348a4a6e587f6df815919016d4b95fe2bac5bc21241aaafc44f5e4 (image=quay.io/ceph/ceph:v19, name=nice_blackwell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Sep 30 17:39:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-117b60f41ff88e5589e8c4ac3f6c63508ee48bb2264e5a147399ce1245974e4a-merged.mount: Deactivated successfully.
Sep 30 17:39:07 compute-0 podman[89359]: 2025-09-30 17:39:07.86276028 +0000 UTC m=+0.772759227 container remove 9fc64ab36a348a4a6e587f6df815919016d4b95fe2bac5bc21241aaafc44f5e4 (image=quay.io/ceph/ceph:v19, name=nice_blackwell, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:39:07 compute-0 systemd[1]: libpod-conmon-9fc64ab36a348a4a6e587f6df815919016d4b95fe2bac5bc21241aaafc44f5e4.scope: Deactivated successfully.
Sep 30 17:39:07 compute-0 podman[89515]: 2025-09-30 17:39:07.884195523 +0000 UTC m=+0.052863524 container create 90c0aac7b022a25622853b7f5cf77a82bf2ada3564cbc63b7cc99ae2b1c6f18a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:39:07 compute-0 sudo[89346]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:07 compute-0 systemd[1]: Started libpod-conmon-90c0aac7b022a25622853b7f5cf77a82bf2ada3564cbc63b7cc99ae2b1c6f18a.scope.
Sep 30 17:39:07 compute-0 podman[89515]: 2025-09-30 17:39:07.85540212 +0000 UTC m=+0.024070141 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:39:07 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9935ed9e49c6f5a52993d9b9ec34644439a8127942aecd8afa8f44dec32da8ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9935ed9e49c6f5a52993d9b9ec34644439a8127942aecd8afa8f44dec32da8ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9935ed9e49c6f5a52993d9b9ec34644439a8127942aecd8afa8f44dec32da8ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9935ed9e49c6f5a52993d9b9ec34644439a8127942aecd8afa8f44dec32da8ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:07 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Sep 30 17:39:07 compute-0 podman[89515]: 2025-09-30 17:39:07.984585871 +0000 UTC m=+0.153253902 container init 90c0aac7b022a25622853b7f5cf77a82bf2ada3564cbc63b7cc99ae2b1c6f18a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:39:07 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Sep 30 17:39:07 compute-0 podman[89515]: 2025-09-30 17:39:07.991579652 +0000 UTC m=+0.160247653 container start 90c0aac7b022a25622853b7f5cf77a82bf2ada3564cbc63b7cc99ae2b1c6f18a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ellis, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:39:07 compute-0 podman[89515]: 2025-09-30 17:39:07.994971989 +0000 UTC m=+0.163640010 container attach 90c0aac7b022a25622853b7f5cf77a82bf2ada3564cbc63b7cc99ae2b1c6f18a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ellis, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:39:08 compute-0 sudo[89569]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okbkktwqxssvstjhuzhwklrtoujtztid ; /usr/bin/python3'
Sep 30 17:39:08 compute-0 sudo[89569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:39:08 compute-0 python3[89571]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:39:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Sep 30 17:39:08 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Sep 30 17:39:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e33 e33: 2 total, 2 up, 2 in
Sep 30 17:39:08 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e33: 2 total, 2 up, 2 in
Sep 30 17:39:08 compute-0 ceph-mgr[74051]: [progress INFO root] update: starting ev eed85c07-915e-4b01-b3de-f4e9a07a4636 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Sep 30 17:39:08 compute-0 ceph-mgr[74051]: [progress INFO root] complete: finished ev 2f5f3a9f-a9d0-44c1-a283-128f981724ad (PG autoscaler increasing pool 2 PGs from 1 to 32)
Sep 30 17:39:08 compute-0 ceph-mgr[74051]: [progress INFO root] Completed event 2f5f3a9f-a9d0-44c1-a283-128f981724ad (PG autoscaler increasing pool 2 PGs from 1 to 32) in 6 seconds
Sep 30 17:39:08 compute-0 ceph-mgr[74051]: [progress INFO root] complete: finished ev 306d2c0d-80b6-4839-a0ab-6e1199884a43 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Sep 30 17:39:08 compute-0 ceph-mgr[74051]: [progress INFO root] Completed event 306d2c0d-80b6-4839-a0ab-6e1199884a43 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 5 seconds
Sep 30 17:39:08 compute-0 ceph-mgr[74051]: [progress INFO root] complete: finished ev 8f1aa554-2407-4539-89f2-70b86fb05d08 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Sep 30 17:39:08 compute-0 ceph-mgr[74051]: [progress INFO root] Completed event 8f1aa554-2407-4539-89f2-70b86fb05d08 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Sep 30 17:39:08 compute-0 ceph-mgr[74051]: [progress INFO root] complete: finished ev 052d8f4c-d17b-41fa-8ecd-77c0986570db (PG autoscaler increasing pool 5 PGs from 1 to 32)
Sep 30 17:39:08 compute-0 ceph-mgr[74051]: [progress INFO root] Completed event 052d8f4c-d17b-41fa-8ecd-77c0986570db (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Sep 30 17:39:08 compute-0 ceph-mgr[74051]: [progress INFO root] complete: finished ev 4a539c7f-e241-466a-9f42-0a6fcd2b1db9 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Sep 30 17:39:08 compute-0 ceph-mgr[74051]: [progress INFO root] Completed event 4a539c7f-e241-466a-9f42-0a6fcd2b1db9 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Sep 30 17:39:08 compute-0 ceph-mgr[74051]: [progress INFO root] complete: finished ev eed85c07-915e-4b01-b3de-f4e9a07a4636 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Sep 30 17:39:08 compute-0 ceph-mgr[74051]: [progress INFO root] Completed event eed85c07-915e-4b01-b3de-f4e9a07a4636 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Sep 30 17:39:08 compute-0 podman[89574]: 2025-09-30 17:39:08.26649506 +0000 UTC m=+0.048311996 container create 823767e2e22e0d933c1befdb5fcf15593af16c6dcd436153903d13f6ad8d610a (image=quay.io/ceph/ceph:v19, name=nifty_carson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Sep 30 17:39:08 compute-0 ceph-mon[73755]: pgmap v85: 69 pgs: 1 peering, 62 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:39:08 compute-0 nice_ellis[89541]: {
Sep 30 17:39:08 compute-0 ceph-mon[73755]: 2.1d scrub starts
Sep 30 17:39:08 compute-0 ceph-mon[73755]: 2.1d scrub ok
Sep 30 17:39:08 compute-0 nice_ellis[89541]:     "0": [
Sep 30 17:39:08 compute-0 nice_ellis[89541]:         {
Sep 30 17:39:08 compute-0 nice_ellis[89541]:             "devices": [
Sep 30 17:39:08 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Sep 30 17:39:08 compute-0 nice_ellis[89541]:                 "/dev/loop3"
Sep 30 17:39:08 compute-0 nice_ellis[89541]:             ],
Sep 30 17:39:08 compute-0 nice_ellis[89541]:             "lv_name": "ceph_lv0",
Sep 30 17:39:08 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 17:39:08 compute-0 nice_ellis[89541]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:39:08 compute-0 nice_ellis[89541]:             "lv_size": "21470642176",
Sep 30 17:39:08 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 17:39:08 compute-0 nice_ellis[89541]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 17:39:08 compute-0 nice_ellis[89541]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:39:08 compute-0 ceph-mon[73755]: osdmap e32: 2 total, 2 up, 2 in
Sep 30 17:39:08 compute-0 nice_ellis[89541]:             "name": "ceph_lv0",
Sep 30 17:39:08 compute-0 nice_ellis[89541]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:39:08 compute-0 nice_ellis[89541]:             "tags": {
Sep 30 17:39:08 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 17:39:08 compute-0 nice_ellis[89541]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:39:08 compute-0 nice_ellis[89541]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:39:08 compute-0 ceph-mon[73755]: 3.18 deep-scrub starts
Sep 30 17:39:08 compute-0 nice_ellis[89541]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 17:39:08 compute-0 nice_ellis[89541]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 17:39:08 compute-0 ceph-mon[73755]: 3.18 deep-scrub ok
Sep 30 17:39:08 compute-0 nice_ellis[89541]:                 "ceph.cluster_name": "ceph",
Sep 30 17:39:08 compute-0 nice_ellis[89541]:                 "ceph.crush_device_class": "",
Sep 30 17:39:08 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2268718532' entity='client.admin' 
Sep 30 17:39:08 compute-0 nice_ellis[89541]:                 "ceph.encrypted": "0",
Sep 30 17:39:08 compute-0 nice_ellis[89541]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 17:39:08 compute-0 nice_ellis[89541]:                 "ceph.osd_id": "0",
Sep 30 17:39:08 compute-0 nice_ellis[89541]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 17:39:08 compute-0 nice_ellis[89541]:                 "ceph.type": "block",
Sep 30 17:39:08 compute-0 nice_ellis[89541]:                 "ceph.vdo": "0",
Sep 30 17:39:08 compute-0 nice_ellis[89541]:                 "ceph.with_tpm": "0"
Sep 30 17:39:08 compute-0 nice_ellis[89541]:             },
Sep 30 17:39:08 compute-0 nice_ellis[89541]:             "type": "block",
Sep 30 17:39:08 compute-0 nice_ellis[89541]:             "vg_name": "ceph_vg0"
Sep 30 17:39:08 compute-0 nice_ellis[89541]:         }
Sep 30 17:39:08 compute-0 nice_ellis[89541]:     ]
Sep 30 17:39:08 compute-0 nice_ellis[89541]: }
Sep 30 17:39:08 compute-0 systemd[1]: Started libpod-conmon-823767e2e22e0d933c1befdb5fcf15593af16c6dcd436153903d13f6ad8d610a.scope.
Sep 30 17:39:08 compute-0 podman[89515]: 2025-09-30 17:39:08.328869999 +0000 UTC m=+0.497538000 container died 90c0aac7b022a25622853b7f5cf77a82bf2ada3564cbc63b7cc99ae2b1c6f18a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ellis, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Sep 30 17:39:08 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:08 compute-0 systemd[1]: libpod-90c0aac7b022a25622853b7f5cf77a82bf2ada3564cbc63b7cc99ae2b1c6f18a.scope: Deactivated successfully.
Sep 30 17:39:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/475410f3c9fca190660ae72ec22417ad45b243d823d473e484b89dabad2f0606/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/475410f3c9fca190660ae72ec22417ad45b243d823d473e484b89dabad2f0606/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:08 compute-0 podman[89574]: 2025-09-30 17:39:08.245794057 +0000 UTC m=+0.027611013 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:39:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/475410f3c9fca190660ae72ec22417ad45b243d823d473e484b89dabad2f0606/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:08 compute-0 podman[89574]: 2025-09-30 17:39:08.351334928 +0000 UTC m=+0.133151894 container init 823767e2e22e0d933c1befdb5fcf15593af16c6dcd436153903d13f6ad8d610a (image=quay.io/ceph/ceph:v19, name=nifty_carson, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:39:08 compute-0 podman[89574]: 2025-09-30 17:39:08.357936718 +0000 UTC m=+0.139753654 container start 823767e2e22e0d933c1befdb5fcf15593af16c6dcd436153903d13f6ad8d610a (image=quay.io/ceph/ceph:v19, name=nifty_carson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Sep 30 17:39:08 compute-0 podman[89574]: 2025-09-30 17:39:08.364661852 +0000 UTC m=+0.146478868 container attach 823767e2e22e0d933c1befdb5fcf15593af16c6dcd436153903d13f6ad8d610a (image=quay.io/ceph/ceph:v19, name=nifty_carson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Sep 30 17:39:08 compute-0 podman[89515]: 2025-09-30 17:39:08.379935206 +0000 UTC m=+0.548603207 container remove 90c0aac7b022a25622853b7f5cf77a82bf2ada3564cbc63b7cc99ae2b1c6f18a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ellis, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:39:08 compute-0 systemd[1]: libpod-conmon-90c0aac7b022a25622853b7f5cf77a82bf2ada3564cbc63b7cc99ae2b1c6f18a.scope: Deactivated successfully.
Sep 30 17:39:08 compute-0 sudo[89387]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:08 compute-0 sudo[89605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:39:08 compute-0 sudo[89605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:08 compute-0 sudo[89605]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:08 compute-0 sudo[89649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 17:39:08 compute-0 sudo[89649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-9935ed9e49c6f5a52993d9b9ec34644439a8127942aecd8afa8f44dec32da8ba-merged.mount: Deactivated successfully.
Sep 30 17:39:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v88: 131 pgs: 1 peering, 124 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:39:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Sep 30 17:39:08 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0)
Sep 30 17:39:08 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Sep 30 17:39:08 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/98408612' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Sep 30 17:39:08 compute-0 podman[89714]: 2025-09-30 17:39:08.883176632 +0000 UTC m=+0.035057365 container create bc700e387eef65c2b203dd5b4a10cf2e09116c792820b194356cba1de861e8bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Sep 30 17:39:08 compute-0 systemd[1]: Started libpod-conmon-bc700e387eef65c2b203dd5b4a10cf2e09116c792820b194356cba1de861e8bd.scope.
Sep 30 17:39:08 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:08 compute-0 podman[89714]: 2025-09-30 17:39:08.948975409 +0000 UTC m=+0.100856162 container init bc700e387eef65c2b203dd5b4a10cf2e09116c792820b194356cba1de861e8bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_hertz, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1)
Sep 30 17:39:08 compute-0 podman[89714]: 2025-09-30 17:39:08.955698392 +0000 UTC m=+0.107579125 container start bc700e387eef65c2b203dd5b4a10cf2e09116c792820b194356cba1de861e8bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Sep 30 17:39:08 compute-0 wonderful_hertz[89730]: 167 167
Sep 30 17:39:08 compute-0 systemd[1]: libpod-bc700e387eef65c2b203dd5b4a10cf2e09116c792820b194356cba1de861e8bd.scope: Deactivated successfully.
Sep 30 17:39:08 compute-0 podman[89714]: 2025-09-30 17:39:08.959254574 +0000 UTC m=+0.111135307 container attach bc700e387eef65c2b203dd5b4a10cf2e09116c792820b194356cba1de861e8bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:39:08 compute-0 conmon[89730]: conmon bc700e387eef65c2b203 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bc700e387eef65c2b203dd5b4a10cf2e09116c792820b194356cba1de861e8bd.scope/container/memory.events
Sep 30 17:39:08 compute-0 podman[89714]: 2025-09-30 17:39:08.96065252 +0000 UTC m=+0.112533253 container died bc700e387eef65c2b203dd5b4a10cf2e09116c792820b194356cba1de861e8bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_hertz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:39:08 compute-0 podman[89714]: 2025-09-30 17:39:08.868443182 +0000 UTC m=+0.020323935 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:39:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ecc3a209f05d0d001cae6a388da94f878d0477fe8fbd5a233e0b2af7eb40332-merged.mount: Deactivated successfully.
Sep 30 17:39:08 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.a scrub starts
Sep 30 17:39:08 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.a scrub ok
Sep 30 17:39:08 compute-0 podman[89714]: 2025-09-30 17:39:08.997351206 +0000 UTC m=+0.149231939 container remove bc700e387eef65c2b203dd5b4a10cf2e09116c792820b194356cba1de861e8bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2)
Sep 30 17:39:09 compute-0 systemd[1]: libpod-conmon-bc700e387eef65c2b203dd5b4a10cf2e09116c792820b194356cba1de861e8bd.scope: Deactivated successfully.
Sep 30 17:39:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:39:09 compute-0 podman[89754]: 2025-09-30 17:39:09.152041745 +0000 UTC m=+0.041084741 container create cc442ba85c702f9a2b0545205d5804b6a627374a83899e80caeb425dd8f0ca56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_colden, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:39:09 compute-0 systemd[1]: Started libpod-conmon-cc442ba85c702f9a2b0545205d5804b6a627374a83899e80caeb425dd8f0ca56.scope.
Sep 30 17:39:09 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72b4983f5fdd991384718a79aa0e0e584127e55128e9a8f25ecafaaef54c27f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72b4983f5fdd991384718a79aa0e0e584127e55128e9a8f25ecafaaef54c27f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72b4983f5fdd991384718a79aa0e0e584127e55128e9a8f25ecafaaef54c27f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72b4983f5fdd991384718a79aa0e0e584127e55128e9a8f25ecafaaef54c27f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:09 compute-0 podman[89754]: 2025-09-30 17:39:09.229526773 +0000 UTC m=+0.118569789 container init cc442ba85c702f9a2b0545205d5804b6a627374a83899e80caeb425dd8f0ca56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_colden, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 17:39:09 compute-0 podman[89754]: 2025-09-30 17:39:09.135645672 +0000 UTC m=+0.024688698 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:39:09 compute-0 podman[89754]: 2025-09-30 17:39:09.236034841 +0000 UTC m=+0.125077847 container start cc442ba85c702f9a2b0545205d5804b6a627374a83899e80caeb425dd8f0ca56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_colden, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:39:09 compute-0 podman[89754]: 2025-09-30 17:39:09.23988849 +0000 UTC m=+0.128931506 container attach cc442ba85c702f9a2b0545205d5804b6a627374a83899e80caeb425dd8f0ca56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:39:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Sep 30 17:39:09 compute-0 lvm[89844]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 17:39:09 compute-0 lvm[89844]: VG ceph_vg0 finished
Sep 30 17:39:09 compute-0 lvm[89848]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 17:39:09 compute-0 lvm[89848]: VG ceph_vg0 finished
Sep 30 17:39:09 compute-0 suspicious_colden[89770]: {}
Sep 30 17:39:09 compute-0 systemd[1]: libpod-cc442ba85c702f9a2b0545205d5804b6a627374a83899e80caeb425dd8f0ca56.scope: Deactivated successfully.
Sep 30 17:39:09 compute-0 systemd[1]: libpod-cc442ba85c702f9a2b0545205d5804b6a627374a83899e80caeb425dd8f0ca56.scope: Consumed 1.024s CPU time.
Sep 30 17:39:09 compute-0 podman[89754]: 2025-09-30 17:39:09.915747326 +0000 UTC m=+0.804790322 container died cc442ba85c702f9a2b0545205d5804b6a627374a83899e80caeb425dd8f0ca56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_colden, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:39:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-72b4983f5fdd991384718a79aa0e0e584127e55128e9a8f25ecafaaef54c27f6-merged.mount: Deactivated successfully.
Sep 30 17:39:09 compute-0 podman[89754]: 2025-09-30 17:39:09.970459587 +0000 UTC m=+0.859502583 container remove cc442ba85c702f9a2b0545205d5804b6a627374a83899e80caeb425dd8f0ca56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:39:09 compute-0 systemd[1]: libpod-conmon-cc442ba85c702f9a2b0545205d5804b6a627374a83899e80caeb425dd8f0ca56.scope: Deactivated successfully.
Sep 30 17:39:09 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Sep 30 17:39:10 compute-0 sudo[89649]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:10 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Sep 30 17:39:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:39:10 compute-0 ceph-mgr[74051]: [progress INFO root] Writing back 10 completed events
Sep 30 17:39:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Sep 30 17:39:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v89: 131 pgs: 131 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:39:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Sep 30 17:39:10 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0)
Sep 30 17:39:10 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Sep 30 17:39:10 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Sep 30 17:39:10 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Sep 30 17:39:10 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Sep 30 17:39:10 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:10 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Sep 30 17:39:11 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Sep 30 17:39:12 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Sep 30 17:39:12 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Sep 30 17:39:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v90: 131 pgs: 131 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:39:12 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Sep 30 17:39:12 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:12 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0)
Sep 30 17:39:12 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:12 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Sep 30 17:39:12 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:12 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Sep 30 17:39:12 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:12 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Sep 30 17:39:12 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:12 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Sep 30 17:39:12 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:13 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Sep 30 17:39:13 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Sep 30 17:39:14 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 17:39:14 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 17:39:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e34 e34: 2 total, 2 up, 2 in
Sep 30 17:39:14 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.1e deep-scrub starts
Sep 30 17:39:14 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e34: 2 total, 2 up, 2 in
Sep 30 17:39:14 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.1e deep-scrub ok
Sep 30 17:39:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:39:14 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 34 pg[7.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=34 pruub=11.485886574s) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active pruub 58.220973969s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:39:14 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 34 pg[7.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=34 pruub=11.485886574s) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown pruub 58.220973969s@ mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v92: 193 pgs: 62 unknown, 131 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:39:15 compute-0 ceph-mon[73755]: 2.1b scrub starts
Sep 30 17:39:15 compute-0 ceph-mon[73755]: 2.1b scrub ok
Sep 30 17:39:15 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Sep 30 17:39:15 compute-0 ceph-mon[73755]: osdmap e33: 2 total, 2 up, 2 in
Sep 30 17:39:15 compute-0 ceph-mon[73755]: 3.15 scrub starts
Sep 30 17:39:15 compute-0 ceph-mon[73755]: 3.15 scrub ok
Sep 30 17:39:15 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:15 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:15 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/98408612' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Sep 30 17:39:15 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Sep 30 17:39:15 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Sep 30 17:39:15 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/98408612' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Sep 30 17:39:15 compute-0 nifty_carson[89591]: module 'dashboard' is already disabled
Sep 30 17:39:15 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:15 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.efvthf(active, since 2m), standbys: compute-1.glbusf
Sep 30 17:39:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Sep 30 17:39:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:39:15 compute-0 systemd[1]: libpod-823767e2e22e0d933c1befdb5fcf15593af16c6dcd436153903d13f6ad8d610a.scope: Deactivated successfully.
Sep 30 17:39:15 compute-0 podman[89574]: 2025-09-30 17:39:15.070283386 +0000 UTC m=+6.852100352 container died 823767e2e22e0d933c1befdb5fcf15593af16c6dcd436153903d13f6ad8d610a (image=quay.io/ceph/ceph:v19, name=nifty_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:39:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-475410f3c9fca190660ae72ec22417ad45b243d823d473e484b89dabad2f0606-merged.mount: Deactivated successfully.
Sep 30 17:39:15 compute-0 podman[89574]: 2025-09-30 17:39:15.1061204 +0000 UTC m=+6.887937376 container remove 823767e2e22e0d933c1befdb5fcf15593af16c6dcd436153903d13f6ad8d610a (image=quay.io/ceph/ceph:v19, name=nifty_carson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Sep 30 17:39:15 compute-0 systemd[1]: libpod-conmon-823767e2e22e0d933c1befdb5fcf15593af16c6dcd436153903d13f6ad8d610a.scope: Deactivated successfully.
Sep 30 17:39:15 compute-0 sudo[89569]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:15 compute-0 sudo[89899]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stqbhwlxvopckipvnpvddizfirhypjuf ; /usr/bin/python3'
Sep 30 17:39:15 compute-0 sudo[89899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:39:15 compute-0 python3[89901]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:39:15 compute-0 podman[89902]: 2025-09-30 17:39:15.474107498 +0000 UTC m=+0.049039515 container create 73980f213e44758e12c0a364311496556f6ff75db95788480034b6eb453ddfaf (image=quay.io/ceph/ceph:v19, name=agitated_bhabha, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:39:15 compute-0 systemd[1]: Started libpod-conmon-73980f213e44758e12c0a364311496556f6ff75db95788480034b6eb453ddfaf.scope.
Sep 30 17:39:15 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8b80f790f2aa52b3f23045a53bf64dd59d27b55aa0bbf4084b00f0db109958d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8b80f790f2aa52b3f23045a53bf64dd59d27b55aa0bbf4084b00f0db109958d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8b80f790f2aa52b3f23045a53bf64dd59d27b55aa0bbf4084b00f0db109958d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:15 compute-0 podman[89902]: 2025-09-30 17:39:15.454313258 +0000 UTC m=+0.029245295 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:39:15 compute-0 podman[89902]: 2025-09-30 17:39:15.553765103 +0000 UTC m=+0.128697140 container init 73980f213e44758e12c0a364311496556f6ff75db95788480034b6eb453ddfaf (image=quay.io/ceph/ceph:v19, name=agitated_bhabha, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:39:15 compute-0 podman[89902]: 2025-09-30 17:39:15.561361928 +0000 UTC m=+0.136293945 container start 73980f213e44758e12c0a364311496556f6ff75db95788480034b6eb453ddfaf (image=quay.io/ceph/ceph:v19, name=agitated_bhabha, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Sep 30 17:39:15 compute-0 podman[89902]: 2025-09-30 17:39:15.565112035 +0000 UTC m=+0.140044172 container attach 73980f213e44758e12c0a364311496556f6ff75db95788480034b6eb453ddfaf (image=quay.io/ceph/ceph:v19, name=agitated_bhabha, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 17:39:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Sep 30 17:39:15 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/577280954' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Sep 30 17:39:15 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Sep 30 17:39:16 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Sep 30 17:39:16 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:16 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 17:39:16 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 17:39:16 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 17:39:16 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 17:39:16 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 17:39:16 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 17:39:16 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 17:39:16 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 17:39:16 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 17:39:16 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 17:39:16 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 17:39:16 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 17:39:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e35 e35: 2 total, 2 up, 2 in
Sep 30 17:39:16 compute-0 ceph-mon[73755]: pgmap v88: 131 pgs: 1 peering, 124 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:39:16 compute-0 ceph-mon[73755]: 2.a scrub starts
Sep 30 17:39:16 compute-0 ceph-mon[73755]: 2.a scrub ok
Sep 30 17:39:16 compute-0 ceph-mon[73755]: 3.16 scrub starts
Sep 30 17:39:16 compute-0 ceph-mon[73755]: 3.16 scrub ok
Sep 30 17:39:16 compute-0 ceph-mon[73755]: 2.9 scrub starts
Sep 30 17:39:16 compute-0 ceph-mon[73755]: 2.9 scrub ok
Sep 30 17:39:16 compute-0 ceph-mon[73755]: 3.19 scrub starts
Sep 30 17:39:16 compute-0 ceph-mon[73755]: pgmap v89: 131 pgs: 131 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:39:16 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:16 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:16 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:16 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:16 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:16 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:16 compute-0 ceph-mon[73755]: 2.6 scrub starts
Sep 30 17:39:16 compute-0 ceph-mon[73755]: 2.6 scrub ok
Sep 30 17:39:16 compute-0 ceph-mon[73755]: 2.8 scrub starts
Sep 30 17:39:16 compute-0 ceph-mon[73755]: 2.8 scrub ok
Sep 30 17:39:16 compute-0 ceph-mon[73755]: pgmap v90: 131 pgs: 131 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:39:16 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:16 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:16 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:16 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:16 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:16 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:16 compute-0 ceph-mon[73755]: 2.7 scrub starts
Sep 30 17:39:16 compute-0 ceph-mon[73755]: 2.7 scrub ok
Sep 30 17:39:16 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 17:39:16 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 17:39:16 compute-0 ceph-mon[73755]: 2.1e deep-scrub starts
Sep 30 17:39:16 compute-0 ceph-mon[73755]: osdmap e34: 2 total, 2 up, 2 in
Sep 30 17:39:16 compute-0 ceph-mon[73755]: 2.1e deep-scrub ok
Sep 30 17:39:16 compute-0 ceph-mon[73755]: 3.19 scrub ok
Sep 30 17:39:16 compute-0 ceph-mon[73755]: 3.17 scrub starts
Sep 30 17:39:16 compute-0 ceph-mon[73755]: pgmap v92: 193 pgs: 62 unknown, 131 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:39:16 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/98408612' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Sep 30 17:39:16 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:16 compute-0 ceph-mon[73755]: mgrmap e10: compute-0.efvthf(active, since 2m), standbys: compute-1.glbusf
Sep 30 17:39:16 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/577280954' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Sep 30 17:39:16 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/577280954' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Sep 30 17:39:16 compute-0 ceph-mgr[74051]: mgr handle_mgr_map respawning because set of enabled modules changed!
Sep 30 17:39:16 compute-0 ceph-mgr[74051]: mgr respawn  e: '/usr/bin/ceph-mgr'
Sep 30 17:39:16 compute-0 ceph-mgr[74051]: mgr respawn  0: '/usr/bin/ceph-mgr'
Sep 30 17:39:16 compute-0 ceph-mgr[74051]: mgr respawn  1: '-n'
Sep 30 17:39:16 compute-0 ceph-mgr[74051]: mgr respawn  2: 'mgr.compute-0.efvthf'
Sep 30 17:39:16 compute-0 ceph-mgr[74051]: mgr respawn  3: '-f'
Sep 30 17:39:16 compute-0 ceph-mgr[74051]: mgr respawn  4: '--setuser'
Sep 30 17:39:16 compute-0 ceph-mgr[74051]: mgr respawn  5: 'ceph'
Sep 30 17:39:16 compute-0 ceph-mgr[74051]: mgr respawn  6: '--setgroup'
Sep 30 17:39:16 compute-0 ceph-mgr[74051]: mgr respawn  7: 'ceph'
Sep 30 17:39:16 compute-0 ceph-mgr[74051]: mgr respawn  8: '--default-log-to-file=false'
Sep 30 17:39:16 compute-0 ceph-mgr[74051]: mgr respawn  9: '--default-log-to-journald=true'
Sep 30 17:39:16 compute-0 ceph-mgr[74051]: mgr respawn  10: '--default-log-to-stderr=false'
Sep 30 17:39:16 compute-0 ceph-mgr[74051]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Sep 30 17:39:16 compute-0 ceph-mgr[74051]: mgr respawn  exe_path /proc/self/exe
Sep 30 17:39:16 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:16 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e35: 2 total, 2 up, 2 in
Sep 30 17:39:16 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.efvthf(active, since 2m), standbys: compute-1.glbusf
Sep 30 17:39:16 compute-0 systemd[1]: libpod-73980f213e44758e12c0a364311496556f6ff75db95788480034b6eb453ddfaf.scope: Deactivated successfully.
Sep 30 17:39:16 compute-0 podman[89902]: 2025-09-30 17:39:16.729080788 +0000 UTC m=+1.304012825 container died 73980f213e44758e12c0a364311496556f6ff75db95788480034b6eb453ddfaf (image=quay.io/ceph/ceph:v19, name=agitated_bhabha, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 17:39:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8b80f790f2aa52b3f23045a53bf64dd59d27b55aa0bbf4084b00f0db109958d-merged.mount: Deactivated successfully.
Sep 30 17:39:16 compute-0 podman[89902]: 2025-09-30 17:39:16.77416773 +0000 UTC m=+1.349099877 container remove 73980f213e44758e12c0a364311496556f6ff75db95788480034b6eb453ddfaf (image=quay.io/ceph/ceph:v19, name=agitated_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Sep 30 17:39:16 compute-0 sshd-session[75146]: Connection closed by 192.168.122.100 port 45670
Sep 30 17:39:16 compute-0 sshd-session[75175]: Connection closed by 192.168.122.100 port 45674
Sep 30 17:39:16 compute-0 sshd-session[75291]: Connection closed by 192.168.122.100 port 45712
Sep 30 17:39:16 compute-0 sshd-session[75347]: Connection closed by 192.168.122.100 port 45732
Sep 30 17:39:16 compute-0 sshd-session[75318]: Connection closed by 192.168.122.100 port 45728
Sep 30 17:39:16 compute-0 sshd-session[75262]: Connection closed by 192.168.122.100 port 45706
Sep 30 17:39:16 compute-0 sshd-session[75233]: Connection closed by 192.168.122.100 port 45690
Sep 30 17:39:16 compute-0 systemd[1]: libpod-conmon-73980f213e44758e12c0a364311496556f6ff75db95788480034b6eb453ddfaf.scope: Deactivated successfully.
Sep 30 17:39:16 compute-0 sshd-session[75204]: Connection closed by 192.168.122.100 port 45686
Sep 30 17:39:16 compute-0 sshd-session[75117]: Connection closed by 192.168.122.100 port 45662
Sep 30 17:39:16 compute-0 sshd-session[75088]: Connection closed by 192.168.122.100 port 45648
Sep 30 17:39:16 compute-0 sshd-session[75058]: Connection closed by 192.168.122.100 port 45630
Sep 30 17:39:16 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Sep 30 17:39:16 compute-0 sshd-session[75059]: Connection closed by 192.168.122.100 port 45642
Sep 30 17:39:16 compute-0 systemd[1]: session-32.scope: Deactivated successfully.
Sep 30 17:39:16 compute-0 sshd-session[75259]: pam_unix(sshd:session): session closed for user ceph-admin
Sep 30 17:39:16 compute-0 systemd[1]: session-33.scope: Deactivated successfully.
Sep 30 17:39:16 compute-0 sshd-session[75201]: pam_unix(sshd:session): session closed for user ceph-admin
Sep 30 17:39:16 compute-0 sshd-session[75085]: pam_unix(sshd:session): session closed for user ceph-admin
Sep 30 17:39:16 compute-0 sshd-session[75315]: pam_unix(sshd:session): session closed for user ceph-admin
Sep 30 17:39:16 compute-0 sshd-session[75143]: pam_unix(sshd:session): session closed for user ceph-admin
Sep 30 17:39:16 compute-0 sshd-session[75288]: pam_unix(sshd:session): session closed for user ceph-admin
Sep 30 17:39:16 compute-0 sshd-session[75172]: pam_unix(sshd:session): session closed for user ceph-admin
Sep 30 17:39:16 compute-0 sshd-session[75053]: pam_unix(sshd:session): session closed for user ceph-admin
Sep 30 17:39:16 compute-0 sshd-session[75230]: pam_unix(sshd:session): session closed for user ceph-admin
Sep 30 17:39:16 compute-0 sshd-session[75036]: pam_unix(sshd:session): session closed for user ceph-admin
Sep 30 17:39:16 compute-0 systemd-logind[811]: Session 32 logged out. Waiting for processes to exit.
Sep 30 17:39:16 compute-0 sshd-session[75344]: pam_unix(sshd:session): session closed for user ceph-admin
Sep 30 17:39:16 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Sep 30 17:39:16 compute-0 sshd-session[75114]: pam_unix(sshd:session): session closed for user ceph-admin
Sep 30 17:39:16 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Sep 30 17:39:16 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Sep 30 17:39:16 compute-0 systemd[1]: session-31.scope: Deactivated successfully.
Sep 30 17:39:16 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Sep 30 17:39:16 compute-0 systemd[1]: session-22.scope: Deactivated successfully.
Sep 30 17:39:16 compute-0 systemd-logind[811]: Session 24 logged out. Waiting for processes to exit.
Sep 30 17:39:16 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Sep 30 17:39:16 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Sep 30 17:39:16 compute-0 systemd-logind[811]: Session 33 logged out. Waiting for processes to exit.
Sep 30 17:39:16 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Sep 30 17:39:16 compute-0 systemd[1]: session-34.scope: Consumed 29.300s CPU time.
Sep 30 17:39:16 compute-0 systemd-logind[811]: Session 25 logged out. Waiting for processes to exit.
Sep 30 17:39:16 compute-0 systemd-logind[811]: Session 27 logged out. Waiting for processes to exit.
Sep 30 17:39:16 compute-0 systemd-logind[811]: Session 30 logged out. Waiting for processes to exit.
Sep 30 17:39:16 compute-0 sudo[89899]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:16 compute-0 systemd-logind[811]: Session 29 logged out. Waiting for processes to exit.
Sep 30 17:39:16 compute-0 systemd-logind[811]: Session 26 logged out. Waiting for processes to exit.
Sep 30 17:39:16 compute-0 systemd-logind[811]: Session 22 logged out. Waiting for processes to exit.
Sep 30 17:39:16 compute-0 systemd-logind[811]: Session 28 logged out. Waiting for processes to exit.
Sep 30 17:39:16 compute-0 systemd-logind[811]: Session 31 logged out. Waiting for processes to exit.
Sep 30 17:39:16 compute-0 systemd-logind[811]: Session 34 logged out. Waiting for processes to exit.
Sep 30 17:39:16 compute-0 systemd-logind[811]: Removed session 24.
Sep 30 17:39:16 compute-0 systemd-logind[811]: Removed session 32.
Sep 30 17:39:16 compute-0 systemd-logind[811]: Removed session 33.
Sep 30 17:39:16 compute-0 systemd-logind[811]: Removed session 30.
Sep 30 17:39:16 compute-0 systemd-logind[811]: Removed session 27.
Sep 30 17:39:16 compute-0 systemd-logind[811]: Removed session 28.
Sep 30 17:39:16 compute-0 systemd-logind[811]: Removed session 31.
Sep 30 17:39:16 compute-0 systemd-logind[811]: Removed session 25.
Sep 30 17:39:16 compute-0 systemd-logind[811]: Removed session 22.
Sep 30 17:39:16 compute-0 systemd-logind[811]: Removed session 29.
Sep 30 17:39:16 compute-0 systemd-logind[811]: Removed session 26.
Sep 30 17:39:16 compute-0 systemd-logind[811]: Removed session 34.
Sep 30 17:39:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ignoring --setuser ceph since I am not root
Sep 30 17:39:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ignoring --setgroup ceph since I am not root
Sep 30 17:39:16 compute-0 ceph-mgr[74051]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Sep 30 17:39:16 compute-0 ceph-mgr[74051]: pidfile_write: ignore empty --pid-file
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[5.18( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[4.18( empty local-lis/les=0/0 n=0 ec=32/16 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[3.1c( empty local-lis/les=0/0 n=0 ec=31/15 lis/c=31/31 les/c/f=32/32/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[4.1b( empty local-lis/les=0/0 n=0 ec=32/16 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[5.1a( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[4.1a( empty local-lis/les=0/0 n=0 ec=32/16 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[3.1d( empty local-lis/les=0/0 n=0 ec=31/15 lis/c=31/31 les/c/f=32/32/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[5.1b( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[3.1a( empty local-lis/les=0/0 n=0 ec=31/15 lis/c=31/31 les/c/f=32/32/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[5.1c( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[4.c( empty local-lis/les=0/0 n=0 ec=32/16 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[5.e( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[4.e( empty local-lis/les=0/0 n=0 ec=32/16 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[3.9( empty local-lis/les=0/0 n=0 ec=31/15 lis/c=31/31 les/c/f=32/32/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[5.f( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[4.1( empty local-lis/les=0/0 n=0 ec=32/16 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[5.1( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[3.5( empty local-lis/les=0/0 n=0 ec=31/15 lis/c=31/31 les/c/f=32/32/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[3.3( empty local-lis/les=0/0 n=0 ec=31/15 lis/c=31/31 les/c/f=32/32/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[5.2( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[5.7( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[4.5( empty local-lis/les=0/0 n=0 ec=32/16 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[5.4( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[4.d( empty local-lis/les=0/0 n=0 ec=32/16 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[3.a( empty local-lis/les=0/0 n=0 ec=31/15 lis/c=31/31 les/c/f=32/32/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[7.1f( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[7.1c( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[2.19( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=11.979629517s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 60.945846558s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[2.19( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=11.979603767s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.945846558s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[7.1d( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[7.12( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[7.13( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[2.15( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=11.979123116s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 60.945774078s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[2.15( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=11.979109764s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.945774078s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[7.10( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[3.c( empty local-lis/les=0/0 n=0 ec=31/15 lis/c=31/31 les/c/f=32/32/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[7.11( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[2.13( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=11.978704453s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 60.945762634s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[2.13( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=11.978692055s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.945762634s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[7.16( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[7.17( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[7.14( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[7.15( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[2.10( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=11.978164673s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 60.945682526s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[2.10( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=11.978147507s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.945682526s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[7.a( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[2.e( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=11.977959633s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 60.945613861s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[7.b( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[2.e( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=11.977946281s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.945613861s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[7.8( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[2.d( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=11.977901459s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 60.945667267s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[2.d( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=11.977876663s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.945667267s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[2.c( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=11.977756500s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 60.945613861s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[2.c( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=11.977728844s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.945613861s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[7.e( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[7.6( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[7.9( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[7.5( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[7.4( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[7.7( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[2.1( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=11.977417946s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 60.945556641s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[2.1( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=11.977397919s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.945556641s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[2.4( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=11.977348328s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 60.945537567s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[7.1( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[2.4( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=11.977334976s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.945537567s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[2.6( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=11.977138519s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 60.945423126s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[7.3( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[7.2( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[7.d( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[2.6( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=11.977120399s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.945423126s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[7.c( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[2.a( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=11.976755142s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 60.945255280s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[7.f( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[7.1e( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[2.a( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=11.976725578s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.945255280s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[2.1b( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=11.976530075s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 60.945167542s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[7.19( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[2.1b( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=11.976516724s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.945167542s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[7.18( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[7.1b( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[2.1e( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=11.976780891s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 60.945537567s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[2.1e( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=11.976763725s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.945537567s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[7.1a( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[2.9( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=11.976350784s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 60.945152283s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[2.9( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=11.976337433s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.945152283s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[2.1f( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=11.971425056s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 60.940292358s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[2.1f( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=11.971416473s) [1] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.940292358s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[4.a( empty local-lis/les=0/0 n=0 ec=32/16 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[3.d( empty local-lis/les=0/0 n=0 ec=31/15 lis/c=31/31 les/c/f=32/32/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[3.e( empty local-lis/les=0/0 n=0 ec=31/15 lis/c=31/31 les/c/f=32/32/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[4.8( empty local-lis/les=0/0 n=0 ec=32/16 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[3.f( empty local-lis/les=0/0 n=0 ec=31/15 lis/c=31/31 les/c/f=32/32/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[5.9( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[3.10( empty local-lis/les=0/0 n=0 ec=31/15 lis/c=31/31 les/c/f=32/32/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[4.9( empty local-lis/les=0/0 n=0 ec=32/16 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[5.16( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[3.11( empty local-lis/les=0/0 n=0 ec=31/15 lis/c=31/31 les/c/f=32/32/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[4.15( empty local-lis/les=0/0 n=0 ec=32/16 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[3.13( empty local-lis/les=0/0 n=0 ec=31/15 lis/c=31/31 les/c/f=32/32/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[5.15( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[4.13( empty local-lis/les=0/0 n=0 ec=32/16 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[3.15( empty local-lis/les=0/0 n=0 ec=31/15 lis/c=31/31 les/c/f=32/32/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[5.10( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[5.11( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[4.1f( empty local-lis/les=0/0 n=0 ec=32/16 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[3.16( empty local-lis/les=0/0 n=0 ec=31/15 lis/c=31/31 les/c/f=32/32/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[5.1f( empty local-lis/les=0/0 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 35 pg[3.14( empty local-lis/les=0/0 n=0 ec=31/15 lis/c=31/31 les/c/f=32/32/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:16 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'alerts'
Sep 30 17:39:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:16.957+0000 7f99d92f4140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Sep 30 17:39:16 compute-0 ceph-mgr[74051]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Sep 30 17:39:16 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'balancer'
Sep 30 17:39:16 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Sep 30 17:39:17 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Sep 30 17:39:17 compute-0 sudo[89997]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysqthztbvrbtuupdonulvidgbguqtezz ; /usr/bin/python3'
Sep 30 17:39:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:17.042+0000 7f99d92f4140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Sep 30 17:39:17 compute-0 ceph-mgr[74051]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Sep 30 17:39:17 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'cephadm'
Sep 30 17:39:17 compute-0 sudo[89997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:39:17 compute-0 python3[89999]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-username admin _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:39:17 compute-0 podman[90000]: 2025-09-30 17:39:17.250353369 +0000 UTC m=+0.039031668 container create 8f124fdfd3ee1378078aea78235bd7f902017761febb6754300b602aafd00768 (image=quay.io/ceph/ceph:v19, name=nice_tesla, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:39:17 compute-0 systemd[1]: Started libpod-conmon-8f124fdfd3ee1378078aea78235bd7f902017761febb6754300b602aafd00768.scope.
Sep 30 17:39:17 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/384e14cb1245cc88b571f3266fa93a49876ee2cd477b2c237d248bfadff41376/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/384e14cb1245cc88b571f3266fa93a49876ee2cd477b2c237d248bfadff41376/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/384e14cb1245cc88b571f3266fa93a49876ee2cd477b2c237d248bfadff41376/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:17 compute-0 podman[90000]: 2025-09-30 17:39:17.3244969 +0000 UTC m=+0.113175229 container init 8f124fdfd3ee1378078aea78235bd7f902017761febb6754300b602aafd00768 (image=quay.io/ceph/ceph:v19, name=nice_tesla, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:39:17 compute-0 podman[90000]: 2025-09-30 17:39:17.233303649 +0000 UTC m=+0.021981968 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:39:17 compute-0 podman[90000]: 2025-09-30 17:39:17.331120891 +0000 UTC m=+0.119799190 container start 8f124fdfd3ee1378078aea78235bd7f902017761febb6754300b602aafd00768 (image=quay.io/ceph/ceph:v19, name=nice_tesla, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 17:39:17 compute-0 podman[90000]: 2025-09-30 17:39:17.334453197 +0000 UTC m=+0.123131516 container attach 8f124fdfd3ee1378078aea78235bd7f902017761febb6754300b602aafd00768 (image=quay.io/ceph/ceph:v19, name=nice_tesla, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:39:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Sep 30 17:39:17 compute-0 ceph-mon[73755]: 2.4 scrub starts
Sep 30 17:39:17 compute-0 ceph-mon[73755]: 2.4 scrub ok
Sep 30 17:39:17 compute-0 ceph-mon[73755]: 3.17 scrub ok
Sep 30 17:39:17 compute-0 ceph-mon[73755]: 3.14 scrub starts
Sep 30 17:39:17 compute-0 ceph-mon[73755]: 3.14 scrub ok
Sep 30 17:39:17 compute-0 ceph-mon[73755]: 2.2 scrub starts
Sep 30 17:39:17 compute-0 ceph-mon[73755]: 2.2 scrub ok
Sep 30 17:39:17 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:17 compute-0 ceph-mon[73755]: 3.11 scrub starts
Sep 30 17:39:17 compute-0 ceph-mon[73755]: 3.11 scrub ok
Sep 30 17:39:17 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 17:39:17 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 17:39:17 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 17:39:17 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 17:39:17 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 17:39:17 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 17:39:17 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 17:39:17 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 17:39:17 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 17:39:17 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 17:39:17 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 17:39:17 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 17:39:17 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/577280954' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Sep 30 17:39:17 compute-0 ceph-mon[73755]: from='mgr.14120 192.168.122.100:0/490895231' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:17 compute-0 ceph-mon[73755]: osdmap e35: 2 total, 2 up, 2 in
Sep 30 17:39:17 compute-0 ceph-mon[73755]: mgrmap e11: compute-0.efvthf(active, since 2m), standbys: compute-1.glbusf
Sep 30 17:39:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e36 e36: 2 total, 2 up, 2 in
Sep 30 17:39:17 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e36: 2 total, 2 up, 2 in
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[7.1f( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[4.1f( empty local-lis/les=35/36 n=0 ec=32/16 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'crash'
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[7.1c( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[5.10( empty local-lis/les=35/36 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[5.11( empty local-lis/les=35/36 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[5.1f( empty local-lis/les=35/36 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[3.14( empty local-lis/les=35/36 n=0 ec=31/15 lis/c=31/31 les/c/f=32/32/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[7.13( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[7.1d( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[7.10( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[7.11( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[7.12( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[7.16( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[4.15( empty local-lis/les=35/36 n=0 ec=32/16 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[3.13( empty local-lis/les=35/36 n=0 ec=31/15 lis/c=31/31 les/c/f=32/32/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[3.16( empty local-lis/les=35/36 n=0 ec=31/15 lis/c=31/31 les/c/f=32/32/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[5.16( empty local-lis/les=35/36 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[7.17( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[3.10( empty local-lis/les=35/36 n=0 ec=31/15 lis/c=31/31 les/c/f=32/32/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[7.14( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[7.15( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[3.e( empty local-lis/les=35/36 n=0 ec=31/15 lis/c=31/31 les/c/f=32/32/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[3.11( empty local-lis/les=35/36 n=0 ec=31/15 lis/c=31/31 les/c/f=32/32/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[7.a( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[4.8( empty local-lis/les=35/36 n=0 ec=32/16 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[5.9( empty local-lis/les=35/36 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[4.9( empty local-lis/les=35/36 n=0 ec=32/16 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[3.c( empty local-lis/les=35/36 n=0 ec=31/15 lis/c=31/31 les/c/f=32/32/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[7.8( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[7.b( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[4.13( empty local-lis/les=35/36 n=0 ec=32/16 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[3.f( empty local-lis/les=35/36 n=0 ec=31/15 lis/c=31/31 les/c/f=32/32/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[3.a( empty local-lis/les=35/36 n=0 ec=31/15 lis/c=31/31 les/c/f=32/32/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[7.9( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[4.a( empty local-lis/les=35/36 n=0 ec=32/16 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[4.d( empty local-lis/les=35/36 n=0 ec=32/16 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[7.e( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[4.5( empty local-lis/les=35/36 n=0 ec=32/16 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[7.6( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[5.7( empty local-lis/les=35/36 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[5.2( empty local-lis/les=35/36 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[3.d( empty local-lis/les=35/36 n=0 ec=31/15 lis/c=31/31 les/c/f=32/32/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[7.0( empty local-lis/les=34/36 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[7.5( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[7.4( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[7.7( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[3.3( empty local-lis/les=35/36 n=0 ec=31/15 lis/c=31/31 les/c/f=32/32/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[7.1( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[5.15( empty local-lis/les=35/36 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[5.1( empty local-lis/les=35/36 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[5.4( empty local-lis/les=35/36 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[3.5( empty local-lis/les=35/36 n=0 ec=31/15 lis/c=31/31 les/c/f=32/32/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[7.2( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[3.9( empty local-lis/les=35/36 n=0 ec=31/15 lis/c=31/31 les/c/f=32/32/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[5.f( empty local-lis/les=35/36 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[4.1( empty local-lis/les=35/36 n=0 ec=32/16 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[5.e( empty local-lis/les=35/36 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[7.d( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[3.15( empty local-lis/les=35/36 n=0 ec=31/15 lis/c=31/31 les/c/f=32/32/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[7.c( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[4.c( empty local-lis/les=35/36 n=0 ec=32/16 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[7.3( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[4.e( empty local-lis/les=35/36 n=0 ec=32/16 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[7.f( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[3.1a( empty local-lis/les=35/36 n=0 ec=31/15 lis/c=31/31 les/c/f=32/32/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[7.1e( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[3.1d( empty local-lis/les=35/36 n=0 ec=31/15 lis/c=31/31 les/c/f=32/32/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[4.1a( empty local-lis/les=35/36 n=0 ec=32/16 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[5.1a( empty local-lis/les=35/36 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[5.1c( empty local-lis/les=35/36 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[7.19( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[3.1c( empty local-lis/les=35/36 n=0 ec=31/15 lis/c=31/31 les/c/f=32/32/0 sis=35) [0] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[5.1b( empty local-lis/les=35/36 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[7.18( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[7.1b( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[5.18( empty local-lis/les=35/36 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[7.1a( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [0] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[4.1b( empty local-lis/les=35/36 n=0 ec=32/16 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 36 pg[4.18( empty local-lis/les=35/36 n=0 ec=32/16 lis/c=32/32 les/c/f=33/33/0 sis=35) [0] r=0 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:17.939+0000 7f99d92f4140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Sep 30 17:39:17 compute-0 ceph-mgr[74051]: mgr[py] Module crash has missing NOTIFY_TYPES member
Sep 30 17:39:17 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'dashboard'
Sep 30 17:39:17 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Sep 30 17:39:17 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Sep 30 17:39:18 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'devicehealth'
Sep 30 17:39:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:18.612+0000 7f99d92f4140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Sep 30 17:39:18 compute-0 ceph-mgr[74051]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Sep 30 17:39:18 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'diskprediction_local'
Sep 30 17:39:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Sep 30 17:39:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Sep 30 17:39:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]:   from numpy import show_config as show_numpy_config
Sep 30 17:39:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:18.778+0000 7f99d92f4140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Sep 30 17:39:18 compute-0 ceph-mgr[74051]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Sep 30 17:39:18 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'influx'
Sep 30 17:39:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:18.850+0000 7f99d92f4140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Sep 30 17:39:18 compute-0 ceph-mgr[74051]: mgr[py] Module influx has missing NOTIFY_TYPES member
Sep 30 17:39:18 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'insights'
Sep 30 17:39:18 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'iostat'
Sep 30 17:39:18 compute-0 ceph-mon[73755]: 2.1a scrub starts
Sep 30 17:39:18 compute-0 ceph-mon[73755]: 2.1a scrub ok
Sep 30 17:39:18 compute-0 ceph-mon[73755]: 5.19 scrub starts
Sep 30 17:39:18 compute-0 ceph-mon[73755]: 5.19 scrub ok
Sep 30 17:39:18 compute-0 ceph-mon[73755]: osdmap e36: 2 total, 2 up, 2 in
Sep 30 17:39:18 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Sep 30 17:39:18 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Sep 30 17:39:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:18.987+0000 7f99d92f4140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Sep 30 17:39:18 compute-0 ceph-mgr[74051]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Sep 30 17:39:18 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'k8sevents'
Sep 30 17:39:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:39:19 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'localpool'
Sep 30 17:39:19 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'mds_autoscaler'
Sep 30 17:39:19 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'mirroring'
Sep 30 17:39:19 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'nfs'
Sep 30 17:39:19 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.16 deep-scrub starts
Sep 30 17:39:19 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.16 deep-scrub ok
Sep 30 17:39:19 compute-0 ceph-mon[73755]: 2.18 scrub starts
Sep 30 17:39:19 compute-0 ceph-mon[73755]: 2.18 scrub ok
Sep 30 17:39:19 compute-0 ceph-mon[73755]: 3.1e scrub starts
Sep 30 17:39:19 compute-0 ceph-mon[73755]: 3.1e scrub ok
Sep 30 17:39:19 compute-0 ceph-mon[73755]: 2.17 scrub starts
Sep 30 17:39:19 compute-0 ceph-mon[73755]: 2.17 scrub ok
Sep 30 17:39:19 compute-0 ceph-mon[73755]: 3.1f deep-scrub starts
Sep 30 17:39:19 compute-0 ceph-mon[73755]: 3.1f deep-scrub ok
Sep 30 17:39:19 compute-0 ceph-mon[73755]: 2.16 deep-scrub starts
Sep 30 17:39:19 compute-0 ceph-mon[73755]: 2.16 deep-scrub ok
Sep 30 17:39:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:20.026+0000 7f99d92f4140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Sep 30 17:39:20 compute-0 ceph-mgr[74051]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Sep 30 17:39:20 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'orchestrator'
Sep 30 17:39:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:20.261+0000 7f99d92f4140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Sep 30 17:39:20 compute-0 ceph-mgr[74051]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Sep 30 17:39:20 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'osd_perf_query'
Sep 30 17:39:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:20.341+0000 7f99d92f4140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Sep 30 17:39:20 compute-0 ceph-mgr[74051]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Sep 30 17:39:20 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'osd_support'
Sep 30 17:39:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:20.406+0000 7f99d92f4140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Sep 30 17:39:20 compute-0 ceph-mgr[74051]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Sep 30 17:39:20 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'pg_autoscaler'
Sep 30 17:39:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:20.481+0000 7f99d92f4140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Sep 30 17:39:20 compute-0 ceph-mgr[74051]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Sep 30 17:39:20 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'progress'
Sep 30 17:39:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:20.552+0000 7f99d92f4140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Sep 30 17:39:20 compute-0 ceph-mgr[74051]: mgr[py] Module progress has missing NOTIFY_TYPES member
Sep 30 17:39:20 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'prometheus'
Sep 30 17:39:20 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Sep 30 17:39:20 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Sep 30 17:39:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:20.903+0000 7f99d92f4140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Sep 30 17:39:20 compute-0 ceph-mgr[74051]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Sep 30 17:39:20 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'rbd_support'
Sep 30 17:39:20 compute-0 ceph-mon[73755]: 4.19 deep-scrub starts
Sep 30 17:39:20 compute-0 ceph-mon[73755]: 4.19 deep-scrub ok
Sep 30 17:39:20 compute-0 ceph-mon[73755]: 2.14 scrub starts
Sep 30 17:39:20 compute-0 ceph-mon[73755]: 2.14 scrub ok
Sep 30 17:39:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:21.003+0000 7f99d92f4140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Sep 30 17:39:21 compute-0 ceph-mgr[74051]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Sep 30 17:39:21 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'restful'
Sep 30 17:39:21 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'rgw'
Sep 30 17:39:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:21.446+0000 7f99d92f4140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Sep 30 17:39:21 compute-0 ceph-mgr[74051]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Sep 30 17:39:21 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'rook'
Sep 30 17:39:21 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Sep 30 17:39:21 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Sep 30 17:39:21 compute-0 ceph-mon[73755]: 3.1b scrub starts
Sep 30 17:39:21 compute-0 ceph-mon[73755]: 3.1b scrub ok
Sep 30 17:39:21 compute-0 ceph-mon[73755]: 2.12 scrub starts
Sep 30 17:39:21 compute-0 ceph-mon[73755]: 2.12 scrub ok
Sep 30 17:39:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:22.052+0000 7f99d92f4140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Sep 30 17:39:22 compute-0 ceph-mgr[74051]: mgr[py] Module rook has missing NOTIFY_TYPES member
Sep 30 17:39:22 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'selftest'
Sep 30 17:39:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:22.148+0000 7f99d92f4140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Sep 30 17:39:22 compute-0 ceph-mgr[74051]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Sep 30 17:39:22 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'snap_schedule'
Sep 30 17:39:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:22.241+0000 7f99d92f4140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Sep 30 17:39:22 compute-0 ceph-mgr[74051]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Sep 30 17:39:22 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'stats'
Sep 30 17:39:22 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'status'
Sep 30 17:39:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:22.409+0000 7f99d92f4140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Sep 30 17:39:22 compute-0 ceph-mgr[74051]: mgr[py] Module status has missing NOTIFY_TYPES member
Sep 30 17:39:22 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'telegraf'
Sep 30 17:39:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:22.487+0000 7f99d92f4140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Sep 30 17:39:22 compute-0 ceph-mgr[74051]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Sep 30 17:39:22 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'telemetry'
Sep 30 17:39:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:22.650+0000 7f99d92f4140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Sep 30 17:39:22 compute-0 ceph-mgr[74051]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Sep 30 17:39:22 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'test_orchestrator'
Sep 30 17:39:22 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.glbusf restarted
Sep 30 17:39:22 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.glbusf started
Sep 30 17:39:22 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Sep 30 17:39:22 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Sep 30 17:39:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:22.891+0000 7f99d92f4140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Sep 30 17:39:22 compute-0 ceph-mgr[74051]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Sep 30 17:39:22 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'volumes'
Sep 30 17:39:22 compute-0 ceph-mon[73755]: 4.1c deep-scrub starts
Sep 30 17:39:22 compute-0 ceph-mon[73755]: 4.1c deep-scrub ok
Sep 30 17:39:22 compute-0 ceph-mon[73755]: Standby manager daemon compute-1.glbusf restarted
Sep 30 17:39:22 compute-0 ceph-mon[73755]: Standby manager daemon compute-1.glbusf started
Sep 30 17:39:22 compute-0 ceph-mon[73755]: 2.11 scrub starts
Sep 30 17:39:22 compute-0 ceph-mon[73755]: 2.11 scrub ok
Sep 30 17:39:22 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.efvthf(active, since 2m), standbys: compute-1.glbusf
Sep 30 17:39:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:23.171+0000 7f99d92f4140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'zabbix'
Sep 30 17:39:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:23.241+0000 7f99d92f4140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Sep 30 17:39:23 compute-0 ceph-mon[73755]: log_channel(cluster) log [INF] : Active manager daemon compute-0.efvthf restarted
Sep 30 17:39:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Sep 30 17:39:23 compute-0 ceph-mon[73755]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.efvthf
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: ms_deliver_dispatch: unhandled message 0x5566732e6340 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Sep 30 17:39:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e37 e37: 2 total, 2 up, 2 in
Sep 30 17:39:23 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e37: 2 total, 2 up, 2 in
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: mgr handle_mgr_map Activating!
Sep 30 17:39:23 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.efvthf(active, starting, since 0.019525s), standbys: compute-1.glbusf
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: mgr handle_mgr_map I am now activating
Sep 30 17:39:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Sep 30 17:39:23 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Sep 30 17:39:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Sep 30 17:39:23 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 17:39:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.efvthf", "id": "compute-0.efvthf"} v 0)
Sep 30 17:39:23 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mgr metadata", "who": "compute-0.efvthf", "id": "compute-0.efvthf"}]: dispatch
Sep 30 17:39:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.glbusf", "id": "compute-1.glbusf"} v 0)
Sep 30 17:39:23 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mgr metadata", "who": "compute-1.glbusf", "id": "compute-1.glbusf"}]: dispatch
Sep 30 17:39:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 17:39:23 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:39:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 17:39:23 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 17:39:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mds metadata"} v 0)
Sep 30 17:39:23 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mds metadata"}]: dispatch
Sep 30 17:39:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).mds e1 all = 1
Sep 30 17:39:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata"} v 0)
Sep 30 17:39:23 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata"}]: dispatch
Sep 30 17:39:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata"} v 0)
Sep 30 17:39:23 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata"}]: dispatch
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: balancer
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [balancer INFO root] Starting
Sep 30 17:39:23 compute-0 ceph-mon[73755]: log_channel(cluster) log [INF] : Manager daemon compute-0.efvthf is now available
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_17:39:23
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: cephadm
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: crash
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: dashboard
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO access_control] Loading user roles DB version=2
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO sso] Loading SSO DB version=1
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO root] Configured CherryPy, starting engine...
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: devicehealth
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [devicehealth INFO root] Starting
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: iostat
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: nfs
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: orchestrator
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: pg_autoscaler
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: progress
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [progress INFO root] Loading...
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f995c6b5d90>, <progress.module.GhostEvent object at 0x7f995c6b5d60>, <progress.module.GhostEvent object at 0x7f995c6b5d30>, <progress.module.GhostEvent object at 0x7f995c6b59a0>, <progress.module.GhostEvent object at 0x7f995c6b5dc0>, <progress.module.GhostEvent object at 0x7f995c6b5df0>, <progress.module.GhostEvent object at 0x7f995c6b5e20>, <progress.module.GhostEvent object at 0x7f995c6b5e50>, <progress.module.GhostEvent object at 0x7f995c6b5e80>, <progress.module.GhostEvent object at 0x7f995c6b5eb0>] historic events
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [progress INFO root] Loaded OSDMap, ready.
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [rbd_support INFO root] recovery thread starting
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [rbd_support INFO root] starting setup
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: rbd_support
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: restful
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: status
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [restful INFO root] server_addr: :: server_port: 8003
Sep 30 17:39:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.efvthf/mirror_snapshot_schedule"} v 0)
Sep 30 17:39:23 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.efvthf/mirror_snapshot_schedule"}]: dispatch
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: telemetry
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [restful WARNING root] server not running: no certificate configured
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [rbd_support INFO root] PerfHandler: starting
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_task_task: vms, start_after=
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_task_task: volumes, start_after=
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_task_task: backups, start_after=
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_task_task: images, start_after=
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TaskHandler: starting
Sep 30 17:39:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.efvthf/trash_purge_schedule"} v 0)
Sep 30 17:39:23 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.efvthf/trash_purge_schedule"}]: dispatch
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [rbd_support INFO root] setup complete
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: volumes
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Sep 30 17:39:23 compute-0 sshd-session[90166]: Accepted publickey for ceph-admin from 192.168.122.100 port 42812 ssh2: RSA SHA256:VErvvXRx5E6TZRj2L+dQwgZehzW+L2wAETKKYOgEi0M
Sep 30 17:39:23 compute-0 systemd-logind[811]: New session 35 of user ceph-admin.
Sep 30 17:39:23 compute-0 systemd[1]: Started Session 35 of User ceph-admin.
Sep 30 17:39:23 compute-0 sshd-session[90166]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 17:39:23 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.module] Engine started.
Sep 30 17:39:23 compute-0 sudo[90181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:39:23 compute-0 sudo[90181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:23 compute-0 sudo[90181]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:23 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.f deep-scrub starts
Sep 30 17:39:23 compute-0 sudo[90207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Sep 30 17:39:23 compute-0 sudo[90207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:23 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.f deep-scrub ok
Sep 30 17:39:23 compute-0 ceph-mon[73755]: mgrmap e12: compute-0.efvthf(active, since 2m), standbys: compute-1.glbusf
Sep 30 17:39:23 compute-0 ceph-mon[73755]: Active manager daemon compute-0.efvthf restarted
Sep 30 17:39:23 compute-0 ceph-mon[73755]: Activating manager daemon compute-0.efvthf
Sep 30 17:39:23 compute-0 ceph-mon[73755]: osdmap e37: 2 total, 2 up, 2 in
Sep 30 17:39:23 compute-0 ceph-mon[73755]: mgrmap e13: compute-0.efvthf(active, starting, since 0.019525s), standbys: compute-1.glbusf
Sep 30 17:39:23 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Sep 30 17:39:23 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 17:39:23 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mgr metadata", "who": "compute-0.efvthf", "id": "compute-0.efvthf"}]: dispatch
Sep 30 17:39:23 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mgr metadata", "who": "compute-1.glbusf", "id": "compute-1.glbusf"}]: dispatch
Sep 30 17:39:23 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:39:23 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 17:39:23 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mds metadata"}]: dispatch
Sep 30 17:39:23 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata"}]: dispatch
Sep 30 17:39:23 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata"}]: dispatch
Sep 30 17:39:23 compute-0 ceph-mon[73755]: Manager daemon compute-0.efvthf is now available
Sep 30 17:39:23 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.efvthf/mirror_snapshot_schedule"}]: dispatch
Sep 30 17:39:23 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.efvthf/trash_purge_schedule"}]: dispatch
Sep 30 17:39:23 compute-0 ceph-mon[73755]: 5.1d scrub starts
Sep 30 17:39:23 compute-0 ceph-mon[73755]: 5.1d scrub ok
Sep 30 17:39:23 compute-0 ceph-mon[73755]: 2.f deep-scrub starts
Sep 30 17:39:23 compute-0 ceph-mon[73755]: 2.f deep-scrub ok
Sep 30 17:39:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:39:24 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.efvthf(active, since 1.03711s), standbys: compute-1.glbusf
Sep 30 17:39:24 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.14314 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:39:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_USERNAME}] v 0)
Sep 30 17:39:24 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v3: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:39:24 compute-0 nice_tesla[90016]: Option GRAFANA_API_USERNAME updated
Sep 30 17:39:24 compute-0 systemd[1]: libpod-8f124fdfd3ee1378078aea78235bd7f902017761febb6754300b602aafd00768.scope: Deactivated successfully.
Sep 30 17:39:24 compute-0 podman[90320]: 2025-09-30 17:39:24.36575523 +0000 UTC m=+0.021184937 container died 8f124fdfd3ee1378078aea78235bd7f902017761febb6754300b602aafd00768 (image=quay.io/ceph/ceph:v19, name=nice_tesla, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:39:24 compute-0 podman[90305]: 2025-09-30 17:39:24.374149486 +0000 UTC m=+0.064511084 container exec 28cb2903608cec8e7c5a39aa3f398cadea7d807beef2cb8cca333795d18a1341 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Sep 30 17:39:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-384e14cb1245cc88b571f3266fa93a49876ee2cd477b2c237d248bfadff41376-merged.mount: Deactivated successfully.
Sep 30 17:39:24 compute-0 podman[90320]: 2025-09-30 17:39:24.404577791 +0000 UTC m=+0.060007478 container remove 8f124fdfd3ee1378078aea78235bd7f902017761febb6754300b602aafd00768 (image=quay.io/ceph/ceph:v19, name=nice_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Sep 30 17:39:24 compute-0 systemd[1]: libpod-conmon-8f124fdfd3ee1378078aea78235bd7f902017761febb6754300b602aafd00768.scope: Deactivated successfully.
Sep 30 17:39:24 compute-0 sudo[89997]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:24 compute-0 podman[90305]: 2025-09-30 17:39:24.476723211 +0000 UTC m=+0.167084809 container exec_died 28cb2903608cec8e7c5a39aa3f398cadea7d807beef2cb8cca333795d18a1341 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid)
Sep 30 17:39:24 compute-0 ceph-mgr[74051]: [cephadm INFO cherrypy.error] [30/Sep/2025:17:39:24] ENGINE Bus STARTING
Sep 30 17:39:24 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : [30/Sep/2025:17:39:24] ENGINE Bus STARTING
Sep 30 17:39:24 compute-0 sudo[90399]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqowtildddowigndjuamnxpbokzoknvo ; /usr/bin/python3'
Sep 30 17:39:24 compute-0 sudo[90399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:39:24 compute-0 ceph-mgr[74051]: [cephadm INFO cherrypy.error] [30/Sep/2025:17:39:24] ENGINE Serving on http://192.168.122.100:8765
Sep 30 17:39:24 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : [30/Sep/2025:17:39:24] ENGINE Serving on http://192.168.122.100:8765
Sep 30 17:39:24 compute-0 python3[90407]: ansible-ansible.legacy.command Invoked with stdin=/home/grafana_password.yml stdin_add_newline=False _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-password -i - _uses_shell=False strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None
Sep 30 17:39:24 compute-0 sudo[90207]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:39:24 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:39:24 compute-0 ceph-mgr[74051]: [cephadm INFO cherrypy.error] [30/Sep/2025:17:39:24] ENGINE Serving on https://192.168.122.100:7150
Sep 30 17:39:24 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : [30/Sep/2025:17:39:24] ENGINE Serving on https://192.168.122.100:7150
Sep 30 17:39:24 compute-0 ceph-mgr[74051]: [cephadm INFO cherrypy.error] [30/Sep/2025:17:39:24] ENGINE Bus STARTED
Sep 30 17:39:24 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : [30/Sep/2025:17:39:24] ENGINE Bus STARTED
Sep 30 17:39:24 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:24 compute-0 ceph-mgr[74051]: [cephadm INFO cherrypy.error] [30/Sep/2025:17:39:24] ENGINE Client ('192.168.122.100', 34486) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Sep 30 17:39:24 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : [30/Sep/2025:17:39:24] ENGINE Client ('192.168.122.100', 34486) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Sep 30 17:39:24 compute-0 podman[90456]: 2025-09-30 17:39:24.783306806 +0000 UTC m=+0.036682756 container create e57e7fad862230dfe03747a8d7d6fbfe7e7b4e277e2a8d2b1dc1d0dd040c866f (image=quay.io/ceph/ceph:v19, name=kind_noether, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:39:24 compute-0 systemd[1]: Started libpod-conmon-e57e7fad862230dfe03747a8d7d6fbfe7e7b4e277e2a8d2b1dc1d0dd040c866f.scope.
Sep 30 17:39:24 compute-0 sudo[90467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:39:24 compute-0 sudo[90467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:24 compute-0 sudo[90467]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:24 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1da423d7aedf2b70faa58cf3541cec37c07d89af727a228a70ae4d599d857bc8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1da423d7aedf2b70faa58cf3541cec37c07d89af727a228a70ae4d599d857bc8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1da423d7aedf2b70faa58cf3541cec37c07d89af727a228a70ae4d599d857bc8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:24 compute-0 podman[90456]: 2025-09-30 17:39:24.858027733 +0000 UTC m=+0.111403673 container init e57e7fad862230dfe03747a8d7d6fbfe7e7b4e277e2a8d2b1dc1d0dd040c866f (image=quay.io/ceph/ceph:v19, name=kind_noether, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Sep 30 17:39:24 compute-0 podman[90456]: 2025-09-30 17:39:24.7687053 +0000 UTC m=+0.022081260 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:39:24 compute-0 podman[90456]: 2025-09-30 17:39:24.868846712 +0000 UTC m=+0.122222662 container start e57e7fad862230dfe03747a8d7d6fbfe7e7b4e277e2a8d2b1dc1d0dd040c866f (image=quay.io/ceph/ceph:v19, name=kind_noether, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Sep 30 17:39:24 compute-0 podman[90456]: 2025-09-30 17:39:24.872284311 +0000 UTC m=+0.125660281 container attach e57e7fad862230dfe03747a8d7d6fbfe7e7b4e277e2a8d2b1dc1d0dd040c866f (image=quay.io/ceph/ceph:v19, name=kind_noether, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Sep 30 17:39:24 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.b scrub starts
Sep 30 17:39:24 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.b scrub ok
Sep 30 17:39:24 compute-0 sudo[90497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 17:39:24 compute-0 sudo[90497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:39:25 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:39:25 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:25 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.14330 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:39:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_PASSWORD}] v 0)
Sep 30 17:39:25 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:25 compute-0 kind_noether[90493]: Option GRAFANA_API_PASSWORD updated
Sep 30 17:39:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v4: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:39:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Sep 30 17:39:25 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Sep 30 17:39:25 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:25 compute-0 systemd[1]: libpod-e57e7fad862230dfe03747a8d7d6fbfe7e7b4e277e2a8d2b1dc1d0dd040c866f.scope: Deactivated successfully.
Sep 30 17:39:25 compute-0 podman[90456]: 2025-09-30 17:39:25.285053534 +0000 UTC m=+0.538429484 container died e57e7fad862230dfe03747a8d7d6fbfe7e7b4e277e2a8d2b1dc1d0dd040c866f (image=quay.io/ceph/ceph:v19, name=kind_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Sep 30 17:39:25 compute-0 ceph-mon[73755]: mgrmap e14: compute-0.efvthf(active, since 1.03711s), standbys: compute-1.glbusf
Sep 30 17:39:25 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:25 compute-0 ceph-mon[73755]: 4.1d deep-scrub starts
Sep 30 17:39:25 compute-0 ceph-mon[73755]: 4.1d deep-scrub ok
Sep 30 17:39:25 compute-0 ceph-mon[73755]: [30/Sep/2025:17:39:24] ENGINE Bus STARTING
Sep 30 17:39:25 compute-0 ceph-mon[73755]: [30/Sep/2025:17:39:24] ENGINE Serving on http://192.168.122.100:8765
Sep 30 17:39:25 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:25 compute-0 ceph-mon[73755]: [30/Sep/2025:17:39:24] ENGINE Serving on https://192.168.122.100:7150
Sep 30 17:39:25 compute-0 ceph-mon[73755]: [30/Sep/2025:17:39:24] ENGINE Bus STARTED
Sep 30 17:39:25 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:25 compute-0 ceph-mon[73755]: [30/Sep/2025:17:39:24] ENGINE Client ('192.168.122.100', 34486) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Sep 30 17:39:25 compute-0 ceph-mon[73755]: 2.b scrub starts
Sep 30 17:39:25 compute-0 ceph-mon[73755]: 2.b scrub ok
Sep 30 17:39:25 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:25 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:25 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:25 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:25 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 17:39:25 compute-0 ceph-mon[73755]: log_channel(cluster) log [WRN] : Health check failed: 1 OSD(s) experiencing slow operations in BlueStore (BLUESTORE_SLOW_OP_ALERT)
Sep 30 17:39:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-1da423d7aedf2b70faa58cf3541cec37c07d89af727a228a70ae4d599d857bc8-merged.mount: Deactivated successfully.
Sep 30 17:39:25 compute-0 podman[90456]: 2025-09-30 17:39:25.326428091 +0000 UTC m=+0.579804041 container remove e57e7fad862230dfe03747a8d7d6fbfe7e7b4e277e2a8d2b1dc1d0dd040c866f (image=quay.io/ceph/ceph:v19, name=kind_noether, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Sep 30 17:39:25 compute-0 systemd[1]: libpod-conmon-e57e7fad862230dfe03747a8d7d6fbfe7e7b4e277e2a8d2b1dc1d0dd040c866f.scope: Deactivated successfully.
Sep 30 17:39:25 compute-0 sudo[90399]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:25 compute-0 sudo[90497]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:25 compute-0 ceph-mgr[74051]: [devicehealth INFO root] Check health
Sep 30 17:39:25 compute-0 sudo[90598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:39:25 compute-0 sudo[90598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:25 compute-0 sudo[90598]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:25 compute-0 sudo[90624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Sep 30 17:39:25 compute-0 sudo[90669]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnnogaggpmurwlizbxdlluhwruhihrwa ; /usr/bin/python3'
Sep 30 17:39:25 compute-0 sudo[90669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:39:25 compute-0 sudo[90624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:25 compute-0 python3[90672]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-alertmanager-api-host http://192.168.122.100:9093 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:39:25 compute-0 podman[90676]: 2025-09-30 17:39:25.730732726 +0000 UTC m=+0.035361703 container create 043ff87c02c7facab07832f5dbfa73d98ed6bf6fcfb176dff03fd05903dcb49f (image=quay.io/ceph/ceph:v19, name=epic_agnesi, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:39:25 compute-0 systemd[1]: Started libpod-conmon-043ff87c02c7facab07832f5dbfa73d98ed6bf6fcfb176dff03fd05903dcb49f.scope.
Sep 30 17:39:25 compute-0 sudo[90624]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:39:25 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:39:25 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7423a8db2b85b4f747f911b560a357866178f448f04a0f11ae280b461ed1789/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7423a8db2b85b4f747f911b560a357866178f448f04a0f11ae280b461ed1789/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7423a8db2b85b4f747f911b560a357866178f448f04a0f11ae280b461ed1789/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:25 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Sep 30 17:39:25 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Sep 30 17:39:25 compute-0 ceph-mgr[74051]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.8M
Sep 30 17:39:25 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.8M
Sep 30 17:39:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Sep 30 17:39:25 compute-0 ceph-mgr[74051]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134071500: error parsing value: Value '134071500' is below minimum 939524096
Sep 30 17:39:25 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134071500: error parsing value: Value '134071500' is below minimum 939524096
Sep 30 17:39:25 compute-0 podman[90676]: 2025-09-30 17:39:25.715615286 +0000 UTC m=+0.020244283 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:39:25 compute-0 podman[90676]: 2025-09-30 17:39:25.812906785 +0000 UTC m=+0.117535792 container init 043ff87c02c7facab07832f5dbfa73d98ed6bf6fcfb176dff03fd05903dcb49f (image=quay.io/ceph/ceph:v19, name=epic_agnesi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:39:25 compute-0 podman[90676]: 2025-09-30 17:39:25.819140116 +0000 UTC m=+0.123769093 container start 043ff87c02c7facab07832f5dbfa73d98ed6bf6fcfb176dff03fd05903dcb49f (image=quay.io/ceph/ceph:v19, name=epic_agnesi, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:39:25 compute-0 podman[90676]: 2025-09-30 17:39:25.821878936 +0000 UTC m=+0.126507943 container attach 043ff87c02c7facab07832f5dbfa73d98ed6bf6fcfb176dff03fd05903dcb49f (image=quay.io/ceph/ceph:v19, name=epic_agnesi, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:39:25 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.3 deep-scrub starts
Sep 30 17:39:25 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.3 deep-scrub ok
Sep 30 17:39:26 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.efvthf(active, since 2s), standbys: compute-1.glbusf
Sep 30 17:39:26 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:39:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ALERTMANAGER_API_HOST}] v 0)
Sep 30 17:39:26 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:26 compute-0 epic_agnesi[90705]: Option ALERTMANAGER_API_HOST updated
Sep 30 17:39:26 compute-0 systemd[1]: libpod-043ff87c02c7facab07832f5dbfa73d98ed6bf6fcfb176dff03fd05903dcb49f.scope: Deactivated successfully.
Sep 30 17:39:26 compute-0 podman[90676]: 2025-09-30 17:39:26.184766064 +0000 UTC m=+0.489395041 container died 043ff87c02c7facab07832f5dbfa73d98ed6bf6fcfb176dff03fd05903dcb49f (image=quay.io/ceph/ceph:v19, name=epic_agnesi, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:39:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7423a8db2b85b4f747f911b560a357866178f448f04a0f11ae280b461ed1789-merged.mount: Deactivated successfully.
Sep 30 17:39:26 compute-0 podman[90676]: 2025-09-30 17:39:26.218994226 +0000 UTC m=+0.523623203 container remove 043ff87c02c7facab07832f5dbfa73d98ed6bf6fcfb176dff03fd05903dcb49f (image=quay.io/ceph/ceph:v19, name=epic_agnesi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Sep 30 17:39:26 compute-0 systemd[1]: libpod-conmon-043ff87c02c7facab07832f5dbfa73d98ed6bf6fcfb176dff03fd05903dcb49f.scope: Deactivated successfully.
Sep 30 17:39:26 compute-0 sudo[90669]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Sep 30 17:39:26 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 17:39:26 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 17:39:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e38 e38: 2 total, 2 up, 2 in
Sep 30 17:39:26 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e38: 2 total, 2 up, 2 in
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[6.1a( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=36/36/0 sis=38) [0] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[6.19( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=36/36/0 sis=38) [0] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[6.e( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=36/36/0 sis=38) [0] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[6.d( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=36/36/0 sis=38) [0] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[6.3( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=36/36/0 sis=38) [0] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[6.2( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=36/36/0 sis=38) [0] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[6.5( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=36/36/0 sis=38) [0] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[6.7( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=36/36/0 sis=38) [0] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[6.8( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=36/36/0 sis=38) [0] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[6.a( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=36/36/0 sis=38) [0] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[6.15( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=36/36/0 sis=38) [0] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[6.17( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=36/36/0 sis=38) [0] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[6.12( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=36/36/0 sis=38) [0] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[6.1c( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=36/36/0 sis=38) [0] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[6.1e( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=36/36/0 sis=38) [0] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[7.1d( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=15.571734428s) [1] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active pruub 73.978363037s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[7.1d( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=15.571710587s) [1] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.978363037s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[7.13( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=15.571584702s) [1] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active pruub 73.978355408s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[7.13( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=15.571561813s) [1] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.978355408s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[7.10( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=15.571606636s) [1] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active pruub 73.978446960s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[7.10( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=15.571586609s) [1] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.978446960s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[7.14( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=15.571981430s) [1] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active pruub 73.978973389s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[7.14( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=15.571969032s) [1] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.978973389s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[7.a( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=15.571676254s) [1] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active pruub 73.979049683s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[7.a( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=15.571645737s) [1] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.979049683s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[7.b( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=15.571788788s) [1] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active pruub 73.979255676s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[7.8( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=15.571663857s) [1] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active pruub 73.979141235s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[7.b( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=15.571771622s) [1] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.979255676s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[7.8( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=15.571646690s) [1] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.979141235s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[7.9( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=15.571644783s) [1] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active pruub 73.979194641s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[7.9( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=15.571623802s) [1] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.979194641s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[7.e( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=15.571652412s) [1] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active pruub 73.979255676s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[7.e( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=15.571634293s) [1] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.979255676s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[7.6( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=15.571576118s) [1] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active pruub 73.979240417s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[7.6( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=15.571563721s) [1] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.979240417s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[7.4( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=15.571544647s) [1] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active pruub 73.979263306s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[7.4( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=15.571528435s) [1] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.979263306s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[7.3( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=15.573599815s) [1] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active pruub 73.981399536s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[7.3( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=15.573589325s) [1] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.981399536s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[7.2( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=15.572158813s) [1] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active pruub 73.980010986s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[7.2( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=15.572143555s) [1] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.980010986s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[7.1e( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=15.573557854s) [1] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active pruub 73.981536865s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[7.1e( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=15.573546410s) [1] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.981536865s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[7.f( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=15.573398590s) [1] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active pruub 73.981430054s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[7.f( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=15.573380470s) [1] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.981430054s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[7.18( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=15.573757172s) [1] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active pruub 73.981880188s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[7.18( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=15.573740959s) [1] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.981880188s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[7.1b( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=15.573803902s) [1] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active pruub 73.981956482s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:39:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 38 pg[7.1b( empty local-lis/les=34/36 n=0 ec=34/20 lis/c=34/34 les/c/f=36/36/0 sis=38 pruub=15.573789597s) [1] r=-1 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.981956482s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:39:26 compute-0 ceph-mon[73755]: from='client.14330 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:39:26 compute-0 ceph-mon[73755]: pgmap v4: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:39:26 compute-0 ceph-mon[73755]: Health check failed: 1 OSD(s) experiencing slow operations in BlueStore (BLUESTORE_SLOW_OP_ALERT)
Sep 30 17:39:26 compute-0 ceph-mon[73755]: 4.f scrub starts
Sep 30 17:39:26 compute-0 ceph-mon[73755]: 4.f scrub ok
Sep 30 17:39:26 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:26 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:26 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Sep 30 17:39:26 compute-0 ceph-mon[73755]: Adjusting osd_memory_target on compute-0 to 127.8M
Sep 30 17:39:26 compute-0 ceph-mon[73755]: Unable to set osd_memory_target on compute-0 to 134071500: error parsing value: Value '134071500' is below minimum 939524096
Sep 30 17:39:26 compute-0 ceph-mon[73755]: 2.3 deep-scrub starts
Sep 30 17:39:26 compute-0 ceph-mon[73755]: 2.3 deep-scrub ok
Sep 30 17:39:26 compute-0 ceph-mon[73755]: mgrmap e15: compute-0.efvthf(active, since 2s), standbys: compute-1.glbusf
Sep 30 17:39:26 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:26 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 17:39:26 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 17:39:26 compute-0 ceph-mon[73755]: osdmap e38: 2 total, 2 up, 2 in
Sep 30 17:39:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:39:26 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:39:26 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Sep 30 17:39:26 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Sep 30 17:39:26 compute-0 ceph-mgr[74051]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to 127.8M
Sep 30 17:39:26 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to 127.8M
Sep 30 17:39:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Sep 30 17:39:26 compute-0 ceph-mgr[74051]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-1 to 134071500: error parsing value: Value '134071500' is below minimum 939524096
Sep 30 17:39:26 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-1 to 134071500: error parsing value: Value '134071500' is below minimum 939524096
Sep 30 17:39:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:39:26 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:39:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 17:39:26 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:39:26 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Sep 30 17:39:26 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Sep 30 17:39:26 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Sep 30 17:39:26 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Sep 30 17:39:26 compute-0 sudo[90767]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obtnvwyoshjrhjeznokljwpbmupmkuos ; /usr/bin/python3'
Sep 30 17:39:26 compute-0 sudo[90767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:39:26 compute-0 sudo[90768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Sep 30 17:39:26 compute-0 sudo[90768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:26 compute-0 sudo[90768]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:26 compute-0 sudo[90795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph
Sep 30 17:39:26 compute-0 sudo[90795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:26 compute-0 sudo[90795]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:26 compute-0 python3[90773]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-prometheus-api-host http://192.168.122.100:9092 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:39:26 compute-0 sudo[90820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.conf.new
Sep 30 17:39:26 compute-0 sudo[90820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:26 compute-0 sudo[90820]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:26 compute-0 podman[90843]: 2025-09-30 17:39:26.558140451 +0000 UTC m=+0.037383605 container create 280ebb44d38cd0c2f2b48895db5dff83e756f0e3c32432c0ca6b22afb671907b (image=quay.io/ceph/ceph:v19, name=blissful_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:39:26 compute-0 sudo[90851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:39:26 compute-0 sudo[90851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:26 compute-0 sudo[90851]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:26 compute-0 systemd[1]: Started libpod-conmon-280ebb44d38cd0c2f2b48895db5dff83e756f0e3c32432c0ca6b22afb671907b.scope.
Sep 30 17:39:26 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/052e1178d60b5e64c5011bf000422458ddb7d66e064d85d632f3aed80db5df9d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/052e1178d60b5e64c5011bf000422458ddb7d66e064d85d632f3aed80db5df9d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/052e1178d60b5e64c5011bf000422458ddb7d66e064d85d632f3aed80db5df9d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:26 compute-0 podman[90843]: 2025-09-30 17:39:26.628389963 +0000 UTC m=+0.107633147 container init 280ebb44d38cd0c2f2b48895db5dff83e756f0e3c32432c0ca6b22afb671907b (image=quay.io/ceph/ceph:v19, name=blissful_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:39:26 compute-0 sudo[90886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.conf.new
Sep 30 17:39:26 compute-0 sudo[90886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:26 compute-0 sudo[90886]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:26 compute-0 podman[90843]: 2025-09-30 17:39:26.635300681 +0000 UTC m=+0.114543845 container start 280ebb44d38cd0c2f2b48895db5dff83e756f0e3c32432c0ca6b22afb671907b (image=quay.io/ceph/ceph:v19, name=blissful_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Sep 30 17:39:26 compute-0 podman[90843]: 2025-09-30 17:39:26.541919623 +0000 UTC m=+0.021162807 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:39:26 compute-0 podman[90843]: 2025-09-30 17:39:26.641615294 +0000 UTC m=+0.120858468 container attach 280ebb44d38cd0c2f2b48895db5dff83e756f0e3c32432c0ca6b22afb671907b (image=quay.io/ceph/ceph:v19, name=blissful_haibt, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Sep 30 17:39:26 compute-0 sudo[90937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.conf.new
Sep 30 17:39:26 compute-0 sudo[90937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:26 compute-0 sudo[90937]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:26 compute-0 sudo[90981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.conf.new
Sep 30 17:39:26 compute-0 sudo[90981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:26 compute-0 sudo[90981]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:26 compute-0 sudo[91006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Sep 30 17:39:26 compute-0 sudo[91006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:26 compute-0 sudo[91006]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:26 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf
Sep 30 17:39:26 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf
Sep 30 17:39:26 compute-0 sudo[91031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config
Sep 30 17:39:26 compute-0 sudo[91031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:26 compute-0 sudo[91031]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:26 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Sep 30 17:39:26 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Sep 30 17:39:26 compute-0 sudo[91056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config
Sep 30 17:39:26 compute-0 sudo[91056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:26 compute-0 sudo[91056]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:26 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf
Sep 30 17:39:26 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf
Sep 30 17:39:27 compute-0 sudo[91081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf.new
Sep 30 17:39:27 compute-0 sudo[91081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:27 compute-0 sudo[91081]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:27 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:39:27 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config set, name=mgr/dashboard/PROMETHEUS_API_HOST}] v 0)
Sep 30 17:39:27 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:27 compute-0 blissful_haibt[90885]: Option PROMETHEUS_API_HOST updated
Sep 30 17:39:27 compute-0 systemd[1]: libpod-280ebb44d38cd0c2f2b48895db5dff83e756f0e3c32432c0ca6b22afb671907b.scope: Deactivated successfully.
Sep 30 17:39:27 compute-0 podman[90843]: 2025-09-30 17:39:27.053894034 +0000 UTC m=+0.533137218 container died 280ebb44d38cd0c2f2b48895db5dff83e756f0e3c32432c0ca6b22afb671907b (image=quay.io/ceph/ceph:v19, name=blissful_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid)
Sep 30 17:39:27 compute-0 sudo[91106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:39:27 compute-0 sudo[91106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:27 compute-0 sudo[91106]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:27 compute-0 sudo[91143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf.new
Sep 30 17:39:27 compute-0 sudo[91143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:27 compute-0 sudo[91143]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-052e1178d60b5e64c5011bf000422458ddb7d66e064d85d632f3aed80db5df9d-merged.mount: Deactivated successfully.
Sep 30 17:39:27 compute-0 podman[90843]: 2025-09-30 17:39:27.205452553 +0000 UTC m=+0.684695717 container remove 280ebb44d38cd0c2f2b48895db5dff83e756f0e3c32432c0ca6b22afb671907b (image=quay.io/ceph/ceph:v19, name=blissful_haibt, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:39:27 compute-0 systemd[1]: libpod-conmon-280ebb44d38cd0c2f2b48895db5dff83e756f0e3c32432c0ca6b22afb671907b.scope: Deactivated successfully.
Sep 30 17:39:27 compute-0 sudo[90767]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:27 compute-0 sudo[91192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf.new
Sep 30 17:39:27 compute-0 sudo[91192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:27 compute-0 sudo[91192]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:27 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Sep 30 17:39:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v6: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:39:27 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e39 e39: 2 total, 2 up, 2 in
Sep 30 17:39:27 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e39: 2 total, 2 up, 2 in
Sep 30 17:39:27 compute-0 sudo[91217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf.new
Sep 30 17:39:27 compute-0 sudo[91217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:27 compute-0 sudo[91217]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:27 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 39 pg[6.1a( empty local-lis/les=38/39 n=0 ec=34/19 lis/c=34/34 les/c/f=36/36/0 sis=38) [0] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:27 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 39 pg[6.e( empty local-lis/les=38/39 n=0 ec=34/19 lis/c=34/34 les/c/f=36/36/0 sis=38) [0] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:27 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 39 pg[6.3( empty local-lis/les=38/39 n=0 ec=34/19 lis/c=34/34 les/c/f=36/36/0 sis=38) [0] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:27 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 39 pg[6.d( empty local-lis/les=38/39 n=0 ec=34/19 lis/c=34/34 les/c/f=36/36/0 sis=38) [0] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:27 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 39 pg[6.19( empty local-lis/les=38/39 n=0 ec=34/19 lis/c=34/34 les/c/f=36/36/0 sis=38) [0] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:27 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 39 pg[6.2( empty local-lis/les=38/39 n=0 ec=34/19 lis/c=34/34 les/c/f=36/36/0 sis=38) [0] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:27 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 39 pg[6.5( empty local-lis/les=38/39 n=0 ec=34/19 lis/c=34/34 les/c/f=36/36/0 sis=38) [0] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:27 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 39 pg[6.7( empty local-lis/les=38/39 n=0 ec=34/19 lis/c=34/34 les/c/f=36/36/0 sis=38) [0] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:27 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 39 pg[6.15( empty local-lis/les=38/39 n=0 ec=34/19 lis/c=34/34 les/c/f=36/36/0 sis=38) [0] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:27 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 39 pg[6.8( empty local-lis/les=38/39 n=0 ec=34/19 lis/c=34/34 les/c/f=36/36/0 sis=38) [0] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:27 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 39 pg[6.a( empty local-lis/les=38/39 n=0 ec=34/19 lis/c=34/34 les/c/f=36/36/0 sis=38) [0] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:27 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 39 pg[6.17( empty local-lis/les=38/39 n=0 ec=34/19 lis/c=34/34 les/c/f=36/36/0 sis=38) [0] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:27 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 39 pg[6.12( empty local-lis/les=38/39 n=0 ec=34/19 lis/c=34/34 les/c/f=36/36/0 sis=38) [0] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:27 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 39 pg[6.1c( empty local-lis/les=38/39 n=0 ec=34/19 lis/c=34/34 les/c/f=36/36/0 sis=38) [0] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:27 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 39 pg[6.1e( empty local-lis/les=38/39 n=0 ec=34/19 lis/c=34/34 les/c/f=36/36/0 sis=38) [0] r=0 lpr=38 pi=[34,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:39:27 compute-0 sudo[91285]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfnbloseosomocworywayoefgcwyqgyf ; /usr/bin/python3'
Sep 30 17:39:27 compute-0 sudo[91285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:39:27 compute-0 sudo[91248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf.new /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf
Sep 30 17:39:27 compute-0 sudo[91248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:27 compute-0 sudo[91248]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:27 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Sep 30 17:39:27 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Sep 30 17:39:27 compute-0 ceph-mon[73755]: from='client.14338 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:39:27 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:27 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:27 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Sep 30 17:39:27 compute-0 ceph-mon[73755]: Adjusting osd_memory_target on compute-1 to 127.8M
Sep 30 17:39:27 compute-0 ceph-mon[73755]: Unable to set osd_memory_target on compute-1 to 134071500: error parsing value: Value '134071500' is below minimum 939524096
Sep 30 17:39:27 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:39:27 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:39:27 compute-0 ceph-mon[73755]: Updating compute-0:/etc/ceph/ceph.conf
Sep 30 17:39:27 compute-0 ceph-mon[73755]: Updating compute-1:/etc/ceph/ceph.conf
Sep 30 17:39:27 compute-0 ceph-mon[73755]: 3.8 scrub starts
Sep 30 17:39:27 compute-0 ceph-mon[73755]: 3.8 scrub ok
Sep 30 17:39:27 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:27 compute-0 ceph-mon[73755]: osdmap e39: 2 total, 2 up, 2 in
Sep 30 17:39:27 compute-0 sudo[91293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Sep 30 17:39:27 compute-0 sudo[91293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:27 compute-0 sudo[91293]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:27 compute-0 sudo[91318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph
Sep 30 17:39:27 compute-0 sudo[91318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:27 compute-0 sudo[91318]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:27 compute-0 python3[91291]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-url http://192.168.122.100:3100 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:39:27 compute-0 sudo[91343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.client.admin.keyring.new
Sep 30 17:39:27 compute-0 sudo[91343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:27 compute-0 sudo[91343]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:27 compute-0 podman[91366]: 2025-09-30 17:39:27.564156351 +0000 UTC m=+0.039518790 container create 71f181c2e88c01111ab60a5072b270c6bce8fcfb7a6177fa498d94570b7dd581 (image=quay.io/ceph/ceph:v19, name=compassionate_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:39:27 compute-0 sudo[91378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:39:27 compute-0 sudo[91378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:27 compute-0 sudo[91378]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:27 compute-0 systemd[1]: Started libpod-conmon-71f181c2e88c01111ab60a5072b270c6bce8fcfb7a6177fa498d94570b7dd581.scope.
Sep 30 17:39:27 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7710b49fcf03a44d8a16e290b23dc7d3fd7f7ecea58a580d1822bd75f3f80cc0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7710b49fcf03a44d8a16e290b23dc7d3fd7f7ecea58a580d1822bd75f3f80cc0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7710b49fcf03a44d8a16e290b23dc7d3fd7f7ecea58a580d1822bd75f3f80cc0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:27 compute-0 sudo[91408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.client.admin.keyring.new
Sep 30 17:39:27 compute-0 sudo[91408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:27 compute-0 podman[91366]: 2025-09-30 17:39:27.548659101 +0000 UTC m=+0.024021560 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:39:27 compute-0 sudo[91408]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:27 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Sep 30 17:39:27 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Sep 30 17:39:27 compute-0 podman[91366]: 2025-09-30 17:39:27.673751817 +0000 UTC m=+0.149114276 container init 71f181c2e88c01111ab60a5072b270c6bce8fcfb7a6177fa498d94570b7dd581 (image=quay.io/ceph/ceph:v19, name=compassionate_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Sep 30 17:39:27 compute-0 podman[91366]: 2025-09-30 17:39:27.681394954 +0000 UTC m=+0.156757393 container start 71f181c2e88c01111ab60a5072b270c6bce8fcfb7a6177fa498d94570b7dd581 (image=quay.io/ceph/ceph:v19, name=compassionate_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Sep 30 17:39:27 compute-0 podman[91366]: 2025-09-30 17:39:27.684829592 +0000 UTC m=+0.160192031 container attach 71f181c2e88c01111ab60a5072b270c6bce8fcfb7a6177fa498d94570b7dd581 (image=quay.io/ceph/ceph:v19, name=compassionate_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Sep 30 17:39:27 compute-0 sudo[91460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.client.admin.keyring.new
Sep 30 17:39:27 compute-0 sudo[91460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:27 compute-0 sudo[91460]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:27 compute-0 sudo[91485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.client.admin.keyring.new
Sep 30 17:39:27 compute-0 sudo[91485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:27 compute-0 sudo[91485]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:27 compute-0 sudo[91529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Sep 30 17:39:27 compute-0 sudo[91529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:27 compute-0 sudo[91529]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:27 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring
Sep 30 17:39:27 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring
Sep 30 17:39:27 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Sep 30 17:39:27 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Sep 30 17:39:27 compute-0 sudo[91554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config
Sep 30 17:39:27 compute-0 sudo[91554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:27 compute-0 sudo[91554]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:27 compute-0 sudo[91579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config
Sep 30 17:39:27 compute-0 sudo[91579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:27 compute-0 sudo[91579]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:28 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.14346 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:39:28 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Sep 30 17:39:28 compute-0 sudo[91604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring.new
Sep 30 17:39:28 compute-0 sudo[91604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:28 compute-0 sudo[91604]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:28 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:28 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.efvthf(active, since 4s), standbys: compute-1.glbusf
Sep 30 17:39:28 compute-0 compassionate_dijkstra[91412]: Option GRAFANA_API_URL updated
Sep 30 17:39:28 compute-0 systemd[1]: libpod-71f181c2e88c01111ab60a5072b270c6bce8fcfb7a6177fa498d94570b7dd581.scope: Deactivated successfully.
Sep 30 17:39:28 compute-0 podman[91366]: 2025-09-30 17:39:28.086860189 +0000 UTC m=+0.562222628 container died 71f181c2e88c01111ab60a5072b270c6bce8fcfb7a6177fa498d94570b7dd581 (image=quay.io/ceph/ceph:v19, name=compassionate_dijkstra, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 17:39:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-7710b49fcf03a44d8a16e290b23dc7d3fd7f7ecea58a580d1822bd75f3f80cc0-merged.mount: Deactivated successfully.
Sep 30 17:39:28 compute-0 podman[91366]: 2025-09-30 17:39:28.126036279 +0000 UTC m=+0.601398708 container remove 71f181c2e88c01111ab60a5072b270c6bce8fcfb7a6177fa498d94570b7dd581 (image=quay.io/ceph/ceph:v19, name=compassionate_dijkstra, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Sep 30 17:39:28 compute-0 sudo[91630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:39:28 compute-0 sudo[91630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:28 compute-0 sudo[91630]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:28 compute-0 systemd[1]: libpod-conmon-71f181c2e88c01111ab60a5072b270c6bce8fcfb7a6177fa498d94570b7dd581.scope: Deactivated successfully.
Sep 30 17:39:28 compute-0 sudo[91285]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:28 compute-0 sudo[91667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring.new
Sep 30 17:39:28 compute-0 sudo[91667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:28 compute-0 sudo[91667]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:28 compute-0 sudo[91717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring.new
Sep 30 17:39:28 compute-0 sudo[91761]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppgayqsxkpuqmanygzbsdtcxocsgpdcz ; /usr/bin/python3'
Sep 30 17:39:28 compute-0 sudo[91717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:28 compute-0 sudo[91761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:39:28 compute-0 sudo[91717]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:28 compute-0 sudo[91766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring.new
Sep 30 17:39:28 compute-0 sudo[91766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:28 compute-0 sudo[91766]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:28 compute-0 ceph-mon[73755]: Updating compute-0:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf
Sep 30 17:39:28 compute-0 ceph-mon[73755]: 2.0 scrub starts
Sep 30 17:39:28 compute-0 ceph-mon[73755]: 2.0 scrub ok
Sep 30 17:39:28 compute-0 ceph-mon[73755]: Updating compute-1:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf
Sep 30 17:39:28 compute-0 ceph-mon[73755]: from='client.14342 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:39:28 compute-0 ceph-mon[73755]: pgmap v6: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:39:28 compute-0 ceph-mon[73755]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Sep 30 17:39:28 compute-0 ceph-mon[73755]: 4.3 scrub starts
Sep 30 17:39:28 compute-0 ceph-mon[73755]: 4.3 scrub ok
Sep 30 17:39:28 compute-0 ceph-mon[73755]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Sep 30 17:39:28 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:28 compute-0 ceph-mon[73755]: mgrmap e16: compute-0.efvthf(active, since 4s), standbys: compute-1.glbusf
Sep 30 17:39:28 compute-0 python3[91765]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:39:28 compute-0 sudo[91791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring.new /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring
Sep 30 17:39:28 compute-0 sudo[91791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:28 compute-0 sudo[91791]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:28 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:39:28 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:28 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:39:28 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:28 compute-0 podman[91815]: 2025-09-30 17:39:28.488633999 +0000 UTC m=+0.046352477 container create 70b3a4fe3492f62c14b91dde1bbd58241a178060a4e8731f401bb151beb6ee0f (image=quay.io/ceph/ceph:v19, name=bold_golick, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:39:28 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring
Sep 30 17:39:28 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring
Sep 30 17:39:28 compute-0 systemd[1]: Started libpod-conmon-70b3a4fe3492f62c14b91dde1bbd58241a178060a4e8731f401bb151beb6ee0f.scope.
Sep 30 17:39:28 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9983bb6a0ab133650cb0ba03c5d38d98e10361e5641f8cb9ffa6757413bcb89/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9983bb6a0ab133650cb0ba03c5d38d98e10361e5641f8cb9ffa6757413bcb89/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9983bb6a0ab133650cb0ba03c5d38d98e10361e5641f8cb9ffa6757413bcb89/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:28 compute-0 podman[91815]: 2025-09-30 17:39:28.547143908 +0000 UTC m=+0.104862386 container init 70b3a4fe3492f62c14b91dde1bbd58241a178060a4e8731f401bb151beb6ee0f (image=quay.io/ceph/ceph:v19, name=bold_golick, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:39:28 compute-0 podman[91815]: 2025-09-30 17:39:28.553741688 +0000 UTC m=+0.111460166 container start 70b3a4fe3492f62c14b91dde1bbd58241a178060a4e8731f401bb151beb6ee0f (image=quay.io/ceph/ceph:v19, name=bold_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Sep 30 17:39:28 compute-0 podman[91815]: 2025-09-30 17:39:28.557038623 +0000 UTC m=+0.114757101 container attach 70b3a4fe3492f62c14b91dde1bbd58241a178060a4e8731f401bb151beb6ee0f (image=quay.io/ceph/ceph:v19, name=bold_golick, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Sep 30 17:39:28 compute-0 podman[91815]: 2025-09-30 17:39:28.471036345 +0000 UTC m=+0.028754823 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:39:28 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Sep 30 17:39:28 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Sep 30 17:39:28 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Sep 30 17:39:28 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2417511691' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Sep 30 17:39:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:39:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v8: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:39:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:39:29 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:39:29 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 17:39:29 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:29 compute-0 ceph-mgr[74051]: [progress INFO root] update: starting ev 0d250875-160a-45dc-889e-ba080a8928eb (Updating node-exporter deployment (+2 -> 2))
Sep 30 17:39:29 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-0 on compute-0
Sep 30 17:39:29 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-0 on compute-0
Sep 30 17:39:29 compute-0 ceph-mon[73755]: Updating compute-0:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring
Sep 30 17:39:29 compute-0 ceph-mon[73755]: 2.5 scrub starts
Sep 30 17:39:29 compute-0 ceph-mon[73755]: 2.5 scrub ok
Sep 30 17:39:29 compute-0 ceph-mon[73755]: from='client.14346 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:39:29 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:29 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:29 compute-0 ceph-mon[73755]: Updating compute-1:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring
Sep 30 17:39:29 compute-0 ceph-mon[73755]: 3.4 scrub starts
Sep 30 17:39:29 compute-0 ceph-mon[73755]: 3.4 scrub ok
Sep 30 17:39:29 compute-0 ceph-mon[73755]: 4.1f scrub starts
Sep 30 17:39:29 compute-0 ceph-mon[73755]: 4.1f scrub ok
Sep 30 17:39:29 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2417511691' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Sep 30 17:39:29 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:29 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:29 compute-0 sudo[91856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:39:29 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2417511691' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Sep 30 17:39:29 compute-0 sudo[91856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:29 compute-0 ceph-mgr[74051]: mgr handle_mgr_map respawning because set of enabled modules changed!
Sep 30 17:39:29 compute-0 ceph-mgr[74051]: mgr respawn  e: '/usr/bin/ceph-mgr'
Sep 30 17:39:29 compute-0 ceph-mgr[74051]: mgr respawn  0: '/usr/bin/ceph-mgr'
Sep 30 17:39:29 compute-0 ceph-mgr[74051]: mgr respawn  1: '-n'
Sep 30 17:39:29 compute-0 ceph-mgr[74051]: mgr respawn  2: 'mgr.compute-0.efvthf'
Sep 30 17:39:29 compute-0 ceph-mgr[74051]: mgr respawn  3: '-f'
Sep 30 17:39:29 compute-0 ceph-mgr[74051]: mgr respawn  4: '--setuser'
Sep 30 17:39:29 compute-0 ceph-mgr[74051]: mgr respawn  5: 'ceph'
Sep 30 17:39:29 compute-0 ceph-mgr[74051]: mgr respawn  6: '--setgroup'
Sep 30 17:39:29 compute-0 ceph-mgr[74051]: mgr respawn  7: 'ceph'
Sep 30 17:39:29 compute-0 ceph-mgr[74051]: mgr respawn  8: '--default-log-to-file=false'
Sep 30 17:39:29 compute-0 ceph-mgr[74051]: mgr respawn  9: '--default-log-to-journald=true'
Sep 30 17:39:29 compute-0 ceph-mgr[74051]: mgr respawn  10: '--default-log-to-stderr=false'
Sep 30 17:39:29 compute-0 ceph-mgr[74051]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Sep 30 17:39:29 compute-0 ceph-mgr[74051]: mgr respawn  exe_path /proc/self/exe
Sep 30 17:39:29 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.efvthf(active, since 6s), standbys: compute-1.glbusf
Sep 30 17:39:29 compute-0 sudo[91856]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:29 compute-0 systemd[1]: libpod-70b3a4fe3492f62c14b91dde1bbd58241a178060a4e8731f401bb151beb6ee0f.scope: Deactivated successfully.
Sep 30 17:39:29 compute-0 podman[91815]: 2025-09-30 17:39:29.560177469 +0000 UTC m=+1.117895987 container died 70b3a4fe3492f62c14b91dde1bbd58241a178060a4e8731f401bb151beb6ee0f (image=quay.io/ceph/ceph:v19, name=bold_golick, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:39:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9983bb6a0ab133650cb0ba03c5d38d98e10361e5641f8cb9ffa6757413bcb89-merged.mount: Deactivated successfully.
Sep 30 17:39:29 compute-0 podman[91815]: 2025-09-30 17:39:29.603561717 +0000 UTC m=+1.161280195 container remove 70b3a4fe3492f62c14b91dde1bbd58241a178060a4e8731f401bb151beb6ee0f (image=quay.io/ceph/ceph:v19, name=bold_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:39:29 compute-0 systemd[1]: libpod-conmon-70b3a4fe3492f62c14b91dde1bbd58241a178060a4e8731f401bb151beb6ee0f.scope: Deactivated successfully.
Sep 30 17:39:29 compute-0 sudo[91761]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:29 compute-0 sshd-session[90180]: Read error from remote host 192.168.122.100 port 42812: Connection reset by peer
Sep 30 17:39:29 compute-0 sshd-session[90166]: pam_unix(sshd:session): session closed for user ceph-admin
Sep 30 17:39:29 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Sep 30 17:39:29 compute-0 systemd[1]: session-35.scope: Consumed 4.161s CPU time.
Sep 30 17:39:29 compute-0 systemd-logind[811]: Session 35 logged out. Waiting for processes to exit.
Sep 30 17:39:29 compute-0 systemd-logind[811]: Removed session 35.
Sep 30 17:39:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ignoring --setuser ceph since I am not root
Sep 30 17:39:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ignoring --setgroup ceph since I am not root
Sep 30 17:39:29 compute-0 ceph-mgr[74051]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Sep 30 17:39:29 compute-0 ceph-mgr[74051]: pidfile_write: ignore empty --pid-file
Sep 30 17:39:29 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'alerts'
Sep 30 17:39:29 compute-0 sudo[91939]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzutcfwvvrjbxilfeohjwpmuzlsctyuk ; /usr/bin/python3'
Sep 30 17:39:29 compute-0 sudo[91939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:39:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:29.799+0000 7f2d43821140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Sep 30 17:39:29 compute-0 ceph-mgr[74051]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Sep 30 17:39:29 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'balancer'
Sep 30 17:39:29 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Sep 30 17:39:29 compute-0 python3[91941]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:39:29 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Sep 30 17:39:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:29.885+0000 7f2d43821140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Sep 30 17:39:29 compute-0 ceph-mgr[74051]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Sep 30 17:39:29 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'cephadm'
Sep 30 17:39:29 compute-0 podman[91942]: 2025-09-30 17:39:29.951742365 +0000 UTC m=+0.057276338 container create 6507e31b3c48fd3429448cc5fc8d91fc4d790412b689b8a784f85b4f37e719fb (image=quay.io/ceph/ceph:v19, name=gracious_mendel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Sep 30 17:39:30 compute-0 systemd[1]: Started libpod-conmon-6507e31b3c48fd3429448cc5fc8d91fc4d790412b689b8a784f85b4f37e719fb.scope.
Sep 30 17:39:30 compute-0 podman[91942]: 2025-09-30 17:39:29.915694666 +0000 UTC m=+0.021228659 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:39:30 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d3fabcef59bba3f1ad877e5b758f5a9e642564ebc01d9c4c7924b7387b3f479/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d3fabcef59bba3f1ad877e5b758f5a9e642564ebc01d9c4c7924b7387b3f479/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d3fabcef59bba3f1ad877e5b758f5a9e642564ebc01d9c4c7924b7387b3f479/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:30 compute-0 podman[91942]: 2025-09-30 17:39:30.064611316 +0000 UTC m=+0.170145319 container init 6507e31b3c48fd3429448cc5fc8d91fc4d790412b689b8a784f85b4f37e719fb (image=quay.io/ceph/ceph:v19, name=gracious_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 17:39:30 compute-0 podman[91942]: 2025-09-30 17:39:30.070278122 +0000 UTC m=+0.175812095 container start 6507e31b3c48fd3429448cc5fc8d91fc4d790412b689b8a784f85b4f37e719fb (image=quay.io/ceph/ceph:v19, name=gracious_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 17:39:30 compute-0 podman[91942]: 2025-09-30 17:39:30.100324287 +0000 UTC m=+0.205858260 container attach 6507e31b3c48fd3429448cc5fc8d91fc4d790412b689b8a784f85b4f37e719fb (image=quay.io/ceph/ceph:v19, name=gracious_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:39:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Sep 30 17:39:30 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1732324391' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Sep 30 17:39:30 compute-0 ceph-mon[73755]: from='mgr.14308 192.168.122.100:0/32643731' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:30 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2417511691' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Sep 30 17:39:30 compute-0 ceph-mon[73755]: mgrmap e17: compute-0.efvthf(active, since 6s), standbys: compute-1.glbusf
Sep 30 17:39:30 compute-0 ceph-mon[73755]: 4.4 scrub starts
Sep 30 17:39:30 compute-0 ceph-mon[73755]: 4.4 scrub ok
Sep 30 17:39:30 compute-0 ceph-mon[73755]: 5.11 scrub starts
Sep 30 17:39:30 compute-0 ceph-mon[73755]: 5.11 scrub ok
Sep 30 17:39:30 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1732324391' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Sep 30 17:39:30 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1732324391' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Sep 30 17:39:30 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.efvthf(active, since 7s), standbys: compute-1.glbusf
Sep 30 17:39:30 compute-0 systemd[1]: libpod-6507e31b3c48fd3429448cc5fc8d91fc4d790412b689b8a784f85b4f37e719fb.scope: Deactivated successfully.
Sep 30 17:39:30 compute-0 podman[91942]: 2025-09-30 17:39:30.590197918 +0000 UTC m=+0.695731931 container died 6507e31b3c48fd3429448cc5fc8d91fc4d790412b689b8a784f85b4f37e719fb (image=quay.io/ceph/ceph:v19, name=gracious_mendel, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Sep 30 17:39:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d3fabcef59bba3f1ad877e5b758f5a9e642564ebc01d9c4c7924b7387b3f479-merged.mount: Deactivated successfully.
Sep 30 17:39:30 compute-0 podman[91942]: 2025-09-30 17:39:30.629204334 +0000 UTC m=+0.734738307 container remove 6507e31b3c48fd3429448cc5fc8d91fc4d790412b689b8a784f85b4f37e719fb (image=quay.io/ceph/ceph:v19, name=gracious_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:39:30 compute-0 systemd[1]: libpod-conmon-6507e31b3c48fd3429448cc5fc8d91fc4d790412b689b8a784f85b4f37e719fb.scope: Deactivated successfully.
Sep 30 17:39:30 compute-0 sudo[91939]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:30 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'crash'
Sep 30 17:39:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:30.744+0000 7f2d43821140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Sep 30 17:39:30 compute-0 ceph-mgr[74051]: mgr[py] Module crash has missing NOTIFY_TYPES member
Sep 30 17:39:30 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'dashboard'
Sep 30 17:39:30 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Sep 30 17:39:30 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Sep 30 17:39:31 compute-0 python3[92082]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 17:39:31 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'devicehealth'
Sep 30 17:39:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:31.399+0000 7f2d43821140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Sep 30 17:39:31 compute-0 ceph-mgr[74051]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Sep 30 17:39:31 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'diskprediction_local'
Sep 30 17:39:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Sep 30 17:39:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Sep 30 17:39:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]:   from numpy import show_config as show_numpy_config
Sep 30 17:39:31 compute-0 ceph-mon[73755]: 5.5 scrub starts
Sep 30 17:39:31 compute-0 ceph-mon[73755]: 5.5 scrub ok
Sep 30 17:39:31 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1732324391' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Sep 30 17:39:31 compute-0 ceph-mon[73755]: mgrmap e18: compute-0.efvthf(active, since 7s), standbys: compute-1.glbusf
Sep 30 17:39:31 compute-0 ceph-mon[73755]: 5.1f scrub starts
Sep 30 17:39:31 compute-0 ceph-mon[73755]: 5.1f scrub ok
Sep 30 17:39:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:31.574+0000 7f2d43821140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Sep 30 17:39:31 compute-0 ceph-mgr[74051]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Sep 30 17:39:31 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'influx'
Sep 30 17:39:31 compute-0 python3[92153]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759253971.0702703-33783-109141412238077/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=077789bfca37885cf12ba13cca174a9bb803b3f7 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:39:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:31.645+0000 7f2d43821140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Sep 30 17:39:31 compute-0 ceph-mgr[74051]: mgr[py] Module influx has missing NOTIFY_TYPES member
Sep 30 17:39:31 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'insights'
Sep 30 17:39:31 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'iostat'
Sep 30 17:39:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:31.786+0000 7f2d43821140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Sep 30 17:39:31 compute-0 ceph-mgr[74051]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Sep 30 17:39:31 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'k8sevents'
Sep 30 17:39:31 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Sep 30 17:39:31 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Sep 30 17:39:31 compute-0 sudo[92201]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydxflarpjaedomjerunoriwtqxsjgkxr ; /usr/bin/python3'
Sep 30 17:39:31 compute-0 sudo[92201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:39:32 compute-0 python3[92203]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:39:32 compute-0 podman[92204]: 2025-09-30 17:39:32.103552159 +0000 UTC m=+0.041774078 container create 50125f615013dd9f53c850b7ac67f72f9e7eda2fb182ed13bbb34f579db1bc3e (image=quay.io/ceph/ceph:v19, name=ecstatic_archimedes, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Sep 30 17:39:32 compute-0 systemd[1]: Started libpod-conmon-50125f615013dd9f53c850b7ac67f72f9e7eda2fb182ed13bbb34f579db1bc3e.scope.
Sep 30 17:39:32 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a232a232eff4834099c0e4726485e8d116f6930256507bf70978df3b78d05f26/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a232a232eff4834099c0e4726485e8d116f6930256507bf70978df3b78d05f26/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a232a232eff4834099c0e4726485e8d116f6930256507bf70978df3b78d05f26/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:32 compute-0 podman[92204]: 2025-09-30 17:39:32.082758973 +0000 UTC m=+0.020980922 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:39:32 compute-0 podman[92204]: 2025-09-30 17:39:32.181674894 +0000 UTC m=+0.119896843 container init 50125f615013dd9f53c850b7ac67f72f9e7eda2fb182ed13bbb34f579db1bc3e (image=quay.io/ceph/ceph:v19, name=ecstatic_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Sep 30 17:39:32 compute-0 podman[92204]: 2025-09-30 17:39:32.187436322 +0000 UTC m=+0.125658261 container start 50125f615013dd9f53c850b7ac67f72f9e7eda2fb182ed13bbb34f579db1bc3e (image=quay.io/ceph/ceph:v19, name=ecstatic_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Sep 30 17:39:32 compute-0 podman[92204]: 2025-09-30 17:39:32.191521908 +0000 UTC m=+0.129743857 container attach 50125f615013dd9f53c850b7ac67f72f9e7eda2fb182ed13bbb34f579db1bc3e (image=quay.io/ceph/ceph:v19, name=ecstatic_archimedes, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Sep 30 17:39:32 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'localpool'
Sep 30 17:39:32 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'mds_autoscaler'
Sep 30 17:39:32 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'mirroring'
Sep 30 17:39:32 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'nfs'
Sep 30 17:39:32 compute-0 ceph-mon[73755]: 3.2 scrub starts
Sep 30 17:39:32 compute-0 ceph-mon[73755]: 3.2 scrub ok
Sep 30 17:39:32 compute-0 ceph-mon[73755]: 4.15 scrub starts
Sep 30 17:39:32 compute-0 ceph-mon[73755]: 4.15 scrub ok
Sep 30 17:39:32 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Sep 30 17:39:32 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Sep 30 17:39:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:32.832+0000 7f2d43821140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Sep 30 17:39:32 compute-0 ceph-mgr[74051]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Sep 30 17:39:32 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'orchestrator'
Sep 30 17:39:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:33.064+0000 7f2d43821140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Sep 30 17:39:33 compute-0 ceph-mgr[74051]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Sep 30 17:39:33 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'osd_perf_query'
Sep 30 17:39:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:33.146+0000 7f2d43821140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Sep 30 17:39:33 compute-0 ceph-mgr[74051]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Sep 30 17:39:33 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'osd_support'
Sep 30 17:39:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:33.216+0000 7f2d43821140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Sep 30 17:39:33 compute-0 ceph-mgr[74051]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Sep 30 17:39:33 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'pg_autoscaler'
Sep 30 17:39:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:33.295+0000 7f2d43821140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Sep 30 17:39:33 compute-0 ceph-mgr[74051]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Sep 30 17:39:33 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'progress'
Sep 30 17:39:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:33.363+0000 7f2d43821140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Sep 30 17:39:33 compute-0 ceph-mgr[74051]: mgr[py] Module progress has missing NOTIFY_TYPES member
Sep 30 17:39:33 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'prometheus'
Sep 30 17:39:33 compute-0 ceph-mon[73755]: 3.1 scrub starts
Sep 30 17:39:33 compute-0 ceph-mon[73755]: 3.1 scrub ok
Sep 30 17:39:33 compute-0 ceph-mon[73755]: 3.13 scrub starts
Sep 30 17:39:33 compute-0 ceph-mon[73755]: 3.13 scrub ok
Sep 30 17:39:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:33.725+0000 7f2d43821140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Sep 30 17:39:33 compute-0 ceph-mgr[74051]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Sep 30 17:39:33 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'rbd_support'
Sep 30 17:39:33 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Sep 30 17:39:33 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Sep 30 17:39:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:33.827+0000 7f2d43821140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Sep 30 17:39:33 compute-0 ceph-mgr[74051]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Sep 30 17:39:33 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'restful'
Sep 30 17:39:34 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'rgw'
Sep 30 17:39:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:39:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:34.268+0000 7f2d43821140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Sep 30 17:39:34 compute-0 ceph-mgr[74051]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Sep 30 17:39:34 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'rook'
Sep 30 17:39:34 compute-0 ceph-mon[73755]: 4.6 scrub starts
Sep 30 17:39:34 compute-0 ceph-mon[73755]: 4.6 scrub ok
Sep 30 17:39:34 compute-0 ceph-mon[73755]: 5.10 scrub starts
Sep 30 17:39:34 compute-0 ceph-mon[73755]: 5.10 scrub ok
Sep 30 17:39:34 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Sep 30 17:39:34 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Sep 30 17:39:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:34.845+0000 7f2d43821140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Sep 30 17:39:34 compute-0 ceph-mgr[74051]: mgr[py] Module rook has missing NOTIFY_TYPES member
Sep 30 17:39:34 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'selftest'
Sep 30 17:39:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:34.921+0000 7f2d43821140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Sep 30 17:39:34 compute-0 ceph-mgr[74051]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Sep 30 17:39:34 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'snap_schedule'
Sep 30 17:39:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:35.005+0000 7f2d43821140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Sep 30 17:39:35 compute-0 ceph-mgr[74051]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Sep 30 17:39:35 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'stats'
Sep 30 17:39:35 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'status'
Sep 30 17:39:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:35.163+0000 7f2d43821140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Sep 30 17:39:35 compute-0 ceph-mgr[74051]: mgr[py] Module status has missing NOTIFY_TYPES member
Sep 30 17:39:35 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'telegraf'
Sep 30 17:39:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:35.235+0000 7f2d43821140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Sep 30 17:39:35 compute-0 ceph-mgr[74051]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Sep 30 17:39:35 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'telemetry'
Sep 30 17:39:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:35.391+0000 7f2d43821140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Sep 30 17:39:35 compute-0 ceph-mgr[74051]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Sep 30 17:39:35 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'test_orchestrator'
Sep 30 17:39:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:35.628+0000 7f2d43821140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Sep 30 17:39:35 compute-0 ceph-mgr[74051]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Sep 30 17:39:35 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'volumes'
Sep 30 17:39:35 compute-0 ceph-mon[73755]: 4.2 scrub starts
Sep 30 17:39:35 compute-0 ceph-mon[73755]: 4.2 scrub ok
Sep 30 17:39:35 compute-0 ceph-mon[73755]: 5.16 scrub starts
Sep 30 17:39:35 compute-0 ceph-mon[73755]: 5.16 scrub ok
Sep 30 17:39:35 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Sep 30 17:39:35 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Sep 30 17:39:35 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.glbusf restarted
Sep 30 17:39:35 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.glbusf started
Sep 30 17:39:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:35.898+0000 7f2d43821140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Sep 30 17:39:35 compute-0 ceph-mgr[74051]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Sep 30 17:39:35 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'zabbix'
Sep 30 17:39:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:35.970+0000 7f2d43821140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Sep 30 17:39:35 compute-0 ceph-mgr[74051]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Sep 30 17:39:35 compute-0 ceph-mon[73755]: log_channel(cluster) log [INF] : Active manager daemon compute-0.efvthf restarted
Sep 30 17:39:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Sep 30 17:39:35 compute-0 ceph-mon[73755]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.efvthf
Sep 30 17:39:35 compute-0 ceph-mgr[74051]: ms_deliver_dispatch: unhandled message 0x55e78c070340 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Sep 30 17:39:35 compute-0 ceph-mgr[74051]: mgr handle_mgr_map respawning because set of enabled modules changed!
Sep 30 17:39:35 compute-0 ceph-mgr[74051]: mgr respawn  e: '/usr/bin/ceph-mgr'
Sep 30 17:39:35 compute-0 ceph-mgr[74051]: mgr respawn  0: '/usr/bin/ceph-mgr'
Sep 30 17:39:35 compute-0 ceph-mgr[74051]: mgr respawn  1: '-n'
Sep 30 17:39:35 compute-0 ceph-mgr[74051]: mgr respawn  2: 'mgr.compute-0.efvthf'
Sep 30 17:39:35 compute-0 ceph-mgr[74051]: mgr respawn  3: '-f'
Sep 30 17:39:35 compute-0 ceph-mgr[74051]: mgr respawn  4: '--setuser'
Sep 30 17:39:35 compute-0 ceph-mgr[74051]: mgr respawn  5: 'ceph'
Sep 30 17:39:35 compute-0 ceph-mgr[74051]: mgr respawn  6: '--setgroup'
Sep 30 17:39:35 compute-0 ceph-mgr[74051]: mgr respawn  7: 'ceph'
Sep 30 17:39:35 compute-0 ceph-mgr[74051]: mgr respawn  8: '--default-log-to-file=false'
Sep 30 17:39:35 compute-0 ceph-mgr[74051]: mgr respawn  9: '--default-log-to-journald=true'
Sep 30 17:39:35 compute-0 ceph-mgr[74051]: mgr respawn  10: '--default-log-to-stderr=false'
Sep 30 17:39:35 compute-0 ceph-mgr[74051]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Sep 30 17:39:35 compute-0 ceph-mgr[74051]: mgr respawn  exe_path /proc/self/exe
Sep 30 17:39:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e40 e40: 2 total, 2 up, 2 in
Sep 30 17:39:35 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e40: 2 total, 2 up, 2 in
Sep 30 17:39:35 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.efvthf(active, starting, since 0.0221987s), standbys: compute-1.glbusf
Sep 30 17:39:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ignoring --setuser ceph since I am not root
Sep 30 17:39:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ignoring --setgroup ceph since I am not root
Sep 30 17:39:36 compute-0 ceph-mgr[74051]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Sep 30 17:39:36 compute-0 ceph-mgr[74051]: pidfile_write: ignore empty --pid-file
Sep 30 17:39:36 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'alerts'
Sep 30 17:39:36 compute-0 sshd[1010]: Timeout before authentication for connection from 14.103.233.27 to 38.102.83.202, pid = 80204
Sep 30 17:39:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:36.245+0000 7f30a86a6140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Sep 30 17:39:36 compute-0 ceph-mgr[74051]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Sep 30 17:39:36 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'balancer'
Sep 30 17:39:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:36.325+0000 7f30a86a6140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Sep 30 17:39:36 compute-0 ceph-mgr[74051]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Sep 30 17:39:36 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'cephadm'
Sep 30 17:39:36 compute-0 ceph-mon[73755]: 5.3 scrub starts
Sep 30 17:39:36 compute-0 ceph-mon[73755]: 5.3 scrub ok
Sep 30 17:39:36 compute-0 ceph-mon[73755]: 3.10 scrub starts
Sep 30 17:39:36 compute-0 ceph-mon[73755]: 3.10 scrub ok
Sep 30 17:39:36 compute-0 ceph-mon[73755]: Standby manager daemon compute-1.glbusf restarted
Sep 30 17:39:36 compute-0 ceph-mon[73755]: Standby manager daemon compute-1.glbusf started
Sep 30 17:39:36 compute-0 ceph-mon[73755]: Active manager daemon compute-0.efvthf restarted
Sep 30 17:39:36 compute-0 ceph-mon[73755]: Activating manager daemon compute-0.efvthf
Sep 30 17:39:36 compute-0 ceph-mon[73755]: osdmap e40: 2 total, 2 up, 2 in
Sep 30 17:39:36 compute-0 ceph-mon[73755]: mgrmap e19: compute-0.efvthf(active, starting, since 0.0221987s), standbys: compute-1.glbusf
Sep 30 17:39:36 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 3.e scrub starts
Sep 30 17:39:36 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 3.e scrub ok
Sep 30 17:39:37 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'crash'
Sep 30 17:39:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:37.156+0000 7f30a86a6140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Sep 30 17:39:37 compute-0 ceph-mgr[74051]: mgr[py] Module crash has missing NOTIFY_TYPES member
Sep 30 17:39:37 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'dashboard'
Sep 30 17:39:37 compute-0 ceph-mon[73755]: 3.6 scrub starts
Sep 30 17:39:37 compute-0 ceph-mon[73755]: 3.6 scrub ok
Sep 30 17:39:37 compute-0 ceph-mon[73755]: 3.e scrub starts
Sep 30 17:39:37 compute-0 ceph-mon[73755]: 3.e scrub ok
Sep 30 17:39:37 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'devicehealth'
Sep 30 17:39:37 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Sep 30 17:39:37 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Sep 30 17:39:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:37.793+0000 7f30a86a6140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Sep 30 17:39:37 compute-0 ceph-mgr[74051]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Sep 30 17:39:37 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'diskprediction_local'
Sep 30 17:39:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Sep 30 17:39:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Sep 30 17:39:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]:   from numpy import show_config as show_numpy_config
Sep 30 17:39:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:37.960+0000 7f30a86a6140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Sep 30 17:39:37 compute-0 ceph-mgr[74051]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Sep 30 17:39:37 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'influx'
Sep 30 17:39:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:38.036+0000 7f30a86a6140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Sep 30 17:39:38 compute-0 ceph-mgr[74051]: mgr[py] Module influx has missing NOTIFY_TYPES member
Sep 30 17:39:38 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'insights'
Sep 30 17:39:38 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'iostat'
Sep 30 17:39:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:38.170+0000 7f30a86a6140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Sep 30 17:39:38 compute-0 ceph-mgr[74051]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Sep 30 17:39:38 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'k8sevents'
Sep 30 17:39:38 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'localpool'
Sep 30 17:39:38 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'mds_autoscaler'
Sep 30 17:39:38 compute-0 ceph-mon[73755]: 5.0 scrub starts
Sep 30 17:39:38 compute-0 ceph-mon[73755]: 5.0 scrub ok
Sep 30 17:39:38 compute-0 ceph-mon[73755]: 4.8 scrub starts
Sep 30 17:39:38 compute-0 ceph-mon[73755]: 4.8 scrub ok
Sep 30 17:39:38 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Sep 30 17:39:38 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Sep 30 17:39:38 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'mirroring'
Sep 30 17:39:38 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'nfs'
Sep 30 17:39:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:39:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:39.255+0000 7f30a86a6140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Sep 30 17:39:39 compute-0 ceph-mgr[74051]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Sep 30 17:39:39 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'orchestrator'
Sep 30 17:39:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:39.477+0000 7f30a86a6140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Sep 30 17:39:39 compute-0 ceph-mgr[74051]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Sep 30 17:39:39 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'osd_perf_query'
Sep 30 17:39:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:39.553+0000 7f30a86a6140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Sep 30 17:39:39 compute-0 ceph-mgr[74051]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Sep 30 17:39:39 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'osd_support'
Sep 30 17:39:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:39.625+0000 7f30a86a6140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Sep 30 17:39:39 compute-0 ceph-mgr[74051]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Sep 30 17:39:39 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'pg_autoscaler'
Sep 30 17:39:39 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Sep 30 17:39:39 compute-0 systemd[75040]: Activating special unit Exit the Session...
Sep 30 17:39:39 compute-0 systemd[75040]: Stopped target Main User Target.
Sep 30 17:39:39 compute-0 systemd[75040]: Stopped target Basic System.
Sep 30 17:39:39 compute-0 systemd[75040]: Stopped target Paths.
Sep 30 17:39:39 compute-0 systemd[75040]: Stopped target Sockets.
Sep 30 17:39:39 compute-0 systemd[75040]: Stopped target Timers.
Sep 30 17:39:39 compute-0 systemd[75040]: Stopped Mark boot as successful after the user session has run 2 minutes.
Sep 30 17:39:39 compute-0 systemd[75040]: Stopped Daily Cleanup of User's Temporary Directories.
Sep 30 17:39:39 compute-0 systemd[75040]: Closed D-Bus User Message Bus Socket.
Sep 30 17:39:39 compute-0 systemd[75040]: Stopped Create User's Volatile Files and Directories.
Sep 30 17:39:39 compute-0 systemd[75040]: Removed slice User Application Slice.
Sep 30 17:39:39 compute-0 systemd[75040]: Reached target Shutdown.
Sep 30 17:39:39 compute-0 systemd[75040]: Finished Exit the Session.
Sep 30 17:39:39 compute-0 systemd[75040]: Reached target Exit the Session.
Sep 30 17:39:39 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Sep 30 17:39:39 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Sep 30 17:39:39 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Sep 30 17:39:39 compute-0 ceph-mon[73755]: 3.7 scrub starts
Sep 30 17:39:39 compute-0 ceph-mon[73755]: 3.7 scrub ok
Sep 30 17:39:39 compute-0 ceph-mon[73755]: 5.9 scrub starts
Sep 30 17:39:39 compute-0 ceph-mon[73755]: 5.9 scrub ok
Sep 30 17:39:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:39.710+0000 7f30a86a6140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Sep 30 17:39:39 compute-0 ceph-mgr[74051]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Sep 30 17:39:39 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'progress'
Sep 30 17:39:39 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Sep 30 17:39:39 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Sep 30 17:39:39 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Sep 30 17:39:39 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Sep 30 17:39:39 compute-0 systemd[1]: user-42477.slice: Consumed 34.997s CPU time.
Sep 30 17:39:39 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Sep 30 17:39:39 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Sep 30 17:39:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:39.791+0000 7f30a86a6140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Sep 30 17:39:39 compute-0 ceph-mgr[74051]: mgr[py] Module progress has missing NOTIFY_TYPES member
Sep 30 17:39:39 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'prometheus'
Sep 30 17:39:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:40.156+0000 7f30a86a6140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Sep 30 17:39:40 compute-0 ceph-mgr[74051]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Sep 30 17:39:40 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'rbd_support'
Sep 30 17:39:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:40.258+0000 7f30a86a6140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Sep 30 17:39:40 compute-0 ceph-mgr[74051]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Sep 30 17:39:40 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'restful'
Sep 30 17:39:40 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'rgw'
Sep 30 17:39:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:40.713+0000 7f30a86a6140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Sep 30 17:39:40 compute-0 ceph-mgr[74051]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Sep 30 17:39:40 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'rook'
Sep 30 17:39:40 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Sep 30 17:39:40 compute-0 ceph-mon[73755]: 4.0 scrub starts
Sep 30 17:39:40 compute-0 ceph-mon[73755]: 4.0 scrub ok
Sep 30 17:39:40 compute-0 ceph-mon[73755]: 4.9 scrub starts
Sep 30 17:39:40 compute-0 ceph-mon[73755]: 4.9 scrub ok
Sep 30 17:39:40 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Sep 30 17:39:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:41.324+0000 7f30a86a6140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Sep 30 17:39:41 compute-0 ceph-mgr[74051]: mgr[py] Module rook has missing NOTIFY_TYPES member
Sep 30 17:39:41 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'selftest'
Sep 30 17:39:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:41.404+0000 7f30a86a6140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Sep 30 17:39:41 compute-0 ceph-mgr[74051]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Sep 30 17:39:41 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'snap_schedule'
Sep 30 17:39:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:41.484+0000 7f30a86a6140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Sep 30 17:39:41 compute-0 ceph-mgr[74051]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Sep 30 17:39:41 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'stats'
Sep 30 17:39:41 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'status'
Sep 30 17:39:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:41.645+0000 7f30a86a6140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Sep 30 17:39:41 compute-0 ceph-mgr[74051]: mgr[py] Module status has missing NOTIFY_TYPES member
Sep 30 17:39:41 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'telegraf'
Sep 30 17:39:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:41.719+0000 7f30a86a6140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Sep 30 17:39:41 compute-0 ceph-mgr[74051]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Sep 30 17:39:41 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'telemetry'
Sep 30 17:39:41 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 3.f scrub starts
Sep 30 17:39:41 compute-0 ceph-mon[73755]: 4.7 scrub starts
Sep 30 17:39:41 compute-0 ceph-mon[73755]: 4.7 scrub ok
Sep 30 17:39:41 compute-0 ceph-mon[73755]: 4.13 scrub starts
Sep 30 17:39:41 compute-0 ceph-mon[73755]: 4.13 scrub ok
Sep 30 17:39:41 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 3.f scrub ok
Sep 30 17:39:41 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.glbusf restarted
Sep 30 17:39:41 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.glbusf started
Sep 30 17:39:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:41.891+0000 7f30a86a6140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Sep 30 17:39:41 compute-0 ceph-mgr[74051]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Sep 30 17:39:41 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'test_orchestrator'
Sep 30 17:39:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:42.114+0000 7f30a86a6140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'volumes'
Sep 30 17:39:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:42.386+0000 7f30a86a6140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'zabbix'
Sep 30 17:39:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:39:42.460+0000 7f30a86a6140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Sep 30 17:39:42 compute-0 ceph-mon[73755]: log_channel(cluster) log [INF] : Active manager daemon compute-0.efvthf restarted
Sep 30 17:39:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: ms_deliver_dispatch: unhandled message 0x55efa2fb8340 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Sep 30 17:39:42 compute-0 ceph-mon[73755]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.efvthf
Sep 30 17:39:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e41 e41: 2 total, 2 up, 2 in
Sep 30 17:39:42 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e41: 2 total, 2 up, 2 in
Sep 30 17:39:42 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mgrmap e20: compute-0.efvthf(active, starting, since 0.0247954s), standbys: compute-1.glbusf
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: mgr handle_mgr_map Activating!
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: mgr handle_mgr_map I am now activating
Sep 30 17:39:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Sep 30 17:39:42 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Sep 30 17:39:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Sep 30 17:39:42 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 17:39:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.efvthf", "id": "compute-0.efvthf"} v 0)
Sep 30 17:39:42 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mgr metadata", "who": "compute-0.efvthf", "id": "compute-0.efvthf"}]: dispatch
Sep 30 17:39:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.glbusf", "id": "compute-1.glbusf"} v 0)
Sep 30 17:39:42 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mgr metadata", "who": "compute-1.glbusf", "id": "compute-1.glbusf"}]: dispatch
Sep 30 17:39:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 17:39:42 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:39:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 17:39:42 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 17:39:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mds metadata"} v 0)
Sep 30 17:39:42 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mds metadata"}]: dispatch
Sep 30 17:39:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).mds e1 all = 1
Sep 30 17:39:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata"} v 0)
Sep 30 17:39:42 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata"}]: dispatch
Sep 30 17:39:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata"} v 0)
Sep 30 17:39:42 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata"}]: dispatch
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: balancer
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [balancer INFO root] Starting
Sep 30 17:39:42 compute-0 ceph-mon[73755]: log_channel(cluster) log [INF] : Manager daemon compute-0.efvthf is now available
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_17:39:42
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: cephadm
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: crash
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: dashboard
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO access_control] Loading user roles DB version=2
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO sso] Loading SSO DB version=1
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: devicehealth
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: iostat
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO root] Configured CherryPy, starting engine...
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: nfs
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: orchestrator
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [devicehealth INFO root] Starting
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: pg_autoscaler
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: progress
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [progress INFO root] Loading...
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f302ba67370>, <progress.module.GhostEvent object at 0x7f302ba673a0>, <progress.module.GhostEvent object at 0x7f302ba673d0>, <progress.module.GhostEvent object at 0x7f302ba67400>, <progress.module.GhostEvent object at 0x7f302ba67430>, <progress.module.GhostEvent object at 0x7f302ba67460>, <progress.module.GhostEvent object at 0x7f302ba67490>, <progress.module.GhostEvent object at 0x7f302ba674c0>, <progress.module.GhostEvent object at 0x7f302ba674f0>, <progress.module.GhostEvent object at 0x7f302ba67520>] historic events
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [progress INFO root] Loaded OSDMap, ready.
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [rbd_support INFO root] recovery thread starting
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [rbd_support INFO root] starting setup
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: rbd_support
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: restful
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [restful INFO root] server_addr: :: server_port: 8003
Sep 30 17:39:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.efvthf/mirror_snapshot_schedule"} v 0)
Sep 30 17:39:42 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.efvthf/mirror_snapshot_schedule"}]: dispatch
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: status
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [restful WARNING root] server not running: no certificate configured
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: telemetry
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [rbd_support INFO root] PerfHandler: starting
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_task_task: vms, start_after=
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_task_task: volumes, start_after=
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_task_task: backups, start_after=
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: volumes
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_task_task: images, start_after=
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TaskHandler: starting
Sep 30 17:39:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.efvthf/trash_purge_schedule"} v 0)
Sep 30 17:39:42 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.efvthf/trash_purge_schedule"}]: dispatch
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [rbd_support INFO root] setup complete
Sep 30 17:39:42 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 3.a scrub starts
Sep 30 17:39:42 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 3.a scrub ok
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Sep 30 17:39:42 compute-0 ceph-mon[73755]: 3.0 scrub starts
Sep 30 17:39:42 compute-0 ceph-mon[73755]: 3.0 scrub ok
Sep 30 17:39:42 compute-0 ceph-mon[73755]: 3.f scrub starts
Sep 30 17:39:42 compute-0 ceph-mon[73755]: 3.f scrub ok
Sep 30 17:39:42 compute-0 ceph-mon[73755]: Standby manager daemon compute-1.glbusf restarted
Sep 30 17:39:42 compute-0 ceph-mon[73755]: Standby manager daemon compute-1.glbusf started
Sep 30 17:39:42 compute-0 ceph-mon[73755]: Active manager daemon compute-0.efvthf restarted
Sep 30 17:39:42 compute-0 ceph-mon[73755]: Activating manager daemon compute-0.efvthf
Sep 30 17:39:42 compute-0 ceph-mon[73755]: osdmap e41: 2 total, 2 up, 2 in
Sep 30 17:39:42 compute-0 ceph-mon[73755]: mgrmap e20: compute-0.efvthf(active, starting, since 0.0247954s), standbys: compute-1.glbusf
Sep 30 17:39:42 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Sep 30 17:39:42 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 17:39:42 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mgr metadata", "who": "compute-0.efvthf", "id": "compute-0.efvthf"}]: dispatch
Sep 30 17:39:42 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mgr metadata", "who": "compute-1.glbusf", "id": "compute-1.glbusf"}]: dispatch
Sep 30 17:39:42 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:39:42 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 17:39:42 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mds metadata"}]: dispatch
Sep 30 17:39:42 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata"}]: dispatch
Sep 30 17:39:42 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata"}]: dispatch
Sep 30 17:39:42 compute-0 ceph-mon[73755]: Manager daemon compute-0.efvthf is now available
Sep 30 17:39:42 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.efvthf/mirror_snapshot_schedule"}]: dispatch
Sep 30 17:39:42 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.efvthf/trash_purge_schedule"}]: dispatch
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Sep 30 17:39:42 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
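Each controller registered above maps to a REST endpoint served by the dashboard module once its CherryPy engine starts (the "Engine started." line below). A sketch of how those endpoints are typically queried; the port (8443, the dashboard default), username, password, and the use of jq are assumptions, not values visible in this log:

    # obtain a token from the Auth controller (/api/auth), then call a read-only endpoint
    TOKEN=$(curl -sk -X POST https://compute-0:8443/api/auth \
      -H 'Accept: application/vnd.ceph.api.v1.0+json' \
      -H 'Content-Type: application/json' \
      -d '{"username": "admin", "password": "secret"}' | jq -r .token)

    curl -sk https://compute-0:8443/api/summary \
      -H 'Accept: application/vnd.ceph.api.v1.0+json' \
      -H "Authorization: Bearer $TOKEN"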
Sep 30 17:39:42 compute-0 sshd-session[92390]: Accepted publickey for ceph-admin from 192.168.122.100 port 57202 ssh2: RSA SHA256:VErvvXRx5E6TZRj2L+dQwgZehzW+L2wAETKKYOgEi0M
Sep 30 17:39:42 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Sep 30 17:39:42 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Sep 30 17:39:42 compute-0 systemd-logind[811]: New session 36 of user ceph-admin.
Sep 30 17:39:42 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Sep 30 17:39:42 compute-0 systemd[1]: Starting User Manager for UID 42477...
Sep 30 17:39:42 compute-0 systemd[92405]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 17:39:43 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.module] Engine started.
Sep 30 17:39:43 compute-0 systemd[92405]: Queued start job for default target Main User Target.
Sep 30 17:39:43 compute-0 systemd[92405]: Created slice User Application Slice.
Sep 30 17:39:43 compute-0 systemd[92405]: Started Mark boot as successful after the user session has run 2 minutes.
Sep 30 17:39:43 compute-0 systemd[92405]: Started Daily Cleanup of User's Temporary Directories.
Sep 30 17:39:43 compute-0 systemd[92405]: Reached target Paths.
Sep 30 17:39:43 compute-0 systemd[92405]: Reached target Timers.
Sep 30 17:39:43 compute-0 systemd[92405]: Starting D-Bus User Message Bus Socket...
Sep 30 17:39:43 compute-0 systemd[92405]: Starting Create User's Volatile Files and Directories...
Sep 30 17:39:43 compute-0 systemd[92405]: Listening on D-Bus User Message Bus Socket.
Sep 30 17:39:43 compute-0 systemd[92405]: Reached target Sockets.
Sep 30 17:39:43 compute-0 systemd[92405]: Finished Create User's Volatile Files and Directories.
Sep 30 17:39:43 compute-0 systemd[92405]: Reached target Basic System.
Sep 30 17:39:43 compute-0 systemd[92405]: Reached target Main User Target.
Sep 30 17:39:43 compute-0 systemd[92405]: Startup finished in 116ms.
Sep 30 17:39:43 compute-0 systemd[1]: Started User Manager for UID 42477.
Sep 30 17:39:43 compute-0 systemd[1]: Started Session 36 of User ceph-admin.
Sep 30 17:39:43 compute-0 sshd-session[92390]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 17:39:43 compute-0 sudo[92422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:39:43 compute-0 sudo[92422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:43 compute-0 sudo[92422]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:43 compute-0 sudo[92447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Sep 30 17:39:43 compute-0 sudo[92447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:43 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mgrmap e21: compute-0.efvthf(active, since 1.04281s), standbys: compute-1.glbusf
Sep 30 17:39:43 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.14366 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 ", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:39:43 compute-0 ceph-mgr[74051]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Sep 30 17:39:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Sep 30 17:39:43 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Sep 30 17:39:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Sep 30 17:39:43 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Sep 30 17:39:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v3: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:39:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Sep 30 17:39:43 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Sep 30 17:39:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Sep 30 17:39:43 compute-0 ceph-mon[73755]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Sep 30 17:39:43 compute-0 ceph-mon[73755]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Sep 30 17:39:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0[73751]: 2025-09-30T17:39:43.518+0000 7fe051a2a640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Sep 30 17:39:43 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Sep 30 17:39:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).mds e2 new map
Sep 30 17:39:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           btime 2025-09-30T17:39:43.520234+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-09-30T17:39:43.520183+0000
                                           modified        2025-09-30T17:39:43.520183+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Sep 30 17:39:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e42 e42: 2 total, 2 up, 2 in
Sep 30 17:39:43 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e42: 2 total, 2 up, 2 in
Sep 30 17:39:43 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : fsmap cephfs:0
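The fsmap above shows the new filesystem with zero active MDS daemons, which is why MDS_ALL_DOWN and MDS_UP_LESS_THAN_MAX were raised a few lines earlier; both clear once the mds.cephfs service saved below is deployed. A quick way to follow that from an admin shell (a sketch, assuming the client.admin keyring is available on the node):

    ceph health detail                 # shows MDS_ALL_DOWN / MDS_UP_LESS_THAN_MAX while no MDS is up
    ceph fs status cephfs              # ranks, active/standby MDS daemons, pool usage
    ceph orch ps --daemon-type mds     # cephadm view of the scheduled mds daemons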
Sep 30 17:39:43 compute-0 ceph-mgr[74051]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1
Sep 30 17:39:43 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1
Sep 30 17:39:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Sep 30 17:39:43 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:43 compute-0 ceph-mgr[74051]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 , prefix:fs volume create, target:['mon-mgr', '']) < ""
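The fs volume create flow that just finished is driven by a single client command, and the pool and fs operations it dispatched are visible in the mon audit entries above. The equivalent CLI, reconstructed from those audit lines:

    # what the client issued
    ceph fs volume create cephfs --placement="compute-0 compute-1"

    # roughly what the volumes/orchestrator modules dispatched on its behalf (per the audit log)
    ceph osd pool create cephfs.cephfs.meta
    ceph osd pool create cephfs.cephfs.data --bulk
    ceph fs new cephfs cephfs.cephfs.meta cephfs.cephfs.data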
Sep 30 17:39:43 compute-0 systemd[1]: libpod-50125f615013dd9f53c850b7ac67f72f9e7eda2fb182ed13bbb34f579db1bc3e.scope: Deactivated successfully.
Sep 30 17:39:43 compute-0 podman[92499]: 2025-09-30 17:39:43.608967748 +0000 UTC m=+0.023813225 container died 50125f615013dd9f53c850b7ac67f72f9e7eda2fb182ed13bbb34f579db1bc3e (image=quay.io/ceph/ceph:v19, name=ecstatic_archimedes, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Sep 30 17:39:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-a232a232eff4834099c0e4726485e8d116f6930256507bf70978df3b78d05f26-merged.mount: Deactivated successfully.
Sep 30 17:39:43 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 4.a scrub starts
Sep 30 17:39:43 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 4.a scrub ok
Sep 30 17:39:43 compute-0 podman[92499]: 2025-09-30 17:39:43.684242319 +0000 UTC m=+0.099087776 container remove 50125f615013dd9f53c850b7ac67f72f9e7eda2fb182ed13bbb34f579db1bc3e (image=quay.io/ceph/ceph:v19, name=ecstatic_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Sep 30 17:39:43 compute-0 systemd[1]: libpod-conmon-50125f615013dd9f53c850b7ac67f72f9e7eda2fb182ed13bbb34f579db1bc3e.scope: Deactivated successfully.
Sep 30 17:39:43 compute-0 sudo[92201]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:43 compute-0 ceph-mon[73755]: 5.6 scrub starts
Sep 30 17:39:43 compute-0 ceph-mon[73755]: 5.6 scrub ok
Sep 30 17:39:43 compute-0 ceph-mon[73755]: 3.a scrub starts
Sep 30 17:39:43 compute-0 ceph-mon[73755]: 3.a scrub ok
Sep 30 17:39:43 compute-0 ceph-mon[73755]: mgrmap e21: compute-0.efvthf(active, since 1.04281s), standbys: compute-1.glbusf
Sep 30 17:39:43 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Sep 30 17:39:43 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Sep 30 17:39:43 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Sep 30 17:39:43 compute-0 ceph-mon[73755]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Sep 30 17:39:43 compute-0 ceph-mon[73755]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Sep 30 17:39:43 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Sep 30 17:39:43 compute-0 ceph-mon[73755]: osdmap e42: 2 total, 2 up, 2 in
Sep 30 17:39:43 compute-0 ceph-mon[73755]: fsmap cephfs:0
Sep 30 17:39:43 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:43 compute-0 sudo[92594]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnqqcixjenanhunsgrujqtrbzqhtemvy ; /usr/bin/python3'
Sep 30 17:39:43 compute-0 sudo[92594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:39:43 compute-0 podman[92566]: 2025-09-30 17:39:43.862890245 +0000 UTC m=+0.046853349 container exec 28cb2903608cec8e7c5a39aa3f398cadea7d807beef2cb8cca333795d18a1341 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:39:43 compute-0 podman[92566]: 2025-09-30 17:39:43.959689811 +0000 UTC m=+0.143652905 container exec_died 28cb2903608cec8e7c5a39aa3f398cadea7d807beef2cb8cca333795d18a1341 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Sep 30 17:39:43 compute-0 python3[92599]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
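The Ansible task above applies an MDS service spec from /tmp/ceph_mds.yml through a one-shot ceph container. The spec file's contents are not captured in this log; given the "Saving service mds.cephfs spec with placement compute-0;compute-1" lines, a spec of roughly this shape would produce the same result (hypothetical reconstruction):

    # hypothetical contents for /tmp/ceph_mds.yml, reconstructed from the spec-save lines above
    printf '%s\n' \
      'service_type: mds' \
      'service_id: cephfs' \
      'placement:' \
      '  hosts:' \
      '    - compute-0' \
      '    - compute-1' > /tmp/ceph_mds.yml

    # equivalent of the containerized invocation above, run where the admin keyring is available
    ceph orch apply -i /tmp/ceph_mds.yml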
Sep 30 17:39:44 compute-0 podman[92618]: 2025-09-30 17:39:44.031201565 +0000 UTC m=+0.035524257 container create ce5d7a9fb251d5d568c5b453225f4f66f6ef9a5fae0fa29490e6de753d43791e (image=quay.io/ceph/ceph:v19, name=modest_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Sep 30 17:39:44 compute-0 systemd[1]: Started libpod-conmon-ce5d7a9fb251d5d568c5b453225f4f66f6ef9a5fae0fa29490e6de753d43791e.scope.
Sep 30 17:39:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8028b73e37d59721fc194111446a2565bd21f930a24b7285ecfbfd28ed08ce8a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8028b73e37d59721fc194111446a2565bd21f930a24b7285ecfbfd28ed08ce8a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8028b73e37d59721fc194111446a2565bd21f930a24b7285ecfbfd28ed08ce8a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:44 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:44 compute-0 podman[92618]: 2025-09-30 17:39:44.014947716 +0000 UTC m=+0.019270428 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:39:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:39:44 compute-0 podman[92618]: 2025-09-30 17:39:44.128005521 +0000 UTC m=+0.132328233 container init ce5d7a9fb251d5d568c5b453225f4f66f6ef9a5fae0fa29490e6de753d43791e (image=quay.io/ceph/ceph:v19, name=modest_bose, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Sep 30 17:39:44 compute-0 podman[92618]: 2025-09-30 17:39:44.133575365 +0000 UTC m=+0.137898057 container start ce5d7a9fb251d5d568c5b453225f4f66f6ef9a5fae0fa29490e6de753d43791e (image=quay.io/ceph/ceph:v19, name=modest_bose, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Sep 30 17:39:44 compute-0 podman[92618]: 2025-09-30 17:39:44.136917741 +0000 UTC m=+0.141240433 container attach ce5d7a9fb251d5d568c5b453225f4f66f6ef9a5fae0fa29490e6de753d43791e (image=quay.io/ceph/ceph:v19, name=modest_bose, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Sep 30 17:39:44 compute-0 sudo[92447]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:39:44 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:39:44 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:44 compute-0 sudo[92710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:39:44 compute-0 sudo[92710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:44 compute-0 sudo[92710]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:44 compute-0 ceph-mgr[74051]: [cephadm INFO cherrypy.error] [30/Sep/2025:17:39:44] ENGINE Bus STARTING
Sep 30 17:39:44 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : [30/Sep/2025:17:39:44] ENGINE Bus STARTING
Sep 30 17:39:44 compute-0 sudo[92735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 17:39:44 compute-0 sudo[92735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:44 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.24125 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:39:44 compute-0 ceph-mgr[74051]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1
Sep 30 17:39:44 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1
Sep 30 17:39:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Sep 30 17:39:44 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:44 compute-0 modest_bose[92654]: Scheduled mds.cephfs update...
Sep 30 17:39:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v5: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:39:44 compute-0 ceph-mgr[74051]: [cephadm INFO cherrypy.error] [30/Sep/2025:17:39:44] ENGINE Serving on http://192.168.122.100:8765
Sep 30 17:39:44 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : [30/Sep/2025:17:39:44] ENGINE Serving on http://192.168.122.100:8765
Sep 30 17:39:44 compute-0 systemd[1]: libpod-ce5d7a9fb251d5d568c5b453225f4f66f6ef9a5fae0fa29490e6de753d43791e.scope: Deactivated successfully.
Sep 30 17:39:44 compute-0 podman[92618]: 2025-09-30 17:39:44.521124408 +0000 UTC m=+0.525447100 container died ce5d7a9fb251d5d568c5b453225f4f66f6ef9a5fae0fa29490e6de753d43791e (image=quay.io/ceph/ceph:v19, name=modest_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Sep 30 17:39:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-8028b73e37d59721fc194111446a2565bd21f930a24b7285ecfbfd28ed08ce8a-merged.mount: Deactivated successfully.
Sep 30 17:39:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:39:44 compute-0 podman[92618]: 2025-09-30 17:39:44.568786437 +0000 UTC m=+0.573109129 container remove ce5d7a9fb251d5d568c5b453225f4f66f6ef9a5fae0fa29490e6de753d43791e (image=quay.io/ceph/ceph:v19, name=modest_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:39:44 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:44 compute-0 systemd[1]: libpod-conmon-ce5d7a9fb251d5d568c5b453225f4f66f6ef9a5fae0fa29490e6de753d43791e.scope: Deactivated successfully.
Sep 30 17:39:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:39:44 compute-0 sudo[92594]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:44 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:44 compute-0 ceph-mgr[74051]: [devicehealth INFO root] Check health
Sep 30 17:39:44 compute-0 ceph-mgr[74051]: [cephadm INFO cherrypy.error] [30/Sep/2025:17:39:44] ENGINE Serving on https://192.168.122.100:7150
Sep 30 17:39:44 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : [30/Sep/2025:17:39:44] ENGINE Serving on https://192.168.122.100:7150
Sep 30 17:39:44 compute-0 ceph-mgr[74051]: [cephadm INFO cherrypy.error] [30/Sep/2025:17:39:44] ENGINE Bus STARTED
Sep 30 17:39:44 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : [30/Sep/2025:17:39:44] ENGINE Bus STARTED
Sep 30 17:39:44 compute-0 ceph-mgr[74051]: [cephadm INFO cherrypy.error] [30/Sep/2025:17:39:44] ENGINE Client ('192.168.122.100', 42244) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Sep 30 17:39:44 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : [30/Sep/2025:17:39:44] ENGINE Client ('192.168.122.100', 42244) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Sep 30 17:39:44 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 4.d scrub starts
Sep 30 17:39:44 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 4.d scrub ok
Sep 30 17:39:44 compute-0 sudo[92843]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wveveqbwvvogujftcknzuhbtxbrujcoc ; /usr/bin/python3'
Sep 30 17:39:44 compute-0 sudo[92843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:39:44 compute-0 ceph-mon[73755]: Saving service mds.cephfs spec with placement compute-0;compute-1
Sep 30 17:39:44 compute-0 ceph-mon[73755]: 5.c scrub starts
Sep 30 17:39:44 compute-0 ceph-mon[73755]: 5.c scrub ok
Sep 30 17:39:44 compute-0 ceph-mon[73755]: 4.a scrub starts
Sep 30 17:39:44 compute-0 ceph-mon[73755]: 4.a scrub ok
Sep 30 17:39:44 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:44 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:44 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:44 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:44 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:44 compute-0 python3[92846]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   nfs cluster create cephfs --ingress --virtual-ip=192.168.122.2/24 --ingress-mode=haproxy-protocol '--placement=compute-0 compute-1 '
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
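The NFS cluster is created the same way, through another one-shot container; stripped of the podman wrapper, the invocation above amounts to:

    ceph nfs cluster create cephfs \
      --ingress --virtual-ip=192.168.122.2/24 \
      --ingress-mode=haproxy-protocol \
      --placement="compute-0 compute-1"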
Sep 30 17:39:44 compute-0 sudo[92735]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:44 compute-0 podman[92863]: 2025-09-30 17:39:44.907995083 +0000 UTC m=+0.038006251 container create d161d4d9f1a803a3ebe592acf6890b402bbf110ee6c7c53428b1d8f5bf6ad9f7 (image=quay.io/ceph/ceph:v19, name=silly_hugle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:39:44 compute-0 systemd[1]: Started libpod-conmon-d161d4d9f1a803a3ebe592acf6890b402bbf110ee6c7c53428b1d8f5bf6ad9f7.scope.
Sep 30 17:39:44 compute-0 sudo[92876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:39:44 compute-0 sudo[92876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:44 compute-0 sudo[92876]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:44 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b136fab6a3dadcd02f7b2a24c7e9611d826cc9d44d4816d0d1aefcda5061a6ce/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b136fab6a3dadcd02f7b2a24c7e9611d826cc9d44d4816d0d1aefcda5061a6ce/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b136fab6a3dadcd02f7b2a24c7e9611d826cc9d44d4816d0d1aefcda5061a6ce/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:44 compute-0 podman[92863]: 2025-09-30 17:39:44.985001909 +0000 UTC m=+0.115013087 container init d161d4d9f1a803a3ebe592acf6890b402bbf110ee6c7c53428b1d8f5bf6ad9f7 (image=quay.io/ceph/ceph:v19, name=silly_hugle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Sep 30 17:39:44 compute-0 podman[92863]: 2025-09-30 17:39:44.891701663 +0000 UTC m=+0.021712851 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:39:44 compute-0 podman[92863]: 2025-09-30 17:39:44.990956203 +0000 UTC m=+0.120967371 container start d161d4d9f1a803a3ebe592acf6890b402bbf110ee6c7c53428b1d8f5bf6ad9f7 (image=quay.io/ceph/ceph:v19, name=silly_hugle, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid)
Sep 30 17:39:44 compute-0 podman[92863]: 2025-09-30 17:39:44.99357928 +0000 UTC m=+0.123590508 container attach d161d4d9f1a803a3ebe592acf6890b402bbf110ee6c7c53428b1d8f5bf6ad9f7 (image=quay.io/ceph/ceph:v19, name=silly_hugle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:39:45 compute-0 sudo[92906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Sep 30 17:39:45 compute-0 sudo[92906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:45 compute-0 sudo[92906]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:39:45 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:39:45 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Sep 30 17:39:45 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Sep 30 17:39:45 compute-0 ceph-mgr[74051]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.8M
Sep 30 17:39:45 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.8M
Sep 30 17:39:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Sep 30 17:39:45 compute-0 ceph-mgr[74051]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134071500: error parsing value: Value '134071500' is below minimum 939524096
Sep 30 17:39:45 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134071500: error parsing value: Value '134071500' is below minimum 939524096
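The warning above comes from cephadm's memory autotuning: it computed a per-OSD target of 134071500 bytes (the "127.8M" in the line above), which is below the hard minimum of 939524096 bytes (896 MiB), so the set is rejected after the previous per-daemon override was removed. On a small test node the usual workarounds are to disable autotuning or pin an explicit value at or above the minimum (a sketch; whether either is appropriate depends on the node's memory budget):

    ceph config set osd osd_memory_target_autotune false   # stop cephadm from recomputing the target
    ceph config set osd osd_memory_target 939524096        # optional: pin the minimum allowed value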
Sep 30 17:39:45 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mgrmap e22: compute-0.efvthf(active, since 2s), standbys: compute-1.glbusf
Sep 30 17:39:45 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.14394 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 ", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:39:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true} v 0)
Sep 30 17:39:45 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Sep 30 17:39:45 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Sep 30 17:39:45 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Sep 30 17:39:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:39:45 compute-0 ceph-mon[73755]: [30/Sep/2025:17:39:44] ENGINE Bus STARTING
Sep 30 17:39:45 compute-0 ceph-mon[73755]: from='client.24125 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:39:45 compute-0 ceph-mon[73755]: Saving service mds.cephfs spec with placement compute-0;compute-1
Sep 30 17:39:45 compute-0 ceph-mon[73755]: pgmap v5: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:39:45 compute-0 ceph-mon[73755]: [30/Sep/2025:17:39:44] ENGINE Serving on http://192.168.122.100:8765
Sep 30 17:39:45 compute-0 ceph-mon[73755]: 3.b scrub starts
Sep 30 17:39:45 compute-0 ceph-mon[73755]: 3.b scrub ok
Sep 30 17:39:45 compute-0 ceph-mon[73755]: [30/Sep/2025:17:39:44] ENGINE Serving on https://192.168.122.100:7150
Sep 30 17:39:45 compute-0 ceph-mon[73755]: [30/Sep/2025:17:39:44] ENGINE Bus STARTED
Sep 30 17:39:45 compute-0 ceph-mon[73755]: [30/Sep/2025:17:39:44] ENGINE Client ('192.168.122.100', 42244) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Sep 30 17:39:45 compute-0 ceph-mon[73755]: 4.d scrub starts
Sep 30 17:39:45 compute-0 ceph-mon[73755]: 4.d scrub ok
Sep 30 17:39:45 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:45 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:45 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Sep 30 17:39:45 compute-0 ceph-mon[73755]: mgrmap e22: compute-0.efvthf(active, since 2s), standbys: compute-1.glbusf
Sep 30 17:39:45 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Sep 30 17:39:45 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:39:45 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Sep 30 17:39:45 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Sep 30 17:39:45 compute-0 ceph-mgr[74051]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to 127.8M
Sep 30 17:39:45 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to 127.8M
Sep 30 17:39:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Sep 30 17:39:45 compute-0 ceph-mgr[74051]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-1 to 134071500: error parsing value: Value '134071500' is below minimum 939524096
Sep 30 17:39:45 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-1 to 134071500: error parsing value: Value '134071500' is below minimum 939524096
Sep 30 17:39:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:39:45 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:39:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 17:39:45 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:39:45 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Sep 30 17:39:45 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Sep 30 17:39:45 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Sep 30 17:39:45 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Sep 30 17:39:45 compute-0 sudo[92972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Sep 30 17:39:45 compute-0 sudo[92972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:45 compute-0 sudo[92972]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:45 compute-0 sudo[92997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph
Sep 30 17:39:45 compute-0 sudo[92997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:45 compute-0 sudo[92997]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:45 compute-0 sudo[93022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.conf.new
Sep 30 17:39:46 compute-0 sudo[93022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:46 compute-0 sudo[93022]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:46 compute-0 sudo[93047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:39:46 compute-0 sudo[93047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:46 compute-0 sudo[93047]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:46 compute-0 sudo[93072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.conf.new
Sep 30 17:39:46 compute-0 sudo[93072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:46 compute-0 sudo[93072]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:46 compute-0 sudo[93120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.conf.new
Sep 30 17:39:46 compute-0 sudo[93120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:46 compute-0 sudo[93120]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:46 compute-0 sudo[93145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.conf.new
Sep 30 17:39:46 compute-0 sudo[93145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:46 compute-0 sudo[93145]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:46 compute-0 sudo[93170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Sep 30 17:39:46 compute-0 sudo[93170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:46 compute-0 sudo[93170]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:46 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf
Sep 30 17:39:46 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf
Sep 30 17:39:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Sep 30 17:39:46 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Sep 30 17:39:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e43 e43: 2 total, 2 up, 2 in
Sep 30 17:39:46 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e43: 2 total, 2 up, 2 in
Sep 30 17:39:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"} v 0)
Sep 30 17:39:46 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Sep 30 17:39:46 compute-0 sudo[93195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config
Sep 30 17:39:46 compute-0 sudo[93195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:46 compute-0 sudo[93195]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:46 compute-0 sudo[93220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config
Sep 30 17:39:46 compute-0 sudo[93220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:46 compute-0 sudo[93220]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:46 compute-0 sudo[93245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf.new
Sep 30 17:39:46 compute-0 sudo[93245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:46 compute-0 sudo[93245]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v7: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:39:46 compute-0 sudo[93270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:39:46 compute-0 sudo[93270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:46 compute-0 sudo[93270]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:46 compute-0 sudo[93295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf.new
Sep 30 17:39:46 compute-0 sudo[93295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:46 compute-0 sudo[93295]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:46 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf
Sep 30 17:39:46 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf
Sep 30 17:39:46 compute-0 sudo[93343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf.new
Sep 30 17:39:46 compute-0 sudo[93343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:46 compute-0 sudo[93343]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:46 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Sep 30 17:39:46 compute-0 sudo[93368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf.new
Sep 30 17:39:46 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Sep 30 17:39:46 compute-0 sudo[93368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:46 compute-0 sudo[93368]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:46 compute-0 sudo[93393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf.new /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf
Sep 30 17:39:46 compute-0 sudo[93393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:46 compute-0 sudo[93393]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:46 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Sep 30 17:39:46 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Sep 30 17:39:46 compute-0 ceph-mon[73755]: Adjusting osd_memory_target on compute-0 to 127.8M
Sep 30 17:39:46 compute-0 ceph-mon[73755]: Unable to set osd_memory_target on compute-0 to 134071500: error parsing value: Value '134071500' is below minimum 939524096
Sep 30 17:39:46 compute-0 ceph-mon[73755]: from='client.14394 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 ", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 17:39:46 compute-0 ceph-mon[73755]: 5.d scrub starts
Sep 30 17:39:46 compute-0 ceph-mon[73755]: 5.d scrub ok
Sep 30 17:39:46 compute-0 ceph-mon[73755]: 4.5 scrub starts
Sep 30 17:39:46 compute-0 ceph-mon[73755]: 4.5 scrub ok
Sep 30 17:39:46 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:46 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:46 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Sep 30 17:39:46 compute-0 ceph-mon[73755]: Adjusting osd_memory_target on compute-1 to 127.8M
Sep 30 17:39:46 compute-0 ceph-mon[73755]: Unable to set osd_memory_target on compute-1 to 134071500: error parsing value: Value '134071500' is below minimum 939524096
Sep 30 17:39:46 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:39:46 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:39:46 compute-0 ceph-mon[73755]: Updating compute-0:/etc/ceph/ceph.conf
Sep 30 17:39:46 compute-0 ceph-mon[73755]: Updating compute-1:/etc/ceph/ceph.conf
Sep 30 17:39:46 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Sep 30 17:39:46 compute-0 ceph-mon[73755]: osdmap e43: 2 total, 2 up, 2 in
Sep 30 17:39:46 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Sep 30 17:39:46 compute-0 sudo[93418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Sep 30 17:39:46 compute-0 sudo[93418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:46 compute-0 sudo[93418]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:46 compute-0 sudo[93443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph
Sep 30 17:39:46 compute-0 sudo[93443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:46 compute-0 sudo[93443]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:46 compute-0 sudo[93468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.client.admin.keyring.new
Sep 30 17:39:46 compute-0 sudo[93468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:46 compute-0 sudo[93468]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:47 compute-0 sudo[93493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:39:47 compute-0 sudo[93493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:47 compute-0 sudo[93493]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:47 compute-0 sudo[93518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.client.admin.keyring.new
Sep 30 17:39:47 compute-0 sudo[93518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:47 compute-0 sudo[93518]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:47 compute-0 sudo[93566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.client.admin.keyring.new
Sep 30 17:39:47 compute-0 sudo[93566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:47 compute-0 sudo[93566]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:47 compute-0 sudo[93591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.client.admin.keyring.new
Sep 30 17:39:47 compute-0 sudo[93591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:47 compute-0 sudo[93591]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:47 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Sep 30 17:39:47 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Sep 30 17:39:47 compute-0 sudo[93616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Sep 30 17:39:47 compute-0 sudo[93616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:47 compute-0 sudo[93616]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:47 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring
Sep 30 17:39:47 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring
Sep 30 17:39:47 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Sep 30 17:39:47 compute-0 ceph-mon[73755]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Sep 30 17:39:47 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Sep 30 17:39:47 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e44 e44: 2 total, 2 up, 2 in
Sep 30 17:39:47 compute-0 sudo[93641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config
Sep 30 17:39:47 compute-0 sudo[93641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:47 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e44: 2 total, 2 up, 2 in
Sep 30 17:39:47 compute-0 sudo[93641]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:47 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mgrmap e23: compute-0.efvthf(active, since 4s), standbys: compute-1.glbusf
Sep 30 17:39:47 compute-0 ceph-mgr[74051]: [nfs INFO nfs.cluster] Created empty object:conf-nfs.cephfs
Sep 30 17:39:47 compute-0 ceph-mgr[74051]: [cephadm INFO root] Saving service nfs.cephfs spec with placement compute-0;compute-1
Sep 30 17:39:47 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Saving service nfs.cephfs spec with placement compute-0;compute-1
Sep 30 17:39:47 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 17:39:47 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:47 compute-0 ceph-mgr[74051]: [cephadm INFO root] Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1
Sep 30 17:39:47 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1
Sep 30 17:39:47 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Sep 30 17:39:47 compute-0 sudo[93666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config
Sep 30 17:39:47 compute-0 sudo[93666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:47 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:47 compute-0 sudo[93666]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:47 compute-0 systemd[1]: libpod-d161d4d9f1a803a3ebe592acf6890b402bbf110ee6c7c53428b1d8f5bf6ad9f7.scope: Deactivated successfully.
Sep 30 17:39:47 compute-0 podman[92863]: 2025-09-30 17:39:47.436547602 +0000 UTC m=+2.566558780 container died d161d4d9f1a803a3ebe592acf6890b402bbf110ee6c7c53428b1d8f5bf6ad9f7 (image=quay.io/ceph/ceph:v19, name=silly_hugle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:39:47 compute-0 sudo[93702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring.new
Sep 30 17:39:47 compute-0 sudo[93702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:47 compute-0 sudo[93702]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-b136fab6a3dadcd02f7b2a24c7e9611d826cc9d44d4816d0d1aefcda5061a6ce-merged.mount: Deactivated successfully.
Sep 30 17:39:47 compute-0 podman[92863]: 2025-09-30 17:39:47.491656753 +0000 UTC m=+2.621667921 container remove d161d4d9f1a803a3ebe592acf6890b402bbf110ee6c7c53428b1d8f5bf6ad9f7 (image=quay.io/ceph/ceph:v19, name=silly_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Sep 30 17:39:47 compute-0 systemd[1]: libpod-conmon-d161d4d9f1a803a3ebe592acf6890b402bbf110ee6c7c53428b1d8f5bf6ad9f7.scope: Deactivated successfully.
Sep 30 17:39:47 compute-0 sudo[92843]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:47 compute-0 sudo[93739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:39:47 compute-0 sudo[93739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:47 compute-0 sudo[93739]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:47 compute-0 sudo[93766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring.new
Sep 30 17:39:47 compute-0 sudo[93766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:47 compute-0 sudo[93766]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:47 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Sep 30 17:39:47 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Sep 30 17:39:47 compute-0 sudo[93814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring.new
Sep 30 17:39:47 compute-0 sudo[93814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:47 compute-0 sudo[93814]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:47 compute-0 sudo[93839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring.new
Sep 30 17:39:47 compute-0 sudo[93839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:47 compute-0 sudo[93839]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:47 compute-0 sudo[93864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring.new /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring
Sep 30 17:39:47 compute-0 sudo[93864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:47 compute-0 sudo[93864]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:47 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:39:47 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:47 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:39:47 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:47 compute-0 ceph-mon[73755]: Updating compute-0:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf
Sep 30 17:39:47 compute-0 ceph-mon[73755]: pgmap v7: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:39:47 compute-0 ceph-mon[73755]: 4.b scrub starts
Sep 30 17:39:47 compute-0 ceph-mon[73755]: 4.b scrub ok
Sep 30 17:39:47 compute-0 ceph-mon[73755]: Updating compute-1:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf
Sep 30 17:39:47 compute-0 ceph-mon[73755]: 5.7 scrub starts
Sep 30 17:39:47 compute-0 ceph-mon[73755]: 5.7 scrub ok
Sep 30 17:39:47 compute-0 ceph-mon[73755]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Sep 30 17:39:47 compute-0 ceph-mon[73755]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Sep 30 17:39:47 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Sep 30 17:39:47 compute-0 ceph-mon[73755]: osdmap e44: 2 total, 2 up, 2 in
Sep 30 17:39:47 compute-0 ceph-mon[73755]: mgrmap e23: compute-0.efvthf(active, since 4s), standbys: compute-1.glbusf
Sep 30 17:39:47 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:47 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:47 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:47 compute-0 sudo[93964]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aweoiddaaqjybvuvollqbbvcvkuqdoyi ; /usr/bin/python3'
Sep 30 17:39:47 compute-0 sudo[93964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:39:48 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring
Sep 30 17:39:48 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring
Sep 30 17:39:48 compute-0 python3[93966]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 17:39:48 compute-0 sudo[93964]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:48 compute-0 sudo[94037]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qilrappbeiwqhgyhqobupxtuyuyiwgoi ; /usr/bin/python3'
Sep 30 17:39:48 compute-0 sudo[94037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:39:48 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Sep 30 17:39:48 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e45 e45: 2 total, 2 up, 2 in
Sep 30 17:39:48 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e45: 2 total, 2 up, 2 in
Sep 30 17:39:48 compute-0 python3[94039]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759253987.8505492-33814-165805375566648/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=7362c4454e8786984d45f5a884c5c867d1ac96a9 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:39:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v10: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:39:48 compute-0 sudo[94037]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:48 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 3.d scrub starts
Sep 30 17:39:48 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 3.d scrub ok
Sep 30 17:39:48 compute-0 sudo[94087]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtfkfnnkttfkdumnuiuodrlspvrqbkpp ; /usr/bin/python3'
Sep 30 17:39:48 compute-0 sudo[94087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:39:48 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:39:48 compute-0 ceph-mon[73755]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Sep 30 17:39:48 compute-0 ceph-mon[73755]: Updating compute-0:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring
Sep 30 17:39:48 compute-0 ceph-mon[73755]: Saving service nfs.cephfs spec with placement compute-0;compute-1
Sep 30 17:39:48 compute-0 ceph-mon[73755]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1
Sep 30 17:39:48 compute-0 ceph-mon[73755]: 5.a scrub starts
Sep 30 17:39:48 compute-0 ceph-mon[73755]: 5.a scrub ok
Sep 30 17:39:48 compute-0 ceph-mon[73755]: 5.2 scrub starts
Sep 30 17:39:48 compute-0 ceph-mon[73755]: 5.2 scrub ok
Sep 30 17:39:48 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:48 compute-0 ceph-mon[73755]: Updating compute-1:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring
Sep 30 17:39:48 compute-0 ceph-mon[73755]: osdmap e45: 2 total, 2 up, 2 in
Sep 30 17:39:48 compute-0 ceph-mon[73755]: 5.b scrub starts
Sep 30 17:39:48 compute-0 ceph-mon[73755]: 5.b scrub ok
Sep 30 17:39:48 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:48 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:39:48 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:48 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 17:39:48 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:48 compute-0 ceph-mgr[74051]: [progress INFO root] update: starting ev bcfb2cd8-64c1-4cf1-890b-6b5a5dd275de (Updating node-exporter deployment (+2 -> 2))
Sep 30 17:39:48 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-0 on compute-0
Sep 30 17:39:48 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-0 on compute-0
Sep 30 17:39:48 compute-0 sudo[94090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:39:48 compute-0 sudo[94090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:48 compute-0 sudo[94090]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:48 compute-0 python3[94089]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:39:48 compute-0 sudo[94115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/node-exporter:v1.7.0 --timeout 895 _orch deploy --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:39:48 compute-0 sudo[94115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:49 compute-0 podman[94139]: 2025-09-30 17:39:49.046822133 +0000 UTC m=+0.042832886 container create 261d424246fe1fc313f12fb4f6556d3c22f87b83fdde42b306406646db5bcd1c (image=quay.io/ceph/ceph:v19, name=focused_austin, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:39:49 compute-0 systemd[1]: Started libpod-conmon-261d424246fe1fc313f12fb4f6556d3c22f87b83fdde42b306406646db5bcd1c.scope.
Sep 30 17:39:49 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c250c2ba9f72ac9fb907e404b51d0b6b134762d50a35213bae59917e59d1d3a1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c250c2ba9f72ac9fb907e404b51d0b6b134762d50a35213bae59917e59d1d3a1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:49 compute-0 podman[94139]: 2025-09-30 17:39:49.113110102 +0000 UTC m=+0.109120875 container init 261d424246fe1fc313f12fb4f6556d3c22f87b83fdde42b306406646db5bcd1c (image=quay.io/ceph/ceph:v19, name=focused_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid)
Sep 30 17:39:49 compute-0 podman[94139]: 2025-09-30 17:39:49.119377464 +0000 UTC m=+0.115388217 container start 261d424246fe1fc313f12fb4f6556d3c22f87b83fdde42b306406646db5bcd1c (image=quay.io/ceph/ceph:v19, name=focused_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:39:49 compute-0 podman[94139]: 2025-09-30 17:39:49.028153011 +0000 UTC m=+0.024163794 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:39:49 compute-0 podman[94139]: 2025-09-30 17:39:49.12312405 +0000 UTC m=+0.119134833 container attach 261d424246fe1fc313f12fb4f6556d3c22f87b83fdde42b306406646db5bcd1c (image=quay.io/ceph/ceph:v19, name=focused_austin, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid)
Sep 30 17:39:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:39:49 compute-0 systemd[1]: Reloading.
Sep 30 17:39:49 compute-0 ceph-mon[73755]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Sep 30 17:39:49 compute-0 systemd-rc-local-generator[94248]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:39:49 compute-0 systemd-sysv-generator[94251]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:39:49 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mgrmap e24: compute-0.efvthf(active, since 6s), standbys: compute-1.glbusf
Sep 30 17:39:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth import"} v 0)
Sep 30 17:39:49 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1719518272' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Sep 30 17:39:49 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1719518272' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Sep 30 17:39:49 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 3.3 deep-scrub starts
Sep 30 17:39:49 compute-0 podman[94139]: 2025-09-30 17:39:49.572935099 +0000 UTC m=+0.568945882 container died 261d424246fe1fc313f12fb4f6556d3c22f87b83fdde42b306406646db5bcd1c (image=quay.io/ceph/ceph:v19, name=focused_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:39:49 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 3.3 deep-scrub ok
Sep 30 17:39:49 compute-0 systemd[1]: libpod-261d424246fe1fc313f12fb4f6556d3c22f87b83fdde42b306406646db5bcd1c.scope: Deactivated successfully.
Sep 30 17:39:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-c250c2ba9f72ac9fb907e404b51d0b6b134762d50a35213bae59917e59d1d3a1-merged.mount: Deactivated successfully.
Sep 30 17:39:49 compute-0 podman[94139]: 2025-09-30 17:39:49.634281971 +0000 UTC m=+0.630292744 container remove 261d424246fe1fc313f12fb4f6556d3c22f87b83fdde42b306406646db5bcd1c (image=quay.io/ceph/ceph:v19, name=focused_austin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 17:39:49 compute-0 systemd[1]: libpod-conmon-261d424246fe1fc313f12fb4f6556d3c22f87b83fdde42b306406646db5bcd1c.scope: Deactivated successfully.
Sep 30 17:39:49 compute-0 systemd[1]: Reloading.
Sep 30 17:39:49 compute-0 sudo[94087]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:49 compute-0 systemd-rc-local-generator[94300]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:39:49 compute-0 systemd-sysv-generator[94303]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:39:49 compute-0 ceph-mon[73755]: pgmap v10: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:39:49 compute-0 ceph-mon[73755]: 3.d scrub starts
Sep 30 17:39:49 compute-0 ceph-mon[73755]: 3.d scrub ok
Sep 30 17:39:49 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:49 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:49 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:49 compute-0 ceph-mon[73755]: Deploying daemon node-exporter.compute-0 on compute-0
Sep 30 17:39:49 compute-0 ceph-mon[73755]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Sep 30 17:39:49 compute-0 ceph-mon[73755]: mgrmap e24: compute-0.efvthf(active, since 6s), standbys: compute-1.glbusf
Sep 30 17:39:49 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1719518272' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Sep 30 17:39:49 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1719518272' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Sep 30 17:39:49 compute-0 ceph-mon[73755]: 5.8 scrub starts
Sep 30 17:39:49 compute-0 ceph-mon[73755]: 5.8 scrub ok
Sep 30 17:39:49 compute-0 systemd[1]: Starting Ceph node-exporter.compute-0 for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 17:39:50 compute-0 bash[94356]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Sep 30 17:39:50 compute-0 sudo[94392]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcnyjbrcrzicxcjgbrmlxdkszvqifdql ; /usr/bin/python3'
Sep 30 17:39:50 compute-0 sudo[94392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:39:50 compute-0 python3[94394]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:39:50 compute-0 podman[94396]: 2025-09-30 17:39:50.413170574 +0000 UTC m=+0.039409037 container create 6523aa94f15d03ce15cbffd331bf51537fe8c0acd74f032bb193c209ed9bfc1c (image=quay.io/ceph/ceph:v19, name=modest_wilson, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 17:39:50 compute-0 systemd[1]: Started libpod-conmon-6523aa94f15d03ce15cbffd331bf51537fe8c0acd74f032bb193c209ed9bfc1c.scope.
Sep 30 17:39:50 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e95a8438c4387124e9fe463a6c4a5acd78985ca59eb4b7749ee270d3d50e4ba6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e95a8438c4387124e9fe463a6c4a5acd78985ca59eb4b7749ee270d3d50e4ba6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:50 compute-0 podman[94396]: 2025-09-30 17:39:50.396656049 +0000 UTC m=+0.022894532 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:39:50 compute-0 podman[94396]: 2025-09-30 17:39:50.497292784 +0000 UTC m=+0.123531297 container init 6523aa94f15d03ce15cbffd331bf51537fe8c0acd74f032bb193c209ed9bfc1c (image=quay.io/ceph/ceph:v19, name=modest_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:39:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v11: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Sep 30 17:39:50 compute-0 podman[94396]: 2025-09-30 17:39:50.504267734 +0000 UTC m=+0.130506207 container start 6523aa94f15d03ce15cbffd331bf51537fe8c0acd74f032bb193c209ed9bfc1c (image=quay.io/ceph/ceph:v19, name=modest_wilson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:39:50 compute-0 podman[94396]: 2025-09-30 17:39:50.507271881 +0000 UTC m=+0.133510364 container attach 6523aa94f15d03ce15cbffd331bf51537fe8c0acd74f032bb193c209ed9bfc1c (image=quay.io/ceph/ceph:v19, name=modest_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Sep 30 17:39:50 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 5.15 deep-scrub starts
Sep 30 17:39:50 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 5.15 deep-scrub ok
Sep 30 17:39:50 compute-0 bash[94356]: Getting image source signatures
Sep 30 17:39:50 compute-0 bash[94356]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Sep 30 17:39:50 compute-0 bash[94356]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Sep 30 17:39:50 compute-0 bash[94356]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Sep 30 17:39:50 compute-0 ceph-mon[73755]: 3.3 deep-scrub starts
Sep 30 17:39:50 compute-0 ceph-mon[73755]: 3.3 deep-scrub ok
Sep 30 17:39:50 compute-0 ceph-mon[73755]: 4.17 scrub starts
Sep 30 17:39:50 compute-0 ceph-mon[73755]: 4.17 scrub ok
Sep 30 17:39:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Sep 30 17:39:50 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1765872891' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Sep 30 17:39:50 compute-0 modest_wilson[94413]: 
Sep 30 17:39:50 compute-0 modest_wilson[94413]: {"fsid":"63d32c6a-fa18-54ed-8711-9a3915cc367b","health":{"status":"HEALTH_ERR","checks":{"BLUESTORE_SLOW_OP_ALERT":{"severity":"HEALTH_WARN","summary":{"message":"1 OSD(s) experiencing slow operations in BlueStore","count":1},"muted":false},"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":10,"quorum":[0,1],"quorum_names":["compute-0","compute-1"],"quorum_age":106,"monmap":{"epoch":2,"min_mon_release_name":"squid","num_mons":2},"osdmap":{"epoch":45,"num_osds":2,"num_up_osds":2,"osd_up_since":1759253917,"num_in_osds":2,"osd_in_since":1759253889,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":194}],"num_pgs":194,"num_pools":8,"num_objects":3,"data_bytes":459280,"bytes_used":56352768,"bytes_avail":42884931584,"bytes_total":42941284352,"read_bytes_sec":30030,"write_bytes_sec":0,"read_op_per_sec":9,"write_op_per_sec":2},"fsmap":{"epoch":2,"btime":"2025-09-30T17:39:43:520234+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":1,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":4,"modified":"2025-09-30T17:39:12.734724+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"bcfb2cd8-64c1-4cf1-890b-6b5a5dd275de":{"message":"Updating node-exporter deployment (+2 -> 2) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Sep 30 17:39:50 compute-0 systemd[1]: libpod-6523aa94f15d03ce15cbffd331bf51537fe8c0acd74f032bb193c209ed9bfc1c.scope: Deactivated successfully.
Sep 30 17:39:50 compute-0 podman[94396]: 2025-09-30 17:39:50.964268835 +0000 UTC m=+0.590507298 container died 6523aa94f15d03ce15cbffd331bf51537fe8c0acd74f032bb193c209ed9bfc1c (image=quay.io/ceph/ceph:v19, name=modest_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:39:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-e95a8438c4387124e9fe463a6c4a5acd78985ca59eb4b7749ee270d3d50e4ba6-merged.mount: Deactivated successfully.
Sep 30 17:39:51 compute-0 bash[94356]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Sep 30 17:39:51 compute-0 podman[94396]: 2025-09-30 17:39:51.259044556 +0000 UTC m=+0.885283019 container remove 6523aa94f15d03ce15cbffd331bf51537fe8c0acd74f032bb193c209ed9bfc1c (image=quay.io/ceph/ceph:v19, name=modest_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Sep 30 17:39:51 compute-0 bash[94356]: Writing manifest to image destination
Sep 30 17:39:51 compute-0 systemd[1]: libpod-conmon-6523aa94f15d03ce15cbffd331bf51537fe8c0acd74f032bb193c209ed9bfc1c.scope: Deactivated successfully.
Sep 30 17:39:51 compute-0 sudo[94392]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:51 compute-0 podman[94356]: 2025-09-30 17:39:51.305201216 +0000 UTC m=+1.251807199 container create 0a64c261699b14850432928cca94e24743e97435fdc1d2cca08edcee1f870789 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:39:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff4f7527b5bc9e9f3dd0ca9c4b4629b89a0e9c49749824dbc5db0d2e72a77cbf/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:51 compute-0 podman[94356]: 2025-09-30 17:39:51.354923318 +0000 UTC m=+1.301529321 container init 0a64c261699b14850432928cca94e24743e97435fdc1d2cca08edcee1f870789 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:39:51 compute-0 podman[94356]: 2025-09-30 17:39:51.360463331 +0000 UTC m=+1.307069314 container start 0a64c261699b14850432928cca94e24743e97435fdc1d2cca08edcee1f870789 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:39:51 compute-0 bash[94356]: 0a64c261699b14850432928cca94e24743e97435fdc1d2cca08edcee1f870789
Sep 30 17:39:51 compute-0 podman[94356]: 2025-09-30 17:39:51.28908084 +0000 UTC m=+1.235686843 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.366Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.366Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.370Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.370Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.370Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.370Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Sep 30 17:39:51 compute-0 systemd[1]: Started Ceph node-exporter.compute-0 for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=arp
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=bcache
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=bonding
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=btrfs
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=conntrack
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=cpu
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=cpufreq
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=diskstats
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=dmi
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=edac
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=entropy
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=fibrechannel
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=filefd
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=filesystem
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=hwmon
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=infiniband
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=ipvs
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=loadavg
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=mdadm
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=meminfo
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=netclass
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=netdev
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=netstat
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=nfs
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=nfsd
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=nvme
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=os
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=pressure
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=rapl
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=schedstat
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=selinux
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=sockstat
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=softnet
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=stat
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=tapestats
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=textfile
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=thermal_zone
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=time
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=udp_queues
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=uname
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=vmstat
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=xfs
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.371Z caller=node_exporter.go:117 level=info collector=zfs
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.372Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Sep 30 17:39:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0[94508]: ts=2025-09-30T17:39:51.372Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Sep 30 17:39:51 compute-0 sudo[94540]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-droayzuqbmyfogwtoffxqgtrqgepvbja ; /usr/bin/python3'
Sep 30 17:39:51 compute-0 sudo[94115]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:51 compute-0 sudo[94540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:39:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:39:51 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:39:51 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Sep 30 17:39:51 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:51 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-1 on compute-1
Sep 30 17:39:51 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-1 on compute-1
Sep 30 17:39:51 compute-0 python3[94542]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:39:51 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 3.c scrub starts
Sep 30 17:39:51 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 3.c scrub ok
Sep 30 17:39:51 compute-0 podman[94543]: 2025-09-30 17:39:51.599191466 +0000 UTC m=+0.044417656 container create baf2673676d4834152f3dae4fa788022590fa4a0dd346d272372403d10d7a4b5 (image=quay.io/ceph/ceph:v19, name=elated_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:39:51 compute-0 systemd[1]: Started libpod-conmon-baf2673676d4834152f3dae4fa788022590fa4a0dd346d272372403d10d7a4b5.scope.
Sep 30 17:39:51 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0346916fb648305196606f3c64f22f04faa78456515a3168a0adbb7cccbbd150/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0346916fb648305196606f3c64f22f04faa78456515a3168a0adbb7cccbbd150/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:51 compute-0 podman[94543]: 2025-09-30 17:39:51.664189523 +0000 UTC m=+0.109415733 container init baf2673676d4834152f3dae4fa788022590fa4a0dd346d272372403d10d7a4b5 (image=quay.io/ceph/ceph:v19, name=elated_chatelet, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:39:51 compute-0 podman[94543]: 2025-09-30 17:39:51.672084676 +0000 UTC m=+0.117310866 container start baf2673676d4834152f3dae4fa788022590fa4a0dd346d272372403d10d7a4b5 (image=quay.io/ceph/ceph:v19, name=elated_chatelet, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:39:51 compute-0 podman[94543]: 2025-09-30 17:39:51.578572035 +0000 UTC m=+0.023798245 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:39:51 compute-0 podman[94543]: 2025-09-30 17:39:51.675186386 +0000 UTC m=+0.120412576 container attach baf2673676d4834152f3dae4fa788022590fa4a0dd346d272372403d10d7a4b5 (image=quay.io/ceph/ceph:v19, name=elated_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:39:52 compute-0 ceph-mon[73755]: pgmap v11: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Sep 30 17:39:52 compute-0 ceph-mon[73755]: 5.15 deep-scrub starts
Sep 30 17:39:52 compute-0 ceph-mon[73755]: 5.15 deep-scrub ok
Sep 30 17:39:52 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1765872891' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Sep 30 17:39:52 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:52 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:52 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:52 compute-0 ceph-mon[73755]: 3.c scrub starts
Sep 30 17:39:52 compute-0 ceph-mon[73755]: 3.c scrub ok
Sep 30 17:39:52 compute-0 ceph-mon[73755]: 4.16 scrub starts
Sep 30 17:39:52 compute-0 ceph-mon[73755]: 4.16 scrub ok
Sep 30 17:39:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 17:39:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2559034079' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 17:39:52 compute-0 elated_chatelet[94558]: 
Sep 30 17:39:52 compute-0 elated_chatelet[94558]: {"epoch":2,"fsid":"63d32c6a-fa18-54ed-8711-9a3915cc367b","modified":"2025-09-30T17:37:59.709954Z","created":"2025-09-30T17:36:25.121133Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]}
Sep 30 17:39:52 compute-0 elated_chatelet[94558]: dumped monmap epoch 2
Sep 30 17:39:52 compute-0 systemd[1]: libpod-baf2673676d4834152f3dae4fa788022590fa4a0dd346d272372403d10d7a4b5.scope: Deactivated successfully.
Sep 30 17:39:52 compute-0 podman[94543]: 2025-09-30 17:39:52.181775089 +0000 UTC m=+0.627001299 container died baf2673676d4834152f3dae4fa788022590fa4a0dd346d272372403d10d7a4b5 (image=quay.io/ceph/ceph:v19, name=elated_chatelet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:39:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-0346916fb648305196606f3c64f22f04faa78456515a3168a0adbb7cccbbd150-merged.mount: Deactivated successfully.
Sep 30 17:39:52 compute-0 podman[94543]: 2025-09-30 17:39:52.341492447 +0000 UTC m=+0.786718647 container remove baf2673676d4834152f3dae4fa788022590fa4a0dd346d272372403d10d7a4b5 (image=quay.io/ceph/ceph:v19, name=elated_chatelet, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:39:52 compute-0 systemd[1]: libpod-conmon-baf2673676d4834152f3dae4fa788022590fa4a0dd346d272372403d10d7a4b5.scope: Deactivated successfully.
Sep 30 17:39:52 compute-0 sudo[94540]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v12: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 29 KiB/s rd, 0 B/s wr, 11 op/s
Sep 30 17:39:52 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Sep 30 17:39:52 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Sep 30 17:39:52 compute-0 sudo[94620]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agkpesoogdegudyypsovtbkayapbbzgi ; /usr/bin/python3'
Sep 30 17:39:52 compute-0 sudo[94620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:39:52 compute-0 python3[94622]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:39:52 compute-0 podman[94623]: 2025-09-30 17:39:52.906520166 +0000 UTC m=+0.038955556 container create 086833b09843e739c6df2399ef747aa6532cd9c0200d2fed66a7ac4a0b09f5ce (image=quay.io/ceph/ceph:v19, name=heuristic_driscoll, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Sep 30 17:39:52 compute-0 systemd[1]: Started libpod-conmon-086833b09843e739c6df2399ef747aa6532cd9c0200d2fed66a7ac4a0b09f5ce.scope.
Sep 30 17:39:52 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49cf341e9e3e8027ec67b93285388fc825b425e33fdd143dddd19e8ead9bcf16/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49cf341e9e3e8027ec67b93285388fc825b425e33fdd143dddd19e8ead9bcf16/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:52 compute-0 podman[94623]: 2025-09-30 17:39:52.968310419 +0000 UTC m=+0.100745809 container init 086833b09843e739c6df2399ef747aa6532cd9c0200d2fed66a7ac4a0b09f5ce (image=quay.io/ceph/ceph:v19, name=heuristic_driscoll, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:39:52 compute-0 podman[94623]: 2025-09-30 17:39:52.975397312 +0000 UTC m=+0.107832692 container start 086833b09843e739c6df2399ef747aa6532cd9c0200d2fed66a7ac4a0b09f5ce (image=quay.io/ceph/ceph:v19, name=heuristic_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Sep 30 17:39:52 compute-0 podman[94623]: 2025-09-30 17:39:52.979300892 +0000 UTC m=+0.111736302 container attach 086833b09843e739c6df2399ef747aa6532cd9c0200d2fed66a7ac4a0b09f5ce (image=quay.io/ceph/ceph:v19, name=heuristic_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Sep 30 17:39:52 compute-0 podman[94623]: 2025-09-30 17:39:52.890526753 +0000 UTC m=+0.022962163 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:39:53 compute-0 ceph-mon[73755]: Deploying daemon node-exporter.compute-1 on compute-1
Sep 30 17:39:53 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2559034079' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 17:39:53 compute-0 ceph-mon[73755]: 5.17 scrub starts
Sep 30 17:39:53 compute-0 ceph-mon[73755]: 5.1 scrub starts
Sep 30 17:39:53 compute-0 ceph-mon[73755]: 5.17 scrub ok
Sep 30 17:39:53 compute-0 ceph-mon[73755]: 5.1 scrub ok
Sep 30 17:39:53 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Sep 30 17:39:53 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/659304667' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Sep 30 17:39:53 compute-0 heuristic_driscoll[94639]: [client.openstack]
Sep 30 17:39:53 compute-0 heuristic_driscoll[94639]:         key = AQDxFNxoAAAAABAAAVqrvevrN1uM+kO3r0Scwg==
Sep 30 17:39:53 compute-0 heuristic_driscoll[94639]:         caps mgr = "allow *"
Sep 30 17:39:53 compute-0 heuristic_driscoll[94639]:         caps mon = "profile rbd"
Sep 30 17:39:53 compute-0 heuristic_driscoll[94639]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Sep 30 17:39:53 compute-0 systemd[1]: libpod-086833b09843e739c6df2399ef747aa6532cd9c0200d2fed66a7ac4a0b09f5ce.scope: Deactivated successfully.
Sep 30 17:39:53 compute-0 conmon[94639]: conmon 086833b09843e739c6df <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-086833b09843e739c6df2399ef747aa6532cd9c0200d2fed66a7ac4a0b09f5ce.scope/container/memory.events
Sep 30 17:39:53 compute-0 podman[94623]: 2025-09-30 17:39:53.407446182 +0000 UTC m=+0.539881572 container died 086833b09843e739c6df2399ef747aa6532cd9c0200d2fed66a7ac4a0b09f5ce (image=quay.io/ceph/ceph:v19, name=heuristic_driscoll, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Sep 30 17:39:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-49cf341e9e3e8027ec67b93285388fc825b425e33fdd143dddd19e8ead9bcf16-merged.mount: Deactivated successfully.
Sep 30 17:39:53 compute-0 podman[94623]: 2025-09-30 17:39:53.442917407 +0000 UTC m=+0.575352797 container remove 086833b09843e739c6df2399ef747aa6532cd9c0200d2fed66a7ac4a0b09f5ce (image=quay.io/ceph/ceph:v19, name=heuristic_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Sep 30 17:39:53 compute-0 systemd[1]: libpod-conmon-086833b09843e739c6df2399ef747aa6532cd9c0200d2fed66a7ac4a0b09f5ce.scope: Deactivated successfully.
Sep 30 17:39:53 compute-0 sudo[94620]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:53 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Sep 30 17:39:53 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Sep 30 17:39:54 compute-0 ceph-mon[73755]: pgmap v12: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 29 KiB/s rd, 0 B/s wr, 11 op/s
Sep 30 17:39:54 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/659304667' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Sep 30 17:39:54 compute-0 ceph-mon[73755]: 5.4 scrub starts
Sep 30 17:39:54 compute-0 ceph-mon[73755]: 3.12 scrub starts
Sep 30 17:39:54 compute-0 ceph-mon[73755]: 5.4 scrub ok
Sep 30 17:39:54 compute-0 ceph-mon[73755]: 3.12 scrub ok
Sep 30 17:39:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:39:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:39:54 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:39:54 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Sep 30 17:39:54 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:54 compute-0 ceph-mgr[74051]: [progress INFO root] complete: finished ev bcfb2cd8-64c1-4cf1-890b-6b5a5dd275de (Updating node-exporter deployment (+2 -> 2))
Sep 30 17:39:54 compute-0 ceph-mgr[74051]: [progress INFO root] Completed event bcfb2cd8-64c1-4cf1-890b-6b5a5dd275de (Updating node-exporter deployment (+2 -> 2)) in 5 seconds
Sep 30 17:39:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Sep 30 17:39:54 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 17:39:54 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:39:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 17:39:54 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:39:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:39:54 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:39:54 compute-0 sudo[94676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:39:54 compute-0 sudo[94676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:54 compute-0 sudo[94676]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:54 compute-0 sudo[94701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 17:39:54 compute-0 sudo[94701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v13: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s
Sep 30 17:39:54 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Sep 30 17:39:54 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Sep 30 17:39:54 compute-0 sudo[94911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stkmjigiafibtvszsigosenvvygiwchj ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759253994.430814-33886-73425647172457/async_wrapper.py j902112147448 30 /home/zuul/.ansible/tmp/ansible-tmp-1759253994.430814-33886-73425647172457/AnsiballZ_command.py _'
Sep 30 17:39:54 compute-0 sudo[94911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:39:54 compute-0 podman[94912]: 2025-09-30 17:39:54.776859143 +0000 UTC m=+0.038318349 container create 3f7a647e56445c7b5f625fc32a832232f71b8bd9e0d518345b6428a5bc424fa2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_swartz, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid)
Sep 30 17:39:54 compute-0 systemd[1]: Started libpod-conmon-3f7a647e56445c7b5f625fc32a832232f71b8bd9e0d518345b6428a5bc424fa2.scope.
Sep 30 17:39:54 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:54 compute-0 podman[94912]: 2025-09-30 17:39:54.840009951 +0000 UTC m=+0.101469177 container init 3f7a647e56445c7b5f625fc32a832232f71b8bd9e0d518345b6428a5bc424fa2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_swartz, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:39:54 compute-0 podman[94912]: 2025-09-30 17:39:54.845633376 +0000 UTC m=+0.107092582 container start 3f7a647e56445c7b5f625fc32a832232f71b8bd9e0d518345b6428a5bc424fa2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_swartz, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Sep 30 17:39:54 compute-0 nifty_swartz[94931]: 167 167
Sep 30 17:39:54 compute-0 podman[94912]: 2025-09-30 17:39:54.848800248 +0000 UTC m=+0.110259494 container attach 3f7a647e56445c7b5f625fc32a832232f71b8bd9e0d518345b6428a5bc424fa2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 17:39:54 compute-0 systemd[1]: libpod-3f7a647e56445c7b5f625fc32a832232f71b8bd9e0d518345b6428a5bc424fa2.scope: Deactivated successfully.
Sep 30 17:39:54 compute-0 podman[94912]: 2025-09-30 17:39:54.852035812 +0000 UTC m=+0.113495068 container died 3f7a647e56445c7b5f625fc32a832232f71b8bd9e0d518345b6428a5bc424fa2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:39:54 compute-0 podman[94912]: 2025-09-30 17:39:54.758718975 +0000 UTC m=+0.020178201 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:39:54 compute-0 ansible-async_wrapper.py[94915]: Invoked with j902112147448 30 /home/zuul/.ansible/tmp/ansible-tmp-1759253994.430814-33886-73425647172457/AnsiballZ_command.py _
Sep 30 17:39:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c1c89557469f1da0c01810f40beb180e4b618866d06256aa0c78d882879a173-merged.mount: Deactivated successfully.
Sep 30 17:39:54 compute-0 ansible-async_wrapper.py[94945]: Starting module and watcher
Sep 30 17:39:54 compute-0 ansible-async_wrapper.py[94945]: Start watching 94948 (30)
Sep 30 17:39:54 compute-0 ansible-async_wrapper.py[94948]: Start module (94948)
Sep 30 17:39:54 compute-0 ansible-async_wrapper.py[94915]: Return async_wrapper task started.
Sep 30 17:39:54 compute-0 podman[94912]: 2025-09-30 17:39:54.889392835 +0000 UTC m=+0.150852041 container remove 3f7a647e56445c7b5f625fc32a832232f71b8bd9e0d518345b6428a5bc424fa2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_swartz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Sep 30 17:39:54 compute-0 sudo[94911]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:54 compute-0 systemd[1]: libpod-conmon-3f7a647e56445c7b5f625fc32a832232f71b8bd9e0d518345b6428a5bc424fa2.scope: Deactivated successfully.
Sep 30 17:39:55 compute-0 python3[94949]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:39:55 compute-0 podman[94960]: 2025-09-30 17:39:55.036995601 +0000 UTC m=+0.042775354 container create ec91edb7d069fe3e8d9aab3f5d2f5be181f4bcf3d1e67003a7e2a2a27e53eb58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_wilson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid)
Sep 30 17:39:55 compute-0 podman[94973]: 2025-09-30 17:39:55.065797753 +0000 UTC m=+0.041762607 container create c0bfc7bbc12d10725e4d2517ebdcf6975b4b6dde3380da3b4951b293ad284194 (image=quay.io/ceph/ceph:v19, name=jolly_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:39:55 compute-0 systemd[1]: Started libpod-conmon-ec91edb7d069fe3e8d9aab3f5d2f5be181f4bcf3d1e67003a7e2a2a27e53eb58.scope.
Sep 30 17:39:55 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:55 compute-0 systemd[1]: Started libpod-conmon-c0bfc7bbc12d10725e4d2517ebdcf6975b4b6dde3380da3b4951b293ad284194.scope.
Sep 30 17:39:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a180073ab6139fc5101dc11b3ab9ab0169d51b77528480e6f679b98ac0124a75/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a180073ab6139fc5101dc11b3ab9ab0169d51b77528480e6f679b98ac0124a75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a180073ab6139fc5101dc11b3ab9ab0169d51b77528480e6f679b98ac0124a75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a180073ab6139fc5101dc11b3ab9ab0169d51b77528480e6f679b98ac0124a75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a180073ab6139fc5101dc11b3ab9ab0169d51b77528480e6f679b98ac0124a75/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:55 compute-0 podman[94960]: 2025-09-30 17:39:55.014773638 +0000 UTC m=+0.020553421 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:39:55 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98a41f1ba16b5adb0ec234192e7b1f6919664ab32d25f18a63b6046768b05131/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98a41f1ba16b5adb0ec234192e7b1f6919664ab32d25f18a63b6046768b05131/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:55 compute-0 podman[94960]: 2025-09-30 17:39:55.122839044 +0000 UTC m=+0.128618817 container init ec91edb7d069fe3e8d9aab3f5d2f5be181f4bcf3d1e67003a7e2a2a27e53eb58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_wilson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 17:39:55 compute-0 podman[94973]: 2025-09-30 17:39:55.129716172 +0000 UTC m=+0.105681046 container init c0bfc7bbc12d10725e4d2517ebdcf6975b4b6dde3380da3b4951b293ad284194 (image=quay.io/ceph/ceph:v19, name=jolly_albattani, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:39:55 compute-0 podman[94960]: 2025-09-30 17:39:55.134147556 +0000 UTC m=+0.139927309 container start ec91edb7d069fe3e8d9aab3f5d2f5be181f4bcf3d1e67003a7e2a2a27e53eb58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_wilson, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:39:55 compute-0 podman[94960]: 2025-09-30 17:39:55.136593769 +0000 UTC m=+0.142373532 container attach ec91edb7d069fe3e8d9aab3f5d2f5be181f4bcf3d1e67003a7e2a2a27e53eb58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_wilson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Sep 30 17:39:55 compute-0 podman[94973]: 2025-09-30 17:39:55.138445707 +0000 UTC m=+0.114410561 container start c0bfc7bbc12d10725e4d2517ebdcf6975b4b6dde3380da3b4951b293ad284194 (image=quay.io/ceph/ceph:v19, name=jolly_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Sep 30 17:39:55 compute-0 podman[94973]: 2025-09-30 17:39:55.046159967 +0000 UTC m=+0.022124851 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:39:55 compute-0 podman[94973]: 2025-09-30 17:39:55.145368345 +0000 UTC m=+0.121333229 container attach c0bfc7bbc12d10725e4d2517ebdcf6975b4b6dde3380da3b4951b293ad284194 (image=quay.io/ceph/ceph:v19, name=jolly_albattani, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:39:55 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:55 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:55 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:55 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:55 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:39:55 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:39:55 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:39:55 compute-0 ceph-mon[73755]: pgmap v13: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s
Sep 30 17:39:55 compute-0 ceph-mon[73755]: 5.14 scrub starts
Sep 30 17:39:55 compute-0 ceph-mon[73755]: 5.14 scrub ok
Sep 30 17:39:55 compute-0 ceph-mon[73755]: 3.5 scrub starts
Sep 30 17:39:55 compute-0 ceph-mon[73755]: 3.5 scrub ok
Sep 30 17:39:55 compute-0 magical_wilson[94991]: --> passed data devices: 0 physical, 1 LVM
Sep 30 17:39:55 compute-0 magical_wilson[94991]: --> All data devices are unavailable
Sep 30 17:39:55 compute-0 systemd[1]: libpod-ec91edb7d069fe3e8d9aab3f5d2f5be181f4bcf3d1e67003a7e2a2a27e53eb58.scope: Deactivated successfully.
Sep 30 17:39:55 compute-0 podman[94960]: 2025-09-30 17:39:55.444647492 +0000 UTC m=+0.450427255 container died ec91edb7d069fe3e8d9aab3f5d2f5be181f4bcf3d1e67003a7e2a2a27e53eb58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_wilson, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Sep 30 17:39:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-a180073ab6139fc5101dc11b3ab9ab0169d51b77528480e6f679b98ac0124a75-merged.mount: Deactivated successfully.
Sep 30 17:39:55 compute-0 podman[94960]: 2025-09-30 17:39:55.485410054 +0000 UTC m=+0.491189807 container remove ec91edb7d069fe3e8d9aab3f5d2f5be181f4bcf3d1e67003a7e2a2a27e53eb58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_wilson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 17:39:55 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.14418 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Sep 30 17:39:55 compute-0 jolly_albattani[94996]: 
Sep 30 17:39:55 compute-0 jolly_albattani[94996]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Sep 30 17:39:55 compute-0 systemd[1]: libpod-conmon-ec91edb7d069fe3e8d9aab3f5d2f5be181f4bcf3d1e67003a7e2a2a27e53eb58.scope: Deactivated successfully.
Sep 30 17:39:55 compute-0 systemd[1]: libpod-c0bfc7bbc12d10725e4d2517ebdcf6975b4b6dde3380da3b4951b293ad284194.scope: Deactivated successfully.
Sep 30 17:39:55 compute-0 podman[94973]: 2025-09-30 17:39:55.513639171 +0000 UTC m=+0.489604035 container died c0bfc7bbc12d10725e4d2517ebdcf6975b4b6dde3380da3b4951b293ad284194 (image=quay.io/ceph/ceph:v19, name=jolly_albattani, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:39:55 compute-0 sudo[94701]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:55 compute-0 podman[94973]: 2025-09-30 17:39:55.543955833 +0000 UTC m=+0.519920687 container remove c0bfc7bbc12d10725e4d2517ebdcf6975b4b6dde3380da3b4951b293ad284194 (image=quay.io/ceph/ceph:v19, name=jolly_albattani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:39:55 compute-0 systemd[1]: libpod-conmon-c0bfc7bbc12d10725e4d2517ebdcf6975b4b6dde3380da3b4951b293ad284194.scope: Deactivated successfully.
Sep 30 17:39:55 compute-0 ansible-async_wrapper.py[94948]: Module complete (94948)
Sep 30 17:39:55 compute-0 sudo[95051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:39:55 compute-0 sudo[95051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:55 compute-0 sudo[95051]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:55 compute-0 sudo[95080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 17:39:55 compute-0 sudo[95080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:55 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Sep 30 17:39:55 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Sep 30 17:39:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-98a41f1ba16b5adb0ec234192e7b1f6919664ab32d25f18a63b6046768b05131-merged.mount: Deactivated successfully.
Sep 30 17:39:55 compute-0 podman[95146]: 2025-09-30 17:39:55.968166642 +0000 UTC m=+0.036964695 container create c1bf8a2817b244ac8f93475b2b25024789d6d4f7fc36489cd411cc6f5518ebbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Sep 30 17:39:55 compute-0 systemd[1]: Started libpod-conmon-c1bf8a2817b244ac8f93475b2b25024789d6d4f7fc36489cd411cc6f5518ebbe.scope.
Sep 30 17:39:56 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:56 compute-0 sudo[95209]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgguircbqivmduuyzcnrebavsahmvnxc ; /usr/bin/python3'
Sep 30 17:39:56 compute-0 sudo[95209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:39:56 compute-0 podman[95146]: 2025-09-30 17:39:56.030247372 +0000 UTC m=+0.099045455 container init c1bf8a2817b244ac8f93475b2b25024789d6d4f7fc36489cd411cc6f5518ebbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_bose, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:39:56 compute-0 podman[95146]: 2025-09-30 17:39:56.035831446 +0000 UTC m=+0.104629499 container start c1bf8a2817b244ac8f93475b2b25024789d6d4f7fc36489cd411cc6f5518ebbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:39:56 compute-0 podman[95146]: 2025-09-30 17:39:56.03908691 +0000 UTC m=+0.107885013 container attach c1bf8a2817b244ac8f93475b2b25024789d6d4f7fc36489cd411cc6f5518ebbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Sep 30 17:39:56 compute-0 adoring_bose[95205]: 167 167
Sep 30 17:39:56 compute-0 systemd[1]: libpod-c1bf8a2817b244ac8f93475b2b25024789d6d4f7fc36489cd411cc6f5518ebbe.scope: Deactivated successfully.
Sep 30 17:39:56 compute-0 podman[95146]: 2025-09-30 17:39:56.040837615 +0000 UTC m=+0.109635678 container died c1bf8a2817b244ac8f93475b2b25024789d6d4f7fc36489cd411cc6f5518ebbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_bose, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Sep 30 17:39:56 compute-0 podman[95146]: 2025-09-30 17:39:55.952546189 +0000 UTC m=+0.021344262 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:39:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b7495d8b430dbcc1b8b5ad7ec2198de82b7c003df56b0699e1c713a03d1b55a-merged.mount: Deactivated successfully.
Sep 30 17:39:56 compute-0 podman[95146]: 2025-09-30 17:39:56.08754603 +0000 UTC m=+0.156344093 container remove c1bf8a2817b244ac8f93475b2b25024789d6d4f7fc36489cd411cc6f5518ebbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_bose, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1)
Sep 30 17:39:56 compute-0 systemd[1]: libpod-conmon-c1bf8a2817b244ac8f93475b2b25024789d6d4f7fc36489cd411cc6f5518ebbe.scope: Deactivated successfully.
Sep 30 17:39:56 compute-0 python3[95212]: ansible-ansible.legacy.async_status Invoked with jid=j902112147448.94915 mode=status _async_dir=/root/.ansible_async
Sep 30 17:39:56 compute-0 sudo[95209]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:56 compute-0 podman[95233]: 2025-09-30 17:39:56.239441605 +0000 UTC m=+0.050133404 container create d12a1b55c7e1c92f1600e28ccaf4f3e07ad649d0ac42bd7f9c4589c5f485f40b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mendel, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Sep 30 17:39:56 compute-0 systemd[1]: Started libpod-conmon-d12a1b55c7e1c92f1600e28ccaf4f3e07ad649d0ac42bd7f9c4589c5f485f40b.scope.
Sep 30 17:39:56 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:56 compute-0 sudo[95297]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzevztylcekqqqtesrfncfxarjdmdlqd ; /usr/bin/python3'
Sep 30 17:39:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/404e65517baac77fa96e15f82558c2be8257a8694d593dfa44d69c4f63291abc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/404e65517baac77fa96e15f82558c2be8257a8694d593dfa44d69c4f63291abc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/404e65517baac77fa96e15f82558c2be8257a8694d593dfa44d69c4f63291abc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/404e65517baac77fa96e15f82558c2be8257a8694d593dfa44d69c4f63291abc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:56 compute-0 podman[95233]: 2025-09-30 17:39:56.210663183 +0000 UTC m=+0.021354991 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:39:56 compute-0 sudo[95297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:39:56 compute-0 podman[95233]: 2025-09-30 17:39:56.319981542 +0000 UTC m=+0.130673380 container init d12a1b55c7e1c92f1600e28ccaf4f3e07ad649d0ac42bd7f9c4589c5f485f40b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mendel, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1)
Sep 30 17:39:56 compute-0 podman[95233]: 2025-09-30 17:39:56.327963318 +0000 UTC m=+0.138655106 container start d12a1b55c7e1c92f1600e28ccaf4f3e07ad649d0ac42bd7f9c4589c5f485f40b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mendel, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Sep 30 17:39:56 compute-0 podman[95233]: 2025-09-30 17:39:56.330944005 +0000 UTC m=+0.141635843 container attach d12a1b55c7e1c92f1600e28ccaf4f3e07ad649d0ac42bd7f9c4589c5f485f40b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mendel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:39:56 compute-0 ceph-mon[73755]: from='client.14418 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Sep 30 17:39:56 compute-0 ceph-mon[73755]: 4.14 scrub starts
Sep 30 17:39:56 compute-0 ceph-mon[73755]: 4.14 scrub ok
Sep 30 17:39:56 compute-0 ceph-mon[73755]: 3.9 scrub starts
Sep 30 17:39:56 compute-0 ceph-mon[73755]: 3.9 scrub ok
Sep 30 17:39:56 compute-0 python3[95299]: ansible-ansible.legacy.async_status Invoked with jid=j902112147448.94915 mode=cleanup _async_dir=/root/.ansible_async
Sep 30 17:39:56 compute-0 sudo[95297]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v14: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 0 B/s wr, 7 op/s
Sep 30 17:39:56 compute-0 condescending_mendel[95291]: {
Sep 30 17:39:56 compute-0 condescending_mendel[95291]:     "0": [
Sep 30 17:39:56 compute-0 condescending_mendel[95291]:         {
Sep 30 17:39:56 compute-0 condescending_mendel[95291]:             "devices": [
Sep 30 17:39:56 compute-0 condescending_mendel[95291]:                 "/dev/loop3"
Sep 30 17:39:56 compute-0 condescending_mendel[95291]:             ],
Sep 30 17:39:56 compute-0 condescending_mendel[95291]:             "lv_name": "ceph_lv0",
Sep 30 17:39:56 compute-0 condescending_mendel[95291]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:39:56 compute-0 condescending_mendel[95291]:             "lv_size": "21470642176",
Sep 30 17:39:56 compute-0 condescending_mendel[95291]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 17:39:56 compute-0 condescending_mendel[95291]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:39:56 compute-0 condescending_mendel[95291]:             "name": "ceph_lv0",
Sep 30 17:39:56 compute-0 condescending_mendel[95291]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:39:56 compute-0 condescending_mendel[95291]:             "tags": {
Sep 30 17:39:56 compute-0 condescending_mendel[95291]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:39:56 compute-0 condescending_mendel[95291]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:39:56 compute-0 condescending_mendel[95291]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 17:39:56 compute-0 condescending_mendel[95291]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 17:39:56 compute-0 condescending_mendel[95291]:                 "ceph.cluster_name": "ceph",
Sep 30 17:39:56 compute-0 condescending_mendel[95291]:                 "ceph.crush_device_class": "",
Sep 30 17:39:56 compute-0 condescending_mendel[95291]:                 "ceph.encrypted": "0",
Sep 30 17:39:56 compute-0 condescending_mendel[95291]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 17:39:56 compute-0 condescending_mendel[95291]:                 "ceph.osd_id": "0",
Sep 30 17:39:56 compute-0 condescending_mendel[95291]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 17:39:56 compute-0 condescending_mendel[95291]:                 "ceph.type": "block",
Sep 30 17:39:56 compute-0 condescending_mendel[95291]:                 "ceph.vdo": "0",
Sep 30 17:39:56 compute-0 condescending_mendel[95291]:                 "ceph.with_tpm": "0"
Sep 30 17:39:56 compute-0 condescending_mendel[95291]:             },
Sep 30 17:39:56 compute-0 condescending_mendel[95291]:             "type": "block",
Sep 30 17:39:56 compute-0 condescending_mendel[95291]:             "vg_name": "ceph_vg0"
Sep 30 17:39:56 compute-0 condescending_mendel[95291]:         }
Sep 30 17:39:56 compute-0 condescending_mendel[95291]:     ]
Sep 30 17:39:56 compute-0 condescending_mendel[95291]: }
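[editor's note] The JSON block above is the output of the `ceph-volume ... lvm list --format json` call issued via the cephadm wrapper a moment earlier (the sudo line for PID 95080). It maps OSD IDs to the logical volumes backing them; here the only candidate LV, `/dev/ceph_vg0/ceph_lv0`, already carries OSD 0, which is consistent with the earlier `magical_wilson` report that all data devices are unavailable. A minimal parsing sketch, assuming the JSON has been captured to a local file (`lvm_list.json` is a hypothetical name, not taken from the log):

```python
import json

# Hypothetical capture of the "ceph-volume lvm list --format json" output
# shown in the log above; the file name is an assumption for illustration.
with open("lvm_list.json") as fh:
    lvm_report = json.load(fh)

# Top-level keys are OSD IDs ("0" here); each maps to a list of LV records.
for osd_id, volumes in lvm_report.items():
    for vol in volumes:
        tags = vol.get("tags", {})
        print(
            f"osd.{osd_id}: lv={vol['lv_path']} "
            f"devices={','.join(vol['devices'])} "
            f"cluster={tags.get('ceph.cluster_fsid', '?')} "
            f"encrypted={tags.get('ceph.encrypted', '0')}"
        )
```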
Sep 30 17:39:56 compute-0 systemd[1]: libpod-d12a1b55c7e1c92f1600e28ccaf4f3e07ad649d0ac42bd7f9c4589c5f485f40b.scope: Deactivated successfully.
Sep 30 17:39:56 compute-0 podman[95233]: 2025-09-30 17:39:56.610243127 +0000 UTC m=+0.420934925 container died d12a1b55c7e1c92f1600e28ccaf4f3e07ad649d0ac42bd7f9c4589c5f485f40b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mendel, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Sep 30 17:39:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-404e65517baac77fa96e15f82558c2be8257a8694d593dfa44d69c4f63291abc-merged.mount: Deactivated successfully.
Sep 30 17:39:56 compute-0 podman[95233]: 2025-09-30 17:39:56.657973708 +0000 UTC m=+0.468665496 container remove d12a1b55c7e1c92f1600e28ccaf4f3e07ad649d0ac42bd7f9c4589c5f485f40b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:39:56 compute-0 systemd[1]: libpod-conmon-d12a1b55c7e1c92f1600e28ccaf4f3e07ad649d0ac42bd7f9c4589c5f485f40b.scope: Deactivated successfully.
Sep 30 17:39:56 compute-0 sudo[95080]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:56 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 5.f scrub starts
Sep 30 17:39:56 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 5.f scrub ok
Sep 30 17:39:56 compute-0 sudo[95317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:39:56 compute-0 sudo[95317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:56 compute-0 sudo[95317]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:56 compute-0 sudo[95342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 17:39:56 compute-0 sudo[95342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:39:56 compute-0 sudo[95390]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhhanmnfouyopopqgakedmmuqfwqsyaq ; /usr/bin/python3'
Sep 30 17:39:56 compute-0 sudo[95390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:39:57 compute-0 python3[95392]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
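[editor's note] This Ansible command task shells out to podman to run `ceph orch status --format json` inside the ceph:v19 image; the same pattern produced the `{"available": true, "backend": "cephadm", "paused": false, "workers": 10}` reply from `jolly_albattani` above. A rough sketch of that pattern in Python, with the image, fsid, and mounts copied from the logged command line (the extra `assimilate_ceph.conf` volume is dropped for brevity, and the wrapper function name is an assumption):

```python
import json
import subprocess

def cephadm_orch_status(fsid: str) -> dict:
    """Run `ceph orch status --format json` the way the logged task does:
    in a throwaway podman container, using the cluster admin keyring."""
    cmd = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
        "--fsid", fsid,
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "orch", "status", "--format", "json",
    ]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    return json.loads(out)

# fsid taken from the logged command; expect output like
# {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
status = cephadm_orch_status("63d32c6a-fa18-54ed-8711-9a3915cc367b")
print(status["available"], status["backend"])
```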
Sep 30 17:39:57 compute-0 podman[95400]: 2025-09-30 17:39:57.064737286 +0000 UTC m=+0.049943499 container create d67679a2df9d8f56ed666a53b4340d10c87d25776a08c2d826622616898257e6 (image=quay.io/ceph/ceph:v19, name=zen_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:39:57 compute-0 systemd[1]: Started libpod-conmon-d67679a2df9d8f56ed666a53b4340d10c87d25776a08c2d826622616898257e6.scope.
Sep 30 17:39:57 compute-0 podman[95400]: 2025-09-30 17:39:57.03695151 +0000 UTC m=+0.022157813 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:39:57 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cbbd72f305ad6606a452077a2990c39e72a2ca9fd828e2cf131cca622aaa580/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cbbd72f305ad6606a452077a2990c39e72a2ca9fd828e2cf131cca622aaa580/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:57 compute-0 podman[95400]: 2025-09-30 17:39:57.149639485 +0000 UTC m=+0.134845708 container init d67679a2df9d8f56ed666a53b4340d10c87d25776a08c2d826622616898257e6 (image=quay.io/ceph/ceph:v19, name=zen_ganguly, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Sep 30 17:39:57 compute-0 podman[95400]: 2025-09-30 17:39:57.155969408 +0000 UTC m=+0.141175631 container start d67679a2df9d8f56ed666a53b4340d10c87d25776a08c2d826622616898257e6 (image=quay.io/ceph/ceph:v19, name=zen_ganguly, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 17:39:57 compute-0 podman[95400]: 2025-09-30 17:39:57.159163561 +0000 UTC m=+0.144369774 container attach d67679a2df9d8f56ed666a53b4340d10c87d25776a08c2d826622616898257e6 (image=quay.io/ceph/ceph:v19, name=zen_ganguly, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:39:57 compute-0 podman[95451]: 2025-09-30 17:39:57.230839019 +0000 UTC m=+0.046590822 container create f9a7370968ca146cfcd6c1bcc6730f7eebd1f2e7551c37e7d99048fd13da7b19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 17:39:57 compute-0 systemd[1]: Started libpod-conmon-f9a7370968ca146cfcd6c1bcc6730f7eebd1f2e7551c37e7d99048fd13da7b19.scope.
Sep 30 17:39:57 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:57 compute-0 podman[95451]: 2025-09-30 17:39:57.298066683 +0000 UTC m=+0.113818516 container init f9a7370968ca146cfcd6c1bcc6730f7eebd1f2e7551c37e7d99048fd13da7b19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid)
Sep 30 17:39:57 compute-0 podman[95451]: 2025-09-30 17:39:57.207135598 +0000 UTC m=+0.022887421 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:39:57 compute-0 podman[95451]: 2025-09-30 17:39:57.305519475 +0000 UTC m=+0.121271278 container start f9a7370968ca146cfcd6c1bcc6730f7eebd1f2e7551c37e7d99048fd13da7b19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:39:57 compute-0 jolly_wright[95469]: 167 167
Sep 30 17:39:57 compute-0 systemd[1]: libpod-f9a7370968ca146cfcd6c1bcc6730f7eebd1f2e7551c37e7d99048fd13da7b19.scope: Deactivated successfully.
Sep 30 17:39:57 compute-0 podman[95451]: 2025-09-30 17:39:57.309683842 +0000 UTC m=+0.125435635 container attach f9a7370968ca146cfcd6c1bcc6730f7eebd1f2e7551c37e7d99048fd13da7b19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:39:57 compute-0 conmon[95469]: conmon f9a7370968ca146cfcd6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f9a7370968ca146cfcd6c1bcc6730f7eebd1f2e7551c37e7d99048fd13da7b19.scope/container/memory.events
Sep 30 17:39:57 compute-0 podman[95451]: 2025-09-30 17:39:57.310575175 +0000 UTC m=+0.126326978 container died f9a7370968ca146cfcd6c1bcc6730f7eebd1f2e7551c37e7d99048fd13da7b19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_wright, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:39:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd77a99c13b248c6261487347b20c45ca0978a81660ec283e0dfab988133e0fb-merged.mount: Deactivated successfully.
Sep 30 17:39:57 compute-0 podman[95451]: 2025-09-30 17:39:57.347638171 +0000 UTC m=+0.163389974 container remove f9a7370968ca146cfcd6c1bcc6730f7eebd1f2e7551c37e7d99048fd13da7b19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_wright, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Sep 30 17:39:57 compute-0 ceph-mon[73755]: pgmap v14: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 0 B/s wr, 7 op/s
Sep 30 17:39:57 compute-0 ceph-mon[73755]: 5.12 scrub starts
Sep 30 17:39:57 compute-0 ceph-mon[73755]: 5.12 scrub ok
Sep 30 17:39:57 compute-0 ceph-mon[73755]: 5.f scrub starts
Sep 30 17:39:57 compute-0 ceph-mon[73755]: 5.f scrub ok
Sep 30 17:39:57 compute-0 systemd[1]: libpod-conmon-f9a7370968ca146cfcd6c1bcc6730f7eebd1f2e7551c37e7d99048fd13da7b19.scope: Deactivated successfully.
Sep 30 17:39:57 compute-0 podman[95511]: 2025-09-30 17:39:57.499659591 +0000 UTC m=+0.047207239 container create 2001fdacd7d3c88cabde6a96a462d5271e16ca5ac049dccc9c45445030121c4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 17:39:57 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.14422 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Sep 30 17:39:57 compute-0 zen_ganguly[95444]: 
Sep 30 17:39:57 compute-0 zen_ganguly[95444]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Sep 30 17:39:57 compute-0 systemd[1]: libpod-d67679a2df9d8f56ed666a53b4340d10c87d25776a08c2d826622616898257e6.scope: Deactivated successfully.
Sep 30 17:39:57 compute-0 podman[95400]: 2025-09-30 17:39:57.552638157 +0000 UTC m=+0.537844370 container died d67679a2df9d8f56ed666a53b4340d10c87d25776a08c2d826622616898257e6 (image=quay.io/ceph/ceph:v19, name=zen_ganguly, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:39:57 compute-0 ceph-mgr[74051]: [progress INFO root] Writing back 11 completed events
Sep 30 17:39:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Sep 30 17:39:57 compute-0 podman[95511]: 2025-09-30 17:39:57.474255986 +0000 UTC m=+0.021803654 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:39:57 compute-0 systemd[1]: Started libpod-conmon-2001fdacd7d3c88cabde6a96a462d5271e16ca5ac049dccc9c45445030121c4e.scope.
Sep 30 17:39:57 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b8f00b87501c42c9c6ac6ed7f669e713d0fcc39cb231502a45325f72be324b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b8f00b87501c42c9c6ac6ed7f669e713d0fcc39cb231502a45325f72be324b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b8f00b87501c42c9c6ac6ed7f669e713d0fcc39cb231502a45325f72be324b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b8f00b87501c42c9c6ac6ed7f669e713d0fcc39cb231502a45325f72be324b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:57 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:57 compute-0 podman[95511]: 2025-09-30 17:39:57.653285022 +0000 UTC m=+0.200832690 container init 2001fdacd7d3c88cabde6a96a462d5271e16ca5ac049dccc9c45445030121c4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_einstein, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:39:57 compute-0 podman[95511]: 2025-09-30 17:39:57.660473977 +0000 UTC m=+0.208021655 container start 2001fdacd7d3c88cabde6a96a462d5271e16ca5ac049dccc9c45445030121c4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Sep 30 17:39:57 compute-0 podman[95511]: 2025-09-30 17:39:57.671165583 +0000 UTC m=+0.218713271 container attach 2001fdacd7d3c88cabde6a96a462d5271e16ca5ac049dccc9c45445030121c4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:39:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-9cbbd72f305ad6606a452077a2990c39e72a2ca9fd828e2cf131cca622aaa580-merged.mount: Deactivated successfully.
Sep 30 17:39:57 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Sep 30 17:39:57 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Sep 30 17:39:57 compute-0 podman[95528]: 2025-09-30 17:39:57.733632214 +0000 UTC m=+0.174972593 container remove d67679a2df9d8f56ed666a53b4340d10c87d25776a08c2d826622616898257e6 (image=quay.io/ceph/ceph:v19, name=zen_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:39:57 compute-0 systemd[1]: libpod-conmon-d67679a2df9d8f56ed666a53b4340d10c87d25776a08c2d826622616898257e6.scope: Deactivated successfully.
Sep 30 17:39:57 compute-0 sudo[95390]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:58 compute-0 lvm[95619]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 17:39:58 compute-0 lvm[95619]: VG ceph_vg0 finished
Sep 30 17:39:58 compute-0 hungry_einstein[95541]: {}
Sep 30 17:39:58 compute-0 systemd[1]: libpod-2001fdacd7d3c88cabde6a96a462d5271e16ca5ac049dccc9c45445030121c4e.scope: Deactivated successfully.
Sep 30 17:39:58 compute-0 podman[95511]: 2025-09-30 17:39:58.358878706 +0000 UTC m=+0.906426354 container died 2001fdacd7d3c88cabde6a96a462d5271e16ca5ac049dccc9c45445030121c4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_einstein, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:39:58 compute-0 systemd[1]: libpod-2001fdacd7d3c88cabde6a96a462d5271e16ca5ac049dccc9c45445030121c4e.scope: Consumed 1.028s CPU time.
Sep 30 17:39:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b8f00b87501c42c9c6ac6ed7f669e713d0fcc39cb231502a45325f72be324b5-merged.mount: Deactivated successfully.
Sep 30 17:39:58 compute-0 podman[95511]: 2025-09-30 17:39:58.406606987 +0000 UTC m=+0.954154635 container remove 2001fdacd7d3c88cabde6a96a462d5271e16ca5ac049dccc9c45445030121c4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:39:58 compute-0 systemd[1]: libpod-conmon-2001fdacd7d3c88cabde6a96a462d5271e16ca5ac049dccc9c45445030121c4e.scope: Deactivated successfully.
Sep 30 17:39:58 compute-0 sudo[95342]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:39:58 compute-0 sudo[95658]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcucljvsifnkohakaqltywdfxuttuhjl ; /usr/bin/python3'
Sep 30 17:39:58 compute-0 sudo[95658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:39:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v15: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
Sep 30 17:39:58 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:39:58 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:58 compute-0 ceph-mgr[74051]: [progress INFO root] update: starting ev aa5d9513-854e-4d74-a139-bb18a0ff1a3b (Updating rgw.rgw deployment (+2 -> 2))
Sep 30 17:39:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.csizwd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Sep 30 17:39:58 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.csizwd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Sep 30 17:39:58 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.csizwd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Sep 30 17:39:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Sep 30 17:39:58 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:39:58 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:39:58 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.csizwd on compute-1
Sep 30 17:39:58 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.csizwd on compute-1
Sep 30 17:39:58 compute-0 python3[95660]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:39:58 compute-0 ceph-mon[73755]: from='client.14422 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Sep 30 17:39:58 compute-0 ceph-mon[73755]: 4.12 scrub starts
Sep 30 17:39:58 compute-0 ceph-mon[73755]: 4.12 scrub ok
Sep 30 17:39:58 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:58 compute-0 ceph-mon[73755]: 4.1 scrub starts
Sep 30 17:39:58 compute-0 ceph-mon[73755]: 4.1 scrub ok
Sep 30 17:39:58 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:58 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:58 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.csizwd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Sep 30 17:39:58 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.csizwd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Sep 30 17:39:58 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:39:58 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:39:58 compute-0 podman[95661]: 2025-09-30 17:39:58.667262808 +0000 UTC m=+0.036642806 container create 2d81d8c766f15092a55eaabce427c5d706ad6b11aaffd43b1a50510fe5bd0fa2 (image=quay.io/ceph/ceph:v19, name=jovial_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:39:58 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 5.e scrub starts
Sep 30 17:39:58 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 5.e scrub ok
Sep 30 17:39:58 compute-0 systemd[1]: Started libpod-conmon-2d81d8c766f15092a55eaabce427c5d706ad6b11aaffd43b1a50510fe5bd0fa2.scope.
Sep 30 17:39:58 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:39:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7edab2063870aa45e750d1ff1afa2445f1c622843856f4d8d5ad8f9b02a38a03/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7edab2063870aa45e750d1ff1afa2445f1c622843856f4d8d5ad8f9b02a38a03/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:39:58 compute-0 podman[95661]: 2025-09-30 17:39:58.740545927 +0000 UTC m=+0.109925945 container init 2d81d8c766f15092a55eaabce427c5d706ad6b11aaffd43b1a50510fe5bd0fa2 (image=quay.io/ceph/ceph:v19, name=jovial_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Sep 30 17:39:58 compute-0 podman[95661]: 2025-09-30 17:39:58.747406214 +0000 UTC m=+0.116786212 container start 2d81d8c766f15092a55eaabce427c5d706ad6b11aaffd43b1a50510fe5bd0fa2 (image=quay.io/ceph/ceph:v19, name=jovial_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:39:58 compute-0 podman[95661]: 2025-09-30 17:39:58.651624065 +0000 UTC m=+0.021004083 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:39:58 compute-0 podman[95661]: 2025-09-30 17:39:58.750646838 +0000 UTC m=+0.120026856 container attach 2d81d8c766f15092a55eaabce427c5d706ad6b11aaffd43b1a50510fe5bd0fa2 (image=quay.io/ceph/ceph:v19, name=jovial_bhaskara, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Sep 30 17:39:59 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.14426 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Sep 30 17:39:59 compute-0 jovial_bhaskara[95676]: 
Sep 30 17:39:59 compute-0 jovial_bhaskara[95676]: [{"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "alertmanager", "service_type": "alertmanager"}, {"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}}, {"placement": {"hosts": ["compute-0", "compute-1"]}, "service_id": "nfs.cephfs", "service_name": "ingress.nfs.cephfs", "service_type": "ingress", "spec": {"backend_service": "nfs.cephfs", "enable_haproxy_protocol": true, "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9049, "virtual_ip": "192.168.122.2/24"}}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1"]}, "service_id": "cephfs", "service_name": "nfs.cephfs", "service_type": "nfs", "spec": {"enable_haproxy_protocol": true, "port": 12049}}, {"placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter"}, {"placement": {"hosts": ["compute-0", "compute-1"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "prometheus", "service_type": "prometheus"}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Sep 30 17:39:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:39:59 compute-0 systemd[1]: libpod-2d81d8c766f15092a55eaabce427c5d706ad6b11aaffd43b1a50510fe5bd0fa2.scope: Deactivated successfully.
Sep 30 17:39:59 compute-0 podman[95661]: 2025-09-30 17:39:59.142616785 +0000 UTC m=+0.511996783 container died 2d81d8c766f15092a55eaabce427c5d706ad6b11aaffd43b1a50510fe5bd0fa2 (image=quay.io/ceph/ceph:v19, name=jovial_bhaskara, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:39:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-7edab2063870aa45e750d1ff1afa2445f1c622843856f4d8d5ad8f9b02a38a03-merged.mount: Deactivated successfully.
Sep 30 17:39:59 compute-0 podman[95661]: 2025-09-30 17:39:59.180223295 +0000 UTC m=+0.549603293 container remove 2d81d8c766f15092a55eaabce427c5d706ad6b11aaffd43b1a50510fe5bd0fa2 (image=quay.io/ceph/ceph:v19, name=jovial_bhaskara, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Sep 30 17:39:59 compute-0 systemd[1]: libpod-conmon-2d81d8c766f15092a55eaabce427c5d706ad6b11aaffd43b1a50510fe5bd0fa2.scope: Deactivated successfully.
Sep 30 17:39:59 compute-0 sudo[95658]: pam_unix(sudo:session): session closed for user root
Sep 30 17:39:59 compute-0 ceph-mon[73755]: pgmap v15: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
Sep 30 17:39:59 compute-0 ceph-mon[73755]: Deploying daemon rgw.rgw.compute-1.csizwd on compute-1
Sep 30 17:39:59 compute-0 ceph-mon[73755]: 5.13 scrub starts
Sep 30 17:39:59 compute-0 ceph-mon[73755]: 5.13 scrub ok
Sep 30 17:39:59 compute-0 ceph-mon[73755]: 5.e scrub starts
Sep 30 17:39:59 compute-0 ceph-mon[73755]: 5.e scrub ok
Sep 30 17:39:59 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 4.c scrub starts
Sep 30 17:39:59 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 4.c scrub ok
Sep 30 17:39:59 compute-0 ansible-async_wrapper.py[94945]: Done in kid B.
Sep 30 17:39:59 compute-0 sudo[95735]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnzwshdqhmteohtgeuyptqfyuonlkfim ; /usr/bin/python3'
Sep 30 17:39:59 compute-0 sudo[95735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:40:00 compute-0 ceph-mon[73755]: log_channel(cluster) log [ERR] : Health detail: HEALTH_ERR 1 OSD(s) experiencing slow operations in BlueStore; 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds
Sep 30 17:40:00 compute-0 ceph-mon[73755]: log_channel(cluster) log [ERR] : [WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
Sep 30 17:40:00 compute-0 ceph-mon[73755]: log_channel(cluster) log [ERR] :      osd.1 observed slow operation indications in BlueStore
Sep 30 17:40:00 compute-0 ceph-mon[73755]: log_channel(cluster) log [ERR] : [ERR] MDS_ALL_DOWN: 1 filesystem is offline
Sep 30 17:40:00 compute-0 ceph-mon[73755]: log_channel(cluster) log [ERR] :     fs cephfs is offline because no MDS is active for it.
Sep 30 17:40:00 compute-0 ceph-mon[73755]: log_channel(cluster) log [ERR] : [WRN] MDS_UP_LESS_THAN_MAX: 1 filesystem is online with fewer MDS than max_mds
Sep 30 17:40:00 compute-0 ceph-mon[73755]: log_channel(cluster) log [ERR] :     fs cephfs has 0 MDS online, but wants 1
Sep 30 17:40:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0[73751]: 2025-09-30T17:39:59.998+0000 7fe05422f640 -1 log_channel(cluster) log [ERR] : Health detail: HEALTH_ERR 1 OSD(s) experiencing slow operations in BlueStore; 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds
Sep 30 17:40:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0[73751]: 2025-09-30T17:39:59.998+0000 7fe05422f640 -1 log_channel(cluster) log [ERR] : [WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
Sep 30 17:40:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0[73751]: 2025-09-30T17:39:59.998+0000 7fe05422f640 -1 log_channel(cluster) log [ERR] :      osd.1 observed slow operation indications in BlueStore
Sep 30 17:40:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0[73751]: 2025-09-30T17:39:59.998+0000 7fe05422f640 -1 log_channel(cluster) log [ERR] : [ERR] MDS_ALL_DOWN: 1 filesystem is offline
Sep 30 17:40:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0[73751]: 2025-09-30T17:39:59.998+0000 7fe05422f640 -1 log_channel(cluster) log [ERR] :     fs cephfs is offline because no MDS is active for it.
Sep 30 17:40:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0[73751]: 2025-09-30T17:39:59.998+0000 7fe05422f640 -1 log_channel(cluster) log [ERR] : [WRN] MDS_UP_LESS_THAN_MAX: 1 filesystem is online with fewer MDS than max_mds
Sep 30 17:40:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0[73751]: 2025-09-30T17:39:59.998+0000 7fe05422f640 -1 log_channel(cluster) log [ERR] :     fs cephfs has 0 MDS online, but wants 1
Sep 30 17:40:00 compute-0 python3[95737]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:40:00 compute-0 podman[95738]: 2025-09-30 17:40:00.140441263 +0000 UTC m=+0.039795427 container create a69976aa8cbfe30d1d2126ecad4b46456c685fc2bf9e7d04e6955c7407c8bf22 (image=quay.io/ceph/ceph:v19, name=nice_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Sep 30 17:40:00 compute-0 systemd[1]: Started libpod-conmon-a69976aa8cbfe30d1d2126ecad4b46456c685fc2bf9e7d04e6955c7407c8bf22.scope.
Sep 30 17:40:00 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:40:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58d717d9479523c05c628d4a208b6d36066fd9ae20ca510d00eaf033d3064fda/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58d717d9479523c05c628d4a208b6d36066fd9ae20ca510d00eaf033d3064fda/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:00 compute-0 podman[95738]: 2025-09-30 17:40:00.214901433 +0000 UTC m=+0.114255607 container init a69976aa8cbfe30d1d2126ecad4b46456c685fc2bf9e7d04e6955c7407c8bf22 (image=quay.io/ceph/ceph:v19, name=nice_goodall, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Sep 30 17:40:00 compute-0 podman[95738]: 2025-09-30 17:40:00.220768804 +0000 UTC m=+0.120122968 container start a69976aa8cbfe30d1d2126ecad4b46456c685fc2bf9e7d04e6955c7407c8bf22 (image=quay.io/ceph/ceph:v19, name=nice_goodall, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Sep 30 17:40:00 compute-0 podman[95738]: 2025-09-30 17:40:00.124258576 +0000 UTC m=+0.023612760 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:40:00 compute-0 podman[95738]: 2025-09-30 17:40:00.225965018 +0000 UTC m=+0.125319192 container attach a69976aa8cbfe30d1d2126ecad4b46456c685fc2bf9e7d04e6955c7407c8bf22 (image=quay.io/ceph/ceph:v19, name=nice_goodall, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Sep 30 17:40:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v16: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
Sep 30 17:40:00 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.14430 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Sep 30 17:40:00 compute-0 nice_goodall[95753]: 
Sep 30 17:40:00 compute-0 nice_goodall[95753]: [{"container_id": "edbad5d01b7a", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.11%", "created": "2025-09-30T17:37:21.592859Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-09-30T17:39:44.278443Z", "memory_usage": 7808745, "ports": [], "service_name": "crash", "started": "2025-09-30T17:37:21.509661Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@crash.compute-0", "version": "19.2.3"}, {"container_id": "488cd31837bd", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.16%", "created": "2025-09-30T17:38:06.653472Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-09-30T17:39:44.563135Z", "memory_usage": 7812939, "ports": [], "service_name": "crash", "started": "2025-09-30T17:38:06.539064Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@crash.compute-1", "version": "19.2.3"}, {"container_id": "41da77fe255c", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "26.69%", "created": "2025-09-30T17:36:39.419760Z", "daemon_id": "compute-0.efvthf", "daemon_name": "mgr.compute-0.efvthf", "daemon_type": "mgr", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-09-30T17:39:44.278310Z", "memory_usage": 538234060, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-09-30T17:36:38.636592Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@mgr.compute-0.efvthf", "version": "19.2.3"}, {"container_id": "0d9fb24a4b03", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "24.45%", "created": "2025-09-30T17:38:01.270325Z", "daemon_id": "compute-1.glbusf", "daemon_name": "mgr.compute-1.glbusf", "daemon_type": "mgr", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-09-30T17:39:44.563026Z", "memory_usage": 502372761, "ports": [8765], "service_name": "mgr", "started": "2025-09-30T17:38:01.158784Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@mgr.compute-1.glbusf", "version": "19.2.3"}, {"container_id": "28cb2903608c", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "2.08%", "created": "2025-09-30T17:36:27.024407Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-09-30T17:39:44.278182Z", "memory_request": 2147483648, "memory_usage": 56046387, "ports": [], "service_name": "mon", "started": "2025-09-30T17:36:32.074970Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@mon.compute-0", "version": "19.2.3"}, {"container_id": "0535866a09e0", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.27%", "created": "2025-09-30T17:37:59.604280Z", "daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-09-30T17:39:44.562869Z", "memory_request": 2147483648, "memory_usage": 40978350, "ports": [], "service_name": "mon", "started": "2025-09-30T17:37:59.502775Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@mon.compute-1", "version": "19.2.3"}, {"daemon_id": "compute-0", "daemon_name": "node-exporter.compute-0", "daemon_type": "node-exporter", "events": ["2025-09-30T17:39:51.433229Z daemon:node-exporter.compute-0 [INFO] \"Deployed node-exporter.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "ports": [9100], "service_name": "node-exporter", "status": 2, "status_desc": "starting"}, {"daemon_id": "compute-1", "daemon_name": "node-exporter.compute-1", "daemon_type": "node-exporter", "events": ["2025-09-30T17:39:54.308717Z daemon:node-exporter.compute-1 [INFO] \"Deployed node-exporter.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "ports": [9100], "service_name": "node-exporter", "status": 2, "status_desc": "starting"}, {"container_id": "fd4d46f7f2f9", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.64%", "created": "2025-09-30T17:38:25.541472Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-09-30T17:39:44.278547Z", "memory_request": 4294967296, "memory_usage": 69237473, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-09-30T17:38:25.163947Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@osd.0", "version": "19.2.3"}, {"container_id": "f26debbca63b", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.98%", "created": "2025-09-30T17:38:19.914833Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-09-30T17:39:44.563244Z", "memory_request": 4294967296, "memory_usage": 71743569, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-09-30T17:38:19.826905Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@osd.1", "version": "19.2.3"}]
Sep 30 17:40:00 compute-0 systemd[1]: libpod-a69976aa8cbfe30d1d2126ecad4b46456c685fc2bf9e7d04e6955c7407c8bf22.scope: Deactivated successfully.
Sep 30 17:40:00 compute-0 podman[95738]: 2025-09-30 17:40:00.574022783 +0000 UTC m=+0.473376947 container died a69976aa8cbfe30d1d2126ecad4b46456c685fc2bf9e7d04e6955c7407c8bf22 (image=quay.io/ceph/ceph:v19, name=nice_goodall, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Sep 30 17:40:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-58d717d9479523c05c628d4a208b6d36066fd9ae20ca510d00eaf033d3064fda-merged.mount: Deactivated successfully.
Sep 30 17:40:00 compute-0 podman[95738]: 2025-09-30 17:40:00.607827435 +0000 UTC m=+0.507181599 container remove a69976aa8cbfe30d1d2126ecad4b46456c685fc2bf9e7d04e6955c7407c8bf22 (image=quay.io/ceph/ceph:v19, name=nice_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid)
Sep 30 17:40:00 compute-0 systemd[1]: libpod-conmon-a69976aa8cbfe30d1d2126ecad4b46456c685fc2bf9e7d04e6955c7407c8bf22.scope: Deactivated successfully.
Sep 30 17:40:00 compute-0 sudo[95735]: pam_unix(sudo:session): session closed for user root
Sep 30 17:40:00 compute-0 ceph-mon[73755]: from='client.14426 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Sep 30 17:40:00 compute-0 ceph-mon[73755]: 4.11 deep-scrub starts
Sep 30 17:40:00 compute-0 ceph-mon[73755]: 4.11 deep-scrub ok
Sep 30 17:40:00 compute-0 ceph-mon[73755]: 4.c scrub starts
Sep 30 17:40:00 compute-0 ceph-mon[73755]: 4.c scrub ok
Sep 30 17:40:00 compute-0 ceph-mon[73755]: Health detail: HEALTH_ERR 1 OSD(s) experiencing slow operations in BlueStore; 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds
Sep 30 17:40:00 compute-0 ceph-mon[73755]: [WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
Sep 30 17:40:00 compute-0 ceph-mon[73755]:      osd.1 observed slow operation indications in BlueStore
Sep 30 17:40:00 compute-0 ceph-mon[73755]: [ERR] MDS_ALL_DOWN: 1 filesystem is offline
Sep 30 17:40:00 compute-0 ceph-mon[73755]:     fs cephfs is offline because no MDS is active for it.
Sep 30 17:40:00 compute-0 ceph-mon[73755]: [WRN] MDS_UP_LESS_THAN_MAX: 1 filesystem is online with fewer MDS than max_mds
Sep 30 17:40:00 compute-0 ceph-mon[73755]:     fs cephfs has 0 MDS online, but wants 1
Sep 30 17:40:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:40:00 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:40:00 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 4.e scrub starts
Sep 30 17:40:00 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 4.e scrub ok
Sep 30 17:40:00 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Sep 30 17:40:00 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.mewauo", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Sep 30 17:40:00 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.mewauo", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Sep 30 17:40:00 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.mewauo", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Sep 30 17:40:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Sep 30 17:40:00 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:40:00 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:40:00 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.mewauo on compute-0
Sep 30 17:40:00 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.mewauo on compute-0
Sep 30 17:40:00 compute-0 sudo[95793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:40:00 compute-0 sudo[95793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:40:00 compute-0 sudo[95793]: pam_unix(sudo:session): session closed for user root
Sep 30 17:40:00 compute-0 sudo[95818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:40:00 compute-0 sudo[95818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:40:01 compute-0 podman[95884]: 2025-09-30 17:40:01.245678952 +0000 UTC m=+0.040411523 container create 3253e172a107735f92f5e0ac705fc7abb40a15eba4c1d541dabdaed2f1bf59b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:40:01 compute-0 systemd[1]: Started libpod-conmon-3253e172a107735f92f5e0ac705fc7abb40a15eba4c1d541dabdaed2f1bf59b1.scope.
Sep 30 17:40:01 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:40:01 compute-0 podman[95884]: 2025-09-30 17:40:01.305981757 +0000 UTC m=+0.100714338 container init 3253e172a107735f92f5e0ac705fc7abb40a15eba4c1d541dabdaed2f1bf59b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_lederberg, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Sep 30 17:40:01 compute-0 podman[95884]: 2025-09-30 17:40:01.311252423 +0000 UTC m=+0.105984984 container start 3253e172a107735f92f5e0ac705fc7abb40a15eba4c1d541dabdaed2f1bf59b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_lederberg, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:40:01 compute-0 podman[95884]: 2025-09-30 17:40:01.314580199 +0000 UTC m=+0.109312760 container attach 3253e172a107735f92f5e0ac705fc7abb40a15eba4c1d541dabdaed2f1bf59b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_lederberg, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:40:01 compute-0 angry_lederberg[95900]: 167 167
Sep 30 17:40:01 compute-0 systemd[1]: libpod-3253e172a107735f92f5e0ac705fc7abb40a15eba4c1d541dabdaed2f1bf59b1.scope: Deactivated successfully.
Sep 30 17:40:01 compute-0 podman[95884]: 2025-09-30 17:40:01.316155559 +0000 UTC m=+0.110888120 container died 3253e172a107735f92f5e0ac705fc7abb40a15eba4c1d541dabdaed2f1bf59b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:40:01 compute-0 podman[95884]: 2025-09-30 17:40:01.227317119 +0000 UTC m=+0.022049730 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:40:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf19dd31744cbe2508e4cf760bff3acc2dc08eb473507638c7a494fbc58b70ec-merged.mount: Deactivated successfully.
Sep 30 17:40:01 compute-0 podman[95884]: 2025-09-30 17:40:01.348212486 +0000 UTC m=+0.142945047 container remove 3253e172a107735f92f5e0ac705fc7abb40a15eba4c1d541dabdaed2f1bf59b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:40:01 compute-0 systemd[1]: libpod-conmon-3253e172a107735f92f5e0ac705fc7abb40a15eba4c1d541dabdaed2f1bf59b1.scope: Deactivated successfully.
Sep 30 17:40:01 compute-0 systemd[1]: Reloading.
Sep 30 17:40:01 compute-0 systemd-rc-local-generator[95966]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:40:01 compute-0 systemd-sysv-generator[95971]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:40:01 compute-0 sudo[95942]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyunksnfhyummrjvfiuvzkueverppcqq ; /usr/bin/python3'
Sep 30 17:40:01 compute-0 sudo[95942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:40:01 compute-0 systemd[1]: Reloading.
Sep 30 17:40:01 compute-0 ceph-mon[73755]: pgmap v16: 194 pgs: 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
Sep 30 17:40:01 compute-0 ceph-mon[73755]: from='client.14430 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Sep 30 17:40:01 compute-0 ceph-mon[73755]: 4.10 scrub starts
Sep 30 17:40:01 compute-0 ceph-mon[73755]: 4.10 scrub ok
Sep 30 17:40:01 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:01 compute-0 ceph-mon[73755]: 4.e scrub starts
Sep 30 17:40:01 compute-0 ceph-mon[73755]: 4.e scrub ok
Sep 30 17:40:01 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:01 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:01 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.mewauo", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Sep 30 17:40:01 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.mewauo", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Sep 30 17:40:01 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:01 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:40:01 compute-0 ceph-mon[73755]: Deploying daemon rgw.rgw.compute-0.mewauo on compute-0
Sep 30 17:40:01 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 3.1a deep-scrub starts
Sep 30 17:40:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Sep 30 17:40:01 compute-0 systemd-rc-local-generator[96008]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:40:01 compute-0 systemd-sysv-generator[96011]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:40:01 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 3.1a deep-scrub ok
Sep 30 17:40:01 compute-0 python3[95979]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:40:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e46 e46: 2 total, 2 up, 2 in
Sep 30 17:40:01 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e46: 2 total, 2 up, 2 in
Sep 30 17:40:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Sep 30 17:40:01 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.csizwd' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Sep 30 17:40:01 compute-0 podman[96017]: 2025-09-30 17:40:01.806901613 +0000 UTC m=+0.039158691 container create 37e3c82f50265a4be724634206d6b4c2e786b5b851820ee3e4b002297cd7d511 (image=quay.io/ceph/ceph:v19, name=quirky_mirzakhani, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Sep 30 17:40:01 compute-0 podman[96017]: 2025-09-30 17:40:01.792306917 +0000 UTC m=+0.024564015 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:40:01 compute-0 systemd[1]: Started libpod-conmon-37e3c82f50265a4be724634206d6b4c2e786b5b851820ee3e4b002297cd7d511.scope.
Sep 30 17:40:01 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.mewauo for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 17:40:01 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:40:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31dcbb04f45da61ddeca66d216cb296d105ca302c61dc48816f723a9af9c9340/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31dcbb04f45da61ddeca66d216cb296d105ca302c61dc48816f723a9af9c9340/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:01 compute-0 podman[96017]: 2025-09-30 17:40:01.958925313 +0000 UTC m=+0.191182421 container init 37e3c82f50265a4be724634206d6b4c2e786b5b851820ee3e4b002297cd7d511 (image=quay.io/ceph/ceph:v19, name=quirky_mirzakhani, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:40:01 compute-0 podman[96017]: 2025-09-30 17:40:01.965975235 +0000 UTC m=+0.198232313 container start 37e3c82f50265a4be724634206d6b4c2e786b5b851820ee3e4b002297cd7d511 (image=quay.io/ceph/ceph:v19, name=quirky_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:40:01 compute-0 podman[96017]: 2025-09-30 17:40:01.969917157 +0000 UTC m=+0.202174255 container attach 37e3c82f50265a4be724634206d6b4c2e786b5b851820ee3e4b002297cd7d511 (image=quay.io/ceph/ceph:v19, name=quirky_mirzakhani, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Sep 30 17:40:02 compute-0 podman[96106]: 2025-09-30 17:40:02.13795591 +0000 UTC m=+0.034800669 container create 1e75b27bbc5121350f19eaba51a35b421755f8a9df0a3639b7c82dbcf53bfeea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-rgw-rgw-compute-0-mewauo, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:40:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ccdf86a5270597a235cbb5d5cb4db1e46071a897023ab36cad29f47544e7f28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ccdf86a5270597a235cbb5d5cb4db1e46071a897023ab36cad29f47544e7f28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ccdf86a5270597a235cbb5d5cb4db1e46071a897023ab36cad29f47544e7f28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ccdf86a5270597a235cbb5d5cb4db1e46071a897023ab36cad29f47544e7f28/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.mewauo supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:02 compute-0 podman[96106]: 2025-09-30 17:40:02.198584803 +0000 UTC m=+0.095429582 container init 1e75b27bbc5121350f19eaba51a35b421755f8a9df0a3639b7c82dbcf53bfeea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-rgw-rgw-compute-0-mewauo, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 17:40:02 compute-0 podman[96106]: 2025-09-30 17:40:02.203213732 +0000 UTC m=+0.100058481 container start 1e75b27bbc5121350f19eaba51a35b421755f8a9df0a3639b7c82dbcf53bfeea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-rgw-rgw-compute-0-mewauo, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Sep 30 17:40:02 compute-0 bash[96106]: 1e75b27bbc5121350f19eaba51a35b421755f8a9df0a3639b7c82dbcf53bfeea
Sep 30 17:40:02 compute-0 podman[96106]: 2025-09-30 17:40:02.122977903 +0000 UTC m=+0.019822682 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:40:02 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.mewauo for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:40:02 compute-0 sudo[95818]: pam_unix(sudo:session): session closed for user root
Sep 30 17:40:02 compute-0 radosgw[96126]: deferred set uid:gid to 167:167 (ceph:ceph)
Sep 30 17:40:02 compute-0 radosgw[96126]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Sep 30 17:40:02 compute-0 radosgw[96126]: framework: beast
Sep 30 17:40:02 compute-0 radosgw[96126]: framework conf key: endpoint, val: 192.168.122.100:8082
Sep 30 17:40:02 compute-0 radosgw[96126]: init_numa not setting numa affinity
Sep 30 17:40:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:40:02 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:40:02 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Sep 30 17:40:02 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:02 compute-0 ceph-mgr[74051]: [progress INFO root] complete: finished ev aa5d9513-854e-4d74-a139-bb18a0ff1a3b (Updating rgw.rgw deployment (+2 -> 2))
Sep 30 17:40:02 compute-0 ceph-mgr[74051]: [progress INFO root] Completed event aa5d9513-854e-4d74-a139-bb18a0ff1a3b (Updating rgw.rgw deployment (+2 -> 2)) in 4 seconds
Sep 30 17:40:02 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1
Sep 30 17:40:02 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1
Sep 30 17:40:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Sep 30 17:40:02 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Sep 30 17:40:02 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:02 compute-0 ceph-mgr[74051]: [progress INFO root] update: starting ev 7af5edeb-2b2f-47cf-8dbd-b1ea7a64b5ac (Updating mds.cephfs deployment (+2 -> 2))
Sep 30 17:40:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.vrwlru", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Sep 30 17:40:02 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.vrwlru", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Sep 30 17:40:02 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.vrwlru", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Sep 30 17:40:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:40:02 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:40:02 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.vrwlru on compute-0
Sep 30 17:40:02 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.vrwlru on compute-0
Sep 30 17:40:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Sep 30 17:40:02 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2117407937' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Sep 30 17:40:02 compute-0 quirky_mirzakhani[96035]: 
Sep 30 17:40:02 compute-0 quirky_mirzakhani[96035]: {"fsid":"63d32c6a-fa18-54ed-8711-9a3915cc367b","health":{"status":"HEALTH_ERR","checks":{"BLUESTORE_SLOW_OP_ALERT":{"severity":"HEALTH_WARN","summary":{"message":"1 OSD(s) experiencing slow operations in BlueStore","count":1},"muted":false},"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":10,"quorum":[0,1],"quorum_names":["compute-0","compute-1"],"quorum_age":117,"monmap":{"epoch":2,"min_mon_release_name":"squid","num_mons":2},"osdmap":{"epoch":46,"num_osds":2,"num_up_osds":2,"osd_up_since":1759253917,"num_in_osds":2,"osd_in_since":1759253889,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":194}],"num_pgs":194,"num_pools":8,"num_objects":3,"data_bytes":459280,"bytes_used":56389632,"bytes_avail":42884894720,"bytes_total":42941284352,"read_bytes_sec":15014,"write_bytes_sec":0,"read_op_per_sec":4,"write_op_per_sec":1},"fsmap":{"epoch":2,"btime":"2025-09-30T17:39:43:520234+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":1,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":4,"modified":"2025-09-30T17:39:12.734724+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"aa5d9513-854e-4d74-a139-bb18a0ff1a3b":{"message":"Updating rgw.rgw deployment (+2 -> 2) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Sep 30 17:40:02 compute-0 sudo[96153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:40:02 compute-0 sudo[96153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:40:02 compute-0 sudo[96153]: pam_unix(sudo:session): session closed for user root
Sep 30 17:40:02 compute-0 systemd[1]: libpod-37e3c82f50265a4be724634206d6b4c2e786b5b851820ee3e4b002297cd7d511.scope: Deactivated successfully.
Sep 30 17:40:02 compute-0 podman[96017]: 2025-09-30 17:40:02.392820481 +0000 UTC m=+0.625077559 container died 37e3c82f50265a4be724634206d6b4c2e786b5b851820ee3e4b002297cd7d511 (image=quay.io/ceph/ceph:v19, name=quirky_mirzakhani, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Sep 30 17:40:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-31dcbb04f45da61ddeca66d216cb296d105ca302c61dc48816f723a9af9c9340-merged.mount: Deactivated successfully.
Sep 30 17:40:02 compute-0 podman[96017]: 2025-09-30 17:40:02.431628902 +0000 UTC m=+0.663885980 container remove 37e3c82f50265a4be724634206d6b4c2e786b5b851820ee3e4b002297cd7d511 (image=quay.io/ceph/ceph:v19, name=quirky_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Sep 30 17:40:02 compute-0 systemd[1]: libpod-conmon-37e3c82f50265a4be724634206d6b4c2e786b5b851820ee3e4b002297cd7d511.scope: Deactivated successfully.
Sep 30 17:40:02 compute-0 sudo[96180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:40:02 compute-0 sudo[96180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:40:02 compute-0 sudo[95942]: pam_unix(sudo:session): session closed for user root
Sep 30 17:40:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v18: 195 pgs: 1 unknown, 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:40:02 compute-0 ceph-mgr[74051]: [progress INFO root] Writing back 12 completed events
Sep 30 17:40:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Sep 30 17:40:02 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:02 compute-0 ceph-mgr[74051]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Sep 30 17:40:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Sep 30 17:40:02 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Sep 30 17:40:02 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.csizwd' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Sep 30 17:40:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e47 e47: 2 total, 2 up, 2 in
Sep 30 17:40:02 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e47: 2 total, 2 up, 2 in
Sep 30 17:40:02 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Sep 30 17:40:02 compute-0 ceph-mon[73755]: 5.1e scrub starts
Sep 30 17:40:02 compute-0 ceph-mon[73755]: 5.1e scrub ok
Sep 30 17:40:02 compute-0 ceph-mon[73755]: 3.1a deep-scrub starts
Sep 30 17:40:02 compute-0 ceph-mon[73755]: 3.1a deep-scrub ok
Sep 30 17:40:02 compute-0 ceph-mon[73755]: osdmap e46: 2 total, 2 up, 2 in
Sep 30 17:40:02 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/423346448' entity='client.rgw.rgw.compute-1.csizwd' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Sep 30 17:40:02 compute-0 ceph-mon[73755]: from='client.? ' entity='client.rgw.rgw.compute-1.csizwd' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Sep 30 17:40:02 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:02 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:02 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:02 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:02 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:02 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.vrwlru", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Sep 30 17:40:02 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.vrwlru", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Sep 30 17:40:02 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:40:02 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2117407937' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Sep 30 17:40:02 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:02 compute-0 podman[96254]: 2025-09-30 17:40:02.783957967 +0000 UTC m=+0.039384627 container create ef7b0a2cecc8d1b092aa683d7c24d2b93038ef88538061b6326634a6c798b325 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:40:02 compute-0 systemd[1]: Started libpod-conmon-ef7b0a2cecc8d1b092aa683d7c24d2b93038ef88538061b6326634a6c798b325.scope.
Sep 30 17:40:02 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:40:02 compute-0 podman[96254]: 2025-09-30 17:40:02.843967764 +0000 UTC m=+0.099394444 container init ef7b0a2cecc8d1b092aa683d7c24d2b93038ef88538061b6326634a6c798b325 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:40:02 compute-0 podman[96254]: 2025-09-30 17:40:02.851646552 +0000 UTC m=+0.107073212 container start ef7b0a2cecc8d1b092aa683d7c24d2b93038ef88538061b6326634a6c798b325 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 17:40:02 compute-0 podman[96254]: 2025-09-30 17:40:02.855291896 +0000 UTC m=+0.110718576 container attach ef7b0a2cecc8d1b092aa683d7c24d2b93038ef88538061b6326634a6c798b325 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_herschel, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:40:02 compute-0 thirsty_herschel[96270]: 167 167
Sep 30 17:40:02 compute-0 systemd[1]: libpod-ef7b0a2cecc8d1b092aa683d7c24d2b93038ef88538061b6326634a6c798b325.scope: Deactivated successfully.
Sep 30 17:40:02 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Sep 30 17:40:02 compute-0 podman[96254]: 2025-09-30 17:40:02.857157045 +0000 UTC m=+0.112583695 container died ef7b0a2cecc8d1b092aa683d7c24d2b93038ef88538061b6326634a6c798b325 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:40:02 compute-0 podman[96254]: 2025-09-30 17:40:02.768498478 +0000 UTC m=+0.023925158 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:40:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-116491c03a276e356d0727bae0cb075afa9a8151344d9def2b5cd0f591ba7ba0-merged.mount: Deactivated successfully.
Sep 30 17:40:02 compute-0 podman[96254]: 2025-09-30 17:40:02.891944032 +0000 UTC m=+0.147370692 container remove ef7b0a2cecc8d1b092aa683d7c24d2b93038ef88538061b6326634a6c798b325 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_herschel, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:40:02 compute-0 systemd[1]: libpod-conmon-ef7b0a2cecc8d1b092aa683d7c24d2b93038ef88538061b6326634a6c798b325.scope: Deactivated successfully.
Sep 30 17:40:02 compute-0 systemd[1]: Reloading.
Sep 30 17:40:03 compute-0 systemd-sysv-generator[96896]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:40:03 compute-0 systemd-rc-local-generator[96892]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:40:03 compute-0 sudo[96924]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efbedxmnuoyjnchfegdhwuqnqlskaomw ; /usr/bin/python3'
Sep 30 17:40:03 compute-0 sudo[96924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:40:03 compute-0 systemd[1]: Reloading.
Sep 30 17:40:03 compute-0 systemd-sysv-generator[96963]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:40:03 compute-0 systemd-rc-local-generator[96959]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:40:03 compute-0 ceph-mon[73755]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Sep 30 17:40:03 compute-0 python3[96929]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:40:03 compute-0 podman[96967]: 2025-09-30 17:40:03.395792192 +0000 UTC m=+0.039363246 container create 8ee1f557595913b31811bb52c693c5b79786fa9f0a20ed35c805083ac9424156 (image=quay.io/ceph/ceph:v19, name=recursing_hoover, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Sep 30 17:40:03 compute-0 systemd[1]: Started libpod-conmon-8ee1f557595913b31811bb52c693c5b79786fa9f0a20ed35c805083ac9424156.scope.
Sep 30 17:40:03 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.vrwlru for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 17:40:03 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:40:03 compute-0 podman[96967]: 2025-09-30 17:40:03.380927569 +0000 UTC m=+0.024498643 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:40:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3804cfe2beb3b5610851b0636d8e82612680775563a2891650eba22143acd7be/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3804cfe2beb3b5610851b0636d8e82612680775563a2891650eba22143acd7be/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:03 compute-0 podman[96967]: 2025-09-30 17:40:03.49807118 +0000 UTC m=+0.141642234 container init 8ee1f557595913b31811bb52c693c5b79786fa9f0a20ed35c805083ac9424156 (image=quay.io/ceph/ceph:v19, name=recursing_hoover, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:40:03 compute-0 podman[96967]: 2025-09-30 17:40:03.507278827 +0000 UTC m=+0.150849881 container start 8ee1f557595913b31811bb52c693c5b79786fa9f0a20ed35c805083ac9424156 (image=quay.io/ceph/ceph:v19, name=recursing_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Sep 30 17:40:03 compute-0 podman[96967]: 2025-09-30 17:40:03.536400268 +0000 UTC m=+0.179971322 container attach 8ee1f557595913b31811bb52c693c5b79786fa9f0a20ed35c805083ac9424156 (image=quay.io/ceph/ceph:v19, name=recursing_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Sep 30 17:40:03 compute-0 podman[97053]: 2025-09-30 17:40:03.702317446 +0000 UTC m=+0.036833181 container create b62f95a966edd24f3275c55cea5e4e6512b21f3d23d0180d37293371d71131c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mds-cephfs-compute-0-vrwlru, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Sep 30 17:40:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb5b16a040589f1e54569e8ccd7d709cc4b738ed9306b06790e37dbdeb1c2b1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb5b16a040589f1e54569e8ccd7d709cc4b738ed9306b06790e37dbdeb1c2b1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb5b16a040589f1e54569e8ccd7d709cc4b738ed9306b06790e37dbdeb1c2b1d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb5b16a040589f1e54569e8ccd7d709cc4b738ed9306b06790e37dbdeb1c2b1d/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.vrwlru supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:03 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 4.1a deep-scrub starts
Sep 30 17:40:03 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 4.1a deep-scrub ok
Sep 30 17:40:03 compute-0 podman[97053]: 2025-09-30 17:40:03.76452544 +0000 UTC m=+0.099041195 container init b62f95a966edd24f3275c55cea5e4e6512b21f3d23d0180d37293371d71131c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mds-cephfs-compute-0-vrwlru, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Sep 30 17:40:03 compute-0 podman[97053]: 2025-09-30 17:40:03.769147099 +0000 UTC m=+0.103662834 container start b62f95a966edd24f3275c55cea5e4e6512b21f3d23d0180d37293371d71131c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mds-cephfs-compute-0-vrwlru, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default)
Sep 30 17:40:03 compute-0 bash[97053]: b62f95a966edd24f3275c55cea5e4e6512b21f3d23d0180d37293371d71131c7
Sep 30 17:40:03 compute-0 podman[97053]: 2025-09-30 17:40:03.685599085 +0000 UTC m=+0.020114880 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:40:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Sep 30 17:40:03 compute-0 ceph-mon[73755]: Saving service rgw.rgw spec with placement compute-0;compute-1
Sep 30 17:40:03 compute-0 ceph-mon[73755]: Deploying daemon mds.cephfs.compute-0.vrwlru on compute-0
Sep 30 17:40:03 compute-0 ceph-mon[73755]: pgmap v18: 195 pgs: 1 unknown, 194 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:40:03 compute-0 ceph-mon[73755]: 4.1e deep-scrub starts
Sep 30 17:40:03 compute-0 ceph-mon[73755]: 4.1e deep-scrub ok
Sep 30 17:40:03 compute-0 ceph-mon[73755]: 3.1d scrub starts
Sep 30 17:40:03 compute-0 ceph-mon[73755]: from='client.? ' entity='client.rgw.rgw.compute-1.csizwd' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Sep 30 17:40:03 compute-0 ceph-mon[73755]: osdmap e47: 2 total, 2 up, 2 in
Sep 30 17:40:03 compute-0 ceph-mon[73755]: 3.1d scrub ok
Sep 30 17:40:03 compute-0 ceph-mon[73755]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Sep 30 17:40:03 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.vrwlru for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:40:03 compute-0 ceph-mds[97072]: set uid:gid to 167:167 (ceph:ceph)
Sep 30 17:40:03 compute-0 ceph-mds[97072]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Sep 30 17:40:03 compute-0 ceph-mds[97072]: main not setting numa affinity
Sep 30 17:40:03 compute-0 ceph-mds[97072]: pidfile_write: ignore empty --pid-file
Sep 30 17:40:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mds-cephfs-compute-0-vrwlru[97068]: starting mds.cephfs.compute-0.vrwlru at 
Sep 30 17:40:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e48 e48: 2 total, 2 up, 2 in
Sep 30 17:40:03 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e48: 2 total, 2 up, 2 in
Sep 30 17:40:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Sep 30 17:40:03 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3220603395' entity='client.rgw.rgw.compute-0.mewauo' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Sep 30 17:40:03 compute-0 ceph-mds[97072]: mds.cephfs.compute-0.vrwlru Updating MDS map to version 2 from mon.0
Sep 30 17:40:03 compute-0 sudo[96180]: pam_unix(sudo:session): session closed for user root
Sep 30 17:40:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Sep 30 17:40:03 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.csizwd' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Sep 30 17:40:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:40:03 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:40:03 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Sep 30 17:40:03 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.wibdub", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Sep 30 17:40:03 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.wibdub", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Sep 30 17:40:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Sep 30 17:40:03 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2089148134' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Sep 30 17:40:03 compute-0 recursing_hoover[96984]: 
Sep 30 17:40:03 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.wibdub", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Sep 30 17:40:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:40:03 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:40:03 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.wibdub on compute-1
Sep 30 17:40:03 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.wibdub on compute-1
Sep 30 17:40:03 compute-0 recursing_hoover[96984]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow
_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ALERTMANAGER_API_HOST","value":"http://192.168.122.100:9093","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_PASSWORD","value":"/home/grafana_password.yml","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_URL","value":"http://192.168.122.100:3100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_USERNAME","value":"admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/PROMETHEUS_API_HOST","value":"http://192.168.122.100:9092","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-0.efvthf/server_addr","value":"192.168.122.100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-1.glbusf/server_addr","value":"192.168.122.101","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl_server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.mewauo","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-1.csizwd","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Sep 30 17:40:03 compute-0 systemd[1]: libpod-8ee1f557595913b31811bb52c693c5b79786fa9f0a20ed35c805083ac9424156.scope: Deactivated successfully.
Sep 30 17:40:03 compute-0 podman[96967]: 2025-09-30 17:40:03.883253732 +0000 UTC m=+0.526824796 container died 8ee1f557595913b31811bb52c693c5b79786fa9f0a20ed35c805083ac9424156 (image=quay.io/ceph/ceph:v19, name=recursing_hoover, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:40:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-3804cfe2beb3b5610851b0636d8e82612680775563a2891650eba22143acd7be-merged.mount: Deactivated successfully.
Sep 30 17:40:03 compute-0 podman[96967]: 2025-09-30 17:40:03.923278444 +0000 UTC m=+0.566849498 container remove 8ee1f557595913b31811bb52c693c5b79786fa9f0a20ed35c805083ac9424156 (image=quay.io/ceph/ceph:v19, name=recursing_hoover, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 17:40:03 compute-0 systemd[1]: libpod-conmon-8ee1f557595913b31811bb52c693c5b79786fa9f0a20ed35c805083ac9424156.scope: Deactivated successfully.
Sep 30 17:40:03 compute-0 sudo[96924]: pam_unix(sudo:session): session closed for user root
Sep 30 17:40:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 48 pg[10.0( empty local-lis/les=0/0 n=0 ec=48/48 lis/c=0/0 les/c/f=0/0/0 sis=48) [0] r=0 lpr=48 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).mds e2 assigned standby [v2:192.168.122.100:6806/311246388,v1:192.168.122.100:6807/311246388] as mds.0
Sep 30 17:40:04 compute-0 ceph-mon[73755]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.vrwlru assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Sep 30 17:40:04 compute-0 ceph-mon[73755]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Sep 30 17:40:04 compute-0 ceph-mon[73755]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Sep 30 17:40:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:40:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).mds e3 new map
Sep 30 17:40:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           btime 2025-09-30T17:40:04:127296+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        3
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-09-30T17:39:43.520183+0000
                                           modified        2025-09-30T17:40:04.127286+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14458}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-0.vrwlru{0:14458} state up:creating seq 1 addr [v2:192.168.122.100:6806/311246388,v1:192.168.122.100:6807/311246388] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Sep 30 17:40:04 compute-0 ceph-mds[97072]: mds.cephfs.compute-0.vrwlru Updating MDS map to version 3 from mon.0
Sep 30 17:40:04 compute-0 ceph-mds[97072]: mds.0.3 handle_mds_map I am now mds.0.3
Sep 30 17:40:04 compute-0 ceph-mds[97072]: mds.0.3 handle_mds_map state change up:standby --> up:creating
Sep 30 17:40:04 compute-0 ceph-mds[97072]: mds.0.cache creating system inode with ino:0x1
Sep 30 17:40:04 compute-0 ceph-mds[97072]: mds.0.cache creating system inode with ino:0x100
Sep 30 17:40:04 compute-0 ceph-mds[97072]: mds.0.cache creating system inode with ino:0x600
Sep 30 17:40:04 compute-0 ceph-mds[97072]: mds.0.cache creating system inode with ino:0x601
Sep 30 17:40:04 compute-0 ceph-mds[97072]: mds.0.cache creating system inode with ino:0x602
Sep 30 17:40:04 compute-0 ceph-mds[97072]: mds.0.cache creating system inode with ino:0x603
Sep 30 17:40:04 compute-0 ceph-mds[97072]: mds.0.cache creating system inode with ino:0x604
Sep 30 17:40:04 compute-0 ceph-mds[97072]: mds.0.cache creating system inode with ino:0x605
Sep 30 17:40:04 compute-0 ceph-mds[97072]: mds.0.cache creating system inode with ino:0x606
Sep 30 17:40:04 compute-0 ceph-mds[97072]: mds.0.cache creating system inode with ino:0x607
Sep 30 17:40:04 compute-0 ceph-mds[97072]: mds.0.cache creating system inode with ino:0x608
Sep 30 17:40:04 compute-0 ceph-mds[97072]: mds.0.cache creating system inode with ino:0x609
Sep 30 17:40:04 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/311246388,v1:192.168.122.100:6807/311246388] up:boot
Sep 30 17:40:04 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.vrwlru=up:creating}
Sep 30 17:40:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.vrwlru"} v 0)
Sep 30 17:40:04 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.vrwlru"}]: dispatch
Sep 30 17:40:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).mds e3 all = 0
Sep 30 17:40:04 compute-0 ceph-mds[97072]: mds.0.3 creating_done
Sep 30 17:40:04 compute-0 ceph-mon[73755]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.vrwlru is now active in filesystem cephfs as rank 0
Sep 30 17:40:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v21: 196 pgs: 1 unknown, 195 active+clean; 450 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 3.0 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Sep 30 17:40:04 compute-0 sudo[97136]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cudwoycnxyiemhqdziqkyvmvwvwbrfsh ; /usr/bin/python3'
Sep 30 17:40:04 compute-0 sudo[97136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:40:04 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Sep 30 17:40:04 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Sep 30 17:40:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Sep 30 17:40:04 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3220603395' entity='client.rgw.rgw.compute-0.mewauo' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Sep 30 17:40:04 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.csizwd' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Sep 30 17:40:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e49 e49: 2 total, 2 up, 2 in
Sep 30 17:40:04 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e49: 2 total, 2 up, 2 in
Sep 30 17:40:04 compute-0 ceph-mon[73755]: 2.15 scrub starts
Sep 30 17:40:04 compute-0 ceph-mon[73755]: 2.15 scrub ok
Sep 30 17:40:04 compute-0 ceph-mon[73755]: 4.1a deep-scrub starts
Sep 30 17:40:04 compute-0 ceph-mon[73755]: 4.1a deep-scrub ok
Sep 30 17:40:04 compute-0 ceph-mon[73755]: osdmap e48: 2 total, 2 up, 2 in
Sep 30 17:40:04 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3220603395' entity='client.rgw.rgw.compute-0.mewauo' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Sep 30 17:40:04 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3982366829' entity='client.rgw.rgw.compute-1.csizwd' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Sep 30 17:40:04 compute-0 ceph-mon[73755]: from='client.? ' entity='client.rgw.rgw.compute-1.csizwd' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Sep 30 17:40:04 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:04 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:04 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:04 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.wibdub", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Sep 30 17:40:04 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2089148134' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Sep 30 17:40:04 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.wibdub", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Sep 30 17:40:04 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:40:04 compute-0 ceph-mon[73755]: Deploying daemon mds.cephfs.compute-1.wibdub on compute-1
Sep 30 17:40:04 compute-0 ceph-mon[73755]: daemon mds.cephfs.compute-0.vrwlru assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Sep 30 17:40:04 compute-0 ceph-mon[73755]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Sep 30 17:40:04 compute-0 ceph-mon[73755]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Sep 30 17:40:04 compute-0 ceph-mon[73755]: mds.? [v2:192.168.122.100:6806/311246388,v1:192.168.122.100:6807/311246388] up:boot
Sep 30 17:40:04 compute-0 ceph-mon[73755]: fsmap cephfs:1 {0=cephfs.compute-0.vrwlru=up:creating}
Sep 30 17:40:04 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.vrwlru"}]: dispatch
Sep 30 17:40:04 compute-0 ceph-mon[73755]: daemon mds.cephfs.compute-0.vrwlru is now active in filesystem cephfs as rank 0
Sep 30 17:40:04 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 49 pg[10.0( empty local-lis/les=48/49 n=0 ec=48/48 lis/c=0/0 les/c/f=0/0/0 sis=48) [0] r=0 lpr=48 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:04 compute-0 python3[97138]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
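The Ansible task above wraps the ceph CLI in a short-lived container. Reformatted for readability (every flag below is copied from the logged _raw_params string), the same check can be run directly on the node:

  # Query the cluster's minimum client release requirement from inside the
  # quay.io/ceph/ceph:v19 image, exactly as the Zuul job does above.
  podman run --rm --net=host --ipc=host \
    --volume /etc/ceph:/etc/ceph:z \
    --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
    --entrypoint ceph quay.io/ceph/ceph:v19 \
    --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b \
    -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
    osd get-require-min-compat-client
  # The container output a few lines below ("mimic") is this command's result.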
Sep 30 17:40:04 compute-0 podman[97143]: 2025-09-30 17:40:04.922978471 +0000 UTC m=+0.037824926 container create d581f102d205932ffee522203a36e14ab2951b242284385de26adcd1208e0e0b (image=quay.io/ceph/ceph:v19, name=eloquent_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:40:04 compute-0 systemd[1]: Started libpod-conmon-d581f102d205932ffee522203a36e14ab2951b242284385de26adcd1208e0e0b.scope.
Sep 30 17:40:04 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:40:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f964c67caa8de47b038e1f6a245e9c9e57a6362b62f0c8db0f9fe3431c043bb5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f964c67caa8de47b038e1f6a245e9c9e57a6362b62f0c8db0f9fe3431c043bb5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:05 compute-0 podman[97143]: 2025-09-30 17:40:04.906683241 +0000 UTC m=+0.021529716 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:40:05 compute-0 podman[97143]: 2025-09-30 17:40:05.004330279 +0000 UTC m=+0.119176754 container init d581f102d205932ffee522203a36e14ab2951b242284385de26adcd1208e0e0b (image=quay.io/ceph/ceph:v19, name=eloquent_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Sep 30 17:40:05 compute-0 podman[97143]: 2025-09-30 17:40:05.010615681 +0000 UTC m=+0.125462186 container start d581f102d205932ffee522203a36e14ab2951b242284385de26adcd1208e0e0b (image=quay.io/ceph/ceph:v19, name=eloquent_carver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Sep 30 17:40:05 compute-0 podman[97143]: 2025-09-30 17:40:05.01408595 +0000 UTC m=+0.128932435 container attach d581f102d205932ffee522203a36e14ab2951b242284385de26adcd1208e0e0b (image=quay.io/ceph/ceph:v19, name=eloquent_carver, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 17:40:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).mds e4 new map
Sep 30 17:40:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           btime 2025-09-30T17:40:05.134309+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-09-30T17:39:43.520183+0000
                                           modified        2025-09-30T17:40:05.134306+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14458}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 14458 members: 14458
                                           [mds.cephfs.compute-0.vrwlru{0:14458} state up:active seq 2 addr [v2:192.168.122.100:6806/311246388,v1:192.168.122.100:6807/311246388] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Sep 30 17:40:05 compute-0 ceph-mds[97072]: mds.cephfs.compute-0.vrwlru Updating MDS map to version 4 from mon.0
Sep 30 17:40:05 compute-0 ceph-mds[97072]: mds.0.3 handle_mds_map I am now mds.0.3
Sep 30 17:40:05 compute-0 ceph-mds[97072]: mds.0.3 handle_mds_map state change up:creating --> up:active
Sep 30 17:40:05 compute-0 ceph-mds[97072]: mds.0.3 recovery_done -- successful recovery!
Sep 30 17:40:05 compute-0 ceph-mds[97072]: mds.0.3 active_start
Sep 30 17:40:05 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/311246388,v1:192.168.122.100:6807/311246388] up:active
Sep 30 17:40:05 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.vrwlru=up:active}
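The print_map block above is the monitor's plain-text dump of MDS map e4, and the fsmap line confirms rank 0 is now up:active. As a sketch only (these commands are not part of the logged job), the same fsmap fields could be inspected on demand from the CLI:

  # Hypothetical interactive checks, shown to relate the print_map fields
  # (epoch, max_mds, up/standby set) to their CLI counterparts.
  ceph fs dump            # full fsmap, same fields as the print_map block
  ceph fs status cephfs   # per-rank summary: rank 0 = mds.cephfs.compute-0.vrwlru, up:active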
Sep 30 17:40:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Sep 30 17:40:05 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2030319904' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Sep 30 17:40:05 compute-0 eloquent_carver[97158]: mimic
Sep 30 17:40:05 compute-0 systemd[1]: libpod-d581f102d205932ffee522203a36e14ab2951b242284385de26adcd1208e0e0b.scope: Deactivated successfully.
Sep 30 17:40:05 compute-0 podman[97143]: 2025-09-30 17:40:05.384721697 +0000 UTC m=+0.499568162 container died d581f102d205932ffee522203a36e14ab2951b242284385de26adcd1208e0e0b (image=quay.io/ceph/ceph:v19, name=eloquent_carver, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:40:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-f964c67caa8de47b038e1f6a245e9c9e57a6362b62f0c8db0f9fe3431c043bb5-merged.mount: Deactivated successfully.
Sep 30 17:40:05 compute-0 podman[97143]: 2025-09-30 17:40:05.430099287 +0000 UTC m=+0.544945752 container remove d581f102d205932ffee522203a36e14ab2951b242284385de26adcd1208e0e0b (image=quay.io/ceph/ceph:v19, name=eloquent_carver, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2)
Sep 30 17:40:05 compute-0 systemd[1]: libpod-conmon-d581f102d205932ffee522203a36e14ab2951b242284385de26adcd1208e0e0b.scope: Deactivated successfully.
Sep 30 17:40:05 compute-0 sudo[97136]: pam_unix(sudo:session): session closed for user root
Sep 30 17:40:05 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Sep 30 17:40:05 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Sep 30 17:40:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Sep 30 17:40:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:40:05 compute-0 ceph-mon[73755]: pgmap v21: 196 pgs: 1 unknown, 195 active+clean; 450 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 3.0 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Sep 30 17:40:05 compute-0 ceph-mon[73755]: 2.19 scrub starts
Sep 30 17:40:05 compute-0 ceph-mon[73755]: 2.19 scrub ok
Sep 30 17:40:05 compute-0 ceph-mon[73755]: 5.1a scrub starts
Sep 30 17:40:05 compute-0 ceph-mon[73755]: 5.1a scrub ok
Sep 30 17:40:05 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3220603395' entity='client.rgw.rgw.compute-0.mewauo' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Sep 30 17:40:05 compute-0 ceph-mon[73755]: from='client.? ' entity='client.rgw.rgw.compute-1.csizwd' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Sep 30 17:40:05 compute-0 ceph-mon[73755]: osdmap e49: 2 total, 2 up, 2 in
Sep 30 17:40:05 compute-0 ceph-mon[73755]: mds.? [v2:192.168.122.100:6806/311246388,v1:192.168.122.100:6807/311246388] up:active
Sep 30 17:40:05 compute-0 ceph-mon[73755]: fsmap cephfs:1 {0=cephfs.compute-0.vrwlru=up:active}
Sep 30 17:40:05 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2030319904' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Sep 30 17:40:05 compute-0 ceph-mon[73755]: 2.e scrub starts
Sep 30 17:40:05 compute-0 ceph-mon[73755]: 2.e scrub ok
Sep 30 17:40:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e50 e50: 2 total, 2 up, 2 in
Sep 30 17:40:05 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:05 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e50: 2 total, 2 up, 2 in
Sep 30 17:40:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Sep 30 17:40:05 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3220603395' entity='client.rgw.rgw.compute-0.mewauo' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Sep 30 17:40:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:40:05 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Sep 30 17:40:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Sep 30 17:40:05 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.csizwd' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Sep 30 17:40:05 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:05 compute-0 ceph-mgr[74051]: [progress INFO root] complete: finished ev 7af5edeb-2b2f-47cf-8dbd-b1ea7a64b5ac (Updating mds.cephfs deployment (+2 -> 2))
Sep 30 17:40:05 compute-0 ceph-mgr[74051]: [progress INFO root] Completed event 7af5edeb-2b2f-47cf-8dbd-b1ea7a64b5ac (Updating mds.cephfs deployment (+2 -> 2)) in 4 seconds
Sep 30 17:40:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Sep 30 17:40:05 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Sep 30 17:40:05 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:05 compute-0 ceph-mgr[74051]: [progress INFO root] update: starting ev 7676e00d-59a7-4849-b383-c3e6ba889c1c (Updating nfs.cephfs deployment (+2 -> 2))
Sep 30 17:40:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 17:40:05 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:05 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.bsnzkg
Sep 30 17:40:05 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.bsnzkg
Sep 30 17:40:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.bsnzkg", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Sep 30 17:40:05 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.bsnzkg", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Sep 30 17:40:05 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.bsnzkg", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
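The audit entries above record cephadm issuing auth get-or-create through the mon command interface. A sketch of the equivalent hand-run CLI, with the entity and caps copied verbatim from the logged cmd:

  # Create (or fetch) the cephx key cephadm requests for the new NFS daemon.
  ceph auth get-or-create client.nfs.cephfs.0.0.compute-1.bsnzkg \
    mon 'allow r' \
    osd 'allow rw pool=.nfs namespace=cephfs'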
Sep 30 17:40:05 compute-0 ceph-mgr[74051]: [cephadm INFO root] Ensuring nfs.cephfs.0 is in the ganesha grace table
Sep 30 17:40:05 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.0 is in the ganesha grace table
Sep 30 17:40:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Sep 30 17:40:05 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Sep 30 17:40:05 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Sep 30 17:40:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:40:05 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:40:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Sep 30 17:40:06 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Sep 30 17:40:06 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Sep 30 17:40:06 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Sep 30 17:40:06 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Sep 30 17:40:06 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.bsnzkg-rgw
Sep 30 17:40:06 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.bsnzkg-rgw
Sep 30 17:40:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.bsnzkg-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Sep 30 17:40:06 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.bsnzkg-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Sep 30 17:40:06 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.bsnzkg-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Sep 30 17:40:06 compute-0 ceph-mgr[74051]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.0.0.compute-1.bsnzkg's ganesha conf is defaulting to empty
Sep 30 17:40:06 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.0.0.compute-1.bsnzkg's ganesha conf is defaulting to empty
Sep 30 17:40:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:40:06 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:40:06 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.0.0.compute-1.bsnzkg on compute-1
Sep 30 17:40:06 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.0.0.compute-1.bsnzkg on compute-1
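cephadm reports that it is deploying the Ganesha daemon on compute-1. A hypothetical follow-up (not captured in this log) to confirm placement through the orchestrator, with the service and daemon names mirroring the entries above:

  # Assumed verification commands after the deploy completes.
  ceph orch ls nfs                 # the nfs.cephfs service spec and placement count
  ceph orch ps --daemon-type nfs   # per-host status of the nfs.cephfs.* daemons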
Sep 30 17:40:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).mds e5 new map
Sep 30 17:40:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           btime 2025-09-30T17:40:06.167091+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-09-30T17:39:43.520183+0000
                                           modified        2025-09-30T17:40:05.134306+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14458}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 14458 members: 14458
                                           [mds.cephfs.compute-0.vrwlru{0:14458} state up:active seq 2 addr [v2:192.168.122.100:6806/311246388,v1:192.168.122.100:6807/311246388] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-1.wibdub{-1:24145} state up:standby seq 1 addr [v2:192.168.122.101:6804/1687530668,v1:192.168.122.101:6805/1687530668] compat {c=[1],r=[1],i=[1fff]}]
Sep 30 17:40:06 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/1687530668,v1:192.168.122.101:6805/1687530668] up:boot
Sep 30 17:40:06 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.vrwlru=up:active} 1 up:standby
Sep 30 17:40:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.wibdub"} v 0)
Sep 30 17:40:06 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.wibdub"}]: dispatch
Sep 30 17:40:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).mds e5 all = 0
Sep 30 17:40:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).mds e6 new map
Sep 30 17:40:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).mds e6 print_map
                                           e6
                                           btime 2025-09-30T17:40:06.178958+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-09-30T17:39:43.520183+0000
                                           modified        2025-09-30T17:40:05.134306+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14458}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 14458 members: 14458
                                           [mds.cephfs.compute-0.vrwlru{0:14458} state up:active seq 2 addr [v2:192.168.122.100:6806/311246388,v1:192.168.122.100:6807/311246388] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-1.wibdub{-1:24145} state up:standby seq 1 addr [v2:192.168.122.101:6804/1687530668,v1:192.168.122.101:6805/1687530668] compat {c=[1],r=[1],i=[1fff]}]
Sep 30 17:40:06 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.vrwlru=up:active} 1 up:standby
Sep 30 17:40:06 compute-0 sudo[97258]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbutobhkyofslkagswctdxfzundurpco ; /usr/bin/python3'
Sep 30 17:40:06 compute-0 sudo[97258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:40:06 compute-0 python3[97260]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:40:06 compute-0 podman[97261]: 2025-09-30 17:40:06.461094391 +0000 UTC m=+0.042759447 container create ad587ee9f5864cebda27c9a330f4be21082d22e49ed4e641b79d557b5cadb5a6 (image=quay.io/ceph/ceph:v19, name=nostalgic_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Sep 30 17:40:06 compute-0 systemd[1]: Started libpod-conmon-ad587ee9f5864cebda27c9a330f4be21082d22e49ed4e641b79d557b5cadb5a6.scope.
Sep 30 17:40:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v24: 197 pgs: 2 unknown, 195 active+clean; 450 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 4.5 KiB/s rd, 2.0 KiB/s wr, 8 op/s
Sep 30 17:40:06 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:40:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/661d8209344620186b178a61a3c80c2322f0c7d414d69d668fe48a2a4f94fd7f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/661d8209344620186b178a61a3c80c2322f0c7d414d69d668fe48a2a4f94fd7f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:06 compute-0 podman[97261]: 2025-09-30 17:40:06.530468497 +0000 UTC m=+0.112133563 container init ad587ee9f5864cebda27c9a330f4be21082d22e49ed4e641b79d557b5cadb5a6 (image=quay.io/ceph/ceph:v19, name=nostalgic_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Sep 30 17:40:06 compute-0 podman[97261]: 2025-09-30 17:40:06.43862629 +0000 UTC m=+0.020291356 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:40:06 compute-0 podman[97261]: 2025-09-30 17:40:06.537042227 +0000 UTC m=+0.118707273 container start ad587ee9f5864cebda27c9a330f4be21082d22e49ed4e641b79d557b5cadb5a6 (image=quay.io/ceph/ceph:v19, name=nostalgic_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Sep 30 17:40:06 compute-0 podman[97261]: 2025-09-30 17:40:06.540030654 +0000 UTC m=+0.121695720 container attach ad587ee9f5864cebda27c9a330f4be21082d22e49ed4e641b79d557b5cadb5a6 (image=quay.io/ceph/ceph:v19, name=nostalgic_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:40:06 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 5.1b deep-scrub starts
Sep 30 17:40:06 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 5.1b deep-scrub ok
Sep 30 17:40:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Sep 30 17:40:06 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3220603395' entity='client.rgw.rgw.compute-0.mewauo' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Sep 30 17:40:06 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.csizwd' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Sep 30 17:40:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e51 e51: 2 total, 2 up, 2 in
Sep 30 17:40:06 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e51: 2 total, 2 up, 2 in
Sep 30 17:40:06 compute-0 ceph-mon[73755]: 3.1c scrub starts
Sep 30 17:40:06 compute-0 ceph-mon[73755]: 3.1c scrub ok
Sep 30 17:40:06 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:06 compute-0 ceph-mon[73755]: osdmap e50: 2 total, 2 up, 2 in
Sep 30 17:40:06 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3220603395' entity='client.rgw.rgw.compute-0.mewauo' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Sep 30 17:40:06 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3982366829' entity='client.rgw.rgw.compute-1.csizwd' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Sep 30 17:40:06 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:06 compute-0 ceph-mon[73755]: from='client.? ' entity='client.rgw.rgw.compute-1.csizwd' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Sep 30 17:40:06 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:06 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:06 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:06 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:06 compute-0 ceph-mon[73755]: Creating key for client.nfs.cephfs.0.0.compute-1.bsnzkg
Sep 30 17:40:06 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.bsnzkg", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Sep 30 17:40:06 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.bsnzkg", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Sep 30 17:40:06 compute-0 ceph-mon[73755]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Sep 30 17:40:06 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Sep 30 17:40:06 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Sep 30 17:40:06 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:40:06 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Sep 30 17:40:06 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Sep 30 17:40:06 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.bsnzkg-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Sep 30 17:40:06 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.bsnzkg-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Sep 30 17:40:06 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:40:06 compute-0 ceph-mon[73755]: mds.? [v2:192.168.122.101:6804/1687530668,v1:192.168.122.101:6805/1687530668] up:boot
Sep 30 17:40:06 compute-0 ceph-mon[73755]: fsmap cephfs:1 {0=cephfs.compute-0.vrwlru=up:active} 1 up:standby
Sep 30 17:40:06 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.wibdub"}]: dispatch
Sep 30 17:40:06 compute-0 ceph-mon[73755]: fsmap cephfs:1 {0=cephfs.compute-0.vrwlru=up:active} 1 up:standby
Sep 30 17:40:06 compute-0 ceph-mon[73755]: 2.d scrub starts
Sep 30 17:40:06 compute-0 ceph-mon[73755]: 2.d scrub ok
Sep 30 17:40:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Sep 30 17:40:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/654377670' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Sep 30 17:40:07 compute-0 nostalgic_jones[97276]: 
Sep 30 17:40:07 compute-0 nostalgic_jones[97276]: {"mon":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":2},"mgr":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":2},"osd":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":2},"mds":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":2},"overall":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":8}}
Sep 30 17:40:07 compute-0 systemd[1]: libpod-ad587ee9f5864cebda27c9a330f4be21082d22e49ed4e641b79d557b5cadb5a6.scope: Deactivated successfully.
Sep 30 17:40:07 compute-0 podman[97261]: 2025-09-30 17:40:07.040683441 +0000 UTC m=+0.622348487 container died ad587ee9f5864cebda27c9a330f4be21082d22e49ed4e641b79d557b5cadb5a6 (image=quay.io/ceph/ceph:v19, name=nostalgic_jones, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Sep 30 17:40:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-661d8209344620186b178a61a3c80c2322f0c7d414d69d668fe48a2a4f94fd7f-merged.mount: Deactivated successfully.
Sep 30 17:40:07 compute-0 podman[97261]: 2025-09-30 17:40:07.074229489 +0000 UTC m=+0.655894535 container remove ad587ee9f5864cebda27c9a330f4be21082d22e49ed4e641b79d557b5cadb5a6 (image=quay.io/ceph/ceph:v19, name=nostalgic_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 17:40:07 compute-0 systemd[1]: libpod-conmon-ad587ee9f5864cebda27c9a330f4be21082d22e49ed4e641b79d557b5cadb5a6.scope: Deactivated successfully.
Sep 30 17:40:07 compute-0 sudo[97258]: pam_unix(sudo:session): session closed for user root
Sep 30 17:40:07 compute-0 ceph-mgr[74051]: [progress INFO root] Writing back 13 completed events
Sep 30 17:40:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Sep 30 17:40:07 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:07 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 5.1c deep-scrub starts
Sep 30 17:40:07 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 5.1c deep-scrub ok
Sep 30 17:40:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Sep 30 17:40:07 compute-0 ceph-mon[73755]: Rados config object exists: conf-nfs.cephfs
Sep 30 17:40:07 compute-0 ceph-mon[73755]: Creating key for client.nfs.cephfs.0.0.compute-1.bsnzkg-rgw
Sep 30 17:40:07 compute-0 ceph-mon[73755]: Bind address in nfs.cephfs.0.0.compute-1.bsnzkg's ganesha conf is defaulting to empty
Sep 30 17:40:07 compute-0 ceph-mon[73755]: Deploying daemon nfs.cephfs.0.0.compute-1.bsnzkg on compute-1
Sep 30 17:40:07 compute-0 ceph-mon[73755]: pgmap v24: 197 pgs: 2 unknown, 195 active+clean; 450 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 4.5 KiB/s rd, 2.0 KiB/s wr, 8 op/s
Sep 30 17:40:07 compute-0 ceph-mon[73755]: 5.1b deep-scrub starts
Sep 30 17:40:07 compute-0 ceph-mon[73755]: 5.1b deep-scrub ok
Sep 30 17:40:07 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3220603395' entity='client.rgw.rgw.compute-0.mewauo' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Sep 30 17:40:07 compute-0 ceph-mon[73755]: from='client.? ' entity='client.rgw.rgw.compute-1.csizwd' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Sep 30 17:40:07 compute-0 ceph-mon[73755]: osdmap e51: 2 total, 2 up, 2 in
Sep 30 17:40:07 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/654377670' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Sep 30 17:40:07 compute-0 ceph-mon[73755]: 2.10 scrub starts
Sep 30 17:40:07 compute-0 ceph-mon[73755]: 2.10 scrub ok
Sep 30 17:40:07 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e52 e52: 2 total, 2 up, 2 in
Sep 30 17:40:07 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e52: 2 total, 2 up, 2 in
Sep 30 17:40:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Sep 30 17:40:07 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3220603395' entity='client.rgw.rgw.compute-0.mewauo' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Sep 30 17:40:07 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 52 pg[12.0( empty local-lis/les=0/0 n=0 ec=52/52 lis/c=0/0 les/c/f=0/0/0 sis=52) [0] r=0 lpr=52 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Sep 30 17:40:07 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.csizwd' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Sep 30 17:40:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:40:08 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:40:08 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 17:40:08 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:08 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-0.syzvbh
Sep 30 17:40:08 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-0.syzvbh
Sep 30 17:40:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-0.syzvbh", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Sep 30 17:40:08 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-0.syzvbh", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Sep 30 17:40:08 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-0.syzvbh", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Sep 30 17:40:08 compute-0 ceph-mgr[74051]: [cephadm INFO root] Ensuring nfs.cephfs.1 is in the ganesha grace table
Sep 30 17:40:08 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.1 is in the ganesha grace table
Sep 30 17:40:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Sep 30 17:40:08 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Sep 30 17:40:08 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Sep 30 17:40:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:40:08 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:40:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v27: 198 pgs: 3 unknown, 195 active+clean; 450 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:40:08 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Sep 30 17:40:08 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Sep 30 17:40:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Sep 30 17:40:08 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3220603395' entity='client.rgw.rgw.compute-0.mewauo' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Sep 30 17:40:08 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.csizwd' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Sep 30 17:40:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e53 e53: 2 total, 2 up, 2 in
Sep 30 17:40:08 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e53: 2 total, 2 up, 2 in
Sep 30 17:40:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Sep 30 17:40:08 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3220603395' entity='client.rgw.rgw.compute-0.mewauo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
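Here the RGW client raises the PG autoscaler bias on its metadata pool via the mon command interface. The same setting expressed as a plain CLI call, with pool, variable and value copied from the logged cmd:

  # Bias the autoscaler to give default.rgw.meta 4x the PG count its size alone would suggest.
  ceph osd pool set default.rgw.meta pg_autoscale_bias 4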
Sep 30 17:40:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).mds e7 new map
Sep 30 17:40:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).mds e7 print_map
                                           e7
                                           btime 2025-09-30T17:40:08.924517+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        7
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-09-30T17:39:43.520183+0000
                                           modified        2025-09-30T17:40:08.164750+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14458}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 14458 members: 14458
                                           [mds.cephfs.compute-0.vrwlru{0:14458} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.100:6806/311246388,v1:192.168.122.100:6807/311246388] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-1.wibdub{-1:24145} state up:standby seq 1 addr [v2:192.168.122.101:6804/1687530668,v1:192.168.122.101:6805/1687530668] compat {c=[1],r=[1],i=[1fff]}]
Sep 30 17:40:08 compute-0 ceph-mon[73755]: 5.1c deep-scrub starts
Sep 30 17:40:08 compute-0 ceph-mds[97072]: mds.cephfs.compute-0.vrwlru Updating MDS map to version 7 from mon.0
Sep 30 17:40:08 compute-0 ceph-mon[73755]: 5.1c deep-scrub ok
Sep 30 17:40:08 compute-0 ceph-mon[73755]: osdmap e52: 2 total, 2 up, 2 in
Sep 30 17:40:08 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3220603395' entity='client.rgw.rgw.compute-0.mewauo' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Sep 30 17:40:08 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3982366829' entity='client.rgw.rgw.compute-1.csizwd' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Sep 30 17:40:08 compute-0 ceph-mon[73755]: from='client.? ' entity='client.rgw.rgw.compute-1.csizwd' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Sep 30 17:40:08 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:08 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:08 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:08 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-0.syzvbh", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Sep 30 17:40:08 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-0.syzvbh", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Sep 30 17:40:08 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Sep 30 17:40:08 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Sep 30 17:40:08 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:40:08 compute-0 ceph-mon[73755]: 2.c scrub starts
Sep 30 17:40:08 compute-0 ceph-mon[73755]: 2.c scrub ok
Sep 30 17:40:08 compute-0 ceph-mon[73755]: 5.18 scrub starts
Sep 30 17:40:08 compute-0 ceph-mon[73755]: 5.18 scrub ok
Sep 30 17:40:08 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 53 pg[12.0( empty local-lis/les=52/53 n=0 ec=52/52 lis/c=0/0 les/c/f=0/0/0 sis=52) [0] r=0 lpr=52 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:08 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/311246388,v1:192.168.122.100:6807/311246388] up:active
Sep 30 17:40:08 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.vrwlru=up:active} 1 up:standby
Sep 30 17:40:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Sep 30 17:40:08 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.csizwd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Sep 30 17:40:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:40:09 compute-0 ceph-mds[97072]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Sep 30 17:40:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mds-cephfs-compute-0-vrwlru[97068]: 2025-09-30T17:40:09.143+0000 7f45f87e3640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Sep 30 17:40:09 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Sep 30 17:40:09 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Sep 30 17:40:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Sep 30 17:40:09 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3220603395' entity='client.rgw.rgw.compute-0.mewauo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Sep 30 17:40:09 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.csizwd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Sep 30 17:40:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e54 e54: 2 total, 2 up, 2 in
Sep 30 17:40:09 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e54: 2 total, 2 up, 2 in
Sep 30 17:40:09 compute-0 ceph-mon[73755]: Creating key for client.nfs.cephfs.1.0.compute-0.syzvbh
Sep 30 17:40:09 compute-0 ceph-mon[73755]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Sep 30 17:40:09 compute-0 ceph-mon[73755]: pgmap v27: 198 pgs: 3 unknown, 195 active+clean; 450 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:40:09 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3220603395' entity='client.rgw.rgw.compute-0.mewauo' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Sep 30 17:40:09 compute-0 ceph-mon[73755]: from='client.? ' entity='client.rgw.rgw.compute-1.csizwd' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Sep 30 17:40:09 compute-0 ceph-mon[73755]: osdmap e53: 2 total, 2 up, 2 in
Sep 30 17:40:09 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3220603395' entity='client.rgw.rgw.compute-0.mewauo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Sep 30 17:40:09 compute-0 ceph-mon[73755]: mds.? [v2:192.168.122.100:6806/311246388,v1:192.168.122.100:6807/311246388] up:active
Sep 30 17:40:09 compute-0 ceph-mon[73755]: fsmap cephfs:1 {0=cephfs.compute-0.vrwlru=up:active} 1 up:standby
Sep 30 17:40:09 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3982366829' entity='client.rgw.rgw.compute-1.csizwd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Sep 30 17:40:09 compute-0 ceph-mon[73755]: from='client.? ' entity='client.rgw.rgw.compute-1.csizwd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Sep 30 17:40:09 compute-0 ceph-mon[73755]: 2.13 scrub starts
Sep 30 17:40:09 compute-0 ceph-mon[73755]: 2.13 scrub ok
Sep 30 17:40:09 compute-0 ceph-mon[73755]: 4.1b scrub starts
Sep 30 17:40:09 compute-0 ceph-mon[73755]: 4.1b scrub ok
Sep 30 17:40:09 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3220603395' entity='client.rgw.rgw.compute-0.mewauo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Sep 30 17:40:09 compute-0 ceph-mon[73755]: from='client.? ' entity='client.rgw.rgw.compute-1.csizwd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Sep 30 17:40:09 compute-0 ceph-mon[73755]: osdmap e54: 2 total, 2 up, 2 in
Sep 30 17:40:10 compute-0 radosgw[96126]: v1 topic migration: starting v1 topic migration..
Sep 30 17:40:10 compute-0 radosgw[96126]: LDAP not started since no server URIs were provided in the configuration.
Sep 30 17:40:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-rgw-rgw-compute-0-mewauo[96122]: 2025-09-30T17:40:10.136+0000 7f272a444980 -1 LDAP not started since no server URIs were provided in the configuration.
Sep 30 17:40:10 compute-0 radosgw[96126]: v1 topic migration: finished v1 topic migration
Sep 30 17:40:10 compute-0 radosgw[96126]: framework: beast
Sep 30 17:40:10 compute-0 radosgw[96126]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Sep 30 17:40:10 compute-0 radosgw[96126]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Sep 30 17:40:10 compute-0 radosgw[96126]: starting handler: beast
Sep 30 17:40:10 compute-0 radosgw[96126]: set uid:gid to 167:167 (ceph:ceph)
Sep 30 17:40:10 compute-0 radosgw[96126]: mgrc service_daemon_register rgw.14452 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.mewauo,kernel_description=#1 SMP PREEMPT_DYNAMIC Mon Sep 15 21:46:13 UTC 2025,kernel_version=5.14.0-617.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864116,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=cf56360a-b45c-423e-ade6-44203ee5bb4f,zone_name=default,zonegroup_id=db97795f-33ec-4db5-9ea0-da39adf34835,zonegroup_name=default}
Sep 30 17:40:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).mds e8 new map
Sep 30 17:40:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).mds e8 print_map
                                           e8
                                           btime 2025-09-30T17:40:10.422879+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        7
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-09-30T17:39:43.520183+0000
                                           modified        2025-09-30T17:40:08.164750+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14458}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 14458 members: 14458
                                           [mds.cephfs.compute-0.vrwlru{0:14458} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.100:6806/311246388,v1:192.168.122.100:6807/311246388] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-1.wibdub{-1:24145} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/1687530668,v1:192.168.122.101:6805/1687530668] compat {c=[1],r=[1],i=[1fff]}]
Sep 30 17:40:10 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/1687530668,v1:192.168.122.101:6805/1687530668] up:standby
Sep 30 17:40:10 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.vrwlru=up:active} 1 up:standby
Sep 30 17:40:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v30: 198 pgs: 1 creating+peering, 197 active+clean; 453 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 4.2 KiB/s rd, 6.2 KiB/s wr, 19 op/s
Sep 30 17:40:10 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Sep 30 17:40:10 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Sep 30 17:40:11 compute-0 ceph-mon[73755]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Sep 30 17:40:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Sep 30 17:40:11 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Sep 30 17:40:11 compute-0 ceph-mon[73755]: mds.? [v2:192.168.122.101:6804/1687530668,v1:192.168.122.101:6805/1687530668] up:standby
Sep 30 17:40:11 compute-0 ceph-mon[73755]: fsmap cephfs:1 {0=cephfs.compute-0.vrwlru=up:active} 1 up:standby
Sep 30 17:40:11 compute-0 ceph-mon[73755]: 2.1 scrub starts
Sep 30 17:40:11 compute-0 ceph-mon[73755]: 2.1 scrub ok
Sep 30 17:40:11 compute-0 ceph-mon[73755]: pgmap v30: 198 pgs: 1 creating+peering, 197 active+clean; 453 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 4.2 KiB/s rd, 6.2 KiB/s wr, 19 op/s
Sep 30 17:40:11 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Sep 30 17:40:11 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Sep 30 17:40:11 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Sep 30 17:40:11 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-0.syzvbh-rgw
Sep 30 17:40:11 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-0.syzvbh-rgw
Sep 30 17:40:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-0.syzvbh-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Sep 30 17:40:11 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-0.syzvbh-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Sep 30 17:40:11 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-0.syzvbh-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Sep 30 17:40:11 compute-0 ceph-mgr[74051]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.1.0.compute-0.syzvbh's ganesha conf is defaulting to empty
Sep 30 17:40:11 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.1.0.compute-0.syzvbh's ganesha conf is defaulting to empty
Sep 30 17:40:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:40:11 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:40:11 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.1.0.compute-0.syzvbh on compute-0
Sep 30 17:40:11 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.1.0.compute-0.syzvbh on compute-0
Sep 30 17:40:11 compute-0 sudo[97387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:40:11 compute-0 sudo[97387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:40:11 compute-0 sudo[97387]: pam_unix(sudo:session): session closed for user root
Sep 30 17:40:11 compute-0 sudo[97412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:40:11 compute-0 sudo[97412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:40:11 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Sep 30 17:40:11 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Sep 30 17:40:12 compute-0 podman[97477]: 2025-09-30 17:40:12.236304411 +0000 UTC m=+0.036907116 container create 757cc052057373d332b4f40de202dba0a9996052b48bb5c7b9383966c4c49298 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Sep 30 17:40:12 compute-0 systemd[1]: Started libpod-conmon-757cc052057373d332b4f40de202dba0a9996052b48bb5c7b9383966c4c49298.scope.
Sep 30 17:40:12 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:40:12 compute-0 podman[97477]: 2025-09-30 17:40:12.310964544 +0000 UTC m=+0.111567299 container init 757cc052057373d332b4f40de202dba0a9996052b48bb5c7b9383966c4c49298 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:40:12 compute-0 podman[97477]: 2025-09-30 17:40:12.218914521 +0000 UTC m=+0.019517246 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:40:12 compute-0 podman[97477]: 2025-09-30 17:40:12.319107795 +0000 UTC m=+0.119710500 container start 757cc052057373d332b4f40de202dba0a9996052b48bb5c7b9383966c4c49298 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_montalcini, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Sep 30 17:40:12 compute-0 romantic_montalcini[97493]: 167 167
Sep 30 17:40:12 compute-0 systemd[1]: libpod-757cc052057373d332b4f40de202dba0a9996052b48bb5c7b9383966c4c49298.scope: Deactivated successfully.
Sep 30 17:40:12 compute-0 podman[97477]: 2025-09-30 17:40:12.326179418 +0000 UTC m=+0.126782153 container attach 757cc052057373d332b4f40de202dba0a9996052b48bb5c7b9383966c4c49298 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 17:40:12 compute-0 podman[97477]: 2025-09-30 17:40:12.326745602 +0000 UTC m=+0.127348317 container died 757cc052057373d332b4f40de202dba0a9996052b48bb5c7b9383966c4c49298 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:40:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-b506b22ec6c3472012d7b7c3dc2ea81814f1c28f24bed110917231827aa6bdc3-merged.mount: Deactivated successfully.
Sep 30 17:40:12 compute-0 podman[97477]: 2025-09-30 17:40:12.374937049 +0000 UTC m=+0.175539744 container remove 757cc052057373d332b4f40de202dba0a9996052b48bb5c7b9383966c4c49298 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_montalcini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:40:12 compute-0 systemd[1]: libpod-conmon-757cc052057373d332b4f40de202dba0a9996052b48bb5c7b9383966c4c49298.scope: Deactivated successfully.
Sep 30 17:40:12 compute-0 systemd[1]: Reloading.
Sep 30 17:40:12 compute-0 systemd-rc-local-generator[97531]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:40:12 compute-0 systemd-sysv-generator[97536]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:40:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v31: 198 pgs: 1 creating+peering, 197 active+clean; 453 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 3.0 KiB/s rd, 4.4 KiB/s wr, 14 op/s
Sep 30 17:40:12 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:40:12 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:40:12 compute-0 ceph-mon[73755]: 4.18 scrub starts
Sep 30 17:40:12 compute-0 ceph-mon[73755]: 4.18 scrub ok
Sep 30 17:40:12 compute-0 ceph-mon[73755]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Sep 30 17:40:12 compute-0 ceph-mon[73755]: 6.1b scrub starts
Sep 30 17:40:12 compute-0 ceph-mon[73755]: 6.1b scrub ok
Sep 30 17:40:12 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Sep 30 17:40:12 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Sep 30 17:40:12 compute-0 ceph-mon[73755]: Rados config object exists: conf-nfs.cephfs
Sep 30 17:40:12 compute-0 ceph-mon[73755]: Creating key for client.nfs.cephfs.1.0.compute-0.syzvbh-rgw
Sep 30 17:40:12 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-0.syzvbh-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Sep 30 17:40:12 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-0.syzvbh-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Sep 30 17:40:12 compute-0 ceph-mon[73755]: Bind address in nfs.cephfs.1.0.compute-0.syzvbh's ganesha conf is defaulting to empty
Sep 30 17:40:12 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:40:12 compute-0 ceph-mon[73755]: Deploying daemon nfs.cephfs.1.0.compute-0.syzvbh on compute-0
Sep 30 17:40:12 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:40:12 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:40:12 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:40:12 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:40:12 compute-0 systemd[1]: Reloading.
Sep 30 17:40:12 compute-0 systemd-rc-local-generator[97575]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:40:12 compute-0 systemd-sysv-generator[97579]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:40:12 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 6.1e scrub starts
Sep 30 17:40:12 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 6.1e scrub ok
Sep 30 17:40:12 compute-0 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 17:40:13 compute-0 podman[97629]: 2025-09-30 17:40:13.148306286 +0000 UTC m=+0.037183844 container create 9e97ff95260c5eee634ed5be7e6f6acdd2a5f44fb41d87718213241efcab83ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Sep 30 17:40:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92686d956068f925812fc42d9f71b6ab54a3b07b027d2d050ec8ab17b15e0227/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92686d956068f925812fc42d9f71b6ab54a3b07b027d2d050ec8ab17b15e0227/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92686d956068f925812fc42d9f71b6ab54a3b07b027d2d050ec8ab17b15e0227/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92686d956068f925812fc42d9f71b6ab54a3b07b027d2d050ec8ab17b15e0227/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-0.syzvbh-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:13 compute-0 podman[97629]: 2025-09-30 17:40:13.209035277 +0000 UTC m=+0.097912855 container init 9e97ff95260c5eee634ed5be7e6f6acdd2a5f44fb41d87718213241efcab83ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:40:13 compute-0 podman[97629]: 2025-09-30 17:40:13.214718764 +0000 UTC m=+0.103596322 container start 9e97ff95260c5eee634ed5be7e6f6acdd2a5f44fb41d87718213241efcab83ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Sep 30 17:40:13 compute-0 bash[97629]: 9e97ff95260c5eee634ed5be7e6f6acdd2a5f44fb41d87718213241efcab83ac
Sep 30 17:40:13 compute-0 podman[97629]: 2025-09-30 17:40:13.131796548 +0000 UTC m=+0.020674126 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Sep 30 17:40:13 compute-0 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Sep 30 17:40:13 compute-0 sudo[97412]: pam_unix(sudo:session): session closed for user root
Sep 30 17:40:13 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:40:13 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:13 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 17:40:13 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 17:40:13 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 17:40:13 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:13 compute-0 ceph-mgr[74051]: [progress INFO root] complete: finished ev 7676e00d-59a7-4849-b383-c3e6ba889c1c (Updating nfs.cephfs deployment (+2 -> 2))
Sep 30 17:40:13 compute-0 ceph-mgr[74051]: [progress INFO root] Completed event 7676e00d-59a7-4849-b383-c3e6ba889c1c (Updating nfs.cephfs deployment (+2 -> 2)) in 7 seconds
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000002:nfs.cephfs.1: -2
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:40:13 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 17:40:13 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:13 compute-0 ceph-mgr[74051]: [progress INFO root] update: starting ev a1a22099-ea5a-4d43-a5f2-10912873743e (Updating ingress.nfs.cephfs deployment (+4 -> 4))
Sep 30 17:40:13 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/monitor_password}] v 0)
Sep 30 17:40:13 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Sep 30 17:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 17:40:13 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-1.iacknv on compute-1
Sep 30 17:40:13 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-1.iacknv on compute-1
Sep 30 17:40:13 compute-0 ceph-mon[73755]: 7.1f scrub starts
Sep 30 17:40:13 compute-0 ceph-mon[73755]: 7.1f scrub ok
Sep 30 17:40:13 compute-0 ceph-mon[73755]: pgmap v31: 198 pgs: 1 creating+peering, 197 active+clean; 453 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 3.0 KiB/s rd, 4.4 KiB/s wr, 14 op/s
Sep 30 17:40:13 compute-0 ceph-mon[73755]: 7.18 scrub starts
Sep 30 17:40:13 compute-0 ceph-mon[73755]: 7.18 scrub ok
Sep 30 17:40:13 compute-0 ceph-mon[73755]: 6.1e scrub starts
Sep 30 17:40:13 compute-0 ceph-mon[73755]: 6.1e scrub ok
Sep 30 17:40:13 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:13 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:13 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:13 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:13 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:13 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 7.1c deep-scrub starts
Sep 30 17:40:13 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 7.1c deep-scrub ok
Sep 30 17:40:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:40:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v32: 198 pgs: 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail; 169 KiB/s rd, 10 KiB/s wr, 332 op/s
Sep 30 17:40:14 compute-0 ceph-mon[73755]: Deploying daemon haproxy.nfs.cephfs.compute-1.iacknv on compute-1
Sep 30 17:40:14 compute-0 ceph-mon[73755]: 7.1b scrub starts
Sep 30 17:40:14 compute-0 ceph-mon[73755]: 7.1b scrub ok
Sep 30 17:40:14 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 6.1c deep-scrub starts
Sep 30 17:40:14 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 6.1c deep-scrub ok
Sep 30 17:40:15 compute-0 ceph-mon[73755]: 7.1c deep-scrub starts
Sep 30 17:40:15 compute-0 ceph-mon[73755]: 7.1c deep-scrub ok
Sep 30 17:40:15 compute-0 ceph-mon[73755]: pgmap v32: 198 pgs: 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail; 169 KiB/s rd, 10 KiB/s wr, 332 op/s
Sep 30 17:40:15 compute-0 ceph-mon[73755]: 7.1e scrub starts
Sep 30 17:40:15 compute-0 ceph-mon[73755]: 7.1e scrub ok
Sep 30 17:40:15 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Sep 30 17:40:15 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Sep 30 17:40:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v33: 198 pgs: 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail; 139 KiB/s rd, 8.2 KiB/s wr, 273 op/s
Sep 30 17:40:16 compute-0 ceph-mon[73755]: 6.1c deep-scrub starts
Sep 30 17:40:16 compute-0 ceph-mon[73755]: 6.1c deep-scrub ok
Sep 30 17:40:16 compute-0 ceph-mon[73755]: 6.18 scrub starts
Sep 30 17:40:16 compute-0 ceph-mon[73755]: 6.18 scrub ok
Sep 30 17:40:16 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 6.12 deep-scrub starts
Sep 30 17:40:16 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 6.12 deep-scrub ok
Sep 30 17:40:17 compute-0 ceph-mgr[74051]: [progress INFO root] Writing back 14 completed events
Sep 30 17:40:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Sep 30 17:40:17 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:17 compute-0 ceph-mgr[74051]: [progress INFO root] Completed event 12bbad90-e416-49c0-9803-b36946103f89 (Global Recovery Event) in 15 seconds
Sep 30 17:40:17 compute-0 ceph-mon[73755]: 7.12 scrub starts
Sep 30 17:40:17 compute-0 ceph-mon[73755]: 7.12 scrub ok
Sep 30 17:40:17 compute-0 ceph-mon[73755]: 6.1f scrub starts
Sep 30 17:40:17 compute-0 ceph-mon[73755]: 6.1f scrub ok
Sep 30 17:40:17 compute-0 ceph-mon[73755]: pgmap v33: 198 pgs: 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail; 139 KiB/s rd, 8.2 KiB/s wr, 273 op/s
Sep 30 17:40:17 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:17 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 6.17 deep-scrub starts
Sep 30 17:40:17 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 6.17 deep-scrub ok
Sep 30 17:40:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v34: 198 pgs: 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail; 114 KiB/s rd, 4.3 KiB/s wr, 220 op/s
Sep 30 17:40:18 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:40:18 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:18 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:40:18 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:18 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Sep 30 17:40:18 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:18 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-0.jcdnha on compute-0
Sep 30 17:40:18 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-0.jcdnha on compute-0
Sep 30 17:40:18 compute-0 sudo[97699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:40:18 compute-0 sudo[97699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:40:18 compute-0 sudo[97699]: pam_unix(sudo:session): session closed for user root
Sep 30 17:40:18 compute-0 sudo[97724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:40:18 compute-0 sudo[97724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:40:18 compute-0 ceph-mon[73755]: 6.12 deep-scrub starts
Sep 30 17:40:18 compute-0 ceph-mon[73755]: 6.12 deep-scrub ok
Sep 30 17:40:18 compute-0 ceph-mon[73755]: 6.c scrub starts
Sep 30 17:40:18 compute-0 ceph-mon[73755]: 6.c scrub ok
Sep 30 17:40:18 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:18 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:18 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:18 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Sep 30 17:40:18 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Sep 30 17:40:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:40:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:19 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:19 compute-0 ceph-mon[73755]: 6.17 deep-scrub starts
Sep 30 17:40:19 compute-0 ceph-mon[73755]: 6.17 deep-scrub ok
Sep 30 17:40:19 compute-0 ceph-mon[73755]: 6.1 deep-scrub starts
Sep 30 17:40:19 compute-0 ceph-mon[73755]: 6.1 deep-scrub ok
Sep 30 17:40:19 compute-0 ceph-mon[73755]: pgmap v34: 198 pgs: 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail; 114 KiB/s rd, 4.3 KiB/s wr, 220 op/s
Sep 30 17:40:19 compute-0 ceph-mon[73755]: Deploying daemon haproxy.nfs.cephfs.compute-0.jcdnha on compute-0
Sep 30 17:40:19 compute-0 ceph-mon[73755]: 6.6 deep-scrub starts
Sep 30 17:40:19 compute-0 ceph-mon[73755]: 6.6 deep-scrub ok
Sep 30 17:40:19 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Sep 30 17:40:19 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Sep 30 17:40:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v35: 198 pgs: 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail; 104 KiB/s rd, 3.8 KiB/s wr, 198 op/s
Sep 30 17:40:20 compute-0 ceph-mon[73755]: 7.11 scrub starts
Sep 30 17:40:20 compute-0 ceph-mon[73755]: 7.11 scrub ok
Sep 30 17:40:20 compute-0 ceph-mon[73755]: 7.6 scrub starts
Sep 30 17:40:20 compute-0 ceph-mon[73755]: 7.6 scrub ok
Sep 30 17:40:20 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Sep 30 17:40:20 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Sep 30 17:40:21 compute-0 podman[97787]: 2025-09-30 17:40:21.458116175 +0000 UTC m=+2.291212871 container create 2d6c897562ce94121c85cc213d8115da3f709d9249c2783304624ea55b622bb0 (image=quay.io/ceph/haproxy:2.3, name=ecstatic_hoover)
Sep 30 17:40:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:21 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff2c000fb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:21 compute-0 systemd[1]: Started libpod-conmon-2d6c897562ce94121c85cc213d8115da3f709d9249c2783304624ea55b622bb0.scope.
Sep 30 17:40:21 compute-0 podman[97787]: 2025-09-30 17:40:21.441729271 +0000 UTC m=+2.274825967 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Sep 30 17:40:21 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:40:21 compute-0 podman[97787]: 2025-09-30 17:40:21.515663974 +0000 UTC m=+2.348760700 container init 2d6c897562ce94121c85cc213d8115da3f709d9249c2783304624ea55b622bb0 (image=quay.io/ceph/haproxy:2.3, name=ecstatic_hoover)
Sep 30 17:40:21 compute-0 podman[97787]: 2025-09-30 17:40:21.523178599 +0000 UTC m=+2.356275295 container start 2d6c897562ce94121c85cc213d8115da3f709d9249c2783304624ea55b622bb0 (image=quay.io/ceph/haproxy:2.3, name=ecstatic_hoover)
Sep 30 17:40:21 compute-0 podman[97787]: 2025-09-30 17:40:21.526567166 +0000 UTC m=+2.359663892 container attach 2d6c897562ce94121c85cc213d8115da3f709d9249c2783304624ea55b622bb0 (image=quay.io/ceph/haproxy:2.3, name=ecstatic_hoover)
Sep 30 17:40:21 compute-0 ecstatic_hoover[97904]: 0 0
Sep 30 17:40:21 compute-0 systemd[1]: libpod-2d6c897562ce94121c85cc213d8115da3f709d9249c2783304624ea55b622bb0.scope: Deactivated successfully.
Sep 30 17:40:21 compute-0 podman[97787]: 2025-09-30 17:40:21.52941109 +0000 UTC m=+2.362507786 container died 2d6c897562ce94121c85cc213d8115da3f709d9249c2783304624ea55b622bb0 (image=quay.io/ceph/haproxy:2.3, name=ecstatic_hoover)
Sep 30 17:40:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a980ff2c72cbae7438f91fdc9ced870489e6a32a4af347b4054a04b90d0b524-merged.mount: Deactivated successfully.
Sep 30 17:40:21 compute-0 podman[97787]: 2025-09-30 17:40:21.571499309 +0000 UTC m=+2.404596025 container remove 2d6c897562ce94121c85cc213d8115da3f709d9249c2783304624ea55b622bb0 (image=quay.io/ceph/haproxy:2.3, name=ecstatic_hoover)
Sep 30 17:40:21 compute-0 systemd[1]: libpod-conmon-2d6c897562ce94121c85cc213d8115da3f709d9249c2783304624ea55b622bb0.scope: Deactivated successfully.
Sep 30 17:40:21 compute-0 systemd[1]: Reloading.
Sep 30 17:40:21 compute-0 systemd-rc-local-generator[97951]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:40:21 compute-0 systemd-sysv-generator[97955]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:40:21 compute-0 systemd[1]: Reloading.
Sep 30 17:40:21 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Sep 30 17:40:21 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Sep 30 17:40:21 compute-0 ceph-mon[73755]: 7.16 scrub starts
Sep 30 17:40:21 compute-0 ceph-mon[73755]: 7.16 scrub ok
Sep 30 17:40:21 compute-0 ceph-mon[73755]: pgmap v35: 198 pgs: 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail; 104 KiB/s rd, 3.8 KiB/s wr, 198 op/s
Sep 30 17:40:21 compute-0 ceph-mon[73755]: 7.17 scrub starts
Sep 30 17:40:21 compute-0 ceph-mon[73755]: 7.17 scrub ok
Sep 30 17:40:21 compute-0 ceph-mon[73755]: 6.4 scrub starts
Sep 30 17:40:21 compute-0 ceph-mon[73755]: 6.4 scrub ok
Sep 30 17:40:21 compute-0 systemd-rc-local-generator[97993]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:40:21 compute-0 systemd-sysv-generator[97996]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:40:22 compute-0 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-0.jcdnha for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 17:40:22 compute-0 podman[98048]: 2025-09-30 17:40:22.367162332 +0000 UTC m=+0.040119379 container create e1cdc51276e16f535030605d2a18f629e9fe31bf73cf0a7411e18214526b68e5 (image=quay.io/ceph/haproxy:2.3, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha)
Sep 30 17:40:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de5507b9cf719a139ef7f6634cc4cd673dece300ba29a74e795b9e5caaf31a2/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:22 compute-0 podman[98048]: 2025-09-30 17:40:22.414313193 +0000 UTC m=+0.087270260 container init e1cdc51276e16f535030605d2a18f629e9fe31bf73cf0a7411e18214526b68e5 (image=quay.io/ceph/haproxy:2.3, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha)
Sep 30 17:40:22 compute-0 podman[98048]: 2025-09-30 17:40:22.418629304 +0000 UTC m=+0.091586361 container start e1cdc51276e16f535030605d2a18f629e9fe31bf73cf0a7411e18214526b68e5 (image=quay.io/ceph/haproxy:2.3, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha)
Sep 30 17:40:22 compute-0 bash[98048]: e1cdc51276e16f535030605d2a18f629e9fe31bf73cf0a7411e18214526b68e5
Sep 30 17:40:22 compute-0 podman[98048]: 2025-09-30 17:40:22.347531454 +0000 UTC m=+0.020488541 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Sep 30 17:40:22 compute-0 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-0.jcdnha for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:40:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [NOTICE] 272/174022 (2) : New worker #1 (4) forked
Sep 30 17:40:22 compute-0 sudo[97724]: pam_unix(sudo:session): session closed for user root
Sep 30 17:40:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:40:22 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:40:22 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Sep 30 17:40:22 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/keepalived_password}] v 0)
Sep 30 17:40:22 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:22 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Sep 30 17:40:22 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Sep 30 17:40:22 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Sep 30 17:40:22 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Sep 30 17:40:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v36: 198 pgs: 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail; 91 KiB/s rd, 3.3 KiB/s wr, 175 op/s
Sep 30 17:40:22 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-0.miadhc on compute-0
Sep 30 17:40:22 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-0.miadhc on compute-0
Sep 30 17:40:22 compute-0 sudo[98078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:40:22 compute-0 sudo[98078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:40:22 compute-0 sudo[98078]: pam_unix(sudo:session): session closed for user root
Sep 30 17:40:22 compute-0 sudo[98103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:40:22 compute-0 sudo[98103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:40:22 compute-0 ceph-mgr[74051]: [progress INFO root] Writing back 15 completed events
Sep 30 17:40:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Sep 30 17:40:22 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:22 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Sep 30 17:40:22 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Sep 30 17:40:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:23 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff18000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:23 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff14000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:23 compute-0 ceph-mon[73755]: 7.15 scrub starts
Sep 30 17:40:23 compute-0 ceph-mon[73755]: 7.15 scrub ok
Sep 30 17:40:23 compute-0 ceph-mon[73755]: 6.0 scrub starts
Sep 30 17:40:23 compute-0 ceph-mon[73755]: 6.0 scrub ok
Sep 30 17:40:23 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:23 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:23 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:23 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:23 compute-0 ceph-mon[73755]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Sep 30 17:40:23 compute-0 ceph-mon[73755]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Sep 30 17:40:23 compute-0 ceph-mon[73755]: pgmap v36: 198 pgs: 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail; 91 KiB/s rd, 3.3 KiB/s wr, 175 op/s
Sep 30 17:40:23 compute-0 ceph-mon[73755]: Deploying daemon keepalived.nfs.cephfs.compute-0.miadhc on compute-0
Sep 30 17:40:23 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:23 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 6.a scrub starts
Sep 30 17:40:23 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 6.a scrub ok
Sep 30 17:40:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:40:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v37: 198 pgs: 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail; 91 KiB/s rd, 3.2 KiB/s wr, 174 op/s
Sep 30 17:40:24 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Sep 30 17:40:24 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Sep 30 17:40:24 compute-0 ceph-mon[73755]: 6.15 scrub starts
Sep 30 17:40:24 compute-0 ceph-mon[73755]: 6.15 scrub ok
Sep 30 17:40:24 compute-0 ceph-mon[73755]: 7.3 scrub starts
Sep 30 17:40:24 compute-0 ceph-mon[73755]: 7.3 scrub ok
Sep 30 17:40:24 compute-0 ceph-mon[73755]: 6.a scrub starts
Sep 30 17:40:24 compute-0 ceph-mon[73755]: 6.a scrub ok
Sep 30 17:40:24 compute-0 ceph-mon[73755]: 7.2 deep-scrub starts
Sep 30 17:40:24 compute-0 ceph-mon[73755]: 7.2 deep-scrub ok
Sep 30 17:40:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:25 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44001bd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:25 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff2c001cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:25 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Sep 30 17:40:25 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Sep 30 17:40:26 compute-0 ceph-mon[73755]: pgmap v37: 198 pgs: 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail; 91 KiB/s rd, 3.2 KiB/s wr, 174 op/s
Sep 30 17:40:26 compute-0 ceph-mon[73755]: 6.8 scrub starts
Sep 30 17:40:26 compute-0 ceph-mon[73755]: 6.8 scrub ok
Sep 30 17:40:26 compute-0 ceph-mon[73755]: 7.4 scrub starts
Sep 30 17:40:26 compute-0 ceph-mon[73755]: 7.4 scrub ok
Sep 30 17:40:26 compute-0 ceph-mon[73755]: 6.7 scrub starts
Sep 30 17:40:26 compute-0 ceph-mon[73755]: 6.7 scrub ok
Sep 30 17:40:26 compute-0 podman[98167]: 2025-09-30 17:40:26.239371241 +0000 UTC m=+3.272574090 container create 4993dbb98eeb041c5105ed75c0b365e07821f5d2110530b660839a0c555dd9fb (image=quay.io/ceph/keepalived:2.2.4, name=infallible_burnell, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, distribution-scope=public, architecture=x86_64, io.buildah.version=1.28.2, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4)
Sep 30 17:40:26 compute-0 systemd[1]: Started libpod-conmon-4993dbb98eeb041c5105ed75c0b365e07821f5d2110530b660839a0c555dd9fb.scope.
Sep 30 17:40:26 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:40:26 compute-0 podman[98167]: 2025-09-30 17:40:26.224868135 +0000 UTC m=+3.258071014 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Sep 30 17:40:26 compute-0 podman[98167]: 2025-09-30 17:40:26.316086826 +0000 UTC m=+3.349289745 container init 4993dbb98eeb041c5105ed75c0b365e07821f5d2110530b660839a0c555dd9fb (image=quay.io/ceph/keepalived:2.2.4, name=infallible_burnell, io.buildah.version=1.28.2, name=keepalived, version=2.2.4, distribution-scope=public, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, vendor=Red Hat, Inc., release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container)
Sep 30 17:40:26 compute-0 podman[98167]: 2025-09-30 17:40:26.321928777 +0000 UTC m=+3.355131646 container start 4993dbb98eeb041c5105ed75c0b365e07821f5d2110530b660839a0c555dd9fb (image=quay.io/ceph/keepalived:2.2.4, name=infallible_burnell, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.28.2, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, vcs-type=git, release=1793, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, vendor=Red Hat, Inc., version=2.2.4, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Sep 30 17:40:26 compute-0 podman[98167]: 2025-09-30 17:40:26.32512812 +0000 UTC m=+3.358330989 container attach 4993dbb98eeb041c5105ed75c0b365e07821f5d2110530b660839a0c555dd9fb (image=quay.io/ceph/keepalived:2.2.4, name=infallible_burnell, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, version=2.2.4, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9)
Sep 30 17:40:26 compute-0 infallible_burnell[98264]: 0 0
Sep 30 17:40:26 compute-0 systemd[1]: libpod-4993dbb98eeb041c5105ed75c0b365e07821f5d2110530b660839a0c555dd9fb.scope: Deactivated successfully.
Sep 30 17:40:26 compute-0 podman[98167]: 2025-09-30 17:40:26.326928167 +0000 UTC m=+3.360131026 container died 4993dbb98eeb041c5105ed75c0b365e07821f5d2110530b660839a0c555dd9fb (image=quay.io/ceph/keepalived:2.2.4, name=infallible_burnell, vendor=Red Hat, Inc., version=2.2.4, distribution-scope=public, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, vcs-type=git, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, architecture=x86_64, name=keepalived)
Sep 30 17:40:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-4df17ad284f26bbf9cbdaf6952723e6a1a380b15fba6e4b6cf3da34e11c700cc-merged.mount: Deactivated successfully.
Sep 30 17:40:26 compute-0 podman[98167]: 2025-09-30 17:40:26.359395507 +0000 UTC m=+3.392598376 container remove 4993dbb98eeb041c5105ed75c0b365e07821f5d2110530b660839a0c555dd9fb (image=quay.io/ceph/keepalived:2.2.4, name=infallible_burnell, com.redhat.component=keepalived-container, architecture=x86_64, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, version=2.2.4, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, vcs-type=git, name=keepalived)
Sep 30 17:40:26 compute-0 systemd[1]: libpod-conmon-4993dbb98eeb041c5105ed75c0b365e07821f5d2110530b660839a0c555dd9fb.scope: Deactivated successfully.
Sep 30 17:40:26 compute-0 systemd[1]: Reloading.
Sep 30 17:40:26 compute-0 systemd-rc-local-generator[98307]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:40:26 compute-0 systemd-sysv-generator[98313]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:40:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v38: 198 pgs: 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:40:26 compute-0 systemd[1]: Reloading.
Sep 30 17:40:26 compute-0 systemd-rc-local-generator[98350]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:40:26 compute-0 systemd-sysv-generator[98356]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:40:26 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Sep 30 17:40:26 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Sep 30 17:40:26 compute-0 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-0.miadhc for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 17:40:27 compute-0 ceph-mon[73755]: 7.e scrub starts
Sep 30 17:40:27 compute-0 ceph-mon[73755]: 7.e scrub ok
Sep 30 17:40:27 compute-0 ceph-mon[73755]: 7.5 scrub starts
Sep 30 17:40:27 compute-0 ceph-mon[73755]: 7.5 scrub ok
Sep 30 17:40:27 compute-0 podman[98413]: 2025-09-30 17:40:27.206233225 +0000 UTC m=+0.036466945 container create b5b34e9f9a1e5f8a3081bfbcbc3f8d4401b0873cf75447793762597a4ea89ff4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, distribution-scope=public, name=keepalived, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4)
Sep 30 17:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b440af5346938c9da27f26d8ba7d4743a46024999e44dce79b15b834690079ac/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:27 compute-0 podman[98413]: 2025-09-30 17:40:27.256737902 +0000 UTC m=+0.086971642 container init b5b34e9f9a1e5f8a3081bfbcbc3f8d4401b0873cf75447793762597a4ea89ff4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, name=keepalived, release=1793, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Sep 30 17:40:27 compute-0 podman[98413]: 2025-09-30 17:40:27.262048009 +0000 UTC m=+0.092281729 container start b5b34e9f9a1e5f8a3081bfbcbc3f8d4401b0873cf75447793762597a4ea89ff4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, io.buildah.version=1.28.2, distribution-scope=public, architecture=x86_64, name=keepalived, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, description=keepalived for Ceph, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Sep 30 17:40:27 compute-0 bash[98413]: b5b34e9f9a1e5f8a3081bfbcbc3f8d4401b0873cf75447793762597a4ea89ff4
Sep 30 17:40:27 compute-0 podman[98413]: 2025-09-30 17:40:27.189456091 +0000 UTC m=+0.019689841 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Sep 30 17:40:27 compute-0 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-0.miadhc for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:40:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc[98429]: Tue Sep 30 17:40:27 2025: Starting Keepalived v2.2.4 (08/21,2021)
Sep 30 17:40:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc[98429]: Tue Sep 30 17:40:27 2025: Running on Linux 5.14.0-617.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Sep 15 21:46:13 UTC 2025 (built for Linux 5.14.0)
Sep 30 17:40:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc[98429]: Tue Sep 30 17:40:27 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Sep 30 17:40:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc[98429]: Tue Sep 30 17:40:27 2025: Configuration file /etc/keepalived/keepalived.conf
Sep 30 17:40:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc[98429]: Tue Sep 30 17:40:27 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Sep 30 17:40:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc[98429]: Tue Sep 30 17:40:27 2025: Starting VRRP child process, pid=4
Sep 30 17:40:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc[98429]: Tue Sep 30 17:40:27 2025: Startup complete
Sep 30 17:40:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc[98429]: Tue Sep 30 17:40:27 2025: (VI_0) Entering BACKUP STATE (init)
Sep 30 17:40:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc[98429]: Tue Sep 30 17:40:27 2025: VRRP_Script(check_backend) succeeded
Sep 30 17:40:27 compute-0 sudo[98103]: pam_unix(sudo:session): session closed for user root
Sep 30 17:40:27 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:40:27 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:27 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:40:27 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:27 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Sep 30 17:40:27 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:27 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Sep 30 17:40:27 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Sep 30 17:40:27 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Sep 30 17:40:27 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Sep 30 17:40:27 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-1.zmigik on compute-1
Sep 30 17:40:27 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-1.zmigik on compute-1
Sep 30 17:40:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:27 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff180016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:27 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff140016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:27 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Sep 30 17:40:27 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Sep 30 17:40:28 compute-0 ceph-mon[73755]: pgmap v38: 198 pgs: 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:40:28 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:28 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:28 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:28 compute-0 ceph-mon[73755]: 7.f scrub starts
Sep 30 17:40:28 compute-0 ceph-mon[73755]: 7.f scrub ok
Sep 30 17:40:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v39: 198 pgs: 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:40:28 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Sep 30 17:40:28 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Sep 30 17:40:29 compute-0 ceph-mon[73755]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Sep 30 17:40:29 compute-0 ceph-mon[73755]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Sep 30 17:40:29 compute-0 ceph-mon[73755]: Deploying daemon keepalived.nfs.cephfs.compute-1.zmigik on compute-1
Sep 30 17:40:29 compute-0 ceph-mon[73755]: 6.5 scrub starts
Sep 30 17:40:29 compute-0 ceph-mon[73755]: 6.5 scrub ok
Sep 30 17:40:29 compute-0 ceph-mon[73755]: 6.f scrub starts
Sep 30 17:40:29 compute-0 ceph-mon[73755]: 6.f scrub ok
Sep 30 17:40:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:40:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:29 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44001bd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:29 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44001bd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:29 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Sep 30 17:40:30 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Sep 30 17:40:30 compute-0 ceph-mon[73755]: pgmap v39: 198 pgs: 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:40:30 compute-0 ceph-mon[73755]: 7.0 scrub starts
Sep 30 17:40:30 compute-0 ceph-mon[73755]: 7.0 scrub ok
Sep 30 17:40:30 compute-0 ceph-mon[73755]: 7.8 scrub starts
Sep 30 17:40:30 compute-0 ceph-mon[73755]: 7.8 scrub ok
Sep 30 17:40:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v40: 198 pgs: 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:40:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc[98429]: Tue Sep 30 17:40:30 2025: (VI_0) Entering MASTER STATE
Sep 30 17:40:31 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Sep 30 17:40:31 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Sep 30 17:40:31 compute-0 ceph-mon[73755]: 7.7 scrub starts
Sep 30 17:40:31 compute-0 ceph-mon[73755]: 7.7 scrub ok
Sep 30 17:40:31 compute-0 ceph-mon[73755]: 6.9 scrub starts
Sep 30 17:40:31 compute-0 ceph-mon[73755]: 6.9 scrub ok
Sep 30 17:40:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:31 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff180016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:31 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff140016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:31 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 7.1 deep-scrub starts
Sep 30 17:40:32 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 7.1 deep-scrub ok
Sep 30 17:40:32 compute-0 ceph-mon[73755]: pgmap v40: 198 pgs: 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:40:32 compute-0 ceph-mon[73755]: 6.2 scrub starts
Sep 30 17:40:32 compute-0 ceph-mon[73755]: 6.2 scrub ok
Sep 30 17:40:32 compute-0 ceph-mon[73755]: 7.9 scrub starts
Sep 30 17:40:32 compute-0 ceph-mon[73755]: 7.9 scrub ok
Sep 30 17:40:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v41: 198 pgs: 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:40:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:40:32 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:40:32 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Sep 30 17:40:32 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:32 compute-0 ceph-mgr[74051]: [progress INFO root] complete: finished ev a1a22099-ea5a-4d43-a5f2-10912873743e (Updating ingress.nfs.cephfs deployment (+4 -> 4))
Sep 30 17:40:32 compute-0 ceph-mgr[74051]: [progress INFO root] Completed event a1a22099-ea5a-4d43-a5f2-10912873743e (Updating ingress.nfs.cephfs deployment (+4 -> 4)) in 19 seconds
Sep 30 17:40:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Sep 30 17:40:32 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:32 compute-0 ceph-mgr[74051]: [progress INFO root] update: starting ev 2a21670f-33eb-46cd-a9ae-7a150a4ee6c6 (Updating alertmanager deployment (+1 -> 1))
Sep 30 17:40:32 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Deploying daemon alertmanager.compute-0 on compute-0
Sep 30 17:40:32 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Deploying daemon alertmanager.compute-0 on compute-0
Sep 30 17:40:32 compute-0 ceph-mgr[74051]: [progress INFO root] Writing back 16 completed events
Sep 30 17:40:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Sep 30 17:40:32 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:32 compute-0 sudo[98439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:40:32 compute-0 sudo[98439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:40:32 compute-0 sudo[98439]: pam_unix(sudo:session): session closed for user root
Sep 30 17:40:32 compute-0 sudo[98464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/alertmanager:v0.25.0 --timeout 895 _orch deploy --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:40:32 compute-0 sudo[98464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:40:32 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 6.3 deep-scrub starts
Sep 30 17:40:32 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 6.3 deep-scrub ok
Sep 30 17:40:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:33 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44001bd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:33 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44001bd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:33 compute-0 ceph-mon[73755]: 7.1 deep-scrub starts
Sep 30 17:40:33 compute-0 ceph-mon[73755]: 7.1 deep-scrub ok
Sep 30 17:40:33 compute-0 ceph-mon[73755]: pgmap v41: 198 pgs: 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:40:33 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:33 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:33 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:33 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:33 compute-0 ceph-mon[73755]: 6.b scrub starts
Sep 30 17:40:33 compute-0 ceph-mon[73755]: Deploying daemon alertmanager.compute-0 on compute-0
Sep 30 17:40:33 compute-0 ceph-mon[73755]: 6.b scrub ok
Sep 30 17:40:33 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:33 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 7.d scrub starts
Sep 30 17:40:33 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 7.d scrub ok
Sep 30 17:40:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:40:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v42: 198 pgs: 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:40:34 compute-0 ceph-mon[73755]: 6.3 deep-scrub starts
Sep 30 17:40:34 compute-0 ceph-mon[73755]: 6.3 deep-scrub ok
Sep 30 17:40:34 compute-0 ceph-mon[73755]: 7.b scrub starts
Sep 30 17:40:34 compute-0 ceph-mon[73755]: 7.b scrub ok
Sep 30 17:40:34 compute-0 podman[98529]: 2025-09-30 17:40:34.690277383 +0000 UTC m=+1.540032669 volume create df430091c7e9a5048dd5fbc241a4808ed4a9356c159bfb8ea920ea27f96c8b25
Sep 30 17:40:34 compute-0 podman[98529]: 2025-09-30 17:40:34.705924348 +0000 UTC m=+1.555679604 container create 04a7a7e9db820ce63168464f71bb0ded6342c1fe090ac3e0828a3e9d1e4adfc4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=nice_moser, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:40:34 compute-0 systemd[1]: Started libpod-conmon-04a7a7e9db820ce63168464f71bb0ded6342c1fe090ac3e0828a3e9d1e4adfc4.scope.
Sep 30 17:40:34 compute-0 podman[98529]: 2025-09-30 17:40:34.676697372 +0000 UTC m=+1.526452638 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Sep 30 17:40:34 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae1566750b1d3ae625451ddb140a2468c08e86c94303da71be00017a7fbb0d19/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:34 compute-0 podman[98529]: 2025-09-30 17:40:34.799732616 +0000 UTC m=+1.649487862 container init 04a7a7e9db820ce63168464f71bb0ded6342c1fe090ac3e0828a3e9d1e4adfc4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=nice_moser, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:40:34 compute-0 podman[98529]: 2025-09-30 17:40:34.808726329 +0000 UTC m=+1.658481575 container start 04a7a7e9db820ce63168464f71bb0ded6342c1fe090ac3e0828a3e9d1e4adfc4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=nice_moser, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:40:34 compute-0 podman[98529]: 2025-09-30 17:40:34.812168138 +0000 UTC m=+1.661923384 container attach 04a7a7e9db820ce63168464f71bb0ded6342c1fe090ac3e0828a3e9d1e4adfc4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=nice_moser, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:40:34 compute-0 nice_moser[98664]: 65534 65534
Sep 30 17:40:34 compute-0 systemd[1]: libpod-04a7a7e9db820ce63168464f71bb0ded6342c1fe090ac3e0828a3e9d1e4adfc4.scope: Deactivated successfully.
Sep 30 17:40:34 compute-0 podman[98529]: 2025-09-30 17:40:34.814212861 +0000 UTC m=+1.663968107 container died 04a7a7e9db820ce63168464f71bb0ded6342c1fe090ac3e0828a3e9d1e4adfc4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=nice_moser, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:40:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae1566750b1d3ae625451ddb140a2468c08e86c94303da71be00017a7fbb0d19-merged.mount: Deactivated successfully.
Sep 30 17:40:34 compute-0 podman[98529]: 2025-09-30 17:40:34.852285517 +0000 UTC m=+1.702040763 container remove 04a7a7e9db820ce63168464f71bb0ded6342c1fe090ac3e0828a3e9d1e4adfc4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=nice_moser, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:40:34 compute-0 podman[98529]: 2025-09-30 17:40:34.857770249 +0000 UTC m=+1.707525495 volume remove df430091c7e9a5048dd5fbc241a4808ed4a9356c159bfb8ea920ea27f96c8b25
Sep 30 17:40:34 compute-0 systemd[1]: libpod-conmon-04a7a7e9db820ce63168464f71bb0ded6342c1fe090ac3e0828a3e9d1e4adfc4.scope: Deactivated successfully.
Sep 30 17:40:34 compute-0 podman[98681]: 2025-09-30 17:40:34.924134036 +0000 UTC m=+0.036215408 volume create b2b03c4edc5fe19c92b417dcc830aedf546190e0f69ab18944d74341b4dc8fc7
Sep 30 17:40:34 compute-0 podman[98681]: 2025-09-30 17:40:34.934415362 +0000 UTC m=+0.046496734 container create db0001842468a4f83d782b837e2821ccfba870ede5c84e55f083b0f0ef106c71 (image=quay.io/prometheus/alertmanager:v0.25.0, name=relaxed_germain, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:40:34 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 6.d scrub starts
Sep 30 17:40:34 compute-0 systemd[1]: Started libpod-conmon-db0001842468a4f83d782b837e2821ccfba870ede5c84e55f083b0f0ef106c71.scope.
Sep 30 17:40:34 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 6.d scrub ok
Sep 30 17:40:34 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/941abfd9714dd5cf2677ac08c8983d728164767d16b0434d59f885f47e095cf0/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:35 compute-0 podman[98681]: 2025-09-30 17:40:35.002613317 +0000 UTC m=+0.114694719 container init db0001842468a4f83d782b837e2821ccfba870ede5c84e55f083b0f0ef106c71 (image=quay.io/prometheus/alertmanager:v0.25.0, name=relaxed_germain, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:40:35 compute-0 podman[98681]: 2025-09-30 17:40:34.910423431 +0000 UTC m=+0.022504823 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Sep 30 17:40:35 compute-0 podman[98681]: 2025-09-30 17:40:35.007819162 +0000 UTC m=+0.119900534 container start db0001842468a4f83d782b837e2821ccfba870ede5c84e55f083b0f0ef106c71 (image=quay.io/prometheus/alertmanager:v0.25.0, name=relaxed_germain, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:40:35 compute-0 relaxed_germain[98697]: 65534 65534
Sep 30 17:40:35 compute-0 systemd[1]: libpod-db0001842468a4f83d782b837e2821ccfba870ede5c84e55f083b0f0ef106c71.scope: Deactivated successfully.
Sep 30 17:40:35 compute-0 podman[98681]: 2025-09-30 17:40:35.022738888 +0000 UTC m=+0.134820260 container attach db0001842468a4f83d782b837e2821ccfba870ede5c84e55f083b0f0ef106c71 (image=quay.io/prometheus/alertmanager:v0.25.0, name=relaxed_germain, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:40:35 compute-0 podman[98681]: 2025-09-30 17:40:35.023173069 +0000 UTC m=+0.135254451 container died db0001842468a4f83d782b837e2821ccfba870ede5c84e55f083b0f0ef106c71 (image=quay.io/prometheus/alertmanager:v0.25.0, name=relaxed_germain, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:40:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-941abfd9714dd5cf2677ac08c8983d728164767d16b0434d59f885f47e095cf0-merged.mount: Deactivated successfully.
Sep 30 17:40:35 compute-0 podman[98681]: 2025-09-30 17:40:35.276171007 +0000 UTC m=+0.388252379 container remove db0001842468a4f83d782b837e2821ccfba870ede5c84e55f083b0f0ef106c71 (image=quay.io/prometheus/alertmanager:v0.25.0, name=relaxed_germain, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:40:35 compute-0 podman[98681]: 2025-09-30 17:40:35.28206981 +0000 UTC m=+0.394151182 volume remove b2b03c4edc5fe19c92b417dcc830aedf546190e0f69ab18944d74341b4dc8fc7
Sep 30 17:40:35 compute-0 systemd[1]: libpod-conmon-db0001842468a4f83d782b837e2821ccfba870ede5c84e55f083b0f0ef106c71.scope: Deactivated successfully.
Sep 30 17:40:35 compute-0 systemd[1]: Reloading.
Sep 30 17:40:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:35 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff180016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:35 compute-0 systemd-rc-local-generator[98738]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:40:35 compute-0 systemd-sysv-generator[98743]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:40:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:35 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff140016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:35 compute-0 systemd[1]: Reloading.
Sep 30 17:40:35 compute-0 ceph-mon[73755]: 7.d scrub starts
Sep 30 17:40:35 compute-0 ceph-mon[73755]: 7.d scrub ok
Sep 30 17:40:35 compute-0 ceph-mon[73755]: pgmap v42: 198 pgs: 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:40:35 compute-0 ceph-mon[73755]: 7.14 scrub starts
Sep 30 17:40:35 compute-0 ceph-mon[73755]: 7.14 scrub ok
Sep 30 17:40:35 compute-0 systemd-rc-local-generator[98779]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:40:35 compute-0 systemd-sysv-generator[98783]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:40:35 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 7.c scrub starts
Sep 30 17:40:35 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 7.c scrub ok
Sep 30 17:40:35 compute-0 systemd[1]: Starting Ceph alertmanager.compute-0 for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 17:40:36 compute-0 podman[98837]: 2025-09-30 17:40:36.157826725 +0000 UTC m=+0.034836033 volume create 0ff05485dc6191c213faf58adc02a5a4acca2eb6e133f30602e71b69a3a6e65f
Sep 30 17:40:36 compute-0 podman[98837]: 2025-09-30 17:40:36.169764984 +0000 UTC m=+0.046774302 container create 54b2fea94257f4c0dbb8baa51cc4daf28060912f14272b21840329b3da1a781c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:40:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc[98429]: Tue Sep 30 17:40:36 2025: (VI_0) Received advert from 192.168.122.101 with lower priority 90, ours 100, forcing new election
Sep 30 17:40:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e94b9d0ffad0207bfc182feb8704c5b56943dd28b98cd325accb5583c086a525/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e94b9d0ffad0207bfc182feb8704c5b56943dd28b98cd325accb5583c086a525/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:36 compute-0 podman[98837]: 2025-09-30 17:40:36.219166533 +0000 UTC m=+0.096175861 container init 54b2fea94257f4c0dbb8baa51cc4daf28060912f14272b21840329b3da1a781c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:40:36 compute-0 podman[98837]: 2025-09-30 17:40:36.224676475 +0000 UTC m=+0.101685783 container start 54b2fea94257f4c0dbb8baa51cc4daf28060912f14272b21840329b3da1a781c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:40:36 compute-0 bash[98837]: 54b2fea94257f4c0dbb8baa51cc4daf28060912f14272b21840329b3da1a781c
Sep 30 17:40:36 compute-0 podman[98837]: 2025-09-30 17:40:36.145173318 +0000 UTC m=+0.022182656 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Sep 30 17:40:36 compute-0 systemd[1]: Started Ceph alertmanager.compute-0 for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:40:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[98852]: ts=2025-09-30T17:40:36.245Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Sep 30 17:40:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[98852]: ts=2025-09-30T17:40:36.245Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Sep 30 17:40:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[98852]: ts=2025-09-30T17:40:36.253Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Sep 30 17:40:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[98852]: ts=2025-09-30T17:40:36.254Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Sep 30 17:40:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[98852]: ts=2025-09-30T17:40:36.286Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Sep 30 17:40:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[98852]: ts=2025-09-30T17:40:36.287Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Sep 30 17:40:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[98852]: ts=2025-09-30T17:40:36.290Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Sep 30 17:40:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[98852]: ts=2025-09-30T17:40:36.290Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Sep 30 17:40:36 compute-0 sudo[98464]: pam_unix(sudo:session): session closed for user root
Sep 30 17:40:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:40:36 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:40:36 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Sep 30 17:40:36 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:36 compute-0 ceph-mgr[74051]: [progress INFO root] complete: finished ev 2a21670f-33eb-46cd-a9ae-7a150a4ee6c6 (Updating alertmanager deployment (+1 -> 1))
Sep 30 17:40:36 compute-0 ceph-mgr[74051]: [progress INFO root] Completed event 2a21670f-33eb-46cd-a9ae-7a150a4ee6c6 (Updating alertmanager deployment (+1 -> 1)) in 4 seconds
Sep 30 17:40:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Sep 30 17:40:36 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:36 compute-0 ceph-mgr[74051]: [progress INFO root] update: starting ev 56cb11ed-76e8-46a0-b321-72adf3b1208d (Updating grafana deployment (+1 -> 1))
Sep 30 17:40:36 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.services.monitoring] Regenerating cephadm self-signed grafana TLS certificates
Sep 30 17:40:36 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Regenerating cephadm self-signed grafana TLS certificates
Sep 30 17:40:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.grafana_cert}] v 0)
Sep 30 17:40:36 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.grafana_key}] v 0)
Sep 30 17:40:36 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"} v 0)
Sep 30 17:40:36 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Sep 30 17:40:36 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Sep 30 17:40:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_SSL_VERIFY}] v 0)
Sep 30 17:40:36 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:36 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Deploying daemon grafana.compute-0 on compute-0
Sep 30 17:40:36 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Deploying daemon grafana.compute-0 on compute-0
Sep 30 17:40:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v43: 198 pgs: 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:40:36 compute-0 sudo[98873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:40:36 compute-0 sudo[98873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:40:36 compute-0 sudo[98873]: pam_unix(sudo:session): session closed for user root
Sep 30 17:40:36 compute-0 sudo[98898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/grafana:10.4.0 --timeout 895 _orch deploy --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:40:36 compute-0 sudo[98898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:40:36 compute-0 ceph-mon[73755]: 6.d scrub starts
Sep 30 17:40:36 compute-0 ceph-mon[73755]: 6.d scrub ok
Sep 30 17:40:36 compute-0 ceph-mon[73755]: 7.a scrub starts
Sep 30 17:40:36 compute-0 ceph-mon[73755]: 7.a scrub ok
Sep 30 17:40:36 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:36 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:36 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:36 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:36 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:36 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:36 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Sep 30 17:40:36 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:36 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 6.e scrub starts
Sep 30 17:40:36 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 6.e scrub ok
Sep 30 17:40:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:37 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44001bd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:37 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44001bd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:37 compute-0 ceph-mgr[74051]: [progress INFO root] Writing back 17 completed events
Sep 30 17:40:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Sep 30 17:40:37 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:37 compute-0 ceph-mon[73755]: 7.c scrub starts
Sep 30 17:40:37 compute-0 ceph-mon[73755]: 7.c scrub ok
Sep 30 17:40:37 compute-0 ceph-mon[73755]: Regenerating cephadm self-signed grafana TLS certificates
Sep 30 17:40:37 compute-0 ceph-mon[73755]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Sep 30 17:40:37 compute-0 ceph-mon[73755]: Deploying daemon grafana.compute-0 on compute-0
Sep 30 17:40:37 compute-0 ceph-mon[73755]: pgmap v43: 198 pgs: 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:40:37 compute-0 ceph-mon[73755]: 6.14 scrub starts
Sep 30 17:40:37 compute-0 ceph-mon[73755]: 6.14 scrub ok
Sep 30 17:40:37 compute-0 ceph-mon[73755]: 6.e scrub starts
Sep 30 17:40:37 compute-0 ceph-mon[73755]: 6.e scrub ok
Sep 30 17:40:37 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:37 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Sep 30 17:40:37 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Sep 30 17:40:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[98852]: ts=2025-09-30T17:40:38.255Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000152008s
Sep 30 17:40:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v44: 198 pgs: 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:40:38 compute-0 ceph-mon[73755]: 6.16 scrub starts
Sep 30 17:40:38 compute-0 ceph-mon[73755]: 6.16 scrub ok
Sep 30 17:40:38 compute-0 ceph-mon[73755]: 6.19 scrub starts
Sep 30 17:40:38 compute-0 ceph-mon[73755]: 6.19 scrub ok
Sep 30 17:40:38 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Sep 30 17:40:38 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Sep 30 17:40:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:40:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:39 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff18002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:39 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff14002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:39 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Sep 30 17:40:39 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Sep 30 17:40:40 compute-0 ceph-mon[73755]: pgmap v44: 198 pgs: 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:40:40 compute-0 ceph-mon[73755]: 7.10 scrub starts
Sep 30 17:40:40 compute-0 ceph-mon[73755]: 7.10 scrub ok
Sep 30 17:40:40 compute-0 ceph-mon[73755]: 7.19 scrub starts
Sep 30 17:40:40 compute-0 ceph-mon[73755]: 7.19 scrub ok
Sep 30 17:40:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v45: 198 pgs: 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:40:40 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Sep 30 17:40:40 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Sep 30 17:40:41 compute-0 ceph-mon[73755]: 6.11 scrub starts
Sep 30 17:40:41 compute-0 ceph-mon[73755]: 6.11 scrub ok
Sep 30 17:40:41 compute-0 ceph-mon[73755]: 7.1a scrub starts
Sep 30 17:40:41 compute-0 ceph-mon[73755]: 7.1a scrub ok
Sep 30 17:40:41 compute-0 ceph-mon[73755]: 6.10 scrub starts
Sep 30 17:40:41 compute-0 ceph-mon[73755]: 6.10 scrub ok
Sep 30 17:40:41 compute-0 ceph-mon[73755]: 6.1a scrub starts
Sep 30 17:40:41 compute-0 ceph-mon[73755]: 6.1a scrub ok
Sep 30 17:40:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:41 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff18002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:41 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44001bd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_17:40:42
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['backups', 'volumes', '.mgr', '.rgw.root', 'images', '.nfs', 'default.rgw.control', 'default.rgw.meta', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v46: 198 pgs: 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 1)
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 1)
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 1)
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:40:42 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:40:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"} v 0)
Sep 30 17:40:42 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 17:40:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Sep 30 17:40:43 compute-0 ceph-mon[73755]: pgmap v45: 198 pgs: 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:40:43 compute-0 ceph-mon[73755]: 6.13 scrub starts
Sep 30 17:40:43 compute-0 ceph-mon[73755]: 6.13 scrub ok
Sep 30 17:40:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:43 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44001bd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:43 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff14002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:43 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Sep 30 17:40:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e55 e55: 2 total, 2 up, 2 in
Sep 30 17:40:43 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e55: 2 total, 2 up, 2 in
Sep 30 17:40:43 compute-0 ceph-mgr[74051]: [progress INFO root] update: starting ev ce7ad284-509c-474a-ab1e-1078de8ecb4b (PG autoscaler increasing pool 8 PGs from 1 to 32)
Sep 30 17:40:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Sep 30 17:40:43 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 17:40:44 compute-0 ceph-mon[73755]: pgmap v46: 198 pgs: 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:40:44 compute-0 ceph-mon[73755]: 7.13 scrub starts
Sep 30 17:40:44 compute-0 ceph-mon[73755]: 7.13 scrub ok
Sep 30 17:40:44 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 17:40:44 compute-0 ceph-mon[73755]: 7.1d scrub starts
Sep 30 17:40:44 compute-0 ceph-mon[73755]: 7.1d scrub ok
Sep 30 17:40:44 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Sep 30 17:40:44 compute-0 ceph-mon[73755]: osdmap e55: 2 total, 2 up, 2 in
Sep 30 17:40:44 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 17:40:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:40:44 compute-0 podman[98962]: 2025-09-30 17:40:44.17031096 +0000 UTC m=+7.089218930 container create 0e5c1a14458e1176cfa5b73817fd74f337cd7366315f719ffd51e300961d0251 (image=quay.io/ceph/grafana:10.4.0, name=serene_chatterjee, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:40:44 compute-0 systemd[1]: Started libpod-conmon-0e5c1a14458e1176cfa5b73817fd74f337cd7366315f719ffd51e300961d0251.scope.
Sep 30 17:40:44 compute-0 podman[98962]: 2025-09-30 17:40:44.152576741 +0000 UTC m=+7.071484751 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Sep 30 17:40:44 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:40:44 compute-0 podman[98962]: 2025-09-30 17:40:44.286197739 +0000 UTC m=+7.205105729 container init 0e5c1a14458e1176cfa5b73817fd74f337cd7366315f719ffd51e300961d0251 (image=quay.io/ceph/grafana:10.4.0, name=serene_chatterjee, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:40:44 compute-0 podman[98962]: 2025-09-30 17:40:44.300585462 +0000 UTC m=+7.219493442 container start 0e5c1a14458e1176cfa5b73817fd74f337cd7366315f719ffd51e300961d0251 (image=quay.io/ceph/grafana:10.4.0, name=serene_chatterjee, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:40:44 compute-0 serene_chatterjee[99186]: 472 0
Sep 30 17:40:44 compute-0 systemd[1]: libpod-0e5c1a14458e1176cfa5b73817fd74f337cd7366315f719ffd51e300961d0251.scope: Deactivated successfully.
Sep 30 17:40:44 compute-0 podman[98962]: 2025-09-30 17:40:44.31096875 +0000 UTC m=+7.229876750 container attach 0e5c1a14458e1176cfa5b73817fd74f337cd7366315f719ffd51e300961d0251 (image=quay.io/ceph/grafana:10.4.0, name=serene_chatterjee, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:40:44 compute-0 podman[98962]: 2025-09-30 17:40:44.311991127 +0000 UTC m=+7.230899117 container died 0e5c1a14458e1176cfa5b73817fd74f337cd7366315f719ffd51e300961d0251 (image=quay.io/ceph/grafana:10.4.0, name=serene_chatterjee, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:40:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-18b23ce4b84569b3ab8d6248393511e69488423c5e4023951c8a6274e3372e9f-merged.mount: Deactivated successfully.
Sep 30 17:40:44 compute-0 podman[98962]: 2025-09-30 17:40:44.364740312 +0000 UTC m=+7.283648292 container remove 0e5c1a14458e1176cfa5b73817fd74f337cd7366315f719ffd51e300961d0251 (image=quay.io/ceph/grafana:10.4.0, name=serene_chatterjee, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:40:44 compute-0 systemd[1]: libpod-conmon-0e5c1a14458e1176cfa5b73817fd74f337cd7366315f719ffd51e300961d0251.scope: Deactivated successfully.
Sep 30 17:40:44 compute-0 podman[99203]: 2025-09-30 17:40:44.440741409 +0000 UTC m=+0.051893694 container create c4cd18eec7dcbab7b183b3cccb62673b84d466e88a641e1c35a5992ad514ac9e (image=quay.io/ceph/grafana:10.4.0, name=interesting_antonelli, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:40:44 compute-0 systemd[1]: Started libpod-conmon-c4cd18eec7dcbab7b183b3cccb62673b84d466e88a641e1c35a5992ad514ac9e.scope.
Sep 30 17:40:44 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:40:44 compute-0 podman[99203]: 2025-09-30 17:40:44.41333738 +0000 UTC m=+0.024518586 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Sep 30 17:40:44 compute-0 podman[99203]: 2025-09-30 17:40:44.519434496 +0000 UTC m=+0.130586791 container init c4cd18eec7dcbab7b183b3cccb62673b84d466e88a641e1c35a5992ad514ac9e (image=quay.io/ceph/grafana:10.4.0, name=interesting_antonelli, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:40:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v48: 198 pgs: 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail; 204 B/s rd, 0 op/s
Sep 30 17:40:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"} v 0)
Sep 30 17:40:44 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 17:40:44 compute-0 podman[99203]: 2025-09-30 17:40:44.525436181 +0000 UTC m=+0.136588466 container start c4cd18eec7dcbab7b183b3cccb62673b84d466e88a641e1c35a5992ad514ac9e (image=quay.io/ceph/grafana:10.4.0, name=interesting_antonelli, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:40:44 compute-0 interesting_antonelli[99219]: 472 0
Sep 30 17:40:44 compute-0 systemd[1]: libpod-c4cd18eec7dcbab7b183b3cccb62673b84d466e88a641e1c35a5992ad514ac9e.scope: Deactivated successfully.
Sep 30 17:40:44 compute-0 podman[99203]: 2025-09-30 17:40:44.528904551 +0000 UTC m=+0.140056876 container attach c4cd18eec7dcbab7b183b3cccb62673b84d466e88a641e1c35a5992ad514ac9e (image=quay.io/ceph/grafana:10.4.0, name=interesting_antonelli, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:40:44 compute-0 podman[99203]: 2025-09-30 17:40:44.529318022 +0000 UTC m=+0.140470317 container died c4cd18eec7dcbab7b183b3cccb62673b84d466e88a641e1c35a5992ad514ac9e (image=quay.io/ceph/grafana:10.4.0, name=interesting_antonelli, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:40:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-93222fd5ec5f053d19e704b7b2296ee8dc5a334d43bddd378f12c8f00622f3a2-merged.mount: Deactivated successfully.
Sep 30 17:40:44 compute-0 podman[99203]: 2025-09-30 17:40:44.571809871 +0000 UTC m=+0.182962156 container remove c4cd18eec7dcbab7b183b3cccb62673b84d466e88a641e1c35a5992ad514ac9e (image=quay.io/ceph/grafana:10.4.0, name=interesting_antonelli, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:40:44 compute-0 systemd[1]: libpod-conmon-c4cd18eec7dcbab7b183b3cccb62673b84d466e88a641e1c35a5992ad514ac9e.scope: Deactivated successfully.
Sep 30 17:40:44 compute-0 systemd[1]: Reloading.
Sep 30 17:40:44 compute-0 systemd-sysv-generator[99270]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:40:44 compute-0 systemd-rc-local-generator[99266]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:40:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Sep 30 17:40:44 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Sep 30 17:40:44 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 17:40:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e56 e56: 2 total, 2 up, 2 in
Sep 30 17:40:44 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e56: 2 total, 2 up, 2 in
Sep 30 17:40:44 compute-0 ceph-mgr[74051]: [progress INFO root] update: starting ev 59047769-f84c-4a5a-84c7-e28eb5b215a5 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Sep 30 17:40:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Sep 30 17:40:44 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 17:40:45 compute-0 systemd[1]: Reloading.
Sep 30 17:40:45 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 17:40:45 compute-0 ceph-mon[73755]: 6.1d scrub starts
Sep 30 17:40:45 compute-0 ceph-mon[73755]: 6.1d scrub ok
Sep 30 17:40:45 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Sep 30 17:40:45 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 17:40:45 compute-0 ceph-mon[73755]: osdmap e56: 2 total, 2 up, 2 in
Sep 30 17:40:45 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 17:40:45 compute-0 systemd-sysv-generator[99306]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:40:45 compute-0 systemd-rc-local-generator[99300]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:40:45 compute-0 systemd[1]: Starting Ceph grafana.compute-0 for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:45 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff18002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:45 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44001bd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:45 compute-0 podman[99365]: 2025-09-30 17:40:45.518656517 +0000 UTC m=+0.044486642 container create c0c4f203e50521af1e67ea671cd3250328ab176a59126da54fd0b28cda8d538c (image=quay.io/ceph/grafana:10.4.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:40:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9dca52ac82893bd3df0349982ca23dca6ca2b36e5fa2a8dbc4af22904dfbc73/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9dca52ac82893bd3df0349982ca23dca6ca2b36e5fa2a8dbc4af22904dfbc73/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9dca52ac82893bd3df0349982ca23dca6ca2b36e5fa2a8dbc4af22904dfbc73/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9dca52ac82893bd3df0349982ca23dca6ca2b36e5fa2a8dbc4af22904dfbc73/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9dca52ac82893bd3df0349982ca23dca6ca2b36e5fa2a8dbc4af22904dfbc73/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:45 compute-0 podman[99365]: 2025-09-30 17:40:45.570803157 +0000 UTC m=+0.096633302 container init c0c4f203e50521af1e67ea671cd3250328ab176a59126da54fd0b28cda8d538c (image=quay.io/ceph/grafana:10.4.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:40:45 compute-0 podman[99365]: 2025-09-30 17:40:45.576559636 +0000 UTC m=+0.102389751 container start c0c4f203e50521af1e67ea671cd3250328ab176a59126da54fd0b28cda8d538c (image=quay.io/ceph/grafana:10.4.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:40:45 compute-0 bash[99365]: c0c4f203e50521af1e67ea671cd3250328ab176a59126da54fd0b28cda8d538c
Sep 30 17:40:45 compute-0 podman[99365]: 2025-09-30 17:40:45.497314045 +0000 UTC m=+0.023144190 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Sep 30 17:40:45 compute-0 systemd[1]: Started Ceph grafana.compute-0 for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:40:45 compute-0 sudo[98898]: pam_unix(sudo:session): session closed for user root
Sep 30 17:40:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:40:45 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:40:45 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Sep 30 17:40:45 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:45 compute-0 ceph-mgr[74051]: [progress INFO root] complete: finished ev 56cb11ed-76e8-46a0-b321-72adf3b1208d (Updating grafana deployment (+1 -> 1))
Sep 30 17:40:45 compute-0 ceph-mgr[74051]: [progress INFO root] Completed event 56cb11ed-76e8-46a0-b321-72adf3b1208d (Updating grafana deployment (+1 -> 1)) in 9 seconds
Sep 30 17:40:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Sep 30 17:40:45 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:45 compute-0 ceph-mgr[74051]: [progress INFO root] update: starting ev 1594b8e3-4cd2-42ec-a1e0-f535c3afe9b6 (Updating ingress.rgw.default deployment (+4 -> 4))
Sep 30 17:40:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0)
Sep 30 17:40:45 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:45 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.gretil on compute-0
Sep 30 17:40:45 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.gretil on compute-0
Sep 30 17:40:45 compute-0 sudo[99400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:40:45 compute-0 sudo[99400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:40:45 compute-0 sudo[99400]: pam_unix(sudo:session): session closed for user root
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=settings t=2025-09-30T17:40:45.765097486Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-09-30T17:40:45Z
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=settings t=2025-09-30T17:40:45.765379353Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=settings t=2025-09-30T17:40:45.765392753Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=settings t=2025-09-30T17:40:45.765396924Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=settings t=2025-09-30T17:40:45.765400534Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=settings t=2025-09-30T17:40:45.765404024Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=settings t=2025-09-30T17:40:45.765407724Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=settings t=2025-09-30T17:40:45.765411714Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=settings t=2025-09-30T17:40:45.765415994Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=settings t=2025-09-30T17:40:45.765419414Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=settings t=2025-09-30T17:40:45.765422724Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=settings t=2025-09-30T17:40:45.765426034Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=settings t=2025-09-30T17:40:45.765430644Z level=info msg=Target target=[all]
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=settings t=2025-09-30T17:40:45.765438085Z level=info msg="Path Home" path=/usr/share/grafana
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=settings t=2025-09-30T17:40:45.765441455Z level=info msg="Path Data" path=/var/lib/grafana
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=settings t=2025-09-30T17:40:45.765444795Z level=info msg="Path Logs" path=/var/log/grafana
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=settings t=2025-09-30T17:40:45.765447955Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=settings t=2025-09-30T17:40:45.765451185Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=settings t=2025-09-30T17:40:45.765454395Z level=info msg="App mode production"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=sqlstore t=2025-09-30T17:40:45.765703581Z level=info msg="Connecting to DB" dbtype=sqlite3
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=sqlstore t=2025-09-30T17:40:45.765724542Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.766293577Z level=info msg="Starting DB migrations"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.767705223Z level=info msg="Executing migration" id="create migration_log table"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.768765031Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.059368ms
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.773131534Z level=info msg="Executing migration" id="create user table"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.773979356Z level=info msg="Migration successfully executed" id="create user table" duration=849.582µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.776039389Z level=info msg="Executing migration" id="add unique index user.login"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.776742737Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=696.478µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.779070497Z level=info msg="Executing migration" id="add unique index user.email"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.77992907Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=859.992µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.78188658Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.782617199Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=731.049µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.784615691Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.785246497Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=630.116µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.787472485Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.789877757Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.407452ms
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.793144072Z level=info msg="Executing migration" id="create user table v2"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.793879331Z level=info msg="Migration successfully executed" id="create user table v2" duration=735.889µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.799143097Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.800129182Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=987.475µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.802167325Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.803017697Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=851.122µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.806641331Z level=info msg="Executing migration" id="copy data_source v1 to v2"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.807116033Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=475.622µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.809266359Z level=info msg="Executing migration" id="Drop old table user_v1"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.809840584Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=574.435µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.811904587Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.8131744Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.269913ms
Sep 30 17:40:45 compute-0 sudo[99425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.815151691Z level=info msg="Executing migration" id="Update user table charset"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.815248554Z level=info msg="Migration successfully executed" id="Update user table charset" duration=98.233µs
Sep 30 17:40:45 compute-0 sudo[99425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.817274976Z level=info msg="Executing migration" id="Add last_seen_at column to user"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.818290342Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.015546ms
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.819886584Z level=info msg="Executing migration" id="Add missing user data"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.820217522Z level=info msg="Migration successfully executed" id="Add missing user data" duration=331.318µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.821955627Z level=info msg="Executing migration" id="Add is_disabled column to user"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.823188209Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.232122ms
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.824959925Z level=info msg="Executing migration" id="Add index user.login/user.email"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.825741615Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=780.92µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.827554962Z level=info msg="Executing migration" id="Add is_service_account column to user"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.829979875Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=2.421073ms
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.832243354Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.840648491Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=8.400987ms
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.843088694Z level=info msg="Executing migration" id="Add uid column to user"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.84445774Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.368746ms
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.846372749Z level=info msg="Executing migration" id="Update uid column values for users"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.846644946Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=272.297µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.848530345Z level=info msg="Executing migration" id="Add unique index user_uid"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.849223813Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=694.178µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.851536303Z level=info msg="Executing migration" id="create temp user table v1-7"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.852241081Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=705.138µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.854320885Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.855249609Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=928.424µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.857441736Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.858251997Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=810.331µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.860140356Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.860822273Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=680.797µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.862926988Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.863696938Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=770.25µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.866355976Z level=info msg="Executing migration" id="Update temp_user table charset"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.866438869Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=83.633µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.868322187Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.869023296Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=701.469µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.871063068Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.871728026Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=665.247µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.873534372Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.874433606Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=899.683µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.878295916Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.879104596Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=807.991µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.884689181Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.888663394Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.972933ms
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.892099633Z level=info msg="Executing migration" id="create temp_user v2"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.893207101Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=1.110368ms
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.895418569Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.896494947Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=1.076607ms
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.899139995Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.900269714Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.128369ms
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.902190334Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.903080397Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=890.413µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.904819782Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.905678804Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=859.372µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.909896553Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.9109242Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=1.033467ms
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.912724337Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.913609849Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=886.092µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.915612031Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.916213297Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=607.476µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.918915897Z level=info msg="Executing migration" id="create star table"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.920101188Z level=info msg="Migration successfully executed" id="create star table" duration=1.1898ms
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.922882859Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.923959227Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.078108ms
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.9267706Z level=info msg="Executing migration" id="create org table v1"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.927613152Z level=info msg="Migration successfully executed" id="create org table v1" duration=843.442µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.931139873Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.932502488Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.365465ms
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.934998673Z level=info msg="Executing migration" id="create org_user table v1"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.935742802Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=743.969µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.938800461Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.939613683Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=814.371µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.941962663Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.942733873Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=771.24µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.944599572Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.945277629Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=678.377µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.947753533Z level=info msg="Executing migration" id="Update org table charset"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.947884877Z level=info msg="Migration successfully executed" id="Update org table charset" duration=134.914µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.949750495Z level=info msg="Executing migration" id="Update org_user table charset"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.949816837Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=67.812µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.951475899Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.951696915Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=221.346µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.953250646Z level=info msg="Executing migration" id="create dashboard table"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.954000835Z level=info msg="Migration successfully executed" id="create dashboard table" duration=747.23µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.956774467Z level=info msg="Executing migration" id="add index dashboard.account_id"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.957823664Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.049967ms
Sep 30 17:40:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.960447162Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.961749315Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.303183ms
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.964139317Z level=info msg="Executing migration" id="create dashboard_tag table"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.964852376Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=715.189µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.970121572Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
Sep 30 17:40:45 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.971156999Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.037087ms
Sep 30 17:40:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e57 e57: 2 total, 2 up, 2 in
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.975874711Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.977073702Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.202661ms
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.979435203Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
Sep 30 17:40:45 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e57: 2 total, 2 up, 2 in
Sep 30 17:40:45 compute-0 ceph-mgr[74051]: [progress INFO root] update: starting ev 1061f401-2725-4e1a-b30b-5707c10bdc2d (PG autoscaler increasing pool 10 PGs from 1 to 32)
Sep 30 17:40:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Sep 30 17:40:45 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.984638928Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=5.202025ms
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.986590268Z level=info msg="Executing migration" id="create dashboard v2"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.987547863Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=958.235µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.989404821Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.990217822Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=809.661µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.995096309Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.996011812Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=915.594µs
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.998159128Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
Sep 30 17:40:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:45.998540838Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=381.75µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.000319354Z level=info msg="Executing migration" id="drop table dashboard_v1"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.001215957Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=896.373µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.003189788Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.003305941Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=116.443µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.005422166Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.007074749Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.652232ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.009095031Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.010671772Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.575661ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.012632682Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.014163632Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.53037ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.017370225Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.018692999Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=1.328494ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.020625799Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.022637811Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=2.015092ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.02489556Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.025911776Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.020296ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.027811705Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.028573075Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=765.08µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.030571337Z level=info msg="Executing migration" id="Update dashboard table charset"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.030600097Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=30µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.034544429Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.03457566Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=33.561µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.036719236Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.038590464Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.865198ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.040261867Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.041697485Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.435428ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.043622594Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.045033461Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.406847ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.046837488Z level=info msg="Executing migration" id="Add column uid in dashboard"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.048420009Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.583591ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.049876436Z level=info msg="Executing migration" id="Update uid column values in dashboard"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.050043051Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=166.905µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.0515655Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.052151915Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=586.205µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.053909611Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.054566728Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=656.997µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.055961864Z level=info msg="Executing migration" id="Update dashboard title length"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.055981144Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=20.35µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.057571605Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.058322165Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=750.87µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.060238804Z level=info msg="Executing migration" id="create dashboard_provisioning"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.061123177Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=885.193µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.063082298Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.067331748Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=4.24416ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.069175766Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.069958976Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=786.56µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.072205484Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.072975274Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=767.27µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.077371158Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.078124687Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=753.959µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.0801635Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.080462418Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=298.898µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.082227834Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
Sep 30 17:40:46 compute-0 ceph-mon[73755]: pgmap v48: 198 pgs: 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail; 204 B/s rd, 0 op/s
Sep 30 17:40:46 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:46 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:46 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:46 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:46 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:46 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Sep 30 17:40:46 compute-0 ceph-mon[73755]: osdmap e57: 2 total, 2 up, 2 in
Sep 30 17:40:46 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.082773538Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=545.704µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.085382895Z level=info msg="Executing migration" id="Add check_sum column"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.087315065Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=1.93156ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.089302047Z level=info msg="Executing migration" id="Add index for dashboard_title"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.090170759Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=868.012µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.091833292Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.091993076Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=160.124µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.09367147Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.093821544Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=149.824µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.095359133Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.096076652Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=717.559µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.104236423Z level=info msg="Executing migration" id="Add isPublic for dashboard"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.106073961Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=1.834168ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.108118574Z level=info msg="Executing migration" id="create data_source table"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.109064088Z level=info msg="Migration successfully executed" id="create data_source table" duration=944.914µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.111299556Z level=info msg="Executing migration" id="add index data_source.account_id"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.112110707Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=811.781µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.114182971Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.11493265Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=745.339µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.117081186Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.118072241Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=995.685µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.119608741Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.120266328Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=657.807µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.121584692Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.126609642Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=5.0199ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.129077876Z level=info msg="Executing migration" id="create data_source table v2"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.130108603Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.031667ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.131825297Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.1326975Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=873.203µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.134108366Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.134760113Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=650.967µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.136629032Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.137194616Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=565.864µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.138951062Z level=info msg="Executing migration" id="Add column with_credentials"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.140894722Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=1.94437ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.142402281Z level=info msg="Executing migration" id="Add secure json data column"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.144174077Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=1.771986ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.145729517Z level=info msg="Executing migration" id="Update data_source table charset"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.145754168Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=26.231µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.147414011Z level=info msg="Executing migration" id="Update initial version to 1"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.147599196Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=185.184µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.14929916Z level=info msg="Executing migration" id="Add read_only data column"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.151484706Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.187167ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.153118348Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.153278383Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=160.584µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.155022138Z level=info msg="Executing migration" id="Update json_data with nulls"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.155409528Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=388.21µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.159692849Z level=info msg="Executing migration" id="Add uid column"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.161606108Z level=info msg="Migration successfully executed" id="Add uid column" duration=1.91475ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.163580279Z level=info msg="Executing migration" id="Update uid value"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.163765974Z level=info msg="Migration successfully executed" id="Update uid value" duration=186.275µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.17171607Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.172587282Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=873.182µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.174525342Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.17520932Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=685.248µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.178206118Z level=info msg="Executing migration" id="create api_key table"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.17905473Z level=info msg="Migration successfully executed" id="create api_key table" duration=850.392µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.180862136Z level=info msg="Executing migration" id="add index api_key.account_id"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.181570665Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=709.109µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.18447493Z level=info msg="Executing migration" id="add index api_key.key"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.185196499Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=723.409µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.186895573Z level=info msg="Executing migration" id="add index api_key.account_id_name"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.187652912Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=757.249µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.18949342Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.190238699Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=744.979µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.191852761Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.19259549Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=739.859µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.1941539Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.19491481Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=761.46µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.197009284Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.202613959Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=5.595525ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.204536899Z level=info msg="Executing migration" id="create api_key table v2"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.205296939Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=761.74µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.206841749Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.207830584Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=983.545µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.209328223Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.210186425Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=855.652µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.211831398Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.212657399Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=826.391µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.214832176Z level=info msg="Executing migration" id="copy api_key v1 to v2"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.215501853Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=670.968µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.217364731Z level=info msg="Executing migration" id="Drop old table api_key_v1"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.218392488Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=1.028307ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.220240026Z level=info msg="Executing migration" id="Update api_key table charset"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.220274456Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=36.62µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.222096874Z level=info msg="Executing migration" id="Add expires to api_key table"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.22502728Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.923525ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.226916378Z level=info msg="Executing migration" id="Add service account foreign key"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.22969436Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.773212ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.231907978Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.232141114Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=239.117µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.233844548Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.236017964Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.170706ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.237713388Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
Sep 30 17:40:46 compute-0 podman[99488]: 2025-09-30 17:40:46.237180364 +0000 UTC m=+0.041030373 container create 52cc6b65ad8edc059a239f14262ebe32a3c05868e4ea10d3d96b2a915f6ac64d (image=quay.io/ceph/haproxy:2.3, name=heuristic_dijkstra)
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.240221543Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.505695ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.242667826Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.243830666Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=1.1638ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.248053885Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.248832476Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=780.581µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.253922726Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.254886111Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=964.685µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.256481033Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.257301684Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=825.232µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[98852]: ts=2025-09-30T17:40:46.257Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.002563711s
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.25946912Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.260316222Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=847.452µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.262487378Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.263552696Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.064958ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.265425654Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.265490356Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=62.432µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.266980484Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.267005125Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=25.241µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.268978886Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.272469466Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=3.48819ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.274706934Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
Sep 30 17:40:46 compute-0 systemd[1]: Started libpod-conmon-52cc6b65ad8edc059a239f14262ebe32a3c05868e4ea10d3d96b2a915f6ac64d.scope.
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.277515367Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.804723ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.279875218Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.27993493Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=60.482µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.281638604Z level=info msg="Executing migration" id="create quota table v1"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.282427104Z level=info msg="Migration successfully executed" id="create quota table v1" duration=788.47µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.284611021Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.285441752Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=829.571µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.287634059Z level=info msg="Executing migration" id="Update quota table charset"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.287743712Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=112.973µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.289711623Z level=info msg="Executing migration" id="create plugin_setting table"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.290521314Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=809.621µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.292778332Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.293587073Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=806.461µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.295636596Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.297946076Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.30929ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.299706941Z level=info msg="Executing migration" id="Update plugin_setting table charset"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.299820184Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=114.833µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.301674672Z level=info msg="Executing migration" id="create session table"
Sep 30 17:40:46 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.302868003Z level=info msg="Migration successfully executed" id="create session table" duration=1.192501ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.305519592Z level=info msg="Executing migration" id="Drop old table playlist table"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.305747328Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=225.656µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.307719899Z level=info msg="Executing migration" id="Drop old table playlist_item table"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.307944825Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=224.965µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.309989517Z level=info msg="Executing migration" id="create playlist table v2"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.311132267Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.14285ms
Sep 30 17:40:46 compute-0 podman[99488]: 2025-09-30 17:40:46.218428279 +0000 UTC m=+0.022278328 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.315621433Z level=info msg="Executing migration" id="create playlist item table v2"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.316928017Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.306694ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.320476479Z level=info msg="Executing migration" id="Update playlist table charset"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.320496309Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=18.88µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.32245338Z level=info msg="Executing migration" id="Update playlist_item table charset"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.322553923Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=100.933µs
Sep 30 17:40:46 compute-0 podman[99488]: 2025-09-30 17:40:46.322764278 +0000 UTC m=+0.126614307 container init 52cc6b65ad8edc059a239f14262ebe32a3c05868e4ea10d3d96b2a915f6ac64d (image=quay.io/ceph/haproxy:2.3, name=heuristic_dijkstra)
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.324588415Z level=info msg="Executing migration" id="Add playlist column created_at"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.327750517Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.161392ms
Sep 30 17:40:46 compute-0 podman[99488]: 2025-09-30 17:40:46.329472152 +0000 UTC m=+0.133322171 container start 52cc6b65ad8edc059a239f14262ebe32a3c05868e4ea10d3d96b2a915f6ac64d (image=quay.io/ceph/haproxy:2.3, name=heuristic_dijkstra)
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.331755791Z level=info msg="Executing migration" id="Add playlist column updated_at"
Sep 30 17:40:46 compute-0 heuristic_dijkstra[99504]: 0 0
Sep 30 17:40:46 compute-0 systemd[1]: libpod-52cc6b65ad8edc059a239f14262ebe32a3c05868e4ea10d3d96b2a915f6ac64d.scope: Deactivated successfully.
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.3348338Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.078179ms
Sep 30 17:40:46 compute-0 podman[99488]: 2025-09-30 17:40:46.337069208 +0000 UTC m=+0.140919227 container attach 52cc6b65ad8edc059a239f14262ebe32a3c05868e4ea10d3d96b2a915f6ac64d (image=quay.io/ceph/haproxy:2.3, name=heuristic_dijkstra)
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.337401967Z level=info msg="Executing migration" id="drop preferences table v2"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.337570991Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=169.524µs
Sep 30 17:40:46 compute-0 podman[99488]: 2025-09-30 17:40:46.337770837 +0000 UTC m=+0.141620856 container died 52cc6b65ad8edc059a239f14262ebe32a3c05868e4ea10d3d96b2a915f6ac64d (image=quay.io/ceph/haproxy:2.3, name=heuristic_dijkstra)
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.339628735Z level=info msg="Executing migration" id="drop preferences table v3"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.339792199Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=164.624µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.341420381Z level=info msg="Executing migration" id="create preferences table v3"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.342112309Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=692.018µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.34447537Z level=info msg="Executing migration" id="Update preferences table charset"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.344559272Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=84.862µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.346326268Z level=info msg="Executing migration" id="Add column team_id in preferences"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.348961586Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=2.635278ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.350619219Z level=info msg="Executing migration" id="Update team_id column values in preferences"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.350800704Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=185.975µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.353324829Z level=info msg="Executing migration" id="Add column week_start in preferences"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.356201174Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=2.875874ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.358983555Z level=info msg="Executing migration" id="Add column preferences.json_data"
Sep 30 17:40:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-775c62fc8d700aba03a21ba6b6bf0ce2cfa73bab8a2ac8d8fc23052dee0bd82e-merged.mount: Deactivated successfully.
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.364280733Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=5.291787ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.365923685Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.365984237Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=61.582µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.368591594Z level=info msg="Executing migration" id="Add preferences index org_id"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.369393425Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=802.321µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.372078764Z level=info msg="Executing migration" id="Add preferences index user_id"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.373300826Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.222742ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.376682514Z level=info msg="Executing migration" id="create alert table v1"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.378125381Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.442988ms
Sep 30 17:40:46 compute-0 podman[99488]: 2025-09-30 17:40:46.379499917 +0000 UTC m=+0.183349926 container remove 52cc6b65ad8edc059a239f14262ebe32a3c05868e4ea10d3d96b2a915f6ac64d (image=quay.io/ceph/haproxy:2.3, name=heuristic_dijkstra)
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.380821711Z level=info msg="Executing migration" id="add index alert org_id & id "
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.382149825Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.328844ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.384242239Z level=info msg="Executing migration" id="add index alert state"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.385330407Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.088148ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.387468893Z level=info msg="Executing migration" id="add index alert dashboard_id"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.388454098Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=984.855µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.390948253Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
Sep 30 17:40:46 compute-0 systemd[1]: libpod-conmon-52cc6b65ad8edc059a239f14262ebe32a3c05868e4ea10d3d96b2a915f6ac64d.scope: Deactivated successfully.
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.391907318Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=959.825µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.393997352Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.39509817Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.099738ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.397251316Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.398241082Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=988.985µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.400250534Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.410794396Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=10.541882ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.416029252Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.416975956Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=947.494µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.421607936Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.422761936Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.1536ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.425483467Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.425881027Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=398µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.427415257Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.428417373Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=1.001785ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.43026153Z level=info msg="Executing migration" id="create alert_notification table v1"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.431189604Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=928.264µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.433110974Z level=info msg="Executing migration" id="Add column is_default"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.435801754Z level=info msg="Migration successfully executed" id="Add column is_default" duration=2.68963ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.437742834Z level=info msg="Executing migration" id="Add column frequency"
Sep 30 17:40:46 compute-0 systemd[1]: Reloading.
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.440468935Z level=info msg="Migration successfully executed" id="Add column frequency" duration=2.72681ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.443542054Z level=info msg="Executing migration" id="Add column send_reminder"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.446931222Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.388408ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.448636726Z level=info msg="Executing migration" id="Add column disable_resolve_message"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.45151755Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=2.881634ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.452873566Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.453576934Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=703.329µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.455484903Z level=info msg="Executing migration" id="Update alert table charset"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.455538894Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=54.581µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.457067534Z level=info msg="Executing migration" id="Update alert_notification table charset"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.457125956Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=60.972µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.458505051Z level=info msg="Executing migration" id="create notification_journal table v1"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.459152008Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=646.717µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.461867858Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.462563956Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=696.258µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.463978623Z level=info msg="Executing migration" id="drop alert_notification_journal"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.464823145Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=844.722µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.466211651Z level=info msg="Executing migration" id="create alert_notification_state table v1"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.467006751Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=794.95µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.46849854Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.469397143Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=897.783µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.471248111Z level=info msg="Executing migration" id="Add for to alert table"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.474755072Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=3.504961ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.476844186Z level=info msg="Executing migration" id="Add column uid in alert_notification"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.479913365Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.068029ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.481853536Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.482050721Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=197.605µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.483783056Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.484561266Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=778.48µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.486453505Z level=info msg="Executing migration" id="Remove unique index org_id_name"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.487552133Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.098918ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.490797617Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.494638316Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.839679ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.496293229Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.496428233Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=135.354µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.498114766Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.49901401Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=899.284µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.500653432Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.501617007Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=963.025µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.503737882Z level=info msg="Executing migration" id="Drop old annotation table v4"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.503871465Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=133.993µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.505851037Z level=info msg="Executing migration" id="create annotation table v5"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.506821762Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=970.625µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.509132512Z level=info msg="Executing migration" id="add index annotation 0 v3"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.510283331Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.152169ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.512661753Z level=info msg="Executing migration" id="add index annotation 1 v3"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.513564726Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=906.093µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.516937824Z level=info msg="Executing migration" id="add index annotation 2 v3"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.517767585Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=831.391µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.519641694Z level=info msg="Executing migration" id="add index annotation 3 v3"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.520464535Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=823.381µs
Sep 30 17:40:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v51: 229 pgs: 31 unknown, 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:40:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Sep 30 17:40:46 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 17:40:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Sep 30 17:40:46 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.522614771Z level=info msg="Executing migration" id="add index annotation 4 v3"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.523730129Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.118969ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.525974238Z level=info msg="Executing migration" id="Update annotation table charset"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.526002558Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=30.271µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.528046531Z level=info msg="Executing migration" id="Add column region_id to annotation table"
Sep 30 17:40:46 compute-0 systemd-sysv-generator[99555]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:40:46 compute-0 systemd-rc-local-generator[99552]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.533674807Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=5.608555ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.536971232Z level=info msg="Executing migration" id="Drop category_id index"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.537732692Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=762.2µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.539511358Z level=info msg="Executing migration" id="Add column tags to annotation table"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.54229705Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=2.785542ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.543947823Z level=info msg="Executing migration" id="Create annotation_tag table v2"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.544496797Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=548.764µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.546047177Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.546760455Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=710.988µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.549154567Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.549882176Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=727.399µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.552183936Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.560659785Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=8.473139ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.562474532Z level=info msg="Executing migration" id="Create annotation_tag table v3"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.563083018Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=609.016µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.564903645Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.565630184Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=726.189µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.567708628Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.567975385Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=267.197µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.569573496Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.570065609Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=492.743µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.571708691Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.571864045Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=155.494µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.573447146Z level=info msg="Executing migration" id="Add created time to annotation table"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.585408506Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=11.938639ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.587201012Z level=info msg="Executing migration" id="Add updated time to annotation table"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.590085257Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=2.886845ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.591914764Z level=info msg="Executing migration" id="Add index for created in annotation table"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.592619902Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=705.158µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.594156262Z level=info msg="Executing migration" id="Add index for updated in annotation table"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.594972533Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=816.001µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.596849702Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.597024526Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=175.074µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.598657219Z level=info msg="Executing migration" id="Add epoch_end column"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.60180328Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=3.144951ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.603708959Z level=info msg="Executing migration" id="Add index for epoch_end"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.604401247Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=692.258µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.60643885Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.606566603Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=128.263µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.608563285Z level=info msg="Executing migration" id="Move region to single row"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.608841332Z level=info msg="Migration successfully executed" id="Move region to single row" duration=277.927µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.610459454Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.611226574Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=766.82µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.614360125Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.615064333Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=704.748µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.617130187Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.617937388Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=806.651µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.620568246Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.621273534Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=705.228µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.623401989Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.624537888Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.138579ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.628158802Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.628939692Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=780.8µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.633363557Z level=info msg="Executing migration" id="Increase tags column to length 4096"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.633431629Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=69.032µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.639464195Z level=info msg="Executing migration" id="create test_data table"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.640512512Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.049157ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.643031497Z level=info msg="Executing migration" id="create dashboard_version table v1"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.643964141Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=933.304µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.646104177Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.647071932Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=968.165µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.653421506Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.655224783Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.814467ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.657876501Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.658298782Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=421.731µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.66206103Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.662943283Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=874.392µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.665507229Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.665662863Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=159.004µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.668437135Z level=info msg="Executing migration" id="create team table"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.669847811Z level=info msg="Migration successfully executed" id="create team table" duration=1.412687ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.67287499Z level=info msg="Executing migration" id="add index team.org_id"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.674302846Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.428517ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.679873081Z level=info msg="Executing migration" id="add unique index team_org_id_name"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.681036561Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.16833ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.684754307Z level=info msg="Executing migration" id="Add column uid in team"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.688567926Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=3.811509ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.692954079Z level=info msg="Executing migration" id="Update uid column values in team"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.693140584Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=186.585µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.697975099Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.698870762Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=896.173µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.702187998Z level=info msg="Executing migration" id="create team member table"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.702962908Z level=info msg="Migration successfully executed" id="create team member table" duration=775.28µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.707106886Z level=info msg="Executing migration" id="add index team_member.org_id"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.707974878Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=868.123µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.712576297Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.718815559Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=6.234891ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.722902914Z level=info msg="Executing migration" id="add index team_member.team_id"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.724086125Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.185681ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.728775296Z level=info msg="Executing migration" id="Add column email to team table"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.733235492Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.458566ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.736692111Z level=info msg="Executing migration" id="Add column external to team_member table"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.741395323Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.696672ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.744459152Z level=info msg="Executing migration" id="Add column permission to team_member table"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.749135703Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.673411ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.752247694Z level=info msg="Executing migration" id="create dashboard acl table"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.753294471Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.048257ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.758190438Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.75942141Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.233841ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.763458734Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.764639965Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.182491ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.770193458Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.771205054Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.013676ms
Sep 30 17:40:46 compute-0 systemd[1]: Reloading.
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.776523532Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.777476327Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=955.065µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.781950803Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.782774264Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=824.771µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.789141349Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.790027372Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=886.653µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.796562311Z level=info msg="Executing migration" id="add index dashboard_permission"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.797442214Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=880.732µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.803141141Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.803712866Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=571.795µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.807314759Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.807553275Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=234.516µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.811045706Z level=info msg="Executing migration" id="create tag table"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.811844116Z level=info msg="Migration successfully executed" id="create tag table" duration=798.82µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.816979729Z level=info msg="Executing migration" id="add index tag.key_value"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.817804901Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=826.072µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.823688713Z level=info msg="Executing migration" id="create login attempt table"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.824578996Z level=info msg="Migration successfully executed" id="create login attempt table" duration=892.323µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.831820843Z level=info msg="Executing migration" id="add index login_attempt.username"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.832720237Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=900.814µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.840066577Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.8409505Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=885.113µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.845500817Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.856141333Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=10.636256ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.861275926Z level=info msg="Executing migration" id="create login_attempt v2"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.862057326Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=783.29µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.8656974Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.866551472Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=855.542µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.869654883Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.86993898Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=286.568µs
Sep 30 17:40:46 compute-0 systemd-sysv-generator[99595]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:40:46 compute-0 systemd-rc-local-generator[99592]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.873734898Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.874333764Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=599.996µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.877699851Z level=info msg="Executing migration" id="create user auth table"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.878297216Z level=info msg="Migration successfully executed" id="create user auth table" duration=597.425µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.881081888Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.882107245Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.024817ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.886670743Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.886748235Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=79.872µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.891613081Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.895604434Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=3.989013ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.899134255Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.90316875Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=4.032265ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.906687951Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.910558461Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=3.87017ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.914693508Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.918329562Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=3.637914ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.924214255Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.925554559Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.342795ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.928978178Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.93408401Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.098962ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.93755558Z level=info msg="Executing migration" id="create server_lock table"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.938550716Z level=info msg="Migration successfully executed" id="create server_lock table" duration=999.596µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.941873832Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.942862437Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=984.405µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.950388302Z level=info msg="Executing migration" id="create user auth token table"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.951332457Z level=info msg="Migration successfully executed" id="create user auth token table" duration=946.915µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.956740966Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.957696241Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=958.175µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.960698639Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.96150387Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=805.291µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.967731601Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.968881351Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.15029ms
Sep 30 17:40:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.973696375Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.978388767Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=4.690422ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.982238826Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.983242332Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.005696ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.986081726Z level=info msg="Executing migration" id="create cache_data table"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.986918768Z level=info msg="Migration successfully executed" id="create cache_data table" duration=837.422µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.988886168Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.989745761Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=860.673µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.99166056Z level=info msg="Executing migration" id="create short_url table v1"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.992676327Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.017366ms
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.994932015Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.9959082Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=974.015µs
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.999547374Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
Sep 30 17:40:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:46.999646917Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=103.203µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.003513347Z level=info msg="Executing migration" id="delete alert_definition table"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.00363947Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=129.303µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.00592885Z level=info msg="Executing migration" id="recreate alert_definition table"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.006955776Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.027497ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.010075317Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.011092873Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.020046ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.013311581Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.0144361Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.124809ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.016595626Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.016675408Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=81.562µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.019054519Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.020023904Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=966.105µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.023682369Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.024571362Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=889.983µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.028698079Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.02951778Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=819.861µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.030950457Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.031760408Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=810.541µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.034110139Z level=info msg="Executing migration" id="Add column paused in alert_definition"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.038254876Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=4.144297ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.040299029Z level=info msg="Executing migration" id="drop alert_definition table"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.04188909Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.094278ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.04379688Z level=info msg="Executing migration" id="delete alert_definition_version table"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.043874582Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=77.592µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.046097319Z level=info msg="Executing migration" id="recreate alert_definition_version table"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.046861019Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=764.19µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.048860471Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.049766904Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=906.153µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.052398352Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.053256824Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=858.512µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.05501334Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.055060031Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=47.111µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.057209727Z level=info msg="Executing migration" id="drop alert_definition_version table"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.058588052Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.382165ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.060378769Z level=info msg="Executing migration" id="create alert_instance table"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.061257391Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=877.962µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.06391161Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.064891226Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=979.056µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.06932065Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.070320956Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.001176ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.073879258Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.078662212Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=4.780634ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.080541041Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.081443474Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=906.813µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.08321199Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
Sep 30 17:40:47 compute-0 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.gretil for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.084096923Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=884.753µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.085799147Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.109226743Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=23.422976ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.111852661Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.134479287Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=22.622825ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.136551Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.13729822Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=746.67µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.140678057Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.141506309Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=828.221µs
Sep 30 17:40:47 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Sep 30 17:40:47 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 17:40:47 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 17:40:47 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e58 e58: 2 total, 2 up, 2 in
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.144427634Z level=info msg="Executing migration" id="add current_reason column related to current_state"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.149483235Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=5.053451ms
Sep 30 17:40:47 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e58: 2 total, 2 up, 2 in
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.151401525Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
Sep 30 17:40:47 compute-0 ceph-mgr[74051]: [progress INFO root] update: starting ev 10bdeb24-547f-48fb-8abf-ce0a1c86c6e9 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Sep 30 17:40:47 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Sep 30 17:40:47 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.155717536Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=4.312761ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.157720418Z level=info msg="Executing migration" id="create alert_rule table"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.158495058Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=774.44µs
Sep 30 17:40:47 compute-0 ceph-mon[73755]: Deploying daemon haproxy.rgw.default.compute-0.gretil on compute-0
Sep 30 17:40:47 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 17:40:47 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 17:40:47 compute-0 ceph-mon[73755]: 8.10 scrub starts
Sep 30 17:40:47 compute-0 ceph-mon[73755]: 8.10 scrub ok
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.16242854Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.163278062Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=849.122µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.166424153Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.16743878Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.015297ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.17130445Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.172250764Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=946.964µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.17440026Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.174451631Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=51.981µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.178244999Z level=info msg="Executing migration" id="add column for to alert_rule"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.182876959Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=4.63019ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.185256371Z level=info msg="Executing migration" id="add column annotations to alert_rule"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.190085416Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=4.828805ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.191941354Z level=info msg="Executing migration" id="add column labels to alert_rule"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.197028375Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=5.084721ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.198940785Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.199817858Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=877.403µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.2014652Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.202304612Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=839.672µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.204820767Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.209202701Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=4.373833ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.21189597Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.21769026Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=5.79118ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.219888227Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.220818061Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=933.304µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.222764162Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.227013822Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=4.250551ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.228858159Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.234015723Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=5.152894ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.236408545Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.236500997Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=92.722µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.238400076Z level=info msg="Executing migration" id="create alert_rule_version table"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.239568426Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.16821ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.241833235Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.24279447Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=961.095µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.252333827Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.253473076Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.140929ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.255742305Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.255796977Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=55.592µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.257781928Z level=info msg="Executing migration" id="add column for to alert_rule_version"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.262287205Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=4.504646ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.263996239Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.268776103Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=4.777003ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.270580079Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.275056705Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=4.471476ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.276651716Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.281022709Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=4.366703ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.282942019Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.287766794Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=4.825695ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.28952726Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.289579591Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=52.601µs
Sep 30 17:40:47 compute-0 podman[99649]: 2025-09-30 17:40:47.291239244 +0000 UTC m=+0.037998175 container create f461d576ddfdf63b15bed1844a0ae6c4a3f2de1f96e2b50e4110664c3cd6cada (image=quay.io/ceph/haproxy:2.3, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-rgw-default-compute-0-gretil)
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.291497771Z level=info msg="Executing migration" id=create_alert_configuration_table
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.292124427Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=625.957µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.2941724Z level=info msg="Executing migration" id="Add column default in alert_configuration"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.298657666Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=4.484686ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.300317599Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.30038013Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=62.221µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.30192174Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.306181961Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=4.259811ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.307787352Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.308564492Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=776.36µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.310561274Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.315257585Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=4.695941ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.3181292Z level=info msg="Executing migration" id=create_ngalert_configuration_table
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.318778057Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=649.037µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.321406775Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.322193635Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=787.62µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.32430859Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.329097654Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=4.786274ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.333957079Z level=info msg="Executing migration" id="create provenance_type table"
Sep 30 17:40:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b9644580475ec57a957088a4e8b2a3c169a0944a8dd5bb5f3eb595826b385f4/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.334731379Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=771.21µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.338315252Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.339127893Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=812.751µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.341741061Z level=info msg="Executing migration" id="create alert_image table"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.342426709Z level=info msg="Migration successfully executed" id="create alert_image table" duration=686.008µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.344568504Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.345509379Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=941.164µs
Sep 30 17:40:47 compute-0 podman[99649]: 2025-09-30 17:40:47.346195206 +0000 UTC m=+0.092954157 container init f461d576ddfdf63b15bed1844a0ae6c4a3f2de1f96e2b50e4110664c3cd6cada (image=quay.io/ceph/haproxy:2.3, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-rgw-default-compute-0-gretil)
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.347680895Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.347730936Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=51.051µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.349504602Z level=info msg="Executing migration" id=create_alert_configuration_history_table
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.350323453Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=816.621µs
Sep 30 17:40:47 compute-0 podman[99649]: 2025-09-30 17:40:47.351620347 +0000 UTC m=+0.098379288 container start f461d576ddfdf63b15bed1844a0ae6c4a3f2de1f96e2b50e4110664c3cd6cada (image=quay.io/ceph/haproxy:2.3, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-rgw-default-compute-0-gretil)
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.352708955Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.353760512Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.054437ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.355722743Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
Sep 30 17:40:47 compute-0 bash[99649]: f461d576ddfdf63b15bed1844a0ae6c4a3f2de1f96e2b50e4110664c3cd6cada
Sep 30 17:40:47 compute-0 podman[99649]: 2025-09-30 17:40:47.275813715 +0000 UTC m=+0.022572676 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.356116703Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.357773046Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.358272109Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=498.763µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.360005034Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.361045Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.040296ms
Sep 30 17:40:47 compute-0 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.gretil for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.363156315Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-rgw-default-compute-0-gretil[99665]: [NOTICE] 272/174047 (2) : New worker #1 (4) forked
Sep 30 17:40:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.368272648Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=5.111812ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.370201078Z level=info msg="Executing migration" id="create library_element table v1"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.372080326Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.877739ms
Sep 30 17:40:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.006000155s ======
Sep 30 17:40:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:40:47.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.006000155s
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.37456504Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.375658879Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.095499ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.377924277Z level=info msg="Executing migration" id="create library_element_connection table v1"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.378866132Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=942.345µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.381417718Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.382713721Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.296833ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.385132614Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.386086099Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=955.535µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.388351457Z level=info msg="Executing migration" id="increase max description length to 2048"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.388387498Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=52.181µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.390281507Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.39038421Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=109.103µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.392063993Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.392403242Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=338.699µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.39425494Z level=info msg="Executing migration" id="create data_keys table"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.395189074Z level=info msg="Migration successfully executed" id="create data_keys table" duration=934.674µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.397139785Z level=info msg="Executing migration" id="create secrets table"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.398088819Z level=info msg="Migration successfully executed" id="create secrets table" duration=949.094µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.400409999Z level=info msg="Executing migration" id="rename data_keys name column to id"
Sep 30 17:40:47 compute-0 sudo[99425]: pam_unix(sudo:session): session closed for user root
Sep 30 17:40:47 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:40:47 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:47 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.428471956Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=28.059236ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.433171917Z level=info msg="Executing migration" id="add name column into data_keys"
Sep 30 17:40:47 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:47 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.440200059Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=7.024732ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:47 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44001bd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.44640399Z level=info msg="Executing migration" id="copy data_keys id column values into name"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.446579084Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=176.004µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.449167841Z level=info msg="Executing migration" id="rename data_keys name column to label"
Sep 30 17:40:47 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:47 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-1.adkopy on compute-1
Sep 30 17:40:47 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-1.adkopy on compute-1
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.475424911Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=26.25327ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.484358822Z level=info msg="Executing migration" id="rename data_keys id column back to name"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:47 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff14002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.51054985Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=26.185878ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.516392101Z level=info msg="Executing migration" id="create kv_store table v1"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.517297955Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=907.454µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.523024183Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.523879045Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=854.782µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.530538057Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.530817825Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=281.337µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.537701863Z level=info msg="Executing migration" id="create permission table"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.538689108Z level=info msg="Migration successfully executed" id="create permission table" duration=987.125µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.547292031Z level=info msg="Executing migration" id="add unique index permission.role_id"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.548540703Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.250922ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.564477426Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.565711598Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.236452ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.572107373Z level=info msg="Executing migration" id="create role table"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.5731317Z level=info msg="Migration successfully executed" id="create role table" duration=1.025137ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.581862746Z level=info msg="Executing migration" id="add column display_name"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.588641971Z level=info msg="Migration successfully executed" id="add column display_name" duration=6.780065ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.59283518Z level=info msg="Executing migration" id="add column group_name"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.598060835Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.226775ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.607032867Z level=info msg="Executing migration" id="add index role.org_id"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.607864239Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=831.412µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.61062048Z level=info msg="Executing migration" id="add unique index role_org_id_name"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.611526803Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=858.082µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.615145467Z level=info msg="Executing migration" id="add index role_org_id_uid"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.616061761Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=916.374µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.618988307Z level=info msg="Executing migration" id="create team role table"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.619922661Z level=info msg="Migration successfully executed" id="create team role table" duration=933.504µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.636664874Z level=info msg="Executing migration" id="add index team_role.org_id"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.638275616Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.612102ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.654599838Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.656086607Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.488849ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.68285967Z level=info msg="Executing migration" id="add index team_role.team_id"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.683956848Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.099868ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.693239408Z level=info msg="Executing migration" id="create user role table"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.694166102Z level=info msg="Migration successfully executed" id="create user role table" duration=925.934µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.70412297Z level=info msg="Executing migration" id="add index user_role.org_id"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.705081075Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=956.305µs
Sep 30 17:40:47 compute-0 ceph-mgr[74051]: [progress INFO root] Writing back 18 completed events
Sep 30 17:40:47 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.72343256Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.724725593Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.293973ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.729561018Z level=info msg="Executing migration" id="add index user_role.user_id"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.730551044Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=989.806µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.780002984Z level=info msg="Executing migration" id="create builtin role table"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.781088362Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.087578ms
Sep 30 17:40:47 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:47 compute-0 ceph-mgr[74051]: [progress WARNING root] Starting Global Recovery Event,93 pgs not in active + clean state
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.793063272Z level=info msg="Executing migration" id="add index builtin_role.role_id"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.794201461Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.140759ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.809968759Z level=info msg="Executing migration" id="add index builtin_role.name"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.810977815Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.006936ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.813886591Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.820867541Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=6.9757ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.822822092Z level=info msg="Executing migration" id="add index builtin_role.org_id"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.823850219Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.028787ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.825896872Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.826872387Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=974.745µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.828660233Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.829766992Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.107179ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.831399964Z level=info msg="Executing migration" id="add unique index role.uid"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.83242243Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.022046ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.834163085Z level=info msg="Executing migration" id="create seed assignment table"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.834935445Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=772.19µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.836685831Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.83779958Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.113258ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.839773721Z level=info msg="Executing migration" id="add column hidden to role table"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.846376082Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=6.59824ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.850910299Z level=info msg="Executing migration" id="permission kind migration"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.857209612Z level=info msg="Migration successfully executed" id="permission kind migration" duration=6.300463ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.862126039Z level=info msg="Executing migration" id="permission attribute migration"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.86832756Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=6.199881ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.871152333Z level=info msg="Executing migration" id="permission identifier migration"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.877027665Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=5.875612ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.879203761Z level=info msg="Executing migration" id="add permission identifier index"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.880072754Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=866.213µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.892524336Z level=info msg="Executing migration" id="add permission action scope role_id index"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.893687706Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.16471ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.895554674Z level=info msg="Executing migration" id="remove permission role_id action scope index"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.896469838Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=915.174µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.89846967Z level=info msg="Executing migration" id="create query_history table v1"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.899220969Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=753.079µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.900751069Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.901654862Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=906.083µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.903709955Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.903770247Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=60.252µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.905542963Z level=info msg="Executing migration" id="rbac disabled migrator"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.905570794Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=28.951µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.907170205Z level=info msg="Executing migration" id="teams permissions migration"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.907552055Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=381.76µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.908969522Z level=info msg="Executing migration" id="dashboard permissions"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.909445664Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=477.192µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.910963963Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.911523298Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=559.465µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.914025992Z level=info msg="Executing migration" id="drop managed folder create actions"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.914229508Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=203.786µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.91585626Z level=info msg="Executing migration" id="alerting notification permissions"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.916298621Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=470.182µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.918155359Z level=info msg="Executing migration" id="create query_history_star table v1"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.918985661Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=830.382µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.92089788Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.921820054Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=922.524µs
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.92360406Z level=info msg="Executing migration" id="add column org_id in query_history_star"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.929720219Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=6.114899ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.94832304Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.948461034Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=143.144µs
Sep 30 17:40:47 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 58 pg[10.0( v 54'774 (0'0,54'774] local-lis/les=48/49 n=178 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=58 pruub=12.889044762s) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 lcod 54'773 mlcod 54'773 active pruub 152.960220337s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.956642905Z level=info msg="Executing migration" id="create correlation table v1"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.957811246Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.168651ms
Sep 30 17:40:47 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 58 pg[10.0( v 54'774 lc 0'0 (0'0,54'774] local-lis/les=48/49 n=5 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=58 pruub=12.889044762s) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 lcod 54'773 mlcod 0'0 unknown pruub 152.960220337s@ mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:47 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x5631b33acd80) operator()   moving buffer(0x5631b327de28 space 0x5631b3e77940 0x0~1000 clean)
Sep 30 17:40:47 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x5631b33acd80) operator()   moving buffer(0x5631b40bf6a8 space 0x5631b3ff7460 0x0~1000 clean)
Sep 30 17:40:47 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x5631b33acd80) operator()   moving buffer(0x5631b409dd88 space 0x5631b3ff6900 0x0~1000 clean)
Sep 30 17:40:47 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x5631b33acd80) operator()   moving buffer(0x5631b40beb68 space 0x5631b3fa97a0 0x0~1000 clean)
Sep 30 17:40:47 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x5631b33acd80) operator()   moving buffer(0x5631b40bfce8 space 0x5631b3ff7530 0x0~1000 clean)
Sep 30 17:40:47 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x5631b33acd80) operator()   moving buffer(0x5631b4098668 space 0x5631b3ff6830 0x0~1000 clean)
Sep 30 17:40:47 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x5631b33acd80) operator()   moving buffer(0x5631b40be2a8 space 0x5631b3fa9870 0x0~1000 clean)
Sep 30 17:40:47 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x5631b33acd80) operator()   moving buffer(0x5631b40bf388 space 0x5631b3ff7a10 0x0~1000 clean)
Sep 30 17:40:47 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x5631b33acd80) operator()   moving buffer(0x5631b40bf1a8 space 0x5631b3ff7390 0x0~1000 clean)
Sep 30 17:40:47 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x5631b33acd80) operator()   moving buffer(0x5631b40f8668 space 0x5631b3f301b0 0x0~1000 clean)
Sep 30 17:40:47 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x5631b33acd80) operator()   moving buffer(0x5631b3dfde28 space 0x5631b3ff6eb0 0x0~1000 clean)
Sep 30 17:40:47 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x5631b33acd80) operator()   moving buffer(0x5631b40be848 space 0x5631b3ff7bb0 0x0~1000 clean)
Sep 30 17:40:47 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x5631b33acd80) operator()   moving buffer(0x5631b40f8488 space 0x5631b3ff7600 0x0~1000 clean)
Sep 30 17:40:47 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x5631b33acd80) operator()   moving buffer(0x5631b3bf5f68 space 0x5631b3e80280 0x0~1000 clean)
Sep 30 17:40:47 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x5631b33acd80) operator()   moving buffer(0x5631b40a4708 space 0x5631b3bb25c0 0x0~1000 clean)
Sep 30 17:40:47 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x5631b33acd80) operator()   moving buffer(0x5631b40a4a28 space 0x5631b3be7120 0x0~1000 clean)
Sep 30 17:40:47 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x5631b33acd80) operator()   moving buffer(0x5631b40acfc8 space 0x5631b3ff77a0 0x0~1000 clean)
Sep 30 17:40:47 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x5631b33acd80) operator()   moving buffer(0x5631b40d6e88 space 0x5631b3ff6aa0 0x0~1000 clean)
Sep 30 17:40:47 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x5631b33acd80) operator()   moving buffer(0x5631b40a4fc8 space 0x5631b3ff7050 0x0~1000 clean)
Sep 30 17:40:47 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x5631b33acd80) operator()   moving buffer(0x5631b40a5388 space 0x5631b395b460 0x0~1000 clean)
Sep 30 17:40:47 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x5631b33acd80) operator()   moving buffer(0x5631b40d6988 space 0x5631b3ff69d0 0x0~1000 clean)
Sep 30 17:40:47 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x5631b33acd80) operator()   moving buffer(0x5631b40a5e28 space 0x5631b4019bb0 0x0~1000 clean)
Sep 30 17:40:47 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x5631b33acd80) operator()   moving buffer(0x5631b409cde8 space 0x5631b4019c80 0x0~1000 clean)
Sep 30 17:40:47 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x5631b33acd80) operator()   moving buffer(0x5631b409c2a8 space 0x5631b3f44900 0x0~1000 clean)
Sep 30 17:40:47 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x5631b33acd80) operator()   moving buffer(0x5631b40bf748 space 0x5631b3ff76d0 0x0~1000 clean)
Sep 30 17:40:47 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x5631b33acd80) operator()   moving buffer(0x5631b40be708 space 0x5631b3ff7ae0 0x0~1000 clean)
Sep 30 17:40:47 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x5631b33acd80) operator()   moving buffer(0x5631b40a4d48 space 0x5631b3ff71f0 0x0~1000 clean)
Sep 30 17:40:47 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x5631b33acd80) operator()   moving buffer(0x5631b40ac708 space 0x5631b3ff7d50 0x0~1000 clean)
Sep 30 17:40:47 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x5631b33acd80) operator()   moving buffer(0x5631b40a4488 space 0x5631b3ff6f80 0x0~1000 clean)
Sep 30 17:40:47 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x5631b33acd80) operator()   moving buffer(0x5631b40ac208 space 0x5631b3ff7c80 0x0~1000 clean)
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.971166781Z level=info msg="Executing migration" id="add index correlations.uid"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.972178317Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.012106ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.974381694Z level=info msg="Executing migration" id="add index correlations.source_uid"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.975471513Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.085999ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.98114466Z level=info msg="Executing migration" id="add correlation config column"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.989582958Z level=info msg="Migration successfully executed" id="add correlation config column" duration=8.435869ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.991761814Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.992973926Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.213642ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.994517086Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.995620184Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.102728ms
Sep 30 17:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:47.997513703Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.017538431Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=20.020518ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.019567134Z level=info msg="Executing migration" id="create correlation v2"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.020651172Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.085058ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.02252049Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.023362222Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=883.283µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.025491657Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.026550335Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.058918ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.028559247Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.029526762Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=968.315µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.031539194Z level=info msg="Executing migration" id="copy correlation v1 to v2"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.03178946Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=249.556µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.033544596Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.034337226Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=793.02µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.040709501Z level=info msg="Executing migration" id="add provisioning column"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.047781534Z level=info msg="Migration successfully executed" id="add provisioning column" duration=7.070773ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.056327255Z level=info msg="Executing migration" id="create entity_events table"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.057486825Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.1651ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.058925363Z level=info msg="Executing migration" id="create dashboard public config v1"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.059947709Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.022166ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.062368832Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.062757542Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.065904733Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.066254252Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.067830703Z level=info msg="Executing migration" id="Drop old dashboard public config table"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.068635414Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=804.641µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.071024336Z level=info msg="Executing migration" id="recreate dashboard public config v1"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.07196775Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=942.604µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.074629309Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.076062976Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.436857ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.078471938Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.079864665Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.393057ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.082230496Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.083440507Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.209921ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.085691285Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.086911767Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.226572ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.089435422Z level=info msg="Executing migration" id="Drop public config table"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.090397277Z level=info msg="Migration successfully executed" id="Drop public config table" duration=961.355µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.092322987Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.093544159Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.217801ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.095563341Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.096633359Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.069988ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.098779034Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.099972735Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.193331ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.102015268Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.103115846Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.100768ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.105382905Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.132680892Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=27.293496ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.141754356Z level=info msg="Executing migration" id="add annotations_enabled column"
Sep 30 17:40:48 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.150539214Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=8.783388ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.166735843Z level=info msg="Executing migration" id="add time_selection_enabled column"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.17320901Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=6.473357ms
Sep 30 17:40:48 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Sep 30 17:40:48 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e59 e59: 2 total, 2 up, 2 in
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.192535831Z level=info msg="Executing migration" id="delete orphaned public dashboards"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.19289231Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=360.569µs
Sep 30 17:40:48 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e59: 2 total, 2 up, 2 in
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.197118199Z level=info msg="Executing migration" id="add share column"
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.17( v 54'774 lc 0'0 (0'0,54'774] local-lis/les=48/49 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.15( v 54'774 lc 0'0 (0'0,54'774] local-lis/les=48/49 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.16( v 54'774 lc 0'0 (0'0,54'774] local-lis/les=48/49 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.14( v 54'774 lc 0'0 (0'0,54'774] local-lis/les=48/49 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.13( v 54'774 lc 0'0 (0'0,54'774] local-lis/les=48/49 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.2( v 54'774 lc 0'0 (0'0,54'774] local-lis/les=48/49 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.1( v 54'774 lc 0'0 (0'0,54'774] local-lis/les=48/49 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.e( v 54'774 lc 0'0 (0'0,54'774] local-lis/les=48/49 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.a( v 54'774 lc 0'0 (0'0,54'774] local-lis/les=48/49 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.c( v 54'774 lc 0'0 (0'0,54'774] local-lis/les=48/49 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.9( v 54'774 lc 0'0 (0'0,54'774] local-lis/les=48/49 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.f( v 54'774 lc 0'0 (0'0,54'774] local-lis/les=48/49 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.d( v 54'774 lc 0'0 (0'0,54'774] local-lis/les=48/49 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.b( v 54'774 lc 0'0 (0'0,54'774] local-lis/les=48/49 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.8( v 54'774 lc 0'0 (0'0,54'774] local-lis/les=48/49 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.3( v 54'774 lc 0'0 (0'0,54'774] local-lis/les=48/49 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.4( v 54'774 lc 0'0 (0'0,54'774] local-lis/les=48/49 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.5( v 54'774 lc 0'0 (0'0,54'774] local-lis/les=48/49 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.6( v 54'774 lc 0'0 (0'0,54'774] local-lis/les=48/49 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.18( v 54'774 lc 0'0 (0'0,54'774] local-lis/les=48/49 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.19( v 54'774 lc 0'0 (0'0,54'774] local-lis/les=48/49 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.1a( v 54'774 lc 0'0 (0'0,54'774] local-lis/les=48/49 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.7( v 54'774 lc 0'0 (0'0,54'774] local-lis/les=48/49 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.1b( v 54'774 lc 0'0 (0'0,54'774] local-lis/les=48/49 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.1c( v 54'774 lc 0'0 (0'0,54'774] local-lis/les=48/49 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.1e( v 54'774 lc 0'0 (0'0,54'774] local-lis/les=48/49 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.1f( v 54'774 lc 0'0 (0'0,54'774] local-lis/les=48/49 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:48 compute-0 ceph-mgr[74051]: [progress INFO root] update: starting ev d78c64e3-3bd0-446e-afb3-2def297cd132 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.1d( v 54'774 lc 0'0 (0'0,54'774] local-lis/les=48/49 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.10( v 54'774 lc 0'0 (0'0,54'774] local-lis/les=48/49 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.11( v 54'774 lc 0'0 (0'0,54'774] local-lis/les=48/49 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.12( v 54'774 lc 0'0 (0'0,54'774] local-lis/les=48/49 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.15( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.14( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.16( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:48 compute-0 ceph-mgr[74051]: [progress INFO root] complete: finished ev ce7ad284-509c-474a-ab1e-1078de8ecb4b (PG autoscaler increasing pool 8 PGs from 1 to 32)
Sep 30 17:40:48 compute-0 ceph-mgr[74051]: [progress INFO root] Completed event ce7ad284-509c-474a-ab1e-1078de8ecb4b (PG autoscaler increasing pool 8 PGs from 1 to 32) in 4 seconds
Sep 30 17:40:48 compute-0 ceph-mgr[74051]: [progress INFO root] complete: finished ev 59047769-f84c-4a5a-84c7-e28eb5b215a5 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Sep 30 17:40:48 compute-0 ceph-mgr[74051]: [progress INFO root] Completed event 59047769-f84c-4a5a-84c7-e28eb5b215a5 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 3 seconds
Sep 30 17:40:48 compute-0 ceph-mgr[74051]: [progress INFO root] complete: finished ev 1061f401-2725-4e1a-b30b-5707c10bdc2d (PG autoscaler increasing pool 10 PGs from 1 to 32)
Sep 30 17:40:48 compute-0 ceph-mgr[74051]: [progress INFO root] Completed event 1061f401-2725-4e1a-b30b-5707c10bdc2d (PG autoscaler increasing pool 10 PGs from 1 to 32) in 2 seconds
Sep 30 17:40:48 compute-0 ceph-mgr[74051]: [progress INFO root] complete: finished ev 10bdeb24-547f-48fb-8abf-ce0a1c86c6e9 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Sep 30 17:40:48 compute-0 ceph-mgr[74051]: [progress INFO root] Completed event 10bdeb24-547f-48fb-8abf-ce0a1c86c6e9 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 1 seconds
Sep 30 17:40:48 compute-0 ceph-mgr[74051]: [progress INFO root] complete: finished ev d78c64e3-3bd0-446e-afb3-2def297cd132 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Sep 30 17:40:48 compute-0 ceph-mgr[74051]: [progress INFO root] Completed event d78c64e3-3bd0-446e-afb3-2def297cd132 (PG autoscaler increasing pool 12 PGs from 1 to 32) in 0 seconds
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.20566502Z level=info msg="Migration successfully executed" id="add share column" duration=8.545361ms
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.13( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.0( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 lcod 54'773 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.1( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.2( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.e( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.a( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.f( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.c( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:48 compute-0 ceph-mon[73755]: pgmap v51: 229 pgs: 31 unknown, 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.d( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.b( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:48 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.4( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:48 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 17:40:48 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 17:40:48 compute-0 ceph-mon[73755]: osdmap e58: 2 total, 2 up, 2 in
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.9( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:48 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.8( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:48 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:48 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:48 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:48 compute-0 ceph-mon[73755]: Deploying daemon haproxy.rgw.default.compute-1.adkopy on compute-1
Sep 30 17:40:48 compute-0 ceph-mon[73755]: 8.16 scrub starts
Sep 30 17:40:48 compute-0 ceph-mon[73755]: 8.16 scrub ok
Sep 30 17:40:48 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.3( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.5( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.209037638Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.17( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.6( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.1a( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.18( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.19( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.1b( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.209267714Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=232.446µs
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.1c( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.1e( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.10( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.11( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.1d( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.1f( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.12( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:48 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 59 pg[10.7( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=54'774 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.214265273Z level=info msg="Executing migration" id="create file table"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.215155396Z level=info msg="Migration successfully executed" id="create file table" duration=890.973µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.217852226Z level=info msg="Executing migration" id="file table idx: path natural pk"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.21916744Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.327954ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.284227964Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.285640681Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.416397ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.288363701Z level=info msg="Executing migration" id="create file_meta table"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.289216613Z level=info msg="Migration successfully executed" id="create file_meta table" duration=853.992µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.297397535Z level=info msg="Executing migration" id="file table idx: path key"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.298792711Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.398246ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.301414819Z level=info msg="Executing migration" id="set path collation in file table"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.30147833Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=62.371µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.304230241Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.304296763Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=68.462µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.306989543Z level=info msg="Executing migration" id="managed permissions migration"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.307743212Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=755.689µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.309903118Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.310145625Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=243.327µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.312009563Z level=info msg="Executing migration" id="RBAC action name migrator"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.313621145Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.610651ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.315739829Z level=info msg="Executing migration" id="Add UID column to playlist"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.324384833Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=8.641174ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.425206813Z level=info msg="Executing migration" id="Update uid column values in playlist"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.425542241Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=334.008µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.431886916Z level=info msg="Executing migration" id="Add index for uid in playlist"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.433047166Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.169001ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.461198974Z level=info msg="Executing migration" id="update group index for alert rules"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.461727958Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=533.384µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.46451439Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.464721335Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=207.665µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.499454245Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.499987049Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=538.673µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.502885774Z level=info msg="Executing migration" id="add action column to seed_assignment"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.509412382Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=6.525549ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.511125097Z level=info msg="Executing migration" id="add scope column to seed_assignment"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.517906602Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=6.776435ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.519868863Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.520849488Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=980.315µs
Sep 30 17:40:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v54: 291 pgs: 93 unknown, 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:40:48 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Sep 30 17:40:48 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 17:40:48 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Sep 30 17:40:48 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.522697406Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.597740208Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=75.016851ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.614911892Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.616042672Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.129419ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.617621743Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.618494855Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=872.812µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.676860356Z level=info msg="Executing migration" id="add primary key to seed_assigment"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.698458775Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=21.603169ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.710769043Z level=info msg="Executing migration" id="add origin column to seed_assignment"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.720993528Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=10.223895ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.723371359Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.723726829Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=355.61µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.730788101Z level=info msg="Executing migration" id="prevent seeding OnCall access"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.731082919Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=213.556µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.742766611Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.743067679Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=302.578µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.761414684Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.761741212Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=329.188µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.764369471Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.764641408Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=272.738µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.766971928Z level=info msg="Executing migration" id="create folder table"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.768028075Z level=info msg="Migration successfully executed" id="create folder table" duration=1.056307ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.770205202Z level=info msg="Executing migration" id="Add index for parent_uid"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.771211188Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.008317ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.773644681Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.774668037Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.023096ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.776640568Z level=info msg="Executing migration" id="Update folder title length"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.776664909Z level=info msg="Migration successfully executed" id="Update folder title length" duration=25.201µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.778983709Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.779913823Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=932.184µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.782015627Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.782947521Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=932.654µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.78483724Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.785767274Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=929.814µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.788244999Z level=info msg="Executing migration" id="Sync dashboard and folder table"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.788739441Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=494.413µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.790557598Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.790801545Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=242.737µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.792453697Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.793404552Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=948.325µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.795157357Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.796261966Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.104479ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.798050602Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.798930365Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=880.143µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.800483005Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.801400389Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=916.594µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.803053142Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.803963135Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=909.583µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.805467164Z level=info msg="Executing migration" id="create anon_device table"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.806298936Z level=info msg="Migration successfully executed" id="create anon_device table" duration=831.402µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.807969659Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.808945314Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=974.785µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.811321036Z level=info msg="Executing migration" id="add index anon_device.updated_at"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.81304121Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.720704ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.815273258Z level=info msg="Executing migration" id="create signing_key table"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.81650878Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.247202ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.819086707Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.82075548Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.668743ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.823264435Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.824588419Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.323854ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.826575921Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.82693274Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=358.95µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.836269211Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.843645932Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=7.369831ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.85321345Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.854135844Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=927.184µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.859322488Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.86054989Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.229812ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.862976513Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.864160203Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.18502ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.865850777Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.866817242Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=967.675µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.86905659Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.870396335Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.337605ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.872093409Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.873263049Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.16973ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.875792484Z level=info msg="Executing migration" id="create sso_setting table"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.877223691Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.425517ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.881874452Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.882857577Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=986.475µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.884231853Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.884469839Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=239.276µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.886687636Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.886761138Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=76.472µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.888412551Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.89609368Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=7.675959ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.899326303Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.907726561Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=8.397528ms
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.911390736Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.911863948Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=474.522µs
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=migrator t=2025-09-30T17:40:48.915316167Z level=info msg="migrations completed" performed=547 skipped=0 duration=3.147645935s
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=sqlstore t=2025-09-30T17:40:48.916928349Z level=info msg="Created default organization"
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=secrets t=2025-09-30T17:40:48.919883745Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Sep 30 17:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=plugin.store t=2025-09-30T17:40:48.942596343Z level=info msg="Loading plugins..."
Sep 30 17:40:48 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Sep 30 17:40:48 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=local.finder t=2025-09-30T17:40:49.016735832Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=plugin.store t=2025-09-30T17:40:49.016767473Z level=info msg="Plugins loaded" count=55 duration=74.17258ms
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=query_data t=2025-09-30T17:40:49.019635887Z level=info msg="Query Service initialization"
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=live.push_http t=2025-09-30T17:40:49.023454536Z level=info msg="Live Push Gateway initialization"
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=ngalert.migration t=2025-09-30T17:40:49.02669771Z level=info msg=Starting
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=ngalert.migration t=2025-09-30T17:40:49.027157582Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=ngalert.migration orgID=1 t=2025-09-30T17:40:49.027564663Z level=info msg="Migrating alerts for organisation"
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=ngalert.migration orgID=1 t=2025-09-30T17:40:49.028384004Z level=info msg="Alerts found to migrate" alerts=0
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=ngalert.migration t=2025-09-30T17:40:49.030058197Z level=info msg="Completed alerting migration"
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=ngalert.state.manager t=2025-09-30T17:40:49.057715483Z level=info msg="Running in alternative execution of Error/NoData mode"
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=infra.usagestats.collector t=2025-09-30T17:40:49.059587951Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=provisioning.datasources t=2025-09-30T17:40:49.060581187Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=provisioning.alerting t=2025-09-30T17:40:49.070834462Z level=info msg="starting to provision alerting"
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=provisioning.alerting t=2025-09-30T17:40:49.070864683Z level=info msg="finished to provision alerting"
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=ngalert.state.manager t=2025-09-30T17:40:49.071007077Z level=info msg="Warming state cache for startup"
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=http.server t=2025-09-30T17:40:49.074555349Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=http.server t=2025-09-30T17:40:49.074908658Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=ngalert.multiorg.alertmanager t=2025-09-30T17:40:49.074967259Z level=info msg="Starting MultiOrg Alertmanager"
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=grafanaStorageLogger t=2025-09-30T17:40:49.082760681Z level=info msg="Storage starting"
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=provisioning.dashboard t=2025-09-30T17:40:49.107366478Z level=info msg="starting to provision dashboards"
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=ngalert.state.manager t=2025-09-30T17:40:49.131458211Z level=info msg="State cache has been initialized" states=0 duration=60.446084ms
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=ngalert.scheduler t=2025-09-30T17:40:49.131562854Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=ticker t=2025-09-30T17:40:49.131642636Z level=info msg=starting first_tick=2025-09-30T17:40:50Z
Sep 30 17:40:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=plugins.update.checker t=2025-09-30T17:40:49.180995634Z level=info msg="Update check succeeded" duration=107.626195ms
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=grafana.update.checker t=2025-09-30T17:40:49.181515637Z level=info msg="Update check succeeded" duration=110.006467ms
Sep 30 17:40:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Sep 30 17:40:49 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 17:40:49 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 17:40:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e60 e60: 2 total, 2 up, 2 in
Sep 30 17:40:49 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e60: 2 total, 2 up, 2 in
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=sqlstore.transactions t=2025-09-30T17:40:49.228885333Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=sqlstore.transactions t=2025-09-30T17:40:49.229365865Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=sqlstore.transactions t=2025-09-30T17:40:49.240417861Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
Sep 30 17:40:49 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Sep 30 17:40:49 compute-0 ceph-mon[73755]: osdmap e59: 2 total, 2 up, 2 in
Sep 30 17:40:49 compute-0 ceph-mon[73755]: pgmap v54: 291 pgs: 93 unknown, 198 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:40:49 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 17:40:49 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 17:40:49 compute-0 ceph-mon[73755]: 8.15 scrub starts
Sep 30 17:40:49 compute-0 ceph-mon[73755]: 8.15 scrub ok
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=sqlstore.transactions t=2025-09-30T17:40:49.264986437Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=sqlstore.transactions t=2025-09-30T17:40:49.294398939Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=2 code="database is locked"
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=sqlstore.transactions t=2025-09-30T17:40:49.30527999Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=2 code="database is locked"
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=sqlstore.transactions t=2025-09-30T17:40:49.30526563Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=3 code="database is locked"
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=sqlstore.transactions t=2025-09-30T17:40:49.317768953Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=sqlstore.transactions t=2025-09-30T17:40:49.32304454Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=4 code="database is locked"
Sep 30 17:40:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:40:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:40:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:40:49.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=grafana-apiserver t=2025-09-30T17:40:49.392633971Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=grafana-apiserver t=2025-09-30T17:40:49.393026361Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:49 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff18002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:49 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff2c0025d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:49 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 60 pg[12.0( empty local-lis/les=52/53 n=0 ec=52/52 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=15.308391571s) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active pruub 157.074020386s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:49 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 60 pg[12.0( empty local-lis/les=52/53 n=0 ec=52/52 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=15.308391571s) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown pruub 157.074020386s@ mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:40:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:40:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:40:49.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:40:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:40:49 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=provisioning.dashboard t=2025-09-30T17:40:49.781860315Z level=info msg="finished to provision dashboards"
Sep 30 17:40:49 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Sep 30 17:40:49 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0)
Sep 30 17:40:49 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:49 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Sep 30 17:40:49 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Sep 30 17:40:49 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Sep 30 17:40:49 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Sep 30 17:40:49 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.fjegxm on compute-0
Sep 30 17:40:49 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.fjegxm on compute-0
Sep 30 17:40:49 compute-0 sudo[99690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:40:49 compute-0 sudo[99690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:40:49 compute-0 sudo[99690]: pam_unix(sudo:session): session closed for user root
Sep 30 17:40:49 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Sep 30 17:40:49 compute-0 sudo[99715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:40:49 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Sep 30 17:40:49 compute-0 sudo[99715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:40:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Sep 30 17:40:50 compute-0 ceph-mon[73755]: 10.15 scrub starts
Sep 30 17:40:50 compute-0 ceph-mon[73755]: 10.15 scrub ok
Sep 30 17:40:50 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 17:40:50 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 17:40:50 compute-0 ceph-mon[73755]: osdmap e60: 2 total, 2 up, 2 in
Sep 30 17:40:50 compute-0 ceph-mon[73755]: 8.14 scrub starts
Sep 30 17:40:50 compute-0 ceph-mon[73755]: 8.14 scrub ok
Sep 30 17:40:50 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:50 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:50 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:50 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:50 compute-0 ceph-mon[73755]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Sep 30 17:40:50 compute-0 ceph-mon[73755]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Sep 30 17:40:50 compute-0 ceph-mon[73755]: Deploying daemon keepalived.rgw.default.compute-0.fjegxm on compute-0
Sep 30 17:40:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e61 e61: 2 total, 2 up, 2 in
Sep 30 17:40:50 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e61: 2 total, 2 up, 2 in
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.11( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.13( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.4( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.7( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.15( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.6( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.9( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.8( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.12( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.a( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.f( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.c( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.b( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.e( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.d( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.5( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.2( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.3( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.1( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.1e( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.1f( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.1d( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.1b( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.18( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.19( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.1c( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.16( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.17( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.14( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.1a( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.11( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.10( empty local-lis/les=52/53 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.13( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.4( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.7( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.6( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.9( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.8( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.12( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.c( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.a( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.b( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.f( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.e( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.d( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.5( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.2( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.3( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.0( empty local-lis/les=60/61 n=0 ec=52/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.1( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.1e( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.1f( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.15( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.1d( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.1b( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.18( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.19( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.16( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.17( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.14( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.1a( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.1c( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:50 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 61 pg[12.10( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:50 compute-0 podman[99780]: 2025-09-30 17:40:50.421851378 +0000 UTC m=+0.066691237 container create 288f005a6f9a0027de9cfe6d16b63c2f7ffed0f4e49018b7cdcbcc16f4c716fc (image=quay.io/ceph/keepalived:2.2.4, name=elated_einstein, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Sep 30 17:40:50 compute-0 systemd[1]: Started libpod-conmon-288f005a6f9a0027de9cfe6d16b63c2f7ffed0f4e49018b7cdcbcc16f4c716fc.scope.
Sep 30 17:40:50 compute-0 podman[99780]: 2025-09-30 17:40:50.388033053 +0000 UTC m=+0.032873012 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Sep 30 17:40:50 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:40:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v57: 353 pgs: 62 unknown, 291 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:40:50 compute-0 podman[99780]: 2025-09-30 17:40:50.526541808 +0000 UTC m=+0.171381697 container init 288f005a6f9a0027de9cfe6d16b63c2f7ffed0f4e49018b7cdcbcc16f4c716fc (image=quay.io/ceph/keepalived:2.2.4, name=elated_einstein, vendor=Red Hat, Inc., version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, build-date=2023-02-22T09:23:20, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph.)
Sep 30 17:40:50 compute-0 podman[99780]: 2025-09-30 17:40:50.541511545 +0000 UTC m=+0.186351414 container start 288f005a6f9a0027de9cfe6d16b63c2f7ffed0f4e49018b7cdcbcc16f4c716fc (image=quay.io/ceph/keepalived:2.2.4, name=elated_einstein, version=2.2.4, distribution-scope=public, name=keepalived, build-date=2023-02-22T09:23:20, architecture=x86_64, vcs-type=git, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Sep 30 17:40:50 compute-0 podman[99780]: 2025-09-30 17:40:50.546384061 +0000 UTC m=+0.191223950 container attach 288f005a6f9a0027de9cfe6d16b63c2f7ffed0f4e49018b7cdcbcc16f4c716fc (image=quay.io/ceph/keepalived:2.2.4, name=elated_einstein, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., name=keepalived, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, version=2.2.4, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793)
Sep 30 17:40:50 compute-0 elated_einstein[99797]: 0 0
Sep 30 17:40:50 compute-0 systemd[1]: libpod-288f005a6f9a0027de9cfe6d16b63c2f7ffed0f4e49018b7cdcbcc16f4c716fc.scope: Deactivated successfully.
Sep 30 17:40:50 compute-0 conmon[99797]: conmon 288f005a6f9a0027de9c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-288f005a6f9a0027de9cfe6d16b63c2f7ffed0f4e49018b7cdcbcc16f4c716fc.scope/container/memory.events
Sep 30 17:40:50 compute-0 podman[99780]: 2025-09-30 17:40:50.552292034 +0000 UTC m=+0.197131893 container died 288f005a6f9a0027de9cfe6d16b63c2f7ffed0f4e49018b7cdcbcc16f4c716fc (image=quay.io/ceph/keepalived:2.2.4, name=elated_einstein, io.openshift.expose-services=, release=1793, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=keepalived for Ceph, com.redhat.component=keepalived-container)
Sep 30 17:40:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-90fe4beed129aad2b36f89032121f468ad9ead8b8f631725ee836667c6a220ae-merged.mount: Deactivated successfully.
Sep 30 17:40:50 compute-0 podman[99780]: 2025-09-30 17:40:50.599625079 +0000 UTC m=+0.244464938 container remove 288f005a6f9a0027de9cfe6d16b63c2f7ffed0f4e49018b7cdcbcc16f4c716fc (image=quay.io/ceph/keepalived:2.2.4, name=elated_einstein, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, version=2.2.4, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, architecture=x86_64, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20)
Sep 30 17:40:50 compute-0 systemd[1]: libpod-conmon-288f005a6f9a0027de9cfe6d16b63c2f7ffed0f4e49018b7cdcbcc16f4c716fc.scope: Deactivated successfully.
Sep 30 17:40:50 compute-0 systemd[1]: Reloading.
Sep 30 17:40:50 compute-0 systemd-sysv-generator[99848]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:40:50 compute-0 systemd-rc-local-generator[99844]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:40:50 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Sep 30 17:40:50 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Sep 30 17:40:51 compute-0 systemd[1]: Reloading.
Sep 30 17:40:51 compute-0 systemd-rc-local-generator[99880]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:40:51 compute-0 systemd-sysv-generator[99887]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:40:51 compute-0 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.fjegxm for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 17:40:51 compute-0 ceph-mon[73755]: 10.16 scrub starts
Sep 30 17:40:51 compute-0 ceph-mon[73755]: 10.16 scrub ok
Sep 30 17:40:51 compute-0 ceph-mon[73755]: osdmap e61: 2 total, 2 up, 2 in
Sep 30 17:40:51 compute-0 ceph-mon[73755]: pgmap v57: 353 pgs: 62 unknown, 291 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:40:51 compute-0 ceph-mon[73755]: 8.17 scrub starts
Sep 30 17:40:51 compute-0 ceph-mon[73755]: 8.17 scrub ok
Sep 30 17:40:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:40:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:40:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:40:51.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:40:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:51 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44001bd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:51 compute-0 podman[99943]: 2025-09-30 17:40:51.486894153 +0000 UTC m=+0.050826396 container create 29e4700b4d52c7cf7ea3a984e2e12e0e462cf19ed39f81d6a59d6bd8d46a76f7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-rgw-default-compute-0-fjegxm, architecture=x86_64, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, release=1793, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., version=2.2.4, description=keepalived for Ceph, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Sep 30 17:40:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:51 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff14003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c34e0504ece020e1e88ad0ffa4be60c4597b87b23e74c9931d07017180c3ef1e/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:51 compute-0 podman[99943]: 2025-09-30 17:40:51.540572172 +0000 UTC m=+0.104504395 container init 29e4700b4d52c7cf7ea3a984e2e12e0e462cf19ed39f81d6a59d6bd8d46a76f7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-rgw-default-compute-0-fjegxm, release=1793, architecture=x86_64, com.redhat.component=keepalived-container, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Sep 30 17:40:51 compute-0 podman[99943]: 2025-09-30 17:40:51.545774377 +0000 UTC m=+0.109706580 container start 29e4700b4d52c7cf7ea3a984e2e12e0e462cf19ed39f81d6a59d6bd8d46a76f7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-rgw-default-compute-0-fjegxm, com.redhat.component=keepalived-container, release=1793, vcs-type=git, io.buildah.version=1.28.2, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=2.2.4, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.openshift.expose-services=, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived)
Sep 30 17:40:51 compute-0 podman[99943]: 2025-09-30 17:40:51.459948616 +0000 UTC m=+0.023880909 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Sep 30 17:40:51 compute-0 bash[99943]: 29e4700b4d52c7cf7ea3a984e2e12e0e462cf19ed39f81d6a59d6bd8d46a76f7
Sep 30 17:40:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-rgw-default-compute-0-fjegxm[99960]: Tue Sep 30 17:40:51 2025: Starting Keepalived v2.2.4 (08/21,2021)
Sep 30 17:40:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-rgw-default-compute-0-fjegxm[99960]: Tue Sep 30 17:40:51 2025: Running on Linux 5.14.0-617.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Sep 15 21:46:13 UTC 2025 (built for Linux 5.14.0)
Sep 30 17:40:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-rgw-default-compute-0-fjegxm[99960]: Tue Sep 30 17:40:51 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Sep 30 17:40:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-rgw-default-compute-0-fjegxm[99960]: Tue Sep 30 17:40:51 2025: Configuration file /etc/keepalived/keepalived.conf
Sep 30 17:40:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-rgw-default-compute-0-fjegxm[99960]: Tue Sep 30 17:40:51 2025: Failed to bind to process monitoring socket - errno 98 - Address already in use
Sep 30 17:40:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-rgw-default-compute-0-fjegxm[99960]: Tue Sep 30 17:40:51 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Sep 30 17:40:51 compute-0 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.fjegxm for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:40:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-rgw-default-compute-0-fjegxm[99960]: Tue Sep 30 17:40:51 2025: Starting VRRP child process, pid=4
Sep 30 17:40:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-rgw-default-compute-0-fjegxm[99960]: Tue Sep 30 17:40:51 2025: Startup complete
Sep 30 17:40:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-rgw-default-compute-0-fjegxm[99960]: Tue Sep 30 17:40:51 2025: (VI_0) Entering BACKUP STATE (init)
Sep 30 17:40:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc[98429]: Tue Sep 30 17:40:51 2025: (VI_0) Entering BACKUP STATE
Sep 30 17:40:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-rgw-default-compute-0-fjegxm[99960]: Tue Sep 30 17:40:51 2025: VRRP_Script(check_backend) succeeded
Sep 30 17:40:51 compute-0 sudo[99715]: pam_unix(sudo:session): session closed for user root
Sep 30 17:40:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:40:51 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:40:51 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Sep 30 17:40:51 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:51 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Sep 30 17:40:51 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Sep 30 17:40:51 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Sep 30 17:40:51 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Sep 30 17:40:51 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-1.wuqpyu on compute-1
Sep 30 17:40:51 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-1.wuqpyu on compute-1
Sep 30 17:40:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:40:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:40:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:40:51.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:40:51 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Sep 30 17:40:51 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Sep 30 17:40:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc[98429]: Tue Sep 30 17:40:52 2025: (VI_0) Entering MASTER STATE
Sep 30 17:40:52 compute-0 sudo[99991]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljbxjjponriullcjeuwbhhzbvqlrnism ; /usr/bin/python3'
Sep 30 17:40:52 compute-0 sudo[99991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:40:52 compute-0 python3[99993]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:40:52 compute-0 podman[99994]: 2025-09-30 17:40:52.468159129 +0000 UTC m=+0.045372915 container create 74350e19d899df08821f8dd31ce8adc7178e48a6c872dfd68e669bd8c04c8467 (image=quay.io/ceph/ceph:v19, name=nifty_snyder, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Sep 30 17:40:52 compute-0 systemd[1]: Started libpod-conmon-74350e19d899df08821f8dd31ce8adc7178e48a6c872dfd68e669bd8c04c8467.scope.
Sep 30 17:40:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v58: 353 pgs: 62 unknown, 291 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:40:52 compute-0 podman[99994]: 2025-09-30 17:40:52.45156631 +0000 UTC m=+0.028780116 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:40:52 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:40:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/491de198169424b187331c09790def1e3fcc61d525e15e76d059f6897a9f28f7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/491de198169424b187331c09790def1e3fcc61d525e15e76d059f6897a9f28f7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:52 compute-0 podman[99994]: 2025-09-30 17:40:52.5767675 +0000 UTC m=+0.153981326 container init 74350e19d899df08821f8dd31ce8adc7178e48a6c872dfd68e669bd8c04c8467 (image=quay.io/ceph/ceph:v19, name=nifty_snyder, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:40:52 compute-0 podman[99994]: 2025-09-30 17:40:52.584562542 +0000 UTC m=+0.161776338 container start 74350e19d899df08821f8dd31ce8adc7178e48a6c872dfd68e669bd8c04c8467 (image=quay.io/ceph/ceph:v19, name=nifty_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid)
Sep 30 17:40:52 compute-0 podman[99994]: 2025-09-30 17:40:52.589326415 +0000 UTC m=+0.166540251 container attach 74350e19d899df08821f8dd31ce8adc7178e48a6c872dfd68e669bd8c04c8467 (image=quay.io/ceph/ceph:v19, name=nifty_snyder, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Sep 30 17:40:52 compute-0 ceph-mon[73755]: 10.14 scrub starts
Sep 30 17:40:52 compute-0 ceph-mon[73755]: 10.14 scrub ok
Sep 30 17:40:52 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:52 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:52 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:52 compute-0 ceph-mon[73755]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Sep 30 17:40:52 compute-0 ceph-mon[73755]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Sep 30 17:40:52 compute-0 ceph-mon[73755]: Deploying daemon keepalived.rgw.default.compute-1.wuqpyu on compute-1
Sep 30 17:40:52 compute-0 ceph-mon[73755]: 8.11 scrub starts
Sep 30 17:40:52 compute-0 ceph-mon[73755]: 8.11 scrub ok
Sep 30 17:40:52 compute-0 ceph-mon[73755]: 10.0 scrub starts
Sep 30 17:40:52 compute-0 ceph-mgr[74051]: [progress INFO root] Writing back 23 completed events
Sep 30 17:40:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Sep 30 17:40:52 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:52 compute-0 nifty_snyder[100009]: could not fetch user info: no user info saved
Sep 30 17:40:52 compute-0 systemd[1]: libpod-74350e19d899df08821f8dd31ce8adc7178e48a6c872dfd68e669bd8c04c8467.scope: Deactivated successfully.
Sep 30 17:40:52 compute-0 podman[99994]: 2025-09-30 17:40:52.865438712 +0000 UTC m=+0.442652508 container died 74350e19d899df08821f8dd31ce8adc7178e48a6c872dfd68e669bd8c04c8467 (image=quay.io/ceph/ceph:v19, name=nifty_snyder, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Sep 30 17:40:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-491de198169424b187331c09790def1e3fcc61d525e15e76d059f6897a9f28f7-merged.mount: Deactivated successfully.
Sep 30 17:40:52 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Sep 30 17:40:52 compute-0 podman[99994]: 2025-09-30 17:40:52.911956746 +0000 UTC m=+0.489170542 container remove 74350e19d899df08821f8dd31ce8adc7178e48a6c872dfd68e669bd8c04c8467 (image=quay.io/ceph/ceph:v19, name=nifty_snyder, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:40:52 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Sep 30 17:40:52 compute-0 systemd[1]: libpod-conmon-74350e19d899df08821f8dd31ce8adc7178e48a6c872dfd68e669bd8c04c8467.scope: Deactivated successfully.
Sep 30 17:40:52 compute-0 sudo[99991]: pam_unix(sudo:session): session closed for user root
Sep 30 17:40:53 compute-0 sudo[100130]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcokllomrucmfyfdpyxxslagkvjherjj ; /usr/bin/python3'
Sep 30 17:40:53 compute-0 sudo[100130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:40:53 compute-0 python3[100132]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:40:53 compute-0 podman[100133]: 2025-09-30 17:40:53.251889144 +0000 UTC m=+0.039725210 container create 437fe85faff9fe17a92e0f7785ac8f7a12858163d6578f454a0f6301b376154e (image=quay.io/ceph/ceph:v19, name=distracted_napier, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Sep 30 17:40:53 compute-0 systemd[1]: Started libpod-conmon-437fe85faff9fe17a92e0f7785ac8f7a12858163d6578f454a0f6301b376154e.scope.
Sep 30 17:40:53 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:40:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaee3c3c601b1096b5e8a8eba3e38084047b3e389e9876bdcdf1ec0c56c30c9c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaee3c3c601b1096b5e8a8eba3e38084047b3e389e9876bdcdf1ec0c56c30c9c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:53 compute-0 podman[100133]: 2025-09-30 17:40:53.236087275 +0000 UTC m=+0.023923361 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 17:40:53 compute-0 podman[100133]: 2025-09-30 17:40:53.337767536 +0000 UTC m=+0.125603672 container init 437fe85faff9fe17a92e0f7785ac8f7a12858163d6578f454a0f6301b376154e (image=quay.io/ceph/ceph:v19, name=distracted_napier, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Sep 30 17:40:53 compute-0 podman[100133]: 2025-09-30 17:40:53.34370985 +0000 UTC m=+0.131545896 container start 437fe85faff9fe17a92e0f7785ac8f7a12858163d6578f454a0f6301b376154e (image=quay.io/ceph/ceph:v19, name=distracted_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Sep 30 17:40:53 compute-0 podman[100133]: 2025-09-30 17:40:53.348065593 +0000 UTC m=+0.135901669 container attach 437fe85faff9fe17a92e0f7785ac8f7a12858163d6578f454a0f6301b376154e (image=quay.io/ceph/ceph:v19, name=distracted_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Sep 30 17:40:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:40:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:40:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:40:53.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:40:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:53 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff3c001110 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:53 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff2c0025d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:53 compute-0 distracted_napier[100149]: {
Sep 30 17:40:53 compute-0 distracted_napier[100149]:     "user_id": "openstack",
Sep 30 17:40:53 compute-0 distracted_napier[100149]:     "display_name": "openstack",
Sep 30 17:40:53 compute-0 distracted_napier[100149]:     "email": "",
Sep 30 17:40:53 compute-0 distracted_napier[100149]:     "suspended": 0,
Sep 30 17:40:53 compute-0 distracted_napier[100149]:     "max_buckets": 1000,
Sep 30 17:40:53 compute-0 distracted_napier[100149]:     "subusers": [],
Sep 30 17:40:53 compute-0 distracted_napier[100149]:     "keys": [
Sep 30 17:40:53 compute-0 distracted_napier[100149]:         {
Sep 30 17:40:53 compute-0 distracted_napier[100149]:             "user": "openstack",
Sep 30 17:40:53 compute-0 distracted_napier[100149]:             "access_key": "2RNPX180AWXCQ63OR714",
Sep 30 17:40:53 compute-0 distracted_napier[100149]:             "secret_key": "AKuYyav2FifLXAJimuCuHFXPXSZynLeMVZzGTVze",
Sep 30 17:40:53 compute-0 distracted_napier[100149]:             "active": true,
Sep 30 17:40:53 compute-0 distracted_napier[100149]:             "create_date": "2025-09-30T17:40:53.531548Z"
Sep 30 17:40:53 compute-0 distracted_napier[100149]:         }
Sep 30 17:40:53 compute-0 distracted_napier[100149]:     ],
Sep 30 17:40:53 compute-0 distracted_napier[100149]:     "swift_keys": [],
Sep 30 17:40:53 compute-0 distracted_napier[100149]:     "caps": [],
Sep 30 17:40:53 compute-0 distracted_napier[100149]:     "op_mask": "read, write, delete",
Sep 30 17:40:53 compute-0 distracted_napier[100149]:     "default_placement": "",
Sep 30 17:40:53 compute-0 distracted_napier[100149]:     "default_storage_class": "",
Sep 30 17:40:53 compute-0 distracted_napier[100149]:     "placement_tags": [],
Sep 30 17:40:53 compute-0 distracted_napier[100149]:     "bucket_quota": {
Sep 30 17:40:53 compute-0 distracted_napier[100149]:         "enabled": false,
Sep 30 17:40:53 compute-0 distracted_napier[100149]:         "check_on_raw": false,
Sep 30 17:40:53 compute-0 distracted_napier[100149]:         "max_size": -1,
Sep 30 17:40:53 compute-0 distracted_napier[100149]:         "max_size_kb": 0,
Sep 30 17:40:53 compute-0 distracted_napier[100149]:         "max_objects": -1
Sep 30 17:40:53 compute-0 distracted_napier[100149]:     },
Sep 30 17:40:53 compute-0 distracted_napier[100149]:     "user_quota": {
Sep 30 17:40:53 compute-0 distracted_napier[100149]:         "enabled": false,
Sep 30 17:40:53 compute-0 distracted_napier[100149]:         "check_on_raw": false,
Sep 30 17:40:53 compute-0 distracted_napier[100149]:         "max_size": -1,
Sep 30 17:40:53 compute-0 distracted_napier[100149]:         "max_size_kb": 0,
Sep 30 17:40:53 compute-0 distracted_napier[100149]:         "max_objects": -1
Sep 30 17:40:53 compute-0 distracted_napier[100149]:     },
Sep 30 17:40:53 compute-0 distracted_napier[100149]:     "temp_url_keys": [],
Sep 30 17:40:53 compute-0 distracted_napier[100149]:     "type": "rgw",
Sep 30 17:40:53 compute-0 distracted_napier[100149]:     "mfa_ids": [],
Sep 30 17:40:53 compute-0 distracted_napier[100149]:     "account_id": "",
Sep 30 17:40:53 compute-0 distracted_napier[100149]:     "path": "/",
Sep 30 17:40:53 compute-0 distracted_napier[100149]:     "create_date": "2025-09-30T17:40:53.531113Z",
Sep 30 17:40:53 compute-0 distracted_napier[100149]:     "tags": [],
Sep 30 17:40:53 compute-0 distracted_napier[100149]:     "group_ids": []
Sep 30 17:40:53 compute-0 distracted_napier[100149]: }
Sep 30 17:40:53 compute-0 distracted_napier[100149]: 
Sep 30 17:40:53 compute-0 systemd[1]: libpod-437fe85faff9fe17a92e0f7785ac8f7a12858163d6578f454a0f6301b376154e.scope: Deactivated successfully.
Sep 30 17:40:53 compute-0 podman[100238]: 2025-09-30 17:40:53.65363053 +0000 UTC m=+0.030692405 container died 437fe85faff9fe17a92e0f7785ac8f7a12858163d6578f454a0f6301b376154e (image=quay.io/ceph/ceph:v19, name=distracted_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:40:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-aaee3c3c601b1096b5e8a8eba3e38084047b3e389e9876bdcdf1ec0c56c30c9c-merged.mount: Deactivated successfully.
Sep 30 17:40:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:40:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:40:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:40:53.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:40:53 compute-0 podman[100238]: 2025-09-30 17:40:53.688839981 +0000 UTC m=+0.065901856 container remove 437fe85faff9fe17a92e0f7785ac8f7a12858163d6578f454a0f6301b376154e (image=quay.io/ceph/ceph:v19, name=distracted_napier, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Sep 30 17:40:53 compute-0 systemd[1]: libpod-conmon-437fe85faff9fe17a92e0f7785ac8f7a12858163d6578f454a0f6301b376154e.scope: Deactivated successfully.
Sep 30 17:40:53 compute-0 sudo[100130]: pam_unix(sudo:session): session closed for user root
Sep 30 17:40:53 compute-0 ceph-mon[73755]: 10.0 scrub ok
Sep 30 17:40:53 compute-0 ceph-mon[73755]: pgmap v58: 353 pgs: 62 unknown, 291 active+clean; 456 KiB data, 58 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:40:53 compute-0 ceph-mon[73755]: 8.2 deep-scrub starts
Sep 30 17:40:53 compute-0 ceph-mon[73755]: 8.2 deep-scrub ok
Sep 30 17:40:53 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:53 compute-0 ceph-mon[73755]: 10.2 scrub starts
Sep 30 17:40:53 compute-0 ceph-mon[73755]: 10.2 scrub ok
Sep 30 17:40:53 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Sep 30 17:40:53 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Sep 30 17:40:53 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:40:53 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:53 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:40:53 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:53 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Sep 30 17:40:53 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:53 compute-0 ceph-mgr[74051]: [progress INFO root] complete: finished ev 1594b8e3-4cd2-42ec-a1e0-f535c3afe9b6 (Updating ingress.rgw.default deployment (+4 -> 4))
Sep 30 17:40:53 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Sep 30 17:40:53 compute-0 ceph-mgr[74051]: [progress INFO root] Completed event 1594b8e3-4cd2-42ec-a1e0-f535c3afe9b6 (Updating ingress.rgw.default deployment (+4 -> 4)) in 8 seconds
Sep 30 17:40:54 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:54 compute-0 ceph-mgr[74051]: [progress INFO root] update: starting ev ea8fbe14-b69c-4e1e-ad65-19948db3a131 (Updating prometheus deployment (+1 -> 1))
Sep 30 17:40:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:40:54 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Deploying daemon prometheus.compute-0 on compute-0
Sep 30 17:40:54 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Deploying daemon prometheus.compute-0 on compute-0
Sep 30 17:40:54 compute-0 python3[100277]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_response mode=0644 validate_certs=False force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:40:54 compute-0 sudo[100278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:40:54 compute-0 sudo[100278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:40:54 compute-0 sudo[100278]: pam_unix(sudo:session): session closed for user root
Sep 30 17:40:54 compute-0 sudo[100303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/prometheus:v2.51.0 --timeout 895 _orch deploy --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:40:54 compute-0 sudo[100303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:40:54 compute-0 ceph-mgr[74051]: [dashboard INFO request] [192.168.122.100:51142] [GET] [200] [0.118s] [6.3K] [6cf42367-2dd8-4e27-bf24-bc3b32ed3306] /
Sep 30 17:40:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v59: 353 pgs: 353 active+clean; 456 KiB data, 76 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:40:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"} v 0)
Sep 30 17:40:54 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 17:40:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Sep 30 17:40:54 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 17:40:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Sep 30 17:40:54 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 17:40:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Sep 30 17:40:54 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Sep 30 17:40:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Sep 30 17:40:54 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 17:40:54 compute-0 ceph-mon[73755]: 8.f scrub starts
Sep 30 17:40:54 compute-0 ceph-mon[73755]: 8.f scrub ok
Sep 30 17:40:54 compute-0 ceph-mon[73755]: 10.13 scrub starts
Sep 30 17:40:54 compute-0 ceph-mon[73755]: 10.13 scrub ok
Sep 30 17:40:54 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:54 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:54 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:54 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:54 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 17:40:54 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 17:40:54 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 17:40:54 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Sep 30 17:40:54 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 17:40:54 compute-0 python3[100351]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_http_response mode=0644 validate_certs=False username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER password=NOT_LOGGING_PARAMETER url_username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER url_password=NOT_LOGGING_PARAMETER force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:40:54 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.e scrub starts
Sep 30 17:40:54 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.e scrub ok
Sep 30 17:40:54 compute-0 ceph-mgr[74051]: [dashboard INFO request] [192.168.122.100:51158] [GET] [200] [0.002s] [6.3K] [41a7fba4-e0e1-44d2-b25b-1661885cdc50] /
Sep 30 17:40:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Sep 30 17:40:55 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 17:40:55 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 17:40:55 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 17:40:55 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Sep 30 17:40:55 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 17:40:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e62 e62: 2 total, 2 up, 2 in
Sep 30 17:40:55 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e62: 2 total, 2 up, 2 in
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[12.10( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=11.270806313s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 158.410888672s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[12.10( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=11.270766258s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.410888672s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[12.12( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=11.268129349s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 158.408462524s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[12.12( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=11.268093109s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.408462524s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[12.13( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=11.267663002s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 158.408172607s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[12.13( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=11.267568588s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.408172607s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[12.7( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=11.267521858s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 158.408279419s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[12.7( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=11.267502785s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.408279419s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[12.9( v 61'1 (0'0,61'1] local-lis/les=60/61 n=1 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=11.267412186s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=61'1 lcod 0'0 mlcod 0'0 active pruub 158.408401489s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[12.6( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=11.267320633s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 158.408309937s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[12.9( v 61'1 (0'0,61'1] local-lis/les=60/61 n=1 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=11.267388344s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=61'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.408401489s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[12.6( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=11.267295837s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.408309937s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[12.c( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=11.267251015s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 158.408477783s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[12.c( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=11.267219543s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.408477783s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[12.b( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=11.267082214s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 158.408523560s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[12.b( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=11.267067909s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.408523560s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[12.8( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=11.266924858s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 158.408432007s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[12.8( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=11.266849518s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.408432007s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[12.e( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=11.266814232s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 158.408554077s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[12.e( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=11.266767502s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.408554077s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[12.a( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=11.266515732s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 158.408493042s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[12.3( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=11.266523361s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 158.408615112s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[12.3( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=11.266490936s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.408615112s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[12.1c( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=11.268260002s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 158.410705566s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[12.1c( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=11.268242836s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.410705566s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[12.a( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=11.266401291s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.408493042s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[12.1d( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=11.266044617s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 158.408737183s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[12.1d( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=11.266004562s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.408737183s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[12.19( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=11.265842438s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 158.408798218s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[12.19( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=11.265814781s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.408798218s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[12.18( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=11.265819550s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 active pruub 158.408782959s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[12.18( empty local-lis/les=60/61 n=0 ec=60/52 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=11.265687943s) [1] r=-1 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.408782959s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[11.17( empty local-lis/les=0/0 n=0 ec=60/50 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[11.16( empty local-lis/les=0/0 n=0 ec=60/50 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[8.15( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[8.14( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[9.15( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[11.14( empty local-lis/les=0/0 n=0 ec=60/50 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[9.16( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[11.13( empty local-lis/les=0/0 n=0 ec=60/50 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[8.17( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[8.10( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[9.11( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[9.10( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[8.11( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[11.12( empty local-lis/les=0/0 n=0 ec=60/50 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[11.1( empty local-lis/les=0/0 n=0 ec=60/50 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[8.2( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[9.3( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[9.e( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[8.9( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[8.8( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[11.e( empty local-lis/les=0/0 n=0 ec=60/50 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[11.f( empty local-lis/les=0/0 n=0 ec=60/50 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[9.d( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[8.d( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[11.3( empty local-lis/les=0/0 n=0 ec=60/50 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[11.4( empty local-lis/les=0/0 n=0 ec=60/50 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[9.6( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[8.5( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[11.5( empty local-lis/les=0/0 n=0 ec=60/50 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[11.7( empty local-lis/les=0/0 n=0 ec=60/50 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[9.a( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[8.4( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[8.1b( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[11.1a( empty local-lis/les=0/0 n=0 ec=60/50 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[9.18( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[8.19( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[11.1b( empty local-lis/les=0/0 n=0 ec=60/50 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[8.18( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[11.1c( empty local-lis/les=0/0 n=0 ec=60/50 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[11.1e( empty local-lis/les=0/0 n=0 ec=60/50 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[8.1c( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[9.12( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[8.12( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[9.13( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 62 pg[11.1d( empty local-lis/les=0/0 n=0 ec=60/50 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-rgw-default-compute-0-fjegxm[99960]: Tue Sep 30 17:40:55 2025: (VI_0) Entering MASTER STATE
Sep 30 17:40:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:40:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:40:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:40:55.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:40:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:55 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff2c0025d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:55 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff14003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:40:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:40:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:40:55.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:40:55 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Sep 30 17:40:55 compute-0 ceph-mon[73755]: Deploying daemon prometheus.compute-0 on compute-0
Sep 30 17:40:55 compute-0 ceph-mon[73755]: pgmap v59: 353 pgs: 353 active+clean; 456 KiB data, 76 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:40:55 compute-0 ceph-mon[73755]: 8.9 scrub starts
Sep 30 17:40:55 compute-0 ceph-mon[73755]: 8.9 scrub ok
Sep 30 17:40:55 compute-0 ceph-mon[73755]: 10.e scrub starts
Sep 30 17:40:55 compute-0 ceph-mon[73755]: 10.e scrub ok
Sep 30 17:40:55 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 17:40:55 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 17:40:55 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 17:40:55 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Sep 30 17:40:55 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 17:40:55 compute-0 ceph-mon[73755]: osdmap e62: 2 total, 2 up, 2 in
Sep 30 17:40:55 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Sep 30 17:40:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Sep 30 17:40:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e63 e63: 2 total, 2 up, 2 in
Sep 30 17:40:56 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e63: 2 total, 2 up, 2 in
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[11.17( v 51'32 (0'0,51'32] local-lis/les=62/63 n=0 ec=60/50 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=51'32 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[9.15( v 47'12 (0'0,47'12] local-lis/les=62/63 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=47'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[8.17( v 54'26 (0'0,54'26] local-lis/les=62/63 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=54'26 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[11.16( v 51'32 (0'0,51'32] local-lis/les=62/63 n=0 ec=60/50 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=51'32 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[8.15( v 54'26 (0'0,54'26] local-lis/les=62/63 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=54'26 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[9.16( v 47'12 (0'0,47'12] local-lis/les=62/63 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=47'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[11.13( v 51'32 (0'0,51'32] local-lis/les=62/63 n=0 ec=60/50 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=51'32 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[11.14( v 61'40 lc 61'39 (0'0,61'40] local-lis/les=62/63 n=0 ec=60/50 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=61'40 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[9.17( v 47'12 (0'0,47'12] local-lis/les=62/63 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=47'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[11.12( v 51'32 (0'0,51'32] local-lis/les=62/63 n=0 ec=60/50 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=51'32 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[8.11( v 54'26 (0'0,54'26] local-lis/les=62/63 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=54'26 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[9.10( v 47'12 (0'0,47'12] local-lis/les=62/63 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=47'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[11.3( v 61'40 lc 61'39 (0'0,61'40] local-lis/les=62/63 n=1 ec=60/50 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=61'40 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[9.3( v 47'12 (0'0,47'12] local-lis/les=62/63 n=1 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=47'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[8.d( v 54'26 lc 0'0 (0'0,54'26] local-lis/les=62/63 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=54'26 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[11.e( v 61'40 lc 61'39 (0'0,61'40] local-lis/les=62/63 n=0 ec=60/50 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=61'40 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[8.2( v 54'26 (0'0,54'26] local-lis/les=62/63 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=54'26 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[11.f( v 51'32 (0'0,51'32] local-lis/les=62/63 n=0 ec=60/50 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=51'32 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[9.f( v 47'12 lc 0'0 (0'0,47'12] local-lis/les=62/63 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=47'12 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[8.8( v 54'26 (0'0,54'26] local-lis/les=62/63 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=54'26 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[9.a( v 47'12 (0'0,47'12] local-lis/les=62/63 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=47'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[8.14( v 54'26 (0'0,54'26] local-lis/les=62/63 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=54'26 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[9.d( v 47'12 (0'0,47'12] local-lis/les=62/63 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=47'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[8.9( v 54'26 (0'0,54'26] local-lis/les=62/63 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=54'26 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[11.5( v 51'32 (0'0,51'32] local-lis/les=62/63 n=1 ec=60/50 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=51'32 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[9.7( v 47'12 (0'0,47'12] local-lis/les=62/63 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=47'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[11.4( v 51'32 (0'0,51'32] local-lis/les=62/63 n=1 ec=60/50 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=51'32 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[9.e( v 47'12 lc 0'0 (0'0,47'12] local-lis/les=62/63 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=47'12 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[9.6( v 47'12 lc 0'0 (0'0,47'12] local-lis/les=62/63 n=1 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=47'12 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[11.7( v 51'32 (0'0,51'32] local-lis/les=62/63 n=1 ec=60/50 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=51'32 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[8.5( v 54'26 (0'0,54'26] local-lis/les=62/63 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=54'26 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[8.1b( v 54'26 lc 52'8 (0'0,54'26] local-lis/les=62/63 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=54'26 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[11.1( v 51'32 (0'0,51'32] local-lis/les=62/63 n=1 ec=60/50 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=51'32 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[11.1b( v 51'32 (0'0,51'32] local-lis/les=62/63 n=0 ec=60/50 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=51'32 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[8.19( v 54'26 (0'0,54'26] local-lis/les=62/63 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=54'26 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[9.18( v 47'12 (0'0,47'12] local-lis/les=62/63 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=47'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[8.18( v 54'26 lc 0'0 (0'0,54'26] local-lis/les=62/63 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=54'26 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[11.1a( v 51'32 (0'0,51'32] local-lis/les=62/63 n=0 ec=60/50 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=51'32 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[11.1d( v 51'32 (0'0,51'32] local-lis/les=62/63 n=0 ec=60/50 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=51'32 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[11.1c( v 51'32 (0'0,51'32] local-lis/les=62/63 n=0 ec=60/50 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=51'32 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[8.1c( v 54'26 (0'0,54'26] local-lis/les=62/63 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=54'26 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[11.1e( v 51'32 (0'0,51'32] local-lis/les=62/63 n=0 ec=60/50 lis/c=60/60 les/c/f=61/61/0 sis=62) [0] r=0 lpr=62 pi=[60,62)/1 crt=51'32 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[9.13( v 47'12 (0'0,47'12] local-lis/les=62/63 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=47'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[8.12( v 54'26 (0'0,54'26] local-lis/les=62/63 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=54'26 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[9.12( v 47'12 (0'0,47'12] local-lis/les=62/63 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=47'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[9.11( v 47'12 (0'0,47'12] local-lis/les=62/63 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [0] r=0 lpr=62 pi=[58,62)/1 crt=47'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[8.10( v 59'28 lc 52'4 (0'0,59'28] local-lis/les=62/63 n=1 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=59'28 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 63 pg[8.4( v 54'26 (0'0,54'26] local-lis/les=62/63 n=1 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=54'26 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v62: 353 pgs: 353 active+clean; 456 KiB data, 76 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:40:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Sep 30 17:40:56 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Sep 30 17:40:56 compute-0 ceph-mon[73755]: 9.14 scrub starts
Sep 30 17:40:56 compute-0 ceph-mon[73755]: 9.14 scrub ok
Sep 30 17:40:56 compute-0 ceph-mon[73755]: 10.17 scrub starts
Sep 30 17:40:56 compute-0 ceph-mon[73755]: 10.17 scrub ok
Sep 30 17:40:56 compute-0 ceph-mon[73755]: osdmap e63: 2 total, 2 up, 2 in
Sep 30 17:40:56 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Sep 30 17:40:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Sep 30 17:40:57 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Sep 30 17:40:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e64 e64: 2 total, 2 up, 2 in
Sep 30 17:40:57 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e64: 2 total, 2 up, 2 in
Sep 30 17:40:57 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 64 pg[10.16( v 54'774 (0'0,54'774] local-lis/les=58/59 n=4 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=15.142557144s) [1] r=-1 lpr=64 pi=[58,64)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 164.321807861s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:57 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 64 pg[10.16( v 54'774 (0'0,54'774] local-lis/les=58/59 n=4 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=15.142479897s) [1] r=-1 lpr=64 pi=[58,64)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.321807861s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:40:57 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 64 pg[10.2( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=15.146841049s) [1] r=-1 lpr=64 pi=[58,64)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 164.326660156s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:57 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 64 pg[10.2( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=15.146807671s) [1] r=-1 lpr=64 pi=[58,64)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.326660156s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:40:57 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 64 pg[10.e( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=15.146813393s) [1] r=-1 lpr=64 pi=[58,64)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 164.326751709s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:57 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 64 pg[10.e( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=15.146770477s) [1] r=-1 lpr=64 pi=[58,64)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.326751709s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:40:57 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 64 pg[10.a( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=15.146573067s) [1] r=-1 lpr=64 pi=[58,64)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 164.326782227s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:57 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 64 pg[10.a( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=15.146557808s) [1] r=-1 lpr=64 pi=[58,64)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.326782227s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:40:57 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 64 pg[10.6( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=15.146512985s) [1] r=-1 lpr=64 pi=[58,64)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 164.327178955s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:57 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 64 pg[10.6( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=15.146491051s) [1] r=-1 lpr=64 pi=[58,64)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.327178955s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:40:57 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 64 pg[10.1a( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=15.146348953s) [1] r=-1 lpr=64 pi=[58,64)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 164.327255249s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:57 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 64 pg[10.1a( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=15.146326065s) [1] r=-1 lpr=64 pi=[58,64)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.327255249s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:40:57 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 64 pg[10.1e( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=15.146456718s) [1] r=-1 lpr=64 pi=[58,64)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 164.327606201s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:57 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 64 pg[10.1e( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=15.146437645s) [1] r=-1 lpr=64 pi=[58,64)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.327606201s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:40:57 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 64 pg[10.12( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=15.146429062s) [1] r=-1 lpr=64 pi=[58,64)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 164.327789307s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:57 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 64 pg[10.12( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=64 pruub=15.146411896s) [1] r=-1 lpr=64 pi=[58,64)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.327789307s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:40:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:40:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:40:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:40:57.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:40:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:57 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff3c001110 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:57 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff2c0025d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:40:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:40:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:40:57.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:40:57 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 9.e scrub starts
Sep 30 17:40:57 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 9.e scrub ok
Sep 30 17:40:57 compute-0 ceph-mgr[74051]: [progress INFO root] Writing back 24 completed events
Sep 30 17:40:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Sep 30 17:40:57 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:57 compute-0 ceph-mgr[74051]: [progress INFO root] Completed event 7d4327f2-4a13-43d3-a7a2-6b541c613549 (Global Recovery Event) in 10 seconds
Sep 30 17:40:57 compute-0 ceph-mon[73755]: pgmap v62: 353 pgs: 353 active+clean; 456 KiB data, 76 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:40:57 compute-0 ceph-mon[73755]: 11.15 scrub starts
Sep 30 17:40:57 compute-0 ceph-mon[73755]: 11.15 scrub ok
Sep 30 17:40:57 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Sep 30 17:40:57 compute-0 ceph-mon[73755]: osdmap e64: 2 total, 2 up, 2 in
Sep 30 17:40:57 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Sep 30 17:40:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e65 e65: 2 total, 2 up, 2 in
Sep 30 17:40:58 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e65: 2 total, 2 up, 2 in
Sep 30 17:40:58 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 65 pg[10.16( v 54'774 (0'0,54'774] local-lis/les=58/59 n=4 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=65) [1]/[0] r=0 lpr=65 pi=[58,65)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:58 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 65 pg[10.16( v 54'774 (0'0,54'774] local-lis/les=58/59 n=4 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=65) [1]/[0] r=0 lpr=65 pi=[58,65)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:58 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 65 pg[10.2( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=65) [1]/[0] r=0 lpr=65 pi=[58,65)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:58 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 65 pg[10.2( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=65) [1]/[0] r=0 lpr=65 pi=[58,65)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:58 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 65 pg[10.e( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=65) [1]/[0] r=0 lpr=65 pi=[58,65)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:58 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 65 pg[10.e( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=65) [1]/[0] r=0 lpr=65 pi=[58,65)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:58 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 65 pg[10.a( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=65) [1]/[0] r=0 lpr=65 pi=[58,65)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:58 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 65 pg[10.a( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=65) [1]/[0] r=0 lpr=65 pi=[58,65)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:58 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 65 pg[10.6( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=65) [1]/[0] r=0 lpr=65 pi=[58,65)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:58 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 65 pg[10.6( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=65) [1]/[0] r=0 lpr=65 pi=[58,65)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:58 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 65 pg[10.1e( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=65) [1]/[0] r=0 lpr=65 pi=[58,65)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:58 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 65 pg[10.1e( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=65) [1]/[0] r=0 lpr=65 pi=[58,65)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:58 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 65 pg[10.12( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=65) [1]/[0] r=0 lpr=65 pi=[58,65)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:58 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 65 pg[10.12( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=65) [1]/[0] r=0 lpr=65 pi=[58,65)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:58 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 65 pg[10.1a( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=65) [1]/[0] r=0 lpr=65 pi=[58,65)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:58 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 65 pg[10.1a( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=65) [1]/[0] r=0 lpr=65 pi=[58,65)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:58 compute-0 podman[100392]: 2025-09-30 17:40:58.180080932 +0000 UTC m=+3.389638480 volume create 03b70af7493bda5d667e4f5232f178f60ddf88df694ee89cac55b5d47e2a313c
Sep 30 17:40:58 compute-0 podman[100392]: 2025-09-30 17:40:58.19044436 +0000 UTC m=+3.400001908 container create 12be2a8538e9112ffd1e3c740cf6824a2e5883d418a167c918321a0088a10725 (image=quay.io/prometheus/prometheus:v2.51.0, name=objective_euler, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:40:58 compute-0 podman[100392]: 2025-09-30 17:40:58.163222665 +0000 UTC m=+3.372780233 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Sep 30 17:40:58 compute-0 systemd[1]: Started libpod-conmon-12be2a8538e9112ffd1e3c740cf6824a2e5883d418a167c918321a0088a10725.scope.
Sep 30 17:40:58 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:40:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf2880f1c5e64af49cd235600ec1794b2438d831d92e38aaa3d53d4cc28ea70/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:58 compute-0 podman[100392]: 2025-09-30 17:40:58.270311677 +0000 UTC m=+3.479869235 container init 12be2a8538e9112ffd1e3c740cf6824a2e5883d418a167c918321a0088a10725 (image=quay.io/prometheus/prometheus:v2.51.0, name=objective_euler, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:40:58 compute-0 podman[100392]: 2025-09-30 17:40:58.277473102 +0000 UTC m=+3.487030650 container start 12be2a8538e9112ffd1e3c740cf6824a2e5883d418a167c918321a0088a10725 (image=quay.io/prometheus/prometheus:v2.51.0, name=objective_euler, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:40:58 compute-0 objective_euler[100651]: 65534 65534
Sep 30 17:40:58 compute-0 systemd[1]: libpod-12be2a8538e9112ffd1e3c740cf6824a2e5883d418a167c918321a0088a10725.scope: Deactivated successfully.
Sep 30 17:40:58 compute-0 podman[100392]: 2025-09-30 17:40:58.282864272 +0000 UTC m=+3.492421840 container attach 12be2a8538e9112ffd1e3c740cf6824a2e5883d418a167c918321a0088a10725 (image=quay.io/prometheus/prometheus:v2.51.0, name=objective_euler, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:40:58 compute-0 podman[100392]: 2025-09-30 17:40:58.283055797 +0000 UTC m=+3.492613345 container died 12be2a8538e9112ffd1e3c740cf6824a2e5883d418a167c918321a0088a10725 (image=quay.io/prometheus/prometheus:v2.51.0, name=objective_euler, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:40:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bf2880f1c5e64af49cd235600ec1794b2438d831d92e38aaa3d53d4cc28ea70-merged.mount: Deactivated successfully.
Sep 30 17:40:58 compute-0 podman[100392]: 2025-09-30 17:40:58.318519495 +0000 UTC m=+3.528077043 container remove 12be2a8538e9112ffd1e3c740cf6824a2e5883d418a167c918321a0088a10725 (image=quay.io/prometheus/prometheus:v2.51.0, name=objective_euler, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:40:58 compute-0 podman[100392]: 2025-09-30 17:40:58.32220979 +0000 UTC m=+3.531767338 volume remove 03b70af7493bda5d667e4f5232f178f60ddf88df694ee89cac55b5d47e2a313c
Sep 30 17:40:58 compute-0 systemd[1]: libpod-conmon-12be2a8538e9112ffd1e3c740cf6824a2e5883d418a167c918321a0088a10725.scope: Deactivated successfully.
Sep 30 17:40:58 compute-0 podman[100668]: 2025-09-30 17:40:58.390124378 +0000 UTC m=+0.039897444 volume create adacf22b35be9e36f5e38b9540b51e621bbcd00be734bc795abdfe411bbf31a7
Sep 30 17:40:58 compute-0 podman[100668]: 2025-09-30 17:40:58.398099914 +0000 UTC m=+0.047872980 container create f741002cfb18d66ef6f3a07610056b9ffb1cc0452fb094f4d32cc1f085d772d3 (image=quay.io/prometheus/prometheus:v2.51.0, name=crazy_lichterman, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:40:58 compute-0 systemd[1]: Started libpod-conmon-f741002cfb18d66ef6f3a07610056b9ffb1cc0452fb094f4d32cc1f085d772d3.scope.
Sep 30 17:40:58 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:40:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/266428f36819671566cb851b4b1e95fa5a190d6b80e172b19a4cbcc02740b14e/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:58 compute-0 podman[100668]: 2025-09-30 17:40:58.374806162 +0000 UTC m=+0.024579258 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Sep 30 17:40:58 compute-0 podman[100668]: 2025-09-30 17:40:58.47058601 +0000 UTC m=+0.120359116 container init f741002cfb18d66ef6f3a07610056b9ffb1cc0452fb094f4d32cc1f085d772d3 (image=quay.io/prometheus/prometheus:v2.51.0, name=crazy_lichterman, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:40:58 compute-0 podman[100668]: 2025-09-30 17:40:58.476531054 +0000 UTC m=+0.126304120 container start f741002cfb18d66ef6f3a07610056b9ffb1cc0452fb094f4d32cc1f085d772d3 (image=quay.io/prometheus/prometheus:v2.51.0, name=crazy_lichterman, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:40:58 compute-0 crazy_lichterman[100684]: 65534 65534
Sep 30 17:40:58 compute-0 systemd[1]: libpod-f741002cfb18d66ef6f3a07610056b9ffb1cc0452fb094f4d32cc1f085d772d3.scope: Deactivated successfully.
Sep 30 17:40:58 compute-0 podman[100668]: 2025-09-30 17:40:58.48061346 +0000 UTC m=+0.130386576 container attach f741002cfb18d66ef6f3a07610056b9ffb1cc0452fb094f4d32cc1f085d772d3 (image=quay.io/prometheus/prometheus:v2.51.0, name=crazy_lichterman, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:40:58 compute-0 podman[100668]: 2025-09-30 17:40:58.480931808 +0000 UTC m=+0.130704904 container died f741002cfb18d66ef6f3a07610056b9ffb1cc0452fb094f4d32cc1f085d772d3 (image=quay.io/prometheus/prometheus:v2.51.0, name=crazy_lichterman, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:40:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-266428f36819671566cb851b4b1e95fa5a190d6b80e172b19a4cbcc02740b14e-merged.mount: Deactivated successfully.
Sep 30 17:40:58 compute-0 podman[100668]: 2025-09-30 17:40:58.520672117 +0000 UTC m=+0.170445183 container remove f741002cfb18d66ef6f3a07610056b9ffb1cc0452fb094f4d32cc1f085d772d3 (image=quay.io/prometheus/prometheus:v2.51.0, name=crazy_lichterman, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:40:58 compute-0 podman[100668]: 2025-09-30 17:40:58.524401463 +0000 UTC m=+0.174174529 volume remove adacf22b35be9e36f5e38b9540b51e621bbcd00be734bc795abdfe411bbf31a7
Sep 30 17:40:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v65: 353 pgs: 353 active+clean; 456 KiB data, 76 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:40:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Sep 30 17:40:58 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Sep 30 17:40:58 compute-0 systemd[1]: libpod-conmon-f741002cfb18d66ef6f3a07610056b9ffb1cc0452fb094f4d32cc1f085d772d3.scope: Deactivated successfully.
Sep 30 17:40:58 compute-0 systemd[1]: Reloading.
Sep 30 17:40:58 compute-0 systemd-rc-local-generator[100730]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:40:58 compute-0 systemd-sysv-generator[100734]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:40:58 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 9.6 deep-scrub starts
Sep 30 17:40:58 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 9.6 deep-scrub ok
Sep 30 17:40:58 compute-0 systemd[1]: Reloading.
Sep 30 17:40:58 compute-0 ceph-mon[73755]: 8.3 scrub starts
Sep 30 17:40:58 compute-0 ceph-mon[73755]: 8.3 scrub ok
Sep 30 17:40:58 compute-0 ceph-mon[73755]: 9.e scrub starts
Sep 30 17:40:58 compute-0 ceph-mon[73755]: 9.e scrub ok
Sep 30 17:40:58 compute-0 ceph-mon[73755]: osdmap e65: 2 total, 2 up, 2 in
Sep 30 17:40:58 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Sep 30 17:40:58 compute-0 ceph-mon[73755]: 9.2 scrub starts
Sep 30 17:40:58 compute-0 ceph-mon[73755]: 9.2 scrub ok
Sep 30 17:40:58 compute-0 systemd-sysv-generator[100774]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:40:58 compute-0 systemd-rc-local-generator[100770]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:40:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Sep 30 17:40:59 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Sep 30 17:40:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e66 e66: 2 total, 2 up, 2 in
Sep 30 17:40:59 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e66: 2 total, 2 up, 2 in
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 66 pg[10.17( v 54'774 (0'0,54'774] local-lis/les=58/59 n=3 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.092910767s) [1] r=-1 lpr=66 pi=[58,66)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 164.326797485s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 66 pg[10.17( v 54'774 (0'0,54'774] local-lis/les=58/59 n=3 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.092851639s) [1] r=-1 lpr=66 pi=[58,66)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.326797485s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 66 pg[10.13( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.091553688s) [1] r=-1 lpr=66 pi=[58,66)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 164.326583862s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 66 pg[10.13( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.091521263s) [1] r=-1 lpr=66 pi=[58,66)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.326583862s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 66 pg[10.f( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.090902328s) [1] r=-1 lpr=66 pi=[58,66)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 164.326812744s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 66 pg[10.f( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.090867996s) [1] r=-1 lpr=66 pi=[58,66)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.326812744s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 66 pg[10.b( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.090050697s) [1] r=-1 lpr=66 pi=[58,66)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 164.326828003s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 66 pg[10.3( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.090237617s) [1] r=-1 lpr=66 pi=[58,66)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 164.327056885s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 66 pg[10.b( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.090021133s) [1] r=-1 lpr=66 pi=[58,66)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.326828003s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 66 pg[10.3( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.090211868s) [1] r=-1 lpr=66 pi=[58,66)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.327056885s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 66 pg[10.7( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.090064049s) [1] r=-1 lpr=66 pi=[58,66)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 164.327499390s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 66 pg[10.7( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.090046883s) [1] r=-1 lpr=66 pi=[58,66)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.327499390s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 66 pg[10.1b( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.089380264s) [1] r=-1 lpr=66 pi=[58,66)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 164.327285767s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 66 pg[10.1b( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.089364052s) [1] r=-1 lpr=66 pi=[58,66)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.327285767s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 66 pg[10.1f( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.089384079s) [1] r=-1 lpr=66 pi=[58,66)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 164.327728271s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 66 pg[10.1f( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.089305878s) [1] r=-1 lpr=66 pi=[58,66)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.327728271s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 66 pg[10.1a( v 54'774 (0'0,54'774] local-lis/les=65/66 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=65) [1]/[0] async=[1] r=0 lpr=65 pi=[58,65)/1 crt=54'774 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 66 pg[10.2( v 54'774 (0'0,54'774] local-lis/les=65/66 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=65) [1]/[0] async=[1] r=0 lpr=65 pi=[58,65)/1 crt=54'774 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 66 pg[10.e( v 54'774 (0'0,54'774] local-lis/les=65/66 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=65) [1]/[0] async=[1] r=0 lpr=65 pi=[58,65)/1 crt=54'774 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 66 pg[10.12( v 54'774 (0'0,54'774] local-lis/les=65/66 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=65) [1]/[0] async=[1] r=0 lpr=65 pi=[58,65)/1 crt=54'774 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 66 pg[10.1e( v 54'774 (0'0,54'774] local-lis/les=65/66 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=65) [1]/[0] async=[1] r=0 lpr=65 pi=[58,65)/1 crt=54'774 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 66 pg[10.a( v 54'774 (0'0,54'774] local-lis/les=65/66 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=65) [1]/[0] async=[1] r=0 lpr=65 pi=[58,65)/1 crt=54'774 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 66 pg[10.16( v 54'774 (0'0,54'774] local-lis/les=65/66 n=4 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=65) [1]/[0] async=[1] r=0 lpr=65 pi=[58,65)/1 crt=54'774 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 66 pg[10.6( v 54'774 (0'0,54'774] local-lis/les=65/66 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=65) [1]/[0] async=[1] r=0 lpr=65 pi=[58,65)/1 crt=54'774 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:40:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:40:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Sep 30 17:40:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e67 e67: 2 total, 2 up, 2 in
Sep 30 17:40:59 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e67: 2 total, 2 up, 2 in
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 67 pg[10.1f( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=67) [1]/[0] r=0 lpr=67 pi=[58,67)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 67 pg[10.1b( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=67) [1]/[0] r=0 lpr=67 pi=[58,67)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 67 pg[10.1b( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=67) [1]/[0] r=0 lpr=67 pi=[58,67)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 67 pg[10.7( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=67) [1]/[0] r=0 lpr=67 pi=[58,67)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 67 pg[10.7( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=67) [1]/[0] r=0 lpr=67 pi=[58,67)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 67 pg[10.3( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=67) [1]/[0] r=0 lpr=67 pi=[58,67)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 67 pg[10.3( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=67) [1]/[0] r=0 lpr=67 pi=[58,67)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 67 pg[10.b( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=67) [1]/[0] r=0 lpr=67 pi=[58,67)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 67 pg[10.b( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=67) [1]/[0] r=0 lpr=67 pi=[58,67)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 67 pg[10.f( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=67) [1]/[0] r=0 lpr=67 pi=[58,67)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 67 pg[10.f( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=67) [1]/[0] r=0 lpr=67 pi=[58,67)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 67 pg[10.13( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=67) [1]/[0] r=0 lpr=67 pi=[58,67)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 67 pg[10.13( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=67) [1]/[0] r=0 lpr=67 pi=[58,67)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 67 pg[10.17( v 54'774 (0'0,54'774] local-lis/les=58/59 n=3 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=67) [1]/[0] r=0 lpr=67 pi=[58,67)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 67 pg[10.17( v 54'774 (0'0,54'774] local-lis/les=58/59 n=3 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=67) [1]/[0] r=0 lpr=67 pi=[58,67)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 67 pg[10.1f( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=67) [1]/[0] r=0 lpr=67 pi=[58,67)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 17:40:59 compute-0 systemd[1]: Starting Ceph prometheus.compute-0 for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 17:40:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:40:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:40:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:40:59.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:40:59 compute-0 podman[100830]: 2025-09-30 17:40:59.415798114 +0000 UTC m=+0.050986230 container create 262a601a228f752d0c9dbf2b0c2d1b05fe66f88f5c16c2fc8adb324868709b53 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:40:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:59 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff2c0025d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c764a7b2a42b1a6ff730e116cccfa21fa10678ebfe004329bf640831f2f77396/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c764a7b2a42b1a6ff730e116cccfa21fa10678ebfe004329bf640831f2f77396/merged/etc/prometheus supports timestamps until 2038 (0x7fffffff)
Sep 30 17:40:59 compute-0 podman[100830]: 2025-09-30 17:40:59.392673306 +0000 UTC m=+0.027861462 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Sep 30 17:40:59 compute-0 podman[100830]: 2025-09-30 17:40:59.492716295 +0000 UTC m=+0.127904441 container init 262a601a228f752d0c9dbf2b0c2d1b05fe66f88f5c16c2fc8adb324868709b53 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:40:59 compute-0 podman[100830]: 2025-09-30 17:40:59.499523741 +0000 UTC m=+0.134711877 container start 262a601a228f752d0c9dbf2b0c2d1b05fe66f88f5c16c2fc8adb324868709b53 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:40:59 compute-0 bash[100830]: 262a601a228f752d0c9dbf2b0c2d1b05fe66f88f5c16c2fc8adb324868709b53
Sep 30 17:40:59 compute-0 systemd[1]: Started Ceph prometheus.compute-0 for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:40:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:40:59 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff14003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:40:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0[100846]: ts=2025-09-30T17:40:59.549Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
Sep 30 17:40:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0[100846]: ts=2025-09-30T17:40:59.549Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
Sep 30 17:40:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0[100846]: ts=2025-09-30T17:40:59.549Z caller=main.go:623 level=info host_details="(Linux 5.14.0-617.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Sep 15 21:46:13 UTC 2025 x86_64 compute-0 (none))"
Sep 30 17:40:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0[100846]: ts=2025-09-30T17:40:59.549Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
Sep 30 17:40:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0[100846]: ts=2025-09-30T17:40:59.549Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
Sep 30 17:40:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0[100846]: ts=2025-09-30T17:40:59.555Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=192.168.122.100:9095
Sep 30 17:40:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0[100846]: ts=2025-09-30T17:40:59.556Z caller=main.go:1129 level=info msg="Starting TSDB ..."
Sep 30 17:40:59 compute-0 sudo[100303]: pam_unix(sudo:session): session closed for user root
Sep 30 17:40:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0[100846]: ts=2025-09-30T17:40:59.560Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=192.168.122.100:9095
Sep 30 17:40:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0[100846]: ts=2025-09-30T17:40:59.560Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=192.168.122.100:9095
Sep 30 17:40:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0[100846]: ts=2025-09-30T17:40:59.563Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
Sep 30 17:40:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0[100846]: ts=2025-09-30T17:40:59.563Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=2.86µs
Sep 30 17:40:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0[100846]: ts=2025-09-30T17:40:59.563Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
Sep 30 17:40:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0[100846]: ts=2025-09-30T17:40:59.563Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
Sep 30 17:40:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0[100846]: ts=2025-09-30T17:40:59.563Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=32.261µs wal_replay_duration=286.037µs wbl_replay_duration=300ns total_replay_duration=343.149µs
Sep 30 17:40:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:40:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0[100846]: ts=2025-09-30T17:40:59.565Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC
Sep 30 17:40:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0[100846]: ts=2025-09-30T17:40:59.565Z caller=main.go:1153 level=info msg="TSDB started"
Sep 30 17:40:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0[100846]: ts=2025-09-30T17:40:59.565Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
Sep 30 17:40:59 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:40:59 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Sep 30 17:40:59 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:59 compute-0 ceph-mgr[74051]: [progress INFO root] complete: finished ev ea8fbe14-b69c-4e1e-ad65-19948db3a131 (Updating prometheus deployment (+1 -> 1))
Sep 30 17:40:59 compute-0 ceph-mgr[74051]: [progress INFO root] Completed event ea8fbe14-b69c-4e1e-ad65-19948db3a131 (Updating prometheus deployment (+1 -> 1)) in 6 seconds
Sep 30 17:40:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0[100846]: ts=2025-09-30T17:40:59.606Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=41.369211ms db_storage=1.95µs remote_storage=2.51µs web_handler=410ns query_engine=1.45µs scrape=3.46535ms scrape_sd=260.057µs notify=20.84µs notify_sd=18.731µs rules=37.065449ms tracing=24.721µs
Sep 30 17:40:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0[100846]: ts=2025-09-30T17:40:59.606Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
Sep 30 17:40:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0[100846]: ts=2025-09-30T17:40:59.607Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
Sep 30 17:40:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr module enable", "module": "prometheus"} v 0)
Sep 30 17:40:59 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Sep 30 17:40:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:40:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:40:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:40:59.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:40:59 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Sep 30 17:40:59 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Sep 30 17:40:59 compute-0 ceph-mon[73755]: pgmap v65: 353 pgs: 353 active+clean; 456 KiB data, 76 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:40:59 compute-0 ceph-mon[73755]: 9.6 deep-scrub starts
Sep 30 17:40:59 compute-0 ceph-mon[73755]: 9.6 deep-scrub ok
Sep 30 17:40:59 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Sep 30 17:40:59 compute-0 ceph-mon[73755]: osdmap e66: 2 total, 2 up, 2 in
Sep 30 17:40:59 compute-0 ceph-mon[73755]: osdmap e67: 2 total, 2 up, 2 in
Sep 30 17:40:59 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:59 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:59 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' 
Sep 30 17:40:59 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Sep 30 17:40:59 compute-0 ceph-mon[73755]: 11.0 scrub starts
Sep 30 17:40:59 compute-0 ceph-mon[73755]: 11.0 scrub ok
Sep 30 17:41:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Sep 30 17:41:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e68 e68: 2 total, 2 up, 2 in
Sep 30 17:41:00 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e68: 2 total, 2 up, 2 in
Sep 30 17:41:00 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 68 pg[10.1a( v 54'774 (0'0,54'774] local-lis/les=65/66 n=5 ec=58/48 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.954349518s) [1] async=[1] r=-1 lpr=68 pi=[58,68)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 167.249252319s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:00 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 68 pg[10.12( v 54'774 (0'0,54'774] local-lis/les=65/66 n=6 ec=58/48 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.954730034s) [1] async=[1] r=-1 lpr=68 pi=[58,68)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 167.249710083s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:00 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 68 pg[10.1a( v 54'774 (0'0,54'774] local-lis/les=65/66 n=5 ec=58/48 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.954280853s) [1] r=-1 lpr=68 pi=[58,68)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 167.249252319s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:00 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 68 pg[10.12( v 54'774 (0'0,54'774] local-lis/les=65/66 n=6 ec=58/48 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.954689980s) [1] r=-1 lpr=68 pi=[58,68)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 167.249710083s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:00 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 68 pg[10.1e( v 54'774 (0'0,54'774] local-lis/les=65/66 n=5 ec=58/48 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.954576492s) [1] async=[1] r=-1 lpr=68 pi=[58,68)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 167.249435425s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:00 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 68 pg[10.1e( v 54'774 (0'0,54'774] local-lis/les=65/66 n=5 ec=58/48 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.954245567s) [1] r=-1 lpr=68 pi=[58,68)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 167.249435425s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:00 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 68 pg[10.6( v 54'774 (0'0,54'774] local-lis/les=65/66 n=6 ec=58/48 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.957596779s) [1] async=[1] r=-1 lpr=68 pi=[58,68)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 167.253036499s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:00 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 68 pg[10.6( v 54'774 (0'0,54'774] local-lis/les=65/66 n=6 ec=58/48 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.957547188s) [1] r=-1 lpr=68 pi=[58,68)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 167.253036499s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:00 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 68 pg[10.e( v 54'774 (0'0,54'774] local-lis/les=65/66 n=5 ec=58/48 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.953973770s) [1] async=[1] r=-1 lpr=68 pi=[58,68)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 167.249511719s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:00 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 68 pg[10.e( v 54'774 (0'0,54'774] local-lis/les=65/66 n=5 ec=58/48 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.953924179s) [1] r=-1 lpr=68 pi=[58,68)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 167.249511719s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:00 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 68 pg[10.2( v 54'774 (0'0,54'774] local-lis/les=65/66 n=5 ec=58/48 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.953649521s) [1] async=[1] r=-1 lpr=68 pi=[58,68)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 167.249267578s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:00 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 68 pg[10.16( v 54'774 (0'0,54'774] local-lis/les=65/66 n=4 ec=58/48 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.957219124s) [1] async=[1] r=-1 lpr=68 pi=[58,68)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 167.253005981s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:00 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 68 pg[10.2( v 54'774 (0'0,54'774] local-lis/les=65/66 n=5 ec=58/48 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.953488350s) [1] r=-1 lpr=68 pi=[58,68)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 167.249267578s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:00 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 68 pg[10.16( v 54'774 (0'0,54'774] local-lis/les=65/66 n=4 ec=58/48 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.957182884s) [1] r=-1 lpr=68 pi=[58,68)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 167.253005981s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:00 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 68 pg[10.a( v 54'774 (0'0,54'774] local-lis/les=65/66 n=6 ec=58/48 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.953853607s) [1] async=[1] r=-1 lpr=68 pi=[58,68)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 167.249710083s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:00 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 68 pg[10.a( v 54'774 (0'0,54'774] local-lis/les=65/66 n=6 ec=58/48 lis/c=65/58 les/c/f=66/59/0 sis=68 pruub=14.953803062s) [1] r=-1 lpr=68 pi=[58,68)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 167.249710083s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:00 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 68 pg[10.1b( v 54'774 (0'0,54'774] local-lis/les=67/68 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=67) [1]/[0] async=[1] r=0 lpr=67 pi=[58,67)/1 crt=54'774 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:41:00 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 68 pg[10.1f( v 54'774 (0'0,54'774] local-lis/les=67/68 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=67) [1]/[0] async=[1] r=0 lpr=67 pi=[58,67)/1 crt=54'774 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:41:00 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 68 pg[10.b( v 54'774 (0'0,54'774] local-lis/les=67/68 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=67) [1]/[0] async=[1] r=0 lpr=67 pi=[58,67)/1 crt=54'774 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:41:00 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 68 pg[10.7( v 54'774 (0'0,54'774] local-lis/les=67/68 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=67) [1]/[0] async=[1] r=0 lpr=67 pi=[58,67)/1 crt=54'774 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:41:00 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 68 pg[10.13( v 54'774 (0'0,54'774] local-lis/les=67/68 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=67) [1]/[0] async=[1] r=0 lpr=67 pi=[58,67)/1 crt=54'774 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:41:00 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 68 pg[10.17( v 54'774 (0'0,54'774] local-lis/les=67/68 n=3 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=67) [1]/[0] async=[1] r=0 lpr=67 pi=[58,67)/1 crt=54'774 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:41:00 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 68 pg[10.f( v 54'774 (0'0,54'774] local-lis/les=67/68 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=67) [1]/[0] async=[1] r=0 lpr=67 pi=[58,67)/1 crt=54'774 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:41:00 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 68 pg[10.3( v 54'774 (0'0,54'774] local-lis/les=67/68 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=67) [1]/[0] async=[1] r=0 lpr=67 pi=[58,67)/1 crt=54'774 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:41:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v69: 353 pgs: 8 remapped+peering, 345 active+clean; 457 KiB data, 98 MiB used, 40 GiB / 40 GiB avail; 154 B/s, 3 objects/s recovering
Sep 30 17:41:00 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Sep 30 17:41:00 compute-0 ceph-mgr[74051]: mgr handle_mgr_map respawning because set of enabled modules changed!
Sep 30 17:41:00 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mgrmap e25: compute-0.efvthf(active, since 78s), standbys: compute-1.glbusf
Sep 30 17:41:00 compute-0 sshd-session[92421]: Connection closed by 192.168.122.100 port 57202
Sep 30 17:41:00 compute-0 sshd-session[92390]: pam_unix(sshd:session): session closed for user ceph-admin
Sep 30 17:41:00 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Sep 30 17:41:00 compute-0 systemd[1]: session-36.scope: Consumed 45.411s CPU time.
Sep 30 17:41:00 compute-0 systemd-logind[811]: Session 36 logged out. Waiting for processes to exit.
Sep 30 17:41:00 compute-0 systemd-logind[811]: Removed session 36.
Sep 30 17:41:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ignoring --setuser ceph since I am not root
Sep 30 17:41:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ignoring --setgroup ceph since I am not root
Sep 30 17:41:00 compute-0 ceph-mgr[74051]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Sep 30 17:41:00 compute-0 ceph-mgr[74051]: pidfile_write: ignore empty --pid-file
Sep 30 17:41:00 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 8.18 deep-scrub starts
Sep 30 17:41:00 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 8.18 deep-scrub ok
Sep 30 17:41:00 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'alerts'
Sep 30 17:41:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:41:00.872+0000 7fab14142140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Sep 30 17:41:00 compute-0 ceph-mgr[74051]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Sep 30 17:41:00 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'balancer'
Sep 30 17:41:00 compute-0 ceph-mon[73755]: 8.1b scrub starts
Sep 30 17:41:00 compute-0 ceph-mon[73755]: 8.1b scrub ok
Sep 30 17:41:00 compute-0 ceph-mon[73755]: osdmap e68: 2 total, 2 up, 2 in
Sep 30 17:41:00 compute-0 ceph-mon[73755]: from='mgr.14372 192.168.122.100:0/240735010' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Sep 30 17:41:00 compute-0 ceph-mon[73755]: mgrmap e25: compute-0.efvthf(active, since 78s), standbys: compute-1.glbusf
Sep 30 17:41:00 compute-0 ceph-mon[73755]: 11.c scrub starts
Sep 30 17:41:00 compute-0 ceph-mon[73755]: 11.c scrub ok
Sep 30 17:41:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:41:00.950+0000 7fab14142140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Sep 30 17:41:00 compute-0 ceph-mgr[74051]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Sep 30 17:41:00 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'cephadm'
Sep 30 17:41:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Sep 30 17:41:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e69 e69: 2 total, 2 up, 2 in
Sep 30 17:41:01 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e69: 2 total, 2 up, 2 in
Sep 30 17:41:01 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 69 pg[10.17( v 54'774 (0'0,54'774] local-lis/les=67/68 n=3 ec=58/48 lis/c=67/58 les/c/f=68/59/0 sis=69 pruub=15.010173798s) [1] async=[1] r=-1 lpr=69 pi=[58,69)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 168.307540894s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:01 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 69 pg[10.17( v 54'774 (0'0,54'774] local-lis/les=67/68 n=3 ec=58/48 lis/c=67/58 les/c/f=68/59/0 sis=69 pruub=15.010073662s) [1] r=-1 lpr=69 pi=[58,69)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.307540894s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:01 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 69 pg[10.13( v 54'774 (0'0,54'774] local-lis/les=67/68 n=5 ec=58/48 lis/c=67/58 les/c/f=68/59/0 sis=69 pruub=15.009536743s) [1] async=[1] r=-1 lpr=69 pi=[58,69)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 168.307373047s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:01 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 69 pg[10.13( v 54'774 (0'0,54'774] local-lis/les=67/68 n=5 ec=58/48 lis/c=67/58 les/c/f=68/59/0 sis=69 pruub=15.009491920s) [1] r=-1 lpr=69 pi=[58,69)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.307373047s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:01 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 69 pg[10.f( v 54'774 (0'0,54'774] local-lis/les=67/68 n=6 ec=58/48 lis/c=67/58 les/c/f=68/59/0 sis=69 pruub=15.009186745s) [1] async=[1] r=-1 lpr=69 pi=[58,69)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 168.307495117s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:01 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 69 pg[10.f( v 54'774 (0'0,54'774] local-lis/les=67/68 n=6 ec=58/48 lis/c=67/58 les/c/f=68/59/0 sis=69 pruub=15.009094238s) [1] r=-1 lpr=69 pi=[58,69)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.307495117s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:01 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 69 pg[10.b( v 54'774 (0'0,54'774] local-lis/les=67/68 n=6 ec=58/48 lis/c=67/58 les/c/f=68/59/0 sis=69 pruub=15.008366585s) [1] async=[1] r=-1 lpr=69 pi=[58,69)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 168.307296753s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:01 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 69 pg[10.3( v 54'774 (0'0,54'774] local-lis/les=67/68 n=6 ec=58/48 lis/c=67/58 les/c/f=68/59/0 sis=69 pruub=15.008627892s) [1] async=[1] r=-1 lpr=69 pi=[58,69)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 168.307586670s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:01 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 69 pg[10.b( v 54'774 (0'0,54'774] local-lis/les=67/68 n=6 ec=58/48 lis/c=67/58 les/c/f=68/59/0 sis=69 pruub=15.008334160s) [1] r=-1 lpr=69 pi=[58,69)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.307296753s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:01 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 69 pg[10.3( v 54'774 (0'0,54'774] local-lis/les=67/68 n=6 ec=58/48 lis/c=67/58 les/c/f=68/59/0 sis=69 pruub=15.008588791s) [1] r=-1 lpr=69 pi=[58,69)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.307586670s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:01 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 69 pg[10.7( v 54'774 (0'0,54'774] local-lis/les=67/68 n=6 ec=58/48 lis/c=67/58 les/c/f=68/59/0 sis=69 pruub=15.008002281s) [1] async=[1] r=-1 lpr=69 pi=[58,69)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 168.307312012s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:01 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 69 pg[10.7( v 54'774 (0'0,54'774] local-lis/les=67/68 n=6 ec=58/48 lis/c=67/58 les/c/f=68/59/0 sis=69 pruub=15.007960320s) [1] r=-1 lpr=69 pi=[58,69)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.307312012s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:01 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 69 pg[10.1b( v 54'774 (0'0,54'774] local-lis/les=67/68 n=5 ec=58/48 lis/c=67/58 les/c/f=68/59/0 sis=69 pruub=15.002311707s) [1] async=[1] r=-1 lpr=69 pi=[58,69)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 168.301910400s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:01 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 69 pg[10.1b( v 54'774 (0'0,54'774] local-lis/les=67/68 n=5 ec=58/48 lis/c=67/58 les/c/f=68/59/0 sis=69 pruub=15.002276421s) [1] r=-1 lpr=69 pi=[58,69)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.301910400s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:01 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 69 pg[10.1f( v 54'774 (0'0,54'774] local-lis/les=67/68 n=5 ec=58/48 lis/c=67/58 les/c/f=68/59/0 sis=69 pruub=15.002108574s) [1] async=[1] r=-1 lpr=69 pi=[58,69)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 168.301940918s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:01 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 69 pg[10.1f( v 54'774 (0'0,54'774] local-lis/les=67/68 n=5 ec=58/48 lis/c=67/58 les/c/f=68/59/0 sis=69 pruub=15.002059937s) [1] r=-1 lpr=69 pi=[58,69)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.301940918s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:01 compute-0 PackageKit[31204]: daemon quit
Sep 30 17:41:01 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Sep 30 17:41:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:41:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:41:01.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:41:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:01 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff3c0022a0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:01 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44009ab0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:41:01.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:01 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'crash'
Sep 30 17:41:01 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Sep 30 17:41:01 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Sep 30 17:41:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:41:01.818+0000 7fab14142140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Sep 30 17:41:01 compute-0 ceph-mgr[74051]: mgr[py] Module crash has missing NOTIFY_TYPES member
Sep 30 17:41:01 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'dashboard'
Sep 30 17:41:01 compute-0 ceph-mon[73755]: 8.18 deep-scrub starts
Sep 30 17:41:01 compute-0 ceph-mon[73755]: 8.18 deep-scrub ok
Sep 30 17:41:01 compute-0 ceph-mon[73755]: osdmap e69: 2 total, 2 up, 2 in
Sep 30 17:41:01 compute-0 ceph-mon[73755]: 11.b scrub starts
Sep 30 17:41:01 compute-0 ceph-mon[73755]: 11.b scrub ok
Sep 30 17:41:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Sep 30 17:41:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e70 e70: 2 total, 2 up, 2 in
Sep 30 17:41:02 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e70: 2 total, 2 up, 2 in
Sep 30 17:41:02 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'devicehealth'
Sep 30 17:41:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:41:02.488+0000 7fab14142140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Sep 30 17:41:02 compute-0 ceph-mgr[74051]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Sep 30 17:41:02 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'diskprediction_local'
Sep 30 17:41:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Sep 30 17:41:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Sep 30 17:41:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]:   from numpy import show_config as show_numpy_config
Sep 30 17:41:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:41:02.656+0000 7fab14142140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Sep 30 17:41:02 compute-0 ceph-mgr[74051]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Sep 30 17:41:02 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'influx'
Sep 30 17:41:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:41:02.727+0000 7fab14142140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Sep 30 17:41:02 compute-0 ceph-mgr[74051]: mgr[py] Module influx has missing NOTIFY_TYPES member
Sep 30 17:41:02 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'insights'
Sep 30 17:41:02 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 12.11 scrub starts
Sep 30 17:41:02 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 12.11 scrub ok
Sep 30 17:41:02 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'iostat'
Sep 30 17:41:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:41:02.858+0000 7fab14142140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Sep 30 17:41:02 compute-0 ceph-mgr[74051]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Sep 30 17:41:02 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'k8sevents'
Sep 30 17:41:02 compute-0 ceph-mon[73755]: 10.1 scrub starts
Sep 30 17:41:02 compute-0 ceph-mon[73755]: 10.1 scrub ok
Sep 30 17:41:02 compute-0 ceph-mon[73755]: osdmap e70: 2 total, 2 up, 2 in
Sep 30 17:41:02 compute-0 ceph-mon[73755]: 9.9 scrub starts
Sep 30 17:41:02 compute-0 ceph-mon[73755]: 9.9 scrub ok
Sep 30 17:41:02 compute-0 ceph-mon[73755]: 12.11 scrub starts
Sep 30 17:41:02 compute-0 ceph-mon[73755]: 12.11 scrub ok
Sep 30 17:41:03 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'localpool'
Sep 30 17:41:03 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'mds_autoscaler'
Sep 30 17:41:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:41:03.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:03 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44009ab0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:03 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff14003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:03 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'mirroring'
Sep 30 17:41:03 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'nfs'
Sep 30 17:41:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:41:03.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:03 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 12.15 deep-scrub starts
Sep 30 17:41:03 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 12.15 deep-scrub ok
Sep 30 17:41:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:41:03.854+0000 7fab14142140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Sep 30 17:41:03 compute-0 ceph-mgr[74051]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Sep 30 17:41:03 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'orchestrator'
Sep 30 17:41:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:41:04.073+0000 7fab14142140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Sep 30 17:41:04 compute-0 ceph-mgr[74051]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Sep 30 17:41:04 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'osd_perf_query'
Sep 30 17:41:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:41:04.148+0000 7fab14142140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Sep 30 17:41:04 compute-0 ceph-mgr[74051]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Sep 30 17:41:04 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'osd_support'
Sep 30 17:41:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:41:04.222+0000 7fab14142140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Sep 30 17:41:04 compute-0 ceph-mgr[74051]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Sep 30 17:41:04 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'pg_autoscaler'
Sep 30 17:41:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:41:04.302+0000 7fab14142140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Sep 30 17:41:04 compute-0 ceph-mgr[74051]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Sep 30 17:41:04 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'progress'
Sep 30 17:41:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:41:04.372+0000 7fab14142140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Sep 30 17:41:04 compute-0 ceph-mgr[74051]: mgr[py] Module progress has missing NOTIFY_TYPES member
Sep 30 17:41:04 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'prometheus'
Sep 30 17:41:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:41:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:41:04.751+0000 7fab14142140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Sep 30 17:41:04 compute-0 ceph-mgr[74051]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Sep 30 17:41:04 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'rbd_support'
Sep 30 17:41:04 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 12.4 scrub starts
Sep 30 17:41:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:41:04.849+0000 7fab14142140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Sep 30 17:41:04 compute-0 ceph-mgr[74051]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Sep 30 17:41:04 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'restful'
Sep 30 17:41:04 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 12.4 scrub ok
Sep 30 17:41:04 compute-0 ceph-mon[73755]: 11.a scrub starts
Sep 30 17:41:04 compute-0 ceph-mon[73755]: 11.a scrub ok
Sep 30 17:41:04 compute-0 ceph-mon[73755]: 12.15 deep-scrub starts
Sep 30 17:41:04 compute-0 ceph-mon[73755]: 12.15 deep-scrub ok
Sep 30 17:41:05 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'rgw'
Sep 30 17:41:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:41:05.313+0000 7fab14142140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Sep 30 17:41:05 compute-0 ceph-mgr[74051]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Sep 30 17:41:05 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'rook'
Sep 30 17:41:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:41:05.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:05 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff3c0022a0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:05 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44009ab0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:41:05.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:05 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 12.f scrub starts
Sep 30 17:41:05 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 12.f scrub ok
Sep 30 17:41:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:41:05.902+0000 7fab14142140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Sep 30 17:41:05 compute-0 ceph-mgr[74051]: mgr[py] Module rook has missing NOTIFY_TYPES member
Sep 30 17:41:05 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'selftest'
Sep 30 17:41:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:41:05.984+0000 7fab14142140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Sep 30 17:41:05 compute-0 ceph-mgr[74051]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Sep 30 17:41:05 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'snap_schedule'
Sep 30 17:41:06 compute-0 ceph-mon[73755]: 9.8 deep-scrub starts
Sep 30 17:41:06 compute-0 ceph-mon[73755]: 9.8 deep-scrub ok
Sep 30 17:41:06 compute-0 ceph-mon[73755]: 12.4 scrub starts
Sep 30 17:41:06 compute-0 ceph-mon[73755]: 12.4 scrub ok
Sep 30 17:41:06 compute-0 ceph-mon[73755]: 11.9 scrub starts
Sep 30 17:41:06 compute-0 ceph-mon[73755]: 11.9 scrub ok
Sep 30 17:41:06 compute-0 ceph-mon[73755]: 12.f scrub starts
Sep 30 17:41:06 compute-0 ceph-mon[73755]: 12.f scrub ok
Sep 30 17:41:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:41:06.076+0000 7fab14142140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Sep 30 17:41:06 compute-0 ceph-mgr[74051]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Sep 30 17:41:06 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'stats'
Sep 30 17:41:06 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'status'
Sep 30 17:41:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:41:06.248+0000 7fab14142140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Sep 30 17:41:06 compute-0 ceph-mgr[74051]: mgr[py] Module status has missing NOTIFY_TYPES member
Sep 30 17:41:06 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'telegraf'
Sep 30 17:41:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:41:06.325+0000 7fab14142140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Sep 30 17:41:06 compute-0 ceph-mgr[74051]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Sep 30 17:41:06 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'telemetry'
Sep 30 17:41:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:41:06.491+0000 7fab14142140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Sep 30 17:41:06 compute-0 ceph-mgr[74051]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Sep 30 17:41:06 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'test_orchestrator'
Sep 30 17:41:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:41:06.729+0000 7fab14142140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Sep 30 17:41:06 compute-0 ceph-mgr[74051]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Sep 30 17:41:06 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'volumes'
Sep 30 17:41:06 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 12.d scrub starts
Sep 30 17:41:06 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 12.d scrub ok
Sep 30 17:41:06 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.glbusf restarted
Sep 30 17:41:06 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.glbusf started
Sep 30 17:41:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:41:07.025+0000 7fab14142140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: mgr[py] Loading python module 'zabbix'
Sep 30 17:41:07 compute-0 ceph-mon[73755]: 8.a scrub starts
Sep 30 17:41:07 compute-0 ceph-mon[73755]: 8.a scrub ok
Sep 30 17:41:07 compute-0 ceph-mon[73755]: 12.d scrub starts
Sep 30 17:41:07 compute-0 ceph-mon[73755]: 12.d scrub ok
Sep 30 17:41:07 compute-0 ceph-mon[73755]: Standby manager daemon compute-1.glbusf restarted
Sep 30 17:41:07 compute-0 ceph-mon[73755]: Standby manager daemon compute-1.glbusf started
Sep 30 17:41:07 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mgrmap e26: compute-0.efvthf(active, since 84s), standbys: compute-1.glbusf
Sep 30 17:41:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:41:07.111+0000 7fab14142140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Sep 30 17:41:07 compute-0 ceph-mon[73755]: log_channel(cluster) log [INF] : Active manager daemon compute-0.efvthf restarted
Sep 30 17:41:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Sep 30 17:41:07 compute-0 ceph-mon[73755]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.efvthf
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: ms_deliver_dispatch: unhandled message 0x56287239e340 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Sep 30 17:41:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e71 e71: 2 total, 2 up, 2 in
Sep 30 17:41:07 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e71: 2 total, 2 up, 2 in
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: mgr handle_mgr_map Activating!
Sep 30 17:41:07 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mgrmap e27: compute-0.efvthf(active, starting, since 0.030113s), standbys: compute-1.glbusf
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: mgr handle_mgr_map I am now activating
Sep 30 17:41:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Sep 30 17:41:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Sep 30 17:41:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Sep 30 17:41:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 17:41:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.vrwlru"} v 0)
Sep 30 17:41:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.vrwlru"}]: dispatch
Sep 30 17:41:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).mds e8 all = 0
Sep 30 17:41:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.wibdub"} v 0)
Sep 30 17:41:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.wibdub"}]: dispatch
Sep 30 17:41:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).mds e8 all = 0
Sep 30 17:41:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.efvthf", "id": "compute-0.efvthf"} v 0)
Sep 30 17:41:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mgr metadata", "who": "compute-0.efvthf", "id": "compute-0.efvthf"}]: dispatch
Sep 30 17:41:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.glbusf", "id": "compute-1.glbusf"} v 0)
Sep 30 17:41:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mgr metadata", "who": "compute-1.glbusf", "id": "compute-1.glbusf"}]: dispatch
Sep 30 17:41:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 17:41:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:41:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 17:41:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 17:41:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mds metadata"} v 0)
Sep 30 17:41:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mds metadata"}]: dispatch
Sep 30 17:41:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).mds e8 all = 1
Sep 30 17:41:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata"} v 0)
Sep 30 17:41:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata"}]: dispatch
Sep 30 17:41:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata"} v 0)
Sep 30 17:41:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata"}]: dispatch
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: balancer
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Starting
Sep 30 17:41:07 compute-0 ceph-mon[73755]: log_channel(cluster) log [INF] : Manager daemon compute-0.efvthf is now available
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_17:41:07
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: cephadm
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: crash
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: dashboard
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: devicehealth
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO access_control] Loading user roles DB version=2
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO sso] Loading SSO DB version=1
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO root] Configured CherryPy, starting engine...
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: iostat
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [devicehealth INFO root] Starting
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: nfs
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: orchestrator
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: pg_autoscaler
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: progress
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [progress INFO root] Loading...
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7faa92ca9dc0>, <progress.module.GhostEvent object at 0x7faa92440040>, <progress.module.GhostEvent object at 0x7faa92440070>, <progress.module.GhostEvent object at 0x7faa924400a0>, <progress.module.GhostEvent object at 0x7faa924400d0>, <progress.module.GhostEvent object at 0x7faa92440100>, <progress.module.GhostEvent object at 0x7faa92440130>, <progress.module.GhostEvent object at 0x7faa92440160>, <progress.module.GhostEvent object at 0x7faa92440190>, <progress.module.GhostEvent object at 0x7faa924401c0>, <progress.module.GhostEvent object at 0x7faa924401f0>, <progress.module.GhostEvent object at 0x7faa92440220>, <progress.module.GhostEvent object at 0x7faa92440250>, <progress.module.GhostEvent object at 0x7faa92440280>, <progress.module.GhostEvent object at 0x7faa924402b0>, <progress.module.GhostEvent object at 0x7faa924402e0>, <progress.module.GhostEvent object at 0x7faa92440310>, <progress.module.GhostEvent object at 0x7faa92440340>, <progress.module.GhostEvent object at 0x7faa92440370>, <progress.module.GhostEvent object at 0x7faa924403a0>, <progress.module.GhostEvent object at 0x7faa924403d0>, <progress.module.GhostEvent object at 0x7faa92440400>, <progress.module.GhostEvent object at 0x7faa92440430>, <progress.module.GhostEvent object at 0x7faa92440460>] historic events
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [progress INFO root] Loaded OSDMap, ready.
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:41:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Sep 30 17:41:07 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: prometheus
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [prometheus INFO root] server_addr: :: server_port: 9283
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [prometheus INFO root] Cache enabled
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [prometheus INFO root] starting metric collection thread
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [prometheus INFO root] Starting engine...
Sep 30 17:41:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: [30/Sep/2025:17:41:07] ENGINE Bus STARTING
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.error] [30/Sep/2025:17:41:07] ENGINE Bus STARTING
Sep 30 17:41:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Sep 30 17:41:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: CherryPy Checker:
Sep 30 17:41:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: The Application mounted at '' has an empty config.
Sep 30 17:41:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 
Sep 30 17:41:07 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:41:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] recovery thread starting
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] starting setup
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: rbd_support
Sep 30 17:41:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.efvthf/mirror_snapshot_schedule"} v 0)
Sep 30 17:41:07 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.efvthf/mirror_snapshot_schedule"}]: dispatch
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: restful
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [restful INFO root] server_addr: :: server_port: 8003
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [restful WARNING root] server not running: no certificate configured
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: status
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: telemetry
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] PerfHandler: starting
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_task_task: vms, start_after=
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_task_task: volumes, start_after=
Sep 30 17:41:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: [30/Sep/2025:17:41:07] ENGINE Serving on http://:::9283
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.error] [30/Sep/2025:17:41:07] ENGINE Serving on http://:::9283
Sep 30 17:41:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: [30/Sep/2025:17:41:07] ENGINE Bus STARTED
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.error] [30/Sep/2025:17:41:07] ENGINE Bus STARTED
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [prometheus INFO root] Engine started.
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_task_task: backups, start_after=
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_task_task: images, start_after=
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TaskHandler: starting
Sep 30 17:41:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.efvthf/trash_purge_schedule"} v 0)
Sep 30 17:41:07 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.efvthf/trash_purge_schedule"}]: dispatch
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] setup complete
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: mgr load Constructed class from module: volumes
Sep 30 17:41:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:41:07.378+0000 7faa7c88c640 -1 client.0 error registering admin socket command: (17) File exists
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: client.0 error registering admin socket command: (17) File exists
Sep 30 17:41:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:41:07.379+0000 7faa7724f640 -1 client.0 error registering admin socket command: (17) File exists
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: client.0 error registering admin socket command: (17) File exists
Sep 30 17:41:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:41:07.379+0000 7faa7724f640 -1 client.0 error registering admin socket command: (17) File exists
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: client.0 error registering admin socket command: (17) File exists
Sep 30 17:41:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:41:07.379+0000 7faa7724f640 -1 client.0 error registering admin socket command: (17) File exists
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: client.0 error registering admin socket command: (17) File exists
Sep 30 17:41:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:41:07.379+0000 7faa7724f640 -1 client.0 error registering admin socket command: (17) File exists
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: client.0 error registering admin socket command: (17) File exists
Sep 30 17:41:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T17:41:07.379+0000 7faa7724f640 -1 client.0 error registering admin socket command: (17) File exists
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: client.0 error registering admin socket command: (17) File exists
Sep 30 17:41:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:41:07.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:07 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44009ab0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Sep 30 17:41:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:07 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff14003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Sep 30 17:41:07 compute-0 sshd-session[101058]: Accepted publickey for ceph-admin from 192.168.122.100 port 40076 ssh2: RSA SHA256:VErvvXRx5E6TZRj2L+dQwgZehzW+L2wAETKKYOgEi0M
Sep 30 17:41:07 compute-0 systemd-logind[811]: New session 38 of user ceph-admin.
Sep 30 17:41:07 compute-0 systemd[1]: Started Session 38 of User ceph-admin.
Sep 30 17:41:07 compute-0 sshd-session[101058]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 17:41:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:41:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:41:07.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:41:07 compute-0 ceph-mgr[74051]: [dashboard INFO dashboard.module] Engine started.
Sep 30 17:41:07 compute-0 sudo[101074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:41:07 compute-0 sudo[101074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:07 compute-0 sudo[101074]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:07 compute-0 sudo[101100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Sep 30 17:41:07 compute-0 sudo[101100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:07 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 12.5 scrub starts
Sep 30 17:41:07 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 12.5 scrub ok
Sep 30 17:41:08 compute-0 ceph-mon[73755]: mgrmap e26: compute-0.efvthf(active, since 84s), standbys: compute-1.glbusf
Sep 30 17:41:08 compute-0 ceph-mon[73755]: Active manager daemon compute-0.efvthf restarted
Sep 30 17:41:08 compute-0 ceph-mon[73755]: Activating manager daemon compute-0.efvthf
Sep 30 17:41:08 compute-0 ceph-mon[73755]: osdmap e71: 2 total, 2 up, 2 in
Sep 30 17:41:08 compute-0 ceph-mon[73755]: mgrmap e27: compute-0.efvthf(active, starting, since 0.030113s), standbys: compute-1.glbusf
Sep 30 17:41:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Sep 30 17:41:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 17:41:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.vrwlru"}]: dispatch
Sep 30 17:41:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.wibdub"}]: dispatch
Sep 30 17:41:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mgr metadata", "who": "compute-0.efvthf", "id": "compute-0.efvthf"}]: dispatch
Sep 30 17:41:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mgr metadata", "who": "compute-1.glbusf", "id": "compute-1.glbusf"}]: dispatch
Sep 30 17:41:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 17:41:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 17:41:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mds metadata"}]: dispatch
Sep 30 17:41:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd metadata"}]: dispatch
Sep 30 17:41:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "mon metadata"}]: dispatch
Sep 30 17:41:08 compute-0 ceph-mon[73755]: Manager daemon compute-0.efvthf is now available
Sep 30 17:41:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:41:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.efvthf/mirror_snapshot_schedule"}]: dispatch
Sep 30 17:41:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.efvthf/trash_purge_schedule"}]: dispatch
Sep 30 17:41:08 compute-0 ceph-mon[73755]: 9.b scrub starts
Sep 30 17:41:08 compute-0 ceph-mon[73755]: 9.b scrub ok
Sep 30 17:41:08 compute-0 ceph-mon[73755]: 12.5 scrub starts
Sep 30 17:41:08 compute-0 ceph-mon[73755]: 12.5 scrub ok
Sep 30 17:41:08 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mgrmap e28: compute-0.efvthf(active, since 1.04459s), standbys: compute-1.glbusf
Sep 30 17:41:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v3: 353 pgs: 353 active+clean; 458 KiB data, 98 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:41:08 compute-0 podman[101198]: 2025-09-30 17:41:08.325768467 +0000 UTC m=+0.057337565 container exec 28cb2903608cec8e7c5a39aa3f398cadea7d807beef2cb8cca333795d18a1341 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:41:08 compute-0 podman[101198]: 2025-09-30 17:41:08.426662488 +0000 UTC m=+0.158231586 container exec_died 28cb2903608cec8e7c5a39aa3f398cadea7d807beef2cb8cca333795d18a1341 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Sep 30 17:41:08 compute-0 ceph-mgr[74051]: [cephadm INFO cherrypy.error] [30/Sep/2025:17:41:08] ENGINE Bus STARTING
Sep 30 17:41:08 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : [30/Sep/2025:17:41:08] ENGINE Bus STARTING
Sep 30 17:41:08 compute-0 ceph-mgr[74051]: [cephadm INFO cherrypy.error] [30/Sep/2025:17:41:08] ENGINE Serving on https://192.168.122.100:7150
Sep 30 17:41:08 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : [30/Sep/2025:17:41:08] ENGINE Serving on https://192.168.122.100:7150
Sep 30 17:41:08 compute-0 ceph-mgr[74051]: [cephadm INFO cherrypy.error] [30/Sep/2025:17:41:08] ENGINE Client ('192.168.122.100', 33390) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Sep 30 17:41:08 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : [30/Sep/2025:17:41:08] ENGINE Client ('192.168.122.100', 33390) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Sep 30 17:41:08 compute-0 ceph-mgr[74051]: [cephadm INFO cherrypy.error] [30/Sep/2025:17:41:08] ENGINE Serving on http://192.168.122.100:8765
Sep 30 17:41:08 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : [30/Sep/2025:17:41:08] ENGINE Serving on http://192.168.122.100:8765
Sep 30 17:41:08 compute-0 ceph-mgr[74051]: [cephadm INFO cherrypy.error] [30/Sep/2025:17:41:08] ENGINE Bus STARTED
Sep 30 17:41:08 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : [30/Sep/2025:17:41:08] ENGINE Bus STARTED
Sep 30 17:41:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:41:08] "GET /metrics HTTP/1.1" 200 45166 "" "Prometheus/2.51.0"
Sep 30 17:41:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:41:08] "GET /metrics HTTP/1.1" 200 45166 "" "Prometheus/2.51.0"
Sep 30 17:41:08 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 12.2 scrub starts
Sep 30 17:41:08 compute-0 podman[101342]: 2025-09-30 17:41:08.820962254 +0000 UTC m=+0.051789902 container exec 0a64c261699b14850432928cca94e24743e97435fdc1d2cca08edcee1f870789 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:41:08 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 12.2 scrub ok
Sep 30 17:41:08 compute-0 podman[101368]: 2025-09-30 17:41:08.885535445 +0000 UTC m=+0.048545718 container exec_died 0a64c261699b14850432928cca94e24743e97435fdc1d2cca08edcee1f870789 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:41:08 compute-0 podman[101342]: 2025-09-30 17:41:08.890946125 +0000 UTC m=+0.121773783 container exec_died 0a64c261699b14850432928cca94e24743e97435fdc1d2cca08edcee1f870789 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:41:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v4: 353 pgs: 353 active+clean; 458 KiB data, 98 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:41:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Sep 30 17:41:09 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Sep 30 17:41:09 compute-0 podman[101436]: 2025-09-30 17:41:09.17900464 +0000 UTC m=+0.056247486 container exec 9e97ff95260c5eee634ed5be7e6f6acdd2a5f44fb41d87718213241efcab83ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Sep 30 17:41:09 compute-0 ceph-mon[73755]: mgrmap e28: compute-0.efvthf(active, since 1.04459s), standbys: compute-1.glbusf
Sep 30 17:41:09 compute-0 ceph-mon[73755]: [30/Sep/2025:17:41:08] ENGINE Bus STARTING
Sep 30 17:41:09 compute-0 ceph-mon[73755]: [30/Sep/2025:17:41:08] ENGINE Serving on https://192.168.122.100:7150
Sep 30 17:41:09 compute-0 ceph-mon[73755]: [30/Sep/2025:17:41:08] ENGINE Client ('192.168.122.100', 33390) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Sep 30 17:41:09 compute-0 ceph-mon[73755]: [30/Sep/2025:17:41:08] ENGINE Serving on http://192.168.122.100:8765
Sep 30 17:41:09 compute-0 ceph-mon[73755]: [30/Sep/2025:17:41:08] ENGINE Bus STARTED
Sep 30 17:41:09 compute-0 ceph-mon[73755]: 11.d scrub starts
Sep 30 17:41:09 compute-0 ceph-mon[73755]: 11.d scrub ok
Sep 30 17:41:09 compute-0 ceph-mon[73755]: 12.2 scrub starts
Sep 30 17:41:09 compute-0 ceph-mon[73755]: 12.2 scrub ok
Sep 30 17:41:09 compute-0 podman[101436]: 2025-09-30 17:41:09.192285344 +0000 UTC m=+0.069528170 container exec_died 9e97ff95260c5eee634ed5be7e6f6acdd2a5f44fb41d87718213241efcab83ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Sep 30 17:41:09 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mgrmap e29: compute-0.efvthf(active, since 2s), standbys: compute-1.glbusf
Sep 30 17:41:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Sep 30 17:41:09 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Sep 30 17:41:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e72 e72: 2 total, 2 up, 2 in
Sep 30 17:41:09 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e72: 2 total, 2 up, 2 in
Sep 30 17:41:09 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 72 pg[10.14( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=72 pruub=10.971755981s) [1] r=-1 lpr=72 pi=[58,72)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 172.321868896s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:09 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 72 pg[10.14( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=72 pruub=10.971326828s) [1] r=-1 lpr=72 pi=[58,72)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.321868896s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:09 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 72 pg[10.c( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=72 pruub=10.975951195s) [1] r=-1 lpr=72 pi=[58,72)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 172.326858521s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:09 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 72 pg[10.4( v 61'780 (0'0,61'780] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=72 pruub=10.976071358s) [1] r=-1 lpr=72 pi=[58,72)/1 crt=61'780 lcod 61'779 mlcod 61'779 active pruub 172.327056885s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:09 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 72 pg[10.4( v 61'780 (0'0,61'780] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=72 pruub=10.976045609s) [1] r=-1 lpr=72 pi=[58,72)/1 crt=61'780 lcod 61'779 mlcod 0'0 unknown NOTIFY pruub 172.327056885s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:09 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 72 pg[10.1c( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=72 pruub=10.976486206s) [1] r=-1 lpr=72 pi=[58,72)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 172.327697754s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:09 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 72 pg[10.1c( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=72 pruub=10.976471901s) [1] r=-1 lpr=72 pi=[58,72)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.327697754s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:09 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 72 pg[10.c( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=72 pruub=10.975258827s) [1] r=-1 lpr=72 pi=[58,72)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.326858521s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:09 compute-0 ceph-mgr[74051]: [devicehealth INFO root] Check health
Sep 30 17:41:09 compute-0 podman[101510]: 2025-09-30 17:41:09.384228842 +0000 UTC m=+0.049641446 container exec e1cdc51276e16f535030605d2a18f629e9fe31bf73cf0a7411e18214526b68e5 (image=quay.io/ceph/haproxy:2.3, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha)
Sep 30 17:41:09 compute-0 podman[101510]: 2025-09-30 17:41:09.393559213 +0000 UTC m=+0.058971807 container exec_died e1cdc51276e16f535030605d2a18f629e9fe31bf73cf0a7411e18214526b68e5 (image=quay.io/ceph/haproxy:2.3, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha)
Sep 30 17:41:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:41:09.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:09 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff3c0022a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:09 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44009ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:09 compute-0 podman[101576]: 2025-09-30 17:41:09.578454769 +0000 UTC m=+0.047855840 container exec b5b34e9f9a1e5f8a3081bfbcbc3f8d4401b0873cf75447793762597a4ea89ff4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=Ceph keepalived, vcs-type=git, version=2.2.4, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, name=keepalived, io.openshift.expose-services=, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc.)
Sep 30 17:41:09 compute-0 podman[101576]: 2025-09-30 17:41:09.590622724 +0000 UTC m=+0.060023795 container exec_died b5b34e9f9a1e5f8a3081bfbcbc3f8d4401b0873cf75447793762597a4ea89ff4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vcs-type=git, version=2.2.4, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, name=keepalived, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Sep 30 17:41:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:41:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Sep 30 17:41:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e73 e73: 2 total, 2 up, 2 in
Sep 30 17:41:09 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e73: 2 total, 2 up, 2 in
Sep 30 17:41:09 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 73 pg[10.4( v 61'780 (0'0,61'780] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=73) [1]/[0] r=0 lpr=73 pi=[58,73)/1 crt=61'780 lcod 61'779 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:09 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 73 pg[10.4( v 61'780 (0'0,61'780] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=73) [1]/[0] r=0 lpr=73 pi=[58,73)/1 crt=61'780 lcod 61'779 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:09 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 73 pg[10.14( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=73) [1]/[0] r=0 lpr=73 pi=[58,73)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:09 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 73 pg[10.1c( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=73) [1]/[0] r=0 lpr=73 pi=[58,73)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:09 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 73 pg[10.14( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=73) [1]/[0] r=0 lpr=73 pi=[58,73)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:09 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 73 pg[10.c( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=73) [1]/[0] r=0 lpr=73 pi=[58,73)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:09 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 73 pg[10.c( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=73) [1]/[0] r=0 lpr=73 pi=[58,73)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:09 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 73 pg[10.1c( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=73) [1]/[0] r=0 lpr=73 pi=[58,73)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:41:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:41:09.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:41:09 compute-0 podman[101643]: 2025-09-30 17:41:09.799304405 +0000 UTC m=+0.064416718 container exec 54b2fea94257f4c0dbb8baa51cc4daf28060912f14272b21840329b3da1a781c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:41:09 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 12.0 scrub starts
Sep 30 17:41:09 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 12.0 scrub ok
Sep 30 17:41:09 compute-0 podman[101643]: 2025-09-30 17:41:09.829932908 +0000 UTC m=+0.095045251 container exec_died 54b2fea94257f4c0dbb8baa51cc4daf28060912f14272b21840329b3da1a781c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:41:10 compute-0 podman[101718]: 2025-09-30 17:41:10.084615079 +0000 UTC m=+0.063717460 container exec c0c4f203e50521af1e67ea671cd3250328ab176a59126da54fd0b28cda8d538c (image=quay.io/ceph/grafana:10.4.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:41:10 compute-0 ceph-mon[73755]: pgmap v4: 353 pgs: 353 active+clean; 458 KiB data, 98 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:41:10 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Sep 30 17:41:10 compute-0 ceph-mon[73755]: mgrmap e29: compute-0.efvthf(active, since 2s), standbys: compute-1.glbusf
Sep 30 17:41:10 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Sep 30 17:41:10 compute-0 ceph-mon[73755]: osdmap e72: 2 total, 2 up, 2 in
Sep 30 17:41:10 compute-0 ceph-mon[73755]: osdmap e73: 2 total, 2 up, 2 in
Sep 30 17:41:10 compute-0 ceph-mon[73755]: 8.e scrub starts
Sep 30 17:41:10 compute-0 ceph-mon[73755]: 8.e scrub ok
Sep 30 17:41:10 compute-0 ceph-mon[73755]: 12.0 scrub starts
Sep 30 17:41:10 compute-0 ceph-mon[73755]: 12.0 scrub ok
Sep 30 17:41:10 compute-0 podman[101718]: 2025-09-30 17:41:10.23418695 +0000 UTC m=+0.213289311 container exec_died c0c4f203e50521af1e67ea671cd3250328ab176a59126da54fd0b28cda8d538c (image=quay.io/ceph/grafana:10.4.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:41:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Sep 30 17:41:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e74 e74: 2 total, 2 up, 2 in
Sep 30 17:41:10 compute-0 podman[101829]: 2025-09-30 17:41:10.646866145 +0000 UTC m=+0.052793701 container exec 262a601a228f752d0c9dbf2b0c2d1b05fe66f88f5c16c2fc8adb324868709b53 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:41:10 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e74: 2 total, 2 up, 2 in
Sep 30 17:41:10 compute-0 podman[101829]: 2025-09-30 17:41:10.690776604 +0000 UTC m=+0.096704140 container exec_died 262a601a228f752d0c9dbf2b0c2d1b05fe66f88f5c16c2fc8adb324868709b53 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:41:10 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 74 pg[10.14( v 54'774 (0'0,54'774] local-lis/les=73/74 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=73) [1]/[0] async=[1] r=0 lpr=73 pi=[58,73)/1 crt=54'774 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:41:10 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 74 pg[10.1c( v 54'774 (0'0,54'774] local-lis/les=73/74 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=73) [1]/[0] async=[1] r=0 lpr=73 pi=[58,73)/1 crt=54'774 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:41:10 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 74 pg[10.c( v 54'774 (0'0,54'774] local-lis/les=73/74 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=73) [1]/[0] async=[1] r=0 lpr=73 pi=[58,73)/1 crt=54'774 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:41:10 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 74 pg[10.4( v 61'780 (0'0,61'780] local-lis/les=73/74 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=73) [1]/[0] async=[1] r=0 lpr=73 pi=[58,73)/1 crt=61'780 lcod 61'779 mlcod 0'0 active+remapped mbc={255={(0+1)=10}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:41:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:41:10 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:41:10 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:10 compute-0 sudo[101100]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:41:10 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:41:10 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:10 compute-0 sudo[101869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:41:10 compute-0 sudo[101869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:10 compute-0 sudo[101869]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:10 compute-0 sudo[101894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 17:41:10 compute-0 sudo[101894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v8: 353 pgs: 353 active+clean; 458 KiB data, 98 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:41:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Sep 30 17:41:11 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Sep 30 17:41:11 compute-0 sudo[101894]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:41:11.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:11 compute-0 sudo[101951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:41:11 compute-0 sudo[101951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:11 compute-0 sudo[101951]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:11 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44009ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:11 compute-0 sudo[101976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Sep 30 17:41:11 compute-0 sudo[101976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:11 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff14003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:11 compute-0 ceph-mon[73755]: osdmap e74: 2 total, 2 up, 2 in
Sep 30 17:41:11 compute-0 ceph-mon[73755]: 9.c scrub starts
Sep 30 17:41:11 compute-0 ceph-mon[73755]: 9.c scrub ok
Sep 30 17:41:11 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:11 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:11 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:11 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:11 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Sep 30 17:41:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:41:11.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:11 compute-0 sudo[101976]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Sep 30 17:41:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:41:11 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Sep 30 17:41:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e75 e75: 2 total, 2 up, 2 in
Sep 30 17:41:11 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e75: 2 total, 2 up, 2 in
Sep 30 17:41:11 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 75 pg[10.1c( v 54'774 (0'0,54'774] local-lis/les=73/74 n=5 ec=58/48 lis/c=73/58 les/c/f=74/59/0 sis=75 pruub=14.948171616s) [1] async=[1] r=-1 lpr=75 pi=[58,75)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 178.828765869s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:11 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 75 pg[10.1c( v 54'774 (0'0,54'774] local-lis/les=73/74 n=5 ec=58/48 lis/c=73/58 les/c/f=74/59/0 sis=75 pruub=14.948085785s) [1] r=-1 lpr=75 pi=[58,75)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.828765869s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:11 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 75 pg[10.c( v 54'774 (0'0,54'774] local-lis/les=73/74 n=6 ec=58/48 lis/c=73/58 les/c/f=74/59/0 sis=75 pruub=14.948060989s) [1] async=[1] r=-1 lpr=75 pi=[58,75)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 178.828781128s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:11 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 75 pg[10.c( v 54'774 (0'0,54'774] local-lis/les=73/74 n=6 ec=58/48 lis/c=73/58 les/c/f=74/59/0 sis=75 pruub=14.947960854s) [1] r=-1 lpr=75 pi=[58,75)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.828781128s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:11 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 75 pg[10.4( v 74'784 (0'0,74'784] local-lis/les=73/74 n=6 ec=58/48 lis/c=73/58 les/c/f=74/59/0 sis=75 pruub=14.947729111s) [1] async=[1] r=-1 lpr=75 pi=[58,75)/1 crt=61'780 lcod 74'783 mlcod 74'783 active pruub 178.829101562s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:11 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 75 pg[10.15( v 54'774 (0'0,54'774] local-lis/les=58/59 n=4 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=75 pruub=8.440420151s) [1] r=-1 lpr=75 pi=[58,75)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 172.321929932s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:11 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 75 pg[10.14( v 54'774 (0'0,54'774] local-lis/les=73/74 n=5 ec=58/48 lis/c=73/58 les/c/f=74/59/0 sis=75 pruub=14.944221497s) [1] async=[1] r=-1 lpr=75 pi=[58,75)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 178.825637817s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:11 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 75 pg[10.15( v 54'774 (0'0,54'774] local-lis/les=58/59 n=4 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=75 pruub=8.440381050s) [1] r=-1 lpr=75 pi=[58,75)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.321929932s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:11 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 75 pg[10.14( v 54'774 (0'0,54'774] local-lis/les=73/74 n=5 ec=58/48 lis/c=73/58 les/c/f=74/59/0 sis=75 pruub=14.943639755s) [1] r=-1 lpr=75 pi=[58,75)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.825637817s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:11 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 75 pg[10.d( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=75 pruub=8.444633484s) [1] r=-1 lpr=75 pi=[58,75)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 172.326950073s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:11 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 75 pg[10.d( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=75 pruub=8.444612503s) [1] r=-1 lpr=75 pi=[58,75)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.326950073s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:11 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 75 pg[10.4( v 74'784 (0'0,74'784] local-lis/les=73/74 n=6 ec=58/48 lis/c=73/58 les/c/f=74/59/0 sis=75 pruub=14.947012901s) [1] r=-1 lpr=75 pi=[58,75)/1 crt=61'780 lcod 74'783 mlcod 0'0 unknown NOTIFY pruub 178.829101562s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:11 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 75 pg[10.5( v 59'776 (0'0,59'776] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=75 pruub=8.444499016s) [1] r=-1 lpr=75 pi=[58,75)/1 crt=54'774 lcod 59'775 mlcod 59'775 active pruub 172.327285767s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:11 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 75 pg[10.5( v 59'776 (0'0,59'776] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=75 pruub=8.444456100s) [1] r=-1 lpr=75 pi=[58,75)/1 crt=54'774 lcod 59'775 mlcod 0'0 unknown NOTIFY pruub 172.327285767s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:11 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 75 pg[10.1d( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=75 pruub=8.444488525s) [1] r=-1 lpr=75 pi=[58,75)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 172.327758789s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:11 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 75 pg[10.1d( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=75 pruub=8.444471359s) [1] r=-1 lpr=75 pi=[58,75)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.327758789s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:11 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:41:11 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Sep 30 17:41:11 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Sep 30 17:41:11 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mgrmap e30: compute-0.efvthf(active, since 4s), standbys: compute-1.glbusf
Sep 30 17:41:11 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Sep 30 17:41:11 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Sep 30 17:41:12 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:41:12 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:12 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:41:12 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:12 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Sep 30 17:41:12 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Sep 30 17:41:12 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:41:12 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:41:12 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 17:41:12 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:41:12 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Sep 30 17:41:12 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Sep 30 17:41:12 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Sep 30 17:41:12 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Sep 30 17:41:12 compute-0 sudo[102020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Sep 30 17:41:12 compute-0 sudo[102020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:12 compute-0 sudo[102020]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:12 compute-0 sudo[102045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph
Sep 30 17:41:12 compute-0 sudo[102045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:12 compute-0 sudo[102045]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:12 compute-0 sudo[102070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.conf.new
Sep 30 17:41:12 compute-0 sudo[102070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:12 compute-0 sudo[102070]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:12 compute-0 sudo[102095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:41:12 compute-0 sudo[102095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:12 compute-0 sudo[102095]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:12 compute-0 sudo[102120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.conf.new
Sep 30 17:41:12 compute-0 sudo[102120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:12 compute-0 sudo[102120]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:12 compute-0 sudo[102168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.conf.new
Sep 30 17:41:12 compute-0 sudo[102168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:12 compute-0 sudo[102168]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:12 compute-0 sudo[102193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.conf.new
Sep 30 17:41:12 compute-0 sudo[102193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:12 compute-0 sudo[102193]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:12 compute-0 sudo[102218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Sep 30 17:41:12 compute-0 sudo[102218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:12 compute-0 sudo[102218]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:12 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf
Sep 30 17:41:12 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf
Sep 30 17:41:12 compute-0 sudo[102243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config
Sep 30 17:41:12 compute-0 sudo[102243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:12 compute-0 sudo[102243]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:12 compute-0 sudo[102268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config
Sep 30 17:41:12 compute-0 sudo[102268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:12 compute-0 sudo[102268]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:12 compute-0 ceph-mon[73755]: pgmap v8: 353 pgs: 353 active+clean; 458 KiB data, 98 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:41:12 compute-0 ceph-mon[73755]: 8.c scrub starts
Sep 30 17:41:12 compute-0 ceph-mon[73755]: 8.c scrub ok
Sep 30 17:41:12 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Sep 30 17:41:12 compute-0 ceph-mon[73755]: osdmap e75: 2 total, 2 up, 2 in
Sep 30 17:41:12 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:12 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:12 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Sep 30 17:41:12 compute-0 ceph-mon[73755]: mgrmap e30: compute-0.efvthf(active, since 4s), standbys: compute-1.glbusf
Sep 30 17:41:12 compute-0 ceph-mon[73755]: 10.9 scrub starts
Sep 30 17:41:12 compute-0 ceph-mon[73755]: 10.9 scrub ok
Sep 30 17:41:12 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:12 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:12 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Sep 30 17:41:12 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:41:12 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:41:12 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Sep 30 17:41:12 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e76 e76: 2 total, 2 up, 2 in
Sep 30 17:41:12 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e76: 2 total, 2 up, 2 in
Sep 30 17:41:12 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 76 pg[10.5( v 59'776 (0'0,59'776] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=76) [1]/[0] r=0 lpr=76 pi=[58,76)/1 crt=54'774 lcod 59'775 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:12 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 76 pg[10.15( v 54'774 (0'0,54'774] local-lis/les=58/59 n=4 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=76) [1]/[0] r=0 lpr=76 pi=[58,76)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:12 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 76 pg[10.d( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=76) [1]/[0] r=0 lpr=76 pi=[58,76)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:12 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 76 pg[10.1d( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=76) [1]/[0] r=0 lpr=76 pi=[58,76)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:12 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 76 pg[10.d( v 54'774 (0'0,54'774] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=76) [1]/[0] r=0 lpr=76 pi=[58,76)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:12 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 76 pg[10.1d( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=76) [1]/[0] r=0 lpr=76 pi=[58,76)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:12 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 76 pg[10.15( v 54'774 (0'0,54'774] local-lis/les=58/59 n=4 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=76) [1]/[0] r=0 lpr=76 pi=[58,76)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:12 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 76 pg[10.5( v 59'776 (0'0,59'776] local-lis/les=58/59 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=76) [1]/[0] r=0 lpr=76 pi=[58,76)/1 crt=54'774 lcod 59'775 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:12 compute-0 sudo[102293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf.new
Sep 30 17:41:12 compute-0 sudo[102293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:12 compute-0 sudo[102293]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:12 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf
Sep 30 17:41:12 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf
Sep 30 17:41:12 compute-0 sudo[102318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:41:12 compute-0 sudo[102318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:12 compute-0 sudo[102318]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:12 compute-0 sudo[102343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf.new
Sep 30 17:41:12 compute-0 sudo[102343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:12 compute-0 sudo[102343]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:13 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Sep 30 17:41:13 compute-0 sudo[102391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf.new
Sep 30 17:41:13 compute-0 sudo[102391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:13 compute-0 sudo[102391]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:13 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Sep 30 17:41:13 compute-0 sudo[102416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf.new
Sep 30 17:41:13 compute-0 sudo[102416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:13 compute-0 sudo[102416]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:13 compute-0 sudo[102441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf.new /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf
Sep 30 17:41:13 compute-0 sudo[102441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:13 compute-0 sudo[102441]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v11: 353 pgs: 353 active+clean; 458 KiB data, 98 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:41:13 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Sep 30 17:41:13 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Sep 30 17:41:13 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Sep 30 17:41:13 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Sep 30 17:41:13 compute-0 sudo[102466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Sep 30 17:41:13 compute-0 sudo[102466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:13 compute-0 sudo[102466]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:13 compute-0 sudo[102491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph
Sep 30 17:41:13 compute-0 sudo[102491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:13 compute-0 sudo[102491]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:13 compute-0 sudo[102516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.client.admin.keyring.new
Sep 30 17:41:13 compute-0 sudo[102516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:13 compute-0 sudo[102516]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:13 compute-0 sudo[102542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:41:13 compute-0 sudo[102542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:13 compute-0 sudo[102542]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:41:13.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:13 compute-0 sudo[102567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.client.admin.keyring.new
Sep 30 17:41:13 compute-0 sudo[102567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:13 compute-0 sudo[102567]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff3c003730 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:13 compute-0 sudo[102615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.client.admin.keyring.new
Sep 30 17:41:13 compute-0 sudo[102615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:13 compute-0 sudo[102615]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44009ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:13 compute-0 sudo[102640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.client.admin.keyring.new
Sep 30 17:41:13 compute-0 sudo[102640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:13 compute-0 sudo[102640]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:13 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Sep 30 17:41:13 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Sep 30 17:41:13 compute-0 sudo[102666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Sep 30 17:41:13 compute-0 sudo[102666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:13 compute-0 sudo[102666]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:13 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring
Sep 30 17:41:13 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring
Sep 30 17:41:13 compute-0 sudo[102691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config
Sep 30 17:41:13 compute-0 sudo[102691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:13 compute-0 sudo[102691]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:41:13.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:13 compute-0 sudo[102716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config
Sep 30 17:41:13 compute-0 sudo[102716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:13 compute-0 sudo[102716]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:13 compute-0 ceph-mon[73755]: Updating compute-0:/etc/ceph/ceph.conf
Sep 30 17:41:13 compute-0 ceph-mon[73755]: Updating compute-1:/etc/ceph/ceph.conf
Sep 30 17:41:13 compute-0 ceph-mon[73755]: Updating compute-0:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf
Sep 30 17:41:13 compute-0 ceph-mon[73755]: 11.8 scrub starts
Sep 30 17:41:13 compute-0 ceph-mon[73755]: 11.8 scrub ok
Sep 30 17:41:13 compute-0 ceph-mon[73755]: osdmap e76: 2 total, 2 up, 2 in
Sep 30 17:41:13 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Sep 30 17:41:13 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Sep 30 17:41:13 compute-0 sudo[102741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring.new
Sep 30 17:41:13 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Sep 30 17:41:13 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e77 e77: 2 total, 2 up, 2 in
Sep 30 17:41:13 compute-0 sudo[102741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:13 compute-0 sudo[102741]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:13 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e77: 2 total, 2 up, 2 in
Sep 30 17:41:13 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 77 pg[10.e( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=68/68 les/c/f=69/69/0 sis=77) [0] r=0 lpr=77 pi=[68,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:13 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 77 pg[10.16( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=68/68 les/c/f=69/69/0 sis=77) [0] r=0 lpr=77 pi=[68,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:13 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 77 pg[10.6( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=68/68 les/c/f=69/69/0 sis=77) [0] r=0 lpr=77 pi=[68,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:13 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 77 pg[10.1e( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=68/68 les/c/f=69/69/0 sis=77) [0] r=0 lpr=77 pi=[68,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:13 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 77 pg[10.5( v 59'776 (0'0,59'776] local-lis/les=76/77 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=76) [1]/[0] async=[1] r=0 lpr=76 pi=[58,76)/1 crt=59'776 lcod 59'775 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:41:13 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 77 pg[10.d( v 54'774 (0'0,54'774] local-lis/les=76/77 n=6 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=76) [1]/[0] async=[1] r=0 lpr=76 pi=[58,76)/1 crt=54'774 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:41:13 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 77 pg[10.15( v 54'774 (0'0,54'774] local-lis/les=76/77 n=4 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=76) [1]/[0] async=[1] r=0 lpr=76 pi=[58,76)/1 crt=54'774 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:41:13 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 77 pg[10.1d( v 54'774 (0'0,54'774] local-lis/les=76/77 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=76) [1]/[0] async=[1] r=0 lpr=76 pi=[58,76)/1 crt=54'774 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:41:13 compute-0 sudo[102766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:41:13 compute-0 sudo[102766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:13 compute-0 sudo[102766]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:13 compute-0 sudo[102791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring.new
Sep 30 17:41:13 compute-0 sudo[102791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:13 compute-0 sudo[102791]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:14 compute-0 sudo[102839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring.new
Sep 30 17:41:14 compute-0 sudo[102839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:14 compute-0 sudo[102839]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:14 compute-0 sudo[102864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring.new
Sep 30 17:41:14 compute-0 sudo[102864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:14 compute-0 sudo[102864]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:14 compute-0 sudo[102889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-63d32c6a-fa18-54ed-8711-9a3915cc367b/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring.new /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring
Sep 30 17:41:14 compute-0 sudo[102889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:14 compute-0 sudo[102889]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:41:14 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:41:14 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:14 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring
Sep 30 17:41:14 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring
Sep 30 17:41:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:41:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Sep 30 17:41:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e78 e78: 2 total, 2 up, 2 in
Sep 30 17:41:14 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e78: 2 total, 2 up, 2 in
Sep 30 17:41:14 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 78 pg[10.16( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=68/68 les/c/f=69/69/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[68,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:14 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 78 pg[10.16( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=68/68 les/c/f=69/69/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[68,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:14 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 78 pg[10.15( v 54'774 (0'0,54'774] local-lis/les=76/77 n=4 ec=58/48 lis/c=76/58 les/c/f=77/59/0 sis=78 pruub=15.170084953s) [1] async=[1] r=-1 lpr=78 pi=[58,78)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 181.937026978s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:14 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 78 pg[10.15( v 54'774 (0'0,54'774] local-lis/les=76/77 n=4 ec=58/48 lis/c=76/58 les/c/f=77/59/0 sis=78 pruub=15.170002937s) [1] r=-1 lpr=78 pi=[58,78)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.937026978s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:14 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 78 pg[10.e( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=68/68 les/c/f=69/69/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[68,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:14 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 78 pg[10.e( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=68/68 les/c/f=69/69/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[68,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:14 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 78 pg[10.d( v 54'774 (0'0,54'774] local-lis/les=76/77 n=6 ec=58/48 lis/c=76/58 les/c/f=77/59/0 sis=78 pruub=15.168996811s) [1] async=[1] r=-1 lpr=78 pi=[58,78)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 181.936874390s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:14 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 78 pg[10.d( v 54'774 (0'0,54'774] local-lis/les=76/77 n=6 ec=58/48 lis/c=76/58 les/c/f=77/59/0 sis=78 pruub=15.168925285s) [1] r=-1 lpr=78 pi=[58,78)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.936874390s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:14 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 78 pg[10.6( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=68/68 les/c/f=69/69/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[68,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:14 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 78 pg[10.5( v 77'780 (0'0,77'780] local-lis/les=76/77 n=6 ec=58/48 lis/c=76/58 les/c/f=77/59/0 sis=78 pruub=15.168447495s) [1] async=[1] r=-1 lpr=78 pi=[58,78)/1 crt=59'776 lcod 77'779 mlcod 77'779 active pruub 181.936752319s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:14 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 78 pg[10.5( v 77'780 (0'0,77'780] local-lis/les=76/77 n=6 ec=58/48 lis/c=76/58 les/c/f=77/59/0 sis=78 pruub=15.168380737s) [1] r=-1 lpr=78 pi=[58,78)/1 crt=59'776 lcod 77'779 mlcod 0'0 unknown NOTIFY pruub 181.936752319s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:14 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 78 pg[10.6( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=68/68 les/c/f=69/69/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[68,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:14 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 78 pg[10.1e( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=68/68 les/c/f=69/69/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[68,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:14 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 78 pg[10.1d( v 54'774 (0'0,54'774] local-lis/les=76/77 n=5 ec=58/48 lis/c=76/58 les/c/f=77/59/0 sis=78 pruub=15.168745041s) [1] async=[1] r=-1 lpr=78 pi=[58,78)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 181.937530518s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:14 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 78 pg[10.1e( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=68/68 les/c/f=69/69/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[68,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:14 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 78 pg[10.1d( v 54'774 (0'0,54'774] local-lis/les=76/77 n=5 ec=58/48 lis/c=76/58 les/c/f=77/59/0 sis=78 pruub=15.168711662s) [1] r=-1 lpr=78 pi=[58,78)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.937530518s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:14 compute-0 ceph-mon[73755]: Updating compute-1:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.conf
Sep 30 17:41:14 compute-0 ceph-mon[73755]: 10.8 scrub starts
Sep 30 17:41:14 compute-0 ceph-mon[73755]: 10.8 scrub ok
Sep 30 17:41:14 compute-0 ceph-mon[73755]: pgmap v11: 353 pgs: 353 active+clean; 458 KiB data, 98 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:41:14 compute-0 ceph-mon[73755]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Sep 30 17:41:14 compute-0 ceph-mon[73755]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Sep 30 17:41:14 compute-0 ceph-mon[73755]: Updating compute-0:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring
Sep 30 17:41:14 compute-0 ceph-mon[73755]: 8.b scrub starts
Sep 30 17:41:14 compute-0 ceph-mon[73755]: 8.b scrub ok
Sep 30 17:41:14 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Sep 30 17:41:14 compute-0 ceph-mon[73755]: osdmap e77: 2 total, 2 up, 2 in
Sep 30 17:41:14 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:14 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:14 compute-0 ceph-mon[73755]: Updating compute-1:/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/config/ceph.client.admin.keyring
Sep 30 17:41:14 compute-0 ceph-mon[73755]: osdmap e78: 2 total, 2 up, 2 in
Sep 30 17:41:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:41:14 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:41:14 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 17:41:15 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 17:41:15 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 17:41:15 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:41:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 17:41:15 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:41:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:41:15 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:41:15 compute-0 sudo[102914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:41:15 compute-0 sudo[102914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:15 compute-0 sudo[102914]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:15 compute-0 sudo[102939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 17:41:15 compute-0 sudo[102939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v14: 353 pgs: 4 remapped+peering, 349 active+clean; 458 KiB data, 98 MiB used, 40 GiB / 40 GiB avail; 116 B/s, 6 objects/s recovering
Sep 30 17:41:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:41:15.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:15 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44009ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:15 compute-0 podman[103007]: 2025-09-30 17:41:15.524819224 +0000 UTC m=+0.039074625 container create b6a7efe2b4feb7aaf8b3bd2205493821723f280330519b0a9cd8bc7ba224d070 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_grothendieck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Sep 30 17:41:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:15 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44009ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:15 compute-0 systemd[1]: Started libpod-conmon-b6a7efe2b4feb7aaf8b3bd2205493821723f280330519b0a9cd8bc7ba224d070.scope.
Sep 30 17:41:15 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:41:15 compute-0 podman[103007]: 2025-09-30 17:41:15.577806709 +0000 UTC m=+0.092062130 container init b6a7efe2b4feb7aaf8b3bd2205493821723f280330519b0a9cd8bc7ba224d070 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Sep 30 17:41:15 compute-0 podman[103007]: 2025-09-30 17:41:15.591003202 +0000 UTC m=+0.105258603 container start b6a7efe2b4feb7aaf8b3bd2205493821723f280330519b0a9cd8bc7ba224d070 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Sep 30 17:41:15 compute-0 podman[103007]: 2025-09-30 17:41:15.594574635 +0000 UTC m=+0.108830086 container attach b6a7efe2b4feb7aaf8b3bd2205493821723f280330519b0a9cd8bc7ba224d070 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Sep 30 17:41:15 compute-0 interesting_grothendieck[103023]: 167 167
Sep 30 17:41:15 compute-0 systemd[1]: libpod-b6a7efe2b4feb7aaf8b3bd2205493821723f280330519b0a9cd8bc7ba224d070.scope: Deactivated successfully.
Sep 30 17:41:15 compute-0 podman[103007]: 2025-09-30 17:41:15.597636794 +0000 UTC m=+0.111892195 container died b6a7efe2b4feb7aaf8b3bd2205493821723f280330519b0a9cd8bc7ba224d070 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_grothendieck, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Sep 30 17:41:15 compute-0 podman[103007]: 2025-09-30 17:41:15.507769692 +0000 UTC m=+0.022025123 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:41:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a6eab126d85b4676e4ab5abeba21ed85b7ccba7f725ea305d23b3328fdf5792-merged.mount: Deactivated successfully.
Sep 30 17:41:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Sep 30 17:41:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e79 e79: 2 total, 2 up, 2 in
Sep 30 17:41:15 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e79: 2 total, 2 up, 2 in
Sep 30 17:41:15 compute-0 podman[103007]: 2025-09-30 17:41:15.654620603 +0000 UTC m=+0.168876004 container remove b6a7efe2b4feb7aaf8b3bd2205493821723f280330519b0a9cd8bc7ba224d070 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:41:15 compute-0 systemd[1]: libpod-conmon-b6a7efe2b4feb7aaf8b3bd2205493821723f280330519b0a9cd8bc7ba224d070.scope: Deactivated successfully.
Sep 30 17:41:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:41:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:41:15.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:41:15 compute-0 ceph-mon[73755]: 11.2 scrub starts
Sep 30 17:41:15 compute-0 ceph-mon[73755]: 11.2 scrub ok
Sep 30 17:41:15 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:15 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:15 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:15 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:15 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:41:15 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:41:15 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:41:15 compute-0 ceph-mon[73755]: pgmap v14: 353 pgs: 4 remapped+peering, 349 active+clean; 458 KiB data, 98 MiB used, 40 GiB / 40 GiB avail; 116 B/s, 6 objects/s recovering
Sep 30 17:41:15 compute-0 ceph-mon[73755]: osdmap e79: 2 total, 2 up, 2 in
Sep 30 17:41:15 compute-0 podman[103049]: 2025-09-30 17:41:15.808195009 +0000 UTC m=+0.037641508 container create 744e3f407c19ec511bfbfa0ac8188dd356a49456f9f76d10107c3ae281cbcb37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_hellman, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Sep 30 17:41:15 compute-0 systemd[1]: Started libpod-conmon-744e3f407c19ec511bfbfa0ac8188dd356a49456f9f76d10107c3ae281cbcb37.scope.
Sep 30 17:41:15 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:41:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57bbc306b791f080abd061aa1039cd37408d432b0496dd5700dde5969e8b33f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:41:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57bbc306b791f080abd061aa1039cd37408d432b0496dd5700dde5969e8b33f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:41:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57bbc306b791f080abd061aa1039cd37408d432b0496dd5700dde5969e8b33f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:41:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57bbc306b791f080abd061aa1039cd37408d432b0496dd5700dde5969e8b33f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:41:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57bbc306b791f080abd061aa1039cd37408d432b0496dd5700dde5969e8b33f8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:41:15 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 12.1 scrub starts
Sep 30 17:41:15 compute-0 podman[103049]: 2025-09-30 17:41:15.890839274 +0000 UTC m=+0.120285803 container init 744e3f407c19ec511bfbfa0ac8188dd356a49456f9f76d10107c3ae281cbcb37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_hellman, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Sep 30 17:41:15 compute-0 podman[103049]: 2025-09-30 17:41:15.792164343 +0000 UTC m=+0.021610862 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:41:15 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 12.1 scrub ok
Sep 30 17:41:15 compute-0 podman[103049]: 2025-09-30 17:41:15.898361389 +0000 UTC m=+0.127807888 container start 744e3f407c19ec511bfbfa0ac8188dd356a49456f9f76d10107c3ae281cbcb37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 17:41:15 compute-0 podman[103049]: 2025-09-30 17:41:15.903050621 +0000 UTC m=+0.132497140 container attach 744e3f407c19ec511bfbfa0ac8188dd356a49456f9f76d10107c3ae281cbcb37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_hellman, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:41:16 compute-0 sshd-session[103070]: Accepted publickey for zuul from 192.168.122.30 port 49150 ssh2: ECDSA SHA256:COqAmbdBUMx+UyDa5XAop4Bi6qvwvYhJQ16rCseuHkQ
Sep 30 17:41:16 compute-0 systemd-logind[811]: New session 39 of user zuul.
Sep 30 17:41:16 compute-0 systemd[1]: Started Session 39 of User zuul.
Sep 30 17:41:16 compute-0 sshd-session[103070]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 17:41:16 compute-0 festive_hellman[103065]: --> passed data devices: 0 physical, 1 LVM
Sep 30 17:41:16 compute-0 festive_hellman[103065]: --> All data devices are unavailable
Sep 30 17:41:16 compute-0 systemd[1]: libpod-744e3f407c19ec511bfbfa0ac8188dd356a49456f9f76d10107c3ae281cbcb37.scope: Deactivated successfully.
Sep 30 17:41:16 compute-0 podman[103049]: 2025-09-30 17:41:16.260541029 +0000 UTC m=+0.489987528 container died 744e3f407c19ec511bfbfa0ac8188dd356a49456f9f76d10107c3ae281cbcb37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:41:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-57bbc306b791f080abd061aa1039cd37408d432b0496dd5700dde5969e8b33f8-merged.mount: Deactivated successfully.
Sep 30 17:41:16 compute-0 podman[103049]: 2025-09-30 17:41:16.305439264 +0000 UTC m=+0.534885763 container remove 744e3f407c19ec511bfbfa0ac8188dd356a49456f9f76d10107c3ae281cbcb37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_hellman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Sep 30 17:41:16 compute-0 systemd[1]: libpod-conmon-744e3f407c19ec511bfbfa0ac8188dd356a49456f9f76d10107c3ae281cbcb37.scope: Deactivated successfully.
Sep 30 17:41:16 compute-0 sudo[102939]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:16 compute-0 sudo[103148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:41:16 compute-0 sudo[103148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:16 compute-0 sudo[103148]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:16 compute-0 sudo[103173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 17:41:16 compute-0 sudo[103173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:16 compute-0 ceph-mon[73755]: 8.1 scrub starts
Sep 30 17:41:16 compute-0 ceph-mon[73755]: 8.1 scrub ok
Sep 30 17:41:16 compute-0 ceph-mon[73755]: 12.1 scrub starts
Sep 30 17:41:16 compute-0 ceph-mon[73755]: 12.1 scrub ok
Sep 30 17:41:16 compute-0 podman[103304]: 2025-09-30 17:41:16.828398467 +0000 UTC m=+0.040948394 container create f729db5b1d084b74dbf598b84ff35fc2617cb0038acaa5ddce1f33b90e578ff7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Sep 30 17:41:16 compute-0 systemd[1]: Started libpod-conmon-f729db5b1d084b74dbf598b84ff35fc2617cb0038acaa5ddce1f33b90e578ff7.scope.
Sep 30 17:41:16 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:41:16 compute-0 podman[103304]: 2025-09-30 17:41:16.812560286 +0000 UTC m=+0.025110233 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:41:16 compute-0 podman[103304]: 2025-09-30 17:41:16.908550517 +0000 UTC m=+0.121100454 container init f729db5b1d084b74dbf598b84ff35fc2617cb0038acaa5ddce1f33b90e578ff7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:41:16 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 12.1e scrub starts
Sep 30 17:41:16 compute-0 podman[103304]: 2025-09-30 17:41:16.920117298 +0000 UTC m=+0.132667235 container start f729db5b1d084b74dbf598b84ff35fc2617cb0038acaa5ddce1f33b90e578ff7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_gauss, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Sep 30 17:41:16 compute-0 objective_gauss[103352]: 167 167
Sep 30 17:41:16 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 12.1e scrub ok
Sep 30 17:41:16 compute-0 podman[103304]: 2025-09-30 17:41:16.92482805 +0000 UTC m=+0.137377977 container attach f729db5b1d084b74dbf598b84ff35fc2617cb0038acaa5ddce1f33b90e578ff7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_gauss, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Sep 30 17:41:16 compute-0 systemd[1]: libpod-f729db5b1d084b74dbf598b84ff35fc2617cb0038acaa5ddce1f33b90e578ff7.scope: Deactivated successfully.
Sep 30 17:41:16 compute-0 podman[103304]: 2025-09-30 17:41:16.92600458 +0000 UTC m=+0.138554507 container died f729db5b1d084b74dbf598b84ff35fc2617cb0038acaa5ddce1f33b90e578ff7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Sep 30 17:41:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-d566be4da743c0f7ac050c61cc0cb185a253e2bf4dd297480d2a4983d62214b4-merged.mount: Deactivated successfully.
Sep 30 17:41:16 compute-0 podman[103304]: 2025-09-30 17:41:16.982047325 +0000 UTC m=+0.194597252 container remove f729db5b1d084b74dbf598b84ff35fc2617cb0038acaa5ddce1f33b90e578ff7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 17:41:16 compute-0 systemd[1]: libpod-conmon-f729db5b1d084b74dbf598b84ff35fc2617cb0038acaa5ddce1f33b90e578ff7.scope: Deactivated successfully.
Sep 30 17:41:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Sep 30 17:41:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e80 e80: 2 total, 2 up, 2 in
Sep 30 17:41:17 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e80: 2 total, 2 up, 2 in
Sep 30 17:41:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 80 pg[10.16( v 54'774 (0'0,54'774] local-lis/les=0/0 n=4 ec=58/48 lis/c=78/68 les/c/f=79/69/0 sis=80) [0] r=0 lpr=80 pi=[68,80)/1 luod=0'0 crt=54'774 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 80 pg[10.16( v 54'774 (0'0,54'774] local-lis/les=0/0 n=4 ec=58/48 lis/c=78/68 les/c/f=79/69/0 sis=80) [0] r=0 lpr=80 pi=[68,80)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 80 pg[10.e( v 54'774 (0'0,54'774] local-lis/les=0/0 n=5 ec=58/48 lis/c=78/68 les/c/f=79/69/0 sis=80) [0] r=0 lpr=80 pi=[68,80)/1 luod=0'0 crt=54'774 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 80 pg[10.e( v 54'774 (0'0,54'774] local-lis/les=0/0 n=5 ec=58/48 lis/c=78/68 les/c/f=79/69/0 sis=80) [0] r=0 lpr=80 pi=[68,80)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 80 pg[10.6( v 54'774 (0'0,54'774] local-lis/les=0/0 n=6 ec=58/48 lis/c=78/68 les/c/f=79/69/0 sis=80) [0] r=0 lpr=80 pi=[68,80)/1 luod=0'0 crt=54'774 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 80 pg[10.6( v 54'774 (0'0,54'774] local-lis/les=0/0 n=6 ec=58/48 lis/c=78/68 les/c/f=79/69/0 sis=80) [0] r=0 lpr=80 pi=[68,80)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 80 pg[10.1e( v 54'774 (0'0,54'774] local-lis/les=0/0 n=5 ec=58/48 lis/c=78/68 les/c/f=79/69/0 sis=80) [0] r=0 lpr=80 pi=[68,80)/1 luod=0'0 crt=54'774 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:17 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 80 pg[10.1e( v 54'774 (0'0,54'774] local-lis/les=0/0 n=5 ec=58/48 lis/c=78/68 les/c/f=79/69/0 sis=80) [0] r=0 lpr=80 pi=[68,80)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:17 compute-0 python3.9[103351]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:41:17 compute-0 podman[103376]: 2025-09-30 17:41:17.158934416 +0000 UTC m=+0.046679463 container create dd06fc70027ced94b45e08cac46ce14382a79299a3e5f712cbcd5d3a950deac6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_austin, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Sep 30 17:41:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v17: 353 pgs: 4 remapped+peering, 349 active+clean; 458 KiB data, 98 MiB used, 40 GiB / 40 GiB avail; 116 B/s, 6 objects/s recovering
Sep 30 17:41:17 compute-0 systemd[1]: Started libpod-conmon-dd06fc70027ced94b45e08cac46ce14382a79299a3e5f712cbcd5d3a950deac6.scope.
Sep 30 17:41:17 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:41:17 compute-0 podman[103376]: 2025-09-30 17:41:17.140026925 +0000 UTC m=+0.027771972 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:41:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47c8524b4ecf841f7e9de471eb2b7a967bfee16d286853da2308a8f0542653bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:41:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47c8524b4ecf841f7e9de471eb2b7a967bfee16d286853da2308a8f0542653bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:41:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47c8524b4ecf841f7e9de471eb2b7a967bfee16d286853da2308a8f0542653bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:41:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47c8524b4ecf841f7e9de471eb2b7a967bfee16d286853da2308a8f0542653bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:41:17 compute-0 podman[103376]: 2025-09-30 17:41:17.25309438 +0000 UTC m=+0.140839437 container init dd06fc70027ced94b45e08cac46ce14382a79299a3e5f712cbcd5d3a950deac6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_austin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:41:17 compute-0 podman[103376]: 2025-09-30 17:41:17.267317649 +0000 UTC m=+0.155062696 container start dd06fc70027ced94b45e08cac46ce14382a79299a3e5f712cbcd5d3a950deac6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_austin, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:41:17 compute-0 podman[103376]: 2025-09-30 17:41:17.270988164 +0000 UTC m=+0.158733231 container attach dd06fc70027ced94b45e08cac46ce14382a79299a3e5f712cbcd5d3a950deac6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_austin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:41:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:41:17.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:17 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff3c003730 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:17 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff14003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:17 compute-0 determined_austin[103400]: {
Sep 30 17:41:17 compute-0 determined_austin[103400]:     "0": [
Sep 30 17:41:17 compute-0 determined_austin[103400]:         {
Sep 30 17:41:17 compute-0 determined_austin[103400]:             "devices": [
Sep 30 17:41:17 compute-0 determined_austin[103400]:                 "/dev/loop3"
Sep 30 17:41:17 compute-0 determined_austin[103400]:             ],
Sep 30 17:41:17 compute-0 determined_austin[103400]:             "lv_name": "ceph_lv0",
Sep 30 17:41:17 compute-0 determined_austin[103400]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:41:17 compute-0 determined_austin[103400]:             "lv_size": "21470642176",
Sep 30 17:41:17 compute-0 determined_austin[103400]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 17:41:17 compute-0 determined_austin[103400]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:41:17 compute-0 determined_austin[103400]:             "name": "ceph_lv0",
Sep 30 17:41:17 compute-0 determined_austin[103400]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:41:17 compute-0 determined_austin[103400]:             "tags": {
Sep 30 17:41:17 compute-0 determined_austin[103400]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:41:17 compute-0 determined_austin[103400]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:41:17 compute-0 determined_austin[103400]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 17:41:17 compute-0 determined_austin[103400]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 17:41:17 compute-0 determined_austin[103400]:                 "ceph.cluster_name": "ceph",
Sep 30 17:41:17 compute-0 determined_austin[103400]:                 "ceph.crush_device_class": "",
Sep 30 17:41:17 compute-0 determined_austin[103400]:                 "ceph.encrypted": "0",
Sep 30 17:41:17 compute-0 determined_austin[103400]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 17:41:17 compute-0 determined_austin[103400]:                 "ceph.osd_id": "0",
Sep 30 17:41:17 compute-0 determined_austin[103400]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 17:41:17 compute-0 determined_austin[103400]:                 "ceph.type": "block",
Sep 30 17:41:17 compute-0 determined_austin[103400]:                 "ceph.vdo": "0",
Sep 30 17:41:17 compute-0 determined_austin[103400]:                 "ceph.with_tpm": "0"
Sep 30 17:41:17 compute-0 determined_austin[103400]:             },
Sep 30 17:41:17 compute-0 determined_austin[103400]:             "type": "block",
Sep 30 17:41:17 compute-0 determined_austin[103400]:             "vg_name": "ceph_vg0"
Sep 30 17:41:17 compute-0 determined_austin[103400]:         }
Sep 30 17:41:17 compute-0 determined_austin[103400]:     ]
Sep 30 17:41:17 compute-0 determined_austin[103400]: }
Sep 30 17:41:17 compute-0 systemd[1]: libpod-dd06fc70027ced94b45e08cac46ce14382a79299a3e5f712cbcd5d3a950deac6.scope: Deactivated successfully.
Sep 30 17:41:17 compute-0 podman[103376]: 2025-09-30 17:41:17.560620911 +0000 UTC m=+0.448365958 container died dd06fc70027ced94b45e08cac46ce14382a79299a3e5f712cbcd5d3a950deac6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_austin, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Sep 30 17:41:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-47c8524b4ecf841f7e9de471eb2b7a967bfee16d286853da2308a8f0542653bc-merged.mount: Deactivated successfully.
Sep 30 17:41:17 compute-0 podman[103376]: 2025-09-30 17:41:17.61642966 +0000 UTC m=+0.504174757 container remove dd06fc70027ced94b45e08cac46ce14382a79299a3e5f712cbcd5d3a950deac6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_austin, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Sep 30 17:41:17 compute-0 systemd[1]: libpod-conmon-dd06fc70027ced94b45e08cac46ce14382a79299a3e5f712cbcd5d3a950deac6.scope: Deactivated successfully.
Sep 30 17:41:17 compute-0 sudo[103173]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:41:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:41:17.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:41:17 compute-0 sudo[103446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:41:17 compute-0 sudo[103446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:17 compute-0 sudo[103446]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:17 compute-0 sudo[103480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 17:41:17 compute-0 sudo[103480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:17 compute-0 ceph-mon[73755]: 9.0 scrub starts
Sep 30 17:41:17 compute-0 ceph-mon[73755]: 9.0 scrub ok
Sep 30 17:41:17 compute-0 ceph-mon[73755]: 12.1e scrub starts
Sep 30 17:41:17 compute-0 ceph-mon[73755]: 12.1e scrub ok
Sep 30 17:41:17 compute-0 ceph-mon[73755]: osdmap e80: 2 total, 2 up, 2 in
Sep 30 17:41:17 compute-0 ceph-mon[73755]: pgmap v17: 353 pgs: 4 remapped+peering, 349 active+clean; 458 KiB data, 98 MiB used, 40 GiB / 40 GiB avail; 116 B/s, 6 objects/s recovering
Sep 30 17:41:17 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 12.1f deep-scrub starts
Sep 30 17:41:17 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 12.1f deep-scrub ok
Sep 30 17:41:18 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Sep 30 17:41:18 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e81 e81: 2 total, 2 up, 2 in
Sep 30 17:41:18 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e81: 2 total, 2 up, 2 in
Sep 30 17:41:18 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 81 pg[10.6( v 54'774 (0'0,54'774] local-lis/les=80/81 n=6 ec=58/48 lis/c=78/68 les/c/f=79/69/0 sis=80) [0] r=0 lpr=80 pi=[68,80)/1 crt=54'774 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:41:18 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 81 pg[10.16( v 54'774 (0'0,54'774] local-lis/les=80/81 n=4 ec=58/48 lis/c=78/68 les/c/f=79/69/0 sis=80) [0] r=0 lpr=80 pi=[68,80)/1 crt=54'774 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:41:18 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 81 pg[10.e( v 54'774 (0'0,54'774] local-lis/les=80/81 n=5 ec=58/48 lis/c=78/68 les/c/f=79/69/0 sis=80) [0] r=0 lpr=80 pi=[68,80)/1 crt=54'774 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:41:18 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 81 pg[10.1e( v 54'774 (0'0,54'774] local-lis/les=80/81 n=5 ec=58/48 lis/c=78/68 les/c/f=79/69/0 sis=80) [0] r=0 lpr=80 pi=[68,80)/1 crt=54'774 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:41:18 compute-0 podman[103593]: 2025-09-30 17:41:18.202429829 +0000 UTC m=+0.043525411 container create 484910a82617d1d4836c89c13ab6a19b4c5d6d8266b203fa416c326b99564b03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Sep 30 17:41:18 compute-0 systemd[1]: Started libpod-conmon-484910a82617d1d4836c89c13ab6a19b4c5d6d8266b203fa416c326b99564b03.scope.
Sep 30 17:41:18 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:41:18 compute-0 podman[103593]: 2025-09-30 17:41:18.264715195 +0000 UTC m=+0.105810807 container init 484910a82617d1d4836c89c13ab6a19b4c5d6d8266b203fa416c326b99564b03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_moore, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid)
Sep 30 17:41:18 compute-0 podman[103593]: 2025-09-30 17:41:18.271550602 +0000 UTC m=+0.112646184 container start 484910a82617d1d4836c89c13ab6a19b4c5d6d8266b203fa416c326b99564b03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Sep 30 17:41:18 compute-0 podman[103593]: 2025-09-30 17:41:18.274489879 +0000 UTC m=+0.115585471 container attach 484910a82617d1d4836c89c13ab6a19b4c5d6d8266b203fa416c326b99564b03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_moore, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Sep 30 17:41:18 compute-0 strange_moore[103609]: 167 167
Sep 30 17:41:18 compute-0 systemd[1]: libpod-484910a82617d1d4836c89c13ab6a19b4c5d6d8266b203fa416c326b99564b03.scope: Deactivated successfully.
Sep 30 17:41:18 compute-0 podman[103593]: 2025-09-30 17:41:18.27645725 +0000 UTC m=+0.117552832 container died 484910a82617d1d4836c89c13ab6a19b4c5d6d8266b203fa416c326b99564b03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:41:18 compute-0 podman[103593]: 2025-09-30 17:41:18.184109973 +0000 UTC m=+0.025205585 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:41:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd08a080620e231eec040dca07857e2463aed1e4dd55e4cc7a6ec6069ea03ad1-merged.mount: Deactivated successfully.
Sep 30 17:41:18 compute-0 podman[103593]: 2025-09-30 17:41:18.310559285 +0000 UTC m=+0.151654867 container remove 484910a82617d1d4836c89c13ab6a19b4c5d6d8266b203fa416c326b99564b03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_moore, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Sep 30 17:41:18 compute-0 systemd[1]: libpod-conmon-484910a82617d1d4836c89c13ab6a19b4c5d6d8266b203fa416c326b99564b03.scope: Deactivated successfully.
Sep 30 17:41:18 compute-0 podman[103682]: 2025-09-30 17:41:18.487178208 +0000 UTC m=+0.042127433 container create fba93df5c956262e2a5edf868bc0756fa263f7085ab362548e3bee493e402cf6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_knuth, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:41:18 compute-0 systemd[1]: Started libpod-conmon-fba93df5c956262e2a5edf868bc0756fa263f7085ab362548e3bee493e402cf6.scope.
Sep 30 17:41:18 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:41:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3de60c19ecde71172baf5624632b6804cf80cd77202b35c89d66956784c8940a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:41:18 compute-0 podman[103682]: 2025-09-30 17:41:18.466677146 +0000 UTC m=+0.021626391 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:41:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3de60c19ecde71172baf5624632b6804cf80cd77202b35c89d66956784c8940a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:41:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3de60c19ecde71172baf5624632b6804cf80cd77202b35c89d66956784c8940a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:41:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3de60c19ecde71172baf5624632b6804cf80cd77202b35c89d66956784c8940a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:41:18 compute-0 podman[103682]: 2025-09-30 17:41:18.576768683 +0000 UTC m=+0.131717938 container init fba93df5c956262e2a5edf868bc0756fa263f7085ab362548e3bee493e402cf6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_knuth, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Sep 30 17:41:18 compute-0 podman[103682]: 2025-09-30 17:41:18.584131054 +0000 UTC m=+0.139080299 container start fba93df5c956262e2a5edf868bc0756fa263f7085ab362548e3bee493e402cf6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:41:18 compute-0 podman[103682]: 2025-09-30 17:41:18.587950763 +0000 UTC m=+0.142899998 container attach fba93df5c956262e2a5edf868bc0756fa263f7085ab362548e3bee493e402cf6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_knuth, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Sep 30 17:41:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:41:18] "GET /metrics HTTP/1.1" 200 45166 "" "Prometheus/2.51.0"
Sep 30 17:41:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:41:18] "GET /metrics HTTP/1.1" 200 45166 "" "Prometheus/2.51.0"
Sep 30 17:41:18 compute-0 ceph-mon[73755]: 8.0 scrub starts
Sep 30 17:41:18 compute-0 ceph-mon[73755]: 8.0 scrub ok
Sep 30 17:41:18 compute-0 ceph-mon[73755]: osdmap e81: 2 total, 2 up, 2 in
Sep 30 17:41:18 compute-0 sudo[103793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imrhaiwngaldemsmoramkzkcbuizahug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254078.3853645-44-71538571062726/AnsiballZ_command.py'
Sep 30 17:41:18 compute-0 sudo[103793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:41:18 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 12.1a deep-scrub starts
Sep 30 17:41:18 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 12.1a deep-scrub ok
Sep 30 17:41:19 compute-0 python3.9[103798]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                             pushd /var/tmp
                                             curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                             pushd repo-setup-main
                                             python3 -m venv ./venv
                                             PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                             ./venv/bin/repo-setup current-podified -b antelope
                                             popd
                                             rm -rf repo-setup-main
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:41:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v19: 353 pgs: 353 active+clean; 458 KiB data, 98 MiB used, 40 GiB / 40 GiB avail; 180 B/s, 9 objects/s recovering
Sep 30 17:41:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Sep 30 17:41:19 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Sep 30 17:41:19 compute-0 lvm[103856]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 17:41:19 compute-0 lvm[103856]: VG ceph_vg0 finished
Sep 30 17:41:19 compute-0 nice_knuth[103700]: {}
Sep 30 17:41:19 compute-0 systemd[1]: libpod-fba93df5c956262e2a5edf868bc0756fa263f7085ab362548e3bee493e402cf6.scope: Deactivated successfully.
Sep 30 17:41:19 compute-0 systemd[1]: libpod-fba93df5c956262e2a5edf868bc0756fa263f7085ab362548e3bee493e402cf6.scope: Consumed 1.114s CPU time.
Sep 30 17:41:19 compute-0 podman[103682]: 2025-09-30 17:41:19.299162002 +0000 UTC m=+0.854111237 container died fba93df5c956262e2a5edf868bc0756fa263f7085ab362548e3bee493e402cf6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_knuth, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:41:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-3de60c19ecde71172baf5624632b6804cf80cd77202b35c89d66956784c8940a-merged.mount: Deactivated successfully.
Sep 30 17:41:19 compute-0 podman[103682]: 2025-09-30 17:41:19.341782798 +0000 UTC m=+0.896732033 container remove fba93df5c956262e2a5edf868bc0756fa263f7085ab362548e3bee493e402cf6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Sep 30 17:41:19 compute-0 systemd[1]: libpod-conmon-fba93df5c956262e2a5edf868bc0756fa263f7085ab362548e3bee493e402cf6.scope: Deactivated successfully.
Sep 30 17:41:19 compute-0 sudo[103480]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:41:19 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:41:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:41:19.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:19 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Sep 30 17:41:19 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:19 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44009ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:19 compute-0 sudo[103874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 17:41:19 compute-0 sudo[103875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:41:19 compute-0 sudo[103875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:19 compute-0 sudo[103874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:19 compute-0 sudo[103875]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:19 compute-0 sudo[103874]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:19 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44009ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:19 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Reconfiguring alertmanager.compute-0 (dependencies changed)...
Sep 30 17:41:19 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Reconfiguring alertmanager.compute-0 (dependencies changed)...
Sep 30 17:41:19 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Reconfiguring daemon alertmanager.compute-0 on compute-0
Sep 30 17:41:19 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Reconfiguring daemon alertmanager.compute-0 on compute-0
Sep 30 17:41:19 compute-0 sudo[103927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:41:19 compute-0 sudo[103927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:19 compute-0 sudo[103927]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:41:19 compute-0 sudo[103952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/alertmanager:v0.25.0 --timeout 895 _orch deploy --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:41:19 compute-0 sudo[103952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:41:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:41:19.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:41:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Sep 30 17:41:19 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Sep 30 17:41:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e82 e82: 2 total, 2 up, 2 in
Sep 30 17:41:19 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e82: 2 total, 2 up, 2 in
Sep 30 17:41:19 compute-0 ceph-mon[73755]: 12.1f deep-scrub starts
Sep 30 17:41:19 compute-0 ceph-mon[73755]: 12.1f deep-scrub ok
Sep 30 17:41:19 compute-0 ceph-mon[73755]: 9.1 scrub starts
Sep 30 17:41:19 compute-0 ceph-mon[73755]: 9.1 scrub ok
Sep 30 17:41:19 compute-0 ceph-mon[73755]: pgmap v19: 353 pgs: 353 active+clean; 458 KiB data, 98 MiB used, 40 GiB / 40 GiB avail; 180 B/s, 9 objects/s recovering
Sep 30 17:41:19 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Sep 30 17:41:19 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:19 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:19 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:19 compute-0 ceph-mon[73755]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Sep 30 17:41:19 compute-0 ceph-mon[73755]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Sep 30 17:41:19 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 12.1b scrub starts
Sep 30 17:41:19 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 12.1b scrub ok
Sep 30 17:41:19 compute-0 podman[103992]: 2025-09-30 17:41:19.984415827 +0000 UTC m=+0.037018962 volume create 5a9ba234adccfce68e4931bf12bb190c3ee528be1c75f5d381dd17b964835d41
Sep 30 17:41:19 compute-0 podman[103992]: 2025-09-30 17:41:19.994052627 +0000 UTC m=+0.046655752 container create b819f7390fcef05923d45924bca3bfe2522ea100e99dc83b901650d1354d4c6a (image=quay.io/prometheus/alertmanager:v0.25.0, name=focused_diffie, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:41:20 compute-0 systemd[1]: Started libpod-conmon-b819f7390fcef05923d45924bca3bfe2522ea100e99dc83b901650d1354d4c6a.scope.
Sep 30 17:41:20 compute-0 podman[103992]: 2025-09-30 17:41:19.970049314 +0000 UTC m=+0.022652469 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Sep 30 17:41:20 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:41:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb5381b4dac9617a3866c4115d7fdd1ffeff1dbdc84f9d3c21c91ce60914aada/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Sep 30 17:41:20 compute-0 podman[103992]: 2025-09-30 17:41:20.080421408 +0000 UTC m=+0.133024533 container init b819f7390fcef05923d45924bca3bfe2522ea100e99dc83b901650d1354d4c6a (image=quay.io/prometheus/alertmanager:v0.25.0, name=focused_diffie, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:41:20 compute-0 podman[103992]: 2025-09-30 17:41:20.088235421 +0000 UTC m=+0.140838586 container start b819f7390fcef05923d45924bca3bfe2522ea100e99dc83b901650d1354d4c6a (image=quay.io/prometheus/alertmanager:v0.25.0, name=focused_diffie, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:41:20 compute-0 focused_diffie[104008]: 65534 65534
Sep 30 17:41:20 compute-0 systemd[1]: libpod-b819f7390fcef05923d45924bca3bfe2522ea100e99dc83b901650d1354d4c6a.scope: Deactivated successfully.
Sep 30 17:41:20 compute-0 podman[103992]: 2025-09-30 17:41:20.093531139 +0000 UTC m=+0.146134294 container attach b819f7390fcef05923d45924bca3bfe2522ea100e99dc83b901650d1354d4c6a (image=quay.io/prometheus/alertmanager:v0.25.0, name=focused_diffie, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:41:20 compute-0 podman[103992]: 2025-09-30 17:41:20.093867077 +0000 UTC m=+0.146470202 container died b819f7390fcef05923d45924bca3bfe2522ea100e99dc83b901650d1354d4c6a (image=quay.io/prometheus/alertmanager:v0.25.0, name=focused_diffie, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:41:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb5381b4dac9617a3866c4115d7fdd1ffeff1dbdc84f9d3c21c91ce60914aada-merged.mount: Deactivated successfully.
Sep 30 17:41:20 compute-0 podman[103992]: 2025-09-30 17:41:20.138923677 +0000 UTC m=+0.191526802 container remove b819f7390fcef05923d45924bca3bfe2522ea100e99dc83b901650d1354d4c6a (image=quay.io/prometheus/alertmanager:v0.25.0, name=focused_diffie, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:41:20 compute-0 podman[103992]: 2025-09-30 17:41:20.141906154 +0000 UTC m=+0.194509279 volume remove 5a9ba234adccfce68e4931bf12bb190c3ee528be1c75f5d381dd17b964835d41
Sep 30 17:41:20 compute-0 systemd[1]: libpod-conmon-b819f7390fcef05923d45924bca3bfe2522ea100e99dc83b901650d1354d4c6a.scope: Deactivated successfully.
Sep 30 17:41:20 compute-0 podman[104026]: 2025-09-30 17:41:20.198707638 +0000 UTC m=+0.034810524 volume create cddb6ff2bb030bd0c04429edac83dd5594d50b8bfddfca5b1576957bbc50c512
Sep 30 17:41:20 compute-0 podman[104026]: 2025-09-30 17:41:20.20684101 +0000 UTC m=+0.042943906 container create a374f93e7aca1ea788ace93b5c79cf74cddb28548c0ecfe42224d20acfd8b494 (image=quay.io/prometheus/alertmanager:v0.25.0, name=agitated_grothendieck, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:41:20 compute-0 systemd[1]: Started libpod-conmon-a374f93e7aca1ea788ace93b5c79cf74cddb28548c0ecfe42224d20acfd8b494.scope.
Sep 30 17:41:20 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:41:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56f517ebc85d10bef86f12eb31fd81e0cddce35840299fe4bb8e5ce6aa9e07f4/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Sep 30 17:41:20 compute-0 podman[104026]: 2025-09-30 17:41:20.184451128 +0000 UTC m=+0.020554024 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Sep 30 17:41:20 compute-0 podman[104026]: 2025-09-30 17:41:20.28199214 +0000 UTC m=+0.118095066 container init a374f93e7aca1ea788ace93b5c79cf74cddb28548c0ecfe42224d20acfd8b494 (image=quay.io/prometheus/alertmanager:v0.25.0, name=agitated_grothendieck, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:41:20 compute-0 podman[104026]: 2025-09-30 17:41:20.287503653 +0000 UTC m=+0.123606549 container start a374f93e7aca1ea788ace93b5c79cf74cddb28548c0ecfe42224d20acfd8b494 (image=quay.io/prometheus/alertmanager:v0.25.0, name=agitated_grothendieck, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:41:20 compute-0 agitated_grothendieck[104043]: 65534 65534
Sep 30 17:41:20 compute-0 systemd[1]: libpod-a374f93e7aca1ea788ace93b5c79cf74cddb28548c0ecfe42224d20acfd8b494.scope: Deactivated successfully.
Sep 30 17:41:20 compute-0 podman[104026]: 2025-09-30 17:41:20.291855976 +0000 UTC m=+0.127958952 container attach a374f93e7aca1ea788ace93b5c79cf74cddb28548c0ecfe42224d20acfd8b494 (image=quay.io/prometheus/alertmanager:v0.25.0, name=agitated_grothendieck, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:41:20 compute-0 podman[104026]: 2025-09-30 17:41:20.292257786 +0000 UTC m=+0.128360722 container died a374f93e7aca1ea788ace93b5c79cf74cddb28548c0ecfe42224d20acfd8b494 (image=quay.io/prometheus/alertmanager:v0.25.0, name=agitated_grothendieck, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:41:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-56f517ebc85d10bef86f12eb31fd81e0cddce35840299fe4bb8e5ce6aa9e07f4-merged.mount: Deactivated successfully.
Sep 30 17:41:20 compute-0 podman[104026]: 2025-09-30 17:41:20.335769036 +0000 UTC m=+0.171871932 container remove a374f93e7aca1ea788ace93b5c79cf74cddb28548c0ecfe42224d20acfd8b494 (image=quay.io/prometheus/alertmanager:v0.25.0, name=agitated_grothendieck, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:41:20 compute-0 podman[104026]: 2025-09-30 17:41:20.338827815 +0000 UTC m=+0.174930711 volume remove cddb6ff2bb030bd0c04429edac83dd5594d50b8bfddfca5b1576957bbc50c512
Sep 30 17:41:20 compute-0 systemd[1]: libpod-conmon-a374f93e7aca1ea788ace93b5c79cf74cddb28548c0ecfe42224d20acfd8b494.scope: Deactivated successfully.
Sep 30 17:41:20 compute-0 systemd[1]: Stopping Ceph alertmanager.compute-0 for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 17:41:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[98852]: ts=2025-09-30T17:41:20.614Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..."
Sep 30 17:41:20 compute-0 podman[104091]: 2025-09-30 17:41:20.62531046 +0000 UTC m=+0.050119251 container died 54b2fea94257f4c0dbb8baa51cc4daf28060912f14272b21840329b3da1a781c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:41:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-e94b9d0ffad0207bfc182feb8704c5b56943dd28b98cd325accb5583c086a525-merged.mount: Deactivated successfully.
Sep 30 17:41:20 compute-0 podman[104091]: 2025-09-30 17:41:20.667046834 +0000 UTC m=+0.091855635 container remove 54b2fea94257f4c0dbb8baa51cc4daf28060912f14272b21840329b3da1a781c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:41:20 compute-0 podman[104091]: 2025-09-30 17:41:20.671288674 +0000 UTC m=+0.096097485 volume remove 0ff05485dc6191c213faf58adc02a5a4acca2eb6e133f30602e71b69a3a6e65f
Sep 30 17:41:20 compute-0 bash[104091]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0
Sep 30 17:41:20 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@alertmanager.compute-0.service: Deactivated successfully.
Sep 30 17:41:20 compute-0 systemd[1]: Stopped Ceph alertmanager.compute-0 for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:41:20 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@alertmanager.compute-0.service: Consumed 1.014s CPU time.
Sep 30 17:41:20 compute-0 systemd[1]: Starting Ceph alertmanager.compute-0 for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 17:41:20 compute-0 ceph-mon[73755]: 12.1a deep-scrub starts
Sep 30 17:41:20 compute-0 ceph-mon[73755]: 12.1a deep-scrub ok
Sep 30 17:41:20 compute-0 ceph-mon[73755]: 8.7 scrub starts
Sep 30 17:41:20 compute-0 ceph-mon[73755]: 8.7 scrub ok
Sep 30 17:41:20 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Sep 30 17:41:20 compute-0 ceph-mon[73755]: osdmap e82: 2 total, 2 up, 2 in
Sep 30 17:41:20 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 12.16 scrub starts
Sep 30 17:41:21 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 12.16 scrub ok
Sep 30 17:41:21 compute-0 podman[104197]: 2025-09-30 17:41:21.016253787 +0000 UTC m=+0.034179178 volume create 4a64cbf938ef36dd2e2d0d154cb8b838ce5ae6da92e4d4f70c4ff4fa78850b67
Sep 30 17:41:21 compute-0 podman[104197]: 2025-09-30 17:41:21.025499717 +0000 UTC m=+0.043425098 container create 316307af09cabd347a032cf06e9544ea621ab1be46fe52d1c226ab05d1d9e331 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:41:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31d88b4b04220de564af0757d6c288ac066b8692ced3f3a3a66128b9ddbb4dc1/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Sep 30 17:41:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31d88b4b04220de564af0757d6c288ac066b8692ced3f3a3a66128b9ddbb4dc1/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Sep 30 17:41:21 compute-0 podman[104197]: 2025-09-30 17:41:21.080695519 +0000 UTC m=+0.098620960 container init 316307af09cabd347a032cf06e9544ea621ab1be46fe52d1c226ab05d1d9e331 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:41:21 compute-0 podman[104197]: 2025-09-30 17:41:21.085119274 +0000 UTC m=+0.103044675 container start 316307af09cabd347a032cf06e9544ea621ab1be46fe52d1c226ab05d1d9e331 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:41:21 compute-0 bash[104197]: 316307af09cabd347a032cf06e9544ea621ab1be46fe52d1c226ab05d1d9e331
Sep 30 17:41:21 compute-0 podman[104197]: 2025-09-30 17:41:21.003606649 +0000 UTC m=+0.021532060 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Sep 30 17:41:21 compute-0 systemd[1]: Started Ceph alertmanager.compute-0 for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:41:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:41:21.109Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Sep 30 17:41:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:41:21.109Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Sep 30 17:41:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:41:21.120Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Sep 30 17:41:21 compute-0 sudo[103952]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:41:21.123Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Sep 30 17:41:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:41:21 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:41:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:41:21.158Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Sep 30 17:41:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:41:21.159Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Sep 30 17:41:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v21: 353 pgs: 353 active+clean; 458 KiB data, 98 MiB used, 40 GiB / 40 GiB avail; 147 B/s, 8 objects/s recovering
Sep 30 17:41:21 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Sep 30 17:41:21 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Sep 30 17:41:21 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Reconfiguring grafana.compute-0 (dependencies changed)...
Sep 30 17:41:21 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Reconfiguring grafana.compute-0 (dependencies changed)...
Sep 30 17:41:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:41:21.164Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Sep 30 17:41:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:41:21.164Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Sep 30 17:41:21 compute-0 ceph-mgr[74051]: [cephadm INFO cephadm.serve] Reconfiguring daemon grafana.compute-0 on compute-0
Sep 30 17:41:21 compute-0 ceph-mgr[74051]: log_channel(cephadm) log [INF] : Reconfiguring daemon grafana.compute-0 on compute-0
Sep 30 17:41:21 compute-0 sudo[104235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:41:21 compute-0 sudo[104235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:21 compute-0 sudo[104235]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:21 compute-0 sudo[104260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/grafana:10.4.0 --timeout 895 _orch deploy --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b
Sep 30 17:41:21 compute-0 sudo[104260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:41:21.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:21 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff3c003730 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:21 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff14003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:41:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:41:21.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:41:21 compute-0 podman[104304]: 2025-09-30 17:41:21.769329322 +0000 UTC m=+0.056496407 container create 37866cafa8d024ef17da212f635e07e7a5ec7c0deca3497d37a730b11408948e (image=quay.io/ceph/grafana:10.4.0, name=brave_wescoff, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:41:21 compute-0 podman[104304]: 2025-09-30 17:41:21.73765496 +0000 UTC m=+0.024822075 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Sep 30 17:41:21 compute-0 systemd[1]: Started libpod-conmon-37866cafa8d024ef17da212f635e07e7a5ec7c0deca3497d37a730b11408948e.scope.
Sep 30 17:41:21 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:41:21 compute-0 podman[104304]: 2025-09-30 17:41:21.886545254 +0000 UTC m=+0.173712359 container init 37866cafa8d024ef17da212f635e07e7a5ec7c0deca3497d37a730b11408948e (image=quay.io/ceph/grafana:10.4.0, name=brave_wescoff, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:41:21 compute-0 podman[104304]: 2025-09-30 17:41:21.892589261 +0000 UTC m=+0.179756346 container start 37866cafa8d024ef17da212f635e07e7a5ec7c0deca3497d37a730b11408948e (image=quay.io/ceph/grafana:10.4.0, name=brave_wescoff, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:41:21 compute-0 podman[104304]: 2025-09-30 17:41:21.895143497 +0000 UTC m=+0.182310592 container attach 37866cafa8d024ef17da212f635e07e7a5ec7c0deca3497d37a730b11408948e (image=quay.io/ceph/grafana:10.4.0, name=brave_wescoff, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:41:21 compute-0 brave_wescoff[104320]: 472 0
Sep 30 17:41:21 compute-0 systemd[1]: libpod-37866cafa8d024ef17da212f635e07e7a5ec7c0deca3497d37a730b11408948e.scope: Deactivated successfully.
Sep 30 17:41:21 compute-0 podman[104304]: 2025-09-30 17:41:21.897426837 +0000 UTC m=+0.184593942 container died 37866cafa8d024ef17da212f635e07e7a5ec7c0deca3497d37a730b11408948e (image=quay.io/ceph/grafana:10.4.0, name=brave_wescoff, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:41:21 compute-0 ceph-mon[73755]: 12.1b scrub starts
Sep 30 17:41:21 compute-0 ceph-mon[73755]: 12.1b scrub ok
Sep 30 17:41:21 compute-0 ceph-mon[73755]: 8.6 scrub starts
Sep 30 17:41:21 compute-0 ceph-mon[73755]: 8.6 scrub ok
Sep 30 17:41:21 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:21 compute-0 ceph-mon[73755]: pgmap v21: 353 pgs: 353 active+clean; 458 KiB data, 98 MiB used, 40 GiB / 40 GiB avail; 147 B/s, 8 objects/s recovering
Sep 30 17:41:21 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:21 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Sep 30 17:41:21 compute-0 ceph-mon[73755]: Reconfiguring grafana.compute-0 (dependencies changed)...
Sep 30 17:41:21 compute-0 ceph-mon[73755]: Reconfiguring daemon grafana.compute-0 on compute-0
Sep 30 17:41:21 compute-0 ceph-mon[73755]: 11.6 scrub starts
Sep 30 17:41:21 compute-0 ceph-mon[73755]: 11.6 scrub ok
Sep 30 17:41:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1bbe86285fc0215712d9523ed0ca0ff402b06c6577f594b3a2c48a2347fa5e8-merged.mount: Deactivated successfully.
Sep 30 17:41:21 compute-0 podman[104304]: 2025-09-30 17:41:21.936648605 +0000 UTC m=+0.223815690 container remove 37866cafa8d024ef17da212f635e07e7a5ec7c0deca3497d37a730b11408948e (image=quay.io/ceph/grafana:10.4.0, name=brave_wescoff, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:41:21 compute-0 systemd[1]: libpod-conmon-37866cafa8d024ef17da212f635e07e7a5ec7c0deca3497d37a730b11408948e.scope: Deactivated successfully.
Sep 30 17:41:22 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 12.17 scrub starts
Sep 30 17:41:22 compute-0 podman[104336]: 2025-09-30 17:41:22.003786587 +0000 UTC m=+0.049052194 container create 8e648f3b39634993851d036d1c05751d84155d63b87cacb9d0c1460446b9f02e (image=quay.io/ceph/grafana:10.4.0, name=infallible_torvalds, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:41:22 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 12.17 scrub ok
Sep 30 17:41:22 compute-0 systemd[1]: Started libpod-conmon-8e648f3b39634993851d036d1c05751d84155d63b87cacb9d0c1460446b9f02e.scope.
Sep 30 17:41:22 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:41:22 compute-0 podman[104336]: 2025-09-30 17:41:21.975191365 +0000 UTC m=+0.020456992 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Sep 30 17:41:22 compute-0 podman[104336]: 2025-09-30 17:41:22.075563839 +0000 UTC m=+0.120829456 container init 8e648f3b39634993851d036d1c05751d84155d63b87cacb9d0c1460446b9f02e (image=quay.io/ceph/grafana:10.4.0, name=infallible_torvalds, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:41:22 compute-0 podman[104336]: 2025-09-30 17:41:22.082413677 +0000 UTC m=+0.127679284 container start 8e648f3b39634993851d036d1c05751d84155d63b87cacb9d0c1460446b9f02e (image=quay.io/ceph/grafana:10.4.0, name=infallible_torvalds, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:41:22 compute-0 infallible_torvalds[104353]: 472 0
Sep 30 17:41:22 compute-0 systemd[1]: libpod-8e648f3b39634993851d036d1c05751d84155d63b87cacb9d0c1460446b9f02e.scope: Deactivated successfully.
Sep 30 17:41:22 compute-0 podman[104336]: 2025-09-30 17:41:22.088072134 +0000 UTC m=+0.133337771 container attach 8e648f3b39634993851d036d1c05751d84155d63b87cacb9d0c1460446b9f02e (image=quay.io/ceph/grafana:10.4.0, name=infallible_torvalds, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:41:22 compute-0 podman[104336]: 2025-09-30 17:41:22.088977697 +0000 UTC m=+0.134243304 container died 8e648f3b39634993851d036d1c05751d84155d63b87cacb9d0c1460446b9f02e (image=quay.io/ceph/grafana:10.4.0, name=infallible_torvalds, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:41:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Sep 30 17:41:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ccdc28b3a8e680c3f8fce1bd90d6f2d78cf7a46a4b70025fbbbf05f12e4d620-merged.mount: Deactivated successfully.
Sep 30 17:41:22 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Sep 30 17:41:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e83 e83: 2 total, 2 up, 2 in
Sep 30 17:41:22 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e83: 2 total, 2 up, 2 in
Sep 30 17:41:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:41:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:41:22 compute-0 podman[104336]: 2025-09-30 17:41:22.240419878 +0000 UTC m=+0.285685525 container remove 8e648f3b39634993851d036d1c05751d84155d63b87cacb9d0c1460446b9f02e (image=quay.io/ceph/grafana:10.4.0, name=infallible_torvalds, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:41:22 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 83 pg[10.1f( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=69/69 les/c/f=70/70/0 sis=82) [0] r=0 lpr=83 pi=[69,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:22 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 83 pg[10.7( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=69/69 les/c/f=70/70/0 sis=82) [0] r=0 lpr=83 pi=[69,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:22 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 83 pg[10.f( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=69/69 les/c/f=70/70/0 sis=82) [0] r=0 lpr=83 pi=[69,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:22 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 83 pg[10.17( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=69/69 les/c/f=70/70/0 sis=82) [0] r=0 lpr=83 pi=[69,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:22 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 83 pg[10.8( v 54'774 (0'0,54'774] local-lis/les=58/59 n=7 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=83 pruub=13.955559731s) [1] r=-1 lpr=83 pi=[58,83)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 188.327270508s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:22 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 83 pg[10.8( v 54'774 (0'0,54'774] local-lis/les=58/59 n=7 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=83 pruub=13.955533981s) [1] r=-1 lpr=83 pi=[58,83)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 188.327270508s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:22 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 83 pg[10.18( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=83 pruub=13.955241203s) [1] r=-1 lpr=83 pi=[58,83)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 188.327713013s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:22 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 83 pg[10.18( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=83 pruub=13.955220222s) [1] r=-1 lpr=83 pi=[58,83)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 188.327713013s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:22 compute-0 systemd[1]: libpod-conmon-8e648f3b39634993851d036d1c05751d84155d63b87cacb9d0c1460446b9f02e.scope: Deactivated successfully.
Sep 30 17:41:22 compute-0 systemd[1]: Stopping Ceph grafana.compute-0 for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 17:41:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=server t=2025-09-30T17:41:22.537450747Z level=info msg="Shutdown started" reason="System signal: terminated"
Sep 30 17:41:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=ticker t=2025-09-30T17:41:22.537851487Z level=info msg=stopped last_tick=2025-09-30T17:41:20Z
Sep 30 17:41:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=tracing t=2025-09-30T17:41:22.537861648Z level=info msg="Closing tracing"
Sep 30 17:41:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=grafana-apiserver t=2025-09-30T17:41:22.538034272Z level=info msg="StorageObjectCountTracker pruner is exiting"
Sep 30 17:41:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[99380]: logger=sqlstore.transactions t=2025-09-30T17:41:22.549574182Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Sep 30 17:41:22 compute-0 podman[104404]: 2025-09-30 17:41:22.577825215 +0000 UTC m=+0.070567753 container died c0c4f203e50521af1e67ea671cd3250328ab176a59126da54fd0b28cda8d538c (image=quay.io/ceph/grafana:10.4.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:41:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9dca52ac82893bd3df0349982ca23dca6ca2b36e5fa2a8dbc4af22904dfbc73-merged.mount: Deactivated successfully.
Sep 30 17:41:22 compute-0 podman[104404]: 2025-09-30 17:41:22.690317714 +0000 UTC m=+0.183060232 container remove c0c4f203e50521af1e67ea671cd3250328ab176a59126da54fd0b28cda8d538c (image=quay.io/ceph/grafana:10.4.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:41:22 compute-0 bash[104404]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0
Sep 30 17:41:22 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@grafana.compute-0.service: Deactivated successfully.
Sep 30 17:41:22 compute-0 systemd[1]: Stopped Ceph grafana.compute-0 for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:41:22 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@grafana.compute-0.service: Consumed 4.026s CPU time.
Sep 30 17:41:22 compute-0 systemd[1]: Starting Ceph grafana.compute-0 for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 17:41:22 compute-0 ceph-mon[73755]: 12.16 scrub starts
Sep 30 17:41:22 compute-0 ceph-mon[73755]: 12.16 scrub ok
Sep 30 17:41:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Sep 30 17:41:22 compute-0 ceph-mon[73755]: osdmap e83: 2 total, 2 up, 2 in
Sep 30 17:41:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:41:22 compute-0 ceph-mon[73755]: 9.4 scrub starts
Sep 30 17:41:22 compute-0 ceph-mon[73755]: 9.4 scrub ok
Sep 30 17:41:22 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 12.14 scrub starts
Sep 30 17:41:23 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 12.14 scrub ok
Sep 30 17:41:23 compute-0 podman[104506]: 2025-09-30 17:41:23.063223263 +0000 UTC m=+0.041846217 container create cb74c136339a4f8b8601bbbbeb0c8f6158501bcf17f6fc4369db4a5ec1b442a2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:41:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c5d6b159baf3649620b2bce5d7f44a9c401134856fc5dbfeb5ea992fc77619a/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:41:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c5d6b159baf3649620b2bce5d7f44a9c401134856fc5dbfeb5ea992fc77619a/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Sep 30 17:41:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c5d6b159baf3649620b2bce5d7f44a9c401134856fc5dbfeb5ea992fc77619a/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Sep 30 17:41:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c5d6b159baf3649620b2bce5d7f44a9c401134856fc5dbfeb5ea992fc77619a/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Sep 30 17:41:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c5d6b159baf3649620b2bce5d7f44a9c401134856fc5dbfeb5ea992fc77619a/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:41:23.124Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.00048539s
Sep 30 17:41:23 compute-0 podman[104506]: 2025-09-30 17:41:23.044159858 +0000 UTC m=+0.022782802 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Sep 30 17:41:23 compute-0 podman[104506]: 2025-09-30 17:41:23.143401804 +0000 UTC m=+0.122024768 container init cb74c136339a4f8b8601bbbbeb0c8f6158501bcf17f6fc4369db4a5ec1b442a2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:41:23 compute-0 podman[104506]: 2025-09-30 17:41:23.149164843 +0000 UTC m=+0.127787797 container start cb74c136339a4f8b8601bbbbeb0c8f6158501bcf17f6fc4369db4a5ec1b442a2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:41:23 compute-0 bash[104506]: cb74c136339a4f8b8601bbbbeb0c8f6158501bcf17f6fc4369db4a5ec1b442a2
Sep 30 17:41:23 compute-0 systemd[1]: Started Ceph grafana.compute-0 for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:41:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v23: 353 pgs: 353 active+clean; 458 KiB data, 98 MiB used, 40 GiB / 40 GiB avail; 135 B/s, 7 objects/s recovering
Sep 30 17:41:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Sep 30 17:41:23 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Sep 30 17:41:23 compute-0 sudo[104260]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:41:23 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:41:23 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "dashboard get-alertmanager-api-host"} v 0)
Sep 30 17:41:23 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Sep 30 17:41:23 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Sep 30 17:41:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "dashboard get-grafana-api-url"} v 0)
Sep 30 17:41:23 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Sep 30 17:41:23 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Sep 30 17:41:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Sep 30 17:41:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"} v 0)
Sep 30 17:41:23 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Sep 30 17:41:23 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Sep 30 17:41:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Sep 30 17:41:23 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Sep 30 17:41:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e84 e84: 2 total, 2 up, 2 in
Sep 30 17:41:23 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e84: 2 total, 2 up, 2 in
Sep 30 17:41:23 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 84 pg[10.17( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=69/69 les/c/f=70/70/0 sis=84) [0]/[1] r=-1 lpr=84 pi=[69,84)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:23 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 84 pg[10.17( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=69/69 les/c/f=70/70/0 sis=84) [0]/[1] r=-1 lpr=84 pi=[69,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:23 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 84 pg[10.f( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=69/69 les/c/f=70/70/0 sis=84) [0]/[1] r=-1 lpr=84 pi=[69,84)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:23 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 84 pg[10.f( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=69/69 les/c/f=70/70/0 sis=84) [0]/[1] r=-1 lpr=84 pi=[69,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:23 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 84 pg[10.8( v 54'774 (0'0,54'774] local-lis/les=58/59 n=7 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=84) [1]/[0] r=0 lpr=84 pi=[58,84)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:23 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 84 pg[10.8( v 54'774 (0'0,54'774] local-lis/les=58/59 n=7 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=84) [1]/[0] r=0 lpr=84 pi=[58,84)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:23 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 84 pg[10.7( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=69/69 les/c/f=70/70/0 sis=84) [0]/[1] r=-1 lpr=84 pi=[69,84)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:23 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 84 pg[10.7( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=69/69 les/c/f=70/70/0 sis=84) [0]/[1] r=-1 lpr=84 pi=[69,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:23 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 84 pg[10.18( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=84) [1]/[0] r=0 lpr=84 pi=[58,84)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:23 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 84 pg[10.18( v 54'774 (0'0,54'774] local-lis/les=58/59 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=84) [1]/[0] r=0 lpr=84 pi=[58,84)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:23 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 84 pg[10.1f( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=69/69 les/c/f=70/70/0 sis=84) [0]/[1] r=-1 lpr=84 pi=[69,84)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:23 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 84 pg[10.1f( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=69/69 les/c/f=70/70/0 sis=84) [0]/[1] r=-1 lpr=84 pi=[69,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:23 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:23 compute-0 ceph-mgr[74051]: [prometheus INFO root] Restarting engine...
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: [30/Sep/2025:17:41:23] ENGINE Bus STOPPING
Sep 30 17:41:23 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.error] [30/Sep/2025:17:41:23] ENGINE Bus STOPPING
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: [30/Sep/2025:17:41:23] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: [30/Sep/2025:17:41:23] ENGINE Bus STOPPED
Sep 30 17:41:23 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.error] [30/Sep/2025:17:41:23] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Sep 30 17:41:23 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.error] [30/Sep/2025:17:41:23] ENGINE Bus STOPPED
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: [30/Sep/2025:17:41:23] ENGINE Bus STARTING
Sep 30 17:41:23 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.error] [30/Sep/2025:17:41:23] ENGINE Bus STARTING
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=settings t=2025-09-30T17:41:23.32782599Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-09-30T17:41:23Z
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=settings t=2025-09-30T17:41:23.328133948Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=settings t=2025-09-30T17:41:23.328142628Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=settings t=2025-09-30T17:41:23.328147029Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=settings t=2025-09-30T17:41:23.328154839Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=settings t=2025-09-30T17:41:23.328158589Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=settings t=2025-09-30T17:41:23.328162219Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=settings t=2025-09-30T17:41:23.328165899Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=settings t=2025-09-30T17:41:23.328169879Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=settings t=2025-09-30T17:41:23.328173589Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=settings t=2025-09-30T17:41:23.328177159Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=settings t=2025-09-30T17:41:23.328183309Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=settings t=2025-09-30T17:41:23.328187Z level=info msg=Target target=[all]
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=settings t=2025-09-30T17:41:23.32820151Z level=info msg="Path Home" path=/usr/share/grafana
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=settings t=2025-09-30T17:41:23.32820562Z level=info msg="Path Data" path=/var/lib/grafana
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=settings t=2025-09-30T17:41:23.32820915Z level=info msg="Path Logs" path=/var/log/grafana
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=settings t=2025-09-30T17:41:23.32821271Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=settings t=2025-09-30T17:41:23.32821948Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=settings t=2025-09-30T17:41:23.32822309Z level=info msg="App mode production"
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=sqlstore t=2025-09-30T17:41:23.328813426Z level=info msg="Connecting to DB" dbtype=sqlite3
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=sqlstore t=2025-09-30T17:41:23.328832526Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=migrator t=2025-09-30T17:41:23.329720189Z level=info msg="Starting DB migrations"
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=migrator t=2025-09-30T17:41:23.347262115Z level=info msg="migrations completed" performed=0 skipped=547 duration=573.365µs
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=sqlstore t=2025-09-30T17:41:23.348361023Z level=info msg="Created default organization"
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=secrets t=2025-09-30T17:41:23.348792934Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=plugin.store t=2025-09-30T17:41:23.366449113Z level=info msg="Loading plugins..."
Sep 30 17:41:23 compute-0 sudo[104557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:41:23 compute-0 sudo[104557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:23 compute-0 sudo[104557]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: [30/Sep/2025:17:41:23] ENGINE Serving on http://:::9283
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: [30/Sep/2025:17:41:23] ENGINE Bus STARTED
Sep 30 17:41:23 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.error] [30/Sep/2025:17:41:23] ENGINE Serving on http://:::9283
Sep 30 17:41:23 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.error] [30/Sep/2025:17:41:23] ENGINE Bus STARTED
Sep 30 17:41:23 compute-0 ceph-mgr[74051]: [prometheus INFO root] Engine started.
Sep 30 17:41:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:41:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:41:23.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:41:23 compute-0 sudo[104582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Sep 30 17:41:23 compute-0 sudo[104582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=local.finder t=2025-09-30T17:41:23.438486332Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=plugin.store t=2025-09-30T17:41:23.438589575Z level=info msg="Plugins loaded" count=55 duration=72.140792ms
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=query_data t=2025-09-30T17:41:23.443490912Z level=info msg="Query Service initialization"
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=live.push_http t=2025-09-30T17:41:23.44805204Z level=info msg="Live Push Gateway initialization"
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=ngalert.migration t=2025-09-30T17:41:23.455460903Z level=info msg=Starting
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:23 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff14003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=ngalert.state.manager t=2025-09-30T17:41:23.466779777Z level=info msg="Running in alternative execution of Error/NoData mode"
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=infra.usagestats.collector t=2025-09-30T17:41:23.469068076Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=provisioning.datasources t=2025-09-30T17:41:23.471287704Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=provisioning.alerting t=2025-09-30T17:41:23.493446819Z level=info msg="starting to provision alerting"
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=provisioning.alerting t=2025-09-30T17:41:23.493470199Z level=info msg="finished to provision alerting"
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=http.server t=2025-09-30T17:41:23.495755359Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=ngalert.state.manager t=2025-09-30T17:41:23.495941783Z level=info msg="Warming state cache for startup"
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=grafanaStorageLogger t=2025-09-30T17:41:23.495986285Z level=info msg="Storage starting"
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=ngalert.multiorg.alertmanager t=2025-09-30T17:41:23.498615913Z level=info msg="Starting MultiOrg Alertmanager"
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=provisioning.dashboard t=2025-09-30T17:41:23.499614299Z level=info msg="starting to provision dashboards"
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=ngalert.state.manager t=2025-09-30T17:41:23.500788039Z level=info msg="State cache has been initialized" states=0 duration=4.843106ms
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=ngalert.scheduler t=2025-09-30T17:41:23.500918843Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=ticker t=2025-09-30T17:41:23.500965944Z level=info msg=starting first_tick=2025-09-30T17:41:30Z
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=http.server t=2025-09-30T17:41:23.502019451Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=provisioning.dashboard t=2025-09-30T17:41:23.522222466Z level=info msg="finished to provision dashboards"
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:23 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44009ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=plugins.update.checker t=2025-09-30T17:41:23.562440189Z level=info msg="Update check succeeded" duration=63.791925ms
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=grafana.update.checker t=2025-09-30T17:41:23.567228494Z level=info msg="Update check succeeded" duration=69.602106ms
Sep 30 17:41:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:41:23.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:23 compute-0 podman[104681]: 2025-09-30 17:41:23.9365867 +0000 UTC m=+0.063928960 container exec 28cb2903608cec8e7c5a39aa3f398cadea7d807beef2cb8cca333795d18a1341 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:41:23 compute-0 ceph-mon[73755]: 12.17 scrub starts
Sep 30 17:41:23 compute-0 ceph-mon[73755]: 12.17 scrub ok
Sep 30 17:41:23 compute-0 ceph-mon[73755]: pgmap v23: 353 pgs: 353 active+clean; 458 KiB data, 98 MiB used, 40 GiB / 40 GiB avail; 135 B/s, 7 objects/s recovering
Sep 30 17:41:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Sep 30 17:41:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Sep 30 17:41:23 compute-0 ceph-mon[73755]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Sep 30 17:41:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Sep 30 17:41:23 compute-0 ceph-mon[73755]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Sep 30 17:41:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Sep 30 17:41:23 compute-0 ceph-mon[73755]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Sep 30 17:41:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Sep 30 17:41:23 compute-0 ceph-mon[73755]: osdmap e84: 2 total, 2 up, 2 in
Sep 30 17:41:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:23 compute-0 ceph-mon[73755]: 9.5 scrub starts
Sep 30 17:41:23 compute-0 ceph-mon[73755]: 9.5 scrub ok
Sep 30 17:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=grafana-apiserver t=2025-09-30T17:41:23.999235186Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Sep 30 17:41:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=grafana-apiserver t=2025-09-30T17:41:23.999821211Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Sep 30 17:41:24 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Sep 30 17:41:24 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Sep 30 17:41:24 compute-0 podman[104681]: 2025-09-30 17:41:24.050325832 +0000 UTC m=+0.177668072 container exec_died 28cb2903608cec8e7c5a39aa3f398cadea7d807beef2cb8cca333795d18a1341 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:41:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Sep 30 17:41:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e85 e85: 2 total, 2 up, 2 in
Sep 30 17:41:24 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e85: 2 total, 2 up, 2 in
Sep 30 17:41:24 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 85 pg[10.18( v 54'774 (0'0,54'774] local-lis/les=84/85 n=5 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=84) [1]/[0] async=[1] r=0 lpr=84 pi=[58,84)/1 crt=54'774 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:41:24 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 85 pg[10.8( v 54'774 (0'0,54'774] local-lis/les=84/85 n=7 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=84) [1]/[0] async=[1] r=0 lpr=84 pi=[58,84)/1 crt=54'774 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:41:24 compute-0 podman[104799]: 2025-09-30 17:41:24.476311308 +0000 UTC m=+0.049727732 container exec 0a64c261699b14850432928cca94e24743e97435fdc1d2cca08edcee1f870789 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:41:24 compute-0 podman[104799]: 2025-09-30 17:41:24.485754703 +0000 UTC m=+0.059171097 container exec_died 0a64c261699b14850432928cca94e24743e97435fdc1d2cca08edcee1f870789 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:41:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:41:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Sep 30 17:41:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e86 e86: 2 total, 2 up, 2 in
Sep 30 17:41:24 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e86: 2 total, 2 up, 2 in
Sep 30 17:41:24 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 86 pg[10.7( v 54'774 (0'0,54'774] local-lis/les=0/0 n=6 ec=58/48 lis/c=84/69 les/c/f=85/70/0 sis=86) [0] r=0 lpr=86 pi=[69,86)/1 luod=0'0 crt=54'774 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:24 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 86 pg[10.1f( v 54'774 (0'0,54'774] local-lis/les=0/0 n=5 ec=58/48 lis/c=84/69 les/c/f=85/70/0 sis=86) [0] r=0 lpr=86 pi=[69,86)/1 luod=0'0 crt=54'774 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:24 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 86 pg[10.8( v 54'774 (0'0,54'774] local-lis/les=84/85 n=7 ec=58/48 lis/c=84/58 les/c/f=85/59/0 sis=86 pruub=15.639336586s) [1] async=[1] r=-1 lpr=86 pi=[58,86)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 192.417175293s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:24 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 86 pg[10.1f( v 54'774 (0'0,54'774] local-lis/les=0/0 n=5 ec=58/48 lis/c=84/69 les/c/f=85/70/0 sis=86) [0] r=0 lpr=86 pi=[69,86)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:24 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Sep 30 17:41:24 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 86 pg[10.8( v 54'774 (0'0,54'774] local-lis/les=84/85 n=7 ec=58/48 lis/c=84/58 les/c/f=85/59/0 sis=86 pruub=15.639277458s) [1] r=-1 lpr=86 pi=[58,86)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.417175293s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:24 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:41:24.660285) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 17:41:24 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Sep 30 17:41:24 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254084660402, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7546, "num_deletes": 250, "total_data_size": 13768958, "memory_usage": 14172352, "flush_reason": "Manual Compaction"}
Sep 30 17:41:24 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Sep 30 17:41:24 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 86 pg[10.7( v 54'774 (0'0,54'774] local-lis/les=0/0 n=6 ec=58/48 lis/c=84/69 les/c/f=85/70/0 sis=86) [0] r=0 lpr=86 pi=[69,86)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:24 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 86 pg[10.f( v 54'774 (0'0,54'774] local-lis/les=0/0 n=6 ec=58/48 lis/c=84/69 les/c/f=85/70/0 sis=86) [0] r=0 lpr=86 pi=[69,86)/1 luod=0'0 crt=54'774 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:24 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 86 pg[10.17( v 54'774 (0'0,54'774] local-lis/les=0/0 n=3 ec=58/48 lis/c=84/69 les/c/f=85/70/0 sis=86) [0] r=0 lpr=86 pi=[69,86)/1 luod=0'0 crt=54'774 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:24 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 86 pg[10.f( v 54'774 (0'0,54'774] local-lis/les=0/0 n=6 ec=58/48 lis/c=84/69 les/c/f=85/70/0 sis=86) [0] r=0 lpr=86 pi=[69,86)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:24 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 86 pg[10.17( v 54'774 (0'0,54'774] local-lis/les=0/0 n=3 ec=58/48 lis/c=84/69 les/c/f=85/70/0 sis=86) [0] r=0 lpr=86 pi=[69,86)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:24 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 86 pg[10.18( v 54'774 (0'0,54'774] local-lis/les=84/85 n=5 ec=58/48 lis/c=84/58 les/c/f=85/59/0 sis=86 pruub=15.638794899s) [1] async=[1] r=-1 lpr=86 pi=[58,86)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 192.417175293s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:24 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 86 pg[10.18( v 54'774 (0'0,54'774] local-lis/les=84/85 n=5 ec=58/48 lis/c=84/58 les/c/f=85/59/0 sis=86 pruub=15.638615608s) [1] r=-1 lpr=86 pi=[58,86)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.417175293s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:24 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254084720845, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 12145485, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 155, "largest_seqno": 7696, "table_properties": {"data_size": 12117258, "index_size": 18256, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8901, "raw_key_size": 85693, "raw_average_key_size": 24, "raw_value_size": 12048659, "raw_average_value_size": 3392, "num_data_blocks": 806, "num_entries": 3552, "num_filter_entries": 3552, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253794, "oldest_key_time": 1759253794, "file_creation_time": 1759254084, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Sep 30 17:41:24 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 60600 microseconds, and 20539 cpu microseconds.
Sep 30 17:41:24 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:41:24.720891) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 12145485 bytes OK
Sep 30 17:41:24 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:41:24.720925) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Sep 30 17:41:24 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:41:24.722414) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Sep 30 17:41:24 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:41:24.722428) EVENT_LOG_v1 {"time_micros": 1759254084722424, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Sep 30 17:41:24 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:41:24.722445) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Sep 30 17:41:24 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 13734942, prev total WAL file size 13734942, number of live WAL files 2.
Sep 30 17:41:24 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 17:41:24 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:41:24.724581) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Sep 30 17:41:24 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Sep 30 17:41:24 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(11MB) 13(58KB) 8(1944B)]
Sep 30 17:41:24 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254084724697, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 12207376, "oldest_snapshot_seqno": -1}
Sep 30 17:41:24 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3380 keys, 12189637 bytes, temperature: kUnknown
Sep 30 17:41:24 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254084794326, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 12189637, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12161569, "index_size": 18486, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8517, "raw_key_size": 84774, "raw_average_key_size": 25, "raw_value_size": 12094339, "raw_average_value_size": 3578, "num_data_blocks": 816, "num_entries": 3380, "num_filter_entries": 3380, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759254084, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Sep 30 17:41:24 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 17:41:24 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:41:24.794644) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 12189637 bytes
Sep 30 17:41:24 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:41:24.796777) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 175.0 rd, 174.8 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(11.6, 0.0 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3666, records dropped: 286 output_compression: NoCompression
Sep 30 17:41:24 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:41:24.796818) EVENT_LOG_v1 {"time_micros": 1759254084796801, "job": 4, "event": "compaction_finished", "compaction_time_micros": 69738, "compaction_time_cpu_micros": 31871, "output_level": 6, "num_output_files": 1, "total_output_size": 12189637, "num_input_records": 3666, "num_output_records": 3380, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 17:41:24 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 17:41:24 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254084800818, "job": 4, "event": "table_file_deletion", "file_number": 19}
Sep 30 17:41:24 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 17:41:24 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254084800940, "job": 4, "event": "table_file_deletion", "file_number": 13}
Sep 30 17:41:24 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 17:41:24 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254084801010, "job": 4, "event": "table_file_deletion", "file_number": 8}
Sep 30 17:41:24 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:41:24.724495) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:41:24 compute-0 podman[104886]: 2025-09-30 17:41:24.829558356 +0000 UTC m=+0.056269461 container exec 9e97ff95260c5eee634ed5be7e6f6acdd2a5f44fb41d87718213241efcab83ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Sep 30 17:41:24 compute-0 podman[104886]: 2025-09-30 17:41:24.839707299 +0000 UTC m=+0.066418394 container exec_died 9e97ff95260c5eee634ed5be7e6f6acdd2a5f44fb41d87718213241efcab83ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:41:24 compute-0 ceph-mon[73755]: 12.14 scrub starts
Sep 30 17:41:24 compute-0 ceph-mon[73755]: 12.14 scrub ok
Sep 30 17:41:24 compute-0 ceph-mon[73755]: 11.17 scrub starts
Sep 30 17:41:24 compute-0 ceph-mon[73755]: 11.17 scrub ok
Sep 30 17:41:24 compute-0 ceph-mon[73755]: osdmap e85: 2 total, 2 up, 2 in
Sep 30 17:41:24 compute-0 ceph-mon[73755]: 11.18 scrub starts
Sep 30 17:41:24 compute-0 ceph-mon[73755]: 11.18 scrub ok
Sep 30 17:41:24 compute-0 ceph-mon[73755]: osdmap e86: 2 total, 2 up, 2 in
Sep 30 17:41:25 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Sep 30 17:41:25 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Sep 30 17:41:25 compute-0 podman[104950]: 2025-09-30 17:41:25.028200631 +0000 UTC m=+0.041633511 container exec e1cdc51276e16f535030605d2a18f629e9fe31bf73cf0a7411e18214526b68e5 (image=quay.io/ceph/haproxy:2.3, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha)
Sep 30 17:41:25 compute-0 podman[104950]: 2025-09-30 17:41:25.038576031 +0000 UTC m=+0.052008891 container exec_died e1cdc51276e16f535030605d2a18f629e9fe31bf73cf0a7411e18214526b68e5 (image=quay.io/ceph/haproxy:2.3, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha)
Sep 30 17:41:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v27: 353 pgs: 6 unknown, 347 active+clean; 457 KiB data, 98 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:41:25 compute-0 podman[105021]: 2025-09-30 17:41:25.228051898 +0000 UTC m=+0.051825746 container exec b5b34e9f9a1e5f8a3081bfbcbc3f8d4401b0873cf75447793762597a4ea89ff4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc, vcs-type=git, release=1793, com.redhat.component=keepalived-container, version=2.2.4, distribution-scope=public, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Sep 30 17:41:25 compute-0 podman[105021]: 2025-09-30 17:41:25.240652005 +0000 UTC m=+0.064425853 container exec_died b5b34e9f9a1e5f8a3081bfbcbc3f8d4401b0873cf75447793762597a4ea89ff4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, release=1793, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.buildah.version=1.28.2, version=2.2.4, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Sep 30 17:41:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:41:25.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:25 compute-0 podman[105086]: 2025-09-30 17:41:25.441723504 +0000 UTC m=+0.050879241 container exec 316307af09cabd347a032cf06e9544ea621ab1be46fe52d1c226ab05d1d9e331 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:41:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:25 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff3c004440 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:25 compute-0 podman[105086]: 2025-09-30 17:41:25.466798325 +0000 UTC m=+0.075954082 container exec_died 316307af09cabd347a032cf06e9544ea621ab1be46fe52d1c226ab05d1d9e331 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:41:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:25 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff30001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Sep 30 17:41:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e87 e87: 2 total, 2 up, 2 in
Sep 30 17:41:25 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e87: 2 total, 2 up, 2 in
Sep 30 17:41:25 compute-0 podman[105163]: 2025-09-30 17:41:25.672847212 +0000 UTC m=+0.048545321 container exec cb74c136339a4f8b8601bbbbeb0c8f6158501bcf17f6fc4369db4a5ec1b442a2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:41:25 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 87 pg[10.17( v 54'774 (0'0,54'774] local-lis/les=86/87 n=3 ec=58/48 lis/c=84/69 les/c/f=85/70/0 sis=86) [0] r=0 lpr=86 pi=[69,86)/1 crt=54'774 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:41:25 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 87 pg[10.f( v 54'774 (0'0,54'774] local-lis/les=86/87 n=6 ec=58/48 lis/c=84/69 les/c/f=85/70/0 sis=86) [0] r=0 lpr=86 pi=[69,86)/1 crt=54'774 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:41:25 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 87 pg[10.7( v 54'774 (0'0,54'774] local-lis/les=86/87 n=6 ec=58/48 lis/c=84/69 les/c/f=85/70/0 sis=86) [0] r=0 lpr=86 pi=[69,86)/1 crt=54'774 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:41:25 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 87 pg[10.1f( v 54'774 (0'0,54'774] local-lis/les=86/87 n=5 ec=58/48 lis/c=84/69 les/c/f=85/70/0 sis=86) [0] r=0 lpr=86 pi=[69,86)/1 crt=54'774 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:41:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:41:25.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:25 compute-0 podman[105163]: 2025-09-30 17:41:25.857506884 +0000 UTC m=+0.233204973 container exec_died cb74c136339a4f8b8601bbbbeb0c8f6158501bcf17f6fc4369db4a5ec1b442a2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:41:25 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Sep 30 17:41:25 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Sep 30 17:41:26 compute-0 sudo[103793]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:26 compute-0 podman[105296]: 2025-09-30 17:41:26.221550943 +0000 UTC m=+0.063716175 container exec 262a601a228f752d0c9dbf2b0c2d1b05fe66f88f5c16c2fc8adb324868709b53 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:41:26 compute-0 podman[105296]: 2025-09-30 17:41:26.256677764 +0000 UTC m=+0.098842976 container exec_died 262a601a228f752d0c9dbf2b0c2d1b05fe66f88f5c16c2fc8adb324868709b53 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:41:26 compute-0 sudo[104582]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:41:26 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:41:26 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:41:26 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:41:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 17:41:26 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:41:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 17:41:26 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Sep 30 17:41:26 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:41:26.356258) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 17:41:26 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Sep 30 17:41:26 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254086356319, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 301, "num_deletes": 251, "total_data_size": 142258, "memory_usage": 149848, "flush_reason": "Manual Compaction"}
Sep 30 17:41:26 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Sep 30 17:41:26 compute-0 sshd-session[103079]: Connection closed by 192.168.122.30 port 49150
Sep 30 17:41:26 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254086358578, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 142254, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7697, "largest_seqno": 7997, "table_properties": {"data_size": 140170, "index_size": 248, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5019, "raw_average_key_size": 17, "raw_value_size": 136083, "raw_average_value_size": 477, "num_data_blocks": 10, "num_entries": 285, "num_filter_entries": 285, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759254084, "oldest_key_time": 1759254084, "file_creation_time": 1759254086, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Sep 30 17:41:26 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 2339 microseconds, and 949 cpu microseconds.
Sep 30 17:41:26 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 17:41:26 compute-0 sshd-session[103070]: pam_unix(sshd:session): session closed for user zuul
Sep 30 17:41:26 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:41:26.358607) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 142254 bytes OK
Sep 30 17:41:26 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:41:26.358620) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Sep 30 17:41:26 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:41:26.360631) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Sep 30 17:41:26 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:41:26.360668) EVENT_LOG_v1 {"time_micros": 1759254086360664, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 17:41:26 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:41:26.360685) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 17:41:26 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 140056, prev total WAL file size 140345, number of live WAL files 2.
Sep 30 17:41:26 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 17:41:26 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:41:26.361015) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Sep 30 17:41:26 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 17:41:26 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(138KB)], [20(11MB)]
Sep 30 17:41:26 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254086361058, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 12331891, "oldest_snapshot_seqno": -1}
Sep 30 17:41:26 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Sep 30 17:41:26 compute-0 systemd[1]: session-39.scope: Consumed 8.218s CPU time.
Sep 30 17:41:26 compute-0 systemd-logind[811]: Session 39 logged out. Waiting for processes to exit.
Sep 30 17:41:26 compute-0 systemd-logind[811]: Removed session 39.
Sep 30 17:41:26 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 17:41:26 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3149 keys, 11094711 bytes, temperature: kUnknown
Sep 30 17:41:26 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254086424918, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 11094711, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11069018, "index_size": 16687, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7941, "raw_key_size": 81051, "raw_average_key_size": 25, "raw_value_size": 11006495, "raw_average_value_size": 3495, "num_data_blocks": 730, "num_entries": 3149, "num_filter_entries": 3149, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759254086, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Sep 30 17:41:26 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 17:41:26 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:41:26.425141) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 11094711 bytes
Sep 30 17:41:26 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:41:26.426511) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 192.9 rd, 173.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 11.6 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(164.7) write-amplify(78.0) OK, records in: 3665, records dropped: 516 output_compression: NoCompression
Sep 30 17:41:26 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:41:26.426530) EVENT_LOG_v1 {"time_micros": 1759254086426521, "job": 6, "event": "compaction_finished", "compaction_time_micros": 63939, "compaction_time_cpu_micros": 22853, "output_level": 6, "num_output_files": 1, "total_output_size": 11094711, "num_input_records": 3665, "num_output_records": 3149, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 17:41:26 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 17:41:26 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254086426773, "job": 6, "event": "table_file_deletion", "file_number": 22}
Sep 30 17:41:26 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 17:41:26 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254086428388, "job": 6, "event": "table_file_deletion", "file_number": 20}
Sep 30 17:41:26 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:41:26.360936) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:41:26 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:41:26.428463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:41:26 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:41:26.428489) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:41:26 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:41:26.428511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:41:26 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:41:26.428534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:41:26 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:41:26.428557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:41:26 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 17:41:26 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:41:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 17:41:26 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:41:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:41:26 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:41:26 compute-0 sudo[105338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:41:26 compute-0 sudo[105338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:26 compute-0 sudo[105338]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:26 compute-0 sudo[105363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 17:41:26 compute-0 sudo[105363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:26 compute-0 ceph-mon[73755]: 9.15 scrub starts
Sep 30 17:41:26 compute-0 ceph-mon[73755]: 9.15 scrub ok
Sep 30 17:41:26 compute-0 ceph-mon[73755]: pgmap v27: 353 pgs: 6 unknown, 347 active+clean; 457 KiB data, 98 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:41:26 compute-0 ceph-mon[73755]: 9.1a scrub starts
Sep 30 17:41:26 compute-0 ceph-mon[73755]: 9.1a scrub ok
Sep 30 17:41:26 compute-0 ceph-mon[73755]: osdmap e87: 2 total, 2 up, 2 in
Sep 30 17:41:26 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:26 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:26 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:41:26 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:41:26 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:26 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:26 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:41:26 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:41:26 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:41:26 compute-0 podman[105430]: 2025-09-30 17:41:26.923257185 +0000 UTC m=+0.041144469 container create 237c87dbef640772dc9d41e42e7df7373e3cbc22c69e0d4dd34c5c833a23cc83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:41:26 compute-0 systemd[1]: Started libpod-conmon-237c87dbef640772dc9d41e42e7df7373e3cbc22c69e0d4dd34c5c833a23cc83.scope.
Sep 30 17:41:26 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Sep 30 17:41:26 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:41:26 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Sep 30 17:41:27 compute-0 podman[105430]: 2025-09-30 17:41:26.906396157 +0000 UTC m=+0.024283461 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:41:27 compute-0 podman[105430]: 2025-09-30 17:41:27.002215444 +0000 UTC m=+0.120102748 container init 237c87dbef640772dc9d41e42e7df7373e3cbc22c69e0d4dd34c5c833a23cc83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Sep 30 17:41:27 compute-0 podman[105430]: 2025-09-30 17:41:27.010214841 +0000 UTC m=+0.128102145 container start 237c87dbef640772dc9d41e42e7df7373e3cbc22c69e0d4dd34c5c833a23cc83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_easley, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Sep 30 17:41:27 compute-0 magical_easley[105447]: 167 167
Sep 30 17:41:27 compute-0 podman[105430]: 2025-09-30 17:41:27.015606771 +0000 UTC m=+0.133494075 container attach 237c87dbef640772dc9d41e42e7df7373e3cbc22c69e0d4dd34c5c833a23cc83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_easley, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:41:27 compute-0 systemd[1]: libpod-237c87dbef640772dc9d41e42e7df7373e3cbc22c69e0d4dd34c5c833a23cc83.scope: Deactivated successfully.
Sep 30 17:41:27 compute-0 podman[105430]: 2025-09-30 17:41:27.016924596 +0000 UTC m=+0.134812130 container died 237c87dbef640772dc9d41e42e7df7373e3cbc22c69e0d4dd34c5c833a23cc83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_easley, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:41:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf215d192ab23f81bf4eb3135bdaf7f200f1d212ed807b6ed1a197899fa7122b-merged.mount: Deactivated successfully.
Sep 30 17:41:27 compute-0 podman[105430]: 2025-09-30 17:41:27.060492436 +0000 UTC m=+0.178379720 container remove 237c87dbef640772dc9d41e42e7df7373e3cbc22c69e0d4dd34c5c833a23cc83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_easley, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:41:27 compute-0 systemd[1]: libpod-conmon-237c87dbef640772dc9d41e42e7df7373e3cbc22c69e0d4dd34c5c833a23cc83.scope: Deactivated successfully.
Sep 30 17:41:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v29: 353 pgs: 6 unknown, 347 active+clean; 457 KiB data, 98 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:41:27 compute-0 podman[105473]: 2025-09-30 17:41:27.321582893 +0000 UTC m=+0.098403005 container create 495f97a946c6efad66bdc3f3f1388b9ba3949e88c8ff65dcae7ca98c63547b5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_cartwright, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 17:41:27 compute-0 podman[105473]: 2025-09-30 17:41:27.262229172 +0000 UTC m=+0.039049314 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:41:27 compute-0 systemd[1]: Started libpod-conmon-495f97a946c6efad66bdc3f3f1388b9ba3949e88c8ff65dcae7ca98c63547b5d.scope.
Sep 30 17:41:27 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:41:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c1bee2f692fa08f867f27acb06f481e52be456067908ae6a6c37c6fa08ac6a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:41:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c1bee2f692fa08f867f27acb06f481e52be456067908ae6a6c37c6fa08ac6a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:41:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c1bee2f692fa08f867f27acb06f481e52be456067908ae6a6c37c6fa08ac6a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:41:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c1bee2f692fa08f867f27acb06f481e52be456067908ae6a6c37c6fa08ac6a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:41:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c1bee2f692fa08f867f27acb06f481e52be456067908ae6a6c37c6fa08ac6a5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:41:27 compute-0 podman[105473]: 2025-09-30 17:41:27.403872268 +0000 UTC m=+0.180692410 container init 495f97a946c6efad66bdc3f3f1388b9ba3949e88c8ff65dcae7ca98c63547b5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:41:27 compute-0 podman[105473]: 2025-09-30 17:41:27.413845367 +0000 UTC m=+0.190665489 container start 495f97a946c6efad66bdc3f3f1388b9ba3949e88c8ff65dcae7ca98c63547b5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_cartwright, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 17:41:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:41:27.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:27 compute-0 podman[105473]: 2025-09-30 17:41:27.417063921 +0000 UTC m=+0.193884073 container attach 495f97a946c6efad66bdc3f3f1388b9ba3949e88c8ff65dcae7ca98c63547b5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_cartwright, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:41:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:27 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff3c004440 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:27 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff14003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:27 compute-0 ceph-mon[73755]: 11.16 scrub starts
Sep 30 17:41:27 compute-0 ceph-mon[73755]: 11.16 scrub ok
Sep 30 17:41:27 compute-0 ceph-mon[73755]: 11.19 scrub starts
Sep 30 17:41:27 compute-0 ceph-mon[73755]: 11.19 scrub ok
Sep 30 17:41:27 compute-0 happy_cartwright[105490]: --> passed data devices: 0 physical, 1 LVM
Sep 30 17:41:27 compute-0 happy_cartwright[105490]: --> All data devices are unavailable
Sep 30 17:41:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:41:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:41:27.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:41:27 compute-0 systemd[1]: libpod-495f97a946c6efad66bdc3f3f1388b9ba3949e88c8ff65dcae7ca98c63547b5d.scope: Deactivated successfully.
Sep 30 17:41:27 compute-0 podman[105473]: 2025-09-30 17:41:27.749895079 +0000 UTC m=+0.526715201 container died 495f97a946c6efad66bdc3f3f1388b9ba3949e88c8ff65dcae7ca98c63547b5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_cartwright, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:41:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c1bee2f692fa08f867f27acb06f481e52be456067908ae6a6c37c6fa08ac6a5-merged.mount: Deactivated successfully.
Sep 30 17:41:27 compute-0 podman[105473]: 2025-09-30 17:41:27.796465968 +0000 UTC m=+0.573286090 container remove 495f97a946c6efad66bdc3f3f1388b9ba3949e88c8ff65dcae7ca98c63547b5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_cartwright, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:41:27 compute-0 systemd[1]: libpod-conmon-495f97a946c6efad66bdc3f3f1388b9ba3949e88c8ff65dcae7ca98c63547b5d.scope: Deactivated successfully.
Sep 30 17:41:27 compute-0 sudo[105363]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:27 compute-0 sudo[105518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:41:27 compute-0 sudo[105518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:27 compute-0 sudo[105518]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:27 compute-0 sudo[105543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 17:41:27 compute-0 sudo[105543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:27 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Sep 30 17:41:27 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Sep 30 17:41:28 compute-0 podman[105608]: 2025-09-30 17:41:28.296528126 +0000 UTC m=+0.034528557 container create 24b962376be0c550236a6060f7eccd2733e610babb7a5ab80dc46cbc99867e60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Sep 30 17:41:28 compute-0 systemd[1]: Started libpod-conmon-24b962376be0c550236a6060f7eccd2733e610babb7a5ab80dc46cbc99867e60.scope.
Sep 30 17:41:28 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:41:28 compute-0 podman[105608]: 2025-09-30 17:41:28.370498806 +0000 UTC m=+0.108499237 container init 24b962376be0c550236a6060f7eccd2733e610babb7a5ab80dc46cbc99867e60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_elgamal, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:41:28 compute-0 podman[105608]: 2025-09-30 17:41:28.375428414 +0000 UTC m=+0.113428845 container start 24b962376be0c550236a6060f7eccd2733e610babb7a5ab80dc46cbc99867e60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_elgamal, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Sep 30 17:41:28 compute-0 podman[105608]: 2025-09-30 17:41:28.281382443 +0000 UTC m=+0.019382894 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:41:28 compute-0 podman[105608]: 2025-09-30 17:41:28.378434542 +0000 UTC m=+0.116434973 container attach 24b962376be0c550236a6060f7eccd2733e610babb7a5ab80dc46cbc99867e60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_elgamal, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Sep 30 17:41:28 compute-0 festive_elgamal[105625]: 167 167
Sep 30 17:41:28 compute-0 systemd[1]: libpod-24b962376be0c550236a6060f7eccd2733e610babb7a5ab80dc46cbc99867e60.scope: Deactivated successfully.
Sep 30 17:41:28 compute-0 podman[105608]: 2025-09-30 17:41:28.379285644 +0000 UTC m=+0.117286075 container died 24b962376be0c550236a6060f7eccd2733e610babb7a5ab80dc46cbc99867e60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_elgamal, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:41:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bdabc06e5c3d721eaf290fefd462c339826b0c55299f6f0b9d4840f28e0bf11-merged.mount: Deactivated successfully.
Sep 30 17:41:28 compute-0 podman[105608]: 2025-09-30 17:41:28.422118396 +0000 UTC m=+0.160118827 container remove 24b962376be0c550236a6060f7eccd2733e610babb7a5ab80dc46cbc99867e60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_elgamal, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Sep 30 17:41:28 compute-0 systemd[1]: libpod-conmon-24b962376be0c550236a6060f7eccd2733e610babb7a5ab80dc46cbc99867e60.scope: Deactivated successfully.
Sep 30 17:41:28 compute-0 podman[105649]: 2025-09-30 17:41:28.617877327 +0000 UTC m=+0.052914105 container create 79255a016e3fcaf287018db1ad96ea79f4244ad4712003fac0aee6716555d055 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_curran, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:41:28 compute-0 systemd[1]: Started libpod-conmon-79255a016e3fcaf287018db1ad96ea79f4244ad4712003fac0aee6716555d055.scope.
Sep 30 17:41:28 compute-0 podman[105649]: 2025-09-30 17:41:28.593672628 +0000 UTC m=+0.028709426 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:41:28 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:41:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97da0c8531168bb259cc461f54dc31d018bac761743ec19a77b7f50b35dfd0b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:41:28 compute-0 ceph-mon[73755]: 9.16 scrub starts
Sep 30 17:41:28 compute-0 ceph-mon[73755]: 9.16 scrub ok
Sep 30 17:41:28 compute-0 ceph-mon[73755]: pgmap v29: 353 pgs: 6 unknown, 347 active+clean; 457 KiB data, 98 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:41:28 compute-0 ceph-mon[73755]: 9.1b deep-scrub starts
Sep 30 17:41:28 compute-0 ceph-mon[73755]: 9.1b deep-scrub ok
Sep 30 17:41:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97da0c8531168bb259cc461f54dc31d018bac761743ec19a77b7f50b35dfd0b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:41:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97da0c8531168bb259cc461f54dc31d018bac761743ec19a77b7f50b35dfd0b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:41:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97da0c8531168bb259cc461f54dc31d018bac761743ec19a77b7f50b35dfd0b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:41:28 compute-0 podman[105649]: 2025-09-30 17:41:28.725474039 +0000 UTC m=+0.160510827 container init 79255a016e3fcaf287018db1ad96ea79f4244ad4712003fac0aee6716555d055 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:41:28 compute-0 podman[105649]: 2025-09-30 17:41:28.732480581 +0000 UTC m=+0.167517339 container start 79255a016e3fcaf287018db1ad96ea79f4244ad4712003fac0aee6716555d055 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_curran, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Sep 30 17:41:28 compute-0 podman[105649]: 2025-09-30 17:41:28.735583841 +0000 UTC m=+0.170620619 container attach 79255a016e3fcaf287018db1ad96ea79f4244ad4712003fac0aee6716555d055 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_curran, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Sep 30 17:41:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:41:28] "GET /metrics HTTP/1.1" 200 46539 "" "Prometheus/2.51.0"
Sep 30 17:41:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:41:28] "GET /metrics HTTP/1.1" 200 46539 "" "Prometheus/2.51.0"
Sep 30 17:41:28 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Sep 30 17:41:28 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Sep 30 17:41:28 compute-0 dreamy_curran[105666]: {
Sep 30 17:41:28 compute-0 dreamy_curran[105666]:     "0": [
Sep 30 17:41:28 compute-0 dreamy_curran[105666]:         {
Sep 30 17:41:28 compute-0 dreamy_curran[105666]:             "devices": [
Sep 30 17:41:28 compute-0 dreamy_curran[105666]:                 "/dev/loop3"
Sep 30 17:41:28 compute-0 dreamy_curran[105666]:             ],
Sep 30 17:41:28 compute-0 dreamy_curran[105666]:             "lv_name": "ceph_lv0",
Sep 30 17:41:28 compute-0 dreamy_curran[105666]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:41:28 compute-0 dreamy_curran[105666]:             "lv_size": "21470642176",
Sep 30 17:41:28 compute-0 dreamy_curran[105666]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 17:41:28 compute-0 dreamy_curran[105666]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:41:28 compute-0 dreamy_curran[105666]:             "name": "ceph_lv0",
Sep 30 17:41:28 compute-0 dreamy_curran[105666]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:41:28 compute-0 dreamy_curran[105666]:             "tags": {
Sep 30 17:41:28 compute-0 dreamy_curran[105666]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:41:28 compute-0 dreamy_curran[105666]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:41:28 compute-0 dreamy_curran[105666]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 17:41:28 compute-0 dreamy_curran[105666]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 17:41:28 compute-0 dreamy_curran[105666]:                 "ceph.cluster_name": "ceph",
Sep 30 17:41:28 compute-0 dreamy_curran[105666]:                 "ceph.crush_device_class": "",
Sep 30 17:41:28 compute-0 dreamy_curran[105666]:                 "ceph.encrypted": "0",
Sep 30 17:41:28 compute-0 dreamy_curran[105666]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 17:41:28 compute-0 dreamy_curran[105666]:                 "ceph.osd_id": "0",
Sep 30 17:41:28 compute-0 dreamy_curran[105666]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 17:41:28 compute-0 dreamy_curran[105666]:                 "ceph.type": "block",
Sep 30 17:41:28 compute-0 dreamy_curran[105666]:                 "ceph.vdo": "0",
Sep 30 17:41:28 compute-0 dreamy_curran[105666]:                 "ceph.with_tpm": "0"
Sep 30 17:41:28 compute-0 dreamy_curran[105666]:             },
Sep 30 17:41:28 compute-0 dreamy_curran[105666]:             "type": "block",
Sep 30 17:41:28 compute-0 dreamy_curran[105666]:             "vg_name": "ceph_vg0"
Sep 30 17:41:28 compute-0 dreamy_curran[105666]:         }
Sep 30 17:41:28 compute-0 dreamy_curran[105666]:     ]
Sep 30 17:41:28 compute-0 dreamy_curran[105666]: }
Sep 30 17:41:29 compute-0 systemd[1]: libpod-79255a016e3fcaf287018db1ad96ea79f4244ad4712003fac0aee6716555d055.scope: Deactivated successfully.
Sep 30 17:41:29 compute-0 podman[105649]: 2025-09-30 17:41:29.014655464 +0000 UTC m=+0.449692222 container died 79255a016e3fcaf287018db1ad96ea79f4244ad4712003fac0aee6716555d055 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 17:41:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-97da0c8531168bb259cc461f54dc31d018bac761743ec19a77b7f50b35dfd0b4-merged.mount: Deactivated successfully.
Sep 30 17:41:29 compute-0 podman[105649]: 2025-09-30 17:41:29.054783316 +0000 UTC m=+0.489820074 container remove 79255a016e3fcaf287018db1ad96ea79f4244ad4712003fac0aee6716555d055 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_curran, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:41:29 compute-0 systemd[1]: libpod-conmon-79255a016e3fcaf287018db1ad96ea79f4244ad4712003fac0aee6716555d055.scope: Deactivated successfully.
Sep 30 17:41:29 compute-0 sudo[105543]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:29 compute-0 sudo[105687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:41:29 compute-0 sudo[105687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:29 compute-0 sudo[105687]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v30: 353 pgs: 353 active+clean; 457 KiB data, 98 MiB used, 40 GiB / 40 GiB avail; 23 KiB/s rd, 520 B/s wr, 44 op/s; 223 B/s, 8 objects/s recovering
Sep 30 17:41:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Sep 30 17:41:29 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Sep 30 17:41:29 compute-0 sudo[105712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 17:41:29 compute-0 sudo[105712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:41:29.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:29 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44009ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:29 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff30001d30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:29 compute-0 podman[105778]: 2025-09-30 17:41:29.574023901 +0000 UTC m=+0.052878123 container create d8b587a4bd89e1ea4335360bc03f9ef17227f079819757d6e8b6064f0a774221 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_mayer, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:41:29 compute-0 systemd[1]: Started libpod-conmon-d8b587a4bd89e1ea4335360bc03f9ef17227f079819757d6e8b6064f0a774221.scope.
Sep 30 17:41:29 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:41:29 compute-0 podman[105778]: 2025-09-30 17:41:29.630287082 +0000 UTC m=+0.109141354 container init d8b587a4bd89e1ea4335360bc03f9ef17227f079819757d6e8b6064f0a774221 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_mayer, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:41:29 compute-0 podman[105778]: 2025-09-30 17:41:29.638668439 +0000 UTC m=+0.117522681 container start d8b587a4bd89e1ea4335360bc03f9ef17227f079819757d6e8b6064f0a774221 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_mayer, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Sep 30 17:41:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:41:29 compute-0 podman[105778]: 2025-09-30 17:41:29.54660132 +0000 UTC m=+0.025455622 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:41:29 compute-0 podman[105778]: 2025-09-30 17:41:29.642258842 +0000 UTC m=+0.121113074 container attach d8b587a4bd89e1ea4335360bc03f9ef17227f079819757d6e8b6064f0a774221 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_mayer, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Sep 30 17:41:29 compute-0 determined_mayer[105795]: 167 167
Sep 30 17:41:29 compute-0 systemd[1]: libpod-d8b587a4bd89e1ea4335360bc03f9ef17227f079819757d6e8b6064f0a774221.scope: Deactivated successfully.
Sep 30 17:41:29 compute-0 podman[105778]: 2025-09-30 17:41:29.64408419 +0000 UTC m=+0.122938412 container died d8b587a4bd89e1ea4335360bc03f9ef17227f079819757d6e8b6064f0a774221 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_mayer, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:41:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b5a9a7b40483030481ac165f0cd6846a527ae19f9943df1f91de69d44d3989e-merged.mount: Deactivated successfully.
Sep 30 17:41:29 compute-0 podman[105778]: 2025-09-30 17:41:29.67917182 +0000 UTC m=+0.158026052 container remove d8b587a4bd89e1ea4335360bc03f9ef17227f079819757d6e8b6064f0a774221 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_mayer, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 17:41:29 compute-0 systemd[1]: libpod-conmon-d8b587a4bd89e1ea4335360bc03f9ef17227f079819757d6e8b6064f0a774221.scope: Deactivated successfully.
Sep 30 17:41:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Sep 30 17:41:29 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Sep 30 17:41:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e88 e88: 2 total, 2 up, 2 in
Sep 30 17:41:29 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e88: 2 total, 2 up, 2 in
Sep 30 17:41:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:41:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:41:29.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:41:29 compute-0 ceph-mon[73755]: 11.13 scrub starts
Sep 30 17:41:29 compute-0 ceph-mon[73755]: 11.13 scrub ok
Sep 30 17:41:29 compute-0 ceph-mon[73755]: 8.1a deep-scrub starts
Sep 30 17:41:29 compute-0 ceph-mon[73755]: 8.1a deep-scrub ok
Sep 30 17:41:29 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Sep 30 17:41:29 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 88 pg[10.1a( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=68/68 les/c/f=69/69/0 sis=88) [0] r=0 lpr=88 pi=[68,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:29 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 88 pg[10.a( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=68/68 les/c/f=69/69/0 sis=88) [0] r=0 lpr=88 pi=[68,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:29 compute-0 podman[105819]: 2025-09-30 17:41:29.820525929 +0000 UTC m=+0.035796620 container create eb4d20afbdc72cac756b386d81981aedef8f9035aab515e482f3637784c5b8be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_faraday, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:41:29 compute-0 systemd[1]: Started libpod-conmon-eb4d20afbdc72cac756b386d81981aedef8f9035aab515e482f3637784c5b8be.scope.
Sep 30 17:41:29 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:41:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85daedc20575f8c02329a9ad01f69c8abc1fa081ab140ca3c1fdb7c7f59e7948/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:41:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85daedc20575f8c02329a9ad01f69c8abc1fa081ab140ca3c1fdb7c7f59e7948/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:41:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85daedc20575f8c02329a9ad01f69c8abc1fa081ab140ca3c1fdb7c7f59e7948/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:41:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85daedc20575f8c02329a9ad01f69c8abc1fa081ab140ca3c1fdb7c7f59e7948/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:41:29 compute-0 podman[105819]: 2025-09-30 17:41:29.879583672 +0000 UTC m=+0.094854383 container init eb4d20afbdc72cac756b386d81981aedef8f9035aab515e482f3637784c5b8be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_faraday, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Sep 30 17:41:29 compute-0 podman[105819]: 2025-09-30 17:41:29.892469246 +0000 UTC m=+0.107739937 container start eb4d20afbdc72cac756b386d81981aedef8f9035aab515e482f3637784c5b8be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_faraday, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:41:29 compute-0 podman[105819]: 2025-09-30 17:41:29.895248978 +0000 UTC m=+0.110519669 container attach eb4d20afbdc72cac756b386d81981aedef8f9035aab515e482f3637784c5b8be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_faraday, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:41:29 compute-0 podman[105819]: 2025-09-30 17:41:29.805673213 +0000 UTC m=+0.020943924 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:41:29 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 11.12 deep-scrub starts
Sep 30 17:41:29 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 11.12 deep-scrub ok
Sep 30 17:41:30 compute-0 lvm[105909]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 17:41:30 compute-0 lvm[105909]: VG ceph_vg0 finished
Sep 30 17:41:30 compute-0 peaceful_faraday[105835]: {}
Sep 30 17:41:30 compute-0 systemd[1]: libpod-eb4d20afbdc72cac756b386d81981aedef8f9035aab515e482f3637784c5b8be.scope: Deactivated successfully.
Sep 30 17:41:30 compute-0 systemd[1]: libpod-eb4d20afbdc72cac756b386d81981aedef8f9035aab515e482f3637784c5b8be.scope: Consumed 1.018s CPU time.
Sep 30 17:41:30 compute-0 podman[105912]: 2025-09-30 17:41:30.58230084 +0000 UTC m=+0.023025199 container died eb4d20afbdc72cac756b386d81981aedef8f9035aab515e482f3637784c5b8be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_faraday, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Sep 30 17:41:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-85daedc20575f8c02329a9ad01f69c8abc1fa081ab140ca3c1fdb7c7f59e7948-merged.mount: Deactivated successfully.
Sep 30 17:41:30 compute-0 podman[105912]: 2025-09-30 17:41:30.62505371 +0000 UTC m=+0.065778079 container remove eb4d20afbdc72cac756b386d81981aedef8f9035aab515e482f3637784c5b8be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Sep 30 17:41:30 compute-0 systemd[1]: libpod-conmon-eb4d20afbdc72cac756b386d81981aedef8f9035aab515e482f3637784c5b8be.scope: Deactivated successfully.
Sep 30 17:41:30 compute-0 sudo[105712]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:41:30 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:41:30 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Sep 30 17:41:30 compute-0 ceph-mon[73755]: 9.17 scrub starts
Sep 30 17:41:30 compute-0 ceph-mon[73755]: 9.17 scrub ok
Sep 30 17:41:30 compute-0 ceph-mon[73755]: pgmap v30: 353 pgs: 353 active+clean; 457 KiB data, 98 MiB used, 40 GiB / 40 GiB avail; 23 KiB/s rd, 520 B/s wr, 44 op/s; 223 B/s, 8 objects/s recovering
Sep 30 17:41:30 compute-0 ceph-mon[73755]: 9.19 scrub starts
Sep 30 17:41:30 compute-0 ceph-mon[73755]: 9.19 scrub ok
Sep 30 17:41:30 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Sep 30 17:41:30 compute-0 ceph-mon[73755]: osdmap e88: 2 total, 2 up, 2 in
Sep 30 17:41:30 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:30 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:41:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e89 e89: 2 total, 2 up, 2 in
Sep 30 17:41:30 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e89: 2 total, 2 up, 2 in
Sep 30 17:41:30 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 89 pg[10.a( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=68/68 les/c/f=69/69/0 sis=89) [0]/[1] r=-1 lpr=89 pi=[68,89)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:30 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 89 pg[10.a( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=68/68 les/c/f=69/69/0 sis=89) [0]/[1] r=-1 lpr=89 pi=[68,89)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:30 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 89 pg[10.1a( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=68/68 les/c/f=69/69/0 sis=89) [0]/[1] r=-1 lpr=89 pi=[68,89)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:30 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 89 pg[10.1a( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=68/68 les/c/f=69/69/0 sis=89) [0]/[1] r=-1 lpr=89 pi=[68,89)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:30 compute-0 sudo[105927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 17:41:30 compute-0 sudo[105927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:30 compute-0 sudo[105927]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:30 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Sep 30 17:41:30 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Sep 30 17:41:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:41:31.127Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.003759824s
Sep 30 17:41:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v33: 353 pgs: 353 active+clean; 457 KiB data, 98 MiB used, 40 GiB / 40 GiB avail; 22 KiB/s rd, 511 B/s wr, 43 op/s; 219 B/s, 8 objects/s recovering
Sep 30 17:41:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Sep 30 17:41:31 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Sep 30 17:41:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:41:31.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:31 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff3c004440 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:31 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff14003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:41:31.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Sep 30 17:41:31 compute-0 ceph-mon[73755]: 11.12 deep-scrub starts
Sep 30 17:41:31 compute-0 ceph-mon[73755]: 11.12 deep-scrub ok
Sep 30 17:41:31 compute-0 ceph-mon[73755]: 9.1e scrub starts
Sep 30 17:41:31 compute-0 ceph-mon[73755]: 9.1e scrub ok
Sep 30 17:41:31 compute-0 ceph-mon[73755]: osdmap e89: 2 total, 2 up, 2 in
Sep 30 17:41:31 compute-0 ceph-mon[73755]: pgmap v33: 353 pgs: 353 active+clean; 457 KiB data, 98 MiB used, 40 GiB / 40 GiB avail; 22 KiB/s rd, 511 B/s wr, 43 op/s; 219 B/s, 8 objects/s recovering
Sep 30 17:41:31 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Sep 30 17:41:31 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Sep 30 17:41:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e90 e90: 2 total, 2 up, 2 in
Sep 30 17:41:31 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e90: 2 total, 2 up, 2 in
Sep 30 17:41:31 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 90 pg[10.b( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=69/69 les/c/f=70/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:31 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 90 pg[10.1b( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=69/69 les/c/f=70/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Sep 30 17:41:32 compute-0 ceph-mon[73755]: 9.10 scrub starts
Sep 30 17:41:32 compute-0 ceph-mon[73755]: 9.10 scrub ok
Sep 30 17:41:32 compute-0 ceph-mon[73755]: 8.1f deep-scrub starts
Sep 30 17:41:32 compute-0 ceph-mon[73755]: 8.1f deep-scrub ok
Sep 30 17:41:32 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Sep 30 17:41:32 compute-0 ceph-mon[73755]: osdmap e90: 2 total, 2 up, 2 in
Sep 30 17:41:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e91 e91: 2 total, 2 up, 2 in
Sep 30 17:41:32 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e91: 2 total, 2 up, 2 in
Sep 30 17:41:32 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 91 pg[10.a( v 54'774 (0'0,54'774] local-lis/les=0/0 n=6 ec=58/48 lis/c=89/68 les/c/f=90/69/0 sis=91) [0] r=0 lpr=91 pi=[68,91)/1 luod=0'0 crt=54'774 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:32 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 91 pg[10.a( v 54'774 (0'0,54'774] local-lis/les=0/0 n=6 ec=58/48 lis/c=89/68 les/c/f=90/69/0 sis=91) [0] r=0 lpr=91 pi=[68,91)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:32 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 91 pg[10.b( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=69/69 les/c/f=70/70/0 sis=91) [0]/[1] r=-1 lpr=91 pi=[69,91)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:32 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 91 pg[10.b( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=69/69 les/c/f=70/70/0 sis=91) [0]/[1] r=-1 lpr=91 pi=[69,91)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:32 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 91 pg[10.1a( v 54'774 (0'0,54'774] local-lis/les=0/0 n=5 ec=58/48 lis/c=89/68 les/c/f=90/69/0 sis=91) [0] r=0 lpr=91 pi=[68,91)/1 luod=0'0 crt=54'774 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:32 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 91 pg[10.1a( v 54'774 (0'0,54'774] local-lis/les=0/0 n=5 ec=58/48 lis/c=89/68 les/c/f=90/69/0 sis=91) [0] r=0 lpr=91 pi=[68,91)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:32 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 91 pg[10.1b( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=69/69 les/c/f=70/70/0 sis=91) [0]/[1] r=-1 lpr=91 pi=[69,91)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:32 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 91 pg[10.1b( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=69/69 les/c/f=70/70/0 sis=91) [0]/[1] r=-1 lpr=91 pi=[69,91)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v36: 353 pgs: 353 active+clean; 457 KiB data, 98 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:41:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Sep 30 17:41:33 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Sep 30 17:41:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:41:33.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:33 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44009ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:33 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff30001d30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:41:33.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Sep 30 17:41:34 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Sep 30 17:41:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e92 e92: 2 total, 2 up, 2 in
Sep 30 17:41:34 compute-0 ceph-mon[73755]: 9.1f scrub starts
Sep 30 17:41:34 compute-0 ceph-mon[73755]: 9.1f scrub ok
Sep 30 17:41:34 compute-0 ceph-mon[73755]: osdmap e91: 2 total, 2 up, 2 in
Sep 30 17:41:34 compute-0 ceph-mon[73755]: pgmap v36: 353 pgs: 353 active+clean; 457 KiB data, 98 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:41:34 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Sep 30 17:41:34 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e92: 2 total, 2 up, 2 in
Sep 30 17:41:34 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 92 pg[10.a( v 54'774 (0'0,54'774] local-lis/les=91/92 n=6 ec=58/48 lis/c=89/68 les/c/f=90/69/0 sis=91) [0] r=0 lpr=91 pi=[68,91)/1 crt=54'774 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:41:34 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 92 pg[10.1a( v 54'774 (0'0,54'774] local-lis/les=91/92 n=5 ec=58/48 lis/c=89/68 les/c/f=90/69/0 sis=91) [0] r=0 lpr=91 pi=[68,91)/1 crt=54'774 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:41:34 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 92 pg[10.c( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=75/75 les/c/f=76/76/0 sis=92) [0] r=0 lpr=92 pi=[75,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:34 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 92 pg[10.1c( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=75/75 les/c/f=76/76/0 sis=92) [0] r=0 lpr=92 pi=[75,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:41:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Sep 30 17:41:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e93 e93: 2 total, 2 up, 2 in
Sep 30 17:41:34 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e93: 2 total, 2 up, 2 in
Sep 30 17:41:34 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 93 pg[10.c( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=75/75 les/c/f=76/76/0 sis=93) [0]/[1] r=-1 lpr=93 pi=[75,93)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:34 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 93 pg[10.c( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=75/75 les/c/f=76/76/0 sis=93) [0]/[1] r=-1 lpr=93 pi=[75,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:34 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 93 pg[10.b( v 54'774 (0'0,54'774] local-lis/les=0/0 n=6 ec=58/48 lis/c=91/69 les/c/f=92/70/0 sis=93) [0] r=0 lpr=93 pi=[69,93)/1 luod=0'0 crt=54'774 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:34 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 93 pg[10.b( v 54'774 (0'0,54'774] local-lis/les=0/0 n=6 ec=58/48 lis/c=91/69 les/c/f=92/70/0 sis=93) [0] r=0 lpr=93 pi=[69,93)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:34 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 93 pg[10.1b( v 54'774 (0'0,54'774] local-lis/les=0/0 n=5 ec=58/48 lis/c=91/69 les/c/f=92/70/0 sis=93) [0] r=0 lpr=93 pi=[69,93)/1 luod=0'0 crt=54'774 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:34 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 93 pg[10.1b( v 54'774 (0'0,54'774] local-lis/les=0/0 n=5 ec=58/48 lis/c=91/69 les/c/f=92/70/0 sis=93) [0] r=0 lpr=93 pi=[69,93)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:34 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 93 pg[10.1c( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=75/75 les/c/f=76/76/0 sis=93) [0]/[1] r=-1 lpr=93 pi=[75,93)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:34 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 93 pg[10.1c( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=75/75 les/c/f=76/76/0 sis=93) [0]/[1] r=-1 lpr=93 pi=[75,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:34 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.f scrub starts
Sep 30 17:41:34 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.f scrub ok
Sep 30 17:41:35 compute-0 ceph-mon[73755]: 8.1e scrub starts
Sep 30 17:41:35 compute-0 ceph-mon[73755]: 8.1e scrub ok
Sep 30 17:41:35 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Sep 30 17:41:35 compute-0 ceph-mon[73755]: osdmap e92: 2 total, 2 up, 2 in
Sep 30 17:41:35 compute-0 ceph-mon[73755]: 9.1c scrub starts
Sep 30 17:41:35 compute-0 ceph-mon[73755]: 9.1c scrub ok
Sep 30 17:41:35 compute-0 ceph-mon[73755]: osdmap e93: 2 total, 2 up, 2 in
Sep 30 17:41:35 compute-0 ceph-mon[73755]: 10.f scrub starts
Sep 30 17:41:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v39: 353 pgs: 2 remapped+peering, 2 peering, 349 active+clean; 457 KiB data, 98 MiB used, 40 GiB / 40 GiB avail; 54 B/s, 3 objects/s recovering
Sep 30 17:41:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:41:35.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:35 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff30001d30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:35 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff30001d30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Sep 30 17:41:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e94 e94: 2 total, 2 up, 2 in
Sep 30 17:41:35 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e94: 2 total, 2 up, 2 in
Sep 30 17:41:35 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 94 pg[10.b( v 54'774 (0'0,54'774] local-lis/les=93/94 n=6 ec=58/48 lis/c=91/69 les/c/f=92/70/0 sis=93) [0] r=0 lpr=93 pi=[69,93)/1 crt=54'774 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:41:35 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 94 pg[10.1b( v 54'774 (0'0,54'774] local-lis/les=93/94 n=5 ec=58/48 lis/c=91/69 les/c/f=92/70/0 sis=93) [0] r=0 lpr=93 pi=[69,93)/1 crt=54'774 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:41:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:41:35.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:35 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Sep 30 17:41:35 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Sep 30 17:41:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Sep 30 17:41:36 compute-0 ceph-mon[73755]: 10.f scrub ok
Sep 30 17:41:36 compute-0 ceph-mon[73755]: pgmap v39: 353 pgs: 2 remapped+peering, 2 peering, 349 active+clean; 457 KiB data, 98 MiB used, 40 GiB / 40 GiB avail; 54 B/s, 3 objects/s recovering
Sep 30 17:41:36 compute-0 ceph-mon[73755]: 8.1d deep-scrub starts
Sep 30 17:41:36 compute-0 ceph-mon[73755]: 8.1d deep-scrub ok
Sep 30 17:41:36 compute-0 ceph-mon[73755]: osdmap e94: 2 total, 2 up, 2 in
Sep 30 17:41:36 compute-0 ceph-mon[73755]: 10.6 scrub starts
Sep 30 17:41:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e95 e95: 2 total, 2 up, 2 in
Sep 30 17:41:36 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e95: 2 total, 2 up, 2 in
Sep 30 17:41:36 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 95 pg[10.1c( v 54'774 (0'0,54'774] local-lis/les=0/0 n=5 ec=58/48 lis/c=93/75 les/c/f=94/76/0 sis=95) [0] r=0 lpr=95 pi=[75,95)/1 luod=0'0 crt=54'774 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:36 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 95 pg[10.1c( v 54'774 (0'0,54'774] local-lis/les=0/0 n=5 ec=58/48 lis/c=93/75 les/c/f=94/76/0 sis=95) [0] r=0 lpr=95 pi=[75,95)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:36 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 95 pg[10.c( v 54'774 (0'0,54'774] local-lis/les=0/0 n=6 ec=58/48 lis/c=93/75 les/c/f=94/76/0 sis=95) [0] r=0 lpr=95 pi=[75,95)/1 luod=0'0 crt=54'774 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:36 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 95 pg[10.c( v 54'774 (0'0,54'774] local-lis/les=0/0 n=6 ec=58/48 lis/c=93/75 les/c/f=94/76/0 sis=95) [0] r=0 lpr=95 pi=[75,95)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:36 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Sep 30 17:41:36 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Sep 30 17:41:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v42: 353 pgs: 2 remapped+peering, 2 peering, 349 active+clean; 457 KiB data, 98 MiB used, 40 GiB / 40 GiB avail; 54 B/s, 3 objects/s recovering
Sep 30 17:41:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:41:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:41:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:41:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:41:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:41:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:41:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:41:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:41:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:41:37.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:37 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44009ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:37 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff3c004440 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Sep 30 17:41:37 compute-0 ceph-mon[73755]: 10.6 scrub ok
Sep 30 17:41:37 compute-0 ceph-mon[73755]: 9.1d scrub starts
Sep 30 17:41:37 compute-0 ceph-mon[73755]: 9.1d scrub ok
Sep 30 17:41:37 compute-0 ceph-mon[73755]: osdmap e95: 2 total, 2 up, 2 in
Sep 30 17:41:37 compute-0 ceph-mon[73755]: 10.7 scrub starts
Sep 30 17:41:37 compute-0 ceph-mon[73755]: 10.7 scrub ok
Sep 30 17:41:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:41:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e96 e96: 2 total, 2 up, 2 in
Sep 30 17:41:37 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e96: 2 total, 2 up, 2 in
Sep 30 17:41:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:41:37.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:37 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 96 pg[10.c( v 54'774 (0'0,54'774] local-lis/les=95/96 n=6 ec=58/48 lis/c=93/75 les/c/f=94/76/0 sis=95) [0] r=0 lpr=95 pi=[75,95)/1 crt=54'774 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:41:37 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 96 pg[10.1c( v 54'774 (0'0,54'774] local-lis/les=95/96 n=5 ec=58/48 lis/c=93/75 les/c/f=94/76/0 sis=95) [0] r=0 lpr=95 pi=[75,95)/1 crt=54'774 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:41:37 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Sep 30 17:41:37 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Sep 30 17:41:38 compute-0 ceph-mon[73755]: pgmap v42: 353 pgs: 2 remapped+peering, 2 peering, 349 active+clean; 457 KiB data, 98 MiB used, 40 GiB / 40 GiB avail; 54 B/s, 3 objects/s recovering
Sep 30 17:41:38 compute-0 ceph-mon[73755]: 11.1f deep-scrub starts
Sep 30 17:41:38 compute-0 ceph-mon[73755]: 11.1f deep-scrub ok
Sep 30 17:41:38 compute-0 ceph-mon[73755]: osdmap e96: 2 total, 2 up, 2 in
Sep 30 17:41:38 compute-0 ceph-mon[73755]: 9.3 scrub starts
Sep 30 17:41:38 compute-0 ceph-mon[73755]: 9.3 scrub ok
Sep 30 17:41:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:41:38] "GET /metrics HTTP/1.1" 200 46532 "" "Prometheus/2.51.0"
Sep 30 17:41:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:41:38] "GET /metrics HTTP/1.1" 200 46532 "" "Prometheus/2.51.0"
Sep 30 17:41:38 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 11.f scrub starts
Sep 30 17:41:38 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 11.f scrub ok
Sep 30 17:41:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v44: 353 pgs: 353 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:41:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Sep 30 17:41:39 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Sep 30 17:41:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:41:39.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:39 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff14003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:39 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff30003220 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:39 compute-0 sudo[105961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:41:39 compute-0 sudo[105961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:39 compute-0 sudo[105961]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:41:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:41:39.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Sep 30 17:41:39 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Sep 30 17:41:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e97 e97: 2 total, 2 up, 2 in
Sep 30 17:41:39 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e97: 2 total, 2 up, 2 in
Sep 30 17:41:39 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 97 pg[10.1d( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=78/78 les/c/f=79/79/0 sis=97) [0] r=0 lpr=97 pi=[78,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:39 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 97 pg[10.d( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=78/78 les/c/f=79/79/0 sis=97) [0] r=0 lpr=97 pi=[78,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:39 compute-0 ceph-mon[73755]: 11.10 scrub starts
Sep 30 17:41:39 compute-0 ceph-mon[73755]: 11.10 scrub ok
Sep 30 17:41:39 compute-0 ceph-mon[73755]: 11.f scrub starts
Sep 30 17:41:39 compute-0 ceph-mon[73755]: 11.f scrub ok
Sep 30 17:41:39 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Sep 30 17:41:39 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Sep 30 17:41:39 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Sep 30 17:41:40 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 9.a scrub starts
Sep 30 17:41:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Sep 30 17:41:40 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 9.a scrub ok
Sep 30 17:41:40 compute-0 ceph-mon[73755]: pgmap v44: 353 pgs: 353 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:41:40 compute-0 ceph-mon[73755]: 8.13 deep-scrub starts
Sep 30 17:41:40 compute-0 ceph-mon[73755]: 8.13 deep-scrub ok
Sep 30 17:41:40 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Sep 30 17:41:40 compute-0 ceph-mon[73755]: osdmap e97: 2 total, 2 up, 2 in
Sep 30 17:41:40 compute-0 ceph-mon[73755]: 8.8 scrub starts
Sep 30 17:41:40 compute-0 ceph-mon[73755]: 8.8 scrub ok
Sep 30 17:41:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e98 e98: 2 total, 2 up, 2 in
Sep 30 17:41:41 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e98: 2 total, 2 up, 2 in
Sep 30 17:41:41 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 98 pg[10.d( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=78/78 les/c/f=79/79/0 sis=98) [0]/[1] r=-1 lpr=98 pi=[78,98)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:41 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 98 pg[10.d( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=78/78 les/c/f=79/79/0 sis=98) [0]/[1] r=-1 lpr=98 pi=[78,98)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:41 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 98 pg[10.1d( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=78/78 les/c/f=79/79/0 sis=98) [0]/[1] r=-1 lpr=98 pi=[78,98)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:41 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 98 pg[10.1d( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=78/78 les/c/f=79/79/0 sis=98) [0]/[1] r=-1 lpr=98 pi=[78,98)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v47: 353 pgs: 353 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:41:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Sep 30 17:41:41 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Sep 30 17:41:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:41:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:41:41.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:41:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:41 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44009ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:41 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff3c004440 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:41:41.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:41 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 9.d scrub starts
Sep 30 17:41:41 compute-0 ceph-mon[73755]: 11.11 scrub starts
Sep 30 17:41:41 compute-0 ceph-mon[73755]: 11.11 scrub ok
Sep 30 17:41:41 compute-0 ceph-mon[73755]: 9.a scrub starts
Sep 30 17:41:41 compute-0 ceph-mon[73755]: 9.a scrub ok
Sep 30 17:41:41 compute-0 ceph-mon[73755]: osdmap e98: 2 total, 2 up, 2 in
Sep 30 17:41:41 compute-0 ceph-mon[73755]: pgmap v47: 353 pgs: 353 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:41:41 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Sep 30 17:41:41 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 9.d scrub ok
Sep 30 17:41:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Sep 30 17:41:42 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Sep 30 17:41:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e99 e99: 2 total, 2 up, 2 in
Sep 30 17:41:42 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e99: 2 total, 2 up, 2 in
Sep 30 17:41:42 compute-0 sshd-session[105989]: Accepted publickey for zuul from 192.168.122.30 port 55072 ssh2: ECDSA SHA256:COqAmbdBUMx+UyDa5XAop4Bi6qvwvYhJQ16rCseuHkQ
Sep 30 17:41:42 compute-0 systemd-logind[811]: New session 40 of user zuul.
Sep 30 17:41:42 compute-0 systemd[1]: Started Session 40 of User zuul.
Sep 30 17:41:42 compute-0 sshd-session[105989]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 17:41:42 compute-0 ceph-mon[73755]: 12.10 scrub starts
Sep 30 17:41:42 compute-0 ceph-mon[73755]: 12.10 scrub ok
Sep 30 17:41:42 compute-0 ceph-mon[73755]: 9.d scrub starts
Sep 30 17:41:42 compute-0 ceph-mon[73755]: 9.d scrub ok
Sep 30 17:41:42 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Sep 30 17:41:42 compute-0 ceph-mon[73755]: osdmap e99: 2 total, 2 up, 2 in
Sep 30 17:41:42 compute-0 ceph-mon[73755]: 12.13 scrub starts
Sep 30 17:41:42 compute-0 ceph-mon[73755]: 12.13 scrub ok
Sep 30 17:41:42 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Sep 30 17:41:42 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Sep 30 17:41:42 compute-0 python3.9[106142]: ansible-ansible.legacy.ping Invoked with data=pong
Sep 30 17:41:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Sep 30 17:41:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e100 e100: 2 total, 2 up, 2 in
Sep 30 17:41:43 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e100: 2 total, 2 up, 2 in
Sep 30 17:41:43 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 100 pg[10.d( v 54'774 (0'0,54'774] local-lis/les=0/0 n=6 ec=58/48 lis/c=98/78 les/c/f=99/79/0 sis=100) [0] r=0 lpr=100 pi=[78,100)/1 luod=0'0 crt=54'774 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:43 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 100 pg[10.d( v 54'774 (0'0,54'774] local-lis/les=0/0 n=6 ec=58/48 lis/c=98/78 les/c/f=99/79/0 sis=100) [0] r=0 lpr=100 pi=[78,100)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:43 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 100 pg[10.1d( v 54'774 (0'0,54'774] local-lis/les=0/0 n=5 ec=58/48 lis/c=98/78 les/c/f=99/79/0 sis=100) [0] r=0 lpr=100 pi=[78,100)/1 luod=0'0 crt=54'774 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:43 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 100 pg[10.1d( v 54'774 (0'0,54'774] local-lis/les=0/0 n=5 ec=58/48 lis/c=98/78 les/c/f=99/79/0 sis=100) [0] r=0 lpr=100 pi=[78,100)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v50: 353 pgs: 353 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:41:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Sep 30 17:41:43 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Sep 30 17:41:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:41:43.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:43 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff14003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:43 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff30003220 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:41:43.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:43 compute-0 ceph-mon[73755]: 11.5 scrub starts
Sep 30 17:41:43 compute-0 ceph-mon[73755]: 11.5 scrub ok
Sep 30 17:41:43 compute-0 ceph-mon[73755]: osdmap e100: 2 total, 2 up, 2 in
Sep 30 17:41:43 compute-0 ceph-mon[73755]: pgmap v50: 353 pgs: 353 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:41:43 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Sep 30 17:41:43 compute-0 ceph-mon[73755]: 12.12 scrub starts
Sep 30 17:41:43 compute-0 ceph-mon[73755]: 12.12 scrub ok
Sep 30 17:41:43 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 11.4 deep-scrub starts
Sep 30 17:41:43 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 11.4 deep-scrub ok
Sep 30 17:41:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Sep 30 17:41:44 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Sep 30 17:41:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e101 e101: 2 total, 2 up, 2 in
Sep 30 17:41:44 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e101: 2 total, 2 up, 2 in
Sep 30 17:41:44 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 101 pg[10.d( v 54'774 (0'0,54'774] local-lis/les=100/101 n=6 ec=58/48 lis/c=98/78 les/c/f=99/79/0 sis=100) [0] r=0 lpr=100 pi=[78,100)/1 crt=54'774 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:41:44 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 101 pg[10.1d( v 54'774 (0'0,54'774] local-lis/les=100/101 n=5 ec=58/48 lis/c=98/78 les/c/f=99/79/0 sis=100) [0] r=0 lpr=100 pi=[78,100)/1 crt=54'774 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:41:44 compute-0 python3.9[106318]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:41:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:41:44 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Sep 30 17:41:44 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Sep 30 17:41:44 compute-0 ceph-mon[73755]: 11.4 deep-scrub starts
Sep 30 17:41:44 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Sep 30 17:41:44 compute-0 ceph-mon[73755]: osdmap e101: 2 total, 2 up, 2 in
Sep 30 17:41:44 compute-0 ceph-mon[73755]: 12.6 scrub starts
Sep 30 17:41:44 compute-0 ceph-mon[73755]: 12.6 scrub ok
Sep 30 17:41:45 compute-0 sudo[106472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdckkerhploqabpmpqmflgkixtukfzds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254104.694982-69-1510173200327/AnsiballZ_command.py'
Sep 30 17:41:45 compute-0 sudo[106472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:41:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v52: 353 pgs: 2 peering, 351 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail; 79 B/s, 3 objects/s recovering
Sep 30 17:41:45 compute-0 python3.9[106474]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:41:45 compute-0 sudo[106472]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:41:45.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:45 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44009ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:45 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff3c004440 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:41:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:41:45.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:41:45 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Sep 30 17:41:45 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Sep 30 17:41:45 compute-0 ceph-mon[73755]: 11.4 deep-scrub ok
Sep 30 17:41:45 compute-0 ceph-mon[73755]: 11.7 scrub starts
Sep 30 17:41:45 compute-0 ceph-mon[73755]: 11.7 scrub ok
Sep 30 17:41:45 compute-0 ceph-mon[73755]: pgmap v52: 353 pgs: 2 peering, 351 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail; 79 B/s, 3 objects/s recovering
Sep 30 17:41:45 compute-0 ceph-mon[73755]: 12.e scrub starts
Sep 30 17:41:45 compute-0 ceph-mon[73755]: 12.e scrub ok
Sep 30 17:41:46 compute-0 sudo[106627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxgtjzupshvabvknufwxiapltzbugcye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254105.7532516-93-277749006133933/AnsiballZ_stat.py'
Sep 30 17:41:46 compute-0 sudo[106627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:41:46 compute-0 python3.9[106629]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:41:46 compute-0 sudo[106627]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:46 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Sep 30 17:41:46 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Sep 30 17:41:46 compute-0 ceph-mon[73755]: 8.5 scrub starts
Sep 30 17:41:46 compute-0 ceph-mon[73755]: 8.5 scrub ok
Sep 30 17:41:46 compute-0 ceph-mon[73755]: 12.c scrub starts
Sep 30 17:41:46 compute-0 ceph-mon[73755]: 12.c scrub ok
Sep 30 17:41:47 compute-0 sudo[106781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmchwmkfjgklbjokqzionuqaolhpkuwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254106.6868334-115-112355915745519/AnsiballZ_file.py'
Sep 30 17:41:47 compute-0 sudo[106781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:41:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v53: 353 pgs: 2 peering, 351 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail; 54 B/s, 2 objects/s recovering
Sep 30 17:41:47 compute-0 python3.9[106783]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:41:47 compute-0 sudo[106781]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:41:47.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:47 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff14003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:47 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff30003f30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:41:47.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:47 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Sep 30 17:41:47 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Sep 30 17:41:47 compute-0 ceph-mon[73755]: 9.7 scrub starts
Sep 30 17:41:47 compute-0 ceph-mon[73755]: 9.7 scrub ok
Sep 30 17:41:47 compute-0 ceph-mon[73755]: pgmap v53: 353 pgs: 2 peering, 351 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail; 54 B/s, 2 objects/s recovering
Sep 30 17:41:47 compute-0 ceph-mon[73755]: 12.a scrub starts
Sep 30 17:41:47 compute-0 ceph-mon[73755]: 12.a scrub ok
Sep 30 17:41:48 compute-0 python3.9[106935]: ansible-ansible.builtin.service_facts Invoked
Sep 30 17:41:48 compute-0 network[106952]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Sep 30 17:41:48 compute-0 network[106953]: 'network-scripts' will be removed from distribution in near future.
Sep 30 17:41:48 compute-0 network[106954]: It is advised to switch to 'NetworkManager' instead for network management.
Sep 30 17:41:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:41:48] "GET /metrics HTTP/1.1" 200 46532 "" "Prometheus/2.51.0"
Sep 30 17:41:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:41:48] "GET /metrics HTTP/1.1" 200 46532 "" "Prometheus/2.51.0"
Sep 30 17:41:48 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Sep 30 17:41:48 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Sep 30 17:41:48 compute-0 ceph-mon[73755]: 11.1 scrub starts
Sep 30 17:41:48 compute-0 ceph-mon[73755]: 11.1 scrub ok
Sep 30 17:41:48 compute-0 ceph-mon[73755]: 12.7 scrub starts
Sep 30 17:41:48 compute-0 ceph-mon[73755]: 12.7 scrub ok
Sep 30 17:41:48 compute-0 ceph-mon[73755]: 11.1b scrub starts
Sep 30 17:41:48 compute-0 ceph-mon[73755]: 11.1b scrub ok
Sep 30 17:41:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v54: 353 pgs: 353 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail; 46 B/s, 1 objects/s recovering
Sep 30 17:41:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Sep 30 17:41:49 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Sep 30 17:41:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:41:49.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:49 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44009ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:49 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff3c004440 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:41:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:41:49.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:49 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Sep 30 17:41:49 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Sep 30 17:41:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Sep 30 17:41:49 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Sep 30 17:41:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e102 e102: 2 total, 2 up, 2 in
Sep 30 17:41:49 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e102: 2 total, 2 up, 2 in
Sep 30 17:41:49 compute-0 ceph-mon[73755]: pgmap v54: 353 pgs: 353 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail; 46 B/s, 1 objects/s recovering
Sep 30 17:41:49 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Sep 30 17:41:49 compute-0 ceph-mon[73755]: 12.3 scrub starts
Sep 30 17:41:49 compute-0 ceph-mon[73755]: 12.3 scrub ok
Sep 30 17:41:49 compute-0 ceph-mon[73755]: 8.19 scrub starts
Sep 30 17:41:50 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Sep 30 17:41:50 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Sep 30 17:41:51 compute-0 ceph-mon[73755]: 8.19 scrub ok
Sep 30 17:41:51 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Sep 30 17:41:51 compute-0 ceph-mon[73755]: osdmap e102: 2 total, 2 up, 2 in
Sep 30 17:41:51 compute-0 ceph-mon[73755]: 12.8 scrub starts
Sep 30 17:41:51 compute-0 ceph-mon[73755]: 12.8 scrub ok
Sep 30 17:41:51 compute-0 ceph-mon[73755]: 9.18 scrub starts
Sep 30 17:41:51 compute-0 ceph-mon[73755]: 9.18 scrub ok
Sep 30 17:41:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v56: 353 pgs: 353 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail; 41 B/s, 1 objects/s recovering
Sep 30 17:41:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Sep 30 17:41:51 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Sep 30 17:41:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:41:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:41:51.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:41:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:51 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff14003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:51 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff30003f30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:41:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:41:51.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:41:51 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Sep 30 17:41:51 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Sep 30 17:41:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Sep 30 17:41:52 compute-0 ceph-mon[73755]: pgmap v56: 353 pgs: 353 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail; 41 B/s, 1 objects/s recovering
Sep 30 17:41:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Sep 30 17:41:52 compute-0 ceph-mon[73755]: 12.b scrub starts
Sep 30 17:41:52 compute-0 ceph-mon[73755]: 12.b scrub ok
Sep 30 17:41:52 compute-0 ceph-mon[73755]: 11.1a scrub starts
Sep 30 17:41:52 compute-0 ceph-mon[73755]: 11.1a scrub ok
Sep 30 17:41:52 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Sep 30 17:41:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e103 e103: 2 total, 2 up, 2 in
Sep 30 17:41:52 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e103: 2 total, 2 up, 2 in
Sep 30 17:41:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:41:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:41:52 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Sep 30 17:41:52 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Sep 30 17:41:53 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Sep 30 17:41:53 compute-0 ceph-mon[73755]: osdmap e103: 2 total, 2 up, 2 in
Sep 30 17:41:53 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:41:53 compute-0 ceph-mon[73755]: 12.18 scrub starts
Sep 30 17:41:53 compute-0 ceph-mon[73755]: 11.1d scrub starts
Sep 30 17:41:53 compute-0 ceph-mon[73755]: 11.1d scrub ok
Sep 30 17:41:53 compute-0 python3.9[107221]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:41:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v58: 353 pgs: 353 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:41:53 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Sep 30 17:41:53 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Sep 30 17:41:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:41:53.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:53 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44009ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:53 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff3c004440 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:41:53.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:53 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Sep 30 17:41:53 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Sep 30 17:41:53 compute-0 python3.9[107374]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:41:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Sep 30 17:41:54 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Sep 30 17:41:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e104 e104: 2 total, 2 up, 2 in
Sep 30 17:41:54 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e104: 2 total, 2 up, 2 in
Sep 30 17:41:54 compute-0 ceph-mon[73755]: 12.18 scrub ok
Sep 30 17:41:54 compute-0 ceph-mon[73755]: pgmap v58: 353 pgs: 353 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:41:54 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Sep 30 17:41:54 compute-0 ceph-mon[73755]: 12.1c scrub starts
Sep 30 17:41:54 compute-0 ceph-mon[73755]: 12.1c scrub ok
Sep 30 17:41:54 compute-0 ceph-mon[73755]: 11.1c scrub starts
Sep 30 17:41:54 compute-0 ceph-mon[73755]: 11.1c scrub ok
Sep 30 17:41:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:41:54 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Sep 30 17:41:54 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Sep 30 17:41:55 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Sep 30 17:41:55 compute-0 ceph-mon[73755]: osdmap e104: 2 total, 2 up, 2 in
Sep 30 17:41:55 compute-0 ceph-mon[73755]: 12.19 scrub starts
Sep 30 17:41:55 compute-0 ceph-mon[73755]: 12.19 scrub ok
Sep 30 17:41:55 compute-0 ceph-mon[73755]: 8.1c scrub starts
Sep 30 17:41:55 compute-0 ceph-mon[73755]: 8.1c scrub ok
Sep 30 17:41:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v60: 353 pgs: 353 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:41:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Sep 30 17:41:55 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Sep 30 17:41:55 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 104 pg[10.12( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=68/68 les/c/f=69/69/0 sis=104) [0] r=0 lpr=104 pi=[68,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:41:55.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:55 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff14003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:55 compute-0 python3.9[107528]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:41:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:55 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff18002690 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:41:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:41:55.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:41:55 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Sep 30 17:41:55 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Sep 30 17:41:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Sep 30 17:41:56 compute-0 ceph-mon[73755]: pgmap v60: 353 pgs: 353 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:41:56 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Sep 30 17:41:56 compute-0 ceph-mon[73755]: 12.1d scrub starts
Sep 30 17:41:56 compute-0 ceph-mon[73755]: 11.1e scrub starts
Sep 30 17:41:56 compute-0 ceph-mon[73755]: 12.1d scrub ok
Sep 30 17:41:56 compute-0 ceph-mon[73755]: 11.1e scrub ok
Sep 30 17:41:56 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Sep 30 17:41:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e105 e105: 2 total, 2 up, 2 in
Sep 30 17:41:56 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e105: 2 total, 2 up, 2 in
Sep 30 17:41:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 105 pg[10.12( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=68/68 les/c/f=69/69/0 sis=105) [0]/[1] r=-1 lpr=105 pi=[68,105)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:56 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 105 pg[10.12( empty local-lis/les=0/0 n=0 ec=58/48 lis/c=68/68 les/c/f=69/69/0 sis=105) [0]/[1] r=-1 lpr=105 pi=[68,105)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Sep 30 17:41:56 compute-0 sudo[107687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-difvhrpaccdtejuomagftzosfroaposu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254116.2500186-211-217675106342693/AnsiballZ_setup.py'
Sep 30 17:41:56 compute-0 sudo[107687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:41:56 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 9.13 deep-scrub starts
Sep 30 17:41:56 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 9.13 deep-scrub ok
Sep 30 17:41:56 compute-0 python3.9[107689]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 17:41:57 compute-0 sudo[107687]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Sep 30 17:41:57 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Sep 30 17:41:57 compute-0 ceph-mon[73755]: osdmap e105: 2 total, 2 up, 2 in
Sep 30 17:41:57 compute-0 ceph-mon[73755]: 9.13 deep-scrub starts
Sep 30 17:41:57 compute-0 ceph-mon[73755]: 9.13 deep-scrub ok
Sep 30 17:41:57 compute-0 ceph-mon[73755]: 12.9 deep-scrub starts
Sep 30 17:41:57 compute-0 ceph-mon[73755]: 12.9 deep-scrub ok
Sep 30 17:41:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e106 e106: 2 total, 2 up, 2 in
Sep 30 17:41:57 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e106: 2 total, 2 up, 2 in
Sep 30 17:41:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v63: 353 pgs: 353 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:41:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Sep 30 17:41:57 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Sep 30 17:41:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:41:57.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:57 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44009ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:57 compute-0 sudo[107772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmvnxsuaqudpkiqbmxcvbharusmkhiaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254116.2500186-211-217675106342693/AnsiballZ_dnf.py'
Sep 30 17:41:57 compute-0 sudo[107772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:41:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:57 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff2c001670 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:57 compute-0 python3.9[107774]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 17:41:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:57 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Sep 30 17:41:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:41:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:41:57.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:41:57 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Sep 30 17:41:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Sep 30 17:41:58 compute-0 ceph-mon[73755]: osdmap e106: 2 total, 2 up, 2 in
Sep 30 17:41:58 compute-0 ceph-mon[73755]: pgmap v63: 353 pgs: 353 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:41:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Sep 30 17:41:58 compute-0 ceph-mon[73755]: 8.12 scrub starts
Sep 30 17:41:58 compute-0 ceph-mon[73755]: 8.12 scrub ok
Sep 30 17:41:58 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Sep 30 17:41:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e107 e107: 2 total, 2 up, 2 in
Sep 30 17:41:58 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e107: 2 total, 2 up, 2 in
Sep 30 17:41:58 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 107 pg[10.12( v 54'774 (0'0,54'774] local-lis/les=0/0 n=6 ec=58/48 lis/c=105/68 les/c/f=106/69/0 sis=107) [0] r=0 lpr=107 pi=[68,107)/1 luod=0'0 crt=54'774 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:41:58 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 107 pg[10.12( v 54'774 (0'0,54'774] local-lis/les=0/0 n=6 ec=58/48 lis/c=105/68 les/c/f=106/69/0 sis=107) [0] r=0 lpr=107 pi=[68,107)/1 crt=54'774 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 17:41:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:41:58] "GET /metrics HTTP/1.1" 200 46524 "" "Prometheus/2.51.0"
Sep 30 17:41:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:41:58] "GET /metrics HTTP/1.1" 200 46524 "" "Prometheus/2.51.0"
Sep 30 17:41:58 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Sep 30 17:41:58 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Sep 30 17:41:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Sep 30 17:41:59 compute-0 ceph-mon[73755]: 10.3 scrub starts
Sep 30 17:41:59 compute-0 ceph-mon[73755]: 10.3 scrub ok
Sep 30 17:41:59 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Sep 30 17:41:59 compute-0 ceph-mon[73755]: osdmap e107: 2 total, 2 up, 2 in
Sep 30 17:41:59 compute-0 ceph-mon[73755]: 9.11 scrub starts
Sep 30 17:41:59 compute-0 ceph-mon[73755]: 9.11 scrub ok
Sep 30 17:41:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e108 e108: 2 total, 2 up, 2 in
Sep 30 17:41:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v65: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:41:59 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e108: 2 total, 2 up, 2 in
Sep 30 17:41:59 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 108 pg[10.12( v 54'774 (0'0,54'774] local-lis/les=107/108 n=6 ec=58/48 lis/c=105/68 les/c/f=106/69/0 sis=107) [0] r=0 lpr=107 pi=[68,107)/1 crt=54'774 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:41:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:41:59.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:59 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff14003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:41:59 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff18002690 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:41:59 compute-0 sudo[107828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:41:59 compute-0 sudo[107828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:41:59 compute-0 sudo[107828]: pam_unix(sudo:session): session closed for user root
Sep 30 17:41:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:41:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:41:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:41:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:41:59.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:41:59 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Sep 30 17:41:59 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Sep 30 17:42:00 compute-0 ceph-mon[73755]: 10.5 deep-scrub starts
Sep 30 17:42:00 compute-0 ceph-mon[73755]: 10.5 deep-scrub ok
Sep 30 17:42:00 compute-0 ceph-mon[73755]: pgmap v65: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:42:00 compute-0 ceph-mon[73755]: osdmap e108: 2 total, 2 up, 2 in
Sep 30 17:42:00 compute-0 ceph-mon[73755]: 9.12 scrub starts
Sep 30 17:42:00 compute-0 ceph-mon[73755]: 9.12 scrub ok
Sep 30 17:42:00 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Sep 30 17:42:00 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Sep 30 17:42:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v67: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:42:01 compute-0 ceph-mon[73755]: 10.4 scrub starts
Sep 30 17:42:01 compute-0 ceph-mon[73755]: 10.4 scrub ok
Sep 30 17:42:01 compute-0 ceph-mon[73755]: 8.4 scrub starts
Sep 30 17:42:01 compute-0 ceph-mon[73755]: 8.4 scrub ok
Sep 30 17:42:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:42:01.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:42:01 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44009ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:42:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:42:01 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff2c001670 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:42:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:42:01.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:01 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Sep 30 17:42:01 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Sep 30 17:42:02 compute-0 ceph-mon[73755]: 10.18 scrub starts
Sep 30 17:42:02 compute-0 ceph-mon[73755]: 10.18 scrub ok
Sep 30 17:42:02 compute-0 ceph-mon[73755]: pgmap v67: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:42:02 compute-0 ceph-mon[73755]: 11.14 scrub starts
Sep 30 17:42:02 compute-0 ceph-mon[73755]: 11.14 scrub ok
Sep 30 17:42:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=infra.usagestats t=2025-09-30T17:42:02.504510985Z level=info msg="Usage stats are ready to report"
Sep 30 17:42:02 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Sep 30 17:42:02 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Sep 30 17:42:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v68: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:42:03 compute-0 ceph-mon[73755]: 11.3 scrub starts
Sep 30 17:42:03 compute-0 ceph-mon[73755]: 11.3 scrub ok
Sep 30 17:42:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:42:03.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:42:03 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff14003c30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:42:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:42:03 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff18002690 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:42:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:42:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:42:03.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:42:03 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 8.d scrub starts
Sep 30 17:42:03 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 8.d scrub ok
Sep 30 17:42:04 compute-0 ceph-mon[73755]: pgmap v68: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:42:04 compute-0 ceph-mon[73755]: 8.d scrub starts
Sep 30 17:42:04 compute-0 ceph-mon[73755]: 8.d scrub ok
Sep 30 17:42:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:42:04 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 11.e scrub starts
Sep 30 17:42:04 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 11.e scrub ok
Sep 30 17:42:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v69: 353 pgs: 353 active+clean; 458 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:42:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Sep 30 17:42:05 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Sep 30 17:42:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Sep 30 17:42:05 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Sep 30 17:42:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e109 e109: 2 total, 2 up, 2 in
Sep 30 17:42:05 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e109: 2 total, 2 up, 2 in
Sep 30 17:42:05 compute-0 ceph-mon[73755]: 11.e scrub starts
Sep 30 17:42:05 compute-0 ceph-mon[73755]: 11.e scrub ok
Sep 30 17:42:05 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Sep 30 17:42:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:42:05.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:42:05 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44009ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:42:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:42:05 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44009ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:42:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:42:05.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:05 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 9.f scrub starts
Sep 30 17:42:05 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 9.f scrub ok
Sep 30 17:42:06 compute-0 ceph-mon[73755]: pgmap v69: 353 pgs: 353 active+clean; 458 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:42:06 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Sep 30 17:42:06 compute-0 ceph-mon[73755]: osdmap e109: 2 total, 2 up, 2 in
Sep 30 17:42:06 compute-0 ceph-mon[73755]: 9.f scrub starts
Sep 30 17:42:06 compute-0 ceph-mon[73755]: 9.f scrub ok
Sep 30 17:42:06 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.c scrub starts
Sep 30 17:42:06 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.c scrub ok
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_17:42:07
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.nfs', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'backups', '.mgr', 'images', 'volumes', 'cephfs.cephfs.data', 'vms']
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 2/10 upmap changes
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Executing plan auto_2025-09-30_17:42:07
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [balancer INFO root] ceph osd pg-upmap-items 10.a mappings [{'from': 0, 'to': 1}]
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [balancer INFO root] ceph osd pg-upmap-items 10.e mappings [{'from': 0, 'to': 1}]
Sep 30 17:42:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "10.a", "id": [0, 1]} v 0)
Sep 30 17:42:07 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "10.a", "id": [0, 1]}]: dispatch
Sep 30 17:42:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "10.e", "id": [0, 1]} v 0)
Sep 30 17:42:07 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "10.e", "id": [0, 1]}]: dispatch
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v71: 353 pgs: 353 active+clean; 458 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:42:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Sep 30 17:42:07 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 17:42:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:42:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:42:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:42:07 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "10.a", "id": [0, 1]}]': finished
Sep 30 17:42:07 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "10.e", "id": [0, 1]}]': finished
Sep 30 17:42:07 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Sep 30 17:42:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e110 e110: 2 total, 2 up, 2 in
Sep 30 17:42:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e110 crush map has features 3314933000854323200, adjusting msgr requires
Sep 30 17:42:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e110 crush map has features 432629239337189376, adjusting msgr requires
Sep 30 17:42:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e110 crush map has features 432629239337189376, adjusting msgr requires
Sep 30 17:42:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e110 crush map has features 432629239337189376, adjusting msgr requires
Sep 30 17:42:07 compute-0 ceph-mon[73755]: 10.c scrub starts
Sep 30 17:42:07 compute-0 ceph-mon[73755]: 10.c scrub ok
Sep 30 17:42:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "10.a", "id": [0, 1]}]: dispatch
Sep 30 17:42:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "10.e", "id": [0, 1]}]: dispatch
Sep 30 17:42:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Sep 30 17:42:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:42:07 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e110: 2 total, 2 up, 2 in
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:42:07 compute-0 ceph-osd[82241]: osd.0 110 crush map has features 432629239337189376, adjusting msgr requires for clients
Sep 30 17:42:07 compute-0 ceph-osd[82241]: osd.0 110 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons
Sep 30 17:42:07 compute-0 ceph-osd[82241]: osd.0 110 crush map has features 3314933000854323200, adjusting msgr requires for osds
Sep 30 17:42:07 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 110 pg[10.e( v 54'774 (0'0,54'774] local-lis/les=80/81 n=5 ec=58/48 lis/c=80/80 les/c/f=81/81/0 sis=110 pruub=14.736370087s) [1] r=-1 lpr=110 pi=[80,110)/1 crt=54'774 mlcod 0'0 active pruub 234.185714722s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:42:07 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 110 pg[10.e( v 54'774 (0'0,54'774] local-lis/les=80/81 n=5 ec=58/48 lis/c=80/80 les/c/f=81/81/0 sis=110 pruub=14.736305237s) [1] r=-1 lpr=110 pi=[80,110)/1 crt=54'774 mlcod 0'0 unknown NOTIFY pruub 234.185714722s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:42:07 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 110 pg[10.a( v 54'774 (0'0,54'774] local-lis/les=91/92 n=6 ec=58/48 lis/c=91/91 les/c/f=92/92/0 sis=110 pruub=14.880281448s) [1] r=-1 lpr=110 pi=[91,110)/1 crt=54'774 mlcod 0'0 active pruub 234.330032349s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:42:07 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 110 pg[10.a( v 54'774 (0'0,54'774] local-lis/les=91/92 n=6 ec=58/48 lis/c=91/91 les/c/f=92/92/0 sis=110 pruub=14.880200386s) [1] r=-1 lpr=110 pi=[91,110)/1 crt=54'774 mlcod 0'0 unknown NOTIFY pruub 234.330032349s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:42:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:42:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:42:07.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:42:07 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff14003c50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:42:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:42:07 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff18002690 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:42:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:42:07.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:07 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.d scrub starts
Sep 30 17:42:07 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.d scrub ok
Sep 30 17:42:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Sep 30 17:42:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e111 e111: 2 total, 2 up, 2 in
Sep 30 17:42:08 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e111: 2 total, 2 up, 2 in
Sep 30 17:42:08 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 111 pg[10.e( v 54'774 (0'0,54'774] local-lis/les=80/81 n=5 ec=58/48 lis/c=80/80 les/c/f=81/81/0 sis=111) [1]/[0] r=0 lpr=111 pi=[80,111)/1 crt=54'774 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:42:08 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 111 pg[10.e( v 54'774 (0'0,54'774] local-lis/les=80/81 n=5 ec=58/48 lis/c=80/80 les/c/f=81/81/0 sis=111) [1]/[0] r=0 lpr=111 pi=[80,111)/1 crt=54'774 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 17:42:08 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 111 pg[10.a( v 54'774 (0'0,54'774] local-lis/les=91/92 n=6 ec=58/48 lis/c=91/91 les/c/f=92/92/0 sis=111) [1]/[0] r=0 lpr=111 pi=[91,111)/1 crt=54'774 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:42:08 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 111 pg[10.a( v 54'774 (0'0,54'774] local-lis/les=91/92 n=6 ec=58/48 lis/c=91/91 les/c/f=92/92/0 sis=111) [1]/[0] r=0 lpr=111 pi=[91,111)/1 crt=54'774 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 17:42:08 compute-0 ceph-mon[73755]: pgmap v71: 353 pgs: 353 active+clean; 458 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:42:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "10.a", "id": [0, 1]}]': finished
Sep 30 17:42:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "10.e", "id": [0, 1]}]': finished
Sep 30 17:42:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Sep 30 17:42:08 compute-0 ceph-mon[73755]: osdmap e110: 2 total, 2 up, 2 in
Sep 30 17:42:08 compute-0 ceph-mon[73755]: 10.d scrub starts
Sep 30 17:42:08 compute-0 ceph-mon[73755]: 10.d scrub ok
Sep 30 17:42:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:42:08] "GET /metrics HTTP/1.1" 200 46512 "" "Prometheus/2.51.0"
Sep 30 17:42:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:42:08] "GET /metrics HTTP/1.1" 200 46512 "" "Prometheus/2.51.0"
Sep 30 17:42:08 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.b scrub starts
Sep 30 17:42:08 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.b scrub ok
Sep 30 17:42:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v74: 353 pgs: 2 unknown, 351 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:42:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Sep 30 17:42:09 compute-0 ceph-mon[73755]: osdmap e111: 2 total, 2 up, 2 in
Sep 30 17:42:09 compute-0 ceph-mon[73755]: 10.b scrub starts
Sep 30 17:42:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e112 e112: 2 total, 2 up, 2 in
Sep 30 17:42:09 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e112: 2 total, 2 up, 2 in
Sep 30 17:42:09 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 112 pg[10.e( v 54'774 (0'0,54'774] local-lis/les=111/112 n=5 ec=58/48 lis/c=80/80 les/c/f=81/81/0 sis=111) [1]/[0] async=[1] r=0 lpr=111 pi=[80,111)/1 crt=54'774 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:42:09 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 112 pg[10.a( v 54'774 (0'0,54'774] local-lis/les=111/112 n=6 ec=58/48 lis/c=91/91 les/c/f=92/92/0 sis=111) [1]/[0] async=[1] r=0 lpr=111 pi=[91,111)/1 crt=54'774 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:42:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:42:09.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:42:09 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44009ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:42:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:42:09 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44009ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:42:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:42:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Sep 30 17:42:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e113 e113: 2 total, 2 up, 2 in
Sep 30 17:42:09 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e113: 2 total, 2 up, 2 in
Sep 30 17:42:09 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 113 pg[10.a( v 54'774 (0'0,54'774] local-lis/les=111/112 n=6 ec=58/48 lis/c=111/91 les/c/f=112/92/0 sis=113 pruub=15.719518661s) [1] async=[1] r=-1 lpr=113 pi=[91,113)/1 crt=54'774 mlcod 54'774 active pruub 237.516967773s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:42:09 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 113 pg[10.e( v 54'774 (0'0,54'774] local-lis/les=111/112 n=5 ec=58/48 lis/c=111/80 les/c/f=112/81/0 sis=113 pruub=15.715503693s) [1] async=[1] r=-1 lpr=113 pi=[80,113)/1 crt=54'774 mlcod 54'774 active pruub 237.513000488s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:42:09 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 113 pg[10.a( v 54'774 (0'0,54'774] local-lis/les=111/112 n=6 ec=58/48 lis/c=111/91 les/c/f=112/92/0 sis=113 pruub=15.719460487s) [1] r=-1 lpr=113 pi=[91,113)/1 crt=54'774 mlcod 0'0 unknown NOTIFY pruub 237.516967773s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:42:09 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 113 pg[10.e( v 54'774 (0'0,54'774] local-lis/les=111/112 n=5 ec=58/48 lis/c=111/80 les/c/f=112/81/0 sis=113 pruub=15.715457916s) [1] r=-1 lpr=113 pi=[80,113)/1 crt=54'774 mlcod 0'0 unknown NOTIFY pruub 237.513000488s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:42:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:42:09.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:09 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.19 deep-scrub starts
Sep 30 17:42:09 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.19 deep-scrub ok
Sep 30 17:42:10 compute-0 ceph-mon[73755]: 10.b scrub ok
Sep 30 17:42:10 compute-0 ceph-mon[73755]: pgmap v74: 353 pgs: 2 unknown, 351 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:42:10 compute-0 ceph-mon[73755]: osdmap e112: 2 total, 2 up, 2 in
Sep 30 17:42:10 compute-0 ceph-mon[73755]: osdmap e113: 2 total, 2 up, 2 in
Sep 30 17:42:10 compute-0 ceph-mon[73755]: 10.19 deep-scrub starts
Sep 30 17:42:10 compute-0 ceph-mon[73755]: 10.19 deep-scrub ok
Sep 30 17:42:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Sep 30 17:42:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e114 e114: 2 total, 2 up, 2 in
Sep 30 17:42:10 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e114: 2 total, 2 up, 2 in
Sep 30 17:42:10 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Sep 30 17:42:11 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Sep 30 17:42:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v78: 353 pgs: 2 unknown, 351 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:42:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:42:11.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:42:11 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff14003c70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:42:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:42:11 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff18002690 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:42:11 compute-0 ceph-mon[73755]: osdmap e114: 2 total, 2 up, 2 in
Sep 30 17:42:11 compute-0 ceph-mon[73755]: 10.1a scrub starts
Sep 30 17:42:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:42:11.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:11 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.1b deep-scrub starts
Sep 30 17:42:11 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.1b deep-scrub ok
Sep 30 17:42:12 compute-0 ceph-mon[73755]: 10.1a scrub ok
Sep 30 17:42:12 compute-0 ceph-mon[73755]: 10.a scrub starts
Sep 30 17:42:12 compute-0 ceph-mon[73755]: 10.a scrub ok
Sep 30 17:42:12 compute-0 ceph-mon[73755]: pgmap v78: 353 pgs: 2 unknown, 351 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:42:12 compute-0 ceph-mon[73755]: 10.1b deep-scrub starts
Sep 30 17:42:12 compute-0 ceph-mon[73755]: 10.1b deep-scrub ok
Sep 30 17:42:12 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Sep 30 17:42:12 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Sep 30 17:42:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v79: 353 pgs: 2 unknown, 351 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:42:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:42:13.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:42:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44009ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:42:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:42:13 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff44009ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:42:13 compute-0 ceph-mon[73755]: 10.1c scrub starts
Sep 30 17:42:13 compute-0 ceph-mon[73755]: 10.1c scrub ok
Sep 30 17:42:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:42:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:42:13.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:42:13 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Sep 30 17:42:13 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Sep 30 17:42:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:42:14 compute-0 ceph-mon[73755]: pgmap v79: 353 pgs: 2 unknown, 351 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:42:14 compute-0 ceph-mon[73755]: 10.1d scrub starts
Sep 30 17:42:14 compute-0 ceph-mon[73755]: 10.1d scrub ok
Sep 30 17:42:14 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Sep 30 17:42:14 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Sep 30 17:42:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v80: 353 pgs: 353 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:42:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Sep 30 17:42:15 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Sep 30 17:42:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:42:15.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:42:15 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff14003c90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:42:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:42:15 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff14003c90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:42:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Sep 30 17:42:15 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Sep 30 17:42:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e115 e115: 2 total, 2 up, 2 in
Sep 30 17:42:15 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e115: 2 total, 2 up, 2 in
Sep 30 17:42:15 compute-0 ceph-mon[73755]: 10.1e scrub starts
Sep 30 17:42:15 compute-0 ceph-mon[73755]: 10.1e scrub ok
Sep 30 17:42:15 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Sep 30 17:42:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:42:15.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:15 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.1f deep-scrub starts
Sep 30 17:42:15 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.1f deep-scrub ok
Sep 30 17:42:16 compute-0 ceph-mon[73755]: pgmap v80: 353 pgs: 353 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:42:16 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Sep 30 17:42:16 compute-0 ceph-mon[73755]: osdmap e115: 2 total, 2 up, 2 in
Sep 30 17:42:16 compute-0 ceph-mon[73755]: 10.1f deep-scrub starts
Sep 30 17:42:16 compute-0 ceph-mon[73755]: 10.1f deep-scrub ok
Sep 30 17:42:16 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Sep 30 17:42:16 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Sep 30 17:42:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v82: 353 pgs: 353 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:42:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Sep 30 17:42:17 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Sep 30 17:42:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:42:17.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[97645]: 30/09/2025 17:42:17 : epoch 68dc15fd : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff18002690 fd 47 proxy ignored for local
Sep 30 17:42:17 compute-0 kernel: ganesha.nfsd[97688]: segfault at 50 ip 00007effedd8e32e sp 00007effbdffa210 error 4 in libntirpc.so.5.8[7effedd73000+2c000] likely on CPU 6 (core 0, socket 6)
Sep 30 17:42:17 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Sep 30 17:42:17 compute-0 systemd[1]: Created slice Slice /system/systemd-coredump.
Sep 30 17:42:17 compute-0 systemd[1]: Started Process Core Dump (PID 107937/UID 0).
Sep 30 17:42:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Sep 30 17:42:17 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Sep 30 17:42:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e116 e116: 2 total, 2 up, 2 in
Sep 30 17:42:17 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e116: 2 total, 2 up, 2 in
Sep 30 17:42:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:42:17.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:17 compute-0 ceph-mon[73755]: 10.10 scrub starts
Sep 30 17:42:17 compute-0 ceph-mon[73755]: 10.10 scrub ok
Sep 30 17:42:17 compute-0 ceph-mon[73755]: pgmap v82: 353 pgs: 353 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:42:17 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Sep 30 17:42:17 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.12 deep-scrub starts
Sep 30 17:42:17 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.12 deep-scrub ok
Sep 30 17:42:18 compute-0 systemd-coredump[107938]: Process 97649 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 41:
                                                    #0  0x00007effedd8e32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Sep 30 17:42:18 compute-0 systemd[1]: systemd-coredump@0-107937-0.service: Deactivated successfully.
Sep 30 17:42:18 compute-0 systemd[1]: systemd-coredump@0-107937-0.service: Consumed 1.140s CPU time.
Sep 30 17:42:18 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Sep 30 17:42:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:42:18] "GET /metrics HTTP/1.1" 200 46512 "" "Prometheus/2.51.0"
Sep 30 17:42:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:42:18] "GET /metrics HTTP/1.1" 200 46512 "" "Prometheus/2.51.0"
Sep 30 17:42:18 compute-0 podman[107944]: 2025-09-30 17:42:18.805543648 +0000 UTC m=+0.031280027 container died 9e97ff95260c5eee634ed5be7e6f6acdd2a5f44fb41d87718213241efcab83ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:42:18 compute-0 ceph-osd[82241]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Sep 30 17:42:18 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Sep 30 17:42:18 compute-0 ceph-mon[73755]: osdmap e116: 2 total, 2 up, 2 in
Sep 30 17:42:18 compute-0 ceph-mon[73755]: 10.12 deep-scrub starts
Sep 30 17:42:18 compute-0 ceph-mon[73755]: 10.12 deep-scrub ok
Sep 30 17:42:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-92686d956068f925812fc42d9f71b6ab54a3b07b027d2d050ec8ab17b15e0227-merged.mount: Deactivated successfully.
Sep 30 17:42:18 compute-0 systemd[92405]: Starting Mark boot as successful...
Sep 30 17:42:18 compute-0 systemd[92405]: Finished Mark boot as successful.
Sep 30 17:42:18 compute-0 podman[107944]: 2025-09-30 17:42:18.849667759 +0000 UTC m=+0.075404138 container remove 9e97ff95260c5eee634ed5be7e6f6acdd2a5f44fb41d87718213241efcab83ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 17:42:18 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Main process exited, code=exited, status=139/n/a
Sep 30 17:42:18 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Failed with result 'exit-code'.
Sep 30 17:42:18 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Consumed 1.504s CPU time.
Sep 30 17:42:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v84: 353 pgs: 353 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:42:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Sep 30 17:42:19 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Sep 30 17:42:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 17:42:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:42:19.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 17:42:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:42:19 compute-0 sudo[107992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:42:19 compute-0 sudo[107992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:42:19 compute-0 sudo[107992]: pam_unix(sudo:session): session closed for user root
Sep 30 17:42:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:42:19.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Sep 30 17:42:19 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Sep 30 17:42:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e117 e117: 2 total, 2 up, 2 in
Sep 30 17:42:19 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e117: 2 total, 2 up, 2 in
Sep 30 17:42:19 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 117 pg[10.19( v 54'774 (0'0,54'774] local-lis/les=58/59 n=7 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=117 pruub=12.358149529s) [1] r=-1 lpr=117 pi=[58,117)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 244.328720093s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:42:19 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 117 pg[10.19( v 54'774 (0'0,54'774] local-lis/les=58/59 n=7 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=117 pruub=12.358114243s) [1] r=-1 lpr=117 pi=[58,117)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 244.328720093s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:42:19 compute-0 ceph-mon[73755]: 10.11 scrub starts
Sep 30 17:42:19 compute-0 ceph-mon[73755]: 10.11 scrub ok
Sep 30 17:42:19 compute-0 ceph-mon[73755]: pgmap v84: 353 pgs: 353 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:42:19 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Sep 30 17:42:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Sep 30 17:42:20 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Sep 30 17:42:20 compute-0 ceph-mon[73755]: osdmap e117: 2 total, 2 up, 2 in
Sep 30 17:42:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e118 e118: 2 total, 2 up, 2 in
Sep 30 17:42:20 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e118: 2 total, 2 up, 2 in
Sep 30 17:42:20 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 118 pg[10.19( v 54'774 (0'0,54'774] local-lis/les=58/59 n=7 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=118) [1]/[0] r=0 lpr=118 pi=[58,118)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:42:20 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 118 pg[10.19( v 54'774 (0'0,54'774] local-lis/les=58/59 n=7 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=118) [1]/[0] r=0 lpr=118 pi=[58,118)/1 crt=54'774 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 17:42:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v87: 353 pgs: 353 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:42:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Sep 30 17:42:21 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Sep 30 17:42:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:42:21.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:42:21.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Sep 30 17:42:21 compute-0 ceph-mon[73755]: osdmap e118: 2 total, 2 up, 2 in
Sep 30 17:42:21 compute-0 ceph-mon[73755]: pgmap v87: 353 pgs: 353 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:42:21 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Sep 30 17:42:21 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Sep 30 17:42:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e119 e119: 2 total, 2 up, 2 in
Sep 30 17:42:21 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e119: 2 total, 2 up, 2 in
Sep 30 17:42:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:42:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:42:22 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 119 pg[10.19( v 54'774 (0'0,54'774] local-lis/les=118/119 n=7 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=118) [1]/[0] async=[1] r=0 lpr=118 pi=[58,118)/1 crt=54'774 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:42:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Sep 30 17:42:22 compute-0 ceph-mon[73755]: osdmap e119: 2 total, 2 up, 2 in
Sep 30 17:42:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:42:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v89: 353 pgs: 353 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:42:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Sep 30 17:42:23 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Sep 30 17:42:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:42:23.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/174223 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 17:42:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:42:23.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Sep 30 17:42:23 compute-0 ceph-mon[73755]: pgmap v89: 353 pgs: 353 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:42:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Sep 30 17:42:23 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Sep 30 17:42:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e120 e120: 2 total, 2 up, 2 in
Sep 30 17:42:23 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e120: 2 total, 2 up, 2 in
Sep 30 17:42:23 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 120 pg[10.19( v 54'774 (0'0,54'774] local-lis/les=118/119 n=7 ec=58/48 lis/c=118/58 les/c/f=119/59/0 sis=120 pruub=14.899576187s) [1] async=[1] r=-1 lpr=120 pi=[58,120)/1 crt=54'774 lcod 0'0 mlcod 0'0 active pruub 250.967346191s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:42:23 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 120 pg[10.19( v 54'774 (0'0,54'774] local-lis/les=118/119 n=7 ec=58/48 lis/c=118/58 les/c/f=119/59/0 sis=120 pruub=14.899506569s) [1] r=-1 lpr=120 pi=[58,120)/1 crt=54'774 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.967346191s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:42:23 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 120 pg[10.1b( v 54'774 (0'0,54'774] local-lis/les=93/94 n=2 ec=58/48 lis/c=93/93 les/c/f=94/94/0 sis=120 pruub=15.745737076s) [1] r=-1 lpr=120 pi=[93,120)/1 crt=54'774 mlcod 0'0 active pruub 251.815368652s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:42:23 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 120 pg[10.1b( v 54'774 (0'0,54'774] local-lis/les=93/94 n=2 ec=58/48 lis/c=93/93 les/c/f=94/94/0 sis=120 pruub=15.745699883s) [1] r=-1 lpr=120 pi=[93,120)/1 crt=54'774 mlcod 0'0 unknown NOTIFY pruub 251.815368652s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:42:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:42:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Sep 30 17:42:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e121 e121: 2 total, 2 up, 2 in
Sep 30 17:42:24 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e121: 2 total, 2 up, 2 in
Sep 30 17:42:24 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 121 pg[10.1b( v 54'774 (0'0,54'774] local-lis/les=93/94 n=2 ec=58/48 lis/c=93/93 les/c/f=94/94/0 sis=121) [1]/[0] r=0 lpr=121 pi=[93,121)/1 crt=54'774 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:42:24 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 121 pg[10.1b( v 54'774 (0'0,54'774] local-lis/les=93/94 n=2 ec=58/48 lis/c=93/93 les/c/f=94/94/0 sis=121) [1]/[0] r=0 lpr=121 pi=[93,121)/1 crt=54'774 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 17:42:24 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Sep 30 17:42:24 compute-0 ceph-mon[73755]: osdmap e120: 2 total, 2 up, 2 in
Sep 30 17:42:24 compute-0 ceph-mon[73755]: osdmap e121: 2 total, 2 up, 2 in
Sep 30 17:42:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v92: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail; 25 B/s, 1 objects/s recovering
Sep 30 17:42:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Sep 30 17:42:25 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Sep 30 17:42:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:42:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:42:25.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:42:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Sep 30 17:42:25 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Sep 30 17:42:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e122 e122: 2 total, 2 up, 2 in
Sep 30 17:42:25 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e122: 2 total, 2 up, 2 in
Sep 30 17:42:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:42:25.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:25 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 122 pg[10.1b( v 54'774 (0'0,54'774] local-lis/les=121/122 n=2 ec=58/48 lis/c=93/93 les/c/f=94/94/0 sis=121) [1]/[0] async=[1] r=0 lpr=121 pi=[93,121)/1 crt=54'774 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:42:25 compute-0 ceph-mon[73755]: pgmap v92: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail; 25 B/s, 1 objects/s recovering
Sep 30 17:42:25 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Sep 30 17:42:25 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Sep 30 17:42:25 compute-0 ceph-mon[73755]: osdmap e122: 2 total, 2 up, 2 in
Sep 30 17:42:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Sep 30 17:42:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e123 e123: 2 total, 2 up, 2 in
Sep 30 17:42:26 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e123: 2 total, 2 up, 2 in
Sep 30 17:42:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 123 pg[10.1b( v 54'774 (0'0,54'774] local-lis/les=121/122 n=2 ec=58/48 lis/c=121/93 les/c/f=122/94/0 sis=123 pruub=15.147477150s) [1] async=[1] r=-1 lpr=123 pi=[93,123)/1 crt=54'774 mlcod 54'774 active pruub 253.970901489s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:42:26 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 123 pg[10.1b( v 54'774 (0'0,54'774] local-lis/les=121/122 n=2 ec=58/48 lis/c=121/93 les/c/f=122/94/0 sis=123 pruub=15.147421837s) [1] r=-1 lpr=123 pi=[93,123)/1 crt=54'774 mlcod 0'0 unknown NOTIFY pruub 253.970901489s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:42:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v95: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail; 27 B/s, 1 objects/s recovering
Sep 30 17:42:27 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Sep 30 17:42:27 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Sep 30 17:42:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:42:27.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:27 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Sep 30 17:42:27 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Sep 30 17:42:27 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e124 e124: 2 total, 2 up, 2 in
Sep 30 17:42:27 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e124: 2 total, 2 up, 2 in
Sep 30 17:42:27 compute-0 ceph-mon[73755]: osdmap e123: 2 total, 2 up, 2 in
Sep 30 17:42:27 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Sep 30 17:42:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 17:42:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:42:27.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 17:42:28 compute-0 ceph-mon[73755]: pgmap v95: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail; 27 B/s, 1 objects/s recovering
Sep 30 17:42:28 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Sep 30 17:42:28 compute-0 ceph-mon[73755]: osdmap e124: 2 total, 2 up, 2 in
Sep 30 17:42:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:42:28] "GET /metrics HTTP/1.1" 200 46508 "" "Prometheus/2.51.0"
Sep 30 17:42:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:42:28] "GET /metrics HTTP/1.1" 200 46508 "" "Prometheus/2.51.0"
Sep 30 17:42:29 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Scheduled restart job, restart counter is at 1.
Sep 30 17:42:29 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:42:29 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Consumed 1.504s CPU time.
Sep 30 17:42:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v97: 353 pgs: 353 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail; 227 B/s rd, 0 op/s; 24 B/s, 0 objects/s recovering
Sep 30 17:42:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Sep 30 17:42:29 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Sep 30 17:42:29 compute-0 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 17:42:29 compute-0 podman[108103]: 2025-09-30 17:42:29.422564287 +0000 UTC m=+0.044118332 container create 75deb80397fe5d91ce9250625aa16920022fde4c9d1304b3b5e47e52826b9df6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1)
Sep 30 17:42:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6d132a985a5ab22bb949b074513ab85e3d1ee52271e273c282e4c00b5b9ffae/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Sep 30 17:42:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6d132a985a5ab22bb949b074513ab85e3d1ee52271e273c282e4c00b5b9ffae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:42:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6d132a985a5ab22bb949b074513ab85e3d1ee52271e273c282e4c00b5b9ffae/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:42:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6d132a985a5ab22bb949b074513ab85e3d1ee52271e273c282e4c00b5b9ffae/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-0.syzvbh-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:42:29 compute-0 podman[108103]: 2025-09-30 17:42:29.492983943 +0000 UTC m=+0.114537998 container init 75deb80397fe5d91ce9250625aa16920022fde4c9d1304b3b5e47e52826b9df6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Sep 30 17:42:29 compute-0 podman[108103]: 2025-09-30 17:42:29.399726651 +0000 UTC m=+0.021280736 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:42:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:42:29.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:29 compute-0 podman[108103]: 2025-09-30 17:42:29.498367403 +0000 UTC m=+0.119921438 container start 75deb80397fe5d91ce9250625aa16920022fde4c9d1304b3b5e47e52826b9df6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Sep 30 17:42:29 compute-0 bash[108103]: 75deb80397fe5d91ce9250625aa16920022fde4c9d1304b3b5e47e52826b9df6
Sep 30 17:42:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:29 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Sep 30 17:42:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:29 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Sep 30 17:42:29 compute-0 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:42:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:29 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Sep 30 17:42:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:29 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Sep 30 17:42:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:29 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Sep 30 17:42:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:29 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Sep 30 17:42:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:29 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Sep 30 17:42:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:29 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 17:42:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 17:42:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Sep 30 17:42:29 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Sep 30 17:42:29 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Sep 30 17:42:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e125 e125: 2 total, 2 up, 2 in
Sep 30 17:42:29 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e125: 2 total, 2 up, 2 in
Sep 30 17:42:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:42:29.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:29 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 125 pg[10.1e( v 54'774 (0'0,54'774] local-lis/les=80/81 n=5 ec=58/48 lis/c=80/80 les/c/f=81/81/0 sis=125 pruub=8.207265854s) [1] r=-1 lpr=125 pi=[80,125)/1 crt=54'774 mlcod 0'0 active pruub 250.186050415s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:42:29 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 125 pg[10.1e( v 54'774 (0'0,54'774] local-lis/les=80/81 n=5 ec=58/48 lis/c=80/80 les/c/f=81/81/0 sis=125 pruub=8.207092285s) [1] r=-1 lpr=125 pi=[80,125)/1 crt=54'774 mlcod 0'0 unknown NOTIFY pruub 250.186050415s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:42:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Sep 30 17:42:30 compute-0 ceph-mon[73755]: pgmap v97: 353 pgs: 353 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail; 227 B/s rd, 0 op/s; 24 B/s, 0 objects/s recovering
Sep 30 17:42:30 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Sep 30 17:42:30 compute-0 ceph-mon[73755]: osdmap e125: 2 total, 2 up, 2 in
Sep 30 17:42:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e126 e126: 2 total, 2 up, 2 in
Sep 30 17:42:30 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e126: 2 total, 2 up, 2 in
Sep 30 17:42:30 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 126 pg[10.1e( v 54'774 (0'0,54'774] local-lis/les=80/81 n=5 ec=58/48 lis/c=80/80 les/c/f=81/81/0 sis=126) [1]/[0] r=0 lpr=126 pi=[80,126)/1 crt=54'774 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:42:30 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 126 pg[10.1e( v 54'774 (0'0,54'774] local-lis/les=80/81 n=5 ec=58/48 lis/c=80/80 les/c/f=81/81/0 sis=126) [1]/[0] r=0 lpr=126 pi=[80,126)/1 crt=54'774 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 17:42:30 compute-0 sudo[108162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:42:30 compute-0 sudo[108162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:42:30 compute-0 sudo[108162]: pam_unix(sudo:session): session closed for user root
Sep 30 17:42:31 compute-0 sudo[108187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Sep 30 17:42:31 compute-0 sudo[108187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:42:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v100: 353 pgs: 353 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail; 227 B/s rd, 0 op/s; 24 B/s, 0 objects/s recovering
Sep 30 17:42:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Sep 30 17:42:31 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 17:42:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:42:31.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:31 compute-0 podman[108288]: 2025-09-30 17:42:31.57464834 +0000 UTC m=+0.053248559 container exec 28cb2903608cec8e7c5a39aa3f398cadea7d807beef2cb8cca333795d18a1341 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Sep 30 17:42:31 compute-0 podman[108288]: 2025-09-30 17:42:31.668702762 +0000 UTC m=+0.147302971 container exec_died 28cb2903608cec8e7c5a39aa3f398cadea7d807beef2cb8cca333795d18a1341 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Sep 30 17:42:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Sep 30 17:42:31 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 17:42:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e127 e127: 2 total, 2 up, 2 in
Sep 30 17:42:31 compute-0 ceph-mon[73755]: osdmap e126: 2 total, 2 up, 2 in
Sep 30 17:42:31 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 17:42:31 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e127: 2 total, 2 up, 2 in
Sep 30 17:42:31 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 127 pg[10.1e( v 54'774 (0'0,54'774] local-lis/les=126/127 n=5 ec=58/48 lis/c=80/80 les/c/f=81/81/0 sis=126) [1]/[0] async=[1] r=0 lpr=126 pi=[80,126)/1 crt=54'774 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 17:42:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:42:31.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:32 compute-0 podman[108406]: 2025-09-30 17:42:32.127084664 +0000 UTC m=+0.060056057 container exec 0a64c261699b14850432928cca94e24743e97435fdc1d2cca08edcee1f870789 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:42:32 compute-0 podman[108406]: 2025-09-30 17:42:32.142424204 +0000 UTC m=+0.075395617 container exec_died 0a64c261699b14850432928cca94e24743e97435fdc1d2cca08edcee1f870789 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:42:32 compute-0 podman[108498]: 2025-09-30 17:42:32.472611654 +0000 UTC m=+0.045957380 container exec 75deb80397fe5d91ce9250625aa16920022fde4c9d1304b3b5e47e52826b9df6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:42:32 compute-0 podman[108498]: 2025-09-30 17:42:32.486689541 +0000 UTC m=+0.060035247 container exec_died 75deb80397fe5d91ce9250625aa16920022fde4c9d1304b3b5e47e52826b9df6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Sep 30 17:42:32 compute-0 podman[108559]: 2025-09-30 17:42:32.705897747 +0000 UTC m=+0.054154793 container exec e1cdc51276e16f535030605d2a18f629e9fe31bf73cf0a7411e18214526b68e5 (image=quay.io/ceph/haproxy:2.3, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha)
Sep 30 17:42:32 compute-0 podman[108559]: 2025-09-30 17:42:32.713468574 +0000 UTC m=+0.061725590 container exec_died e1cdc51276e16f535030605d2a18f629e9fe31bf73cf0a7411e18214526b68e5 (image=quay.io/ceph/haproxy:2.3, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha)
Sep 30 17:42:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Sep 30 17:42:32 compute-0 ceph-mon[73755]: pgmap v100: 353 pgs: 353 active+clean; 457 KiB data, 99 MiB used, 40 GiB / 40 GiB avail; 227 B/s rd, 0 op/s; 24 B/s, 0 objects/s recovering
Sep 30 17:42:32 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 17:42:32 compute-0 ceph-mon[73755]: osdmap e127: 2 total, 2 up, 2 in
Sep 30 17:42:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e128 e128: 2 total, 2 up, 2 in
Sep 30 17:42:32 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e128: 2 total, 2 up, 2 in
Sep 30 17:42:32 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 128 pg[10.1e( v 54'774 (0'0,54'774] local-lis/les=126/127 n=5 ec=58/48 lis/c=126/80 les/c/f=127/81/0 sis=128 pruub=14.987646103s) [1] async=[1] r=-1 lpr=128 pi=[80,128)/1 crt=54'774 mlcod 54'774 active pruub 259.931488037s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 17:42:32 compute-0 ceph-osd[82241]: osd.0 pg_epoch: 128 pg[10.1e( v 54'774 (0'0,54'774] local-lis/les=126/127 n=5 ec=58/48 lis/c=126/80 les/c/f=127/81/0 sis=128 pruub=14.987600327s) [1] r=-1 lpr=128 pi=[80,128)/1 crt=54'774 mlcod 0'0 unknown NOTIFY pruub 259.931488037s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 17:42:32 compute-0 podman[108621]: 2025-09-30 17:42:32.952802114 +0000 UTC m=+0.055429876 container exec b5b34e9f9a1e5f8a3081bfbcbc3f8d4401b0873cf75447793762597a4ea89ff4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, description=keepalived for Ceph, io.buildah.version=1.28.2, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2023-02-22T09:23:20, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9)
Sep 30 17:42:32 compute-0 podman[108621]: 2025-09-30 17:42:32.961275935 +0000 UTC m=+0.063903657 container exec_died b5b34e9f9a1e5f8a3081bfbcbc3f8d4401b0873cf75447793762597a4ea89ff4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., version=2.2.4, release=1793, architecture=x86_64, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, description=keepalived for Ceph, distribution-scope=public, io.openshift.tags=Ceph keepalived, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Sep 30 17:42:33 compute-0 podman[108685]: 2025-09-30 17:42:33.172083942 +0000 UTC m=+0.048458124 container exec 316307af09cabd347a032cf06e9544ea621ab1be46fe52d1c226ab05d1d9e331 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:42:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v103: 353 pgs: 353 active+clean; 457 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 0 B/s rd, 0 op/s
Sep 30 17:42:33 compute-0 podman[108685]: 2025-09-30 17:42:33.199675402 +0000 UTC m=+0.076049564 container exec_died 316307af09cabd347a032cf06e9544ea621ab1be46fe52d1c226ab05d1d9e331 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:42:33 compute-0 podman[108761]: 2025-09-30 17:42:33.374527971 +0000 UTC m=+0.042354586 container exec cb74c136339a4f8b8601bbbbeb0c8f6158501bcf17f6fc4369db4a5ec1b442a2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:42:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:42:33.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:33 compute-0 podman[108761]: 2025-09-30 17:42:33.516980075 +0000 UTC m=+0.184806680 container exec_died cb74c136339a4f8b8601bbbbeb0c8f6158501bcf17f6fc4369db4a5ec1b442a2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:42:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Sep 30 17:42:33 compute-0 ceph-mon[73755]: osdmap e128: 2 total, 2 up, 2 in
Sep 30 17:42:33 compute-0 ceph-mon[73755]: pgmap v103: 353 pgs: 353 active+clean; 457 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 0 B/s rd, 0 op/s
Sep 30 17:42:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:42:33.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 e129: 2 total, 2 up, 2 in
Sep 30 17:42:33 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e129: 2 total, 2 up, 2 in
Sep 30 17:42:33 compute-0 podman[108871]: 2025-09-30 17:42:33.881038737 +0000 UTC m=+0.068311672 container exec 262a601a228f752d0c9dbf2b0c2d1b05fe66f88f5c16c2fc8adb324868709b53 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:42:33 compute-0 podman[108871]: 2025-09-30 17:42:33.915614898 +0000 UTC m=+0.102887783 container exec_died 262a601a228f752d0c9dbf2b0c2d1b05fe66f88f5c16c2fc8adb324868709b53 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:42:33 compute-0 sudo[108187]: pam_unix(sudo:session): session closed for user root
Sep 30 17:42:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:42:33 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:42:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:42:34 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:42:34 compute-0 sudo[108913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:42:34 compute-0 sudo[108913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:42:34 compute-0 sudo[108913]: pam_unix(sudo:session): session closed for user root
Sep 30 17:42:34 compute-0 sudo[108938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 17:42:34 compute-0 sudo[108938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:42:34 compute-0 sudo[108938]: pam_unix(sudo:session): session closed for user root
Sep 30 17:42:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:42:34 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:42:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 17:42:34 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:42:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 17:42:34 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:42:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 17:42:34 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:42:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 17:42:34 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:42:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 17:42:34 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:42:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:42:34 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:42:34 compute-0 sudo[108996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:42:34 compute-0 sudo[108996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:42:34 compute-0 sudo[108996]: pam_unix(sudo:session): session closed for user root
Sep 30 17:42:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:42:34 compute-0 sudo[109021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 17:42:34 compute-0 sudo[109021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:42:34 compute-0 ceph-mon[73755]: osdmap e129: 2 total, 2 up, 2 in
Sep 30 17:42:34 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:42:34 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:42:34 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:42:34 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:42:34 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:42:34 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:42:34 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:42:34 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:42:34 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:42:35 compute-0 podman[109085]: 2025-09-30 17:42:35.034305037 +0000 UTC m=+0.034393397 container create d6de5a251dc157116fcd60163450146070406af162ad1b1387807b2ddb25a0b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_cohen, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:42:35 compute-0 systemd[1]: Started libpod-conmon-d6de5a251dc157116fcd60163450146070406af162ad1b1387807b2ddb25a0b9.scope.
Sep 30 17:42:35 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:42:35 compute-0 podman[109085]: 2025-09-30 17:42:35.091365175 +0000 UTC m=+0.091453545 container init d6de5a251dc157116fcd60163450146070406af162ad1b1387807b2ddb25a0b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_cohen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Sep 30 17:42:35 compute-0 podman[109085]: 2025-09-30 17:42:35.097657599 +0000 UTC m=+0.097745959 container start d6de5a251dc157116fcd60163450146070406af162ad1b1387807b2ddb25a0b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_cohen, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Sep 30 17:42:35 compute-0 podman[109085]: 2025-09-30 17:42:35.101466679 +0000 UTC m=+0.101555029 container attach d6de5a251dc157116fcd60163450146070406af162ad1b1387807b2ddb25a0b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_cohen, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default)
Sep 30 17:42:35 compute-0 systemd[1]: libpod-d6de5a251dc157116fcd60163450146070406af162ad1b1387807b2ddb25a0b9.scope: Deactivated successfully.
Sep 30 17:42:35 compute-0 priceless_cohen[109101]: 167 167
Sep 30 17:42:35 compute-0 conmon[109101]: conmon d6de5a251dc157116fcd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d6de5a251dc157116fcd60163450146070406af162ad1b1387807b2ddb25a0b9.scope/container/memory.events
Sep 30 17:42:35 compute-0 podman[109085]: 2025-09-30 17:42:35.107519236 +0000 UTC m=+0.107607596 container died d6de5a251dc157116fcd60163450146070406af162ad1b1387807b2ddb25a0b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:42:35 compute-0 podman[109085]: 2025-09-30 17:42:35.018996828 +0000 UTC m=+0.019085208 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:42:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-0fd0e94a61d111cb7b8fc65faba31dc128ec9fe3e0a5e800f7e4c88ebcefa9eb-merged.mount: Deactivated successfully.
Sep 30 17:42:35 compute-0 podman[109085]: 2025-09-30 17:42:35.145770364 +0000 UTC m=+0.145858724 container remove d6de5a251dc157116fcd60163450146070406af162ad1b1387807b2ddb25a0b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_cohen, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Sep 30 17:42:35 compute-0 systemd[1]: libpod-conmon-d6de5a251dc157116fcd60163450146070406af162ad1b1387807b2ddb25a0b9.scope: Deactivated successfully.
Sep 30 17:42:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v105: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 695 B/s wr, 2 op/s; 0 B/s, 1 objects/s recovering
Sep 30 17:42:35 compute-0 podman[109126]: 2025-09-30 17:42:35.276203765 +0000 UTC m=+0.036265977 container create 27fc58ae81409664f14724e8c29a857ad0c04e588af2361011d0f823724897fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Sep 30 17:42:35 compute-0 systemd[1]: Started libpod-conmon-27fc58ae81409664f14724e8c29a857ad0c04e588af2361011d0f823724897fb.scope.
Sep 30 17:42:35 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:42:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81c16e69a495b4bdb90949ce533e01309b546cb3e135406b99b061f98ce19fb9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:42:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81c16e69a495b4bdb90949ce533e01309b546cb3e135406b99b061f98ce19fb9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:42:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81c16e69a495b4bdb90949ce533e01309b546cb3e135406b99b061f98ce19fb9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:42:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81c16e69a495b4bdb90949ce533e01309b546cb3e135406b99b061f98ce19fb9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:42:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81c16e69a495b4bdb90949ce533e01309b546cb3e135406b99b061f98ce19fb9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:42:35 compute-0 podman[109126]: 2025-09-30 17:42:35.345025679 +0000 UTC m=+0.105087931 container init 27fc58ae81409664f14724e8c29a857ad0c04e588af2361011d0f823724897fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_shtern, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:42:35 compute-0 podman[109126]: 2025-09-30 17:42:35.352679379 +0000 UTC m=+0.112741591 container start 27fc58ae81409664f14724e8c29a857ad0c04e588af2361011d0f823724897fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:42:35 compute-0 podman[109126]: 2025-09-30 17:42:35.259837738 +0000 UTC m=+0.019899970 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:42:35 compute-0 podman[109126]: 2025-09-30 17:42:35.356660283 +0000 UTC m=+0.116722495 container attach 27fc58ae81409664f14724e8c29a857ad0c04e588af2361011d0f823724897fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:42:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:42:35.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:35 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 17:42:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:35 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 17:42:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:35 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 17:42:35 compute-0 musing_shtern[109144]: --> passed data devices: 0 physical, 1 LVM
Sep 30 17:42:35 compute-0 musing_shtern[109144]: --> All data devices are unavailable
Sep 30 17:42:35 compute-0 systemd[1]: libpod-27fc58ae81409664f14724e8c29a857ad0c04e588af2361011d0f823724897fb.scope: Deactivated successfully.
Sep 30 17:42:35 compute-0 podman[109126]: 2025-09-30 17:42:35.6917434 +0000 UTC m=+0.451805632 container died 27fc58ae81409664f14724e8c29a857ad0c04e588af2361011d0f823724897fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Sep 30 17:42:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-81c16e69a495b4bdb90949ce533e01309b546cb3e135406b99b061f98ce19fb9-merged.mount: Deactivated successfully.
Sep 30 17:42:35 compute-0 podman[109126]: 2025-09-30 17:42:35.741528548 +0000 UTC m=+0.501590760 container remove 27fc58ae81409664f14724e8c29a857ad0c04e588af2361011d0f823724897fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:42:35 compute-0 systemd[1]: libpod-conmon-27fc58ae81409664f14724e8c29a857ad0c04e588af2361011d0f823724897fb.scope: Deactivated successfully.
Sep 30 17:42:35 compute-0 sudo[109021]: pam_unix(sudo:session): session closed for user root
Sep 30 17:42:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:42:35.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:35 compute-0 sudo[109172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:42:35 compute-0 sudo[109172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:42:35 compute-0 sudo[109172]: pam_unix(sudo:session): session closed for user root
Sep 30 17:42:35 compute-0 ceph-mon[73755]: pgmap v105: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 695 B/s wr, 2 op/s; 0 B/s, 1 objects/s recovering
Sep 30 17:42:35 compute-0 sudo[109197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 17:42:35 compute-0 sudo[109197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:42:36 compute-0 podman[109261]: 2025-09-30 17:42:36.294554658 +0000 UTC m=+0.035804025 container create 9629631bd7fb4725d820820166d471bb9010fb87d0fd8e7982b1d1dd4dee0af2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:42:36 compute-0 systemd[1]: Started libpod-conmon-9629631bd7fb4725d820820166d471bb9010fb87d0fd8e7982b1d1dd4dee0af2.scope.
Sep 30 17:42:36 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:42:36 compute-0 podman[109261]: 2025-09-30 17:42:36.37328002 +0000 UTC m=+0.114529387 container init 9629631bd7fb4725d820820166d471bb9010fb87d0fd8e7982b1d1dd4dee0af2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_ramanujan, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Sep 30 17:42:36 compute-0 podman[109261]: 2025-09-30 17:42:36.279885925 +0000 UTC m=+0.021135312 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:42:36 compute-0 podman[109261]: 2025-09-30 17:42:36.380324464 +0000 UTC m=+0.121573831 container start 9629631bd7fb4725d820820166d471bb9010fb87d0fd8e7982b1d1dd4dee0af2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_ramanujan, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:42:36 compute-0 reverent_ramanujan[109277]: 167 167
Sep 30 17:42:36 compute-0 podman[109261]: 2025-09-30 17:42:36.384492553 +0000 UTC m=+0.125741940 container attach 9629631bd7fb4725d820820166d471bb9010fb87d0fd8e7982b1d1dd4dee0af2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Sep 30 17:42:36 compute-0 systemd[1]: libpod-9629631bd7fb4725d820820166d471bb9010fb87d0fd8e7982b1d1dd4dee0af2.scope: Deactivated successfully.
Sep 30 17:42:36 compute-0 podman[109261]: 2025-09-30 17:42:36.385819857 +0000 UTC m=+0.127069224 container died 9629631bd7fb4725d820820166d471bb9010fb87d0fd8e7982b1d1dd4dee0af2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_ramanujan, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325)
Sep 30 17:42:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-f43b98f3d727addb3a1737f2ae71a0cab3d39a05a27db4e210b928c1857d087e-merged.mount: Deactivated successfully.
Sep 30 17:42:36 compute-0 podman[109261]: 2025-09-30 17:42:36.428265904 +0000 UTC m=+0.169515271 container remove 9629631bd7fb4725d820820166d471bb9010fb87d0fd8e7982b1d1dd4dee0af2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:42:36 compute-0 systemd[1]: libpod-conmon-9629631bd7fb4725d820820166d471bb9010fb87d0fd8e7982b1d1dd4dee0af2.scope: Deactivated successfully.
Sep 30 17:42:36 compute-0 podman[109301]: 2025-09-30 17:42:36.559066695 +0000 UTC m=+0.036525534 container create ae3c72d49145e1eb4addcabf3f0345028d8e98e63502f543afd2f0def5f0f7d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Sep 30 17:42:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/174236 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 17:42:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [NOTICE] 272/174236 (4) : haproxy version is 2.3.17-d1c9119
Sep 30 17:42:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [NOTICE] 272/174236 (4) : path to executable is /usr/local/sbin/haproxy
Sep 30 17:42:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [ALERT] 272/174236 (4) : backend 'backend' has no server available!
Sep 30 17:42:36 compute-0 systemd[1]: Started libpod-conmon-ae3c72d49145e1eb4addcabf3f0345028d8e98e63502f543afd2f0def5f0f7d3.scope.
Sep 30 17:42:36 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:42:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/087a45019f26db0f317b56f24ea2fcc35a25d5a88b2fe574cdb88550033013a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:42:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/087a45019f26db0f317b56f24ea2fcc35a25d5a88b2fe574cdb88550033013a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:42:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/087a45019f26db0f317b56f24ea2fcc35a25d5a88b2fe574cdb88550033013a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:42:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/087a45019f26db0f317b56f24ea2fcc35a25d5a88b2fe574cdb88550033013a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:42:36 compute-0 podman[109301]: 2025-09-30 17:42:36.634597264 +0000 UTC m=+0.112056143 container init ae3c72d49145e1eb4addcabf3f0345028d8e98e63502f543afd2f0def5f0f7d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Sep 30 17:42:36 compute-0 podman[109301]: 2025-09-30 17:42:36.640397035 +0000 UTC m=+0.117855874 container start ae3c72d49145e1eb4addcabf3f0345028d8e98e63502f543afd2f0def5f0f7d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:42:36 compute-0 podman[109301]: 2025-09-30 17:42:36.544145586 +0000 UTC m=+0.021604445 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:42:36 compute-0 podman[109301]: 2025-09-30 17:42:36.643698431 +0000 UTC m=+0.121157320 container attach ae3c72d49145e1eb4addcabf3f0345028d8e98e63502f543afd2f0def5f0f7d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Sep 30 17:42:36 compute-0 focused_edison[109318]: {
Sep 30 17:42:36 compute-0 focused_edison[109318]:     "0": [
Sep 30 17:42:36 compute-0 focused_edison[109318]:         {
Sep 30 17:42:36 compute-0 focused_edison[109318]:             "devices": [
Sep 30 17:42:36 compute-0 focused_edison[109318]:                 "/dev/loop3"
Sep 30 17:42:36 compute-0 focused_edison[109318]:             ],
Sep 30 17:42:36 compute-0 focused_edison[109318]:             "lv_name": "ceph_lv0",
Sep 30 17:42:36 compute-0 focused_edison[109318]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:42:36 compute-0 focused_edison[109318]:             "lv_size": "21470642176",
Sep 30 17:42:36 compute-0 focused_edison[109318]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 17:42:36 compute-0 focused_edison[109318]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:42:36 compute-0 focused_edison[109318]:             "name": "ceph_lv0",
Sep 30 17:42:36 compute-0 focused_edison[109318]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:42:36 compute-0 focused_edison[109318]:             "tags": {
Sep 30 17:42:36 compute-0 focused_edison[109318]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:42:36 compute-0 focused_edison[109318]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:42:36 compute-0 focused_edison[109318]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 17:42:36 compute-0 focused_edison[109318]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 17:42:36 compute-0 focused_edison[109318]:                 "ceph.cluster_name": "ceph",
Sep 30 17:42:36 compute-0 focused_edison[109318]:                 "ceph.crush_device_class": "",
Sep 30 17:42:36 compute-0 focused_edison[109318]:                 "ceph.encrypted": "0",
Sep 30 17:42:36 compute-0 focused_edison[109318]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 17:42:36 compute-0 focused_edison[109318]:                 "ceph.osd_id": "0",
Sep 30 17:42:36 compute-0 focused_edison[109318]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 17:42:36 compute-0 focused_edison[109318]:                 "ceph.type": "block",
Sep 30 17:42:36 compute-0 focused_edison[109318]:                 "ceph.vdo": "0",
Sep 30 17:42:36 compute-0 focused_edison[109318]:                 "ceph.with_tpm": "0"
Sep 30 17:42:36 compute-0 focused_edison[109318]:             },
Sep 30 17:42:36 compute-0 focused_edison[109318]:             "type": "block",
Sep 30 17:42:36 compute-0 focused_edison[109318]:             "vg_name": "ceph_vg0"
Sep 30 17:42:36 compute-0 focused_edison[109318]:         }
Sep 30 17:42:36 compute-0 focused_edison[109318]:     ]
Sep 30 17:42:36 compute-0 focused_edison[109318]: }
Sep 30 17:42:36 compute-0 systemd[1]: libpod-ae3c72d49145e1eb4addcabf3f0345028d8e98e63502f543afd2f0def5f0f7d3.scope: Deactivated successfully.
Sep 30 17:42:36 compute-0 podman[109301]: 2025-09-30 17:42:36.906524914 +0000 UTC m=+0.383983753 container died ae3c72d49145e1eb4addcabf3f0345028d8e98e63502f543afd2f0def5f0f7d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Sep 30 17:42:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-087a45019f26db0f317b56f24ea2fcc35a25d5a88b2fe574cdb88550033013a2-merged.mount: Deactivated successfully.
Sep 30 17:42:36 compute-0 podman[109301]: 2025-09-30 17:42:36.947553284 +0000 UTC m=+0.425012123 container remove ae3c72d49145e1eb4addcabf3f0345028d8e98e63502f543afd2f0def5f0f7d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_edison, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Sep 30 17:42:36 compute-0 systemd[1]: libpod-conmon-ae3c72d49145e1eb4addcabf3f0345028d8e98e63502f543afd2f0def5f0f7d3.scope: Deactivated successfully.
Sep 30 17:42:36 compute-0 sudo[109197]: pam_unix(sudo:session): session closed for user root
Sep 30 17:42:37 compute-0 sudo[109337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:42:37 compute-0 sudo[109337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:42:37 compute-0 sudo[109337]: pam_unix(sudo:session): session closed for user root
Sep 30 17:42:37 compute-0 sudo[109362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 17:42:37 compute-0 sudo[109362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:42:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v106: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1023 B/s rd, 511 B/s wr, 1 op/s; 0 B/s, 0 objects/s recovering
Sep 30 17:42:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:42:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:42:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:42:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:42:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:42:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:42:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7faa860d23a0>)]
Sep 30 17:42:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Sep 30 17:42:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:42:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7faa860d20d0>)]
Sep 30 17:42:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Sep 30 17:42:37 compute-0 podman[109427]: 2025-09-30 17:42:37.48054292 +0000 UTC m=+0.035981009 container create cc9de0902c06c876843d9cb1fa1ab08b702c574e2992b3acabf1ea3828734923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_heisenberg, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Sep 30 17:42:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:42:37.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:37 compute-0 systemd[1]: Started libpod-conmon-cc9de0902c06c876843d9cb1fa1ab08b702c574e2992b3acabf1ea3828734923.scope.
Sep 30 17:42:37 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:42:37 compute-0 podman[109427]: 2025-09-30 17:42:37.551545552 +0000 UTC m=+0.106983661 container init cc9de0902c06c876843d9cb1fa1ab08b702c574e2992b3acabf1ea3828734923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_heisenberg, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:42:37 compute-0 podman[109427]: 2025-09-30 17:42:37.557642601 +0000 UTC m=+0.113080690 container start cc9de0902c06c876843d9cb1fa1ab08b702c574e2992b3acabf1ea3828734923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_heisenberg, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:42:37 compute-0 laughing_heisenberg[109444]: 167 167
Sep 30 17:42:37 compute-0 podman[109427]: 2025-09-30 17:42:37.560527576 +0000 UTC m=+0.115965665 container attach cc9de0902c06c876843d9cb1fa1ab08b702c574e2992b3acabf1ea3828734923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Sep 30 17:42:37 compute-0 systemd[1]: libpod-cc9de0902c06c876843d9cb1fa1ab08b702c574e2992b3acabf1ea3828734923.scope: Deactivated successfully.
Sep 30 17:42:37 compute-0 podman[109427]: 2025-09-30 17:42:37.466018252 +0000 UTC m=+0.021456371 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:42:37 compute-0 podman[109427]: 2025-09-30 17:42:37.562453826 +0000 UTC m=+0.117891915 container died cc9de0902c06c876843d9cb1fa1ab08b702c574e2992b3acabf1ea3828734923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_heisenberg, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Sep 30 17:42:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9dbe78068e464ff603380d2fca49c57b1cf78fe84b1794219879ab898bdd075-merged.mount: Deactivated successfully.
Sep 30 17:42:37 compute-0 podman[109427]: 2025-09-30 17:42:37.595637341 +0000 UTC m=+0.151075430 container remove cc9de0902c06c876843d9cb1fa1ab08b702c574e2992b3acabf1ea3828734923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_heisenberg, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Sep 30 17:42:37 compute-0 systemd[1]: libpod-conmon-cc9de0902c06c876843d9cb1fa1ab08b702c574e2992b3acabf1ea3828734923.scope: Deactivated successfully.
Sep 30 17:42:37 compute-0 podman[109470]: 2025-09-30 17:42:37.743473846 +0000 UTC m=+0.039993504 container create 745a7f1498b1501b75eb7ea2f335499a1ac83d03bbc29f6ea21c97a8747082b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_euler, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 17:42:37 compute-0 systemd[1]: Started libpod-conmon-745a7f1498b1501b75eb7ea2f335499a1ac83d03bbc29f6ea21c97a8747082b2.scope.
Sep 30 17:42:37 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:42:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e731c5a1328f5b540b2ec4bc9830314efc42dfed850f5871b076205bed7aa09/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:42:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e731c5a1328f5b540b2ec4bc9830314efc42dfed850f5871b076205bed7aa09/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:42:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e731c5a1328f5b540b2ec4bc9830314efc42dfed850f5871b076205bed7aa09/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:42:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e731c5a1328f5b540b2ec4bc9830314efc42dfed850f5871b076205bed7aa09/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:42:37 compute-0 podman[109470]: 2025-09-30 17:42:37.725942209 +0000 UTC m=+0.022461897 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:42:37 compute-0 podman[109470]: 2025-09-30 17:42:37.831219684 +0000 UTC m=+0.127739352 container init 745a7f1498b1501b75eb7ea2f335499a1ac83d03bbc29f6ea21c97a8747082b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True)
Sep 30 17:42:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:42:37.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:37 compute-0 podman[109470]: 2025-09-30 17:42:37.837515888 +0000 UTC m=+0.134035546 container start 745a7f1498b1501b75eb7ea2f335499a1ac83d03bbc29f6ea21c97a8747082b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_euler, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid)
Sep 30 17:42:37 compute-0 podman[109470]: 2025-09-30 17:42:37.840868816 +0000 UTC m=+0.137388464 container attach 745a7f1498b1501b75eb7ea2f335499a1ac83d03bbc29f6ea21c97a8747082b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_euler, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Sep 30 17:42:38 compute-0 ceph-mon[73755]: pgmap v106: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1023 B/s rd, 511 B/s wr, 1 op/s; 0 B/s, 0 objects/s recovering
Sep 30 17:42:38 compute-0 lvm[109561]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 17:42:38 compute-0 lvm[109561]: VG ceph_vg0 finished
Sep 30 17:42:38 compute-0 goofy_euler[109487]: {}
Sep 30 17:42:38 compute-0 systemd[1]: libpod-745a7f1498b1501b75eb7ea2f335499a1ac83d03bbc29f6ea21c97a8747082b2.scope: Deactivated successfully.
Sep 30 17:42:38 compute-0 podman[109470]: 2025-09-30 17:42:38.4903277 +0000 UTC m=+0.786847368 container died 745a7f1498b1501b75eb7ea2f335499a1ac83d03bbc29f6ea21c97a8747082b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_euler, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Sep 30 17:42:38 compute-0 systemd[1]: libpod-745a7f1498b1501b75eb7ea2f335499a1ac83d03bbc29f6ea21c97a8747082b2.scope: Consumed 1.004s CPU time.
Sep 30 17:42:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e731c5a1328f5b540b2ec4bc9830314efc42dfed850f5871b076205bed7aa09-merged.mount: Deactivated successfully.
Sep 30 17:42:38 compute-0 podman[109470]: 2025-09-30 17:42:38.535239721 +0000 UTC m=+0.831759369 container remove 745a7f1498b1501b75eb7ea2f335499a1ac83d03bbc29f6ea21c97a8747082b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_euler, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Sep 30 17:42:38 compute-0 systemd[1]: libpod-conmon-745a7f1498b1501b75eb7ea2f335499a1ac83d03bbc29f6ea21c97a8747082b2.scope: Deactivated successfully.
Sep 30 17:42:38 compute-0 sudo[109362]: pam_unix(sudo:session): session closed for user root
Sep 30 17:42:38 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:42:38 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:42:38 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:42:38 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:42:38 compute-0 sudo[109577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 17:42:38 compute-0 sudo[109577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:42:38 compute-0 sudo[109577]: pam_unix(sudo:session): session closed for user root
Sep 30 17:42:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:42:38] "GET /metrics HTTP/1.1" 200 46492 "" "Prometheus/2.51.0"
Sep 30 17:42:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:42:38] "GET /metrics HTTP/1.1" 200 46492 "" "Prometheus/2.51.0"
Sep 30 17:42:38 compute-0 sudo[107772]: pam_unix(sudo:session): session closed for user root
Sep 30 17:42:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v107: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 830 B/s wr, 2 op/s; 0 B/s, 0 objects/s recovering
Sep 30 17:42:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:42:39.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:39 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:42:39 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:42:39 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : mgrmap e31: compute-0.efvthf(active, since 92s), standbys: compute-1.glbusf
Sep 30 17:42:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:42:39 compute-0 sudo[109628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:42:39 compute-0 sudo[109628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:42:39 compute-0 sudo[109628]: pam_unix(sudo:session): session closed for user root
Sep 30 17:42:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:42:39.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:39 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 17:42:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:39 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 17:42:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:39 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 17:42:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:40 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 17:42:40 compute-0 ceph-mon[73755]: pgmap v107: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 830 B/s wr, 2 op/s; 0 B/s, 0 objects/s recovering
Sep 30 17:42:40 compute-0 ceph-mon[73755]: mgrmap e31: compute-0.efvthf(active, since 92s), standbys: compute-1.glbusf
Sep 30 17:42:41 compute-0 sudo[109779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feuujhrigzxnclljhduumqcfocqzghlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254160.9015405-235-151957505898937/AnsiballZ_command.py'
Sep 30 17:42:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v108: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 733 B/s wr, 2 op/s; 0 B/s, 0 objects/s recovering
Sep 30 17:42:41 compute-0 sudo[109779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:42:41 compute-0 python3.9[109781]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:42:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:42:41.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:42:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:42:41.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:42:42 compute-0 sudo[109779]: pam_unix(sudo:session): session closed for user root
Sep 30 17:42:42 compute-0 ceph-mon[73755]: pgmap v108: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 733 B/s wr, 2 op/s; 0 B/s, 0 objects/s recovering
Sep 30 17:42:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:43 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 17:42:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:43 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 17:42:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:43 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 17:42:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v109: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1023 B/s rd, 614 B/s wr, 1 op/s; 0 B/s, 0 objects/s recovering
Sep 30 17:42:43 compute-0 sudo[110068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkqqiokcipjqfyvwfmnloaekcbbvnueq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254162.620882-251-226126333251177/AnsiballZ_selinux.py'
Sep 30 17:42:43 compute-0 sudo[110068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:42:43 compute-0 python3.9[110070]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Sep 30 17:42:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:42:43.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:43 compute-0 sudo[110068]: pam_unix(sudo:session): session closed for user root
Sep 30 17:42:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:42:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:42:43.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:42:44 compute-0 sudo[110222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgkhsvmonemopmozftsvczhkfwalnxoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254164.0140288-273-103531014167896/AnsiballZ_command.py'
Sep 30 17:42:44 compute-0 sudo[110222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:42:44 compute-0 python3.9[110224]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Sep 30 17:42:44 compute-0 sudo[110222]: pam_unix(sudo:session): session closed for user root
Sep 30 17:42:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:42:44 compute-0 ceph-mon[73755]: pgmap v109: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1023 B/s rd, 614 B/s wr, 1 op/s; 0 B/s, 0 objects/s recovering
Sep 30 17:42:45 compute-0 sudo[110374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tognjijisciryyhsnahqcmjhbwkuxvio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254164.7990246-289-250709223000260/AnsiballZ_file.py'
Sep 30 17:42:45 compute-0 sudo[110374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:42:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v110: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.1 KiB/s rd, 541 B/s wr, 1 op/s
Sep 30 17:42:45 compute-0 python3.9[110376]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:42:45 compute-0 sudo[110374]: pam_unix(sudo:session): session closed for user root
Sep 30 17:42:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:42:45.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:42:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:42:45.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:42:46 compute-0 sudo[110528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niakzzdxpulrarhepczejvjlrgbdixhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254165.664849-305-107830374874167/AnsiballZ_mount.py'
Sep 30 17:42:46 compute-0 sudo[110528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:42:46 compute-0 python3.9[110530]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Sep 30 17:42:46 compute-0 sudo[110528]: pam_unix(sudo:session): session closed for user root
Sep 30 17:42:46 compute-0 ceph-mon[73755]: pgmap v110: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.1 KiB/s rd, 541 B/s wr, 1 op/s
Sep 30 17:42:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v111: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1023 B/s rd, 511 B/s wr, 1 op/s
Sep 30 17:42:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:42:47.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:47 compute-0 sudo[110681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gehambgdmrjhejuulnqbkpentquhdses ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254167.2556987-361-84598989891647/AnsiballZ_file.py'
Sep 30 17:42:47 compute-0 sudo[110681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:42:47 compute-0 python3.9[110683]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:42:47 compute-0 sudo[110681]: pam_unix(sudo:session): session closed for user root
Sep 30 17:42:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:42:47.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:48 compute-0 sudo[110834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dneoqfxwvwowpyqhrutrxxcpjmnziynv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254168.03211-377-169398567793194/AnsiballZ_stat.py'
Sep 30 17:42:48 compute-0 sudo[110834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:42:48 compute-0 python3.9[110836]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:42:48 compute-0 sudo[110834]: pam_unix(sudo:session): session closed for user root
Sep 30 17:42:48 compute-0 sudo[110912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkiaraofuynxqholsqugwudaaxpqtasg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254168.03211-377-169398567793194/AnsiballZ_file.py'
Sep 30 17:42:48 compute-0 sudo[110912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:42:48 compute-0 ceph-mon[73755]: pgmap v111: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1023 B/s rd, 511 B/s wr, 1 op/s
Sep 30 17:42:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:42:48] "GET /metrics HTTP/1.1" 200 46492 "" "Prometheus/2.51.0"
Sep 30 17:42:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:42:48] "GET /metrics HTTP/1.1" 200 46492 "" "Prometheus/2.51.0"
Sep 30 17:42:48 compute-0 python3.9[110914]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:42:48 compute-0 sudo[110912]: pam_unix(sudo:session): session closed for user root
Sep 30 17:42:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:49 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 17:42:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:49 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Sep 30 17:42:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:49 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Sep 30 17:42:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:49 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Sep 30 17:42:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:49 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Sep 30 17:42:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:49 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Sep 30 17:42:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:49 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Sep 30 17:42:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:49 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:42:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:49 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:42:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:49 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:42:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:49 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Sep 30 17:42:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:49 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:42:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:49 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Sep 30 17:42:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:49 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Sep 30 17:42:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:49 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Sep 30 17:42:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:49 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Sep 30 17:42:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:49 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Sep 30 17:42:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:49 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Sep 30 17:42:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:49 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Sep 30 17:42:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:49 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Sep 30 17:42:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:49 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Sep 30 17:42:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:49 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Sep 30 17:42:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:49 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Sep 30 17:42:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:49 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Sep 30 17:42:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:49 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 17:42:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:49 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Sep 30 17:42:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:49 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 17:42:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:49 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 17:42:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v112: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.3 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:42:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:49 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33f4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:42:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:42:49.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:49 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33f0001c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:42:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:42:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:42:49.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:50 compute-0 sudo[111081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yapcywgypvnjwbudxmlavmtsubecrnzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254169.9698467-425-136761397215608/AnsiballZ_getent.py'
Sep 30 17:42:50 compute-0 sudo[111081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:42:50 compute-0 python3.9[111083]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Sep 30 17:42:50 compute-0 sudo[111081]: pam_unix(sudo:session): session closed for user root
Sep 30 17:42:50 compute-0 ceph-mon[73755]: pgmap v112: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.3 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:42:51 compute-0 sudo[111234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sudlzbgnrgbmficdqbjmkeqinrofztbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254170.957688-445-271826820662235/AnsiballZ_getent.py'
Sep 30 17:42:51 compute-0 sudo[111234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:42:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v113: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1023 B/s rd, 511 B/s wr, 1 op/s
Sep 30 17:42:51 compute-0 python3.9[111236]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Sep 30 17:42:51 compute-0 sudo[111234]: pam_unix(sudo:session): session closed for user root
Sep 30 17:42:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/174251 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 17:42:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:51 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33d0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:42:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:42:51.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:51 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33e8001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:42:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:42:51.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:52 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 17:42:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:52 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 17:42:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:42:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:42:52 compute-0 sudo[111389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dafxvglggbfwocnztfogrdfrfrzlipal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254171.8091655-461-227095343459592/AnsiballZ_group.py'
Sep 30 17:42:52 compute-0 sudo[111389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:42:52 compute-0 python3.9[111391]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Sep 30 17:42:52 compute-0 sudo[111389]: pam_unix(sudo:session): session closed for user root
Sep 30 17:42:52 compute-0 ceph-mon[73755]: pgmap v113: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1023 B/s rd, 511 B/s wr, 1 op/s
Sep 30 17:42:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:42:53 compute-0 sudo[111541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkjxuzcrujwqovopevmesbdpgbghbgwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254172.7736742-479-115067123292998/AnsiballZ_file.py'
Sep 30 17:42:53 compute-0 sudo[111541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:42:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v114: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1023 B/s rd, 511 B/s wr, 1 op/s
Sep 30 17:42:53 compute-0 python3.9[111543]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Sep 30 17:42:53 compute-0 sudo[111541]: pam_unix(sudo:session): session closed for user root
Sep 30 17:42:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:53 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33f4001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:42:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:42:53.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:53 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33f0002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:42:53 compute-0 ceph-mon[73755]: pgmap v114: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1023 B/s rd, 511 B/s wr, 1 op/s
Sep 30 17:42:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:42:53.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:53 compute-0 sudo[111695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adsxtndabamsfcysrtmxudlnbxjxnaoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254173.71467-501-78017660331167/AnsiballZ_dnf.py'
Sep 30 17:42:53 compute-0 sudo[111695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:42:54 compute-0 python3.9[111697]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 17:42:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:42:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:55 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 17:42:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v115: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 17:42:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:55 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33d00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:42:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:42:55.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:55 compute-0 sudo[111695]: pam_unix(sudo:session): session closed for user root
Sep 30 17:42:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:55 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33e8001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:42:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:42:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:42:55.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:42:56 compute-0 sudo[111850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oflhoklzmaajsrokjoafjyymudzlrkqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254175.9103198-517-67586497224402/AnsiballZ_file.py'
Sep 30 17:42:56 compute-0 sudo[111850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:42:56 compute-0 ceph-mon[73755]: pgmap v115: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 17:42:56 compute-0 python3.9[111852]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:42:56 compute-0 sudo[111850]: pam_unix(sudo:session): session closed for user root
Sep 30 17:42:56 compute-0 sudo[112002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttannvthrzlszjabqedegviwfocxkxrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254176.6832542-533-176017786500345/AnsiballZ_stat.py'
Sep 30 17:42:56 compute-0 sudo[112002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:42:57 compute-0 ceph-mgr[74051]: [dashboard INFO request] [192.168.122.100:35584] [POST] [200] [0.115s] [4.0B] [44a94888-e160-4447-9c24-57cd7b024325] /api/prometheus_receiver
Sep 30 17:42:57 compute-0 python3.9[112004]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:42:57 compute-0 sudo[112002]: pam_unix(sudo:session): session closed for user root
Sep 30 17:42:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v116: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:42:57 compute-0 sudo[112081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grukopyhlpbxypfmpartbdkwteovrbuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254176.6832542-533-176017786500345/AnsiballZ_file.py'
Sep 30 17:42:57 compute-0 sudo[112081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:42:57 compute-0 python3.9[112083]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:42:57 compute-0 sudo[112081]: pam_unix(sudo:session): session closed for user root
Sep 30 17:42:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:57 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33f4001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:42:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:42:57.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:57 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33f0002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:42:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:42:57.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:58 compute-0 sudo[112234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuxghfosdnmbkdbnzvontfrauhhzaipe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254178.0036833-559-131025428742900/AnsiballZ_stat.py'
Sep 30 17:42:58 compute-0 sudo[112234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:42:58 compute-0 ceph-mon[73755]: pgmap v116: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:42:58 compute-0 python3.9[112236]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:42:58 compute-0 sudo[112234]: pam_unix(sudo:session): session closed for user root
Sep 30 17:42:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/174258 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 17:42:58 compute-0 sudo[112312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faereetfgyoqaafmxasopwbkbjcwlrsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254178.0036833-559-131025428742900/AnsiballZ_file.py'
Sep 30 17:42:58 compute-0 sudo[112312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:42:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:42:58] "GET /metrics HTTP/1.1" 200 46500 "" "Prometheus/2.51.0"
Sep 30 17:42:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:42:58] "GET /metrics HTTP/1.1" 200 46500 "" "Prometheus/2.51.0"
Sep 30 17:42:58 compute-0 python3.9[112314]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:42:58 compute-0 sudo[112312]: pam_unix(sudo:session): session closed for user root
Sep 30 17:42:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v117: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:42:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:59 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33d00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:42:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:42:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:42:59.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:42:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:42:59 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33e8002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:42:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:42:59 compute-0 sudo[112466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmlqhwewhhwrdyrgjtgkkughagbmkqvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254179.5130196-589-57036537831214/AnsiballZ_dnf.py'
Sep 30 17:42:59 compute-0 sudo[112466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:42:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:42:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:42:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:42:59.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:42:59 compute-0 sudo[112469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:42:59 compute-0 sudo[112469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:42:59 compute-0 sudo[112469]: pam_unix(sudo:session): session closed for user root
Sep 30 17:42:59 compute-0 python3.9[112468]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 17:43:00 compute-0 ceph-mon[73755]: pgmap v117: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:43:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v118: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Sep 30 17:43:01 compute-0 sudo[112466]: pam_unix(sudo:session): session closed for user root
Sep 30 17:43:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:43:01 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33f40089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:43:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:43:01.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:43:01 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33f0002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:43:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:43:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:43:01.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:43:02 compute-0 python3.9[112646]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:43:02 compute-0 ceph-mon[73755]: pgmap v118: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Sep 30 17:43:03 compute-0 python3.9[112798]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Sep 30 17:43:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v119: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Sep 30 17:43:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:43:03 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33d00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:43:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:43:03.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:43:03 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33e8002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:43:03 compute-0 python3.9[112950]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:43:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:43:03.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:04 compute-0 ceph-mon[73755]: pgmap v119: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Sep 30 17:43:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:43:05 compute-0 sudo[113100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdndrkcalsygfhneobcaznbtxpqquzbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254184.434593-671-254098993810216/AnsiballZ_systemd.py'
Sep 30 17:43:05 compute-0 sudo[113100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:43:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v120: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Sep 30 17:43:05 compute-0 python3.9[113102]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:43:05 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Sep 30 17:43:05 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Sep 30 17:43:05 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Sep 30 17:43:05 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Sep 30 17:43:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:43:05 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33f40089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:43:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:43:05.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:43:05 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33f0002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:43:05 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Sep 30 17:43:05 compute-0 sudo[113100]: pam_unix(sudo:session): session closed for user root
Sep 30 17:43:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:43:05.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:06 compute-0 ceph-mon[73755]: pgmap v120: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Sep 30 17:43:06 compute-0 python3.9[113265]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Sep 30 17:43:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:43:06.950Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v121: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 17:43:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 17:43:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_17:43:07
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['default.rgw.log', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', '.mgr', '.rgw.root', '.nfs', 'volumes', 'cephfs.cephfs.data', 'vms', 'default.rgw.control']
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:43:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:43:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:43:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:43:07 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33d0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:43:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:43:07.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:43:07 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33e8002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:43:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:43:07.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:08 compute-0 ceph-mon[73755]: pgmap v121: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:43:08 compute-0 sudo[113417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzjaoelhhseuwdcrqxnadjshyoxsgmna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254188.4677744-785-36908186706258/AnsiballZ_systemd.py'
Sep 30 17:43:08 compute-0 sudo[113417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:43:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:43:08] "GET /metrics HTTP/1.1" 200 46499 "" "Prometheus/2.51.0"
Sep 30 17:43:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:43:08] "GET /metrics HTTP/1.1" 200 46499 "" "Prometheus/2.51.0"
Sep 30 17:43:08 compute-0 python3.9[113419]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:43:09 compute-0 sudo[113417]: pam_unix(sudo:session): session closed for user root
Sep 30 17:43:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v122: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:43:09 compute-0 sudo[113572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfaollmuxckbbxhxjqdcyducntlgybgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254189.1683872-785-109497382886373/AnsiballZ_systemd.py'
Sep 30 17:43:09 compute-0 sudo[113572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:43:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:43:09 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33f40096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:43:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:43:09.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:43:09 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33f0002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:43:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:43:09 compute-0 python3.9[113574]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:43:09 compute-0 sudo[113572]: pam_unix(sudo:session): session closed for user root
Sep 30 17:43:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:43:09.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:10 compute-0 ceph-mon[73755]: pgmap v122: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:43:10 compute-0 sshd-session[105992]: Connection closed by 192.168.122.30 port 55072
Sep 30 17:43:10 compute-0 sshd-session[105989]: pam_unix(sshd:session): session closed for user zuul
Sep 30 17:43:10 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Sep 30 17:43:10 compute-0 systemd[1]: session-40.scope: Consumed 1min 898ms CPU time.
Sep 30 17:43:10 compute-0 systemd-logind[811]: Session 40 logged out. Waiting for processes to exit.
Sep 30 17:43:10 compute-0 systemd-logind[811]: Removed session 40.
Sep 30 17:43:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v123: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:43:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:43:11 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33d0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:43:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:43:11.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:43:11 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33e8003da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:43:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:43:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:43:11.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:43:12 compute-0 ceph-mon[73755]: pgmap v123: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:43:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v124: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:43:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:43:13 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33f40096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:43:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:43:13.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:43:13 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33f0002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:43:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:43:13.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:14 compute-0 ceph-mon[73755]: pgmap v124: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:43:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:43:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v125: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:43:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:43:15 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33d0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:43:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:43:15.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:43:15 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33e8003da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:43:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:43:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:43:15.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:43:15 compute-0 sshd-session[113608]: Accepted publickey for zuul from 192.168.122.30 port 57252 ssh2: ECDSA SHA256:COqAmbdBUMx+UyDa5XAop4Bi6qvwvYhJQ16rCseuHkQ
Sep 30 17:43:15 compute-0 systemd-logind[811]: New session 41 of user zuul.
Sep 30 17:43:15 compute-0 systemd[1]: Started Session 41 of User zuul.
Sep 30 17:43:15 compute-0 sshd-session[113608]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 17:43:16 compute-0 ceph-mon[73755]: pgmap v125: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:43:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:43:16.951Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 17:43:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:43:16.951Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 17:43:17 compute-0 python3.9[113761]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:43:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v126: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:43:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:43:17 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33f40096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:43:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:43:17.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:43:17 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33f0002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:43:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:43:17.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:18 compute-0 sudo[113917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjawixqconewtygqmicivbmwlnizurnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254197.742913-52-134775273256461/AnsiballZ_getent.py'
Sep 30 17:43:18 compute-0 sudo[113917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:43:18 compute-0 python3.9[113919]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Sep 30 17:43:18 compute-0 sudo[113917]: pam_unix(sudo:session): session closed for user root
Sep 30 17:43:18 compute-0 ceph-mon[73755]: pgmap v126: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:43:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:43:18] "GET /metrics HTTP/1.1" 200 46499 "" "Prometheus/2.51.0"
Sep 30 17:43:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:43:18] "GET /metrics HTTP/1.1" 200 46499 "" "Prometheus/2.51.0"
Sep 30 17:43:19 compute-0 sudo[114070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qshfunjmksyxejpyvsdcwoiommapumjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254198.8865902-76-43075623259433/AnsiballZ_setup.py'
Sep 30 17:43:19 compute-0 sudo[114070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:43:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v127: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:43:19 compute-0 python3.9[114072]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 17:43:19 compute-0 kernel: ganesha.nfsd[110941]: segfault at 50 ip 00007f34a3eee32e sp 00007f3472ffc210 error 4 in libntirpc.so.5.8[7f34a3ed3000+2c000] likely on CPU 7 (core 0, socket 7)
Sep 30 17:43:19 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Sep 30 17:43:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[108119]: 30/09/2025 17:43:19 : epoch 68dc1685 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33f0002740 fd 38 proxy ignored for local
Sep 30 17:43:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 17:43:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:43:19.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 17:43:19 compute-0 systemd[1]: Started Process Core Dump (PID 114080/UID 0).
Sep 30 17:43:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:43:19 compute-0 sudo[114070]: pam_unix(sudo:session): session closed for user root
Sep 30 17:43:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:43:19.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:20 compute-0 sudo[114085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:43:20 compute-0 sudo[114085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:43:20 compute-0 sudo[114085]: pam_unix(sudo:session): session closed for user root
Sep 30 17:43:20 compute-0 sudo[114183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxdaqevbzoebhhxiizplcstmokstwwbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254198.8865902-76-43075623259433/AnsiballZ_dnf.py'
Sep 30 17:43:20 compute-0 sudo[114183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:43:20 compute-0 python3.9[114185]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Sep 30 17:43:20 compute-0 ceph-mon[73755]: pgmap v127: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:43:20 compute-0 systemd-coredump[114081]: Process 108123 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 43:
                                                    #0  0x00007f34a3eee32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Sep 30 17:43:20 compute-0 systemd[1]: systemd-coredump@1-114080-0.service: Deactivated successfully.
Sep 30 17:43:20 compute-0 systemd[1]: systemd-coredump@1-114080-0.service: Consumed 1.168s CPU time.
Sep 30 17:43:20 compute-0 podman[114191]: 2025-09-30 17:43:20.876783923 +0000 UTC m=+0.040181592 container died 75deb80397fe5d91ce9250625aa16920022fde4c9d1304b3b5e47e52826b9df6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:43:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6d132a985a5ab22bb949b074513ab85e3d1ee52271e273c282e4c00b5b9ffae-merged.mount: Deactivated successfully.
Sep 30 17:43:20 compute-0 podman[114191]: 2025-09-30 17:43:20.920178961 +0000 UTC m=+0.083576640 container remove 75deb80397fe5d91ce9250625aa16920022fde4c9d1304b3b5e47e52826b9df6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:43:20 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Main process exited, code=exited, status=139/n/a
Sep 30 17:43:21 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Failed with result 'exit-code'.
Sep 30 17:43:21 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Consumed 1.453s CPU time.
Sep 30 17:43:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v128: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:43:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:43:21.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:21 compute-0 sudo[114183]: pam_unix(sudo:session): session closed for user root
Sep 30 17:43:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:43:21.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:43:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:43:22 compute-0 sudo[114387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnrtcoydwbgwthiwqybeehwgrkymoupc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254202.2766364-104-261415675878913/AnsiballZ_dnf.py'
Sep 30 17:43:22 compute-0 sudo[114387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:43:22 compute-0 ceph-mon[73755]: pgmap v128: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:43:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:43:22 compute-0 python3.9[114389]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 17:43:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v129: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:43:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 17:43:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:43:23.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 17:43:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 17:43:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:43:23.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 17:43:24 compute-0 sudo[114387]: pam_unix(sudo:session): session closed for user root
Sep 30 17:43:24 compute-0 ceph-mon[73755]: pgmap v129: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:43:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:43:25 compute-0 sudo[114542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxofnjvfnxxholyquuqwwukgsuymbzvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254204.4346766-120-172817850223314/AnsiballZ_systemd.py'
Sep 30 17:43:25 compute-0 sudo[114542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:43:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v130: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:43:25 compute-0 python3.9[114544]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Sep 30 17:43:25 compute-0 sudo[114542]: pam_unix(sudo:session): session closed for user root
Sep 30 17:43:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/174325 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 17:43:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:43:25.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:43:25.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:26 compute-0 python3.9[114699]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:43:26 compute-0 ceph-mon[73755]: pgmap v130: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:43:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:43:26.952Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 17:43:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:43:26.953Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:43:27 compute-0 sudo[114849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bijjkxfhuavwusmicpqzelzavgzbhmzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254206.7023523-156-128833146086381/AnsiballZ_sefcontext.py'
Sep 30 17:43:27 compute-0 sudo[114849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:43:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v131: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:43:27 compute-0 python3.9[114851]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Sep 30 17:43:27 compute-0 sudo[114849]: pam_unix(sudo:session): session closed for user root
Sep 30 17:43:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:43:27.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:43:27.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:28 compute-0 python3.9[115003]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:43:28 compute-0 ceph-mon[73755]: pgmap v131: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:43:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:43:28] "GET /metrics HTTP/1.1" 200 46514 "" "Prometheus/2.51.0"
Sep 30 17:43:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:43:28] "GET /metrics HTTP/1.1" 200 46514 "" "Prometheus/2.51.0"
Sep 30 17:43:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v132: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:43:29 compute-0 sudo[115160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihkbdezsqiagqowxgwvwtlwffoohkjyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254209.208535-192-120952532825337/AnsiballZ_dnf.py'
Sep 30 17:43:29 compute-0 sudo[115160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:43:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:43:29.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:29 compute-0 python3.9[115162]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 17:43:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:43:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:43:29.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:30 compute-0 ceph-mon[73755]: pgmap v132: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:43:30 compute-0 sudo[115160]: pam_unix(sudo:session): session closed for user root
Sep 30 17:43:31 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Scheduled restart job, restart counter is at 2.
Sep 30 17:43:31 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:43:31 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Consumed 1.453s CPU time.
Sep 30 17:43:31 compute-0 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 17:43:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v133: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:43:31 compute-0 podman[115287]: 2025-09-30 17:43:31.502770453 +0000 UTC m=+0.046040111 container create 2a8a06a3a1263af2fcd20dc9078c52f0b40c9abbd27746f4b98d7c9207875722 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:43:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22c4c0047b42c3bb1be7f2619385ae741fe443b6d3dddc782d21143e14e6e508/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Sep 30 17:43:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22c4c0047b42c3bb1be7f2619385ae741fe443b6d3dddc782d21143e14e6e508/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:43:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22c4c0047b42c3bb1be7f2619385ae741fe443b6d3dddc782d21143e14e6e508/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:43:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22c4c0047b42c3bb1be7f2619385ae741fe443b6d3dddc782d21143e14e6e508/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-0.syzvbh-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:43:31 compute-0 podman[115287]: 2025-09-30 17:43:31.555100083 +0000 UTC m=+0.098369761 container init 2a8a06a3a1263af2fcd20dc9078c52f0b40c9abbd27746f4b98d7c9207875722 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 17:43:31 compute-0 podman[115287]: 2025-09-30 17:43:31.559588455 +0000 UTC m=+0.102858113 container start 2a8a06a3a1263af2fcd20dc9078c52f0b40c9abbd27746f4b98d7c9207875722 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 17:43:31 compute-0 bash[115287]: 2a8a06a3a1263af2fcd20dc9078c52f0b40c9abbd27746f4b98d7c9207875722
Sep 30 17:43:31 compute-0 podman[115287]: 2025-09-30 17:43:31.482730658 +0000 UTC m=+0.026000356 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:43:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:31 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Sep 30 17:43:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:31 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Sep 30 17:43:31 compute-0 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:43:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:43:31.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:31 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Sep 30 17:43:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:31 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Sep 30 17:43:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:31 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Sep 30 17:43:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:31 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Sep 30 17:43:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:31 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Sep 30 17:43:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:31 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 17:43:31 compute-0 sudo[115419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nilvlfajvxsdkgvogpdjykvobqnyrkpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254211.3703632-208-39722544483110/AnsiballZ_command.py'
Sep 30 17:43:31 compute-0 sudo[115419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:43:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:43:31.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:31 compute-0 python3.9[115421]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:43:32 compute-0 ceph-mon[73755]: pgmap v133: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:43:32 compute-0 sudo[115419]: pam_unix(sudo:session): session closed for user root
Sep 30 17:43:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v134: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:43:33 compute-0 sudo[115708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwheqgqkwnsfndeeuvkaldjeyucjpzrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254213.1340203-224-56029249071177/AnsiballZ_file.py'
Sep 30 17:43:33 compute-0 sudo[115708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:43:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:43:33.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:33 compute-0 python3.9[115710]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Sep 30 17:43:33 compute-0 sudo[115708]: pam_unix(sudo:session): session closed for user root
Sep 30 17:43:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:43:33.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:34 compute-0 python3.9[115861]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:43:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:43:34 compute-0 ceph-mon[73755]: pgmap v134: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:43:35 compute-0 sudo[116013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skesibvlltvqkssbmjnlxogcvhghyjrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254214.931384-256-64744868054840/AnsiballZ_dnf.py'
Sep 30 17:43:35 compute-0 sudo[116013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:43:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v135: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:43:35 compute-0 python3.9[116015]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 17:43:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:43:35.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:43:35.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:36 compute-0 sudo[116013]: pam_unix(sudo:session): session closed for user root
Sep 30 17:43:36 compute-0 ceph-mon[73755]: pgmap v135: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:43:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:43:36.953Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:43:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v136: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:43:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:43:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:43:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:43:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:43:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:43:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:43:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:43:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:43:37 compute-0 sudo[116169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgdgbzmibigpmrgwcddcdsftwiavrsrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254217.1753833-274-131240808695846/AnsiballZ_dnf.py'
Sep 30 17:43:37 compute-0 sudo[116169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:43:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 17:43:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:43:37.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 17:43:37 compute-0 python3.9[116171]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 17:43:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:37 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 17:43:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:37 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 17:43:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:43:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:43:37.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:38 compute-0 ceph-mon[73755]: pgmap v136: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:43:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:43:38] "GET /metrics HTTP/1.1" 200 46510 "" "Prometheus/2.51.0"
Sep 30 17:43:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:43:38] "GET /metrics HTTP/1.1" 200 46510 "" "Prometheus/2.51.0"
Sep 30 17:43:38 compute-0 sudo[116174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:43:38 compute-0 sudo[116174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:43:38 compute-0 sudo[116174]: pam_unix(sudo:session): session closed for user root
Sep 30 17:43:38 compute-0 sudo[116199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 17:43:38 compute-0 sudo[116199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:43:38 compute-0 sudo[116169]: pam_unix(sudo:session): session closed for user root
Sep 30 17:43:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v137: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:43:39 compute-0 sudo[116199]: pam_unix(sudo:session): session closed for user root
Sep 30 17:43:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 17:43:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:43:39.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 17:43:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:43:39 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:43:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 17:43:39 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:43:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 17:43:39 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:43:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 17:43:39 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:43:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 17:43:39 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:43:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 17:43:39 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:43:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:43:39 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:43:39 compute-0 sudo[116333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:43:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:43:39 compute-0 sudo[116333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:43:39 compute-0 sudo[116333]: pam_unix(sudo:session): session closed for user root
Sep 30 17:43:39 compute-0 sudo[116381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 17:43:39 compute-0 sudo[116381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:43:39 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:43:39 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:43:39 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:43:39 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:43:39 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:43:39 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:43:39 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:43:39 compute-0 sudo[116456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-roimslzapcqsmzgxhucezhzkeguwcdgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254219.5596895-298-37253457702786/AnsiballZ_stat.py'
Sep 30 17:43:39 compute-0 sudo[116456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:43:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:43:39.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:39 compute-0 python3.9[116458]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:43:40 compute-0 sudo[116456]: pam_unix(sudo:session): session closed for user root
Sep 30 17:43:40 compute-0 sudo[116506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:43:40 compute-0 sudo[116506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:43:40 compute-0 sudo[116506]: pam_unix(sudo:session): session closed for user root
Sep 30 17:43:40 compute-0 podman[116505]: 2025-09-30 17:43:40.092885363 +0000 UTC m=+0.039447941 container create f2dda206b6f387b6126df73e0a21c9f81cf83094753e0f58d9a5ff5cff3af6c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_bassi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Sep 30 17:43:40 compute-0 systemd[1]: Started libpod-conmon-f2dda206b6f387b6126df73e0a21c9f81cf83094753e0f58d9a5ff5cff3af6c9.scope.
Sep 30 17:43:40 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:43:40 compute-0 podman[116505]: 2025-09-30 17:43:40.168790974 +0000 UTC m=+0.115353572 container init f2dda206b6f387b6126df73e0a21c9f81cf83094753e0f58d9a5ff5cff3af6c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_bassi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Sep 30 17:43:40 compute-0 podman[116505]: 2025-09-30 17:43:40.075467581 +0000 UTC m=+0.022030189 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:43:40 compute-0 podman[116505]: 2025-09-30 17:43:40.174865438 +0000 UTC m=+0.121428016 container start f2dda206b6f387b6126df73e0a21c9f81cf83094753e0f58d9a5ff5cff3af6c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Sep 30 17:43:40 compute-0 podman[116505]: 2025-09-30 17:43:40.17787686 +0000 UTC m=+0.124439488 container attach f2dda206b6f387b6126df73e0a21c9f81cf83094753e0f58d9a5ff5cff3af6c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:43:40 compute-0 compassionate_bassi[116566]: 167 167
Sep 30 17:43:40 compute-0 systemd[1]: libpod-f2dda206b6f387b6126df73e0a21c9f81cf83094753e0f58d9a5ff5cff3af6c9.scope: Deactivated successfully.
Sep 30 17:43:40 compute-0 conmon[116566]: conmon f2dda206b6f387b6126d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f2dda206b6f387b6126df73e0a21c9f81cf83094753e0f58d9a5ff5cff3af6c9.scope/container/memory.events
Sep 30 17:43:40 compute-0 podman[116505]: 2025-09-30 17:43:40.181308553 +0000 UTC m=+0.127871141 container died f2dda206b6f387b6126df73e0a21c9f81cf83094753e0f58d9a5ff5cff3af6c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 17:43:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e169d6c124b4cc9bcd6ebd3abc6153a69f5176013125900ea288a8630873b38-merged.mount: Deactivated successfully.
Sep 30 17:43:40 compute-0 podman[116505]: 2025-09-30 17:43:40.213718313 +0000 UTC m=+0.160280891 container remove f2dda206b6f387b6126df73e0a21c9f81cf83094753e0f58d9a5ff5cff3af6c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_bassi, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:43:40 compute-0 systemd[1]: libpod-conmon-f2dda206b6f387b6126df73e0a21c9f81cf83094753e0f58d9a5ff5cff3af6c9.scope: Deactivated successfully.
Sep 30 17:43:40 compute-0 podman[116589]: 2025-09-30 17:43:40.344491172 +0000 UTC m=+0.033300834 container create 0d53c07ecddbe4200d8526cfda549133cbdef24632729ef16f44ac40697283b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:43:40 compute-0 systemd[1]: Started libpod-conmon-0d53c07ecddbe4200d8526cfda549133cbdef24632729ef16f44ac40697283b7.scope.
Sep 30 17:43:40 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:43:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a2374d2b59a8f45493a8b9ea12003e191474406ed1434668b517a9dfbeb9dce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:43:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a2374d2b59a8f45493a8b9ea12003e191474406ed1434668b517a9dfbeb9dce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:43:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a2374d2b59a8f45493a8b9ea12003e191474406ed1434668b517a9dfbeb9dce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:43:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a2374d2b59a8f45493a8b9ea12003e191474406ed1434668b517a9dfbeb9dce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:43:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a2374d2b59a8f45493a8b9ea12003e191474406ed1434668b517a9dfbeb9dce/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:43:40 compute-0 podman[116589]: 2025-09-30 17:43:40.410158755 +0000 UTC m=+0.098968427 container init 0d53c07ecddbe4200d8526cfda549133cbdef24632729ef16f44ac40697283b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sammet, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:43:40 compute-0 podman[116589]: 2025-09-30 17:43:40.417413022 +0000 UTC m=+0.106222684 container start 0d53c07ecddbe4200d8526cfda549133cbdef24632729ef16f44ac40697283b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:43:40 compute-0 podman[116589]: 2025-09-30 17:43:40.420468035 +0000 UTC m=+0.109277697 container attach 0d53c07ecddbe4200d8526cfda549133cbdef24632729ef16f44ac40697283b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sammet, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:43:40 compute-0 podman[116589]: 2025-09-30 17:43:40.330165674 +0000 UTC m=+0.018975356 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:43:40 compute-0 elastic_sammet[116627]: --> passed data devices: 0 physical, 1 LVM
Sep 30 17:43:40 compute-0 elastic_sammet[116627]: --> All data devices are unavailable
Sep 30 17:43:40 compute-0 systemd[1]: libpod-0d53c07ecddbe4200d8526cfda549133cbdef24632729ef16f44ac40697283b7.scope: Deactivated successfully.
Sep 30 17:43:40 compute-0 podman[116589]: 2025-09-30 17:43:40.754428989 +0000 UTC m=+0.443238661 container died 0d53c07ecddbe4200d8526cfda549133cbdef24632729ef16f44ac40697283b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:43:40 compute-0 ceph-mon[73755]: pgmap v137: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:43:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a2374d2b59a8f45493a8b9ea12003e191474406ed1434668b517a9dfbeb9dce-merged.mount: Deactivated successfully.
Sep 30 17:43:40 compute-0 podman[116589]: 2025-09-30 17:43:40.799436841 +0000 UTC m=+0.488246503 container remove 0d53c07ecddbe4200d8526cfda549133cbdef24632729ef16f44ac40697283b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Sep 30 17:43:40 compute-0 systemd[1]: libpod-conmon-0d53c07ecddbe4200d8526cfda549133cbdef24632729ef16f44ac40697283b7.scope: Deactivated successfully.
Sep 30 17:43:40 compute-0 sudo[116759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljyqcmniqtezggdlyvyvutinnzdopuyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254220.390318-314-103910321649970/AnsiballZ_slurp.py'
Sep 30 17:43:40 compute-0 sudo[116759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:43:40 compute-0 sudo[116381]: pam_unix(sudo:session): session closed for user root
Sep 30 17:43:40 compute-0 sudo[116762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:43:40 compute-0 sudo[116762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:43:40 compute-0 sudo[116762]: pam_unix(sudo:session): session closed for user root
Sep 30 17:43:40 compute-0 sudo[116787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 17:43:40 compute-0 sudo[116787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:43:41 compute-0 python3.9[116761]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Sep 30 17:43:41 compute-0 sudo[116759]: pam_unix(sudo:session): session closed for user root
Sep 30 17:43:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v138: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:43:41 compute-0 podman[116877]: 2025-09-30 17:43:41.331000838 +0000 UTC m=+0.038195417 container create 6aa15d095eb2a7a980c74ad6e458f0328f5dd322c889278352eda280a2057cd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Sep 30 17:43:41 compute-0 systemd[1]: Started libpod-conmon-6aa15d095eb2a7a980c74ad6e458f0328f5dd322c889278352eda280a2057cd2.scope.
Sep 30 17:43:41 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:43:41 compute-0 podman[116877]: 2025-09-30 17:43:41.397233006 +0000 UTC m=+0.104427605 container init 6aa15d095eb2a7a980c74ad6e458f0328f5dd322c889278352eda280a2057cd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:43:41 compute-0 podman[116877]: 2025-09-30 17:43:41.404956876 +0000 UTC m=+0.112151485 container start 6aa15d095eb2a7a980c74ad6e458f0328f5dd322c889278352eda280a2057cd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_goldwasser, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:43:41 compute-0 podman[116877]: 2025-09-30 17:43:41.409016496 +0000 UTC m=+0.116211095 container attach 6aa15d095eb2a7a980c74ad6e458f0328f5dd322c889278352eda280a2057cd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Sep 30 17:43:41 compute-0 podman[116877]: 2025-09-30 17:43:41.313908494 +0000 UTC m=+0.021103093 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:43:41 compute-0 zen_goldwasser[116895]: 167 167
Sep 30 17:43:41 compute-0 systemd[1]: libpod-6aa15d095eb2a7a980c74ad6e458f0328f5dd322c889278352eda280a2057cd2.scope: Deactivated successfully.
Sep 30 17:43:41 compute-0 conmon[116895]: conmon 6aa15d095eb2a7a980c7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6aa15d095eb2a7a980c74ad6e458f0328f5dd322c889278352eda280a2057cd2.scope/container/memory.events
Sep 30 17:43:41 compute-0 podman[116877]: 2025-09-30 17:43:41.413170629 +0000 UTC m=+0.120365208 container died 6aa15d095eb2a7a980c74ad6e458f0328f5dd322c889278352eda280a2057cd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_goldwasser, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 17:43:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-096335c8813be7189a53053a5d6376ec79378083d013d33bcf7a80b346bc3c4a-merged.mount: Deactivated successfully.
Sep 30 17:43:41 compute-0 podman[116877]: 2025-09-30 17:43:41.45007249 +0000 UTC m=+0.157267069 container remove 6aa15d095eb2a7a980c74ad6e458f0328f5dd322c889278352eda280a2057cd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:43:41 compute-0 systemd[1]: libpod-conmon-6aa15d095eb2a7a980c74ad6e458f0328f5dd322c889278352eda280a2057cd2.scope: Deactivated successfully.
Sep 30 17:43:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:43:41.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:41 compute-0 podman[116918]: 2025-09-30 17:43:41.612889229 +0000 UTC m=+0.046879753 container create 5020b6d45fb6847ea3f538c398aedb7ded1483ec347f61843a0fe6bc30fe3e84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:43:41 compute-0 systemd[1]: Started libpod-conmon-5020b6d45fb6847ea3f538c398aedb7ded1483ec347f61843a0fe6bc30fe3e84.scope.
Sep 30 17:43:41 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:43:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f0fc36be881530473ae9bf713ad6639034d50832348b805f575ce11103905e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:43:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f0fc36be881530473ae9bf713ad6639034d50832348b805f575ce11103905e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:43:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f0fc36be881530473ae9bf713ad6639034d50832348b805f575ce11103905e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:43:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f0fc36be881530473ae9bf713ad6639034d50832348b805f575ce11103905e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:43:41 compute-0 podman[116918]: 2025-09-30 17:43:41.596007571 +0000 UTC m=+0.029998115 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:43:41 compute-0 podman[116918]: 2025-09-30 17:43:41.691617915 +0000 UTC m=+0.125608439 container init 5020b6d45fb6847ea3f538c398aedb7ded1483ec347f61843a0fe6bc30fe3e84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bose, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Sep 30 17:43:41 compute-0 podman[116918]: 2025-09-30 17:43:41.701578266 +0000 UTC m=+0.135568790 container start 5020b6d45fb6847ea3f538c398aedb7ded1483ec347f61843a0fe6bc30fe3e84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bose, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:43:41 compute-0 podman[116918]: 2025-09-30 17:43:41.706250472 +0000 UTC m=+0.140240996 container attach 5020b6d45fb6847ea3f538c398aedb7ded1483ec347f61843a0fe6bc30fe3e84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bose, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 17:43:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:43:41.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:41 compute-0 affectionate_bose[116936]: {
Sep 30 17:43:41 compute-0 affectionate_bose[116936]:     "0": [
Sep 30 17:43:41 compute-0 affectionate_bose[116936]:         {
Sep 30 17:43:41 compute-0 affectionate_bose[116936]:             "devices": [
Sep 30 17:43:41 compute-0 affectionate_bose[116936]:                 "/dev/loop3"
Sep 30 17:43:41 compute-0 affectionate_bose[116936]:             ],
Sep 30 17:43:41 compute-0 affectionate_bose[116936]:             "lv_name": "ceph_lv0",
Sep 30 17:43:41 compute-0 affectionate_bose[116936]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:43:41 compute-0 affectionate_bose[116936]:             "lv_size": "21470642176",
Sep 30 17:43:41 compute-0 affectionate_bose[116936]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 17:43:41 compute-0 affectionate_bose[116936]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:43:41 compute-0 affectionate_bose[116936]:             "name": "ceph_lv0",
Sep 30 17:43:41 compute-0 affectionate_bose[116936]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:43:41 compute-0 affectionate_bose[116936]:             "tags": {
Sep 30 17:43:41 compute-0 affectionate_bose[116936]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:43:41 compute-0 affectionate_bose[116936]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:43:41 compute-0 affectionate_bose[116936]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 17:43:41 compute-0 affectionate_bose[116936]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 17:43:41 compute-0 affectionate_bose[116936]:                 "ceph.cluster_name": "ceph",
Sep 30 17:43:41 compute-0 affectionate_bose[116936]:                 "ceph.crush_device_class": "",
Sep 30 17:43:41 compute-0 affectionate_bose[116936]:                 "ceph.encrypted": "0",
Sep 30 17:43:41 compute-0 affectionate_bose[116936]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 17:43:41 compute-0 affectionate_bose[116936]:                 "ceph.osd_id": "0",
Sep 30 17:43:41 compute-0 affectionate_bose[116936]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 17:43:41 compute-0 affectionate_bose[116936]:                 "ceph.type": "block",
Sep 30 17:43:41 compute-0 affectionate_bose[116936]:                 "ceph.vdo": "0",
Sep 30 17:43:41 compute-0 affectionate_bose[116936]:                 "ceph.with_tpm": "0"
Sep 30 17:43:41 compute-0 affectionate_bose[116936]:             },
Sep 30 17:43:41 compute-0 affectionate_bose[116936]:             "type": "block",
Sep 30 17:43:41 compute-0 affectionate_bose[116936]:             "vg_name": "ceph_vg0"
Sep 30 17:43:41 compute-0 affectionate_bose[116936]:         }
Sep 30 17:43:41 compute-0 affectionate_bose[116936]:     ]
Sep 30 17:43:41 compute-0 affectionate_bose[116936]: }
Sep 30 17:43:41 compute-0 systemd[1]: libpod-5020b6d45fb6847ea3f538c398aedb7ded1483ec347f61843a0fe6bc30fe3e84.scope: Deactivated successfully.
Sep 30 17:43:41 compute-0 sshd-session[113611]: Connection closed by 192.168.122.30 port 57252
Sep 30 17:43:42 compute-0 sshd-session[113608]: pam_unix(sshd:session): session closed for user zuul
Sep 30 17:43:42 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Sep 30 17:43:42 compute-0 systemd[1]: session-41.scope: Consumed 17.223s CPU time.
Sep 30 17:43:42 compute-0 systemd-logind[811]: Session 41 logged out. Waiting for processes to exit.
Sep 30 17:43:42 compute-0 systemd-logind[811]: Removed session 41.
Sep 30 17:43:42 compute-0 podman[116946]: 2025-09-30 17:43:42.012154315 +0000 UTC m=+0.025242546 container died 5020b6d45fb6847ea3f538c398aedb7ded1483ec347f61843a0fe6bc30fe3e84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bose, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:43:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f0fc36be881530473ae9bf713ad6639034d50832348b805f575ce11103905e8-merged.mount: Deactivated successfully.
Sep 30 17:43:42 compute-0 podman[116946]: 2025-09-30 17:43:42.045151811 +0000 UTC m=+0.058240022 container remove 5020b6d45fb6847ea3f538c398aedb7ded1483ec347f61843a0fe6bc30fe3e84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bose, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 17:43:42 compute-0 systemd[1]: libpod-conmon-5020b6d45fb6847ea3f538c398aedb7ded1483ec347f61843a0fe6bc30fe3e84.scope: Deactivated successfully.
Sep 30 17:43:42 compute-0 sudo[116787]: pam_unix(sudo:session): session closed for user root
Sep 30 17:43:42 compute-0 sudo[116960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:43:42 compute-0 sudo[116960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:43:42 compute-0 sudo[116960]: pam_unix(sudo:session): session closed for user root
Sep 30 17:43:42 compute-0 sudo[116985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 17:43:42 compute-0 sudo[116985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:43:42 compute-0 podman[117050]: 2025-09-30 17:43:42.579842283 +0000 UTC m=+0.037681853 container create 91719d5b4fb9c899c626288ce6f18ad9f0618ff5f0cff4ce0a4c9c70d5844260 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Sep 30 17:43:42 compute-0 systemd[1]: Started libpod-conmon-91719d5b4fb9c899c626288ce6f18ad9f0618ff5f0cff4ce0a4c9c70d5844260.scope.
Sep 30 17:43:42 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:43:42 compute-0 podman[117050]: 2025-09-30 17:43:42.637786486 +0000 UTC m=+0.095626076 container init 91719d5b4fb9c899c626288ce6f18ad9f0618ff5f0cff4ce0a4c9c70d5844260 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:43:42 compute-0 podman[117050]: 2025-09-30 17:43:42.643937263 +0000 UTC m=+0.101776833 container start 91719d5b4fb9c899c626288ce6f18ad9f0618ff5f0cff4ce0a4c9c70d5844260 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_austin, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Sep 30 17:43:42 compute-0 podman[117050]: 2025-09-30 17:43:42.646956455 +0000 UTC m=+0.104796025 container attach 91719d5b4fb9c899c626288ce6f18ad9f0618ff5f0cff4ce0a4c9c70d5844260 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:43:42 compute-0 intelligent_austin[117066]: 167 167
Sep 30 17:43:42 compute-0 systemd[1]: libpod-91719d5b4fb9c899c626288ce6f18ad9f0618ff5f0cff4ce0a4c9c70d5844260.scope: Deactivated successfully.
Sep 30 17:43:42 compute-0 podman[117050]: 2025-09-30 17:43:42.649660869 +0000 UTC m=+0.107500439 container died 91719d5b4fb9c899c626288ce6f18ad9f0618ff5f0cff4ce0a4c9c70d5844260 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Sep 30 17:43:42 compute-0 podman[117050]: 2025-09-30 17:43:42.562550574 +0000 UTC m=+0.020390164 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:43:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-714f39b7bd8690809b2202a8ce08fa4249e69c2e2eb4f7c7f23b72dc348db6c0-merged.mount: Deactivated successfully.
Sep 30 17:43:42 compute-0 podman[117050]: 2025-09-30 17:43:42.687374802 +0000 UTC m=+0.145214372 container remove 91719d5b4fb9c899c626288ce6f18ad9f0618ff5f0cff4ce0a4c9c70d5844260 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_austin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True)
Sep 30 17:43:42 compute-0 systemd[1]: libpod-conmon-91719d5b4fb9c899c626288ce6f18ad9f0618ff5f0cff4ce0a4c9c70d5844260.scope: Deactivated successfully.
Sep 30 17:43:42 compute-0 ceph-mon[73755]: pgmap v138: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:43:42 compute-0 podman[117091]: 2025-09-30 17:43:42.840160769 +0000 UTC m=+0.046015770 container create d9023856cb4749d1e2072c4f87bf2e8949cdc46217e33c2d600428d0c610d097 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_euler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Sep 30 17:43:42 compute-0 systemd[1]: Started libpod-conmon-d9023856cb4749d1e2072c4f87bf2e8949cdc46217e33c2d600428d0c610d097.scope.
Sep 30 17:43:42 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:43:42 compute-0 podman[117091]: 2025-09-30 17:43:42.816676892 +0000 UTC m=+0.022531923 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:43:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93692a34d34bbf94d250c74293c9992ce2db686eefee2846269b4b1b346160bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:43:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93692a34d34bbf94d250c74293c9992ce2db686eefee2846269b4b1b346160bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:43:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93692a34d34bbf94d250c74293c9992ce2db686eefee2846269b4b1b346160bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:43:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93692a34d34bbf94d250c74293c9992ce2db686eefee2846269b4b1b346160bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:43:42 compute-0 podman[117091]: 2025-09-30 17:43:42.932118965 +0000 UTC m=+0.137973976 container init d9023856cb4749d1e2072c4f87bf2e8949cdc46217e33c2d600428d0c610d097 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_euler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Sep 30 17:43:42 compute-0 podman[117091]: 2025-09-30 17:43:42.940062501 +0000 UTC m=+0.145917512 container start d9023856cb4749d1e2072c4f87bf2e8949cdc46217e33c2d600428d0c610d097 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_euler, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:43:42 compute-0 podman[117091]: 2025-09-30 17:43:42.943228467 +0000 UTC m=+0.149083478 container attach d9023856cb4749d1e2072c4f87bf2e8949cdc46217e33c2d600428d0c610d097 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_euler, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:43:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v139: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:43:43 compute-0 lvm[117183]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 17:43:43 compute-0 lvm[117183]: VG ceph_vg0 finished
Sep 30 17:43:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:43:43.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:43 compute-0 wizardly_euler[117107]: {}
Sep 30 17:43:43 compute-0 systemd[1]: libpod-d9023856cb4749d1e2072c4f87bf2e8949cdc46217e33c2d600428d0c610d097.scope: Deactivated successfully.
Sep 30 17:43:43 compute-0 podman[117091]: 2025-09-30 17:43:43.648281013 +0000 UTC m=+0.854136034 container died d9023856cb4749d1e2072c4f87bf2e8949cdc46217e33c2d600428d0c610d097 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_euler, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Sep 30 17:43:43 compute-0 systemd[1]: libpod-d9023856cb4749d1e2072c4f87bf2e8949cdc46217e33c2d600428d0c610d097.scope: Consumed 1.049s CPU time.
Sep 30 17:43:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-93692a34d34bbf94d250c74293c9992ce2db686eefee2846269b4b1b346160bc-merged.mount: Deactivated successfully.
Sep 30 17:43:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:43 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 17:43:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:43 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Sep 30 17:43:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:43 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Sep 30 17:43:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:43 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Sep 30 17:43:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:43 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Sep 30 17:43:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:43 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Sep 30 17:43:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:43 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Sep 30 17:43:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:43 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:43:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:43 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:43:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:43 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:43:43 compute-0 podman[117091]: 2025-09-30 17:43:43.695565267 +0000 UTC m=+0.901420268 container remove d9023856cb4749d1e2072c4f87bf2e8949cdc46217e33c2d600428d0c610d097 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Sep 30 17:43:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:43 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Sep 30 17:43:43 compute-0 systemd[1]: libpod-conmon-d9023856cb4749d1e2072c4f87bf2e8949cdc46217e33c2d600428d0c610d097.scope: Deactivated successfully.
Sep 30 17:43:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:43 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:43:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:43 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Sep 30 17:43:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:43 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Sep 30 17:43:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:43 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Sep 30 17:43:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:43 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Sep 30 17:43:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:43 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Sep 30 17:43:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:43 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Sep 30 17:43:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:43 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Sep 30 17:43:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:43 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Sep 30 17:43:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:43 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Sep 30 17:43:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:43 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Sep 30 17:43:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:43 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Sep 30 17:43:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:43 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Sep 30 17:43:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:43 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 17:43:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:43 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Sep 30 17:43:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:43 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 17:43:43 compute-0 sudo[116985]: pam_unix(sudo:session): session closed for user root
Sep 30 17:43:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:43:43 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:43:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:43:43 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:43:43 compute-0 sudo[117213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 17:43:43 compute-0 sudo[117213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:43:43 compute-0 sudo[117213]: pam_unix(sudo:session): session closed for user root
Sep 30 17:43:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:43:43.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:43:44 compute-0 ceph-mon[73755]: pgmap v139: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:43:44 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:43:44 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:43:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v140: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:43:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:45 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8cd4000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:43:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:43:45.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:45 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8cbc000da0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:43:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:43:45.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:46 compute-0 ceph-mon[73755]: pgmap v140: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:43:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:43:46.954Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 17:43:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:43:46.954Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:43:47 compute-0 sshd-session[117240]: Accepted publickey for zuul from 192.168.122.30 port 45300 ssh2: ECDSA SHA256:COqAmbdBUMx+UyDa5XAop4Bi6qvwvYhJQ16rCseuHkQ
Sep 30 17:43:47 compute-0 systemd-logind[811]: New session 42 of user zuul.
Sep 30 17:43:47 compute-0 systemd[1]: Started Session 42 of User zuul.
Sep 30 17:43:47 compute-0 sshd-session[117240]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 17:43:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v141: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 1 op/s
Sep 30 17:43:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/174347 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 17:43:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:47 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8cbc000da0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:43:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:43:47.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:47 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8cbc000da0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:43:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 17:43:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:43:47.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 17:43:48 compute-0 python3.9[117396]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:43:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:43:48] "GET /metrics HTTP/1.1" 200 46510 "" "Prometheus/2.51.0"
Sep 30 17:43:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:43:48] "GET /metrics HTTP/1.1" 200 46510 "" "Prometheus/2.51.0"
Sep 30 17:43:48 compute-0 ceph-mon[73755]: pgmap v141: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 1 op/s
Sep 30 17:43:49 compute-0 python3.9[117550]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 17:43:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v142: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:43:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:49 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ca4000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:43:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 17:43:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:43:49.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 17:43:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:49 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8cc8001970 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:43:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:43:49 compute-0 ceph-mon[73755]: pgmap v142: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:43:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 17:43:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:43:49.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 17:43:50 compute-0 python3.9[117745]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:43:51 compute-0 sshd-session[117243]: Connection closed by 192.168.122.30 port 45300
Sep 30 17:43:51 compute-0 sshd-session[117240]: pam_unix(sshd:session): session closed for user zuul
Sep 30 17:43:51 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Sep 30 17:43:51 compute-0 systemd[1]: session-42.scope: Consumed 2.216s CPU time.
Sep 30 17:43:51 compute-0 systemd-logind[811]: Session 42 logged out. Waiting for processes to exit.
Sep 30 17:43:51 compute-0 systemd-logind[811]: Removed session 42.
Sep 30 17:43:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v143: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:43:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:51 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ca8000ea0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:43:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 17:43:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:43:51.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 17:43:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:51 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8cbc002290 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:43:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 17:43:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:43:51.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 17:43:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:43:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:43:52 compute-0 ceph-mon[73755]: pgmap v143: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:43:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:43:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v144: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:43:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:53 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ca40016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:43:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:43:53.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:53 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8cc8002290 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:43:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:43:53.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:54 compute-0 ceph-mon[73755]: pgmap v144: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:43:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:43:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v145: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:43:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:55 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ca80019c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:43:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:43:55.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:55 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8cbc002290 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:43:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:43:55.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:56 compute-0 sshd-session[117777]: Accepted publickey for zuul from 192.168.122.30 port 45304 ssh2: ECDSA SHA256:COqAmbdBUMx+UyDa5XAop4Bi6qvwvYhJQ16rCseuHkQ
Sep 30 17:43:56 compute-0 systemd-logind[811]: New session 43 of user zuul.
Sep 30 17:43:56 compute-0 systemd[1]: Started Session 43 of User zuul.
Sep 30 17:43:56 compute-0 sshd-session[117777]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 17:43:56 compute-0 ceph-mon[73755]: pgmap v145: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:43:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:43:56.954Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:43:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v146: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:43:57 compute-0 python3.9[117930]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:43:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:57 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8cbc002290 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:43:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:43:57.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:57 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8cbc002290 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:43:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:43:57.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:58 compute-0 python3.9[118086]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:43:58 compute-0 ceph-mon[73755]: pgmap v146: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:43:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:43:58] "GET /metrics HTTP/1.1" 200 46515 "" "Prometheus/2.51.0"
Sep 30 17:43:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:43:58] "GET /metrics HTTP/1.1" 200 46515 "" "Prometheus/2.51.0"
Sep 30 17:43:59 compute-0 sudo[118240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmcvdfzeaxosmfczkaqwymicxjxtqyaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254238.8818042-60-112622039810778/AnsiballZ_setup.py'
Sep 30 17:43:59 compute-0 sudo[118240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:43:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v147: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:43:59 compute-0 python3.9[118242]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 17:43:59 compute-0 kernel: ganesha.nfsd[117297]: segfault at 50 ip 00007f8d7dd1a32e sp 00007f8d367fb210 error 4 in libntirpc.so.5.8[7f8d7dcff000+2c000] likely on CPU 3 (core 0, socket 3)
Sep 30 17:43:59 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Sep 30 17:43:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[115303]: 30/09/2025 17:43:59 : epoch 68dc16c3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8cd4000df0 fd 37 proxy ignored for local
Sep 30 17:43:59 compute-0 systemd[1]: Started Process Core Dump (PID 118250/UID 0).
Sep 30 17:43:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:43:59.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:43:59 compute-0 sudo[118240]: pam_unix(sudo:session): session closed for user root
Sep 30 17:43:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:43:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:43:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:43:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:43:59.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:00 compute-0 sudo[118328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjplmzsvanqaitzbzacakzfnlxtxpoxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254238.8818042-60-112622039810778/AnsiballZ_dnf.py'
Sep 30 17:44:00 compute-0 sudo[118328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:00 compute-0 sudo[118331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:44:00 compute-0 sudo[118331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:44:00 compute-0 sudo[118331]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:00 compute-0 python3.9[118330]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 17:44:00 compute-0 ceph-mon[73755]: pgmap v147: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:44:00 compute-0 systemd-coredump[118251]: Process 115307 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 54:
                                                    #0  0x00007f8d7dd1a32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Sep 30 17:44:00 compute-0 systemd[1]: systemd-coredump@2-118250-0.service: Deactivated successfully.
Sep 30 17:44:00 compute-0 systemd[1]: systemd-coredump@2-118250-0.service: Consumed 1.177s CPU time.
Sep 30 17:44:00 compute-0 podman[118361]: 2025-09-30 17:44:00.872499368 +0000 UTC m=+0.024123605 container died 2a8a06a3a1263af2fcd20dc9078c52f0b40c9abbd27746f4b98d7c9207875722 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325)
Sep 30 17:44:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-22c4c0047b42c3bb1be7f2619385ae741fe443b6d3dddc782d21143e14e6e508-merged.mount: Deactivated successfully.
Sep 30 17:44:00 compute-0 podman[118361]: 2025-09-30 17:44:00.909646067 +0000 UTC m=+0.061270284 container remove 2a8a06a3a1263af2fcd20dc9078c52f0b40c9abbd27746f4b98d7c9207875722 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:44:00 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Main process exited, code=exited, status=139/n/a
Sep 30 17:44:01 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Failed with result 'exit-code'.
Sep 30 17:44:01 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Consumed 1.440s CPU time.
Sep 30 17:44:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v148: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:44:01 compute-0 sudo[118328]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 17:44:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:44:01.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 17:44:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:44:01.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:02 compute-0 sudo[118556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqsdmvjcqnxmpsbtcrksrqxibrlvkjsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254241.8586822-84-148990112955107/AnsiballZ_setup.py'
Sep 30 17:44:02 compute-0 sudo[118556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:02 compute-0 ceph-mon[73755]: pgmap v148: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:44:02 compute-0 python3.9[118558]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 17:44:02 compute-0 sudo[118556]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v149: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:44:03 compute-0 sudo[118752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-milrckloboqvmgysdsoxponzlndljgep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254243.1728272-106-135507050313286/AnsiballZ_file.py'
Sep 30 17:44:03 compute-0 sudo[118752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:44:03.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:03 compute-0 python3.9[118754]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:44:03 compute-0 sudo[118752]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:44:03.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:04 compute-0 ceph-mon[73755]: pgmap v149: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:44:04 compute-0 sudo[118905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnsgrcxidnnzfgabiknqyxxbewhpahra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254244.109901-122-194262232513981/AnsiballZ_command.py'
Sep 30 17:44:04 compute-0 sudo[118905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:44:04 compute-0 python3.9[118907]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:44:04 compute-0 sudo[118905]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v150: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:44:05 compute-0 sudo[119071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dskyhklviwkdpkyfculkwkjiccnngrjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254245.0707846-138-57496257289838/AnsiballZ_stat.py'
Sep 30 17:44:05 compute-0 sudo[119071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/174405 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 17:44:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 17:44:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:44:05.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 17:44:05 compute-0 python3.9[119073]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:44:05 compute-0 sudo[119071]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:05 compute-0 sudo[119150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnuysslrzdozrwcpruuccmgytcidkbtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254245.0707846-138-57496257289838/AnsiballZ_file.py'
Sep 30 17:44:05 compute-0 sudo[119150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 17:44:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:44:05.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 17:44:06 compute-0 python3.9[119152]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:44:06 compute-0 sudo[119150]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:06 compute-0 ceph-mon[73755]: pgmap v150: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:44:06 compute-0 sudo[119302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrdtdironbpluwxhmgiitsberutpyudw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254246.511115-162-259432990724715/AnsiballZ_stat.py'
Sep 30 17:44:06 compute-0 sudo[119302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:44:06.956Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:44:06 compute-0 python3.9[119304]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:44:07 compute-0 sudo[119302]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:07 compute-0 sudo[119380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixfnhyljyuvtwlsfxcywcujdcfmgtszr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254246.511115-162-259432990724715/AnsiballZ_file.py'
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v151: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:44:07 compute-0 sudo[119380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:44:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_17:44:07
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['backups', '.rgw.root', 'default.rgw.control', 'vms', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', '.nfs', '.mgr', 'images', 'default.rgw.log', 'cephfs.cephfs.data']
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:44:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:44:07 compute-0 python3.9[119382]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:44:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:44:07 compute-0 sudo[119380]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:44:07.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:44:07.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:08 compute-0 sudo[119534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxesrvcwfewzlhdblrtaiautekxylmia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254247.8683972-188-151809477421034/AnsiballZ_ini_file.py'
Sep 30 17:44:08 compute-0 sudo[119534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:08 compute-0 python3.9[119536]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:44:08 compute-0 ceph-mon[73755]: pgmap v151: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:44:08 compute-0 sudo[119534]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:08 compute-0 sudo[119686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktdunwhhqfnwjmwyhvfkklhfmqzvbuwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254248.5624313-188-92989980199110/AnsiballZ_ini_file.py'
Sep 30 17:44:08 compute-0 sudo[119686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:44:08] "GET /metrics HTTP/1.1" 200 46511 "" "Prometheus/2.51.0"
Sep 30 17:44:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:44:08] "GET /metrics HTTP/1.1" 200 46511 "" "Prometheus/2.51.0"
Sep 30 17:44:08 compute-0 python3.9[119688]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:44:08 compute-0 sudo[119686]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v152: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:44:09 compute-0 sudo[119838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njwdfekeapvujgphwasbxeiidlafznjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254249.0677855-188-258493489527074/AnsiballZ_ini_file.py'
Sep 30 17:44:09 compute-0 sudo[119838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:09 compute-0 python3.9[119841]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:44:09 compute-0 sudo[119838]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:44:09.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:44:09 compute-0 sudo[119992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsqmkqfrlkuubmfxfhbhjgdxkvovsxnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254249.6417122-188-252950858873713/AnsiballZ_ini_file.py'
Sep 30 17:44:09 compute-0 sudo[119992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:44:09.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:10 compute-0 python3.9[119994]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:44:10 compute-0 sudo[119992]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:10 compute-0 ceph-mon[73755]: pgmap v152: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:44:11 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Scheduled restart job, restart counter is at 3.
Sep 30 17:44:11 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:44:11 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Consumed 1.440s CPU time.
Sep 30 17:44:11 compute-0 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 17:44:11 compute-0 sudo[120156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smcerykzdatyicoybtholxyinwtckexe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254250.9644108-250-67798950758270/AnsiballZ_dnf.py'
Sep 30 17:44:11 compute-0 sudo[120156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v153: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:44:11 compute-0 podman[120195]: 2025-09-30 17:44:11.346593574 +0000 UTC m=+0.037635342 container create e9fc40bdae9459facda7ee0819cec90dbd5ff33a7266512c7605ada1f0350531 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Sep 30 17:44:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b5ec8bd30f5f0bd53843b4709b393929e122d667fb5b223155433352d243197/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Sep 30 17:44:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b5ec8bd30f5f0bd53843b4709b393929e122d667fb5b223155433352d243197/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:44:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b5ec8bd30f5f0bd53843b4709b393929e122d667fb5b223155433352d243197/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:44:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b5ec8bd30f5f0bd53843b4709b393929e122d667fb5b223155433352d243197/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-0.syzvbh-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:44:11 compute-0 podman[120195]: 2025-09-30 17:44:11.421816046 +0000 UTC m=+0.112857834 container init e9fc40bdae9459facda7ee0819cec90dbd5ff33a7266512c7605ada1f0350531 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:44:11 compute-0 podman[120195]: 2025-09-30 17:44:11.330512948 +0000 UTC m=+0.021554736 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:44:11 compute-0 podman[120195]: 2025-09-30 17:44:11.429773942 +0000 UTC m=+0.120815710 container start e9fc40bdae9459facda7ee0819cec90dbd5ff33a7266512c7605ada1f0350531 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Sep 30 17:44:11 compute-0 python3.9[120159]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 17:44:11 compute-0 bash[120195]: e9fc40bdae9459facda7ee0819cec90dbd5ff33a7266512c7605ada1f0350531
Sep 30 17:44:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:11 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Sep 30 17:44:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:11 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Sep 30 17:44:11 compute-0 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:44:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:11 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Sep 30 17:44:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:11 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Sep 30 17:44:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:11 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Sep 30 17:44:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:11 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Sep 30 17:44:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:11 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Sep 30 17:44:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:11 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 17:44:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:44:11.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:44:11.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:12 compute-0 ceph-mon[73755]: pgmap v153: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:44:12 compute-0 sudo[120156]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v154: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:44:13 compute-0 sudo[120404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuuxlcjmxeyvmjqasbzxhpirlpabavxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254253.2929034-272-222354006158187/AnsiballZ_setup.py'
Sep 30 17:44:13 compute-0 sudo[120404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 17:44:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:44:13.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 17:44:13 compute-0 python3.9[120406]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:44:13 compute-0 sudo[120404]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:44:13.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:14 compute-0 sudo[120559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dstiounppnezospcnlgzxpbmgtxfhlbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254254.247215-288-168731968874983/AnsiballZ_stat.py'
Sep 30 17:44:14 compute-0 sudo[120559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:14 compute-0 ceph-mon[73755]: pgmap v154: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:44:14 compute-0 python3.9[120561]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:44:14 compute-0 sudo[120559]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:44:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v155: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:44:15 compute-0 sudo[120712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-escyrygertuylqknsblrjdylurshfdgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254255.172493-306-60264725425561/AnsiballZ_stat.py'
Sep 30 17:44:15 compute-0 sudo[120712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:15 compute-0 python3.9[120714]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:44:15 compute-0 sudo[120712]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:44:15.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 17:44:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:44:15.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 17:44:16 compute-0 ceph-mon[73755]: pgmap v155: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:44:16 compute-0 sudo[120865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aigvdsoeqvyteyomvqwewlkohrywqlbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254256.1117148-326-189080059090701/AnsiballZ_service_facts.py'
Sep 30 17:44:16 compute-0 sudo[120865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:16 compute-0 python3.9[120867]: ansible-service_facts Invoked
Sep 30 17:44:16 compute-0 network[120884]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Sep 30 17:44:16 compute-0 network[120885]: 'network-scripts' will be removed from distribution in near future.
Sep 30 17:44:16 compute-0 network[120886]: It is advised to switch to 'NetworkManager' instead for network management.
Sep 30 17:44:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:44:16.956Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:44:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v156: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:44:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:17 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 17:44:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:17 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 17:44:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:17 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 17:44:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 17:44:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:44:17.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 17:44:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:44:17.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:18 compute-0 ceph-mon[73755]: pgmap v156: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:44:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/174418 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 17:44:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [ALERT] 272/174418 (4) : backend 'backend' has no server available!
Sep 30 17:44:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:44:18] "GET /metrics HTTP/1.1" 200 46511 "" "Prometheus/2.51.0"
Sep 30 17:44:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:44:18] "GET /metrics HTTP/1.1" 200 46511 "" "Prometheus/2.51.0"
Sep 30 17:44:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v157: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Sep 30 17:44:19 compute-0 sudo[120865]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:44:19.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:44:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:44:19.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:20 compute-0 sudo[121028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:44:20 compute-0 sudo[121028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:44:20 compute-0 sudo[121028]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:20 compute-0 ceph-mon[73755]: pgmap v157: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Sep 30 17:44:21 compute-0 sudo[121201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psenfvbzovxlapsoqvihjivwweslqwjl ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1759254260.8547194-352-213291138703548/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1759254260.8547194-352-213291138703548/args'
Sep 30 17:44:21 compute-0 sudo[121201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:21 compute-0 sudo[121201]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v158: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Sep 30 17:44:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:44:21.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:21 compute-0 sudo[121370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smyevhbqkqpthkarpwwlgnhrlvwjlnsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254261.6203265-374-75649933662453/AnsiballZ_dnf.py'
Sep 30 17:44:21 compute-0 sudo[121370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:44:21.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:21 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 17:44:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:21 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 17:44:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:21 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 17:44:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:22 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 17:44:22 compute-0 python3.9[121372]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 17:44:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:44:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:44:22 compute-0 ceph-mon[73755]: pgmap v158: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Sep 30 17:44:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:44:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v159: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Sep 30 17:44:23 compute-0 sudo[121370]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:44:23.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:23 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Sep 30 17:44:23 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:44:23.748535) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 17:44:23 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Sep 30 17:44:23 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254263748602, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 2498, "num_deletes": 251, "total_data_size": 4390343, "memory_usage": 4444736, "flush_reason": "Manual Compaction"}
Sep 30 17:44:23 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Sep 30 17:44:23 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254263777898, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 4272896, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7998, "largest_seqno": 10495, "table_properties": {"data_size": 4261437, "index_size": 7249, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 26352, "raw_average_key_size": 21, "raw_value_size": 4237259, "raw_average_value_size": 3411, "num_data_blocks": 321, "num_entries": 1242, "num_filter_entries": 1242, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759254086, "oldest_key_time": 1759254086, "file_creation_time": 1759254263, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Sep 30 17:44:23 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 29397 microseconds, and 10849 cpu microseconds.
Sep 30 17:44:23 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 17:44:23 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:44:23.777942) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 4272896 bytes OK
Sep 30 17:44:23 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:44:23.777959) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Sep 30 17:44:23 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:44:23.780288) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Sep 30 17:44:23 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:44:23.780302) EVENT_LOG_v1 {"time_micros": 1759254263780297, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 17:44:23 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:44:23.780318) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 17:44:23 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 4379338, prev total WAL file size 4379338, number of live WAL files 2.
Sep 30 17:44:23 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 17:44:23 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:44:23.781215) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Sep 30 17:44:23 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 17:44:23 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(4172KB)], [23(10MB)]
Sep 30 17:44:23 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254263781243, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 15367607, "oldest_snapshot_seqno": -1}
Sep 30 17:44:23 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3868 keys, 12225139 bytes, temperature: kUnknown
Sep 30 17:44:23 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254263871901, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 12225139, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12193698, "index_size": 20643, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9733, "raw_key_size": 98210, "raw_average_key_size": 25, "raw_value_size": 12117543, "raw_average_value_size": 3132, "num_data_blocks": 889, "num_entries": 3868, "num_filter_entries": 3868, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759254263, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Sep 30 17:44:23 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 17:44:23 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:44:23.872394) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 12225139 bytes
Sep 30 17:44:23 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:44:23.873751) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.0 rd, 134.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.1, 10.6 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(6.5) write-amplify(2.9) OK, records in: 4391, records dropped: 523 output_compression: NoCompression
Sep 30 17:44:23 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:44:23.873768) EVENT_LOG_v1 {"time_micros": 1759254263873760, "job": 8, "event": "compaction_finished", "compaction_time_micros": 90948, "compaction_time_cpu_micros": 38542, "output_level": 6, "num_output_files": 1, "total_output_size": 12225139, "num_input_records": 4391, "num_output_records": 3868, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 17:44:23 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 17:44:23 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254263874552, "job": 8, "event": "table_file_deletion", "file_number": 25}
Sep 30 17:44:23 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 17:44:23 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254263876314, "job": 8, "event": "table_file_deletion", "file_number": 23}
Sep 30 17:44:23 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:44:23.781174) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:44:23 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:44:23.876375) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:44:23 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:44:23.876379) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:44:23 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:44:23.876381) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:44:23 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:44:23.876383) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:44:23 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:44:23.876385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:44:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:44:23.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:24 compute-0 sudo[121525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzxzpuhznoluihsromqqprdeaeouponz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254263.9237707-400-210786131643502/AnsiballZ_package_facts.py'
Sep 30 17:44:24 compute-0 sudo[121525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:44:24 compute-0 ceph-mon[73755]: pgmap v159: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Sep 30 17:44:24 compute-0 python3.9[121527]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Sep 30 17:44:25 compute-0 sudo[121525]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v160: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
Sep 30 17:44:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:25 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 17:44:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:25 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 17:44:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:25 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 17:44:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:44:25.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:25 compute-0 sudo[121679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evgsxyagyfksoxuwilmocdbufuxwqayp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254265.6309848-420-280611284338625/AnsiballZ_stat.py'
Sep 30 17:44:25 compute-0 sudo[121679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:44:25.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:26 compute-0 python3.9[121681]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:44:26 compute-0 sudo[121679]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:26 compute-0 sudo[121757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cttbrmchsneiigopuitzyzuqgehrjgjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254265.6309848-420-280611284338625/AnsiballZ_file.py'
Sep 30 17:44:26 compute-0 sudo[121757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:26 compute-0 python3.9[121759]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:44:26 compute-0 sudo[121757]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:26 compute-0 ceph-mon[73755]: pgmap v160: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
Sep 30 17:44:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:44:26.959Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:44:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v161: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 170 B/s wr, 0 op/s
Sep 30 17:44:27 compute-0 sudo[121909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dobaezcqjotgpsqqrnxxdfbibwuyqsge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254266.9886458-444-56513166816255/AnsiballZ_stat.py'
Sep 30 17:44:27 compute-0 sudo[121909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:27 compute-0 python3.9[121911]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:44:27 compute-0 sudo[121909]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:27 compute-0 sudo[121989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doccfnhnwtnvnvpgkrordsrhdizrwrkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254266.9886458-444-56513166816255/AnsiballZ_file.py'
Sep 30 17:44:27 compute-0 sudo[121989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:44:27.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:27 compute-0 python3.9[121991]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:44:27 compute-0 sudo[121989]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 17:44:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:44:27.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 17:44:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:44:28] "GET /metrics HTTP/1.1" 200 46507 "" "Prometheus/2.51.0"
Sep 30 17:44:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:44:28] "GET /metrics HTTP/1.1" 200 46507 "" "Prometheus/2.51.0"
Sep 30 17:44:28 compute-0 ceph-mon[73755]: pgmap v161: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 170 B/s wr, 0 op/s
Sep 30 17:44:29 compute-0 sudo[122141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnfcfbqfvlimgoynzoeojsnxackwcnws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254268.722552-480-106066868396618/AnsiballZ_lineinfile.py'
Sep 30 17:44:29 compute-0 sudo[122141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v162: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Sep 30 17:44:29 compute-0 python3.9[122143]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:44:29 compute-0 sudo[122141]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:44:29.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:44:29 compute-0 ceph-mon[73755]: pgmap v162: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Sep 30 17:44:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:44:29.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:30 compute-0 sudo[122295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpspibncgzmmdwfhjtwixelkrmxprfvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254270.1867747-510-262190709591449/AnsiballZ_setup.py'
Sep 30 17:44:30 compute-0 sudo[122295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:30 compute-0 python3.9[122297]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 17:44:30 compute-0 sudo[122295]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v163: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 853 B/s rd, 426 B/s wr, 1 op/s
Sep 30 17:44:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:31 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 17:44:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:31 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Sep 30 17:44:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:31 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Sep 30 17:44:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:31 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Sep 30 17:44:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:31 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Sep 30 17:44:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:31 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Sep 30 17:44:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:31 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Sep 30 17:44:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:31 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:44:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:31 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:44:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:31 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:44:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:31 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Sep 30 17:44:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:31 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:44:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:31 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Sep 30 17:44:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:31 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Sep 30 17:44:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:31 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Sep 30 17:44:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:31 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Sep 30 17:44:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:31 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Sep 30 17:44:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:31 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Sep 30 17:44:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:31 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Sep 30 17:44:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:31 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Sep 30 17:44:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:31 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Sep 30 17:44:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:31 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Sep 30 17:44:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:31 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Sep 30 17:44:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:31 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Sep 30 17:44:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:31 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 17:44:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:31 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Sep 30 17:44:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:31 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 17:44:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:31 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 17:44:31 compute-0 sudo[122392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzpmjcrttfnhuujmeabbdxohajidsiay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254270.1867747-510-262190709591449/AnsiballZ_systemd.py'
Sep 30 17:44:31 compute-0 sudo[122392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:31 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac40000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:44:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:44:31.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:31 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac34001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:44:31 compute-0 python3.9[122394]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:44:31 compute-0 sudo[122392]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:44:31.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:32 compute-0 ceph-mon[73755]: pgmap v163: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 853 B/s rd, 426 B/s wr, 1 op/s
Sep 30 17:44:32 compute-0 sshd-session[117780]: Connection closed by 192.168.122.30 port 45304
Sep 30 17:44:32 compute-0 sshd-session[117777]: pam_unix(sshd:session): session closed for user zuul
Sep 30 17:44:32 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Sep 30 17:44:32 compute-0 systemd[1]: session-43.scope: Consumed 21.445s CPU time.
Sep 30 17:44:32 compute-0 systemd-logind[811]: Session 43 logged out. Waiting for processes to exit.
Sep 30 17:44:32 compute-0 systemd-logind[811]: Removed session 43.
Sep 30 17:44:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v164: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 853 B/s rd, 426 B/s wr, 1 op/s
Sep 30 17:44:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/174433 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 17:44:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:33 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac34001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:44:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:44:33.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:33 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac14000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:44:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 17:44:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:44:33.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 17:44:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:34 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 17:44:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:34 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 17:44:34 compute-0 ceph-mon[73755]: pgmap v164: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 853 B/s rd, 426 B/s wr, 1 op/s
Sep 30 17:44:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:44:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v165: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Sep 30 17:44:35 compute-0 kernel: ganesha.nfsd[122311]: segfault at 50 ip 00007facecce332e sp 00007facb27fb210 error 4 in libntirpc.so.5.8[7faceccc8000+2c000] likely on CPU 6 (core 0, socket 6)
Sep 30 17:44:35 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Sep 30 17:44:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[120210]: 30/09/2025 17:44:35 : epoch 68dc16eb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac20000fa0 fd 38 proxy ignored for local
Sep 30 17:44:35 compute-0 systemd[1]: Started Process Core Dump (PID 122429/UID 0).
Sep 30 17:44:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:44:35.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 17:44:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:44:36.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 17:44:36 compute-0 ceph-mon[73755]: pgmap v165: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Sep 30 17:44:36 compute-0 systemd-coredump[122431]: Process 120214 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 45:
                                                    #0  0x00007facecce332e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Sep 30 17:44:36 compute-0 systemd[1]: systemd-coredump@3-122429-0.service: Deactivated successfully.
Sep 30 17:44:36 compute-0 systemd[1]: systemd-coredump@3-122429-0.service: Consumed 1.171s CPU time.
Sep 30 17:44:36 compute-0 podman[122436]: 2025-09-30 17:44:36.932074452 +0000 UTC m=+0.026462041 container died e9fc40bdae9459facda7ee0819cec90dbd5ff33a7266512c7605ada1f0350531 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Sep 30 17:44:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:44:36.959Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:44:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b5ec8bd30f5f0bd53843b4709b393929e122d667fb5b223155433352d243197-merged.mount: Deactivated successfully.
Sep 30 17:44:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v166: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 853 B/s wr, 2 op/s
Sep 30 17:44:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:44:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:44:37 compute-0 podman[122436]: 2025-09-30 17:44:37.31427976 +0000 UTC m=+0.408667329 container remove e9fc40bdae9459facda7ee0819cec90dbd5ff33a7266512c7605ada1f0350531 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:44:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:44:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:44:37 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Main process exited, code=exited, status=139/n/a
Sep 30 17:44:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:44:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:44:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:44:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:44:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:44:37 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Failed with result 'exit-code'.
Sep 30 17:44:37 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Consumed 1.378s CPU time.
Sep 30 17:44:37 compute-0 sshd-session[122481]: Accepted publickey for zuul from 192.168.122.30 port 36266 ssh2: ECDSA SHA256:COqAmbdBUMx+UyDa5XAop4Bi6qvwvYhJQ16rCseuHkQ
Sep 30 17:44:37 compute-0 systemd-logind[811]: New session 44 of user zuul.
Sep 30 17:44:37 compute-0 systemd[1]: Started Session 44 of User zuul.
Sep 30 17:44:37 compute-0 sshd-session[122481]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 17:44:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:44:37.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 17:44:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:44:38.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 17:44:38 compute-0 sudo[122635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gorospfexbyficzzqqgtdleekmqirtfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254277.7206452-24-268455081305563/AnsiballZ_file.py'
Sep 30 17:44:38 compute-0 sudo[122635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:38 compute-0 python3.9[122637]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:44:38 compute-0 sudo[122635]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:38 compute-0 ceph-mon[73755]: pgmap v166: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 853 B/s wr, 2 op/s
Sep 30 17:44:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:44:38] "GET /metrics HTTP/1.1" 200 46512 "" "Prometheus/2.51.0"
Sep 30 17:44:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:44:38] "GET /metrics HTTP/1.1" 200 46512 "" "Prometheus/2.51.0"
Sep 30 17:44:39 compute-0 sudo[122787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rebfsfzpwwclgkrfrpmibrzukeosqovx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254278.7546523-48-45168306492145/AnsiballZ_stat.py'
Sep 30 17:44:39 compute-0 sudo[122787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v167: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 2.0 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 17:44:39 compute-0 python3.9[122789]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:44:39 compute-0 sudo[122787]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:39 compute-0 sudo[122867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbuhnqjfnkfrrgtoqvuldscjjkqrkcjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254278.7546523-48-45168306492145/AnsiballZ_file.py'
Sep 30 17:44:39 compute-0 sudo[122867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:44:39.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:44:39 compute-0 python3.9[122869]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:44:39 compute-0 sudo[122867]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 17:44:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:44:40.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 17:44:40 compute-0 sudo[122894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:44:40 compute-0 sudo[122894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:44:40 compute-0 sudo[122894]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:40 compute-0 sshd-session[122485]: Connection closed by 192.168.122.30 port 36266
Sep 30 17:44:40 compute-0 sshd-session[122481]: pam_unix(sshd:session): session closed for user zuul
Sep 30 17:44:40 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Sep 30 17:44:40 compute-0 systemd[1]: session-44.scope: Consumed 1.409s CPU time.
Sep 30 17:44:40 compute-0 systemd-logind[811]: Session 44 logged out. Waiting for processes to exit.
Sep 30 17:44:40 compute-0 systemd-logind[811]: Removed session 44.
Sep 30 17:44:40 compute-0 ceph-mon[73755]: pgmap v167: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 2.0 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 17:44:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/174440 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 17:44:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v168: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Sep 30 17:44:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/174441 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 17:44:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 17:44:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:44:41.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 17:44:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:44:42.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:42 compute-0 ceph-mon[73755]: pgmap v168: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Sep 30 17:44:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v169: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Sep 30 17:44:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:44:43.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:44:44.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:44 compute-0 sudo[122923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:44:44 compute-0 sudo[122923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:44:44 compute-0 sudo[122923]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:44 compute-0 sudo[122948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 17:44:44 compute-0 sudo[122948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:44:44 compute-0 sudo[122948]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:44 compute-0 ceph-mon[73755]: pgmap v169: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Sep 30 17:44:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:44:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:44:44 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:44:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 17:44:44 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:44:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 17:44:44 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:44:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 17:44:44 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:44:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 17:44:44 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:44:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 17:44:44 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:44:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:44:44 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:44:44 compute-0 sudo[123003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:44:44 compute-0 sudo[123003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:44:44 compute-0 sudo[123003]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:44 compute-0 sudo[123028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 17:44:44 compute-0 sudo[123028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:44:45 compute-0 podman[123094]: 2025-09-30 17:44:45.228000844 +0000 UTC m=+0.048489389 container create 6f5b51387570e880387b21673e4f060169064f730978cc7830fe8db38718b7b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Sep 30 17:44:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v170: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Sep 30 17:44:45 compute-0 systemd[92405]: Created slice User Background Tasks Slice.
Sep 30 17:44:45 compute-0 systemd[92405]: Starting Cleanup of User's Temporary Files and Directories...
Sep 30 17:44:45 compute-0 systemd[92405]: Finished Cleanup of User's Temporary Files and Directories.
Sep 30 17:44:45 compute-0 systemd[1]: Started libpod-conmon-6f5b51387570e880387b21673e4f060169064f730978cc7830fe8db38718b7b2.scope.
Sep 30 17:44:45 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:44:45 compute-0 podman[123094]: 2025-09-30 17:44:45.200868675 +0000 UTC m=+0.021357250 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:44:45 compute-0 podman[123094]: 2025-09-30 17:44:45.312625479 +0000 UTC m=+0.133114014 container init 6f5b51387570e880387b21673e4f060169064f730978cc7830fe8db38718b7b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Sep 30 17:44:45 compute-0 podman[123094]: 2025-09-30 17:44:45.318630395 +0000 UTC m=+0.139118950 container start 6f5b51387570e880387b21673e4f060169064f730978cc7830fe8db38718b7b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_hoover, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0)
Sep 30 17:44:45 compute-0 unruffled_hoover[123112]: 167 167
Sep 30 17:44:45 compute-0 systemd[1]: libpod-6f5b51387570e880387b21673e4f060169064f730978cc7830fe8db38718b7b2.scope: Deactivated successfully.
Sep 30 17:44:45 compute-0 conmon[123112]: conmon 6f5b51387570e880387b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6f5b51387570e880387b21673e4f060169064f730978cc7830fe8db38718b7b2.scope/container/memory.events
Sep 30 17:44:45 compute-0 podman[123094]: 2025-09-30 17:44:45.330740649 +0000 UTC m=+0.151229204 container attach 6f5b51387570e880387b21673e4f060169064f730978cc7830fe8db38718b7b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_hoover, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Sep 30 17:44:45 compute-0 podman[123094]: 2025-09-30 17:44:45.331289964 +0000 UTC m=+0.151778509 container died 6f5b51387570e880387b21673e4f060169064f730978cc7830fe8db38718b7b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_hoover, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:44:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-9999717b0471f8ec4b6e22e6eae8ab0f399be869d9cd57f5e42cbfd555a040f9-merged.mount: Deactivated successfully.
Sep 30 17:44:45 compute-0 podman[123094]: 2025-09-30 17:44:45.506690955 +0000 UTC m=+0.327179490 container remove 6f5b51387570e880387b21673e4f060169064f730978cc7830fe8db38718b7b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Sep 30 17:44:45 compute-0 systemd[1]: libpod-conmon-6f5b51387570e880387b21673e4f060169064f730978cc7830fe8db38718b7b2.scope: Deactivated successfully.
Sep 30 17:44:45 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:44:45 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:44:45 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:44:45 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:44:45 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:44:45 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:44:45 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:44:45 compute-0 podman[123140]: 2025-09-30 17:44:45.643369527 +0000 UTC m=+0.036589851 container create dddb66fcc27cb3d1c8aa1edf43c20ed0b93bd0da14df453e45557d06d390c416 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_hopper, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:44:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:44:45.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:45 compute-0 systemd[1]: Started libpod-conmon-dddb66fcc27cb3d1c8aa1edf43c20ed0b93bd0da14df453e45557d06d390c416.scope.
Sep 30 17:44:45 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d662d046a5afbbc39da9bdc9ae938b6b2ba54d4f6c409b5bc20a7dd8bca35af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d662d046a5afbbc39da9bdc9ae938b6b2ba54d4f6c409b5bc20a7dd8bca35af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d662d046a5afbbc39da9bdc9ae938b6b2ba54d4f6c409b5bc20a7dd8bca35af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d662d046a5afbbc39da9bdc9ae938b6b2ba54d4f6c409b5bc20a7dd8bca35af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d662d046a5afbbc39da9bdc9ae938b6b2ba54d4f6c409b5bc20a7dd8bca35af/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:44:45 compute-0 podman[123140]: 2025-09-30 17:44:45.627042316 +0000 UTC m=+0.020262660 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:44:45 compute-0 sshd-session[123154]: Accepted publickey for zuul from 192.168.122.30 port 36280 ssh2: ECDSA SHA256:COqAmbdBUMx+UyDa5XAop4Bi6qvwvYhJQ16rCseuHkQ
Sep 30 17:44:45 compute-0 podman[123140]: 2025-09-30 17:44:45.732485566 +0000 UTC m=+0.125705910 container init dddb66fcc27cb3d1c8aa1edf43c20ed0b93bd0da14df453e45557d06d390c416 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Sep 30 17:44:45 compute-0 systemd-logind[811]: New session 45 of user zuul.
Sep 30 17:44:45 compute-0 podman[123140]: 2025-09-30 17:44:45.742899644 +0000 UTC m=+0.136119988 container start dddb66fcc27cb3d1c8aa1edf43c20ed0b93bd0da14df453e45557d06d390c416 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_hopper, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Sep 30 17:44:45 compute-0 podman[123140]: 2025-09-30 17:44:45.747059169 +0000 UTC m=+0.140279523 container attach dddb66fcc27cb3d1c8aa1edf43c20ed0b93bd0da14df453e45557d06d390c416 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 17:44:45 compute-0 systemd[1]: Started Session 45 of User zuul.
Sep 30 17:44:45 compute-0 sshd-session[123154]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 17:44:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 17:44:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:44:46.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 17:44:46 compute-0 tender_hopper[123158]: --> passed data devices: 0 physical, 1 LVM
Sep 30 17:44:46 compute-0 tender_hopper[123158]: --> All data devices are unavailable
Sep 30 17:44:46 compute-0 systemd[1]: libpod-dddb66fcc27cb3d1c8aa1edf43c20ed0b93bd0da14df453e45557d06d390c416.scope: Deactivated successfully.
Sep 30 17:44:46 compute-0 podman[123140]: 2025-09-30 17:44:46.079695338 +0000 UTC m=+0.472915662 container died dddb66fcc27cb3d1c8aa1edf43c20ed0b93bd0da14df453e45557d06d390c416 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_hopper, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:44:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d662d046a5afbbc39da9bdc9ae938b6b2ba54d4f6c409b5bc20a7dd8bca35af-merged.mount: Deactivated successfully.
Sep 30 17:44:46 compute-0 podman[123140]: 2025-09-30 17:44:46.121210703 +0000 UTC m=+0.514431027 container remove dddb66fcc27cb3d1c8aa1edf43c20ed0b93bd0da14df453e45557d06d390c416 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_hopper, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 17:44:46 compute-0 systemd[1]: libpod-conmon-dddb66fcc27cb3d1c8aa1edf43c20ed0b93bd0da14df453e45557d06d390c416.scope: Deactivated successfully.
Sep 30 17:44:46 compute-0 sudo[123028]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:46 compute-0 sudo[123239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:44:46 compute-0 sudo[123239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:44:46 compute-0 sudo[123239]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:46 compute-0 sudo[123264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 17:44:46 compute-0 sudo[123264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:44:46 compute-0 podman[123429]: 2025-09-30 17:44:46.583637856 +0000 UTC m=+0.035051169 container create 815a43e88a307e5d6007073a64a904dfcf1c494c62656e350bada9d8288d9914 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_lamarr, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Sep 30 17:44:46 compute-0 systemd[1]: Started libpod-conmon-815a43e88a307e5d6007073a64a904dfcf1c494c62656e350bada9d8288d9914.scope.
Sep 30 17:44:46 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:44:46 compute-0 podman[123429]: 2025-09-30 17:44:46.638365046 +0000 UTC m=+0.089778379 container init 815a43e88a307e5d6007073a64a904dfcf1c494c62656e350bada9d8288d9914 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:44:46 compute-0 podman[123429]: 2025-09-30 17:44:46.643835887 +0000 UTC m=+0.095249200 container start 815a43e88a307e5d6007073a64a904dfcf1c494c62656e350bada9d8288d9914 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:44:46 compute-0 podman[123429]: 2025-09-30 17:44:46.646504161 +0000 UTC m=+0.097917474 container attach 815a43e88a307e5d6007073a64a904dfcf1c494c62656e350bada9d8288d9914 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:44:46 compute-0 infallible_lamarr[123445]: 167 167
Sep 30 17:44:46 compute-0 systemd[1]: libpod-815a43e88a307e5d6007073a64a904dfcf1c494c62656e350bada9d8288d9914.scope: Deactivated successfully.
Sep 30 17:44:46 compute-0 podman[123429]: 2025-09-30 17:44:46.647641752 +0000 UTC m=+0.099055065 container died 815a43e88a307e5d6007073a64a904dfcf1c494c62656e350bada9d8288d9914 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_lamarr, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Sep 30 17:44:46 compute-0 podman[123429]: 2025-09-30 17:44:46.567776238 +0000 UTC m=+0.019189571 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:44:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bc0ab0cf7bd94f45349de73729c1311ca3ff0f7d4cc060bc1f820a376140a4a-merged.mount: Deactivated successfully.
Sep 30 17:44:46 compute-0 podman[123429]: 2025-09-30 17:44:46.683416939 +0000 UTC m=+0.134830262 container remove 815a43e88a307e5d6007073a64a904dfcf1c494c62656e350bada9d8288d9914 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_lamarr, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Sep 30 17:44:46 compute-0 systemd[1]: libpod-conmon-815a43e88a307e5d6007073a64a904dfcf1c494c62656e350bada9d8288d9914.scope: Deactivated successfully.
Sep 30 17:44:46 compute-0 ceph-mon[73755]: pgmap v170: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Sep 30 17:44:46 compute-0 python3.9[123413]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:44:46 compute-0 podman[123470]: 2025-09-30 17:44:46.816565494 +0000 UTC m=+0.039406229 container create 4644f2d9df83b06be409f965accc1d6a7a5256db45fd914b8f500ec333b894f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_jennings, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Sep 30 17:44:46 compute-0 systemd[1]: Started libpod-conmon-4644f2d9df83b06be409f965accc1d6a7a5256db45fd914b8f500ec333b894f7.scope.
Sep 30 17:44:46 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:44:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3838e12e032f8f17e66b90ccd3cd935d33262dfa07448593ea15fcfb18202ea7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:44:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3838e12e032f8f17e66b90ccd3cd935d33262dfa07448593ea15fcfb18202ea7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:44:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3838e12e032f8f17e66b90ccd3cd935d33262dfa07448593ea15fcfb18202ea7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:44:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3838e12e032f8f17e66b90ccd3cd935d33262dfa07448593ea15fcfb18202ea7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:44:46 compute-0 podman[123470]: 2025-09-30 17:44:46.796535201 +0000 UTC m=+0.019375966 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:44:46 compute-0 podman[123470]: 2025-09-30 17:44:46.894248898 +0000 UTC m=+0.117089653 container init 4644f2d9df83b06be409f965accc1d6a7a5256db45fd914b8f500ec333b894f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_jennings, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:44:46 compute-0 podman[123470]: 2025-09-30 17:44:46.906026363 +0000 UTC m=+0.128867108 container start 4644f2d9df83b06be409f965accc1d6a7a5256db45fd914b8f500ec333b894f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:44:46 compute-0 podman[123470]: 2025-09-30 17:44:46.910824715 +0000 UTC m=+0.133665460 container attach 4644f2d9df83b06be409f965accc1d6a7a5256db45fd914b8f500ec333b894f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Sep 30 17:44:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:44:46.961Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 17:44:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:44:46.962Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:44:47 compute-0 kind_jennings[123490]: {
Sep 30 17:44:47 compute-0 kind_jennings[123490]:     "0": [
Sep 30 17:44:47 compute-0 kind_jennings[123490]:         {
Sep 30 17:44:47 compute-0 kind_jennings[123490]:             "devices": [
Sep 30 17:44:47 compute-0 kind_jennings[123490]:                 "/dev/loop3"
Sep 30 17:44:47 compute-0 kind_jennings[123490]:             ],
Sep 30 17:44:47 compute-0 kind_jennings[123490]:             "lv_name": "ceph_lv0",
Sep 30 17:44:47 compute-0 kind_jennings[123490]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:44:47 compute-0 kind_jennings[123490]:             "lv_size": "21470642176",
Sep 30 17:44:47 compute-0 kind_jennings[123490]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 17:44:47 compute-0 kind_jennings[123490]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:44:47 compute-0 kind_jennings[123490]:             "name": "ceph_lv0",
Sep 30 17:44:47 compute-0 kind_jennings[123490]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:44:47 compute-0 kind_jennings[123490]:             "tags": {
Sep 30 17:44:47 compute-0 kind_jennings[123490]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:44:47 compute-0 kind_jennings[123490]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:44:47 compute-0 kind_jennings[123490]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 17:44:47 compute-0 kind_jennings[123490]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 17:44:47 compute-0 kind_jennings[123490]:                 "ceph.cluster_name": "ceph",
Sep 30 17:44:47 compute-0 kind_jennings[123490]:                 "ceph.crush_device_class": "",
Sep 30 17:44:47 compute-0 kind_jennings[123490]:                 "ceph.encrypted": "0",
Sep 30 17:44:47 compute-0 kind_jennings[123490]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 17:44:47 compute-0 kind_jennings[123490]:                 "ceph.osd_id": "0",
Sep 30 17:44:47 compute-0 kind_jennings[123490]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 17:44:47 compute-0 kind_jennings[123490]:                 "ceph.type": "block",
Sep 30 17:44:47 compute-0 kind_jennings[123490]:                 "ceph.vdo": "0",
Sep 30 17:44:47 compute-0 kind_jennings[123490]:                 "ceph.with_tpm": "0"
Sep 30 17:44:47 compute-0 kind_jennings[123490]:             },
Sep 30 17:44:47 compute-0 kind_jennings[123490]:             "type": "block",
Sep 30 17:44:47 compute-0 kind_jennings[123490]:             "vg_name": "ceph_vg0"
Sep 30 17:44:47 compute-0 kind_jennings[123490]:         }
Sep 30 17:44:47 compute-0 kind_jennings[123490]:     ]
Sep 30 17:44:47 compute-0 kind_jennings[123490]: }
Sep 30 17:44:47 compute-0 systemd[1]: libpod-4644f2d9df83b06be409f965accc1d6a7a5256db45fd914b8f500ec333b894f7.scope: Deactivated successfully.
Sep 30 17:44:47 compute-0 podman[123470]: 2025-09-30 17:44:47.197581839 +0000 UTC m=+0.420422594 container died 4644f2d9df83b06be409f965accc1d6a7a5256db45fd914b8f500ec333b894f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_jennings, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:44:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-3838e12e032f8f17e66b90ccd3cd935d33262dfa07448593ea15fcfb18202ea7-merged.mount: Deactivated successfully.
Sep 30 17:44:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v171: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Sep 30 17:44:47 compute-0 podman[123470]: 2025-09-30 17:44:47.253248055 +0000 UTC m=+0.476088800 container remove 4644f2d9df83b06be409f965accc1d6a7a5256db45fd914b8f500ec333b894f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:44:47 compute-0 systemd[1]: libpod-conmon-4644f2d9df83b06be409f965accc1d6a7a5256db45fd914b8f500ec333b894f7.scope: Deactivated successfully.
Sep 30 17:44:47 compute-0 sudo[123264]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:47 compute-0 sudo[123538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:44:47 compute-0 sudo[123538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:44:47 compute-0 sudo[123538]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:47 compute-0 sudo[123563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 17:44:47 compute-0 sudo[123563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:44:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:44:47.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:47 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Scheduled restart job, restart counter is at 4.
Sep 30 17:44:47 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:44:47 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Consumed 1.378s CPU time.
Sep 30 17:44:47 compute-0 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 17:44:47 compute-0 podman[123742]: 2025-09-30 17:44:47.822725862 +0000 UTC m=+0.051566754 container create e01feaf84d96de9ad229fff95e668f7bd327f49cddd612cf7e2f7ee529a7361e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_payne, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 17:44:47 compute-0 systemd[1]: Started libpod-conmon-e01feaf84d96de9ad229fff95e668f7bd327f49cddd612cf7e2f7ee529a7361e.scope.
Sep 30 17:44:47 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:44:47 compute-0 sudo[123812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqsbsdrbqkkmakdyxeznajnpomvelmob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254287.3963397-46-239215421711492/AnsiballZ_file.py'
Sep 30 17:44:47 compute-0 podman[123742]: 2025-09-30 17:44:47.802410081 +0000 UTC m=+0.031250993 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:44:47 compute-0 sudo[123812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:47 compute-0 podman[123742]: 2025-09-30 17:44:47.912652194 +0000 UTC m=+0.141493106 container init e01feaf84d96de9ad229fff95e668f7bd327f49cddd612cf7e2f7ee529a7361e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_payne, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True)
Sep 30 17:44:47 compute-0 podman[123742]: 2025-09-30 17:44:47.919724119 +0000 UTC m=+0.148565011 container start e01feaf84d96de9ad229fff95e668f7bd327f49cddd612cf7e2f7ee529a7361e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_payne, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Sep 30 17:44:47 compute-0 hungry_payne[123814]: 167 167
Sep 30 17:44:47 compute-0 podman[123816]: 2025-09-30 17:44:47.924227873 +0000 UTC m=+0.042450682 container create cb0d1e06bba8db75426bf0ed3fb4a2546b194198ed9672dc577bed20aa93f9e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:44:47 compute-0 systemd[1]: libpod-e01feaf84d96de9ad229fff95e668f7bd327f49cddd612cf7e2f7ee529a7361e.scope: Deactivated successfully.
Sep 30 17:44:47 compute-0 podman[123742]: 2025-09-30 17:44:47.947835925 +0000 UTC m=+0.176676847 container attach e01feaf84d96de9ad229fff95e668f7bd327f49cddd612cf7e2f7ee529a7361e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_payne, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:44:47 compute-0 podman[123742]: 2025-09-30 17:44:47.948115792 +0000 UTC m=+0.176956674 container died e01feaf84d96de9ad229fff95e668f7bd327f49cddd612cf7e2f7ee529a7361e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_payne, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Sep 30 17:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f31611dfeec4ef80ed8c3678d8fc881430732d5cb34083f4796245947279530b/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Sep 30 17:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f31611dfeec4ef80ed8c3678d8fc881430732d5cb34083f4796245947279530b/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f31611dfeec4ef80ed8c3678d8fc881430732d5cb34083f4796245947279530b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f31611dfeec4ef80ed8c3678d8fc881430732d5cb34083f4796245947279530b/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-0.syzvbh-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:44:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a4d88d100fb76791f6717251507cd50cd2e0f505520b7f31199a0dce5636a83-merged.mount: Deactivated successfully.
Sep 30 17:44:47 compute-0 podman[123816]: 2025-09-30 17:44:47.98898059 +0000 UTC m=+0.107203429 container init cb0d1e06bba8db75426bf0ed3fb4a2546b194198ed9672dc577bed20aa93f9e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:44:47 compute-0 podman[123816]: 2025-09-30 17:44:47.99511931 +0000 UTC m=+0.113342119 container start cb0d1e06bba8db75426bf0ed3fb4a2546b194198ed9672dc577bed20aa93f9e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:44:47 compute-0 podman[123816]: 2025-09-30 17:44:47.90637354 +0000 UTC m=+0.024596379 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:44:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:44:48 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Sep 30 17:44:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:44:48 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Sep 30 17:44:48 compute-0 bash[123816]: cb0d1e06bba8db75426bf0ed3fb4a2546b194198ed9672dc577bed20aa93f9e7
Sep 30 17:44:48 compute-0 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:44:48 compute-0 podman[123742]: 2025-09-30 17:44:48.018832604 +0000 UTC m=+0.247673496 container remove e01feaf84d96de9ad229fff95e668f7bd327f49cddd612cf7e2f7ee529a7361e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_payne, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 17:44:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:48 compute-0 systemd[1]: libpod-conmon-e01feaf84d96de9ad229fff95e668f7bd327f49cddd612cf7e2f7ee529a7361e.scope: Deactivated successfully.
Sep 30 17:44:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:44:48.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:44:48 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Sep 30 17:44:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:44:48 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Sep 30 17:44:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:44:48 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Sep 30 17:44:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:44:48 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Sep 30 17:44:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:44:48 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Sep 30 17:44:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:44:48 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 17:44:48 compute-0 python3.9[123830]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:44:48 compute-0 sudo[123812]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:48 compute-0 podman[123896]: 2025-09-30 17:44:48.17049353 +0000 UTC m=+0.045197439 container create 47a5fe6fc2defb07d021e468912b68b295390f2d9f51f8b919e00ea9d7e8ba16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Sep 30 17:44:48 compute-0 systemd[1]: Started libpod-conmon-47a5fe6fc2defb07d021e468912b68b295390f2d9f51f8b919e00ea9d7e8ba16.scope.
Sep 30 17:44:48 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:44:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd0b3bc50a61efbbd9146d5b3e3cc11d4641fc95a35eeae982867e45ada32b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:44:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd0b3bc50a61efbbd9146d5b3e3cc11d4641fc95a35eeae982867e45ada32b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:44:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd0b3bc50a61efbbd9146d5b3e3cc11d4641fc95a35eeae982867e45ada32b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:44:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd0b3bc50a61efbbd9146d5b3e3cc11d4641fc95a35eeae982867e45ada32b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:44:48 compute-0 podman[123896]: 2025-09-30 17:44:48.151748502 +0000 UTC m=+0.026452441 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:44:48 compute-0 podman[123896]: 2025-09-30 17:44:48.255449514 +0000 UTC m=+0.130153433 container init 47a5fe6fc2defb07d021e468912b68b295390f2d9f51f8b919e00ea9d7e8ba16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_euler, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:44:48 compute-0 podman[123896]: 2025-09-30 17:44:48.262363515 +0000 UTC m=+0.137067424 container start 47a5fe6fc2defb07d021e468912b68b295390f2d9f51f8b919e00ea9d7e8ba16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_euler, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Sep 30 17:44:48 compute-0 podman[123896]: 2025-09-30 17:44:48.270948132 +0000 UTC m=+0.145652041 container attach 47a5fe6fc2defb07d021e468912b68b295390f2d9f51f8b919e00ea9d7e8ba16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_euler, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Sep 30 17:44:48 compute-0 sudo[124149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znxtrcsglijshrfqwbaklvzcbvnlxssm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254288.2944753-62-113723776924516/AnsiballZ_stat.py'
Sep 30 17:44:48 compute-0 sudo[124149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:44:48] "GET /metrics HTTP/1.1" 200 46512 "" "Prometheus/2.51.0"
Sep 30 17:44:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:44:48] "GET /metrics HTTP/1.1" 200 46512 "" "Prometheus/2.51.0"
Sep 30 17:44:48 compute-0 ceph-mon[73755]: pgmap v171: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Sep 30 17:44:48 compute-0 lvm[124160]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 17:44:48 compute-0 lvm[124160]: VG ceph_vg0 finished
Sep 30 17:44:48 compute-0 happy_euler[123935]: {}
Sep 30 17:44:48 compute-0 python3.9[124154]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:44:48 compute-0 systemd[1]: libpod-47a5fe6fc2defb07d021e468912b68b295390f2d9f51f8b919e00ea9d7e8ba16.scope: Deactivated successfully.
Sep 30 17:44:48 compute-0 podman[123896]: 2025-09-30 17:44:48.959143485 +0000 UTC m=+0.833847404 container died 47a5fe6fc2defb07d021e468912b68b295390f2d9f51f8b919e00ea9d7e8ba16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Sep 30 17:44:48 compute-0 systemd[1]: libpod-47a5fe6fc2defb07d021e468912b68b295390f2d9f51f8b919e00ea9d7e8ba16.scope: Consumed 1.047s CPU time.
Sep 30 17:44:48 compute-0 sudo[124149]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fd0b3bc50a61efbbd9146d5b3e3cc11d4641fc95a35eeae982867e45ada32b8-merged.mount: Deactivated successfully.
Sep 30 17:44:49 compute-0 podman[123896]: 2025-09-30 17:44:49.074362205 +0000 UTC m=+0.949066114 container remove 47a5fe6fc2defb07d021e468912b68b295390f2d9f51f8b919e00ea9d7e8ba16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_euler, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Sep 30 17:44:49 compute-0 systemd[1]: libpod-conmon-47a5fe6fc2defb07d021e468912b68b295390f2d9f51f8b919e00ea9d7e8ba16.scope: Deactivated successfully.
Sep 30 17:44:49 compute-0 sudo[123563]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:44:49 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:44:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:44:49 compute-0 sudo[124252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgyhzoizghlgwplqeyysrpmscitspgkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254288.2944753-62-113723776924516/AnsiballZ_file.py'
Sep 30 17:44:49 compute-0 sudo[124252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:49 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:44:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v172: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 853 B/s rd, 255 B/s wr, 1 op/s
Sep 30 17:44:49 compute-0 sudo[124255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 17:44:49 compute-0 sudo[124255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:44:49 compute-0 sudo[124255]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:49 compute-0 python3.9[124254]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.x5z7lr89 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:44:49 compute-0 sudo[124252]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:44:49.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:44:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:44:50.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:50 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:44:50 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:44:50 compute-0 ceph-mon[73755]: pgmap v172: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 853 B/s rd, 255 B/s wr, 1 op/s
Sep 30 17:44:50 compute-0 sudo[124431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpwrkiglowxdbmaxlbzmsbwhhjvuotzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254289.985096-102-63779345175034/AnsiballZ_stat.py'
Sep 30 17:44:50 compute-0 sudo[124431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:50 compute-0 python3.9[124433]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:44:50 compute-0 sudo[124431]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:50 compute-0 sudo[124509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrapacpqcgelcsfgyrmgkaxgcclziwis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254289.985096-102-63779345175034/AnsiballZ_file.py'
Sep 30 17:44:50 compute-0 sudo[124509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/174450 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 17:44:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [ALERT] 272/174450 (4) : backend 'backend' has no server available!
Sep 30 17:44:50 compute-0 python3.9[124511]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.1esxwahk recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:44:50 compute-0 sudo[124509]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v173: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:44:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:44:51.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:51 compute-0 sudo[124663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvoyjvswcqwbpcogdclarcfqctnshqif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254291.426632-128-239216929067067/AnsiballZ_file.py'
Sep 30 17:44:51 compute-0 sudo[124663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:51 compute-0 python3.9[124665]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:44:51 compute-0 sudo[124663]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:44:52.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:44:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:44:52 compute-0 ceph-mon[73755]: pgmap v173: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:44:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:44:52 compute-0 sudo[124815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhwytityyeejejzbnrxgcsflutbgsgyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254292.2132647-144-114562631365493/AnsiballZ_stat.py'
Sep 30 17:44:52 compute-0 sudo[124815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:52 compute-0 python3.9[124817]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:44:52 compute-0 sudo[124815]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:52 compute-0 sudo[124893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaahymrcquhrxffceptoymbuokpbxehi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254292.2132647-144-114562631365493/AnsiballZ_file.py'
Sep 30 17:44:52 compute-0 sudo[124893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:53 compute-0 python3.9[124895]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:44:53 compute-0 sudo[124893]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v174: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:44:53 compute-0 sudo[125046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqtuviqkfnlkzxhuphrbahhoowtusdpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254293.1758914-144-32475762425704/AnsiballZ_stat.py'
Sep 30 17:44:53 compute-0 sudo[125046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:53 compute-0 python3.9[125048]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:44:53 compute-0 sudo[125046]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:44:53.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:53 compute-0 sudo[125125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gohykrdqnewwmkmtpldrgmribtoxaijo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254293.1758914-144-32475762425704/AnsiballZ_file.py'
Sep 30 17:44:53 compute-0 sudo[125125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:53 compute-0 python3.9[125127]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:44:53 compute-0 sudo[125125]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 17:44:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:44:54.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 17:44:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:44:54 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 17:44:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:44:54 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 17:44:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:44:54 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 17:44:54 compute-0 ceph-mon[73755]: pgmap v174: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:44:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:44:54 compute-0 sudo[125277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlnoprdtsslteqchzsukknoybkoipwaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254294.678195-190-263492096070223/AnsiballZ_file.py'
Sep 30 17:44:54 compute-0 sudo[125277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:55 compute-0 python3.9[125279]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:44:55 compute-0 sudo[125277]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v175: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 255 B/s wr, 0 op/s
Sep 30 17:44:55 compute-0 sudo[125430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoittzirofcuclzcebmxdnipwljbihdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254295.3693013-206-238807868595708/AnsiballZ_stat.py'
Sep 30 17:44:55 compute-0 sudo[125430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:44:55.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:55 compute-0 python3.9[125433]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:44:55 compute-0 sudo[125430]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:56 compute-0 sudo[125509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjumopbipvzbntvzmdzdfaojjebdkpnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254295.3693013-206-238807868595708/AnsiballZ_file.py'
Sep 30 17:44:56 compute-0 sudo[125509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:44:56.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:56 compute-0 python3.9[125511]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:44:56 compute-0 sudo[125509]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:56 compute-0 ceph-mon[73755]: pgmap v175: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 255 B/s wr, 0 op/s
Sep 30 17:44:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:44:56.962Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 17:44:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:44:56.963Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:44:56 compute-0 sudo[125661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irgyqhmwcsxfatypsiwpbvvcstqhdafl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254296.7507124-230-223640003871632/AnsiballZ_stat.py'
Sep 30 17:44:56 compute-0 sudo[125661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:44:57 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 17:44:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:44:57 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 17:44:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:44:57 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 17:44:57 compute-0 python3.9[125663]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:44:57 compute-0 sudo[125661]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v176: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 255 B/s wr, 0 op/s
Sep 30 17:44:57 compute-0 sudo[125740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocbeaeikstzqewpikmrvumaksqarabvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254296.7507124-230-223640003871632/AnsiballZ_file.py'
Sep 30 17:44:57 compute-0 sudo[125740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:57 compute-0 python3.9[125742]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:44:57 compute-0 sudo[125740]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:44:57.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:44:58.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:58 compute-0 ceph-mon[73755]: pgmap v176: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 255 B/s wr, 0 op/s
Sep 30 17:44:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:44:58] "GET /metrics HTTP/1.1" 200 46513 "" "Prometheus/2.51.0"
Sep 30 17:44:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:44:58] "GET /metrics HTTP/1.1" 200 46513 "" "Prometheus/2.51.0"
Sep 30 17:44:58 compute-0 sudo[125893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdemrsezbxwqlkledwnckjmkhokzfjbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254298.2034593-254-194250102509402/AnsiballZ_systemd.py'
Sep 30 17:44:58 compute-0 sudo[125893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:44:59 compute-0 python3.9[125895]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:44:59 compute-0 systemd[1]: Reloading.
Sep 30 17:44:59 compute-0 systemd-rc-local-generator[125924]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:44:59 compute-0 systemd-sysv-generator[125928]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:44:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v177: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Sep 30 17:44:59 compute-0 sudo[125893]: pam_unix(sudo:session): session closed for user root
Sep 30 17:44:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:44:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:44:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:44:59.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:44:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:45:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:45:00.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:00 compute-0 sudo[126084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hstcogpjmitydbvgomctpmlqepaegqft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254299.8319964-270-158712515975449/AnsiballZ_stat.py'
Sep 30 17:45:00 compute-0 sudo[126084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:00 compute-0 python3.9[126086]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:45:00 compute-0 sudo[126084]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:00 compute-0 sudo[126089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:45:00 compute-0 sudo[126089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:45:00 compute-0 sudo[126089]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:00 compute-0 ceph-mon[73755]: pgmap v177: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Sep 30 17:45:00 compute-0 sudo[126187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjalrlnyrvsjwinxvptbrsdocntqxznp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254299.8319964-270-158712515975449/AnsiballZ_file.py'
Sep 30 17:45:00 compute-0 sudo[126187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:00 compute-0 python3.9[126189]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:45:00 compute-0 sudo[126187]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v178: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Sep 30 17:45:01 compute-0 sudo[126339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mblmljzmljzdnumtclwklytmpatokkrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254301.0372112-294-149207438756092/AnsiballZ_stat.py'
Sep 30 17:45:01 compute-0 sudo[126339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:01 compute-0 python3.9[126341]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:45:01 compute-0 sudo[126339]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:45:01.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:01 compute-0 sudo[126419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjojcppltwefnvvnfymgzqqxgmtvcjqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254301.0372112-294-149207438756092/AnsiballZ_file.py'
Sep 30 17:45:01 compute-0 sudo[126419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:01 compute-0 python3.9[126421]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:45:01 compute-0 sudo[126419]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:45:02.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:02 compute-0 sudo[126571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nehlsftvjrmsxzpkfnborrwkuxefdojp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254302.2406564-318-222681035862061/AnsiballZ_systemd.py'
Sep 30 17:45:02 compute-0 sudo[126571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:02 compute-0 ceph-mon[73755]: pgmap v178: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Sep 30 17:45:02 compute-0 python3.9[126573]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:45:02 compute-0 systemd[1]: Reloading.
Sep 30 17:45:02 compute-0 systemd-rc-local-generator[126600]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:45:02 compute-0 systemd-sysv-generator[126603]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:45:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:03 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 17:45:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:03 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Sep 30 17:45:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:03 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Sep 30 17:45:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:03 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Sep 30 17:45:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:03 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Sep 30 17:45:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:03 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Sep 30 17:45:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:03 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Sep 30 17:45:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:03 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:45:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:03 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:45:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:03 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:45:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:03 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Sep 30 17:45:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:03 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:45:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:03 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Sep 30 17:45:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:03 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Sep 30 17:45:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:03 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Sep 30 17:45:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:03 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Sep 30 17:45:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:03 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Sep 30 17:45:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:03 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Sep 30 17:45:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:03 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Sep 30 17:45:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:03 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Sep 30 17:45:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:03 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Sep 30 17:45:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:03 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Sep 30 17:45:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:03 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Sep 30 17:45:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:03 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Sep 30 17:45:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:03 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 17:45:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:03 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Sep 30 17:45:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:03 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 17:45:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:03 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 17:45:03 compute-0 systemd[1]: Starting Create netns directory...
Sep 30 17:45:03 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Sep 30 17:45:03 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Sep 30 17:45:03 compute-0 systemd[1]: Finished Create netns directory.
Sep 30 17:45:03 compute-0 sudo[126571]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v179: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Sep 30 17:45:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:03 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ae0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 17:45:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:45:03.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 17:45:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:03 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ac80016e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:04 compute-0 python3.9[126781]: ansible-ansible.builtin.service_facts Invoked
Sep 30 17:45:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:45:04.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:04 compute-0 network[126798]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Sep 30 17:45:04 compute-0 network[126799]: 'network-scripts' will be removed from distribution in near future.
Sep 30 17:45:04 compute-0 network[126800]: It is advised to switch to 'NetworkManager' instead for network management.
Sep 30 17:45:04 compute-0 ceph-mon[73755]: pgmap v179: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Sep 30 17:45:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:45:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v180: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.9 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 17:45:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/174505 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 17:45:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:05 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8abc000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:45:05.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:05 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ae0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:45:06.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:06 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 17:45:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:06 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 17:45:06 compute-0 ceph-mon[73755]: pgmap v180: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.9 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 17:45:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:45:06.964Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v181: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Sep 30 17:45:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:45:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_17:45:07
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'vms', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', '.nfs', 'images', 'cephfs.cephfs.meta', '.mgr', 'volumes']
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:45:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:45:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:07 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8adc001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:45:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:45:07.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:07 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ac8002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:45:08.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:08 compute-0 ceph-mon[73755]: pgmap v181: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Sep 30 17:45:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:45:08] "GET /metrics HTTP/1.1" 200 46516 "" "Prometheus/2.51.0"
Sep 30 17:45:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:45:08] "GET /metrics HTTP/1.1" 200 46516 "" "Prometheus/2.51.0"
Sep 30 17:45:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:09 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 17:45:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v182: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 17:45:09 compute-0 sudo[127068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fibdblleaanbmpkjnzyhscfbxrtqboor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254309.0115907-370-273521417493187/AnsiballZ_stat.py'
Sep 30 17:45:09 compute-0 sudo[127068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:09 compute-0 python3.9[127070]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:45:09 compute-0 sudo[127068]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:09 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8abc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 17:45:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:45:09.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 17:45:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:45:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:09 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ae0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:09 compute-0 sudo[127147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvzmadncuxzeeyggsfsfgqxvhfrogrzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254309.0115907-370-273521417493187/AnsiballZ_file.py'
Sep 30 17:45:09 compute-0 sudo[127147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:10 compute-0 python3.9[127149]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:45:10 compute-0 sudo[127147]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:45:10.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:10 compute-0 sudo[127299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsccbemnsgwqkxmqgjxmgvjajxktpmnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254310.3257232-396-39602134831682/AnsiballZ_file.py'
Sep 30 17:45:10 compute-0 sudo[127299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:10 compute-0 ceph-mon[73755]: pgmap v182: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 17:45:10 compute-0 python3.9[127301]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:45:10 compute-0 sudo[127299]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v183: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:45:11 compute-0 sudo[127452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fteqoyyunutiaozohkrebugzhhmfkzrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254311.1693187-412-247251906843266/AnsiballZ_stat.py'
Sep 30 17:45:11 compute-0 sudo[127452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:11 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8adc0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:11 compute-0 python3.9[127454]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:45:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 17:45:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:45:11.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 17:45:11 compute-0 sudo[127452]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:11 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ac8002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:11 compute-0 sudo[127531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrknjqbhbcdhvduhankagorfphikcezi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254311.1693187-412-247251906843266/AnsiballZ_file.py'
Sep 30 17:45:11 compute-0 sudo[127531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:45:12.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:12 compute-0 python3.9[127533]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:45:12 compute-0 sudo[127531]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/174512 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 17:45:12 compute-0 ceph-mon[73755]: pgmap v183: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:45:12 compute-0 sudo[127683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcmqqcuujtrkbnxenlixnxyppfrxkbfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254312.5405838-442-112422564351206/AnsiballZ_timezone.py'
Sep 30 17:45:12 compute-0 sudo[127683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:13 compute-0 python3.9[127685]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Sep 30 17:45:13 compute-0 systemd[1]: Starting Time & Date Service...
Sep 30 17:45:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v184: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:45:13 compute-0 systemd[1]: Started Time & Date Service.
Sep 30 17:45:13 compute-0 sudo[127683]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:13 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8abc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:45:13.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:13 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ae0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 17:45:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:45:14.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 17:45:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:45:14 compute-0 ceph-mon[73755]: pgmap v184: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:45:14 compute-0 sudo[127841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtetqfjjjwkwzyehmquettdjwvoirlbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254314.7503698-460-96707792780104/AnsiballZ_file.py'
Sep 30 17:45:14 compute-0 sudo[127841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:15 compute-0 python3.9[127843]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:45:15 compute-0 sudo[127841]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v185: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:45:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:15 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8adc0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:45:15.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:15 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ac8002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:15 compute-0 sudo[127995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nidiiubckdbuadnnfdbsbtrdvihbhssp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254315.5479875-476-199874795726042/AnsiballZ_stat.py'
Sep 30 17:45:15 compute-0 sudo[127995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:16 compute-0 python3.9[127997]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:45:16 compute-0 sudo[127995]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:45:16.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:16 compute-0 sudo[128073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmdvzllxooaljetwujflcagbanfytyuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254315.5479875-476-199874795726042/AnsiballZ_file.py'
Sep 30 17:45:16 compute-0 sudo[128073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:16 compute-0 python3.9[128075]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:45:16 compute-0 sudo[128073]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:16 compute-0 ceph-mon[73755]: pgmap v185: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:45:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:45:16.965Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:45:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v186: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Sep 30 17:45:17 compute-0 sudo[128225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fubxvuwqvobrfpvwqypfbboprofjpncv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254316.978014-500-204304280340362/AnsiballZ_stat.py'
Sep 30 17:45:17 compute-0 sudo[128225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:17 compute-0 python3.9[128227]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:45:17 compute-0 sudo[128225]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:17 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8abc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:17 compute-0 sudo[128305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbwuiiedkbdxwykkocjwerscshaimcmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254316.978014-500-204304280340362/AnsiballZ_file.py'
Sep 30 17:45:17 compute-0 sudo[128305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:45:17.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:17 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ae00091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:17 compute-0 python3.9[128307]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.ohvinmuq recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:45:17 compute-0 sudo[128305]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:45:18.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:18 compute-0 sudo[128457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqjrpugbbcyikudbgozwsqtltxnrnguo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254318.3365827-524-131343136871365/AnsiballZ_stat.py'
Sep 30 17:45:18 compute-0 sudo[128457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:45:18] "GET /metrics HTTP/1.1" 200 46516 "" "Prometheus/2.51.0"
Sep 30 17:45:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:45:18] "GET /metrics HTTP/1.1" 200 46516 "" "Prometheus/2.51.0"
Sep 30 17:45:18 compute-0 ceph-mon[73755]: pgmap v186: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Sep 30 17:45:18 compute-0 python3.9[128459]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:45:18 compute-0 sudo[128457]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:19 compute-0 sudo[128535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnrxdylpkzbtkmvrgxkxcreyxrkuhhqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254318.3365827-524-131343136871365/AnsiballZ_file.py'
Sep 30 17:45:19 compute-0 sudo[128535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v187: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Sep 30 17:45:19 compute-0 python3.9[128537]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:45:19 compute-0 sudo[128535]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:19 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8adc0032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:45:19.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:45:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:19 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ac8002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:45:20.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:20 compute-0 sudo[128689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtgksyzdadlbdodbwegkhscipaponriy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254319.9366586-550-150504201554201/AnsiballZ_command.py'
Sep 30 17:45:20 compute-0 sudo[128689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:20 compute-0 sudo[128692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:45:20 compute-0 sudo[128692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:45:20 compute-0 sudo[128692]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:20 compute-0 python3.9[128691]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:45:20 compute-0 sudo[128689]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:20 compute-0 ceph-mon[73755]: pgmap v187: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Sep 30 17:45:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v188: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:45:21 compute-0 sudo[128867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaiwepfydnshhhjkdafjrpmsoakhkqez ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759254320.9093058-566-168461802639413/AnsiballZ_edpm_nftables_from_files.py'
Sep 30 17:45:21 compute-0 sudo[128867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:21 compute-0 python3[128870]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Sep 30 17:45:21 compute-0 sudo[128867]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:21 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8abc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 17:45:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:45:21.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 17:45:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:21 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ae00091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:21 compute-0 ceph-mon[73755]: pgmap v188: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:45:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:45:22.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:22 compute-0 sudo[129021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puowejmismnjtulilmbwfulgdbwadkqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254321.822103-582-72971646592201/AnsiballZ_stat.py'
Sep 30 17:45:22 compute-0 sudo[129021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:45:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:45:22 compute-0 python3.9[129023]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:45:22 compute-0 sudo[129021]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:22 compute-0 sudo[129099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etsyvlcaforpdxhdmphvigwqmgnynrzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254321.822103-582-72971646592201/AnsiballZ_file.py'
Sep 30 17:45:22 compute-0 sudo[129099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:22 compute-0 python3.9[129101]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:45:22 compute-0 sudo[129099]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:45:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v189: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:45:23 compute-0 sudo[129252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-refkdnlnsozxwameiawankqbxpqnfmkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254323.1794488-606-2986183321940/AnsiballZ_stat.py'
Sep 30 17:45:23 compute-0 sudo[129252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:23 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8adc0032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:23 compute-0 python3.9[129254]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:45:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:45:23.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:23 compute-0 sudo[129252]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:23 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8adc0032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:23 compute-0 ceph-mon[73755]: pgmap v189: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:45:23 compute-0 sudo[129331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgpsphopizedufzkgymyezkvqrvrkobz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254323.1794488-606-2986183321940/AnsiballZ_file.py'
Sep 30 17:45:23 compute-0 sudo[129331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:45:24.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:24 compute-0 python3.9[129333]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:45:24 compute-0 sudo[129331]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:45:24 compute-0 sudo[129483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqzhyfsmfayjzgjmghmlknorlqesaywd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254324.5921614-630-218609573189812/AnsiballZ_stat.py'
Sep 30 17:45:24 compute-0 sudo[129483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:25 compute-0 python3.9[129485]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:45:25 compute-0 sudo[129483]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v190: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:45:25 compute-0 sudo[129561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmacoyzjryqzkuxzkubpwibodewhtexi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254324.5921614-630-218609573189812/AnsiballZ_file.py'
Sep 30 17:45:25 compute-0 sudo[129561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:25 compute-0 python3.9[129563]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:45:25 compute-0 sudo[129561]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:25 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8adc0032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:45:25.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:25 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8adc0032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 17:45:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:45:26.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 17:45:26 compute-0 sudo[129715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dweoombsuvjqxavuhfyqrwcbuyvmytmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254326.02487-654-68360656297387/AnsiballZ_stat.py'
Sep 30 17:45:26 compute-0 sudo[129715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:26 compute-0 ceph-mon[73755]: pgmap v190: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:45:26 compute-0 python3.9[129717]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:45:26 compute-0 sudo[129715]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:26 compute-0 sudo[129793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqlcqmfamdtnrgpuurzggvmeohrcuybf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254326.02487-654-68360656297387/AnsiballZ_file.py'
Sep 30 17:45:26 compute-0 sudo[129793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:26 compute-0 python3.9[129795]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:45:26 compute-0 sudo[129793]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:45:26.966Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:45:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v191: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:45:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:27 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8adc0032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:45:27.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:27 compute-0 sudo[129947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfovyrchsefyaavtihgkmwjivlpqgmqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254327.416806-678-158096561833420/AnsiballZ_stat.py'
Sep 30 17:45:27 compute-0 sudo[129947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:27 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ab8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:27 compute-0 python3.9[129949]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:45:27 compute-0 sudo[129947]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:45:28.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:28 compute-0 ceph-mon[73755]: pgmap v191: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:45:28 compute-0 sudo[130025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrkunkzvcbipqjblxxdyutalhdyvrsiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254327.416806-678-158096561833420/AnsiballZ_file.py'
Sep 30 17:45:28 compute-0 sudo[130025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:28 compute-0 python3.9[130027]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:45:28 compute-0 sudo[130025]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:45:28] "GET /metrics HTTP/1.1" 200 46514 "" "Prometheus/2.51.0"
Sep 30 17:45:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:45:28] "GET /metrics HTTP/1.1" 200 46514 "" "Prometheus/2.51.0"
Sep 30 17:45:29 compute-0 sudo[130177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxugfpvuphhndcnisxccbqopjmpjxllf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254328.950611-704-240251743002093/AnsiballZ_command.py'
Sep 30 17:45:29 compute-0 sudo[130177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v192: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:45:29 compute-0 python3.9[130179]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:45:29 compute-0 sudo[130177]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:29 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ac80034f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:45:29.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:45:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:29 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8adc0032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:45:30.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:30 compute-0 sudo[130334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoqxfvoqhbpdfjodneciogkrbhxajnoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254329.7922792-720-184430476780859/AnsiballZ_blockinfile.py'
Sep 30 17:45:30 compute-0 sudo[130334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:30 compute-0 ceph-mon[73755]: pgmap v192: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:45:30 compute-0 python3.9[130336]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:45:30 compute-0 sudo[130334]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:30 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Sep 30 17:45:30 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Sep 30 17:45:30 compute-0 sudo[130487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alewgnkiypdildlyrnutllspedhjspau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254330.7610402-738-248151452789786/AnsiballZ_file.py'
Sep 30 17:45:30 compute-0 sudo[130487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:31 compute-0 python3.9[130489]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:45:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v193: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:45:31 compute-0 sudo[130487]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:31 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8adc0032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:31 compute-0 sudo[130641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrirlerurdhsdkjirubsvjfkpmwjxouk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254331.3928907-738-259648936118162/AnsiballZ_file.py'
Sep 30 17:45:31 compute-0 sudo[130641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:45:31.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:31 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ae000a040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:31 compute-0 python3.9[130643]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:45:31 compute-0 sudo[130641]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:45:32.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:32 compute-0 ceph-mon[73755]: pgmap v193: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:45:32 compute-0 sudo[130793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yajjmnyglsghllafetyjuwocbgiacyib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254332.4434655-768-175042440910147/AnsiballZ_mount.py'
Sep 30 17:45:32 compute-0 sudo[130793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:33 compute-0 python3.9[130795]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Sep 30 17:45:33 compute-0 sudo[130793]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v194: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:45:33 compute-0 sudo[130946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwycrgnbkspqwmcsefkjdezxwfsgrrcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254333.293301-768-122811925204640/AnsiballZ_mount.py'
Sep 30 17:45:33 compute-0 sudo[130946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:33 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ac80034f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 17:45:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:45:33.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 17:45:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:33 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ab80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:33 compute-0 python3.9[130948]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Sep 30 17:45:33 compute-0 sudo[130946]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:45:34.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:34 compute-0 ceph-mon[73755]: pgmap v194: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:45:34 compute-0 sshd-session[123164]: Connection closed by 192.168.122.30 port 36280
Sep 30 17:45:34 compute-0 sshd-session[123154]: pam_unix(sshd:session): session closed for user zuul
Sep 30 17:45:34 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Sep 30 17:45:34 compute-0 systemd[1]: session-45.scope: Consumed 27.382s CPU time.
Sep 30 17:45:34 compute-0 systemd-logind[811]: Session 45 logged out. Waiting for processes to exit.
Sep 30 17:45:34 compute-0 systemd-logind[811]: Removed session 45.
Sep 30 17:45:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:45:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v195: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:45:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:35 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8adc0047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:45:35.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:35 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ae000a960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 17:45:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:45:36.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 17:45:36 compute-0 ceph-mon[73755]: pgmap v195: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:45:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:45:36.967Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:45:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:45:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:45:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v196: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:45:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:45:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:45:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:45:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:45:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:45:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:45:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:45:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:37 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ae000a960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:45:37.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:37 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ab8001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:45:38.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:38 compute-0 ceph-mon[73755]: pgmap v196: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:45:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:45:38] "GET /metrics HTTP/1.1" 200 46509 "" "Prometheus/2.51.0"
Sep 30 17:45:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:45:38] "GET /metrics HTTP/1.1" 200 46509 "" "Prometheus/2.51.0"
Sep 30 17:45:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v197: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:45:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:39 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8adc0047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:45:39.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:45:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:39 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ab0000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 17:45:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:45:40.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 17:45:40 compute-0 sshd-session[130981]: Accepted publickey for zuul from 192.168.122.30 port 48336 ssh2: ECDSA SHA256:COqAmbdBUMx+UyDa5XAop4Bi6qvwvYhJQ16rCseuHkQ
Sep 30 17:45:40 compute-0 systemd-logind[811]: New session 46 of user zuul.
Sep 30 17:45:40 compute-0 systemd[1]: Started Session 46 of User zuul.
Sep 30 17:45:40 compute-0 sshd-session[130981]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 17:45:40 compute-0 ceph-mon[73755]: pgmap v197: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:45:40 compute-0 sudo[131037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:45:40 compute-0 sudo[131037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:45:40 compute-0 sudo[131037]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:40 compute-0 sudo[131159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdjjtcyduqseplqhwmiqjkwqwgpfjnyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254340.3861096-17-162374952956528/AnsiballZ_tempfile.py'
Sep 30 17:45:40 compute-0 sudo[131159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:41 compute-0 python3.9[131161]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Sep 30 17:45:41 compute-0 sudo[131159]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v198: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:45:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:41 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ae000a960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:45:41.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:41 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ab8001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:41 compute-0 sudo[131313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzeykomgaixunwctnpqeyqdrvextxiqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254341.3638046-41-80209204650692/AnsiballZ_stat.py'
Sep 30 17:45:41 compute-0 sudo[131313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:41 compute-0 python3.9[131315]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:45:41 compute-0 sudo[131313]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:45:42.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:42 compute-0 ceph-mon[73755]: pgmap v198: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:45:42 compute-0 sudo[131467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaaaznabelvrypxmwkvmgynafchrahnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254342.2743485-57-195209063317021/AnsiballZ_slurp.py'
Sep 30 17:45:42 compute-0 sudo[131467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:42 compute-0 python3.9[131469]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Sep 30 17:45:42 compute-0 sudo[131467]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v199: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:45:43 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Sep 30 17:45:43 compute-0 sudo[131622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddbbzinmpuqrilnkdhbfxfylqsuvsynf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254343.1748207-73-248240664227182/AnsiballZ_stat.py'
Sep 30 17:45:43 compute-0 sudo[131622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:43 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8adc0047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:43 compute-0 python3.9[131624]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.mo8m3unj follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:45:43 compute-0 sudo[131622]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:45:43.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:43 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ab0001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:45:44.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:44 compute-0 sudo[131748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbpillhnlobmhauvkxjlzpltveygesjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254343.1748207-73-248240664227182/AnsiballZ_copy.py'
Sep 30 17:45:44 compute-0 sudo[131748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:44 compute-0 python3.9[131750]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.mo8m3unj mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759254343.1748207-73-248240664227182/.source.mo8m3unj _original_basename=.1d6chw1k follow=False checksum=32661b263589debfc2b37628181d327f091429d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:45:44 compute-0 sudo[131748]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:44 compute-0 ceph-mon[73755]: pgmap v199: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:45:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:45:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v200: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:45:45 compute-0 sudo[131900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvmqbwnnyscljroyzclcxghhnefldypa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254344.6408424-103-205085823763121/AnsiballZ_setup.py'
Sep 30 17:45:45 compute-0 sudo[131900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:45 compute-0 python3.9[131902]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:45:45 compute-0 sudo[131900]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:45 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ae000a960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:45:45.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:45 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ab8002160 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:45:46.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:46 compute-0 sudo[132054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdkiwgdtflgcwunlloxjlbjljzuffufd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254345.980251-120-79784791942199/AnsiballZ_blockinfile.py'
Sep 30 17:45:46 compute-0 sudo[132054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:46 compute-0 ceph-mon[73755]: pgmap v200: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:45:46 compute-0 python3.9[132056]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCc3938ID3mnGjsgZen6kQCNM5mkVWqANJocuRy3sXOMN63dtyjhBgL73dvNVc4/MHyzxPDQuzK8tshXHOcqwYvNyllWa9UhEuAdhcNXcRKSELVxmBLRWZx/tsxp7Ws5/jqm87BYWYBOH23DCI96hjzPNZvDj8g24u1gnFFIDlGQELa7bj3YLXw2mWWadQeLxX35z9zMP39YZLf/2F8zAFy27zfi5U4Ni1I6YXvTL+DNwg7Ulluud4fY+sf3ds4pU5htK63pEPYw1f4eI/82wYgnmmEjUqBXGUraTbHG7EoY0kg8bnebUO02l1uSbV+YM/5LNKomXhUy/kUhb9l1uqNuqXvimRH4xVgJ9Mn0cJ2WGhlnkU1gqx0p1FNE01EWx7Spbz4uwVESHAmr67aymcw0Da5R+P9sI5lMqVNJHUeQiAq9bA3X9EbU9oIBIzoZCm7x5N8UpcvzrK0tNMaVLymDnsI8Rkc1MJpuTboQqnsrWs1q2SxaKY2vfqidEBk+Xs=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILy+MaglT2Cqq/Z1fTckQQdU2y2eh3D3Okv7pfMd4ZvV
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKG527neZJvIIF0UdmoBKFMIwvlh64Ua1Pir0KM8tM5Fy8tZbjiOY/Dz3agm+i5OWkd7fXEaYOfPR/rFSi9+L8s=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/5i5Z2213BrqRzlkXUPKi2V6ft/sZowq7ddp+Dq/QqjnkediXByZsJmLLkuA+smrhUZwo5vyubq2HeUmZ3U1EenWRQdC11cJP4ll9+UV0iP2vlc4cMh+DV62ujsM8T15I5/7JnPBcvrGrTJTmpQSQoCm2yD2q/v4Kx2V27sLj8ZlX64zDSBOYy+KhjhBuUM4gEbyrRzO2PqNsMeDrDGr3QFiyGNe8qS2KHmuEa4QFJnumNPJrxYBdTjcsKMZOeuVw2a33JPia0kDgKtaNDV7Izq8h9DlidYk1/aPo6MhfwzYDkRUaKSVhqM1oEDQWc40AK7EX4S00KLr5Nix8bd2nqEZsbD5lk/6wKNR1xdutyZt0GcnOEVJB7+VWN6Y3COvwe9Q1GSKCAhMthkn0Vd9ZvrwiFVKpMUyWD1b74vjHcDu8UOcJlVoqol0jJYEqDCy6mRh0l4Q2PfmyFpVMJ1ib1hV4dPIfzJIkuON6jMedqsKPGZnio8U1E/EMWBlaVn8=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICeWw7E9xsgcxKn1cBOcDfvvFIX4M5Blc+gMQNI96O43
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG/CY8xOSIOy2V9qTWOkLlPGEg36qW1s4MO9P37ZVKfdA8ded8m++iIKGFCGxQiTUk0W+13bPq0LIPsJgw+4osM=
                                              create=True mode=0644 path=/tmp/ansible.mo8m3unj state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:45:46 compute-0 sudo[132054]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:45:46.967Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:45:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v201: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:45:47 compute-0 sudo[132206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jalwekbuhmtsjuhnjqvpjamzkvwgacnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254346.8655524-136-101523609850367/AnsiballZ_command.py'
Sep 30 17:45:47 compute-0 sudo[132206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:47 compute-0 python3.9[132208]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.mo8m3unj' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:45:47 compute-0 sudo[132206]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:47 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8adc0047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:45:47.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:47 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ab0001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:45:48.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:48 compute-0 sudo[132362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkbtnklmvvgswcozuljzzsuqpwdquzvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254347.8183131-152-216355588070392/AnsiballZ_file.py'
Sep 30 17:45:48 compute-0 sudo[132362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:48 compute-0 python3.9[132364]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.mo8m3unj state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:45:48 compute-0 sudo[132362]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:48 compute-0 ceph-mon[73755]: pgmap v201: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:45:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:45:48] "GET /metrics HTTP/1.1" 200 46509 "" "Prometheus/2.51.0"
Sep 30 17:45:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:45:48] "GET /metrics HTTP/1.1" 200 46509 "" "Prometheus/2.51.0"
Sep 30 17:45:49 compute-0 sshd-session[130984]: Connection closed by 192.168.122.30 port 48336
Sep 30 17:45:49 compute-0 sshd-session[130981]: pam_unix(sshd:session): session closed for user zuul
Sep 30 17:45:49 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Sep 30 17:45:49 compute-0 systemd[1]: session-46.scope: Consumed 4.747s CPU time.
Sep 30 17:45:49 compute-0 systemd-logind[811]: Session 46 logged out. Waiting for processes to exit.
Sep 30 17:45:49 compute-0 systemd-logind[811]: Removed session 46.
Sep 30 17:45:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v202: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:45:49 compute-0 sudo[132390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:45:49 compute-0 sudo[132390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:45:49 compute-0 sudo[132390]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:49 compute-0 sudo[132415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 17:45:49 compute-0 sudo[132415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:45:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:49 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ae000a960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:45:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:45:49.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:49 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ab80032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:50 compute-0 sudo[132415]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:45:50.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:45:50 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:45:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 17:45:50 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:45:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 17:45:50 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:45:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 17:45:50 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:45:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 17:45:50 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:45:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 17:45:50 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:45:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:45:50 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:45:50 compute-0 sudo[132472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:45:50 compute-0 sudo[132472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:45:50 compute-0 sudo[132472]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:50 compute-0 sudo[132497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 17:45:50 compute-0 sudo[132497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:45:50 compute-0 ceph-mon[73755]: pgmap v202: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:45:50 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:45:50 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:45:50 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:45:50 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:45:50 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:45:50 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:45:50 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:45:50 compute-0 podman[132563]: 2025-09-30 17:45:50.784731609 +0000 UTC m=+0.037783708 container create 6f13c85a51a471711c4ed30ad8c1d16ed3124027c2a168564fec9438cfe32d56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_elion, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid)
Sep 30 17:45:50 compute-0 systemd[1]: Started libpod-conmon-6f13c85a51a471711c4ed30ad8c1d16ed3124027c2a168564fec9438cfe32d56.scope.
Sep 30 17:45:50 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:45:50 compute-0 podman[132563]: 2025-09-30 17:45:50.849059058 +0000 UTC m=+0.102111157 container init 6f13c85a51a471711c4ed30ad8c1d16ed3124027c2a168564fec9438cfe32d56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_elion, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:45:50 compute-0 podman[132563]: 2025-09-30 17:45:50.857673619 +0000 UTC m=+0.110725718 container start 6f13c85a51a471711c4ed30ad8c1d16ed3124027c2a168564fec9438cfe32d56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:45:50 compute-0 podman[132563]: 2025-09-30 17:45:50.860661872 +0000 UTC m=+0.113713991 container attach 6f13c85a51a471711c4ed30ad8c1d16ed3124027c2a168564fec9438cfe32d56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_elion, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:45:50 compute-0 determined_elion[132579]: 167 167
Sep 30 17:45:50 compute-0 podman[132563]: 2025-09-30 17:45:50.767462356 +0000 UTC m=+0.020514475 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:45:50 compute-0 systemd[1]: libpod-6f13c85a51a471711c4ed30ad8c1d16ed3124027c2a168564fec9438cfe32d56.scope: Deactivated successfully.
Sep 30 17:45:50 compute-0 conmon[132579]: conmon 6f13c85a51a471711c4e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6f13c85a51a471711c4ed30ad8c1d16ed3124027c2a168564fec9438cfe32d56.scope/container/memory.events
Sep 30 17:45:50 compute-0 podman[132563]: 2025-09-30 17:45:50.863947464 +0000 UTC m=+0.116999563 container died 6f13c85a51a471711c4ed30ad8c1d16ed3124027c2a168564fec9438cfe32d56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Sep 30 17:45:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec8f6864f8a6d3955751956cbdd0ade41bc27374ef05df734d0f16073fde6972-merged.mount: Deactivated successfully.
Sep 30 17:45:50 compute-0 podman[132563]: 2025-09-30 17:45:50.899828668 +0000 UTC m=+0.152880767 container remove 6f13c85a51a471711c4ed30ad8c1d16ed3124027c2a168564fec9438cfe32d56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_elion, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:45:50 compute-0 systemd[1]: libpod-conmon-6f13c85a51a471711c4ed30ad8c1d16ed3124027c2a168564fec9438cfe32d56.scope: Deactivated successfully.
Sep 30 17:45:51 compute-0 podman[132605]: 2025-09-30 17:45:51.071102607 +0000 UTC m=+0.043152247 container create 2b4a20c282ea1fdee16a4d9d62484e9f902d6b70de8d21e2ab658d617e68ca63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_matsumoto, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:45:51 compute-0 systemd[1]: Started libpod-conmon-2b4a20c282ea1fdee16a4d9d62484e9f902d6b70de8d21e2ab658d617e68ca63.scope.
Sep 30 17:45:51 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:45:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05e28a3e9b2aef30034124482bd3e71ac1549fad64aec6bca708806c4455ec06/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:45:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05e28a3e9b2aef30034124482bd3e71ac1549fad64aec6bca708806c4455ec06/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:45:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05e28a3e9b2aef30034124482bd3e71ac1549fad64aec6bca708806c4455ec06/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:45:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05e28a3e9b2aef30034124482bd3e71ac1549fad64aec6bca708806c4455ec06/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:45:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05e28a3e9b2aef30034124482bd3e71ac1549fad64aec6bca708806c4455ec06/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:45:51 compute-0 podman[132605]: 2025-09-30 17:45:51.054269477 +0000 UTC m=+0.026319147 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:45:51 compute-0 podman[132605]: 2025-09-30 17:45:51.154596682 +0000 UTC m=+0.126646352 container init 2b4a20c282ea1fdee16a4d9d62484e9f902d6b70de8d21e2ab658d617e68ca63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_matsumoto, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Sep 30 17:45:51 compute-0 podman[132605]: 2025-09-30 17:45:51.162332189 +0000 UTC m=+0.134381839 container start 2b4a20c282ea1fdee16a4d9d62484e9f902d6b70de8d21e2ab658d617e68ca63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_matsumoto, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Sep 30 17:45:51 compute-0 podman[132605]: 2025-09-30 17:45:51.166091134 +0000 UTC m=+0.138140774 container attach 2b4a20c282ea1fdee16a4d9d62484e9f902d6b70de8d21e2ab658d617e68ca63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_matsumoto, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:45:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v203: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:45:51 compute-0 stoic_matsumoto[132622]: --> passed data devices: 0 physical, 1 LVM
Sep 30 17:45:51 compute-0 stoic_matsumoto[132622]: --> All data devices are unavailable
Sep 30 17:45:51 compute-0 systemd[1]: libpod-2b4a20c282ea1fdee16a4d9d62484e9f902d6b70de8d21e2ab658d617e68ca63.scope: Deactivated successfully.
Sep 30 17:45:51 compute-0 podman[132638]: 2025-09-30 17:45:51.508688175 +0000 UTC m=+0.023785456 container died 2b4a20c282ea1fdee16a4d9d62484e9f902d6b70de8d21e2ab658d617e68ca63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:45:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-05e28a3e9b2aef30034124482bd3e71ac1549fad64aec6bca708806c4455ec06-merged.mount: Deactivated successfully.
Sep 30 17:45:51 compute-0 podman[132638]: 2025-09-30 17:45:51.545625828 +0000 UTC m=+0.060723089 container remove 2b4a20c282ea1fdee16a4d9d62484e9f902d6b70de8d21e2ab658d617e68ca63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_matsumoto, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:45:51 compute-0 systemd[1]: libpod-conmon-2b4a20c282ea1fdee16a4d9d62484e9f902d6b70de8d21e2ab658d617e68ca63.scope: Deactivated successfully.
Sep 30 17:45:51 compute-0 sudo[132497]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:51 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8adc0047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:51 compute-0 sudo[132656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:45:51 compute-0 sudo[132656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:45:51 compute-0 sudo[132656]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:45:51.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:51 compute-0 sudo[132681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 17:45:51 compute-0 sudo[132681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:45:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:51 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ab0001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:52 compute-0 podman[132744]: 2025-09-30 17:45:52.091621807 +0000 UTC m=+0.037390246 container create ef8251d0bfdb0624cd02bf6848248863890714c9e56cbdd7d2b9dc604052020b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_colden, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:45:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 17:45:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:45:52.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 17:45:52 compute-0 systemd[1]: Started libpod-conmon-ef8251d0bfdb0624cd02bf6848248863890714c9e56cbdd7d2b9dc604052020b.scope.
Sep 30 17:45:52 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:45:52 compute-0 podman[132744]: 2025-09-30 17:45:52.075528837 +0000 UTC m=+0.021297306 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:45:52 compute-0 podman[132744]: 2025-09-30 17:45:52.173310632 +0000 UTC m=+0.119079101 container init ef8251d0bfdb0624cd02bf6848248863890714c9e56cbdd7d2b9dc604052020b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_colden, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid)
Sep 30 17:45:52 compute-0 podman[132744]: 2025-09-30 17:45:52.179797853 +0000 UTC m=+0.125566292 container start ef8251d0bfdb0624cd02bf6848248863890714c9e56cbdd7d2b9dc604052020b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_colden, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Sep 30 17:45:52 compute-0 podman[132744]: 2025-09-30 17:45:52.183093596 +0000 UTC m=+0.128862065 container attach ef8251d0bfdb0624cd02bf6848248863890714c9e56cbdd7d2b9dc604052020b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Sep 30 17:45:52 compute-0 elated_colden[132761]: 167 167
Sep 30 17:45:52 compute-0 systemd[1]: libpod-ef8251d0bfdb0624cd02bf6848248863890714c9e56cbdd7d2b9dc604052020b.scope: Deactivated successfully.
Sep 30 17:45:52 compute-0 podman[132744]: 2025-09-30 17:45:52.185694638 +0000 UTC m=+0.131463077 container died ef8251d0bfdb0624cd02bf6848248863890714c9e56cbdd7d2b9dc604052020b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Sep 30 17:45:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-d78b984ef24e8b0f146eeb7eaa63a0c97f1804f5294e36419cbf20ef21154672-merged.mount: Deactivated successfully.
Sep 30 17:45:52 compute-0 podman[132744]: 2025-09-30 17:45:52.221217772 +0000 UTC m=+0.166986211 container remove ef8251d0bfdb0624cd02bf6848248863890714c9e56cbdd7d2b9dc604052020b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:45:52 compute-0 systemd[1]: libpod-conmon-ef8251d0bfdb0624cd02bf6848248863890714c9e56cbdd7d2b9dc604052020b.scope: Deactivated successfully.
Sep 30 17:45:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:45:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:45:52 compute-0 podman[132784]: 2025-09-30 17:45:52.377293767 +0000 UTC m=+0.040797602 container create 9860e3606045d8a62ea1bee6cf663d45e1fd5aefd74e76f0912bf3270ecdac2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_shannon, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 17:45:52 compute-0 systemd[1]: Started libpod-conmon-9860e3606045d8a62ea1bee6cf663d45e1fd5aefd74e76f0912bf3270ecdac2d.scope.
Sep 30 17:45:52 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:45:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d380e0892d43389ddc97c45f83771885c09a7d0fb3b89d00d3014a588f49e2b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:45:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d380e0892d43389ddc97c45f83771885c09a7d0fb3b89d00d3014a588f49e2b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:45:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d380e0892d43389ddc97c45f83771885c09a7d0fb3b89d00d3014a588f49e2b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:45:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d380e0892d43389ddc97c45f83771885c09a7d0fb3b89d00d3014a588f49e2b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:45:52 compute-0 podman[132784]: 2025-09-30 17:45:52.358727897 +0000 UTC m=+0.022231752 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:45:52 compute-0 podman[132784]: 2025-09-30 17:45:52.46396367 +0000 UTC m=+0.127467525 container init 9860e3606045d8a62ea1bee6cf663d45e1fd5aefd74e76f0912bf3270ecdac2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_shannon, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid)
Sep 30 17:45:52 compute-0 podman[132784]: 2025-09-30 17:45:52.470164054 +0000 UTC m=+0.133667889 container start 9860e3606045d8a62ea1bee6cf663d45e1fd5aefd74e76f0912bf3270ecdac2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_shannon, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:45:52 compute-0 podman[132784]: 2025-09-30 17:45:52.473470366 +0000 UTC m=+0.136974221 container attach 9860e3606045d8a62ea1bee6cf663d45e1fd5aefd74e76f0912bf3270ecdac2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_shannon, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Sep 30 17:45:52 compute-0 ceph-mon[73755]: pgmap v203: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:45:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:45:52 compute-0 modest_shannon[132800]: {
Sep 30 17:45:52 compute-0 modest_shannon[132800]:     "0": [
Sep 30 17:45:52 compute-0 modest_shannon[132800]:         {
Sep 30 17:45:52 compute-0 modest_shannon[132800]:             "devices": [
Sep 30 17:45:52 compute-0 modest_shannon[132800]:                 "/dev/loop3"
Sep 30 17:45:52 compute-0 modest_shannon[132800]:             ],
Sep 30 17:45:52 compute-0 modest_shannon[132800]:             "lv_name": "ceph_lv0",
Sep 30 17:45:52 compute-0 modest_shannon[132800]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:45:52 compute-0 modest_shannon[132800]:             "lv_size": "21470642176",
Sep 30 17:45:52 compute-0 modest_shannon[132800]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 17:45:52 compute-0 modest_shannon[132800]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:45:52 compute-0 modest_shannon[132800]:             "name": "ceph_lv0",
Sep 30 17:45:52 compute-0 modest_shannon[132800]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:45:52 compute-0 modest_shannon[132800]:             "tags": {
Sep 30 17:45:52 compute-0 modest_shannon[132800]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:45:52 compute-0 modest_shannon[132800]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:45:52 compute-0 modest_shannon[132800]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 17:45:52 compute-0 modest_shannon[132800]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 17:45:52 compute-0 modest_shannon[132800]:                 "ceph.cluster_name": "ceph",
Sep 30 17:45:52 compute-0 modest_shannon[132800]:                 "ceph.crush_device_class": "",
Sep 30 17:45:52 compute-0 modest_shannon[132800]:                 "ceph.encrypted": "0",
Sep 30 17:45:52 compute-0 modest_shannon[132800]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 17:45:52 compute-0 modest_shannon[132800]:                 "ceph.osd_id": "0",
Sep 30 17:45:52 compute-0 modest_shannon[132800]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 17:45:52 compute-0 modest_shannon[132800]:                 "ceph.type": "block",
Sep 30 17:45:52 compute-0 modest_shannon[132800]:                 "ceph.vdo": "0",
Sep 30 17:45:52 compute-0 modest_shannon[132800]:                 "ceph.with_tpm": "0"
Sep 30 17:45:52 compute-0 modest_shannon[132800]:             },
Sep 30 17:45:52 compute-0 modest_shannon[132800]:             "type": "block",
Sep 30 17:45:52 compute-0 modest_shannon[132800]:             "vg_name": "ceph_vg0"
Sep 30 17:45:52 compute-0 modest_shannon[132800]:         }
Sep 30 17:45:52 compute-0 modest_shannon[132800]:     ]
Sep 30 17:45:52 compute-0 modest_shannon[132800]: }
Sep 30 17:45:52 compute-0 systemd[1]: libpod-9860e3606045d8a62ea1bee6cf663d45e1fd5aefd74e76f0912bf3270ecdac2d.scope: Deactivated successfully.
Sep 30 17:45:52 compute-0 podman[132784]: 2025-09-30 17:45:52.754385792 +0000 UTC m=+0.417889637 container died 9860e3606045d8a62ea1bee6cf663d45e1fd5aefd74e76f0912bf3270ecdac2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_shannon, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:45:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-d380e0892d43389ddc97c45f83771885c09a7d0fb3b89d00d3014a588f49e2b4-merged.mount: Deactivated successfully.
Sep 30 17:45:52 compute-0 podman[132784]: 2025-09-30 17:45:52.794005571 +0000 UTC m=+0.457509456 container remove 9860e3606045d8a62ea1bee6cf663d45e1fd5aefd74e76f0912bf3270ecdac2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_shannon, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:45:52 compute-0 systemd[1]: libpod-conmon-9860e3606045d8a62ea1bee6cf663d45e1fd5aefd74e76f0912bf3270ecdac2d.scope: Deactivated successfully.
Sep 30 17:45:52 compute-0 sudo[132681]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:52 compute-0 sudo[132820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:45:52 compute-0 sudo[132820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:45:52 compute-0 sudo[132820]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:52 compute-0 sudo[132845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 17:45:52 compute-0 sudo[132845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:45:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v204: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:45:53 compute-0 podman[132911]: 2025-09-30 17:45:53.334054834 +0000 UTC m=+0.037416288 container create e6d253e87a489056fdbc77a55d19c969273c15e4337363ee836630f6066f327c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Sep 30 17:45:53 compute-0 systemd[1]: Started libpod-conmon-e6d253e87a489056fdbc77a55d19c969273c15e4337363ee836630f6066f327c.scope.
Sep 30 17:45:53 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:45:53 compute-0 podman[132911]: 2025-09-30 17:45:53.409479633 +0000 UTC m=+0.112841107 container init e6d253e87a489056fdbc77a55d19c969273c15e4337363ee836630f6066f327c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:45:53 compute-0 podman[132911]: 2025-09-30 17:45:53.316813251 +0000 UTC m=+0.020174735 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:45:53 compute-0 podman[132911]: 2025-09-30 17:45:53.414839183 +0000 UTC m=+0.118200637 container start e6d253e87a489056fdbc77a55d19c969273c15e4337363ee836630f6066f327c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_aryabhata, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid)
Sep 30 17:45:53 compute-0 podman[132911]: 2025-09-30 17:45:53.418851615 +0000 UTC m=+0.122213089 container attach e6d253e87a489056fdbc77a55d19c969273c15e4337363ee836630f6066f327c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_aryabhata, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Sep 30 17:45:53 compute-0 zealous_aryabhata[132928]: 167 167
Sep 30 17:45:53 compute-0 systemd[1]: libpod-e6d253e87a489056fdbc77a55d19c969273c15e4337363ee836630f6066f327c.scope: Deactivated successfully.
Sep 30 17:45:53 compute-0 podman[132911]: 2025-09-30 17:45:53.420634915 +0000 UTC m=+0.123996369 container died e6d253e87a489056fdbc77a55d19c969273c15e4337363ee836630f6066f327c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_aryabhata, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:45:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4f232d057159507a02ec8a854a8af74d28b3102aa604d7381e5f5814a927cab-merged.mount: Deactivated successfully.
Sep 30 17:45:53 compute-0 podman[132911]: 2025-09-30 17:45:53.458168085 +0000 UTC m=+0.161529539 container remove e6d253e87a489056fdbc77a55d19c969273c15e4337363ee836630f6066f327c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Sep 30 17:45:53 compute-0 systemd[1]: libpod-conmon-e6d253e87a489056fdbc77a55d19c969273c15e4337363ee836630f6066f327c.scope: Deactivated successfully.
Sep 30 17:45:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:53 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ae000a960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:53 compute-0 podman[132951]: 2025-09-30 17:45:53.626259995 +0000 UTC m=+0.054859305 container create d8c59b85c5dd884879f9736c28d23fd567cfbe1cbfb6de82250061fb9adb2a45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_bardeen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:45:53 compute-0 systemd[1]: Started libpod-conmon-d8c59b85c5dd884879f9736c28d23fd567cfbe1cbfb6de82250061fb9adb2a45.scope.
Sep 30 17:45:53 compute-0 podman[132951]: 2025-09-30 17:45:53.601142303 +0000 UTC m=+0.029741633 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:45:53 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:45:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b8f56445a3729b47cb3c08bd2bdeb9d141095b81689d2ef93b794b1878d5771/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:45:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b8f56445a3729b47cb3c08bd2bdeb9d141095b81689d2ef93b794b1878d5771/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:45:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b8f56445a3729b47cb3c08bd2bdeb9d141095b81689d2ef93b794b1878d5771/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:45:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b8f56445a3729b47cb3c08bd2bdeb9d141095b81689d2ef93b794b1878d5771/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:45:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 17:45:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:45:53.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 17:45:53 compute-0 podman[132951]: 2025-09-30 17:45:53.722579199 +0000 UTC m=+0.151178469 container init d8c59b85c5dd884879f9736c28d23fd567cfbe1cbfb6de82250061fb9adb2a45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:45:53 compute-0 podman[132951]: 2025-09-30 17:45:53.732850866 +0000 UTC m=+0.161450166 container start d8c59b85c5dd884879f9736c28d23fd567cfbe1cbfb6de82250061fb9adb2a45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_bardeen, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:45:53 compute-0 podman[132951]: 2025-09-30 17:45:53.737361562 +0000 UTC m=+0.165960852 container attach d8c59b85c5dd884879f9736c28d23fd567cfbe1cbfb6de82250061fb9adb2a45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:45:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:53 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ae000a960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:45:54.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:54 compute-0 sshd-session[133028]: Accepted publickey for zuul from 192.168.122.30 port 53004 ssh2: ECDSA SHA256:COqAmbdBUMx+UyDa5XAop4Bi6qvwvYhJQ16rCseuHkQ
Sep 30 17:45:54 compute-0 systemd-logind[811]: New session 47 of user zuul.
Sep 30 17:45:54 compute-0 systemd[1]: Started Session 47 of User zuul.
Sep 30 17:45:54 compute-0 sshd-session[133028]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 17:45:54 compute-0 sshd-session[70665]: Received disconnect from 38.102.83.36 port 51828:11: disconnected by user
Sep 30 17:45:54 compute-0 lvm[133049]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 17:45:54 compute-0 lvm[133049]: VG ceph_vg0 finished
Sep 30 17:45:54 compute-0 sshd-session[70665]: Disconnected from user zuul 38.102.83.36 port 51828
Sep 30 17:45:54 compute-0 sshd-session[70662]: pam_unix(sshd:session): session closed for user zuul
Sep 30 17:45:54 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Sep 30 17:45:54 compute-0 systemd[1]: session-19.scope: Consumed 1min 30.348s CPU time.
Sep 30 17:45:54 compute-0 systemd-logind[811]: Session 19 logged out. Waiting for processes to exit.
Sep 30 17:45:54 compute-0 systemd-logind[811]: Removed session 19.
Sep 30 17:45:54 compute-0 festive_bardeen[132971]: {}
Sep 30 17:45:54 compute-0 systemd[1]: libpod-d8c59b85c5dd884879f9736c28d23fd567cfbe1cbfb6de82250061fb9adb2a45.scope: Deactivated successfully.
Sep 30 17:45:54 compute-0 systemd[1]: libpod-d8c59b85c5dd884879f9736c28d23fd567cfbe1cbfb6de82250061fb9adb2a45.scope: Consumed 1.184s CPU time.
Sep 30 17:45:54 compute-0 podman[132951]: 2025-09-30 17:45:54.488300802 +0000 UTC m=+0.916900082 container died d8c59b85c5dd884879f9736c28d23fd567cfbe1cbfb6de82250061fb9adb2a45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:45:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b8f56445a3729b47cb3c08bd2bdeb9d141095b81689d2ef93b794b1878d5771-merged.mount: Deactivated successfully.
Sep 30 17:45:54 compute-0 podman[132951]: 2025-09-30 17:45:54.531633654 +0000 UTC m=+0.960232924 container remove d8c59b85c5dd884879f9736c28d23fd567cfbe1cbfb6de82250061fb9adb2a45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_bardeen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:45:54 compute-0 systemd[1]: libpod-conmon-d8c59b85c5dd884879f9736c28d23fd567cfbe1cbfb6de82250061fb9adb2a45.scope: Deactivated successfully.
Sep 30 17:45:54 compute-0 sudo[132845]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:45:54 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:45:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:45:54 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:45:54 compute-0 sudo[133116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 17:45:54 compute-0 sudo[133116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:45:54 compute-0 sudo[133116]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:54 compute-0 ceph-mon[73755]: pgmap v204: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:45:54 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:45:54 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:45:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:45:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v205: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:45:55 compute-0 python3.9[133238]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:45:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:55 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8adc0047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:45:55.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:55 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8adc0047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:45:56.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:56 compute-0 sudo[133394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgdpewxvsyiidkkqhvbdpafgwyvjsblf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254355.9226804-44-246143582187035/AnsiballZ_systemd.py'
Sep 30 17:45:56 compute-0 sudo[133394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:56 compute-0 ceph-mon[73755]: pgmap v205: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:45:56 compute-0 python3.9[133396]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Sep 30 17:45:56 compute-0 sudo[133394]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:45:56.968Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 17:45:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:45:56.968Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 17:45:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v206: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:45:57 compute-0 sudo[133549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwpbjxocjryilkabhohmjjmkuvnyqonf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254357.122098-60-263725573541127/AnsiballZ_systemd.py'
Sep 30 17:45:57 compute-0 sudo[133549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:57 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ab80032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:57 compute-0 python3.9[133551]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 17:45:57 compute-0 sudo[133549]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:45:57.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:57 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ae000a960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:45:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:45:58.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:45:58 compute-0 sudo[133704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpgtraulecnqopzuhcyjednprtcpxgew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254358.1232333-78-261649139780766/AnsiballZ_command.py'
Sep 30 17:45:58 compute-0 sudo[133704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:58 compute-0 python3.9[133706]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:45:58 compute-0 sudo[133704]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:58 compute-0 ceph-mon[73755]: pgmap v206: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:45:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:45:58] "GET /metrics HTTP/1.1" 200 46512 "" "Prometheus/2.51.0"
Sep 30 17:45:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:45:58] "GET /metrics HTTP/1.1" 200 46512 "" "Prometheus/2.51.0"
Sep 30 17:45:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v207: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:45:59 compute-0 sudo[133858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxldaizocjsicrvzeubparucsrkimplh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254359.0951717-94-103225786061080/AnsiballZ_stat.py'
Sep 30 17:45:59 compute-0 sudo[133858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:45:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:59 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8abc000f30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:45:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:45:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:45:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000029s ======
Sep 30 17:45:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:45:59.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Sep 30 17:45:59 compute-0 python3.9[133860]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:45:59 compute-0 sudo[133858]: pam_unix(sudo:session): session closed for user root
Sep 30 17:45:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:45:59 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ab0002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:46:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:46:00.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:46:00 compute-0 sudo[134011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtqbdoxsccswjyrfcpwmpycpzgsdyyow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254360.0329094-112-273693446845919/AnsiballZ_file.py'
Sep 30 17:46:00 compute-0 sudo[134011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:00 compute-0 sudo[134014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:46:00 compute-0 sudo[134014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:46:00 compute-0 sudo[134014]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:00 compute-0 python3.9[134013]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:46:00 compute-0 sudo[134011]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:00 compute-0 ceph-mon[73755]: pgmap v207: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:01 compute-0 sshd-session[133047]: Connection closed by 192.168.122.30 port 53004
Sep 30 17:46:01 compute-0 sshd-session[133028]: pam_unix(sshd:session): session closed for user zuul
Sep 30 17:46:01 compute-0 systemd[1]: session-47.scope: Deactivated successfully.
Sep 30 17:46:01 compute-0 systemd[1]: session-47.scope: Consumed 3.642s CPU time.
Sep 30 17:46:01 compute-0 systemd-logind[811]: Session 47 logged out. Waiting for processes to exit.
Sep 30 17:46:01 compute-0 systemd-logind[811]: Removed session 47.
Sep 30 17:46:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v208: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:01 compute-0 anacron[4242]: Job `cron.daily' started
Sep 30 17:46:01 compute-0 anacron[4242]: Job `cron.daily' terminated
Sep 30 17:46:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:01 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ab8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 17:46:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:46:01.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 17:46:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:01 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ae000a960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:46:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:46:02.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:46:02 compute-0 ceph-mon[73755]: pgmap v208: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v209: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:03 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8abc000f30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:46:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:46:03.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:46:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:03 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ab0002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 17:46:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:46:04.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 17:46:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:46:04 compute-0 ceph-mon[73755]: pgmap v209: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v210: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:46:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:05 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ab0002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 17:46:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:46:05.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 17:46:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:05 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ae000a960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 17:46:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:46:06.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 17:46:06 compute-0 sshd-session[134071]: Accepted publickey for zuul from 192.168.122.30 port 45416 ssh2: ECDSA SHA256:COqAmbdBUMx+UyDa5XAop4Bi6qvwvYhJQ16rCseuHkQ
Sep 30 17:46:06 compute-0 systemd-logind[811]: New session 48 of user zuul.
Sep 30 17:46:06 compute-0 systemd[1]: Started Session 48 of User zuul.
Sep 30 17:46:06 compute-0 sshd-session[134071]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 17:46:06 compute-0 ceph-mon[73755]: pgmap v210: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:46:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:46:06.969Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:46:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:46:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v211: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 17:46:07 compute-0 python3.9[134224]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_17:46:07
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['default.rgw.log', 'vms', '.rgw.root', '.nfs', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'images']
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:46:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:46:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:07 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8abc000f30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:46:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:46:07.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:46:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:07 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ab8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:46:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:46:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:46:08.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:46:08 compute-0 sudo[134381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrniazevesodicaxqpnrdqivyoigkvgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254367.9682903-48-57267056203428/AnsiballZ_setup.py'
Sep 30 17:46:08 compute-0 sudo[134381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:08 compute-0 python3.9[134383]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 17:46:08 compute-0 sudo[134381]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:46:08] "GET /metrics HTTP/1.1" 200 46509 "" "Prometheus/2.51.0"
Sep 30 17:46:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:46:08] "GET /metrics HTTP/1.1" 200 46509 "" "Prometheus/2.51.0"
Sep 30 17:46:08 compute-0 ceph-mon[73755]: pgmap v211: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:09 compute-0 sudo[134465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odyhwdrllqmydropdpttvvhpzpubhyez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254367.9682903-48-57267056203428/AnsiballZ_dnf.py'
Sep 30 17:46:09 compute-0 sudo[134465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v212: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:09 compute-0 python3.9[134467]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Sep 30 17:46:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:09 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ac80008d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:46:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 17:46:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:46:09.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 17:46:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:09 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ae000a960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:09 compute-0 ceph-mon[73755]: pgmap v212: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:46:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:46:10.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:46:10 compute-0 sudo[134465]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v213: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:11 compute-0 python3.9[134620]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:46:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:11 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8abc0010d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 17:46:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:46:11.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 17:46:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:11 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ab8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 17:46:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:46:12.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 17:46:12 compute-0 ceph-mon[73755]: pgmap v213: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:12 compute-0 python3.9[134773]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Sep 30 17:46:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v214: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:13 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ac80008d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:46:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:46:13.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:46:13 compute-0 python3.9[134924]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:46:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:13 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ae000a960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:46:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:46:14.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:46:14 compute-0 ceph-mon[73755]: pgmap v214: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:14 compute-0 python3.9[135075]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:46:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:46:15 compute-0 sshd-session[134074]: Connection closed by 192.168.122.30 port 45416
Sep 30 17:46:15 compute-0 sshd-session[134071]: pam_unix(sshd:session): session closed for user zuul
Sep 30 17:46:15 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Sep 30 17:46:15 compute-0 systemd[1]: session-48.scope: Consumed 5.583s CPU time.
Sep 30 17:46:15 compute-0 systemd-logind[811]: Session 48 logged out. Waiting for processes to exit.
Sep 30 17:46:15 compute-0 systemd-logind[811]: Removed session 48.
Sep 30 17:46:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v215: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:46:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:15 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8abc002d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 17:46:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:46:15.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 17:46:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:15 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ab8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:46:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:46:16.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:46:16 compute-0 ceph-mon[73755]: pgmap v215: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:46:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:46:16.970Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 17:46:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:46:16.970Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:46:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v216: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:17 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ac80021a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 17:46:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:46:17.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 17:46:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:17 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ae000a960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 17:46:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:46:18.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 17:46:18 compute-0 ceph-mon[73755]: pgmap v216: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:46:18] "GET /metrics HTTP/1.1" 200 46509 "" "Prometheus/2.51.0"
Sep 30 17:46:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:46:18] "GET /metrics HTTP/1.1" 200 46509 "" "Prometheus/2.51.0"
Sep 30 17:46:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v217: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:19 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8abc002eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:46:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:46:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:46:19.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:46:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:19 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ab8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:46:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:46:20.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:46:20 compute-0 sshd-session[135106]: Accepted publickey for zuul from 192.168.122.30 port 56206 ssh2: ECDSA SHA256:COqAmbdBUMx+UyDa5XAop4Bi6qvwvYhJQ16rCseuHkQ
Sep 30 17:46:20 compute-0 systemd-logind[811]: New session 49 of user zuul.
Sep 30 17:46:20 compute-0 systemd[1]: Started Session 49 of User zuul.
Sep 30 17:46:20 compute-0 sshd-session[135106]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 17:46:20 compute-0 ceph-mon[73755]: pgmap v217: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:20 compute-0 sudo[135162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:46:20 compute-0 sudo[135162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:46:20 compute-0 sudo[135162]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v218: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:21 compute-0 python3.9[135284]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:46:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:21 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ac80021a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:46:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:46:21.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:46:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:21 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ae000a960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:46:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:46:22.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:46:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:46:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:46:22 compute-0 ceph-mon[73755]: pgmap v218: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:46:23 compute-0 sudo[135440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iooxplzxgpxfhogroekldnzszatggkmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254382.6072047-81-204300034958531/AnsiballZ_file.py'
Sep 30 17:46:23 compute-0 sudo[135440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:23 compute-0 python3.9[135442]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:46:23 compute-0 sudo[135440]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v219: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:23 compute-0 sudo[135594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anndnltxcsjvdddzwyekicwyoemjiilh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254383.370749-81-173810383759549/AnsiballZ_file.py'
Sep 30 17:46:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:23 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8abc0037d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:23 compute-0 sudo[135594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 17:46:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:46:23.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 17:46:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:23 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ab8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:23 compute-0 python3.9[135596]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:46:23 compute-0 sudo[135594]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:46:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:46:24.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:46:24 compute-0 sudo[135746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irnrvyhgvjgzgwwpydnmmehsjvsaffwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254384.0551212-109-266321309329544/AnsiballZ_stat.py'
Sep 30 17:46:24 compute-0 sudo[135746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:24 compute-0 ceph-mon[73755]: pgmap v219: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:24 compute-0 python3.9[135748]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:46:24 compute-0 sudo[135746]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:46:24 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Sep 30 17:46:24 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:46:24.725888) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 17:46:24 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Sep 30 17:46:24 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254384726065, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1235, "num_deletes": 251, "total_data_size": 2151537, "memory_usage": 2183608, "flush_reason": "Manual Compaction"}
Sep 30 17:46:24 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Sep 30 17:46:24 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254384736529, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 1278545, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10496, "largest_seqno": 11730, "table_properties": {"data_size": 1274113, "index_size": 1892, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11377, "raw_average_key_size": 19, "raw_value_size": 1264509, "raw_average_value_size": 2218, "num_data_blocks": 86, "num_entries": 570, "num_filter_entries": 570, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759254264, "oldest_key_time": 1759254264, "file_creation_time": 1759254384, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Sep 30 17:46:24 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 10700 microseconds, and 4657 cpu microseconds.
Sep 30 17:46:24 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 17:46:24 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:46:24.736586) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 1278545 bytes OK
Sep 30 17:46:24 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:46:24.736604) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Sep 30 17:46:24 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:46:24.737637) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Sep 30 17:46:24 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:46:24.737650) EVENT_LOG_v1 {"time_micros": 1759254384737646, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 17:46:24 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:46:24.737665) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 17:46:24 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 2146049, prev total WAL file size 2146049, number of live WAL files 2.
Sep 30 17:46:24 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 17:46:24 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:46:24.738299) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Sep 30 17:46:24 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 17:46:24 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(1248KB)], [26(11MB)]
Sep 30 17:46:24 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254384738326, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 13503684, "oldest_snapshot_seqno": -1}
Sep 30 17:46:24 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3976 keys, 11191710 bytes, temperature: kUnknown
Sep 30 17:46:24 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254384802294, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 11191710, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11161897, "index_size": 18759, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9989, "raw_key_size": 100813, "raw_average_key_size": 25, "raw_value_size": 11086126, "raw_average_value_size": 2788, "num_data_blocks": 806, "num_entries": 3976, "num_filter_entries": 3976, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759254384, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Sep 30 17:46:24 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 17:46:24 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:46:24.802776) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 11191710 bytes
Sep 30 17:46:24 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:46:24.805270) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 210.9 rd, 174.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 11.7 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(19.3) write-amplify(8.8) OK, records in: 4438, records dropped: 462 output_compression: NoCompression
Sep 30 17:46:24 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:46:24.805301) EVENT_LOG_v1 {"time_micros": 1759254384805286, "job": 10, "event": "compaction_finished", "compaction_time_micros": 64039, "compaction_time_cpu_micros": 21610, "output_level": 6, "num_output_files": 1, "total_output_size": 11191710, "num_input_records": 4438, "num_output_records": 3976, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 17:46:24 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 17:46:24 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254384805914, "job": 10, "event": "table_file_deletion", "file_number": 28}
Sep 30 17:46:24 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 17:46:24 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254384809094, "job": 10, "event": "table_file_deletion", "file_number": 26}
Sep 30 17:46:24 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:46:24.738230) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:46:24 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:46:24.809165) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:46:24 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:46:24.809168) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:46:24 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:46:24.809170) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:46:24 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:46:24.809171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:46:24 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:46:24.809173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:46:25 compute-0 sudo[135869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syaywwaaeklxthkvteihizwevvpwuflr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254384.0551212-109-266321309329544/AnsiballZ_copy.py'
Sep 30 17:46:25 compute-0 sudo[135869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v220: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:46:25 compute-0 python3.9[135871]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254384.0551212-109-266321309329544/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=42c8f86044e973828dfa307c523e75a3a0d9b231 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:46:25 compute-0 sudo[135869]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:25 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ac80021a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:46:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:46:25.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:46:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:25 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ae000a960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:25 compute-0 sudo[136023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqhngfbaytcqfmulszzzsohsbqjvckhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254385.6303134-109-8278169570182/AnsiballZ_stat.py'
Sep 30 17:46:25 compute-0 sudo[136023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:26 compute-0 python3.9[136025]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:46:26 compute-0 sudo[136023]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:46:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:46:26.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:46:26 compute-0 sudo[136146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvyeofxccboufmlraxfdswraoeflmtua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254385.6303134-109-8278169570182/AnsiballZ_copy.py'
Sep 30 17:46:26 compute-0 sudo[136146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:26 compute-0 python3.9[136148]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254385.6303134-109-8278169570182/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=14ca920c1050ec666e37d5428df0a6816cc0fde3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:46:26 compute-0 sudo[136146]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:26 compute-0 ceph-mon[73755]: pgmap v220: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:46:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:46:26.971Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:46:27 compute-0 sudo[136298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whgprgfaefyuhggyptecqjpbgynstbfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254386.7380908-109-201785826826812/AnsiballZ_stat.py'
Sep 30 17:46:27 compute-0 sudo[136298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:27 compute-0 python3.9[136300]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:46:27 compute-0 sudo[136298]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v221: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:27 compute-0 sudo[136422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlsbdbyoakodddluycsslzrupixwmerh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254386.7380908-109-201785826826812/AnsiballZ_copy.py'
Sep 30 17:46:27 compute-0 sudo[136422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:27 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8abc0037d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:27 compute-0 python3.9[136424]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254386.7380908-109-201785826826812/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=2ed99aab988555ee9308d8c48e2f8c112f29a6ca backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:46:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:46:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:46:27.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:46:27 compute-0 sudo[136422]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:27 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ab8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:46:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:46:28.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:46:28 compute-0 sudo[136575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssmmfyuqmzardwxnlfdvbjjzzzwdkxze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254387.9502904-192-256598186579824/AnsiballZ_file.py'
Sep 30 17:46:28 compute-0 sudo[136575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:28 compute-0 python3.9[136577]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:46:28 compute-0 sudo[136575]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:28 compute-0 ceph-mon[73755]: pgmap v221: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:46:28] "GET /metrics HTTP/1.1" 200 46516 "" "Prometheus/2.51.0"
Sep 30 17:46:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:46:28] "GET /metrics HTTP/1.1" 200 46516 "" "Prometheus/2.51.0"
Sep 30 17:46:28 compute-0 sudo[136727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upprvtqfgxkcacvijekcynolaaeriwjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254388.6195967-192-236684721424939/AnsiballZ_file.py'
Sep 30 17:46:28 compute-0 sudo[136727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:29 compute-0 python3.9[136729]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:46:29 compute-0 sudo[136727]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v222: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:29 compute-0 sudo[136880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vimekoyfgjomypbyayrylbzjpckxqxtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254389.3036752-220-104389836851407/AnsiballZ_stat.py'
Sep 30 17:46:29 compute-0 sudo[136880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:29 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ac80032a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:46:29 compute-0 python3.9[136882]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:46:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:46:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:46:29.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:46:29 compute-0 sudo[136880]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:29 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ae000a960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:30 compute-0 sudo[137005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpbaxzkqydeaeizxawafthvrckdsksbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254389.3036752-220-104389836851407/AnsiballZ_copy.py'
Sep 30 17:46:30 compute-0 sudo[137005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 17:46:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:46:30.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 17:46:30 compute-0 python3.9[137007]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254389.3036752-220-104389836851407/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=142365ae1f62f4a089d08abaaed7b7d0b4e346bd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:46:30 compute-0 sudo[137005]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:30 compute-0 ceph-mon[73755]: pgmap v222: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:30 compute-0 sudo[137157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaovrqujixivcqttqgfflrhzeupxvljt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254390.6491349-220-93492518385954/AnsiballZ_stat.py'
Sep 30 17:46:30 compute-0 sudo[137157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:31 compute-0 python3.9[137159]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:46:31 compute-0 sudo[137157]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v223: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:31 compute-0 sudo[137281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgmgyirvmhgtyzeeycqsmuinmpdptdyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254390.6491349-220-93492518385954/AnsiballZ_copy.py'
Sep 30 17:46:31 compute-0 sudo[137281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:31 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8adc001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 17:46:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:46:31.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 17:46:31 compute-0 python3.9[137284]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254390.6491349-220-93492518385954/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=b80e9b9f8c9d7ff05236c445fef88812ffce8335 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:46:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:31 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ab8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:31 compute-0 sudo[137281]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:46:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:46:32.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:46:32 compute-0 sudo[137434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfwurftajbqurolhzcmboztdsrkdnbuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254391.9987712-220-40506729698444/AnsiballZ_stat.py'
Sep 30 17:46:32 compute-0 sudo[137434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:32 compute-0 python3.9[137436]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:46:32 compute-0 sudo[137434]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:32 compute-0 ceph-mon[73755]: pgmap v223: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:32 compute-0 sudo[137557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkpsyltsvgxwekdrqxjzxeneausuzgph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254391.9987712-220-40506729698444/AnsiballZ_copy.py'
Sep 30 17:46:32 compute-0 sudo[137557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:33 compute-0 python3.9[137559]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254391.9987712-220-40506729698444/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=865ce379f214e478984a1587ee63d5e80a2a56e6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:46:33 compute-0 sudo[137557]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v224: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:33 compute-0 sudo[137710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nveyelsynzgbgezjayepczmizvzkowji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254393.2929213-303-54050374409016/AnsiballZ_file.py'
Sep 30 17:46:33 compute-0 sudo[137710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:33 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ac80032a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:33 compute-0 python3.9[137712]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:46:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:46:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:46:33.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:46:33 compute-0 sudo[137710]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:33 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ae000a960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:33 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 17:46:33 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 601.3 total, 600.0 interval
                                           Cumulative writes: 2466 writes, 11K keys, 2465 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 2466 writes, 2465 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2466 writes, 11K keys, 2465 commit groups, 1.0 writes per commit group, ingest: 19.61 MB, 0.03 MB/s
                                           Interval WAL: 2466 writes, 2465 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     49.5      0.34              0.04         5    0.069       0      0       0.0       0.0
                                             L6      1/0   10.67 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    176.5    154.3      0.29              0.11         4    0.072     16K   1787       0.0       0.0
                                            Sum      1/0   10.67 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.6     80.4     97.3      0.63              0.15         9    0.070     16K   1787       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.6    130.0    157.1      0.39              0.15         8    0.049     16K   1787       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    176.5    154.3      0.29              0.11         4    0.072     16K   1787       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    165.1      0.10              0.04         4    0.026       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.24              0.00         1    0.242       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.017, interval 0.017
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.06 GB write, 0.10 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.6 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e76de37350#2 capacity: 304.00 MB usage: 2.20 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 6.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(142,2.03 MB,0.668706%) FilterBlock(10,52.36 KB,0.0168198%) IndexBlock(10,115.25 KB,0.0370226%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Sep 30 17:46:34 compute-0 sudo[137863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guvbcdtmrlejekgauwoxbrnoonldwwwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254393.9211235-303-243696995420322/AnsiballZ_file.py'
Sep 30 17:46:34 compute-0 sudo[137863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:46:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:46:34.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:46:34 compute-0 python3.9[137865]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:46:34 compute-0 sudo[137863]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:46:34 compute-0 ceph-mon[73755]: pgmap v224: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:34 compute-0 sudo[138015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnnfsurrnmebiyehfcysrfrzhmnlvpxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254394.616547-331-242900729100838/AnsiballZ_stat.py'
Sep 30 17:46:34 compute-0 sudo[138015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:35 compute-0 python3.9[138017]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:46:35 compute-0 sudo[138015]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v225: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:46:35 compute-0 sudo[138139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-typxtjkwztsegsqjwoocymilzackovie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254394.616547-331-242900729100838/AnsiballZ_copy.py'
Sep 30 17:46:35 compute-0 sudo[138139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:35 compute-0 python3.9[138141]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254394.616547-331-242900729100838/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=84faf05597b1f33c8d566af2f682f5684d7292af backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:46:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:35 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8adc002eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:35 compute-0 sudo[138139]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:46:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:46:35.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:46:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:35 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ab8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:35 compute-0 ceph-mon[73755]: pgmap v225: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:46:36 compute-0 sudo[138292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoeydnjthecprkevizuwulademttbsrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254395.8233838-331-70679360180730/AnsiballZ_stat.py'
Sep 30 17:46:36 compute-0 sudo[138292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:46:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:46:36.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:46:36 compute-0 python3.9[138294]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:46:36 compute-0 sudo[138292]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:36 compute-0 sudo[138415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwihlzkymnetvzyjrslniwrxzqjwswvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254395.8233838-331-70679360180730/AnsiballZ_copy.py'
Sep 30 17:46:36 compute-0 sudo[138415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:36 compute-0 python3.9[138417]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254395.8233838-331-70679360180730/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=948f0dfed0e9186afecce803b60afdaad42b0122 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:46:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:46:36.972Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:46:36 compute-0 sudo[138415]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:46:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:46:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v226: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:46:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:46:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:46:37 compute-0 sudo[138568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icxefgwmyalbpsikdhboygvasrxoargj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254397.1004052-331-99306015944453/AnsiballZ_stat.py'
Sep 30 17:46:37 compute-0 sudo[138568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:46:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:46:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:46:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:46:37 compute-0 python3.9[138570]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:46:37 compute-0 sudo[138568]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:37 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ac80032a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 17:46:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:46:37.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 17:46:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:37 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ae000a960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:46:37 compute-0 sudo[138692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbugancgqpivziizenetqbgmjzteyolc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254397.1004052-331-99306015944453/AnsiballZ_copy.py'
Sep 30 17:46:37 compute-0 sudo[138692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:46:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:46:38.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:46:38 compute-0 python3.9[138694]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254397.1004052-331-99306015944453/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=adf673795857d31f9bee8a67ac42d03ebdd20e47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:46:38 compute-0 sudo[138692]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:38 compute-0 ceph-mon[73755]: pgmap v226: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:38 compute-0 sudo[138844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhpchtgsbtzguypfispacqcyhmsiwipx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254398.4318633-414-170708919273045/AnsiballZ_file.py'
Sep 30 17:46:38 compute-0 sudo[138844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:46:38] "GET /metrics HTTP/1.1" 200 46512 "" "Prometheus/2.51.0"
Sep 30 17:46:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:46:38] "GET /metrics HTTP/1.1" 200 46512 "" "Prometheus/2.51.0"
Sep 30 17:46:38 compute-0 python3.9[138846]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:46:38 compute-0 sudo[138844]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v227: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:39 compute-0 sudo[138997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izoiczvuelfotihrvgqihxdjnzkhhsmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254399.081877-414-59212299467987/AnsiballZ_file.py'
Sep 30 17:46:39 compute-0 sudo[138997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:39 compute-0 kernel: ganesha.nfsd[126706]: segfault at 50 ip 00007f8b8d87632e sp 00007f8b41ffa210 error 4 in libntirpc.so.5.8[7f8b8d85b000+2c000] likely on CPU 7 (core 0, socket 7)
Sep 30 17:46:39 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Sep 30 17:46:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[123845]: 30/09/2025 17:46:39 : epoch 68dc1710 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ae000a960 fd 38 proxy ignored for local
Sep 30 17:46:39 compute-0 python3.9[138999]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:46:39 compute-0 sudo[138997]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:39 compute-0 systemd[1]: Started Process Core Dump (PID 139002/UID 0).
Sep 30 17:46:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:46:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:46:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:46:39.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:46:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:46:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:46:40.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:46:40 compute-0 sudo[139153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxgnvgzjusicbfvmsyxmimssnabeqsyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254399.9105859-443-203640358692471/AnsiballZ_stat.py'
Sep 30 17:46:40 compute-0 sudo[139153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:40 compute-0 ceph-mon[73755]: pgmap v227: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:40 compute-0 python3.9[139155]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:46:40 compute-0 sudo[139153]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:40 compute-0 sudo[139276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txrubhacbcfnstpcnrsswabvnmhearvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254399.9105859-443-203640358692471/AnsiballZ_copy.py'
Sep 30 17:46:40 compute-0 sudo[139276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:40 compute-0 sudo[139279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:46:40 compute-0 sudo[139279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:46:40 compute-0 sudo[139279]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:41 compute-0 python3.9[139278]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254399.9105859-443-203640358692471/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=6ea2516045fb4829b1bcd81f0771c96ab1b73931 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:46:41 compute-0 systemd-coredump[139003]: Process 123851 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 54:
                                                    #0  0x00007f8b8d87632e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Sep 30 17:46:41 compute-0 sudo[139276]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:41 compute-0 systemd[1]: systemd-coredump@4-139002-0.service: Deactivated successfully.
Sep 30 17:46:41 compute-0 systemd[1]: systemd-coredump@4-139002-0.service: Consumed 1.350s CPU time.
Sep 30 17:46:41 compute-0 podman[139355]: 2025-09-30 17:46:41.224381599 +0000 UTC m=+0.034676244 container died cb0d1e06bba8db75426bf0ed3fb4a2546b194198ed9672dc577bed20aa93f9e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Sep 30 17:46:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-f31611dfeec4ef80ed8c3678d8fc881430732d5cb34083f4796245947279530b-merged.mount: Deactivated successfully.
Sep 30 17:46:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v228: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:41 compute-0 podman[139355]: 2025-09-30 17:46:41.315170345 +0000 UTC m=+0.125464950 container remove cb0d1e06bba8db75426bf0ed3fb4a2546b194198ed9672dc577bed20aa93f9e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:46:41 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Main process exited, code=exited, status=139/n/a
Sep 30 17:46:41 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Failed with result 'exit-code'.
Sep 30 17:46:41 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Consumed 1.475s CPU time.
Sep 30 17:46:41 compute-0 sudo[139498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htodekigwaqiflmxkttlhynixpsiyupp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254401.176877-443-118793587130775/AnsiballZ_stat.py'
Sep 30 17:46:41 compute-0 sudo[139498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:41 compute-0 python3.9[139500]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:46:41 compute-0 sudo[139498]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:46:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:46:41.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:46:42 compute-0 sudo[139622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgvznsaylicqubnzcsvmqnzkfxjxpnhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254401.176877-443-118793587130775/AnsiballZ_copy.py'
Sep 30 17:46:42 compute-0 sudo[139622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:46:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:46:42.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:46:42 compute-0 python3.9[139624]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254401.176877-443-118793587130775/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=948f0dfed0e9186afecce803b60afdaad42b0122 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:46:42 compute-0 sudo[139622]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:42 compute-0 ceph-mon[73755]: pgmap v228: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:42 compute-0 sudo[139774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqwxfearklvfmhirdeuzrcbtmqntcedz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254402.539797-443-3924447574971/AnsiballZ_stat.py'
Sep 30 17:46:42 compute-0 sudo[139774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:43 compute-0 python3.9[139776]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:46:43 compute-0 sudo[139774]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v229: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:46:43 compute-0 sudo[139898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsnjfrqtksfqutqimkkyhutoplxylvlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254402.539797-443-3924447574971/AnsiballZ_copy.py'
Sep 30 17:46:43 compute-0 sudo[139898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:43 compute-0 python3.9[139900]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254402.539797-443-3924447574971/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=2a066722c37c0492750e5ed67539b576f5ec3059 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:46:43 compute-0 sudo[139898]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:46:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:46:43.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:46:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:46:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:46:44.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:46:44 compute-0 ceph-mon[73755]: pgmap v229: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:46:44 compute-0 sudo[140051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgulwkowlllkfyyrgjvjlkcvchlwauyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254404.3259368-550-206116790028474/AnsiballZ_file.py'
Sep 30 17:46:44 compute-0 sudo[140051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:46:44 compute-0 python3.9[140053]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:46:44 compute-0 sudo[140051]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:45 compute-0 sudo[140203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrmpmcehzvlfcawlxkyieimganxvztij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254404.9281306-563-5560748946188/AnsiballZ_stat.py'
Sep 30 17:46:45 compute-0 sudo[140203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v230: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:46:45 compute-0 python3.9[140205]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:46:45 compute-0 sudo[140203]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/174645 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 17:46:45 compute-0 sudo[140328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssootxbynamagbhyiehpsuicpsmpzgxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254404.9281306-563-5560748946188/AnsiballZ_copy.py'
Sep 30 17:46:45 compute-0 sudo[140328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:46:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:46:45.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:46:45 compute-0 python3.9[140330]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254404.9281306-563-5560748946188/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=77fa443739ff2ff1b18352a001fa075b3190ad3c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:46:46 compute-0 sudo[140328]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:46:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:46:46.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:46:46 compute-0 ceph-mon[73755]: pgmap v230: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:46:46 compute-0 sudo[140480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiwxuuamarqemfwymdkvzoptwpzsjpui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254406.189515-600-103552948513437/AnsiballZ_file.py'
Sep 30 17:46:46 compute-0 sudo[140480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:46 compute-0 python3.9[140482]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:46:46 compute-0 sudo[140480]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:46:46.973Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 17:46:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:46:46.973Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:46:47 compute-0 sudo[140632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqddkwhypglqasvpbnuzuxftamnmbqgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254406.9731927-621-100515865774315/AnsiballZ_stat.py'
Sep 30 17:46:47 compute-0 sudo[140632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v231: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:47 compute-0 python3.9[140634]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:46:47 compute-0 sudo[140632]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:47 compute-0 sudo[140757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhbkjhcgfkpexqptgxmhhokqqbcbwilc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254406.9731927-621-100515865774315/AnsiballZ_copy.py'
Sep 30 17:46:47 compute-0 sudo[140757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:46:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:46:47.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:46:47 compute-0 python3.9[140759]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254406.9731927-621-100515865774315/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=77fa443739ff2ff1b18352a001fa075b3190ad3c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:46:47 compute-0 sudo[140757]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:46:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:46:48.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:46:48 compute-0 sudo[140909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cywabokluufudbgkytrfnfbyjkqztqkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254408.144949-647-10592845277214/AnsiballZ_file.py'
Sep 30 17:46:48 compute-0 sudo[140909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:48 compute-0 ceph-mon[73755]: pgmap v231: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:48 compute-0 python3.9[140911]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:46:48 compute-0 sudo[140909]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:46:48] "GET /metrics HTTP/1.1" 200 46512 "" "Prometheus/2.51.0"
Sep 30 17:46:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:46:48] "GET /metrics HTTP/1.1" 200 46512 "" "Prometheus/2.51.0"
Sep 30 17:46:48 compute-0 sudo[141061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejvcfjprfyfiqdxfjktczqlgctetftnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254408.731405-662-232552930676178/AnsiballZ_stat.py'
Sep 30 17:46:48 compute-0 sudo[141061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:49 compute-0 python3.9[141063]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:46:49 compute-0 sudo[141061]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v232: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:49 compute-0 sudo[141185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcpoktunjlxrenozrevqnrndvfzieddm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254408.731405-662-232552930676178/AnsiballZ_copy.py'
Sep 30 17:46:49 compute-0 sudo[141185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:49 compute-0 python3.9[141187]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254408.731405-662-232552930676178/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=77fa443739ff2ff1b18352a001fa075b3190ad3c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:46:49 compute-0 sudo[141185]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:46:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:46:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:46:49.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:46:50 compute-0 sudo[141338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwhvvfulvoofzggxuvqwiltmlrcurgui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254409.8847783-688-129473913006198/AnsiballZ_file.py'
Sep 30 17:46:50 compute-0 sudo[141338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:46:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:46:50.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:46:50 compute-0 python3.9[141340]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:46:50 compute-0 sudo[141338]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:50 compute-0 ceph-mon[73755]: pgmap v232: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:51 compute-0 sudo[141490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajitvhckwinbzxdmqlbngjbtcfagnbxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254410.5452821-703-213874210668860/AnsiballZ_stat.py'
Sep 30 17:46:51 compute-0 sudo[141490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:51 compute-0 python3.9[141492]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:46:51 compute-0 sudo[141490]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v233: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:51 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Scheduled restart job, restart counter is at 5.
Sep 30 17:46:51 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:46:51 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Consumed 1.475s CPU time.
Sep 30 17:46:51 compute-0 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 17:46:51 compute-0 sudo[141627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdqndlslwadwxwuztlhzikruodijjwbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254410.5452821-703-213874210668860/AnsiballZ_copy.py'
Sep 30 17:46:51 compute-0 sudo[141627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:51 compute-0 podman[141664]: 2025-09-30 17:46:51.734754927 +0000 UTC m=+0.042138199 container create 02f29918286e27389d9523e2f3bcce74ba367615b31dd0e3787d4b1f78167b49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:46:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:46:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:46:51.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:46:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a05ea930c20f6d83d89763907b014519879950d4c4e49c0258ef5cfa32f75050/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Sep 30 17:46:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a05ea930c20f6d83d89763907b014519879950d4c4e49c0258ef5cfa32f75050/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:46:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a05ea930c20f6d83d89763907b014519879950d4c4e49c0258ef5cfa32f75050/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:46:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a05ea930c20f6d83d89763907b014519879950d4c4e49c0258ef5cfa32f75050/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-0.syzvbh-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:46:51 compute-0 podman[141664]: 2025-09-30 17:46:51.798171969 +0000 UTC m=+0.105555271 container init 02f29918286e27389d9523e2f3bcce74ba367615b31dd0e3787d4b1f78167b49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Sep 30 17:46:51 compute-0 podman[141664]: 2025-09-30 17:46:51.803323354 +0000 UTC m=+0.110706626 container start 02f29918286e27389d9523e2f3bcce74ba367615b31dd0e3787d4b1f78167b49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:46:51 compute-0 python3.9[141630]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254410.5452821-703-213874210668860/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=77fa443739ff2ff1b18352a001fa075b3190ad3c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:46:51 compute-0 bash[141664]: 02f29918286e27389d9523e2f3bcce74ba367615b31dd0e3787d4b1f78167b49
Sep 30 17:46:51 compute-0 podman[141664]: 2025-09-30 17:46:51.716226654 +0000 UTC m=+0.023609946 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:46:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:46:51 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Sep 30 17:46:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:46:51 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Sep 30 17:46:51 compute-0 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:46:51 compute-0 sudo[141627]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:46:51 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Sep 30 17:46:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:46:51 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Sep 30 17:46:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:46:51 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Sep 30 17:46:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:46:51 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Sep 30 17:46:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:46:51 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Sep 30 17:46:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:46:51 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 17:46:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:46:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:46:52.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:46:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:46:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:46:52 compute-0 sudo[141871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zncdekfooqbkzitffwvqbmnffeyvyqjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254412.0027237-733-114043969078913/AnsiballZ_file.py'
Sep 30 17:46:52 compute-0 sudo[141871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:52 compute-0 python3.9[141873]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:46:52 compute-0 sudo[141871]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:52 compute-0 ceph-mon[73755]: pgmap v233: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:46:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:46:52 compute-0 sudo[142023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuncpaiqmvxknbybdcumygiaxlwodyft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254412.6112995-747-49407678259618/AnsiballZ_stat.py'
Sep 30 17:46:52 compute-0 sudo[142023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:53 compute-0 python3.9[142025]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:46:53 compute-0 sudo[142023]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v234: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:46:53 compute-0 sudo[142147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmoikqmheefwcnzvwlkcimkmlbfahmpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254412.6112995-747-49407678259618/AnsiballZ_copy.py'
Sep 30 17:46:53 compute-0 sudo[142147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:53 compute-0 python3.9[142149]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254412.6112995-747-49407678259618/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=77fa443739ff2ff1b18352a001fa075b3190ad3c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:46:53 compute-0 sudo[142147]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:46:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:46:53.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:46:54 compute-0 sudo[142300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llzqricvaevmetkioydcopuzjrvpqljm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254413.8306603-775-25789469770496/AnsiballZ_file.py'
Sep 30 17:46:54 compute-0 sudo[142300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:46:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:46:54.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:46:54 compute-0 python3.9[142302]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:46:54 compute-0 sudo[142300]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:54 compute-0 ceph-mon[73755]: pgmap v234: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:46:54 compute-0 sudo[142452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okggzoioqdrmmjleeoffqybzalhqtjfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254414.4315002-791-167166370796211/AnsiballZ_stat.py'
Sep 30 17:46:54 compute-0 sudo[142452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:46:54 compute-0 sudo[142455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:46:54 compute-0 sudo[142455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:46:54 compute-0 sudo[142455]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:54 compute-0 python3.9[142454]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:46:54 compute-0 sudo[142452]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:54 compute-0 sudo[142480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 17:46:54 compute-0 sudo[142480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:46:55 compute-0 sudo[142645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpaxgxsrjihotcmwrzjhqlqgvhpexmge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254414.4315002-791-167166370796211/AnsiballZ_copy.py'
Sep 30 17:46:55 compute-0 sudo[142645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v235: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:46:55 compute-0 sudo[142480]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:55 compute-0 python3.9[142647]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254414.4315002-791-167166370796211/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=77fa443739ff2ff1b18352a001fa075b3190ad3c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:46:55 compute-0 sudo[142645]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:46:55 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:46:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 17:46:55 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:46:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 17:46:55 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:46:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 17:46:55 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:46:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 17:46:55 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:46:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 17:46:55 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:46:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:46:55 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:46:55 compute-0 sudo[142696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:46:55 compute-0 sudo[142696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:46:55 compute-0 sudo[142696]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:55 compute-0 sudo[142756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 17:46:55 compute-0 sudo[142756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:46:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:46:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:46:55.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:46:56 compute-0 sudo[142917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csodmqqwqntjntzukqbwvipicgeixlwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254415.6545196-820-144529257643082/AnsiballZ_file.py'
Sep 30 17:46:56 compute-0 sudo[142917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:56 compute-0 podman[142885]: 2025-09-30 17:46:56.154329655 +0000 UTC m=+0.044583543 container create 13b11e16640490ad96921099aa1d38e8cc1919a5efa19019bbbb8862bf818d68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_curie, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:46:56 compute-0 systemd[1]: Started libpod-conmon-13b11e16640490ad96921099aa1d38e8cc1919a5efa19019bbbb8862bf818d68.scope.
Sep 30 17:46:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:46:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:46:56.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:46:56 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:46:56 compute-0 podman[142885]: 2025-09-30 17:46:56.135313189 +0000 UTC m=+0.025567097 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:46:56 compute-0 podman[142885]: 2025-09-30 17:46:56.246470126 +0000 UTC m=+0.136724044 container init 13b11e16640490ad96921099aa1d38e8cc1919a5efa19019bbbb8862bf818d68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:46:56 compute-0 podman[142885]: 2025-09-30 17:46:56.254748861 +0000 UTC m=+0.145002789 container start 13b11e16640490ad96921099aa1d38e8cc1919a5efa19019bbbb8862bf818d68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_curie, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Sep 30 17:46:56 compute-0 podman[142885]: 2025-09-30 17:46:56.258827638 +0000 UTC m=+0.149081526 container attach 13b11e16640490ad96921099aa1d38e8cc1919a5efa19019bbbb8862bf818d68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_curie, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:46:56 compute-0 interesting_curie[142924]: 167 167
Sep 30 17:46:56 compute-0 systemd[1]: libpod-13b11e16640490ad96921099aa1d38e8cc1919a5efa19019bbbb8862bf818d68.scope: Deactivated successfully.
Sep 30 17:46:56 compute-0 podman[142885]: 2025-09-30 17:46:56.262270537 +0000 UTC m=+0.152524425 container died 13b11e16640490ad96921099aa1d38e8cc1919a5efa19019bbbb8862bf818d68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:46:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-59f637549aee570c50b0e56eb7958ca1542cf8bdd430f9801d1a93565b43c7a5-merged.mount: Deactivated successfully.
Sep 30 17:46:56 compute-0 podman[142885]: 2025-09-30 17:46:56.311540911 +0000 UTC m=+0.201794799 container remove 13b11e16640490ad96921099aa1d38e8cc1919a5efa19019bbbb8862bf818d68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:46:56 compute-0 systemd[1]: libpod-conmon-13b11e16640490ad96921099aa1d38e8cc1919a5efa19019bbbb8862bf818d68.scope: Deactivated successfully.
Sep 30 17:46:56 compute-0 python3.9[142921]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:46:56 compute-0 sudo[142917]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:56 compute-0 podman[142947]: 2025-09-30 17:46:56.510435723 +0000 UTC m=+0.079916673 container create 1b8faab8c1d19789dccca481a6ea6f7a8f8a70f8de80b4d90031f7af6ce8536e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_boyd, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:46:56 compute-0 podman[142947]: 2025-09-30 17:46:56.459035734 +0000 UTC m=+0.028516764 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:46:56 compute-0 systemd[1]: Started libpod-conmon-1b8faab8c1d19789dccca481a6ea6f7a8f8a70f8de80b4d90031f7af6ce8536e.scope.
Sep 30 17:46:56 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:46:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d05d5fad2c6220feabc4a61d80b50d43e137f1f59995019037e4699141c44f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:46:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d05d5fad2c6220feabc4a61d80b50d43e137f1f59995019037e4699141c44f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:46:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d05d5fad2c6220feabc4a61d80b50d43e137f1f59995019037e4699141c44f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:46:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d05d5fad2c6220feabc4a61d80b50d43e137f1f59995019037e4699141c44f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:46:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d05d5fad2c6220feabc4a61d80b50d43e137f1f59995019037e4699141c44f7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:46:56 compute-0 ceph-mon[73755]: pgmap v235: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:46:56 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:46:56 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:46:56 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:46:56 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:46:56 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:46:56 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:46:56 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:46:56 compute-0 podman[142947]: 2025-09-30 17:46:56.630423899 +0000 UTC m=+0.199904869 container init 1b8faab8c1d19789dccca481a6ea6f7a8f8a70f8de80b4d90031f7af6ce8536e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_boyd, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:46:56 compute-0 podman[142947]: 2025-09-30 17:46:56.639445774 +0000 UTC m=+0.208926734 container start 1b8faab8c1d19789dccca481a6ea6f7a8f8a70f8de80b4d90031f7af6ce8536e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_boyd, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:46:56 compute-0 podman[142947]: 2025-09-30 17:46:56.643537121 +0000 UTC m=+0.213018201 container attach 1b8faab8c1d19789dccca481a6ea6f7a8f8a70f8de80b4d90031f7af6ce8536e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_boyd, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:46:56 compute-0 sudo[143122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awuafurygfbwgwzyqgonhkdcncfebnwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254416.603514-838-19988164117279/AnsiballZ_stat.py'
Sep 30 17:46:56 compute-0 sudo[143122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:46:56.974Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 17:46:56 compute-0 wizardly_boyd[142994]: --> passed data devices: 0 physical, 1 LVM
Sep 30 17:46:56 compute-0 wizardly_boyd[142994]: --> All data devices are unavailable
Sep 30 17:46:57 compute-0 systemd[1]: libpod-1b8faab8c1d19789dccca481a6ea6f7a8f8a70f8de80b4d90031f7af6ce8536e.scope: Deactivated successfully.
Sep 30 17:46:57 compute-0 podman[142947]: 2025-09-30 17:46:57.054625242 +0000 UTC m=+0.624106252 container died 1b8faab8c1d19789dccca481a6ea6f7a8f8a70f8de80b4d90031f7af6ce8536e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:46:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d05d5fad2c6220feabc4a61d80b50d43e137f1f59995019037e4699141c44f7-merged.mount: Deactivated successfully.
Sep 30 17:46:57 compute-0 python3.9[143126]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:46:57 compute-0 podman[142947]: 2025-09-30 17:46:57.140709844 +0000 UTC m=+0.710190794 container remove 1b8faab8c1d19789dccca481a6ea6f7a8f8a70f8de80b4d90031f7af6ce8536e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_boyd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:46:57 compute-0 systemd[1]: libpod-conmon-1b8faab8c1d19789dccca481a6ea6f7a8f8a70f8de80b4d90031f7af6ce8536e.scope: Deactivated successfully.
Sep 30 17:46:57 compute-0 sudo[143122]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:57 compute-0 sudo[142756]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:57 compute-0 sudo[143164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:46:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v236: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:46:57 compute-0 sudo[143164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:46:57 compute-0 sudo[143164]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:57 compute-0 sudo[143215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 17:46:57 compute-0 sudo[143215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:46:57 compute-0 sudo[143314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krideafxowkkdgkcveleludoodsbmpnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254416.603514-838-19988164117279/AnsiballZ_copy.py'
Sep 30 17:46:57 compute-0 sudo[143314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:46:57 compute-0 python3.9[143317]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254416.603514-838-19988164117279/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=77fa443739ff2ff1b18352a001fa075b3190ad3c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:46:57 compute-0 sudo[143314]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:46:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:46:57.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:46:57 compute-0 podman[143360]: 2025-09-30 17:46:57.837231182 +0000 UTC m=+0.045204409 container create a8cc08ff83d3cbf118713d256f05dbc72fef26a42a66daf5baf9e35f4db0dee1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_sammet, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:46:57 compute-0 systemd[1]: Started libpod-conmon-a8cc08ff83d3cbf118713d256f05dbc72fef26a42a66daf5baf9e35f4db0dee1.scope.
Sep 30 17:46:57 compute-0 podman[143360]: 2025-09-30 17:46:57.817237591 +0000 UTC m=+0.025210858 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:46:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:46:57 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 17:46:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:46:57 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 17:46:57 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:46:57 compute-0 podman[143360]: 2025-09-30 17:46:57.9496235 +0000 UTC m=+0.157596807 container init a8cc08ff83d3cbf118713d256f05dbc72fef26a42a66daf5baf9e35f4db0dee1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_sammet, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:46:57 compute-0 podman[143360]: 2025-09-30 17:46:57.959712513 +0000 UTC m=+0.167685730 container start a8cc08ff83d3cbf118713d256f05dbc72fef26a42a66daf5baf9e35f4db0dee1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_sammet, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:46:57 compute-0 blissful_sammet[143401]: 167 167
Sep 30 17:46:57 compute-0 systemd[1]: libpod-a8cc08ff83d3cbf118713d256f05dbc72fef26a42a66daf5baf9e35f4db0dee1.scope: Deactivated successfully.
Sep 30 17:46:57 compute-0 conmon[143401]: conmon a8cc08ff83d3cbf11871 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a8cc08ff83d3cbf118713d256f05dbc72fef26a42a66daf5baf9e35f4db0dee1.scope/container/memory.events
Sep 30 17:46:57 compute-0 podman[143360]: 2025-09-30 17:46:57.972980279 +0000 UTC m=+0.180953526 container attach a8cc08ff83d3cbf118713d256f05dbc72fef26a42a66daf5baf9e35f4db0dee1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_sammet, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 17:46:57 compute-0 podman[143360]: 2025-09-30 17:46:57.973488092 +0000 UTC m=+0.181461369 container died a8cc08ff83d3cbf118713d256f05dbc72fef26a42a66daf5baf9e35f4db0dee1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_sammet, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Sep 30 17:46:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfc24d3f45175c6f243fc36a00b2ca3dc45707f55a0231f50eabbd144be4f3a4-merged.mount: Deactivated successfully.
Sep 30 17:46:58 compute-0 podman[143360]: 2025-09-30 17:46:58.041964416 +0000 UTC m=+0.249937633 container remove a8cc08ff83d3cbf118713d256f05dbc72fef26a42a66daf5baf9e35f4db0dee1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_sammet, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:46:58 compute-0 systemd[1]: libpod-conmon-a8cc08ff83d3cbf118713d256f05dbc72fef26a42a66daf5baf9e35f4db0dee1.scope: Deactivated successfully.
Sep 30 17:46:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:46:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:46:58.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:46:58 compute-0 podman[143426]: 2025-09-30 17:46:58.235772906 +0000 UTC m=+0.050934158 container create ef40c11fc5996c9e6357dc02c436866646b327c26321b6e466a0eb5c51a43c3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:46:58 compute-0 systemd[1]: Started libpod-conmon-ef40c11fc5996c9e6357dc02c436866646b327c26321b6e466a0eb5c51a43c3e.scope.
Sep 30 17:46:58 compute-0 podman[143426]: 2025-09-30 17:46:58.214257705 +0000 UTC m=+0.029418977 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:46:58 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:46:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18c8fbeffa9f208419ad0adbb41702d499e84db2418b923d8c7f6d7a04017772/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:46:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18c8fbeffa9f208419ad0adbb41702d499e84db2418b923d8c7f6d7a04017772/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:46:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18c8fbeffa9f208419ad0adbb41702d499e84db2418b923d8c7f6d7a04017772/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:46:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18c8fbeffa9f208419ad0adbb41702d499e84db2418b923d8c7f6d7a04017772/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:46:58 compute-0 podman[143426]: 2025-09-30 17:46:58.334710574 +0000 UTC m=+0.149871876 container init ef40c11fc5996c9e6357dc02c436866646b327c26321b6e466a0eb5c51a43c3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 17:46:58 compute-0 podman[143426]: 2025-09-30 17:46:58.344473878 +0000 UTC m=+0.159635150 container start ef40c11fc5996c9e6357dc02c436866646b327c26321b6e466a0eb5c51a43c3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:46:58 compute-0 podman[143426]: 2025-09-30 17:46:58.34840015 +0000 UTC m=+0.163561422 container attach ef40c11fc5996c9e6357dc02c436866646b327c26321b6e466a0eb5c51a43c3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 17:46:58 compute-0 awesome_chatelet[143442]: {
Sep 30 17:46:58 compute-0 awesome_chatelet[143442]:     "0": [
Sep 30 17:46:58 compute-0 awesome_chatelet[143442]:         {
Sep 30 17:46:58 compute-0 awesome_chatelet[143442]:             "devices": [
Sep 30 17:46:58 compute-0 awesome_chatelet[143442]:                 "/dev/loop3"
Sep 30 17:46:58 compute-0 awesome_chatelet[143442]:             ],
Sep 30 17:46:58 compute-0 awesome_chatelet[143442]:             "lv_name": "ceph_lv0",
Sep 30 17:46:58 compute-0 awesome_chatelet[143442]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:46:58 compute-0 awesome_chatelet[143442]:             "lv_size": "21470642176",
Sep 30 17:46:58 compute-0 awesome_chatelet[143442]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 17:46:58 compute-0 awesome_chatelet[143442]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:46:58 compute-0 awesome_chatelet[143442]:             "name": "ceph_lv0",
Sep 30 17:46:58 compute-0 awesome_chatelet[143442]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:46:58 compute-0 awesome_chatelet[143442]:             "tags": {
Sep 30 17:46:58 compute-0 awesome_chatelet[143442]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:46:58 compute-0 awesome_chatelet[143442]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:46:58 compute-0 awesome_chatelet[143442]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 17:46:58 compute-0 awesome_chatelet[143442]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 17:46:58 compute-0 awesome_chatelet[143442]:                 "ceph.cluster_name": "ceph",
Sep 30 17:46:58 compute-0 awesome_chatelet[143442]:                 "ceph.crush_device_class": "",
Sep 30 17:46:58 compute-0 awesome_chatelet[143442]:                 "ceph.encrypted": "0",
Sep 30 17:46:58 compute-0 awesome_chatelet[143442]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 17:46:58 compute-0 awesome_chatelet[143442]:                 "ceph.osd_id": "0",
Sep 30 17:46:58 compute-0 awesome_chatelet[143442]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 17:46:58 compute-0 awesome_chatelet[143442]:                 "ceph.type": "block",
Sep 30 17:46:58 compute-0 awesome_chatelet[143442]:                 "ceph.vdo": "0",
Sep 30 17:46:58 compute-0 awesome_chatelet[143442]:                 "ceph.with_tpm": "0"
Sep 30 17:46:58 compute-0 awesome_chatelet[143442]:             },
Sep 30 17:46:58 compute-0 awesome_chatelet[143442]:             "type": "block",
Sep 30 17:46:58 compute-0 awesome_chatelet[143442]:             "vg_name": "ceph_vg0"
Sep 30 17:46:58 compute-0 awesome_chatelet[143442]:         }
Sep 30 17:46:58 compute-0 awesome_chatelet[143442]:     ]
Sep 30 17:46:58 compute-0 awesome_chatelet[143442]: }
Sep 30 17:46:58 compute-0 ceph-mon[73755]: pgmap v236: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:46:58 compute-0 systemd[1]: libpod-ef40c11fc5996c9e6357dc02c436866646b327c26321b6e466a0eb5c51a43c3e.scope: Deactivated successfully.
Sep 30 17:46:58 compute-0 podman[143426]: 2025-09-30 17:46:58.678597552 +0000 UTC m=+0.493758864 container died ef40c11fc5996c9e6357dc02c436866646b327c26321b6e466a0eb5c51a43c3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:46:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-18c8fbeffa9f208419ad0adbb41702d499e84db2418b923d8c7f6d7a04017772-merged.mount: Deactivated successfully.
Sep 30 17:46:58 compute-0 podman[143426]: 2025-09-30 17:46:58.739387196 +0000 UTC m=+0.554548458 container remove ef40c11fc5996c9e6357dc02c436866646b327c26321b6e466a0eb5c51a43c3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Sep 30 17:46:58 compute-0 systemd[1]: libpod-conmon-ef40c11fc5996c9e6357dc02c436866646b327c26321b6e466a0eb5c51a43c3e.scope: Deactivated successfully.
Sep 30 17:46:58 compute-0 sudo[143215]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:46:58] "GET /metrics HTTP/1.1" 200 46510 "" "Prometheus/2.51.0"
Sep 30 17:46:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:46:58] "GET /metrics HTTP/1.1" 200 46510 "" "Prometheus/2.51.0"
Sep 30 17:46:58 compute-0 sudo[143464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:46:58 compute-0 sudo[143464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:46:58 compute-0 sudo[143464]: pam_unix(sudo:session): session closed for user root
Sep 30 17:46:58 compute-0 sudo[143489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 17:46:58 compute-0 sudo[143489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:46:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v237: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:46:59 compute-0 podman[143553]: 2025-09-30 17:46:59.321485792 +0000 UTC m=+0.041504932 container create 7e9a3900a04eb988e35112a26c753be2320b096c964e6231bf1673080c978fc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:46:59 compute-0 systemd[1]: Started libpod-conmon-7e9a3900a04eb988e35112a26c753be2320b096c964e6231bf1673080c978fc5.scope.
Sep 30 17:46:59 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:46:59 compute-0 podman[143553]: 2025-09-30 17:46:59.303554345 +0000 UTC m=+0.023573505 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:46:59 compute-0 podman[143553]: 2025-09-30 17:46:59.458620705 +0000 UTC m=+0.178639865 container init 7e9a3900a04eb988e35112a26c753be2320b096c964e6231bf1673080c978fc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_carson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:46:59 compute-0 podman[143553]: 2025-09-30 17:46:59.464825127 +0000 UTC m=+0.184844277 container start 7e9a3900a04eb988e35112a26c753be2320b096c964e6231bf1673080c978fc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Sep 30 17:46:59 compute-0 gifted_carson[143570]: 167 167
Sep 30 17:46:59 compute-0 systemd[1]: libpod-7e9a3900a04eb988e35112a26c753be2320b096c964e6231bf1673080c978fc5.scope: Deactivated successfully.
Sep 30 17:46:59 compute-0 podman[143553]: 2025-09-30 17:46:59.468955544 +0000 UTC m=+0.188974694 container attach 7e9a3900a04eb988e35112a26c753be2320b096c964e6231bf1673080c978fc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Sep 30 17:46:59 compute-0 podman[143553]: 2025-09-30 17:46:59.472580669 +0000 UTC m=+0.192599809 container died 7e9a3900a04eb988e35112a26c753be2320b096c964e6231bf1673080c978fc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_carson, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Sep 30 17:46:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4e9ea4ca42f8a38eb34b88256b131f17425437b8ff228731212ea9225094dec-merged.mount: Deactivated successfully.
Sep 30 17:46:59 compute-0 podman[143553]: 2025-09-30 17:46:59.514184033 +0000 UTC m=+0.234203173 container remove 7e9a3900a04eb988e35112a26c753be2320b096c964e6231bf1673080c978fc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid)
Sep 30 17:46:59 compute-0 systemd[1]: libpod-conmon-7e9a3900a04eb988e35112a26c753be2320b096c964e6231bf1673080c978fc5.scope: Deactivated successfully.
Sep 30 17:46:59 compute-0 podman[143594]: 2025-09-30 17:46:59.694876421 +0000 UTC m=+0.060620141 container create 0b10f49ab6780582a444b2379fbac19ace28e000b8a025668a8557c84513176e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_archimedes, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Sep 30 17:46:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:46:59 compute-0 systemd[1]: Started libpod-conmon-0b10f49ab6780582a444b2379fbac19ace28e000b8a025668a8557c84513176e.scope.
Sep 30 17:46:59 compute-0 podman[143594]: 2025-09-30 17:46:59.66761171 +0000 UTC m=+0.033355520 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:46:59 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:46:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3778ffbd3dca5e261621b59276c2517a4a20bb9621504eef43076d6ac330dfc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:46:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3778ffbd3dca5e261621b59276c2517a4a20bb9621504eef43076d6ac330dfc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:46:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3778ffbd3dca5e261621b59276c2517a4a20bb9621504eef43076d6ac330dfc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:46:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3778ffbd3dca5e261621b59276c2517a4a20bb9621504eef43076d6ac330dfc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:46:59 compute-0 podman[143594]: 2025-09-30 17:46:59.781867827 +0000 UTC m=+0.147611577 container init 0b10f49ab6780582a444b2379fbac19ace28e000b8a025668a8557c84513176e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_archimedes, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Sep 30 17:46:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:46:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:46:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:46:59.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:46:59 compute-0 podman[143594]: 2025-09-30 17:46:59.792596487 +0000 UTC m=+0.158340217 container start 0b10f49ab6780582a444b2379fbac19ace28e000b8a025668a8557c84513176e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_archimedes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 17:46:59 compute-0 podman[143594]: 2025-09-30 17:46:59.795907313 +0000 UTC m=+0.161651053 container attach 0b10f49ab6780582a444b2379fbac19ace28e000b8a025668a8557c84513176e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_archimedes, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Sep 30 17:47:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:47:00.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:00 compute-0 lvm[143685]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 17:47:00 compute-0 lvm[143685]: VG ceph_vg0 finished
Sep 30 17:47:00 compute-0 eloquent_archimedes[143610]: {}
Sep 30 17:47:00 compute-0 systemd[1]: libpod-0b10f49ab6780582a444b2379fbac19ace28e000b8a025668a8557c84513176e.scope: Deactivated successfully.
Sep 30 17:47:00 compute-0 podman[143594]: 2025-09-30 17:47:00.625576359 +0000 UTC m=+0.991320099 container died 0b10f49ab6780582a444b2379fbac19ace28e000b8a025668a8557c84513176e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Sep 30 17:47:00 compute-0 systemd[1]: libpod-0b10f49ab6780582a444b2379fbac19ace28e000b8a025668a8557c84513176e.scope: Consumed 1.285s CPU time.
Sep 30 17:47:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3778ffbd3dca5e261621b59276c2517a4a20bb9621504eef43076d6ac330dfc-merged.mount: Deactivated successfully.
Sep 30 17:47:00 compute-0 ceph-mon[73755]: pgmap v237: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:47:00 compute-0 podman[143594]: 2025-09-30 17:47:00.680059309 +0000 UTC m=+1.045803029 container remove 0b10f49ab6780582a444b2379fbac19ace28e000b8a025668a8557c84513176e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 17:47:00 compute-0 systemd[1]: libpod-conmon-0b10f49ab6780582a444b2379fbac19ace28e000b8a025668a8557c84513176e.scope: Deactivated successfully.
Sep 30 17:47:00 compute-0 sudo[143489]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:47:00 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:47:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:47:00 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:47:00 compute-0 sudo[143700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 17:47:00 compute-0 sudo[143700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:47:00 compute-0 sudo[143700]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:01 compute-0 sudo[143725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:47:01 compute-0 sudo[143725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:47:01 compute-0 sudo[143725]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v238: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:47:01 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:47:01 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:47:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:47:01.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:47:02.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:02 compute-0 ceph-mon[73755]: pgmap v238: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:47:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v239: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:47:03 compute-0 sshd-session[135109]: Connection closed by 192.168.122.30 port 56206
Sep 30 17:47:03 compute-0 sshd-session[135106]: pam_unix(sshd:session): session closed for user zuul
Sep 30 17:47:03 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Sep 30 17:47:03 compute-0 systemd[1]: session-49.scope: Consumed 29.081s CPU time.
Sep 30 17:47:03 compute-0 systemd-logind[811]: Session 49 logged out. Waiting for processes to exit.
Sep 30 17:47:03 compute-0 systemd-logind[811]: Removed session 49.
Sep 30 17:47:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:47:03.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:03 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 17:47:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:03 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Sep 30 17:47:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:03 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Sep 30 17:47:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:03 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Sep 30 17:47:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:03 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Sep 30 17:47:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:03 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Sep 30 17:47:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:03 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Sep 30 17:47:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:03 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:47:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:03 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:47:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:03 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:47:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:03 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Sep 30 17:47:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:03 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:47:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:03 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Sep 30 17:47:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:03 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Sep 30 17:47:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:03 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Sep 30 17:47:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:03 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Sep 30 17:47:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:03 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Sep 30 17:47:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:03 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Sep 30 17:47:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:03 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Sep 30 17:47:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:03 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Sep 30 17:47:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:03 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Sep 30 17:47:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:03 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Sep 30 17:47:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:03 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 17:47:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:03 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Sep 30 17:47:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:03 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 17:47:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:03 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Sep 30 17:47:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:03 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Sep 30 17:47:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:47:04.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:47:04 compute-0 ceph-mon[73755]: pgmap v239: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:47:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v240: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 1 op/s
Sep 30 17:47:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:05 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6dc000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:47:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:47:05.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:05 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d4001c40 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:47:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:47:06.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:06 compute-0 ceph-mon[73755]: pgmap v240: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 1 op/s
Sep 30 17:47:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:47:06.977Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:47:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:47:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v241: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 1 op/s
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_17:47:07
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'volumes', 'default.rgw.meta', 'vms', 'images', '.rgw.root', 'backups', '.nfs', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:47:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:47:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/174707 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 17:47:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:07 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6b0000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:47:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:47:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:47:07.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:47:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:47:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:07 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ac000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:47:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:47:08.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:47:08] "GET /metrics HTTP/1.1" 200 46515 "" "Prometheus/2.51.0"
Sep 30 17:47:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:47:08] "GET /metrics HTTP/1.1" 200 46515 "" "Prometheus/2.51.0"
Sep 30 17:47:08 compute-0 ceph-mon[73755]: pgmap v241: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 1 op/s
Sep 30 17:47:09 compute-0 sshd-session[143775]: Accepted publickey for zuul from 192.168.122.30 port 33510 ssh2: ECDSA SHA256:COqAmbdBUMx+UyDa5XAop4Bi6qvwvYhJQ16rCseuHkQ
Sep 30 17:47:09 compute-0 systemd-logind[811]: New session 50 of user zuul.
Sep 30 17:47:09 compute-0 systemd[1]: Started Session 50 of User zuul.
Sep 30 17:47:09 compute-0 sshd-session[143775]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 17:47:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v242: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:47:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:09 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6b8000fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:47:09 compute-0 sudo[143930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnyvqspwjzihuhkgdhjpzsxpqcrunwzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254429.1698544-24-156060797834522/AnsiballZ_file.py'
Sep 30 17:47:09 compute-0 sudo[143930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:47:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:47:09.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:09 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d4001c40 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:47:09 compute-0 python3.9[143932]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:47:09 compute-0 sudo[143930]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:09 compute-0 ceph-mon[73755]: pgmap v242: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:47:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:47:10.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:10 compute-0 sudo[144082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjvewnuopzasbnrbxcqdhdipkrhoqvuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254430.1430454-48-25057264876009/AnsiballZ_stat.py'
Sep 30 17:47:10 compute-0 sudo[144082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:10 compute-0 python3.9[144084]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:47:10 compute-0 sudo[144082]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v243: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:47:11 compute-0 sudo[144205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrszjqgxzprpibglknnlegmszxrlkmff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254430.1430454-48-25057264876009/AnsiballZ_copy.py'
Sep 30 17:47:11 compute-0 sudo[144205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:11 compute-0 python3.9[144208]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759254430.1430454-48-25057264876009/.source.conf _original_basename=ceph.conf follow=False checksum=e66796ec907df6d0d5e4b75f31c3de3e776363a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:47:11 compute-0 sudo[144205]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:11 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d4001c40 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:47:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:47:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:47:11.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:47:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:11 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d4001c40 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:47:11 compute-0 sudo[144359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyoyjbfrldmnhbzgeomazwimjtaivfbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254431.6385274-48-97015207605677/AnsiballZ_stat.py'
Sep 30 17:47:11 compute-0 sudo[144359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:12 compute-0 python3.9[144361]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:47:12 compute-0 sudo[144359]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:47:12.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:12 compute-0 ceph-mon[73755]: pgmap v243: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:47:12 compute-0 sudo[144482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohdqgdincxqfyrdziwuhlairxcmjbojb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254431.6385274-48-97015207605677/AnsiballZ_copy.py'
Sep 30 17:47:12 compute-0 sudo[144482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:12 compute-0 python3.9[144484]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759254431.6385274-48-97015207605677/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=7362c4454e8786984d45f5a884c5c867d1ac96a9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:47:12 compute-0 sudo[144482]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v244: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:47:13 compute-0 sshd-session[143778]: Connection closed by 192.168.122.30 port 33510
Sep 30 17:47:13 compute-0 sshd-session[143775]: pam_unix(sshd:session): session closed for user zuul
Sep 30 17:47:13 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Sep 30 17:47:13 compute-0 systemd[1]: session-50.scope: Consumed 2.653s CPU time.
Sep 30 17:47:13 compute-0 systemd-logind[811]: Session 50 logged out. Waiting for processes to exit.
Sep 30 17:47:13 compute-0 systemd-logind[811]: Removed session 50.
Sep 30 17:47:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:13 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6dc000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:47:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:47:13.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:13 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ac0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:47:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:47:14.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:14 compute-0 ceph-mon[73755]: pgmap v244: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:47:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:47:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v245: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:47:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:15 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ac0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:47:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:47:15.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:15 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d4003310 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:47:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:47:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:47:16.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:47:16 compute-0 ceph-mon[73755]: pgmap v245: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:47:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:47:16.978Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 17:47:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:47:16.979Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 17:47:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v246: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:47:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:17 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6dc000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:47:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:47:17.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:17 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ac0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:47:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:47:18.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:18 compute-0 ceph-mon[73755]: pgmap v246: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:47:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:47:18] "GET /metrics HTTP/1.1" 200 46515 "" "Prometheus/2.51.0"
Sep 30 17:47:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:47:18] "GET /metrics HTTP/1.1" 200 46515 "" "Prometheus/2.51.0"
Sep 30 17:47:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v247: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:47:19 compute-0 sshd-session[144516]: Accepted publickey for zuul from 192.168.122.30 port 47072 ssh2: ECDSA SHA256:COqAmbdBUMx+UyDa5XAop4Bi6qvwvYhJQ16rCseuHkQ
Sep 30 17:47:19 compute-0 systemd-logind[811]: New session 51 of user zuul.
Sep 30 17:47:19 compute-0 systemd[1]: Started Session 51 of User zuul.
Sep 30 17:47:19 compute-0 sshd-session[144516]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 17:47:19 compute-0 kernel: ganesha.nfsd[143771]: segfault at 50 ip 00007fa78689232e sp 00007fa73effc210 error 4 in libntirpc.so.5.8[7fa786877000+2c000] likely on CPU 0 (core 0, socket 0)
Sep 30 17:47:19 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Sep 30 17:47:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[141680]: 30/09/2025 17:47:19 : epoch 68dc178b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ac0016a0 fd 37 proxy ignored for local
Sep 30 17:47:19 compute-0 systemd[1]: Started Process Core Dump (PID 144573/UID 0).
Sep 30 17:47:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:47:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:47:19.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:47:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:47:20.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:47:20 compute-0 python3.9[144672]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:47:20 compute-0 ceph-mon[73755]: pgmap v247: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:47:20 compute-0 systemd-coredump[144574]: Process 141684 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 54:
                                                    #0  0x00007fa78689232e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Sep 30 17:47:20 compute-0 systemd[1]: systemd-coredump@5-144573-0.service: Deactivated successfully.
Sep 30 17:47:20 compute-0 systemd[1]: systemd-coredump@5-144573-0.service: Consumed 1.072s CPU time.
Sep 30 17:47:20 compute-0 podman[144705]: 2025-09-30 17:47:20.931255774 +0000 UTC m=+0.045971839 container died 02f29918286e27389d9523e2f3bcce74ba367615b31dd0e3787d4b1f78167b49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:47:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-a05ea930c20f6d83d89763907b014519879950d4c4e49c0258ef5cfa32f75050-merged.mount: Deactivated successfully.
Sep 30 17:47:20 compute-0 podman[144705]: 2025-09-30 17:47:20.97448522 +0000 UTC m=+0.089201275 container remove 02f29918286e27389d9523e2f3bcce74ba367615b31dd0e3787d4b1f78167b49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Sep 30 17:47:20 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Main process exited, code=exited, status=139/n/a
Sep 30 17:47:21 compute-0 sudo[144776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:47:21 compute-0 sudo[144776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:47:21 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Failed with result 'exit-code'.
Sep 30 17:47:21 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Consumed 1.247s CPU time.
Sep 30 17:47:21 compute-0 sudo[144776]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v248: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:47:21 compute-0 sudo[144896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phiqcbzeeqwhdsagoxdshzpedljhgwxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254441.0263612-48-194359464064525/AnsiballZ_file.py'
Sep 30 17:47:21 compute-0 sudo[144896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:21 compute-0 python3.9[144898]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:47:21 compute-0 sudo[144896]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:47:21.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-crash-compute-0[79187]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Sep 30 17:47:22 compute-0 sudo[145049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzsgnnsnitblczrzrufcijbuxivyhvzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254441.8061724-48-93460973753597/AnsiballZ_file.py'
Sep 30 17:47:22 compute-0 sudo[145049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:47:22.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:47:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:47:22 compute-0 python3.9[145051]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:47:22 compute-0 sudo[145049]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:22 compute-0 ceph-mon[73755]: pgmap v248: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:47:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:47:23 compute-0 python3.9[145201]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:47:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v249: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:47:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:47:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:47:23.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:47:23 compute-0 sudo[145353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjxsjmbeupcbzdvhgheyfuwwessnehyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254443.5645237-94-100814728953191/AnsiballZ_seboolean.py'
Sep 30 17:47:24 compute-0 sudo[145353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:24 compute-0 python3.9[145355]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Sep 30 17:47:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:47:24.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:24 compute-0 ceph-mon[73755]: pgmap v249: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:47:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:47:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v250: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:47:25 compute-0 sudo[145353]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/174725 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 17:47:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:47:25.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:25 compute-0 ceph-mon[73755]: pgmap v250: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:47:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:47:26.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:26 compute-0 sudo[145511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltxyunimrawlhqrpexmzxzgfysmeiuuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254446.6369872-114-275321143743672/AnsiballZ_setup.py'
Sep 30 17:47:26 compute-0 dbus-broker-launch[781]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Sep 30 17:47:26 compute-0 sudo[145511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:47:26.980Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:47:27 compute-0 python3.9[145513]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 17:47:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v251: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:47:27 compute-0 sudo[145511]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:47:27.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:27 compute-0 sudo[145597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avxwacbfaysmdvfioepkyittgdyjfugx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254446.6369872-114-275321143743672/AnsiballZ_dnf.py'
Sep 30 17:47:27 compute-0 sudo[145597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:28 compute-0 python3.9[145599]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 17:47:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:47:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:47:28.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:47:28 compute-0 ceph-mon[73755]: pgmap v251: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:47:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:47:28] "GET /metrics HTTP/1.1" 200 46509 "" "Prometheus/2.51.0"
Sep 30 17:47:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:47:28] "GET /metrics HTTP/1.1" 200 46509 "" "Prometheus/2.51.0"
Sep 30 17:47:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v252: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:47:29 compute-0 sudo[145597]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:47:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:47:29.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:47:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:47:30.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:47:30 compute-0 sudo[145752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrkaodmogaqmycukgnnwbyltocawxenh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254449.853215-138-153938313264100/AnsiballZ_systemd.py'
Sep 30 17:47:30 compute-0 sudo[145752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:30 compute-0 ceph-mon[73755]: pgmap v252: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:47:30 compute-0 python3.9[145754]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Sep 30 17:47:30 compute-0 sudo[145752]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:31 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Scheduled restart job, restart counter is at 6.
Sep 30 17:47:31 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:47:31 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Consumed 1.247s CPU time.
Sep 30 17:47:31 compute-0 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 17:47:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v253: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:47:31 compute-0 podman[145880]: 2025-09-30 17:47:31.447691231 +0000 UTC m=+0.040057715 container create 0529ffb5cb030d5c033521a9a1517a83434d1d37b59a640422637d81c7f55fa3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 17:47:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bffd7de9cb0fdc1881cfa2ffe97a9dccf33440ed0934b7372fe92a2196aa2cfb/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Sep 30 17:47:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bffd7de9cb0fdc1881cfa2ffe97a9dccf33440ed0934b7372fe92a2196aa2cfb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:47:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bffd7de9cb0fdc1881cfa2ffe97a9dccf33440ed0934b7372fe92a2196aa2cfb/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:47:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bffd7de9cb0fdc1881cfa2ffe97a9dccf33440ed0934b7372fe92a2196aa2cfb/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-0.syzvbh-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:47:31 compute-0 podman[145880]: 2025-09-30 17:47:31.508958927 +0000 UTC m=+0.101325461 container init 0529ffb5cb030d5c033521a9a1517a83434d1d37b59a640422637d81c7f55fa3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:47:31 compute-0 podman[145880]: 2025-09-30 17:47:31.513975988 +0000 UTC m=+0.106342482 container start 0529ffb5cb030d5c033521a9a1517a83434d1d37b59a640422637d81c7f55fa3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:47:31 compute-0 bash[145880]: 0529ffb5cb030d5c033521a9a1517a83434d1d37b59a640422637d81c7f55fa3
Sep 30 17:47:31 compute-0 podman[145880]: 2025-09-30 17:47:31.430720519 +0000 UTC m=+0.023087023 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:47:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:31 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Sep 30 17:47:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:31 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Sep 30 17:47:31 compute-0 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:47:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:31 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Sep 30 17:47:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:31 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Sep 30 17:47:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:31 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Sep 30 17:47:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:31 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Sep 30 17:47:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:31 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Sep 30 17:47:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:31 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 17:47:31 compute-0 sudo[146012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjykjmhoohpopdvtzgzwvtngyvfoxbut ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759254451.2105055-154-209561740484766/AnsiballZ_edpm_nftables_snippet.py'
Sep 30 17:47:31 compute-0 sudo[146012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:47:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:47:31.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:47:31 compute-0 python3[146014]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Sep 30 17:47:31 compute-0 sudo[146012]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:47:32.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:32 compute-0 sudo[146164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjkravnlwylfaczpdzcbzqiihpnvolkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254452.216474-172-88330010509690/AnsiballZ_file.py'
Sep 30 17:47:32 compute-0 sudo[146164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:32 compute-0 python3.9[146166]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:47:32 compute-0 sudo[146164]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:32 compute-0 ceph-mon[73755]: pgmap v253: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:47:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v254: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:47:33 compute-0 sudo[146317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqtailinqxfhfqmuuiypjdiowazhuwlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254453.038141-188-47111088713287/AnsiballZ_stat.py'
Sep 30 17:47:33 compute-0 sudo[146317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:33 compute-0 python3.9[146319]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:47:33 compute-0 sudo[146317]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:47:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:47:33.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:47:33 compute-0 sudo[146396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gikcfkewlwsfzdjrxhytdpdibpccgktb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254453.038141-188-47111088713287/AnsiballZ_file.py'
Sep 30 17:47:33 compute-0 sudo[146396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:34 compute-0 python3.9[146398]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:47:34 compute-0 sudo[146396]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:47:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:47:34.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:47:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:47:34 compute-0 sudo[146548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiicevyckusavdduxqkdaicmykcfjaca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254454.5236714-212-167317985655863/AnsiballZ_stat.py'
Sep 30 17:47:34 compute-0 sudo[146548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:34 compute-0 ceph-mon[73755]: pgmap v254: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:47:34 compute-0 python3.9[146550]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:47:34 compute-0 sudo[146548]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:35 compute-0 sudo[146627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwirnbyxgqyjflulncxtadleofoownhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254454.5236714-212-167317985655863/AnsiballZ_file.py'
Sep 30 17:47:35 compute-0 sudo[146627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v255: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:47:35 compute-0 python3.9[146629]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.iypdgded recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:47:35 compute-0 sudo[146627]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:47:35.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:35 compute-0 ceph-mon[73755]: pgmap v255: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:47:36 compute-0 sudo[146781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhckdlzytlmdrflsylbvbnhliiwqfqmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254455.856358-236-10303403632474/AnsiballZ_stat.py'
Sep 30 17:47:36 compute-0 sudo[146781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:47:36.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:36 compute-0 python3.9[146783]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:47:36 compute-0 sudo[146781]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:36 compute-0 sudo[146859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnyetfdggwlrxbyggmgrytbdookwbghc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254455.856358-236-10303403632474/AnsiballZ_file.py'
Sep 30 17:47:36 compute-0 sudo[146859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:36 compute-0 python3.9[146861]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:47:36 compute-0 sudo[146859]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:47:36.980Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:47:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:47:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:47:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v256: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:47:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:47:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:47:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:47:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:47:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:47:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:47:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:47:37 compute-0 sudo[147012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixhtlzuxthtgxhfvahnhqxpfdtlfgprg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254457.155503-262-21637103443839/AnsiballZ_command.py'
Sep 30 17:47:37 compute-0 sudo[147012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:37 compute-0 python3.9[147014]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:47:37 compute-0 sudo[147012]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:47:37.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:37 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 17:47:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:37 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 17:47:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:47:38.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:38 compute-0 sudo[147166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnbtlbghsimcuragjdxhhmlfgdcicrrr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759254458.0429091-278-176960401436507/AnsiballZ_edpm_nftables_from_files.py'
Sep 30 17:47:38 compute-0 sudo[147166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:38 compute-0 ceph-mon[73755]: pgmap v256: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:47:38 compute-0 python3[147168]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Sep 30 17:47:38 compute-0 sudo[147166]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:47:38] "GET /metrics HTTP/1.1" 200 46513 "" "Prometheus/2.51.0"
Sep 30 17:47:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:47:38] "GET /metrics HTTP/1.1" 200 46513 "" "Prometheus/2.51.0"
Sep 30 17:47:39 compute-0 sudo[147318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxetleemkrvppzsmchlmbsoyrlsnyuzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254458.9745004-294-262892878457336/AnsiballZ_stat.py'
Sep 30 17:47:39 compute-0 sudo[147318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v257: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:47:39 compute-0 python3.9[147320]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:47:39 compute-0 sudo[147318]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:47:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:47:39.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:40 compute-0 sudo[147445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnsuxhmxejnigirysdemtofmltcdwpdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254458.9745004-294-262892878457336/AnsiballZ_copy.py'
Sep 30 17:47:40 compute-0 sudo[147445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:47:40.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:40 compute-0 python3.9[147447]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254458.9745004-294-262892878457336/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:47:40 compute-0 sudo[147445]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:40 compute-0 ceph-mon[73755]: pgmap v257: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:47:41 compute-0 sudo[147597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yogechvbvaprxtpugkkmrjhomqxwyjgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254460.7082932-324-106035935089174/AnsiballZ_stat.py'
Sep 30 17:47:41 compute-0 sudo[147597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:41 compute-0 sudo[147600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:47:41 compute-0 sudo[147600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:47:41 compute-0 sudo[147600]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:41 compute-0 python3.9[147599]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:47:41 compute-0 sudo[147597]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v258: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:47:41 compute-0 sudo[147749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrpbhgwodcpafqcohizbzwfabckkfzhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254460.7082932-324-106035935089174/AnsiballZ_copy.py'
Sep 30 17:47:41 compute-0 sudo[147749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:47:41.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:41 compute-0 python3.9[147751]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254460.7082932-324-106035935089174/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:47:41 compute-0 sudo[147749]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:47:42.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:42 compute-0 sudo[147901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbcyojkzhemglkesxbzodeeqynuombot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254462.3049042-354-278219035245845/AnsiballZ_stat.py'
Sep 30 17:47:42 compute-0 sudo[147901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:42 compute-0 ceph-mon[73755]: pgmap v258: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:47:42 compute-0 python3.9[147903]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:47:42 compute-0 sudo[147901]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:43 compute-0 sudo[148026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjuxgdqmrzybcrlqzrxdkzylsesuophi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254462.3049042-354-278219035245845/AnsiballZ_copy.py'
Sep 30 17:47:43 compute-0 sudo[148026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v259: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:47:43 compute-0 python3.9[148028]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254462.3049042-354-278219035245845/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:47:43 compute-0 sudo[148026]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:47:43.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:43 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 17:47:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:43 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Sep 30 17:47:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:43 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Sep 30 17:47:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:43 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Sep 30 17:47:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:43 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Sep 30 17:47:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:43 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Sep 30 17:47:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:43 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Sep 30 17:47:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:43 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:47:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:43 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:47:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:43 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:47:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:44 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Sep 30 17:47:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:44 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:47:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:44 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Sep 30 17:47:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:44 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Sep 30 17:47:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:44 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Sep 30 17:47:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:44 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Sep 30 17:47:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:44 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Sep 30 17:47:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:44 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Sep 30 17:47:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:44 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Sep 30 17:47:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:44 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Sep 30 17:47:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:44 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Sep 30 17:47:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:44 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Sep 30 17:47:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:44 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Sep 30 17:47:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:44 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Sep 30 17:47:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:44 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 17:47:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:44 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Sep 30 17:47:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:44 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 17:47:44 compute-0 sudo[148192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzlkstusvgutqyzmkpbdxozgwqjjhfsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254463.8659992-384-56800974636675/AnsiballZ_stat.py'
Sep 30 17:47:44 compute-0 sudo[148192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:47:44.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:44 compute-0 python3.9[148194]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:47:44 compute-0 sudo[148192]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:44 compute-0 sudo[148317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqvdhfabidpahzesadesiwwuwbspcekp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254463.8659992-384-56800974636675/AnsiballZ_copy.py'
Sep 30 17:47:44 compute-0 sudo[148317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:47:44 compute-0 ceph-mon[73755]: pgmap v259: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:47:44 compute-0 python3.9[148319]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254463.8659992-384-56800974636675/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:47:44 compute-0 sudo[148317]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v260: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 1 op/s
Sep 30 17:47:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:45 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1274000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:47:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:47:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:47:45.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:47:45 compute-0 sudo[148474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uburflgtzzjymybwqtxsxjchvsjnbzbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254465.3854098-414-5962445190622/AnsiballZ_stat.py'
Sep 30 17:47:45 compute-0 sudo[148474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:45 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:47:45 compute-0 ceph-mon[73755]: pgmap v260: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 1 op/s
Sep 30 17:47:46 compute-0 python3.9[148476]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:47:46 compute-0 sudo[148474]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:47:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:47:46.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:47:46 compute-0 sudo[148599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvtbbivdqecyhrtpylnvokxepelempxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254465.3854098-414-5962445190622/AnsiballZ_copy.py'
Sep 30 17:47:46 compute-0 sudo[148599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:46 compute-0 python3.9[148601]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254465.3854098-414-5962445190622/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:47:46 compute-0 sudo[148599]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:47:46.981Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:47:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v261: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 1 op/s
Sep 30 17:47:47 compute-0 sudo[148752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isaqsyrkriyjlmkocxtdgauqfawdsfvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254467.324924-444-49769332690759/AnsiballZ_file.py'
Sep 30 17:47:47 compute-0 sudo[148752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/174747 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 17:47:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:47 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:47:47 compute-0 python3.9[148755]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:47:47 compute-0 sudo[148752]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:47:47.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:47 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12600016e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:47:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:47:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:47:48.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:47:48 compute-0 sudo[148905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzipxkzjbjzmeizsszalzpmhbjncmcaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254468.0793178-460-6661655956803/AnsiballZ_command.py'
Sep 30 17:47:48 compute-0 sudo[148905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:48 compute-0 ceph-mon[73755]: pgmap v261: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 1 op/s
Sep 30 17:47:48 compute-0 python3.9[148907]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:47:48 compute-0 sudo[148905]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:47:48] "GET /metrics HTTP/1.1" 200 46513 "" "Prometheus/2.51.0"
Sep 30 17:47:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:47:48] "GET /metrics HTTP/1.1" 200 46513 "" "Prometheus/2.51.0"
Sep 30 17:47:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v262: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:47:49 compute-0 sudo[149061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efzaupotbawacirxkidocklfvfaaqmcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254468.9665732-476-103074982943403/AnsiballZ_blockinfile.py'
Sep 30 17:47:49 compute-0 sudo[149061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:49 compute-0 python3.9[149063]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:47:49 compute-0 sudo[149061]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:49 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1254000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:47:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:47:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:47:49.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:49 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:47:50 compute-0 sudo[149214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogohanwmmrcndsrhuysygasvvcsajhsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254469.9738429-494-94808974368422/AnsiballZ_command.py'
Sep 30 17:47:50 compute-0 sudo[149214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:47:50.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:50 compute-0 python3.9[149216]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:47:50 compute-0 sudo[149214]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:50 compute-0 ceph-mon[73755]: pgmap v262: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:47:51 compute-0 sudo[149367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cflxmvkkhfspbcobrkfnawmgcizuhirx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254470.817391-510-243651018558202/AnsiballZ_stat.py'
Sep 30 17:47:51 compute-0 sudo[149367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:51 compute-0 python3.9[149369]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:47:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v263: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:47:51 compute-0 sudo[149367]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:51 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:47:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:47:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:47:51.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:47:51 compute-0 sudo[149523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jacrzorugjlfwgbplaczxxsgjnadzdvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254471.5976286-526-93530312705570/AnsiballZ_command.py'
Sep 30 17:47:51 compute-0 sudo[149523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:51 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12600021e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:47:52 compute-0 python3.9[149525]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:47:52 compute-0 sudo[149523]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:47:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:47:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:47:52.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:52 compute-0 ceph-mon[73755]: pgmap v263: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:47:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:47:52 compute-0 sudo[149678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxhxhmbnpiwtivzripntxghxclkjskxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254472.41915-542-57757149337135/AnsiballZ_file.py'
Sep 30 17:47:52 compute-0 sudo[149678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:52 compute-0 python3.9[149680]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:47:52 compute-0 sudo[149678]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v264: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:47:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:53 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1254001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:47:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:47:53.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:53 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:47:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:47:54.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:54 compute-0 python3.9[149832]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:47:54 compute-0 ceph-mon[73755]: pgmap v264: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:47:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:47:55 compute-0 sudo[149983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqkdqlzswdyngbgsmzvkysummfilcigw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254475.0264544-622-260845935937085/AnsiballZ_command.py'
Sep 30 17:47:55 compute-0 sudo[149983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v265: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:47:55 compute-0 python3.9[149985]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:74:f6:ca:ec" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:47:55 compute-0 ovs-vsctl[149987]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:74:f6:ca:ec external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Sep 30 17:47:55 compute-0 sudo[149983]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:55 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:47:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:47:55.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:55 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12600021e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:47:56 compute-0 sudo[150138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idbyvwkaebzojehnoueejnwddsdvkgre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254475.8920825-640-150073841450031/AnsiballZ_command.py'
Sep 30 17:47:56 compute-0 sudo[150138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:47:56.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:56 compute-0 python3.9[150140]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:47:56 compute-0 sudo[150138]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:56 compute-0 ceph-mon[73755]: pgmap v265: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:47:56 compute-0 sudo[150293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgnccsryyhlsmiscrjgdvkbrzywnkxmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254476.6883924-656-101034936048485/AnsiballZ_command.py'
Sep 30 17:47:56 compute-0 sudo[150293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:47:56.982Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:47:57 compute-0 python3.9[150295]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:47:57 compute-0 ovs-vsctl[150296]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Sep 30 17:47:57 compute-0 sudo[150293]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v266: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:47:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:57 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1254001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:47:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:47:57.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:57 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:47:57 compute-0 python3.9[150448]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:47:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:47:58.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:58 compute-0 ceph-mon[73755]: pgmap v266: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:47:58 compute-0 sudo[150600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjfsocbsvquarhbwszqmpidctjxmjima ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254478.4277725-690-186152379652905/AnsiballZ_file.py'
Sep 30 17:47:58 compute-0 sudo[150600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:47:58] "GET /metrics HTTP/1.1" 200 46513 "" "Prometheus/2.51.0"
Sep 30 17:47:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:47:58] "GET /metrics HTTP/1.1" 200 46513 "" "Prometheus/2.51.0"
Sep 30 17:47:58 compute-0 python3.9[150602]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:47:58 compute-0 sudo[150600]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v267: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:47:59 compute-0 sudo[150753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoppyrcbolqcbqezwgzuryeegbfarzxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254479.2276287-706-69206033262304/AnsiballZ_stat.py'
Sep 30 17:47:59 compute-0 sudo[150753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:47:59 compute-0 python3.9[150755]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:47:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:59 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:47:59 compute-0 sudo[150753]: pam_unix(sudo:session): session closed for user root
Sep 30 17:47:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:47:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:47:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:47:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:47:59.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:47:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:47:59 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12600021e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:47:59 compute-0 sudo[150832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ituzwsrjtjwdxhykvpsffynjqotmplpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254479.2276287-706-69206033262304/AnsiballZ_file.py'
Sep 30 17:47:59 compute-0 sudo[150832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:00 compute-0 python3.9[150834]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:48:00 compute-0 sudo[150832]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:48:00.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:00 compute-0 sudo[150984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eowrqjzjmzpnrjtaboiaobbsprthbsjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254480.2648978-706-270609090877069/AnsiballZ_stat.py'
Sep 30 17:48:00 compute-0 sudo[150984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:00 compute-0 ceph-mon[73755]: pgmap v267: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:48:00 compute-0 python3.9[150986]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:48:00 compute-0 sudo[150984]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:01 compute-0 sudo[151062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyixlkbxpffdtbbehcixtundffhlvudo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254480.2648978-706-270609090877069/AnsiballZ_file.py'
Sep 30 17:48:01 compute-0 sudo[151062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:01 compute-0 sudo[151065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:48:01 compute-0 sudo[151065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:48:01 compute-0 sudo[151065]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:01 compute-0 sudo[151090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Sep 30 17:48:01 compute-0 sudo[151090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:48:01 compute-0 python3.9[151064]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:48:01 compute-0 sudo[151062]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:01 compute-0 sudo[151126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:48:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v268: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:01 compute-0 sudo[151126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:48:01 compute-0 sudo[151126]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:01 compute-0 sudo[151090]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:48:01 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:48:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:48:01 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:48:01 compute-0 sudo[151186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:48:01 compute-0 sudo[151186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:48:01 compute-0 sudo[151186]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:01 compute-0 sudo[151211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 17:48:01 compute-0 sudo[151211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:48:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:48:01 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:48:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:48:01 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:48:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:01 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1254001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:48:01.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:01 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:02 compute-0 sudo[151211]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:48:02.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:02 compute-0 sudo[151395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tancaodsslrjpthjuazehunhscyqkwor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254482.0233748-752-202306489715473/AnsiballZ_file.py'
Sep 30 17:48:02 compute-0 sudo[151395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:48:02 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:48:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 17:48:02 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:48:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 17:48:02 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:48:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 17:48:02 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:48:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 17:48:02 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:48:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 17:48:02 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:48:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:48:02 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:48:02 compute-0 ceph-mon[73755]: pgmap v268: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:02 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:48:02 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:48:02 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:48:02 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:48:02 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:48:02 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:48:02 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:48:02 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:48:02 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:48:02 compute-0 sudo[151398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:48:02 compute-0 sudo[151398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:48:02 compute-0 sudo[151398]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:02 compute-0 python3.9[151397]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:48:02 compute-0 sudo[151395]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:02 compute-0 sudo[151423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 17:48:02 compute-0 sudo[151423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:48:02 compute-0 podman[151569]: 2025-09-30 17:48:02.945579504 +0000 UTC m=+0.037990106 container create 30fb726862bd9bfc553dc7a870715562e5f2a3738a175067cc3294aee88ca83b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:48:02 compute-0 systemd[1]: Started libpod-conmon-30fb726862bd9bfc553dc7a870715562e5f2a3738a175067cc3294aee88ca83b.scope.
Sep 30 17:48:03 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:48:03 compute-0 podman[151569]: 2025-09-30 17:48:02.926223952 +0000 UTC m=+0.018634574 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:48:03 compute-0 podman[151569]: 2025-09-30 17:48:03.034100642 +0000 UTC m=+0.126511244 container init 30fb726862bd9bfc553dc7a870715562e5f2a3738a175067cc3294aee88ca83b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_lichterman, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Sep 30 17:48:03 compute-0 podman[151569]: 2025-09-30 17:48:03.04093803 +0000 UTC m=+0.133348632 container start 30fb726862bd9bfc553dc7a870715562e5f2a3738a175067cc3294aee88ca83b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_lichterman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Sep 30 17:48:03 compute-0 podman[151569]: 2025-09-30 17:48:03.044214565 +0000 UTC m=+0.136625197 container attach 30fb726862bd9bfc553dc7a870715562e5f2a3738a175067cc3294aee88ca83b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_lichterman, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:48:03 compute-0 sudo[151658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsfabwilxzkybsnoivexnknogdgetmhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254482.8199816-768-177595145479422/AnsiballZ_stat.py'
Sep 30 17:48:03 compute-0 elastic_lichterman[151629]: 167 167
Sep 30 17:48:03 compute-0 systemd[1]: libpod-30fb726862bd9bfc553dc7a870715562e5f2a3738a175067cc3294aee88ca83b.scope: Deactivated successfully.
Sep 30 17:48:03 compute-0 podman[151569]: 2025-09-30 17:48:03.047473939 +0000 UTC m=+0.139884541 container died 30fb726862bd9bfc553dc7a870715562e5f2a3738a175067cc3294aee88ca83b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_lichterman, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Sep 30 17:48:03 compute-0 sudo[151658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-94cefba2e3c2a4239995b69f56a564f2724cfcf16bb60b434bb4e8ce013a4bb1-merged.mount: Deactivated successfully.
Sep 30 17:48:03 compute-0 podman[151569]: 2025-09-30 17:48:03.085473676 +0000 UTC m=+0.177884308 container remove 30fb726862bd9bfc553dc7a870715562e5f2a3738a175067cc3294aee88ca83b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_lichterman, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:48:03 compute-0 systemd[1]: libpod-conmon-30fb726862bd9bfc553dc7a870715562e5f2a3738a175067cc3294aee88ca83b.scope: Deactivated successfully.
Sep 30 17:48:03 compute-0 podman[151682]: 2025-09-30 17:48:03.243887848 +0000 UTC m=+0.046109648 container create a597024929760ff9af0d1a268d10a9a9aa6c4edfe37e8e39fd3a4525a95a1fce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_black, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:48:03 compute-0 python3.9[151663]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:48:03 compute-0 systemd[1]: Started libpod-conmon-a597024929760ff9af0d1a268d10a9a9aa6c4edfe37e8e39fd3a4525a95a1fce.scope.
Sep 30 17:48:03 compute-0 sudo[151658]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:03 compute-0 podman[151682]: 2025-09-30 17:48:03.223689734 +0000 UTC m=+0.025911534 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:48:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v269: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:48:03 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:48:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1162c2ff128a3b8fc6b3a6e03800b72d22561ad35cde7840de3881af119373c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:48:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1162c2ff128a3b8fc6b3a6e03800b72d22561ad35cde7840de3881af119373c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:48:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1162c2ff128a3b8fc6b3a6e03800b72d22561ad35cde7840de3881af119373c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:48:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1162c2ff128a3b8fc6b3a6e03800b72d22561ad35cde7840de3881af119373c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:48:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1162c2ff128a3b8fc6b3a6e03800b72d22561ad35cde7840de3881af119373c0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:48:03 compute-0 podman[151682]: 2025-09-30 17:48:03.345620879 +0000 UTC m=+0.147842679 container init a597024929760ff9af0d1a268d10a9a9aa6c4edfe37e8e39fd3a4525a95a1fce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_black, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Sep 30 17:48:03 compute-0 podman[151682]: 2025-09-30 17:48:03.354640813 +0000 UTC m=+0.156862603 container start a597024929760ff9af0d1a268d10a9a9aa6c4edfe37e8e39fd3a4525a95a1fce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Sep 30 17:48:03 compute-0 podman[151682]: 2025-09-30 17:48:03.357969619 +0000 UTC m=+0.160191409 container attach a597024929760ff9af0d1a268d10a9a9aa6c4edfe37e8e39fd3a4525a95a1fce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_black, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Sep 30 17:48:03 compute-0 sudo[151781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibirjkwwgycywskptawmkdxpmsomwqnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254482.8199816-768-177595145479422/AnsiballZ_file.py'
Sep 30 17:48:03 compute-0 sudo[151781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:03 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:48:03 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:48:03 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Sep 30 17:48:03 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:48:03.545298) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 17:48:03 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Sep 30 17:48:03 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254483545416, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1087, "num_deletes": 251, "total_data_size": 1883865, "memory_usage": 1910592, "flush_reason": "Manual Compaction"}
Sep 30 17:48:03 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Sep 30 17:48:03 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254483556761, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1840436, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11732, "largest_seqno": 12817, "table_properties": {"data_size": 1835193, "index_size": 2703, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11019, "raw_average_key_size": 19, "raw_value_size": 1824684, "raw_average_value_size": 3190, "num_data_blocks": 122, "num_entries": 572, "num_filter_entries": 572, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759254384, "oldest_key_time": 1759254384, "file_creation_time": 1759254483, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Sep 30 17:48:03 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 11640 microseconds, and 5648 cpu microseconds.
Sep 30 17:48:03 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 17:48:03 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:48:03.556949) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1840436 bytes OK
Sep 30 17:48:03 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:48:03.557009) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Sep 30 17:48:03 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:48:03.559000) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Sep 30 17:48:03 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:48:03.559016) EVENT_LOG_v1 {"time_micros": 1759254483559010, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 17:48:03 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:48:03.559035) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 17:48:03 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 1878873, prev total WAL file size 1878873, number of live WAL files 2.
Sep 30 17:48:03 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 17:48:03 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:48:03.559999) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Sep 30 17:48:03 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 17:48:03 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1797KB)], [29(10MB)]
Sep 30 17:48:03 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254483560090, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 13032146, "oldest_snapshot_seqno": -1}
Sep 30 17:48:03 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4030 keys, 10321957 bytes, temperature: kUnknown
Sep 30 17:48:03 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254483609513, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 10321957, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10292771, "index_size": 18004, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 102714, "raw_average_key_size": 25, "raw_value_size": 10217033, "raw_average_value_size": 2535, "num_data_blocks": 762, "num_entries": 4030, "num_filter_entries": 4030, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759254483, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Sep 30 17:48:03 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 17:48:03 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:48:03.609797) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 10321957 bytes
Sep 30 17:48:03 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:48:03.610972) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 263.2 rd, 208.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 10.7 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(12.7) write-amplify(5.6) OK, records in: 4548, records dropped: 518 output_compression: NoCompression
Sep 30 17:48:03 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:48:03.610991) EVENT_LOG_v1 {"time_micros": 1759254483610981, "job": 12, "event": "compaction_finished", "compaction_time_micros": 49506, "compaction_time_cpu_micros": 23479, "output_level": 6, "num_output_files": 1, "total_output_size": 10321957, "num_input_records": 4548, "num_output_records": 4030, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 17:48:03 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 17:48:03 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254483611377, "job": 12, "event": "table_file_deletion", "file_number": 31}
Sep 30 17:48:03 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 17:48:03 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254483612944, "job": 12, "event": "table_file_deletion", "file_number": 29}
Sep 30 17:48:03 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:48:03.559861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:48:03 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:48:03.613039) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:48:03 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:48:03.613046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:48:03 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:48:03.613048) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:48:03 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:48:03.613050) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:48:03 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:48:03.613051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:48:03 compute-0 optimistic_black[151700]: --> passed data devices: 0 physical, 1 LVM
Sep 30 17:48:03 compute-0 optimistic_black[151700]: --> All data devices are unavailable
Sep 30 17:48:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:03 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:03 compute-0 systemd[1]: libpod-a597024929760ff9af0d1a268d10a9a9aa6c4edfe37e8e39fd3a4525a95a1fce.scope: Deactivated successfully.
Sep 30 17:48:03 compute-0 python3.9[151783]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:48:03 compute-0 podman[151795]: 2025-09-30 17:48:03.779583454 +0000 UTC m=+0.031095588 container died a597024929760ff9af0d1a268d10a9a9aa6c4edfe37e8e39fd3a4525a95a1fce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_black, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Sep 30 17:48:03 compute-0 sudo[151781]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-1162c2ff128a3b8fc6b3a6e03800b72d22561ad35cde7840de3881af119373c0-merged.mount: Deactivated successfully.
Sep 30 17:48:03 compute-0 podman[151795]: 2025-09-30 17:48:03.828664958 +0000 UTC m=+0.080177092 container remove a597024929760ff9af0d1a268d10a9a9aa6c4edfe37e8e39fd3a4525a95a1fce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_black, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Sep 30 17:48:03 compute-0 systemd[1]: libpod-conmon-a597024929760ff9af0d1a268d10a9a9aa6c4edfe37e8e39fd3a4525a95a1fce.scope: Deactivated successfully.
Sep 30 17:48:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:48:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:48:03.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:48:03 compute-0 sudo[151423]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:03 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12600021e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:03 compute-0 sudo[151834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:48:03 compute-0 sudo[151834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:48:03 compute-0 sudo[151834]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:04 compute-0 sudo[151859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 17:48:04 compute-0 sudo[151859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:48:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:48:04.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:04 compute-0 sudo[152051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbjnlvhhzbfuqqtonbzrnwdztnnkztds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254484.12574-792-144339937648424/AnsiballZ_stat.py'
Sep 30 17:48:04 compute-0 sudo[152051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:04 compute-0 podman[152053]: 2025-09-30 17:48:04.48399681 +0000 UTC m=+0.043800108 container create d4085d703080f6c59e03a972607ef2e6bdb3c3f7cfb9872bc6158a4f641eca4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:48:04 compute-0 systemd[1]: Started libpod-conmon-d4085d703080f6c59e03a972607ef2e6bdb3c3f7cfb9872bc6158a4f641eca4d.scope.
Sep 30 17:48:04 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:48:04 compute-0 ceph-mon[73755]: pgmap v269: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:48:04 compute-0 podman[152053]: 2025-09-30 17:48:04.464803391 +0000 UTC m=+0.024606709 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:48:04 compute-0 podman[152053]: 2025-09-30 17:48:04.56643786 +0000 UTC m=+0.126241188 container init d4085d703080f6c59e03a972607ef2e6bdb3c3f7cfb9872bc6158a4f641eca4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wu, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Sep 30 17:48:04 compute-0 podman[152053]: 2025-09-30 17:48:04.574429187 +0000 UTC m=+0.134232485 container start d4085d703080f6c59e03a972607ef2e6bdb3c3f7cfb9872bc6158a4f641eca4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wu, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Sep 30 17:48:04 compute-0 condescending_wu[152070]: 167 167
Sep 30 17:48:04 compute-0 podman[152053]: 2025-09-30 17:48:04.58069436 +0000 UTC m=+0.140497658 container attach d4085d703080f6c59e03a972607ef2e6bdb3c3f7cfb9872bc6158a4f641eca4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wu, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:48:04 compute-0 podman[152053]: 2025-09-30 17:48:04.581070559 +0000 UTC m=+0.140873857 container died d4085d703080f6c59e03a972607ef2e6bdb3c3f7cfb9872bc6158a4f641eca4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wu, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Sep 30 17:48:04 compute-0 systemd[1]: libpod-d4085d703080f6c59e03a972607ef2e6bdb3c3f7cfb9872bc6158a4f641eca4d.scope: Deactivated successfully.
Sep 30 17:48:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-56633a11f93d5fe2114b0f70eaa1585a3e1c326a0ed7856d7f97a7e8d4d2f806-merged.mount: Deactivated successfully.
Sep 30 17:48:04 compute-0 podman[152053]: 2025-09-30 17:48:04.613475061 +0000 UTC m=+0.173278359 container remove d4085d703080f6c59e03a972607ef2e6bdb3c3f7cfb9872bc6158a4f641eca4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wu, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:48:04 compute-0 systemd[1]: libpod-conmon-d4085d703080f6c59e03a972607ef2e6bdb3c3f7cfb9872bc6158a4f641eca4d.scope: Deactivated successfully.
Sep 30 17:48:04 compute-0 python3.9[152055]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:48:04 compute-0 sudo[152051]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:48:04 compute-0 podman[152109]: 2025-09-30 17:48:04.77600233 +0000 UTC m=+0.040519233 container create a3149bd601e2a1c426547d0c466b933efe605c8472386b137f8d9612188a5d26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_mahavira, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Sep 30 17:48:04 compute-0 systemd[1]: Started libpod-conmon-a3149bd601e2a1c426547d0c466b933efe605c8472386b137f8d9612188a5d26.scope.
Sep 30 17:48:04 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:48:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58fc674ff2276da2d5feef52b57e1c7177cdbae0ef5f050f4e106c25ca88da64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:48:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58fc674ff2276da2d5feef52b57e1c7177cdbae0ef5f050f4e106c25ca88da64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:48:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58fc674ff2276da2d5feef52b57e1c7177cdbae0ef5f050f4e106c25ca88da64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:48:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58fc674ff2276da2d5feef52b57e1c7177cdbae0ef5f050f4e106c25ca88da64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:48:04 compute-0 podman[152109]: 2025-09-30 17:48:04.757934611 +0000 UTC m=+0.022451534 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:48:04 compute-0 podman[152109]: 2025-09-30 17:48:04.863031819 +0000 UTC m=+0.127548752 container init a3149bd601e2a1c426547d0c466b933efe605c8472386b137f8d9612188a5d26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_mahavira, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 17:48:04 compute-0 podman[152109]: 2025-09-30 17:48:04.869813335 +0000 UTC m=+0.134330238 container start a3149bd601e2a1c426547d0c466b933efe605c8472386b137f8d9612188a5d26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_mahavira, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:48:04 compute-0 podman[152109]: 2025-09-30 17:48:04.874066605 +0000 UTC m=+0.138583598 container attach a3149bd601e2a1c426547d0c466b933efe605c8472386b137f8d9612188a5d26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Sep 30 17:48:04 compute-0 sudo[152191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtelrhmhphmjgvvifrwjeagxxyzlkysv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254484.12574-792-144339937648424/AnsiballZ_file.py'
Sep 30 17:48:04 compute-0 sudo[152191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:05 compute-0 python3.9[152193]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:48:05 compute-0 sudo[152191]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:05 compute-0 trusting_mahavira[152160]: {
Sep 30 17:48:05 compute-0 trusting_mahavira[152160]:     "0": [
Sep 30 17:48:05 compute-0 trusting_mahavira[152160]:         {
Sep 30 17:48:05 compute-0 trusting_mahavira[152160]:             "devices": [
Sep 30 17:48:05 compute-0 trusting_mahavira[152160]:                 "/dev/loop3"
Sep 30 17:48:05 compute-0 trusting_mahavira[152160]:             ],
Sep 30 17:48:05 compute-0 trusting_mahavira[152160]:             "lv_name": "ceph_lv0",
Sep 30 17:48:05 compute-0 trusting_mahavira[152160]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:48:05 compute-0 trusting_mahavira[152160]:             "lv_size": "21470642176",
Sep 30 17:48:05 compute-0 trusting_mahavira[152160]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 17:48:05 compute-0 trusting_mahavira[152160]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:48:05 compute-0 trusting_mahavira[152160]:             "name": "ceph_lv0",
Sep 30 17:48:05 compute-0 trusting_mahavira[152160]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:48:05 compute-0 trusting_mahavira[152160]:             "tags": {
Sep 30 17:48:05 compute-0 trusting_mahavira[152160]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:48:05 compute-0 trusting_mahavira[152160]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:48:05 compute-0 trusting_mahavira[152160]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 17:48:05 compute-0 trusting_mahavira[152160]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 17:48:05 compute-0 trusting_mahavira[152160]:                 "ceph.cluster_name": "ceph",
Sep 30 17:48:05 compute-0 trusting_mahavira[152160]:                 "ceph.crush_device_class": "",
Sep 30 17:48:05 compute-0 trusting_mahavira[152160]:                 "ceph.encrypted": "0",
Sep 30 17:48:05 compute-0 trusting_mahavira[152160]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 17:48:05 compute-0 trusting_mahavira[152160]:                 "ceph.osd_id": "0",
Sep 30 17:48:05 compute-0 trusting_mahavira[152160]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 17:48:05 compute-0 trusting_mahavira[152160]:                 "ceph.type": "block",
Sep 30 17:48:05 compute-0 trusting_mahavira[152160]:                 "ceph.vdo": "0",
Sep 30 17:48:05 compute-0 trusting_mahavira[152160]:                 "ceph.with_tpm": "0"
Sep 30 17:48:05 compute-0 trusting_mahavira[152160]:             },
Sep 30 17:48:05 compute-0 trusting_mahavira[152160]:             "type": "block",
Sep 30 17:48:05 compute-0 trusting_mahavira[152160]:             "vg_name": "ceph_vg0"
Sep 30 17:48:05 compute-0 trusting_mahavira[152160]:         }
Sep 30 17:48:05 compute-0 trusting_mahavira[152160]:     ]
Sep 30 17:48:05 compute-0 trusting_mahavira[152160]: }
Sep 30 17:48:05 compute-0 systemd[1]: libpod-a3149bd601e2a1c426547d0c466b933efe605c8472386b137f8d9612188a5d26.scope: Deactivated successfully.
Sep 30 17:48:05 compute-0 podman[152109]: 2025-09-30 17:48:05.167826451 +0000 UTC m=+0.432343354 container died a3149bd601e2a1c426547d0c466b933efe605c8472386b137f8d9612188a5d26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_mahavira, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Sep 30 17:48:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-58fc674ff2276da2d5feef52b57e1c7177cdbae0ef5f050f4e106c25ca88da64-merged.mount: Deactivated successfully.
Sep 30 17:48:05 compute-0 podman[152109]: 2025-09-30 17:48:05.205408306 +0000 UTC m=+0.469925209 container remove a3149bd601e2a1c426547d0c466b933efe605c8472386b137f8d9612188a5d26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_mahavira, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:48:05 compute-0 systemd[1]: libpod-conmon-a3149bd601e2a1c426547d0c466b933efe605c8472386b137f8d9612188a5d26.scope: Deactivated successfully.
Sep 30 17:48:05 compute-0 sudo[151859]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:05 compute-0 sudo[152234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:48:05 compute-0 sudo[152234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:48:05 compute-0 sudo[152234]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v270: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:05 compute-0 sudo[152259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 17:48:05 compute-0 sudo[152259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:48:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:05 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1254002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:05 compute-0 podman[152426]: 2025-09-30 17:48:05.730587849 +0000 UTC m=+0.040638325 container create 07c68c8937e635e9952fbbd38d3c33eac4adef471aa5e437029ba469d2e38c06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_wescoff, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Sep 30 17:48:05 compute-0 sudo[152466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfyaudushjriymfuvwqdzdzuyqtakntl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254485.449322-816-67585407387799/AnsiballZ_systemd.py'
Sep 30 17:48:05 compute-0 sudo[152466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:05 compute-0 systemd[1]: Started libpod-conmon-07c68c8937e635e9952fbbd38d3c33eac4adef471aa5e437029ba469d2e38c06.scope.
Sep 30 17:48:05 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:48:05 compute-0 podman[152426]: 2025-09-30 17:48:05.713787513 +0000 UTC m=+0.023837979 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:48:05 compute-0 podman[152426]: 2025-09-30 17:48:05.824310882 +0000 UTC m=+0.134361348 container init 07c68c8937e635e9952fbbd38d3c33eac4adef471aa5e437029ba469d2e38c06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_wescoff, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Sep 30 17:48:05 compute-0 podman[152426]: 2025-09-30 17:48:05.83307467 +0000 UTC m=+0.143125136 container start 07c68c8937e635e9952fbbd38d3c33eac4adef471aa5e437029ba469d2e38c06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_wescoff, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid)
Sep 30 17:48:05 compute-0 podman[152426]: 2025-09-30 17:48:05.836418667 +0000 UTC m=+0.146469133 container attach 07c68c8937e635e9952fbbd38d3c33eac4adef471aa5e437029ba469d2e38c06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_wescoff, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Sep 30 17:48:05 compute-0 dreamy_wescoff[152471]: 167 167
Sep 30 17:48:05 compute-0 podman[152426]: 2025-09-30 17:48:05.839930328 +0000 UTC m=+0.149980794 container died 07c68c8937e635e9952fbbd38d3c33eac4adef471aa5e437029ba469d2e38c06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_wescoff, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 17:48:05 compute-0 systemd[1]: libpod-07c68c8937e635e9952fbbd38d3c33eac4adef471aa5e437029ba469d2e38c06.scope: Deactivated successfully.
Sep 30 17:48:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:48:05.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4127d45922ca3103f7d9bb5f64e453db18e0a9403ce011e681f6acb640852ca-merged.mount: Deactivated successfully.
Sep 30 17:48:05 compute-0 podman[152426]: 2025-09-30 17:48:05.876105597 +0000 UTC m=+0.186156063 container remove 07c68c8937e635e9952fbbd38d3c33eac4adef471aa5e437029ba469d2e38c06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Sep 30 17:48:05 compute-0 systemd[1]: libpod-conmon-07c68c8937e635e9952fbbd38d3c33eac4adef471aa5e437029ba469d2e38c06.scope: Deactivated successfully.
Sep 30 17:48:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:05 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:06 compute-0 podman[152498]: 2025-09-30 17:48:06.089723672 +0000 UTC m=+0.046765715 container create af419a97258ff3a6188a679d134225bc3f1fd0341cb5c54c342e7ef6202dbc4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:48:06 compute-0 python3.9[152469]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:48:06 compute-0 systemd[1]: Reloading.
Sep 30 17:48:06 compute-0 podman[152498]: 2025-09-30 17:48:06.070198255 +0000 UTC m=+0.027240318 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:48:06 compute-0 systemd-sysv-generator[152543]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:48:06 compute-0 systemd-rc-local-generator[152538]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:48:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:48:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:48:06.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:48:06 compute-0 systemd[1]: Started libpod-conmon-af419a97258ff3a6188a679d134225bc3f1fd0341cb5c54c342e7ef6202dbc4d.scope.
Sep 30 17:48:06 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:48:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e291d64fb67ffd53966f19a87199befc93c58b26e30061f9ca1fb361daa0138/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:48:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e291d64fb67ffd53966f19a87199befc93c58b26e30061f9ca1fb361daa0138/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:48:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e291d64fb67ffd53966f19a87199befc93c58b26e30061f9ca1fb361daa0138/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:48:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e291d64fb67ffd53966f19a87199befc93c58b26e30061f9ca1fb361daa0138/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:48:06 compute-0 podman[152498]: 2025-09-30 17:48:06.457228402 +0000 UTC m=+0.414270475 container init af419a97258ff3a6188a679d134225bc3f1fd0341cb5c54c342e7ef6202dbc4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 17:48:06 compute-0 podman[152498]: 2025-09-30 17:48:06.466036371 +0000 UTC m=+0.423078404 container start af419a97258ff3a6188a679d134225bc3f1fd0341cb5c54c342e7ef6202dbc4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_burnell, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:48:06 compute-0 podman[152498]: 2025-09-30 17:48:06.470193469 +0000 UTC m=+0.427235532 container attach af419a97258ff3a6188a679d134225bc3f1fd0341cb5c54c342e7ef6202dbc4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_burnell, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Sep 30 17:48:06 compute-0 sudo[152466]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:06 compute-0 ceph-mon[73755]: pgmap v270: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:48:06.983Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:48:07 compute-0 sudo[152770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhtugzgjhtrdhxahinyarstnhzoygnmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254486.8224425-832-166247398006562/AnsiballZ_stat.py'
Sep 30 17:48:07 compute-0 sudo[152770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:07 compute-0 lvm[152779]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 17:48:07 compute-0 lvm[152779]: VG ceph_vg0 finished
Sep 30 17:48:07 compute-0 naughty_burnell[152551]: {}
Sep 30 17:48:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:48:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:48:07 compute-0 systemd[1]: libpod-af419a97258ff3a6188a679d134225bc3f1fd0341cb5c54c342e7ef6202dbc4d.scope: Deactivated successfully.
Sep 30 17:48:07 compute-0 systemd[1]: libpod-af419a97258ff3a6188a679d134225bc3f1fd0341cb5c54c342e7ef6202dbc4d.scope: Consumed 1.310s CPU time.
Sep 30 17:48:07 compute-0 podman[152498]: 2025-09-30 17:48:07.288528701 +0000 UTC m=+1.245570894 container died af419a97258ff3a6188a679d134225bc3f1fd0341cb5c54c342e7ef6202dbc4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 17:48:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e291d64fb67ffd53966f19a87199befc93c58b26e30061f9ca1fb361daa0138-merged.mount: Deactivated successfully.
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v271: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:07 compute-0 python3.9[152773]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:48:07 compute-0 podman[152498]: 2025-09-30 17:48:07.344456952 +0000 UTC m=+1.301499005 container remove af419a97258ff3a6188a679d134225bc3f1fd0341cb5c54c342e7ef6202dbc4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_burnell, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_17:48:07
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', '.mgr', '.rgw.root', 'default.rgw.control', '.nfs', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', 'images']
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 17:48:07 compute-0 systemd[1]: libpod-conmon-af419a97258ff3a6188a679d134225bc3f1fd0341cb5c54c342e7ef6202dbc4d.scope: Deactivated successfully.
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 17:48:07 compute-0 sudo[152770]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:48:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:48:07 compute-0 sudo[152259]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:48:07 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:48:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:48:07 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:48:07 compute-0 sudo[152818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 17:48:07 compute-0 sudo[152818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:48:07 compute-0 sudo[152818]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:48:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:48:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:48:07 compute-0 sudo[152895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmlhwjvluntvisiznihsdibbiwvvduso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254486.8224425-832-166247398006562/AnsiballZ_file.py'
Sep 30 17:48:07 compute-0 sudo[152895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:07 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:07 compute-0 python3.9[152897]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:48:07 compute-0 sudo[152895]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:48:07.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:07 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12600021e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:48:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:48:08.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:48:08 compute-0 sudo[153048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqmhyqgqfcjrpumoxksappfrbsbrvmys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254488.192772-856-40756415123902/AnsiballZ_stat.py'
Sep 30 17:48:08 compute-0 sudo[153048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:08 compute-0 ceph-mon[73755]: pgmap v271: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:08 compute-0 python3.9[153050]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:48:08 compute-0 sudo[153048]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:48:08] "GET /metrics HTTP/1.1" 200 46510 "" "Prometheus/2.51.0"
Sep 30 17:48:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:48:08] "GET /metrics HTTP/1.1" 200 46510 "" "Prometheus/2.51.0"
Sep 30 17:48:08 compute-0 sudo[153126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acyyhvllsfzbhwmbitrvqqtolmobkepj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254488.192772-856-40756415123902/AnsiballZ_file.py'
Sep 30 17:48:08 compute-0 sudo[153126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:09 compute-0 python3.9[153128]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:48:09 compute-0 sudo[153126]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v272: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:48:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:09 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1254002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:09 compute-0 sudo[153280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjwfqsffaxfbbhvfxloyqhdxhsutxpjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254489.4923947-880-223388021632759/AnsiballZ_systemd.py'
Sep 30 17:48:09 compute-0 sudo[153280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:48:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:48:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:48:09.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:48:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:09 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:10 compute-0 python3.9[153282]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:48:10 compute-0 systemd[1]: Reloading.
Sep 30 17:48:10 compute-0 systemd-rc-local-generator[153310]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:48:10 compute-0 systemd-sysv-generator[153314]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:48:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:48:10.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:10 compute-0 systemd[1]: Starting Create netns directory...
Sep 30 17:48:10 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Sep 30 17:48:10 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Sep 30 17:48:10 compute-0 systemd[1]: Finished Create netns directory.
Sep 30 17:48:10 compute-0 sudo[153280]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:10 compute-0 ceph-mon[73755]: pgmap v272: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:48:11 compute-0 sudo[153473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stcmamyvzqrnjccnkczfsryzicjqsbyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254490.991264-900-9027130067831/AnsiballZ_file.py'
Sep 30 17:48:11 compute-0 sudo[153473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v273: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:11 compute-0 python3.9[153475]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:48:11 compute-0 sudo[153473]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:11 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:48:11.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:11 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12600021e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:12 compute-0 sudo[153627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrqzcyaikzwwqrrinjnbtfxleaqfjkrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254491.785377-916-209689547632116/AnsiballZ_stat.py'
Sep 30 17:48:12 compute-0 sudo[153627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:12 compute-0 python3.9[153629]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:48:12 compute-0 sudo[153627]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:48:12.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:12 compute-0 sudo[153750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qftgllpwsnatajbxjbphidlcpeqplzod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254491.785377-916-209689547632116/AnsiballZ_copy.py'
Sep 30 17:48:12 compute-0 sudo[153750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:12 compute-0 ceph-mon[73755]: pgmap v273: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:12 compute-0 python3.9[153752]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759254491.785377-916-209689547632116/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:48:12 compute-0 sudo[153750]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v274: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:48:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:13 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1254002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:13 compute-0 sudo[153904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhxmdhdtnhvzctqbmmsbarudqyrnbnlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254493.4967754-950-108567770244338/AnsiballZ_file.py'
Sep 30 17:48:13 compute-0 sudo[153904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:48:13.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:13 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:13 compute-0 python3.9[153906]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:48:13 compute-0 sudo[153904]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:48:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:48:14.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:48:14 compute-0 sudo[154056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlkkhlrpiikrtfriweciaeopvrgxfeub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254494.3843098-966-257734568354976/AnsiballZ_stat.py'
Sep 30 17:48:14 compute-0 sudo[154056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:14 compute-0 ceph-mon[73755]: pgmap v274: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:48:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:48:14 compute-0 python3.9[154058]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:48:14 compute-0 sudo[154056]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:15 compute-0 sudo[154179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llgijaqlvuiyouhratpzswodpsrbzuzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254494.3843098-966-257734568354976/AnsiballZ_copy.py'
Sep 30 17:48:15 compute-0 sudo[154179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v275: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:15 compute-0 python3.9[154181]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759254494.3843098-966-257734568354976/.source.json _original_basename=.3qy47trz follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:48:15 compute-0 sudo[154179]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:15 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:48:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:48:15.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:48:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:15 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12600021e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:16 compute-0 sudo[154333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mebnyqbtjlcsqtvzvbktxvbshcwuvvbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254495.8753116-996-142002809416663/AnsiballZ_file.py'
Sep 30 17:48:16 compute-0 sudo[154333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:48:16.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:16 compute-0 python3.9[154335]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:48:16 compute-0 sudo[154333]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:16 compute-0 ceph-mon[73755]: pgmap v275: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:16 compute-0 sudo[154485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjkktyhiegzvslbitpfrtwolqigxralm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254496.6881766-1012-240516751522649/AnsiballZ_stat.py'
Sep 30 17:48:16 compute-0 sudo[154485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:48:16.987Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 17:48:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:48:16.988Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:48:17 compute-0 sudo[154485]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v276: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:17 compute-0 sudo[154609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lisrqxaehzsusdnpmdlpqxvnzydakmld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254496.6881766-1012-240516751522649/AnsiballZ_copy.py'
Sep 30 17:48:17 compute-0 sudo[154609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:17 compute-0 sudo[154609]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:17 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1254004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:48:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:48:17.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:48:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:17 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:48:18.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:18 compute-0 sudo[154764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irsdqxlhcbzephkwjcdrfjtadgcbnowf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254498.0352995-1046-171012714510246/AnsiballZ_container_config_data.py'
Sep 30 17:48:18 compute-0 sudo[154764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:18 compute-0 python3.9[154766]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Sep 30 17:48:18 compute-0 sudo[154764]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:48:18] "GET /metrics HTTP/1.1" 200 46510 "" "Prometheus/2.51.0"
Sep 30 17:48:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:48:18] "GET /metrics HTTP/1.1" 200 46510 "" "Prometheus/2.51.0"
Sep 30 17:48:18 compute-0 ceph-mon[73755]: pgmap v276: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v277: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:48:19 compute-0 sudo[154917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xosdnpncpvreflpqjlipnraddxevypos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254499.0468104-1064-206278577242349/AnsiballZ_container_config_hash.py'
Sep 30 17:48:19 compute-0 sudo[154917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:19 compute-0 python3.9[154919]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Sep 30 17:48:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:19 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1248000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:19 compute-0 sudo[154917]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:48:19 compute-0 ceph-mon[73755]: pgmap v277: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:48:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:48:19.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:19 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12600021e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:48:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:48:20.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:48:20 compute-0 sudo[155070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cemuyjjwokyijlsszjvxzaclsfkdseid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254499.9793818-1082-58312515864464/AnsiballZ_podman_container_info.py'
Sep 30 17:48:20 compute-0 sudo[155070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:20 compute-0 python3.9[155072]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Sep 30 17:48:20 compute-0 sudo[155070]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v278: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:21 compute-0 sudo[155125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:48:21 compute-0 sudo[155125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:48:21 compute-0 sudo[155125]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:21 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1254004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:48:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:48:21.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:48:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:21 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:22 compute-0 sudo[155276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsjtkpicgmkchrldzjljrwnnqvddhbdj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759254501.573085-1108-80843566418024/AnsiballZ_edpm_container_manage.py'
Sep 30 17:48:22 compute-0 sudo[155276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:48:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:48:22 compute-0 python3[155278]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Sep 30 17:48:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:48:22.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:22 compute-0 ceph-mon[73755]: pgmap v278: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:48:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v279: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:48:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:23 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:48:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:48:23.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:48:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:23 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12600021e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:48:24.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:24 compute-0 ceph-mon[73755]: pgmap v279: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:48:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:48:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v280: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:25 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1254004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:48:25.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:25 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12480016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:48:26.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:26 compute-0 ceph-mon[73755]: pgmap v280: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:48:26.990Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:48:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v281: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:27 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:27 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 17:48:27 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 8179 writes, 32K keys, 8179 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 8179 writes, 1758 syncs, 4.65 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8179 writes, 32K keys, 8179 commit groups, 1.0 writes per commit group, ingest: 20.85 MB, 0.03 MB/s
                                           Interval WAL: 8179 writes, 1758 syncs, 4.65 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.0001 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.0001 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.0001 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.0001 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.0001 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.0001 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.0001 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.008       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.0001 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.0001 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Sep 30 17:48:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:48:27.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:27 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12600021e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:28 compute-0 podman[155293]: 2025-09-30 17:48:28.018733873 +0000 UTC m=+5.632239824 image pull ceccf5ef5dadbbaa077cd4e0c11fe3a228fcf6f1eeda53795be19675ca3a7b05 38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest
Sep 30 17:48:28 compute-0 ceph-mon[73755]: pgmap v281: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:28 compute-0 podman[155418]: 2025-09-30 17:48:28.149184479 +0000 UTC m=+0.044951168 container create c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest)
Sep 30 17:48:28 compute-0 podman[155418]: 2025-09-30 17:48:28.122795714 +0000 UTC m=+0.018562423 image pull ceccf5ef5dadbbaa077cd4e0c11fe3a228fcf6f1eeda53795be19675ca3a7b05 38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest
Sep 30 17:48:28 compute-0 python3[155278]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z 38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest
Sep 30 17:48:28 compute-0 sudo[155276]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:48:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:48:28.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:48:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:48:28] "GET /metrics HTTP/1.1" 200 46511 "" "Prometheus/2.51.0"
Sep 30 17:48:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:48:28] "GET /metrics HTTP/1.1" 200 46511 "" "Prometheus/2.51.0"
Sep 30 17:48:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v282: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:48:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:29 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1254004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:48:29 compute-0 sudo[155609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sowwqwgfprmucjdebwvgsfusglhwkbru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254509.5981379-1124-272434579842/AnsiballZ_stat.py'
Sep 30 17:48:29 compute-0 sudo[155609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:48:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:48:29.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:48:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:29 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1248001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:30 compute-0 python3.9[155611]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:48:30 compute-0 sudo[155609]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:48:30.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:30 compute-0 ceph-mon[73755]: pgmap v282: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:48:30 compute-0 sudo[155763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkmqtlqmptgedcojuyqwlevjdzdvadyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254510.4976008-1142-70316622532493/AnsiballZ_file.py'
Sep 30 17:48:30 compute-0 sudo[155763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:30 compute-0 python3.9[155765]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:48:30 compute-0 sudo[155763]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:31 compute-0 sudo[155839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxwoljfqqgtmkszpvjqodlmmvzqkugay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254510.4976008-1142-70316622532493/AnsiballZ_stat.py'
Sep 30 17:48:31 compute-0 sudo[155839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:31 compute-0 python3.9[155841]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:48:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v283: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:31 compute-0 sudo[155839]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:31 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:48:31.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:31 compute-0 sudo[155992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jytilsflpiciirevsebjquolijbkokvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254511.5052085-1142-144823206336977/AnsiballZ_copy.py'
Sep 30 17:48:31 compute-0 sudo[155992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:31 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12600021e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:32 compute-0 python3.9[155994]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759254511.5052085-1142-144823206336977/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:48:32 compute-0 sudo[155992]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:48:32.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:32 compute-0 sudo[156068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghxrcifotbuwtxdvnglkyhjcefnkqilz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254511.5052085-1142-144823206336977/AnsiballZ_systemd.py'
Sep 30 17:48:32 compute-0 sudo[156068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:32 compute-0 ceph-mon[73755]: pgmap v283: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:32 compute-0 python3.9[156070]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Sep 30 17:48:32 compute-0 systemd[1]: Reloading.
Sep 30 17:48:32 compute-0 systemd-rc-local-generator[156097]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:48:32 compute-0 systemd-sysv-generator[156101]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:48:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v284: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:48:33 compute-0 sudo[156068]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:33 compute-0 sudo[156179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zacgztxsdbxkhtquoeppodffdvczjwtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254511.5052085-1142-144823206336977/AnsiballZ_systemd.py'
Sep 30 17:48:33 compute-0 sudo[156179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:33 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1254004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:48:33.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:33 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1248001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:48:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:48:34.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:48:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:48:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v285: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:35 compute-0 ceph-mon[73755]: pgmap v284: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:48:35 compute-0 python3.9[156181]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:48:35 compute-0 systemd[1]: Reloading.
Sep 30 17:48:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:35 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:35 compute-0 systemd-sysv-generator[156219]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:48:35 compute-0 systemd-rc-local-generator[156215]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:48:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:48:35.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:35 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12600021e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:35 compute-0 systemd[1]: Starting ovn_controller container...
Sep 30 17:48:36 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:48:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b1b19185bee6e9982f171f3d4504bbce2b0e5cc731d41b7165e56f361fc92a0/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Sep 30 17:48:36 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4.
Sep 30 17:48:36 compute-0 podman[156227]: 2025-09-30 17:48:36.147585634 +0000 UTC m=+0.137538541 container init c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Sep 30 17:48:36 compute-0 ovn_controller[156242]: + sudo -E kolla_set_configs
Sep 30 17:48:36 compute-0 podman[156227]: 2025-09-30 17:48:36.173961409 +0000 UTC m=+0.163914306 container start c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true)
Sep 30 17:48:36 compute-0 edpm-start-podman-container[156227]: ovn_controller
Sep 30 17:48:36 compute-0 systemd[1]: Created slice User Slice of UID 0.
Sep 30 17:48:36 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Sep 30 17:48:36 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Sep 30 17:48:36 compute-0 systemd[1]: Starting User Manager for UID 0...
Sep 30 17:48:36 compute-0 systemd[156269]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Sep 30 17:48:36 compute-0 edpm-start-podman-container[156226]: Creating additional drop-in dependency for "ovn_controller" (c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4)
Sep 30 17:48:36 compute-0 podman[156249]: 2025-09-30 17:48:36.260247699 +0000 UTC m=+0.076276811 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, config_id=ovn_controller, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Sep 30 17:48:36 compute-0 systemd[1]: c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4-3041ac7c3e424c2.service: Main process exited, code=exited, status=1/FAILURE
Sep 30 17:48:36 compute-0 systemd[1]: c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4-3041ac7c3e424c2.service: Failed with result 'exit-code'.
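[editorial sketch] The two systemd lines above record the first podman health check for ovn_controller exiting non-zero while the container is still starting (health_status=starting, failing streak 1 in the health_status event above). A minimal way to repeat that check by hand, assuming the container name from the create command earlier in the log; the inspect field layout is an assumption and differs between podman versions:

    # re-run the configured health check ('/openstack/healthcheck') inside the container
    podman healthcheck run ovn_controller && echo healthy || echo "still failing"
    # show the recorded health state; field is "Health" or "Healthcheck" depending on podman version
    podman inspect ovn_controller | grep -iA5 '"health'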
Sep 30 17:48:36 compute-0 systemd[1]: Reloading.
Sep 30 17:48:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:48:36.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:36 compute-0 systemd[156269]: Queued start job for default target Main User Target.
Sep 30 17:48:36 compute-0 systemd-rc-local-generator[156331]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:48:36 compute-0 systemd[156269]: Created slice User Application Slice.
Sep 30 17:48:36 compute-0 systemd[156269]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Sep 30 17:48:36 compute-0 systemd[156269]: Started Daily Cleanup of User's Temporary Directories.
Sep 30 17:48:36 compute-0 systemd[156269]: Reached target Paths.
Sep 30 17:48:36 compute-0 systemd[156269]: Reached target Timers.
Sep 30 17:48:36 compute-0 systemd[156269]: Starting D-Bus User Message Bus Socket...
Sep 30 17:48:36 compute-0 systemd[156269]: Starting Create User's Volatile Files and Directories...
Sep 30 17:48:36 compute-0 systemd-sysv-generator[156334]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:48:36 compute-0 systemd[156269]: Finished Create User's Volatile Files and Directories.
Sep 30 17:48:36 compute-0 systemd[156269]: Listening on D-Bus User Message Bus Socket.
Sep 30 17:48:36 compute-0 systemd[156269]: Reached target Sockets.
Sep 30 17:48:36 compute-0 systemd[156269]: Reached target Basic System.
Sep 30 17:48:36 compute-0 systemd[156269]: Reached target Main User Target.
Sep 30 17:48:36 compute-0 systemd[156269]: Startup finished in 134ms.
Sep 30 17:48:36 compute-0 systemd[1]: Started User Manager for UID 0.
Sep 30 17:48:36 compute-0 systemd[1]: Started ovn_controller container.
Sep 30 17:48:36 compute-0 systemd[1]: Started Session c1 of User root.
Sep 30 17:48:36 compute-0 sudo[156179]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:36 compute-0 ovn_controller[156242]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Sep 30 17:48:36 compute-0 ovn_controller[156242]: INFO:__main__:Validating config file
Sep 30 17:48:36 compute-0 ovn_controller[156242]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Sep 30 17:48:36 compute-0 ovn_controller[156242]: INFO:__main__:Writing out command to execute
Sep 30 17:48:36 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Sep 30 17:48:36 compute-0 ovn_controller[156242]: ++ cat /run_command
Sep 30 17:48:36 compute-0 ovn_controller[156242]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Sep 30 17:48:36 compute-0 ovn_controller[156242]: + ARGS=
Sep 30 17:48:36 compute-0 ovn_controller[156242]: + sudo kolla_copy_cacerts
Sep 30 17:48:36 compute-0 systemd[1]: Started Session c2 of User root.
Sep 30 17:48:36 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Sep 30 17:48:36 compute-0 ovn_controller[156242]: + [[ ! -n '' ]]
Sep 30 17:48:36 compute-0 ovn_controller[156242]: + . kolla_extend_start
Sep 30 17:48:36 compute-0 ovn_controller[156242]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Sep 30 17:48:36 compute-0 ovn_controller[156242]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Sep 30 17:48:36 compute-0 ovn_controller[156242]: + umask 0022
Sep 30 17:48:36 compute-0 ovn_controller[156242]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
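[editorial sketch] The kolla trace above shows kolla_set_configs reading /var/lib/kolla/config_files/config.json, writing the run command, and exec'ing it. The contents of the mounted ovn_controller.json are not captured in this log; a hypothetical reconstruction, assuming the usual kolla config.json layout and reusing the command string echoed from /run_command above:

    # hypothetical -- the real /var/lib/kolla/config_files/ovn_controller.json is not shown in this log
    cat /var/lib/kolla/config_files/config.json
    {
        "command": "/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt"
    }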
Sep 30 17:48:36 compute-0 ovn_controller[156242]: 2025-09-30T17:48:36Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Sep 30 17:48:36 compute-0 ovn_controller[156242]: 2025-09-30T17:48:36Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Sep 30 17:48:36 compute-0 ovn_controller[156242]: 2025-09-30T17:48:36Z|00003|main|INFO|OVN internal version is : [24.09.4-20.37.0-77.8]
Sep 30 17:48:36 compute-0 ovn_controller[156242]: 2025-09-30T17:48:36Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Sep 30 17:48:36 compute-0 ovn_controller[156242]: 2025-09-30T17:48:36Z|00005|stream_ssl|ERR|ssl:ovsdbserver-sb.openstack.svc:6642: connect: Address family not supported by protocol
Sep 30 17:48:36 compute-0 ovn_controller[156242]: 2025-09-30T17:48:36Z|00006|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Sep 30 17:48:36 compute-0 ovn_controller[156242]: 2025-09-30T17:48:36Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connection attempt failed (Address family not supported by protocol)
Sep 30 17:48:36 compute-0 ovn_controller[156242]: 2025-09-30T17:48:36Z|00008|main|INFO|OVNSB IDL reconnected, force recompute.
Sep 30 17:48:36 compute-0 ovn_controller[156242]: 2025-09-30T17:48:36Z|00009|ovn_util|INFO|statctrl: connecting to switch: "unix:/var/run/openvswitch/br-int.mgmt"
Sep 30 17:48:36 compute-0 ovn_controller[156242]: 2025-09-30T17:48:36Z|00010|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Sep 30 17:48:36 compute-0 ovn_controller[156242]: 2025-09-30T17:48:36Z|00011|rconn|WARN|unix:/var/run/openvswitch/br-int.mgmt: connection failed (No such file or directory)
Sep 30 17:48:36 compute-0 ovn_controller[156242]: 2025-09-30T17:48:36Z|00012|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: waiting 1 seconds before reconnect
Sep 30 17:48:36 compute-0 ovn_controller[156242]: 2025-09-30T17:48:36Z|00013|ovn_util|INFO|pinctrl: connecting to switch: "unix:/var/run/openvswitch/br-int.mgmt"
Sep 30 17:48:36 compute-0 ovn_controller[156242]: 2025-09-30T17:48:36Z|00014|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Sep 30 17:48:36 compute-0 ovn_controller[156242]: 2025-09-30T17:48:36Z|00015|rconn|WARN|unix:/var/run/openvswitch/br-int.mgmt: connection failed (No such file or directory)
Sep 30 17:48:36 compute-0 ovn_controller[156242]: 2025-09-30T17:48:36Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: waiting 1 seconds before reconnect
Sep 30 17:48:36 compute-0 NetworkManager[45059]: <info>  [1759254516.7734] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Sep 30 17:48:36 compute-0 NetworkManager[45059]: <info>  [1759254516.7741] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Sep 30 17:48:36 compute-0 NetworkManager[45059]: <info>  [1759254516.7754] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Sep 30 17:48:36 compute-0 NetworkManager[45059]: <info>  [1759254516.7761] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Sep 30 17:48:36 compute-0 NetworkManager[45059]: <info>  [1759254516.7765] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Sep 30 17:48:36 compute-0 kernel: br-int: entered promiscuous mode
Sep 30 17:48:36 compute-0 systemd-udevd[156377]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 17:48:36 compute-0 ceph-mon[73755]: pgmap v285: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:48:36.991Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:48:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:48:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:48:37 compute-0 sudo[156505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqqslmhpvcmhhjgbfujqffjydejxhtcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254516.9973266-1198-54318308721606/AnsiballZ_command.py'
Sep 30 17:48:37 compute-0 sudo[156505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:48:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:48:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v286: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:48:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:48:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:48:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:48:37 compute-0 python3.9[156507]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:48:37 compute-0 ovs-vsctl[156509]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
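[editorial sketch] The Ansible task above drops the hw-offload key from other_config, i.e. it reverts any earlier opt-in to OVS hardware offload. For context, the inverse operation (not run anywhere in this log) would look like the following; note that the setting typically only takes effect after ovs-vswitchd restarts:

    # enable kernel TC hardware offload in OVS (assumed example, not from this log)
    ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
    # read the key back; this errors if the key is absent, which is the state the removal above leaves
    ovs-vsctl get Open_vSwitch . other_config:hw-offload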
Sep 30 17:48:37 compute-0 sudo[156505]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:37 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1254004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:37 compute-0 ovn_controller[156242]: 2025-09-30T17:48:37Z|00001|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Sep 30 17:48:37 compute-0 ovn_controller[156242]: 2025-09-30T17:48:37Z|00001|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Sep 30 17:48:37 compute-0 ovn_controller[156242]: 2025-09-30T17:48:37Z|00017|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Sep 30 17:48:37 compute-0 ovn_controller[156242]: 2025-09-30T17:48:37Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Sep 30 17:48:37 compute-0 ovn_controller[156242]: 2025-09-30T17:48:37Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Sep 30 17:48:37 compute-0 ovn_controller[156242]: 2025-09-30T17:48:37Z|00018|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Sep 30 17:48:37 compute-0 ovn_controller[156242]: 2025-09-30T17:48:37Z|00019|ovn_util|INFO|features: connecting to switch: "unix:/var/run/openvswitch/br-int.mgmt"
Sep 30 17:48:37 compute-0 ovn_controller[156242]: 2025-09-30T17:48:37Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Sep 30 17:48:37 compute-0 ovn_controller[156242]: 2025-09-30T17:48:37Z|00021|features|INFO|OVS Feature: ct_zero_snat, state: supported
Sep 30 17:48:37 compute-0 ovn_controller[156242]: 2025-09-30T17:48:37Z|00022|features|INFO|OVS Feature: ct_flush, state: supported
Sep 30 17:48:37 compute-0 ovn_controller[156242]: 2025-09-30T17:48:37Z|00023|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Sep 30 17:48:37 compute-0 ovn_controller[156242]: 2025-09-30T17:48:37Z|00024|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Sep 30 17:48:37 compute-0 ovn_controller[156242]: 2025-09-30T17:48:37Z|00025|main|INFO|OVS feature set changed, force recompute.
Sep 30 17:48:37 compute-0 ovn_controller[156242]: 2025-09-30T17:48:37Z|00026|ovn_util|INFO|ofctrl: connecting to switch: "unix:/var/run/openvswitch/br-int.mgmt"
Sep 30 17:48:37 compute-0 ovn_controller[156242]: 2025-09-30T17:48:37Z|00027|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Sep 30 17:48:37 compute-0 ovn_controller[156242]: 2025-09-30T17:48:37Z|00028|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Sep 30 17:48:37 compute-0 ovn_controller[156242]: 2025-09-30T17:48:37Z|00029|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Sep 30 17:48:37 compute-0 ovn_controller[156242]: 2025-09-30T17:48:37Z|00030|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Sep 30 17:48:37 compute-0 ovn_controller[156242]: 2025-09-30T17:48:37Z|00031|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Sep 30 17:48:37 compute-0 ovn_controller[156242]: 2025-09-30T17:48:37Z|00032|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Sep 30 17:48:37 compute-0 ovn_controller[156242]: 2025-09-30T17:48:37Z|00033|features|INFO|OVS Feature: meter_support, state: supported
Sep 30 17:48:37 compute-0 ovn_controller[156242]: 2025-09-30T17:48:37Z|00034|features|INFO|OVS Feature: group_support, state: supported
Sep 30 17:48:37 compute-0 ovn_controller[156242]: 2025-09-30T17:48:37Z|00035|main|INFO|OVS feature set changed, force recompute.
Sep 30 17:48:37 compute-0 ovn_controller[156242]: 2025-09-30T17:48:37Z|00036|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Sep 30 17:48:37 compute-0 ovn_controller[156242]: 2025-09-30T17:48:37Z|00037|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
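[editorial sketch] The reconnect lines above show ovn-controller dialing the southbound DB at ssl:ovsdbserver-sb.openstack.svc:6642, and the genev_sys_6081 interface that appears just below confirms geneve encapsulation. A sketch of the Open_vSwitch external_ids that would produce such a chassis configuration; the encapsulation IP is an assumption (it is not printed in this log):

    # assumed values; ovn-remote matches the address ovn-controller is seen connecting to above
    ovs-vsctl set Open_vSwitch . \
        external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 \
        external_ids:ovn-encap-type=geneve \
        external_ids:ovn-encap-ip=192.168.122.100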
Sep 30 17:48:37 compute-0 NetworkManager[45059]: <info>  [1759254517.7988] manager: (ovn-fdf940-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Sep 30 17:48:37 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Sep 30 17:48:37 compute-0 NetworkManager[45059]: <info>  [1759254517.8223] device (genev_sys_6081): carrier: link connected
Sep 30 17:48:37 compute-0 NetworkManager[45059]: <info>  [1759254517.8226] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Sep 30 17:48:37 compute-0 systemd-udevd[156379]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 17:48:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:48:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:48:37.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:37 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1248001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:37 compute-0 NetworkManager[45059]: <info>  [1759254517.9874] manager: (ovn-81ab3f-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Sep 30 17:48:38 compute-0 sudo[156663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdnfurgkgdmjstpweuvdukymjptekqxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254517.8072548-1214-36658044564202/AnsiballZ_command.py'
Sep 30 17:48:38 compute-0 sudo[156663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:38 compute-0 python3.9[156665]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:48:38 compute-0 ovs-vsctl[156667]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Sep 30 17:48:38 compute-0 sudo[156663]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:48:38.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:48:38] "GET /metrics HTTP/1.1" 200 46511 "" "Prometheus/2.51.0"
Sep 30 17:48:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:48:38] "GET /metrics HTTP/1.1" 200 46511 "" "Prometheus/2.51.0"
Sep 30 17:48:38 compute-0 ceph-mon[73755]: pgmap v286: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:39 compute-0 sudo[156818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwghfpnshnxecpwgmgcktvjxoumqnacw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254518.787276-1242-279107774266345/AnsiballZ_command.py'
Sep 30 17:48:39 compute-0 sudo[156818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:39 compute-0 python3.9[156820]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:48:39 compute-0 ovs-vsctl[156821]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Sep 30 17:48:39 compute-0 sudo[156818]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v287: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:48:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:39 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:48:39 compute-0 sshd-session[144519]: Connection closed by 192.168.122.30 port 47072
Sep 30 17:48:39 compute-0 sshd-session[144516]: pam_unix(sshd:session): session closed for user zuul
Sep 30 17:48:39 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Sep 30 17:48:39 compute-0 ceph-mon[73755]: pgmap v287: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:48:39 compute-0 systemd[1]: session-51.scope: Consumed 1min 291ms CPU time.
Sep 30 17:48:39 compute-0 systemd-logind[811]: Session 51 logged out. Waiting for processes to exit.
Sep 30 17:48:39 compute-0 systemd-logind[811]: Removed session 51.
Sep 30 17:48:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:48:39.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:39 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12600021e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:48:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:48:40.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:48:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v288: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:41 compute-0 sudo[156849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:48:41 compute-0 sudo[156849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:48:41 compute-0 sudo[156849]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:41 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1254004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:48:41.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:41 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12480032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:48:42.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:42 compute-0 ceph-mon[73755]: pgmap v288: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v289: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:48:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:43 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:48:43.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:43 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12600021e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:48:44.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:44 compute-0 ceph-mon[73755]: pgmap v289: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:48:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:48:45 compute-0 sshd-session[156877]: Accepted publickey for zuul from 192.168.122.30 port 38130 ssh2: ECDSA SHA256:COqAmbdBUMx+UyDa5XAop4Bi6qvwvYhJQ16rCseuHkQ
Sep 30 17:48:45 compute-0 systemd-logind[811]: New session 53 of user zuul.
Sep 30 17:48:45 compute-0 systemd[1]: Started Session 53 of User zuul.
Sep 30 17:48:45 compute-0 sshd-session[156877]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 17:48:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v290: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:45 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1254004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:48:45.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:45 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12480032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:46 compute-0 python3.9[157032]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:48:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:48:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:48:46.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:48:46 compute-0 ceph-mon[73755]: pgmap v290: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:46 compute-0 systemd[1]: Stopping User Manager for UID 0...
Sep 30 17:48:46 compute-0 systemd[156269]: Activating special unit Exit the Session...
Sep 30 17:48:46 compute-0 systemd[156269]: Stopped target Main User Target.
Sep 30 17:48:46 compute-0 systemd[156269]: Stopped target Basic System.
Sep 30 17:48:46 compute-0 systemd[156269]: Stopped target Paths.
Sep 30 17:48:46 compute-0 systemd[156269]: Stopped target Sockets.
Sep 30 17:48:46 compute-0 systemd[156269]: Stopped target Timers.
Sep 30 17:48:46 compute-0 systemd[156269]: Stopped Daily Cleanup of User's Temporary Directories.
Sep 30 17:48:46 compute-0 systemd[156269]: Closed D-Bus User Message Bus Socket.
Sep 30 17:48:46 compute-0 systemd[156269]: Stopped Create User's Volatile Files and Directories.
Sep 30 17:48:46 compute-0 systemd[156269]: Removed slice User Application Slice.
Sep 30 17:48:46 compute-0 systemd[156269]: Reached target Shutdown.
Sep 30 17:48:46 compute-0 systemd[156269]: Finished Exit the Session.
Sep 30 17:48:46 compute-0 systemd[156269]: Reached target Exit the Session.
Sep 30 17:48:46 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Sep 30 17:48:46 compute-0 systemd[1]: Stopped User Manager for UID 0.
Sep 30 17:48:46 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Sep 30 17:48:46 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Sep 30 17:48:46 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Sep 30 17:48:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:48:46.992Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 17:48:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:48:46.993Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:48:46 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Sep 30 17:48:46 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Sep 30 17:48:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v291: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:47 compute-0 sudo[157190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeejeujjpknwyjgnhkezvrqhzadzhrrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254526.9475088-48-217593259848789/AnsiballZ_file.py'
Sep 30 17:48:47 compute-0 sudo[157190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:47 compute-0 python3.9[157192]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:48:47 compute-0 sudo[157190]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:47 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:48:47.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:47 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12600021e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:48 compute-0 sudo[157344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rntjtwxgdolpyiueavxptehogeysenxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254527.7702696-48-121366030068486/AnsiballZ_file.py'
Sep 30 17:48:48 compute-0 sudo[157344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:48 compute-0 python3.9[157346]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:48:48 compute-0 sudo[157344]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:48:48.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:48 compute-0 ceph-mon[73755]: pgmap v291: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:48 compute-0 sudo[157496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmpnempbtbbqebnawsoyggquoatyzjsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254528.4665956-48-163307803596712/AnsiballZ_file.py'
Sep 30 17:48:48 compute-0 sudo[157496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:48:48] "GET /metrics HTTP/1.1" 200 46511 "" "Prometheus/2.51.0"
Sep 30 17:48:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:48:48] "GET /metrics HTTP/1.1" 200 46511 "" "Prometheus/2.51.0"
Sep 30 17:48:48 compute-0 python3.9[157498]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:48:48 compute-0 sudo[157496]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:49 compute-0 sudo[157649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbppmsafdptoxiennhiaupmafodpoxmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254529.0667338-48-68568219612478/AnsiballZ_file.py'
Sep 30 17:48:49 compute-0 sudo[157649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v292: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:48:49 compute-0 python3.9[157651]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:48:49 compute-0 sudo[157649]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:49 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:48:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:48:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:48:49.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:48:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:49 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1248004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:49 compute-0 sudo[157803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjccfofisddutsiifqhxkexykbeaxduc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254529.7186615-48-241476391651064/AnsiballZ_file.py'
Sep 30 17:48:49 compute-0 sudo[157803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:50 compute-0 python3.9[157805]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:48:50 compute-0 sudo[157803]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:48:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:48:50.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:48:50 compute-0 ceph-mon[73755]: pgmap v292: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:48:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v293: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:51 compute-0 python3.9[157955]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:48:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:51 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:48:51.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:51 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:48:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:48:52 compute-0 sudo[158107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycvfwvzldoejmpcvgjghwaxwisdesjrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254531.8833532-136-115958706347591/AnsiballZ_seboolean.py'
Sep 30 17:48:52 compute-0 sudo[158107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:48:52.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:52 compute-0 python3.9[158109]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Sep 30 17:48:52 compute-0 ceph-mon[73755]: pgmap v293: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:48:53 compute-0 sudo[158107]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v294: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:48:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:53 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:48:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:48:53.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:48:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:53 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1248004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:54 compute-0 python3.9[158261]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:48:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:48:54.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:54 compute-0 ceph-mon[73755]: pgmap v294: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:48:54 compute-0 python3.9[158382]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759254533.408966-152-181058749245646/.source follow=False _original_basename=haproxy.j2 checksum=95d26d03c70c8c0693c538ed451937f0a3e9bd72 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:48:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:48:55 compute-0 python3.9[158532]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:48:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v295: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:55 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1244000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:55 compute-0 python3.9[158654]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759254534.848505-182-239158242969990/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:48:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:48:55.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:55 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:48:56.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:56 compute-0 sudo[158806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgtbknkcncuapmjnqsoomryszchjypmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254536.2557716-216-76760094322023/AnsiballZ_setup.py'
Sep 30 17:48:56 compute-0 sudo[158806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:56 compute-0 ceph-mon[73755]: pgmap v295: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:56 compute-0 python3.9[158808]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 17:48:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:48:56.993Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:48:57 compute-0 sudo[158806]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v296: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:57 compute-0 sudo[158891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-undptukinkusgdrasjsrnswokwiqwkaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254536.2557716-216-76760094322023/AnsiballZ_dnf.py'
Sep 30 17:48:57 compute-0 sudo[158891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:48:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:57 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:57 compute-0 python3.9[158893]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 17:48:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:48:57.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:57 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:48:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:48:58.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:48:58 compute-0 ceph-mon[73755]: pgmap v296: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:48:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:48:58] "GET /metrics HTTP/1.1" 200 46511 "" "Prometheus/2.51.0"
Sep 30 17:48:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:48:58] "GET /metrics HTTP/1.1" 200 46511 "" "Prometheus/2.51.0"
Sep 30 17:48:59 compute-0 sudo[158891]: pam_unix(sudo:session): session closed for user root
Sep 30 17:48:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v297: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:48:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:59 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:48:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:48:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:48:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:48:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:48:59.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:48:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:48:59 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:00 compute-0 sudo[159047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lidlofgxxcyabzirxwsksscnmtkqbien ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254539.4435828-240-245795455224738/AnsiballZ_systemd.py'
Sep 30 17:49:00 compute-0 sudo[159047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:00 compute-0 python3.9[159049]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Sep 30 17:49:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:49:00.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:00 compute-0 sudo[159047]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:00 compute-0 ceph-mon[73755]: pgmap v297: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:49:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v298: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:49:01 compute-0 sudo[159078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:49:01 compute-0 sudo[159078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:49:01 compute-0 sudo[159078]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:01 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1248004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:49:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:49:01.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:49:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:01 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1244001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:02 compute-0 python3.9[159229]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:49:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:49:02.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:02 compute-0 ceph-mon[73755]: pgmap v298: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:49:02 compute-0 python3.9[159350]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759254541.7737327-256-240058460180651/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:49:03 compute-0 python3.9[159500]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:49:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v299: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:49:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:03 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:03 compute-0 python3.9[159623]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759254542.9052072-256-51130196419051/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:49:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:49:03.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:03 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:49:04.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:04 compute-0 ceph-mon[73755]: pgmap v299: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:49:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:49:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v300: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:49:05 compute-0 python3.9[159774]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:49:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:05 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1248004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:49:05.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:05 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1244002140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:06 compute-0 python3.9[159896]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759254545.1523314-344-41502534394876/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:49:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:49:06.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:06 compute-0 ovn_controller[156242]: 2025-09-30T17:49:06Z|00038|memory|INFO|15964 kB peak resident set size after 29.7 seconds
Sep 30 17:49:06 compute-0 ovn_controller[156242]: 2025-09-30T17:49:06Z|00039|memory|INFO|idl-cells-OVN_Southbound:256 idl-cells-Open_vSwitch:585 ofctrl_desired_flow_usage-KB:6 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:2
Sep 30 17:49:06 compute-0 podman[160020]: 2025-09-30 17:49:06.481122448 +0000 UTC m=+0.124058158 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20250930, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Sep 30 17:49:06 compute-0 python3.9[160059]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:49:06 compute-0 ceph-mon[73755]: pgmap v300: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:49:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:49:06.996Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:49:07 compute-0 python3.9[160193]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759254546.1681104-344-131929022702305/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:49:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:49:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_17:49:07
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['images', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', '.mgr', 'volumes', '.nfs', 'default.rgw.meta', 'vms', '.rgw.root']
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v301: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:49:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:49:07 compute-0 sudo[160220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:49:07 compute-0 sudo[160220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:49:07 compute-0 sudo[160220]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:07 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:07 compute-0 sudo[160245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 17:49:07 compute-0 sudo[160245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:49:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:49:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:49:07.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:07 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1244002140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:08 compute-0 python3.9[160412]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:49:08 compute-0 sudo[160245]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:49:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:49:08.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:49:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:49:08 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:49:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 17:49:08 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:49:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 17:49:08 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:49:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 17:49:08 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:49:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 17:49:08 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:49:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 17:49:08 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:49:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:49:08 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:49:08 compute-0 sudo[160456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:49:08 compute-0 sudo[160456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:49:08 compute-0 sudo[160456]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:08 compute-0 sudo[160481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 17:49:08 compute-0 sudo[160481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:49:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:49:08] "GET /metrics HTTP/1.1" 200 46512 "" "Prometheus/2.51.0"
Sep 30 17:49:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:49:08] "GET /metrics HTTP/1.1" 200 46512 "" "Prometheus/2.51.0"
Sep 30 17:49:08 compute-0 ceph-mon[73755]: pgmap v301: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:49:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:49:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:49:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:49:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:49:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:49:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:49:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:49:08 compute-0 sudo[160664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-selizlvbsvediphrzknxhhmwwrbsxpqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254548.6046073-420-173660458846699/AnsiballZ_file.py'
Sep 30 17:49:08 compute-0 sudo[160664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:08 compute-0 podman[160675]: 2025-09-30 17:49:08.913160807 +0000 UTC m=+0.037761620 container create 0dff873a03e79d09eda539343398f8d50ed31945faf22208fb11b69e12e7d343 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_morse, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:49:08 compute-0 systemd[1]: Started libpod-conmon-0dff873a03e79d09eda539343398f8d50ed31945faf22208fb11b69e12e7d343.scope.
Sep 30 17:49:08 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:49:08 compute-0 podman[160675]: 2025-09-30 17:49:08.979153055 +0000 UTC m=+0.103753878 container init 0dff873a03e79d09eda539343398f8d50ed31945faf22208fb11b69e12e7d343 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_morse, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:49:08 compute-0 podman[160675]: 2025-09-30 17:49:08.98768495 +0000 UTC m=+0.112285783 container start 0dff873a03e79d09eda539343398f8d50ed31945faf22208fb11b69e12e7d343 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid)
Sep 30 17:49:08 compute-0 podman[160675]: 2025-09-30 17:49:08.89537998 +0000 UTC m=+0.019980823 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:49:08 compute-0 podman[160675]: 2025-09-30 17:49:08.991401403 +0000 UTC m=+0.116002216 container attach 0dff873a03e79d09eda539343398f8d50ed31945faf22208fb11b69e12e7d343 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 17:49:08 compute-0 stupefied_morse[160691]: 167 167
Sep 30 17:49:08 compute-0 systemd[1]: libpod-0dff873a03e79d09eda539343398f8d50ed31945faf22208fb11b69e12e7d343.scope: Deactivated successfully.
Sep 30 17:49:08 compute-0 podman[160675]: 2025-09-30 17:49:08.992713676 +0000 UTC m=+0.117314499 container died 0dff873a03e79d09eda539343398f8d50ed31945faf22208fb11b69e12e7d343 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Sep 30 17:49:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad514ff1a3990cd1d9038da17dd7850e49e629527fff15b9d439b9f672671435-merged.mount: Deactivated successfully.
Sep 30 17:49:09 compute-0 podman[160675]: 2025-09-30 17:49:09.02988426 +0000 UTC m=+0.154485073 container remove 0dff873a03e79d09eda539343398f8d50ed31945faf22208fb11b69e12e7d343 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_morse, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:49:09 compute-0 systemd[1]: libpod-conmon-0dff873a03e79d09eda539343398f8d50ed31945faf22208fb11b69e12e7d343.scope: Deactivated successfully.
Sep 30 17:49:09 compute-0 python3.9[160672]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:49:09 compute-0 sudo[160664]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:09 compute-0 podman[160739]: 2025-09-30 17:49:09.205719759 +0000 UTC m=+0.043299029 container create e3418cefe57bd43d5af8778f7ffa5453c8b166a454a654ea4f9f4aa647ccf58c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid)
Sep 30 17:49:09 compute-0 systemd[1]: Started libpod-conmon-e3418cefe57bd43d5af8778f7ffa5453c8b166a454a654ea4f9f4aa647ccf58c.scope.
Sep 30 17:49:09 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:49:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17ca2eb778bd6b25db09ae5b385dedb02e0db10d7776239e0e63180e84c56eb7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:49:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17ca2eb778bd6b25db09ae5b385dedb02e0db10d7776239e0e63180e84c56eb7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:49:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17ca2eb778bd6b25db09ae5b385dedb02e0db10d7776239e0e63180e84c56eb7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:49:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17ca2eb778bd6b25db09ae5b385dedb02e0db10d7776239e0e63180e84c56eb7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:49:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17ca2eb778bd6b25db09ae5b385dedb02e0db10d7776239e0e63180e84c56eb7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:49:09 compute-0 podman[160739]: 2025-09-30 17:49:09.186680491 +0000 UTC m=+0.024259781 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:49:09 compute-0 podman[160739]: 2025-09-30 17:49:09.291579667 +0000 UTC m=+0.129158947 container init e3418cefe57bd43d5af8778f7ffa5453c8b166a454a654ea4f9f4aa647ccf58c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_dewdney, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Sep 30 17:49:09 compute-0 podman[160739]: 2025-09-30 17:49:09.299315071 +0000 UTC m=+0.136894341 container start e3418cefe57bd43d5af8778f7ffa5453c8b166a454a654ea4f9f4aa647ccf58c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:49:09 compute-0 podman[160739]: 2025-09-30 17:49:09.30284612 +0000 UTC m=+0.140425410 container attach e3418cefe57bd43d5af8778f7ffa5453c8b166a454a654ea4f9f4aa647ccf58c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_dewdney, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:49:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v302: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:49:09 compute-0 elastic_dewdney[160756]: --> passed data devices: 0 physical, 1 LVM
Sep 30 17:49:09 compute-0 elastic_dewdney[160756]: --> All data devices are unavailable
Sep 30 17:49:09 compute-0 sudo[160898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhvipneenheotofgmmeylrcdgfpxrghy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254549.3719766-436-223853974590953/AnsiballZ_stat.py'
Sep 30 17:49:09 compute-0 sudo[160898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:09 compute-0 systemd[1]: libpod-e3418cefe57bd43d5af8778f7ffa5453c8b166a454a654ea4f9f4aa647ccf58c.scope: Deactivated successfully.
Sep 30 17:49:09 compute-0 podman[160739]: 2025-09-30 17:49:09.650074126 +0000 UTC m=+0.487653416 container died e3418cefe57bd43d5af8778f7ffa5453c8b166a454a654ea4f9f4aa647ccf58c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_dewdney, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:49:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-17ca2eb778bd6b25db09ae5b385dedb02e0db10d7776239e0e63180e84c56eb7-merged.mount: Deactivated successfully.
Sep 30 17:49:09 compute-0 podman[160739]: 2025-09-30 17:49:09.689508187 +0000 UTC m=+0.527087457 container remove e3418cefe57bd43d5af8778f7ffa5453c8b166a454a654ea4f9f4aa647ccf58c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_dewdney, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Sep 30 17:49:09 compute-0 systemd[1]: libpod-conmon-e3418cefe57bd43d5af8778f7ffa5453c8b166a454a654ea4f9f4aa647ccf58c.scope: Deactivated successfully.
Sep 30 17:49:09 compute-0 sudo[160481]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:09 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1248004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:49:09 compute-0 sudo[160912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:49:09 compute-0 sudo[160912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:49:09 compute-0 sudo[160912]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:09 compute-0 sudo[160937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 17:49:09 compute-0 sudo[160937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:49:09 compute-0 python3.9[160900]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:49:09 compute-0 sudo[160898]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:49:09.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:09 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250003240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:10 compute-0 sudo[161056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhmjkxhlmcdnmcrjxbkkopxvtoorvkxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254549.3719766-436-223853974590953/AnsiballZ_file.py'
Sep 30 17:49:10 compute-0 sudo[161056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:10 compute-0 podman[161080]: 2025-09-30 17:49:10.211406563 +0000 UTC m=+0.032779285 container create a1b76de9b7c02585cc23ca3a1b3212549643fd3e5938139ec896b1bedb05375f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Sep 30 17:49:10 compute-0 systemd[1]: Started libpod-conmon-a1b76de9b7c02585cc23ca3a1b3212549643fd3e5938139ec896b1bedb05375f.scope.
Sep 30 17:49:10 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:49:10 compute-0 podman[161080]: 2025-09-30 17:49:10.270480838 +0000 UTC m=+0.091853590 container init a1b76de9b7c02585cc23ca3a1b3212549643fd3e5938139ec896b1bedb05375f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_mclaren, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 17:49:10 compute-0 podman[161080]: 2025-09-30 17:49:10.276418997 +0000 UTC m=+0.097791719 container start a1b76de9b7c02585cc23ca3a1b3212549643fd3e5938139ec896b1bedb05375f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Sep 30 17:49:10 compute-0 interesting_mclaren[161096]: 167 167
Sep 30 17:49:10 compute-0 systemd[1]: libpod-a1b76de9b7c02585cc23ca3a1b3212549643fd3e5938139ec896b1bedb05375f.scope: Deactivated successfully.
Sep 30 17:49:10 compute-0 podman[161080]: 2025-09-30 17:49:10.279851453 +0000 UTC m=+0.101224175 container attach a1b76de9b7c02585cc23ca3a1b3212549643fd3e5938139ec896b1bedb05375f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_mclaren, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Sep 30 17:49:10 compute-0 conmon[161096]: conmon a1b76de9b7c02585cc23 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a1b76de9b7c02585cc23ca3a1b3212549643fd3e5938139ec896b1bedb05375f.scope/container/memory.events
Sep 30 17:49:10 compute-0 podman[161080]: 2025-09-30 17:49:10.281489854 +0000 UTC m=+0.102862606 container died a1b76de9b7c02585cc23ca3a1b3212549643fd3e5938139ec896b1bedb05375f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:49:10 compute-0 podman[161080]: 2025-09-30 17:49:10.196825967 +0000 UTC m=+0.018198709 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:49:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-a470fd78a607775f7649d2809f2db388d87db3c13731e6b3d6c873c1d7d802ad-merged.mount: Deactivated successfully.
Sep 30 17:49:10 compute-0 python3.9[161063]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:49:10 compute-0 podman[161080]: 2025-09-30 17:49:10.312302809 +0000 UTC m=+0.133675521 container remove a1b76de9b7c02585cc23ca3a1b3212549643fd3e5938139ec896b1bedb05375f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:49:10 compute-0 systemd[1]: libpod-conmon-a1b76de9b7c02585cc23ca3a1b3212549643fd3e5938139ec896b1bedb05375f.scope: Deactivated successfully.
Sep 30 17:49:10 compute-0 sudo[161056]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:49:10.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:10 compute-0 podman[161144]: 2025-09-30 17:49:10.459224241 +0000 UTC m=+0.038996651 container create 33712bfb05cb31a4c4593655ec6f87d2d698378ffd06a03df35888a354dfd294 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_haslett, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Sep 30 17:49:10 compute-0 systemd[1]: Started libpod-conmon-33712bfb05cb31a4c4593655ec6f87d2d698378ffd06a03df35888a354dfd294.scope.
Sep 30 17:49:10 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:49:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ec2491d7ad582f955dbcf3cd3c77182cdccb0076a3d82062d01c82f68da8c81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:49:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ec2491d7ad582f955dbcf3cd3c77182cdccb0076a3d82062d01c82f68da8c81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:49:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ec2491d7ad582f955dbcf3cd3c77182cdccb0076a3d82062d01c82f68da8c81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:49:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ec2491d7ad582f955dbcf3cd3c77182cdccb0076a3d82062d01c82f68da8c81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:49:10 compute-0 podman[161144]: 2025-09-30 17:49:10.51807602 +0000 UTC m=+0.097848440 container init 33712bfb05cb31a4c4593655ec6f87d2d698378ffd06a03df35888a354dfd294 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_haslett, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:49:10 compute-0 podman[161144]: 2025-09-30 17:49:10.524331517 +0000 UTC m=+0.104103917 container start 33712bfb05cb31a4c4593655ec6f87d2d698378ffd06a03df35888a354dfd294 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:49:10 compute-0 podman[161144]: 2025-09-30 17:49:10.52681301 +0000 UTC m=+0.106585410 container attach 33712bfb05cb31a4c4593655ec6f87d2d698378ffd06a03df35888a354dfd294 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_haslett, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:49:10 compute-0 podman[161144]: 2025-09-30 17:49:10.44206723 +0000 UTC m=+0.021839660 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:49:10 compute-0 sudo[161291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppdsxubeiurhwigdaphotehcsdxblksg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254550.4499092-436-55477257750372/AnsiballZ_stat.py'
Sep 30 17:49:10 compute-0 sudo[161291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:10 compute-0 thirsty_haslett[161207]: {
Sep 30 17:49:10 compute-0 thirsty_haslett[161207]:     "0": [
Sep 30 17:49:10 compute-0 thirsty_haslett[161207]:         {
Sep 30 17:49:10 compute-0 thirsty_haslett[161207]:             "devices": [
Sep 30 17:49:10 compute-0 thirsty_haslett[161207]:                 "/dev/loop3"
Sep 30 17:49:10 compute-0 thirsty_haslett[161207]:             ],
Sep 30 17:49:10 compute-0 thirsty_haslett[161207]:             "lv_name": "ceph_lv0",
Sep 30 17:49:10 compute-0 thirsty_haslett[161207]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:49:10 compute-0 thirsty_haslett[161207]:             "lv_size": "21470642176",
Sep 30 17:49:10 compute-0 thirsty_haslett[161207]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 17:49:10 compute-0 thirsty_haslett[161207]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:49:10 compute-0 thirsty_haslett[161207]:             "name": "ceph_lv0",
Sep 30 17:49:10 compute-0 thirsty_haslett[161207]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:49:10 compute-0 thirsty_haslett[161207]:             "tags": {
Sep 30 17:49:10 compute-0 thirsty_haslett[161207]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:49:10 compute-0 thirsty_haslett[161207]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:49:10 compute-0 thirsty_haslett[161207]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 17:49:10 compute-0 thirsty_haslett[161207]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 17:49:10 compute-0 thirsty_haslett[161207]:                 "ceph.cluster_name": "ceph",
Sep 30 17:49:10 compute-0 thirsty_haslett[161207]:                 "ceph.crush_device_class": "",
Sep 30 17:49:10 compute-0 thirsty_haslett[161207]:                 "ceph.encrypted": "0",
Sep 30 17:49:10 compute-0 thirsty_haslett[161207]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 17:49:10 compute-0 thirsty_haslett[161207]:                 "ceph.osd_id": "0",
Sep 30 17:49:10 compute-0 thirsty_haslett[161207]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 17:49:10 compute-0 thirsty_haslett[161207]:                 "ceph.type": "block",
Sep 30 17:49:10 compute-0 thirsty_haslett[161207]:                 "ceph.vdo": "0",
Sep 30 17:49:10 compute-0 thirsty_haslett[161207]:                 "ceph.with_tpm": "0"
Sep 30 17:49:10 compute-0 thirsty_haslett[161207]:             },
Sep 30 17:49:10 compute-0 thirsty_haslett[161207]:             "type": "block",
Sep 30 17:49:10 compute-0 thirsty_haslett[161207]:             "vg_name": "ceph_vg0"
Sep 30 17:49:10 compute-0 thirsty_haslett[161207]:         }
Sep 30 17:49:10 compute-0 thirsty_haslett[161207]:     ]
Sep 30 17:49:10 compute-0 thirsty_haslett[161207]: }
Sep 30 17:49:10 compute-0 ceph-mon[73755]: pgmap v302: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:49:10 compute-0 systemd[1]: libpod-33712bfb05cb31a4c4593655ec6f87d2d698378ffd06a03df35888a354dfd294.scope: Deactivated successfully.
Sep 30 17:49:10 compute-0 podman[161144]: 2025-09-30 17:49:10.8590766 +0000 UTC m=+0.438849020 container died 33712bfb05cb31a4c4593655ec6f87d2d698378ffd06a03df35888a354dfd294 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_haslett, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:49:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ec2491d7ad582f955dbcf3cd3c77182cdccb0076a3d82062d01c82f68da8c81-merged.mount: Deactivated successfully.
Sep 30 17:49:10 compute-0 podman[161144]: 2025-09-30 17:49:10.905009014 +0000 UTC m=+0.484781414 container remove 33712bfb05cb31a4c4593655ec6f87d2d698378ffd06a03df35888a354dfd294 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Sep 30 17:49:10 compute-0 python3.9[161295]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:49:10 compute-0 systemd[1]: libpod-conmon-33712bfb05cb31a4c4593655ec6f87d2d698378ffd06a03df35888a354dfd294.scope: Deactivated successfully.
Sep 30 17:49:10 compute-0 sudo[160937]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:10 compute-0 sudo[161291]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:11 compute-0 sudo[161312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:49:11 compute-0 sudo[161312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:49:11 compute-0 sudo[161312]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:11 compute-0 sudo[161350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 17:49:11 compute-0 sudo[161350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:49:11 compute-0 sudo[161435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okulchxjwfkvkueycjxrwucmkxgrnhrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254550.4499092-436-55477257750372/AnsiballZ_file.py'
Sep 30 17:49:11 compute-0 sudo[161435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v303: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:49:11 compute-0 python3.9[161437]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:49:11 compute-0 sudo[161435]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:11 compute-0 podman[161482]: 2025-09-30 17:49:11.401565932 +0000 UTC m=+0.019065470 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:49:11 compute-0 podman[161482]: 2025-09-30 17:49:11.535935599 +0000 UTC m=+0.153435117 container create 1a30d6b6ba94b3060c4659a906b944206672d57a5cdcda7921835feeeae39f15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_gauss, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 17:49:11 compute-0 systemd[1]: Started libpod-conmon-1a30d6b6ba94b3060c4659a906b944206672d57a5cdcda7921835feeeae39f15.scope.
Sep 30 17:49:11 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:49:11 compute-0 podman[161482]: 2025-09-30 17:49:11.599445625 +0000 UTC m=+0.216945163 container init 1a30d6b6ba94b3060c4659a906b944206672d57a5cdcda7921835feeeae39f15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Sep 30 17:49:11 compute-0 podman[161482]: 2025-09-30 17:49:11.605757374 +0000 UTC m=+0.223256892 container start 1a30d6b6ba94b3060c4659a906b944206672d57a5cdcda7921835feeeae39f15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_gauss, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Sep 30 17:49:11 compute-0 podman[161482]: 2025-09-30 17:49:11.608265937 +0000 UTC m=+0.225765475 container attach 1a30d6b6ba94b3060c4659a906b944206672d57a5cdcda7921835feeeae39f15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_gauss, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 17:49:11 compute-0 dazzling_gauss[161522]: 167 167
Sep 30 17:49:11 compute-0 systemd[1]: libpod-1a30d6b6ba94b3060c4659a906b944206672d57a5cdcda7921835feeeae39f15.scope: Deactivated successfully.
Sep 30 17:49:11 compute-0 podman[161482]: 2025-09-30 17:49:11.610646127 +0000 UTC m=+0.228145745 container died 1a30d6b6ba94b3060c4659a906b944206672d57a5cdcda7921835feeeae39f15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_gauss, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:49:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-cea4696615cc441fa3a830b17aa72c2bfe236b75467d24322b2e029e70aa4c9c-merged.mount: Deactivated successfully.
Sep 30 17:49:11 compute-0 podman[161482]: 2025-09-30 17:49:11.730830447 +0000 UTC m=+0.348329965 container remove 1a30d6b6ba94b3060c4659a906b944206672d57a5cdcda7921835feeeae39f15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Sep 30 17:49:11 compute-0 systemd[1]: libpod-conmon-1a30d6b6ba94b3060c4659a906b944206672d57a5cdcda7921835feeeae39f15.scope: Deactivated successfully.
Sep 30 17:49:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:11 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:11 compute-0 podman[161549]: 2025-09-30 17:49:11.910093872 +0000 UTC m=+0.047205857 container create 19b1646ab16b4da89a3cf290040cb740f016c9d9fff2ebb4241a8037e54a72ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:49:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:49:11.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:11 compute-0 systemd[1]: Started libpod-conmon-19b1646ab16b4da89a3cf290040cb740f016c9d9fff2ebb4241a8037e54a72ca.scope.
Sep 30 17:49:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:11 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1244002140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:11 compute-0 podman[161549]: 2025-09-30 17:49:11.89210216 +0000 UTC m=+0.029214155 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:49:11 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:49:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/471b3154d895758fe1fe3dc6eb43173be2f86d650bd83c3fba3be1c6768fed46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:49:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/471b3154d895758fe1fe3dc6eb43173be2f86d650bd83c3fba3be1c6768fed46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:49:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/471b3154d895758fe1fe3dc6eb43173be2f86d650bd83c3fba3be1c6768fed46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:49:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/471b3154d895758fe1fe3dc6eb43173be2f86d650bd83c3fba3be1c6768fed46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:49:12 compute-0 podman[161549]: 2025-09-30 17:49:12.007744686 +0000 UTC m=+0.144856691 container init 19b1646ab16b4da89a3cf290040cb740f016c9d9fff2ebb4241a8037e54a72ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_mahavira, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Sep 30 17:49:12 compute-0 podman[161549]: 2025-09-30 17:49:12.013435359 +0000 UTC m=+0.150547334 container start 19b1646ab16b4da89a3cf290040cb740f016c9d9fff2ebb4241a8037e54a72ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:49:12 compute-0 podman[161549]: 2025-09-30 17:49:12.01625496 +0000 UTC m=+0.153366965 container attach 19b1646ab16b4da89a3cf290040cb740f016c9d9fff2ebb4241a8037e54a72ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_mahavira, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:49:12 compute-0 sudo[161708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uezzogiemhmvnkagpfmlnwmqlioeyjea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254552.0078025-482-169198508737350/AnsiballZ_file.py'
Sep 30 17:49:12 compute-0 sudo[161708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:49:12.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:12 compute-0 python3.9[161713]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:49:12 compute-0 sudo[161708]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:12 compute-0 lvm[161792]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 17:49:12 compute-0 lvm[161792]: VG ceph_vg0 finished
Sep 30 17:49:12 compute-0 elated_mahavira[161566]: {}
Sep 30 17:49:12 compute-0 systemd[1]: libpod-19b1646ab16b4da89a3cf290040cb740f016c9d9fff2ebb4241a8037e54a72ca.scope: Deactivated successfully.
Sep 30 17:49:12 compute-0 systemd[1]: libpod-19b1646ab16b4da89a3cf290040cb740f016c9d9fff2ebb4241a8037e54a72ca.scope: Consumed 1.108s CPU time.
Sep 30 17:49:12 compute-0 podman[161549]: 2025-09-30 17:49:12.723273678 +0000 UTC m=+0.860385643 container died 19b1646ab16b4da89a3cf290040cb740f016c9d9fff2ebb4241a8037e54a72ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Sep 30 17:49:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-471b3154d895758fe1fe3dc6eb43173be2f86d650bd83c3fba3be1c6768fed46-merged.mount: Deactivated successfully.
Sep 30 17:49:12 compute-0 podman[161549]: 2025-09-30 17:49:12.805206277 +0000 UTC m=+0.942318252 container remove 19b1646ab16b4da89a3cf290040cb740f016c9d9fff2ebb4241a8037e54a72ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_mahavira, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:49:12 compute-0 systemd[1]: libpod-conmon-19b1646ab16b4da89a3cf290040cb740f016c9d9fff2ebb4241a8037e54a72ca.scope: Deactivated successfully.
Sep 30 17:49:12 compute-0 sudo[161350]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:12 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:49:12 compute-0 ceph-mon[73755]: pgmap v303: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:49:12 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:49:12 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:49:12 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:49:12 compute-0 sudo[161883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 17:49:12 compute-0 sudo[161883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:49:12 compute-0 sudo[161883]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:12 compute-0 sudo[161958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwfuslgqwnuwxiyvbxbcewjckoqnqhkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254552.7583776-498-2281933473656/AnsiballZ_stat.py'
Sep 30 17:49:13 compute-0 sudo[161958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:13 compute-0 python3.9[161960]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:49:13 compute-0 sudo[161958]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v304: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:49:13 compute-0 sudo[162037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqfuwyoerdbqgegkiddmwzotokceuimi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254552.7583776-498-2281933473656/AnsiballZ_file.py'
Sep 30 17:49:13 compute-0 sudo[162037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:13 compute-0 python3.9[162039]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:49:13 compute-0 sudo[162037]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:13 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1248004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:13 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:49:13 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:49:13 compute-0 ceph-mon[73755]: pgmap v304: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:49:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:49:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:49:13.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:49:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:13 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250003240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:14 compute-0 sudo[162191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smzzrfnuhzruhaxciyqdiaropfetrguf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254554.050566-522-156071481807600/AnsiballZ_stat.py'
Sep 30 17:49:14 compute-0 sudo[162191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:49:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:49:14.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:49:14 compute-0 python3.9[162193]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:49:14 compute-0 sudo[162191]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:14 compute-0 sudo[162270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fexukzzqhjykpqyuzygojuruvnvmveqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254554.050566-522-156071481807600/AnsiballZ_file.py'
Sep 30 17:49:14 compute-0 sudo[162270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:49:14 compute-0 python3.9[162272]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:49:14 compute-0 sudo[162270]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:15 compute-0 sshd-session[162065]: Invalid user postgres from 80.94.95.116 port 49882
Sep 30 17:49:15 compute-0 sshd-session[162065]: Connection closed by invalid user postgres 80.94.95.116 port 49882 [preauth]
Sep 30 17:49:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v305: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:49:15 compute-0 sudo[162423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwelcjpetzqcfmgfqbkzfiiocvkqyivz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254555.3653774-546-132597146782895/AnsiballZ_systemd.py'
Sep 30 17:49:15 compute-0 sudo[162423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:15 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250003240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:15 compute-0 python3.9[162426]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:49:15 compute-0 systemd[1]: Reloading.
Sep 30 17:49:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:49:15.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:15 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250003240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:16 compute-0 systemd-rc-local-generator[162456]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:49:16 compute-0 systemd-sysv-generator[162461]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:49:16 compute-0 sudo[162423]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:49:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:49:16.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:49:16 compute-0 ceph-mon[73755]: pgmap v305: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:49:16 compute-0 sudo[162616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvjovdcwrznajzetmakmelvqcqaqnrah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254556.7120264-562-159036869490024/AnsiballZ_stat.py'
Sep 30 17:49:16 compute-0 sudo[162616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:49:16.996Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 17:49:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:49:16.999Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 17:49:17 compute-0 python3.9[162618]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:49:17 compute-0 sudo[162616]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v306: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:49:17 compute-0 sudo[162695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwtjxohnxwmfliltyknezjfdhhbydzqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254556.7120264-562-159036869490024/AnsiballZ_file.py'
Sep 30 17:49:17 compute-0 sudo[162695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:17 compute-0 python3.9[162697]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:49:17 compute-0 sudo[162695]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:17 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250003240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:49:17.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:17 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1260004240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:49:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:49:18.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:49:18 compute-0 ceph-mon[73755]: pgmap v306: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:49:18 compute-0 sudo[162848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcyuosdftbyccsfleavltkmfgykakdju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254558.2167807-586-140908999432770/AnsiballZ_stat.py'
Sep 30 17:49:18 compute-0 sudo[162848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:18 compute-0 python3.9[162850]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:49:18 compute-0 sudo[162848]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:49:18] "GET /metrics HTTP/1.1" 200 46512 "" "Prometheus/2.51.0"
Sep 30 17:49:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:49:18] "GET /metrics HTTP/1.1" 200 46512 "" "Prometheus/2.51.0"
Sep 30 17:49:18 compute-0 sudo[162926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsgobslymwpgindkarydxtoopkineufm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254558.2167807-586-140908999432770/AnsiballZ_file.py'
Sep 30 17:49:18 compute-0 sudo[162926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:19 compute-0 python3.9[162928]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:49:19 compute-0 sudo[162926]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v307: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:49:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:19 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:49:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:49:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:49:19.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:49:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:19 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f123c000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:20 compute-0 sudo[163080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikwuxsxuhjhvynylqnwdgqebexshwyqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254559.67436-610-271271083325159/AnsiballZ_systemd.py'
Sep 30 17:49:20 compute-0 sudo[163080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:20 compute-0 python3.9[163082]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:49:20 compute-0 systemd[1]: Reloading.
Sep 30 17:49:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:49:20.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:20 compute-0 systemd-rc-local-generator[163107]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:49:20 compute-0 systemd-sysv-generator[163111]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:49:20 compute-0 ceph-mon[73755]: pgmap v307: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:49:20 compute-0 systemd[1]: Starting Create netns directory...
Sep 30 17:49:20 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Sep 30 17:49:20 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Sep 30 17:49:20 compute-0 systemd[1]: Finished Create netns directory.
Sep 30 17:49:20 compute-0 sudo[163080]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v308: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:49:21 compute-0 sudo[163273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgqwjaymnjhfenxtksvouptegrupchjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254561.1807368-630-76089046229232/AnsiballZ_file.py'
Sep 30 17:49:21 compute-0 sudo[163273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:21 compute-0 python3.9[163275]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:49:21 compute-0 sudo[163277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:49:21 compute-0 sudo[163277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:49:21 compute-0 sudo[163277]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:21 compute-0 sudo[163273]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:21 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250003240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:49:21.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:21 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1260004240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:49:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:49:22 compute-0 sudo[163451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrencbiptkqaultbubbkoixyskkspyyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254562.063827-646-29212761149666/AnsiballZ_stat.py'
Sep 30 17:49:22 compute-0 sudo[163451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:49:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:49:22.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:49:22 compute-0 ceph-mon[73755]: pgmap v308: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:49:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:49:22 compute-0 python3.9[163453]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:49:22 compute-0 sudo[163451]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:22 compute-0 sudo[163574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftvkrxosroepmukjgjgvknwlwgwsdafm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254562.063827-646-29212761149666/AnsiballZ_copy.py'
Sep 30 17:49:22 compute-0 sudo[163574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:23 compute-0 python3.9[163576]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759254562.063827-646-29212761149666/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:49:23 compute-0 sudo[163574]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v309: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:49:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:23 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:23 compute-0 sudo[163728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahecrgcgoqwqhdiorsybnihhyxhnkaaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254563.6985662-680-13834185738318/AnsiballZ_file.py'
Sep 30 17:49:23 compute-0 sudo[163728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:49:23.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:23 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f123c001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:24 compute-0 python3.9[163730]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:49:24 compute-0 sudo[163728]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:49:24.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:24 compute-0 ceph-mon[73755]: pgmap v309: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:49:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:49:24 compute-0 sudo[163880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptaiuzuvxvifemoqjsrxrnrzeyshlyla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254564.5646954-696-31695695310858/AnsiballZ_stat.py'
Sep 30 17:49:24 compute-0 sudo[163880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:25 compute-0 python3.9[163882]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:49:25 compute-0 sudo[163880]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v310: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:49:25 compute-0 sudo[164004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cymeagjsndqcrdqxlqbgnjlbkyekqlsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254564.5646954-696-31695695310858/AnsiballZ_copy.py'
Sep 30 17:49:25 compute-0 sudo[164004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:25 compute-0 python3.9[164006]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759254564.5646954-696-31695695310858/.source.json _original_basename=.3d3_x4lx follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:49:25 compute-0 sudo[164004]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:25 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250003240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:49:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:49:25.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:49:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:25 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1260004240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:26 compute-0 sudo[164157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxkkvpoloxbkcfxtzlpkepvyyqgovvys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254565.987784-726-163291095416952/AnsiballZ_file.py'
Sep 30 17:49:26 compute-0 sudo[164157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:49:26.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:26 compute-0 python3.9[164159]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:49:26 compute-0 sudo[164157]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:26 compute-0 ceph-mon[73755]: pgmap v310: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:49:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:49:27.000Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:49:27 compute-0 sudo[164309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvqmwzwiipjudbvzrzxbmgikpbusuuyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254566.795464-742-182679598905123/AnsiballZ_stat.py'
Sep 30 17:49:27 compute-0 sudo[164309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:27 compute-0 sudo[164309]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v311: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:49:27 compute-0 sudo[164433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msrmiiywtukdohihchoxvwkcupgedfhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254566.795464-742-182679598905123/AnsiballZ_copy.py'
Sep 30 17:49:27 compute-0 sudo[164433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:27 compute-0 sudo[164433]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:27 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:49:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:49:27.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:49:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:27 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f123c001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:49:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:49:28.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:49:28 compute-0 sudo[164586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbokbmntzjjagrcepeaytmeneomyjuin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254568.2887187-776-19021817081280/AnsiballZ_container_config_data.py'
Sep 30 17:49:28 compute-0 sudo[164586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:28 compute-0 ceph-mon[73755]: pgmap v311: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:49:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:49:28] "GET /metrics HTTP/1.1" 200 46508 "" "Prometheus/2.51.0"
Sep 30 17:49:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:49:28] "GET /metrics HTTP/1.1" 200 46508 "" "Prometheus/2.51.0"
Sep 30 17:49:28 compute-0 python3.9[164588]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Sep 30 17:49:28 compute-0 sudo[164586]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v312: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:49:29 compute-0 sudo[164739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jggrpoyjiwyjgcykmufujokvmsipqvqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254569.1946907-794-166249484661663/AnsiballZ_container_config_hash.py'
Sep 30 17:49:29 compute-0 sudo[164739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:29 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250003240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:29 compute-0 python3.9[164741]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Sep 30 17:49:29 compute-0 sudo[164739]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:49:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:49:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:49:29.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:49:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:29 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1260004240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:30 compute-0 ceph-mon[73755]: pgmap v312: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:49:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:49:30.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:30 compute-0 sudo[164892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nitszothhzervbefwpakimynwjwxgcic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254570.215565-812-193936926575450/AnsiballZ_podman_container_info.py'
Sep 30 17:49:30 compute-0 sudo[164892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:30 compute-0 python3.9[164894]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Sep 30 17:49:30 compute-0 sudo[164892]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v313: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:49:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:31 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:49:31.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:31 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f123c001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:32 compute-0 sudo[165073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phhrgpqbiopweknadwwxmngqswyccysc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759254571.8793833-838-195565805035779/AnsiballZ_edpm_container_manage.py'
Sep 30 17:49:32 compute-0 sudo[165073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:49:32.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:32 compute-0 ceph-mon[73755]: pgmap v313: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:49:32 compute-0 python3[165075]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Sep 30 17:49:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v314: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:49:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:33 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250003240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:49:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:49:33.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:49:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:33 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250003240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:49:34.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:49:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v315: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:49:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:35 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250003240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:49:35.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:35 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f123c002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:36 compute-0 ceph-mon[73755]: pgmap v314: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:49:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:49:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:49:36.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:49:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:49:37.001Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 17:49:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:49:37.002Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 17:49:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:49:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:49:37 compute-0 ceph-mon[73755]: pgmap v315: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:49:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:49:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:49:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:49:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:49:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v316: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:49:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:49:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:49:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:37 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1260004240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:49:37.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:37 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250003240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:49:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:49:38.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:38 compute-0 podman[165162]: 2025-09-30 17:49:38.570425877 +0000 UTC m=+1.101869652 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0)
Sep 30 17:49:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:49:38 compute-0 ceph-mon[73755]: pgmap v316: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:49:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:49:38] "GET /metrics HTTP/1.1" 200 46508 "" "Prometheus/2.51.0"
Sep 30 17:49:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:49:38] "GET /metrics HTTP/1.1" 200 46508 "" "Prometheus/2.51.0"
Sep 30 17:49:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v317: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:49:39 compute-0 kernel: ganesha.nfsd[157230]: segfault at 50 ip 00007f1322e5f32e sp 00007f12df7fd210 error 4 in libntirpc.so.5.8[7f1322e44000+2c000] likely on CPU 5 (core 0, socket 5)
Sep 30 17:49:39 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Sep 30 17:49:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[145915]: 30/09/2025 17:49:39 : epoch 68dc17b3 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250003240 fd 38 proxy ignored for local
Sep 30 17:49:39 compute-0 systemd[1]: Started Process Core Dump (PID 165227/UID 0).
Sep 30 17:49:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:49:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:49:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:49:39.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:49:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:49:40.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/174941 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 17:49:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v318: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:49:41 compute-0 podman[165090]: 2025-09-30 17:49:41.65084115 +0000 UTC m=+8.977030091 image pull 8925e336c8a9c8e5b40d98e4715ad60992d6f21ad0a398170a686c34f922c024 38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Sep 30 17:49:41 compute-0 sudo[165268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:49:41 compute-0 sudo[165268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:49:41 compute-0 sudo[165268]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:41 compute-0 podman[165255]: 2025-09-30 17:49:41.768699402 +0000 UTC m=+0.022630819 image pull 8925e336c8a9c8e5b40d98e4715ad60992d6f21ad0a398170a686c34f922c024 38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Sep 30 17:49:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:49:41.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:49:42.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:42 compute-0 systemd-coredump[165228]: Process 145923 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 57:
                                                    #0  0x00007f1322e5f32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Sep 30 17:49:42 compute-0 systemd[1]: systemd-coredump@6-165227-0.service: Deactivated successfully.
Sep 30 17:49:42 compute-0 systemd[1]: systemd-coredump@6-165227-0.service: Consumed 1.186s CPU time.
Sep 30 17:49:43 compute-0 podman[165297]: 2025-09-30 17:49:43.019637509 +0000 UTC m=+0.080585506 container died 0529ffb5cb030d5c033521a9a1517a83434d1d37b59a640422637d81c7f55fa3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Sep 30 17:49:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-bffd7de9cb0fdc1881cfa2ffe97a9dccf33440ed0934b7372fe92a2196aa2cfb-merged.mount: Deactivated successfully.
Sep 30 17:49:43 compute-0 podman[165297]: 2025-09-30 17:49:43.367770087 +0000 UTC m=+0.428718004 container remove 0529ffb5cb030d5c033521a9a1517a83434d1d37b59a640422637d81c7f55fa3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:49:43 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Main process exited, code=exited, status=139/n/a
Sep 30 17:49:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v319: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:49:43 compute-0 podman[165255]: 2025-09-30 17:49:43.421448066 +0000 UTC m=+1.675379463 container create 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250930, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2)
Sep 30 17:49:43 compute-0 python3[165075]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z 38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Sep 30 17:49:43 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Failed with result 'exit-code'.
Sep 30 17:49:43 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Consumed 1.559s CPU time.
Sep 30 17:49:43 compute-0 sudo[165073]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:43 compute-0 ceph-mon[73755]: pgmap v317: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:49:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:49:43.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:49:44.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:49:45 compute-0 ceph-mon[73755]: pgmap v318: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:49:45 compute-0 ceph-mon[73755]: pgmap v319: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:49:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v320: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:49:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:49:45.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:46 compute-0 ceph-mon[73755]: pgmap v320: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:49:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:49:46.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:49:47.002Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:49:47 compute-0 sudo[165518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdoopepcehgtqiswlmmxyqtujgckmuaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254586.810407-854-95090613934880/AnsiballZ_stat.py'
Sep 30 17:49:47 compute-0 sudo[165518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:47 compute-0 python3.9[165520]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:49:47 compute-0 sudo[165518]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v321: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:49:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/174947 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 17:49:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [ALERT] 272/174947 (4) : backend 'backend' has no server available!
Sep 30 17:49:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:49:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:49:47.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:49:48 compute-0 sudo[165674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqdtiotvhwassuxewusmlfbygglzuewj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254587.8078325-872-115401337349747/AnsiballZ_file.py'
Sep 30 17:49:48 compute-0 sudo[165674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:48 compute-0 python3.9[165676]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:49:48 compute-0 sudo[165674]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:48 compute-0 ceph-mon[73755]: pgmap v321: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:49:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:49:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:49:48.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:49:48 compute-0 sudo[165750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssfiaqtzhxnphaowjleabriwvkhkbmlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254587.8078325-872-115401337349747/AnsiballZ_stat.py'
Sep 30 17:49:48 compute-0 sudo[165750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:48 compute-0 python3.9[165752]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:49:48 compute-0 sudo[165750]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:49:48] "GET /metrics HTTP/1.1" 200 46508 "" "Prometheus/2.51.0"
Sep 30 17:49:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:49:48] "GET /metrics HTTP/1.1" 200 46508 "" "Prometheus/2.51.0"
Sep 30 17:49:49 compute-0 sudo[165901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxujcllfinyfphaejphrdiaewkzpewac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254588.8446062-872-267555914507556/AnsiballZ_copy.py'
Sep 30 17:49:49 compute-0 sudo[165901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v322: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:49:49 compute-0 python3.9[165904]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759254588.8446062-872-267555914507556/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:49:49 compute-0 sudo[165901]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:49:49 compute-0 sudo[165979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bccwanvrmcyqvokmkljgkvyhgkwbwnul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254588.8446062-872-267555914507556/AnsiballZ_systemd.py'
Sep 30 17:49:49 compute-0 sudo[165979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:49:49.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:50 compute-0 python3.9[165981]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Sep 30 17:49:50 compute-0 systemd[1]: Reloading.
Sep 30 17:49:50 compute-0 systemd-rc-local-generator[166011]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:49:50 compute-0 systemd-sysv-generator[166015]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:49:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:49:50.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:50 compute-0 ceph-mon[73755]: pgmap v322: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:49:50 compute-0 sudo[165979]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:50 compute-0 sudo[166091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoipeqwnqevaotobkbrrhivnnoqeyqmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254588.8446062-872-267555914507556/AnsiballZ_systemd.py'
Sep 30 17:49:50 compute-0 sudo[166091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:49:51 compute-0 python3.9[166093]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:49:51 compute-0 systemd[1]: Reloading.
Sep 30 17:49:51 compute-0 systemd-rc-local-generator[166122]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:49:51 compute-0 systemd-sysv-generator[166125]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:49:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v323: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:49:51 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Sep 30 17:49:51 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:49:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa37c31fa531850d6f6f1ffd5fc1d3824aace6d9ac91173ee2d12389cdd7a07/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Sep 30 17:49:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa37c31fa531850d6f6f1ffd5fc1d3824aace6d9ac91173ee2d12389cdd7a07/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Sep 30 17:49:51 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b.
Sep 30 17:49:51 compute-0 podman[166135]: 2025-09-30 17:49:51.783053956 +0000 UTC m=+0.153249114 container init 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2)
Sep 30 17:49:51 compute-0 podman[166135]: 2025-09-30 17:49:51.805854355 +0000 UTC m=+0.176049503 container start 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_metadata_agent)
Sep 30 17:49:51 compute-0 ovn_metadata_agent[166152]: + sudo -E kolla_set_configs
Sep 30 17:49:51 compute-0 edpm-start-podman-container[166135]: ovn_metadata_agent
Sep 30 17:49:51 compute-0 edpm-start-podman-container[166134]: Creating additional drop-in dependency for "ovn_metadata_agent" (422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b)
Sep 30 17:49:51 compute-0 systemd[1]: Reloading.
Sep 30 17:49:51 compute-0 podman[166157]: 2025-09-30 17:49:51.918048446 +0000 UTC m=+0.095968229 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent)
Sep 30 17:49:51 compute-0 ovn_metadata_agent[166152]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Sep 30 17:49:51 compute-0 ovn_metadata_agent[166152]: INFO:__main__:Validating config file
Sep 30 17:49:51 compute-0 ovn_metadata_agent[166152]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Sep 30 17:49:51 compute-0 ovn_metadata_agent[166152]: INFO:__main__:Copying service configuration files
Sep 30 17:49:51 compute-0 ovn_metadata_agent[166152]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Sep 30 17:49:51 compute-0 ovn_metadata_agent[166152]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Sep 30 17:49:51 compute-0 ovn_metadata_agent[166152]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Sep 30 17:49:51 compute-0 ovn_metadata_agent[166152]: INFO:__main__:Writing out command to execute
Sep 30 17:49:51 compute-0 ovn_metadata_agent[166152]: INFO:__main__:Setting permission for /var/lib/neutron
Sep 30 17:49:51 compute-0 ovn_metadata_agent[166152]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Sep 30 17:49:51 compute-0 ovn_metadata_agent[166152]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Sep 30 17:49:51 compute-0 ovn_metadata_agent[166152]: INFO:__main__:Setting permission for /var/lib/neutron/external
Sep 30 17:49:51 compute-0 ovn_metadata_agent[166152]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Sep 30 17:49:51 compute-0 ovn_metadata_agent[166152]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Sep 30 17:49:51 compute-0 ovn_metadata_agent[166152]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Sep 30 17:49:51 compute-0 ovn_metadata_agent[166152]: ++ cat /run_command
Sep 30 17:49:51 compute-0 ovn_metadata_agent[166152]: + CMD=neutron-ovn-metadata-agent
Sep 30 17:49:51 compute-0 ovn_metadata_agent[166152]: + ARGS=
Sep 30 17:49:51 compute-0 ovn_metadata_agent[166152]: + sudo kolla_copy_cacerts
Sep 30 17:49:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:49:51.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:51 compute-0 ovn_metadata_agent[166152]: + [[ ! -n '' ]]
Sep 30 17:49:51 compute-0 ovn_metadata_agent[166152]: + . kolla_extend_start
Sep 30 17:49:51 compute-0 ovn_metadata_agent[166152]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Sep 30 17:49:51 compute-0 ovn_metadata_agent[166152]: Running command: 'neutron-ovn-metadata-agent'
Sep 30 17:49:51 compute-0 ovn_metadata_agent[166152]: + umask 0022
Sep 30 17:49:51 compute-0 ovn_metadata_agent[166152]: + exec neutron-ovn-metadata-agent
Sep 30 17:49:52 compute-0 systemd-rc-local-generator[166230]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:49:52 compute-0 systemd-sysv-generator[166234]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:49:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:49:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:49:52 compute-0 systemd[1]: Started ovn_metadata_agent container.
Sep 30 17:49:52 compute-0 sudo[166091]: pam_unix(sudo:session): session closed for user root
Sep 30 17:49:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:49:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:49:52.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:49:52 compute-0 ceph-mon[73755]: pgmap v323: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:49:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:49:53 compute-0 sshd-session[156880]: Connection closed by 192.168.122.30 port 38130
Sep 30 17:49:53 compute-0 sshd-session[156877]: pam_unix(sshd:session): session closed for user zuul
Sep 30 17:49:53 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Sep 30 17:49:53 compute-0 systemd[1]: session-53.scope: Consumed 53.034s CPU time.
Sep 30 17:49:53 compute-0 systemd-logind[811]: Session 53 logged out. Waiting for processes to exit.
Sep 30 17:49:53 compute-0 systemd-logind[811]: Removed session 53.
Sep 30 17:49:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v324: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:49:53 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Scheduled restart job, restart counter is at 7.
Sep 30 17:49:53 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:49:53 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Consumed 1.559s CPU time.
Sep 30 17:49:53 compute-0 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 17:49:53 compute-0 podman[166316]: 2025-09-30 17:49:53.971548656 +0000 UTC m=+0.063765461 container create 19fbc40b02e323367f93f3b9a723998ed14058b2255ee1d6e782e163789d2cb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 17:49:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:49:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:49:53.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:49:54 compute-0 podman[166316]: 2025-09-30 17:49:53.938759133 +0000 UTC m=+0.030976028 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:49:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3247a9c0ecc6ea78403ada61590e9d4554f42b75d80557f90fb1eda7c85e750/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Sep 30 17:49:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3247a9c0ecc6ea78403ada61590e9d4554f42b75d80557f90fb1eda7c85e750/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:49:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3247a9c0ecc6ea78403ada61590e9d4554f42b75d80557f90fb1eda7c85e750/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:49:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3247a9c0ecc6ea78403ada61590e9d4554f42b75d80557f90fb1eda7c85e750/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-0.syzvbh-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:49:54 compute-0 podman[166316]: 2025-09-30 17:49:54.066810946 +0000 UTC m=+0.159027781 container init 19fbc40b02e323367f93f3b9a723998ed14058b2255ee1d6e782e163789d2cb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Sep 30 17:49:54 compute-0 podman[166316]: 2025-09-30 17:49:54.072933251 +0000 UTC m=+0.165150056 container start 19fbc40b02e323367f93f3b9a723998ed14058b2255ee1d6e782e163789d2cb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:49:54 compute-0 bash[166316]: 19fbc40b02e323367f93f3b9a723998ed14058b2255ee1d6e782e163789d2cb5
Sep 30 17:49:54 compute-0 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:49:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:49:54 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Sep 30 17:49:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:49:54 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Sep 30 17:49:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:49:54 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Sep 30 17:49:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:49:54 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Sep 30 17:49:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:49:54 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Sep 30 17:49:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:49:54 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Sep 30 17:49:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:49:54 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.202 166158 INFO neutron.common.config [-] Logging enabled!
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.202 166158 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 26.1.0.dev268
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.202 166158 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.12/site-packages/neutron/common/config.py:124
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.203 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.203 166158 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.203 166158 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.203 166158 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.204 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.204 166158 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.204 166158 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.204 166158 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.204 166158 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.204 166158 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.204 166158 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.204 166158 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.204 166158 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.204 166158 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.204 166158 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.204 166158 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.204 166158 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.204 166158 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.205 166158 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.205 166158 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.205 166158 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.205 166158 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.205 166158 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.205 166158 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.205 166158 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.205 166158 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.205 166158 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.205 166158 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.205 166158 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.205 166158 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.205 166158 DEBUG neutron.agent.ovn.metadata_agent [-] enable_signals                 = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.205 166158 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.206 166158 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.206 166158 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.206 166158 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.206 166158 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.206 166158 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.206 166158 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.206 166158 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.206 166158 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.206 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.206 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.206 166158 DEBUG neutron.agent.ovn.metadata_agent [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.207 166158 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.207 166158 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.207 166158 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.207 166158 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.207 166158 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.207 166158 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.207 166158 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.207 166158 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.207 166158 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.207 166158 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.207 166158 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.207 166158 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.207 166158 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.208 166158 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.208 166158 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.208 166158 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.208 166158 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.208 166158 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.208 166158 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.208 166158 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.208 166158 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.208 166158 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.208 166158 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.208 166158 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.209 166158 DEBUG neutron.agent.ovn.metadata_agent [-] my_ip                          = 38.102.83.202 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.209 166158 DEBUG neutron.agent.ovn.metadata_agent [-] my_ipv6                        = ::1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.209 166158 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.209 166158 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.209 166158 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.209 166158 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.209 166158 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.209 166158 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.209 166158 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.209 166158 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.209 166158 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.210 166158 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.210 166158 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.210 166158 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.210 166158 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.210 166158 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.210 166158 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.210 166158 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.210 166158 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.210 166158 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.210 166158 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.210 166158 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.210 166158 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.210 166158 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.210 166158 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.211 166158 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.211 166158 DEBUG neutron.agent.ovn.metadata_agent [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.211 166158 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.211 166158 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.211 166158 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.211 166158 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.211 166158 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.211 166158 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.211 166158 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.211 166158 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.211 166158 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.211 166158 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_qinq                      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.211 166158 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.211 166158 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.212 166158 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.212 166158 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.212 166158 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.212 166158 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.212 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.212 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.212 166158 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.212 166158 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.212 166158 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.212 166158 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.212 166158 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.212 166158 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.213 166158 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.213 166158 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.213 166158 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.213 166158 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_requests        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.213 166158 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.213 166158 DEBUG neutron.agent.ovn.metadata_agent [-] profiler_jaeger.process_tags   = {} log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.213 166158 DEBUG neutron.agent.ovn.metadata_agent [-] profiler_jaeger.service_name_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.213 166158 DEBUG neutron.agent.ovn.metadata_agent [-] profiler_otlp.service_name_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.213 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.213 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.213 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.213 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.213 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.213 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.214 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.214 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.214 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.214 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.214 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.214 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.214 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.214 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.214 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.214 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_timeout     = 60.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.214 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.214 166158 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.214 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.215 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.215 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.215 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.log_daemon_traceback   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.215 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.215 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.215 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.215 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.215 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.215 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.215 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.log_daemon_traceback = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.215 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.215 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.215 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.216 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.216 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.216 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.216 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.log_daemon_traceback = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.216 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.216 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.216 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.216 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.216 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.216 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.217 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.log_daemon_traceback = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.217 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.217 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.217 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.217 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.217 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.217 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.217 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.log_daemon_traceback = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.217 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.217 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.218 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.218 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.218 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.218 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.218 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.log_daemon_traceback = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.218 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.218 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.218 166158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.218 166158 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.218 166158 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.219 166158 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.219 166158 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.219 166158 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.219 166158 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.219 166158 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.219 166158 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.219 166158 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.219 166158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.219 166158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mappings            = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.219 166158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.datapath_type              = system log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.220 166158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_flood                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.220 166158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_flood_reports         = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.220 166158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_flood_unregistered    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.220 166158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.220 166158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.int_peer_patch_port        = patch-tun log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.220 166158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.integration_bridge         = br-int log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.220 166158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.local_ip                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.220 166158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.of_connect_timeout         = 300 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.220 166158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.of_inactivity_probe        = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.220 166158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.of_listen_address          = 127.0.0.1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.221 166158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.of_listen_port             = 6633 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.221 166158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.of_request_timeout         = 300 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.221 166158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.openflow_processed_per_port = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.221 166158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.221 166158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_debug                = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.221 166158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.221 166158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.qos_meter_bandwidth        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.221 166158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.resource_provider_bandwidths = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.221 166158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.resource_provider_default_hypervisor = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.221 166158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.resource_provider_hypervisors = {} log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.222 166158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.resource_provider_inventory_defaults = {'allocation_ratio': 1.0, 'min_unit': 1, 'step_size': 1, 'reserved': 0} log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.222 166158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.resource_provider_packet_processing_inventory_defaults = {'allocation_ratio': 1.0, 'min_unit': 1, 'step_size': 1, 'reserved': 0} log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.222 166158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.resource_provider_packet_processing_with_direction = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.222 166158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.resource_provider_packet_processing_without_direction = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.222 166158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ssl_ca_cert_file           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.222 166158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ssl_cert_file              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.222 166158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ssl_key_file               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.222 166158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.tun_peer_patch_port        = patch-int log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.222 166158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.tunnel_bridge              = br-tun log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.223 166158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.vhostuser_socket_dir       = /var/run/openvswitch log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.223 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.223 166158 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.223 166158 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.223 166158 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.223 166158 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.223 166158 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.223 166158 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.223 166158 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.224 166158 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.224 166158 DEBUG neutron.agent.ovn.metadata_agent [-] agent.extensions               = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.224 166158 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.224 166158 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.224 166158 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.224 166158 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.224 166158 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.224 166158 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.224 166158 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.224 166158 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.224 166158 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.224 166158 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.224 166158 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.225 166158 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.225 166158 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.225 166158 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.225 166158 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.225 166158 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.225 166158 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.225 166158 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.225 166158 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.225 166158 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.225 166158 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.225 166158 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.226 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.226 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.226 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.226 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.226 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.226 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.226 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.226 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.226 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.226 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.226 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.227 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.227 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.227 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.227 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.227 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.retriable_status_codes  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.227 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.227 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.227 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.227 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.227 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.227 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.228 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.228 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.228 166158 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.228 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.broadcast_arps_to_all_routers = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.228 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.228 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.228 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_records_ovn_owned      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.228 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.228 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.228 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.fdb_age_threshold          = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.228 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.live_migration_activation_strategy = rarp log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.228 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.localnet_learn_fdb         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.228 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.mac_binding_age_threshold  = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.229 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.229 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.229 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.229 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.229 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.229 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.229 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.229 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.229 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = ['tcp:127.0.0.1:6641'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.229 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.229 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_router_indirect_snat   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.229 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.229 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.230 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ['ssl:ovsdbserver-sb.openstack.svc:6642'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.230 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.230 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.230 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.230 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.230 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.230 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.230 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn_nb_global.fdb_removal_limit = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.230 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn_nb_global.ignore_lsp_down  = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.230 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn_nb_global.mac_binding_removal_limit = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.230 166158 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_rate_limiting.base_query_rate_limit = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.230 166158 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_rate_limiting.base_window_duration = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.230 166158 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_rate_limiting.burst_query_rate_limit = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.231 166158 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_rate_limiting.burst_window_duration = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.231 166158 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_rate_limiting.ip_versions = [4] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.231 166158 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_rate_limiting.rate_limit_enabled = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.231 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.231 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.231 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.231 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.231 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.231 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.231 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.231 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.231 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.232 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.232 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.232 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.hostname = compute-0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.232 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.232 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.232 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.232 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.232 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_splay = 0.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.232 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.processname = neutron-ovn-metadata-agent log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.232 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.232 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.232 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.233 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.233 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.233 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.233 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.233 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.233 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.233 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.233 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_stream_fanout = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.233 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.233 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_quorum_queue = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.233 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.233 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.234 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.234 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.234 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.234 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.234 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.234 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.use_queue_manager = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.234 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.234 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.234 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.234 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.234 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.234 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.235 166158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.235 166158 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Sep 30 17:49:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:49:54 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.243 166158 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.243 166158 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.243 166158 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.243 166158 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.243 166158 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.252 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name b0398922-aff5-46ba-afa7-58d09e28293c (UUID: b0398922-aff5-46ba-afa7-58d09e28293c) and ovn bridge br-int. _load_config /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:419
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.274 166158 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.274 166158 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.274 166158 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Port_Binding.logical_port autocreate_indices /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.274 166158 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.274 166158 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.278 166158 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.281 166158 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.286 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'b0398922-aff5-46ba-afa7-58d09e28293c'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], external_ids={}, name=b0398922-aff5-46ba-afa7-58d09e28293c, nb_cfg_timestamp=1759254525794, nb_cfg=1) old= matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 17:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.289 166158 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpn3_vjzv9/privsep.sock']
Sep 30 17:49:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:49:54.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:54 compute-0 ceph-mon[73755]: pgmap v324: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:49:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:49:54 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Sep 30 17:49:55 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:55.029 166158 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Sep 30 17:49:55 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:55.029 166158 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpn3_vjzv9/privsep.sock __init__ /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:377
Sep 30 17:49:55 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.865 166383 INFO oslo.privsep.daemon [-] privsep daemon starting
Sep 30 17:49:55 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.873 166383 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Sep 30 17:49:55 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.877 166383 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Sep 30 17:49:55 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:54.877 166383 INFO oslo.privsep.daemon [-] privsep daemon running as pid 166383
Sep 30 17:49:55 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:55.031 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[886ac4a5-27a8-44c8-9777-d898f59f02f4]: (2,) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 17:49:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v325: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:49:55 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:55.528 166383 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 17:49:55 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:55.529 166383 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 17:49:55 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:55.529 166383 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 17:49:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:49:55.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:56.004 166383 INFO oslo_service.backend [-] Loading backend: eventlet
Sep 30 17:49:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:56.010 166383 INFO oslo_service.backend [-] Backend 'eventlet' successfully loaded and cached.
Sep 30 17:49:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:56.048 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[d13b8757-81dc-41b4-82ae-b7b4ed7a5d23]: (4, []) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 17:49:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:56.051 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, column=external_ids, values=({'neutron:ovn-metadata-id': '743b963b-e7e8-5710-a876-d0dee39d5d56'},)) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 17:49:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:56.064 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 17:49:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:49:56.069 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '1'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 17:49:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:49:56.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:56 compute-0 ceph-mon[73755]: pgmap v325: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:49:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:49:57.003Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:49:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v326: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:49:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:49:57.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:49:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:49:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:49:58.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:49:58 compute-0 sshd-session[166392]: Accepted publickey for zuul from 192.168.122.30 port 48070 ssh2: ECDSA SHA256:COqAmbdBUMx+UyDa5XAop4Bi6qvwvYhJQ16rCseuHkQ
Sep 30 17:49:58 compute-0 systemd-logind[811]: New session 54 of user zuul.
Sep 30 17:49:58 compute-0 systemd[1]: Started Session 54 of User zuul.
Sep 30 17:49:58 compute-0 ceph-mon[73755]: pgmap v326: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:49:58 compute-0 sshd-session[166392]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 17:49:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:49:58] "GET /metrics HTTP/1.1" 200 46512 "" "Prometheus/2.51.0"
Sep 30 17:49:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:49:58] "GET /metrics HTTP/1.1" 200 46512 "" "Prometheus/2.51.0"
Sep 30 17:49:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v327: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 682 B/s rd, 341 B/s wr, 1 op/s
Sep 30 17:49:59 compute-0 python3.9[166546]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:49:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:49:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:49:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:49:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:49:59.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:00 compute-0 ceph-mon[73755]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore
Sep 30 17:50:00 compute-0 ceph-mon[73755]: log_channel(cluster) log [WRN] : [WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
Sep 30 17:50:00 compute-0 ceph-mon[73755]: log_channel(cluster) log [WRN] :      osd.1 observed slow operation indications in BlueStore
Sep 30 17:50:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:00 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 17:50:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:00 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 17:50:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:50:00.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:00 compute-0 sudo[166701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgjsvdidipajsohvlyvhpjgzrrbxhial ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254600.1952658-48-253234837241695/AnsiballZ_command.py'
Sep 30 17:50:00 compute-0 sudo[166701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:00 compute-0 ceph-mon[73755]: pgmap v327: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 682 B/s rd, 341 B/s wr, 1 op/s
Sep 30 17:50:00 compute-0 ceph-mon[73755]: Health detail: HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore
Sep 30 17:50:00 compute-0 ceph-mon[73755]: [WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
Sep 30 17:50:00 compute-0 ceph-mon[73755]:      osd.1 observed slow operation indications in BlueStore
Sep 30 17:50:00 compute-0 python3.9[166703]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:50:00 compute-0 sudo[166701]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v328: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 682 B/s rd, 341 B/s wr, 1 op/s
Sep 30 17:50:01 compute-0 sudo[166818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:50:01 compute-0 sudo[166818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:50:01 compute-0 sudo[166818]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:01 compute-0 sudo[166893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqhgpblpndojakoxmzgwnpmsxvbmrsdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254601.3558078-70-273930948301491/AnsiballZ_systemd_service.py'
Sep 30 17:50:01 compute-0 sudo[166893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:50:01.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:02 compute-0 python3.9[166895]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Sep 30 17:50:02 compute-0 systemd[1]: Reloading.
Sep 30 17:50:02 compute-0 systemd-rc-local-generator[166921]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:50:02 compute-0 systemd-sysv-generator[166924]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:50:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:50:02.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:02 compute-0 sudo[166893]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:02 compute-0 ceph-mon[73755]: pgmap v328: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 682 B/s rd, 341 B/s wr, 1 op/s
Sep 30 17:50:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:03 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=0
Sep 30 17:50:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:03 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Sep 30 17:50:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:03 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Sep 30 17:50:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:03 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Sep 30 17:50:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:03 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Sep 30 17:50:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:03 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Sep 30 17:50:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:03 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Sep 30 17:50:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:03 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:50:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:03 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:50:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:03 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:50:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:03 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Sep 30 17:50:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:03 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:50:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:03 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Sep 30 17:50:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:03 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Sep 30 17:50:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:03 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Sep 30 17:50:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:03 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Sep 30 17:50:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:03 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Sep 30 17:50:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:03 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Sep 30 17:50:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:03 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Sep 30 17:50:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:03 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Sep 30 17:50:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:03 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Sep 30 17:50:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:03 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Sep 30 17:50:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:03 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Sep 30 17:50:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:03 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Sep 30 17:50:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:03 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 17:50:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:03 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Sep 30 17:50:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:03 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 17:50:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:03 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 17:50:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v329: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:50:03 compute-0 python3.9[167093]: ansible-ansible.builtin.service_facts Invoked
Sep 30 17:50:03 compute-0 network[167111]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Sep 30 17:50:03 compute-0 network[167112]: 'network-scripts' will be removed from distribution in near future.
Sep 30 17:50:03 compute-0 network[167113]: It is advised to switch to 'NetworkManager' instead for network management.
Sep 30 17:50:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:03 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd368000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:50:03.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:04 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd354001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:50:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:50:04.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:50:04 compute-0 ceph-mon[73755]: pgmap v329: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:50:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:50:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v330: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:50:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/175005 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 17:50:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:05 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd33c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:50:05.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:06 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd368000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:06 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 17:50:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:50:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:50:06.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:50:06 compute-0 ceph-mon[73755]: pgmap v330: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:50:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:50:07.004Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:50:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:50:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_17:50:07
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'backups', '.rgw.root', 'volumes', '.nfs', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images']
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v331: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:50:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:50:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:50:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:07 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd344000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:50:07.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:08 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd354002470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:50:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:50:08.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:50:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:50:08] "GET /metrics HTTP/1.1" 200 46510 "" "Prometheus/2.51.0"
Sep 30 17:50:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:50:08] "GET /metrics HTTP/1.1" 200 46510 "" "Prometheus/2.51.0"
Sep 30 17:50:08 compute-0 ceph-mon[73755]: pgmap v331: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:50:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v332: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.9 KiB/s rd, 853 B/s wr, 2 op/s
Sep 30 17:50:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:09 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd33c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:50:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:50:09.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:10 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd3680021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:10 compute-0 sudo[167401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dllvgnrtycrmtxyrdojzusqjqsmycmwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254609.7925696-108-81866848831423/AnsiballZ_systemd_service.py'
Sep 30 17:50:10 compute-0 sudo[167401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:10 compute-0 ceph-mon[73755]: pgmap v332: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.9 KiB/s rd, 853 B/s wr, 2 op/s
Sep 30 17:50:10 compute-0 podman[167359]: 2025-09-30 17:50:10.181940522 +0000 UTC m=+0.146747619 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 17:50:10 compute-0 python3.9[167407]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:50:10 compute-0 sudo[167401]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:50:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:50:10.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:50:10 compute-0 sudo[167564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxlyzwmezaqvqsdjybhkumgmaryzhyow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254610.5737917-108-253719896416964/AnsiballZ_systemd_service.py'
Sep 30 17:50:10 compute-0 sudo[167564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:11 compute-0 python3.9[167566]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:50:11 compute-0 sudo[167564]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v333: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Sep 30 17:50:11 compute-0 sudo[167719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgugixbvvwswmxzawtywtdbchowpzqrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254611.3481278-108-253142034069882/AnsiballZ_systemd_service.py'
Sep 30 17:50:11 compute-0 sudo[167719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:11 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd344001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:11 compute-0 python3.9[167721]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:50:11 compute-0 sudo[167719]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:50:11.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:12 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd354002470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:12 compute-0 sudo[167872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjvmmoamzepvrtcbezrzjnjqqxaiwmvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254612.1778488-108-139993288451561/AnsiballZ_systemd_service.py'
Sep 30 17:50:12 compute-0 sudo[167872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:50:12.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:12 compute-0 python3.9[167874]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:50:12 compute-0 sudo[167872]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:12 compute-0 ceph-mon[73755]: pgmap v333: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Sep 30 17:50:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/175013 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 17:50:13 compute-0 sudo[168031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmqmvgarjcmopjymnmwqfkofdvykkkgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254612.8688293-108-120356848257534/AnsiballZ_systemd_service.py'
Sep 30 17:50:13 compute-0 sudo[168031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:13 compute-0 sudo[168022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:50:13 compute-0 sudo[168022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:50:13 compute-0 sudo[168022]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:13 compute-0 sudo[168053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 17:50:13 compute-0 sudo[168053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:50:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v334: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 70 KiB/s rd, 511 B/s wr, 116 op/s
Sep 30 17:50:13 compute-0 python3.9[168050]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:50:13 compute-0 sudo[168031]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:13 compute-0 sudo[168053]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:13 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd33c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:13 compute-0 sudo[168261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsnasvebhofauiumfkcaakmsqhxfxftj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254613.583371-108-14002938496517/AnsiballZ_systemd_service.py'
Sep 30 17:50:13 compute-0 sudo[168261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:13 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:50:13 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:50:13 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 17:50:13 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:50:13 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 17:50:13 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:50:13 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 17:50:13 compute-0 ceph-mon[73755]: pgmap v334: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 70 KiB/s rd, 511 B/s wr, 116 op/s
Sep 30 17:50:13 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:50:13 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:50:13 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:50:13 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 17:50:13 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:50:13 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 17:50:13 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:50:13 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:50:13 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:50:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:50:14.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:14 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd3680021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:14 compute-0 sudo[168264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:50:14 compute-0 sudo[168264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:50:14 compute-0 sudo[168264]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:14 compute-0 sudo[168289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 17:50:14 compute-0 sudo[168289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:50:14 compute-0 python3.9[168263]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:50:14 compute-0 sudo[168261]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:14 compute-0 podman[168454]: 2025-09-30 17:50:14.478440507 +0000 UTC m=+0.036073227 container create 8afc352accf818863e7ba5e224f0dcd72d5a6c1893e23cf8e0d64af39d1eb26b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_hugle, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:50:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:50:14.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:14 compute-0 systemd[1]: Started libpod-conmon-8afc352accf818863e7ba5e224f0dcd72d5a6c1893e23cf8e0d64af39d1eb26b.scope.
Sep 30 17:50:14 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:50:14 compute-0 podman[168454]: 2025-09-30 17:50:14.461196189 +0000 UTC m=+0.018828929 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:50:14 compute-0 podman[168454]: 2025-09-30 17:50:14.559316102 +0000 UTC m=+0.116948842 container init 8afc352accf818863e7ba5e224f0dcd72d5a6c1893e23cf8e0d64af39d1eb26b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:50:14 compute-0 podman[168454]: 2025-09-30 17:50:14.565909499 +0000 UTC m=+0.123542219 container start 8afc352accf818863e7ba5e224f0dcd72d5a6c1893e23cf8e0d64af39d1eb26b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:50:14 compute-0 podman[168454]: 2025-09-30 17:50:14.568920586 +0000 UTC m=+0.126553326 container attach 8afc352accf818863e7ba5e224f0dcd72d5a6c1893e23cf8e0d64af39d1eb26b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_hugle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Sep 30 17:50:14 compute-0 thirsty_hugle[168495]: 167 167
Sep 30 17:50:14 compute-0 systemd[1]: libpod-8afc352accf818863e7ba5e224f0dcd72d5a6c1893e23cf8e0d64af39d1eb26b.scope: Deactivated successfully.
Sep 30 17:50:14 compute-0 podman[168454]: 2025-09-30 17:50:14.572116167 +0000 UTC m=+0.129748887 container died 8afc352accf818863e7ba5e224f0dcd72d5a6c1893e23cf8e0d64af39d1eb26b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:50:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-610ac82b15e356623584cd207f0c343e2f58bf546f9dc63fbec5efa7dd73e3b6-merged.mount: Deactivated successfully.
Sep 30 17:50:14 compute-0 sudo[168534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjhhsxfjhudvammvwlaqgdixogmdpzck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254614.2948616-108-171209766768640/AnsiballZ_systemd_service.py'
Sep 30 17:50:14 compute-0 sudo[168534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:14 compute-0 podman[168454]: 2025-09-30 17:50:14.751048543 +0000 UTC m=+0.308681263 container remove 8afc352accf818863e7ba5e224f0dcd72d5a6c1893e23cf8e0d64af39d1eb26b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_hugle, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Sep 30 17:50:14 compute-0 systemd[1]: libpod-conmon-8afc352accf818863e7ba5e224f0dcd72d5a6c1893e23cf8e0d64af39d1eb26b.scope: Deactivated successfully.
Sep 30 17:50:14 compute-0 podman[168548]: 2025-09-30 17:50:14.925248819 +0000 UTC m=+0.049418637 container create 46b985acb91c7a1497b7102766e62c30b2c4e890f8229f75ee15a0404e9aea6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_heyrovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 17:50:14 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:50:14 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:50:14 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:50:14 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:50:14 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:50:14 compute-0 systemd[1]: Started libpod-conmon-46b985acb91c7a1497b7102766e62c30b2c4e890f8229f75ee15a0404e9aea6c.scope.
Sep 30 17:50:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:50:14 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:50:14 compute-0 podman[168548]: 2025-09-30 17:50:14.905093347 +0000 UTC m=+0.029263185 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:50:14 compute-0 python3.9[168540]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:50:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7ecc55eadbf72070bb0f2d75eac6454b9ea66ded35b46f4551e9b5c74cfd86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:50:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7ecc55eadbf72070bb0f2d75eac6454b9ea66ded35b46f4551e9b5c74cfd86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:50:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7ecc55eadbf72070bb0f2d75eac6454b9ea66ded35b46f4551e9b5c74cfd86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:50:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7ecc55eadbf72070bb0f2d75eac6454b9ea66ded35b46f4551e9b5c74cfd86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:50:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7ecc55eadbf72070bb0f2d75eac6454b9ea66ded35b46f4551e9b5c74cfd86/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:50:15 compute-0 podman[168548]: 2025-09-30 17:50:15.015373828 +0000 UTC m=+0.139543636 container init 46b985acb91c7a1497b7102766e62c30b2c4e890f8229f75ee15a0404e9aea6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_heyrovsky, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Sep 30 17:50:15 compute-0 podman[168548]: 2025-09-30 17:50:15.023749101 +0000 UTC m=+0.147918919 container start 46b985acb91c7a1497b7102766e62c30b2c4e890f8229f75ee15a0404e9aea6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_heyrovsky, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 17:50:15 compute-0 podman[168548]: 2025-09-30 17:50:15.028103932 +0000 UTC m=+0.152273760 container attach 46b985acb91c7a1497b7102766e62c30b2c4e890f8229f75ee15a0404e9aea6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:50:15 compute-0 sudo[168534]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:15 compute-0 happy_heyrovsky[168565]: --> passed data devices: 0 physical, 1 LVM
Sep 30 17:50:15 compute-0 happy_heyrovsky[168565]: --> All data devices are unavailable
Sep 30 17:50:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v335: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 69 KiB/s rd, 85 B/s wr, 115 op/s
Sep 30 17:50:15 compute-0 systemd[1]: libpod-46b985acb91c7a1497b7102766e62c30b2c4e890f8229f75ee15a0404e9aea6c.scope: Deactivated successfully.
Sep 30 17:50:15 compute-0 podman[168606]: 2025-09-30 17:50:15.451126308 +0000 UTC m=+0.025594131 container died 46b985acb91c7a1497b7102766e62c30b2c4e890f8229f75ee15a0404e9aea6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_heyrovsky, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:50:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c7ecc55eadbf72070bb0f2d75eac6454b9ea66ded35b46f4551e9b5c74cfd86-merged.mount: Deactivated successfully.
Sep 30 17:50:15 compute-0 podman[168606]: 2025-09-30 17:50:15.497274381 +0000 UTC m=+0.071742194 container remove 46b985acb91c7a1497b7102766e62c30b2c4e890f8229f75ee15a0404e9aea6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_heyrovsky, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:50:15 compute-0 systemd[1]: libpod-conmon-46b985acb91c7a1497b7102766e62c30b2c4e890f8229f75ee15a0404e9aea6c.scope: Deactivated successfully.
Sep 30 17:50:15 compute-0 sudo[168289]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:15 compute-0 sudo[168621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:50:15 compute-0 sudo[168621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:50:15 compute-0 sudo[168621]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:15 compute-0 sudo[168647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 17:50:15 compute-0 sudo[168647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:50:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:15 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd344001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:15 compute-0 ceph-mon[73755]: pgmap v335: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 69 KiB/s rd, 85 B/s wr, 115 op/s
Sep 30 17:50:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:50:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:50:16.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:50:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:16 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd354002470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:16 compute-0 podman[168713]: 2025-09-30 17:50:16.184747306 +0000 UTC m=+0.055543102 container create f73eba1c4392da4f6c440a57ce1b16775956c9ade6639865330b09bbf8b23861 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_blackwell, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Sep 30 17:50:16 compute-0 systemd[1]: Started libpod-conmon-f73eba1c4392da4f6c440a57ce1b16775956c9ade6639865330b09bbf8b23861.scope.
Sep 30 17:50:16 compute-0 podman[168713]: 2025-09-30 17:50:16.16089012 +0000 UTC m=+0.031686056 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:50:16 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:50:16 compute-0 podman[168713]: 2025-09-30 17:50:16.274904087 +0000 UTC m=+0.145699913 container init f73eba1c4392da4f6c440a57ce1b16775956c9ade6639865330b09bbf8b23861 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_blackwell, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Sep 30 17:50:16 compute-0 podman[168713]: 2025-09-30 17:50:16.281805452 +0000 UTC m=+0.152601238 container start f73eba1c4392da4f6c440a57ce1b16775956c9ade6639865330b09bbf8b23861 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Sep 30 17:50:16 compute-0 podman[168713]: 2025-09-30 17:50:16.285334082 +0000 UTC m=+0.156129908 container attach f73eba1c4392da4f6c440a57ce1b16775956c9ade6639865330b09bbf8b23861 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:50:16 compute-0 adoring_blackwell[168761]: 167 167
Sep 30 17:50:16 compute-0 systemd[1]: libpod-f73eba1c4392da4f6c440a57ce1b16775956c9ade6639865330b09bbf8b23861.scope: Deactivated successfully.
Sep 30 17:50:16 compute-0 conmon[168761]: conmon f73eba1c4392da4f6c44 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f73eba1c4392da4f6c440a57ce1b16775956c9ade6639865330b09bbf8b23861.scope/container/memory.events
Sep 30 17:50:16 compute-0 podman[168787]: 2025-09-30 17:50:16.326714373 +0000 UTC m=+0.026188076 container died f73eba1c4392da4f6c440a57ce1b16775956c9ade6639865330b09bbf8b23861 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Sep 30 17:50:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-39c94e45d2408a22c4e05037c7c859a01f2407640d34556ef4a46e0ce0ebca8d-merged.mount: Deactivated successfully.
Sep 30 17:50:16 compute-0 podman[168787]: 2025-09-30 17:50:16.368492484 +0000 UTC m=+0.067966187 container remove f73eba1c4392da4f6c440a57ce1b16775956c9ade6639865330b09bbf8b23861 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:50:16 compute-0 systemd[1]: libpod-conmon-f73eba1c4392da4f6c440a57ce1b16775956c9ade6639865330b09bbf8b23861.scope: Deactivated successfully.
Sep 30 17:50:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:50:16.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:16 compute-0 podman[168833]: 2025-09-30 17:50:16.568561527 +0000 UTC m=+0.049745544 container create 82c32c528631a8b593325dffa73aece23a7fab3bc0c90bd1f06b05f9f7f4cfe1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_hodgkin, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Sep 30 17:50:16 compute-0 systemd[1]: Started libpod-conmon-82c32c528631a8b593325dffa73aece23a7fab3bc0c90bd1f06b05f9f7f4cfe1.scope.
Sep 30 17:50:16 compute-0 sudo[168899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iztiieeobyccqlgjfhwzriiijnrbormg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254616.196551-212-176368153996168/AnsiballZ_file.py'
Sep 30 17:50:16 compute-0 sudo[168899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:16 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:50:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f4d41ebe1ac19ef5479448b7a9c1b2fa41d41978b17567716d24bfb58465cc6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:50:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f4d41ebe1ac19ef5479448b7a9c1b2fa41d41978b17567716d24bfb58465cc6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:50:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f4d41ebe1ac19ef5479448b7a9c1b2fa41d41978b17567716d24bfb58465cc6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:50:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f4d41ebe1ac19ef5479448b7a9c1b2fa41d41978b17567716d24bfb58465cc6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:50:16 compute-0 podman[168833]: 2025-09-30 17:50:16.550237872 +0000 UTC m=+0.031421909 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:50:16 compute-0 podman[168833]: 2025-09-30 17:50:16.655049045 +0000 UTC m=+0.136233082 container init 82c32c528631a8b593325dffa73aece23a7fab3bc0c90bd1f06b05f9f7f4cfe1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_hodgkin, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:50:16 compute-0 podman[168833]: 2025-09-30 17:50:16.668092866 +0000 UTC m=+0.149276883 container start 82c32c528631a8b593325dffa73aece23a7fab3bc0c90bd1f06b05f9f7f4cfe1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_hodgkin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:50:16 compute-0 podman[168833]: 2025-09-30 17:50:16.672404426 +0000 UTC m=+0.153588473 container attach 82c32c528631a8b593325dffa73aece23a7fab3bc0c90bd1f06b05f9f7f4cfe1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_hodgkin, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 17:50:16 compute-0 python3.9[168903]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:50:16 compute-0 sudo[168899]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:16 compute-0 priceless_hodgkin[168901]: {
Sep 30 17:50:16 compute-0 priceless_hodgkin[168901]:     "0": [
Sep 30 17:50:16 compute-0 priceless_hodgkin[168901]:         {
Sep 30 17:50:16 compute-0 priceless_hodgkin[168901]:             "devices": [
Sep 30 17:50:16 compute-0 priceless_hodgkin[168901]:                 "/dev/loop3"
Sep 30 17:50:16 compute-0 priceless_hodgkin[168901]:             ],
Sep 30 17:50:16 compute-0 priceless_hodgkin[168901]:             "lv_name": "ceph_lv0",
Sep 30 17:50:16 compute-0 priceless_hodgkin[168901]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:50:16 compute-0 priceless_hodgkin[168901]:             "lv_size": "21470642176",
Sep 30 17:50:16 compute-0 priceless_hodgkin[168901]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 17:50:16 compute-0 priceless_hodgkin[168901]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:50:16 compute-0 priceless_hodgkin[168901]:             "name": "ceph_lv0",
Sep 30 17:50:16 compute-0 priceless_hodgkin[168901]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:50:16 compute-0 priceless_hodgkin[168901]:             "tags": {
Sep 30 17:50:16 compute-0 priceless_hodgkin[168901]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:50:16 compute-0 priceless_hodgkin[168901]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:50:16 compute-0 priceless_hodgkin[168901]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 17:50:16 compute-0 priceless_hodgkin[168901]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 17:50:16 compute-0 priceless_hodgkin[168901]:                 "ceph.cluster_name": "ceph",
Sep 30 17:50:16 compute-0 priceless_hodgkin[168901]:                 "ceph.crush_device_class": "",
Sep 30 17:50:16 compute-0 priceless_hodgkin[168901]:                 "ceph.encrypted": "0",
Sep 30 17:50:16 compute-0 priceless_hodgkin[168901]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 17:50:16 compute-0 priceless_hodgkin[168901]:                 "ceph.osd_id": "0",
Sep 30 17:50:16 compute-0 priceless_hodgkin[168901]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 17:50:16 compute-0 priceless_hodgkin[168901]:                 "ceph.type": "block",
Sep 30 17:50:16 compute-0 priceless_hodgkin[168901]:                 "ceph.vdo": "0",
Sep 30 17:50:16 compute-0 priceless_hodgkin[168901]:                 "ceph.with_tpm": "0"
Sep 30 17:50:16 compute-0 priceless_hodgkin[168901]:             },
Sep 30 17:50:16 compute-0 priceless_hodgkin[168901]:             "type": "block",
Sep 30 17:50:16 compute-0 priceless_hodgkin[168901]:             "vg_name": "ceph_vg0"
Sep 30 17:50:16 compute-0 priceless_hodgkin[168901]:         }
Sep 30 17:50:16 compute-0 priceless_hodgkin[168901]:     ]
Sep 30 17:50:16 compute-0 priceless_hodgkin[168901]: }
Sep 30 17:50:16 compute-0 systemd[1]: libpod-82c32c528631a8b593325dffa73aece23a7fab3bc0c90bd1f06b05f9f7f4cfe1.scope: Deactivated successfully.
Sep 30 17:50:16 compute-0 podman[168833]: 2025-09-30 17:50:16.993545225 +0000 UTC m=+0.474729242 container died 82c32c528631a8b593325dffa73aece23a7fab3bc0c90bd1f06b05f9f7f4cfe1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_hodgkin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Sep 30 17:50:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:50:17.005Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:50:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f4d41ebe1ac19ef5479448b7a9c1b2fa41d41978b17567716d24bfb58465cc6-merged.mount: Deactivated successfully.
Sep 30 17:50:17 compute-0 podman[168833]: 2025-09-30 17:50:17.038344873 +0000 UTC m=+0.519528890 container remove 82c32c528631a8b593325dffa73aece23a7fab3bc0c90bd1f06b05f9f7f4cfe1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 17:50:17 compute-0 systemd[1]: libpod-conmon-82c32c528631a8b593325dffa73aece23a7fab3bc0c90bd1f06b05f9f7f4cfe1.scope: Deactivated successfully.
Sep 30 17:50:17 compute-0 sudo[168647]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:17 compute-0 sudo[169021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:50:17 compute-0 sudo[169021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:50:17 compute-0 sudo[169021]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:17 compute-0 sudo[169066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 17:50:17 compute-0 sudo[169066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:50:17 compute-0 sudo[169123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmjijhtpnflixxjsfsivhqwdquakebyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254616.990047-212-121507074075986/AnsiballZ_file.py'
Sep 30 17:50:17 compute-0 sudo[169123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v336: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 69 KiB/s rd, 85 B/s wr, 115 op/s
Sep 30 17:50:17 compute-0 python3.9[169125]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:50:17 compute-0 sudo[169123]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:17 compute-0 podman[169190]: 2025-09-30 17:50:17.65816129 +0000 UTC m=+0.047391405 container create aa47fef8e7afe30cbc068c5e3a95e19d64f4462c2ae7c8c97d800954dff3898a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:50:17 compute-0 systemd[1]: Started libpod-conmon-aa47fef8e7afe30cbc068c5e3a95e19d64f4462c2ae7c8c97d800954dff3898a.scope.
Sep 30 17:50:17 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:50:17 compute-0 podman[169190]: 2025-09-30 17:50:17.637035023 +0000 UTC m=+0.026265158 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:50:17 compute-0 podman[169190]: 2025-09-30 17:50:17.747806577 +0000 UTC m=+0.137036712 container init aa47fef8e7afe30cbc068c5e3a95e19d64f4462c2ae7c8c97d800954dff3898a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_ellis, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True)
Sep 30 17:50:17 compute-0 podman[169190]: 2025-09-30 17:50:17.755880352 +0000 UTC m=+0.145110467 container start aa47fef8e7afe30cbc068c5e3a95e19d64f4462c2ae7c8c97d800954dff3898a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_ellis, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Sep 30 17:50:17 compute-0 podman[169190]: 2025-09-30 17:50:17.758651693 +0000 UTC m=+0.147881808 container attach aa47fef8e7afe30cbc068c5e3a95e19d64f4462c2ae7c8c97d800954dff3898a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:50:17 compute-0 competent_ellis[169230]: 167 167
Sep 30 17:50:17 compute-0 systemd[1]: libpod-aa47fef8e7afe30cbc068c5e3a95e19d64f4462c2ae7c8c97d800954dff3898a.scope: Deactivated successfully.
Sep 30 17:50:17 compute-0 conmon[169230]: conmon aa47fef8e7afe30cbc06 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aa47fef8e7afe30cbc068c5e3a95e19d64f4462c2ae7c8c97d800954dff3898a.scope/container/memory.events
Sep 30 17:50:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:17 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd33c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:17 compute-0 podman[169235]: 2025-09-30 17:50:17.806004536 +0000 UTC m=+0.025684424 container died aa47fef8e7afe30cbc068c5e3a95e19d64f4462c2ae7c8c97d800954dff3898a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_ellis, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:50:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-ddf7ceb87202f43500226aec2ee565e96415d113029ce5620ee5dbcc08aa4f3d-merged.mount: Deactivated successfully.
Sep 30 17:50:17 compute-0 podman[169235]: 2025-09-30 17:50:17.838734757 +0000 UTC m=+0.058414635 container remove aa47fef8e7afe30cbc068c5e3a95e19d64f4462c2ae7c8c97d800954dff3898a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_ellis, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:50:17 compute-0 systemd[1]: libpod-conmon-aa47fef8e7afe30cbc068c5e3a95e19d64f4462c2ae7c8c97d800954dff3898a.scope: Deactivated successfully.
Sep 30 17:50:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:50:18.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:18 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd3680021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:18 compute-0 podman[169291]: 2025-09-30 17:50:18.042024292 +0000 UTC m=+0.059782630 container create 96e65bfa9308696589c904b28890edc78536c5ef51f5a0f36632d4ca9b879dce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:50:18 compute-0 systemd[1]: Started libpod-conmon-96e65bfa9308696589c904b28890edc78536c5ef51f5a0f36632d4ca9b879dce.scope.
Sep 30 17:50:18 compute-0 podman[169291]: 2025-09-30 17:50:18.009805603 +0000 UTC m=+0.027563961 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:50:18 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/754420c3f8d322354017f5f4e89842f4d1f56c957af82bda822ed11c2eebce66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/754420c3f8d322354017f5f4e89842f4d1f56c957af82bda822ed11c2eebce66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/754420c3f8d322354017f5f4e89842f4d1f56c957af82bda822ed11c2eebce66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/754420c3f8d322354017f5f4e89842f4d1f56c957af82bda822ed11c2eebce66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:50:18 compute-0 sudo[169377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnyopdyspkugxcqkgfaoytwickffhqky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254617.8463988-212-187194168162693/AnsiballZ_file.py'
Sep 30 17:50:18 compute-0 sudo[169377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:18 compute-0 podman[169291]: 2025-09-30 17:50:18.161794775 +0000 UTC m=+0.179553243 container init 96e65bfa9308696589c904b28890edc78536c5ef51f5a0f36632d4ca9b879dce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_mcclintock, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Sep 30 17:50:18 compute-0 podman[169291]: 2025-09-30 17:50:18.171592654 +0000 UTC m=+0.189350992 container start 96e65bfa9308696589c904b28890edc78536c5ef51f5a0f36632d4ca9b879dce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_mcclintock, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Sep 30 17:50:18 compute-0 podman[169291]: 2025-09-30 17:50:18.188614686 +0000 UTC m=+0.206373044 container attach 96e65bfa9308696589c904b28890edc78536c5ef51f5a0f36632d4ca9b879dce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_mcclintock, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:50:18 compute-0 python3.9[169379]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:50:18 compute-0 sudo[169377]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:18 compute-0 ceph-mon[73755]: pgmap v336: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 69 KiB/s rd, 85 B/s wr, 115 op/s
Sep 30 17:50:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:50:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:50:18.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:50:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:50:18] "GET /metrics HTTP/1.1" 200 46510 "" "Prometheus/2.51.0"
Sep 30 17:50:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:50:18] "GET /metrics HTTP/1.1" 200 46510 "" "Prometheus/2.51.0"
Sep 30 17:50:18 compute-0 sudo[169596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlhhmvtuigkvzcqhcqyumdcfawuimjoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254618.5273662-212-213958748321669/AnsiballZ_file.py'
Sep 30 17:50:18 compute-0 sudo[169596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:18 compute-0 lvm[169603]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 17:50:18 compute-0 lvm[169603]: VG ceph_vg0 finished
Sep 30 17:50:18 compute-0 wizardly_mcclintock[169359]: {}
Sep 30 17:50:18 compute-0 systemd[1]: libpod-96e65bfa9308696589c904b28890edc78536c5ef51f5a0f36632d4ca9b879dce.scope: Deactivated successfully.
Sep 30 17:50:18 compute-0 systemd[1]: libpod-96e65bfa9308696589c904b28890edc78536c5ef51f5a0f36632d4ca9b879dce.scope: Consumed 1.308s CPU time.
Sep 30 17:50:18 compute-0 podman[169291]: 2025-09-30 17:50:18.988640331 +0000 UTC m=+1.006398679 container died 96e65bfa9308696589c904b28890edc78536c5ef51f5a0f36632d4ca9b879dce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_mcclintock, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:50:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-754420c3f8d322354017f5f4e89842f4d1f56c957af82bda822ed11c2eebce66-merged.mount: Deactivated successfully.
Sep 30 17:50:19 compute-0 python3.9[169600]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:50:19 compute-0 sudo[169596]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:19 compute-0 podman[169291]: 2025-09-30 17:50:19.039808881 +0000 UTC m=+1.057567229 container remove 96e65bfa9308696589c904b28890edc78536c5ef51f5a0f36632d4ca9b879dce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Sep 30 17:50:19 compute-0 systemd[1]: libpod-conmon-96e65bfa9308696589c904b28890edc78536c5ef51f5a0f36632d4ca9b879dce.scope: Deactivated successfully.
Sep 30 17:50:19 compute-0 sudo[169066]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:50:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v337: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 69 KiB/s rd, 85 B/s wr, 115 op/s
Sep 30 17:50:19 compute-0 sudo[169769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuzjcpeypvurgbmtvjkqssuqlwdwqpek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254619.185443-212-115520686569132/AnsiballZ_file.py'
Sep 30 17:50:19 compute-0 sudo[169769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:19 compute-0 python3.9[169771]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:50:19 compute-0 sudo[169769]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:19 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd344001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:50:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:50:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:50:20.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:50:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:20 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd354002470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:20 compute-0 sudo[169922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-numguycmatvzsulhpfitibyokfggohsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254619.786023-212-278899943327901/AnsiballZ_file.py'
Sep 30 17:50:20 compute-0 sudo[169922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:20 compute-0 python3.9[169924]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:50:20 compute-0 sudo[169922]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:50:20.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:20 compute-0 sudo[170074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckqibjjxzznttcypwypdegkolczzwmyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254620.4453387-212-110161297878926/AnsiballZ_file.py'
Sep 30 17:50:20 compute-0 sudo[170074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:20 compute-0 python3.9[170076]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:50:20 compute-0 sudo[170074]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:21 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:50:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:50:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v338: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 69 KiB/s rd, 0 B/s wr, 115 op/s
Sep 30 17:50:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:21 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd33c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:50:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:50:22.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:50:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:22 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd3680095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:22 compute-0 ceph-mon[73755]: pgmap v337: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 69 KiB/s rd, 85 B/s wr, 115 op/s
Sep 30 17:50:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:50:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:50:22 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:50:22 compute-0 sudo[170103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:50:22 compute-0 sudo[170103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:50:22 compute-0 sudo[170103]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:22 compute-0 sudo[170104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 17:50:22 compute-0 sudo[170104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:50:22 compute-0 sudo[170104]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:50:22.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:22 compute-0 podman[170145]: 2025-09-30 17:50:22.516117888 +0000 UTC m=+0.052870853 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 17:50:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:50:23 compute-0 ceph-mon[73755]: pgmap v338: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 69 KiB/s rd, 0 B/s wr, 115 op/s
Sep 30 17:50:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:50:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:50:23 compute-0 sudo[170299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nowcjscjqndsmxwnlrtrpckshoguqhzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254623.0787392-312-234914678429996/AnsiballZ_file.py'
Sep 30 17:50:23 compute-0 sudo[170299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v339: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 69 KiB/s rd, 0 B/s wr, 115 op/s
Sep 30 17:50:23 compute-0 python3.9[170301]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:50:23 compute-0 sudo[170299]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:23 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd344002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:23 compute-0 sudo[170452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbbihdjfbbyqegdlryewxawevxekkgud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254623.6935096-312-203289093300983/AnsiballZ_file.py'
Sep 30 17:50:23 compute-0 sudo[170452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:50:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:50:24.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:50:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:24 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd354002470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:24 compute-0 python3.9[170454]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:50:24 compute-0 sudo[170452]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:24 compute-0 ceph-mon[73755]: pgmap v339: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 69 KiB/s rd, 0 B/s wr, 115 op/s
Sep 30 17:50:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:50:24.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:24 compute-0 sudo[170604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgobxhjetzvttqhumxedhkjofwlqcivy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254624.3392031-312-10691825242421/AnsiballZ_file.py'
Sep 30 17:50:24 compute-0 sudo[170604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:24 compute-0 python3.9[170606]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:50:24 compute-0 sudo[170604]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:50:25 compute-0 sudo[170756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuamwkjzwkhiowlumyhqkattbvgxnank ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254624.9534864-312-151866153366119/AnsiballZ_file.py'
Sep 30 17:50:25 compute-0 sudo[170756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v340: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:50:25 compute-0 python3.9[170758]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:50:25 compute-0 sudo[170756]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:25 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd33c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:25 compute-0 sudo[170910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vciwluojhwszjqpwopbkjkltjqwtdssf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254625.6115599-312-7092141255676/AnsiballZ_file.py'
Sep 30 17:50:25 compute-0 sudo[170910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:50:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:50:26.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:50:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:26 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd3680095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:26 compute-0 python3.9[170912]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:50:26 compute-0 sudo[170910]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:26 compute-0 ceph-mon[73755]: pgmap v340: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:50:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:50:26.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:26 compute-0 sudo[171062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvjgwqzvlqdozqqsepsvqridgvhyjefz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254626.2896967-312-136731769185434/AnsiballZ_file.py'
Sep 30 17:50:26 compute-0 sudo[171062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:26 compute-0 python3.9[171064]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:50:26 compute-0 sudo[171062]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:50:27.007Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:50:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/175027 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 17:50:27 compute-0 sudo[171214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuryxtgdzvqtsjsddcmylfwtdzhynnbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254626.9334295-312-181435146595512/AnsiballZ_file.py'
Sep 30 17:50:27 compute-0 sudo[171214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v341: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:50:27 compute-0 python3.9[171216]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:50:27 compute-0 sudo[171214]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:27 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd344002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:50:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:50:28.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:50:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:28 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd354002470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:28 compute-0 sudo[171368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijfmfjeajxbjjwizylkkqvdgnewdzban ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254628.0902612-414-114496506506177/AnsiballZ_command.py'
Sep 30 17:50:28 compute-0 sudo[171368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:28 compute-0 ceph-mon[73755]: pgmap v341: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:50:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:50:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:50:28.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:50:28 compute-0 python3.9[171370]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:50:28 compute-0 sudo[171368]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:50:28] "GET /metrics HTTP/1.1" 200 46513 "" "Prometheus/2.51.0"
Sep 30 17:50:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:50:28] "GET /metrics HTTP/1.1" 200 46513 "" "Prometheus/2.51.0"
Sep 30 17:50:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v342: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:50:29 compute-0 python3.9[171523]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Sep 30 17:50:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:29 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd33c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:50:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:50:30.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:30 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd36800a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:30 compute-0 sudo[171674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvgusrvugvjsefcmbdopgcgpvenchkxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254629.8745623-450-50652451878658/AnsiballZ_systemd_service.py'
Sep 30 17:50:30 compute-0 sudo[171674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:30 compute-0 python3.9[171676]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Sep 30 17:50:30 compute-0 systemd[1]: Reloading.
Sep 30 17:50:30 compute-0 ceph-mon[73755]: pgmap v342: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:50:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:50:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:50:30.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:50:30 compute-0 systemd-rc-local-generator[171704]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:50:30 compute-0 systemd-sysv-generator[171707]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:50:30 compute-0 sudo[171674]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:31 compute-0 sudo[171861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnpkijccxqftrxptnzolkxqhgjotjxxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254631.039124-466-43290335087701/AnsiballZ_command.py'
Sep 30 17:50:31 compute-0 sudo[171861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v343: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:50:31 compute-0 python3.9[171863]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:50:31 compute-0 sudo[171861]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:31 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd344002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:31 compute-0 sudo[172016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpayfrwcmiidmjjblwkbkwutgzwuczkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254631.6494064-466-80038466500221/AnsiballZ_command.py'
Sep 30 17:50:31 compute-0 sudo[172016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:50:32.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:32 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd354002470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:32 compute-0 python3.9[172018]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:50:32 compute-0 sudo[172016]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:32 compute-0 sudo[172169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jskwedbfsopbbzfrotvpmsfpqcirrywr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254632.216919-466-261778685140993/AnsiballZ_command.py'
Sep 30 17:50:32 compute-0 sudo[172169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:50:32.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:32 compute-0 ceph-mon[73755]: pgmap v343: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:50:32 compute-0 python3.9[172171]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:50:32 compute-0 sudo[172169]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:32 compute-0 sudo[172322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeeodupoymkimwqsrrixuflgoqvwymfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254632.7712047-466-278272434562363/AnsiballZ_command.py'
Sep 30 17:50:32 compute-0 sudo[172322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:33 compute-0 python3.9[172324]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:50:33 compute-0 sudo[172322]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v344: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:50:33 compute-0 sudo[172476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdtzoaisignvvwutduxefmmxhhtvicuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254633.336414-466-133418108747631/AnsiballZ_command.py'
Sep 30 17:50:33 compute-0 sudo[172476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:33 compute-0 python3.9[172478]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:50:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:33 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd354002470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:33 compute-0 sudo[172476]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:50:34.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:34 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd36800a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:34 compute-0 sudo[172630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwuvjfdfzepzinnsmznlelbmayswmxkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254633.9825807-466-176523638907006/AnsiballZ_command.py'
Sep 30 17:50:34 compute-0 sudo[172630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:34 compute-0 python3.9[172632]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:50:34 compute-0 sudo[172630]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:50:34.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:34 compute-0 ceph-mon[73755]: pgmap v344: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:50:34 compute-0 sudo[172783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtmxwczjvcajotpbngpyuwusqhayiqmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254634.5848455-466-96492775922341/AnsiballZ_command.py'
Sep 30 17:50:34 compute-0 sudo[172783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:50:35 compute-0 python3.9[172785]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:50:35 compute-0 sudo[172783]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v345: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:50:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:35 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd344004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:50:36.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:36 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd354002470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:36 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 17:50:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:50:36.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:36 compute-0 ceph-mon[73755]: pgmap v345: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:50:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:50:37.008Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:50:37 compute-0 sudo[172940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxpgyydneexrmcmdhffwllamtkkjupzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254636.5985568-574-184738338427765/AnsiballZ_getent.py'
Sep 30 17:50:37 compute-0 sudo[172940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:50:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:50:37 compute-0 python3.9[172942]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Sep 30 17:50:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:50:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:50:37 compute-0 sudo[172940]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:50:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:50:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:50:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:50:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v346: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:50:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:50:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:37 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd35c000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:50:38.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:38 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd36800a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:50:38 compute-0 sudo[173095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyauzqhtzlvqbbcyuizmpdgzcmukxzry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254637.628427-590-102897912518195/AnsiballZ_group.py'
Sep 30 17:50:38 compute-0 sudo[173095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:38 compute-0 python3.9[173097]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Sep 30 17:50:38 compute-0 groupadd[173098]: group added to /etc/group: name=libvirt, GID=42473
Sep 30 17:50:38 compute-0 groupadd[173098]: group added to /etc/gshadow: name=libvirt
Sep 30 17:50:38 compute-0 groupadd[173098]: new group: name=libvirt, GID=42473
Sep 30 17:50:38 compute-0 sudo[173095]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:50:38.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:38 compute-0 ceph-mon[73755]: pgmap v346: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:50:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:50:38] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:50:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:50:38] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:50:39 compute-0 sudo[173253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzewsizmslwvqcapzymthiejkbfinhee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254638.7924592-606-122417713392546/AnsiballZ_user.py'
Sep 30 17:50:39 compute-0 sudo[173253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v347: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Sep 30 17:50:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:39 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 17:50:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:39 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 17:50:39 compute-0 python3.9[173255]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Sep 30 17:50:39 compute-0 useradd[173258]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Sep 30 17:50:39 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Sep 30 17:50:39 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Sep 30 17:50:39 compute-0 sudo[173253]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[166331]: 30/09/2025 17:50:39 : epoch 68dc1842 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd36800a2b0 fd 38 proxy ignored for local
Sep 30 17:50:39 compute-0 kernel: ganesha.nfsd[167052]: segfault at 50 ip 00007fd41580832e sp 00007fd3e57f9210 error 4 in libntirpc.so.5.8[7fd4157ed000+2c000] likely on CPU 3 (core 0, socket 3)
Sep 30 17:50:39 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Sep 30 17:50:39 compute-0 systemd[1]: Started Process Core Dump (PID 173288/UID 0).
Sep 30 17:50:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:50:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:50:40.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:50:40.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:40 compute-0 sudo[173439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyueutrsrtjdlthbvpxoypfcnfpmywmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254640.2764535-628-235175833638284/AnsiballZ_setup.py'
Sep 30 17:50:40 compute-0 sudo[173439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:40 compute-0 podman[173368]: 2025-09-30 17:50:40.621205809 +0000 UTC m=+0.156575499 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250930, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.4)
Sep 30 17:50:40 compute-0 python3.9[173443]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 17:50:40 compute-0 ceph-mon[73755]: pgmap v347: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Sep 30 17:50:41 compute-0 sudo[173439]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:41 compute-0 systemd-coredump[173292]: Process 166335 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 41:
                                                    #0  0x00007fd41580832e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Sep 30 17:50:41 compute-0 systemd[1]: systemd-coredump@7-173288-0.service: Deactivated successfully.
Sep 30 17:50:41 compute-0 systemd[1]: systemd-coredump@7-173288-0.service: Consumed 1.302s CPU time.
Sep 30 17:50:41 compute-0 podman[173457]: 2025-09-30 17:50:41.34746781 +0000 UTC m=+0.031397068 container died 19fbc40b02e323367f93f3b9a723998ed14058b2255ee1d6e782e163789d2cb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:50:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v348: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Sep 30 17:50:41 compute-0 sudo[173544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfqghzgtwrkzbumfktrrmhrukrpdreia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254640.2764535-628-235175833638284/AnsiballZ_dnf.py'
Sep 30 17:50:41 compute-0 sudo[173544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:50:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3247a9c0ecc6ea78403ada61590e9d4554f42b75d80557f90fb1eda7c85e750-merged.mount: Deactivated successfully.
Sep 30 17:50:41 compute-0 podman[173457]: 2025-09-30 17:50:41.823604707 +0000 UTC m=+0.507533935 container remove 19fbc40b02e323367f93f3b9a723998ed14058b2255ee1d6e782e163789d2cb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid)
Sep 30 17:50:41 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Main process exited, code=exited, status=139/n/a
Sep 30 17:50:41 compute-0 python3.9[173546]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 17:50:42 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Failed with result 'exit-code'.
Sep 30 17:50:42 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Consumed 1.729s CPU time.
Sep 30 17:50:42 compute-0 ceph-mon[73755]: pgmap v348: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Sep 30 17:50:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:50:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:50:42.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:50:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:50:42.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:42 compute-0 sudo[173580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:50:42 compute-0 sudo[173580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:50:42 compute-0 sudo[173580]: pam_unix(sudo:session): session closed for user root
Sep 30 17:50:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v349: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:50:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:50:44.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:50:44.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:44 compute-0 ceph-mon[73755]: pgmap v349: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:50:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:50:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v350: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s
Sep 30 17:50:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/175045 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 17:50:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [ALERT] 272/175045 (4) : backend 'backend' has no server available!
Sep 30 17:50:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:50:46.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:50:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:50:46.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:50:46 compute-0 ceph-mon[73755]: pgmap v350: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s
Sep 30 17:50:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:50:47.009Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:50:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v351: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s
Sep 30 17:50:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:50:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:50:48.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:50:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:50:48.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:50:48] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:50:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:50:48] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:50:48 compute-0 ceph-mon[73755]: pgmap v351: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s
Sep 30 17:50:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/175049 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 17:50:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v352: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.3 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:50:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:50:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:50:50.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:50:50.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:50 compute-0 ceph-mon[73755]: pgmap v352: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.3 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:50:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v353: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Sep 30 17:50:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:50:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:50:52.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:50:52 compute-0 ceph-mon[73755]: pgmap v353: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Sep 30 17:50:52 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Scheduled restart job, restart counter is at 8.
Sep 30 17:50:52 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:50:52 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Consumed 1.729s CPU time.
Sep 30 17:50:52 compute-0 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 17:50:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:50:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:50:52 compute-0 podman[173676]: 2025-09-30 17:50:52.397221499 +0000 UTC m=+0.045237042 container create a717d000fc799ed16cc5db92c1a3b55b2f9058b805427a9737c7a2d3805d5cd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Sep 30 17:50:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4b43a3b17994ddfe0fa64527c20f0c4666139fdfd8b7f27346a1ac9f764df46/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Sep 30 17:50:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4b43a3b17994ddfe0fa64527c20f0c4666139fdfd8b7f27346a1ac9f764df46/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:50:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4b43a3b17994ddfe0fa64527c20f0c4666139fdfd8b7f27346a1ac9f764df46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:50:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4b43a3b17994ddfe0fa64527c20f0c4666139fdfd8b7f27346a1ac9f764df46/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-0.syzvbh-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:50:52 compute-0 podman[173676]: 2025-09-30 17:50:52.461511737 +0000 UTC m=+0.109527310 container init a717d000fc799ed16cc5db92c1a3b55b2f9058b805427a9737c7a2d3805d5cd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Sep 30 17:50:52 compute-0 podman[173676]: 2025-09-30 17:50:52.467013531 +0000 UTC m=+0.115029074 container start a717d000fc799ed16cc5db92c1a3b55b2f9058b805427a9737c7a2d3805d5cd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Sep 30 17:50:52 compute-0 bash[173676]: a717d000fc799ed16cc5db92c1a3b55b2f9058b805427a9737c7a2d3805d5cd6
Sep 30 17:50:52 compute-0 podman[173676]: 2025-09-30 17:50:52.375312377 +0000 UTC m=+0.023327940 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:50:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:50:52 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Sep 30 17:50:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:50:52 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Sep 30 17:50:52 compute-0 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:50:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:50:52 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Sep 30 17:50:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:50:52 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Sep 30 17:50:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:50:52 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Sep 30 17:50:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:50:52 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Sep 30 17:50:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:50:52 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Sep 30 17:50:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:50:52 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 17:50:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:50:52.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:53 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:50:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v354: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Sep 30 17:50:53 compute-0 podman[173736]: 2025-09-30 17:50:53.555148573 +0000 UTC m=+0.085921375 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Sep 30 17:50:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:50:54.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:54 compute-0 ceph-mon[73755]: pgmap v354: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Sep 30 17:50:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:50:54.236 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 17:50:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:50:54.236 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 17:50:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:50:54.236 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 17:50:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:50:54.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:50:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v355: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:50:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:50:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:50:56.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:50:56 compute-0 ceph-mon[73755]: pgmap v355: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:50:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:50:56.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:50:57.010Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 17:50:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:50:57.010Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:50:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v356: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:50:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:50:58.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:58 compute-0 ceph-mon[73755]: pgmap v356: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:50:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:50:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:50:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:50:58.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:50:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:50:58 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 17:50:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:50:58 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 17:50:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:50:58] "GET /metrics HTTP/1.1" 200 46521 "" "Prometheus/2.51.0"
Sep 30 17:50:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:50:58] "GET /metrics HTTP/1.1" 200 46521 "" "Prometheus/2.51.0"
Sep 30 17:50:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v357: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 853 B/s rd, 426 B/s wr, 1 op/s
Sep 30 17:51:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:51:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:51:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:51:00.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:51:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:51:00.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:00 compute-0 ceph-mon[73755]: pgmap v357: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 853 B/s rd, 426 B/s wr, 1 op/s
Sep 30 17:51:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v358: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Sep 30 17:51:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:51:02.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:51:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:51:02.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:51:02 compute-0 sudo[173939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:51:02 compute-0 sudo[173939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:51:02 compute-0 ceph-mon[73755]: pgmap v358: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Sep 30 17:51:02 compute-0 sudo[173939]: pam_unix(sudo:session): session closed for user root
Sep 30 17:51:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v359: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:51:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:51:04.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:51:04.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:04 compute-0 ceph-mon[73755]: pgmap v359: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:51:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:04 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 17:51:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:04 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Sep 30 17:51:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:04 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Sep 30 17:51:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:04 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Sep 30 17:51:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:04 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Sep 30 17:51:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:04 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Sep 30 17:51:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:04 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Sep 30 17:51:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:04 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:51:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:04 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:51:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:04 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:51:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:04 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Sep 30 17:51:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:04 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:51:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:04 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Sep 30 17:51:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:04 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Sep 30 17:51:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:04 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Sep 30 17:51:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:04 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Sep 30 17:51:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:04 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Sep 30 17:51:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:04 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Sep 30 17:51:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:04 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Sep 30 17:51:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:04 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Sep 30 17:51:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:04 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Sep 30 17:51:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:04 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Sep 30 17:51:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:04 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Sep 30 17:51:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:04 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Sep 30 17:51:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:04 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 17:51:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:04 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Sep 30 17:51:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:04 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 17:51:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:51:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v360: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:51:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:05 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1698000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:51:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:51:06.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:06 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16800016c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:51:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:51:06.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:06 compute-0 ceph-mon[73755]: pgmap v360: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:51:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:51:07.011Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:51:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:51:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_17:51:07
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'vms', '.nfs', 'default.rgw.control', 'backups', '.mgr', 'default.rgw.log', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta']
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:51:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v361: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:51:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:51:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/175107 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 17:51:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:07 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f166c000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:51:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.002000052s ======
Sep 30 17:51:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:51:08.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Sep 30 17:51:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:08 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1690001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:51:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:51:08.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:08 compute-0 ceph-mon[73755]: pgmap v361: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:51:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:51:08] "GET /metrics HTTP/1.1" 200 46521 "" "Prometheus/2.51.0"
Sep 30 17:51:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:51:08] "GET /metrics HTTP/1.1" 200 46521 "" "Prometheus/2.51.0"
Sep 30 17:51:09 compute-0 ceph-mgr[74051]: [devicehealth INFO root] Check health
Sep 30 17:51:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v362: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:51:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:09 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1674000fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:51:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:51:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:51:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:51:10.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:51:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:10 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16800016c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:51:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:51:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:51:10.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:51:10 compute-0 ceph-mon[73755]: pgmap v362: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:51:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v363: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1023 B/s rd, 341 B/s wr, 1 op/s
Sep 30 17:51:11 compute-0 podman[173993]: 2025-09-30 17:51:11.597369613 +0000 UTC m=+0.132007278 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930)
Sep 30 17:51:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:11 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f166c0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:51:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:51:12.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:12 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16900025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:51:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:51:12.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:12 compute-0 ceph-mon[73755]: pgmap v363: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1023 B/s rd, 341 B/s wr, 1 op/s
Sep 30 17:51:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v364: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.1 KiB/s rd, 341 B/s wr, 1 op/s
Sep 30 17:51:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:13 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1674001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:51:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:51:14.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:14 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16800016c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:51:14 compute-0 ceph-mon[73755]: pgmap v364: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.1 KiB/s rd, 341 B/s wr, 1 op/s
Sep 30 17:51:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:51:14.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:51:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v365: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:51:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:15 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f166c0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:51:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:51:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:51:16.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:51:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:16 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f166c0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:51:16 compute-0 kernel: SELinux:  Converting 2774 SID table entries...
Sep 30 17:51:16 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Sep 30 17:51:16 compute-0 kernel: SELinux:  policy capability open_perms=1
Sep 30 17:51:16 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Sep 30 17:51:16 compute-0 kernel: SELinux:  policy capability always_check_network=0
Sep 30 17:51:16 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Sep 30 17:51:16 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Sep 30 17:51:16 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Sep 30 17:51:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:51:16.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:16 compute-0 ceph-mon[73755]: pgmap v365: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:51:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:51:17.012Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:51:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v366: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:51:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[173691]: 30/09/2025 17:51:17 : epoch 68dc187c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1674001ac0 fd 37 proxy ignored for local
Sep 30 17:51:17 compute-0 kernel: ganesha.nfsd[173975]: segfault at 50 ip 00007f174457132e sp 00007f17127fb210 error 4 in libntirpc.so.5.8[7f1744556000+2c000] likely on CPU 7 (core 0, socket 7)
Sep 30 17:51:17 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Sep 30 17:51:17 compute-0 dbus-broker-launch[781]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Sep 30 17:51:17 compute-0 systemd[1]: Started Process Core Dump (PID 174034/UID 0).
Sep 30 17:51:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 17:51:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:51:18.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 17:51:18 compute-0 ceph-mon[73755]: pgmap v366: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:51:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:51:18.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:51:18] "GET /metrics HTTP/1.1" 200 46521 "" "Prometheus/2.51.0"
Sep 30 17:51:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:51:18] "GET /metrics HTTP/1.1" 200 46521 "" "Prometheus/2.51.0"
Sep 30 17:51:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v367: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:51:19 compute-0 systemd-coredump[174036]: Process 173695 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 44:
                                                    #0  0x00007f174457132e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Sep 30 17:51:19 compute-0 systemd[1]: systemd-coredump@8-174034-0.service: Deactivated successfully.
Sep 30 17:51:19 compute-0 systemd[1]: systemd-coredump@8-174034-0.service: Consumed 1.640s CPU time.
Sep 30 17:51:19 compute-0 podman[174042]: 2025-09-30 17:51:19.681842138 +0000 UTC m=+0.050062058 container died a717d000fc799ed16cc5db92c1a3b55b2f9058b805427a9737c7a2d3805d5cd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Sep 30 17:51:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4b43a3b17994ddfe0fa64527c20f0c4666139fdfd8b7f27346a1ac9f764df46-merged.mount: Deactivated successfully.
Sep 30 17:51:19 compute-0 podman[174042]: 2025-09-30 17:51:19.736252599 +0000 UTC m=+0.104472499 container remove a717d000fc799ed16cc5db92c1a3b55b2f9058b805427a9737c7a2d3805d5cd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:51:19 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Main process exited, code=exited, status=139/n/a
Sep 30 17:51:19 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Failed with result 'exit-code'.
Sep 30 17:51:19 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Consumed 1.442s CPU time.
Sep 30 17:51:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:51:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:51:20.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:20 compute-0 ceph-mon[73755]: pgmap v367: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:51:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:51:20.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v368: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:51:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:51:22.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:51:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:51:22 compute-0 ceph-mon[73755]: pgmap v368: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:51:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:51:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:51:22.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:22 compute-0 sudo[174088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:51:22 compute-0 sudo[174088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:51:22 compute-0 sudo[174088]: pam_unix(sudo:session): session closed for user root
Sep 30 17:51:22 compute-0 sudo[174113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:51:22 compute-0 sudo[174113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:51:22 compute-0 sudo[174113]: pam_unix(sudo:session): session closed for user root
Sep 30 17:51:22 compute-0 sudo[174116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 17:51:22 compute-0 sudo[174116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:51:23 compute-0 sudo[174116]: pam_unix(sudo:session): session closed for user root
Sep 30 17:51:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Sep 30 17:51:23 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Sep 30 17:51:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v369: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:51:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=sqlstore.transactions t=2025-09-30T17:51:23.508171202Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Sep 30 17:51:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=cleanup t=2025-09-30T17:51:23.540499206Z level=info msg="Completed cleanup jobs" duration=44.194244ms
Sep 30 17:51:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Sep 30 17:51:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=plugins.update.checker t=2025-09-30T17:51:23.614532199Z level=info msg="Update check succeeded" duration=51.806873ms
Sep 30 17:51:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=grafana.update.checker t=2025-09-30T17:51:23.616446459Z level=info msg="Update check succeeded" duration=48.948688ms
Sep 30 17:51:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/175123 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 17:51:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:51:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:51:24.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:51:24 compute-0 podman[174199]: 2025-09-30 17:51:24.533619437 +0000 UTC m=+0.067677578 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_id=ovn_metadata_agent)
Sep 30 17:51:24 compute-0 ceph-mon[73755]: pgmap v369: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:51:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:51:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:51:24.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:51:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:51:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v370: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:51:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:51:26 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:51:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:51:26 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:51:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:51:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:51:26.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:51:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:51:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:51:26.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:51:26 compute-0 ceph-mon[73755]: pgmap v370: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:51:26 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:51:26 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:51:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Sep 30 17:51:26 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Sep 30 17:51:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:51:26 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:51:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 17:51:26 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:51:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 17:51:26 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:51:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 17:51:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:51:27.014Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:51:27 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:51:27 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 17:51:27 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:51:27 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 17:51:27 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:51:27 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:51:27 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:51:27 compute-0 sudo[174219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:51:27 compute-0 sudo[174219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:51:27 compute-0 sudo[174219]: pam_unix(sudo:session): session closed for user root
Sep 30 17:51:27 compute-0 sudo[174244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 17:51:27 compute-0 sudo[174244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:51:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v371: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:51:27 compute-0 podman[174314]: 2025-09-30 17:51:27.559697637 +0000 UTC m=+0.024562422 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:51:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:51:28.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:28 compute-0 podman[174314]: 2025-09-30 17:51:28.499630679 +0000 UTC m=+0.964495444 container create 2afdbf857b7c54e0512e2b05e4783bca6d084094a2707deafdb5384a1798ae23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_cartwright, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:51:28 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Sep 30 17:51:28 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:51:28 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:51:28 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:51:28 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:51:28 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:51:28 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:51:28 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:51:28 compute-0 systemd[1]: Started libpod-conmon-2afdbf857b7c54e0512e2b05e4783bca6d084094a2707deafdb5384a1798ae23.scope.
Sep 30 17:51:28 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:51:28 compute-0 podman[174314]: 2025-09-30 17:51:28.609522568 +0000 UTC m=+1.074387363 container init 2afdbf857b7c54e0512e2b05e4783bca6d084094a2707deafdb5384a1798ae23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_cartwright, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid)
Sep 30 17:51:28 compute-0 podman[174314]: 2025-09-30 17:51:28.619653203 +0000 UTC m=+1.084517968 container start 2afdbf857b7c54e0512e2b05e4783bca6d084094a2707deafdb5384a1798ae23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Sep 30 17:51:28 compute-0 podman[174314]: 2025-09-30 17:51:28.630628319 +0000 UTC m=+1.095493084 container attach 2afdbf857b7c54e0512e2b05e4783bca6d084094a2707deafdb5384a1798ae23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:51:28 compute-0 elated_cartwright[174332]: 167 167
Sep 30 17:51:28 compute-0 systemd[1]: libpod-2afdbf857b7c54e0512e2b05e4783bca6d084094a2707deafdb5384a1798ae23.scope: Deactivated successfully.
Sep 30 17:51:28 compute-0 conmon[174332]: conmon 2afdbf857b7c54e0512e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2afdbf857b7c54e0512e2b05e4783bca6d084094a2707deafdb5384a1798ae23.scope/container/memory.events
Sep 30 17:51:28 compute-0 podman[174314]: 2025-09-30 17:51:28.632903238 +0000 UTC m=+1.097768003 container died 2afdbf857b7c54e0512e2b05e4783bca6d084094a2707deafdb5384a1798ae23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_cartwright, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Sep 30 17:51:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:51:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:51:28.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:51:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-bae02b8fd5194362678e5fa205c060db852a5f3524525d0450aedac1630422a7-merged.mount: Deactivated successfully.
Sep 30 17:51:28 compute-0 podman[174314]: 2025-09-30 17:51:28.677639367 +0000 UTC m=+1.142504132 container remove 2afdbf857b7c54e0512e2b05e4783bca6d084094a2707deafdb5384a1798ae23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_cartwright, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Sep 30 17:51:28 compute-0 systemd[1]: libpod-conmon-2afdbf857b7c54e0512e2b05e4783bca6d084094a2707deafdb5384a1798ae23.scope: Deactivated successfully.
Sep 30 17:51:28 compute-0 kernel: SELinux:  Converting 2774 SID table entries...
Sep 30 17:51:28 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Sep 30 17:51:28 compute-0 kernel: SELinux:  policy capability open_perms=1
Sep 30 17:51:28 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Sep 30 17:51:28 compute-0 kernel: SELinux:  policy capability always_check_network=0
Sep 30 17:51:28 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Sep 30 17:51:28 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Sep 30 17:51:28 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Sep 30 17:51:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:51:28] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:51:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:51:28] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:51:28 compute-0 podman[174358]: 2025-09-30 17:51:28.843645421 +0000 UTC m=+0.058200381 container create a3e1502bf19c13e577b25286ebe83521739748389d3d41e48478aafb85ca7144 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_cerf, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Sep 30 17:51:28 compute-0 dbus-broker-launch[781]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Sep 30 17:51:28 compute-0 systemd[1]: Started libpod-conmon-a3e1502bf19c13e577b25286ebe83521739748389d3d41e48478aafb85ca7144.scope.
Sep 30 17:51:28 compute-0 podman[174358]: 2025-09-30 17:51:28.818654008 +0000 UTC m=+0.033209068 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:51:28 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:51:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a490f3fd04d9e2fb99fa1067c5c07074eb9ab2a7f8e80fc6589dfe04d7c0bed5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:51:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a490f3fd04d9e2fb99fa1067c5c07074eb9ab2a7f8e80fc6589dfe04d7c0bed5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:51:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a490f3fd04d9e2fb99fa1067c5c07074eb9ab2a7f8e80fc6589dfe04d7c0bed5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:51:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a490f3fd04d9e2fb99fa1067c5c07074eb9ab2a7f8e80fc6589dfe04d7c0bed5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:51:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a490f3fd04d9e2fb99fa1067c5c07074eb9ab2a7f8e80fc6589dfe04d7c0bed5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:51:29 compute-0 podman[174358]: 2025-09-30 17:51:29.034856704 +0000 UTC m=+0.249411694 container init a3e1502bf19c13e577b25286ebe83521739748389d3d41e48478aafb85ca7144 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_cerf, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:51:29 compute-0 podman[174358]: 2025-09-30 17:51:29.045624375 +0000 UTC m=+0.260179345 container start a3e1502bf19c13e577b25286ebe83521739748389d3d41e48478aafb85ca7144 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_cerf, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:51:29 compute-0 podman[174358]: 2025-09-30 17:51:29.094703346 +0000 UTC m=+0.309258336 container attach a3e1502bf19c13e577b25286ebe83521739748389d3d41e48478aafb85ca7144 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_cerf, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:51:29 compute-0 condescending_cerf[174375]: --> passed data devices: 0 physical, 1 LVM
Sep 30 17:51:29 compute-0 condescending_cerf[174375]: --> All data devices are unavailable
Sep 30 17:51:29 compute-0 systemd[1]: libpod-a3e1502bf19c13e577b25286ebe83521739748389d3d41e48478aafb85ca7144.scope: Deactivated successfully.
Sep 30 17:51:29 compute-0 podman[174358]: 2025-09-30 17:51:29.376667748 +0000 UTC m=+0.591222738 container died a3e1502bf19c13e577b25286ebe83521739748389d3d41e48478aafb85ca7144 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_cerf, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default)
Sep 30 17:51:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v372: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:51:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-a490f3fd04d9e2fb99fa1067c5c07074eb9ab2a7f8e80fc6589dfe04d7c0bed5-merged.mount: Deactivated successfully.
Sep 30 17:51:29 compute-0 ceph-mon[73755]: pgmap v371: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:51:29 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Scheduled restart job, restart counter is at 9.
Sep 30 17:51:29 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:51:29 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Consumed 1.442s CPU time.
Sep 30 17:51:29 compute-0 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 17:51:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:51:30.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:51:30 compute-0 podman[174358]: 2025-09-30 17:51:30.239214899 +0000 UTC m=+1.453769869 container remove a3e1502bf19c13e577b25286ebe83521739748389d3d41e48478aafb85ca7144 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_cerf, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 17:51:30 compute-0 systemd[1]: libpod-conmon-a3e1502bf19c13e577b25286ebe83521739748389d3d41e48478aafb85ca7144.scope: Deactivated successfully.
Sep 30 17:51:30 compute-0 sudo[174244]: pam_unix(sudo:session): session closed for user root
Sep 30 17:51:30 compute-0 sudo[174417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:51:30 compute-0 sudo[174417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:51:30 compute-0 sudo[174417]: pam_unix(sudo:session): session closed for user root
Sep 30 17:51:30 compute-0 sudo[174451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 17:51:30 compute-0 sudo[174451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:51:30 compute-0 podman[174497]: 2025-09-30 17:51:30.507805882 +0000 UTC m=+0.045255992 container create 26e3596af8fd3d98333c5282db380ebf10f4eb1450e40b318453c563d5574767 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:51:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b6a67c203cdd86671a8caa9c9d47886d54495e523f2951994dea6f8492f97b8/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Sep 30 17:51:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b6a67c203cdd86671a8caa9c9d47886d54495e523f2951994dea6f8492f97b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:51:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b6a67c203cdd86671a8caa9c9d47886d54495e523f2951994dea6f8492f97b8/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:51:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b6a67c203cdd86671a8caa9c9d47886d54495e523f2951994dea6f8492f97b8/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-0.syzvbh-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:51:30 compute-0 podman[174497]: 2025-09-30 17:51:30.571578556 +0000 UTC m=+0.109028686 container init 26e3596af8fd3d98333c5282db380ebf10f4eb1450e40b318453c563d5574767 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 17:51:30 compute-0 podman[174497]: 2025-09-30 17:51:30.576996308 +0000 UTC m=+0.114446408 container start 26e3596af8fd3d98333c5282db380ebf10f4eb1450e40b318453c563d5574767 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:51:30 compute-0 bash[174497]: 26e3596af8fd3d98333c5282db380ebf10f4eb1450e40b318453c563d5574767
Sep 30 17:51:30 compute-0 podman[174497]: 2025-09-30 17:51:30.488144379 +0000 UTC m=+0.025594509 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:51:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:30 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Sep 30 17:51:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:30 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Sep 30 17:51:30 compute-0 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:51:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:51:30.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:30 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Sep 30 17:51:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:30 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Sep 30 17:51:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:30 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Sep 30 17:51:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:30 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Sep 30 17:51:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:30 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Sep 30 17:51:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:30 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 17:51:30 compute-0 ceph-mon[73755]: pgmap v372: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:51:30 compute-0 podman[174592]: 2025-09-30 17:51:30.857237295 +0000 UTC m=+0.046238578 container create 25ffa7b7c42ff894f5901e13bed9fda01fc26400d491a61a93f652076fb63585 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_noyce, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Sep 30 17:51:30 compute-0 systemd[1]: Started libpod-conmon-25ffa7b7c42ff894f5901e13bed9fda01fc26400d491a61a93f652076fb63585.scope.
Sep 30 17:51:30 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:51:30 compute-0 podman[174592]: 2025-09-30 17:51:30.838343552 +0000 UTC m=+0.027344855 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:51:30 compute-0 podman[174592]: 2025-09-30 17:51:30.955561862 +0000 UTC m=+0.144563175 container init 25ffa7b7c42ff894f5901e13bed9fda01fc26400d491a61a93f652076fb63585 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_noyce, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:51:30 compute-0 podman[174592]: 2025-09-30 17:51:30.96658014 +0000 UTC m=+0.155581423 container start 25ffa7b7c42ff894f5901e13bed9fda01fc26400d491a61a93f652076fb63585 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_noyce, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Sep 30 17:51:30 compute-0 podman[174592]: 2025-09-30 17:51:30.973701706 +0000 UTC m=+0.162702989 container attach 25ffa7b7c42ff894f5901e13bed9fda01fc26400d491a61a93f652076fb63585 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:51:30 compute-0 distracted_noyce[174609]: 167 167
Sep 30 17:51:30 compute-0 systemd[1]: libpod-25ffa7b7c42ff894f5901e13bed9fda01fc26400d491a61a93f652076fb63585.scope: Deactivated successfully.
Sep 30 17:51:30 compute-0 conmon[174609]: conmon 25ffa7b7c42ff894f590 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-25ffa7b7c42ff894f5901e13bed9fda01fc26400d491a61a93f652076fb63585.scope/container/memory.events
Sep 30 17:51:30 compute-0 podman[174592]: 2025-09-30 17:51:30.978499381 +0000 UTC m=+0.167500684 container died 25ffa7b7c42ff894f5901e13bed9fda01fc26400d491a61a93f652076fb63585 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Sep 30 17:51:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ceabfd4631aeeb04bc06fbf3ab2a0f6d75fc44518946fd29e146102eb80f047-merged.mount: Deactivated successfully.
Sep 30 17:51:31 compute-0 podman[174592]: 2025-09-30 17:51:31.023462815 +0000 UTC m=+0.212464098 container remove 25ffa7b7c42ff894f5901e13bed9fda01fc26400d491a61a93f652076fb63585 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_noyce, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Sep 30 17:51:31 compute-0 systemd[1]: libpod-conmon-25ffa7b7c42ff894f5901e13bed9fda01fc26400d491a61a93f652076fb63585.scope: Deactivated successfully.
Sep 30 17:51:31 compute-0 podman[174633]: 2025-09-30 17:51:31.218039936 +0000 UTC m=+0.065347838 container create 099cd4ea8c23ba0f75ec6226ca1b122fc3d026d16b61838743f7ed1bbdd7ea81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_sinoussi, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:51:31 compute-0 systemd[1]: Started libpod-conmon-099cd4ea8c23ba0f75ec6226ca1b122fc3d026d16b61838743f7ed1bbdd7ea81.scope.
Sep 30 17:51:31 compute-0 podman[174633]: 2025-09-30 17:51:31.187024176 +0000 UTC m=+0.034332108 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:51:31 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:51:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c46f9d0919b73422594de9ccdd9ac4ca2a6dbb84a96f951097f40d7c33bc505/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:51:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c46f9d0919b73422594de9ccdd9ac4ca2a6dbb84a96f951097f40d7c33bc505/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:51:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c46f9d0919b73422594de9ccdd9ac4ca2a6dbb84a96f951097f40d7c33bc505/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:51:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c46f9d0919b73422594de9ccdd9ac4ca2a6dbb84a96f951097f40d7c33bc505/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:51:31 compute-0 podman[174633]: 2025-09-30 17:51:31.357955019 +0000 UTC m=+0.205262941 container init 099cd4ea8c23ba0f75ec6226ca1b122fc3d026d16b61838743f7ed1bbdd7ea81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_sinoussi, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:51:31 compute-0 podman[174633]: 2025-09-30 17:51:31.365996449 +0000 UTC m=+0.213304341 container start 099cd4ea8c23ba0f75ec6226ca1b122fc3d026d16b61838743f7ed1bbdd7ea81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:51:31 compute-0 podman[174633]: 2025-09-30 17:51:31.386209097 +0000 UTC m=+0.233517019 container attach 099cd4ea8c23ba0f75ec6226ca1b122fc3d026d16b61838743f7ed1bbdd7ea81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:51:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v373: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:51:31 compute-0 sweet_sinoussi[174650]: {
Sep 30 17:51:31 compute-0 sweet_sinoussi[174650]:     "0": [
Sep 30 17:51:31 compute-0 sweet_sinoussi[174650]:         {
Sep 30 17:51:31 compute-0 sweet_sinoussi[174650]:             "devices": [
Sep 30 17:51:31 compute-0 sweet_sinoussi[174650]:                 "/dev/loop3"
Sep 30 17:51:31 compute-0 sweet_sinoussi[174650]:             ],
Sep 30 17:51:31 compute-0 sweet_sinoussi[174650]:             "lv_name": "ceph_lv0",
Sep 30 17:51:31 compute-0 sweet_sinoussi[174650]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:51:31 compute-0 sweet_sinoussi[174650]:             "lv_size": "21470642176",
Sep 30 17:51:31 compute-0 sweet_sinoussi[174650]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 17:51:31 compute-0 sweet_sinoussi[174650]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:51:31 compute-0 sweet_sinoussi[174650]:             "name": "ceph_lv0",
Sep 30 17:51:31 compute-0 sweet_sinoussi[174650]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:51:31 compute-0 sweet_sinoussi[174650]:             "tags": {
Sep 30 17:51:31 compute-0 sweet_sinoussi[174650]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:51:31 compute-0 sweet_sinoussi[174650]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:51:31 compute-0 sweet_sinoussi[174650]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 17:51:31 compute-0 sweet_sinoussi[174650]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 17:51:31 compute-0 sweet_sinoussi[174650]:                 "ceph.cluster_name": "ceph",
Sep 30 17:51:31 compute-0 sweet_sinoussi[174650]:                 "ceph.crush_device_class": "",
Sep 30 17:51:31 compute-0 sweet_sinoussi[174650]:                 "ceph.encrypted": "0",
Sep 30 17:51:31 compute-0 sweet_sinoussi[174650]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 17:51:31 compute-0 sweet_sinoussi[174650]:                 "ceph.osd_id": "0",
Sep 30 17:51:31 compute-0 sweet_sinoussi[174650]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 17:51:31 compute-0 sweet_sinoussi[174650]:                 "ceph.type": "block",
Sep 30 17:51:31 compute-0 sweet_sinoussi[174650]:                 "ceph.vdo": "0",
Sep 30 17:51:31 compute-0 sweet_sinoussi[174650]:                 "ceph.with_tpm": "0"
Sep 30 17:51:31 compute-0 sweet_sinoussi[174650]:             },
Sep 30 17:51:31 compute-0 sweet_sinoussi[174650]:             "type": "block",
Sep 30 17:51:31 compute-0 sweet_sinoussi[174650]:             "vg_name": "ceph_vg0"
Sep 30 17:51:31 compute-0 sweet_sinoussi[174650]:         }
Sep 30 17:51:31 compute-0 sweet_sinoussi[174650]:     ]
Sep 30 17:51:31 compute-0 sweet_sinoussi[174650]: }
Sep 30 17:51:31 compute-0 systemd[1]: libpod-099cd4ea8c23ba0f75ec6226ca1b122fc3d026d16b61838743f7ed1bbdd7ea81.scope: Deactivated successfully.
Sep 30 17:51:31 compute-0 podman[174633]: 2025-09-30 17:51:31.66743814 +0000 UTC m=+0.514746042 container died 099cd4ea8c23ba0f75ec6226ca1b122fc3d026d16b61838743f7ed1bbdd7ea81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_sinoussi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:51:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c46f9d0919b73422594de9ccdd9ac4ca2a6dbb84a96f951097f40d7c33bc505-merged.mount: Deactivated successfully.
Sep 30 17:51:31 compute-0 podman[174633]: 2025-09-30 17:51:31.712847445 +0000 UTC m=+0.560155347 container remove 099cd4ea8c23ba0f75ec6226ca1b122fc3d026d16b61838743f7ed1bbdd7ea81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid)
Sep 30 17:51:31 compute-0 systemd[1]: libpod-conmon-099cd4ea8c23ba0f75ec6226ca1b122fc3d026d16b61838743f7ed1bbdd7ea81.scope: Deactivated successfully.
Sep 30 17:51:31 compute-0 sudo[174451]: pam_unix(sudo:session): session closed for user root
Sep 30 17:51:31 compute-0 sudo[174675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:51:31 compute-0 sudo[174675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:51:31 compute-0 sudo[174675]: pam_unix(sudo:session): session closed for user root
Sep 30 17:51:31 compute-0 sudo[174700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 17:51:31 compute-0 sudo[174700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:51:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:51:32.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:32 compute-0 podman[174765]: 2025-09-30 17:51:32.30245002 +0000 UTC m=+0.049557685 container create 3661d41c869cd1be006c7e089da06452d070de4d93264f10c27f13971a8fc130 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_neumann, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Sep 30 17:51:32 compute-0 podman[174765]: 2025-09-30 17:51:32.278244598 +0000 UTC m=+0.025352283 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:51:32 compute-0 systemd[1]: Started libpod-conmon-3661d41c869cd1be006c7e089da06452d070de4d93264f10c27f13971a8fc130.scope.
Sep 30 17:51:32 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:51:32 compute-0 podman[174765]: 2025-09-30 17:51:32.484637607 +0000 UTC m=+0.231745282 container init 3661d41c869cd1be006c7e089da06452d070de4d93264f10c27f13971a8fc130 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_neumann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 17:51:32 compute-0 podman[174765]: 2025-09-30 17:51:32.493620541 +0000 UTC m=+0.240728206 container start 3661d41c869cd1be006c7e089da06452d070de4d93264f10c27f13971a8fc130 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_neumann, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Sep 30 17:51:32 compute-0 silly_neumann[174781]: 167 167
Sep 30 17:51:32 compute-0 systemd[1]: libpod-3661d41c869cd1be006c7e089da06452d070de4d93264f10c27f13971a8fc130.scope: Deactivated successfully.
Sep 30 17:51:32 compute-0 podman[174765]: 2025-09-30 17:51:32.522784923 +0000 UTC m=+0.269892618 container attach 3661d41c869cd1be006c7e089da06452d070de4d93264f10c27f13971a8fc130 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0)
Sep 30 17:51:32 compute-0 podman[174765]: 2025-09-30 17:51:32.523211244 +0000 UTC m=+0.270318919 container died 3661d41c869cd1be006c7e089da06452d070de4d93264f10c27f13971a8fc130 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Sep 30 17:51:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdc82c037cc938b65fada3fdb6846cff7a0e5288a7f7d7d4489943f487d00788-merged.mount: Deactivated successfully.
Sep 30 17:51:32 compute-0 podman[174765]: 2025-09-30 17:51:32.566381641 +0000 UTC m=+0.313489306 container remove 3661d41c869cd1be006c7e089da06452d070de4d93264f10c27f13971a8fc130 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Sep 30 17:51:32 compute-0 systemd[1]: libpod-conmon-3661d41c869cd1be006c7e089da06452d070de4d93264f10c27f13971a8fc130.scope: Deactivated successfully.
Sep 30 17:51:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:51:32.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:32 compute-0 podman[174807]: 2025-09-30 17:51:32.738861985 +0000 UTC m=+0.044834542 container create 33647713d22b4000ed58a7eef9aefcb69ca2a8753f9a57d3cc57da157dd63834 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:51:32 compute-0 systemd[1]: Started libpod-conmon-33647713d22b4000ed58a7eef9aefcb69ca2a8753f9a57d3cc57da157dd63834.scope.
Sep 30 17:51:32 compute-0 ceph-mon[73755]: pgmap v373: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:51:32 compute-0 podman[174807]: 2025-09-30 17:51:32.720072134 +0000 UTC m=+0.026044721 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:51:32 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:51:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5ce5cb483ce36a524ad84216c1a5766dcb57e2df66e1fab403bfab9ed5912d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:51:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5ce5cb483ce36a524ad84216c1a5766dcb57e2df66e1fab403bfab9ed5912d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:51:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5ce5cb483ce36a524ad84216c1a5766dcb57e2df66e1fab403bfab9ed5912d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:51:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5ce5cb483ce36a524ad84216c1a5766dcb57e2df66e1fab403bfab9ed5912d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:51:32 compute-0 podman[174807]: 2025-09-30 17:51:32.839438541 +0000 UTC m=+0.145411118 container init 33647713d22b4000ed58a7eef9aefcb69ca2a8753f9a57d3cc57da157dd63834 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_gates, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:51:32 compute-0 podman[174807]: 2025-09-30 17:51:32.84591338 +0000 UTC m=+0.151885947 container start 33647713d22b4000ed58a7eef9aefcb69ca2a8753f9a57d3cc57da157dd63834 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 17:51:32 compute-0 podman[174807]: 2025-09-30 17:51:32.850172241 +0000 UTC m=+0.156144848 container attach 33647713d22b4000ed58a7eef9aefcb69ca2a8753f9a57d3cc57da157dd63834 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_gates, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:51:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v374: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:51:33 compute-0 lvm[174899]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 17:51:33 compute-0 lvm[174899]: VG ceph_vg0 finished
Sep 30 17:51:33 compute-0 optimistic_gates[174824]: {}
Sep 30 17:51:33 compute-0 systemd[1]: libpod-33647713d22b4000ed58a7eef9aefcb69ca2a8753f9a57d3cc57da157dd63834.scope: Deactivated successfully.
Sep 30 17:51:33 compute-0 systemd[1]: libpod-33647713d22b4000ed58a7eef9aefcb69ca2a8753f9a57d3cc57da157dd63834.scope: Consumed 1.123s CPU time.
Sep 30 17:51:33 compute-0 podman[174807]: 2025-09-30 17:51:33.564998945 +0000 UTC m=+0.870971522 container died 33647713d22b4000ed58a7eef9aefcb69ca2a8753f9a57d3cc57da157dd63834 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_gates, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:51:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5ce5cb483ce36a524ad84216c1a5766dcb57e2df66e1fab403bfab9ed5912d6-merged.mount: Deactivated successfully.
Sep 30 17:51:33 compute-0 podman[174807]: 2025-09-30 17:51:33.617907897 +0000 UTC m=+0.923880444 container remove 33647713d22b4000ed58a7eef9aefcb69ca2a8753f9a57d3cc57da157dd63834 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Sep 30 17:51:33 compute-0 systemd[1]: libpod-conmon-33647713d22b4000ed58a7eef9aefcb69ca2a8753f9a57d3cc57da157dd63834.scope: Deactivated successfully.
Sep 30 17:51:33 compute-0 sudo[174700]: pam_unix(sudo:session): session closed for user root
Sep 30 17:51:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:51:33 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:51:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:51:33 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:51:33 compute-0 sudo[174916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 17:51:33 compute-0 sudo[174916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:51:33 compute-0 sudo[174916]: pam_unix(sudo:session): session closed for user root
Sep 30 17:51:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:51:34.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:51:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:51:34.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:51:34 compute-0 ceph-mon[73755]: pgmap v374: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:51:34 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:51:34 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:51:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:51:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v375: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:51:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 17:51:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:51:36.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 17:51:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:51:36.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:36 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 17:51:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:36 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 17:51:36 compute-0 ceph-mon[73755]: pgmap v375: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:51:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:51:37.015Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 17:51:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:51:37.016Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:51:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:51:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:51:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:51:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:51:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:51:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:51:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:51:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:51:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v376: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:51:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:51:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:51:38.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:51:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:51:38.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:51:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:51:38] "GET /metrics HTTP/1.1" 200 46523 "" "Prometheus/2.51.0"
Sep 30 17:51:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:51:38] "GET /metrics HTTP/1.1" 200 46523 "" "Prometheus/2.51.0"
Sep 30 17:51:39 compute-0 ceph-mon[73755]: pgmap v376: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:51:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v377: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.5 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:51:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:51:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:51:40.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:51:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:51:40 compute-0 ceph-mon[73755]: pgmap v377: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.5 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:51:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:51:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:51:40.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:51:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v378: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.5 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:51:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:51:42.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:42 compute-0 podman[175445]: 2025-09-30 17:51:42.563770002 +0000 UTC m=+0.093933634 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 17:51:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:51:42.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:42 compute-0 ceph-mon[73755]: pgmap v378: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.5 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:51:42 compute-0 sudo[175707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:51:42 compute-0 sudo[175707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:51:42 compute-0 sudo[175707]: pam_unix(sudo:session): session closed for user root
Sep 30 17:51:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:42 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 17:51:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:42 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Sep 30 17:51:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:42 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Sep 30 17:51:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:42 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Sep 30 17:51:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:42 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Sep 30 17:51:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:42 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Sep 30 17:51:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:42 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Sep 30 17:51:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:42 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:51:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:42 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:51:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:42 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:51:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:43 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Sep 30 17:51:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:43 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:51:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:43 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Sep 30 17:51:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:43 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Sep 30 17:51:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:43 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Sep 30 17:51:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:43 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Sep 30 17:51:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:43 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Sep 30 17:51:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:43 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Sep 30 17:51:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:43 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Sep 30 17:51:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:43 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Sep 30 17:51:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:43 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Sep 30 17:51:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:43 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Sep 30 17:51:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:43 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Sep 30 17:51:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:43 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Sep 30 17:51:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:43 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 17:51:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:43 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Sep 30 17:51:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:43 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 17:51:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v379: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:51:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:43 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c20000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:51:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:51:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:51:44.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:51:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:44 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c08000fb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:51:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 17:51:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:51:44.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 17:51:44 compute-0 ceph-mon[73755]: pgmap v379: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:51:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:51:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v380: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:51:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/175145 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 17:51:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:45 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:51:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:51:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:51:46.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:51:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:46 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:51:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:51:46.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:46 compute-0 ceph-mon[73755]: pgmap v380: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:51:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:51:47.017Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 17:51:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:51:47.017Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 17:51:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v381: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:51:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:47 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c14001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:51:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:51:48.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:48 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c08001cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:51:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:51:48.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:51:48] "GET /metrics HTTP/1.1" 200 46523 "" "Prometheus/2.51.0"
Sep 30 17:51:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:51:48] "GET /metrics HTTP/1.1" 200 46523 "" "Prometheus/2.51.0"
Sep 30 17:51:48 compute-0 ceph-mon[73755]: pgmap v381: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:51:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v382: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:51:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:49 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:51:50 compute-0 ceph-mon[73755]: pgmap v382: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:51:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:51:50.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:50 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:51:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:51:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:51:50.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v383: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:51:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:51 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c14001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:51:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:51:52.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:52 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c08001cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:51:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:51:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:51:52 compute-0 ceph-mon[73755]: pgmap v383: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:51:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:51:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:51:52.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v384: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:51:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:53 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:51:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:51:54.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:54 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:51:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:51:54.237 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 17:51:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:51:54.238 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 17:51:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:51:54.238 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 17:51:54 compute-0 ceph-mon[73755]: pgmap v384: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:51:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:51:54.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:51:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v385: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:51:55 compute-0 podman[184373]: 2025-09-30 17:51:55.512663227 +0000 UTC m=+0.055149601 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent)
Sep 30 17:51:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:55 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c14002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:51:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:51:56.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:56 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c08001cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:51:56 compute-0 ceph-mon[73755]: pgmap v385: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:51:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:51:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:51:56.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:51:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:51:57.018Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:51:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v386: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:51:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:57 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:51:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:51:58.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:58 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:51:58 compute-0 ceph-mon[73755]: pgmap v386: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:51:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:51:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:51:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:51:58.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:51:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:51:58] "GET /metrics HTTP/1.1" 200 46520 "" "Prometheus/2.51.0"
Sep 30 17:51:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:51:58] "GET /metrics HTTP/1.1" 200 46520 "" "Prometheus/2.51.0"
Sep 30 17:51:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v387: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:51:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:51:59 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:52:00.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:00 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c08001cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:52:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:52:00.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:00 compute-0 ceph-mon[73755]: pgmap v387: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:52:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v388: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:52:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:01 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:52:02.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:02 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:52:02.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:02 compute-0 ceph-mon[73755]: pgmap v388: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:52:02 compute-0 sudo[189347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:52:02 compute-0 sudo[189347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:52:02 compute-0 sudo[189347]: pam_unix(sudo:session): session closed for user root
Sep 30 17:52:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v389: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:52:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:03 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c14003230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:52:04.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:04 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c08001cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:52:04.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:04 compute-0 ceph-mon[73755]: pgmap v389: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:52:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:52:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v390: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:52:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:05 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:05 compute-0 ceph-mon[73755]: pgmap v390: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:52:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:52:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:52:06.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:52:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:06 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf00032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:52:06.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:52:07.019Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:52:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:52:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:52:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_17:52:07
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['default.rgw.log', 'images', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', '.nfs', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', '.mgr', 'vms', 'volumes']
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:52:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v391: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:52:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:07 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c14003230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:52:08.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:08 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c08001cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:08 compute-0 ceph-mon[73755]: pgmap v391: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:52:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:52:08.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:52:08] "GET /metrics HTTP/1.1" 200 46520 "" "Prometheus/2.51.0"
Sep 30 17:52:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:52:08] "GET /metrics HTTP/1.1" 200 46520 "" "Prometheus/2.51.0"
Sep 30 17:52:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v392: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:52:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:09 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:52:10.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:10 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf00032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:52:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:52:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:52:10.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:52:10 compute-0 ceph-mon[73755]: pgmap v392: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:52:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v393: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:52:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:11 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c14003230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:52:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:52:12.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:52:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:12 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c08001cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:52:12.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:12 compute-0 ceph-mon[73755]: pgmap v393: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:52:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v394: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:52:13 compute-0 podman[191837]: 2025-09-30 17:52:13.611844082 +0000 UTC m=+0.146786132 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0)
Sep 30 17:52:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:13 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:52:14.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:14 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:52:14.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:14 compute-0 ceph-mon[73755]: pgmap v394: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:52:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:52:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v395: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:52:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:15 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c14003230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:52:16.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:16 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c08001cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:52:16.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:16 compute-0 ceph-mon[73755]: pgmap v395: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:52:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:52:17.019Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:52:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v396: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:52:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:17 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bfc000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:18 compute-0 ceph-mon[73755]: pgmap v396: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:52:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:18 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:52:18.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:52:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:52:18.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:52:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:52:18] "GET /metrics HTTP/1.1" 200 46520 "" "Prometheus/2.51.0"
Sep 30 17:52:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:52:18] "GET /metrics HTTP/1.1" 200 46520 "" "Prometheus/2.51.0"
Sep 30 17:52:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/175219 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 17:52:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v397: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:52:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:19 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c14003230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:20 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c08001cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:52:20.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:52:20 compute-0 ceph-mon[73755]: pgmap v397: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:52:20 compute-0 kernel: SELinux:  Converting 2775 SID table entries...
Sep 30 17:52:20 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Sep 30 17:52:20 compute-0 kernel: SELinux:  policy capability open_perms=1
Sep 30 17:52:20 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Sep 30 17:52:20 compute-0 kernel: SELinux:  policy capability always_check_network=0
Sep 30 17:52:20 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Sep 30 17:52:20 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Sep 30 17:52:20 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Sep 30 17:52:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:52:20.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v398: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:52:21 compute-0 groupadd[191886]: group added to /etc/group: name=dnsmasq, GID=991
Sep 30 17:52:21 compute-0 groupadd[191886]: group added to /etc/gshadow: name=dnsmasq
Sep 30 17:52:21 compute-0 groupadd[191886]: new group: name=dnsmasq, GID=991
Sep 30 17:52:21 compute-0 useradd[191893]: new user: name=dnsmasq, UID=991, GID=991, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Sep 30 17:52:21 compute-0 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Sep 30 17:52:21 compute-0 dbus-broker-launch[781]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Sep 30 17:52:21 compute-0 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Sep 30 17:52:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:21 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bfc001aa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:22 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bfc001aa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:52:22.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:52:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:52:22 compute-0 ceph-mon[73755]: pgmap v398: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:52:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:52:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:52:22.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:22 compute-0 groupadd[191907]: group added to /etc/group: name=clevis, GID=990
Sep 30 17:52:22 compute-0 groupadd[191907]: group added to /etc/gshadow: name=clevis
Sep 30 17:52:22 compute-0 groupadd[191907]: new group: name=clevis, GID=990
Sep 30 17:52:22 compute-0 useradd[191914]: new user: name=clevis, UID=990, GID=990, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Sep 30 17:52:22 compute-0 usermod[191924]: add 'clevis' to group 'tss'
Sep 30 17:52:22 compute-0 usermod[191924]: add 'clevis' to shadow group 'tss'
Sep 30 17:52:23 compute-0 sudo[191931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:52:23 compute-0 sudo[191931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:52:23 compute-0 sudo[191931]: pam_unix(sudo:session): session closed for user root
Sep 30 17:52:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v399: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:52:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:23 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bfc001aa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:24 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c180013a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:52:24.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:24 compute-0 ceph-mon[73755]: pgmap v399: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:52:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:52:24.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:25 compute-0 polkitd[6289]: Reloading rules
Sep 30 17:52:25 compute-0 polkitd[6289]: Collecting garbage unconditionally...
Sep 30 17:52:25 compute-0 polkitd[6289]: Loading rules from directory /etc/polkit-1/rules.d
Sep 30 17:52:25 compute-0 polkitd[6289]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 30 17:52:25 compute-0 polkitd[6289]: Finished loading, compiling and executing 4 rules
Sep 30 17:52:25 compute-0 polkitd[6289]: Reloading rules
Sep 30 17:52:25 compute-0 polkitd[6289]: Collecting garbage unconditionally...
Sep 30 17:52:25 compute-0 polkitd[6289]: Loading rules from directory /etc/polkit-1/rules.d
Sep 30 17:52:25 compute-0 polkitd[6289]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 30 17:52:25 compute-0 polkitd[6289]: Finished loading, compiling and executing 4 rules
Sep 30 17:52:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:52:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v400: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:52:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:25 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c14003230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:26 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c08001cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:52:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:52:26.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:52:26 compute-0 groupadd[192140]: group added to /etc/group: name=ceph, GID=167
Sep 30 17:52:26 compute-0 groupadd[192140]: group added to /etc/gshadow: name=ceph
Sep 30 17:52:26 compute-0 groupadd[192140]: new group: name=ceph, GID=167
Sep 30 17:52:26 compute-0 useradd[192157]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Sep 30 17:52:26 compute-0 podman[192141]: 2025-09-30 17:52:26.344286092 +0000 UTC m=+0.060427726 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20250930, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4)
Sep 30 17:52:26 compute-0 ceph-mon[73755]: pgmap v400: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:52:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:52:26.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:52:27.020Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 17:52:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:52:27.021Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:52:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v401: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:52:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:27 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bfc001aa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:28 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c18001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:52:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:52:28.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:52:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:52:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:52:28.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:52:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:52:28] "GET /metrics HTTP/1.1" 200 46521 "" "Prometheus/2.51.0"
Sep 30 17:52:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:52:28] "GET /metrics HTTP/1.1" 200 46521 "" "Prometheus/2.51.0"
Sep 30 17:52:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v402: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:52:29 compute-0 sshd[1010]: Received signal 15; terminating.
Sep 30 17:52:29 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Sep 30 17:52:29 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Sep 30 17:52:29 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Sep 30 17:52:29 compute-0 systemd[1]: sshd.service: Consumed 2.621s CPU time, read 532.0K from disk, written 12.0K to disk.
Sep 30 17:52:29 compute-0 ceph-mon[73755]: pgmap v401: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:52:29 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Sep 30 17:52:29 compute-0 systemd[1]: Stopping sshd-keygen.target...
Sep 30 17:52:29 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Sep 30 17:52:29 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Sep 30 17:52:29 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Sep 30 17:52:29 compute-0 systemd[1]: Reached target sshd-keygen.target.
Sep 30 17:52:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:29 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c14003230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:29 compute-0 systemd[1]: Starting OpenSSH server daemon...
Sep 30 17:52:29 compute-0 sshd[192864]: Server listening on 0.0.0.0 port 22.
Sep 30 17:52:29 compute-0 sshd[192864]: Server listening on :: port 22.
Sep 30 17:52:29 compute-0 systemd[1]: Started OpenSSH server daemon.
Sep 30 17:52:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:30 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c08001cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:52:30.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:52:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:52:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:52:30.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:52:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v403: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:52:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:31 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bfc002f90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:31 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Sep 30 17:52:31 compute-0 systemd[1]: Starting man-db-cache-update.service...
Sep 30 17:52:32 compute-0 systemd[1]: Reloading.
Sep 30 17:52:32 compute-0 systemd-rc-local-generator[193122]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:52:32 compute-0 systemd-sysv-generator[193126]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:52:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:32 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c18001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:52:32.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:32 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Sep 30 17:52:32 compute-0 auditd[705]: Audit daemon rotating log files
Sep 30 17:52:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:52:32.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:33 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 17:52:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v404: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Sep 30 17:52:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:33 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c18001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:34 compute-0 ceph-mon[73755]: pgmap v402: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:52:34 compute-0 sudo[195371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:52:34 compute-0 sudo[195371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:52:34 compute-0 sudo[195371]: pam_unix(sudo:session): session closed for user root
Sep 30 17:52:34 compute-0 sudo[195477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Sep 30 17:52:34 compute-0 sudo[195477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:52:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:34 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c18001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:52:34.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:34 compute-0 systemd[1]: Starting PackageKit Daemon...
Sep 30 17:52:34 compute-0 PackageKit[195837]: daemon start
Sep 30 17:52:34 compute-0 systemd[1]: Started PackageKit Daemon.
Sep 30 17:52:34 compute-0 podman[196193]: 2025-09-30 17:52:34.723403775 +0000 UTC m=+0.117329769 container exec 28cb2903608cec8e7c5a39aa3f398cadea7d807beef2cb8cca333795d18a1341 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Sep 30 17:52:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:52:34.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:34 compute-0 sudo[173544]: pam_unix(sudo:session): session closed for user root
Sep 30 17:52:34 compute-0 podman[196193]: 2025-09-30 17:52:34.833004083 +0000 UTC m=+0.226930047 container exec_died 28cb2903608cec8e7c5a39aa3f398cadea7d807beef2cb8cca333795d18a1341 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Sep 30 17:52:35 compute-0 ceph-mon[73755]: pgmap v403: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:52:35 compute-0 ceph-mon[73755]: pgmap v404: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Sep 30 17:52:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:52:35 compute-0 podman[197047]: 2025-09-30 17:52:35.244572711 +0000 UTC m=+0.049665717 container exec 0a64c261699b14850432928cca94e24743e97435fdc1d2cca08edcee1f870789 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:52:35 compute-0 podman[197047]: 2025-09-30 17:52:35.27969591 +0000 UTC m=+0.084788916 container exec_died 0a64c261699b14850432928cca94e24743e97435fdc1d2cca08edcee1f870789 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:52:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v405: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Sep 30 17:52:35 compute-0 podman[197512]: 2025-09-30 17:52:35.568685444 +0000 UTC m=+0.048908107 container exec 26e3596af8fd3d98333c5282db380ebf10f4eb1450e40b318453c563d5574767 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Sep 30 17:52:35 compute-0 podman[197512]: 2025-09-30 17:52:35.585727496 +0000 UTC m=+0.065950129 container exec_died 26e3596af8fd3d98333c5282db380ebf10f4eb1450e40b318453c563d5574767 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Sep 30 17:52:35 compute-0 podman[197786]: 2025-09-30 17:52:35.782147032 +0000 UTC m=+0.048979309 container exec e1cdc51276e16f535030605d2a18f629e9fe31bf73cf0a7411e18214526b68e5 (image=quay.io/ceph/haproxy:2.3, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha)
Sep 30 17:52:35 compute-0 podman[197786]: 2025-09-30 17:52:35.810727923 +0000 UTC m=+0.077560180 container exec_died e1cdc51276e16f535030605d2a18f629e9fe31bf73cf0a7411e18214526b68e5 (image=quay.io/ceph/haproxy:2.3, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha)
Sep 30 17:52:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:35 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c18001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:36 compute-0 podman[198050]: 2025-09-30 17:52:36.000267581 +0000 UTC m=+0.047477650 container exec b5b34e9f9a1e5f8a3081bfbcbc3f8d4401b0873cf75447793762597a4ea89ff4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, description=keepalived for Ceph, distribution-scope=public, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, release=1793, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, architecture=x86_64, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Sep 30 17:52:36 compute-0 ceph-mon[73755]: pgmap v405: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Sep 30 17:52:36 compute-0 podman[198050]: 2025-09-30 17:52:36.064740631 +0000 UTC m=+0.111950690 container exec_died b5b34e9f9a1e5f8a3081bfbcbc3f8d4401b0873cf75447793762597a4ea89ff4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, architecture=x86_64, description=keepalived for Ceph, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, vendor=Red Hat, Inc., version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, build-date=2023-02-22T09:23:20)
Sep 30 17:52:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:36 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c18001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:52:36.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:36 compute-0 podman[198386]: 2025-09-30 17:52:36.258739445 +0000 UTC m=+0.049751300 container exec 316307af09cabd347a032cf06e9544ea621ab1be46fe52d1c226ab05d1d9e331 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:52:36 compute-0 podman[198386]: 2025-09-30 17:52:36.307694293 +0000 UTC m=+0.098706138 container exec_died 316307af09cabd347a032cf06e9544ea621ab1be46fe52d1c226ab05d1d9e331 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:52:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:36 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 17:52:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:36 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 17:52:36 compute-0 podman[198715]: 2025-09-30 17:52:36.491653167 +0000 UTC m=+0.048426076 container exec cb74c136339a4f8b8601bbbbeb0c8f6158501bcf17f6fc4369db4a5ec1b442a2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:52:36 compute-0 podman[198715]: 2025-09-30 17:52:36.656635189 +0000 UTC m=+0.213408078 container exec_died cb74c136339a4f8b8601bbbbeb0c8f6158501bcf17f6fc4369db4a5ec1b442a2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 17:52:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:52:36.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:37 compute-0 podman[199371]: 2025-09-30 17:52:37.010295828 +0000 UTC m=+0.052335326 container exec 262a601a228f752d0c9dbf2b0c2d1b05fe66f88f5c16c2fc8adb324868709b53 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:52:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:52:37.021Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:52:37 compute-0 podman[199371]: 2025-09-30 17:52:37.044640647 +0000 UTC m=+0.086680135 container exec_died 262a601a228f752d0c9dbf2b0c2d1b05fe66f88f5c16c2fc8adb324868709b53 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 17:52:37 compute-0 sudo[195477]: pam_unix(sudo:session): session closed for user root
Sep 30 17:52:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:52:37 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:52:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:52:37 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:52:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:52:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:52:37 compute-0 sudo[199693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:52:37 compute-0 sudo[199693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:52:37 compute-0 sudo[199693]: pam_unix(sudo:session): session closed for user root
Sep 30 17:52:37 compute-0 sudo[199791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 17:52:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:52:37 compute-0 sudo[199791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:52:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:52:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:52:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:52:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:52:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:52:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v406: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Sep 30 17:52:37 compute-0 sudo[199791]: pam_unix(sudo:session): session closed for user root
Sep 30 17:52:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:52:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:52:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 17:52:37 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:52:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 17:52:37 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:52:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 17:52:37 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:52:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 17:52:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:52:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 17:52:37 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:52:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:52:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:52:37 compute-0 sudo[200500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:52:37 compute-0 sudo[200500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:52:37 compute-0 sudo[200500]: pam_unix(sudo:session): session closed for user root
Sep 30 17:52:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:37 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bfc0038b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:37 compute-0 sudo[200583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 17:52:37 compute-0 sudo[200583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:52:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:52:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:52:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:52:38 compute-0 ceph-mon[73755]: pgmap v406: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Sep 30 17:52:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:52:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:52:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:52:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:52:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:52:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:52:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:52:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:38 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c18001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:52:38.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:38 compute-0 podman[201064]: 2025-09-30 17:52:38.331651327 +0000 UTC m=+0.036595418 container create 04314b27e7b00af9b8c8808df423c82a47d2280140c9514fe522e6ecc149b3ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_poitras, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Sep 30 17:52:38 compute-0 systemd[1]: Started libpod-conmon-04314b27e7b00af9b8c8808df423c82a47d2280140c9514fe522e6ecc149b3ed.scope.
Sep 30 17:52:38 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:52:38 compute-0 podman[201064]: 2025-09-30 17:52:38.409137034 +0000 UTC m=+0.114081165 container init 04314b27e7b00af9b8c8808df423c82a47d2280140c9514fe522e6ecc149b3ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_poitras, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:52:38 compute-0 podman[201064]: 2025-09-30 17:52:38.314536364 +0000 UTC m=+0.019480475 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:52:38 compute-0 podman[201064]: 2025-09-30 17:52:38.418159638 +0000 UTC m=+0.123103729 container start 04314b27e7b00af9b8c8808df423c82a47d2280140c9514fe522e6ecc149b3ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_poitras, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:52:38 compute-0 podman[201064]: 2025-09-30 17:52:38.421565836 +0000 UTC m=+0.126509917 container attach 04314b27e7b00af9b8c8808df423c82a47d2280140c9514fe522e6ecc149b3ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Sep 30 17:52:38 compute-0 suspicious_poitras[201173]: 167 167
Sep 30 17:52:38 compute-0 systemd[1]: libpod-04314b27e7b00af9b8c8808df423c82a47d2280140c9514fe522e6ecc149b3ed.scope: Deactivated successfully.
Sep 30 17:52:38 compute-0 podman[201064]: 2025-09-30 17:52:38.423697921 +0000 UTC m=+0.128642022 container died 04314b27e7b00af9b8c8808df423c82a47d2280140c9514fe522e6ecc149b3ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_poitras, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True)
Sep 30 17:52:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-55d604d66a277857d4b2e814728e3b10c05c2c3389ebc518edba5039fae891f9-merged.mount: Deactivated successfully.
Sep 30 17:52:38 compute-0 podman[201064]: 2025-09-30 17:52:38.469895887 +0000 UTC m=+0.174839988 container remove 04314b27e7b00af9b8c8808df423c82a47d2280140c9514fe522e6ecc149b3ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_poitras, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:52:38 compute-0 systemd[1]: libpod-conmon-04314b27e7b00af9b8c8808df423c82a47d2280140c9514fe522e6ecc149b3ed.scope: Deactivated successfully.
Sep 30 17:52:38 compute-0 podman[201418]: 2025-09-30 17:52:38.642742863 +0000 UTC m=+0.051159886 container create d6c292895ff0bf58e33ed600b1d36863896bf6315a786ca17ba8444f00392060 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_robinson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:52:38 compute-0 systemd[1]: Started libpod-conmon-d6c292895ff0bf58e33ed600b1d36863896bf6315a786ca17ba8444f00392060.scope.
Sep 30 17:52:38 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:52:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6f5763ed28a6cc2574c25c64a66a1305f86c2fc05bbf2fe78804fcb3c12bd62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:52:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6f5763ed28a6cc2574c25c64a66a1305f86c2fc05bbf2fe78804fcb3c12bd62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:52:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6f5763ed28a6cc2574c25c64a66a1305f86c2fc05bbf2fe78804fcb3c12bd62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:52:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6f5763ed28a6cc2574c25c64a66a1305f86c2fc05bbf2fe78804fcb3c12bd62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:52:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6f5763ed28a6cc2574c25c64a66a1305f86c2fc05bbf2fe78804fcb3c12bd62/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:52:38 compute-0 podman[201418]: 2025-09-30 17:52:38.715200849 +0000 UTC m=+0.123617882 container init d6c292895ff0bf58e33ed600b1d36863896bf6315a786ca17ba8444f00392060 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_robinson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Sep 30 17:52:38 compute-0 podman[201418]: 2025-09-30 17:52:38.623653438 +0000 UTC m=+0.032070461 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:52:38 compute-0 podman[201418]: 2025-09-30 17:52:38.72140283 +0000 UTC m=+0.129819843 container start d6c292895ff0bf58e33ed600b1d36863896bf6315a786ca17ba8444f00392060 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:52:38 compute-0 podman[201418]: 2025-09-30 17:52:38.725765793 +0000 UTC m=+0.134182816 container attach d6c292895ff0bf58e33ed600b1d36863896bf6315a786ca17ba8444f00392060 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 17:52:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:52:38.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:52:38] "GET /metrics HTTP/1.1" 200 46532 "" "Prometheus/2.51.0"
Sep 30 17:52:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:52:38] "GET /metrics HTTP/1.1" 200 46532 "" "Prometheus/2.51.0"
Sep 30 17:52:39 compute-0 kind_robinson[201531]: --> passed data devices: 0 physical, 1 LVM
Sep 30 17:52:39 compute-0 kind_robinson[201531]: --> All data devices are unavailable
Sep 30 17:52:39 compute-0 systemd[1]: libpod-d6c292895ff0bf58e33ed600b1d36863896bf6315a786ca17ba8444f00392060.scope: Deactivated successfully.
Sep 30 17:52:39 compute-0 podman[201418]: 2025-09-30 17:52:39.067144703 +0000 UTC m=+0.475561726 container died d6c292895ff0bf58e33ed600b1d36863896bf6315a786ca17ba8444f00392060 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_robinson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:52:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6f5763ed28a6cc2574c25c64a66a1305f86c2fc05bbf2fe78804fcb3c12bd62-merged.mount: Deactivated successfully.
Sep 30 17:52:39 compute-0 podman[201418]: 2025-09-30 17:52:39.112915809 +0000 UTC m=+0.521332832 container remove d6c292895ff0bf58e33ed600b1d36863896bf6315a786ca17ba8444f00392060 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:52:39 compute-0 systemd[1]: libpod-conmon-d6c292895ff0bf58e33ed600b1d36863896bf6315a786ca17ba8444f00392060.scope: Deactivated successfully.
Sep 30 17:52:39 compute-0 sudo[200583]: pam_unix(sudo:session): session closed for user root
Sep 30 17:52:39 compute-0 sudo[202061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:52:39 compute-0 sudo[202061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:52:39 compute-0 sudo[202061]: pam_unix(sudo:session): session closed for user root
Sep 30 17:52:39 compute-0 sudo[202141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 17:52:39 compute-0 sudo[202141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:52:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:39 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 17:52:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v407: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.5 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:52:39 compute-0 podman[202487]: 2025-09-30 17:52:39.646267601 +0000 UTC m=+0.051576777 container create d5243094cc8e2b5aec76ddb0fb52c8317d15181ccc5499384b67abdc1898b1d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_bouman, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:52:39 compute-0 systemd[1]: Started libpod-conmon-d5243094cc8e2b5aec76ddb0fb52c8317d15181ccc5499384b67abdc1898b1d2.scope.
Sep 30 17:52:39 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Sep 30 17:52:39 compute-0 systemd[1]: Finished man-db-cache-update.service.
Sep 30 17:52:39 compute-0 systemd[1]: man-db-cache-update.service: Consumed 9.639s CPU time.
Sep 30 17:52:39 compute-0 systemd[1]: run-r62e4b4a7ab9d4776a7d3ea253d249fdf.service: Deactivated successfully.
Sep 30 17:52:39 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:52:39 compute-0 podman[202487]: 2025-09-30 17:52:39.705039433 +0000 UTC m=+0.110348629 container init d5243094cc8e2b5aec76ddb0fb52c8317d15181ccc5499384b67abdc1898b1d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:52:39 compute-0 podman[202487]: 2025-09-30 17:52:39.712842885 +0000 UTC m=+0.118152071 container start d5243094cc8e2b5aec76ddb0fb52c8317d15181ccc5499384b67abdc1898b1d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Sep 30 17:52:39 compute-0 podman[202487]: 2025-09-30 17:52:39.716563081 +0000 UTC m=+0.121872287 container attach d5243094cc8e2b5aec76ddb0fb52c8317d15181ccc5499384b67abdc1898b1d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_bouman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:52:39 compute-0 elastic_bouman[202505]: 167 167
Sep 30 17:52:39 compute-0 systemd[1]: libpod-d5243094cc8e2b5aec76ddb0fb52c8317d15181ccc5499384b67abdc1898b1d2.scope: Deactivated successfully.
Sep 30 17:52:39 compute-0 podman[202487]: 2025-09-30 17:52:39.71883491 +0000 UTC m=+0.124144106 container died d5243094cc8e2b5aec76ddb0fb52c8317d15181ccc5499384b67abdc1898b1d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Sep 30 17:52:39 compute-0 podman[202487]: 2025-09-30 17:52:39.628117141 +0000 UTC m=+0.033426337 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:52:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9b916d85e7305433a6a05f1e14a4e3379024cc40de0912db42053d59cc04f03-merged.mount: Deactivated successfully.
Sep 30 17:52:39 compute-0 podman[202487]: 2025-09-30 17:52:39.752739488 +0000 UTC m=+0.158048664 container remove d5243094cc8e2b5aec76ddb0fb52c8317d15181ccc5499384b67abdc1898b1d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_bouman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:52:39 compute-0 systemd[1]: libpod-conmon-d5243094cc8e2b5aec76ddb0fb52c8317d15181ccc5499384b67abdc1898b1d2.scope: Deactivated successfully.
Sep 30 17:52:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:39 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c18001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:39 compute-0 podman[202530]: 2025-09-30 17:52:39.981069391 +0000 UTC m=+0.054878902 container create 1aa363b9dc9d76b3f20939fa0a14e4a7651186590f9d7d83e0acbd974fe8159c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_franklin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:52:40 compute-0 systemd[1]: Started libpod-conmon-1aa363b9dc9d76b3f20939fa0a14e4a7651186590f9d7d83e0acbd974fe8159c.scope.
Sep 30 17:52:40 compute-0 podman[202530]: 2025-09-30 17:52:39.954278488 +0000 UTC m=+0.028088039 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:52:40 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:52:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb1cf22551c1dd216775d585e3889d33260ba5358e95abb3cc18b142c04507de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:52:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb1cf22551c1dd216775d585e3889d33260ba5358e95abb3cc18b142c04507de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:52:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb1cf22551c1dd216775d585e3889d33260ba5358e95abb3cc18b142c04507de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:52:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb1cf22551c1dd216775d585e3889d33260ba5358e95abb3cc18b142c04507de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:52:40 compute-0 podman[202530]: 2025-09-30 17:52:40.101808288 +0000 UTC m=+0.175617809 container init 1aa363b9dc9d76b3f20939fa0a14e4a7651186590f9d7d83e0acbd974fe8159c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_franklin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Sep 30 17:52:40 compute-0 podman[202530]: 2025-09-30 17:52:40.112639399 +0000 UTC m=+0.186448910 container start 1aa363b9dc9d76b3f20939fa0a14e4a7651186590f9d7d83e0acbd974fe8159c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_franklin, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:52:40 compute-0 podman[202530]: 2025-09-30 17:52:40.115155654 +0000 UTC m=+0.188965165 container attach 1aa363b9dc9d76b3f20939fa0a14e4a7651186590f9d7d83e0acbd974fe8159c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Sep 30 17:52:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:40 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c08003980 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:52:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:52:40.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:52:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:52:40 compute-0 boring_franklin[202547]: {
Sep 30 17:52:40 compute-0 boring_franklin[202547]:     "0": [
Sep 30 17:52:40 compute-0 boring_franklin[202547]:         {
Sep 30 17:52:40 compute-0 boring_franklin[202547]:             "devices": [
Sep 30 17:52:40 compute-0 boring_franklin[202547]:                 "/dev/loop3"
Sep 30 17:52:40 compute-0 boring_franklin[202547]:             ],
Sep 30 17:52:40 compute-0 boring_franklin[202547]:             "lv_name": "ceph_lv0",
Sep 30 17:52:40 compute-0 boring_franklin[202547]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:52:40 compute-0 boring_franklin[202547]:             "lv_size": "21470642176",
Sep 30 17:52:40 compute-0 boring_franklin[202547]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 17:52:40 compute-0 boring_franklin[202547]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:52:40 compute-0 boring_franklin[202547]:             "name": "ceph_lv0",
Sep 30 17:52:40 compute-0 boring_franklin[202547]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:52:40 compute-0 boring_franklin[202547]:             "tags": {
Sep 30 17:52:40 compute-0 boring_franklin[202547]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:52:40 compute-0 boring_franklin[202547]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:52:40 compute-0 boring_franklin[202547]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 17:52:40 compute-0 boring_franklin[202547]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 17:52:40 compute-0 boring_franklin[202547]:                 "ceph.cluster_name": "ceph",
Sep 30 17:52:40 compute-0 boring_franklin[202547]:                 "ceph.crush_device_class": "",
Sep 30 17:52:40 compute-0 boring_franklin[202547]:                 "ceph.encrypted": "0",
Sep 30 17:52:40 compute-0 boring_franklin[202547]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 17:52:40 compute-0 boring_franklin[202547]:                 "ceph.osd_id": "0",
Sep 30 17:52:40 compute-0 boring_franklin[202547]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 17:52:40 compute-0 boring_franklin[202547]:                 "ceph.type": "block",
Sep 30 17:52:40 compute-0 boring_franklin[202547]:                 "ceph.vdo": "0",
Sep 30 17:52:40 compute-0 boring_franklin[202547]:                 "ceph.with_tpm": "0"
Sep 30 17:52:40 compute-0 boring_franklin[202547]:             },
Sep 30 17:52:40 compute-0 boring_franklin[202547]:             "type": "block",
Sep 30 17:52:40 compute-0 boring_franklin[202547]:             "vg_name": "ceph_vg0"
Sep 30 17:52:40 compute-0 boring_franklin[202547]:         }
Sep 30 17:52:40 compute-0 boring_franklin[202547]:     ]
Sep 30 17:52:40 compute-0 boring_franklin[202547]: }
Sep 30 17:52:40 compute-0 systemd[1]: libpod-1aa363b9dc9d76b3f20939fa0a14e4a7651186590f9d7d83e0acbd974fe8159c.scope: Deactivated successfully.
Sep 30 17:52:40 compute-0 podman[202530]: 2025-09-30 17:52:40.486445899 +0000 UTC m=+0.560255450 container died 1aa363b9dc9d76b3f20939fa0a14e4a7651186590f9d7d83e0acbd974fe8159c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_franklin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 17:52:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb1cf22551c1dd216775d585e3889d33260ba5358e95abb3cc18b142c04507de-merged.mount: Deactivated successfully.
Sep 30 17:52:40 compute-0 podman[202530]: 2025-09-30 17:52:40.537608414 +0000 UTC m=+0.611417925 container remove 1aa363b9dc9d76b3f20939fa0a14e4a7651186590f9d7d83e0acbd974fe8159c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_franklin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default)
Sep 30 17:52:40 compute-0 systemd[1]: libpod-conmon-1aa363b9dc9d76b3f20939fa0a14e4a7651186590f9d7d83e0acbd974fe8159c.scope: Deactivated successfully.
Sep 30 17:52:40 compute-0 ceph-mon[73755]: pgmap v407: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.5 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:52:40 compute-0 sudo[202141]: pam_unix(sudo:session): session closed for user root
Sep 30 17:52:40 compute-0 sudo[202567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:52:40 compute-0 sudo[202567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:52:40 compute-0 sudo[202567]: pam_unix(sudo:session): session closed for user root
Sep 30 17:52:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:52:40.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:40 compute-0 sudo[202592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 17:52:40 compute-0 sudo[202592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:52:41 compute-0 podman[202658]: 2025-09-30 17:52:41.28654265 +0000 UTC m=+0.100209457 container create 1dc89e621aca9acda7506202f097bb855ebda7c4bb18710c5b06873f3df8cfc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_turing, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid)
Sep 30 17:52:41 compute-0 podman[202658]: 2025-09-30 17:52:41.21360052 +0000 UTC m=+0.027267357 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:52:41 compute-0 systemd[1]: Started libpod-conmon-1dc89e621aca9acda7506202f097bb855ebda7c4bb18710c5b06873f3df8cfc2.scope.
Sep 30 17:52:41 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:52:41 compute-0 podman[202658]: 2025-09-30 17:52:41.457148848 +0000 UTC m=+0.270815675 container init 1dc89e621aca9acda7506202f097bb855ebda7c4bb18710c5b06873f3df8cfc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:52:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v408: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Sep 30 17:52:41 compute-0 podman[202658]: 2025-09-30 17:52:41.468725437 +0000 UTC m=+0.282392274 container start 1dc89e621aca9acda7506202f097bb855ebda7c4bb18710c5b06873f3df8cfc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_turing, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:52:41 compute-0 kind_turing[202675]: 167 167
Sep 30 17:52:41 compute-0 systemd[1]: libpod-1dc89e621aca9acda7506202f097bb855ebda7c4bb18710c5b06873f3df8cfc2.scope: Deactivated successfully.
Sep 30 17:52:41 compute-0 podman[202658]: 2025-09-30 17:52:41.485792649 +0000 UTC m=+0.299459486 container attach 1dc89e621aca9acda7506202f097bb855ebda7c4bb18710c5b06873f3df8cfc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_turing, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 17:52:41 compute-0 podman[202658]: 2025-09-30 17:52:41.486302473 +0000 UTC m=+0.299969310 container died 1dc89e621aca9acda7506202f097bb855ebda7c4bb18710c5b06873f3df8cfc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_turing, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:52:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-463c41e7382c91651671347aeb5328802b5f87f471bcdd90ba33b39f0c27b0fe-merged.mount: Deactivated successfully.
Sep 30 17:52:41 compute-0 podman[202658]: 2025-09-30 17:52:41.546214344 +0000 UTC m=+0.359881151 container remove 1dc89e621aca9acda7506202f097bb855ebda7c4bb18710c5b06873f3df8cfc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:52:41 compute-0 systemd[1]: libpod-conmon-1dc89e621aca9acda7506202f097bb855ebda7c4bb18710c5b06873f3df8cfc2.scope: Deactivated successfully.
Sep 30 17:52:41 compute-0 podman[202703]: 2025-09-30 17:52:41.768216163 +0000 UTC m=+0.048096536 container create 77696cff5515c129274c469d231ab93d631b616e193b5bf83668052b07edc3eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_spence, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Sep 30 17:52:41 compute-0 systemd[1]: Started libpod-conmon-77696cff5515c129274c469d231ab93d631b616e193b5bf83668052b07edc3eb.scope.
Sep 30 17:52:41 compute-0 podman[202703]: 2025-09-30 17:52:41.745954637 +0000 UTC m=+0.025835060 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:52:41 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/977c91969e483bc314537bdbe3479a87ae5fb368867ad55efd82b95422b4b60e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/977c91969e483bc314537bdbe3479a87ae5fb368867ad55efd82b95422b4b60e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/977c91969e483bc314537bdbe3479a87ae5fb368867ad55efd82b95422b4b60e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/977c91969e483bc314537bdbe3479a87ae5fb368867ad55efd82b95422b4b60e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:52:41 compute-0 podman[202703]: 2025-09-30 17:52:41.862555766 +0000 UTC m=+0.142436199 container init 77696cff5515c129274c469d231ab93d631b616e193b5bf83668052b07edc3eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_spence, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:52:41 compute-0 podman[202703]: 2025-09-30 17:52:41.871887198 +0000 UTC m=+0.151767571 container start 77696cff5515c129274c469d231ab93d631b616e193b5bf83668052b07edc3eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_spence, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:52:41 compute-0 podman[202703]: 2025-09-30 17:52:41.875521082 +0000 UTC m=+0.155401465 container attach 77696cff5515c129274c469d231ab93d631b616e193b5bf83668052b07edc3eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_spence, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:52:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:41 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c18001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:42 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:52:42.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:42 compute-0 lvm[202793]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 17:52:42 compute-0 lvm[202793]: VG ceph_vg0 finished
Sep 30 17:52:42 compute-0 flamboyant_spence[202719]: {}
Sep 30 17:52:42 compute-0 systemd[1]: libpod-77696cff5515c129274c469d231ab93d631b616e193b5bf83668052b07edc3eb.scope: Deactivated successfully.
Sep 30 17:52:42 compute-0 systemd[1]: libpod-77696cff5515c129274c469d231ab93d631b616e193b5bf83668052b07edc3eb.scope: Consumed 1.242s CPU time.
Sep 30 17:52:42 compute-0 podman[202703]: 2025-09-30 17:52:42.632146386 +0000 UTC m=+0.912026759 container died 77696cff5515c129274c469d231ab93d631b616e193b5bf83668052b07edc3eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_spence, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 17:52:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-977c91969e483bc314537bdbe3479a87ae5fb368867ad55efd82b95422b4b60e-merged.mount: Deactivated successfully.
Sep 30 17:52:42 compute-0 podman[202703]: 2025-09-30 17:52:42.67325775 +0000 UTC m=+0.953138123 container remove 77696cff5515c129274c469d231ab93d631b616e193b5bf83668052b07edc3eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_spence, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:52:42 compute-0 systemd[1]: libpod-conmon-77696cff5515c129274c469d231ab93d631b616e193b5bf83668052b07edc3eb.scope: Deactivated successfully.
Sep 30 17:52:42 compute-0 sudo[202592]: pam_unix(sudo:session): session closed for user root
Sep 30 17:52:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:52:42 compute-0 ceph-mon[73755]: pgmap v408: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Sep 30 17:52:42 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:52:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:52:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:52:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:52:42.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:52:42 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:52:42 compute-0 sudo[202807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 17:52:42 compute-0 sudo[202807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:52:42 compute-0 sudo[202807]: pam_unix(sudo:session): session closed for user root
Sep 30 17:52:43 compute-0 sudo[202832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:52:43 compute-0 sudo[202832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:52:43 compute-0 sudo[202832]: pam_unix(sudo:session): session closed for user root
Sep 30 17:52:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v409: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:52:43 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:52:43 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:52:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:43 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bfc003bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:44 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c08003980 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:52:44.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:44 compute-0 podman[202859]: 2025-09-30 17:52:44.589949648 +0000 UTC m=+0.127891843 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Sep 30 17:52:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:52:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:52:44.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:52:44 compute-0 ceph-mon[73755]: pgmap v409: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:52:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:52:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/175245 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 17:52:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v410: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 853 B/s rd, 341 B/s wr, 1 op/s
Sep 30 17:52:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:45 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c18003d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:46 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:52:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:52:46.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:52:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:52:46.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:46 compute-0 ceph-mon[73755]: pgmap v410: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 853 B/s rd, 341 B/s wr, 1 op/s
Sep 30 17:52:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:52:47.022Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:52:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v411: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 853 B/s rd, 341 B/s wr, 1 op/s
Sep 30 17:52:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:47 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bfc003bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:48 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c08003980 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:52:48.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:52:48.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:52:48] "GET /metrics HTTP/1.1" 200 46532 "" "Prometheus/2.51.0"
Sep 30 17:52:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:52:48] "GET /metrics HTTP/1.1" 200 46532 "" "Prometheus/2.51.0"
Sep 30 17:52:48 compute-0 ceph-mon[73755]: pgmap v411: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 853 B/s rd, 341 B/s wr, 1 op/s
Sep 30 17:52:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v412: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Sep 30 17:52:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:49 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c08003980 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:49 compute-0 ceph-mon[73755]: pgmap v412: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Sep 30 17:52:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:50 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:52:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:52:50.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:52:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:52:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:52:50.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v413: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:52:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:51 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bfc003bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:52 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c08003980 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.002000052s ======
Sep 30 17:52:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:52:52.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Sep 30 17:52:52 compute-0 ceph-mon[73755]: pgmap v413: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:52:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:52:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:52:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:52:52.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:53 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:52:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v414: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:52:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:53 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c18003d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:54 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:52:54.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:52:54.238 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 17:52:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:52:54.239 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 17:52:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:52:54.239 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 17:52:54 compute-0 ceph-mon[73755]: pgmap v414: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:52:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:52:54.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:54 compute-0 sudo[203024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulpyhttmzxyqbsikesjbywvyggzremes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254774.1352649-652-143976261049187/AnsiballZ_systemd.py'
Sep 30 17:52:54 compute-0 sudo[203024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:52:55 compute-0 python3.9[203026]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Sep 30 17:52:55 compute-0 systemd[1]: Reloading.
Sep 30 17:52:55 compute-0 systemd-rc-local-generator[203051]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:52:55 compute-0 systemd-sysv-generator[203057]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:52:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:52:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v415: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:52:55 compute-0 sudo[203024]: pam_unix(sudo:session): session closed for user root
Sep 30 17:52:55 compute-0 sudo[203216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfkdbguquqifjamthajuprwylmbieitd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254775.597099-652-179869253142785/AnsiballZ_systemd.py'
Sep 30 17:52:55 compute-0 sudo[203216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:52:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:55 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf4002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:56 compute-0 python3.9[203218]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Sep 30 17:52:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:56 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf4002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:52:56.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:56 compute-0 systemd[1]: Reloading.
Sep 30 17:52:56 compute-0 systemd-rc-local-generator[203247]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:52:56 compute-0 systemd-sysv-generator[203250]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:52:56 compute-0 sudo[203216]: pam_unix(sudo:session): session closed for user root
Sep 30 17:52:56 compute-0 podman[203258]: 2025-09-30 17:52:56.607046491 +0000 UTC m=+0.054156764 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20250930, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, io.buildah.version=1.41.4)
Sep 30 17:52:56 compute-0 ceph-mon[73755]: pgmap v415: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:52:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:52:56.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:56 compute-0 sudo[203426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbxiyxrixofaqgiuvnagibzzosphgxmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254776.6954803-652-72588510430317/AnsiballZ_systemd.py'
Sep 30 17:52:56 compute-0 sudo[203426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:52:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:52:57.023Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:52:57 compute-0 python3.9[203428]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Sep 30 17:52:57 compute-0 systemd[1]: Reloading.
Sep 30 17:52:57 compute-0 systemd-sysv-generator[203460]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:52:57 compute-0 systemd-rc-local-generator[203457]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:52:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v416: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:52:57 compute-0 sudo[203426]: pam_unix(sudo:session): session closed for user root
Sep 30 17:52:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:57 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c08003980 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:58 compute-0 sudo[203618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nezobiahpppzuuaoparrozsaujqsmfch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254777.7835238-652-246057640925832/AnsiballZ_systemd.py'
Sep 30 17:52:58 compute-0 sudo[203618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:52:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:58 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:52:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:52:58.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:58 compute-0 python3.9[203620]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Sep 30 17:52:58 compute-0 systemd[1]: Reloading.
Sep 30 17:52:58 compute-0 systemd-rc-local-generator[203652]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:52:58 compute-0 systemd-sysv-generator[203656]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:52:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:52:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:52:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:52:58.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:52:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:52:58] "GET /metrics HTTP/1.1" 200 46520 "" "Prometheus/2.51.0"
Sep 30 17:52:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:52:58] "GET /metrics HTTP/1.1" 200 46520 "" "Prometheus/2.51.0"
Sep 30 17:52:58 compute-0 sudo[203618]: pam_unix(sudo:session): session closed for user root
Sep 30 17:52:58 compute-0 ceph-mon[73755]: pgmap v416: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:52:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v417: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:52:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:52:59 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c18004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:00 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf4002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:53:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:53:00.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:53:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:53:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:53:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:53:00.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:53:00 compute-0 ceph-mon[73755]: pgmap v417: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v418: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:01 compute-0 sudo[203813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dryfyhgyuyhrqlzsvibygciksziqrzpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254781.5700378-710-204184171946029/AnsiballZ_systemd.py'
Sep 30 17:53:01 compute-0 sudo[203813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:01 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c08003980 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:02 compute-0 python3.9[203815]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 17:53:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:02 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:53:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:53:02.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:53:02 compute-0 systemd[1]: Reloading.
Sep 30 17:53:02 compute-0 systemd-sysv-generator[203850]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:53:02 compute-0 systemd-rc-local-generator[203846]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:53:02 compute-0 sudo[203813]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:53:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:53:02.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:53:02 compute-0 ceph-mon[73755]: pgmap v418: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:03 compute-0 sudo[204003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynbeyfgelhpgrccghocpwerlfgsyfkmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254782.7572558-710-4741744721638/AnsiballZ_systemd.py'
Sep 30 17:53:03 compute-0 sudo[204003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:03 compute-0 sudo[204006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:53:03 compute-0 sudo[204006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:53:03 compute-0 sudo[204006]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:03 compute-0 python3.9[204005]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 17:53:03 compute-0 systemd[1]: Reloading.
Sep 30 17:53:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v419: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:53:03 compute-0 systemd-rc-local-generator[204064]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:53:03 compute-0 systemd-sysv-generator[204069]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:53:03 compute-0 sudo[204003]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:03 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c18004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:03 compute-0 ceph-mon[73755]: pgmap v419: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:53:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:04 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf4002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:04 compute-0 sudo[204221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpizdbqxepxcequysqbejjperfkdykgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254783.9181566-710-165066277201691/AnsiballZ_systemd.py'
Sep 30 17:53:04 compute-0 sudo[204221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:53:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:53:04.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:53:04 compute-0 python3.9[204223]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 17:53:04 compute-0 systemd[1]: Reloading.
Sep 30 17:53:04 compute-0 systemd-sysv-generator[204257]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:53:04 compute-0 systemd-rc-local-generator[204251]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:53:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:53:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:53:04.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:53:04 compute-0 sudo[204221]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:53:05 compute-0 sudo[204412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-invwvjbeyjuwxgmnjqcbxoqvrvldsnyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254785.0962758-710-52515231264757/AnsiballZ_systemd.py'
Sep 30 17:53:05 compute-0 sudo[204412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v420: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:05 compute-0 python3.9[204414]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 17:53:05 compute-0 sudo[204412]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:05 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf4002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:06 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:53:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:53:06.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:53:06 compute-0 sudo[204569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uevruwhwyoqypfyraxldzlnlvuuztasa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254785.9770079-710-215783613509625/AnsiballZ_systemd.py'
Sep 30 17:53:06 compute-0 sudo[204569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:06 compute-0 ceph-mon[73755]: pgmap v420: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:06 compute-0 python3.9[204571]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 17:53:06 compute-0 systemd[1]: Reloading.
Sep 30 17:53:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:53:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:53:06.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:53:06 compute-0 systemd-rc-local-generator[204601]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:53:06 compute-0 systemd-sysv-generator[204605]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:53:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:53:07.024Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:53:07 compute-0 sudo[204569]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:53:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_17:53:07
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'images', '.nfs', '.rgw.root', '.mgr', 'backups', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms']
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:53:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v421: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:53:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:07 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c18004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:08 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c14002130 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:53:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:53:08.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:53:08 compute-0 ceph-mon[73755]: pgmap v421: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:53:08] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:53:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:53:08] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:53:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:53:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:53:08.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:53:09 compute-0 sudo[204761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpjrdtplnivhlzoowjfslaehlkngjeib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254788.8934991-782-212692839045618/AnsiballZ_systemd.py'
Sep 30 17:53:09 compute-0 sudo[204761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v422: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:09 compute-0 python3.9[204763]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Sep 30 17:53:09 compute-0 systemd[1]: Reloading.
Sep 30 17:53:09 compute-0 systemd-rc-local-generator[204798]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:53:09 compute-0 systemd-sysv-generator[204802]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:53:09 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Sep 30 17:53:09 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Sep 30 17:53:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:09 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf4002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:09 compute-0 sudo[204761]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:10 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:53:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:53:10.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:53:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:53:10 compute-0 sudo[204956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmsdsugdtruogfzuhubbpxjkbstiosnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254790.2977223-798-171464009668868/AnsiballZ_systemd.py'
Sep 30 17:53:10 compute-0 sudo[204956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:10 compute-0 ceph-mon[73755]: pgmap v422: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:53:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:53:10.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:53:10 compute-0 python3.9[204958]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 17:53:10 compute-0 sudo[204956]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:11 compute-0 sudo[205112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dibokvibrkjjjucmtlcswbckrfgtzxyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254791.122964-798-54536447868719/AnsiballZ_systemd.py'
Sep 30 17:53:11 compute-0 sudo[205112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v423: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:11 compute-0 python3.9[205114]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 17:53:11 compute-0 sudo[205112]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:11 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c18004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:12 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c14002130 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:53:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:53:12.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:53:12 compute-0 sudo[205268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxfsnorlmjbumfrjinuntmcmcfnmffqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254792.0179436-798-188502601320086/AnsiballZ_systemd.py'
Sep 30 17:53:12 compute-0 sudo[205268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:12 compute-0 python3.9[205270]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 17:53:12 compute-0 ceph-mon[73755]: pgmap v423: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:12 compute-0 sudo[205268]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:53:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:53:12.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:53:13 compute-0 sudo[205423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgkkfwjskawiudlzwjcjnuzvngvpxmak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254792.8063362-798-133375157538571/AnsiballZ_systemd.py'
Sep 30 17:53:13 compute-0 sudo[205423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:13 compute-0 python3.9[205425]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 17:53:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v424: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:53:13 compute-0 sudo[205423]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:13 compute-0 sudo[205580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cctmgudkqljaopwplsmquddjyzxqwujg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254793.6064208-798-33314905637917/AnsiballZ_systemd.py'
Sep 30 17:53:13 compute-0 sudo[205580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:13 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf4003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:14 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:53:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:53:14.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:53:14 compute-0 python3.9[205582]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 17:53:14 compute-0 sudo[205580]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:14 compute-0 ceph-mon[73755]: pgmap v424: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:53:14 compute-0 sudo[205751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlisuvbkkqsgncmoawnmdglgaqnxdbut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254794.466609-798-11086635465737/AnsiballZ_systemd.py'
Sep 30 17:53:14 compute-0 sudo[205751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 17:53:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:53:14.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 17:53:14 compute-0 podman[205709]: 2025-09-30 17:53:14.831868988 +0000 UTC m=+0.083004141 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Sep 30 17:53:15 compute-0 python3.9[205758]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 17:53:15 compute-0 sudo[205751]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:53:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v425: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:15 compute-0 sudo[205918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkcjgeeswsbjxjiirbhlgufbsyzlfpro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254795.2951143-798-37555167776336/AnsiballZ_systemd.py'
Sep 30 17:53:15 compute-0 sudo[205918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:15 compute-0 python3.9[205920]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 17:53:15 compute-0 sudo[205918]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:15 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c18004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:16 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c14002130 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:53:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:53:16.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:53:16 compute-0 sudo[206074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwludnhjfffkmqxlwpgpofcvloiinlno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254796.0972903-798-94878593771506/AnsiballZ_systemd.py'
Sep 30 17:53:16 compute-0 sudo[206074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:16 compute-0 ceph-mon[73755]: pgmap v425: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:16 compute-0 python3.9[206076]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 17:53:16 compute-0 sudo[206074]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:53:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:53:16.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:53:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:53:17.025Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:53:17 compute-0 sudo[206229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbogaodkdwgnhlekbvfukxpqdeqqlufx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254796.9337077-798-105349783109893/AnsiballZ_systemd.py'
Sep 30 17:53:17 compute-0 sudo[206229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v426: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:17 compute-0 python3.9[206231]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 17:53:17 compute-0 sudo[206229]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:17 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf4003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:18 compute-0 sudo[206386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dctjlxjrrxzkluvpzqjdyasmquwxpgcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254797.7145338-798-274601393037684/AnsiballZ_systemd.py'
Sep 30 17:53:18 compute-0 sudo[206386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:18 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:53:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:53:18.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:53:18 compute-0 python3.9[206388]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 17:53:18 compute-0 sudo[206386]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:18 compute-0 ceph-mon[73755]: pgmap v426: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:53:18] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:53:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:53:18] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:53:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:53:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:53:18.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:53:18 compute-0 sudo[206541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-escoclygxavgmallkccexfxrxcelpjcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254798.5559793-798-265015617222530/AnsiballZ_systemd.py'
Sep 30 17:53:18 compute-0 sudo[206541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:19 compute-0 python3.9[206543]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 17:53:19 compute-0 sudo[206541]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v427: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:19 compute-0 sudo[206697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyzgzsccpomzezoqznbxvozwygvyshrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254799.3363647-798-162032318455098/AnsiballZ_systemd.py'
Sep 30 17:53:19 compute-0 sudo[206697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:19 compute-0 python3.9[206699]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 17:53:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:19 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:19 compute-0 sudo[206697]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:20 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c140022d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:53:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:53:20.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:53:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:53:20 compute-0 sudo[206853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzbtkwknnjsscvalutdupfehcztuwhib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254800.1098778-798-49463125397468/AnsiballZ_systemd.py'
Sep 30 17:53:20 compute-0 sudo[206853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:20 compute-0 python3.9[206855]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 17:53:20 compute-0 ceph-mon[73755]: pgmap v427: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:20 compute-0 sudo[206853]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:53:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:53:20.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:53:21 compute-0 sudo[207008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzwfpnaletyjparumyjweningwrdsade ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254800.8813372-798-197967158897611/AnsiballZ_systemd.py'
Sep 30 17:53:21 compute-0 sudo[207008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:21 compute-0 python3.9[207010]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 17:53:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v428: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:21 compute-0 sudo[207008]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:21 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf4003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:22 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:53:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:53:22.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:53:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:53:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:53:22 compute-0 ceph-mon[73755]: pgmap v428: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:53:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:53:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:53:22.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:53:23 compute-0 sudo[207165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svdsfbbicwtomcelzxcxlyvbhixvsrkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254802.995959-1002-254766273532651/AnsiballZ_file.py'
Sep 30 17:53:23 compute-0 sudo[207165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:23 compute-0 sudo[207168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:53:23 compute-0 sudo[207168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:53:23 compute-0 sudo[207168]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:23 compute-0 python3.9[207167]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:53:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v429: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:53:23 compute-0 sudo[207165]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:23 compute-0 sudo[207344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgmhguddykoxcildgsamnccyeegidljo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254803.6267133-1002-241057181995582/AnsiballZ_file.py'
Sep 30 17:53:23 compute-0 sudo[207344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:23 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:24 compute-0 python3.9[207346]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:53:24 compute-0 sudo[207344]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:24 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c14003a10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:53:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:53:24.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:53:24 compute-0 sudo[207496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwivcmmxknpdyunhryqvjmfmzsatseev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254804.2886236-1002-100259623280495/AnsiballZ_file.py'
Sep 30 17:53:24 compute-0 sudo[207496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:24 compute-0 ceph-mon[73755]: pgmap v429: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:53:24 compute-0 python3.9[207498]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:53:24 compute-0 sudo[207496]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:53:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:53:24.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:53:25 compute-0 sudo[207648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olincubrwmsdzusdlghauenxebebdcds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254804.9603782-1002-50520353825631/AnsiballZ_file.py'
Sep 30 17:53:25 compute-0 sudo[207648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:53:25 compute-0 python3.9[207650]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:53:25 compute-0 sudo[207648]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v430: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:25 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf4003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:25 compute-0 sudo[207802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdtiofmwldbduheyitfbibcgewzqznjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254805.6161945-1002-73264073146284/AnsiballZ_file.py'
Sep 30 17:53:25 compute-0 sudo[207802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:26 compute-0 python3.9[207805]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:53:26 compute-0 sudo[207802]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:26 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:53:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:53:26.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:53:26 compute-0 sudo[207967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgwwtnronprkenbpepapwhsodnneimme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254806.3555396-1002-251908636233569/AnsiballZ_file.py'
Sep 30 17:53:26 compute-0 sudo[207967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:26 compute-0 podman[207929]: 2025-09-30 17:53:26.760762119 +0000 UTC m=+0.084798237 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Sep 30 17:53:26 compute-0 ceph-mon[73755]: pgmap v430: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:53:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:53:26.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:53:26 compute-0 python3.9[207975]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:53:26 compute-0 sudo[207967]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:53:27.027Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:53:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v431: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:27 compute-0 sudo[208127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pumwerkxnvzbryirqxwulmxusihgspva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254807.4474092-1088-24469901667996/AnsiballZ_stat.py'
Sep 30 17:53:27 compute-0 sudo[208127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:27 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bfc0034c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:28 compute-0 python3.9[208129]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:53:28 compute-0 sudo[208127]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:28 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c14003c20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:53:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:53:28.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:53:28 compute-0 sudo[208252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnagwdcrfnoawvpxhtymqahvktqqghxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254807.4474092-1088-24469901667996/AnsiballZ_copy.py'
Sep 30 17:53:28 compute-0 sudo[208252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:53:28] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:53:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:53:28] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:53:28 compute-0 ceph-mon[73755]: pgmap v431: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:53:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:53:28.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:53:28 compute-0 python3.9[208254]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759254807.4474092-1088-24469901667996/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:53:28 compute-0 sudo[208252]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v432: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:29 compute-0 sudo[208405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxlwxfiokqapspraagilvhufobzvoure ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254809.159487-1088-16010994672356/AnsiballZ_stat.py'
Sep 30 17:53:29 compute-0 sudo[208405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:29 compute-0 python3.9[208407]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:53:29 compute-0 sudo[208405]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:29 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf4003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:30 compute-0 sudo[208531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exaneltluabquzemztvyedyzjavnkgbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254809.159487-1088-16010994672356/AnsiballZ_copy.py'
Sep 30 17:53:30 compute-0 sudo[208531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:30 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:53:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:53:30.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:53:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:53:30 compute-0 python3.9[208533]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759254809.159487-1088-16010994672356/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:53:30 compute-0 sudo[208531]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:30 compute-0 sudo[208683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skhoejofmjaohpzydjwuiwmgqwimxiru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254810.4998267-1088-58994926118278/AnsiballZ_stat.py'
Sep 30 17:53:30 compute-0 sudo[208683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:30 compute-0 ceph-mon[73755]: pgmap v432: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:53:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:53:30.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:53:30 compute-0 python3.9[208685]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:53:31 compute-0 sudo[208683]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:31 compute-0 sudo[208809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cafyyppcpanikdlvlohlgxyreyjmnyrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254810.4998267-1088-58994926118278/AnsiballZ_copy.py'
Sep 30 17:53:31 compute-0 sudo[208809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v433: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:31 compute-0 python3.9[208811]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759254810.4998267-1088-58994926118278/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:53:31 compute-0 sudo[208809]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:31 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bfc0034c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:32 compute-0 sudo[208962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sphnramsalkoogjhlbdsoxhxctbzgigy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254811.8366623-1088-195275424334251/AnsiballZ_stat.py'
Sep 30 17:53:32 compute-0 sudo[208962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:32 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c14004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:53:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:53:32.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:53:32 compute-0 python3.9[208964]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:53:32 compute-0 sudo[208962]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:32 compute-0 sudo[209087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsapqeztdssmekfbjvlegkayiysyerqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254811.8366623-1088-195275424334251/AnsiballZ_copy.py'
Sep 30 17:53:32 compute-0 sudo[209087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:32 compute-0 ceph-mon[73755]: pgmap v433: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:53:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:53:32.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:53:33 compute-0 python3.9[209089]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759254811.8366623-1088-195275424334251/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:53:33 compute-0 sudo[209087]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v434: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:53:33 compute-0 sudo[209240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icbjvfeqfurijjfrlwtrcpegjehllbmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254813.1897633-1088-266103629545015/AnsiballZ_stat.py'
Sep 30 17:53:33 compute-0 sudo[209240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:33 compute-0 python3.9[209242]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:53:33 compute-0 sudo[209240]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:33 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf4003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:34 compute-0 sudo[209366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trjccbyowdfvzgvnhuxurpqujdgnlqgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254813.1897633-1088-266103629545015/AnsiballZ_copy.py'
Sep 30 17:53:34 compute-0 sudo[209366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:34 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:53:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:53:34.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:53:34 compute-0 python3.9[209368]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759254813.1897633-1088-266103629545015/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:53:34 compute-0 sudo[209366]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:34 compute-0 sudo[209518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awsiugwyeyrldqequkiuxuorteblxdtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254814.5207787-1088-129719204297732/AnsiballZ_stat.py'
Sep 30 17:53:34 compute-0 sudo[209518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:53:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:53:34.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:53:34 compute-0 ceph-mon[73755]: pgmap v434: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:53:35 compute-0 python3.9[209520]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:53:35 compute-0 sudo[209518]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:53:35 compute-0 sudo[209644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufrmiqislcdevvqkxrddplbompootton ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254814.5207787-1088-129719204297732/AnsiballZ_copy.py'
Sep 30 17:53:35 compute-0 sudo[209644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v435: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:35 compute-0 python3.9[209646]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759254814.5207787-1088-129719204297732/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:53:35 compute-0 sudo[209644]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:35 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bfc0034c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:36 compute-0 sudo[209797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcmesridkofpgthjywhaadnddxpvkpfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254815.8638844-1088-20790393767712/AnsiballZ_stat.py'
Sep 30 17:53:36 compute-0 sudo[209797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:36 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c14004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:53:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:53:36.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:53:36 compute-0 python3.9[209799]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:53:36 compute-0 sudo[209797]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:36 compute-0 sudo[209920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdpkqipfjjqqgyrknbguowdajblsmpuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254815.8638844-1088-20790393767712/AnsiballZ_copy.py'
Sep 30 17:53:36 compute-0 sudo[209920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:53:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:53:36.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:53:36 compute-0 ceph-mon[73755]: pgmap v435: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:36 compute-0 python3.9[209922]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759254815.8638844-1088-20790393767712/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:53:36 compute-0 sudo[209920]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:53:37.029Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:53:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:53:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:53:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:53:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:53:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:53:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:53:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:53:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:53:37 compute-0 sudo[210073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eatbdaibedwerwykoytaqktlibddxwkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254817.1157417-1088-73442552003122/AnsiballZ_stat.py'
Sep 30 17:53:37 compute-0 sudo[210073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v436: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:37 compute-0 python3.9[210075]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:53:37 compute-0 sudo[210073]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:53:37 compute-0 sudo[210199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaudmamhgcyqfdfrpfpffwfvgarfisuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254817.1157417-1088-73442552003122/AnsiballZ_copy.py'
Sep 30 17:53:37 compute-0 sudo[210199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:37 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf4003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:38 compute-0 python3.9[210201]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759254817.1157417-1088-73442552003122/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:53:38 compute-0 sudo[210199]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:38 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:53:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:53:38.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:53:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:53:38] "GET /metrics HTTP/1.1" 200 46520 "" "Prometheus/2.51.0"
Sep 30 17:53:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:53:38] "GET /metrics HTTP/1.1" 200 46520 "" "Prometheus/2.51.0"
Sep 30 17:53:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:53:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:53:38.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:53:38 compute-0 ceph-mon[73755]: pgmap v436: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:39 compute-0 sudo[210352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdjfbqwyochjzjixfhticgaxmvvudqxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254819.093393-1314-68663924789264/AnsiballZ_command.py'
Sep 30 17:53:39 compute-0 sudo[210352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v437: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:39 compute-0 python3.9[210355]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Sep 30 17:53:39 compute-0 sudo[210352]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:39 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c080008d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:40 compute-0 sudo[210507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubzwxqbckmxjmfsdwmcehrbjlxyguxvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254819.867292-1332-147627670222284/AnsiballZ_file.py'
Sep 30 17:53:40 compute-0 sudo[210507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:40 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c14004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:53:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:53:40.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:53:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:53:40 compute-0 python3.9[210509]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:53:40 compute-0 sudo[210507]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:40 compute-0 sudo[210659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzjcakrxdoobgmoggonlvcltitmjlwsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254820.501987-1332-133330655963232/AnsiballZ_file.py'
Sep 30 17:53:40 compute-0 sudo[210659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:53:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:53:40.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:53:40 compute-0 ceph-mon[73755]: pgmap v437: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:41 compute-0 python3.9[210661]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:53:41 compute-0 sudo[210659]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:41 compute-0 sudo[210812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glrqtsxlikjpdluheppmwexubgiuocvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254821.2043452-1332-132898384132894/AnsiballZ_file.py'
Sep 30 17:53:41 compute-0 sudo[210812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v438: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:41 compute-0 python3.9[210814]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:53:41 compute-0 sudo[210812]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:41 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf4003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:42 compute-0 sudo[210965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yplpvpabfpbwtgvyvtjksaieknvswrsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254821.8638194-1332-219389680533671/AnsiballZ_file.py'
Sep 30 17:53:42 compute-0 sudo[210965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:42 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:53:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:53:42.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:53:42 compute-0 python3.9[210967]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:53:42 compute-0 sudo[210965]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:42 compute-0 sudo[211117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlnjstxdmgaykhdoyblqxlxnbipmljse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254822.5322134-1332-125183677861169/AnsiballZ_file.py'
Sep 30 17:53:42 compute-0 sudo[211117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:53:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:53:42.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:53:42 compute-0 ceph-mon[73755]: pgmap v438: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:43 compute-0 sudo[211120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:53:43 compute-0 sudo[211120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:53:43 compute-0 sudo[211120]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:43 compute-0 python3.9[211119]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:53:43 compute-0 sudo[211117]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:43 compute-0 sudo[211145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 17:53:43 compute-0 sudo[211145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:53:43 compute-0 sudo[211340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fofkvpofeygyfcncxxmldhipqxxilpsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254823.165893-1332-129892595573998/AnsiballZ_file.py'
Sep 30 17:53:43 compute-0 sudo[211340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:43 compute-0 sudo[211336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:53:43 compute-0 sudo[211336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:53:43 compute-0 sudo[211336]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v439: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:53:43 compute-0 sudo[211145]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:43 compute-0 python3.9[211358]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:53:43 compute-0 sudo[211340]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:53:43 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:53:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 17:53:43 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:53:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 17:53:43 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:53:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 17:53:43 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:53:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 17:53:43 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:53:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 17:53:43 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:53:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:53:43 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:53:43 compute-0 sudo[211425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:53:43 compute-0 sudo[211425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:53:43 compute-0 sudo[211425]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:43 compute-0 sudo[211474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 17:53:43 compute-0 sudo[211474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:53:43 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:53:43 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:53:43 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:53:43 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:53:43 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:53:43 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:53:43 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:53:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:43 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c080008d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:44 compute-0 sudo[211580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xocgnusjvwelwlgbbovqqftyczsekoyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254823.750139-1332-172884897367268/AnsiballZ_file.py'
Sep 30 17:53:44 compute-0 sudo[211580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:44 compute-0 python3.9[211582]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:53:44 compute-0 podman[211626]: 2025-09-30 17:53:44.224392201 +0000 UTC m=+0.036544357 container create 16fc528c9c516f212c59fc651b4f1fb082b3d9087f6611eb17534faa2b20b844 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_dirac, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:53:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:44 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c14004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:44 compute-0 sudo[211580]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:53:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:53:44.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:53:44 compute-0 systemd[1]: Started libpod-conmon-16fc528c9c516f212c59fc651b4f1fb082b3d9087f6611eb17534faa2b20b844.scope.
Sep 30 17:53:44 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:53:44 compute-0 podman[211626]: 2025-09-30 17:53:44.207918834 +0000 UTC m=+0.020071010 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:53:44 compute-0 podman[211626]: 2025-09-30 17:53:44.31512006 +0000 UTC m=+0.127272246 container init 16fc528c9c516f212c59fc651b4f1fb082b3d9087f6611eb17534faa2b20b844 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Sep 30 17:53:44 compute-0 podman[211626]: 2025-09-30 17:53:44.321725222 +0000 UTC m=+0.133877378 container start 16fc528c9c516f212c59fc651b4f1fb082b3d9087f6611eb17534faa2b20b844 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Sep 30 17:53:44 compute-0 podman[211626]: 2025-09-30 17:53:44.325668314 +0000 UTC m=+0.137820470 container attach 16fc528c9c516f212c59fc651b4f1fb082b3d9087f6611eb17534faa2b20b844 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_dirac, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 17:53:44 compute-0 goofy_dirac[211642]: 167 167
Sep 30 17:53:44 compute-0 systemd[1]: libpod-16fc528c9c516f212c59fc651b4f1fb082b3d9087f6611eb17534faa2b20b844.scope: Deactivated successfully.
Sep 30 17:53:44 compute-0 podman[211626]: 2025-09-30 17:53:44.327279305 +0000 UTC m=+0.139431461 container died 16fc528c9c516f212c59fc651b4f1fb082b3d9087f6611eb17534faa2b20b844 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_dirac, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 17:53:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8162dab74c7c8269a56e702f96ad7944c7f100ae5d4f3ef7c326c95b0e8e0b9-merged.mount: Deactivated successfully.
Sep 30 17:53:44 compute-0 podman[211626]: 2025-09-30 17:53:44.362140488 +0000 UTC m=+0.174292644 container remove 16fc528c9c516f212c59fc651b4f1fb082b3d9087f6611eb17534faa2b20b844 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_dirac, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Sep 30 17:53:44 compute-0 systemd[1]: libpod-conmon-16fc528c9c516f212c59fc651b4f1fb082b3d9087f6611eb17534faa2b20b844.scope: Deactivated successfully.
Sep 30 17:53:44 compute-0 podman[211764]: 2025-09-30 17:53:44.517138162 +0000 UTC m=+0.040827198 container create 314603d04ffa038cc5bfca438917ebf71efedb4321857eb17f29a2b95d08dcd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Sep 30 17:53:44 compute-0 systemd[1]: Started libpod-conmon-314603d04ffa038cc5bfca438917ebf71efedb4321857eb17f29a2b95d08dcd6.scope.
Sep 30 17:53:44 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:53:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe8eeefaf03bacf0bb0c481332d52c8a1831e692e60f57a36e1d6fb11f70b0f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:53:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe8eeefaf03bacf0bb0c481332d52c8a1831e692e60f57a36e1d6fb11f70b0f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:53:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe8eeefaf03bacf0bb0c481332d52c8a1831e692e60f57a36e1d6fb11f70b0f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:53:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe8eeefaf03bacf0bb0c481332d52c8a1831e692e60f57a36e1d6fb11f70b0f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:53:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe8eeefaf03bacf0bb0c481332d52c8a1831e692e60f57a36e1d6fb11f70b0f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:53:44 compute-0 podman[211764]: 2025-09-30 17:53:44.499234508 +0000 UTC m=+0.022923544 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:53:44 compute-0 podman[211764]: 2025-09-30 17:53:44.604493344 +0000 UTC m=+0.128182410 container init 314603d04ffa038cc5bfca438917ebf71efedb4321857eb17f29a2b95d08dcd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_jennings, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Sep 30 17:53:44 compute-0 podman[211764]: 2025-09-30 17:53:44.619726069 +0000 UTC m=+0.143415105 container start 314603d04ffa038cc5bfca438917ebf71efedb4321857eb17f29a2b95d08dcd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_jennings, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:53:44 compute-0 sudo[211833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbotkqqizwhtssftljoigqdpmbxocogj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254824.362612-1332-244426186700047/AnsiballZ_file.py'
Sep 30 17:53:44 compute-0 podman[211764]: 2025-09-30 17:53:44.623914847 +0000 UTC m=+0.147603923 container attach 314603d04ffa038cc5bfca438917ebf71efedb4321857eb17f29a2b95d08dcd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_jennings, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Sep 30 17:53:44 compute-0 sudo[211833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:44 compute-0 python3.9[211837]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:53:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:53:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:53:44.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:53:44 compute-0 sudo[211833]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:44 compute-0 ceph-mon[73755]: pgmap v439: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:53:44 compute-0 mystifying_jennings[211804]: --> passed data devices: 0 physical, 1 LVM
Sep 30 17:53:44 compute-0 mystifying_jennings[211804]: --> All data devices are unavailable
Sep 30 17:53:44 compute-0 systemd[1]: libpod-314603d04ffa038cc5bfca438917ebf71efedb4321857eb17f29a2b95d08dcd6.scope: Deactivated successfully.
Sep 30 17:53:44 compute-0 podman[211764]: 2025-09-30 17:53:44.994356371 +0000 UTC m=+0.518045407 container died 314603d04ffa038cc5bfca438917ebf71efedb4321857eb17f29a2b95d08dcd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:53:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fe8eeefaf03bacf0bb0c481332d52c8a1831e692e60f57a36e1d6fb11f70b0f-merged.mount: Deactivated successfully.
Sep 30 17:53:45 compute-0 podman[211764]: 2025-09-30 17:53:45.046187843 +0000 UTC m=+0.569876879 container remove 314603d04ffa038cc5bfca438917ebf71efedb4321857eb17f29a2b95d08dcd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Sep 30 17:53:45 compute-0 systemd[1]: libpod-conmon-314603d04ffa038cc5bfca438917ebf71efedb4321857eb17f29a2b95d08dcd6.scope: Deactivated successfully.
Sep 30 17:53:45 compute-0 sudo[211474]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:45 compute-0 podman[211896]: 2025-09-30 17:53:45.138251457 +0000 UTC m=+0.104346953 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20250930, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Sep 30 17:53:45 compute-0 sudo[211971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:53:45 compute-0 sudo[211971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:53:45 compute-0 sudo[211971]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:45 compute-0 sudo[212016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 17:53:45 compute-0 sudo[212016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:53:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:53:45 compute-0 sudo[212085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fslvhggvgwyzigaguofcxhkuuvhfwwwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254824.9990697-1332-234321142921433/AnsiballZ_file.py'
Sep 30 17:53:45 compute-0 sudo[212085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v440: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:45 compute-0 python3.9[212087]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:53:45 compute-0 sudo[212085]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:45 compute-0 podman[212166]: 2025-09-30 17:53:45.661237731 +0000 UTC m=+0.040214502 container create eaa0da34eb7e2a25caa72837b0cb308d00d37c03d3b7a6430a18a86347a5d757 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid)
Sep 30 17:53:45 compute-0 systemd[1]: Started libpod-conmon-eaa0da34eb7e2a25caa72837b0cb308d00d37c03d3b7a6430a18a86347a5d757.scope.
Sep 30 17:53:45 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:53:45 compute-0 podman[212166]: 2025-09-30 17:53:45.643669656 +0000 UTC m=+0.022646457 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:53:45 compute-0 podman[212166]: 2025-09-30 17:53:45.742918956 +0000 UTC m=+0.121895747 container init eaa0da34eb7e2a25caa72837b0cb308d00d37c03d3b7a6430a18a86347a5d757 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_diffie, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Sep 30 17:53:45 compute-0 podman[212166]: 2025-09-30 17:53:45.750596485 +0000 UTC m=+0.129573256 container start eaa0da34eb7e2a25caa72837b0cb308d00d37c03d3b7a6430a18a86347a5d757 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_diffie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:53:45 compute-0 podman[212166]: 2025-09-30 17:53:45.753842909 +0000 UTC m=+0.132819770 container attach eaa0da34eb7e2a25caa72837b0cb308d00d37c03d3b7a6430a18a86347a5d757 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_diffie, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:53:45 compute-0 vigilant_diffie[212222]: 167 167
Sep 30 17:53:45 compute-0 systemd[1]: libpod-eaa0da34eb7e2a25caa72837b0cb308d00d37c03d3b7a6430a18a86347a5d757.scope: Deactivated successfully.
Sep 30 17:53:45 compute-0 podman[212166]: 2025-09-30 17:53:45.758912701 +0000 UTC m=+0.137889462 container died eaa0da34eb7e2a25caa72837b0cb308d00d37c03d3b7a6430a18a86347a5d757 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Sep 30 17:53:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-73756d5ed097a3c2eea8f288e0969fb0fa1c1db8a8e7d9b844642376a5afa6d1-merged.mount: Deactivated successfully.
Sep 30 17:53:45 compute-0 podman[212166]: 2025-09-30 17:53:45.793316611 +0000 UTC m=+0.172293392 container remove eaa0da34eb7e2a25caa72837b0cb308d00d37c03d3b7a6430a18a86347a5d757 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_diffie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 17:53:45 compute-0 systemd[1]: libpod-conmon-eaa0da34eb7e2a25caa72837b0cb308d00d37c03d3b7a6430a18a86347a5d757.scope: Deactivated successfully.
Sep 30 17:53:45 compute-0 sudo[212314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psuyiglxgzmimqmycvuccyxggvsvnwau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254825.6331055-1332-278760267689463/AnsiballZ_file.py'
Sep 30 17:53:45 compute-0 sudo[212314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:45 compute-0 podman[212321]: 2025-09-30 17:53:45.977008139 +0000 UTC m=+0.046397263 container create f1b4b22a819566cb3fbcff01b18f01293d1b7c673784fab6aba0f7c90f5bf450 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid)
Sep 30 17:53:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:45 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf4003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:46 compute-0 ceph-mon[73755]: pgmap v440: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:46 compute-0 systemd[1]: Started libpod-conmon-f1b4b22a819566cb3fbcff01b18f01293d1b7c673784fab6aba0f7c90f5bf450.scope.
Sep 30 17:53:46 compute-0 podman[212321]: 2025-09-30 17:53:45.958331925 +0000 UTC m=+0.027721039 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:53:46 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:53:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8ffe18e64bca2516681bc0a49790a2ee7b63291001cd14431d1689cdf2fcdb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:53:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8ffe18e64bca2516681bc0a49790a2ee7b63291001cd14431d1689cdf2fcdb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:53:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8ffe18e64bca2516681bc0a49790a2ee7b63291001cd14431d1689cdf2fcdb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:53:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8ffe18e64bca2516681bc0a49790a2ee7b63291001cd14431d1689cdf2fcdb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:53:46 compute-0 podman[212321]: 2025-09-30 17:53:46.083672451 +0000 UTC m=+0.153061575 container init f1b4b22a819566cb3fbcff01b18f01293d1b7c673784fab6aba0f7c90f5bf450 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_curie, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Sep 30 17:53:46 compute-0 podman[212321]: 2025-09-30 17:53:46.098838924 +0000 UTC m=+0.168228018 container start f1b4b22a819566cb3fbcff01b18f01293d1b7c673784fab6aba0f7c90f5bf450 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_curie, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:53:46 compute-0 podman[212321]: 2025-09-30 17:53:46.102098798 +0000 UTC m=+0.171487892 container attach f1b4b22a819566cb3fbcff01b18f01293d1b7c673784fab6aba0f7c90f5bf450 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:53:46 compute-0 python3.9[212322]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:53:46 compute-0 sudo[212314]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:46 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:53:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:53:46.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:53:46 compute-0 pedantic_curie[212338]: {
Sep 30 17:53:46 compute-0 pedantic_curie[212338]:     "0": [
Sep 30 17:53:46 compute-0 pedantic_curie[212338]:         {
Sep 30 17:53:46 compute-0 pedantic_curie[212338]:             "devices": [
Sep 30 17:53:46 compute-0 pedantic_curie[212338]:                 "/dev/loop3"
Sep 30 17:53:46 compute-0 pedantic_curie[212338]:             ],
Sep 30 17:53:46 compute-0 pedantic_curie[212338]:             "lv_name": "ceph_lv0",
Sep 30 17:53:46 compute-0 pedantic_curie[212338]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:53:46 compute-0 pedantic_curie[212338]:             "lv_size": "21470642176",
Sep 30 17:53:46 compute-0 pedantic_curie[212338]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 17:53:46 compute-0 pedantic_curie[212338]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:53:46 compute-0 pedantic_curie[212338]:             "name": "ceph_lv0",
Sep 30 17:53:46 compute-0 pedantic_curie[212338]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:53:46 compute-0 pedantic_curie[212338]:             "tags": {
Sep 30 17:53:46 compute-0 pedantic_curie[212338]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:53:46 compute-0 pedantic_curie[212338]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:53:46 compute-0 pedantic_curie[212338]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 17:53:46 compute-0 pedantic_curie[212338]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 17:53:46 compute-0 pedantic_curie[212338]:                 "ceph.cluster_name": "ceph",
Sep 30 17:53:46 compute-0 pedantic_curie[212338]:                 "ceph.crush_device_class": "",
Sep 30 17:53:46 compute-0 pedantic_curie[212338]:                 "ceph.encrypted": "0",
Sep 30 17:53:46 compute-0 pedantic_curie[212338]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 17:53:46 compute-0 pedantic_curie[212338]:                 "ceph.osd_id": "0",
Sep 30 17:53:46 compute-0 pedantic_curie[212338]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 17:53:46 compute-0 pedantic_curie[212338]:                 "ceph.type": "block",
Sep 30 17:53:46 compute-0 pedantic_curie[212338]:                 "ceph.vdo": "0",
Sep 30 17:53:46 compute-0 pedantic_curie[212338]:                 "ceph.with_tpm": "0"
Sep 30 17:53:46 compute-0 pedantic_curie[212338]:             },
Sep 30 17:53:46 compute-0 pedantic_curie[212338]:             "type": "block",
Sep 30 17:53:46 compute-0 pedantic_curie[212338]:             "vg_name": "ceph_vg0"
Sep 30 17:53:46 compute-0 pedantic_curie[212338]:         }
Sep 30 17:53:46 compute-0 pedantic_curie[212338]:     ]
Sep 30 17:53:46 compute-0 pedantic_curie[212338]: }
Sep 30 17:53:46 compute-0 systemd[1]: libpod-f1b4b22a819566cb3fbcff01b18f01293d1b7c673784fab6aba0f7c90f5bf450.scope: Deactivated successfully.
Sep 30 17:53:46 compute-0 podman[212321]: 2025-09-30 17:53:46.50415845 +0000 UTC m=+0.573547554 container died f1b4b22a819566cb3fbcff01b18f01293d1b7c673784fab6aba0f7c90f5bf450 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:53:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e8ffe18e64bca2516681bc0a49790a2ee7b63291001cd14431d1689cdf2fcdb-merged.mount: Deactivated successfully.
Sep 30 17:53:46 compute-0 podman[212321]: 2025-09-30 17:53:46.559192844 +0000 UTC m=+0.628581948 container remove f1b4b22a819566cb3fbcff01b18f01293d1b7c673784fab6aba0f7c90f5bf450 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_curie, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 17:53:46 compute-0 systemd[1]: libpod-conmon-f1b4b22a819566cb3fbcff01b18f01293d1b7c673784fab6aba0f7c90f5bf450.scope: Deactivated successfully.
Sep 30 17:53:46 compute-0 sudo[212016]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:46 compute-0 sudo[212516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okkypabtzhxtmcwnxeuqkcjahvcvcmvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254826.3207314-1332-86616550344918/AnsiballZ_file.py'
Sep 30 17:53:46 compute-0 sudo[212516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:46 compute-0 sudo[212505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:53:46 compute-0 sudo[212505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:53:46 compute-0 sudo[212505]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:46 compute-0 sudo[212538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 17:53:46 compute-0 sudo[212538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:53:46 compute-0 python3.9[212535]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:53:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:53:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:53:46.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:53:46 compute-0 sudo[212516]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:53:47.030Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:53:47 compute-0 podman[212664]: 2025-09-30 17:53:47.122690208 +0000 UTC m=+0.042279906 container create d0903007039217bfbe27346551f390dc36572988157c5c58c886f126672fa9a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_rosalind, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:53:47 compute-0 systemd[1]: Started libpod-conmon-d0903007039217bfbe27346551f390dc36572988157c5c58c886f126672fa9a6.scope.
Sep 30 17:53:47 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:53:47 compute-0 podman[212664]: 2025-09-30 17:53:47.104813764 +0000 UTC m=+0.024403483 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:53:47 compute-0 podman[212664]: 2025-09-30 17:53:47.215134601 +0000 UTC m=+0.134724339 container init d0903007039217bfbe27346551f390dc36572988157c5c58c886f126672fa9a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_rosalind, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 17:53:47 compute-0 podman[212664]: 2025-09-30 17:53:47.222745729 +0000 UTC m=+0.142335427 container start d0903007039217bfbe27346551f390dc36572988157c5c58c886f126672fa9a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:53:47 compute-0 podman[212664]: 2025-09-30 17:53:47.226441094 +0000 UTC m=+0.146030802 container attach d0903007039217bfbe27346551f390dc36572988157c5c58c886f126672fa9a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Sep 30 17:53:47 compute-0 hopeful_rosalind[212719]: 167 167
Sep 30 17:53:47 compute-0 systemd[1]: libpod-d0903007039217bfbe27346551f390dc36572988157c5c58c886f126672fa9a6.scope: Deactivated successfully.
Sep 30 17:53:47 compute-0 podman[212664]: 2025-09-30 17:53:47.229528804 +0000 UTC m=+0.149118502 container died d0903007039217bfbe27346551f390dc36572988157c5c58c886f126672fa9a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Sep 30 17:53:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-821675a01371dfe9b61b268109ff118c8b1b38ca3f79069dbc7ce04ba3dff890-merged.mount: Deactivated successfully.
Sep 30 17:53:47 compute-0 podman[212664]: 2025-09-30 17:53:47.264694625 +0000 UTC m=+0.184284323 container remove d0903007039217bfbe27346551f390dc36572988157c5c58c886f126672fa9a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_rosalind, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Sep 30 17:53:47 compute-0 systemd[1]: libpod-conmon-d0903007039217bfbe27346551f390dc36572988157c5c58c886f126672fa9a6.scope: Deactivated successfully.
Sep 30 17:53:47 compute-0 sudo[212787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyqqekcsmptentlxpqeyodxoxzukjycj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254827.0263338-1332-211745339956932/AnsiballZ_file.py'
Sep 30 17:53:47 compute-0 sudo[212787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:47 compute-0 podman[212796]: 2025-09-30 17:53:47.458417892 +0000 UTC m=+0.051702300 container create 6bac01e21e7cf14ec6b0abc439e37e93cfdad6711bae6f08c4b87dd0f3505e5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_varahamihira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Sep 30 17:53:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v441: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:47 compute-0 systemd[1]: Started libpod-conmon-6bac01e21e7cf14ec6b0abc439e37e93cfdad6711bae6f08c4b87dd0f3505e5c.scope.
Sep 30 17:53:47 compute-0 podman[212796]: 2025-09-30 17:53:47.438015633 +0000 UTC m=+0.031300041 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:53:47 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:53:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b28ab1db98c22aec206a5d43c91e08129f1bc9402d4d97611c7c69bedcdce7a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:53:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b28ab1db98c22aec206a5d43c91e08129f1bc9402d4d97611c7c69bedcdce7a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:53:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b28ab1db98c22aec206a5d43c91e08129f1bc9402d4d97611c7c69bedcdce7a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:53:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b28ab1db98c22aec206a5d43c91e08129f1bc9402d4d97611c7c69bedcdce7a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:53:47 compute-0 python3.9[212789]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:53:47 compute-0 podman[212796]: 2025-09-30 17:53:47.555087605 +0000 UTC m=+0.148372013 container init 6bac01e21e7cf14ec6b0abc439e37e93cfdad6711bae6f08c4b87dd0f3505e5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Sep 30 17:53:47 compute-0 podman[212796]: 2025-09-30 17:53:47.563311708 +0000 UTC m=+0.156596096 container start 6bac01e21e7cf14ec6b0abc439e37e93cfdad6711bae6f08c4b87dd0f3505e5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Sep 30 17:53:47 compute-0 podman[212796]: 2025-09-30 17:53:47.566910831 +0000 UTC m=+0.160195239 container attach 6bac01e21e7cf14ec6b0abc439e37e93cfdad6711bae6f08c4b87dd0f3505e5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_varahamihira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Sep 30 17:53:47 compute-0 sudo[212787]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:47 compute-0 sudo[213002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkqktuccmvftyvgtmrojxjtlauuilwhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254827.716582-1332-74680636053853/AnsiballZ_file.py'
Sep 30 17:53:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:47 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c08002220 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:47 compute-0 sudo[213002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:48 compute-0 python3.9[213006]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:53:48 compute-0 sudo[213002]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:48 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c14004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:53:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:53:48.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:53:48 compute-0 lvm[213048]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 17:53:48 compute-0 lvm[213048]: VG ceph_vg0 finished
Sep 30 17:53:48 compute-0 boring_varahamihira[212813]: {}
Sep 30 17:53:48 compute-0 systemd[1]: libpod-6bac01e21e7cf14ec6b0abc439e37e93cfdad6711bae6f08c4b87dd0f3505e5c.scope: Deactivated successfully.
Sep 30 17:53:48 compute-0 systemd[1]: libpod-6bac01e21e7cf14ec6b0abc439e37e93cfdad6711bae6f08c4b87dd0f3505e5c.scope: Consumed 1.232s CPU time.
Sep 30 17:53:48 compute-0 podman[212796]: 2025-09-30 17:53:48.392125152 +0000 UTC m=+0.985409540 container died 6bac01e21e7cf14ec6b0abc439e37e93cfdad6711bae6f08c4b87dd0f3505e5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_varahamihira, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Sep 30 17:53:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-b28ab1db98c22aec206a5d43c91e08129f1bc9402d4d97611c7c69bedcdce7a6-merged.mount: Deactivated successfully.
Sep 30 17:53:48 compute-0 podman[212796]: 2025-09-30 17:53:48.437092777 +0000 UTC m=+1.030377165 container remove 6bac01e21e7cf14ec6b0abc439e37e93cfdad6711bae6f08c4b87dd0f3505e5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_varahamihira, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:53:48 compute-0 systemd[1]: libpod-conmon-6bac01e21e7cf14ec6b0abc439e37e93cfdad6711bae6f08c4b87dd0f3505e5c.scope: Deactivated successfully.
Sep 30 17:53:48 compute-0 sudo[212538]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:48 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:53:48 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:53:48 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:53:48 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:53:48 compute-0 sudo[213177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 17:53:48 compute-0 sudo[213177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:53:48 compute-0 sudo[213177]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:48 compute-0 ceph-mon[73755]: pgmap v441: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:48 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:53:48 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:53:48 compute-0 sudo[213229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qupzgyympxmeuxqwuooolmtcztabbtyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254828.333306-1332-214109997521140/AnsiballZ_file.py'
Sep 30 17:53:48 compute-0 sudo[213229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:48 compute-0 python3.9[213231]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:53:48 compute-0 sudo[213229]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:53:48] "GET /metrics HTTP/1.1" 200 46520 "" "Prometheus/2.51.0"
Sep 30 17:53:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:53:48] "GET /metrics HTTP/1.1" 200 46520 "" "Prometheus/2.51.0"
Sep 30 17:53:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:53:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:53:48.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:53:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v442: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:49 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf4003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:50 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:53:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:53:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:53:50.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:53:50 compute-0 sudo[213383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgnirkgxcdpgekzmqxrbjivyspsdldnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254830.147435-1530-61196038312650/AnsiballZ_stat.py'
Sep 30 17:53:50 compute-0 sudo[213383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:50 compute-0 ceph-mon[73755]: pgmap v442: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:50 compute-0 python3.9[213385]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:53:50 compute-0 sudo[213383]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:53:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:53:50.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:53:50 compute-0 sudo[213506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flieqsbufkkqhnfbavurtoqurvtdcdur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254830.147435-1530-61196038312650/AnsiballZ_copy.py'
Sep 30 17:53:50 compute-0 sudo[213506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:51 compute-0 python3.9[213508]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254830.147435-1530-61196038312650/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:53:51 compute-0 sudo[213506]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v443: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:51 compute-0 sudo[213659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efcbqryigqeiayqcmgsrhseynerkdrcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254831.318657-1530-1365208064378/AnsiballZ_stat.py'
Sep 30 17:53:51 compute-0 sudo[213659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:51 compute-0 python3.9[213661]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:53:51 compute-0 sudo[213659]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:51 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c08002220 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:52 compute-0 sudo[213783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcfdggxhafjncnmszqoebmifhjugvdvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254831.318657-1530-1365208064378/AnsiballZ_copy.py'
Sep 30 17:53:52 compute-0 sudo[213783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:52 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c14004540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:53:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:53:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:53:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:53:52.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:53:52 compute-0 python3.9[213785]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254831.318657-1530-1365208064378/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:53:52 compute-0 sudo[213783]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:52 compute-0 ceph-mon[73755]: pgmap v443: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:53:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:53:52 compute-0 sudo[213935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqiiajrinegqvobdmkbijwkzvxcfjkhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254832.518444-1530-188457549173484/AnsiballZ_stat.py'
Sep 30 17:53:52 compute-0 sudo[213935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:53:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:53:52.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:53:53 compute-0 python3.9[213937]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:53:53 compute-0 sudo[213935]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:53 compute-0 sudo[214059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifroqrqmonvtyjyvlnqojzofqesvlqfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254832.518444-1530-188457549173484/AnsiballZ_copy.py'
Sep 30 17:53:53 compute-0 sudo[214059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v444: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:53:53 compute-0 python3.9[214061]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254832.518444-1530-188457549173484/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:53:53 compute-0 sudo[214059]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:53 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf4003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:54 compute-0 sudo[214212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywdktnjeqeuouikefobqllnidgzukjxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254833.8081985-1530-195762746875970/AnsiballZ_stat.py'
Sep 30 17:53:54 compute-0 sudo[214212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:53:54.240 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 17:53:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:53:54.241 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 17:53:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:53:54.242 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 17:53:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:54 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:53:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:53:54.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:53:54 compute-0 python3.9[214214]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:53:54 compute-0 sudo[214212]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:54 compute-0 ceph-mon[73755]: pgmap v444: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:53:54 compute-0 sudo[214336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtwdzawutdzsvhvqwedwmnyrjucdrgod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254833.8081985-1530-195762746875970/AnsiballZ_copy.py'
Sep 30 17:53:54 compute-0 sudo[214336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:54 compute-0 python3.9[214338]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254833.8081985-1530-195762746875970/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:53:54 compute-0 sudo[214336]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:53:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:53:54.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:53:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:53:55 compute-0 sudo[214488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtmcdkuhutjotvwjsjvfizqerwqwsvos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254834.9962778-1530-195445482553593/AnsiballZ_stat.py'
Sep 30 17:53:55 compute-0 sudo[214488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/175355 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 17:53:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v445: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:53:55 compute-0 python3.9[214490]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:53:55 compute-0 sudo[214488]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:55 compute-0 sudo[214613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vntfxtfonxdqvcgfnbvhruyypqmhzgxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254834.9962778-1530-195445482553593/AnsiballZ_copy.py'
Sep 30 17:53:55 compute-0 sudo[214613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:55 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c08002220 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:56 compute-0 python3.9[214615]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254834.9962778-1530-195445482553593/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:53:56 compute-0 sudo[214613]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:56 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c08002220 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:53:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:53:56.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:53:56 compute-0 ceph-mon[73755]: pgmap v445: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:53:56 compute-0 sudo[214765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkghjlsaiodwomyniesdcsgeagqbjkjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254836.3436747-1530-5395915930299/AnsiballZ_stat.py'
Sep 30 17:53:56 compute-0 sudo[214765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:56 compute-0 python3.9[214767]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:53:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:53:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:53:56.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:53:56 compute-0 sudo[214765]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:53:57.033Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:53:57 compute-0 sudo[214904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ianuftuirutmcuzfwskmgszefpxifoek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254836.3436747-1530-5395915930299/AnsiballZ_copy.py'
Sep 30 17:53:57 compute-0 podman[214862]: 2025-09-30 17:53:57.325401965 +0000 UTC m=+0.077906547 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0)
Sep 30 17:53:57 compute-0 sudo[214904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v446: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:53:57 compute-0 python3.9[214909]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254836.3436747-1530-5395915930299/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:53:57 compute-0 sudo[214904]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:57 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf4003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:58 compute-0 sudo[215062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psmnxpoctmqyriskuoeedvhmdzhhodxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254837.758534-1530-56327691868123/AnsiballZ_stat.py'
Sep 30 17:53:58 compute-0 sudo[215062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:58 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf0003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:53:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:53:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:53:58.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:53:58 compute-0 python3.9[215064]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:53:58 compute-0 sudo[215062]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:58 compute-0 ceph-mon[73755]: pgmap v446: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:53:58 compute-0 sudo[215185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fktgdjuofbkhzuevxmcfadpcalqfyvuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254837.758534-1530-56327691868123/AnsiballZ_copy.py'
Sep 30 17:53:58 compute-0 sudo[215185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:53:58] "GET /metrics HTTP/1.1" 200 46520 "" "Prometheus/2.51.0"
Sep 30 17:53:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:53:58] "GET /metrics HTTP/1.1" 200 46520 "" "Prometheus/2.51.0"
Sep 30 17:53:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:53:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:53:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:53:58.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:53:58 compute-0 python3.9[215187]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254837.758534-1530-56327691868123/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:53:59 compute-0 sudo[215185]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:59 compute-0 sudo[215340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irdkfiesazcdydpzhqlmhqjpfruaashu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254839.175864-1530-182546425344879/AnsiballZ_stat.py'
Sep 30 17:53:59 compute-0 sudo[215340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v447: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:53:59 compute-0 python3.9[215342]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:53:59 compute-0 sudo[215340]: pam_unix(sudo:session): session closed for user root
Sep 30 17:53:59 compute-0 sudo[215464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgimucflwwnhhanqctdrfuhzdpzhzjtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254839.175864-1530-182546425344879/AnsiballZ_copy.py'
Sep 30 17:53:59 compute-0 sudo[215464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:53:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:53:59 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf0003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:54:00 compute-0 python3.9[215466]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254839.175864-1530-182546425344879/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:54:00 compute-0 sudo[215464]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:54:00 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c08002220 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:54:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:54:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:54:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:54:00.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:54:00 compute-0 sudo[215616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivmlcgxcterjyijsgzlpwonvcwmaevag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254840.2861419-1530-111786735539243/AnsiballZ_stat.py'
Sep 30 17:54:00 compute-0 sudo[215616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:00 compute-0 sshd-session[215205]: Invalid user username from 45.252.249.158 port 35178
Sep 30 17:54:00 compute-0 sshd-session[215205]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 17:54:00 compute-0 sshd-session[215205]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158
Sep 30 17:54:00 compute-0 ceph-mon[73755]: pgmap v447: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:54:00 compute-0 python3.9[215618]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:54:00 compute-0 sudo[215616]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:54:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:54:00.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:54:01 compute-0 sudo[215739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfeozzhnmoiquaegfvkczudehmvidoiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254840.2861419-1530-111786735539243/AnsiballZ_copy.py'
Sep 30 17:54:01 compute-0 sudo[215739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:01 compute-0 python3.9[215741]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254840.2861419-1530-111786735539243/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:54:01 compute-0 sudo[215739]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v448: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:54:01 compute-0 sudo[215893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ritnbtqqgpfagyhlhymwvmzexshszulh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254841.4885404-1530-203223373791671/AnsiballZ_stat.py'
Sep 30 17:54:01 compute-0 sudo[215893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:01 compute-0 python3.9[215895]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:54:01 compute-0 sudo[215893]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:54:01 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf4003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:54:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:54:02 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c18001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:54:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:54:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:54:02.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:54:02 compute-0 sudo[216016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybdjctfzniptapbsrkdkqhnqshjznhoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254841.4885404-1530-203223373791671/AnsiballZ_copy.py'
Sep 30 17:54:02 compute-0 sudo[216016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:02 compute-0 python3.9[216018]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254841.4885404-1530-203223373791671/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:54:02 compute-0 sudo[216016]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:02 compute-0 ceph-mon[73755]: pgmap v448: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:54:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:54:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:54:02.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:54:02 compute-0 sudo[216168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouxdkgqedzptrwsjirwcpcjqlfafkjfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254842.6564958-1530-104317551323824/AnsiballZ_stat.py'
Sep 30 17:54:02 compute-0 sudo[216168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:03 compute-0 python3.9[216170]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:54:03 compute-0 sshd-session[215205]: Failed password for invalid user username from 45.252.249.158 port 35178 ssh2
Sep 30 17:54:03 compute-0 sudo[216168]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:03 compute-0 sudo[216292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnsxmvckyyzpmnxtotrjibpzngevzrmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254842.6564958-1530-104317551323824/AnsiballZ_copy.py'
Sep 30 17:54:03 compute-0 sudo[216292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v449: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:54:03 compute-0 sudo[216293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:54:03 compute-0 sudo[216293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:54:03 compute-0 sudo[216293]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:03 compute-0 python3.9[216300]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254842.6564958-1530-104317551323824/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:54:03 compute-0 sudo[216292]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:54:03 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf0003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:54:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:54:04 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 17:54:04 compute-0 sudo[216470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqsrglqxbpjdjjwspgmgpqjqpxjidyxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254843.8249607-1530-49898811415275/AnsiballZ_stat.py'
Sep 30 17:54:04 compute-0 sudo[216470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:54:04 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c08003c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:54:04 compute-0 python3.9[216472]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:54:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:54:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:54:04.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:54:04 compute-0 sudo[216470]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:04 compute-0 sudo[216593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgijppdiodhxcymdcnnexsovdlbiivjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254843.8249607-1530-49898811415275/AnsiballZ_copy.py'
Sep 30 17:54:04 compute-0 sudo[216593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:04 compute-0 ceph-mon[73755]: pgmap v449: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:54:04 compute-0 python3.9[216595]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254843.8249607-1530-49898811415275/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:54:04 compute-0 sudo[216593]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:54:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:54:04.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:54:05 compute-0 sudo[216745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqwuefypfpndvxhknkumzeqouvneboht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254844.950898-1530-119717074801671/AnsiballZ_stat.py'
Sep 30 17:54:05 compute-0 sudo[216745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:54:05 compute-0 python3.9[216747]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:54:05 compute-0 sudo[216745]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:05 compute-0 sshd-session[215205]: Received disconnect from 45.252.249.158 port 35178:11: Bye Bye [preauth]
Sep 30 17:54:05 compute-0 sshd-session[215205]: Disconnected from invalid user username 45.252.249.158 port 35178 [preauth]
Sep 30 17:54:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v450: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:54:05 compute-0 sudo[216870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjdwvdfveoxcxrbiscscpshqxfftuqcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254844.950898-1530-119717074801671/AnsiballZ_copy.py'
Sep 30 17:54:05 compute-0 sudo[216870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:05 compute-0 python3.9[216872]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254844.950898-1530-119717074801671/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:54:05 compute-0 sudo[216870]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:54:05 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf4003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:54:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:54:06 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c18001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:54:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:54:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:54:06.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:54:06 compute-0 sudo[217022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlwxikjbrhagtuhcfgualoccxtooqzcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254846.1198487-1530-80340419275186/AnsiballZ_stat.py'
Sep 30 17:54:06 compute-0 sudo[217022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:06 compute-0 python3.9[217024]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:54:06 compute-0 sudo[217022]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:06 compute-0 ceph-mon[73755]: pgmap v450: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:54:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:54:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:54:06.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:54:07 compute-0 sudo[217145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtfyiwffonfuxthuiwlqjxozlkrzecek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254846.1198487-1530-80340419275186/AnsiballZ_copy.py'
Sep 30 17:54:07 compute-0 sudo[217145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:54:07.034Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:54:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:54:07 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 17:54:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:54:07 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 17:54:07 compute-0 python3.9[217147]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254846.1198487-1530-80340419275186/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:54:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:54:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:54:07 compute-0 sudo[217145]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
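The pg_autoscaler lines above expose the arithmetic behind each pool's target: the logged "pg target" equals the pool's share of raw space times its bias times a budget of roughly 200 PGs (inferred from the ratios in these very lines, e.g. 1.0778624975581169e-05 × 1.0 × 200 ≈ 0.0021557), after which the value is quantized subject to per-pool minimums that are not shown here. A minimal Python check of that arithmetic, using only values copied from the log:

    # Sketch only: reproduce the raw "pg target" values logged by pg_autoscaler above.
    # The ~200 PG budget is inferred from the logged ratios, not read from cluster config.
    pools = [
        (".mgr",               1.0778624975581169e-05, 1.0),  # logged target 0.0021557249951162337
        ("cephfs.cephfs.meta", 7.630884938464543e-07,  4.0),  # logged target 0.0006104707950771635
        (".nfs",               9.538606173080679e-08,  1.0),  # logged target 1.907721234616136e-05
    ]
    for name, space_ratio, bias in pools:
        print(f"{name}: pg target {space_ratio * bias * 200.0}")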
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_17:54:07
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'volumes', '.nfs', 'default.rgw.meta', 'vms', '.mgr', 'images', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data']
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:54:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v451: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:54:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:54:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:54:07 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf0003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:54:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:54:08 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c08003c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:54:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:54:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:54:08.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:54:08 compute-0 ceph-mon[73755]: pgmap v451: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:54:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:54:08] "GET /metrics HTTP/1.1" 200 46520 "" "Prometheus/2.51.0"
Sep 30 17:54:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:54:08] "GET /metrics HTTP/1.1" 200 46520 "" "Prometheus/2.51.0"
Sep 30 17:54:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:54:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:54:08.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:54:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v452: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:54:09 compute-0 python3.9[217300]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
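The command task above checks /run/libvirt for leftover container_*_t SELinux contexts. A standalone sketch of the same check (assuming only Python 3 and coreutils on the host; the path and pattern are taken from the logged command):

    # Sketch only: same check as the logged "ls -lRZ /run/libvirt | grep -E ':container_\S+_t'".
    import re
    import subprocess

    listing = subprocess.run(
        ["ls", "-lRZ", "/run/libvirt"], capture_output=True, text=True, check=True
    ).stdout
    leftovers = [line for line in listing.splitlines() if re.search(r":container_\S+_t", line)]
    print("\n".join(leftovers) if leftovers else "no container_*_t contexts under /run/libvirt")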
Sep 30 17:54:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:54:09 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf4003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:54:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:54:10 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 17:54:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:54:10 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf4003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:54:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:54:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:54:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:54:10.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:54:10 compute-0 sudo[217455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxydwhepngfowpcjuhllguqalgfrbaxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254849.9077218-1942-211360772347178/AnsiballZ_seboolean.py'
Sep 30 17:54:10 compute-0 sudo[217455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:10 compute-0 python3.9[217457]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
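The seboolean task above persistently enables os_enable_vtpm. A manual equivalent, sketched with Python's subprocess around the standard SELinux tools (the boolean name is taken from the logged task; setsebool/getsebool are assumed to be installed):

    # Sketch only: manual equivalent of the logged seboolean task (persistent=True, state=True).
    import subprocess

    subprocess.run(["setsebool", "-P", "os_enable_vtpm", "on"], check=True)
    state = subprocess.run(
        ["getsebool", "os_enable_vtpm"], capture_output=True, text=True, check=True
    ).stdout.strip()
    print(state)  # expected: "os_enable_vtpm --> on"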
Sep 30 17:54:10 compute-0 ceph-mon[73755]: pgmap v452: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:54:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:54:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:54:10.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:54:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v453: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:54:11 compute-0 sudo[217455]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:54:12 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bfc0014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:54:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:54:12 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf0003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:54:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:54:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:54:12.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:54:12 compute-0 sudo[217613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktttoyvuoexvvbxwcystmzazzyngvyvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254852.048413-1958-146592017649027/AnsiballZ_copy.py'
Sep 30 17:54:12 compute-0 dbus-broker-launch[781]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Sep 30 17:54:12 compute-0 sudo[217613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:12 compute-0 python3.9[217615]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:54:12 compute-0 sudo[217613]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:12 compute-0 ceph-mon[73755]: pgmap v453: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:54:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 17:54:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:54:12.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 17:54:13 compute-0 sudo[217765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pananzpdaubrivnshoaydfwiiksvfgin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254852.8136082-1958-193007065725063/AnsiballZ_copy.py'
Sep 30 17:54:13 compute-0 sudo[217765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:13 compute-0 python3.9[217767]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:54:13 compute-0 sudo[217765]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v454: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:54:13 compute-0 sudo[217919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlxkqmoqaughaygigtjdfiivhzqpxgbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254853.5455074-1958-248659515140477/AnsiballZ_copy.py'
Sep 30 17:54:13 compute-0 sudo[217919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:54:14 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf4003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:54:14 compute-0 python3.9[217921]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:54:14 compute-0 sudo[217919]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:54:14 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf4003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:54:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:54:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:54:14.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:54:14 compute-0 sudo[218073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgnappqncdqawcixzheogrpvdwwrchxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254854.2744906-1958-91670443619197/AnsiballZ_copy.py'
Sep 30 17:54:14 compute-0 sudo[218073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:14 compute-0 python3.9[218075]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:54:14 compute-0 sudo[218073]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:14 compute-0 ceph-mon[73755]: pgmap v454: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:54:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:54:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:54:14.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:54:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:54:15 compute-0 sudo[218236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwxhzhfcbtilswlkpnhixusaxqqkhmmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254855.000419-1958-18880449059743/AnsiballZ_copy.py'
Sep 30 17:54:15 compute-0 sudo[218236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/175415 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 17:54:15 compute-0 podman[218199]: 2025-09-30 17:54:15.338408504 +0000 UTC m=+0.096845136 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Sep 30 17:54:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v455: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:54:15 compute-0 python3.9[218246]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:54:15 compute-0 sudo[218236]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:54:16 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf4003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:54:16 compute-0 sudo[218406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luacbeyqaavrutxthubvxtmyoqgblvwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254855.8201537-2030-20983287552339/AnsiballZ_copy.py'
Sep 30 17:54:16 compute-0 sudo[218406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:54:16 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf0003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:54:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:54:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:54:16.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:54:16 compute-0 python3.9[218408]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:54:16 compute-0 sudo[218406]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:16 compute-0 sudo[218558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsohtafabjkhmwypuwjbhmcnltcwipwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254856.4776561-2030-51908212006071/AnsiballZ_copy.py'
Sep 30 17:54:16 compute-0 sudo[218558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:16 compute-0 ceph-mon[73755]: pgmap v455: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:54:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:54:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:54:16.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:54:16 compute-0 python3.9[218560]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:54:17 compute-0 sudo[218558]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:54:17.036Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:54:17 compute-0 sudo[218711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgndimqcfnzxnlsxsnziiufzzkgangzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254857.1447604-2030-54220094052360/AnsiballZ_copy.py'
Sep 30 17:54:17 compute-0 sudo[218711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v456: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:54:17 compute-0 python3.9[218713]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:54:17 compute-0 sudo[218711]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:54:18 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1be8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:54:18 compute-0 sudo[218864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcriuqkrfoptelcsgjoraofbaivqmphi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254857.9008844-2030-30107597580283/AnsiballZ_copy.py'
Sep 30 17:54:18 compute-0 sudo[218864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:54:18 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1c180027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:54:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:54:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:54:18.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:54:18 compute-0 python3.9[218866]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:54:18 compute-0 sudo[218864]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:54:18] "GET /metrics HTTP/1.1" 200 46520 "" "Prometheus/2.51.0"
Sep 30 17:54:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:54:18] "GET /metrics HTTP/1.1" 200 46520 "" "Prometheus/2.51.0"
Sep 30 17:54:18 compute-0 sudo[219016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztwipdfxgxaimcyvlmtekizeqfavxbeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254858.5984592-2030-39700353543072/AnsiballZ_copy.py'
Sep 30 17:54:18 compute-0 sudo[219016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:54:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:54:18.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:54:18 compute-0 ceph-mon[73755]: pgmap v456: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:54:19 compute-0 python3.9[219018]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:54:19 compute-0 sudo[219016]: pam_unix(sudo:session): session closed for user root
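The copy tasks above place the libvirt and QEMU TLS material under /etc/pki. A quick consistency check of the copied server pair, sketched with openssl via subprocess (the destination paths are the ones logged above; openssl is assumed to be present on the host):

    # Sketch only: verify the copied libvirt server certificate against the deployed CA and
    # confirm the certificate and private key carry the same public key.
    import subprocess

    def run(*cmd):
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout.strip()

    print(run("openssl", "verify", "-CAfile", "/etc/pki/CA/cacert.pem",
              "/etc/pki/libvirt/servercert.pem"))
    cert_pub = run("openssl", "x509", "-noout", "-pubkey", "-in", "/etc/pki/libvirt/servercert.pem")
    key_pub = run("openssl", "pkey", "-pubout", "-in", "/etc/pki/libvirt/private/serverkey.pem")
    print("cert/key match:", cert_pub == key_pub)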
Sep 30 17:54:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v457: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:54:19 compute-0 sudo[219170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkauvdqsujynkvrmveiyzljysqlswhoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254859.6080875-2102-196370064254063/AnsiballZ_systemd.py'
Sep 30 17:54:19 compute-0 sudo[219170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:54:20 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf4003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:54:20 compute-0 python3.9[219172]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 17:54:20 compute-0 systemd[1]: Reloading.
Sep 30 17:54:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:54:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:54:20 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf0003c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:54:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:54:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:54:20.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:54:20 compute-0 systemd-rc-local-generator[219199]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:54:20 compute-0 systemd-sysv-generator[219202]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:54:20 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Sep 30 17:54:20 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Sep 30 17:54:20 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Sep 30 17:54:20 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Sep 30 17:54:20 compute-0 systemd[1]: Starting libvirt logging daemon...
Sep 30 17:54:20 compute-0 systemd[1]: Started libvirt logging daemon.
Sep 30 17:54:20 compute-0 sudo[219170]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:54:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:54:20.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:54:20 compute-0 ceph-mon[73755]: pgmap v457: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:54:21 compute-0 sudo[219363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suxnmjcpxnqhbjamqdgbvauzvgtiujik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254860.9808218-2102-249287368423976/AnsiballZ_systemd.py'
Sep 30 17:54:21 compute-0 sudo[219363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v458: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:54:21 compute-0 python3.9[219365]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 17:54:21 compute-0 systemd[1]: Reloading.
Sep 30 17:54:21 compute-0 systemd-rc-local-generator[219395]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:54:21 compute-0 systemd-sysv-generator[219398]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:54:22 compute-0 kernel: ganesha.nfsd[176436]: segfault at 50 ip 00007f1ccce5732e sp 00007f1c857f9210 error 4 in libntirpc.so.5.8[7f1ccce3c000+2c000] likely on CPU 2 (core 0, socket 2)
Sep 30 17:54:22 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
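The kernel segfault report above includes the instruction bytes around the fault (the byte in angle brackets is where RIP points). Decoding those bytes gives a read of offset 0x50 through r13, which is consistent with the "segfault at 50" address and suggests a NULL structure pointer inside libntirpc. A small sketch of that decoding, assuming the third-party capstone disassembler is available (it is not part of the logged system):

    # Sketch only: disassemble the faulting bytes "45 8b 65 50" from the kernel Code: line.
    from capstone import Cs, CS_ARCH_X86, CS_MODE_64

    md = Cs(CS_ARCH_X86, CS_MODE_64)
    for insn in md.disasm(bytes.fromhex("458b6550"), 0x7F1CCCE5732E):
        print(hex(insn.address), insn.mnemonic, insn.op_str)
    # -> 0x7f1ccce5732e mov r12d, dword ptr [r13 + 0x50]
    #    i.e. if r13 was NULL, the load faulted at address 0x50, matching "segfault at 50".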
Sep 30 17:54:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[174512]: 30/09/2025 17:54:22 : epoch 68dc18a2 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1bf0003c50 fd 38 proxy ignored for local
Sep 30 17:54:22 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Sep 30 17:54:22 compute-0 systemd[1]: Started Process Core Dump (PID 219403/UID 0).
Sep 30 17:54:22 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Sep 30 17:54:22 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Sep 30 17:54:22 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Sep 30 17:54:22 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Sep 30 17:54:22 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Sep 30 17:54:22 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Sep 30 17:54:22 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Sep 30 17:54:22 compute-0 systemd[1]: Started libvirt nodedev daemon.
Sep 30 17:54:22 compute-0 sudo[219363]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:54:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:54:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:54:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:54:22.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:54:22 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Sep 30 17:54:22 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.0-org.fedoraproject.SetroubleshootPrivileged.
Sep 30 17:54:22 compute-0 systemd[1]: Started dbus-:1.0-org.fedoraproject.SetroubleshootPrivileged@0.service.
Sep 30 17:54:22 compute-0 sudo[219590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncoijwwsjvhtvuaufotqdyhzntrbzbpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254862.3508122-2102-29690935604494/AnsiballZ_systemd.py'
Sep 30 17:54:22 compute-0 sudo[219590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:54:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:54:22.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:54:22 compute-0 python3.9[219592]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 17:54:22 compute-0 ceph-mon[73755]: pgmap v458: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:54:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:54:22 compute-0 systemd[1]: Reloading.
Sep 30 17:54:23 compute-0 systemd-sysv-generator[219625]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:54:23 compute-0 systemd-rc-local-generator[219620]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:54:23 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Sep 30 17:54:23 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Sep 30 17:54:23 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Sep 30 17:54:23 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Sep 30 17:54:23 compute-0 systemd[1]: Starting libvirt proxy daemon...
Sep 30 17:54:23 compute-0 systemd[1]: Started libvirt proxy daemon.
Sep 30 17:54:23 compute-0 sudo[219590]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:23 compute-0 systemd-coredump[219407]: Process 174516 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 54:
                                                    #0  0x00007f1ccce5732e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Sep 30 17:54:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v459: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:54:23 compute-0 systemd[1]: systemd-coredump@9-219403-0.service: Deactivated successfully.
Sep 30 17:54:23 compute-0 systemd[1]: systemd-coredump@9-219403-0.service: Consumed 1.406s CPU time.
Sep 30 17:54:23 compute-0 setroubleshoot[219405]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 2a31e0e7-5355-41f2-85ea-655cf0da4a8e
Sep 30 17:54:23 compute-0 setroubleshoot[219405]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Sep 30 17:54:23 compute-0 sudo[219706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:54:23 compute-0 sudo[219706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:54:23 compute-0 sudo[219706]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:23 compute-0 podman[219711]: 2025-09-30 17:54:23.643892726 +0000 UTC m=+0.038402072 container died 26e3596af8fd3d98333c5282db380ebf10f4eb1450e40b318453c563d5574767 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Sep 30 17:54:23 compute-0 setroubleshoot[219405]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 2a31e0e7-5355-41f2-85ea-655cf0da4a8e
Sep 30 17:54:23 compute-0 setroubleshoot[219405]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default,
                                                  then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Sep 30 17:54:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b6a67c203cdd86671a8caa9c9d47886d54495e523f2951994dea6f8492f97b8-merged.mount: Deactivated successfully.
Sep 30 17:54:23 compute-0 podman[219711]: 2025-09-30 17:54:23.700128361 +0000 UTC m=+0.094637697 container remove 26e3596af8fd3d98333c5282db380ebf10f4eb1450e40b318453c563d5574767 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:54:23 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Main process exited, code=exited, status=139/n/a
Sep 30 17:54:23 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Failed with result 'exit-code'.
Sep 30 17:54:23 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Consumed 1.843s CPU time.
Sep 30 17:54:23 compute-0 sudo[219875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iexmuuuxxoguhokyarmpakzakhwaqkjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254863.661416-2102-62080934755014/AnsiballZ_systemd.py'
Sep 30 17:54:23 compute-0 ceph-mon[73755]: pgmap v459: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:54:23 compute-0 sudo[219875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:54:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:54:24.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:54:24 compute-0 python3.9[219877]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 17:54:24 compute-0 systemd[1]: Reloading.
Sep 30 17:54:24 compute-0 systemd-sysv-generator[219907]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:54:24 compute-0 systemd-rc-local-generator[219902]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:54:24 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Sep 30 17:54:24 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Sep 30 17:54:24 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 30 17:54:24 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Sep 30 17:54:24 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Sep 30 17:54:24 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Sep 30 17:54:24 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Sep 30 17:54:24 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Sep 30 17:54:24 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Sep 30 17:54:24 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Sep 30 17:54:24 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Sep 30 17:54:24 compute-0 systemd[1]: Started libvirt QEMU daemon.
Sep 30 17:54:24 compute-0 sudo[219875]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:54:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:54:24.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:54:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:54:25 compute-0 sudo[220088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeituhvqvlepupzassqpzeelnqgubdmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254865.019919-2102-81497104552330/AnsiballZ_systemd.py'
Sep 30 17:54:25 compute-0 sudo[220088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v460: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:54:25 compute-0 python3.9[220090]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 17:54:25 compute-0 systemd[1]: Reloading.
Sep 30 17:54:25 compute-0 systemd-rc-local-generator[220117]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:54:25 compute-0 systemd-sysv-generator[220120]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:54:25 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Sep 30 17:54:25 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Sep 30 17:54:25 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Sep 30 17:54:25 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Sep 30 17:54:25 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Sep 30 17:54:25 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Sep 30 17:54:25 compute-0 systemd[1]: Starting libvirt secret daemon...
Sep 30 17:54:26 compute-0 systemd[1]: Started libvirt secret daemon.
Sep 30 17:54:26 compute-0 sudo[220088]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:54:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:54:26.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:54:26 compute-0 ceph-mon[73755]: pgmap v460: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:54:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:54:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:54:26.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:54:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:54:27.037Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:54:27 compute-0 sudo[220300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nimoxaaslddbewwikmaqwikknbzwnwyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254866.862221-2176-49252609447258/AnsiballZ_file.py'
Sep 30 17:54:27 compute-0 sudo[220300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:27 compute-0 python3.9[220302]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:54:27 compute-0 sudo[220300]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v461: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:54:27 compute-0 podman[220304]: 2025-09-30 17:54:27.53931362 +0000 UTC m=+0.070230911 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20250930)
Sep 30 17:54:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/175428 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 17:54:28 compute-0 sudo[220475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykmqoopduylythpeqeqretpmzhrjzxym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254867.720812-2192-212442459042729/AnsiballZ_find.py'
Sep 30 17:54:28 compute-0 sudo[220475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:28 compute-0 python3.9[220477]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Sep 30 17:54:28 compute-0 sudo[220475]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:54:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:54:28.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:54:28 compute-0 ceph-mon[73755]: pgmap v461: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:54:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:54:28] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:54:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:54:28] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:54:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:54:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:54:28.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:54:28 compute-0 sudo[220627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peqdtlkggapxmeybzuzwvxapporozbpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254868.6454785-2208-137710588399218/AnsiballZ_command.py'
Sep 30 17:54:28 compute-0 sudo[220627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:29 compute-0 python3.9[220629]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:54:29 compute-0 sudo[220627]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v462: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:54:30 compute-0 python3.9[220785]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Sep 30 17:54:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:54:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:54:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:54:30.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:54:30 compute-0 ceph-mon[73755]: pgmap v462: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:54:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:54:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:54:30.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:54:30 compute-0 python3.9[220937]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:54:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v463: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:54:31 compute-0 python3.9[221058]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759254870.4806128-2246-18468893931130/.source.xml follow=False _original_basename=secret.xml.j2 checksum=5cde69df6d2b570990e604ddf8058f3ae944d5fb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:54:31 compute-0 unix_chkpwd[221138]: password check failed for user (root)
Sep 30 17:54:31 compute-0 sshd-session[220906]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=115.190.39.222  user=root
Sep 30 17:54:32 compute-0 sudo[221211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swldomafcgzrporvfrxbvlmafxocdzfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254871.8966863-2276-171095728778475/AnsiballZ_command.py'
Sep 30 17:54:32 compute-0 sudo[221211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:54:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:54:32.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:54:32 compute-0 python3.9[221213]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 63d32c6a-fa18-54ed-8711-9a3915cc367b
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:54:32 compute-0 polkitd[6289]: Registered Authentication Agent for unix-process:221215:347117 (system bus name :1.3080 [/usr/bin/pkttyagent --process 221215 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Sep 30 17:54:32 compute-0 polkitd[6289]: Unregistered Authentication Agent for unix-process:221215:347117 (system bus name :1.3080, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Sep 30 17:54:32 compute-0 polkitd[6289]: Registered Authentication Agent for unix-process:221214:347117 (system bus name :1.3081 [/usr/bin/pkttyagent --process 221214 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Sep 30 17:54:32 compute-0 polkitd[6289]: Unregistered Authentication Agent for unix-process:221214:347117 (system bus name :1.3081, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Sep 30 17:54:32 compute-0 sudo[221211]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:32 compute-0 ceph-mon[73755]: pgmap v463: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:54:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:54:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:54:32.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:54:33 compute-0 python3.9[221375]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:54:33 compute-0 sshd-session[220906]: Failed password for root from 115.190.39.222 port 43124 ssh2
Sep 30 17:54:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v464: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:54:33 compute-0 systemd[1]: dbus-:1.0-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Sep 30 17:54:33 compute-0 systemd[1]: dbus-:1.0-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.056s CPU time.
Sep 30 17:54:33 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Sep 30 17:54:33 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Scheduled restart job, restart counter is at 10.
Sep 30 17:54:33 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:54:33 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Consumed 1.843s CPU time.
Sep 30 17:54:33 compute-0 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 17:54:34 compute-0 sudo[221557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptoeqqzjprnnfljyfxrkyuvcpbhkxdub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254873.7974584-2308-209647064070664/AnsiballZ_command.py'
Sep 30 17:54:34 compute-0 sudo[221557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:34 compute-0 podman[221576]: 2025-09-30 17:54:34.183820064 +0000 UTC m=+0.042243352 container create bd3169a87f54e6ed03482de0bdeeb00c28e39f86782caa51e37ed398095f0399 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Sep 30 17:54:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f144f70bf38175666d289454660f7f98df4b31e9c74fea157d96c048d9b15e/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Sep 30 17:54:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f144f70bf38175666d289454660f7f98df4b31e9c74fea157d96c048d9b15e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:54:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f144f70bf38175666d289454660f7f98df4b31e9c74fea157d96c048d9b15e/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:54:34 compute-0 podman[221576]: 2025-09-30 17:54:34.165955588 +0000 UTC m=+0.024378906 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:54:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f144f70bf38175666d289454660f7f98df4b31e9c74fea157d96c048d9b15e/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-0.syzvbh-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:54:34 compute-0 podman[221576]: 2025-09-30 17:54:34.276250283 +0000 UTC m=+0.134673591 container init bd3169a87f54e6ed03482de0bdeeb00c28e39f86782caa51e37ed398095f0399 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:54:34 compute-0 podman[221576]: 2025-09-30 17:54:34.281968802 +0000 UTC m=+0.140392080 container start bd3169a87f54e6ed03482de0bdeeb00c28e39f86782caa51e37ed398095f0399 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Sep 30 17:54:34 compute-0 bash[221576]: bd3169a87f54e6ed03482de0bdeeb00c28e39f86782caa51e37ed398095f0399
Sep 30 17:54:34 compute-0 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:54:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:34 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Sep 30 17:54:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:34 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Sep 30 17:54:34 compute-0 sudo[221557]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:54:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:54:34.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:54:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:34 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Sep 30 17:54:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:34 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Sep 30 17:54:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:34 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Sep 30 17:54:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:34 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Sep 30 17:54:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:34 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Sep 30 17:54:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:34 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 17:54:34 compute-0 ceph-mon[73755]: pgmap v464: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:54:34 compute-0 sudo[221784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apbdnsxfngwlcguzbdhwecgiilqyaflk ; FSID=63d32c6a-fa18-54ed-8711-9a3915cc367b KEY=AQDxFNxoAAAAABAAAVqrvevrN1uM+kO3r0Scwg== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254874.6259522-2324-37101925416894/AnsiballZ_command.py'
Sep 30 17:54:34 compute-0 sudo[221784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:54:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:54:34.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:54:35 compute-0 polkitd[6289]: Registered Authentication Agent for unix-process:221787:347383 (system bus name :1.3084 [/usr/bin/pkttyagent --process 221787 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Sep 30 17:54:35 compute-0 polkitd[6289]: Unregistered Authentication Agent for unix-process:221787:347383 (system bus name :1.3084, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Sep 30 17:54:35 compute-0 sudo[221784]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:54:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v465: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:54:35 compute-0 sudo[221944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghbifywjccgcystwnxhykjimebaaxbhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254875.49768-2340-86291778866321/AnsiballZ_copy.py'
Sep 30 17:54:35 compute-0 sudo[221944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:36 compute-0 python3.9[221946]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:54:36 compute-0 sudo[221944]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:54:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:54:36.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:54:36 compute-0 sudo[222096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgursxvpbqanofiwusgiswguxkjigfxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254876.3022277-2356-116856615864918/AnsiballZ_stat.py'
Sep 30 17:54:36 compute-0 sudo[222096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:36 compute-0 ceph-mon[73755]: pgmap v465: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:54:36 compute-0 python3.9[222098]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:54:36 compute-0 sudo[222096]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:54:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:54:36.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:54:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:54:37.038Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 17:54:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:54:37.039Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:54:37 compute-0 sudo[222219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgqxxrendmpreuocbdsfkmtinlkextlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254876.3022277-2356-116856615864918/AnsiballZ_copy.py'
Sep 30 17:54:37 compute-0 sudo[222219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:54:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:54:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:54:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:54:37 compute-0 python3.9[222221]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759254876.3022277-2356-116856615864918/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:54:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:54:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:54:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:54:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:54:37 compute-0 sudo[222219]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v466: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:54:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:54:38 compute-0 sudo[222373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eezlxlshhrbrsurqjhlhdsvvtucdjhyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254877.782953-2388-6921184291279/AnsiballZ_file.py'
Sep 30 17:54:38 compute-0 sudo[222373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.002000052s ======
Sep 30 17:54:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:54:38.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Sep 30 17:54:38 compute-0 python3.9[222375]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:54:38 compute-0 sudo[222373]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:54:38] "GET /metrics HTTP/1.1" 200 46521 "" "Prometheus/2.51.0"
Sep 30 17:54:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:54:38] "GET /metrics HTTP/1.1" 200 46521 "" "Prometheus/2.51.0"
Sep 30 17:54:38 compute-0 ceph-mon[73755]: pgmap v466: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:54:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:54:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:54:38.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:54:38 compute-0 sudo[222525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgwxuuiapernyqtqlnexbacxaedabfpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254878.6720245-2404-173834943326704/AnsiballZ_stat.py'
Sep 30 17:54:38 compute-0 sudo[222525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:39 compute-0 python3.9[222527]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:54:39 compute-0 sudo[222525]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:39 compute-0 sudo[222604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scptxetaebnysyckehxjmrpysowjcbvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254878.6720245-2404-173834943326704/AnsiballZ_file.py'
Sep 30 17:54:39 compute-0 sudo[222604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v467: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Sep 30 17:54:39 compute-0 python3.9[222606]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:54:39 compute-0 sudo[222604]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:54:40 compute-0 sudo[222758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpsrpmtdscsizwvchhrgzptazzmhompu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254879.962659-2428-189671784809280/AnsiballZ_stat.py'
Sep 30 17:54:40 compute-0 sudo[222758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:54:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:54:40.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:54:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:40 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 17:54:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:40 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 17:54:40 compute-0 python3.9[222760]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:54:40 compute-0 sudo[222758]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:40 compute-0 sudo[222836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lomgupfssvrlqbxmdxjetemtttbryxua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254879.962659-2428-189671784809280/AnsiballZ_file.py'
Sep 30 17:54:40 compute-0 sudo[222836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:40 compute-0 ceph-mon[73755]: pgmap v467: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Sep 30 17:54:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:54:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:54:40.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:54:41 compute-0 python3.9[222838]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.a1n0xch6 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:54:41 compute-0 sudo[222836]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v468: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Sep 30 17:54:41 compute-0 sudo[222990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pstubgtlwpoutjbdccraqnkmgvamjnfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254881.3479092-2452-101248010413098/AnsiballZ_stat.py'
Sep 30 17:54:41 compute-0 sudo[222990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:41 compute-0 python3.9[222992]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:54:41 compute-0 sudo[222990]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:42 compute-0 sudo[223068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bslscaiqowcdzhymzyazselsyqpwphqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254881.3479092-2452-101248010413098/AnsiballZ_file.py'
Sep 30 17:54:42 compute-0 sudo[223068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:54:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:54:42.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:54:42 compute-0 python3.9[223070]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:54:42 compute-0 sudo[223068]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:42 compute-0 ceph-mon[73755]: pgmap v468: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Sep 30 17:54:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:54:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:54:42.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:54:43 compute-0 sudo[223220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpbnhzjgimowvcldncdyqucdwhfqthhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254882.7679784-2478-249749006329261/AnsiballZ_command.py'
Sep 30 17:54:43 compute-0 sudo[223220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:43 compute-0 python3.9[223222]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:54:43 compute-0 sudo[223220]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v469: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:54:43 compute-0 sudo[223302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:54:43 compute-0 sudo[223302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:54:43 compute-0 sudo[223302]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:44 compute-0 sudo[223400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vstprweabhgwwmmucrjfznencxrvucnl ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759254883.539898-2494-88550508718649/AnsiballZ_edpm_nftables_from_files.py'
Sep 30 17:54:44 compute-0 sudo[223400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:44 compute-0 python3[223402]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Sep 30 17:54:44 compute-0 sudo[223400]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:54:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:54:44.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:54:44 compute-0 ceph-mon[73755]: pgmap v469: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:54:44 compute-0 sudo[223552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eabhgbouqttsajvethydcqadwdfziboi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254884.5761087-2510-119264999078988/AnsiballZ_stat.py'
Sep 30 17:54:44 compute-0 sudo[223552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:54:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:54:44.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:54:45 compute-0 python3.9[223554]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:54:45 compute-0 sudo[223552]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:54:45 compute-0 sudo[223631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzfammlilnsfeenpbpllsogngbhkveiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254884.5761087-2510-119264999078988/AnsiballZ_file.py'
Sep 30 17:54:45 compute-0 sudo[223631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v470: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:54:45 compute-0 podman[223633]: 2025-09-30 17:54:45.584614388 +0000 UTC m=+0.143009958 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team)
Sep 30 17:54:45 compute-0 python3.9[223634]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:54:45 compute-0 sudo[223631]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:46 compute-0 sudo[223811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkckejwdrabjpbtybaofkpumgdgjkksh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254886.0020962-2534-88941407592093/AnsiballZ_stat.py'
Sep 30 17:54:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:54:46 compute-0 sudo[223811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:54:46.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:54:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:46 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 17:54:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:46 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Sep 30 17:54:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:46 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Sep 30 17:54:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:46 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Sep 30 17:54:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:46 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Sep 30 17:54:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:46 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Sep 30 17:54:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:46 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Sep 30 17:54:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:46 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:54:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:46 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:54:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:46 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:54:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:46 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Sep 30 17:54:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:46 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:54:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:46 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Sep 30 17:54:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:46 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Sep 30 17:54:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:46 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Sep 30 17:54:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:46 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Sep 30 17:54:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:46 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Sep 30 17:54:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:46 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Sep 30 17:54:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:46 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Sep 30 17:54:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:46 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Sep 30 17:54:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:46 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Sep 30 17:54:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:46 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Sep 30 17:54:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:46 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Sep 30 17:54:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:46 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Sep 30 17:54:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:46 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 17:54:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:46 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Sep 30 17:54:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:46 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 17:54:46 compute-0 python3.9[223813]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:54:46 compute-0 sudo[223811]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:46 compute-0 sudo[223902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idtmfznxvqyndofdnydjtiuofsmbyhkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254886.0020962-2534-88941407592093/AnsiballZ_file.py'
Sep 30 17:54:46 compute-0 sudo[223902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:46 compute-0 ceph-mon[73755]: pgmap v470: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:54:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:54:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:54:46.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:54:47 compute-0 python3.9[223904]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:54:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:54:47.040Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 17:54:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:54:47.040Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:54:47 compute-0 sudo[223902]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v471: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:54:47 compute-0 sudo[224056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koklxqwyfhfyrwcdasstjzwntiwyqyzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254887.3763463-2558-200634738175561/AnsiballZ_stat.py'
Sep 30 17:54:47 compute-0 sudo[224056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:47 compute-0 python3.9[224058]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:54:47 compute-0 sudo[224056]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:48 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3580000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:54:48 compute-0 sudo[224137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckiwriiyfepyiqrafeetyhuriyqmshmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254887.3763463-2558-200634738175561/AnsiballZ_file.py'
Sep 30 17:54:48 compute-0 sudo[224137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:48 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:54:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:54:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:54:48.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:54:48 compute-0 python3.9[224139]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:54:48 compute-0 sudo[224137]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:54:48] "GET /metrics HTTP/1.1" 200 46521 "" "Prometheus/2.51.0"
Sep 30 17:54:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:54:48] "GET /metrics HTTP/1.1" 200 46521 "" "Prometheus/2.51.0"
Sep 30 17:54:48 compute-0 sudo[224185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:54:48 compute-0 sudo[224185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:54:48 compute-0 sudo[224185]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:48 compute-0 ceph-mon[73755]: pgmap v471: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:54:48 compute-0 sudo[224241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 17:54:48 compute-0 sudo[224241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:54:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:54:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:54:48.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:54:49 compute-0 sudo[224339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzdcffdaiapxkujznrycuadrgaamvedi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254888.766448-2582-151244754341791/AnsiballZ_stat.py'
Sep 30 17:54:49 compute-0 sudo[224339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:49 compute-0 python3.9[224341]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:54:49 compute-0 sudo[224339]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v472: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:54:49 compute-0 sudo[224241]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:54:49 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:54:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 17:54:49 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:54:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 17:54:49 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:54:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 17:54:49 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:54:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 17:54:49 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:54:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 17:54:49 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:54:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:54:49 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:54:49 compute-0 sudo[224457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tepcnlyxtdkahurhfgboufamlmqdrinj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254888.766448-2582-151244754341791/AnsiballZ_file.py'
Sep 30 17:54:49 compute-0 sudo[224457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:49 compute-0 sudo[224445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:54:49 compute-0 sudo[224445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:54:49 compute-0 sudo[224445]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:49 compute-0 sudo[224478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 17:54:49 compute-0 sudo[224478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:54:49 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:54:49 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:54:49 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:54:49 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:54:49 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:54:49 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:54:49 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:54:49 compute-0 python3.9[224475]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:54:49 compute-0 sudo[224457]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/175450 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 17:54:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:50 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3558000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:54:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:54:50 compute-0 podman[224574]: 2025-09-30 17:54:50.290608199 +0000 UTC m=+0.052716355 container create d4944a9d9ca8689c6d315212ad7873f4f73a322c1d1e7e9e9ba8d4d95341b6bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_jones, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:54:50 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Sep 30 17:54:50 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:54:50.291703) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 17:54:50 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Sep 30 17:54:50 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254890291767, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 3775, "num_deletes": 502, "total_data_size": 7269790, "memory_usage": 7384008, "flush_reason": "Manual Compaction"}
Sep 30 17:54:50 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Sep 30 17:54:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:50 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3550000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:54:50 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254890325238, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 4078032, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12818, "largest_seqno": 16592, "table_properties": {"data_size": 4067031, "index_size": 6087, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3845, "raw_key_size": 30205, "raw_average_key_size": 19, "raw_value_size": 4040626, "raw_average_value_size": 2656, "num_data_blocks": 273, "num_entries": 1521, "num_filter_entries": 1521, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759254484, "oldest_key_time": 1759254484, "file_creation_time": 1759254890, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Sep 30 17:54:50 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 33597 microseconds, and 8967 cpu microseconds.
Sep 30 17:54:50 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 17:54:50 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:54:50.325303) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 4078032 bytes OK
Sep 30 17:54:50 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:54:50.325333) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Sep 30 17:54:50 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:54:50.327486) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Sep 30 17:54:50 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:54:50.327503) EVENT_LOG_v1 {"time_micros": 1759254890327497, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 17:54:50 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:54:50.327526) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 17:54:50 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 7254155, prev total WAL file size 7254155, number of live WAL files 2.
Sep 30 17:54:50 compute-0 systemd[1]: Started libpod-conmon-d4944a9d9ca8689c6d315212ad7873f4f73a322c1d1e7e9e9ba8d4d95341b6bc.scope.
Sep 30 17:54:50 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 17:54:50 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:54:50.329647) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Sep 30 17:54:50 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 17:54:50 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(3982KB)], [32(10080KB)]
Sep 30 17:54:50 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254890329725, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 14399989, "oldest_snapshot_seqno": -1}
Sep 30 17:54:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:54:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:54:50.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:54:50 compute-0 podman[224574]: 2025-09-30 17:54:50.267859156 +0000 UTC m=+0.029967362 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:54:50 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:54:50 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 4725 keys, 11302649 bytes, temperature: kUnknown
Sep 30 17:54:50 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254890409876, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 11302649, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11269859, "index_size": 19880, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11845, "raw_key_size": 117927, "raw_average_key_size": 24, "raw_value_size": 11182976, "raw_average_value_size": 2366, "num_data_blocks": 840, "num_entries": 4725, "num_filter_entries": 4725, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759254890, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Sep 30 17:54:50 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 17:54:50 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:54:50.410125) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 11302649 bytes
Sep 30 17:54:50 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:54:50.412142) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 179.5 rd, 140.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 9.8 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(6.3) write-amplify(2.8) OK, records in: 5551, records dropped: 826 output_compression: NoCompression
Sep 30 17:54:50 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:54:50.412306) EVENT_LOG_v1 {"time_micros": 1759254890412154, "job": 14, "event": "compaction_finished", "compaction_time_micros": 80215, "compaction_time_cpu_micros": 29046, "output_level": 6, "num_output_files": 1, "total_output_size": 11302649, "num_input_records": 5551, "num_output_records": 4725, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 17:54:50 compute-0 podman[224574]: 2025-09-30 17:54:50.413015029 +0000 UTC m=+0.175123235 container init d4944a9d9ca8689c6d315212ad7873f4f73a322c1d1e7e9e9ba8d4d95341b6bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_jones, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:54:50 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 17:54:50 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254890413452, "job": 14, "event": "table_file_deletion", "file_number": 34}
Sep 30 17:54:50 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 17:54:50 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254890415038, "job": 14, "event": "table_file_deletion", "file_number": 32}
Sep 30 17:54:50 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:54:50.329518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:54:50 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:54:50.415203) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:54:50 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:54:50.415211) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:54:50 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:54:50.415213) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:54:50 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:54:50.415217) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:54:50 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:54:50.415219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:54:50 compute-0 podman[224574]: 2025-09-30 17:54:50.42225366 +0000 UTC m=+0.184361806 container start d4944a9d9ca8689c6d315212ad7873f4f73a322c1d1e7e9e9ba8d4d95341b6bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_jones, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 17:54:50 compute-0 podman[224574]: 2025-09-30 17:54:50.426905641 +0000 UTC m=+0.189013807 container attach d4944a9d9ca8689c6d315212ad7873f4f73a322c1d1e7e9e9ba8d4d95341b6bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_jones, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:54:50 compute-0 exciting_jones[224636]: 167 167
Sep 30 17:54:50 compute-0 systemd[1]: libpod-d4944a9d9ca8689c6d315212ad7873f4f73a322c1d1e7e9e9ba8d4d95341b6bc.scope: Deactivated successfully.
Sep 30 17:54:50 compute-0 podman[224645]: 2025-09-30 17:54:50.478179968 +0000 UTC m=+0.030436955 container died d4944a9d9ca8689c6d315212ad7873f4f73a322c1d1e7e9e9ba8d4d95341b6bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_jones, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:54:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd144332a8750d85067079e037fd0e448fd06d8213a1924e2d69cf95c6e5bd26-merged.mount: Deactivated successfully.
Sep 30 17:54:50 compute-0 podman[224645]: 2025-09-30 17:54:50.527324178 +0000 UTC m=+0.079581145 container remove d4944a9d9ca8689c6d315212ad7873f4f73a322c1d1e7e9e9ba8d4d95341b6bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_jones, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 17:54:50 compute-0 systemd[1]: libpod-conmon-d4944a9d9ca8689c6d315212ad7873f4f73a322c1d1e7e9e9ba8d4d95341b6bc.scope: Deactivated successfully.
Sep 30 17:54:50 compute-0 sudo[224734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usoswxjbzooricdmgtlrqgwgfoiduzpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254890.2506435-2606-7126330390466/AnsiballZ_stat.py'
Sep 30 17:54:50 compute-0 sudo[224734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:50 compute-0 podman[224742]: 2025-09-30 17:54:50.714566868 +0000 UTC m=+0.051513603 container create 9aad49aadac19871ac9f80bda404491cee850596dc844e4a3db977f8cc01936c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_albattani, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:54:50 compute-0 systemd[1]: Started libpod-conmon-9aad49aadac19871ac9f80bda404491cee850596dc844e4a3db977f8cc01936c.scope.
Sep 30 17:54:50 compute-0 podman[224742]: 2025-09-30 17:54:50.69698559 +0000 UTC m=+0.033932345 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:54:50 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:54:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3de60909c415fe5c4a0939483caec24fe9473ea1b189ced6be579675d8ac7fcf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:54:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3de60909c415fe5c4a0939483caec24fe9473ea1b189ced6be579675d8ac7fcf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:54:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3de60909c415fe5c4a0939483caec24fe9473ea1b189ced6be579675d8ac7fcf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:54:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3de60909c415fe5c4a0939483caec24fe9473ea1b189ced6be579675d8ac7fcf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:54:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3de60909c415fe5c4a0939483caec24fe9473ea1b189ced6be579675d8ac7fcf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:54:50 compute-0 podman[224742]: 2025-09-30 17:54:50.820284024 +0000 UTC m=+0.157230779 container init 9aad49aadac19871ac9f80bda404491cee850596dc844e4a3db977f8cc01936c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_albattani, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Sep 30 17:54:50 compute-0 podman[224742]: 2025-09-30 17:54:50.83472458 +0000 UTC m=+0.171671335 container start 9aad49aadac19871ac9f80bda404491cee850596dc844e4a3db977f8cc01936c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_albattani, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:54:50 compute-0 podman[224742]: 2025-09-30 17:54:50.840979943 +0000 UTC m=+0.177926698 container attach 9aad49aadac19871ac9f80bda404491cee850596dc844e4a3db977f8cc01936c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:54:50 compute-0 python3.9[224737]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:54:50 compute-0 sudo[224734]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:50 compute-0 ceph-mon[73755]: pgmap v472: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:54:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:54:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:54:50.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:54:51 compute-0 ecstatic_albattani[224759]: --> passed data devices: 0 physical, 1 LVM
Sep 30 17:54:51 compute-0 ecstatic_albattani[224759]: --> All data devices are unavailable
Sep 30 17:54:51 compute-0 sudo[224895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bawtbwgvcwffkcbrbbignxlodduigxtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254890.2506435-2606-7126330390466/AnsiballZ_copy.py'
Sep 30 17:54:51 compute-0 sudo[224895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:51 compute-0 systemd[1]: libpod-9aad49aadac19871ac9f80bda404491cee850596dc844e4a3db977f8cc01936c.scope: Deactivated successfully.
Sep 30 17:54:51 compute-0 podman[224742]: 2025-09-30 17:54:51.283909076 +0000 UTC m=+0.620855811 container died 9aad49aadac19871ac9f80bda404491cee850596dc844e4a3db977f8cc01936c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 17:54:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-3de60909c415fe5c4a0939483caec24fe9473ea1b189ced6be579675d8ac7fcf-merged.mount: Deactivated successfully.
Sep 30 17:54:51 compute-0 podman[224742]: 2025-09-30 17:54:51.338108909 +0000 UTC m=+0.675055644 container remove 9aad49aadac19871ac9f80bda404491cee850596dc844e4a3db977f8cc01936c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:54:51 compute-0 systemd[1]: libpod-conmon-9aad49aadac19871ac9f80bda404491cee850596dc844e4a3db977f8cc01936c.scope: Deactivated successfully.
Sep 30 17:54:51 compute-0 sudo[224478]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:51 compute-0 sudo[224910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:54:51 compute-0 sudo[224910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:54:51 compute-0 python3.9[224898]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759254890.2506435-2606-7126330390466/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:54:51 compute-0 sudo[224910]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:51 compute-0 sudo[224895]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v473: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Sep 30 17:54:51 compute-0 sudo[224935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 17:54:51 compute-0 sudo[224935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:54:51 compute-0 sshd-session[224693]: Invalid user ahmed from 14.225.220.107 port 57032
Sep 30 17:54:51 compute-0 sshd-session[224693]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 17:54:51 compute-0 sshd-session[224693]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107
Sep 30 17:54:51 compute-0 podman[225049]: 2025-09-30 17:54:51.987189386 +0000 UTC m=+0.047653113 container create aa21da90188ea6631ba878b36da61265bb19f0ef3255866c465c81e4c36f523a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_johnson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Sep 30 17:54:52 compute-0 systemd[1]: Started libpod-conmon-aa21da90188ea6631ba878b36da61265bb19f0ef3255866c465c81e4c36f523a.scope.
Sep 30 17:54:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:52 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f355c000fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:54:52 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:54:52 compute-0 podman[225049]: 2025-09-30 17:54:51.963322294 +0000 UTC m=+0.023786041 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:54:52 compute-0 podman[225049]: 2025-09-30 17:54:52.072810137 +0000 UTC m=+0.133273934 container init aa21da90188ea6631ba878b36da61265bb19f0ef3255866c465c81e4c36f523a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:54:52 compute-0 podman[225049]: 2025-09-30 17:54:52.082434808 +0000 UTC m=+0.142898545 container start aa21da90188ea6631ba878b36da61265bb19f0ef3255866c465c81e4c36f523a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:54:52 compute-0 podman[225049]: 2025-09-30 17:54:52.085655062 +0000 UTC m=+0.146118889 container attach aa21da90188ea6631ba878b36da61265bb19f0ef3255866c465c81e4c36f523a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:54:52 compute-0 sad_johnson[225095]: 167 167
Sep 30 17:54:52 compute-0 systemd[1]: libpod-aa21da90188ea6631ba878b36da61265bb19f0ef3255866c465c81e4c36f523a.scope: Deactivated successfully.
Sep 30 17:54:52 compute-0 podman[225049]: 2025-09-30 17:54:52.0890335 +0000 UTC m=+0.149497257 container died aa21da90188ea6631ba878b36da61265bb19f0ef3255866c465c81e4c36f523a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_johnson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Sep 30 17:54:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f511cae6bfb940a2aec1db580cde180f4272ad0bdd72dad0f2b26517d0a2c89-merged.mount: Deactivated successfully.
Sep 30 17:54:52 compute-0 podman[225049]: 2025-09-30 17:54:52.137591506 +0000 UTC m=+0.198055243 container remove aa21da90188ea6631ba878b36da61265bb19f0ef3255866c465c81e4c36f523a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Sep 30 17:54:52 compute-0 systemd[1]: libpod-conmon-aa21da90188ea6631ba878b36da61265bb19f0ef3255866c465c81e4c36f523a.scope: Deactivated successfully.
Sep 30 17:54:52 compute-0 sudo[225187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqzylijkjnczrfmcemyntwfqawtrwfli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254891.9249387-2636-101433966935266/AnsiballZ_file.py'
Sep 30 17:54:52 compute-0 sudo[225187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:54:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:54:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:52 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:54:52 compute-0 podman[225195]: 2025-09-30 17:54:52.337119976 +0000 UTC m=+0.059472851 container create b55abbfff3f4a92b1c7886961d6cb4777eaa3dc67c60e25f669e0bd0013b1474 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_brown, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325)
Sep 30 17:54:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:54:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:54:52.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:54:52 compute-0 systemd[1]: Started libpod-conmon-b55abbfff3f4a92b1c7886961d6cb4777eaa3dc67c60e25f669e0bd0013b1474.scope.
Sep 30 17:54:52 compute-0 podman[225195]: 2025-09-30 17:54:52.311374375 +0000 UTC m=+0.033727340 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:54:52 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:54:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2835500842d1c4a12fe1934f10dac001a317d7f10c64c5ae19094c06995f60a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:54:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2835500842d1c4a12fe1934f10dac001a317d7f10c64c5ae19094c06995f60a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:54:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2835500842d1c4a12fe1934f10dac001a317d7f10c64c5ae19094c06995f60a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:54:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2835500842d1c4a12fe1934f10dac001a317d7f10c64c5ae19094c06995f60a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:54:52 compute-0 podman[225195]: 2025-09-30 17:54:52.437756429 +0000 UTC m=+0.160109344 container init b55abbfff3f4a92b1c7886961d6cb4777eaa3dc67c60e25f669e0bd0013b1474 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Sep 30 17:54:52 compute-0 podman[225195]: 2025-09-30 17:54:52.44739595 +0000 UTC m=+0.169748835 container start b55abbfff3f4a92b1c7886961d6cb4777eaa3dc67c60e25f669e0bd0013b1474 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:54:52 compute-0 podman[225195]: 2025-09-30 17:54:52.451644251 +0000 UTC m=+0.173997176 container attach b55abbfff3f4a92b1c7886961d6cb4777eaa3dc67c60e25f669e0bd0013b1474 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_brown, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Sep 30 17:54:52 compute-0 python3.9[225194]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:54:52 compute-0 sudo[225187]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:52 compute-0 nice_brown[225212]: {
Sep 30 17:54:52 compute-0 nice_brown[225212]:     "0": [
Sep 30 17:54:52 compute-0 nice_brown[225212]:         {
Sep 30 17:54:52 compute-0 nice_brown[225212]:             "devices": [
Sep 30 17:54:52 compute-0 nice_brown[225212]:                 "/dev/loop3"
Sep 30 17:54:52 compute-0 nice_brown[225212]:             ],
Sep 30 17:54:52 compute-0 nice_brown[225212]:             "lv_name": "ceph_lv0",
Sep 30 17:54:52 compute-0 nice_brown[225212]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:54:52 compute-0 nice_brown[225212]:             "lv_size": "21470642176",
Sep 30 17:54:52 compute-0 nice_brown[225212]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 17:54:52 compute-0 nice_brown[225212]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:54:52 compute-0 nice_brown[225212]:             "name": "ceph_lv0",
Sep 30 17:54:52 compute-0 nice_brown[225212]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:54:52 compute-0 nice_brown[225212]:             "tags": {
Sep 30 17:54:52 compute-0 nice_brown[225212]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:54:52 compute-0 nice_brown[225212]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:54:52 compute-0 nice_brown[225212]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 17:54:52 compute-0 nice_brown[225212]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 17:54:52 compute-0 nice_brown[225212]:                 "ceph.cluster_name": "ceph",
Sep 30 17:54:52 compute-0 nice_brown[225212]:                 "ceph.crush_device_class": "",
Sep 30 17:54:52 compute-0 nice_brown[225212]:                 "ceph.encrypted": "0",
Sep 30 17:54:52 compute-0 nice_brown[225212]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 17:54:52 compute-0 nice_brown[225212]:                 "ceph.osd_id": "0",
Sep 30 17:54:52 compute-0 nice_brown[225212]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 17:54:52 compute-0 nice_brown[225212]:                 "ceph.type": "block",
Sep 30 17:54:52 compute-0 nice_brown[225212]:                 "ceph.vdo": "0",
Sep 30 17:54:52 compute-0 nice_brown[225212]:                 "ceph.with_tpm": "0"
Sep 30 17:54:52 compute-0 nice_brown[225212]:             },
Sep 30 17:54:52 compute-0 nice_brown[225212]:             "type": "block",
Sep 30 17:54:52 compute-0 nice_brown[225212]:             "vg_name": "ceph_vg0"
Sep 30 17:54:52 compute-0 nice_brown[225212]:         }
Sep 30 17:54:52 compute-0 nice_brown[225212]:     ]
Sep 30 17:54:52 compute-0 nice_brown[225212]: }
Sep 30 17:54:52 compute-0 systemd[1]: libpod-b55abbfff3f4a92b1c7886961d6cb4777eaa3dc67c60e25f669e0bd0013b1474.scope: Deactivated successfully.
Sep 30 17:54:52 compute-0 podman[225195]: 2025-09-30 17:54:52.83453046 +0000 UTC m=+0.556883395 container died b55abbfff3f4a92b1c7886961d6cb4777eaa3dc67c60e25f669e0bd0013b1474 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:54:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2835500842d1c4a12fe1934f10dac001a317d7f10c64c5ae19094c06995f60a-merged.mount: Deactivated successfully.
Sep 30 17:54:52 compute-0 podman[225195]: 2025-09-30 17:54:52.892624614 +0000 UTC m=+0.614977489 container remove b55abbfff3f4a92b1c7886961d6cb4777eaa3dc67c60e25f669e0bd0013b1474 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_brown, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Sep 30 17:54:52 compute-0 systemd[1]: libpod-conmon-b55abbfff3f4a92b1c7886961d6cb4777eaa3dc67c60e25f669e0bd0013b1474.scope: Deactivated successfully.
Sep 30 17:54:52 compute-0 sudo[224935]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:52 compute-0 ceph-mon[73755]: pgmap v473: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Sep 30 17:54:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:54:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:54:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:54:52.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:54:53 compute-0 sudo[225354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:54:53 compute-0 sudo[225354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:54:53 compute-0 sudo[225354]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:53 compute-0 sudo[225411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aupjlltrvuhbkrbwdcosvwyahxykpccc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254892.7099266-2652-229510617335002/AnsiballZ_command.py'
Sep 30 17:54:53 compute-0 sudo[225411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:53 compute-0 sudo[225404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 17:54:53 compute-0 sudo[225404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:54:53 compute-0 python3.9[225429]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:54:53 compute-0 sudo[225411]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v474: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Sep 30 17:54:53 compute-0 podman[225548]: 2025-09-30 17:54:53.624467458 +0000 UTC m=+0.054881701 container create 9b6ab6662d248ac3ad0f1d961deda6ec361a7837d71ead5eec94a41fcc03b0af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:54:53 compute-0 sshd-session[224693]: Failed password for invalid user ahmed from 14.225.220.107 port 57032 ssh2
Sep 30 17:54:53 compute-0 systemd[1]: Started libpod-conmon-9b6ab6662d248ac3ad0f1d961deda6ec361a7837d71ead5eec94a41fcc03b0af.scope.
Sep 30 17:54:53 compute-0 podman[225548]: 2025-09-30 17:54:53.596326915 +0000 UTC m=+0.026741178 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:54:53 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:54:53 compute-0 podman[225548]: 2025-09-30 17:54:53.716057835 +0000 UTC m=+0.146472058 container init 9b6ab6662d248ac3ad0f1d961deda6ec361a7837d71ead5eec94a41fcc03b0af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:54:53 compute-0 podman[225548]: 2025-09-30 17:54:53.726208 +0000 UTC m=+0.156622203 container start 9b6ab6662d248ac3ad0f1d961deda6ec361a7837d71ead5eec94a41fcc03b0af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_chaplygin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 17:54:53 compute-0 podman[225548]: 2025-09-30 17:54:53.730514172 +0000 UTC m=+0.160928425 container attach 9b6ab6662d248ac3ad0f1d961deda6ec361a7837d71ead5eec94a41fcc03b0af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_chaplygin, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:54:53 compute-0 affectionate_chaplygin[225574]: 167 167
Sep 30 17:54:53 compute-0 systemd[1]: libpod-9b6ab6662d248ac3ad0f1d961deda6ec361a7837d71ead5eec94a41fcc03b0af.scope: Deactivated successfully.
Sep 30 17:54:53 compute-0 podman[225548]: 2025-09-30 17:54:53.732648738 +0000 UTC m=+0.163062951 container died 9b6ab6662d248ac3ad0f1d961deda6ec361a7837d71ead5eec94a41fcc03b0af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_chaplygin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 17:54:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-5feb2c74f01497083f9513219be6e14edb39458c20da5463332e51ec78a2cedc-merged.mount: Deactivated successfully.
Sep 30 17:54:53 compute-0 podman[225548]: 2025-09-30 17:54:53.781067679 +0000 UTC m=+0.211481892 container remove 9b6ab6662d248ac3ad0f1d961deda6ec361a7837d71ead5eec94a41fcc03b0af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_chaplygin, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Sep 30 17:54:53 compute-0 systemd[1]: libpod-conmon-9b6ab6662d248ac3ad0f1d961deda6ec361a7837d71ead5eec94a41fcc03b0af.scope: Deactivated successfully.
Sep 30 17:54:53 compute-0 sshd-session[224693]: Received disconnect from 14.225.220.107 port 57032:11: Bye Bye [preauth]
Sep 30 17:54:53 compute-0 sshd-session[224693]: Disconnected from invalid user ahmed 14.225.220.107 port 57032 [preauth]
Sep 30 17:54:53 compute-0 sudo[225681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trpbfjuacuwbyaaxxsmhjzeqsgakvuba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254893.5105474-2668-11812550840331/AnsiballZ_blockinfile.py'
Sep 30 17:54:53 compute-0 sudo[225681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:53 compute-0 podman[225645]: 2025-09-30 17:54:53.98445622 +0000 UTC m=+0.057140760 container create 238a9f9579694db4050255284ef107d50c67bef9f0f8f5bd4fa404c13311e392 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_shtern, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default)
Sep 30 17:54:54 compute-0 ceph-mon[73755]: pgmap v474: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Sep 30 17:54:54 compute-0 systemd[1]: Started libpod-conmon-238a9f9579694db4050255284ef107d50c67bef9f0f8f5bd4fa404c13311e392.scope.
Sep 30 17:54:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:54 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f35580016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:54:54 compute-0 podman[225645]: 2025-09-30 17:54:53.960542017 +0000 UTC m=+0.033226597 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:54:54 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:54:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06248da1d1e62bef4b64b3913c7828b36933dd8d0de3bd149731ebdcc214bdf7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:54:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06248da1d1e62bef4b64b3913c7828b36933dd8d0de3bd149731ebdcc214bdf7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:54:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06248da1d1e62bef4b64b3913c7828b36933dd8d0de3bd149731ebdcc214bdf7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:54:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06248da1d1e62bef4b64b3913c7828b36933dd8d0de3bd149731ebdcc214bdf7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:54:54 compute-0 podman[225645]: 2025-09-30 17:54:54.082081525 +0000 UTC m=+0.154766085 container init 238a9f9579694db4050255284ef107d50c67bef9f0f8f5bd4fa404c13311e392 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 17:54:54 compute-0 podman[225645]: 2025-09-30 17:54:54.090498094 +0000 UTC m=+0.163182634 container start 238a9f9579694db4050255284ef107d50c67bef9f0f8f5bd4fa404c13311e392 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:54:54 compute-0 podman[225645]: 2025-09-30 17:54:54.093144753 +0000 UTC m=+0.165829293 container attach 238a9f9579694db4050255284ef107d50c67bef9f0f8f5bd4fa404c13311e392 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Sep 30 17:54:54 compute-0 python3.9[225684]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:54:54 compute-0 sudo[225681]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:54:54.243 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 17:54:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:54:54.243 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 17:54:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:54:54.243 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 17:54:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:54 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f35500016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:54:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:54:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:54:54.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:54:54 compute-0 lvm[225888]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 17:54:54 compute-0 lvm[225888]: VG ceph_vg0 finished
Sep 30 17:54:54 compute-0 determined_shtern[225690]: {}
Sep 30 17:54:54 compute-0 sudo[225918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hszjiktolojimtbnmflxxzqdjyhtiqcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254894.4859765-2686-12709209916776/AnsiballZ_command.py'
Sep 30 17:54:54 compute-0 systemd[1]: libpod-238a9f9579694db4050255284ef107d50c67bef9f0f8f5bd4fa404c13311e392.scope: Deactivated successfully.
Sep 30 17:54:54 compute-0 systemd[1]: libpod-238a9f9579694db4050255284ef107d50c67bef9f0f8f5bd4fa404c13311e392.scope: Consumed 1.379s CPU time.
Sep 30 17:54:54 compute-0 podman[225645]: 2025-09-30 17:54:54.943117194 +0000 UTC m=+1.015801764 container died 238a9f9579694db4050255284ef107d50c67bef9f0f8f5bd4fa404c13311e392 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_shtern, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:54:54 compute-0 sudo[225918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:54:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:54:54.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:54:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-06248da1d1e62bef4b64b3913c7828b36933dd8d0de3bd149731ebdcc214bdf7-merged.mount: Deactivated successfully.
Sep 30 17:54:54 compute-0 podman[225645]: 2025-09-30 17:54:54.99857297 +0000 UTC m=+1.071257500 container remove 238a9f9579694db4050255284ef107d50c67bef9f0f8f5bd4fa404c13311e392 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_shtern, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Sep 30 17:54:55 compute-0 systemd[1]: libpod-conmon-238a9f9579694db4050255284ef107d50c67bef9f0f8f5bd4fa404c13311e392.scope: Deactivated successfully.
Sep 30 17:54:55 compute-0 sudo[225404]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:54:55 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:54:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:54:55 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:54:55 compute-0 sudo[225935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 17:54:55 compute-0 sudo[225935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:54:55 compute-0 sudo[225935]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:55 compute-0 python3.9[225921]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:54:55 compute-0 sudo[225918]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:54:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v475: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:54:55 compute-0 sudo[226112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bammpyvdzqdrjpsfuxghgcvdnjasocml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254895.496625-2702-68319372154666/AnsiballZ_stat.py'
Sep 30 17:54:55 compute-0 sudo[226112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:56 compute-0 python3.9[226114]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:54:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:56 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f355c001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:54:56 compute-0 sudo[226112]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:56 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:54:56 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:54:56 compute-0 ceph-mon[73755]: pgmap v475: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:54:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:56 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:54:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:54:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:54:56.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:54:56 compute-0 sudo[226266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghmvayblfzatesjizpyyqorzuycrhksq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254896.2710772-2718-131608574124111/AnsiballZ_command.py'
Sep 30 17:54:56 compute-0 sudo[226266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:56 compute-0 python3.9[226268]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:54:56 compute-0 sudo[226266]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:54:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:54:56.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:54:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:54:57.041Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:54:57 compute-0 sudo[226422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwroamlfjrkvhurkhoyggqomvmpwopib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254897.0907207-2734-11720137246471/AnsiballZ_file.py'
Sep 30 17:54:57 compute-0 sudo[226422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v476: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:54:57 compute-0 python3.9[226424]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:54:57 compute-0 sudo[226422]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:58 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f35580016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:54:58 compute-0 sshd[192864]: Timeout before authentication for connection from 36.112.157.37 to 38.102.83.202, pid = 203006
Sep 30 17:54:58 compute-0 sudo[226585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sahyddbrlbbzbsmclieijzcuqmquzmfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254897.955277-2750-75967756230799/AnsiballZ_stat.py'
Sep 30 17:54:58 compute-0 sudo[226585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:54:58 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f35500016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:54:58 compute-0 podman[226549]: 2025-09-30 17:54:58.327478959 +0000 UTC m=+0.087775628 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Sep 30 17:54:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:54:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:54:58.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:54:58 compute-0 python3.9[226594]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:54:58 compute-0 sudo[226585]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:58 compute-0 ceph-mon[73755]: pgmap v476: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:54:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:54:58] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:54:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:54:58] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:54:58 compute-0 sudo[226717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xemzvkfzfnlexetgopeyxeeoqwsoeseo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254897.955277-2750-75967756230799/AnsiballZ_copy.py'
Sep 30 17:54:58 compute-0 sudo[226717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:54:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:54:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:54:58.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:54:59 compute-0 python3.9[226719]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759254897.955277-2750-75967756230799/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:54:59 compute-0 sudo[226717]: pam_unix(sudo:session): session closed for user root
Sep 30 17:54:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v477: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:54:59 compute-0 sudo[226871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqubakgnlqybfcapfchmzjhyamrxmuxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254899.409923-2780-174986062682023/AnsiballZ_stat.py'
Sep 30 17:54:59 compute-0 sudo[226871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:54:59 compute-0 python3.9[226873]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:54:59 compute-0 sudo[226871]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:00 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f355c001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:55:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:00 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:55:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:55:00.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:55:00 compute-0 sudo[226994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixczfahcohxcdvlikjhxtyogxjawekad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254899.409923-2780-174986062682023/AnsiballZ_copy.py'
Sep 30 17:55:00 compute-0 sudo[226994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:00 compute-0 python3.9[226996]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759254899.409923-2780-174986062682023/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:55:00 compute-0 sudo[226994]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:00 compute-0 ceph-mon[73755]: pgmap v477: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:55:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:55:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:55:00.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:55:01 compute-0 sudo[227146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuuwjrtuxdkavmrcuplyfbvwwrvpzwzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254900.896881-2810-20773184500418/AnsiballZ_stat.py'
Sep 30 17:55:01 compute-0 sudo[227146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:01 compute-0 python3.9[227148]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:55:01 compute-0 sudo[227146]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v478: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:01 compute-0 sudo[227271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqbqelfihrpyrjuurhhxgzetpowffdlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254900.896881-2810-20773184500418/AnsiballZ_copy.py'
Sep 30 17:55:01 compute-0 sudo[227271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:02 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f35580016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:02 compute-0 python3.9[227273]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759254900.896881-2810-20773184500418/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:55:02 compute-0 sudo[227271]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:02 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f35500016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:55:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:55:02.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:55:02 compute-0 ceph-mon[73755]: pgmap v478: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:02 compute-0 sudo[227423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whbmbwwszrxpvjddqmaoptlqqxcrsuor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254902.3522086-2840-248335873356754/AnsiballZ_systemd.py'
Sep 30 17:55:02 compute-0 sudo[227423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:55:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:55:02.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:55:03 compute-0 python3.9[227425]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:55:03 compute-0 systemd[1]: Reloading.
Sep 30 17:55:03 compute-0 systemd-sysv-generator[227452]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:55:03 compute-0 systemd-rc-local-generator[227449]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:55:03 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Sep 30 17:55:03 compute-0 sudo[227423]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v479: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:55:03 compute-0 sudo[227516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:55:03 compute-0 sudo[227516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:55:03 compute-0 sudo[227516]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:04 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f355c001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:04 compute-0 sudo[227642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfkzgzdrbcsgmvzloaaxdwkqrhpubaje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254903.7531478-2856-145169532130452/AnsiballZ_systemd.py'
Sep 30 17:55:04 compute-0 sudo[227642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:04 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:55:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:55:04.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:55:04 compute-0 python3.9[227644]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Sep 30 17:55:04 compute-0 systemd[1]: Reloading.
Sep 30 17:55:04 compute-0 systemd-sysv-generator[227675]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:55:04 compute-0 systemd-rc-local-generator[227669]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:55:04 compute-0 ceph-mon[73755]: pgmap v479: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:55:04 compute-0 systemd[1]: Reloading.
Sep 30 17:55:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:55:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:55:04.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:55:05 compute-0 systemd-rc-local-generator[227712]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:55:05 compute-0 systemd-sysv-generator[227716]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:55:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:55:05 compute-0 sudo[227642]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v480: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:05 compute-0 sshd-session[166395]: Connection closed by 192.168.122.30 port 48070
Sep 30 17:55:05 compute-0 sshd-session[166392]: pam_unix(sshd:session): session closed for user zuul
Sep 30 17:55:05 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Sep 30 17:55:05 compute-0 systemd[1]: session-54.scope: Consumed 3min 39.540s CPU time.
Sep 30 17:55:05 compute-0 systemd-logind[811]: Session 54 logged out. Waiting for processes to exit.
Sep 30 17:55:05 compute-0 systemd-logind[811]: Removed session 54.
Sep 30 17:55:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:06 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3558002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:06 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3550002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:55:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:55:06.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:55:06 compute-0 ceph-mon[73755]: pgmap v480: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:55:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:55:06.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:55:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:55:07.042Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:55:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:55:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_17:55:07
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['.mgr', 'volumes', 'images', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', '.rgw.root', 'default.rgw.control', '.nfs', 'default.rgw.log']
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:55:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v481: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:55:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:08 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f355c002f50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:08 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3558002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:55:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:55:08.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:55:08 compute-0 ceph-mon[73755]: pgmap v481: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:55:08] "GET /metrics HTTP/1.1" 200 46516 "" "Prometheus/2.51.0"
Sep 30 17:55:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:55:08] "GET /metrics HTTP/1.1" 200 46516 "" "Prometheus/2.51.0"
Sep 30 17:55:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:55:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:55:08.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:55:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v482: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:55:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:10 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:55:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:10 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3550002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:55:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:55:10.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:55:10 compute-0 ceph-mon[73755]: pgmap v482: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:55:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:55:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:55:10.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:55:11 compute-0 sshd-session[227749]: Accepted publickey for zuul from 192.168.122.30 port 53656 ssh2: ECDSA SHA256:COqAmbdBUMx+UyDa5XAop4Bi6qvwvYhJQ16rCseuHkQ
Sep 30 17:55:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v483: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:11 compute-0 systemd-logind[811]: New session 55 of user zuul.
Sep 30 17:55:11 compute-0 systemd[1]: Started Session 55 of User zuul.
Sep 30 17:55:11 compute-0 sshd-session[227749]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 17:55:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:12 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f355c002f50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:12 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3558002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:55:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:55:12.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:55:12 compute-0 python3.9[227903]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:55:12 compute-0 ceph-mon[73755]: pgmap v483: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:55:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:55:12.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:55:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v484: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:55:13 compute-0 sudo[228059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhozdjyxiyuxmgfnbluidxspgngvesgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254913.2958457-48-180101682575229/AnsiballZ_file.py'
Sep 30 17:55:13 compute-0 sudo[228059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:14 compute-0 python3.9[228061]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:55:14 compute-0 sudo[228059]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:14 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:14 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:55:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:55:14.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:55:14 compute-0 sudo[228211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cigfzazllmswvypvtloigxciluldjufw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254914.1839073-48-14044700403851/AnsiballZ_file.py'
Sep 30 17:55:14 compute-0 sudo[228211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:14 compute-0 python3.9[228213]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:55:14 compute-0 sudo[228211]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:14 compute-0 ceph-mon[73755]: pgmap v484: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:55:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:55:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:55:14.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:55:15 compute-0 sudo[228363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzoyyjjoihnrhgvvoyhpktzvqamqviiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254914.8565605-48-61495893145481/AnsiballZ_file.py'
Sep 30 17:55:15 compute-0 sudo[228363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:55:15 compute-0 python3.9[228365]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:55:15 compute-0 sudo[228363]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v485: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:15 compute-0 sudo[228527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyueozbolvzandrqkweujhgiogoxdlyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254915.5287173-48-245083034034261/AnsiballZ_file.py'
Sep 30 17:55:15 compute-0 sudo[228527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:15 compute-0 podman[228491]: 2025-09-30 17:55:15.976549033 +0000 UTC m=+0.150402061 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930)
Sep 30 17:55:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:16 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f355c002f50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:16 compute-0 python3.9[228534]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Sep 30 17:55:16 compute-0 sudo[228527]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:16 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f355c002f50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 17:55:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:55:16.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 17:55:16 compute-0 sudo[228695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdzhkiwwkqxndjvjpdykyzuflaasaept ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254916.2590108-48-109383002130847/AnsiballZ_file.py'
Sep 30 17:55:16 compute-0 sudo[228695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:16 compute-0 ceph-mon[73755]: pgmap v485: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:16 compute-0 python3.9[228697]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:55:16 compute-0 sudo[228695]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:55:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:55:17.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:55:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:55:17.042Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:55:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v486: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:17 compute-0 sudo[228848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkhpmgvnujcmyblwmwykfexmsvtpxznb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254917.158709-120-231798557449917/AnsiballZ_stat.py'
Sep 30 17:55:17 compute-0 sudo[228848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:17 compute-0 python3.9[228850]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:55:17 compute-0 sudo[228848]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:18 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3550003820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:18 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:55:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:55:18.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:55:18 compute-0 ceph-mon[73755]: pgmap v486: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:18 compute-0 sudo[229003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edjzckgvcszefgyyknewhznuhsulpuwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254918.1483092-136-227661788229587/AnsiballZ_systemd.py'
Sep 30 17:55:18 compute-0 sudo[229003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:55:18] "GET /metrics HTTP/1.1" 200 46516 "" "Prometheus/2.51.0"
Sep 30 17:55:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:55:18] "GET /metrics HTTP/1.1" 200 46516 "" "Prometheus/2.51.0"
Sep 30 17:55:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:55:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:55:19.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:55:19 compute-0 python3.9[229005]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:55:19 compute-0 systemd[1]: Reloading.
Sep 30 17:55:19 compute-0 systemd-rc-local-generator[229028]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:55:19 compute-0 systemd-sysv-generator[229033]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:55:19 compute-0 sudo[229003]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v487: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:55:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:20 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3558003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:20 compute-0 sudo[229195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgkpiwxjdwkbgshyjfprmhwkgikebxek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254919.7647405-152-209241383698520/AnsiballZ_service_facts.py'
Sep 30 17:55:20 compute-0 sudo[229195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:55:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:20 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f355c004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:55:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:55:20.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:55:20 compute-0 python3.9[229197]: ansible-ansible.builtin.service_facts Invoked
Sep 30 17:55:20 compute-0 network[229214]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Sep 30 17:55:20 compute-0 network[229215]: 'network-scripts' will be removed from distribution in near future.
Sep 30 17:55:20 compute-0 network[229216]: It is advised to switch to 'NetworkManager' instead for network management.
Sep 30 17:55:20 compute-0 ceph-mon[73755]: pgmap v487: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:55:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:55:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:55:21.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:55:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v488: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:22 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3580001d70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:55:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:55:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:22 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 17:55:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:55:22.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 17:55:22 compute-0 ceph-mon[73755]: pgmap v488: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:55:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:55:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:55:23.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:55:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v489: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:55:23 compute-0 sudo[229286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:55:23 compute-0 sudo[229286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:55:23 compute-0 sudo[229286]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:24 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3558003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:24 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f355c004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:55:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:55:24.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:55:24 compute-0 ceph-mon[73755]: pgmap v489: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:55:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:55:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:55:25.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:55:25 compute-0 sudo[229195]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:55:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v490: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:26 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3580000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:26 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:55:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:55:26.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:55:26 compute-0 ceph-mon[73755]: pgmap v490: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:55:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:55:27.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:55:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:55:27.043Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:55:27 compute-0 sudo[229520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aevrzitqebayzaboupkaddpiboocrdma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254927.118621-168-246351734525514/AnsiballZ_systemd.py'
Sep 30 17:55:27 compute-0 sudo[229520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v491: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:27 compute-0 python3.9[229522]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:55:27 compute-0 systemd[1]: Reloading.
Sep 30 17:55:27 compute-0 systemd-sysv-generator[229554]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:55:27 compute-0 systemd-rc-local-generator[229549]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:55:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:28 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3558003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:28 compute-0 sudo[229520]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:28 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3558003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:55:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:55:28.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:55:28 compute-0 podman[229584]: 2025-09-30 17:55:28.529124745 +0000 UTC m=+0.064752308 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930)
Sep 30 17:55:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:55:28] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:55:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:55:28] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:55:28 compute-0 ceph-mon[73755]: pgmap v491: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:55:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:55:29.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:55:29 compute-0 python3.9[229728]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:55:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v492: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:55:29 compute-0 sudo[229880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioqncgicujqbcsfqdptygtcjcukkbyyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254929.3852577-202-19805731907295/AnsiballZ_podman_container.py'
Sep 30 17:55:29 compute-0 sudo[229880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:30 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3558003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:30 compute-0 python3.9[229882]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Sep 30 17:55:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:55:30 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Sep 30 17:55:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:30 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574003cc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:55:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:55:30.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:55:30 compute-0 podman[229894]: 2025-09-30 17:55:30.72175225 +0000 UTC m=+0.517056846 image pull f8ff303843ab104c2f5f56920f311c0b22efd49dc54152d8e2ede3a7218e9091 38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest
Sep 30 17:55:30 compute-0 podman[229952]: 2025-09-30 17:55:30.863815603 +0000 UTC m=+0.046003870 container create c9ff0019b05b1e666571aac6713abad68c5abbf3a08bc63b698bf60cf2df4654 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid_config, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Sep 30 17:55:30 compute-0 NetworkManager[45059]: <info>  [1759254930.8883] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/22)
Sep 30 17:55:30 compute-0 kernel: podman0: port 1(veth0) entered blocking state
Sep 30 17:55:30 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Sep 30 17:55:30 compute-0 kernel: veth0: entered allmulticast mode
Sep 30 17:55:30 compute-0 kernel: veth0: entered promiscuous mode
Sep 30 17:55:30 compute-0 NetworkManager[45059]: <info>  [1759254930.9067] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/23)
Sep 30 17:55:30 compute-0 kernel: podman0: port 1(veth0) entered blocking state
Sep 30 17:55:30 compute-0 kernel: podman0: port 1(veth0) entered forwarding state
Sep 30 17:55:30 compute-0 NetworkManager[45059]: <info>  [1759254930.9094] device (veth0): carrier: link connected
Sep 30 17:55:30 compute-0 NetworkManager[45059]: <info>  [1759254930.9095] device (podman0): carrier: link connected
Sep 30 17:55:30 compute-0 ceph-mon[73755]: pgmap v492: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:55:30 compute-0 systemd-udevd[229976]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 17:55:30 compute-0 systemd-udevd[229979]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 17:55:30 compute-0 NetworkManager[45059]: <info>  [1759254930.9383] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 17:55:30 compute-0 podman[229952]: 2025-09-30 17:55:30.843425272 +0000 UTC m=+0.025613569 image pull f8ff303843ab104c2f5f56920f311c0b22efd49dc54152d8e2ede3a7218e9091 38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest
Sep 30 17:55:30 compute-0 NetworkManager[45059]: <info>  [1759254930.9416] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Sep 30 17:55:30 compute-0 NetworkManager[45059]: <info>  [1759254930.9427] device (podman0): Activation: starting connection 'podman0' (8088261f-44fb-4cf3-ac0e-35c374aa5082)
Sep 30 17:55:30 compute-0 NetworkManager[45059]: <info>  [1759254930.9429] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Sep 30 17:55:30 compute-0 NetworkManager[45059]: <info>  [1759254930.9432] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Sep 30 17:55:30 compute-0 NetworkManager[45059]: <info>  [1759254930.9433] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Sep 30 17:55:30 compute-0 NetworkManager[45059]: <info>  [1759254930.9439] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Sep 30 17:55:30 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Sep 30 17:55:30 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Sep 30 17:55:30 compute-0 NetworkManager[45059]: <info>  [1759254930.9814] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Sep 30 17:55:30 compute-0 NetworkManager[45059]: <info>  [1759254930.9817] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Sep 30 17:55:30 compute-0 NetworkManager[45059]: <info>  [1759254930.9828] device (podman0): Activation: successful, device activated.
Sep 30 17:55:30 compute-0 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Sep 30 17:55:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:55:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:55:31.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:55:31 compute-0 systemd[1]: Started libpod-conmon-c9ff0019b05b1e666571aac6713abad68c5abbf3a08bc63b698bf60cf2df4654.scope.
Sep 30 17:55:31 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:55:31 compute-0 podman[229952]: 2025-09-30 17:55:31.249546026 +0000 UTC m=+0.431734323 container init c9ff0019b05b1e666571aac6713abad68c5abbf3a08bc63b698bf60cf2df4654 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid_config, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.build-date=20250930)
Sep 30 17:55:31 compute-0 podman[229952]: 2025-09-30 17:55:31.258927971 +0000 UTC m=+0.441116248 container start c9ff0019b05b1e666571aac6713abad68c5abbf3a08bc63b698bf60cf2df4654 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid_config, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest)
Sep 30 17:55:31 compute-0 podman[229952]: 2025-09-30 17:55:31.262240657 +0000 UTC m=+0.444428934 container attach c9ff0019b05b1e666571aac6713abad68c5abbf3a08bc63b698bf60cf2df4654 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid_config, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Sep 30 17:55:31 compute-0 iscsid_config[230110]: iqn.1994-05.com.redhat:97738a1b0fe
Sep 30 17:55:31 compute-0 systemd[1]: libpod-c9ff0019b05b1e666571aac6713abad68c5abbf3a08bc63b698bf60cf2df4654.scope: Deactivated successfully.
Sep 30 17:55:31 compute-0 podman[229952]: 2025-09-30 17:55:31.265098422 +0000 UTC m=+0.447286699 container died c9ff0019b05b1e666571aac6713abad68c5abbf3a08bc63b698bf60cf2df4654 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid_config, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250930)
Sep 30 17:55:31 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Sep 30 17:55:31 compute-0 kernel: veth0 (unregistering): left allmulticast mode
Sep 30 17:55:31 compute-0 kernel: veth0 (unregistering): left promiscuous mode
Sep 30 17:55:31 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Sep 30 17:55:31 compute-0 NetworkManager[45059]: <info>  [1759254931.3213] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 17:55:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v493: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:31 compute-0 systemd[1]: run-netns-netns\x2d11268f00\x2d4340\x2da793\x2d3182\x2d37350030afb0.mount: Deactivated successfully.
Sep 30 17:55:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c9ff0019b05b1e666571aac6713abad68c5abbf3a08bc63b698bf60cf2df4654-userdata-shm.mount: Deactivated successfully.
Sep 30 17:55:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-83fcbb6e3e65aa5e50a887a9e137b7b09f97299d2685dd0d620b35378f2fcb07-merged.mount: Deactivated successfully.
Sep 30 17:55:31 compute-0 podman[229952]: 2025-09-30 17:55:31.644000257 +0000 UTC m=+0.826188534 container remove c9ff0019b05b1e666571aac6713abad68c5abbf3a08bc63b698bf60cf2df4654 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid_config, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 17:55:31 compute-0 python3.9[229882]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True 38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest /usr/sbin/iscsi-iname
Sep 30 17:55:31 compute-0 systemd[1]: libpod-conmon-c9ff0019b05b1e666571aac6713abad68c5abbf3a08bc63b698bf60cf2df4654.scope: Deactivated successfully.
Sep 30 17:55:31 compute-0 python3.9[229882]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: 
                                             DEPRECATED command:
                                             It is recommended to use Quadlets for running containers and pods under systemd.
                                             
                                             Please refer to podman-systemd.unit(5) for details.
                                             Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
Sep 30 17:55:31 compute-0 sudo[229880]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:32 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f355c004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:32 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3580000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:55:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:55:32.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:55:32 compute-0 sudo[230353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzmynqkrodeivbqiuqcytkaefdzroswf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254932.1760387-218-224106639042680/AnsiballZ_stat.py'
Sep 30 17:55:32 compute-0 sudo[230353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:32 compute-0 python3.9[230355]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:55:32 compute-0 sudo[230353]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:32 compute-0 ceph-mon[73755]: pgmap v493: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:55:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:55:33.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:55:33 compute-0 sudo[230476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhumarrgcljaxwllhencxyqfehxbddle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254932.1760387-218-224106639042680/AnsiballZ_copy.py'
Sep 30 17:55:33 compute-0 sudo[230476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:33 compute-0 python3.9[230478]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759254932.1760387-218-224106639042680/.source.iscsi _original_basename=.uboa8ctd follow=False checksum=5087e1db5827d5565aeaf3868f972b3322a330da backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:55:33 compute-0 sudo[230476]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v494: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:55:33 compute-0 sudo[230630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxticuyxcchhidlfccbbcscyynrrsvpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254933.5737746-248-107847343488019/AnsiballZ_file.py'
Sep 30 17:55:33 compute-0 sudo[230630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:34 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574003cc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:34 compute-0 python3.9[230632]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:55:34 compute-0 sudo[230630]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:34 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3558003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:55:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:55:34.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:55:34 compute-0 python3.9[230782]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:55:34 compute-0 ceph-mon[73755]: pgmap v494: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:55:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:55:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:55:35.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:55:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:55:35 compute-0 sudo[230935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blrldgjkalksmqawybwqdwoukxgecosc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254935.0554793-282-155837395556204/AnsiballZ_lineinfile.py'
Sep 30 17:55:35 compute-0 sudo[230935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v495: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:35 compute-0 python3.9[230937]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:55:35 compute-0 sudo[230935]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:36 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f355c004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:36 compute-0 sudo[231088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpcesyvedmtcbsyowpodiagcfmkcfsij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254936.0359387-300-18070281206160/AnsiballZ_file.py'
Sep 30 17:55:36 compute-0 sudo[231088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:36 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3580000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:55:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:55:36.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:55:36 compute-0 python3.9[231090]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:55:36 compute-0 sudo[231088]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:36 compute-0 ceph-mon[73755]: pgmap v495: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:55:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:55:37.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:55:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:55:37.044Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:55:37 compute-0 sudo[231240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxbulicukzbbuefgjxbwfjwwuqwlnrzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254936.8652382-316-48305073799944/AnsiballZ_stat.py'
Sep 30 17:55:37 compute-0 sudo[231240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:55:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:55:37 compute-0 python3.9[231242]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:55:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:55:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:55:37 compute-0 sudo[231240]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:55:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:55:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:55:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:55:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v496: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:37 compute-0 sudo[231319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdfapxhddwlarscfpuayrpnltylrtvwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254936.8652382-316-48305073799944/AnsiballZ_file.py'
Sep 30 17:55:37 compute-0 sudo[231319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:37 compute-0 python3.9[231321]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:55:37 compute-0 sudo[231319]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:55:38 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Sep 30 17:55:38 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:55:38.022903) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 17:55:38 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Sep 30 17:55:38 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254938022990, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 627, "num_deletes": 251, "total_data_size": 877397, "memory_usage": 888904, "flush_reason": "Manual Compaction"}
Sep 30 17:55:38 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Sep 30 17:55:38 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254938031463, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 866515, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16593, "largest_seqno": 17219, "table_properties": {"data_size": 863146, "index_size": 1277, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7608, "raw_average_key_size": 19, "raw_value_size": 856457, "raw_average_value_size": 2151, "num_data_blocks": 57, "num_entries": 398, "num_filter_entries": 398, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759254890, "oldest_key_time": 1759254890, "file_creation_time": 1759254938, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Sep 30 17:55:38 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 8609 microseconds, and 3706 cpu microseconds.
Sep 30 17:55:38 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 17:55:38 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:55:38.031515) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 866515 bytes OK
Sep 30 17:55:38 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:55:38.031540) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Sep 30 17:55:38 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:55:38.033191) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Sep 30 17:55:38 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:55:38.033205) EVENT_LOG_v1 {"time_micros": 1759254938033200, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 17:55:38 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:55:38.033225) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 17:55:38 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 874065, prev total WAL file size 874065, number of live WAL files 2.
Sep 30 17:55:38 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 17:55:38 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:55:38.033852) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Sep 30 17:55:38 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 17:55:38 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(846KB)], [35(10MB)]
Sep 30 17:55:38 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254938033920, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 12169164, "oldest_snapshot_seqno": -1}
Sep 30 17:55:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:38 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574003cc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:38 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4609 keys, 10124423 bytes, temperature: kUnknown
Sep 30 17:55:38 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254938088256, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 10124423, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10093340, "index_size": 18429, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11589, "raw_key_size": 116124, "raw_average_key_size": 25, "raw_value_size": 10009436, "raw_average_value_size": 2171, "num_data_blocks": 773, "num_entries": 4609, "num_filter_entries": 4609, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759254938, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Sep 30 17:55:38 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 17:55:38 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:55:38.088511) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 10124423 bytes
Sep 30 17:55:38 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:55:38.089929) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 223.6 rd, 186.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 10.8 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(25.7) write-amplify(11.7) OK, records in: 5123, records dropped: 514 output_compression: NoCompression
Sep 30 17:55:38 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:55:38.089952) EVENT_LOG_v1 {"time_micros": 1759254938089939, "job": 16, "event": "compaction_finished", "compaction_time_micros": 54434, "compaction_time_cpu_micros": 21994, "output_level": 6, "num_output_files": 1, "total_output_size": 10124423, "num_input_records": 5123, "num_output_records": 4609, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 17:55:38 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 17:55:38 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254938090210, "job": 16, "event": "table_file_deletion", "file_number": 37}
Sep 30 17:55:38 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 17:55:38 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254938092528, "job": 16, "event": "table_file_deletion", "file_number": 35}
Sep 30 17:55:38 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:55:38.033742) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:55:38 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:55:38.092632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:55:38 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:55:38.092638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:55:38 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:55:38.092640) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:55:38 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:55:38.092642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:55:38 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:55:38.092644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:55:38 compute-0 sudo[231472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaiaebwzaruedkbpvbvadbqjxngdymtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254937.9530308-316-171675867118803/AnsiballZ_stat.py'
Sep 30 17:55:38 compute-0 sudo[231472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:38 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574003cc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:55:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:55:38.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:55:38 compute-0 python3.9[231474]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:55:38 compute-0 sudo[231472]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:38 compute-0 sudo[231550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwlwyuxnmqyisjjucsfiftzwwtjfztuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254937.9530308-316-171675867118803/AnsiballZ_file.py'
Sep 30 17:55:38 compute-0 sudo[231550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:55:38] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:55:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:55:38] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:55:38 compute-0 python3.9[231552]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:55:38 compute-0 sudo[231550]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:39 compute-0 ceph-mon[73755]: pgmap v496: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 17:55:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:55:39.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 17:55:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v497: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:55:39 compute-0 sudo[231704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erfswhxcyrmxbswmwblwnoihrltauzyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254939.4204245-362-8732674859588/AnsiballZ_file.py'
Sep 30 17:55:39 compute-0 sudo[231704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:39 compute-0 python3.9[231706]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:55:39 compute-0 sudo[231704]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:40 compute-0 ceph-mon[73755]: pgmap v497: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:55:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:40 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f355c004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:55:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:40 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f35800091b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:55:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:55:40.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:55:40 compute-0 sudo[231856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fevsccerfxkpiipzponbmztszeqqqgml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254940.1743333-378-200219917081868/AnsiballZ_stat.py'
Sep 30 17:55:40 compute-0 sudo[231856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:40 compute-0 python3.9[231858]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:55:40 compute-0 sudo[231856]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:40 compute-0 sudo[231934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znjccypehuyimxovmkhjegsgwqmnpnls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254940.1743333-378-200219917081868/AnsiballZ_file.py'
Sep 30 17:55:40 compute-0 sudo[231934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:55:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:55:41.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:55:41 compute-0 python3.9[231936]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:55:41 compute-0 sudo[231934]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:41 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Sep 30 17:55:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v498: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:41 compute-0 sudo[232088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfdgynfcdzlvteshkjcvxpcrrukitpsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254941.5213606-402-131085458131583/AnsiballZ_stat.py'
Sep 30 17:55:41 compute-0 sudo[232088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:41 compute-0 python3.9[232090]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:55:42 compute-0 sudo[232088]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:42 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574003cc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:42 compute-0 sudo[232166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glzycqcmtbnyuewhzmsqxctdkhmaorkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254941.5213606-402-131085458131583/AnsiballZ_file.py'
Sep 30 17:55:42 compute-0 sudo[232166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:42 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574003cc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:55:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:55:42.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:55:42 compute-0 python3.9[232168]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:55:42 compute-0 sudo[232166]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:42 compute-0 ceph-mon[73755]: pgmap v498: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:55:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:55:43.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:55:43 compute-0 sudo[232318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yncvnuysejhffbryoexbjkshrzyainnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254942.9197192-426-45004708438911/AnsiballZ_systemd.py'
Sep 30 17:55:43 compute-0 sudo[232318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:43 compute-0 python3.9[232320]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:55:43 compute-0 systemd[1]: Reloading.
Sep 30 17:55:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v499: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:55:43 compute-0 systemd-rc-local-generator[232347]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:55:43 compute-0 systemd-sysv-generator[232351]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:55:43 compute-0 sudo[232318]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:44 compute-0 sudo[232385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:55:44 compute-0 sudo[232385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:55:44 compute-0 sudo[232385]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:44 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f355c004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:44 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574003cc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:55:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:55:44.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:55:44 compute-0 sudo[232535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnmkqrfebxxlacuhtwgycjngwnaexbmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254944.2268293-442-219636965113002/AnsiballZ_stat.py'
Sep 30 17:55:44 compute-0 sudo[232535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:44 compute-0 ceph-mon[73755]: pgmap v499: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:55:44 compute-0 python3.9[232537]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:55:44 compute-0 sudo[232535]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:45 compute-0 sudo[232613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-useaalywyctlgzqojrcxdajnptawpgml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254944.2268293-442-219636965113002/AnsiballZ_file.py'
Sep 30 17:55:45 compute-0 sudo[232613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:55:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:55:45.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:55:45 compute-0 python3.9[232615]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:55:45 compute-0 sudo[232613]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:55:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v500: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:45 compute-0 sudo[232767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faeycaedbcyaeuoqrwqwcppozpzrxxfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254945.5007331-466-144250760958004/AnsiballZ_stat.py'
Sep 30 17:55:45 compute-0 sudo[232767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:45 compute-0 python3.9[232769]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:55:46 compute-0 sudo[232767]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:46 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3558003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:46 compute-0 sudo[232856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvpxsaglzbwuruusldxvgfxrfeykvzqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254945.5007331-466-144250760958004/AnsiballZ_file.py'
Sep 30 17:55:46 compute-0 sudo[232856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:46 compute-0 podman[232819]: 2025-09-30 17:55:46.330238927 +0000 UTC m=+0.110382668 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20250930, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest)
Sep 30 17:55:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:46 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3580009ad0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:55:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:55:46.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:55:46 compute-0 python3.9[232865]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:55:46 compute-0 sudo[232856]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:46 compute-0 ceph-mon[73755]: pgmap v500: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:46 compute-0 sudo[233024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhfxfpjyylyqucqrmjsbqslfbkradgdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254946.7067673-490-134750944022185/AnsiballZ_systemd.py'
Sep 30 17:55:46 compute-0 sudo[233024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:55:47.044Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 17:55:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:55:47.045Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 17:55:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:55:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:55:47.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:55:47 compute-0 python3.9[233026]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:55:47 compute-0 systemd[1]: Reloading.
Sep 30 17:55:47 compute-0 systemd-rc-local-generator[233052]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:55:47 compute-0 systemd-sysv-generator[233057]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:55:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v501: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:47 compute-0 systemd[1]: Starting Create netns directory...
Sep 30 17:55:47 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Sep 30 17:55:47 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Sep 30 17:55:47 compute-0 systemd[1]: Finished Create netns directory.
Sep 30 17:55:47 compute-0 sudo[233024]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:48 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f355c004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:48 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574003cc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.003000078s ======
Sep 30 17:55:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:55:48.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000078s
Sep 30 17:55:48 compute-0 sudo[233219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iizirmzmpikrdsintmllyzmwfdizjfjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254948.2509854-510-75995530634552/AnsiballZ_file.py'
Sep 30 17:55:48 compute-0 sudo[233219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:48 compute-0 python3.9[233221]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:55:48 compute-0 sudo[233219]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:55:48] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:55:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:55:48] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:55:48 compute-0 ceph-mon[73755]: pgmap v501: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.002000052s ======
Sep 30 17:55:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:55:49.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Sep 30 17:55:49 compute-0 sudo[233371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elhcqswfcxigaypumrlymxxswnvyqlfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254948.9842908-526-223518731770741/AnsiballZ_stat.py'
Sep 30 17:55:49 compute-0 sudo[233371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:49 compute-0 python3.9[233373]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:55:49 compute-0 sudo[233371]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v502: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:55:49 compute-0 sudo[233496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spapbblwcbdvofkdfsfibxxqayesaupf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254948.9842908-526-223518731770741/AnsiballZ_copy.py'
Sep 30 17:55:49 compute-0 sudo[233496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:50 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3558003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:50 compute-0 python3.9[233498]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759254948.9842908-526-223518731770741/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:55:50 compute-0 sudo[233496]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:55:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:50 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3580009ad0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:55:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:55:50.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:55:50 compute-0 sudo[233649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huodizifongwgussnntqkmwirelgeavu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254950.4699204-560-101453716222518/AnsiballZ_file.py'
Sep 30 17:55:50 compute-0 sudo[233649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:50 compute-0 ceph-mon[73755]: pgmap v502: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:55:51 compute-0 python3.9[233651]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:55:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:55:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:55:51.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:55:51 compute-0 sudo[233649]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v503: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:51 compute-0 sudo[233802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcmqlgufzqwxduojaipausyycldyhenp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254951.2645411-576-19058266770349/AnsiballZ_stat.py'
Sep 30 17:55:51 compute-0 sudo[233802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:51 compute-0 python3.9[233804]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:55:51 compute-0 sudo[233802]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:52 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3550002690 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:52 compute-0 ceph-mon[73755]: pgmap v503: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:52 compute-0 sudo[233927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxusbsobmsxpyjuslvuheyhlrygdhcfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254951.2645411-576-19058266770349/AnsiballZ_copy.py'
Sep 30 17:55:52 compute-0 sudo[233927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:55:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:55:52 compute-0 python3.9[233929]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759254951.2645411-576-19058266770349/.source.json _original_basename=.g3omapo6 follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:55:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:52 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574003cc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:52 compute-0 sudo[233927]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:55:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:55:52.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:55:52 compute-0 sudo[234079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbhfpkvqauprsupdfisrwdcpremfflrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254952.6018145-606-2700053781913/AnsiballZ_file.py'
Sep 30 17:55:52 compute-0 sudo[234079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:55:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:55:53.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:55:53 compute-0 python3.9[234081]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:55:53 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:55:53 compute-0 sudo[234079]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v504: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:55:53 compute-0 sudo[234233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynbowsufkrkhmidmofemiydqwwzxqyre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254953.3925543-622-180096114534337/AnsiballZ_stat.py'
Sep 30 17:55:53 compute-0 sudo[234233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:53 compute-0 sudo[234233]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:54 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3558003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:54 compute-0 ceph-mon[73755]: pgmap v504: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:55:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:55:54.245 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 17:55:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:55:54.245 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 17:55:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:55:54.245 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 17:55:54 compute-0 sudo[234357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqqrurgdhpqasipdnyrecfbbwmlwioii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254953.3925543-622-180096114534337/AnsiballZ_copy.py'
Sep 30 17:55:54 compute-0 sudo[234357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:54 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3558003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:55:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:55:54.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:55:54 compute-0 sudo[234357]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:55:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:55:55.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:55:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:55:55 compute-0 sudo[234476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:55:55 compute-0 sudo[234476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:55:55 compute-0 sudo[234476]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:55 compute-0 sudo[234541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iftwljitjvfaasnrmteucqgfdqlnjaip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254954.9579215-656-4965740128779/AnsiballZ_container_config_data.py'
Sep 30 17:55:55 compute-0 sudo[234541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:55 compute-0 sudo[234530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 17:55:55 compute-0 sudo[234530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:55:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v505: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:55 compute-0 python3.9[234560]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Sep 30 17:55:55 compute-0 sudo[234541]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:56 compute-0 sudo[234530]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:56 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3558003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:55:56 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:55:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 17:55:56 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:55:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 17:55:56 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:55:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 17:55:56 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:55:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 17:55:56 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:55:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 17:55:56 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:55:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:55:56 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:55:56 compute-0 sudo[234694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:55:56 compute-0 sudo[234694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:55:56 compute-0 sudo[234694]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:56 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574003cc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:56 compute-0 sudo[234743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 17:55:56 compute-0 sudo[234743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:55:56 compute-0 sudo[234793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvfyvztuvxoyipgdegiyhqpccmrcubfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254955.9465375-674-280605062509795/AnsiballZ_container_config_hash.py'
Sep 30 17:55:56 compute-0 sudo[234793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 17:55:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:55:56.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 17:55:56 compute-0 python3.9[234796]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Sep 30 17:55:56 compute-0 ceph-mon[73755]: pgmap v505: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:56 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:55:56 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:55:56 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:55:56 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:55:56 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:55:56 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:55:56 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:55:56 compute-0 sudo[234793]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:56 compute-0 podman[234864]: 2025-09-30 17:55:56.826584519 +0000 UTC m=+0.057873810 container create 8d5811f42142ca269873a18a2c7925511d1fb9b3e29b9d357000983307535348 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_kirch, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:55:56 compute-0 systemd[1]: Started libpod-conmon-8d5811f42142ca269873a18a2c7925511d1fb9b3e29b9d357000983307535348.scope.
Sep 30 17:55:56 compute-0 podman[234864]: 2025-09-30 17:55:56.794486972 +0000 UTC m=+0.025776293 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:55:56 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:55:56 compute-0 podman[234864]: 2025-09-30 17:55:56.93023807 +0000 UTC m=+0.161527381 container init 8d5811f42142ca269873a18a2c7925511d1fb9b3e29b9d357000983307535348 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_kirch, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Sep 30 17:55:56 compute-0 podman[234864]: 2025-09-30 17:55:56.940918529 +0000 UTC m=+0.172207810 container start 8d5811f42142ca269873a18a2c7925511d1fb9b3e29b9d357000983307535348 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_kirch, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:55:56 compute-0 youthful_kirch[234880]: 167 167
Sep 30 17:55:56 compute-0 systemd[1]: libpod-8d5811f42142ca269873a18a2c7925511d1fb9b3e29b9d357000983307535348.scope: Deactivated successfully.
Sep 30 17:55:56 compute-0 podman[234864]: 2025-09-30 17:55:56.949929814 +0000 UTC m=+0.181219125 container attach 8d5811f42142ca269873a18a2c7925511d1fb9b3e29b9d357000983307535348 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_kirch, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:55:56 compute-0 podman[234864]: 2025-09-30 17:55:56.950971341 +0000 UTC m=+0.182260632 container died 8d5811f42142ca269873a18a2c7925511d1fb9b3e29b9d357000983307535348 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_kirch, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:55:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cdc02350108c01246d0d44e3d61956feceb5bb72dbeb73a12341b0cb975618b-merged.mount: Deactivated successfully.
Sep 30 17:55:57 compute-0 podman[234864]: 2025-09-30 17:55:57.020595455 +0000 UTC m=+0.251884726 container remove 8d5811f42142ca269873a18a2c7925511d1fb9b3e29b9d357000983307535348 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_kirch, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:55:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:55:57.046Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:55:57 compute-0 systemd[1]: libpod-conmon-8d5811f42142ca269873a18a2c7925511d1fb9b3e29b9d357000983307535348.scope: Deactivated successfully.
Sep 30 17:55:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:55:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:55:57.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:55:57 compute-0 podman[234956]: 2025-09-30 17:55:57.222918019 +0000 UTC m=+0.060192400 container create f7b0c6d208ea60f34af76ab1a431cf4c5778839078b2d380737b4761da87a016 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:55:57 compute-0 systemd[1]: Started libpod-conmon-f7b0c6d208ea60f34af76ab1a431cf4c5778839078b2d380737b4761da87a016.scope.
Sep 30 17:55:57 compute-0 podman[234956]: 2025-09-30 17:55:57.19076064 +0000 UTC m=+0.028035041 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:55:57 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:55:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2b4a99014bc80ab36a4e15362925355c864b43f76a71eb8ea398793ca785079/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:55:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2b4a99014bc80ab36a4e15362925355c864b43f76a71eb8ea398793ca785079/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:55:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2b4a99014bc80ab36a4e15362925355c864b43f76a71eb8ea398793ca785079/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:55:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2b4a99014bc80ab36a4e15362925355c864b43f76a71eb8ea398793ca785079/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:55:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2b4a99014bc80ab36a4e15362925355c864b43f76a71eb8ea398793ca785079/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:55:57 compute-0 podman[234956]: 2025-09-30 17:55:57.352700141 +0000 UTC m=+0.189974532 container init f7b0c6d208ea60f34af76ab1a431cf4c5778839078b2d380737b4761da87a016 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:55:57 compute-0 podman[234956]: 2025-09-30 17:55:57.361626304 +0000 UTC m=+0.198900675 container start f7b0c6d208ea60f34af76ab1a431cf4c5778839078b2d380737b4761da87a016 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_lehmann, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Sep 30 17:55:57 compute-0 podman[234956]: 2025-09-30 17:55:57.387474907 +0000 UTC m=+0.224749278 container attach f7b0c6d208ea60f34af76ab1a431cf4c5778839078b2d380737b4761da87a016 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Sep 30 17:55:57 compute-0 sudo[235051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybvcjuziiqnmdbelshbjclyedoxhrfrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254956.9641376-692-82670540331192/AnsiballZ_podman_container_info.py'
Sep 30 17:55:57 compute-0 sudo[235051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v506: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:57 compute-0 python3.9[235053]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Sep 30 17:55:57 compute-0 goofy_lehmann[234990]: --> passed data devices: 0 physical, 1 LVM
Sep 30 17:55:57 compute-0 goofy_lehmann[234990]: --> All data devices are unavailable
Sep 30 17:55:57 compute-0 systemd[1]: libpod-f7b0c6d208ea60f34af76ab1a431cf4c5778839078b2d380737b4761da87a016.scope: Deactivated successfully.
Sep 30 17:55:57 compute-0 podman[235078]: 2025-09-30 17:55:57.794960628 +0000 UTC m=+0.059970164 container died f7b0c6d208ea60f34af76ab1a431cf4c5778839078b2d380737b4761da87a016 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_lehmann, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:55:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2b4a99014bc80ab36a4e15362925355c864b43f76a71eb8ea398793ca785079-merged.mount: Deactivated successfully.
Sep 30 17:55:57 compute-0 podman[235092]: 2025-09-30 17:55:57.875762853 +0000 UTC m=+0.092132462 container remove f7b0c6d208ea60f34af76ab1a431cf4c5778839078b2d380737b4761da87a016 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_lehmann, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Sep 30 17:55:57 compute-0 systemd[1]: libpod-conmon-f7b0c6d208ea60f34af76ab1a431cf4c5778839078b2d380737b4761da87a016.scope: Deactivated successfully.
Sep 30 17:55:57 compute-0 sudo[235051]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:57 compute-0 sudo[234743]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:57 compute-0 sudo[235111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:55:57 compute-0 sudo[235111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:55:58 compute-0 sudo[235111]: pam_unix(sudo:session): session closed for user root
Sep 30 17:55:58 compute-0 sudo[235156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 17:55:58 compute-0 sudo[235156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:55:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:58 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3570000f70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:55:58 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3550002690 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:55:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:55:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:55:58.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:55:58 compute-0 podman[235222]: 2025-09-30 17:55:58.504748937 +0000 UTC m=+0.087737888 container create 086cf6b8d2106e697b85f3a1902c75f4c5ed1db34f2e81e4058e26242d060267 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_gauss, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default)
Sep 30 17:55:58 compute-0 podman[235222]: 2025-09-30 17:55:58.443590973 +0000 UTC m=+0.026579944 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:55:58 compute-0 systemd[1]: Started libpod-conmon-086cf6b8d2106e697b85f3a1902c75f4c5ed1db34f2e81e4058e26242d060267.scope.
Sep 30 17:55:58 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:55:58 compute-0 podman[235222]: 2025-09-30 17:55:58.769086736 +0000 UTC m=+0.352075697 container init 086cf6b8d2106e697b85f3a1902c75f4c5ed1db34f2e81e4058e26242d060267 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Sep 30 17:55:58 compute-0 ceph-mon[73755]: pgmap v506: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:55:58 compute-0 podman[235222]: 2025-09-30 17:55:58.780574575 +0000 UTC m=+0.363563526 container start 086cf6b8d2106e697b85f3a1902c75f4c5ed1db34f2e81e4058e26242d060267 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:55:58 compute-0 friendly_gauss[235240]: 167 167
Sep 30 17:55:58 compute-0 systemd[1]: libpod-086cf6b8d2106e697b85f3a1902c75f4c5ed1db34f2e81e4058e26242d060267.scope: Deactivated successfully.
Sep 30 17:55:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:55:58] "GET /metrics HTTP/1.1" 200 46515 "" "Prometheus/2.51.0"
Sep 30 17:55:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:55:58] "GET /metrics HTTP/1.1" 200 46515 "" "Prometheus/2.51.0"
Sep 30 17:55:58 compute-0 podman[235222]: 2025-09-30 17:55:58.822540199 +0000 UTC m=+0.405529170 container attach 086cf6b8d2106e697b85f3a1902c75f4c5ed1db34f2e81e4058e26242d060267 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:55:58 compute-0 podman[235222]: 2025-09-30 17:55:58.824035668 +0000 UTC m=+0.407024609 container died 086cf6b8d2106e697b85f3a1902c75f4c5ed1db34f2e81e4058e26242d060267 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Sep 30 17:55:58 compute-0 podman[235239]: 2025-09-30 17:55:58.8663069 +0000 UTC m=+0.240882779 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Sep 30 17:55:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-5143b08d6aafbb810c4ac2accd56139f42c3691ab18df4c206f1aeb4d60ba7bc-merged.mount: Deactivated successfully.
Sep 30 17:55:58 compute-0 podman[235222]: 2025-09-30 17:55:58.984387036 +0000 UTC m=+0.567375987 container remove 086cf6b8d2106e697b85f3a1902c75f4c5ed1db34f2e81e4058e26242d060267 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:55:58 compute-0 systemd[1]: libpod-conmon-086cf6b8d2106e697b85f3a1902c75f4c5ed1db34f2e81e4058e26242d060267.scope: Deactivated successfully.
Sep 30 17:55:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:55:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:55:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:55:59.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:55:59 compute-0 podman[235335]: 2025-09-30 17:55:59.181648317 +0000 UTC m=+0.059042799 container create 695f132c75ab0072262421e5b39fda894857aae5894647a51fb4a332cacbe4d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:55:59 compute-0 systemd[1]: Started libpod-conmon-695f132c75ab0072262421e5b39fda894857aae5894647a51fb4a332cacbe4d0.scope.
Sep 30 17:55:59 compute-0 podman[235335]: 2025-09-30 17:55:59.153649918 +0000 UTC m=+0.031044430 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:55:59 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:55:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3b25cf429e9bb3f5256c1c9f124ea436d8519b9c0365679a2ce9fb5e79772bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:55:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3b25cf429e9bb3f5256c1c9f124ea436d8519b9c0365679a2ce9fb5e79772bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:55:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3b25cf429e9bb3f5256c1c9f124ea436d8519b9c0365679a2ce9fb5e79772bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:55:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3b25cf429e9bb3f5256c1c9f124ea436d8519b9c0365679a2ce9fb5e79772bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:55:59 compute-0 podman[235335]: 2025-09-30 17:55:59.306168833 +0000 UTC m=+0.183563345 container init 695f132c75ab0072262421e5b39fda894857aae5894647a51fb4a332cacbe4d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_engelbart, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Sep 30 17:55:59 compute-0 podman[235335]: 2025-09-30 17:55:59.316368319 +0000 UTC m=+0.193762801 container start 695f132c75ab0072262421e5b39fda894857aae5894647a51fb4a332cacbe4d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_engelbart, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1)
Sep 30 17:55:59 compute-0 sudo[235428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umtzlftejjbknvbefveduyfmgzfyyvky ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759254958.8119493-718-247835285008767/AnsiballZ_edpm_container_manage.py'
Sep 30 17:55:59 compute-0 sudo[235428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:55:59 compute-0 podman[235335]: 2025-09-30 17:55:59.346092923 +0000 UTC m=+0.223487425 container attach 695f132c75ab0072262421e5b39fda894857aae5894647a51fb4a332cacbe4d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_engelbart, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:55:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v507: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:55:59 compute-0 python3[235432]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Sep 30 17:55:59 compute-0 intelligent_engelbart[235399]: {
Sep 30 17:55:59 compute-0 intelligent_engelbart[235399]:     "0": [
Sep 30 17:55:59 compute-0 intelligent_engelbart[235399]:         {
Sep 30 17:55:59 compute-0 intelligent_engelbart[235399]:             "devices": [
Sep 30 17:55:59 compute-0 intelligent_engelbart[235399]:                 "/dev/loop3"
Sep 30 17:55:59 compute-0 intelligent_engelbart[235399]:             ],
Sep 30 17:55:59 compute-0 intelligent_engelbart[235399]:             "lv_name": "ceph_lv0",
Sep 30 17:55:59 compute-0 intelligent_engelbart[235399]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:55:59 compute-0 intelligent_engelbart[235399]:             "lv_size": "21470642176",
Sep 30 17:55:59 compute-0 intelligent_engelbart[235399]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 17:55:59 compute-0 intelligent_engelbart[235399]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:55:59 compute-0 intelligent_engelbart[235399]:             "name": "ceph_lv0",
Sep 30 17:55:59 compute-0 intelligent_engelbart[235399]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:55:59 compute-0 intelligent_engelbart[235399]:             "tags": {
Sep 30 17:55:59 compute-0 intelligent_engelbart[235399]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:55:59 compute-0 intelligent_engelbart[235399]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:55:59 compute-0 intelligent_engelbart[235399]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 17:55:59 compute-0 intelligent_engelbart[235399]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 17:55:59 compute-0 intelligent_engelbart[235399]:                 "ceph.cluster_name": "ceph",
Sep 30 17:55:59 compute-0 intelligent_engelbart[235399]:                 "ceph.crush_device_class": "",
Sep 30 17:55:59 compute-0 intelligent_engelbart[235399]:                 "ceph.encrypted": "0",
Sep 30 17:55:59 compute-0 intelligent_engelbart[235399]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 17:55:59 compute-0 intelligent_engelbart[235399]:                 "ceph.osd_id": "0",
Sep 30 17:55:59 compute-0 intelligent_engelbart[235399]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 17:55:59 compute-0 intelligent_engelbart[235399]:                 "ceph.type": "block",
Sep 30 17:55:59 compute-0 intelligent_engelbart[235399]:                 "ceph.vdo": "0",
Sep 30 17:55:59 compute-0 intelligent_engelbart[235399]:                 "ceph.with_tpm": "0"
Sep 30 17:55:59 compute-0 intelligent_engelbart[235399]:             },
Sep 30 17:55:59 compute-0 intelligent_engelbart[235399]:             "type": "block",
Sep 30 17:55:59 compute-0 intelligent_engelbart[235399]:             "vg_name": "ceph_vg0"
Sep 30 17:55:59 compute-0 intelligent_engelbart[235399]:         }
Sep 30 17:55:59 compute-0 intelligent_engelbart[235399]:     ]
Sep 30 17:55:59 compute-0 intelligent_engelbart[235399]: }
Sep 30 17:55:59 compute-0 systemd[1]: libpod-695f132c75ab0072262421e5b39fda894857aae5894647a51fb4a332cacbe4d0.scope: Deactivated successfully.
Sep 30 17:55:59 compute-0 podman[235335]: 2025-09-30 17:55:59.648837104 +0000 UTC m=+0.526231596 container died 695f132c75ab0072262421e5b39fda894857aae5894647a51fb4a332cacbe4d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:55:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3b25cf429e9bb3f5256c1c9f124ea436d8519b9c0365679a2ce9fb5e79772bc-merged.mount: Deactivated successfully.
Sep 30 17:55:59 compute-0 podman[235335]: 2025-09-30 17:55:59.949262634 +0000 UTC m=+0.826657116 container remove 695f132c75ab0072262421e5b39fda894857aae5894647a51fb4a332cacbe4d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True)
Sep 30 17:55:59 compute-0 systemd[1]: libpod-conmon-695f132c75ab0072262421e5b39fda894857aae5894647a51fb4a332cacbe4d0.scope: Deactivated successfully.
Sep 30 17:56:00 compute-0 sudo[235156]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:56:00 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3558003c30 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:56:00 compute-0 sudo[235494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:56:00 compute-0 sudo[235494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:56:00 compute-0 sudo[235494]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:00 compute-0 podman[235488]: 2025-09-30 17:56:00.046068967 +0000 UTC m=+0.026021159 image pull f8ff303843ab104c2f5f56920f311c0b22efd49dc54152d8e2ede3a7218e9091 38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest
Sep 30 17:56:00 compute-0 sudo[235526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 17:56:00 compute-0 sudo[235526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:56:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:56:00 compute-0 podman[235488]: 2025-09-30 17:56:00.34966888 +0000 UTC m=+0.329621062 container create 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=watcher_latest, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid)
Sep 30 17:56:00 compute-0 python3[235432]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z 38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest
Sep 30 17:56:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:56:00 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574003ce0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:56:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:56:00.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:00 compute-0 sudo[235428]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:00 compute-0 podman[235636]: 2025-09-30 17:56:00.578701849 +0000 UTC m=+0.028788181 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:56:00 compute-0 podman[235636]: 2025-09-30 17:56:00.683033808 +0000 UTC m=+0.133120110 container create 0872450d2cc2b0c02cac40f8af75981704cc04e102cc9e5c4c63d5f97b903a07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_haslett, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Sep 30 17:56:00 compute-0 systemd[1]: Started libpod-conmon-0872450d2cc2b0c02cac40f8af75981704cc04e102cc9e5c4c63d5f97b903a07.scope.
Sep 30 17:56:00 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:56:00 compute-0 podman[235636]: 2025-09-30 17:56:00.865930005 +0000 UTC m=+0.316016327 container init 0872450d2cc2b0c02cac40f8af75981704cc04e102cc9e5c4c63d5f97b903a07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_haslett, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:56:00 compute-0 podman[235636]: 2025-09-30 17:56:00.874023716 +0000 UTC m=+0.324110018 container start 0872450d2cc2b0c02cac40f8af75981704cc04e102cc9e5c4c63d5f97b903a07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True)
Sep 30 17:56:00 compute-0 wizardly_haslett[235711]: 167 167
Sep 30 17:56:00 compute-0 systemd[1]: libpod-0872450d2cc2b0c02cac40f8af75981704cc04e102cc9e5c4c63d5f97b903a07.scope: Deactivated successfully.
Sep 30 17:56:00 compute-0 ceph-mon[73755]: pgmap v507: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:56:00 compute-0 podman[235636]: 2025-09-30 17:56:00.938988649 +0000 UTC m=+0.389074981 container attach 0872450d2cc2b0c02cac40f8af75981704cc04e102cc9e5c4c63d5f97b903a07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_haslett, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Sep 30 17:56:00 compute-0 podman[235636]: 2025-09-30 17:56:00.940739745 +0000 UTC m=+0.390826067 container died 0872450d2cc2b0c02cac40f8af75981704cc04e102cc9e5c4c63d5f97b903a07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_haslett, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:56:00 compute-0 sudo[235800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqndiorsulekicetxcuzihsjidgapbbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254960.6529007-734-43494437906583/AnsiballZ_stat.py'
Sep 30 17:56:00 compute-0 sudo[235800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:56:01.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-321f4e3c3f3cc77bcabcbad1c5293c8631c2c1d66f608600321285d05273372d-merged.mount: Deactivated successfully.
Sep 30 17:56:01 compute-0 python3.9[235802]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:56:01 compute-0 sudo[235800]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:01 compute-0 podman[235636]: 2025-09-30 17:56:01.250496388 +0000 UTC m=+0.700582680 container remove 0872450d2cc2b0c02cac40f8af75981704cc04e102cc9e5c4c63d5f97b903a07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Sep 30 17:56:01 compute-0 systemd[1]: libpod-conmon-0872450d2cc2b0c02cac40f8af75981704cc04e102cc9e5c4c63d5f97b903a07.scope: Deactivated successfully.
Sep 30 17:56:01 compute-0 podman[235840]: 2025-09-30 17:56:01.440785357 +0000 UTC m=+0.052786476 container create 2804e26e4e3f4f89461972900f05b8725969568ce17d340133ec23d3ae90bbb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_galois, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:56:01 compute-0 systemd[1]: Started libpod-conmon-2804e26e4e3f4f89461972900f05b8725969568ce17d340133ec23d3ae90bbb4.scope.
Sep 30 17:56:01 compute-0 podman[235840]: 2025-09-30 17:56:01.415267242 +0000 UTC m=+0.027268411 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:56:01 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:56:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3833d23b91081cfeb2f06ba30b2c7fb6e0d8ec4611b5f857c8d1511a3ad995c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:56:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3833d23b91081cfeb2f06ba30b2c7fb6e0d8ec4611b5f857c8d1511a3ad995c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:56:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3833d23b91081cfeb2f06ba30b2c7fb6e0d8ec4611b5f857c8d1511a3ad995c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:56:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3833d23b91081cfeb2f06ba30b2c7fb6e0d8ec4611b5f857c8d1511a3ad995c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:56:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v508: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:56:01 compute-0 podman[235840]: 2025-09-30 17:56:01.553450114 +0000 UTC m=+0.165451253 container init 2804e26e4e3f4f89461972900f05b8725969568ce17d340133ec23d3ae90bbb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Sep 30 17:56:01 compute-0 podman[235840]: 2025-09-30 17:56:01.562770337 +0000 UTC m=+0.174771456 container start 2804e26e4e3f4f89461972900f05b8725969568ce17d340133ec23d3ae90bbb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_galois, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Sep 30 17:56:01 compute-0 podman[235840]: 2025-09-30 17:56:01.580263743 +0000 UTC m=+0.192264892 container attach 2804e26e4e3f4f89461972900f05b8725969568ce17d340133ec23d3ae90bbb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_galois, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid)
Sep 30 17:56:01 compute-0 sudo[235993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmgoluifnihillbmcmklgpbiibrkbwmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254961.5027943-752-145521639657930/AnsiballZ_file.py'
Sep 30 17:56:01 compute-0 sudo[235993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:02 compute-0 python3.9[235999]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:56:02 compute-0 sudo[235993]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:56:02 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3570001a70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:56:02 compute-0 sudo[236131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tulottcmfspsyrzjtuznookmojdsselx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254961.5027943-752-145521639657930/AnsiballZ_stat.py'
Sep 30 17:56:02 compute-0 sudo[236131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:02 compute-0 lvm[236135]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 17:56:02 compute-0 lvm[236135]: VG ceph_vg0 finished
Sep 30 17:56:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:56:02 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3550002690 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:56:02 compute-0 unruffled_galois[235879]: {}
Sep 30 17:56:02 compute-0 systemd[1]: libpod-2804e26e4e3f4f89461972900f05b8725969568ce17d340133ec23d3ae90bbb4.scope: Deactivated successfully.
Sep 30 17:56:02 compute-0 systemd[1]: libpod-2804e26e4e3f4f89461972900f05b8725969568ce17d340133ec23d3ae90bbb4.scope: Consumed 1.304s CPU time.
Sep 30 17:56:02 compute-0 podman[235840]: 2025-09-30 17:56:02.434611789 +0000 UTC m=+1.046612908 container died 2804e26e4e3f4f89461972900f05b8725969568ce17d340133ec23d3ae90bbb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_galois, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Sep 30 17:56:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:56:02.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:02 compute-0 python3.9[236133]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:56:02 compute-0 sudo[236131]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3833d23b91081cfeb2f06ba30b2c7fb6e0d8ec4611b5f857c8d1511a3ad995c-merged.mount: Deactivated successfully.
Sep 30 17:56:02 compute-0 podman[235840]: 2025-09-30 17:56:02.552705456 +0000 UTC m=+1.164706575 container remove 2804e26e4e3f4f89461972900f05b8725969568ce17d340133ec23d3ae90bbb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_galois, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Sep 30 17:56:02 compute-0 systemd[1]: libpod-conmon-2804e26e4e3f4f89461972900f05b8725969568ce17d340133ec23d3ae90bbb4.scope: Deactivated successfully.
Sep 30 17:56:02 compute-0 sudo[235526]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:56:02 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:56:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:56:02 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:56:02 compute-0 sudo[236201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 17:56:02 compute-0 sudo[236201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:56:02 compute-0 sudo[236201]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:02 compute-0 ceph-mon[73755]: pgmap v508: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:56:02 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:56:02 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:56:03 compute-0 sudo[236326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gevmcyxutrrenavjbqczzxcvhfhgexir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254962.5878017-752-226009064671069/AnsiballZ_copy.py'
Sep 30 17:56:03 compute-0 sudo[236326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:56:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:56:03.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:56:03 compute-0 python3.9[236328]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759254962.5878017-752-226009064671069/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:56:03 compute-0 sudo[236326]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:03 compute-0 sudo[236403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulodrtvtxvzuzolgglvfcfvbzadvaeon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254962.5878017-752-226009064671069/AnsiballZ_systemd.py'
Sep 30 17:56:03 compute-0 sudo[236403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v509: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:56:03 compute-0 python3.9[236405]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Sep 30 17:56:03 compute-0 systemd[1]: Reloading.
Sep 30 17:56:04 compute-0 systemd-rc-local-generator[236434]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:56:04 compute-0 systemd-sysv-generator[236437]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:56:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:56:04 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3558003c50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:56:04 compute-0 sudo[236442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:56:04 compute-0 sudo[236442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:56:04 compute-0 sudo[236442]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:04 compute-0 sudo[236403]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:56:04 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3558003c50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:56:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:56:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:56:04.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:56:04 compute-0 sudo[236540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lktftpabbrofglzvdpxwxauyegdtgvej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254962.5878017-752-226009064671069/AnsiballZ_systemd.py'
Sep 30 17:56:04 compute-0 sudo[236540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:04 compute-0 python3.9[236542]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:56:04 compute-0 systemd[1]: Reloading.
Sep 30 17:56:04 compute-0 ceph-mon[73755]: pgmap v509: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:56:05 compute-0 systemd-rc-local-generator[236572]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:56:05 compute-0 systemd-sysv-generator[236575]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:56:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:56:05.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:05 compute-0 systemd[1]: Starting iscsid container...
Sep 30 17:56:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:56:05 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:56:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d91419ac294593413209e145c6b7ea5894c9ac12a2d45078486c17f9a54ae3a/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Sep 30 17:56:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d91419ac294593413209e145c6b7ea5894c9ac12a2d45078486c17f9a54ae3a/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Sep 30 17:56:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d91419ac294593413209e145c6b7ea5894c9ac12a2d45078486c17f9a54ae3a/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Sep 30 17:56:05 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20.
Sep 30 17:56:05 compute-0 podman[236583]: 2025-09-30 17:56:05.450613474 +0000 UTC m=+0.156333346 container init 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, container_name=iscsid, io.buildah.version=1.41.4)
Sep 30 17:56:05 compute-0 iscsid[236598]: + sudo -E kolla_set_configs
Sep 30 17:56:05 compute-0 podman[236583]: 2025-09-30 17:56:05.475554184 +0000 UTC m=+0.181274036 container start 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20250930, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Sep 30 17:56:05 compute-0 podman[236583]: iscsid
Sep 30 17:56:05 compute-0 sudo[236604]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Sep 30 17:56:05 compute-0 systemd[1]: Started iscsid container.
Sep 30 17:56:05 compute-0 systemd[1]: Created slice User Slice of UID 0.
Sep 30 17:56:05 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Sep 30 17:56:05 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Sep 30 17:56:05 compute-0 sudo[236540]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:05 compute-0 systemd[1]: Starting User Manager for UID 0...
Sep 30 17:56:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v510: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:56:05 compute-0 systemd[236623]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Sep 30 17:56:05 compute-0 podman[236605]: 2025-09-30 17:56:05.569312458 +0000 UTC m=+0.077827040 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250930)
Sep 30 17:56:05 compute-0 systemd[1]: 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20-52a2f34eda1314ba.service: Main process exited, code=exited, status=1/FAILURE
Sep 30 17:56:05 compute-0 systemd[1]: 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20-52a2f34eda1314ba.service: Failed with result 'exit-code'.
Sep 30 17:56:05 compute-0 systemd[236623]: Queued start job for default target Main User Target.
Sep 30 17:56:05 compute-0 systemd[236623]: Created slice User Application Slice.
Sep 30 17:56:05 compute-0 systemd[236623]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Sep 30 17:56:05 compute-0 systemd[236623]: Started Daily Cleanup of User's Temporary Directories.
Sep 30 17:56:05 compute-0 systemd[236623]: Reached target Paths.
Sep 30 17:56:05 compute-0 systemd[236623]: Reached target Timers.
Sep 30 17:56:05 compute-0 systemd[236623]: Starting D-Bus User Message Bus Socket...
Sep 30 17:56:05 compute-0 systemd[236623]: Starting Create User's Volatile Files and Directories...
Sep 30 17:56:05 compute-0 systemd[236623]: Finished Create User's Volatile Files and Directories.
Sep 30 17:56:05 compute-0 systemd[236623]: Listening on D-Bus User Message Bus Socket.
Sep 30 17:56:05 compute-0 systemd[236623]: Reached target Sockets.
Sep 30 17:56:05 compute-0 systemd[236623]: Reached target Basic System.
Sep 30 17:56:05 compute-0 systemd[236623]: Reached target Main User Target.
Sep 30 17:56:05 compute-0 systemd[236623]: Startup finished in 154ms.
Sep 30 17:56:05 compute-0 systemd[1]: Started User Manager for UID 0.
Sep 30 17:56:05 compute-0 systemd[1]: Started Session c3 of User root.
Sep 30 17:56:05 compute-0 sudo[236604]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Sep 30 17:56:05 compute-0 iscsid[236598]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Sep 30 17:56:05 compute-0 iscsid[236598]: INFO:__main__:Validating config file
Sep 30 17:56:05 compute-0 iscsid[236598]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Sep 30 17:56:05 compute-0 iscsid[236598]: INFO:__main__:Writing out command to execute
Sep 30 17:56:05 compute-0 sudo[236604]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:05 compute-0 iscsid[236598]: ++ cat /run_command
Sep 30 17:56:05 compute-0 systemd[1]: session-c3.scope: Deactivated successfully.
Sep 30 17:56:05 compute-0 iscsid[236598]: + CMD='/usr/sbin/iscsid -f'
Sep 30 17:56:05 compute-0 iscsid[236598]: + ARGS=
Sep 30 17:56:05 compute-0 iscsid[236598]: + sudo kolla_copy_cacerts
Sep 30 17:56:05 compute-0 sudo[236669]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Sep 30 17:56:05 compute-0 systemd[1]: Started Session c4 of User root.
Sep 30 17:56:05 compute-0 sudo[236669]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Sep 30 17:56:05 compute-0 sudo[236669]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:05 compute-0 systemd[1]: session-c4.scope: Deactivated successfully.
Sep 30 17:56:05 compute-0 iscsid[236598]: + [[ ! -n '' ]]
Sep 30 17:56:05 compute-0 iscsid[236598]: + . kolla_extend_start
Sep 30 17:56:05 compute-0 iscsid[236598]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Sep 30 17:56:05 compute-0 iscsid[236598]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Sep 30 17:56:05 compute-0 iscsid[236598]: Running command: '/usr/sbin/iscsid -f'
Sep 30 17:56:05 compute-0 iscsid[236598]: + umask 0022
Sep 30 17:56:05 compute-0 iscsid[236598]: + exec /usr/sbin/iscsid -f
Sep 30 17:56:05 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Sep 30 17:56:06 compute-0 ceph-mon[73755]: pgmap v510: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:56:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:56:06 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3570001a70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:56:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:56:06 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574003d20 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:56:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:56:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:56:06.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:56:06 compute-0 sshd-session[236546]: Invalid user cuser from 45.252.249.158 port 37996
Sep 30 17:56:06 compute-0 sshd-session[236546]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 17:56:06 compute-0 sshd-session[236546]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158
Sep 30 17:56:06 compute-0 python3.9[236804]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:56:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:56:07.048Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:56:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:56:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:56:07.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:56:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:56:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:56:07 compute-0 sudo[236954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjmwsqoeeavpzslemfgijqzejlvzhwnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254967.0411403-826-123340893216222/AnsiballZ_file.py'
Sep 30 17:56:07 compute-0 sudo[236954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:56:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_17:56:07
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'images', '.nfs', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'volumes', '.rgw.root', 'backups']
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:56:07 compute-0 python3.9[236957]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:56:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v511: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:56:07 compute-0 sudo[236954]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:56:08 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3550002690 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:56:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:56:08 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3558003c50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:56:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:56:08.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:08 compute-0 sudo[237110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqlpsxiwvqkhxwqimqusamygwfrjzhkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254968.1751523-848-176314342219789/AnsiballZ_service_facts.py'
Sep 30 17:56:08 compute-0 sudo[237110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:08 compute-0 ceph-mon[73755]: pgmap v511: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:56:08 compute-0 sshd-session[236956]: Invalid user clamav from 14.225.220.107 port 52806
Sep 30 17:56:08 compute-0 sshd-session[236956]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 17:56:08 compute-0 sshd-session[236956]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107
Sep 30 17:56:08 compute-0 python3.9[237112]: ansible-ansible.builtin.service_facts Invoked
Sep 30 17:56:08 compute-0 network[237129]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Sep 30 17:56:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:56:08] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:56:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:56:08] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:56:08 compute-0 network[237130]: 'network-scripts' will be removed from distribution in near future.
Sep 30 17:56:08 compute-0 network[237131]: It is advised to switch to 'NetworkManager' instead for network management.
Sep 30 17:56:08 compute-0 sshd-session[236546]: Failed password for invalid user cuser from 45.252.249.158 port 37996 ssh2
Sep 30 17:56:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:56:09.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v512: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:56:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:56:10 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3570001a70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:56:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:56:10 compute-0 sshd-session[236956]: Failed password for invalid user clamav from 14.225.220.107 port 52806 ssh2
Sep 30 17:56:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:56:10 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3574003ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:56:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:56:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:56:10.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:56:10 compute-0 ceph-mon[73755]: pgmap v512: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:56:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:56:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:56:11.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:56:11 compute-0 sshd-session[236546]: Received disconnect from 45.252.249.158 port 37996:11: Bye Bye [preauth]
Sep 30 17:56:11 compute-0 sshd-session[236546]: Disconnected from invalid user cuser 45.252.249.158 port 37996 [preauth]
Sep 30 17:56:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v513: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:56:11 compute-0 sshd-session[236956]: Received disconnect from 14.225.220.107 port 52806:11: Bye Bye [preauth]
Sep 30 17:56:11 compute-0 sshd-session[236956]: Disconnected from invalid user clamav 14.225.220.107 port 52806 [preauth]
Sep 30 17:56:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:56:12 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3550002690 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:56:12 compute-0 sudo[237110]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:56:12 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3550002690 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:56:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:56:12.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:12 compute-0 ceph-mon[73755]: pgmap v513: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:56:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:56:13.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v514: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:56:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[221592]: 30/09/2025 17:56:14 : epoch 68dc195a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3550002690 fd 37 proxy ignored for local
Sep 30 17:56:14 compute-0 kernel: ganesha.nfsd[233499]: segfault at 50 ip 00007f362a79d32e sp 00007f35de7fb210 error 4 in libntirpc.so.5.8[7f362a782000+2c000] likely on CPU 0 (core 0, socket 0)
Sep 30 17:56:14 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Sep 30 17:56:14 compute-0 systemd[1]: Started Process Core Dump (PID 237285/UID 0).
Sep 30 17:56:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:56:14.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:14 compute-0 sudo[237412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqvngjlwntdbsqakjgifawdakkbysdhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254974.4176521-868-240144827117102/AnsiballZ_file.py'
Sep 30 17:56:14 compute-0 sudo[237412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:14 compute-0 ceph-mon[73755]: pgmap v514: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:56:14 compute-0 python3.9[237414]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Sep 30 17:56:15 compute-0 sudo[237412]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:56:15.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:56:15 compute-0 systemd-coredump[237286]: Process 221596 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 58:
                                                    #0  0x00007f362a79d32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Sep 30 17:56:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v515: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:56:15 compute-0 sudo[237566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dptawpdjwwjryesbasboxhcaxvuxwtlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254975.1567538-884-136787647684657/AnsiballZ_modprobe.py'
Sep 30 17:56:15 compute-0 sudo[237566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:15 compute-0 systemd[1]: systemd-coredump@10-237285-0.service: Deactivated successfully.
Sep 30 17:56:15 compute-0 systemd[1]: systemd-coredump@10-237285-0.service: Consumed 1.465s CPU time.
Sep 30 17:56:15 compute-0 podman[237573]: 2025-09-30 17:56:15.735074291 +0000 UTC m=+0.033435291 container died bd3169a87f54e6ed03482de0bdeeb00c28e39f86782caa51e37ed398095f0399 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Sep 30 17:56:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-42f144f70bf38175666d289454660f7f98df4b31e9c74fea157d96c048d9b15e-merged.mount: Deactivated successfully.
Sep 30 17:56:15 compute-0 podman[237573]: 2025-09-30 17:56:15.806796716 +0000 UTC m=+0.105157686 container remove bd3169a87f54e6ed03482de0bdeeb00c28e39f86782caa51e37ed398095f0399 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:56:15 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Main process exited, code=exited, status=139/n/a
Sep 30 17:56:15 compute-0 python3.9[237569]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Sep 30 17:56:15 compute-0 sudo[237566]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:15 compute-0 systemd[1]: Stopping User Manager for UID 0...
Sep 30 17:56:15 compute-0 systemd[236623]: Activating special unit Exit the Session...
Sep 30 17:56:15 compute-0 systemd[236623]: Stopped target Main User Target.
Sep 30 17:56:15 compute-0 systemd[236623]: Stopped target Basic System.
Sep 30 17:56:15 compute-0 systemd[236623]: Stopped target Paths.
Sep 30 17:56:15 compute-0 systemd[236623]: Stopped target Sockets.
Sep 30 17:56:15 compute-0 systemd[236623]: Stopped target Timers.
Sep 30 17:56:15 compute-0 systemd[236623]: Stopped Daily Cleanup of User's Temporary Directories.
Sep 30 17:56:15 compute-0 systemd[236623]: Closed D-Bus User Message Bus Socket.
Sep 30 17:56:15 compute-0 systemd[236623]: Stopped Create User's Volatile Files and Directories.
Sep 30 17:56:15 compute-0 systemd[236623]: Removed slice User Application Slice.
Sep 30 17:56:15 compute-0 systemd[236623]: Reached target Shutdown.
Sep 30 17:56:15 compute-0 systemd[236623]: Finished Exit the Session.
Sep 30 17:56:15 compute-0 systemd[236623]: Reached target Exit the Session.
Sep 30 17:56:15 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Sep 30 17:56:15 compute-0 systemd[1]: Stopped User Manager for UID 0.
Sep 30 17:56:15 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Sep 30 17:56:15 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Failed with result 'exit-code'.
Sep 30 17:56:15 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Consumed 1.751s CPU time.
Sep 30 17:56:15 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Sep 30 17:56:15 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Sep 30 17:56:15 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Sep 30 17:56:15 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Sep 30 17:56:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:56:16.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:16 compute-0 sudo[237785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znrgxqomumvnouiruqgqnlnrwsuofcbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254976.1629362-900-212173536707161/AnsiballZ_stat.py'
Sep 30 17:56:16 compute-0 sudo[237785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:16 compute-0 podman[237743]: 2025-09-30 17:56:16.566641098 +0000 UTC m=+0.137320073 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, container_name=ovn_controller, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 17:56:16 compute-0 python3.9[237793]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:56:16 compute-0 sudo[237785]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:16 compute-0 ceph-mon[73755]: pgmap v515: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:56:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:56:17.048Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:56:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:56:17.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:17 compute-0 sudo[237917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rapschvlvxmcdklpowjwoxhzjdiobkxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254976.1629362-900-212173536707161/AnsiballZ_copy.py'
Sep 30 17:56:17 compute-0 sudo[237917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:17 compute-0 python3.9[237919]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759254976.1629362-900-212173536707161/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:56:17 compute-0 sudo[237917]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v516: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:56:18 compute-0 sudo[238071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eikuwekzxovdzirczlbkahyrcienzheh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254977.741997-932-234321680420267/AnsiballZ_lineinfile.py'
Sep 30 17:56:18 compute-0 sudo[238071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:18 compute-0 python3.9[238073]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:56:18 compute-0 sudo[238071]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:56:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:56:18.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:56:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:56:18] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:56:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:56:18] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:56:18 compute-0 ceph-mon[73755]: pgmap v516: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:56:18 compute-0 sudo[238223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmzspjswwbbrmjrqjovltesmzluphckt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254978.6548104-948-20356145540633/AnsiballZ_systemd.py'
Sep 30 17:56:18 compute-0 sudo[238223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:56:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:56:19.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:56:19 compute-0 python3.9[238225]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 17:56:19 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 30 17:56:19 compute-0 systemd[1]: Stopped Load Kernel Modules.
Sep 30 17:56:19 compute-0 systemd[1]: Stopping Load Kernel Modules...
Sep 30 17:56:19 compute-0 systemd[1]: Starting Load Kernel Modules...
Sep 30 17:56:19 compute-0 systemd[1]: Finished Load Kernel Modules.
Sep 30 17:56:19 compute-0 sudo[238223]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v517: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:56:19 compute-0 sudo[238381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzyxbijyolsthhvljkgtddtifnjuuvev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254979.6321616-964-216431059265714/AnsiballZ_file.py'
Sep 30 17:56:19 compute-0 sudo[238381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/175620 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 4ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 17:56:20 compute-0 python3.9[238383]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:56:20 compute-0 sudo[238381]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:56:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:56:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:56:20.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:56:20 compute-0 sudo[238533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxrfdciabsvaqfgqmhtfufnuaahkvlxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254980.5180957-982-183374297799362/AnsiballZ_stat.py'
Sep 30 17:56:20 compute-0 sudo[238533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:20 compute-0 ceph-mon[73755]: pgmap v517: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:56:21 compute-0 python3.9[238535]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:56:21 compute-0 sudo[238533]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:56:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:56:21.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:56:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v518: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:56:21 compute-0 sudo[238687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhifqwbvodttimsnprebnezfjdhhncvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254981.3702788-1000-174316599067260/AnsiballZ_stat.py'
Sep 30 17:56:21 compute-0 sudo[238687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:21 compute-0 python3.9[238689]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:56:21 compute-0 sudo[238687]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:22 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Sep 30 17:56:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:56:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:56:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:56:22.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:22 compute-0 sudo[238840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyqcltfliuhozogjwranhsbxmdhfwdtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254982.2047296-1016-218008170122612/AnsiballZ_stat.py'
Sep 30 17:56:22 compute-0 sudo[238840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:22 compute-0 python3.9[238842]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:56:22 compute-0 sudo[238840]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:22 compute-0 ceph-mon[73755]: pgmap v518: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:56:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:56:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:56:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:56:23.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:56:23 compute-0 sudo[238963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciejdvskdntizmrwafsldbwomadmdeqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254982.2047296-1016-218008170122612/AnsiballZ_copy.py'
Sep 30 17:56:23 compute-0 sudo[238963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:23 compute-0 python3.9[238965]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759254982.2047296-1016-218008170122612/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:56:23 compute-0 sudo[238963]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:23 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Sep 30 17:56:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v519: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:56:23 compute-0 ceph-mgr[74051]: [dashboard INFO request] [192.168.122.100:39730] [POST] [200] [0.003s] [4.0B] [d63d7846-338b-4926-87a3-8ecc925222e4] /api/prometheus_receiver
Sep 30 17:56:24 compute-0 sudo[239118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjrhxusoicyizfmsetbfynjlumehbvyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254983.7904103-1046-30573452583290/AnsiballZ_command.py'
Sep 30 17:56:24 compute-0 sudo[239118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:24 compute-0 sudo[239121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:56:24 compute-0 sudo[239121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:56:24 compute-0 sudo[239121]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:24 compute-0 python3.9[239120]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:56:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:56:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:56:24.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:56:24 compute-0 sudo[239118]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:24 compute-0 ceph-mon[73755]: pgmap v519: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:56:25 compute-0 sudo[239296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pconhbbrdijfpgetjeipkgiyhxrnpggx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254984.7291477-1062-205654174641466/AnsiballZ_lineinfile.py'
Sep 30 17:56:25 compute-0 sudo[239296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:56:25.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:25 compute-0 python3.9[239298]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:56:25 compute-0 sudo[239296]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:56:25 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Sep 30 17:56:25 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:56:25.332750) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 17:56:25 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Sep 30 17:56:25 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254985332815, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 646, "num_deletes": 255, "total_data_size": 851409, "memory_usage": 863568, "flush_reason": "Manual Compaction"}
Sep 30 17:56:25 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Sep 30 17:56:25 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254985343300, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 840926, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17220, "largest_seqno": 17865, "table_properties": {"data_size": 837503, "index_size": 1267, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7343, "raw_average_key_size": 17, "raw_value_size": 830655, "raw_average_value_size": 2021, "num_data_blocks": 57, "num_entries": 411, "num_filter_entries": 411, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759254939, "oldest_key_time": 1759254939, "file_creation_time": 1759254985, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Sep 30 17:56:25 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 10652 microseconds, and 6107 cpu microseconds.
Sep 30 17:56:25 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 17:56:25 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:56:25.343399) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 840926 bytes OK
Sep 30 17:56:25 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:56:25.343435) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Sep 30 17:56:25 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:56:25.345184) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Sep 30 17:56:25 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:56:25.345214) EVENT_LOG_v1 {"time_micros": 1759254985345203, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 17:56:25 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:56:25.345247) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 17:56:25 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 847988, prev total WAL file size 847988, number of live WAL files 2.
Sep 30 17:56:25 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 17:56:25 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:56:25.346202) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Sep 30 17:56:25 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 17:56:25 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(821KB)], [38(9887KB)]
Sep 30 17:56:25 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254985346288, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 10965349, "oldest_snapshot_seqno": -1}
Sep 30 17:56:25 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4497 keys, 10530440 bytes, temperature: kUnknown
Sep 30 17:56:25 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254985419992, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 10530440, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10499963, "index_size": 18143, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11269, "raw_key_size": 115034, "raw_average_key_size": 25, "raw_value_size": 10417724, "raw_average_value_size": 2316, "num_data_blocks": 747, "num_entries": 4497, "num_filter_entries": 4497, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759254985, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Sep 30 17:56:25 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 17:56:25 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:56:25.420308) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 10530440 bytes
Sep 30 17:56:25 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:56:25.421535) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.6 rd, 142.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 9.7 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(25.6) write-amplify(12.5) OK, records in: 5020, records dropped: 523 output_compression: NoCompression
Sep 30 17:56:25 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:56:25.421559) EVENT_LOG_v1 {"time_micros": 1759254985421547, "job": 18, "event": "compaction_finished", "compaction_time_micros": 73793, "compaction_time_cpu_micros": 30402, "output_level": 6, "num_output_files": 1, "total_output_size": 10530440, "num_input_records": 5020, "num_output_records": 4497, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 17:56:25 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 17:56:25 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254985421884, "job": 18, "event": "table_file_deletion", "file_number": 40}
Sep 30 17:56:25 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 17:56:25 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759254985424029, "job": 18, "event": "table_file_deletion", "file_number": 38}
Sep 30 17:56:25 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:56:25.346069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:56:25 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:56:25.424136) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:56:25 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:56:25.424150) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:56:25 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:56:25.424155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:56:25 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:56:25.424160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:56:25 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:56:25.424163) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:56:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/175625 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 17:56:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [ALERT] 272/175625 (4) : backend 'backend' has no server available!
Sep 30 17:56:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v520: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:56:26 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Scheduled restart job, restart counter is at 11.
Sep 30 17:56:26 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:56:26 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Consumed 1.751s CPU time.
Sep 30 17:56:26 compute-0 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 17:56:26 compute-0 sudo[239464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijsmlqwpkfrblvcmyrkkwntbuxzwaeih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254985.5297568-1078-195061999123730/AnsiballZ_replace.py'
Sep 30 17:56:26 compute-0 sudo[239464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:26 compute-0 podman[239502]: 2025-09-30 17:56:26.298018361 +0000 UTC m=+0.041511470 container create 444236ac086cf486eb2dd092b7bd60c2d6def35469c0cbf95a734f7ef629935f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Sep 30 17:56:26 compute-0 ceph-mon[73755]: pgmap v520: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997a59855422ac6e4a09ec4f7ce7a5e4fa9ba434ffc7496d85535205940ce074/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Sep 30 17:56:26 compute-0 python3.9[239472]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997a59855422ac6e4a09ec4f7ce7a5e4fa9ba434ffc7496d85535205940ce074/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997a59855422ac6e4a09ec4f7ce7a5e4fa9ba434ffc7496d85535205940ce074/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997a59855422ac6e4a09ec4f7ce7a5e4fa9ba434ffc7496d85535205940ce074/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-0.syzvbh-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:56:26 compute-0 podman[239502]: 2025-09-30 17:56:26.365723612 +0000 UTC m=+0.109216741 container init 444236ac086cf486eb2dd092b7bd60c2d6def35469c0cbf95a734f7ef629935f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:56:26 compute-0 sudo[239464]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:26 compute-0 podman[239502]: 2025-09-30 17:56:26.370976469 +0000 UTC m=+0.114469578 container start 444236ac086cf486eb2dd092b7bd60c2d6def35469c0cbf95a734f7ef629935f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Sep 30 17:56:26 compute-0 podman[239502]: 2025-09-30 17:56:26.278837743 +0000 UTC m=+0.022330872 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:56:26 compute-0 bash[239502]: 444236ac086cf486eb2dd092b7bd60c2d6def35469c0cbf95a734f7ef629935f
Sep 30 17:56:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:26 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Sep 30 17:56:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:26 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Sep 30 17:56:26 compute-0 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:56:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:26 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Sep 30 17:56:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:26 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Sep 30 17:56:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:26 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Sep 30 17:56:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:26 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Sep 30 17:56:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:26 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Sep 30 17:56:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:26 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 17:56:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:56:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:56:26.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:56:26 compute-0 sudo[239708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzdukrirzajcgmuzolsnkukumnpvpnzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254986.5446215-1094-218706973755064/AnsiballZ_replace.py'
Sep 30 17:56:26 compute-0 sudo[239708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:27 compute-0 python3.9[239710]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:56:27 compute-0 sudo[239708]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:56:27.050Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:56:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 17:56:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:56:27.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 17:56:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v521: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:56:27 compute-0 sudo[239861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlslyhitjqcnucfenbxtzvcmlvxqjioq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254987.3665264-1112-5742433446061/AnsiballZ_lineinfile.py'
Sep 30 17:56:27 compute-0 sudo[239861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:27 compute-0 python3.9[239864]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:56:27 compute-0 sudo[239861]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:28 compute-0 sudo[240014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yncivyaxegvdccexadxilspifzclxpvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254987.9571846-1112-102870648478518/AnsiballZ_lineinfile.py'
Sep 30 17:56:28 compute-0 sudo[240014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:28 compute-0 python3.9[240016]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:56:28 compute-0 sudo[240014]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:56:28.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:28 compute-0 ceph-mon[73755]: pgmap v521: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail
Sep 30 17:56:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:56:28] "GET /metrics HTTP/1.1" 200 46519 "" "Prometheus/2.51.0"
Sep 30 17:56:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:56:28] "GET /metrics HTTP/1.1" 200 46519 "" "Prometheus/2.51.0"
Sep 30 17:56:29 compute-0 sudo[240182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoukovqidvlokxdhqxxcwmmmvvynpxiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254988.5697453-1112-68735061104020/AnsiballZ_lineinfile.py'
Sep 30 17:56:29 compute-0 sudo[240182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:29 compute-0 podman[240140]: 2025-09-30 17:56:29.006694576 +0000 UTC m=+0.059999082 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=watcher_latest, tcib_managed=true)
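[annotation] The health_status record above is emitted by podman's periodic healthcheck timer for the ovn_metadata_agent container; per the logged config_data, the probe is the /openstack/healthcheck script bind-mounted from /var/lib/openstack/healthchecks/ovn_metadata_agent. As a rough sketch (container name taken from the log), the same probe can be run by hand; a zero exit status corresponds to the "healthy" result recorded here:

    podman healthcheck run ovn_metadata_agent
    echo $?    # 0 = healthy, non-zero = failing streak increments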
Sep 30 17:56:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:56:29.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:29 compute-0 python3.9[240186]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:56:29 compute-0 sudo[240182]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v522: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:56:29 compute-0 sudo[240339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhmyywiytbvelswxzyfqvfnijkseoxnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254989.3891141-1112-230799678511664/AnsiballZ_lineinfile.py'
Sep 30 17:56:29 compute-0 sudo[240339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:29 compute-0 python3.9[240341]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:56:29 compute-0 sudo[240339]: pam_unix(sudo:session): session closed for user root
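[annotation] Taken together, the ansible.builtin.replace and ansible.builtin.lineinfile tasks logged above edit /etc/multipath.conf: the blacklist block is emptied (the devnode ".*" entry is dropped) and four settings are pushed into the defaults section. A rough sketch of the resulting fragment, assuming the file started from the stock layout and that nothing else touches it (exact line order may differ, and other stock settings are elided):

    defaults {
            user_friendly_names no
            find_multipaths yes
            recheck_wwid yes
            skip_kpartx yes
            ...
    }

    blacklist {
    }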
Sep 30 17:56:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:56:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 17:56:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:56:30.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 17:56:30 compute-0 ceph-mon[73755]: pgmap v522: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:56:30 compute-0 sudo[240491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlicfhbedtptqrqmxbcjbkdgjyhyqzzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254990.4984012-1170-270128308862210/AnsiballZ_stat.py'
Sep 30 17:56:30 compute-0 sudo[240491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:30 compute-0 python3.9[240493]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:56:31 compute-0 sudo[240491]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:56:31.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v523: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:56:31 compute-0 sudo[240647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jibgwpaxsgigeoorjzlvdrrhpjhdbdng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254991.3784776-1186-100547418397657/AnsiballZ_file.py'
Sep 30 17:56:31 compute-0 sudo[240647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:31 compute-0 python3.9[240649]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:56:31 compute-0 sudo[240647]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:32 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 17:56:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:32 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 17:56:32 compute-0 sshd[192864]: Timeout before authentication for connection from 115.190.39.222 to 38.102.83.202, pid = 220906
Sep 30 17:56:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:56:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:56:32.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:56:32 compute-0 sudo[240799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skqlhyhwshdabpynbzdzdbwmxnwxjzwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254992.2899444-1204-150337580618784/AnsiballZ_file.py'
Sep 30 17:56:32 compute-0 sudo[240799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:32 compute-0 ceph-mon[73755]: pgmap v523: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:56:32 compute-0 python3.9[240801]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:56:32 compute-0 sudo[240799]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:56:33.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:33 compute-0 sudo[240952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mddrfdtqbdscydsppcspqicuyryhlklb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254993.1765003-1220-212722052486950/AnsiballZ_stat.py'
Sep 30 17:56:33 compute-0 sudo[240952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:56:33.563Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:56:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v524: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 511 B/s wr, 1 op/s
Sep 30 17:56:33 compute-0 python3.9[240954]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:56:33 compute-0 sudo[240952]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:33 compute-0 sudo[241031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhkqotxzbletsqexkpvpedpapakaqpgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254993.1765003-1220-212722052486950/AnsiballZ_file.py'
Sep 30 17:56:33 compute-0 sudo[241031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:33 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 17:56:33 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1201.3 total, 600.0 interval
                                           Cumulative writes: 3869 writes, 17K keys, 3867 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 3869 writes, 3867 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1403 writes, 6135 keys, 1402 commit groups, 1.0 writes per commit group, ingest: 10.35 MB, 0.02 MB/s
                                           Interval WAL: 1403 writes, 1402 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     59.5      0.41              0.06         9    0.045       0      0       0.0       0.0
                                             L6      1/0   10.04 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.5    181.4    155.2      0.55              0.22         8    0.068     36K   4168       0.0       0.0
                                            Sum      1/0   10.04 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.5    103.7    114.3      0.96              0.28        17    0.056     36K   4168       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   6.5    149.6    147.6      0.32              0.13         8    0.040     20K   2381       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    181.4    155.2      0.55              0.22         8    0.068     36K   4168       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    145.0      0.17              0.06         8    0.021       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.24              0.00         1    0.242       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.024, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.11 GB write, 0.09 MB/s write, 0.10 GB read, 0.08 MB/s read, 1.0 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e76de37350#2 capacity: 304.00 MB usage: 5.63 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.00011 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(337,5.32 MB,1.75107%) FilterBlock(18,106.36 KB,0.0341666%) IndexBlock(18,207.12 KB,0.0665364%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Sep 30 17:56:34 compute-0 python3.9[241033]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:56:34 compute-0 sudo[241031]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:56:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:56:34.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:56:34 compute-0 sudo[241183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbethcblneigyuqksoxpingvafpgudqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254994.2813323-1220-113578062364545/AnsiballZ_stat.py'
Sep 30 17:56:34 compute-0 sudo[241183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:34 compute-0 ceph-mon[73755]: pgmap v524: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 511 B/s wr, 1 op/s
Sep 30 17:56:34 compute-0 python3.9[241185]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:56:34 compute-0 sudo[241183]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:35 compute-0 sudo[241261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfvgviwvyidorbrmqolibwxudlxfngyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254994.2813323-1220-113578062364545/AnsiballZ_file.py'
Sep 30 17:56:35 compute-0 sudo[241261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:56:35.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:35 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Sep 30 17:56:35 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Sep 30 17:56:35 compute-0 python3.9[241263]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:56:35 compute-0 sudo[241261]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:56:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v525: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 511 B/s wr, 1 op/s
Sep 30 17:56:36 compute-0 sudo[241430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrvxoywdjcjgcrxilpoetlkomweolszm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254995.8804228-1266-95985971955394/AnsiballZ_file.py'
Sep 30 17:56:36 compute-0 sudo[241430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:36 compute-0 podman[241391]: 2025-09-30 17:56:36.212495708 +0000 UTC m=+0.077120627 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, container_name=iscsid, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_managed=true)
Sep 30 17:56:36 compute-0 python3.9[241438]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:56:36 compute-0 sudo[241430]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:56:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:56:36.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:56:36 compute-0 ceph-mon[73755]: pgmap v525: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 511 B/s wr, 1 op/s
Sep 30 17:56:36 compute-0 sudo[241589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmexdjisksbmgzaitchjbggsgcdptlvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254996.7267816-1282-266769016370995/AnsiballZ_stat.py'
Sep 30 17:56:36 compute-0 sudo[241589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:56:37.051Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:56:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:56:37.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:37 compute-0 python3.9[241591]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:56:37 compute-0 sudo[241589]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:56:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:56:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:56:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:56:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:56:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:56:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:56:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:56:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v526: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 511 B/s wr, 1 op/s
Sep 30 17:56:37 compute-0 sudo[241669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkguunwjmdajxdcpgbzjmlfuivfvaebc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254996.7267816-1282-266769016370995/AnsiballZ_file.py'
Sep 30 17:56:37 compute-0 sudo[241669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:56:37 compute-0 python3.9[241671]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:56:37 compute-0 sudo[241669]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:56:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:56:38.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:56:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:38 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 17:56:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:38 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Sep 30 17:56:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:38 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Sep 30 17:56:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:38 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Sep 30 17:56:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:38 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Sep 30 17:56:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:38 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Sep 30 17:56:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:38 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Sep 30 17:56:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:38 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:56:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:38 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:56:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:38 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:56:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:38 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Sep 30 17:56:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:38 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:56:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:38 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Sep 30 17:56:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:38 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Sep 30 17:56:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:38 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Sep 30 17:56:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:38 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Sep 30 17:56:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:38 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Sep 30 17:56:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:38 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Sep 30 17:56:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:38 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Sep 30 17:56:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:38 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Sep 30 17:56:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:38 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Sep 30 17:56:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:38 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Sep 30 17:56:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:38 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Sep 30 17:56:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:38 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Sep 30 17:56:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:38 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 17:56:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:38 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Sep 30 17:56:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:38 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 17:56:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:38 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 17:56:38 compute-0 sudo[241828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knvhfxwzxpomtjtllpesiuvuijitlgbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254998.2620158-1306-101243058898810/AnsiballZ_stat.py'
Sep 30 17:56:38 compute-0 sudo[241828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:38 compute-0 python3.9[241835]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:56:38 compute-0 ceph-mon[73755]: pgmap v526: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 511 B/s wr, 1 op/s
Sep 30 17:56:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:56:38] "GET /metrics HTTP/1.1" 200 46521 "" "Prometheus/2.51.0"
Sep 30 17:56:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:56:38] "GET /metrics HTTP/1.1" 200 46521 "" "Prometheus/2.51.0"
Sep 30 17:56:38 compute-0 sudo[241828]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:39 compute-0 sudo[241911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scmunxdqrfezslodvgumnrfznzzthzom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254998.2620158-1306-101243058898810/AnsiballZ_file.py'
Sep 30 17:56:39 compute-0 sudo[241911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:56:39.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:39 compute-0 python3.9[241913]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:56:39 compute-0 sudo[241911]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v527: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:56:39 compute-0 sudo[242065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znuwgxslcguzcdwkfguvakumyvkvfxip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759254999.649008-1330-223479151056579/AnsiballZ_systemd.py'
Sep 30 17:56:39 compute-0 sudo[242065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:40 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a0c000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:56:40 compute-0 python3.9[242067]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:56:40 compute-0 systemd[1]: Reloading.
Sep 30 17:56:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:56:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:40 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59ec000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:56:40 compute-0 systemd-sysv-generator[242100]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:56:40 compute-0 systemd-rc-local-generator[242092]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:56:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:56:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:56:40.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:56:40 compute-0 sudo[242065]: pam_unix(sudo:session): session closed for user root
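[annotation] The preset and unit files written in the preceding tasks are activated here by ansible.builtin.systemd (daemon_reload=True, enabled=True, state=started), which triggers the "Reloading." and generator messages above. The equivalent manual steps would be roughly the following; the preset contents are an assumption inferred from the file name and the usual systemd preset syntax, not visible in the log:

    # assumed contents of /etc/systemd/system-preset/91-edpm-container-shutdown.preset
    enable edpm-container-shutdown.service

    systemctl daemon-reload
    systemctl enable --now edpm-container-shutdown.service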
Sep 30 17:56:40 compute-0 ceph-mon[73755]: pgmap v527: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:56:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:56:41.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:41 compute-0 ceph-osd[82241]: bluestore.MempoolThread fragmentation_score=0.000027 took=0.000048s
Sep 30 17:56:41 compute-0 sudo[242258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eryxzpaisdtlbpmhrmsmtxcvlbzddltv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255001.1212192-1346-3224803072694/AnsiballZ_stat.py'
Sep 30 17:56:41 compute-0 sudo[242258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:41 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 17:56:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:41 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 17:56:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v528: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 1 op/s
Sep 30 17:56:41 compute-0 python3.9[242260]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:56:41 compute-0 sudo[242258]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:41 compute-0 sudo[242337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmtkrjqoogeeoiihpozbosvxehrowbbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255001.1212192-1346-3224803072694/AnsiballZ_file.py'
Sep 30 17:56:41 compute-0 sudo[242337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:42 compute-0 python3.9[242339]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:56:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/175642 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 17:56:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:42 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59e0000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:56:42 compute-0 sudo[242337]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:42 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a0c000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:56:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:56:42.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:42 compute-0 ceph-mon[73755]: pgmap v528: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 1 op/s
Sep 30 17:56:42 compute-0 sudo[242489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzfwyojwzpwebmauzpckwvvaeexpgbrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255002.6017737-1370-76156451664069/AnsiballZ_stat.py'
Sep 30 17:56:42 compute-0 sudo[242489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:43 compute-0 python3.9[242491]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:56:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:56:43.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:43 compute-0 sudo[242489]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:43 compute-0 sudo[242568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btasdfmvttevlybkumlgtneiatpbrzwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255002.6017737-1370-76156451664069/AnsiballZ_file.py'
Sep 30 17:56:43 compute-0 sudo[242568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:56:43.565Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:56:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v529: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 2.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Sep 30 17:56:43 compute-0 python3.9[242570]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:56:43 compute-0 sudo[242568]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:44 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a00001230 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:56:44 compute-0 sudo[242721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsywgqdcovjnvtihogvnmywrggafowuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255004.0052-1394-237512773295221/AnsiballZ_systemd.py'
Sep 30 17:56:44 compute-0 sudo[242721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:44 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59ec001680 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:56:44 compute-0 sudo[242724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:56:44 compute-0 sudo[242724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:56:44 compute-0 sudo[242724]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:56:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:56:44.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:56:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:44 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 17:56:44 compute-0 python3.9[242723]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:56:44 compute-0 systemd[1]: Reloading.
Sep 30 17:56:44 compute-0 systemd-rc-local-generator[242775]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:56:44 compute-0 systemd-sysv-generator[242778]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:56:44 compute-0 ceph-mon[73755]: pgmap v529: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 2.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Sep 30 17:56:45 compute-0 systemd[1]: Starting Create netns directory...
Sep 30 17:56:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:56:45.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:45 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Sep 30 17:56:45 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Sep 30 17:56:45 compute-0 systemd[1]: Finished Create netns directory.
Sep 30 17:56:45 compute-0 sudo[242721]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:45 compute-0 sshd[192864]: drop connection #0 from [115.190.39.222]:37204 on [38.102.83.202]:22 penalty: exceeded LoginGraceTime
Sep 30 17:56:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:56:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v530: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:56:45 compute-0 sudo[242941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhhzemyicyqctdswmnteuqswnmwhapxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255005.5832322-1414-166306083304163/AnsiballZ_file.py'
Sep 30 17:56:45 compute-0 sudo[242941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:46 compute-0 python3.9[242943]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:56:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:46 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59e00016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:56:46 compute-0 sudo[242941]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:46 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a0c0021f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:56:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:56:46.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:46 compute-0 sudo[243108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqywbxihviwpvbmggvegbzytccbhpgje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255006.546694-1430-219550717582934/AnsiballZ_stat.py'
Sep 30 17:56:46 compute-0 sudo[243108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:46 compute-0 podman[243067]: 2025-09-30 17:56:46.91892511 +0000 UTC m=+0.105140865 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.4)
Sep 30 17:56:46 compute-0 ceph-mon[73755]: pgmap v530: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:56:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:56:47.053Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:56:47 compute-0 python3.9[243113]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:56:47 compute-0 sudo[243108]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:56:47.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:47 compute-0 sudo[243241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcaubszxthqommbngnifweitnpfucyem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255006.546694-1430-219550717582934/AnsiballZ_copy.py'
Sep 30 17:56:47 compute-0 sudo[243241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/175647 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 17:56:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v531: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:56:47 compute-0 python3.9[243243]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759255006.546694-1430-219550717582934/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:56:47 compute-0 sudo[243241]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:48 compute-0 ceph-mon[73755]: pgmap v531: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:56:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:48 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a00001d50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:56:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:48 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59ec001680 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:56:48 compute-0 sudo[243394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edkgkjirqigamvzdjjrpyqrxxoogjqhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255008.2175162-1464-37408252869854/AnsiballZ_file.py'
Sep 30 17:56:48 compute-0 sudo[243394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:56:48.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:48 compute-0 python3.9[243396]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:56:48 compute-0 sudo[243394]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:56:48] "GET /metrics HTTP/1.1" 200 46521 "" "Prometheus/2.51.0"
Sep 30 17:56:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:56:48] "GET /metrics HTTP/1.1" 200 46521 "" "Prometheus/2.51.0"
Sep 30 17:56:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:56:49.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:49 compute-0 sudo[243547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrzcodllrterssvghbbzhirqahbvfsae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255009.138664-1480-15113623612750/AnsiballZ_stat.py'
Sep 30 17:56:49 compute-0 sudo[243547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v532: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:56:49 compute-0 python3.9[243549]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:56:49 compute-0 sudo[243547]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:49 compute-0 sudo[243671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slxqqjqigunommqjmetgycorijbjtbnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255009.138664-1480-15113623612750/AnsiballZ_copy.py'
Sep 30 17:56:49 compute-0 sudo[243671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:50 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59e00016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:56:50 compute-0 python3.9[243673]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759255009.138664-1480-15113623612750/.source.json _original_basename=.3f8t69oq follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:56:50 compute-0 sudo[243671]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:56:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:50 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a0c0021f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:56:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:56:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:56:50.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:56:50 compute-0 ceph-mon[73755]: pgmap v532: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:56:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:56:51.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:51 compute-0 sudo[243823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjgqzqxouiykqsnhgjyqwucvgouxttok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255010.6837735-1510-56058347611919/AnsiballZ_file.py'
Sep 30 17:56:51 compute-0 sudo[243823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:51 compute-0 python3.9[243825]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:56:51 compute-0 sudo[243823]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v533: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1023 B/s rd, 511 B/s wr, 1 op/s
Sep 30 17:56:52 compute-0 sudo[243977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfwibzadjpmajceebgkjercaxbxqyeec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255011.7662287-1526-185900845394151/AnsiballZ_stat.py'
Sep 30 17:56:52 compute-0 sudo[243977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:52 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a00001d50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:56:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:56:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:56:52 compute-0 sudo[243977]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:52 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59ec001680 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:56:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:56:52.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:52 compute-0 ceph-mon[73755]: pgmap v533: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1023 B/s rd, 511 B/s wr, 1 op/s
Sep 30 17:56:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:56:52 compute-0 sudo[244100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ushrucmqtuosimolwnppsjenwkajzvey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255011.7662287-1526-185900845394151/AnsiballZ_copy.py'
Sep 30 17:56:52 compute-0 sudo[244100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:52 compute-0 sudo[244100]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:56:53.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:56:53.566Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:56:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v534: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1023 B/s rd, 511 B/s wr, 1 op/s
Sep 30 17:56:53 compute-0 sudo[244254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpssetknhioyqrpaehzxrpnlswvvuxik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255013.4281461-1560-184757712773986/AnsiballZ_container_config_data.py'
Sep 30 17:56:53 compute-0 sudo[244254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:53 compute-0 python3.9[244256]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Sep 30 17:56:53 compute-0 sudo[244254]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:54 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59e00016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:56:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:56:54.246 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 17:56:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:56:54.246 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 17:56:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:56:54.246 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 17:56:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:54 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59e00016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:56:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:56:54.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:54 compute-0 sudo[244407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdcfchaoqzadodjnrvxiszlodbuesiel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255014.3484483-1578-252020262617906/AnsiballZ_container_config_hash.py'
Sep 30 17:56:54 compute-0 sudo[244407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:54 compute-0 ceph-mon[73755]: pgmap v534: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1023 B/s rd, 511 B/s wr, 1 op/s
Sep 30 17:56:54 compute-0 python3.9[244409]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Sep 30 17:56:54 compute-0 sudo[244407]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:56:55.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:56:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v535: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:56:55 compute-0 sudo[244561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmsskrjuowaxfthlyzjgzgogvndsjaka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255015.5527818-1596-202350699244552/AnsiballZ_podman_container_info.py'
Sep 30 17:56:55 compute-0 sudo[244561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:56 compute-0 python3.9[244563]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Sep 30 17:56:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:56 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a00001d50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:56:56 compute-0 sudo[244561]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:56 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59ec001680 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:56:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:56:56.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:56 compute-0 ceph-mon[73755]: pgmap v535: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:56:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:56:57.054Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:56:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:56:57.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v536: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:56:57 compute-0 sudo[244741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elpuqixtmxvjntwjstxtyjdodtbbvxiw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759255017.3729086-1622-34242553983966/AnsiballZ_edpm_container_manage.py'
Sep 30 17:56:57 compute-0 sudo[244741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:57 compute-0 python3[244743]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Sep 30 17:56:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:58 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59e00016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:56:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:56:58 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a0c0091b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:56:58 compute-0 podman[244756]: 2025-09-30 17:56:58.460543204 +0000 UTC m=+0.449961143 image pull e99d9627280779529e99daa6a112e310843a207a3acc590902c030127020a067 38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest
Sep 30 17:56:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:56:58.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:58 compute-0 podman[244813]: 2025-09-30 17:56:58.621212683 +0000 UTC m=+0.060665659 container create 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Sep 30 17:56:58 compute-0 podman[244813]: 2025-09-30 17:56:58.58918731 +0000 UTC m=+0.028640306 image pull e99d9627280779529e99daa6a112e310843a207a3acc590902c030127020a067 38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest
Sep 30 17:56:58 compute-0 python3[244743]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z 38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest
Sep 30 17:56:58 compute-0 ceph-mon[73755]: pgmap v536: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:56:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:56:58] "GET /metrics HTTP/1.1" 200 46519 "" "Prometheus/2.51.0"
Sep 30 17:56:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:56:58] "GET /metrics HTTP/1.1" 200 46519 "" "Prometheus/2.51.0"
Sep 30 17:56:58 compute-0 sudo[244741]: pam_unix(sudo:session): session closed for user root
Sep 30 17:56:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:56:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:56:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:56:59.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:56:59 compute-0 sudo[245015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txtkayavvwwjelgwczabsvtemksdrwqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255019.1469023-1638-37699775850887/AnsiballZ_stat.py'
Sep 30 17:56:59 compute-0 sudo[245015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:56:59 compute-0 podman[244976]: 2025-09-30 17:56:59.454639818 +0000 UTC m=+0.060139445 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Sep 30 17:56:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v537: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:56:59 compute-0 python3.9[245023]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:56:59 compute-0 sudo[245015]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:00 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a000031e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:57:00 compute-0 sudo[245177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vukdcrzeohdbyyxuexiiehuguthpuuzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255020.030698-1656-167760927749283/AnsiballZ_file.py'
Sep 30 17:57:00 compute-0 sudo[245177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:00 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59ec001680 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:57:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:57:00.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:57:00 compute-0 python3.9[245179]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:57:00 compute-0 sudo[245177]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:00 compute-0 ceph-mon[73755]: pgmap v537: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 17:57:01 compute-0 sudo[245253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvomgphapniwnnqqbqeafglmpxuungas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255020.030698-1656-167760927749283/AnsiballZ_stat.py'
Sep 30 17:57:01 compute-0 sudo[245253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:57:01.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:01 compute-0 python3.9[245255]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:57:01 compute-0 sudo[245253]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v538: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:01 compute-0 sudo[245406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onakibztsjlxayowfepgrpmazxqhbvva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255021.3814852-1656-179489475794299/AnsiballZ_copy.py'
Sep 30 17:57:01 compute-0 sudo[245406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:02 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59e00032f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:02 compute-0 python3.9[245408]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759255021.3814852-1656-179489475794299/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:57:02 compute-0 sudo[245406]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:02 compute-0 sudo[245482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyezfqopooxdaopwjmatjbozereyejvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255021.3814852-1656-179489475794299/AnsiballZ_systemd.py'
Sep 30 17:57:02 compute-0 sudo[245482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:02 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a0c0091b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 17:57:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:57:02.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 17:57:02 compute-0 ceph-mon[73755]: pgmap v538: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:02 compute-0 python3.9[245484]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Sep 30 17:57:02 compute-0 systemd[1]: Reloading.
Sep 30 17:57:02 compute-0 systemd-rc-local-generator[245514]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:57:02 compute-0 systemd-sysv-generator[245518]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:57:03 compute-0 sudo[245511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:57:03 compute-0 sudo[245511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:57:03 compute-0 sudo[245511]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:03 compute-0 sudo[245482]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:57:03.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:03 compute-0 sudo[245545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 17:57:03 compute-0 sudo[245545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:57:03 compute-0 sudo[245656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmpfwkghkgfkvtiobevzigqyeovpenfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255021.3814852-1656-179489475794299/AnsiballZ_systemd.py'
Sep 30 17:57:03 compute-0 sudo[245656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:57:03.567Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:57:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v539: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:03 compute-0 sudo[245545]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:57:03 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:57:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 17:57:03 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:57:03 compute-0 python3.9[245661]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:57:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 17:57:03 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:57:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 17:57:03 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:57:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 17:57:03 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:57:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 17:57:03 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:57:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:57:03 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:57:03 compute-0 systemd[1]: Reloading.
Sep 30 17:57:03 compute-0 systemd-rc-local-generator[245733]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:57:03 compute-0 systemd-sysv-generator[245736]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:57:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:04 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a000031e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:04 compute-0 sudo[245682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:57:04 compute-0 sudo[245682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:57:04 compute-0 sudo[245682]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:04 compute-0 systemd[1]: Starting multipathd container...
Sep 30 17:57:04 compute-0 sudo[245744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 17:57:04 compute-0 sudo[245744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:57:04 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e7012291df90a26fef772d5e7239d2fa0937dfcc00c31fd8a28b016d29279d1/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Sep 30 17:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e7012291df90a26fef772d5e7239d2fa0937dfcc00c31fd8a28b016d29279d1/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Sep 30 17:57:04 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1.
Sep 30 17:57:04 compute-0 podman[245745]: 2025-09-30 17:57:04.374728394 +0000 UTC m=+0.139971141 container init 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Sep 30 17:57:04 compute-0 multipathd[245784]: + sudo -E kolla_set_configs
Sep 30 17:57:04 compute-0 podman[245745]: 2025-09-30 17:57:04.403476612 +0000 UTC m=+0.168719389 container start 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 17:57:04 compute-0 sudo[245790]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Sep 30 17:57:04 compute-0 podman[245745]: multipathd
Sep 30 17:57:04 compute-0 sudo[245790]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Sep 30 17:57:04 compute-0 systemd[1]: Started multipathd container.
Sep 30 17:57:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:04 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59ec001680 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:04 compute-0 sudo[245656]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:04 compute-0 podman[245791]: 2025-09-30 17:57:04.48374433 +0000 UTC m=+0.066998314 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Sep 30 17:57:04 compute-0 multipathd[245784]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Sep 30 17:57:04 compute-0 multipathd[245784]: INFO:__main__:Validating config file
Sep 30 17:57:04 compute-0 multipathd[245784]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Sep 30 17:57:04 compute-0 multipathd[245784]: INFO:__main__:Writing out command to execute
Sep 30 17:57:04 compute-0 systemd[1]: 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1-7c26be8522d55151.service: Main process exited, code=exited, status=1/FAILURE
Sep 30 17:57:04 compute-0 systemd[1]: 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1-7c26be8522d55151.service: Failed with result 'exit-code'.
Sep 30 17:57:04 compute-0 sudo[245790]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:04 compute-0 multipathd[245784]: ++ cat /run_command
Sep 30 17:57:04 compute-0 multipathd[245784]: + CMD='/usr/sbin/multipathd -d'
Sep 30 17:57:04 compute-0 multipathd[245784]: + ARGS=
Sep 30 17:57:04 compute-0 multipathd[245784]: + sudo kolla_copy_cacerts
Sep 30 17:57:04 compute-0 sudo[245847]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Sep 30 17:57:04 compute-0 sudo[245847]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Sep 30 17:57:04 compute-0 sudo[245847]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:04 compute-0 multipathd[245784]: + [[ ! -n '' ]]
Sep 30 17:57:04 compute-0 multipathd[245784]: + . kolla_extend_start
Sep 30 17:57:04 compute-0 multipathd[245784]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Sep 30 17:57:04 compute-0 multipathd[245784]: Running command: '/usr/sbin/multipathd -d'
Sep 30 17:57:04 compute-0 multipathd[245784]: + umask 0022
Sep 30 17:57:04 compute-0 multipathd[245784]: + exec /usr/sbin/multipathd -d
Sep 30 17:57:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:57:04.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:04 compute-0 sudo[245823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:57:04 compute-0 multipathd[245784]: 3623.289528 | multipathd v0.9.9: start up
Sep 30 17:57:04 compute-0 sudo[245823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:57:04 compute-0 sudo[245823]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:04 compute-0 multipathd[245784]: 3623.299442 | reconfigure: setting up paths and maps
Sep 30 17:57:04 compute-0 multipathd[245784]: 3623.301327 | _check_bindings_file: failed to read header from /etc/multipath/bindings
Sep 30 17:57:04 compute-0 multipathd[245784]: 3623.302963 | updated bindings file /etc/multipath/bindings
Sep 30 17:57:04 compute-0 podman[245910]: 2025-09-30 17:57:04.67487038 +0000 UTC m=+0.041397517 container create 76b23e4fea10b3e606ccba244a07e288b5bf4b48b9a7ec7be8e2d6440f982221 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_jennings, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:57:04 compute-0 systemd[1]: Started libpod-conmon-76b23e4fea10b3e606ccba244a07e288b5bf4b48b9a7ec7be8e2d6440f982221.scope.
Sep 30 17:57:04 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:57:04 compute-0 podman[245910]: 2025-09-30 17:57:04.658060263 +0000 UTC m=+0.024587430 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:57:04 compute-0 podman[245910]: 2025-09-30 17:57:04.75944218 +0000 UTC m=+0.125969337 container init 76b23e4fea10b3e606ccba244a07e288b5bf4b48b9a7ec7be8e2d6440f982221 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 17:57:04 compute-0 podman[245910]: 2025-09-30 17:57:04.767962191 +0000 UTC m=+0.134489328 container start 76b23e4fea10b3e606ccba244a07e288b5bf4b48b9a7ec7be8e2d6440f982221 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_jennings, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Sep 30 17:57:04 compute-0 serene_jennings[245926]: 167 167
Sep 30 17:57:04 compute-0 systemd[1]: libpod-76b23e4fea10b3e606ccba244a07e288b5bf4b48b9a7ec7be8e2d6440f982221.scope: Deactivated successfully.
Sep 30 17:57:04 compute-0 podman[245910]: 2025-09-30 17:57:04.773325041 +0000 UTC m=+0.139852208 container attach 76b23e4fea10b3e606ccba244a07e288b5bf4b48b9a7ec7be8e2d6440f982221 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_jennings, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Sep 30 17:57:04 compute-0 conmon[245926]: conmon 76b23e4fea10b3e606cc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-76b23e4fea10b3e606ccba244a07e288b5bf4b48b9a7ec7be8e2d6440f982221.scope/container/memory.events
Sep 30 17:57:04 compute-0 podman[245910]: 2025-09-30 17:57:04.775946319 +0000 UTC m=+0.142473496 container died 76b23e4fea10b3e606ccba244a07e288b5bf4b48b9a7ec7be8e2d6440f982221 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:57:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-9de3b9460e74010689dff3b4ba4a779b1589ff84e435556e30cf1b3a24b9ab16-merged.mount: Deactivated successfully.
Sep 30 17:57:04 compute-0 ceph-mon[73755]: pgmap v539: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:04 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:57:04 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:57:04 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:57:04 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:57:04 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:57:04 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:57:04 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:57:04 compute-0 podman[245910]: 2025-09-30 17:57:04.829218574 +0000 UTC m=+0.195745721 container remove 76b23e4fea10b3e606ccba244a07e288b5bf4b48b9a7ec7be8e2d6440f982221 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_jennings, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Sep 30 17:57:04 compute-0 systemd[1]: libpod-conmon-76b23e4fea10b3e606ccba244a07e288b5bf4b48b9a7ec7be8e2d6440f982221.scope: Deactivated successfully.
Sep 30 17:57:05 compute-0 podman[245950]: 2025-09-30 17:57:05.009608696 +0000 UTC m=+0.049375645 container create 7c128191ab438fd01ceac9466dacb5fabbaa6fa6c0a847cc33e08e4048b46618 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:57:05 compute-0 systemd[1]: Started libpod-conmon-7c128191ab438fd01ceac9466dacb5fabbaa6fa6c0a847cc33e08e4048b46618.scope.
Sep 30 17:57:05 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:57:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be9b5f29b151511157c97786bf20fe9cc01e881cd96943119ee2b4ca5bd39dd8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:57:05 compute-0 podman[245950]: 2025-09-30 17:57:04.984991926 +0000 UTC m=+0.024758925 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:57:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be9b5f29b151511157c97786bf20fe9cc01e881cd96943119ee2b4ca5bd39dd8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:57:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be9b5f29b151511157c97786bf20fe9cc01e881cd96943119ee2b4ca5bd39dd8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:57:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be9b5f29b151511157c97786bf20fe9cc01e881cd96943119ee2b4ca5bd39dd8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:57:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be9b5f29b151511157c97786bf20fe9cc01e881cd96943119ee2b4ca5bd39dd8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:57:05 compute-0 podman[245950]: 2025-09-30 17:57:05.099004011 +0000 UTC m=+0.138770940 container init 7c128191ab438fd01ceac9466dacb5fabbaa6fa6c0a847cc33e08e4048b46618 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_keller, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Sep 30 17:57:05 compute-0 podman[245950]: 2025-09-30 17:57:05.106123386 +0000 UTC m=+0.145890295 container start 7c128191ab438fd01ceac9466dacb5fabbaa6fa6c0a847cc33e08e4048b46618 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_keller, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Sep 30 17:57:05 compute-0 podman[245950]: 2025-09-30 17:57:05.109054682 +0000 UTC m=+0.148821641 container attach 7c128191ab438fd01ceac9466dacb5fabbaa6fa6c0a847cc33e08e4048b46618 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_keller, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:57:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:57:05.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:57:05 compute-0 infallible_keller[245966]: --> passed data devices: 0 physical, 1 LVM
Sep 30 17:57:05 compute-0 infallible_keller[245966]: --> All data devices are unavailable
Sep 30 17:57:05 compute-0 systemd[1]: libpod-7c128191ab438fd01ceac9466dacb5fabbaa6fa6c0a847cc33e08e4048b46618.scope: Deactivated successfully.
Sep 30 17:57:05 compute-0 podman[245950]: 2025-09-30 17:57:05.484971209 +0000 UTC m=+0.524738118 container died 7c128191ab438fd01ceac9466dacb5fabbaa6fa6c0a847cc33e08e4048b46618 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Sep 30 17:57:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-be9b5f29b151511157c97786bf20fe9cc01e881cd96943119ee2b4ca5bd39dd8-merged.mount: Deactivated successfully.
Sep 30 17:57:05 compute-0 podman[245950]: 2025-09-30 17:57:05.523144022 +0000 UTC m=+0.562910921 container remove 7c128191ab438fd01ceac9466dacb5fabbaa6fa6c0a847cc33e08e4048b46618 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_keller, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Sep 30 17:57:05 compute-0 systemd[1]: libpod-conmon-7c128191ab438fd01ceac9466dacb5fabbaa6fa6c0a847cc33e08e4048b46618.scope: Deactivated successfully.
Sep 30 17:57:05 compute-0 sudo[245744]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v540: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:05 compute-0 sudo[245994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:57:05 compute-0 sudo[245994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:57:05 compute-0 sudo[245994]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:05 compute-0 sudo[246047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 17:57:05 compute-0 sudo[246047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:57:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:06 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59e00032f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:06 compute-0 python3.9[246182]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:57:06 compute-0 podman[246213]: 2025-09-30 17:57:06.222952842 +0000 UTC m=+0.049658403 container create 4b3ae65f87141889c341f18a12e9d6318dc4f3479840259d49a6aed3792f4b10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_herschel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Sep 30 17:57:06 compute-0 systemd[1]: Started libpod-conmon-4b3ae65f87141889c341f18a12e9d6318dc4f3479840259d49a6aed3792f4b10.scope.
Sep 30 17:57:06 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:57:06 compute-0 podman[246213]: 2025-09-30 17:57:06.287937252 +0000 UTC m=+0.114642833 container init 4b3ae65f87141889c341f18a12e9d6318dc4f3479840259d49a6aed3792f4b10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 17:57:06 compute-0 podman[246213]: 2025-09-30 17:57:06.294933414 +0000 UTC m=+0.121638975 container start 4b3ae65f87141889c341f18a12e9d6318dc4f3479840259d49a6aed3792f4b10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_herschel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:57:06 compute-0 podman[246213]: 2025-09-30 17:57:06.20710723 +0000 UTC m=+0.033812791 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:57:06 compute-0 podman[246213]: 2025-09-30 17:57:06.301526465 +0000 UTC m=+0.128232046 container attach 4b3ae65f87141889c341f18a12e9d6318dc4f3479840259d49a6aed3792f4b10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:57:06 compute-0 nice_herschel[246256]: 167 167
Sep 30 17:57:06 compute-0 systemd[1]: libpod-4b3ae65f87141889c341f18a12e9d6318dc4f3479840259d49a6aed3792f4b10.scope: Deactivated successfully.
Sep 30 17:57:06 compute-0 podman[246213]: 2025-09-30 17:57:06.303521197 +0000 UTC m=+0.130226758 container died 4b3ae65f87141889c341f18a12e9d6318dc4f3479840259d49a6aed3792f4b10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_herschel, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Sep 30 17:57:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fe90f52650fa1ee7df35a8f96600a45f6162eda0ca01d1b536e483ff3e95447-merged.mount: Deactivated successfully.
Sep 30 17:57:06 compute-0 podman[246213]: 2025-09-30 17:57:06.349937924 +0000 UTC m=+0.176643485 container remove 4b3ae65f87141889c341f18a12e9d6318dc4f3479840259d49a6aed3792f4b10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_herschel, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Sep 30 17:57:06 compute-0 podman[246249]: 2025-09-30 17:57:06.353552048 +0000 UTC m=+0.087898827 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, managed_by=edpm_ansible)
Sep 30 17:57:06 compute-0 systemd[1]: libpod-conmon-4b3ae65f87141889c341f18a12e9d6318dc4f3479840259d49a6aed3792f4b10.scope: Deactivated successfully.
Sep 30 17:57:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:06 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a0c009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:57:06.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:06 compute-0 podman[246352]: 2025-09-30 17:57:06.535032368 +0000 UTC m=+0.046731186 container create 7bfe1f44cc54de4fe7bc8c190f50461a3c310cc257e8fe14beb3b255a5e17f86 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_panini, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Sep 30 17:57:06 compute-0 systemd[1]: Started libpod-conmon-7bfe1f44cc54de4fe7bc8c190f50461a3c310cc257e8fe14beb3b255a5e17f86.scope.
Sep 30 17:57:06 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:57:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adefb81726b06d04fb43ac877c93acd55166292a0f1c78ff781094b0202cc046/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:57:06 compute-0 podman[246352]: 2025-09-30 17:57:06.514132135 +0000 UTC m=+0.025831003 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:57:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adefb81726b06d04fb43ac877c93acd55166292a0f1c78ff781094b0202cc046/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:57:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adefb81726b06d04fb43ac877c93acd55166292a0f1c78ff781094b0202cc046/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:57:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adefb81726b06d04fb43ac877c93acd55166292a0f1c78ff781094b0202cc046/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:57:06 compute-0 podman[246352]: 2025-09-30 17:57:06.624481165 +0000 UTC m=+0.136180033 container init 7bfe1f44cc54de4fe7bc8c190f50461a3c310cc257e8fe14beb3b255a5e17f86 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_panini, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:57:06 compute-0 podman[246352]: 2025-09-30 17:57:06.633590552 +0000 UTC m=+0.145289370 container start 7bfe1f44cc54de4fe7bc8c190f50461a3c310cc257e8fe14beb3b255a5e17f86 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_panini, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Sep 30 17:57:06 compute-0 podman[246352]: 2025-09-30 17:57:06.641271591 +0000 UTC m=+0.152970409 container attach 7bfe1f44cc54de4fe7bc8c190f50461a3c310cc257e8fe14beb3b255a5e17f86 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Sep 30 17:57:06 compute-0 sudo[246447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoyuwssvsgxsqppphbsxkzmnpbdupbjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255026.4085898-1728-245201650467781/AnsiballZ_command.py'
Sep 30 17:57:06 compute-0 sudo[246447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:06 compute-0 ceph-mon[73755]: pgmap v540: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:06 compute-0 python3.9[246449]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:57:06 compute-0 strange_panini[246396]: {
Sep 30 17:57:06 compute-0 strange_panini[246396]:     "0": [
Sep 30 17:57:06 compute-0 strange_panini[246396]:         {
Sep 30 17:57:06 compute-0 strange_panini[246396]:             "devices": [
Sep 30 17:57:06 compute-0 strange_panini[246396]:                 "/dev/loop3"
Sep 30 17:57:06 compute-0 strange_panini[246396]:             ],
Sep 30 17:57:06 compute-0 strange_panini[246396]:             "lv_name": "ceph_lv0",
Sep 30 17:57:06 compute-0 strange_panini[246396]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:57:06 compute-0 strange_panini[246396]:             "lv_size": "21470642176",
Sep 30 17:57:06 compute-0 strange_panini[246396]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 17:57:06 compute-0 strange_panini[246396]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:57:06 compute-0 strange_panini[246396]:             "name": "ceph_lv0",
Sep 30 17:57:06 compute-0 strange_panini[246396]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:57:06 compute-0 strange_panini[246396]:             "tags": {
Sep 30 17:57:06 compute-0 strange_panini[246396]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:57:06 compute-0 strange_panini[246396]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:57:06 compute-0 strange_panini[246396]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 17:57:06 compute-0 strange_panini[246396]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 17:57:06 compute-0 strange_panini[246396]:                 "ceph.cluster_name": "ceph",
Sep 30 17:57:06 compute-0 strange_panini[246396]:                 "ceph.crush_device_class": "",
Sep 30 17:57:06 compute-0 strange_panini[246396]:                 "ceph.encrypted": "0",
Sep 30 17:57:06 compute-0 strange_panini[246396]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 17:57:06 compute-0 strange_panini[246396]:                 "ceph.osd_id": "0",
Sep 30 17:57:06 compute-0 strange_panini[246396]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 17:57:06 compute-0 strange_panini[246396]:                 "ceph.type": "block",
Sep 30 17:57:06 compute-0 strange_panini[246396]:                 "ceph.vdo": "0",
Sep 30 17:57:06 compute-0 strange_panini[246396]:                 "ceph.with_tpm": "0"
Sep 30 17:57:06 compute-0 strange_panini[246396]:             },
Sep 30 17:57:06 compute-0 strange_panini[246396]:             "type": "block",
Sep 30 17:57:06 compute-0 strange_panini[246396]:             "vg_name": "ceph_vg0"
Sep 30 17:57:06 compute-0 strange_panini[246396]:         }
Sep 30 17:57:06 compute-0 strange_panini[246396]:     ]
Sep 30 17:57:06 compute-0 strange_panini[246396]: }
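The JSON block above is the output of the "ceph-volume lvm list --format json" call that cephadm dispatched through sudo (pid 246047): a single OSD (id 0) backed by the logical volume /dev/ceph_vg0/ceph_lv0 on /dev/loop3. As an illustration only, the following minimal Python sketch (not part of the log; the abridged sample string and the helper name are invented for the example) shows one way that structure could be parsed to map OSD ids to their block devices:

    import json

    # Abridged from the ceph-volume lvm list JSON captured above; the keys match
    # what ceph-volume printed here (lv_path, type, tags with ceph.* entries).
    sample = """
    {
        "0": [
            {
                "lv_name": "ceph_lv0",
                "lv_path": "/dev/ceph_vg0/ceph_lv0",
                "tags": {"ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
                         "ceph.osd_id": "0",
                         "ceph.type": "block"},
                "type": "block",
                "vg_name": "ceph_vg0"
            }
        ]
    }
    """

    def osd_block_devices(raw):
        """Map OSD id -> block LV path from ceph-volume lvm list JSON."""
        result = {}
        for osd_id, lvs in json.loads(raw).items():
            for lv in lvs:
                if lv.get("type") == "block":
                    result[osd_id] = lv["lv_path"]
        return result

    print(osd_block_devices(sample))   # {'0': '/dev/ceph_vg0/ceph_lv0'}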
Sep 30 17:57:06 compute-0 systemd[1]: libpod-7bfe1f44cc54de4fe7bc8c190f50461a3c310cc257e8fe14beb3b255a5e17f86.scope: Deactivated successfully.
Sep 30 17:57:06 compute-0 podman[246352]: 2025-09-30 17:57:06.968050819 +0000 UTC m=+0.479749677 container died 7bfe1f44cc54de4fe7bc8c190f50461a3c310cc257e8fe14beb3b255a5e17f86 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Sep 30 17:57:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-adefb81726b06d04fb43ac877c93acd55166292a0f1c78ff781094b0202cc046-merged.mount: Deactivated successfully.
Sep 30 17:57:07 compute-0 sudo[246447]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:07 compute-0 podman[246352]: 2025-09-30 17:57:07.041594422 +0000 UTC m=+0.553293240 container remove 7bfe1f44cc54de4fe7bc8c190f50461a3c310cc257e8fe14beb3b255a5e17f86 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_panini, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Sep 30 17:57:07 compute-0 systemd[1]: libpod-conmon-7bfe1f44cc54de4fe7bc8c190f50461a3c310cc257e8fe14beb3b255a5e17f86.scope: Deactivated successfully.
Sep 30 17:57:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:57:07.055Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:57:07 compute-0 sudo[246047]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:07 compute-0 sudo[246501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:57:07 compute-0 sudo[246501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:57:07 compute-0 sudo[246501]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:57:07.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:07 compute-0 sudo[246526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 17:57:07 compute-0 sudo[246526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:57:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:57:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
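A side note on the pg_autoscaler lines above: for every pool in this snapshot the logged "pg target" works out to usage_ratio * bias * 200. The factor 200 is not printed in the log and is only inferred from the numbers themselves (it presumably corresponds to the autoscaler's overall PG budget for this small cluster); the raw target is then quantized to a power of two and compared with the current pg_num, as the "quantized to N (current M)" suffixes show. A quick, purely illustrative Python check of that relationship:

    # Values copied from the pg_autoscaler log lines above:
    # (usage_ratio, bias, logged pg target) per pool.
    pools = {
        ".mgr":               (1.0778624975581169e-05, 1.0, 0.0021557249951162337),
        "cephfs.cephfs.meta": (7.630884938464543e-07,  4.0, 0.0006104707950771635),
        ".nfs":               (9.538606173080679e-08,  1.0, 1.907721234616136e-05),
        ".rgw.root":          (5.723163703848408e-07,  1.0, 0.00011446327407696817),
        "default.rgw.log":    (3.243126098847431e-06,  1.0, 0.0006486252197694861),
        "default.rgw.meta":   (1.9077212346161359e-07, 4.0, 0.00015261769876929088),
    }
    for name, (usage, bias, logged_target) in pools.items():
        # 200 is an inference from these numbers, not a value stated in the log.
        assert abs(usage * bias * 200 - logged_target) < 1e-12, name
    print("all logged pg targets equal usage_ratio * bias * 200")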
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_17:57:07
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.nfs', 'backups', 'vms', 'default.rgw.log', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'images', '.mgr', 'default.rgw.meta', 'default.rgw.control']
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:57:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v541: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:07 compute-0 podman[246694]: 2025-09-30 17:57:07.625336083 +0000 UTC m=+0.037687991 container create e4f44b82302c7c6d5da74ed328ed9ccf379707e446a04c13c30011c3c6ac27aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_payne, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Sep 30 17:57:07 compute-0 sudo[246734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cupnmtlckxtpnigtxhlwydpmmvpvnktz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255027.3095355-1744-61385847185009/AnsiballZ_systemd.py'
Sep 30 17:57:07 compute-0 systemd[1]: Started libpod-conmon-e4f44b82302c7c6d5da74ed328ed9ccf379707e446a04c13c30011c3c6ac27aa.scope.
Sep 30 17:57:07 compute-0 sudo[246734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:07 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:57:07 compute-0 podman[246694]: 2025-09-30 17:57:07.706075903 +0000 UTC m=+0.118427841 container init e4f44b82302c7c6d5da74ed328ed9ccf379707e446a04c13c30011c3c6ac27aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_payne, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 17:57:07 compute-0 podman[246694]: 2025-09-30 17:57:07.609741117 +0000 UTC m=+0.022093045 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:57:07 compute-0 podman[246694]: 2025-09-30 17:57:07.716792502 +0000 UTC m=+0.129144420 container start e4f44b82302c7c6d5da74ed328ed9ccf379707e446a04c13c30011c3c6ac27aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_payne, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:57:07 compute-0 podman[246694]: 2025-09-30 17:57:07.720083987 +0000 UTC m=+0.132435925 container attach e4f44b82302c7c6d5da74ed328ed9ccf379707e446a04c13c30011c3c6ac27aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_payne, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Sep 30 17:57:07 compute-0 gracious_payne[246740]: 167 167
Sep 30 17:57:07 compute-0 systemd[1]: libpod-e4f44b82302c7c6d5da74ed328ed9ccf379707e446a04c13c30011c3c6ac27aa.scope: Deactivated successfully.
Sep 30 17:57:07 compute-0 podman[246694]: 2025-09-30 17:57:07.724185734 +0000 UTC m=+0.136537652 container died e4f44b82302c7c6d5da74ed328ed9ccf379707e446a04c13c30011c3c6ac27aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325)
Sep 30 17:57:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f7225d389915431ff7bdca3b53f7a6806cd4c55dc61f4f19678503b6afdcff6-merged.mount: Deactivated successfully.
Sep 30 17:57:07 compute-0 podman[246694]: 2025-09-30 17:57:07.787449629 +0000 UTC m=+0.199801557 container remove e4f44b82302c7c6d5da74ed328ed9ccf379707e446a04c13c30011c3c6ac27aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_payne, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:57:07 compute-0 systemd[1]: libpod-conmon-e4f44b82302c7c6d5da74ed328ed9ccf379707e446a04c13c30011c3c6ac27aa.scope: Deactivated successfully.
Sep 30 17:57:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:57:07 compute-0 python3.9[246742]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 17:57:08 compute-0 podman[246765]: 2025-09-30 17:57:08.009548235 +0000 UTC m=+0.062694711 container create c59d7c10afc7fc0b7465b74da1c7f1bc8688199b2db11e1da5dcb3e76199bbc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_knuth, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:57:08 compute-0 systemd[1]: Stopping multipathd container...
Sep 30 17:57:08 compute-0 systemd[1]: Started libpod-conmon-c59d7c10afc7fc0b7465b74da1c7f1bc8688199b2db11e1da5dcb3e76199bbc7.scope.
Sep 30 17:57:08 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:57:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41552d120220d8ac5d61546f8e037ef4f45cfb49d8ffd2800a71dcae157f5257/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:57:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41552d120220d8ac5d61546f8e037ef4f45cfb49d8ffd2800a71dcae157f5257/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:57:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41552d120220d8ac5d61546f8e037ef4f45cfb49d8ffd2800a71dcae157f5257/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:57:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41552d120220d8ac5d61546f8e037ef4f45cfb49d8ffd2800a71dcae157f5257/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:57:08 compute-0 podman[246765]: 2025-09-30 17:57:07.986448005 +0000 UTC m=+0.039594501 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:57:08 compute-0 podman[246765]: 2025-09-30 17:57:08.082898353 +0000 UTC m=+0.136044839 container init c59d7c10afc7fc0b7465b74da1c7f1bc8688199b2db11e1da5dcb3e76199bbc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:57:08 compute-0 podman[246765]: 2025-09-30 17:57:08.089171786 +0000 UTC m=+0.142318272 container start c59d7c10afc7fc0b7465b74da1c7f1bc8688199b2db11e1da5dcb3e76199bbc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_knuth, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:57:08 compute-0 podman[246765]: 2025-09-30 17:57:08.096792294 +0000 UTC m=+0.149938780 container attach c59d7c10afc7fc0b7465b74da1c7f1bc8688199b2db11e1da5dcb3e76199bbc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid)
Sep 30 17:57:08 compute-0 multipathd[245784]: 3626.869082 | multipathd: shut down
Sep 30 17:57:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:08 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a000031e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:08 compute-0 systemd[1]: libpod-6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1.scope: Deactivated successfully.
Sep 30 17:57:08 compute-0 podman[246782]: 2025-09-30 17:57:08.157930424 +0000 UTC m=+0.102476356 container died 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Sep 30 17:57:08 compute-0 systemd[1]: 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1-7c26be8522d55151.timer: Deactivated successfully.
Sep 30 17:57:08 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1.
Sep 30 17:57:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1-userdata-shm.mount: Deactivated successfully.
Sep 30 17:57:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e7012291df90a26fef772d5e7239d2fa0937dfcc00c31fd8a28b016d29279d1-merged.mount: Deactivated successfully.
Sep 30 17:57:08 compute-0 podman[246782]: 2025-09-30 17:57:08.335087092 +0000 UTC m=+0.279632994 container cleanup 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Sep 30 17:57:08 compute-0 podman[246782]: multipathd
Sep 30 17:57:08 compute-0 podman[246828]: multipathd
Sep 30 17:57:08 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Sep 30 17:57:08 compute-0 systemd[1]: Stopped multipathd container.
Sep 30 17:57:08 compute-0 systemd[1]: Starting multipathd container...
Sep 30 17:57:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:08 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59ec001680 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:08 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:57:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e7012291df90a26fef772d5e7239d2fa0937dfcc00c31fd8a28b016d29279d1/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Sep 30 17:57:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e7012291df90a26fef772d5e7239d2fa0937dfcc00c31fd8a28b016d29279d1/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Sep 30 17:57:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:57:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:57:08.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:57:08 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1.
Sep 30 17:57:08 compute-0 podman[246847]: 2025-09-30 17:57:08.563882362 +0000 UTC m=+0.140487395 container init 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20250930, container_name=multipathd, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Sep 30 17:57:08 compute-0 multipathd[246875]: + sudo -E kolla_set_configs
Sep 30 17:57:08 compute-0 sudo[246900]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Sep 30 17:57:08 compute-0 sudo[246900]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Sep 30 17:57:08 compute-0 podman[246847]: 2025-09-30 17:57:08.594921319 +0000 UTC m=+0.171526332 container start 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, tcib_managed=true, container_name=multipathd, config_id=multipathd, org.label-schema.license=GPLv2)
Sep 30 17:57:08 compute-0 podman[246847]: multipathd
Sep 30 17:57:08 compute-0 systemd[1]: Started multipathd container.
Sep 30 17:57:08 compute-0 multipathd[246875]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Sep 30 17:57:08 compute-0 multipathd[246875]: INFO:__main__:Validating config file
Sep 30 17:57:08 compute-0 multipathd[246875]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Sep 30 17:57:08 compute-0 multipathd[246875]: INFO:__main__:Writing out command to execute
Sep 30 17:57:08 compute-0 sudo[246734]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:08 compute-0 sudo[246900]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:08 compute-0 multipathd[246875]: ++ cat /run_command
Sep 30 17:57:08 compute-0 multipathd[246875]: + CMD='/usr/sbin/multipathd -d'
Sep 30 17:57:08 compute-0 multipathd[246875]: + ARGS=
Sep 30 17:57:08 compute-0 multipathd[246875]: + sudo kolla_copy_cacerts
Sep 30 17:57:08 compute-0 sudo[246934]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Sep 30 17:57:08 compute-0 sudo[246934]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Sep 30 17:57:08 compute-0 sudo[246934]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:08 compute-0 multipathd[246875]: + [[ ! -n '' ]]
Sep 30 17:57:08 compute-0 multipathd[246875]: + . kolla_extend_start
Sep 30 17:57:08 compute-0 multipathd[246875]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Sep 30 17:57:08 compute-0 multipathd[246875]: Running command: '/usr/sbin/multipathd -d'
Sep 30 17:57:08 compute-0 multipathd[246875]: + umask 0022
Sep 30 17:57:08 compute-0 multipathd[246875]: + exec /usr/sbin/multipathd -d
Sep 30 17:57:08 compute-0 multipathd[246875]: 3627.447628 | multipathd v0.9.9: start up
Sep 30 17:57:08 compute-0 multipathd[246875]: 3627.455912 | reconfigure: setting up paths and maps
Sep 30 17:57:08 compute-0 podman[246902]: 2025-09-30 17:57:08.705108805 +0000 UTC m=+0.097722262 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest)
Sep 30 17:57:08 compute-0 systemd[1]: 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1-26c0fba8a842f8df.service: Main process exited, code=exited, status=1/FAILURE
Sep 30 17:57:08 compute-0 systemd[1]: 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1-26c0fba8a842f8df.service: Failed with result 'exit-code'.
Sep 30 17:57:08 compute-0 lvm[246977]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 17:57:08 compute-0 lvm[246977]: VG ceph_vg0 finished
Sep 30 17:57:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:57:08] "GET /metrics HTTP/1.1" 200 46520 "" "Prometheus/2.51.0"
Sep 30 17:57:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:57:08] "GET /metrics HTTP/1.1" 200 46520 "" "Prometheus/2.51.0"
Sep 30 17:57:08 compute-0 intelligent_knuth[246786]: {}
Sep 30 17:57:08 compute-0 ceph-mon[73755]: pgmap v541: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:08 compute-0 systemd[1]: libpod-c59d7c10afc7fc0b7465b74da1c7f1bc8688199b2db11e1da5dcb3e76199bbc7.scope: Deactivated successfully.
Sep 30 17:57:08 compute-0 systemd[1]: libpod-c59d7c10afc7fc0b7465b74da1c7f1bc8688199b2db11e1da5dcb3e76199bbc7.scope: Consumed 1.245s CPU time.
Sep 30 17:57:08 compute-0 podman[246981]: 2025-09-30 17:57:08.915052115 +0000 UTC m=+0.023456931 container died c59d7c10afc7fc0b7465b74da1c7f1bc8688199b2db11e1da5dcb3e76199bbc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325)
Sep 30 17:57:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-41552d120220d8ac5d61546f8e037ef4f45cfb49d8ffd2800a71dcae157f5257-merged.mount: Deactivated successfully.
Sep 30 17:57:08 compute-0 podman[246981]: 2025-09-30 17:57:08.960020675 +0000 UTC m=+0.068425471 container remove c59d7c10afc7fc0b7465b74da1c7f1bc8688199b2db11e1da5dcb3e76199bbc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_knuth, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:57:08 compute-0 systemd[1]: libpod-conmon-c59d7c10afc7fc0b7465b74da1c7f1bc8688199b2db11e1da5dcb3e76199bbc7.scope: Deactivated successfully.
Sep 30 17:57:09 compute-0 sudo[246526]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:57:09 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:57:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:57:09 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:57:09 compute-0 sudo[247048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 17:57:09 compute-0 sudo[247048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:57:09 compute-0 sudo[247048]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:57:09.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:09 compute-0 sudo[247146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpajaqwaarvaovcjclsmvtlajaeuodkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255028.995337-1760-55360803038979/AnsiballZ_file.py'
Sep 30 17:57:09 compute-0 sudo[247146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:09 compute-0 python3.9[247148]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:57:09 compute-0 sudo[247146]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v542: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 340 B/s rd, 0 op/s
Sep 30 17:57:10 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:57:10 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:57:10 compute-0 ceph-mon[73755]: pgmap v542: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 340 B/s rd, 0 op/s
Sep 30 17:57:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:10 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59ec001680 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:57:10 compute-0 sudo[247300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lquzqveyitcvtkfiaqyprkehteabyeou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255030.0647166-1784-262364781070610/AnsiballZ_file.py'
Sep 30 17:57:10 compute-0 sudo[247300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:10 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59f4000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:57:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:57:10.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:57:10 compute-0 python3.9[247302]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Sep 30 17:57:10 compute-0 sudo[247300]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:11 compute-0 sudo[247452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssaazolwysdbuuengfnbubbatfveztyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255030.8769922-1800-151324002051032/AnsiballZ_modprobe.py'
Sep 30 17:57:11 compute-0 sudo[247452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:57:11.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:11 compute-0 python3.9[247454]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Sep 30 17:57:11 compute-0 kernel: Key type psk registered
Sep 30 17:57:11 compute-0 sudo[247452]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v543: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:12 compute-0 sudo[247613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrgcyflvmzqtslcoohxrggukegwrzwgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255031.7042735-1816-275315726980019/AnsiballZ_stat.py'
Sep 30 17:57:12 compute-0 sudo[247613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:12 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59e0003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:12 compute-0 python3.9[247615]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:57:12 compute-0 sudo[247613]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:12 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a0c009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:57:12.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:12 compute-0 sudo[247737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcotzghrllchinboiuvuhaugnpnbdlgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255031.7042735-1816-275315726980019/AnsiballZ_copy.py'
Sep 30 17:57:12 compute-0 sudo[247737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:12 compute-0 ceph-mon[73755]: pgmap v543: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:12 compute-0 python3.9[247739]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759255031.7042735-1816-275315726980019/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:57:12 compute-0 sudo[247737]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:57:13.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:57:13.568Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 17:57:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:57:13.569Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:57:13 compute-0 sudo[247890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjubmoxqwnspfmdowcpamlhimytasppj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255033.2761402-1848-46797888709916/AnsiballZ_lineinfile.py'
Sep 30 17:57:13 compute-0 sudo[247890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v544: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:13 compute-0 python3.9[247892]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:57:13 compute-0 sudo[247890]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:14 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59d8000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:14 compute-0 sudo[248043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddasmukgnorjnxerodzhssdxoooxcmdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255034.067355-1864-184092189095525/AnsiballZ_systemd.py'
Sep 30 17:57:14 compute-0 sudo[248043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:14 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59f40016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:57:14.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:14 compute-0 ceph-mon[73755]: pgmap v544: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:14 compute-0 python3.9[248045]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 17:57:14 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 30 17:57:14 compute-0 systemd[1]: Stopped Load Kernel Modules.
Sep 30 17:57:14 compute-0 systemd[1]: Stopping Load Kernel Modules...
Sep 30 17:57:14 compute-0 systemd[1]: Starting Load Kernel Modules...
Sep 30 17:57:14 compute-0 systemd[1]: Finished Load Kernel Modules.
Sep 30 17:57:14 compute-0 sudo[248043]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:57:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:57:15.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:57:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:57:15 compute-0 sudo[248200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibgwvcybgrhodqryiaopfliccsfbsirm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255035.1080077-1880-23133287029164/AnsiballZ_setup.py'
Sep 30 17:57:15 compute-0 sudo[248200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v545: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:15 compute-0 python3.9[248203]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 17:57:16 compute-0 sudo[248200]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:16 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59e0003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:16 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a0c009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:57:16.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:16 compute-0 sudo[248288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvtpsmrddjrcmmklwgxvonhnjsytuezd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255035.1080077-1880-23133287029164/AnsiballZ_dnf.py'
Sep 30 17:57:16 compute-0 sudo[248288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:16 compute-0 ceph-mon[73755]: pgmap v545: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:16 compute-0 python3.9[248290]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 17:57:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:57:17.057Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:57:17 compute-0 sshd-session[248208]: Invalid user srv from 45.252.249.158 port 51252
Sep 30 17:57:17 compute-0 sshd-session[248208]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 17:57:17 compute-0 sshd-session[248208]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158
Sep 30 17:57:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:57:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:57:17.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:57:17 compute-0 podman[248292]: 2025-09-30 17:57:17.303426473 +0000 UTC m=+0.110730901 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2)
Sep 30 17:57:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v546: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:18 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59d80016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:18 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59d80016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:57:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:57:18.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:57:18 compute-0 ceph-mon[73755]: pgmap v546: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:57:18] "GET /metrics HTTP/1.1" 200 46520 "" "Prometheus/2.51.0"
Sep 30 17:57:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:57:18] "GET /metrics HTTP/1.1" 200 46520 "" "Prometheus/2.51.0"
Sep 30 17:57:19 compute-0 sshd-session[248208]: Failed password for invalid user srv from 45.252.249.158 port 51252 ssh2
Sep 30 17:57:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:57:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:57:19.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:57:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v547: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:57:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:20 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59e0003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:57:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:20 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a0c009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:57:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:57:20.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:57:20 compute-0 ceph-mon[73755]: pgmap v547: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:57:20 compute-0 sshd-session[248208]: Received disconnect from 45.252.249.158 port 51252:11: Bye Bye [preauth]
Sep 30 17:57:20 compute-0 sshd-session[248208]: Disconnected from invalid user srv 45.252.249.158 port 51252 [preauth]
Sep 30 17:57:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:57:21.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:21 compute-0 unix_chkpwd[248326]: password check failed for user (root)
Sep 30 17:57:21 compute-0 sshd-session[248323]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107  user=root
Sep 30 17:57:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v548: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-crash-compute-0[79187]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Sep 30 17:57:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:22 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59d80016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:57:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:57:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:22 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59d80016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:57:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:57:22.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:57:22 compute-0 ceph-mon[73755]: pgmap v548: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:57:22 compute-0 sshd-session[248323]: Failed password for root from 14.225.220.107 port 54696 ssh2
Sep 30 17:57:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:57:23.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:57:23.570Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:57:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v549: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:23 compute-0 systemd[1]: Reloading.
Sep 30 17:57:23 compute-0 systemd-rc-local-generator[248358]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:57:23 compute-0 systemd-sysv-generator[248365]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:57:24 compute-0 systemd[1]: Reloading.
Sep 30 17:57:24 compute-0 systemd-rc-local-generator[248392]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:57:24 compute-0 systemd-sysv-generator[248399]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:57:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:24 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59e0003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:24 compute-0 sshd-session[248323]: Received disconnect from 14.225.220.107 port 54696:11: Bye Bye [preauth]
Sep 30 17:57:24 compute-0 sshd-session[248323]: Disconnected from authenticating user root 14.225.220.107 port 54696 [preauth]
Sep 30 17:57:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:24 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a0c009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:24 compute-0 systemd-logind[811]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 30 17:57:24 compute-0 systemd-logind[811]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Sep 30 17:57:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:57:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:57:24.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:57:24 compute-0 lvm[248440]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 17:57:24 compute-0 lvm[248440]: VG ceph_vg0 finished
Sep 30 17:57:24 compute-0 sudo[248449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:57:24 compute-0 sudo[248449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:57:24 compute-0 sudo[248449]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:24 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Sep 30 17:57:24 compute-0 systemd[1]: Starting man-db-cache-update.service...
Sep 30 17:57:24 compute-0 systemd[1]: Reloading.
Sep 30 17:57:24 compute-0 ceph-mon[73755]: pgmap v549: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:24 compute-0 systemd-rc-local-generator[248514]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:57:24 compute-0 systemd-sysv-generator[248520]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:57:25 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Sep 30 17:57:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:57:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:57:25.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:57:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:57:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v550: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:25 compute-0 sudo[248288]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:26 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59d80016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:26 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Sep 30 17:57:26 compute-0 systemd[1]: Finished man-db-cache-update.service.
Sep 30 17:57:26 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.557s CPU time.
Sep 30 17:57:26 compute-0 systemd[1]: run-r668a23a84b524477beb9a1da317db22b.service: Deactivated successfully.
Sep 30 17:57:26 compute-0 sudo[249807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wswlvlizrihmneyetvrywhbmxebnmeob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255045.9800913-1904-31147877512236/AnsiballZ_file.py'
Sep 30 17:57:26 compute-0 sudo[249807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:26 compute-0 python3.9[249809]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:57:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:26 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59f4002160 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:26 compute-0 sudo[249807]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:57:26.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:26 compute-0 ceph-mon[73755]: pgmap v550: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:57:27.057Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:57:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:57:27.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:27 compute-0 python3.9[249959]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:57:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v551: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:28 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59e0003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:28 compute-0 sudo[250115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdbbfnryfmqwdknkuzqpacefpscliiyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255048.0690002-1939-161720100450856/AnsiballZ_file.py'
Sep 30 17:57:28 compute-0 sudo[250115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:28 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a0c009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:57:28.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:28 compute-0 python3.9[250117]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:57:28 compute-0 sudo[250115]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:57:28] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:57:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:57:28] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:57:28 compute-0 ceph-mon[73755]: pgmap v551: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:57:29.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v552: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:57:29 compute-0 sudo[250279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfzveykxpbxldrnbpivngawpsfxmlmmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255049.1735222-1961-141918642664395/AnsiballZ_systemd_service.py'
Sep 30 17:57:29 compute-0 sudo[250279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:29 compute-0 podman[250243]: 2025-09-30 17:57:29.881226175 +0000 UTC m=+0.055862894 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest)
Sep 30 17:57:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:30 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59d80016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:30 compute-0 python3.9[250285]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Sep 30 17:57:30 compute-0 systemd[1]: Reloading.
Sep 30 17:57:30 compute-0 systemd-rc-local-generator[250315]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:57:30 compute-0 systemd-sysv-generator[250319]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:57:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:57:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:30 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59f4002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:57:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:57:30.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:57:30 compute-0 sudo[250279]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:30 compute-0 ceph-mon[73755]: pgmap v552: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:57:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:57:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:57:31.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:57:31 compute-0 python3.9[250472]: ansible-ansible.builtin.service_facts Invoked
Sep 30 17:57:31 compute-0 network[250489]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Sep 30 17:57:31 compute-0 network[250490]: 'network-scripts' will be removed from distribution in near future.
Sep 30 17:57:31 compute-0 network[250491]: It is advised to switch to 'NetworkManager' instead for network management.
Sep 30 17:57:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v553: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:32 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59e0003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:32 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a0c009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:57:32.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:32 compute-0 ceph-mon[73755]: pgmap v553: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:57:33.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:57:33.570Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 17:57:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:57:33.570Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:57:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v554: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:34 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59d80036e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:34 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59f4002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:57:34.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:34 compute-0 ceph-mon[73755]: pgmap v554: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:57:35.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:57:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v555: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:36 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59e0003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:36 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a0c009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:36 compute-0 podman[250648]: 2025-09-30 17:57:36.540307269 +0000 UTC m=+0.070842744 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Sep 30 17:57:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:57:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:57:36.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:57:36 compute-0 ceph-mon[73755]: pgmap v555: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:57:37.058Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:57:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:57:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:57:37.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:57:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:57:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:57:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:57:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:57:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:57:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:57:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:57:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:57:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v556: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:37 compute-0 sudo[250796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzpswucyehwejufnlcbgufoozykicupy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255057.5688136-1999-182163337444523/AnsiballZ_systemd_service.py'
Sep 30 17:57:37 compute-0 sudo[250796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:57:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:38 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59d80036e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:38 compute-0 python3.9[250798]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:57:38 compute-0 sudo[250796]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:38 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59f4003820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:57:38.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:38 compute-0 sudo[250949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jircviyoayaulhapmrxxakwgeuyvgdkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255058.421883-1999-77375599333972/AnsiballZ_systemd_service.py'
Sep 30 17:57:38 compute-0 sudo[250949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:57:38] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:57:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:57:38] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:57:38 compute-0 podman[250951]: 2025-09-30 17:57:38.868576621 +0000 UTC m=+0.076602234 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 17:57:39 compute-0 ceph-mon[73755]: pgmap v556: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:39 compute-0 python3.9[250952]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:57:39 compute-0 sudo[250949]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:57:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:57:39.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:57:39 compute-0 sudo[251124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wchhoofqyfafyfrczgncyepgvipdebqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255059.268678-1999-156515079780251/AnsiballZ_systemd_service.py'
Sep 30 17:57:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v557: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:57:39 compute-0 sudo[251124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:39 compute-0 python3.9[251126]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:57:39 compute-0 sudo[251124]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:40 compute-0 ceph-mon[73755]: pgmap v557: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:57:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:40 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59e0003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:57:40 compute-0 sudo[251279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqochhtheodbtwuqkuaqajysxrrohikn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255060.0960467-1999-1576920143095/AnsiballZ_systemd_service.py'
Sep 30 17:57:40 compute-0 sudo[251279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:40 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a0c009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:57:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:57:40.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:57:40 compute-0 python3.9[251281]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:57:40 compute-0 sudo[251279]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:41 compute-0 sudo[251432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfwoodpfplctcjsdphwasbwpzmcivvdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255060.8893132-1999-62457221283156/AnsiballZ_systemd_service.py'
Sep 30 17:57:41 compute-0 sudo[251432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:57:41.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:41 compute-0 python3.9[251434]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:57:41 compute-0 sudo[251432]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v558: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:42 compute-0 sudo[251587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kczyqrnacrqdrfjfsewgnvycjthbcqsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255061.7503936-1999-115588798041911/AnsiballZ_systemd_service.py'
Sep 30 17:57:42 compute-0 sudo[251587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:42 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a00001740 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:42 compute-0 python3.9[251589]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:57:42 compute-0 sudo[251587]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:42 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59f4003820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:57:42.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:42 compute-0 ceph-mon[73755]: pgmap v558: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:42 compute-0 sudo[251740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gerzchwzvaagxxmpjjvirmayjkmmrexa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255062.5806355-1999-261135995079197/AnsiballZ_systemd_service.py'
Sep 30 17:57:42 compute-0 sudo[251740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:57:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:57:43.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:57:43 compute-0 python3.9[251742]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:57:43 compute-0 sudo[251740]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:57:43.572Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:57:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v559: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:43 compute-0 sudo[251895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uikzimvragpngqjddayrwnpvomluglsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255063.423263-1999-199832474873794/AnsiballZ_systemd_service.py'
Sep 30 17:57:43 compute-0 sudo[251895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:43 compute-0 python3.9[251897]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:57:44 compute-0 sudo[251895]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:44 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59e0003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:44 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a0c009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:57:44.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:44 compute-0 ceph-mon[73755]: pgmap v559: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:44 compute-0 sudo[251924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:57:44 compute-0 sudo[251924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:57:44 compute-0 sudo[251924]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:45 compute-0 sudo[252074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdyduxunqfvxypqkejrqcqdphldkqhkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255064.9002376-2117-177060924004503/AnsiballZ_file.py'
Sep 30 17:57:45 compute-0 sudo[252074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:57:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:57:45.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:57:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:57:45 compute-0 python3.9[252076]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:57:45 compute-0 sudo[252074]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v560: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:45 compute-0 sudo[252228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzkgolvcjiccwtdlrejyqhjdbbvbftgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255065.5739872-2117-238251322079308/AnsiballZ_file.py'
Sep 30 17:57:45 compute-0 sudo[252228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:46 compute-0 python3.9[252230]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:57:46 compute-0 sudo[252228]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:46 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59f4003820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:46 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59ec000f60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:57:46.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:46 compute-0 sudo[252380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-behpkgynjtymvpwuqnwlnrhxtktqbhlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255066.288331-2117-91229182978758/AnsiballZ_file.py'
Sep 30 17:57:46 compute-0 sudo[252380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:46 compute-0 ceph-mon[73755]: pgmap v560: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:46 compute-0 python3.9[252382]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:57:46 compute-0 sudo[252380]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:57:47.060Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 17:57:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:57:47.060Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:57:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:57:47.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:47 compute-0 sudo[252532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgbquqnrclkqvuflbeyrkilfhmuoxuga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255067.0352101-2117-122944723263145/AnsiballZ_file.py'
Sep 30 17:57:47 compute-0 sudo[252532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:47 compute-0 podman[252535]: 2025-09-30 17:57:47.479459993 +0000 UTC m=+0.104877618 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_managed=true)
Sep 30 17:57:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v561: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:47 compute-0 python3.9[252536]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:57:47 compute-0 sudo[252532]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:48 compute-0 sudo[252712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwyzsajqrrxwdwoillnmqcmkutviimzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255067.7868319-2117-270794410004971/AnsiballZ_file.py'
Sep 30 17:57:48 compute-0 sudo[252712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:48 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59e0003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:48 compute-0 python3.9[252714]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:57:48 compute-0 sudo[252712]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:48 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a0c009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:57:48.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:48 compute-0 sudo[252864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vulspwblkzspqsbgitlrjhianolzqofm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255068.4474516-2117-266618264321830/AnsiballZ_file.py'
Sep 30 17:57:48 compute-0 sudo[252864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:48 compute-0 ceph-mon[73755]: pgmap v561: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:57:48] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:57:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:57:48] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:57:48 compute-0 python3.9[252866]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:57:48 compute-0 sudo[252864]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:57:49.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:49 compute-0 sudo[253016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogoliqwoonhmbpcsldhzyirhphbwsjvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255069.07624-2117-3623201882574/AnsiballZ_file.py'
Sep 30 17:57:49 compute-0 sudo[253016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:49 compute-0 python3.9[253019]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:57:49 compute-0 sudo[253016]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v562: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:57:49 compute-0 sudo[253170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acelwvfrrwymfagvgggxzysbivacigee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255069.7114503-2117-135991156798682/AnsiballZ_file.py'
Sep 30 17:57:49 compute-0 sudo[253170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:50 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a0c009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:50 compute-0 python3.9[253172]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:57:50 compute-0 sudo[253170]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:57:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:50 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59ec000f60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:57:50.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:50 compute-0 ceph-mon[73755]: pgmap v562: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:57:51 compute-0 sudo[253322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sghvazzhmuqmaworgxvhfbrcctvdvhcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255070.8585541-2231-205709600126770/AnsiballZ_file.py'
Sep 30 17:57:51 compute-0 sudo[253322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:57:51.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:51 compute-0 python3.9[253324]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:57:51 compute-0 sudo[253322]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v563: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:51 compute-0 sudo[253476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tozqcibgugaqfatkvcvsuuibozqcxvrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255071.5471294-2231-196967468449219/AnsiballZ_file.py'
Sep 30 17:57:51 compute-0 sudo[253476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:52 compute-0 python3.9[253478]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:57:52 compute-0 sudo[253476]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:52 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59e0003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:57:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
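The two mon entries above show the mgr's periodic blocklist query arriving as a mon_command with prefix "osd blocklist ls" and JSON formatting. The same query can be issued from the host; a minimal Python sketch (assuming the ceph client and an admin keyring are available on this node) might look like:

    import json, subprocess

    # Same query the mgr dispatches above: list OSD blocklist entries as JSON.
    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(out) if out.strip() else [])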
Sep 30 17:57:52 compute-0 sudo[253628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdbbuolynipsvurpfhsfspbouqijwfhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255072.1596823-2231-190757798262364/AnsiballZ_file.py'
Sep 30 17:57:52 compute-0 sudo[253628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:52 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a0c009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:57:52.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:52 compute-0 python3.9[253630]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:57:52 compute-0 sudo[253628]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:52 compute-0 ceph-mon[73755]: pgmap v563: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:57:53 compute-0 sudo[253780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orctkorcimzcwuabxbxgapcobewvvmmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255072.788588-2231-77173940002221/AnsiballZ_file.py'
Sep 30 17:57:53 compute-0 sudo[253780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:57:53.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:53 compute-0 python3.9[253782]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:57:53 compute-0 sudo[253780]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:57:53.573Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:57:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v564: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:53 compute-0 sudo[253934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kebimwzxetvbifvxustammmymqxxmmnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255073.4226317-2231-176276006661579/AnsiballZ_file.py'
Sep 30 17:57:53 compute-0 sudo[253934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:53 compute-0 python3.9[253936]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:57:53 compute-0 sudo[253934]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:54 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59f4003820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:57:54.248 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 17:57:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:57:54.250 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 17:57:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:57:54.250 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 17:57:54 compute-0 sudo[254087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyxjxjmyuchodelvizqwzswjdjpnroxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255074.091717-2231-251458667847935/AnsiballZ_file.py'
Sep 30 17:57:54 compute-0 sudo[254087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:54 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59f4003820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:54 compute-0 python3.9[254089]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:57:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:57:54.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:54 compute-0 sudo[254087]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:54 compute-0 ceph-mon[73755]: pgmap v564: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:55 compute-0 sudo[254239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kajztljgjcsxqwnxviptnhbjobwcgchn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255074.7190194-2231-53069845114750/AnsiballZ_file.py'
Sep 30 17:57:55 compute-0 sudo[254239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:55 compute-0 python3.9[254241]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:57:55 compute-0 sudo[254239]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 17:57:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:57:55.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 17:57:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:57:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v565: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:55 compute-0 sudo[254392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lydyijwcvvswotrzcafiyntlsghyolwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255075.3357747-2231-212989828632310/AnsiballZ_file.py'
Sep 30 17:57:55 compute-0 sudo[254392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:55 compute-0 python3.9[254394]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:57:55 compute-0 sudo[254392]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:56 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59e0003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:56 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a0c009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:57:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:57:56.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:57:56 compute-0 ceph-mon[73755]: pgmap v565: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:56 compute-0 sudo[254545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aozkucjirkmgbmzbpdoqnkelyyftqjcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255076.5569816-2347-188090044517317/AnsiballZ_command.py'
Sep 30 17:57:56 compute-0 sudo[254545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:57:57.061Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:57:57 compute-0 python3.9[254547]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
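The ansible.legacy.command entry above records a small shell guard: certmonger is disabled only if it is currently active, and masked only when no local override unit exists under /etc/systemd/system. A rough Python equivalent of that recorded logic (the subprocess wrapper and function name are illustrative, not part of the deployment code) is:

    import os
    import subprocess

    def disable_certmonger():
        # Only act if the unit is currently active, as in the logged shell test.
        if subprocess.run(["systemctl", "is-active", "certmonger.service"]).returncode == 0:
            subprocess.run(["systemctl", "disable", "--now", "certmonger.service"], check=True)
            # Mask only when no local unit file exists under /etc/systemd/system.
            if not os.path.exists("/etc/systemd/system/certmonger.service"):
                subprocess.run(["systemctl", "mask", "certmonger.service"], check=True)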
Sep 30 17:57:57 compute-0 sudo[254545]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:57:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:57:57.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:57:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v566: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:58 compute-0 python3.9[254701]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Sep 30 17:57:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:58 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59f4003820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:57:58 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59ec001e50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:57:58 compute-0 sudo[254851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyqecuhuknnjhnzxjebzxhvuobkbepet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255078.3059103-2383-268692577982132/AnsiballZ_systemd_service.py'
Sep 30 17:57:58 compute-0 sudo[254851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:57:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:57:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:57:58.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:57:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:57:58] "GET /metrics HTTP/1.1" 200 46521 "" "Prometheus/2.51.0"
Sep 30 17:57:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:57:58] "GET /metrics HTTP/1.1" 200 46521 "" "Prometheus/2.51.0"
Sep 30 17:57:58 compute-0 ceph-mon[73755]: pgmap v566: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:57:58 compute-0 python3.9[254853]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Sep 30 17:57:58 compute-0 systemd[1]: Reloading.
Sep 30 17:57:59 compute-0 systemd-sysv-generator[254880]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:57:59 compute-0 systemd-rc-local-generator[254877]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:57:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:57:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:57:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:57:59.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:57:59 compute-0 sudo[254851]: pam_unix(sudo:session): session closed for user root
Sep 30 17:57:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v567: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:57:59 compute-0 sudo[255040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qleganmjhubjvhrfkvjqmnsvxjpvxone ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255079.6016166-2399-129540771926259/AnsiballZ_command.py'
Sep 30 17:57:59 compute-0 sudo[255040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:00 compute-0 python3.9[255042]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:58:00 compute-0 sudo[255040]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:00 compute-0 podman[255044]: 2025-09-30 17:58:00.179322269 +0000 UTC m=+0.061568552 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4)
Sep 30 17:58:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:58:00 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59e0003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:58:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:58:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:58:00 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a0c009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:58:00 compute-0 sudo[255212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdhratmxdaidzkroksinxihbezirzgsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255080.2701788-2399-12124864104770/AnsiballZ_command.py'
Sep 30 17:58:00 compute-0 sudo[255212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:58:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:58:00.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:58:00 compute-0 python3.9[255214]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:58:00 compute-0 sudo[255212]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:00 compute-0 ceph-mon[73755]: pgmap v567: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:58:01 compute-0 sudo[255365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okbfqxhlvqgtznepojuivdohkkzzolak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255080.9468422-2399-194037331947410/AnsiballZ_command.py'
Sep 30 17:58:01 compute-0 sudo[255365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:58:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:58:01.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:58:01 compute-0 python3.9[255367]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:58:01 compute-0 sudo[255365]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v568: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:58:01 compute-0 sudo[255520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khzpwjpmhxuizmypvwekwifnsaqmlhks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255081.5797467-2399-233914769444746/AnsiballZ_command.py'
Sep 30 17:58:01 compute-0 sudo[255520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:02 compute-0 python3.9[255522]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:58:02 compute-0 sudo[255520]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:58:02 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59f4003820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:58:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:58:02 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59ec002770 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:58:02 compute-0 sudo[255673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpudbfknbwgeulxodbvmakecqvndjbut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255082.304446-2399-168019954147314/AnsiballZ_command.py'
Sep 30 17:58:02 compute-0 sudo[255673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:58:02.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:02 compute-0 python3.9[255675]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:58:02 compute-0 sudo[255673]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:02 compute-0 ceph-mon[73755]: pgmap v568: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:58:03 compute-0 sudo[255826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltlvxojqruuwubwoghjswyccoplatikm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255082.9362288-2399-92299151439095/AnsiballZ_command.py'
Sep 30 17:58:03 compute-0 sudo[255826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:58:03.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:03 compute-0 python3.9[255828]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:58:03 compute-0 sudo[255826]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:58:03.574Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:58:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v569: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:58:03 compute-0 sudo[255981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udxrkleithkzsyxxikdmezobwufwsjbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255083.567877-2399-113178615515726/AnsiballZ_command.py'
Sep 30 17:58:03 compute-0 sudo[255981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:04 compute-0 python3.9[255983]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:58:04 compute-0 sudo[255981]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:58:04 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59e0003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:58:04 compute-0 sudo[256134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usfcydpjyhavholfiyokwclbnskqjlkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255084.1913157-2399-239388147087184/AnsiballZ_command.py'
Sep 30 17:58:04 compute-0 sudo[256134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:58:04 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a0c009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:58:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:58:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:58:04.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:58:04 compute-0 python3.9[256136]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:58:04 compute-0 sudo[256134]: pam_unix(sudo:session): session closed for user root
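Taken together, the ansible.builtin.file, systemd_service and reset-failed entries in this stretch describe one cleanup pass: delete the tripleo_nova_* unit files from /usr/lib/systemd/system and /etc/systemd/system, reload systemd, then clear any failed state for those units. A condensed Python sketch of that sequence (unit names copied from the log entries above; the loop itself is an illustration, not the playbook's code) could be:

    import os
    import subprocess

    UNITS = [
        "tripleo_nova_compute.service",
        "tripleo_nova_migration_target.service",
        "tripleo_nova_api_cron.service",
        "tripleo_nova_api.service",
        "tripleo_nova_conductor.service",
        "tripleo_nova_metadata.service",
        "tripleo_nova_scheduler.service",
        "tripleo_nova_vnc_proxy.service",
    ]

    # Remove packaged and locally installed unit files (state=absent in the log).
    for unit in UNITS:
        for prefix in ("/usr/lib/systemd/system", "/etc/systemd/system"):
            path = os.path.join(prefix, unit)
            if os.path.exists(path):
                os.remove(path)

    subprocess.run(["systemctl", "daemon-reload"], check=True)   # "Reloading." above
    for unit in UNITS:
        # Clear leftover failed state for the now-removed units.
        subprocess.run(["systemctl", "reset-failed", unit], check=False)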
Sep 30 17:58:04 compute-0 sudo[256162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:58:04 compute-0 sudo[256162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:58:04 compute-0 sudo[256162]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:04 compute-0 ceph-mon[73755]: pgmap v569: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:58:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:58:05.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:58:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v570: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:58:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:58:06 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a0c009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:58:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:58:06 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59ec002770 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:58:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:58:06.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:07 compute-0 ceph-mon[73755]: pgmap v570: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:58:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:58:07.062Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:58:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:58:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:58:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:58:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:58:07.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
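The pg_autoscaler entries above all fit one proportion: each pool's pg target is its fraction of raw capacity times its bias times a constant of 200, with quantization to PG counts applied afterwards. The 200 is inferred from these numbers only; in Ceph it would derive from the target PGs-per-OSD setting and the OSD count, which is an assumption here. A short Python check, using usage fractions and biases copied from the entries above, reproduces the logged targets to within floating-point rounding:

    # Usage fraction and bias per pool, copied from the pg_autoscaler log lines.
    FACTOR = 200  # inferred from the logged values; derivation is an assumption
    pools = {
        ".mgr":               (1.0778624975581169e-05, 1.0),
        "cephfs.cephfs.meta": (7.630884938464543e-07, 4.0),
        ".nfs":               (9.538606173080679e-08, 1.0),
        ".rgw.root":          (5.723163703848408e-07, 1.0),
        "default.rgw.log":    (3.243126098847431e-06, 1.0),
        "default.rgw.meta":   (1.9077212346161359e-07, 4.0),
    }
    for name, (used_fraction, bias) in pools.items():
        pg_target = used_fraction * bias * FACTOR
        # Compare with the "pg target" values logged above for each pool.
        print(f"{name}: pg target {pg_target}")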
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_17:58:07
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['vms', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', '.nfs', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', 'backups', '.mgr', 'default.rgw.control']
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:58:07 compute-0 podman[256190]: 2025-09-30 17:58:07.531110428 +0000 UTC m=+0.067307482 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, container_name=iscsid, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_build_tag=watcher_latest)
Sep 30 17:58:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v571: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:58:07 compute-0 sudo[256337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwukyeegilwfrlxxqzffavaiansguhms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255087.531181-2542-243356102510324/AnsiballZ_file.py'
Sep 30 17:58:07 compute-0 sudo[256337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:58:08 compute-0 python3.9[256339]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:58:08 compute-0 sudo[256337]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:58:08 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59e0003c30 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:58:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:58:08 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5a0c009ec0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:58:08 compute-0 sudo[256489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adnknthurmfuiehujkiopriqdkcjhuhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255088.2471888-2542-125812365509135/AnsiballZ_file.py'
Sep 30 17:58:08 compute-0 sudo[256489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:58:08.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:08 compute-0 python3.9[256491]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:58:08 compute-0 sudo[256489]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:58:08] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:58:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:58:08] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:58:09 compute-0 ceph-mon[73755]: pgmap v571: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:58:09 compute-0 sudo[256653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbwbvfdrdjeqhdpnyspxlsfhfqcdzaac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255088.8981965-2542-177988045912063/AnsiballZ_file.py'
Sep 30 17:58:09 compute-0 sudo[256653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:09 compute-0 podman[256615]: 2025-09-30 17:58:09.184238381 +0000 UTC m=+0.057633890 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=multipathd, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Sep 30 17:58:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:58:09.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:09 compute-0 sudo[256663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:58:09 compute-0 sudo[256663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:58:09 compute-0 sudo[256663]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:09 compute-0 python3.9[256662]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:58:09 compute-0 sudo[256688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Sep 30 17:58:09 compute-0 sudo[256688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:58:09 compute-0 sudo[256653]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v572: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:58:09 compute-0 sudo[256688]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:58:09 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:58:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:58:09 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:58:09 compute-0 sudo[256758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:58:09 compute-0 sudo[256758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:58:09 compute-0 sudo[256758]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:09 compute-0 sudo[256783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 17:58:09 compute-0 sudo[256783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:58:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 17:58:09 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:58:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 17:58:09 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:58:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:58:10 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59f4003820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:58:10 compute-0 sudo[256953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awmwhqmzydtlplyrajyyzkgsodrtlrag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255089.9812047-2586-35686966782022/AnsiballZ_file.py'
Sep 30 17:58:10 compute-0 sudo[256953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:58:10 compute-0 sudo[256783]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:10 compute-0 python3.9[256955]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:58:10 compute-0 sudo[256953]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:58:10 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59ec002770 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:58:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:58:10.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:10 compute-0 ceph-mon[73755]: pgmap v572: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:58:10 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:58:10 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:58:10 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:58:10 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:58:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:58:10 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:58:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 17:58:10 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:58:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 17:58:10 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:58:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 17:58:10 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:58:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 17:58:10 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:58:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 17:58:10 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:58:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:58:10 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:58:10 compute-0 sudo[257120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzfnvybtnzwhlwualhsvzuirbtvbafhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255090.6262796-2586-28542545101948/AnsiballZ_file.py'
Sep 30 17:58:10 compute-0 sudo[257120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:10 compute-0 sudo[257116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:58:10 compute-0 sudo[257116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:58:10 compute-0 sudo[257116]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:10 compute-0 sudo[257145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 17:58:10 compute-0 sudo[257145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:58:11 compute-0 python3.9[257140]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:58:11 compute-0 sudo[257120]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:58:11.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:11 compute-0 podman[257285]: 2025-09-30 17:58:11.433690162 +0000 UTC m=+0.044537109 container create 0b91329e93f18358032bcc9279910f68e642ca2a6d1443ee6c921a5706a3b79c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Sep 30 17:58:11 compute-0 systemd[1]: Started libpod-conmon-0b91329e93f18358032bcc9279910f68e642ca2a6d1443ee6c921a5706a3b79c.scope.
Sep 30 17:58:11 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:58:11 compute-0 podman[257285]: 2025-09-30 17:58:11.415457828 +0000 UTC m=+0.026304815 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:58:11 compute-0 podman[257285]: 2025-09-30 17:58:11.511801814 +0000 UTC m=+0.122648861 container init 0b91329e93f18358032bcc9279910f68e642ca2a6d1443ee6c921a5706a3b79c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_pike, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Sep 30 17:58:11 compute-0 podman[257285]: 2025-09-30 17:58:11.518760845 +0000 UTC m=+0.129607802 container start 0b91329e93f18358032bcc9279910f68e642ca2a6d1443ee6c921a5706a3b79c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:58:11 compute-0 podman[257285]: 2025-09-30 17:58:11.522360578 +0000 UTC m=+0.133207555 container attach 0b91329e93f18358032bcc9279910f68e642ca2a6d1443ee6c921a5706a3b79c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_pike, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:58:11 compute-0 modest_pike[257325]: 167 167
Sep 30 17:58:11 compute-0 systemd[1]: libpod-0b91329e93f18358032bcc9279910f68e642ca2a6d1443ee6c921a5706a3b79c.scope: Deactivated successfully.
Sep 30 17:58:11 compute-0 podman[257285]: 2025-09-30 17:58:11.526948398 +0000 UTC m=+0.137795375 container died 0b91329e93f18358032bcc9279910f68e642ca2a6d1443ee6c921a5706a3b79c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Sep 30 17:58:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc104b9f751c498f355d65bda54754dcf7a8da50b1b0f2724813f49d93b1dc1b-merged.mount: Deactivated successfully.
Sep 30 17:58:11 compute-0 podman[257285]: 2025-09-30 17:58:11.566334162 +0000 UTC m=+0.177181109 container remove 0b91329e93f18358032bcc9279910f68e642ca2a6d1443ee6c921a5706a3b79c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_pike, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Sep 30 17:58:11 compute-0 systemd[1]: libpod-conmon-0b91329e93f18358032bcc9279910f68e642ca2a6d1443ee6c921a5706a3b79c.scope: Deactivated successfully.
Sep 30 17:58:11 compute-0 sudo[257391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbblenuomwtwktwulwwbxlbltgzinkgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255091.3184228-2586-89293140517654/AnsiballZ_file.py'
Sep 30 17:58:11 compute-0 sudo[257391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v573: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:58:11 compute-0 podman[257402]: 2025-09-30 17:58:11.718794877 +0000 UTC m=+0.040179356 container create 92488e3f8a472fff09db1d03e2ca3dc04fe76b8c769de87ae8684cdef494435d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Sep 30 17:58:11 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:58:11 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:58:11 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:58:11 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:58:11 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:58:11 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:58:11 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:58:11 compute-0 systemd[1]: Started libpod-conmon-92488e3f8a472fff09db1d03e2ca3dc04fe76b8c769de87ae8684cdef494435d.scope.
Sep 30 17:58:11 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db3e6435d1d64f50b23d413941eb8cd9a449734f0c8bab4de561f2603894255c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db3e6435d1d64f50b23d413941eb8cd9a449734f0c8bab4de561f2603894255c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db3e6435d1d64f50b23d413941eb8cd9a449734f0c8bab4de561f2603894255c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db3e6435d1d64f50b23d413941eb8cd9a449734f0c8bab4de561f2603894255c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db3e6435d1d64f50b23d413941eb8cd9a449734f0c8bab4de561f2603894255c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:58:11 compute-0 podman[257402]: 2025-09-30 17:58:11.70238889 +0000 UTC m=+0.023773369 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:58:11 compute-0 podman[257402]: 2025-09-30 17:58:11.808657384 +0000 UTC m=+0.130041883 container init 92488e3f8a472fff09db1d03e2ca3dc04fe76b8c769de87ae8684cdef494435d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Sep 30 17:58:11 compute-0 podman[257402]: 2025-09-30 17:58:11.823191952 +0000 UTC m=+0.144576421 container start 92488e3f8a472fff09db1d03e2ca3dc04fe76b8c769de87ae8684cdef494435d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_thompson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 17:58:11 compute-0 podman[257402]: 2025-09-30 17:58:11.826622851 +0000 UTC m=+0.148007360 container attach 92488e3f8a472fff09db1d03e2ca3dc04fe76b8c769de87ae8684cdef494435d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Sep 30 17:58:11 compute-0 python3.9[257395]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:58:11 compute-0 sudo[257391]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:12 compute-0 objective_thompson[257419]: --> passed data devices: 0 physical, 1 LVM
Sep 30 17:58:12 compute-0 objective_thompson[257419]: --> All data devices are unavailable
Sep 30 17:58:12 compute-0 kernel: ganesha.nfsd[242070]: segfault at 50 ip 00007f5ab620532e sp 00007f5a6e7fb210 error 4 in libntirpc.so.5.8[7f5ab61ea000+2c000] likely on CPU 4 (core 0, socket 4)
Sep 30 17:58:12 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Sep 30 17:58:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[239517]: 30/09/2025 17:58:12 : epoch 68dc19ca : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f59e0003c50 fd 37 proxy ignored for local
Sep 30 17:58:12 compute-0 podman[257402]: 2025-09-30 17:58:12.230440063 +0000 UTC m=+0.551824542 container died 92488e3f8a472fff09db1d03e2ca3dc04fe76b8c769de87ae8684cdef494435d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_thompson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:58:12 compute-0 systemd[1]: Started Process Core Dump (PID 257552/UID 0).
Sep 30 17:58:12 compute-0 systemd[1]: libpod-92488e3f8a472fff09db1d03e2ca3dc04fe76b8c769de87ae8684cdef494435d.scope: Deactivated successfully.
Sep 30 17:58:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-db3e6435d1d64f50b23d413941eb8cd9a449734f0c8bab4de561f2603894255c-merged.mount: Deactivated successfully.
Sep 30 17:58:12 compute-0 podman[257402]: 2025-09-30 17:58:12.305806763 +0000 UTC m=+0.627191232 container remove 92488e3f8a472fff09db1d03e2ca3dc04fe76b8c769de87ae8684cdef494435d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_thompson, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Sep 30 17:58:12 compute-0 sudo[257598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxrxgktxzsgxosvlghgbecxywnqsybeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255092.0155284-2586-37164687471615/AnsiballZ_file.py'
Sep 30 17:58:12 compute-0 sudo[257598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:12 compute-0 systemd[1]: libpod-conmon-92488e3f8a472fff09db1d03e2ca3dc04fe76b8c769de87ae8684cdef494435d.scope: Deactivated successfully.
Sep 30 17:58:12 compute-0 sudo[257145]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:12 compute-0 sudo[257601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:58:12 compute-0 sudo[257601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:58:12 compute-0 sudo[257601]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:12 compute-0 sudo[257626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 17:58:12 compute-0 sudo[257626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:58:12 compute-0 python3.9[257600]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:58:12 compute-0 sudo[257598]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:58:12.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:12 compute-0 podman[257787]: 2025-09-30 17:58:12.863463867 +0000 UTC m=+0.042706712 container create a70bd7cb6e4201f309498b52642064db2b08de07a87f05e93db354248931a3cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_jones, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:58:12 compute-0 ceph-mon[73755]: pgmap v573: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:58:12 compute-0 systemd[1]: Started libpod-conmon-a70bd7cb6e4201f309498b52642064db2b08de07a87f05e93db354248931a3cb.scope.
Sep 30 17:58:12 compute-0 podman[257787]: 2025-09-30 17:58:12.846107895 +0000 UTC m=+0.025350770 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:58:12 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:58:12 compute-0 podman[257787]: 2025-09-30 17:58:12.960801688 +0000 UTC m=+0.140044553 container init a70bd7cb6e4201f309498b52642064db2b08de07a87f05e93db354248931a3cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_jones, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Sep 30 17:58:12 compute-0 podman[257787]: 2025-09-30 17:58:12.970981823 +0000 UTC m=+0.150224658 container start a70bd7cb6e4201f309498b52642064db2b08de07a87f05e93db354248931a3cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:58:12 compute-0 podman[257787]: 2025-09-30 17:58:12.97470628 +0000 UTC m=+0.153949125 container attach a70bd7cb6e4201f309498b52642064db2b08de07a87f05e93db354248931a3cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_jones, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Sep 30 17:58:12 compute-0 gallant_jones[257830]: 167 167
Sep 30 17:58:12 compute-0 systemd[1]: libpod-a70bd7cb6e4201f309498b52642064db2b08de07a87f05e93db354248931a3cb.scope: Deactivated successfully.
Sep 30 17:58:12 compute-0 podman[257787]: 2025-09-30 17:58:12.979036382 +0000 UTC m=+0.158279237 container died a70bd7cb6e4201f309498b52642064db2b08de07a87f05e93db354248931a3cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_jones, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Sep 30 17:58:12 compute-0 sudo[257860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbnvdbwgirkdayceawizowatplaxwbfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255092.6993928-2586-176475860196353/AnsiballZ_file.py'
Sep 30 17:58:12 compute-0 sudo[257860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b4530bd0db2be7e7c29b7fb29142268a65f841ed70326bd6c6486209aa3d5a7-merged.mount: Deactivated successfully.
Sep 30 17:58:13 compute-0 podman[257787]: 2025-09-30 17:58:13.02161464 +0000 UTC m=+0.200857485 container remove a70bd7cb6e4201f309498b52642064db2b08de07a87f05e93db354248931a3cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:58:13 compute-0 systemd[1]: libpod-conmon-a70bd7cb6e4201f309498b52642064db2b08de07a87f05e93db354248931a3cb.scope: Deactivated successfully.
Sep 30 17:58:13 compute-0 python3.9[257864]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:58:13 compute-0 sudo[257860]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:58:13.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:13 compute-0 podman[257883]: 2025-09-30 17:58:13.188221163 +0000 UTC m=+0.031436459 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:58:13 compute-0 sshd-session[257430]: Invalid user ubnt from 80.94.95.115 port 21850
Sep 30 17:58:13 compute-0 sshd-session[257430]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 17:58:13 compute-0 sshd-session[257430]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.95.115
Sep 30 17:58:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:58:13.577Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:58:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v574: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:58:13 compute-0 sudo[258047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rviieqjdtmcibmodkhnqdxzgiymzxirk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255093.3383672-2586-37644057487188/AnsiballZ_file.py'
Sep 30 17:58:13 compute-0 sudo[258047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:13 compute-0 python3.9[258050]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:58:14 compute-0 sudo[258047]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:14 compute-0 sudo[258200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymguwwcpznishnbuewhyfnjuviaxhnot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255094.142397-2586-90951247618882/AnsiballZ_file.py'
Sep 30 17:58:14 compute-0 sudo[258200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:14 compute-0 podman[257883]: 2025-09-30 17:58:14.55235415 +0000 UTC m=+1.395569426 container create b56f4b17e70531d7dfe62a39afafe8efd148eedde7ca212438c8b816d7d6e1a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_fermat, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:58:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:58:14.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:14 compute-0 python3.9[258202]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:58:14 compute-0 sudo[258200]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:15 compute-0 sshd-session[257430]: Failed password for invalid user ubnt from 80.94.95.115 port 21850 ssh2
Sep 30 17:58:15 compute-0 sudo[258352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czfohwgkvgucwpswsulelaieabxepiop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255094.9047062-2586-239787155726986/AnsiballZ_file.py'
Sep 30 17:58:15 compute-0 ceph-mon[73755]: pgmap v574: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:58:15 compute-0 sudo[258352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:15 compute-0 systemd-coredump[257559]: Process 239521 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 54:
                                                    #0  0x00007f5ab620532e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Sep 30 17:58:15 compute-0 systemd[1]: Started libpod-conmon-b56f4b17e70531d7dfe62a39afafe8efd148eedde7ca212438c8b816d7d6e1a5.scope.
Sep 30 17:58:15 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:58:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12104ec313af004ea0c8884cb75ff70ff45f855a2ccb547f514e16c445e1d52b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:58:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12104ec313af004ea0c8884cb75ff70ff45f855a2ccb547f514e16c445e1d52b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:58:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12104ec313af004ea0c8884cb75ff70ff45f855a2ccb547f514e16c445e1d52b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:58:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12104ec313af004ea0c8884cb75ff70ff45f855a2ccb547f514e16c445e1d52b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:58:15 compute-0 podman[257883]: 2025-09-30 17:58:15.25640226 +0000 UTC m=+2.099617546 container init b56f4b17e70531d7dfe62a39afafe8efd148eedde7ca212438c8b816d7d6e1a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_fermat, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:58:15 compute-0 systemd[1]: systemd-coredump@11-257552-0.service: Deactivated successfully.
Sep 30 17:58:15 compute-0 systemd[1]: systemd-coredump@11-257552-0.service: Consumed 1.271s CPU time.
Sep 30 17:58:15 compute-0 podman[257883]: 2025-09-30 17:58:15.268107954 +0000 UTC m=+2.111323220 container start b56f4b17e70531d7dfe62a39afafe8efd148eedde7ca212438c8b816d7d6e1a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_fermat, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:58:15 compute-0 podman[257883]: 2025-09-30 17:58:15.271910483 +0000 UTC m=+2.115125759 container attach b56f4b17e70531d7dfe62a39afafe8efd148eedde7ca212438c8b816d7d6e1a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_fermat, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325)
Sep 30 17:58:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:58:15.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:15 compute-0 podman[258366]: 2025-09-30 17:58:15.345849746 +0000 UTC m=+0.042160417 container died 444236ac086cf486eb2dd092b7bd60c2d6def35469c0cbf95a734f7ef629935f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:58:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:58:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-997a59855422ac6e4a09ec4f7ce7a5e4fa9ba434ffc7496d85535205940ce074-merged.mount: Deactivated successfully.
Sep 30 17:58:15 compute-0 podman[258366]: 2025-09-30 17:58:15.385024215 +0000 UTC m=+0.081334846 container remove 444236ac086cf486eb2dd092b7bd60c2d6def35469c0cbf95a734f7ef629935f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Sep 30 17:58:15 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Main process exited, code=exited, status=139/n/a
Sep 30 17:58:15 compute-0 python3.9[258355]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:58:15 compute-0 sudo[258352]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:15 compute-0 sshd-session[257430]: Connection closed by invalid user ubnt 80.94.95.115 port 21850 [preauth]
Sep 30 17:58:15 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Failed with result 'exit-code'.
Sep 30 17:58:15 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Consumed 1.695s CPU time.
Sep 30 17:58:15 compute-0 adoring_fermat[258358]: {
Sep 30 17:58:15 compute-0 adoring_fermat[258358]:     "0": [
Sep 30 17:58:15 compute-0 adoring_fermat[258358]:         {
Sep 30 17:58:15 compute-0 adoring_fermat[258358]:             "devices": [
Sep 30 17:58:15 compute-0 adoring_fermat[258358]:                 "/dev/loop3"
Sep 30 17:58:15 compute-0 adoring_fermat[258358]:             ],
Sep 30 17:58:15 compute-0 adoring_fermat[258358]:             "lv_name": "ceph_lv0",
Sep 30 17:58:15 compute-0 adoring_fermat[258358]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:58:15 compute-0 adoring_fermat[258358]:             "lv_size": "21470642176",
Sep 30 17:58:15 compute-0 adoring_fermat[258358]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 17:58:15 compute-0 adoring_fermat[258358]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:58:15 compute-0 adoring_fermat[258358]:             "name": "ceph_lv0",
Sep 30 17:58:15 compute-0 adoring_fermat[258358]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:58:15 compute-0 adoring_fermat[258358]:             "tags": {
Sep 30 17:58:15 compute-0 adoring_fermat[258358]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:58:15 compute-0 adoring_fermat[258358]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:58:15 compute-0 adoring_fermat[258358]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 17:58:15 compute-0 adoring_fermat[258358]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 17:58:15 compute-0 adoring_fermat[258358]:                 "ceph.cluster_name": "ceph",
Sep 30 17:58:15 compute-0 adoring_fermat[258358]:                 "ceph.crush_device_class": "",
Sep 30 17:58:15 compute-0 adoring_fermat[258358]:                 "ceph.encrypted": "0",
Sep 30 17:58:15 compute-0 adoring_fermat[258358]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 17:58:15 compute-0 adoring_fermat[258358]:                 "ceph.osd_id": "0",
Sep 30 17:58:15 compute-0 adoring_fermat[258358]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 17:58:15 compute-0 adoring_fermat[258358]:                 "ceph.type": "block",
Sep 30 17:58:15 compute-0 adoring_fermat[258358]:                 "ceph.vdo": "0",
Sep 30 17:58:15 compute-0 adoring_fermat[258358]:                 "ceph.with_tpm": "0"
Sep 30 17:58:15 compute-0 adoring_fermat[258358]:             },
Sep 30 17:58:15 compute-0 adoring_fermat[258358]:             "type": "block",
Sep 30 17:58:15 compute-0 adoring_fermat[258358]:             "vg_name": "ceph_vg0"
Sep 30 17:58:15 compute-0 adoring_fermat[258358]:         }
Sep 30 17:58:15 compute-0 adoring_fermat[258358]:     ]
Sep 30 17:58:15 compute-0 adoring_fermat[258358]: }
Sep 30 17:58:15 compute-0 systemd[1]: libpod-b56f4b17e70531d7dfe62a39afafe8efd148eedde7ca212438c8b816d7d6e1a5.scope: Deactivated successfully.
Sep 30 17:58:15 compute-0 podman[257883]: 2025-09-30 17:58:15.592008538 +0000 UTC m=+2.435223864 container died b56f4b17e70531d7dfe62a39afafe8efd148eedde7ca212438c8b816d7d6e1a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid)
Sep 30 17:58:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-12104ec313af004ea0c8884cb75ff70ff45f855a2ccb547f514e16c445e1d52b-merged.mount: Deactivated successfully.
Sep 30 17:58:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v575: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:58:15 compute-0 podman[257883]: 2025-09-30 17:58:15.641553166 +0000 UTC m=+2.484768432 container remove b56f4b17e70531d7dfe62a39afafe8efd148eedde7ca212438c8b816d7d6e1a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_fermat, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Sep 30 17:58:15 compute-0 systemd[1]: libpod-conmon-b56f4b17e70531d7dfe62a39afafe8efd148eedde7ca212438c8b816d7d6e1a5.scope: Deactivated successfully.
Sep 30 17:58:15 compute-0 sudo[257626]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:15 compute-0 sudo[258522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:58:15 compute-0 sudo[258522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:58:15 compute-0 sudo[258522]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:15 compute-0 sudo[258570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 17:58:15 compute-0 sudo[258570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:58:15 compute-0 sudo[258622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzlhxdqcqgfqkkkuqoisqlkwziufsbee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255095.5810978-2586-114339748106716/AnsiballZ_file.py'
Sep 30 17:58:15 compute-0 sudo[258622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:16 compute-0 python3.9[258624]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:58:16 compute-0 sudo[258622]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:16 compute-0 ceph-mon[73755]: pgmap v575: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:58:16 compute-0 podman[258686]: 2025-09-30 17:58:16.2111394 +0000 UTC m=+0.048837351 container create 53641b560487a50ac555c23004164227f5e9c4ce1016a1e08d35da19ecc8107b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_borg, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:58:16 compute-0 systemd[1]: Started libpod-conmon-53641b560487a50ac555c23004164227f5e9c4ce1016a1e08d35da19ecc8107b.scope.
Sep 30 17:58:16 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:58:16 compute-0 podman[258686]: 2025-09-30 17:58:16.188503341 +0000 UTC m=+0.026201312 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:58:16 compute-0 podman[258686]: 2025-09-30 17:58:16.286572702 +0000 UTC m=+0.124270673 container init 53641b560487a50ac555c23004164227f5e9c4ce1016a1e08d35da19ecc8107b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_borg, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:58:16 compute-0 podman[258686]: 2025-09-30 17:58:16.295037272 +0000 UTC m=+0.132735233 container start 53641b560487a50ac555c23004164227f5e9c4ce1016a1e08d35da19ecc8107b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_borg, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:58:16 compute-0 podman[258686]: 2025-09-30 17:58:16.298668436 +0000 UTC m=+0.136366407 container attach 53641b560487a50ac555c23004164227f5e9c4ce1016a1e08d35da19ecc8107b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_borg, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:58:16 compute-0 admiring_borg[258703]: 167 167
Sep 30 17:58:16 compute-0 systemd[1]: libpod-53641b560487a50ac555c23004164227f5e9c4ce1016a1e08d35da19ecc8107b.scope: Deactivated successfully.
Sep 30 17:58:16 compute-0 podman[258686]: 2025-09-30 17:58:16.301824068 +0000 UTC m=+0.139522019 container died 53641b560487a50ac555c23004164227f5e9c4ce1016a1e08d35da19ecc8107b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_borg, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid)
Sep 30 17:58:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d74ab3c2a145f3d6ee8005a10d3028b52cdfcef07cde68a4fc5904bb2ffe08e-merged.mount: Deactivated successfully.
Sep 30 17:58:16 compute-0 podman[258686]: 2025-09-30 17:58:16.343673557 +0000 UTC m=+0.181371528 container remove 53641b560487a50ac555c23004164227f5e9c4ce1016a1e08d35da19ecc8107b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_borg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 17:58:16 compute-0 systemd[1]: libpod-conmon-53641b560487a50ac555c23004164227f5e9c4ce1016a1e08d35da19ecc8107b.scope: Deactivated successfully.
Sep 30 17:58:16 compute-0 podman[258728]: 2025-09-30 17:58:16.606960984 +0000 UTC m=+0.074300123 container create 04516ab52679d1b7485439d9c53e79e4f11f61696f7e8e887e334eb9eb061ae4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 17:58:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:58:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:58:16.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:58:16 compute-0 systemd[1]: Started libpod-conmon-04516ab52679d1b7485439d9c53e79e4f11f61696f7e8e887e334eb9eb061ae4.scope.
Sep 30 17:58:16 compute-0 podman[258728]: 2025-09-30 17:58:16.57718914 +0000 UTC m=+0.044528339 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:58:16 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:58:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6894c1ca4b14701acd8bae05afaca1ca6f5e327878cb0275b386c69f3e499428/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:58:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6894c1ca4b14701acd8bae05afaca1ca6f5e327878cb0275b386c69f3e499428/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:58:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6894c1ca4b14701acd8bae05afaca1ca6f5e327878cb0275b386c69f3e499428/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:58:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6894c1ca4b14701acd8bae05afaca1ca6f5e327878cb0275b386c69f3e499428/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:58:16 compute-0 podman[258728]: 2025-09-30 17:58:16.717311404 +0000 UTC m=+0.184650553 container init 04516ab52679d1b7485439d9c53e79e4f11f61696f7e8e887e334eb9eb061ae4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Sep 30 17:58:16 compute-0 podman[258728]: 2025-09-30 17:58:16.733914486 +0000 UTC m=+0.201253595 container start 04516ab52679d1b7485439d9c53e79e4f11f61696f7e8e887e334eb9eb061ae4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_hofstadter, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid)
Sep 30 17:58:16 compute-0 podman[258728]: 2025-09-30 17:58:16.738445234 +0000 UTC m=+0.205784493 container attach 04516ab52679d1b7485439d9c53e79e4f11f61696f7e8e887e334eb9eb061ae4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_hofstadter, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:58:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:58:17.064Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:58:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:58:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:58:17.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:58:17 compute-0 lvm[258819]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 17:58:17 compute-0 lvm[258819]: VG ceph_vg0 finished
Sep 30 17:58:17 compute-0 pensive_hofstadter[258744]: {}
Sep 30 17:58:17 compute-0 systemd[1]: libpod-04516ab52679d1b7485439d9c53e79e4f11f61696f7e8e887e334eb9eb061ae4.scope: Deactivated successfully.
Sep 30 17:58:17 compute-0 systemd[1]: libpod-04516ab52679d1b7485439d9c53e79e4f11f61696f7e8e887e334eb9eb061ae4.scope: Consumed 1.243s CPU time.
Sep 30 17:58:17 compute-0 podman[258728]: 2025-09-30 17:58:17.560813101 +0000 UTC m=+1.028152220 container died 04516ab52679d1b7485439d9c53e79e4f11f61696f7e8e887e334eb9eb061ae4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 17:58:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-6894c1ca4b14701acd8bae05afaca1ca6f5e327878cb0275b386c69f3e499428-merged.mount: Deactivated successfully.
Sep 30 17:58:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v576: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:58:17 compute-0 podman[258728]: 2025-09-30 17:58:17.635218606 +0000 UTC m=+1.102557715 container remove 04516ab52679d1b7485439d9c53e79e4f11f61696f7e8e887e334eb9eb061ae4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_hofstadter, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Sep 30 17:58:17 compute-0 systemd[1]: libpod-conmon-04516ab52679d1b7485439d9c53e79e4f11f61696f7e8e887e334eb9eb061ae4.scope: Deactivated successfully.
Sep 30 17:58:17 compute-0 sudo[258570]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:58:17 compute-0 podman[258823]: 2025-09-30 17:58:17.698111672 +0000 UTC m=+0.132839336 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.4)
Sep 30 17:58:17 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:58:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:58:17 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:58:17 compute-0 sudo[258862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 17:58:17 compute-0 sudo[258862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:58:17 compute-0 sudo[258862]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:58:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:58:18.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:58:18 compute-0 ceph-mon[73755]: pgmap v576: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:58:18 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:58:18 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:58:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:58:18] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:58:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:58:18] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:58:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:58:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:58:19.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:58:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v577: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:58:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/175820 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 17:58:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:58:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:58:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:58:20.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:58:20 compute-0 ceph-mon[73755]: pgmap v577: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 17:58:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:58:21.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v578: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:58:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:58:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:58:22 compute-0 sudo[259018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yltdqqplrddjbmivowndpwqanlyknjqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255101.8309407-2871-106038488177889/AnsiballZ_getent.py'
Sep 30 17:58:22 compute-0 sudo[259018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:22 compute-0 python3.9[259020]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Sep 30 17:58:22 compute-0 sudo[259018]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:22 compute-0 unix_chkpwd[259022]: password check failed for user (root)
Sep 30 17:58:22 compute-0 sshd-session[258889]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158  user=root
Sep 30 17:58:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:58:22.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:22 compute-0 ceph-mon[73755]: pgmap v578: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:58:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:58:23 compute-0 sudo[259172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmhpapxzhpkdsyfvfavuocgywzlppchv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255102.7513537-2887-85660534290848/AnsiballZ_group.py'
Sep 30 17:58:23 compute-0 sudo[259172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:58:23.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:23 compute-0 python3.9[259174]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Sep 30 17:58:23 compute-0 groupadd[259176]: group added to /etc/group: name=nova, GID=42436
Sep 30 17:58:23 compute-0 groupadd[259176]: group added to /etc/gshadow: name=nova
Sep 30 17:58:23 compute-0 groupadd[259176]: new group: name=nova, GID=42436
Sep 30 17:58:23 compute-0 sudo[259172]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:58:23.579Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 17:58:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:58:23.581Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 17:58:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v579: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:58:24 compute-0 sudo[259332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-budmhttvurfwghmrbwynxahhuxcqpuwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255103.821803-2903-92038933075516/AnsiballZ_user.py'
Sep 30 17:58:24 compute-0 sudo[259332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:24 compute-0 python3.9[259334]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Sep 30 17:58:24 compute-0 useradd[259336]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Sep 30 17:58:24 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Sep 30 17:58:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:24 compute-0 useradd[259336]: add 'nova' to group 'libvirt'
Sep 30 17:58:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:24 compute-0 useradd[259336]: add 'nova' to shadow group 'libvirt'
Sep 30 17:58:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:58:24.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:24 compute-0 sshd-session[258889]: Failed password for root from 45.252.249.158 port 50004 ssh2
Sep 30 17:58:24 compute-0 sudo[259332]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:24 compute-0 ceph-mon[73755]: pgmap v579: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:58:24 compute-0 sudo[259368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:58:24 compute-0 sudo[259368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:58:24 compute-0 sudo[259368]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:58:25.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:58:25 compute-0 sshd-session[258889]: Received disconnect from 45.252.249.158 port 50004:11: Bye Bye [preauth]
Sep 30 17:58:25 compute-0 sshd-session[258889]: Disconnected from authenticating user root 45.252.249.158 port 50004 [preauth]
Sep 30 17:58:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v580: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:58:25 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Scheduled restart job, restart counter is at 12.
Sep 30 17:58:25 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:58:25 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Consumed 1.695s CPU time.
Sep 30 17:58:25 compute-0 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 17:58:25 compute-0 sshd-session[259394]: Accepted publickey for zuul from 192.168.122.30 port 45070 ssh2: ECDSA SHA256:COqAmbdBUMx+UyDa5XAop4Bi6qvwvYhJQ16rCseuHkQ
Sep 30 17:58:25 compute-0 systemd-logind[811]: New session 57 of user zuul.
Sep 30 17:58:25 compute-0 systemd[1]: Started Session 57 of User zuul.
Sep 30 17:58:25 compute-0 sshd-session[259394]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 17:58:25 compute-0 sshd-session[259401]: Received disconnect from 192.168.122.30 port 45070:11: disconnected by user
Sep 30 17:58:25 compute-0 sshd-session[259401]: Disconnected from user zuul 192.168.122.30 port 45070
Sep 30 17:58:25 compute-0 sshd-session[259394]: pam_unix(sshd:session): session closed for user zuul
Sep 30 17:58:25 compute-0 systemd[1]: session-57.scope: Deactivated successfully.
Sep 30 17:58:25 compute-0 systemd-logind[811]: Session 57 logged out. Waiting for processes to exit.
Sep 30 17:58:25 compute-0 systemd-logind[811]: Removed session 57.
Sep 30 17:58:25 compute-0 podman[259467]: 2025-09-30 17:58:25.947223753 +0000 UTC m=+0.056656852 container create 76f85129328150eb71d21e7c61e63b1c179a003382bbdb3cb36bea5b2bfe00b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:58:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48b81d77edfe7492ff1ba710bc5e5ad6842e3500fecfcf076c0e191fac9d950e/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Sep 30 17:58:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48b81d77edfe7492ff1ba710bc5e5ad6842e3500fecfcf076c0e191fac9d950e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:58:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48b81d77edfe7492ff1ba710bc5e5ad6842e3500fecfcf076c0e191fac9d950e/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:58:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48b81d77edfe7492ff1ba710bc5e5ad6842e3500fecfcf076c0e191fac9d950e/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-0.syzvbh-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:58:26 compute-0 podman[259467]: 2025-09-30 17:58:26.012198721 +0000 UTC m=+0.121631850 container init 76f85129328150eb71d21e7c61e63b1c179a003382bbdb3cb36bea5b2bfe00b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 17:58:26 compute-0 podman[259467]: 2025-09-30 17:58:25.920082018 +0000 UTC m=+0.029515187 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:58:26 compute-0 podman[259467]: 2025-09-30 17:58:26.017954391 +0000 UTC m=+0.127387490 container start 76f85129328150eb71d21e7c61e63b1c179a003382bbdb3cb36bea5b2bfe00b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:58:26 compute-0 bash[259467]: 76f85129328150eb71d21e7c61e63b1c179a003382bbdb3cb36bea5b2bfe00b8
Sep 30 17:58:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:26 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Sep 30 17:58:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:26 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Sep 30 17:58:26 compute-0 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 17:58:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:26 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Sep 30 17:58:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:26 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Sep 30 17:58:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:26 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Sep 30 17:58:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:26 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Sep 30 17:58:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:26 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Sep 30 17:58:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:26 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 17:58:26 compute-0 python3.9[259650]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:58:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:58:26.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:26 compute-0 ceph-mon[73755]: pgmap v580: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:58:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:58:27.065Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:58:27 compute-0 python3.9[259771]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759255106.054977-2953-163179753705153/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:58:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:58:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:58:27.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:58:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v581: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:58:27 compute-0 python3.9[259924]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:58:27 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 17:58:27 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 8831 writes, 33K keys, 8831 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 8831 writes, 2076 syncs, 4.25 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 652 writes, 1022 keys, 652 commit groups, 1.0 writes per commit group, ingest: 0.34 MB, 0.00 MB/s
                                           Interval WAL: 652 writes, 318 syncs, 2.05 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.008       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631b1d9f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Sep 30 17:58:28 compute-0 python3.9[260001]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:58:28 compute-0 unix_chkpwd[260140]: password check failed for user (root)
Sep 30 17:58:28 compute-0 sshd-session[259848]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107  user=root
Sep 30 17:58:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:58:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:58:28.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:58:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:58:28] "GET /metrics HTTP/1.1" 200 46515 "" "Prometheus/2.51.0"
Sep 30 17:58:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:58:28] "GET /metrics HTTP/1.1" 200 46515 "" "Prometheus/2.51.0"
Sep 30 17:58:28 compute-0 ceph-mon[73755]: pgmap v581: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 17:58:28 compute-0 python3.9[260152]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:58:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:58:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:58:29.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:58:29 compute-0 python3.9[260273]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759255108.3465154-2953-91796953680825/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:58:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v582: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:58:30 compute-0 python3.9[260425]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:58:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:58:30 compute-0 podman[260520]: 2025-09-30 17:58:30.442550837 +0000 UTC m=+0.060576424 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Sep 30 17:58:30 compute-0 sshd-session[259848]: Failed password for root from 14.225.220.107 port 35748 ssh2
Sep 30 17:58:30 compute-0 python3.9[260561]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759255109.5671582-2953-47588226789119/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:58:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 17:58:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:58:30.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 17:58:30 compute-0 ceph-mon[73755]: pgmap v582: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:58:31 compute-0 python3.9[260715]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:58:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:58:31.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v583: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:58:31 compute-0 sshd-session[259848]: Received disconnect from 14.225.220.107 port 35748:11: Bye Bye [preauth]
Sep 30 17:58:31 compute-0 sshd-session[259848]: Disconnected from authenticating user root 14.225.220.107 port 35748 [preauth]
Sep 30 17:58:31 compute-0 python3.9[260837]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759255110.804114-2953-124184217831498/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:58:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:32 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 17:58:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:32 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 17:58:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:58:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:58:32.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:58:32 compute-0 ceph-mon[73755]: pgmap v583: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:58:33 compute-0 sudo[260988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huorbtmzrjakuovekgeaplxxgzwlxpfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255112.7091005-3091-161467120866269/AnsiballZ_file.py'
Sep 30 17:58:33 compute-0 sudo[260988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:33 compute-0 python3.9[260990]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:58:33 compute-0 sudo[260988]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:58:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:58:33.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:58:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:58:33.582Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:58:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v584: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:58:33 compute-0 sudo[261142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loekipcrbwpkgynlfuefvyopqqqauhfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255113.516826-3107-246947650787983/AnsiballZ_copy.py'
Sep 30 17:58:33 compute-0 sudo[261142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:34 compute-0 python3.9[261144]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:58:34 compute-0 sudo[261142]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:58:34.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:34 compute-0 sudo[261294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvhdnzjbvbianptlftumpbncplwsoewk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255114.358371-3123-179814188222999/AnsiballZ_stat.py'
Sep 30 17:58:34 compute-0 sudo[261294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:34 compute-0 python3.9[261296]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:58:34 compute-0 sudo[261294]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:34 compute-0 ceph-mon[73755]: pgmap v584: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:58:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:58:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:58:35.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:58:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:58:35 compute-0 sudo[261447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teldzvnkaqriabtwsotlnaodqaywrqsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255115.1390998-3139-198897066696847/AnsiballZ_stat.py'
Sep 30 17:58:35 compute-0 sudo[261447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:35 compute-0 python3.9[261449]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:58:35 compute-0 sudo[261447]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v585: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:58:36 compute-0 sudo[261571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pijxeshuhmjavcvqjlmoyndzaiarwocz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255115.1390998-3139-198897066696847/AnsiballZ_copy.py'
Sep 30 17:58:36 compute-0 sudo[261571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:36 compute-0 python3.9[261573]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1759255115.1390998-3139-198897066696847/.source _original_basename=.vd9r0kj0 follow=False checksum=34ce7570c9cb4d08e9684d89eba125a308899d89 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Sep 30 17:58:36 compute-0 sudo[261571]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:58:36.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:36 compute-0 ceph-mon[73755]: pgmap v585: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:58:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:58:37.066Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:58:37 compute-0 python3.9[261725]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:58:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:58:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:58:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:58:37.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:58:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:58:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:58:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:58:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:58:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:58:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v586: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:58:37 compute-0 podman[261852]: 2025-09-30 17:58:37.662198834 +0000 UTC m=+0.060522903 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Sep 30 17:58:37 compute-0 python3.9[261896]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:58:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:58:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 17:58:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Sep 30 17:58:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Sep 30 17:58:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Sep 30 17:58:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Sep 30 17:58:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Sep 30 17:58:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Sep 30 17:58:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:58:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:58:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:58:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Sep 30 17:58:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 17:58:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Sep 30 17:58:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Sep 30 17:58:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Sep 30 17:58:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Sep 30 17:58:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Sep 30 17:58:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Sep 30 17:58:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Sep 30 17:58:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Sep 30 17:58:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Sep 30 17:58:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Sep 30 17:58:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Sep 30 17:58:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Sep 30 17:58:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 17:58:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Sep 30 17:58:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
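Note: the ganesha.nfsd startup block above uses one fixed layout (timestamp : epoch : host : program[thread] function :COMPONENT :LEVEL :message), which makes it easy to pull out the CRIT/WARN entries (DBUS socket missing, keytab lookup failures) from a capture like this. A small parser sketch derived only from the lines shown here; the journald prefix must be stripped before matching.

    # Sketch: parse the ganesha.nfsd log layout seen above into fields.
    # Pattern inferred from this capture only; the journald prefix
    # ("Sep 30 ... ceph-...[pid]: ") is assumed to be removed already.
    import re

    GANESHA_LINE = re.compile(
        r"^(?P<ts>\d{2}/\d{2}/\d{4} \d{2}:\d{2}:\d{2}) : "
        r"epoch (?P<epoch>\S+) : (?P<host>\S+) : "
        r"(?P<prog>\S+)\[(?P<thread>[^\]]+)\] "
        r"(?P<func>\S+) :(?P<component>[^:]+) :(?P<level>[^:]+) :(?P<msg>.*)$"
    )

    def parse_ganesha(line: str):
        m = GANESHA_LINE.match(line)
        return m.groupdict() if m else None

    sample = ("30/09/2025 17:58:38 : epoch 68dc1a42 : compute-0 : "
              "ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed "
              "(Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)")
    fields = parse_ganesha(sample)
    print(fields["component"], fields["level"], fields["msg"])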
Sep 30 17:58:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2140000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:58:38 compute-0 python3.9[262021]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759255117.3381217-3191-899094809173/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=b27833676660fa98c54003ae3ee408ee2eef3f6a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:58:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2134001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:58:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:58:38.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:58:38] "GET /metrics HTTP/1.1" 200 46521 "" "Prometheus/2.51.0"
Sep 30 17:58:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:58:38] "GET /metrics HTTP/1.1" 200 46521 "" "Prometheus/2.51.0"
Sep 30 17:58:38 compute-0 ceph-mon[73755]: pgmap v586: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
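Note: the recurring "pgmap vNNN" lines from ceph-mon and ceph-mgr repeat the same cluster summary (PG states, data size, usage, client throughput). A short sketch that extracts those numbers from the summary string; the format is taken from the lines in this capture and nothing more.

    # Sketch: pull PG count, usage and client I/O out of the pgmap
    # summary lines logged above (format taken from this capture).
    import re

    PGMAP = re.compile(
        r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail(?:; (?P<io>.*))?"
    )

    line = ("pgmap v586: 353 pgs: 353 active+clean; 458 KiB data, "
            "103 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s")
    m = PGMAP.search(line)
    print(m.group("pgs"), m.group("used"), m.group("io"))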
Sep 30 17:58:39 compute-0 python3.9[262187]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:58:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:58:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:58:39.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:58:39 compute-0 podman[262260]: 2025-09-30 17:58:39.539813275 +0000 UTC m=+0.069972139 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS)
Sep 30 17:58:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v587: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:58:39 compute-0 python3.9[262329]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759255118.6898408-3221-152240606632640/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=0d7080b27a2b16032bc39b6298de5bdc4fff259e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:58:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:40 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:58:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/175840 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 17:58:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:58:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:40 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:58:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:58:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:58:40.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:58:40 compute-0 sudo[262480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwgqghozphevjmukwydyrxiiecofqvae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255120.358134-3255-160483465117165/AnsiballZ_container_config_data.py'
Sep 30 17:58:40 compute-0 sudo[262480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:40 compute-0 python3.9[262482]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Sep 30 17:58:40 compute-0 sudo[262480]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:41 compute-0 ceph-mon[73755]: pgmap v587: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 17:58:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:58:41.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v588: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:58:41 compute-0 sudo[262634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhjotwetpcnaysuqiklefjpbktspjifg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255121.3242004-3273-265538558983146/AnsiballZ_container_config_hash.py'
Sep 30 17:58:41 compute-0 sudo[262634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:41 compute-0 python3.9[262636]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Sep 30 17:58:41 compute-0 sudo[262634]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:42 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:58:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:42 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2134001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:58:42 compute-0 sudo[262786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewrnoghxdhulueikvonktfgbsrlbeqvo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759255122.2917552-3293-94552335316209/AnsiballZ_edpm_container_manage.py'
Sep 30 17:58:42 compute-0 sudo[262786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:58:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:58:42.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:58:42 compute-0 python3[262788]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Sep 30 17:58:43 compute-0 ceph-mon[73755]: pgmap v588: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:58:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:58:43.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:58:43.583Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
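Note: the alertmanager dispatcher above keeps failing to deliver to the ceph-dashboard webhook with "context deadline exceeded", i.e. the receiver on compute-1 is not answering within the notify timeout. A hedged sketch of a manual timed POST to the same URL (taken from the log line; the empty payload is a dummy) to distinguish an unreachable receiver from a slow one; purely diagnostic, not part of the deployment.

    # Sketch: timed POST to the receiver URL the dispatcher above cannot
    # reach. URL comes from the log line; payload is a placeholder.
    import json
    import urllib.error
    import urllib.request

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(
        url,
        data=json.dumps({"alerts": []}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("HTTP", resp.status)
    except (urllib.error.URLError, TimeoutError) as exc:
        print("receiver unreachable:", exc)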
Sep 30 17:58:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v589: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:58:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:44 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21180016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:58:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:44 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21140016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:58:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:58:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:58:44.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:58:45 compute-0 sudo[262843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:58:45 compute-0 sudo[262843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:58:45 compute-0 ceph-mon[73755]: pgmap v589: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 17:58:45 compute-0 sudo[262843]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:58:45.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:58:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v590: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:58:46 compute-0 ceph-mon[73755]: pgmap v590: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:58:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:46 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:58:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:46 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2134001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:58:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:58:46.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:58:47.067Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:58:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:58:47.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v591: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:58:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:48 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21180016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:58:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:48 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21140016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:58:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:58:48.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:58:48] "GET /metrics HTTP/1.1" 200 46521 "" "Prometheus/2.51.0"
Sep 30 17:58:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:58:48] "GET /metrics HTTP/1.1" 200 46521 "" "Prometheus/2.51.0"
Sep 30 17:58:48 compute-0 ceph-mon[73755]: pgmap v591: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:58:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:58:49.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v592: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:58:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:50 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:58:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:58:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:50 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2134001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:58:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:58:50.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:58:51.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:51 compute-0 ceph-mon[73755]: pgmap v592: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 17:58:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v593: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:58:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:52 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21180016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:58:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:58:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:58:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:52 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21140016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:58:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:58:52.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:58:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:58:53.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:58:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:58:53.586Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:58:53 compute-0 ceph-mon[73755]: pgmap v593: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:58:53 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:58:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v594: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:58:53 compute-0 podman[262879]: 2025-09-30 17:58:53.860789775 +0000 UTC m=+5.394759387 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Sep 30 17:58:53 compute-0 podman[262802]: 2025-09-30 17:58:53.87522164 +0000 UTC m=+10.909835458 image pull d136a586f9f7c346565dba6e8dc081bc2663ef9baa7df2145dd739dc20978132 38.129.56.221:5001/podified-master-centos10/openstack-nova-compute:watcher_latest
Sep 30 17:58:54 compute-0 podman[262948]: 2025-09-30 17:58:54.065908033 +0000 UTC m=+0.052027002 container create 51abaa1a0c8d12a110a7f86482d9b63eda95665ebdef1108a61ab9cc3a6c684b (image=38.129.56.221:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute_init, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20250930, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute_init, managed_by=edpm_ansible, config_id=edpm, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, org.label-schema.vendor=CentOS, config_data={'image': '38.129.56.221:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Sep 30 17:58:54 compute-0 podman[262948]: 2025-09-30 17:58:54.04077777 +0000 UTC m=+0.026896759 image pull d136a586f9f7c346565dba6e8dc081bc2663ef9baa7df2145dd739dc20978132 38.129.56.221:5001/podified-master-centos10/openstack-nova-compute:watcher_latest
Sep 30 17:58:54 compute-0 python3[262788]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': '38.129.56.221:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z 38.129.56.221:5001/podified-master-centos10/openstack-nova-compute:watcher_latest bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
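Note: the PODMAN-CONTAINER-DEBUG line above shows how the edpm_container_manage task expands the nova_compute_init config_data dict (logged at container create) into a full podman create command. A much-reduced sketch of that dict-to-argv translation, handling only the keys visible in this log (environment, net, privileged, security_opt, user, volumes, command); it is illustrative and not the module's actual code, and the sample dict below is trimmed.

    # Reduced sketch of the config_data -> "podman create" expansion
    # visible in the PODMAN-CONTAINER-DEBUG line above. Only keys seen
    # in this log are handled; not the edpm_container_manage source.
    import shlex

    def podman_create_argv(name: str, conf: dict) -> list:
        argv = ["podman", "create", "--name", name,
                "--conmon-pidfile", f"/run/{name}.pid"]
        for key, value in conf.get("environment", {}).items():
            argv += ["--env", f"{key}={value}"]
        argv += ["--log-driver", "journald", "--log-level", "info"]
        if "net" in conf:
            argv += ["--network", conf["net"]]
        argv += [f"--privileged={conf.get('privileged', False)}"]
        for opt in conf.get("security_opt", []):
            argv += ["--security-opt", opt]
        if "user" in conf:
            argv += ["--user", conf["user"]]
        for vol in conf.get("volumes", []):
            argv += ["--volume", vol]
        argv.append(conf["image"])
        if conf.get("command"):
            argv += shlex.split(conf["command"])
        return argv

    conf = {
        "image": "38.129.56.221:5001/podified-master-centos10/openstack-nova-compute:watcher_latest",
        "privileged": False,
        "user": "root",
        "command": "bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py",
        "net": "none",
        "security_opt": ["label=disable"],
        "environment": {"NOVA_STATEDIR_OWNERSHIP_SKIP": "/var/lib/nova/compute_id"},
        "volumes": ["/dev/log:/dev/log", "/var/lib/nova:/var/lib/nova:shared"],
    }
    print(" ".join(podman_create_argv("nova_compute_init", conf)))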
Sep 30 17:58:54 compute-0 sudo[262786]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:54 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:58:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:58:54.251 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 17:58:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:58:54.252 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 17:58:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:58:54.252 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 17:58:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:54 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2134001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:58:54 compute-0 sudo[263135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltnixtcuwgxcipztyxzxwtydfqpkoeoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255134.3872235-3309-149191960411606/AnsiballZ_stat.py'
Sep 30 17:58:54 compute-0 sudo[263135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:58:54.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:54 compute-0 ceph-mon[73755]: pgmap v594: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:58:54 compute-0 python3.9[263137]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
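Note: the ansible-ansible.builtin.stat task above is invoked with get_checksum=True and checksum_algorithm=sha1. A tiny sketch of the equivalent local check (existence plus SHA-1), useful when reproducing the step by hand; the path follows the log line, the helper itself is illustrative.

    # Sketch: existence + sha1 checksum matching what the stat task
    # above requests (get_checksum=True, checksum_algorithm=sha1).
    import hashlib
    from pathlib import Path

    def stat_sha1(path: str) -> dict:
        p = Path(path)
        if not p.exists():
            return {"exists": False}
        digest = hashlib.sha1(p.read_bytes()).hexdigest()
        return {"exists": True, "checksum": digest, "size": p.stat().st_size}

    print(stat_sha1("/etc/sysconfig/podman_drop_in"))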
Sep 30 17:58:54 compute-0 sudo[263135]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:58:55.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:58:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v595: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:58:55 compute-0 sudo[263291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gttnixbofxlxkswkuwkxomxiuhglpnuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255135.4597945-3333-70250327150698/AnsiballZ_container_config_data.py'
Sep 30 17:58:55 compute-0 sudo[263291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:55 compute-0 python3.9[263293]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Sep 30 17:58:55 compute-0 sudo[263291]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:56 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:58:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:56 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:58:56 compute-0 sudo[263443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcezllfnsdqcqcqjorthwxrxlkcmxium ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255136.312759-3351-17684937851040/AnsiballZ_container_config_hash.py'
Sep 30 17:58:56 compute-0 sudo[263443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:58:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:58:56.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:58:56 compute-0 ceph-mon[73755]: pgmap v595: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:58:56 compute-0 python3.9[263445]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Sep 30 17:58:56 compute-0 sudo[263443]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:58:57.068Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:58:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:58:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:58:57.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:58:57 compute-0 sudo[263596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-paupzefzityzrgijmcnopyiqpcnfmjei ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759255137.2760415-3371-221160871859120/AnsiballZ_edpm_container_manage.py'
Sep 30 17:58:57 compute-0 sudo[263596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v596: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:58:58 compute-0 python3[263598]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Sep 30 17:58:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:58 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:58:58 compute-0 podman[263638]: 2025-09-30 17:58:58.341007956 +0000 UTC m=+0.052270349 container create 5c57df5628252304282bd7850e9ea4dca7f0fcabc9c22ccdc19a8b5ba5d56e99 (image=38.129.56.221:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, org.label-schema.build-date=20250930, container_name=nova_compute, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': '38.129.56.221:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm)
Sep 30 17:58:58 compute-0 podman[263638]: 2025-09-30 17:58:58.310647457 +0000 UTC m=+0.021909880 image pull d136a586f9f7c346565dba6e8dc081bc2663ef9baa7df2145dd739dc20978132 38.129.56.221:5001/podified-master-centos10/openstack-nova-compute:watcher_latest
Sep 30 17:58:58 compute-0 python3[263598]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': '38.129.56.221:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro 38.129.56.221:5001/podified-master-centos10/openstack-nova-compute:watcher_latest kolla_start
Sep 30 17:58:58 compute-0 sudo[263596]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:58:58 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2134001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:58:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:58:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:58:58.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:58:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:58:58] "GET /metrics HTTP/1.1" 200 46442 "" "Prometheus/2.51.0"
Sep 30 17:58:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:58:58] "GET /metrics HTTP/1.1" 200 46442 "" "Prometheus/2.51.0"
Sep 30 17:58:58 compute-0 ceph-mon[73755]: pgmap v596: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:58:59 compute-0 sudo[263827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqngnsnisnvwgmasrjagdwvhurgfsuli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255138.8190224-3387-181943031476653/AnsiballZ_stat.py'
Sep 30 17:58:59 compute-0 sudo[263827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:58:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:58:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:58:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:58:59.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:58:59 compute-0 python3.9[263829]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:58:59 compute-0 sudo[263827]: pam_unix(sudo:session): session closed for user root
Sep 30 17:58:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v597: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:58:59 compute-0 sudo[263983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgymgmkerohviquvnbiyygegmglbluup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255139.7088077-3405-155880249023021/AnsiballZ_file.py'
Sep 30 17:58:59 compute-0 sudo[263983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:59:00 compute-0 python3.9[263985]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:59:00 compute-0 sudo[263983]: pam_unix(sudo:session): session closed for user root
Sep 30 17:59:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:00 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:59:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:00 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:59:00.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:00 compute-0 sudo[264145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmkswigohldwqaudkbrtuxryqlworffv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255140.2817125-3405-280367176073602/AnsiballZ_copy.py'
Sep 30 17:59:00 compute-0 sudo[264145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:59:00 compute-0 podman[264108]: 2025-09-30 17:59:00.789534806 +0000 UTC m=+0.103577192 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Sep 30 17:59:00 compute-0 python3.9[264153]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759255140.2817125-3405-280367176073602/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:59:00 compute-0 ceph-mon[73755]: pgmap v597: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:59:00 compute-0 sudo[264145]: pam_unix(sudo:session): session closed for user root
Sep 30 17:59:01 compute-0 sudo[264229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlvjfuuyisvrpmigvzsnucggbfrmaiuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255140.2817125-3405-280367176073602/AnsiballZ_systemd.py'
Sep 30 17:59:01 compute-0 sudo[264229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:59:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:59:01.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v598: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:01 compute-0 python3.9[264231]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Sep 30 17:59:01 compute-0 systemd[1]: Reloading.
Sep 30 17:59:01 compute-0 systemd-sysv-generator[264264]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:59:01 compute-0 systemd-rc-local-generator[264261]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:59:02 compute-0 sudo[264229]: pam_unix(sudo:session): session closed for user root
Sep 30 17:59:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:02 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:02 compute-0 sudo[264342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofgyhsacauktiygiokyoldmlxxbszwdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255140.2817125-3405-280367176073602/AnsiballZ_systemd.py'
Sep 30 17:59:02 compute-0 sudo[264342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:59:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:02 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2134001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:59:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:59:02.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:59:02 compute-0 python3.9[264344]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
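For reference, the two ansible-systemd invocations logged at 17:59:01 (daemon_reload=True) and 17:59:02 (state=restarted, enabled=True, name=edpm_nova_compute.service) amount to roughly the manual systemctl sequence sketched below. This is only a hedged approximation of the equivalent steps, not what the Ansible systemd module actually executes internally.

# Rough systemctl equivalent of the two ansible-systemd calls in the log above.
# Sketch only; the real work is done by the Ansible systemd module.
import subprocess

UNIT = "edpm_nova_compute.service"  # unit name taken from the log

def run(*argv: str) -> None:
    """Echo and run one systemctl command, failing loudly on error."""
    print("+", " ".join(argv))
    subprocess.run(argv, check=True)

run("systemctl", "daemon-reload")   # daemon_reload=True (17:59:01 call)
run("systemctl", "enable", UNIT)    # enabled=True
run("systemctl", "restart", UNIT)   # state=restarted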
Sep 30 17:59:02 compute-0 systemd[1]: Reloading.
Sep 30 17:59:03 compute-0 ceph-mon[73755]: pgmap v598: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:03 compute-0 systemd-rc-local-generator[264369]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:59:03 compute-0 systemd-sysv-generator[264372]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:59:03 compute-0 systemd[1]: Starting nova_compute container...
Sep 30 17:59:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:59:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:59:03.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:59:03 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:59:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8fa81a43b9bb16f6a82ff8334e58e3e8d1e2eae5e8bb9396125cbb3da4c8105/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Sep 30 17:59:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8fa81a43b9bb16f6a82ff8334e58e3e8d1e2eae5e8bb9396125cbb3da4c8105/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Sep 30 17:59:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8fa81a43b9bb16f6a82ff8334e58e3e8d1e2eae5e8bb9396125cbb3da4c8105/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Sep 30 17:59:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8fa81a43b9bb16f6a82ff8334e58e3e8d1e2eae5e8bb9396125cbb3da4c8105/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Sep 30 17:59:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8fa81a43b9bb16f6a82ff8334e58e3e8d1e2eae5e8bb9396125cbb3da4c8105/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Sep 30 17:59:03 compute-0 podman[264383]: 2025-09-30 17:59:03.40306595 +0000 UTC m=+0.113540030 container init 5c57df5628252304282bd7850e9ea4dca7f0fcabc9c22ccdc19a8b5ba5d56e99 (image=38.129.56.221:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute, config_data={'image': '38.129.56.221:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, container_name=nova_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, io.buildah.version=1.41.4)
Sep 30 17:59:03 compute-0 podman[264383]: 2025-09-30 17:59:03.40996851 +0000 UTC m=+0.120442580 container start 5c57df5628252304282bd7850e9ea4dca7f0fcabc9c22ccdc19a8b5ba5d56e99 (image=38.129.56.221:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'image': '38.129.56.221:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, container_name=nova_compute)
Sep 30 17:59:03 compute-0 podman[264383]: nova_compute
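The config_data embedded in the podman container-init record above fully describes how nova_compute is launched (privileged, host networking, user nova, kolla_start as the command, and the listed bind mounts). Reconstructed as a command line it is roughly the sketch below; this is an approximation built only from that record, not the exact invocation edpm_ansible generates.

# Approximate "podman run" equivalent of the nova_compute config_data logged by
# the container-init event above. Illustration only; the real container is
# created and started by edpm_ansible / systemd.
IMAGE = "38.129.56.221:5001/podified-master-centos10/openstack-nova-compute:watcher_latest"

VOLUMES = [
    "/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro",
    "/etc/localtime:/etc/localtime:ro",
    "/lib/modules:/lib/modules:ro",
    "/dev:/dev",
    "/var/lib/libvirt:/var/lib/libvirt",
    "/run/libvirt:/run/libvirt:shared",
    "/var/lib/nova:/var/lib/nova:shared",
    # ... remaining mounts exactly as listed in the config_data above
]

argv = [
    "podman", "run", "--name", "nova_compute",
    "--privileged", "--net", "host", "--user", "nova",
    "--restart", "always",
    "--env", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS",
]
for vol in VOLUMES:
    argv += ["--volume", vol]
argv += [IMAGE, "kolla_start"]

print(" ".join(argv))  # print the reconstructed command instead of running it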
Sep 30 17:59:03 compute-0 nova_compute[264398]: + sudo -E kolla_set_configs
Sep 30 17:59:03 compute-0 systemd[1]: Started nova_compute container.
Sep 30 17:59:03 compute-0 sudo[264342]: pam_unix(sudo:session): session closed for user root
Sep 30 17:59:03 compute-0 nova_compute[264398]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Sep 30 17:59:03 compute-0 nova_compute[264398]: INFO:__main__:Validating config file
Sep 30 17:59:03 compute-0 nova_compute[264398]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Sep 30 17:59:03 compute-0 nova_compute[264398]: INFO:__main__:Copying service configuration files
Sep 30 17:59:03 compute-0 nova_compute[264398]: INFO:__main__:Deleting /etc/nova/nova.conf
Sep 30 17:59:03 compute-0 nova_compute[264398]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Sep 30 17:59:03 compute-0 nova_compute[264398]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Sep 30 17:59:03 compute-0 nova_compute[264398]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Sep 30 17:59:03 compute-0 nova_compute[264398]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Sep 30 17:59:03 compute-0 nova_compute[264398]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Sep 30 17:59:03 compute-0 nova_compute[264398]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Sep 30 17:59:03 compute-0 nova_compute[264398]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Sep 30 17:59:03 compute-0 nova_compute[264398]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Sep 30 17:59:03 compute-0 nova_compute[264398]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Sep 30 17:59:03 compute-0 nova_compute[264398]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Sep 30 17:59:03 compute-0 nova_compute[264398]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Sep 30 17:59:03 compute-0 nova_compute[264398]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Sep 30 17:59:03 compute-0 nova_compute[264398]: INFO:__main__:Deleting /etc/ceph
Sep 30 17:59:03 compute-0 nova_compute[264398]: INFO:__main__:Creating directory /etc/ceph
Sep 30 17:59:03 compute-0 nova_compute[264398]: INFO:__main__:Setting permission for /etc/ceph
Sep 30 17:59:03 compute-0 nova_compute[264398]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Sep 30 17:59:03 compute-0 nova_compute[264398]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Sep 30 17:59:03 compute-0 nova_compute[264398]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Sep 30 17:59:03 compute-0 nova_compute[264398]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Sep 30 17:59:03 compute-0 nova_compute[264398]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Sep 30 17:59:03 compute-0 nova_compute[264398]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Sep 30 17:59:03 compute-0 nova_compute[264398]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Sep 30 17:59:03 compute-0 nova_compute[264398]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Sep 30 17:59:03 compute-0 nova_compute[264398]: INFO:__main__:Writing out command to execute
Sep 30 17:59:03 compute-0 nova_compute[264398]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Sep 30 17:59:03 compute-0 nova_compute[264398]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Sep 30 17:59:03 compute-0 nova_compute[264398]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Sep 30 17:59:03 compute-0 nova_compute[264398]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Sep 30 17:59:03 compute-0 nova_compute[264398]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Sep 30 17:59:03 compute-0 nova_compute[264398]: ++ cat /run_command
Sep 30 17:59:03 compute-0 nova_compute[264398]: + CMD=nova-compute
Sep 30 17:59:03 compute-0 nova_compute[264398]: + ARGS=
Sep 30 17:59:03 compute-0 nova_compute[264398]: + sudo kolla_copy_cacerts
Sep 30 17:59:03 compute-0 nova_compute[264398]: + [[ ! -n '' ]]
Sep 30 17:59:03 compute-0 nova_compute[264398]: + . kolla_extend_start
Sep 30 17:59:03 compute-0 nova_compute[264398]: Running command: 'nova-compute'
Sep 30 17:59:03 compute-0 nova_compute[264398]: + echo 'Running command: '\''nova-compute'\'''
Sep 30 17:59:03 compute-0 nova_compute[264398]: + umask 0022
Sep 30 17:59:03 compute-0 nova_compute[264398]: + exec nova-compute
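The kolla_set_configs trace above (load and validate config.json, copy each declared file, set its permissions, then write out /run_command for kolla_start to exec) can be summarised with a small stand-alone Python sketch. It only illustrates the COPY_ALWAYS strategy the log shows, assuming the usual kolla config.json layout (a top-level "command" plus a "config_files" list with source/dest/perm entries); it is not kolla's actual implementation.

# Minimal sketch of the COPY_ALWAYS config strategy seen in the kolla_set_configs
# log lines above. Illustrative only -- not kolla's real code.
import json
import shutil
from pathlib import Path

CONFIG_JSON = Path("/var/lib/kolla/config_files/config.json")  # path from the log

def copy_one(entry: dict) -> None:
    """Copy one declared config file and apply the permissions set for it."""
    src, dest = Path(entry["source"]), Path(entry["dest"])
    dest.parent.mkdir(parents=True, exist_ok=True)
    print(f"Copying {src} to {dest}")
    shutil.copy2(src, dest)
    if "perm" in entry:
        print(f"Setting permission for {dest}")
        dest.chmod(int(entry["perm"], 8))

def main() -> None:
    print(f"Loading config file at {CONFIG_JSON}")
    config = json.loads(CONFIG_JSON.read_text())
    for entry in config.get("config_files", []):
        copy_one(entry)
    # kolla_start later reads this command and exec's it (here: nova-compute).
    Path("/run_command").write_text(config.get("command", ""))

if __name__ == "__main__":
    main()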
Sep 30 17:59:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:59:03.587Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:59:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v599: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:04 compute-0 ceph-mon[73755]: pgmap v599: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:04 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:04 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:59:04.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:05 compute-0 sudo[264488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:59:05 compute-0 sudo[264488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:59:05 compute-0 sudo[264488]: pam_unix(sudo:session): session closed for user root
Sep 30 17:59:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:59:05.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:59:05 compute-0 python3.9[264586]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:59:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v600: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:06 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:06 compute-0 nova_compute[264398]: 2025-09-30 17:59:06.322 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.12/site-packages/os_vif/__init__.py:44
Sep 30 17:59:06 compute-0 nova_compute[264398]: 2025-09-30 17:59:06.322 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.12/site-packages/os_vif/__init__.py:44
Sep 30 17:59:06 compute-0 nova_compute[264398]: 2025-09-30 17:59:06.323 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.12/site-packages/os_vif/__init__.py:44
Sep 30 17:59:06 compute-0 nova_compute[264398]: 2025-09-30 17:59:06.323 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Sep 30 17:59:06 compute-0 python3.9[264738]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:59:06 compute-0 nova_compute[264398]: 2025-09-30 17:59:06.465 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 17:59:06 compute-0 nova_compute[264398]: 2025-09-30 17:59:06.486 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.020s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 17:59:06 compute-0 nova_compute[264398]: 2025-09-30 17:59:06.521 2 INFO oslo_service.periodic_task [-] Skipping periodic task _heal_instance_info_cache because its interval is negative
Sep 30 17:59:06 compute-0 nova_compute[264398]: 2025-09-30 17:59:06.523 2 WARNING oslo_config.cfg [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] Deprecated: Option "heartbeat_in_pthread" from group "oslo_messaging_rabbit" is deprecated for removal (The option is related to Eventlet which will be removed. In addition this has never worked as expected with services using eventlet for core service framework.).  Its value may be silently ignored in the future.
Sep 30 17:59:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:06 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2134001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:59:06.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:06 compute-0 ceph-mon[73755]: pgmap v600: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:59:07.069Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:59:07 compute-0 python3.9[264891]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:59:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:59:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:59:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:59:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:59:07.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_17:59:07
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', '.mgr', 'backups', 'volumes', 'vms', '.nfs', '.rgw.root']
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 17:59:07 compute-0 nova_compute[264398]: 2025-09-30 17:59:07.574 2 INFO nova.virt.driver [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Sep 30 17:59:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v601: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:59:07 compute-0 nova_compute[264398]: 2025-09-30 17:59:07.732 2 INFO nova.compute.provider_config [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Sep 30 17:59:07 compute-0 sudo[265058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yybanvgurgritjxwqygbxslwovmzanbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255147.6340377-3525-79502086137270/AnsiballZ_podman_container.py'
Sep 30 17:59:07 compute-0 sudo[265058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:59:07 compute-0 podman[265019]: 2025-09-30 17:59:07.959335458 +0000 UTC m=+0.064097546 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2)
Sep 30 17:59:08 compute-0 python3.9[265066]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Sep 30 17:59:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:08 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.241 2 DEBUG oslo_concurrency.lockutils [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 17:59:08 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.241 2 DEBUG oslo_concurrency.lockutils [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 17:59:08 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.242 2 DEBUG oslo_concurrency.lockutils [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.242 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/service.py:274
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.242 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.242 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.242 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.243 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.243 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.243 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.243 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.244 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.244 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.244 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.244 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.244 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cell_worker_thread_pool_size   = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.245 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.245 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.245 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.245 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.245 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.245 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.246 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.246 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.246 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.246 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.246 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.247 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.247 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.247 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.247 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.247 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] default_green_pool_size        = 1000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.247 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.248 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.248 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] default_thread_pool_size       = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.248 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.248 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.248 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] fatal_deprecations             = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.248 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.249 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.249 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.249 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.249 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] heal_instance_info_cache_interval = -1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.249 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.250 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.250 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.250 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.250 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] injected_network_template      = /usr/lib/python3.12/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.250 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.250 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.251 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.251 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.251 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.251 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.251 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.252 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.252 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.252 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] key                            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.252 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.252 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.252 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.253 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.253 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.253 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.253 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.253 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.253 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.253 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.254 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.254 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.254 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.254 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.254 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.254 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.255 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.255 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.255 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.255 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.255 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.255 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.256 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.256 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.256 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.256 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.256 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.257 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] my_shared_fs_storage_ip        = 192.168.122.100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.257 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.257 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.257 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.257 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.258 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.258 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.259 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.259 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.259 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.259 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] pybasedir                      = /usr/lib/python3.12/site-packages log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.259 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.259 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.260 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.260 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.260 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.260 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.260 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] record                         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.260 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.261 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.261 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.261 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.261 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.261 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.262 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.262 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.262 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.262 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.262 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.262 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.263 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.263 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.263 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.263 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.263 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.263 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.264 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.264 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.264 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.264 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.264 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.264 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.264 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.265 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.265 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.265 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.265 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.265 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.265 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] thread_pool_statistic_period   = -1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.266 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.266 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.266 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.266 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.266 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.266 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.267 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.267 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.267 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.267 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.267 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.267 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.268 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.268 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.268 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.268 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.270 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.271 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.271 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] os_brick.lock_path             = /var/lib/nova/tmp log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.271 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.271 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.271 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.272 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.272 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.272 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.272 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.272 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.272 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.273 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.273 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.273 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.273 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.273 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.273 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.274 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.274 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.274 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.274 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.274 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api.neutron_default_project_id = default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.275 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api.response_validation        = warn log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.275 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.275 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.275 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.275 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.275 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.276 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.276 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.276 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.276 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.276 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.276 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.backend_expiration_time  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.276 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.277 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.277 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.277 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.277 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.277 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.277 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.enforce_fips_mode        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.277 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.277 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.278 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.278 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.278 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.memcache_password        = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.278 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.278 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.278 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.279 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.279 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.279 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.279 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.279 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.memcache_username        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.279 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.280 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.redis_db                 = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.280 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.redis_password           = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.280 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.redis_sentinel_service_name = mymaster log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.280 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.redis_sentinels          = ['localhost:26379'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.280 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.redis_server             = localhost:6379 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.280 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.redis_socket_timeout     = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.280 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.redis_username           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.281 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.281 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.281 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.281 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.281 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.281 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.282 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.282 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.282 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.282 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.282 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.283 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.283 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.283 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.283 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.283 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.283 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.284 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.284 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.284 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.284 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.284 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.284 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.285 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.285 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.285 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.285 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.285 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.285 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.286 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.286 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.286 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.286 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.286 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.286 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.286 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] compute.sharing_providers_max_uuids_per_request = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.287 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.287 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.287 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.287 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.289 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.289 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.289 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] consoleauth.enforce_session_timeout = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.290 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.290 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.290 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.290 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.290 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.290 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.291 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.291 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.291 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.291 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.291 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.291 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.292 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cyborg.retriable_status_codes  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.292 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.292 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.292 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.292 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.292 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.293 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.293 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.293 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.293 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] database.asyncio_connection    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.293 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] database.asyncio_slave_connection = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.293 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.294 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.294 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.294 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.294 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.294 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.294 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.294 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.294 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.295 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.295 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.295 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.295 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.295 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.295 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.296 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.296 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.296 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.296 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.296 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api_database.asyncio_connection = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.296 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api_database.asyncio_slave_connection = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.296 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.297 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.297 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.297 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.297 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.297 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.297 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.298 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.298 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.298 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.298 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.298 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.298 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.298 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.299 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.299 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.299 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.299 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.299 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.299 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.299 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.300 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ephemeral_storage_encryption.default_format = luks log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.300 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.300 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.300 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.300 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.301 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.301 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.301 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.301 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.301 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.301 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.302 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.302 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.302 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.302 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.302 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.302 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.302 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.303 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.303 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.303 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.303 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.304 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.304 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.304 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] glance.retriable_status_codes  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.304 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.304 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.304 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.304 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.305 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.305 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.305 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.305 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.305 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.305 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.305 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] manila.auth_section            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.305 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] manila.auth_type               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.306 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] manila.cafile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.306 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] manila.certfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.306 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] manila.collect_timing          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.306 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] manila.connect_retries         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.306 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] manila.connect_retry_delay     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.306 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] manila.endpoint_override       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.306 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] manila.insecure                = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.306 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] manila.keyfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.307 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] manila.max_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.307 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] manila.min_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.307 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] manila.region_name             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.307 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] manila.retriable_status_codes  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.307 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] manila.service_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.307 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] manila.service_type            = shared-file-system log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.307 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] manila.share_apply_policy_timeout = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.307 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] manila.split_loggers           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.308 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] manila.status_code_retries     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.308 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] manila.status_code_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.308 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] manila.timeout                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.308 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] manila.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.308 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] manila.version                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.308 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.309 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.309 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.309 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.309 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.309 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.309 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.309 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.310 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.310 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.310 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.310 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.310 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.310 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.311 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.311 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ironic.conductor_group         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.311 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.311 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.311 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.311 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.311 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.311 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.312 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.312 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.312 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.312 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ironic.retriable_status_codes  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.312 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.312 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.313 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.313 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ironic.shard                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.313 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.313 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.313 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.313 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.313 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.314 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.314 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.314 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.314 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.314 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.314 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.314 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.314 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.315 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.315 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.315 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.315 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.315 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.315 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.315 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.315 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.316 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.316 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.316 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.316 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.316 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.316 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.316 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.316 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.317 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.317 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.317 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.317 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.317 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.317 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vault.approle_role_id          = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.317 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vault.approle_secret_id        = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.318 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.318 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vault.kv_path                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.318 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.318 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.318 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vault.root_token_id            = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.318 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.319 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vault.timeout                  = 60.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.319 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.319 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.319 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.319 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.319 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.320 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.320 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.320 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.320 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.320 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.321 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.321 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.321 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.321 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] keystone.retriable_status_codes = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.321 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.321 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.321 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.322 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.322 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.322 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.322 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.322 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.322 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.ceph_mount_options     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.323 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.ceph_mount_point_base  = /var/lib/nova/mnt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.323 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.323 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.324 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.324 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.324 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.324 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.324 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.324 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.324 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.325 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.325 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.325 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.325 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.325 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.325 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.325 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.326 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.326 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.326 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.326 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.326 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.326 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.326 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.327 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.327 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.327 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.327 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.327 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.327 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.328 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.328 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.328 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.328 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.328 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.328 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.329 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.329 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.329 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.329 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.329 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.330 2 WARNING oslo_config.cfg [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Sep 30 17:59:08 compute-0 nova_compute[264398]: live_migration_uri is deprecated for removal in favor of two other options that
Sep 30 17:59:08 compute-0 nova_compute[264398]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Sep 30 17:59:08 compute-0 nova_compute[264398]: and ``live_migration_inbound_addr`` respectively.
Sep 30 17:59:08 compute-0 nova_compute[264398]: ).  Its value may be silently ignored in the future.
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.330 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.330 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.330 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.330 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.330 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.migration_inbound_addr = 192.168.122.100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.331 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.331 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.331 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.331 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.331 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.331 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.332 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.332 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.332 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.332 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.332 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.332 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.333 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.333 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.333 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.rbd_secret_uuid        = 63d32c6a-fa18-54ed-8711-9a3915cc367b log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.333 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.333 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.333 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.334 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.334 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.334 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.334 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.334 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.334 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.335 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.335 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.335 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.335 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.335 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.335 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.336 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.336 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.336 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.336 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.tb_cache_size          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.336 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.336 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.337 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.337 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.337 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.337 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.337 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.volume_enforce_multipath = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.337 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.338 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.338 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.338 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.338 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.338 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.338 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.338 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.338 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.339 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.339 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 sudo[265058]: pam_unix(sudo:session): session closed for user root
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.339 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.339 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.339 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.339 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.339 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.339 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.340 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.340 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.340 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.340 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.340 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.340 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.340 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.340 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.341 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.341 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.341 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.341 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] neutron.retriable_status_codes = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.341 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.341 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.341 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.341 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.342 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.342 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.342 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.342 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.342 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.342 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.342 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.343 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] notifications.include_share_mapping = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.343 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] notifications.notification_format = both log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.343 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] notifications.notify_on_state_change = vm_and_task_state log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.343 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.343 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.343 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.344 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.344 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.344 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.344 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.344 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.344 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.344 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.345 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.345 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.345 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.345 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.345 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.345 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.345 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.345 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.346 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.346 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.346 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.346 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.346 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.346 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.346 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.347 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.347 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.347 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.retriable_status_codes = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.347 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.347 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.347 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.347 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.348 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.348 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.348 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.348 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.348 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.348 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.348 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.348 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.349 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.349 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.349 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.349 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.349 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.349 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.349 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.349 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.350 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.350 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.350 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.350 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.350 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.350 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.350 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.350 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] quota.unified_limits_resource_list = ['servers'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.351 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] quota.unified_limits_resource_strategy = require log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.351 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.351 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.351 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.351 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.351 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.351 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.351 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.352 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.352 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.352 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.352 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.352 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.352 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.352 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.352 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.353 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.353 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.353 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.353 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.353 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] filter_scheduler.hypervisor_version_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.353 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.353 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] filter_scheduler.image_props_weight_multiplier = 0.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.353 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] filter_scheduler.image_props_weight_setting = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.354 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.354 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.354 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.354 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.354 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.354 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] filter_scheduler.num_instances_weight_multiplier = 0.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.354 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.355 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.355 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.355 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.355 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.355 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.355 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.356 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.356 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.356 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.356 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.356 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.356 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.357 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.357 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.357 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.357 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.357 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.358 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.358 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.358 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.358 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.358 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.358 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.359 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.359 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.359 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.359 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.359 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.359 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.359 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.360 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.360 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.360 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.360 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.360 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.360 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.361 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] spice.require_secure           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.361 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.361 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.361 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] spice.spice_direct_proxy_base_url = http://127.0.0.1:13002/nova log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.361 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.361 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.361 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.362 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.362 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.362 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.362 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.362 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.362 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.362 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.363 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.363 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.363 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.363 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.363 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.363 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.363 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.364 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.364 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.364 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.364 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.364 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.364 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.364 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.365 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.365 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.365 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.365 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.365 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.365 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.365 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.365 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.366 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.366 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.366 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.366 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.366 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.366 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.366 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.366 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.367 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.367 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.367 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.367 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.367 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.367 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.368 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.368 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.368 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.368 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.368 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.368 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.368 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.368 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.368 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.369 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.369 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.369 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.369 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.369 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.369 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.369 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.369 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.370 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.370 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.370 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.370 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.370 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.370 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.370 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.370 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.371 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.371 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.371 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.371 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.371 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.371 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.372 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.372 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.372 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.372 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.372 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.373 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.373 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.373 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.373 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.373 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.373 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.374 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.hostname = compute-0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.374 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.374 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.374 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.374 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.374 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_splay = 0.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.374 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.processname = nova-compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.374 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.375 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.375 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.375 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.375 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.375 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.375 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.375 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.375 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.376 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.376 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.rabbit_stream_fanout = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.376 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.376 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.rabbit_transient_quorum_queue = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.376 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.376 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.376 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.377 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.377 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.377 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.377 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.377 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_rabbit.use_queue_manager = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.378 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_notifications.driver = ['messagingv2'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.378 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.379 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.379 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.379 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.379 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.379 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.380 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.380 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.380 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.380 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.380 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.381 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.381 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.381 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.381 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.381 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.381 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.endpoint_interface  = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.382 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.382 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.endpoint_region_name = regionOne log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.382 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.endpoint_service_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.382 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.endpoint_service_type = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.382 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.382 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.383 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.383 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.383 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.383 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.383 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.383 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.384 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.384 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.384 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.retriable_status_codes = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.384 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.384 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.384 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.385 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.385 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.385 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.385 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.385 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.386 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.386 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.386 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.386 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.386 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.386 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.387 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.387 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.387 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.387 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.387 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.388 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.388 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vif_plug_linux_bridge_privileged.log_daemon_traceback = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.388 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.388 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.388 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.388 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.389 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.389 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.389 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vif_plug_ovs_privileged.log_daemon_traceback = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.389 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.389 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.389 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.390 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.390 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.390 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.390 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.390 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.390 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.391 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.391 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.391 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] os_vif_ovs.default_qos_type    = linux-noop log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.391 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.391 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.391 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.392 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.392 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.392 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.392 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] privsep_osbrick.capabilities   = [21, 2] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.392 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.392 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.392 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] privsep_osbrick.log_daemon_traceback = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.392 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.393 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.393 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.393 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.393 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.393 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.393 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] nova_sys_admin.log_daemon_traceback = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.394 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.394 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.394 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.394 2 DEBUG oslo_service.backend._eventlet.service [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
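[editor's note] The long DEBUG block that ends with the row of asterisks above is oslo.config's standard startup option dump. A minimal sketch of how a service produces it is below; the group and option names are illustrative only, not Nova's full option set.

import logging
from oslo_config import cfg

LOG = logging.getLogger(__name__)
logging.basicConfig(level=logging.DEBUG)

# Illustrative options only; a real service such as nova-compute registers far more.
cfg.CONF.register_opts(
    [cfg.StrOpt('username', default='nova'),
     cfg.StrOpt('password', secret=True),      # secret options are logged as **** as above
     cfg.StrOpt('system_scope', default='all')],
    group='oslo_limit')

cfg.CONF([], project='example')                # parse the (empty) command line and config files
cfg.CONF.log_opt_values(LOG, logging.DEBUG)    # emits one "group.option = value" line per option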
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.395 2 INFO nova.service [-] Starting compute node (version 32.1.0-0.20250919142712.b99a882.el10)
Sep 30 17:59:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:08 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:59:08.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:59:08] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:59:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:59:08] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:59:08 compute-0 ceph-mon[73755]: pgmap v601: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:08 compute-0 sudo[265240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fglqzhvtzudqjibvogfxzhbdajnongkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255148.5648236-3541-275627424367450/AnsiballZ_systemd.py'
Sep 30 17:59:08 compute-0 sudo[265240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.902 2 DEBUG nova.virt.libvirt.host [None req-f5df6aa5-1286-448f-8f95-2db62fcce95e - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:498
Sep 30 17:59:08 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Sep 30 17:59:08 compute-0 systemd[1]: Started libvirt QEMU daemon.
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.996 2 DEBUG nova.virt.libvirt.host [None req-f5df6aa5-1286-448f-8f95-2db62fcce95e - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f7571ce4b00> _get_new_connection /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:504
Sep 30 17:59:08 compute-0 nova_compute[264398]: libvirt:  error : internal error: could not initialize domain event timer
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.998 2 WARNING nova.virt.libvirt.host [None req-f5df6aa5-1286-448f-8f95-2db62fcce95e - - - - - -] URI qemu:///system does not support events: internal error: could not initialize domain event timer: libvirt.libvirtError: internal error: could not initialize domain event timer
Sep 30 17:59:08 compute-0 nova_compute[264398]: 2025-09-30 17:59:08.999 2 DEBUG nova.virt.libvirt.host [None req-f5df6aa5-1286-448f-8f95-2db62fcce95e - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f7571ce4b00> _get_new_connection /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:525
Sep 30 17:59:09 compute-0 nova_compute[264398]: 2025-09-30 17:59:09.002 2 DEBUG nova.virt.libvirt.host [None req-f5df6aa5-1286-448f-8f95-2db62fcce95e - - - - - -] Starting native event thread _init_events /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:484
Sep 30 17:59:09 compute-0 nova_compute[264398]: 2025-09-30 17:59:09.002 2 DEBUG nova.virt.libvirt.host [None req-f5df6aa5-1286-448f-8f95-2db62fcce95e - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:490
Sep 30 17:59:09 compute-0 nova_compute[264398]: 2025-09-30 17:59:09.003 2 INFO nova.utils [None req-f5df6aa5-1286-448f-8f95-2db62fcce95e - - - - - -] The default thread pool MainProcess.default is initialized
Sep 30 17:59:09 compute-0 nova_compute[264398]: 2025-09-30 17:59:09.003 2 DEBUG nova.virt.libvirt.host [None req-f5df6aa5-1286-448f-8f95-2db62fcce95e - - - - - -] Starting connection event dispatch thread _init_events /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:493
Sep 30 17:59:09 compute-0 nova_compute[264398]: 2025-09-30 17:59:09.004 2 INFO nova.virt.libvirt.driver [None req-f5df6aa5-1286-448f-8f95-2db62fcce95e - - - - - -] Connection event '1' reason 'None'
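[editor's note] The sequence above shows the libvirt driver connecting to qemu:///system and attempting to register for lifecycle events, with the "could not initialize domain event timer" error typically meaning the default libvirt event loop was not registered before the connection was opened. A hedged, standalone sketch of that registration with libvirt-python (not Nova's actual code) is:

import libvirt

def lifecycle_cb(conn, dom, event, detail, opaque):
    # Called for VM lifecycle changes (started, stopped, suspended, ...).
    print(dom.name(), event, detail)

libvirt.virEventRegisterDefaultImpl()          # must run before event registration,
conn = libvirt.openReadOnly('qemu:///system')  # otherwise the event timer cannot be created
conn.domainEventRegisterAny(None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
                            lifecycle_cb, None)
while True:
    libvirt.virEventRunDefaultImpl()           # drive the event loop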
Sep 30 17:59:09 compute-0 python3.9[265242]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 17:59:09 compute-0 systemd[1]: Stopping nova_compute container...
Sep 30 17:59:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:59:09.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:09 compute-0 nova_compute[264398]: 2025-09-30 17:59:09.391 2 DEBUG oslo_concurrency.lockutils [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 17:59:09 compute-0 nova_compute[264398]: 2025-09-30 17:59:09.392 2 DEBUG oslo_concurrency.lockutils [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 17:59:09 compute-0 nova_compute[264398]: 2025-09-30 17:59:09.392 2 DEBUG oslo_concurrency.lockutils [None req-2a823e7b-84c5-4cfb-91b1-b5181d0fbbd0 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 17:59:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v602: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:59:09 compute-0 podman[265322]: 2025-09-30 17:59:09.690554225 +0000 UTC m=+0.090617725 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Sep 30 17:59:09 compute-0 nova_compute[264398]: 2025-09-30 17:59:09.811 2 WARNING nova.virt.libvirt.driver [None req-f5df6aa5-1286-448f-8f95-2db62fcce95e - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Sep 30 17:59:09 compute-0 nova_compute[264398]: 2025-09-30 17:59:09.811 2 DEBUG nova.virt.libvirt.volume.mount [None req-f5df6aa5-1286-448f-8f95-2db62fcce95e - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.12/site-packages/nova/virt/libvirt/volume/mount.py:130
Sep 30 17:59:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:10 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:59:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:10 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2134001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:10 compute-0 virtqemud[265263]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Sep 30 17:59:10 compute-0 virtqemud[265263]: hostname: compute-0
Sep 30 17:59:10 compute-0 virtqemud[265263]: End of file while reading data: Input/output error
Sep 30 17:59:10 compute-0 systemd[1]: libpod-5c57df5628252304282bd7850e9ea4dca7f0fcabc9c22ccdc19a8b5ba5d56e99.scope: Deactivated successfully.
Sep 30 17:59:10 compute-0 systemd[1]: libpod-5c57df5628252304282bd7850e9ea4dca7f0fcabc9c22ccdc19a8b5ba5d56e99.scope: Consumed 3.736s CPU time.
Sep 30 17:59:10 compute-0 podman[265298]: 2025-09-30 17:59:10.625170121 +0000 UTC m=+1.363154089 container died 5c57df5628252304282bd7850e9ea4dca7f0fcabc9c22ccdc19a8b5ba5d56e99 (image=38.129.56.221:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, config_data={'image': '38.129.56.221:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 17:59:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5c57df5628252304282bd7850e9ea4dca7f0fcabc9c22ccdc19a8b5ba5d56e99-userdata-shm.mount: Deactivated successfully.
Sep 30 17:59:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8fa81a43b9bb16f6a82ff8334e58e3e8d1e2eae5e8bb9396125cbb3da4c8105-merged.mount: Deactivated successfully.
Sep 30 17:59:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:59:10.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:10 compute-0 podman[265298]: 2025-09-30 17:59:10.7325375 +0000 UTC m=+1.470521448 container cleanup 5c57df5628252304282bd7850e9ea4dca7f0fcabc9c22ccdc19a8b5ba5d56e99 (image=38.129.56.221:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'image': '38.129.56.221:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, tcib_managed=true, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Sep 30 17:59:10 compute-0 podman[265298]: nova_compute
Sep 30 17:59:10 compute-0 podman[265362]: nova_compute
Sep 30 17:59:10 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Sep 30 17:59:10 compute-0 systemd[1]: Stopped nova_compute container.
Sep 30 17:59:10 compute-0 systemd[1]: Starting nova_compute container...
Sep 30 17:59:10 compute-0 ceph-mon[73755]: pgmap v602: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:59:10 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:59:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8fa81a43b9bb16f6a82ff8334e58e3e8d1e2eae5e8bb9396125cbb3da4c8105/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Sep 30 17:59:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8fa81a43b9bb16f6a82ff8334e58e3e8d1e2eae5e8bb9396125cbb3da4c8105/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Sep 30 17:59:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8fa81a43b9bb16f6a82ff8334e58e3e8d1e2eae5e8bb9396125cbb3da4c8105/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Sep 30 17:59:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8fa81a43b9bb16f6a82ff8334e58e3e8d1e2eae5e8bb9396125cbb3da4c8105/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Sep 30 17:59:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8fa81a43b9bb16f6a82ff8334e58e3e8d1e2eae5e8bb9396125cbb3da4c8105/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Sep 30 17:59:10 compute-0 podman[265375]: 2025-09-30 17:59:10.919055525 +0000 UTC m=+0.093131340 container init 5c57df5628252304282bd7850e9ea4dca7f0fcabc9c22ccdc19a8b5ba5d56e99 (image=38.129.56.221:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': '38.129.56.221:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0)
Sep 30 17:59:10 compute-0 podman[265375]: 2025-09-30 17:59:10.92540117 +0000 UTC m=+0.099476955 container start 5c57df5628252304282bd7850e9ea4dca7f0fcabc9c22ccdc19a8b5ba5d56e99 (image=38.129.56.221:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute, org.label-schema.vendor=CentOS, config_data={'image': '38.129.56.221:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.build-date=20250930)
Sep 30 17:59:10 compute-0 podman[265375]: nova_compute
Sep 30 17:59:10 compute-0 nova_compute[265391]: + sudo -E kolla_set_configs
Sep 30 17:59:10 compute-0 systemd[1]: Started nova_compute container.
Sep 30 17:59:10 compute-0 sudo[265240]: pam_unix(sudo:session): session closed for user root
Sep 30 17:59:10 compute-0 nova_compute[265391]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Sep 30 17:59:10 compute-0 nova_compute[265391]: INFO:__main__:Validating config file
Sep 30 17:59:10 compute-0 nova_compute[265391]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Sep 30 17:59:10 compute-0 nova_compute[265391]: INFO:__main__:Copying service configuration files
Sep 30 17:59:10 compute-0 nova_compute[265391]: INFO:__main__:Deleting /etc/nova/nova.conf
Sep 30 17:59:10 compute-0 nova_compute[265391]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Sep 30 17:59:10 compute-0 nova_compute[265391]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Sep 30 17:59:10 compute-0 nova_compute[265391]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Sep 30 17:59:10 compute-0 nova_compute[265391]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Sep 30 17:59:10 compute-0 nova_compute[265391]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Sep 30 17:59:10 compute-0 nova_compute[265391]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Sep 30 17:59:10 compute-0 nova_compute[265391]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Sep 30 17:59:10 compute-0 nova_compute[265391]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Sep 30 17:59:10 compute-0 nova_compute[265391]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Sep 30 17:59:10 compute-0 nova_compute[265391]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Sep 30 17:59:10 compute-0 nova_compute[265391]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Sep 30 17:59:10 compute-0 nova_compute[265391]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Sep 30 17:59:10 compute-0 nova_compute[265391]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Sep 30 17:59:10 compute-0 nova_compute[265391]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Sep 30 17:59:10 compute-0 nova_compute[265391]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Sep 30 17:59:10 compute-0 nova_compute[265391]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Sep 30 17:59:11 compute-0 nova_compute[265391]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Sep 30 17:59:11 compute-0 nova_compute[265391]: INFO:__main__:Deleting /etc/ceph
Sep 30 17:59:11 compute-0 nova_compute[265391]: INFO:__main__:Creating directory /etc/ceph
Sep 30 17:59:11 compute-0 nova_compute[265391]: INFO:__main__:Setting permission for /etc/ceph
Sep 30 17:59:11 compute-0 nova_compute[265391]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Sep 30 17:59:11 compute-0 nova_compute[265391]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Sep 30 17:59:11 compute-0 nova_compute[265391]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Sep 30 17:59:11 compute-0 nova_compute[265391]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Sep 30 17:59:11 compute-0 nova_compute[265391]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Sep 30 17:59:11 compute-0 nova_compute[265391]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Sep 30 17:59:11 compute-0 nova_compute[265391]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Sep 30 17:59:11 compute-0 nova_compute[265391]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Sep 30 17:59:11 compute-0 nova_compute[265391]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Sep 30 17:59:11 compute-0 nova_compute[265391]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Sep 30 17:59:11 compute-0 nova_compute[265391]: INFO:__main__:Writing out command to execute
Sep 30 17:59:11 compute-0 nova_compute[265391]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Sep 30 17:59:11 compute-0 nova_compute[265391]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Sep 30 17:59:11 compute-0 nova_compute[265391]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Sep 30 17:59:11 compute-0 nova_compute[265391]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Sep 30 17:59:11 compute-0 nova_compute[265391]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Sep 30 17:59:11 compute-0 nova_compute[265391]: ++ cat /run_command
Sep 30 17:59:11 compute-0 nova_compute[265391]: + CMD=nova-compute
Sep 30 17:59:11 compute-0 nova_compute[265391]: + ARGS=
Sep 30 17:59:11 compute-0 nova_compute[265391]: + sudo kolla_copy_cacerts
Sep 30 17:59:11 compute-0 nova_compute[265391]: + [[ ! -n '' ]]
Sep 30 17:59:11 compute-0 nova_compute[265391]: + . kolla_extend_start
Sep 30 17:59:11 compute-0 nova_compute[265391]: + echo 'Running command: '\''nova-compute'\'''
Sep 30 17:59:11 compute-0 nova_compute[265391]: Running command: 'nova-compute'
Sep 30 17:59:11 compute-0 nova_compute[265391]: + umask 0022
Sep 30 17:59:11 compute-0 nova_compute[265391]: + exec nova-compute
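[editor's note] The trace above is the usual Kolla container bootstrap: kolla_set_configs copies files according to /var/lib/kolla/config_files/config.json, writes the command to /run_command, and the start script execs it. A rough, simplified sketch of that flow (assumed layout; the real scripts also handle ownership, permissions and merge strategies):

import json, os, shutil

with open('/var/lib/kolla/config_files/config.json') as f:
    cfg = json.load(f)

for entry in cfg.get('config_files', []):            # "Copying ... / Setting permission ..."
    shutil.copy(entry['source'], entry['dest'])
    os.chmod(entry['dest'], int(entry.get('perm', '0600'), 8))

with open('/run_command', 'w') as f:                 # later consumed by `cat /run_command`
    f.write(cfg['command'])

os.execvp(cfg['command'], [cfg['command']])          # `exec nova-compute` (assumes a bare command)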
Sep 30 17:59:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:59:11.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:11 compute-0 sudo[265553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etauxbrmubojocgjpqlwkldtkaaolnna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255151.201111-3559-264186116121446/AnsiballZ_podman_container.py'
Sep 30 17:59:11 compute-0 sudo[265553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:59:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v603: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:11 compute-0 python3.9[265555]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Sep 30 17:59:11 compute-0 systemd[1]: Started libpod-conmon-51abaa1a0c8d12a110a7f86482d9b63eda95665ebdef1108a61ab9cc3a6c684b.scope.
Sep 30 17:59:11 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:59:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb15f850e5cf52d9169e7822a7ec844134cfb5a59f02b681b01d69f74e641992/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Sep 30 17:59:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb15f850e5cf52d9169e7822a7ec844134cfb5a59f02b681b01d69f74e641992/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Sep 30 17:59:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb15f850e5cf52d9169e7822a7ec844134cfb5a59f02b681b01d69f74e641992/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Sep 30 17:59:11 compute-0 podman[265583]: 2025-09-30 17:59:11.989287534 +0000 UTC m=+0.122821402 container init 51abaa1a0c8d12a110a7f86482d9b63eda95665ebdef1108a61ab9cc3a6c684b (image=38.129.56.221:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=edpm, container_name=nova_compute_init, org.label-schema.license=GPLv2, config_data={'image': '38.129.56.221:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Sep 30 17:59:11 compute-0 podman[265583]: 2025-09-30 17:59:11.998735269 +0000 UTC m=+0.132269107 container start 51abaa1a0c8d12a110a7f86482d9b63eda95665ebdef1108a61ab9cc3a6c684b (image=38.129.56.221:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute_init, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': '38.129.56.221:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, container_name=nova_compute_init, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest)
Sep 30 17:59:12 compute-0 python3.9[265555]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Sep 30 17:59:12 compute-0 nova_compute_init[265603]: INFO:nova_statedir:Applying nova statedir ownership
Sep 30 17:59:12 compute-0 nova_compute_init[265603]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Sep 30 17:59:12 compute-0 nova_compute_init[265603]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Sep 30 17:59:12 compute-0 nova_compute_init[265603]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Sep 30 17:59:12 compute-0 nova_compute_init[265603]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Sep 30 17:59:12 compute-0 nova_compute_init[265603]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Sep 30 17:59:12 compute-0 nova_compute_init[265603]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Sep 30 17:59:12 compute-0 nova_compute_init[265603]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Sep 30 17:59:12 compute-0 nova_compute_init[265603]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Sep 30 17:59:12 compute-0 nova_compute_init[265603]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Sep 30 17:59:12 compute-0 nova_compute_init[265603]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Sep 30 17:59:12 compute-0 nova_compute_init[265603]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Sep 30 17:59:12 compute-0 nova_compute_init[265603]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Sep 30 17:59:12 compute-0 nova_compute_init[265603]: INFO:nova_statedir:Nova statedir ownership complete
Sep 30 17:59:12 compute-0 systemd[1]: libpod-51abaa1a0c8d12a110a7f86482d9b63eda95665ebdef1108a61ab9cc3a6c684b.scope: Deactivated successfully.
Sep 30 17:59:12 compute-0 podman[265604]: 2025-09-30 17:59:12.082161016 +0000 UTC m=+0.032401043 container died 51abaa1a0c8d12a110a7f86482d9b63eda95665ebdef1108a61ab9cc3a6c684b (image=38.129.56.221:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute_init, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, container_name=nova_compute_init, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': '38.129.56.221:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Sep 30 17:59:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-51abaa1a0c8d12a110a7f86482d9b63eda95665ebdef1108a61ab9cc3a6c684b-userdata-shm.mount: Deactivated successfully.
Sep 30 17:59:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb15f850e5cf52d9169e7822a7ec844134cfb5a59f02b681b01d69f74e641992-merged.mount: Deactivated successfully.
Sep 30 17:59:12 compute-0 podman[265614]: 2025-09-30 17:59:12.154127465 +0000 UTC m=+0.077870333 container cleanup 51abaa1a0c8d12a110a7f86482d9b63eda95665ebdef1108a61ab9cc3a6c684b (image=38.129.56.221:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'image': '38.129.56.221:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team)
Sep 30 17:59:12 compute-0 systemd[1]: libpod-conmon-51abaa1a0c8d12a110a7f86482d9b63eda95665ebdef1108a61ab9cc3a6c684b.scope: Deactivated successfully.
Sep 30 17:59:12 compute-0 sudo[265553]: pam_unix(sudo:session): session closed for user root
Sep 30 17:59:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:12 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2140001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:12 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300008d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:59:12.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:12 compute-0 sshd-session[227752]: Connection closed by 192.168.122.30 port 53656
Sep 30 17:59:12 compute-0 sshd-session[227749]: pam_unix(sshd:session): session closed for user zuul
Sep 30 17:59:12 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Sep 30 17:59:12 compute-0 systemd[1]: session-55.scope: Consumed 2min 56.458s CPU time.
Sep 30 17:59:12 compute-0 systemd-logind[811]: Session 55 logged out. Waiting for processes to exit.
Sep 30 17:59:12 compute-0 systemd-logind[811]: Removed session 55.
Sep 30 17:59:12 compute-0 ceph-mon[73755]: pgmap v603: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:13 compute-0 nova_compute[265391]: 2025-09-30 17:59:13.212 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.12/site-packages/os_vif/__init__.py:44
Sep 30 17:59:13 compute-0 nova_compute[265391]: 2025-09-30 17:59:13.213 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.12/site-packages/os_vif/__init__.py:44
Sep 30 17:59:13 compute-0 nova_compute[265391]: 2025-09-30 17:59:13.213 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.12/site-packages/os_vif/__init__.py:44
Sep 30 17:59:13 compute-0 nova_compute[265391]: 2025-09-30 17:59:13.213 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Sep 30 17:59:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:59:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:59:13.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:59:13 compute-0 nova_compute[265391]: 2025-09-30 17:59:13.374 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 17:59:13 compute-0 nova_compute[265391]: 2025-09-30 17:59:13.395 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.021s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 17:59:13 compute-0 nova_compute[265391]: 2025-09-30 17:59:13.427 2 INFO oslo_service.periodic_task [-] Skipping periodic task _heal_instance_info_cache because its interval is negative
Sep 30 17:59:13 compute-0 nova_compute[265391]: 2025-09-30 17:59:13.428 2 WARNING oslo_config.cfg [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] Deprecated: Option "heartbeat_in_pthread" from group "oslo_messaging_rabbit" is deprecated for removal (The option is related to Eventlet which will be removed. In addition this has never worked as expected with services using eventlet for core service framework.).  Its value may be silently ignored in the future.
Sep 30 17:59:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:59:13.588Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 17:59:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v604: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:14 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:14 compute-0 nova_compute[265391]: 2025-09-30 17:59:14.386 2 INFO nova.virt.driver [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Sep 30 17:59:14 compute-0 nova_compute[265391]: 2025-09-30 17:59:14.487 2 INFO nova.compute.provider_config [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Sep 30 17:59:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:14 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2134001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:59:14.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:14 compute-0 ceph-mon[73755]: pgmap v604: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:14 compute-0 nova_compute[265391]: 2025-09-30 17:59:14.994 2 DEBUG oslo_concurrency.lockutils [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 17:59:14 compute-0 nova_compute[265391]: 2025-09-30 17:59:14.995 2 DEBUG oslo_concurrency.lockutils [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 17:59:14 compute-0 nova_compute[265391]: 2025-09-30 17:59:14.995 2 DEBUG oslo_concurrency.lockutils [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 17:59:14 compute-0 nova_compute[265391]: 2025-09-30 17:59:14.995 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/service.py:274
Sep 30 17:59:14 compute-0 nova_compute[265391]: 2025-09-30 17:59:14.996 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Sep 30 17:59:14 compute-0 nova_compute[265391]: 2025-09-30 17:59:14.996 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Sep 30 17:59:14 compute-0 nova_compute[265391]: 2025-09-30 17:59:14.996 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Sep 30 17:59:14 compute-0 nova_compute[265391]: 2025-09-30 17:59:14.997 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Sep 30 17:59:14 compute-0 nova_compute[265391]: 2025-09-30 17:59:14.997 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Sep 30 17:59:14 compute-0 nova_compute[265391]: 2025-09-30 17:59:14.997 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:14 compute-0 nova_compute[265391]: 2025-09-30 17:59:14.997 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:14 compute-0 nova_compute[265391]: 2025-09-30 17:59:14.998 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:14 compute-0 nova_compute[265391]: 2025-09-30 17:59:14.998 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:14 compute-0 nova_compute[265391]: 2025-09-30 17:59:14.998 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:14 compute-0 nova_compute[265391]: 2025-09-30 17:59:14.998 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:14 compute-0 nova_compute[265391]: 2025-09-30 17:59:14.999 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cell_worker_thread_pool_size   = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:14 compute-0 nova_compute[265391]: 2025-09-30 17:59:14.999 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:14 compute-0 nova_compute[265391]: 2025-09-30 17:59:14.999 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:14.999 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.000 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.000 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.000 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.000 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.001 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.001 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.001 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.001 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.002 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.002 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.002 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.002 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.003 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] default_green_pool_size        = 1000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.003 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.003 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.003 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] default_thread_pool_size       = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.004 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.004 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.004 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] fatal_deprecations             = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.004 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.005 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.005 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.005 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.005 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] heal_instance_info_cache_interval = -1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.005 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.006 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.006 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.006 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.006 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] injected_network_template      = /usr/lib/python3.12/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.007 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.007 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.007 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.007 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.008 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.008 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.008 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.008 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.009 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.009 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] key                            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.009 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.009 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.010 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.010 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.010 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.010 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.010 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.011 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.011 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.011 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.011 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.011 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.012 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.012 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.012 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.012 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.012 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.013 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.013 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.013 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.013 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.013 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.013 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.014 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.014 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.014 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.014 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.014 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] my_shared_fs_storage_ip        = 192.168.122.100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.015 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.015 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.015 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.015 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.015 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.016 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.016 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.016 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.016 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.016 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] pybasedir                      = /usr/lib/python3.12/site-packages log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.016 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.017 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.017 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.017 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.017 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.018 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.018 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] record                         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.018 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.018 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.018 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.018 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.019 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.019 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.019 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.019 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.019 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.019 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.020 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.020 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.020 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.020 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.020 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.020 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.021 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.021 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.021 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.021 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.021 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.022 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.022 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.022 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.022 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.022 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.022 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.023 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.023 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.023 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.023 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] thread_pool_statistic_period   = -1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.023 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.024 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.024 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.024 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.024 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.024 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.024 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.025 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.025 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.025 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.025 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.025 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.025 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.025 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.026 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.026 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.026 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.026 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.026 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] os_brick.lock_path             = /var/lib/nova/tmp log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.027 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.027 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.027 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.027 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.027 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.027 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.028 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.028 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.028 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.028 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.028 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.029 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.029 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.029 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.029 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.029 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.030 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.030 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.030 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.030 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api.neutron_default_project_id = default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.030 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api.response_validation        = warn log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.030 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.030 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.031 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.031 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.031 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.031 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.031 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.031 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.032 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.032 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.032 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.backend_expiration_time  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.032 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.032 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.032 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.032 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.033 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.033 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.033 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.enforce_fips_mode        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.033 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.033 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.033 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.033 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.033 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.memcache_password        = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.034 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.034 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.034 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.034 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.034 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.034 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.034 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.035 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.memcache_username        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.035 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.035 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.redis_db                 = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.035 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.redis_password           = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.035 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.redis_sentinel_service_name = mymaster log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.035 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.redis_sentinels          = ['localhost:26379'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.035 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.redis_server             = localhost:6379 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.035 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.redis_socket_timeout     = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.036 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.redis_username           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.036 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.036 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.036 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.036 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.036 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.036 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.036 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.037 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.037 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.037 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.037 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.037 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.037 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.037 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.037 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.038 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.038 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.038 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.038 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.038 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.038 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.038 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.038 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.038 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.039 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.039 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.039 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.039 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.039 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.039 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.039 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.040 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.040 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.040 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.040 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.040 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] compute.sharing_providers_max_uuids_per_request = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.040 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.040 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.040 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.041 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.041 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.041 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.041 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] consoleauth.enforce_session_timeout = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.041 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.041 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.041 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.041 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.041 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.042 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.042 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.042 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.042 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.042 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.042 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.042 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.042 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cyborg.retriable_status_codes  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.042 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.043 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.043 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.043 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.043 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.043 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.043 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.043 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.044 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] database.asyncio_connection    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.044 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] database.asyncio_slave_connection = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.044 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.044 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.044 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.044 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.044 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.044 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.044 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.045 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.045 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.045 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.045 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.045 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.045 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.045 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.045 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.045 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.046 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.046 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.046 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.046 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api_database.asyncio_connection = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.046 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api_database.asyncio_slave_connection = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.046 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.046 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.046 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.047 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.047 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.047 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.047 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.047 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.047 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.047 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.047 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.047 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.048 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.048 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.048 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.048 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.048 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.048 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.048 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.048 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.049 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.049 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ephemeral_storage_encryption.default_format = luks log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.049 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.049 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.049 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.049 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.049 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.049 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.049 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.050 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.050 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.050 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.050 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.050 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.050 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.052 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.052 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.052 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.053 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.053 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.053 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.053 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.053 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.053 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.053 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.053 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] glance.retriable_status_codes  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.054 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.054 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.054 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.054 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.054 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.054 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.054 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.054 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.055 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.055 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.055 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] manila.auth_section            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.055 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] manila.auth_type               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.055 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] manila.cafile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.055 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] manila.certfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.055 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] manila.collect_timing          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.056 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] manila.connect_retries         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.056 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] manila.connect_retry_delay     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.056 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] manila.endpoint_override       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.056 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] manila.insecure                = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.056 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] manila.keyfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.056 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] manila.max_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.056 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] manila.min_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.056 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] manila.region_name             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.057 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] manila.retriable_status_codes  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.057 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] manila.service_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.057 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] manila.service_type            = shared-file-system log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.057 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] manila.share_apply_policy_timeout = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.057 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] manila.split_loggers           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.057 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] manila.status_code_retries     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.057 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] manila.status_code_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.057 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] manila.timeout                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.057 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] manila.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.058 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] manila.version                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.058 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.058 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.058 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.058 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.059 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.059 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.059 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.059 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.059 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.060 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.060 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.060 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.060 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.060 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.060 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.060 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ironic.conductor_group         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.060 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.061 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.061 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.061 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.061 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.061 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.061 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.061 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.061 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.062 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ironic.retriable_status_codes  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.062 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.062 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.062 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.062 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ironic.shard                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.062 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.062 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.063 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.063 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.063 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.063 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.063 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.063 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.063 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.063 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.064 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.064 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.064 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.064 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.064 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.064 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.064 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.065 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.065 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.065 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.065 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.065 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.065 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.065 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.066 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.066 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.066 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.066 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.066 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.066 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.066 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.067 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.067 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.067 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.067 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vault.approle_role_id          = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.067 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vault.approle_secret_id        = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.067 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.067 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vault.kv_path                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.067 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.068 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.068 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vault.root_token_id            = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.068 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.068 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vault.timeout                  = 60.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.068 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.068 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.068 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.068 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.069 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.069 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.069 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.069 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.069 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.069 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.069 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.069 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.070 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.070 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] keystone.retriable_status_codes = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.070 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.070 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.070 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.070 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.070 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.070 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.071 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.071 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.071 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.ceph_mount_options     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.071 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.ceph_mount_point_base  = /var/lib/nova/mnt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.071 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.071 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.072 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.072 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.072 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.072 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.072 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.072 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.072 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.072 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.073 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.073 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.073 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.073 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.073 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.073 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.073 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.074 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.074 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.074 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.074 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.074 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.074 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.075 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.075 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.075 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.075 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.075 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.075 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.075 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.076 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.076 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.076 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.076 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.076 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.076 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.076 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.076 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.077 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.077 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.077 2 WARNING oslo_config.cfg [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Sep 30 17:59:15 compute-0 nova_compute[265391]: live_migration_uri is deprecated for removal in favor of two other options that
Sep 30 17:59:15 compute-0 nova_compute[265391]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Sep 30 17:59:15 compute-0 nova_compute[265391]: and ``live_migration_inbound_addr`` respectively.
Sep 30 17:59:15 compute-0 nova_compute[265391]: ).  Its value may be silently ignored in the future.
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.077 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
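(Editorial aside on the deprecation notice above: the log shows live_migration_uri still set to the templated qemu+tls://%s/system value while live_migration_scheme and live_migration_inbound_addr are None. A minimal nova.conf sketch of the replacement the notice points to, assuming TLS-secured migration and reusing the 192.168.122.100 address already logged as migration_inbound_addr on this host; the exact values are illustrative, not taken from the deployed configuration:

    [libvirt]
    # replaces the deprecated live_migration_uri = qemu+tls://%s/system
    live_migration_scheme = tls
    # address the target host advertises for the migration URI
    # (illustrative; matches migration_inbound_addr shown later in this dump)
    live_migration_inbound_addr = 192.168.122.100

With these two options set, the deprecated live_migration_uri line could be dropped and the warning above should no longer be emitted at startup.)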
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.077 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.077 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.078 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.078 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.migration_inbound_addr = 192.168.122.100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.078 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.078 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.078 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.078 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.078 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.079 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.079 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.079 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.079 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.079 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.079 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.079 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.079 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.080 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.080 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.rbd_secret_uuid        = 63d32c6a-fa18-54ed-8711-9a3915cc367b log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.080 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.080 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.080 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.080 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.080 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.080 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.081 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.081 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.081 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.081 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.081 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.081 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.081 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.081 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.082 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.082 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.082 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.082 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.082 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.tb_cache_size          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.082 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.082 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.082 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.083 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.083 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.083 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.083 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.volume_enforce_multipath = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.083 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.083 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.083 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.083 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.083 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.084 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.084 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.084 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.084 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.084 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.084 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.084 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.084 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.085 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.085 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.085 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.085 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.085 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.085 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.085 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.085 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.086 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.086 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.086 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.086 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.086 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.086 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.086 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.086 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] neutron.retriable_status_codes = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.087 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.087 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.087 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.087 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.087 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.087 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.087 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.087 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.088 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.088 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.088 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.088 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] notifications.include_share_mapping = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.088 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] notifications.notification_format = both log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.088 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] notifications.notify_on_state_change = vm_and_task_state log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.088 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.088 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.088 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.089 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.089 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.089 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.089 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.089 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.089 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.089 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.089 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.090 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.090 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.090 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.090 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.090 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.090 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.090 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.090 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.091 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.091 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.091 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.091 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.091 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.091 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.091 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.091 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.091 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.retriable_status_codes = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.092 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.092 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.092 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.092 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.092 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.092 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.092 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.092 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.093 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.093 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.093 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.093 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.093 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.093 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.093 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.093 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.094 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.094 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.094 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.094 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.094 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.094 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.094 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.094 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.094 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.095 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.095 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.095 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] quota.unified_limits_resource_list = ['servers'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.095 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] quota.unified_limits_resource_strategy = require log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.095 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.095 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.095 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.096 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.096 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.096 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.096 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.096 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.096 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.096 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.096 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.097 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.097 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.097 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.097 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.097 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.097 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.097 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.097 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.098 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] filter_scheduler.hypervisor_version_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.098 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.098 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] filter_scheduler.image_props_weight_multiplier = 0.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.098 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] filter_scheduler.image_props_weight_setting = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.098 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.098 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.098 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.098 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.099 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.099 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] filter_scheduler.num_instances_weight_multiplier = 0.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.099 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.099 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.099 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.099 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.099 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.099 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.100 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.100 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.100 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.100 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.100 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.100 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.100 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.101 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.101 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.101 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.101 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.101 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.101 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.101 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.101 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.101 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.102 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.102 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.102 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.102 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.102 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.102 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.102 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.102 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.103 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.103 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.103 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.103 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.103 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.103 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.103 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.104 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] spice.require_secure           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.104 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.104 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.104 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] spice.spice_direct_proxy_base_url = http://127.0.0.1:13002/nova log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.104 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.104 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.104 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.104 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.105 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.105 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.105 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.105 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.105 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.105 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.105 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.105 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.106 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.106 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.106 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.106 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.106 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.106 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.106 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.107 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.107 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.107 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.107 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.107 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.107 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.107 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.107 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.108 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.108 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.108 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.108 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.108 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.108 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.108 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.109 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.109 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.109 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.109 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.109 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.109 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.109 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.109 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.110 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.110 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.110 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.110 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.110 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.110 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.110 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.111 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.111 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.111 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.111 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.111 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.111 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.111 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.112 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.112 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.112 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.112 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.112 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.112 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.112 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.112 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.112 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.113 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.113 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.113 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.113 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.113 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.113 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.113 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.113 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.114 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.114 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.114 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.114 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.114 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.114 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.114 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.114 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.115 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.115 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.115 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.115 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.115 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.115 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.115 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.115 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.hostname = compute-0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.116 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.116 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.116 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.116 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.116 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_splay = 0.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.116 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.processname = nova-compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.116 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.117 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.117 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.117 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.117 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.117 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.117 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.117 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.118 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.118 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.118 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.rabbit_stream_fanout = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.118 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.118 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.rabbit_transient_quorum_queue = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.118 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.118 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.118 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.119 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.119 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.119 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.119 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.119 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_rabbit.use_queue_manager = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.119 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_notifications.driver = ['messagingv2'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.119 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.120 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.120 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.120 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.120 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.120 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.120 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.120 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.121 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.121 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.121 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.121 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.121 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.121 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.121 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.121 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.121 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.endpoint_interface  = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.122 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.122 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.endpoint_region_name = regionOne log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.122 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.endpoint_service_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.122 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.endpoint_service_type = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.122 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.122 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.122 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.122 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.123 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.123 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.123 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.123 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.123 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.123 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.123 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.retriable_status_codes = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.123 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.124 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.124 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.124 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.124 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.124 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.124 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.124 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.125 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.125 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.125 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.125 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.125 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.125 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.125 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.125 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.126 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.126 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.126 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.126 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.126 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vif_plug_linux_bridge_privileged.log_daemon_traceback = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.126 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.126 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.126 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.127 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.127 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.127 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.127 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vif_plug_ovs_privileged.log_daemon_traceback = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.127 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.127 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.127 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.128 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.128 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.128 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.128 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.128 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.128 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.128 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.128 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.129 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] os_vif_ovs.default_qos_type    = linux-noop log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.129 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.129 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.129 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.129 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.129 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.129 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.129 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] privsep_osbrick.capabilities   = [21, 2] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.130 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.130 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.130 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] privsep_osbrick.log_daemon_traceback = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.130 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.130 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.130 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.131 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.131 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.131 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.131 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] nova_sys_admin.log_daemon_traceback = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.131 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.131 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.131 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.132 2 DEBUG oslo_service.backend._eventlet.service [None req-b06211b1-1a4a-431e-92ad-becd46c25687 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.132 2 INFO nova.service [-] Starting compute node (version 32.1.0-0.20250919142712.b99a882.el10)
Sep 30 17:59:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:59:15.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.639 2 DEBUG nova.virt.libvirt.host [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:498
Sep 30 17:59:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v605: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.655 2 DEBUG nova.virt.libvirt.host [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f9f643ba810> _get_new_connection /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:504
Sep 30 17:59:15 compute-0 nova_compute[265391]: libvirt:  error : internal error: could not initialize domain event timer
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.657 2 WARNING nova.virt.libvirt.host [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] URI qemu:///system does not support events: internal error: could not initialize domain event timer: libvirt.libvirtError: internal error: could not initialize domain event timer
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.657 2 DEBUG nova.virt.libvirt.host [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f9f643ba810> _get_new_connection /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:525
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.659 2 DEBUG nova.virt.libvirt.host [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Starting native event thread _init_events /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:484
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.660 2 DEBUG nova.virt.libvirt.host [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:490
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.660 2 INFO nova.utils [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] The default thread pool MainProcess.default is initialized
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.661 2 DEBUG nova.virt.libvirt.host [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Starting connection event dispatch thread _init_events /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:493
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.661 2 INFO nova.virt.libvirt.driver [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Connection event '1' reason 'None'
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.674 2 INFO nova.virt.libvirt.host [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Libvirt host capabilities <capabilities>
Sep 30 17:59:15 compute-0 nova_compute[265391]: 
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <host>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <uuid>889dcbce-7e29-433f-8b4d-bf5603fcc4a5</uuid>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <cpu>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <arch>x86_64</arch>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model>EPYC-Rome-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <vendor>AMD</vendor>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <microcode version='16777317'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <signature family='23' model='49' stepping='0'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <maxphysaddr mode='emulate' bits='40'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature name='x2apic'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature name='tsc-deadline'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature name='osxsave'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature name='hypervisor'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature name='tsc_adjust'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature name='spec-ctrl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature name='stibp'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature name='arch-capabilities'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature name='ssbd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature name='cmp_legacy'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature name='topoext'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature name='virt-ssbd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature name='lbrv'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature name='tsc-scale'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature name='vmcb-clean'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature name='pause-filter'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature name='pfthreshold'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature name='svme-addr-chk'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature name='rdctl-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature name='skip-l1dfl-vmentry'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature name='mds-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature name='pschange-mc-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <pages unit='KiB' size='4'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <pages unit='KiB' size='2048'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <pages unit='KiB' size='1048576'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </cpu>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <power_management>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <suspend_mem/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </power_management>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <iommu support='no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <migration_features>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <live/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <uri_transports>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <uri_transport>tcp</uri_transport>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <uri_transport>rdma</uri_transport>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </uri_transports>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </migration_features>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <topology>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <cells num='1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <cell id='0'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:           <memory unit='KiB'>7864116</memory>
Sep 30 17:59:15 compute-0 nova_compute[265391]:           <pages unit='KiB' size='4'>1966029</pages>
Sep 30 17:59:15 compute-0 nova_compute[265391]:           <pages unit='KiB' size='2048'>0</pages>
Sep 30 17:59:15 compute-0 nova_compute[265391]:           <pages unit='KiB' size='1048576'>0</pages>
Sep 30 17:59:15 compute-0 nova_compute[265391]:           <distances>
Sep 30 17:59:15 compute-0 nova_compute[265391]:             <sibling id='0' value='10'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:           </distances>
Sep 30 17:59:15 compute-0 nova_compute[265391]:           <cpus num='8'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:           </cpus>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         </cell>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </cells>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </topology>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <cache>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </cache>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <secmodel>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model>selinux</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <doi>0</doi>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </secmodel>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <secmodel>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model>dac</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <doi>0</doi>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <baselabel type='kvm'>+107:+107</baselabel>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <baselabel type='qemu'>+107:+107</baselabel>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </secmodel>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   </host>
Sep 30 17:59:15 compute-0 nova_compute[265391]: 
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <guest>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <os_type>hvm</os_type>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <arch name='i686'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <wordsize>32</wordsize>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <domain type='qemu'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <domain type='kvm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </arch>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <features>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <pae/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <nonpae/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <acpi default='on' toggle='yes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <apic default='on' toggle='no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <cpuselection/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <deviceboot/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <disksnapshot default='on' toggle='no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <externalSnapshot/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </features>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   </guest>
Sep 30 17:59:15 compute-0 nova_compute[265391]: 
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <guest>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <os_type>hvm</os_type>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <arch name='x86_64'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <wordsize>64</wordsize>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <domain type='qemu'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <domain type='kvm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </arch>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <features>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <acpi default='on' toggle='yes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <apic default='on' toggle='no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <cpuselection/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <deviceboot/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <disksnapshot default='on' toggle='no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <externalSnapshot/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </features>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   </guest>
Sep 30 17:59:15 compute-0 nova_compute[265391]: 
Sep 30 17:59:15 compute-0 nova_compute[265391]: </capabilities>
Sep 30 17:59:15 compute-0 nova_compute[265391]: 
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.689 2 DEBUG nova.virt.libvirt.host [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:944
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.712 2 DEBUG nova.virt.libvirt.host [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Sep 30 17:59:15 compute-0 nova_compute[265391]: <domainCapabilities>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <path>/usr/libexec/qemu-kvm</path>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <domain>kvm</domain>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <machine>pc-i440fx-rhel7.6.0</machine>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <arch>i686</arch>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <vcpu max='240'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <iothreads supported='yes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <os supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <enum name='firmware'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <loader supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='type'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>rom</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>pflash</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='readonly'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>yes</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>no</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='secure'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>no</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </loader>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   </os>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <cpu>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <mode name='host-passthrough' supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='hostPassthroughMigratable'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>on</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>off</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </mode>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <mode name='maximum' supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='maximumMigratable'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>on</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>off</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </mode>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <mode name='host-model' supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model fallback='forbid'>EPYC-Rome</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <vendor>AMD</vendor>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <maxphysaddr mode='passthrough' limit='40'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='x2apic'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='tsc-deadline'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='hypervisor'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='tsc_adjust'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='spec-ctrl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='stibp'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='arch-capabilities'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='ssbd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='cmp_legacy'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='overflow-recov'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='succor'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='ibrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='amd-ssbd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='virt-ssbd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='lbrv'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='tsc-scale'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='vmcb-clean'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='flushbyasid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='pause-filter'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='pfthreshold'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='svme-addr-chk'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='lfence-always-serializing'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='rdctl-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='mds-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='pschange-mc-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='gds-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='rfds-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='disable' name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </mode>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <mode name='custom' supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Broadwell'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Broadwell-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Broadwell-noTSX'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Broadwell-noTSX-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Broadwell-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Broadwell-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Broadwell-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Broadwell-v4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cascadelake-Server'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cascadelake-Server-noTSX'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cascadelake-Server-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cascadelake-Server-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cascadelake-Server-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cascadelake-Server-v4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cascadelake-Server-v5'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cooperlake'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cooperlake-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cooperlake-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Denverton'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mpx'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Denverton-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mpx'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Denverton-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Denverton-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Dhyana-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Genoa'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amd-psfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='auto-ibrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='no-nested-data-bp'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='null-sel-clr-base'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='stibp-always-on'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Genoa-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amd-psfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='auto-ibrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='no-nested-data-bp'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='null-sel-clr-base'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='stibp-always-on'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Milan'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Milan-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Milan-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amd-psfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='no-nested-data-bp'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='null-sel-clr-base'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='stibp-always-on'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Rome'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Rome-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Rome-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Rome-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-v4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='GraniteRapids'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-tile'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fbsdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrc'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fzrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mcdt-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pbrsb-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='prefetchiti'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='psdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='sbdr-ssdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tsx-ldtrk'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='GraniteRapids-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-tile'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fbsdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrc'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fzrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mcdt-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pbrsb-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='prefetchiti'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='psdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='sbdr-ssdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tsx-ldtrk'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='GraniteRapids-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-tile'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx10'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx10-128'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx10-256'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx10-512'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cldemote'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fbsdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrc'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fzrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mcdt-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdir64b'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdiri'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pbrsb-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='prefetchiti'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='psdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='sbdr-ssdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tsx-ldtrk'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Haswell'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Haswell-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Haswell-noTSX'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Haswell-noTSX-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Haswell-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Haswell-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Haswell-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Haswell-v4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server-noTSX'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server-v4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server-v5'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server-v6'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server-v7'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='IvyBridge'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='IvyBridge-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='IvyBridge-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='IvyBridge-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='KnightsMill'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-4fmaps'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-4vnniw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512er'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512pf'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='KnightsMill-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-4fmaps'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-4vnniw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512er'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512pf'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Opteron_G4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fma4'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xop'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Opteron_G4-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fma4'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xop'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Opteron_G5'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fma4'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tbm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xop'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Opteron_G5-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fma4'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tbm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xop'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='SapphireRapids'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-tile'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrc'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fzrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tsx-ldtrk'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='SapphireRapids-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-tile'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrc'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fzrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tsx-ldtrk'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='SapphireRapids-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-tile'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fbsdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrc'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fzrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='psdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='sbdr-ssdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tsx-ldtrk'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='SapphireRapids-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-tile'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cldemote'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fbsdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrc'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fzrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdir64b'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdiri'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='psdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='sbdr-ssdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tsx-ldtrk'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='SierraForest'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-ne-convert'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cmpccxadd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fbsdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mcdt-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pbrsb-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='psdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='sbdr-ssdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='SierraForest-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-ne-convert'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cmpccxadd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fbsdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mcdt-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pbrsb-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='psdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='sbdr-ssdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Client'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Client-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Client-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Client-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Client-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Client-v4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Server'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Server-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Server-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Server-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Server-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Server-v4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Server-v5'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Snowridge'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cldemote'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='core-capability'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdir64b'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdiri'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mpx'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='split-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Snowridge-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cldemote'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='core-capability'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdir64b'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdiri'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mpx'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='split-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Snowridge-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cldemote'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='core-capability'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdir64b'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdiri'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='split-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Snowridge-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cldemote'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='core-capability'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdir64b'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdiri'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='split-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Snowridge-v4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cldemote'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdir64b'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdiri'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='athlon'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='3dnow'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='3dnowext'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='athlon-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='3dnow'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='3dnowext'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='core2duo'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='core2duo-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='coreduo'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='coreduo-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='n270'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='n270-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='phenom'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='3dnow'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='3dnowext'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='phenom-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='3dnow'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='3dnowext'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </mode>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   </cpu>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <memoryBacking supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <enum name='sourceType'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <value>file</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <value>anonymous</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <value>memfd</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   </memoryBacking>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <devices>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <disk supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='diskDevice'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>disk</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>cdrom</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>floppy</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>lun</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='bus'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>ide</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>fdc</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>scsi</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtio</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>usb</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>sata</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='model'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtio</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtio-transitional</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtio-non-transitional</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </disk>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <graphics supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='type'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>vnc</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>egl-headless</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>dbus</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </graphics>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <video supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='modelType'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>vga</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>cirrus</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtio</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>none</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>bochs</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>ramfb</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </video>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <hostdev supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='mode'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>subsystem</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='startupPolicy'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>default</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>mandatory</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>requisite</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>optional</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='subsysType'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>usb</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>pci</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>scsi</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='capsType'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='pciBackend'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </hostdev>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <rng supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='model'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtio</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtio-transitional</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtio-non-transitional</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='backendModel'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>random</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>egd</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>builtin</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </rng>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <filesystem supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='driverType'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>path</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>handle</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtiofs</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </filesystem>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <tpm supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='model'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>tpm-tis</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>tpm-crb</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='backendModel'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>emulator</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>external</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='backendVersion'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>2.0</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </tpm>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <redirdev supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='bus'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>usb</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </redirdev>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <channel supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='type'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>pty</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>unix</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </channel>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <crypto supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='model'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='type'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>qemu</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='backendModel'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>builtin</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </crypto>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <interface supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='backendType'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>default</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>passt</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </interface>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <panic supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='model'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>isa</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>hyperv</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </panic>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   </devices>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <features>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <gic supported='no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <vmcoreinfo supported='yes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <genid supported='yes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <backingStoreInput supported='yes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <backup supported='yes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <async-teardown supported='yes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <ps2 supported='yes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <sev supported='no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <sgx supported='no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <hyperv supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='features'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>relaxed</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>vapic</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>spinlocks</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>vpindex</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>runtime</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>synic</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>stimer</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>reset</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>vendor_id</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>frequencies</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>reenlightenment</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>tlbflush</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>ipi</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>avic</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>emsr_bitmap</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>xmm_input</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </hyperv>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <launchSecurity supported='no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   </features>
Sep 30 17:59:15 compute-0 nova_compute[265391]: </domainCapabilities>
Sep 30 17:59:15 compute-0 nova_compute[265391]:  _get_domain_capabilities /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1029
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.720 2 DEBUG nova.virt.libvirt.host [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Sep 30 17:59:15 compute-0 nova_compute[265391]: <domainCapabilities>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <path>/usr/libexec/qemu-kvm</path>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <domain>kvm</domain>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <machine>pc-q35-rhel9.6.0</machine>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <arch>i686</arch>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <vcpu max='4096'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <iothreads supported='yes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <os supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <enum name='firmware'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <loader supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='type'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>rom</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>pflash</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='readonly'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>yes</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>no</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='secure'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>no</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </loader>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   </os>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <cpu>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <mode name='host-passthrough' supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='hostPassthroughMigratable'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>on</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>off</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </mode>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <mode name='maximum' supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='maximumMigratable'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>on</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>off</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </mode>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <mode name='host-model' supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model fallback='forbid'>EPYC-Rome</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <vendor>AMD</vendor>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <maxphysaddr mode='passthrough' limit='40'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='x2apic'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='tsc-deadline'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='hypervisor'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='tsc_adjust'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='spec-ctrl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='stibp'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='arch-capabilities'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='ssbd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='cmp_legacy'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='overflow-recov'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='succor'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='ibrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='amd-ssbd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='virt-ssbd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='lbrv'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='tsc-scale'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='vmcb-clean'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='flushbyasid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='pause-filter'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='pfthreshold'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='svme-addr-chk'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='lfence-always-serializing'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='rdctl-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='mds-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='pschange-mc-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='gds-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='rfds-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='disable' name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </mode>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <mode name='custom' supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Broadwell'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Broadwell-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Broadwell-noTSX'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Broadwell-noTSX-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Broadwell-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Broadwell-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Broadwell-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Broadwell-v4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cascadelake-Server'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cascadelake-Server-noTSX'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cascadelake-Server-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cascadelake-Server-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cascadelake-Server-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cascadelake-Server-v4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cascadelake-Server-v5'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cooperlake'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cooperlake-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cooperlake-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Denverton'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mpx'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Denverton-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mpx'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Denverton-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Denverton-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Dhyana-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Genoa'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amd-psfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='auto-ibrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='no-nested-data-bp'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='null-sel-clr-base'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='stibp-always-on'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Genoa-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amd-psfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='auto-ibrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='no-nested-data-bp'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='null-sel-clr-base'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='stibp-always-on'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Milan'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Milan-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Milan-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amd-psfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='no-nested-data-bp'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='null-sel-clr-base'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='stibp-always-on'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Rome'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Rome-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Rome-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Rome-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-v4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='GraniteRapids'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-tile'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fbsdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrc'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fzrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mcdt-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pbrsb-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='prefetchiti'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='psdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='sbdr-ssdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tsx-ldtrk'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='GraniteRapids-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-tile'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fbsdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrc'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fzrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mcdt-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pbrsb-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='prefetchiti'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='psdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='sbdr-ssdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tsx-ldtrk'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='GraniteRapids-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-tile'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx10'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx10-128'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx10-256'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx10-512'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cldemote'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fbsdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrc'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fzrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mcdt-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdir64b'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdiri'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pbrsb-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='prefetchiti'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='psdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='sbdr-ssdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tsx-ldtrk'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Haswell'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Haswell-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Haswell-noTSX'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Haswell-noTSX-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Haswell-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Haswell-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Haswell-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Haswell-v4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server-noTSX'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server-v4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server-v5'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server-v6'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server-v7'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='IvyBridge'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='IvyBridge-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='IvyBridge-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='IvyBridge-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='KnightsMill'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-4fmaps'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-4vnniw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512er'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512pf'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='KnightsMill-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-4fmaps'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-4vnniw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512er'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512pf'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Opteron_G4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fma4'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xop'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Opteron_G4-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fma4'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xop'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Opteron_G5'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fma4'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tbm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xop'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Opteron_G5-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fma4'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tbm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xop'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='SapphireRapids'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-tile'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrc'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fzrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tsx-ldtrk'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='SapphireRapids-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-tile'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrc'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fzrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tsx-ldtrk'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='SapphireRapids-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-tile'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fbsdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrc'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fzrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='psdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='sbdr-ssdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tsx-ldtrk'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='SapphireRapids-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-tile'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cldemote'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fbsdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrc'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fzrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdir64b'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdiri'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='psdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='sbdr-ssdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tsx-ldtrk'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='SierraForest'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-ne-convert'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cmpccxadd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fbsdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mcdt-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pbrsb-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='psdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='sbdr-ssdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='SierraForest-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-ne-convert'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cmpccxadd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fbsdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mcdt-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pbrsb-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='psdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='sbdr-ssdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Client'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Client-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Client-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Client-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Client-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Client-v4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Server'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Server-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Server-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Server-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Server-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Server-v4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Server-v5'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Snowridge'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cldemote'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='core-capability'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdir64b'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdiri'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mpx'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='split-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Snowridge-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cldemote'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='core-capability'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdir64b'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdiri'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mpx'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='split-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Snowridge-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cldemote'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='core-capability'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdir64b'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdiri'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='split-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Snowridge-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cldemote'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='core-capability'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdir64b'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdiri'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='split-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Snowridge-v4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cldemote'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdir64b'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdiri'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='athlon'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='3dnow'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='3dnowext'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='athlon-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='3dnow'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='3dnowext'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='core2duo'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='core2duo-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='coreduo'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='coreduo-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='n270'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='n270-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='phenom'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='3dnow'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='3dnowext'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='phenom-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='3dnow'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='3dnowext'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </mode>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   </cpu>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <memoryBacking supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <enum name='sourceType'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <value>file</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <value>anonymous</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <value>memfd</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   </memoryBacking>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <devices>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <disk supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='diskDevice'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>disk</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>cdrom</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>floppy</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>lun</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='bus'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>fdc</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>scsi</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtio</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>usb</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>sata</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='model'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtio</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtio-transitional</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtio-non-transitional</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </disk>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <graphics supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='type'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>vnc</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>egl-headless</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>dbus</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </graphics>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <video supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='modelType'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>vga</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>cirrus</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtio</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>none</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>bochs</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>ramfb</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </video>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <hostdev supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='mode'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>subsystem</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='startupPolicy'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>default</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>mandatory</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>requisite</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>optional</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='subsysType'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>usb</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>pci</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>scsi</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='capsType'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='pciBackend'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </hostdev>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <rng supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='model'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtio</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtio-transitional</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtio-non-transitional</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='backendModel'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>random</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>egd</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>builtin</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </rng>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <filesystem supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='driverType'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>path</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>handle</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtiofs</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </filesystem>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <tpm supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='model'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>tpm-tis</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>tpm-crb</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='backendModel'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>emulator</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>external</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='backendVersion'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>2.0</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </tpm>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <redirdev supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='bus'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>usb</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </redirdev>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <channel supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='type'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>pty</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>unix</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </channel>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <crypto supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='model'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='type'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>qemu</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='backendModel'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>builtin</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </crypto>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <interface supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='backendType'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>default</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>passt</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </interface>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <panic supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='model'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>isa</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>hyperv</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </panic>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   </devices>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <features>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <gic supported='no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <vmcoreinfo supported='yes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <genid supported='yes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <backingStoreInput supported='yes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <backup supported='yes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <async-teardown supported='yes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <ps2 supported='yes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <sev supported='no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <sgx supported='no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <hyperv supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='features'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>relaxed</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>vapic</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>spinlocks</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>vpindex</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>runtime</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>synic</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>stimer</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>reset</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>vendor_id</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>frequencies</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>reenlightenment</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>tlbflush</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>ipi</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>avic</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>emsr_bitmap</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>xmm_input</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </hyperv>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <launchSecurity supported='no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   </features>
Sep 30 17:59:15 compute-0 nova_compute[265391]: </domainCapabilities>
Sep 30 17:59:15 compute-0 nova_compute[265391]:  _get_domain_capabilities /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1029
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.755 2 DEBUG nova.virt.libvirt.host [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:944
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.759 2 DEBUG nova.virt.libvirt.host [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Sep 30 17:59:15 compute-0 nova_compute[265391]: <domainCapabilities>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <path>/usr/libexec/qemu-kvm</path>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <domain>kvm</domain>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <machine>pc-i440fx-rhel7.6.0</machine>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <arch>x86_64</arch>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <vcpu max='240'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <iothreads supported='yes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <os supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <enum name='firmware'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <loader supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='type'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>rom</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>pflash</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='readonly'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>yes</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>no</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='secure'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>no</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </loader>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   </os>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <cpu>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <mode name='host-passthrough' supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='hostPassthroughMigratable'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>on</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>off</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </mode>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <mode name='maximum' supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='maximumMigratable'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>on</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>off</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </mode>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <mode name='host-model' supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model fallback='forbid'>EPYC-Rome</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <vendor>AMD</vendor>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <maxphysaddr mode='passthrough' limit='40'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='x2apic'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='tsc-deadline'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='hypervisor'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='tsc_adjust'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='spec-ctrl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='stibp'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='arch-capabilities'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='ssbd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='cmp_legacy'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='overflow-recov'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='succor'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='ibrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='amd-ssbd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='virt-ssbd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='lbrv'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='tsc-scale'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='vmcb-clean'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='flushbyasid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='pause-filter'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='pfthreshold'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='svme-addr-chk'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='lfence-always-serializing'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='rdctl-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='mds-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='pschange-mc-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='gds-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='rfds-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='disable' name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </mode>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <mode name='custom' supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Broadwell'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Broadwell-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Broadwell-noTSX'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Broadwell-noTSX-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Broadwell-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Broadwell-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Broadwell-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Broadwell-v4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cascadelake-Server'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cascadelake-Server-noTSX'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cascadelake-Server-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cascadelake-Server-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cascadelake-Server-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cascadelake-Server-v4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cascadelake-Server-v5'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cooperlake'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cooperlake-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cooperlake-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Denverton'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mpx'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Denverton-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mpx'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Denverton-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Denverton-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Dhyana-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Genoa'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amd-psfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='auto-ibrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='no-nested-data-bp'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='null-sel-clr-base'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='stibp-always-on'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Genoa-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amd-psfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='auto-ibrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='no-nested-data-bp'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='null-sel-clr-base'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='stibp-always-on'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Milan'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Milan-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Milan-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amd-psfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='no-nested-data-bp'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='null-sel-clr-base'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='stibp-always-on'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Rome'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Rome-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Rome-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Rome-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-v4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='GraniteRapids'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-tile'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fbsdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrc'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fzrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mcdt-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pbrsb-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='prefetchiti'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='psdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='sbdr-ssdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tsx-ldtrk'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='GraniteRapids-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-tile'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fbsdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrc'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fzrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mcdt-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pbrsb-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='prefetchiti'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='psdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='sbdr-ssdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tsx-ldtrk'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='GraniteRapids-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-tile'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx10'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx10-128'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx10-256'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx10-512'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cldemote'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fbsdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrc'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fzrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mcdt-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdir64b'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdiri'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pbrsb-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='prefetchiti'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='psdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='sbdr-ssdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tsx-ldtrk'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Haswell'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Haswell-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Haswell-noTSX'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Haswell-noTSX-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Haswell-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Haswell-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Haswell-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Haswell-v4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server-noTSX'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server-v4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server-v5'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server-v6'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server-v7'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='IvyBridge'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='IvyBridge-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='IvyBridge-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='IvyBridge-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='KnightsMill'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-4fmaps'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-4vnniw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512er'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512pf'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='KnightsMill-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-4fmaps'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-4vnniw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512er'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512pf'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Opteron_G4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fma4'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xop'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Opteron_G4-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fma4'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xop'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Opteron_G5'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fma4'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tbm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xop'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Opteron_G5-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fma4'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tbm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xop'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='SapphireRapids'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-tile'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrc'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fzrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tsx-ldtrk'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='SapphireRapids-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-tile'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrc'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fzrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tsx-ldtrk'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='SapphireRapids-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-tile'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fbsdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrc'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fzrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='psdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='sbdr-ssdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tsx-ldtrk'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='SapphireRapids-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-tile'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cldemote'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fbsdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrc'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fzrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdir64b'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdiri'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='psdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='sbdr-ssdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tsx-ldtrk'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='SierraForest'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-ne-convert'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cmpccxadd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fbsdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mcdt-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pbrsb-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='psdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='sbdr-ssdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='SierraForest-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-ne-convert'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cmpccxadd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fbsdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mcdt-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pbrsb-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='psdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='sbdr-ssdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Client'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Client-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Client-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Client-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Client-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Client-v4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Server'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Server-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Server-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Server-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Server-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Server-v4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Server-v5'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Snowridge'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cldemote'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='core-capability'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdir64b'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdiri'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mpx'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='split-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Snowridge-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cldemote'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='core-capability'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdir64b'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdiri'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mpx'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='split-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Snowridge-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cldemote'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='core-capability'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdir64b'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdiri'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='split-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Snowridge-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cldemote'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='core-capability'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdir64b'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdiri'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='split-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Snowridge-v4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cldemote'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdir64b'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdiri'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='athlon'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='3dnow'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='3dnowext'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='athlon-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='3dnow'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='3dnowext'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='core2duo'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='core2duo-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='coreduo'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='coreduo-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='n270'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='n270-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='phenom'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='3dnow'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='3dnowext'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='phenom-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='3dnow'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='3dnowext'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </mode>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   </cpu>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <memoryBacking supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <enum name='sourceType'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <value>file</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <value>anonymous</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <value>memfd</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   </memoryBacking>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <devices>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <disk supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='diskDevice'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>disk</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>cdrom</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>floppy</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>lun</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='bus'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>ide</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>fdc</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>scsi</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtio</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>usb</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>sata</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='model'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtio</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtio-transitional</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtio-non-transitional</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </disk>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <graphics supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='type'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>vnc</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>egl-headless</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>dbus</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </graphics>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <video supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='modelType'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>vga</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>cirrus</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtio</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>none</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>bochs</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>ramfb</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </video>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <hostdev supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='mode'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>subsystem</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='startupPolicy'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>default</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>mandatory</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>requisite</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>optional</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='subsysType'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>usb</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>pci</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>scsi</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='capsType'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='pciBackend'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </hostdev>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <rng supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='model'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtio</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtio-transitional</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtio-non-transitional</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='backendModel'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>random</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>egd</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>builtin</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </rng>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <filesystem supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='driverType'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>path</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>handle</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtiofs</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </filesystem>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <tpm supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='model'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>tpm-tis</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>tpm-crb</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='backendModel'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>emulator</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>external</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='backendVersion'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>2.0</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </tpm>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <redirdev supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='bus'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>usb</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </redirdev>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <channel supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='type'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>pty</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>unix</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </channel>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <crypto supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='model'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='type'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>qemu</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='backendModel'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>builtin</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </crypto>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <interface supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='backendType'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>default</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>passt</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </interface>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <panic supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='model'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>isa</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>hyperv</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </panic>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   </devices>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <features>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <gic supported='no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <vmcoreinfo supported='yes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <genid supported='yes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <backingStoreInput supported='yes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <backup supported='yes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <async-teardown supported='yes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <ps2 supported='yes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <sev supported='no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <sgx supported='no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <hyperv supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='features'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>relaxed</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>vapic</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>spinlocks</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>vpindex</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>runtime</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>synic</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>stimer</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>reset</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>vendor_id</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>frequencies</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>reenlightenment</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>tlbflush</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>ipi</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>avic</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>emsr_bitmap</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>xmm_input</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </hyperv>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <launchSecurity supported='no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   </features>
Sep 30 17:59:15 compute-0 nova_compute[265391]: </domainCapabilities>
Sep 30 17:59:15 compute-0 nova_compute[265391]:  _get_domain_capabilities /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1029
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.819 2 DEBUG nova.virt.libvirt.host [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Sep 30 17:59:15 compute-0 nova_compute[265391]: <domainCapabilities>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <path>/usr/libexec/qemu-kvm</path>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <domain>kvm</domain>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <machine>pc-q35-rhel9.6.0</machine>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <arch>x86_64</arch>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <vcpu max='4096'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <iothreads supported='yes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <os supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <enum name='firmware'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <value>efi</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <loader supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='type'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>rom</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>pflash</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='readonly'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>yes</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>no</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='secure'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>yes</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>no</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </loader>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   </os>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <cpu>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <mode name='host-passthrough' supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='hostPassthroughMigratable'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>on</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>off</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </mode>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <mode name='maximum' supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='maximumMigratable'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>on</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>off</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </mode>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <mode name='host-model' supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model fallback='forbid'>EPYC-Rome</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <vendor>AMD</vendor>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <maxphysaddr mode='passthrough' limit='40'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='x2apic'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='tsc-deadline'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='hypervisor'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='tsc_adjust'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='spec-ctrl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='stibp'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='arch-capabilities'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='ssbd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='cmp_legacy'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='overflow-recov'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='succor'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='ibrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='amd-ssbd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='virt-ssbd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='lbrv'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='tsc-scale'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='vmcb-clean'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='flushbyasid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='pause-filter'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='pfthreshold'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='svme-addr-chk'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='lfence-always-serializing'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='rdctl-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='mds-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='pschange-mc-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='gds-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='require' name='rfds-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <feature policy='disable' name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </mode>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <mode name='custom' supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Broadwell'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Broadwell-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Broadwell-noTSX'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Broadwell-noTSX-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Broadwell-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Broadwell-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Broadwell-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Broadwell-v4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cascadelake-Server'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cascadelake-Server-noTSX'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cascadelake-Server-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cascadelake-Server-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cascadelake-Server-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cascadelake-Server-v4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cascadelake-Server-v5'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cooperlake'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cooperlake-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Cooperlake-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Denverton'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mpx'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Denverton-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mpx'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Denverton-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Denverton-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Dhyana-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Genoa'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amd-psfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='auto-ibrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='no-nested-data-bp'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='null-sel-clr-base'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='stibp-always-on'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Genoa-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amd-psfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='auto-ibrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='no-nested-data-bp'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='null-sel-clr-base'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='stibp-always-on'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Milan'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Milan-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Milan-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amd-psfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='no-nested-data-bp'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='null-sel-clr-base'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='stibp-always-on'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Rome'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Rome-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Rome-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-Rome-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='EPYC-v4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='GraniteRapids'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-tile'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fbsdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrc'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fzrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mcdt-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pbrsb-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='prefetchiti'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='psdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='sbdr-ssdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tsx-ldtrk'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='GraniteRapids-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-tile'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fbsdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrc'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fzrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mcdt-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pbrsb-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='prefetchiti'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='psdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='sbdr-ssdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tsx-ldtrk'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='GraniteRapids-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-tile'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx10'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx10-128'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx10-256'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx10-512'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cldemote'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fbsdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrc'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fzrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mcdt-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdir64b'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdiri'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pbrsb-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='prefetchiti'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='psdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='sbdr-ssdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tsx-ldtrk'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Haswell'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Haswell-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Haswell-noTSX'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Haswell-noTSX-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Haswell-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Haswell-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Haswell-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Haswell-v4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server-noTSX'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server-v4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server-v5'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server-v6'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Icelake-Server-v7'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='IvyBridge'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='IvyBridge-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='IvyBridge-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='IvyBridge-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='KnightsMill'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-4fmaps'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-4vnniw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512er'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512pf'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='KnightsMill-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-4fmaps'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-4vnniw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512er'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512pf'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Opteron_G4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fma4'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xop'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Opteron_G4-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fma4'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xop'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Opteron_G5'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fma4'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tbm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xop'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Opteron_G5-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fma4'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tbm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xop'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='SapphireRapids'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-tile'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrc'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fzrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tsx-ldtrk'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='SapphireRapids-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-tile'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrc'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fzrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tsx-ldtrk'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='SapphireRapids-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-tile'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fbsdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrc'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fzrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='psdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='sbdr-ssdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tsx-ldtrk'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='SapphireRapids-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='amx-tile'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-bf16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-fp16'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512-vpopcntdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bitalg'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vbmi2'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cldemote'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fbsdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrc'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fzrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='la57'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdir64b'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdiri'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='psdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='sbdr-ssdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='taa-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='tsx-ldtrk'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xfd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='SierraForest'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-ne-convert'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cmpccxadd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fbsdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mcdt-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pbrsb-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='psdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='sbdr-ssdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='SierraForest-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-ifma'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-ne-convert'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx-vnni-int8'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='bus-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cmpccxadd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fbsdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='fsrs'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ibrs-all'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mcdt-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pbrsb-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='psdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='sbdr-ssdp-no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='serialize'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vaes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='vpclmulqdq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Client'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Client-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Client-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Client-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Client-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Client-v4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Server'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Server-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Server-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Sep 30 17:59:15 compute-0 systemd[1]: Started libvirt nodedev daemon.
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Server-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='hle'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='rtm'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Server-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Server-v4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Skylake-Server-v5'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512bw'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512cd'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512dq'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512f'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='avx512vl'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='invpcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pcid'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='pku'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Snowridge'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cldemote'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='core-capability'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdir64b'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdiri'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mpx'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='split-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Snowridge-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cldemote'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='core-capability'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdir64b'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdiri'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='mpx'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='split-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Snowridge-v2'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cldemote'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='core-capability'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdir64b'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdiri'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='split-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Snowridge-v3'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cldemote'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='core-capability'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdir64b'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdiri'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='split-lock-detect'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='Snowridge-v4'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='cldemote'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='erms'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='gfni'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdir64b'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='movdiri'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='xsaves'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='athlon'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='3dnow'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='3dnowext'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='athlon-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='3dnow'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='3dnowext'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='core2duo'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='core2duo-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='coreduo'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='coreduo-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='n270'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='n270-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='ss'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='phenom'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='3dnow'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='3dnowext'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <blockers model='phenom-v1'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='3dnow'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <feature name='3dnowext'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </blockers>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </mode>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   </cpu>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <memoryBacking supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <enum name='sourceType'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <value>file</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <value>anonymous</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <value>memfd</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   </memoryBacking>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <devices>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <disk supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='diskDevice'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>disk</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>cdrom</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>floppy</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>lun</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='bus'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>fdc</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>scsi</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtio</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>usb</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>sata</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='model'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtio</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtio-transitional</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtio-non-transitional</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </disk>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <graphics supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='type'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>vnc</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>egl-headless</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>dbus</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </graphics>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <video supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='modelType'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>vga</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>cirrus</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtio</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>none</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>bochs</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>ramfb</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </video>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <hostdev supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='mode'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>subsystem</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='startupPolicy'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>default</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>mandatory</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>requisite</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>optional</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='subsysType'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>usb</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>pci</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>scsi</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='capsType'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='pciBackend'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </hostdev>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <rng supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='model'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtio</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtio-transitional</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtio-non-transitional</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='backendModel'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>random</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>egd</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>builtin</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </rng>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <filesystem supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='driverType'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>path</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>handle</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>virtiofs</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </filesystem>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <tpm supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='model'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>tpm-tis</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>tpm-crb</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='backendModel'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>emulator</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>external</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='backendVersion'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>2.0</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </tpm>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <redirdev supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='bus'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>usb</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </redirdev>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <channel supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='type'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>pty</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>unix</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </channel>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <crypto supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='model'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='type'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>qemu</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='backendModel'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>builtin</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </crypto>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <interface supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='backendType'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>default</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>passt</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </interface>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <panic supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='model'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>isa</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>hyperv</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </panic>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   </devices>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   <features>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <gic supported='no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <vmcoreinfo supported='yes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <genid supported='yes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <backingStoreInput supported='yes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <backup supported='yes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <async-teardown supported='yes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <ps2 supported='yes'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <sev supported='no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <sgx supported='no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <hyperv supported='yes'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       <enum name='features'>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>relaxed</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>vapic</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>spinlocks</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>vpindex</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>runtime</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>synic</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>stimer</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>reset</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>vendor_id</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>frequencies</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>reenlightenment</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>tlbflush</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>ipi</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>avic</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>emsr_bitmap</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:         <value>xmm_input</value>
Sep 30 17:59:15 compute-0 nova_compute[265391]:       </enum>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     </hyperv>
Sep 30 17:59:15 compute-0 nova_compute[265391]:     <launchSecurity supported='no'/>
Sep 30 17:59:15 compute-0 nova_compute[265391]:   </features>
Sep 30 17:59:15 compute-0 nova_compute[265391]: </domainCapabilities>
Sep 30 17:59:15 compute-0 nova_compute[265391]:  _get_domain_capabilities /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1029
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.893 2 DEBUG nova.virt.libvirt.host [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1877
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.894 2 DEBUG nova.virt.libvirt.host [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1877
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.894 2 DEBUG nova.virt.libvirt.host [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1877
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.894 2 INFO nova.virt.libvirt.host [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Secure Boot support detected
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.901 2 INFO nova.virt.libvirt.driver [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Sep 30 17:59:15 compute-0 nova_compute[265391]: 2025-09-30 17:59:15.902 2 INFO nova.virt.libvirt.driver [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Sep 30 17:59:16 compute-0 nova_compute[265391]: 2025-09-30 17:59:16.078 2 DEBUG nova.virt.libvirt.driver [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1177
Sep 30 17:59:16 compute-0 nova_compute[265391]: 2025-09-30 17:59:16.169 2 WARNING nova.virt.libvirt.driver [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Sep 30 17:59:16 compute-0 nova_compute[265391]: 2025-09-30 17:59:16.169 2 DEBUG nova.virt.libvirt.volume.mount [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.12/site-packages/nova/virt/libvirt/volume/mount.py:130
Sep 30 17:59:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:16 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2140000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:16 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2130001ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:16 compute-0 nova_compute[265391]: 2025-09-30 17:59:16.591 2 INFO nova.virt.node [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Determined node identity 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 from /var/lib/nova/compute_id
Sep 30 17:59:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:59:16.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:16 compute-0 ceph-mon[73755]: pgmap v605: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:59:17.070Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:59:17 compute-0 nova_compute[265391]: 2025-09-30 17:59:17.101 2 WARNING nova.compute.manager [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Compute nodes ['5403d2fc-3ca9-4ee2-946b-a15032cca0c2'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Sep 30 17:59:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:59:17.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v606: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:17 compute-0 sshd-session[265724]: Accepted publickey for zuul from 192.168.122.30 port 42586 ssh2: ECDSA SHA256:COqAmbdBUMx+UyDa5XAop4Bi6qvwvYhJQ16rCseuHkQ
Sep 30 17:59:17 compute-0 systemd-logind[811]: New session 58 of user zuul.
Sep 30 17:59:17 compute-0 systemd[1]: Started Session 58 of User zuul.
Sep 30 17:59:18 compute-0 sshd-session[265724]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 17:59:18 compute-0 sudo[265727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:59:18 compute-0 sudo[265727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:59:18 compute-0 sudo[265727]: pam_unix(sudo:session): session closed for user root
Sep 30 17:59:18 compute-0 nova_compute[265391]: 2025-09-30 17:59:18.117 2 INFO nova.compute.manager [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Sep 30 17:59:18 compute-0 sudo[265774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 17:59:18 compute-0 sudo[265774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:59:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:18 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:18 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2134001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:18 compute-0 sudo[265774]: pam_unix(sudo:session): session closed for user root
Sep 30 17:59:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:59:18.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:59:18] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:59:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:59:18] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 17:59:18 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:59:18 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:59:18 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 17:59:18 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:59:18 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 17:59:18 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:59:18 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 17:59:18 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:59:18 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 17:59:18 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:59:18 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 17:59:18 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:59:18 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 17:59:18 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:59:18 compute-0 ceph-mon[73755]: pgmap v606: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:18 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:59:18 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 17:59:18 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:59:18 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:59:18 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 17:59:18 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 17:59:18 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 17:59:19 compute-0 sudo[265959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:59:19 compute-0 sudo[265959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:59:19 compute-0 sudo[265959]: pam_unix(sudo:session): session closed for user root
Sep 30 17:59:19 compute-0 python3.9[265958]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 17:59:19 compute-0 sudo[265984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 17:59:19 compute-0 sudo[265984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:59:19 compute-0 nova_compute[265391]: 2025-09-30 17:59:19.142 2 WARNING nova.compute.manager [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Sep 30 17:59:19 compute-0 nova_compute[265391]: 2025-09-30 17:59:19.143 2 DEBUG oslo_concurrency.lockutils [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 17:59:19 compute-0 nova_compute[265391]: 2025-09-30 17:59:19.143 2 DEBUG oslo_concurrency.lockutils [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 17:59:19 compute-0 nova_compute[265391]: 2025-09-30 17:59:19.143 2 DEBUG oslo_concurrency.lockutils [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 17:59:19 compute-0 nova_compute[265391]: 2025-09-30 17:59:19.143 2 DEBUG nova.compute.resource_tracker [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 17:59:19 compute-0 nova_compute[265391]: 2025-09-30 17:59:19.143 2 DEBUG oslo_concurrency.processutils [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 17:59:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:59:19.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:19 compute-0 podman[266097]: 2025-09-30 17:59:19.509080856 +0000 UTC m=+0.037429593 container create b0042c61e1951e920fa0e8c2cc68af7a04eda22d8bdc7d43d05d1923fa7ff377 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Sep 30 17:59:19 compute-0 systemd[1]: Started libpod-conmon-b0042c61e1951e920fa0e8c2cc68af7a04eda22d8bdc7d43d05d1923fa7ff377.scope.
Sep 30 17:59:19 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:59:19 compute-0 podman[266097]: 2025-09-30 17:59:19.585711126 +0000 UTC m=+0.114059883 container init b0042c61e1951e920fa0e8c2cc68af7a04eda22d8bdc7d43d05d1923fa7ff377 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_keldysh, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:59:19 compute-0 podman[266097]: 2025-09-30 17:59:19.491830408 +0000 UTC m=+0.020179165 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:59:19 compute-0 podman[266097]: 2025-09-30 17:59:19.59394076 +0000 UTC m=+0.122289497 container start b0042c61e1951e920fa0e8c2cc68af7a04eda22d8bdc7d43d05d1923fa7ff377 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_keldysh, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:59:19 compute-0 podman[266097]: 2025-09-30 17:59:19.598578301 +0000 UTC m=+0.126927058 container attach b0042c61e1951e920fa0e8c2cc68af7a04eda22d8bdc7d43d05d1923fa7ff377 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Sep 30 17:59:19 compute-0 lucid_keldysh[266113]: 167 167
Sep 30 17:59:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 17:59:19 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2052937154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 17:59:19 compute-0 systemd[1]: libpod-b0042c61e1951e920fa0e8c2cc68af7a04eda22d8bdc7d43d05d1923fa7ff377.scope: Deactivated successfully.
Sep 30 17:59:19 compute-0 podman[266097]: 2025-09-30 17:59:19.600822649 +0000 UTC m=+0.129171386 container died b0042c61e1951e920fa0e8c2cc68af7a04eda22d8bdc7d43d05d1923fa7ff377 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True)
Sep 30 17:59:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8d230108d4f63e64b70a786f92eec8a75396cae17e7af797b7249dcb2d7e9ae-merged.mount: Deactivated successfully.
Sep 30 17:59:19 compute-0 podman[266097]: 2025-09-30 17:59:19.639204716 +0000 UTC m=+0.167553453 container remove b0042c61e1951e920fa0e8c2cc68af7a04eda22d8bdc7d43d05d1923fa7ff377 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_keldysh, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Sep 30 17:59:19 compute-0 nova_compute[265391]: 2025-09-30 17:59:19.639 2 DEBUG oslo_concurrency.processutils [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 17:59:19 compute-0 systemd[1]: libpod-conmon-b0042c61e1951e920fa0e8c2cc68af7a04eda22d8bdc7d43d05d1923fa7ff377.scope: Deactivated successfully.
Sep 30 17:59:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v607: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:59:19 compute-0 podman[266141]: 2025-09-30 17:59:19.811498221 +0000 UTC m=+0.044727443 container create e005450a58167f11e342cf432ae24fa9cbb8e88441f2a7c06d6a17466beaa231 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Sep 30 17:59:19 compute-0 nova_compute[265391]: 2025-09-30 17:59:19.844 2 WARNING nova.virt.libvirt.driver [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 17:59:19 compute-0 nova_compute[265391]: 2025-09-30 17:59:19.846 2 DEBUG oslo_concurrency.processutils [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 17:59:19 compute-0 systemd[1]: Started libpod-conmon-e005450a58167f11e342cf432ae24fa9cbb8e88441f2a7c06d6a17466beaa231.scope.
Sep 30 17:59:19 compute-0 nova_compute[265391]: 2025-09-30 17:59:19.869 2 DEBUG oslo_concurrency.processutils [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.024s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 17:59:19 compute-0 nova_compute[265391]: 2025-09-30 17:59:19.870 2 DEBUG nova.compute.resource_tracker [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4875MB free_disk=39.9921875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 17:59:19 compute-0 nova_compute[265391]: 2025-09-30 17:59:19.870 2 DEBUG oslo_concurrency.lockutils [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 17:59:19 compute-0 nova_compute[265391]: 2025-09-30 17:59:19.871 2 DEBUG oslo_concurrency.lockutils [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 17:59:19 compute-0 podman[266141]: 2025-09-30 17:59:19.793940765 +0000 UTC m=+0.027170007 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:59:19 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:59:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/972fa226634cc0f67f19c143298a6660bf34a0954b704325d8d18db604be271c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:59:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/972fa226634cc0f67f19c143298a6660bf34a0954b704325d8d18db604be271c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:59:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/972fa226634cc0f67f19c143298a6660bf34a0954b704325d8d18db604be271c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:59:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/972fa226634cc0f67f19c143298a6660bf34a0954b704325d8d18db604be271c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:59:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/972fa226634cc0f67f19c143298a6660bf34a0954b704325d8d18db604be271c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 17:59:19 compute-0 podman[266141]: 2025-09-30 17:59:19.909041145 +0000 UTC m=+0.142270377 container init e005450a58167f11e342cf432ae24fa9cbb8e88441f2a7c06d6a17466beaa231 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_moore, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Sep 30 17:59:19 compute-0 podman[266141]: 2025-09-30 17:59:19.919555318 +0000 UTC m=+0.152784530 container start e005450a58167f11e342cf432ae24fa9cbb8e88441f2a7c06d6a17466beaa231 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_moore, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Sep 30 17:59:19 compute-0 podman[266141]: 2025-09-30 17:59:19.923135201 +0000 UTC m=+0.156364493 container attach e005450a58167f11e342cf432ae24fa9cbb8e88441f2a7c06d6a17466beaa231 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_moore, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:59:19 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2052937154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 17:59:19 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Sep 30 17:59:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:59:19.977503) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 17:59:19 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Sep 30 17:59:19 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255159977542, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 1685, "num_deletes": 251, "total_data_size": 3213854, "memory_usage": 3273728, "flush_reason": "Manual Compaction"}
Sep 30 17:59:19 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Sep 30 17:59:19 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255159992922, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 3132539, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17866, "largest_seqno": 19550, "table_properties": {"data_size": 3124865, "index_size": 4617, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15708, "raw_average_key_size": 19, "raw_value_size": 3109490, "raw_average_value_size": 3931, "num_data_blocks": 207, "num_entries": 791, "num_filter_entries": 791, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759254986, "oldest_key_time": 1759254986, "file_creation_time": 1759255159, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Sep 30 17:59:19 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 15485 microseconds, and 8110 cpu microseconds.
Sep 30 17:59:19 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 17:59:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:59:19.992979) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 3132539 bytes OK
Sep 30 17:59:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:59:19.993013) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Sep 30 17:59:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:59:19.994982) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Sep 30 17:59:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:59:19.995044) EVENT_LOG_v1 {"time_micros": 1759255159995032, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 17:59:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:59:19.995075) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 17:59:19 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 3206740, prev total WAL file size 3206740, number of live WAL files 2.
Sep 30 17:59:19 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 17:59:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:59:19.996380) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Sep 30 17:59:19 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 17:59:19 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(3059KB)], [41(10MB)]
Sep 30 17:59:19 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255159996448, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 13662979, "oldest_snapshot_seqno": -1}
Sep 30 17:59:20 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4770 keys, 11580499 bytes, temperature: kUnknown
Sep 30 17:59:20 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255160064252, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 11580499, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11547375, "index_size": 20086, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11973, "raw_key_size": 121122, "raw_average_key_size": 25, "raw_value_size": 11459587, "raw_average_value_size": 2402, "num_data_blocks": 831, "num_entries": 4770, "num_filter_entries": 4770, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759255159, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Sep 30 17:59:20 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 17:59:20 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:59:20.067266) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 11580499 bytes
Sep 30 17:59:20 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:59:20.069083) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 201.2 rd, 170.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 10.0 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(8.1) write-amplify(3.7) OK, records in: 5288, records dropped: 518 output_compression: NoCompression
Sep 30 17:59:20 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:59:20.069104) EVENT_LOG_v1 {"time_micros": 1759255160069096, "job": 20, "event": "compaction_finished", "compaction_time_micros": 67905, "compaction_time_cpu_micros": 22604, "output_level": 6, "num_output_files": 1, "total_output_size": 11580499, "num_input_records": 5288, "num_output_records": 4770, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 17:59:20 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 17:59:20 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255160069808, "job": 20, "event": "table_file_deletion", "file_number": 43}
Sep 30 17:59:20 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 17:59:20 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255160071527, "job": 20, "event": "table_file_deletion", "file_number": 41}
Sep 30 17:59:20 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:59:19.996260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:59:20 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:59:20.071660) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:59:20 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:59:20.071668) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:59:20 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:59:20.071671) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:59:20 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:59:20.071674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:59:20 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-17:59:20.071676) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 17:59:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:20 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2140000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:20 compute-0 blissful_moore[266158]: --> passed data devices: 0 physical, 1 LVM
Sep 30 17:59:20 compute-0 blissful_moore[266158]: --> All data devices are unavailable
Sep 30 17:59:20 compute-0 systemd[1]: libpod-e005450a58167f11e342cf432ae24fa9cbb8e88441f2a7c06d6a17466beaa231.scope: Deactivated successfully.
Sep 30 17:59:20 compute-0 podman[266141]: 2025-09-30 17:59:20.30075625 +0000 UTC m=+0.533985482 container died e005450a58167f11e342cf432ae24fa9cbb8e88441f2a7c06d6a17466beaa231 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_moore, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS)
Sep 30 17:59:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-972fa226634cc0f67f19c143298a6660bf34a0954b704325d8d18db604be271c-merged.mount: Deactivated successfully.
Sep 30 17:59:20 compute-0 podman[266141]: 2025-09-30 17:59:20.355793899 +0000 UTC m=+0.589023141 container remove e005450a58167f11e342cf432ae24fa9cbb8e88441f2a7c06d6a17466beaa231 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_moore, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:59:20 compute-0 nova_compute[265391]: 2025-09-30 17:59:20.377 2 WARNING nova.compute.resource_tracker [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] No compute node record for compute-0.ctlplane.example.com:5403d2fc-3ca9-4ee2-946b-a15032cca0c2: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 could not be found.
Sep 30 17:59:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:59:20 compute-0 systemd[1]: libpod-conmon-e005450a58167f11e342cf432ae24fa9cbb8e88441f2a7c06d6a17466beaa231.scope: Deactivated successfully.
Sep 30 17:59:20 compute-0 sudo[265984]: pam_unix(sudo:session): session closed for user root
Sep 30 17:59:20 compute-0 sudo[266242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:59:20 compute-0 sudo[266242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:59:20 compute-0 sudo[266242]: pam_unix(sudo:session): session closed for user root
Sep 30 17:59:20 compute-0 sudo[266285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 17:59:20 compute-0 sudo[266285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:59:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:20 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2130001ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:20 compute-0 sudo[266359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgursovddidmjeghxqzvxcgimowkckeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255159.9717238-52-179749158726007/AnsiballZ_systemd_service.py'
Sep 30 17:59:20 compute-0 sudo[266359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:59:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:59:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:59:20.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:59:20 compute-0 nova_compute[265391]: 2025-09-30 17:59:20.890 2 INFO nova.compute.resource_tracker [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2
Sep 30 17:59:20 compute-0 python3.9[266361]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Sep 30 17:59:20 compute-0 systemd[1]: Reloading.
Sep 30 17:59:21 compute-0 ceph-mon[73755]: pgmap v607: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:59:21 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3542600850' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 17:59:21 compute-0 podman[266404]: 2025-09-30 17:59:21.045812002 +0000 UTC m=+0.064870846 container create 3e6918bc6cbaa82e98eb479e569ee29879321f28131de7b74278b2a4ee818c68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_darwin, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Sep 30 17:59:21 compute-0 systemd-rc-local-generator[266443]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:59:21 compute-0 systemd-sysv-generator[266446]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:59:21 compute-0 podman[266404]: 2025-09-30 17:59:21.012893567 +0000 UTC m=+0.031952421 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:59:21 compute-0 systemd[1]: Started libpod-conmon-3e6918bc6cbaa82e98eb479e569ee29879321f28131de7b74278b2a4ee818c68.scope.
Sep 30 17:59:21 compute-0 sudo[266359]: pam_unix(sudo:session): session closed for user root
Sep 30 17:59:21 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:59:21 compute-0 podman[266404]: 2025-09-30 17:59:21.375949837 +0000 UTC m=+0.395008711 container init 3e6918bc6cbaa82e98eb479e569ee29879321f28131de7b74278b2a4ee818c68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Sep 30 17:59:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:59:21.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:21 compute-0 podman[266404]: 2025-09-30 17:59:21.390849415 +0000 UTC m=+0.409908239 container start 3e6918bc6cbaa82e98eb479e569ee29879321f28131de7b74278b2a4ee818c68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_darwin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:59:21 compute-0 podman[266404]: 2025-09-30 17:59:21.395139376 +0000 UTC m=+0.414198250 container attach 3e6918bc6cbaa82e98eb479e569ee29879321f28131de7b74278b2a4ee818c68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Sep 30 17:59:21 compute-0 interesting_darwin[266453]: 167 167
Sep 30 17:59:21 compute-0 systemd[1]: libpod-3e6918bc6cbaa82e98eb479e569ee29879321f28131de7b74278b2a4ee818c68.scope: Deactivated successfully.
Sep 30 17:59:21 compute-0 conmon[266453]: conmon 3e6918bc6cbaa82e98eb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3e6918bc6cbaa82e98eb479e569ee29879321f28131de7b74278b2a4ee818c68.scope/container/memory.events
Sep 30 17:59:21 compute-0 podman[266404]: 2025-09-30 17:59:21.400657089 +0000 UTC m=+0.419715913 container died 3e6918bc6cbaa82e98eb479e569ee29879321f28131de7b74278b2a4ee818c68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:59:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-b067252bcbfa818311968ec5f40686634e564bd1323e65be0444b6cc1b546551-merged.mount: Deactivated successfully.
Sep 30 17:59:21 compute-0 podman[266404]: 2025-09-30 17:59:21.457187158 +0000 UTC m=+0.476245982 container remove 3e6918bc6cbaa82e98eb479e569ee29879321f28131de7b74278b2a4ee818c68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_darwin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Sep 30 17:59:21 compute-0 systemd[1]: libpod-conmon-3e6918bc6cbaa82e98eb479e569ee29879321f28131de7b74278b2a4ee818c68.scope: Deactivated successfully.
Sep 30 17:59:21 compute-0 podman[266532]: 2025-09-30 17:59:21.641282709 +0000 UTC m=+0.048822569 container create 458c7b7069664f472e39a69e1ccf2958379dbeb56dac96ee19da11185f31d016 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_lamarr, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 17:59:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v608: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:21 compute-0 systemd[1]: Started libpod-conmon-458c7b7069664f472e39a69e1ccf2958379dbeb56dac96ee19da11185f31d016.scope.
Sep 30 17:59:21 compute-0 podman[266532]: 2025-09-30 17:59:21.620288354 +0000 UTC m=+0.027828224 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:59:21 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1165ec052f622783f64ac001a50dd747cba6db41b7af956f68b354188223bca6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1165ec052f622783f64ac001a50dd747cba6db41b7af956f68b354188223bca6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1165ec052f622783f64ac001a50dd747cba6db41b7af956f68b354188223bca6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1165ec052f622783f64ac001a50dd747cba6db41b7af956f68b354188223bca6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:59:21 compute-0 podman[266532]: 2025-09-30 17:59:21.777309723 +0000 UTC m=+0.184849563 container init 458c7b7069664f472e39a69e1ccf2958379dbeb56dac96ee19da11185f31d016 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_lamarr, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Sep 30 17:59:21 compute-0 podman[266532]: 2025-09-30 17:59:21.787066716 +0000 UTC m=+0.194606556 container start 458c7b7069664f472e39a69e1ccf2958379dbeb56dac96ee19da11185f31d016 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:59:21 compute-0 podman[266532]: 2025-09-30 17:59:21.795260329 +0000 UTC m=+0.202800169 container attach 458c7b7069664f472e39a69e1ccf2958379dbeb56dac96ee19da11185f31d016 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_lamarr, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:59:22 compute-0 ceph-mon[73755]: pgmap v608: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:22 compute-0 vibrant_lamarr[266572]: {
Sep 30 17:59:22 compute-0 vibrant_lamarr[266572]:     "0": [
Sep 30 17:59:22 compute-0 vibrant_lamarr[266572]:         {
Sep 30 17:59:22 compute-0 vibrant_lamarr[266572]:             "devices": [
Sep 30 17:59:22 compute-0 vibrant_lamarr[266572]:                 "/dev/loop3"
Sep 30 17:59:22 compute-0 vibrant_lamarr[266572]:             ],
Sep 30 17:59:22 compute-0 vibrant_lamarr[266572]:             "lv_name": "ceph_lv0",
Sep 30 17:59:22 compute-0 vibrant_lamarr[266572]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:59:22 compute-0 vibrant_lamarr[266572]:             "lv_size": "21470642176",
Sep 30 17:59:22 compute-0 vibrant_lamarr[266572]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 17:59:22 compute-0 vibrant_lamarr[266572]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:59:22 compute-0 vibrant_lamarr[266572]:             "name": "ceph_lv0",
Sep 30 17:59:22 compute-0 vibrant_lamarr[266572]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:59:22 compute-0 vibrant_lamarr[266572]:             "tags": {
Sep 30 17:59:22 compute-0 vibrant_lamarr[266572]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 17:59:22 compute-0 vibrant_lamarr[266572]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 17:59:22 compute-0 vibrant_lamarr[266572]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 17:59:22 compute-0 vibrant_lamarr[266572]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 17:59:22 compute-0 vibrant_lamarr[266572]:                 "ceph.cluster_name": "ceph",
Sep 30 17:59:22 compute-0 vibrant_lamarr[266572]:                 "ceph.crush_device_class": "",
Sep 30 17:59:22 compute-0 vibrant_lamarr[266572]:                 "ceph.encrypted": "0",
Sep 30 17:59:22 compute-0 vibrant_lamarr[266572]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 17:59:22 compute-0 vibrant_lamarr[266572]:                 "ceph.osd_id": "0",
Sep 30 17:59:22 compute-0 vibrant_lamarr[266572]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 17:59:22 compute-0 vibrant_lamarr[266572]:                 "ceph.type": "block",
Sep 30 17:59:22 compute-0 vibrant_lamarr[266572]:                 "ceph.vdo": "0",
Sep 30 17:59:22 compute-0 vibrant_lamarr[266572]:                 "ceph.with_tpm": "0"
Sep 30 17:59:22 compute-0 vibrant_lamarr[266572]:             },
Sep 30 17:59:22 compute-0 vibrant_lamarr[266572]:             "type": "block",
Sep 30 17:59:22 compute-0 vibrant_lamarr[266572]:             "vg_name": "ceph_vg0"
Sep 30 17:59:22 compute-0 vibrant_lamarr[266572]:         }
Sep 30 17:59:22 compute-0 vibrant_lamarr[266572]:     ]
Sep 30 17:59:22 compute-0 vibrant_lamarr[266572]: }
Sep 30 17:59:22 compute-0 systemd[1]: libpod-458c7b7069664f472e39a69e1ccf2958379dbeb56dac96ee19da11185f31d016.scope: Deactivated successfully.
Sep 30 17:59:22 compute-0 podman[266532]: 2025-09-30 17:59:22.146585955 +0000 UTC m=+0.554125805 container died 458c7b7069664f472e39a69e1ccf2958379dbeb56dac96ee19da11185f31d016 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_lamarr, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 17:59:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-1165ec052f622783f64ac001a50dd747cba6db41b7af956f68b354188223bca6-merged.mount: Deactivated successfully.
Sep 30 17:59:22 compute-0 python3.9[266652]: ansible-ansible.builtin.service_facts Invoked
Sep 30 17:59:22 compute-0 podman[266532]: 2025-09-30 17:59:22.207738363 +0000 UTC m=+0.615278203 container remove 458c7b7069664f472e39a69e1ccf2958379dbeb56dac96ee19da11185f31d016 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_lamarr, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 17:59:22 compute-0 systemd[1]: libpod-conmon-458c7b7069664f472e39a69e1ccf2958379dbeb56dac96ee19da11185f31d016.scope: Deactivated successfully.
Sep 30 17:59:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:22 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:59:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:59:22 compute-0 network[266684]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Sep 30 17:59:22 compute-0 sudo[266285]: pam_unix(sudo:session): session closed for user root
Sep 30 17:59:22 compute-0 network[266685]: 'network-scripts' will be removed from distribution in near future.
Sep 30 17:59:22 compute-0 network[266686]: It is advised to switch to 'NetworkManager' instead for network management.
Sep 30 17:59:22 compute-0 nova_compute[265391]: 2025-09-30 17:59:22.460 2 DEBUG nova.compute.resource_tracker [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 17:59:22 compute-0 nova_compute[265391]: 2025-09-30 17:59:22.460 2 DEBUG nova.compute.resource_tracker [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 17:59:19 up  1:02,  0 user,  load average: 1.29, 1.28, 1.11\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 17:59:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:22 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2134001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:59:22.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:59:23 compute-0 sudo[266691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 17:59:23 compute-0 sudo[266691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:59:23 compute-0 sudo[266691]: pam_unix(sudo:session): session closed for user root
Sep 30 17:59:23 compute-0 sudo[266718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 17:59:23 compute-0 sudo[266718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:59:23 compute-0 nova_compute[265391]: 2025-09-30 17:59:23.277 2 INFO nova.scheduler.client.report [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] [req-072f6204-f2e6-4b15-a666-5f36b0068f33] Created resource provider record via placement API for resource provider with UUID 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 and name compute-0.ctlplane.example.com.
Sep 30 17:59:23 compute-0 nova_compute[265391]: 2025-09-30 17:59:23.293 2 DEBUG oslo_concurrency.processutils [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 17:59:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:59:23.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:59:23.589Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:59:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v609: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:23 compute-0 podman[266823]: 2025-09-30 17:59:23.705266299 +0000 UTC m=+0.051421316 container create 738f33c2f5cf15d7517530acefbcc1c2c7242498f28b1e91accaab2872dbc047 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_poitras, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:59:23 compute-0 systemd[1]: Started libpod-conmon-738f33c2f5cf15d7517530acefbcc1c2c7242498f28b1e91accaab2872dbc047.scope.
Sep 30 17:59:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 17:59:23 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3052332124' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 17:59:23 compute-0 sshd-session[266655]: Invalid user sri from 45.252.249.158 port 43138
Sep 30 17:59:23 compute-0 sshd-session[266655]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 17:59:23 compute-0 sshd-session[266655]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158
Sep 30 17:59:23 compute-0 podman[266823]: 2025-09-30 17:59:23.680108436 +0000 UTC m=+0.026263493 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:59:23 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:59:23 compute-0 podman[266823]: 2025-09-30 17:59:23.813543122 +0000 UTC m=+0.159698159 container init 738f33c2f5cf15d7517530acefbcc1c2c7242498f28b1e91accaab2872dbc047 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_poitras, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Sep 30 17:59:23 compute-0 nova_compute[265391]: 2025-09-30 17:59:23.815 2 DEBUG oslo_concurrency.processutils [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 17:59:23 compute-0 podman[266823]: 2025-09-30 17:59:23.822910625 +0000 UTC m=+0.169065642 container start 738f33c2f5cf15d7517530acefbcc1c2c7242498f28b1e91accaab2872dbc047 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_poitras, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:59:23 compute-0 nova_compute[265391]: 2025-09-30 17:59:23.824 2 DEBUG nova.virt.libvirt.host [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Sep 30 17:59:23 compute-0 nova_compute[265391]: ] _kernel_supports_amd_sev /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1953
Sep 30 17:59:23 compute-0 nova_compute[265391]: 2025-09-30 17:59:23.824 2 INFO nova.virt.libvirt.host [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] kernel doesn't support AMD SEV
Sep 30 17:59:23 compute-0 nova_compute[265391]: 2025-09-30 17:59:23.825 2 DEBUG nova.compute.provider_tree [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Updating inventory in ProviderTree for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 39, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Sep 30 17:59:23 compute-0 nova_compute[265391]: 2025-09-30 17:59:23.825 2 DEBUG nova.virt.libvirt.driver [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Sep 30 17:59:23 compute-0 podman[266823]: 2025-09-30 17:59:23.826999612 +0000 UTC m=+0.173154979 container attach 738f33c2f5cf15d7517530acefbcc1c2c7242498f28b1e91accaab2872dbc047 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Sep 30 17:59:23 compute-0 naughty_poitras[266845]: 167 167
Sep 30 17:59:23 compute-0 systemd[1]: libpod-738f33c2f5cf15d7517530acefbcc1c2c7242498f28b1e91accaab2872dbc047.scope: Deactivated successfully.
Sep 30 17:59:23 compute-0 conmon[266845]: conmon 738f33c2f5cf15d75175 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-738f33c2f5cf15d7517530acefbcc1c2c7242498f28b1e91accaab2872dbc047.scope/container/memory.events
Sep 30 17:59:23 compute-0 podman[266823]: 2025-09-30 17:59:23.832824663 +0000 UTC m=+0.178979680 container died 738f33c2f5cf15d7517530acefbcc1c2c7242498f28b1e91accaab2872dbc047 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_poitras, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Sep 30 17:59:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0b3cb80d3370a9449d269a8ed86bb35ef7538febee22da86c1b81bf6d70c79f-merged.mount: Deactivated successfully.
Sep 30 17:59:23 compute-0 podman[266823]: 2025-09-30 17:59:23.887826992 +0000 UTC m=+0.233982009 container remove 738f33c2f5cf15d7517530acefbcc1c2c7242498f28b1e91accaab2872dbc047 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_poitras, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:59:23 compute-0 systemd[1]: libpod-conmon-738f33c2f5cf15d7517530acefbcc1c2c7242498f28b1e91accaab2872dbc047.scope: Deactivated successfully.
Sep 30 17:59:24 compute-0 podman[266868]: 2025-09-30 17:59:24.015499758 +0000 UTC m=+0.105314247 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2)
Sep 30 17:59:24 compute-0 podman[266906]: 2025-09-30 17:59:24.068667379 +0000 UTC m=+0.043986234 container create 038d76127007ec7b9b8999386bff6dfc9332da0dd5aeb16f62844485d80aca41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_clarke, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Sep 30 17:59:24 compute-0 ceph-mon[73755]: pgmap v609: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:24 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3052332124' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 17:59:24 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3694196857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 17:59:24 compute-0 systemd[1]: Started libpod-conmon-038d76127007ec7b9b8999386bff6dfc9332da0dd5aeb16f62844485d80aca41.scope.
Sep 30 17:59:24 compute-0 systemd[1]: Started libcrun container.
Sep 30 17:59:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/018c08547933053f71310691f3cb3f168698fe75ec52f30c5351e928133c54ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 17:59:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/018c08547933053f71310691f3cb3f168698fe75ec52f30c5351e928133c54ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 17:59:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/018c08547933053f71310691f3cb3f168698fe75ec52f30c5351e928133c54ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 17:59:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/018c08547933053f71310691f3cb3f168698fe75ec52f30c5351e928133c54ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 17:59:24 compute-0 podman[266906]: 2025-09-30 17:59:24.051621786 +0000 UTC m=+0.026940661 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 17:59:24 compute-0 podman[266906]: 2025-09-30 17:59:24.155923475 +0000 UTC m=+0.131242350 container init 038d76127007ec7b9b8999386bff6dfc9332da0dd5aeb16f62844485d80aca41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Sep 30 17:59:24 compute-0 podman[266906]: 2025-09-30 17:59:24.163872422 +0000 UTC m=+0.139191297 container start 038d76127007ec7b9b8999386bff6dfc9332da0dd5aeb16f62844485d80aca41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_clarke, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:59:24 compute-0 podman[266906]: 2025-09-30 17:59:24.167706731 +0000 UTC m=+0.143025616 container attach 038d76127007ec7b9b8999386bff6dfc9332da0dd5aeb16f62844485d80aca41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_clarke, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 17:59:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:24 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2140000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:24 compute-0 nova_compute[265391]: 2025-09-30 17:59:24.397 2 DEBUG nova.scheduler.client.report [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Updated inventory for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 39, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:975
Sep 30 17:59:24 compute-0 nova_compute[265391]: 2025-09-30 17:59:24.398 2 DEBUG nova.compute.provider_tree [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Updating resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:164
Sep 30 17:59:24 compute-0 nova_compute[265391]: 2025-09-30 17:59:24.398 2 DEBUG nova.compute.provider_tree [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Updating inventory in ProviderTree for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 39, 'reserved': 0, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Sep 30 17:59:24 compute-0 nova_compute[265391]: 2025-09-30 17:59:24.545 2 DEBUG nova.compute.provider_tree [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Updating resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:164
Sep 30 17:59:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:24 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300027c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:59:24.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:24 compute-0 lvm[267027]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 17:59:24 compute-0 lvm[267027]: VG ceph_vg0 finished
Sep 30 17:59:24 compute-0 gracious_clarke[266928]: {}
Sep 30 17:59:24 compute-0 systemd[1]: libpod-038d76127007ec7b9b8999386bff6dfc9332da0dd5aeb16f62844485d80aca41.scope: Deactivated successfully.
Sep 30 17:59:24 compute-0 systemd[1]: libpod-038d76127007ec7b9b8999386bff6dfc9332da0dd5aeb16f62844485d80aca41.scope: Consumed 1.139s CPU time.
Sep 30 17:59:24 compute-0 podman[266906]: 2025-09-30 17:59:24.868985796 +0000 UTC m=+0.844304641 container died 038d76127007ec7b9b8999386bff6dfc9332da0dd5aeb16f62844485d80aca41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Sep 30 17:59:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-018c08547933053f71310691f3cb3f168698fe75ec52f30c5351e928133c54ed-merged.mount: Deactivated successfully.
Sep 30 17:59:24 compute-0 podman[266906]: 2025-09-30 17:59:24.913576945 +0000 UTC m=+0.888895790 container remove 038d76127007ec7b9b8999386bff6dfc9332da0dd5aeb16f62844485d80aca41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_clarke, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 17:59:24 compute-0 systemd[1]: libpod-conmon-038d76127007ec7b9b8999386bff6dfc9332da0dd5aeb16f62844485d80aca41.scope: Deactivated successfully.
Sep 30 17:59:24 compute-0 sudo[266718]: pam_unix(sudo:session): session closed for user root
Sep 30 17:59:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 17:59:24 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:59:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 17:59:24 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:59:25 compute-0 nova_compute[265391]: 2025-09-30 17:59:25.053 2 DEBUG nova.compute.resource_tracker [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 17:59:25 compute-0 nova_compute[265391]: 2025-09-30 17:59:25.054 2 DEBUG oslo_concurrency.lockutils [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 5.183s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 17:59:25 compute-0 nova_compute[265391]: 2025-09-30 17:59:25.054 2 DEBUG nova.service [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.12/site-packages/nova/service.py:177
Sep 30 17:59:25 compute-0 sudo[267041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 17:59:25 compute-0 sudo[267041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:59:25 compute-0 sudo[267041]: pam_unix(sudo:session): session closed for user root
Sep 30 17:59:25 compute-0 sudo[267066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:59:25 compute-0 sudo[267066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:59:25 compute-0 sudo[267066]: pam_unix(sudo:session): session closed for user root
Sep 30 17:59:25 compute-0 nova_compute[265391]: 2025-09-30 17:59:25.293 2 DEBUG nova.service [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.12/site-packages/nova/service.py:194
Sep 30 17:59:25 compute-0 nova_compute[265391]: 2025-09-30 17:59:25.294 2 DEBUG nova.servicegroup.drivers.db [None req-e45451e1-49e7-4ef9-85f1-aa930a1c64e9 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.12/site-packages/nova/servicegroup/drivers/db.py:44
Sep 30 17:59:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:59:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:59:25.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:25 compute-0 sshd-session[266655]: Failed password for invalid user sri from 45.252.249.158 port 43138 ssh2
Sep 30 17:59:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v610: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:25 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:59:25 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 17:59:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:26 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:26 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2134001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:59:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:59:26.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:59:27 compute-0 ceph-mon[73755]: pgmap v610: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:27 compute-0 sshd-session[266655]: Received disconnect from 45.252.249.158 port 43138:11: Bye Bye [preauth]
Sep 30 17:59:27 compute-0 sshd-session[266655]: Disconnected from invalid user sri 45.252.249.158 port 43138 [preauth]
Sep 30 17:59:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:59:27.071Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:59:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:59:27.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v611: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:28 compute-0 sudo[267296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prpczmpbqoekaayhehrhfckhecickyhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255167.7635276-90-166992067081763/AnsiballZ_systemd_service.py'
Sep 30 17:59:28 compute-0 sudo[267296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:59:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:28 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2140000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:28 compute-0 python3.9[267298]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 17:59:28 compute-0 sudo[267296]: pam_unix(sudo:session): session closed for user root
Sep 30 17:59:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:28 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300027c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:59:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:59:28.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:59:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:59:28] "GET /metrics HTTP/1.1" 200 46516 "" "Prometheus/2.51.0"
Sep 30 17:59:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:59:28] "GET /metrics HTTP/1.1" 200 46516 "" "Prometheus/2.51.0"
Sep 30 17:59:29 compute-0 ceph-mon[73755]: pgmap v611: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:29 compute-0 sudo[267449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlkbzkgnjkrzpdcnqceprjmivuqwgxea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255168.803081-110-46837929299318/AnsiballZ_file.py'
Sep 30 17:59:29 compute-0 sudo[267449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:59:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:59:29.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:29 compute-0 python3.9[267451]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:59:29 compute-0 sudo[267449]: pam_unix(sudo:session): session closed for user root
Sep 30 17:59:29 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Sep 30 17:59:29 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Sep 30 17:59:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v612: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 340 B/s rd, 0 op/s
Sep 30 17:59:30 compute-0 sudo[267604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkkoggxmzdzxksxcaqdrdbrsqbmitxyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255169.721592-126-193535142897290/AnsiballZ_file.py'
Sep 30 17:59:30 compute-0 sudo[267604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:59:30 compute-0 ceph-mon[73755]: pgmap v612: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 340 B/s rd, 0 op/s
Sep 30 17:59:30 compute-0 python3.9[267606]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:59:30 compute-0 sudo[267604]: pam_unix(sudo:session): session closed for user root
Sep 30 17:59:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:30 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:59:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:30 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2134001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:59:30.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:31 compute-0 nova_compute[265391]: 2025-09-30 17:59:31.295 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 17:59:31 compute-0 sudo[267773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boanojhwtarztlqulzahyjahykmsguil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255170.776871-144-74351271761562/AnsiballZ_command.py'
Sep 30 17:59:31 compute-0 sudo[267773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:59:31 compute-0 podman[267730]: 2025-09-30 17:59:31.34353962 +0000 UTC m=+0.106491207 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20250930, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest)
Sep 30 17:59:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:59:31.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:31 compute-0 python3.9[267777]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:59:31 compute-0 sudo[267773]: pam_unix(sudo:session): session closed for user root
Sep 30 17:59:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v613: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:31 compute-0 nova_compute[265391]: 2025-09-30 17:59:31.806 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 17:59:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:32 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21400095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:32 compute-0 python3.9[267931]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Sep 30 17:59:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:32 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300027c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:59:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:59:32.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:59:32 compute-0 ceph-mon[73755]: pgmap v613: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:33 compute-0 sudo[268083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgtxsyeexgesqtaivqwoivxgawcuuizw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255172.736446-180-239560198416309/AnsiballZ_systemd_service.py'
Sep 30 17:59:33 compute-0 sudo[268083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:59:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:59:33.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:33 compute-0 python3.9[268085]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Sep 30 17:59:33 compute-0 systemd[1]: Reloading.
Sep 30 17:59:33 compute-0 systemd-rc-local-generator[268112]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 17:59:33 compute-0 systemd-sysv-generator[268115]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 17:59:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:59:33.592Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:59:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v614: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:33 compute-0 sudo[268083]: pam_unix(sudo:session): session closed for user root
Sep 30 17:59:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:34 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:34 compute-0 sshd-session[267932]: Invalid user srv from 14.225.220.107 port 47606
Sep 30 17:59:34 compute-0 sshd-session[267932]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 17:59:34 compute-0 sshd-session[267932]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107
Sep 30 17:59:34 compute-0 sudo[268272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nelpcchosecfhmetlodmnubjaesewnrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255174.1113193-196-9951107931763/AnsiballZ_command.py'
Sep 30 17:59:34 compute-0 sudo[268272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:59:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:34 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2134003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:34 compute-0 python3.9[268274]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 17:59:34 compute-0 sudo[268272]: pam_unix(sudo:session): session closed for user root
Sep 30 17:59:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:59:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:59:34.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:59:34 compute-0 ceph-mon[73755]: pgmap v614: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:35 compute-0 sudo[268425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sccxurivhuemnhoqwjulenvmifffaaaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255174.9709496-214-108333034136190/AnsiballZ_file.py'
Sep 30 17:59:35 compute-0 sudo[268425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:59:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:59:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:59:35.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:35 compute-0 python3.9[268427]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:59:35 compute-0 sudo[268425]: pam_unix(sudo:session): session closed for user root
Sep 30 17:59:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v615: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:36 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21400095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:36 compute-0 python3.9[268579]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:59:36 compute-0 sshd-session[267932]: Failed password for invalid user srv from 14.225.220.107 port 47606 ssh2
Sep 30 17:59:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:36 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21400095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:59:36.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:36 compute-0 ceph-mon[73755]: pgmap v615: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:59:37.072Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:59:37 compute-0 python3.9[268732]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:59:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:59:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:59:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:59:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:59:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:59:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:59:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 17:59:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 17:59:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:59:37.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v616: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:37 compute-0 python3.9[268855]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759255176.6275885-246-2310717584559/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 17:59:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:59:38 compute-0 sshd-session[267932]: Received disconnect from 14.225.220.107 port 47606:11: Bye Bye [preauth]
Sep 30 17:59:38 compute-0 sshd-session[267932]: Disconnected from invalid user srv 14.225.220.107 port 47606 [preauth]
Sep 30 17:59:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300038c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:38 compute-0 podman[268955]: 2025-09-30 17:59:38.519516772 +0000 UTC m=+0.058185982 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team)
Sep 30 17:59:38 compute-0 sudo[269025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfcgwtdryyfzrhljxelzoztypcpixbko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255178.114513-276-164056998144148/AnsiballZ_group.py'
Sep 30 17:59:38 compute-0 sudo[269025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:59:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:59:38.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:38 compute-0 python3.9[269027]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Sep 30 17:59:38 compute-0 sudo[269025]: pam_unix(sudo:session): session closed for user root
Sep 30 17:59:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:59:38] "GET /metrics HTTP/1.1" 200 46516 "" "Prometheus/2.51.0"
Sep 30 17:59:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:59:38] "GET /metrics HTTP/1.1" 200 46516 "" "Prometheus/2.51.0"
Sep 30 17:59:39 compute-0 ceph-mon[73755]: pgmap v616: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:59:39.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v617: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:59:39 compute-0 sudo[269179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hglrmiloejsacyjossmwsahafuqtwaqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255179.1944528-298-276474892998193/AnsiballZ_getent.py'
Sep 30 17:59:39 compute-0 sudo[269179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:59:39 compute-0 python3.9[269181]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Sep 30 17:59:39 compute-0 sudo[269179]: pam_unix(sudo:session): session closed for user root
Sep 30 17:59:40 compute-0 podman[269183]: 2025-09-30 17:59:40.013256022 +0000 UTC m=+0.063088690 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true)
Sep 30 17:59:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:40 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:59:40 compute-0 sudo[269354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbnsgpsjeqddquqhjbcledhcawnijehh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255180.1526532-314-16387520952261/AnsiballZ_group.py'
Sep 30 17:59:40 compute-0 sudo[269354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:59:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:40 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21400095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:40 compute-0 python3.9[269356]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Sep 30 17:59:40 compute-0 groupadd[269357]: group added to /etc/group: name=ceilometer, GID=42405
Sep 30 17:59:40 compute-0 groupadd[269357]: group added to /etc/gshadow: name=ceilometer
Sep 30 17:59:40 compute-0 groupadd[269357]: new group: name=ceilometer, GID=42405
Sep 30 17:59:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:59:40.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:40 compute-0 sudo[269354]: pam_unix(sudo:session): session closed for user root
Sep 30 17:59:41 compute-0 ceph-mon[73755]: pgmap v617: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:59:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:59:41.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v618: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:41 compute-0 sudo[269514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irdcljwuzqbuvncqhndordppzoryldbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255181.0768905-330-184665005845476/AnsiballZ_user.py'
Sep 30 17:59:41 compute-0 sudo[269514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 17:59:41 compute-0 python3.9[269516]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Sep 30 17:59:41 compute-0 useradd[269518]: new user: name=ceilometer, UID=42405, GID=42405, home=/home/ceilometer, shell=/sbin/nologin, from=/dev/pts/0
Sep 30 17:59:41 compute-0 useradd[269518]: add 'ceilometer' to group 'libvirt'
Sep 30 17:59:41 compute-0 useradd[269518]: add 'ceilometer' to shadow group 'libvirt'
Sep 30 17:59:42 compute-0 sudo[269514]: pam_unix(sudo:session): session closed for user root
Sep 30 17:59:42 compute-0 ceph-mon[73755]: pgmap v618: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:42 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300038c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:42 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:59:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:59:42.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:59:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:59:43.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:43 compute-0 python3.9[269674]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:59:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:59:43.594Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:59:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v619: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:44 compute-0 python3.9[269797]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1759255183.0299594-382-112615119707073/.source.conf _original_basename=ceilometer.conf follow=False checksum=f74f01c63e6cdeca5458ef9aff2a1db5d6a4e4b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:59:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:44 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:44 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21400095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:44 compute-0 python3.9[269947]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:59:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:59:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:59:44.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:59:44 compute-0 ceph-mon[73755]: pgmap v619: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:45 compute-0 python3.9[270068]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1759255184.166822-382-208403777687416/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:59:45 compute-0 sudo[270093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 17:59:45 compute-0 sudo[270093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 17:59:45 compute-0 sudo[270093]: pam_unix(sudo:session): session closed for user root
Sep 30 17:59:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:59:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:59:45.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v620: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:45 compute-0 python3.9[270244]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:59:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:46 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:46 compute-0 python3.9[270366]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1759255185.3301952-382-96994867326829/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:59:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:46 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:59:46.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:46 compute-0 ceph-mon[73755]: pgmap v620: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:59:47.073Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:59:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:59:47.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:47 compute-0 python3.9[270517]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:59:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v621: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:48 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:48 compute-0 python3.9[270670]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 17:59:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:48 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21400095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:59:48.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:59:48] "GET /metrics HTTP/1.1" 200 46516 "" "Prometheus/2.51.0"
Sep 30 17:59:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:59:48] "GET /metrics HTTP/1.1" 200 46516 "" "Prometheus/2.51.0"
Sep 30 17:59:48 compute-0 ceph-mon[73755]: pgmap v621: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:49 compute-0 python3.9[270822]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:59:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:59:49.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:49 compute-0 python3.9[270945]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759255188.635815-500-85822830366274/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:59:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v622: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:59:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:50 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:50 compute-0 python3.9[271096]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:59:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:59:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:50 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:59:50.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:50 compute-0 python3.9[271172]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:59:50 compute-0 ceph-mon[73755]: pgmap v622: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:59:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 17:59:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:59:51.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 17:59:51 compute-0 python3.9[271323]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:59:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v623: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:52 compute-0 python3.9[271445]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759255191.0127296-500-155357209642341/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=f3aeda92b1de7a4881364150abf82a5da4c708e0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:59:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 17:59:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:59:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:52 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:52 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21400095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:52 compute-0 python3.9[271595]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:59:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:59:52.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:52 compute-0 ceph-mon[73755]: pgmap v623: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 17:59:53 compute-0 python3.9[271716]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759255192.2570686-500-250604216540745/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:59:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:59:53.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:59:53.595Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:59:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v624: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:53 compute-0 python3.9[271868]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:59:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:59:54.253 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 17:59:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:59:54.254 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 17:59:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 17:59:54.254 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 17:59:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:54 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:54 compute-0 podman[271963]: 2025-09-30 17:59:54.35068128 +0000 UTC m=+0.098465009 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20250930, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Sep 30 17:59:54 compute-0 python3.9[272001]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759255193.4426923-500-183638553456613/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:59:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:54 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:59:54.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:54 compute-0 ceph-mon[73755]: pgmap v624: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:55 compute-0 python3.9[272166]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:59:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 17:59:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:59:55.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:55 compute-0 python3.9[272288]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759255194.6179185-500-18696285878240/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=6e4982940d2bfae88404914dfaf72552f6356d81 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:59:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v625: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:56 compute-0 python3.9[272439]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:59:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:56 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:56 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21400095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:59:56.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:56 compute-0 python3.9[272560]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759255195.81269-500-196506545026934/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:59:57 compute-0 ceph-mon[73755]: pgmap v625: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T17:59:57.074Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 17:59:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:59:57.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:57 compute-0 python3.9[272710]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:59:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v626: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:58 compute-0 python3.9[272833]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759255197.0386107-500-171702167892188/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=d474f1e4c3dbd24762592c51cbe5311f0a037273 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:59:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:58 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 17:59:58 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 17:59:58 compute-0 python3.9[272983]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 17:59:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:17:59:58.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:17:59:58] "GET /metrics HTTP/1.1" 200 46517 "" "Prometheus/2.51.0"
Sep 30 17:59:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:17:59:58] "GET /metrics HTTP/1.1" 200 46517 "" "Prometheus/2.51.0"
Sep 30 17:59:59 compute-0 ceph-mon[73755]: pgmap v626: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 17:59:59 compute-0 python3.9[273104]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759255198.1886883-500-239131864569665/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=2b6bd0891e609bf38a73282f42888052b750bed6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 17:59:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 17:59:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 17:59:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:17:59:59.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 17:59:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v627: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 17:59:59 compute-0 python3.9[273256]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 18:00:00 compute-0 ceph-mon[73755]: log_channel(cluster) log [WRN] : overall HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore
Sep 30 18:00:00 compute-0 ceph-mon[73755]: overall HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore
Sep 30 18:00:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:00 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:00:00 compute-0 python3.9[273377]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759255199.4169738-500-279662188806427/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=e342121a88f67e2bae7ebc05d1e6d350470198a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 18:00:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:00 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21400095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:00:00.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:01 compute-0 ceph-mon[73755]: pgmap v627: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:00:01 compute-0 python3.9[273527]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 18:00:01 compute-0 sshd-session[270944]: error: kex_exchange_identification: read: Connection timed out
Sep 30 18:00:01 compute-0 sshd-session[270944]: banner exchange: Connection from 115.190.39.222 port 57326: Connection timed out
Sep 30 18:00:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:00:01.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:01 compute-0 podman[273623]: 2025-09-30 18:00:01.495523154 +0000 UTC m=+0.056129259 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930)
Sep 30 18:00:01 compute-0 python3.9[273661]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759255200.6408138-500-261587430800397/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 18:00:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v628: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:02 compute-0 ceph-mon[73755]: pgmap v628: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:02 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:02 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:00:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:00:02.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:00:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:00:03.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:00:03.597Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:00:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v629: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:04 compute-0 python3.9[273820]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 18:00:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:04 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:04 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21400095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:04 compute-0 python3.9[273896]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 18:00:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:00:04.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:04 compute-0 ceph-mon[73755]: pgmap v629: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:00:05 compute-0 sudo[274048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:00:05 compute-0 sudo[274048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:00:05 compute-0 sudo[274048]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:00:05.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:05 compute-0 python3.9[274046]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 18:00:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v630: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:06 compute-0 python3.9[274149]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 18:00:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:06 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:06 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:00:06.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:06 compute-0 ceph-mon[73755]: pgmap v630: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:06 compute-0 python3.9[274299]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 18:00:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:00:07.076Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:00:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:00:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:00:07 compute-0 python3.9[274375]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:00:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:00:07.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:00:07
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'default.rgw.control', '.nfs', '.mgr', 'default.rgw.meta', 'backups', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta']
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:00:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v631: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:00:07 compute-0 sudo[274527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lajlkbkhemmxqhsdejosporrnsyugasy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255207.5744972-878-198066667728851/AnsiballZ_file.py'
Sep 30 18:00:07 compute-0 sudo[274527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:08 compute-0 python3.9[274529]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 18:00:08 compute-0 sudo[274527]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:08 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:08 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21400095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:08 compute-0 sudo[274692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkaoleyxhqcuoaifpnvrivquxwbslbxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255208.3747375-894-234096379669237/AnsiballZ_file.py'
Sep 30 18:00:08 compute-0 sudo[274692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:08 compute-0 podman[274653]: 2025-09-30 18:00:08.676189929 +0000 UTC m=+0.061085487 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, container_name=iscsid)
Sep 30 18:00:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:00:08.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:08 compute-0 ceph-mon[73755]: pgmap v631: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:00:08] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 18:00:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:00:08] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 18:00:08 compute-0 python3.9[274700]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 18:00:08 compute-0 sudo[274692]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:09 compute-0 sudo[274852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkyleljstxfawgascobasotofezzylxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255209.150469-910-175068349177415/AnsiballZ_file.py'
Sep 30 18:00:09 compute-0 sudo[274852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:00:09.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:09 compute-0 python3.9[274854]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 18:00:09 compute-0 sudo[274852]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v632: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:00:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:10 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:10 compute-0 sudo[275022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjkkxoajstgflbithuxwmtssfiwrvwnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255210.0411441-926-160684772955572/AnsiballZ_systemd_service.py'
Sep 30 18:00:10 compute-0 podman[274979]: 2025-09-30 18:00:10.385147948 +0000 UTC m=+0.059224880 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.schema-version=1.0)
Sep 30 18:00:10 compute-0 sudo[275022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:00:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:10 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:10 compute-0 python3.9[275025]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 18:00:10 compute-0 systemd[1]: Reloading.
Sep 30 18:00:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:00:10.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:10 compute-0 ceph-mon[73755]: pgmap v632: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:00:10 compute-0 systemd-rc-local-generator[275052]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 18:00:10 compute-0 systemd-sysv-generator[275056]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 18:00:11 compute-0 systemd[1]: Listening on Podman API Socket.
Sep 30 18:00:11 compute-0 sudo[275022]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:00:11.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v633: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:11 compute-0 sudo[275215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpniytoahxkohbogeaaxwrmktihfxjml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255211.6101568-944-44005905955817/AnsiballZ_stat.py'
Sep 30 18:00:11 compute-0 sudo[275215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:12 compute-0 python3.9[275217]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 18:00:12 compute-0 sudo[275215]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:12 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:12 compute-0 sudo[275338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjtffgbxtlyoqlwxffiabntdwodrriqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255211.6101568-944-44005905955817/AnsiballZ_copy.py'
Sep 30 18:00:12 compute-0 sudo[275338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:12 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21400095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:12 compute-0 python3.9[275340]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759255211.6101568-944-44005905955817/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Sep 30 18:00:12 compute-0 sudo[275338]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:00:12.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:12 compute-0 ceph-mon[73755]: pgmap v633: 353 pgs: 353 active+clean; 458 KiB data, 103 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:13 compute-0 nova_compute[265391]: 2025-09-30 18:00:13.430 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:00:13 compute-0 nova_compute[265391]: 2025-09-30 18:00:13.430 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:00:13 compute-0 nova_compute[265391]: 2025-09-30 18:00:13.431 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:00:13 compute-0 nova_compute[265391]: 2025-09-30 18:00:13.431 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:00:13 compute-0 nova_compute[265391]: 2025-09-30 18:00:13.431 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:00:13 compute-0 nova_compute[265391]: 2025-09-30 18:00:13.431 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:00:13 compute-0 nova_compute[265391]: 2025-09-30 18:00:13.431 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:00:13 compute-0 nova_compute[265391]: 2025-09-30 18:00:13.432 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:00:13 compute-0 nova_compute[265391]: 2025-09-30 18:00:13.432 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:00:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:00:13.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:00:13.598Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:00:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v634: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Sep 30 18:00:13 compute-0 sudo[275492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-celfawxwtnywldukpwmatrjfdtmkxqdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255213.2690704-978-67151074667646/AnsiballZ_container_config_data.py'
Sep 30 18:00:13 compute-0 sudo[275492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:13 compute-0 python3.9[275494]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Sep 30 18:00:13 compute-0 nova_compute[265391]: 2025-09-30 18:00:13.943 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:00:13 compute-0 nova_compute[265391]: 2025-09-30 18:00:13.944 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:00:13 compute-0 nova_compute[265391]: 2025-09-30 18:00:13.944 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:00:13 compute-0 nova_compute[265391]: 2025-09-30 18:00:13.944 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:00:13 compute-0 nova_compute[265391]: 2025-09-30 18:00:13.945 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:00:13 compute-0 sudo[275492]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:14 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21400095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:00:14 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3527523402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:00:14 compute-0 nova_compute[265391]: 2025-09-30 18:00:14.432 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:00:14 compute-0 nova_compute[265391]: 2025-09-30 18:00:14.606 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:00:14 compute-0 nova_compute[265391]: 2025-09-30 18:00:14.607 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:00:14 compute-0 nova_compute[265391]: 2025-09-30 18:00:14.623 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.017s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:00:14 compute-0 nova_compute[265391]: 2025-09-30 18:00:14.624 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4900MB free_disk=39.9921875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:00:14 compute-0 nova_compute[265391]: 2025-09-30 18:00:14.624 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:00:14 compute-0 nova_compute[265391]: 2025-09-30 18:00:14.625 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:00:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:14 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:14 compute-0 sudo[275667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bflkjxnglupjyhxvzucrfwfjxedvmwna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255214.236979-996-122018089229354/AnsiballZ_container_config_hash.py'
Sep 30 18:00:14 compute-0 sudo[275667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:00:14.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:14 compute-0 ceph-mon[73755]: pgmap v634: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Sep 30 18:00:14 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1276608331' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:00:14 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3527523402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:00:14 compute-0 python3.9[275669]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Sep 30 18:00:14 compute-0 sudo[275667]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:00:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:00:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:00:15.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:00:15 compute-0 nova_compute[265391]: 2025-09-30 18:00:15.669 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:00:15 compute-0 nova_compute[265391]: 2025-09-30 18:00:15.669 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:00:14 up  1:03,  0 user,  load average: 1.16, 1.23, 1.10\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:00:15 compute-0 nova_compute[265391]: 2025-09-30 18:00:15.686 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:00:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v635: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Sep 30 18:00:15 compute-0 sudo[275822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkkpwbnxumhrxfhiikzgwgppepxeemhk ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759255215.316129-1016-208301273498965/AnsiballZ_edpm_container_manage.py'
Sep 30 18:00:15 compute-0 sudo[275822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:15 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2491222015' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:00:16 compute-0 python3[275824]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Sep 30 18:00:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:00:16 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3651911007' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:00:16 compute-0 nova_compute[265391]: 2025-09-30 18:00:16.232 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:00:16 compute-0 nova_compute[265391]: 2025-09-30 18:00:16.237 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:00:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:16 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118003630 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:16 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:00:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:00:16.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:00:16 compute-0 nova_compute[265391]: 2025-09-30 18:00:16.764 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 39, 'reserved': 0, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:00:16 compute-0 ceph-mon[73755]: pgmap v635: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Sep 30 18:00:16 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3651911007' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:00:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:00:17.077Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:00:17 compute-0 nova_compute[265391]: 2025-09-30 18:00:17.273 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:00:17 compute-0 nova_compute[265391]: 2025-09-30 18:00:17.274 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.649s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:00:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:00:17.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v636: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Sep 30 18:00:17 compute-0 podman[275856]: 2025-09-30 18:00:17.940868744 +0000 UTC m=+1.760822257 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Sep 30 18:00:18 compute-0 podman[275919]: 2025-09-30 18:00:18.083973901 +0000 UTC m=+0.043690906 container create ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=edpm, container_name=podman_exporter, managed_by=edpm_ansible)
Sep 30 18:00:18 compute-0 podman[275919]: 2025-09-30 18:00:18.059702791 +0000 UTC m=+0.019419816 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Sep 30 18:00:18 compute-0 python3[275824]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
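The PODMAN-CONTAINER-DEBUG line shows the full podman create invocation that edpm_container_manage assembles from the config_data dict logged just above. A rough sketch of that mapping, with key names taken from the logged dict rather than from the role's source, so treat it as an assumption:

    # Illustrative reconstruction of the create flags from a config_data dict.
    def podman_create_args(name, cfg):
        args = ["podman", "create", "--name", name]
        for key, val in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={val}"]
        if cfg.get("net"):
            args += ["--network", cfg["net"]]
        for port in cfg.get("ports", []):
            args += ["--publish", port]
        if cfg.get("privileged"):
            args.append("--privileged=True")
        if cfg.get("user"):
            args += ["--user", cfg["user"]]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        return args + [cfg["image"]] + cfg.get("command", [])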
Sep 30 18:00:18 compute-0 sudo[275822]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:18 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f214000a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:18 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:00:18.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:00:18] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 18:00:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:00:18] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 18:00:18 compute-0 ceph-mon[73755]: pgmap v636: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Sep 30 18:00:18 compute-0 sudo[276108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvvrxmavamlzkbsvydvsxqahultypenl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255218.6381822-1032-154022650000924/AnsiballZ_stat.py'
Sep 30 18:00:18 compute-0 sudo[276108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:19 compute-0 python3.9[276110]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
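The ansible.builtin.stat task here only gathers metadata (existence, mode, sha1 checksum, MIME type) for /etc/sysconfig/podman_drop_in. A local approximation of a subset of what the module returns, assuming the path from the log:

    import hashlib, os

    path = "/etc/sysconfig/podman_drop_in"
    if os.path.exists(path):
        st = os.stat(path)
        with open(path, "rb") as fh:
            digest = hashlib.sha1(fh.read()).hexdigest()
        print({"exists": True, "size": st.st_size,
               "mode": oct(st.st_mode & 0o777), "checksum": digest})
    else:
        print({"exists": False})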
Sep 30 18:00:19 compute-0 sudo[276108]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:00:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:00:19.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:00:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v637: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Sep 30 18:00:19 compute-0 sudo[276264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwvobhygepcucdagwbduxucstfvkihug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255219.5176768-1050-362490384273/AnsiballZ_file.py'
Sep 30 18:00:19 compute-0 sudo[276264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:20 compute-0 python3.9[276266]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 18:00:20 compute-0 sudo[276264]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:20 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118003630 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:00:20 compute-0 sudo[276415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjwxpbdchbednaghoeumiqoahmqpzlhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255220.1206796-1050-278850673089711/AnsiballZ_copy.py'
Sep 30 18:00:20 compute-0 sudo[276415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:20 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:00:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:00:20.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:00:20 compute-0 python3.9[276417]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759255220.1206796-1050-278850673089711/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 18:00:20 compute-0 sudo[276415]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:20 compute-0 ceph-mon[73755]: pgmap v637: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Sep 30 18:00:21 compute-0 sudo[276492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sorlljwsnqzmhqtyrrlnbzsgvyzatyno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255220.1206796-1050-278850673089711/AnsiballZ_systemd.py'
Sep 30 18:00:21 compute-0 sudo[276492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:00:21.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v638: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Sep 30 18:00:21 compute-0 python3.9[276494]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Sep 30 18:00:21 compute-0 systemd[1]: Reloading.
Sep 30 18:00:21 compute-0 systemd-sysv-generator[276524]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 18:00:21 compute-0 systemd-rc-local-generator[276517]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 18:00:22 compute-0 sudo[276492]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:00:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:00:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:22 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f214000a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:22 compute-0 sudo[276605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwqevjqfnysyrthkwprcfungklbedsye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255220.1206796-1050-278850673089711/AnsiballZ_systemd.py'
Sep 30 18:00:22 compute-0 sudo[276605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:22 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:00:22.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:22 compute-0 python3.9[276607]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
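This ansible-systemd task enables and restarts edpm_podman_exporter.service; the Reloading and generator lines that follow are the ordinary side effects. The same operation done directly, assuming systemctl is on PATH:

    import subprocess

    unit = "edpm_podman_exporter.service"
    subprocess.run(["systemctl", "enable", unit], check=True)
    subprocess.run(["systemctl", "restart", unit], check=True)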
Sep 30 18:00:22 compute-0 systemd[1]: Reloading.
Sep 30 18:00:22 compute-0 ceph-mon[73755]: pgmap v638: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Sep 30 18:00:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:00:22 compute-0 systemd-rc-local-generator[276636]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 18:00:23 compute-0 systemd-sysv-generator[276641]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 18:00:23 compute-0 systemd[1]: Starting podman_exporter container...
Sep 30 18:00:23 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:00:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ea113768f74d2fa7bd20ce95eacac370dc25423b454a44ac45a649ad81075bc/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Sep 30 18:00:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ea113768f74d2fa7bd20ce95eacac370dc25423b454a44ac45a649ad81075bc/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 18:00:23 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff.
Sep 30 18:00:23 compute-0 podman[276647]: 2025-09-30 18:00:23.398442662 +0000 UTC m=+0.127212965 container init ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Sep 30 18:00:23 compute-0 podman_exporter[276661]: ts=2025-09-30T18:00:23.411Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Sep 30 18:00:23 compute-0 podman_exporter[276661]: ts=2025-09-30T18:00:23.411Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Sep 30 18:00:23 compute-0 podman_exporter[276661]: ts=2025-09-30T18:00:23.411Z caller=handler.go:94 level=info msg="enabled collectors"
Sep 30 18:00:23 compute-0 podman_exporter[276661]: ts=2025-09-30T18:00:23.411Z caller=handler.go:105 level=info collector=container
Sep 30 18:00:23 compute-0 podman[276647]: 2025-09-30 18:00:23.423929814 +0000 UTC m=+0.152700107 container start ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:00:23 compute-0 podman[276647]: podman_exporter
Sep 30 18:00:23 compute-0 systemd[1]: Starting Podman API Service...
Sep 30 18:00:23 compute-0 systemd[1]: Started Podman API Service.
Sep 30 18:00:23 compute-0 systemd[1]: Started podman_exporter container.
Sep 30 18:00:23 compute-0 podman[276673]: time="2025-09-30T18:00:23Z" level=info msg="/usr/bin/podman filtering at log level info"
Sep 30 18:00:23 compute-0 podman[276673]: time="2025-09-30T18:00:23Z" level=info msg="Setting parallel job count to 25"
Sep 30 18:00:23 compute-0 podman[276673]: time="2025-09-30T18:00:23Z" level=info msg="Using sqlite as database backend"
Sep 30 18:00:23 compute-0 podman[276673]: time="2025-09-30T18:00:23Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Sep 30 18:00:23 compute-0 podman[276673]: time="2025-09-30T18:00:23Z" level=info msg="Using systemd socket activation to determine API endpoint"
Sep 30 18:00:23 compute-0 podman[276673]: time="2025-09-30T18:00:23Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Sep 30 18:00:23 compute-0 sudo[276605]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:23 compute-0 podman[276673]: @ - - [30/Sep/2025:18:00:23 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Sep 30 18:00:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:00:23.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:23 compute-0 podman[276673]: time="2025-09-30T18:00:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:00:23 compute-0 podman[276671]: 2025-09-30 18:00:23.511953591 +0000 UTC m=+0.079317731 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Sep 30 18:00:23 compute-0 systemd[1]: ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff-527d78f1e3184a1f.service: Main process exited, code=exited, status=1/FAILURE
Sep 30 18:00:23 compute-0 systemd[1]: ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff-527d78f1e3184a1f.service: Failed with result 'exit-code'.
Sep 30 18:00:23 compute-0 podman[276673]: @ - - [30/Sep/2025:18:00:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 38918 "" "Go-http-client/1.1"
Sep 30 18:00:23 compute-0 podman_exporter[276661]: ts=2025-09-30T18:00:23.520Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Sep 30 18:00:23 compute-0 podman_exporter[276661]: ts=2025-09-30T18:00:23.520Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Sep 30 18:00:23 compute-0 podman_exporter[276661]: ts=2025-09-30T18:00:23.522Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
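At this point the exporter is up: listening on :9882 with TLS taken from the web.config.file mounted at /etc/podman_exporter/podman_exporter.yaml. A hedged scrape of the metrics endpoint; net=host means it answers on the node itself, and the CA bundle path below is a placeholder, not something the log reveals:

    import requests

    resp = requests.get("https://localhost:9882/metrics",
                        verify="/path/to/ca-bundle.pem",  # placeholder trust path
                        timeout=5)
    print(resp.status_code, len(resp.text.splitlines()), "metric lines")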
Sep 30 18:00:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:00:23.598Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:00:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v639: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Sep 30 18:00:24 compute-0 sudo[276859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agnuehdugnzmdcprfplgiazdezhqlala ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255223.745462-1098-201175420486674/AnsiballZ_systemd.py'
Sep 30 18:00:24 compute-0 sudo[276859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:24 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118003630 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:24 compute-0 python3.9[276861]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 18:00:24 compute-0 systemd[1]: Stopping podman_exporter container...
Sep 30 18:00:24 compute-0 podman[276673]: @ - - [30/Sep/2025:18:00:23 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 1449 "" "Go-http-client/1.1"
Sep 30 18:00:24 compute-0 systemd[1]: libpod-ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff.scope: Deactivated successfully.
Sep 30 18:00:24 compute-0 podman[276871]: 2025-09-30 18:00:24.549618813 +0000 UTC m=+0.103003676 container died ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Sep 30 18:00:24 compute-0 systemd[1]: ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff-527d78f1e3184a1f.timer: Deactivated successfully.
Sep 30 18:00:24 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff.
Sep 30 18:00:24 compute-0 podman[276863]: 2025-09-30 18:00:24.563881973 +0000 UTC m=+0.152967284 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS)
Sep 30 18:00:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff-userdata-shm.mount: Deactivated successfully.
Sep 30 18:00:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ea113768f74d2fa7bd20ce95eacac370dc25423b454a44ac45a649ad81075bc-merged.mount: Deactivated successfully.
Sep 30 18:00:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:24 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:24 compute-0 podman[276871]: 2025-09-30 18:00:24.688725416 +0000 UTC m=+0.242110309 container cleanup ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Sep 30 18:00:24 compute-0 podman[276871]: podman_exporter
Sep 30 18:00:24 compute-0 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Sep 30 18:00:24 compute-0 podman[276919]: podman_exporter
Sep 30 18:00:24 compute-0 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Sep 30 18:00:24 compute-0 systemd[1]: Stopped podman_exporter container.
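Stopping the container makes the unit's main process exit with status 2, so systemd records edpm_podman_exporter.service as failed before the follow-up start below. A small post-mortem check one might run, using the same healthcheck command the transient healthcheck units above invoke:

    import subprocess

    subprocess.run(["systemctl", "is-failed", "edpm_podman_exporter.service"])
    probe = subprocess.run(["podman", "healthcheck", "run", "podman_exporter"])
    print("healthy" if probe.returncode == 0 else "unhealthy")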
Sep 30 18:00:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:00:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:00:24.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:00:24 compute-0 systemd[1]: Starting podman_exporter container...
Sep 30 18:00:24 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:00:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ea113768f74d2fa7bd20ce95eacac370dc25423b454a44ac45a649ad81075bc/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Sep 30 18:00:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ea113768f74d2fa7bd20ce95eacac370dc25423b454a44ac45a649ad81075bc/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 18:00:24 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff.
Sep 30 18:00:24 compute-0 podman[276932]: 2025-09-30 18:00:24.927661173 +0000 UTC m=+0.132743979 container init ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Sep 30 18:00:24 compute-0 podman_exporter[276945]: ts=2025-09-30T18:00:24.941Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Sep 30 18:00:24 compute-0 podman_exporter[276945]: ts=2025-09-30T18:00:24.941Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Sep 30 18:00:24 compute-0 podman_exporter[276945]: ts=2025-09-30T18:00:24.941Z caller=handler.go:94 level=info msg="enabled collectors"
Sep 30 18:00:24 compute-0 podman_exporter[276945]: ts=2025-09-30T18:00:24.941Z caller=handler.go:105 level=info collector=container
Sep 30 18:00:24 compute-0 podman[276673]: @ - - [30/Sep/2025:18:00:24 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Sep 30 18:00:24 compute-0 podman[276673]: time="2025-09-30T18:00:24Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:00:24 compute-0 podman[276932]: 2025-09-30 18:00:24.959184911 +0000 UTC m=+0.164267737 container start ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Sep 30 18:00:24 compute-0 podman[276932]: podman_exporter
Sep 30 18:00:24 compute-0 ceph-mon[73755]: pgmap v639: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Sep 30 18:00:24 compute-0 systemd[1]: Started podman_exporter container.
Sep 30 18:00:24 compute-0 podman[276673]: @ - - [30/Sep/2025:18:00:24 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 38920 "" "Go-http-client/1.1"
Sep 30 18:00:24 compute-0 podman_exporter[276945]: ts=2025-09-30T18:00:24.996Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Sep 30 18:00:24 compute-0 podman_exporter[276945]: ts=2025-09-30T18:00:24.996Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Sep 30 18:00:24 compute-0 podman_exporter[276945]: ts=2025-09-30T18:00:24.996Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Sep 30 18:00:25 compute-0 sudo[276859]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:25 compute-0 podman[276957]: 2025-09-30 18:00:25.025161915 +0000 UTC m=+0.056200691 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Sep 30 18:00:25 compute-0 sudo[277006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:00:25 compute-0 sudo[277006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:00:25 compute-0 sudo[277006]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:25 compute-0 sudo[277031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:00:25 compute-0 sudo[277031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:00:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:00:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:00:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:00:25.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:00:25 compute-0 sudo[277109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:00:25 compute-0 sudo[277109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:00:25 compute-0 sudo[277109]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:25 compute-0 sudo[277222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fckuilmzpvcqhkhxdyvujpecdytorvqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255225.3904967-1114-53678387946998/AnsiballZ_stat.py'
Sep 30 18:00:25 compute-0 sudo[277222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v640: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:25 compute-0 sshd-session[276692]: Invalid user clamav from 45.252.249.158 port 52174
Sep 30 18:00:25 compute-0 sshd-session[276692]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:00:25 compute-0 sshd-session[276692]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158
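The sshd-session entries are unrelated noise: a failed login attempt for a non-existent "clamav" user from 45.252.249.158 interleaved with the deployment. A hedged helper for pulling such attempts out of the journal, keyed on the syslog identifier seen in these lines:

    import re
    import subprocess

    out = subprocess.run(["journalctl", "-t", "sshd-session", "--no-pager"],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if re.search(r"Invalid user|authentication failure", line):
            print(line)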
Sep 30 18:00:25 compute-0 python3.9[277224]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 18:00:25 compute-0 sudo[277222]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:25 compute-0 sudo[277031]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:00:26 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:00:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:00:26 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:00:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:00:26 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:00:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:00:26 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:00:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:00:26 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:00:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:00:26 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:00:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:00:26 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:00:26 compute-0 sudo[277385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zifrdcfnhkecwbdvciaclnuqoaauhuwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255225.3904967-1114-53678387946998/AnsiballZ_copy.py'
Sep 30 18:00:26 compute-0 sudo[277385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:26 compute-0 sudo[277340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:00:26 compute-0 sudo[277340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:00:26 compute-0 sudo[277340]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:26 compute-0 sudo[277390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:00:26 compute-0 sudo[277390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
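cephadm now runs ceph-volume lvm batch against the pre-created LV /dev/ceph_vg0/ceph_lv0, reading its configuration from stdin via "--config-json -". The payload shape below is an assumption, inferred from the mon commands dispatched just before ("config generate-minimal-conf", "auth get client.bootstrap-osd"); the exact keys are not shown in the log:

    import json

    payload = json.dumps({
        "config": "# minimal ceph.conf from 'config generate-minimal-conf'",
        "keyring": "# client.bootstrap-osd keyring from 'auth get'",
    })
    print(payload)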
Sep 30 18:00:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:26 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f214000a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:26 compute-0 python3.9[277388]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759255225.3904967-1114-53678387946998/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Sep 30 18:00:26 compute-0 sudo[277385]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:26 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:26 compute-0 podman[277480]: 2025-09-30 18:00:26.748770795 +0000 UTC m=+0.059932107 container create dbaae3c744510c4c5e0b1d18aef6d3616bfe0aa9ae491ebe770272fe2d6b831f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_morse, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:00:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:00:26.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:26 compute-0 systemd[1]: Started libpod-conmon-dbaae3c744510c4c5e0b1d18aef6d3616bfe0aa9ae491ebe770272fe2d6b831f.scope.
Sep 30 18:00:26 compute-0 podman[277480]: 2025-09-30 18:00:26.72120929 +0000 UTC m=+0.032370682 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:00:26 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:00:26 compute-0 podman[277480]: 2025-09-30 18:00:26.867783537 +0000 UTC m=+0.178944929 container init dbaae3c744510c4c5e0b1d18aef6d3616bfe0aa9ae491ebe770272fe2d6b831f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_morse, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Sep 30 18:00:26 compute-0 podman[277480]: 2025-09-30 18:00:26.876469452 +0000 UTC m=+0.187630794 container start dbaae3c744510c4c5e0b1d18aef6d3616bfe0aa9ae491ebe770272fe2d6b831f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:00:26 compute-0 podman[277480]: 2025-09-30 18:00:26.880513888 +0000 UTC m=+0.191675320 container attach dbaae3c744510c4c5e0b1d18aef6d3616bfe0aa9ae491ebe770272fe2d6b831f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_morse, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:00:26 compute-0 dazzling_morse[277496]: 167 167
Sep 30 18:00:26 compute-0 systemd[1]: libpod-dbaae3c744510c4c5e0b1d18aef6d3616bfe0aa9ae491ebe770272fe2d6b831f.scope: Deactivated successfully.
Sep 30 18:00:26 compute-0 podman[277480]: 2025-09-30 18:00:26.886553164 +0000 UTC m=+0.197714536 container died dbaae3c744510c4c5e0b1d18aef6d3616bfe0aa9ae491ebe770272fe2d6b831f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_morse, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:00:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-79c44e9828524b50cfff96c1988601c07f5816644eea0e77e81af071fd569c14-merged.mount: Deactivated successfully.
Sep 30 18:00:26 compute-0 podman[277480]: 2025-09-30 18:00:26.947782115 +0000 UTC m=+0.258943427 container remove dbaae3c744510c4c5e0b1d18aef6d3616bfe0aa9ae491ebe770272fe2d6b831f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 18:00:26 compute-0 systemd[1]: libpod-conmon-dbaae3c744510c4c5e0b1d18aef6d3616bfe0aa9ae491ebe770272fe2d6b831f.scope: Deactivated successfully.
Sep 30 18:00:26 compute-0 ceph-mon[73755]: pgmap v640: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:26 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:00:26 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:00:26 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:00:26 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:00:26 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:00:26 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:00:26 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:00:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:00:27.078Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:00:27 compute-0 podman[277594]: 2025-09-30 18:00:27.117557765 +0000 UTC m=+0.054267951 container create 45dcdc4d86095cb4b3ce4982a7851601f935ae428ba057aa6b9186fbd4b24413 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Sep 30 18:00:27 compute-0 systemd[1]: Started libpod-conmon-45dcdc4d86095cb4b3ce4982a7851601f935ae428ba057aa6b9186fbd4b24413.scope.
Sep 30 18:00:27 compute-0 podman[277594]: 2025-09-30 18:00:27.095514052 +0000 UTC m=+0.032224268 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:00:27 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:00:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52d229f570964df9678365f6575ff156e958fc8a2e7615a90f4156660c1da190/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:00:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52d229f570964df9678365f6575ff156e958fc8a2e7615a90f4156660c1da190/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:00:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52d229f570964df9678365f6575ff156e958fc8a2e7615a90f4156660c1da190/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:00:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52d229f570964df9678365f6575ff156e958fc8a2e7615a90f4156660c1da190/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:00:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52d229f570964df9678365f6575ff156e958fc8a2e7615a90f4156660c1da190/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:00:27 compute-0 podman[277594]: 2025-09-30 18:00:27.213982299 +0000 UTC m=+0.150692485 container init 45dcdc4d86095cb4b3ce4982a7851601f935ae428ba057aa6b9186fbd4b24413 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_driscoll, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:00:27 compute-0 podman[277594]: 2025-09-30 18:00:27.229891652 +0000 UTC m=+0.166601838 container start 45dcdc4d86095cb4b3ce4982a7851601f935ae428ba057aa6b9186fbd4b24413 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_driscoll, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:00:27 compute-0 podman[277594]: 2025-09-30 18:00:27.236921125 +0000 UTC m=+0.173631311 container attach 45dcdc4d86095cb4b3ce4982a7851601f935ae428ba057aa6b9186fbd4b24413 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_driscoll, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Sep 30 18:00:27 compute-0 sudo[277666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzpfayedjffievvoljlcbjquxtebgdzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255226.9248621-1148-137507958137909/AnsiballZ_container_config_data.py'
Sep 30 18:00:27 compute-0 sudo[277666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:27 compute-0 python3.9[277669]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Sep 30 18:00:27 compute-0 sudo[277666]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:00:27.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:27 compute-0 affectionate_driscoll[277636]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:00:27 compute-0 affectionate_driscoll[277636]: --> All data devices are unavailable
Sep 30 18:00:27 compute-0 systemd[1]: libpod-45dcdc4d86095cb4b3ce4982a7851601f935ae428ba057aa6b9186fbd4b24413.scope: Deactivated successfully.
Sep 30 18:00:27 compute-0 podman[277594]: 2025-09-30 18:00:27.590538209 +0000 UTC m=+0.527248395 container died 45dcdc4d86095cb4b3ce4982a7851601f935ae428ba057aa6b9186fbd4b24413 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_driscoll, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True)
Sep 30 18:00:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-52d229f570964df9678365f6575ff156e958fc8a2e7615a90f4156660c1da190-merged.mount: Deactivated successfully.
Sep 30 18:00:27 compute-0 podman[277594]: 2025-09-30 18:00:27.648107055 +0000 UTC m=+0.584817251 container remove 45dcdc4d86095cb4b3ce4982a7851601f935ae428ba057aa6b9186fbd4b24413 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:00:27 compute-0 systemd[1]: libpod-conmon-45dcdc4d86095cb4b3ce4982a7851601f935ae428ba057aa6b9186fbd4b24413.scope: Deactivated successfully.
Sep 30 18:00:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v641: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:27 compute-0 sudo[277390]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:27 compute-0 sudo[277720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:00:27 compute-0 sudo[277720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:00:27 compute-0 sudo[277720]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:27 compute-0 sudo[277768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:00:27 compute-0 sudo[277768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:00:28 compute-0 sudo[277902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjnoakhwxzlirptehccpxqefsmnfpbei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255227.7708454-1166-62413003867762/AnsiballZ_container_config_hash.py'
Sep 30 18:00:28 compute-0 sudo[277902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:28 compute-0 sshd-session[276692]: Failed password for invalid user clamav from 45.252.249.158 port 52174 ssh2
Sep 30 18:00:28 compute-0 podman[277939]: 2025-09-30 18:00:28.224017374 +0000 UTC m=+0.039032185 container create 9f2ef818e536b1e3c129479e7340bf630ebd3bb760f9061bdab1765f3f92bfd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325)
Sep 30 18:00:28 compute-0 systemd[1]: Started libpod-conmon-9f2ef818e536b1e3c129479e7340bf630ebd3bb760f9061bdab1765f3f92bfd9.scope.
Sep 30 18:00:28 compute-0 python3.9[277910]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Sep 30 18:00:28 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:00:28 compute-0 podman[277939]: 2025-09-30 18:00:28.207963227 +0000 UTC m=+0.022978048 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:00:28 compute-0 podman[277939]: 2025-09-30 18:00:28.307240255 +0000 UTC m=+0.122255076 container init 9f2ef818e536b1e3c129479e7340bf630ebd3bb760f9061bdab1765f3f92bfd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_lehmann, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:00:28 compute-0 sudo[277902]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:28 compute-0 podman[277939]: 2025-09-30 18:00:28.318713653 +0000 UTC m=+0.133728454 container start 9f2ef818e536b1e3c129479e7340bf630ebd3bb760f9061bdab1765f3f92bfd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_lehmann, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Sep 30 18:00:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:28 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:28 compute-0 podman[277939]: 2025-09-30 18:00:28.32241697 +0000 UTC m=+0.137431791 container attach 9f2ef818e536b1e3c129479e7340bf630ebd3bb760f9061bdab1765f3f92bfd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_lehmann, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:00:28 compute-0 gallant_lehmann[277956]: 167 167
Sep 30 18:00:28 compute-0 systemd[1]: libpod-9f2ef818e536b1e3c129479e7340bf630ebd3bb760f9061bdab1765f3f92bfd9.scope: Deactivated successfully.
Sep 30 18:00:28 compute-0 podman[277939]: 2025-09-30 18:00:28.324807202 +0000 UTC m=+0.139822003 container died 9f2ef818e536b1e3c129479e7340bf630ebd3bb760f9061bdab1765f3f92bfd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_lehmann, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:00:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d9b7d1619c06a7573b813896fdbdc9afad2aad4ec6382f3168ff29d4bd653c3-merged.mount: Deactivated successfully.
Sep 30 18:00:28 compute-0 podman[277939]: 2025-09-30 18:00:28.373401844 +0000 UTC m=+0.188416665 container remove 9f2ef818e536b1e3c129479e7340bf630ebd3bb760f9061bdab1765f3f92bfd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:00:28 compute-0 systemd[1]: libpod-conmon-9f2ef818e536b1e3c129479e7340bf630ebd3bb760f9061bdab1765f3f92bfd9.scope: Deactivated successfully.
Sep 30 18:00:28 compute-0 podman[278003]: 2025-09-30 18:00:28.576800187 +0000 UTC m=+0.048128031 container create ac5ea5bcdd3b25a0213b51627850dcf3fc6575f1d4dbc7a3f84f9e3a858df207 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_ishizaka, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:00:28 compute-0 systemd[1]: Started libpod-conmon-ac5ea5bcdd3b25a0213b51627850dcf3fc6575f1d4dbc7a3f84f9e3a858df207.scope.
Sep 30 18:00:28 compute-0 podman[278003]: 2025-09-30 18:00:28.553988155 +0000 UTC m=+0.025316009 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:00:28 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:00:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:28 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f214000a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c34a2e3d8ade5a186e06b8b408b4709fc1194d20e61599c393f38cfe3e2553b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:00:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c34a2e3d8ade5a186e06b8b408b4709fc1194d20e61599c393f38cfe3e2553b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:00:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c34a2e3d8ade5a186e06b8b408b4709fc1194d20e61599c393f38cfe3e2553b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:00:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c34a2e3d8ade5a186e06b8b408b4709fc1194d20e61599c393f38cfe3e2553b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:00:28 compute-0 podman[278003]: 2025-09-30 18:00:28.672294558 +0000 UTC m=+0.143622432 container init ac5ea5bcdd3b25a0213b51627850dcf3fc6575f1d4dbc7a3f84f9e3a858df207 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:00:28 compute-0 podman[278003]: 2025-09-30 18:00:28.680235714 +0000 UTC m=+0.151563528 container start ac5ea5bcdd3b25a0213b51627850dcf3fc6575f1d4dbc7a3f84f9e3a858df207 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 18:00:28 compute-0 podman[278003]: 2025-09-30 18:00:28.683649143 +0000 UTC m=+0.154976957 container attach ac5ea5bcdd3b25a0213b51627850dcf3fc6575f1d4dbc7a3f84f9e3a858df207 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:00:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:00:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:00:28.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:00:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:00:28] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 18:00:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:00:28] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 18:00:28 compute-0 mystifying_ishizaka[278019]: {
Sep 30 18:00:28 compute-0 mystifying_ishizaka[278019]:     "0": [
Sep 30 18:00:28 compute-0 mystifying_ishizaka[278019]:         {
Sep 30 18:00:28 compute-0 mystifying_ishizaka[278019]:             "devices": [
Sep 30 18:00:28 compute-0 mystifying_ishizaka[278019]:                 "/dev/loop3"
Sep 30 18:00:28 compute-0 mystifying_ishizaka[278019]:             ],
Sep 30 18:00:28 compute-0 mystifying_ishizaka[278019]:             "lv_name": "ceph_lv0",
Sep 30 18:00:28 compute-0 mystifying_ishizaka[278019]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:00:28 compute-0 mystifying_ishizaka[278019]:             "lv_size": "21470642176",
Sep 30 18:00:28 compute-0 mystifying_ishizaka[278019]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:00:28 compute-0 mystifying_ishizaka[278019]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:00:28 compute-0 mystifying_ishizaka[278019]:             "name": "ceph_lv0",
Sep 30 18:00:28 compute-0 mystifying_ishizaka[278019]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:00:28 compute-0 mystifying_ishizaka[278019]:             "tags": {
Sep 30 18:00:28 compute-0 mystifying_ishizaka[278019]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:00:28 compute-0 mystifying_ishizaka[278019]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:00:28 compute-0 mystifying_ishizaka[278019]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:00:28 compute-0 mystifying_ishizaka[278019]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:00:28 compute-0 mystifying_ishizaka[278019]:                 "ceph.cluster_name": "ceph",
Sep 30 18:00:28 compute-0 mystifying_ishizaka[278019]:                 "ceph.crush_device_class": "",
Sep 30 18:00:28 compute-0 mystifying_ishizaka[278019]:                 "ceph.encrypted": "0",
Sep 30 18:00:28 compute-0 mystifying_ishizaka[278019]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:00:28 compute-0 mystifying_ishizaka[278019]:                 "ceph.osd_id": "0",
Sep 30 18:00:28 compute-0 mystifying_ishizaka[278019]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:00:28 compute-0 mystifying_ishizaka[278019]:                 "ceph.type": "block",
Sep 30 18:00:28 compute-0 mystifying_ishizaka[278019]:                 "ceph.vdo": "0",
Sep 30 18:00:28 compute-0 mystifying_ishizaka[278019]:                 "ceph.with_tpm": "0"
Sep 30 18:00:28 compute-0 mystifying_ishizaka[278019]:             },
Sep 30 18:00:28 compute-0 mystifying_ishizaka[278019]:             "type": "block",
Sep 30 18:00:28 compute-0 mystifying_ishizaka[278019]:             "vg_name": "ceph_vg0"
Sep 30 18:00:28 compute-0 mystifying_ishizaka[278019]:         }
Sep 30 18:00:28 compute-0 mystifying_ishizaka[278019]:     ]
Sep 30 18:00:28 compute-0 mystifying_ishizaka[278019]: }
Sep 30 18:00:28 compute-0 systemd[1]: libpod-ac5ea5bcdd3b25a0213b51627850dcf3fc6575f1d4dbc7a3f84f9e3a858df207.scope: Deactivated successfully.
Sep 30 18:00:28 compute-0 podman[278003]: 2025-09-30 18:00:28.985997526 +0000 UTC m=+0.457325340 container died ac5ea5bcdd3b25a0213b51627850dcf3fc6575f1d4dbc7a3f84f9e3a858df207 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_ishizaka, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Sep 30 18:00:29 compute-0 ceph-mon[73755]: pgmap v641: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c34a2e3d8ade5a186e06b8b408b4709fc1194d20e61599c393f38cfe3e2553b-merged.mount: Deactivated successfully.
Sep 30 18:00:29 compute-0 podman[278003]: 2025-09-30 18:00:29.036432116 +0000 UTC m=+0.507759920 container remove ac5ea5bcdd3b25a0213b51627850dcf3fc6575f1d4dbc7a3f84f9e3a858df207 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_ishizaka, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:00:29 compute-0 systemd[1]: libpod-conmon-ac5ea5bcdd3b25a0213b51627850dcf3fc6575f1d4dbc7a3f84f9e3a858df207.scope: Deactivated successfully.
Sep 30 18:00:29 compute-0 sudo[277768]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:29 compute-0 sshd-session[276692]: Received disconnect from 45.252.249.158 port 52174:11: Bye Bye [preauth]
Sep 30 18:00:29 compute-0 sshd-session[276692]: Disconnected from invalid user clamav 45.252.249.158 port 52174 [preauth]
Sep 30 18:00:29 compute-0 sudo[278165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjuqjjkqndewdlurklorhmrvqafwufbw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759255228.8152616-1186-129704776886034/AnsiballZ_edpm_container_manage.py'
Sep 30 18:00:29 compute-0 sudo[278165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:29 compute-0 sudo[278166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:00:29 compute-0 sudo[278166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:00:29 compute-0 sudo[278166]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:29 compute-0 sudo[278193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:00:29 compute-0 sudo[278193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:00:29 compute-0 python3[278184]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Sep 30 18:00:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:00:29.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:29 compute-0 podman[278284]: 2025-09-30 18:00:29.611992206 +0000 UTC m=+0.041110639 container create b8e74f7d73f25bc5be51889f88995e9e42908007d2ba715fc0fb82ac6eb958a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_sammet, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Sep 30 18:00:29 compute-0 systemd[1]: Started libpod-conmon-b8e74f7d73f25bc5be51889f88995e9e42908007d2ba715fc0fb82ac6eb958a4.scope.
Sep 30 18:00:29 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:00:29 compute-0 podman[278284]: 2025-09-30 18:00:29.596090663 +0000 UTC m=+0.025209116 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:00:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v642: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:00:29 compute-0 podman[278284]: 2025-09-30 18:00:29.700392372 +0000 UTC m=+0.129510865 container init b8e74f7d73f25bc5be51889f88995e9e42908007d2ba715fc0fb82ac6eb958a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_sammet, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default)
Sep 30 18:00:29 compute-0 podman[278284]: 2025-09-30 18:00:29.708698448 +0000 UTC m=+0.137816891 container start b8e74f7d73f25bc5be51889f88995e9e42908007d2ba715fc0fb82ac6eb958a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_sammet, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:00:29 compute-0 podman[278284]: 2025-09-30 18:00:29.712435405 +0000 UTC m=+0.141553928 container attach b8e74f7d73f25bc5be51889f88995e9e42908007d2ba715fc0fb82ac6eb958a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_sammet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Sep 30 18:00:29 compute-0 unruffled_sammet[278302]: 167 167
Sep 30 18:00:29 compute-0 systemd[1]: libpod-b8e74f7d73f25bc5be51889f88995e9e42908007d2ba715fc0fb82ac6eb958a4.scope: Deactivated successfully.
Sep 30 18:00:29 compute-0 podman[278284]: 2025-09-30 18:00:29.714902969 +0000 UTC m=+0.144021402 container died b8e74f7d73f25bc5be51889f88995e9e42908007d2ba715fc0fb82ac6eb958a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:00:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2fd1f2dc6ba2ddbe93dfce6259ac85c08c2a42c987586f252b8fb33ae62bfad-merged.mount: Deactivated successfully.
Sep 30 18:00:29 compute-0 podman[278284]: 2025-09-30 18:00:29.766394687 +0000 UTC m=+0.195513130 container remove b8e74f7d73f25bc5be51889f88995e9e42908007d2ba715fc0fb82ac6eb958a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_sammet, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:00:29 compute-0 systemd[1]: libpod-conmon-b8e74f7d73f25bc5be51889f88995e9e42908007d2ba715fc0fb82ac6eb958a4.scope: Deactivated successfully.
Sep 30 18:00:29 compute-0 podman[278326]: 2025-09-30 18:00:29.965732404 +0000 UTC m=+0.046688643 container create 4af1b873fc2d0b39740e2d5f889b317bf3cc86b551a0751a87264f0ae5eb5a03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_ardinghelli, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:00:30 compute-0 systemd[1]: Started libpod-conmon-4af1b873fc2d0b39740e2d5f889b317bf3cc86b551a0751a87264f0ae5eb5a03.scope.
Sep 30 18:00:30 compute-0 podman[278326]: 2025-09-30 18:00:29.94361489 +0000 UTC m=+0.024571169 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:00:30 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:00:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1425c765acf5c680e19c58051d220d0411d8158241a19eef1f4134cd775b2b49/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:00:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1425c765acf5c680e19c58051d220d0411d8158241a19eef1f4134cd775b2b49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:00:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1425c765acf5c680e19c58051d220d0411d8158241a19eef1f4134cd775b2b49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:00:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1425c765acf5c680e19c58051d220d0411d8158241a19eef1f4134cd775b2b49/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:00:30 compute-0 podman[278326]: 2025-09-30 18:00:30.060881196 +0000 UTC m=+0.141837545 container init 4af1b873fc2d0b39740e2d5f889b317bf3cc86b551a0751a87264f0ae5eb5a03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Sep 30 18:00:30 compute-0 podman[278326]: 2025-09-30 18:00:30.072113658 +0000 UTC m=+0.153069907 container start 4af1b873fc2d0b39740e2d5f889b317bf3cc86b551a0751a87264f0ae5eb5a03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_ardinghelli, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Sep 30 18:00:30 compute-0 podman[278326]: 2025-09-30 18:00:30.076456831 +0000 UTC m=+0.157413160 container attach 4af1b873fc2d0b39740e2d5f889b317bf3cc86b551a0751a87264f0ae5eb5a03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 18:00:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:30 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2134001220 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:00:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:30 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:30 compute-0 lvm[278431]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:00:30 compute-0 lvm[278431]: VG ceph_vg0 finished
Sep 30 18:00:30 compute-0 epic_ardinghelli[278342]: {}
Sep 30 18:00:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:00:30.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:30 compute-0 systemd[1]: libpod-4af1b873fc2d0b39740e2d5f889b317bf3cc86b551a0751a87264f0ae5eb5a03.scope: Deactivated successfully.
Sep 30 18:00:30 compute-0 systemd[1]: libpod-4af1b873fc2d0b39740e2d5f889b317bf3cc86b551a0751a87264f0ae5eb5a03.scope: Consumed 1.125s CPU time.
Sep 30 18:00:30 compute-0 podman[278326]: 2025-09-30 18:00:30.789398319 +0000 UTC m=+0.870354578 container died 4af1b873fc2d0b39740e2d5f889b317bf3cc86b551a0751a87264f0ae5eb5a03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_ardinghelli, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:00:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-1425c765acf5c680e19c58051d220d0411d8158241a19eef1f4134cd775b2b49-merged.mount: Deactivated successfully.
Sep 30 18:00:30 compute-0 podman[278326]: 2025-09-30 18:00:30.845120206 +0000 UTC m=+0.926076445 container remove 4af1b873fc2d0b39740e2d5f889b317bf3cc86b551a0751a87264f0ae5eb5a03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_ardinghelli, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Sep 30 18:00:30 compute-0 systemd[1]: libpod-conmon-4af1b873fc2d0b39740e2d5f889b317bf3cc86b551a0751a87264f0ae5eb5a03.scope: Deactivated successfully.
Sep 30 18:00:30 compute-0 sudo[278193]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:00:30 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:00:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:00:30 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:00:30 compute-0 sudo[278446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:00:30 compute-0 sudo[278446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:00:30 compute-0 sudo[278446]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:31 compute-0 ceph-mon[73755]: pgmap v642: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:00:31 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:00:31 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:00:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:00:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:00:31.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:00:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v643: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:32 compute-0 podman[278503]: 2025-09-30 18:00:32.131418327 +0000 UTC m=+0.160698725 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest)
Sep 30 18:00:32 compute-0 podman[278257]: 2025-09-30 18:00:32.251702841 +0000 UTC m=+2.782322840 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Sep 30 18:00:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:32 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:32 compute-0 podman[278561]: 2025-09-30 18:00:32.39678593 +0000 UTC m=+0.055951835 container create dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.openshift.tags=minimal rhel9, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, version=9.6, vcs-type=git, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9-minimal, io.openshift.expose-services=)
Sep 30 18:00:32 compute-0 podman[278561]: 2025-09-30 18:00:32.363441403 +0000 UTC m=+0.022607378 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Sep 30 18:00:32 compute-0 python3[278184]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Sep 30 18:00:32 compute-0 sudo[278165]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:32 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f214000a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:00:32.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:33 compute-0 ceph-mon[73755]: pgmap v643: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:33 compute-0 sudo[278751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppkgmcpsbrbcjbxzafsolgvpbjbiasxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255232.7431726-1202-25012190185255/AnsiballZ_stat.py'
Sep 30 18:00:33 compute-0 sudo[278751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:33 compute-0 python3.9[278753]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 18:00:33 compute-0 sudo[278751]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:00:33.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:00:33.599Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:00:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v644: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:33 compute-0 sudo[278907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jaklcyxsqighikxaajjsmfdtzhklfvhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255233.6735113-1220-19755787797385/AnsiballZ_file.py'
Sep 30 18:00:33 compute-0 sudo[278907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:34 compute-0 python3.9[278909]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 18:00:34 compute-0 sudo[278907]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:34 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2134002030 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:34 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:34 compute-0 sudo[279058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnelmplzqlpevilskxckmbgyvpipyvsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255234.291291-1220-277665631434747/AnsiballZ_copy.py'
Sep 30 18:00:34 compute-0 sudo[279058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:00:34.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:34 compute-0 python3.9[279060]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759255234.291291-1220-277665631434747/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 18:00:34 compute-0 sudo[279058]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:35 compute-0 ceph-mon[73755]: pgmap v644: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:35 compute-0 sudo[279134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzyotqdufprzvhfdttgssqpaqgxqghfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255234.291291-1220-277665631434747/AnsiballZ_systemd.py'
Sep 30 18:00:35 compute-0 sudo[279134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:00:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:00:35.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:35 compute-0 python3.9[279136]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Sep 30 18:00:35 compute-0 systemd[1]: Reloading.
Sep 30 18:00:35 compute-0 systemd-rc-local-generator[279160]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 18:00:35 compute-0 systemd-sysv-generator[279166]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 18:00:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v645: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:35 compute-0 sudo[279134]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:36 compute-0 ceph-mon[73755]: pgmap v645: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:36 compute-0 sudo[279246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edhvlgcyzcfnbxgzyuiahxxgktyrroxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255234.291291-1220-277665631434747/AnsiballZ_systemd.py'
Sep 30 18:00:36 compute-0 sudo[279246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:36 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:36 compute-0 python3.9[279248]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 18:00:36 compute-0 systemd[1]: Reloading.
Sep 30 18:00:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:36 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f214000a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:36 compute-0 systemd-rc-local-generator[279272]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 18:00:36 compute-0 systemd-sysv-generator[279279]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 18:00:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:00:36.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:37 compute-0 systemd[1]: Starting openstack_network_exporter container...
Sep 30 18:00:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:00:37.082Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:00:37 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:00:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1093dd6bacb03d1e4b41026de4c5d0026c0066656ff40546ca92c57ea99ba2a8/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Sep 30 18:00:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1093dd6bacb03d1e4b41026de4c5d0026c0066656ff40546ca92c57ea99ba2a8/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 18:00:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1093dd6bacb03d1e4b41026de4c5d0026c0066656ff40546ca92c57ea99ba2a8/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Sep 30 18:00:37 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047.
Sep 30 18:00:37 compute-0 podman[279287]: 2025-09-30 18:00:37.24513914 +0000 UTC m=+0.185884484 container init dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, distribution-scope=public, vcs-type=git, managed_by=edpm_ansible, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, config_id=edpm, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6)
Sep 30 18:00:37 compute-0 openstack_network_exporter[279303]: INFO    18:00:37 main.go:48: registering *bridge.Collector
Sep 30 18:00:37 compute-0 openstack_network_exporter[279303]: INFO    18:00:37 main.go:48: registering *coverage.Collector
Sep 30 18:00:37 compute-0 openstack_network_exporter[279303]: INFO    18:00:37 main.go:48: registering *datapath.Collector
Sep 30 18:00:37 compute-0 openstack_network_exporter[279303]: INFO    18:00:37 main.go:48: registering *iface.Collector
Sep 30 18:00:37 compute-0 openstack_network_exporter[279303]: INFO    18:00:37 main.go:48: registering *memory.Collector
Sep 30 18:00:37 compute-0 openstack_network_exporter[279303]: INFO    18:00:37 main.go:48: registering *ovnnorthd.Collector
Sep 30 18:00:37 compute-0 openstack_network_exporter[279303]: INFO    18:00:37 main.go:48: registering *ovn.Collector
Sep 30 18:00:37 compute-0 openstack_network_exporter[279303]: INFO    18:00:37 main.go:48: registering *ovsdbserver.Collector
Sep 30 18:00:37 compute-0 openstack_network_exporter[279303]: INFO    18:00:37 main.go:48: registering *pmd_perf.Collector
Sep 30 18:00:37 compute-0 openstack_network_exporter[279303]: INFO    18:00:37 main.go:48: registering *pmd_rxq.Collector
Sep 30 18:00:37 compute-0 openstack_network_exporter[279303]: INFO    18:00:37 main.go:48: registering *vswitch.Collector
Sep 30 18:00:37 compute-0 openstack_network_exporter[279303]: NOTICE  18:00:37 main.go:76: listening on https://:9105/metrics
Sep 30 18:00:37 compute-0 podman[279287]: 2025-09-30 18:00:37.272246665 +0000 UTC m=+0.212991979 container start dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, distribution-scope=public, release=1755695350, com.redhat.component=ubi9-minimal-container)
Sep 30 18:00:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:00:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:00:37 compute-0 podman[279287]: openstack_network_exporter
Sep 30 18:00:37 compute-0 systemd[1]: Started openstack_network_exporter container.
Sep 30 18:00:37 compute-0 sudo[279246]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:00:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:00:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:00:37 compute-0 podman[279313]: 2025-09-30 18:00:37.376320001 +0000 UTC m=+0.094716844 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, container_name=openstack_network_exporter, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., release=1755695350, architecture=x86_64, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git)
Sep 30 18:00:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:00:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:00:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:00:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:00:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:00:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:00:37.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:00:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v646: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:38 compute-0 sudo[279487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbiqwkxznhdgfedbgxjimqgjmerdapuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255237.9036574-1268-265503302630621/AnsiballZ_systemd.py'
Sep 30 18:00:38 compute-0 sudo[279487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2134002030 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:38 compute-0 ceph-mon[73755]: pgmap v646: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:38 compute-0 python3.9[279489]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 18:00:38 compute-0 systemd[1]: Stopping openstack_network_exporter container...
Sep 30 18:00:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:38 compute-0 systemd[1]: libpod-dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047.scope: Deactivated successfully.
Sep 30 18:00:38 compute-0 podman[279493]: 2025-09-30 18:00:38.696741999 +0000 UTC m=+0.066727166 container died dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, managed_by=edpm_ansible, distribution-scope=public, release=1755695350, version=9.6, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, vcs-type=git)
Sep 30 18:00:38 compute-0 systemd[1]: dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047-49963c62e2518e74.timer: Deactivated successfully.
Sep 30 18:00:38 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047.
Sep 30 18:00:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047-userdata-shm.mount: Deactivated successfully.
Sep 30 18:00:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-1093dd6bacb03d1e4b41026de4c5d0026c0066656ff40546ca92c57ea99ba2a8-merged.mount: Deactivated successfully.
Sep 30 18:00:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:00:38.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:00:38] "GET /metrics HTTP/1.1" 200 46519 "" "Prometheus/2.51.0"
Sep 30 18:00:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:00:38] "GET /metrics HTTP/1.1" 200 46519 "" "Prometheus/2.51.0"
Sep 30 18:00:38 compute-0 podman[279507]: 2025-09-30 18:00:38.79372813 +0000 UTC m=+0.074274062 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=iscsid, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Sep 30 18:00:39 compute-0 podman[279493]: 2025-09-30 18:00:39.31528514 +0000 UTC m=+0.685270307 container cleanup dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, managed_by=edpm_ansible, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, config_id=edpm, container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6)
Sep 30 18:00:39 compute-0 podman[279493]: openstack_network_exporter
Sep 30 18:00:39 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Sep 30 18:00:39 compute-0 podman[279537]: openstack_network_exporter
Sep 30 18:00:39 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Sep 30 18:00:39 compute-0 systemd[1]: Stopped openstack_network_exporter container.
Sep 30 18:00:39 compute-0 systemd[1]: Starting openstack_network_exporter container...
Sep 30 18:00:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:00:39.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:39 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:00:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1093dd6bacb03d1e4b41026de4c5d0026c0066656ff40546ca92c57ea99ba2a8/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Sep 30 18:00:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1093dd6bacb03d1e4b41026de4c5d0026c0066656ff40546ca92c57ea99ba2a8/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 18:00:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1093dd6bacb03d1e4b41026de4c5d0026c0066656ff40546ca92c57ea99ba2a8/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Sep 30 18:00:39 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047.
Sep 30 18:00:39 compute-0 podman[279551]: 2025-09-30 18:00:39.603962046 +0000 UTC m=+0.173383569 container init dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_id=edpm, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git)
Sep 30 18:00:39 compute-0 openstack_network_exporter[279566]: INFO    18:00:39 main.go:48: registering *bridge.Collector
Sep 30 18:00:39 compute-0 openstack_network_exporter[279566]: INFO    18:00:39 main.go:48: registering *coverage.Collector
Sep 30 18:00:39 compute-0 openstack_network_exporter[279566]: INFO    18:00:39 main.go:48: registering *datapath.Collector
Sep 30 18:00:39 compute-0 openstack_network_exporter[279566]: INFO    18:00:39 main.go:48: registering *iface.Collector
Sep 30 18:00:39 compute-0 openstack_network_exporter[279566]: INFO    18:00:39 main.go:48: registering *memory.Collector
Sep 30 18:00:39 compute-0 openstack_network_exporter[279566]: INFO    18:00:39 main.go:48: registering *ovnnorthd.Collector
Sep 30 18:00:39 compute-0 openstack_network_exporter[279566]: INFO    18:00:39 main.go:48: registering *ovn.Collector
Sep 30 18:00:39 compute-0 openstack_network_exporter[279566]: INFO    18:00:39 main.go:48: registering *ovsdbserver.Collector
Sep 30 18:00:39 compute-0 openstack_network_exporter[279566]: INFO    18:00:39 main.go:48: registering *pmd_perf.Collector
Sep 30 18:00:39 compute-0 openstack_network_exporter[279566]: INFO    18:00:39 main.go:48: registering *pmd_rxq.Collector
Sep 30 18:00:39 compute-0 openstack_network_exporter[279566]: INFO    18:00:39 main.go:48: registering *vswitch.Collector
Sep 30 18:00:39 compute-0 openstack_network_exporter[279566]: NOTICE  18:00:39 main.go:76: listening on https://:9105/metrics
Sep 30 18:00:39 compute-0 podman[279551]: 2025-09-30 18:00:39.64143394 +0000 UTC m=+0.210855453 container start dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, architecture=x86_64, release=1755695350, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers)
Sep 30 18:00:39 compute-0 podman[279551]: openstack_network_exporter
Sep 30 18:00:39 compute-0 systemd[1]: Started openstack_network_exporter container.
Sep 30 18:00:39 compute-0 sudo[279487]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v647: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:00:39 compute-0 podman[279576]: 2025-09-30 18:00:39.711115761 +0000 UTC m=+0.067016433 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, distribution-scope=public, maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Sep 30 18:00:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:40 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:00:40 compute-0 sudo[279762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvzmcihgnbxlesokimxrocygngvjjmbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255240.1444895-1284-90077901018294/AnsiballZ_find.py'
Sep 30 18:00:40 compute-0 sudo[279762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:40 compute-0 podman[279723]: 2025-09-30 18:00:40.542491266 +0000 UTC m=+0.085658018 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.4)
Sep 30 18:00:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:40 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f214000a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:40 compute-0 python3.9[279771]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Sep 30 18:00:40 compute-0 ceph-mon[73755]: pgmap v647: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:00:40 compute-0 sudo[279762]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:00:40.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:41 compute-0 unix_chkpwd[279796]: password check failed for user (root)
Sep 30 18:00:41 compute-0 sshd-session[279599]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107  user=root
Sep 30 18:00:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:00:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:00:41.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:00:41 compute-0 sudo[279923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxuuexrawsbfevaufebjhsrrplfxqlcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255241.122086-1303-128343288387847/AnsiballZ_podman_container_info.py'
Sep 30 18:00:41 compute-0 sudo[279923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v648: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:41 compute-0 python3.9[279925]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Sep 30 18:00:41 compute-0 sudo[279923]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:42 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2134002d40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:42 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:42 compute-0 sudo[280090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kemoukwgnnulhdgkaexgglqozzfwglss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255242.161702-1311-100669698592926/AnsiballZ_podman_container_exec.py'
Sep 30 18:00:42 compute-0 sudo[280090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:42 compute-0 ceph-mon[73755]: pgmap v648: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:00:42.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:42 compute-0 sshd-session[279599]: Failed password for root from 14.225.220.107 port 36104 ssh2
Sep 30 18:00:42 compute-0 python3.9[280092]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Sep 30 18:00:43 compute-0 systemd[1]: Started libpod-conmon-c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4.scope.
Sep 30 18:00:43 compute-0 podman[280093]: 2025-09-30 18:00:43.082936514 +0000 UTC m=+0.099158789 container exec c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, org.label-schema.build-date=20250930, config_id=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:00:43 compute-0 podman[280093]: 2025-09-30 18:00:43.12277744 +0000 UTC m=+0.138999625 container exec_died c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.4)
Sep 30 18:00:43 compute-0 systemd[1]: libpod-conmon-c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4.scope: Deactivated successfully.
Sep 30 18:00:43 compute-0 sudo[280090]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:00:43.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:00:43.599Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:00:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:00:43.600Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:00:43 compute-0 sudo[280278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qixenysueegzlcjfpxdozvylblcacfli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255243.3854795-1319-104803035372707/AnsiballZ_podman_container_exec.py'
Sep 30 18:00:43 compute-0 sudo[280278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v649: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:43 compute-0 python3.9[280280]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Sep 30 18:00:44 compute-0 systemd[1]: Started libpod-conmon-c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4.scope.
Sep 30 18:00:44 compute-0 podman[280281]: 2025-09-30 18:00:44.041813424 +0000 UTC m=+0.092851975 container exec c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_controller)
Sep 30 18:00:44 compute-0 podman[280281]: 2025-09-30 18:00:44.076082275 +0000 UTC m=+0.127120746 container exec_died c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Sep 30 18:00:44 compute-0 sshd-session[279599]: Received disconnect from 14.225.220.107 port 36104:11: Bye Bye [preauth]
Sep 30 18:00:44 compute-0 sshd-session[279599]: Disconnected from authenticating user root 14.225.220.107 port 36104 [preauth]
Sep 30 18:00:44 compute-0 systemd[1]: libpod-conmon-c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4.scope: Deactivated successfully.
Sep 30 18:00:44 compute-0 sudo[280278]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:44 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21140032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:44 compute-0 sudo[280464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxzonavqpnlbnxcnuvvuxmwkcugepbmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255244.3163068-1327-281251092173553/AnsiballZ_file.py'
Sep 30 18:00:44 compute-0 sudo[280464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:44 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f214000a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:00:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:00:44.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:00:44 compute-0 ceph-mon[73755]: pgmap v649: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:44 compute-0 python3.9[280466]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 18:00:44 compute-0 sudo[280464]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:00:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:00:45.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:45 compute-0 sudo[280621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mivfuqdtknbrcsptxohiywpwxifourux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255245.107856-1336-92374965652894/AnsiballZ_podman_container_info.py'
Sep 30 18:00:45 compute-0 sudo[280621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:45 compute-0 sudo[280615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:00:45 compute-0 sudo[280615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:00:45 compute-0 sudo[280615]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v650: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:45 compute-0 python3.9[280642]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Sep 30 18:00:45 compute-0 sshd-session[280355]: Invalid user ubuntu from 106.36.198.78 port 58444
Sep 30 18:00:45 compute-0 sshd-session[280355]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:00:45 compute-0 sshd-session[280355]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=106.36.198.78
Sep 30 18:00:45 compute-0 sudo[280621]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:46 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2134002d40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:46 compute-0 sudo[280808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-coqnjunwmyhzwlvuzkmvnqjpnewjfdcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255246.199664-1344-212370728324121/AnsiballZ_podman_container_exec.py'
Sep 30 18:00:46 compute-0 sudo[280808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:46 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:46 compute-0 python3.9[280810]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Sep 30 18:00:46 compute-0 systemd[1]: Started libpod-conmon-422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b.scope.
Sep 30 18:00:46 compute-0 podman[280811]: 2025-09-30 18:00:46.79529049 +0000 UTC m=+0.089032266 container exec 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Sep 30 18:00:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:00:46.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:46 compute-0 ceph-mon[73755]: pgmap v650: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:46 compute-0 podman[280811]: 2025-09-30 18:00:46.832936259 +0000 UTC m=+0.126677995 container exec_died 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Sep 30 18:00:46 compute-0 systemd[1]: libpod-conmon-422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b.scope: Deactivated successfully.
Sep 30 18:00:46 compute-0 sudo[280808]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:00:47.084Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:00:47 compute-0 sudo[280991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydnazbepbgjbliqzqxonbjdlqfaleecw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255247.0469255-1352-43226362056663/AnsiballZ_podman_container_exec.py'
Sep 30 18:00:47 compute-0 sudo[280991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:00:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:00:47.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:00:47 compute-0 python3.9[280994]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Sep 30 18:00:47 compute-0 systemd[1]: Started libpod-conmon-422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b.scope.
Sep 30 18:00:47 compute-0 podman[280995]: 2025-09-30 18:00:47.648773679 +0000 UTC m=+0.067516546 container exec 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Sep 30 18:00:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v651: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:47 compute-0 podman[281016]: 2025-09-30 18:00:47.715545045 +0000 UTC m=+0.054435176 container exec_died 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:00:47 compute-0 podman[280995]: 2025-09-30 18:00:47.72303818 +0000 UTC m=+0.141781057 container exec_died 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest)
Sep 30 18:00:47 compute-0 systemd[1]: libpod-conmon-422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b.scope: Deactivated successfully.
Sep 30 18:00:47 compute-0 sudo[280991]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:48 compute-0 sshd-session[280355]: Failed password for invalid user ubuntu from 106.36.198.78 port 58444 ssh2
Sep 30 18:00:48 compute-0 sudo[281179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lggawvoowhkawfvzragklpwssbhespuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255247.9319286-1360-138302058118238/AnsiballZ_file.py'
Sep 30 18:00:48 compute-0 sudo[281179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:48 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21140032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:48 compute-0 python3.9[281181]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 18:00:48 compute-0 sudo[281179]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:48 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f214000a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:00:48] "GET /metrics HTTP/1.1" 200 46519 "" "Prometheus/2.51.0"
Sep 30 18:00:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:00:48] "GET /metrics HTTP/1.1" 200 46519 "" "Prometheus/2.51.0"
Sep 30 18:00:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:00:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:00:48.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:00:48 compute-0 ceph-mon[73755]: pgmap v651: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:48 compute-0 sudo[281331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfqqlsaqovlyfjllckxyxpcegwwghotb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255248.6632848-1369-121598290063316/AnsiballZ_podman_container_info.py'
Sep 30 18:00:48 compute-0 sudo[281331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:49 compute-0 python3.9[281333]: ansible-containers.podman.podman_container_info Invoked with name=['iscsid'] executable=podman
Sep 30 18:00:49 compute-0 sudo[281331]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:00:49.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v652: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:00:49 compute-0 sudo[281498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svtghebluwbppapbvwzilgcsuzexjcga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255249.4424295-1377-198543256710127/AnsiballZ_podman_container_exec.py'
Sep 30 18:00:49 compute-0 sudo[281498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:49 compute-0 python3.9[281500]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=iscsid detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Sep 30 18:00:50 compute-0 systemd[1]: Started libpod-conmon-88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20.scope.
Sep 30 18:00:50 compute-0 podman[281501]: 2025-09-30 18:00:50.025321286 +0000 UTC m=+0.089285643 container exec 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=iscsid, io.buildah.version=1.41.4)
Sep 30 18:00:50 compute-0 podman[281501]: 2025-09-30 18:00:50.060733546 +0000 UTC m=+0.124697883 container exec_died 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:00:50 compute-0 systemd[1]: libpod-conmon-88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20.scope: Deactivated successfully.
Sep 30 18:00:50 compute-0 sudo[281498]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:50 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2134002d40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:50 compute-0 sshd-session[280355]: Received disconnect from 106.36.198.78 port 58444:11:  [preauth]
Sep 30 18:00:50 compute-0 sshd-session[280355]: Disconnected from invalid user ubuntu 106.36.198.78 port 58444 [preauth]
Sep 30 18:00:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:00:50 compute-0 sudo[281683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lutbmamcafjppjchyxgzdkaegrzibnrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255250.2971854-1385-195789898067950/AnsiballZ_podman_container_exec.py'
Sep 30 18:00:50 compute-0 sudo[281683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:50 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:00:50.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:50 compute-0 python3.9[281685]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=iscsid detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Sep 30 18:00:50 compute-0 ceph-mon[73755]: pgmap v652: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:00:50 compute-0 systemd[1]: Started libpod-conmon-88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20.scope.
Sep 30 18:00:50 compute-0 podman[281686]: 2025-09-30 18:00:50.920241292 +0000 UTC m=+0.067209878 container exec 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Sep 30 18:00:50 compute-0 podman[281705]: 2025-09-30 18:00:50.985514819 +0000 UTC m=+0.051088259 container exec_died 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest)
Sep 30 18:00:50 compute-0 podman[281686]: 2025-09-30 18:00:50.989955265 +0000 UTC m=+0.136923851 container exec_died 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Sep 30 18:00:50 compute-0 systemd[1]: libpod-conmon-88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20.scope: Deactivated successfully.
Sep 30 18:00:51 compute-0 sudo[281683]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:51 compute-0 sudo[281869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idzuraqmewwpfmtxthwgccwwqeyngpny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255251.1826577-1393-68586578659210/AnsiballZ_file.py'
Sep 30 18:00:51 compute-0 sudo[281869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:00:51.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:51 compute-0 python3.9[281871]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/iscsid recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 18:00:51 compute-0 sudo[281869]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v653: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:52 compute-0 sudo[282022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgwnrulwdrkjghdsodfsqybuyhnpukko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255251.9263067-1402-55805286310217/AnsiballZ_podman_container_info.py'
Sep 30 18:00:52 compute-0 sudo[282022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:00:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:00:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:52 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:52 compute-0 python3.9[282024]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Sep 30 18:00:52 compute-0 sudo[282022]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:52 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f214000a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:00:52.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:52 compute-0 ceph-mon[73755]: pgmap v653: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:00:52 compute-0 sudo[282187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmesudtaaukivxivhuhoqrfvqmzcdvmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255252.695512-1410-201224575084704/AnsiballZ_podman_container_exec.py'
Sep 30 18:00:52 compute-0 sudo[282187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:53 compute-0 python3.9[282189]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Sep 30 18:00:53 compute-0 systemd[1]: Started libpod-conmon-6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1.scope.
Sep 30 18:00:53 compute-0 podman[282190]: 2025-09-30 18:00:53.312864727 +0000 UTC m=+0.087957568 container exec 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4)
Sep 30 18:00:53 compute-0 podman[282190]: 2025-09-30 18:00:53.344835678 +0000 UTC m=+0.119928439 container exec_died 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20250930, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.4)
Sep 30 18:00:53 compute-0 systemd[1]: libpod-conmon-6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1.scope: Deactivated successfully.
Sep 30 18:00:53 compute-0 sudo[282187]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:00:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:00:53.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:00:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:00:53.601Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:00:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v654: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:53 compute-0 sudo[282373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbjmnwzqkmpgannjdocfkhbqcpyjocqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255253.5740209-1418-55946266237010/AnsiballZ_podman_container_exec.py'
Sep 30 18:00:53 compute-0 sudo[282373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:54 compute-0 python3.9[282375]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Sep 30 18:00:54 compute-0 systemd[1]: Started libpod-conmon-6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1.scope.
Sep 30 18:00:54 compute-0 podman[282376]: 2025-09-30 18:00:54.233628876 +0000 UTC m=+0.075960506 container exec 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:00:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:00:54.258 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:00:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:00:54.259 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:00:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:00:54.259 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:00:54 compute-0 podman[282376]: 2025-09-30 18:00:54.270895895 +0000 UTC m=+0.113227505 container exec_died 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:00:54 compute-0 systemd[1]: libpod-conmon-6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1.scope: Deactivated successfully.
Sep 30 18:00:54 compute-0 sudo[282373]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:54 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2134003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:54 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:00:54.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:54 compute-0 ceph-mon[73755]: pgmap v654: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:54 compute-0 sudo[282572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjlthebkhvfbuheybztwzvuqfrlvnbam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255254.554971-1426-256745734345887/AnsiballZ_file.py'
Sep 30 18:00:54 compute-0 sudo[282572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:55 compute-0 podman[282532]: 2025-09-30 18:00:55.017981298 +0000 UTC m=+0.122321571 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Sep 30 18:00:55 compute-0 python3.9[282580]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 18:00:55 compute-0 sudo[282572]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:00:55 compute-0 podman[282627]: 2025-09-30 18:00:55.524124067 +0000 UTC m=+0.056802138 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Sep 30 18:00:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:00:55.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v655: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:55 compute-0 sudo[282760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhtjasjsliqgwaurmzgssjthpntzstkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255255.457122-1435-116952223935479/AnsiballZ_podman_container_info.py'
Sep 30 18:00:55 compute-0 sudo[282760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:55 compute-0 python3.9[282762]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Sep 30 18:00:55 compute-0 sudo[282760]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:56 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:56 compute-0 sudo[282928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egdevcfbxjgyqbmftthlmtoiautzslke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255256.1639485-1443-1459458562742/AnsiballZ_podman_container_exec.py'
Sep 30 18:00:56 compute-0 sudo[282928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:56 compute-0 python3.9[282930]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Sep 30 18:00:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:56 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f214000a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:56 compute-0 systemd[1]: Started libpod-conmon-ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff.scope.
Sep 30 18:00:56 compute-0 podman[282931]: 2025-09-30 18:00:56.746940358 +0000 UTC m=+0.078519793 container exec ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Sep 30 18:00:56 compute-0 podman[282931]: 2025-09-30 18:00:56.781837465 +0000 UTC m=+0.113416920 container exec_died ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Sep 30 18:00:56 compute-0 systemd[1]: libpod-conmon-ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff.scope: Deactivated successfully.
Sep 30 18:00:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:00:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:00:56.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:00:56 compute-0 sudo[282928]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:56 compute-0 ceph-mon[73755]: pgmap v655: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:00:57.085Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:00:57 compute-0 sudo[283113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jaitpndocdbzvbahkffjriczdjctjexp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255257.0081046-1451-79724838863300/AnsiballZ_podman_container_exec.py'
Sep 30 18:00:57 compute-0 sudo[283113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:57 compute-0 python3.9[283115]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Sep 30 18:00:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:00:57.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:57 compute-0 systemd[1]: Started libpod-conmon-ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff.scope.
Sep 30 18:00:57 compute-0 podman[283117]: 2025-09-30 18:00:57.619796371 +0000 UTC m=+0.079101518 container exec ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:00:57 compute-0 podman[283117]: 2025-09-30 18:00:57.651833574 +0000 UTC m=+0.111138671 container exec_died ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:00:57 compute-0 systemd[1]: libpod-conmon-ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff.scope: Deactivated successfully.
Sep 30 18:00:57 compute-0 sudo[283113]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v656: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:58 compute-0 sudo[283297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjtgjprjujhjcixfqombflldrfcvascc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255257.8869698-1459-43469297295093/AnsiballZ_file.py'
Sep 30 18:00:58 compute-0 sudo[283297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:58 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21200014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:58 compute-0 python3.9[283299]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 18:00:58 compute-0 sudo[283297]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:00:58 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:00:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:00:58] "GET /metrics HTTP/1.1" 200 46522 "" "Prometheus/2.51.0"
Sep 30 18:00:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:00:58] "GET /metrics HTTP/1.1" 200 46522 "" "Prometheus/2.51.0"
Sep 30 18:00:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:00:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:00:58.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:00:58 compute-0 ceph-mon[73755]: pgmap v656: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:00:59 compute-0 sudo[283449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdwkcjxnnydnikvpdfdxiyrgfqqdkisl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255258.6772964-1468-35032558183450/AnsiballZ_podman_container_info.py'
Sep 30 18:00:59 compute-0 sudo[283449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:00:59 compute-0 python3.9[283451]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Sep 30 18:00:59 compute-0 sudo[283449]: pam_unix(sudo:session): session closed for user root
Sep 30 18:00:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:00:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:00:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:00:59.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:00:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v657: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:00:59 compute-0 sudo[283617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctecilcvzxovagsufhzldsbflhofzxvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255259.4984505-1476-114139908189091/AnsiballZ_podman_container_exec.py'
Sep 30 18:00:59 compute-0 sudo[283617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:01:00 compute-0 python3.9[283619]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Sep 30 18:01:00 compute-0 systemd[1]: Started libpod-conmon-dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047.scope.
Sep 30 18:01:00 compute-0 podman[283620]: 2025-09-30 18:01:00.135141006 +0000 UTC m=+0.081694995 container exec dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, distribution-scope=public, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Sep 30 18:01:00 compute-0 podman[283640]: 2025-09-30 18:01:00.198563295 +0000 UTC m=+0.051838269 container exec_died dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9-minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git, version=9.6, architecture=x86_64)
Sep 30 18:01:00 compute-0 podman[283620]: 2025-09-30 18:01:00.212682732 +0000 UTC m=+0.159236701 container exec_died dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, version=9.6, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, release=1755695350, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git)
Sep 30 18:01:00 compute-0 systemd[1]: libpod-conmon-dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047.scope: Deactivated successfully.
Sep 30 18:01:00 compute-0 sudo[283617]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:00 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:01:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:00 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f214000a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:00 compute-0 sudo[283802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyefognlfuquktvsckcryiskwltwalby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255260.4141877-1484-55362447891750/AnsiballZ_podman_container_exec.py'
Sep 30 18:01:00 compute-0 sudo[283802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:01:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:01:00.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:00 compute-0 python3.9[283804]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Sep 30 18:01:00 compute-0 ceph-mon[73755]: pgmap v657: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:01:01 compute-0 systemd[1]: Started libpod-conmon-dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047.scope.
Sep 30 18:01:01 compute-0 podman[283805]: 2025-09-30 18:01:01.066059108 +0000 UTC m=+0.105801491 container exec dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350)
Sep 30 18:01:01 compute-0 podman[283805]: 2025-09-30 18:01:01.096430268 +0000 UTC m=+0.136172631 container exec_died dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, architecture=x86_64, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.openshift.expose-services=, managed_by=edpm_ansible, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=edpm, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Sep 30 18:01:01 compute-0 systemd[1]: libpod-conmon-dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047.scope: Deactivated successfully.
Sep 30 18:01:01 compute-0 sudo[283802]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:01:01.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:01 compute-0 sudo[283986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drrjwqmlstisafhurprqvhedqkskmtjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255261.3116484-1492-178485176608288/AnsiballZ_file.py'
Sep 30 18:01:01 compute-0 sudo[283986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:01:01 compute-0 CROND[283990]: (root) CMD (run-parts /etc/cron.hourly)
Sep 30 18:01:01 compute-0 run-parts[283994]: (/etc/cron.hourly) starting 0anacron
Sep 30 18:01:01 compute-0 run-parts[284000]: (/etc/cron.hourly) finished 0anacron
Sep 30 18:01:01 compute-0 CROND[283989]: (root) CMDEND (run-parts /etc/cron.hourly)
Sep 30 18:01:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v658: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:01 compute-0 python3.9[283988]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 18:01:01 compute-0 sudo[283986]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:02 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21200014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:02 compute-0 podman[284025]: 2025-09-30 18:01:02.549510406 +0000 UTC m=+0.083762018 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true)
Sep 30 18:01:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:02 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:01:02.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:02 compute-0 ceph-mon[73755]: pgmap v658: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:01:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:01:03.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:01:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:01:03.602Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:01:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v659: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:04 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21140032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:04 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21140032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.002000052s ======
Sep 30 18:01:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:01:04.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Sep 30 18:01:05 compute-0 ceph-mon[73755]: pgmap v659: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:01:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:01:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:01:05.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:01:05 compute-0 sudo[284050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:01:05 compute-0 sudo[284050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:01:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v660: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:05 compute-0 sudo[284050]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:06 compute-0 ceph-mon[73755]: pgmap v660: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:06 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21140032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:06 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118000f30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:01:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:01:06.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:01:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:01:07.087Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:01:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:01:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:01:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:01:07
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['.nfs', 'backups', 'volumes', 'default.rgw.meta', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'images', '.mgr', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta']
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:01:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:01:07.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v661: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:08 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21200014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:08 compute-0 ceph-mon[73755]: pgmap v661: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:08 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:01:08] "GET /metrics HTTP/1.1" 200 46520 "" "Prometheus/2.51.0"
Sep 30 18:01:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:01:08] "GET /metrics HTTP/1.1" 200 46520 "" "Prometheus/2.51.0"
Sep 30 18:01:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:01:08.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:09 compute-0 ceph-mgr[74051]: [devicehealth INFO root] Check health
Sep 30 18:01:09 compute-0 podman[284078]: 2025-09-30 18:01:09.516782484 +0000 UTC m=+0.056698445 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20250930, container_name=iscsid, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Sep 30 18:01:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:01:09.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v662: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:01:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:10 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21140032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:01:10 compute-0 podman[284099]: 2025-09-30 18:01:10.568706722 +0000 UTC m=+0.094284152 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.buildah.version=1.33.7, managed_by=edpm_ansible, version=9.6, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, vcs-type=git, container_name=openstack_network_exporter, distribution-scope=public)
Sep 30 18:01:10 compute-0 podman[284120]: 2025-09-30 18:01:10.6743957 +0000 UTC m=+0.075343260 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Sep 30 18:01:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:10 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118000f30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:10 compute-0 ceph-mon[73755]: pgmap v662: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:01:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:01:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:01:10.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:01:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:01:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:01:11.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:01:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v663: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:12 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21200014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:12 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:12 compute-0 ceph-mon[73755]: pgmap v663: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:01:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:01:12.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:01:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:01:13.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:01:13.603Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:01:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v664: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:14 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:14 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:14 compute-0 ceph-mon[73755]: pgmap v664: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:01:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:01:14.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:01:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:01:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:01:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:01:15.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:01:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v665: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:16 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21200014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:16 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21140032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:01:16.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:16 compute-0 ceph-mon[73755]: pgmap v665: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:01:17.088Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:01:17 compute-0 nova_compute[265391]: 2025-09-30 18:01:17.268 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:01:17 compute-0 nova_compute[265391]: 2025-09-30 18:01:17.268 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:01:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:01:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:01:17.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:01:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v666: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:17 compute-0 nova_compute[265391]: 2025-09-30 18:01:17.781 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:01:17 compute-0 nova_compute[265391]: 2025-09-30 18:01:17.782 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:01:17 compute-0 nova_compute[265391]: 2025-09-30 18:01:17.782 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:01:17 compute-0 nova_compute[265391]: 2025-09-30 18:01:17.783 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:01:17 compute-0 nova_compute[265391]: 2025-09-30 18:01:17.783 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:01:17 compute-0 nova_compute[265391]: 2025-09-30 18:01:17.783 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:01:17 compute-0 nova_compute[265391]: 2025-09-30 18:01:17.784 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:01:17 compute-0 nova_compute[265391]: 2025-09-30 18:01:17.784 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:01:18 compute-0 nova_compute[265391]: 2025-09-30 18:01:18.296 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:01:18 compute-0 nova_compute[265391]: 2025-09-30 18:01:18.297 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:01:18 compute-0 nova_compute[265391]: 2025-09-30 18:01:18.297 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:01:18 compute-0 nova_compute[265391]: 2025-09-30 18:01:18.297 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:01:18 compute-0 nova_compute[265391]: 2025-09-30 18:01:18.297 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:01:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:18 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118001e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:18 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:18 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:01:18 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4281996859' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:01:18 compute-0 nova_compute[265391]: 2025-09-30 18:01:18.770 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:01:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:01:18] "GET /metrics HTTP/1.1" 200 46520 "" "Prometheus/2.51.0"
Sep 30 18:01:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:01:18] "GET /metrics HTTP/1.1" 200 46520 "" "Prometheus/2.51.0"
Sep 30 18:01:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:01:18.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:18 compute-0 ceph-mon[73755]: pgmap v666: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:18 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1893667914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:01:18 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/4281996859' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:01:18 compute-0 nova_compute[265391]: 2025-09-30 18:01:18.940 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:01:18 compute-0 nova_compute[265391]: 2025-09-30 18:01:18.941 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:01:18 compute-0 nova_compute[265391]: 2025-09-30 18:01:18.971 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.030s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:01:18 compute-0 nova_compute[265391]: 2025-09-30 18:01:18.973 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4829MB free_disk=39.9921875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:01:18 compute-0 nova_compute[265391]: 2025-09-30 18:01:18.973 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:01:18 compute-0 nova_compute[265391]: 2025-09-30 18:01:18.974 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:01:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:01:19.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v667: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:01:20 compute-0 nova_compute[265391]: 2025-09-30 18:01:20.023 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:01:20 compute-0 nova_compute[265391]: 2025-09-30 18:01:20.024 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:01:18 up  1:04,  0 user,  load average: 0.58, 1.06, 1.05\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:01:20 compute-0 nova_compute[265391]: 2025-09-30 18:01:20.042 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:01:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:20 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21200014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:01:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:01:20 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3336069450' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:01:20 compute-0 nova_compute[265391]: 2025-09-30 18:01:20.514 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:01:20 compute-0 nova_compute[265391]: 2025-09-30 18:01:20.520 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:01:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:20 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21140032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:01:20.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:20 compute-0 ceph-mon[73755]: pgmap v667: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:01:20 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1151663774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:01:20 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3336069450' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:01:21 compute-0 nova_compute[265391]: 2025-09-30 18:01:21.029 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 39, 'reserved': 0, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:01:21 compute-0 nova_compute[265391]: 2025-09-30 18:01:21.543 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:01:21 compute-0 nova_compute[265391]: 2025-09-30 18:01:21.544 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.570s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:01:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:01:21.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v668: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:01:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:01:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:22 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118001e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:22 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:01:22.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:22 compute-0 ceph-mon[73755]: pgmap v668: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:01:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=cleanup t=2025-09-30T18:01:23.521219198Z level=info msg="Completed cleanup jobs" duration=24.753394ms
Sep 30 18:01:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:01:23.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:01:23.605Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:01:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=plugins.update.checker t=2025-09-30T18:01:23.614211435Z level=info msg="Update check succeeded" duration=50.646386ms
Sep 30 18:01:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=grafana.update.checker t=2025-09-30T18:01:23.61744765Z level=info msg="Update check succeeded" duration=50.073231ms
Sep 30 18:01:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v669: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:24 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120001670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:24 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21140032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:01:24.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:24 compute-0 ceph-mon[73755]: pgmap v669: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:01:25 compute-0 podman[284200]: 2025-09-30 18:01:25.572277821 +0000 UTC m=+0.110029152 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4)
Sep 30 18:01:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:01:25.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:25 compute-0 podman[284255]: 2025-09-30 18:01:25.658139094 +0000 UTC m=+0.060036562 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Sep 30 18:01:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v670: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:25 compute-0 sudo[284349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:01:25 compute-0 sudo[284349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:01:25 compute-0 sudo[284349]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:25 compute-0 sudo[284402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slyauqiruvdmkkabrmivakpekqvayvpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255285.5522642-1700-221833853987860/AnsiballZ_file.py'
Sep 30 18:01:25 compute-0 sudo[284402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:01:26 compute-0 python3.9[284404]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 18:01:26 compute-0 sudo[284402]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:26 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21140032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:26 compute-0 sudo[284554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhcdzmqzzdhzrurjyxjndkakspoanltx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255286.3252146-1716-146662256783517/AnsiballZ_stat.py'
Sep 30 18:01:26 compute-0 sudo[284554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:01:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:26 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:26 compute-0 python3.9[284556]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 18:01:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:01:26.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:26 compute-0 sudo[284554]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:26 compute-0 ceph-mon[73755]: pgmap v670: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:01:27.089Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:01:27 compute-0 sudo[284677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkfjxxgnuqthewqkmwguahcfrezjseyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255286.3252146-1716-146662256783517/AnsiballZ_copy.py'
Sep 30 18:01:27 compute-0 sudo[284677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:01:27 compute-0 python3.9[284679]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759255286.3252146-1716-146662256783517/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 18:01:27 compute-0 sudo[284677]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:01:27.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v671: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:28 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21200046c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:28 compute-0 sudo[284833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xapqmyvzzdadoovfddtftngcoepybqkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255288.066509-1748-145355372338923/AnsiballZ_file.py'
Sep 30 18:01:28 compute-0 sudo[284833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:01:28 compute-0 python3.9[284835]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 18:01:28 compute-0 sudo[284833]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:28 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118001e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:01:28] "GET /metrics HTTP/1.1" 200 46517 "" "Prometheus/2.51.0"
Sep 30 18:01:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:01:28] "GET /metrics HTTP/1.1" 200 46517 "" "Prometheus/2.51.0"
Sep 30 18:01:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:01:28.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:28 compute-0 sshd-session[284694]: Invalid user ircd from 45.252.249.158 port 44564
Sep 30 18:01:28 compute-0 sshd-session[284694]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:01:28 compute-0 sshd-session[284694]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158
Sep 30 18:01:28 compute-0 ceph-mon[73755]: pgmap v671: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:29 compute-0 sudo[284985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-curbzewaunvolzmwkrfwylrralpwcpng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255288.9670784-1764-78752686764709/AnsiballZ_stat.py'
Sep 30 18:01:29 compute-0 sudo[284985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:01:29 compute-0 python3.9[284987]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 18:01:29 compute-0 sudo[284985]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:01:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:01:29.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:01:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v672: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:01:29 compute-0 sudo[285065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdyaxkxunznxhjirvfhydvdijjgmswtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255288.9670784-1764-78752686764709/AnsiballZ_file.py'
Sep 30 18:01:29 compute-0 sudo[285065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:01:29 compute-0 python3.9[285067]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 18:01:30 compute-0 sudo[285065]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:30 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21140043d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:01:30 compute-0 sshd-session[284694]: Failed password for invalid user ircd from 45.252.249.158 port 44564 ssh2
Sep 30 18:01:30 compute-0 sudo[285217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvmotcqbbcjwkgfyippiwclzeomvhrie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255290.4039936-1788-232781245173674/AnsiballZ_stat.py'
Sep 30 18:01:30 compute-0 sudo[285217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:01:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:30 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:01:30.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:30 compute-0 python3.9[285219]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 18:01:30 compute-0 sudo[285217]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:31 compute-0 ceph-mon[73755]: pgmap v672: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:01:31 compute-0 sudo[285269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:01:31 compute-0 sudo[285269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:01:31 compute-0 sudo[285269]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:31 compute-0 sudo[285320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fibmmnjgusfgdfqutplglhpseofptjnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255290.4039936-1788-232781245173674/AnsiballZ_file.py'
Sep 30 18:01:31 compute-0 sudo[285320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:01:31 compute-0 sudo[285321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:01:31 compute-0 sudo[285321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:01:31 compute-0 python3.9[285328]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.8a2043wf recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 18:01:31 compute-0 sudo[285320]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:01:31.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v673: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:31 compute-0 sudo[285321]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Sep 30 18:01:31 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Sep 30 18:01:31 compute-0 sshd-session[284694]: Received disconnect from 45.252.249.158 port 44564:11: Bye Bye [preauth]
Sep 30 18:01:31 compute-0 sshd-session[284694]: Disconnected from invalid user ircd 45.252.249.158 port 44564 [preauth]
Sep 30 18:01:32 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Sep 30 18:01:32 compute-0 sudo[285532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdqdoxnmnpdyvnyngyjcwuayqdfstuua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255291.8511713-1812-131284120221334/AnsiballZ_stat.py'
Sep 30 18:01:32 compute-0 sudo[285532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:01:32 compute-0 python3.9[285534]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 18:01:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:32 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21200046c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:32 compute-0 sudo[285532]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:32 compute-0 sudo[285625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyonogjxxnhufhebpbvhlhzffzytfkba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255291.8511713-1812-131284120221334/AnsiballZ_file.py'
Sep 30 18:01:32 compute-0 podman[285584]: 2025-09-30 18:01:32.688130262 +0000 UTC m=+0.055235937 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Sep 30 18:01:32 compute-0 sudo[285625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:01:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:32 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118001e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:01:32.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:32 compute-0 python3.9[285632]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 18:01:32 compute-0 sudo[285625]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:33 compute-0 ceph-mon[73755]: pgmap v673: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:01:33.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:01:33.605Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:01:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:01:33.606Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:01:33 compute-0 sudo[285783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqcncwkoybkkviuqwtswlgixkvmhqxuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255293.2971087-1838-277605363101163/AnsiballZ_command.py'
Sep 30 18:01:33 compute-0 sudo[285783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:01:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v674: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:33 compute-0 python3.9[285785]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 18:01:33 compute-0 sudo[285783]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:34 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21140043d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 18:01:34 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:01:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 18:01:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:34 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:34 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:01:34 compute-0 sudo[285937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jydieujngsoqzekduszjkyfovjdyqudr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759255294.216169-1854-35634287722665/AnsiballZ_edpm_nftables_from_files.py'
Sep 30 18:01:34 compute-0 sudo[285937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:01:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:01:34.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:34 compute-0 python3[285939]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Sep 30 18:01:34 compute-0 sudo[285937]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:35 compute-0 ceph-mon[73755]: pgmap v674: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:35 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:01:35 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:01:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:01:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Sep 30 18:01:35 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Sep 30 18:01:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:01:35 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:01:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:01:35 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:01:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:01:35 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:01:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:01:35 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:01:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:01:35 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:01:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:01:35 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:01:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:01:35 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:01:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:01:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:01:35.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:01:35 compute-0 sudo[286060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:01:35 compute-0 sudo[286060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:01:35 compute-0 sudo[286060]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:35 compute-0 sudo[286122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahcshoxdbygblfbwwmthrmdqpjqhpvmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255295.2793672-1870-101149672249062/AnsiballZ_stat.py'
Sep 30 18:01:35 compute-0 sudo[286122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:01:35 compute-0 sudo[286109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:01:35 compute-0 sudo[286109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:01:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v675: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:35 compute-0 python3.9[286140]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 18:01:35 compute-0 sudo[286122]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:36 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Sep 30 18:01:36 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:01:36 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:01:36 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:01:36 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:01:36 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:01:36 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:01:36 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:01:36 compute-0 sudo[286268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbitugumhiezuooafzelurkmtrjlicny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255295.2793672-1870-101149672249062/AnsiballZ_file.py'
Sep 30 18:01:36 compute-0 sudo[286268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:01:36 compute-0 podman[286255]: 2025-09-30 18:01:36.23284506 +0000 UTC m=+0.054533789 container create 1c51457377fd22daa2186dcd78370068b660f3e5d9d03d335ea593d4b460d6c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:01:36 compute-0 systemd[1]: Started libpod-conmon-1c51457377fd22daa2186dcd78370068b660f3e5d9d03d335ea593d4b460d6c2.scope.
Sep 30 18:01:36 compute-0 podman[286255]: 2025-09-30 18:01:36.207704976 +0000 UTC m=+0.029393745 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:01:36 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:01:36 compute-0 podman[286255]: 2025-09-30 18:01:36.340061867 +0000 UTC m=+0.161750676 container init 1c51457377fd22daa2186dcd78370068b660f3e5d9d03d335ea593d4b460d6c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_lamarr, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Sep 30 18:01:36 compute-0 podman[286255]: 2025-09-30 18:01:36.347192773 +0000 UTC m=+0.168881502 container start 1c51457377fd22daa2186dcd78370068b660f3e5d9d03d335ea593d4b460d6c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_lamarr, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:01:36 compute-0 podman[286255]: 2025-09-30 18:01:36.353207559 +0000 UTC m=+0.174896318 container attach 1c51457377fd22daa2186dcd78370068b660f3e5d9d03d335ea593d4b460d6c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Sep 30 18:01:36 compute-0 angry_lamarr[286279]: 167 167
Sep 30 18:01:36 compute-0 systemd[1]: libpod-1c51457377fd22daa2186dcd78370068b660f3e5d9d03d335ea593d4b460d6c2.scope: Deactivated successfully.
Sep 30 18:01:36 compute-0 podman[286255]: 2025-09-30 18:01:36.355318684 +0000 UTC m=+0.177007473 container died 1c51457377fd22daa2186dcd78370068b660f3e5d9d03d335ea593d4b460d6c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_lamarr, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Sep 30 18:01:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:36 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21200046c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-34dcde95afb6d7de0001e8b5cf402a15ae919e52f36fac64c1f1c36d3d8befc3-merged.mount: Deactivated successfully.
Sep 30 18:01:36 compute-0 podman[286255]: 2025-09-30 18:01:36.408413404 +0000 UTC m=+0.230102133 container remove 1c51457377fd22daa2186dcd78370068b660f3e5d9d03d335ea593d4b460d6c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid)
Sep 30 18:01:36 compute-0 systemd[1]: libpod-conmon-1c51457377fd22daa2186dcd78370068b660f3e5d9d03d335ea593d4b460d6c2.scope: Deactivated successfully.
Sep 30 18:01:36 compute-0 python3.9[286276]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 18:01:36 compute-0 sudo[286268]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:36 compute-0 podman[286330]: 2025-09-30 18:01:36.614143513 +0000 UTC m=+0.049306133 container create 062b13ddfe727d96d055d26ab7e9773dda02238b397e56a3a4b9ec15c006dfe8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:01:36 compute-0 systemd[1]: Started libpod-conmon-062b13ddfe727d96d055d26ab7e9773dda02238b397e56a3a4b9ec15c006dfe8.scope.
Sep 30 18:01:36 compute-0 podman[286330]: 2025-09-30 18:01:36.597456869 +0000 UTC m=+0.032619489 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:01:36 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:01:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce0639a36bed146062eebafa714fd8c025bece1bb04be058626f71b0d3a5cfeb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:01:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce0639a36bed146062eebafa714fd8c025bece1bb04be058626f71b0d3a5cfeb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:01:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce0639a36bed146062eebafa714fd8c025bece1bb04be058626f71b0d3a5cfeb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:01:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce0639a36bed146062eebafa714fd8c025bece1bb04be058626f71b0d3a5cfeb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:01:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce0639a36bed146062eebafa714fd8c025bece1bb04be058626f71b0d3a5cfeb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:01:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:36 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118001e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:36 compute-0 podman[286330]: 2025-09-30 18:01:36.730588111 +0000 UTC m=+0.165750761 container init 062b13ddfe727d96d055d26ab7e9773dda02238b397e56a3a4b9ec15c006dfe8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_feistel, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Sep 30 18:01:36 compute-0 podman[286330]: 2025-09-30 18:01:36.744775809 +0000 UTC m=+0.179938429 container start 062b13ddfe727d96d055d26ab7e9773dda02238b397e56a3a4b9ec15c006dfe8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_feistel, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Sep 30 18:01:36 compute-0 podman[286330]: 2025-09-30 18:01:36.748369733 +0000 UTC m=+0.183532353 container attach 062b13ddfe727d96d055d26ab7e9773dda02238b397e56a3a4b9ec15c006dfe8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Sep 30 18:01:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:01:36.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:37 compute-0 sudo[286480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqivdntdszjydesazmpkatnpyxchabqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255296.645299-1894-76341507471944/AnsiballZ_stat.py'
Sep 30 18:01:37 compute-0 sudo[286480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:01:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:01:37.090Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:01:37 compute-0 modest_feistel[286375]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:01:37 compute-0 modest_feistel[286375]: --> All data devices are unavailable
Sep 30 18:01:37 compute-0 ceph-mon[73755]: pgmap v675: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:37 compute-0 systemd[1]: libpod-062b13ddfe727d96d055d26ab7e9773dda02238b397e56a3a4b9ec15c006dfe8.scope: Deactivated successfully.
Sep 30 18:01:37 compute-0 podman[286330]: 2025-09-30 18:01:37.160438136 +0000 UTC m=+0.595600756 container died 062b13ddfe727d96d055d26ab7e9773dda02238b397e56a3a4b9ec15c006dfe8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_feistel, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:01:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce0639a36bed146062eebafa714fd8c025bece1bb04be058626f71b0d3a5cfeb-merged.mount: Deactivated successfully.
Sep 30 18:01:37 compute-0 podman[286330]: 2025-09-30 18:01:37.200908198 +0000 UTC m=+0.636070838 container remove 062b13ddfe727d96d055d26ab7e9773dda02238b397e56a3a4b9ec15c006dfe8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_feistel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:01:37 compute-0 python3.9[286484]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 18:01:37 compute-0 systemd[1]: libpod-conmon-062b13ddfe727d96d055d26ab7e9773dda02238b397e56a3a4b9ec15c006dfe8.scope: Deactivated successfully.
Sep 30 18:01:37 compute-0 sudo[286109]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:37 compute-0 sudo[286480]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:01:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:01:37 compute-0 sudo[286504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:01:37 compute-0 sudo[286504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:01:37 compute-0 sudo[286504]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:01:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:01:37 compute-0 sudo[286541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:01:37 compute-0 sudo[286541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:01:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:01:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:01:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:01:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:01:37 compute-0 sudo[286628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cthvhaxiwwavxextmutswrstddowyfbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255296.645299-1894-76341507471944/AnsiballZ_file.py'
Sep 30 18:01:37 compute-0 sudo[286628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:01:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:01:37.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:37 compute-0 python3.9[286630]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 18:01:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v676: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:37 compute-0 sudo[286628]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:37 compute-0 podman[286671]: 2025-09-30 18:01:37.784261105 +0000 UTC m=+0.041273764 container create a357884b764c1347375f2fc1c3d53a93613b219934774034edc3bf126bcfaceb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:01:37 compute-0 systemd[1]: Started libpod-conmon-a357884b764c1347375f2fc1c3d53a93613b219934774034edc3bf126bcfaceb.scope.
Sep 30 18:01:37 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:01:37 compute-0 podman[286671]: 2025-09-30 18:01:37.766979045 +0000 UTC m=+0.023991724 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:01:37 compute-0 podman[286671]: 2025-09-30 18:01:37.875964669 +0000 UTC m=+0.132977348 container init a357884b764c1347375f2fc1c3d53a93613b219934774034edc3bf126bcfaceb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_banach, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:01:37 compute-0 podman[286671]: 2025-09-30 18:01:37.885191829 +0000 UTC m=+0.142204488 container start a357884b764c1347375f2fc1c3d53a93613b219934774034edc3bf126bcfaceb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_banach, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:01:37 compute-0 podman[286671]: 2025-09-30 18:01:37.888961007 +0000 UTC m=+0.145973696 container attach a357884b764c1347375f2fc1c3d53a93613b219934774034edc3bf126bcfaceb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_banach, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Sep 30 18:01:37 compute-0 cool_banach[286711]: 167 167
Sep 30 18:01:37 compute-0 systemd[1]: libpod-a357884b764c1347375f2fc1c3d53a93613b219934774034edc3bf126bcfaceb.scope: Deactivated successfully.
Sep 30 18:01:37 compute-0 podman[286671]: 2025-09-30 18:01:37.892767226 +0000 UTC m=+0.149779895 container died a357884b764c1347375f2fc1c3d53a93613b219934774034edc3bf126bcfaceb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_banach, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Sep 30 18:01:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-e19a86b08d4c29aa6833004abbc8e02f0632cac037fb9d2d81bf8702cd2ac5bc-merged.mount: Deactivated successfully.
Sep 30 18:01:37 compute-0 podman[286671]: 2025-09-30 18:01:37.93563637 +0000 UTC m=+0.192649019 container remove a357884b764c1347375f2fc1c3d53a93613b219934774034edc3bf126bcfaceb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_banach, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 18:01:37 compute-0 systemd[1]: libpod-conmon-a357884b764c1347375f2fc1c3d53a93613b219934774034edc3bf126bcfaceb.scope: Deactivated successfully.
Sep 30 18:01:38 compute-0 podman[286766]: 2025-09-30 18:01:38.125942738 +0000 UTC m=+0.054534219 container create b61b762a3687a478a1f6b394b9e6f3a0938287a2b10ef2604eabc72c9a3e889a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:01:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:01:38 compute-0 ceph-mon[73755]: pgmap v676: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:38 compute-0 systemd[1]: Started libpod-conmon-b61b762a3687a478a1f6b394b9e6f3a0938287a2b10ef2604eabc72c9a3e889a.scope.
Sep 30 18:01:38 compute-0 podman[286766]: 2025-09-30 18:01:38.1025784 +0000 UTC m=+0.031169901 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:01:38 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:01:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c788ef3b96fa21c2b76017ceca452572c4f4f4e1c91c32262a9ab7a79f0f482a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:01:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c788ef3b96fa21c2b76017ceca452572c4f4f4e1c91c32262a9ab7a79f0f482a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:01:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c788ef3b96fa21c2b76017ceca452572c4f4f4e1c91c32262a9ab7a79f0f482a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:01:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c788ef3b96fa21c2b76017ceca452572c4f4f4e1c91c32262a9ab7a79f0f482a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:01:38 compute-0 podman[286766]: 2025-09-30 18:01:38.241717378 +0000 UTC m=+0.170308869 container init b61b762a3687a478a1f6b394b9e6f3a0938287a2b10ef2604eabc72c9a3e889a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:01:38 compute-0 podman[286766]: 2025-09-30 18:01:38.252677943 +0000 UTC m=+0.181269414 container start b61b762a3687a478a1f6b394b9e6f3a0938287a2b10ef2604eabc72c9a3e889a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_germain, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:01:38 compute-0 podman[286766]: 2025-09-30 18:01:38.256300457 +0000 UTC m=+0.184891928 container attach b61b762a3687a478a1f6b394b9e6f3a0938287a2b10ef2604eabc72c9a3e889a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_germain, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:01:38 compute-0 sudo[286882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaunlihkwvwyhozxlxjxkggxfgqmvglu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255298.0197504-1918-5005606730549/AnsiballZ_stat.py'
Sep 30 18:01:38 compute-0 sudo[286882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:01:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118001e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:38 compute-0 exciting_germain[286827]: {
Sep 30 18:01:38 compute-0 exciting_germain[286827]:     "0": [
Sep 30 18:01:38 compute-0 exciting_germain[286827]:         {
Sep 30 18:01:38 compute-0 exciting_germain[286827]:             "devices": [
Sep 30 18:01:38 compute-0 exciting_germain[286827]:                 "/dev/loop3"
Sep 30 18:01:38 compute-0 exciting_germain[286827]:             ],
Sep 30 18:01:38 compute-0 exciting_germain[286827]:             "lv_name": "ceph_lv0",
Sep 30 18:01:38 compute-0 exciting_germain[286827]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:01:38 compute-0 exciting_germain[286827]:             "lv_size": "21470642176",
Sep 30 18:01:38 compute-0 exciting_germain[286827]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:01:38 compute-0 exciting_germain[286827]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:01:38 compute-0 exciting_germain[286827]:             "name": "ceph_lv0",
Sep 30 18:01:38 compute-0 exciting_germain[286827]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:01:38 compute-0 exciting_germain[286827]:             "tags": {
Sep 30 18:01:38 compute-0 exciting_germain[286827]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:01:38 compute-0 exciting_germain[286827]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:01:38 compute-0 exciting_germain[286827]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:01:38 compute-0 exciting_germain[286827]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:01:38 compute-0 exciting_germain[286827]:                 "ceph.cluster_name": "ceph",
Sep 30 18:01:38 compute-0 exciting_germain[286827]:                 "ceph.crush_device_class": "",
Sep 30 18:01:38 compute-0 exciting_germain[286827]:                 "ceph.encrypted": "0",
Sep 30 18:01:38 compute-0 exciting_germain[286827]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:01:38 compute-0 exciting_germain[286827]:                 "ceph.osd_id": "0",
Sep 30 18:01:38 compute-0 exciting_germain[286827]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:01:38 compute-0 exciting_germain[286827]:                 "ceph.type": "block",
Sep 30 18:01:38 compute-0 exciting_germain[286827]:                 "ceph.vdo": "0",
Sep 30 18:01:38 compute-0 exciting_germain[286827]:                 "ceph.with_tpm": "0"
Sep 30 18:01:38 compute-0 exciting_germain[286827]:             },
Sep 30 18:01:38 compute-0 exciting_germain[286827]:             "type": "block",
Sep 30 18:01:38 compute-0 exciting_germain[286827]:             "vg_name": "ceph_vg0"
Sep 30 18:01:38 compute-0 exciting_germain[286827]:         }
Sep 30 18:01:38 compute-0 exciting_germain[286827]:     ]
Sep 30 18:01:38 compute-0 exciting_germain[286827]: }
Sep 30 18:01:38 compute-0 systemd[1]: libpod-b61b762a3687a478a1f6b394b9e6f3a0938287a2b10ef2604eabc72c9a3e889a.scope: Deactivated successfully.
Sep 30 18:01:38 compute-0 podman[286766]: 2025-09-30 18:01:38.584325385 +0000 UTC m=+0.512916866 container died b61b762a3687a478a1f6b394b9e6f3a0938287a2b10ef2604eabc72c9a3e889a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Sep 30 18:01:38 compute-0 python3.9[286884]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 18:01:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c788ef3b96fa21c2b76017ceca452572c4f4f4e1c91c32262a9ab7a79f0f482a-merged.mount: Deactivated successfully.
Sep 30 18:01:38 compute-0 podman[286766]: 2025-09-30 18:01:38.637830096 +0000 UTC m=+0.566421587 container remove b61b762a3687a478a1f6b394b9e6f3a0938287a2b10ef2604eabc72c9a3e889a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:01:38 compute-0 sudo[286882]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:38 compute-0 systemd[1]: libpod-conmon-b61b762a3687a478a1f6b394b9e6f3a0938287a2b10ef2604eabc72c9a3e889a.scope: Deactivated successfully.
Sep 30 18:01:38 compute-0 sudo[286541]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:38 compute-0 sudo[286914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:01:38 compute-0 sudo[286914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:01:38 compute-0 sudo[286914]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:01:38] "GET /metrics HTTP/1.1" 200 46519 "" "Prometheus/2.51.0"
Sep 30 18:01:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:01:38] "GET /metrics HTTP/1.1" 200 46519 "" "Prometheus/2.51.0"
Sep 30 18:01:38 compute-0 sudo[286953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:01:38 compute-0 sudo[286953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:01:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:01:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:01:38.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:01:38 compute-0 sudo[287025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvbvcinlsaxhnqpdtnjigxwvipsjbsmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255298.0197504-1918-5005606730549/AnsiballZ_file.py'
Sep 30 18:01:38 compute-0 sudo[287025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:01:39 compute-0 python3.9[287027]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 18:01:39 compute-0 sudo[287025]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:39 compute-0 podman[287076]: 2025-09-30 18:01:39.255589356 +0000 UTC m=+0.050293408 container create 3b22271fdbb737bfffeb3e551bccfc1d8aff5952650b5448a09a475d17189de6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_nash, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Sep 30 18:01:39 compute-0 systemd[1]: Started libpod-conmon-3b22271fdbb737bfffeb3e551bccfc1d8aff5952650b5448a09a475d17189de6.scope.
Sep 30 18:01:39 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:01:39 compute-0 podman[287076]: 2025-09-30 18:01:39.237475065 +0000 UTC m=+0.032179127 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:01:39 compute-0 podman[287076]: 2025-09-30 18:01:39.362909957 +0000 UTC m=+0.157614019 container init 3b22271fdbb737bfffeb3e551bccfc1d8aff5952650b5448a09a475d17189de6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:01:39 compute-0 podman[287076]: 2025-09-30 18:01:39.370153655 +0000 UTC m=+0.164857697 container start 3b22271fdbb737bfffeb3e551bccfc1d8aff5952650b5448a09a475d17189de6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_nash, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:01:39 compute-0 dazzling_nash[287109]: 167 167
Sep 30 18:01:39 compute-0 systemd[1]: libpod-3b22271fdbb737bfffeb3e551bccfc1d8aff5952650b5448a09a475d17189de6.scope: Deactivated successfully.
Sep 30 18:01:39 compute-0 podman[287076]: 2025-09-30 18:01:39.398458341 +0000 UTC m=+0.193168673 container attach 3b22271fdbb737bfffeb3e551bccfc1d8aff5952650b5448a09a475d17189de6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_nash, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Sep 30 18:01:39 compute-0 podman[287076]: 2025-09-30 18:01:39.400131134 +0000 UTC m=+0.194835176 container died 3b22271fdbb737bfffeb3e551bccfc1d8aff5952650b5448a09a475d17189de6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_nash, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Sep 30 18:01:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ffd3d14de5fe3f7bd6954879244977202c60796416e6794f9e1e5f8bb0bf863-merged.mount: Deactivated successfully.
Sep 30 18:01:39 compute-0 podman[287076]: 2025-09-30 18:01:39.482191708 +0000 UTC m=+0.276895750 container remove 3b22271fdbb737bfffeb3e551bccfc1d8aff5952650b5448a09a475d17189de6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 18:01:39 compute-0 systemd[1]: libpod-conmon-3b22271fdbb737bfffeb3e551bccfc1d8aff5952650b5448a09a475d17189de6.scope: Deactivated successfully.
Sep 30 18:01:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:01:39.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:39 compute-0 sudo[287288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydsjhphkqtaauhdtlsqnzxuzzvkyiyfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255299.3957884-1942-236087092530111/AnsiballZ_stat.py'
Sep 30 18:01:39 compute-0 sudo[287288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:01:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v677: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:01:39 compute-0 podman[287236]: 2025-09-30 18:01:39.671040868 +0000 UTC m=+0.029160110 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:01:39 compute-0 podman[287236]: 2025-09-30 18:01:39.782411953 +0000 UTC m=+0.140531175 container create 914c159d213ff27ccc20cb9a0418d1af9744c3ad97156e2530a9d9f6770b7798 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Sep 30 18:01:39 compute-0 systemd[1]: Started libpod-conmon-914c159d213ff27ccc20cb9a0418d1af9744c3ad97156e2530a9d9f6770b7798.scope.
Sep 30 18:01:39 compute-0 podman[287238]: 2025-09-30 18:01:39.835072122 +0000 UTC m=+0.178106481 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20250930)
Sep 30 18:01:39 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:01:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7e1d1bd2671924e7e43a801c6cc124457a1efb07b24f81e485ccae5e3f32f02/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:01:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7e1d1bd2671924e7e43a801c6cc124457a1efb07b24f81e485ccae5e3f32f02/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:01:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7e1d1bd2671924e7e43a801c6cc124457a1efb07b24f81e485ccae5e3f32f02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:01:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7e1d1bd2671924e7e43a801c6cc124457a1efb07b24f81e485ccae5e3f32f02/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:01:39 compute-0 podman[287236]: 2025-09-30 18:01:39.897265059 +0000 UTC m=+0.255384301 container init 914c159d213ff27ccc20cb9a0418d1af9744c3ad97156e2530a9d9f6770b7798 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_proskuriakova, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Sep 30 18:01:39 compute-0 podman[287236]: 2025-09-30 18:01:39.911481729 +0000 UTC m=+0.269600971 container start 914c159d213ff27ccc20cb9a0418d1af9744c3ad97156e2530a9d9f6770b7798 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_proskuriakova, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Sep 30 18:01:39 compute-0 podman[287236]: 2025-09-30 18:01:39.915301118 +0000 UTC m=+0.273420350 container attach 914c159d213ff27ccc20cb9a0418d1af9744c3ad97156e2530a9d9f6770b7798 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_proskuriakova, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True)
Sep 30 18:01:39 compute-0 python3.9[287290]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 18:01:39 compute-0 sudo[287288]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:40 compute-0 sudo[287400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imveuyxezhshhmhpcwkecyoeyazxasqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255299.3957884-1942-236087092530111/AnsiballZ_file.py'
Sep 30 18:01:40 compute-0 sudo[287400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:01:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:40 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21200046c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:01:40 compute-0 python3.9[287405]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 18:01:40 compute-0 sudo[287400]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:40 compute-0 lvm[287479]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:01:40 compute-0 lvm[287479]: VG ceph_vg0 finished
Sep 30 18:01:40 compute-0 serene_proskuriakova[287303]: {}
Sep 30 18:01:40 compute-0 systemd[1]: libpod-914c159d213ff27ccc20cb9a0418d1af9744c3ad97156e2530a9d9f6770b7798.scope: Deactivated successfully.
Sep 30 18:01:40 compute-0 systemd[1]: libpod-914c159d213ff27ccc20cb9a0418d1af9744c3ad97156e2530a9d9f6770b7798.scope: Consumed 1.165s CPU time.
Sep 30 18:01:40 compute-0 podman[287236]: 2025-09-30 18:01:40.631009405 +0000 UTC m=+0.989128647 container died 914c159d213ff27ccc20cb9a0418d1af9744c3ad97156e2530a9d9f6770b7798 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:01:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7e1d1bd2671924e7e43a801c6cc124457a1efb07b24f81e485ccae5e3f32f02-merged.mount: Deactivated successfully.
Sep 30 18:01:40 compute-0 podman[287236]: 2025-09-30 18:01:40.6800184 +0000 UTC m=+1.038137642 container remove 914c159d213ff27ccc20cb9a0418d1af9744c3ad97156e2530a9d9f6770b7798 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 18:01:40 compute-0 systemd[1]: libpod-conmon-914c159d213ff27ccc20cb9a0418d1af9744c3ad97156e2530a9d9f6770b7798.scope: Deactivated successfully.
Sep 30 18:01:40 compute-0 podman[287483]: 2025-09-30 18:01:40.712663878 +0000 UTC m=+0.079386395 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.buildah.version=1.33.7, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm, version=9.6, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=)
Sep 30 18:01:40 compute-0 sudo[286953]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:40 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2140001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:01:40 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:01:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:01:40 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:01:40 compute-0 podman[287513]: 2025-09-30 18:01:40.776166399 +0000 UTC m=+0.059041946 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.4)
Sep 30 18:01:40 compute-0 ceph-mon[73755]: pgmap v677: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:01:40 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:01:40 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:01:40 compute-0 sudo[287532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:01:40 compute-0 sudo[287532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:01:40 compute-0 sudo[287532]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:01:40.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:41 compute-0 sudo[287687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrbczkjtzxeqgjkcmvoodzyczrcbmwsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255300.7955492-1966-242008836166646/AnsiballZ_stat.py'
Sep 30 18:01:41 compute-0 sudo[287687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:01:41 compute-0 python3.9[287689]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 18:01:41 compute-0 sudo[287687]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:01:41.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v678: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:41 compute-0 sudo[287814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htmcjeyrswcgkipchpnejgplnphliewd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255300.7955492-1966-242008836166646/AnsiballZ_copy.py'
Sep 30 18:01:41 compute-0 sudo[287814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:01:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/180141 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 18:01:42 compute-0 python3.9[287816]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759255300.7955492-1966-242008836166646/.source.nft follow=False _original_basename=ruleset.j2 checksum=bc835bd485c96b4ac7465e87d3a790a8d097f2aa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 18:01:42 compute-0 sudo[287814]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:42 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118004200 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:42 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:42 compute-0 sudo[287966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnvfsyverckxhtmpybccmhwncpeibuyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255302.4917748-1996-79062164031397/AnsiballZ_file.py'
Sep 30 18:01:42 compute-0 sudo[287966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:01:42 compute-0 ceph-mon[73755]: pgmap v678: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 18:01:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:01:42.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 18:01:42 compute-0 python3.9[287968]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 18:01:43 compute-0 sudo[287966]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:01:43.608Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:01:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:01:43.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:43 compute-0 sudo[288119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-setpghxzybkuftcubzgjpiwnmdkhvhjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255303.353317-2012-104107126440322/AnsiballZ_command.py'
Sep 30 18:01:43 compute-0 sudo[288119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:01:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v679: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:43 compute-0 python3.9[288121]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 18:01:43 compute-0 sudo[288119]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:44 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21200046c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:44 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2140001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:44 compute-0 sudo[288275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgutfloqmbuefzkymqpxqknfkccxypiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255304.287801-2028-225677382740996/AnsiballZ_blockinfile.py'
Sep 30 18:01:44 compute-0 sudo[288275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:01:44 compute-0 ceph-mon[73755]: pgmap v679: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:01:44.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:44 compute-0 python3.9[288277]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 18:01:44 compute-0 sudo[288275]: pam_unix(sudo:session): session closed for user root
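Based on the blockinfile parameters in the entry above, the managed block that this task maintains in /etc/sysconfig/nftables.conf would look roughly as follows. This is a sketch reconstructed from the logged arguments only (marker text from marker=# {mark} ANSIBLE MANAGED BLOCK with BEGIN/END, include lines from the block parameter); the resulting file is not shown in the journal, and the task validates it with "nft -c -f %s" before writing it.

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK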
Sep 30 18:01:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:01:45 compute-0 sudo[288428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxvyevchrknvjixrbnxkcdvajniovvjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255305.3230083-2046-253417834962283/AnsiballZ_command.py'
Sep 30 18:01:45 compute-0 sudo[288428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:01:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:01:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:01:45.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:01:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v680: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:45 compute-0 python3.9[288430]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 18:01:45 compute-0 sudo[288428]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:45 compute-0 sudo[288433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:01:45 compute-0 sudo[288433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:01:45 compute-0 sudo[288433]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:46 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118004200 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:46 compute-0 sudo[288607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsltwngjjzbmbavmofktzbcybvwaxnnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255306.178298-2062-226080082047499/AnsiballZ_stat.py'
Sep 30 18:01:46 compute-0 sudo[288607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:01:46 compute-0 python3.9[288609]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 18:01:46 compute-0 sudo[288607]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:46 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:46 compute-0 ceph-mon[73755]: pgmap v680: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:01:46.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:01:47.091Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:01:47 compute-0 sudo[288761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmwvvbuspdwpfgrerfbnaftssxhyzxrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255307.0518975-2078-180363224035136/AnsiballZ_command.py'
Sep 30 18:01:47 compute-0 sudo[288761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:01:47 compute-0 python3.9[288763]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 18:01:47 compute-0 sudo[288761]: pam_unix(sudo:session): session closed for user root
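Taken together, the ansible.legacy.command invocations logged between 18:01:43 and 18:01:47 amount to a check-then-apply sequence for the edpm nftables ruleset. A minimal sketch of that sequence, with file names and ordering copied from the logged commands (the /etc/nftables/edpm-rules.nft.changed marker-file handling seen elsewhere in the journal is omitted):

    # 1. Dry-run the complete ruleset; -c validates without loading anything into the kernel
    cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f -
    # 2. Load the chain definitions
    nft -f /etc/nftables/edpm-chains.nft
    # 3. Flush and reload the rules, then refresh the jump rules, as one atomic ruleset load
    cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f -

Validating with -c first means a syntactically broken rules file is rejected before steps 2 and 3 can leave the host with a partially applied ruleset.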
Sep 30 18:01:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:01:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:01:47.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:01:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v681: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:48 compute-0 sudo[288918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpjrcrgjsaovayisayothbuynsyopgcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759255307.9250007-2094-145751866816607/AnsiballZ_file.py'
Sep 30 18:01:48 compute-0 sudo[288918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:01:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:48 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:48 compute-0 python3.9[288920]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 18:01:48 compute-0 sudo[288918]: pam_unix(sudo:session): session closed for user root
Sep 30 18:01:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:48 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2140001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:01:48] "GET /metrics HTTP/1.1" 200 46519 "" "Prometheus/2.51.0"
Sep 30 18:01:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:01:48] "GET /metrics HTTP/1.1" 200 46519 "" "Prometheus/2.51.0"
Sep 30 18:01:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:01:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:01:48.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:01:48 compute-0 ceph-mon[73755]: pgmap v681: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:01:49 compute-0 sshd-session[265728]: Connection closed by 192.168.122.30 port 42586
Sep 30 18:01:49 compute-0 sshd-session[265724]: pam_unix(sshd:session): session closed for user zuul
Sep 30 18:01:49 compute-0 systemd-logind[811]: Session 58 logged out. Waiting for processes to exit.
Sep 30 18:01:49 compute-0 systemd[1]: session-58.scope: Deactivated successfully.
Sep 30 18:01:49 compute-0 systemd[1]: session-58.scope: Consumed 1min 27.908s CPU time.
Sep 30 18:01:49 compute-0 systemd-logind[811]: Removed session 58.
Sep 30 18:01:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:01:49.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v682: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 18:01:50 compute-0 unix_chkpwd[288949]: password check failed for user (root)
Sep 30 18:01:50 compute-0 sshd-session[288945]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107  user=root
Sep 30 18:01:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:50 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118004200 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:01:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:50 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:50 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:01:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:01:50.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:50 compute-0 ceph-mon[73755]: pgmap v682: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 18:01:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:01:51.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v683: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Sep 30 18:01:52 compute-0 sshd-session[288945]: Failed password for root from 14.225.220.107 port 36302 ssh2
Sep 30 18:01:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:01:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:01:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:52 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2140001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:52 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21200046c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:01:52.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:52 compute-0 ceph-mon[73755]: pgmap v683: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Sep 30 18:01:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:01:53 compute-0 sshd-session[288945]: Received disconnect from 14.225.220.107 port 36302:11: Bye Bye [preauth]
Sep 30 18:01:53 compute-0 sshd-session[288945]: Disconnected from authenticating user root 14.225.220.107 port 36302 [preauth]
Sep 30 18:01:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:01:53.609Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:01:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:01:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:01:53.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:01:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v684: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Sep 30 18:01:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:53 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:01:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:53 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:01:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:01:54.260 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:01:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:01:54.261 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:01:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:01:54.261 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:01:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:54 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118004200 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:54 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300041e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:01:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:01:54.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:01:54 compute-0 ceph-mon[73755]: pgmap v684: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Sep 30 18:01:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:01:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:01:55.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v685: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Sep 30 18:01:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:56 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21400096f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:56 compute-0 podman[288958]: 2025-09-30 18:01:56.559221073 +0000 UTC m=+0.092309221 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Sep 30 18:01:56 compute-0 podman[288957]: 2025-09-30 18:01:56.559179942 +0000 UTC m=+0.099174079 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true)
Sep 30 18:01:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:56 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21200046c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:01:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:01:56.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:01:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:56 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 18:01:56 compute-0 ceph-mon[73755]: pgmap v685: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Sep 30 18:01:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:01:57.093Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:01:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:01:57.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v686: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Sep 30 18:01:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:58 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118004200 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:01:58 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2130004200 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:01:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:01:58] "GET /metrics HTTP/1.1" 200 46517 "" "Prometheus/2.51.0"
Sep 30 18:01:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:01:58] "GET /metrics HTTP/1.1" 200 46517 "" "Prometheus/2.51.0"
Sep 30 18:01:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:01:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:01:58.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:01:59 compute-0 ceph-mon[73755]: pgmap v686: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Sep 30 18:01:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:01:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:01:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:01:59.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:01:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v687: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 18:01:59 compute-0 podman[276673]: time="2025-09-30T18:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:01:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:01:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10194 "" "Go-http-client/1.1"
Sep 30 18:02:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:00 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21400096f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:02:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:00 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21200046c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:02:00.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:01 compute-0 ceph-mon[73755]: pgmap v687: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 18:02:01 compute-0 openstack_network_exporter[279566]: ERROR   18:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:02:01 compute-0 openstack_network_exporter[279566]: ERROR   18:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:02:01 compute-0 openstack_network_exporter[279566]: ERROR   18:02:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:02:01 compute-0 openstack_network_exporter[279566]: ERROR   18:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:02:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:02:01 compute-0 openstack_network_exporter[279566]: ERROR   18:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:02:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:02:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:02:01.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v688: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 1 op/s
Sep 30 18:02:02 compute-0 sshd-session[289006]: Invalid user strapi from 154.125.120.7 port 60581
Sep 30 18:02:02 compute-0 sshd-session[289006]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:02:02 compute-0 sshd-session[289006]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=154.125.120.7
Sep 30 18:02:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:02 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118004200 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:02 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2130004220 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:02:02.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:03 compute-0 ceph-mon[73755]: pgmap v688: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 1 op/s
Sep 30 18:02:03 compute-0 podman[289019]: 2025-09-30 18:02:03.533563835 +0000 UTC m=+0.065389141 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Sep 30 18:02:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:02:03.610Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:02:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:02:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:02:03.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:02:03 compute-0 sshd-session[289006]: Failed password for invalid user strapi from 154.125.120.7 port 60581 ssh2
Sep 30 18:02:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v689: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 18:02:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/180203 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 18:02:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:04 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21400096f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:04 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21200046c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:02:04.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:05 compute-0 ceph-mon[73755]: pgmap v689: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 18:02:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:02:05 compute-0 sshd-session[289006]: Received disconnect from 154.125.120.7 port 60581:11: Bye Bye [preauth]
Sep 30 18:02:05 compute-0 sshd-session[289006]: Disconnected from invalid user strapi 154.125.120.7 port 60581 [preauth]
Sep 30 18:02:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:02:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:02:05.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:02:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v690: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Sep 30 18:02:06 compute-0 sudo[289043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:02:06 compute-0 sudo[289043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:02:06 compute-0 sudo[289043]: pam_unix(sudo:session): session closed for user root
Sep 30 18:02:06 compute-0 ceph-mon[73755]: pgmap v690: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Sep 30 18:02:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:06 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118004200 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:06 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2130004240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:02:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:02:06.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:02:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:02:07.093Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:02:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:02:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:02:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:02:07
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['.nfs', 'default.rgw.log', '.rgw.root', '.mgr', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes', 'images']
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:02:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:02:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:02:07.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:02:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v691: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Sep 30 18:02:08 compute-0 ceph-mon[73755]: pgmap v691: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Sep 30 18:02:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:08 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2130004240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:08 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21200046c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:02:08] "GET /metrics HTTP/1.1" 200 46519 "" "Prometheus/2.51.0"
Sep 30 18:02:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:02:08] "GET /metrics HTTP/1.1" 200 46519 "" "Prometheus/2.51.0"
Sep 30 18:02:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:02:08.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:02:09.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v692: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Sep 30 18:02:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:10 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2134001c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:02:10 compute-0 podman[289074]: 2025-09-30 18:02:10.520121437 +0000 UTC m=+0.055510895 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0)
Sep 30 18:02:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:10 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:10 compute-0 ceph-mon[73755]: pgmap v692: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Sep 30 18:02:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:02:10.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:11 compute-0 podman[289095]: 2025-09-30 18:02:11.519081778 +0000 UTC m=+0.062832275 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS)
Sep 30 18:02:11 compute-0 podman[289096]: 2025-09-30 18:02:11.52302696 +0000 UTC m=+0.062921346 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., distribution-scope=public, release=1755695350, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Sep 30 18:02:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:02:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:02:11.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:02:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v693: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:02:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:12 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2130004260 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:12 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21200046c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:12 compute-0 ceph-mon[73755]: pgmap v693: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:02:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:02:12.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:02:13.610Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:02:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:02:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:02:13.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:02:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v694: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:02:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:14 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2134001c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:14 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114001ba0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:14 compute-0 ceph-mon[73755]: pgmap v694: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:02:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:02:14.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:02:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:02:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:02:15.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:02:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v695: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:02:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:16 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2130004280 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:16 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21200046c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:16 compute-0 ceph-mon[73755]: pgmap v695: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:02:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:02:16.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:02:17.094Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:02:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:02:17.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v696: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:02:18 compute-0 ceph-mon[73755]: pgmap v696: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:02:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:18 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2134002990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:18 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114001ba0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:02:18] "GET /metrics HTTP/1.1" 200 46519 "" "Prometheus/2.51.0"
Sep 30 18:02:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:02:18] "GET /metrics HTTP/1.1" 200 46519 "" "Prometheus/2.51.0"
Sep 30 18:02:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:02:18.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:02:19.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v697: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:02:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:02:20 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/482529736' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:02:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:02:20 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/482529736' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:02:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:02:20 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/476006598' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:02:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:02:20 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/476006598' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:02:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:20 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21200046c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:02:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:02:20 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1291920182' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:02:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:20 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2134002990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:02:20 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1291920182' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:02:20 compute-0 ceph-mon[73755]: pgmap v697: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:02:20 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/482529736' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:02:20 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/482529736' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:02:20 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/476006598' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:02:20 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/476006598' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:02:20 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2360249032' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:02:20 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2360249032' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:02:20 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1291920182' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:02:20 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1291920182' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:02:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:02:20.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:02:21 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4025706531' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:02:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:02:21 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4025706531' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:02:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:02:21 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1764238396' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:02:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:02:21 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1764238396' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:02:21 compute-0 nova_compute[265391]: 2025-09-30 18:02:21.545 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:02:21 compute-0 nova_compute[265391]: 2025-09-30 18:02:21.546 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:02:21 compute-0 nova_compute[265391]: 2025-09-30 18:02:21.546 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:02:21 compute-0 nova_compute[265391]: 2025-09-30 18:02:21.546 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:02:21 compute-0 nova_compute[265391]: 2025-09-30 18:02:21.546 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:02:21 compute-0 nova_compute[265391]: 2025-09-30 18:02:21.547 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:02:21 compute-0 nova_compute[265391]: 2025-09-30 18:02:21.547 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:02:21 compute-0 nova_compute[265391]: 2025-09-30 18:02:21.547 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:02:21 compute-0 nova_compute[265391]: 2025-09-30 18:02:21.547 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:02:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:02:21.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v698: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:02:21 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/4025706531' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:02:21 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/4025706531' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:02:21 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1764238396' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:02:21 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1764238396' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:02:22 compute-0 nova_compute[265391]: 2025-09-30 18:02:22.067 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:02:22 compute-0 nova_compute[265391]: 2025-09-30 18:02:22.068 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:02:22 compute-0 nova_compute[265391]: 2025-09-30 18:02:22.068 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:02:22 compute-0 nova_compute[265391]: 2025-09-30 18:02:22.068 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:02:22 compute-0 nova_compute[265391]: 2025-09-30 18:02:22.069 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:02:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:02:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:02:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:22 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300042a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:02:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3258341083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:02:22 compute-0 nova_compute[265391]: 2025-09-30 18:02:22.560 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:02:22 compute-0 nova_compute[265391]: 2025-09-30 18:02:22.718 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:02:22 compute-0 nova_compute[265391]: 2025-09-30 18:02:22.720 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:02:22 compute-0 nova_compute[265391]: 2025-09-30 18:02:22.736 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.016s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:02:22 compute-0 nova_compute[265391]: 2025-09-30 18:02:22.736 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4882MB free_disk=39.9921875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:02:22 compute-0 nova_compute[265391]: 2025-09-30 18:02:22.737 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:02:22 compute-0 nova_compute[265391]: 2025-09-30 18:02:22.737 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:02:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:22 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114001ba0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:22 compute-0 ceph-mon[73755]: pgmap v698: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:02:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:02:22 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/278088296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:02:22 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3258341083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:02:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:02:22.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:02:23.611Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:02:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:02:23.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v699: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:02:23 compute-0 nova_compute[265391]: 2025-09-30 18:02:23.865 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:02:23 compute-0 nova_compute[265391]: 2025-09-30 18:02:23.866 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:02:22 up  1:05,  0 user,  load average: 0.48, 0.94, 1.00\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:02:23 compute-0 nova_compute[265391]: 2025-09-30 18:02:23.895 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:02:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:02:24 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/234598981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:02:24 compute-0 nova_compute[265391]: 2025-09-30 18:02:24.354 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
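The disk figures in the resource tracker's view above come from shelling out to the Ceph CLI. A minimal sketch of the same call follows, reusing only the flags visible in the log line; the JSON key names used to read the result are assumptions about `ceph df --format=json` output, not taken from this log.

```python
import json
import subprocess

# Same command the resource tracker runs (flags copied from the log line above).
cmd = [
    "ceph", "df", "--format=json",
    "--id", "openstack",
    "--conf", "/etc/ceph/ceph.conf",
]

out = subprocess.run(cmd, capture_output=True, text=True, check=True)
df = json.loads(out.stdout)

# Key names below are assumptions about the 'ceph df' JSON layout.
stats = df.get("stats", {})
total_gib = stats.get("total_bytes", 0) / 2**30
avail_gib = stats.get("total_avail_bytes", 0) / 2**30
print(f"cluster: {avail_gib:.1f} GiB free of {total_gib:.1f} GiB")
```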
Sep 30 18:02:24 compute-0 nova_compute[265391]: 2025-09-30 18:02:24.360 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:02:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:24 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21200046c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:24 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2134002990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:24 compute-0 ceph-mon[73755]: pgmap v699: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:02:24 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3346942751' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:02:24 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/234598981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:02:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:02:24.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:25 compute-0 nova_compute[265391]: 2025-09-30 18:02:25.273 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 39, 'reserved': 0, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
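Each inventory record reported to placement above carries total, reserved, and allocation_ratio. A small sketch of the schedulable capacity implied by that record, assuming the usual placement formula capacity = (total - reserved) * allocation_ratio:

```python
# Inventory values copied from the report client log line above.
inventory = {
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "DISK_GB":   {"total": 39,   "reserved": 0,   "allocation_ratio": 0.9},
}

def capacity(inv: dict) -> float:
    # Assumed placement formula: usable = (total - reserved) * allocation_ratio.
    return (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]

for rc, inv in inventory.items():
    print(f"{rc}: schedulable capacity {capacity(inv):g}")
```

With these numbers the node advertises roughly 7167 MB of RAM, 32 vCPUs, and 35 GB of disk to the scheduler.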
Sep 30 18:02:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:02:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:02:25.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v700: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:02:25 compute-0 nova_compute[265391]: 2025-09-30 18:02:25.797 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:02:25 compute-0 nova_compute[265391]: 2025-09-30 18:02:25.797 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.060s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
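The "acquired :: waited 0.000s" / "released :: held 3.060s" pair bracketing _update_available_resource reflects a lock-with-timing pattern. A minimal stand-alone sketch of that pattern using only the standard library (this is not oslo.concurrency itself; names are chosen for illustration):

```python
import threading
import time
from contextlib import contextmanager

_locks: dict[str, threading.Lock] = {}

@contextmanager
def timed_lock(name: str):
    # Illustrative stand-in for the acquire/held timing seen in the lockutils lines above.
    lock = _locks.setdefault(name, threading.Lock())
    t0 = time.monotonic()
    with lock:
        waited = time.monotonic() - t0
        print(f'Lock "{name}" acquired :: waited {waited:.3f}s')
        t1 = time.monotonic()
        try:
            yield
        finally:
            print(f'Lock "{name}" released :: held {time.monotonic() - t1:.3f}s')

with timed_lock("compute_resources"):
    time.sleep(0.1)  # stand-in for _update_available_resource()
```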
Sep 30 18:02:26 compute-0 sudo[289196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:02:26 compute-0 sudo[289196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:02:26 compute-0 sudo[289196]: pam_unix(sudo:session): session closed for user root
Sep 30 18:02:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:26 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300042c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:26 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114001ba0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:26 compute-0 ceph-mon[73755]: pgmap v700: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:02:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:02:26.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:02:27.096Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:02:27 compute-0 podman[289223]: 2025-09-30 18:02:27.512723669 +0000 UTC m=+0.056758007 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Sep 30 18:02:27 compute-0 podman[289222]: 2025-09-30 18:02:27.573285153 +0000 UTC m=+0.119966530 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Sep 30 18:02:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:02:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:02:27.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
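The radosgw "beast" access lines repeat a fixed layout (client, user, timestamp, request, status, bytes, latency), so the anonymous HEAD health checks are easy to pull apart. A small sketch of parsing one of the lines above with a regex derived from that layout:

```python
import re

line = ('beast: 0x7f25f11485d0: 192.168.122.101 - anonymous '
        '[30/Sep/2025:18:02:27.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.001000026s')

pattern = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+).*'
    r'latency=(?P<latency>[\d.]+)s'
)

m = pattern.search(line)
if m:
    print(m.group("client"), m.group("req"), m.group("status"), m.group("latency"))
```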
Sep 30 18:02:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v701: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:02:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:28 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21200046c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:28 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2134002990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:02:28] "GET /metrics HTTP/1.1" 200 46522 "" "Prometheus/2.51.0"
Sep 30 18:02:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:02:28] "GET /metrics HTTP/1.1" 200 46522 "" "Prometheus/2.51.0"
Sep 30 18:02:28 compute-0 ceph-mon[73755]: pgmap v701: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:02:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:02:28.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:29 compute-0 PackageKit[195837]: daemon quit
Sep 30 18:02:29 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Sep 30 18:02:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:02:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:02:29.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:02:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v702: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:02:29 compute-0 podman[276673]: time="2025-09-30T18:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:02:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:02:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10214 "" "Go-http-client/1.1"
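The two requests above are the podman exporter polling the libpod REST API over the local socket. A sketch of issuing the same container-list call from Python over a UNIX socket (the socket path matches the CONTAINER_HOST setting shown in the exporter's config; root or equivalent access to it is assumed):

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that talks to a UNIX-domain socket instead of TCP."""
    def __init__(self, path: str):
        super().__init__("localhost")
        self._path = path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self._path)

conn = UnixHTTPConnection("/run/podman/podman.sock")  # path taken from the exporter config above
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")  # same endpoint as in the log
resp = conn.getresponse()
containers = json.loads(resp.read())
print(f"{len(containers)} containers reported by libpod")
```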
Sep 30 18:02:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:30 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21300042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:02:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:30 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114001ba0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:30 compute-0 ceph-mon[73755]: pgmap v702: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:02:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:02:30.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:31 compute-0 openstack_network_exporter[279566]: ERROR   18:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:02:31 compute-0 openstack_network_exporter[279566]: ERROR   18:02:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:02:31 compute-0 openstack_network_exporter[279566]: ERROR   18:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:02:31 compute-0 openstack_network_exporter[279566]: ERROR   18:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:02:31 compute-0 openstack_network_exporter[279566]: ERROR   18:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:02:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:02:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:02:31.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:02:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v703: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:02:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:32 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21200046c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:32 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21340040d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:02:32.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:32 compute-0 ceph-mon[73755]: pgmap v703: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:02:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:02:33.614Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
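Alertmanager keeps failing to deliver to the dashboard webhook on compute-1 with "context deadline exceeded", which usually means the request never completed rather than being rejected. A minimal reachability probe for that receiver endpoint, standard library only, with host and port taken from the error message above:

```python
import socket

HOST, PORT = "compute-1.ctlplane.example.com", 8443

try:
    # A plain TCP connect separates "unreachable / filtered / nothing listening"
    # from an HTTP- or TLS-level problem on the receiver side.
    with socket.create_connection((HOST, PORT), timeout=5):
        print(f"TCP connect to {HOST}:{PORT} succeeded; inspect the receiver/TLS layer next")
except OSError as exc:
    print(f"TCP connect to {HOST}:{PORT} failed: {exc}")
```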
Sep 30 18:02:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:02:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:02:33.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:02:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v704: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:02:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:34 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2130004300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:34 compute-0 podman[289283]: 2025-09-30 18:02:34.553077426 +0000 UTC m=+0.077326851 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.4)
Sep 30 18:02:34 compute-0 sshd-session[289279]: Invalid user faisal from 45.252.249.158 port 56980
Sep 30 18:02:34 compute-0 sshd-session[289279]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:02:34 compute-0 sshd-session[289279]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158
Sep 30 18:02:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:34 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114001ba0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:02:34.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:34 compute-0 ceph-mon[73755]: pgmap v704: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:02:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:02:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:02:35.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v705: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:02:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:36 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21200046c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:36 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21340040d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:02:36.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:36 compute-0 sshd-session[289279]: Failed password for invalid user faisal from 45.252.249.158 port 56980 ssh2
Sep 30 18:02:37 compute-0 ceph-mon[73755]: pgmap v705: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:02:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:02:37.097Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:02:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:02:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:02:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:02:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:02:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:02:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:02:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:02:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:02:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:02:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:02:37.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:02:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v706: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:02:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:02:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2130004960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114001ba0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:02:38] "GET /metrics HTTP/1.1" 200 46520 "" "Prometheus/2.51.0"
Sep 30 18:02:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:02:38] "GET /metrics HTTP/1.1" 200 46520 "" "Prometheus/2.51.0"
Sep 30 18:02:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:02:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:02:38.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:02:39 compute-0 ceph-mon[73755]: pgmap v706: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:02:39 compute-0 sshd-session[289279]: Received disconnect from 45.252.249.158 port 56980:11: Bye Bye [preauth]
Sep 30 18:02:39 compute-0 sshd-session[289279]: Disconnected from invalid user faisal 45.252.249.158 port 56980 [preauth]
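Mixed in with the service chatter is a short SSH brute-force attempt (invalid user "faisal" from 45.252.249.158, lines above). A quick sketch for tallying such attempts per source address from an exported journal text file; the file name is an assumption, and the regex is built from the sshd-session messages seen here:

```python
import re
from collections import Counter

# Matches lines like:
#   "Failed password for invalid user faisal from 45.252.249.158 port 56980 ssh2"
FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+) port \d+")

attempts = Counter()
with open("compute-0-journal.log", encoding="utf-8", errors="replace") as fh:  # assumed export path
    for line in fh:
        m = FAILED.search(line)
        if m:
            user, src = m.groups()
            attempts[(src, user)] += 1

for (src, user), n in attempts.most_common(10):
    print(f"{src} -> {user}: {n} failed attempt(s)")
```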
Sep 30 18:02:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:02:39.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v707: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:02:40 compute-0 ceph-mon[73755]: pgmap v707: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:02:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:40 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21180019e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:02:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:40 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21340040d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:02:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:02:40.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:02:41 compute-0 sudo[289311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:02:41 compute-0 sudo[289311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:02:41 compute-0 sudo[289311]: pam_unix(sudo:session): session closed for user root
Sep 30 18:02:41 compute-0 sudo[289342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Sep 30 18:02:41 compute-0 sudo[289342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:02:41 compute-0 podman[289335]: 2025-09-30 18:02:41.10631967 +0000 UTC m=+0.064593660 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, container_name=iscsid)
Sep 30 18:02:41 compute-0 podman[289457]: 2025-09-30 18:02:41.629875322 +0000 UTC m=+0.062866246 container exec 28cb2903608cec8e7c5a39aa3f398cadea7d807beef2cb8cca333795d18a1341 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:02:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:02:41.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:41 compute-0 podman[289457]: 2025-09-30 18:02:41.750934109 +0000 UTC m=+0.183925043 container exec_died 28cb2903608cec8e7c5a39aa3f398cadea7d807beef2cb8cca333795d18a1341 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:02:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v708: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:02:41 compute-0 podman[289492]: 2025-09-30 18:02:41.921099683 +0000 UTC m=+0.078106711 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20250930, config_id=multipathd, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd)
Sep 30 18:02:41 compute-0 podman[289493]: 2025-09-30 18:02:41.926006441 +0000 UTC m=+0.078408640 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, distribution-scope=public, io.openshift.tags=minimal rhel9, config_id=edpm, io.buildah.version=1.33.7, name=ubi9-minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., release=1755695350)
Sep 30 18:02:42 compute-0 podman[289614]: 2025-09-30 18:02:42.320057406 +0000 UTC m=+0.063625475 container exec 0a64c261699b14850432928cca94e24743e97435fdc1d2cca08edcee1f870789 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:02:42 compute-0 podman[289614]: 2025-09-30 18:02:42.328736601 +0000 UTC m=+0.072304680 container exec_died 0a64c261699b14850432928cca94e24743e97435fdc1d2cca08edcee1f870789 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:02:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:42 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21400022a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:42 compute-0 podman[289707]: 2025-09-30 18:02:42.678193663 +0000 UTC m=+0.054125880 container exec 76f85129328150eb71d21e7c61e63b1c179a003382bbdb3cb36bea5b2bfe00b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:02:42 compute-0 podman[289707]: 2025-09-30 18:02:42.717597458 +0000 UTC m=+0.093529665 container exec_died 76f85129328150eb71d21e7c61e63b1c179a003382bbdb3cb36bea5b2bfe00b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Sep 30 18:02:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:42 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114001ba0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:42 compute-0 ceph-mon[73755]: pgmap v708: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:02:42 compute-0 podman[289773]: 2025-09-30 18:02:42.92714289 +0000 UTC m=+0.063041621 container exec e1cdc51276e16f535030605d2a18f629e9fe31bf73cf0a7411e18214526b68e5 (image=quay.io/ceph/haproxy:2.3, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha)
Sep 30 18:02:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:02:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:02:42.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:02:42 compute-0 podman[289773]: 2025-09-30 18:02:42.945999821 +0000 UTC m=+0.081898582 container exec_died e1cdc51276e16f535030605d2a18f629e9fe31bf73cf0a7411e18214526b68e5 (image=quay.io/ceph/haproxy:2.3, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha)
Sep 30 18:02:43 compute-0 podman[289841]: 2025-09-30 18:02:43.174919737 +0000 UTC m=+0.061767358 container exec b5b34e9f9a1e5f8a3081bfbcbc3f8d4401b0873cf75447793762597a4ea89ff4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc, name=keepalived, vcs-type=git, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64)
Sep 30 18:02:43 compute-0 podman[289841]: 2025-09-30 18:02:43.196622092 +0000 UTC m=+0.083469713 container exec_died b5b34e9f9a1e5f8a3081bfbcbc3f8d4401b0873cf75447793762597a4ea89ff4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, description=keepalived for Ceph, io.buildah.version=1.28.2, distribution-scope=public, name=keepalived, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, vendor=Red Hat, Inc., version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived)
Sep 30 18:02:43 compute-0 podman[289904]: 2025-09-30 18:02:43.417112427 +0000 UTC m=+0.048872441 container exec 316307af09cabd347a032cf06e9544ea621ab1be46fe52d1c226ab05d1d9e331 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:02:43 compute-0 podman[289904]: 2025-09-30 18:02:43.449734906 +0000 UTC m=+0.081494900 container exec_died 316307af09cabd347a032cf06e9544ea621ab1be46fe52d1c226ab05d1d9e331 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:02:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:02:43.614Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:02:43 compute-0 podman[289984]: 2025-09-30 18:02:43.696377453 +0000 UTC m=+0.074227352 container exec cb74c136339a4f8b8601bbbbeb0c8f6158501bcf17f6fc4369db4a5ec1b442a2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 18:02:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:02:43.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v709: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:02:43 compute-0 podman[289984]: 2025-09-30 18:02:43.92722592 +0000 UTC m=+0.305075839 container exec_died cb74c136339a4f8b8601bbbbeb0c8f6158501bcf17f6fc4369db4a5ec1b442a2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 18:02:44 compute-0 podman[290093]: 2025-09-30 18:02:44.407525606 +0000 UTC m=+0.082273921 container exec 262a601a228f752d0c9dbf2b0c2d1b05fe66f88f5c16c2fc8adb324868709b53 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:02:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:44 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21180019e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:44 compute-0 podman[290093]: 2025-09-30 18:02:44.455675159 +0000 UTC m=+0.130423434 container exec_died 262a601a228f752d0c9dbf2b0c2d1b05fe66f88f5c16c2fc8adb324868709b53 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:02:44 compute-0 sudo[289342]: pam_unix(sudo:session): session closed for user root
Sep 30 18:02:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:02:44 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:02:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:02:44 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:02:44 compute-0 sudo[290133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:02:44 compute-0 sudo[290133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:02:44 compute-0 sudo[290133]: pam_unix(sudo:session): session closed for user root
Sep 30 18:02:44 compute-0 sudo[290158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:02:44 compute-0 sudo[290158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:02:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:44 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21340040d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:44 compute-0 ceph-mon[73755]: pgmap v709: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:02:44 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:02:44 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:02:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:02:44.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:45 compute-0 sudo[290158]: pam_unix(sudo:session): session closed for user root
Sep 30 18:02:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:02:45 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:02:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:02:45 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:02:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:02:45 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:02:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:02:45 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:02:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:02:45 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:02:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:02:45 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:02:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:02:45 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:02:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:02:45 compute-0 sudo[290213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:02:45 compute-0 sudo[290213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:02:45 compute-0 sudo[290213]: pam_unix(sudo:session): session closed for user root
Sep 30 18:02:45 compute-0 sudo[290238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:02:45 compute-0 sudo[290238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
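
The sudo line above shows how cephadm drives ceph-volume on this host: it re-executes its own copy from /var/lib/ceph/<fsid>/, pins the container image by digest, passes CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group so the resulting OSD is attributed to that drive group, and streams the config and keyring in via --config-json on stdin. Because `lvm batch` modifies devices, a safer way to exercise the same wrapper by hand is the read-only `inventory` subcommand, sketched below with the fsid and image digest copied from the log; using a `cephadm` binary on PATH is an assumption, since the run above used the copy under /var/lib/ceph/<fsid>/:

    import json
    import subprocess

    fsid = "63d32c6a-fa18-54ed-8711-9a3915cc367b"
    image = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    out = subprocess.run(
        ["cephadm", "--image", image, "ceph-volume", "--fsid", fsid,
         "--", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True).stdout
    # ceph-volume inventory emits a JSON list of devices; skip any wrapper chatter before it.
    for dev in json.loads(out[out.index("["):]):
        print(dev.get("path"), dev.get("available"), dev.get("rejected_reasons"))
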
Sep 30 18:02:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:02:45.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v710: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:02:45 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:02:45 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:02:45 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:02:45 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:02:45 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:02:45 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:02:45 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:02:45 compute-0 podman[290306]: 2025-09-30 18:02:45.981183271 +0000 UTC m=+0.060588538 container create e6f838ab2a2d320fb1bd37c296386011a1d7ba4dca73f11c2a93778b2b752546 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_hypatia, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Sep 30 18:02:46 compute-0 systemd[1]: Started libpod-conmon-e6f838ab2a2d320fb1bd37c296386011a1d7ba4dca73f11c2a93778b2b752546.scope.
Sep 30 18:02:46 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:02:46 compute-0 podman[290306]: 2025-09-30 18:02:45.957925706 +0000 UTC m=+0.037330933 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:02:46 compute-0 podman[290306]: 2025-09-30 18:02:46.065265478 +0000 UTC m=+0.144670945 container init e6f838ab2a2d320fb1bd37c296386011a1d7ba4dca73f11c2a93778b2b752546 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_hypatia, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Sep 30 18:02:46 compute-0 podman[290306]: 2025-09-30 18:02:46.075883835 +0000 UTC m=+0.155289102 container start e6f838ab2a2d320fb1bd37c296386011a1d7ba4dca73f11c2a93778b2b752546 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_hypatia, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:02:46 compute-0 podman[290306]: 2025-09-30 18:02:46.080043573 +0000 UTC m=+0.159448890 container attach e6f838ab2a2d320fb1bd37c296386011a1d7ba4dca73f11c2a93778b2b752546 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_hypatia, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:02:46 compute-0 flamboyant_hypatia[290322]: 167 167
Sep 30 18:02:46 compute-0 systemd[1]: libpod-e6f838ab2a2d320fb1bd37c296386011a1d7ba4dca73f11c2a93778b2b752546.scope: Deactivated successfully.
Sep 30 18:02:46 compute-0 conmon[290322]: conmon e6f838ab2a2d320fb1bd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e6f838ab2a2d320fb1bd37c296386011a1d7ba4dca73f11c2a93778b2b752546.scope/container/memory.events
Sep 30 18:02:46 compute-0 podman[290306]: 2025-09-30 18:02:46.086228234 +0000 UTC m=+0.165633501 container died e6f838ab2a2d320fb1bd37c296386011a1d7ba4dca73f11c2a93778b2b752546 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_hypatia, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:02:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-11d0f26e8c75727d887e5c7c505c7117744529fc4ddfee83a86c2e5202427ab6-merged.mount: Deactivated successfully.
Sep 30 18:02:46 compute-0 podman[290306]: 2025-09-30 18:02:46.146219275 +0000 UTC m=+0.225624512 container remove e6f838ab2a2d320fb1bd37c296386011a1d7ba4dca73f11c2a93778b2b752546 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_hypatia, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:02:46 compute-0 systemd[1]: libpod-conmon-e6f838ab2a2d320fb1bd37c296386011a1d7ba4dca73f11c2a93778b2b752546.scope: Deactivated successfully.
Sep 30 18:02:46 compute-0 sudo[290338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:02:46 compute-0 sudo[290338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:02:46 compute-0 sudo[290338]: pam_unix(sudo:session): session closed for user root
Sep 30 18:02:46 compute-0 podman[290370]: 2025-09-30 18:02:46.310852408 +0000 UTC m=+0.045490364 container create 87c7ac2d9370d9a9acca4f76ae194764900ac178ea05edc345a9727cbf928a09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid)
Sep 30 18:02:46 compute-0 systemd[1]: Started libpod-conmon-87c7ac2d9370d9a9acca4f76ae194764900ac178ea05edc345a9727cbf928a09.scope.
Sep 30 18:02:46 compute-0 podman[290370]: 2025-09-30 18:02:46.287620554 +0000 UTC m=+0.022258550 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:02:46 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:02:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5a80cf067bfcb8b8994e237e7bcd24a8057a6dd11f01baefe0b2ea217c6d3e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:02:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5a80cf067bfcb8b8994e237e7bcd24a8057a6dd11f01baefe0b2ea217c6d3e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:02:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5a80cf067bfcb8b8994e237e7bcd24a8057a6dd11f01baefe0b2ea217c6d3e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:02:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5a80cf067bfcb8b8994e237e7bcd24a8057a6dd11f01baefe0b2ea217c6d3e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:02:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5a80cf067bfcb8b8994e237e7bcd24a8057a6dd11f01baefe0b2ea217c6d3e3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:02:46 compute-0 podman[290370]: 2025-09-30 18:02:46.426574859 +0000 UTC m=+0.161212875 container init 87c7ac2d9370d9a9acca4f76ae194764900ac178ea05edc345a9727cbf928a09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_yonath, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True)
Sep 30 18:02:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:46 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21400022a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:46 compute-0 podman[290370]: 2025-09-30 18:02:46.442871083 +0000 UTC m=+0.177509059 container start 87c7ac2d9370d9a9acca4f76ae194764900ac178ea05edc345a9727cbf928a09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_yonath, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:02:46 compute-0 podman[290370]: 2025-09-30 18:02:46.447564995 +0000 UTC m=+0.182203031 container attach 87c7ac2d9370d9a9acca4f76ae194764900ac178ea05edc345a9727cbf928a09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_yonath, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Sep 30 18:02:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:46 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114001ba0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:46 compute-0 hardcore_yonath[290387]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:02:46 compute-0 hardcore_yonath[290387]: --> All data devices are unavailable
Sep 30 18:02:46 compute-0 systemd[1]: libpod-87c7ac2d9370d9a9acca4f76ae194764900ac178ea05edc345a9727cbf928a09.scope: Deactivated successfully.
Sep 30 18:02:46 compute-0 podman[290370]: 2025-09-30 18:02:46.841275689 +0000 UTC m=+0.575913635 container died 87c7ac2d9370d9a9acca4f76ae194764900ac178ea05edc345a9727cbf928a09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 18:02:46 compute-0 ceph-mon[73755]: pgmap v710: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:02:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5a80cf067bfcb8b8994e237e7bcd24a8057a6dd11f01baefe0b2ea217c6d3e3-merged.mount: Deactivated successfully.
Sep 30 18:02:46 compute-0 podman[290370]: 2025-09-30 18:02:46.900540361 +0000 UTC m=+0.635178317 container remove 87c7ac2d9370d9a9acca4f76ae194764900ac178ea05edc345a9727cbf928a09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_yonath, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:02:46 compute-0 systemd[1]: libpod-conmon-87c7ac2d9370d9a9acca4f76ae194764900ac178ea05edc345a9727cbf928a09.scope: Deactivated successfully.
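
The short-lived hardcore_yonath container is the `lvm batch` run itself: it sees one LVM data device, reports "All data devices are unavailable", and exits without creating anything. On a re-run this usually means the logical volume is already consumed by an existing OSD, which matches the `lvm list` output further down (ceph.osd_id=0 tags on /dev/ceph_vg0/ceph_lv0). A quick hedged check for that condition on the host:

    # Sketch: an LV that already carries ceph-volume tags will be skipped by
    # `lvm batch`; the tag names below match the lv_tags shown later in this log.
    import subprocess

    out = subprocess.run(
        ["lvs", "--noheadings", "-o", "lv_tags", "/dev/ceph_vg0/ceph_lv0"],
        capture_output=True, text=True, check=True).stdout
    print("already prepared as an OSD:", "ceph.osd_id=" in out)
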
Sep 30 18:02:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:02:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:02:46.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
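
The once-per-second anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and 192.168.122.101, all answered 200 with near-zero latency, look like external health probes of the RGW rather than client traffic. Reproducing one such probe from the node is straightforward; the beast listen endpoint is not recorded in these lines, so the host and port below are assumptions:

    import http.client

    # Sketch: replay one anonymous HEAD / probe against the local RGW.
    # "localhost" and port 8080 are assumptions; the log does not show the endpoint.
    conn = http.client.HTTPConnection("localhost", 8080, timeout=5)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status, resp.reason)
    conn.close()
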
Sep 30 18:02:46 compute-0 sudo[290238]: pam_unix(sudo:session): session closed for user root
Sep 30 18:02:47 compute-0 sudo[290413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:02:47 compute-0 sudo[290413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:02:47 compute-0 sudo[290413]: pam_unix(sudo:session): session closed for user root
Sep 30 18:02:47 compute-0 sudo[290438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:02:47 compute-0 sudo[290438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:02:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:02:47.098Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
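
In parallel, Alertmanager reports that the ceph-dashboard webhook receiver on compute-1.ctlplane.example.com:8443 did not answer before the dispatch deadline. A hedged probe of that endpoint from this node, with the URL copied from the log; the empty alert payload and 5-second timeout are assumptions made only for the probe:

    import json
    import urllib.request

    # Probe sketch: POST a minimal Alertmanager-style body to the receiver that
    # timed out above; the payload shape and timeout are assumptions.
    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(
        url, data=json.dumps({"alerts": []}).encode("utf-8"),
        headers={"Content-Type": "application/json"}, method="POST")
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("receiver answered:", resp.status)
    except Exception as exc:
        print("receiver unreachable:", exc)
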
Sep 30 18:02:47 compute-0 podman[290506]: 2025-09-30 18:02:47.552941894 +0000 UTC m=+0.043312777 container create f2358a19813179fa131f2f7313abfc603f5cd6cf269d89e20d4fe6ba4ff47f45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_shtern, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Sep 30 18:02:47 compute-0 systemd[1]: Started libpod-conmon-f2358a19813179fa131f2f7313abfc603f5cd6cf269d89e20d4fe6ba4ff47f45.scope.
Sep 30 18:02:47 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:02:47 compute-0 podman[290506]: 2025-09-30 18:02:47.631886729 +0000 UTC m=+0.122257612 container init f2358a19813179fa131f2f7313abfc603f5cd6cf269d89e20d4fe6ba4ff47f45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:02:47 compute-0 podman[290506]: 2025-09-30 18:02:47.535567353 +0000 UTC m=+0.025938246 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:02:47 compute-0 podman[290506]: 2025-09-30 18:02:47.63963646 +0000 UTC m=+0.130007363 container start f2358a19813179fa131f2f7313abfc603f5cd6cf269d89e20d4fe6ba4ff47f45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_shtern, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Sep 30 18:02:47 compute-0 podman[290506]: 2025-09-30 18:02:47.646356165 +0000 UTC m=+0.136727058 container attach f2358a19813179fa131f2f7313abfc603f5cd6cf269d89e20d4fe6ba4ff47f45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_shtern, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:02:47 compute-0 elegant_shtern[290523]: 167 167
Sep 30 18:02:47 compute-0 systemd[1]: libpod-f2358a19813179fa131f2f7313abfc603f5cd6cf269d89e20d4fe6ba4ff47f45.scope: Deactivated successfully.
Sep 30 18:02:47 compute-0 podman[290506]: 2025-09-30 18:02:47.648678235 +0000 UTC m=+0.139049118 container died f2358a19813179fa131f2f7313abfc603f5cd6cf269d89e20d4fe6ba4ff47f45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_shtern, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:02:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-29740ffcd36b4e0bd1b032dc3da19101027bb91a1b7521061745f35451de0f7d-merged.mount: Deactivated successfully.
Sep 30 18:02:47 compute-0 podman[290506]: 2025-09-30 18:02:47.698556293 +0000 UTC m=+0.188927196 container remove f2358a19813179fa131f2f7313abfc603f5cd6cf269d89e20d4fe6ba4ff47f45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:02:47 compute-0 systemd[1]: libpod-conmon-f2358a19813179fa131f2f7313abfc603f5cd6cf269d89e20d4fe6ba4ff47f45.scope: Deactivated successfully.
Sep 30 18:02:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:02:47.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v711: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:02:47 compute-0 podman[290548]: 2025-09-30 18:02:47.928322391 +0000 UTC m=+0.055648138 container create d2cae7c1ae8921253be125d821995954bfc05b25fc6413f35da41aeb1c38a0f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_rhodes, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 18:02:47 compute-0 systemd[1]: Started libpod-conmon-d2cae7c1ae8921253be125d821995954bfc05b25fc6413f35da41aeb1c38a0f7.scope.
Sep 30 18:02:48 compute-0 podman[290548]: 2025-09-30 18:02:47.907180301 +0000 UTC m=+0.034506068 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:02:48 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:02:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dcccd1a4ad783e777471d548d224c1932c60272c31be8c36eaae37e73760c1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:02:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dcccd1a4ad783e777471d548d224c1932c60272c31be8c36eaae37e73760c1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:02:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dcccd1a4ad783e777471d548d224c1932c60272c31be8c36eaae37e73760c1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:02:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dcccd1a4ad783e777471d548d224c1932c60272c31be8c36eaae37e73760c1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:02:48 compute-0 podman[290548]: 2025-09-30 18:02:48.029836283 +0000 UTC m=+0.157162060 container init d2cae7c1ae8921253be125d821995954bfc05b25fc6413f35da41aeb1c38a0f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_rhodes, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:02:48 compute-0 podman[290548]: 2025-09-30 18:02:48.036437104 +0000 UTC m=+0.163762831 container start d2cae7c1ae8921253be125d821995954bfc05b25fc6413f35da41aeb1c38a0f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_rhodes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:02:48 compute-0 podman[290548]: 2025-09-30 18:02:48.040375027 +0000 UTC m=+0.167700814 container attach d2cae7c1ae8921253be125d821995954bfc05b25fc6413f35da41aeb1c38a0f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_rhodes, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Sep 30 18:02:48 compute-0 epic_rhodes[290565]: {
Sep 30 18:02:48 compute-0 epic_rhodes[290565]:     "0": [
Sep 30 18:02:48 compute-0 epic_rhodes[290565]:         {
Sep 30 18:02:48 compute-0 epic_rhodes[290565]:             "devices": [
Sep 30 18:02:48 compute-0 epic_rhodes[290565]:                 "/dev/loop3"
Sep 30 18:02:48 compute-0 epic_rhodes[290565]:             ],
Sep 30 18:02:48 compute-0 epic_rhodes[290565]:             "lv_name": "ceph_lv0",
Sep 30 18:02:48 compute-0 epic_rhodes[290565]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:02:48 compute-0 epic_rhodes[290565]:             "lv_size": "21470642176",
Sep 30 18:02:48 compute-0 epic_rhodes[290565]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:02:48 compute-0 epic_rhodes[290565]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:02:48 compute-0 epic_rhodes[290565]:             "name": "ceph_lv0",
Sep 30 18:02:48 compute-0 epic_rhodes[290565]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:02:48 compute-0 epic_rhodes[290565]:             "tags": {
Sep 30 18:02:48 compute-0 epic_rhodes[290565]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:02:48 compute-0 epic_rhodes[290565]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:02:48 compute-0 epic_rhodes[290565]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:02:48 compute-0 epic_rhodes[290565]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:02:48 compute-0 epic_rhodes[290565]:                 "ceph.cluster_name": "ceph",
Sep 30 18:02:48 compute-0 epic_rhodes[290565]:                 "ceph.crush_device_class": "",
Sep 30 18:02:48 compute-0 epic_rhodes[290565]:                 "ceph.encrypted": "0",
Sep 30 18:02:48 compute-0 epic_rhodes[290565]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:02:48 compute-0 epic_rhodes[290565]:                 "ceph.osd_id": "0",
Sep 30 18:02:48 compute-0 epic_rhodes[290565]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:02:48 compute-0 epic_rhodes[290565]:                 "ceph.type": "block",
Sep 30 18:02:48 compute-0 epic_rhodes[290565]:                 "ceph.vdo": "0",
Sep 30 18:02:48 compute-0 epic_rhodes[290565]:                 "ceph.with_tpm": "0"
Sep 30 18:02:48 compute-0 epic_rhodes[290565]:             },
Sep 30 18:02:48 compute-0 epic_rhodes[290565]:             "type": "block",
Sep 30 18:02:48 compute-0 epic_rhodes[290565]:             "vg_name": "ceph_vg0"
Sep 30 18:02:48 compute-0 epic_rhodes[290565]:         }
Sep 30 18:02:48 compute-0 epic_rhodes[290565]:     ]
Sep 30 18:02:48 compute-0 epic_rhodes[290565]: }
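
The JSON block above is the `ceph-volume lvm list --format json` report for this host: OSD ids are the top-level keys, and each entry carries the backing LV, its tags, and the physical devices underneath. A small sketch for pulling the same mapping programmatically, reusing the wrapper invocation from the sudo line above (a `cephadm` binary on PATH is an assumption; the run above used the copy under /var/lib/ceph/<fsid>/):

    import json
    import subprocess

    fsid = "63d32c6a-fa18-54ed-8711-9a3915cc367b"
    out = subprocess.run(
        ["cephadm", "ceph-volume", "--fsid", fsid,
         "--", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True).stdout
    report = json.loads(out[out.index("{"):])   # skip any wrapper chatter before the JSON
    for osd_id, lvs in report.items():
        for lv in lvs:
            print(f"osd.{osd_id}", lv["lv_path"], lv["tags"]["ceph.osd_fsid"], lv["devices"])
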
Sep 30 18:02:48 compute-0 systemd[1]: libpod-d2cae7c1ae8921253be125d821995954bfc05b25fc6413f35da41aeb1c38a0f7.scope: Deactivated successfully.
Sep 30 18:02:48 compute-0 podman[290548]: 2025-09-30 18:02:48.330803213 +0000 UTC m=+0.458128950 container died d2cae7c1ae8921253be125d821995954bfc05b25fc6413f35da41aeb1c38a0f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_rhodes, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:02:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-0dcccd1a4ad783e777471d548d224c1932c60272c31be8c36eaae37e73760c1e-merged.mount: Deactivated successfully.
Sep 30 18:02:48 compute-0 podman[290548]: 2025-09-30 18:02:48.378666469 +0000 UTC m=+0.505992196 container remove d2cae7c1ae8921253be125d821995954bfc05b25fc6413f35da41aeb1c38a0f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True)
Sep 30 18:02:48 compute-0 systemd[1]: libpod-conmon-d2cae7c1ae8921253be125d821995954bfc05b25fc6413f35da41aeb1c38a0f7.scope: Deactivated successfully.
Sep 30 18:02:48 compute-0 sudo[290438]: pam_unix(sudo:session): session closed for user root
Sep 30 18:02:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:48 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21180019e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:48 compute-0 sudo[290588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:02:48 compute-0 sudo[290588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:02:48 compute-0 sudo[290588]: pam_unix(sudo:session): session closed for user root
Sep 30 18:02:48 compute-0 sudo[290613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:02:48 compute-0 sudo[290613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:02:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:02:48] "GET /metrics HTTP/1.1" 200 46520 "" "Prometheus/2.51.0"
Sep 30 18:02:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:02:48] "GET /metrics HTTP/1.1" 200 46520 "" "Prometheus/2.51.0"
Sep 30 18:02:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:48 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21340040d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:48 compute-0 ceph-mon[73755]: pgmap v711: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:02:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:02:48.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:49 compute-0 podman[290681]: 2025-09-30 18:02:49.022235833 +0000 UTC m=+0.048060121 container create 4c0613d7bd22e004c7d5099b269581960daa57d525a0700e9d478c19722080f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_mclaren, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:02:49 compute-0 systemd[1]: Started libpod-conmon-4c0613d7bd22e004c7d5099b269581960daa57d525a0700e9d478c19722080f5.scope.
Sep 30 18:02:49 compute-0 podman[290681]: 2025-09-30 18:02:48.99983134 +0000 UTC m=+0.025655658 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:02:49 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:02:49 compute-0 podman[290681]: 2025-09-30 18:02:49.113286772 +0000 UTC m=+0.139111060 container init 4c0613d7bd22e004c7d5099b269581960daa57d525a0700e9d478c19722080f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_mclaren, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Sep 30 18:02:49 compute-0 podman[290681]: 2025-09-30 18:02:49.121009333 +0000 UTC m=+0.146833621 container start 4c0613d7bd22e004c7d5099b269581960daa57d525a0700e9d478c19722080f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:02:49 compute-0 podman[290681]: 2025-09-30 18:02:49.124265768 +0000 UTC m=+0.150090056 container attach 4c0613d7bd22e004c7d5099b269581960daa57d525a0700e9d478c19722080f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 18:02:49 compute-0 amazing_mclaren[290697]: 167 167
Sep 30 18:02:49 compute-0 systemd[1]: libpod-4c0613d7bd22e004c7d5099b269581960daa57d525a0700e9d478c19722080f5.scope: Deactivated successfully.
Sep 30 18:02:49 compute-0 podman[290681]: 2025-09-30 18:02:49.128755625 +0000 UTC m=+0.154579943 container died 4c0613d7bd22e004c7d5099b269581960daa57d525a0700e9d478c19722080f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_mclaren, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Sep 30 18:02:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-acb6eebf01cb5ed9b146e762993d007e9fe4c516bbf80780ee3818b97875ac21-merged.mount: Deactivated successfully.
Sep 30 18:02:49 compute-0 podman[290681]: 2025-09-30 18:02:49.163009606 +0000 UTC m=+0.188833894 container remove 4c0613d7bd22e004c7d5099b269581960daa57d525a0700e9d478c19722080f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:02:49 compute-0 systemd[1]: libpod-conmon-4c0613d7bd22e004c7d5099b269581960daa57d525a0700e9d478c19722080f5.scope: Deactivated successfully.
Sep 30 18:02:49 compute-0 podman[290723]: 2025-09-30 18:02:49.428100353 +0000 UTC m=+0.070807903 container create 990f2465811c7adc30a3e552fddd775d87252d0bd326d4d21a662673b8d62fe1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_swirles, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Sep 30 18:02:49 compute-0 systemd[1]: Started libpod-conmon-990f2465811c7adc30a3e552fddd775d87252d0bd326d4d21a662673b8d62fe1.scope.
Sep 30 18:02:49 compute-0 podman[290723]: 2025-09-30 18:02:49.399807777 +0000 UTC m=+0.042515407 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:02:49 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:02:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5ec8d49865c5f611363fb9ffaf4d326500288f1dbbe1fe26a22c6572b29c0a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:02:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5ec8d49865c5f611363fb9ffaf4d326500288f1dbbe1fe26a22c6572b29c0a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:02:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5ec8d49865c5f611363fb9ffaf4d326500288f1dbbe1fe26a22c6572b29c0a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:02:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5ec8d49865c5f611363fb9ffaf4d326500288f1dbbe1fe26a22c6572b29c0a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:02:49 compute-0 podman[290723]: 2025-09-30 18:02:49.513755222 +0000 UTC m=+0.156462762 container init 990f2465811c7adc30a3e552fddd775d87252d0bd326d4d21a662673b8d62fe1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:02:49 compute-0 podman[290723]: 2025-09-30 18:02:49.520068856 +0000 UTC m=+0.162776386 container start 990f2465811c7adc30a3e552fddd775d87252d0bd326d4d21a662673b8d62fe1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 18:02:49 compute-0 podman[290723]: 2025-09-30 18:02:49.523146526 +0000 UTC m=+0.165854086 container attach 990f2465811c7adc30a3e552fddd775d87252d0bd326d4d21a662673b8d62fe1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_swirles, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True)
Sep 30 18:02:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:02:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:02:49.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:02:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v712: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:02:50 compute-0 lvm[290816]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:02:50 compute-0 lvm[290816]: VG ceph_vg0 finished
Sep 30 18:02:50 compute-0 pedantic_swirles[290741]: {}
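
The follow-up `raw list --format json` run (pedantic_swirles) prints an empty object, which is consistent with the rest of the log: the only OSD on this host is LVM-backed, so it shows up in the `lvm list` report above and not in the raw-device report. A toy illustration of that relationship, using the report shapes seen here:

    # Illustration only: an LVM-backed OSD appears in the lvm report and is
    # absent from the raw report, so "{}" from `raw list` is expected here.
    lvm_report = {"0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0"}]}
    raw_report = {}   # matches the "{}" printed by the container above

    def lvm_backed(osd_id: str) -> bool:
        return osd_id in lvm_report and osd_id not in raw_report

    assert lvm_backed("0")
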
Sep 30 18:02:50 compute-0 systemd[1]: libpod-990f2465811c7adc30a3e552fddd775d87252d0bd326d4d21a662673b8d62fe1.scope: Deactivated successfully.
Sep 30 18:02:50 compute-0 podman[290723]: 2025-09-30 18:02:50.276951909 +0000 UTC m=+0.919659459 container died 990f2465811c7adc30a3e552fddd775d87252d0bd326d4d21a662673b8d62fe1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Sep 30 18:02:50 compute-0 systemd[1]: libpod-990f2465811c7adc30a3e552fddd775d87252d0bd326d4d21a662673b8d62fe1.scope: Consumed 1.190s CPU time.
Sep 30 18:02:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5ec8d49865c5f611363fb9ffaf4d326500288f1dbbe1fe26a22c6572b29c0a4-merged.mount: Deactivated successfully.
Sep 30 18:02:50 compute-0 podman[290723]: 2025-09-30 18:02:50.332188846 +0000 UTC m=+0.974896386 container remove 990f2465811c7adc30a3e552fddd775d87252d0bd326d4d21a662673b8d62fe1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_swirles, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True)
Sep 30 18:02:50 compute-0 systemd[1]: libpod-conmon-990f2465811c7adc30a3e552fddd775d87252d0bd326d4d21a662673b8d62fe1.scope: Deactivated successfully.
Sep 30 18:02:50 compute-0 sudo[290613]: pam_unix(sudo:session): session closed for user root
Sep 30 18:02:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:02:50 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:02:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:02:50 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:02:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:02:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:50 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21400022a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:50 compute-0 sudo[290830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:02:50 compute-0 sudo[290830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:02:50 compute-0 sudo[290830]: pam_unix(sudo:session): session closed for user root
Sep 30 18:02:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:50 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114001ba0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:50 compute-0 ceph-mon[73755]: pgmap v712: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:02:50 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:02:50 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:02:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:02:50.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:02:51.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v713: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:02:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:02:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:02:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:52 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21180019e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:52 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21340040d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:52 compute-0 ceph-mon[73755]: pgmap v713: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:02:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:02:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:02:52.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:02:53.615Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:02:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:02:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:02:53.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:02:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v714: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:02:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:02:54.263 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:02:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:02:54.263 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:02:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:02:54.263 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:02:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:54 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2140009660 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:54 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114001ba0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:54 compute-0 ceph-mon[73755]: pgmap v714: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:02:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:02:54.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:02:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v715: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:02:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:02:55.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:56 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114001ba0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:56 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21340040d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:56 compute-0 ceph-mon[73755]: pgmap v715: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:02:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:02:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:02:56.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:02:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:02:57.100Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:02:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v716: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:02:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:02:57.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:57 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Sep 30 18:02:57 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:02:57.964960) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 18:02:57 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Sep 30 18:02:57 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255377965003, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2098, "num_deletes": 251, "total_data_size": 4094477, "memory_usage": 4142704, "flush_reason": "Manual Compaction"}
Sep 30 18:02:57 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Sep 30 18:02:57 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255377992744, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 3963846, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19551, "largest_seqno": 21648, "table_properties": {"data_size": 3954413, "index_size": 5927, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19243, "raw_average_key_size": 20, "raw_value_size": 3935509, "raw_average_value_size": 4108, "num_data_blocks": 264, "num_entries": 958, "num_filter_entries": 958, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759255160, "oldest_key_time": 1759255160, "file_creation_time": 1759255377, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:02:57 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 27852 microseconds, and 15788 cpu microseconds.
Sep 30 18:02:57 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:02:57 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:02:57.992808) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 3963846 bytes OK
Sep 30 18:02:57 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:02:57.992838) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Sep 30 18:02:57 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:02:57.994584) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Sep 30 18:02:57 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:02:57.994609) EVENT_LOG_v1 {"time_micros": 1759255377994601, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 18:02:57 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:02:57.994635) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 18:02:57 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 4085828, prev total WAL file size 4085828, number of live WAL files 2.
Sep 30 18:02:57 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:02:57 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:02:57.996593) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Sep 30 18:02:57 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 18:02:57 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(3870KB)], [44(11MB)]
Sep 30 18:02:57 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255377996647, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 15544345, "oldest_snapshot_seqno": -1}
Sep 30 18:02:58 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 5210 keys, 13470073 bytes, temperature: kUnknown
Sep 30 18:02:58 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255378067503, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 13470073, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13432674, "index_size": 23273, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13061, "raw_key_size": 130745, "raw_average_key_size": 25, "raw_value_size": 13335717, "raw_average_value_size": 2559, "num_data_blocks": 969, "num_entries": 5210, "num_filter_entries": 5210, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759255377, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:02:58 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:02:58 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:02:58.067861) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 13470073 bytes
Sep 30 18:02:58 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:02:58.070108) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 218.9 rd, 189.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 11.0 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(7.3) write-amplify(3.4) OK, records in: 5728, records dropped: 518 output_compression: NoCompression
Sep 30 18:02:58 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:02:58.070142) EVENT_LOG_v1 {"time_micros": 1759255378070127, "job": 22, "event": "compaction_finished", "compaction_time_micros": 71016, "compaction_time_cpu_micros": 25446, "output_level": 6, "num_output_files": 1, "total_output_size": 13470073, "num_input_records": 5728, "num_output_records": 5210, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 18:02:58 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:02:58 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255378071627, "job": 22, "event": "table_file_deletion", "file_number": 46}
Sep 30 18:02:58 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:02:58 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255378074362, "job": 22, "event": "table_file_deletion", "file_number": 44}
Sep 30 18:02:58 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:02:57.996492) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:02:58 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:02:58.074534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:02:58 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:02:58.074544) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:02:58 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:02:58.074548) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:02:58 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:02:58.074551) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:02:58 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:02:58.074554) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:02:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:58 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2140009660 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:58 compute-0 podman[290867]: 2025-09-30 18:02:58.555536651 +0000 UTC m=+0.075266940 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Sep 30 18:02:58 compute-0 podman[290866]: 2025-09-30 18:02:58.586507546 +0000 UTC m=+0.109609612 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:02:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:02:58] "GET /metrics HTTP/1.1" 200 46519 "" "Prometheus/2.51.0"
Sep 30 18:02:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:02:58] "GET /metrics HTTP/1.1" 200 46519 "" "Prometheus/2.51.0"
Sep 30 18:02:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:02:58 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21180019e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:02:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:02:58.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:58 compute-0 ceph-mon[73755]: pgmap v716: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:02:59 compute-0 sshd-session[290862]: Invalid user foundry from 14.225.220.107 port 48392
Sep 30 18:02:59 compute-0 sshd-session[290862]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:02:59 compute-0 sshd-session[290862]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107
Sep 30 18:02:59 compute-0 podman[276673]: time="2025-09-30T18:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:02:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:02:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v717: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:02:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:02:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:02:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:02:59.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:02:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10211 "" "Go-http-client/1.1"
Sep 30 18:03:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:03:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:00 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114001ba0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:00 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21340040d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:03:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:03:00.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:03:00 compute-0 ceph-mon[73755]: pgmap v717: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:03:01 compute-0 sshd-session[290862]: Failed password for invalid user foundry from 14.225.220.107 port 48392 ssh2
Sep 30 18:03:01 compute-0 openstack_network_exporter[279566]: ERROR   18:03:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:03:01 compute-0 openstack_network_exporter[279566]: ERROR   18:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:03:01 compute-0 openstack_network_exporter[279566]: ERROR   18:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:03:01 compute-0 openstack_network_exporter[279566]: ERROR   18:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:03:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:03:01 compute-0 openstack_network_exporter[279566]: ERROR   18:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:03:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:03:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v718: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:03:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:03:01.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:03:02 compute-0 sshd-session[290862]: Received disconnect from 14.225.220.107 port 48392:11: Bye Bye [preauth]
Sep 30 18:03:02 compute-0 sshd-session[290862]: Disconnected from invalid user foundry 14.225.220.107 port 48392 [preauth]
Sep 30 18:03:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:02 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2140009660 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:02 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:03:02.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:03 compute-0 ceph-mon[73755]: pgmap v718: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:03:03.616Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:03:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v719: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:03:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:03:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:03:03.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:03:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:04 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114001ba0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:04 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2140009660 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:03:04.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:05 compute-0 ceph-mon[73755]: pgmap v719: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:03:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:03:05 compute-0 podman[290921]: 2025-09-30 18:03:05.524249393 +0000 UTC m=+0.055147246 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Sep 30 18:03:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v720: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:03:05.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:06 compute-0 sudo[290941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:03:06 compute-0 sudo[290941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:03:06 compute-0 sudo[290941]: pam_unix(sudo:session): session closed for user root
Sep 30 18:03:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:06 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21340040d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:06 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:03:06.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:07 compute-0 ceph-mon[73755]: pgmap v720: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:03:07.101Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:03:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:03:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:03:07
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.nfs', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'backups', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'volumes', 'default.rgw.control', 'images']
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:03:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v721: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:03:07.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:03:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:08 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114001ba0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:03:08] "GET /metrics HTTP/1.1" 200 46516 "" "Prometheus/2.51.0"
Sep 30 18:03:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:03:08] "GET /metrics HTTP/1.1" 200 46516 "" "Prometheus/2.51.0"
Sep 30 18:03:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:08 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2140009660 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:03:08.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:09 compute-0 ceph-mon[73755]: pgmap v721: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v722: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:03:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:03:09.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:03:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:10 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21340040d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:10 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Sep 30 18:03:10 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:03:10.452977) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 18:03:10 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Sep 30 18:03:10 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255390453016, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 347, "num_deletes": 250, "total_data_size": 209662, "memory_usage": 215808, "flush_reason": "Manual Compaction"}
Sep 30 18:03:10 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Sep 30 18:03:10 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255390455452, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 207045, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21649, "largest_seqno": 21995, "table_properties": {"data_size": 204880, "index_size": 329, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5649, "raw_average_key_size": 19, "raw_value_size": 200636, "raw_average_value_size": 687, "num_data_blocks": 15, "num_entries": 292, "num_filter_entries": 292, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759255378, "oldest_key_time": 1759255378, "file_creation_time": 1759255390, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:03:10 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 2507 microseconds, and 1059 cpu microseconds.
Sep 30 18:03:10 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:03:10 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:03:10.455486) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 207045 bytes OK
Sep 30 18:03:10 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:03:10.455501) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Sep 30 18:03:10 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:03:10.456792) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Sep 30 18:03:10 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:03:10.456805) EVENT_LOG_v1 {"time_micros": 1759255390456801, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 18:03:10 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:03:10.456822) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 18:03:10 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 207331, prev total WAL file size 207331, number of live WAL files 2.
Sep 30 18:03:10 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:03:10 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:03:10.457135) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353032' seq:72057594037927935, type:22 .. '6D67727374617400373533' seq:0, type:0; will stop at (end)
Sep 30 18:03:10 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 18:03:10 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(202KB)], [47(12MB)]
Sep 30 18:03:10 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255390457193, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 13677118, "oldest_snapshot_seqno": -1}
Sep 30 18:03:10 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4994 keys, 9855660 bytes, temperature: kUnknown
Sep 30 18:03:10 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255390526143, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 9855660, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9824169, "index_size": 17877, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12549, "raw_key_size": 126674, "raw_average_key_size": 25, "raw_value_size": 9735428, "raw_average_value_size": 1949, "num_data_blocks": 733, "num_entries": 4994, "num_filter_entries": 4994, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759255390, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:03:10 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:03:10 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:03:10.526400) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 9855660 bytes
Sep 30 18:03:10 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:03:10.527703) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 198.1 rd, 142.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 12.8 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(113.7) write-amplify(47.6) OK, records in: 5502, records dropped: 508 output_compression: NoCompression
Sep 30 18:03:10 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:03:10.527721) EVENT_LOG_v1 {"time_micros": 1759255390527713, "job": 24, "event": "compaction_finished", "compaction_time_micros": 69026, "compaction_time_cpu_micros": 23735, "output_level": 6, "num_output_files": 1, "total_output_size": 9855660, "num_input_records": 5502, "num_output_records": 4994, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 18:03:10 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:03:10 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255390527888, "job": 24, "event": "table_file_deletion", "file_number": 49}
Sep 30 18:03:10 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:03:10 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255390530233, "job": 24, "event": "table_file_deletion", "file_number": 47}
Sep 30 18:03:10 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:03:10.457067) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:03:10 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:03:10.530394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:03:10 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:03:10.530400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:03:10 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:03:10.530402) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:03:10 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:03:10.530405) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:03:10 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:03:10.530407) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:03:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:10 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:03:10.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:11 compute-0 ceph-mon[73755]: pgmap v722: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:03:11 compute-0 podman[290972]: 2025-09-30 18:03:11.524271614 +0000 UTC m=+0.059544380 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250930)
Sep 30 18:03:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v723: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:03:11.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:12 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2130001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:12 compute-0 podman[290994]: 2025-09-30 18:03:12.528487431 +0000 UTC m=+0.065332531 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, architecture=x86_64, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, managed_by=edpm_ansible, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.33.7)
Sep 30 18:03:12 compute-0 podman[290993]: 2025-09-30 18:03:12.548152992 +0000 UTC m=+0.085219788 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.4)
Sep 30 18:03:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:12 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114001ba0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:03:12.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:13 compute-0 ceph-mon[73755]: pgmap v723: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:03:13.617Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:03:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v724: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:03:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:03:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:03:13.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:03:14 compute-0 ceph-mon[73755]: pgmap v724: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:03:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:14 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:14 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:03:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:03:14.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:03:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:03:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v725: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:03:15.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:16 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2130001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:16 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114001ba0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:16 compute-0 ceph-mon[73755]: pgmap v725: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:03:16.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:03:17.102Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:03:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v726: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:03:17.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:18 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:03:18] "GET /metrics HTTP/1.1" 200 46516 "" "Prometheus/2.51.0"
Sep 30 18:03:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:03:18] "GET /metrics HTTP/1.1" 200 46516 "" "Prometheus/2.51.0"
Sep 30 18:03:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:18 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:18 compute-0 ceph-mon[73755]: pgmap v726: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:18 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/4228144495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:03:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:03:18.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v727: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:03:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:03:19.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:03:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:20 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2130001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:20 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114001ba0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:20 compute-0 ceph-mon[73755]: pgmap v727: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:03:20 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1538901174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:03:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:03:20.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:21 compute-0 nova_compute[265391]: 2025-09-30 18:03:21.675 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:03:21 compute-0 nova_compute[265391]: 2025-09-30 18:03:21.675 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:03:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v728: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:03:21.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:22 compute-0 nova_compute[265391]: 2025-09-30 18:03:22.207 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:03:22 compute-0 nova_compute[265391]: 2025-09-30 18:03:22.208 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:03:22 compute-0 nova_compute[265391]: 2025-09-30 18:03:22.208 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:03:22 compute-0 nova_compute[265391]: 2025-09-30 18:03:22.208 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:03:22 compute-0 nova_compute[265391]: 2025-09-30 18:03:22.208 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:03:22 compute-0 nova_compute[265391]: 2025-09-30 18:03:22.208 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:03:22 compute-0 nova_compute[265391]: 2025-09-30 18:03:22.208 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:03:22 compute-0 nova_compute[265391]: 2025-09-30 18:03:22.208 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:03:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:03:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:03:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:22 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:22 compute-0 nova_compute[265391]: 2025-09-30 18:03:22.725 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:03:22 compute-0 nova_compute[265391]: 2025-09-30 18:03:22.726 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:03:22 compute-0 nova_compute[265391]: 2025-09-30 18:03:22.726 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:03:22 compute-0 nova_compute[265391]: 2025-09-30 18:03:22.726 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:03:22 compute-0 nova_compute[265391]: 2025-09-30 18:03:22.726 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:03:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:22 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:22 compute-0 ceph-mon[73755]: pgmap v728: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:03:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:03:22.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:03:23 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/756163064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:03:23 compute-0 nova_compute[265391]: 2025-09-30 18:03:23.244 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:03:23 compute-0 nova_compute[265391]: 2025-09-30 18:03:23.388 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:03:23 compute-0 nova_compute[265391]: 2025-09-30 18:03:23.389 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:03:23 compute-0 nova_compute[265391]: 2025-09-30 18:03:23.405 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.016s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:03:23 compute-0 nova_compute[265391]: 2025-09-30 18:03:23.406 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4868MB free_disk=39.9921875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:03:23 compute-0 nova_compute[265391]: 2025-09-30 18:03:23.406 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:03:23 compute-0 nova_compute[265391]: 2025-09-30 18:03:23.406 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:03:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:03:23.618Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:03:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v729: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:03:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:03:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:03:23.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:03:23 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/756163064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:03:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:24 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2130001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:24 compute-0 nova_compute[265391]: 2025-09-30 18:03:24.502 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:03:24 compute-0 nova_compute[265391]: 2025-09-30 18:03:24.503 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:03:23 up  1:06,  0 user,  load average: 0.71, 0.92, 0.99\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:03:24 compute-0 nova_compute[265391]: 2025-09-30 18:03:24.514 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:03:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:24 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114001ba0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:03:24 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3459063351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:03:24 compute-0 nova_compute[265391]: 2025-09-30 18:03:24.967 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:03:24 compute-0 nova_compute[265391]: 2025-09-30 18:03:24.973 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:03:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:03:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:03:24.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:03:24 compute-0 ceph-mon[73755]: pgmap v729: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:03:24 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3459063351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:03:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:03:25 compute-0 nova_compute[265391]: 2025-09-30 18:03:25.481 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 39, 'reserved': 0, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:03:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v730: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:03:25.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:25 compute-0 nova_compute[265391]: 2025-09-30 18:03:25.990 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:03:25 compute-0 nova_compute[265391]: 2025-09-30 18:03:25.991 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.584s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:03:26 compute-0 sudo[291095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:03:26 compute-0 sudo[291095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:03:26 compute-0 sudo[291095]: pam_unix(sudo:session): session closed for user root
Sep 30 18:03:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:26 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:26 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:03:26.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:27 compute-0 ceph-mon[73755]: pgmap v730: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:03:27.103Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:03:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v731: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:03:27.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:28 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2130003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:03:28] "GET /metrics HTTP/1.1" 200 46515 "" "Prometheus/2.51.0"
Sep 30 18:03:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:03:28] "GET /metrics HTTP/1.1" 200 46515 "" "Prometheus/2.51.0"
Sep 30 18:03:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:28 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114001ba0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:03:28.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:29 compute-0 ceph-mon[73755]: pgmap v731: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:29 compute-0 podman[291124]: 2025-09-30 18:03:29.516382745 +0000 UTC m=+0.054320714 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:03:29 compute-0 podman[291123]: 2025-09-30 18:03:29.540513703 +0000 UTC m=+0.085403863 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 18:03:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v732: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:03:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:03:29.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:03:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:30 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:30 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:03:30.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:31 compute-0 ceph-mon[73755]: pgmap v732: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:03:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v733: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:03:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:03:31.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:03:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:32 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2130003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:32 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114001ba0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:03:32.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:33 compute-0 ceph-mon[73755]: pgmap v733: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:03:33.619Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:03:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v734: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:03:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:03:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:03:33.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:03:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:34 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:34 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:03:34.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:35 compute-0 ceph-mon[73755]: pgmap v734: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:03:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:03:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v735: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:03:35.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:36 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2130003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:03:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1978917613' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:03:36 compute-0 podman[291181]: 2025-09-30 18:03:36.517374089 +0000 UTC m=+0.057608150 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Sep 30 18:03:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:03:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1978917613' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:03:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:36 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114001ba0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:03:36.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:37 compute-0 ceph-mon[73755]: pgmap v735: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1978917613' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:03:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1978917613' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:03:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:03:37.103Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:03:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:03:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:03:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:03:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:03:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:03:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:03:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:03:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:03:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v736: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:03:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:03:37.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:03:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:03:38 compute-0 sshd-session[291200]: Invalid user me from 45.252.249.158 port 41034
Sep 30 18:03:38 compute-0 sshd-session[291200]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:03:38 compute-0 sshd-session[291200]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158
Sep 30 18:03:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:03:38] "GET /metrics HTTP/1.1" 200 46516 "" "Prometheus/2.51.0"
Sep 30 18:03:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:03:38] "GET /metrics HTTP/1.1" 200 46516 "" "Prometheus/2.51.0"
Sep 30 18:03:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:38 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2118003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 18:03:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:03:38.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 18:03:39 compute-0 ceph-mon[73755]: pgmap v736: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v737: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:03:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:03:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:03:39.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:03:40 compute-0 sshd-session[291200]: Failed password for invalid user me from 45.252.249.158 port 41034 ssh2
Sep 30 18:03:40 compute-0 ceph-mon[73755]: pgmap v737: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:03:40 compute-0 sshd-session[291200]: Received disconnect from 45.252.249.158 port 41034:11: Bye Bye [preauth]
Sep 30 18:03:40 compute-0 sshd-session[291200]: Disconnected from invalid user me 45.252.249.158 port 41034 [preauth]
Sep 30 18:03:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:03:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:40 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2130003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:40 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2114001ba0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.002000052s ======
Sep 30 18:03:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:03:41.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Sep 30 18:03:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v738: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:03:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:03:41.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:03:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:42 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:42 compute-0 podman[291208]: 2025-09-30 18:03:42.518807294 +0000 UTC m=+0.056924582 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=iscsid)
Sep 30 18:03:42 compute-0 ceph-mon[73755]: pgmap v738: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:42 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:03:43.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:43 compute-0 podman[291231]: 2025-09-30 18:03:43.521514554 +0000 UTC m=+0.058749350 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, vcs-type=git, managed_by=edpm_ansible, version=9.6, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, config_id=edpm, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=openstack_network_exporter)
Sep 30 18:03:43 compute-0 podman[291230]: 2025-09-30 18:03:43.528264879 +0000 UTC m=+0.064821227 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4)
Sep 30 18:03:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:03:43.620Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:03:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v739: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:03:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:03:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:03:43.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:03:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:44 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2134003110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:44 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2130003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:44 compute-0 ceph-mon[73755]: pgmap v739: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:03:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:03:45.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:03:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v740: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:03:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:03:45.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:03:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:46 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21400089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:46 compute-0 sudo[291275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:03:46 compute-0 sudo[291275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:03:46 compute-0 sudo[291275]: pam_unix(sudo:session): session closed for user root
Sep 30 18:03:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:46 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:47 compute-0 ceph-mon[73755]: pgmap v740: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:03:47.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:03:47.104Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:03:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v741: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:03:47.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:48 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2134003110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:03:48] "GET /metrics HTTP/1.1" 200 46516 "" "Prometheus/2.51.0"
Sep 30 18:03:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:03:48] "GET /metrics HTTP/1.1" 200 46516 "" "Prometheus/2.51.0"
Sep 30 18:03:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:48 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2130003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:03:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:03:49.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:03:49 compute-0 ceph-mon[73755]: pgmap v741: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v742: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:03:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:03:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:03:49.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:03:50 compute-0 ceph-mon[73755]: pgmap v742: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:03:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:03:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:50 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21400089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:50 compute-0 sudo[291304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:03:50 compute-0 sudo[291304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:03:50 compute-0 sudo[291304]: pam_unix(sudo:session): session closed for user root
Sep 30 18:03:50 compute-0 sudo[291329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:03:50 compute-0 sudo[291329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:03:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:50 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:03:51.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:51 compute-0 sudo[291329]: pam_unix(sudo:session): session closed for user root
Sep 30 18:03:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:03:51 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:03:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:03:51 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:03:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:03:51 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:03:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:03:51 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:03:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:03:51 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:03:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:03:51 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:03:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:03:51 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:03:51 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:03:51 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:03:51 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:03:51 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:03:51 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:03:51 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:03:51 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:03:51 compute-0 sudo[291389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:03:51 compute-0 sudo[291389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:03:51 compute-0 sudo[291389]: pam_unix(sudo:session): session closed for user root
Sep 30 18:03:51 compute-0 sudo[291415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:03:51 compute-0 sudo[291415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:03:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v743: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:03:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:03:51.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:03:52 compute-0 podman[291480]: 2025-09-30 18:03:52.127714201 +0000 UTC m=+0.057978670 container create feff449b303f130d92c1d53ff4c7b6116078c0a82302ab4f37afa90ba5db33ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Sep 30 18:03:52 compute-0 systemd[1]: Started libpod-conmon-feff449b303f130d92c1d53ff4c7b6116078c0a82302ab4f37afa90ba5db33ab.scope.
Sep 30 18:03:52 compute-0 podman[291480]: 2025-09-30 18:03:52.09732191 +0000 UTC m=+0.027586449 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:03:52 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:03:52 compute-0 podman[291480]: 2025-09-30 18:03:52.217385854 +0000 UTC m=+0.147650333 container init feff449b303f130d92c1d53ff4c7b6116078c0a82302ab4f37afa90ba5db33ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 18:03:52 compute-0 podman[291480]: 2025-09-30 18:03:52.224357455 +0000 UTC m=+0.154621914 container start feff449b303f130d92c1d53ff4c7b6116078c0a82302ab4f37afa90ba5db33ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Sep 30 18:03:52 compute-0 podman[291480]: 2025-09-30 18:03:52.22760542 +0000 UTC m=+0.157869879 container attach feff449b303f130d92c1d53ff4c7b6116078c0a82302ab4f37afa90ba5db33ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 18:03:52 compute-0 vigilant_northcutt[291496]: 167 167
Sep 30 18:03:52 compute-0 podman[291480]: 2025-09-30 18:03:52.23146548 +0000 UTC m=+0.161729969 container died feff449b303f130d92c1d53ff4c7b6116078c0a82302ab4f37afa90ba5db33ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_northcutt, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325)
Sep 30 18:03:52 compute-0 systemd[1]: libpod-feff449b303f130d92c1d53ff4c7b6116078c0a82302ab4f37afa90ba5db33ab.scope: Deactivated successfully.
Sep 30 18:03:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b742b27dd7f77ec4998de5432879978a83b4c29ff42909c42ade9dc08d24e43-merged.mount: Deactivated successfully.
Sep 30 18:03:52 compute-0 podman[291480]: 2025-09-30 18:03:52.274311935 +0000 UTC m=+0.204576394 container remove feff449b303f130d92c1d53ff4c7b6116078c0a82302ab4f37afa90ba5db33ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_northcutt, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:03:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:03:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:03:52 compute-0 systemd[1]: libpod-conmon-feff449b303f130d92c1d53ff4c7b6116078c0a82302ab4f37afa90ba5db33ab.scope: Deactivated successfully.
Sep 30 18:03:52 compute-0 podman[291523]: 2025-09-30 18:03:52.439443461 +0000 UTC m=+0.047270300 container create 04f997a2ac9e4655ba5807e759bd21491558990f21b6592509d4cf2af181ae97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_volhard, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:03:52 compute-0 systemd[1]: Started libpod-conmon-04f997a2ac9e4655ba5807e759bd21491558990f21b6592509d4cf2af181ae97.scope.
Sep 30 18:03:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:52 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2134003110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:52 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:03:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d76596ffd087b51882064d820b38c5684ca9e79ba1ff26007303ed1f2736b593/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:03:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d76596ffd087b51882064d820b38c5684ca9e79ba1ff26007303ed1f2736b593/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:03:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d76596ffd087b51882064d820b38c5684ca9e79ba1ff26007303ed1f2736b593/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:03:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d76596ffd087b51882064d820b38c5684ca9e79ba1ff26007303ed1f2736b593/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:03:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d76596ffd087b51882064d820b38c5684ca9e79ba1ff26007303ed1f2736b593/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:03:52 compute-0 podman[291523]: 2025-09-30 18:03:52.418301421 +0000 UTC m=+0.026128300 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:03:52 compute-0 podman[291523]: 2025-09-30 18:03:52.521104216 +0000 UTC m=+0.128931085 container init 04f997a2ac9e4655ba5807e759bd21491558990f21b6592509d4cf2af181ae97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_volhard, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Sep 30 18:03:52 compute-0 podman[291523]: 2025-09-30 18:03:52.532818091 +0000 UTC m=+0.140644930 container start 04f997a2ac9e4655ba5807e759bd21491558990f21b6592509d4cf2af181ae97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_volhard, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:03:52 compute-0 podman[291523]: 2025-09-30 18:03:52.536263361 +0000 UTC m=+0.144090200 container attach 04f997a2ac9e4655ba5807e759bd21491558990f21b6592509d4cf2af181ae97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_volhard, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:03:52 compute-0 ceph-mon[73755]: pgmap v743: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:03:52 compute-0 laughing_volhard[291540]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:03:52 compute-0 laughing_volhard[291540]: --> All data devices are unavailable
Sep 30 18:03:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:52 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2130003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:52 compute-0 systemd[1]: libpod-04f997a2ac9e4655ba5807e759bd21491558990f21b6592509d4cf2af181ae97.scope: Deactivated successfully.
Sep 30 18:03:52 compute-0 podman[291523]: 2025-09-30 18:03:52.88830731 +0000 UTC m=+0.496134139 container died 04f997a2ac9e4655ba5807e759bd21491558990f21b6592509d4cf2af181ae97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_volhard, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:03:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-d76596ffd087b51882064d820b38c5684ca9e79ba1ff26007303ed1f2736b593-merged.mount: Deactivated successfully.
Sep 30 18:03:52 compute-0 podman[291523]: 2025-09-30 18:03:52.92864219 +0000 UTC m=+0.536469029 container remove 04f997a2ac9e4655ba5807e759bd21491558990f21b6592509d4cf2af181ae97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_volhard, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Sep 30 18:03:52 compute-0 systemd[1]: libpod-conmon-04f997a2ac9e4655ba5807e759bd21491558990f21b6592509d4cf2af181ae97.scope: Deactivated successfully.
Sep 30 18:03:52 compute-0 sudo[291415]: pam_unix(sudo:session): session closed for user root
Sep 30 18:03:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:03:53.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:53 compute-0 sudo[291568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:03:53 compute-0 sudo[291568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:03:53 compute-0 sudo[291568]: pam_unix(sudo:session): session closed for user root
Sep 30 18:03:53 compute-0 sudo[291593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:03:53 compute-0 sudo[291593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:03:53 compute-0 podman[291660]: 2025-09-30 18:03:53.595742626 +0000 UTC m=+0.043173694 container create f63bf84a0a003523ccf22193b500423e2e3e83d0f5a3f8ef315b5ddb8951bcd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_carver, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 18:03:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:03:53.621Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:03:53 compute-0 systemd[1]: Started libpod-conmon-f63bf84a0a003523ccf22193b500423e2e3e83d0f5a3f8ef315b5ddb8951bcd2.scope.
Sep 30 18:03:53 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:03:53 compute-0 podman[291660]: 2025-09-30 18:03:53.57782383 +0000 UTC m=+0.025254938 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:03:53 compute-0 podman[291660]: 2025-09-30 18:03:53.679070355 +0000 UTC m=+0.126501433 container init f63bf84a0a003523ccf22193b500423e2e3e83d0f5a3f8ef315b5ddb8951bcd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_carver, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:03:53 compute-0 podman[291660]: 2025-09-30 18:03:53.693772487 +0000 UTC m=+0.141203565 container start f63bf84a0a003523ccf22193b500423e2e3e83d0f5a3f8ef315b5ddb8951bcd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_carver, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:03:53 compute-0 podman[291660]: 2025-09-30 18:03:53.697606607 +0000 UTC m=+0.145037695 container attach f63bf84a0a003523ccf22193b500423e2e3e83d0f5a3f8ef315b5ddb8951bcd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Sep 30 18:03:53 compute-0 xenodochial_carver[291676]: 167 167
Sep 30 18:03:53 compute-0 systemd[1]: libpod-f63bf84a0a003523ccf22193b500423e2e3e83d0f5a3f8ef315b5ddb8951bcd2.scope: Deactivated successfully.
Sep 30 18:03:53 compute-0 podman[291660]: 2025-09-30 18:03:53.700704138 +0000 UTC m=+0.148135256 container died f63bf84a0a003523ccf22193b500423e2e3e83d0f5a3f8ef315b5ddb8951bcd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:03:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-56a174d7a452d8f5cf89127c2d1bff5ebc86a936b6faf147926d9d6222e26064-merged.mount: Deactivated successfully.
Sep 30 18:03:53 compute-0 podman[291660]: 2025-09-30 18:03:53.74693165 +0000 UTC m=+0.194362718 container remove f63bf84a0a003523ccf22193b500423e2e3e83d0f5a3f8ef315b5ddb8951bcd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_carver, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 18:03:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v744: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:03:53 compute-0 systemd[1]: libpod-conmon-f63bf84a0a003523ccf22193b500423e2e3e83d0f5a3f8ef315b5ddb8951bcd2.scope: Deactivated successfully.
Sep 30 18:03:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:03:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:03:53.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:03:53 compute-0 podman[291702]: 2025-09-30 18:03:53.946897103 +0000 UTC m=+0.037958488 container create 7cdc00e09587737424f4709e190cdc92baa80106ded124ff6ae6f0c4a9c172ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_swirles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Sep 30 18:03:53 compute-0 systemd[1]: Started libpod-conmon-7cdc00e09587737424f4709e190cdc92baa80106ded124ff6ae6f0c4a9c172ba.scope.
Sep 30 18:03:54 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:03:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab48982ef6e9b7250aca48c5586142869c0379b6597ff62033535dac41ecfa28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:03:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab48982ef6e9b7250aca48c5586142869c0379b6597ff62033535dac41ecfa28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:03:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab48982ef6e9b7250aca48c5586142869c0379b6597ff62033535dac41ecfa28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:03:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab48982ef6e9b7250aca48c5586142869c0379b6597ff62033535dac41ecfa28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:03:54 compute-0 podman[291702]: 2025-09-30 18:03:53.93024378 +0000 UTC m=+0.021305175 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:03:54 compute-0 podman[291702]: 2025-09-30 18:03:54.026116405 +0000 UTC m=+0.117177810 container init 7cdc00e09587737424f4709e190cdc92baa80106ded124ff6ae6f0c4a9c172ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_swirles, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Sep 30 18:03:54 compute-0 podman[291702]: 2025-09-30 18:03:54.032130231 +0000 UTC m=+0.123191606 container start 7cdc00e09587737424f4709e190cdc92baa80106ded124ff6ae6f0c4a9c172ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_swirles, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:03:54 compute-0 podman[291702]: 2025-09-30 18:03:54.036521825 +0000 UTC m=+0.127583220 container attach 7cdc00e09587737424f4709e190cdc92baa80106ded124ff6ae6f0c4a9c172ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_swirles, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:03:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:03:54.264 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:03:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:03:54.266 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:03:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:03:54.266 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:03:54 compute-0 relaxed_swirles[291718]: {
Sep 30 18:03:54 compute-0 relaxed_swirles[291718]:     "0": [
Sep 30 18:03:54 compute-0 relaxed_swirles[291718]:         {
Sep 30 18:03:54 compute-0 relaxed_swirles[291718]:             "devices": [
Sep 30 18:03:54 compute-0 relaxed_swirles[291718]:                 "/dev/loop3"
Sep 30 18:03:54 compute-0 relaxed_swirles[291718]:             ],
Sep 30 18:03:54 compute-0 relaxed_swirles[291718]:             "lv_name": "ceph_lv0",
Sep 30 18:03:54 compute-0 relaxed_swirles[291718]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:03:54 compute-0 relaxed_swirles[291718]:             "lv_size": "21470642176",
Sep 30 18:03:54 compute-0 relaxed_swirles[291718]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:03:54 compute-0 relaxed_swirles[291718]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:03:54 compute-0 relaxed_swirles[291718]:             "name": "ceph_lv0",
Sep 30 18:03:54 compute-0 relaxed_swirles[291718]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:03:54 compute-0 relaxed_swirles[291718]:             "tags": {
Sep 30 18:03:54 compute-0 relaxed_swirles[291718]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:03:54 compute-0 relaxed_swirles[291718]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:03:54 compute-0 relaxed_swirles[291718]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:03:54 compute-0 relaxed_swirles[291718]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:03:54 compute-0 relaxed_swirles[291718]:                 "ceph.cluster_name": "ceph",
Sep 30 18:03:54 compute-0 relaxed_swirles[291718]:                 "ceph.crush_device_class": "",
Sep 30 18:03:54 compute-0 relaxed_swirles[291718]:                 "ceph.encrypted": "0",
Sep 30 18:03:54 compute-0 relaxed_swirles[291718]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:03:54 compute-0 relaxed_swirles[291718]:                 "ceph.osd_id": "0",
Sep 30 18:03:54 compute-0 relaxed_swirles[291718]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:03:54 compute-0 relaxed_swirles[291718]:                 "ceph.type": "block",
Sep 30 18:03:54 compute-0 relaxed_swirles[291718]:                 "ceph.vdo": "0",
Sep 30 18:03:54 compute-0 relaxed_swirles[291718]:                 "ceph.with_tpm": "0"
Sep 30 18:03:54 compute-0 relaxed_swirles[291718]:             },
Sep 30 18:03:54 compute-0 relaxed_swirles[291718]:             "type": "block",
Sep 30 18:03:54 compute-0 relaxed_swirles[291718]:             "vg_name": "ceph_vg0"
Sep 30 18:03:54 compute-0 relaxed_swirles[291718]:         }
Sep 30 18:03:54 compute-0 relaxed_swirles[291718]:     ]
Sep 30 18:03:54 compute-0 relaxed_swirles[291718]: }
Sep 30 18:03:54 compute-0 systemd[1]: libpod-7cdc00e09587737424f4709e190cdc92baa80106ded124ff6ae6f0c4a9c172ba.scope: Deactivated successfully.
Sep 30 18:03:54 compute-0 podman[291702]: 2025-09-30 18:03:54.335931796 +0000 UTC m=+0.426993181 container died 7cdc00e09587737424f4709e190cdc92baa80106ded124ff6ae6f0c4a9c172ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:03:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab48982ef6e9b7250aca48c5586142869c0379b6597ff62033535dac41ecfa28-merged.mount: Deactivated successfully.
Sep 30 18:03:54 compute-0 podman[291702]: 2025-09-30 18:03:54.381792449 +0000 UTC m=+0.472853824 container remove 7cdc00e09587737424f4709e190cdc92baa80106ded124ff6ae6f0c4a9c172ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Sep 30 18:03:54 compute-0 systemd[1]: libpod-conmon-7cdc00e09587737424f4709e190cdc92baa80106ded124ff6ae6f0c4a9c172ba.scope: Deactivated successfully.
Sep 30 18:03:54 compute-0 sudo[291593]: pam_unix(sudo:session): session closed for user root
Sep 30 18:03:54 compute-0 sudo[291739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:03:54 compute-0 sudo[291739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:03:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:54 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21400089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:54 compute-0 sudo[291739]: pam_unix(sudo:session): session closed for user root
Sep 30 18:03:54 compute-0 sudo[291764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:03:54 compute-0 sudo[291764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:03:54 compute-0 ceph-mon[73755]: pgmap v744: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:03:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:54 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:54 compute-0 podman[291827]: 2025-09-30 18:03:54.962420066 +0000 UTC m=+0.044107099 container create 02f44a6475df405212d91a7f33cd5d7ffc3eaa02a95524fc1fef0c8866d0cd0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_ritchie, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:03:54 compute-0 systemd[1]: Started libpod-conmon-02f44a6475df405212d91a7f33cd5d7ffc3eaa02a95524fc1fef0c8866d0cd0b.scope.
Sep 30 18:03:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:03:55.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:55 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:03:55 compute-0 podman[291827]: 2025-09-30 18:03:54.941885642 +0000 UTC m=+0.023572725 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:03:55 compute-0 podman[291827]: 2025-09-30 18:03:55.042323434 +0000 UTC m=+0.124010457 container init 02f44a6475df405212d91a7f33cd5d7ffc3eaa02a95524fc1fef0c8866d0cd0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_ritchie, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Sep 30 18:03:55 compute-0 podman[291827]: 2025-09-30 18:03:55.04910118 +0000 UTC m=+0.130788203 container start 02f44a6475df405212d91a7f33cd5d7ffc3eaa02a95524fc1fef0c8866d0cd0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_ritchie, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Sep 30 18:03:55 compute-0 podman[291827]: 2025-09-30 18:03:55.05254951 +0000 UTC m=+0.134236553 container attach 02f44a6475df405212d91a7f33cd5d7ffc3eaa02a95524fc1fef0c8866d0cd0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_ritchie, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Sep 30 18:03:55 compute-0 elegant_ritchie[291843]: 167 167
Sep 30 18:03:55 compute-0 systemd[1]: libpod-02f44a6475df405212d91a7f33cd5d7ffc3eaa02a95524fc1fef0c8866d0cd0b.scope: Deactivated successfully.
Sep 30 18:03:55 compute-0 podman[291827]: 2025-09-30 18:03:55.054313486 +0000 UTC m=+0.136000499 container died 02f44a6475df405212d91a7f33cd5d7ffc3eaa02a95524fc1fef0c8866d0cd0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_ritchie, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:03:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb73a9c6073f99650024c973063b4de75b272a5ce65148197a748838da0e986a-merged.mount: Deactivated successfully.
Sep 30 18:03:55 compute-0 podman[291827]: 2025-09-30 18:03:55.089459851 +0000 UTC m=+0.171146874 container remove 02f44a6475df405212d91a7f33cd5d7ffc3eaa02a95524fc1fef0c8866d0cd0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_ritchie, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:03:55 compute-0 systemd[1]: libpod-conmon-02f44a6475df405212d91a7f33cd5d7ffc3eaa02a95524fc1fef0c8866d0cd0b.scope: Deactivated successfully.
Sep 30 18:03:55 compute-0 podman[291869]: 2025-09-30 18:03:55.232271136 +0000 UTC m=+0.032987269 container create 6b7782d906c31c18568f70d1435dc15475d3a2e9679563c2e4de98f4006822ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:03:55 compute-0 systemd[1]: Started libpod-conmon-6b7782d906c31c18568f70d1435dc15475d3a2e9679563c2e4de98f4006822ea.scope.
Sep 30 18:03:55 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:03:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/300646d4a9a5ab01424027c650ba9aef6c2922dd7f6b2139b26e0d5c5a276b4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:03:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/300646d4a9a5ab01424027c650ba9aef6c2922dd7f6b2139b26e0d5c5a276b4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:03:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/300646d4a9a5ab01424027c650ba9aef6c2922dd7f6b2139b26e0d5c5a276b4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:03:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/300646d4a9a5ab01424027c650ba9aef6c2922dd7f6b2139b26e0d5c5a276b4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:03:55 compute-0 podman[291869]: 2025-09-30 18:03:55.217640486 +0000 UTC m=+0.018356649 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:03:55 compute-0 podman[291869]: 2025-09-30 18:03:55.318968262 +0000 UTC m=+0.119684415 container init 6b7782d906c31c18568f70d1435dc15475d3a2e9679563c2e4de98f4006822ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_mclean, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Sep 30 18:03:55 compute-0 podman[291869]: 2025-09-30 18:03:55.326389345 +0000 UTC m=+0.127105488 container start 6b7782d906c31c18568f70d1435dc15475d3a2e9679563c2e4de98f4006822ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_mclean, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Sep 30 18:03:55 compute-0 podman[291869]: 2025-09-30 18:03:55.330080641 +0000 UTC m=+0.130796794 container attach 6b7782d906c31c18568f70d1435dc15475d3a2e9679563c2e4de98f4006822ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_mclean, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 18:03:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:03:55 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Sep 30 18:03:55 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:03:55.468109) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 18:03:55 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Sep 30 18:03:55 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255435468144, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 629, "num_deletes": 256, "total_data_size": 748233, "memory_usage": 759992, "flush_reason": "Manual Compaction"}
Sep 30 18:03:55 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Sep 30 18:03:55 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255435473055, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 737998, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21996, "largest_seqno": 22624, "table_properties": {"data_size": 734763, "index_size": 1143, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7347, "raw_average_key_size": 18, "raw_value_size": 728110, "raw_average_value_size": 1788, "num_data_blocks": 53, "num_entries": 407, "num_filter_entries": 407, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759255391, "oldest_key_time": 1759255391, "file_creation_time": 1759255435, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:03:55 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 4980 microseconds, and 2376 cpu microseconds.
Sep 30 18:03:55 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:03:55 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:03:55.473087) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 737998 bytes OK
Sep 30 18:03:55 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:03:55.473107) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Sep 30 18:03:55 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:03:55.474764) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Sep 30 18:03:55 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:03:55.474776) EVENT_LOG_v1 {"time_micros": 1759255435474772, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 18:03:55 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:03:55.474792) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 18:03:55 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 744858, prev total WAL file size 744858, number of live WAL files 2.
Sep 30 18:03:55 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:03:55 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:03:55.475240) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353033' seq:0, type:0; will stop at (end)
Sep 30 18:03:55 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 18:03:55 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(720KB)], [50(9624KB)]
Sep 30 18:03:55 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255435475326, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10593658, "oldest_snapshot_seqno": -1}
Sep 30 18:03:55 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4878 keys, 10472124 bytes, temperature: kUnknown
Sep 30 18:03:55 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255435558135, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 10472124, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10440316, "index_size": 18514, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12229, "raw_key_size": 125436, "raw_average_key_size": 25, "raw_value_size": 10352496, "raw_average_value_size": 2122, "num_data_blocks": 756, "num_entries": 4878, "num_filter_entries": 4878, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759255435, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:03:55 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:03:55 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:03:55.558502) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 10472124 bytes
Sep 30 18:03:55 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:03:55.560569) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 127.7 rd, 126.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.4 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(28.5) write-amplify(14.2) OK, records in: 5401, records dropped: 523 output_compression: NoCompression
Sep 30 18:03:55 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:03:55.560602) EVENT_LOG_v1 {"time_micros": 1759255435560588, "job": 26, "event": "compaction_finished", "compaction_time_micros": 82975, "compaction_time_cpu_micros": 30228, "output_level": 6, "num_output_files": 1, "total_output_size": 10472124, "num_input_records": 5401, "num_output_records": 4878, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 18:03:55 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:03:55 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255435561954, "job": 26, "event": "table_file_deletion", "file_number": 52}
Sep 30 18:03:55 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:03:55 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255435565573, "job": 26, "event": "table_file_deletion", "file_number": 50}
Sep 30 18:03:55 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:03:55.475090) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:03:55 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:03:55.565791) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:03:55 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:03:55.565796) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:03:55 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:03:55.565798) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:03:55 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:03:55.565799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:03:55 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:03:55.565801) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:03:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v745: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:03:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:03:55.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:03:55 compute-0 lvm[291963]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:03:55 compute-0 lvm[291963]: VG ceph_vg0 finished
Sep 30 18:03:55 compute-0 compassionate_mclean[291886]: {}
Sep 30 18:03:55 compute-0 systemd[1]: libpod-6b7782d906c31c18568f70d1435dc15475d3a2e9679563c2e4de98f4006822ea.scope: Deactivated successfully.
Sep 30 18:03:55 compute-0 systemd[1]: libpod-6b7782d906c31c18568f70d1435dc15475d3a2e9679563c2e4de98f4006822ea.scope: Consumed 1.062s CPU time.
Sep 30 18:03:55 compute-0 podman[291869]: 2025-09-30 18:03:55.99649453 +0000 UTC m=+0.797210653 container died 6b7782d906c31c18568f70d1435dc15475d3a2e9679563c2e4de98f4006822ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_mclean, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 18:03:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-300646d4a9a5ab01424027c650ba9aef6c2922dd7f6b2139b26e0d5c5a276b4d-merged.mount: Deactivated successfully.
Sep 30 18:03:56 compute-0 podman[291869]: 2025-09-30 18:03:56.040981708 +0000 UTC m=+0.841697841 container remove 6b7782d906c31c18568f70d1435dc15475d3a2e9679563c2e4de98f4006822ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_mclean, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:03:56 compute-0 systemd[1]: libpod-conmon-6b7782d906c31c18568f70d1435dc15475d3a2e9679563c2e4de98f4006822ea.scope: Deactivated successfully.
Sep 30 18:03:56 compute-0 sudo[291764]: pam_unix(sudo:session): session closed for user root
Sep 30 18:03:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:03:56 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:03:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:03:56 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:03:56 compute-0 sudo[291980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:03:56 compute-0 sudo[291980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:03:56 compute-0 sudo[291980]: pam_unix(sudo:session): session closed for user root
Sep 30 18:03:56 compute-0 ceph-mon[73755]: pgmap v745: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:56 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:03:56 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:03:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:56 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:56 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2134003110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:03:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:03:57.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:03:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:03:57.105Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:03:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/888285193' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:03:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/888285193' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:03:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v746: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:03:57.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:58 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21400089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:58 compute-0 ceph-mon[73755]: pgmap v746: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:03:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:03:58] "GET /metrics HTTP/1.1" 200 46522 "" "Prometheus/2.51.0"
Sep 30 18:03:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:03:58] "GET /metrics HTTP/1.1" 200 46522 "" "Prometheus/2.51.0"
Sep 30 18:03:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:03:58 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2130003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:03:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:03:59.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:03:59 compute-0 podman[276673]: time="2025-09-30T18:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:03:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:03:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10215 "" "Go-http-client/1.1"
Sep 30 18:03:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v747: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:03:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:03:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:03:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:03:59.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:04:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:04:00 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2120001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:04:00 compute-0 podman[292010]: 2025-09-30 18:04:00.530171189 +0000 UTC m=+0.061049159 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Sep 30 18:04:00 compute-0 podman[292009]: 2025-09-30 18:04:00.54482584 +0000 UTC m=+0.079648303 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250930, config_id=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true)
Sep 30 18:04:00 compute-0 ceph-mon[73755]: pgmap v747: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:04:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:04:00 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2134003110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:04:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 18:04:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:04:01.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 18:04:01 compute-0 openstack_network_exporter[279566]: ERROR   18:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:04:01 compute-0 openstack_network_exporter[279566]: ERROR   18:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:04:01 compute-0 openstack_network_exporter[279566]: ERROR   18:04:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:04:01 compute-0 openstack_network_exporter[279566]: ERROR   18:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:04:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:04:01 compute-0 openstack_network_exporter[279566]: ERROR   18:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:04:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:04:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v748: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:04:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:04:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:04:01.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:04:02 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:04:02.308 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:04:02 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:04:02.309 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:04:02 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:04:02.314 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:04:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[259483]: 30/09/2025 18:04:02 : epoch 68dc1a42 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f21400089d0 fd 38 proxy ignored for local
Sep 30 18:04:02 compute-0 kernel: ganesha.nfsd[291272]: segfault at 50 ip 00007f21eeb2e32e sp 00007f21be7fb210 error 4 in libntirpc.so.5.8[7f21eeb13000+2c000] likely on CPU 7 (core 0, socket 7)
Sep 30 18:04:02 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Sep 30 18:04:02 compute-0 systemd[1]: Started Process Core Dump (PID 292063/UID 0).
Sep 30 18:04:02 compute-0 ceph-mon[73755]: pgmap v748: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:04:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:04:03.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:04:03.623Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:04:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:04:03.623Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:04:03 compute-0 systemd-coredump[292064]: Process 259489 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 75:
                                                    #0  0x00007f21eeb2e32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Sep 30 18:04:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v749: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:04:03 compute-0 systemd[1]: systemd-coredump@12-292063-0.service: Deactivated successfully.
Sep 30 18:04:03 compute-0 systemd[1]: systemd-coredump@12-292063-0.service: Consumed 1.270s CPU time.
Sep 30 18:04:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:04:03.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:03 compute-0 podman[292071]: 2025-09-30 18:04:03.897288786 +0000 UTC m=+0.027603169 container died 76f85129328150eb71d21e7c61e63b1c179a003382bbdb3cb36bea5b2bfe00b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:04:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-48b81d77edfe7492ff1ba710bc5e5ad6842e3500fecfcf076c0e191fac9d950e-merged.mount: Deactivated successfully.
Sep 30 18:04:03 compute-0 podman[292071]: 2025-09-30 18:04:03.936431084 +0000 UTC m=+0.066745437 container remove 76f85129328150eb71d21e7c61e63b1c179a003382bbdb3cb36bea5b2bfe00b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Sep 30 18:04:03 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Main process exited, code=exited, status=139/n/a
Sep 30 18:04:04 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Failed with result 'exit-code'.
Sep 30 18:04:04 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Consumed 2.119s CPU time.
Sep 30 18:04:04 compute-0 ceph-mon[73755]: pgmap v749: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:04:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:04:05.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:04:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v750: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:04:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:04:05.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:06 compute-0 sudo[292116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:04:06 compute-0 sudo[292116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:04:06 compute-0 sudo[292116]: pam_unix(sudo:session): session closed for user root
Sep 30 18:04:06 compute-0 podman[292140]: 2025-09-30 18:04:06.677986384 +0000 UTC m=+0.054358975 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Sep 30 18:04:06 compute-0 ceph-mon[73755]: pgmap v750: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:04:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:04:07.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:04:07.106Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:04:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:04:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:04:07
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['vms', '.nfs', '.rgw.root', 'default.rgw.control', 'backups', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'volumes']
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:04:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v751: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:04:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:04:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:04:07.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:04:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:04:07 compute-0 sshd-session[292160]: Invalid user cma from 14.225.220.107 port 46354
Sep 30 18:04:07 compute-0 sshd-session[292160]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:04:07 compute-0 sshd-session[292160]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107
Sep 30 18:04:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/180408 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 18:04:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:04:08] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 18:04:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:04:08] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 18:04:08 compute-0 ceph-mon[73755]: pgmap v751: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:04:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:04:09.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v752: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:04:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 18:04:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:04:09.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 18:04:10 compute-0 sshd-session[292160]: Failed password for invalid user cma from 14.225.220.107 port 46354 ssh2
Sep 30 18:04:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:04:10 compute-0 ceph-mon[73755]: pgmap v752: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:04:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:04:11.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v753: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:04:11 compute-0 sshd-session[292160]: Received disconnect from 14.225.220.107 port 46354:11: Bye Bye [preauth]
Sep 30 18:04:11 compute-0 sshd-session[292160]: Disconnected from invalid user cma 14.225.220.107 port 46354 [preauth]
Sep 30 18:04:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:04:11.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:04:12.131 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:19:4a 192.168.122.171'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.122.171/24', 'neutron:device_id': 'ovnmeta-54e2622d-59e1-4c7e-bc2b-c69118964b19', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54e2622d-59e1-4c7e-bc2b-c69118964b19', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e2dde567e5c4b1c9802c64cfc281b6d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7703b222-e0b2-4b5b-938b-dfe8edf1b48b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=dd3e3f69-f69e-4073-bbf8-546ab98d1097) old=Port_Binding(mac=['fa:16:3e:8f:19:4a'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-54e2622d-59e1-4c7e-bc2b-c69118964b19', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54e2622d-59e1-4c7e-bc2b-c69118964b19', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e2dde567e5c4b1c9802c64cfc281b6d', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:04:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:04:12.132 166158 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port dd3e3f69-f69e-4073-bbf8-546ab98d1097 in datapath 54e2622d-59e1-4c7e-bc2b-c69118964b19 updated
Sep 30 18:04:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:04:12.134 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 54e2622d-59e1-4c7e-bc2b-c69118964b19, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:04:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:04:12.135 166158 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpxlp_d_8s/privsep.sock']
Sep 30 18:04:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:04:12.904 166158 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Sep 30 18:04:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:04:12.904 166158 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpxlp_d_8s/privsep.sock __init__ /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:377
Sep 30 18:04:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:04:12.753 292173 INFO oslo.privsep.daemon [-] privsep daemon starting
Sep 30 18:04:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:04:12.756 292173 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Sep 30 18:04:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:04:12.758 292173 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Sep 30 18:04:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:04:12.758 292173 INFO oslo.privsep.daemon [-] privsep daemon running as pid 292173
Sep 30 18:04:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:04:12.906 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[fc20b2e3-d860-4326-9a45-e450ca0dff63]: (2,) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:04:13 compute-0 ceph-mon[73755]: pgmap v753: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:04:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:04:13.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:13 compute-0 nova_compute[265391]: 2025-09-30 18:04:13.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:04:13 compute-0 nova_compute[265391]: 2025-09-30 18:04:13.429 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:04:13 compute-0 nova_compute[265391]: 2025-09-30 18:04:13.429 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11909
Sep 30 18:04:13 compute-0 podman[292179]: 2025-09-30 18:04:13.529415928 +0000 UTC m=+0.063129644 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Sep 30 18:04:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:04:13.571 292173 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:04:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:04:13.571 292173 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:04:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:04:13.571 292173 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:04:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:04:13.624Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:04:13 compute-0 podman[292201]: 2025-09-30 18:04:13.628208668 +0000 UTC m=+0.062479577 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest)
Sep 30 18:04:13 compute-0 podman[292200]: 2025-09-30 18:04:13.651327289 +0000 UTC m=+0.085949027 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, release=1755695350, config_id=edpm, maintainer=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, managed_by=edpm_ansible, vcs-type=git, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Sep 30 18:04:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v754: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:04:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:04:13.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:13 compute-0 nova_compute[265391]: 2025-09-30 18:04:13.936 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11918
Sep 30 18:04:13 compute-0 nova_compute[265391]: 2025-09-30 18:04:13.937 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:04:13 compute-0 nova_compute[265391]: 2025-09-30 18:04:13.938 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.12/site-packages/nova/compute/manager.py:11947
Sep 30 18:04:14 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Scheduled restart job, restart counter is at 13.
Sep 30 18:04:14 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 18:04:14 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Consumed 2.119s CPU time.
Sep 30 18:04:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:04:14.192 292173 INFO oslo_service.backend [-] Loading backend: eventlet
Sep 30 18:04:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:04:14.201 292173 INFO oslo_service.backend [-] Backend 'eventlet' successfully loaded and cached.
Sep 30 18:04:14 compute-0 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 18:04:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:04:14.279 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[020f034e-b5bd-4759-b3ca-ce8a995a31e3]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:04:14 compute-0 nova_compute[265391]: 2025-09-30 18:04:14.445 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:04:14 compute-0 podman[292295]: 2025-09-30 18:04:14.470048471 +0000 UTC m=+0.055020692 container create b7497a80b31974815d0f7cf34d76befd732b3041528efa746fcdc7f09538bfa9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:04:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/988fa460153acad45016cba23905e64883a52f629670b96d677c2dbfce0d17b5/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Sep 30 18:04:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/988fa460153acad45016cba23905e64883a52f629670b96d677c2dbfce0d17b5/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:04:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/988fa460153acad45016cba23905e64883a52f629670b96d677c2dbfce0d17b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:04:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/988fa460153acad45016cba23905e64883a52f629670b96d677c2dbfce0d17b5/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-0.syzvbh-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:04:14 compute-0 podman[292295]: 2025-09-30 18:04:14.448045439 +0000 UTC m=+0.033017720 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:04:14 compute-0 podman[292295]: 2025-09-30 18:04:14.541964882 +0000 UTC m=+0.126937123 container init b7497a80b31974815d0f7cf34d76befd732b3041528efa746fcdc7f09538bfa9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:04:14 compute-0 podman[292295]: 2025-09-30 18:04:14.55148864 +0000 UTC m=+0.136460851 container start b7497a80b31974815d0f7cf34d76befd732b3041528efa746fcdc7f09538bfa9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:04:14 compute-0 bash[292295]: b7497a80b31974815d0f7cf34d76befd732b3041528efa746fcdc7f09538bfa9
Sep 30 18:04:14 compute-0 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 18:04:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:14 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Sep 30 18:04:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:14 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Sep 30 18:04:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:14 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Sep 30 18:04:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:14 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Sep 30 18:04:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:14 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Sep 30 18:04:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:14 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Sep 30 18:04:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:14 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Sep 30 18:04:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:14 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:04:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:04:15.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:15 compute-0 ceph-mon[73755]: pgmap v754: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:04:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:04:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v755: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:04:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:04:15.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:04:17.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:17 compute-0 ceph-mon[73755]: pgmap v755: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:04:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:04:17.108Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:04:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v756: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:04:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:04:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:04:17.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:04:17 compute-0 nova_compute[265391]: 2025-09-30 18:04:17.954 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:04:17 compute-0 nova_compute[265391]: 2025-09-30 18:04:17.955 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:04:18 compute-0 nova_compute[265391]: 2025-09-30 18:04:18.471 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:04:18 compute-0 nova_compute[265391]: 2025-09-30 18:04:18.472 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:04:18 compute-0 nova_compute[265391]: 2025-09-30 18:04:18.472 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:04:18 compute-0 nova_compute[265391]: 2025-09-30 18:04:18.472 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:04:18 compute-0 nova_compute[265391]: 2025-09-30 18:04:18.472 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:04:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:04:18] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 18:04:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:04:18] "GET /metrics HTTP/1.1" 200 46518 "" "Prometheus/2.51.0"
Sep 30 18:04:18 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:04:18 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1529259135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:04:18 compute-0 nova_compute[265391]: 2025-09-30 18:04:18.943 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:04:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:04:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:04:19.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:04:19 compute-0 ceph-mon[73755]: pgmap v756: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:04:19 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1529259135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:04:19 compute-0 nova_compute[265391]: 2025-09-30 18:04:19.127 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:04:19 compute-0 nova_compute[265391]: 2025-09-30 18:04:19.128 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:04:19 compute-0 nova_compute[265391]: 2025-09-30 18:04:19.158 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.029s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:04:19 compute-0 nova_compute[265391]: 2025-09-30 18:04:19.158 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4743MB free_disk=39.9921875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:04:19 compute-0 nova_compute[265391]: 2025-09-30 18:04:19.158 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:04:19 compute-0 nova_compute[265391]: 2025-09-30 18:04:19.159 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:04:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v757: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Sep 30 18:04:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:04:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:04:19.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:04:20 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3575243734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:04:20 compute-0 nova_compute[265391]: 2025-09-30 18:04:20.319 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:04:20 compute-0 nova_compute[265391]: 2025-09-30 18:04:20.319 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:04:19 up  1:07,  0 user,  load average: 0.41, 0.81, 0.95\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:04:20 compute-0 nova_compute[265391]: 2025-09-30 18:04:20.435 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:04:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:04:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:20 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:04:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:20 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:04:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:04:20 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3199141060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:04:20 compute-0 nova_compute[265391]: 2025-09-30 18:04:20.936 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:04:20 compute-0 nova_compute[265391]: 2025-09-30 18:04:20.941 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:04:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:04:21.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:21 compute-0 ceph-mon[73755]: pgmap v757: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Sep 30 18:04:21 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3199141060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:04:21 compute-0 nova_compute[265391]: 2025-09-30 18:04:21.448 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 39, 'reserved': 0, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:04:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v758: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Sep 30 18:04:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:04:21.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:21 compute-0 nova_compute[265391]: 2025-09-30 18:04:21.957 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:04:21 compute-0 nova_compute[265391]: 2025-09-30 18:04:21.957 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.798s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:04:22 compute-0 ceph-mon[73755]: pgmap v758: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Sep 30 18:04:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:04:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:04:22 compute-0 nova_compute[265391]: 2025-09-30 18:04:22.426 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:04:22 compute-0 nova_compute[265391]: 2025-09-30 18:04:22.426 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:04:22 compute-0 nova_compute[265391]: 2025-09-30 18:04:22.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:04:22 compute-0 nova_compute[265391]: 2025-09-30 18:04:22.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:04:22 compute-0 nova_compute[265391]: 2025-09-30 18:04:22.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:04:22 compute-0 nova_compute[265391]: 2025-09-30 18:04:22.427 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:04:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:04:23.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:23 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/577583544' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:04:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:04:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:04:23.624Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:04:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v759: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 18:04:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:04:23.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:24 compute-0 ceph-mon[73755]: pgmap v759: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 18:04:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:04:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:04:25.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:04:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:04:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v760: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 18:04:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:04:25.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:26 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 18:04:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:26 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Sep 30 18:04:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:26 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Sep 30 18:04:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:26 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Sep 30 18:04:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:26 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Sep 30 18:04:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:26 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Sep 30 18:04:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:26 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Sep 30 18:04:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:26 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 18:04:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:26 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 18:04:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:26 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 18:04:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:26 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Sep 30 18:04:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:26 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 18:04:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:26 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Sep 30 18:04:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:26 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Sep 30 18:04:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:26 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Sep 30 18:04:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:26 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Sep 30 18:04:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:26 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Sep 30 18:04:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:26 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Sep 30 18:04:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:26 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Sep 30 18:04:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:26 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Sep 30 18:04:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:26 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Sep 30 18:04:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:26 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Sep 30 18:04:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:26 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Sep 30 18:04:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:26 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Sep 30 18:04:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:26 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 18:04:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:26 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Sep 30 18:04:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:26 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 18:04:26 compute-0 sudo[292409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:04:26 compute-0 sudo[292409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:04:26 compute-0 sudo[292409]: pam_unix(sudo:session): session closed for user root
Sep 30 18:04:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:26 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3ec000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:04:26 compute-0 ceph-mon[73755]: pgmap v760: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 18:04:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:04:27.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:04:27.110Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:04:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:04:27.110Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:04:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v761: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 18:04:27 compute-0 unix_chkpwd[292452]: password check failed for user (root)
Sep 30 18:04:27 compute-0 sshd-session[292448]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=78.128.112.74  user=root
Sep 30 18:04:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:04:27.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:28 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d8001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:04:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:04:28] "GET /metrics HTTP/1.1" 200 46519 "" "Prometheus/2.51.0"
Sep 30 18:04:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:04:28] "GET /metrics HTTP/1.1" 200 46519 "" "Prometheus/2.51.0"
Sep 30 18:04:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:28 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:04:28 compute-0 ceph-mon[73755]: pgmap v761: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 18:04:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:04:29.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:29 compute-0 sshd-session[292448]: Failed password for root from 78.128.112.74 port 56112 ssh2
Sep 30 18:04:29 compute-0 podman[276673]: time="2025-09-30T18:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:04:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:04:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10225 "" "Go-http-client/1.1"
Sep 30 18:04:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v762: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 18:04:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:04:29.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:04:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/180430 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 18:04:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:30 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:04:30 compute-0 sshd-session[292448]: Connection closed by authenticating user root 78.128.112.74 port 56112 [preauth]
Sep 30 18:04:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:30 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:04:30 compute-0 ceph-mon[73755]: pgmap v762: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 18:04:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:04:31.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:31 compute-0 openstack_network_exporter[279566]: ERROR   18:04:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:04:31 compute-0 openstack_network_exporter[279566]: ERROR   18:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:04:31 compute-0 openstack_network_exporter[279566]: ERROR   18:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:04:31 compute-0 openstack_network_exporter[279566]: ERROR   18:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:04:31 compute-0 openstack_network_exporter[279566]: ERROR   18:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:04:31 compute-0 podman[292457]: 2025-09-30 18:04:31.510302361 +0000 UTC m=+0.047560899 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:04:31 compute-0 podman[292456]: 2025-09-30 18:04:31.538222647 +0000 UTC m=+0.079797497 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2)
Sep 30 18:04:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v763: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Sep 30 18:04:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:04:31.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:32 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d8002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:04:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:32 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:04:32 compute-0 ceph-mon[73755]: pgmap v763: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Sep 30 18:04:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:04:33.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:04:33.625Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:04:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v764: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Sep 30 18:04:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:04:33.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:34 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:04:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:34 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e40025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:04:35 compute-0 ceph-mon[73755]: pgmap v764: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Sep 30 18:04:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:04:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:04:35.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:04:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:04:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v765: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 18:04:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:04:35.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:04:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/502183397' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:04:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:04:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/502183397' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:04:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:36 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d8002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:04:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:36 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:04:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:04:37.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:37 compute-0 ceph-mon[73755]: pgmap v765: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 18:04:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/502183397' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:04:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/502183397' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:04:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:04:37.111Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:04:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:04:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:04:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:04:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:04:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:04:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:04:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:04:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:04:37 compute-0 podman[292512]: 2025-09-30 18:04:37.533060113 +0000 UTC m=+0.064446478 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Sep 30 18:04:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v766: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 18:04:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:04:37.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:04:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:38 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:04:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:04:38] "GET /metrics HTTP/1.1" 200 46521 "" "Prometheus/2.51.0"
Sep 30 18:04:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:04:38] "GET /metrics HTTP/1.1" 200 46521 "" "Prometheus/2.51.0"
Sep 30 18:04:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:38 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e40025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:04:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:04:39.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:39 compute-0 ceph-mon[73755]: pgmap v766: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 18:04:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v767: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 18:04:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:04:39.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:40 compute-0 ceph-mon[73755]: pgmap v767: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 18:04:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:04:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:40 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d8002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:04:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:40 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:04:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:04:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:04:41.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:04:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v768: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:04:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:04:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:04:41.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:04:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:42 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:04:42 compute-0 sshd-session[292534]: Invalid user tim from 45.252.249.158 port 44082
Sep 30 18:04:42 compute-0 sshd-session[292534]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:04:42 compute-0 sshd-session[292534]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158
Sep 30 18:04:42 compute-0 ceph-mon[73755]: pgmap v768: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:04:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:42 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:04:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:04:43.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:04:43.626Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:04:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v769: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:04:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:04:43.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:44 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d8002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:04:44 compute-0 podman[292542]: 2025-09-30 18:04:44.565553527 +0000 UTC m=+0.082552639 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, version=9.6, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Sep 30 18:04:44 compute-0 podman[292540]: 2025-09-30 18:04:44.575700451 +0000 UTC m=+0.098849343 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Sep 30 18:04:44 compute-0 podman[292541]: 2025-09-30 18:04:44.575234609 +0000 UTC m=+0.098565766 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest)
Sep 30 18:04:44 compute-0 sshd-session[292534]: Failed password for invalid user tim from 45.252.249.158 port 44082 ssh2
Sep 30 18:04:44 compute-0 ceph-mon[73755]: pgmap v769: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:04:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:44 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d8002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:04:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:04:45.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:04:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v770: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:04:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:04:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:04:45.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:04:46 compute-0 sshd-session[292534]: Received disconnect from 45.252.249.158 port 44082:11: Bye Bye [preauth]
Sep 30 18:04:46 compute-0 sshd-session[292534]: Disconnected from invalid user tim 45.252.249.158 port 44082 [preauth]
Sep 30 18:04:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:46 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:04:46 compute-0 sudo[292600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:04:46 compute-0 sudo[292600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:04:46 compute-0 sudo[292600]: pam_unix(sudo:session): session closed for user root
Sep 30 18:04:46 compute-0 ceph-mon[73755]: pgmap v770: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:04:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:46 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e40032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:04:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:04:47.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:04:47.112Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:04:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v771: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:04:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:04:47.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:48 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:04:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:04:48] "GET /metrics HTTP/1.1" 200 46521 "" "Prometheus/2.51.0"
Sep 30 18:04:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:04:48] "GET /metrics HTTP/1.1" 200 46521 "" "Prometheus/2.51.0"
Sep 30 18:04:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:48 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:04:48 compute-0 ceph-mon[73755]: pgmap v771: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:04:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:04:49.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=404 latency=0.003000078s ======
Sep 30 18:04:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:04:49.275 +0000] "GET /info HTTP/1.1" 404 152 - "python-urllib3/1.26.19" - latency=0.003000078s
Sep 30 18:04:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=404 latency=0.002000052s ======
Sep 30 18:04:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:04:49.286 +0000] "GET /healthcheck HTTP/1.1" 404 242 - "python-urllib3/1.26.19" - latency=0.002000052s
Sep 30 18:04:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v772: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:04:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:04:49.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:04:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:50 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d8002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:04:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:50 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d8002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:04:50 compute-0 ceph-mon[73755]: pgmap v772: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:04:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:04:51.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v773: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:04:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:04:51.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:04:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:04:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:52 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:04:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:52 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:04:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Sep 30 18:04:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e130 e130: 2 total, 2 up, 2 in
Sep 30 18:04:52 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e130: 2 total, 2 up, 2 in
Sep 30 18:04:52 compute-0 ceph-mon[73755]: pgmap v773: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:04:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:04:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:04:53.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:04:53.627Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:04:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v775: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 204 B/s rd, 102 B/s wr, 0 op/s
Sep 30 18:04:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:04:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:04:53.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:04:53 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Sep 30 18:04:53 compute-0 ceph-mon[73755]: osdmap e130: 2 total, 2 up, 2 in
Sep 30 18:04:53 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e131 e131: 2 total, 2 up, 2 in
Sep 30 18:04:54 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e131: 2 total, 2 up, 2 in
Sep 30 18:04:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:04:54.267 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:04:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:04:54.267 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:04:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:04:54.267 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:04:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:54 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e4003bf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:04:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:54 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d8002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:04:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Sep 30 18:04:55 compute-0 ceph-mon[73755]: pgmap v775: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 204 B/s rd, 102 B/s wr, 0 op/s
Sep 30 18:04:55 compute-0 ceph-mon[73755]: osdmap e131: 2 total, 2 up, 2 in
Sep 30 18:04:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e132 e132: 2 total, 2 up, 2 in
Sep 30 18:04:55 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e132: 2 total, 2 up, 2 in
Sep 30 18:04:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:04:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:04:55.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:04:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:04:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v778: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s wr, 0 op/s
Sep 30 18:04:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:04:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:04:55.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:04:56 compute-0 ceph-mon[73755]: osdmap e132: 2 total, 2 up, 2 in
Sep 30 18:04:56 compute-0 sudo[292636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:04:56 compute-0 sudo[292636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:04:56 compute-0 sudo[292636]: pam_unix(sudo:session): session closed for user root
Sep 30 18:04:56 compute-0 sudo[292661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:04:56 compute-0 sudo[292661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:04:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:56 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d8002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:04:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:56 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:04:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Sep 30 18:04:57 compute-0 ceph-mon[73755]: pgmap v778: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 170 B/s wr, 0 op/s
Sep 30 18:04:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:04:57.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e133 e133: 2 total, 2 up, 2 in
Sep 30 18:04:57 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e133: 2 total, 2 up, 2 in
Sep 30 18:04:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:04:57.112Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:04:57 compute-0 sudo[292661]: pam_unix(sudo:session): session closed for user root
Sep 30 18:04:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:04:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:04:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:04:57 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:04:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:04:57 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:04:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:04:57 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:04:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:04:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:04:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:04:57 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:04:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:04:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:04:57 compute-0 sudo[292719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:04:57 compute-0 sudo[292719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:04:57 compute-0 sudo[292719]: pam_unix(sudo:session): session closed for user root
Sep 30 18:04:57 compute-0 sudo[292745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:04:57 compute-0 sudo[292745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:04:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:04:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2190557213' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:04:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:04:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2190557213' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:04:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v780: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 211 B/s wr, 0 op/s
Sep 30 18:04:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:04:57.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:57 compute-0 podman[292813]: 2025-09-30 18:04:57.983363506 +0000 UTC m=+0.046245905 container create bc539408e4fc72c0d5a0f6fa40600fbefaf5cb545523f6c6dcc96904cf39f14d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_engelbart, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid)
Sep 30 18:04:58 compute-0 systemd[1]: Started libpod-conmon-bc539408e4fc72c0d5a0f6fa40600fbefaf5cb545523f6c6dcc96904cf39f14d.scope.
Sep 30 18:04:58 compute-0 podman[292813]: 2025-09-30 18:04:57.961519377 +0000 UTC m=+0.024401786 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:04:58 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:04:58 compute-0 podman[292813]: 2025-09-30 18:04:58.098277785 +0000 UTC m=+0.161160264 container init bc539408e4fc72c0d5a0f6fa40600fbefaf5cb545523f6c6dcc96904cf39f14d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_engelbart, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:04:58 compute-0 ceph-mon[73755]: osdmap e133: 2 total, 2 up, 2 in
Sep 30 18:04:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:04:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:04:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:04:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:04:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:04:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:04:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:04:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2190557213' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:04:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2190557213' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:04:58 compute-0 podman[292813]: 2025-09-30 18:04:58.113006389 +0000 UTC m=+0.175888818 container start bc539408e4fc72c0d5a0f6fa40600fbefaf5cb545523f6c6dcc96904cf39f14d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:04:58 compute-0 podman[292813]: 2025-09-30 18:04:58.118956463 +0000 UTC m=+0.181838952 container attach bc539408e4fc72c0d5a0f6fa40600fbefaf5cb545523f6c6dcc96904cf39f14d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:04:58 compute-0 mystifying_engelbart[292830]: 167 167
Sep 30 18:04:58 compute-0 systemd[1]: libpod-bc539408e4fc72c0d5a0f6fa40600fbefaf5cb545523f6c6dcc96904cf39f14d.scope: Deactivated successfully.
Sep 30 18:04:58 compute-0 conmon[292830]: conmon bc539408e4fc72c0d5a0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bc539408e4fc72c0d5a0f6fa40600fbefaf5cb545523f6c6dcc96904cf39f14d.scope/container/memory.events
Sep 30 18:04:58 compute-0 podman[292813]: 2025-09-30 18:04:58.125604726 +0000 UTC m=+0.188487205 container died bc539408e4fc72c0d5a0f6fa40600fbefaf5cb545523f6c6dcc96904cf39f14d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_engelbart, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:04:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-dea503574756843e46d20eb2d594588cda98279d1176da1da0385a1477b0f067-merged.mount: Deactivated successfully.
Sep 30 18:04:58 compute-0 podman[292813]: 2025-09-30 18:04:58.183258546 +0000 UTC m=+0.246140935 container remove bc539408e4fc72c0d5a0f6fa40600fbefaf5cb545523f6c6dcc96904cf39f14d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Sep 30 18:04:58 compute-0 systemd[1]: libpod-conmon-bc539408e4fc72c0d5a0f6fa40600fbefaf5cb545523f6c6dcc96904cf39f14d.scope: Deactivated successfully.
Sep 30 18:04:58 compute-0 podman[292854]: 2025-09-30 18:04:58.359085551 +0000 UTC m=+0.042966189 container create 9fc3fe87d9da1a44dd4011cf82899e1090a6776a82bb7b6c91a42ca8a67c0051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_lewin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 18:04:58 compute-0 systemd[1]: Started libpod-conmon-9fc3fe87d9da1a44dd4011cf82899e1090a6776a82bb7b6c91a42ca8a67c0051.scope.
Sep 30 18:04:58 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:04:58 compute-0 podman[292854]: 2025-09-30 18:04:58.339483681 +0000 UTC m=+0.023364339 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:04:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba4cf09343d3ceca12283a68e0e55195ff6546c285183f35a0b5b4d23db07aaa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:04:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba4cf09343d3ceca12283a68e0e55195ff6546c285183f35a0b5b4d23db07aaa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:04:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba4cf09343d3ceca12283a68e0e55195ff6546c285183f35a0b5b4d23db07aaa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:04:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba4cf09343d3ceca12283a68e0e55195ff6546c285183f35a0b5b4d23db07aaa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:04:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba4cf09343d3ceca12283a68e0e55195ff6546c285183f35a0b5b4d23db07aaa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:04:58 compute-0 podman[292854]: 2025-09-30 18:04:58.460557001 +0000 UTC m=+0.144437679 container init 9fc3fe87d9da1a44dd4011cf82899e1090a6776a82bb7b6c91a42ca8a67c0051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_lewin, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:04:58 compute-0 podman[292854]: 2025-09-30 18:04:58.472474932 +0000 UTC m=+0.156355580 container start 9fc3fe87d9da1a44dd4011cf82899e1090a6776a82bb7b6c91a42ca8a67c0051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_lewin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Sep 30 18:04:58 compute-0 podman[292854]: 2025-09-30 18:04:58.47662892 +0000 UTC m=+0.160509558 container attach 9fc3fe87d9da1a44dd4011cf82899e1090a6776a82bb7b6c91a42ca8a67c0051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_lewin, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:04:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:58 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e4003bf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:04:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:04:58] "GET /metrics HTTP/1.1" 200 46512 "" "Prometheus/2.51.0"
Sep 30 18:04:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:04:58] "GET /metrics HTTP/1.1" 200 46512 "" "Prometheus/2.51.0"
Sep 30 18:04:58 compute-0 peaceful_lewin[292870]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:04:58 compute-0 peaceful_lewin[292870]: --> All data devices are unavailable
Sep 30 18:04:58 compute-0 systemd[1]: libpod-9fc3fe87d9da1a44dd4011cf82899e1090a6776a82bb7b6c91a42ca8a67c0051.scope: Deactivated successfully.
Sep 30 18:04:58 compute-0 podman[292854]: 2025-09-30 18:04:58.832893739 +0000 UTC m=+0.516774407 container died 9fc3fe87d9da1a44dd4011cf82899e1090a6776a82bb7b6c91a42ca8a67c0051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_lewin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:04:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba4cf09343d3ceca12283a68e0e55195ff6546c285183f35a0b5b4d23db07aaa-merged.mount: Deactivated successfully.
Sep 30 18:04:58 compute-0 podman[292854]: 2025-09-30 18:04:58.886458053 +0000 UTC m=+0.570338681 container remove 9fc3fe87d9da1a44dd4011cf82899e1090a6776a82bb7b6c91a42ca8a67c0051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_lewin, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:04:58 compute-0 systemd[1]: libpod-conmon-9fc3fe87d9da1a44dd4011cf82899e1090a6776a82bb7b6c91a42ca8a67c0051.scope: Deactivated successfully.
Sep 30 18:04:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:04:58 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e4003bf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:04:58 compute-0 sudo[292745]: pam_unix(sudo:session): session closed for user root
Sep 30 18:04:58 compute-0 sudo[292897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:04:58 compute-0 sudo[292897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:04:58 compute-0 sudo[292897]: pam_unix(sudo:session): session closed for user root
Sep 30 18:04:59 compute-0 sudo[292922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:04:59 compute-0 sudo[292922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:04:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:04:59.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:59 compute-0 ceph-mon[73755]: pgmap v780: 353 pgs: 353 active+clean; 458 KiB data, 107 MiB used, 40 GiB / 40 GiB avail; 211 B/s wr, 0 op/s
Sep 30 18:04:59 compute-0 podman[292988]: 2025-09-30 18:04:59.478250709 +0000 UTC m=+0.038363079 container create a5bfa60e9aee06028a4238a4737d4d5a218e2265c48737d5851ee458232f5595 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Sep 30 18:04:59 compute-0 systemd[1]: Started libpod-conmon-a5bfa60e9aee06028a4238a4737d4d5a218e2265c48737d5851ee458232f5595.scope.
Sep 30 18:04:59 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:04:59 compute-0 podman[292988]: 2025-09-30 18:04:59.551950597 +0000 UTC m=+0.112062967 container init a5bfa60e9aee06028a4238a4737d4d5a218e2265c48737d5851ee458232f5595 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Sep 30 18:04:59 compute-0 podman[292988]: 2025-09-30 18:04:59.46060347 +0000 UTC m=+0.020715850 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:04:59 compute-0 podman[292988]: 2025-09-30 18:04:59.56359874 +0000 UTC m=+0.123711140 container start a5bfa60e9aee06028a4238a4737d4d5a218e2265c48737d5851ee458232f5595 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_sammet, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Sep 30 18:04:59 compute-0 quizzical_sammet[293004]: 167 167
Sep 30 18:04:59 compute-0 systemd[1]: libpod-a5bfa60e9aee06028a4238a4737d4d5a218e2265c48737d5851ee458232f5595.scope: Deactivated successfully.
Sep 30 18:04:59 compute-0 podman[292988]: 2025-09-30 18:04:59.568077937 +0000 UTC m=+0.128190347 container attach a5bfa60e9aee06028a4238a4737d4d5a218e2265c48737d5851ee458232f5595 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:04:59 compute-0 podman[292988]: 2025-09-30 18:04:59.568473197 +0000 UTC m=+0.128585607 container died a5bfa60e9aee06028a4238a4737d4d5a218e2265c48737d5851ee458232f5595 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_sammet, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:04:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd4fcf831eab9e8b9893fb68e6b80528ba142f25dc8a114e2569bb61a1e139c9-merged.mount: Deactivated successfully.
Sep 30 18:04:59 compute-0 podman[292988]: 2025-09-30 18:04:59.61202135 +0000 UTC m=+0.172133750 container remove a5bfa60e9aee06028a4238a4737d4d5a218e2265c48737d5851ee458232f5595 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_sammet, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:04:59 compute-0 systemd[1]: libpod-conmon-a5bfa60e9aee06028a4238a4737d4d5a218e2265c48737d5851ee458232f5595.scope: Deactivated successfully.
Sep 30 18:04:59 compute-0 podman[276673]: time="2025-09-30T18:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:04:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:04:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10235 "" "Go-http-client/1.1"
Sep 30 18:04:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v781: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 44 KiB/s rd, 6.8 MiB/s wr, 63 op/s
Sep 30 18:04:59 compute-0 podman[293031]: 2025-09-30 18:04:59.869715625 +0000 UTC m=+0.060795813 container create d725aa202960625a2774312f244fcf824e95a65c1e599da991828f1f9bc3ff16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Sep 30 18:04:59 compute-0 systemd[1]: Started libpod-conmon-d725aa202960625a2774312f244fcf824e95a65c1e599da991828f1f9bc3ff16.scope.
Sep 30 18:04:59 compute-0 podman[293031]: 2025-09-30 18:04:59.846975963 +0000 UTC m=+0.038056201 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:04:59 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:04:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/153dc030086c80396853a8dd208162e11e344375ef7357f0712b4b2f2906040d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:04:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/153dc030086c80396853a8dd208162e11e344375ef7357f0712b4b2f2906040d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:04:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/153dc030086c80396853a8dd208162e11e344375ef7357f0712b4b2f2906040d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:04:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/153dc030086c80396853a8dd208162e11e344375ef7357f0712b4b2f2906040d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:04:59 compute-0 podman[293031]: 2025-09-30 18:04:59.967326524 +0000 UTC m=+0.158406742 container init d725aa202960625a2774312f244fcf824e95a65c1e599da991828f1f9bc3ff16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_leavitt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Sep 30 18:04:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:04:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:04:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:04:59.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:04:59 compute-0 podman[293031]: 2025-09-30 18:04:59.978627038 +0000 UTC m=+0.169707226 container start d725aa202960625a2774312f244fcf824e95a65c1e599da991828f1f9bc3ff16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_leavitt, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Sep 30 18:04:59 compute-0 podman[293031]: 2025-09-30 18:04:59.982720665 +0000 UTC m=+0.173800843 container attach d725aa202960625a2774312f244fcf824e95a65c1e599da991828f1f9bc3ff16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_leavitt, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:05:00 compute-0 ceph-mon[73755]: pgmap v781: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 44 KiB/s rd, 6.8 MiB/s wr, 63 op/s
Sep 30 18:05:00 compute-0 sad_leavitt[293047]: {
Sep 30 18:05:00 compute-0 sad_leavitt[293047]:     "0": [
Sep 30 18:05:00 compute-0 sad_leavitt[293047]:         {
Sep 30 18:05:00 compute-0 sad_leavitt[293047]:             "devices": [
Sep 30 18:05:00 compute-0 sad_leavitt[293047]:                 "/dev/loop3"
Sep 30 18:05:00 compute-0 sad_leavitt[293047]:             ],
Sep 30 18:05:00 compute-0 sad_leavitt[293047]:             "lv_name": "ceph_lv0",
Sep 30 18:05:00 compute-0 sad_leavitt[293047]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:05:00 compute-0 sad_leavitt[293047]:             "lv_size": "21470642176",
Sep 30 18:05:00 compute-0 sad_leavitt[293047]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:05:00 compute-0 sad_leavitt[293047]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:05:00 compute-0 sad_leavitt[293047]:             "name": "ceph_lv0",
Sep 30 18:05:00 compute-0 sad_leavitt[293047]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:05:00 compute-0 sad_leavitt[293047]:             "tags": {
Sep 30 18:05:00 compute-0 sad_leavitt[293047]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:05:00 compute-0 sad_leavitt[293047]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:05:00 compute-0 sad_leavitt[293047]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:05:00 compute-0 sad_leavitt[293047]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:05:00 compute-0 sad_leavitt[293047]:                 "ceph.cluster_name": "ceph",
Sep 30 18:05:00 compute-0 sad_leavitt[293047]:                 "ceph.crush_device_class": "",
Sep 30 18:05:00 compute-0 sad_leavitt[293047]:                 "ceph.encrypted": "0",
Sep 30 18:05:00 compute-0 sad_leavitt[293047]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:05:00 compute-0 sad_leavitt[293047]:                 "ceph.osd_id": "0",
Sep 30 18:05:00 compute-0 sad_leavitt[293047]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:05:00 compute-0 sad_leavitt[293047]:                 "ceph.type": "block",
Sep 30 18:05:00 compute-0 sad_leavitt[293047]:                 "ceph.vdo": "0",
Sep 30 18:05:00 compute-0 sad_leavitt[293047]:                 "ceph.with_tpm": "0"
Sep 30 18:05:00 compute-0 sad_leavitt[293047]:             },
Sep 30 18:05:00 compute-0 sad_leavitt[293047]:             "type": "block",
Sep 30 18:05:00 compute-0 sad_leavitt[293047]:             "vg_name": "ceph_vg0"
Sep 30 18:05:00 compute-0 sad_leavitt[293047]:         }
Sep 30 18:05:00 compute-0 sad_leavitt[293047]:     ]
Sep 30 18:05:00 compute-0 sad_leavitt[293047]: }
Sep 30 18:05:00 compute-0 systemd[1]: libpod-d725aa202960625a2774312f244fcf824e95a65c1e599da991828f1f9bc3ff16.scope: Deactivated successfully.
Sep 30 18:05:00 compute-0 podman[293031]: 2025-09-30 18:05:00.341043558 +0000 UTC m=+0.532123746 container died d725aa202960625a2774312f244fcf824e95a65c1e599da991828f1f9bc3ff16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:05:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-153dc030086c80396853a8dd208162e11e344375ef7357f0712b4b2f2906040d-merged.mount: Deactivated successfully.
Sep 30 18:05:00 compute-0 podman[293031]: 2025-09-30 18:05:00.386581763 +0000 UTC m=+0.577661961 container remove d725aa202960625a2774312f244fcf824e95a65c1e599da991828f1f9bc3ff16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:05:00 compute-0 systemd[1]: libpod-conmon-d725aa202960625a2774312f244fcf824e95a65c1e599da991828f1f9bc3ff16.scope: Deactivated successfully.
Sep 30 18:05:00 compute-0 sudo[292922]: pam_unix(sudo:session): session closed for user root
Sep 30 18:05:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:05:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Sep 30 18:05:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 e134: 2 total, 2 up, 2 in
Sep 30 18:05:00 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e134: 2 total, 2 up, 2 in
Sep 30 18:05:00 compute-0 sudo[293066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:05:00 compute-0 sudo[293066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:05:00 compute-0 sudo[293066]: pam_unix(sudo:session): session closed for user root
Sep 30 18:05:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:00 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3bc000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:00 compute-0 sudo[293091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:05:00 compute-0 sudo[293091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:05:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:00 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e4003bf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:01 compute-0 podman[293156]: 2025-09-30 18:05:01.008689579 +0000 UTC m=+0.046378648 container create da386828db848f8b196fa69c84ee9ab1c72feb00cf14aa7f3c61f03e1dd1f39b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_thompson, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Sep 30 18:05:01 compute-0 systemd[1]: Started libpod-conmon-da386828db848f8b196fa69c84ee9ab1c72feb00cf14aa7f3c61f03e1dd1f39b.scope.
Sep 30 18:05:01 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:05:01 compute-0 podman[293156]: 2025-09-30 18:05:00.993090693 +0000 UTC m=+0.030779792 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:05:01 compute-0 podman[293156]: 2025-09-30 18:05:01.088933687 +0000 UTC m=+0.126622776 container init da386828db848f8b196fa69c84ee9ab1c72feb00cf14aa7f3c61f03e1dd1f39b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:05:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:05:01.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:01 compute-0 podman[293156]: 2025-09-30 18:05:01.100557699 +0000 UTC m=+0.138246778 container start da386828db848f8b196fa69c84ee9ab1c72feb00cf14aa7f3c61f03e1dd1f39b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_thompson, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:05:01 compute-0 podman[293156]: 2025-09-30 18:05:01.105003415 +0000 UTC m=+0.142692484 container attach da386828db848f8b196fa69c84ee9ab1c72feb00cf14aa7f3c61f03e1dd1f39b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_thompson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:05:01 compute-0 upbeat_thompson[293173]: 167 167
Sep 30 18:05:01 compute-0 systemd[1]: libpod-da386828db848f8b196fa69c84ee9ab1c72feb00cf14aa7f3c61f03e1dd1f39b.scope: Deactivated successfully.
Sep 30 18:05:01 compute-0 conmon[293173]: conmon da386828db848f8b196f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-da386828db848f8b196fa69c84ee9ab1c72feb00cf14aa7f3c61f03e1dd1f39b.scope/container/memory.events
Sep 30 18:05:01 compute-0 podman[293156]: 2025-09-30 18:05:01.109087541 +0000 UTC m=+0.146776610 container died da386828db848f8b196fa69c84ee9ab1c72feb00cf14aa7f3c61f03e1dd1f39b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Sep 30 18:05:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b6405ec6dde2b8d42e9a2229db82ec142294d141500f42a6b8a11d861dce737-merged.mount: Deactivated successfully.
Sep 30 18:05:01 compute-0 podman[293156]: 2025-09-30 18:05:01.148167568 +0000 UTC m=+0.185856647 container remove da386828db848f8b196fa69c84ee9ab1c72feb00cf14aa7f3c61f03e1dd1f39b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_thompson, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:05:01 compute-0 systemd[1]: libpod-conmon-da386828db848f8b196fa69c84ee9ab1c72feb00cf14aa7f3c61f03e1dd1f39b.scope: Deactivated successfully.
Sep 30 18:05:01 compute-0 podman[293196]: 2025-09-30 18:05:01.34616418 +0000 UTC m=+0.051191533 container create 6305c1a9fa64979f6fc74e81ce16e4c06075d3ad80fd64b5f6d3d3ec1ce8cf42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_mayer, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Sep 30 18:05:01 compute-0 systemd[1]: Started libpod-conmon-6305c1a9fa64979f6fc74e81ce16e4c06075d3ad80fd64b5f6d3d3ec1ce8cf42.scope.
Sep 30 18:05:01 compute-0 openstack_network_exporter[279566]: ERROR   18:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:05:01 compute-0 podman[293196]: 2025-09-30 18:05:01.323669104 +0000 UTC m=+0.028696547 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:05:01 compute-0 openstack_network_exporter[279566]: ERROR   18:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:05:01 compute-0 openstack_network_exporter[279566]: ERROR   18:05:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:05:01 compute-0 openstack_network_exporter[279566]: ERROR   18:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:05:01 compute-0 openstack_network_exporter[279566]: ERROR   18:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:05:01 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d45eaf570ec8ef103363cf1ab9019b309cd439858a2dc094eb5ac6656f8c004/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d45eaf570ec8ef103363cf1ab9019b309cd439858a2dc094eb5ac6656f8c004/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d45eaf570ec8ef103363cf1ab9019b309cd439858a2dc094eb5ac6656f8c004/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d45eaf570ec8ef103363cf1ab9019b309cd439858a2dc094eb5ac6656f8c004/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:05:01 compute-0 podman[293196]: 2025-09-30 18:05:01.46341258 +0000 UTC m=+0.168439943 container init 6305c1a9fa64979f6fc74e81ce16e4c06075d3ad80fd64b5f6d3d3ec1ce8cf42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_mayer, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Sep 30 18:05:01 compute-0 podman[293196]: 2025-09-30 18:05:01.471449659 +0000 UTC m=+0.176477042 container start 6305c1a9fa64979f6fc74e81ce16e4c06075d3ad80fd64b5f6d3d3ec1ce8cf42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_mayer, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Sep 30 18:05:01 compute-0 podman[293196]: 2025-09-30 18:05:01.475658489 +0000 UTC m=+0.180685852 container attach 6305c1a9fa64979f6fc74e81ce16e4c06075d3ad80fd64b5f6d3d3ec1ce8cf42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 18:05:01 compute-0 ceph-mon[73755]: osdmap e134: 2 total, 2 up, 2 in
Sep 30 18:05:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v783: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 39 KiB/s rd, 6.0 MiB/s wr, 56 op/s
Sep 30 18:05:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:05:01.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:02 compute-0 lvm[293307]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:05:02 compute-0 lvm[293307]: VG ceph_vg0 finished
Sep 30 18:05:02 compute-0 wonderful_mayer[293214]: {}
Sep 30 18:05:02 compute-0 podman[293289]: 2025-09-30 18:05:02.155269111 +0000 UTC m=+0.066733867 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Sep 30 18:05:02 compute-0 podman[293288]: 2025-09-30 18:05:02.177683005 +0000 UTC m=+0.090243449 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest)
Sep 30 18:05:02 compute-0 systemd[1]: libpod-6305c1a9fa64979f6fc74e81ce16e4c06075d3ad80fd64b5f6d3d3ec1ce8cf42.scope: Deactivated successfully.
Sep 30 18:05:02 compute-0 podman[293196]: 2025-09-30 18:05:02.180742884 +0000 UTC m=+0.885770237 container died 6305c1a9fa64979f6fc74e81ce16e4c06075d3ad80fd64b5f6d3d3ec1ce8cf42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:05:02 compute-0 systemd[1]: libpod-6305c1a9fa64979f6fc74e81ce16e4c06075d3ad80fd64b5f6d3d3ec1ce8cf42.scope: Consumed 1.148s CPU time.
Sep 30 18:05:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d45eaf570ec8ef103363cf1ab9019b309cd439858a2dc094eb5ac6656f8c004-merged.mount: Deactivated successfully.
Sep 30 18:05:02 compute-0 podman[293196]: 2025-09-30 18:05:02.318390116 +0000 UTC m=+1.023417459 container remove 6305c1a9fa64979f6fc74e81ce16e4c06075d3ad80fd64b5f6d3d3ec1ce8cf42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Sep 30 18:05:02 compute-0 systemd[1]: libpod-conmon-6305c1a9fa64979f6fc74e81ce16e4c06075d3ad80fd64b5f6d3d3ec1ce8cf42.scope: Deactivated successfully.
Sep 30 18:05:02 compute-0 sudo[293091]: pam_unix(sudo:session): session closed for user root
Sep 30 18:05:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:05:02 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:05:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:05:02 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:05:02 compute-0 ceph-mon[73755]: pgmap v783: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 39 KiB/s rd, 6.0 MiB/s wr, 56 op/s
Sep 30 18:05:02 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:05:02 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:05:02 compute-0 sudo[293353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:05:02 compute-0 sudo[293353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:05:02 compute-0 sudo[293353]: pam_unix(sudo:session): session closed for user root
Sep 30 18:05:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:02 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d8002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:02 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:05:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:05:03.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:05:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:05:03.629Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:05:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v784: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Sep 30 18:05:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:05:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:05:03.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:05:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:04 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:04 compute-0 ceph-mon[73755]: pgmap v784: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Sep 30 18:05:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:04 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e4003bf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:05:05.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:05:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v785: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 30 KiB/s rd, 4.7 MiB/s wr, 43 op/s
Sep 30 18:05:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:05:05.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:06 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d8002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:06 compute-0 ceph-mon[73755]: pgmap v785: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 30 KiB/s rd, 4.7 MiB/s wr, 43 op/s
Sep 30 18:05:06 compute-0 sudo[293382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:05:06 compute-0 sudo[293382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:05:06 compute-0 sudo[293382]: pam_unix(sudo:session): session closed for user root
Sep 30 18:05:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:06 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:05:07.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:05:07.113Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:05:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:05:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:05:07
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['.rgw.root', '.nfs', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'backups', 'vms', 'volumes', 'images', 'default.rgw.control']
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:05:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v786: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 38 op/s
Sep 30 18:05:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:05:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:05:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:05:07.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:05:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:08 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3bc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:08 compute-0 podman[293409]: 2025-09-30 18:05:08.548320617 +0000 UTC m=+0.076598364 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Sep 30 18:05:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:05:08] "GET /metrics HTTP/1.1" 200 46572 "" "Prometheus/2.51.0"
Sep 30 18:05:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:05:08] "GET /metrics HTTP/1.1" 200 46572 "" "Prometheus/2.51.0"
Sep 30 18:05:08 compute-0 ceph-mon[73755]: pgmap v786: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 38 op/s
Sep 30 18:05:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:08 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e4003bf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:05:09.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v787: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 204 B/s rd, 0 op/s
Sep 30 18:05:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:05:09.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:05:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:10 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d8002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:10 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:10 compute-0 ceph-mon[73755]: pgmap v787: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 204 B/s rd, 0 op/s
Sep 30 18:05:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:05:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:05:11.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:05:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v788: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 180 B/s rd, 0 op/s
Sep 30 18:05:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:05:11.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:12 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3bc001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:12 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e4003bf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:12 compute-0 ceph-mon[73755]: pgmap v788: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 180 B/s rd, 0 op/s
Sep 30 18:05:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:05:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:05:13.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:05:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:05:13.629Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:05:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v789: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:05:13.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:14 compute-0 nova_compute[265391]: 2025-09-30 18:05:14.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:05:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:14 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d8002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:14 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:14 compute-0 ceph-mon[73755]: pgmap v789: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:05:15.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:05:15 compute-0 podman[293440]: 2025-09-30 18:05:15.514021931 +0000 UTC m=+0.053129954 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=iscsid, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Sep 30 18:05:15 compute-0 podman[293439]: 2025-09-30 18:05:15.520255033 +0000 UTC m=+0.062342533 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:05:15 compute-0 podman[293441]: 2025-09-30 18:05:15.521854515 +0000 UTC m=+0.057521458 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, distribution-scope=public, container_name=openstack_network_exporter, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-type=git, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal)
Sep 30 18:05:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v790: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:05:15.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:16 compute-0 sshd-session[293436]: Invalid user farhan from 14.225.220.107 port 34154
Sep 30 18:05:16 compute-0 sshd-session[293436]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:05:16 compute-0 sshd-session[293436]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107
Sep 30 18:05:16 compute-0 nova_compute[265391]: 2025-09-30 18:05:16.429 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:05:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:16 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3bc001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:16 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e4003bf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:16 compute-0 ceph-mon[73755]: pgmap v790: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:05:17.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:05:17.114Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:05:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v791: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:05:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:05:18.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:05:18 compute-0 sshd-session[293436]: Failed password for invalid user farhan from 14.225.220.107 port 34154 ssh2
Sep 30 18:05:18 compute-0 nova_compute[265391]: 2025-09-30 18:05:18.423 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:05:18 compute-0 nova_compute[265391]: 2025-09-30 18:05:18.423 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:05:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:18 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d8002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:05:18] "GET /metrics HTTP/1.1" 200 46572 "" "Prometheus/2.51.0"
Sep 30 18:05:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:05:18] "GET /metrics HTTP/1.1" 200 46572 "" "Prometheus/2.51.0"
Sep 30 18:05:18 compute-0 nova_compute[265391]: 2025-09-30 18:05:18.938 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:05:18 compute-0 nova_compute[265391]: 2025-09-30 18:05:18.938 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:05:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:18 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:19 compute-0 ceph-mon[73755]: pgmap v791: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:05:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:05:19.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:05:19 compute-0 nova_compute[265391]: 2025-09-30 18:05:19.461 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:05:19 compute-0 nova_compute[265391]: 2025-09-30 18:05:19.462 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:05:19 compute-0 nova_compute[265391]: 2025-09-30 18:05:19.462 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:05:19 compute-0 nova_compute[265391]: 2025-09-30 18:05:19.463 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:05:19 compute-0 nova_compute[265391]: 2025-09-30 18:05:19.463 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:05:19 compute-0 sshd-session[293436]: Received disconnect from 14.225.220.107 port 34154:11: Bye Bye [preauth]
Sep 30 18:05:19 compute-0 sshd-session[293436]: Disconnected from invalid user farhan 14.225.220.107 port 34154 [preauth]
Sep 30 18:05:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v792: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:05:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:05:19 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2351808412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:05:19 compute-0 nova_compute[265391]: 2025-09-30 18:05:19.916 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:05:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:05:20.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:20 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2351808412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:05:20 compute-0 nova_compute[265391]: 2025-09-30 18:05:20.088 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:05:20 compute-0 nova_compute[265391]: 2025-09-30 18:05:20.089 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:05:20 compute-0 nova_compute[265391]: 2025-09-30 18:05:20.121 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.032s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:05:20 compute-0 nova_compute[265391]: 2025-09-30 18:05:20.122 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4769MB free_disk=39.9921875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:05:20 compute-0 nova_compute[265391]: 2025-09-30 18:05:20.122 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:05:20 compute-0 nova_compute[265391]: 2025-09-30 18:05:20.122 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:05:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:05:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:20 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3bc001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:20 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e4003bf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:21 compute-0 ceph-mon[73755]: pgmap v792: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:05:21 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3501653568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:05:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:05:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:05:21.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:05:21 compute-0 nova_compute[265391]: 2025-09-30 18:05:21.240 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:05:21 compute-0 nova_compute[265391]: 2025-09-30 18:05:21.240 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:05:20 up  1:08,  0 user,  load average: 0.70, 0.80, 0.94\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:05:21 compute-0 nova_compute[265391]: 2025-09-30 18:05:21.297 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Refreshing inventories for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:822
Sep 30 18:05:21 compute-0 nova_compute[265391]: 2025-09-30 18:05:21.356 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Updating ProviderTree inventory for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 0, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:786
Sep 30 18:05:21 compute-0 nova_compute[265391]: 2025-09-30 18:05:21.357 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Updating inventory in ProviderTree for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 0, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Sep 30 18:05:21 compute-0 nova_compute[265391]: 2025-09-30 18:05:21.374 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Refreshing aggregate associations for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2, aggregates: None _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:831
Sep 30 18:05:21 compute-0 nova_compute[265391]: 2025-09-30 18:05:21.404 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Refreshing trait associations for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2, traits: COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SOUND_MODEL_SB16,COMPUTE_ARCH_X86_64,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIRTIO_PACKED,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_TIS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOUND_MODEL_ICH9,COMPUTE_SOUND_MODEL_USB,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOUND_MODEL_PCSPK,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ADDRESS_SPACE_EMULATED,COMPUTE_RESCUE_BFV,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_STATELESS_FIRMWARE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_IGB,HW_ARCH_X86_64,COMPUTE_ACCELERATORS,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOUND_MODEL_ES1370,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_CRB,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_VIRTIO_FS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ADDRESS_SPACE_PASSTHROUGH,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_F16C,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOUND_MODEL_ICH6,COMPUTE_SOUND_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SOUND_MODEL_AC97,HW_CPU_X86_SSE42 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:843
Sep 30 18:05:21 compute-0 nova_compute[265391]: 2025-09-30 18:05:21.429 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:05:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v793: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:05:21 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/910263476' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:05:21 compute-0 nova_compute[265391]: 2025-09-30 18:05:21.898 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:05:21 compute-0 nova_compute[265391]: 2025-09-30 18:05:21.903 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:05:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:05:22.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:22 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/910263476' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:05:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:05:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:05:22 compute-0 nova_compute[265391]: 2025-09-30 18:05:22.415 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 0, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:05:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:22 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d8003f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:22 compute-0 nova_compute[265391]: 2025-09-30 18:05:22.925 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:05:22 compute-0 nova_compute[265391]: 2025-09-30 18:05:22.926 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.803s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:05:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:22 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:23 compute-0 ceph-mon[73755]: pgmap v793: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:23 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/31460186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:05:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:05:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:05:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:05:23.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:05:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:05:23.630Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:05:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v794: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:05:24.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:24 compute-0 sshd[192864]: Timeout before authentication for connection from 115.190.39.222 to 38.102.83.202, pid = 291044
Sep 30 18:05:24 compute-0 nova_compute[265391]: 2025-09-30 18:05:24.416 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:05:24 compute-0 nova_compute[265391]: 2025-09-30 18:05:24.416 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:05:24 compute-0 nova_compute[265391]: 2025-09-30 18:05:24.416 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:05:24 compute-0 nova_compute[265391]: 2025-09-30 18:05:24.417 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:05:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:24 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3bc0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:24 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e4003bf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:25 compute-0 ceph-mon[73755]: pgmap v794: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:05:25.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:05:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v795: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:05:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:05:26.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:05:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:26 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d8003f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:26 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:27 compute-0 sudo[293552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:05:27 compute-0 sudo[293552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:05:27 compute-0 sudo[293552]: pam_unix(sudo:session): session closed for user root
Sep 30 18:05:27 compute-0 ceph-mon[73755]: pgmap v795: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:05:27.115Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:05:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:05:27.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v796: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:05:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:05:28.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:05:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:28 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3bc0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:05:28] "GET /metrics HTTP/1.1" 200 46570 "" "Prometheus/2.51.0"
Sep 30 18:05:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:05:28] "GET /metrics HTTP/1.1" 200 46570 "" "Prometheus/2.51.0"
Sep 30 18:05:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:28 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e4003bf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:29 compute-0 ceph-mon[73755]: pgmap v796: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:05:29.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:29 compute-0 podman[276673]: time="2025-09-30T18:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:05:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:05:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10228 "" "Go-http-client/1.1"
Sep 30 18:05:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v797: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:05:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:05:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:05:30.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:05:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:05:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:30 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d8003f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:30 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:05:31.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:31 compute-0 ceph-mon[73755]: pgmap v797: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:05:31 compute-0 openstack_network_exporter[279566]: ERROR   18:05:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:05:31 compute-0 openstack_network_exporter[279566]: ERROR   18:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:05:31 compute-0 openstack_network_exporter[279566]: ERROR   18:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:05:31 compute-0 openstack_network_exporter[279566]: ERROR   18:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:05:31 compute-0 openstack_network_exporter[279566]: ERROR   18:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:05:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v798: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:05:32.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:32 compute-0 ceph-mon[73755]: pgmap v798: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:32 compute-0 podman[293587]: 2025-09-30 18:05:32.558149611 +0000 UTC m=+0.087348484 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Sep 30 18:05:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:32 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:32 compute-0 podman[293586]: 2025-09-30 18:05:32.569957978 +0000 UTC m=+0.101685216 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Sep 30 18:05:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:32 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e4004cf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:05:33.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:05:33.631Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:05:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v799: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:05:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:05:34.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:05:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:34 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:34 compute-0 ceph-mon[73755]: pgmap v799: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:34 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:05:35.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:05:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v800: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:05:36.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:36 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e0001930 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:36 compute-0 ceph-mon[73755]: pgmap v800: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/4252940435' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:05:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/4252940435' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:05:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:36 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e4004cf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:05:37.116Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:05:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:05:37.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:05:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:05:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:05:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:05:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:05:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:05:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:05:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:05:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v801: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:05:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:05:38.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:38 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:05:38] "GET /metrics HTTP/1.1" 200 46574 "" "Prometheus/2.51.0"
Sep 30 18:05:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:05:38] "GET /metrics HTTP/1.1" 200 46574 "" "Prometheus/2.51.0"
Sep 30 18:05:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:38 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c80038a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:38 compute-0 ceph-mon[73755]: pgmap v801: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:05:39.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:39 compute-0 podman[293645]: 2025-09-30 18:05:39.548671023 +0000 UTC m=+0.082898678 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS)
Sep 30 18:05:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v802: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:05:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:05:40.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:05:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:40 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e0001930 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:40 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e4004cf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:40 compute-0 ceph-mon[73755]: pgmap v802: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:05:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:05:41.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v803: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:05:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:05:42.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:05:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:42 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0001f70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:42 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c80038c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:43 compute-0 ceph-mon[73755]: pgmap v803: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:05:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:05:43.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:05:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:05:43.631Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:05:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v804: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:05:44.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:44 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e0001930 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:44 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e4004cf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:45 compute-0 ceph-mon[73755]: pgmap v804: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:05:45.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:05:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v805: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:05:46.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:46 compute-0 podman[293676]: 2025-09-30 18:05:46.526853293 +0000 UTC m=+0.058770631 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=openstack_network_exporter, managed_by=edpm_ansible, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.component=ubi9-minimal-container, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., name=ubi9-minimal)
Sep 30 18:05:46 compute-0 podman[293674]: 2025-09-30 18:05:46.546251917 +0000 UTC m=+0.075815053 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Sep 30 18:05:46 compute-0 podman[293675]: 2025-09-30 18:05:46.549468151 +0000 UTC m=+0.073964045 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=iscsid)
Sep 30 18:05:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:46 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0001f70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:46 compute-0 sshd-session[293670]: Invalid user farhan from 45.252.249.158 port 47686
Sep 30 18:05:46 compute-0 sshd-session[293670]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:05:46 compute-0 sshd-session[293670]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158
Sep 30 18:05:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:46 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c80038e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:47 compute-0 ceph-mon[73755]: pgmap v805: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:05:47.117Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:05:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:05:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:05:47.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:05:47 compute-0 sudo[293735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:05:47 compute-0 sudo[293735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:05:47 compute-0 sudo[293735]: pam_unix(sudo:session): session closed for user root
Sep 30 18:05:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v806: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:05:48.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:48 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e0002da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:48 compute-0 sshd-session[293670]: Failed password for invalid user farhan from 45.252.249.158 port 47686 ssh2
Sep 30 18:05:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:05:48] "GET /metrics HTTP/1.1" 200 46574 "" "Prometheus/2.51.0"
Sep 30 18:05:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:05:48] "GET /metrics HTTP/1.1" 200 46574 "" "Prometheus/2.51.0"
Sep 30 18:05:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:48 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e4004cf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:49 compute-0 ceph-mon[73755]: pgmap v806: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:05:49.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v807: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:05:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:05:50.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:50 compute-0 sshd-session[293670]: Received disconnect from 45.252.249.158 port 47686:11: Bye Bye [preauth]
Sep 30 18:05:50 compute-0 sshd-session[293670]: Disconnected from invalid user farhan 45.252.249.158 port 47686 [preauth]
Sep 30 18:05:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:05:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:50 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0001f70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:50 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8003900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:51 compute-0 ceph-mon[73755]: pgmap v807: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:05:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:05:51.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v808: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:05:52.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:05:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:05:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:52 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e0002da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:52 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e4004cf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:53 compute-0 ceph-mon[73755]: pgmap v808: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:53 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:05:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:05:53.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:05:53.633Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:05:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v809: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:05:54.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:05:54.268 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:05:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:05:54.269 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:05:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:05:54.269 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:05:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:54 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003610 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:54 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8003920 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:55 compute-0 ceph-mon[73755]: pgmap v809: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:05:55.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:05:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v810: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:05:56.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:56 compute-0 ceph-mon[73755]: pgmap v810: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:56 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e0002da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:56 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e4004cf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:05:57.118Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:05:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:05:57.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:05:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2696418884' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:05:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:05:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2696418884' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:05:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2696418884' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:05:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2696418884' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:05:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v811: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:05:58.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:58 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003610 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:58 compute-0 ceph-mon[73755]: pgmap v811: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:05:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:05:58] "GET /metrics HTTP/1.1" 200 46587 "" "Prometheus/2.51.0"
Sep 30 18:05:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:05:58] "GET /metrics HTTP/1.1" 200 46587 "" "Prometheus/2.51.0"
Sep 30 18:05:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:05:58 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8003940 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:05:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:05:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:05:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:05:59.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:05:59 compute-0 podman[276673]: time="2025-09-30T18:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:05:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:05:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10244 "" "Go-http-client/1.1"
Sep 30 18:05:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v812: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:06:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:06:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:06:00.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:06:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:06:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:00 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e0003ea0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:00 compute-0 ceph-mon[73755]: pgmap v812: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:06:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:00 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e4004cf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:06:01.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:01 compute-0 openstack_network_exporter[279566]: ERROR   18:06:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:06:01 compute-0 openstack_network_exporter[279566]: ERROR   18:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:06:01 compute-0 openstack_network_exporter[279566]: ERROR   18:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:06:01 compute-0 openstack_network_exporter[279566]: ERROR   18:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:06:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:06:01 compute-0 openstack_network_exporter[279566]: ERROR   18:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:06:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:06:01 compute-0 anacron[4242]: Job `cron.weekly' started
Sep 30 18:06:01 compute-0 anacron[4242]: Job `cron.weekly' terminated
Sep 30 18:06:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v813: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:06:02.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:02 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003610 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:02 compute-0 sudo[293781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:06:02 compute-0 sudo[293781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:06:02 compute-0 sudo[293781]: pam_unix(sudo:session): session closed for user root
Sep 30 18:06:02 compute-0 sudo[293821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:06:02 compute-0 sudo[293821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:06:02 compute-0 podman[293806]: 2025-09-30 18:06:02.868936716 +0000 UTC m=+0.101412019 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Sep 30 18:06:02 compute-0 podman[293805]: 2025-09-30 18:06:02.897270634 +0000 UTC m=+0.127305764 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=watcher_latest, config_id=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Sep 30 18:06:02 compute-0 ceph-mon[73755]: pgmap v813: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:03 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8003960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:06:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:06:03.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:06:03 compute-0 sudo[293821]: pam_unix(sudo:session): session closed for user root
Sep 30 18:06:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:06:03 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:06:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:06:03 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:06:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:06:03 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:06:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:06:03 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:06:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:06:03 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:06:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:06:03 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:06:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:06:03 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:06:03 compute-0 sudo[293915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:06:03 compute-0 sudo[293915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:06:03 compute-0 sudo[293915]: pam_unix(sudo:session): session closed for user root
Sep 30 18:06:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:06:03.634Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:06:03 compute-0 sudo[293940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:06:03 compute-0 sudo[293940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:06:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v814: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:03 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:06:03 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:06:03 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:06:03 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:06:03 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:06:03 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:06:03 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:06:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:06:04.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:04 compute-0 podman[294007]: 2025-09-30 18:06:04.077606613 +0000 UTC m=+0.055300570 container create 53ec6eccd8cdde6af19cfa9e2b7b643ecf3556694082180ec37898d239f6aaa4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_villani, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:06:04 compute-0 systemd[1]: Started libpod-conmon-53ec6eccd8cdde6af19cfa9e2b7b643ecf3556694082180ec37898d239f6aaa4.scope.
Sep 30 18:06:04 compute-0 podman[294007]: 2025-09-30 18:06:04.048383063 +0000 UTC m=+0.026077120 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:06:04 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:06:04 compute-0 podman[294007]: 2025-09-30 18:06:04.170070489 +0000 UTC m=+0.147764456 container init 53ec6eccd8cdde6af19cfa9e2b7b643ecf3556694082180ec37898d239f6aaa4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_villani, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Sep 30 18:06:04 compute-0 podman[294007]: 2025-09-30 18:06:04.178995241 +0000 UTC m=+0.156689188 container start 53ec6eccd8cdde6af19cfa9e2b7b643ecf3556694082180ec37898d239f6aaa4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_villani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:06:04 compute-0 podman[294007]: 2025-09-30 18:06:04.182304587 +0000 UTC m=+0.159998554 container attach 53ec6eccd8cdde6af19cfa9e2b7b643ecf3556694082180ec37898d239f6aaa4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:06:04 compute-0 compassionate_villani[294024]: 167 167
Sep 30 18:06:04 compute-0 systemd[1]: libpod-53ec6eccd8cdde6af19cfa9e2b7b643ecf3556694082180ec37898d239f6aaa4.scope: Deactivated successfully.
Sep 30 18:06:04 compute-0 podman[294007]: 2025-09-30 18:06:04.185443299 +0000 UTC m=+0.163137286 container died 53ec6eccd8cdde6af19cfa9e2b7b643ecf3556694082180ec37898d239f6aaa4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_villani, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:06:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-10a40779e5430cb5d31bbb87f333212b9b68e10295e85cbd889d4b267bc228c1-merged.mount: Deactivated successfully.
Sep 30 18:06:04 compute-0 podman[294007]: 2025-09-30 18:06:04.227018391 +0000 UTC m=+0.204712328 container remove 53ec6eccd8cdde6af19cfa9e2b7b643ecf3556694082180ec37898d239f6aaa4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_villani, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:06:04 compute-0 systemd[1]: libpod-conmon-53ec6eccd8cdde6af19cfa9e2b7b643ecf3556694082180ec37898d239f6aaa4.scope: Deactivated successfully.
Sep 30 18:06:04 compute-0 podman[294050]: 2025-09-30 18:06:04.419856398 +0000 UTC m=+0.066235064 container create ff02d82c3ecbead72902aa9386a8bc8f7256591aca00bcb695d2b63b2f359210 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_banach, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Sep 30 18:06:04 compute-0 systemd[1]: Started libpod-conmon-ff02d82c3ecbead72902aa9386a8bc8f7256591aca00bcb695d2b63b2f359210.scope.
Sep 30 18:06:04 compute-0 podman[294050]: 2025-09-30 18:06:04.392518507 +0000 UTC m=+0.038897173 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:06:04 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:06:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/578e329c2559b07be419501cab2d3dd125aa8f7c043547f9a2af346649989b6c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:06:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/578e329c2559b07be419501cab2d3dd125aa8f7c043547f9a2af346649989b6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:06:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/578e329c2559b07be419501cab2d3dd125aa8f7c043547f9a2af346649989b6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:06:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/578e329c2559b07be419501cab2d3dd125aa8f7c043547f9a2af346649989b6c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:06:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/578e329c2559b07be419501cab2d3dd125aa8f7c043547f9a2af346649989b6c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:06:04 compute-0 podman[294050]: 2025-09-30 18:06:04.506234875 +0000 UTC m=+0.152613521 container init ff02d82c3ecbead72902aa9386a8bc8f7256591aca00bcb695d2b63b2f359210 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True)
Sep 30 18:06:04 compute-0 podman[294050]: 2025-09-30 18:06:04.513966467 +0000 UTC m=+0.160345093 container start ff02d82c3ecbead72902aa9386a8bc8f7256591aca00bcb695d2b63b2f359210 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_banach, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:06:04 compute-0 podman[294050]: 2025-09-30 18:06:04.517989241 +0000 UTC m=+0.164367897 container attach ff02d82c3ecbead72902aa9386a8bc8f7256591aca00bcb695d2b63b2f359210 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_banach, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:06:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:04 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8003960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:04 compute-0 stoic_banach[294066]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:06:04 compute-0 stoic_banach[294066]: --> All data devices are unavailable
Sep 30 18:06:04 compute-0 systemd[1]: libpod-ff02d82c3ecbead72902aa9386a8bc8f7256591aca00bcb695d2b63b2f359210.scope: Deactivated successfully.
Sep 30 18:06:04 compute-0 podman[294081]: 2025-09-30 18:06:04.848770518 +0000 UTC m=+0.029728495 container died ff02d82c3ecbead72902aa9386a8bc8f7256591aca00bcb695d2b63b2f359210 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_banach, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:06:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-578e329c2559b07be419501cab2d3dd125aa8f7c043547f9a2af346649989b6c-merged.mount: Deactivated successfully.
Sep 30 18:06:04 compute-0 podman[294081]: 2025-09-30 18:06:04.899665052 +0000 UTC m=+0.080623029 container remove ff02d82c3ecbead72902aa9386a8bc8f7256591aca00bcb695d2b63b2f359210 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_banach, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:06:04 compute-0 systemd[1]: libpod-conmon-ff02d82c3ecbead72902aa9386a8bc8f7256591aca00bcb695d2b63b2f359210.scope: Deactivated successfully.
Sep 30 18:06:04 compute-0 sudo[293940]: pam_unix(sudo:session): session closed for user root
Sep 30 18:06:04 compute-0 ceph-mon[73755]: pgmap v814: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:05 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e4004cf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:05 compute-0 sudo[294096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:06:05 compute-0 sudo[294096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:06:05 compute-0 sudo[294096]: pam_unix(sudo:session): session closed for user root
Sep 30 18:06:05 compute-0 sudo[294121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:06:05 compute-0 sudo[294121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:06:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:06:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:06:05.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:06:05 compute-0 podman[294188]: 2025-09-30 18:06:05.488278827 +0000 UTC m=+0.042777204 container create cfc8927bc8470c6be1769b887548e1796ce7c83ec9af216d695c257264291302 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_shirley, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:06:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:06:05 compute-0 systemd[1]: Started libpod-conmon-cfc8927bc8470c6be1769b887548e1796ce7c83ec9af216d695c257264291302.scope.
Sep 30 18:06:05 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:06:05 compute-0 podman[294188]: 2025-09-30 18:06:05.469264362 +0000 UTC m=+0.023762789 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:06:05 compute-0 podman[294188]: 2025-09-30 18:06:05.570581038 +0000 UTC m=+0.125079445 container init cfc8927bc8470c6be1769b887548e1796ce7c83ec9af216d695c257264291302 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_shirley, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 18:06:05 compute-0 podman[294188]: 2025-09-30 18:06:05.576349258 +0000 UTC m=+0.130847635 container start cfc8927bc8470c6be1769b887548e1796ce7c83ec9af216d695c257264291302 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_shirley, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:06:05 compute-0 podman[294188]: 2025-09-30 18:06:05.580092286 +0000 UTC m=+0.134590663 container attach cfc8927bc8470c6be1769b887548e1796ce7c83ec9af216d695c257264291302 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_shirley, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Sep 30 18:06:05 compute-0 goofy_shirley[294205]: 167 167
Sep 30 18:06:05 compute-0 systemd[1]: libpod-cfc8927bc8470c6be1769b887548e1796ce7c83ec9af216d695c257264291302.scope: Deactivated successfully.
Sep 30 18:06:05 compute-0 podman[294188]: 2025-09-30 18:06:05.58332555 +0000 UTC m=+0.137823927 container died cfc8927bc8470c6be1769b887548e1796ce7c83ec9af216d695c257264291302 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_shirley, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:06:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-94c42af92bc8041cbba1cead2f89872a5aefb3fb21177f8ae5d679b41029fba1-merged.mount: Deactivated successfully.
Sep 30 18:06:05 compute-0 podman[294188]: 2025-09-30 18:06:05.62099362 +0000 UTC m=+0.175491997 container remove cfc8927bc8470c6be1769b887548e1796ce7c83ec9af216d695c257264291302 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Sep 30 18:06:05 compute-0 systemd[1]: libpod-conmon-cfc8927bc8470c6be1769b887548e1796ce7c83ec9af216d695c257264291302.scope: Deactivated successfully.
Sep 30 18:06:05 compute-0 podman[294230]: 2025-09-30 18:06:05.801711552 +0000 UTC m=+0.041279675 container create b5055aae597d28544c1a7a3c65719fd6754eadd03fa2a241ae713efd770a8d71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Sep 30 18:06:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v815: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:05 compute-0 systemd[1]: Started libpod-conmon-b5055aae597d28544c1a7a3c65719fd6754eadd03fa2a241ae713efd770a8d71.scope.
Sep 30 18:06:05 compute-0 podman[294230]: 2025-09-30 18:06:05.780998153 +0000 UTC m=+0.020566256 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:06:05 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:06:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f02672d3cbad860ccb7a0e05e9438f4421fb02c5ea0e4a41c996f5375c618679/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:06:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f02672d3cbad860ccb7a0e05e9438f4421fb02c5ea0e4a41c996f5375c618679/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:06:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f02672d3cbad860ccb7a0e05e9438f4421fb02c5ea0e4a41c996f5375c618679/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:06:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f02672d3cbad860ccb7a0e05e9438f4421fb02c5ea0e4a41c996f5375c618679/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:06:05 compute-0 podman[294230]: 2025-09-30 18:06:05.917885284 +0000 UTC m=+0.157453437 container init b5055aae597d28544c1a7a3c65719fd6754eadd03fa2a241ae713efd770a8d71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:06:05 compute-0 podman[294230]: 2025-09-30 18:06:05.932407453 +0000 UTC m=+0.171975576 container start b5055aae597d28544c1a7a3c65719fd6754eadd03fa2a241ae713efd770a8d71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Sep 30 18:06:05 compute-0 podman[294230]: 2025-09-30 18:06:05.936719755 +0000 UTC m=+0.176287888 container attach b5055aae597d28544c1a7a3c65719fd6754eadd03fa2a241ae713efd770a8d71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True)
Sep 30 18:06:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:06:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:06:06.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:06:06 compute-0 vigilant_varahamihira[294247]: {
Sep 30 18:06:06 compute-0 vigilant_varahamihira[294247]:     "0": [
Sep 30 18:06:06 compute-0 vigilant_varahamihira[294247]:         {
Sep 30 18:06:06 compute-0 vigilant_varahamihira[294247]:             "devices": [
Sep 30 18:06:06 compute-0 vigilant_varahamihira[294247]:                 "/dev/loop3"
Sep 30 18:06:06 compute-0 vigilant_varahamihira[294247]:             ],
Sep 30 18:06:06 compute-0 vigilant_varahamihira[294247]:             "lv_name": "ceph_lv0",
Sep 30 18:06:06 compute-0 vigilant_varahamihira[294247]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:06:06 compute-0 vigilant_varahamihira[294247]:             "lv_size": "21470642176",
Sep 30 18:06:06 compute-0 vigilant_varahamihira[294247]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:06:06 compute-0 vigilant_varahamihira[294247]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:06:06 compute-0 vigilant_varahamihira[294247]:             "name": "ceph_lv0",
Sep 30 18:06:06 compute-0 vigilant_varahamihira[294247]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:06:06 compute-0 vigilant_varahamihira[294247]:             "tags": {
Sep 30 18:06:06 compute-0 vigilant_varahamihira[294247]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:06:06 compute-0 vigilant_varahamihira[294247]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:06:06 compute-0 vigilant_varahamihira[294247]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:06:06 compute-0 vigilant_varahamihira[294247]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:06:06 compute-0 vigilant_varahamihira[294247]:                 "ceph.cluster_name": "ceph",
Sep 30 18:06:06 compute-0 vigilant_varahamihira[294247]:                 "ceph.crush_device_class": "",
Sep 30 18:06:06 compute-0 vigilant_varahamihira[294247]:                 "ceph.encrypted": "0",
Sep 30 18:06:06 compute-0 vigilant_varahamihira[294247]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:06:06 compute-0 vigilant_varahamihira[294247]:                 "ceph.osd_id": "0",
Sep 30 18:06:06 compute-0 vigilant_varahamihira[294247]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:06:06 compute-0 vigilant_varahamihira[294247]:                 "ceph.type": "block",
Sep 30 18:06:06 compute-0 vigilant_varahamihira[294247]:                 "ceph.vdo": "0",
Sep 30 18:06:06 compute-0 vigilant_varahamihira[294247]:                 "ceph.with_tpm": "0"
Sep 30 18:06:06 compute-0 vigilant_varahamihira[294247]:             },
Sep 30 18:06:06 compute-0 vigilant_varahamihira[294247]:             "type": "block",
Sep 30 18:06:06 compute-0 vigilant_varahamihira[294247]:             "vg_name": "ceph_vg0"
Sep 30 18:06:06 compute-0 vigilant_varahamihira[294247]:         }
Sep 30 18:06:06 compute-0 vigilant_varahamihira[294247]:     ]
Sep 30 18:06:06 compute-0 vigilant_varahamihira[294247]: }
Sep 30 18:06:06 compute-0 systemd[1]: libpod-b5055aae597d28544c1a7a3c65719fd6754eadd03fa2a241ae713efd770a8d71.scope: Deactivated successfully.
Sep 30 18:06:06 compute-0 podman[294230]: 2025-09-30 18:06:06.259880393 +0000 UTC m=+0.499448516 container died b5055aae597d28544c1a7a3c65719fd6754eadd03fa2a241ae713efd770a8d71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_varahamihira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Sep 30 18:06:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-f02672d3cbad860ccb7a0e05e9438f4421fb02c5ea0e4a41c996f5375c618679-merged.mount: Deactivated successfully.
Sep 30 18:06:06 compute-0 podman[294230]: 2025-09-30 18:06:06.300292184 +0000 UTC m=+0.539860267 container remove b5055aae597d28544c1a7a3c65719fd6754eadd03fa2a241ae713efd770a8d71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325)
Sep 30 18:06:06 compute-0 systemd[1]: libpod-conmon-b5055aae597d28544c1a7a3c65719fd6754eadd03fa2a241ae713efd770a8d71.scope: Deactivated successfully.
Sep 30 18:06:06 compute-0 sudo[294121]: pam_unix(sudo:session): session closed for user root
Sep 30 18:06:06 compute-0 sudo[294270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:06:06 compute-0 sudo[294270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:06:06 compute-0 sudo[294270]: pam_unix(sudo:session): session closed for user root
Sep 30 18:06:06 compute-0 sudo[294295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:06:06 compute-0 sudo[294295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:06:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:06 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d8000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:06 compute-0 podman[294362]: 2025-09-30 18:06:06.94353636 +0000 UTC m=+0.039385315 container create c7bf8ac9de1e4558944022b71671513548882c3fa40580149c24cca37bde0e14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_mccarthy, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:06:06 compute-0 ceph-mon[73755]: pgmap v815: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:06 compute-0 systemd[1]: Started libpod-conmon-c7bf8ac9de1e4558944022b71671513548882c3fa40580149c24cca37bde0e14.scope.
Sep 30 18:06:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:07 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3bc001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:07 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:06:07 compute-0 podman[294362]: 2025-09-30 18:06:06.925730747 +0000 UTC m=+0.021579732 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:06:07 compute-0 podman[294362]: 2025-09-30 18:06:07.024944118 +0000 UTC m=+0.120793113 container init c7bf8ac9de1e4558944022b71671513548882c3fa40580149c24cca37bde0e14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:06:07 compute-0 podman[294362]: 2025-09-30 18:06:07.031417227 +0000 UTC m=+0.127266192 container start c7bf8ac9de1e4558944022b71671513548882c3fa40580149c24cca37bde0e14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Sep 30 18:06:07 compute-0 podman[294362]: 2025-09-30 18:06:07.035082982 +0000 UTC m=+0.130931977 container attach c7bf8ac9de1e4558944022b71671513548882c3fa40580149c24cca37bde0e14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:06:07 compute-0 vibrant_mccarthy[294378]: 167 167
Sep 30 18:06:07 compute-0 systemd[1]: libpod-c7bf8ac9de1e4558944022b71671513548882c3fa40580149c24cca37bde0e14.scope: Deactivated successfully.
Sep 30 18:06:07 compute-0 podman[294362]: 2025-09-30 18:06:07.036336425 +0000 UTC m=+0.132185420 container died c7bf8ac9de1e4558944022b71671513548882c3fa40580149c24cca37bde0e14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_mccarthy, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 18:06:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-317211d57a32e786240f610c6ad6b2798b81e506f517d1e8510409220339dbae-merged.mount: Deactivated successfully.
Sep 30 18:06:07 compute-0 podman[294362]: 2025-09-30 18:06:07.080226847 +0000 UTC m=+0.176075812 container remove c7bf8ac9de1e4558944022b71671513548882c3fa40580149c24cca37bde0e14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_mccarthy, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:06:07 compute-0 systemd[1]: libpod-conmon-c7bf8ac9de1e4558944022b71671513548882c3fa40580149c24cca37bde0e14.scope: Deactivated successfully.
Sep 30 18:06:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:06:07.119Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:06:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:06:07.119Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:06:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:06:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:06:07.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:06:07 compute-0 podman[294401]: 2025-09-30 18:06:07.253604328 +0000 UTC m=+0.047956309 container create 06798d575e52e67a7688885c79bcc78151cf2b0128255311c7b15c00e4168bbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_kilby, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Sep 30 18:06:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:06:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:06:07 compute-0 systemd[1]: Started libpod-conmon-06798d575e52e67a7688885c79bcc78151cf2b0128255311c7b15c00e4168bbf.scope.
Sep 30 18:06:07 compute-0 podman[294401]: 2025-09-30 18:06:07.2329439 +0000 UTC m=+0.027295911 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:06:07 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:06:07 compute-0 sudo[294413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:06:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23cd0e04f945e79df1432c5dd37a272dbb2ef4270f2efcc19816b87c81dea4f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:06:07 compute-0 sudo[294413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:06:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23cd0e04f945e79df1432c5dd37a272dbb2ef4270f2efcc19816b87c81dea4f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:06:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23cd0e04f945e79df1432c5dd37a272dbb2ef4270f2efcc19816b87c81dea4f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:06:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23cd0e04f945e79df1432c5dd37a272dbb2ef4270f2efcc19816b87c81dea4f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:06:07 compute-0 sudo[294413]: pam_unix(sudo:session): session closed for user root
Sep 30 18:06:07 compute-0 podman[294401]: 2025-09-30 18:06:07.35477468 +0000 UTC m=+0.149126661 container init 06798d575e52e67a7688885c79bcc78151cf2b0128255311c7b15c00e4168bbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_kilby, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:06:07 compute-0 podman[294401]: 2025-09-30 18:06:07.365498079 +0000 UTC m=+0.159850060 container start 06798d575e52e67a7688885c79bcc78151cf2b0128255311c7b15c00e4168bbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:06:07 compute-0 podman[294401]: 2025-09-30 18:06:07.36936978 +0000 UTC m=+0.163721761 container attach 06798d575e52e67a7688885c79bcc78151cf2b0128255311c7b15c00e4168bbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_kilby, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:06:07
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['.mgr', 'backups', '.rgw.root', 'vms', 'default.rgw.meta', 'volumes', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', '.nfs']
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:06:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v816: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:06:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:06:08.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:08 compute-0 lvm[294517]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:06:08 compute-0 lvm[294517]: VG ceph_vg0 finished
Sep 30 18:06:08 compute-0 elated_kilby[294438]: {}
Sep 30 18:06:08 compute-0 systemd[1]: libpod-06798d575e52e67a7688885c79bcc78151cf2b0128255311c7b15c00e4168bbf.scope: Deactivated successfully.
Sep 30 18:06:08 compute-0 systemd[1]: libpod-06798d575e52e67a7688885c79bcc78151cf2b0128255311c7b15c00e4168bbf.scope: Consumed 1.205s CPU time.
Sep 30 18:06:08 compute-0 podman[294520]: 2025-09-30 18:06:08.2099785 +0000 UTC m=+0.031052159 container died 06798d575e52e67a7688885c79bcc78151cf2b0128255311c7b15c00e4168bbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_kilby, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:06:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-23cd0e04f945e79df1432c5dd37a272dbb2ef4270f2efcc19816b87c81dea4f3-merged.mount: Deactivated successfully.
Sep 30 18:06:08 compute-0 podman[294520]: 2025-09-30 18:06:08.255586837 +0000 UTC m=+0.076660446 container remove 06798d575e52e67a7688885c79bcc78151cf2b0128255311c7b15c00e4168bbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:06:08 compute-0 systemd[1]: libpod-conmon-06798d575e52e67a7688885c79bcc78151cf2b0128255311c7b15c00e4168bbf.scope: Deactivated successfully.
Sep 30 18:06:08 compute-0 sudo[294295]: pam_unix(sudo:session): session closed for user root
Sep 30 18:06:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:06:08 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:06:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:06:08 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:06:08 compute-0 sudo[294535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:06:08 compute-0 sudo[294535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:06:08 compute-0 sudo[294535]: pam_unix(sudo:session): session closed for user root
Sep 30 18:06:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:08 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8003b00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:06:08] "GET /metrics HTTP/1.1" 200 46587 "" "Prometheus/2.51.0"
Sep 30 18:06:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:06:08] "GET /metrics HTTP/1.1" 200 46587 "" "Prometheus/2.51.0"
Sep 30 18:06:09 compute-0 ceph-mon[73755]: pgmap v816: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:09 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:06:09 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:06:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:09 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e4004cf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:06:09.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v817: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:06:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:06:10.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:06:10 compute-0 podman[294562]: 2025-09-30 18:06:10.520292681 +0000 UTC m=+0.057852076 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:06:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:10 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d8000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:11 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3bc001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:11 compute-0 ceph-mon[73755]: pgmap v817: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:06:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:06:11.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v818: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:06:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:06:12.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:06:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:12 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8003b00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:13 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e4004cf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:13 compute-0 ceph-mon[73755]: pgmap v818: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:06:13.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:06:13.635Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:06:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v819: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:06:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:06:14.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:06:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:14 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d8000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:15 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3bc001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:15 compute-0 ceph-mon[73755]: pgmap v819: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:06:15.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:15 compute-0 nova_compute[265391]: 2025-09-30 18:06:15.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:06:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:06:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v820: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:06:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:06:16.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:06:16 compute-0 nova_compute[265391]: 2025-09-30 18:06:16.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:06:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:16 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8003b00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:17 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e4004cf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:17 compute-0 ceph-mon[73755]: pgmap v820: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:06:17.120Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:06:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:06:17.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:17 compute-0 podman[294592]: 2025-09-30 18:06:17.521649594 +0000 UTC m=+0.054566431 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vendor=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, architecture=x86_64, name=ubi9-minimal, release=1755695350, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, vcs-type=git)
Sep 30 18:06:17 compute-0 podman[294590]: 2025-09-30 18:06:17.537098106 +0000 UTC m=+0.077666572 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.build-date=20250930, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Sep 30 18:06:17 compute-0 podman[294591]: 2025-09-30 18:06:17.546922701 +0000 UTC m=+0.079412867 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=iscsid, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Sep 30 18:06:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v821: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:06:18.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:18 compute-0 nova_compute[265391]: 2025-09-30 18:06:18.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:06:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:18 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d8002ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:06:18] "GET /metrics HTTP/1.1" 200 46587 "" "Prometheus/2.51.0"
Sep 30 18:06:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:06:18] "GET /metrics HTTP/1.1" 200 46587 "" "Prometheus/2.51.0"
Sep 30 18:06:18 compute-0 nova_compute[265391]: 2025-09-30 18:06:18.938 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:06:18 compute-0 nova_compute[265391]: 2025-09-30 18:06:18.938 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:06:18 compute-0 nova_compute[265391]: 2025-09-30 18:06:18.938 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:06:18 compute-0 nova_compute[265391]: 2025-09-30 18:06:18.939 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:06:18 compute-0 nova_compute[265391]: 2025-09-30 18:06:18.939 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:06:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:19 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3bc002ae0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:19 compute-0 ceph-mon[73755]: pgmap v821: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:06:19.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:06:19 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4195776977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:06:19 compute-0 nova_compute[265391]: 2025-09-30 18:06:19.430 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
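The resource audit above is shelling out to ceph df (run through oslo_concurrency.processutils) to size the Ceph-backed disk inventory. A standalone sketch of the same call, useful for checking what nova sees; the stats field names are assumed from typical ceph df JSON output, not taken from this log:

    import json
    import subprocess

    cmd = [
        "ceph", "df", "--format=json",
        "--id", "openstack",
        "--conf", "/etc/ceph/ceph.conf",
    ]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    report = json.loads(out)

    # "stats"/"total_bytes"/"total_avail_bytes" are assumed field names from
    # common ceph df JSON output; adjust if the local output differs.
    stats = report.get("stats", {})
    print("total bytes:", stats.get("total_bytes"),
          "avail bytes:", stats.get("total_avail_bytes"))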
Sep 30 18:06:19 compute-0 nova_compute[265391]: 2025-09-30 18:06:19.580 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:06:19 compute-0 nova_compute[265391]: 2025-09-30 18:06:19.581 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:06:19 compute-0 nova_compute[265391]: 2025-09-30 18:06:19.597 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.016s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:06:19 compute-0 nova_compute[265391]: 2025-09-30 18:06:19.598 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4749MB free_disk=39.9921875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:06:19 compute-0 nova_compute[265391]: 2025-09-30 18:06:19.598 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:06:19 compute-0 nova_compute[265391]: 2025-09-30 18:06:19.598 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:06:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v822: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:06:20 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/4195776977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:06:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:06:20.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:06:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:20 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8003b00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:20 compute-0 nova_compute[265391]: 2025-09-30 18:06:20.651 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:06:20 compute-0 nova_compute[265391]: 2025-09-30 18:06:20.652 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:06:19 up  1:09,  0 user,  load average: 0.59, 0.73, 0.90\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:06:20 compute-0 nova_compute[265391]: 2025-09-30 18:06:20.671 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:06:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:21 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3bc002ae0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:06:21 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1238874090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:06:21 compute-0 ceph-mon[73755]: pgmap v822: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:06:21 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1238874090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:06:21 compute-0 nova_compute[265391]: 2025-09-30 18:06:21.128 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:06:21 compute-0 nova_compute[265391]: 2025-09-30 18:06:21.133 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:06:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:06:21.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:21 compute-0 nova_compute[265391]: 2025-09-30 18:06:21.640 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 0, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
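This inventory record is what placement schedules against. As a quick sanity check, the usable amount per resource class is normally (total - reserved) * allocation_ratio; the sketch below applies that rule (assumed standard placement behaviour, not read from this log) to the values in the record:

    # Values copied from the inventory data logged above.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 39, "reserved": 0, "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        # Assumed placement capacity rule: (total - reserved) * allocation_ratio
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {usable:g} schedulable")
    # -> VCPU: 32, MEMORY_MB: 7167, DISK_GB: 35.1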
Sep 30 18:06:21 compute-0 unix_chkpwd[294700]: password check failed for user (root)
Sep 30 18:06:21 compute-0 sshd-session[294674]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107  user=root
Sep 30 18:06:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v823: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:06:22.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:22 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1712915588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:06:22 compute-0 nova_compute[265391]: 2025-09-30 18:06:22.150 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:06:22 compute-0 nova_compute[265391]: 2025-09-30 18:06:22.151 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.552s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:06:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:06:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:06:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:22 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d8002ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:23 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e4004cf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:23 compute-0 nova_compute[265391]: 2025-09-30 18:06:23.147 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:06:23 compute-0 nova_compute[265391]: 2025-09-30 18:06:23.148 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:06:23 compute-0 nova_compute[265391]: 2025-09-30 18:06:23.148 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:06:23 compute-0 nova_compute[265391]: 2025-09-30 18:06:23.148 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:06:23 compute-0 nova_compute[265391]: 2025-09-30 18:06:23.148 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:06:23 compute-0 nova_compute[265391]: 2025-09-30 18:06:23.148 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:06:23 compute-0 ceph-mon[73755]: pgmap v823: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:06:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:06:23.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:06:23.636Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:06:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v824: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:24 compute-0 sshd-session[294674]: Failed password for root from 14.225.220.107 port 44294 ssh2
Sep 30 18:06:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:06:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:06:24.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:06:24 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2948922853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:06:24 compute-0 ceph-mon[73755]: pgmap v824: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:24 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8003b00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:24 compute-0 sshd-session[294674]: Received disconnect from 14.225.220.107 port 44294:11: Bye Bye [preauth]
Sep 30 18:06:24 compute-0 sshd-session[294674]: Disconnected from authenticating user root 14.225.220.107 port 44294 [preauth]
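The failed root login from 14.225.220.107 above, like the invalid-user attempt a few seconds later, is ordinary SSH brute-force noise. A small sketch for tallying such attempts per source address from journal lines in this format; the input file name is illustrative:

    import re
    from collections import Counter

    failed = re.compile(
        r"sshd-session\[\d+\]: Failed password for (?:invalid user )?(\S+) "
        r"from (\d+\.\d+\.\d+\.\d+) port \d+"
    )

    hits = Counter()
    with open("compute-0-journal.log") as fh:   # illustrative path
        for line in fh:
            m = failed.search(line)
            if m:
                user, addr = m.groups()
                hits[(addr, user)] += 1

    for (addr, user), count in hits.most_common():
        print(f"{count:4d}  {addr:<16} {user}")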
Sep 30 18:06:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:25 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d8002ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:06:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:06:25.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:06:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:06:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v825: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:06:26.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:26 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d8002ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:26 compute-0 ceph-mon[73755]: pgmap v825: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:27 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e4004cf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:06:27.121Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
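Alertmanager keeps timing out against the Ceph dashboard webhook on compute-1 (context deadline exceeded), which usually points at the receiver being unreachable rather than at alertmanager itself. A minimal reachability probe against the host and port taken from the failing URL; this only tests the TCP path, not the receiver endpoint itself:

    import socket

    # Host and port from the webhook URL in the dispatcher error above.
    host, port = "compute-1.ctlplane.example.com", 8443

    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"TCP connect to {host}:{port} succeeded; check the receiver itself")
    except OSError as exc:
        print(f"TCP connect to {host}:{port} failed: {exc}")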
Sep 30 18:06:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:06:27.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:27 compute-0 sudo[294705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:06:27 compute-0 sudo[294705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:06:27 compute-0 sudo[294705]: pam_unix(sudo:session): session closed for user root
Sep 30 18:06:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v826: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:27 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Sep 30 18:06:27 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:06:27.945996) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 18:06:27 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Sep 30 18:06:27 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255587946033, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1629, "num_deletes": 251, "total_data_size": 2897629, "memory_usage": 2937792, "flush_reason": "Manual Compaction"}
Sep 30 18:06:27 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Sep 30 18:06:27 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255587960881, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 2819351, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22625, "largest_seqno": 24253, "table_properties": {"data_size": 2811986, "index_size": 4308, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15738, "raw_average_key_size": 20, "raw_value_size": 2796952, "raw_average_value_size": 3572, "num_data_blocks": 193, "num_entries": 783, "num_filter_entries": 783, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759255436, "oldest_key_time": 1759255436, "file_creation_time": 1759255587, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:06:27 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 14946 microseconds, and 6983 cpu microseconds.
Sep 30 18:06:27 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:06:27 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:06:27.960938) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 2819351 bytes OK
Sep 30 18:06:27 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:06:27.960966) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Sep 30 18:06:27 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:06:27.962605) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Sep 30 18:06:27 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:06:27.962625) EVENT_LOG_v1 {"time_micros": 1759255587962618, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 18:06:27 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:06:27.962650) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 18:06:27 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 2890679, prev total WAL file size 2890679, number of live WAL files 2.
Sep 30 18:06:27 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:06:27 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:06:27.963728) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Sep 30 18:06:27 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 18:06:27 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(2753KB)], [53(10226KB)]
Sep 30 18:06:27 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255587963760, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 13291475, "oldest_snapshot_seqno": -1}
Sep 30 18:06:28 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5141 keys, 11231098 bytes, temperature: kUnknown
Sep 30 18:06:28 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255588025891, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 11231098, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11196996, "index_size": 20168, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12869, "raw_key_size": 131554, "raw_average_key_size": 25, "raw_value_size": 11103949, "raw_average_value_size": 2159, "num_data_blocks": 824, "num_entries": 5141, "num_filter_entries": 5141, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759255587, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:06:28 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:06:28 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:06:28.026210) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 11231098 bytes
Sep 30 18:06:28 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:06:28.027697) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 213.6 rd, 180.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 10.0 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(8.7) write-amplify(4.0) OK, records in: 5661, records dropped: 520 output_compression: NoCompression
Sep 30 18:06:28 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:06:28.027718) EVENT_LOG_v1 {"time_micros": 1759255588027708, "job": 28, "event": "compaction_finished", "compaction_time_micros": 62234, "compaction_time_cpu_micros": 22630, "output_level": 6, "num_output_files": 1, "total_output_size": 11231098, "num_input_records": 5661, "num_output_records": 5141, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 18:06:28 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:06:28 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255588028502, "job": 28, "event": "table_file_deletion", "file_number": 55}
Sep 30 18:06:28 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:06:28 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255588030769, "job": 28, "event": "table_file_deletion", "file_number": 53}
Sep 30 18:06:28 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:06:27.963649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:06:28 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:06:28.030925) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:06:28 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:06:28.030937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:06:28 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:06:28.030942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:06:28 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:06:28.030947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:06:28 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:06:28.030953) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:06:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:06:28.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:28 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8003b00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:06:28] "GET /metrics HTTP/1.1" 200 46585 "" "Prometheus/2.51.0"
Sep 30 18:06:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:06:28] "GET /metrics HTTP/1.1" 200 46585 "" "Prometheus/2.51.0"
Sep 30 18:06:28 compute-0 ceph-mon[73755]: pgmap v826: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:29 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d8002ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:06:29.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:29 compute-0 podman[276673]: time="2025-09-30T18:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:06:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:06:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10235 "" "Go-http-client/1.1"
Sep 30 18:06:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v827: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:06:29 compute-0 sshd-session[294732]: Invalid user test from 154.125.120.7 port 40324
Sep 30 18:06:29 compute-0 sshd-session[294732]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:06:29 compute-0 sshd-session[294732]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=154.125.120.7
Sep 30 18:06:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:06:30.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:06:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:30 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3bc003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:31 compute-0 ceph-mon[73755]: pgmap v827: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:06:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:31 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e4004cf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:06:31.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:31 compute-0 openstack_network_exporter[279566]: ERROR   18:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:06:31 compute-0 openstack_network_exporter[279566]: ERROR   18:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:06:31 compute-0 openstack_network_exporter[279566]: ERROR   18:06:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:06:31 compute-0 openstack_network_exporter[279566]: ERROR   18:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:06:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:06:31 compute-0 openstack_network_exporter[279566]: ERROR   18:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:06:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:06:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v828: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:31 compute-0 sshd-session[294732]: Failed password for invalid user test from 154.125.120.7 port 40324 ssh2
Sep 30 18:06:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:06:32.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:32 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8003b00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:32 compute-0 sshd-session[294732]: Received disconnect from 154.125.120.7 port 40324:11: Bye Bye [preauth]
Sep 30 18:06:32 compute-0 sshd-session[294732]: Disconnected from invalid user test 154.125.120.7 port 40324 [preauth]
Sep 30 18:06:33 compute-0 ceph-mon[73755]: pgmap v828: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:33 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3d8002ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:06:33.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:33 compute-0 podman[294744]: 2025-09-30 18:06:33.516941571 +0000 UTC m=+0.051542882 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Sep 30 18:06:33 compute-0 podman[294743]: 2025-09-30 18:06:33.552297791 +0000 UTC m=+0.092324573 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Sep 30 18:06:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:06:33.636Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:06:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v829: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:33 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 18:06:33 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1801.3 total, 600.0 interval
                                           Cumulative writes: 5323 writes, 24K keys, 5321 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 5323 writes, 5321 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1454 writes, 6359 keys, 1454 commit groups, 1.0 writes per commit group, ingest: 10.53 MB, 0.02 MB/s
                                           Interval WAL: 1454 writes, 1454 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     73.1      0.47              0.10        14    0.034       0      0       0.0       0.0
                                             L6      1/0   10.71 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   4.0    181.0    154.3      0.90              0.34        13    0.069     63K   6755       0.0       0.0
                                            Sum      1/0   10.71 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   5.0    118.5    126.2      1.37              0.44        27    0.051     63K   6755       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.2    152.0    153.6      0.42              0.16        10    0.042     27K   2587       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0    181.0    154.3      0.90              0.34        13    0.069     63K   6755       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    148.5      0.23              0.10        13    0.018       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.24              0.00         1    0.242       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1801.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.034, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.17 GB write, 0.10 MB/s write, 0.16 GB read, 0.09 MB/s read, 1.4 seconds
                                           Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e76de37350#2 capacity: 304.00 MB usage: 13.47 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000107 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(784,12.98 MB,4.26932%) FilterBlock(28,178.17 KB,0.0572355%) IndexBlock(28,328.88 KB,0.105647%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Sep 30 18:06:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:06:34.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:34 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0001b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:35 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e4004cf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:35 compute-0 ceph-mon[73755]: pgmap v829: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:06:35.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:06:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v830: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:06:36.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:06:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2465583842' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:06:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:06:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2465583842' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:06:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:36 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e0002920 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:37 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8003b00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:06:37.122Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:06:37 compute-0 ceph-mon[73755]: pgmap v830: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2465583842' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:06:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2465583842' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:06:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:06:37.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:06:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:06:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:06:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:06:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:06:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:06:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:06:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:06:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v831: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:06:38.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:06:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:38 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0001b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:06:38] "GET /metrics HTTP/1.1" 200 46583 "" "Prometheus/2.51.0"
Sep 30 18:06:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:06:38] "GET /metrics HTTP/1.1" 200 46583 "" "Prometheus/2.51.0"
Sep 30 18:06:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:39 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e4004cf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:39 compute-0 ceph-mon[73755]: pgmap v831: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:06:39.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v832: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:06:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:06:40.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:40 compute-0 ceph-mon[73755]: pgmap v832: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:06:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:06:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:40 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e0002920 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:41 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c8003b00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:06:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:06:41.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:41 compute-0 podman[294800]: 2025-09-30 18:06:41.536742524 +0000 UTC m=+0.065609248 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20250930, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:06:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v833: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:06:42.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[292310]: 30/09/2025 18:06:42 : epoch 68dc1b9e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0001b80 fd 38 proxy ignored for local
Sep 30 18:06:42 compute-0 kernel: ganesha.nfsd[294741]: segfault at 50 ip 00007fb49b00732e sp 00007fb4577fd210 error 4 in libntirpc.so.5.8[7fb49afec000+2c000] likely on CPU 0 (core 0, socket 0)
Sep 30 18:06:42 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Sep 30 18:06:42 compute-0 systemd[1]: Started Process Core Dump (PID 294821/UID 0).
Sep 30 18:06:42 compute-0 ceph-mon[73755]: pgmap v833: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:06:43.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:06:43.637Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:06:43 compute-0 systemd-coredump[294822]: Process 292314 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 61:
                                                    #0  0x00007fb49b00732e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
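[editor's note] The kernel segfault record and the systemd-coredump entry above describe the same crash of ganesha.nfsd: a read at address 0x50 (a near-NULL structure field) from code inside libntirpc.so.5.8. A small sketch for decoding such a segfault line; note that the kernel-relative offset (ip minus mapping base, 0x1b32e here) and the coredump frame offset (+0x2232e) differ by 0x7000, consistent with the mapped executable segment not starting at the beginning of the ELF file:

    import re

    # Parses the kernel segfault record above:
    #   segfault at 50 ip 00007fb49b00732e sp 00007fb4577fd210 error 4
    #   in libntirpc.so.5.8[7fb49afec000+2c000]
    SEGFAULT_RE = re.compile(
        r'segfault at (?P<addr>[0-9a-f]+) ip (?P<ip>[0-9a-f]+) sp (?P<sp>[0-9a-f]+) '
        r'error (?P<err>\d+) in (?P<obj>\S+)\[(?P<base>[0-9a-f]+)\+(?P<size>[0-9a-f]+)\]'
    )

    def decode(line):
        m = SEGFAULT_RE.search(line)
        if not m:
            return None
        ip = int(m.group("ip"), 16)
        base = int(m.group("base"), 16)
        return {
            "object": m.group("obj"),
            # Offset of the faulting instruction within the kernel-reported
            # mapping (0x1b32e for the line above).
            "segment_offset": hex(ip - base),
            # error 4 on x86_64 is a user-mode read of a not-present page,
            # consistent with the near-NULL fault address 0x50.
            "fault_address": "0x" + m.group("addr"),
        }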
Sep 30 18:06:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v834: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:43 compute-0 systemd[1]: systemd-coredump@13-294821-0.service: Deactivated successfully.
Sep 30 18:06:43 compute-0 systemd[1]: systemd-coredump@13-294821-0.service: Consumed 1.245s CPU time.
Sep 30 18:06:44 compute-0 podman[294829]: 2025-09-30 18:06:44.017472158 +0000 UTC m=+0.029931320 container died b7497a80b31974815d0f7cf34d76befd732b3041528efa746fcdc7f09538bfa9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid)
Sep 30 18:06:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-988fa460153acad45016cba23905e64883a52f629670b96d677c2dbfce0d17b5-merged.mount: Deactivated successfully.
Sep 30 18:06:44 compute-0 podman[294829]: 2025-09-30 18:06:44.055161238 +0000 UTC m=+0.067620380 container remove b7497a80b31974815d0f7cf34d76befd732b3041528efa746fcdc7f09538bfa9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 18:06:44 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Main process exited, code=exited, status=139/n/a
Sep 30 18:06:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 18:06:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:06:44.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 18:06:44 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Failed with result 'exit-code'.
Sep 30 18:06:44 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Consumed 1.735s CPU time.
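[editor's note] status=139 in the unit failure above follows the common 128+signal convention: the container entrypoint reported that its main process was killed by signal 11 (SIGSEGV), matching the ganesha.nfsd segfault logged a few seconds earlier. A one-function sketch of the decoding:

    import signal

    def decode_exit_status(status):
        """Decode a systemd 'code=exited, status=N' value that follows the
        128 + signal convention used by shell-style wrappers."""
        if status > 128:
            sig = status - 128
            return f"killed by {signal.Signals(sig).name} ({sig})"
        return f"exited with code {status}"

    # decode_exit_status(139) -> 'killed by SIGSEGV (11)'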
Sep 30 18:06:44 compute-0 ceph-mon[73755]: pgmap v834: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:06:45.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:06:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v835: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:06:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:06:46.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:06:46 compute-0 ceph-mon[73755]: pgmap v835: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:06:47.123Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:06:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:06:47.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:47 compute-0 sshd[192864]: drop connection #1 from [115.190.39.222]:38482 on [38.102.83.202]:22 penalty: exceeded LoginGraceTime
Sep 30 18:06:47 compute-0 sudo[294877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:06:47 compute-0 sudo[294877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:06:47 compute-0 sudo[294877]: pam_unix(sudo:session): session closed for user root
Sep 30 18:06:47 compute-0 podman[294901]: 2025-09-30 18:06:47.622679588 +0000 UTC m=+0.058347169 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9-minimal, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.buildah.version=1.33.7, vcs-type=git, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6)
Sep 30 18:06:47 compute-0 podman[294909]: 2025-09-30 18:06:47.629942097 +0000 UTC m=+0.051303086 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20250930, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:06:47 compute-0 podman[294902]: 2025-09-30 18:06:47.649621329 +0000 UTC m=+0.075757772 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Sep 30 18:06:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v836: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:06:48.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:48 compute-0 sshd-session[294874]: Invalid user jia from 45.252.249.158 port 48278
Sep 30 18:06:48 compute-0 sshd-session[294874]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:06:48 compute-0 sshd-session[294874]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158
Sep 30 18:06:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/180648 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
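[editor's note] haproxy marks backend nfs.cephfs.1 DOWN on a Layer4 check ("Connection refused") because the ganesha process behind it has just crashed. A rough sketch of an equivalent check; host and port are placeholders, since the backend address is not shown in this excerpt:

    import socket

    def layer4_check(host, port, timeout=2.0):
        """Rough equivalent of haproxy's Layer4 TCP check: try to complete a
        TCP handshake and report why it failed."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return "UP"
        except ConnectionRefusedError:
            return "DOWN (connection refused)"   # what the check above reports
        except OSError as exc:
            return f"DOWN ({exc})"

    # layer4_check("192.168.122.100", 2049)   # placeholder endpoint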
Sep 30 18:06:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:06:48] "GET /metrics HTTP/1.1" 200 46583 "" "Prometheus/2.51.0"
Sep 30 18:06:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:06:48] "GET /metrics HTTP/1.1" 200 46583 "" "Prometheus/2.51.0"
Sep 30 18:06:48 compute-0 ceph-mon[73755]: pgmap v836: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:06:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:06:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:06:49.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:06:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v837: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:06:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:06:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:06:50.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:06:50 compute-0 sshd-session[294874]: Failed password for invalid user jia from 45.252.249.158 port 48278 ssh2
Sep 30 18:06:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:06:50 compute-0 sshd-session[294874]: Received disconnect from 45.252.249.158 port 48278:11: Bye Bye [preauth]
Sep 30 18:06:50 compute-0 sshd-session[294874]: Disconnected from invalid user jia 45.252.249.158 port 48278 [preauth]
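[editor's note] The sshd lines above are unrelated background noise: a connection from 115.190.39.222 dropped for exceeding LoginGraceTime, and a failed password for the invalid user "jia" from 45.252.249.158. A small sketch for tallying such failures per source IP from journal lines of this shape:

    import re
    from collections import Counter

    FAILED_RE = re.compile(
        r'Failed password for (?:invalid user )?(?P<user>\S+) '
        r'from (?P<ip>\S+) port \d+ ssh2'
    )

    def failed_logins_by_ip(lines):
        """Tally 'Failed password' events per source IP from sshd lines."""
        counts = Counter()
        for line in lines:
            m = FAILED_RE.search(line)
            if m:
                counts[m.group("ip")] += 1
        return counts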
Sep 30 18:06:50 compute-0 ceph-mon[73755]: pgmap v837: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:06:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:06:51.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v838: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:06:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:06:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:06:52.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:06:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:06:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:06:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:06:53.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:53 compute-0 ceph-mon[73755]: pgmap v838: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:06:53 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:06:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:06:53.639Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:06:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:06:53.639Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
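[editor's note] Alertmanager first logs a retriable dial timeout to the ceph-dashboard webhook on compute-1.ctlplane.example.com:8443, then cancels the notification once the retry budget is exhausted, so alerts raised on this node are not reaching that receiver. A minimal reachability probe against the same URL; the payload and timeout are illustrative assumptions, not what Alertmanager actually sends:

    import urllib.error
    import urllib.request

    URL = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"

    def probe(url=URL, timeout=10.0):
        """POST a minimal JSON body to the receiver endpoint from the log and
        report how the connection fails."""
        req = urllib.request.Request(
            url, data=b"{}", headers={"Content-Type": "application/json"}
        )
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return f"HTTP {resp.status}"
        except urllib.error.URLError as exc:
            return f"unreachable: {exc.reason}"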
Sep 30 18:06:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v839: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:06:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:06:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:06:54.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:06:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:06:54.271 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:06:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:06:54.271 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:06:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:06:54.271 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:06:54 compute-0 ceph-mon[73755]: pgmap v839: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:06:54 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Scheduled restart job, restart counter is at 14.
Sep 30 18:06:54 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 18:06:54 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Consumed 1.735s CPU time.
Sep 30 18:06:54 compute-0 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
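[editor's note] This is the 14th automatic restart of the ganesha unit; systemd keeps that counter as the unit's NRestarts property. A sketch for reading it back, assuming a systemd recent enough to expose NRestarts and the unit name exactly as logged above:

    import subprocess

    UNIT = ("ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b"
            "@nfs.cephfs.1.0.compute-0.syzvbh.service")

    def restart_count(unit=UNIT):
        """Read systemd's restart counter for the unit (the same value the
        'restart counter is at 14' message reports)."""
        out = subprocess.run(
            ["systemctl", "show", "--property=NRestarts", "--value", unit],
            capture_output=True, text=True, check=True,
        )
        return int(out.stdout.strip())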
Sep 30 18:06:54 compute-0 podman[295019]: 2025-09-30 18:06:54.702434872 +0000 UTC m=+0.067928408 container create a75bc0871d2876441e946a87cea944e2694054e1fcd6c747829e72a413847002 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Sep 30 18:06:54 compute-0 podman[295019]: 2025-09-30 18:06:54.659541996 +0000 UTC m=+0.025035552 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:06:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98432b906fea09105c2677e80942597164be1da303b7b5bf2e1eda23e3b30e01/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Sep 30 18:06:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98432b906fea09105c2677e80942597164be1da303b7b5bf2e1eda23e3b30e01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:06:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98432b906fea09105c2677e80942597164be1da303b7b5bf2e1eda23e3b30e01/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:06:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98432b906fea09105c2677e80942597164be1da303b7b5bf2e1eda23e3b30e01/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-0.syzvbh-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:06:54 compute-0 podman[295019]: 2025-09-30 18:06:54.826643624 +0000 UTC m=+0.192137190 container init a75bc0871d2876441e946a87cea944e2694054e1fcd6c747829e72a413847002 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Sep 30 18:06:54 compute-0 podman[295019]: 2025-09-30 18:06:54.833329468 +0000 UTC m=+0.198823024 container start a75bc0871d2876441e946a87cea944e2694054e1fcd6c747829e72a413847002 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:06:54 compute-0 bash[295019]: a75bc0871d2876441e946a87cea944e2694054e1fcd6c747829e72a413847002
Sep 30 18:06:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:06:54 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Sep 30 18:06:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:06:54 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Sep 30 18:06:54 compute-0 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 18:06:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:06:55 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Sep 30 18:06:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:06:55 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Sep 30 18:06:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:06:55 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Sep 30 18:06:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:06:55 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Sep 30 18:06:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:06:55 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Sep 30 18:06:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:06:55 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:06:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:06:55.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:06:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v840: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:06:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:06:56.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:57 compute-0 ceph-mon[73755]: pgmap v840: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:06:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:06:57.125Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:06:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:06:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:06:57.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:06:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:06:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3339462164' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:06:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:06:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3339462164' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:06:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v841: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:06:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3339462164' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:06:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3339462164' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:06:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:06:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:06:58.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:06:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:06:58] "GET /metrics HTTP/1.1" 200 46584 "" "Prometheus/2.51.0"
Sep 30 18:06:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:06:58] "GET /metrics HTTP/1.1" 200 46584 "" "Prometheus/2.51.0"
Sep 30 18:06:59 compute-0 ceph-mon[73755]: pgmap v841: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:06:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:06:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:06:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:06:59.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:06:59 compute-0 podman[276673]: time="2025-09-30T18:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:06:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:06:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10223 "" "Go-http-client/1.1"
Sep 30 18:06:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v842: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Sep 30 18:07:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:07:00.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:07:01 compute-0 ceph-mon[73755]: pgmap v842: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Sep 30 18:07:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:01 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:07:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:01 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:07:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:07:01.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:01 compute-0 openstack_network_exporter[279566]: ERROR   18:07:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:07:01 compute-0 openstack_network_exporter[279566]: ERROR   18:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:07:01 compute-0 openstack_network_exporter[279566]: ERROR   18:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:07:01 compute-0 openstack_network_exporter[279566]: ERROR   18:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:07:01 compute-0 openstack_network_exporter[279566]: ERROR   18:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
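[editor's note] The openstack_network_exporter errors above mean its appctl-based collectors cannot find control sockets for ovsdb-server and ovn-northd inside the container, so those OVS/OVN metrics are skipped (there is also no local datapath for the pmd queries). A sketch that checks the usual runtime socket locations; the paths are assumptions and may differ from what this container mounts:

    import glob

    # Assumed default runtime directories; the exporter container may mount
    # them elsewhere (see its volume list earlier in this log).
    CANDIDATE_GLOBS = [
        "/run/openvswitch/ovs-vswitchd.*.ctl",
        "/run/openvswitch/ovsdb-server.*.ctl",
        "/run/ovn/ovn-northd.*.ctl",
    ]

    def find_control_sockets():
        """List any OVS/OVN control sockets visible to this process; an empty
        result matches the 'no control socket files found' errors above."""
        return {pattern: glob.glob(pattern) for pattern in CANDIDATE_GLOBS}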
Sep 30 18:07:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v843: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Sep 30 18:07:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:07:02.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:02 compute-0 ceph-mon[73755]: pgmap v843: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Sep 30 18:07:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:07:03.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:07:03.640Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:07:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v844: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 18:07:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:07:04.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:04 compute-0 podman[295087]: 2025-09-30 18:07:04.54578127 +0000 UTC m=+0.068561046 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Sep 30 18:07:04 compute-0 podman[295086]: 2025-09-30 18:07:04.599206681 +0000 UTC m=+0.132706206 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Sep 30 18:07:04 compute-0 ceph-mon[73755]: pgmap v844: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 18:07:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:07:05.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:07:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v845: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 18:07:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:07:06.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:06 compute-0 ceph-mon[73755]: pgmap v845: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 18:07:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:07:07.126Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:07:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:07 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 18:07:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:07 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Sep 30 18:07:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:07 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Sep 30 18:07:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:07 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Sep 30 18:07:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:07 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Sep 30 18:07:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:07 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Sep 30 18:07:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:07 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Sep 30 18:07:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:07 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 18:07:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:07 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 18:07:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:07 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 18:07:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:07 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Sep 30 18:07:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:07 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 18:07:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:07 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Sep 30 18:07:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:07 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Sep 30 18:07:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:07 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Sep 30 18:07:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:07 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Sep 30 18:07:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:07 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Sep 30 18:07:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:07 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Sep 30 18:07:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:07 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Sep 30 18:07:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:07 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Sep 30 18:07:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:07 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Sep 30 18:07:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:07 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Sep 30 18:07:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:07 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Sep 30 18:07:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:07 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Sep 30 18:07:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:07 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 18:07:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:07 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Sep 30 18:07:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:07 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
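The NFS-Ganesha daemon above reaches "NFS SERVER INITIALIZED" despite two non-fatal startup problems: dbus_bus_get fails because /run/dbus/system_bus_socket does not exist inside the container (so the dbus service thread exits immediately afterwards), and the Kerberos machine credential refresh fails because /etc/krb5.keytab has no usable nfs/ entry (only relevant if Kerberized NFS is actually required). A minimal pre-flight sketch for those two dependencies, assuming the paths from the log; whether either is required depends on the deployment:

    #!/usr/bin/env python3
    """Illustrative pre-flight checks mirroring the ganesha.nfsd warnings above."""
    import os
    import stat
    import subprocess

    DBUS_SOCKET = "/run/dbus/system_bus_socket"   # path from the dbus_bus_get CRIT line
    KEYTAB = "/etc/krb5.keytab"                   # path from the gssd_refresh warning

    def dbus_socket_present() -> bool:
        # ganesha's gsh_dbus_pkginit fails with "No such file or directory"
        # when this socket is missing, which is what the CRIT line reports.
        try:
            return stat.S_ISSOCK(os.stat(DBUS_SOCKET).st_mode)
        except FileNotFoundError:
            return False

    def keytab_has_nfs_entry() -> bool:
        # "klist -k" lists keytab principals; a usable entry looks like
        # nfs/<host>@REALM.  Requires the Kerberos client tools to be installed.
        try:
            out = subprocess.run(["klist", "-k", KEYTAB],
                                 capture_output=True, text=True, check=True).stdout
        except (FileNotFoundError, subprocess.CalledProcessError):
            return False
        return any(line.strip().split()[-1].startswith("nfs/")
                   for line in out.splitlines() if "@" in line)

    if __name__ == "__main__":
        print("dbus socket present:", dbus_socket_present())
        print("nfs keytab entry   :", keytab_has_nfs_entry())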
Sep 30 18:07:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:07:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:07:07.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
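The radosgw "beast" access lines that recur throughout this capture are anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and .101 at roughly one-second intervals, which is the signature of load-balancer health probes rather than client traffic. A small parser for those lines; the regex is fitted to the samples in this log, not a documented RGW log format:

    import re

    # Field order is an assumption based on the beast lines shown above.
    BEAST_RE = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" '
        r'(?P<status>\d{3}) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f25f11485d0: 192.168.122.100 - anonymous '
            '[30/Sep/2025:18:07:07.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000026s')

    m = BEAST_RE.search(line)
    if m:
        # -> 192.168.122.100 HEAD / HTTP/1.0 200 0.001000026
        print(m.group("client"), m.group("request"), m.group("status"),
              float(m.group("latency")))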
Sep 30 18:07:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:07:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:07:07
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'backups', '.mgr', 'vms', 'images', 'cephfs.cephfs.data', '.nfs', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control']
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
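Each pg_autoscaler line above reports the pool's share of raw capacity, its bias, and the resulting pg target before quantization. From the numbers logged, the target is the capacity ratio times the bias times roughly 200, i.e. the pool's share of an approximately 200-PG budget for this cluster; that 200 figure is inferred from these lines (in practice it derives from mon_target_pg_per_osd, OSD count and replication) and the "quantized to" column additionally respects pool minimums and the current pg_num, so it is not reproduced here. A short sketch checking the arithmetic against three of the logged pools:

    # PG_BUDGET = 200 is inferred from the logged ratios; it is not printed in the log.
    PG_BUDGET = 200

    pools = {
        # name: (share of raw space, bias, pg target printed in the log)
        ".mgr":               (1.0778624975581169e-05, 1.0, 0.0021557249951162337),
        "images":             (0.000998787452383278,   1.0, 0.1997574904766556),
        "cephfs.cephfs.meta": (7.630884938464543e-07,  4.0, 0.0006104707950771635),
    }

    for name, (ratio, bias, logged) in pools.items():
        target = ratio * bias * PG_BUDGET
        print(f"{name:20s} computed {target:.16f}  logged {logged:.16f}")
        assert abs(target - logged) < 1e-12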
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
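The balancer pass above (mode upmap, max misplaced 5%) examined the listed pools and prepared 0 of a maximum 10 upmap changes, meaning the PG distribution already meets its target. A quick way to confirm the same from the CLI is "ceph balancer status"; the sketch below shells out to it, and the JSON key names are from memory, so treat them as assumptions:

    import json
    import subprocess

    # "ceph balancer status" reports the active flag and the mode ("upmap" here).
    out = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    status = json.loads(out)
    print("mode:", status.get("mode"), "active:", status.get("active"))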
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:07:07 compute-0 sudo[295151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:07:07 compute-0 sudo[295151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:07:07 compute-0 sudo[295151]: pam_unix(sudo:session): session closed for user root
Sep 30 18:07:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v846: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 18:07:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:07:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:07:08.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:08 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc244000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:08 compute-0 sudo[295177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:07:08 compute-0 sudo[295177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:07:08 compute-0 sudo[295177]: pam_unix(sudo:session): session closed for user root
Sep 30 18:07:08 compute-0 sudo[295204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:07:08 compute-0 sudo[295204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:07:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:07:08] "GET /metrics HTTP/1.1" 200 46579 "" "Prometheus/2.51.0"
Sep 30 18:07:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:07:08] "GET /metrics HTTP/1.1" 200 46579 "" "Prometheus/2.51.0"
Sep 30 18:07:09 compute-0 ceph-mon[73755]: pgmap v846: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 18:07:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:09 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c001c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:07:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:07:09.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:07:09 compute-0 sudo[295204]: pam_unix(sudo:session): session closed for user root
Sep 30 18:07:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:07:09 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:07:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:07:09 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:07:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:07:09 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:07:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:07:09 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:07:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:07:09 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:07:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:07:09 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:07:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:07:09 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:07:09 compute-0 sudo[295261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:07:09 compute-0 sudo[295261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:07:09 compute-0 sudo[295261]: pam_unix(sudo:session): session closed for user root
Sep 30 18:07:09 compute-0 sudo[295287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:07:09 compute-0 sudo[295287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
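The cephadm mgr module drives this host over SSH as ceph-admin and runs each orchestration step under sudo, so the COMMAND= field of the sudo lines is effectively a trace of what cephadm executed: gather-facts, then the ceph-volume calls (lvm batch against /dev/ceph_vg0/ceph_lv0 with CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group, later lvm list and raw list). A small extractor for those fields, using an abbreviated copy of one of the lines above as the sample:

    import re

    SUDO_CMD = re.compile(r"sudo\[\d+\]: ceph-admin : .*?COMMAND=(?P<cmd>.+)$")

    # Abbreviated from the sudo[295287] line above; the cephadm path is shortened here.
    sample = (
        "Sep 30 18:07:09 compute-0 sudo[295287]: ceph-admin : "
        "PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 "
        "/var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.<digest> "
        "--timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b "
        "--config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd"
    )

    m = SUDO_CMD.search(sample)
    if m:
        print(m.group("cmd"))  # the full cephadm/ceph-volume invocation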
Sep 30 18:07:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v847: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 18:07:10 compute-0 podman[295352]: 2025-09-30 18:07:10.03027464 +0000 UTC m=+0.081945614 container create 6e4718ebfe1af6c91c69d154cfeae206eed0d4f00aba1ae0fc2f7d99f47813bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:07:10 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:07:10 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:07:10 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:07:10 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:07:10 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:07:10 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:07:10 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:07:10 compute-0 podman[295352]: 2025-09-30 18:07:09.972597279 +0000 UTC m=+0.024268363 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:07:10 compute-0 systemd[1]: Started libpod-conmon-6e4718ebfe1af6c91c69d154cfeae206eed0d4f00aba1ae0fc2f7d99f47813bd.scope.
Sep 30 18:07:10 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:07:10 compute-0 podman[295352]: 2025-09-30 18:07:10.132990814 +0000 UTC m=+0.184661818 container init 6e4718ebfe1af6c91c69d154cfeae206eed0d4f00aba1ae0fc2f7d99f47813bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:07:10 compute-0 podman[295352]: 2025-09-30 18:07:10.145274463 +0000 UTC m=+0.196945447 container start 6e4718ebfe1af6c91c69d154cfeae206eed0d4f00aba1ae0fc2f7d99f47813bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_tharp, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:07:10 compute-0 podman[295352]: 2025-09-30 18:07:10.149172265 +0000 UTC m=+0.200843249 container attach 6e4718ebfe1af6c91c69d154cfeae206eed0d4f00aba1ae0fc2f7d99f47813bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Sep 30 18:07:10 compute-0 eloquent_tharp[295369]: 167 167
Sep 30 18:07:10 compute-0 systemd[1]: libpod-6e4718ebfe1af6c91c69d154cfeae206eed0d4f00aba1ae0fc2f7d99f47813bd.scope: Deactivated successfully.
Sep 30 18:07:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:07:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:07:10.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:07:10 compute-0 podman[295374]: 2025-09-30 18:07:10.229931667 +0000 UTC m=+0.050942257 container died 6e4718ebfe1af6c91c69d154cfeae206eed0d4f00aba1ae0fc2f7d99f47813bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_tharp, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:07:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ff5cc6652df985db8aa42dc108ba2a9181929cc75950c3cb4ab39b43ba6403e-merged.mount: Deactivated successfully.
Sep 30 18:07:10 compute-0 podman[295374]: 2025-09-30 18:07:10.276028767 +0000 UTC m=+0.097039277 container remove 6e4718ebfe1af6c91c69d154cfeae206eed0d4f00aba1ae0fc2f7d99f47813bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_tharp, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:07:10 compute-0 systemd[1]: libpod-conmon-6e4718ebfe1af6c91c69d154cfeae206eed0d4f00aba1ae0fc2f7d99f47813bd.scope: Deactivated successfully.
Sep 30 18:07:10 compute-0 podman[295395]: 2025-09-30 18:07:10.494811742 +0000 UTC m=+0.051958154 container create 1169167b66a4d32972ca2d2033074ea2a8b3b8b726586a4b16519ca0658d827b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 18:07:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:07:10 compute-0 systemd[1]: Started libpod-conmon-1169167b66a4d32972ca2d2033074ea2a8b3b8b726586a4b16519ca0658d827b.scope.
Sep 30 18:07:10 compute-0 podman[295395]: 2025-09-30 18:07:10.474317658 +0000 UTC m=+0.031464090 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:07:10 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:07:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffdbe9ebe3d4adc12caaa5a1e5e0aafab7fbb447e4cbbc0eca137ab2156c9583/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:07:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffdbe9ebe3d4adc12caaa5a1e5e0aafab7fbb447e4cbbc0eca137ab2156c9583/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:07:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffdbe9ebe3d4adc12caaa5a1e5e0aafab7fbb447e4cbbc0eca137ab2156c9583/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:07:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffdbe9ebe3d4adc12caaa5a1e5e0aafab7fbb447e4cbbc0eca137ab2156c9583/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:07:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffdbe9ebe3d4adc12caaa5a1e5e0aafab7fbb447e4cbbc0eca137ab2156c9583/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:07:10 compute-0 podman[295395]: 2025-09-30 18:07:10.614637501 +0000 UTC m=+0.171783923 container init 1169167b66a4d32972ca2d2033074ea2a8b3b8b726586a4b16519ca0658d827b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Sep 30 18:07:10 compute-0 podman[295395]: 2025-09-30 18:07:10.629547619 +0000 UTC m=+0.186694041 container start 1169167b66a4d32972ca2d2033074ea2a8b3b8b726586a4b16519ca0658d827b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 18:07:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/180710 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 18:07:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:10 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
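The recurring ganesha "svc_vc_recv ... proxy header rest len failed ... (will set dead)" events appear to coincide with the haproxy Layer4 check reported just above: the check opens a TCP connection to the NFS backend and closes it without sending the expected header, so ganesha's receive path fails to read a record on that fd and marks the transport dead. Noisy, but by this reading harmless. A minimal reproduction of such a probe, with host and port as illustrative assumptions:

    import socket

    # A Layer-4 health probe of the kind haproxy performs: connect to the
    # NFS TCP port and close without sending anything.  Ganesha accepts the
    # connection, fails to read a header, and logs the svc_vc_recv EVENT above.
    with socket.create_connection(("127.0.0.1", 2049), timeout=2):
        pass  # connect succeeded -> backend reported UP; nothing is sent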
Sep 30 18:07:10 compute-0 podman[295395]: 2025-09-30 18:07:10.634827827 +0000 UTC m=+0.191974239 container attach 1169167b66a4d32972ca2d2033074ea2a8b3b8b726586a4b16519ca0658d827b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_jepsen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid)
Sep 30 18:07:10 compute-0 pensive_jepsen[295411]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:07:10 compute-0 pensive_jepsen[295411]: --> All data devices are unavailable
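"All data devices are unavailable" from the lvm batch call above typically means the passed device is already consumed; here the LV already hosts osd.0, as the lvm list output further below confirms, so there is nothing new to prepare. One way to see per-device availability and the rejection reasons is "ceph-volume inventory"; the field names in the sketch below are from memory and should be treated as assumptions:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for dev in json.loads(out):
        # e.g. /dev/loop3  available=False  rejected_reasons=['LVM detected', ...]
        print(dev.get("path"), dev.get("available"), dev.get("rejected_reasons"))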
Sep 30 18:07:10 compute-0 systemd[1]: libpod-1169167b66a4d32972ca2d2033074ea2a8b3b8b726586a4b16519ca0658d827b.scope: Deactivated successfully.
Sep 30 18:07:11 compute-0 podman[295427]: 2025-09-30 18:07:11.015835954 +0000 UTC m=+0.024767165 container died 1169167b66a4d32972ca2d2033074ea2a8b3b8b726586a4b16519ca0658d827b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_jepsen, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:07:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffdbe9ebe3d4adc12caaa5a1e5e0aafab7fbb447e4cbbc0eca137ab2156c9583-merged.mount: Deactivated successfully.
Sep 30 18:07:11 compute-0 ceph-mon[73755]: pgmap v847: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 18:07:11 compute-0 podman[295427]: 2025-09-30 18:07:11.054151061 +0000 UTC m=+0.063082242 container remove 1169167b66a4d32972ca2d2033074ea2a8b3b8b726586a4b16519ca0658d827b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Sep 30 18:07:11 compute-0 systemd[1]: libpod-conmon-1169167b66a4d32972ca2d2033074ea2a8b3b8b726586a4b16519ca0658d827b.scope: Deactivated successfully.
Sep 30 18:07:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:11 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:11 compute-0 sudo[295287]: pam_unix(sudo:session): session closed for user root
Sep 30 18:07:11 compute-0 sudo[295442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:07:11 compute-0 sudo[295442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:07:11 compute-0 sudo[295442]: pam_unix(sudo:session): session closed for user root
Sep 30 18:07:11 compute-0 sudo[295467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:07:11 compute-0 sudo[295467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:07:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:07:11.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:11 compute-0 podman[295535]: 2025-09-30 18:07:11.63919146 +0000 UTC m=+0.043449232 container create 3c0524d3b093684b256becb9ce0c9ed8139579a0f0e0082a3f4f33d8348bf719 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_bohr, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:07:11 compute-0 systemd[1]: Started libpod-conmon-3c0524d3b093684b256becb9ce0c9ed8139579a0f0e0082a3f4f33d8348bf719.scope.
Sep 30 18:07:11 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:07:11 compute-0 podman[295535]: 2025-09-30 18:07:11.619472467 +0000 UTC m=+0.023730259 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:07:11 compute-0 podman[295535]: 2025-09-30 18:07:11.7206149 +0000 UTC m=+0.124872752 container init 3c0524d3b093684b256becb9ce0c9ed8139579a0f0e0082a3f4f33d8348bf719 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_bohr, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:07:11 compute-0 podman[295535]: 2025-09-30 18:07:11.728143816 +0000 UTC m=+0.132401578 container start 3c0524d3b093684b256becb9ce0c9ed8139579a0f0e0082a3f4f33d8348bf719 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_bohr, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:07:11 compute-0 podman[295535]: 2025-09-30 18:07:11.731440361 +0000 UTC m=+0.135698223 container attach 3c0524d3b093684b256becb9ce0c9ed8139579a0f0e0082a3f4f33d8348bf719 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_bohr, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 18:07:11 compute-0 infallible_bohr[295554]: 167 167
Sep 30 18:07:11 compute-0 systemd[1]: libpod-3c0524d3b093684b256becb9ce0c9ed8139579a0f0e0082a3f4f33d8348bf719.scope: Deactivated successfully.
Sep 30 18:07:11 compute-0 podman[295535]: 2025-09-30 18:07:11.736077582 +0000 UTC m=+0.140335394 container died 3c0524d3b093684b256becb9ce0c9ed8139579a0f0e0082a3f4f33d8348bf719 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_bohr, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Sep 30 18:07:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-10b8f840dc759e4597421fc3264ca95c59c88009d7236798068285c5cf85d933-merged.mount: Deactivated successfully.
Sep 30 18:07:11 compute-0 podman[295535]: 2025-09-30 18:07:11.782592623 +0000 UTC m=+0.186850395 container remove 3c0524d3b093684b256becb9ce0c9ed8139579a0f0e0082a3f4f33d8348bf719 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_bohr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Sep 30 18:07:11 compute-0 systemd[1]: libpod-conmon-3c0524d3b093684b256becb9ce0c9ed8139579a0f0e0082a3f4f33d8348bf719.scope: Deactivated successfully.
Sep 30 18:07:11 compute-0 podman[295550]: 2025-09-30 18:07:11.795299274 +0000 UTC m=+0.107725945 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:07:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v848: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Sep 30 18:07:11 compute-0 podman[295597]: 2025-09-30 18:07:11.968814599 +0000 UTC m=+0.057210190 container create 443982e370eedb1eb18e2023c877b8b020b5cc8278f37f5f65a3024b4158a3e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_lichterman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Sep 30 18:07:12 compute-0 systemd[1]: Started libpod-conmon-443982e370eedb1eb18e2023c877b8b020b5cc8278f37f5f65a3024b4158a3e0.scope.
Sep 30 18:07:12 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:07:12 compute-0 podman[295597]: 2025-09-30 18:07:11.938906591 +0000 UTC m=+0.027302232 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:07:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24041fe9ffc0c70af4e3ae88b3b593f1976f0d8ff1d5ab9a07680ab17fdc6245/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:07:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24041fe9ffc0c70af4e3ae88b3b593f1976f0d8ff1d5ab9a07680ab17fdc6245/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:07:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24041fe9ffc0c70af4e3ae88b3b593f1976f0d8ff1d5ab9a07680ab17fdc6245/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:07:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24041fe9ffc0c70af4e3ae88b3b593f1976f0d8ff1d5ab9a07680ab17fdc6245/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:07:12 compute-0 podman[295597]: 2025-09-30 18:07:12.058600486 +0000 UTC m=+0.146996127 container init 443982e370eedb1eb18e2023c877b8b020b5cc8278f37f5f65a3024b4158a3e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Sep 30 18:07:12 compute-0 podman[295597]: 2025-09-30 18:07:12.065769853 +0000 UTC m=+0.154165434 container start 443982e370eedb1eb18e2023c877b8b020b5cc8278f37f5f65a3024b4158a3e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Sep 30 18:07:12 compute-0 podman[295597]: 2025-09-30 18:07:12.069826589 +0000 UTC m=+0.158222140 container attach 443982e370eedb1eb18e2023c877b8b020b5cc8278f37f5f65a3024b4158a3e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_lichterman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:07:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 18:07:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:07:12.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 18:07:12 compute-0 zealous_lichterman[295613]: {
Sep 30 18:07:12 compute-0 zealous_lichterman[295613]:     "0": [
Sep 30 18:07:12 compute-0 zealous_lichterman[295613]:         {
Sep 30 18:07:12 compute-0 zealous_lichterman[295613]:             "devices": [
Sep 30 18:07:12 compute-0 zealous_lichterman[295613]:                 "/dev/loop3"
Sep 30 18:07:12 compute-0 zealous_lichterman[295613]:             ],
Sep 30 18:07:12 compute-0 zealous_lichterman[295613]:             "lv_name": "ceph_lv0",
Sep 30 18:07:12 compute-0 zealous_lichterman[295613]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:07:12 compute-0 zealous_lichterman[295613]:             "lv_size": "21470642176",
Sep 30 18:07:12 compute-0 zealous_lichterman[295613]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:07:12 compute-0 zealous_lichterman[295613]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:07:12 compute-0 zealous_lichterman[295613]:             "name": "ceph_lv0",
Sep 30 18:07:12 compute-0 zealous_lichterman[295613]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:07:12 compute-0 zealous_lichterman[295613]:             "tags": {
Sep 30 18:07:12 compute-0 zealous_lichterman[295613]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:07:12 compute-0 zealous_lichterman[295613]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:07:12 compute-0 zealous_lichterman[295613]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:07:12 compute-0 zealous_lichterman[295613]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:07:12 compute-0 zealous_lichterman[295613]:                 "ceph.cluster_name": "ceph",
Sep 30 18:07:12 compute-0 zealous_lichterman[295613]:                 "ceph.crush_device_class": "",
Sep 30 18:07:12 compute-0 zealous_lichterman[295613]:                 "ceph.encrypted": "0",
Sep 30 18:07:12 compute-0 zealous_lichterman[295613]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:07:12 compute-0 zealous_lichterman[295613]:                 "ceph.osd_id": "0",
Sep 30 18:07:12 compute-0 zealous_lichterman[295613]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:07:12 compute-0 zealous_lichterman[295613]:                 "ceph.type": "block",
Sep 30 18:07:12 compute-0 zealous_lichterman[295613]:                 "ceph.vdo": "0",
Sep 30 18:07:12 compute-0 zealous_lichterman[295613]:                 "ceph.with_tpm": "0"
Sep 30 18:07:12 compute-0 zealous_lichterman[295613]:             },
Sep 30 18:07:12 compute-0 zealous_lichterman[295613]:             "type": "block",
Sep 30 18:07:12 compute-0 zealous_lichterman[295613]:             "vg_name": "ceph_vg0"
Sep 30 18:07:12 compute-0 zealous_lichterman[295613]:         }
Sep 30 18:07:12 compute-0 zealous_lichterman[295613]:     ]
Sep 30 18:07:12 compute-0 zealous_lichterman[295613]: }
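The JSON block emitted above by "ceph-volume lvm list --format json" maps each OSD id to the LVs backing it, with the ceph.* LV tags carrying the cluster fsid, OSD fsid and encryption state; here osd.0 lives on /dev/ceph_vg0/ceph_lv0 (about 20 GiB, backed by /dev/loop3). A small sketch summarising that structure, assuming the output has been captured to a file named lvm_list.json:

    import json

    with open("lvm_list.json") as fh:
        lvm = json.load(fh)

    for osd_id, volumes in lvm.items():
        for vol in volumes:
            tags = vol["tags"]
            size_gib = int(vol["lv_size"]) / 2**30
            print(f"osd.{osd_id}: {vol['lv_path']} "
                  f"({size_gib:.1f} GiB on {','.join(vol['devices'])}), "
                  f"osd_fsid={tags['ceph.osd_fsid']}, "
                  f"encrypted={tags['ceph.encrypted']}")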
Sep 30 18:07:12 compute-0 systemd[1]: libpod-443982e370eedb1eb18e2023c877b8b020b5cc8278f37f5f65a3024b4158a3e0.scope: Deactivated successfully.
Sep 30 18:07:12 compute-0 podman[295597]: 2025-09-30 18:07:12.41101354 +0000 UTC m=+0.499409101 container died 443982e370eedb1eb18e2023c877b8b020b5cc8278f37f5f65a3024b4158a3e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_lichterman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Sep 30 18:07:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-24041fe9ffc0c70af4e3ae88b3b593f1976f0d8ff1d5ab9a07680ab17fdc6245-merged.mount: Deactivated successfully.
Sep 30 18:07:12 compute-0 podman[295597]: 2025-09-30 18:07:12.458326531 +0000 UTC m=+0.546722072 container remove 443982e370eedb1eb18e2023c877b8b020b5cc8278f37f5f65a3024b4158a3e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:07:12 compute-0 systemd[1]: libpod-conmon-443982e370eedb1eb18e2023c877b8b020b5cc8278f37f5f65a3024b4158a3e0.scope: Deactivated successfully.
Sep 30 18:07:12 compute-0 sudo[295467]: pam_unix(sudo:session): session closed for user root
Sep 30 18:07:12 compute-0 sudo[295631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:07:12 compute-0 sudo[295631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:07:12 compute-0 sudo[295631]: pam_unix(sudo:session): session closed for user root
Sep 30 18:07:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:12 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:12 compute-0 sudo[295656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:07:12 compute-0 sudo[295656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:07:13 compute-0 ceph-mon[73755]: pgmap v848: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Sep 30 18:07:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:13 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c001c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:13 compute-0 podman[295720]: 2025-09-30 18:07:13.102292404 +0000 UTC m=+0.043577896 container create a81fe46a7f39b39f1156d050f1e2ddcee1b99032a61699c78f4a5c828826f8d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True)
Sep 30 18:07:13 compute-0 systemd[1]: Started libpod-conmon-a81fe46a7f39b39f1156d050f1e2ddcee1b99032a61699c78f4a5c828826f8d0.scope.
Sep 30 18:07:13 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:07:13 compute-0 podman[295720]: 2025-09-30 18:07:13.177494031 +0000 UTC m=+0.118779603 container init a81fe46a7f39b39f1156d050f1e2ddcee1b99032a61699c78f4a5c828826f8d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_gates, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:07:13 compute-0 podman[295720]: 2025-09-30 18:07:13.087287363 +0000 UTC m=+0.028572885 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:07:13 compute-0 podman[295720]: 2025-09-30 18:07:13.188400825 +0000 UTC m=+0.129686357 container start a81fe46a7f39b39f1156d050f1e2ddcee1b99032a61699c78f4a5c828826f8d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_gates, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:07:13 compute-0 podman[295720]: 2025-09-30 18:07:13.192328867 +0000 UTC m=+0.133614449 container attach a81fe46a7f39b39f1156d050f1e2ddcee1b99032a61699c78f4a5c828826f8d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_gates, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Sep 30 18:07:13 compute-0 funny_gates[295736]: 167 167
Sep 30 18:07:13 compute-0 systemd[1]: libpod-a81fe46a7f39b39f1156d050f1e2ddcee1b99032a61699c78f4a5c828826f8d0.scope: Deactivated successfully.
Sep 30 18:07:13 compute-0 podman[295720]: 2025-09-30 18:07:13.196188778 +0000 UTC m=+0.137474330 container died a81fe46a7f39b39f1156d050f1e2ddcee1b99032a61699c78f4a5c828826f8d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:07:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:07:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:07:13.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:07:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6449120ae02d58464b748a4d6c61d90145cabd46f30ae5b03352450c30aad85-merged.mount: Deactivated successfully.
Sep 30 18:07:13 compute-0 podman[295720]: 2025-09-30 18:07:13.249814424 +0000 UTC m=+0.191099906 container remove a81fe46a7f39b39f1156d050f1e2ddcee1b99032a61699c78f4a5c828826f8d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_gates, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Sep 30 18:07:13 compute-0 systemd[1]: libpod-conmon-a81fe46a7f39b39f1156d050f1e2ddcee1b99032a61699c78f4a5c828826f8d0.scope: Deactivated successfully.
Sep 30 18:07:13 compute-0 podman[295762]: 2025-09-30 18:07:13.488814555 +0000 UTC m=+0.060433354 container create 59cd88a7aaeb01ddbb7a4b23b62938f9512e081e505386cd65ea89d9af0730af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_chatelet, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Sep 30 18:07:13 compute-0 systemd[1]: Started libpod-conmon-59cd88a7aaeb01ddbb7a4b23b62938f9512e081e505386cd65ea89d9af0730af.scope.
Sep 30 18:07:13 compute-0 podman[295762]: 2025-09-30 18:07:13.464108712 +0000 UTC m=+0.035727521 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:07:13 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d36c1540ee038d29eb550064dd3f667b7582bee323d6b25125cfcd6c17ed3319/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d36c1540ee038d29eb550064dd3f667b7582bee323d6b25125cfcd6c17ed3319/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d36c1540ee038d29eb550064dd3f667b7582bee323d6b25125cfcd6c17ed3319/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d36c1540ee038d29eb550064dd3f667b7582bee323d6b25125cfcd6c17ed3319/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:07:13 compute-0 podman[295762]: 2025-09-30 18:07:13.587961606 +0000 UTC m=+0.159580405 container init 59cd88a7aaeb01ddbb7a4b23b62938f9512e081e505386cd65ea89d9af0730af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_chatelet, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:07:13 compute-0 podman[295762]: 2025-09-30 18:07:13.595326007 +0000 UTC m=+0.166944796 container start 59cd88a7aaeb01ddbb7a4b23b62938f9512e081e505386cd65ea89d9af0730af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:07:13 compute-0 podman[295762]: 2025-09-30 18:07:13.598482229 +0000 UTC m=+0.170101018 container attach 59cd88a7aaeb01ddbb7a4b23b62938f9512e081e505386cd65ea89d9af0730af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_chatelet, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:07:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:07:13.641Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:07:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v849: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Sep 30 18:07:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:07:14.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:14 compute-0 lvm[295853]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:07:14 compute-0 lvm[295853]: VG ceph_vg0 finished
Sep 30 18:07:14 compute-0 condescending_chatelet[295778]: {}
Sep 30 18:07:14 compute-0 systemd[1]: libpod-59cd88a7aaeb01ddbb7a4b23b62938f9512e081e505386cd65ea89d9af0730af.scope: Deactivated successfully.
Sep 30 18:07:14 compute-0 systemd[1]: libpod-59cd88a7aaeb01ddbb7a4b23b62938f9512e081e505386cd65ea89d9af0730af.scope: Consumed 1.292s CPU time.
Sep 30 18:07:14 compute-0 podman[295762]: 2025-09-30 18:07:14.395948948 +0000 UTC m=+0.967567737 container died 59cd88a7aaeb01ddbb7a4b23b62938f9512e081e505386cd65ea89d9af0730af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_chatelet, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Sep 30 18:07:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-d36c1540ee038d29eb550064dd3f667b7582bee323d6b25125cfcd6c17ed3319-merged.mount: Deactivated successfully.
Sep 30 18:07:14 compute-0 podman[295762]: 2025-09-30 18:07:14.437962431 +0000 UTC m=+1.009581220 container remove 59cd88a7aaeb01ddbb7a4b23b62938f9512e081e505386cd65ea89d9af0730af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_chatelet, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:07:14 compute-0 systemd[1]: libpod-conmon-59cd88a7aaeb01ddbb7a4b23b62938f9512e081e505386cd65ea89d9af0730af.scope: Deactivated successfully.
Sep 30 18:07:14 compute-0 sudo[295656]: pam_unix(sudo:session): session closed for user root
Sep 30 18:07:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:07:14 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:07:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:07:14 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:07:14 compute-0 sudo[295868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:07:14 compute-0 sudo[295868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:07:14 compute-0 sudo[295868]: pam_unix(sudo:session): session closed for user root
Sep 30 18:07:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:14 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:15 compute-0 ceph-mon[73755]: pgmap v849: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Sep 30 18:07:15 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:07:15 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:07:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:15 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:07:15.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:15 compute-0 nova_compute[265391]: 2025-09-30 18:07:15.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:07:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:07:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v850: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 18:07:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:07:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:07:16.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:07:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:16 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:17 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c002b30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:17 compute-0 ceph-mon[73755]: pgmap v850: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 18:07:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:07:17.127Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:07:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:07:17.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v851: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 18:07:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:07:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:07:18.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:07:18 compute-0 nova_compute[265391]: 2025-09-30 18:07:18.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:07:18 compute-0 nova_compute[265391]: 2025-09-30 18:07:18.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:07:18 compute-0 podman[295899]: 2025-09-30 18:07:18.561156375 +0000 UTC m=+0.086037031 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid)
Sep 30 18:07:18 compute-0 podman[295898]: 2025-09-30 18:07:18.570429986 +0000 UTC m=+0.095329212 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_managed=true)
Sep 30 18:07:18 compute-0 podman[295900]: 2025-09-30 18:07:18.573979919 +0000 UTC m=+0.094988294 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., architecture=x86_64, managed_by=edpm_ansible, io.openshift.expose-services=, name=ubi9-minimal)
Sep 30 18:07:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:18 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c001ca0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:07:18] "GET /metrics HTTP/1.1" 200 46579 "" "Prometheus/2.51.0"
Sep 30 18:07:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:07:18] "GET /metrics HTTP/1.1" 200 46579 "" "Prometheus/2.51.0"
Sep 30 18:07:18 compute-0 nova_compute[265391]: 2025-09-30 18:07:18.941 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:07:18 compute-0 nova_compute[265391]: 2025-09-30 18:07:18.942 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:07:18 compute-0 nova_compute[265391]: 2025-09-30 18:07:18.942 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:07:18 compute-0 nova_compute[265391]: 2025-09-30 18:07:18.943 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:07:18 compute-0 nova_compute[265391]: 2025-09-30 18:07:18.943 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:07:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:19 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:19 compute-0 ceph-mon[73755]: pgmap v851: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 18:07:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:07:19.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:07:19 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1560331497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:07:19 compute-0 nova_compute[265391]: 2025-09-30 18:07:19.402 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:07:19 compute-0 nova_compute[265391]: 2025-09-30 18:07:19.567 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:07:19 compute-0 nova_compute[265391]: 2025-09-30 18:07:19.568 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:07:19 compute-0 nova_compute[265391]: 2025-09-30 18:07:19.599 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.032s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:07:19 compute-0 nova_compute[265391]: 2025-09-30 18:07:19.600 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4719MB free_disk=39.9921875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:07:19 compute-0 nova_compute[265391]: 2025-09-30 18:07:19.600 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:07:19 compute-0 nova_compute[265391]: 2025-09-30 18:07:19.601 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:07:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v852: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 18:07:20 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1560331497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:07:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:07:20.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:07:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:20 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:20 compute-0 nova_compute[265391]: 2025-09-30 18:07:20.654 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:07:20 compute-0 nova_compute[265391]: 2025-09-30 18:07:20.654 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:07:19 up  1:10,  0 user,  load average: 0.84, 0.78, 0.91\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:07:20 compute-0 nova_compute[265391]: 2025-09-30 18:07:20.679 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:07:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:21 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c002b30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:21 compute-0 ceph-mon[73755]: pgmap v852: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 18:07:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:07:21 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3632844503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:07:21 compute-0 nova_compute[265391]: 2025-09-30 18:07:21.189 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:07:21 compute-0 nova_compute[265391]: 2025-09-30 18:07:21.195 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:07:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:07:21.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:21 compute-0 nova_compute[265391]: 2025-09-30 18:07:21.704 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 0, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:07:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v853: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:07:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-crash-compute-0[79187]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Sep 30 18:07:22 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3632844503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:07:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:07:22.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:22 compute-0 nova_compute[265391]: 2025-09-30 18:07:22.217 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:07:22 compute-0 nova_compute[265391]: 2025-09-30 18:07:22.218 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.617s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:07:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:07:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:07:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:22 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c001ca0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:23 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:23 compute-0 ceph-mon[73755]: pgmap v853: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:07:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:07:23 compute-0 nova_compute[265391]: 2025-09-30 18:07:23.218 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:07:23 compute-0 nova_compute[265391]: 2025-09-30 18:07:23.219 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:07:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:07:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:07:23.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:07:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:07:23.642Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:07:23 compute-0 nova_compute[265391]: 2025-09-30 18:07:23.728 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:07:23 compute-0 nova_compute[265391]: 2025-09-30 18:07:23.729 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:07:23 compute-0 nova_compute[265391]: 2025-09-30 18:07:23.729 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:07:23 compute-0 nova_compute[265391]: 2025-09-30 18:07:23.729 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:07:23 compute-0 nova_compute[265391]: 2025-09-30 18:07:23.730 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:07:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v854: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:07:24 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2960321938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:07:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:07:24.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:24 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:25 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c002b30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:25 compute-0 ceph-mon[73755]: pgmap v854: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:07:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:07:25.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:07:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v855: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:07:26 compute-0 ceph-mon[73755]: pgmap v855: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:07:26 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1941274574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:07:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:07:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:07:26.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:07:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:26 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c001ca0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:27 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:07:27.128Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:07:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:07:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:07:27.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:07:27 compute-0 sudo[296013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:07:27 compute-0 sudo[296013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:07:27 compute-0 sudo[296013]: pam_unix(sudo:session): session closed for user root
Sep 30 18:07:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v856: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:07:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:07:28.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:28 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:07:28] "GET /metrics HTTP/1.1" 200 46583 "" "Prometheus/2.51.0"
Sep 30 18:07:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:07:28] "GET /metrics HTTP/1.1" 200 46583 "" "Prometheus/2.51.0"
Sep 30 18:07:28 compute-0 ceph-mon[73755]: pgmap v856: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:07:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:29 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c002b30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:07:29.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:29 compute-0 sshd-session[296038]: Invalid user diana from 14.225.220.107 port 35936
Sep 30 18:07:29 compute-0 sshd-session[296038]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:07:29 compute-0 sshd-session[296038]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107
Sep 30 18:07:29 compute-0 podman[276673]: time="2025-09-30T18:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:07:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:07:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10232 "" "Go-http-client/1.1"
Sep 30 18:07:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v857: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:07:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:07:30.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:07:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:30 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c0030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:30 compute-0 ceph-mon[73755]: pgmap v857: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:07:31 compute-0 sshd-session[296038]: Failed password for invalid user diana from 14.225.220.107 port 35936 ssh2
Sep 30 18:07:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:31 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2200032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:07:31.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:31 compute-0 openstack_network_exporter[279566]: ERROR   18:07:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:07:31 compute-0 openstack_network_exporter[279566]: ERROR   18:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:07:31 compute-0 openstack_network_exporter[279566]: ERROR   18:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:07:31 compute-0 openstack_network_exporter[279566]: ERROR   18:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:07:31 compute-0 openstack_network_exporter[279566]: ERROR   18:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:07:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v858: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:07:31 compute-0 sshd-session[296038]: Received disconnect from 14.225.220.107 port 35936:11: Bye Bye [preauth]
Sep 30 18:07:31 compute-0 sshd-session[296038]: Disconnected from invalid user diana 14.225.220.107 port 35936 [preauth]
Sep 30 18:07:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:07:32.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:32 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:33 compute-0 ceph-mon[73755]: pgmap v858: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:07:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:33 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c002b30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:07:33.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:07:33.644Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:07:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v859: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:07:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:07:34.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:34 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c0030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:35 compute-0 ceph-mon[73755]: pgmap v859: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:07:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:35 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2200032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:07:35.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:07:35 compute-0 podman[296048]: 2025-09-30 18:07:35.54145117 +0000 UTC m=+0.082763225 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Sep 30 18:07:35 compute-0 podman[296047]: 2025-09-30 18:07:35.541878301 +0000 UTC m=+0.086830531 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Sep 30 18:07:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v860: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:07:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:07:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:07:36.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:07:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:07:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3205241886' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:07:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:07:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3205241886' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:07:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:36 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:37 compute-0 ceph-mon[73755]: pgmap v860: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:07:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3205241886' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:07:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3205241886' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:07:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:37 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c002b30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:07:37.129Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:07:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:07:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:07:37.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:07:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:07:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:07:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:07:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:07:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:07:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:07:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:07:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:07:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:07:37.442 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:07:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:07:37.442 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:07:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v861: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:07:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:07:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:07:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:07:38.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:07:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:38 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:07:38] "GET /metrics HTTP/1.1" 200 46582 "" "Prometheus/2.51.0"
Sep 30 18:07:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:07:38] "GET /metrics HTTP/1.1" 200 46582 "" "Prometheus/2.51.0"
Sep 30 18:07:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:07:38.967 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:82:19 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-8540cf85-00d6-4dc4-a235-89cd0b224d26', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8540cf85-00d6-4dc4-a235-89cd0b224d26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e5ecd2ee32c3491198baea5df005e7e5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=63eb3a66-bb05-42df-9005-47612d7f10be, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=049e58b6-619a-494f-be8e-cd60a8549d92) old=Port_Binding(mac=['fa:16:3e:5b:82:19'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-8540cf85-00d6-4dc4-a235-89cd0b224d26', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8540cf85-00d6-4dc4-a235-89cd0b224d26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e5ecd2ee32c3491198baea5df005e7e5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:07:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:07:38.968 166158 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 049e58b6-619a-494f-be8e-cd60a8549d92 in datapath 8540cf85-00d6-4dc4-a235-89cd0b224d26 updated
Sep 30 18:07:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:07:38.970 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8540cf85-00d6-4dc4-a235-89cd0b224d26, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:07:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:07:38.970 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[d46a6513-e31a-4baa-bafe-6d34b09eec88]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:07:39 compute-0 ceph-mon[73755]: pgmap v861: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:07:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:39 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:07:39.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v862: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:07:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:07:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:07:40.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:07:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:07:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:40 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:41 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c002b30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:41 compute-0 ceph-mon[73755]: pgmap v862: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:07:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:07:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:07:41.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:07:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v863: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:07:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:07:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:07:42.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:07:42 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:07:42.444 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:07:42 compute-0 podman[296106]: 2025-09-30 18:07:42.542384231 +0000 UTC m=+0.077849617 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20250930)
Sep 30 18:07:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:42 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:43 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:07:43.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:43 compute-0 ceph-mon[73755]: pgmap v863: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:07:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:07:43.644Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:07:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v864: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:07:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:07:44.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:44 compute-0 ceph-mon[73755]: pgmap v864: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:07:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:44 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:45 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c002b30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:07:45.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:07:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v865: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:07:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:07:46.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:46 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2180016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:46 compute-0 ceph-mon[73755]: pgmap v865: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:07:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:47 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2100016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:07:47.129Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:07:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:07:47.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:47 compute-0 sudo[296131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:07:47 compute-0 sudo[296131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:07:47 compute-0 sudo[296131]: pam_unix(sudo:session): session closed for user root
Sep 30 18:07:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v866: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:07:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:07:48.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:07:48.453 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:ea:a7 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-5f482ad5-36f0-4d8c-b057-4e11342b7a56', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5f482ad5-36f0-4d8c-b057-4e11342b7a56', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '20644a86c59b4259a037c783fe6fff20', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80b03d95-2212-48b8-9026-d1d054714f2e, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=7811f356-f3dd-4a3d-aa8e-55d122820071) old=Port_Binding(mac=['fa:16:3e:f9:ea:a7'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-5f482ad5-36f0-4d8c-b057-4e11342b7a56', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5f482ad5-36f0-4d8c-b057-4e11342b7a56', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '20644a86c59b4259a037c783fe6fff20', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:07:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:07:48.455 166158 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 7811f356-f3dd-4a3d-aa8e-55d122820071 in datapath 5f482ad5-36f0-4d8c-b057-4e11342b7a56 updated
Sep 30 18:07:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:07:48.457 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5f482ad5-36f0-4d8c-b057-4e11342b7a56, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:07:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:07:48.458 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[a8ab7cd5-f1a3-45df-9e86-f44f390e83ce]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:07:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:48 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:07:48] "GET /metrics HTTP/1.1" 200 46582 "" "Prometheus/2.51.0"
Sep 30 18:07:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:07:48] "GET /metrics HTTP/1.1" 200 46582 "" "Prometheus/2.51.0"
Sep 30 18:07:49 compute-0 ceph-mon[73755]: pgmap v866: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:07:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:49 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c002b30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:07:49.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:49 compute-0 podman[296158]: 2025-09-30 18:07:49.529306438 +0000 UTC m=+0.065231699 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20250930)
Sep 30 18:07:49 compute-0 podman[296159]: 2025-09-30 18:07:49.558719014 +0000 UTC m=+0.080786344 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, release=1755695350, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.openshift.expose-services=, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Sep 30 18:07:49 compute-0 podman[296157]: 2025-09-30 18:07:49.56663921 +0000 UTC m=+0.105399044 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:07:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v867: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:07:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:07:50.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:07:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:50 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2180016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:51 compute-0 ceph-mon[73755]: pgmap v867: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:07:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:51 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2100016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:07:51.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v868: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:07:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:07:52.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:07:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:07:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:52 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:53 compute-0 ceph-mon[73755]: pgmap v868: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:07:53 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:07:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:53 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c002b30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:07:53.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:07:53.645Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:07:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v869: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:07:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:07:54.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:07:54.272 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:07:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:07:54.272 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:07:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:07:54.272 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:07:54 compute-0 unix_chkpwd[296223]: password check failed for user (root)
Sep 30 18:07:54 compute-0 sshd-session[296218]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158  user=root
Sep 30 18:07:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:54 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2180016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:55 compute-0 ceph-mon[73755]: pgmap v869: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:07:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:55 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2100016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:07:55.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:07:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v870: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:07:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 18:07:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:07:56.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 18:07:56 compute-0 sshd-session[296218]: Failed password for root from 45.252.249.158 port 36594 ssh2
Sep 30 18:07:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:56 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:57 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c002b30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:07:57.131Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:07:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:07:57.131Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:07:57 compute-0 ceph-mon[73755]: pgmap v870: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:07:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:07:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:07:57.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:07:57 compute-0 sshd-session[296218]: Received disconnect from 45.252.249.158 port 36594:11: Bye Bye [preauth]
Sep 30 18:07:57 compute-0 sshd-session[296218]: Disconnected from authenticating user root 45.252.249.158 port 36594 [preauth]
Sep 30 18:07:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:07:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3752378017' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:07:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:07:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3752378017' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:07:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v871: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:07:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3752378017' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:07:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3752378017' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:07:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 18:07:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:07:58.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 18:07:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:58 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:07:58] "GET /metrics HTTP/1.1" 200 46583 "" "Prometheus/2.51.0"
Sep 30 18:07:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:07:58] "GET /metrics HTTP/1.1" 200 46583 "" "Prometheus/2.51.0"
Sep 30 18:07:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:07:59 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:07:59 compute-0 ceph-mon[73755]: pgmap v871: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:07:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:07:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:07:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:07:59.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:07:59 compute-0 podman[276673]: time="2025-09-30T18:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:07:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:07:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10229 "" "Go-http-client/1.1"
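These two podman lines are libpod REST calls made over the API socket (the podman_exporter config logged at 18:08:06 below points CONTAINER_HOST at unix:///run/podman/podman.sock). A sketch of the same container-list call over that unix socket follows; the raw HTTP framing is illustrative, only the socket path and the request path come from the log.

import socket

# Sketch: issue the logged libpod call over the podman API socket.
# Reading /run/podman/podman.sock normally requires root.
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect("/run/podman/podman.sock")
request = (
    "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0"
    "&namespace=false&size=false&sync=false HTTP/1.1\r\n"
    "Host: localhost\r\nConnection: close\r\n\r\n"
)
sock.sendall(request.encode())
response = b""
while chunk := sock.recv(4096):
    response += chunk
sock.close()
print(response.split(b"\r\n\r\n", 1)[0].decode())   # status line and headers only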
Sep 30 18:07:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v872: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:08:00 compute-0 ceph-mon[73755]: pgmap v872: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:08:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:08:00.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:08:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:00 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:01 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c002b30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:08:01.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:01 compute-0 openstack_network_exporter[279566]: ERROR   18:08:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:08:01 compute-0 openstack_network_exporter[279566]: ERROR   18:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:08:01 compute-0 openstack_network_exporter[279566]: ERROR   18:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:08:01 compute-0 openstack_network_exporter[279566]: ERROR   18:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:08:01 compute-0 openstack_network_exporter[279566]: ERROR   18:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:08:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v873: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:08:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:08:02.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/180802 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 18:08:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:02 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:02 compute-0 ceph-mon[73755]: pgmap v873: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:08:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:03 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:08:03.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:08:03.646Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:08:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:08:03.646Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
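The Alertmanager dispatcher above keeps retrying a webhook POST to the ceph-dashboard receiver on compute-1 and gives up on timeouts. A sketch that reproduces the failing call is below; the URL and the timeout behaviour come from the log, while the payload shape and the 5-second timeout are assumptions.

import json
import urllib.request

# Sketch of the webhook notify the dispatcher retries above.
# Payload shape and timeout value are assumptions; only the URL is from the log.
req = urllib.request.Request(
    "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver",
    data=json.dumps({"alerts": []}).encode(),
    headers={"Content-Type": "application/json"},
)
try:
    urllib.request.urlopen(req, timeout=5)
except OSError as exc:          # URLError and socket timeouts are both OSError subclasses
    print("notify failed:", exc)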
Sep 30 18:08:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v874: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:08:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:08:04.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:04 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:04 compute-0 ceph-mon[73755]: pgmap v874: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:08:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:05 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c002b30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:08:05.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:08:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v875: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:08:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:08:06.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:06 compute-0 podman[296236]: 2025-09-30 18:08:06.534090252 +0000 UTC m=+0.074436358 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250930)
Sep 30 18:08:06 compute-0 podman[296237]: 2025-09-30 18:08:06.543306022 +0000 UTC m=+0.071623175 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Sep 30 18:08:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:06 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:07 compute-0 ceph-mon[73755]: pgmap v875: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:08:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:08:07.132Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:08:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:07 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:08:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:08:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:08:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:08:07.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
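The pg_autoscaler block above logs, for each pool, the fraction of raw space used, the pool's bias, and a raw pg target that is then quantized. The logged pairs share a constant factor of 200 (for example 0.0021557249951162337 / 1.0778624975581169e-05 == 200.0), so the raw target is usage_ratio * bias * 200; whether that 200 is mon_target_pg_per_osd (100) times two OSDs backing this 40 GiB cluster is an assumption. A worked reconstruction:

# Reconstruction of the autoscaler arithmetic visible in the lines above.
# The cluster-wide budget of 200 is inferred from the logged ratios; its
# decomposition into per-OSD targets is an assumption.
def raw_pg_target(usage_ratio, bias, cluster_pg_budget=200):
    # the mgr quantizes this raw value to a power of two and applies pool minimums
    return usage_ratio * bias * cluster_pg_budget

print(raw_pg_target(0.000998787452383278, 1.0))    # ~0.1998  -> pool 'images', quantized to 32
print(raw_pg_target(7.630884938464543e-07, 4.0))   # ~0.00061 -> 'cephfs.cephfs.meta', quantized to 16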
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:08:07
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['backups', 'vms', 'volumes', 'images', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', '.nfs']
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:08:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v876: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:08:07 compute-0 sudo[296290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:08:07 compute-0 sudo[296290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:08:07 compute-0 sudo[296290]: pam_unix(sudo:session): session closed for user root
Sep 30 18:08:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:08:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:08:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:08:08.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:08:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:08 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:08:08] "GET /metrics HTTP/1.1" 200 46586 "" "Prometheus/2.51.0"
Sep 30 18:08:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:08:08] "GET /metrics HTTP/1.1" 200 46586 "" "Prometheus/2.51.0"
Sep 30 18:08:09 compute-0 ceph-mon[73755]: pgmap v876: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:08:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:09 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c002b30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:08:09.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v877: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Sep 30 18:08:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:08:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:08:10.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:08:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:08:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:10 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:11 compute-0 ceph-mon[73755]: pgmap v877: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Sep 30 18:08:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:11 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:11 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:08:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:08:11.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v878: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 18:08:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:08:12.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:12 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220001b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:13 compute-0 ceph-mon[73755]: pgmap v878: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 18:08:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:13 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c002b30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:08:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:08:13.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:08:13 compute-0 podman[296322]: 2025-09-30 18:08:13.517327045 +0000 UTC m=+0.060277540 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Sep 30 18:08:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:08:13.647Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:08:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v879: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Sep 30 18:08:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:14 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:08:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:14 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
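The ganesha reaper lines track grace handling around the NFS backend restart that haproxy reported at 18:08:02: grace starts at 18:08:11 with a 90-second duration, client reclaim state is reloaded from the backend, and the check above finds reclaim complete(0) and clid count(0); grace is then lifted early at 18:08:17 ("NFS Server Now NOT IN GRACE"). A minimal sketch of the early-lift condition these lines suggest follows; it is an interpretation of the log, not ganesha's actual code.

# Interpretation of the reaper check above: with no clients holding reclaimable
# state, the 90-second grace window can be lifted immediately.
def can_lift_grace(reclaim_complete: int, clid_count: int) -> bool:
    return bool(reclaim_complete) or clid_count == 0

print(can_lift_grace(0, 0))   # True -> "NFS Server Now NOT IN GRACE" at 18:08:17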
Sep 30 18:08:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:08:14.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:14 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:14 compute-0 sudo[296342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:08:14 compute-0 sudo[296342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:08:14 compute-0 sudo[296342]: pam_unix(sudo:session): session closed for user root
Sep 30 18:08:14 compute-0 sudo[296367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Sep 30 18:08:14 compute-0 sudo[296367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:08:15 compute-0 ceph-mon[73755]: pgmap v879: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Sep 30 18:08:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:15 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:15 compute-0 sudo[296367]: pam_unix(sudo:session): session closed for user root
Sep 30 18:08:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:08:15 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:08:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:08:15 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:08:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:08:15.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 18:08:15 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:08:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 18:08:15 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:08:15 compute-0 sudo[296414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:08:15 compute-0 sudo[296414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:08:15 compute-0 sudo[296414]: pam_unix(sudo:session): session closed for user root
Sep 30 18:08:15 compute-0 nova_compute[265391]: 2025-09-30 18:08:15.429 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:08:15 compute-0 sudo[296440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:08:15 compute-0 sudo[296440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:08:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:08:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v880: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Sep 30 18:08:15 compute-0 sudo[296440]: pam_unix(sudo:session): session closed for user root
Sep 30 18:08:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:08:16 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:08:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:08:16 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:08:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:08:16 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:08:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:08:16 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:08:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:08:16 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:08:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:08:16 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:08:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:08:16 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:08:16 compute-0 sudo[296499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:08:16 compute-0 sudo[296499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:08:16 compute-0 sudo[296499]: pam_unix(sudo:session): session closed for user root
Sep 30 18:08:16 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:08:16 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:08:16 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:08:16 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:08:16 compute-0 ceph-mon[73755]: pgmap v880: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Sep 30 18:08:16 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:08:16 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:08:16 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:08:16 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:08:16 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:08:16 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:08:16 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:08:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:08:16.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:16 compute-0 sudo[296524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:08:16 compute-0 sudo[296524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:08:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:16 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220001b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:16 compute-0 podman[296589]: 2025-09-30 18:08:16.718816919 +0000 UTC m=+0.038083452 container create a5e9f1c966c0cef3979f2e4012d8ed71676844ca3a905e59b02358ae487d9d86 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:08:16 compute-0 systemd[1]: Started libpod-conmon-a5e9f1c966c0cef3979f2e4012d8ed71676844ca3a905e59b02358ae487d9d86.scope.
Sep 30 18:08:16 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:08:16 compute-0 podman[296589]: 2025-09-30 18:08:16.702040362 +0000 UTC m=+0.021306915 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:08:16 compute-0 podman[296589]: 2025-09-30 18:08:16.81529294 +0000 UTC m=+0.134559533 container init a5e9f1c966c0cef3979f2e4012d8ed71676844ca3a905e59b02358ae487d9d86 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_volhard, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:08:16 compute-0 podman[296589]: 2025-09-30 18:08:16.82527004 +0000 UTC m=+0.144536573 container start a5e9f1c966c0cef3979f2e4012d8ed71676844ca3a905e59b02358ae487d9d86 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_volhard, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Sep 30 18:08:16 compute-0 podman[296589]: 2025-09-30 18:08:16.829156991 +0000 UTC m=+0.148423564 container attach a5e9f1c966c0cef3979f2e4012d8ed71676844ca3a905e59b02358ae487d9d86 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_volhard, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 18:08:16 compute-0 systemd[1]: libpod-a5e9f1c966c0cef3979f2e4012d8ed71676844ca3a905e59b02358ae487d9d86.scope: Deactivated successfully.
Sep 30 18:08:16 compute-0 great_volhard[296606]: 167 167
Sep 30 18:08:16 compute-0 conmon[296606]: conmon a5e9f1c966c0cef3979f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a5e9f1c966c0cef3979f2e4012d8ed71676844ca3a905e59b02358ae487d9d86.scope/container/memory.events
Sep 30 18:08:16 compute-0 podman[296589]: 2025-09-30 18:08:16.834054829 +0000 UTC m=+0.153321372 container died a5e9f1c966c0cef3979f2e4012d8ed71676844ca3a905e59b02358ae487d9d86 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_volhard, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Sep 30 18:08:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-19451438d7ce60d70c93db4702225a3091dc1016ac5fe7dc38e9690860e9fe9b-merged.mount: Deactivated successfully.
Sep 30 18:08:16 compute-0 podman[296589]: 2025-09-30 18:08:16.885234331 +0000 UTC m=+0.204500874 container remove a5e9f1c966c0cef3979f2e4012d8ed71676844ca3a905e59b02358ae487d9d86 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_volhard, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Sep 30 18:08:16 compute-0 systemd[1]: libpod-conmon-a5e9f1c966c0cef3979f2e4012d8ed71676844ca3a905e59b02358ae487d9d86.scope: Deactivated successfully.
Sep 30 18:08:17 compute-0 podman[296632]: 2025-09-30 18:08:17.078403779 +0000 UTC m=+0.050692370 container create 55dfa8cf4f11ec1e431771108259f225bb346f3e58c2929a5bef6fbb6d2be634 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_euler, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 18:08:17 compute-0 systemd[1]: Started libpod-conmon-55dfa8cf4f11ec1e431771108259f225bb346f3e58c2929a5bef6fbb6d2be634.scope.
Sep 30 18:08:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:08:17.133Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:08:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:17 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c002b30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:17 compute-0 podman[296632]: 2025-09-30 18:08:17.054591549 +0000 UTC m=+0.026880130 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:08:17 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aa2efde7995d576d91d907d691a48cd0797bbccc97674c07f378314cf56f075/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aa2efde7995d576d91d907d691a48cd0797bbccc97674c07f378314cf56f075/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aa2efde7995d576d91d907d691a48cd0797bbccc97674c07f378314cf56f075/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aa2efde7995d576d91d907d691a48cd0797bbccc97674c07f378314cf56f075/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aa2efde7995d576d91d907d691a48cd0797bbccc97674c07f378314cf56f075/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:08:17 compute-0 podman[296632]: 2025-09-30 18:08:17.189193663 +0000 UTC m=+0.161482264 container init 55dfa8cf4f11ec1e431771108259f225bb346f3e58c2929a5bef6fbb6d2be634 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_euler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Sep 30 18:08:17 compute-0 podman[296632]: 2025-09-30 18:08:17.198089175 +0000 UTC m=+0.170377746 container start 55dfa8cf4f11ec1e431771108259f225bb346f3e58c2929a5bef6fbb6d2be634 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_euler, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Sep 30 18:08:17 compute-0 podman[296632]: 2025-09-30 18:08:17.202791237 +0000 UTC m=+0.175079848 container attach 55dfa8cf4f11ec1e431771108259f225bb346f3e58c2929a5bef6fbb6d2be634 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_euler, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True)
Sep 30 18:08:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:17 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 18:08:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:08:17.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:17 compute-0 loving_euler[296649]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:08:17 compute-0 loving_euler[296649]: --> All data devices are unavailable
Sep 30 18:08:17 compute-0 systemd[1]: libpod-55dfa8cf4f11ec1e431771108259f225bb346f3e58c2929a5bef6fbb6d2be634.scope: Deactivated successfully.
Sep 30 18:08:17 compute-0 podman[296665]: 2025-09-30 18:08:17.611993659 +0000 UTC m=+0.024886819 container died 55dfa8cf4f11ec1e431771108259f225bb346f3e58c2929a5bef6fbb6d2be634 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_euler, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:08:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-7aa2efde7995d576d91d907d691a48cd0797bbccc97674c07f378314cf56f075-merged.mount: Deactivated successfully.
Sep 30 18:08:17 compute-0 podman[296665]: 2025-09-30 18:08:17.656750894 +0000 UTC m=+0.069644034 container remove 55dfa8cf4f11ec1e431771108259f225bb346f3e58c2929a5bef6fbb6d2be634 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_euler, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 18:08:17 compute-0 systemd[1]: libpod-conmon-55dfa8cf4f11ec1e431771108259f225bb346f3e58c2929a5bef6fbb6d2be634.scope: Deactivated successfully.
Sep 30 18:08:17 compute-0 sudo[296524]: pam_unix(sudo:session): session closed for user root
Sep 30 18:08:17 compute-0 sudo[296681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:08:17 compute-0 sudo[296681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:08:17 compute-0 sudo[296681]: pam_unix(sudo:session): session closed for user root
Sep 30 18:08:17 compute-0 sudo[296706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:08:17 compute-0 sudo[296706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:08:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v881: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Sep 30 18:08:18 compute-0 podman[296772]: 2025-09-30 18:08:18.230678133 +0000 UTC m=+0.043066632 container create d3dfe473a6e1983f94db54128bf8ccb5ed2c2e3dbf07631366a9470363429be4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_yonath, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Sep 30 18:08:18 compute-0 systemd[1]: Started libpod-conmon-d3dfe473a6e1983f94db54128bf8ccb5ed2c2e3dbf07631366a9470363429be4.scope.
Sep 30 18:08:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:08:18.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:18 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:08:18 compute-0 podman[296772]: 2025-09-30 18:08:18.210153959 +0000 UTC m=+0.022542468 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:08:18 compute-0 podman[296772]: 2025-09-30 18:08:18.319248418 +0000 UTC m=+0.131636957 container init d3dfe473a6e1983f94db54128bf8ccb5ed2c2e3dbf07631366a9470363429be4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_yonath, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:08:18 compute-0 podman[296772]: 2025-09-30 18:08:18.328755236 +0000 UTC m=+0.141143725 container start d3dfe473a6e1983f94db54128bf8ccb5ed2c2e3dbf07631366a9470363429be4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:08:18 compute-0 podman[296772]: 2025-09-30 18:08:18.332122194 +0000 UTC m=+0.144510723 container attach d3dfe473a6e1983f94db54128bf8ccb5ed2c2e3dbf07631366a9470363429be4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Sep 30 18:08:18 compute-0 vibrant_yonath[296788]: 167 167
Sep 30 18:08:18 compute-0 systemd[1]: libpod-d3dfe473a6e1983f94db54128bf8ccb5ed2c2e3dbf07631366a9470363429be4.scope: Deactivated successfully.
Sep 30 18:08:18 compute-0 podman[296772]: 2025-09-30 18:08:18.334728101 +0000 UTC m=+0.147116590 container died d3dfe473a6e1983f94db54128bf8ccb5ed2c2e3dbf07631366a9470363429be4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_yonath, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:08:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-621dea3210d38ba9e1699fb84d11738f5865690db6ea1b3c55595ed5fdec3c51-merged.mount: Deactivated successfully.
Sep 30 18:08:18 compute-0 podman[296772]: 2025-09-30 18:08:18.377139725 +0000 UTC m=+0.189528224 container remove d3dfe473a6e1983f94db54128bf8ccb5ed2c2e3dbf07631366a9470363429be4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 18:08:18 compute-0 systemd[1]: libpod-conmon-d3dfe473a6e1983f94db54128bf8ccb5ed2c2e3dbf07631366a9470363429be4.scope: Deactivated successfully.
Sep 30 18:08:18 compute-0 nova_compute[265391]: 2025-09-30 18:08:18.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:08:18 compute-0 podman[296813]: 2025-09-30 18:08:18.555991531 +0000 UTC m=+0.049423298 container create a402c184c68fd4bcf865abbf87fee7d0efd71c7a710fc192a436ca2b08b15cea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:08:18 compute-0 systemd[1]: Started libpod-conmon-a402c184c68fd4bcf865abbf87fee7d0efd71c7a710fc192a436ca2b08b15cea.scope.
Sep 30 18:08:18 compute-0 podman[296813]: 2025-09-30 18:08:18.536130794 +0000 UTC m=+0.029562591 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:08:18 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:08:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/550703b032e0d16984cf97847fbe5b2e2657c1813926ffddc70d05b987f201e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:08:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/550703b032e0d16984cf97847fbe5b2e2657c1813926ffddc70d05b987f201e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:08:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/550703b032e0d16984cf97847fbe5b2e2657c1813926ffddc70d05b987f201e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:08:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/550703b032e0d16984cf97847fbe5b2e2657c1813926ffddc70d05b987f201e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:08:18 compute-0 podman[296813]: 2025-09-30 18:08:18.65353917 +0000 UTC m=+0.146970987 container init a402c184c68fd4bcf865abbf87fee7d0efd71c7a710fc192a436ca2b08b15cea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:08:18 compute-0 podman[296813]: 2025-09-30 18:08:18.662932074 +0000 UTC m=+0.156363841 container start a402c184c68fd4bcf865abbf87fee7d0efd71c7a710fc192a436ca2b08b15cea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_lederberg, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:08:18 compute-0 podman[296813]: 2025-09-30 18:08:18.666737514 +0000 UTC m=+0.160169311 container attach a402c184c68fd4bcf865abbf87fee7d0efd71c7a710fc192a436ca2b08b15cea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_lederberg, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Sep 30 18:08:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:18 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c002c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:08:18] "GET /metrics HTTP/1.1" 200 46586 "" "Prometheus/2.51.0"
Sep 30 18:08:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:08:18] "GET /metrics HTTP/1.1" 200 46586 "" "Prometheus/2.51.0"
Sep 30 18:08:18 compute-0 quirky_lederberg[296830]: {
Sep 30 18:08:18 compute-0 quirky_lederberg[296830]:     "0": [
Sep 30 18:08:18 compute-0 quirky_lederberg[296830]:         {
Sep 30 18:08:18 compute-0 quirky_lederberg[296830]:             "devices": [
Sep 30 18:08:18 compute-0 quirky_lederberg[296830]:                 "/dev/loop3"
Sep 30 18:08:18 compute-0 quirky_lederberg[296830]:             ],
Sep 30 18:08:18 compute-0 quirky_lederberg[296830]:             "lv_name": "ceph_lv0",
Sep 30 18:08:18 compute-0 quirky_lederberg[296830]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:08:18 compute-0 quirky_lederberg[296830]:             "lv_size": "21470642176",
Sep 30 18:08:18 compute-0 quirky_lederberg[296830]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:08:18 compute-0 quirky_lederberg[296830]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:08:18 compute-0 quirky_lederberg[296830]:             "name": "ceph_lv0",
Sep 30 18:08:18 compute-0 quirky_lederberg[296830]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:08:18 compute-0 quirky_lederberg[296830]:             "tags": {
Sep 30 18:08:18 compute-0 quirky_lederberg[296830]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:08:18 compute-0 quirky_lederberg[296830]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:08:18 compute-0 quirky_lederberg[296830]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:08:18 compute-0 quirky_lederberg[296830]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:08:18 compute-0 quirky_lederberg[296830]:                 "ceph.cluster_name": "ceph",
Sep 30 18:08:18 compute-0 quirky_lederberg[296830]:                 "ceph.crush_device_class": "",
Sep 30 18:08:18 compute-0 quirky_lederberg[296830]:                 "ceph.encrypted": "0",
Sep 30 18:08:18 compute-0 quirky_lederberg[296830]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:08:18 compute-0 quirky_lederberg[296830]:                 "ceph.osd_id": "0",
Sep 30 18:08:18 compute-0 quirky_lederberg[296830]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:08:18 compute-0 quirky_lederberg[296830]:                 "ceph.type": "block",
Sep 30 18:08:18 compute-0 quirky_lederberg[296830]:                 "ceph.vdo": "0",
Sep 30 18:08:18 compute-0 quirky_lederberg[296830]:                 "ceph.with_tpm": "0"
Sep 30 18:08:18 compute-0 quirky_lederberg[296830]:             },
Sep 30 18:08:18 compute-0 quirky_lederberg[296830]:             "type": "block",
Sep 30 18:08:18 compute-0 quirky_lederberg[296830]:             "vg_name": "ceph_vg0"
Sep 30 18:08:18 compute-0 quirky_lederberg[296830]:         }
Sep 30 18:08:18 compute-0 quirky_lederberg[296830]:     ]
Sep 30 18:08:18 compute-0 quirky_lederberg[296830]: }
Sep 30 18:08:18 compute-0 systemd[1]: libpod-a402c184c68fd4bcf865abbf87fee7d0efd71c7a710fc192a436ca2b08b15cea.scope: Deactivated successfully.
Sep 30 18:08:18 compute-0 podman[296813]: 2025-09-30 18:08:18.968971931 +0000 UTC m=+0.462403778 container died a402c184c68fd4bcf865abbf87fee7d0efd71c7a710fc192a436ca2b08b15cea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_lederberg, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Sep 30 18:08:18 compute-0 ceph-mon[73755]: pgmap v881: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Sep 30 18:08:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-550703b032e0d16984cf97847fbe5b2e2657c1813926ffddc70d05b987f201e0-merged.mount: Deactivated successfully.
Sep 30 18:08:19 compute-0 podman[296813]: 2025-09-30 18:08:19.01233897 +0000 UTC m=+0.505770757 container remove a402c184c68fd4bcf865abbf87fee7d0efd71c7a710fc192a436ca2b08b15cea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_lederberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:08:19 compute-0 systemd[1]: libpod-conmon-a402c184c68fd4bcf865abbf87fee7d0efd71c7a710fc192a436ca2b08b15cea.scope: Deactivated successfully.
Sep 30 18:08:19 compute-0 sudo[296706]: pam_unix(sudo:session): session closed for user root
Sep 30 18:08:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:19 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:19 compute-0 sudo[296852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:08:19 compute-0 sudo[296852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:08:19 compute-0 sudo[296852]: pam_unix(sudo:session): session closed for user root
Sep 30 18:08:19 compute-0 nova_compute[265391]: 2025-09-30 18:08:19.244 2 DEBUG oslo_concurrency.lockutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Acquiring lock "d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:08:19 compute-0 nova_compute[265391]: 2025-09-30 18:08:19.245 2 DEBUG oslo_concurrency.lockutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Lock "d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:08:19 compute-0 sudo[296877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:08:19 compute-0 sudo[296877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:08:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:08:19.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:19 compute-0 nova_compute[265391]: 2025-09-30 18:08:19.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:08:19 compute-0 podman[296942]: 2025-09-30 18:08:19.726405237 +0000 UTC m=+0.055529397 container create 877bcdb651644c5d67079e7d530b1e110c3be89853a59617acf780a36863bb97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_galois, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Sep 30 18:08:19 compute-0 nova_compute[265391]: 2025-09-30 18:08:19.753 2 DEBUG nova.compute.manager [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Sep 30 18:08:19 compute-0 systemd[1]: Started libpod-conmon-877bcdb651644c5d67079e7d530b1e110c3be89853a59617acf780a36863bb97.scope.
Sep 30 18:08:19 compute-0 podman[296942]: 2025-09-30 18:08:19.695066651 +0000 UTC m=+0.024190851 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:08:19 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:08:19 compute-0 podman[296942]: 2025-09-30 18:08:19.830829105 +0000 UTC m=+0.159953325 container init 877bcdb651644c5d67079e7d530b1e110c3be89853a59617acf780a36863bb97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_galois, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:08:19 compute-0 podman[296942]: 2025-09-30 18:08:19.841152584 +0000 UTC m=+0.170276714 container start 877bcdb651644c5d67079e7d530b1e110c3be89853a59617acf780a36863bb97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_galois, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True)
Sep 30 18:08:19 compute-0 podman[296942]: 2025-09-30 18:08:19.845166738 +0000 UTC m=+0.174290978 container attach 877bcdb651644c5d67079e7d530b1e110c3be89853a59617acf780a36863bb97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_galois, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:08:19 compute-0 systemd[1]: libpod-877bcdb651644c5d67079e7d530b1e110c3be89853a59617acf780a36863bb97.scope: Deactivated successfully.
Sep 30 18:08:19 compute-0 musing_galois[296972]: 167 167
Sep 30 18:08:19 compute-0 conmon[296972]: conmon 877bcdb651644c5d6707 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-877bcdb651644c5d67079e7d530b1e110c3be89853a59617acf780a36863bb97.scope/container/memory.events
Sep 30 18:08:19 compute-0 podman[296942]: 2025-09-30 18:08:19.848513005 +0000 UTC m=+0.177637155 container died 877bcdb651644c5d67079e7d530b1e110c3be89853a59617acf780a36863bb97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_galois, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 18:08:19 compute-0 podman[296960]: 2025-09-30 18:08:19.849485541 +0000 UTC m=+0.076307868 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Sep 30 18:08:19 compute-0 podman[296961]: 2025-09-30 18:08:19.857530799 +0000 UTC m=+0.078163755 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.openshift.expose-services=)
Sep 30 18:08:19 compute-0 podman[296957]: 2025-09-30 18:08:19.864889801 +0000 UTC m=+0.095044665 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 18:08:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0f1fb8d0969c78714b16c164cc917e710a37e26207e49695a2ba11be1daa5ba-merged.mount: Deactivated successfully.
Sep 30 18:08:19 compute-0 podman[296942]: 2025-09-30 18:08:19.889525942 +0000 UTC m=+0.218650062 container remove 877bcdb651644c5d67079e7d530b1e110c3be89853a59617acf780a36863bb97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_galois, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:08:19 compute-0 systemd[1]: libpod-conmon-877bcdb651644c5d67079e7d530b1e110c3be89853a59617acf780a36863bb97.scope: Deactivated successfully.
Sep 30 18:08:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v882: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 18:08:19 compute-0 nova_compute[265391]: 2025-09-30 18:08:19.941 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:08:19 compute-0 nova_compute[265391]: 2025-09-30 18:08:19.942 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:08:19 compute-0 nova_compute[265391]: 2025-09-30 18:08:19.942 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:08:19 compute-0 nova_compute[265391]: 2025-09-30 18:08:19.942 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:08:19 compute-0 nova_compute[265391]: 2025-09-30 18:08:19.942 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:08:20 compute-0 podman[297042]: 2025-09-30 18:08:20.052974576 +0000 UTC m=+0.042165208 container create 0efc7bf9cc1ccf0506d5f4d05df9cae2bcce35dce1921449fef44f647c7a95f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_chaplygin, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Sep 30 18:08:20 compute-0 systemd[1]: Started libpod-conmon-0efc7bf9cc1ccf0506d5f4d05df9cae2bcce35dce1921449fef44f647c7a95f2.scope.
Sep 30 18:08:20 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:08:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c523509b73d09aa82ae7b03c8209de3641c4b308b07318a20946a4295e237ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:08:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c523509b73d09aa82ae7b03c8209de3641c4b308b07318a20946a4295e237ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:08:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c523509b73d09aa82ae7b03c8209de3641c4b308b07318a20946a4295e237ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:08:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c523509b73d09aa82ae7b03c8209de3641c4b308b07318a20946a4295e237ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:08:20 compute-0 podman[297042]: 2025-09-30 18:08:20.035377908 +0000 UTC m=+0.024568540 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:08:20 compute-0 podman[297042]: 2025-09-30 18:08:20.148961895 +0000 UTC m=+0.138152557 container init 0efc7bf9cc1ccf0506d5f4d05df9cae2bcce35dce1921449fef44f647c7a95f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_chaplygin, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Sep 30 18:08:20 compute-0 podman[297042]: 2025-09-30 18:08:20.154648723 +0000 UTC m=+0.143839345 container start 0efc7bf9cc1ccf0506d5f4d05df9cae2bcce35dce1921449fef44f647c7a95f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Sep 30 18:08:20 compute-0 podman[297042]: 2025-09-30 18:08:20.15798322 +0000 UTC m=+0.147173862 container attach 0efc7bf9cc1ccf0506d5f4d05df9cae2bcce35dce1921449fef44f647c7a95f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_chaplygin, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Sep 30 18:08:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:08:20.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:08:20 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1884273996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:08:20 compute-0 nova_compute[265391]: 2025-09-30 18:08:20.388 2 DEBUG oslo_concurrency.lockutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:08:20 compute-0 nova_compute[265391]: 2025-09-30 18:08:20.389 2 DEBUG oslo_concurrency.lockutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:08:20 compute-0 nova_compute[265391]: 2025-09-30 18:08:20.399 2 DEBUG nova.virt.hardware [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Sep 30 18:08:20 compute-0 nova_compute[265391]: 2025-09-30 18:08:20.400 2 INFO nova.compute.claims [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Claim successful on node compute-0.ctlplane.example.com
Sep 30 18:08:20 compute-0 nova_compute[265391]: 2025-09-30 18:08:20.427 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:08:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:08:20 compute-0 nova_compute[265391]: 2025-09-30 18:08:20.597 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:08:20 compute-0 nova_compute[265391]: 2025-09-30 18:08:20.599 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:08:20 compute-0 nova_compute[265391]: 2025-09-30 18:08:20.622 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.023s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:08:20 compute-0 nova_compute[265391]: 2025-09-30 18:08:20.623 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4646MB free_disk=39.9921875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:08:20 compute-0 nova_compute[265391]: 2025-09-30 18:08:20.623 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:08:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:20 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220001b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:20 compute-0 lvm[297155]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:08:20 compute-0 lvm[297155]: VG ceph_vg0 finished
Sep 30 18:08:20 compute-0 festive_chaplygin[297078]: {}
Sep 30 18:08:20 compute-0 systemd[1]: libpod-0efc7bf9cc1ccf0506d5f4d05df9cae2bcce35dce1921449fef44f647c7a95f2.scope: Deactivated successfully.
Sep 30 18:08:20 compute-0 podman[297042]: 2025-09-30 18:08:20.927471849 +0000 UTC m=+0.916662481 container died 0efc7bf9cc1ccf0506d5f4d05df9cae2bcce35dce1921449fef44f647c7a95f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_chaplygin, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:08:20 compute-0 systemd[1]: libpod-0efc7bf9cc1ccf0506d5f4d05df9cae2bcce35dce1921449fef44f647c7a95f2.scope: Consumed 1.227s CPU time.
Sep 30 18:08:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c523509b73d09aa82ae7b03c8209de3641c4b308b07318a20946a4295e237ad-merged.mount: Deactivated successfully.
Sep 30 18:08:20 compute-0 podman[297042]: 2025-09-30 18:08:20.975863379 +0000 UTC m=+0.965054011 container remove 0efc7bf9cc1ccf0506d5f4d05df9cae2bcce35dce1921449fef44f647c7a95f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:08:20 compute-0 systemd[1]: libpod-conmon-0efc7bf9cc1ccf0506d5f4d05df9cae2bcce35dce1921449fef44f647c7a95f2.scope: Deactivated successfully.
Sep 30 18:08:20 compute-0 ceph-mon[73755]: pgmap v882: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 18:08:20 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1884273996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:08:21 compute-0 sudo[296877]: pam_unix(sudo:session): session closed for user root
Sep 30 18:08:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:08:21 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:08:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:08:21 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:08:21 compute-0 sudo[297172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:08:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:21 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c002b30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:21 compute-0 sudo[297172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:08:21 compute-0 sudo[297172]: pam_unix(sudo:session): session closed for user root
Sep 30 18:08:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:08:21.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:21 compute-0 nova_compute[265391]: 2025-09-30 18:08:21.458 2 DEBUG oslo_concurrency.processutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:08:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v883: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 1 op/s
Sep 30 18:08:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:08:21 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2793732369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:08:21 compute-0 nova_compute[265391]: 2025-09-30 18:08:21.950 2 DEBUG oslo_concurrency.processutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:08:21 compute-0 nova_compute[265391]: 2025-09-30 18:08:21.958 2 DEBUG nova.compute.provider_tree [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:08:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:08:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:08:22 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2793732369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:08:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:08:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:08:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:08:22.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:22 compute-0 nova_compute[265391]: 2025-09-30 18:08:22.467 2 DEBUG nova.scheduler.client.report [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 0, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:08:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:22 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c002c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:22 compute-0 nova_compute[265391]: 2025-09-30 18:08:22.980 2 DEBUG oslo_concurrency.lockutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.591s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:08:22 compute-0 nova_compute[265391]: 2025-09-30 18:08:22.981 2 DEBUG nova.compute.manager [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Sep 30 18:08:22 compute-0 nova_compute[265391]: 2025-09-30 18:08:22.985 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 2.362s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:08:23 compute-0 ceph-mon[73755]: pgmap v883: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 1 op/s
Sep 30 18:08:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:08:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:23 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:08:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:08:23.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:08:23 compute-0 nova_compute[265391]: 2025-09-30 18:08:23.498 2 DEBUG nova.compute.manager [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Sep 30 18:08:23 compute-0 nova_compute[265391]: 2025-09-30 18:08:23.498 2 DEBUG nova.network.neutron [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Sep 30 18:08:23 compute-0 nova_compute[265391]: 2025-09-30 18:08:23.499 2 WARNING neutronclient.v2_0.client [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:08:23 compute-0 nova_compute[265391]: 2025-09-30 18:08:23.500 2 WARNING neutronclient.v2_0.client [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:08:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:08:23.648Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:08:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v884: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 18:08:24 compute-0 nova_compute[265391]: 2025-09-30 18:08:24.008 2 INFO nova.virt.libvirt.driver [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Sep 30 18:08:24 compute-0 nova_compute[265391]: 2025-09-30 18:08:24.189 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Instance d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Sep 30 18:08:24 compute-0 nova_compute[265391]: 2025-09-30 18:08:24.190 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:08:24 compute-0 nova_compute[265391]: 2025-09-30 18:08:24.190 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=39GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:08:20 up  1:11,  0 user,  load average: 0.81, 0.79, 0.90\n', 'num_instances': '1', 'num_vm_building': '1', 'num_task_None': '1', 'num_os_type_None': '1', 'num_proj_20644a86c59b4259a037c783fe6fff20': '1', 'io_workload': '1'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:08:24 compute-0 nova_compute[265391]: 2025-09-30 18:08:24.226 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:08:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/180824 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 18:08:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:08:24.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:24 compute-0 nova_compute[265391]: 2025-09-30 18:08:24.439 2 DEBUG nova.network.neutron [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Successfully created port: 0527f042-75af-44f3-a3f5-b9305b8505fc _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Sep 30 18:08:24 compute-0 nova_compute[265391]: 2025-09-30 18:08:24.521 2 DEBUG nova.compute.manager [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Sep 30 18:08:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:24 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220001b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:08:24 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4169829947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:08:24 compute-0 nova_compute[265391]: 2025-09-30 18:08:24.723 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:08:24 compute-0 nova_compute[265391]: 2025-09-30 18:08:24.729 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:08:25 compute-0 ceph-mon[73755]: pgmap v884: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 18:08:25 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/4169829947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:08:25 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/766055163' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:08:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:25 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220001b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:25 compute-0 nova_compute[265391]: 2025-09-30 18:08:25.239 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 0, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:08:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:08:25.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:08:25 compute-0 nova_compute[265391]: 2025-09-30 18:08:25.542 2 DEBUG nova.compute.manager [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Sep 30 18:08:25 compute-0 nova_compute[265391]: 2025-09-30 18:08:25.544 2 DEBUG nova.virt.libvirt.driver [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Sep 30 18:08:25 compute-0 nova_compute[265391]: 2025-09-30 18:08:25.545 2 INFO nova.virt.libvirt.driver [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Creating image(s)
Sep 30 18:08:25 compute-0 nova_compute[265391]: 2025-09-30 18:08:25.579 2 DEBUG nova.storage.rbd_utils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] rbd image d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:08:25 compute-0 nova_compute[265391]: 2025-09-30 18:08:25.604 2 DEBUG nova.storage.rbd_utils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] rbd image d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:08:25 compute-0 nova_compute[265391]: 2025-09-30 18:08:25.624 2 DEBUG nova.storage.rbd_utils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] rbd image d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:08:25 compute-0 nova_compute[265391]: 2025-09-30 18:08:25.627 2 DEBUG oslo_concurrency.lockutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Acquiring lock "cb2d580238c9b109feae7f1462613dc547671457" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:08:25 compute-0 nova_compute[265391]: 2025-09-30 18:08:25.628 2 DEBUG oslo_concurrency.lockutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:08:25 compute-0 nova_compute[265391]: 2025-09-30 18:08:25.751 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:08:25 compute-0 nova_compute[265391]: 2025-09-30 18:08:25.752 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.767s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:08:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v885: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Sep 30 18:08:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:08:26.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:26 compute-0 nova_compute[265391]: 2025-09-30 18:08:26.668 2 DEBUG nova.virt.libvirt.imagebackend [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Image locations are: [{'url': 'rbd://63d32c6a-fa18-54ed-8711-9a3915cc367b/images/5b99cbca-b655-4be5-8343-cf504005c42e/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://63d32c6a-fa18-54ed-8711-9a3915cc367b/images/5b99cbca-b655-4be5-8343-cf504005c42e/snap', 'metadata': {}}] clone /usr/lib/python3.12/site-packages/nova/virt/libvirt/imagebackend.py:1110
Sep 30 18:08:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:26 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c002c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:08:27.135Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:08:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:27 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:27 compute-0 ceph-mon[73755]: pgmap v885: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Sep 30 18:08:27 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3746705995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:08:27 compute-0 nova_compute[265391]: 2025-09-30 18:08:27.267 2 DEBUG nova.network.neutron [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Successfully updated port: 0527f042-75af-44f3-a3f5-b9305b8505fc _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Sep 30 18:08:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:08:27.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:27 compute-0 nova_compute[265391]: 2025-09-30 18:08:27.752 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:08:27 compute-0 nova_compute[265391]: 2025-09-30 18:08:27.752 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:08:27 compute-0 nova_compute[265391]: 2025-09-30 18:08:27.753 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:08:27 compute-0 nova_compute[265391]: 2025-09-30 18:08:27.753 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:08:27 compute-0 nova_compute[265391]: 2025-09-30 18:08:27.753 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:08:27 compute-0 nova_compute[265391]: 2025-09-30 18:08:27.753 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:08:27 compute-0 nova_compute[265391]: 2025-09-30 18:08:27.773 2 DEBUG oslo_concurrency.lockutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Acquiring lock "refresh_cache-d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:08:27 compute-0 nova_compute[265391]: 2025-09-30 18:08:27.773 2 DEBUG oslo_concurrency.lockutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Acquired lock "refresh_cache-d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:08:27 compute-0 nova_compute[265391]: 2025-09-30 18:08:27.774 2 DEBUG nova.network.neutron [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:08:27 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 18:08:27 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 9618 writes, 34K keys, 9618 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 9618 writes, 2457 syncs, 3.91 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 787 writes, 1414 keys, 787 commit groups, 1.0 writes per commit group, ingest: 0.62 MB, 0.00 MB/s
                                           Interval WAL: 787 writes, 381 syncs, 2.07 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Sep 30 18:08:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v886: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 853 B/s rd, 341 B/s wr, 1 op/s
Sep 30 18:08:28 compute-0 nova_compute[265391]: 2025-09-30 18:08:28.044 2 DEBUG oslo_utils.imageutils.format_inspector [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'QFI\xfb') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Sep 30 18:08:28 compute-0 nova_compute[265391]: 2025-09-30 18:08:28.047 2 DEBUG oslo_utils.imageutils.format_inspector [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Sep 30 18:08:28 compute-0 nova_compute[265391]: 2025-09-30 18:08:28.048 2 DEBUG oslo_concurrency.processutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457.part --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:08:28 compute-0 sudo[297303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:08:28 compute-0 sudo[297303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:08:28 compute-0 sudo[297303]: pam_unix(sudo:session): session closed for user root
Sep 30 18:08:28 compute-0 nova_compute[265391]: 2025-09-30 18:08:28.114 2 DEBUG oslo_concurrency.processutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457.part --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:08:28 compute-0 nova_compute[265391]: 2025-09-30 18:08:28.115 2 DEBUG nova.virt.images [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] 5b99cbca-b655-4be5-8343-cf504005c42e was qcow2, converting to raw fetch_to_raw /usr/lib/python3.12/site-packages/nova/virt/images.py:278
Sep 30 18:08:28 compute-0 nova_compute[265391]: 2025-09-30 18:08:28.116 2 DEBUG nova.privsep.utils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.12/site-packages/nova/privsep/utils.py:63
Sep 30 18:08:28 compute-0 nova_compute[265391]: 2025-09-30 18:08:28.116 2 DEBUG oslo_concurrency.processutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457.part /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457.converted execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:08:28 compute-0 ceph-mon[73755]: pgmap v886: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 853 B/s rd, 341 B/s wr, 1 op/s
Sep 30 18:08:28 compute-0 nova_compute[265391]: 2025-09-30 18:08:28.194 2 DEBUG nova.compute.manager [req-dfcd82ee-697f-471e-bd4a-a23c117720a6 req-72e8a0d6-fd96-427d-9c7d-bf12e3ea3670 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Received event network-changed-0527f042-75af-44f3-a3f5-b9305b8505fc external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:08:28 compute-0 nova_compute[265391]: 2025-09-30 18:08:28.194 2 DEBUG nova.compute.manager [req-dfcd82ee-697f-471e-bd4a-a23c117720a6 req-72e8a0d6-fd96-427d-9c7d-bf12e3ea3670 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Refreshing instance network info cache due to event network-changed-0527f042-75af-44f3-a3f5-b9305b8505fc. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:08:28 compute-0 nova_compute[265391]: 2025-09-30 18:08:28.194 2 DEBUG oslo_concurrency.lockutils [req-dfcd82ee-697f-471e-bd4a-a23c117720a6 req-72e8a0d6-fd96-427d-9c7d-bf12e3ea3670 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:08:28 compute-0 nova_compute[265391]: 2025-09-30 18:08:28.306 2 DEBUG oslo_concurrency.processutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457.part /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457.converted" returned: 0 in 0.189s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:08:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:08:28.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:28 compute-0 nova_compute[265391]: 2025-09-30 18:08:28.314 2 DEBUG oslo_concurrency.processutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457.converted --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:08:28 compute-0 nova_compute[265391]: 2025-09-30 18:08:28.372 2 DEBUG oslo_concurrency.processutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457.converted --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:08:28 compute-0 nova_compute[265391]: 2025-09-30 18:08:28.374 2 DEBUG oslo_concurrency.lockutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.746s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:08:28 compute-0 nova_compute[265391]: 2025-09-30 18:08:28.405 2 DEBUG nova.storage.rbd_utils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] rbd image d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:08:28 compute-0 nova_compute[265391]: 2025-09-30 18:08:28.410 2 DEBUG oslo_concurrency.processutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:08:28 compute-0 nova_compute[265391]: 2025-09-30 18:08:28.558 2 DEBUG nova.network.neutron [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:08:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:28 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220001b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:08:28] "GET /metrics HTTP/1.1" 200 46582 "" "Prometheus/2.51.0"
Sep 30 18:08:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:08:28] "GET /metrics HTTP/1.1" 200 46582 "" "Prometheus/2.51.0"
Sep 30 18:08:28 compute-0 nova_compute[265391]: 2025-09-30 18:08:28.868 2 WARNING neutronclient.v2_0.client [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:08:29 compute-0 nova_compute[265391]: 2025-09-30 18:08:29.011 2 DEBUG nova.network.neutron [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Updating instance_info_cache with network_info: [{"id": "0527f042-75af-44f3-a3f5-b9305b8505fc", "address": "fa:16:3e:de:86:8f", "network": {"id": "8540cf85-00d6-4dc4-a235-89cd0b224d26", "bridge": "br-int", "label": "tempest-TestContinuousAudit-1473540262-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5ecd2ee32c3491198baea5df005e7e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0527f042-75", "ovs_interfaceid": "0527f042-75af-44f3-a3f5-b9305b8505fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:08:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:29 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c002b30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Sep 30 18:08:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e135 e135: 2 total, 2 up, 2 in
Sep 30 18:08:29 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e135: 2 total, 2 up, 2 in
Sep 30 18:08:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:08:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:08:29.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:08:29 compute-0 nova_compute[265391]: 2025-09-30 18:08:29.520 2 DEBUG oslo_concurrency.lockutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Releasing lock "refresh_cache-d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:08:29 compute-0 nova_compute[265391]: 2025-09-30 18:08:29.521 2 DEBUG nova.compute.manager [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Instance network_info: |[{"id": "0527f042-75af-44f3-a3f5-b9305b8505fc", "address": "fa:16:3e:de:86:8f", "network": {"id": "8540cf85-00d6-4dc4-a235-89cd0b224d26", "bridge": "br-int", "label": "tempest-TestContinuousAudit-1473540262-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5ecd2ee32c3491198baea5df005e7e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0527f042-75", "ovs_interfaceid": "0527f042-75af-44f3-a3f5-b9305b8505fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Sep 30 18:08:29 compute-0 nova_compute[265391]: 2025-09-30 18:08:29.522 2 DEBUG oslo_concurrency.lockutils [req-dfcd82ee-697f-471e-bd4a-a23c117720a6 req-72e8a0d6-fd96-427d-9c7d-bf12e3ea3670 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:08:29 compute-0 nova_compute[265391]: 2025-09-30 18:08:29.522 2 DEBUG nova.network.neutron [req-dfcd82ee-697f-471e-bd4a-a23c117720a6 req-72e8a0d6-fd96-427d-9c7d-bf12e3ea3670 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Refreshing network info cache for port 0527f042-75af-44f3-a3f5-b9305b8505fc _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:08:29 compute-0 podman[276673]: time="2025-09-30T18:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:08:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:08:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10249 "" "Go-http-client/1.1"
Sep 30 18:08:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v888: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 2.0 MiB/s rd, 102 B/s wr, 8 op/s
Sep 30 18:08:30 compute-0 nova_compute[265391]: 2025-09-30 18:08:30.029 2 WARNING neutronclient.v2_0.client [req-dfcd82ee-697f-471e-bd4a-a23c117720a6 req-72e8a0d6-fd96-427d-9c7d-bf12e3ea3670 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:08:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Sep 30 18:08:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e136 e136: 2 total, 2 up, 2 in
Sep 30 18:08:30 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e136: 2 total, 2 up, 2 in
Sep 30 18:08:30 compute-0 ceph-mon[73755]: osdmap e135: 2 total, 2 up, 2 in
Sep 30 18:08:30 compute-0 ceph-mon[73755]: pgmap v888: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 2.0 MiB/s rd, 102 B/s wr, 8 op/s
Sep 30 18:08:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:08:30.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:08:30 compute-0 nova_compute[265391]: 2025-09-30 18:08:30.580 2 DEBUG oslo_concurrency.processutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.171s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:08:30 compute-0 nova_compute[265391]: 2025-09-30 18:08:30.648 2 DEBUG nova.storage.rbd_utils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] resizing rbd image d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6_disk to 1073741824 resize /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:288
Sep 30 18:08:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:30 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c002b30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:30 compute-0 nova_compute[265391]: 2025-09-30 18:08:30.742 2 DEBUG nova.virt.libvirt.driver [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Sep 30 18:08:30 compute-0 nova_compute[265391]: 2025-09-30 18:08:30.742 2 DEBUG nova.virt.libvirt.driver [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Ensure instance console log exists: /var/lib/nova/instances/d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Sep 30 18:08:30 compute-0 nova_compute[265391]: 2025-09-30 18:08:30.743 2 DEBUG oslo_concurrency.lockutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:08:30 compute-0 nova_compute[265391]: 2025-09-30 18:08:30.743 2 DEBUG oslo_concurrency.lockutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:08:30 compute-0 nova_compute[265391]: 2025-09-30 18:08:30.743 2 DEBUG oslo_concurrency.lockutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:08:30 compute-0 nova_compute[265391]: 2025-09-30 18:08:30.745 2 DEBUG nova.virt.libvirt.driver [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Start _get_guest_xml network_info=[{"id": "0527f042-75af-44f3-a3f5-b9305b8505fc", "address": "fa:16:3e:de:86:8f", "network": {"id": "8540cf85-00d6-4dc4-a235-89cd0b224d26", "bridge": "br-int", "label": "tempest-TestContinuousAudit-1473540262-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5ecd2ee32c3491198baea5df005e7e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0527f042-75", "ovs_interfaceid": "0527f042-75af-44f3-a3f5-b9305b8505fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'image_id': '5b99cbca-b655-4be5-8343-cf504005c42e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Sep 30 18:08:30 compute-0 nova_compute[265391]: 2025-09-30 18:08:30.749 2 WARNING nova.virt.libvirt.driver [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:08:30 compute-0 nova_compute[265391]: 2025-09-30 18:08:30.751 2 DEBUG nova.virt.driver [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='5b99cbca-b655-4be5-8343-cf504005c42e', instance_meta=NovaInstanceMeta(name='tempest-TestContinuousAudit-server-1465888902', uuid='d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6'), owner=OwnerMeta(userid='585fdf30683043138bc58696e20c5d20', username='tempest-TestContinuousAudit-161535077-project-admin', projectid='20644a86c59b4259a037c783fe6fff20', projectname='tempest-TestContinuousAudit-161535077'), image=ImageMeta(id='5b99cbca-b655-4be5-8343-cf504005c42e', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "0527f042-75af-44f3-a3f5-b9305b8505fc", "address": "fa:16:3e:de:86:8f", "network": {"id": "8540cf85-00d6-4dc4-a235-89cd0b224d26", "bridge": "br-int", "label": "tempest-TestContinuousAudit-1473540262-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5ecd2ee32c3491198baea5df005e7e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0527f042-75", "ovs_interfaceid": "0527f042-75af-44f3-a3f5-b9305b8505fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20250919142712.b99a882.el10', creation_time=1759255710.7512417) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Sep 30 18:08:30 compute-0 nova_compute[265391]: 2025-09-30 18:08:30.763 2 DEBUG nova.virt.libvirt.host [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Sep 30 18:08:30 compute-0 nova_compute[265391]: 2025-09-30 18:08:30.763 2 DEBUG nova.virt.libvirt.host [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Sep 30 18:08:30 compute-0 nova_compute[265391]: 2025-09-30 18:08:30.768 2 DEBUG nova.virt.libvirt.host [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Sep 30 18:08:30 compute-0 nova_compute[265391]: 2025-09-30 18:08:30.768 2 DEBUG nova.virt.libvirt.host [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Sep 30 18:08:30 compute-0 nova_compute[265391]: 2025-09-30 18:08:30.769 2 DEBUG nova.virt.libvirt.driver [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Sep 30 18:08:30 compute-0 nova_compute[265391]: 2025-09-30 18:08:30.769 2 DEBUG nova.virt.hardware [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-09-30T18:04:50Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Sep 30 18:08:30 compute-0 nova_compute[265391]: 2025-09-30 18:08:30.770 2 DEBUG nova.virt.hardware [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Sep 30 18:08:30 compute-0 nova_compute[265391]: 2025-09-30 18:08:30.770 2 DEBUG nova.virt.hardware [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Sep 30 18:08:30 compute-0 nova_compute[265391]: 2025-09-30 18:08:30.771 2 DEBUG nova.virt.hardware [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Sep 30 18:08:30 compute-0 nova_compute[265391]: 2025-09-30 18:08:30.771 2 DEBUG nova.virt.hardware [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Sep 30 18:08:30 compute-0 nova_compute[265391]: 2025-09-30 18:08:30.771 2 DEBUG nova.virt.hardware [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Sep 30 18:08:30 compute-0 nova_compute[265391]: 2025-09-30 18:08:30.771 2 DEBUG nova.virt.hardware [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Sep 30 18:08:30 compute-0 nova_compute[265391]: 2025-09-30 18:08:30.772 2 DEBUG nova.virt.hardware [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Sep 30 18:08:30 compute-0 nova_compute[265391]: 2025-09-30 18:08:30.772 2 DEBUG nova.virt.hardware [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Sep 30 18:08:30 compute-0 nova_compute[265391]: 2025-09-30 18:08:30.772 2 DEBUG nova.virt.hardware [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Sep 30 18:08:30 compute-0 nova_compute[265391]: 2025-09-30 18:08:30.773 2 DEBUG nova.virt.hardware [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Sep 30 18:08:30 compute-0 nova_compute[265391]: 2025-09-30 18:08:30.777 2 DEBUG nova.privsep.utils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.12/site-packages/nova/privsep/utils.py:63
Sep 30 18:08:30 compute-0 nova_compute[265391]: 2025-09-30 18:08:30.777 2 DEBUG oslo_concurrency.processutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:08:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:31 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c0018b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:08:31 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1913416882' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:08:31 compute-0 nova_compute[265391]: 2025-09-30 18:08:31.246 2 DEBUG oslo_concurrency.processutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:08:31 compute-0 ceph-mon[73755]: osdmap e136: 2 total, 2 up, 2 in
Sep 30 18:08:31 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1913416882' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:08:31 compute-0 nova_compute[265391]: 2025-09-30 18:08:31.282 2 DEBUG nova.storage.rbd_utils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] rbd image d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:08:31 compute-0 nova_compute[265391]: 2025-09-30 18:08:31.286 2 DEBUG oslo_concurrency.processutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:08:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:08:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:08:31.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:08:31 compute-0 openstack_network_exporter[279566]: ERROR   18:08:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:08:31 compute-0 openstack_network_exporter[279566]: ERROR   18:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:08:31 compute-0 openstack_network_exporter[279566]: ERROR   18:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:08:31 compute-0 openstack_network_exporter[279566]: ERROR   18:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:08:31 compute-0 openstack_network_exporter[279566]: ERROR   18:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:08:31 compute-0 nova_compute[265391]: 2025-09-30 18:08:31.524 2 WARNING neutronclient.v2_0.client [req-dfcd82ee-697f-471e-bd4a-a23c117720a6 req-72e8a0d6-fd96-427d-9c7d-bf12e3ea3670 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:08:31 compute-0 nova_compute[265391]: 2025-09-30 18:08:31.696 2 DEBUG nova.network.neutron [req-dfcd82ee-697f-471e-bd4a-a23c117720a6 req-72e8a0d6-fd96-427d-9c7d-bf12e3ea3670 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Updated VIF entry in instance network info cache for port 0527f042-75af-44f3-a3f5-b9305b8505fc. _build_network_info_model /usr/lib/python3.12/site-packages/nova/network/neutron.py:3542
Sep 30 18:08:31 compute-0 nova_compute[265391]: 2025-09-30 18:08:31.697 2 DEBUG nova.network.neutron [req-dfcd82ee-697f-471e-bd4a-a23c117720a6 req-72e8a0d6-fd96-427d-9c7d-bf12e3ea3670 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Updating instance_info_cache with network_info: [{"id": "0527f042-75af-44f3-a3f5-b9305b8505fc", "address": "fa:16:3e:de:86:8f", "network": {"id": "8540cf85-00d6-4dc4-a235-89cd0b224d26", "bridge": "br-int", "label": "tempest-TestContinuousAudit-1473540262-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5ecd2ee32c3491198baea5df005e7e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0527f042-75", "ovs_interfaceid": "0527f042-75af-44f3-a3f5-b9305b8505fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:08:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:08:31 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4278792369' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:08:31 compute-0 nova_compute[265391]: 2025-09-30 18:08:31.776 2 DEBUG oslo_concurrency.processutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:08:31 compute-0 nova_compute[265391]: 2025-09-30 18:08:31.778 2 DEBUG nova.virt.libvirt.vif [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:08:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestContinuousAudit-server-1465888902',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testcontinuousaudit-server-1465888902',id=1,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='20644a86c59b4259a037c783fe6fff20',ramdisk_id='',reservation_id='r-4nsvnjgl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestContinuousAudit-161535077',owner_user_name='tempest-TestContinuousAudit-161535077-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:08:24Z,user_data=None,user_id='585fdf30683043138bc58696e20c5d20',uuid=d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0527f042-75af-44f3-a3f5-b9305b8505fc", "address": "fa:16:3e:de:86:8f", "network": {"id": "8540cf85-00d6-4dc4-a235-89cd0b224d26", "bridge": "br-int", "label": "tempest-TestContinuousAudit-1473540262-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5ecd2ee32c3491198baea5df005e7e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0527f042-75", "ovs_interfaceid": "0527f042-75af-44f3-a3f5-b9305b8505fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:08:31 compute-0 nova_compute[265391]: 2025-09-30 18:08:31.778 2 DEBUG nova.network.os_vif_util [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Converting VIF {"id": "0527f042-75af-44f3-a3f5-b9305b8505fc", "address": "fa:16:3e:de:86:8f", "network": {"id": "8540cf85-00d6-4dc4-a235-89cd0b224d26", "bridge": "br-int", "label": "tempest-TestContinuousAudit-1473540262-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5ecd2ee32c3491198baea5df005e7e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0527f042-75", "ovs_interfaceid": "0527f042-75af-44f3-a3f5-b9305b8505fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:08:31 compute-0 nova_compute[265391]: 2025-09-30 18:08:31.780 2 DEBUG nova.network.os_vif_util [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:86:8f,bridge_name='br-int',has_traffic_filtering=True,id=0527f042-75af-44f3-a3f5-b9305b8505fc,network=Network(8540cf85-00d6-4dc4-a235-89cd0b224d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0527f042-75') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:08:31 compute-0 nova_compute[265391]: 2025-09-30 18:08:31.783 2 DEBUG nova.objects.instance [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Lazy-loading 'pci_devices' on Instance uuid d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:08:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v890: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 2.6 MiB/s rd, 127 B/s wr, 10 op/s
Sep 30 18:08:32 compute-0 nova_compute[265391]: 2025-09-30 18:08:32.206 2 DEBUG oslo_concurrency.lockutils [req-dfcd82ee-697f-471e-bd4a-a23c117720a6 req-72e8a0d6-fd96-427d-9c7d-bf12e3ea3670 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:08:32 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/4278792369' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:08:32 compute-0 ceph-mon[73755]: pgmap v890: 353 pgs: 353 active+clean; 41 MiB data, 148 MiB used, 40 GiB / 40 GiB avail; 2.6 MiB/s rd, 127 B/s wr, 10 op/s
Sep 30 18:08:32 compute-0 nova_compute[265391]: 2025-09-30 18:08:32.291 2 DEBUG nova.virt.libvirt.driver [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] End _get_guest_xml xml=<domain type="kvm">
Sep 30 18:08:32 compute-0 nova_compute[265391]:   <uuid>d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6</uuid>
Sep 30 18:08:32 compute-0 nova_compute[265391]:   <name>instance-00000001</name>
Sep 30 18:08:32 compute-0 nova_compute[265391]:   <memory>131072</memory>
Sep 30 18:08:32 compute-0 nova_compute[265391]:   <vcpu>1</vcpu>
Sep 30 18:08:32 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:08:32 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:       <nova:name>tempest-TestContinuousAudit-server-1465888902</nova:name>
Sep 30 18:08:32 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:08:30</nova:creationTime>
Sep 30 18:08:32 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:08:32 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:08:32 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:08:32 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:08:32 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:08:32 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:08:32 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:08:32 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:08:32 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:08:32 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:08:32 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:08:32 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:08:32 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:08:32 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:08:32 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:08:32 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:08:32 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:08:32 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:08:32 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:08:32 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:08:32 compute-0 nova_compute[265391]:         <nova:user uuid="585fdf30683043138bc58696e20c5d20">tempest-TestContinuousAudit-161535077-project-admin</nova:user>
Sep 30 18:08:32 compute-0 nova_compute[265391]:         <nova:project uuid="20644a86c59b4259a037c783fe6fff20">tempest-TestContinuousAudit-161535077</nova:project>
Sep 30 18:08:32 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:08:32 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:08:32 compute-0 nova_compute[265391]:         <nova:port uuid="0527f042-75af-44f3-a3f5-b9305b8505fc">
Sep 30 18:08:32 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:08:32 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:08:32 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:08:32 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <system>
Sep 30 18:08:32 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:08:32 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:08:32 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:08:32 compute-0 nova_compute[265391]:       <entry name="serial">d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6</entry>
Sep 30 18:08:32 compute-0 nova_compute[265391]:       <entry name="uuid">d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6</entry>
Sep 30 18:08:32 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     </system>
Sep 30 18:08:32 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:08:32 compute-0 nova_compute[265391]:   <os>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="q35">hvm</type>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:   </os>
Sep 30 18:08:32 compute-0 nova_compute[265391]:   <features>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <vmcoreinfo/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:   </features>
Sep 30 18:08:32 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:08:32 compute-0 nova_compute[265391]:   <cpu mode="host-model" match="exact">
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <topology sockets="1" cores="1" threads="1"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:08:32 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:08:32 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6_disk">
Sep 30 18:08:32 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:       </source>
Sep 30 18:08:32 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:08:32 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:08:32 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:08:32 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6_disk.config">
Sep 30 18:08:32 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:       </source>
Sep 30 18:08:32 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:08:32 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:08:32 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:08:32 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:de:86:8f"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:       <target dev="tap0527f042-75"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:08:32 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6/console.log" append="off"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <video>
Sep 30 18:08:32 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     </video>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:08:32 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <controller type="usb" index="0"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:08:32 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:08:32 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:08:32 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:08:32 compute-0 nova_compute[265391]: </domain>
Sep 30 18:08:32 compute-0 nova_compute[265391]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Sep 30 18:08:32 compute-0 nova_compute[265391]: 2025-09-30 18:08:32.292 2 DEBUG nova.compute.manager [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Preparing to wait for external event network-vif-plugged-0527f042-75af-44f3-a3f5-b9305b8505fc prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:08:32 compute-0 nova_compute[265391]: 2025-09-30 18:08:32.293 2 DEBUG oslo_concurrency.lockutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Acquiring lock "d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:08:32 compute-0 nova_compute[265391]: 2025-09-30 18:08:32.293 2 DEBUG oslo_concurrency.lockutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Lock "d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:08:32 compute-0 nova_compute[265391]: 2025-09-30 18:08:32.293 2 DEBUG oslo_concurrency.lockutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Lock "d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:08:32 compute-0 nova_compute[265391]: 2025-09-30 18:08:32.294 2 DEBUG nova.virt.libvirt.vif [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:08:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestContinuousAudit-server-1465888902',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testcontinuousaudit-server-1465888902',id=1,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='20644a86c59b4259a037c783fe6fff20',ramdisk_id='',reservation_id='r-4nsvnjgl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestContinuousAudit-161535077',owner_user_name='tempest-TestContinuousAudit-161535077-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:08:24Z,user_data=None,user_id='585fdf30683043138bc58696e20c5d20',uuid=d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0527f042-75af-44f3-a3f5-b9305b8505fc", "address": "fa:16:3e:de:86:8f", "network": {"id": "8540cf85-00d6-4dc4-a235-89cd0b224d26", "bridge": "br-int", "label": "tempest-TestContinuousAudit-1473540262-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5ecd2ee32c3491198baea5df005e7e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0527f042-75", "ovs_interfaceid": "0527f042-75af-44f3-a3f5-b9305b8505fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Sep 30 18:08:32 compute-0 nova_compute[265391]: 2025-09-30 18:08:32.294 2 DEBUG nova.network.os_vif_util [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Converting VIF {"id": "0527f042-75af-44f3-a3f5-b9305b8505fc", "address": "fa:16:3e:de:86:8f", "network": {"id": "8540cf85-00d6-4dc4-a235-89cd0b224d26", "bridge": "br-int", "label": "tempest-TestContinuousAudit-1473540262-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5ecd2ee32c3491198baea5df005e7e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0527f042-75", "ovs_interfaceid": "0527f042-75af-44f3-a3f5-b9305b8505fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:08:32 compute-0 nova_compute[265391]: 2025-09-30 18:08:32.295 2 DEBUG nova.network.os_vif_util [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:86:8f,bridge_name='br-int',has_traffic_filtering=True,id=0527f042-75af-44f3-a3f5-b9305b8505fc,network=Network(8540cf85-00d6-4dc4-a235-89cd0b224d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0527f042-75') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:08:32 compute-0 nova_compute[265391]: 2025-09-30 18:08:32.296 2 DEBUG os_vif [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:86:8f,bridge_name='br-int',has_traffic_filtering=True,id=0527f042-75af-44f3-a3f5-b9305b8505fc,network=Network(8540cf85-00d6-4dc4-a235-89cd0b224d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0527f042-75') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Sep 30 18:08:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:08:32.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:32 compute-0 nova_compute[265391]: 2025-09-30 18:08:32.336 2 DEBUG ovsdbapp.backend.ovs_idl [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Sep 30 18:08:32 compute-0 nova_compute[265391]: 2025-09-30 18:08:32.337 2 DEBUG ovsdbapp.backend.ovs_idl [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Sep 30 18:08:32 compute-0 nova_compute[265391]: 2025-09-30 18:08:32.337 2 DEBUG ovsdbapp.backend.ovs_idl [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Sep 30 18:08:32 compute-0 nova_compute[265391]: 2025-09-30 18:08:32.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.12/site-packages/ovs/reconnect.py:519
Sep 30 18:08:32 compute-0 nova_compute[265391]: 2025-09-30 18:08:32.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [POLLOUT] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:32 compute-0 nova_compute[265391]: 2025-09-30 18:08:32.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.12/site-packages/ovs/reconnect.py:519
Sep 30 18:08:32 compute-0 nova_compute[265391]: 2025-09-30 18:08:32.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:32 compute-0 nova_compute[265391]: 2025-09-30 18:08:32.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:32 compute-0 nova_compute[265391]: 2025-09-30 18:08:32.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:32 compute-0 nova_compute[265391]: 2025-09-30 18:08:32.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:32 compute-0 nova_compute[265391]: 2025-09-30 18:08:32.351 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:08:32 compute-0 nova_compute[265391]: 2025-09-30 18:08:32.351 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:08:32 compute-0 nova_compute[265391]: 2025-09-30 18:08:32.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:32 compute-0 nova_compute[265391]: 2025-09-30 18:08:32.353 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '7d7dd0af-5807-57dd-be87-08f1be4220a3', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:08:32 compute-0 nova_compute[265391]: 2025-09-30 18:08:32.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:32 compute-0 nova_compute[265391]: 2025-09-30 18:08:32.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:32 compute-0 nova_compute[265391]: 2025-09-30 18:08:32.357 2 INFO oslo.privsep.daemon [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmphrm3bxy4/privsep.sock']
Sep 30 18:08:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:32 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220003ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:33 compute-0 nova_compute[265391]: 2025-09-30 18:08:33.157 2 INFO oslo.privsep.daemon [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Spawned new privsep daemon via rootwrap
Sep 30 18:08:33 compute-0 nova_compute[265391]: 2025-09-30 18:08:33.018 751 INFO oslo.privsep.daemon [-] privsep daemon starting
Sep 30 18:08:33 compute-0 nova_compute[265391]: 2025-09-30 18:08:33.026 751 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Sep 30 18:08:33 compute-0 nova_compute[265391]: 2025-09-30 18:08:33.030 751 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Sep 30 18:08:33 compute-0 nova_compute[265391]: 2025-09-30 18:08:33.031 751 INFO oslo.privsep.daemon [-] privsep daemon running as pid 751
Sep 30 18:08:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:33 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c002b30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:08:33.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:33 compute-0 nova_compute[265391]: 2025-09-30 18:08:33.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:33 compute-0 nova_compute[265391]: 2025-09-30 18:08:33.416 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0527f042-75, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:08:33 compute-0 nova_compute[265391]: 2025-09-30 18:08:33.416 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap0527f042-75, col_values=(('qos', UUID('362b581f-f7cd-45ec-9b59-fe2a6a266371')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:08:33 compute-0 nova_compute[265391]: 2025-09-30 18:08:33.418 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap0527f042-75, col_values=(('external_ids', {'iface-id': '0527f042-75af-44f3-a3f5-b9305b8505fc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:de:86:8f', 'vm-uuid': 'd8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:08:33 compute-0 nova_compute[265391]: 2025-09-30 18:08:33.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:33 compute-0 NetworkManager[45059]: <info>  [1759255713.4225] manager: (tap0527f042-75): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/24)
Sep 30 18:08:33 compute-0 nova_compute[265391]: 2025-09-30 18:08:33.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:08:33 compute-0 nova_compute[265391]: 2025-09-30 18:08:33.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:33 compute-0 nova_compute[265391]: 2025-09-30 18:08:33.432 2 INFO os_vif [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:86:8f,bridge_name='br-int',has_traffic_filtering=True,id=0527f042-75af-44f3-a3f5-b9305b8505fc,network=Network(8540cf85-00d6-4dc4-a235-89cd0b224d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0527f042-75')
Sep 30 18:08:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:08:33.648Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:08:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v891: 353 pgs: 353 active+clean; 88 MiB data, 170 MiB used, 40 GiB / 40 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 51 op/s
Sep 30 18:08:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:08:34.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:34 compute-0 nova_compute[265391]: 2025-09-30 18:08:34.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:34 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c002b30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:34 compute-0 nova_compute[265391]: 2025-09-30 18:08:34.973 2 DEBUG nova.virt.libvirt.driver [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:08:34 compute-0 nova_compute[265391]: 2025-09-30 18:08:34.974 2 DEBUG nova.virt.libvirt.driver [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:08:34 compute-0 nova_compute[265391]: 2025-09-30 18:08:34.974 2 DEBUG nova.virt.libvirt.driver [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] No VIF found with MAC fa:16:3e:de:86:8f, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Sep 30 18:08:34 compute-0 nova_compute[265391]: 2025-09-30 18:08:34.974 2 INFO nova.virt.libvirt.driver [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Using config drive
Sep 30 18:08:34 compute-0 ceph-mon[73755]: pgmap v891: 353 pgs: 353 active+clean; 88 MiB data, 170 MiB used, 40 GiB / 40 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 51 op/s
Sep 30 18:08:35 compute-0 nova_compute[265391]: 2025-09-30 18:08:35.011 2 DEBUG nova.storage.rbd_utils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] rbd image d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:08:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:35 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:08:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:08:35.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:08:35 compute-0 nova_compute[265391]: 2025-09-30 18:08:35.529 2 WARNING neutronclient.v2_0.client [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:08:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:08:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Sep 30 18:08:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 e137: 2 total, 2 up, 2 in
Sep 30 18:08:35 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e137: 2 total, 2 up, 2 in
Sep 30 18:08:35 compute-0 nova_compute[265391]: 2025-09-30 18:08:35.702 2 INFO nova.virt.libvirt.driver [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Creating config drive at /var/lib/nova/instances/d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6/disk.config
Sep 30 18:08:35 compute-0 nova_compute[265391]: 2025-09-30 18:08:35.707 2 DEBUG oslo_concurrency.processutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmpcs3ea0w1 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:08:35 compute-0 nova_compute[265391]: 2025-09-30 18:08:35.839 2 DEBUG oslo_concurrency.processutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmpcs3ea0w1" returned: 0 in 0.132s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:08:35 compute-0 nova_compute[265391]: 2025-09-30 18:08:35.894 2 DEBUG nova.storage.rbd_utils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] rbd image d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:08:35 compute-0 nova_compute[265391]: 2025-09-30 18:08:35.897 2 DEBUG oslo_concurrency.processutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6/disk.config d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:08:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v893: 353 pgs: 353 active+clean; 88 MiB data, 170 MiB used, 40 GiB / 40 GiB avail; 31 KiB/s rd, 3.2 MiB/s wr, 48 op/s
Sep 30 18:08:36 compute-0 nova_compute[265391]: 2025-09-30 18:08:36.068 2 DEBUG oslo_concurrency.processutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6/disk.config d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:08:36 compute-0 nova_compute[265391]: 2025-09-30 18:08:36.069 2 INFO nova.virt.libvirt.driver [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Deleting local config drive /var/lib/nova/instances/d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6/disk.config because it was imported into RBD.
Sep 30 18:08:36 compute-0 systemd[1]: Starting libvirt secret daemon...
Sep 30 18:08:36 compute-0 systemd[1]: Started libvirt secret daemon.
Sep 30 18:08:36 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Sep 30 18:08:36 compute-0 kernel: tap0527f042-75: entered promiscuous mode
Sep 30 18:08:36 compute-0 ovn_controller[156242]: 2025-09-30T18:08:36Z|00040|binding|INFO|Claiming lport 0527f042-75af-44f3-a3f5-b9305b8505fc for this chassis.
Sep 30 18:08:36 compute-0 ovn_controller[156242]: 2025-09-30T18:08:36Z|00041|binding|INFO|0527f042-75af-44f3-a3f5-b9305b8505fc: Claiming fa:16:3e:de:86:8f 10.100.0.9
Sep 30 18:08:36 compute-0 NetworkManager[45059]: <info>  [1759255716.2008] manager: (tap0527f042-75): new Tun device (/org/freedesktop/NetworkManager/Devices/25)
Sep 30 18:08:36 compute-0 nova_compute[265391]: 2025-09-30 18:08:36.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:36 compute-0 nova_compute[265391]: 2025-09-30 18:08:36.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:36 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:36.224 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:86:8f 10.100.0.9'], port_security=['fa:16:3e:de:86:8f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'd8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8540cf85-00d6-4dc4-a235-89cd0b224d26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '20644a86c59b4259a037c783fe6fff20', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'abe33b85-4e2c-42ed-aa04-22ef524434e4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=63eb3a66-bb05-42df-9005-47612d7f10be, chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=0527f042-75af-44f3-a3f5-b9305b8505fc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:08:36 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:36.225 166158 INFO neutron.agent.ovn.metadata.agent [-] Port 0527f042-75af-44f3-a3f5-b9305b8505fc in datapath 8540cf85-00d6-4dc4-a235-89cd0b224d26 bound to our chassis
Sep 30 18:08:36 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:36.227 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8540cf85-00d6-4dc4-a235-89cd0b224d26
Sep 30 18:08:36 compute-0 systemd-udevd[297626]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:08:36 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:36.258 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[738a920d-0eb1-4040-ba9a-3840f615d91a]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:08:36 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:36.259 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8540cf85-01 in ovnmeta-8540cf85-00d6-4dc4-a235-89cd0b224d26 namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Sep 30 18:08:36 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:36.261 292173 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8540cf85-00 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Sep 30 18:08:36 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:36.261 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[09ec7917-0346-4d4b-a712-7a79658b9018]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:08:36 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:36.262 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[1b95b6ad-7c1f-49b6-8f1a-b5f5801bb65e]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:08:36 compute-0 systemd-machined[219917]: New machine qemu-1-instance-00000001.
Sep 30 18:08:36 compute-0 NetworkManager[45059]: <info>  [1759255716.2736] device (tap0527f042-75): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 18:08:36 compute-0 NetworkManager[45059]: <info>  [1759255716.2750] device (tap0527f042-75): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Sep 30 18:08:36 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:36.287 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[fd1df9d7-6c78-4a38-812e-f9dd018ba698]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:08:36 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:36.311 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[5f469ecd-829b-4c9e-98f1-c28aa81e895e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:08:36 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Sep 30 18:08:36 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:36.312 166158 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpui4_k7x5/privsep.sock']
Sep 30 18:08:36 compute-0 nova_compute[265391]: 2025-09-30 18:08:36.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:36 compute-0 ovn_controller[156242]: 2025-09-30T18:08:36Z|00042|binding|INFO|Setting lport 0527f042-75af-44f3-a3f5-b9305b8505fc ovn-installed in OVS
Sep 30 18:08:36 compute-0 ovn_controller[156242]: 2025-09-30T18:08:36Z|00043|binding|INFO|Setting lport 0527f042-75af-44f3-a3f5-b9305b8505fc up in Southbound
Sep 30 18:08:36 compute-0 nova_compute[265391]: 2025-09-30 18:08:36.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:08:36.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:08:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/923608034' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:08:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:08:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/923608034' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:08:36 compute-0 ceph-mon[73755]: osdmap e137: 2 total, 2 up, 2 in
Sep 30 18:08:36 compute-0 ceph-mon[73755]: pgmap v893: 353 pgs: 353 active+clean; 88 MiB data, 170 MiB used, 40 GiB / 40 GiB avail; 31 KiB/s rd, 3.2 MiB/s wr, 48 op/s
Sep 30 18:08:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/923608034' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:08:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/923608034' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:08:36 compute-0 nova_compute[265391]: 2025-09-30 18:08:36.675 2 DEBUG nova.compute.manager [req-3753b3ea-21f0-42e6-a9e7-5f6e23d80101 req-7577dedf-00c7-4119-831c-1b7a8b8cdd3c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Received event network-vif-plugged-0527f042-75af-44f3-a3f5-b9305b8505fc external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:08:36 compute-0 nova_compute[265391]: 2025-09-30 18:08:36.676 2 DEBUG oslo_concurrency.lockutils [req-3753b3ea-21f0-42e6-a9e7-5f6e23d80101 req-7577dedf-00c7-4119-831c-1b7a8b8cdd3c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:08:36 compute-0 nova_compute[265391]: 2025-09-30 18:08:36.676 2 DEBUG oslo_concurrency.lockutils [req-3753b3ea-21f0-42e6-a9e7-5f6e23d80101 req-7577dedf-00c7-4119-831c-1b7a8b8cdd3c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:08:36 compute-0 nova_compute[265391]: 2025-09-30 18:08:36.677 2 DEBUG oslo_concurrency.lockutils [req-3753b3ea-21f0-42e6-a9e7-5f6e23d80101 req-7577dedf-00c7-4119-831c-1b7a8b8cdd3c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:08:36 compute-0 nova_compute[265391]: 2025-09-30 18:08:36.677 2 DEBUG nova.compute.manager [req-3753b3ea-21f0-42e6-a9e7-5f6e23d80101 req-7577dedf-00c7-4119-831c-1b7a8b8cdd3c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Processing event network-vif-plugged-0527f042-75af-44f3-a3f5-b9305b8505fc _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:08:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:36 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c0018b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:37.088 166158 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Sep 30 18:08:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:37.088 166158 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpui4_k7x5/privsep.sock __init__ /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:377
Sep 30 18:08:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:36.923 297688 INFO oslo.privsep.daemon [-] privsep daemon starting
Sep 30 18:08:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:36.927 297688 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Sep 30 18:08:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:36.930 297688 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Sep 30 18:08:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:36.930 297688 INFO oslo.privsep.daemon [-] privsep daemon running as pid 297688
Sep 30 18:08:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:37.090 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[1798abc2-9167-4b0c-83fb-ee96e135bd32]: (2,) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:08:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:08:37.136Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:08:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:37 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c0018b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:37 compute-0 nova_compute[265391]: 2025-09-30 18:08:37.257 2 DEBUG nova.compute.manager [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:08:37 compute-0 nova_compute[265391]: 2025-09-30 18:08:37.260 2 DEBUG nova.virt.libvirt.driver [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Sep 30 18:08:37 compute-0 nova_compute[265391]: 2025-09-30 18:08:37.263 2 INFO nova.virt.libvirt.driver [-] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Instance spawned successfully.
Sep 30 18:08:37 compute-0 nova_compute[265391]: 2025-09-30 18:08:37.263 2 DEBUG nova.virt.libvirt.driver [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Sep 30 18:08:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:08:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:08:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:08:37.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:08:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:08:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:08:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:08:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:08:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:08:37 compute-0 podman[297695]: 2025-09-30 18:08:37.539194511 +0000 UTC m=+0.069681005 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Sep 30 18:08:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:37.570 297688 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:08:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:37.570 297688 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:08:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:37.570 297688 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:08:37 compute-0 podman[297694]: 2025-09-30 18:08:37.574678144 +0000 UTC m=+0.107479219 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_controller)
Sep 30 18:08:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:08:37 compute-0 nova_compute[265391]: 2025-09-30 18:08:37.777 2 DEBUG nova.virt.libvirt.driver [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:08:37 compute-0 nova_compute[265391]: 2025-09-30 18:08:37.778 2 DEBUG nova.virt.libvirt.driver [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:08:37 compute-0 nova_compute[265391]: 2025-09-30 18:08:37.778 2 DEBUG nova.virt.libvirt.driver [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:08:37 compute-0 nova_compute[265391]: 2025-09-30 18:08:37.778 2 DEBUG nova.virt.libvirt.driver [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:08:37 compute-0 nova_compute[265391]: 2025-09-30 18:08:37.779 2 DEBUG nova.virt.libvirt.driver [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:08:37 compute-0 nova_compute[265391]: 2025-09-30 18:08:37.779 2 DEBUG nova.virt.libvirt.driver [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:08:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v894: 353 pgs: 353 active+clean; 88 MiB data, 170 MiB used, 40 GiB / 40 GiB avail; 26 KiB/s rd, 2.7 MiB/s wr, 40 op/s
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:38.074 297688 INFO oslo_service.backend [-] Loading backend: eventlet
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:38.080 297688 INFO oslo_service.backend [-] Backend 'eventlet' successfully loaded and cached.
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:38.153 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[0cc9adfd-e5b9-4b5f-a306-b13a0e63b1c1]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:08:38 compute-0 NetworkManager[45059]: <info>  [1759255718.1614] manager: (tap8540cf85-00): new Veth device (/org/freedesktop/NetworkManager/Devices/26)
Sep 30 18:08:38 compute-0 systemd-udevd[297629]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:38.162 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[a1212294-672a-4d49-90fb-5dd0da39fb39]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:38.210 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[55d7ca7c-9703-460e-aad0-ad80728a6537]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:38.214 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[3009ad89-7d0b-4403-ace4-4d7f15851cf2]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:08:38 compute-0 NetworkManager[45059]: <info>  [1759255718.2413] device (tap8540cf85-00): carrier: link connected
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:38.249 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[5b5ed1bc-c07d-4af4-a205-afecaebc0d94]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:38.273 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[783385ee-2393-411c-b69d-9903f3ca1f85]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8540cf85-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:82:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431693, 'reachable_time': 38144, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297760, 'error': None, 'target': 'ovnmeta-8540cf85-00d6-4dc4-a235-89cd0b224d26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:08:38 compute-0 nova_compute[265391]: 2025-09-30 18:08:38.289 2 INFO nova.compute.manager [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Took 12.75 seconds to spawn the instance on the hypervisor.
Sep 30 18:08:38 compute-0 nova_compute[265391]: 2025-09-30 18:08:38.290 2 DEBUG nova.compute.manager [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:38.299 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[2488f24a-26a4-4ca5-8dcb-bd0f7cfda86d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5b:8219'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 431693, 'tstamp': 431693}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297761, 'error': None, 'target': 'ovnmeta-8540cf85-00d6-4dc4-a235-89cd0b224d26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:38.324 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[0f3272d2-2482-42c5-bef2-6daa375996d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8540cf85-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:82:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431693, 'reachable_time': 38144, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297762, 'error': None, 'target': 'ovnmeta-8540cf85-00d6-4dc4-a235-89cd0b224d26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:08:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:08:38.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:38.368 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[cd6fc7d2-ecb8-4ef3-a9c6-9ee3946c35e0]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:08:38 compute-0 nova_compute[265391]: 2025-09-30 18:08:38.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:38.450 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[b10b0ba4-321d-4f9b-9e2c-57dad1b35a26]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:38.451 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8540cf85-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:38.452 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:38.452 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8540cf85-00, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:08:38 compute-0 kernel: tap8540cf85-00: entered promiscuous mode
Sep 30 18:08:38 compute-0 nova_compute[265391]: 2025-09-30 18:08:38.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:38 compute-0 NetworkManager[45059]: <info>  [1759255718.4549] manager: (tap8540cf85-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Sep 30 18:08:38 compute-0 nova_compute[265391]: 2025-09-30 18:08:38.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:38.457 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8540cf85-00, col_values=(('external_ids', {'iface-id': '049e58b6-619a-494f-be8e-cd60a8549d92'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:08:38 compute-0 nova_compute[265391]: 2025-09-30 18:08:38.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:38 compute-0 nova_compute[265391]: 2025-09-30 18:08:38.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:38 compute-0 ovn_controller[156242]: 2025-09-30T18:08:38Z|00044|binding|INFO|Releasing lport 049e58b6-619a-494f-be8e-cd60a8549d92 from this chassis (sb_readonly=0)
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:38.462 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[69298796-dd56-42a7-b1c8-c39257cefc20]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:38.463 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8540cf85-00d6-4dc4-a235-89cd0b224d26.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8540cf85-00d6-4dc4-a235-89cd0b224d26.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:38.463 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8540cf85-00d6-4dc4-a235-89cd0b224d26.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8540cf85-00d6-4dc4-a235-89cd0b224d26.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:38.463 166158 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for 8540cf85-00d6-4dc4-a235-89cd0b224d26 disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:38.463 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8540cf85-00d6-4dc4-a235-89cd0b224d26.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8540cf85-00d6-4dc4-a235-89cd0b224d26.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:38.464 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[2813a359-c713-442a-b58b-6a508f2229e6]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:38.464 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8540cf85-00d6-4dc4-a235-89cd0b224d26.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8540cf85-00d6-4dc4-a235-89cd0b224d26.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:38.465 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[ff8d10ae-db35-49dd-8fbe-babeb3d49b1f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:38.465 166158 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]: global
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]:     log         /dev/log local0 debug
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]:     log-tag     haproxy-metadata-proxy-8540cf85-00d6-4dc4-a235-89cd0b224d26
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]:     user        root
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]:     group       root
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]:     maxconn     1024
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]:     pidfile     /var/lib/neutron/external/pids/8540cf85-00d6-4dc4-a235-89cd0b224d26.pid.haproxy
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]:     daemon
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]: defaults
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]:     log global
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]:     mode http
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]:     option httplog
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]:     option dontlognull
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]:     option http-server-close
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]:     option forwardfor
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]:     retries                 3
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]:     timeout http-request    30s
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]:     timeout connect         30s
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]:     timeout client          32s
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]:     timeout server          32s
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]:     timeout http-keep-alive 30s
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]: listen listener
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]:     bind 169.254.169.254:80
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]:     
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]:     server metadata /var/lib/neutron/metadata_proxy
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]:     http-request add-header X-OVN-Network-ID 8540cf85-00d6-4dc4-a235-89cd0b224d26
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Sep 30 18:08:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:38.466 166158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8540cf85-00d6-4dc4-a235-89cd0b224d26', 'env', 'PROCESS_TAG=haproxy-8540cf85-00d6-4dc4-a235-89cd0b224d26', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8540cf85-00d6-4dc4-a235-89cd0b224d26.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Sep 30 18:08:38 compute-0 nova_compute[265391]: 2025-09-30 18:08:38.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:38 compute-0 ceph-mon[73755]: pgmap v894: 353 pgs: 353 active+clean; 88 MiB data, 170 MiB used, 40 GiB / 40 GiB avail; 26 KiB/s rd, 2.7 MiB/s wr, 40 op/s
Sep 30 18:08:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:38 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c0018b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:38 compute-0 nova_compute[265391]: 2025-09-30 18:08:38.787 2 DEBUG nova.compute.manager [req-6463bce9-3b9f-4acc-9091-b79e7005142c req-5d1cf6c4-13dd-4140-82e0-c4fffe590b7a 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Received event network-vif-plugged-0527f042-75af-44f3-a3f5-b9305b8505fc external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:08:38 compute-0 nova_compute[265391]: 2025-09-30 18:08:38.787 2 DEBUG oslo_concurrency.lockutils [req-6463bce9-3b9f-4acc-9091-b79e7005142c req-5d1cf6c4-13dd-4140-82e0-c4fffe590b7a 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:08:38 compute-0 nova_compute[265391]: 2025-09-30 18:08:38.788 2 DEBUG oslo_concurrency.lockutils [req-6463bce9-3b9f-4acc-9091-b79e7005142c req-5d1cf6c4-13dd-4140-82e0-c4fffe590b7a 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:08:38 compute-0 nova_compute[265391]: 2025-09-30 18:08:38.788 2 DEBUG oslo_concurrency.lockutils [req-6463bce9-3b9f-4acc-9091-b79e7005142c req-5d1cf6c4-13dd-4140-82e0-c4fffe590b7a 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:08:38 compute-0 nova_compute[265391]: 2025-09-30 18:08:38.788 2 DEBUG nova.compute.manager [req-6463bce9-3b9f-4acc-9091-b79e7005142c req-5d1cf6c4-13dd-4140-82e0-c4fffe590b7a 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] No waiting events found dispatching network-vif-plugged-0527f042-75af-44f3-a3f5-b9305b8505fc pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:08:38 compute-0 nova_compute[265391]: 2025-09-30 18:08:38.789 2 WARNING nova.compute.manager [req-6463bce9-3b9f-4acc-9091-b79e7005142c req-5d1cf6c4-13dd-4140-82e0-c4fffe590b7a 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Received unexpected event network-vif-plugged-0527f042-75af-44f3-a3f5-b9305b8505fc for instance with vm_state active and task_state None.
Sep 30 18:08:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:08:38] "GET /metrics HTTP/1.1" 200 46643 "" "Prometheus/2.51.0"
Sep 30 18:08:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:08:38] "GET /metrics HTTP/1.1" 200 46643 "" "Prometheus/2.51.0"
Sep 30 18:08:38 compute-0 nova_compute[265391]: 2025-09-30 18:08:38.824 2 INFO nova.compute.manager [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Took 18.56 seconds to build instance.
Sep 30 18:08:38 compute-0 podman[297795]: 2025-09-30 18:08:38.891928241 +0000 UTC m=+0.053756050 container create 43d7f5092d6e2488f7e34d93183b8f6275704bba83b427d141c4e67c55644a9c (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-8540cf85-00d6-4dc4-a235-89cd0b224d26, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Sep 30 18:08:38 compute-0 systemd[1]: Started libpod-conmon-43d7f5092d6e2488f7e34d93183b8f6275704bba83b427d141c4e67c55644a9c.scope.
Sep 30 18:08:38 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:08:38 compute-0 podman[297795]: 2025-09-30 18:08:38.863261695 +0000 UTC m=+0.025089534 image pull 8925e336c8a9c8e5b40d98e4715ad60992d6f21ad0a398170a686c34f922c024 38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Sep 30 18:08:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13102a7448788904fee4837e674d5de34770da9a7ff59ac3fcf28673e713ba04/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Sep 30 18:08:38 compute-0 podman[297795]: 2025-09-30 18:08:38.980485086 +0000 UTC m=+0.142312915 container init 43d7f5092d6e2488f7e34d93183b8f6275704bba83b427d141c4e67c55644a9c (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-8540cf85-00d6-4dc4-a235-89cd0b224d26, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS)
Sep 30 18:08:38 compute-0 podman[297795]: 2025-09-30 18:08:38.986909724 +0000 UTC m=+0.148737533 container start 43d7f5092d6e2488f7e34d93183b8f6275704bba83b427d141c4e67c55644a9c (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-8540cf85-00d6-4dc4-a235-89cd0b224d26, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250930, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:08:39 compute-0 neutron-haproxy-ovnmeta-8540cf85-00d6-4dc4-a235-89cd0b224d26[297811]: [NOTICE]   (297815) : New worker (297817) forked
Sep 30 18:08:39 compute-0 neutron-haproxy-ovnmeta-8540cf85-00d6-4dc4-a235-89cd0b224d26[297811]: [NOTICE]   (297815) : Loading success.
Sep 30 18:08:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:39 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:08:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:08:39.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:08:39 compute-0 nova_compute[265391]: 2025-09-30 18:08:39.330 2 DEBUG oslo_concurrency.lockutils [None req-1f48a099-f3e6-47b6-be39-f1e6ae06cdd5 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Lock "d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.085s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:08:39 compute-0 nova_compute[265391]: 2025-09-30 18:08:39.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v895: 353 pgs: 353 active+clean; 88 MiB data, 170 MiB used, 40 GiB / 40 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 100 op/s
Sep 30 18:08:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:08:40.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:08:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:40 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c0018b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:40 compute-0 ceph-mon[73755]: pgmap v895: 353 pgs: 353 active+clean; 88 MiB data, 170 MiB used, 40 GiB / 40 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 100 op/s
Sep 30 18:08:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:41 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c0018b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:08:41.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v896: 353 pgs: 353 active+clean; 88 MiB data, 170 MiB used, 40 GiB / 40 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 97 op/s
Sep 30 18:08:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:08:42.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:42 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c0018b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:43 compute-0 ceph-mon[73755]: pgmap v896: 353 pgs: 353 active+clean; 88 MiB data, 170 MiB used, 40 GiB / 40 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 97 op/s
Sep 30 18:08:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:43 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:08:43.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:43 compute-0 nova_compute[265391]: 2025-09-30 18:08:43.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:08:43.649Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:08:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:08:43.651Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:08:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v897: 353 pgs: 353 active+clean; 88 MiB data, 170 MiB used, 40 GiB / 40 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 88 op/s
Sep 30 18:08:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:08:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:08:44.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:08:44 compute-0 nova_compute[265391]: 2025-09-30 18:08:44.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:44 compute-0 podman[297832]: 2025-09-30 18:08:44.517914155 +0000 UTC m=+0.060793374 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.4)
Sep 30 18:08:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:44 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c0018b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:45 compute-0 ceph-mon[73755]: pgmap v897: 353 pgs: 353 active+clean; 88 MiB data, 170 MiB used, 40 GiB / 40 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 88 op/s
Sep 30 18:08:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:45 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:08:45.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:08:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v898: 353 pgs: 353 active+clean; 88 MiB data, 170 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 16 KiB/s wr, 85 op/s
Sep 30 18:08:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:08:46.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:46 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004410 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:47 compute-0 ceph-mon[73755]: pgmap v898: 353 pgs: 353 active+clean; 88 MiB data, 170 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 16 KiB/s wr, 85 op/s
Sep 30 18:08:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:08:47.137Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:08:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:08:47.138Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:08:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:47 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:08:47.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v899: 353 pgs: 353 active+clean; 88 MiB data, 170 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:08:48 compute-0 sudo[297857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:08:48 compute-0 sudo[297857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:08:48 compute-0 sudo[297857]: pam_unix(sudo:session): session closed for user root
Sep 30 18:08:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:08:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:08:48.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:08:48 compute-0 nova_compute[265391]: 2025-09-30 18:08:48.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:48 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Sep 30 18:08:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:48 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:08:48] "GET /metrics HTTP/1.1" 200 46643 "" "Prometheus/2.51.0"
Sep 30 18:08:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:08:48] "GET /metrics HTTP/1.1" 200 46643 "" "Prometheus/2.51.0"
Sep 30 18:08:49 compute-0 ceph-mon[73755]: pgmap v899: 353 pgs: 353 active+clean; 88 MiB data, 170 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:08:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:49 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:08:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:08:49.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:08:49 compute-0 nova_compute[265391]: 2025-09-30 18:08:49.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:49 compute-0 ovn_controller[156242]: 2025-09-30T18:08:49Z|00003|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:de:86:8f 10.100.0.9
Sep 30 18:08:49 compute-0 ovn_controller[156242]: 2025-09-30T18:08:49Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:de:86:8f 10.100.0.9
Sep 30 18:08:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v900: 353 pgs: 353 active+clean; 109 MiB data, 173 MiB used, 40 GiB / 40 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 104 op/s
Sep 30 18:08:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:08:50.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:08:50 compute-0 podman[297887]: 2025-09-30 18:08:50.561236061 +0000 UTC m=+0.088237728 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:08:50 compute-0 podman[297894]: 2025-09-30 18:08:50.580566804 +0000 UTC m=+0.085077196 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.buildah.version=1.33.7, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., release=1755695350, io.openshift.expose-services=, name=ubi9-minimal, architecture=x86_64, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers)
Sep 30 18:08:50 compute-0 podman[297888]: 2025-09-30 18:08:50.597274529 +0000 UTC m=+0.113314631 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Sep 30 18:08:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:50 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004410 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:51 compute-0 sshd-session[297885]: Invalid user camera from 14.225.220.107 port 45512
Sep 30 18:08:51 compute-0 sshd-session[297885]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:08:51 compute-0 sshd-session[297885]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107
Sep 30 18:08:51 compute-0 ceph-mon[73755]: pgmap v900: 353 pgs: 353 active+clean; 109 MiB data, 173 MiB used, 40 GiB / 40 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 104 op/s
Sep 30 18:08:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:51 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:08:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:08:51.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:08:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v901: 353 pgs: 353 active+clean; 109 MiB data, 173 MiB used, 40 GiB / 40 GiB avail; 750 KiB/s rd, 2.0 MiB/s wr, 50 op/s
Sep 30 18:08:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:08:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:08:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:08:52.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:52 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:53 compute-0 ceph-mon[73755]: pgmap v901: 353 pgs: 353 active+clean; 109 MiB data, 173 MiB used, 40 GiB / 40 GiB avail; 750 KiB/s rd, 2.0 MiB/s wr, 50 op/s
Sep 30 18:08:53 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:08:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:53 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:08:53.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:53 compute-0 nova_compute[265391]: 2025-09-30 18:08:53.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:53 compute-0 sshd-session[297885]: Failed password for invalid user camera from 14.225.220.107 port 45512 ssh2
Sep 30 18:08:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:08:53.652Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:08:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:08:53.652Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:08:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v902: 353 pgs: 353 active+clean; 121 MiB data, 230 MiB used, 40 GiB / 40 GiB avail; 955 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Sep 30 18:08:54 compute-0 sshd-session[297885]: Received disconnect from 14.225.220.107 port 45512:11: Bye Bye [preauth]
Sep 30 18:08:54 compute-0 sshd-session[297885]: Disconnected from invalid user camera 14.225.220.107 port 45512 [preauth]
Sep 30 18:08:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:54.273 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:08:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:54.273 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:08:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:54.274 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:08:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:08:54.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:54 compute-0 nova_compute[265391]: 2025-09-30 18:08:54.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:54 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004410 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:55 compute-0 ceph-mon[73755]: pgmap v902: 353 pgs: 353 active+clean; 121 MiB data, 230 MiB used, 40 GiB / 40 GiB avail; 955 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Sep 30 18:08:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:55 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:08:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:08:55.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:08:55 compute-0 nova_compute[265391]: 2025-09-30 18:08:55.448 2 DEBUG oslo_concurrency.lockutils [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Acquiring lock "d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:08:55 compute-0 nova_compute[265391]: 2025-09-30 18:08:55.449 2 DEBUG oslo_concurrency.lockutils [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Lock "d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:08:55 compute-0 nova_compute[265391]: 2025-09-30 18:08:55.449 2 DEBUG oslo_concurrency.lockutils [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Acquiring lock "d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:08:55 compute-0 nova_compute[265391]: 2025-09-30 18:08:55.449 2 DEBUG oslo_concurrency.lockutils [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Lock "d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:08:55 compute-0 nova_compute[265391]: 2025-09-30 18:08:55.449 2 DEBUG oslo_concurrency.lockutils [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Lock "d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:08:55 compute-0 nova_compute[265391]: 2025-09-30 18:08:55.466 2 INFO nova.compute.manager [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Terminating instance
Sep 30 18:08:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:08:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v903: 353 pgs: 353 active+clean; 121 MiB data, 230 MiB used, 40 GiB / 40 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Sep 30 18:08:55 compute-0 nova_compute[265391]: 2025-09-30 18:08:55.983 2 DEBUG nova.compute.manager [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Sep 30 18:08:56 compute-0 kernel: tap0527f042-75 (unregistering): left promiscuous mode
Sep 30 18:08:56 compute-0 NetworkManager[45059]: <info>  [1759255736.0400] device (tap0527f042-75): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 18:08:56 compute-0 ovn_controller[156242]: 2025-09-30T18:08:56Z|00045|binding|INFO|Releasing lport 0527f042-75af-44f3-a3f5-b9305b8505fc from this chassis (sb_readonly=0)
Sep 30 18:08:56 compute-0 ovn_controller[156242]: 2025-09-30T18:08:56Z|00046|binding|INFO|Setting lport 0527f042-75af-44f3-a3f5-b9305b8505fc down in Southbound
Sep 30 18:08:56 compute-0 ovn_controller[156242]: 2025-09-30T18:08:56Z|00047|binding|INFO|Removing iface tap0527f042-75 ovn-installed in OVS
Sep 30 18:08:56 compute-0 nova_compute[265391]: 2025-09-30 18:08:56.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:56 compute-0 nova_compute[265391]: 2025-09-30 18:08:56.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:56.057 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:86:8f 10.100.0.9'], port_security=['fa:16:3e:de:86:8f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'd8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8540cf85-00d6-4dc4-a235-89cd0b224d26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '20644a86c59b4259a037c783fe6fff20', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'abe33b85-4e2c-42ed-aa04-22ef524434e4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=63eb3a66-bb05-42df-9005-47612d7f10be, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=0527f042-75af-44f3-a3f5-b9305b8505fc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:08:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:56.058 166158 INFO neutron.agent.ovn.metadata.agent [-] Port 0527f042-75af-44f3-a3f5-b9305b8505fc in datapath 8540cf85-00d6-4dc4-a235-89cd0b224d26 unbound from our chassis
Sep 30 18:08:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:56.059 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8540cf85-00d6-4dc4-a235-89cd0b224d26, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:08:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:56.060 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[af6c16f6-8b97-452f-ae80-45419db0e99c]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:08:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:56.061 166158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8540cf85-00d6-4dc4-a235-89cd0b224d26 namespace which is not needed anymore
Sep 30 18:08:56 compute-0 nova_compute[265391]: 2025-09-30 18:08:56.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:56 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Sep 30 18:08:56 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 13.071s CPU time.
Sep 30 18:08:56 compute-0 systemd-machined[219917]: Machine qemu-1-instance-00000001 terminated.
Sep 30 18:08:56 compute-0 neutron-haproxy-ovnmeta-8540cf85-00d6-4dc4-a235-89cd0b224d26[297811]: [NOTICE]   (297815) : haproxy version is 3.0.5-8e879a5
Sep 30 18:08:56 compute-0 neutron-haproxy-ovnmeta-8540cf85-00d6-4dc4-a235-89cd0b224d26[297811]: [NOTICE]   (297815) : path to executable is /usr/sbin/haproxy
Sep 30 18:08:56 compute-0 neutron-haproxy-ovnmeta-8540cf85-00d6-4dc4-a235-89cd0b224d26[297811]: [WARNING]  (297815) : Exiting Master process...
Sep 30 18:08:56 compute-0 podman[297976]: 2025-09-30 18:08:56.175077678 +0000 UTC m=+0.032172529 container kill 43d7f5092d6e2488f7e34d93183b8f6275704bba83b427d141c4e67c55644a9c (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-8540cf85-00d6-4dc4-a235-89cd0b224d26, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.build-date=20250930, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:08:56 compute-0 neutron-haproxy-ovnmeta-8540cf85-00d6-4dc4-a235-89cd0b224d26[297811]: [ALERT]    (297815) : Current worker (297817) exited with code 143 (Terminated)
Sep 30 18:08:56 compute-0 neutron-haproxy-ovnmeta-8540cf85-00d6-4dc4-a235-89cd0b224d26[297811]: [WARNING]  (297815) : All workers exited. Exiting... (0)
Sep 30 18:08:56 compute-0 systemd[1]: libpod-43d7f5092d6e2488f7e34d93183b8f6275704bba83b427d141c4e67c55644a9c.scope: Deactivated successfully.
Sep 30 18:08:56 compute-0 nova_compute[265391]: 2025-09-30 18:08:56.220 2 INFO nova.virt.libvirt.driver [-] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Instance destroyed successfully.
Sep 30 18:08:56 compute-0 nova_compute[265391]: 2025-09-30 18:08:56.222 2 DEBUG nova.objects.instance [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Lazy-loading 'resources' on Instance uuid d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:08:56 compute-0 podman[297991]: 2025-09-30 18:08:56.225147261 +0000 UTC m=+0.029719574 container died 43d7f5092d6e2488f7e34d93183b8f6275704bba83b427d141c4e67c55644a9c (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-8540cf85-00d6-4dc4-a235-89cd0b224d26, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.vendor=CentOS)
Sep 30 18:08:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-43d7f5092d6e2488f7e34d93183b8f6275704bba83b427d141c4e67c55644a9c-userdata-shm.mount: Deactivated successfully.
Sep 30 18:08:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-13102a7448788904fee4837e674d5de34770da9a7ff59ac3fcf28673e713ba04-merged.mount: Deactivated successfully.
Sep 30 18:08:56 compute-0 podman[297991]: 2025-09-30 18:08:56.270460041 +0000 UTC m=+0.075032344 container cleanup 43d7f5092d6e2488f7e34d93183b8f6275704bba83b427d141c4e67c55644a9c (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-8540cf85-00d6-4dc4-a235-89cd0b224d26, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:08:56 compute-0 systemd[1]: libpod-conmon-43d7f5092d6e2488f7e34d93183b8f6275704bba83b427d141c4e67c55644a9c.scope: Deactivated successfully.
Sep 30 18:08:56 compute-0 podman[297993]: 2025-09-30 18:08:56.28809416 +0000 UTC m=+0.082625802 container remove 43d7f5092d6e2488f7e34d93183b8f6275704bba83b427d141c4e67c55644a9c (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-8540cf85-00d6-4dc4-a235-89cd0b224d26, io.buildah.version=1.41.4, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:08:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:56.293 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[5d679875-ad7d-4a3f-b3a2-e5e6fc614007]: (4, ("Tue Sep 30 06:08:56 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-8540cf85-00d6-4dc4-a235-89cd0b224d26 (43d7f5092d6e2488f7e34d93183b8f6275704bba83b427d141c4e67c55644a9c)\n43d7f5092d6e2488f7e34d93183b8f6275704bba83b427d141c4e67c55644a9c\nTue Sep 30 06:08:56 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8540cf85-00d6-4dc4-a235-89cd0b224d26 (43d7f5092d6e2488f7e34d93183b8f6275704bba83b427d141c4e67c55644a9c)\n43d7f5092d6e2488f7e34d93183b8f6275704bba83b427d141c4e67c55644a9c\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:08:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:56.294 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[fbe0142e-89c3-461c-a6d2-83eee1051fff]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:08:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:56.295 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8540cf85-00d6-4dc4-a235-89cd0b224d26.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8540cf85-00d6-4dc4-a235-89cd0b224d26.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:08:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:56.295 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[d722e16f-ed5e-4154-beb2-53816f0d0f7b]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:08:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:56.296 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8540cf85-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:08:56 compute-0 nova_compute[265391]: 2025-09-30 18:08:56.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:56 compute-0 kernel: tap8540cf85-00: left promiscuous mode
Sep 30 18:08:56 compute-0 nova_compute[265391]: 2025-09-30 18:08:56.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:56 compute-0 nova_compute[265391]: 2025-09-30 18:08:56.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:56.318 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[916e9ab8-b8e9-4f6d-b66f-6fe45f734107]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:08:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:56.345 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[8154ffbd-f3ea-40a0-a86f-798e84f653ca]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:08:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:56.346 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[9ea8a79a-6709-4837-a00b-6a988fdddf98]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:08:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:08:56.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:56.362 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[c51801ab-07c8-4787-8546-0657f65d9db4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431684, 'reachable_time': 18316, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298037, 'error': None, 'target': 'ovnmeta-8540cf85-00d6-4dc4-a235-89cd0b224d26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:08:56 compute-0 systemd[1]: run-netns-ovnmeta\x2d8540cf85\x2d00d6\x2d4dc4\x2da235\x2d89cd0b224d26.mount: Deactivated successfully.
Sep 30 18:08:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:56.374 166383 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8540cf85-00d6-4dc4-a235-89cd0b224d26 deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Sep 30 18:08:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:56.377 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[85bf5698-24ca-4aaa-9059-509f93763aff]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:08:56 compute-0 nova_compute[265391]: 2025-09-30 18:08:56.657 2 DEBUG nova.compute.manager [req-1558fa23-1047-4303-aaa7-d50d073e76d0 req-6c86df23-1042-45ac-a5a5-6f2563922718 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Received event network-vif-unplugged-0527f042-75af-44f3-a3f5-b9305b8505fc external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:08:56 compute-0 nova_compute[265391]: 2025-09-30 18:08:56.658 2 DEBUG oslo_concurrency.lockutils [req-1558fa23-1047-4303-aaa7-d50d073e76d0 req-6c86df23-1042-45ac-a5a5-6f2563922718 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:08:56 compute-0 nova_compute[265391]: 2025-09-30 18:08:56.658 2 DEBUG oslo_concurrency.lockutils [req-1558fa23-1047-4303-aaa7-d50d073e76d0 req-6c86df23-1042-45ac-a5a5-6f2563922718 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:08:56 compute-0 nova_compute[265391]: 2025-09-30 18:08:56.659 2 DEBUG oslo_concurrency.lockutils [req-1558fa23-1047-4303-aaa7-d50d073e76d0 req-6c86df23-1042-45ac-a5a5-6f2563922718 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:08:56 compute-0 nova_compute[265391]: 2025-09-30 18:08:56.660 2 DEBUG nova.compute.manager [req-1558fa23-1047-4303-aaa7-d50d073e76d0 req-6c86df23-1042-45ac-a5a5-6f2563922718 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] No waiting events found dispatching network-vif-unplugged-0527f042-75af-44f3-a3f5-b9305b8505fc pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:08:56 compute-0 nova_compute[265391]: 2025-09-30 18:08:56.661 2 DEBUG nova.compute.manager [req-1558fa23-1047-4303-aaa7-d50d073e76d0 req-6c86df23-1042-45ac-a5a5-6f2563922718 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Received event network-vif-unplugged-0527f042-75af-44f3-a3f5-b9305b8505fc for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:08:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:56 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:56.717 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:08:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:08:56.760 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:08:56 compute-0 nova_compute[265391]: 2025-09-30 18:08:56.761 2 DEBUG nova.virt.libvirt.vif [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:08:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestContinuousAudit-server-1465888902',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testcontinuousaudit-server-1465888902',id=1,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:08:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='20644a86c59b4259a037c783fe6fff20',ramdisk_id='',reservation_id='r-4nsvnjgl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestContinuousAudit-161535077',owner_user_name='tempest-TestContinuousAudit-161535077-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:08:38Z,user_data=None,user_id='585fdf30683043138bc58696e20c5d20',uuid=d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0527f042-75af-44f3-a3f5-b9305b8505fc", "address": "fa:16:3e:de:86:8f", "network": {"id": "8540cf85-00d6-4dc4-a235-89cd0b224d26", "bridge": "br-int", "label": "tempest-TestContinuousAudit-1473540262-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5ecd2ee32c3491198baea5df005e7e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0527f042-75", "ovs_interfaceid": "0527f042-75af-44f3-a3f5-b9305b8505fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Sep 30 18:08:56 compute-0 nova_compute[265391]: 2025-09-30 18:08:56.762 2 DEBUG nova.network.os_vif_util [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Converting VIF {"id": "0527f042-75af-44f3-a3f5-b9305b8505fc", "address": "fa:16:3e:de:86:8f", "network": {"id": "8540cf85-00d6-4dc4-a235-89cd0b224d26", "bridge": "br-int", "label": "tempest-TestContinuousAudit-1473540262-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5ecd2ee32c3491198baea5df005e7e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0527f042-75", "ovs_interfaceid": "0527f042-75af-44f3-a3f5-b9305b8505fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:08:56 compute-0 nova_compute[265391]: 2025-09-30 18:08:56.764 2 DEBUG nova.network.os_vif_util [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:86:8f,bridge_name='br-int',has_traffic_filtering=True,id=0527f042-75af-44f3-a3f5-b9305b8505fc,network=Network(8540cf85-00d6-4dc4-a235-89cd0b224d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0527f042-75') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:08:56 compute-0 nova_compute[265391]: 2025-09-30 18:08:56.765 2 DEBUG os_vif [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:86:8f,bridge_name='br-int',has_traffic_filtering=True,id=0527f042-75af-44f3-a3f5-b9305b8505fc,network=Network(8540cf85-00d6-4dc4-a235-89cd0b224d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0527f042-75') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Sep 30 18:08:56 compute-0 nova_compute[265391]: 2025-09-30 18:08:56.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:56 compute-0 nova_compute[265391]: 2025-09-30 18:08:56.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:56 compute-0 nova_compute[265391]: 2025-09-30 18:08:56.771 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0527f042-75, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:08:56 compute-0 nova_compute[265391]: 2025-09-30 18:08:56.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:56 compute-0 nova_compute[265391]: 2025-09-30 18:08:56.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:08:56 compute-0 nova_compute[265391]: 2025-09-30 18:08:56.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:56 compute-0 nova_compute[265391]: 2025-09-30 18:08:56.778 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=362b581f-f7cd-45ec-9b59-fe2a6a266371) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:08:56 compute-0 nova_compute[265391]: 2025-09-30 18:08:56.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:56 compute-0 nova_compute[265391]: 2025-09-30 18:08:56.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:56 compute-0 nova_compute[265391]: 2025-09-30 18:08:56.783 2 INFO os_vif [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:86:8f,bridge_name='br-int',has_traffic_filtering=True,id=0527f042-75af-44f3-a3f5-b9305b8505fc,network=Network(8540cf85-00d6-4dc4-a235-89cd0b224d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0527f042-75')
Sep 30 18:08:57 compute-0 ceph-mon[73755]: pgmap v903: 353 pgs: 353 active+clean; 121 MiB data, 230 MiB used, 40 GiB / 40 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Sep 30 18:08:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:08:57.138Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:08:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:57 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:57 compute-0 nova_compute[265391]: 2025-09-30 18:08:57.232 2 INFO nova.virt.libvirt.driver [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Deleting instance files /var/lib/nova/instances/d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6_del
Sep 30 18:08:57 compute-0 nova_compute[265391]: 2025-09-30 18:08:57.232 2 INFO nova.virt.libvirt.driver [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Deletion of /var/lib/nova/instances/d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6_del complete
Sep 30 18:08:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:08:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:08:57.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:08:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:08:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/530785160' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:08:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:08:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/530785160' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:08:57 compute-0 nova_compute[265391]: 2025-09-30 18:08:57.748 2 INFO nova.compute.manager [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Took 1.76 seconds to destroy the instance on the hypervisor.
Sep 30 18:08:57 compute-0 nova_compute[265391]: 2025-09-30 18:08:57.749 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Sep 30 18:08:57 compute-0 nova_compute[265391]: 2025-09-30 18:08:57.749 2 DEBUG nova.compute.manager [-] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Sep 30 18:08:57 compute-0 nova_compute[265391]: 2025-09-30 18:08:57.750 2 DEBUG nova.network.neutron [-] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Sep 30 18:08:57 compute-0 nova_compute[265391]: 2025-09-30 18:08:57.750 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:08:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v904: 353 pgs: 353 active+clean; 121 MiB data, 230 MiB used, 40 GiB / 40 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Sep 30 18:08:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/530785160' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:08:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/530785160' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:08:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:08:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:08:58.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:08:58 compute-0 nova_compute[265391]: 2025-09-30 18:08:58.551 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:08:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:58 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004410 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:58 compute-0 nova_compute[265391]: 2025-09-30 18:08:58.724 2 DEBUG nova.compute.manager [req-16304bc3-20f8-48df-bc24-b0f894614fb9 req-b2b8c850-47ec-4ed5-b0d2-b5451864cbea 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Received event network-vif-unplugged-0527f042-75af-44f3-a3f5-b9305b8505fc external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:08:58 compute-0 nova_compute[265391]: 2025-09-30 18:08:58.724 2 DEBUG oslo_concurrency.lockutils [req-16304bc3-20f8-48df-bc24-b0f894614fb9 req-b2b8c850-47ec-4ed5-b0d2-b5451864cbea 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:08:58 compute-0 nova_compute[265391]: 2025-09-30 18:08:58.724 2 DEBUG oslo_concurrency.lockutils [req-16304bc3-20f8-48df-bc24-b0f894614fb9 req-b2b8c850-47ec-4ed5-b0d2-b5451864cbea 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:08:58 compute-0 nova_compute[265391]: 2025-09-30 18:08:58.725 2 DEBUG oslo_concurrency.lockutils [req-16304bc3-20f8-48df-bc24-b0f894614fb9 req-b2b8c850-47ec-4ed5-b0d2-b5451864cbea 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:08:58 compute-0 nova_compute[265391]: 2025-09-30 18:08:58.725 2 DEBUG nova.compute.manager [req-16304bc3-20f8-48df-bc24-b0f894614fb9 req-b2b8c850-47ec-4ed5-b0d2-b5451864cbea 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] No waiting events found dispatching network-vif-unplugged-0527f042-75af-44f3-a3f5-b9305b8505fc pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:08:58 compute-0 nova_compute[265391]: 2025-09-30 18:08:58.725 2 DEBUG nova.compute.manager [req-16304bc3-20f8-48df-bc24-b0f894614fb9 req-b2b8c850-47ec-4ed5-b0d2-b5451864cbea 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Received event network-vif-unplugged-0527f042-75af-44f3-a3f5-b9305b8505fc for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:08:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:08:58] "GET /metrics HTTP/1.1" 200 46641 "" "Prometheus/2.51.0"
Sep 30 18:08:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:08:58] "GET /metrics HTTP/1.1" 200 46641 "" "Prometheus/2.51.0"
Sep 30 18:08:59 compute-0 ceph-mon[73755]: pgmap v904: 353 pgs: 353 active+clean; 121 MiB data, 230 MiB used, 40 GiB / 40 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Sep 30 18:08:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:08:59 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:08:59 compute-0 nova_compute[265391]: 2025-09-30 18:08:59.336 2 DEBUG nova.network.neutron [-] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:08:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:08:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:08:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:08:59.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:08:59 compute-0 nova_compute[265391]: 2025-09-30 18:08:59.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:08:59 compute-0 podman[276673]: time="2025-09-30T18:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:08:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:08:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10252 "" "Go-http-client/1.1"
Sep 30 18:08:59 compute-0 nova_compute[265391]: 2025-09-30 18:08:59.846 2 INFO nova.compute.manager [-] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Took 2.10 seconds to deallocate network for instance.
Sep 30 18:08:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v905: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Sep 30 18:09:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:09:00.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:00 compute-0 nova_compute[265391]: 2025-09-30 18:09:00.371 2 DEBUG oslo_concurrency.lockutils [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:09:00 compute-0 nova_compute[265391]: 2025-09-30 18:09:00.372 2 DEBUG oslo_concurrency.lockutils [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:09:00 compute-0 nova_compute[265391]: 2025-09-30 18:09:00.425 2 DEBUG oslo_concurrency.processutils [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:09:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:09:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:00 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:00 compute-0 nova_compute[265391]: 2025-09-30 18:09:00.815 2 DEBUG nova.compute.manager [req-163cf39f-17c8-489b-8516-d81d98209566 req-14a7f673-c7c3-4494-aa5c-97bf793e27eb 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6] Received event network-vif-deleted-0527f042-75af-44f3-a3f5-b9305b8505fc external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:09:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:09:00 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2321016534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:09:00 compute-0 nova_compute[265391]: 2025-09-30 18:09:00.873 2 DEBUG oslo_concurrency.processutils [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:09:00 compute-0 nova_compute[265391]: 2025-09-30 18:09:00.880 2 DEBUG nova.compute.provider_tree [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Updating inventory in ProviderTree for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 39, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Sep 30 18:09:01 compute-0 ceph-mon[73755]: pgmap v905: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Sep 30 18:09:01 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2321016534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:09:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:01 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:09:01.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:01 compute-0 openstack_network_exporter[279566]: ERROR   18:09:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:09:01 compute-0 openstack_network_exporter[279566]: ERROR   18:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:09:01 compute-0 openstack_network_exporter[279566]: ERROR   18:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:09:01 compute-0 nova_compute[265391]: 2025-09-30 18:09:01.421 2 ERROR nova.scheduler.client.report [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] [req-fee9aa89-d5dc-497d-a03d-ba6f147034e7] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 39, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 5403d2fc-3ca9-4ee2-946b-a15032cca0c2.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-fee9aa89-d5dc-497d-a03d-ba6f147034e7"}]}
Sep 30 18:09:01 compute-0 openstack_network_exporter[279566]: ERROR   18:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:09:01 compute-0 openstack_network_exporter[279566]: ERROR   18:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:09:01 compute-0 nova_compute[265391]: 2025-09-30 18:09:01.444 2 DEBUG nova.scheduler.client.report [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Refreshing inventories for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:822
Sep 30 18:09:01 compute-0 nova_compute[265391]: 2025-09-30 18:09:01.458 2 DEBUG nova.scheduler.client.report [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Updating ProviderTree inventory for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 0, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:786
Sep 30 18:09:01 compute-0 nova_compute[265391]: 2025-09-30 18:09:01.459 2 DEBUG nova.compute.provider_tree [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Updating inventory in ProviderTree for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 0, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Sep 30 18:09:01 compute-0 nova_compute[265391]: 2025-09-30 18:09:01.476 2 DEBUG nova.scheduler.client.report [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Refreshing aggregate associations for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2, aggregates: None _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:831
Sep 30 18:09:01 compute-0 nova_compute[265391]: 2025-09-30 18:09:01.495 2 DEBUG nova.scheduler.client.report [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Refreshing trait associations for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2, traits: COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SOUND_MODEL_SB16,COMPUTE_ARCH_X86_64,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIRTIO_PACKED,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_TIS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOUND_MODEL_ICH9,COMPUTE_SOUND_MODEL_USB,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOUND_MODEL_PCSPK,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ADDRESS_SPACE_EMULATED,COMPUTE_RESCUE_BFV,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_STATELESS_FIRMWARE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_IGB,HW_ARCH_X86_64,COMPUTE_ACCELERATORS,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOUND_MODEL_ES1370,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_CRB,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_VIRTIO_FS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ADDRESS_SPACE_PASSTHROUGH,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_F16C,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOUND_MODEL_ICH6,COMPUTE_SOUND_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SOUND_MODEL_AC97,HW_CPU_X86_SSE42 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:843
Sep 30 18:09:01 compute-0 nova_compute[265391]: 2025-09-30 18:09:01.526 2 DEBUG oslo_concurrency.processutils [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:09:01 compute-0 nova_compute[265391]: 2025-09-30 18:09:01.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:09:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v906: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 224 KiB/s rd, 146 KiB/s wr, 61 op/s
Sep 30 18:09:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:09:01 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3078215495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:09:01 compute-0 nova_compute[265391]: 2025-09-30 18:09:01.977 2 DEBUG oslo_concurrency.processutils [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:09:01 compute-0 nova_compute[265391]: 2025-09-30 18:09:01.983 2 DEBUG nova.compute.provider_tree [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Updating inventory in ProviderTree for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 39, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Sep 30 18:09:02 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3078215495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:09:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:09:02.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:02 compute-0 nova_compute[265391]: 2025-09-30 18:09:02.531 2 DEBUG nova.scheduler.client.report [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Updated inventory for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 39, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:975
Sep 30 18:09:02 compute-0 nova_compute[265391]: 2025-09-30 18:09:02.532 2 DEBUG nova.compute.provider_tree [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Updating resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:164
Sep 30 18:09:02 compute-0 nova_compute[265391]: 2025-09-30 18:09:02.533 2 DEBUG nova.compute.provider_tree [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Updating inventory in ProviderTree for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Sep 30 18:09:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:02 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004410 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:03 compute-0 nova_compute[265391]: 2025-09-30 18:09:03.045 2 DEBUG oslo_concurrency.lockutils [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.673s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:09:03 compute-0 nova_compute[265391]: 2025-09-30 18:09:03.070 2 INFO nova.scheduler.client.report [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Deleted allocations for instance d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6
Sep 30 18:09:03 compute-0 ceph-mon[73755]: pgmap v906: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 224 KiB/s rd, 146 KiB/s wr, 61 op/s
Sep 30 18:09:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:03 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:09:03.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:09:03.653Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:09:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v907: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 224 KiB/s rd, 146 KiB/s wr, 61 op/s
Sep 30 18:09:04 compute-0 nova_compute[265391]: 2025-09-30 18:09:04.104 2 DEBUG oslo_concurrency.lockutils [None req-455a32cf-474c-4e42-85c1-cb4d2a70bc95 585fdf30683043138bc58696e20c5d20 20644a86c59b4259a037c783fe6fff20 - - default default] Lock "d8a3b6e3-8f20-4a0e-99d2-c531b63cf7e6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.655s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:09:04 compute-0 ceph-mon[73755]: pgmap v907: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 224 KiB/s rd, 146 KiB/s wr, 61 op/s
Sep 30 18:09:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:09:04.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:04 compute-0 nova_compute[265391]: 2025-09-30 18:09:04.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:09:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:04 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:05 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:09:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:09:05.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:09:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:09:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v908: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Sep 30 18:09:06 compute-0 sshd-session[298111]: Invalid user camera from 45.252.249.158 port 59522
Sep 30 18:09:06 compute-0 sshd-session[298111]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:09:06 compute-0 sshd-session[298111]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158
Sep 30 18:09:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:09:06.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:06 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220000ea0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:06 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:09:06.763 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:09:06 compute-0 nova_compute[265391]: 2025-09-30 18:09:06.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:09:07 compute-0 ceph-mon[73755]: pgmap v908: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Sep 30 18:09:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:09:07.139Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:09:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:07 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:09:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:09:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:09:07.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:09:07
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'default.rgw.control', 'default.rgw.meta', '.nfs', 'vms', '.rgw.root', 'cephfs.cephfs.data', '.mgr', 'volumes', 'images']
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:09:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v909: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Sep 30 18:09:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:09:08 compute-0 sshd-session[298111]: Failed password for invalid user camera from 45.252.249.158 port 59522 ssh2
Sep 30 18:09:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:08 compute-0 sudo[298119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:09:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:09:08.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:08 compute-0 sudo[298119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:09:08 compute-0 sudo[298119]: pam_unix(sudo:session): session closed for user root
Sep 30 18:09:08 compute-0 podman[298144]: 2025-09-30 18:09:08.463504433 +0000 UTC m=+0.073669729 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Sep 30 18:09:08 compute-0 podman[298143]: 2025-09-30 18:09:08.485239829 +0000 UTC m=+0.096126634 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250930)
Sep 30 18:09:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:08 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:09:08] "GET /metrics HTTP/1.1" 200 46629 "" "Prometheus/2.51.0"
Sep 30 18:09:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:09:08] "GET /metrics HTTP/1.1" 200 46629 "" "Prometheus/2.51.0"
Sep 30 18:09:09 compute-0 ceph-mon[73755]: pgmap v909: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Sep 30 18:09:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:09 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:09:09.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:09 compute-0 sshd-session[298111]: Received disconnect from 45.252.249.158 port 59522:11: Bye Bye [preauth]
Sep 30 18:09:09 compute-0 sshd-session[298111]: Disconnected from invalid user camera 45.252.249.158 port 59522 [preauth]
Sep 30 18:09:09 compute-0 nova_compute[265391]: 2025-09-30 18:09:09.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:09:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v910: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Sep 30 18:09:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:09:10.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:09:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:10 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220000ea0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:11 compute-0 ceph-mon[73755]: pgmap v910: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Sep 30 18:09:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:11 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:09:11.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:11 compute-0 nova_compute[265391]: 2025-09-30 18:09:11.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:09:11 compute-0 nova_compute[265391]: 2025-09-30 18:09:11.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:09:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v911: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:09:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:09:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:09:12.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:09:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:12 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:13 compute-0 ceph-mon[73755]: pgmap v911: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:09:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:13 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:09:13.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:09:13.654Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:09:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v912: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:09:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:09:14.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:14 compute-0 nova_compute[265391]: 2025-09-30 18:09:14.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:09:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:14 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220000ea0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:15 compute-0 ceph-mon[73755]: pgmap v912: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:09:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:15 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:09:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:09:15.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:09:15 compute-0 podman[298200]: 2025-09-30 18:09:15.544685904 +0000 UTC m=+0.078883504 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Sep 30 18:09:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:09:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v913: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:09:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:09:16.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:16 compute-0 nova_compute[265391]: 2025-09-30 18:09:16.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:09:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:16 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:16 compute-0 nova_compute[265391]: 2025-09-30 18:09:16.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:09:17 compute-0 ceph-mon[73755]: pgmap v913: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:09:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:09:17.140Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:09:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:17 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:09:17.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v914: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:09:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:09:18.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:18 compute-0 nova_compute[265391]: 2025-09-30 18:09:18.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:09:18 compute-0 nova_compute[265391]: 2025-09-30 18:09:18.429 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11909
Sep 30 18:09:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:18 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220000ea0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:09:18] "GET /metrics HTTP/1.1" 200 46629 "" "Prometheus/2.51.0"
Sep 30 18:09:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:09:18] "GET /metrics HTTP/1.1" 200 46629 "" "Prometheus/2.51.0"
Sep 30 18:09:18 compute-0 nova_compute[265391]: 2025-09-30 18:09:18.943 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11918
Sep 30 18:09:19 compute-0 ceph-mon[73755]: pgmap v914: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:09:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:19 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:09:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:09:19.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:09:19 compute-0 nova_compute[265391]: 2025-09-30 18:09:19.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:09:19 compute-0 nova_compute[265391]: 2025-09-30 18:09:19.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:09:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v915: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:09:19 compute-0 nova_compute[265391]: 2025-09-30 18:09:19.943 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:09:19 compute-0 nova_compute[265391]: 2025-09-30 18:09:19.943 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:09:19 compute-0 nova_compute[265391]: 2025-09-30 18:09:19.943 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:09:19 compute-0 nova_compute[265391]: 2025-09-30 18:09:19.944 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:09:19 compute-0 nova_compute[265391]: 2025-09-30 18:09:19.944 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:09:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:09:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:09:20.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:09:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:09:20 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2128623266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:09:20 compute-0 nova_compute[265391]: 2025-09-30 18:09:20.446 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:09:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:09:20 compute-0 nova_compute[265391]: 2025-09-30 18:09:20.621 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:09:20 compute-0 nova_compute[265391]: 2025-09-30 18:09:20.623 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:09:20 compute-0 nova_compute[265391]: 2025-09-30 18:09:20.650 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.027s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:09:20 compute-0 nova_compute[265391]: 2025-09-30 18:09:20.652 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4558MB free_disk=39.992183685302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:09:20 compute-0 nova_compute[265391]: 2025-09-30 18:09:20.652 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:09:20 compute-0 nova_compute[265391]: 2025-09-30 18:09:20.652 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:09:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:20 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:21 compute-0 ceph-mon[73755]: pgmap v915: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:09:21 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2128623266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:09:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:21 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:09:21.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:21 compute-0 sudo[298250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:09:21 compute-0 sudo[298250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:09:21 compute-0 sudo[298250]: pam_unix(sudo:session): session closed for user root
Sep 30 18:09:21 compute-0 sudo[298298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:09:21 compute-0 sudo[298298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:09:21 compute-0 podman[298274]: 2025-09-30 18:09:21.541747107 +0000 UTC m=+0.077636201 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Sep 30 18:09:21 compute-0 podman[298277]: 2025-09-30 18:09:21.552775145 +0000 UTC m=+0.075126067 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, container_name=openstack_network_exporter, io.openshift.expose-services=)
Sep 30 18:09:21 compute-0 podman[298275]: 2025-09-30 18:09:21.580703262 +0000 UTC m=+0.109054170 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930)
Sep 30 18:09:21 compute-0 nova_compute[265391]: 2025-09-30 18:09:21.706 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:09:21 compute-0 nova_compute[265391]: 2025-09-30 18:09:21.706 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:09:20 up  1:12,  0 user,  load average: 0.80, 0.82, 0.91\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:09:21 compute-0 nova_compute[265391]: 2025-09-30 18:09:21.723 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:09:21 compute-0 nova_compute[265391]: 2025-09-30 18:09:21.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:09:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v916: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:09:22 compute-0 sudo[298298]: pam_unix(sudo:session): session closed for user root
Sep 30 18:09:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:09:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3742597895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:09:22 compute-0 nova_compute[265391]: 2025-09-30 18:09:22.231 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:09:22 compute-0 nova_compute[265391]: 2025-09-30 18:09:22.237 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:09:22 compute-0 sudo[298411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:09:22 compute-0 sudo[298411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:09:22 compute-0 sudo[298411]: pam_unix(sudo:session): session closed for user root
Sep 30 18:09:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:09:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:09:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/180922 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 18:09:22 compute-0 sudo[298436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- inventory --format=json-pretty --filter-for-batch
Sep 30 18:09:22 compute-0 sudo[298436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:09:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:09:22.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:22 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220000ea0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:22 compute-0 nova_compute[265391]: 2025-09-30 18:09:22.745 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:09:22 compute-0 podman[298499]: 2025-09-30 18:09:22.848196224 +0000 UTC m=+0.049458118 container create 6afcd68fbde1850ee2527cf44bee59a0a73b55d057ecd0ab6abbd91e789879a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_booth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True)
Sep 30 18:09:22 compute-0 systemd[1]: Started libpod-conmon-6afcd68fbde1850ee2527cf44bee59a0a73b55d057ecd0ab6abbd91e789879a2.scope.
Sep 30 18:09:22 compute-0 podman[298499]: 2025-09-30 18:09:22.824251811 +0000 UTC m=+0.025513705 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:09:22 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:09:22 compute-0 podman[298499]: 2025-09-30 18:09:22.958313301 +0000 UTC m=+0.159575225 container init 6afcd68fbde1850ee2527cf44bee59a0a73b55d057ecd0ab6abbd91e789879a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Sep 30 18:09:22 compute-0 podman[298499]: 2025-09-30 18:09:22.971768151 +0000 UTC m=+0.173030015 container start 6afcd68fbde1850ee2527cf44bee59a0a73b55d057ecd0ab6abbd91e789879a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_booth, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:09:22 compute-0 podman[298499]: 2025-09-30 18:09:22.975628451 +0000 UTC m=+0.176890405 container attach 6afcd68fbde1850ee2527cf44bee59a0a73b55d057ecd0ab6abbd91e789879a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Sep 30 18:09:22 compute-0 pensive_booth[298515]: 167 167
Sep 30 18:09:22 compute-0 podman[298499]: 2025-09-30 18:09:22.983393684 +0000 UTC m=+0.184655578 container died 6afcd68fbde1850ee2527cf44bee59a0a73b55d057ecd0ab6abbd91e789879a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_booth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 18:09:22 compute-0 systemd[1]: libpod-6afcd68fbde1850ee2527cf44bee59a0a73b55d057ecd0ab6abbd91e789879a2.scope: Deactivated successfully.
Sep 30 18:09:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-98dbf9665b2fb79764429e80d0f14ac8fbfda2e7c39c2579e52e21b2d6d9dd24-merged.mount: Deactivated successfully.
Sep 30 18:09:23 compute-0 podman[298499]: 2025-09-30 18:09:23.037312797 +0000 UTC m=+0.238574661 container remove 6afcd68fbde1850ee2527cf44bee59a0a73b55d057ecd0ab6abbd91e789879a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_booth, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:09:23 compute-0 systemd[1]: libpod-conmon-6afcd68fbde1850ee2527cf44bee59a0a73b55d057ecd0ab6abbd91e789879a2.scope: Deactivated successfully.
Sep 30 18:09:23 compute-0 ceph-mon[73755]: pgmap v916: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:09:23 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3742597895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:09:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:09:23 compute-0 podman[298540]: 2025-09-30 18:09:23.21797181 +0000 UTC m=+0.061558914 container create 9df49fb118e958da0e3319349641d93607fc8700b652a531a02472597c91f96e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_volhard, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:09:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:23 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:23 compute-0 systemd[1]: Started libpod-conmon-9df49fb118e958da0e3319349641d93607fc8700b652a531a02472597c91f96e.scope.
Sep 30 18:09:23 compute-0 nova_compute[265391]: 2025-09-30 18:09:23.254 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:09:23 compute-0 nova_compute[265391]: 2025-09-30 18:09:23.254 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.602s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:09:23 compute-0 nova_compute[265391]: 2025-09-30 18:09:23.254 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:09:23 compute-0 nova_compute[265391]: 2025-09-30 18:09:23.255 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.12/site-packages/nova/compute/manager.py:11947
Sep 30 18:09:23 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:09:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/443832d682d4d7d049c01c2df95d5b0dc1567f2c8ed557687cfc4b99498aef85/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:09:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/443832d682d4d7d049c01c2df95d5b0dc1567f2c8ed557687cfc4b99498aef85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:09:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/443832d682d4d7d049c01c2df95d5b0dc1567f2c8ed557687cfc4b99498aef85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:09:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/443832d682d4d7d049c01c2df95d5b0dc1567f2c8ed557687cfc4b99498aef85/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:09:23 compute-0 podman[298540]: 2025-09-30 18:09:23.200233908 +0000 UTC m=+0.043821032 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:09:23 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:09:23.296 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:39:ed 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-4b8f21c3-21c3-482f-88c7-197b5bceb2ea', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4b8f21c3-21c3-482f-88c7-197b5bceb2ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5947b7c96cd42be8502dbab4c825083', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3fffd780-66a8-4f09-9e3d-aefd98ad1eb6, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=12cfcc60-6c05-4cc2-8665-8a4d689e5c1a) old=Port_Binding(mac=['fa:16:3e:77:39:ed'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-4b8f21c3-21c3-482f-88c7-197b5bceb2ea', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4b8f21c3-21c3-482f-88c7-197b5bceb2ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5947b7c96cd42be8502dbab4c825083', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:09:23 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:09:23.297 166158 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 12cfcc60-6c05-4cc2-8665-8a4d689e5c1a in datapath 4b8f21c3-21c3-482f-88c7-197b5bceb2ea updated
Sep 30 18:09:23 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:09:23.298 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4b8f21c3-21c3-482f-88c7-197b5bceb2ea, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:09:23 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:09:23.298 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[90e12cc1-f443-4736-abe7-05551d8385f9]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:09:23 compute-0 podman[298540]: 2025-09-30 18:09:23.307033408 +0000 UTC m=+0.150620532 container init 9df49fb118e958da0e3319349641d93607fc8700b652a531a02472597c91f96e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Sep 30 18:09:23 compute-0 podman[298540]: 2025-09-30 18:09:23.315222741 +0000 UTC m=+0.158809845 container start 9df49fb118e958da0e3319349641d93607fc8700b652a531a02472597c91f96e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Sep 30 18:09:23 compute-0 podman[298540]: 2025-09-30 18:09:23.319368099 +0000 UTC m=+0.162955233 container attach 9df49fb118e958da0e3319349641d93607fc8700b652a531a02472597c91f96e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Sep 30 18:09:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:09:23.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:09:23.655Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:09:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:09:23.657Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:09:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v917: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]: [
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:     {
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:         "available": false,
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:         "being_replaced": false,
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:         "ceph_device_lvm": false,
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:         "device_id": "QEMU_DVD-ROM_QM00001",
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:         "lsm_data": {},
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:         "lvs": [],
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:         "path": "/dev/sr0",
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:         "rejected_reasons": [
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:             "Has a FileSystem",
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:             "Insufficient space (<5GB)"
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:         ],
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:         "sys_api": {
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:             "actuators": null,
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:             "device_nodes": [
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:                 "sr0"
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:             ],
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:             "devname": "sr0",
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:             "human_readable_size": "482.00 KB",
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:             "id_bus": "ata",
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:             "model": "QEMU DVD-ROM",
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:             "nr_requests": "2",
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:             "parent": "/dev/sr0",
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:             "partitions": {},
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:             "path": "/dev/sr0",
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:             "removable": "1",
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:             "rev": "2.5+",
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:             "ro": "0",
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:             "rotational": "0",
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:             "sas_address": "",
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:             "sas_device_handle": "",
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:             "scheduler_mode": "mq-deadline",
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:             "sectors": 0,
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:             "sectorsize": "2048",
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:             "size": 493568.0,
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:             "support_discard": "2048",
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:             "type": "disk",
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:             "vendor": "QEMU"
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:         }
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]:     }
Sep 30 18:09:24 compute-0 mystifying_volhard[298556]: ]
Sep 30 18:09:24 compute-0 systemd[1]: libpod-9df49fb118e958da0e3319349641d93607fc8700b652a531a02472597c91f96e.scope: Deactivated successfully.
Sep 30 18:09:24 compute-0 podman[298540]: 2025-09-30 18:09:24.227512828 +0000 UTC m=+1.071099952 container died 9df49fb118e958da0e3319349641d93607fc8700b652a531a02472597c91f96e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_volhard, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:09:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-443832d682d4d7d049c01c2df95d5b0dc1567f2c8ed557687cfc4b99498aef85-merged.mount: Deactivated successfully.
Sep 30 18:09:24 compute-0 podman[298540]: 2025-09-30 18:09:24.282261542 +0000 UTC m=+1.125848646 container remove 9df49fb118e958da0e3319349641d93607fc8700b652a531a02472597c91f96e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_volhard, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:09:24 compute-0 systemd[1]: libpod-conmon-9df49fb118e958da0e3319349641d93607fc8700b652a531a02472597c91f96e.scope: Deactivated successfully.
Sep 30 18:09:24 compute-0 sudo[298436]: pam_unix(sudo:session): session closed for user root
Sep 30 18:09:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:09:24 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:09:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:09:24 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:09:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:09:24.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:24 compute-0 nova_compute[265391]: 2025-09-30 18:09:24.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:09:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 18:09:24 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:09:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 18:09:24 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:09:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:09:24 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:09:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:09:24 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:09:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:09:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:24 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:24 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:09:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:09:24 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:09:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:09:24 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:09:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:09:24 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:09:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:09:24 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:09:24 compute-0 nova_compute[265391]: 2025-09-30 18:09:24.760 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:09:24 compute-0 nova_compute[265391]: 2025-09-30 18:09:24.760 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:09:24 compute-0 sudo[300034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:09:24 compute-0 sudo[300034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:09:24 compute-0 sudo[300034]: pam_unix(sudo:session): session closed for user root
Sep 30 18:09:24 compute-0 sudo[300059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:09:24 compute-0 sudo[300059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:09:25 compute-0 ceph-mon[73755]: pgmap v917: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:09:25 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:09:25 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:09:25 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:09:25 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:09:25 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:09:25 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:09:25 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:09:25 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:09:25 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:09:25 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:09:25 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:09:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:25 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:25 compute-0 nova_compute[265391]: 2025-09-30 18:09:25.270 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:09:25 compute-0 nova_compute[265391]: 2025-09-30 18:09:25.270 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:09:25 compute-0 nova_compute[265391]: 2025-09-30 18:09:25.270 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:09:25 compute-0 nova_compute[265391]: 2025-09-30 18:09:25.270 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:09:25 compute-0 podman[300125]: 2025-09-30 18:09:25.314710547 +0000 UTC m=+0.038589456 container create 7b048f66c8d87c1cca0798df8870ab7be87164a8bcb872aa88f9352b5c9a2eab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_hellman, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Sep 30 18:09:25 compute-0 systemd[1]: Started libpod-conmon-7b048f66c8d87c1cca0798df8870ab7be87164a8bcb872aa88f9352b5c9a2eab.scope.
Sep 30 18:09:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:09:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:09:25.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:09:25 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:09:25 compute-0 podman[300125]: 2025-09-30 18:09:25.29791855 +0000 UTC m=+0.021797479 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:09:25 compute-0 podman[300125]: 2025-09-30 18:09:25.398331604 +0000 UTC m=+0.122210513 container init 7b048f66c8d87c1cca0798df8870ab7be87164a8bcb872aa88f9352b5c9a2eab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid)
Sep 30 18:09:25 compute-0 podman[300125]: 2025-09-30 18:09:25.405746997 +0000 UTC m=+0.129625916 container start 7b048f66c8d87c1cca0798df8870ab7be87164a8bcb872aa88f9352b5c9a2eab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Sep 30 18:09:25 compute-0 heuristic_hellman[300141]: 167 167
Sep 30 18:09:25 compute-0 podman[300125]: 2025-09-30 18:09:25.409040252 +0000 UTC m=+0.132919191 container attach 7b048f66c8d87c1cca0798df8870ab7be87164a8bcb872aa88f9352b5c9a2eab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_hellman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Sep 30 18:09:25 compute-0 podman[300125]: 2025-09-30 18:09:25.410388237 +0000 UTC m=+0.134267146 container died 7b048f66c8d87c1cca0798df8870ab7be87164a8bcb872aa88f9352b5c9a2eab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:09:25 compute-0 systemd[1]: libpod-7b048f66c8d87c1cca0798df8870ab7be87164a8bcb872aa88f9352b5c9a2eab.scope: Deactivated successfully.
Sep 30 18:09:25 compute-0 nova_compute[265391]: 2025-09-30 18:09:25.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:09:25 compute-0 nova_compute[265391]: 2025-09-30 18:09:25.429 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:09:25 compute-0 nova_compute[265391]: 2025-09-30 18:09:25.430 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:09:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-77855a8fdaaa7a530d60a978846e5ce7d2e682f74fec20ebc7f103609cef0d6e-merged.mount: Deactivated successfully.
Sep 30 18:09:25 compute-0 podman[300125]: 2025-09-30 18:09:25.444779753 +0000 UTC m=+0.168658662 container remove 7b048f66c8d87c1cca0798df8870ab7be87164a8bcb872aa88f9352b5c9a2eab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_hellman, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:09:25 compute-0 systemd[1]: libpod-conmon-7b048f66c8d87c1cca0798df8870ab7be87164a8bcb872aa88f9352b5c9a2eab.scope: Deactivated successfully.
Sep 30 18:09:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:09:25 compute-0 podman[300165]: 2025-09-30 18:09:25.599164111 +0000 UTC m=+0.043216596 container create 505552649f9e70e1e8b9a150af5eebb4bcfca72ed145e74feceaf83df31a1232 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_brattain, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Sep 30 18:09:25 compute-0 systemd[1]: Started libpod-conmon-505552649f9e70e1e8b9a150af5eebb4bcfca72ed145e74feceaf83df31a1232.scope.
Sep 30 18:09:25 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:09:25 compute-0 podman[300165]: 2025-09-30 18:09:25.579989562 +0000 UTC m=+0.024042097 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:09:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/491cee35d5df6a12c6036a024354aad7e74df352f062f9a7490fb2707007ef5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:09:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/491cee35d5df6a12c6036a024354aad7e74df352f062f9a7490fb2707007ef5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:09:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/491cee35d5df6a12c6036a024354aad7e74df352f062f9a7490fb2707007ef5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:09:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/491cee35d5df6a12c6036a024354aad7e74df352f062f9a7490fb2707007ef5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:09:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/491cee35d5df6a12c6036a024354aad7e74df352f062f9a7490fb2707007ef5b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:09:25 compute-0 podman[300165]: 2025-09-30 18:09:25.695167 +0000 UTC m=+0.139219565 container init 505552649f9e70e1e8b9a150af5eebb4bcfca72ed145e74feceaf83df31a1232 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_brattain, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Sep 30 18:09:25 compute-0 podman[300165]: 2025-09-30 18:09:25.703637781 +0000 UTC m=+0.147690306 container start 505552649f9e70e1e8b9a150af5eebb4bcfca72ed145e74feceaf83df31a1232 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_brattain, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:09:25 compute-0 podman[300165]: 2025-09-30 18:09:25.709156804 +0000 UTC m=+0.153209309 container attach 505552649f9e70e1e8b9a150af5eebb4bcfca72ed145e74feceaf83df31a1232 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:09:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v918: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:09:26 compute-0 thirsty_brattain[300182]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:09:26 compute-0 thirsty_brattain[300182]: --> All data devices are unavailable
Sep 30 18:09:26 compute-0 systemd[1]: libpod-505552649f9e70e1e8b9a150af5eebb4bcfca72ed145e74feceaf83df31a1232.scope: Deactivated successfully.
Sep 30 18:09:26 compute-0 podman[300165]: 2025-09-30 18:09:26.077084712 +0000 UTC m=+0.521137217 container died 505552649f9e70e1e8b9a150af5eebb4bcfca72ed145e74feceaf83df31a1232 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_brattain, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:09:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-491cee35d5df6a12c6036a024354aad7e74df352f062f9a7490fb2707007ef5b-merged.mount: Deactivated successfully.
Sep 30 18:09:26 compute-0 podman[300165]: 2025-09-30 18:09:26.117525984 +0000 UTC m=+0.561578489 container remove 505552649f9e70e1e8b9a150af5eebb4bcfca72ed145e74feceaf83df31a1232 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_brattain, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:09:26 compute-0 systemd[1]: libpod-conmon-505552649f9e70e1e8b9a150af5eebb4bcfca72ed145e74feceaf83df31a1232.scope: Deactivated successfully.
Sep 30 18:09:26 compute-0 sudo[300059]: pam_unix(sudo:session): session closed for user root
Sep 30 18:09:26 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/4275842867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:09:26 compute-0 sudo[300210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:09:26 compute-0 sudo[300210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:09:26 compute-0 sudo[300210]: pam_unix(sudo:session): session closed for user root
Sep 30 18:09:26 compute-0 sudo[300235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:09:26 compute-0 sudo[300235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:09:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:09:26.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:26 compute-0 podman[300301]: 2025-09-30 18:09:26.675459777 +0000 UTC m=+0.036705966 container create c30a2156a3ded75b8026a94cc4c8fedb7d90f258817afadc95a764ab157d52dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_jones, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:09:26 compute-0 systemd[1]: Started libpod-conmon-c30a2156a3ded75b8026a94cc4c8fedb7d90f258817afadc95a764ab157d52dd.scope.
Sep 30 18:09:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:26 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220000ea0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:26 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:09:26 compute-0 podman[300301]: 2025-09-30 18:09:26.751954228 +0000 UTC m=+0.113200407 container init c30a2156a3ded75b8026a94cc4c8fedb7d90f258817afadc95a764ab157d52dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Sep 30 18:09:26 compute-0 podman[300301]: 2025-09-30 18:09:26.659955204 +0000 UTC m=+0.021201373 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:09:26 compute-0 podman[300301]: 2025-09-30 18:09:26.763289053 +0000 UTC m=+0.124535232 container start c30a2156a3ded75b8026a94cc4c8fedb7d90f258817afadc95a764ab157d52dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_jones, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1)
Sep 30 18:09:26 compute-0 podman[300301]: 2025-09-30 18:09:26.767591645 +0000 UTC m=+0.128837854 container attach c30a2156a3ded75b8026a94cc4c8fedb7d90f258817afadc95a764ab157d52dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_jones, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Sep 30 18:09:26 compute-0 blissful_jones[300317]: 167 167
Sep 30 18:09:26 compute-0 systemd[1]: libpod-c30a2156a3ded75b8026a94cc4c8fedb7d90f258817afadc95a764ab157d52dd.scope: Deactivated successfully.
Sep 30 18:09:26 compute-0 podman[300301]: 2025-09-30 18:09:26.772794871 +0000 UTC m=+0.134041080 container died c30a2156a3ded75b8026a94cc4c8fedb7d90f258817afadc95a764ab157d52dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_jones, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:09:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-146e65dfcbdff536257e724a67ed8ab2298eb3c70744567c7e33606c899e9904-merged.mount: Deactivated successfully.
Sep 30 18:09:26 compute-0 podman[300301]: 2025-09-30 18:09:26.8192372 +0000 UTC m=+0.180483369 container remove c30a2156a3ded75b8026a94cc4c8fedb7d90f258817afadc95a764ab157d52dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_jones, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Sep 30 18:09:26 compute-0 systemd[1]: libpod-conmon-c30a2156a3ded75b8026a94cc4c8fedb7d90f258817afadc95a764ab157d52dd.scope: Deactivated successfully.
Sep 30 18:09:26 compute-0 nova_compute[265391]: 2025-09-30 18:09:26.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:09:27 compute-0 podman[300341]: 2025-09-30 18:09:27.070420048 +0000 UTC m=+0.064283834 container create 727eb318a5f6fa872bcc6456425a80f0b4f433e7e1900296aa45bee160b304d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 18:09:27 compute-0 systemd[1]: Started libpod-conmon-727eb318a5f6fa872bcc6456425a80f0b4f433e7e1900296aa45bee160b304d7.scope.
Sep 30 18:09:27 compute-0 podman[300341]: 2025-09-30 18:09:27.040074068 +0000 UTC m=+0.033937924 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:09:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:09:27.141Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:09:27 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:09:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad84235a58c8ef1536a838a2a855a791ec45c97bdadd08617088cfec77a68342/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:09:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad84235a58c8ef1536a838a2a855a791ec45c97bdadd08617088cfec77a68342/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:09:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad84235a58c8ef1536a838a2a855a791ec45c97bdadd08617088cfec77a68342/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:09:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad84235a58c8ef1536a838a2a855a791ec45c97bdadd08617088cfec77a68342/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:09:27 compute-0 podman[300341]: 2025-09-30 18:09:27.183137042 +0000 UTC m=+0.177000888 container init 727eb318a5f6fa872bcc6456425a80f0b4f433e7e1900296aa45bee160b304d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_liskov, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Sep 30 18:09:27 compute-0 ceph-mon[73755]: pgmap v918: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:09:27 compute-0 podman[300341]: 2025-09-30 18:09:27.196058078 +0000 UTC m=+0.189921874 container start 727eb318a5f6fa872bcc6456425a80f0b4f433e7e1900296aa45bee160b304d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Sep 30 18:09:27 compute-0 podman[300341]: 2025-09-30 18:09:27.200028662 +0000 UTC m=+0.193892448 container attach 727eb318a5f6fa872bcc6456425a80f0b4f433e7e1900296aa45bee160b304d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Sep 30 18:09:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:27 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220000ea0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:09:27.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:27 compute-0 wonderful_liskov[300357]: {
Sep 30 18:09:27 compute-0 wonderful_liskov[300357]:     "0": [
Sep 30 18:09:27 compute-0 wonderful_liskov[300357]:         {
Sep 30 18:09:27 compute-0 wonderful_liskov[300357]:             "devices": [
Sep 30 18:09:27 compute-0 wonderful_liskov[300357]:                 "/dev/loop3"
Sep 30 18:09:27 compute-0 wonderful_liskov[300357]:             ],
Sep 30 18:09:27 compute-0 wonderful_liskov[300357]:             "lv_name": "ceph_lv0",
Sep 30 18:09:27 compute-0 wonderful_liskov[300357]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:09:27 compute-0 wonderful_liskov[300357]:             "lv_size": "21470642176",
Sep 30 18:09:27 compute-0 wonderful_liskov[300357]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:09:27 compute-0 wonderful_liskov[300357]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:09:27 compute-0 wonderful_liskov[300357]:             "name": "ceph_lv0",
Sep 30 18:09:27 compute-0 wonderful_liskov[300357]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:09:27 compute-0 wonderful_liskov[300357]:             "tags": {
Sep 30 18:09:27 compute-0 wonderful_liskov[300357]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:09:27 compute-0 wonderful_liskov[300357]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:09:27 compute-0 wonderful_liskov[300357]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:09:27 compute-0 wonderful_liskov[300357]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:09:27 compute-0 wonderful_liskov[300357]:                 "ceph.cluster_name": "ceph",
Sep 30 18:09:27 compute-0 wonderful_liskov[300357]:                 "ceph.crush_device_class": "",
Sep 30 18:09:27 compute-0 wonderful_liskov[300357]:                 "ceph.encrypted": "0",
Sep 30 18:09:27 compute-0 wonderful_liskov[300357]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:09:27 compute-0 wonderful_liskov[300357]:                 "ceph.osd_id": "0",
Sep 30 18:09:27 compute-0 wonderful_liskov[300357]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:09:27 compute-0 wonderful_liskov[300357]:                 "ceph.type": "block",
Sep 30 18:09:27 compute-0 wonderful_liskov[300357]:                 "ceph.vdo": "0",
Sep 30 18:09:27 compute-0 wonderful_liskov[300357]:                 "ceph.with_tpm": "0"
Sep 30 18:09:27 compute-0 wonderful_liskov[300357]:             },
Sep 30 18:09:27 compute-0 wonderful_liskov[300357]:             "type": "block",
Sep 30 18:09:27 compute-0 wonderful_liskov[300357]:             "vg_name": "ceph_vg0"
Sep 30 18:09:27 compute-0 wonderful_liskov[300357]:         }
Sep 30 18:09:27 compute-0 wonderful_liskov[300357]:     ]
Sep 30 18:09:27 compute-0 wonderful_liskov[300357]: }
Sep 30 18:09:27 compute-0 systemd[1]: libpod-727eb318a5f6fa872bcc6456425a80f0b4f433e7e1900296aa45bee160b304d7.scope: Deactivated successfully.
Sep 30 18:09:27 compute-0 podman[300341]: 2025-09-30 18:09:27.531835699 +0000 UTC m=+0.525699455 container died 727eb318a5f6fa872bcc6456425a80f0b4f433e7e1900296aa45bee160b304d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_liskov, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Sep 30 18:09:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad84235a58c8ef1536a838a2a855a791ec45c97bdadd08617088cfec77a68342-merged.mount: Deactivated successfully.
Sep 30 18:09:27 compute-0 podman[300341]: 2025-09-30 18:09:27.576083851 +0000 UTC m=+0.569947607 container remove 727eb318a5f6fa872bcc6456425a80f0b4f433e7e1900296aa45bee160b304d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:09:27 compute-0 systemd[1]: libpod-conmon-727eb318a5f6fa872bcc6456425a80f0b4f433e7e1900296aa45bee160b304d7.scope: Deactivated successfully.
Sep 30 18:09:27 compute-0 sudo[300235]: pam_unix(sudo:session): session closed for user root
Sep 30 18:09:27 compute-0 sudo[300379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:09:27 compute-0 sudo[300379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:09:27 compute-0 sudo[300379]: pam_unix(sudo:session): session closed for user root
Sep 30 18:09:27 compute-0 sudo[300405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:09:27 compute-0 sudo[300405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:09:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v919: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:09:28 compute-0 podman[300472]: 2025-09-30 18:09:28.19268478 +0000 UTC m=+0.048297759 container create 6d801922d502ea536fee5feffb68efda7d14c79c039986687459fd618a9b390b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:09:28 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2291823085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:09:28 compute-0 ceph-mon[73755]: pgmap v919: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:09:28 compute-0 systemd[1]: Started libpod-conmon-6d801922d502ea536fee5feffb68efda7d14c79c039986687459fd618a9b390b.scope.
Sep 30 18:09:28 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:09:28 compute-0 podman[300472]: 2025-09-30 18:09:28.174399354 +0000 UTC m=+0.030012363 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:09:28 compute-0 podman[300472]: 2025-09-30 18:09:28.284803058 +0000 UTC m=+0.140416127 container init 6d801922d502ea536fee5feffb68efda7d14c79c039986687459fd618a9b390b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Sep 30 18:09:28 compute-0 podman[300472]: 2025-09-30 18:09:28.293031932 +0000 UTC m=+0.148644911 container start 6d801922d502ea536fee5feffb68efda7d14c79c039986687459fd618a9b390b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_sutherland, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:09:28 compute-0 podman[300472]: 2025-09-30 18:09:28.296598715 +0000 UTC m=+0.152211774 container attach 6d801922d502ea536fee5feffb68efda7d14c79c039986687459fd618a9b390b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_sutherland, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:09:28 compute-0 pedantic_sutherland[300490]: 167 167
Sep 30 18:09:28 compute-0 systemd[1]: libpod-6d801922d502ea536fee5feffb68efda7d14c79c039986687459fd618a9b390b.scope: Deactivated successfully.
Sep 30 18:09:28 compute-0 conmon[300490]: conmon 6d801922d502ea536fee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6d801922d502ea536fee5feffb68efda7d14c79c039986687459fd618a9b390b.scope/container/memory.events
Sep 30 18:09:28 compute-0 podman[300495]: 2025-09-30 18:09:28.337461768 +0000 UTC m=+0.024984631 container died 6d801922d502ea536fee5feffb68efda7d14c79c039986687459fd618a9b390b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:09:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed9ecee07ef8f1428cb76a8537f817108690a6b31e9be04e4f3fa2bac2a908c6-merged.mount: Deactivated successfully.
Sep 30 18:09:28 compute-0 podman[300495]: 2025-09-30 18:09:28.373221129 +0000 UTC m=+0.060743972 container remove 6d801922d502ea536fee5feffb68efda7d14c79c039986687459fd618a9b390b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_sutherland, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:09:28 compute-0 systemd[1]: libpod-conmon-6d801922d502ea536fee5feffb68efda7d14c79c039986687459fd618a9b390b.scope: Deactivated successfully.
Sep 30 18:09:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:09:28.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:28 compute-0 sudo[300512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:09:28 compute-0 sudo[300512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:09:28 compute-0 sudo[300512]: pam_unix(sudo:session): session closed for user root
Sep 30 18:09:28 compute-0 podman[300542]: 2025-09-30 18:09:28.660167248 +0000 UTC m=+0.071106092 container create 266c028ba083f21240b061a368e9d3b95d5f22884dd1d4c7287c83703db71488 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_poincare, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:09:28 compute-0 systemd[1]: Started libpod-conmon-266c028ba083f21240b061a368e9d3b95d5f22884dd1d4c7287c83703db71488.scope.
Sep 30 18:09:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:28 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:28 compute-0 podman[300542]: 2025-09-30 18:09:28.629523871 +0000 UTC m=+0.040462785 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:09:28 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:09:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1584cb2a14c20657b0f71ce75777d42626a0a406965d27cf649c8167a6eb96ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:09:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1584cb2a14c20657b0f71ce75777d42626a0a406965d27cf649c8167a6eb96ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:09:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1584cb2a14c20657b0f71ce75777d42626a0a406965d27cf649c8167a6eb96ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:09:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1584cb2a14c20657b0f71ce75777d42626a0a406965d27cf649c8167a6eb96ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:09:28 compute-0 podman[300542]: 2025-09-30 18:09:28.770433628 +0000 UTC m=+0.181372542 container init 266c028ba083f21240b061a368e9d3b95d5f22884dd1d4c7287c83703db71488 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_poincare, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True)
Sep 30 18:09:28 compute-0 podman[300542]: 2025-09-30 18:09:28.783501849 +0000 UTC m=+0.194440683 container start 266c028ba083f21240b061a368e9d3b95d5f22884dd1d4c7287c83703db71488 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_poincare, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Sep 30 18:09:28 compute-0 podman[300542]: 2025-09-30 18:09:28.78740713 +0000 UTC m=+0.198345994 container attach 266c028ba083f21240b061a368e9d3b95d5f22884dd1d4c7287c83703db71488 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_poincare, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:09:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:09:28] "GET /metrics HTTP/1.1" 200 46628 "" "Prometheus/2.51.0"
Sep 30 18:09:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:09:28] "GET /metrics HTTP/1.1" 200 46628 "" "Prometheus/2.51.0"
Sep 30 18:09:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:29 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:09:29.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:29 compute-0 nova_compute[265391]: 2025-09-30 18:09:29.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:09:29 compute-0 lvm[300634]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:09:29 compute-0 lvm[300634]: VG ceph_vg0 finished
Sep 30 18:09:29 compute-0 beautiful_poincare[300559]: {}
Sep 30 18:09:29 compute-0 systemd[1]: libpod-266c028ba083f21240b061a368e9d3b95d5f22884dd1d4c7287c83703db71488.scope: Deactivated successfully.
Sep 30 18:09:29 compute-0 podman[300542]: 2025-09-30 18:09:29.568014709 +0000 UTC m=+0.978953543 container died 266c028ba083f21240b061a368e9d3b95d5f22884dd1d4c7287c83703db71488 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_poincare, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Sep 30 18:09:29 compute-0 systemd[1]: libpod-266c028ba083f21240b061a368e9d3b95d5f22884dd1d4c7287c83703db71488.scope: Consumed 1.266s CPU time.
Sep 30 18:09:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-1584cb2a14c20657b0f71ce75777d42626a0a406965d27cf649c8167a6eb96ef-merged.mount: Deactivated successfully.
Sep 30 18:09:29 compute-0 podman[300542]: 2025-09-30 18:09:29.609579661 +0000 UTC m=+1.020518475 container remove 266c028ba083f21240b061a368e9d3b95d5f22884dd1d4c7287c83703db71488 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_poincare, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Sep 30 18:09:29 compute-0 systemd[1]: libpod-conmon-266c028ba083f21240b061a368e9d3b95d5f22884dd1d4c7287c83703db71488.scope: Deactivated successfully.
Sep 30 18:09:29 compute-0 sudo[300405]: pam_unix(sudo:session): session closed for user root
Sep 30 18:09:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:09:29 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:09:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:09:29 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:09:29 compute-0 podman[276673]: time="2025-09-30T18:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:09:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:09:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10256 "" "Go-http-client/1.1"
Sep 30 18:09:29 compute-0 sudo[300651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:09:29 compute-0 sudo[300651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:09:29 compute-0 sudo[300651]: pam_unix(sudo:session): session closed for user root
Sep 30 18:09:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v920: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 18:09:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:09:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:09:30.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:09:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:09:30 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:09:30 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:09:30 compute-0 ceph-mon[73755]: pgmap v920: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 18:09:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:30 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:31 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:09:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:31 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220000ea0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:09:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:09:31.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:09:31 compute-0 openstack_network_exporter[279566]: ERROR   18:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:09:31 compute-0 openstack_network_exporter[279566]: ERROR   18:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:09:31 compute-0 openstack_network_exporter[279566]: ERROR   18:09:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:09:31 compute-0 openstack_network_exporter[279566]: ERROR   18:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:09:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:09:31 compute-0 openstack_network_exporter[279566]: ERROR   18:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:09:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:09:31 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:09:31.620 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a1:a4:93 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-c06e9657-589e-4dca-93a3-44b9a4da38f4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c06e9657-589e-4dca-93a3-44b9a4da38f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0783e60216244dbda21696efa03e2275', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4fb4927a-fd33-46be-8d3d-6e2898831ff5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=552f1fba-62a5-48e5-9d1c-51c58cbe4d6e) old=Port_Binding(mac=['fa:16:3e:a1:a4:93'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-c06e9657-589e-4dca-93a3-44b9a4da38f4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c06e9657-589e-4dca-93a3-44b9a4da38f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0783e60216244dbda21696efa03e2275', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:09:31 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:09:31.621 166158 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 552f1fba-62a5-48e5-9d1c-51c58cbe4d6e in datapath c06e9657-589e-4dca-93a3-44b9a4da38f4 updated
Sep 30 18:09:31 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:09:31.622 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c06e9657-589e-4dca-93a3-44b9a4da38f4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:09:31 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:09:31.622 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[869887e3-1c42-44d5-9c18-90636b13b349]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:09:31 compute-0 nova_compute[265391]: 2025-09-30 18:09:31.808 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:09:31 compute-0 nova_compute[265391]: 2025-09-30 18:09:31.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:09:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v921: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 18:09:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:09:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:09:32.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:09:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:32 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:32 compute-0 unix_chkpwd[300680]: password check failed for user (root)
Sep 30 18:09:32 compute-0 sshd-session[300677]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=185.156.73.233  user=root
Sep 30 18:09:33 compute-0 ceph-mon[73755]: pgmap v921: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 18:09:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:33 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:09:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:09:33.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:09:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:09:33.658Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:09:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v922: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Sep 30 18:09:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:34 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:09:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:34 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:09:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:09:34.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:34 compute-0 nova_compute[265391]: 2025-09-30 18:09:34.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:09:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:34 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:35 compute-0 ceph-mon[73755]: pgmap v922: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Sep 30 18:09:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:35 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220000ea0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:09:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:09:35.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:09:35 compute-0 sshd-session[300677]: Failed password for root from 185.156.73.233 port 61180 ssh2
Sep 30 18:09:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:09:35 compute-0 sshd-session[300677]: Connection closed by authenticating user root 185.156.73.233 port 61180 [preauth]
Sep 30 18:09:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v923: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Sep 30 18:09:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:09:36.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:09:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3888146747' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:09:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:09:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3888146747' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:09:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:36 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c0013a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:36 compute-0 nova_compute[265391]: 2025-09-30 18:09:36.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:09:37 compute-0 ceph-mon[73755]: pgmap v923: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Sep 30 18:09:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3888146747' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:09:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3888146747' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:09:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:09:37.143Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:09:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:37 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:37 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 18:09:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:09:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:09:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:09:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:09:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:09:37.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:09:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:09:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:09:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:09:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v924: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Sep 30 18:09:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:09:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:09:38.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:38 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:09:38] "GET /metrics HTTP/1.1" 200 46630 "" "Prometheus/2.51.0"
Sep 30 18:09:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:09:38] "GET /metrics HTTP/1.1" 200 46630 "" "Prometheus/2.51.0"
Sep 30 18:09:39 compute-0 ceph-mon[73755]: pgmap v924: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Sep 30 18:09:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:39 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2200044a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:09:39.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:39 compute-0 nova_compute[265391]: 2025-09-30 18:09:39.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:09:39 compute-0 podman[300691]: 2025-09-30 18:09:39.516785362 +0000 UTC m=+0.056252325 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Sep 30 18:09:39 compute-0 podman[300690]: 2025-09-30 18:09:39.565133161 +0000 UTC m=+0.107403897 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Sep 30 18:09:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v925: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 18:09:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:09:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:09:40.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:09:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:09:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:40 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c0013a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:41 compute-0 ceph-mon[73755]: pgmap v925: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 18:09:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:41 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:09:41.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:41 compute-0 nova_compute[265391]: 2025-09-30 18:09:41.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:09:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v926: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 1 op/s
Sep 30 18:09:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/180942 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 18:09:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:09:42.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:42 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:43 compute-0 ceph-mon[73755]: pgmap v926: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 1 op/s
Sep 30 18:09:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:43 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2200044a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:09:43.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:43 compute-0 nova_compute[265391]: 2025-09-30 18:09:43.548 2 DEBUG oslo_concurrency.lockutils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Acquiring lock "c31a2d74-08f2-40bb-8b81-fd5301d9a627" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:09:43 compute-0 nova_compute[265391]: 2025-09-30 18:09:43.548 2 DEBUG oslo_concurrency.lockutils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Lock "c31a2d74-08f2-40bb-8b81-fd5301d9a627" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:09:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:09:43.659Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:09:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v927: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 18:09:44 compute-0 nova_compute[265391]: 2025-09-30 18:09:44.053 2 DEBUG nova.compute.manager [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Sep 30 18:09:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:09:44.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:44 compute-0 nova_compute[265391]: 2025-09-30 18:09:44.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:09:44 compute-0 nova_compute[265391]: 2025-09-30 18:09:44.604 2 DEBUG oslo_concurrency.lockutils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:09:44 compute-0 nova_compute[265391]: 2025-09-30 18:09:44.604 2 DEBUG oslo_concurrency.lockutils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:09:44 compute-0 nova_compute[265391]: 2025-09-30 18:09:44.614 2 DEBUG nova.virt.hardware [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Sep 30 18:09:44 compute-0 nova_compute[265391]: 2025-09-30 18:09:44.614 2 INFO nova.compute.claims [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Claim successful on node compute-0.ctlplane.example.com
Sep 30 18:09:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:44 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c0013a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:45 compute-0 ceph-mon[73755]: pgmap v927: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 18:09:45 compute-0 ovn_controller[156242]: 2025-09-30T18:09:45Z|00048|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Sep 30 18:09:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:45 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:09:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:09:45.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:09:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:09:45 compute-0 nova_compute[265391]: 2025-09-30 18:09:45.680 2 DEBUG oslo_concurrency.processutils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:09:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v928: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Sep 30 18:09:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:09:46 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1512167147' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:09:46 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1512167147' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:09:46 compute-0 nova_compute[265391]: 2025-09-30 18:09:46.155 2 DEBUG oslo_concurrency.processutils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:09:46 compute-0 nova_compute[265391]: 2025-09-30 18:09:46.162 2 DEBUG nova.compute.provider_tree [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:09:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:09:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:09:46.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:09:46 compute-0 podman[300771]: 2025-09-30 18:09:46.531289398 +0000 UTC m=+0.062496638 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Sep 30 18:09:46 compute-0 nova_compute[265391]: 2025-09-30 18:09:46.670 2 DEBUG nova.scheduler.client.report [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:09:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:46 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2180019e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:46 compute-0 nova_compute[265391]: 2025-09-30 18:09:46.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:09:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:09:47.144Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:09:47 compute-0 ceph-mon[73755]: pgmap v928: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Sep 30 18:09:47 compute-0 nova_compute[265391]: 2025-09-30 18:09:47.181 2 DEBUG oslo_concurrency.lockutils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.576s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:09:47 compute-0 nova_compute[265391]: 2025-09-30 18:09:47.182 2 DEBUG nova.compute.manager [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Sep 30 18:09:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:47 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2200044a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:09:47.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:47 compute-0 nova_compute[265391]: 2025-09-30 18:09:47.693 2 DEBUG nova.compute.manager [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Sep 30 18:09:47 compute-0 nova_compute[265391]: 2025-09-30 18:09:47.694 2 DEBUG nova.network.neutron [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Sep 30 18:09:47 compute-0 nova_compute[265391]: 2025-09-30 18:09:47.695 2 WARNING neutronclient.v2_0.client [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:09:47 compute-0 nova_compute[265391]: 2025-09-30 18:09:47.695 2 WARNING neutronclient.v2_0.client [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:09:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v929: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Sep 30 18:09:48 compute-0 nova_compute[265391]: 2025-09-30 18:09:48.206 2 INFO nova.virt.libvirt.driver [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Sep 30 18:09:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:09:48.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:48 compute-0 sudo[300794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:09:48 compute-0 sudo[300794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:09:48 compute-0 sudo[300794]: pam_unix(sudo:session): session closed for user root
Sep 30 18:09:48 compute-0 nova_compute[265391]: 2025-09-30 18:09:48.715 2 DEBUG nova.compute.manager [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Sep 30 18:09:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:48 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:09:48] "GET /metrics HTTP/1.1" 200 46630 "" "Prometheus/2.51.0"
Sep 30 18:09:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:09:48] "GET /metrics HTTP/1.1" 200 46630 "" "Prometheus/2.51.0"
Sep 30 18:09:49 compute-0 ceph-mon[73755]: pgmap v929: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Sep 30 18:09:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:49 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:09:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:09:49.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:09:49 compute-0 nova_compute[265391]: 2025-09-30 18:09:49.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:09:49 compute-0 nova_compute[265391]: 2025-09-30 18:09:49.580 2 DEBUG nova.network.neutron [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Successfully created port: a7cb8fe9-f156-4a0f-aa37-f348ade37f45 _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Sep 30 18:09:49 compute-0 nova_compute[265391]: 2025-09-30 18:09:49.736 2 DEBUG nova.compute.manager [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Sep 30 18:09:49 compute-0 nova_compute[265391]: 2025-09-30 18:09:49.739 2 DEBUG nova.virt.libvirt.driver [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Sep 30 18:09:49 compute-0 nova_compute[265391]: 2025-09-30 18:09:49.739 2 INFO nova.virt.libvirt.driver [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Creating image(s)
Sep 30 18:09:49 compute-0 nova_compute[265391]: 2025-09-30 18:09:49.775 2 DEBUG nova.storage.rbd_utils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] rbd image c31a2d74-08f2-40bb-8b81-fd5301d9a627_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:09:49 compute-0 nova_compute[265391]: 2025-09-30 18:09:49.804 2 DEBUG nova.storage.rbd_utils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] rbd image c31a2d74-08f2-40bb-8b81-fd5301d9a627_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:09:49 compute-0 nova_compute[265391]: 2025-09-30 18:09:49.831 2 DEBUG nova.storage.rbd_utils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] rbd image c31a2d74-08f2-40bb-8b81-fd5301d9a627_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:09:49 compute-0 nova_compute[265391]: 2025-09-30 18:09:49.835 2 DEBUG oslo_concurrency.processutils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:09:49 compute-0 nova_compute[265391]: 2025-09-30 18:09:49.906 2 DEBUG oslo_concurrency.processutils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:09:49 compute-0 nova_compute[265391]: 2025-09-30 18:09:49.908 2 DEBUG oslo_concurrency.lockutils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Acquiring lock "cb2d580238c9b109feae7f1462613dc547671457" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:09:49 compute-0 nova_compute[265391]: 2025-09-30 18:09:49.909 2 DEBUG oslo_concurrency.lockutils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:09:49 compute-0 nova_compute[265391]: 2025-09-30 18:09:49.909 2 DEBUG oslo_concurrency.lockutils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:09:49 compute-0 nova_compute[265391]: 2025-09-30 18:09:49.947 2 DEBUG nova.storage.rbd_utils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] rbd image c31a2d74-08f2-40bb-8b81-fd5301d9a627_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:09:49 compute-0 nova_compute[265391]: 2025-09-30 18:09:49.951 2 DEBUG oslo_concurrency.processutils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 c31a2d74-08f2-40bb-8b81-fd5301d9a627_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:09:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v930: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Sep 30 18:09:50 compute-0 nova_compute[265391]: 2025-09-30 18:09:50.237 2 DEBUG oslo_concurrency.processutils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 c31a2d74-08f2-40bb-8b81-fd5301d9a627_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.286s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:09:50 compute-0 ceph-mon[73755]: pgmap v930: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Sep 30 18:09:50 compute-0 nova_compute[265391]: 2025-09-30 18:09:50.330 2 DEBUG nova.storage.rbd_utils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] resizing rbd image c31a2d74-08f2-40bb-8b81-fd5301d9a627_disk to 1073741824 resize /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:288
Sep 30 18:09:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:09:50.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:50 compute-0 nova_compute[265391]: 2025-09-30 18:09:50.456 2 DEBUG nova.virt.libvirt.driver [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Sep 30 18:09:50 compute-0 nova_compute[265391]: 2025-09-30 18:09:50.457 2 DEBUG nova.virt.libvirt.driver [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Ensure instance console log exists: /var/lib/nova/instances/c31a2d74-08f2-40bb-8b81-fd5301d9a627/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Sep 30 18:09:50 compute-0 nova_compute[265391]: 2025-09-30 18:09:50.457 2 DEBUG oslo_concurrency.lockutils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:09:50 compute-0 nova_compute[265391]: 2025-09-30 18:09:50.457 2 DEBUG oslo_concurrency.lockutils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:09:50 compute-0 nova_compute[265391]: 2025-09-30 18:09:50.458 2 DEBUG oslo_concurrency.lockutils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:09:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:09:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:50 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2180019e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:51 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2200044a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:09:51.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:51 compute-0 nova_compute[265391]: 2025-09-30 18:09:51.660 2 DEBUG nova.network.neutron [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Successfully updated port: a7cb8fe9-f156-4a0f-aa37-f348ade37f45 _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Sep 30 18:09:51 compute-0 nova_compute[265391]: 2025-09-30 18:09:51.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:09:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v931: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:09:52 compute-0 nova_compute[265391]: 2025-09-30 18:09:52.103 2 DEBUG nova.compute.manager [req-1e4cddb5-1fb6-4bf3-be5a-6974274c2d32 req-c0ccb516-804e-441e-9637-c3132a2c653c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Received event network-changed-a7cb8fe9-f156-4a0f-aa37-f348ade37f45 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:09:52 compute-0 nova_compute[265391]: 2025-09-30 18:09:52.104 2 DEBUG nova.compute.manager [req-1e4cddb5-1fb6-4bf3-be5a-6974274c2d32 req-c0ccb516-804e-441e-9637-c3132a2c653c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Refreshing instance network info cache due to event network-changed-a7cb8fe9-f156-4a0f-aa37-f348ade37f45. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:09:52 compute-0 nova_compute[265391]: 2025-09-30 18:09:52.104 2 DEBUG oslo_concurrency.lockutils [req-1e4cddb5-1fb6-4bf3-be5a-6974274c2d32 req-c0ccb516-804e-441e-9637-c3132a2c653c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-c31a2d74-08f2-40bb-8b81-fd5301d9a627" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:09:52 compute-0 nova_compute[265391]: 2025-09-30 18:09:52.104 2 DEBUG oslo_concurrency.lockutils [req-1e4cddb5-1fb6-4bf3-be5a-6974274c2d32 req-c0ccb516-804e-441e-9637-c3132a2c653c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-c31a2d74-08f2-40bb-8b81-fd5301d9a627" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:09:52 compute-0 nova_compute[265391]: 2025-09-30 18:09:52.104 2 DEBUG nova.network.neutron [req-1e4cddb5-1fb6-4bf3-be5a-6974274c2d32 req-c0ccb516-804e-441e-9637-c3132a2c653c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Refreshing network info cache for port a7cb8fe9-f156-4a0f-aa37-f348ade37f45 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:09:52 compute-0 nova_compute[265391]: 2025-09-30 18:09:52.166 2 DEBUG oslo_concurrency.lockutils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Acquiring lock "refresh_cache-c31a2d74-08f2-40bb-8b81-fd5301d9a627" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:09:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:09:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:09:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:09:52.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:52 compute-0 podman[300991]: 2025-09-30 18:09:52.548463457 +0000 UTC m=+0.073075453 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_id=edpm, distribution-scope=public, release=1755695350, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Sep 30 18:09:52 compute-0 podman[300990]: 2025-09-30 18:09:52.549825882 +0000 UTC m=+0.080561378 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Sep 30 18:09:52 compute-0 podman[300989]: 2025-09-30 18:09:52.581219619 +0000 UTC m=+0.114966723 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Sep 30 18:09:52 compute-0 nova_compute[265391]: 2025-09-30 18:09:52.622 2 WARNING neutronclient.v2_0.client [req-1e4cddb5-1fb6-4bf3-be5a-6974274c2d32 req-c0ccb516-804e-441e-9637-c3132a2c653c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:09:52 compute-0 nova_compute[265391]: 2025-09-30 18:09:52.723 2 DEBUG nova.network.neutron [req-1e4cddb5-1fb6-4bf3-be5a-6974274c2d32 req-c0ccb516-804e-441e-9637-c3132a2c653c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:09:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:52 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:52 compute-0 nova_compute[265391]: 2025-09-30 18:09:52.926 2 DEBUG nova.network.neutron [req-1e4cddb5-1fb6-4bf3-be5a-6974274c2d32 req-c0ccb516-804e-441e-9637-c3132a2c653c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:09:53 compute-0 ceph-mon[73755]: pgmap v931: 353 pgs: 353 active+clean; 41 MiB data, 184 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:09:53 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:09:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:53 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:09:53.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:53 compute-0 nova_compute[265391]: 2025-09-30 18:09:53.432 2 DEBUG oslo_concurrency.lockutils [req-1e4cddb5-1fb6-4bf3-be5a-6974274c2d32 req-c0ccb516-804e-441e-9637-c3132a2c653c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-c31a2d74-08f2-40bb-8b81-fd5301d9a627" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:09:53 compute-0 nova_compute[265391]: 2025-09-30 18:09:53.433 2 DEBUG oslo_concurrency.lockutils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Acquired lock "refresh_cache-c31a2d74-08f2-40bb-8b81-fd5301d9a627" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:09:53 compute-0 nova_compute[265391]: 2025-09-30 18:09:53.434 2 DEBUG nova.network.neutron [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:09:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:09:53.660Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:09:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:09:53.660Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:09:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v932: 353 pgs: 353 active+clean; 88 MiB data, 205 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:09:54 compute-0 nova_compute[265391]: 2025-09-30 18:09:54.015 2 DEBUG nova.network.neutron [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:09:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:09:54.274 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:09:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:09:54.275 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:09:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:09:54.275 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:09:54 compute-0 nova_compute[265391]: 2025-09-30 18:09:54.284 2 WARNING neutronclient.v2_0.client [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:09:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:09:54.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:54 compute-0 nova_compute[265391]: 2025-09-30 18:09:54.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:09:54 compute-0 nova_compute[265391]: 2025-09-30 18:09:54.706 2 DEBUG nova.network.neutron [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Updating instance_info_cache with network_info: [{"id": "a7cb8fe9-f156-4a0f-aa37-f348ade37f45", "address": "fa:16:3e:d7:c9:62", "network": {"id": "4b8f21c3-21c3-482f-88c7-197b5bceb2ea", "bridge": "br-int", "label": "tempest-TestDataModel-239005640-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5947b7c96cd42be8502dbab4c825083", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7cb8fe9-f1", "ovs_interfaceid": "a7cb8fe9-f156-4a0f-aa37-f348ade37f45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:09:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:54 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2180019e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:55 compute-0 ceph-mon[73755]: pgmap v932: 353 pgs: 353 active+clean; 88 MiB data, 205 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:09:55 compute-0 nova_compute[265391]: 2025-09-30 18:09:55.214 2 DEBUG oslo_concurrency.lockutils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Releasing lock "refresh_cache-c31a2d74-08f2-40bb-8b81-fd5301d9a627" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:09:55 compute-0 nova_compute[265391]: 2025-09-30 18:09:55.215 2 DEBUG nova.compute.manager [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Instance network_info: |[{"id": "a7cb8fe9-f156-4a0f-aa37-f348ade37f45", "address": "fa:16:3e:d7:c9:62", "network": {"id": "4b8f21c3-21c3-482f-88c7-197b5bceb2ea", "bridge": "br-int", "label": "tempest-TestDataModel-239005640-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5947b7c96cd42be8502dbab4c825083", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7cb8fe9-f1", "ovs_interfaceid": "a7cb8fe9-f156-4a0f-aa37-f348ade37f45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Sep 30 18:09:55 compute-0 nova_compute[265391]: 2025-09-30 18:09:55.217 2 DEBUG nova.virt.libvirt.driver [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Start _get_guest_xml network_info=[{"id": "a7cb8fe9-f156-4a0f-aa37-f348ade37f45", "address": "fa:16:3e:d7:c9:62", "network": {"id": "4b8f21c3-21c3-482f-88c7-197b5bceb2ea", "bridge": "br-int", "label": "tempest-TestDataModel-239005640-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5947b7c96cd42be8502dbab4c825083", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7cb8fe9-f1", "ovs_interfaceid": "a7cb8fe9-f156-4a0f-aa37-f348ade37f45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'image_id': '5b99cbca-b655-4be5-8343-cf504005c42e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Sep 30 18:09:55 compute-0 nova_compute[265391]: 2025-09-30 18:09:55.221 2 WARNING nova.virt.libvirt.driver [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:09:55 compute-0 nova_compute[265391]: 2025-09-30 18:09:55.223 2 DEBUG nova.virt.driver [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='5b99cbca-b655-4be5-8343-cf504005c42e', instance_meta=NovaInstanceMeta(name='tempest-TestDataModel-server-1794545882', uuid='c31a2d74-08f2-40bb-8b81-fd5301d9a627'), owner=OwnerMeta(userid='d8e62d62fa4d4959828354f71c48cd9d', username='tempest-TestDataModel-213655642-project-admin', projectid='0783e60216244dbda21696efa03e2275', projectname='tempest-TestDataModel-213655642'), image=ImageMeta(id='5b99cbca-b655-4be5-8343-cf504005c42e', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "a7cb8fe9-f156-4a0f-aa37-f348ade37f45", "address": "fa:16:3e:d7:c9:62", "network": {"id": "4b8f21c3-21c3-482f-88c7-197b5bceb2ea", "bridge": "br-int", "label": "tempest-TestDataModel-239005640-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5947b7c96cd42be8502dbab4c825083", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7cb8fe9-f1", "ovs_interfaceid": "a7cb8fe9-f156-4a0f-aa37-f348ade37f45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20250919142712.b99a882.el10', creation_time=1759255795.223027) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Sep 30 18:09:55 compute-0 nova_compute[265391]: 2025-09-30 18:09:55.228 2 DEBUG nova.virt.libvirt.host [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Sep 30 18:09:55 compute-0 nova_compute[265391]: 2025-09-30 18:09:55.229 2 DEBUG nova.virt.libvirt.host [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Sep 30 18:09:55 compute-0 nova_compute[265391]: 2025-09-30 18:09:55.231 2 DEBUG nova.virt.libvirt.host [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Sep 30 18:09:55 compute-0 nova_compute[265391]: 2025-09-30 18:09:55.232 2 DEBUG nova.virt.libvirt.host [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Sep 30 18:09:55 compute-0 nova_compute[265391]: 2025-09-30 18:09:55.233 2 DEBUG nova.virt.libvirt.driver [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Sep 30 18:09:55 compute-0 nova_compute[265391]: 2025-09-30 18:09:55.233 2 DEBUG nova.virt.hardware [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-09-30T18:04:50Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Sep 30 18:09:55 compute-0 nova_compute[265391]: 2025-09-30 18:09:55.234 2 DEBUG nova.virt.hardware [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Sep 30 18:09:55 compute-0 nova_compute[265391]: 2025-09-30 18:09:55.234 2 DEBUG nova.virt.hardware [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Sep 30 18:09:55 compute-0 nova_compute[265391]: 2025-09-30 18:09:55.234 2 DEBUG nova.virt.hardware [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Sep 30 18:09:55 compute-0 nova_compute[265391]: 2025-09-30 18:09:55.234 2 DEBUG nova.virt.hardware [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Sep 30 18:09:55 compute-0 nova_compute[265391]: 2025-09-30 18:09:55.235 2 DEBUG nova.virt.hardware [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Sep 30 18:09:55 compute-0 nova_compute[265391]: 2025-09-30 18:09:55.235 2 DEBUG nova.virt.hardware [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Sep 30 18:09:55 compute-0 nova_compute[265391]: 2025-09-30 18:09:55.235 2 DEBUG nova.virt.hardware [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Sep 30 18:09:55 compute-0 nova_compute[265391]: 2025-09-30 18:09:55.235 2 DEBUG nova.virt.hardware [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Sep 30 18:09:55 compute-0 nova_compute[265391]: 2025-09-30 18:09:55.236 2 DEBUG nova.virt.hardware [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Sep 30 18:09:55 compute-0 nova_compute[265391]: 2025-09-30 18:09:55.236 2 DEBUG nova.virt.hardware [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Sep 30 18:09:55 compute-0 nova_compute[265391]: 2025-09-30 18:09:55.238 2 DEBUG oslo_concurrency.processutils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:09:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:55 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2200044a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:09:55.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:09:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:09:55 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4272442282' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:09:55 compute-0 nova_compute[265391]: 2025-09-30 18:09:55.719 2 DEBUG oslo_concurrency.processutils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:09:55 compute-0 nova_compute[265391]: 2025-09-30 18:09:55.758 2 DEBUG nova.storage.rbd_utils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] rbd image c31a2d74-08f2-40bb-8b81-fd5301d9a627_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:09:55 compute-0 nova_compute[265391]: 2025-09-30 18:09:55.764 2 DEBUG oslo_concurrency.processutils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:09:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v933: 353 pgs: 353 active+clean; 88 MiB data, 205 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:09:56 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/4272442282' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:09:56 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Sep 30 18:09:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:09:56.082705) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 18:09:56 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Sep 30 18:09:56 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255796082740, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2110, "num_deletes": 251, "total_data_size": 4049875, "memory_usage": 4111344, "flush_reason": "Manual Compaction"}
Sep 30 18:09:56 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Sep 30 18:09:56 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255796097117, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 3904627, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24254, "largest_seqno": 26363, "table_properties": {"data_size": 3895243, "index_size": 5878, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19661, "raw_average_key_size": 20, "raw_value_size": 3876218, "raw_average_value_size": 4000, "num_data_blocks": 261, "num_entries": 969, "num_filter_entries": 969, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759255588, "oldest_key_time": 1759255588, "file_creation_time": 1759255796, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:09:56 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 14448 microseconds, and 7071 cpu microseconds.
Sep 30 18:09:56 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:09:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:09:56.097151) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 3904627 bytes OK
Sep 30 18:09:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:09:56.097175) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Sep 30 18:09:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:09:56.098707) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Sep 30 18:09:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:09:56.098722) EVENT_LOG_v1 {"time_micros": 1759255796098717, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 18:09:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:09:56.098743) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 18:09:56 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 4041166, prev total WAL file size 4041166, number of live WAL files 2.
Sep 30 18:09:56 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:09:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:09:56.099623) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Sep 30 18:09:56 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 18:09:56 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(3813KB)], [56(10MB)]
Sep 30 18:09:56 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255796099669, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 15135725, "oldest_snapshot_seqno": -1}
Sep 30 18:09:56 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5588 keys, 13047290 bytes, temperature: kUnknown
Sep 30 18:09:56 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255796159310, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 13047290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13008864, "index_size": 23340, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14021, "raw_key_size": 141487, "raw_average_key_size": 25, "raw_value_size": 12906635, "raw_average_value_size": 2309, "num_data_blocks": 961, "num_entries": 5588, "num_filter_entries": 5588, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759255796, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:09:56 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:09:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:09:56.159607) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 13047290 bytes
Sep 30 18:09:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:09:56.160702) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 253.2 rd, 218.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 10.7 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(7.2) write-amplify(3.3) OK, records in: 6110, records dropped: 522 output_compression: NoCompression
Sep 30 18:09:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:09:56.160720) EVENT_LOG_v1 {"time_micros": 1759255796160711, "job": 30, "event": "compaction_finished", "compaction_time_micros": 59775, "compaction_time_cpu_micros": 36512, "output_level": 6, "num_output_files": 1, "total_output_size": 13047290, "num_input_records": 6110, "num_output_records": 5588, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 18:09:56 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:09:56 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255796161574, "job": 30, "event": "table_file_deletion", "file_number": 58}
Sep 30 18:09:56 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:09:56 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255796163349, "job": 30, "event": "table_file_deletion", "file_number": 56}
Sep 30 18:09:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:09:56.099531) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:09:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:09:56.163428) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:09:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:09:56.163433) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:09:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:09:56.163435) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:09:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:09:56.163437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:09:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:09:56.163439) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:09:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:09:56 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3049268746' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:09:56 compute-0 nova_compute[265391]: 2025-09-30 18:09:56.261 2 DEBUG oslo_concurrency.processutils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:09:56 compute-0 nova_compute[265391]: 2025-09-30 18:09:56.263 2 DEBUG nova.virt.libvirt.vif [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:09:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestDataModel-server-1794545882',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testdatamodel-server-1794545882',id=2,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0783e60216244dbda21696efa03e2275',ramdisk_id='',reservation_id='r-wk0wr0ih',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestDataModel-213655642',owner_user_name='tempest-TestDataModel-213655642-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:09:48Z,user_data=None,user_id='d8e62d62fa4d4959828354f71c48cd9d',uuid=c31a2d74-08f2-40bb-8b81-fd5301d9a627,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a7cb8fe9-f156-4a0f-aa37-f348ade37f45", "address": "fa:16:3e:d7:c9:62", "network": {"id": "4b8f21c3-21c3-482f-88c7-197b5bceb2ea", "bridge": "br-int", "label": "tempest-TestDataModel-239005640-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5947b7c96cd42be8502dbab4c825083", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7cb8fe9-f1", "ovs_interfaceid": "a7cb8fe9-f156-4a0f-aa37-f348ade37f45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:09:56 compute-0 nova_compute[265391]: 2025-09-30 18:09:56.263 2 DEBUG nova.network.os_vif_util [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Converting VIF {"id": "a7cb8fe9-f156-4a0f-aa37-f348ade37f45", "address": "fa:16:3e:d7:c9:62", "network": {"id": "4b8f21c3-21c3-482f-88c7-197b5bceb2ea", "bridge": "br-int", "label": "tempest-TestDataModel-239005640-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5947b7c96cd42be8502dbab4c825083", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7cb8fe9-f1", "ovs_interfaceid": "a7cb8fe9-f156-4a0f-aa37-f348ade37f45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:09:56 compute-0 nova_compute[265391]: 2025-09-30 18:09:56.264 2 DEBUG nova.network.os_vif_util [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:c9:62,bridge_name='br-int',has_traffic_filtering=True,id=a7cb8fe9-f156-4a0f-aa37-f348ade37f45,network=Network(4b8f21c3-21c3-482f-88c7-197b5bceb2ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7cb8fe9-f1') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:09:56 compute-0 nova_compute[265391]: 2025-09-30 18:09:56.266 2 DEBUG nova.objects.instance [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Lazy-loading 'pci_devices' on Instance uuid c31a2d74-08f2-40bb-8b81-fd5301d9a627 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:09:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:09:56.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:56 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c003d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:56 compute-0 nova_compute[265391]: 2025-09-30 18:09:56.799 2 DEBUG nova.virt.libvirt.driver [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] End _get_guest_xml xml=<domain type="kvm">
Sep 30 18:09:56 compute-0 nova_compute[265391]:   <uuid>c31a2d74-08f2-40bb-8b81-fd5301d9a627</uuid>
Sep 30 18:09:56 compute-0 nova_compute[265391]:   <name>instance-00000002</name>
Sep 30 18:09:56 compute-0 nova_compute[265391]:   <memory>131072</memory>
Sep 30 18:09:56 compute-0 nova_compute[265391]:   <vcpu>1</vcpu>
Sep 30 18:09:56 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:09:56 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:       <nova:name>tempest-TestDataModel-server-1794545882</nova:name>
Sep 30 18:09:56 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:09:55</nova:creationTime>
Sep 30 18:09:56 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:09:56 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:09:56 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:09:56 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:09:56 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:09:56 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:09:56 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:09:56 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:09:56 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:09:56 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:09:56 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:09:56 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:09:56 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:09:56 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:09:56 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:09:56 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:09:56 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:09:56 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:09:56 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:09:56 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:09:56 compute-0 nova_compute[265391]:         <nova:user uuid="d8e62d62fa4d4959828354f71c48cd9d">tempest-TestDataModel-213655642-project-admin</nova:user>
Sep 30 18:09:56 compute-0 nova_compute[265391]:         <nova:project uuid="0783e60216244dbda21696efa03e2275">tempest-TestDataModel-213655642</nova:project>
Sep 30 18:09:56 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:09:56 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:09:56 compute-0 nova_compute[265391]:         <nova:port uuid="a7cb8fe9-f156-4a0f-aa37-f348ade37f45">
Sep 30 18:09:56 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:09:56 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:09:56 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:09:56 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <system>
Sep 30 18:09:56 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:09:56 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:09:56 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:09:56 compute-0 nova_compute[265391]:       <entry name="serial">c31a2d74-08f2-40bb-8b81-fd5301d9a627</entry>
Sep 30 18:09:56 compute-0 nova_compute[265391]:       <entry name="uuid">c31a2d74-08f2-40bb-8b81-fd5301d9a627</entry>
Sep 30 18:09:56 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     </system>
Sep 30 18:09:56 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:09:56 compute-0 nova_compute[265391]:   <os>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="q35">hvm</type>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:   </os>
Sep 30 18:09:56 compute-0 nova_compute[265391]:   <features>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <vmcoreinfo/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:   </features>
Sep 30 18:09:56 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:09:56 compute-0 nova_compute[265391]:   <cpu mode="host-model" match="exact">
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <topology sockets="1" cores="1" threads="1"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:09:56 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:09:56 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/c31a2d74-08f2-40bb-8b81-fd5301d9a627_disk">
Sep 30 18:09:56 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:       </source>
Sep 30 18:09:56 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:09:56 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:09:56 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:09:56 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/c31a2d74-08f2-40bb-8b81-fd5301d9a627_disk.config">
Sep 30 18:09:56 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:       </source>
Sep 30 18:09:56 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:09:56 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:09:56 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:09:56 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:d7:c9:62"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:       <target dev="tapa7cb8fe9-f1"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:09:56 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/c31a2d74-08f2-40bb-8b81-fd5301d9a627/console.log" append="off"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <video>
Sep 30 18:09:56 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     </video>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:09:56 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <controller type="usb" index="0"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:09:56 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:09:56 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:09:56 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:09:56 compute-0 nova_compute[265391]: </domain>
Sep 30 18:09:56 compute-0 nova_compute[265391]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Sep 30 18:09:56 compute-0 nova_compute[265391]: 2025-09-30 18:09:56.801 2 DEBUG nova.compute.manager [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Preparing to wait for external event network-vif-plugged-a7cb8fe9-f156-4a0f-aa37-f348ade37f45 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:09:56 compute-0 nova_compute[265391]: 2025-09-30 18:09:56.801 2 DEBUG oslo_concurrency.lockutils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Acquiring lock "c31a2d74-08f2-40bb-8b81-fd5301d9a627-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:09:56 compute-0 nova_compute[265391]: 2025-09-30 18:09:56.802 2 DEBUG oslo_concurrency.lockutils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Lock "c31a2d74-08f2-40bb-8b81-fd5301d9a627-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:09:56 compute-0 nova_compute[265391]: 2025-09-30 18:09:56.802 2 DEBUG oslo_concurrency.lockutils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Lock "c31a2d74-08f2-40bb-8b81-fd5301d9a627-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:09:56 compute-0 nova_compute[265391]: 2025-09-30 18:09:56.803 2 DEBUG nova.virt.libvirt.vif [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:09:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestDataModel-server-1794545882',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testdatamodel-server-1794545882',id=2,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0783e60216244dbda21696efa03e2275',ramdisk_id='',reservation_id='r-wk0wr0ih',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestDataModel-213655642',owner_user_name='tempest-TestDataModel-213655642-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:09:48Z,user_data=None,user_id='d8e62d62fa4d4959828354f71c48cd9d',uuid=c31a2d74-08f2-40bb-8b81-fd5301d9a627,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a7cb8fe9-f156-4a0f-aa37-f348ade37f45", "address": "fa:16:3e:d7:c9:62", "network": {"id": "4b8f21c3-21c3-482f-88c7-197b5bceb2ea", "bridge": "br-int", "label": "tempest-TestDataModel-239005640-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5947b7c96cd42be8502dbab4c825083", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7cb8fe9-f1", "ovs_interfaceid": "a7cb8fe9-f156-4a0f-aa37-f348ade37f45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Sep 30 18:09:56 compute-0 nova_compute[265391]: 2025-09-30 18:09:56.804 2 DEBUG nova.network.os_vif_util [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Converting VIF {"id": "a7cb8fe9-f156-4a0f-aa37-f348ade37f45", "address": "fa:16:3e:d7:c9:62", "network": {"id": "4b8f21c3-21c3-482f-88c7-197b5bceb2ea", "bridge": "br-int", "label": "tempest-TestDataModel-239005640-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5947b7c96cd42be8502dbab4c825083", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7cb8fe9-f1", "ovs_interfaceid": "a7cb8fe9-f156-4a0f-aa37-f348ade37f45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:09:56 compute-0 nova_compute[265391]: 2025-09-30 18:09:56.805 2 DEBUG nova.network.os_vif_util [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:c9:62,bridge_name='br-int',has_traffic_filtering=True,id=a7cb8fe9-f156-4a0f-aa37-f348ade37f45,network=Network(4b8f21c3-21c3-482f-88c7-197b5bceb2ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7cb8fe9-f1') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:09:56 compute-0 nova_compute[265391]: 2025-09-30 18:09:56.805 2 DEBUG os_vif [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:c9:62,bridge_name='br-int',has_traffic_filtering=True,id=a7cb8fe9-f156-4a0f-aa37-f348ade37f45,network=Network(4b8f21c3-21c3-482f-88c7-197b5bceb2ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7cb8fe9-f1') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Sep 30 18:09:56 compute-0 nova_compute[265391]: 2025-09-30 18:09:56.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:09:56 compute-0 nova_compute[265391]: 2025-09-30 18:09:56.807 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:09:56 compute-0 nova_compute[265391]: 2025-09-30 18:09:56.808 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:09:56 compute-0 nova_compute[265391]: 2025-09-30 18:09:56.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:09:56 compute-0 nova_compute[265391]: 2025-09-30 18:09:56.810 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': 'a2b9bebb-4e89-5c91-a765-7e044d605850', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:09:56 compute-0 nova_compute[265391]: 2025-09-30 18:09:56.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:09:56 compute-0 nova_compute[265391]: 2025-09-30 18:09:56.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:09:56 compute-0 nova_compute[265391]: 2025-09-30 18:09:56.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:09:56 compute-0 nova_compute[265391]: 2025-09-30 18:09:56.818 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa7cb8fe9-f1, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:09:56 compute-0 nova_compute[265391]: 2025-09-30 18:09:56.818 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tapa7cb8fe9-f1, col_values=(('qos', UUID('057de372-0e14-4b48-84b8-63d0f9fe4e17')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:09:56 compute-0 nova_compute[265391]: 2025-09-30 18:09:56.819 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tapa7cb8fe9-f1, col_values=(('external_ids', {'iface-id': 'a7cb8fe9-f156-4a0f-aa37-f348ade37f45', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d7:c9:62', 'vm-uuid': 'c31a2d74-08f2-40bb-8b81-fd5301d9a627'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:09:56 compute-0 nova_compute[265391]: 2025-09-30 18:09:56.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:09:56 compute-0 NetworkManager[45059]: <info>  [1759255796.8220] manager: (tapa7cb8fe9-f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Sep 30 18:09:56 compute-0 nova_compute[265391]: 2025-09-30 18:09:56.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:09:56 compute-0 nova_compute[265391]: 2025-09-30 18:09:56.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:09:56 compute-0 nova_compute[265391]: 2025-09-30 18:09:56.828 2 INFO os_vif [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:c9:62,bridge_name='br-int',has_traffic_filtering=True,id=a7cb8fe9-f156-4a0f-aa37-f348ade37f45,network=Network(4b8f21c3-21c3-482f-88c7-197b5bceb2ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7cb8fe9-f1')
Sep 30 18:09:57 compute-0 ceph-mon[73755]: pgmap v933: 353 pgs: 353 active+clean; 88 MiB data, 205 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:09:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3049268746' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:09:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:09:57.146Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:09:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:57 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:09:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:09:57.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:09:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:09:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2543639361' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:09:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:09:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2543639361' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:09:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v934: 353 pgs: 353 active+clean; 88 MiB data, 205 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:09:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2543639361' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:09:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2543639361' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:09:58 compute-0 nova_compute[265391]: 2025-09-30 18:09:58.382 2 DEBUG nova.virt.libvirt.driver [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:09:58 compute-0 nova_compute[265391]: 2025-09-30 18:09:58.382 2 DEBUG nova.virt.libvirt.driver [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:09:58 compute-0 nova_compute[265391]: 2025-09-30 18:09:58.382 2 DEBUG nova.virt.libvirt.driver [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] No VIF found with MAC fa:16:3e:d7:c9:62, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Sep 30 18:09:58 compute-0 nova_compute[265391]: 2025-09-30 18:09:58.383 2 INFO nova.virt.libvirt.driver [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Using config drive
Sep 30 18:09:58 compute-0 nova_compute[265391]: 2025-09-30 18:09:58.410 2 DEBUG nova.storage.rbd_utils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] rbd image c31a2d74-08f2-40bb-8b81-fd5301d9a627_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:09:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:09:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:09:58.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:09:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:58 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218003630 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:09:58] "GET /metrics HTTP/1.1" 200 46628 "" "Prometheus/2.51.0"
Sep 30 18:09:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:09:58] "GET /metrics HTTP/1.1" 200 46628 "" "Prometheus/2.51.0"
Sep 30 18:09:58 compute-0 nova_compute[265391]: 2025-09-30 18:09:58.927 2 WARNING neutronclient.v2_0.client [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:09:59 compute-0 ceph-mon[73755]: pgmap v934: 353 pgs: 353 active+clean; 88 MiB data, 205 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:09:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:09:59 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2200044a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:09:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:09:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:09:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:09:59.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:09:59 compute-0 nova_compute[265391]: 2025-09-30 18:09:59.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:09:59 compute-0 nova_compute[265391]: 2025-09-30 18:09:59.553 2 INFO nova.virt.libvirt.driver [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Creating config drive at /var/lib/nova/instances/c31a2d74-08f2-40bb-8b81-fd5301d9a627/disk.config
Sep 30 18:09:59 compute-0 nova_compute[265391]: 2025-09-30 18:09:59.567 2 DEBUG oslo_concurrency.processutils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c31a2d74-08f2-40bb-8b81-fd5301d9a627/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmp6j8uz71a execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:09:59 compute-0 nova_compute[265391]: 2025-09-30 18:09:59.706 2 DEBUG oslo_concurrency.processutils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c31a2d74-08f2-40bb-8b81-fd5301d9a627/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmp6j8uz71a" returned: 0 in 0.139s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:09:59 compute-0 podman[276673]: time="2025-09-30T18:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:09:59 compute-0 nova_compute[265391]: 2025-09-30 18:09:59.748 2 DEBUG nova.storage.rbd_utils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] rbd image c31a2d74-08f2-40bb-8b81-fd5301d9a627_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:09:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:09:59 compute-0 nova_compute[265391]: 2025-09-30 18:09:59.758 2 DEBUG oslo_concurrency.processutils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c31a2d74-08f2-40bb-8b81-fd5301d9a627/disk.config c31a2d74-08f2-40bb-8b81-fd5301d9a627_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:09:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10241 "" "Go-http-client/1.1"
Sep 30 18:09:59 compute-0 nova_compute[265391]: 2025-09-30 18:09:59.921 2 DEBUG oslo_concurrency.processutils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c31a2d74-08f2-40bb-8b81-fd5301d9a627/disk.config c31a2d74-08f2-40bb-8b81-fd5301d9a627_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:09:59 compute-0 nova_compute[265391]: 2025-09-30 18:09:59.922 2 INFO nova.virt.libvirt.driver [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Deleting local config drive /var/lib/nova/instances/c31a2d74-08f2-40bb-8b81-fd5301d9a627/disk.config because it was imported into RBD.
Sep 30 18:09:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v935: 353 pgs: 353 active+clean; 88 MiB data, 205 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:10:00 compute-0 kernel: tapa7cb8fe9-f1: entered promiscuous mode
Sep 30 18:10:00 compute-0 NetworkManager[45059]: <info>  [1759255800.0012] manager: (tapa7cb8fe9-f1): new Tun device (/org/freedesktop/NetworkManager/Devices/29)
Sep 30 18:10:00 compute-0 ovn_controller[156242]: 2025-09-30T18:10:00Z|00049|binding|INFO|Claiming lport a7cb8fe9-f156-4a0f-aa37-f348ade37f45 for this chassis.
Sep 30 18:10:00 compute-0 ovn_controller[156242]: 2025-09-30T18:10:00Z|00050|binding|INFO|a7cb8fe9-f156-4a0f-aa37-f348ade37f45: Claiming fa:16:3e:d7:c9:62 10.100.0.9
Sep 30 18:10:00 compute-0 nova_compute[265391]: 2025-09-30 18:10:00.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:00 compute-0 ceph-mon[73755]: log_channel(cluster) log [WRN] : overall HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:00.021 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:c9:62 10.100.0.9'], port_security=['fa:16:3e:d7:c9:62 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'c31a2d74-08f2-40bb-8b81-fd5301d9a627', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4b8f21c3-21c3-482f-88c7-197b5bceb2ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0783e60216244dbda21696efa03e2275', 'neutron:revision_number': '4', 'neutron:security_group_ids': '44055cfe-7091-4bf5-849f-a5ec90884056', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3fffd780-66a8-4f09-9e3d-aefd98ad1eb6, chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=a7cb8fe9-f156-4a0f-aa37-f348ade37f45) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:00.023 166158 INFO neutron.agent.ovn.metadata.agent [-] Port a7cb8fe9-f156-4a0f-aa37-f348ade37f45 in datapath 4b8f21c3-21c3-482f-88c7-197b5bceb2ea bound to our chassis
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:00.024 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4b8f21c3-21c3-482f-88c7-197b5bceb2ea
Sep 30 18:10:00 compute-0 systemd-udevd[301193]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:00.040 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[075a0878-7274-4bc8-8fed-917dd1a88b13]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:00.041 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4b8f21c3-21 in ovnmeta-4b8f21c3-21c3-482f-88c7-197b5bceb2ea namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:00.046 292173 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4b8f21c3-20 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:00.046 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[579ebdb0-3fa1-4858-b37b-adeef019407b]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:00.047 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[5db573db-8bd3-4c7a-a90a-e155f44fec58]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:10:00 compute-0 NetworkManager[45059]: <info>  [1759255800.0614] device (tapa7cb8fe9-f1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 18:10:00 compute-0 NetworkManager[45059]: <info>  [1759255800.0625] device (tapa7cb8fe9-f1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Sep 30 18:10:00 compute-0 systemd-machined[219917]: New machine qemu-2-instance-00000002.
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:00.065 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[f4eb92aa-f72a-4913-b9d5-0be3dd2ee9d9]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:00.085 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[0f46e73a-571d-466b-bc25-ef09f439663d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:10:00 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Sep 30 18:10:00 compute-0 nova_compute[265391]: 2025-09-30 18:10:00.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:00 compute-0 nova_compute[265391]: 2025-09-30 18:10:00.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:00 compute-0 ovn_controller[156242]: 2025-09-30T18:10:00Z|00051|binding|INFO|Setting lport a7cb8fe9-f156-4a0f-aa37-f348ade37f45 ovn-installed in OVS
Sep 30 18:10:00 compute-0 ovn_controller[156242]: 2025-09-30T18:10:00Z|00052|binding|INFO|Setting lport a7cb8fe9-f156-4a0f-aa37-f348ade37f45 up in Southbound
Sep 30 18:10:00 compute-0 nova_compute[265391]: 2025-09-30 18:10:00.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:00.125 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[12c7802c-068d-4495-bd90-7a7697b611f0]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:00.129 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[4e1fa6a6-c639-4ea4-a8f2-d77d474ede9d]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:10:00 compute-0 ceph-mon[73755]: overall HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore
Sep 30 18:10:00 compute-0 NetworkManager[45059]: <info>  [1759255800.1323] manager: (tap4b8f21c3-20): new Veth device (/org/freedesktop/NetworkManager/Devices/30)
Sep 30 18:10:00 compute-0 systemd-udevd[301197]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:00.168 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[d61524aa-3724-4b24-8c37-91106d9853e0]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:00.173 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[4519a800-2499-448c-b1ea-d26e0f4785bc]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:10:00 compute-0 NetworkManager[45059]: <info>  [1759255800.2028] device (tap4b8f21c3-20): carrier: link connected
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:00.210 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[3bdb3257-4537-44b1-91b9-ed96f98f5523]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:00.235 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[534a053c-62b6-4e6f-ad46-0c5c50a8fe29]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4b8f21c3-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:77:39:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439889, 'reachable_time': 23744, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301226, 'error': None, 'target': 'ovnmeta-4b8f21c3-21c3-482f-88c7-197b5bceb2ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:00.258 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[5c5381d1-1c5a-4994-a5c7-07079a86fb93]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe77:39ed'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439889, 'tstamp': 439889}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301227, 'error': None, 'target': 'ovnmeta-4b8f21c3-21c3-482f-88c7-197b5bceb2ea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:00.281 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[f122b1d6-3300-40b8-a7af-bf9750cc1afa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4b8f21c3-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:77:39:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439889, 'reachable_time': 23744, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301228, 'error': None, 'target': 'ovnmeta-4b8f21c3-21c3-482f-88c7-197b5bceb2ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
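The two RTM_NEWLINK dumps above are pyroute2 netlink messages that the agent's privsep helper fetches from inside the ovnmeta-4b8f21c3-21c3-482f-88c7-197b5bceb2ea namespace. A minimal sketch of reading the same link attributes directly with pyroute2 follows; the namespace and interface names are copied from the log, and calling NetNS directly is only an illustration of the data structure, not the agent's actual privsep code path.

# Sketch: read the tap device's netlink attributes with pyroute2, the library
# whose messages are dumped above. Requires root and the namespace to exist.
from pyroute2 import NetNS

NETNS = 'ovnmeta-4b8f21c3-21c3-482f-88c7-197b5bceb2ea'   # from the log above
IFNAME = 'tap4b8f21c3-21'                                # from the log above

with NetNS(NETNS) as ns:
    for link in ns.get_links():
        if link.get_attr('IFLA_IFNAME') != IFNAME:
            continue
        print('mac   :', link.get_attr('IFLA_ADDRESS'))    # fa:16:3e:77:39:ed
        print('mtu   :', link.get_attr('IFLA_MTU'))        # 1500
        print('state :', link.get_attr('IFLA_OPERSTATE'))  # UP
        print('kind  :', link.get_attr('IFLA_LINKINFO').get_attr('IFLA_INFO_KIND'))  # veth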
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:00.316 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[8d7b7bf7-5a17-49ea-8f45-a00c2b7a93ff]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:00.401 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[1a7fc835-7ac1-4c91-ad9a-eb081af69d14]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:00.402 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b8f21c3-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:00.403 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:00.403 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4b8f21c3-20, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:10:00 compute-0 nova_compute[265391]: 2025-09-30 18:10:00.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:00 compute-0 NetworkManager[45059]: <info>  [1759255800.4071] manager: (tap4b8f21c3-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Sep 30 18:10:00 compute-0 kernel: tap4b8f21c3-20: entered promiscuous mode
Sep 30 18:10:00 compute-0 nova_compute[265391]: 2025-09-30 18:10:00.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:00.409 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4b8f21c3-20, col_values=(('external_ids', {'iface-id': '12cfcc60-6c05-4cc2-8665-8a4d689e5c1a'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:10:00 compute-0 nova_compute[265391]: 2025-09-30 18:10:00.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:00 compute-0 ovn_controller[156242]: 2025-09-30T18:10:00Z|00053|binding|INFO|Releasing lport 12cfcc60-6c05-4cc2-8665-8a4d689e5c1a from this chassis (sb_readonly=0)
Sep 30 18:10:00 compute-0 nova_compute[265391]: 2025-09-30 18:10:00.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:00.427 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[2122620b-540e-4372-ac95-adcc7646d11a]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:00.428 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4b8f21c3-21c3-482f-88c7-197b5bceb2ea.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4b8f21c3-21c3-482f-88c7-197b5bceb2ea.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:00.428 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4b8f21c3-21c3-482f-88c7-197b5bceb2ea.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4b8f21c3-21c3-482f-88c7-197b5bceb2ea.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:00.428 166158 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for 4b8f21c3-21c3-482f-88c7-197b5bceb2ea disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:00.428 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4b8f21c3-21c3-482f-88c7-197b5bceb2ea.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4b8f21c3-21c3-482f-88c7-197b5bceb2ea.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:00.429 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[1fac6201-71ea-40c2-82e3-e702bca4f951]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:00.429 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4b8f21c3-21c3-482f-88c7-197b5bceb2ea.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4b8f21c3-21c3-482f-88c7-197b5bceb2ea.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:00.430 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[933795dd-c22a-4ecf-828c-9aeed1ca11e7]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:00.430 166158 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: global
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]:     log         /dev/log local0 debug
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]:     log-tag     haproxy-metadata-proxy-4b8f21c3-21c3-482f-88c7-197b5bceb2ea
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]:     user        root
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]:     group       root
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]:     maxconn     1024
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]:     pidfile     /var/lib/neutron/external/pids/4b8f21c3-21c3-482f-88c7-197b5bceb2ea.pid.haproxy
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]:     daemon
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: defaults
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]:     log global
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]:     mode http
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]:     option httplog
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]:     option dontlognull
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]:     option http-server-close
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]:     option forwardfor
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]:     retries                 3
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]:     timeout http-request    30s
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]:     timeout connect         30s
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]:     timeout client          32s
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]:     timeout server          32s
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]:     timeout http-keep-alive 30s
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: listen listener
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]:     bind 169.254.169.254:80
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]:     
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]:     server metadata /var/lib/neutron/metadata_proxy
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]:     http-request add-header X-OVN-Network-ID 4b8f21c3-21c3-482f-88c7-197b5bceb2ea
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Sep 30 18:10:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:00.431 166158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4b8f21c3-21c3-482f-88c7-197b5bceb2ea', 'env', 'PROCESS_TAG=haproxy-4b8f21c3-21c3-482f-88c7-197b5bceb2ea', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4b8f21c3-21c3-482f-88c7-197b5bceb2ea.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
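The repeated "Unable to access ...pid.haproxy" DEBUG lines and the "No haproxy process started" message above come from the agent's liveness check on the metadata proxy: it looks for the pidfile named in the generated config, checks whether that process is still alive, and since it is not, renders the config and spawns haproxy in the namespace via the rootwrap command. A rough sketch of that check, assuming only the pidfile path taken from the log (the real check in neutron.agent.linux.external_process additionally matches the process cmdline):

# Rough sketch of the liveness check behind the "No haproxy process started"
# message: read the pidfile written by haproxy and test whether that PID is
# still alive. Path copied from the log above; illustrative only.
import os

PIDFILE = ('/var/lib/neutron/external/pids/'
           '4b8f21c3-21c3-482f-88c7-197b5bceb2ea.pid.haproxy')

def proxy_pid():
    try:
        with open(PIDFILE) as f:
            return int(f.read().strip())
    except (OSError, ValueError):
        return None   # ENOENT here is what produces the DEBUG lines above

def proxy_active(pid):
    if pid is None:
        return False
    try:
        os.kill(pid, 0)        # signal 0: existence check only, nothing is sent
        return True
    except ProcessLookupError:
        return False
    except PermissionError:
        return True            # process exists but belongs to another user

print(proxy_active(proxy_pid()))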
Sep 30 18:10:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:10:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:10:00.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:10:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:10:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:00 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c003d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:00 compute-0 nova_compute[265391]: 2025-09-30 18:10:00.763 2 DEBUG nova.compute.manager [req-f72abe23-b00e-4076-b195-ff760c74c70f req-022fde24-636a-4714-91f5-3e81621bcf0d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Received event network-vif-plugged-a7cb8fe9-f156-4a0f-aa37-f348ade37f45 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:10:00 compute-0 nova_compute[265391]: 2025-09-30 18:10:00.764 2 DEBUG oslo_concurrency.lockutils [req-f72abe23-b00e-4076-b195-ff760c74c70f req-022fde24-636a-4714-91f5-3e81621bcf0d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "c31a2d74-08f2-40bb-8b81-fd5301d9a627-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:10:00 compute-0 nova_compute[265391]: 2025-09-30 18:10:00.764 2 DEBUG oslo_concurrency.lockutils [req-f72abe23-b00e-4076-b195-ff760c74c70f req-022fde24-636a-4714-91f5-3e81621bcf0d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "c31a2d74-08f2-40bb-8b81-fd5301d9a627-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:10:00 compute-0 nova_compute[265391]: 2025-09-30 18:10:00.764 2 DEBUG oslo_concurrency.lockutils [req-f72abe23-b00e-4076-b195-ff760c74c70f req-022fde24-636a-4714-91f5-3e81621bcf0d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "c31a2d74-08f2-40bb-8b81-fd5301d9a627-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:10:00 compute-0 nova_compute[265391]: 2025-09-30 18:10:00.765 2 DEBUG nova.compute.manager [req-f72abe23-b00e-4076-b195-ff760c74c70f req-022fde24-636a-4714-91f5-3e81621bcf0d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Processing event network-vif-plugged-a7cb8fe9-f156-4a0f-aa37-f348ade37f45 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:10:00 compute-0 podman[301261]: 2025-09-30 18:10:00.854420729 +0000 UTC m=+0.069702715 container create 91b5cba595b3f0c685d32549f0b6ffea1577b6a5484cdea816862ab2af734089 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-4b8f21c3-21c3-482f-88c7-197b5bceb2ea, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20250930)
Sep 30 18:10:00 compute-0 systemd[1]: Started libpod-conmon-91b5cba595b3f0c685d32549f0b6ffea1577b6a5484cdea816862ab2af734089.scope.
Sep 30 18:10:00 compute-0 podman[301261]: 2025-09-30 18:10:00.814051628 +0000 UTC m=+0.029333624 image pull 8925e336c8a9c8e5b40d98e4715ad60992d6f21ad0a398170a686c34f922c024 38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Sep 30 18:10:00 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:10:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ab45167e0507266ef071800330669726251ef521860c56e210ed1baaffb101b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Sep 30 18:10:00 compute-0 podman[301261]: 2025-09-30 18:10:00.941987278 +0000 UTC m=+0.157269314 container init 91b5cba595b3f0c685d32549f0b6ffea1577b6a5484cdea816862ab2af734089 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-4b8f21c3-21c3-482f-88c7-197b5bceb2ea, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250930, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Sep 30 18:10:00 compute-0 podman[301261]: 2025-09-30 18:10:00.947127842 +0000 UTC m=+0.162409828 container start 91b5cba595b3f0c685d32549f0b6ffea1577b6a5484cdea816862ab2af734089 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-4b8f21c3-21c3-482f-88c7-197b5bceb2ea, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.build-date=20250930, io.buildah.version=1.41.4)
Sep 30 18:10:00 compute-0 neutron-haproxy-ovnmeta-4b8f21c3-21c3-482f-88c7-197b5bceb2ea[301277]: [NOTICE]   (301281) : New worker (301283) forked
Sep 30 18:10:00 compute-0 neutron-haproxy-ovnmeta-4b8f21c3-21c3-482f-88c7-197b5bceb2ea[301277]: [NOTICE]   (301281) : Loading success.
Sep 30 18:10:01 compute-0 ceph-mon[73755]: pgmap v935: 353 pgs: 353 active+clean; 88 MiB data, 205 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:10:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:01 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2200044a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:10:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:10:01.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:10:01 compute-0 openstack_network_exporter[279566]: ERROR   18:10:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:10:01 compute-0 openstack_network_exporter[279566]: ERROR   18:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:10:01 compute-0 openstack_network_exporter[279566]: ERROR   18:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:10:01 compute-0 openstack_network_exporter[279566]: ERROR   18:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:10:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:10:01 compute-0 openstack_network_exporter[279566]: ERROR   18:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:10:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:10:01 compute-0 nova_compute[265391]: 2025-09-30 18:10:01.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v936: 353 pgs: 353 active+clean; 88 MiB data, 205 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:10:02 compute-0 nova_compute[265391]: 2025-09-30 18:10:02.013 2 DEBUG nova.compute.manager [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:10:02 compute-0 nova_compute[265391]: 2025-09-30 18:10:02.018 2 DEBUG nova.virt.libvirt.driver [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Sep 30 18:10:02 compute-0 nova_compute[265391]: 2025-09-30 18:10:02.022 2 INFO nova.virt.libvirt.driver [-] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Instance spawned successfully.
Sep 30 18:10:02 compute-0 nova_compute[265391]: 2025-09-30 18:10:02.022 2 DEBUG nova.virt.libvirt.driver [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Sep 30 18:10:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:10:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:10:02.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:10:02 compute-0 nova_compute[265391]: 2025-09-30 18:10:02.564 2 DEBUG nova.virt.libvirt.driver [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:10:02 compute-0 nova_compute[265391]: 2025-09-30 18:10:02.565 2 DEBUG nova.virt.libvirt.driver [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:10:02 compute-0 nova_compute[265391]: 2025-09-30 18:10:02.565 2 DEBUG nova.virt.libvirt.driver [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:10:02 compute-0 nova_compute[265391]: 2025-09-30 18:10:02.566 2 DEBUG nova.virt.libvirt.driver [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:10:02 compute-0 nova_compute[265391]: 2025-09-30 18:10:02.566 2 DEBUG nova.virt.libvirt.driver [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:10:02 compute-0 nova_compute[265391]: 2025-09-30 18:10:02.567 2 DEBUG nova.virt.libvirt.driver [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:10:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:02 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218003630 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:02 compute-0 nova_compute[265391]: 2025-09-30 18:10:02.898 2 DEBUG nova.compute.manager [req-ad018f04-d98c-4adc-bc15-5e2bc5df7577 req-1f9d6d42-13a4-4452-91e4-4748651fda3e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Received event network-vif-plugged-a7cb8fe9-f156-4a0f-aa37-f348ade37f45 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:10:02 compute-0 nova_compute[265391]: 2025-09-30 18:10:02.899 2 DEBUG oslo_concurrency.lockutils [req-ad018f04-d98c-4adc-bc15-5e2bc5df7577 req-1f9d6d42-13a4-4452-91e4-4748651fda3e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "c31a2d74-08f2-40bb-8b81-fd5301d9a627-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:10:02 compute-0 nova_compute[265391]: 2025-09-30 18:10:02.900 2 DEBUG oslo_concurrency.lockutils [req-ad018f04-d98c-4adc-bc15-5e2bc5df7577 req-1f9d6d42-13a4-4452-91e4-4748651fda3e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "c31a2d74-08f2-40bb-8b81-fd5301d9a627-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:10:02 compute-0 nova_compute[265391]: 2025-09-30 18:10:02.900 2 DEBUG oslo_concurrency.lockutils [req-ad018f04-d98c-4adc-bc15-5e2bc5df7577 req-1f9d6d42-13a4-4452-91e4-4748651fda3e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "c31a2d74-08f2-40bb-8b81-fd5301d9a627-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:10:02 compute-0 nova_compute[265391]: 2025-09-30 18:10:02.901 2 DEBUG nova.compute.manager [req-ad018f04-d98c-4adc-bc15-5e2bc5df7577 req-1f9d6d42-13a4-4452-91e4-4748651fda3e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] No waiting events found dispatching network-vif-plugged-a7cb8fe9-f156-4a0f-aa37-f348ade37f45 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:10:02 compute-0 nova_compute[265391]: 2025-09-30 18:10:02.901 2 WARNING nova.compute.manager [req-ad018f04-d98c-4adc-bc15-5e2bc5df7577 req-1f9d6d42-13a4-4452-91e4-4748651fda3e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Received unexpected event network-vif-plugged-a7cb8fe9-f156-4a0f-aa37-f348ade37f45 for instance with vm_state building and task_state spawning.
Sep 30 18:10:03 compute-0 nova_compute[265391]: 2025-09-30 18:10:03.074 2 INFO nova.compute.manager [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Took 13.34 seconds to spawn the instance on the hypervisor.
Sep 30 18:10:03 compute-0 nova_compute[265391]: 2025-09-30 18:10:03.076 2 DEBUG nova.compute.manager [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Sep 30 18:10:03 compute-0 ceph-mon[73755]: pgmap v936: 353 pgs: 353 active+clean; 88 MiB data, 205 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:10:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:03 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003e50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:10:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:10:03.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:10:03 compute-0 nova_compute[265391]: 2025-09-30 18:10:03.604 2 INFO nova.compute.manager [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Took 19.04 seconds to build instance.
Sep 30 18:10:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:10:03.661Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:10:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v937: 353 pgs: 353 active+clean; 88 MiB data, 205 MiB used, 40 GiB / 40 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Sep 30 18:10:04 compute-0 nova_compute[265391]: 2025-09-30 18:10:04.111 2 DEBUG oslo_concurrency.lockutils [None req-74389532-7338-4f2a-9aff-abde2ac40984 d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Lock "c31a2d74-08f2-40bb-8b81-fd5301d9a627" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.563s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:10:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:10:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:10:04.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:10:04 compute-0 nova_compute[265391]: 2025-09-30 18:10:04.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:04 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:05 compute-0 ceph-mon[73755]: pgmap v937: 353 pgs: 353 active+clean; 88 MiB data, 205 MiB used, 40 GiB / 40 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Sep 30 18:10:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:05 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2200044a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:10:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:10:05.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:10:05 compute-0 sshd-session[301338]: Invalid user postgres from 14.225.220.107 port 40482
Sep 30 18:10:05 compute-0 sshd-session[301338]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:10:05 compute-0 sshd-session[301338]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107
Sep 30 18:10:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:10:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v938: 353 pgs: 353 active+clean; 88 MiB data, 205 MiB used, 40 GiB / 40 GiB avail; 1.8 MiB/s rd, 13 KiB/s wr, 70 op/s
Sep 30 18:10:06 compute-0 ceph-mon[73755]: pgmap v938: 353 pgs: 353 active+clean; 88 MiB data, 205 MiB used, 40 GiB / 40 GiB avail; 1.8 MiB/s rd, 13 KiB/s wr, 70 op/s
Sep 30 18:10:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:10:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:10:06.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:10:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:06 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218003630 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:06 compute-0 nova_compute[265391]: 2025-09-30 18:10:06.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:10:07.147Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:10:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:10:07.147Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:10:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:07 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003e70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:10:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:10:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:10:07 compute-0 sshd-session[301338]: Failed password for invalid user postgres from 14.225.220.107 port 40482 ssh2
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:10:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:10:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:10:07.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005229063904082829 of space, bias 1.0, pg target 0.10458127808165658 quantized to 32 (current 32)
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
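The pg_autoscaler lines above are internally consistent: for every pool, the logged pg target equals the logged space ratio times the pool's bias times 200, and that value is then quantized to the figure shown. The factor 200 is simply what the ratios and targets in this log imply for this cluster root; interpreting it (for example as a per-OSD PG target times the OSD count) would be an assumption. A quick check using only numbers copied from the log:

# Verify, from the logged numbers alone, that pg_target == ratio * bias * 200.
# Values are copied verbatim from the pg_autoscaler lines above; the meaning
# of the constant 200 is an inference, not something stated in the log.
pools = [
    ('.mgr',               1.0778624975581169e-05, 1.0, 0.0021557249951162337),
    ('vms',                0.0005229063904082829,  1.0, 0.10458127808165658),
    ('images',             0.000998787452383278,   1.0, 0.1997574904766556),
    ('cephfs.cephfs.meta', 7.630884938464543e-07,  4.0, 0.0006104707950771635),
    ('.nfs',               9.538606173080679e-08,  1.0, 1.907721234616136e-05),
]
for name, ratio, bias, target in pools:
    assert abs(ratio * bias * 200 - target) < 1e-12, name
print('all listed pools satisfy: pg target == ratio * bias * 200')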
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:10:07
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['.rgw.root', '.nfs', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'backups', 'vms', 'cephfs.cephfs.meta', 'images']
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:10:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v939: 353 pgs: 353 active+clean; 88 MiB data, 205 MiB used, 40 GiB / 40 GiB avail; 1.8 MiB/s rd, 13 KiB/s wr, 70 op/s
Sep 30 18:10:08 compute-0 ceph-mon[73755]: pgmap v939: 353 pgs: 353 active+clean; 88 MiB data, 205 MiB used, 40 GiB / 40 GiB avail; 1.8 MiB/s rd, 13 KiB/s wr, 70 op/s
Sep 30 18:10:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:10:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:10:08.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:10:08 compute-0 sudo[301345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:10:08 compute-0 sudo[301345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:10:08 compute-0 sudo[301345]: pam_unix(sudo:session): session closed for user root
Sep 30 18:10:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:08 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:10:08] "GET /metrics HTTP/1.1" 200 46646 "" "Prometheus/2.51.0"
Sep 30 18:10:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:10:08] "GET /metrics HTTP/1.1" 200 46646 "" "Prometheus/2.51.0"
Sep 30 18:10:09 compute-0 sshd-session[301338]: Received disconnect from 14.225.220.107 port 40482:11: Bye Bye [preauth]
Sep 30 18:10:09 compute-0 sshd-session[301338]: Disconnected from invalid user postgres 14.225.220.107 port 40482 [preauth]
Sep 30 18:10:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:09 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2200044a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:10:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:10:09.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:10:09 compute-0 nova_compute[265391]: 2025-09-30 18:10:09.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v940: 353 pgs: 353 active+clean; 88 MiB data, 205 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:10:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:10:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:10:10.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:10:10 compute-0 podman[301374]: 2025-09-30 18:10:10.55412506 +0000 UTC m=+0.078582206 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Sep 30 18:10:10 compute-0 podman[301373]: 2025-09-30 18:10:10.567189951 +0000 UTC m=+0.101165765 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20250930, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:10:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:10:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:10 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c002810 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:10 compute-0 radosgw[96126]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Sep 30 18:10:10 compute-0 radosgw[96126]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Sep 30 18:10:10 compute-0 radosgw[96126]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Sep 30 18:10:10 compute-0 radosgw[96126]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Sep 30 18:10:10 compute-0 radosgw[96126]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Sep 30 18:10:10 compute-0 radosgw[96126]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Sep 30 18:10:10 compute-0 radosgw[96126]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Sep 30 18:10:10 compute-0 radosgw[96126]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Sep 30 18:10:11 compute-0 ceph-mon[73755]: pgmap v940: 353 pgs: 353 active+clean; 88 MiB data, 205 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:10:11 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2824111221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:10:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:11 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:10:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:10:11.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:10:11 compute-0 nova_compute[265391]: 2025-09-30 18:10:11.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v941: 353 pgs: 353 active+clean; 88 MiB data, 205 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:10:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:10:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:10:12.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:10:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:12 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:13 compute-0 ceph-mon[73755]: pgmap v941: 353 pgs: 353 active+clean; 88 MiB data, 205 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:10:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:13 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2200044a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:10:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:10:13.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:10:13 compute-0 sshd-session[301423]: Invalid user www from 45.252.249.158 port 33910
Sep 30 18:10:13 compute-0 sshd-session[301423]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:10:13 compute-0 sshd-session[301423]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158
Sep 30 18:10:13 compute-0 ovn_controller[156242]: 2025-09-30T18:10:13Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d7:c9:62 10.100.0.9
Sep 30 18:10:13 compute-0 ovn_controller[156242]: 2025-09-30T18:10:13Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d7:c9:62 10.100.0.9
Sep 30 18:10:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:13.541 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:10:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:13.542 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:10:13 compute-0 nova_compute[265391]: 2025-09-30 18:10:13.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:10:13.662Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:10:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v942: 353 pgs: 353 active+clean; 109 MiB data, 221 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 233 op/s
Sep 30 18:10:14 compute-0 nova_compute[265391]: 2025-09-30 18:10:14.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:10:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:10:14.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:10:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:14 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c002810 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:14 compute-0 sshd-session[301423]: Failed password for invalid user www from 45.252.249.158 port 33910 ssh2
Sep 30 18:10:15 compute-0 ceph-mon[73755]: pgmap v942: 353 pgs: 353 active+clean; 109 MiB data, 221 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 233 op/s
Sep 30 18:10:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:15 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:10:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:10:15.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:10:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:10:15 compute-0 sshd-session[301423]: Received disconnect from 45.252.249.158 port 33910:11: Bye Bye [preauth]
Sep 30 18:10:15 compute-0 sshd-session[301423]: Disconnected from invalid user www 45.252.249.158 port 33910 [preauth]
Sep 30 18:10:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v943: 353 pgs: 353 active+clean; 109 MiB data, 221 MiB used, 40 GiB / 40 GiB avail; 434 KiB/s rd, 2.0 MiB/s wr, 162 op/s
Sep 30 18:10:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:10:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:10:16.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:10:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:16 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:16 compute-0 nova_compute[265391]: 2025-09-30 18:10:16.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:17 compute-0 ceph-mon[73755]: pgmap v943: 353 pgs: 353 active+clean; 109 MiB data, 221 MiB used, 40 GiB / 40 GiB avail; 434 KiB/s rd, 2.0 MiB/s wr, 162 op/s
Sep 30 18:10:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:10:17.148Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:10:17 compute-0 podman[301432]: 2025-09-30 18:10:17.20839959 +0000 UTC m=+0.076241345 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Sep 30 18:10:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:17 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2200044a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:10:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:10:17.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:10:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v944: 353 pgs: 353 active+clean; 109 MiB data, 221 MiB used, 40 GiB / 40 GiB avail; 434 KiB/s rd, 2.0 MiB/s wr, 162 op/s
Sep 30 18:10:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:10:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:10:18.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:10:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:18 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c002810 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:10:18] "GET /metrics HTTP/1.1" 200 46646 "" "Prometheus/2.51.0"
Sep 30 18:10:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:10:18] "GET /metrics HTTP/1.1" 200 46646 "" "Prometheus/2.51.0"
Sep 30 18:10:18 compute-0 nova_compute[265391]: 2025-09-30 18:10:18.937 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:10:19 compute-0 ceph-mon[73755]: pgmap v944: 353 pgs: 353 active+clean; 109 MiB data, 221 MiB used, 40 GiB / 40 GiB avail; 434 KiB/s rd, 2.0 MiB/s wr, 162 op/s
Sep 30 18:10:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:19 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:19 compute-0 nova_compute[265391]: 2025-09-30 18:10:19.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:10:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:10:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:10:19.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:10:19 compute-0 nova_compute[265391]: 2025-09-30 18:10:19.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v945: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 213 op/s
Sep 30 18:10:20 compute-0 nova_compute[265391]: 2025-09-30 18:10:20.152 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:10:20 compute-0 nova_compute[265391]: 2025-09-30 18:10:20.152 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:10:20 compute-0 nova_compute[265391]: 2025-09-30 18:10:20.153 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:10:20 compute-0 nova_compute[265391]: 2025-09-30 18:10:20.153 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:10:20 compute-0 nova_compute[265391]: 2025-09-30 18:10:20.153 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:10:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:10:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:10:20.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:10:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:10:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:10:20 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3352399850' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:10:20 compute-0 nova_compute[265391]: 2025-09-30 18:10:20.644 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:10:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:20 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:21 compute-0 ceph-mon[73755]: pgmap v945: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 213 op/s
Sep 30 18:10:21 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3352399850' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:10:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:21 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2200044a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:10:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:10:21.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:10:21 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:21.543 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:10:21 compute-0 nova_compute[265391]: 2025-09-30 18:10:21.686 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:10:21 compute-0 nova_compute[265391]: 2025-09-30 18:10:21.687 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:10:21 compute-0 nova_compute[265391]: 2025-09-30 18:10:21.831 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:10:21 compute-0 nova_compute[265391]: 2025-09-30 18:10:21.832 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:10:21 compute-0 nova_compute[265391]: 2025-09-30 18:10:21.852 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.020s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:10:21 compute-0 nova_compute[265391]: 2025-09-30 18:10:21.853 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4332MB free_disk=39.92606735229492GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:10:21 compute-0 nova_compute[265391]: 2025-09-30 18:10:21.853 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:10:21 compute-0 nova_compute[265391]: 2025-09-30 18:10:21.853 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:10:21 compute-0 nova_compute[265391]: 2025-09-30 18:10:21.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v946: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 210 op/s
Sep 30 18:10:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:10:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:10:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:10:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:10:22.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:10:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:22 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c002810 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:22 compute-0 nova_compute[265391]: 2025-09-30 18:10:22.979 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Instance c31a2d74-08f2-40bb-8b81-fd5301d9a627 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Sep 30 18:10:22 compute-0 nova_compute[265391]: 2025-09-30 18:10:22.980 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:10:22 compute-0 nova_compute[265391]: 2025-09-30 18:10:22.980 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=39GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:10:21 up  1:13,  0 user,  load average: 0.90, 0.87, 0.92\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_None': '1', 'num_os_type_None': '1', 'num_proj_0783e60216244dbda21696efa03e2275': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:10:23 compute-0 nova_compute[265391]: 2025-09-30 18:10:23.109 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:10:23 compute-0 ceph-mon[73755]: pgmap v946: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 210 op/s
Sep 30 18:10:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:10:23 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2957699765' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:10:23 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2372102183' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:10:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:23 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:10:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:10:23.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:10:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:10:23 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4147530626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:10:23 compute-0 podman[301508]: 2025-09-30 18:10:23.539876589 +0000 UTC m=+0.075975788 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, release=1755695350, architecture=x86_64, name=ubi9-minimal, vcs-type=git, distribution-scope=public, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Sep 30 18:10:23 compute-0 podman[301506]: 2025-09-30 18:10:23.541997295 +0000 UTC m=+0.080687302 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0)
Sep 30 18:10:23 compute-0 podman[301507]: 2025-09-30 18:10:23.542132918 +0000 UTC m=+0.079067819 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250930)
Sep 30 18:10:23 compute-0 nova_compute[265391]: 2025-09-30 18:10:23.577 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:10:23 compute-0 nova_compute[265391]: 2025-09-30 18:10:23.583 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:10:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:10:23.663Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:10:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v947: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 211 op/s
Sep 30 18:10:24 compute-0 nova_compute[265391]: 2025-09-30 18:10:24.090 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:10:24 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/4147530626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:10:24 compute-0 nova_compute[265391]: 2025-09-30 18:10:24.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:10:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:10:24.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:10:24 compute-0 nova_compute[265391]: 2025-09-30 18:10:24.600 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:10:24 compute-0 nova_compute[265391]: 2025-09-30 18:10:24.600 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.747s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:10:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:24 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:25 compute-0 ceph-mon[73755]: pgmap v947: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 211 op/s
Sep 30 18:10:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:25 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224002ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:10:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:10:25.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:10:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:10:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v948: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 1.8 MiB/s rd, 1.9 MiB/s wr, 51 op/s
Sep 30 18:10:26 compute-0 ceph-mon[73755]: pgmap v948: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 1.8 MiB/s rd, 1.9 MiB/s wr, 51 op/s
Sep 30 18:10:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:10:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:10:26.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:10:26 compute-0 nova_compute[265391]: 2025-09-30 18:10:26.600 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:10:26 compute-0 nova_compute[265391]: 2025-09-30 18:10:26.601 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:10:26 compute-0 nova_compute[265391]: 2025-09-30 18:10:26.601 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:10:26 compute-0 nova_compute[265391]: 2025-09-30 18:10:26.601 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:10:26 compute-0 nova_compute[265391]: 2025-09-30 18:10:26.602 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:10:26 compute-0 nova_compute[265391]: 2025-09-30 18:10:26.602 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:10:26 compute-0 nova_compute[265391]: 2025-09-30 18:10:26.602 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:10:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:26 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c002810 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:26 compute-0 nova_compute[265391]: 2025-09-30 18:10:26.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:10:27.149Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:10:27 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3274908834' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:10:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:27 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c003330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:10:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:10:27.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:10:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v949: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 1.8 MiB/s rd, 1.9 MiB/s wr, 51 op/s
Sep 30 18:10:28 compute-0 ceph-mon[73755]: pgmap v949: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 1.8 MiB/s rd, 1.9 MiB/s wr, 51 op/s
Sep 30 18:10:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:10:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:10:28.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:10:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:28 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c003330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:10:28] "GET /metrics HTTP/1.1" 200 46647 "" "Prometheus/2.51.0"
Sep 30 18:10:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:10:28] "GET /metrics HTTP/1.1" 200 46647 "" "Prometheus/2.51.0"
Sep 30 18:10:28 compute-0 sudo[301572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:10:28 compute-0 sudo[301572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:10:28 compute-0 sudo[301572]: pam_unix(sudo:session): session closed for user root
Sep 30 18:10:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:29 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c003330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:10:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:10:29.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:10:29 compute-0 nova_compute[265391]: 2025-09-30 18:10:29.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:29 compute-0 podman[276673]: time="2025-09-30T18:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:10:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:10:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10715 "" "Go-http-client/1.1"
Sep 30 18:10:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v950: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 1.8 MiB/s rd, 1.9 MiB/s wr, 56 op/s
Sep 30 18:10:30 compute-0 sudo[301599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:10:30 compute-0 sudo[301599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:10:30 compute-0 sudo[301599]: pam_unix(sudo:session): session closed for user root
Sep 30 18:10:30 compute-0 sudo[301624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:10:30 compute-0 sudo[301624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:10:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:10:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:10:30.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:10:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:10:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:30 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224002ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:30 compute-0 sudo[301624]: pam_unix(sudo:session): session closed for user root
Sep 30 18:10:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:10:30 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:10:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:10:30 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:10:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:10:30 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:10:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:10:30 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:10:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:10:30 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:10:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:10:30 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:10:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:10:30 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:10:31 compute-0 sudo[301682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:10:31 compute-0 sudo[301682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:10:31 compute-0 sudo[301682]: pam_unix(sudo:session): session closed for user root
Sep 30 18:10:31 compute-0 ceph-mon[73755]: pgmap v950: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 1.8 MiB/s rd, 1.9 MiB/s wr, 56 op/s
Sep 30 18:10:31 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3126934234' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:10:31 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:10:31 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:10:31 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:10:31 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:10:31 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:10:31 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:10:31 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:10:31 compute-0 sudo[301707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:10:31 compute-0 sudo[301707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:10:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:31 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:31 compute-0 openstack_network_exporter[279566]: ERROR   18:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:10:31 compute-0 openstack_network_exporter[279566]: ERROR   18:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:10:31 compute-0 openstack_network_exporter[279566]: ERROR   18:10:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:10:31 compute-0 openstack_network_exporter[279566]: ERROR   18:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:10:31 compute-0 openstack_network_exporter[279566]: ERROR   18:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:10:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:10:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:10:31.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:10:31 compute-0 podman[301776]: 2025-09-30 18:10:31.579885501 +0000 UTC m=+0.060860376 container create a16bb01d7a9d689173bba489613dbfedfe1237459a9a29285ac88fa25c28ef75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_napier, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Sep 30 18:10:31 compute-0 systemd[1]: Started libpod-conmon-a16bb01d7a9d689173bba489613dbfedfe1237459a9a29285ac88fa25c28ef75.scope.
Sep 30 18:10:31 compute-0 podman[301776]: 2025-09-30 18:10:31.551313627 +0000 UTC m=+0.032288572 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:10:31 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:10:31 compute-0 podman[301776]: 2025-09-30 18:10:31.677324667 +0000 UTC m=+0.158299532 container init a16bb01d7a9d689173bba489613dbfedfe1237459a9a29285ac88fa25c28ef75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_napier, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:10:31 compute-0 podman[301776]: 2025-09-30 18:10:31.685225703 +0000 UTC m=+0.166200558 container start a16bb01d7a9d689173bba489613dbfedfe1237459a9a29285ac88fa25c28ef75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_napier, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:10:31 compute-0 podman[301776]: 2025-09-30 18:10:31.688054816 +0000 UTC m=+0.169029691 container attach a16bb01d7a9d689173bba489613dbfedfe1237459a9a29285ac88fa25c28ef75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_napier, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Sep 30 18:10:31 compute-0 happy_napier[301792]: 167 167
Sep 30 18:10:31 compute-0 podman[301776]: 2025-09-30 18:10:31.691388193 +0000 UTC m=+0.172363048 container died a16bb01d7a9d689173bba489613dbfedfe1237459a9a29285ac88fa25c28ef75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_napier, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Sep 30 18:10:31 compute-0 systemd[1]: libpod-a16bb01d7a9d689173bba489613dbfedfe1237459a9a29285ac88fa25c28ef75.scope: Deactivated successfully.
Sep 30 18:10:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cc9964ecbbf6655713201a7701f1fd099744a082aaf1072ae6baf2ae388d788-merged.mount: Deactivated successfully.
Sep 30 18:10:31 compute-0 podman[301776]: 2025-09-30 18:10:31.73431018 +0000 UTC m=+0.215285035 container remove a16bb01d7a9d689173bba489613dbfedfe1237459a9a29285ac88fa25c28ef75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:10:31 compute-0 systemd[1]: libpod-conmon-a16bb01d7a9d689173bba489613dbfedfe1237459a9a29285ac88fa25c28ef75.scope: Deactivated successfully.
Sep 30 18:10:31 compute-0 nova_compute[265391]: 2025-09-30 18:10:31.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:31 compute-0 podman[301818]: 2025-09-30 18:10:31.971452263 +0000 UTC m=+0.063041432 container create c31a6c32ef48f447edfc6e3853b608830807bba3c319860947fcba4006245b4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Sep 30 18:10:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v951: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 3.2 KiB/s rd, 26 KiB/s wr, 5 op/s
Sep 30 18:10:32 compute-0 systemd[1]: Started libpod-conmon-c31a6c32ef48f447edfc6e3853b608830807bba3c319860947fcba4006245b4e.scope.
Sep 30 18:10:32 compute-0 podman[301818]: 2025-09-30 18:10:31.942865239 +0000 UTC m=+0.034454468 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:10:32 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bc3d5293a8b6b9da9ed25d3e59f021e1dd2756924d34871a24e34a6adf3856d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bc3d5293a8b6b9da9ed25d3e59f021e1dd2756924d34871a24e34a6adf3856d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bc3d5293a8b6b9da9ed25d3e59f021e1dd2756924d34871a24e34a6adf3856d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bc3d5293a8b6b9da9ed25d3e59f021e1dd2756924d34871a24e34a6adf3856d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bc3d5293a8b6b9da9ed25d3e59f021e1dd2756924d34871a24e34a6adf3856d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:10:32 compute-0 podman[301818]: 2025-09-30 18:10:32.087934335 +0000 UTC m=+0.179523554 container init c31a6c32ef48f447edfc6e3853b608830807bba3c319860947fcba4006245b4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_merkle, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:10:32 compute-0 podman[301818]: 2025-09-30 18:10:32.099482386 +0000 UTC m=+0.191071555 container start c31a6c32ef48f447edfc6e3853b608830807bba3c319860947fcba4006245b4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Sep 30 18:10:32 compute-0 podman[301818]: 2025-09-30 18:10:32.103460289 +0000 UTC m=+0.195049538 container attach c31a6c32ef48f447edfc6e3853b608830807bba3c319860947fcba4006245b4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_merkle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Sep 30 18:10:32 compute-0 hungry_merkle[301835]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:10:32 compute-0 hungry_merkle[301835]: --> All data devices are unavailable
Sep 30 18:10:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:10:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:10:32.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:10:32 compute-0 systemd[1]: libpod-c31a6c32ef48f447edfc6e3853b608830807bba3c319860947fcba4006245b4e.scope: Deactivated successfully.
Sep 30 18:10:32 compute-0 podman[301818]: 2025-09-30 18:10:32.548615576 +0000 UTC m=+0.640204755 container died c31a6c32ef48f447edfc6e3853b608830807bba3c319860947fcba4006245b4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_merkle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Sep 30 18:10:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bc3d5293a8b6b9da9ed25d3e59f021e1dd2756924d34871a24e34a6adf3856d-merged.mount: Deactivated successfully.
Sep 30 18:10:32 compute-0 podman[301818]: 2025-09-30 18:10:32.605408824 +0000 UTC m=+0.696997973 container remove c31a6c32ef48f447edfc6e3853b608830807bba3c319860947fcba4006245b4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Sep 30 18:10:32 compute-0 systemd[1]: libpod-conmon-c31a6c32ef48f447edfc6e3853b608830807bba3c319860947fcba4006245b4e.scope: Deactivated successfully.
Sep 30 18:10:32 compute-0 sudo[301707]: pam_unix(sudo:session): session closed for user root
Sep 30 18:10:32 compute-0 sudo[301861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:10:32 compute-0 sudo[301861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:10:32 compute-0 sudo[301861]: pam_unix(sudo:session): session closed for user root
Sep 30 18:10:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:32 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:32 compute-0 sudo[301886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:10:32 compute-0 sudo[301886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:10:33 compute-0 ceph-mon[73755]: pgmap v951: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 3.2 KiB/s rd, 26 KiB/s wr, 5 op/s
Sep 30 18:10:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:33 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c003330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:33 compute-0 podman[301951]: 2025-09-30 18:10:33.329010949 +0000 UTC m=+0.051083720 container create 0aa4b6cfa06a91c33969aaae23e128dc490df5a303bb935e4102b96342e2bc7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kapitsa, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Sep 30 18:10:33 compute-0 systemd[1]: Started libpod-conmon-0aa4b6cfa06a91c33969aaae23e128dc490df5a303bb935e4102b96342e2bc7f.scope.
Sep 30 18:10:33 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:10:33 compute-0 podman[301951]: 2025-09-30 18:10:33.309520252 +0000 UTC m=+0.031593083 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:10:33 compute-0 podman[301951]: 2025-09-30 18:10:33.419984497 +0000 UTC m=+0.142057288 container init 0aa4b6cfa06a91c33969aaae23e128dc490df5a303bb935e4102b96342e2bc7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kapitsa, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 18:10:33 compute-0 podman[301951]: 2025-09-30 18:10:33.428314094 +0000 UTC m=+0.150386845 container start 0aa4b6cfa06a91c33969aaae23e128dc490df5a303bb935e4102b96342e2bc7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:10:33 compute-0 podman[301951]: 2025-09-30 18:10:33.431790785 +0000 UTC m=+0.153863586 container attach 0aa4b6cfa06a91c33969aaae23e128dc490df5a303bb935e4102b96342e2bc7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 18:10:33 compute-0 nostalgic_kapitsa[301967]: 167 167
Sep 30 18:10:33 compute-0 systemd[1]: libpod-0aa4b6cfa06a91c33969aaae23e128dc490df5a303bb935e4102b96342e2bc7f.scope: Deactivated successfully.
Sep 30 18:10:33 compute-0 conmon[301967]: conmon 0aa4b6cfa06a91c33969 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0aa4b6cfa06a91c33969aaae23e128dc490df5a303bb935e4102b96342e2bc7f.scope/container/memory.events
Sep 30 18:10:33 compute-0 podman[301951]: 2025-09-30 18:10:33.435239034 +0000 UTC m=+0.157311795 container died 0aa4b6cfa06a91c33969aaae23e128dc490df5a303bb935e4102b96342e2bc7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kapitsa, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:10:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:10:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:10:33.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:10:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e388656cbe7e90114235018f6668f133eae5af31036b74e857fb20cf86d6f11-merged.mount: Deactivated successfully.
Sep 30 18:10:33 compute-0 podman[301951]: 2025-09-30 18:10:33.477060803 +0000 UTC m=+0.199133564 container remove 0aa4b6cfa06a91c33969aaae23e128dc490df5a303bb935e4102b96342e2bc7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kapitsa, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Sep 30 18:10:33 compute-0 systemd[1]: libpod-conmon-0aa4b6cfa06a91c33969aaae23e128dc490df5a303bb935e4102b96342e2bc7f.scope: Deactivated successfully.
Sep 30 18:10:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:10:33.664Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:10:33 compute-0 podman[301992]: 2025-09-30 18:10:33.70514307 +0000 UTC m=+0.047632761 container create d82c7c4dbb2499520010f29f1d6b1a381d017b3140f10c9c0cf118ec11f23ffa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_torvalds, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:10:33 compute-0 systemd[1]: Started libpod-conmon-d82c7c4dbb2499520010f29f1d6b1a381d017b3140f10c9c0cf118ec11f23ffa.scope.
Sep 30 18:10:33 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:10:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc421f32551c8df3174ca8fd4fcb9d84fa5ba5ce5dba29faa3d34900ae5b50a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:10:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc421f32551c8df3174ca8fd4fcb9d84fa5ba5ce5dba29faa3d34900ae5b50a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:10:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc421f32551c8df3174ca8fd4fcb9d84fa5ba5ce5dba29faa3d34900ae5b50a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:10:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc421f32551c8df3174ca8fd4fcb9d84fa5ba5ce5dba29faa3d34900ae5b50a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:10:33 compute-0 podman[301992]: 2025-09-30 18:10:33.687738577 +0000 UTC m=+0.030228278 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:10:33 compute-0 podman[301992]: 2025-09-30 18:10:33.795119872 +0000 UTC m=+0.137609613 container init d82c7c4dbb2499520010f29f1d6b1a381d017b3140f10c9c0cf118ec11f23ffa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_torvalds, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Sep 30 18:10:33 compute-0 podman[301992]: 2025-09-30 18:10:33.808334496 +0000 UTC m=+0.150824227 container start d82c7c4dbb2499520010f29f1d6b1a381d017b3140f10c9c0cf118ec11f23ffa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_torvalds, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:10:33 compute-0 podman[301992]: 2025-09-30 18:10:33.812160536 +0000 UTC m=+0.154650237 container attach d82c7c4dbb2499520010f29f1d6b1a381d017b3140f10c9c0cf118ec11f23ffa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_torvalds, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:10:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v952: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 74 op/s
Sep 30 18:10:34 compute-0 cranky_torvalds[302009]: {
Sep 30 18:10:34 compute-0 cranky_torvalds[302009]:     "0": [
Sep 30 18:10:34 compute-0 cranky_torvalds[302009]:         {
Sep 30 18:10:34 compute-0 cranky_torvalds[302009]:             "devices": [
Sep 30 18:10:34 compute-0 cranky_torvalds[302009]:                 "/dev/loop3"
Sep 30 18:10:34 compute-0 cranky_torvalds[302009]:             ],
Sep 30 18:10:34 compute-0 cranky_torvalds[302009]:             "lv_name": "ceph_lv0",
Sep 30 18:10:34 compute-0 cranky_torvalds[302009]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:10:34 compute-0 cranky_torvalds[302009]:             "lv_size": "21470642176",
Sep 30 18:10:34 compute-0 cranky_torvalds[302009]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:10:34 compute-0 cranky_torvalds[302009]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:10:34 compute-0 cranky_torvalds[302009]:             "name": "ceph_lv0",
Sep 30 18:10:34 compute-0 cranky_torvalds[302009]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:10:34 compute-0 cranky_torvalds[302009]:             "tags": {
Sep 30 18:10:34 compute-0 cranky_torvalds[302009]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:10:34 compute-0 cranky_torvalds[302009]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:10:34 compute-0 cranky_torvalds[302009]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:10:34 compute-0 cranky_torvalds[302009]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:10:34 compute-0 cranky_torvalds[302009]:                 "ceph.cluster_name": "ceph",
Sep 30 18:10:34 compute-0 cranky_torvalds[302009]:                 "ceph.crush_device_class": "",
Sep 30 18:10:34 compute-0 cranky_torvalds[302009]:                 "ceph.encrypted": "0",
Sep 30 18:10:34 compute-0 cranky_torvalds[302009]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:10:34 compute-0 cranky_torvalds[302009]:                 "ceph.osd_id": "0",
Sep 30 18:10:34 compute-0 cranky_torvalds[302009]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:10:34 compute-0 cranky_torvalds[302009]:                 "ceph.type": "block",
Sep 30 18:10:34 compute-0 cranky_torvalds[302009]:                 "ceph.vdo": "0",
Sep 30 18:10:34 compute-0 cranky_torvalds[302009]:                 "ceph.with_tpm": "0"
Sep 30 18:10:34 compute-0 cranky_torvalds[302009]:             },
Sep 30 18:10:34 compute-0 cranky_torvalds[302009]:             "type": "block",
Sep 30 18:10:34 compute-0 cranky_torvalds[302009]:             "vg_name": "ceph_vg0"
Sep 30 18:10:34 compute-0 cranky_torvalds[302009]:         }
Sep 30 18:10:34 compute-0 cranky_torvalds[302009]:     ]
Sep 30 18:10:34 compute-0 cranky_torvalds[302009]: }
Sep 30 18:10:34 compute-0 systemd[1]: libpod-d82c7c4dbb2499520010f29f1d6b1a381d017b3140f10c9c0cf118ec11f23ffa.scope: Deactivated successfully.
Sep 30 18:10:34 compute-0 podman[301992]: 2025-09-30 18:10:34.192603099 +0000 UTC m=+0.535092870 container died d82c7c4dbb2499520010f29f1d6b1a381d017b3140f10c9c0cf118ec11f23ffa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_torvalds, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:10:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc421f32551c8df3174ca8fd4fcb9d84fa5ba5ce5dba29faa3d34900ae5b50a5-merged.mount: Deactivated successfully.
Sep 30 18:10:34 compute-0 podman[301992]: 2025-09-30 18:10:34.229826928 +0000 UTC m=+0.572316619 container remove d82c7c4dbb2499520010f29f1d6b1a381d017b3140f10c9c0cf118ec11f23ffa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:10:34 compute-0 systemd[1]: libpod-conmon-d82c7c4dbb2499520010f29f1d6b1a381d017b3140f10c9c0cf118ec11f23ffa.scope: Deactivated successfully.
Sep 30 18:10:34 compute-0 sudo[301886]: pam_unix(sudo:session): session closed for user root
Sep 30 18:10:34 compute-0 sudo[302031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:10:34 compute-0 sudo[302031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:10:34 compute-0 sudo[302031]: pam_unix(sudo:session): session closed for user root
Sep 30 18:10:34 compute-0 sudo[302056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:10:34 compute-0 sudo[302056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:10:34 compute-0 nova_compute[265391]: 2025-09-30 18:10:34.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:10:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:10:34.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:10:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:34 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224002ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:34 compute-0 podman[302123]: 2025-09-30 18:10:34.897175479 +0000 UTC m=+0.054516030 container create 5c72c204a4ae5221f066a5d7ccfbaf3347eb06dbbe65ab44ca8b9748f0b6ca76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_mendeleev, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:10:34 compute-0 systemd[1]: Started libpod-conmon-5c72c204a4ae5221f066a5d7ccfbaf3347eb06dbbe65ab44ca8b9748f0b6ca76.scope.
Sep 30 18:10:34 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:10:34 compute-0 podman[302123]: 2025-09-30 18:10:34.876754697 +0000 UTC m=+0.034095278 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:10:34 compute-0 podman[302123]: 2025-09-30 18:10:34.979192033 +0000 UTC m=+0.136532664 container init 5c72c204a4ae5221f066a5d7ccfbaf3347eb06dbbe65ab44ca8b9748f0b6ca76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_mendeleev, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:10:34 compute-0 podman[302123]: 2025-09-30 18:10:34.992653024 +0000 UTC m=+0.149993575 container start 5c72c204a4ae5221f066a5d7ccfbaf3347eb06dbbe65ab44ca8b9748f0b6ca76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_mendeleev, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:10:34 compute-0 podman[302123]: 2025-09-30 18:10:34.996116694 +0000 UTC m=+0.153457335 container attach 5c72c204a4ae5221f066a5d7ccfbaf3347eb06dbbe65ab44ca8b9748f0b6ca76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True)
Sep 30 18:10:35 compute-0 gracious_mendeleev[302139]: 167 167
Sep 30 18:10:35 compute-0 systemd[1]: libpod-5c72c204a4ae5221f066a5d7ccfbaf3347eb06dbbe65ab44ca8b9748f0b6ca76.scope: Deactivated successfully.
Sep 30 18:10:35 compute-0 podman[302123]: 2025-09-30 18:10:35.003215769 +0000 UTC m=+0.160556350 container died 5c72c204a4ae5221f066a5d7ccfbaf3347eb06dbbe65ab44ca8b9748f0b6ca76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_mendeleev, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Sep 30 18:10:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-b50b460b8a3f3dc2c426d13bb22a996a2203d6c5850faf3ee5814b14d59fbe5c-merged.mount: Deactivated successfully.
Sep 30 18:10:35 compute-0 podman[302123]: 2025-09-30 18:10:35.048319943 +0000 UTC m=+0.205660494 container remove 5c72c204a4ae5221f066a5d7ccfbaf3347eb06dbbe65ab44ca8b9748f0b6ca76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_mendeleev, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:10:35 compute-0 systemd[1]: libpod-conmon-5c72c204a4ae5221f066a5d7ccfbaf3347eb06dbbe65ab44ca8b9748f0b6ca76.scope: Deactivated successfully.
Sep 30 18:10:35 compute-0 ceph-mon[73755]: pgmap v952: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 74 op/s
Sep 30 18:10:35 compute-0 podman[302162]: 2025-09-30 18:10:35.284968653 +0000 UTC m=+0.059482229 container create 15b68d8e67d48eeb58fc2037027787efa9cb3bdf9f0b0c7cdff41a0557126a58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sammet, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:10:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:35 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:35 compute-0 systemd[1]: Started libpod-conmon-15b68d8e67d48eeb58fc2037027787efa9cb3bdf9f0b0c7cdff41a0557126a58.scope.
Sep 30 18:10:35 compute-0 podman[302162]: 2025-09-30 18:10:35.26372734 +0000 UTC m=+0.038240956 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:10:35 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:10:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3faccb582e4ae358a65ce60f47aaeea0eae48525c49bdd1ef0f479b3348be534/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:10:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3faccb582e4ae358a65ce60f47aaeea0eae48525c49bdd1ef0f479b3348be534/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:10:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3faccb582e4ae358a65ce60f47aaeea0eae48525c49bdd1ef0f479b3348be534/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:10:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3faccb582e4ae358a65ce60f47aaeea0eae48525c49bdd1ef0f479b3348be534/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:10:35 compute-0 podman[302162]: 2025-09-30 18:10:35.389791411 +0000 UTC m=+0.164304977 container init 15b68d8e67d48eeb58fc2037027787efa9cb3bdf9f0b0c7cdff41a0557126a58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Sep 30 18:10:35 compute-0 podman[302162]: 2025-09-30 18:10:35.406529097 +0000 UTC m=+0.181042653 container start 15b68d8e67d48eeb58fc2037027787efa9cb3bdf9f0b0c7cdff41a0557126a58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Sep 30 18:10:35 compute-0 podman[302162]: 2025-09-30 18:10:35.41049995 +0000 UTC m=+0.185013506 container attach 15b68d8e67d48eeb58fc2037027787efa9cb3bdf9f0b0c7cdff41a0557126a58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Sep 30 18:10:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:10:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:10:35.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:10:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:10:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v953: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Sep 30 18:10:36 compute-0 lvm[302255]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:10:36 compute-0 lvm[302255]: VG ceph_vg0 finished
Sep 30 18:10:36 compute-0 elastic_sammet[302179]: {}
Sep 30 18:10:36 compute-0 systemd[1]: libpod-15b68d8e67d48eeb58fc2037027787efa9cb3bdf9f0b0c7cdff41a0557126a58.scope: Deactivated successfully.
Sep 30 18:10:36 compute-0 systemd[1]: libpod-15b68d8e67d48eeb58fc2037027787efa9cb3bdf9f0b0c7cdff41a0557126a58.scope: Consumed 1.120s CPU time.
Sep 30 18:10:36 compute-0 podman[302259]: 2025-09-30 18:10:36.182029772 +0000 UTC m=+0.039150630 container died 15b68d8e67d48eeb58fc2037027787efa9cb3bdf9f0b0c7cdff41a0557126a58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sammet, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:10:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-3faccb582e4ae358a65ce60f47aaeea0eae48525c49bdd1ef0f479b3348be534-merged.mount: Deactivated successfully.
Sep 30 18:10:36 compute-0 podman[302259]: 2025-09-30 18:10:36.237203558 +0000 UTC m=+0.094324376 container remove 15b68d8e67d48eeb58fc2037027787efa9cb3bdf9f0b0c7cdff41a0557126a58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sammet, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:10:36 compute-0 systemd[1]: libpod-conmon-15b68d8e67d48eeb58fc2037027787efa9cb3bdf9f0b0c7cdff41a0557126a58.scope: Deactivated successfully.
Sep 30 18:10:36 compute-0 sudo[302056]: pam_unix(sudo:session): session closed for user root
Sep 30 18:10:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:10:36 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:10:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:10:36 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:10:36 compute-0 sudo[302274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:10:36 compute-0 sudo[302274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:10:36 compute-0 sudo[302274]: pam_unix(sudo:session): session closed for user root
Sep 30 18:10:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:10:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/434169990' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:10:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:10:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/434169990' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:10:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:10:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:10:36.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:10:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:36 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:36 compute-0 nova_compute[265391]: 2025-09-30 18:10:36.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:37 compute-0 ceph-mon[73755]: pgmap v953: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Sep 30 18:10:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:10:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:10:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/434169990' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:10:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/434169990' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:10:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:10:37.150Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:10:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:10:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:10:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:37 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c003330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:10:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:10:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:10:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:10:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:10:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:10:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:10:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:10:37.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:10:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v954: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Sep 30 18:10:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:10:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:10:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:10:38.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:10:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:38 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224002ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:10:38] "GET /metrics HTTP/1.1" 200 46645 "" "Prometheus/2.51.0"
Sep 30 18:10:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:10:38] "GET /metrics HTTP/1.1" 200 46645 "" "Prometheus/2.51.0"
Sep 30 18:10:39 compute-0 ceph-mon[73755]: pgmap v954: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Sep 30 18:10:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:39 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:10:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:10:39.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:10:39 compute-0 nova_compute[265391]: 2025-09-30 18:10:39.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v955: 353 pgs: 353 active+clean; 121 MiB data, 243 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 100 op/s
Sep 30 18:10:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:10:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:10:40.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:10:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:10:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:40 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:41 compute-0 ceph-mon[73755]: pgmap v955: 353 pgs: 353 active+clean; 121 MiB data, 243 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 100 op/s
Sep 30 18:10:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:41 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c003330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:10:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:10:41.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:10:41 compute-0 podman[302305]: 2025-09-30 18:10:41.540287237 +0000 UTC m=+0.071536663 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Sep 30 18:10:41 compute-0 podman[302304]: 2025-09-30 18:10:41.615527466 +0000 UTC m=+0.146458373 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Sep 30 18:10:41 compute-0 nova_compute[265391]: 2025-09-30 18:10:41.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v956: 353 pgs: 353 active+clean; 121 MiB data, 243 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 2.2 KiB/s wr, 95 op/s
Sep 30 18:10:42 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3101236716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:10:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:10:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:10:42.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:10:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:42 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224002ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:43 compute-0 ceph-mon[73755]: pgmap v956: 353 pgs: 353 active+clean; 121 MiB data, 243 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 2.2 KiB/s wr, 95 op/s
Sep 30 18:10:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:43 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003f80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:10:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:10:43.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:10:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:10:43.666Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:10:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v957: 353 pgs: 353 active+clean; 121 MiB data, 236 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 2.2 KiB/s wr, 95 op/s
Sep 30 18:10:44 compute-0 nova_compute[265391]: 2025-09-30 18:10:44.496 2 DEBUG oslo_concurrency.lockutils [None req-51f2d8d4-5a67-4518-ab30-2b334896b95c d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Acquiring lock "c31a2d74-08f2-40bb-8b81-fd5301d9a627" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:10:44 compute-0 nova_compute[265391]: 2025-09-30 18:10:44.496 2 DEBUG oslo_concurrency.lockutils [None req-51f2d8d4-5a67-4518-ab30-2b334896b95c d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Lock "c31a2d74-08f2-40bb-8b81-fd5301d9a627" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:10:44 compute-0 nova_compute[265391]: 2025-09-30 18:10:44.497 2 DEBUG oslo_concurrency.lockutils [None req-51f2d8d4-5a67-4518-ab30-2b334896b95c d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Acquiring lock "c31a2d74-08f2-40bb-8b81-fd5301d9a627-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:10:44 compute-0 nova_compute[265391]: 2025-09-30 18:10:44.497 2 DEBUG oslo_concurrency.lockutils [None req-51f2d8d4-5a67-4518-ab30-2b334896b95c d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Lock "c31a2d74-08f2-40bb-8b81-fd5301d9a627-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:10:44 compute-0 nova_compute[265391]: 2025-09-30 18:10:44.498 2 DEBUG oslo_concurrency.lockutils [None req-51f2d8d4-5a67-4518-ab30-2b334896b95c d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Lock "c31a2d74-08f2-40bb-8b81-fd5301d9a627-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:10:44 compute-0 nova_compute[265391]: 2025-09-30 18:10:44.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:44 compute-0 nova_compute[265391]: 2025-09-30 18:10:44.516 2 INFO nova.compute.manager [None req-51f2d8d4-5a67-4518-ab30-2b334896b95c d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Terminating instance
Sep 30 18:10:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:10:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:10:44.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:10:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:44 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:45 compute-0 nova_compute[265391]: 2025-09-30 18:10:45.036 2 DEBUG nova.compute.manager [None req-51f2d8d4-5a67-4518-ab30-2b334896b95c d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Sep 30 18:10:45 compute-0 kernel: tapa7cb8fe9-f1 (unregistering): left promiscuous mode
Sep 30 18:10:45 compute-0 NetworkManager[45059]: <info>  [1759255845.1095] device (tapa7cb8fe9-f1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 18:10:45 compute-0 nova_compute[265391]: 2025-09-30 18:10:45.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:45 compute-0 ovn_controller[156242]: 2025-09-30T18:10:45Z|00054|binding|INFO|Releasing lport a7cb8fe9-f156-4a0f-aa37-f348ade37f45 from this chassis (sb_readonly=0)
Sep 30 18:10:45 compute-0 ovn_controller[156242]: 2025-09-30T18:10:45Z|00055|binding|INFO|Setting lport a7cb8fe9-f156-4a0f-aa37-f348ade37f45 down in Southbound
Sep 30 18:10:45 compute-0 ovn_controller[156242]: 2025-09-30T18:10:45Z|00056|binding|INFO|Removing iface tapa7cb8fe9-f1 ovn-installed in OVS
Sep 30 18:10:45 compute-0 nova_compute[265391]: 2025-09-30 18:10:45.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:45.130 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:c9:62 10.100.0.9'], port_security=['fa:16:3e:d7:c9:62 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'c31a2d74-08f2-40bb-8b81-fd5301d9a627', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4b8f21c3-21c3-482f-88c7-197b5bceb2ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0783e60216244dbda21696efa03e2275', 'neutron:revision_number': '5', 'neutron:security_group_ids': '44055cfe-7091-4bf5-849f-a5ec90884056', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3fffd780-66a8-4f09-9e3d-aefd98ad1eb6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=a7cb8fe9-f156-4a0f-aa37-f348ade37f45) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:10:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:45.132 166158 INFO neutron.agent.ovn.metadata.agent [-] Port a7cb8fe9-f156-4a0f-aa37-f348ade37f45 in datapath 4b8f21c3-21c3-482f-88c7-197b5bceb2ea unbound from our chassis
Sep 30 18:10:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:45.135 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4b8f21c3-21c3-482f-88c7-197b5bceb2ea, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:10:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:45.141 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[9b3fef4a-b3b1-46b3-9ff5-01fc7972d626]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:10:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:45.142 166158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4b8f21c3-21c3-482f-88c7-197b5bceb2ea namespace which is not needed anymore
Sep 30 18:10:45 compute-0 nova_compute[265391]: 2025-09-30 18:10:45.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:45 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Sep 30 18:10:45 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 14.910s CPU time.
Sep 30 18:10:45 compute-0 systemd-machined[219917]: Machine qemu-2-instance-00000002 terminated.
Sep 30 18:10:45 compute-0 ceph-mon[73755]: pgmap v957: 353 pgs: 353 active+clean; 121 MiB data, 236 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 2.2 KiB/s wr, 95 op/s
Sep 30 18:10:45 compute-0 nova_compute[265391]: 2025-09-30 18:10:45.293 2 INFO nova.virt.libvirt.driver [-] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Instance destroyed successfully.
Sep 30 18:10:45 compute-0 nova_compute[265391]: 2025-09-30 18:10:45.295 2 DEBUG nova.objects.instance [None req-51f2d8d4-5a67-4518-ab30-2b334896b95c d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Lazy-loading 'resources' on Instance uuid c31a2d74-08f2-40bb-8b81-fd5301d9a627 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:10:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:45 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c003330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:45 compute-0 neutron-haproxy-ovnmeta-4b8f21c3-21c3-482f-88c7-197b5bceb2ea[301277]: [NOTICE]   (301281) : haproxy version is 3.0.5-8e879a5
Sep 30 18:10:45 compute-0 neutron-haproxy-ovnmeta-4b8f21c3-21c3-482f-88c7-197b5bceb2ea[301277]: [NOTICE]   (301281) : path to executable is /usr/sbin/haproxy
Sep 30 18:10:45 compute-0 neutron-haproxy-ovnmeta-4b8f21c3-21c3-482f-88c7-197b5bceb2ea[301277]: [WARNING]  (301281) : Exiting Master process...
Sep 30 18:10:45 compute-0 podman[302388]: 2025-09-30 18:10:45.323603226 +0000 UTC m=+0.043875543 container kill 91b5cba595b3f0c685d32549f0b6ffea1577b6a5484cdea816862ab2af734089 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-4b8f21c3-21c3-482f-88c7-197b5bceb2ea, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.build-date=20250930)
Sep 30 18:10:45 compute-0 neutron-haproxy-ovnmeta-4b8f21c3-21c3-482f-88c7-197b5bceb2ea[301277]: [ALERT]    (301281) : Current worker (301283) exited with code 143 (Terminated)
Sep 30 18:10:45 compute-0 neutron-haproxy-ovnmeta-4b8f21c3-21c3-482f-88c7-197b5bceb2ea[301277]: [WARNING]  (301281) : All workers exited. Exiting... (0)
Sep 30 18:10:45 compute-0 systemd[1]: libpod-91b5cba595b3f0c685d32549f0b6ffea1577b6a5484cdea816862ab2af734089.scope: Deactivated successfully.
Sep 30 18:10:45 compute-0 conmon[301277]: conmon 91b5cba595b3f0c685d3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-91b5cba595b3f0c685d32549f0b6ffea1577b6a5484cdea816862ab2af734089.scope/container/memory.events
Sep 30 18:10:45 compute-0 podman[302407]: 2025-09-30 18:10:45.369591114 +0000 UTC m=+0.024024477 container died 91b5cba595b3f0c685d32549f0b6ffea1577b6a5484cdea816862ab2af734089 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-4b8f21c3-21c3-482f-88c7-197b5bceb2ea, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest)
Sep 30 18:10:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-91b5cba595b3f0c685d32549f0b6ffea1577b6a5484cdea816862ab2af734089-userdata-shm.mount: Deactivated successfully.
Sep 30 18:10:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ab45167e0507266ef071800330669726251ef521860c56e210ed1baaffb101b-merged.mount: Deactivated successfully.
Sep 30 18:10:45 compute-0 podman[302407]: 2025-09-30 18:10:45.415895549 +0000 UTC m=+0.070328872 container cleanup 91b5cba595b3f0c685d32549f0b6ffea1577b6a5484cdea816862ab2af734089 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-4b8f21c3-21c3-482f-88c7-197b5bceb2ea, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Sep 30 18:10:45 compute-0 systemd[1]: libpod-conmon-91b5cba595b3f0c685d32549f0b6ffea1577b6a5484cdea816862ab2af734089.scope: Deactivated successfully.
Sep 30 18:10:45 compute-0 podman[302415]: 2025-09-30 18:10:45.436770872 +0000 UTC m=+0.071179193 container remove 91b5cba595b3f0c685d32549f0b6ffea1577b6a5484cdea816862ab2af734089 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-4b8f21c3-21c3-482f-88c7-197b5bceb2ea, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:10:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:45.443 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[c3ab8bde-9d59-4227-a681-c305092072c1]: (4, ("Tue Sep 30 06:10:45 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-4b8f21c3-21c3-482f-88c7-197b5bceb2ea (91b5cba595b3f0c685d32549f0b6ffea1577b6a5484cdea816862ab2af734089)\n91b5cba595b3f0c685d32549f0b6ffea1577b6a5484cdea816862ab2af734089\nTue Sep 30 06:10:45 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4b8f21c3-21c3-482f-88c7-197b5bceb2ea (91b5cba595b3f0c685d32549f0b6ffea1577b6a5484cdea816862ab2af734089)\n91b5cba595b3f0c685d32549f0b6ffea1577b6a5484cdea816862ab2af734089\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:10:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:45.445 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[2903cff0-10ad-43bc-a5f7-f89af7652a9c]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:10:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:45.445 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4b8f21c3-21c3-482f-88c7-197b5bceb2ea.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4b8f21c3-21c3-482f-88c7-197b5bceb2ea.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:10:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:45.446 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[2239506c-0188-478d-ae7a-aa8fbc83151d]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:10:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:45.447 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b8f21c3-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:10:45 compute-0 nova_compute[265391]: 2025-09-30 18:10:45.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:45 compute-0 kernel: tap4b8f21c3-20: left promiscuous mode
Sep 30 18:10:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:10:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:10:45.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:10:45 compute-0 nova_compute[265391]: 2025-09-30 18:10:45.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:45.473 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[d29877b5-2893-4a1c-b513-2efe2e1659e1]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:10:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:45.499 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[7bac440e-610a-4bbf-ba4a-806d71d19cf6]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:10:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:45.500 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[ce5627af-0052-4bbc-aa8f-8c1a246dac18]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:10:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:45.523 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[cf29b819-329c-4fae-a377-7f2c6b8db467]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439881, 'reachable_time': 27201, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302441, 'error': None, 'target': 'ovnmeta-4b8f21c3-21c3-482f-88c7-197b5bceb2ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:10:45 compute-0 systemd[1]: run-netns-ovnmeta\x2d4b8f21c3\x2d21c3\x2d482f\x2d88c7\x2d197b5bceb2ea.mount: Deactivated successfully.
Sep 30 18:10:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:45.530 166383 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4b8f21c3-21c3-482f-88c7-197b5bceb2ea deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Sep 30 18:10:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:45.530 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[0225e8dd-9e59-42c6-a9d9-13947272b17f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:10:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:10:45 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Sep 30 18:10:45 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:10:45.594974) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 18:10:45 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Sep 30 18:10:45 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255845595028, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 706, "num_deletes": 255, "total_data_size": 905869, "memory_usage": 919000, "flush_reason": "Manual Compaction"}
Sep 30 18:10:45 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Sep 30 18:10:45 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255845603892, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 894046, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26364, "largest_seqno": 27069, "table_properties": {"data_size": 890422, "index_size": 1404, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8185, "raw_average_key_size": 18, "raw_value_size": 882990, "raw_average_value_size": 1997, "num_data_blocks": 63, "num_entries": 442, "num_filter_entries": 442, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759255797, "oldest_key_time": 1759255797, "file_creation_time": 1759255845, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:10:45 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 8981 microseconds, and 5589 cpu microseconds.
Sep 30 18:10:45 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:10:45 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:10:45.603954) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 894046 bytes OK
Sep 30 18:10:45 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:10:45.603982) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Sep 30 18:10:45 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:10:45.605870) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Sep 30 18:10:45 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:10:45.605888) EVENT_LOG_v1 {"time_micros": 1759255845605882, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 18:10:45 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:10:45.605910) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 18:10:45 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 902196, prev total WAL file size 902196, number of live WAL files 2.
Sep 30 18:10:45 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:10:45 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:10:45.606628) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353032' seq:72057594037927935, type:22 .. '6C6F676D00373533' seq:0, type:0; will stop at (end)
Sep 30 18:10:45 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 18:10:45 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(873KB)], [59(12MB)]
Sep 30 18:10:45 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255845606700, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 13941336, "oldest_snapshot_seqno": -1}
Sep 30 18:10:45 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5506 keys, 13824106 bytes, temperature: kUnknown
Sep 30 18:10:45 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255845668050, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 13824106, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13784956, "index_size": 24256, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13829, "raw_key_size": 140987, "raw_average_key_size": 25, "raw_value_size": 13682896, "raw_average_value_size": 2485, "num_data_blocks": 997, "num_entries": 5506, "num_filter_entries": 5506, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759255845, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:10:45 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:10:45 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:10:45.668513) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 13824106 bytes
Sep 30 18:10:45 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:10:45.670101) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 226.8 rd, 224.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 12.4 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(31.1) write-amplify(15.5) OK, records in: 6030, records dropped: 524 output_compression: NoCompression
Sep 30 18:10:45 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:10:45.670135) EVENT_LOG_v1 {"time_micros": 1759255845670119, "job": 32, "event": "compaction_finished", "compaction_time_micros": 61476, "compaction_time_cpu_micros": 36182, "output_level": 6, "num_output_files": 1, "total_output_size": 13824106, "num_input_records": 6030, "num_output_records": 5506, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 18:10:45 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:10:45 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255845670631, "job": 32, "event": "table_file_deletion", "file_number": 61}
Sep 30 18:10:45 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:10:45 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255845675530, "job": 32, "event": "table_file_deletion", "file_number": 59}
Sep 30 18:10:45 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:10:45.606490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:10:45 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:10:45.675614) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:10:45 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:10:45.675621) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:10:45 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:10:45.675624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:10:45 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:10:45.675627) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:10:45 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:10:45.675630) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:10:45 compute-0 nova_compute[265391]: 2025-09-30 18:10:45.803 2 DEBUG nova.virt.libvirt.vif [None req-51f2d8d4-5a67-4518-ab30-2b334896b95c d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:09:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestDataModel-server-1794545882',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testdatamodel-server-1794545882',id=2,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:10:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0783e60216244dbda21696efa03e2275',ramdisk_id='',reservation_id='r-wk0wr0ih',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestDataModel-213655642',owner_user_name='tempest-TestDataModel-213655642-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:10:03Z,user_data=None,user_id='d8e62d62fa4d4959828354f71c48cd9d',uuid=c31a2d74-08f2-40bb-8b81-fd5301d9a627,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a7cb8fe9-f156-4a0f-aa37-f348ade37f45", "address": "fa:16:3e:d7:c9:62", "network": {"id": "4b8f21c3-21c3-482f-88c7-197b5bceb2ea", "bridge": "br-int", "label": "tempest-TestDataModel-239005640-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5947b7c96cd42be8502dbab4c825083", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7cb8fe9-f1", "ovs_interfaceid": "a7cb8fe9-f156-4a0f-aa37-f348ade37f45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Sep 30 18:10:45 compute-0 nova_compute[265391]: 2025-09-30 18:10:45.804 2 DEBUG nova.network.os_vif_util [None req-51f2d8d4-5a67-4518-ab30-2b334896b95c d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Converting VIF {"id": "a7cb8fe9-f156-4a0f-aa37-f348ade37f45", "address": "fa:16:3e:d7:c9:62", "network": {"id": "4b8f21c3-21c3-482f-88c7-197b5bceb2ea", "bridge": "br-int", "label": "tempest-TestDataModel-239005640-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5947b7c96cd42be8502dbab4c825083", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7cb8fe9-f1", "ovs_interfaceid": "a7cb8fe9-f156-4a0f-aa37-f348ade37f45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:10:45 compute-0 nova_compute[265391]: 2025-09-30 18:10:45.805 2 DEBUG nova.network.os_vif_util [None req-51f2d8d4-5a67-4518-ab30-2b334896b95c d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:c9:62,bridge_name='br-int',has_traffic_filtering=True,id=a7cb8fe9-f156-4a0f-aa37-f348ade37f45,network=Network(4b8f21c3-21c3-482f-88c7-197b5bceb2ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7cb8fe9-f1') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:10:45 compute-0 nova_compute[265391]: 2025-09-30 18:10:45.806 2 DEBUG os_vif [None req-51f2d8d4-5a67-4518-ab30-2b334896b95c d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:c9:62,bridge_name='br-int',has_traffic_filtering=True,id=a7cb8fe9-f156-4a0f-aa37-f348ade37f45,network=Network(4b8f21c3-21c3-482f-88c7-197b5bceb2ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7cb8fe9-f1') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Sep 30 18:10:45 compute-0 nova_compute[265391]: 2025-09-30 18:10:45.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:45 compute-0 nova_compute[265391]: 2025-09-30 18:10:45.810 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7cb8fe9-f1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:10:45 compute-0 nova_compute[265391]: 2025-09-30 18:10:45.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:45 compute-0 nova_compute[265391]: 2025-09-30 18:10:45.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:45 compute-0 nova_compute[265391]: 2025-09-30 18:10:45.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:45 compute-0 nova_compute[265391]: 2025-09-30 18:10:45.816 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=057de372-0e14-4b48-84b8-63d0f9fe4e17) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:10:45 compute-0 nova_compute[265391]: 2025-09-30 18:10:45.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:45 compute-0 nova_compute[265391]: 2025-09-30 18:10:45.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:45 compute-0 nova_compute[265391]: 2025-09-30 18:10:45.821 2 INFO os_vif [None req-51f2d8d4-5a67-4518-ab30-2b334896b95c d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:c9:62,bridge_name='br-int',has_traffic_filtering=True,id=a7cb8fe9-f156-4a0f-aa37-f348ade37f45,network=Network(4b8f21c3-21c3-482f-88c7-197b5bceb2ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7cb8fe9-f1')
Sep 30 18:10:45 compute-0 nova_compute[265391]: 2025-09-30 18:10:45.857 2 DEBUG nova.compute.manager [req-4867338d-67a0-4363-bc63-a635dcdee041 req-6f1f4843-e28c-4eda-9f91-022b50a35c7d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Received event network-vif-unplugged-a7cb8fe9-f156-4a0f-aa37-f348ade37f45 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:10:45 compute-0 nova_compute[265391]: 2025-09-30 18:10:45.858 2 DEBUG oslo_concurrency.lockutils [req-4867338d-67a0-4363-bc63-a635dcdee041 req-6f1f4843-e28c-4eda-9f91-022b50a35c7d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "c31a2d74-08f2-40bb-8b81-fd5301d9a627-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:10:45 compute-0 nova_compute[265391]: 2025-09-30 18:10:45.859 2 DEBUG oslo_concurrency.lockutils [req-4867338d-67a0-4363-bc63-a635dcdee041 req-6f1f4843-e28c-4eda-9f91-022b50a35c7d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "c31a2d74-08f2-40bb-8b81-fd5301d9a627-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:10:45 compute-0 nova_compute[265391]: 2025-09-30 18:10:45.859 2 DEBUG oslo_concurrency.lockutils [req-4867338d-67a0-4363-bc63-a635dcdee041 req-6f1f4843-e28c-4eda-9f91-022b50a35c7d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "c31a2d74-08f2-40bb-8b81-fd5301d9a627-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:10:45 compute-0 nova_compute[265391]: 2025-09-30 18:10:45.860 2 DEBUG nova.compute.manager [req-4867338d-67a0-4363-bc63-a635dcdee041 req-6f1f4843-e28c-4eda-9f91-022b50a35c7d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] No waiting events found dispatching network-vif-unplugged-a7cb8fe9-f156-4a0f-aa37-f348ade37f45 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:10:45 compute-0 nova_compute[265391]: 2025-09-30 18:10:45.860 2 DEBUG nova.compute.manager [req-4867338d-67a0-4363-bc63-a635dcdee041 req-6f1f4843-e28c-4eda-9f91-022b50a35c7d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Received event network-vif-unplugged-a7cb8fe9-f156-4a0f-aa37-f348ade37f45 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:10:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v958: 353 pgs: 353 active+clean; 121 MiB data, 236 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Sep 30 18:10:46 compute-0 nova_compute[265391]: 2025-09-30 18:10:46.308 2 INFO nova.virt.libvirt.driver [None req-51f2d8d4-5a67-4518-ab30-2b334896b95c d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Deleting instance files /var/lib/nova/instances/c31a2d74-08f2-40bb-8b81-fd5301d9a627_del
Sep 30 18:10:46 compute-0 nova_compute[265391]: 2025-09-30 18:10:46.309 2 INFO nova.virt.libvirt.driver [None req-51f2d8d4-5a67-4518-ab30-2b334896b95c d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Deletion of /var/lib/nova/instances/c31a2d74-08f2-40bb-8b81-fd5301d9a627_del complete
Sep 30 18:10:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:10:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:10:46.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:10:46 compute-0 ceph-mon[73755]: pgmap v958: 353 pgs: 353 active+clean; 121 MiB data, 236 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Sep 30 18:10:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:46 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224003910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:46 compute-0 nova_compute[265391]: 2025-09-30 18:10:46.825 2 INFO nova.compute.manager [None req-51f2d8d4-5a67-4518-ab30-2b334896b95c d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Took 1.79 seconds to destroy the instance on the hypervisor.
Sep 30 18:10:46 compute-0 nova_compute[265391]: 2025-09-30 18:10:46.825 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-51f2d8d4-5a67-4518-ab30-2b334896b95c d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Sep 30 18:10:46 compute-0 nova_compute[265391]: 2025-09-30 18:10:46.826 2 DEBUG nova.compute.manager [-] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Sep 30 18:10:46 compute-0 nova_compute[265391]: 2025-09-30 18:10:46.827 2 DEBUG nova.network.neutron [-] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Sep 30 18:10:46 compute-0 nova_compute[265391]: 2025-09-30 18:10:46.827 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:10:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:10:47.150Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:10:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:10:47.151Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:10:47 compute-0 nova_compute[265391]: 2025-09-30 18:10:47.242 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:10:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:47 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:10:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:10:47.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:10:47 compute-0 podman[302463]: 2025-09-30 18:10:47.524480504 +0000 UTC m=+0.058935475 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Sep 30 18:10:47 compute-0 nova_compute[265391]: 2025-09-30 18:10:47.939 2 DEBUG nova.compute.manager [req-747cbaa1-df3b-4503-a715-2d28e34f8482 req-27e1fa51-80a9-4f19-9743-749d22ccc21b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Received event network-vif-unplugged-a7cb8fe9-f156-4a0f-aa37-f348ade37f45 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:10:47 compute-0 nova_compute[265391]: 2025-09-30 18:10:47.939 2 DEBUG oslo_concurrency.lockutils [req-747cbaa1-df3b-4503-a715-2d28e34f8482 req-27e1fa51-80a9-4f19-9743-749d22ccc21b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "c31a2d74-08f2-40bb-8b81-fd5301d9a627-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:10:47 compute-0 nova_compute[265391]: 2025-09-30 18:10:47.939 2 DEBUG oslo_concurrency.lockutils [req-747cbaa1-df3b-4503-a715-2d28e34f8482 req-27e1fa51-80a9-4f19-9743-749d22ccc21b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "c31a2d74-08f2-40bb-8b81-fd5301d9a627-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:10:47 compute-0 nova_compute[265391]: 2025-09-30 18:10:47.939 2 DEBUG oslo_concurrency.lockutils [req-747cbaa1-df3b-4503-a715-2d28e34f8482 req-27e1fa51-80a9-4f19-9743-749d22ccc21b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "c31a2d74-08f2-40bb-8b81-fd5301d9a627-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:10:47 compute-0 nova_compute[265391]: 2025-09-30 18:10:47.939 2 DEBUG nova.compute.manager [req-747cbaa1-df3b-4503-a715-2d28e34f8482 req-27e1fa51-80a9-4f19-9743-749d22ccc21b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] No waiting events found dispatching network-vif-unplugged-a7cb8fe9-f156-4a0f-aa37-f348ade37f45 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:10:47 compute-0 nova_compute[265391]: 2025-09-30 18:10:47.940 2 DEBUG nova.compute.manager [req-747cbaa1-df3b-4503-a715-2d28e34f8482 req-27e1fa51-80a9-4f19-9743-749d22ccc21b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Received event network-vif-unplugged-a7cb8fe9-f156-4a0f-aa37-f348ade37f45 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:10:47 compute-0 nova_compute[265391]: 2025-09-30 18:10:47.940 2 DEBUG nova.compute.manager [req-747cbaa1-df3b-4503-a715-2d28e34f8482 req-27e1fa51-80a9-4f19-9743-749d22ccc21b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Received event network-vif-deleted-a7cb8fe9-f156-4a0f-aa37-f348ade37f45 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:10:47 compute-0 nova_compute[265391]: 2025-09-30 18:10:47.940 2 INFO nova.compute.manager [req-747cbaa1-df3b-4503-a715-2d28e34f8482 req-27e1fa51-80a9-4f19-9743-749d22ccc21b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Neutron deleted interface a7cb8fe9-f156-4a0f-aa37-f348ade37f45; detaching it from the instance and deleting it from the info cache
Sep 30 18:10:47 compute-0 nova_compute[265391]: 2025-09-30 18:10:47.940 2 DEBUG nova.network.neutron [req-747cbaa1-df3b-4503-a715-2d28e34f8482 req-27e1fa51-80a9-4f19-9743-749d22ccc21b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:10:47 compute-0 nova_compute[265391]: 2025-09-30 18:10:47.975 2 DEBUG nova.network.neutron [-] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:10:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v959: 353 pgs: 353 active+clean; 121 MiB data, 236 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Sep 30 18:10:48 compute-0 nova_compute[265391]: 2025-09-30 18:10:48.447 2 DEBUG nova.compute.manager [req-747cbaa1-df3b-4503-a715-2d28e34f8482 req-27e1fa51-80a9-4f19-9743-749d22ccc21b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Detach interface failed, port_id=a7cb8fe9-f156-4a0f-aa37-f348ade37f45, reason: Instance c31a2d74-08f2-40bb-8b81-fd5301d9a627 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11646
Sep 30 18:10:48 compute-0 nova_compute[265391]: 2025-09-30 18:10:48.482 2 INFO nova.compute.manager [-] [instance: c31a2d74-08f2-40bb-8b81-fd5301d9a627] Took 1.66 seconds to deallocate network for instance.
Sep 30 18:10:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.003000078s ======
Sep 30 18:10:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:10:48.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000078s
Sep 30 18:10:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:48 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:10:48] "GET /metrics HTTP/1.1" 200 46645 "" "Prometheus/2.51.0"
Sep 30 18:10:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:10:48] "GET /metrics HTTP/1.1" 200 46645 "" "Prometheus/2.51.0"
Sep 30 18:10:48 compute-0 sudo[302485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:10:48 compute-0 sudo[302485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:10:48 compute-0 sudo[302485]: pam_unix(sudo:session): session closed for user root
Sep 30 18:10:49 compute-0 nova_compute[265391]: 2025-09-30 18:10:49.003 2 DEBUG oslo_concurrency.lockutils [None req-51f2d8d4-5a67-4518-ab30-2b334896b95c d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:10:49 compute-0 nova_compute[265391]: 2025-09-30 18:10:49.003 2 DEBUG oslo_concurrency.lockutils [None req-51f2d8d4-5a67-4518-ab30-2b334896b95c d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:10:49 compute-0 nova_compute[265391]: 2025-09-30 18:10:49.045 2 DEBUG oslo_concurrency.processutils [None req-51f2d8d4-5a67-4518-ab30-2b334896b95c d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:10:49 compute-0 ceph-mon[73755]: pgmap v959: 353 pgs: 353 active+clean; 121 MiB data, 236 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Sep 30 18:10:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:49 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c003330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:10:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:10:49.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:10:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:10:49 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2722150778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:10:49 compute-0 nova_compute[265391]: 2025-09-30 18:10:49.491 2 DEBUG oslo_concurrency.processutils [None req-51f2d8d4-5a67-4518-ab30-2b334896b95c d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:10:49 compute-0 nova_compute[265391]: 2025-09-30 18:10:49.497 2 DEBUG nova.compute.provider_tree [None req-51f2d8d4-5a67-4518-ab30-2b334896b95c d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:10:49 compute-0 nova_compute[265391]: 2025-09-30 18:10:49.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v960: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 4.7 KiB/s wr, 54 op/s
Sep 30 18:10:50 compute-0 nova_compute[265391]: 2025-09-30 18:10:50.005 2 DEBUG nova.scheduler.client.report [None req-51f2d8d4-5a67-4518-ab30-2b334896b95c d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:10:50 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2722150778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:10:50 compute-0 nova_compute[265391]: 2025-09-30 18:10:50.517 2 DEBUG oslo_concurrency.lockutils [None req-51f2d8d4-5a67-4518-ab30-2b334896b95c d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.513s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:10:50 compute-0 nova_compute[265391]: 2025-09-30 18:10:50.543 2 INFO nova.scheduler.client.report [None req-51f2d8d4-5a67-4518-ab30-2b334896b95c d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Deleted allocations for instance c31a2d74-08f2-40bb-8b81-fd5301d9a627
Sep 30 18:10:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:10:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:10:50.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:10:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:10:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:50 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224003910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:50 compute-0 nova_compute[265391]: 2025-09-30 18:10:50.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:51 compute-0 ceph-mon[73755]: pgmap v960: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 4.7 KiB/s wr, 54 op/s
Sep 30 18:10:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:51 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:10:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:10:51.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:10:51 compute-0 nova_compute[265391]: 2025-09-30 18:10:51.575 2 DEBUG oslo_concurrency.lockutils [None req-51f2d8d4-5a67-4518-ab30-2b334896b95c d8e62d62fa4d4959828354f71c48cd9d 0783e60216244dbda21696efa03e2275 - - default default] Lock "c31a2d74-08f2-40bb-8b81-fd5301d9a627" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.078s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:10:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v961: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Sep 30 18:10:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:10:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:10:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:10:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:10:52.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:10:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:52 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:53 compute-0 ceph-mon[73755]: pgmap v961: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Sep 30 18:10:53 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:10:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:53 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c003330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:10:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:10:53.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:10:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:10:53.666Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:10:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v962: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Sep 30 18:10:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:54.276 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:10:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:54.276 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:10:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:10:54.276 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:10:54 compute-0 nova_compute[265391]: 2025-09-30 18:10:54.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:54 compute-0 podman[302540]: 2025-09-30 18:10:54.542728527 +0000 UTC m=+0.074300185 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=iscsid)
Sep 30 18:10:54 compute-0 podman[302539]: 2025-09-30 18:10:54.551140406 +0000 UTC m=+0.077578090 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd)
Sep 30 18:10:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:10:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:10:54.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:10:54 compute-0 podman[302541]: 2025-09-30 18:10:54.581395284 +0000 UTC m=+0.101406981 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, version=9.6, maintainer=Red Hat, Inc., release=1755695350, config_id=edpm, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, name=ubi9-minimal, vcs-type=git, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers)
Sep 30 18:10:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:54 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224003910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:55 compute-0 ceph-mon[73755]: pgmap v962: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Sep 30 18:10:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:55 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:10:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:10:55.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:10:55 compute-0 nova_compute[265391]: 2025-09-30 18:10:55.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:10:55 compute-0 nova_compute[265391]: 2025-09-30 18:10:55.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v963: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Sep 30 18:10:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:10:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:10:56.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:10:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:56 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218001d20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:10:57.152Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:10:57 compute-0 ceph-mon[73755]: pgmap v963: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Sep 30 18:10:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:57 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:10:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:10:57.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:10:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:10:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/833487038' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:10:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:10:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/833487038' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:10:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v964: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Sep 30 18:10:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/833487038' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:10:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/833487038' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:10:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:10:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:10:58.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:10:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:58 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224003910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:10:58] "GET /metrics HTTP/1.1" 200 46633 "" "Prometheus/2.51.0"
Sep 30 18:10:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:10:58] "GET /metrics HTTP/1.1" 200 46633 "" "Prometheus/2.51.0"
Sep 30 18:10:59 compute-0 ceph-mon[73755]: pgmap v964: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Sep 30 18:10:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:10:59 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210004180 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:10:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:10:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:10:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:10:59.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:10:59 compute-0 nova_compute[265391]: 2025-09-30 18:10:59.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:10:59 compute-0 podman[276673]: time="2025-09-30T18:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:10:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:10:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10248 "" "Go-http-client/1.1"
Sep 30 18:10:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v965: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Sep 30 18:11:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:11:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:11:00.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:11:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:11:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:00 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218001d20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:00 compute-0 nova_compute[265391]: 2025-09-30 18:11:00.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:01 compute-0 ceph-mon[73755]: pgmap v965: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Sep 30 18:11:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:01 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:01 compute-0 openstack_network_exporter[279566]: ERROR   18:11:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:11:01 compute-0 openstack_network_exporter[279566]: ERROR   18:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:11:01 compute-0 openstack_network_exporter[279566]: ERROR   18:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:11:01 compute-0 openstack_network_exporter[279566]: ERROR   18:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:11:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:11:01 compute-0 openstack_network_exporter[279566]: ERROR   18:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:11:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:11:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 18:11:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:11:01.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 18:11:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v966: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:11:02 compute-0 ceph-mon[73755]: pgmap v966: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:11:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:11:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:11:02.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:11:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:02 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218001d20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:02 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:02.790 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:11:02 compute-0 nova_compute[265391]: 2025-09-30 18:11:02.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:02 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:02.791 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:11:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:03 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210004180 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:11:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:11:03.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:11:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:11:03.666Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:11:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v967: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:11:04 compute-0 nova_compute[265391]: 2025-09-30 18:11:04.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:11:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:11:04.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:11:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:04 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224003910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:05 compute-0 ceph-mon[73755]: pgmap v967: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:11:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:05 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:11:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:11:05.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:11:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:11:05 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:05.793 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:11:05 compute-0 nova_compute[265391]: 2025-09-30 18:11:05.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v968: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:11:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:11:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:11:06.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:11:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:06 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218001ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:07 compute-0 ceph-mon[73755]: pgmap v968: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:11:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:11:07.152Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:11:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:11:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:11:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:07 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210004180 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:11:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:11:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:11:07.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:11:07
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', '.nfs', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', 'backups', 'images']
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:11:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v969: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:11:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:11:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:11:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:11:08.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:11:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:08 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224003910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:11:08] "GET /metrics HTTP/1.1" 200 46630 "" "Prometheus/2.51.0"
Sep 30 18:11:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:11:08] "GET /metrics HTTP/1.1" 200 46630 "" "Prometheus/2.51.0"
Sep 30 18:11:09 compute-0 sudo[302618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:11:09 compute-0 sudo[302618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:11:09 compute-0 sudo[302618]: pam_unix(sudo:session): session closed for user root
Sep 30 18:11:09 compute-0 ceph-mon[73755]: pgmap v969: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:11:09 compute-0 ceph-mgr[74051]: [devicehealth INFO root] Check health
Sep 30 18:11:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:09 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:09 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:09.468 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:07:7b 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5fff1904-159a-4b76-8c46-feabf17f29ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '250d452565a2459c8481b499c0227183', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12c18a77-b252-4a3e-a181-b42644879446, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=3a8ea0a0-c179-4516-9404-04b68a17e79e) old=Port_Binding(mac=['fa:16:3e:18:07:7b'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5fff1904-159a-4b76-8c46-feabf17f29ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '250d452565a2459c8481b499c0227183', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:11:09 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:09.469 166158 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 3a8ea0a0-c179-4516-9404-04b68a17e79e in datapath 5fff1904-159a-4b76-8c46-feabf17f29ab updated
Sep 30 18:11:09 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:09.471 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5fff1904-159a-4b76-8c46-feabf17f29ab, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:11:09 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:09.472 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[12fd1d42-f873-418f-9bc7-7bf727aea1f2]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:11:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:11:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:11:09.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:11:09 compute-0 nova_compute[265391]: 2025-09-30 18:11:09.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v970: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:11:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:11:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:11:10.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:11:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:11:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:10 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218003240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:10 compute-0 nova_compute[265391]: 2025-09-30 18:11:10.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:10 compute-0 sshd-session[302644]: Invalid user abcd from 14.225.220.107 port 55578
Sep 30 18:11:10 compute-0 sshd-session[302644]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:11:10 compute-0 sshd-session[302644]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107
Sep 30 18:11:11 compute-0 ceph-mon[73755]: pgmap v970: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:11:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:11 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210004180 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:11:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:11:11.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:11:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v971: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:11:12 compute-0 podman[302650]: 2025-09-30 18:11:12.546188787 +0000 UTC m=+0.072726204 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Sep 30 18:11:12 compute-0 podman[302649]: 2025-09-30 18:11:12.577240915 +0000 UTC m=+0.106464932 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20250930, container_name=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest)
Sep 30 18:11:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:11:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:11:12.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:11:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:12 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224003910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:12 compute-0 sshd-session[302644]: Failed password for invalid user abcd from 14.225.220.107 port 55578 ssh2
Sep 30 18:11:13 compute-0 ceph-mon[73755]: pgmap v971: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:11:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:13 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224003910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:11:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:11:13.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:11:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:11:13.667Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:11:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v972: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:11:14 compute-0 nova_compute[265391]: 2025-09-30 18:11:14.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:11:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:11:14.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:11:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:14 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218003240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:14 compute-0 sshd-session[302644]: Received disconnect from 14.225.220.107 port 55578:11: Bye Bye [preauth]
Sep 30 18:11:14 compute-0 sshd-session[302644]: Disconnected from invalid user abcd 14.225.220.107 port 55578 [preauth]
Sep 30 18:11:15 compute-0 ceph-mon[73755]: pgmap v972: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:11:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:15 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210004180 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:11:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:11:15.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:11:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:11:15 compute-0 sshd-session[302702]: Invalid user diana from 45.252.249.158 port 46300
Sep 30 18:11:15 compute-0 sshd-session[302702]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:11:15 compute-0 sshd-session[302702]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158
Sep 30 18:11:15 compute-0 nova_compute[265391]: 2025-09-30 18:11:15.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v973: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:11:16 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:16.075 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:66:01 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-d56b2601-7f56-4c9d-9a6f-73b6bc9a0f86', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d56b2601-7f56-4c9d-9a6f-73b6bc9a0f86', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddd1f985d8b64b449c79d55b0cbd6422', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f77bdba-b48a-4510-90bd-ec07e6ccf8ba, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=fa4f1b47-85c6-4625-86a5-85ea3743d11f) old=Port_Binding(mac=['fa:16:3e:18:66:01'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-d56b2601-7f56-4c9d-9a6f-73b6bc9a0f86', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d56b2601-7f56-4c9d-9a6f-73b6bc9a0f86', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddd1f985d8b64b449c79d55b0cbd6422', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:11:16 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:16.076 166158 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port fa4f1b47-85c6-4625-86a5-85ea3743d11f in datapath d56b2601-7f56-4c9d-9a6f-73b6bc9a0f86 updated
Sep 30 18:11:16 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:16.078 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d56b2601-7f56-4c9d-9a6f-73b6bc9a0f86, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:11:16 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:16.078 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[e9c5919f-3061-4257-a0a9-33aa35ee017c]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:11:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:11:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:11:16.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:11:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:16 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:11:17.153Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:11:17 compute-0 ceph-mon[73755]: pgmap v973: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:11:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:17 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224003910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:11:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:11:17.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:11:17 compute-0 sshd-session[302702]: Failed password for invalid user diana from 45.252.249.158 port 46300 ssh2
Sep 30 18:11:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v974: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:11:18 compute-0 sshd-session[302702]: Received disconnect from 45.252.249.158 port 46300:11: Bye Bye [preauth]
Sep 30 18:11:18 compute-0 sshd-session[302702]: Disconnected from invalid user diana 45.252.249.158 port 46300 [preauth]
Sep 30 18:11:18 compute-0 podman[302708]: 2025-09-30 18:11:18.513592892 +0000 UTC m=+0.059899998 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Sep 30 18:11:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:11:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:11:18.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:11:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:18 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218003240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:11:18] "GET /metrics HTTP/1.1" 200 46630 "" "Prometheus/2.51.0"
Sep 30 18:11:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:11:18] "GET /metrics HTTP/1.1" 200 46630 "" "Prometheus/2.51.0"
Sep 30 18:11:19 compute-0 ceph-mon[73755]: pgmap v974: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:11:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:19 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210004180 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:19 compute-0 nova_compute[265391]: 2025-09-30 18:11:19.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:11:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:11:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:11:19.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:11:19 compute-0 nova_compute[265391]: 2025-09-30 18:11:19.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:19 compute-0 nova_compute[265391]: 2025-09-30 18:11:19.941 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:11:19 compute-0 nova_compute[265391]: 2025-09-30 18:11:19.941 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:11:19 compute-0 nova_compute[265391]: 2025-09-30 18:11:19.941 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:11:19 compute-0 nova_compute[265391]: 2025-09-30 18:11:19.941 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:11:19 compute-0 nova_compute[265391]: 2025-09-30 18:11:19.941 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:11:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v975: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:11:20 compute-0 ceph-mon[73755]: pgmap v975: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:11:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:11:20 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1035571615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:11:20 compute-0 nova_compute[265391]: 2025-09-30 18:11:20.439 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:11:20 compute-0 nova_compute[265391]: 2025-09-30 18:11:20.598 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:11:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:11:20 compute-0 nova_compute[265391]: 2025-09-30 18:11:20.600 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:11:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:11:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:11:20.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:11:20 compute-0 nova_compute[265391]: 2025-09-30 18:11:20.622 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.021s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:11:20 compute-0 nova_compute[265391]: 2025-09-30 18:11:20.622 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4529MB free_disk=39.992183685302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:11:20 compute-0 nova_compute[265391]: 2025-09-30 18:11:20.623 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:11:20 compute-0 nova_compute[265391]: 2025-09-30 18:11:20.623 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:11:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:20 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:20 compute-0 nova_compute[265391]: 2025-09-30 18:11:20.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:21 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1035571615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:11:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:21 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224003910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:11:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:11:21.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:11:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v976: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:11:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:11:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:11:22 compute-0 ceph-mon[73755]: pgmap v976: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:11:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:11:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:11:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:11:22.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:11:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:22 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218003240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:23 compute-0 nova_compute[265391]: 2025-09-30 18:11:23.240 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:11:23 compute-0 nova_compute[265391]: 2025-09-30 18:11:23.240 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:11:20 up  1:14,  0 user,  load average: 0.55, 0.77, 0.88\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:11:23 compute-0 nova_compute[265391]: 2025-09-30 18:11:23.258 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:11:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:23 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210004180 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:11:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:11:23.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:11:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=cleanup t=2025-09-30T18:11:23.534118496Z level=info msg="Completed cleanup jobs" duration=37.185476ms
Sep 30 18:11:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=plugins.update.checker t=2025-09-30T18:11:23.616470236Z level=info msg="Update check succeeded" duration=53.394988ms
Sep 30 18:11:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=grafana.update.checker t=2025-09-30T18:11:23.628639492Z level=info msg="Update check succeeded" duration=61.259532ms
Sep 30 18:11:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:11:23.668Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:11:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:11:23 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1691222335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:11:23 compute-0 nova_compute[265391]: 2025-09-30 18:11:23.728 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:11:23 compute-0 nova_compute[265391]: 2025-09-30 18:11:23.735 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:11:23 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1691222335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:11:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v977: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:11:24 compute-0 nova_compute[265391]: 2025-09-30 18:11:24.245 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:11:24 compute-0 nova_compute[265391]: 2025-09-30 18:11:24.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:11:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:11:24.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:11:24 compute-0 nova_compute[265391]: 2025-09-30 18:11:24.756 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:11:24 compute-0 nova_compute[265391]: 2025-09-30 18:11:24.757 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.134s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:11:24 compute-0 ceph-mon[73755]: pgmap v977: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:11:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:24 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:25 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224003910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:11:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:11:25.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:11:25 compute-0 podman[302784]: 2025-09-30 18:11:25.519121078 +0000 UTC m=+0.053307216 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, distribution-scope=public, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, vcs-type=git)
Sep 30 18:11:25 compute-0 podman[302783]: 2025-09-30 18:11:25.53303452 +0000 UTC m=+0.063585623 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, container_name=iscsid, tcib_build_tag=watcher_latest, tcib_managed=true)
Sep 30 18:11:25 compute-0 podman[302782]: 2025-09-30 18:11:25.540013581 +0000 UTC m=+0.079262130 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20250930)
Sep 30 18:11:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:11:25 compute-0 nova_compute[265391]: 2025-09-30 18:11:25.756 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:11:25 compute-0 nova_compute[265391]: 2025-09-30 18:11:25.757 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:11:25 compute-0 nova_compute[265391]: 2025-09-30 18:11:25.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v978: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:11:26 compute-0 nova_compute[265391]: 2025-09-30 18:11:26.268 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:11:26 compute-0 nova_compute[265391]: 2025-09-30 18:11:26.268 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:11:26 compute-0 nova_compute[265391]: 2025-09-30 18:11:26.268 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:11:26 compute-0 nova_compute[265391]: 2025-09-30 18:11:26.269 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:11:26 compute-0 nova_compute[265391]: 2025-09-30 18:11:26.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:11:26 compute-0 nova_compute[265391]: 2025-09-30 18:11:26.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:11:26 compute-0 nova_compute[265391]: 2025-09-30 18:11:26.429 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:11:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:11:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:11:26.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:11:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:26 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218003240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:27 compute-0 ceph-mon[73755]: pgmap v978: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:11:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:11:27.154Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:11:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:27 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:11:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:11:27.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:11:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v979: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:11:28 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/561503774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:11:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:11:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:11:28.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:11:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:28 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:11:28] "GET /metrics HTTP/1.1" 200 46632 "" "Prometheus/2.51.0"
Sep 30 18:11:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:11:28] "GET /metrics HTTP/1.1" 200 46632 "" "Prometheus/2.51.0"
Sep 30 18:11:29 compute-0 ceph-mon[73755]: pgmap v979: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:11:29 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/519664168' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:11:29 compute-0 sudo[302841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:11:29 compute-0 sudo[302841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:11:29 compute-0 sudo[302841]: pam_unix(sudo:session): session closed for user root
Sep 30 18:11:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:29 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:11:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:11:29.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:11:29 compute-0 nova_compute[265391]: 2025-09-30 18:11:29.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:29 compute-0 podman[276673]: time="2025-09-30T18:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:11:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:11:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10263 "" "Go-http-client/1.1"
Sep 30 18:11:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v980: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:11:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:11:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:11:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:11:30.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:11:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:30 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218003240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:30 compute-0 nova_compute[265391]: 2025-09-30 18:11:30.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:31 compute-0 ceph-mon[73755]: pgmap v980: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:11:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:31 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:31 compute-0 openstack_network_exporter[279566]: ERROR   18:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:11:31 compute-0 openstack_network_exporter[279566]: ERROR   18:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:11:31 compute-0 openstack_network_exporter[279566]: ERROR   18:11:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:11:31 compute-0 openstack_network_exporter[279566]: ERROR   18:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:11:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:11:31 compute-0 openstack_network_exporter[279566]: ERROR   18:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:11:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:11:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:11:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:11:31.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:11:31 compute-0 ovn_controller[156242]: 2025-09-30T18:11:31Z|00057|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Sep 30 18:11:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v981: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:11:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:11:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:11:32.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:11:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:32 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c0014c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:33 compute-0 ceph-mon[73755]: pgmap v981: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:11:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:33 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:11:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:11:33.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:11:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:11:33.670Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:11:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v982: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:11:34 compute-0 nova_compute[265391]: 2025-09-30 18:11:34.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:11:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:11:34.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:11:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:34 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:35 compute-0 ceph-mon[73755]: pgmap v982: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:11:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:35 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:11:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:11:35.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:11:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:11:35 compute-0 nova_compute[265391]: 2025-09-30 18:11:35.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v983: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:11:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:11:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1090999453' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:11:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:11:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1090999453' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:11:36 compute-0 sudo[302874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:11:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:11:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:11:36.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:11:36 compute-0 sudo[302874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:11:36 compute-0 sudo[302874]: pam_unix(sudo:session): session closed for user root
Sep 30 18:11:36 compute-0 sudo[302899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:11:36 compute-0 sudo[302899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:11:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:36 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c0014c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:11:37.155Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:11:37 compute-0 ceph-mon[73755]: pgmap v983: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:11:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1090999453' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:11:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1090999453' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:11:37 compute-0 sudo[302899]: pam_unix(sudo:session): session closed for user root
Sep 30 18:11:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:11:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:11:37 compute-0 sudo[302956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:11:37 compute-0 sudo[302956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:11:37 compute-0 sudo[302956]: pam_unix(sudo:session): session closed for user root
Sep 30 18:11:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:11:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:11:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:37 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c0014c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:11:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:11:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:11:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:11:37 compute-0 sudo[302981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Sep 30 18:11:37 compute-0 sudo[302981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:11:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:11:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:11:37.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:11:37 compute-0 sudo[302981]: pam_unix(sudo:session): session closed for user root
Sep 30 18:11:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:11:37 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:11:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:11:37 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:11:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Sep 30 18:11:37 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Sep 30 18:11:37 compute-0 nova_compute[265391]: 2025-09-30 18:11:37.831 2 DEBUG oslo_concurrency.lockutils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Acquiring lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:11:37 compute-0 nova_compute[265391]: 2025-09-30 18:11:37.832 2 DEBUG oslo_concurrency.lockutils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:11:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v984: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:11:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:11:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:11:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:11:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Sep 30 18:11:38 compute-0 nova_compute[265391]: 2025-09-30 18:11:38.338 2 DEBUG nova.compute.manager [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Sep 30 18:11:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:11:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:11:38.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:11:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:11:38] "GET /metrics HTTP/1.1" 200 46633 "" "Prometheus/2.51.0"
Sep 30 18:11:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:11:38] "GET /metrics HTTP/1.1" 200 46633 "" "Prometheus/2.51.0"
Sep 30 18:11:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:38 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c0014c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:38 compute-0 nova_compute[265391]: 2025-09-30 18:11:38.880 2 DEBUG oslo_concurrency.lockutils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:11:38 compute-0 nova_compute[265391]: 2025-09-30 18:11:38.880 2 DEBUG oslo_concurrency.lockutils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:11:38 compute-0 nova_compute[265391]: 2025-09-30 18:11:38.888 2 DEBUG nova.virt.hardware [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Sep 30 18:11:38 compute-0 nova_compute[265391]: 2025-09-30 18:11:38.888 2 INFO nova.compute.claims [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Claim successful on node compute-0.ctlplane.example.com
Sep 30 18:11:39 compute-0 ceph-mon[73755]: pgmap v984: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:11:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:39 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:11:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:11:39.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:11:39 compute-0 nova_compute[265391]: 2025-09-30 18:11:39.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 18:11:39 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:11:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 18:11:39 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:11:39 compute-0 nova_compute[265391]: 2025-09-30 18:11:39.966 2 DEBUG oslo_concurrency.processutils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:11:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v985: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:11:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:11:40 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/25730032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:11:40 compute-0 nova_compute[265391]: 2025-09-30 18:11:40.443 2 DEBUG oslo_concurrency.processutils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:11:40 compute-0 nova_compute[265391]: 2025-09-30 18:11:40.450 2 DEBUG nova.compute.provider_tree [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:11:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:11:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 18:11:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:11:40.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 18:11:40 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:11:40 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:11:40 compute-0 ceph-mon[73755]: pgmap v985: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:11:40 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/25730032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:11:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:40 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c002c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:40 compute-0 nova_compute[265391]: 2025-09-30 18:11:40.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:40 compute-0 nova_compute[265391]: 2025-09-30 18:11:40.957 2 DEBUG nova.scheduler.client.report [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:11:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 18:11:41 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:11:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 18:11:41 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:11:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Sep 30 18:11:41 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Sep 30 18:11:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:11:41 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:11:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:11:41 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:11:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:11:41 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:11:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:11:41 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:11:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:11:41 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:11:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:11:41 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:11:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:11:41 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:11:41 compute-0 sudo[303052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:11:41 compute-0 sudo[303052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:11:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:41 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:41 compute-0 sudo[303052]: pam_unix(sudo:session): session closed for user root
Sep 30 18:11:41 compute-0 sudo[303077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:11:41 compute-0 sudo[303077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:11:41 compute-0 nova_compute[265391]: 2025-09-30 18:11:41.467 2 DEBUG oslo_concurrency.lockutils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.587s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:11:41 compute-0 nova_compute[265391]: 2025-09-30 18:11:41.468 2 DEBUG nova.compute.manager [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Sep 30 18:11:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:11:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:11:41.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:11:41 compute-0 podman[303142]: 2025-09-30 18:11:41.940850511 +0000 UTC m=+0.052436923 container create 64621b44e5dbad69de8875e0fdf347c6f7309abf87d843561aec43cea0a131da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_kirch, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:11:41 compute-0 nova_compute[265391]: 2025-09-30 18:11:41.977 2 DEBUG nova.compute.manager [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Sep 30 18:11:41 compute-0 nova_compute[265391]: 2025-09-30 18:11:41.978 2 DEBUG nova.network.neutron [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Sep 30 18:11:41 compute-0 nova_compute[265391]: 2025-09-30 18:11:41.978 2 WARNING neutronclient.v2_0.client [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:11:41 compute-0 nova_compute[265391]: 2025-09-30 18:11:41.979 2 WARNING neutronclient.v2_0.client [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:11:41 compute-0 systemd[1]: Started libpod-conmon-64621b44e5dbad69de8875e0fdf347c6f7309abf87d843561aec43cea0a131da.scope.
Sep 30 18:11:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v986: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:11:42 compute-0 podman[303142]: 2025-09-30 18:11:41.918337006 +0000 UTC m=+0.029923458 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:11:42 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:11:42 compute-0 podman[303142]: 2025-09-30 18:11:42.039853483 +0000 UTC m=+0.151439925 container init 64621b44e5dbad69de8875e0fdf347c6f7309abf87d843561aec43cea0a131da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Sep 30 18:11:42 compute-0 podman[303142]: 2025-09-30 18:11:42.048018955 +0000 UTC m=+0.159605367 container start 64621b44e5dbad69de8875e0fdf347c6f7309abf87d843561aec43cea0a131da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_kirch, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:11:42 compute-0 podman[303142]: 2025-09-30 18:11:42.051003923 +0000 UTC m=+0.162590375 container attach 64621b44e5dbad69de8875e0fdf347c6f7309abf87d843561aec43cea0a131da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_kirch, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 18:11:42 compute-0 wizardly_kirch[303158]: 167 167
Sep 30 18:11:42 compute-0 systemd[1]: libpod-64621b44e5dbad69de8875e0fdf347c6f7309abf87d843561aec43cea0a131da.scope: Deactivated successfully.
Sep 30 18:11:42 compute-0 conmon[303158]: conmon 64621b44e5dbad69de88 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-64621b44e5dbad69de8875e0fdf347c6f7309abf87d843561aec43cea0a131da.scope/container/memory.events
Sep 30 18:11:42 compute-0 podman[303142]: 2025-09-30 18:11:42.055801777 +0000 UTC m=+0.167388189 container died 64621b44e5dbad69de8875e0fdf347c6f7309abf87d843561aec43cea0a131da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Sep 30 18:11:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a65f73409e4c63c036089894aaba64bdb501adea7a91aa16b7a158eab1a6abe-merged.mount: Deactivated successfully.
Sep 30 18:11:42 compute-0 podman[303142]: 2025-09-30 18:11:42.103206419 +0000 UTC m=+0.214792841 container remove 64621b44e5dbad69de8875e0fdf347c6f7309abf87d843561aec43cea0a131da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_kirch, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Sep 30 18:11:42 compute-0 systemd[1]: libpod-conmon-64621b44e5dbad69de8875e0fdf347c6f7309abf87d843561aec43cea0a131da.scope: Deactivated successfully.
Sep 30 18:11:42 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:11:42 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:11:42 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Sep 30 18:11:42 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:11:42 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:11:42 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:11:42 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:11:42 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:11:42 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:11:42 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:11:42 compute-0 ceph-mon[73755]: pgmap v986: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:11:42 compute-0 podman[303182]: 2025-09-30 18:11:42.331973343 +0000 UTC m=+0.058661875 container create c28f861f4749d718fb23f82f207c155eb35d434b9e82400dc1c50b935af14927 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_taussig, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Sep 30 18:11:42 compute-0 systemd[1]: Started libpod-conmon-c28f861f4749d718fb23f82f207c155eb35d434b9e82400dc1c50b935af14927.scope.
Sep 30 18:11:42 compute-0 podman[303182]: 2025-09-30 18:11:42.313678657 +0000 UTC m=+0.040367179 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:11:42 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:11:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20485e6dede63fb8f142b9b33f37b0c8e07b124acb96a584d66655f368a48fe0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:11:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20485e6dede63fb8f142b9b33f37b0c8e07b124acb96a584d66655f368a48fe0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:11:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20485e6dede63fb8f142b9b33f37b0c8e07b124acb96a584d66655f368a48fe0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:11:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20485e6dede63fb8f142b9b33f37b0c8e07b124acb96a584d66655f368a48fe0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:11:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20485e6dede63fb8f142b9b33f37b0c8e07b124acb96a584d66655f368a48fe0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:11:42 compute-0 podman[303182]: 2025-09-30 18:11:42.431620511 +0000 UTC m=+0.158309053 container init c28f861f4749d718fb23f82f207c155eb35d434b9e82400dc1c50b935af14927 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_taussig, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:11:42 compute-0 podman[303182]: 2025-09-30 18:11:42.451762365 +0000 UTC m=+0.178450877 container start c28f861f4749d718fb23f82f207c155eb35d434b9e82400dc1c50b935af14927 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_taussig, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Sep 30 18:11:42 compute-0 podman[303182]: 2025-09-30 18:11:42.456176689 +0000 UTC m=+0.182865241 container attach c28f861f4749d718fb23f82f207c155eb35d434b9e82400dc1c50b935af14927 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_taussig, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Sep 30 18:11:42 compute-0 nova_compute[265391]: 2025-09-30 18:11:42.488 2 INFO nova.virt.libvirt.driver [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Sep 30 18:11:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:11:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:11:42.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:11:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:42 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c0014c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:42 compute-0 adoring_taussig[303198]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:11:42 compute-0 adoring_taussig[303198]: --> All data devices are unavailable
Sep 30 18:11:42 compute-0 systemd[1]: libpod-c28f861f4749d718fb23f82f207c155eb35d434b9e82400dc1c50b935af14927.scope: Deactivated successfully.
Sep 30 18:11:42 compute-0 podman[303182]: 2025-09-30 18:11:42.826625904 +0000 UTC m=+0.553314416 container died c28f861f4749d718fb23f82f207c155eb35d434b9e82400dc1c50b935af14927 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:11:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-20485e6dede63fb8f142b9b33f37b0c8e07b124acb96a584d66655f368a48fe0-merged.mount: Deactivated successfully.
Sep 30 18:11:42 compute-0 podman[303182]: 2025-09-30 18:11:42.884100727 +0000 UTC m=+0.610789239 container remove c28f861f4749d718fb23f82f207c155eb35d434b9e82400dc1c50b935af14927 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_taussig, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Sep 30 18:11:42 compute-0 systemd[1]: libpod-conmon-c28f861f4749d718fb23f82f207c155eb35d434b9e82400dc1c50b935af14927.scope: Deactivated successfully.
Sep 30 18:11:42 compute-0 podman[303221]: 2025-09-30 18:11:42.935493362 +0000 UTC m=+0.066251092 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:11:42 compute-0 sudo[303077]: pam_unix(sudo:session): session closed for user root
Sep 30 18:11:42 compute-0 podman[303213]: 2025-09-30 18:11:42.97544486 +0000 UTC m=+0.110805939 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:11:42 compute-0 nova_compute[265391]: 2025-09-30 18:11:42.996 2 DEBUG nova.compute.manager [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Sep 30 18:11:43 compute-0 sudo[303271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:11:43 compute-0 sudo[303271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:11:43 compute-0 sudo[303271]: pam_unix(sudo:session): session closed for user root
Sep 30 18:11:43 compute-0 sudo[303299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:11:43 compute-0 sudo[303299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:11:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:43 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:43 compute-0 nova_compute[265391]: 2025-09-30 18:11:43.381 2 DEBUG nova.network.neutron [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Successfully created port: 26d718f0-c921-490f-815d-8221b976f012 _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Sep 30 18:11:43 compute-0 podman[303364]: 2025-09-30 18:11:43.460530653 +0000 UTC m=+0.042146606 container create edb31106e18a369b35a39d110324ddb8d554ea9a4aa3bb4a937f2b16675f8574 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_engelbart, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:11:43 compute-0 systemd[1]: Started libpod-conmon-edb31106e18a369b35a39d110324ddb8d554ea9a4aa3bb4a937f2b16675f8574.scope.
Sep 30 18:11:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:11:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:11:43.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:11:43 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:11:43 compute-0 podman[303364]: 2025-09-30 18:11:43.441732985 +0000 UTC m=+0.023348958 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:11:43 compute-0 podman[303364]: 2025-09-30 18:11:43.547329768 +0000 UTC m=+0.128945741 container init edb31106e18a369b35a39d110324ddb8d554ea9a4aa3bb4a937f2b16675f8574 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_engelbart, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 18:11:43 compute-0 podman[303364]: 2025-09-30 18:11:43.558907549 +0000 UTC m=+0.140523512 container start edb31106e18a369b35a39d110324ddb8d554ea9a4aa3bb4a937f2b16675f8574 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:11:43 compute-0 podman[303364]: 2025-09-30 18:11:43.562243256 +0000 UTC m=+0.143859279 container attach edb31106e18a369b35a39d110324ddb8d554ea9a4aa3bb4a937f2b16675f8574 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1)
Sep 30 18:11:43 compute-0 crazy_engelbart[303382]: 167 167
Sep 30 18:11:43 compute-0 systemd[1]: libpod-edb31106e18a369b35a39d110324ddb8d554ea9a4aa3bb4a937f2b16675f8574.scope: Deactivated successfully.
Sep 30 18:11:43 compute-0 podman[303364]: 2025-09-30 18:11:43.565523001 +0000 UTC m=+0.147138954 container died edb31106e18a369b35a39d110324ddb8d554ea9a4aa3bb4a937f2b16675f8574 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Sep 30 18:11:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-20cb520b1ea20983a78e75ad6f39352dfea0457b5b2c0afe11f027d4989da2a7-merged.mount: Deactivated successfully.
Sep 30 18:11:43 compute-0 podman[303364]: 2025-09-30 18:11:43.604978956 +0000 UTC m=+0.186594929 container remove edb31106e18a369b35a39d110324ddb8d554ea9a4aa3bb4a937f2b16675f8574 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_engelbart, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:11:43 compute-0 systemd[1]: libpod-conmon-edb31106e18a369b35a39d110324ddb8d554ea9a4aa3bb4a937f2b16675f8574.scope: Deactivated successfully.
Sep 30 18:11:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:11:43.671Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:11:43 compute-0 podman[303408]: 2025-09-30 18:11:43.780053875 +0000 UTC m=+0.043853451 container create 264407cf814a3c4d5319eda8d07d758f0c9a37f8873f3e188867100f44d55b7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Sep 30 18:11:43 compute-0 systemd[1]: Started libpod-conmon-264407cf814a3c4d5319eda8d07d758f0c9a37f8873f3e188867100f44d55b7a.scope.
Sep 30 18:11:43 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:11:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65cb591196786749941cee0c5043cf6af84f9f83ee9568b717ce6846eab797c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:11:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65cb591196786749941cee0c5043cf6af84f9f83ee9568b717ce6846eab797c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:11:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65cb591196786749941cee0c5043cf6af84f9f83ee9568b717ce6846eab797c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:11:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65cb591196786749941cee0c5043cf6af84f9f83ee9568b717ce6846eab797c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:11:43 compute-0 podman[303408]: 2025-09-30 18:11:43.854121429 +0000 UTC m=+0.117921015 container init 264407cf814a3c4d5319eda8d07d758f0c9a37f8873f3e188867100f44d55b7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_nash, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:11:43 compute-0 podman[303408]: 2025-09-30 18:11:43.760171738 +0000 UTC m=+0.023971324 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:11:43 compute-0 podman[303408]: 2025-09-30 18:11:43.862568278 +0000 UTC m=+0.126367844 container start 264407cf814a3c4d5319eda8d07d758f0c9a37f8873f3e188867100f44d55b7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_nash, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Sep 30 18:11:43 compute-0 podman[303408]: 2025-09-30 18:11:43.866942932 +0000 UTC m=+0.130742528 container attach 264407cf814a3c4d5319eda8d07d758f0c9a37f8873f3e188867100f44d55b7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_nash, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:11:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v987: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:11:44 compute-0 nova_compute[265391]: 2025-09-30 18:11:44.015 2 DEBUG nova.compute.manager [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Sep 30 18:11:44 compute-0 nova_compute[265391]: 2025-09-30 18:11:44.017 2 DEBUG nova.virt.libvirt.driver [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Sep 30 18:11:44 compute-0 nova_compute[265391]: 2025-09-30 18:11:44.018 2 INFO nova.virt.libvirt.driver [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Creating image(s)
Sep 30 18:11:44 compute-0 nova_compute[265391]: 2025-09-30 18:11:44.050 2 DEBUG nova.storage.rbd_utils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] rbd image 761dbb06-6272-4941-a65f-5ad2f8cfbb70_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:11:44 compute-0 nova_compute[265391]: 2025-09-30 18:11:44.085 2 DEBUG nova.storage.rbd_utils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] rbd image 761dbb06-6272-4941-a65f-5ad2f8cfbb70_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:11:44 compute-0 nova_compute[265391]: 2025-09-30 18:11:44.113 2 DEBUG nova.storage.rbd_utils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] rbd image 761dbb06-6272-4941-a65f-5ad2f8cfbb70_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:11:44 compute-0 nova_compute[265391]: 2025-09-30 18:11:44.118 2 DEBUG oslo_concurrency.processutils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:11:44 compute-0 laughing_nash[303424]: {
Sep 30 18:11:44 compute-0 laughing_nash[303424]:     "0": [
Sep 30 18:11:44 compute-0 laughing_nash[303424]:         {
Sep 30 18:11:44 compute-0 laughing_nash[303424]:             "devices": [
Sep 30 18:11:44 compute-0 laughing_nash[303424]:                 "/dev/loop3"
Sep 30 18:11:44 compute-0 laughing_nash[303424]:             ],
Sep 30 18:11:44 compute-0 laughing_nash[303424]:             "lv_name": "ceph_lv0",
Sep 30 18:11:44 compute-0 laughing_nash[303424]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:11:44 compute-0 laughing_nash[303424]:             "lv_size": "21470642176",
Sep 30 18:11:44 compute-0 laughing_nash[303424]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:11:44 compute-0 laughing_nash[303424]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:11:44 compute-0 laughing_nash[303424]:             "name": "ceph_lv0",
Sep 30 18:11:44 compute-0 laughing_nash[303424]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:11:44 compute-0 laughing_nash[303424]:             "tags": {
Sep 30 18:11:44 compute-0 laughing_nash[303424]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:11:44 compute-0 laughing_nash[303424]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:11:44 compute-0 laughing_nash[303424]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:11:44 compute-0 laughing_nash[303424]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:11:44 compute-0 laughing_nash[303424]:                 "ceph.cluster_name": "ceph",
Sep 30 18:11:44 compute-0 laughing_nash[303424]:                 "ceph.crush_device_class": "",
Sep 30 18:11:44 compute-0 laughing_nash[303424]:                 "ceph.encrypted": "0",
Sep 30 18:11:44 compute-0 laughing_nash[303424]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:11:44 compute-0 laughing_nash[303424]:                 "ceph.osd_id": "0",
Sep 30 18:11:44 compute-0 laughing_nash[303424]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:11:44 compute-0 laughing_nash[303424]:                 "ceph.type": "block",
Sep 30 18:11:44 compute-0 laughing_nash[303424]:                 "ceph.vdo": "0",
Sep 30 18:11:44 compute-0 laughing_nash[303424]:                 "ceph.with_tpm": "0"
Sep 30 18:11:44 compute-0 laughing_nash[303424]:             },
Sep 30 18:11:44 compute-0 laughing_nash[303424]:             "type": "block",
Sep 30 18:11:44 compute-0 laughing_nash[303424]:             "vg_name": "ceph_vg0"
Sep 30 18:11:44 compute-0 laughing_nash[303424]:         }
Sep 30 18:11:44 compute-0 laughing_nash[303424]:     ]
Sep 30 18:11:44 compute-0 laughing_nash[303424]: }
Sep 30 18:11:44 compute-0 systemd[1]: libpod-264407cf814a3c4d5319eda8d07d758f0c9a37f8873f3e188867100f44d55b7a.scope: Deactivated successfully.
Sep 30 18:11:44 compute-0 podman[303408]: 2025-09-30 18:11:44.166029332 +0000 UTC m=+0.429828918 container died 264407cf814a3c4d5319eda8d07d758f0c9a37f8873f3e188867100f44d55b7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_nash, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Sep 30 18:11:44 compute-0 nova_compute[265391]: 2025-09-30 18:11:44.181 2 DEBUG oslo_concurrency.processutils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:11:44 compute-0 nova_compute[265391]: 2025-09-30 18:11:44.182 2 DEBUG oslo_concurrency.lockutils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Acquiring lock "cb2d580238c9b109feae7f1462613dc547671457" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:11:44 compute-0 nova_compute[265391]: 2025-09-30 18:11:44.183 2 DEBUG oslo_concurrency.lockutils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:11:44 compute-0 nova_compute[265391]: 2025-09-30 18:11:44.184 2 DEBUG oslo_concurrency.lockutils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:11:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-65cb591196786749941cee0c5043cf6af84f9f83ee9568b717ce6846eab797c0-merged.mount: Deactivated successfully.
Sep 30 18:11:44 compute-0 nova_compute[265391]: 2025-09-30 18:11:44.214 2 DEBUG nova.storage.rbd_utils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] rbd image 761dbb06-6272-4941-a65f-5ad2f8cfbb70_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:11:44 compute-0 podman[303408]: 2025-09-30 18:11:44.216152664 +0000 UTC m=+0.479952230 container remove 264407cf814a3c4d5319eda8d07d758f0c9a37f8873f3e188867100f44d55b7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_nash, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Sep 30 18:11:44 compute-0 nova_compute[265391]: 2025-09-30 18:11:44.221 2 DEBUG oslo_concurrency.processutils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 761dbb06-6272-4941-a65f-5ad2f8cfbb70_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:11:44 compute-0 systemd[1]: libpod-conmon-264407cf814a3c4d5319eda8d07d758f0c9a37f8873f3e188867100f44d55b7a.scope: Deactivated successfully.
Sep 30 18:11:44 compute-0 sudo[303299]: pam_unix(sudo:session): session closed for user root
Sep 30 18:11:44 compute-0 sudo[303520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:11:44 compute-0 sudo[303520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:11:44 compute-0 sudo[303520]: pam_unix(sudo:session): session closed for user root
Sep 30 18:11:44 compute-0 sudo[303560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:11:44 compute-0 sudo[303560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:11:44 compute-0 nova_compute[265391]: 2025-09-30 18:11:44.499 2 DEBUG oslo_concurrency.processutils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 761dbb06-6272-4941-a65f-5ad2f8cfbb70_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.278s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:11:44 compute-0 nova_compute[265391]: 2025-09-30 18:11:44.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:44 compute-0 nova_compute[265391]: 2025-09-30 18:11:44.589 2 DEBUG nova.storage.rbd_utils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] resizing rbd image 761dbb06-6272-4941-a65f-5ad2f8cfbb70_disk to 1073741824 resize /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:288
Sep 30 18:11:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:11:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:11:44.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:11:44 compute-0 nova_compute[265391]: 2025-09-30 18:11:44.705 2 DEBUG nova.virt.libvirt.driver [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Sep 30 18:11:44 compute-0 nova_compute[265391]: 2025-09-30 18:11:44.707 2 DEBUG nova.virt.libvirt.driver [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Ensure instance console log exists: /var/lib/nova/instances/761dbb06-6272-4941-a65f-5ad2f8cfbb70/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Sep 30 18:11:44 compute-0 nova_compute[265391]: 2025-09-30 18:11:44.707 2 DEBUG oslo_concurrency.lockutils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:11:44 compute-0 nova_compute[265391]: 2025-09-30 18:11:44.708 2 DEBUG oslo_concurrency.lockutils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:11:44 compute-0 nova_compute[265391]: 2025-09-30 18:11:44.708 2 DEBUG oslo_concurrency.lockutils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:11:44 compute-0 nova_compute[265391]: 2025-09-30 18:11:44.710 2 DEBUG nova.network.neutron [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Successfully updated port: 26d718f0-c921-490f-815d-8221b976f012 _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Sep 30 18:11:44 compute-0 nova_compute[265391]: 2025-09-30 18:11:44.771 2 DEBUG nova.compute.manager [req-16c028e7-7a8c-4ebc-9c2e-fe8cd89fc140 req-05859b6d-9717-43f6-8cbf-61b471950af5 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Received event network-changed-26d718f0-c921-490f-815d-8221b976f012 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:11:44 compute-0 nova_compute[265391]: 2025-09-30 18:11:44.772 2 DEBUG nova.compute.manager [req-16c028e7-7a8c-4ebc-9c2e-fe8cd89fc140 req-05859b6d-9717-43f6-8cbf-61b471950af5 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Refreshing instance network info cache due to event network-changed-26d718f0-c921-490f-815d-8221b976f012. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:11:44 compute-0 nova_compute[265391]: 2025-09-30 18:11:44.772 2 DEBUG oslo_concurrency.lockutils [req-16c028e7-7a8c-4ebc-9c2e-fe8cd89fc140 req-05859b6d-9717-43f6-8cbf-61b471950af5 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-761dbb06-6272-4941-a65f-5ad2f8cfbb70" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:11:44 compute-0 nova_compute[265391]: 2025-09-30 18:11:44.773 2 DEBUG oslo_concurrency.lockutils [req-16c028e7-7a8c-4ebc-9c2e-fe8cd89fc140 req-05859b6d-9717-43f6-8cbf-61b471950af5 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-761dbb06-6272-4941-a65f-5ad2f8cfbb70" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:11:44 compute-0 nova_compute[265391]: 2025-09-30 18:11:44.773 2 DEBUG nova.network.neutron [req-16c028e7-7a8c-4ebc-9c2e-fe8cd89fc140 req-05859b6d-9717-43f6-8cbf-61b471950af5 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Refreshing network info cache for port 26d718f0-c921-490f-815d-8221b976f012 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:11:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:44 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:44 compute-0 podman[303702]: 2025-09-30 18:11:44.841665555 +0000 UTC m=+0.050750619 container create 59fc7b05867c9a9b6eca3ccbd43c1a05f1af978432842f399cc7625a7440f777 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_shirley, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Sep 30 18:11:44 compute-0 systemd[1]: Started libpod-conmon-59fc7b05867c9a9b6eca3ccbd43c1a05f1af978432842f399cc7625a7440f777.scope.
Sep 30 18:11:44 compute-0 podman[303702]: 2025-09-30 18:11:44.81492387 +0000 UTC m=+0.024009024 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:11:44 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:11:44 compute-0 podman[303702]: 2025-09-30 18:11:44.938760988 +0000 UTC m=+0.147846142 container init 59fc7b05867c9a9b6eca3ccbd43c1a05f1af978432842f399cc7625a7440f777 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_shirley, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:11:44 compute-0 podman[303702]: 2025-09-30 18:11:44.953630264 +0000 UTC m=+0.162715348 container start 59fc7b05867c9a9b6eca3ccbd43c1a05f1af978432842f399cc7625a7440f777 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_shirley, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Sep 30 18:11:44 compute-0 podman[303702]: 2025-09-30 18:11:44.957140535 +0000 UTC m=+0.166225659 container attach 59fc7b05867c9a9b6eca3ccbd43c1a05f1af978432842f399cc7625a7440f777 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_shirley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Sep 30 18:11:44 compute-0 sleepy_shirley[303719]: 167 167
Sep 30 18:11:44 compute-0 systemd[1]: libpod-59fc7b05867c9a9b6eca3ccbd43c1a05f1af978432842f399cc7625a7440f777.scope: Deactivated successfully.
Sep 30 18:11:44 compute-0 podman[303702]: 2025-09-30 18:11:44.961015226 +0000 UTC m=+0.170100300 container died 59fc7b05867c9a9b6eca3ccbd43c1a05f1af978432842f399cc7625a7440f777 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:11:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-e62c8b960e9101b36dca72054f8514fff4fec1fa96087eb4d9a9a85a0a461efa-merged.mount: Deactivated successfully.
Sep 30 18:11:44 compute-0 podman[303702]: 2025-09-30 18:11:44.997516694 +0000 UTC m=+0.206601758 container remove 59fc7b05867c9a9b6eca3ccbd43c1a05f1af978432842f399cc7625a7440f777 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_shirley, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:11:45 compute-0 systemd[1]: libpod-conmon-59fc7b05867c9a9b6eca3ccbd43c1a05f1af978432842f399cc7625a7440f777.scope: Deactivated successfully.
Sep 30 18:11:45 compute-0 ceph-mon[73755]: pgmap v987: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:11:45 compute-0 podman[303744]: 2025-09-30 18:11:45.183819005 +0000 UTC m=+0.055954105 container create 9231851f5324dfd0084efc6a1a3c2d079cf120636139f50628a229894210788e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_blackwell, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Sep 30 18:11:45 compute-0 nova_compute[265391]: 2025-09-30 18:11:45.215 2 DEBUG oslo_concurrency.lockutils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Acquiring lock "refresh_cache-761dbb06-6272-4941-a65f-5ad2f8cfbb70" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:11:45 compute-0 systemd[1]: Started libpod-conmon-9231851f5324dfd0084efc6a1a3c2d079cf120636139f50628a229894210788e.scope.
Sep 30 18:11:45 compute-0 podman[303744]: 2025-09-30 18:11:45.153594689 +0000 UTC m=+0.025729869 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:11:45 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:11:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02366f28e2f4bdbfe3c43bbc84381c5339a4073374bc31311b89dc2059fdcc6f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:11:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02366f28e2f4bdbfe3c43bbc84381c5339a4073374bc31311b89dc2059fdcc6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:11:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02366f28e2f4bdbfe3c43bbc84381c5339a4073374bc31311b89dc2059fdcc6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:11:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02366f28e2f4bdbfe3c43bbc84381c5339a4073374bc31311b89dc2059fdcc6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:11:45 compute-0 nova_compute[265391]: 2025-09-30 18:11:45.278 2 WARNING neutronclient.v2_0.client [req-16c028e7-7a8c-4ebc-9c2e-fe8cd89fc140 req-05859b6d-9717-43f6-8cbf-61b471950af5 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:11:45 compute-0 podman[303744]: 2025-09-30 18:11:45.286141313 +0000 UTC m=+0.158276513 container init 9231851f5324dfd0084efc6a1a3c2d079cf120636139f50628a229894210788e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_blackwell, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Sep 30 18:11:45 compute-0 podman[303744]: 2025-09-30 18:11:45.298286709 +0000 UTC m=+0.170421809 container start 9231851f5324dfd0084efc6a1a3c2d079cf120636139f50628a229894210788e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_blackwell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:11:45 compute-0 podman[303744]: 2025-09-30 18:11:45.302602591 +0000 UTC m=+0.174737721 container attach 9231851f5324dfd0084efc6a1a3c2d079cf120636139f50628a229894210788e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_blackwell, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:11:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:45 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c002c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:11:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:11:45.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:11:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:11:45 compute-0 nova_compute[265391]: 2025-09-30 18:11:45.696 2 DEBUG nova.network.neutron [req-16c028e7-7a8c-4ebc-9c2e-fe8cd89fc140 req-05859b6d-9717-43f6-8cbf-61b471950af5 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:11:45 compute-0 nova_compute[265391]: 2025-09-30 18:11:45.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:45 compute-0 nova_compute[265391]: 2025-09-30 18:11:45.881 2 DEBUG nova.network.neutron [req-16c028e7-7a8c-4ebc-9c2e-fe8cd89fc140 req-05859b6d-9717-43f6-8cbf-61b471950af5 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:11:45 compute-0 lvm[303837]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:11:45 compute-0 lvm[303837]: VG ceph_vg0 finished
Sep 30 18:11:46 compute-0 modest_blackwell[303760]: {}
Sep 30 18:11:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v988: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:11:46 compute-0 systemd[1]: libpod-9231851f5324dfd0084efc6a1a3c2d079cf120636139f50628a229894210788e.scope: Deactivated successfully.
Sep 30 18:11:46 compute-0 systemd[1]: libpod-9231851f5324dfd0084efc6a1a3c2d079cf120636139f50628a229894210788e.scope: Consumed 1.148s CPU time.
Sep 30 18:11:46 compute-0 podman[303744]: 2025-09-30 18:11:46.049932227 +0000 UTC m=+0.922067327 container died 9231851f5324dfd0084efc6a1a3c2d079cf120636139f50628a229894210788e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:11:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-02366f28e2f4bdbfe3c43bbc84381c5339a4073374bc31311b89dc2059fdcc6f-merged.mount: Deactivated successfully.
Sep 30 18:11:46 compute-0 podman[303744]: 2025-09-30 18:11:46.104537235 +0000 UTC m=+0.976672335 container remove 9231851f5324dfd0084efc6a1a3c2d079cf120636139f50628a229894210788e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Sep 30 18:11:46 compute-0 systemd[1]: libpod-conmon-9231851f5324dfd0084efc6a1a3c2d079cf120636139f50628a229894210788e.scope: Deactivated successfully.
Sep 30 18:11:46 compute-0 sudo[303560]: pam_unix(sudo:session): session closed for user root
Sep 30 18:11:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:11:46 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:11:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:11:46 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:11:46 compute-0 sudo[303852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:11:46 compute-0 sudo[303852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:11:46 compute-0 sudo[303852]: pam_unix(sudo:session): session closed for user root
Sep 30 18:11:46 compute-0 nova_compute[265391]: 2025-09-30 18:11:46.387 2 DEBUG oslo_concurrency.lockutils [req-16c028e7-7a8c-4ebc-9c2e-fe8cd89fc140 req-05859b6d-9717-43f6-8cbf-61b471950af5 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-761dbb06-6272-4941-a65f-5ad2f8cfbb70" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:11:46 compute-0 nova_compute[265391]: 2025-09-30 18:11:46.387 2 DEBUG oslo_concurrency.lockutils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Acquired lock "refresh_cache-761dbb06-6272-4941-a65f-5ad2f8cfbb70" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:11:46 compute-0 nova_compute[265391]: 2025-09-30 18:11:46.388 2 DEBUG nova.network.neutron [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:11:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:11:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:11:46.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:11:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:46 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c004200 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:11:47.157Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:11:47 compute-0 ceph-mon[73755]: pgmap v988: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:11:47 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:11:47 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:11:47 compute-0 nova_compute[265391]: 2025-09-30 18:11:47.273 2 DEBUG nova.network.neutron [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:11:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:47 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c003b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:11:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:11:47.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:11:47 compute-0 nova_compute[265391]: 2025-09-30 18:11:47.549 2 WARNING neutronclient.v2_0.client [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:11:47 compute-0 nova_compute[265391]: 2025-09-30 18:11:47.752 2 DEBUG nova.network.neutron [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Updating instance_info_cache with network_info: [{"id": "26d718f0-c921-490f-815d-8221b976f012", "address": "fa:16:3e:0e:22:1f", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d718f0-c9", "ovs_interfaceid": "26d718f0-c921-490f-815d-8221b976f012", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:11:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v989: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:11:48 compute-0 nova_compute[265391]: 2025-09-30 18:11:48.259 2 DEBUG oslo_concurrency.lockutils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Releasing lock "refresh_cache-761dbb06-6272-4941-a65f-5ad2f8cfbb70" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:11:48 compute-0 nova_compute[265391]: 2025-09-30 18:11:48.260 2 DEBUG nova.compute.manager [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Instance network_info: |[{"id": "26d718f0-c921-490f-815d-8221b976f012", "address": "fa:16:3e:0e:22:1f", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d718f0-c9", "ovs_interfaceid": "26d718f0-c921-490f-815d-8221b976f012", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Sep 30 18:11:48 compute-0 nova_compute[265391]: 2025-09-30 18:11:48.262 2 DEBUG nova.virt.libvirt.driver [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Start _get_guest_xml network_info=[{"id": "26d718f0-c921-490f-815d-8221b976f012", "address": "fa:16:3e:0e:22:1f", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d718f0-c9", "ovs_interfaceid": "26d718f0-c921-490f-815d-8221b976f012", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'image_id': '5b99cbca-b655-4be5-8343-cf504005c42e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Sep 30 18:11:48 compute-0 nova_compute[265391]: 2025-09-30 18:11:48.267 2 WARNING nova.virt.libvirt.driver [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:11:48 compute-0 nova_compute[265391]: 2025-09-30 18:11:48.268 2 DEBUG nova.virt.driver [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='5b99cbca-b655-4be5-8343-cf504005c42e', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteActionsViaActuator-server-1100695055', uuid='761dbb06-6272-4941-a65f-5ad2f8cfbb70'), owner=OwnerMeta(userid='dc3bb71c425f484fbc46f90978029403', username='tempest-TestExecuteActionsViaActuator-837729328-project-admin', projectid='ddd1f985d8b64b449c79d55b0cbd6422', projectname='tempest-TestExecuteActionsViaActuator-837729328'), image=ImageMeta(id='5b99cbca-b655-4be5-8343-cf504005c42e', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "26d718f0-c921-490f-815d-8221b976f012", "address": "fa:16:3e:0e:22:1f", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d718f0-c9", "ovs_interfaceid": "26d718f0-c921-490f-815d-8221b976f012", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20250919142712.b99a882.el10', creation_time=1759255908.2685108) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Sep 30 18:11:48 compute-0 nova_compute[265391]: 2025-09-30 18:11:48.272 2 DEBUG nova.virt.libvirt.host [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Sep 30 18:11:48 compute-0 nova_compute[265391]: 2025-09-30 18:11:48.272 2 DEBUG nova.virt.libvirt.host [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Sep 30 18:11:48 compute-0 nova_compute[265391]: 2025-09-30 18:11:48.274 2 DEBUG nova.virt.libvirt.host [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Sep 30 18:11:48 compute-0 nova_compute[265391]: 2025-09-30 18:11:48.275 2 DEBUG nova.virt.libvirt.host [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Sep 30 18:11:48 compute-0 nova_compute[265391]: 2025-09-30 18:11:48.275 2 DEBUG nova.virt.libvirt.driver [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Sep 30 18:11:48 compute-0 nova_compute[265391]: 2025-09-30 18:11:48.275 2 DEBUG nova.virt.hardware [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-09-30T18:04:50Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Sep 30 18:11:48 compute-0 nova_compute[265391]: 2025-09-30 18:11:48.275 2 DEBUG nova.virt.hardware [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Sep 30 18:11:48 compute-0 nova_compute[265391]: 2025-09-30 18:11:48.276 2 DEBUG nova.virt.hardware [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Sep 30 18:11:48 compute-0 nova_compute[265391]: 2025-09-30 18:11:48.276 2 DEBUG nova.virt.hardware [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Sep 30 18:11:48 compute-0 nova_compute[265391]: 2025-09-30 18:11:48.276 2 DEBUG nova.virt.hardware [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Sep 30 18:11:48 compute-0 nova_compute[265391]: 2025-09-30 18:11:48.276 2 DEBUG nova.virt.hardware [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Sep 30 18:11:48 compute-0 nova_compute[265391]: 2025-09-30 18:11:48.276 2 DEBUG nova.virt.hardware [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Sep 30 18:11:48 compute-0 nova_compute[265391]: 2025-09-30 18:11:48.276 2 DEBUG nova.virt.hardware [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Sep 30 18:11:48 compute-0 nova_compute[265391]: 2025-09-30 18:11:48.276 2 DEBUG nova.virt.hardware [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Sep 30 18:11:48 compute-0 nova_compute[265391]: 2025-09-30 18:11:48.277 2 DEBUG nova.virt.hardware [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Sep 30 18:11:48 compute-0 nova_compute[265391]: 2025-09-30 18:11:48.277 2 DEBUG nova.virt.hardware [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Sep 30 18:11:48 compute-0 nova_compute[265391]: 2025-09-30 18:11:48.279 2 DEBUG oslo_concurrency.processutils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:11:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:11:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:11:48.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:11:48 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:11:48 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3006239968' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:11:48 compute-0 nova_compute[265391]: 2025-09-30 18:11:48.714 2 DEBUG oslo_concurrency.processutils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
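[Editor's aside, not part of the captured log] The two oslo_concurrency entries above record nova-compute shelling out for the Ceph monitor map before it builds the RBD disk definition. A minimal sketch of reproducing that query by hand with Python's standard subprocess module, reusing the exact command line shown in the log (the "openstack" client id and /etc/ceph/ceph.conf path come from the entries above; JSON key names can vary slightly between Ceph releases):

    import json
    import subprocess

    # Same command nova-compute runs, per the processutils entries above.
    cmd = [
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    mon_map = json.loads(out.stdout)

    # List the monitor addresses that end up as <host> elements in the RBD disk XML.
    for mon in mon_map.get("mons", []):
        print(mon.get("name"), mon.get("public_addr") or mon.get("addr"))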
Sep 30 18:11:48 compute-0 nova_compute[265391]: 2025-09-30 18:11:48.745 2 DEBUG nova.storage.rbd_utils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] rbd image 761dbb06-6272-4941-a65f-5ad2f8cfbb70_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:11:48 compute-0 nova_compute[265391]: 2025-09-30 18:11:48.749 2 DEBUG oslo_concurrency.processutils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:11:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:11:48] "GET /metrics HTTP/1.1" 200 46633 "" "Prometheus/2.51.0"
Sep 30 18:11:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:11:48] "GET /metrics HTTP/1.1" 200 46633 "" "Prometheus/2.51.0"
Sep 30 18:11:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:48 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:11:49 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/470287802' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:11:49 compute-0 nova_compute[265391]: 2025-09-30 18:11:49.223 2 DEBUG oslo_concurrency.processutils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:11:49 compute-0 nova_compute[265391]: 2025-09-30 18:11:49.226 2 DEBUG nova.virt.libvirt.vif [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:11:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-1100695055',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-1100695055',id=4,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ddd1f985d8b64b449c79d55b0cbd6422',ramdisk_id='',reservation_id='r-9gd0nhai',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteActionsViaActuator-837729328',owner_user_name='tempest-TestExecuteActionsViaActuator-837729328-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:11:43Z,user_data=None,user_id='dc3bb71c425f484fbc46f90978029403',uuid=761dbb06-6272-4941-a65f-5ad2f8cfbb70,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "26d718f0-c921-490f-815d-8221b976f012", "address": "fa:16:3e:0e:22:1f", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d718f0-c9", "ovs_interfaceid": "26d718f0-c921-490f-815d-8221b976f012", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:11:49 compute-0 nova_compute[265391]: 2025-09-30 18:11:49.228 2 DEBUG nova.network.os_vif_util [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Converting VIF {"id": "26d718f0-c921-490f-815d-8221b976f012", "address": "fa:16:3e:0e:22:1f", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d718f0-c9", "ovs_interfaceid": "26d718f0-c921-490f-815d-8221b976f012", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:11:49 compute-0 nova_compute[265391]: 2025-09-30 18:11:49.229 2 DEBUG nova.network.os_vif_util [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:22:1f,bridge_name='br-int',has_traffic_filtering=True,id=26d718f0-c921-490f-815d-8221b976f012,network=Network(5fff1904-159a-4b76-8c46-feabf17f29ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26d718f0-c9') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:11:49 compute-0 nova_compute[265391]: 2025-09-30 18:11:49.231 2 DEBUG nova.objects.instance [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lazy-loading 'pci_devices' on Instance uuid 761dbb06-6272-4941-a65f-5ad2f8cfbb70 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:11:49 compute-0 ceph-mon[73755]: pgmap v989: 353 pgs: 353 active+clean; 41 MiB data, 191 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:11:49 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3006239968' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:11:49 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/470287802' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:11:49 compute-0 sudo[303942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:11:49 compute-0 sudo[303942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:11:49 compute-0 sudo[303942]: pam_unix(sudo:session): session closed for user root
Sep 30 18:11:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:49 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c002c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:49 compute-0 podman[303966]: 2025-09-30 18:11:49.381601215 +0000 UTC m=+0.074437825 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20250930, tcib_managed=true)
Sep 30 18:11:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:11:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:11:49.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:11:49 compute-0 nova_compute[265391]: 2025-09-30 18:11:49.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:49 compute-0 nova_compute[265391]: 2025-09-30 18:11:49.741 2 DEBUG nova.virt.libvirt.driver [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] End _get_guest_xml xml=<domain type="kvm">
Sep 30 18:11:49 compute-0 nova_compute[265391]:   <uuid>761dbb06-6272-4941-a65f-5ad2f8cfbb70</uuid>
Sep 30 18:11:49 compute-0 nova_compute[265391]:   <name>instance-00000004</name>
Sep 30 18:11:49 compute-0 nova_compute[265391]:   <memory>131072</memory>
Sep 30 18:11:49 compute-0 nova_compute[265391]:   <vcpu>1</vcpu>
Sep 30 18:11:49 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:11:49 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteActionsViaActuator-server-1100695055</nova:name>
Sep 30 18:11:49 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:11:48</nova:creationTime>
Sep 30 18:11:49 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:11:49 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:11:49 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:11:49 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:11:49 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:11:49 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:11:49 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:11:49 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:11:49 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:11:49 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:11:49 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:11:49 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:11:49 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:11:49 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:11:49 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:11:49 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:11:49 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:11:49 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:11:49 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:11:49 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:11:49 compute-0 nova_compute[265391]:         <nova:user uuid="dc3bb71c425f484fbc46f90978029403">tempest-TestExecuteActionsViaActuator-837729328-project-admin</nova:user>
Sep 30 18:11:49 compute-0 nova_compute[265391]:         <nova:project uuid="ddd1f985d8b64b449c79d55b0cbd6422">tempest-TestExecuteActionsViaActuator-837729328</nova:project>
Sep 30 18:11:49 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:11:49 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:11:49 compute-0 nova_compute[265391]:         <nova:port uuid="26d718f0-c921-490f-815d-8221b976f012">
Sep 30 18:11:49 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:11:49 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:11:49 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:11:49 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <system>
Sep 30 18:11:49 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:11:49 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:11:49 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:11:49 compute-0 nova_compute[265391]:       <entry name="serial">761dbb06-6272-4941-a65f-5ad2f8cfbb70</entry>
Sep 30 18:11:49 compute-0 nova_compute[265391]:       <entry name="uuid">761dbb06-6272-4941-a65f-5ad2f8cfbb70</entry>
Sep 30 18:11:49 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     </system>
Sep 30 18:11:49 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:11:49 compute-0 nova_compute[265391]:   <os>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="q35">hvm</type>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:   </os>
Sep 30 18:11:49 compute-0 nova_compute[265391]:   <features>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <vmcoreinfo/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:   </features>
Sep 30 18:11:49 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:11:49 compute-0 nova_compute[265391]:   <cpu mode="host-model" match="exact">
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <topology sockets="1" cores="1" threads="1"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:11:49 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:11:49 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/761dbb06-6272-4941-a65f-5ad2f8cfbb70_disk">
Sep 30 18:11:49 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:       </source>
Sep 30 18:11:49 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:11:49 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:11:49 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:11:49 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/761dbb06-6272-4941-a65f-5ad2f8cfbb70_disk.config">
Sep 30 18:11:49 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:       </source>
Sep 30 18:11:49 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:11:49 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:11:49 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:11:49 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:0e:22:1f"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:       <target dev="tap26d718f0-c9"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:11:49 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/761dbb06-6272-4941-a65f-5ad2f8cfbb70/console.log" append="off"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <video>
Sep 30 18:11:49 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     </video>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:11:49 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <controller type="usb" index="0"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:11:49 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:11:49 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:11:49 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:11:49 compute-0 nova_compute[265391]: </domain>
Sep 30 18:11:49 compute-0 nova_compute[265391]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
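[Editor's aside, not part of the captured log] The domain XML dump ends with the line above; the entries that follow cover VIF plugging and config-drive creation. A minimal sketch of pulling the RBD-backed disks and their monitor endpoints back out of such a dump with Python's standard xml.etree.ElementTree (the guest.xml file name is hypothetical; it stands for a saved copy of the XML logged above):

    import xml.etree.ElementTree as ET

    # Parse a saved copy of the domain XML logged above (hypothetical file name).
    root = ET.parse("guest.xml").getroot()

    for disk in root.findall("./devices/disk"):
        source = disk.find("source")
        if source is None or source.get("protocol") != "rbd":
            continue
        # e.g. vms/761dbb06-..._disk, served by the monitors on port 6789
        print("rbd image:", source.get("name"))
        for host in source.findall("host"):
            print("  monitor:", host.get("name"), host.get("port"))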
Sep 30 18:11:49 compute-0 nova_compute[265391]: 2025-09-30 18:11:49.742 2 DEBUG nova.compute.manager [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Preparing to wait for external event network-vif-plugged-26d718f0-c921-490f-815d-8221b976f012 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:11:49 compute-0 nova_compute[265391]: 2025-09-30 18:11:49.742 2 DEBUG oslo_concurrency.lockutils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Acquiring lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:11:49 compute-0 nova_compute[265391]: 2025-09-30 18:11:49.742 2 DEBUG oslo_concurrency.lockutils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:11:49 compute-0 nova_compute[265391]: 2025-09-30 18:11:49.743 2 DEBUG oslo_concurrency.lockutils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:11:49 compute-0 nova_compute[265391]: 2025-09-30 18:11:49.744 2 DEBUG nova.virt.libvirt.vif [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:11:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-1100695055',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-1100695055',id=4,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ddd1f985d8b64b449c79d55b0cbd6422',ramdisk_id='',reservation_id='r-9gd0nhai',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteActionsViaActuator-837729328',owner_user_name='tempest-TestExecuteActionsViaActuator-837729328-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:11:43Z,user_data=None,user_id='dc3bb71c425f484fbc46f90978029403',uuid=761dbb06-6272-4941-a65f-5ad2f8cfbb70,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "26d718f0-c921-490f-815d-8221b976f012", "address": "fa:16:3e:0e:22:1f", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d718f0-c9", "ovs_interfaceid": "26d718f0-c921-490f-815d-8221b976f012", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Sep 30 18:11:49 compute-0 nova_compute[265391]: 2025-09-30 18:11:49.744 2 DEBUG nova.network.os_vif_util [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Converting VIF {"id": "26d718f0-c921-490f-815d-8221b976f012", "address": "fa:16:3e:0e:22:1f", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d718f0-c9", "ovs_interfaceid": "26d718f0-c921-490f-815d-8221b976f012", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:11:49 compute-0 nova_compute[265391]: 2025-09-30 18:11:49.745 2 DEBUG nova.network.os_vif_util [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:22:1f,bridge_name='br-int',has_traffic_filtering=True,id=26d718f0-c921-490f-815d-8221b976f012,network=Network(5fff1904-159a-4b76-8c46-feabf17f29ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26d718f0-c9') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:11:49 compute-0 nova_compute[265391]: 2025-09-30 18:11:49.745 2 DEBUG os_vif [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:22:1f,bridge_name='br-int',has_traffic_filtering=True,id=26d718f0-c921-490f-815d-8221b976f012,network=Network(5fff1904-159a-4b76-8c46-feabf17f29ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26d718f0-c9') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Sep 30 18:11:49 compute-0 nova_compute[265391]: 2025-09-30 18:11:49.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:49 compute-0 nova_compute[265391]: 2025-09-30 18:11:49.746 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:11:49 compute-0 nova_compute[265391]: 2025-09-30 18:11:49.747 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:11:49 compute-0 nova_compute[265391]: 2025-09-30 18:11:49.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:49 compute-0 nova_compute[265391]: 2025-09-30 18:11:49.748 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': 'ea499168-1274-5a76-acdd-bd3df25ba063', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:11:49 compute-0 nova_compute[265391]: 2025-09-30 18:11:49.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:49 compute-0 nova_compute[265391]: 2025-09-30 18:11:49.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:11:49 compute-0 nova_compute[265391]: 2025-09-30 18:11:49.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:49 compute-0 nova_compute[265391]: 2025-09-30 18:11:49.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:49 compute-0 nova_compute[265391]: 2025-09-30 18:11:49.758 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26d718f0-c9, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:11:49 compute-0 nova_compute[265391]: 2025-09-30 18:11:49.759 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap26d718f0-c9, col_values=(('qos', UUID('09d4bfae-d17b-43fd-b0e0-8dfa242ddb31')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:11:49 compute-0 nova_compute[265391]: 2025-09-30 18:11:49.760 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap26d718f0-c9, col_values=(('external_ids', {'iface-id': '26d718f0-c921-490f-815d-8221b976f012', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0e:22:1f', 'vm-uuid': '761dbb06-6272-4941-a65f-5ad2f8cfbb70'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:11:49 compute-0 NetworkManager[45059]: <info>  [1759255909.7632] manager: (tap26d718f0-c9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Sep 30 18:11:49 compute-0 nova_compute[265391]: 2025-09-30 18:11:49.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:49 compute-0 nova_compute[265391]: 2025-09-30 18:11:49.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:11:49 compute-0 nova_compute[265391]: 2025-09-30 18:11:49.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:49 compute-0 nova_compute[265391]: 2025-09-30 18:11:49.817 2 INFO os_vif [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:22:1f,bridge_name='br-int',has_traffic_filtering=True,id=26d718f0-c921-490f-815d-8221b976f012,network=Network(5fff1904-159a-4b76-8c46-feabf17f29ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26d718f0-c9')
Sep 30 18:11:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v990: 353 pgs: 353 active+clean; 88 MiB data, 209 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:11:50 compute-0 ceph-mon[73755]: pgmap v990: 353 pgs: 353 active+clean; 88 MiB data, 209 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:11:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:11:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:11:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:11:50.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:11:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:50 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c004200 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:51 compute-0 nova_compute[265391]: 2025-09-30 18:11:51.358 2 DEBUG nova.virt.libvirt.driver [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:11:51 compute-0 nova_compute[265391]: 2025-09-30 18:11:51.359 2 DEBUG nova.virt.libvirt.driver [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:11:51 compute-0 nova_compute[265391]: 2025-09-30 18:11:51.359 2 DEBUG nova.virt.libvirt.driver [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] No VIF found with MAC fa:16:3e:0e:22:1f, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Sep 30 18:11:51 compute-0 nova_compute[265391]: 2025-09-30 18:11:51.359 2 INFO nova.virt.libvirt.driver [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Using config drive
Sep 30 18:11:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:51 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c003b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:51 compute-0 nova_compute[265391]: 2025-09-30 18:11:51.383 2 DEBUG nova.storage.rbd_utils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] rbd image 761dbb06-6272-4941-a65f-5ad2f8cfbb70_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:11:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:11:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:11:51.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:11:51 compute-0 nova_compute[265391]: 2025-09-30 18:11:51.897 2 WARNING neutronclient.v2_0.client [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:11:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v991: 353 pgs: 353 active+clean; 88 MiB data, 209 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:11:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:11:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:11:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:11:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:11:52.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:11:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:52 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:52 compute-0 nova_compute[265391]: 2025-09-30 18:11:52.811 2 INFO nova.virt.libvirt.driver [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Creating config drive at /var/lib/nova/instances/761dbb06-6272-4941-a65f-5ad2f8cfbb70/disk.config
Sep 30 18:11:52 compute-0 nova_compute[265391]: 2025-09-30 18:11:52.818 2 DEBUG oslo_concurrency.processutils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/761dbb06-6272-4941-a65f-5ad2f8cfbb70/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmpkxjd1fi6 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:11:52 compute-0 nova_compute[265391]: 2025-09-30 18:11:52.963 2 DEBUG oslo_concurrency.processutils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/761dbb06-6272-4941-a65f-5ad2f8cfbb70/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmpkxjd1fi6" returned: 0 in 0.145s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:11:52 compute-0 nova_compute[265391]: 2025-09-30 18:11:52.994 2 DEBUG nova.storage.rbd_utils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] rbd image 761dbb06-6272-4941-a65f-5ad2f8cfbb70_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:11:52 compute-0 nova_compute[265391]: 2025-09-30 18:11:52.998 2 DEBUG oslo_concurrency.processutils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/761dbb06-6272-4941-a65f-5ad2f8cfbb70/disk.config 761dbb06-6272-4941-a65f-5ad2f8cfbb70_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:11:53 compute-0 ceph-mon[73755]: pgmap v991: 353 pgs: 353 active+clean; 88 MiB data, 209 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:11:53 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:11:53 compute-0 nova_compute[265391]: 2025-09-30 18:11:53.176 2 DEBUG oslo_concurrency.processutils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/761dbb06-6272-4941-a65f-5ad2f8cfbb70/disk.config 761dbb06-6272-4941-a65f-5ad2f8cfbb70_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:11:53 compute-0 nova_compute[265391]: 2025-09-30 18:11:53.177 2 INFO nova.virt.libvirt.driver [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Deleting local config drive /var/lib/nova/instances/761dbb06-6272-4941-a65f-5ad2f8cfbb70/disk.config because it was imported into RBD.
Sep 30 18:11:53 compute-0 kernel: tap26d718f0-c9: entered promiscuous mode
Sep 30 18:11:53 compute-0 NetworkManager[45059]: <info>  [1759255913.2280] manager: (tap26d718f0-c9): new Tun device (/org/freedesktop/NetworkManager/Devices/33)
Sep 30 18:11:53 compute-0 nova_compute[265391]: 2025-09-30 18:11:53.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:53 compute-0 ovn_controller[156242]: 2025-09-30T18:11:53Z|00058|binding|INFO|Claiming lport 26d718f0-c921-490f-815d-8221b976f012 for this chassis.
Sep 30 18:11:53 compute-0 ovn_controller[156242]: 2025-09-30T18:11:53Z|00059|binding|INFO|26d718f0-c921-490f-815d-8221b976f012: Claiming fa:16:3e:0e:22:1f 10.100.0.7
Sep 30 18:11:53 compute-0 nova_compute[265391]: 2025-09-30 18:11:53.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:53 compute-0 nova_compute[265391]: 2025-09-30 18:11:53.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:53 compute-0 nova_compute[265391]: 2025-09-30 18:11:53.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:53.247 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:22:1f 10.100.0.7'], port_security=['fa:16:3e:0e:22:1f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '761dbb06-6272-4941-a65f-5ad2f8cfbb70', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5fff1904-159a-4b76-8c46-feabf17f29ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddd1f985d8b64b449c79d55b0cbd6422', 'neutron:revision_number': '4', 'neutron:security_group_ids': '34f3cf7b-94cf-408f-b3dc-ae0b57c009fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12c18a77-b252-4a3e-a181-b42644879446, chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=26d718f0-c921-490f-815d-8221b976f012) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:53.248 166158 INFO neutron.agent.ovn.metadata.agent [-] Port 26d718f0-c921-490f-815d-8221b976f012 in datapath 5fff1904-159a-4b76-8c46-feabf17f29ab bound to our chassis
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:53.249 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5fff1904-159a-4b76-8c46-feabf17f29ab
Sep 30 18:11:53 compute-0 systemd-udevd[304063]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:11:53 compute-0 systemd-machined[219917]: New machine qemu-3-instance-00000004.
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:53.264 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[2e9819f7-e187-4b56-9e93-fbb505a13951]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:53.264 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5fff1904-11 in ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:53.266 292173 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5fff1904-10 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:53.266 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[1a111616-6c40-4f68-bf58-bf5f444d9d47]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:53.267 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[63f0f3e4-aabf-465d-a606-5bd4989537ae]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:11:53 compute-0 NetworkManager[45059]: <info>  [1759255913.2747] device (tap26d718f0-c9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 18:11:53 compute-0 NetworkManager[45059]: <info>  [1759255913.2757] device (tap26d718f0-c9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:53.284 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[8e1ea6e3-3f6a-404f-8868-e9cb02fa94df]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:11:53 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000004.
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:53.301 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[86e3228c-e6eb-4568-86a6-bcfd6c3a16bf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:11:53 compute-0 ovn_controller[156242]: 2025-09-30T18:11:53Z|00060|binding|INFO|Setting lport 26d718f0-c921-490f-815d-8221b976f012 ovn-installed in OVS
Sep 30 18:11:53 compute-0 ovn_controller[156242]: 2025-09-30T18:11:53Z|00061|binding|INFO|Setting lport 26d718f0-c921-490f-815d-8221b976f012 up in Southbound
Sep 30 18:11:53 compute-0 nova_compute[265391]: 2025-09-30 18:11:53.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:53.332 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[856b5406-59c6-4439-a6ac-1dd6afe1433a]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:11:53 compute-0 NetworkManager[45059]: <info>  [1759255913.3369] manager: (tap5fff1904-10): new Veth device (/org/freedesktop/NetworkManager/Devices/34)
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:53.336 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[106a3100-5c28-4a01-91e6-0607f6d6b0d8]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:53.372 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[d32fab22-7728-4b94-9369-b393007e97e6]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:53.374 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[370ce38f-d431-495e-90b9-40ad4db51eed]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:11:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:53 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c002c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:53 compute-0 NetworkManager[45059]: <info>  [1759255913.4008] device (tap5fff1904-10): carrier: link connected
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:53.406 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[fe9cc202-fd09-4fe2-911f-d740526a9c7a]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:53.422 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[33ff43a0-c63b-485f-a7d7-c75eeaa28cb3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5fff1904-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:07:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451209, 'reachable_time': 29414, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304097, 'error': None, 'target': 'ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:53.439 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[60f54745-34ce-47ee-8219-15f05bdad438]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe18:77b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 451209, 'tstamp': 451209}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304098, 'error': None, 'target': 'ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:53.465 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[386f3b97-1ffc-4416-9d76-e4ef74f57225]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5fff1904-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:07:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451209, 'reachable_time': 29414, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304099, 'error': None, 'target': 'ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:11:53 compute-0 nova_compute[265391]: 2025-09-30 18:11:53.474 2 DEBUG nova.compute.manager [req-11b37611-4682-4920-adc9-12eafbdedf92 req-510a17cd-ac9f-4d41-a143-838f5d3af801 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Received event network-vif-plugged-26d718f0-c921-490f-815d-8221b976f012 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:11:53 compute-0 nova_compute[265391]: 2025-09-30 18:11:53.475 2 DEBUG oslo_concurrency.lockutils [req-11b37611-4682-4920-adc9-12eafbdedf92 req-510a17cd-ac9f-4d41-a143-838f5d3af801 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:11:53 compute-0 nova_compute[265391]: 2025-09-30 18:11:53.475 2 DEBUG oslo_concurrency.lockutils [req-11b37611-4682-4920-adc9-12eafbdedf92 req-510a17cd-ac9f-4d41-a143-838f5d3af801 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:11:53 compute-0 nova_compute[265391]: 2025-09-30 18:11:53.475 2 DEBUG oslo_concurrency.lockutils [req-11b37611-4682-4920-adc9-12eafbdedf92 req-510a17cd-ac9f-4d41-a143-838f5d3af801 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:11:53 compute-0 nova_compute[265391]: 2025-09-30 18:11:53.475 2 DEBUG nova.compute.manager [req-11b37611-4682-4920-adc9-12eafbdedf92 req-510a17cd-ac9f-4d41-a143-838f5d3af801 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Processing event network-vif-plugged-26d718f0-c921-490f-815d-8221b976f012 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:53.507 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[2a659b44-447b-4d48-8c0a-40030b89f3e9]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:11:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:11:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:11:53.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:53.585 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[c284a4f6-2006-4281-b3d0-5928798512a2]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:53.586 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5fff1904-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:53.586 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:53.586 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5fff1904-10, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:11:53 compute-0 NetworkManager[45059]: <info>  [1759255913.5884] manager: (tap5fff1904-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Sep 30 18:11:53 compute-0 nova_compute[265391]: 2025-09-30 18:11:53.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:53 compute-0 kernel: tap5fff1904-10: entered promiscuous mode
Sep 30 18:11:53 compute-0 nova_compute[265391]: 2025-09-30 18:11:53.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:53.590 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5fff1904-10, col_values=(('external_ids', {'iface-id': '3a8ea0a0-c179-4516-9404-04b68a17e79e'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:11:53 compute-0 ovn_controller[156242]: 2025-09-30T18:11:53Z|00062|binding|INFO|Releasing lport 3a8ea0a0-c179-4516-9404-04b68a17e79e from this chassis (sb_readonly=0)
Sep 30 18:11:53 compute-0 nova_compute[265391]: 2025-09-30 18:11:53.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:53.614 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[62e81b97-6517-4113-91c8-0bd674d1c617]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:53.614 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5fff1904-159a-4b76-8c46-feabf17f29ab.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5fff1904-159a-4b76-8c46-feabf17f29ab.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:53.615 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5fff1904-159a-4b76-8c46-feabf17f29ab.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5fff1904-159a-4b76-8c46-feabf17f29ab.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:53.615 166158 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for 5fff1904-159a-4b76-8c46-feabf17f29ab disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:53.615 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5fff1904-159a-4b76-8c46-feabf17f29ab.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5fff1904-159a-4b76-8c46-feabf17f29ab.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:53.615 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[c913d0e1-1ef8-4c54-aa8e-fad5eea4cab6]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:53.616 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5fff1904-159a-4b76-8c46-feabf17f29ab.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5fff1904-159a-4b76-8c46-feabf17f29ab.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:53.616 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[e5042383-4678-4b75-9280-8b219fa3f7f3]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:53.616 166158 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: global
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]:     log         /dev/log local0 debug
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]:     log-tag     haproxy-metadata-proxy-5fff1904-159a-4b76-8c46-feabf17f29ab
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]:     user        root
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]:     group       root
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]:     maxconn     1024
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]:     pidfile     /var/lib/neutron/external/pids/5fff1904-159a-4b76-8c46-feabf17f29ab.pid.haproxy
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]:     daemon
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: defaults
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]:     log global
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]:     mode http
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]:     option httplog
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]:     option dontlognull
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]:     option http-server-close
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]:     option forwardfor
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]:     retries                 3
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]:     timeout http-request    30s
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]:     timeout connect         30s
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]:     timeout client          32s
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]:     timeout server          32s
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]:     timeout http-keep-alive 30s
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: listen listener
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]:     bind 169.254.169.254:80
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]:     
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]:     server metadata /var/lib/neutron/metadata_proxy
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]:     http-request add-header X-OVN-Network-ID 5fff1904-159a-4b76-8c46-feabf17f29ab
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Sep 30 18:11:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:53.617 166158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab', 'env', 'PROCESS_TAG=haproxy-5fff1904-159a-4b76-8c46-feabf17f29ab', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5fff1904-159a-4b76-8c46-feabf17f29ab.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Sep 30 18:11:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:11:53.672Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:11:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v992: 353 pgs: 353 active+clean; 88 MiB data, 209 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Sep 30 18:11:54 compute-0 podman[304174]: 2025-09-30 18:11:54.015302211 +0000 UTC m=+0.041311385 container create 8234154c27df71632f82d331ad623da3f850f79179b7c7fadd301ccf4f375222 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Sep 30 18:11:54 compute-0 systemd[1]: Started libpod-conmon-8234154c27df71632f82d331ad623da3f850f79179b7c7fadd301ccf4f375222.scope.
Sep 30 18:11:54 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:11:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ce662d27ece298740ab445188ea14e28475723bf6fa9a7d319e531bd16ab7eb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Sep 30 18:11:54 compute-0 podman[304174]: 2025-09-30 18:11:53.99411581 +0000 UTC m=+0.020125014 image pull 8925e336c8a9c8e5b40d98e4715ad60992d6f21ad0a398170a686c34f922c024 38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Sep 30 18:11:54 compute-0 podman[304174]: 2025-09-30 18:11:54.094043867 +0000 UTC m=+0.120053041 container init 8234154c27df71632f82d331ad623da3f850f79179b7c7fadd301ccf4f375222 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.build-date=20250930)
Sep 30 18:11:54 compute-0 podman[304174]: 2025-09-30 18:11:54.099796926 +0000 UTC m=+0.125806100 container start 8234154c27df71632f82d331ad623da3f850f79179b7c7fadd301ccf4f375222 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true)
Sep 30 18:11:54 compute-0 neutron-haproxy-ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab[304190]: [NOTICE]   (304194) : New worker (304196) forked
Sep 30 18:11:54 compute-0 neutron-haproxy-ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab[304190]: [NOTICE]   (304194) : Loading success.
Sep 30 18:11:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:54.277 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:11:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:54.277 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:11:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:11:54.278 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:11:54 compute-0 nova_compute[265391]: 2025-09-30 18:11:54.345 2 DEBUG nova.compute.manager [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:11:54 compute-0 nova_compute[265391]: 2025-09-30 18:11:54.349 2 DEBUG nova.virt.libvirt.driver [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Sep 30 18:11:54 compute-0 nova_compute[265391]: 2025-09-30 18:11:54.352 2 INFO nova.virt.libvirt.driver [-] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Instance spawned successfully.
Sep 30 18:11:54 compute-0 nova_compute[265391]: 2025-09-30 18:11:54.353 2 DEBUG nova.virt.libvirt.driver [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Sep 30 18:11:54 compute-0 nova_compute[265391]: 2025-09-30 18:11:54.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:11:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:11:54.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:11:54 compute-0 nova_compute[265391]: 2025-09-30 18:11:54.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:54 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c004200 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:54 compute-0 nova_compute[265391]: 2025-09-30 18:11:54.868 2 DEBUG nova.virt.libvirt.driver [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:11:54 compute-0 nova_compute[265391]: 2025-09-30 18:11:54.868 2 DEBUG nova.virt.libvirt.driver [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:11:54 compute-0 nova_compute[265391]: 2025-09-30 18:11:54.869 2 DEBUG nova.virt.libvirt.driver [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:11:54 compute-0 nova_compute[265391]: 2025-09-30 18:11:54.870 2 DEBUG nova.virt.libvirt.driver [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:11:54 compute-0 nova_compute[265391]: 2025-09-30 18:11:54.871 2 DEBUG nova.virt.libvirt.driver [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:11:54 compute-0 nova_compute[265391]: 2025-09-30 18:11:54.871 2 DEBUG nova.virt.libvirt.driver [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:11:54 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Sep 30 18:11:55 compute-0 ceph-mon[73755]: pgmap v992: 353 pgs: 353 active+clean; 88 MiB data, 209 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Sep 30 18:11:55 compute-0 nova_compute[265391]: 2025-09-30 18:11:55.384 2 INFO nova.compute.manager [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Took 11.37 seconds to spawn the instance on the hypervisor.
Sep 30 18:11:55 compute-0 nova_compute[265391]: 2025-09-30 18:11:55.384 2 DEBUG nova.compute.manager [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Sep 30 18:11:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:55 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c003b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:11:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:11:55.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:11:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:11:55 compute-0 nova_compute[265391]: 2025-09-30 18:11:55.702 2 DEBUG nova.compute.manager [req-fafd6383-e1fa-431d-8679-3706c5a8fd04 req-d360c670-5812-4b11-9590-772ce02d9b3f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Received event network-vif-plugged-26d718f0-c921-490f-815d-8221b976f012 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:11:55 compute-0 nova_compute[265391]: 2025-09-30 18:11:55.703 2 DEBUG oslo_concurrency.lockutils [req-fafd6383-e1fa-431d-8679-3706c5a8fd04 req-d360c670-5812-4b11-9590-772ce02d9b3f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:11:55 compute-0 nova_compute[265391]: 2025-09-30 18:11:55.703 2 DEBUG oslo_concurrency.lockutils [req-fafd6383-e1fa-431d-8679-3706c5a8fd04 req-d360c670-5812-4b11-9590-772ce02d9b3f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:11:55 compute-0 nova_compute[265391]: 2025-09-30 18:11:55.703 2 DEBUG oslo_concurrency.lockutils [req-fafd6383-e1fa-431d-8679-3706c5a8fd04 req-d360c670-5812-4b11-9590-772ce02d9b3f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:11:55 compute-0 nova_compute[265391]: 2025-09-30 18:11:55.703 2 DEBUG nova.compute.manager [req-fafd6383-e1fa-431d-8679-3706c5a8fd04 req-d360c670-5812-4b11-9590-772ce02d9b3f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] No waiting events found dispatching network-vif-plugged-26d718f0-c921-490f-815d-8221b976f012 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:11:55 compute-0 nova_compute[265391]: 2025-09-30 18:11:55.704 2 WARNING nova.compute.manager [req-fafd6383-e1fa-431d-8679-3706c5a8fd04 req-d360c670-5812-4b11-9590-772ce02d9b3f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Received unexpected event network-vif-plugged-26d718f0-c921-490f-815d-8221b976f012 for instance with vm_state active and task_state None.
Sep 30 18:11:55 compute-0 nova_compute[265391]: 2025-09-30 18:11:55.931 2 INFO nova.compute.manager [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Took 17.08 seconds to build instance.
Sep 30 18:11:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v993: 353 pgs: 353 active+clean; 88 MiB data, 209 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Sep 30 18:11:56 compute-0 nova_compute[265391]: 2025-09-30 18:11:56.438 2 DEBUG oslo_concurrency.lockutils [None req-f6df02f9-a9bb-4967-bdeb-cfbf5bf0584e dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.607s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:11:56 compute-0 podman[304209]: 2025-09-30 18:11:56.573074003 +0000 UTC m=+0.089579099 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Sep 30 18:11:56 compute-0 podman[304210]: 2025-09-30 18:11:56.578035961 +0000 UTC m=+0.093202852 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:11:56 compute-0 podman[304211]: 2025-09-30 18:11:56.60415 +0000 UTC m=+0.117908444 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, name=ubi9-minimal, config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., distribution-scope=public, release=1755695350, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Sep 30 18:11:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:11:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:11:56.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:11:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:56 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:57 compute-0 ceph-mon[73755]: pgmap v993: 353 pgs: 353 active+clean; 88 MiB data, 209 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Sep 30 18:11:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:11:57.158Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:11:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:57 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c002c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:11:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/564030798' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:11:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:11:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/564030798' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:11:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:11:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:11:57.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:11:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v994: 353 pgs: 353 active+clean; 88 MiB data, 209 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Sep 30 18:11:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/564030798' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:11:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/564030798' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:11:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:11:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:11:58.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:11:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:11:58] "GET /metrics HTTP/1.1" 200 46646 "" "Prometheus/2.51.0"
Sep 30 18:11:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:11:58] "GET /metrics HTTP/1.1" 200 46646 "" "Prometheus/2.51.0"
Sep 30 18:11:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:58 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c004200 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:59 compute-0 ceph-mon[73755]: pgmap v994: 353 pgs: 353 active+clean; 88 MiB data, 209 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Sep 30 18:11:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:11:59 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c003b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:11:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:11:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:11:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:11:59.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:11:59 compute-0 nova_compute[265391]: 2025-09-30 18:11:59.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:59 compute-0 podman[276673]: time="2025-09-30T18:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:11:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:11:59 compute-0 nova_compute[265391]: 2025-09-30 18:11:59.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:11:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10723 "" "Go-http-client/1.1"
Sep 30 18:12:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v995: 353 pgs: 353 active+clean; 88 MiB data, 209 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 18:12:00 compute-0 ceph-mon[73755]: pgmap v995: 353 pgs: 353 active+clean; 88 MiB data, 209 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 18:12:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:12:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:12:00.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:00 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:01 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c002c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:01 compute-0 openstack_network_exporter[279566]: ERROR   18:12:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:12:01 compute-0 openstack_network_exporter[279566]: ERROR   18:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:12:01 compute-0 openstack_network_exporter[279566]: ERROR   18:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:12:01 compute-0 openstack_network_exporter[279566]: ERROR   18:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:12:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:12:01 compute-0 openstack_network_exporter[279566]: ERROR   18:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:12:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:12:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:12:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:12:01.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:12:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v996: 353 pgs: 353 active+clean; 88 MiB data, 209 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:12:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=infra.usagestats t=2025-09-30T18:12:02.533023634Z level=info msg="Usage stats are ready to report"
Sep 30 18:12:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:12:02.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:02 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c004200 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:03 compute-0 ceph-mon[73755]: pgmap v996: 353 pgs: 353 active+clean; 88 MiB data, 209 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:12:03 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/95796165' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:12:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:03 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c003b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:12:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:12:03.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:12:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:12:03.675Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:12:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v997: 353 pgs: 353 active+clean; 88 MiB data, 209 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:12:04 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:04.141 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:12:04 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:04.145 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:12:04 compute-0 nova_compute[265391]: 2025-09-30 18:12:04.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:04 compute-0 nova_compute[265391]: 2025-09-30 18:12:04.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:12:04.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:04 compute-0 nova_compute[265391]: 2025-09-30 18:12:04.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:04 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:05 compute-0 ceph-mon[73755]: pgmap v997: 353 pgs: 353 active+clean; 88 MiB data, 209 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:12:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:05 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c002c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:12:05.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:12:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v998: 353 pgs: 353 active+clean; 88 MiB data, 209 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 68 op/s
Sep 30 18:12:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:12:06.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:06 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc22c004200 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:07 compute-0 ceph-mon[73755]: pgmap v998: 353 pgs: 353 active+clean; 88 MiB data, 209 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 68 op/s
Sep 30 18:12:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:12:07.159Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:12:07 compute-0 ovn_controller[156242]: 2025-09-30T18:12:07Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0e:22:1f 10.100.0.7
Sep 30 18:12:07 compute-0 ovn_controller[156242]: 2025-09-30T18:12:07Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0e:22:1f 10.100.0.7
Sep 30 18:12:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:12:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:12:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:07 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c003b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:12:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:12:07.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005229063904082829 of space, bias 1.0, pg target 0.10458127808165658 quantized to 32 (current 32)
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:12:07
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', '.rgw.root', '.nfs', 'vms', 'volumes', 'default.rgw.control', 'images', 'backups', 'cephfs.cephfs.data']
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:12:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:12:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v999: 353 pgs: 353 active+clean; 88 MiB data, 209 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 68 op/s
Sep 30 18:12:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:12:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:12:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:12:08.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:12:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:12:08] "GET /metrics HTTP/1.1" 200 46646 "" "Prometheus/2.51.0"
Sep 30 18:12:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:12:08] "GET /metrics HTTP/1.1" 200 46646 "" "Prometheus/2.51.0"
Sep 30 18:12:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:08 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:09 compute-0 ceph-mon[73755]: pgmap v999: 353 pgs: 353 active+clean; 88 MiB data, 209 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 68 op/s
Sep 30 18:12:09 compute-0 sudo[304282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:12:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:09 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c002c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:09 compute-0 sudo[304282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:12:09 compute-0 sudo[304282]: pam_unix(sudo:session): session closed for user root
Sep 30 18:12:09 compute-0 nova_compute[265391]: 2025-09-30 18:12:09.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:12:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:12:09.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:12:09 compute-0 nova_compute[265391]: 2025-09-30 18:12:09.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1000: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 157 op/s
Sep 30 18:12:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:12:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:12:10.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:10 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c002c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:11 compute-0 ceph-mon[73755]: pgmap v1000: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 157 op/s
Sep 30 18:12:11 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3709444185' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:12:11 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3388835610' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:12:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:11 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:12:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:12:11.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:12:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1001: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 328 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Sep 30 18:12:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:12.152 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:12:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:12:12.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:12 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218000f30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:13 compute-0 ceph-mon[73755]: pgmap v1001: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 328 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Sep 30 18:12:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:13 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2240014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:12:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:12:13.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:12:13 compute-0 podman[304315]: 2025-09-30 18:12:13.549364054 +0000 UTC m=+0.085711118 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Sep 30 18:12:13 compute-0 podman[304314]: 2025-09-30 18:12:13.555197306 +0000 UTC m=+0.095265887 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true)
Sep 30 18:12:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:12:13.676Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:12:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1002: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 329 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Sep 30 18:12:14 compute-0 sshd-session[304311]: Invalid user www from 14.225.220.107 port 47752
Sep 30 18:12:14 compute-0 sshd-session[304311]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:12:14 compute-0 sshd-session[304311]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107
Sep 30 18:12:14 compute-0 nova_compute[265391]: 2025-09-30 18:12:14.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:12:14.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:14 compute-0 nova_compute[265391]: 2025-09-30 18:12:14.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:14 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2240023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:15 compute-0 ceph-mon[73755]: pgmap v1002: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 329 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Sep 30 18:12:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:15 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:12:15.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:12:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1003: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 328 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Sep 30 18:12:16 compute-0 sshd-session[304311]: Failed password for invalid user www from 14.225.220.107 port 47752 ssh2
Sep 30 18:12:16 compute-0 sshd-session[304311]: Received disconnect from 14.225.220.107 port 47752:11: Bye Bye [preauth]
Sep 30 18:12:16 compute-0 sshd-session[304311]: Disconnected from invalid user www 14.225.220.107 port 47752 [preauth]
Sep 30 18:12:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:12:16.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:16 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:12:17.160Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:12:17 compute-0 ceph-mon[73755]: pgmap v1003: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 328 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Sep 30 18:12:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:17 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:12:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:12:17.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:12:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1004: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 328 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Sep 30 18:12:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:12:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:12:18.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:12:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:12:18] "GET /metrics HTTP/1.1" 200 46646 "" "Prometheus/2.51.0"
Sep 30 18:12:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:12:18] "GET /metrics HTTP/1.1" 200 46646 "" "Prometheus/2.51.0"
Sep 30 18:12:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:18 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2240023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:19 compute-0 ceph-mon[73755]: pgmap v1004: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 328 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Sep 30 18:12:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:19 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:19 compute-0 unix_chkpwd[304372]: password check failed for user (root)
Sep 30 18:12:19 compute-0 sshd-session[304367]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158  user=root
Sep 30 18:12:19 compute-0 podman[304373]: 2025-09-30 18:12:19.541625155 +0000 UTC m=+0.071664733 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Sep 30 18:12:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:12:19.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:19 compute-0 nova_compute[265391]: 2025-09-30 18:12:19.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:19 compute-0 nova_compute[265391]: 2025-09-30 18:12:19.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1005: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 163 op/s
Sep 30 18:12:20 compute-0 sshd[192864]: Timeout before authentication for connection from 115.190.39.222 to 38.102.83.202, pid = 301456
Sep 30 18:12:20 compute-0 ceph-mon[73755]: pgmap v1005: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 163 op/s
Sep 30 18:12:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:12:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:12:20.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:20 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218000f30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:21 compute-0 sshd-session[304367]: Failed password for root from 45.252.249.158 port 43502 ssh2
Sep 30 18:12:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:21 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218000f30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:21 compute-0 nova_compute[265391]: 2025-09-30 18:12:21.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:12:21 compute-0 nova_compute[265391]: 2025-09-30 18:12:21.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:12:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:12:21.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:21 compute-0 nova_compute[265391]: 2025-09-30 18:12:21.942 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:12:21 compute-0 nova_compute[265391]: 2025-09-30 18:12:21.943 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:12:21 compute-0 nova_compute[265391]: 2025-09-30 18:12:21.943 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:12:21 compute-0 nova_compute[265391]: 2025-09-30 18:12:21.943 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:12:21 compute-0 nova_compute[265391]: 2025-09-30 18:12:21.943 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:12:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1006: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 74 op/s
Sep 30 18:12:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:12:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:12:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:12:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2867667517' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:12:22 compute-0 nova_compute[265391]: 2025-09-30 18:12:22.427 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:12:22 compute-0 sshd-session[304367]: Received disconnect from 45.252.249.158 port 43502:11: Bye Bye [preauth]
Sep 30 18:12:22 compute-0 sshd-session[304367]: Disconnected from authenticating user root 45.252.249.158 port 43502 [preauth]
Sep 30 18:12:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:12:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:12:22.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:12:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:22 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2240023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:23 compute-0 ceph-mon[73755]: pgmap v1006: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 74 op/s
Sep 30 18:12:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:12:23 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2867667517' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:12:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:23 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:23 compute-0 nova_compute[265391]: 2025-09-30 18:12:23.470 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:12:23 compute-0 nova_compute[265391]: 2025-09-30 18:12:23.471 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:12:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:12:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:12:23.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:12:23 compute-0 nova_compute[265391]: 2025-09-30 18:12:23.642 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:12:23 compute-0 nova_compute[265391]: 2025-09-30 18:12:23.643 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:12:23 compute-0 nova_compute[265391]: 2025-09-30 18:12:23.663 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.021s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:12:23 compute-0 nova_compute[265391]: 2025-09-30 18:12:23.664 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4334MB free_disk=39.92576217651367GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:12:23 compute-0 nova_compute[265391]: 2025-09-30 18:12:23.664 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:12:23 compute-0 nova_compute[265391]: 2025-09-30 18:12:23.664 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:12:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:12:23.676Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:12:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1007: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 74 op/s
Sep 30 18:12:24 compute-0 nova_compute[265391]: 2025-09-30 18:12:24.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:24 compute-0 nova_compute[265391]: 2025-09-30 18:12:24.717 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Instance 761dbb06-6272-4941-a65f-5ad2f8cfbb70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Sep 30 18:12:24 compute-0 nova_compute[265391]: 2025-09-30 18:12:24.718 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:12:24 compute-0 nova_compute[265391]: 2025-09-30 18:12:24.718 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=39GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:12:23 up  1:15,  0 user,  load average: 0.64, 0.74, 0.86\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_None': '1', 'num_os_type_None': '1', 'num_proj_ddd1f985d8b64b449c79d55b0cbd6422': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:12:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:12:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:12:24.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:12:24 compute-0 nova_compute[265391]: 2025-09-30 18:12:24.754 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:12:24 compute-0 nova_compute[265391]: 2025-09-30 18:12:24.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:24 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218000f30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:25 compute-0 ceph-mon[73755]: pgmap v1007: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 74 op/s
Sep 30 18:12:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:12:25 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3224065394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:12:25 compute-0 nova_compute[265391]: 2025-09-30 18:12:25.251 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:12:25 compute-0 nova_compute[265391]: 2025-09-30 18:12:25.259 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:12:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:25 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:12:25.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:12:25 compute-0 nova_compute[265391]: 2025-09-30 18:12:25.767 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:12:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1008: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Sep 30 18:12:26 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3224065394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:12:26 compute-0 nova_compute[265391]: 2025-09-30 18:12:26.282 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:12:26 compute-0 nova_compute[265391]: 2025-09-30 18:12:26.283 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.618s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:12:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:12:26.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:26 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224003910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:27 compute-0 ceph-mon[73755]: pgmap v1008: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Sep 30 18:12:27 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3669096758' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:12:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:12:27.161Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:12:27 compute-0 nova_compute[265391]: 2025-09-30 18:12:27.282 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:12:27 compute-0 nova_compute[265391]: 2025-09-30 18:12:27.282 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:12:27 compute-0 nova_compute[265391]: 2025-09-30 18:12:27.283 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:12:27 compute-0 nova_compute[265391]: 2025-09-30 18:12:27.424 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:12:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:27 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2180010d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:27 compute-0 nova_compute[265391]: 2025-09-30 18:12:27.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:12:27 compute-0 podman[304448]: 2025-09-30 18:12:27.526247938 +0000 UTC m=+0.063121041 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2)
Sep 30 18:12:27 compute-0 podman[304450]: 2025-09-30 18:12:27.539393869 +0000 UTC m=+0.067574966 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, version=9.6, distribution-scope=public, io.openshift.expose-services=)
Sep 30 18:12:27 compute-0 podman[304449]: 2025-09-30 18:12:27.545257672 +0000 UTC m=+0.080206885 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250930, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 18:12:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:12:27.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1009: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Sep 30 18:12:28 compute-0 nova_compute[265391]: 2025-09-30 18:12:28.118 2 DEBUG nova.compute.manager [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Stashing vm_state: active _prep_resize /usr/lib/python3.12/site-packages/nova/compute/manager.py:6169
Sep 30 18:12:28 compute-0 nova_compute[265391]: 2025-09-30 18:12:28.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:12:28 compute-0 nova_compute[265391]: 2025-09-30 18:12:28.430 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:12:28 compute-0 nova_compute[265391]: 2025-09-30 18:12:28.640 2 DEBUG oslo_concurrency.lockutils [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:12:28 compute-0 nova_compute[265391]: 2025-09-30 18:12:28.641 2 DEBUG oslo_concurrency.lockutils [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:12:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:12:28.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:12:28] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:12:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:12:28] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:12:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:28 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:29 compute-0 nova_compute[265391]: 2025-09-30 18:12:29.154 2 DEBUG nova.virt.hardware [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Sep 30 18:12:29 compute-0 ceph-mon[73755]: pgmap v1009: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Sep 30 18:12:29 compute-0 nova_compute[265391]: 2025-09-30 18:12:29.155 2 INFO nova.compute.claims [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Claim successful on node compute-0.ctlplane.example.com
Sep 30 18:12:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:29 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:29 compute-0 sudo[304507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:12:29 compute-0 sudo[304507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:12:29 compute-0 sudo[304507]: pam_unix(sudo:session): session closed for user root
Sep 30 18:12:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:12:29.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:29 compute-0 nova_compute[265391]: 2025-09-30 18:12:29.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:29 compute-0 nova_compute[265391]: 2025-09-30 18:12:29.665 2 INFO nova.compute.resource_tracker [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Updating resource usage from migration d5fb4765-43c9-45e5-9a44-0773cf0d1510
Sep 30 18:12:29 compute-0 nova_compute[265391]: 2025-09-30 18:12:29.728 2 DEBUG oslo_concurrency.processutils [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:12:29 compute-0 podman[276673]: time="2025-09-30T18:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:12:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:12:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10729 "" "Go-http-client/1.1"
Sep 30 18:12:29 compute-0 nova_compute[265391]: 2025-09-30 18:12:29.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1010: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 75 op/s
Sep 30 18:12:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:12:30 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1422138412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:12:30 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/433824411' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:12:30 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1422138412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:12:30 compute-0 nova_compute[265391]: 2025-09-30 18:12:30.178 2 DEBUG oslo_concurrency.processutils [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:12:30 compute-0 nova_compute[265391]: 2025-09-30 18:12:30.184 2 DEBUG nova.compute.provider_tree [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:12:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:12:30 compute-0 nova_compute[265391]: 2025-09-30 18:12:30.693 2 DEBUG nova.scheduler.client.report [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:12:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:12:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:12:30.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:12:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:30 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224003910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:31 compute-0 ceph-mon[73755]: pgmap v1010: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 75 op/s
Sep 30 18:12:31 compute-0 nova_compute[265391]: 2025-09-30 18:12:31.204 2 DEBUG oslo_concurrency.lockutils [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 2.563s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:12:31 compute-0 nova_compute[265391]: 2025-09-30 18:12:31.204 2 INFO nova.compute.manager [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Migrating
Sep 30 18:12:31 compute-0 nova_compute[265391]: 2025-09-30 18:12:31.205 2 DEBUG oslo_concurrency.lockutils [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:12:31 compute-0 nova_compute[265391]: 2025-09-30 18:12:31.205 2 DEBUG oslo_concurrency.lockutils [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:12:31 compute-0 openstack_network_exporter[279566]: ERROR   18:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:12:31 compute-0 openstack_network_exporter[279566]: ERROR   18:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:12:31 compute-0 openstack_network_exporter[279566]: ERROR   18:12:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:12:31 compute-0 openstack_network_exporter[279566]: ERROR   18:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:12:31 compute-0 openstack_network_exporter[279566]: ERROR   18:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:12:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:31 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2180010d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:12:31.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:31 compute-0 nova_compute[265391]: 2025-09-30 18:12:31.711 2 INFO nova.compute.rpcapi [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Automatically selected compute RPC version 6.4 from minimum service version 70
Sep 30 18:12:31 compute-0 nova_compute[265391]: 2025-09-30 18:12:31.713 2 DEBUG oslo_concurrency.lockutils [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:12:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1011: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 3.5 KiB/s rd, 1023 B/s wr, 1 op/s
Sep 30 18:12:32 compute-0 nova_compute[265391]: 2025-09-30 18:12:32.244 2 DEBUG oslo_concurrency.lockutils [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-761dbb06-6272-4941-a65f-5ad2f8cfbb70" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:12:32 compute-0 nova_compute[265391]: 2025-09-30 18:12:32.245 2 DEBUG oslo_concurrency.lockutils [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-761dbb06-6272-4941-a65f-5ad2f8cfbb70" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:12:32 compute-0 nova_compute[265391]: 2025-09-30 18:12:32.245 2 DEBUG nova.network.neutron [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:12:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:12:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:12:32.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:12:32 compute-0 nova_compute[265391]: 2025-09-30 18:12:32.753 2 WARNING neutronclient.v2_0.client [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:12:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:32 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:33 compute-0 ceph-mon[73755]: pgmap v1011: 353 pgs: 353 active+clean; 167 MiB data, 256 MiB used, 40 GiB / 40 GiB avail; 3.5 KiB/s rd, 1023 B/s wr, 1 op/s
Sep 30 18:12:33 compute-0 nova_compute[265391]: 2025-09-30 18:12:33.359 2 WARNING neutronclient.v2_0.client [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:12:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:33 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:33 compute-0 nova_compute[265391]: 2025-09-30 18:12:33.512 2 DEBUG nova.network.neutron [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Updating instance_info_cache with network_info: [{"id": "26d718f0-c921-490f-815d-8221b976f012", "address": "fa:16:3e:0e:22:1f", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d718f0-c9", "ovs_interfaceid": "26d718f0-c921-490f-815d-8221b976f012", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:12:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:12:33.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:12:33.677Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:12:34 compute-0 nova_compute[265391]: 2025-09-30 18:12:34.023 2 DEBUG oslo_concurrency.lockutils [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-761dbb06-6272-4941-a65f-5ad2f8cfbb70" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:12:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1012: 353 pgs: 353 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 271 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Sep 30 18:12:34 compute-0 nova_compute[265391]: 2025-09-30 18:12:34.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:12:34.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:34 compute-0 nova_compute[265391]: 2025-09-30 18:12:34.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:34 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:35 compute-0 ceph-mon[73755]: pgmap v1012: 353 pgs: 353 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 271 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Sep 30 18:12:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:35 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218001270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:12:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:12:35.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:12:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:12:35 compute-0 nova_compute[265391]: 2025-09-30 18:12:35.663 2 DEBUG nova.virt.libvirt.driver [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12417
Sep 30 18:12:35 compute-0 nova_compute[265391]: 2025-09-30 18:12:35.667 2 DEBUG nova.virt.libvirt.driver [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4247
Sep 30 18:12:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1013: 353 pgs: 353 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 271 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Sep 30 18:12:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:12:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3554528154' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:12:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:12:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3554528154' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:12:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:12:36.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:36 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc21000bd00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:12:37.162Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:12:37 compute-0 ceph-mon[73755]: pgmap v1013: 353 pgs: 353 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 271 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Sep 30 18:12:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3554528154' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:12:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3554528154' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:12:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:12:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:12:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:12:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:12:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:12:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:12:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:12:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:12:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:37 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:12:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:12:37.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:12:37 compute-0 kernel: tap26d718f0-c9 (unregistering): left promiscuous mode
Sep 30 18:12:37 compute-0 NetworkManager[45059]: <info>  [1759255957.9088] device (tap26d718f0-c9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 18:12:37 compute-0 nova_compute[265391]: 2025-09-30 18:12:37.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:37 compute-0 ovn_controller[156242]: 2025-09-30T18:12:37Z|00063|binding|INFO|Releasing lport 26d718f0-c921-490f-815d-8221b976f012 from this chassis (sb_readonly=0)
Sep 30 18:12:37 compute-0 ovn_controller[156242]: 2025-09-30T18:12:37Z|00064|binding|INFO|Setting lport 26d718f0-c921-490f-815d-8221b976f012 down in Southbound
Sep 30 18:12:37 compute-0 ovn_controller[156242]: 2025-09-30T18:12:37Z|00065|binding|INFO|Removing iface tap26d718f0-c9 ovn-installed in OVS
Sep 30 18:12:37 compute-0 nova_compute[265391]: 2025-09-30 18:12:37.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:37.976 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:22:1f 10.100.0.7'], port_security=['fa:16:3e:0e:22:1f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '761dbb06-6272-4941-a65f-5ad2f8cfbb70', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5fff1904-159a-4b76-8c46-feabf17f29ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddd1f985d8b64b449c79d55b0cbd6422', 'neutron:revision_number': '5', 'neutron:security_group_ids': '34f3cf7b-94cf-408f-b3dc-ae0b57c009fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12c18a77-b252-4a3e-a181-b42644879446, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=26d718f0-c921-490f-815d-8221b976f012) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:12:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:37.978 166158 INFO neutron.agent.ovn.metadata.agent [-] Port 26d718f0-c921-490f-815d-8221b976f012 in datapath 5fff1904-159a-4b76-8c46-feabf17f29ab unbound from our chassis
Sep 30 18:12:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:37.980 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5fff1904-159a-4b76-8c46-feabf17f29ab, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:12:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:37.981 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[b19a8940-5f15-42f0-9125-1cd0a25b459a]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:12:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:37.981 166158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab namespace which is not needed anymore
Sep 30 18:12:37 compute-0 nova_compute[265391]: 2025-09-30 18:12:37.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:38 compute-0 sshd-session[304559]: Invalid user baidu from 154.125.120.7 port 47841
Sep 30 18:12:38 compute-0 sshd-session[304559]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:12:38 compute-0 sshd-session[304559]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=154.125.120.7
Sep 30 18:12:38 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000004.scope: Deactivated successfully.
Sep 30 18:12:38 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000004.scope: Consumed 14.320s CPU time.
Sep 30 18:12:38 compute-0 systemd-machined[219917]: Machine qemu-3-instance-00000004 terminated.
Sep 30 18:12:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1014: 353 pgs: 353 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 271 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Sep 30 18:12:38 compute-0 neutron-haproxy-ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab[304190]: [NOTICE]   (304194) : haproxy version is 3.0.5-8e879a5
Sep 30 18:12:38 compute-0 podman[304589]: 2025-09-30 18:12:38.152468961 +0000 UTC m=+0.051396906 container kill 8234154c27df71632f82d331ad623da3f850f79179b7c7fadd301ccf4f375222 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2)
Sep 30 18:12:38 compute-0 neutron-haproxy-ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab[304190]: [NOTICE]   (304194) : path to executable is /usr/sbin/haproxy
Sep 30 18:12:38 compute-0 neutron-haproxy-ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab[304190]: [WARNING]  (304194) : Exiting Master process...
Sep 30 18:12:38 compute-0 neutron-haproxy-ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab[304190]: [ALERT]    (304194) : Current worker (304196) exited with code 143 (Terminated)
Sep 30 18:12:38 compute-0 neutron-haproxy-ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab[304190]: [WARNING]  (304194) : All workers exited. Exiting... (0)
Sep 30 18:12:38 compute-0 systemd[1]: libpod-8234154c27df71632f82d331ad623da3f850f79179b7c7fadd301ccf4f375222.scope: Deactivated successfully.
Sep 30 18:12:38 compute-0 kernel: tap26d718f0-c9: entered promiscuous mode
Sep 30 18:12:38 compute-0 NetworkManager[45059]: <info>  [1759255958.2004] manager: (tap26d718f0-c9): new Tun device (/org/freedesktop/NetworkManager/Devices/36)
Sep 30 18:12:38 compute-0 systemd-udevd[304569]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:12:38 compute-0 podman[304604]: 2025-09-30 18:12:38.203376964 +0000 UTC m=+0.028636255 container died 8234154c27df71632f82d331ad623da3f850f79179b7c7fadd301ccf4f375222 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4)
Sep 30 18:12:38 compute-0 nova_compute[265391]: 2025-09-30 18:12:38.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:38 compute-0 ovn_controller[156242]: 2025-09-30T18:12:38Z|00066|binding|INFO|Claiming lport 26d718f0-c921-490f-815d-8221b976f012 for this chassis.
Sep 30 18:12:38 compute-0 ovn_controller[156242]: 2025-09-30T18:12:38Z|00067|binding|INFO|26d718f0-c921-490f-815d-8221b976f012: Claiming fa:16:3e:0e:22:1f 10.100.0.7
Sep 30 18:12:38 compute-0 kernel: tap26d718f0-c9 (unregistering): left promiscuous mode
Sep 30 18:12:38 compute-0 ovn_controller[156242]: 2025-09-30T18:12:38Z|00068|binding|INFO|Setting lport 26d718f0-c921-490f-815d-8221b976f012 ovn-installed in OVS
Sep 30 18:12:38 compute-0 ovn_controller[156242]: 2025-09-30T18:12:38Z|00069|if_status|INFO|Not setting lport 26d718f0-c921-490f-815d-8221b976f012 down as sb is readonly
Sep 30 18:12:38 compute-0 nova_compute[265391]: 2025-09-30 18:12:38.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:12:38 compute-0 ceph-mon[73755]: pgmap v1014: 353 pgs: 353 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 271 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Sep 30 18:12:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8234154c27df71632f82d331ad623da3f850f79179b7c7fadd301ccf4f375222-userdata-shm.mount: Deactivated successfully.
Sep 30 18:12:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ce662d27ece298740ab445188ea14e28475723bf6fa9a7d319e531bd16ab7eb-merged.mount: Deactivated successfully.
Sep 30 18:12:38 compute-0 ovn_controller[156242]: 2025-09-30T18:12:38Z|00070|binding|INFO|Releasing lport 26d718f0-c921-490f-815d-8221b976f012 from this chassis (sb_readonly=0)
Sep 30 18:12:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:38.250 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:22:1f 10.100.0.7'], port_security=['fa:16:3e:0e:22:1f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '761dbb06-6272-4941-a65f-5ad2f8cfbb70', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5fff1904-159a-4b76-8c46-feabf17f29ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddd1f985d8b64b449c79d55b0cbd6422', 'neutron:revision_number': '5', 'neutron:security_group_ids': '34f3cf7b-94cf-408f-b3dc-ae0b57c009fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12c18a77-b252-4a3e-a181-b42644879446, chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=26d718f0-c921-490f-815d-8221b976f012) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:12:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:38.257 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:22:1f 10.100.0.7'], port_security=['fa:16:3e:0e:22:1f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '761dbb06-6272-4941-a65f-5ad2f8cfbb70', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5fff1904-159a-4b76-8c46-feabf17f29ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddd1f985d8b64b449c79d55b0cbd6422', 'neutron:revision_number': '5', 'neutron:security_group_ids': '34f3cf7b-94cf-408f-b3dc-ae0b57c009fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12c18a77-b252-4a3e-a181-b42644879446, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=26d718f0-c921-490f-815d-8221b976f012) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:12:38 compute-0 podman[304604]: 2025-09-30 18:12:38.258312401 +0000 UTC m=+0.083571702 container cleanup 8234154c27df71632f82d331ad623da3f850f79179b7c7fadd301ccf4f375222 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 18:12:38 compute-0 systemd[1]: libpod-conmon-8234154c27df71632f82d331ad623da3f850f79179b7c7fadd301ccf4f375222.scope: Deactivated successfully.
Sep 30 18:12:38 compute-0 nova_compute[265391]: 2025-09-30 18:12:38.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:38 compute-0 podman[304606]: 2025-09-30 18:12:38.276869024 +0000 UTC m=+0.082117375 container remove 8234154c27df71632f82d331ad623da3f850f79179b7c7fadd301ccf4f375222 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true)
Sep 30 18:12:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:38.283 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[06952f0b-261b-4f37-b418-16b9236ca6a9]: (4, ("Tue Sep 30 06:12:38 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab (8234154c27df71632f82d331ad623da3f850f79179b7c7fadd301ccf4f375222)\n8234154c27df71632f82d331ad623da3f850f79179b7c7fadd301ccf4f375222\nTue Sep 30 06:12:38 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab (8234154c27df71632f82d331ad623da3f850f79179b7c7fadd301ccf4f375222)\n8234154c27df71632f82d331ad623da3f850f79179b7c7fadd301ccf4f375222\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:12:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:38.285 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[5d2ad9af-6c62-4d18-a739-461422cd7f30]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:12:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:38.285 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5fff1904-159a-4b76-8c46-feabf17f29ab.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5fff1904-159a-4b76-8c46-feabf17f29ab.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:12:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:38.286 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[4e0ce52e-9a82-4368-8d4e-7af363dc0f65]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:12:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:38.286 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5fff1904-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:12:38 compute-0 nova_compute[265391]: 2025-09-30 18:12:38.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:38 compute-0 kernel: tap5fff1904-10: left promiscuous mode
Sep 30 18:12:38 compute-0 nova_compute[265391]: 2025-09-30 18:12:38.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:38.312 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[a6a165a3-a029-43b6-bbaf-da9b3f7d961f]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:12:38 compute-0 nova_compute[265391]: 2025-09-30 18:12:38.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:38.342 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[0571dc08-875f-4f6c-ab1c-6472d6034c10]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:12:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:38.343 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[d69c6f86-d555-4d00-a5de-e50cb24e2ce0]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:12:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:38.359 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[4ec80e3c-dd6e-4f2b-a0ab-27343b813edd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451201, 'reachable_time': 18412, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304640, 'error': None, 'target': 'ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:12:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:38.361 166383 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Sep 30 18:12:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:38.361 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[2ff9ed8b-e547-4bc0-ad51-b3a1e61b76f7]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:12:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:38.362 166158 INFO neutron.agent.ovn.metadata.agent [-] Port 26d718f0-c921-490f-815d-8221b976f012 in datapath 5fff1904-159a-4b76-8c46-feabf17f29ab unbound from our chassis
Sep 30 18:12:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:38.363 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5fff1904-159a-4b76-8c46-feabf17f29ab, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:12:38 compute-0 systemd[1]: run-netns-ovnmeta\x2d5fff1904\x2d159a\x2d4b76\x2d8c46\x2dfeabf17f29ab.mount: Deactivated successfully.
Sep 30 18:12:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:38.363 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[ef2f3347-eb58-4163-a741-67903de03d8f]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:12:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:38.363 166158 INFO neutron.agent.ovn.metadata.agent [-] Port 26d718f0-c921-490f-815d-8221b976f012 in datapath 5fff1904-159a-4b76-8c46-feabf17f29ab unbound from our chassis
Sep 30 18:12:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:38.364 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5fff1904-159a-4b76-8c46-feabf17f29ab, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:12:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:38.364 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[b4aef6f1-58c5-4dcc-8682-c47cfb24030f]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:12:38 compute-0 nova_compute[265391]: 2025-09-30 18:12:38.516 2 DEBUG nova.compute.manager [req-3c2a5908-90c9-4c52-95b0-cf46588d2a68 req-843ae0eb-d9b0-4e4b-a570-14ee90affe31 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Received event network-vif-unplugged-26d718f0-c921-490f-815d-8221b976f012 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:12:38 compute-0 nova_compute[265391]: 2025-09-30 18:12:38.516 2 DEBUG oslo_concurrency.lockutils [req-3c2a5908-90c9-4c52-95b0-cf46588d2a68 req-843ae0eb-d9b0-4e4b-a570-14ee90affe31 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:12:38 compute-0 nova_compute[265391]: 2025-09-30 18:12:38.516 2 DEBUG oslo_concurrency.lockutils [req-3c2a5908-90c9-4c52-95b0-cf46588d2a68 req-843ae0eb-d9b0-4e4b-a570-14ee90affe31 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:12:38 compute-0 nova_compute[265391]: 2025-09-30 18:12:38.517 2 DEBUG oslo_concurrency.lockutils [req-3c2a5908-90c9-4c52-95b0-cf46588d2a68 req-843ae0eb-d9b0-4e4b-a570-14ee90affe31 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:12:38 compute-0 nova_compute[265391]: 2025-09-30 18:12:38.517 2 DEBUG nova.compute.manager [req-3c2a5908-90c9-4c52-95b0-cf46588d2a68 req-843ae0eb-d9b0-4e4b-a570-14ee90affe31 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] No waiting events found dispatching network-vif-unplugged-26d718f0-c921-490f-815d-8221b976f012 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:12:38 compute-0 nova_compute[265391]: 2025-09-30 18:12:38.518 2 WARNING nova.compute.manager [req-3c2a5908-90c9-4c52-95b0-cf46588d2a68 req-843ae0eb-d9b0-4e4b-a570-14ee90affe31 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Received unexpected event network-vif-unplugged-26d718f0-c921-490f-815d-8221b976f012 for instance with vm_state active and task_state resize_migrating.
Sep 30 18:12:38 compute-0 nova_compute[265391]: 2025-09-30 18:12:38.685 2 INFO nova.virt.libvirt.driver [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Instance shutdown successfully after 3 seconds.
Sep 30 18:12:38 compute-0 nova_compute[265391]: 2025-09-30 18:12:38.693 2 INFO nova.virt.libvirt.driver [-] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Instance destroyed successfully.
Sep 30 18:12:38 compute-0 nova_compute[265391]: 2025-09-30 18:12:38.695 2 DEBUG nova.virt.libvirt.vif [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:11:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-1100695055',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-1100695055',id=4,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:11:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ddd1f985d8b64b449c79d55b0cbd6422',ramdisk_id='',reservation_id='r-9gd0nhai',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestExecuteActionsViaActuator-837729328',owner_user_name='tempest-TestExecuteActionsViaActuator-837729328-project-admin'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:12:28Z,user_data=None,user_id='dc3bb71c425f484fbc46f90978029403',uuid=761dbb06-6272-4941-a65f-5ad2f8cfbb70,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "26d718f0-c921-490f-815d-8221b976f012", "address": "fa:16:3e:0e:22:1f", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "vif_mac": "fa:16:3e:0e:22:1f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d718f0-c9", "ovs_interfaceid": "26d718f0-c921-490f-815d-8221b976f012", "qbh_params": null, "qbg_params": null, 
"active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Sep 30 18:12:38 compute-0 nova_compute[265391]: 2025-09-30 18:12:38.695 2 DEBUG nova.network.os_vif_util [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "26d718f0-c921-490f-815d-8221b976f012", "address": "fa:16:3e:0e:22:1f", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "vif_mac": "fa:16:3e:0e:22:1f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d718f0-c9", "ovs_interfaceid": "26d718f0-c921-490f-815d-8221b976f012", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:12:38 compute-0 nova_compute[265391]: 2025-09-30 18:12:38.696 2 DEBUG nova.network.os_vif_util [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0e:22:1f,bridge_name='br-int',has_traffic_filtering=True,id=26d718f0-c921-490f-815d-8221b976f012,network=Network(5fff1904-159a-4b76-8c46-feabf17f29ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26d718f0-c9') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:12:38 compute-0 nova_compute[265391]: 2025-09-30 18:12:38.697 2 DEBUG os_vif [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:22:1f,bridge_name='br-int',has_traffic_filtering=True,id=26d718f0-c921-490f-815d-8221b976f012,network=Network(5fff1904-159a-4b76-8c46-feabf17f29ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26d718f0-c9') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Sep 30 18:12:38 compute-0 nova_compute[265391]: 2025-09-30 18:12:38.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:38 compute-0 nova_compute[265391]: 2025-09-30 18:12:38.701 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26d718f0-c9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:12:38 compute-0 nova_compute[265391]: 2025-09-30 18:12:38.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:38 compute-0 nova_compute[265391]: 2025-09-30 18:12:38.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:12:38 compute-0 nova_compute[265391]: 2025-09-30 18:12:38.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:38 compute-0 nova_compute[265391]: 2025-09-30 18:12:38.709 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=09d4bfae-d17b-43fd-b0e0-8dfa242ddb31) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:12:38 compute-0 nova_compute[265391]: 2025-09-30 18:12:38.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:38 compute-0 nova_compute[265391]: 2025-09-30 18:12:38.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:12:38 compute-0 nova_compute[265391]: 2025-09-30 18:12:38.715 2 INFO os_vif [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:22:1f,bridge_name='br-int',has_traffic_filtering=True,id=26d718f0-c921-490f-815d-8221b976f012,network=Network(5fff1904-159a-4b76-8c46-feabf17f29ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26d718f0-c9')
Sep 30 18:12:38 compute-0 nova_compute[265391]: 2025-09-30 18:12:38.720 2 DEBUG nova.virt.libvirt.driver [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:12:38 compute-0 nova_compute[265391]: 2025-09-30 18:12:38.720 2 DEBUG nova.virt.libvirt.driver [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:12:38 compute-0 nova_compute[265391]: 2025-09-30 18:12:38.722 2 WARNING neutronclient.v2_0.client [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:12:38 compute-0 nova_compute[265391]: 2025-09-30 18:12:38.722 2 WARNING neutronclient.v2_0.client [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:12:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:12:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:12:38.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:12:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:12:38] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:12:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:12:38] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:12:38 compute-0 nova_compute[265391]: 2025-09-30 18:12:38.801 2 DEBUG nova.network.neutron [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Port 26d718f0-c921-490f-815d-8221b976f012 binding to destination host compute-0.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.12/site-packages/nova/network/neutron.py:3231
Sep 30 18:12:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:38 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224003910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:39 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218001270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:39 compute-0 nova_compute[265391]: 2025-09-30 18:12:39.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:12:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:12:39.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:12:39 compute-0 nova_compute[265391]: 2025-09-30 18:12:39.853 2 DEBUG oslo_concurrency.lockutils [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:12:39 compute-0 nova_compute[265391]: 2025-09-30 18:12:39.854 2 DEBUG oslo_concurrency.lockutils [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:12:39 compute-0 nova_compute[265391]: 2025-09-30 18:12:39.854 2 DEBUG oslo_concurrency.lockutils [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:12:40 compute-0 sshd-session[304559]: Failed password for invalid user baidu from 154.125.120.7 port 47841 ssh2
Sep 30 18:12:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1015: 353 pgs: 353 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 274 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Sep 30 18:12:40 compute-0 nova_compute[265391]: 2025-09-30 18:12:40.576 2 DEBUG nova.compute.manager [req-f13acd02-584e-46d6-9a8e-074ee0c3a1c9 req-27bec688-d7ca-4a31-8f44-fbc326a4696c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Received event network-vif-unplugged-26d718f0-c921-490f-815d-8221b976f012 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:12:40 compute-0 nova_compute[265391]: 2025-09-30 18:12:40.576 2 DEBUG oslo_concurrency.lockutils [req-f13acd02-584e-46d6-9a8e-074ee0c3a1c9 req-27bec688-d7ca-4a31-8f44-fbc326a4696c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:12:40 compute-0 nova_compute[265391]: 2025-09-30 18:12:40.577 2 DEBUG oslo_concurrency.lockutils [req-f13acd02-584e-46d6-9a8e-074ee0c3a1c9 req-27bec688-d7ca-4a31-8f44-fbc326a4696c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:12:40 compute-0 nova_compute[265391]: 2025-09-30 18:12:40.577 2 DEBUG oslo_concurrency.lockutils [req-f13acd02-584e-46d6-9a8e-074ee0c3a1c9 req-27bec688-d7ca-4a31-8f44-fbc326a4696c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:12:40 compute-0 nova_compute[265391]: 2025-09-30 18:12:40.577 2 DEBUG nova.compute.manager [req-f13acd02-584e-46d6-9a8e-074ee0c3a1c9 req-27bec688-d7ca-4a31-8f44-fbc326a4696c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] No waiting events found dispatching network-vif-unplugged-26d718f0-c921-490f-815d-8221b976f012 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:12:40 compute-0 nova_compute[265391]: 2025-09-30 18:12:40.577 2 WARNING nova.compute.manager [req-f13acd02-584e-46d6-9a8e-074ee0c3a1c9 req-27bec688-d7ca-4a31-8f44-fbc326a4696c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Received unexpected event network-vif-unplugged-26d718f0-c921-490f-815d-8221b976f012 for instance with vm_state active and task_state resize_migrated.
Sep 30 18:12:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:12:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:12:40.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:40 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc21000bd00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:40 compute-0 nova_compute[265391]: 2025-09-30 18:12:40.857 2 WARNING neutronclient.v2_0.client [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:12:41 compute-0 ceph-mon[73755]: pgmap v1015: 353 pgs: 353 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 274 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Sep 30 18:12:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:41 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:41 compute-0 nova_compute[265391]: 2025-09-30 18:12:41.540 2 DEBUG oslo_concurrency.lockutils [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-761dbb06-6272-4941-a65f-5ad2f8cfbb70" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:12:41 compute-0 nova_compute[265391]: 2025-09-30 18:12:41.540 2 DEBUG oslo_concurrency.lockutils [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-761dbb06-6272-4941-a65f-5ad2f8cfbb70" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:12:41 compute-0 nova_compute[265391]: 2025-09-30 18:12:41.540 2 DEBUG nova.network.neutron [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:12:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:12:41.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:41 compute-0 sshd-session[304559]: Received disconnect from 154.125.120.7 port 47841:11: Bye Bye [preauth]
Sep 30 18:12:41 compute-0 sshd-session[304559]: Disconnected from invalid user baidu 154.125.120.7 port 47841 [preauth]
Sep 30 18:12:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1016: 353 pgs: 353 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 270 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Sep 30 18:12:42 compute-0 nova_compute[265391]: 2025-09-30 18:12:42.051 2 WARNING neutronclient.v2_0.client [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:12:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:12:42.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:42 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:43 compute-0 ceph-mon[73755]: pgmap v1016: 353 pgs: 353 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 270 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Sep 30 18:12:43 compute-0 nova_compute[265391]: 2025-09-30 18:12:43.149 2 WARNING neutronclient.v2_0.client [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:12:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:43 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218001270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:12:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:12:43.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:12:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:12:43.677Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:12:43 compute-0 nova_compute[265391]: 2025-09-30 18:12:43.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:43 compute-0 nova_compute[265391]: 2025-09-30 18:12:43.787 2 DEBUG nova.network.neutron [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Updating instance_info_cache with network_info: [{"id": "26d718f0-c921-490f-815d-8221b976f012", "address": "fa:16:3e:0e:22:1f", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d718f0-c9", "ovs_interfaceid": "26d718f0-c921-490f-815d-8221b976f012", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:12:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1017: 353 pgs: 353 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 271 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Sep 30 18:12:44 compute-0 nova_compute[265391]: 2025-09-30 18:12:44.296 2 DEBUG oslo_concurrency.lockutils [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-761dbb06-6272-4941-a65f-5ad2f8cfbb70" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:12:44 compute-0 podman[304650]: 2025-09-30 18:12:44.56563903 +0000 UTC m=+0.098758657 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Sep 30 18:12:44 compute-0 nova_compute[265391]: 2025-09-30 18:12:44.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:44 compute-0 podman[304649]: 2025-09-30 18:12:44.574682875 +0000 UTC m=+0.111279432 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_controller, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Sep 30 18:12:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:12:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:12:44.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:12:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:44 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc21000bd00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:44 compute-0 nova_compute[265391]: 2025-09-30 18:12:44.852 2 DEBUG nova.virt.libvirt.driver [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Starting finish_migration finish_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12604
Sep 30 18:12:44 compute-0 nova_compute[265391]: 2025-09-30 18:12:44.854 2 DEBUG nova.virt.libvirt.driver [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Instance directory exists: not creating _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5134
Sep 30 18:12:44 compute-0 nova_compute[265391]: 2025-09-30 18:12:44.854 2 INFO nova.virt.libvirt.driver [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Creating image(s)
Sep 30 18:12:44 compute-0 nova_compute[265391]: 2025-09-30 18:12:44.893 2 DEBUG nova.storage.rbd_utils [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] creating snapshot(nova-resize) on rbd image(761dbb06-6272-4941-a65f-5ad2f8cfbb70_disk) create_snap /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:462
Sep 30 18:12:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Sep 30 18:12:45 compute-0 ceph-mon[73755]: pgmap v1017: 353 pgs: 353 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 271 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Sep 30 18:12:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e138 e138: 2 total, 2 up, 2 in
Sep 30 18:12:45 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e138: 2 total, 2 up, 2 in
Sep 30 18:12:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:45 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:12:45.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:12:45 compute-0 nova_compute[265391]: 2025-09-30 18:12:45.807 2 DEBUG nova.virt.libvirt.driver [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Did not create local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5272
Sep 30 18:12:45 compute-0 nova_compute[265391]: 2025-09-30 18:12:45.807 2 DEBUG nova.virt.libvirt.driver [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Ensure instance console log exists: /var/lib/nova/instances/761dbb06-6272-4941-a65f-5ad2f8cfbb70/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Sep 30 18:12:45 compute-0 nova_compute[265391]: 2025-09-30 18:12:45.807 2 DEBUG oslo_concurrency.lockutils [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:12:45 compute-0 nova_compute[265391]: 2025-09-30 18:12:45.808 2 DEBUG oslo_concurrency.lockutils [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:12:45 compute-0 nova_compute[265391]: 2025-09-30 18:12:45.808 2 DEBUG oslo_concurrency.lockutils [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:12:45 compute-0 nova_compute[265391]: 2025-09-30 18:12:45.810 2 DEBUG nova.virt.libvirt.driver [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Start _get_guest_xml network_info=[{"id": "26d718f0-c921-490f-815d-8221b976f012", "address": "fa:16:3e:0e:22:1f", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "vif_mac": "fa:16:3e:0e:22:1f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d718f0-c9", "ovs_interfaceid": "26d718f0-c921-490f-815d-8221b976f012", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'image_id': '5b99cbca-b655-4be5-8343-cf504005c42e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Sep 30 18:12:45 compute-0 nova_compute[265391]: 2025-09-30 18:12:45.814 2 WARNING nova.virt.libvirt.driver [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:12:45 compute-0 nova_compute[265391]: 2025-09-30 18:12:45.815 2 DEBUG nova.virt.driver [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='5b99cbca-b655-4be5-8343-cf504005c42e', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteActionsViaActuator-server-1100695055', uuid='761dbb06-6272-4941-a65f-5ad2f8cfbb70'), owner=OwnerMeta(userid='dc3bb71c425f484fbc46f90978029403', username='tempest-TestExecuteActionsViaActuator-837729328-project-admin', projectid='ddd1f985d8b64b449c79d55b0cbd6422', projectname='tempest-TestExecuteActionsViaActuator-837729328'), image=ImageMeta(id='5b99cbca-b655-4be5-8343-cf504005c42e', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_cdrom_bus': 'sata', 'hw_disk_bus': 'virtio', 'hw_input_bus': 'usb', 'hw_machine_type': 'q35', 'hw_pointer_model': 'usbtablet', 'hw_rng_model': 'virtio', 'hw_video_model': 'virtio', 'hw_vif_model': 'virtio'}), flavor=FlavorMeta(name='m1.micro', flavorid='a7318ed3-ef7b-41dc-b9aa-f008e6525e57', memory_mb=192, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "26d718f0-c921-490f-815d-8221b976f012", "address": "fa:16:3e:0e:22:1f", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "vif_mac": "fa:16:3e:0e:22:1f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d718f0-c9", "ovs_interfaceid": "26d718f0-c921-490f-815d-8221b976f012", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20250919142712.b99a882.el10', creation_time=1759255965.815817) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Sep 30 18:12:45 compute-0 nova_compute[265391]: 2025-09-30 18:12:45.820 2 DEBUG nova.virt.libvirt.host [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Sep 30 18:12:45 compute-0 nova_compute[265391]: 2025-09-30 18:12:45.820 2 DEBUG nova.virt.libvirt.host [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Sep 30 18:12:45 compute-0 nova_compute[265391]: 2025-09-30 18:12:45.823 2 DEBUG nova.virt.libvirt.host [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Sep 30 18:12:45 compute-0 nova_compute[265391]: 2025-09-30 18:12:45.824 2 DEBUG nova.virt.libvirt.host [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Sep 30 18:12:45 compute-0 nova_compute[265391]: 2025-09-30 18:12:45.824 2 DEBUG nova.virt.libvirt.driver [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Sep 30 18:12:45 compute-0 nova_compute[265391]: 2025-09-30 18:12:45.824 2 DEBUG nova.virt.hardware [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-09-30T18:04:50Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a7318ed3-ef7b-41dc-b9aa-f008e6525e57',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Sep 30 18:12:45 compute-0 nova_compute[265391]: 2025-09-30 18:12:45.825 2 DEBUG nova.virt.hardware [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Sep 30 18:12:45 compute-0 nova_compute[265391]: 2025-09-30 18:12:45.825 2 DEBUG nova.virt.hardware [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Sep 30 18:12:45 compute-0 nova_compute[265391]: 2025-09-30 18:12:45.825 2 DEBUG nova.virt.hardware [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Sep 30 18:12:45 compute-0 nova_compute[265391]: 2025-09-30 18:12:45.825 2 DEBUG nova.virt.hardware [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Sep 30 18:12:45 compute-0 nova_compute[265391]: 2025-09-30 18:12:45.825 2 DEBUG nova.virt.hardware [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Sep 30 18:12:45 compute-0 nova_compute[265391]: 2025-09-30 18:12:45.825 2 DEBUG nova.virt.hardware [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Sep 30 18:12:45 compute-0 nova_compute[265391]: 2025-09-30 18:12:45.825 2 DEBUG nova.virt.hardware [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Sep 30 18:12:45 compute-0 nova_compute[265391]: 2025-09-30 18:12:45.825 2 DEBUG nova.virt.hardware [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Sep 30 18:12:45 compute-0 nova_compute[265391]: 2025-09-30 18:12:45.826 2 DEBUG nova.virt.hardware [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Sep 30 18:12:45 compute-0 nova_compute[265391]: 2025-09-30 18:12:45.826 2 DEBUG nova.virt.hardware [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Sep 30 18:12:45 compute-0 nova_compute[265391]: 2025-09-30 18:12:45.828 2 DEBUG oslo_concurrency.processutils [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:12:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1019: 353 pgs: 353 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 3.2 KiB/s rd, 33 KiB/s wr, 4 op/s
Sep 30 18:12:46 compute-0 ceph-mon[73755]: osdmap e138: 2 total, 2 up, 2 in
Sep 30 18:12:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:12:46 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1649060969' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:12:46 compute-0 nova_compute[265391]: 2025-09-30 18:12:46.288 2 DEBUG oslo_concurrency.processutils [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:12:46 compute-0 nova_compute[265391]: 2025-09-30 18:12:46.328 2 DEBUG oslo_concurrency.processutils [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:12:46 compute-0 sudo[304813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:12:46 compute-0 sudo[304813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:12:46 compute-0 sudo[304813]: pam_unix(sudo:session): session closed for user root
Sep 30 18:12:46 compute-0 sudo[304857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Sep 30 18:12:46 compute-0 sudo[304857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:12:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:12:46 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3300006370' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:12:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.002000052s ======
Sep 30 18:12:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:12:46.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Sep 30 18:12:46 compute-0 nova_compute[265391]: 2025-09-30 18:12:46.776 2 DEBUG oslo_concurrency.processutils [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:12:46 compute-0 nova_compute[265391]: 2025-09-30 18:12:46.779 2 DEBUG nova.virt.libvirt.vif [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:11:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-1100695055',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-1100695055',id=4,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:11:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ddd1f985d8b64b449c79d55b0cbd6422',ramdisk_id='',reservation_id='r-9gd0nhai',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestExecuteActionsViaActuator-837729328',owner_user_name='tempest-TestExecuteActionsViaActuator-837729328-project-admin'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:12:39Z,user_data=None,user_id='dc3bb71c425f484fbc46f90978029403',uuid=761dbb06-6272-4941-a65f-5ad2f8cfbb70,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "26d718f0-c921-490f-815d-8221b976f012", "address": "fa:16:3e:0e:22:1f", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "vif_mac": "fa:16:3e:0e:22:1f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d718f0-c9", "ovs_interfaceid": "26d718f0-c921-490f-815d-8221b976f012", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:12:46 compute-0 nova_compute[265391]: 2025-09-30 18:12:46.781 2 DEBUG nova.network.os_vif_util [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "26d718f0-c921-490f-815d-8221b976f012", "address": "fa:16:3e:0e:22:1f", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "vif_mac": "fa:16:3e:0e:22:1f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d718f0-c9", "ovs_interfaceid": "26d718f0-c921-490f-815d-8221b976f012", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:12:46 compute-0 nova_compute[265391]: 2025-09-30 18:12:46.782 2 DEBUG nova.network.os_vif_util [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:22:1f,bridge_name='br-int',has_traffic_filtering=True,id=26d718f0-c921-490f-815d-8221b976f012,network=Network(5fff1904-159a-4b76-8c46-feabf17f29ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26d718f0-c9') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:12:46 compute-0 nova_compute[265391]: 2025-09-30 18:12:46.784 2 DEBUG nova.virt.libvirt.driver [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] End _get_guest_xml xml=<domain type="kvm">
Sep 30 18:12:46 compute-0 nova_compute[265391]:   <uuid>761dbb06-6272-4941-a65f-5ad2f8cfbb70</uuid>
Sep 30 18:12:46 compute-0 nova_compute[265391]:   <name>instance-00000004</name>
Sep 30 18:12:46 compute-0 nova_compute[265391]:   <memory>196608</memory>
Sep 30 18:12:46 compute-0 nova_compute[265391]:   <vcpu>1</vcpu>
Sep 30 18:12:46 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:12:46 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteActionsViaActuator-server-1100695055</nova:name>
Sep 30 18:12:46 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:12:45</nova:creationTime>
Sep 30 18:12:46 compute-0 nova_compute[265391]:       <nova:flavor name="m1.micro" id="a7318ed3-ef7b-41dc-b9aa-f008e6525e57">
Sep 30 18:12:46 compute-0 nova_compute[265391]:         <nova:memory>192</nova:memory>
Sep 30 18:12:46 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:12:46 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:12:46 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:12:46 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:12:46 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:12:46 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:12:46 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:12:46 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:12:46 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:12:46 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:12:46 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:12:46 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:12:46 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:12:46 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:12:46 compute-0 nova_compute[265391]:           <nova:property name="hw_cdrom_bus">sata</nova:property>
Sep 30 18:12:46 compute-0 nova_compute[265391]:           <nova:property name="hw_disk_bus">virtio</nova:property>
Sep 30 18:12:46 compute-0 nova_compute[265391]:           <nova:property name="hw_input_bus">usb</nova:property>
Sep 30 18:12:46 compute-0 nova_compute[265391]:           <nova:property name="hw_machine_type">q35</nova:property>
Sep 30 18:12:46 compute-0 nova_compute[265391]:           <nova:property name="hw_pointer_model">usbtablet</nova:property>
Sep 30 18:12:46 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:12:46 compute-0 nova_compute[265391]:           <nova:property name="hw_video_model">virtio</nova:property>
Sep 30 18:12:46 compute-0 nova_compute[265391]:           <nova:property name="hw_vif_model">virtio</nova:property>
Sep 30 18:12:46 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:12:46 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:12:46 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:12:46 compute-0 nova_compute[265391]:         <nova:user uuid="dc3bb71c425f484fbc46f90978029403">tempest-TestExecuteActionsViaActuator-837729328-project-admin</nova:user>
Sep 30 18:12:46 compute-0 nova_compute[265391]:         <nova:project uuid="ddd1f985d8b64b449c79d55b0cbd6422">tempest-TestExecuteActionsViaActuator-837729328</nova:project>
Sep 30 18:12:46 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:12:46 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:12:46 compute-0 nova_compute[265391]:         <nova:port uuid="26d718f0-c921-490f-815d-8221b976f012">
Sep 30 18:12:46 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:12:46 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:12:46 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:12:46 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <system>
Sep 30 18:12:46 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:12:46 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:12:46 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:12:46 compute-0 nova_compute[265391]:       <entry name="serial">761dbb06-6272-4941-a65f-5ad2f8cfbb70</entry>
Sep 30 18:12:46 compute-0 nova_compute[265391]:       <entry name="uuid">761dbb06-6272-4941-a65f-5ad2f8cfbb70</entry>
Sep 30 18:12:46 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     </system>
Sep 30 18:12:46 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:12:46 compute-0 nova_compute[265391]:   <os>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="q35">hvm</type>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:   </os>
Sep 30 18:12:46 compute-0 nova_compute[265391]:   <features>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <vmcoreinfo/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:   </features>
Sep 30 18:12:46 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:12:46 compute-0 nova_compute[265391]:   <cpu mode="host-model" match="exact">
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <topology sockets="1" cores="1" threads="1"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:12:46 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:12:46 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/761dbb06-6272-4941-a65f-5ad2f8cfbb70_disk">
Sep 30 18:12:46 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:       </source>
Sep 30 18:12:46 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:12:46 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:12:46 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:12:46 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/761dbb06-6272-4941-a65f-5ad2f8cfbb70_disk.config">
Sep 30 18:12:46 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:       </source>
Sep 30 18:12:46 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:12:46 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:12:46 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:12:46 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:0e:22:1f"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:       <target dev="tap26d718f0-c9"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:12:46 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/761dbb06-6272-4941-a65f-5ad2f8cfbb70/console.log" append="off"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <video>
Sep 30 18:12:46 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     </video>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:12:46 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <controller type="usb" index="0"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:12:46 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:12:46 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:12:46 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:12:46 compute-0 nova_compute[265391]: </domain>
Sep 30 18:12:46 compute-0 nova_compute[265391]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Sep 30 18:12:46 compute-0 nova_compute[265391]: 2025-09-30 18:12:46.785 2 DEBUG nova.virt.libvirt.vif [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:11:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-1100695055',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-1100695055',id=4,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:11:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ddd1f985d8b64b449c79d55b0cbd6422',ramdisk_id='',reservation_id='r-9gd0nhai',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestExecuteActionsViaActuator-837729328',owner_user_name='tempest-TestExecuteActionsViaActuator-837729328-project-admin'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:12:39Z,user_data=None,user_id='dc3bb71c425f484fbc46f90978029403',uuid=761dbb06-6272-4941-a65f-5ad2f8cfbb70,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "26d718f0-c921-490f-815d-8221b976f012", "address": "fa:16:3e:0e:22:1f", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "vif_mac": "fa:16:3e:0e:22:1f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d718f0-c9", "ovs_interfaceid": "26d718f0-c921-490f-815d-8221b976f012", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Sep 30 18:12:46 compute-0 nova_compute[265391]: 2025-09-30 18:12:46.785 2 DEBUG nova.network.os_vif_util [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "26d718f0-c921-490f-815d-8221b976f012", "address": "fa:16:3e:0e:22:1f", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "vif_mac": "fa:16:3e:0e:22:1f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d718f0-c9", "ovs_interfaceid": "26d718f0-c921-490f-815d-8221b976f012", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:12:46 compute-0 nova_compute[265391]: 2025-09-30 18:12:46.785 2 DEBUG nova.network.os_vif_util [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:22:1f,bridge_name='br-int',has_traffic_filtering=True,id=26d718f0-c921-490f-815d-8221b976f012,network=Network(5fff1904-159a-4b76-8c46-feabf17f29ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26d718f0-c9') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:12:46 compute-0 nova_compute[265391]: 2025-09-30 18:12:46.786 2 DEBUG os_vif [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:22:1f,bridge_name='br-int',has_traffic_filtering=True,id=26d718f0-c921-490f-815d-8221b976f012,network=Network(5fff1904-159a-4b76-8c46-feabf17f29ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26d718f0-c9') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Sep 30 18:12:46 compute-0 nova_compute[265391]: 2025-09-30 18:12:46.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:46 compute-0 nova_compute[265391]: 2025-09-30 18:12:46.786 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:12:46 compute-0 nova_compute[265391]: 2025-09-30 18:12:46.787 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:12:46 compute-0 nova_compute[265391]: 2025-09-30 18:12:46.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:46 compute-0 nova_compute[265391]: 2025-09-30 18:12:46.788 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': 'ea499168-1274-5a76-acdd-bd3df25ba063', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:12:46 compute-0 nova_compute[265391]: 2025-09-30 18:12:46.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:46 compute-0 nova_compute[265391]: 2025-09-30 18:12:46.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:46 compute-0 nova_compute[265391]: 2025-09-30 18:12:46.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:46 compute-0 nova_compute[265391]: 2025-09-30 18:12:46.792 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26d718f0-c9, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:12:46 compute-0 nova_compute[265391]: 2025-09-30 18:12:46.792 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap26d718f0-c9, col_values=(('qos', UUID('82434abe-6ab7-45c4-9039-448a91acd1b8')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:12:46 compute-0 nova_compute[265391]: 2025-09-30 18:12:46.792 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap26d718f0-c9, col_values=(('external_ids', {'iface-id': '26d718f0-c921-490f-815d-8221b976f012', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0e:22:1f', 'vm-uuid': '761dbb06-6272-4941-a65f-5ad2f8cfbb70'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:12:46 compute-0 nova_compute[265391]: 2025-09-30 18:12:46.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:46 compute-0 NetworkManager[45059]: <info>  [1759255966.7948] manager: (tap26d718f0-c9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Sep 30 18:12:46 compute-0 nova_compute[265391]: 2025-09-30 18:12:46.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:12:46 compute-0 nova_compute[265391]: 2025-09-30 18:12:46.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:46 compute-0 nova_compute[265391]: 2025-09-30 18:12:46.800 2 INFO os_vif [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:22:1f,bridge_name='br-int',has_traffic_filtering=True,id=26d718f0-c921-490f-815d-8221b976f012,network=Network(5fff1904-159a-4b76-8c46-feabf17f29ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26d718f0-c9')
Sep 30 18:12:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:46 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:12:47.163Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:12:47 compute-0 ceph-mon[73755]: pgmap v1019: 353 pgs: 353 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 3.2 KiB/s rd, 33 KiB/s wr, 4 op/s
Sep 30 18:12:47 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1649060969' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:12:47 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3300006370' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:12:47 compute-0 podman[304959]: 2025-09-30 18:12:47.274942719 +0000 UTC m=+0.100120603 container exec 28cb2903608cec8e7c5a39aa3f398cadea7d807beef2cb8cca333795d18a1341 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Sep 30 18:12:47 compute-0 podman[304959]: 2025-09-30 18:12:47.401934078 +0000 UTC m=+0.227111872 container exec_died 28cb2903608cec8e7c5a39aa3f398cadea7d807beef2cb8cca333795d18a1341 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Sep 30 18:12:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:47 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218001270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:12:47.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:47 compute-0 podman[305074]: 2025-09-30 18:12:47.886771514 +0000 UTC m=+0.066794666 container exec 0a64c261699b14850432928cca94e24743e97435fdc1d2cca08edcee1f870789 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:12:47 compute-0 podman[305099]: 2025-09-30 18:12:47.958617761 +0000 UTC m=+0.055121443 container exec_died 0a64c261699b14850432928cca94e24743e97435fdc1d2cca08edcee1f870789 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:12:47 compute-0 podman[305074]: 2025-09-30 18:12:47.965193022 +0000 UTC m=+0.145216144 container exec_died 0a64c261699b14850432928cca94e24743e97435fdc1d2cca08edcee1f870789 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:12:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1020: 353 pgs: 353 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 3.2 KiB/s rd, 33 KiB/s wr, 4 op/s
Sep 30 18:12:48 compute-0 ceph-mon[73755]: pgmap v1020: 353 pgs: 353 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 3.2 KiB/s rd, 33 KiB/s wr, 4 op/s
Sep 30 18:12:48 compute-0 nova_compute[265391]: 2025-09-30 18:12:48.348 2 DEBUG nova.virt.libvirt.driver [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:12:48 compute-0 nova_compute[265391]: 2025-09-30 18:12:48.348 2 DEBUG nova.virt.libvirt.driver [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:12:48 compute-0 nova_compute[265391]: 2025-09-30 18:12:48.348 2 DEBUG nova.virt.libvirt.driver [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] No VIF found with MAC fa:16:3e:0e:22:1f, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Sep 30 18:12:48 compute-0 nova_compute[265391]: 2025-09-30 18:12:48.349 2 INFO nova.virt.libvirt.driver [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Using config drive
Sep 30 18:12:48 compute-0 podman[305167]: 2025-09-30 18:12:48.35545261 +0000 UTC m=+0.055930844 container exec a75bc0871d2876441e946a87cea944e2694054e1fcd6c747829e72a413847002 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Sep 30 18:12:48 compute-0 podman[305167]: 2025-09-30 18:12:48.393101298 +0000 UTC m=+0.093579502 container exec_died a75bc0871d2876441e946a87cea944e2694054e1fcd6c747829e72a413847002 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 18:12:48 compute-0 kernel: tap26d718f0-c9: entered promiscuous mode
Sep 30 18:12:48 compute-0 NetworkManager[45059]: <info>  [1759255968.4522] manager: (tap26d718f0-c9): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Sep 30 18:12:48 compute-0 nova_compute[265391]: 2025-09-30 18:12:48.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:48 compute-0 ovn_controller[156242]: 2025-09-30T18:12:48Z|00071|binding|INFO|Claiming lport 26d718f0-c921-490f-815d-8221b976f012 for this chassis.
Sep 30 18:12:48 compute-0 ovn_controller[156242]: 2025-09-30T18:12:48Z|00072|binding|INFO|26d718f0-c921-490f-815d-8221b976f012: Claiming fa:16:3e:0e:22:1f 10.100.0.7
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:48.460 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:22:1f 10.100.0.7'], port_security=['fa:16:3e:0e:22:1f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '761dbb06-6272-4941-a65f-5ad2f8cfbb70', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5fff1904-159a-4b76-8c46-feabf17f29ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddd1f985d8b64b449c79d55b0cbd6422', 'neutron:revision_number': '6', 'neutron:security_group_ids': '34f3cf7b-94cf-408f-b3dc-ae0b57c009fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12c18a77-b252-4a3e-a181-b42644879446, chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=26d718f0-c921-490f-815d-8221b976f012) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:48.461 166158 INFO neutron.agent.ovn.metadata.agent [-] Port 26d718f0-c921-490f-815d-8221b976f012 in datapath 5fff1904-159a-4b76-8c46-feabf17f29ab bound to our chassis
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:48.464 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5fff1904-159a-4b76-8c46-feabf17f29ab
Sep 30 18:12:48 compute-0 ovn_controller[156242]: 2025-09-30T18:12:48Z|00073|binding|INFO|Setting lport 26d718f0-c921-490f-815d-8221b976f012 ovn-installed in OVS
Sep 30 18:12:48 compute-0 ovn_controller[156242]: 2025-09-30T18:12:48Z|00074|binding|INFO|Setting lport 26d718f0-c921-490f-815d-8221b976f012 up in Southbound
Sep 30 18:12:48 compute-0 nova_compute[265391]: 2025-09-30 18:12:48.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:48.484 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[29163cf2-ec16-4f35-81a2-74f4646a3bf4]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:48.486 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5fff1904-11 in ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:48.487 292173 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5fff1904-10 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:48.487 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[9a0f6bdf-12ac-47f6-b4e1-a31eecadf81a]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:48.488 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[ccf6a2d1-121a-4145-ba41-1021b74edf9e]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:12:48 compute-0 systemd-machined[219917]: New machine qemu-4-instance-00000004.
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:48.504 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[370c9b82-1704-49ec-aac0-94c495864799]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:12:48 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Sep 30 18:12:48 compute-0 systemd-udevd[305249]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:48.522 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[eec10e54-ed94-446b-8598-68197e481b7b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:12:48 compute-0 NetworkManager[45059]: <info>  [1759255968.5382] device (tap26d718f0-c9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 18:12:48 compute-0 NetworkManager[45059]: <info>  [1759255968.5396] device (tap26d718f0-c9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:48.568 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[729c1465-9f92-479c-8475-6b9d39c80d1d]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:48.572 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[c3b02f78-9118-467b-9ce6-42fea6974481]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:12:48 compute-0 NetworkManager[45059]: <info>  [1759255968.5778] manager: (tap5fff1904-10): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:48.621 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[2fb64d45-4c83-4562-a3cb-6c77090adc5c]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:48.624 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[39362d15-563d-4f9b-8012-90e120f86baf]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:12:48 compute-0 NetworkManager[45059]: <info>  [1759255968.6472] device (tap5fff1904-10): carrier: link connected
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:48.652 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[299496d1-76d3-4dbf-8a64-22a4246bc3b3]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:48.677 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[437d8212-7eb6-4488-bb47-cebfc4a8e8b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5fff1904-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:07:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456734, 'reachable_time': 15699, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305307, 'error': None, 'target': 'ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:48.694 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[504272de-4f4b-41ca-a40c-28ea276dc66a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe18:77b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 456734, 'tstamp': 456734}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305311, 'error': None, 'target': 'ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:48.715 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[270980ca-851d-4235-9fde-9ff15f991fc1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5fff1904-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:07:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456734, 'reachable_time': 15699, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305312, 'error': None, 'target': 'ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:12:48 compute-0 nova_compute[265391]: 2025-09-30 18:12:48.728 2 DEBUG nova.compute.manager [req-41d55baf-859b-4c88-a94c-ec4a1710368e req-8fb2fd9d-700d-446f-bb3d-520e46981ae4 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Received event network-vif-plugged-26d718f0-c921-490f-815d-8221b976f012 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:12:48 compute-0 nova_compute[265391]: 2025-09-30 18:12:48.728 2 DEBUG oslo_concurrency.lockutils [req-41d55baf-859b-4c88-a94c-ec4a1710368e req-8fb2fd9d-700d-446f-bb3d-520e46981ae4 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:12:48 compute-0 nova_compute[265391]: 2025-09-30 18:12:48.728 2 DEBUG oslo_concurrency.lockutils [req-41d55baf-859b-4c88-a94c-ec4a1710368e req-8fb2fd9d-700d-446f-bb3d-520e46981ae4 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:12:48 compute-0 nova_compute[265391]: 2025-09-30 18:12:48.729 2 DEBUG oslo_concurrency.lockutils [req-41d55baf-859b-4c88-a94c-ec4a1710368e req-8fb2fd9d-700d-446f-bb3d-520e46981ae4 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:12:48 compute-0 nova_compute[265391]: 2025-09-30 18:12:48.729 2 DEBUG nova.compute.manager [req-41d55baf-859b-4c88-a94c-ec4a1710368e req-8fb2fd9d-700d-446f-bb3d-520e46981ae4 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] No waiting events found dispatching network-vif-plugged-26d718f0-c921-490f-815d-8221b976f012 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:12:48 compute-0 nova_compute[265391]: 2025-09-30 18:12:48.729 2 WARNING nova.compute.manager [req-41d55baf-859b-4c88-a94c-ec4a1710368e req-8fb2fd9d-700d-446f-bb3d-520e46981ae4 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Received unexpected event network-vif-plugged-26d718f0-c921-490f-815d-8221b976f012 for instance with vm_state active and task_state resize_finish.
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:48.748 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[7fb4d9ea-85e4-479d-8f84-408d3251dd40]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:12:48 compute-0 podman[305295]: 2025-09-30 18:12:48.763606814 +0000 UTC m=+0.126048636 container exec e1cdc51276e16f535030605d2a18f629e9fe31bf73cf0a7411e18214526b68e5 (image=quay.io/ceph/haproxy:2.3, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha)
Sep 30 18:12:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:12:48.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:48 compute-0 podman[305295]: 2025-09-30 18:12:48.775638247 +0000 UTC m=+0.138080039 container exec_died e1cdc51276e16f535030605d2a18f629e9fe31bf73cf0a7411e18214526b68e5 (image=quay.io/ceph/haproxy:2.3, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha)
Sep 30 18:12:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:12:48] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:12:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:12:48] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:48.822 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[18ba4d9c-71b9-43c5-8b8c-d4f204cc8b99]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:48.823 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5fff1904-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:48.824 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:48.824 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5fff1904-10, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:12:48 compute-0 NetworkManager[45059]: <info>  [1759255968.8269] manager: (tap5fff1904-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Sep 30 18:12:48 compute-0 nova_compute[265391]: 2025-09-30 18:12:48.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:48 compute-0 kernel: tap5fff1904-10: entered promiscuous mode
Sep 30 18:12:48 compute-0 nova_compute[265391]: 2025-09-30 18:12:48.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:48.830 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5fff1904-10, col_values=(('external_ids', {'iface-id': '3a8ea0a0-c179-4516-9404-04b68a17e79e'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:12:48 compute-0 nova_compute[265391]: 2025-09-30 18:12:48.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:48 compute-0 ovn_controller[156242]: 2025-09-30T18:12:48Z|00075|binding|INFO|Releasing lport 3a8ea0a0-c179-4516-9404-04b68a17e79e from this chassis (sb_readonly=0)
Sep 30 18:12:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:48 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc21000bd00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:48 compute-0 nova_compute[265391]: 2025-09-30 18:12:48.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:48.856 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[72c8ffa7-b865-48d5-a6a4-905c8eee141f]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:48.856 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5fff1904-159a-4b76-8c46-feabf17f29ab.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5fff1904-159a-4b76-8c46-feabf17f29ab.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:48.856 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5fff1904-159a-4b76-8c46-feabf17f29ab.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5fff1904-159a-4b76-8c46-feabf17f29ab.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:48.856 166158 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for 5fff1904-159a-4b76-8c46-feabf17f29ab disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:48.857 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5fff1904-159a-4b76-8c46-feabf17f29ab.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5fff1904-159a-4b76-8c46-feabf17f29ab.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:48.857 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[2abc1ba2-e930-496d-8136-6ff8ddc754bb]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:48.857 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5fff1904-159a-4b76-8c46-feabf17f29ab.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5fff1904-159a-4b76-8c46-feabf17f29ab.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:48.857 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[9e648f1d-160d-4ce9-97f1-1b19fccf3d56]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:48.858 166158 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: global
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]:     log         /dev/log local0 debug
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]:     log-tag     haproxy-metadata-proxy-5fff1904-159a-4b76-8c46-feabf17f29ab
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]:     user        root
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]:     group       root
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]:     maxconn     1024
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]:     pidfile     /var/lib/neutron/external/pids/5fff1904-159a-4b76-8c46-feabf17f29ab.pid.haproxy
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]:     daemon
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: defaults
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]:     log global
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]:     mode http
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]:     option httplog
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]:     option dontlognull
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]:     option http-server-close
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]:     option forwardfor
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]:     retries                 3
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]:     timeout http-request    30s
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]:     timeout connect         30s
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]:     timeout client          32s
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]:     timeout server          32s
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]:     timeout http-keep-alive 30s
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: listen listener
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]:     bind 169.254.169.254:80
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]:     
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]:     server metadata /var/lib/neutron/metadata_proxy
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]:     http-request add-header X-OVN-Network-ID 5fff1904-159a-4b76-8c46-feabf17f29ab
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Sep 30 18:12:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:48.858 166158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab', 'env', 'PROCESS_TAG=haproxy-5fff1904-159a-4b76-8c46-feabf17f29ab', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5fff1904-159a-4b76-8c46-feabf17f29ab.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Sep 30 18:12:49 compute-0 podman[305409]: 2025-09-30 18:12:49.088025443 +0000 UTC m=+0.055705788 container exec b5b34e9f9a1e5f8a3081bfbcbc3f8d4401b0873cf75447793762597a4ea89ff4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc, vendor=Red Hat, Inc., io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, description=keepalived for Ceph, architecture=x86_64, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived)
Sep 30 18:12:49 compute-0 podman[305409]: 2025-09-30 18:12:49.114547192 +0000 UTC m=+0.082227567 container exec_died b5b34e9f9a1e5f8a3081bfbcbc3f8d4401b0873cf75447793762597a4ea89ff4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, com.redhat.component=keepalived-container, name=keepalived, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, architecture=x86_64, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, version=2.2.4, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Sep 30 18:12:49 compute-0 podman[305484]: 2025-09-30 18:12:49.318268295 +0000 UTC m=+0.062570227 container create 1a934a73fb9e89fc1c5173aaaca5e249298d72c0ef29a71f20c487def4f55eb7 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Sep 30 18:12:49 compute-0 systemd[1]: Started libpod-conmon-1a934a73fb9e89fc1c5173aaaca5e249298d72c0ef29a71f20c487def4f55eb7.scope.
Sep 30 18:12:49 compute-0 podman[305484]: 2025-09-30 18:12:49.277379212 +0000 UTC m=+0.021681164 image pull 8925e336c8a9c8e5b40d98e4715ad60992d6f21ad0a398170a686c34f922c024 38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Sep 30 18:12:49 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:12:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/956b7df1d67a8dfff3db6885f1285258ecb09f779be952bb04f304c6a43d2c10/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Sep 30 18:12:49 compute-0 podman[305510]: 2025-09-30 18:12:49.406003954 +0000 UTC m=+0.068482420 container exec 316307af09cabd347a032cf06e9544ea621ab1be46fe52d1c226ab05d1d9e331 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:12:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:49 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:49 compute-0 podman[305510]: 2025-09-30 18:12:49.502248145 +0000 UTC m=+0.164726621 container exec_died 316307af09cabd347a032cf06e9544ea621ab1be46fe52d1c226ab05d1d9e331 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:12:49 compute-0 podman[305484]: 2025-09-30 18:12:49.503717163 +0000 UTC m=+0.248019105 container init 1a934a73fb9e89fc1c5173aaaca5e249298d72c0ef29a71f20c487def4f55eb7 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab, io.buildah.version=1.41.4, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest)
Sep 30 18:12:49 compute-0 podman[305484]: 2025-09-30 18:12:49.513662911 +0000 UTC m=+0.257964853 container start 1a934a73fb9e89fc1c5173aaaca5e249298d72c0ef29a71f20c487def4f55eb7 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930)
Sep 30 18:12:49 compute-0 neutron-haproxy-ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab[305525]: [NOTICE]   (305557) : New worker (305561) forked
Sep 30 18:12:49 compute-0 neutron-haproxy-ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab[305525]: [NOTICE]   (305557) : Loading success.
Sep 30 18:12:49 compute-0 nova_compute[265391]: 2025-09-30 18:12:49.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:12:49.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:49 compute-0 nova_compute[265391]: 2025-09-30 18:12:49.597 2 DEBUG nova.compute.manager [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:12:49 compute-0 nova_compute[265391]: 2025-09-30 18:12:49.601 2 INFO nova.virt.libvirt.driver [-] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Instance running successfully.
Sep 30 18:12:49 compute-0 virtqemud[265263]: argument unsupported: QEMU guest agent is not configured
Sep 30 18:12:49 compute-0 nova_compute[265391]: 2025-09-30 18:12:49.604 2 DEBUG nova.virt.libvirt.guest [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.12/site-packages/nova/virt/libvirt/guest.py:200
Sep 30 18:12:49 compute-0 nova_compute[265391]: 2025-09-30 18:12:49.604 2 DEBUG nova.virt.libvirt.driver [None req-00c6bc0e-df2e-4ab2-8c48-d2bfcf7fb8e6 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] finish_migration finished successfully. finish_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12699
Sep 30 18:12:49 compute-0 sudo[305569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:12:49 compute-0 sudo[305569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:12:49 compute-0 sudo[305569]: pam_unix(sudo:session): session closed for user root
Sep 30 18:12:49 compute-0 podman[305578]: 2025-09-30 18:12:49.659590272 +0000 UTC m=+0.074563739 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Sep 30 18:12:49 compute-0 podman[305644]: 2025-09-30 18:12:49.810967695 +0000 UTC m=+0.090456691 container exec cb74c136339a4f8b8601bbbbeb0c8f6158501bcf17f6fc4369db4a5ec1b442a2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 18:12:50 compute-0 podman[305644]: 2025-09-30 18:12:50.005613602 +0000 UTC m=+0.285102568 container exec_died cb74c136339a4f8b8601bbbbeb0c8f6158501bcf17f6fc4369db4a5ec1b442a2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 18:12:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1021: 353 pgs: 353 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 15 KiB/s rd, 1.6 KiB/s wr, 18 op/s
Sep 30 18:12:50 compute-0 podman[305759]: 2025-09-30 18:12:50.440690776 +0000 UTC m=+0.068390968 container exec 262a601a228f752d0c9dbf2b0c2d1b05fe66f88f5c16c2fc8adb324868709b53 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:12:50 compute-0 podman[305759]: 2025-09-30 18:12:50.478000035 +0000 UTC m=+0.105700227 container exec_died 262a601a228f752d0c9dbf2b0c2d1b05fe66f88f5c16c2fc8adb324868709b53 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:12:50 compute-0 sudo[304857]: pam_unix(sudo:session): session closed for user root
Sep 30 18:12:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:12:50 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:12:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:12:50 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:12:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:12:50 compute-0 sudo[305799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:12:50 compute-0 sudo[305799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:12:50 compute-0 sudo[305799]: pam_unix(sudo:session): session closed for user root
Sep 30 18:12:50 compute-0 sudo[305824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:12:50 compute-0 sudo[305824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:12:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:12:50.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:50 compute-0 nova_compute[265391]: 2025-09-30 18:12:50.836 2 DEBUG nova.compute.manager [req-f9b1467a-f3bb-4220-856b-c311394681f0 req-ff952b1d-155e-478d-90e3-2eb70084d774 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Received event network-vif-plugged-26d718f0-c921-490f-815d-8221b976f012 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:12:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:50 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:50 compute-0 nova_compute[265391]: 2025-09-30 18:12:50.838 2 DEBUG oslo_concurrency.lockutils [req-f9b1467a-f3bb-4220-856b-c311394681f0 req-ff952b1d-155e-478d-90e3-2eb70084d774 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:12:50 compute-0 nova_compute[265391]: 2025-09-30 18:12:50.838 2 DEBUG oslo_concurrency.lockutils [req-f9b1467a-f3bb-4220-856b-c311394681f0 req-ff952b1d-155e-478d-90e3-2eb70084d774 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:12:50 compute-0 nova_compute[265391]: 2025-09-30 18:12:50.840 2 DEBUG oslo_concurrency.lockutils [req-f9b1467a-f3bb-4220-856b-c311394681f0 req-ff952b1d-155e-478d-90e3-2eb70084d774 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:12:50 compute-0 nova_compute[265391]: 2025-09-30 18:12:50.841 2 DEBUG nova.compute.manager [req-f9b1467a-f3bb-4220-856b-c311394681f0 req-ff952b1d-155e-478d-90e3-2eb70084d774 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] No waiting events found dispatching network-vif-plugged-26d718f0-c921-490f-815d-8221b976f012 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:12:50 compute-0 nova_compute[265391]: 2025-09-30 18:12:50.841 2 WARNING nova.compute.manager [req-f9b1467a-f3bb-4220-856b-c311394681f0 req-ff952b1d-155e-478d-90e3-2eb70084d774 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Received unexpected event network-vif-plugged-26d718f0-c921-490f-815d-8221b976f012 for instance with vm_state resized and task_state None.
Sep 30 18:12:51 compute-0 ceph-mon[73755]: pgmap v1021: 353 pgs: 353 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 15 KiB/s rd, 1.6 KiB/s wr, 18 op/s
Sep 30 18:12:51 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:12:51 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:12:51 compute-0 sudo[305824]: pam_unix(sudo:session): session closed for user root
Sep 30 18:12:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:12:51 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:12:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:12:51 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:12:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:12:51 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:12:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:12:51 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:12:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:12:51 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:12:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:12:51 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:12:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:12:51 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:12:51 compute-0 sudo[305880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:12:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:51 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218001270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:51 compute-0 sudo[305880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:12:51 compute-0 sudo[305880]: pam_unix(sudo:session): session closed for user root
Sep 30 18:12:51 compute-0 sudo[305906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:12:51 compute-0 sudo[305906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:12:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:12:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:12:51.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:12:51 compute-0 nova_compute[265391]: 2025-09-30 18:12:51.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:52 compute-0 podman[305972]: 2025-09-30 18:12:52.005277914 +0000 UTC m=+0.041688564 container create dd21f49b7d9967e9cf539e031af3c745e823b8a244e20cb4ddbabb8d5ec3d6de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Sep 30 18:12:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1022: 353 pgs: 353 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 15 KiB/s rd, 1.6 KiB/s wr, 18 op/s
Sep 30 18:12:52 compute-0 systemd[1]: Started libpod-conmon-dd21f49b7d9967e9cf539e031af3c745e823b8a244e20cb4ddbabb8d5ec3d6de.scope.
Sep 30 18:12:52 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:12:52 compute-0 podman[305972]: 2025-09-30 18:12:51.987230655 +0000 UTC m=+0.023641335 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:12:52 compute-0 podman[305972]: 2025-09-30 18:12:52.094325887 +0000 UTC m=+0.130736627 container init dd21f49b7d9967e9cf539e031af3c745e823b8a244e20cb4ddbabb8d5ec3d6de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_curie, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Sep 30 18:12:52 compute-0 podman[305972]: 2025-09-30 18:12:52.101383571 +0000 UTC m=+0.137794221 container start dd21f49b7d9967e9cf539e031af3c745e823b8a244e20cb4ddbabb8d5ec3d6de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Sep 30 18:12:52 compute-0 podman[305972]: 2025-09-30 18:12:52.104688737 +0000 UTC m=+0.141099427 container attach dd21f49b7d9967e9cf539e031af3c745e823b8a244e20cb4ddbabb8d5ec3d6de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_curie, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Sep 30 18:12:52 compute-0 confident_curie[305989]: 167 167
Sep 30 18:12:52 compute-0 systemd[1]: libpod-dd21f49b7d9967e9cf539e031af3c745e823b8a244e20cb4ddbabb8d5ec3d6de.scope: Deactivated successfully.
Sep 30 18:12:52 compute-0 podman[305972]: 2025-09-30 18:12:52.108633439 +0000 UTC m=+0.145044099 container died dd21f49b7d9967e9cf539e031af3c745e823b8a244e20cb4ddbabb8d5ec3d6de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_curie, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid)
Sep 30 18:12:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:12:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:12:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:12:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:12:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:12:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:12:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:12:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-58771dd6d58c6828d61fbdbc9305b735b7f1ce6218d4957e2c61507489902a8c-merged.mount: Deactivated successfully.
Sep 30 18:12:52 compute-0 podman[305972]: 2025-09-30 18:12:52.150826335 +0000 UTC m=+0.187236975 container remove dd21f49b7d9967e9cf539e031af3c745e823b8a244e20cb4ddbabb8d5ec3d6de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_curie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:12:52 compute-0 systemd[1]: libpod-conmon-dd21f49b7d9967e9cf539e031af3c745e823b8a244e20cb4ddbabb8d5ec3d6de.scope: Deactivated successfully.
Sep 30 18:12:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:12:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:12:52 compute-0 podman[306012]: 2025-09-30 18:12:52.351884129 +0000 UTC m=+0.040238647 container create 38a3ed5bb49e1c762a25a65646669cbbc9b356421f24247e7672dd02e36aea50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_einstein, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Sep 30 18:12:52 compute-0 systemd[1]: Started libpod-conmon-38a3ed5bb49e1c762a25a65646669cbbc9b356421f24247e7672dd02e36aea50.scope.
Sep 30 18:12:52 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:12:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a1cb6722e1e69d1073895f5a7996d2c58b7d0f0cd0cad002e1774283963aeed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:12:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a1cb6722e1e69d1073895f5a7996d2c58b7d0f0cd0cad002e1774283963aeed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:12:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a1cb6722e1e69d1073895f5a7996d2c58b7d0f0cd0cad002e1774283963aeed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:12:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a1cb6722e1e69d1073895f5a7996d2c58b7d0f0cd0cad002e1774283963aeed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:12:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a1cb6722e1e69d1073895f5a7996d2c58b7d0f0cd0cad002e1774283963aeed/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:12:52 compute-0 podman[306012]: 2025-09-30 18:12:52.333575623 +0000 UTC m=+0.021930141 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:12:52 compute-0 podman[306012]: 2025-09-30 18:12:52.444817403 +0000 UTC m=+0.133171931 container init 38a3ed5bb49e1c762a25a65646669cbbc9b356421f24247e7672dd02e36aea50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_einstein, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Sep 30 18:12:52 compute-0 podman[306012]: 2025-09-30 18:12:52.451978259 +0000 UTC m=+0.140332777 container start 38a3ed5bb49e1c762a25a65646669cbbc9b356421f24247e7672dd02e36aea50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_einstein, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Sep 30 18:12:52 compute-0 podman[306012]: 2025-09-30 18:12:52.457370569 +0000 UTC m=+0.145725087 container attach 38a3ed5bb49e1c762a25a65646669cbbc9b356421f24247e7672dd02e36aea50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_einstein, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:12:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:12:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:12:52.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:12:52 compute-0 relaxed_einstein[306028]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:12:52 compute-0 relaxed_einstein[306028]: --> All data devices are unavailable
Sep 30 18:12:52 compute-0 systemd[1]: libpod-38a3ed5bb49e1c762a25a65646669cbbc9b356421f24247e7672dd02e36aea50.scope: Deactivated successfully.
Sep 30 18:12:52 compute-0 podman[306012]: 2025-09-30 18:12:52.826708565 +0000 UTC m=+0.515063103 container died 38a3ed5bb49e1c762a25a65646669cbbc9b356421f24247e7672dd02e36aea50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_einstein, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 18:12:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:52 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc21000bd00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a1cb6722e1e69d1073895f5a7996d2c58b7d0f0cd0cad002e1774283963aeed-merged.mount: Deactivated successfully.
Sep 30 18:12:52 compute-0 podman[306012]: 2025-09-30 18:12:52.879462406 +0000 UTC m=+0.567816934 container remove 38a3ed5bb49e1c762a25a65646669cbbc9b356421f24247e7672dd02e36aea50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:12:52 compute-0 systemd[1]: libpod-conmon-38a3ed5bb49e1c762a25a65646669cbbc9b356421f24247e7672dd02e36aea50.scope: Deactivated successfully.
Sep 30 18:12:52 compute-0 sudo[305906]: pam_unix(sudo:session): session closed for user root
Sep 30 18:12:53 compute-0 sudo[306056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:12:53 compute-0 sudo[306056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:12:53 compute-0 sudo[306056]: pam_unix(sudo:session): session closed for user root
Sep 30 18:12:53 compute-0 sudo[306081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:12:53 compute-0 sudo[306081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:12:53 compute-0 ceph-mon[73755]: pgmap v1022: 353 pgs: 353 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 15 KiB/s rd, 1.6 KiB/s wr, 18 op/s
Sep 30 18:12:53 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:12:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:53 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:53 compute-0 podman[306149]: 2025-09-30 18:12:53.556515956 +0000 UTC m=+0.053793899 container create 63f643c77c4540724457b51eba935647b4f1a3ea73b6e43a7a7ca7b0daddd51e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Sep 30 18:12:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:12:53.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:53 compute-0 systemd[1]: Started libpod-conmon-63f643c77c4540724457b51eba935647b4f1a3ea73b6e43a7a7ca7b0daddd51e.scope.
Sep 30 18:12:53 compute-0 podman[306149]: 2025-09-30 18:12:53.526127987 +0000 UTC m=+0.023405960 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:12:53 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:12:53 compute-0 podman[306149]: 2025-09-30 18:12:53.657583192 +0000 UTC m=+0.154861155 container init 63f643c77c4540724457b51eba935647b4f1a3ea73b6e43a7a7ca7b0daddd51e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Sep 30 18:12:53 compute-0 podman[306149]: 2025-09-30 18:12:53.665367704 +0000 UTC m=+0.162645657 container start 63f643c77c4540724457b51eba935647b4f1a3ea73b6e43a7a7ca7b0daddd51e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_fermat, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Sep 30 18:12:53 compute-0 podman[306149]: 2025-09-30 18:12:53.669520242 +0000 UTC m=+0.166798215 container attach 63f643c77c4540724457b51eba935647b4f1a3ea73b6e43a7a7ca7b0daddd51e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_fermat, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:12:53 compute-0 naughty_fermat[306165]: 167 167
Sep 30 18:12:53 compute-0 systemd[1]: libpod-63f643c77c4540724457b51eba935647b4f1a3ea73b6e43a7a7ca7b0daddd51e.scope: Deactivated successfully.
Sep 30 18:12:53 compute-0 podman[306149]: 2025-09-30 18:12:53.672288554 +0000 UTC m=+0.169566497 container died 63f643c77c4540724457b51eba935647b4f1a3ea73b6e43a7a7ca7b0daddd51e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_fermat, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Sep 30 18:12:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:12:53.678Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:12:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-b099b8b8b0053603c9770b1157964fba4f03cc7027ee95bcb81e41b9fcf6c54f-merged.mount: Deactivated successfully.
Sep 30 18:12:53 compute-0 podman[306149]: 2025-09-30 18:12:53.716722468 +0000 UTC m=+0.214000411 container remove 63f643c77c4540724457b51eba935647b4f1a3ea73b6e43a7a7ca7b0daddd51e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_fermat, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Sep 30 18:12:53 compute-0 systemd[1]: libpod-conmon-63f643c77c4540724457b51eba935647b4f1a3ea73b6e43a7a7ca7b0daddd51e.scope: Deactivated successfully.
Sep 30 18:12:53 compute-0 podman[306188]: 2025-09-30 18:12:53.943556082 +0000 UTC m=+0.060703768 container create c2c78dc8dceba1d68da4399f07af032ff322eb41ec9f50d204feb9a630e9248a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_satoshi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:12:53 compute-0 systemd[1]: Started libpod-conmon-c2c78dc8dceba1d68da4399f07af032ff322eb41ec9f50d204feb9a630e9248a.scope.
Sep 30 18:12:54 compute-0 podman[306188]: 2025-09-30 18:12:53.912865734 +0000 UTC m=+0.030013510 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:12:54 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f24f2d1d4ec9a56a2e3a6cd7349ef609468445ce415321073b729b5a955e546/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f24f2d1d4ec9a56a2e3a6cd7349ef609468445ce415321073b729b5a955e546/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f24f2d1d4ec9a56a2e3a6cd7349ef609468445ce415321073b729b5a955e546/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f24f2d1d4ec9a56a2e3a6cd7349ef609468445ce415321073b729b5a955e546/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:12:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1023: 353 pgs: 353 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 2.3 MiB/s rd, 1.7 KiB/s wr, 104 op/s
Sep 30 18:12:54 compute-0 podman[306188]: 2025-09-30 18:12:54.047073111 +0000 UTC m=+0.164220827 container init c2c78dc8dceba1d68da4399f07af032ff322eb41ec9f50d204feb9a630e9248a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1)
Sep 30 18:12:54 compute-0 podman[306188]: 2025-09-30 18:12:54.055850219 +0000 UTC m=+0.172997945 container start c2c78dc8dceba1d68da4399f07af032ff322eb41ec9f50d204feb9a630e9248a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Sep 30 18:12:54 compute-0 podman[306188]: 2025-09-30 18:12:54.059998057 +0000 UTC m=+0.177145773 container attach c2c78dc8dceba1d68da4399f07af032ff322eb41ec9f50d204feb9a630e9248a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_satoshi, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:12:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:54.280 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:12:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:54.280 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:12:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:12:54.281 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:12:54 compute-0 funny_satoshi[306205]: {
Sep 30 18:12:54 compute-0 funny_satoshi[306205]:     "0": [
Sep 30 18:12:54 compute-0 funny_satoshi[306205]:         {
Sep 30 18:12:54 compute-0 funny_satoshi[306205]:             "devices": [
Sep 30 18:12:54 compute-0 funny_satoshi[306205]:                 "/dev/loop3"
Sep 30 18:12:54 compute-0 funny_satoshi[306205]:             ],
Sep 30 18:12:54 compute-0 funny_satoshi[306205]:             "lv_name": "ceph_lv0",
Sep 30 18:12:54 compute-0 funny_satoshi[306205]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:12:54 compute-0 funny_satoshi[306205]:             "lv_size": "21470642176",
Sep 30 18:12:54 compute-0 funny_satoshi[306205]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:12:54 compute-0 funny_satoshi[306205]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:12:54 compute-0 funny_satoshi[306205]:             "name": "ceph_lv0",
Sep 30 18:12:54 compute-0 funny_satoshi[306205]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:12:54 compute-0 funny_satoshi[306205]:             "tags": {
Sep 30 18:12:54 compute-0 funny_satoshi[306205]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:12:54 compute-0 funny_satoshi[306205]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:12:54 compute-0 funny_satoshi[306205]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:12:54 compute-0 funny_satoshi[306205]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:12:54 compute-0 funny_satoshi[306205]:                 "ceph.cluster_name": "ceph",
Sep 30 18:12:54 compute-0 funny_satoshi[306205]:                 "ceph.crush_device_class": "",
Sep 30 18:12:54 compute-0 funny_satoshi[306205]:                 "ceph.encrypted": "0",
Sep 30 18:12:54 compute-0 funny_satoshi[306205]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:12:54 compute-0 funny_satoshi[306205]:                 "ceph.osd_id": "0",
Sep 30 18:12:54 compute-0 funny_satoshi[306205]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:12:54 compute-0 funny_satoshi[306205]:                 "ceph.type": "block",
Sep 30 18:12:54 compute-0 funny_satoshi[306205]:                 "ceph.vdo": "0",
Sep 30 18:12:54 compute-0 funny_satoshi[306205]:                 "ceph.with_tpm": "0"
Sep 30 18:12:54 compute-0 funny_satoshi[306205]:             },
Sep 30 18:12:54 compute-0 funny_satoshi[306205]:             "type": "block",
Sep 30 18:12:54 compute-0 funny_satoshi[306205]:             "vg_name": "ceph_vg0"
Sep 30 18:12:54 compute-0 funny_satoshi[306205]:         }
Sep 30 18:12:54 compute-0 funny_satoshi[306205]:     ]
Sep 30 18:12:54 compute-0 funny_satoshi[306205]: }
Sep 30 18:12:54 compute-0 systemd[1]: libpod-c2c78dc8dceba1d68da4399f07af032ff322eb41ec9f50d204feb9a630e9248a.scope: Deactivated successfully.
Sep 30 18:12:54 compute-0 conmon[306205]: conmon c2c78dc8dceba1d68da4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c2c78dc8dceba1d68da4399f07af032ff322eb41ec9f50d204feb9a630e9248a.scope/container/memory.events
Sep 30 18:12:54 compute-0 podman[306188]: 2025-09-30 18:12:54.387205388 +0000 UTC m=+0.504353084 container died c2c78dc8dceba1d68da4399f07af032ff322eb41ec9f50d204feb9a630e9248a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:12:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f24f2d1d4ec9a56a2e3a6cd7349ef609468445ce415321073b729b5a955e546-merged.mount: Deactivated successfully.
Sep 30 18:12:54 compute-0 nova_compute[265391]: 2025-09-30 18:12:54.432 2 DEBUG oslo_concurrency.lockutils [None req-22188bb0-f742-46be-9d38-991ce5b2d168 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:12:54 compute-0 nova_compute[265391]: 2025-09-30 18:12:54.432 2 DEBUG oslo_concurrency.lockutils [None req-22188bb0-f742-46be-9d38-991ce5b2d168 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:12:54 compute-0 nova_compute[265391]: 2025-09-30 18:12:54.432 2 DEBUG nova.compute.manager [None req-22188bb0-f742-46be-9d38-991ce5b2d168 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Going to confirm migration 1 do_confirm_resize /usr/lib/python3.12/site-packages/nova/compute/manager.py:5283
Sep 30 18:12:54 compute-0 podman[306188]: 2025-09-30 18:12:54.436665343 +0000 UTC m=+0.553813039 container remove c2c78dc8dceba1d68da4399f07af032ff322eb41ec9f50d204feb9a630e9248a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_satoshi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Sep 30 18:12:54 compute-0 systemd[1]: libpod-conmon-c2c78dc8dceba1d68da4399f07af032ff322eb41ec9f50d204feb9a630e9248a.scope: Deactivated successfully.
Sep 30 18:12:54 compute-0 sudo[306081]: pam_unix(sudo:session): session closed for user root
Sep 30 18:12:54 compute-0 sudo[306228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:12:54 compute-0 sudo[306228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:12:54 compute-0 sudo[306228]: pam_unix(sudo:session): session closed for user root
Sep 30 18:12:54 compute-0 nova_compute[265391]: 2025-09-30 18:12:54.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:54 compute-0 sudo[306253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:12:54 compute-0 sudo[306253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:12:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:12:54.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:54 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:54 compute-0 nova_compute[265391]: 2025-09-30 18:12:54.944 2 DEBUG nova.objects.instance [None req-22188bb0-f742-46be-9d38-991ce5b2d168 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lazy-loading 'info_cache' on Instance uuid 761dbb06-6272-4941-a65f-5ad2f8cfbb70 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:12:55 compute-0 podman[306319]: 2025-09-30 18:12:55.060694076 +0000 UTC m=+0.047239069 container create 3119139d3ad5a5770212dea2f4e3e21d039256077a98d8443ac52521a578a21d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_keller, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:12:55 compute-0 systemd[1]: Started libpod-conmon-3119139d3ad5a5770212dea2f4e3e21d039256077a98d8443ac52521a578a21d.scope.
Sep 30 18:12:55 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:12:55 compute-0 podman[306319]: 2025-09-30 18:12:55.041143858 +0000 UTC m=+0.027688881 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:12:55 compute-0 podman[306319]: 2025-09-30 18:12:55.137693346 +0000 UTC m=+0.124238379 container init 3119139d3ad5a5770212dea2f4e3e21d039256077a98d8443ac52521a578a21d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_keller, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:12:55 compute-0 ceph-mon[73755]: pgmap v1023: 353 pgs: 353 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 2.3 MiB/s rd, 1.7 KiB/s wr, 104 op/s
Sep 30 18:12:55 compute-0 podman[306319]: 2025-09-30 18:12:55.147142072 +0000 UTC m=+0.133687055 container start 3119139d3ad5a5770212dea2f4e3e21d039256077a98d8443ac52521a578a21d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 18:12:55 compute-0 podman[306319]: 2025-09-30 18:12:55.151433413 +0000 UTC m=+0.137978436 container attach 3119139d3ad5a5770212dea2f4e3e21d039256077a98d8443ac52521a578a21d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Sep 30 18:12:55 compute-0 vigilant_keller[306336]: 167 167
Sep 30 18:12:55 compute-0 systemd[1]: libpod-3119139d3ad5a5770212dea2f4e3e21d039256077a98d8443ac52521a578a21d.scope: Deactivated successfully.
Sep 30 18:12:55 compute-0 podman[306319]: 2025-09-30 18:12:55.158008474 +0000 UTC m=+0.144553467 container died 3119139d3ad5a5770212dea2f4e3e21d039256077a98d8443ac52521a578a21d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_keller, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Sep 30 18:12:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-76dd6dd83eb8a09164e38066fb741dea6031bbd2480ee3b44d573b6f396bf1a0-merged.mount: Deactivated successfully.
Sep 30 18:12:55 compute-0 podman[306319]: 2025-09-30 18:12:55.19248251 +0000 UTC m=+0.179027493 container remove 3119139d3ad5a5770212dea2f4e3e21d039256077a98d8443ac52521a578a21d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_keller, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Sep 30 18:12:55 compute-0 systemd[1]: libpod-conmon-3119139d3ad5a5770212dea2f4e3e21d039256077a98d8443ac52521a578a21d.scope: Deactivated successfully.
Sep 30 18:12:55 compute-0 podman[306360]: 2025-09-30 18:12:55.360399622 +0000 UTC m=+0.047582297 container create e1e14eef3191b19fade7155531aef437aa27fc9b7dd512d11e4f7937d645a88b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_euclid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Sep 30 18:12:55 compute-0 systemd[1]: Started libpod-conmon-e1e14eef3191b19fade7155531aef437aa27fc9b7dd512d11e4f7937d645a88b.scope.
Sep 30 18:12:55 compute-0 podman[306360]: 2025-09-30 18:12:55.341400139 +0000 UTC m=+0.028582794 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:12:55 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9acf838605bc522c85841b31d17f1d4b5d6a75e6f870c9f3427579d0cfa4fe7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9acf838605bc522c85841b31d17f1d4b5d6a75e6f870c9f3427579d0cfa4fe7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9acf838605bc522c85841b31d17f1d4b5d6a75e6f870c9f3427579d0cfa4fe7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:12:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:55 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218001270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:55 compute-0 nova_compute[265391]: 2025-09-30 18:12:55.459 2 WARNING neutronclient.v2_0.client [None req-22188bb0-f742-46be-9d38-991ce5b2d168 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9acf838605bc522c85841b31d17f1d4b5d6a75e6f870c9f3427579d0cfa4fe7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:12:55 compute-0 podman[306360]: 2025-09-30 18:12:55.470026659 +0000 UTC m=+0.157209294 container init e1e14eef3191b19fade7155531aef437aa27fc9b7dd512d11e4f7937d645a88b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_euclid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Sep 30 18:12:55 compute-0 podman[306360]: 2025-09-30 18:12:55.477652418 +0000 UTC m=+0.164835053 container start e1e14eef3191b19fade7155531aef437aa27fc9b7dd512d11e4f7937d645a88b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:12:55 compute-0 podman[306360]: 2025-09-30 18:12:55.48044156 +0000 UTC m=+0.167624195 container attach e1e14eef3191b19fade7155531aef437aa27fc9b7dd512d11e4f7937d645a88b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_euclid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Sep 30 18:12:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:12:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:12:55.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:12:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:12:55 compute-0 nova_compute[265391]: 2025-09-30 18:12:55.734 2 WARNING neutronclient.v2_0.client [None req-22188bb0-f742-46be-9d38-991ce5b2d168 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:12:55 compute-0 nova_compute[265391]: 2025-09-30 18:12:55.734 2 DEBUG oslo_concurrency.lockutils [None req-22188bb0-f742-46be-9d38-991ce5b2d168 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-761dbb06-6272-4941-a65f-5ad2f8cfbb70" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:12:55 compute-0 nova_compute[265391]: 2025-09-30 18:12:55.735 2 DEBUG oslo_concurrency.lockutils [None req-22188bb0-f742-46be-9d38-991ce5b2d168 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-761dbb06-6272-4941-a65f-5ad2f8cfbb70" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:12:55 compute-0 nova_compute[265391]: 2025-09-30 18:12:55.735 2 DEBUG nova.network.neutron [None req-22188bb0-f742-46be-9d38-991ce5b2d168 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:12:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1024: 353 pgs: 353 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 2.1 MiB/s rd, 1.6 KiB/s wr, 95 op/s
Sep 30 18:12:56 compute-0 lvm[306453]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:12:56 compute-0 lvm[306453]: VG ceph_vg0 finished
Sep 30 18:12:56 compute-0 lvm[306457]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:12:56 compute-0 lvm[306457]: VG ceph_vg0 finished
Sep 30 18:12:56 compute-0 romantic_euclid[306377]: {}
Sep 30 18:12:56 compute-0 systemd[1]: libpod-e1e14eef3191b19fade7155531aef437aa27fc9b7dd512d11e4f7937d645a88b.scope: Deactivated successfully.
Sep 30 18:12:56 compute-0 systemd[1]: libpod-e1e14eef3191b19fade7155531aef437aa27fc9b7dd512d11e4f7937d645a88b.scope: Consumed 1.121s CPU time.
Sep 30 18:12:56 compute-0 podman[306360]: 2025-09-30 18:12:56.210415255 +0000 UTC m=+0.897597880 container died e1e14eef3191b19fade7155531aef437aa27fc9b7dd512d11e4f7937d645a88b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_euclid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:12:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9acf838605bc522c85841b31d17f1d4b5d6a75e6f870c9f3427579d0cfa4fe7-merged.mount: Deactivated successfully.
Sep 30 18:12:56 compute-0 nova_compute[265391]: 2025-09-30 18:12:56.242 2 WARNING neutronclient.v2_0.client [None req-22188bb0-f742-46be-9d38-991ce5b2d168 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:12:56 compute-0 podman[306360]: 2025-09-30 18:12:56.268971977 +0000 UTC m=+0.956154652 container remove e1e14eef3191b19fade7155531aef437aa27fc9b7dd512d11e4f7937d645a88b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_euclid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 18:12:56 compute-0 systemd[1]: libpod-conmon-e1e14eef3191b19fade7155531aef437aa27fc9b7dd512d11e4f7937d645a88b.scope: Deactivated successfully.
Sep 30 18:12:56 compute-0 sudo[306253]: pam_unix(sudo:session): session closed for user root
Sep 30 18:12:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:12:56 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:12:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:12:56 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:12:56 compute-0 sudo[306469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:12:56 compute-0 sudo[306469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:12:56 compute-0 sudo[306469]: pam_unix(sudo:session): session closed for user root
Sep 30 18:12:56 compute-0 nova_compute[265391]: 2025-09-30 18:12:56.715 2 WARNING neutronclient.v2_0.client [None req-22188bb0-f742-46be-9d38-991ce5b2d168 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:12:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:12:56.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:56 compute-0 nova_compute[265391]: 2025-09-30 18:12:56.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:56 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc21000bd00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:56 compute-0 nova_compute[265391]: 2025-09-30 18:12:56.866 2 DEBUG oslo_concurrency.lockutils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Acquiring lock "23ad643b-d29f-4fe8-a347-92df178ae0cd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:12:56 compute-0 nova_compute[265391]: 2025-09-30 18:12:56.867 2 DEBUG oslo_concurrency.lockutils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "23ad643b-d29f-4fe8-a347-92df178ae0cd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:12:56 compute-0 nova_compute[265391]: 2025-09-30 18:12:56.876 2 DEBUG nova.network.neutron [None req-22188bb0-f742-46be-9d38-991ce5b2d168 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Updating instance_info_cache with network_info: [{"id": "26d718f0-c921-490f-815d-8221b976f012", "address": "fa:16:3e:0e:22:1f", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d718f0-c9", "ovs_interfaceid": "26d718f0-c921-490f-815d-8221b976f012", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:12:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:12:57.165Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:12:57 compute-0 ceph-mon[73755]: pgmap v1024: 353 pgs: 353 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 2.1 MiB/s rd, 1.6 KiB/s wr, 95 op/s
Sep 30 18:12:57 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:12:57 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:12:57 compute-0 nova_compute[265391]: 2025-09-30 18:12:57.372 2 DEBUG nova.compute.manager [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Sep 30 18:12:57 compute-0 nova_compute[265391]: 2025-09-30 18:12:57.381 2 DEBUG oslo_concurrency.lockutils [None req-22188bb0-f742-46be-9d38-991ce5b2d168 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-761dbb06-6272-4941-a65f-5ad2f8cfbb70" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:12:57 compute-0 nova_compute[265391]: 2025-09-30 18:12:57.382 2 DEBUG nova.objects.instance [None req-22188bb0-f742-46be-9d38-991ce5b2d168 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lazy-loading 'migration_context' on Instance uuid 761dbb06-6272-4941-a65f-5ad2f8cfbb70 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:12:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:57 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:12:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2881353993' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:12:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:12:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2881353993' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:12:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:12:57.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:57 compute-0 nova_compute[265391]: 2025-09-30 18:12:57.889 2 DEBUG nova.objects.base [None req-22188bb0-f742-46be-9d38-991ce5b2d168 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Object Instance<761dbb06-6272-4941-a65f-5ad2f8cfbb70> lazy-loaded attributes: info_cache,migration_context wrapper /usr/lib/python3.12/site-packages/nova/objects/base.py:136
Sep 30 18:12:57 compute-0 nova_compute[265391]: 2025-09-30 18:12:57.976 2 DEBUG nova.storage.rbd_utils [None req-22188bb0-f742-46be-9d38-991ce5b2d168 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] removing snapshot(nova-resize) on rbd image(761dbb06-6272-4941-a65f-5ad2f8cfbb70_disk) remove_snap /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:489
Sep 30 18:12:57 compute-0 nova_compute[265391]: 2025-09-30 18:12:57.997 2 DEBUG oslo_concurrency.lockutils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:12:57 compute-0 nova_compute[265391]: 2025-09-30 18:12:57.999 2 DEBUG oslo_concurrency.lockutils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:12:58 compute-0 nova_compute[265391]: 2025-09-30 18:12:58.007 2 DEBUG nova.virt.hardware [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Sep 30 18:12:58 compute-0 nova_compute[265391]: 2025-09-30 18:12:58.008 2 INFO nova.compute.claims [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Claim successful on node compute-0.ctlplane.example.com
Sep 30 18:12:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1025: 353 pgs: 353 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.4 KiB/s wr, 87 op/s
Sep 30 18:12:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Sep 30 18:12:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2881353993' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:12:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2881353993' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:12:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e139 e139: 2 total, 2 up, 2 in
Sep 30 18:12:58 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e139: 2 total, 2 up, 2 in
Sep 30 18:12:58 compute-0 nova_compute[265391]: 2025-09-30 18:12:58.245 2 DEBUG oslo_concurrency.lockutils [None req-22188bb0-f742-46be-9d38-991ce5b2d168 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:12:58 compute-0 podman[306533]: 2025-09-30 18:12:58.581812296 +0000 UTC m=+0.101779485 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20250930)
Sep 30 18:12:58 compute-0 podman[306534]: 2025-09-30 18:12:58.599328031 +0000 UTC m=+0.112802372 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.7, managed_by=edpm_ansible, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, version=9.6, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc.)
Sep 30 18:12:58 compute-0 podman[306532]: 2025-09-30 18:12:58.623283293 +0000 UTC m=+0.144781932 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930)
Sep 30 18:12:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:12:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:12:58.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:12:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:12:58] "GET /metrics HTTP/1.1" 200 46651 "" "Prometheus/2.51.0"
Sep 30 18:12:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:12:58] "GET /metrics HTTP/1.1" 200 46651 "" "Prometheus/2.51.0"
Sep 30 18:12:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:58 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:59 compute-0 nova_compute[265391]: 2025-09-30 18:12:59.088 2 DEBUG oslo_concurrency.processutils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:12:59 compute-0 ceph-mon[73755]: pgmap v1025: 353 pgs: 353 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.4 KiB/s wr, 87 op/s
Sep 30 18:12:59 compute-0 ceph-mon[73755]: osdmap e139: 2 total, 2 up, 2 in
Sep 30 18:12:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:12:59 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:12:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:12:59 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2916960512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:12:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:12:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:12:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:12:59.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:12:59 compute-0 nova_compute[265391]: 2025-09-30 18:12:59.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:12:59 compute-0 nova_compute[265391]: 2025-09-30 18:12:59.667 2 DEBUG oslo_concurrency.processutils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
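The resource tracker above shells out to ceph df to size the RBD-backed storage before reporting and claiming disk. A hedged equivalent of that probe, assuming the usual top-level "stats" fields in the JSON output:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)["stats"]
    total_gib = stats["total_bytes"] / 1024 ** 3
    avail_gib = stats["total_avail_bytes"] / 1024 ** 3
    print(f"cluster capacity: {avail_gib:.1f} GiB free of {total_gib:.1f} GiB")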
Sep 30 18:12:59 compute-0 nova_compute[265391]: 2025-09-30 18:12:59.673 2 DEBUG nova.compute.provider_tree [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:12:59 compute-0 podman[276673]: time="2025-09-30T18:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:12:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:12:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10718 "" "Go-http-client/1.1"
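The two podman entries above are a metrics collector walking the libpod REST API over the service socket. A stdlib-only sketch of the same GET, assuming the default root socket path /run/podman/podman.sock:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix socket, enough to reach the libpod API."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")   # assumed socket path
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    print(conn.getresponse().read()[:200])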
Sep 30 18:13:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1027: 353 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 341 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 2.3 MiB/s rd, 614 B/s wr, 96 op/s
Sep 30 18:13:00 compute-0 nova_compute[265391]: 2025-09-30 18:13:00.181 2 DEBUG nova.scheduler.client.report [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
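The inventory dict above is what placement uses to bound scheduling; the usable amount of each resource class is (total - reserved) * allocation_ratio. Worked out for the values logged here:

    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 39,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)   # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 34.2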
Sep 30 18:13:00 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2916960512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:13:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:13:00 compute-0 nova_compute[265391]: 2025-09-30 18:13:00.692 2 DEBUG oslo_concurrency.lockutils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.694s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:13:00 compute-0 nova_compute[265391]: 2025-09-30 18:13:00.693 2 DEBUG nova.compute.manager [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Sep 30 18:13:00 compute-0 nova_compute[265391]: 2025-09-30 18:13:00.697 2 DEBUG oslo_concurrency.lockutils [None req-22188bb0-f742-46be-9d38-991ce5b2d168 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 2.452s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:13:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:13:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:13:00.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:13:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:00 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc21000bd00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:01 compute-0 nova_compute[265391]: 2025-09-30 18:13:01.209 2 DEBUG nova.compute.manager [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Sep 30 18:13:01 compute-0 nova_compute[265391]: 2025-09-30 18:13:01.210 2 DEBUG nova.network.neutron [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Sep 30 18:13:01 compute-0 nova_compute[265391]: 2025-09-30 18:13:01.210 2 WARNING neutronclient.v2_0.client [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:13:01 compute-0 nova_compute[265391]: 2025-09-30 18:13:01.211 2 WARNING neutronclient.v2_0.client [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:13:01 compute-0 ceph-mon[73755]: pgmap v1027: 353 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 341 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 2.3 MiB/s rd, 614 B/s wr, 96 op/s
Sep 30 18:13:01 compute-0 nova_compute[265391]: 2025-09-30 18:13:01.301 2 DEBUG oslo_concurrency.processutils [None req-22188bb0-f742-46be-9d38-991ce5b2d168 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:13:01 compute-0 openstack_network_exporter[279566]: ERROR   18:13:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:13:01 compute-0 openstack_network_exporter[279566]: ERROR   18:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:13:01 compute-0 openstack_network_exporter[279566]: ERROR   18:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:13:01 compute-0 openstack_network_exporter[279566]: ERROR   18:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:13:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:13:01 compute-0 openstack_network_exporter[279566]: ERROR   18:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:13:01 compute-0 openstack_network_exporter[279566]: 
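The exporter errors above all trace back to missing ovs/ovn control sockets; the directories it expects are the ones mounted into its container per the config_data logged earlier. A quick check for those sockets on the host (paths taken from that volume list; the *.ctl naming is an assumption):

    import glob
    import os

    for rundir in ("/var/run/openvswitch", "/var/lib/openvswitch/ovn"):
        ctl = glob.glob(os.path.join(rundir, "*.ctl"))
        print(rundir, "->", ctl if ctl else "no control sockets found")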
Sep 30 18:13:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:01 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:13:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:13:01.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:13:01 compute-0 nova_compute[265391]: 2025-09-30 18:13:01.730 2 INFO nova.virt.libvirt.driver [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Sep 30 18:13:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:13:01 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3301438619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:13:01 compute-0 nova_compute[265391]: 2025-09-30 18:13:01.782 2 DEBUG oslo_concurrency.processutils [None req-22188bb0-f742-46be-9d38-991ce5b2d168 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:13:01 compute-0 nova_compute[265391]: 2025-09-30 18:13:01.788 2 DEBUG nova.compute.provider_tree [None req-22188bb0-f742-46be-9d38-991ce5b2d168 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:13:01 compute-0 nova_compute[265391]: 2025-09-30 18:13:01.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:01 compute-0 nova_compute[265391]: 2025-09-30 18:13:01.988 2 DEBUG nova.network.neutron [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Successfully created port: 50f7398a-769c-4636-b498-5162fce10f7d _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Sep 30 18:13:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1028: 353 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 341 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 2.3 MiB/s rd, 614 B/s wr, 96 op/s
Sep 30 18:13:02 compute-0 nova_compute[265391]: 2025-09-30 18:13:02.239 2 DEBUG nova.compute.manager [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Sep 30 18:13:02 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3301438619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:13:02 compute-0 ceph-mon[73755]: pgmap v1028: 353 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 341 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 2.3 MiB/s rd, 614 B/s wr, 96 op/s
Sep 30 18:13:02 compute-0 nova_compute[265391]: 2025-09-30 18:13:02.298 2 DEBUG nova.scheduler.client.report [None req-22188bb0-f742-46be-9d38-991ce5b2d168 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:13:02 compute-0 ovn_controller[156242]: 2025-09-30T18:13:02Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0e:22:1f 10.100.0.7
Sep 30 18:13:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:13:02.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:02 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218001270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:03 compute-0 nova_compute[265391]: 2025-09-30 18:13:03.084 2 DEBUG nova.network.neutron [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Successfully updated port: 50f7398a-769c-4636-b498-5162fce10f7d _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Sep 30 18:13:03 compute-0 nova_compute[265391]: 2025-09-30 18:13:03.149 2 DEBUG nova.compute.manager [req-9734e999-5eb1-464a-be68-e24b62a7f6b8 req-a1245f8f-d6a1-4cac-8ce8-fb47d6921961 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Received event network-changed-50f7398a-769c-4636-b498-5162fce10f7d external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:13:03 compute-0 nova_compute[265391]: 2025-09-30 18:13:03.150 2 DEBUG nova.compute.manager [req-9734e999-5eb1-464a-be68-e24b62a7f6b8 req-a1245f8f-d6a1-4cac-8ce8-fb47d6921961 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Refreshing instance network info cache due to event network-changed-50f7398a-769c-4636-b498-5162fce10f7d. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:13:03 compute-0 nova_compute[265391]: 2025-09-30 18:13:03.150 2 DEBUG oslo_concurrency.lockutils [req-9734e999-5eb1-464a-be68-e24b62a7f6b8 req-a1245f8f-d6a1-4cac-8ce8-fb47d6921961 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-23ad643b-d29f-4fe8-a347-92df178ae0cd" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:13:03 compute-0 nova_compute[265391]: 2025-09-30 18:13:03.150 2 DEBUG oslo_concurrency.lockutils [req-9734e999-5eb1-464a-be68-e24b62a7f6b8 req-a1245f8f-d6a1-4cac-8ce8-fb47d6921961 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-23ad643b-d29f-4fe8-a347-92df178ae0cd" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:13:03 compute-0 nova_compute[265391]: 2025-09-30 18:13:03.150 2 DEBUG nova.network.neutron [req-9734e999-5eb1-464a-be68-e24b62a7f6b8 req-a1245f8f-d6a1-4cac-8ce8-fb47d6921961 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Refreshing network info cache for port 50f7398a-769c-4636-b498-5162fce10f7d _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:13:03 compute-0 nova_compute[265391]: 2025-09-30 18:13:03.263 2 DEBUG nova.compute.manager [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Sep 30 18:13:03 compute-0 nova_compute[265391]: 2025-09-30 18:13:03.264 2 DEBUG nova.virt.libvirt.driver [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Sep 30 18:13:03 compute-0 nova_compute[265391]: 2025-09-30 18:13:03.264 2 INFO nova.virt.libvirt.driver [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Creating image(s)
Sep 30 18:13:03 compute-0 nova_compute[265391]: 2025-09-30 18:13:03.287 2 DEBUG nova.storage.rbd_utils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] rbd image 23ad643b-d29f-4fe8-a347-92df178ae0cd_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:13:03 compute-0 nova_compute[265391]: 2025-09-30 18:13:03.308 2 DEBUG nova.storage.rbd_utils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] rbd image 23ad643b-d29f-4fe8-a347-92df178ae0cd_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:13:03 compute-0 nova_compute[265391]: 2025-09-30 18:13:03.328 2 DEBUG nova.storage.rbd_utils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] rbd image 23ad643b-d29f-4fe8-a347-92df178ae0cd_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:13:03 compute-0 nova_compute[265391]: 2025-09-30 18:13:03.331 2 DEBUG oslo_concurrency.processutils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:13:03 compute-0 nova_compute[265391]: 2025-09-30 18:13:03.343 2 DEBUG oslo_concurrency.lockutils [None req-22188bb0-f742-46be-9d38-991ce5b2d168 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 2.645s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:13:03 compute-0 nova_compute[265391]: 2025-09-30 18:13:03.405 2 DEBUG oslo_concurrency.processutils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
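Before importing the cached base image, nova probes it with qemu-img info wrapped in oslo_concurrency.prlimit (address space capped at 1 GiB, CPU at 30 s, per the command line above). A plain sketch of the same probe without the resource caps, assuming the standard JSON keys qemu-img emits:

    import json
    import subprocess

    base = "/var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457"
    out = subprocess.run(
        ["qemu-img", "info", base, "--force-share", "--output=json"],
        check=True, capture_output=True, text=True).stdout
    info = json.loads(out)
    print(info["format"], info["virtual-size"])   # image format and size in bytes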
Sep 30 18:13:03 compute-0 nova_compute[265391]: 2025-09-30 18:13:03.406 2 DEBUG oslo_concurrency.lockutils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Acquiring lock "cb2d580238c9b109feae7f1462613dc547671457" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:13:03 compute-0 nova_compute[265391]: 2025-09-30 18:13:03.406 2 DEBUG oslo_concurrency.lockutils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:13:03 compute-0 nova_compute[265391]: 2025-09-30 18:13:03.407 2 DEBUG oslo_concurrency.lockutils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:13:03 compute-0 nova_compute[265391]: 2025-09-30 18:13:03.426 2 DEBUG nova.storage.rbd_utils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] rbd image 23ad643b-d29f-4fe8-a347-92df178ae0cd_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:13:03 compute-0 nova_compute[265391]: 2025-09-30 18:13:03.429 2 DEBUG oslo_concurrency.processutils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 23ad643b-d29f-4fe8-a347-92df178ae0cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:13:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:03 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:03 compute-0 nova_compute[265391]: 2025-09-30 18:13:03.593 2 DEBUG oslo_concurrency.lockutils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Acquiring lock "refresh_cache-23ad643b-d29f-4fe8-a347-92df178ae0cd" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:13:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:13:03.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:03 compute-0 nova_compute[265391]: 2025-09-30 18:13:03.658 2 WARNING neutronclient.v2_0.client [req-9734e999-5eb1-464a-be68-e24b62a7f6b8 req-a1245f8f-d6a1-4cac-8ce8-fb47d6921961 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:13:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:13:03.681Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:13:03 compute-0 nova_compute[265391]: 2025-09-30 18:13:03.715 2 DEBUG oslo_concurrency.processutils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 23ad643b-d29f-4fe8-a347-92df178ae0cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.286s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:13:03 compute-0 nova_compute[265391]: 2025-09-30 18:13:03.790 2 DEBUG nova.network.neutron [req-9734e999-5eb1-464a-be68-e24b62a7f6b8 req-a1245f8f-d6a1-4cac-8ce8-fb47d6921961 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:13:03 compute-0 nova_compute[265391]: 2025-09-30 18:13:03.798 2 DEBUG nova.storage.rbd_utils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] resizing rbd image 23ad643b-d29f-4fe8-a347-92df178ae0cd_disk to 1073741824 resize /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:288
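The import-then-resize above (rbd import into the vms pool, then growing the disk to 1073741824 bytes to match the 1 GiB flavor root disk) can also be done with the rbd python bindings rather than the CLI; a sketch, with the pool, image name and size taken from the log:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            with rbd.Image(ioctx, "23ad643b-d29f-4fe8-a347-92df178ae0cd_disk") as image:
                if image.size() < 1024 ** 3:
                    image.resize(1024 ** 3)   # grow the root disk to 1 GiB
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()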
Sep 30 18:13:03 compute-0 nova_compute[265391]: 2025-09-30 18:13:03.932 2 DEBUG nova.virt.libvirt.driver [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Sep 30 18:13:03 compute-0 nova_compute[265391]: 2025-09-30 18:13:03.934 2 DEBUG nova.virt.libvirt.driver [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Ensure instance console log exists: /var/lib/nova/instances/23ad643b-d29f-4fe8-a347-92df178ae0cd/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Sep 30 18:13:03 compute-0 nova_compute[265391]: 2025-09-30 18:13:03.934 2 DEBUG oslo_concurrency.lockutils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:13:03 compute-0 nova_compute[265391]: 2025-09-30 18:13:03.935 2 DEBUG oslo_concurrency.lockutils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:13:03 compute-0 nova_compute[265391]: 2025-09-30 18:13:03.935 2 DEBUG oslo_concurrency.lockutils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:13:03 compute-0 nova_compute[265391]: 2025-09-30 18:13:03.963 2 INFO nova.scheduler.client.report [None req-22188bb0-f742-46be-9d38-991ce5b2d168 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Deleted allocation for migration d5fb4765-43c9-45e5-9a44-0773cf0d1510
Sep 30 18:13:04 compute-0 nova_compute[265391]: 2025-09-30 18:13:04.005 2 DEBUG nova.network.neutron [req-9734e999-5eb1-464a-be68-e24b62a7f6b8 req-a1245f8f-d6a1-4cac-8ce8-fb47d6921961 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:13:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1029: 353 pgs: 353 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 645 KiB/s rd, 17 KiB/s wr, 65 op/s
Sep 30 18:13:04 compute-0 nova_compute[265391]: 2025-09-30 18:13:04.473 2 DEBUG oslo_concurrency.lockutils [None req-22188bb0-f742-46be-9d38-991ce5b2d168 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 10.040s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:13:04 compute-0 nova_compute[265391]: 2025-09-30 18:13:04.515 2 DEBUG oslo_concurrency.lockutils [req-9734e999-5eb1-464a-be68-e24b62a7f6b8 req-a1245f8f-d6a1-4cac-8ce8-fb47d6921961 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-23ad643b-d29f-4fe8-a347-92df178ae0cd" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:13:04 compute-0 nova_compute[265391]: 2025-09-30 18:13:04.516 2 DEBUG oslo_concurrency.lockutils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Acquired lock "refresh_cache-23ad643b-d29f-4fe8-a347-92df178ae0cd" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:13:04 compute-0 nova_compute[265391]: 2025-09-30 18:13:04.516 2 DEBUG nova.network.neutron [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:13:04 compute-0 nova_compute[265391]: 2025-09-30 18:13:04.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:13:04.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:04 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc21000bd00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:05 compute-0 ceph-mon[73755]: pgmap v1029: 353 pgs: 353 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 645 KiB/s rd, 17 KiB/s wr, 65 op/s
Sep 30 18:13:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:05 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:13:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Sep 30 18:13:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:13:05.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e140 e140: 2 total, 2 up, 2 in
Sep 30 18:13:05 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e140: 2 total, 2 up, 2 in
Sep 30 18:13:05 compute-0 nova_compute[265391]: 2025-09-30 18:13:05.782 2 DEBUG nova.network.neutron [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:13:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1031: 353 pgs: 353 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 806 KiB/s rd, 21 KiB/s wr, 82 op/s
Sep 30 18:13:06 compute-0 ceph-mon[73755]: osdmap e140: 2 total, 2 up, 2 in
Sep 30 18:13:06 compute-0 ceph-mon[73755]: pgmap v1031: 353 pgs: 353 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 806 KiB/s rd, 21 KiB/s wr, 82 op/s
Sep 30 18:13:06 compute-0 nova_compute[265391]: 2025-09-30 18:13:06.753 2 WARNING neutronclient.v2_0.client [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:13:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:13:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:13:06.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:13:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:06 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218001270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:06 compute-0 nova_compute[265391]: 2025-09-30 18:13:06.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:06 compute-0 nova_compute[265391]: 2025-09-30 18:13:06.928 2 DEBUG nova.network.neutron [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Updating instance_info_cache with network_info: [{"id": "50f7398a-769c-4636-b498-5162fce10f7d", "address": "fa:16:3e:d1:86:73", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50f7398a-76", "ovs_interfaceid": "50f7398a-769c-4636-b498-5162fce10f7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:13:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:13:07.166Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:13:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:13:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:13:07 compute-0 nova_compute[265391]: 2025-09-30 18:13:07.436 2 DEBUG oslo_concurrency.lockutils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Releasing lock "refresh_cache-23ad643b-d29f-4fe8-a347-92df178ae0cd" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:13:07 compute-0 nova_compute[265391]: 2025-09-30 18:13:07.437 2 DEBUG nova.compute.manager [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Instance network_info: |[{"id": "50f7398a-769c-4636-b498-5162fce10f7d", "address": "fa:16:3e:d1:86:73", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50f7398a-76", "ovs_interfaceid": "50f7398a-769c-4636-b498-5162fce10f7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Sep 30 18:13:07 compute-0 nova_compute[265391]: 2025-09-30 18:13:07.441 2 DEBUG nova.virt.libvirt.driver [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Start _get_guest_xml network_info=[{"id": "50f7398a-769c-4636-b498-5162fce10f7d", "address": "fa:16:3e:d1:86:73", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50f7398a-76", "ovs_interfaceid": "50f7398a-769c-4636-b498-5162fce10f7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'image_id': '5b99cbca-b655-4be5-8343-cf504005c42e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Sep 30 18:13:07 compute-0 nova_compute[265391]: 2025-09-30 18:13:07.447 2 WARNING nova.virt.libvirt.driver [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:13:07 compute-0 nova_compute[265391]: 2025-09-30 18:13:07.450 2 DEBUG nova.virt.driver [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='5b99cbca-b655-4be5-8343-cf504005c42e', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteActionsViaActuator-server-19459247', uuid='23ad643b-d29f-4fe8-a347-92df178ae0cd'), owner=OwnerMeta(userid='dc3bb71c425f484fbc46f90978029403', username='tempest-TestExecuteActionsViaActuator-837729328-project-admin', projectid='ddd1f985d8b64b449c79d55b0cbd6422', projectname='tempest-TestExecuteActionsViaActuator-837729328'), image=ImageMeta(id='5b99cbca-b655-4be5-8343-cf504005c42e', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "50f7398a-769c-4636-b498-5162fce10f7d", "address": "fa:16:3e:d1:86:73", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50f7398a-76", "ovs_interfaceid": "50f7398a-769c-4636-b498-5162fce10f7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20250919142712.b99a882.el10', creation_time=1759255987.4501073) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Sep 30 18:13:07 compute-0 nova_compute[265391]: 2025-09-30 18:13:07.455 2 DEBUG nova.virt.libvirt.host [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Sep 30 18:13:07 compute-0 nova_compute[265391]: 2025-09-30 18:13:07.456 2 DEBUG nova.virt.libvirt.host [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Sep 30 18:13:07 compute-0 nova_compute[265391]: 2025-09-30 18:13:07.459 2 DEBUG nova.virt.libvirt.host [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Sep 30 18:13:07 compute-0 nova_compute[265391]: 2025-09-30 18:13:07.459 2 DEBUG nova.virt.libvirt.host [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
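Editor's note: the two host.py lines above show Nova probing for a cgroup v1 CPU controller, finding none, and then finding one on the cgroup v2 unified hierarchy. A rough approximation of the v2 probe; reading /sys/fs/cgroup/cgroup.controllers is an assumption about how such a check can be done on this host, not a quote of Nova's implementation:

    # Sketch: check whether the unified cgroup v2 hierarchy exposes the "cpu" controller.
    # Mirrors the intent of _has_cgroupsv2_cpu_controller, not its exact code.
    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        controllers = Path(root, "cgroup.controllers")
        if not controllers.exists():
            return False          # not a cgroup v2 (unified) host
        return "cpu" in controllers.read_text().split()

    print("cgroup v2 cpu controller found:", has_cgroupsv2_cpu_controller())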
Sep 30 18:13:07 compute-0 nova_compute[265391]: 2025-09-30 18:13:07.460 2 DEBUG nova.virt.libvirt.driver [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Sep 30 18:13:07 compute-0 nova_compute[265391]: 2025-09-30 18:13:07.460 2 DEBUG nova.virt.hardware [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-09-30T18:04:50Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Sep 30 18:13:07 compute-0 nova_compute[265391]: 2025-09-30 18:13:07.461 2 DEBUG nova.virt.hardware [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Sep 30 18:13:07 compute-0 nova_compute[265391]: 2025-09-30 18:13:07.462 2 DEBUG nova.virt.hardware [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Sep 30 18:13:07 compute-0 nova_compute[265391]: 2025-09-30 18:13:07.462 2 DEBUG nova.virt.hardware [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Sep 30 18:13:07 compute-0 nova_compute[265391]: 2025-09-30 18:13:07.463 2 DEBUG nova.virt.hardware [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Sep 30 18:13:07 compute-0 nova_compute[265391]: 2025-09-30 18:13:07.463 2 DEBUG nova.virt.hardware [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Sep 30 18:13:07 compute-0 nova_compute[265391]: 2025-09-30 18:13:07.463 2 DEBUG nova.virt.hardware [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Sep 30 18:13:07 compute-0 nova_compute[265391]: 2025-09-30 18:13:07.464 2 DEBUG nova.virt.hardware [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Sep 30 18:13:07 compute-0 nova_compute[265391]: 2025-09-30 18:13:07.464 2 DEBUG nova.virt.hardware [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Sep 30 18:13:07 compute-0 nova_compute[265391]: 2025-09-30 18:13:07.465 2 DEBUG nova.virt.hardware [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Sep 30 18:13:07 compute-0 nova_compute[265391]: 2025-09-30 18:13:07.465 2 DEBUG nova.virt.hardware [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
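Editor's note: the hardware.py lines above enumerate (sockets, cores, threads) triples for 1 vCPU with no flavor or image preferences (0:0:0) and the default 65536 maxima, ending up with the single 1:1:1 topology. A simplified sketch of that enumeration; the same idea, not Nova's exact algorithm:

    # Sketch: enumerate CPU topologies whose product equals the vCPU count,
    # bounded by per-dimension maxima (65536 in the log). With 1 vCPU the only
    # candidate is sockets=1, cores=1, threads=1, matching the log.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        topologies = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            for cores in range(1, min(vcpus, max_cores) + 1):
                for threads in range(1, min(vcpus, max_threads) + 1):
                    if sockets * cores * threads == vcpus:
                        topologies.append((sockets, cores, threads))
        return topologies

    print(possible_topologies(1))   # [(1, 1, 1)]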
Sep 30 18:13:07 compute-0 nova_compute[265391]: 2025-09-30 18:13:07.471 2 DEBUG oslo_concurrency.processutils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:13:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:07 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:13:07.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:13:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002277819154131666 of space, bias 1.0, pg target 0.4555638308263332 quantized to 32 (current 32)
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
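Editor's note: the pg_autoscaler lines above follow a simple proportion: each pool's PG target is its share of raw capacity, times its bias, times an overall PG budget, then quantized to a power of two. The logged numbers are consistent with a budget of about 200 PGs (for example a default of 100 target PGs per OSD across 2 OSDs, which is an assumption about this cluster). A worked check for the 'vms' and 'cephfs.cephfs.meta' pools:

    # Sketch: reproduce the pg_autoscaler arithmetic from the log lines above.
    # pg_budget = 200 is an assumption (100 target PGs per OSD x 2 OSDs); the
    # capacity ratios and biases are copied verbatim from the log.
    def pg_target(capacity_ratio, bias, pg_budget=200):
        return capacity_ratio * bias * pg_budget

    print(pg_target(0.002277819154131666, 1.0))   # ~0.4556 -> quantized to 32 ('vms')
    print(pg_target(7.630884938464543e-07, 4.0))  # ~0.00061 -> quantized to 16 ('cephfs.cephfs.meta')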
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:13:07
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'images', '.nfs', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'volumes', 'backups']
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:13:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:13:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:13:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3477657276' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:13:07 compute-0 nova_compute[265391]: 2025-09-30 18:13:07.938 2 DEBUG oslo_concurrency.processutils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:13:07 compute-0 nova_compute[265391]: 2025-09-30 18:13:07.972 2 DEBUG nova.storage.rbd_utils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] rbd image 23ad643b-d29f-4fe8-a347-92df178ae0cd_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:13:07 compute-0 nova_compute[265391]: 2025-09-30 18:13:07.976 2 DEBUG oslo_concurrency.processutils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:13:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1032: 353 pgs: 353 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 655 KiB/s rd, 17 KiB/s wr, 65 op/s
Sep 30 18:13:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:13:08 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2651128805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:13:08 compute-0 nova_compute[265391]: 2025-09-30 18:13:08.481 2 DEBUG oslo_concurrency.processutils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
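Editor's note: both "ceph mon dump" invocations in this sequence are how the RBD backend discovers the monitor endpoints that later appear as the <host> elements in the generated domain XML. A hedged sketch of the same call run outside Nova; the command line is copied from the log, while the JSON field names ("mons", "public_addr") are assumptions about typical `ceph mon dump --format=json` output:

    # Sketch: run the same "ceph mon dump" command the log shows and list monitor
    # addresses. Requires the openstack keyring and /etc/ceph/ceph.conf on the host.
    import json
    import subprocess

    cmd = ["ceph", "mon", "dump", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    mon_map = json.loads(subprocess.check_output(cmd))

    for mon in mon_map.get("mons", []):
        # Field names are assumptions; adjust to the actual JSON if it differs.
        print(mon.get("name"), mon.get("public_addr"))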
Sep 30 18:13:08 compute-0 nova_compute[265391]: 2025-09-30 18:13:08.484 2 DEBUG nova.virt.libvirt.vif [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:12:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-19459247',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-19459247',id=6,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ddd1f985d8b64b449c79d55b0cbd6422',ramdisk_id='',reservation_id='r-i5u830kg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteActionsViaActuator-837729328',owner_user_name='tempest-TestExecuteActionsViaActuator-837729328-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:13:02Z,user_data=None,user_id='dc3bb71c425f484fbc46f90978029403',uuid=23ad643b-d29f-4fe8-a347-92df178ae0cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "50f7398a-769c-4636-b498-5162fce10f7d", "address": "fa:16:3e:d1:86:73", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50f7398a-76", "ovs_interfaceid": "50f7398a-769c-4636-b498-5162fce10f7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:13:08 compute-0 nova_compute[265391]: 2025-09-30 18:13:08.485 2 DEBUG nova.network.os_vif_util [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Converting VIF {"id": "50f7398a-769c-4636-b498-5162fce10f7d", "address": "fa:16:3e:d1:86:73", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50f7398a-76", "ovs_interfaceid": "50f7398a-769c-4636-b498-5162fce10f7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:13:08 compute-0 nova_compute[265391]: 2025-09-30 18:13:08.486 2 DEBUG nova.network.os_vif_util [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:86:73,bridge_name='br-int',has_traffic_filtering=True,id=50f7398a-769c-4636-b498-5162fce10f7d,network=Network(5fff1904-159a-4b76-8c46-feabf17f29ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50f7398a-76') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:13:08 compute-0 nova_compute[265391]: 2025-09-30 18:13:08.488 2 DEBUG nova.objects.instance [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lazy-loading 'pci_devices' on Instance uuid 23ad643b-d29f-4fe8-a347-92df178ae0cd obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:13:08 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3477657276' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:13:08 compute-0 ceph-mon[73755]: pgmap v1032: 353 pgs: 353 active+clean; 200 MiB data, 285 MiB used, 40 GiB / 40 GiB avail; 655 KiB/s rd, 17 KiB/s wr, 65 op/s
Sep 30 18:13:08 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2651128805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:13:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:13:08] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:13:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:13:08] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:13:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:13:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:13:08.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:13:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:08 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:09 compute-0 nova_compute[265391]: 2025-09-30 18:13:09.016 2 DEBUG nova.virt.libvirt.driver [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] End _get_guest_xml xml=<domain type="kvm">
Sep 30 18:13:09 compute-0 nova_compute[265391]:   <uuid>23ad643b-d29f-4fe8-a347-92df178ae0cd</uuid>
Sep 30 18:13:09 compute-0 nova_compute[265391]:   <name>instance-00000006</name>
Sep 30 18:13:09 compute-0 nova_compute[265391]:   <memory>131072</memory>
Sep 30 18:13:09 compute-0 nova_compute[265391]:   <vcpu>1</vcpu>
Sep 30 18:13:09 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:13:09 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteActionsViaActuator-server-19459247</nova:name>
Sep 30 18:13:09 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:13:07</nova:creationTime>
Sep 30 18:13:09 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:13:09 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:13:09 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:13:09 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:13:09 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:13:09 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:13:09 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:13:09 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:13:09 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:13:09 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:13:09 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:13:09 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:13:09 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:13:09 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:13:09 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:13:09 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:13:09 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:13:09 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:13:09 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:13:09 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:13:09 compute-0 nova_compute[265391]:         <nova:user uuid="dc3bb71c425f484fbc46f90978029403">tempest-TestExecuteActionsViaActuator-837729328-project-admin</nova:user>
Sep 30 18:13:09 compute-0 nova_compute[265391]:         <nova:project uuid="ddd1f985d8b64b449c79d55b0cbd6422">tempest-TestExecuteActionsViaActuator-837729328</nova:project>
Sep 30 18:13:09 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:13:09 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:13:09 compute-0 nova_compute[265391]:         <nova:port uuid="50f7398a-769c-4636-b498-5162fce10f7d">
Sep 30 18:13:09 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:13:09 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:13:09 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:13:09 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <system>
Sep 30 18:13:09 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:13:09 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:13:09 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:13:09 compute-0 nova_compute[265391]:       <entry name="serial">23ad643b-d29f-4fe8-a347-92df178ae0cd</entry>
Sep 30 18:13:09 compute-0 nova_compute[265391]:       <entry name="uuid">23ad643b-d29f-4fe8-a347-92df178ae0cd</entry>
Sep 30 18:13:09 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     </system>
Sep 30 18:13:09 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:13:09 compute-0 nova_compute[265391]:   <os>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="q35">hvm</type>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:   </os>
Sep 30 18:13:09 compute-0 nova_compute[265391]:   <features>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <vmcoreinfo/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:   </features>
Sep 30 18:13:09 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:13:09 compute-0 nova_compute[265391]:   <cpu mode="host-model" match="exact">
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <topology sockets="1" cores="1" threads="1"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:13:09 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:13:09 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/23ad643b-d29f-4fe8-a347-92df178ae0cd_disk">
Sep 30 18:13:09 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:       </source>
Sep 30 18:13:09 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:13:09 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:13:09 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:13:09 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/23ad643b-d29f-4fe8-a347-92df178ae0cd_disk.config">
Sep 30 18:13:09 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:       </source>
Sep 30 18:13:09 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:13:09 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:13:09 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:13:09 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:d1:86:73"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:       <target dev="tap50f7398a-76"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:13:09 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/23ad643b-d29f-4fe8-a347-92df178ae0cd/console.log" append="off"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <video>
Sep 30 18:13:09 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     </video>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:13:09 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <controller type="usb" index="0"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:13:09 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:13:09 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:13:09 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:13:09 compute-0 nova_compute[265391]: </domain>
Sep 30 18:13:09 compute-0 nova_compute[265391]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
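Editor's note: the XML emitted by _get_guest_xml above is ordinary libvirt domain XML; once the journald/nova prefixes are stripped it can be inspected with standard tooling. A small sketch that parses a saved copy (the "domain.xml" path is hypothetical) and pulls out the name, memory, RBD disk sources and tap device shown above:

    # Sketch: inspect the generated libvirt domain XML. "domain.xml" is a
    # hypothetical file holding the <domain>...</domain> block from the log
    # with the log prefixes removed.
    import xml.etree.ElementTree as ET

    root = ET.parse("domain.xml").getroot()

    print("name:", root.findtext("name"))            # instance-00000006
    print("memory (KiB):", root.findtext("memory"))  # 131072

    for disk in root.findall("./devices/disk"):
        source = disk.find("source")
        target = disk.find("target")
        print(disk.get("device"), source.get("protocol"), source.get("name"),
              "->", target.get("dev"))

    for iface in root.findall("./devices/interface"):
        print("interface:", iface.find("mac").get("address"),
              iface.find("target").get("dev"))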
Sep 30 18:13:09 compute-0 nova_compute[265391]: 2025-09-30 18:13:09.016 2 DEBUG nova.compute.manager [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Preparing to wait for external event network-vif-plugged-50f7398a-769c-4636-b498-5162fce10f7d prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:13:09 compute-0 nova_compute[265391]: 2025-09-30 18:13:09.017 2 DEBUG oslo_concurrency.lockutils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Acquiring lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:13:09 compute-0 nova_compute[265391]: 2025-09-30 18:13:09.017 2 DEBUG oslo_concurrency.lockutils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:13:09 compute-0 nova_compute[265391]: 2025-09-30 18:13:09.017 2 DEBUG oslo_concurrency.lockutils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:13:09 compute-0 nova_compute[265391]: 2025-09-30 18:13:09.018 2 DEBUG nova.virt.libvirt.vif [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:12:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-19459247',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-19459247',id=6,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ddd1f985d8b64b449c79d55b0cbd6422',ramdisk_id='',reservation_id='r-i5u830kg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteActionsViaActuator-837729328',owner_user_name='tempest-TestExecuteActionsViaActuator-837729328-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:13:02Z,user_data=None,user_id='dc3bb71c425f484fbc46f90978029403',uuid=23ad643b-d29f-4fe8-a347-92df178ae0cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "50f7398a-769c-4636-b498-5162fce10f7d", "address": "fa:16:3e:d1:86:73", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50f7398a-76", "ovs_interfaceid": "50f7398a-769c-4636-b498-5162fce10f7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Sep 30 18:13:09 compute-0 nova_compute[265391]: 2025-09-30 18:13:09.019 2 DEBUG nova.network.os_vif_util [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Converting VIF {"id": "50f7398a-769c-4636-b498-5162fce10f7d", "address": "fa:16:3e:d1:86:73", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50f7398a-76", "ovs_interfaceid": "50f7398a-769c-4636-b498-5162fce10f7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:13:09 compute-0 nova_compute[265391]: 2025-09-30 18:13:09.020 2 DEBUG nova.network.os_vif_util [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:86:73,bridge_name='br-int',has_traffic_filtering=True,id=50f7398a-769c-4636-b498-5162fce10f7d,network=Network(5fff1904-159a-4b76-8c46-feabf17f29ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50f7398a-76') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:13:09 compute-0 nova_compute[265391]: 2025-09-30 18:13:09.021 2 DEBUG os_vif [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:86:73,bridge_name='br-int',has_traffic_filtering=True,id=50f7398a-769c-4636-b498-5162fce10f7d,network=Network(5fff1904-159a-4b76-8c46-feabf17f29ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50f7398a-76') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Sep 30 18:13:09 compute-0 nova_compute[265391]: 2025-09-30 18:13:09.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:09 compute-0 nova_compute[265391]: 2025-09-30 18:13:09.022 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:13:09 compute-0 nova_compute[265391]: 2025-09-30 18:13:09.023 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:13:09 compute-0 nova_compute[265391]: 2025-09-30 18:13:09.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:09 compute-0 nova_compute[265391]: 2025-09-30 18:13:09.024 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '288f77af-2421-5067-9a5e-6eb1b0752a1f', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:13:09 compute-0 nova_compute[265391]: 2025-09-30 18:13:09.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:09 compute-0 nova_compute[265391]: 2025-09-30 18:13:09.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:13:09 compute-0 nova_compute[265391]: 2025-09-30 18:13:09.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:09 compute-0 nova_compute[265391]: 2025-09-30 18:13:09.031 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50f7398a-76, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:13:09 compute-0 nova_compute[265391]: 2025-09-30 18:13:09.031 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap50f7398a-76, col_values=(('qos', UUID('7dad547f-c386-4e00-ac36-4c93668a6ee8')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:13:09 compute-0 nova_compute[265391]: 2025-09-30 18:13:09.031 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap50f7398a-76, col_values=(('external_ids', {'iface-id': '50f7398a-769c-4636-b498-5162fce10f7d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d1:86:73', 'vm-uuid': '23ad643b-d29f-4fe8-a347-92df178ae0cd'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:13:09 compute-0 nova_compute[265391]: 2025-09-30 18:13:09.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:09 compute-0 NetworkManager[45059]: <info>  [1759255989.0337] manager: (tap50f7398a-76): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Sep 30 18:13:09 compute-0 nova_compute[265391]: 2025-09-30 18:13:09.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:13:09 compute-0 nova_compute[265391]: 2025-09-30 18:13:09.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:09 compute-0 nova_compute[265391]: 2025-09-30 18:13:09.045 2 INFO os_vif [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:86:73,bridge_name='br-int',has_traffic_filtering=True,id=50f7398a-769c-4636-b498-5162fce10f7d,network=Network(5fff1904-159a-4b76-8c46-feabf17f29ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50f7398a-76')
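Editor's note: the ovsdbapp transactions above (AddBridgeCommand, the DbCreateCommand for the linux-noop QoS row, AddPortCommand and the two DbSetCommands) amount to attaching tap50f7398a-76 to br-int and tagging its Interface row with the Neutron port id. A rough manual equivalent using ovs-vsctl, wrapped in Python for consistency with the other sketches; this is a simplification (the QoS row is omitted) and not what os-vif actually executes:

    # Sketch: approximate the logged ovsdbapp transactions with ovs-vsctl.
    import subprocess

    port = "tap50f7398a-76"
    iface_id = "50f7398a-769c-4636-b498-5162fce10f7d"
    mac = "fa:16:3e:d1:86:73"
    vm_uuid = "23ad643b-d29f-4fe8-a347-92df178ae0cd"

    for cmd in (
        ["ovs-vsctl", "--may-exist", "add-br", "br-int"],
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", port],
        ["ovs-vsctl", "set", "Interface", port,
         f"external_ids:iface-id={iface_id}",
         "external_ids:iface-status=active",
         f"external_ids:attached-mac={mac}",
         f"external_ids:vm-uuid={vm_uuid}"],
    ):
        subprocess.check_call(cmd)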
Sep 30 18:13:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:09 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:09 compute-0 nova_compute[265391]: 2025-09-30 18:13:09.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:13:09.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:09 compute-0 sudo[306878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:13:09 compute-0 sudo[306878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:13:09 compute-0 sudo[306878]: pam_unix(sudo:session): session closed for user root
Sep 30 18:13:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1033: 353 pgs: 353 active+clean; 248 MiB data, 307 MiB used, 40 GiB / 40 GiB avail; 658 KiB/s rd, 2.2 MiB/s wr, 89 op/s
Sep 30 18:13:10 compute-0 nova_compute[265391]: 2025-09-30 18:13:10.610 2 DEBUG nova.virt.libvirt.driver [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:13:10 compute-0 nova_compute[265391]: 2025-09-30 18:13:10.610 2 DEBUG nova.virt.libvirt.driver [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:13:10 compute-0 nova_compute[265391]: 2025-09-30 18:13:10.611 2 DEBUG nova.virt.libvirt.driver [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] No VIF found with MAC fa:16:3e:d1:86:73, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Sep 30 18:13:10 compute-0 nova_compute[265391]: 2025-09-30 18:13:10.612 2 INFO nova.virt.libvirt.driver [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Using config drive
Sep 30 18:13:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:13:10 compute-0 nova_compute[265391]: 2025-09-30 18:13:10.647 2 DEBUG nova.storage.rbd_utils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] rbd image 23ad643b-d29f-4fe8-a347-92df178ae0cd_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:13:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:13:10.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:10 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218001270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:11 compute-0 ceph-mon[73755]: pgmap v1033: 353 pgs: 353 active+clean; 248 MiB data, 307 MiB used, 40 GiB / 40 GiB avail; 658 KiB/s rd, 2.2 MiB/s wr, 89 op/s
Sep 30 18:13:11 compute-0 nova_compute[265391]: 2025-09-30 18:13:11.163 2 WARNING neutronclient.v2_0.client [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:13:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:11 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:11 compute-0 nova_compute[265391]: 2025-09-30 18:13:11.574 2 INFO nova.virt.libvirt.driver [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Creating config drive at /var/lib/nova/instances/23ad643b-d29f-4fe8-a347-92df178ae0cd/disk.config
Sep 30 18:13:11 compute-0 nova_compute[265391]: 2025-09-30 18:13:11.580 2 DEBUG oslo_concurrency.processutils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/23ad643b-d29f-4fe8-a347-92df178ae0cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmpkc4oyrjf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:13:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:13:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:13:11.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:13:11 compute-0 nova_compute[265391]: 2025-09-30 18:13:11.716 2 DEBUG oslo_concurrency.processutils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/23ad643b-d29f-4fe8-a347-92df178ae0cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmpkc4oyrjf" returned: 0 in 0.136s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:13:11 compute-0 nova_compute[265391]: 2025-09-30 18:13:11.755 2 DEBUG nova.storage.rbd_utils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] rbd image 23ad643b-d29f-4fe8-a347-92df178ae0cd_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:13:11 compute-0 nova_compute[265391]: 2025-09-30 18:13:11.759 2 DEBUG oslo_concurrency.processutils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/23ad643b-d29f-4fe8-a347-92df178ae0cd/disk.config 23ad643b-d29f-4fe8-a347-92df178ae0cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:13:11 compute-0 nova_compute[265391]: 2025-09-30 18:13:11.943 2 DEBUG oslo_concurrency.processutils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/23ad643b-d29f-4fe8-a347-92df178ae0cd/disk.config 23ad643b-d29f-4fe8-a347-92df178ae0cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.184s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:13:11 compute-0 nova_compute[265391]: 2025-09-30 18:13:11.945 2 INFO nova.virt.libvirt.driver [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Deleting local config drive /var/lib/nova/instances/23ad643b-d29f-4fe8-a347-92df178ae0cd/disk.config because it was imported into RBD.
Sep 30 18:13:12 compute-0 kernel: tap50f7398a-76: entered promiscuous mode
Sep 30 18:13:12 compute-0 NetworkManager[45059]: <info>  [1759255992.0054] manager: (tap50f7398a-76): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Sep 30 18:13:12 compute-0 ovn_controller[156242]: 2025-09-30T18:13:12Z|00076|binding|INFO|Claiming lport 50f7398a-769c-4636-b498-5162fce10f7d for this chassis.
Sep 30 18:13:12 compute-0 ovn_controller[156242]: 2025-09-30T18:13:12Z|00077|binding|INFO|50f7398a-769c-4636-b498-5162fce10f7d: Claiming fa:16:3e:d1:86:73 10.100.0.9
Sep 30 18:13:12 compute-0 nova_compute[265391]: 2025-09-30 18:13:12.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:13:12.016 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:86:73 10.100.0.9'], port_security=['fa:16:3e:d1:86:73 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '23ad643b-d29f-4fe8-a347-92df178ae0cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5fff1904-159a-4b76-8c46-feabf17f29ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddd1f985d8b64b449c79d55b0cbd6422', 'neutron:revision_number': '4', 'neutron:security_group_ids': '34f3cf7b-94cf-408f-b3dc-ae0b57c009fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12c18a77-b252-4a3e-a181-b42644879446, chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=50f7398a-769c-4636-b498-5162fce10f7d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:13:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:13:12.017 166158 INFO neutron.agent.ovn.metadata.agent [-] Port 50f7398a-769c-4636-b498-5162fce10f7d in datapath 5fff1904-159a-4b76-8c46-feabf17f29ab bound to our chassis
Sep 30 18:13:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:13:12.018 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5fff1904-159a-4b76-8c46-feabf17f29ab
Sep 30 18:13:12 compute-0 ovn_controller[156242]: 2025-09-30T18:13:12Z|00078|binding|INFO|Setting lport 50f7398a-769c-4636-b498-5162fce10f7d ovn-installed in OVS
Sep 30 18:13:12 compute-0 ovn_controller[156242]: 2025-09-30T18:13:12Z|00079|binding|INFO|Setting lport 50f7398a-769c-4636-b498-5162fce10f7d up in Southbound
Sep 30 18:13:12 compute-0 nova_compute[265391]: 2025-09-30 18:13:12.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:13:12.043 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[f227ccba-0cc3-451f-ad39-62d738043014]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:13:12 compute-0 nova_compute[265391]: 2025-09-30 18:13:12.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:12 compute-0 systemd-udevd[306978]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:13:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1034: 353 pgs: 353 active+clean; 248 MiB data, 307 MiB used, 40 GiB / 40 GiB avail; 658 KiB/s rd, 2.2 MiB/s wr, 89 op/s
Sep 30 18:13:12 compute-0 systemd-machined[219917]: New machine qemu-5-instance-00000006.
Sep 30 18:13:12 compute-0 NetworkManager[45059]: <info>  [1759255992.0608] device (tap50f7398a-76): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 18:13:12 compute-0 NetworkManager[45059]: <info>  [1759255992.0622] device (tap50f7398a-76): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Sep 30 18:13:12 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000006.
Sep 30 18:13:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:13:12.081 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[36c92c4d-c79a-46bd-af93-2254c4768f9b]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:13:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:13:12.083 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[fd53732d-71ee-4375-b1b0-5d8fad04fc39]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:13:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:13:12.113 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[dc88e943-1c6a-41c2-8cba-34361d1bd956]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:13:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:13:12.135 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[23ef9098-f72d-43ab-afde-1a11873701aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5fff1904-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:07:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456734, 'reachable_time': 15699, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306991, 'error': None, 'target': 'ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:13:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:13:12.157 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[49fecc7b-55c6-48ec-90c5-c8dc841aa3bd]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5fff1904-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 456747, 'tstamp': 456747}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306993, 'error': None, 'target': 'ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5fff1904-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 456750, 'tstamp': 456750}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306993, 'error': None, 'target': 'ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:13:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:13:12.158 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5fff1904-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:13:12 compute-0 nova_compute[265391]: 2025-09-30 18:13:12.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:13:12.163 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5fff1904-10, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:13:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:13:12.163 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:13:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:13:12.164 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5fff1904-10, col_values=(('external_ids', {'iface-id': '3a8ea0a0-c179-4516-9404-04b68a17e79e'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:13:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:13:12.164 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:13:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:13:12.167 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[fc9c0885-9b9b-44b6-a08f-19e9eb2cbc87]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-5fff1904-159a-4b76-8c46-feabf17f29ab\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/5fff1904-159a-4b76-8c46-feabf17f29ab.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 5fff1904-159a-4b76-8c46-feabf17f29ab\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:13:12 compute-0 nova_compute[265391]: 2025-09-30 18:13:12.196 2 DEBUG nova.compute.manager [req-52b78bb4-cd23-41bb-a886-ded937cfd29c req-0195dcc9-88a5-40bf-be6e-17fb4104a8d6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Received event network-vif-plugged-50f7398a-769c-4636-b498-5162fce10f7d external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:13:12 compute-0 nova_compute[265391]: 2025-09-30 18:13:12.197 2 DEBUG oslo_concurrency.lockutils [req-52b78bb4-cd23-41bb-a886-ded937cfd29c req-0195dcc9-88a5-40bf-be6e-17fb4104a8d6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:13:12 compute-0 nova_compute[265391]: 2025-09-30 18:13:12.197 2 DEBUG oslo_concurrency.lockutils [req-52b78bb4-cd23-41bb-a886-ded937cfd29c req-0195dcc9-88a5-40bf-be6e-17fb4104a8d6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:13:12 compute-0 nova_compute[265391]: 2025-09-30 18:13:12.197 2 DEBUG oslo_concurrency.lockutils [req-52b78bb4-cd23-41bb-a886-ded937cfd29c req-0195dcc9-88a5-40bf-be6e-17fb4104a8d6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:13:12 compute-0 nova_compute[265391]: 2025-09-30 18:13:12.198 2 DEBUG nova.compute.manager [req-52b78bb4-cd23-41bb-a886-ded937cfd29c req-0195dcc9-88a5-40bf-be6e-17fb4104a8d6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Processing event network-vif-plugged-50f7398a-769c-4636-b498-5162fce10f7d _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:13:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:13:12.762 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:13:12 compute-0 nova_compute[265391]: 2025-09-30 18:13:12.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:13:12.766 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:13:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:13:12.767 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:13:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:13:12.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:12 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:13 compute-0 ceph-mon[73755]: pgmap v1034: 353 pgs: 353 active+clean; 248 MiB data, 307 MiB used, 40 GiB / 40 GiB avail; 658 KiB/s rd, 2.2 MiB/s wr, 89 op/s
Sep 30 18:13:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:13 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:13:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:13:13.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:13:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:13:13.682Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:13:13 compute-0 nova_compute[265391]: 2025-09-30 18:13:13.950 2 DEBUG nova.compute.manager [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:13:13 compute-0 nova_compute[265391]: 2025-09-30 18:13:13.954 2 DEBUG nova.virt.libvirt.driver [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Sep 30 18:13:13 compute-0 nova_compute[265391]: 2025-09-30 18:13:13.959 2 INFO nova.virt.libvirt.driver [-] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Instance spawned successfully.
Sep 30 18:13:13 compute-0 nova_compute[265391]: 2025-09-30 18:13:13.960 2 DEBUG nova.virt.libvirt.driver [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Sep 30 18:13:14 compute-0 nova_compute[265391]: 2025-09-30 18:13:14.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1035: 353 pgs: 353 active+clean; 248 MiB data, 307 MiB used, 40 GiB / 40 GiB avail; 24 KiB/s rd, 2.2 MiB/s wr, 39 op/s
Sep 30 18:13:14 compute-0 nova_compute[265391]: 2025-09-30 18:13:14.275 2 DEBUG nova.compute.manager [req-9ff10092-7191-48b2-a51b-c3ea27d863d3 req-69558204-067e-4f4e-be62-b8f3d21b4d9f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Received event network-vif-plugged-50f7398a-769c-4636-b498-5162fce10f7d external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:13:14 compute-0 nova_compute[265391]: 2025-09-30 18:13:14.275 2 DEBUG oslo_concurrency.lockutils [req-9ff10092-7191-48b2-a51b-c3ea27d863d3 req-69558204-067e-4f4e-be62-b8f3d21b4d9f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:13:14 compute-0 nova_compute[265391]: 2025-09-30 18:13:14.275 2 DEBUG oslo_concurrency.lockutils [req-9ff10092-7191-48b2-a51b-c3ea27d863d3 req-69558204-067e-4f4e-be62-b8f3d21b4d9f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:13:14 compute-0 nova_compute[265391]: 2025-09-30 18:13:14.275 2 DEBUG oslo_concurrency.lockutils [req-9ff10092-7191-48b2-a51b-c3ea27d863d3 req-69558204-067e-4f4e-be62-b8f3d21b4d9f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:13:14 compute-0 nova_compute[265391]: 2025-09-30 18:13:14.276 2 DEBUG nova.compute.manager [req-9ff10092-7191-48b2-a51b-c3ea27d863d3 req-69558204-067e-4f4e-be62-b8f3d21b4d9f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] No waiting events found dispatching network-vif-plugged-50f7398a-769c-4636-b498-5162fce10f7d pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:13:14 compute-0 nova_compute[265391]: 2025-09-30 18:13:14.276 2 WARNING nova.compute.manager [req-9ff10092-7191-48b2-a51b-c3ea27d863d3 req-69558204-067e-4f4e-be62-b8f3d21b4d9f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Received unexpected event network-vif-plugged-50f7398a-769c-4636-b498-5162fce10f7d for instance with vm_state building and task_state spawning.
Sep 30 18:13:14 compute-0 nova_compute[265391]: 2025-09-30 18:13:14.487 2 DEBUG nova.virt.libvirt.driver [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:13:14 compute-0 nova_compute[265391]: 2025-09-30 18:13:14.488 2 DEBUG nova.virt.libvirt.driver [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:13:14 compute-0 nova_compute[265391]: 2025-09-30 18:13:14.488 2 DEBUG nova.virt.libvirt.driver [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:13:14 compute-0 nova_compute[265391]: 2025-09-30 18:13:14.488 2 DEBUG nova.virt.libvirt.driver [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:13:14 compute-0 nova_compute[265391]: 2025-09-30 18:13:14.489 2 DEBUG nova.virt.libvirt.driver [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:13:14 compute-0 nova_compute[265391]: 2025-09-30 18:13:14.489 2 DEBUG nova.virt.libvirt.driver [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:13:14 compute-0 nova_compute[265391]: 2025-09-30 18:13:14.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:13:14.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:14 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:15 compute-0 nova_compute[265391]: 2025-09-30 18:13:15.000 2 INFO nova.compute.manager [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Took 11.74 seconds to spawn the instance on the hypervisor.
Sep 30 18:13:15 compute-0 nova_compute[265391]: 2025-09-30 18:13:15.000 2 DEBUG nova.compute.manager [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Sep 30 18:13:15 compute-0 ceph-mon[73755]: pgmap v1035: 353 pgs: 353 active+clean; 248 MiB data, 307 MiB used, 40 GiB / 40 GiB avail; 24 KiB/s rd, 2.2 MiB/s wr, 39 op/s
Sep 30 18:13:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:15 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:15 compute-0 nova_compute[265391]: 2025-09-30 18:13:15.530 2 INFO nova.compute.manager [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Took 17.65 seconds to build instance.
Sep 30 18:13:15 compute-0 podman[307042]: 2025-09-30 18:13:15.546267979 +0000 UTC m=+0.078789708 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:13:15 compute-0 podman[307041]: 2025-09-30 18:13:15.578360493 +0000 UTC m=+0.111287782 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=watcher_latest, config_id=ovn_controller, container_name=ovn_controller)
Sep 30 18:13:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:13:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:13:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:13:15.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:13:16 compute-0 nova_compute[265391]: 2025-09-30 18:13:16.035 2 DEBUG oslo_concurrency.lockutils [None req-ac8e5487-ea8a-4dfb-970d-c8a6bbdc1e99 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "23ad643b-d29f-4fe8-a347-92df178ae0cd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.168s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:13:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1036: 353 pgs: 353 active+clean; 248 MiB data, 307 MiB used, 40 GiB / 40 GiB avail; 23 KiB/s rd, 2.1 MiB/s wr, 38 op/s
Sep 30 18:13:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:13:16.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:16 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:17 compute-0 ceph-mon[73755]: pgmap v1036: 353 pgs: 353 active+clean; 248 MiB data, 307 MiB used, 40 GiB / 40 GiB avail; 23 KiB/s rd, 2.1 MiB/s wr, 38 op/s
Sep 30 18:13:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:13:17.167Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:13:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:13:17.168Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:13:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:17 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:13:17.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1037: 353 pgs: 353 active+clean; 248 MiB data, 307 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Sep 30 18:13:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:13:18] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:13:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:13:18] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:13:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:13:18.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:18 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:19 compute-0 nova_compute[265391]: 2025-09-30 18:13:19.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:19 compute-0 ceph-mon[73755]: pgmap v1037: 353 pgs: 353 active+clean; 248 MiB data, 307 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Sep 30 18:13:19 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Sep 30 18:13:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:13:19.202997) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 18:13:19 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Sep 30 18:13:19 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255999203041, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1918, "num_deletes": 502, "total_data_size": 3098394, "memory_usage": 3143872, "flush_reason": "Manual Compaction"}
Sep 30 18:13:19 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Sep 30 18:13:19 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255999215234, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 2725194, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27070, "largest_seqno": 28987, "table_properties": {"data_size": 2717285, "index_size": 4211, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 20928, "raw_average_key_size": 20, "raw_value_size": 2699017, "raw_average_value_size": 2590, "num_data_blocks": 185, "num_entries": 1042, "num_filter_entries": 1042, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759255846, "oldest_key_time": 1759255846, "file_creation_time": 1759255999, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:13:19 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 12313 microseconds, and 6421 cpu microseconds.
Sep 30 18:13:19 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:13:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:13:19.215304) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 2725194 bytes OK
Sep 30 18:13:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:13:19.215336) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Sep 30 18:13:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:13:19.217885) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Sep 30 18:13:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:13:19.217910) EVENT_LOG_v1 {"time_micros": 1759255999217902, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 18:13:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:13:19.217937) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 18:13:19 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3089242, prev total WAL file size 3089242, number of live WAL files 2.
Sep 30 18:13:19 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:13:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:13:19.219368) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Sep 30 18:13:19 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 18:13:19 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(2661KB)], [62(13MB)]
Sep 30 18:13:19 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255999219417, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 16549300, "oldest_snapshot_seqno": -1}
Sep 30 18:13:19 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5540 keys, 11016712 bytes, temperature: kUnknown
Sep 30 18:13:19 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255999285956, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 11016712, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10981241, "index_size": 20513, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13893, "raw_key_size": 142882, "raw_average_key_size": 25, "raw_value_size": 10882507, "raw_average_value_size": 1964, "num_data_blocks": 827, "num_entries": 5540, "num_filter_entries": 5540, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759255999, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:13:19 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:13:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:13:19.286251) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 11016712 bytes
Sep 30 18:13:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:13:19.287558) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 248.4 rd, 165.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 13.2 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(10.1) write-amplify(4.0) OK, records in: 6548, records dropped: 1008 output_compression: NoCompression
Sep 30 18:13:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:13:19.287578) EVENT_LOG_v1 {"time_micros": 1759255999287568, "job": 34, "event": "compaction_finished", "compaction_time_micros": 66628, "compaction_time_cpu_micros": 40483, "output_level": 6, "num_output_files": 1, "total_output_size": 11016712, "num_input_records": 6548, "num_output_records": 5540, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 18:13:19 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:13:19 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255999288144, "job": 34, "event": "table_file_deletion", "file_number": 64}
Sep 30 18:13:19 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:13:19 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759255999291055, "job": 34, "event": "table_file_deletion", "file_number": 62}
Sep 30 18:13:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:13:19.219236) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:13:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:13:19.291338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:13:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:13:19.291387) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:13:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:13:19.291389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:13:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:13:19.291391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:13:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:13:19.291393) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:13:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:19 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:19 compute-0 nova_compute[265391]: 2025-09-30 18:13:19.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:13:19.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:19 compute-0 unix_chkpwd[307101]: password check failed for user (root)
Sep 30 18:13:19 compute-0 sshd-session[307095]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107  user=root
Sep 30 18:13:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1038: 353 pgs: 353 active+clean; 248 MiB data, 307 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Sep 30 18:13:20 compute-0 unix_chkpwd[307102]: password check failed for user (root)
Sep 30 18:13:20 compute-0 sshd-session[307097]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158  user=root
Sep 30 18:13:20 compute-0 podman[307103]: 2025-09-30 18:13:20.560907351 +0000 UTC m=+0.093096479 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Sep 30 18:13:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:13:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:13:20.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:20 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:21 compute-0 ceph-mon[73755]: pgmap v1038: 353 pgs: 353 active+clean; 248 MiB data, 307 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Sep 30 18:13:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:21 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:13:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:13:21.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:13:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1039: 353 pgs: 353 active+clean; 248 MiB data, 307 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 24 KiB/s wr, 75 op/s
Sep 30 18:13:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:13:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/152161458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:13:22 compute-0 sshd-session[307095]: Failed password for root from 14.225.220.107 port 42406 ssh2
Sep 30 18:13:22 compute-0 ceph-mon[73755]: pgmap v1039: 353 pgs: 353 active+clean; 248 MiB data, 307 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 24 KiB/s wr, 75 op/s
Sep 30 18:13:22 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/152161458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:13:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:13:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:13:22 compute-0 nova_compute[265391]: 2025-09-30 18:13:22.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:13:22 compute-0 nova_compute[265391]: 2025-09-30 18:13:22.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:13:22 compute-0 sshd-session[307097]: Failed password for root from 45.252.249.158 port 39208 ssh2
Sep 30 18:13:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:13:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:13:22.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:13:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:22 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:22 compute-0 nova_compute[265391]: 2025-09-30 18:13:22.944 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:13:22 compute-0 nova_compute[265391]: 2025-09-30 18:13:22.945 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:13:22 compute-0 nova_compute[265391]: 2025-09-30 18:13:22.946 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:13:22 compute-0 nova_compute[265391]: 2025-09-30 18:13:22.947 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:13:22 compute-0 nova_compute[265391]: 2025-09-30 18:13:22.948 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:13:22 compute-0 sshd-session[307095]: Received disconnect from 14.225.220.107 port 42406:11: Bye Bye [preauth]
Sep 30 18:13:22 compute-0 sshd-session[307095]: Disconnected from authenticating user root 14.225.220.107 port 42406 [preauth]
Sep 30 18:13:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:13:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:13:23 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2807150272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:13:23 compute-0 nova_compute[265391]: 2025-09-30 18:13:23.405 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:13:23 compute-0 sshd-session[307097]: Received disconnect from 45.252.249.158 port 39208:11: Bye Bye [preauth]
Sep 30 18:13:23 compute-0 sshd-session[307097]: Disconnected from authenticating user root 45.252.249.158 port 39208 [preauth]
Sep 30 18:13:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:23 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:13:23.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:13:23.684Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:13:24 compute-0 nova_compute[265391]: 2025-09-30 18:13:24.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1040: 353 pgs: 353 active+clean; 248 MiB data, 307 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 75 op/s
Sep 30 18:13:24 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2807150272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:13:24 compute-0 ceph-mon[73755]: pgmap v1040: 353 pgs: 353 active+clean; 248 MiB data, 307 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 75 op/s
Sep 30 18:13:24 compute-0 nova_compute[265391]: 2025-09-30 18:13:24.464 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:13:24 compute-0 nova_compute[265391]: 2025-09-30 18:13:24.464 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:13:24 compute-0 nova_compute[265391]: 2025-09-30 18:13:24.469 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:13:24 compute-0 nova_compute[265391]: 2025-09-30 18:13:24.469 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:13:24 compute-0 nova_compute[265391]: 2025-09-30 18:13:24.640 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:13:24 compute-0 nova_compute[265391]: 2025-09-30 18:13:24.642 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:13:24 compute-0 nova_compute[265391]: 2025-09-30 18:13:24.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:24 compute-0 nova_compute[265391]: 2025-09-30 18:13:24.671 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.029s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:13:24 compute-0 nova_compute[265391]: 2025-09-30 18:13:24.672 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4165MB free_disk=39.88016128540039GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:13:24 compute-0 nova_compute[265391]: 2025-09-30 18:13:24.672 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:13:24 compute-0 nova_compute[265391]: 2025-09-30 18:13:24.673 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:13:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:13:24.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:24 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:25 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:13:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:13:25.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:25 compute-0 nova_compute[265391]: 2025-09-30 18:13:25.727 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Instance 761dbb06-6272-4941-a65f-5ad2f8cfbb70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Sep 30 18:13:25 compute-0 nova_compute[265391]: 2025-09-30 18:13:25.728 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Instance 23ad643b-d29f-4fe8-a347-92df178ae0cd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Sep 30 18:13:25 compute-0 nova_compute[265391]: 2025-09-30 18:13:25.728 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:13:25 compute-0 nova_compute[265391]: 2025-09-30 18:13:25.728 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=832MB phys_disk=39GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:13:24 up  1:16,  0 user,  load average: 1.36, 0.91, 0.92\n', 'num_instances': '2', 'num_vm_active': '2', 'num_task_None': '2', 'num_os_type_None': '2', 'num_proj_ddd1f985d8b64b449c79d55b0cbd6422': '2', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:13:25 compute-0 nova_compute[265391]: 2025-09-30 18:13:25.800 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:13:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1041: 353 pgs: 353 active+clean; 248 MiB data, 307 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 8.7 KiB/s wr, 70 op/s
Sep 30 18:13:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:13:26 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/636359647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:13:26 compute-0 nova_compute[265391]: 2025-09-30 18:13:26.314 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:13:26 compute-0 nova_compute[265391]: 2025-09-30 18:13:26.324 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:13:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:13:26.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:26 compute-0 nova_compute[265391]: 2025-09-30 18:13:26.833 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:13:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:26 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:27 compute-0 ceph-mon[73755]: pgmap v1041: 353 pgs: 353 active+clean; 248 MiB data, 307 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 8.7 KiB/s wr, 70 op/s
Sep 30 18:13:27 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/636359647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:13:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:13:27.168Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:13:27 compute-0 nova_compute[265391]: 2025-09-30 18:13:27.343 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:13:27 compute-0 nova_compute[265391]: 2025-09-30 18:13:27.344 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.671s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:13:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:27 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:27 compute-0 ovn_controller[156242]: 2025-09-30T18:13:27Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d1:86:73 10.100.0.9
Sep 30 18:13:27 compute-0 ovn_controller[156242]: 2025-09-30T18:13:27Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d1:86:73 10.100.0.9
Sep 30 18:13:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:13:27.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1042: 353 pgs: 353 active+clean; 248 MiB data, 307 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 8.7 KiB/s wr, 70 op/s
Sep 30 18:13:28 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/117727758' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:13:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:13:28] "GET /metrics HTTP/1.1" 200 46650 "" "Prometheus/2.51.0"
Sep 30 18:13:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:13:28] "GET /metrics HTTP/1.1" 200 46650 "" "Prometheus/2.51.0"
Sep 30 18:13:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:13:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:13:28.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:13:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:28 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:29 compute-0 nova_compute[265391]: 2025-09-30 18:13:29.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:29 compute-0 ceph-mon[73755]: pgmap v1042: 353 pgs: 353 active+clean; 248 MiB data, 307 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 8.7 KiB/s wr, 70 op/s
Sep 30 18:13:29 compute-0 nova_compute[265391]: 2025-09-30 18:13:29.343 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:13:29 compute-0 nova_compute[265391]: 2025-09-30 18:13:29.344 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:13:29 compute-0 nova_compute[265391]: 2025-09-30 18:13:29.344 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:13:29 compute-0 nova_compute[265391]: 2025-09-30 18:13:29.344 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:13:29 compute-0 nova_compute[265391]: 2025-09-30 18:13:29.424 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:13:29 compute-0 nova_compute[265391]: 2025-09-30 18:13:29.425 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:13:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:29 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:29 compute-0 podman[307181]: 2025-09-30 18:13:29.552284461 +0000 UTC m=+0.072264418 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Sep 30 18:13:29 compute-0 podman[307180]: 2025-09-30 18:13:29.575562026 +0000 UTC m=+0.093775587 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, config_id=multipathd, org.label-schema.license=GPLv2)
Sep 30 18:13:29 compute-0 podman[307182]: 2025-09-30 18:13:29.575678119 +0000 UTC m=+0.098214883 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, io.buildah.version=1.33.7, vcs-type=git, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, distribution-scope=public, release=1755695350)
Sep 30 18:13:29 compute-0 nova_compute[265391]: 2025-09-30 18:13:29.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:13:29.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:29 compute-0 podman[276673]: time="2025-09-30T18:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:13:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:13:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10727 "" "Go-http-client/1.1"
Sep 30 18:13:29 compute-0 sudo[307238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:13:29 compute-0 sudo[307238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:13:29 compute-0 sudo[307238]: pam_unix(sudo:session): session closed for user root
Sep 30 18:13:29 compute-0 nova_compute[265391]: 2025-09-30 18:13:29.936 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:13:29 compute-0 nova_compute[265391]: 2025-09-30 18:13:29.937 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:13:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1043: 353 pgs: 353 active+clean; 327 MiB data, 353 MiB used, 40 GiB / 40 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 162 op/s
Sep 30 18:13:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:13:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:13:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:13:30.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:13:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:30 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:31 compute-0 ceph-mon[73755]: pgmap v1043: 353 pgs: 353 active+clean; 327 MiB data, 353 MiB used, 40 GiB / 40 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 162 op/s
Sep 30 18:13:31 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3423975977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:13:31 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/814964195' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:13:31 compute-0 openstack_network_exporter[279566]: ERROR   18:13:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:13:31 compute-0 openstack_network_exporter[279566]: ERROR   18:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:13:31 compute-0 openstack_network_exporter[279566]: ERROR   18:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:13:31 compute-0 openstack_network_exporter[279566]: ERROR   18:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:13:31 compute-0 openstack_network_exporter[279566]: ERROR   18:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:13:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:31 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:13:31.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1044: 353 pgs: 353 active+clean; 327 MiB data, 353 MiB used, 40 GiB / 40 GiB avail; 382 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Sep 30 18:13:32 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2691203577' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:13:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:13:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:13:32.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:13:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:32 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:33 compute-0 ceph-mon[73755]: pgmap v1044: 353 pgs: 353 active+clean; 327 MiB data, 353 MiB used, 40 GiB / 40 GiB avail; 382 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Sep 30 18:13:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:33 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:13:33.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:13:33.685Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:13:34 compute-0 nova_compute[265391]: 2025-09-30 18:13:34.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1045: 353 pgs: 353 active+clean; 327 MiB data, 353 MiB used, 40 GiB / 40 GiB avail; 382 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Sep 30 18:13:34 compute-0 nova_compute[265391]: 2025-09-30 18:13:34.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:13:34.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:34 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:35 compute-0 ceph-mon[73755]: pgmap v1045: 353 pgs: 353 active+clean; 327 MiB data, 353 MiB used, 40 GiB / 40 GiB avail; 382 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Sep 30 18:13:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:35 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:13:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:13:35.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1046: 353 pgs: 353 active+clean; 327 MiB data, 353 MiB used, 40 GiB / 40 GiB avail; 382 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Sep 30 18:13:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:13:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/26377187' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:13:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:13:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/26377187' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:13:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:13:36.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:36 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:13:37.169Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:13:37 compute-0 ceph-mon[73755]: pgmap v1046: 353 pgs: 353 active+clean; 327 MiB data, 353 MiB used, 40 GiB / 40 GiB avail; 382 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Sep 30 18:13:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/26377187' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:13:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/26377187' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:13:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:13:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:13:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:13:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:13:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:13:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:13:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:13:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:13:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:37 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:13:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:13:37.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:13:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1047: 353 pgs: 353 active+clean; 327 MiB data, 353 MiB used, 40 GiB / 40 GiB avail; 382 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Sep 30 18:13:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:13:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:13:38] "GET /metrics HTTP/1.1" 200 46651 "" "Prometheus/2.51.0"
Sep 30 18:13:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:13:38] "GET /metrics HTTP/1.1" 200 46651 "" "Prometheus/2.51.0"
Sep 30 18:13:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:13:38.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:38 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:39 compute-0 nova_compute[265391]: 2025-09-30 18:13:39.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:39 compute-0 ceph-mon[73755]: pgmap v1047: 353 pgs: 353 active+clean; 327 MiB data, 353 MiB used, 40 GiB / 40 GiB avail; 382 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Sep 30 18:13:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:39 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:39 compute-0 nova_compute[265391]: 2025-09-30 18:13:39.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:13:39.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1048: 353 pgs: 353 active+clean; 328 MiB data, 353 MiB used, 40 GiB / 40 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 166 op/s
Sep 30 18:13:40 compute-0 ceph-mon[73755]: pgmap v1048: 353 pgs: 353 active+clean; 328 MiB data, 353 MiB used, 40 GiB / 40 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 166 op/s
Sep 30 18:13:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:13:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:13:40.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:40 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:41 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:13:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:13:41.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:13:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1049: 353 pgs: 353 active+clean; 328 MiB data, 353 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 74 op/s
Sep 30 18:13:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:13:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:13:42.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:13:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:42 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:43 compute-0 ceph-mon[73755]: pgmap v1049: 353 pgs: 353 active+clean; 328 MiB data, 353 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 74 op/s
Sep 30 18:13:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:43 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:13:43.686Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:13:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:13:43.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:44 compute-0 nova_compute[265391]: 2025-09-30 18:13:44.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1050: 353 pgs: 353 active+clean; 328 MiB data, 353 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 74 op/s
Sep 30 18:13:44 compute-0 nova_compute[265391]: 2025-09-30 18:13:44.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:13:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:13:44.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:13:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:44 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218001270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:45 compute-0 ceph-mon[73755]: pgmap v1050: 353 pgs: 353 active+clean; 328 MiB data, 353 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 74 op/s
Sep 30 18:13:45 compute-0 nova_compute[265391]: 2025-09-30 18:13:45.317 2 DEBUG oslo_concurrency.lockutils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Acquiring lock "28ad2702-2baf-4865-be24-c468842cee03" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:13:45 compute-0 nova_compute[265391]: 2025-09-30 18:13:45.318 2 DEBUG oslo_concurrency.lockutils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "28ad2702-2baf-4865-be24-c468842cee03" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:13:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:45 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:13:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:13:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:13:45.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:13:45 compute-0 nova_compute[265391]: 2025-09-30 18:13:45.823 2 DEBUG nova.compute.manager [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Sep 30 18:13:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1051: 353 pgs: 353 active+clean; 328 MiB data, 353 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Sep 30 18:13:46 compute-0 nova_compute[265391]: 2025-09-30 18:13:46.394 2 DEBUG oslo_concurrency.lockutils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:13:46 compute-0 nova_compute[265391]: 2025-09-30 18:13:46.395 2 DEBUG oslo_concurrency.lockutils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:13:46 compute-0 nova_compute[265391]: 2025-09-30 18:13:46.404 2 DEBUG nova.virt.hardware [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Sep 30 18:13:46 compute-0 nova_compute[265391]: 2025-09-30 18:13:46.404 2 INFO nova.compute.claims [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Claim successful on node compute-0.ctlplane.example.com
Sep 30 18:13:46 compute-0 podman[307284]: 2025-09-30 18:13:46.570786607 +0000 UTC m=+0.094497536 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:13:46 compute-0 podman[307283]: 2025-09-30 18:13:46.605505799 +0000 UTC m=+0.130848921 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=ovn_controller)
Sep 30 18:13:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:13:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:13:46.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:13:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:46 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:13:47.170Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:13:47 compute-0 ceph-mon[73755]: pgmap v1051: 353 pgs: 353 active+clean; 328 MiB data, 353 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Sep 30 18:13:47 compute-0 nova_compute[265391]: 2025-09-30 18:13:47.502 2 DEBUG oslo_concurrency.processutils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:13:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:47 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c0046e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:13:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:13:47.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:13:47 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:13:47 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2897963439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:13:48 compute-0 nova_compute[265391]: 2025-09-30 18:13:48.004 2 DEBUG oslo_concurrency.processutils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:13:48 compute-0 nova_compute[265391]: 2025-09-30 18:13:48.013 2 DEBUG nova.compute.provider_tree [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:13:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1052: 353 pgs: 353 active+clean; 328 MiB data, 353 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Sep 30 18:13:48 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2897963439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:13:48 compute-0 nova_compute[265391]: 2025-09-30 18:13:48.526 2 DEBUG nova.scheduler.client.report [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:13:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:13:48] "GET /metrics HTTP/1.1" 200 46651 "" "Prometheus/2.51.0"
Sep 30 18:13:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:13:48] "GET /metrics HTTP/1.1" 200 46651 "" "Prometheus/2.51.0"
Sep 30 18:13:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:13:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:13:48.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:13:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:48 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218001270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:49 compute-0 nova_compute[265391]: 2025-09-30 18:13:49.040 2 DEBUG oslo_concurrency.lockutils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.645s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:13:49 compute-0 nova_compute[265391]: 2025-09-30 18:13:49.041 2 DEBUG nova.compute.manager [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Sep 30 18:13:49 compute-0 nova_compute[265391]: 2025-09-30 18:13:49.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:49 compute-0 ceph-mon[73755]: pgmap v1052: 353 pgs: 353 active+clean; 328 MiB data, 353 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Sep 30 18:13:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:49 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:49 compute-0 nova_compute[265391]: 2025-09-30 18:13:49.554 2 DEBUG nova.compute.manager [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Sep 30 18:13:49 compute-0 nova_compute[265391]: 2025-09-30 18:13:49.554 2 DEBUG nova.network.neutron [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Sep 30 18:13:49 compute-0 nova_compute[265391]: 2025-09-30 18:13:49.554 2 WARNING neutronclient.v2_0.client [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:13:49 compute-0 nova_compute[265391]: 2025-09-30 18:13:49.555 2 WARNING neutronclient.v2_0.client [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:13:49 compute-0 nova_compute[265391]: 2025-09-30 18:13:49.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:13:49.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:49 compute-0 sudo[307356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:13:49 compute-0 sudo[307356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:13:49 compute-0 sudo[307356]: pam_unix(sudo:session): session closed for user root
Sep 30 18:13:50 compute-0 nova_compute[265391]: 2025-09-30 18:13:50.061 2 INFO nova.virt.libvirt.driver [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Sep 30 18:13:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1053: 353 pgs: 353 active+clean; 333 MiB data, 353 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 537 KiB/s wr, 86 op/s
Sep 30 18:13:50 compute-0 ceph-mon[73755]: pgmap v1053: 353 pgs: 353 active+clean; 333 MiB data, 353 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 537 KiB/s wr, 86 op/s
Sep 30 18:13:50 compute-0 nova_compute[265391]: 2025-09-30 18:13:50.436 2 DEBUG nova.network.neutron [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Successfully created port: b4130889-fd6e-44b4-8184-b79693b30d78 _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Sep 30 18:13:50 compute-0 nova_compute[265391]: 2025-09-30 18:13:50.570 2 DEBUG nova.compute.manager [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Sep 30 18:13:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:13:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:13:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:13:50.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:13:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:50 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:51 compute-0 nova_compute[265391]: 2025-09-30 18:13:51.113 2 DEBUG nova.network.neutron [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Successfully updated port: b4130889-fd6e-44b4-8184-b79693b30d78 _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Sep 30 18:13:51 compute-0 nova_compute[265391]: 2025-09-30 18:13:51.177 2 DEBUG nova.compute.manager [req-0b1d96ed-2643-41a7-961f-669bc797592b req-efe6926e-b7cb-4e0e-a484-3e844ec7acb0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Received event network-changed-b4130889-fd6e-44b4-8184-b79693b30d78 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:13:51 compute-0 nova_compute[265391]: 2025-09-30 18:13:51.177 2 DEBUG nova.compute.manager [req-0b1d96ed-2643-41a7-961f-669bc797592b req-efe6926e-b7cb-4e0e-a484-3e844ec7acb0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Refreshing instance network info cache due to event network-changed-b4130889-fd6e-44b4-8184-b79693b30d78. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:13:51 compute-0 nova_compute[265391]: 2025-09-30 18:13:51.177 2 DEBUG oslo_concurrency.lockutils [req-0b1d96ed-2643-41a7-961f-669bc797592b req-efe6926e-b7cb-4e0e-a484-3e844ec7acb0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-28ad2702-2baf-4865-be24-c468842cee03" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:13:51 compute-0 nova_compute[265391]: 2025-09-30 18:13:51.177 2 DEBUG oslo_concurrency.lockutils [req-0b1d96ed-2643-41a7-961f-669bc797592b req-efe6926e-b7cb-4e0e-a484-3e844ec7acb0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-28ad2702-2baf-4865-be24-c468842cee03" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:13:51 compute-0 nova_compute[265391]: 2025-09-30 18:13:51.178 2 DEBUG nova.network.neutron [req-0b1d96ed-2643-41a7-961f-669bc797592b req-efe6926e-b7cb-4e0e-a484-3e844ec7acb0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Refreshing network info cache for port b4130889-fd6e-44b4-8184-b79693b30d78 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:13:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:51 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c0046e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:51 compute-0 podman[307382]: 2025-09-30 18:13:51.53313093 +0000 UTC m=+0.065592135 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Sep 30 18:13:51 compute-0 nova_compute[265391]: 2025-09-30 18:13:51.588 2 DEBUG nova.compute.manager [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Sep 30 18:13:51 compute-0 nova_compute[265391]: 2025-09-30 18:13:51.590 2 DEBUG nova.virt.libvirt.driver [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Sep 30 18:13:51 compute-0 nova_compute[265391]: 2025-09-30 18:13:51.591 2 INFO nova.virt.libvirt.driver [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Creating image(s)
Sep 30 18:13:51 compute-0 nova_compute[265391]: 2025-09-30 18:13:51.631 2 DEBUG nova.storage.rbd_utils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] rbd image 28ad2702-2baf-4865-be24-c468842cee03_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:13:51 compute-0 nova_compute[265391]: 2025-09-30 18:13:51.670 2 DEBUG nova.storage.rbd_utils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] rbd image 28ad2702-2baf-4865-be24-c468842cee03_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:13:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:13:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:13:51.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:13:51 compute-0 nova_compute[265391]: 2025-09-30 18:13:51.704 2 DEBUG nova.storage.rbd_utils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] rbd image 28ad2702-2baf-4865-be24-c468842cee03_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:13:51 compute-0 nova_compute[265391]: 2025-09-30 18:13:51.708 2 DEBUG oslo_concurrency.processutils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:13:51 compute-0 nova_compute[265391]: 2025-09-30 18:13:51.719 2 DEBUG oslo_concurrency.lockutils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Acquiring lock "refresh_cache-28ad2702-2baf-4865-be24-c468842cee03" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:13:51 compute-0 nova_compute[265391]: 2025-09-30 18:13:51.721 2 WARNING neutronclient.v2_0.client [req-0b1d96ed-2643-41a7-961f-669bc797592b req-efe6926e-b7cb-4e0e-a484-3e844ec7acb0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:13:51 compute-0 nova_compute[265391]: 2025-09-30 18:13:51.768 2 DEBUG oslo_concurrency.processutils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:13:51 compute-0 nova_compute[265391]: 2025-09-30 18:13:51.768 2 DEBUG oslo_concurrency.lockutils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Acquiring lock "cb2d580238c9b109feae7f1462613dc547671457" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:13:51 compute-0 nova_compute[265391]: 2025-09-30 18:13:51.769 2 DEBUG oslo_concurrency.lockutils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:13:51 compute-0 nova_compute[265391]: 2025-09-30 18:13:51.769 2 DEBUG oslo_concurrency.lockutils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:13:51 compute-0 nova_compute[265391]: 2025-09-30 18:13:51.799 2 DEBUG nova.storage.rbd_utils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] rbd image 28ad2702-2baf-4865-be24-c468842cee03_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:13:51 compute-0 nova_compute[265391]: 2025-09-30 18:13:51.805 2 DEBUG oslo_concurrency.processutils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 28ad2702-2baf-4865-be24-c468842cee03_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:13:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1054: 353 pgs: 353 active+clean; 333 MiB data, 353 MiB used, 40 GiB / 40 GiB avail; 29 KiB/s rd, 524 KiB/s wr, 12 op/s
Sep 30 18:13:52 compute-0 nova_compute[265391]: 2025-09-30 18:13:52.077 2 DEBUG oslo_concurrency.processutils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 28ad2702-2baf-4865-be24-c468842cee03_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:13:52 compute-0 nova_compute[265391]: 2025-09-30 18:13:52.152 2 DEBUG nova.storage.rbd_utils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] resizing rbd image 28ad2702-2baf-4865-be24-c468842cee03_disk to 1073741824 resize /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:288
Sep 30 18:13:52 compute-0 nova_compute[265391]: 2025-09-30 18:13:52.264 2 DEBUG nova.virt.libvirt.driver [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Sep 30 18:13:52 compute-0 nova_compute[265391]: 2025-09-30 18:13:52.265 2 DEBUG nova.virt.libvirt.driver [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Ensure instance console log exists: /var/lib/nova/instances/28ad2702-2baf-4865-be24-c468842cee03/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Sep 30 18:13:52 compute-0 nova_compute[265391]: 2025-09-30 18:13:52.265 2 DEBUG oslo_concurrency.lockutils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:13:52 compute-0 nova_compute[265391]: 2025-09-30 18:13:52.265 2 DEBUG oslo_concurrency.lockutils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:13:52 compute-0 nova_compute[265391]: 2025-09-30 18:13:52.266 2 DEBUG oslo_concurrency.lockutils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:13:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:13:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:13:52 compute-0 nova_compute[265391]: 2025-09-30 18:13:52.804 2 DEBUG nova.network.neutron [req-0b1d96ed-2643-41a7-961f-669bc797592b req-efe6926e-b7cb-4e0e-a484-3e844ec7acb0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:13:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:52 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c0046e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:13:52.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:53 compute-0 ceph-mon[73755]: pgmap v1054: 353 pgs: 353 active+clean; 333 MiB data, 353 MiB used, 40 GiB / 40 GiB avail; 29 KiB/s rd, 524 KiB/s wr, 12 op/s
Sep 30 18:13:53 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:13:53 compute-0 nova_compute[265391]: 2025-09-30 18:13:53.310 2 DEBUG nova.network.neutron [req-0b1d96ed-2643-41a7-961f-669bc797592b req-efe6926e-b7cb-4e0e-a484-3e844ec7acb0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:13:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:53 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:13:53.687Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:13:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:13:53.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:53 compute-0 nova_compute[265391]: 2025-09-30 18:13:53.818 2 DEBUG oslo_concurrency.lockutils [req-0b1d96ed-2643-41a7-961f-669bc797592b req-efe6926e-b7cb-4e0e-a484-3e844ec7acb0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-28ad2702-2baf-4865-be24-c468842cee03" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:13:53 compute-0 nova_compute[265391]: 2025-09-30 18:13:53.819 2 DEBUG oslo_concurrency.lockutils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Acquired lock "refresh_cache-28ad2702-2baf-4865-be24-c468842cee03" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:13:53 compute-0 nova_compute[265391]: 2025-09-30 18:13:53.820 2 DEBUG nova.network.neutron [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:13:54 compute-0 nova_compute[265391]: 2025-09-30 18:13:54.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1055: 353 pgs: 353 active+clean; 407 MiB data, 400 MiB used, 40 GiB / 40 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Sep 30 18:13:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:13:54.282 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:13:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:13:54.283 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:13:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:13:54.284 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:13:54 compute-0 nova_compute[265391]: 2025-09-30 18:13:54.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:54 compute-0 nova_compute[265391]: 2025-09-30 18:13:54.787 2 DEBUG nova.network.neutron [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:13:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:54 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:13:54.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:55 compute-0 nova_compute[265391]: 2025-09-30 18:13:55.053 2 WARNING neutronclient.v2_0.client [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:13:55 compute-0 ceph-mon[73755]: pgmap v1055: 353 pgs: 353 active+clean; 407 MiB data, 400 MiB used, 40 GiB / 40 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Sep 30 18:13:55 compute-0 nova_compute[265391]: 2025-09-30 18:13:55.228 2 DEBUG nova.network.neutron [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Updating instance_info_cache with network_info: [{"id": "b4130889-fd6e-44b4-8184-b79693b30d78", "address": "fa:16:3e:f3:96:49", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4130889-fd", "ovs_interfaceid": "b4130889-fd6e-44b4-8184-b79693b30d78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:13:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:55 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c0046e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:13:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:13:55.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:55 compute-0 nova_compute[265391]: 2025-09-30 18:13:55.735 2 DEBUG oslo_concurrency.lockutils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Releasing lock "refresh_cache-28ad2702-2baf-4865-be24-c468842cee03" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:13:55 compute-0 nova_compute[265391]: 2025-09-30 18:13:55.736 2 DEBUG nova.compute.manager [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Instance network_info: |[{"id": "b4130889-fd6e-44b4-8184-b79693b30d78", "address": "fa:16:3e:f3:96:49", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4130889-fd", "ovs_interfaceid": "b4130889-fd6e-44b4-8184-b79693b30d78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Sep 30 18:13:55 compute-0 nova_compute[265391]: 2025-09-30 18:13:55.738 2 DEBUG nova.virt.libvirt.driver [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Start _get_guest_xml network_info=[{"id": "b4130889-fd6e-44b4-8184-b79693b30d78", "address": "fa:16:3e:f3:96:49", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4130889-fd", "ovs_interfaceid": "b4130889-fd6e-44b4-8184-b79693b30d78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'image_id': '5b99cbca-b655-4be5-8343-cf504005c42e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Sep 30 18:13:55 compute-0 nova_compute[265391]: 2025-09-30 18:13:55.743 2 WARNING nova.virt.libvirt.driver [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:13:55 compute-0 nova_compute[265391]: 2025-09-30 18:13:55.744 2 DEBUG nova.virt.driver [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='5b99cbca-b655-4be5-8343-cf504005c42e', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteActionsViaActuator-server-1899978059', uuid='28ad2702-2baf-4865-be24-c468842cee03'), owner=OwnerMeta(userid='dc3bb71c425f484fbc46f90978029403', username='tempest-TestExecuteActionsViaActuator-837729328-project-admin', projectid='ddd1f985d8b64b449c79d55b0cbd6422', projectname='tempest-TestExecuteActionsViaActuator-837729328'), image=ImageMeta(id='5b99cbca-b655-4be5-8343-cf504005c42e', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "b4130889-fd6e-44b4-8184-b79693b30d78", "address": "fa:16:3e:f3:96:49", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4130889-fd", "ovs_interfaceid": "b4130889-fd6e-44b4-8184-b79693b30d78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20250919142712.b99a882.el10', creation_time=1759256035.7443757) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Sep 30 18:13:55 compute-0 nova_compute[265391]: 2025-09-30 18:13:55.749 2 DEBUG nova.virt.libvirt.host [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Sep 30 18:13:55 compute-0 nova_compute[265391]: 2025-09-30 18:13:55.750 2 DEBUG nova.virt.libvirt.host [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Sep 30 18:13:55 compute-0 nova_compute[265391]: 2025-09-30 18:13:55.752 2 DEBUG nova.virt.libvirt.host [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Sep 30 18:13:55 compute-0 nova_compute[265391]: 2025-09-30 18:13:55.753 2 DEBUG nova.virt.libvirt.host [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Sep 30 18:13:55 compute-0 nova_compute[265391]: 2025-09-30 18:13:55.753 2 DEBUG nova.virt.libvirt.driver [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Sep 30 18:13:55 compute-0 nova_compute[265391]: 2025-09-30 18:13:55.753 2 DEBUG nova.virt.hardware [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-09-30T18:04:50Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Sep 30 18:13:55 compute-0 nova_compute[265391]: 2025-09-30 18:13:55.754 2 DEBUG nova.virt.hardware [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Sep 30 18:13:55 compute-0 nova_compute[265391]: 2025-09-30 18:13:55.754 2 DEBUG nova.virt.hardware [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Sep 30 18:13:55 compute-0 nova_compute[265391]: 2025-09-30 18:13:55.754 2 DEBUG nova.virt.hardware [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Sep 30 18:13:55 compute-0 nova_compute[265391]: 2025-09-30 18:13:55.755 2 DEBUG nova.virt.hardware [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Sep 30 18:13:55 compute-0 nova_compute[265391]: 2025-09-30 18:13:55.755 2 DEBUG nova.virt.hardware [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Sep 30 18:13:55 compute-0 nova_compute[265391]: 2025-09-30 18:13:55.755 2 DEBUG nova.virt.hardware [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Sep 30 18:13:55 compute-0 nova_compute[265391]: 2025-09-30 18:13:55.756 2 DEBUG nova.virt.hardware [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Sep 30 18:13:55 compute-0 nova_compute[265391]: 2025-09-30 18:13:55.756 2 DEBUG nova.virt.hardware [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Sep 30 18:13:55 compute-0 nova_compute[265391]: 2025-09-30 18:13:55.756 2 DEBUG nova.virt.hardware [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Sep 30 18:13:55 compute-0 nova_compute[265391]: 2025-09-30 18:13:55.756 2 DEBUG nova.virt.hardware [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Sep 30 18:13:55 compute-0 nova_compute[265391]: 2025-09-30 18:13:55.759 2 DEBUG oslo_concurrency.processutils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:13:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1056: 353 pgs: 353 active+clean; 407 MiB data, 400 MiB used, 40 GiB / 40 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Sep 30 18:13:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:13:56 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3792961879' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:13:56 compute-0 nova_compute[265391]: 2025-09-30 18:13:56.260 2 DEBUG oslo_concurrency.processutils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:13:56 compute-0 nova_compute[265391]: 2025-09-30 18:13:56.300 2 DEBUG nova.storage.rbd_utils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] rbd image 28ad2702-2baf-4865-be24-c468842cee03_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:13:56 compute-0 nova_compute[265391]: 2025-09-30 18:13:56.304 2 DEBUG oslo_concurrency.processutils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:13:56 compute-0 sudo[307636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:13:56 compute-0 sudo[307636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:13:56 compute-0 sudo[307636]: pam_unix(sudo:session): session closed for user root
Sep 30 18:13:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:13:56 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2584726180' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:13:56 compute-0 nova_compute[265391]: 2025-09-30 18:13:56.805 2 DEBUG oslo_concurrency.processutils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:13:56 compute-0 nova_compute[265391]: 2025-09-30 18:13:56.808 2 DEBUG nova.virt.libvirt.vif [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:13:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-1899978059',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-1899978059',id=8,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ddd1f985d8b64b449c79d55b0cbd6422',ramdisk_id='',reservation_id='r-1d71qtf9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteActionsViaActuator-837729328',owner_user_name='tempest-TestExecuteActionsViaActuator-837729328-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:13:50Z,user_data=None,user_id='dc3bb71c425f484fbc46f90978029403',uuid=28ad2702-2baf-4865-be24-c468842cee03,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b4130889-fd6e-44b4-8184-b79693b30d78", "address": "fa:16:3e:f3:96:49", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4130889-fd", "ovs_interfaceid": "b4130889-fd6e-44b4-8184-b79693b30d78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:13:56 compute-0 nova_compute[265391]: 2025-09-30 18:13:56.809 2 DEBUG nova.network.os_vif_util [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Converting VIF {"id": "b4130889-fd6e-44b4-8184-b79693b30d78", "address": "fa:16:3e:f3:96:49", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4130889-fd", "ovs_interfaceid": "b4130889-fd6e-44b4-8184-b79693b30d78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:13:56 compute-0 nova_compute[265391]: 2025-09-30 18:13:56.810 2 DEBUG nova.network.os_vif_util [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:96:49,bridge_name='br-int',has_traffic_filtering=True,id=b4130889-fd6e-44b4-8184-b79693b30d78,network=Network(5fff1904-159a-4b76-8c46-feabf17f29ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4130889-fd') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:13:56 compute-0 nova_compute[265391]: 2025-09-30 18:13:56.811 2 DEBUG nova.objects.instance [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lazy-loading 'pci_devices' on Instance uuid 28ad2702-2baf-4865-be24-c468842cee03 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:13:56 compute-0 sudo[307661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:13:56 compute-0 sudo[307661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:13:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:56 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218001270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:13:56.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:13:57.171Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:13:57 compute-0 ceph-mon[73755]: pgmap v1056: 353 pgs: 353 active+clean; 407 MiB data, 400 MiB used, 40 GiB / 40 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Sep 30 18:13:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3792961879' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:13:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2584726180' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:13:57 compute-0 nova_compute[265391]: 2025-09-30 18:13:57.319 2 DEBUG nova.virt.libvirt.driver [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] End _get_guest_xml xml=<domain type="kvm">
Sep 30 18:13:57 compute-0 nova_compute[265391]:   <uuid>28ad2702-2baf-4865-be24-c468842cee03</uuid>
Sep 30 18:13:57 compute-0 nova_compute[265391]:   <name>instance-00000008</name>
Sep 30 18:13:57 compute-0 nova_compute[265391]:   <memory>131072</memory>
Sep 30 18:13:57 compute-0 nova_compute[265391]:   <vcpu>1</vcpu>
Sep 30 18:13:57 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:13:57 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteActionsViaActuator-server-1899978059</nova:name>
Sep 30 18:13:57 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:13:55</nova:creationTime>
Sep 30 18:13:57 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:13:57 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:13:57 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:13:57 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:13:57 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:13:57 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:13:57 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:13:57 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:13:57 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:13:57 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:13:57 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:13:57 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:13:57 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:13:57 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:13:57 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:13:57 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:13:57 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:13:57 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:13:57 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:13:57 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:13:57 compute-0 nova_compute[265391]:         <nova:user uuid="dc3bb71c425f484fbc46f90978029403">tempest-TestExecuteActionsViaActuator-837729328-project-admin</nova:user>
Sep 30 18:13:57 compute-0 nova_compute[265391]:         <nova:project uuid="ddd1f985d8b64b449c79d55b0cbd6422">tempest-TestExecuteActionsViaActuator-837729328</nova:project>
Sep 30 18:13:57 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:13:57 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:13:57 compute-0 nova_compute[265391]:         <nova:port uuid="b4130889-fd6e-44b4-8184-b79693b30d78">
Sep 30 18:13:57 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:13:57 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:13:57 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:13:57 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <system>
Sep 30 18:13:57 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:13:57 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:13:57 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:13:57 compute-0 nova_compute[265391]:       <entry name="serial">28ad2702-2baf-4865-be24-c468842cee03</entry>
Sep 30 18:13:57 compute-0 nova_compute[265391]:       <entry name="uuid">28ad2702-2baf-4865-be24-c468842cee03</entry>
Sep 30 18:13:57 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     </system>
Sep 30 18:13:57 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:13:57 compute-0 nova_compute[265391]:   <os>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="q35">hvm</type>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:   </os>
Sep 30 18:13:57 compute-0 nova_compute[265391]:   <features>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <vmcoreinfo/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:   </features>
Sep 30 18:13:57 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:13:57 compute-0 nova_compute[265391]:   <cpu mode="host-model" match="exact">
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <topology sockets="1" cores="1" threads="1"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:13:57 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:13:57 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/28ad2702-2baf-4865-be24-c468842cee03_disk">
Sep 30 18:13:57 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:       </source>
Sep 30 18:13:57 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:13:57 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:13:57 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:13:57 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/28ad2702-2baf-4865-be24-c468842cee03_disk.config">
Sep 30 18:13:57 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:       </source>
Sep 30 18:13:57 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:13:57 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:13:57 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:13:57 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:f3:96:49"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:       <target dev="tapb4130889-fd"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:13:57 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/28ad2702-2baf-4865-be24-c468842cee03/console.log" append="off"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <video>
Sep 30 18:13:57 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     </video>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:13:57 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <controller type="usb" index="0"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:13:57 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:13:57 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:13:57 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:13:57 compute-0 nova_compute[265391]: </domain>
Sep 30 18:13:57 compute-0 nova_compute[265391]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Sep 30 18:13:57 compute-0 nova_compute[265391]: 2025-09-30 18:13:57.320 2 DEBUG nova.compute.manager [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Preparing to wait for external event network-vif-plugged-b4130889-fd6e-44b4-8184-b79693b30d78 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:13:57 compute-0 nova_compute[265391]: 2025-09-30 18:13:57.320 2 DEBUG oslo_concurrency.lockutils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Acquiring lock "28ad2702-2baf-4865-be24-c468842cee03-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:13:57 compute-0 nova_compute[265391]: 2025-09-30 18:13:57.320 2 DEBUG oslo_concurrency.lockutils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "28ad2702-2baf-4865-be24-c468842cee03-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:13:57 compute-0 nova_compute[265391]: 2025-09-30 18:13:57.321 2 DEBUG oslo_concurrency.lockutils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "28ad2702-2baf-4865-be24-c468842cee03-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:13:57 compute-0 nova_compute[265391]: 2025-09-30 18:13:57.322 2 DEBUG nova.virt.libvirt.vif [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:13:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-1899978059',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-1899978059',id=8,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ddd1f985d8b64b449c79d55b0cbd6422',ramdisk_id='',reservation_id='r-1d71qtf9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteActionsViaActuator-837729328',owner_user_name='tempest-TestExecuteActionsViaActuator-837729328-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:13:50Z,user_data=None,user_id='dc3bb71c425f484fbc46f90978029403',uuid=28ad2702-2baf-4865-be24-c468842cee03,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b4130889-fd6e-44b4-8184-b79693b30d78", "address": "fa:16:3e:f3:96:49", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4130889-fd", "ovs_interfaceid": "b4130889-fd6e-44b4-8184-b79693b30d78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Sep 30 18:13:57 compute-0 nova_compute[265391]: 2025-09-30 18:13:57.322 2 DEBUG nova.network.os_vif_util [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Converting VIF {"id": "b4130889-fd6e-44b4-8184-b79693b30d78", "address": "fa:16:3e:f3:96:49", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4130889-fd", "ovs_interfaceid": "b4130889-fd6e-44b4-8184-b79693b30d78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:13:57 compute-0 nova_compute[265391]: 2025-09-30 18:13:57.323 2 DEBUG nova.network.os_vif_util [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:96:49,bridge_name='br-int',has_traffic_filtering=True,id=b4130889-fd6e-44b4-8184-b79693b30d78,network=Network(5fff1904-159a-4b76-8c46-feabf17f29ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4130889-fd') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:13:57 compute-0 nova_compute[265391]: 2025-09-30 18:13:57.323 2 DEBUG os_vif [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:96:49,bridge_name='br-int',has_traffic_filtering=True,id=b4130889-fd6e-44b4-8184-b79693b30d78,network=Network(5fff1904-159a-4b76-8c46-feabf17f29ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4130889-fd') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Sep 30 18:13:57 compute-0 nova_compute[265391]: 2025-09-30 18:13:57.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:57 compute-0 nova_compute[265391]: 2025-09-30 18:13:57.324 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:13:57 compute-0 nova_compute[265391]: 2025-09-30 18:13:57.325 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:13:57 compute-0 nova_compute[265391]: 2025-09-30 18:13:57.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:57 compute-0 nova_compute[265391]: 2025-09-30 18:13:57.326 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': 'e5ecaaf6-2a32-542c-8ce0-24a04b92abe1', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:13:57 compute-0 nova_compute[265391]: 2025-09-30 18:13:57.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:57 compute-0 nova_compute[265391]: 2025-09-30 18:13:57.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:57 compute-0 nova_compute[265391]: 2025-09-30 18:13:57.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:57 compute-0 nova_compute[265391]: 2025-09-30 18:13:57.332 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb4130889-fd, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:13:57 compute-0 nova_compute[265391]: 2025-09-30 18:13:57.333 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tapb4130889-fd, col_values=(('qos', UUID('6fdcc8c0-bc93-429d-b3a4-500306c94a72')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:13:57 compute-0 nova_compute[265391]: 2025-09-30 18:13:57.333 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tapb4130889-fd, col_values=(('external_ids', {'iface-id': 'b4130889-fd6e-44b4-8184-b79693b30d78', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f3:96:49', 'vm-uuid': '28ad2702-2baf-4865-be24-c468842cee03'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:13:57 compute-0 nova_compute[265391]: 2025-09-30 18:13:57.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:57 compute-0 NetworkManager[45059]: <info>  [1759256037.3364] manager: (tapb4130889-fd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Sep 30 18:13:57 compute-0 nova_compute[265391]: 2025-09-30 18:13:57.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:13:57 compute-0 nova_compute[265391]: 2025-09-30 18:13:57.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:57 compute-0 nova_compute[265391]: 2025-09-30 18:13:57.343 2 INFO os_vif [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:96:49,bridge_name='br-int',has_traffic_filtering=True,id=b4130889-fd6e-44b4-8184-b79693b30d78,network=Network(5fff1904-159a-4b76-8c46-feabf17f29ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4130889-fd')
Sep 30 18:13:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:13:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/131746376' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:13:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:13:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/131746376' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:13:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:57 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218001270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:57 compute-0 sudo[307661]: pam_unix(sudo:session): session closed for user root
Sep 30 18:13:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:13:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:13:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:13:57 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:13:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:13:57 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:13:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:13:57 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:13:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:13:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:13:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:13:57 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:13:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:13:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:13:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:13:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:13:57.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:13:57 compute-0 sudo[307725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:13:57 compute-0 sudo[307725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:13:57 compute-0 sudo[307725]: pam_unix(sudo:session): session closed for user root
Sep 30 18:13:57 compute-0 sudo[307750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:13:57 compute-0 sudo[307750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:13:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1057: 353 pgs: 353 active+clean; 407 MiB data, 400 MiB used, 40 GiB / 40 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Sep 30 18:13:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/131746376' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:13:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/131746376' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:13:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:13:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:13:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:13:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:13:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:13:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:13:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:13:58 compute-0 podman[307817]: 2025-09-30 18:13:58.245680215 +0000 UTC m=+0.054745203 container create ffb744697047d8536850c1395dea61d6c8793726ea929c63d3d12feaeec6f155 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_germain, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:13:58 compute-0 systemd[1]: Started libpod-conmon-ffb744697047d8536850c1395dea61d6c8793726ea929c63d3d12feaeec6f155.scope.
Sep 30 18:13:58 compute-0 podman[307817]: 2025-09-30 18:13:58.220712217 +0000 UTC m=+0.029777275 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:13:58 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:13:58 compute-0 podman[307817]: 2025-09-30 18:13:58.342772578 +0000 UTC m=+0.151837586 container init ffb744697047d8536850c1395dea61d6c8793726ea929c63d3d12feaeec6f155 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True)
Sep 30 18:13:58 compute-0 podman[307817]: 2025-09-30 18:13:58.351516285 +0000 UTC m=+0.160581253 container start ffb744697047d8536850c1395dea61d6c8793726ea929c63d3d12feaeec6f155 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_germain, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Sep 30 18:13:58 compute-0 podman[307817]: 2025-09-30 18:13:58.355258972 +0000 UTC m=+0.164323970 container attach ffb744697047d8536850c1395dea61d6c8793726ea929c63d3d12feaeec6f155 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_germain, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:13:58 compute-0 stupefied_germain[307833]: 167 167
Sep 30 18:13:58 compute-0 systemd[1]: libpod-ffb744697047d8536850c1395dea61d6c8793726ea929c63d3d12feaeec6f155.scope: Deactivated successfully.
Sep 30 18:13:58 compute-0 podman[307817]: 2025-09-30 18:13:58.361140825 +0000 UTC m=+0.170205793 container died ffb744697047d8536850c1395dea61d6c8793726ea929c63d3d12feaeec6f155 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_germain, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:13:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-29fe6ed14c2a5b38c014c6e0afdb37c860d6af58a9b1f33c2c4429a826d43813-merged.mount: Deactivated successfully.
Sep 30 18:13:58 compute-0 podman[307817]: 2025-09-30 18:13:58.410234671 +0000 UTC m=+0.219299639 container remove ffb744697047d8536850c1395dea61d6c8793726ea929c63d3d12feaeec6f155 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_germain, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:13:58 compute-0 systemd[1]: libpod-conmon-ffb744697047d8536850c1395dea61d6c8793726ea929c63d3d12feaeec6f155.scope: Deactivated successfully.
Sep 30 18:13:58 compute-0 podman[307857]: 2025-09-30 18:13:58.597239489 +0000 UTC m=+0.054841736 container create 33fee8ebdce9b899c7d309556125d9dbafe20b740853ded5812d67e28c50a734 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_saha, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:13:58 compute-0 systemd[1]: Started libpod-conmon-33fee8ebdce9b899c7d309556125d9dbafe20b740853ded5812d67e28c50a734.scope.
Sep 30 18:13:58 compute-0 podman[307857]: 2025-09-30 18:13:58.573152683 +0000 UTC m=+0.030754960 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:13:58 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:13:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e05ec39acc331bf4c2c3db6e3d47580935cedec7964595c046fdec5cf03b71a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:13:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e05ec39acc331bf4c2c3db6e3d47580935cedec7964595c046fdec5cf03b71a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:13:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e05ec39acc331bf4c2c3db6e3d47580935cedec7964595c046fdec5cf03b71a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:13:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e05ec39acc331bf4c2c3db6e3d47580935cedec7964595c046fdec5cf03b71a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:13:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e05ec39acc331bf4c2c3db6e3d47580935cedec7964595c046fdec5cf03b71a6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:13:58 compute-0 podman[307857]: 2025-09-30 18:13:58.694078635 +0000 UTC m=+0.151680922 container init 33fee8ebdce9b899c7d309556125d9dbafe20b740853ded5812d67e28c50a734 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:13:58 compute-0 podman[307857]: 2025-09-30 18:13:58.707411982 +0000 UTC m=+0.165014219 container start 33fee8ebdce9b899c7d309556125d9dbafe20b740853ded5812d67e28c50a734 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_saha, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Sep 30 18:13:58 compute-0 podman[307857]: 2025-09-30 18:13:58.71157645 +0000 UTC m=+0.169178767 container attach 33fee8ebdce9b899c7d309556125d9dbafe20b740853ded5812d67e28c50a734 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 18:13:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:13:58] "GET /metrics HTTP/1.1" 200 46647 "" "Prometheus/2.51.0"
Sep 30 18:13:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:13:58] "GET /metrics HTTP/1.1" 200 46647 "" "Prometheus/2.51.0"
Sep 30 18:13:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:58 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218001270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:13:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:13:58.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:13:58 compute-0 nova_compute[265391]: 2025-09-30 18:13:58.891 2 DEBUG nova.virt.libvirt.driver [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:13:58 compute-0 nova_compute[265391]: 2025-09-30 18:13:58.891 2 DEBUG nova.virt.libvirt.driver [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:13:58 compute-0 nova_compute[265391]: 2025-09-30 18:13:58.892 2 DEBUG nova.virt.libvirt.driver [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] No VIF found with MAC fa:16:3e:f3:96:49, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Sep 30 18:13:58 compute-0 nova_compute[265391]: 2025-09-30 18:13:58.893 2 INFO nova.virt.libvirt.driver [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Using config drive
Sep 30 18:13:58 compute-0 nova_compute[265391]: 2025-09-30 18:13:58.931 2 DEBUG nova.storage.rbd_utils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] rbd image 28ad2702-2baf-4865-be24-c468842cee03_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:13:59 compute-0 inspiring_saha[307873]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:13:59 compute-0 inspiring_saha[307873]: --> All data devices are unavailable
Sep 30 18:13:59 compute-0 systemd[1]: libpod-33fee8ebdce9b899c7d309556125d9dbafe20b740853ded5812d67e28c50a734.scope: Deactivated successfully.
Sep 30 18:13:59 compute-0 podman[307857]: 2025-09-30 18:13:59.150421861 +0000 UTC m=+0.608024098 container died 33fee8ebdce9b899c7d309556125d9dbafe20b740853ded5812d67e28c50a734 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_saha, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Sep 30 18:13:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-e05ec39acc331bf4c2c3db6e3d47580935cedec7964595c046fdec5cf03b71a6-merged.mount: Deactivated successfully.
Sep 30 18:13:59 compute-0 podman[307857]: 2025-09-30 18:13:59.199178478 +0000 UTC m=+0.656780705 container remove 33fee8ebdce9b899c7d309556125d9dbafe20b740853ded5812d67e28c50a734 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_saha, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:13:59 compute-0 systemd[1]: libpod-conmon-33fee8ebdce9b899c7d309556125d9dbafe20b740853ded5812d67e28c50a734.scope: Deactivated successfully.
Sep 30 18:13:59 compute-0 sudo[307750]: pam_unix(sudo:session): session closed for user root
Sep 30 18:13:59 compute-0 ceph-mon[73755]: pgmap v1057: 353 pgs: 353 active+clean; 407 MiB data, 400 MiB used, 40 GiB / 40 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Sep 30 18:13:59 compute-0 sudo[307920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:13:59 compute-0 sudo[307920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:13:59 compute-0 sudo[307920]: pam_unix(sudo:session): session closed for user root
Sep 30 18:13:59 compute-0 sudo[307945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:13:59 compute-0 sudo[307945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:13:59 compute-0 nova_compute[265391]: 2025-09-30 18:13:59.448 2 WARNING neutronclient.v2_0.client [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:13:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:13:59 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:13:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:13:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:13:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:13:59.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:13:59 compute-0 nova_compute[265391]: 2025-09-30 18:13:59.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:13:59 compute-0 podman[276673]: time="2025-09-30T18:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:13:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:13:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10734 "" "Go-http-client/1.1"
Sep 30 18:13:59 compute-0 nova_compute[265391]: 2025-09-30 18:13:59.876 2 INFO nova.virt.libvirt.driver [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Creating config drive at /var/lib/nova/instances/28ad2702-2baf-4865-be24-c468842cee03/disk.config
Sep 30 18:13:59 compute-0 nova_compute[265391]: 2025-09-30 18:13:59.882 2 DEBUG oslo_concurrency.processutils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/28ad2702-2baf-4865-be24-c468842cee03/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmpwjr76kv3 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:13:59 compute-0 podman[308012]: 2025-09-30 18:13:59.898916687 +0000 UTC m=+0.047169516 container create 287e94980ad3c8e5bc10c7f3f71c1ff9290ee02a2a3f514c461242ea6cf8353a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_chaplygin, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:13:59 compute-0 systemd[1]: Started libpod-conmon-287e94980ad3c8e5bc10c7f3f71c1ff9290ee02a2a3f514c461242ea6cf8353a.scope.
Sep 30 18:13:59 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:13:59 compute-0 podman[308012]: 2025-09-30 18:13:59.880186871 +0000 UTC m=+0.028439720 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:13:59 compute-0 podman[308012]: 2025-09-30 18:13:59.993274428 +0000 UTC m=+0.141527287 container init 287e94980ad3c8e5bc10c7f3f71c1ff9290ee02a2a3f514c461242ea6cf8353a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_chaplygin, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Sep 30 18:14:00 compute-0 podman[308012]: 2025-09-30 18:14:00.002495248 +0000 UTC m=+0.150748077 container start 287e94980ad3c8e5bc10c7f3f71c1ff9290ee02a2a3f514c461242ea6cf8353a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_chaplygin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Sep 30 18:14:00 compute-0 podman[308032]: 2025-09-30 18:14:00.002538509 +0000 UTC m=+0.059114967 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, container_name=iscsid, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=iscsid)
Sep 30 18:14:00 compute-0 youthful_chaplygin[308045]: 167 167
Sep 30 18:14:00 compute-0 conmon[308045]: conmon 287e94980ad3c8e5bc10 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-287e94980ad3c8e5bc10c7f3f71c1ff9290ee02a2a3f514c461242ea6cf8353a.scope/container/memory.events
Sep 30 18:14:00 compute-0 podman[308012]: 2025-09-30 18:14:00.00875782 +0000 UTC m=+0.157010669 container attach 287e94980ad3c8e5bc10c7f3f71c1ff9290ee02a2a3f514c461242ea6cf8353a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Sep 30 18:14:00 compute-0 podman[308029]: 2025-09-30 18:14:00.008773671 +0000 UTC m=+0.068770088 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Sep 30 18:14:00 compute-0 systemd[1]: libpod-287e94980ad3c8e5bc10c7f3f71c1ff9290ee02a2a3f514c461242ea6cf8353a.scope: Deactivated successfully.
Sep 30 18:14:00 compute-0 podman[308012]: 2025-09-30 18:14:00.011129952 +0000 UTC m=+0.159382791 container died 287e94980ad3c8e5bc10c7f3f71c1ff9290ee02a2a3f514c461242ea6cf8353a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Sep 30 18:14:00 compute-0 nova_compute[265391]: 2025-09-30 18:14:00.016 2 DEBUG oslo_concurrency.processutils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/28ad2702-2baf-4865-be24-c468842cee03/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmpwjr76kv3" returned: 0 in 0.134s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:14:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-8de916b58eb9daae98d6031c70d91315e71fb6c4ca5ea5fde6227a984d77eae3-merged.mount: Deactivated successfully.
Sep 30 18:14:00 compute-0 podman[308012]: 2025-09-30 18:14:00.046715347 +0000 UTC m=+0.194968176 container remove 287e94980ad3c8e5bc10c7f3f71c1ff9290ee02a2a3f514c461242ea6cf8353a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_chaplygin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Sep 30 18:14:00 compute-0 podman[308033]: 2025-09-30 18:14:00.04762053 +0000 UTC m=+0.099321371 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, maintainer=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, release=1755695350, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, managed_by=edpm_ansible, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Sep 30 18:14:00 compute-0 nova_compute[265391]: 2025-09-30 18:14:00.053 2 DEBUG nova.storage.rbd_utils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] rbd image 28ad2702-2baf-4865-be24-c468842cee03_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:14:00 compute-0 systemd[1]: libpod-conmon-287e94980ad3c8e5bc10c7f3f71c1ff9290ee02a2a3f514c461242ea6cf8353a.scope: Deactivated successfully.
Sep 30 18:14:00 compute-0 nova_compute[265391]: 2025-09-30 18:14:00.064 2 DEBUG oslo_concurrency.processutils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/28ad2702-2baf-4865-be24-c468842cee03/disk.config 28ad2702-2baf-4865-be24-c468842cee03_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:14:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1058: 353 pgs: 353 active+clean; 407 MiB data, 400 MiB used, 40 GiB / 40 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Sep 30 18:14:00 compute-0 podman[308144]: 2025-09-30 18:14:00.267637186 +0000 UTC m=+0.058616724 container create 9fc78c110bdd91972e13a1c663432327c42b0f1de58d6c55aeb71bc6e848ad80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True)
Sep 30 18:14:00 compute-0 systemd[1]: Started libpod-conmon-9fc78c110bdd91972e13a1c663432327c42b0f1de58d6c55aeb71bc6e848ad80.scope.
Sep 30 18:14:00 compute-0 podman[308144]: 2025-09-30 18:14:00.240605144 +0000 UTC m=+0.031584702 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:14:00 compute-0 ceph-mon[73755]: pgmap v1058: 353 pgs: 353 active+clean; 407 MiB data, 400 MiB used, 40 GiB / 40 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Sep 30 18:14:00 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d6944de9a400710fa0adfcca1da25a8f1d264924d9ada8ab0062124b03de54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d6944de9a400710fa0adfcca1da25a8f1d264924d9ada8ab0062124b03de54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d6944de9a400710fa0adfcca1da25a8f1d264924d9ada8ab0062124b03de54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d6944de9a400710fa0adfcca1da25a8f1d264924d9ada8ab0062124b03de54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:14:00 compute-0 podman[308144]: 2025-09-30 18:14:00.370741375 +0000 UTC m=+0.161720983 container init 9fc78c110bdd91972e13a1c663432327c42b0f1de58d6c55aeb71bc6e848ad80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_rhodes, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:14:00 compute-0 podman[308144]: 2025-09-30 18:14:00.384312738 +0000 UTC m=+0.175292256 container start 9fc78c110bdd91972e13a1c663432327c42b0f1de58d6c55aeb71bc6e848ad80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_rhodes, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:14:00 compute-0 podman[308144]: 2025-09-30 18:14:00.392415718 +0000 UTC m=+0.183395256 container attach 9fc78c110bdd91972e13a1c663432327c42b0f1de58d6c55aeb71bc6e848ad80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 18:14:00 compute-0 nova_compute[265391]: 2025-09-30 18:14:00.465 2 DEBUG oslo_concurrency.processutils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/28ad2702-2baf-4865-be24-c468842cee03/disk.config 28ad2702-2baf-4865-be24-c468842cee03_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:14:00 compute-0 nova_compute[265391]: 2025-09-30 18:14:00.467 2 INFO nova.virt.libvirt.driver [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Deleting local config drive /var/lib/nova/instances/28ad2702-2baf-4865-be24-c468842cee03/disk.config because it was imported into RBD.
Sep 30 18:14:00 compute-0 kernel: tapb4130889-fd: entered promiscuous mode
Sep 30 18:14:00 compute-0 NetworkManager[45059]: <info>  [1759256040.5601] manager: (tapb4130889-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Sep 30 18:14:00 compute-0 nova_compute[265391]: 2025-09-30 18:14:00.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:00 compute-0 ovn_controller[156242]: 2025-09-30T18:14:00Z|00080|binding|INFO|Claiming lport b4130889-fd6e-44b4-8184-b79693b30d78 for this chassis.
Sep 30 18:14:00 compute-0 ovn_controller[156242]: 2025-09-30T18:14:00Z|00081|binding|INFO|b4130889-fd6e-44b4-8184-b79693b30d78: Claiming fa:16:3e:f3:96:49 10.100.0.6
Sep 30 18:14:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:00.569 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:96:49 10.100.0.6'], port_security=['fa:16:3e:f3:96:49 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '28ad2702-2baf-4865-be24-c468842cee03', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5fff1904-159a-4b76-8c46-feabf17f29ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddd1f985d8b64b449c79d55b0cbd6422', 'neutron:revision_number': '4', 'neutron:security_group_ids': '34f3cf7b-94cf-408f-b3dc-ae0b57c009fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12c18a77-b252-4a3e-a181-b42644879446, chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=b4130889-fd6e-44b4-8184-b79693b30d78) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:14:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:00.570 166158 INFO neutron.agent.ovn.metadata.agent [-] Port b4130889-fd6e-44b4-8184-b79693b30d78 in datapath 5fff1904-159a-4b76-8c46-feabf17f29ab bound to our chassis
Sep 30 18:14:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:00.571 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5fff1904-159a-4b76-8c46-feabf17f29ab
Sep 30 18:14:00 compute-0 ovn_controller[156242]: 2025-09-30T18:14:00Z|00082|binding|INFO|Setting lport b4130889-fd6e-44b4-8184-b79693b30d78 ovn-installed in OVS
Sep 30 18:14:00 compute-0 nova_compute[265391]: 2025-09-30 18:14:00.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:00 compute-0 ovn_controller[156242]: 2025-09-30T18:14:00Z|00083|binding|INFO|Setting lport b4130889-fd6e-44b4-8184-b79693b30d78 up in Southbound
Sep 30 18:14:00 compute-0 nova_compute[265391]: 2025-09-30 18:14:00.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:00.595 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[89cab930-36da-4072-a02c-5cb2c5c5f9b9]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:14:00 compute-0 systemd-udevd[308185]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:14:00 compute-0 systemd-machined[219917]: New machine qemu-6-instance-00000008.
Sep 30 18:14:00 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000008.
Sep 30 18:14:00 compute-0 NetworkManager[45059]: <info>  [1759256040.6287] device (tapb4130889-fd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 18:14:00 compute-0 NetworkManager[45059]: <info>  [1759256040.6299] device (tapb4130889-fd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Sep 30 18:14:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:00.637 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[2f4bc5a9-6446-438a-9f8c-929b150ebe67]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:14:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:00.640 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[317933c2-f7e6-425f-aa05-ff931bae9172]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:14:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:14:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:00.672 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[674606ac-4ccd-4dd2-b857-d93249919b69]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:14:00 compute-0 mystifying_rhodes[308163]: {
Sep 30 18:14:00 compute-0 mystifying_rhodes[308163]:     "0": [
Sep 30 18:14:00 compute-0 mystifying_rhodes[308163]:         {
Sep 30 18:14:00 compute-0 mystifying_rhodes[308163]:             "devices": [
Sep 30 18:14:00 compute-0 mystifying_rhodes[308163]:                 "/dev/loop3"
Sep 30 18:14:00 compute-0 mystifying_rhodes[308163]:             ],
Sep 30 18:14:00 compute-0 mystifying_rhodes[308163]:             "lv_name": "ceph_lv0",
Sep 30 18:14:00 compute-0 mystifying_rhodes[308163]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:14:00 compute-0 mystifying_rhodes[308163]:             "lv_size": "21470642176",
Sep 30 18:14:00 compute-0 mystifying_rhodes[308163]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:14:00 compute-0 mystifying_rhodes[308163]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:14:00 compute-0 mystifying_rhodes[308163]:             "name": "ceph_lv0",
Sep 30 18:14:00 compute-0 mystifying_rhodes[308163]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:14:00 compute-0 mystifying_rhodes[308163]:             "tags": {
Sep 30 18:14:00 compute-0 mystifying_rhodes[308163]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:14:00 compute-0 mystifying_rhodes[308163]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:14:00 compute-0 mystifying_rhodes[308163]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:14:00 compute-0 mystifying_rhodes[308163]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:14:00 compute-0 mystifying_rhodes[308163]:                 "ceph.cluster_name": "ceph",
Sep 30 18:14:00 compute-0 mystifying_rhodes[308163]:                 "ceph.crush_device_class": "",
Sep 30 18:14:00 compute-0 mystifying_rhodes[308163]:                 "ceph.encrypted": "0",
Sep 30 18:14:00 compute-0 mystifying_rhodes[308163]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:14:00 compute-0 mystifying_rhodes[308163]:                 "ceph.osd_id": "0",
Sep 30 18:14:00 compute-0 mystifying_rhodes[308163]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:14:00 compute-0 mystifying_rhodes[308163]:                 "ceph.type": "block",
Sep 30 18:14:00 compute-0 mystifying_rhodes[308163]:                 "ceph.vdo": "0",
Sep 30 18:14:00 compute-0 mystifying_rhodes[308163]:                 "ceph.with_tpm": "0"
Sep 30 18:14:00 compute-0 mystifying_rhodes[308163]:             },
Sep 30 18:14:00 compute-0 mystifying_rhodes[308163]:             "type": "block",
Sep 30 18:14:00 compute-0 mystifying_rhodes[308163]:             "vg_name": "ceph_vg0"
Sep 30 18:14:00 compute-0 mystifying_rhodes[308163]:         }
Sep 30 18:14:00 compute-0 mystifying_rhodes[308163]:     ]
Sep 30 18:14:00 compute-0 mystifying_rhodes[308163]: }
Sep 30 18:14:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:00.690 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[3914416d-e852-43eb-9c03-a682bd645007]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5fff1904-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:07:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456734, 'reachable_time': 30287, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308198, 'error': None, 'target': 'ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:14:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:00.708 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[d80ddda5-e4ac-4558-957e-150cb81fa14c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5fff1904-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 456747, 'tstamp': 456747}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308199, 'error': None, 'target': 'ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5fff1904-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 456750, 'tstamp': 456750}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308199, 'error': None, 'target': 'ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:14:00 compute-0 systemd[1]: libpod-9fc78c110bdd91972e13a1c663432327c42b0f1de58d6c55aeb71bc6e848ad80.scope: Deactivated successfully.
Sep 30 18:14:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:00.709 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5fff1904-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:14:00 compute-0 nova_compute[265391]: 2025-09-30 18:14:00.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:00 compute-0 nova_compute[265391]: 2025-09-30 18:14:00.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:00.714 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5fff1904-10, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:14:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:00.715 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:14:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:00.715 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5fff1904-10, col_values=(('external_ids', {'iface-id': '3a8ea0a0-c179-4516-9404-04b68a17e79e'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:14:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:00.715 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:14:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:00.716 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[f4f10854-2750-4868-b3c1-dc84bd1ebdd2]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-5fff1904-159a-4b76-8c46-feabf17f29ab\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/5fff1904-159a-4b76-8c46-feabf17f29ab.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 5fff1904-159a-4b76-8c46-feabf17f29ab\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:14:00 compute-0 podman[308201]: 2025-09-30 18:14:00.76085569 +0000 UTC m=+0.026915450 container died 9fc78c110bdd91972e13a1c663432327c42b0f1de58d6c55aeb71bc6e848ad80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_rhodes, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Sep 30 18:14:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-08d6944de9a400710fa0adfcca1da25a8f1d264924d9ada8ab0062124b03de54-merged.mount: Deactivated successfully.
Sep 30 18:14:00 compute-0 podman[308201]: 2025-09-30 18:14:00.800182402 +0000 UTC m=+0.066242162 container remove 9fc78c110bdd91972e13a1c663432327c42b0f1de58d6c55aeb71bc6e848ad80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_rhodes, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Sep 30 18:14:00 compute-0 systemd[1]: libpod-conmon-9fc78c110bdd91972e13a1c663432327c42b0f1de58d6c55aeb71bc6e848ad80.scope: Deactivated successfully.
Sep 30 18:14:00 compute-0 sudo[307945]: pam_unix(sudo:session): session closed for user root
Sep 30 18:14:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:00 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:14:00.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:00 compute-0 sudo[308224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:14:00 compute-0 sudo[308224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:14:00 compute-0 sudo[308224]: pam_unix(sudo:session): session closed for user root
Sep 30 18:14:00 compute-0 nova_compute[265391]: 2025-09-30 18:14:00.935 2 DEBUG nova.compute.manager [req-59d7a45b-7a4c-4f3f-b44d-281769f5382a req-65a176b3-de1e-4da9-a37e-a193569daeab 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Received event network-vif-plugged-b4130889-fd6e-44b4-8184-b79693b30d78 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:14:00 compute-0 nova_compute[265391]: 2025-09-30 18:14:00.935 2 DEBUG oslo_concurrency.lockutils [req-59d7a45b-7a4c-4f3f-b44d-281769f5382a req-65a176b3-de1e-4da9-a37e-a193569daeab 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "28ad2702-2baf-4865-be24-c468842cee03-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:14:00 compute-0 nova_compute[265391]: 2025-09-30 18:14:00.935 2 DEBUG oslo_concurrency.lockutils [req-59d7a45b-7a4c-4f3f-b44d-281769f5382a req-65a176b3-de1e-4da9-a37e-a193569daeab 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "28ad2702-2baf-4865-be24-c468842cee03-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:14:00 compute-0 nova_compute[265391]: 2025-09-30 18:14:00.936 2 DEBUG oslo_concurrency.lockutils [req-59d7a45b-7a4c-4f3f-b44d-281769f5382a req-65a176b3-de1e-4da9-a37e-a193569daeab 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "28ad2702-2baf-4865-be24-c468842cee03-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:14:00 compute-0 nova_compute[265391]: 2025-09-30 18:14:00.936 2 DEBUG nova.compute.manager [req-59d7a45b-7a4c-4f3f-b44d-281769f5382a req-65a176b3-de1e-4da9-a37e-a193569daeab 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Processing event network-vif-plugged-b4130889-fd6e-44b4-8184-b79693b30d78 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:14:00 compute-0 sudo[308276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:14:00 compute-0 sudo[308276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:14:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:01.017 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:14:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:01.019 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:14:01 compute-0 nova_compute[265391]: 2025-09-30 18:14:01.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:01.020 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:14:01 compute-0 openstack_network_exporter[279566]: ERROR   18:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:14:01 compute-0 openstack_network_exporter[279566]: ERROR   18:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:14:01 compute-0 openstack_network_exporter[279566]: ERROR   18:14:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:14:01 compute-0 openstack_network_exporter[279566]: ERROR   18:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:14:01 compute-0 openstack_network_exporter[279566]: ERROR   18:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:14:01 compute-0 podman[308350]: 2025-09-30 18:14:01.475567209 +0000 UTC m=+0.065369989 container create 67a1accb5c135975a834dbc14b206c789725ba3b7f6170ffaa431808d4c3ff76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Sep 30 18:14:01 compute-0 nova_compute[265391]: 2025-09-30 18:14:01.505 2 DEBUG nova.compute.manager [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:14:01 compute-0 nova_compute[265391]: 2025-09-30 18:14:01.508 2 DEBUG nova.virt.libvirt.driver [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Sep 30 18:14:01 compute-0 nova_compute[265391]: 2025-09-30 18:14:01.514 2 INFO nova.virt.libvirt.driver [-] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Instance spawned successfully.
Sep 30 18:14:01 compute-0 nova_compute[265391]: 2025-09-30 18:14:01.514 2 DEBUG nova.virt.libvirt.driver [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Sep 30 18:14:01 compute-0 systemd[1]: Started libpod-conmon-67a1accb5c135975a834dbc14b206c789725ba3b7f6170ffaa431808d4c3ff76.scope.
Sep 30 18:14:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:01 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218001270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:01 compute-0 podman[308350]: 2025-09-30 18:14:01.451580676 +0000 UTC m=+0.041383496 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:14:01 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:14:01 compute-0 podman[308350]: 2025-09-30 18:14:01.56759292 +0000 UTC m=+0.157395740 container init 67a1accb5c135975a834dbc14b206c789725ba3b7f6170ffaa431808d4c3ff76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:14:01 compute-0 podman[308350]: 2025-09-30 18:14:01.577167779 +0000 UTC m=+0.166970569 container start 67a1accb5c135975a834dbc14b206c789725ba3b7f6170ffaa431808d4c3ff76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_gauss, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:14:01 compute-0 podman[308350]: 2025-09-30 18:14:01.581084921 +0000 UTC m=+0.170887711 container attach 67a1accb5c135975a834dbc14b206c789725ba3b7f6170ffaa431808d4c3ff76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:14:01 compute-0 romantic_gauss[308368]: 167 167
Sep 30 18:14:01 compute-0 systemd[1]: libpod-67a1accb5c135975a834dbc14b206c789725ba3b7f6170ffaa431808d4c3ff76.scope: Deactivated successfully.
Sep 30 18:14:01 compute-0 podman[308350]: 2025-09-30 18:14:01.584480309 +0000 UTC m=+0.174283109 container died 67a1accb5c135975a834dbc14b206c789725ba3b7f6170ffaa431808d4c3ff76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_gauss, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:14:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1c917221d2d3c93effdae5ee9c0596a3d6e624f90dedfeca04597fefd3e052d-merged.mount: Deactivated successfully.
Sep 30 18:14:01 compute-0 podman[308350]: 2025-09-30 18:14:01.622551528 +0000 UTC m=+0.212354318 container remove 67a1accb5c135975a834dbc14b206c789725ba3b7f6170ffaa431808d4c3ff76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_gauss, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True)
Sep 30 18:14:01 compute-0 systemd[1]: libpod-conmon-67a1accb5c135975a834dbc14b206c789725ba3b7f6170ffaa431808d4c3ff76.scope: Deactivated successfully.
Sep 30 18:14:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:14:01.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:01 compute-0 podman[308393]: 2025-09-30 18:14:01.852181674 +0000 UTC m=+0.047231518 container create 2627bbdf2d13e4d6748e85b606c01de6a8910a1cbe80ffc878e743f6b2810dd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bardeen, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:14:01 compute-0 systemd[1]: Started libpod-conmon-2627bbdf2d13e4d6748e85b606c01de6a8910a1cbe80ffc878e743f6b2810dd2.scope.
Sep 30 18:14:01 compute-0 podman[308393]: 2025-09-30 18:14:01.833307114 +0000 UTC m=+0.028356968 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:14:01 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:14:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a602038c86e38fd6bdbfbb86aa881a54171c6c9b81294e3639d5b44f3f69135b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:14:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a602038c86e38fd6bdbfbb86aa881a54171c6c9b81294e3639d5b44f3f69135b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:14:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a602038c86e38fd6bdbfbb86aa881a54171c6c9b81294e3639d5b44f3f69135b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:14:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a602038c86e38fd6bdbfbb86aa881a54171c6c9b81294e3639d5b44f3f69135b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:14:01 compute-0 podman[308393]: 2025-09-30 18:14:01.97056654 +0000 UTC m=+0.165616404 container init 2627bbdf2d13e4d6748e85b606c01de6a8910a1cbe80ffc878e743f6b2810dd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Sep 30 18:14:01 compute-0 podman[308393]: 2025-09-30 18:14:01.978442104 +0000 UTC m=+0.173496188 container start 2627bbdf2d13e4d6748e85b606c01de6a8910a1cbe80ffc878e743f6b2810dd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bardeen, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Sep 30 18:14:01 compute-0 podman[308393]: 2025-09-30 18:14:01.983712451 +0000 UTC m=+0.178762315 container attach 2627bbdf2d13e4d6748e85b606c01de6a8910a1cbe80ffc878e743f6b2810dd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bardeen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Sep 30 18:14:02 compute-0 nova_compute[265391]: 2025-09-30 18:14:02.028 2 DEBUG nova.virt.libvirt.driver [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:14:02 compute-0 nova_compute[265391]: 2025-09-30 18:14:02.028 2 DEBUG nova.virt.libvirt.driver [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:14:02 compute-0 nova_compute[265391]: 2025-09-30 18:14:02.029 2 DEBUG nova.virt.libvirt.driver [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:14:02 compute-0 nova_compute[265391]: 2025-09-30 18:14:02.029 2 DEBUG nova.virt.libvirt.driver [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:14:02 compute-0 nova_compute[265391]: 2025-09-30 18:14:02.029 2 DEBUG nova.virt.libvirt.driver [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:14:02 compute-0 nova_compute[265391]: 2025-09-30 18:14:02.030 2 DEBUG nova.virt.libvirt.driver [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:14:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1059: 353 pgs: 353 active+clean; 407 MiB data, 400 MiB used, 40 GiB / 40 GiB avail; 314 KiB/s rd, 3.4 MiB/s wr, 78 op/s
Sep 30 18:14:02 compute-0 nova_compute[265391]: 2025-09-30 18:14:02.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:02 compute-0 nova_compute[265391]: 2025-09-30 18:14:02.551 2 INFO nova.compute.manager [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Took 10.96 seconds to spawn the instance on the hypervisor.
Sep 30 18:14:02 compute-0 nova_compute[265391]: 2025-09-30 18:14:02.552 2 DEBUG nova.compute.manager [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Sep 30 18:14:02 compute-0 lvm[308482]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:14:02 compute-0 lvm[308482]: VG ceph_vg0 finished
Sep 30 18:14:02 compute-0 gracious_bardeen[308409]: {}
Sep 30 18:14:02 compute-0 systemd[1]: libpod-2627bbdf2d13e4d6748e85b606c01de6a8910a1cbe80ffc878e743f6b2810dd2.scope: Deactivated successfully.
Sep 30 18:14:02 compute-0 podman[308393]: 2025-09-30 18:14:02.764323872 +0000 UTC m=+0.959373716 container died 2627bbdf2d13e4d6748e85b606c01de6a8910a1cbe80ffc878e743f6b2810dd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bardeen, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:14:02 compute-0 systemd[1]: libpod-2627bbdf2d13e4d6748e85b606c01de6a8910a1cbe80ffc878e743f6b2810dd2.scope: Consumed 1.188s CPU time.
Sep 30 18:14:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-a602038c86e38fd6bdbfbb86aa881a54171c6c9b81294e3639d5b44f3f69135b-merged.mount: Deactivated successfully.
Sep 30 18:14:02 compute-0 podman[308393]: 2025-09-30 18:14:02.805922883 +0000 UTC m=+1.000972727 container remove 2627bbdf2d13e4d6748e85b606c01de6a8910a1cbe80ffc878e743f6b2810dd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bardeen, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Sep 30 18:14:02 compute-0 systemd[1]: libpod-conmon-2627bbdf2d13e4d6748e85b606c01de6a8910a1cbe80ffc878e743f6b2810dd2.scope: Deactivated successfully.
Sep 30 18:14:02 compute-0 sudo[308276]: pam_unix(sudo:session): session closed for user root
Sep 30 18:14:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:14:02 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:14:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:14:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:02 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218001270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:14:02.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:02 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:14:02 compute-0 sudo[308496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:14:02 compute-0 sudo[308496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:14:02 compute-0 sudo[308496]: pam_unix(sudo:session): session closed for user root
Sep 30 18:14:02 compute-0 nova_compute[265391]: 2025-09-30 18:14:02.988 2 DEBUG nova.compute.manager [req-e3427936-d356-4760-9257-bf1a5b8f6171 req-277c866f-4b85-43aa-979c-a63313f8dc02 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Received event network-vif-plugged-b4130889-fd6e-44b4-8184-b79693b30d78 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:14:02 compute-0 nova_compute[265391]: 2025-09-30 18:14:02.988 2 DEBUG oslo_concurrency.lockutils [req-e3427936-d356-4760-9257-bf1a5b8f6171 req-277c866f-4b85-43aa-979c-a63313f8dc02 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "28ad2702-2baf-4865-be24-c468842cee03-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:14:02 compute-0 nova_compute[265391]: 2025-09-30 18:14:02.989 2 DEBUG oslo_concurrency.lockutils [req-e3427936-d356-4760-9257-bf1a5b8f6171 req-277c866f-4b85-43aa-979c-a63313f8dc02 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "28ad2702-2baf-4865-be24-c468842cee03-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:14:02 compute-0 nova_compute[265391]: 2025-09-30 18:14:02.989 2 DEBUG oslo_concurrency.lockutils [req-e3427936-d356-4760-9257-bf1a5b8f6171 req-277c866f-4b85-43aa-979c-a63313f8dc02 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "28ad2702-2baf-4865-be24-c468842cee03-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:14:02 compute-0 nova_compute[265391]: 2025-09-30 18:14:02.989 2 DEBUG nova.compute.manager [req-e3427936-d356-4760-9257-bf1a5b8f6171 req-277c866f-4b85-43aa-979c-a63313f8dc02 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] No waiting events found dispatching network-vif-plugged-b4130889-fd6e-44b4-8184-b79693b30d78 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:14:02 compute-0 nova_compute[265391]: 2025-09-30 18:14:02.989 2 WARNING nova.compute.manager [req-e3427936-d356-4760-9257-bf1a5b8f6171 req-277c866f-4b85-43aa-979c-a63313f8dc02 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Received unexpected event network-vif-plugged-b4130889-fd6e-44b4-8184-b79693b30d78 for instance with vm_state active and task_state None.
Sep 30 18:14:03 compute-0 nova_compute[265391]: 2025-09-30 18:14:03.087 2 INFO nova.compute.manager [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Took 16.75 seconds to build instance.
Sep 30 18:14:03 compute-0 ceph-mon[73755]: pgmap v1059: 353 pgs: 353 active+clean; 407 MiB data, 400 MiB used, 40 GiB / 40 GiB avail; 314 KiB/s rd, 3.4 MiB/s wr, 78 op/s
Sep 30 18:14:03 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:14:03 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:14:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:03 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:03 compute-0 nova_compute[265391]: 2025-09-30 18:14:03.592 2 DEBUG oslo_concurrency.lockutils [None req-e9b3e511-77d9-4e2e-b07e-168e4b7a1e45 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "28ad2702-2baf-4865-be24-c468842cee03" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.274s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:14:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:14:03.688Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:14:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:14:03.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1060: 353 pgs: 353 active+clean; 407 MiB data, 400 MiB used, 40 GiB / 40 GiB avail; 1.8 MiB/s rd, 3.4 MiB/s wr, 140 op/s
Sep 30 18:14:04 compute-0 nova_compute[265391]: 2025-09-30 18:14:04.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:04 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:14:04.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:05 compute-0 ceph-mon[73755]: pgmap v1060: 353 pgs: 353 active+clean; 407 MiB data, 400 MiB used, 40 GiB / 40 GiB avail; 1.8 MiB/s rd, 3.4 MiB/s wr, 140 op/s
Sep 30 18:14:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:05 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:14:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:14:05.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1061: 353 pgs: 353 active+clean; 407 MiB data, 400 MiB used, 40 GiB / 40 GiB avail; 1.5 MiB/s rd, 33 KiB/s wr, 62 op/s
Sep 30 18:14:06 compute-0 ceph-mon[73755]: pgmap v1061: 353 pgs: 353 active+clean; 407 MiB data, 400 MiB used, 40 GiB / 40 GiB avail; 1.5 MiB/s rd, 33 KiB/s wr, 62 op/s
Sep 30 18:14:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:06 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218001270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:14:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:14:06.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:14:07 compute-0 sshd-session[307635]: error: kex_exchange_identification: read: Connection timed out
Sep 30 18:14:07 compute-0 sshd-session[307635]: banner exchange: Connection from 115.190.39.222 port 51518: Connection timed out
Sep 30 18:14:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:14:07.172Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:14:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:14:07.172Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:14:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:14:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:14:07 compute-0 nova_compute[265391]: 2025-09-30 18:14:07.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:14:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:14:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:07 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005077972382301231 of space, bias 1.0, pg target 1.015594476460246 quantized to 32 (current 32)
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.19875870302427234 quantized to 32 (current 32)
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006074184411017777 quantized to 16 (current 32)
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.8981826284430553e-05 quantized to 32 (current 32)
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011389095770658333 quantized to 32 (current 32)
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006453820936706387 quantized to 32 (current 32)
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015185461027544442 quantized to 32 (current 32)
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:14:07
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'backups', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'volumes', '.nfs', 'default.rgw.control', 'vms']
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:14:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:14:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:14:07.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1062: 353 pgs: 353 active+clean; 407 MiB data, 400 MiB used, 40 GiB / 40 GiB avail; 1.5 MiB/s rd, 33 KiB/s wr, 62 op/s
Sep 30 18:14:08 compute-0 ceph-mon[73755]: pgmap v1062: 353 pgs: 353 active+clean; 407 MiB data, 400 MiB used, 40 GiB / 40 GiB avail; 1.5 MiB/s rd, 33 KiB/s wr, 62 op/s
Sep 30 18:14:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:14:08] "GET /metrics HTTP/1.1" 200 46654 "" "Prometheus/2.51.0"
Sep 30 18:14:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:14:08] "GET /metrics HTTP/1.1" 200 46654 "" "Prometheus/2.51.0"
Sep 30 18:14:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:08 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:14:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:14:08.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:14:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:09 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:09 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3791510506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:14:09 compute-0 nova_compute[265391]: 2025-09-30 18:14:09.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:14:09.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:10 compute-0 sudo[308529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:14:10 compute-0 sudo[308529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:14:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1063: 353 pgs: 353 active+clean; 407 MiB data, 400 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 34 KiB/s wr, 76 op/s
Sep 30 18:14:10 compute-0 sudo[308529]: pam_unix(sudo:session): session closed for user root
Sep 30 18:14:10 compute-0 ceph-mon[73755]: pgmap v1063: 353 pgs: 353 active+clean; 407 MiB data, 400 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 34 KiB/s wr, 76 op/s
Sep 30 18:14:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:14:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:10 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218001270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:14:10.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:11 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c0046e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:14:11.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1064: 353 pgs: 353 active+clean; 407 MiB data, 400 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 20 KiB/s wr, 75 op/s
Sep 30 18:14:12 compute-0 nova_compute[265391]: 2025-09-30 18:14:12.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:12 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c0046e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:14:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:14:12.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:14:13 compute-0 ceph-mon[73755]: pgmap v1064: 353 pgs: 353 active+clean; 407 MiB data, 400 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 20 KiB/s wr, 75 op/s
Sep 30 18:14:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:13 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c0046e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:14:13.689Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:14:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:14:13.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1065: 353 pgs: 353 active+clean; 418 MiB data, 411 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 96 op/s
Sep 30 18:14:14 compute-0 ceph-mon[73755]: pgmap v1065: 353 pgs: 353 active+clean; 418 MiB data, 411 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 96 op/s
Sep 30 18:14:14 compute-0 nova_compute[265391]: 2025-09-30 18:14:14.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:14 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:14:14.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:15 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:14:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:14:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:14:15.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:14:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1066: 353 pgs: 353 active+clean; 418 MiB data, 411 MiB used, 40 GiB / 40 GiB avail; 442 KiB/s rd, 1.3 MiB/s wr, 34 op/s
Sep 30 18:14:16 compute-0 ovn_controller[156242]: 2025-09-30T18:14:16Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f3:96:49 10.100.0.6
Sep 30 18:14:16 compute-0 ovn_controller[156242]: 2025-09-30T18:14:16Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f3:96:49 10.100.0.6
Sep 30 18:14:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:16 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:14:16.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:14:17.173Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:14:17 compute-0 ceph-mon[73755]: pgmap v1066: 353 pgs: 353 active+clean; 418 MiB data, 411 MiB used, 40 GiB / 40 GiB avail; 442 KiB/s rd, 1.3 MiB/s wr, 34 op/s
Sep 30 18:14:17 compute-0 nova_compute[265391]: 2025-09-30 18:14:17.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:17 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c0046e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:17 compute-0 podman[308565]: 2025-09-30 18:14:17.582438631 +0000 UTC m=+0.110219745 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:14:17 compute-0 podman[308564]: 2025-09-30 18:14:17.616708901 +0000 UTC m=+0.144735291 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.build-date=20250930, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Sep 30 18:14:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:14:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:14:17.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:14:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1067: 353 pgs: 353 active+clean; 418 MiB data, 411 MiB used, 40 GiB / 40 GiB avail; 442 KiB/s rd, 1.3 MiB/s wr, 34 op/s
Sep 30 18:14:18 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3792421474' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:14:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:14:18] "GET /metrics HTTP/1.1" 200 46654 "" "Prometheus/2.51.0"
Sep 30 18:14:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:14:18] "GET /metrics HTTP/1.1" 200 46654 "" "Prometheus/2.51.0"
Sep 30 18:14:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:18 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:14:18.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:19 compute-0 ceph-mon[73755]: pgmap v1067: 353 pgs: 353 active+clean; 418 MiB data, 411 MiB used, 40 GiB / 40 GiB avail; 442 KiB/s rd, 1.3 MiB/s wr, 34 op/s
Sep 30 18:14:19 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1209878573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:14:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:19 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:19 compute-0 nova_compute[265391]: 2025-09-30 18:14:19.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:14:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:14:19.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:14:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1068: 353 pgs: 353 active+clean; 486 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 665 KiB/s rd, 3.9 MiB/s wr, 101 op/s
Sep 30 18:14:20 compute-0 ceph-mon[73755]: pgmap v1068: 353 pgs: 353 active+clean; 486 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 665 KiB/s rd, 3.9 MiB/s wr, 101 op/s
Sep 30 18:14:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:14:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:20 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:14:20.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:21 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c0046e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:14:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:14:21.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:14:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1069: 353 pgs: 353 active+clean; 486 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 253 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Sep 30 18:14:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:14:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:14:22 compute-0 nova_compute[265391]: 2025-09-30 18:14:22.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:22 compute-0 podman[308617]: 2025-09-30 18:14:22.542443073 +0000 UTC m=+0.074248650 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:14:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:22 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:14:22.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:23 compute-0 ceph-mon[73755]: pgmap v1069: 353 pgs: 353 active+clean; 486 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 253 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Sep 30 18:14:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:14:23 compute-0 nova_compute[265391]: 2025-09-30 18:14:23.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:14:23 compute-0 nova_compute[265391]: 2025-09-30 18:14:23.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:14:23 compute-0 nova_compute[265391]: 2025-09-30 18:14:23.429 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.12/site-packages/nova/compute/manager.py:11947
Sep 30 18:14:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:23 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:14:23.690Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:14:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:14:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:14:23.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:14:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1070: 353 pgs: 353 active+clean; 486 MiB data, 447 MiB used, 40 GiB / 40 GiB avail; 261 KiB/s rd, 3.9 MiB/s wr, 99 op/s
Sep 30 18:14:24 compute-0 nova_compute[265391]: 2025-09-30 18:14:24.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:24 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:24 compute-0 nova_compute[265391]: 2025-09-30 18:14:24.934 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:14:24 compute-0 nova_compute[265391]: 2025-09-30 18:14:24.934 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:14:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:14:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:14:24.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:14:25 compute-0 ceph-mon[73755]: pgmap v1070: 353 pgs: 353 active+clean; 486 MiB data, 447 MiB used, 40 GiB / 40 GiB avail; 261 KiB/s rd, 3.9 MiB/s wr, 99 op/s
Sep 30 18:14:25 compute-0 nova_compute[265391]: 2025-09-30 18:14:25.453 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:14:25 compute-0 nova_compute[265391]: 2025-09-30 18:14:25.453 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:14:25 compute-0 nova_compute[265391]: 2025-09-30 18:14:25.453 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:14:25 compute-0 nova_compute[265391]: 2025-09-30 18:14:25.454 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:14:25 compute-0 nova_compute[265391]: 2025-09-30 18:14:25.454 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:14:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:25 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:14:25 compute-0 unix_chkpwd[308663]: password check failed for user (root)
Sep 30 18:14:25 compute-0 sshd-session[308638]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158  user=root
Sep 30 18:14:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:14:25.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:25 compute-0 sshd-session[308640]: Invalid user ircd from 14.225.220.107 port 56952
Sep 30 18:14:25 compute-0 sshd-session[308640]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:14:25 compute-0 sshd-session[308640]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107
Sep 30 18:14:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:14:25 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1198616533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:14:25 compute-0 nova_compute[265391]: 2025-09-30 18:14:25.962 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:14:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1071: 353 pgs: 353 active+clean; 486 MiB data, 447 MiB used, 40 GiB / 40 GiB avail; 230 KiB/s rd, 2.7 MiB/s wr, 77 op/s
Sep 30 18:14:26 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1198616533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:14:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:26 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:14:26.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:27 compute-0 nova_compute[265391]: 2025-09-30 18:14:27.025 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:14:27 compute-0 nova_compute[265391]: 2025-09-30 18:14:27.025 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:14:27 compute-0 nova_compute[265391]: 2025-09-30 18:14:27.029 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:14:27 compute-0 nova_compute[265391]: 2025-09-30 18:14:27.030 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:14:27 compute-0 nova_compute[265391]: 2025-09-30 18:14:27.035 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:14:27 compute-0 nova_compute[265391]: 2025-09-30 18:14:27.036 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:14:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:14:27.174Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:14:27 compute-0 ceph-mon[73755]: pgmap v1071: 353 pgs: 353 active+clean; 486 MiB data, 447 MiB used, 40 GiB / 40 GiB avail; 230 KiB/s rd, 2.7 MiB/s wr, 77 op/s
Sep 30 18:14:27 compute-0 nova_compute[265391]: 2025-09-30 18:14:27.243 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:14:27 compute-0 nova_compute[265391]: 2025-09-30 18:14:27.245 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:14:27 compute-0 nova_compute[265391]: 2025-09-30 18:14:27.273 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.028s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:14:27 compute-0 nova_compute[265391]: 2025-09-30 18:14:27.273 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3881MB free_disk=39.74360275268555GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:14:27 compute-0 nova_compute[265391]: 2025-09-30 18:14:27.274 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:14:27 compute-0 nova_compute[265391]: 2025-09-30 18:14:27.274 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:14:27 compute-0 nova_compute[265391]: 2025-09-30 18:14:27.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:27 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c0046e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:14:27.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:28 compute-0 sshd-session[308638]: Failed password for root from 45.252.249.158 port 60728 ssh2
Sep 30 18:14:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1072: 353 pgs: 353 active+clean; 486 MiB data, 447 MiB used, 40 GiB / 40 GiB avail; 230 KiB/s rd, 2.7 MiB/s wr, 77 op/s
Sep 30 18:14:28 compute-0 sshd-session[308640]: Failed password for invalid user ircd from 14.225.220.107 port 56952 ssh2
Sep 30 18:14:28 compute-0 nova_compute[265391]: 2025-09-30 18:14:28.326 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Instance 761dbb06-6272-4941-a65f-5ad2f8cfbb70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Sep 30 18:14:28 compute-0 nova_compute[265391]: 2025-09-30 18:14:28.327 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Instance 23ad643b-d29f-4fe8-a347-92df178ae0cd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Sep 30 18:14:28 compute-0 nova_compute[265391]: 2025-09-30 18:14:28.328 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Instance 28ad2702-2baf-4865-be24-c468842cee03 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Sep 30 18:14:28 compute-0 nova_compute[265391]: 2025-09-30 18:14:28.328 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:14:28 compute-0 nova_compute[265391]: 2025-09-30 18:14:28.329 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=960MB phys_disk=39GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:14:27 up  1:17,  0 user,  load average: 1.17, 0.96, 0.93\n', 'num_instances': '3', 'num_vm_active': '3', 'num_task_None': '3', 'num_os_type_None': '3', 'num_proj_ddd1f985d8b64b449c79d55b0cbd6422': '3', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:14:28 compute-0 nova_compute[265391]: 2025-09-30 18:14:28.348 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Refreshing inventories for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:822
Sep 30 18:14:28 compute-0 nova_compute[265391]: 2025-09-30 18:14:28.372 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Updating ProviderTree inventory for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:786
Sep 30 18:14:28 compute-0 nova_compute[265391]: 2025-09-30 18:14:28.372 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Updating inventory in ProviderTree for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Sep 30 18:14:28 compute-0 nova_compute[265391]: 2025-09-30 18:14:28.390 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Refreshing aggregate associations for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2, aggregates: None _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:831
Sep 30 18:14:28 compute-0 nova_compute[265391]: 2025-09-30 18:14:28.411 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Refreshing trait associations for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2, traits: COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SOUND_MODEL_SB16,COMPUTE_ARCH_X86_64,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIRTIO_PACKED,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_TIS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOUND_MODEL_ICH9,COMPUTE_SOUND_MODEL_USB,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOUND_MODEL_PCSPK,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ADDRESS_SPACE_EMULATED,COMPUTE_RESCUE_BFV,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_STATELESS_FIRMWARE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_IGB,HW_ARCH_X86_64,COMPUTE_ACCELERATORS,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOUND_MODEL_ES1370,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_CRB,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_VIRTIO_FS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ADDRESS_SPACE_PASSTHROUGH,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_F16C,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOUND_MODEL_ICH6,COMPUTE_SOUND_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SOUND_MODEL_AC97,HW_CPU_X86_SSE42 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:843
Sep 30 18:14:28 compute-0 nova_compute[265391]: 2025-09-30 18:14:28.483 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:14:28 compute-0 sshd-session[308640]: Received disconnect from 14.225.220.107 port 56952:11: Bye Bye [preauth]
Sep 30 18:14:28 compute-0 sshd-session[308640]: Disconnected from invalid user ircd 14.225.220.107 port 56952 [preauth]
Sep 30 18:14:28 compute-0 sshd-session[308638]: Received disconnect from 45.252.249.158 port 60728:11: Bye Bye [preauth]
Sep 30 18:14:28 compute-0 sshd-session[308638]: Disconnected from authenticating user root 45.252.249.158 port 60728 [preauth]
Sep 30 18:14:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:14:28] "GET /metrics HTTP/1.1" 200 46652 "" "Prometheus/2.51.0"
Sep 30 18:14:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:14:28] "GET /metrics HTTP/1.1" 200 46652 "" "Prometheus/2.51.0"
Sep 30 18:14:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:28 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:14:28.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:28 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:14:28 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/817885264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:14:29 compute-0 nova_compute[265391]: 2025-09-30 18:14:29.000 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:14:29 compute-0 nova_compute[265391]: 2025-09-30 18:14:29.006 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:14:29 compute-0 ceph-mon[73755]: pgmap v1072: 353 pgs: 353 active+clean; 486 MiB data, 447 MiB used, 40 GiB / 40 GiB avail; 230 KiB/s rd, 2.7 MiB/s wr, 77 op/s
Sep 30 18:14:29 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/817885264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:14:29 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/919828019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:14:29 compute-0 nova_compute[265391]: 2025-09-30 18:14:29.518 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:14:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:29 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:29 compute-0 nova_compute[265391]: 2025-09-30 18:14:29.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:29 compute-0 podman[276673]: time="2025-09-30T18:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:14:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:14:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:14:29.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:14:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:14:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10731 "" "Go-http-client/1.1"
Sep 30 18:14:30 compute-0 nova_compute[265391]: 2025-09-30 18:14:30.032 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:14:30 compute-0 nova_compute[265391]: 2025-09-30 18:14:30.033 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.759s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:14:30 compute-0 nova_compute[265391]: 2025-09-30 18:14:30.033 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:14:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1073: 353 pgs: 353 active+clean; 486 MiB data, 447 MiB used, 40 GiB / 40 GiB avail; 2.1 MiB/s rd, 2.7 MiB/s wr, 141 op/s
Sep 30 18:14:30 compute-0 sudo[308695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:14:30 compute-0 sudo[308695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:14:30 compute-0 sudo[308695]: pam_unix(sudo:session): session closed for user root
Sep 30 18:14:30 compute-0 podman[308720]: 2025-09-30 18:14:30.240203214 +0000 UTC m=+0.066126789 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_id=iscsid, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Sep 30 18:14:30 compute-0 podman[308719]: 2025-09-30 18:14:30.271365294 +0000 UTC m=+0.096221841 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true)
Sep 30 18:14:30 compute-0 podman[308721]: 2025-09-30 18:14:30.293201911 +0000 UTC m=+0.113200772 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, build-date=2025-08-20T13:12:41)
Sep 30 18:14:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:14:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:30 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:30 compute-0 nova_compute[265391]: 2025-09-30 18:14:30.934 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:14:30 compute-0 nova_compute[265391]: 2025-09-30 18:14:30.935 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:14:30 compute-0 nova_compute[265391]: 2025-09-30 18:14:30.935 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:14:30 compute-0 nova_compute[265391]: 2025-09-30 18:14:30.935 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:14:30 compute-0 nova_compute[265391]: 2025-09-30 18:14:30.935 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:14:30 compute-0 nova_compute[265391]: 2025-09-30 18:14:30.935 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:14:30 compute-0 nova_compute[265391]: 2025-09-30 18:14:30.935 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11909
Sep 30 18:14:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:14:30.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:31 compute-0 ceph-mon[73755]: pgmap v1073: 353 pgs: 353 active+clean; 486 MiB data, 447 MiB used, 40 GiB / 40 GiB avail; 2.1 MiB/s rd, 2.7 MiB/s wr, 141 op/s
Sep 30 18:14:31 compute-0 openstack_network_exporter[279566]: ERROR   18:14:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:14:31 compute-0 openstack_network_exporter[279566]: ERROR   18:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:14:31 compute-0 openstack_network_exporter[279566]: ERROR   18:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:14:31 compute-0 openstack_network_exporter[279566]: ERROR   18:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:14:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:14:31 compute-0 openstack_network_exporter[279566]: ERROR   18:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:14:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:14:31 compute-0 nova_compute[265391]: 2025-09-30 18:14:31.441 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11918
Sep 30 18:14:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:31 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c0046e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:14:31.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:31 compute-0 nova_compute[265391]: 2025-09-30 18:14:31.930 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:14:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1074: 353 pgs: 353 active+clean; 486 MiB data, 447 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 31 KiB/s wr, 75 op/s
Sep 30 18:14:32 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2989069023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:14:32 compute-0 nova_compute[265391]: 2025-09-30 18:14:32.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:32 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:14:32.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:33 compute-0 ceph-mon[73755]: pgmap v1074: 353 pgs: 353 active+clean; 486 MiB data, 447 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 31 KiB/s wr, 75 op/s
Sep 30 18:14:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:33 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:14:33.691Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:14:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:14:33.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1075: 353 pgs: 353 active+clean; 486 MiB data, 447 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 32 KiB/s wr, 75 op/s
Sep 30 18:14:34 compute-0 ceph-mon[73755]: pgmap v1075: 353 pgs: 353 active+clean; 486 MiB data, 447 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 32 KiB/s wr, 75 op/s
Sep 30 18:14:34 compute-0 nova_compute[265391]: 2025-09-30 18:14:34.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:34 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004ca0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:14:34.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:35 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c0046e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:14:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:14:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:14:35.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:14:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1076: 353 pgs: 353 active+clean; 486 MiB data, 447 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 4.7 KiB/s wr, 64 op/s
Sep 30 18:14:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:14:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1819809264' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:14:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:14:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1819809264' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:14:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:36 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:14:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:14:36.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:14:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:14:37.175Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:14:37 compute-0 ceph-mon[73755]: pgmap v1076: 353 pgs: 353 active+clean; 486 MiB data, 447 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 4.7 KiB/s wr, 64 op/s
Sep 30 18:14:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1819809264' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:14:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1819809264' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:14:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:14:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:14:37 compute-0 nova_compute[265391]: 2025-09-30 18:14:37.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:14:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:14:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:14:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:14:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:14:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:14:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:37 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:14:37.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1077: 353 pgs: 353 active+clean; 486 MiB data, 447 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 4.7 KiB/s wr, 64 op/s
Sep 30 18:14:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:14:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:14:38] "GET /metrics HTTP/1.1" 200 46653 "" "Prometheus/2.51.0"
Sep 30 18:14:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:14:38] "GET /metrics HTTP/1.1" 200 46653 "" "Prometheus/2.51.0"
Sep 30 18:14:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:38 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:14:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:14:38.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:14:39 compute-0 ceph-mon[73755]: pgmap v1077: 353 pgs: 353 active+clean; 486 MiB data, 447 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 4.7 KiB/s wr, 64 op/s
Sep 30 18:14:39 compute-0 nova_compute[265391]: 2025-09-30 18:14:39.257 2 DEBUG nova.virt.libvirt.driver [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Check if temp file /var/lib/nova/instances/tmphwjwr5f6 exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10968
Sep 30 18:14:39 compute-0 nova_compute[265391]: 2025-09-30 18:14:39.263 2 DEBUG nova.compute.manager [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmphwjwr5f6',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='23ad643b-d29f-4fe8-a347-92df178ae0cd',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.12/site-packages/nova/compute/manager.py:9294
Sep 30 18:14:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:39 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c0046e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:39 compute-0 nova_compute[265391]: 2025-09-30 18:14:39.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:14:39.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1078: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Sep 30 18:14:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:14:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:40 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:14:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:14:40.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:14:41 compute-0 ceph-mon[73755]: pgmap v1078: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Sep 30 18:14:41 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3615147311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:14:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:41 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:14:41.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1079: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 18:14:42 compute-0 nova_compute[265391]: 2025-09-30 18:14:42.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:42 compute-0 nova_compute[265391]: 2025-09-30 18:14:42.414 2 DEBUG oslo_concurrency.lockutils [None req-9b5e76aa-f784-4d18-9b10-b48d08620a4e 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-28ad2702-2baf-4865-be24-c468842cee03" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:14:42 compute-0 nova_compute[265391]: 2025-09-30 18:14:42.415 2 DEBUG oslo_concurrency.lockutils [None req-9b5e76aa-f784-4d18-9b10-b48d08620a4e 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-28ad2702-2baf-4865-be24-c468842cee03" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:14:42 compute-0 nova_compute[265391]: 2025-09-30 18:14:42.415 2 DEBUG nova.network.neutron [None req-9b5e76aa-f784-4d18-9b10-b48d08620a4e 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:14:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:42 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004ce0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:42 compute-0 nova_compute[265391]: 2025-09-30 18:14:42.926 2 WARNING neutronclient.v2_0.client [None req-9b5e76aa-f784-4d18-9b10-b48d08620a4e 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:14:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:14:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:14:42.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:14:43 compute-0 ceph-mon[73755]: pgmap v1079: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 18:14:43 compute-0 nova_compute[265391]: 2025-09-30 18:14:43.449 2 WARNING neutronclient.v2_0.client [None req-9b5e76aa-f784-4d18-9b10-b48d08620a4e 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:14:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:43 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c0046e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:43 compute-0 nova_compute[265391]: 2025-09-30 18:14:43.597 2 DEBUG nova.network.neutron [None req-9b5e76aa-f784-4d18-9b10-b48d08620a4e 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Updating instance_info_cache with network_info: [{"id": "b4130889-fd6e-44b4-8184-b79693b30d78", "address": "fa:16:3e:f3:96:49", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4130889-fd", "ovs_interfaceid": "b4130889-fd6e-44b4-8184-b79693b30d78", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:14:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:14:43.692Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:14:43 compute-0 nova_compute[265391]: 2025-09-30 18:14:43.754 2 DEBUG nova.compute.manager [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Preparing to wait for external event network-vif-plugged-50f7398a-769c-4636-b498-5162fce10f7d prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:14:43 compute-0 nova_compute[265391]: 2025-09-30 18:14:43.755 2 DEBUG oslo_concurrency.lockutils [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:14:43 compute-0 nova_compute[265391]: 2025-09-30 18:14:43.755 2 DEBUG oslo_concurrency.lockutils [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:14:43 compute-0 nova_compute[265391]: 2025-09-30 18:14:43.755 2 DEBUG oslo_concurrency.lockutils [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:14:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:14:43.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1080: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Sep 30 18:14:44 compute-0 nova_compute[265391]: 2025-09-30 18:14:44.104 2 DEBUG oslo_concurrency.lockutils [None req-9b5e76aa-f784-4d18-9b10-b48d08620a4e 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-28ad2702-2baf-4865-be24-c468842cee03" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:14:44 compute-0 ceph-mon[73755]: pgmap v1080: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Sep 30 18:14:44 compute-0 nova_compute[265391]: 2025-09-30 18:14:44.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:44 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc224001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:14:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:14:44.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:14:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:45 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:45 compute-0 nova_compute[265391]: 2025-09-30 18:14:45.650 2 DEBUG nova.virt.libvirt.driver [None req-9b5e76aa-f784-4d18-9b10-b48d08620a4e 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12417
Sep 30 18:14:45 compute-0 nova_compute[265391]: 2025-09-30 18:14:45.650 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-9b5e76aa-f784-4d18-9b10-b48d08620a4e 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Creating file /var/lib/nova/instances/28ad2702-2baf-4865-be24-c468842cee03/e38c8ff890624e118928668c9422e0db.tmp on remote host 192.168.122.101 create_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Sep 30 18:14:45 compute-0 nova_compute[265391]: 2025-09-30 18:14:45.651 2 DEBUG oslo_concurrency.processutils [None req-9b5e76aa-f784-4d18-9b10-b48d08620a4e 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/28ad2702-2baf-4865-be24-c468842cee03/e38c8ff890624e118928668c9422e0db.tmp execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:14:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:14:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:14:45.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1081: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 18:14:46 compute-0 nova_compute[265391]: 2025-09-30 18:14:46.241 2 DEBUG oslo_concurrency.processutils [None req-9b5e76aa-f784-4d18-9b10-b48d08620a4e 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/28ad2702-2baf-4865-be24-c468842cee03/e38c8ff890624e118928668c9422e0db.tmp" returned: 1 in 0.590s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:14:46 compute-0 nova_compute[265391]: 2025-09-30 18:14:46.242 2 DEBUG oslo_concurrency.processutils [None req-9b5e76aa-f784-4d18-9b10-b48d08620a4e 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] 'ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/28ad2702-2baf-4865-be24-c468842cee03/e38c8ff890624e118928668c9422e0db.tmp' failed. Not Retrying. execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:423
Sep 30 18:14:46 compute-0 nova_compute[265391]: 2025-09-30 18:14:46.243 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-9b5e76aa-f784-4d18-9b10-b48d08620a4e 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Creating directory /var/lib/nova/instances/28ad2702-2baf-4865-be24-c468842cee03 on remote host 192.168.122.101 create_dir /usr/lib/python3.12/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Sep 30 18:14:46 compute-0 nova_compute[265391]: 2025-09-30 18:14:46.243 2 DEBUG oslo_concurrency.processutils [None req-9b5e76aa-f784-4d18-9b10-b48d08620a4e 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/28ad2702-2baf-4865-be24-c468842cee03 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:14:46 compute-0 nova_compute[265391]: 2025-09-30 18:14:46.493 2 DEBUG oslo_concurrency.processutils [None req-9b5e76aa-f784-4d18-9b10-b48d08620a4e 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/28ad2702-2baf-4865-be24-c468842cee03" returned: 0 in 0.250s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:14:46 compute-0 nova_compute[265391]: 2025-09-30 18:14:46.498 2 DEBUG nova.virt.libvirt.driver [None req-9b5e76aa-f784-4d18-9b10-b48d08620a4e 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4247
Sep 30 18:14:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:46 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc23c004d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:14:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:14:46.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:14:47 compute-0 ceph-mon[73755]: pgmap v1081: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 18:14:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:14:47.176Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:14:47 compute-0 nova_compute[265391]: 2025-09-30 18:14:47.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:47 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c0046e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:14:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:14:47.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:14:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1082: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 18:14:48 compute-0 podman[308801]: 2025-09-30 18:14:48.53224276 +0000 UTC m=+0.062503175 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Sep 30 18:14:48 compute-0 podman[308800]: 2025-09-30 18:14:48.55113627 +0000 UTC m=+0.085728098 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Sep 30 18:14:48 compute-0 nova_compute[265391]: 2025-09-30 18:14:48.679 2 DEBUG nova.compute.manager [req-2bb6e73a-2870-4ef0-8496-1e1ec4b9e542 req-7e1e9ae0-19cf-4083-8937-a5663137c0a9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Received event network-vif-unplugged-50f7398a-769c-4636-b498-5162fce10f7d external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:14:48 compute-0 nova_compute[265391]: 2025-09-30 18:14:48.680 2 DEBUG oslo_concurrency.lockutils [req-2bb6e73a-2870-4ef0-8496-1e1ec4b9e542 req-7e1e9ae0-19cf-4083-8937-a5663137c0a9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:14:48 compute-0 nova_compute[265391]: 2025-09-30 18:14:48.681 2 DEBUG oslo_concurrency.lockutils [req-2bb6e73a-2870-4ef0-8496-1e1ec4b9e542 req-7e1e9ae0-19cf-4083-8937-a5663137c0a9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:14:48 compute-0 nova_compute[265391]: 2025-09-30 18:14:48.681 2 DEBUG oslo_concurrency.lockutils [req-2bb6e73a-2870-4ef0-8496-1e1ec4b9e542 req-7e1e9ae0-19cf-4083-8937-a5663137c0a9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:14:48 compute-0 nova_compute[265391]: 2025-09-30 18:14:48.681 2 DEBUG nova.compute.manager [req-2bb6e73a-2870-4ef0-8496-1e1ec4b9e542 req-7e1e9ae0-19cf-4083-8937-a5663137c0a9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] No event matching network-vif-unplugged-50f7398a-769c-4636-b498-5162fce10f7d in dict_keys([('network-vif-plugged', '50f7398a-769c-4636-b498-5162fce10f7d')]) pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:349
Sep 30 18:14:48 compute-0 nova_compute[265391]: 2025-09-30 18:14:48.681 2 DEBUG nova.compute.manager [req-2bb6e73a-2870-4ef0-8496-1e1ec4b9e542 req-7e1e9ae0-19cf-4083-8937-a5663137c0a9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Received event network-vif-unplugged-50f7398a-769c-4636-b498-5162fce10f7d for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:14:48 compute-0 kernel: tapb4130889-fd (unregistering): left promiscuous mode
Sep 30 18:14:48 compute-0 NetworkManager[45059]: <info>  [1759256088.7707] device (tapb4130889-fd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 18:14:48 compute-0 nova_compute[265391]: 2025-09-30 18:14:48.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:48 compute-0 ovn_controller[156242]: 2025-09-30T18:14:48Z|00084|binding|INFO|Releasing lport b4130889-fd6e-44b4-8184-b79693b30d78 from this chassis (sb_readonly=0)
Sep 30 18:14:48 compute-0 ovn_controller[156242]: 2025-09-30T18:14:48Z|00085|binding|INFO|Setting lport b4130889-fd6e-44b4-8184-b79693b30d78 down in Southbound
Sep 30 18:14:48 compute-0 ovn_controller[156242]: 2025-09-30T18:14:48Z|00086|binding|INFO|Removing iface tapb4130889-fd ovn-installed in OVS
Sep 30 18:14:48 compute-0 nova_compute[265391]: 2025-09-30 18:14:48.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:48.790 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:96:49 10.100.0.6'], port_security=['fa:16:3e:f3:96:49 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '28ad2702-2baf-4865-be24-c468842cee03', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5fff1904-159a-4b76-8c46-feabf17f29ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddd1f985d8b64b449c79d55b0cbd6422', 'neutron:revision_number': '5', 'neutron:security_group_ids': '34f3cf7b-94cf-408f-b3dc-ae0b57c009fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12c18a77-b252-4a3e-a181-b42644879446, chassis=[], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=b4130889-fd6e-44b4-8184-b79693b30d78) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:14:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:14:48] "GET /metrics HTTP/1.1" 200 46653 "" "Prometheus/2.51.0"
Sep 30 18:14:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:14:48] "GET /metrics HTTP/1.1" 200 46653 "" "Prometheus/2.51.0"
Sep 30 18:14:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:48.791 166158 INFO neutron.agent.ovn.metadata.agent [-] Port b4130889-fd6e-44b4-8184-b79693b30d78 in datapath 5fff1904-159a-4b76-8c46-feabf17f29ab unbound from our chassis
Sep 30 18:14:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:48.794 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5fff1904-159a-4b76-8c46-feabf17f29ab
Sep 30 18:14:48 compute-0 nova_compute[265391]: 2025-09-30 18:14:48.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:48.818 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[accf42ef-49db-432c-b50e-c794a3621332]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:14:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:48.857 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[dec15f2c-5440-4a5d-9de3-77c0c777a6d0]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:14:48 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000008.scope: Deactivated successfully.
Sep 30 18:14:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:48.860 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[57ec3f24-b77b-49f9-b255-982c40668727]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:14:48 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000008.scope: Consumed 15.023s CPU time.
Sep 30 18:14:48 compute-0 systemd-machined[219917]: Machine qemu-6-instance-00000008 terminated.
Sep 30 18:14:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:48.897 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[1bac8ac0-08a7-4577-bd84-b0afb98c2a24]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:14:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:48 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:48.918 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[72b636ee-1867-431a-af40-b25c082c08e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5fff1904-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:07:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 9, 'rx_bytes': 1084, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 9, 'rx_bytes': 1084, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456734, 'reachable_time': 30287, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308863, 'error': None, 'target': 'ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:14:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:48.940 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[7a7ea6f7-c887-46ce-ade3-eae76d26a759]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5fff1904-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 456747, 'tstamp': 456747}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308865, 'error': None, 'target': 'ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5fff1904-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 456750, 'tstamp': 456750}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308865, 'error': None, 'target': 'ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:14:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:48.941 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5fff1904-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:14:48 compute-0 nova_compute[265391]: 2025-09-30 18:14:48.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:48 compute-0 nova_compute[265391]: 2025-09-30 18:14:48.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:48.949 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5fff1904-10, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:14:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:48.949 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:14:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:48.949 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5fff1904-10, col_values=(('external_ids', {'iface-id': '3a8ea0a0-c179-4516-9404-04b68a17e79e'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:14:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:48.950 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:14:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:48.951 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[a18317bf-8223-4c6e-b889-fa3537eddcf5]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-5fff1904-159a-4b76-8c46-feabf17f29ab\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/5fff1904-159a-4b76-8c46-feabf17f29ab.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 5fff1904-159a-4b76-8c46-feabf17f29ab\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:14:48 compute-0 nova_compute[265391]: 2025-09-30 18:14:48.966 2 DEBUG nova.compute.manager [req-31a36d08-c04e-4f5d-9128-2ff261e5cf94 req-3b487a49-d9f9-4c35-9086-288a159d79ea 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Received event network-vif-unplugged-b4130889-fd6e-44b4-8184-b79693b30d78 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:14:48 compute-0 nova_compute[265391]: 2025-09-30 18:14:48.967 2 DEBUG oslo_concurrency.lockutils [req-31a36d08-c04e-4f5d-9128-2ff261e5cf94 req-3b487a49-d9f9-4c35-9086-288a159d79ea 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "28ad2702-2baf-4865-be24-c468842cee03-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:14:48 compute-0 nova_compute[265391]: 2025-09-30 18:14:48.967 2 DEBUG oslo_concurrency.lockutils [req-31a36d08-c04e-4f5d-9128-2ff261e5cf94 req-3b487a49-d9f9-4c35-9086-288a159d79ea 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "28ad2702-2baf-4865-be24-c468842cee03-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:14:48 compute-0 nova_compute[265391]: 2025-09-30 18:14:48.967 2 DEBUG oslo_concurrency.lockutils [req-31a36d08-c04e-4f5d-9128-2ff261e5cf94 req-3b487a49-d9f9-4c35-9086-288a159d79ea 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "28ad2702-2baf-4865-be24-c468842cee03-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:14:48 compute-0 nova_compute[265391]: 2025-09-30 18:14:48.967 2 DEBUG nova.compute.manager [req-31a36d08-c04e-4f5d-9128-2ff261e5cf94 req-3b487a49-d9f9-4c35-9086-288a159d79ea 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] No waiting events found dispatching network-vif-unplugged-b4130889-fd6e-44b4-8184-b79693b30d78 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:14:48 compute-0 nova_compute[265391]: 2025-09-30 18:14:48.968 2 WARNING nova.compute.manager [req-31a36d08-c04e-4f5d-9128-2ff261e5cf94 req-3b487a49-d9f9-4c35-9086-288a159d79ea 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Received unexpected event network-vif-unplugged-b4130889-fd6e-44b4-8184-b79693b30d78 for instance with vm_state active and task_state resize_migrating.
Sep 30 18:14:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:14:48.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:49 compute-0 ceph-mon[73755]: pgmap v1082: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 18:14:49 compute-0 nova_compute[265391]: 2025-09-30 18:14:49.515 2 INFO nova.virt.libvirt.driver [None req-9b5e76aa-f784-4d18-9b10-b48d08620a4e 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Instance shutdown successfully after 3 seconds.
Sep 30 18:14:49 compute-0 nova_compute[265391]: 2025-09-30 18:14:49.523 2 INFO nova.virt.libvirt.driver [-] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Instance destroyed successfully.
Sep 30 18:14:49 compute-0 nova_compute[265391]: 2025-09-30 18:14:49.524 2 DEBUG nova.virt.libvirt.vif [None req-9b5e76aa-f784-4d18-9b10-b48d08620a4e 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:13:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-1899978059',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-1899978059',id=8,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:14:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ddd1f985d8b64b449c79d55b0cbd6422',ramdisk_id='',reservation_id='r-1d71qtf9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestExecuteActionsViaActuator-837729328',owner_user_name='tempest-TestExecuteActionsViaActuator-837729328-project-admin'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:14:36Z,user_data=None,user_id='dc3bb71c425f484fbc46f90978029403',uuid=28ad2702-2baf-4865-be24-c468842cee03,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b4130889-fd6e-44b4-8184-b79693b30d78", "address": "fa:16:3e:f3:96:49", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "vif_mac": "fa:16:3e:f3:96:49"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4130889-fd", "ovs_interfaceid": "b4130889-fd6e-44b4-8184-b79693b30d78", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Sep 30 18:14:49 compute-0 nova_compute[265391]: 2025-09-30 18:14:49.524 2 DEBUG nova.network.os_vif_util [None req-9b5e76aa-f784-4d18-9b10-b48d08620a4e 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "b4130889-fd6e-44b4-8184-b79693b30d78", "address": "fa:16:3e:f3:96:49", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "vif_mac": "fa:16:3e:f3:96:49"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4130889-fd", "ovs_interfaceid": "b4130889-fd6e-44b4-8184-b79693b30d78", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:14:49 compute-0 nova_compute[265391]: 2025-09-30 18:14:49.525 2 DEBUG nova.network.os_vif_util [None req-9b5e76aa-f784-4d18-9b10-b48d08620a4e 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f3:96:49,bridge_name='br-int',has_traffic_filtering=True,id=b4130889-fd6e-44b4-8184-b79693b30d78,network=Network(5fff1904-159a-4b76-8c46-feabf17f29ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4130889-fd') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:14:49 compute-0 nova_compute[265391]: 2025-09-30 18:14:49.525 2 DEBUG os_vif [None req-9b5e76aa-f784-4d18-9b10-b48d08620a4e 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:96:49,bridge_name='br-int',has_traffic_filtering=True,id=b4130889-fd6e-44b4-8184-b79693b30d78,network=Network(5fff1904-159a-4b76-8c46-feabf17f29ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4130889-fd') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Sep 30 18:14:49 compute-0 nova_compute[265391]: 2025-09-30 18:14:49.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:49 compute-0 nova_compute[265391]: 2025-09-30 18:14:49.528 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4130889-fd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:14:49 compute-0 nova_compute[265391]: 2025-09-30 18:14:49.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:49 compute-0 nova_compute[265391]: 2025-09-30 18:14:49.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:14:49 compute-0 nova_compute[265391]: 2025-09-30 18:14:49.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:49 compute-0 nova_compute[265391]: 2025-09-30 18:14:49.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:49 compute-0 nova_compute[265391]: 2025-09-30 18:14:49.533 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=6fdcc8c0-bc93-429d-b3a4-500306c94a72) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:14:49 compute-0 nova_compute[265391]: 2025-09-30 18:14:49.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:49 compute-0 nova_compute[265391]: 2025-09-30 18:14:49.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:49 compute-0 nova_compute[265391]: 2025-09-30 18:14:49.537 2 INFO os_vif [None req-9b5e76aa-f784-4d18-9b10-b48d08620a4e 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:96:49,bridge_name='br-int',has_traffic_filtering=True,id=b4130889-fd6e-44b4-8184-b79693b30d78,network=Network(5fff1904-159a-4b76-8c46-feabf17f29ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4130889-fd')
Sep 30 18:14:49 compute-0 nova_compute[265391]: 2025-09-30 18:14:49.541 2 DEBUG nova.virt.libvirt.driver [None req-9b5e76aa-f784-4d18-9b10-b48d08620a4e 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:14:49 compute-0 nova_compute[265391]: 2025-09-30 18:14:49.541 2 DEBUG nova.virt.libvirt.driver [None req-9b5e76aa-f784-4d18-9b10-b48d08620a4e 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:14:49 compute-0 nova_compute[265391]: 2025-09-30 18:14:49.542 2 WARNING neutronclient.v2_0.client [None req-9b5e76aa-f784-4d18-9b10-b48d08620a4e 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:14:49 compute-0 nova_compute[265391]: 2025-09-30 18:14:49.542 2 WARNING neutronclient.v2_0.client [None req-9b5e76aa-f784-4d18-9b10-b48d08620a4e 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:14:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:49 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:49 compute-0 nova_compute[265391]: 2025-09-30 18:14:49.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:14:49.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:49 compute-0 nova_compute[265391]: 2025-09-30 18:14:49.774 2 INFO nova.compute.manager [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Took 6.02 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Sep 30 18:14:49 compute-0 nova_compute[265391]: 2025-09-30 18:14:49.815 2 DEBUG neutronclient.v2_0.client [None req-9b5e76aa-f784-4d18-9b10-b48d08620a4e 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port b4130889-fd6e-44b4-8184-b79693b30d78 for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.12/site-packages/neutronclient/v2_0/client.py:265
Sep 30 18:14:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1083: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 336 KiB/s rd, 2.2 MiB/s wr, 69 op/s
Sep 30 18:14:50 compute-0 sudo[308880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:14:50 compute-0 sudo[308880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:14:50 compute-0 sudo[308880]: pam_unix(sudo:session): session closed for user root
Sep 30 18:14:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:14:50 compute-0 nova_compute[265391]: 2025-09-30 18:14:50.757 2 DEBUG nova.compute.manager [req-5b6d4d92-9ab2-40ab-ab17-d2a6104f5487 req-65561c2f-7bcb-41db-8945-6434019e500f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Received event network-vif-plugged-50f7398a-769c-4636-b498-5162fce10f7d external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:14:50 compute-0 nova_compute[265391]: 2025-09-30 18:14:50.758 2 DEBUG oslo_concurrency.lockutils [req-5b6d4d92-9ab2-40ab-ab17-d2a6104f5487 req-65561c2f-7bcb-41db-8945-6434019e500f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:14:50 compute-0 nova_compute[265391]: 2025-09-30 18:14:50.758 2 DEBUG oslo_concurrency.lockutils [req-5b6d4d92-9ab2-40ab-ab17-d2a6104f5487 req-65561c2f-7bcb-41db-8945-6434019e500f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:14:50 compute-0 nova_compute[265391]: 2025-09-30 18:14:50.758 2 DEBUG oslo_concurrency.lockutils [req-5b6d4d92-9ab2-40ab-ab17-d2a6104f5487 req-65561c2f-7bcb-41db-8945-6434019e500f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:14:50 compute-0 nova_compute[265391]: 2025-09-30 18:14:50.758 2 DEBUG nova.compute.manager [req-5b6d4d92-9ab2-40ab-ab17-d2a6104f5487 req-65561c2f-7bcb-41db-8945-6434019e500f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Processing event network-vif-plugged-50f7398a-769c-4636-b498-5162fce10f7d _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:14:50 compute-0 nova_compute[265391]: 2025-09-30 18:14:50.758 2 DEBUG nova.compute.manager [req-5b6d4d92-9ab2-40ab-ab17-d2a6104f5487 req-65561c2f-7bcb-41db-8945-6434019e500f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Received event network-changed-50f7398a-769c-4636-b498-5162fce10f7d external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:14:50 compute-0 nova_compute[265391]: 2025-09-30 18:14:50.758 2 DEBUG nova.compute.manager [req-5b6d4d92-9ab2-40ab-ab17-d2a6104f5487 req-65561c2f-7bcb-41db-8945-6434019e500f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Refreshing instance network info cache due to event network-changed-50f7398a-769c-4636-b498-5162fce10f7d. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:14:50 compute-0 nova_compute[265391]: 2025-09-30 18:14:50.758 2 DEBUG oslo_concurrency.lockutils [req-5b6d4d92-9ab2-40ab-ab17-d2a6104f5487 req-65561c2f-7bcb-41db-8945-6434019e500f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-23ad643b-d29f-4fe8-a347-92df178ae0cd" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:14:50 compute-0 nova_compute[265391]: 2025-09-30 18:14:50.759 2 DEBUG oslo_concurrency.lockutils [req-5b6d4d92-9ab2-40ab-ab17-d2a6104f5487 req-65561c2f-7bcb-41db-8945-6434019e500f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-23ad643b-d29f-4fe8-a347-92df178ae0cd" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:14:50 compute-0 nova_compute[265391]: 2025-09-30 18:14:50.759 2 DEBUG nova.network.neutron [req-5b6d4d92-9ab2-40ab-ab17-d2a6104f5487 req-65561c2f-7bcb-41db-8945-6434019e500f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Refreshing network info cache for port 50f7398a-769c-4636-b498-5162fce10f7d _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:14:50 compute-0 nova_compute[265391]: 2025-09-30 18:14:50.760 2 DEBUG nova.compute.manager [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:14:50 compute-0 nova_compute[265391]: 2025-09-30 18:14:50.872 2 DEBUG oslo_concurrency.lockutils [None req-9b5e76aa-f784-4d18-9b10-b48d08620a4e 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "28ad2702-2baf-4865-be24-c468842cee03-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:14:50 compute-0 nova_compute[265391]: 2025-09-30 18:14:50.873 2 DEBUG oslo_concurrency.lockutils [None req-9b5e76aa-f784-4d18-9b10-b48d08620a4e 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "28ad2702-2baf-4865-be24-c468842cee03-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:14:50 compute-0 nova_compute[265391]: 2025-09-30 18:14:50.873 2 DEBUG oslo_concurrency.lockutils [None req-9b5e76aa-f784-4d18-9b10-b48d08620a4e 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "28ad2702-2baf-4865-be24-c468842cee03-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:14:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:50 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218001270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:14:50.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:51 compute-0 nova_compute[265391]: 2025-09-30 18:14:51.033 2 DEBUG nova.compute.manager [req-510962e9-b680-4da1-a234-ed63023f8ee6 req-00ff36e3-3b94-41b8-947d-d1c33a44b1b8 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Received event network-vif-unplugged-b4130889-fd6e-44b4-8184-b79693b30d78 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:14:51 compute-0 nova_compute[265391]: 2025-09-30 18:14:51.033 2 DEBUG oslo_concurrency.lockutils [req-510962e9-b680-4da1-a234-ed63023f8ee6 req-00ff36e3-3b94-41b8-947d-d1c33a44b1b8 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "28ad2702-2baf-4865-be24-c468842cee03-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:14:51 compute-0 nova_compute[265391]: 2025-09-30 18:14:51.034 2 DEBUG oslo_concurrency.lockutils [req-510962e9-b680-4da1-a234-ed63023f8ee6 req-00ff36e3-3b94-41b8-947d-d1c33a44b1b8 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "28ad2702-2baf-4865-be24-c468842cee03-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:14:51 compute-0 nova_compute[265391]: 2025-09-30 18:14:51.034 2 DEBUG oslo_concurrency.lockutils [req-510962e9-b680-4da1-a234-ed63023f8ee6 req-00ff36e3-3b94-41b8-947d-d1c33a44b1b8 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "28ad2702-2baf-4865-be24-c468842cee03-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:14:51 compute-0 nova_compute[265391]: 2025-09-30 18:14:51.034 2 DEBUG nova.compute.manager [req-510962e9-b680-4da1-a234-ed63023f8ee6 req-00ff36e3-3b94-41b8-947d-d1c33a44b1b8 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] No waiting events found dispatching network-vif-unplugged-b4130889-fd6e-44b4-8184-b79693b30d78 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:14:51 compute-0 nova_compute[265391]: 2025-09-30 18:14:51.034 2 WARNING nova.compute.manager [req-510962e9-b680-4da1-a234-ed63023f8ee6 req-00ff36e3-3b94-41b8-947d-d1c33a44b1b8 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Received unexpected event network-vif-unplugged-b4130889-fd6e-44b4-8184-b79693b30d78 for instance with vm_state active and task_state resize_migrated.
Sep 30 18:14:51 compute-0 ceph-mon[73755]: pgmap v1083: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 336 KiB/s rd, 2.2 MiB/s wr, 69 op/s
Sep 30 18:14:51 compute-0 nova_compute[265391]: 2025-09-30 18:14:51.266 2 WARNING neutronclient.v2_0.client [req-5b6d4d92-9ab2-40ab-ab17-d2a6104f5487 req-65561c2f-7bcb-41db-8945-6434019e500f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:14:51 compute-0 nova_compute[265391]: 2025-09-30 18:14:51.269 2 DEBUG nova.compute.manager [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmphwjwr5f6',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='23ad643b-d29f-4fe8-a347-92df178ae0cd',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(8cf4fc6f-b422-4e3e-850a-f9666bba70e7),old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9659
Sep 30 18:14:51 compute-0 nova_compute[265391]: 2025-09-30 18:14:51.273 2 DEBUG nova.objects.instance [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lazy-loading 'migration_context' on Instance uuid 23ad643b-d29f-4fe8-a347-92df178ae0cd obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:14:51 compute-0 nova_compute[265391]: 2025-09-30 18:14:51.275 2 DEBUG nova.virt.libvirt.driver [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Starting monitoring of live migration _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11543
Sep 30 18:14:51 compute-0 nova_compute[265391]: 2025-09-30 18:14:51.276 2 DEBUG nova.virt.libvirt.driver [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Sep 30 18:14:51 compute-0 nova_compute[265391]: 2025-09-30 18:14:51.277 2 DEBUG nova.virt.libvirt.driver [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Sep 30 18:14:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:51 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c0046e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:51 compute-0 nova_compute[265391]: 2025-09-30 18:14:51.674 2 WARNING neutronclient.v2_0.client [req-5b6d4d92-9ab2-40ab-ab17-d2a6104f5487 req-65561c2f-7bcb-41db-8945-6434019e500f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:14:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:14:51.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:51 compute-0 nova_compute[265391]: 2025-09-30 18:14:51.779 2 DEBUG nova.virt.libvirt.driver [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Sep 30 18:14:51 compute-0 nova_compute[265391]: 2025-09-30 18:14:51.779 2 DEBUG nova.virt.libvirt.driver [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Sep 30 18:14:51 compute-0 nova_compute[265391]: 2025-09-30 18:14:51.796 2 DEBUG nova.virt.libvirt.vif [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:12:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-19459247',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-19459247',id=6,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:13:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ddd1f985d8b64b449c79d55b0cbd6422',ramdisk_id='',reservation_id='r-i5u830kg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteActionsViaActuator-837729328',owner_user_name='tempest-TestExecuteActionsViaActuator-837729328-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:13:15Z,user_data=None,user_id='dc3bb71c425f484fbc46f90978029403',uuid=23ad643b-d29f-4fe8-a347-92df178ae0cd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "50f7398a-769c-4636-b498-5162fce10f7d", "address": "fa:16:3e:d1:86:73", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap50f7398a-76", "ovs_interfaceid": "50f7398a-769c-4636-b498-5162fce10f7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:14:51 compute-0 nova_compute[265391]: 2025-09-30 18:14:51.796 2 DEBUG nova.network.os_vif_util [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "50f7398a-769c-4636-b498-5162fce10f7d", "address": "fa:16:3e:d1:86:73", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap50f7398a-76", "ovs_interfaceid": "50f7398a-769c-4636-b498-5162fce10f7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:14:51 compute-0 nova_compute[265391]: 2025-09-30 18:14:51.797 2 DEBUG nova.network.os_vif_util [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:86:73,bridge_name='br-int',has_traffic_filtering=True,id=50f7398a-769c-4636-b498-5162fce10f7d,network=Network(5fff1904-159a-4b76-8c46-feabf17f29ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50f7398a-76') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:14:51 compute-0 nova_compute[265391]: 2025-09-30 18:14:51.798 2 DEBUG nova.virt.libvirt.migration [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Updating guest XML with vif config: <interface type="ethernet">
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <mac address="fa:16:3e:d1:86:73"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <model type="virtio"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <mtu size="1442"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <target dev="tap50f7398a-76"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]: </interface>
Sep 30 18:14:51 compute-0 nova_compute[265391]:  _update_vif_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:534
Sep 30 18:14:51 compute-0 nova_compute[265391]: 2025-09-30 18:14:51.798 2 DEBUG nova.virt.libvirt.migration [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _remove_cpu_shared_set_xml input xml=<domain type="kvm">
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <name>instance-00000006</name>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <uuid>23ad643b-d29f-4fe8-a347-92df178ae0cd</uuid>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteActionsViaActuator-server-19459247</nova:name>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:13:07</nova:creationTime>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:14:51 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:14:51 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:user uuid="dc3bb71c425f484fbc46f90978029403">tempest-TestExecuteActionsViaActuator-837729328-project-admin</nova:user>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:project uuid="ddd1f985d8b64b449c79d55b0cbd6422">tempest-TestExecuteActionsViaActuator-837729328</nova:project>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:port uuid="50f7398a-769c-4636-b498-5162fce10f7d">
Sep 30 18:14:51 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <system>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <entry name="serial">23ad643b-d29f-4fe8-a347-92df178ae0cd</entry>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <entry name="uuid">23ad643b-d29f-4fe8-a347-92df178ae0cd</entry>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </system>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <os>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   </os>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <features>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   </features>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/23ad643b-d29f-4fe8-a347-92df178ae0cd_disk">
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       </source>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/23ad643b-d29f-4fe8-a347-92df178ae0cd_disk.config">
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       </source>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <interface type="ethernet"><mac address="fa:16:3e:d1:86:73"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap50f7398a-76"/><address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </interface><serial type="pty">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/23ad643b-d29f-4fe8-a347-92df178ae0cd/console.log" append="off"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       </target>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/23ad643b-d29f-4fe8-a347-92df178ae0cd/console.log" append="off"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </console>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </input>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <video>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </video>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]: </domain>
Sep 30 18:14:51 compute-0 nova_compute[265391]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:241
Sep 30 18:14:51 compute-0 nova_compute[265391]: 2025-09-30 18:14:51.800 2 DEBUG nova.virt.libvirt.migration [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _remove_cpu_shared_set_xml output xml=<domain type="kvm">
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <name>instance-00000006</name>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <uuid>23ad643b-d29f-4fe8-a347-92df178ae0cd</uuid>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteActionsViaActuator-server-19459247</nova:name>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:13:07</nova:creationTime>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:14:51 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:14:51 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:user uuid="dc3bb71c425f484fbc46f90978029403">tempest-TestExecuteActionsViaActuator-837729328-project-admin</nova:user>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:project uuid="ddd1f985d8b64b449c79d55b0cbd6422">tempest-TestExecuteActionsViaActuator-837729328</nova:project>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:port uuid="50f7398a-769c-4636-b498-5162fce10f7d">
Sep 30 18:14:51 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <system>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <entry name="serial">23ad643b-d29f-4fe8-a347-92df178ae0cd</entry>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <entry name="uuid">23ad643b-d29f-4fe8-a347-92df178ae0cd</entry>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </system>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <os>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   </os>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <features>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   </features>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/23ad643b-d29f-4fe8-a347-92df178ae0cd_disk">
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       </source>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/23ad643b-d29f-4fe8-a347-92df178ae0cd_disk.config">
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       </source>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:d1:86:73"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target dev="tap50f7398a-76"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/23ad643b-d29f-4fe8-a347-92df178ae0cd/console.log" append="off"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       </target>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/23ad643b-d29f-4fe8-a347-92df178ae0cd/console.log" append="off"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </console>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </input>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <video>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </video>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]: </domain>
Sep 30 18:14:51 compute-0 nova_compute[265391]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:250
Sep 30 18:14:51 compute-0 nova_compute[265391]: 2025-09-30 18:14:51.802 2 DEBUG nova.virt.libvirt.migration [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _update_pci_xml output xml=<domain type="kvm">
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <name>instance-00000006</name>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <uuid>23ad643b-d29f-4fe8-a347-92df178ae0cd</uuid>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteActionsViaActuator-server-19459247</nova:name>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:13:07</nova:creationTime>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:14:51 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:14:51 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:user uuid="dc3bb71c425f484fbc46f90978029403">tempest-TestExecuteActionsViaActuator-837729328-project-admin</nova:user>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:project uuid="ddd1f985d8b64b449c79d55b0cbd6422">tempest-TestExecuteActionsViaActuator-837729328</nova:project>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <nova:port uuid="50f7398a-769c-4636-b498-5162fce10f7d">
Sep 30 18:14:51 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <system>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <entry name="serial">23ad643b-d29f-4fe8-a347-92df178ae0cd</entry>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <entry name="uuid">23ad643b-d29f-4fe8-a347-92df178ae0cd</entry>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </system>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <os>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   </os>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <features>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   </features>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/23ad643b-d29f-4fe8-a347-92df178ae0cd_disk">
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       </source>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/23ad643b-d29f-4fe8-a347-92df178ae0cd_disk.config">
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       </source>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:d1:86:73"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target dev="tap50f7398a-76"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/23ad643b-d29f-4fe8-a347-92df178ae0cd/console.log" append="off"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:14:51 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       </target>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/23ad643b-d29f-4fe8-a347-92df178ae0cd/console.log" append="off"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </console>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </input>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <video>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </video>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:14:51 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:14:51 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:14:51 compute-0 nova_compute[265391]: </domain>
Sep 30 18:14:51 compute-0 nova_compute[265391]:  _update_pci_dev_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:166
Sep 30 18:14:51 compute-0 nova_compute[265391]: 2025-09-30 18:14:51.803 2 DEBUG nova.virt.libvirt.driver [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] About to invoke the migrate API _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11175
Sep 30 18:14:51 compute-0 nova_compute[265391]: 2025-09-30 18:14:51.886 2 DEBUG nova.network.neutron [req-5b6d4d92-9ab2-40ab-ab17-d2a6104f5487 req-65561c2f-7bcb-41db-8945-6434019e500f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Updated VIF entry in instance network info cache for port 50f7398a-769c-4636-b498-5162fce10f7d. _build_network_info_model /usr/lib/python3.12/site-packages/nova/network/neutron.py:3542
Sep 30 18:14:51 compute-0 nova_compute[265391]: 2025-09-30 18:14:51.887 2 DEBUG nova.network.neutron [req-5b6d4d92-9ab2-40ab-ab17-d2a6104f5487 req-65561c2f-7bcb-41db-8945-6434019e500f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Updating instance_info_cache with network_info: [{"id": "50f7398a-769c-4636-b498-5162fce10f7d", "address": "fa:16:3e:d1:86:73", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50f7398a-76", "ovs_interfaceid": "50f7398a-769c-4636-b498-5162fce10f7d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:14:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1084: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 2.2 KiB/s rd, 29 KiB/s wr, 3 op/s
Sep 30 18:14:52 compute-0 nova_compute[265391]: 2025-09-30 18:14:52.283 2 DEBUG nova.virt.libvirt.migration [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Current None elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Sep 30 18:14:52 compute-0 nova_compute[265391]: 2025-09-30 18:14:52.283 2 INFO nova.virt.libvirt.migration [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Increasing downtime to 50 ms after 0 sec elapsed time
Sep 30 18:14:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:14:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:14:52 compute-0 nova_compute[265391]: 2025-09-30 18:14:52.395 2 DEBUG oslo_concurrency.lockutils [req-5b6d4d92-9ab2-40ab-ab17-d2a6104f5487 req-65561c2f-7bcb-41db-8945-6434019e500f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-23ad643b-d29f-4fe8-a347-92df178ae0cd" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:14:52 compute-0 nova_compute[265391]: 2025-09-30 18:14:52.841 2 DEBUG nova.compute.manager [req-04de9f4b-aed9-4064-8434-ddc84bf4162b req-25378f8f-2948-4324-9b75-830fd7b7bf2e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Received event network-changed-b4130889-fd6e-44b4-8184-b79693b30d78 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:14:52 compute-0 nova_compute[265391]: 2025-09-30 18:14:52.841 2 DEBUG nova.compute.manager [req-04de9f4b-aed9-4064-8434-ddc84bf4162b req-25378f8f-2948-4324-9b75-830fd7b7bf2e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Refreshing instance network info cache due to event network-changed-b4130889-fd6e-44b4-8184-b79693b30d78. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:14:52 compute-0 nova_compute[265391]: 2025-09-30 18:14:52.841 2 DEBUG oslo_concurrency.lockutils [req-04de9f4b-aed9-4064-8434-ddc84bf4162b req-25378f8f-2948-4324-9b75-830fd7b7bf2e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-28ad2702-2baf-4865-be24-c468842cee03" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:14:52 compute-0 nova_compute[265391]: 2025-09-30 18:14:52.842 2 DEBUG oslo_concurrency.lockutils [req-04de9f4b-aed9-4064-8434-ddc84bf4162b req-25378f8f-2948-4324-9b75-830fd7b7bf2e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-28ad2702-2baf-4865-be24-c468842cee03" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:14:52 compute-0 nova_compute[265391]: 2025-09-30 18:14:52.842 2 DEBUG nova.network.neutron [req-04de9f4b-aed9-4064-8434-ddc84bf4162b req-25378f8f-2948-4324-9b75-830fd7b7bf2e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Refreshing network info cache for port b4130889-fd6e-44b4-8184-b79693b30d78 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:14:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:52 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:14:52.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:53 compute-0 ceph-mon[73755]: pgmap v1084: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 2.2 KiB/s rd, 29 KiB/s wr, 3 op/s
Sep 30 18:14:53 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:14:53 compute-0 nova_compute[265391]: 2025-09-30 18:14:53.306 2 INFO nova.virt.libvirt.driver [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Migration running for 1 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Sep 30 18:14:53 compute-0 nova_compute[265391]: 2025-09-30 18:14:53.347 2 WARNING neutronclient.v2_0.client [req-04de9f4b-aed9-4064-8434-ddc84bf4162b req-25378f8f-2948-4324-9b75-830fd7b7bf2e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:14:53 compute-0 podman[308911]: 2025-09-30 18:14:53.543867264 +0000 UTC m=+0.071137559 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4)
Sep 30 18:14:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:53 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:14:53.693Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:14:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:14:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:14:53.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:14:53 compute-0 kernel: tap50f7398a-76 (unregistering): left promiscuous mode
Sep 30 18:14:53 compute-0 NetworkManager[45059]: <info>  [1759256093.8004] device (tap50f7398a-76): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 18:14:53 compute-0 ovn_controller[156242]: 2025-09-30T18:14:53Z|00087|binding|INFO|Releasing lport 50f7398a-769c-4636-b498-5162fce10f7d from this chassis (sb_readonly=0)
Sep 30 18:14:53 compute-0 ovn_controller[156242]: 2025-09-30T18:14:53Z|00088|binding|INFO|Setting lport 50f7398a-769c-4636-b498-5162fce10f7d down in Southbound
Sep 30 18:14:53 compute-0 ovn_controller[156242]: 2025-09-30T18:14:53Z|00089|binding|INFO|Removing iface tap50f7398a-76 ovn-installed in OVS
Sep 30 18:14:53 compute-0 nova_compute[265391]: 2025-09-30 18:14:53.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:53.822 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:86:73 10.100.0.9'], port_security=['fa:16:3e:d1:86:73 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '81ab3fff-d6d4-4262-9f24-1b212876e52c'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '23ad643b-d29f-4fe8-a347-92df178ae0cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5fff1904-159a-4b76-8c46-feabf17f29ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddd1f985d8b64b449c79d55b0cbd6422', 'neutron:revision_number': '10', 'neutron:security_group_ids': '34f3cf7b-94cf-408f-b3dc-ae0b57c009fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12c18a77-b252-4a3e-a181-b42644879446, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=50f7398a-769c-4636-b498-5162fce10f7d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:14:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:53.823 166158 INFO neutron.agent.ovn.metadata.agent [-] Port 50f7398a-769c-4636-b498-5162fce10f7d in datapath 5fff1904-159a-4b76-8c46-feabf17f29ab unbound from our chassis
Sep 30 18:14:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:53.825 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5fff1904-159a-4b76-8c46-feabf17f29ab
Sep 30 18:14:53 compute-0 nova_compute[265391]: 2025-09-30 18:14:53.836 2 WARNING neutronclient.v2_0.client [req-04de9f4b-aed9-4064-8434-ddc84bf4162b req-25378f8f-2948-4324-9b75-830fd7b7bf2e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:14:53 compute-0 nova_compute[265391]: 2025-09-30 18:14:53.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:53.848 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[1c14a92e-b689-4985-967e-e6caeed13eb9]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:14:53 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000006.scope: Deactivated successfully.
Sep 30 18:14:53 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000006.scope: Consumed 19.155s CPU time.
Sep 30 18:14:53 compute-0 systemd-machined[219917]: Machine qemu-5-instance-00000006 terminated.
Sep 30 18:14:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:53.886 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[115f0bc1-7644-48c9-ab4c-8ed7be0ee40d]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:14:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:53.889 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[03584887-0aa4-4eaf-b3ce-e07d516f40ec]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:14:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:53.922 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[32af5c26-24da-4f42-ab88-fa1621d8704d]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:14:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:53.942 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[394c5906-c8fa-4ee7-b2fa-0bdac565ae17]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5fff1904-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:07:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 11, 'rx_bytes': 1084, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 11, 'rx_bytes': 1084, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456734, 'reachable_time': 30287, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308944, 'error': None, 'target': 'ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:14:53 compute-0 virtqemud[265263]: Unable to get XATTR trusted.libvirt.security.ref_selinux on 23ad643b-d29f-4fe8-a347-92df178ae0cd_disk: No such file or directory
Sep 30 18:14:53 compute-0 virtqemud[265263]: Unable to get XATTR trusted.libvirt.security.ref_dac on 23ad643b-d29f-4fe8-a347-92df178ae0cd_disk: No such file or directory
Sep 30 18:14:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:53.960 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[88865e8e-ef64-492c-933c-738cba95676a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5fff1904-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 456747, 'tstamp': 456747}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308945, 'error': None, 'target': 'ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5fff1904-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 456750, 'tstamp': 456750}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308945, 'error': None, 'target': 'ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:14:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:53.961 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5fff1904-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:14:53 compute-0 nova_compute[265391]: 2025-09-30 18:14:53.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:53 compute-0 nova_compute[265391]: 2025-09-30 18:14:53.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:53.970 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5fff1904-10, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:14:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:53.971 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:14:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:53.971 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5fff1904-10, col_values=(('external_ids', {'iface-id': '3a8ea0a0-c179-4516-9404-04b68a17e79e'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:14:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:53.972 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:14:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:53.974 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[def48bc8-415a-4397-b99c-b17ca9cd3f38]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-5fff1904-159a-4b76-8c46-feabf17f29ab\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/5fff1904-159a-4b76-8c46-feabf17f29ab.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 5fff1904-159a-4b76-8c46-feabf17f29ab\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:14:53 compute-0 nova_compute[265391]: 2025-09-30 18:14:53.979 2 DEBUG nova.virt.libvirt.guest [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.12/site-packages/nova/virt/libvirt/guest.py:687
Sep 30 18:14:53 compute-0 nova_compute[265391]: 2025-09-30 18:14:53.980 2 INFO nova.virt.libvirt.driver [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Migration operation has completed
Sep 30 18:14:53 compute-0 nova_compute[265391]: 2025-09-30 18:14:53.980 2 INFO nova.compute.manager [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] _post_live_migration() is started..
Sep 30 18:14:53 compute-0 nova_compute[265391]: 2025-09-30 18:14:53.982 2 DEBUG nova.virt.libvirt.driver [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Migrate API has completed _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11182
Sep 30 18:14:53 compute-0 nova_compute[265391]: 2025-09-30 18:14:53.983 2 DEBUG nova.virt.libvirt.driver [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Migration operation thread has finished _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11230
Sep 30 18:14:53 compute-0 nova_compute[265391]: 2025-09-30 18:14:53.983 2 DEBUG nova.virt.libvirt.driver [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Migration operation thread notification thread_finished /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11533
Sep 30 18:14:53 compute-0 nova_compute[265391]: 2025-09-30 18:14:53.994 2 WARNING neutronclient.v2_0.client [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:14:53 compute-0 nova_compute[265391]: 2025-09-30 18:14:53.995 2 WARNING neutronclient.v2_0.client [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.053 2 DEBUG nova.network.neutron [req-04de9f4b-aed9-4064-8434-ddc84bf4162b req-25378f8f-2948-4324-9b75-830fd7b7bf2e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Updated VIF entry in instance network info cache for port b4130889-fd6e-44b4-8184-b79693b30d78. _build_network_info_model /usr/lib/python3.12/site-packages/nova/network/neutron.py:3542
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.053 2 DEBUG nova.network.neutron [req-04de9f4b-aed9-4064-8434-ddc84bf4162b req-25378f8f-2948-4324-9b75-830fd7b7bf2e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Updating instance_info_cache with network_info: [{"id": "b4130889-fd6e-44b4-8184-b79693b30d78", "address": "fa:16:3e:f3:96:49", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4130889-fd", "ovs_interfaceid": "b4130889-fd6e-44b4-8184-b79693b30d78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:14:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1085: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 6.7 KiB/s rd, 29 KiB/s wr, 9 op/s
Sep 30 18:14:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:54.286 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:14:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:54.286 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:14:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:14:54.287 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.354 2 DEBUG nova.compute.manager [req-e3ddec00-8628-4ca1-8ac3-dedd05670d15 req-94f6c3c0-38ac-4905-9c6f-d077a8e7bd34 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Received event network-vif-unplugged-50f7398a-769c-4636-b498-5162fce10f7d external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.354 2 DEBUG oslo_concurrency.lockutils [req-e3ddec00-8628-4ca1-8ac3-dedd05670d15 req-94f6c3c0-38ac-4905-9c6f-d077a8e7bd34 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.354 2 DEBUG oslo_concurrency.lockutils [req-e3ddec00-8628-4ca1-8ac3-dedd05670d15 req-94f6c3c0-38ac-4905-9c6f-d077a8e7bd34 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.355 2 DEBUG oslo_concurrency.lockutils [req-e3ddec00-8628-4ca1-8ac3-dedd05670d15 req-94f6c3c0-38ac-4905-9c6f-d077a8e7bd34 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.355 2 DEBUG nova.compute.manager [req-e3ddec00-8628-4ca1-8ac3-dedd05670d15 req-94f6c3c0-38ac-4905-9c6f-d077a8e7bd34 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] No waiting events found dispatching network-vif-unplugged-50f7398a-769c-4636-b498-5162fce10f7d pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.355 2 DEBUG nova.compute.manager [req-e3ddec00-8628-4ca1-8ac3-dedd05670d15 req-94f6c3c0-38ac-4905-9c6f-d077a8e7bd34 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Received event network-vif-unplugged-50f7398a-769c-4636-b498-5162fce10f7d for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.470 2 DEBUG nova.network.neutron [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Activated binding for port 50f7398a-769c-4636-b498-5162fce10f7d and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.12/site-packages/nova/network/neutron.py:3241
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.471 2 DEBUG nova.compute.manager [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "50f7398a-769c-4636-b498-5162fce10f7d", "address": "fa:16:3e:d1:86:73", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50f7398a-76", "ovs_interfaceid": "50f7398a-769c-4636-b498-5162fce10f7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10059
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.472 2 DEBUG nova.virt.libvirt.vif [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:12:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-19459247',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-19459247',id=6,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:13:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ddd1f985d8b64b449c79d55b0cbd6422',ramdisk_id='',reservation_id='r-i5u830kg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteActionsViaActuator-837729328',owner_user_name='tempest-TestExecuteActionsViaActuator-837729328-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:14:33Z,user_data=None,user_id='dc3bb71c425f484fbc46f90978029403',uuid=23ad643b-d29f-4fe8-a347-92df178ae0cd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "50f7398a-769c-4636-b498-5162fce10f7d", "address": "fa:16:3e:d1:86:73", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50f7398a-76", "ovs_interfaceid": "50f7398a-769c-4636-b498-5162fce10f7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.472 2 DEBUG nova.network.os_vif_util [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "50f7398a-769c-4636-b498-5162fce10f7d", "address": "fa:16:3e:d1:86:73", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50f7398a-76", "ovs_interfaceid": "50f7398a-769c-4636-b498-5162fce10f7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.473 2 DEBUG nova.network.os_vif_util [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:86:73,bridge_name='br-int',has_traffic_filtering=True,id=50f7398a-769c-4636-b498-5162fce10f7d,network=Network(5fff1904-159a-4b76-8c46-feabf17f29ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50f7398a-76') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.473 2 DEBUG os_vif [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:86:73,bridge_name='br-int',has_traffic_filtering=True,id=50f7398a-769c-4636-b498-5162fce10f7d,network=Network(5fff1904-159a-4b76-8c46-feabf17f29ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50f7398a-76') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.475 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50f7398a-76, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.481 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=7dad547f-c386-4e00-ac36-4c93668a6ee8) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.485 2 INFO os_vif [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:86:73,bridge_name='br-int',has_traffic_filtering=True,id=50f7398a-769c-4636-b498-5162fce10f7d,network=Network(5fff1904-159a-4b76-8c46-feabf17f29ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50f7398a-76')
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.485 2 DEBUG oslo_concurrency.lockutils [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.486 2 DEBUG oslo_concurrency.lockutils [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.486 2 DEBUG oslo_concurrency.lockutils [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.486 2 DEBUG nova.compute.manager [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10082
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.487 2 INFO nova.virt.libvirt.driver [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Deleting instance files /var/lib/nova/instances/23ad643b-d29f-4fe8-a347-92df178ae0cd_del
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.487 2 INFO nova.virt.libvirt.driver [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Deletion of /var/lib/nova/instances/23ad643b-d29f-4fe8-a347-92df178ae0cd_del complete
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.560 2 DEBUG oslo_concurrency.lockutils [req-04de9f4b-aed9-4064-8434-ddc84bf4162b req-25378f8f-2948-4324-9b75-830fd7b7bf2e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-28ad2702-2baf-4865-be24-c468842cee03" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:54 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218001270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.921 2 DEBUG nova.compute.manager [req-89ae6744-d6fd-4a48-8ccc-c1d21fc63d53 req-c43c3fe9-f12f-4481-b9da-7b5435c6df81 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Received event network-vif-unplugged-50f7398a-769c-4636-b498-5162fce10f7d external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.922 2 DEBUG oslo_concurrency.lockutils [req-89ae6744-d6fd-4a48-8ccc-c1d21fc63d53 req-c43c3fe9-f12f-4481-b9da-7b5435c6df81 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.922 2 DEBUG oslo_concurrency.lockutils [req-89ae6744-d6fd-4a48-8ccc-c1d21fc63d53 req-c43c3fe9-f12f-4481-b9da-7b5435c6df81 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.922 2 DEBUG oslo_concurrency.lockutils [req-89ae6744-d6fd-4a48-8ccc-c1d21fc63d53 req-c43c3fe9-f12f-4481-b9da-7b5435c6df81 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.922 2 DEBUG nova.compute.manager [req-89ae6744-d6fd-4a48-8ccc-c1d21fc63d53 req-c43c3fe9-f12f-4481-b9da-7b5435c6df81 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] No waiting events found dispatching network-vif-unplugged-50f7398a-769c-4636-b498-5162fce10f7d pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.923 2 DEBUG nova.compute.manager [req-89ae6744-d6fd-4a48-8ccc-c1d21fc63d53 req-c43c3fe9-f12f-4481-b9da-7b5435c6df81 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Received event network-vif-unplugged-50f7398a-769c-4636-b498-5162fce10f7d for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.923 2 DEBUG nova.compute.manager [req-89ae6744-d6fd-4a48-8ccc-c1d21fc63d53 req-c43c3fe9-f12f-4481-b9da-7b5435c6df81 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Received event network-vif-plugged-50f7398a-769c-4636-b498-5162fce10f7d external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.924 2 DEBUG oslo_concurrency.lockutils [req-89ae6744-d6fd-4a48-8ccc-c1d21fc63d53 req-c43c3fe9-f12f-4481-b9da-7b5435c6df81 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.924 2 DEBUG oslo_concurrency.lockutils [req-89ae6744-d6fd-4a48-8ccc-c1d21fc63d53 req-c43c3fe9-f12f-4481-b9da-7b5435c6df81 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.924 2 DEBUG oslo_concurrency.lockutils [req-89ae6744-d6fd-4a48-8ccc-c1d21fc63d53 req-c43c3fe9-f12f-4481-b9da-7b5435c6df81 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.924 2 DEBUG nova.compute.manager [req-89ae6744-d6fd-4a48-8ccc-c1d21fc63d53 req-c43c3fe9-f12f-4481-b9da-7b5435c6df81 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] No waiting events found dispatching network-vif-plugged-50f7398a-769c-4636-b498-5162fce10f7d pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.924 2 WARNING nova.compute.manager [req-89ae6744-d6fd-4a48-8ccc-c1d21fc63d53 req-c43c3fe9-f12f-4481-b9da-7b5435c6df81 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Received unexpected event network-vif-plugged-50f7398a-769c-4636-b498-5162fce10f7d for instance with vm_state active and task_state migrating.
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.925 2 DEBUG nova.compute.manager [req-89ae6744-d6fd-4a48-8ccc-c1d21fc63d53 req-c43c3fe9-f12f-4481-b9da-7b5435c6df81 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Received event network-vif-unplugged-50f7398a-769c-4636-b498-5162fce10f7d external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.925 2 DEBUG oslo_concurrency.lockutils [req-89ae6744-d6fd-4a48-8ccc-c1d21fc63d53 req-c43c3fe9-f12f-4481-b9da-7b5435c6df81 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.925 2 DEBUG oslo_concurrency.lockutils [req-89ae6744-d6fd-4a48-8ccc-c1d21fc63d53 req-c43c3fe9-f12f-4481-b9da-7b5435c6df81 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.925 2 DEBUG oslo_concurrency.lockutils [req-89ae6744-d6fd-4a48-8ccc-c1d21fc63d53 req-c43c3fe9-f12f-4481-b9da-7b5435c6df81 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.925 2 DEBUG nova.compute.manager [req-89ae6744-d6fd-4a48-8ccc-c1d21fc63d53 req-c43c3fe9-f12f-4481-b9da-7b5435c6df81 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] No waiting events found dispatching network-vif-unplugged-50f7398a-769c-4636-b498-5162fce10f7d pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.925 2 DEBUG nova.compute.manager [req-89ae6744-d6fd-4a48-8ccc-c1d21fc63d53 req-c43c3fe9-f12f-4481-b9da-7b5435c6df81 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Received event network-vif-unplugged-50f7398a-769c-4636-b498-5162fce10f7d for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.926 2 DEBUG nova.compute.manager [req-89ae6744-d6fd-4a48-8ccc-c1d21fc63d53 req-c43c3fe9-f12f-4481-b9da-7b5435c6df81 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Received event network-vif-plugged-50f7398a-769c-4636-b498-5162fce10f7d external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.926 2 DEBUG oslo_concurrency.lockutils [req-89ae6744-d6fd-4a48-8ccc-c1d21fc63d53 req-c43c3fe9-f12f-4481-b9da-7b5435c6df81 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.926 2 DEBUG oslo_concurrency.lockutils [req-89ae6744-d6fd-4a48-8ccc-c1d21fc63d53 req-c43c3fe9-f12f-4481-b9da-7b5435c6df81 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.926 2 DEBUG oslo_concurrency.lockutils [req-89ae6744-d6fd-4a48-8ccc-c1d21fc63d53 req-c43c3fe9-f12f-4481-b9da-7b5435c6df81 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.926 2 DEBUG nova.compute.manager [req-89ae6744-d6fd-4a48-8ccc-c1d21fc63d53 req-c43c3fe9-f12f-4481-b9da-7b5435c6df81 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] No waiting events found dispatching network-vif-plugged-50f7398a-769c-4636-b498-5162fce10f7d pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.926 2 WARNING nova.compute.manager [req-89ae6744-d6fd-4a48-8ccc-c1d21fc63d53 req-c43c3fe9-f12f-4481-b9da-7b5435c6df81 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Received unexpected event network-vif-plugged-50f7398a-769c-4636-b498-5162fce10f7d for instance with vm_state active and task_state migrating.
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.926 2 DEBUG nova.compute.manager [req-89ae6744-d6fd-4a48-8ccc-c1d21fc63d53 req-c43c3fe9-f12f-4481-b9da-7b5435c6df81 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Received event network-vif-plugged-50f7398a-769c-4636-b498-5162fce10f7d external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.926 2 DEBUG oslo_concurrency.lockutils [req-89ae6744-d6fd-4a48-8ccc-c1d21fc63d53 req-c43c3fe9-f12f-4481-b9da-7b5435c6df81 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.927 2 DEBUG oslo_concurrency.lockutils [req-89ae6744-d6fd-4a48-8ccc-c1d21fc63d53 req-c43c3fe9-f12f-4481-b9da-7b5435c6df81 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.927 2 DEBUG oslo_concurrency.lockutils [req-89ae6744-d6fd-4a48-8ccc-c1d21fc63d53 req-c43c3fe9-f12f-4481-b9da-7b5435c6df81 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.927 2 DEBUG nova.compute.manager [req-89ae6744-d6fd-4a48-8ccc-c1d21fc63d53 req-c43c3fe9-f12f-4481-b9da-7b5435c6df81 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] No waiting events found dispatching network-vif-plugged-50f7398a-769c-4636-b498-5162fce10f7d pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:14:54 compute-0 nova_compute[265391]: 2025-09-30 18:14:54.927 2 WARNING nova.compute.manager [req-89ae6744-d6fd-4a48-8ccc-c1d21fc63d53 req-c43c3fe9-f12f-4481-b9da-7b5435c6df81 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Received unexpected event network-vif-plugged-50f7398a-769c-4636-b498-5162fce10f7d for instance with vm_state active and task_state migrating.
Sep 30 18:14:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:14:54.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Sep 30 18:14:55 compute-0 ceph-mon[73755]: pgmap v1085: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 6.7 KiB/s rd, 29 KiB/s wr, 9 op/s
Sep 30 18:14:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e141 e141: 2 total, 2 up, 2 in
Sep 30 18:14:55 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e141: 2 total, 2 up, 2 in
Sep 30 18:14:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:55 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c0046e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:14:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:14:55.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1087: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 7.9 KiB/s rd, 35 KiB/s wr, 11 op/s
Sep 30 18:14:56 compute-0 ceph-mon[73755]: osdmap e141: 2 total, 2 up, 2 in
Sep 30 18:14:56 compute-0 ceph-mon[73755]: pgmap v1087: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 7.9 KiB/s rd, 35 KiB/s wr, 11 op/s
Sep 30 18:14:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:56 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:14:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:14:56.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:14:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:14:57.177Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:14:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3833903488' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:14:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:57 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:14:57.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:14:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1088: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 7.9 KiB/s rd, 35 KiB/s wr, 11 op/s
Sep 30 18:14:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/809413651' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:14:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/809413651' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:14:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3447825556' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:14:58 compute-0 ceph-mon[73755]: pgmap v1088: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 7.9 KiB/s rd, 35 KiB/s wr, 11 op/s
Sep 30 18:14:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:14:58] "GET /metrics HTTP/1.1" 200 46651 "" "Prometheus/2.51.0"
Sep 30 18:14:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:14:58] "GET /metrics HTTP/1.1" 200 46651 "" "Prometheus/2.51.0"
Sep 30 18:14:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:58 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218001270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:14:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:14:58.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:14:59 compute-0 nova_compute[265391]: 2025-09-30 18:14:59.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:14:59 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c0046e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:14:59 compute-0 podman[276673]: time="2025-09-30T18:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:14:59 compute-0 nova_compute[265391]: 2025-09-30 18:14:59.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:14:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:14:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10736 "" "Go-http-client/1.1"
Sep 30 18:14:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:14:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:14:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:14:59.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1089: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.8 KiB/s wr, 25 op/s
Sep 30 18:15:00 compute-0 nova_compute[265391]: 2025-09-30 18:15:00.183 2 DEBUG nova.compute.manager [req-7ef3efbc-d7af-4cf9-9d1f-26fa4543b221 req-29b25f26-a82b-4576-a599-34ef0558ca6c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Received event network-vif-plugged-b4130889-fd6e-44b4-8184-b79693b30d78 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:15:00 compute-0 nova_compute[265391]: 2025-09-30 18:15:00.184 2 DEBUG oslo_concurrency.lockutils [req-7ef3efbc-d7af-4cf9-9d1f-26fa4543b221 req-29b25f26-a82b-4576-a599-34ef0558ca6c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "28ad2702-2baf-4865-be24-c468842cee03-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:15:00 compute-0 nova_compute[265391]: 2025-09-30 18:15:00.184 2 DEBUG oslo_concurrency.lockutils [req-7ef3efbc-d7af-4cf9-9d1f-26fa4543b221 req-29b25f26-a82b-4576-a599-34ef0558ca6c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "28ad2702-2baf-4865-be24-c468842cee03-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:15:00 compute-0 nova_compute[265391]: 2025-09-30 18:15:00.184 2 DEBUG oslo_concurrency.lockutils [req-7ef3efbc-d7af-4cf9-9d1f-26fa4543b221 req-29b25f26-a82b-4576-a599-34ef0558ca6c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "28ad2702-2baf-4865-be24-c468842cee03-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:15:00 compute-0 nova_compute[265391]: 2025-09-30 18:15:00.184 2 DEBUG nova.compute.manager [req-7ef3efbc-d7af-4cf9-9d1f-26fa4543b221 req-29b25f26-a82b-4576-a599-34ef0558ca6c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] No waiting events found dispatching network-vif-plugged-b4130889-fd6e-44b4-8184-b79693b30d78 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:15:00 compute-0 nova_compute[265391]: 2025-09-30 18:15:00.184 2 WARNING nova.compute.manager [req-7ef3efbc-d7af-4cf9-9d1f-26fa4543b221 req-29b25f26-a82b-4576-a599-34ef0558ca6c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Received unexpected event network-vif-plugged-b4130889-fd6e-44b4-8184-b79693b30d78 for instance with vm_state active and task_state resize_finish.
Sep 30 18:15:00 compute-0 podman[308965]: 2025-09-30 18:15:00.538399935 +0000 UTC m=+0.067096105 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid)
Sep 30 18:15:00 compute-0 podman[308964]: 2025-09-30 18:15:00.538534898 +0000 UTC m=+0.071422517 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, container_name=multipathd)
Sep 30 18:15:00 compute-0 podman[308966]: 2025-09-30 18:15:00.543497037 +0000 UTC m=+0.067970537 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vendor=Red Hat, Inc., release=1755695350, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container)
Sep 30 18:15:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:15:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:15:00 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:15:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:15:01.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:01 compute-0 ceph-mon[73755]: pgmap v1089: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.8 KiB/s wr, 25 op/s
Sep 30 18:15:01 compute-0 openstack_network_exporter[279566]: ERROR   18:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:15:01 compute-0 openstack_network_exporter[279566]: ERROR   18:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:15:01 compute-0 openstack_network_exporter[279566]: ERROR   18:15:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:15:01 compute-0 openstack_network_exporter[279566]: ERROR   18:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:15:01 compute-0 openstack_network_exporter[279566]: ERROR   18:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:15:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:15:01 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:15:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:15:01.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1090: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.8 KiB/s wr, 25 op/s
Sep 30 18:15:02 compute-0 nova_compute[265391]: 2025-09-30 18:15:02.296 2 DEBUG nova.compute.manager [req-8d956e17-8bf7-4b5c-9958-64a48d45da32 req-631c0d69-65b2-49d8-a6c8-895441989a06 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Received event network-vif-plugged-b4130889-fd6e-44b4-8184-b79693b30d78 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:15:02 compute-0 nova_compute[265391]: 2025-09-30 18:15:02.297 2 DEBUG oslo_concurrency.lockutils [req-8d956e17-8bf7-4b5c-9958-64a48d45da32 req-631c0d69-65b2-49d8-a6c8-895441989a06 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "28ad2702-2baf-4865-be24-c468842cee03-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:15:02 compute-0 nova_compute[265391]: 2025-09-30 18:15:02.297 2 DEBUG oslo_concurrency.lockutils [req-8d956e17-8bf7-4b5c-9958-64a48d45da32 req-631c0d69-65b2-49d8-a6c8-895441989a06 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "28ad2702-2baf-4865-be24-c468842cee03-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:15:02 compute-0 nova_compute[265391]: 2025-09-30 18:15:02.297 2 DEBUG oslo_concurrency.lockutils [req-8d956e17-8bf7-4b5c-9958-64a48d45da32 req-631c0d69-65b2-49d8-a6c8-895441989a06 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "28ad2702-2baf-4865-be24-c468842cee03-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:15:02 compute-0 nova_compute[265391]: 2025-09-30 18:15:02.297 2 DEBUG nova.compute.manager [req-8d956e17-8bf7-4b5c-9958-64a48d45da32 req-631c0d69-65b2-49d8-a6c8-895441989a06 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] No waiting events found dispatching network-vif-plugged-b4130889-fd6e-44b4-8184-b79693b30d78 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:15:02 compute-0 nova_compute[265391]: 2025-09-30 18:15:02.297 2 WARNING nova.compute.manager [req-8d956e17-8bf7-4b5c-9958-64a48d45da32 req-631c0d69-65b2-49d8-a6c8-895441989a06 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Received unexpected event network-vif-plugged-b4130889-fd6e-44b4-8184-b79693b30d78 for instance with vm_state resized and task_state None.
Sep 30 18:15:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:15:02 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218001270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:15:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:15:03.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:03 compute-0 ceph-mon[73755]: pgmap v1090: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.8 KiB/s wr, 25 op/s
Sep 30 18:15:03 compute-0 sudo[309026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:15:03 compute-0 sudo[309026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:15:03 compute-0 sudo[309026]: pam_unix(sudo:session): session closed for user root
Sep 30 18:15:03 compute-0 sudo[309051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:15:03 compute-0 sudo[309051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:15:03 compute-0 nova_compute[265391]: 2025-09-30 18:15:03.523 2 DEBUG oslo_concurrency.lockutils [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:15:03 compute-0 nova_compute[265391]: 2025-09-30 18:15:03.524 2 DEBUG oslo_concurrency.lockutils [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:15:03 compute-0 nova_compute[265391]: 2025-09-30 18:15:03.524 2 DEBUG oslo_concurrency.lockutils [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "23ad643b-d29f-4fe8-a347-92df178ae0cd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:15:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:15:03 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c0046e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:15:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:15:03.693Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:15:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:15:03.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:03 compute-0 sudo[309051]: pam_unix(sudo:session): session closed for user root
Sep 30 18:15:04 compute-0 nova_compute[265391]: 2025-09-30 18:15:04.037 2 DEBUG oslo_concurrency.lockutils [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:15:04 compute-0 nova_compute[265391]: 2025-09-30 18:15:04.037 2 DEBUG oslo_concurrency.lockutils [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:15:04 compute-0 nova_compute[265391]: 2025-09-30 18:15:04.037 2 DEBUG oslo_concurrency.lockutils [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:15:04 compute-0 nova_compute[265391]: 2025-09-30 18:15:04.038 2 DEBUG nova.compute.resource_tracker [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:15:04 compute-0 nova_compute[265391]: 2025-09-30 18:15:04.038 2 DEBUG oslo_concurrency.processutils [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:15:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1091: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 2.3 MiB/s rd, 1.8 KiB/s wr, 105 op/s
Sep 30 18:15:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:15:04 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:15:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:15:04 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:15:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:15:04 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:15:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:15:04 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:15:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:15:04 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:15:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:15:04 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:15:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:15:04 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:15:04 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:15:04 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:15:04 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:15:04 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:15:04 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:15:04 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:15:04 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:15:04 compute-0 sudo[309111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:15:04 compute-0 sudo[309111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:15:04 compute-0 sudo[309111]: pam_unix(sudo:session): session closed for user root
Sep 30 18:15:04 compute-0 sudo[309154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:15:04 compute-0 sudo[309154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:15:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:15:04 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3556645831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:15:04 compute-0 nova_compute[265391]: 2025-09-30 18:15:04.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:15:04 compute-0 nova_compute[265391]: 2025-09-30 18:15:04.507 2 DEBUG oslo_concurrency.processutils [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:15:04 compute-0 podman[309225]: 2025-09-30 18:15:04.691917774 +0000 UTC m=+0.050647167 container create 71498285ad103ef576793d3821396e49e65c55b7752b6bb487f4e23d3d4c56e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_turing, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:15:04 compute-0 systemd[1]: Started libpod-conmon-71498285ad103ef576793d3821396e49e65c55b7752b6bb487f4e23d3d4c56e6.scope.
Sep 30 18:15:04 compute-0 nova_compute[265391]: 2025-09-30 18:15:04.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:15:04 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:15:04 compute-0 podman[309225]: 2025-09-30 18:15:04.667241533 +0000 UTC m=+0.025970926 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:15:04 compute-0 podman[309225]: 2025-09-30 18:15:04.782398135 +0000 UTC m=+0.141127518 container init 71498285ad103ef576793d3821396e49e65c55b7752b6bb487f4e23d3d4c56e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 18:15:04 compute-0 podman[309225]: 2025-09-30 18:15:04.79028057 +0000 UTC m=+0.149009963 container start 71498285ad103ef576793d3821396e49e65c55b7752b6bb487f4e23d3d4c56e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_turing, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True)
Sep 30 18:15:04 compute-0 podman[309225]: 2025-09-30 18:15:04.794844778 +0000 UTC m=+0.153574151 container attach 71498285ad103ef576793d3821396e49e65c55b7752b6bb487f4e23d3d4c56e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_turing, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Sep 30 18:15:04 compute-0 systemd[1]: libpod-71498285ad103ef576793d3821396e49e65c55b7752b6bb487f4e23d3d4c56e6.scope: Deactivated successfully.
Sep 30 18:15:04 compute-0 kind_turing[309241]: 167 167
Sep 30 18:15:04 compute-0 conmon[309241]: conmon 71498285ad103ef57679 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-71498285ad103ef576793d3821396e49e65c55b7752b6bb487f4e23d3d4c56e6.scope/container/memory.events
Sep 30 18:15:04 compute-0 podman[309225]: 2025-09-30 18:15:04.800715071 +0000 UTC m=+0.159444434 container died 71498285ad103ef576793d3821396e49e65c55b7752b6bb487f4e23d3d4c56e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_turing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 18:15:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffd55d96bb47e17a3fce40084c1550494d0cd636788e18125b3750e4241ae7c5-merged.mount: Deactivated successfully.
Sep 30 18:15:04 compute-0 podman[309225]: 2025-09-30 18:15:04.842061575 +0000 UTC m=+0.200790928 container remove 71498285ad103ef576793d3821396e49e65c55b7752b6bb487f4e23d3d4c56e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_turing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 18:15:04 compute-0 systemd[1]: libpod-conmon-71498285ad103ef576793d3821396e49e65c55b7752b6bb487f4e23d3d4c56e6.scope: Deactivated successfully.
Sep 30 18:15:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:15:04 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:15:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:15:05.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:05 compute-0 podman[309264]: 2025-09-30 18:15:05.019337741 +0000 UTC m=+0.053728867 container create 0c6b8744dd9dc4bce17cf22113a0c82c568be2bfdc35062888b1b183e7b446ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_driscoll, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:15:05 compute-0 systemd[1]: Started libpod-conmon-0c6b8744dd9dc4bce17cf22113a0c82c568be2bfdc35062888b1b183e7b446ee.scope.
Sep 30 18:15:05 compute-0 podman[309264]: 2025-09-30 18:15:04.994275679 +0000 UTC m=+0.028666825 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:15:05 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10bc7bca3d07aaf05a59977cf8396e60891d800dd727a885bfe4eaa5029a9433/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10bc7bca3d07aaf05a59977cf8396e60891d800dd727a885bfe4eaa5029a9433/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10bc7bca3d07aaf05a59977cf8396e60891d800dd727a885bfe4eaa5029a9433/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10bc7bca3d07aaf05a59977cf8396e60891d800dd727a885bfe4eaa5029a9433/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10bc7bca3d07aaf05a59977cf8396e60891d800dd727a885bfe4eaa5029a9433/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:15:05 compute-0 podman[309264]: 2025-09-30 18:15:05.114617486 +0000 UTC m=+0.149008642 container init 0c6b8744dd9dc4bce17cf22113a0c82c568be2bfdc35062888b1b183e7b446ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_driscoll, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Sep 30 18:15:05 compute-0 podman[309264]: 2025-09-30 18:15:05.123482206 +0000 UTC m=+0.157873332 container start 0c6b8744dd9dc4bce17cf22113a0c82c568be2bfdc35062888b1b183e7b446ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:15:05 compute-0 podman[309264]: 2025-09-30 18:15:05.127495931 +0000 UTC m=+0.161887057 container attach 0c6b8744dd9dc4bce17cf22113a0c82c568be2bfdc35062888b1b183e7b446ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_driscoll, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:15:05 compute-0 ceph-mon[73755]: pgmap v1091: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 2.3 MiB/s rd, 1.8 KiB/s wr, 105 op/s
Sep 30 18:15:05 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3556645831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:15:05 compute-0 condescending_driscoll[309280]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:15:05 compute-0 condescending_driscoll[309280]: --> All data devices are unavailable
Sep 30 18:15:05 compute-0 systemd[1]: libpod-0c6b8744dd9dc4bce17cf22113a0c82c568be2bfdc35062888b1b183e7b446ee.scope: Deactivated successfully.
Sep 30 18:15:05 compute-0 podman[309264]: 2025-09-30 18:15:05.499757612 +0000 UTC m=+0.534148738 container died 0c6b8744dd9dc4bce17cf22113a0c82c568be2bfdc35062888b1b183e7b446ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_driscoll, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:15:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-10bc7bca3d07aaf05a59977cf8396e60891d800dd727a885bfe4eaa5029a9433-merged.mount: Deactivated successfully.
Sep 30 18:15:05 compute-0 podman[309264]: 2025-09-30 18:15:05.562080681 +0000 UTC m=+0.596471847 container remove 0c6b8744dd9dc4bce17cf22113a0c82c568be2bfdc35062888b1b183e7b446ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_driscoll, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:15:05 compute-0 nova_compute[265391]: 2025-09-30 18:15:05.561 2 DEBUG nova.virt.libvirt.driver [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:15:05 compute-0 nova_compute[265391]: 2025-09-30 18:15:05.562 2 DEBUG nova.virt.libvirt.driver [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:15:05 compute-0 nova_compute[265391]: 2025-09-30 18:15:05.566 2 DEBUG nova.virt.libvirt.driver [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:15:05 compute-0 nova_compute[265391]: 2025-09-30 18:15:05.566 2 DEBUG nova.virt.libvirt.driver [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:15:05 compute-0 systemd[1]: libpod-conmon-0c6b8744dd9dc4bce17cf22113a0c82c568be2bfdc35062888b1b183e7b446ee.scope: Deactivated successfully.
Sep 30 18:15:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:15:05 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:15:05 compute-0 sudo[309154]: pam_unix(sudo:session): session closed for user root
Sep 30 18:15:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:15:05 compute-0 sudo[309310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:15:05 compute-0 sudo[309310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:15:05 compute-0 sudo[309310]: pam_unix(sudo:session): session closed for user root
Sep 30 18:15:05 compute-0 nova_compute[265391]: 2025-09-30 18:15:05.704 2 DEBUG oslo_concurrency.lockutils [None req-fd4cff05-ed85-477d-a9bc-0a283fd85b34 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "28ad2702-2baf-4865-be24-c468842cee03" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:15:05 compute-0 nova_compute[265391]: 2025-09-30 18:15:05.705 2 DEBUG oslo_concurrency.lockutils [None req-fd4cff05-ed85-477d-a9bc-0a283fd85b34 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "28ad2702-2baf-4865-be24-c468842cee03" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:15:05 compute-0 nova_compute[265391]: 2025-09-30 18:15:05.705 2 DEBUG nova.compute.manager [None req-fd4cff05-ed85-477d-a9bc-0a283fd85b34 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Going to confirm migration 3 do_confirm_resize /usr/lib/python3.12/site-packages/nova/compute/manager.py:5283
Sep 30 18:15:05 compute-0 nova_compute[265391]: 2025-09-30 18:15:05.740 2 WARNING nova.virt.libvirt.driver [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:15:05 compute-0 nova_compute[265391]: 2025-09-30 18:15:05.742 2 DEBUG oslo_concurrency.processutils [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:15:05 compute-0 sudo[309335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:15:05 compute-0 sudo[309335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:15:05 compute-0 nova_compute[265391]: 2025-09-30 18:15:05.763 2 DEBUG oslo_concurrency.processutils [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.021s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:15:05 compute-0 nova_compute[265391]: 2025-09-30 18:15:05.764 2 DEBUG nova.compute.resource_tracker [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4119MB free_disk=39.71887969970703GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:15:05 compute-0 nova_compute[265391]: 2025-09-30 18:15:05.764 2 DEBUG oslo_concurrency.lockutils [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:15:05 compute-0 nova_compute[265391]: 2025-09-30 18:15:05.764 2 DEBUG oslo_concurrency.lockutils [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:15:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:15:05.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1092: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 2.1 MiB/s rd, 1.7 KiB/s wr, 96 op/s
Sep 30 18:15:06 compute-0 podman[309405]: 2025-09-30 18:15:06.152390418 +0000 UTC m=+0.037568917 container create f1e9cd15859e091c5051dfa7270048b5d469bdcbacfe8b2459fe70945660825a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_hypatia, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:15:06 compute-0 systemd[1]: Started libpod-conmon-f1e9cd15859e091c5051dfa7270048b5d469bdcbacfe8b2459fe70945660825a.scope.
Sep 30 18:15:06 compute-0 nova_compute[265391]: 2025-09-30 18:15:06.217 2 DEBUG nova.objects.instance [None req-fd4cff05-ed85-477d-a9bc-0a283fd85b34 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lazy-loading 'info_cache' on Instance uuid 28ad2702-2baf-4865-be24-c468842cee03 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:15:06 compute-0 podman[309405]: 2025-09-30 18:15:06.136154646 +0000 UTC m=+0.021333165 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:15:06 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:15:06 compute-0 podman[309405]: 2025-09-30 18:15:06.248911766 +0000 UTC m=+0.134090305 container init f1e9cd15859e091c5051dfa7270048b5d469bdcbacfe8b2459fe70945660825a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_hypatia, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Sep 30 18:15:06 compute-0 podman[309405]: 2025-09-30 18:15:06.260765854 +0000 UTC m=+0.145944353 container start f1e9cd15859e091c5051dfa7270048b5d469bdcbacfe8b2459fe70945660825a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_hypatia, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Sep 30 18:15:06 compute-0 gracious_hypatia[309421]: 167 167
Sep 30 18:15:06 compute-0 podman[309405]: 2025-09-30 18:15:06.265831025 +0000 UTC m=+0.151009564 container attach f1e9cd15859e091c5051dfa7270048b5d469bdcbacfe8b2459fe70945660825a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:15:06 compute-0 systemd[1]: libpod-f1e9cd15859e091c5051dfa7270048b5d469bdcbacfe8b2459fe70945660825a.scope: Deactivated successfully.
Sep 30 18:15:06 compute-0 podman[309405]: 2025-09-30 18:15:06.26679969 +0000 UTC m=+0.151978189 container died f1e9cd15859e091c5051dfa7270048b5d469bdcbacfe8b2459fe70945660825a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_hypatia, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Sep 30 18:15:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e43c0ae95225eaf336fb5593fa08a41f6ff9c42e74ad5045cb90237e9631e5b-merged.mount: Deactivated successfully.
Sep 30 18:15:06 compute-0 podman[309405]: 2025-09-30 18:15:06.314334835 +0000 UTC m=+0.199513334 container remove f1e9cd15859e091c5051dfa7270048b5d469bdcbacfe8b2459fe70945660825a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_hypatia, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:15:06 compute-0 systemd[1]: libpod-conmon-f1e9cd15859e091c5051dfa7270048b5d469bdcbacfe8b2459fe70945660825a.scope: Deactivated successfully.
Sep 30 18:15:06 compute-0 podman[309447]: 2025-09-30 18:15:06.497301039 +0000 UTC m=+0.043625014 container create 16e9f8fc2ec5168ae77a97d95028eb9539866d14a0c63d1cca0a292e2c7803b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_ritchie, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:15:06 compute-0 systemd[1]: Started libpod-conmon-16e9f8fc2ec5168ae77a97d95028eb9539866d14a0c63d1cca0a292e2c7803b7.scope.
Sep 30 18:15:06 compute-0 podman[309447]: 2025-09-30 18:15:06.479541108 +0000 UTC m=+0.025865113 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:15:06 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:15:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95d709a8c66f4bc976ec4cf9295d29a81672d8e2feb385d747af268d35a6a1e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:15:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95d709a8c66f4bc976ec4cf9295d29a81672d8e2feb385d747af268d35a6a1e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:15:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95d709a8c66f4bc976ec4cf9295d29a81672d8e2feb385d747af268d35a6a1e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:15:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95d709a8c66f4bc976ec4cf9295d29a81672d8e2feb385d747af268d35a6a1e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:15:06 compute-0 podman[309447]: 2025-09-30 18:15:06.588662393 +0000 UTC m=+0.134986378 container init 16e9f8fc2ec5168ae77a97d95028eb9539866d14a0c63d1cca0a292e2c7803b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_ritchie, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Sep 30 18:15:06 compute-0 podman[309447]: 2025-09-30 18:15:06.598286503 +0000 UTC m=+0.144610488 container start 16e9f8fc2ec5168ae77a97d95028eb9539866d14a0c63d1cca0a292e2c7803b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_ritchie, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Sep 30 18:15:06 compute-0 podman[309447]: 2025-09-30 18:15:06.602122632 +0000 UTC m=+0.148446607 container attach 16e9f8fc2ec5168ae77a97d95028eb9539866d14a0c63d1cca0a292e2c7803b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_ritchie, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:15:06 compute-0 nova_compute[265391]: 2025-09-30 18:15:06.734 2 WARNING neutronclient.v2_0.client [None req-fd4cff05-ed85-477d-a9bc-0a283fd85b34 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:15:06 compute-0 nova_compute[265391]: 2025-09-30 18:15:06.783 2 DEBUG nova.compute.resource_tracker [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Migration for instance 23ad643b-d29f-4fe8-a347-92df178ae0cd refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Sep 30 18:15:06 compute-0 nova_compute[265391]: 2025-09-30 18:15:06.784 2 DEBUG nova.compute.resource_tracker [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Migration for instance 28ad2702-2baf-4865-be24-c468842cee03 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Sep 30 18:15:06 compute-0 nova_compute[265391]: 2025-09-30 18:15:06.852 2 WARNING neutronclient.v2_0.client [None req-fd4cff05-ed85-477d-a9bc-0a283fd85b34 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:15:06 compute-0 nova_compute[265391]: 2025-09-30 18:15:06.852 2 WARNING neutronclient.v2_0.client [None req-fd4cff05-ed85-477d-a9bc-0a283fd85b34 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:15:06 compute-0 elegant_ritchie[309464]: {
Sep 30 18:15:06 compute-0 elegant_ritchie[309464]:     "0": [
Sep 30 18:15:06 compute-0 elegant_ritchie[309464]:         {
Sep 30 18:15:06 compute-0 elegant_ritchie[309464]:             "devices": [
Sep 30 18:15:06 compute-0 elegant_ritchie[309464]:                 "/dev/loop3"
Sep 30 18:15:06 compute-0 elegant_ritchie[309464]:             ],
Sep 30 18:15:06 compute-0 elegant_ritchie[309464]:             "lv_name": "ceph_lv0",
Sep 30 18:15:06 compute-0 elegant_ritchie[309464]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:15:06 compute-0 elegant_ritchie[309464]:             "lv_size": "21470642176",
Sep 30 18:15:06 compute-0 elegant_ritchie[309464]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:15:06 compute-0 elegant_ritchie[309464]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:15:06 compute-0 elegant_ritchie[309464]:             "name": "ceph_lv0",
Sep 30 18:15:06 compute-0 elegant_ritchie[309464]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:15:06 compute-0 elegant_ritchie[309464]:             "tags": {
Sep 30 18:15:06 compute-0 elegant_ritchie[309464]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:15:06 compute-0 elegant_ritchie[309464]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:15:06 compute-0 elegant_ritchie[309464]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:15:06 compute-0 elegant_ritchie[309464]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:15:06 compute-0 elegant_ritchie[309464]:                 "ceph.cluster_name": "ceph",
Sep 30 18:15:06 compute-0 elegant_ritchie[309464]:                 "ceph.crush_device_class": "",
Sep 30 18:15:06 compute-0 elegant_ritchie[309464]:                 "ceph.encrypted": "0",
Sep 30 18:15:06 compute-0 elegant_ritchie[309464]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:15:06 compute-0 elegant_ritchie[309464]:                 "ceph.osd_id": "0",
Sep 30 18:15:06 compute-0 elegant_ritchie[309464]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:15:06 compute-0 elegant_ritchie[309464]:                 "ceph.type": "block",
Sep 30 18:15:06 compute-0 elegant_ritchie[309464]:                 "ceph.vdo": "0",
Sep 30 18:15:06 compute-0 elegant_ritchie[309464]:                 "ceph.with_tpm": "0"
Sep 30 18:15:06 compute-0 elegant_ritchie[309464]:             },
Sep 30 18:15:06 compute-0 elegant_ritchie[309464]:             "type": "block",
Sep 30 18:15:06 compute-0 elegant_ritchie[309464]:             "vg_name": "ceph_vg0"
Sep 30 18:15:06 compute-0 elegant_ritchie[309464]:         }
Sep 30 18:15:06 compute-0 elegant_ritchie[309464]:     ]
Sep 30 18:15:06 compute-0 elegant_ritchie[309464]: }
Sep 30 18:15:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:15:06 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218001270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:15:06 compute-0 systemd[1]: libpod-16e9f8fc2ec5168ae77a97d95028eb9539866d14a0c63d1cca0a292e2c7803b7.scope: Deactivated successfully.
Sep 30 18:15:06 compute-0 podman[309447]: 2025-09-30 18:15:06.934066997 +0000 UTC m=+0.480391002 container died 16e9f8fc2ec5168ae77a97d95028eb9539866d14a0c63d1cca0a292e2c7803b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_ritchie, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 18:15:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-95d709a8c66f4bc976ec4cf9295d29a81672d8e2feb385d747af268d35a6a1e1-merged.mount: Deactivated successfully.
Sep 30 18:15:06 compute-0 podman[309447]: 2025-09-30 18:15:06.981714624 +0000 UTC m=+0.528038599 container remove 16e9f8fc2ec5168ae77a97d95028eb9539866d14a0c63d1cca0a292e2c7803b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_ritchie, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:15:06 compute-0 systemd[1]: libpod-conmon-16e9f8fc2ec5168ae77a97d95028eb9539866d14a0c63d1cca0a292e2c7803b7.scope: Deactivated successfully.
Sep 30 18:15:06 compute-0 nova_compute[265391]: 2025-09-30 18:15:06.990 2 DEBUG neutronclient.v2_0.client [None req-fd4cff05-ed85-477d-a9bc-0a283fd85b34 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port b4130889-fd6e-44b4-8184-b79693b30d78 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.12/site-packages/neutronclient/v2_0/client.py:265
Sep 30 18:15:06 compute-0 nova_compute[265391]: 2025-09-30 18:15:06.991 2 DEBUG oslo_concurrency.lockutils [None req-fd4cff05-ed85-477d-a9bc-0a283fd85b34 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-28ad2702-2baf-4865-be24-c468842cee03" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:15:06 compute-0 nova_compute[265391]: 2025-09-30 18:15:06.992 2 DEBUG oslo_concurrency.lockutils [None req-fd4cff05-ed85-477d-a9bc-0a283fd85b34 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-28ad2702-2baf-4865-be24-c468842cee03" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:15:06 compute-0 nova_compute[265391]: 2025-09-30 18:15:06.992 2 DEBUG nova.network.neutron [None req-fd4cff05-ed85-477d-a9bc-0a283fd85b34 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:15:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:15:07.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:07 compute-0 sudo[309335]: pam_unix(sudo:session): session closed for user root
Sep 30 18:15:07 compute-0 sudo[309484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:15:07 compute-0 sudo[309484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:15:07 compute-0 sudo[309484]: pam_unix(sudo:session): session closed for user root
Sep 30 18:15:07 compute-0 sudo[309509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:15:07 compute-0 sudo[309509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:15:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:15:07.179Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:15:07 compute-0 ceph-mon[73755]: pgmap v1092: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 2.1 MiB/s rd, 1.7 KiB/s wr, 96 op/s
Sep 30 18:15:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:15:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:15:07 compute-0 nova_compute[265391]: 2025-09-30 18:15:07.293 2 DEBUG nova.compute.resource_tracker [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1596
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:15:07 compute-0 nova_compute[265391]: 2025-09-30 18:15:07.498 2 WARNING neutronclient.v2_0.client [None req-fd4cff05-ed85-477d-a9bc-0a283fd85b34 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:15:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:15:07 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c0046e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:15:07 compute-0 podman[309576]: 2025-09-30 18:15:07.659527215 +0000 UTC m=+0.059700142 container create 394d4372d5cee87e3dee86809af0f41789421413d04ad0751ce55bbcd8e6a227 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_volhard, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:15:07 compute-0 systemd[1]: Started libpod-conmon-394d4372d5cee87e3dee86809af0f41789421413d04ad0751ce55bbcd8e6a227.scope.
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006834029778765384 of space, bias 1.0, pg target 1.3668059557530767 quantized to 32 (current 32)
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.19875870302427234 quantized to 32 (current 32)
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006074184411017777 quantized to 16 (current 32)
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.8981826284430553e-05 quantized to 32 (current 32)
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011389095770658333 quantized to 32 (current 32)
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006453820936706387 quantized to 32 (current 32)
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015185461027544442 quantized to 32 (current 32)
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:15:07
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'volumes', '.nfs', 'images', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta']
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:15:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:15:07 compute-0 podman[309576]: 2025-09-30 18:15:07.630234973 +0000 UTC m=+0.030407970 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:15:07 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:15:07 compute-0 podman[309576]: 2025-09-30 18:15:07.752468219 +0000 UTC m=+0.152641136 container init 394d4372d5cee87e3dee86809af0f41789421413d04ad0751ce55bbcd8e6a227 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_volhard, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:15:07 compute-0 podman[309576]: 2025-09-30 18:15:07.758697481 +0000 UTC m=+0.158870378 container start 394d4372d5cee87e3dee86809af0f41789421413d04ad0751ce55bbcd8e6a227 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_volhard, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default)
Sep 30 18:15:07 compute-0 podman[309576]: 2025-09-30 18:15:07.761393341 +0000 UTC m=+0.161566238 container attach 394d4372d5cee87e3dee86809af0f41789421413d04ad0751ce55bbcd8e6a227 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_volhard, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:15:07 compute-0 silly_volhard[309593]: 167 167
Sep 30 18:15:07 compute-0 systemd[1]: libpod-394d4372d5cee87e3dee86809af0f41789421413d04ad0751ce55bbcd8e6a227.scope: Deactivated successfully.
Sep 30 18:15:07 compute-0 podman[309576]: 2025-09-30 18:15:07.766854323 +0000 UTC m=+0.167027220 container died 394d4372d5cee87e3dee86809af0f41789421413d04ad0751ce55bbcd8e6a227 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:15:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:15:07.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4060dcd723ca48dbf50c37911313dc054309e6b23a39203c10d9a54c20af095-merged.mount: Deactivated successfully.
Sep 30 18:15:07 compute-0 nova_compute[265391]: 2025-09-30 18:15:07.802 2 INFO nova.compute.resource_tracker [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Updating resource usage from migration 1ba9a402-a52e-4c0a-9289-488387639d69
Sep 30 18:15:07 compute-0 nova_compute[265391]: 2025-09-30 18:15:07.803 2 DEBUG nova.compute.resource_tracker [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Starting to track outgoing migration 1ba9a402-a52e-4c0a-9289-488387639d69 with flavor c83dc7f1-0795-47db-adcb-fb90be11684a _update_usage_from_migration /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1549
Sep 30 18:15:07 compute-0 podman[309576]: 2025-09-30 18:15:07.804819489 +0000 UTC m=+0.204992386 container remove 394d4372d5cee87e3dee86809af0f41789421413d04ad0751ce55bbcd8e6a227 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_volhard, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:15:07 compute-0 systemd[1]: libpod-conmon-394d4372d5cee87e3dee86809af0f41789421413d04ad0751ce55bbcd8e6a227.scope: Deactivated successfully.
Sep 30 18:15:07 compute-0 nova_compute[265391]: 2025-09-30 18:15:07.845 2 DEBUG nova.compute.resource_tracker [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Instance 761dbb06-6272-4941-a65f-5ad2f8cfbb70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Sep 30 18:15:07 compute-0 nova_compute[265391]: 2025-09-30 18:15:07.846 2 DEBUG nova.compute.resource_tracker [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Migration 8cf4fc6f-b422-4e3e-850a-f9666bba70e7 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Sep 30 18:15:07 compute-0 nova_compute[265391]: 2025-09-30 18:15:07.847 2 DEBUG nova.compute.resource_tracker [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Migration 1ba9a402-a52e-4c0a-9289-488387639d69 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Sep 30 18:15:07 compute-0 nova_compute[265391]: 2025-09-30 18:15:07.847 2 DEBUG nova.compute.resource_tracker [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:15:07 compute-0 nova_compute[265391]: 2025-09-30 18:15:07.847 2 DEBUG nova.compute.resource_tracker [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=832MB phys_disk=39GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:15:05 up  1:18,  0 user,  load average: 0.76, 0.88, 0.91\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_None': '1', 'num_os_type_None': '1', 'num_proj_ddd1f985d8b64b449c79d55b0cbd6422': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:15:07 compute-0 nova_compute[265391]: 2025-09-30 18:15:07.947 2 DEBUG oslo_concurrency.processutils [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:15:08 compute-0 podman[309621]: 2025-09-30 18:15:08.059802853 +0000 UTC m=+0.063276475 container create 79ec4b11df4adc31d44d2429027c21c2281e19ffb86f2b93cb97a191acad55d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Sep 30 18:15:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1093: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.5 KiB/s wr, 87 op/s
Sep 30 18:15:08 compute-0 systemd[1]: Started libpod-conmon-79ec4b11df4adc31d44d2429027c21c2281e19ffb86f2b93cb97a191acad55d9.scope.
Sep 30 18:15:08 compute-0 podman[309621]: 2025-09-30 18:15:08.038934091 +0000 UTC m=+0.042407753 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:15:08 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:15:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fca5261ce605969071d7c759e78ee3189295d9a43957c6ef58ede7beb7f8e28e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:15:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fca5261ce605969071d7c759e78ee3189295d9a43957c6ef58ede7beb7f8e28e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:15:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fca5261ce605969071d7c759e78ee3189295d9a43957c6ef58ede7beb7f8e28e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:15:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fca5261ce605969071d7c759e78ee3189295d9a43957c6ef58ede7beb7f8e28e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:15:08 compute-0 podman[309621]: 2025-09-30 18:15:08.154297268 +0000 UTC m=+0.157770910 container init 79ec4b11df4adc31d44d2429027c21c2281e19ffb86f2b93cb97a191acad55d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_goldberg, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:15:08 compute-0 podman[309621]: 2025-09-30 18:15:08.160953501 +0000 UTC m=+0.164427113 container start 79ec4b11df4adc31d44d2429027c21c2281e19ffb86f2b93cb97a191acad55d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:15:08 compute-0 podman[309621]: 2025-09-30 18:15:08.16823878 +0000 UTC m=+0.171712392 container attach 79ec4b11df4adc31d44d2429027c21c2281e19ffb86f2b93cb97a191acad55d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_goldberg, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Sep 30 18:15:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:15:08 compute-0 nova_compute[265391]: 2025-09-30 18:15:08.235 2 WARNING neutronclient.v2_0.client [None req-fd4cff05-ed85-477d-a9bc-0a283fd85b34 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:15:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:15:08 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3333474161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:15:08 compute-0 nova_compute[265391]: 2025-09-30 18:15:08.414 2 DEBUG nova.network.neutron [None req-fd4cff05-ed85-477d-a9bc-0a283fd85b34 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 28ad2702-2baf-4865-be24-c468842cee03] Updating instance_info_cache with network_info: [{"id": "b4130889-fd6e-44b4-8184-b79693b30d78", "address": "fa:16:3e:f3:96:49", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4130889-fd", "ovs_interfaceid": "b4130889-fd6e-44b4-8184-b79693b30d78", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:15:08 compute-0 nova_compute[265391]: 2025-09-30 18:15:08.424 2 DEBUG oslo_concurrency.processutils [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:15:08 compute-0 nova_compute[265391]: 2025-09-30 18:15:08.431 2 DEBUG nova.compute.provider_tree [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:15:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:15:08] "GET /metrics HTTP/1.1" 200 46655 "" "Prometheus/2.51.0"
Sep 30 18:15:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:15:08] "GET /metrics HTTP/1.1" 200 46655 "" "Prometheus/2.51.0"
Sep 30 18:15:08 compute-0 lvm[309733]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:15:08 compute-0 lvm[309733]: VG ceph_vg0 finished
Sep 30 18:15:08 compute-0 unruffled_goldberg[309648]: {}
Sep 30 18:15:08 compute-0 systemd[1]: libpod-79ec4b11df4adc31d44d2429027c21c2281e19ffb86f2b93cb97a191acad55d9.scope: Deactivated successfully.
Sep 30 18:15:08 compute-0 podman[309621]: 2025-09-30 18:15:08.8933963 +0000 UTC m=+0.896869932 container died 79ec4b11df4adc31d44d2429027c21c2281e19ffb86f2b93cb97a191acad55d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_goldberg, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:15:08 compute-0 systemd[1]: libpod-79ec4b11df4adc31d44d2429027c21c2281e19ffb86f2b93cb97a191acad55d9.scope: Consumed 1.152s CPU time.
Sep 30 18:15:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-fca5261ce605969071d7c759e78ee3189295d9a43957c6ef58ede7beb7f8e28e-merged.mount: Deactivated successfully.
Sep 30 18:15:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:15:08 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc210001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:15:08 compute-0 nova_compute[265391]: 2025-09-30 18:15:08.922 2 DEBUG oslo_concurrency.lockutils [None req-fd4cff05-ed85-477d-a9bc-0a283fd85b34 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-28ad2702-2baf-4865-be24-c468842cee03" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:15:08 compute-0 nova_compute[265391]: 2025-09-30 18:15:08.923 2 DEBUG nova.objects.instance [None req-fd4cff05-ed85-477d-a9bc-0a283fd85b34 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lazy-loading 'migration_context' on Instance uuid 28ad2702-2baf-4865-be24-c468842cee03 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:15:08 compute-0 podman[309621]: 2025-09-30 18:15:08.932507586 +0000 UTC m=+0.935981198 container remove 79ec4b11df4adc31d44d2429027c21c2281e19ffb86f2b93cb97a191acad55d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_goldberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:15:08 compute-0 nova_compute[265391]: 2025-09-30 18:15:08.938 2 DEBUG nova.scheduler.client.report [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:15:08 compute-0 systemd[1]: libpod-conmon-79ec4b11df4adc31d44d2429027c21c2281e19ffb86f2b93cb97a191acad55d9.scope: Deactivated successfully.
Sep 30 18:15:08 compute-0 sudo[309509]: pam_unix(sudo:session): session closed for user root
Sep 30 18:15:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:15:08 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:15:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:15:09 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:15:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:15:09.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:09 compute-0 sudo[309751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:15:09 compute-0 sudo[309751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:15:09 compute-0 sudo[309751]: pam_unix(sudo:session): session closed for user root
Sep 30 18:15:09 compute-0 ceph-mon[73755]: pgmap v1093: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.5 KiB/s wr, 87 op/s
Sep 30 18:15:09 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3333474161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:15:09 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:15:09 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:15:09 compute-0 nova_compute[265391]: 2025-09-30 18:15:09.429 2 DEBUG nova.objects.base [None req-fd4cff05-ed85-477d-a9bc-0a283fd85b34 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Object Instance<28ad2702-2baf-4865-be24-c468842cee03> lazy-loaded attributes: info_cache,migration_context wrapper /usr/lib/python3.12/site-packages/nova/objects/base.py:136
Sep 30 18:15:09 compute-0 nova_compute[265391]: 2025-09-30 18:15:09.480 2 DEBUG nova.compute.resource_tracker [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:15:09 compute-0 nova_compute[265391]: 2025-09-30 18:15:09.480 2 DEBUG oslo_concurrency.lockutils [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.716s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:15:09 compute-0 nova_compute[265391]: 2025-09-30 18:15:09.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:15:09 compute-0 nova_compute[265391]: 2025-09-30 18:15:09.549 2 DEBUG nova.storage.rbd_utils [None req-fd4cff05-ed85-477d-a9bc-0a283fd85b34 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] removing snapshot(nova-resize) on rbd image(28ad2702-2baf-4865-be24-c468842cee03_disk) remove_snap /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:489
Sep 30 18:15:09 compute-0 nova_compute[265391]: 2025-09-30 18:15:09.563 2 INFO nova.compute.manager [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Sep 30 18:15:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:15:09 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:15:09 compute-0 nova_compute[265391]: 2025-09-30 18:15:09.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:15:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:15:09.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1094: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 3.8 KiB/s wr, 88 op/s
Sep 30 18:15:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Sep 30 18:15:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e142 e142: 2 total, 2 up, 2 in
Sep 30 18:15:10 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e142: 2 total, 2 up, 2 in
Sep 30 18:15:10 compute-0 ceph-mon[73755]: pgmap v1094: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 3.8 KiB/s wr, 88 op/s
Sep 30 18:15:10 compute-0 ceph-mon[73755]: osdmap e142: 2 total, 2 up, 2 in
Sep 30 18:15:10 compute-0 nova_compute[265391]: 2025-09-30 18:15:10.332 2 DEBUG nova.virt.libvirt.vif [None req-fd4cff05-ed85-477d-a9bc-0a283fd85b34 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='True',created_at=2025-09-30T18:13:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-1899978059',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-1899978059',id=8,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:15:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ddd1f985d8b64b449c79d55b0cbd6422',ramdisk_id='',reservation_id='r-1d71qtf9',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestExecuteActionsViaActuator-837729328',owner_user_name='tempest-TestExecuteActionsViaActuator-837729328-project-admin'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:15:01Z,user_data=None,user_id='dc3bb71c425f484fbc46f90978029403',uuid=28ad2702-2baf-4865-be24-c468842cee03,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "b4130889-fd6e-44b4-8184-b79693b30d78", "address": "fa:16:3e:f3:96:49", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4130889-fd", "ovs_interfaceid": "b4130889-fd6e-44b4-8184-b79693b30d78", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Sep 30 18:15:10 compute-0 nova_compute[265391]: 2025-09-30 18:15:10.332 2 DEBUG nova.network.os_vif_util [None req-fd4cff05-ed85-477d-a9bc-0a283fd85b34 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "b4130889-fd6e-44b4-8184-b79693b30d78", "address": "fa:16:3e:f3:96:49", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4130889-fd", "ovs_interfaceid": "b4130889-fd6e-44b4-8184-b79693b30d78", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:15:10 compute-0 nova_compute[265391]: 2025-09-30 18:15:10.333 2 DEBUG nova.network.os_vif_util [None req-fd4cff05-ed85-477d-a9bc-0a283fd85b34 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f3:96:49,bridge_name='br-int',has_traffic_filtering=True,id=b4130889-fd6e-44b4-8184-b79693b30d78,network=Network(5fff1904-159a-4b76-8c46-feabf17f29ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4130889-fd') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:15:10 compute-0 nova_compute[265391]: 2025-09-30 18:15:10.333 2 DEBUG os_vif [None req-fd4cff05-ed85-477d-a9bc-0a283fd85b34 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:96:49,bridge_name='br-int',has_traffic_filtering=True,id=b4130889-fd6e-44b4-8184-b79693b30d78,network=Network(5fff1904-159a-4b76-8c46-feabf17f29ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4130889-fd') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Sep 30 18:15:10 compute-0 nova_compute[265391]: 2025-09-30 18:15:10.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:15:10 compute-0 nova_compute[265391]: 2025-09-30 18:15:10.335 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4130889-fd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:15:10 compute-0 nova_compute[265391]: 2025-09-30 18:15:10.335 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:15:10 compute-0 nova_compute[265391]: 2025-09-30 18:15:10.337 2 INFO os_vif [None req-fd4cff05-ed85-477d-a9bc-0a283fd85b34 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:96:49,bridge_name='br-int',has_traffic_filtering=True,id=b4130889-fd6e-44b4-8184-b79693b30d78,network=Network(5fff1904-159a-4b76-8c46-feabf17f29ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4130889-fd')
Sep 30 18:15:10 compute-0 nova_compute[265391]: 2025-09-30 18:15:10.337 2 DEBUG oslo_concurrency.lockutils [None req-fd4cff05-ed85-477d-a9bc-0a283fd85b34 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:15:10 compute-0 nova_compute[265391]: 2025-09-30 18:15:10.337 2 DEBUG oslo_concurrency.lockutils [None req-fd4cff05-ed85-477d-a9bc-0a283fd85b34 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:15:10 compute-0 sudo[309814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:15:10 compute-0 sudo[309814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:15:10 compute-0 sudo[309814]: pam_unix(sudo:session): session closed for user root
Sep 30 18:15:10 compute-0 nova_compute[265391]: 2025-09-30 18:15:10.647 2 INFO nova.scheduler.client.report [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Deleted allocation for migration 8cf4fc6f-b422-4e3e-850a-f9666bba70e7
Sep 30 18:15:10 compute-0 nova_compute[265391]: 2025-09-30 18:15:10.649 2 DEBUG nova.virt.libvirt.driver [None req-130589fd-3f3b-4e0f-9ec5-747583c89756 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 23ad643b-d29f-4fe8-a347-92df178ae0cd] Live migration monitoring is all done _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11566
Sep 30 18:15:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:15:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:15:10 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc218001270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:15:10 compute-0 nova_compute[265391]: 2025-09-30 18:15:10.922 2 DEBUG oslo_concurrency.processutils [None req-fd4cff05-ed85-477d-a9bc-0a283fd85b34 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:15:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:15:11.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:15:11 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3685291908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:15:11 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3685291908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:15:11 compute-0 nova_compute[265391]: 2025-09-30 18:15:11.342 2 DEBUG oslo_concurrency.processutils [None req-fd4cff05-ed85-477d-a9bc-0a283fd85b34 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:15:11 compute-0 nova_compute[265391]: 2025-09-30 18:15:11.349 2 DEBUG nova.compute.provider_tree [None req-fd4cff05-ed85-477d-a9bc-0a283fd85b34 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:15:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:15:11 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc20c0046e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:15:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:15:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:15:11.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:15:11 compute-0 nova_compute[265391]: 2025-09-30 18:15:11.860 2 DEBUG nova.scheduler.client.report [None req-fd4cff05-ed85-477d-a9bc-0a283fd85b34 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:15:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1096: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 2.3 MiB/s rd, 3.0 KiB/s wr, 87 op/s
Sep 30 18:15:12 compute-0 ceph-mon[73755]: pgmap v1096: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 2.3 MiB/s rd, 3.0 KiB/s wr, 87 op/s
Sep 30 18:15:12 compute-0 nova_compute[265391]: 2025-09-30 18:15:12.880 2 DEBUG oslo_concurrency.lockutils [None req-fd4cff05-ed85-477d-a9bc-0a283fd85b34 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 2.543s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:15:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[295034]: 30/09/2025 18:15:12 : epoch 68dc1c3e : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc220002300 fd 38 proxy ignored for local
Sep 30 18:15:12 compute-0 kernel: ganesha.nfsd[308864]: segfault at 50 ip 00007fc2efbab32e sp 00007fc2b3ffe210 error 4 in libntirpc.so.5.8[7fc2efb90000+2c000] likely on CPU 6 (core 0, socket 6)
Sep 30 18:15:12 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Sep 30 18:15:12 compute-0 systemd[1]: Started Process Core Dump (PID 309863/UID 0).
Sep 30 18:15:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:15:13.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:13 compute-0 nova_compute[265391]: 2025-09-30 18:15:13.464 2 INFO nova.scheduler.client.report [None req-fd4cff05-ed85-477d-a9bc-0a283fd85b34 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Deleted allocation for migration 1ba9a402-a52e-4c0a-9289-488387639d69
Sep 30 18:15:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:15:13.703Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:15:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:15:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:15:13.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:15:13 compute-0 nova_compute[265391]: 2025-09-30 18:15:13.974 2 DEBUG oslo_concurrency.lockutils [None req-fd4cff05-ed85-477d-a9bc-0a283fd85b34 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "28ad2702-2baf-4865-be24-c468842cee03" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 8.269s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:15:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1097: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 421 KiB/s rd, 17 KiB/s wr, 51 op/s
Sep 30 18:15:14 compute-0 nova_compute[265391]: 2025-09-30 18:15:14.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:15:14 compute-0 nova_compute[265391]: 2025-09-30 18:15:14.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:15:14 compute-0 ceph-mon[73755]: pgmap v1097: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 421 KiB/s rd, 17 KiB/s wr, 51 op/s
Sep 30 18:15:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:15:15.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:15 compute-0 systemd-coredump[309864]: Process 295038 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 84:
                                                    #0  0x00007fc2efbab32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Sep 30 18:15:15 compute-0 systemd[1]: systemd-coredump@14-309863-0.service: Deactivated successfully.
Sep 30 18:15:15 compute-0 systemd[1]: systemd-coredump@14-309863-0.service: Consumed 1.136s CPU time.
Sep 30 18:15:15 compute-0 podman[309872]: 2025-09-30 18:15:15.576608923 +0000 UTC m=+0.032353371 container died a75bc0871d2876441e946a87cea944e2694054e1fcd6c747829e72a413847002 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid)
Sep 30 18:15:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-98432b906fea09105c2677e80942597164be1da303b7b5bf2e1eda23e3b30e01-merged.mount: Deactivated successfully.
Sep 30 18:15:15 compute-0 podman[309872]: 2025-09-30 18:15:15.614677542 +0000 UTC m=+0.070421970 container remove a75bc0871d2876441e946a87cea944e2694054e1fcd6c747829e72a413847002 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:15:15 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Main process exited, code=exited, status=139/n/a
Sep 30 18:15:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:15:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Sep 30 18:15:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 e143: 2 total, 2 up, 2 in
Sep 30 18:15:15 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e143: 2 total, 2 up, 2 in
Sep 30 18:15:15 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Failed with result 'exit-code'.
Sep 30 18:15:15 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Consumed 2.222s CPU time.
Sep 30 18:15:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:15:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:15:15.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:15:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1099: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 526 KiB/s rd, 22 KiB/s wr, 64 op/s
Sep 30 18:15:16 compute-0 ceph-mon[73755]: osdmap e143: 2 total, 2 up, 2 in
Sep 30 18:15:16 compute-0 ceph-mon[73755]: pgmap v1099: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 526 KiB/s rd, 22 KiB/s wr, 64 op/s
Sep 30 18:15:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:15:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:15:17.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:15:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:15:17.180Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:15:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:15:17.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1100: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 526 KiB/s rd, 18 KiB/s wr, 63 op/s
Sep 30 18:15:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:15:18] "GET /metrics HTTP/1.1" 200 46655 "" "Prometheus/2.51.0"
Sep 30 18:15:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:15:18] "GET /metrics HTTP/1.1" 200 46655 "" "Prometheus/2.51.0"
Sep 30 18:15:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:15:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:15:19.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:15:19 compute-0 ceph-mon[73755]: pgmap v1100: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 526 KiB/s rd, 18 KiB/s wr, 63 op/s
Sep 30 18:15:19 compute-0 nova_compute[265391]: 2025-09-30 18:15:19.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:15:19 compute-0 podman[309920]: 2025-09-30 18:15:19.555249771 +0000 UTC m=+0.082263998 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Sep 30 18:15:19 compute-0 podman[309919]: 2025-09-30 18:15:19.571564975 +0000 UTC m=+0.106377064 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Sep 30 18:15:19 compute-0 nova_compute[265391]: 2025-09-30 18:15:19.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:15:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:15:19.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1101: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 659 KiB/s rd, 15 KiB/s wr, 66 op/s
Sep 30 18:15:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:15:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/181520 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 18:15:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:15:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:15:21.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:15:21 compute-0 ceph-mon[73755]: pgmap v1101: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 659 KiB/s rd, 15 KiB/s wr, 66 op/s
Sep 30 18:15:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:15:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:15:21.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:15:21 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:15:21.834 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:15:21 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:15:21.835 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:15:21 compute-0 nova_compute[265391]: 2025-09-30 18:15:21.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:15:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1102: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 648 KiB/s rd, 15 KiB/s wr, 65 op/s
Sep 30 18:15:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:15:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:15:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:15:23.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:23 compute-0 ceph-mon[73755]: pgmap v1102: 353 pgs: 353 active+clean; 519 MiB data, 472 MiB used, 40 GiB / 40 GiB avail; 648 KiB/s rd, 15 KiB/s wr, 65 op/s
Sep 30 18:15:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:15:23 compute-0 nova_compute[265391]: 2025-09-30 18:15:23.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:15:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:15:23.704Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:15:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:15:23.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1103: 353 pgs: 353 active+clean; 442 MiB data, 434 MiB used, 40 GiB / 40 GiB avail; 250 KiB/s rd, 13 KiB/s wr, 48 op/s
Sep 30 18:15:24 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2844608554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:15:24 compute-0 nova_compute[265391]: 2025-09-30 18:15:24.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:15:24 compute-0 podman[309975]: 2025-09-30 18:15:24.568119129 +0000 UTC m=+0.093663824 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true)
Sep 30 18:15:24 compute-0 nova_compute[265391]: 2025-09-30 18:15:24.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:15:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:15:25.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:25 compute-0 ceph-mon[73755]: pgmap v1103: 353 pgs: 353 active+clean; 442 MiB data, 434 MiB used, 40 GiB / 40 GiB avail; 250 KiB/s rd, 13 KiB/s wr, 48 op/s
Sep 30 18:15:25 compute-0 nova_compute[265391]: 2025-09-30 18:15:25.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:15:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:15:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:15:25.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:25 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Scheduled restart job, restart counter is at 15.
Sep 30 18:15:25 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 18:15:25 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Consumed 2.222s CPU time.
Sep 30 18:15:25 compute-0 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 18:15:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1104: 353 pgs: 353 active+clean; 442 MiB data, 434 MiB used, 40 GiB / 40 GiB avail; 240 KiB/s rd, 12 KiB/s wr, 46 op/s
Sep 30 18:15:26 compute-0 podman[310046]: 2025-09-30 18:15:26.215250542 +0000 UTC m=+0.048258965 container create cc4f545686f6a742167a6aa20553cb06d2dd8d75a807947df3eaf52b21deff1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:15:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fbadb344fc5678ed011abf7011f737b84ba3cc45182d1e6e258b4b84c49bd95/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Sep 30 18:15:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fbadb344fc5678ed011abf7011f737b84ba3cc45182d1e6e258b4b84c49bd95/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:15:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fbadb344fc5678ed011abf7011f737b84ba3cc45182d1e6e258b4b84c49bd95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:15:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fbadb344fc5678ed011abf7011f737b84ba3cc45182d1e6e258b4b84c49bd95/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-0.syzvbh-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:15:26 compute-0 podman[310046]: 2025-09-30 18:15:26.278511695 +0000 UTC m=+0.111520138 container init cc4f545686f6a742167a6aa20553cb06d2dd8d75a807947df3eaf52b21deff1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:15:26 compute-0 podman[310046]: 2025-09-30 18:15:26.285133117 +0000 UTC m=+0.118141570 container start cc4f545686f6a742167a6aa20553cb06d2dd8d75a807947df3eaf52b21deff1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Sep 30 18:15:26 compute-0 podman[310046]: 2025-09-30 18:15:26.195532839 +0000 UTC m=+0.028541292 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:15:26 compute-0 bash[310046]: cc4f545686f6a742167a6aa20553cb06d2dd8d75a807947df3eaf52b21deff1c
Sep 30 18:15:26 compute-0 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 18:15:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:26 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Sep 30 18:15:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:26 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Sep 30 18:15:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:26 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Sep 30 18:15:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:26 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Sep 30 18:15:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:26 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Sep 30 18:15:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:26 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Sep 30 18:15:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:26 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Sep 30 18:15:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:26 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:15:26 compute-0 nova_compute[265391]: 2025-09-30 18:15:26.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:15:26 compute-0 nova_compute[265391]: 2025-09-30 18:15:26.945 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:15:26 compute-0 nova_compute[265391]: 2025-09-30 18:15:26.945 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:15:26 compute-0 nova_compute[265391]: 2025-09-30 18:15:26.946 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:15:26 compute-0 nova_compute[265391]: 2025-09-30 18:15:26.946 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:15:26 compute-0 nova_compute[265391]: 2025-09-30 18:15:26.946 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:15:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:15:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:15:27.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:15:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:15:27.182Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:15:27 compute-0 ceph-mon[73755]: pgmap v1104: 353 pgs: 353 active+clean; 442 MiB data, 434 MiB used, 40 GiB / 40 GiB avail; 240 KiB/s rd, 12 KiB/s wr, 46 op/s
Sep 30 18:15:27 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:15:27 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4047398325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:15:27 compute-0 nova_compute[265391]: 2025-09-30 18:15:27.426 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:15:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:15:27.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1105: 353 pgs: 353 active+clean; 442 MiB data, 434 MiB used, 40 GiB / 40 GiB avail; 208 KiB/s rd, 11 KiB/s wr, 40 op/s
Sep 30 18:15:28 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/4047398325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:15:28 compute-0 nova_compute[265391]: 2025-09-30 18:15:28.477 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:15:28 compute-0 nova_compute[265391]: 2025-09-30 18:15:28.477 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:15:28 compute-0 nova_compute[265391]: 2025-09-30 18:15:28.642 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:15:28 compute-0 nova_compute[265391]: 2025-09-30 18:15:28.643 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:15:28 compute-0 nova_compute[265391]: 2025-09-30 18:15:28.667 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.024s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:15:28 compute-0 nova_compute[265391]: 2025-09-30 18:15:28.668 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4171MB free_disk=39.76438522338867GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:15:28 compute-0 nova_compute[265391]: 2025-09-30 18:15:28.668 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:15:28 compute-0 nova_compute[265391]: 2025-09-30 18:15:28.668 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:15:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:15:28] "GET /metrics HTTP/1.1" 200 46651 "" "Prometheus/2.51.0"
Sep 30 18:15:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:15:28] "GET /metrics HTTP/1.1" 200 46651 "" "Prometheus/2.51.0"
Sep 30 18:15:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:15:29.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:29 compute-0 ceph-mon[73755]: pgmap v1105: 353 pgs: 353 active+clean; 442 MiB data, 434 MiB used, 40 GiB / 40 GiB avail; 208 KiB/s rd, 11 KiB/s wr, 40 op/s
Sep 30 18:15:29 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2554084427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:15:29 compute-0 nova_compute[265391]: 2025-09-30 18:15:29.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:15:29 compute-0 podman[276673]: time="2025-09-30T18:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:15:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:15:29 compute-0 nova_compute[265391]: 2025-09-30 18:15:29.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:15:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10724 "" "Go-http-client/1.1"
Sep 30 18:15:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:15:29.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:30 compute-0 nova_compute[265391]: 2025-09-30 18:15:30.032 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Instance 761dbb06-6272-4941-a65f-5ad2f8cfbb70 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 192, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Sep 30 18:15:30 compute-0 nova_compute[265391]: 2025-09-30 18:15:30.034 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:15:30 compute-0 nova_compute[265391]: 2025-09-30 18:15:30.034 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=704MB phys_disk=39GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:15:28 up  1:18,  0 user,  load average: 0.72, 0.86, 0.90\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_None': '1', 'num_os_type_None': '1', 'num_proj_ddd1f985d8b64b449c79d55b0cbd6422': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:15:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1106: 353 pgs: 353 active+clean; 361 MiB data, 429 MiB used, 40 GiB / 40 GiB avail; 221 KiB/s rd, 14 KiB/s wr, 59 op/s
Sep 30 18:15:30 compute-0 nova_compute[265391]: 2025-09-30 18:15:30.136 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:15:30 compute-0 ceph-mon[73755]: pgmap v1106: 353 pgs: 353 active+clean; 361 MiB data, 429 MiB used, 40 GiB / 40 GiB avail; 221 KiB/s rd, 14 KiB/s wr, 59 op/s
Sep 30 18:15:30 compute-0 sudo[310152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:15:30 compute-0 sudo[310152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:15:30 compute-0 sudo[310152]: pam_unix(sudo:session): session closed for user root
Sep 30 18:15:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:15:30 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2181834735' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:15:30 compute-0 nova_compute[265391]: 2025-09-30 18:15:30.642 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:15:30 compute-0 nova_compute[265391]: 2025-09-30 18:15:30.648 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:15:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:15:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:15:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:15:31.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:15:31 compute-0 nova_compute[265391]: 2025-09-30 18:15:31.157 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:15:31 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2181834735' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:15:31 compute-0 openstack_network_exporter[279566]: ERROR   18:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:15:31 compute-0 openstack_network_exporter[279566]: ERROR   18:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:15:31 compute-0 openstack_network_exporter[279566]: ERROR   18:15:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:15:31 compute-0 openstack_network_exporter[279566]: ERROR   18:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:15:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:15:31 compute-0 openstack_network_exporter[279566]: ERROR   18:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:15:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:15:31 compute-0 unix_chkpwd[310234]: password check failed for user (root)
Sep 30 18:15:31 compute-0 sshd-session[310130]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158  user=root
Sep 30 18:15:31 compute-0 podman[310181]: 2025-09-30 18:15:31.528933353 +0000 UTC m=+0.063836539 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest)
Sep 30 18:15:31 compute-0 podman[310180]: 2025-09-30 18:15:31.557493955 +0000 UTC m=+0.093597262 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:15:31 compute-0 podman[310182]: 2025-09-30 18:15:31.56421058 +0000 UTC m=+0.093600973 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9-minimal, release=1755695350, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=edpm, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.component=ubi9-minimal-container, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Sep 30 18:15:31 compute-0 nova_compute[265391]: 2025-09-30 18:15:31.669 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:15:31 compute-0 nova_compute[265391]: 2025-09-30 18:15:31.670 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.002s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:15:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:15:31.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:31 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:15:31.837 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:15:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1107: 353 pgs: 353 active+clean; 361 MiB data, 429 MiB used, 40 GiB / 40 GiB avail; 31 KiB/s rd, 14 KiB/s wr, 47 op/s
Sep 30 18:15:32 compute-0 ceph-mon[73755]: pgmap v1107: 353 pgs: 353 active+clean; 361 MiB data, 429 MiB used, 40 GiB / 40 GiB avail; 31 KiB/s rd, 14 KiB/s wr, 47 op/s
Sep 30 18:15:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:32 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:15:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:32 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:15:32 compute-0 nova_compute[265391]: 2025-09-30 18:15:32.672 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:15:32 compute-0 nova_compute[265391]: 2025-09-30 18:15:32.673 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:15:32 compute-0 nova_compute[265391]: 2025-09-30 18:15:32.673 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:15:32 compute-0 nova_compute[265391]: 2025-09-30 18:15:32.673 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:15:32 compute-0 nova_compute[265391]: 2025-09-30 18:15:32.673 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:15:32 compute-0 nova_compute[265391]: 2025-09-30 18:15:32.674 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:15:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:15:33.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:33 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1674352271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:15:33 compute-0 sshd-session[310244]: Invalid user jia from 14.225.220.107 port 48954
Sep 30 18:15:33 compute-0 sshd-session[310244]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:15:33 compute-0 sshd-session[310244]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107
Sep 30 18:15:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:15:33.706Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:15:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:15:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:15:33.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:15:33 compute-0 sshd-session[310130]: Failed password for root from 45.252.249.158 port 45570 ssh2
Sep 30 18:15:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1108: 353 pgs: 353 active+clean; 360 MiB data, 383 MiB used, 40 GiB / 40 GiB avail; 39 KiB/s rd, 16 KiB/s wr, 58 op/s
Sep 30 18:15:34 compute-0 ceph-mon[73755]: pgmap v1108: 353 pgs: 353 active+clean; 360 MiB data, 383 MiB used, 40 GiB / 40 GiB avail; 39 KiB/s rd, 16 KiB/s wr, 58 op/s
Sep 30 18:15:34 compute-0 nova_compute[265391]: 2025-09-30 18:15:34.423 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:15:34 compute-0 nova_compute[265391]: 2025-09-30 18:15:34.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:15:34 compute-0 sshd-session[310130]: Received disconnect from 45.252.249.158 port 45570:11: Bye Bye [preauth]
Sep 30 18:15:34 compute-0 sshd-session[310130]: Disconnected from authenticating user root 45.252.249.158 port 45570 [preauth]
Sep 30 18:15:34 compute-0 nova_compute[265391]: 2025-09-30 18:15:34.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:15:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:15:35.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:35 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/4161356519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:15:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:15:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:15:35.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1109: 353 pgs: 353 active+clean; 360 MiB data, 383 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 4.8 KiB/s wr, 30 op/s
Sep 30 18:15:36 compute-0 sshd-session[310244]: Failed password for invalid user jia from 14.225.220.107 port 48954 ssh2
Sep 30 18:15:36 compute-0 ceph-mon[73755]: pgmap v1109: 353 pgs: 353 active+clean; 360 MiB data, 383 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 4.8 KiB/s wr, 30 op/s
Sep 30 18:15:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:15:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3076294399' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:15:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:15:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3076294399' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:15:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:15:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:15:37.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:15:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:15:37.182Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:15:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:15:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:15:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:15:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:15:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3076294399' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:15:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3076294399' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:15:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:15:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:15:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:15:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:15:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:15:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:15:37.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1110: 353 pgs: 353 active+clean; 360 MiB data, 383 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 4.8 KiB/s wr, 30 op/s
Sep 30 18:15:38 compute-0 sshd-session[310244]: Received disconnect from 14.225.220.107 port 48954:11: Bye Bye [preauth]
Sep 30 18:15:38 compute-0 sshd-session[310244]: Disconnected from invalid user jia 14.225.220.107 port 48954 [preauth]
Sep 30 18:15:38 compute-0 ceph-mon[73755]: pgmap v1110: 353 pgs: 353 active+clean; 360 MiB data, 383 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 4.8 KiB/s wr, 30 op/s
Sep 30 18:15:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 18:15:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Sep 30 18:15:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Sep 30 18:15:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Sep 30 18:15:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Sep 30 18:15:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Sep 30 18:15:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Sep 30 18:15:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 18:15:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 18:15:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 18:15:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Sep 30 18:15:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 18:15:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Sep 30 18:15:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Sep 30 18:15:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Sep 30 18:15:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Sep 30 18:15:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Sep 30 18:15:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Sep 30 18:15:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Sep 30 18:15:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Sep 30 18:15:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Sep 30 18:15:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Sep 30 18:15:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Sep 30 18:15:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Sep 30 18:15:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 18:15:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Sep 30 18:15:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 18:15:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:15:38] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:15:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:15:38] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:15:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:15:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:15:39.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:39 compute-0 nova_compute[265391]: 2025-09-30 18:15:39.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:15:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:39 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a40016e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:15:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:15:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:15:39.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:15:39 compute-0 nova_compute[265391]: 2025-09-30 18:15:39.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:15:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1111: 353 pgs: 353 active+clean; 360 MiB data, 383 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 4.9 KiB/s wr, 30 op/s
Sep 30 18:15:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:15:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/181540 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 18:15:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:40 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:15:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:15:41.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:41 compute-0 ceph-mon[73755]: pgmap v1111: 353 pgs: 353 active+clean; 360 MiB data, 383 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 4.9 KiB/s wr, 30 op/s
Sep 30 18:15:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:41 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:15:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:15:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:15:41.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:15:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1112: 353 pgs: 353 active+clean; 360 MiB data, 383 MiB used, 40 GiB / 40 GiB avail; 8.1 KiB/s rd, 1.2 KiB/s wr, 11 op/s
Sep 30 18:15:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:42 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398000fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:15:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:15:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:15:43.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:15:43 compute-0 ceph-mon[73755]: pgmap v1112: 353 pgs: 353 active+clean; 360 MiB data, 383 MiB used, 40 GiB / 40 GiB avail; 8.1 KiB/s rd, 1.2 KiB/s wr, 11 op/s
Sep 30 18:15:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:43 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002000 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:15:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:15:43.707Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:15:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:15:43.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1113: 353 pgs: 353 active+clean; 281 MiB data, 338 MiB used, 40 GiB / 40 GiB avail; 27 KiB/s rd, 2.4 KiB/s wr, 39 op/s
Sep 30 18:15:44 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1018186174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:15:44 compute-0 nova_compute[265391]: 2025-09-30 18:15:44.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:15:44 compute-0 nova_compute[265391]: 2025-09-30 18:15:44.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:15:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:44 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03900016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:15:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:15:45.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:45 compute-0 ceph-mon[73755]: pgmap v1113: 353 pgs: 353 active+clean; 281 MiB data, 338 MiB used, 40 GiB / 40 GiB avail; 27 KiB/s rd, 2.4 KiB/s wr, 39 op/s
Sep 30 18:15:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:45 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:15:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:15:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:15:45.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1114: 353 pgs: 353 active+clean; 281 MiB data, 338 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:15:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:46 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:15:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:15:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:15:47.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:15:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:15:47.183Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:15:47 compute-0 ceph-mon[73755]: pgmap v1114: 353 pgs: 353 active+clean; 281 MiB data, 338 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:15:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:47 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:15:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:15:47.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1115: 353 pgs: 353 active+clean; 281 MiB data, 338 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:15:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:15:48] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:15:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:15:48] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:15:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:48 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03900016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:15:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:15:49.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:49 compute-0 ceph-mon[73755]: pgmap v1115: 353 pgs: 353 active+clean; 281 MiB data, 338 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:15:49 compute-0 nova_compute[265391]: 2025-09-30 18:15:49.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:15:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:49 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:15:49 compute-0 nova_compute[265391]: 2025-09-30 18:15:49.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:15:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:15:49.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1116: 353 pgs: 353 active+clean; 202 MiB data, 319 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 2.4 KiB/s wr, 55 op/s
Sep 30 18:15:50 compute-0 ceph-mon[73755]: pgmap v1116: 353 pgs: 353 active+clean; 202 MiB data, 319 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 2.4 KiB/s wr, 55 op/s
Sep 30 18:15:50 compute-0 podman[310281]: 2025-09-30 18:15:50.54188446 +0000 UTC m=+0.071245846 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Sep 30 18:15:50 compute-0 podman[310280]: 2025-09-30 18:15:50.604555222 +0000 UTC m=+0.133421315 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Sep 30 18:15:50 compute-0 sudo[310328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:15:50 compute-0 sudo[310328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:15:50 compute-0 sudo[310328]: pam_unix(sudo:session): session closed for user root
Sep 30 18:15:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:15:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:50 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002000 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:15:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:15:51.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:51 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1071548273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:15:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:51 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:15:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:15:51.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1117: 353 pgs: 353 active+clean; 202 MiB data, 319 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Sep 30 18:15:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:15:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:15:52 compute-0 ceph-mon[73755]: pgmap v1117: 353 pgs: 353 active+clean; 202 MiB data, 319 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Sep 30 18:15:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:15:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:52 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03900016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:15:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:15:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:15:53.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:15:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:53 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:15:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:15:53.708Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:15:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:15:53.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1118: 353 pgs: 353 active+clean; 202 MiB data, 289 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Sep 30 18:15:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:15:54.288 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:15:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:15:54.288 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:15:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:15:54.289 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:15:54 compute-0 nova_compute[265391]: 2025-09-30 18:15:54.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:15:54 compute-0 nova_compute[265391]: 2025-09-30 18:15:54.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:15:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:54 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:15:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:15:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:15:55.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:15:55 compute-0 ceph-mon[73755]: pgmap v1118: 353 pgs: 353 active+clean; 202 MiB data, 289 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Sep 30 18:15:55 compute-0 podman[310364]: 2025-09-30 18:15:55.529735691 +0000 UTC m=+0.067963061 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Sep 30 18:15:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:55 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:15:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:15:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:15:55.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1119: 353 pgs: 353 active+clean; 202 MiB data, 289 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:15:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:56 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:15:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:15:57.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:15:57.184Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:15:57 compute-0 ceph-mon[73755]: pgmap v1119: 353 pgs: 353 active+clean; 202 MiB data, 289 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:15:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:15:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2568044108' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:15:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:15:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2568044108' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:15:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:57 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:15:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:15:57.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1120: 353 pgs: 353 active+clean; 202 MiB data, 289 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:15:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2568044108' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:15:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2568044108' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:15:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:15:58] "GET /metrics HTTP/1.1" 200 46642 "" "Prometheus/2.51.0"
Sep 30 18:15:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:15:58] "GET /metrics HTTP/1.1" 200 46642 "" "Prometheus/2.51.0"
Sep 30 18:15:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:58 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:15:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:15:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:15:59.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:15:59 compute-0 ceph-mon[73755]: pgmap v1120: 353 pgs: 353 active+clean; 202 MiB data, 289 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:15:59 compute-0 nova_compute[265391]: 2025-09-30 18:15:59.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:15:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:15:59 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002d10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:15:59 compute-0 podman[276673]: time="2025-09-30T18:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:15:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:15:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10743 "" "Go-http-client/1.1"
Sep 30 18:15:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:15:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:15:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:15:59.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:15:59 compute-0 nova_compute[265391]: 2025-09-30 18:15:59.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1121: 353 pgs: 353 active+clean; 123 MiB data, 233 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Sep 30 18:16:00 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3441962140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:16:00 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Sep 30 18:16:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:16:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:00 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003730 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:16:01.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:01 compute-0 ceph-mon[73755]: pgmap v1121: 353 pgs: 353 active+clean; 123 MiB data, 233 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Sep 30 18:16:01 compute-0 openstack_network_exporter[279566]: ERROR   18:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:16:01 compute-0 openstack_network_exporter[279566]: ERROR   18:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:16:01 compute-0 openstack_network_exporter[279566]: ERROR   18:16:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:16:01 compute-0 openstack_network_exporter[279566]: ERROR   18:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:16:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:16:01 compute-0 openstack_network_exporter[279566]: ERROR   18:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:16:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:16:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:01 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c0032f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:16:01.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1122: 353 pgs: 353 active+clean; 123 MiB data, 233 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:16:02 compute-0 ceph-mon[73755]: pgmap v1122: 353 pgs: 353 active+clean; 123 MiB data, 233 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:16:02 compute-0 podman[310393]: 2025-09-30 18:16:02.520973835 +0000 UTC m=+0.058304600 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, name=ubi9-minimal, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc.)
Sep 30 18:16:02 compute-0 podman[310392]: 2025-09-30 18:16:02.529800574 +0000 UTC m=+0.067977691 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:16:02 compute-0 podman[310391]: 2025-09-30 18:16:02.530426921 +0000 UTC m=+0.068675309 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 18:16:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:02 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:03 compute-0 nova_compute[265391]: 2025-09-30 18:16:03.037 2 DEBUG oslo_concurrency.lockutils [None req-cf3cbcf5-3459-42b5-91cf-024029ce6d43 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Acquiring lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:16:03 compute-0 nova_compute[265391]: 2025-09-30 18:16:03.038 2 DEBUG oslo_concurrency.lockutils [None req-cf3cbcf5-3459-42b5-91cf-024029ce6d43 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:16:03 compute-0 nova_compute[265391]: 2025-09-30 18:16:03.038 2 DEBUG oslo_concurrency.lockutils [None req-cf3cbcf5-3459-42b5-91cf-024029ce6d43 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Acquiring lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:16:03 compute-0 nova_compute[265391]: 2025-09-30 18:16:03.038 2 DEBUG oslo_concurrency.lockutils [None req-cf3cbcf5-3459-42b5-91cf-024029ce6d43 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:16:03 compute-0 nova_compute[265391]: 2025-09-30 18:16:03.039 2 DEBUG oslo_concurrency.lockutils [None req-cf3cbcf5-3459-42b5-91cf-024029ce6d43 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:16:03 compute-0 nova_compute[265391]: 2025-09-30 18:16:03.050 2 INFO nova.compute.manager [None req-cf3cbcf5-3459-42b5-91cf-024029ce6d43 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Terminating instance
Sep 30 18:16:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 18:16:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:16:03.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 18:16:03 compute-0 nova_compute[265391]: 2025-09-30 18:16:03.570 2 DEBUG nova.compute.manager [None req-cf3cbcf5-3459-42b5-91cf-024029ce6d43 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Sep 30 18:16:03 compute-0 kernel: tap26d718f0-c9 (unregistering): left promiscuous mode
Sep 30 18:16:03 compute-0 NetworkManager[45059]: <info>  [1759256163.6384] device (tap26d718f0-c9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 18:16:03 compute-0 ovn_controller[156242]: 2025-09-30T18:16:03Z|00090|binding|INFO|Releasing lport 26d718f0-c921-490f-815d-8221b976f012 from this chassis (sb_readonly=0)
Sep 30 18:16:03 compute-0 ovn_controller[156242]: 2025-09-30T18:16:03Z|00091|binding|INFO|Setting lport 26d718f0-c921-490f-815d-8221b976f012 down in Southbound
Sep 30 18:16:03 compute-0 nova_compute[265391]: 2025-09-30 18:16:03.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:03 compute-0 ovn_controller[156242]: 2025-09-30T18:16:03Z|00092|binding|INFO|Removing iface tap26d718f0-c9 ovn-installed in OVS
Sep 30 18:16:03 compute-0 nova_compute[265391]: 2025-09-30 18:16:03.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:03 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:16:03.654 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:22:1f 10.100.0.7'], port_security=['fa:16:3e:0e:22:1f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '761dbb06-6272-4941-a65f-5ad2f8cfbb70', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5fff1904-159a-4b76-8c46-feabf17f29ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddd1f985d8b64b449c79d55b0cbd6422', 'neutron:revision_number': '7', 'neutron:security_group_ids': '34f3cf7b-94cf-408f-b3dc-ae0b57c009fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12c18a77-b252-4a3e-a181-b42644879446, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=26d718f0-c921-490f-815d-8221b976f012) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:16:03 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:16:03.656 166158 INFO neutron.agent.ovn.metadata.agent [-] Port 26d718f0-c921-490f-815d-8221b976f012 in datapath 5fff1904-159a-4b76-8c46-feabf17f29ab unbound from our chassis
Sep 30 18:16:03 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:16:03.657 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5fff1904-159a-4b76-8c46-feabf17f29ab, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:16:03 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:16:03.658 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[8278ad8a-58f7-447e-9ef5-915c6ffb5fbd]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:16:03 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:16:03.659 166158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab namespace which is not needed anymore
Sep 30 18:16:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:03 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002d10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:03 compute-0 nova_compute[265391]: 2025-09-30 18:16:03.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:03 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Sep 30 18:16:03 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 20.545s CPU time.
Sep 30 18:16:03 compute-0 systemd-machined[219917]: Machine qemu-4-instance-00000004 terminated.
Sep 30 18:16:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:16:03.709Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:16:03 compute-0 podman[310475]: 2025-09-30 18:16:03.797667877 +0000 UTC m=+0.036908242 container kill 1a934a73fb9e89fc1c5173aaaca5e249298d72c0ef29a71f20c487def4f55eb7 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Sep 30 18:16:03 compute-0 neutron-haproxy-ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab[305525]: [NOTICE]   (305557) : haproxy version is 3.0.5-8e879a5
Sep 30 18:16:03 compute-0 neutron-haproxy-ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab[305525]: [NOTICE]   (305557) : path to executable is /usr/sbin/haproxy
Sep 30 18:16:03 compute-0 neutron-haproxy-ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab[305525]: [WARNING]  (305557) : Exiting Master process...
Sep 30 18:16:03 compute-0 neutron-haproxy-ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab[305525]: [ALERT]    (305557) : Current worker (305561) exited with code 143 (Terminated)
Sep 30 18:16:03 compute-0 neutron-haproxy-ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab[305525]: [WARNING]  (305557) : All workers exited. Exiting... (0)
Sep 30 18:16:03 compute-0 systemd[1]: libpod-1a934a73fb9e89fc1c5173aaaca5e249298d72c0ef29a71f20c487def4f55eb7.scope: Deactivated successfully.
Sep 30 18:16:03 compute-0 nova_compute[265391]: 2025-09-30 18:16:03.809 2 INFO nova.virt.libvirt.driver [-] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Instance destroyed successfully.
Sep 30 18:16:03 compute-0 nova_compute[265391]: 2025-09-30 18:16:03.810 2 DEBUG nova.objects.instance [None req-cf3cbcf5-3459-42b5-91cf-024029ce6d43 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lazy-loading 'resources' on Instance uuid 761dbb06-6272-4941-a65f-5ad2f8cfbb70 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:16:03 compute-0 podman[310499]: 2025-09-30 18:16:03.840965644 +0000 UTC m=+0.027089826 container died 1a934a73fb9e89fc1c5173aaaca5e249298d72c0ef29a71f20c487def4f55eb7 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 18:16:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:16:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:16:03.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:16:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-956b7df1d67a8dfff3db6885f1285258ecb09f779be952bb04f304c6a43d2c10-merged.mount: Deactivated successfully.
Sep 30 18:16:03 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1a934a73fb9e89fc1c5173aaaca5e249298d72c0ef29a71f20c487def4f55eb7-userdata-shm.mount: Deactivated successfully.
Sep 30 18:16:03 compute-0 podman[310499]: 2025-09-30 18:16:03.880819132 +0000 UTC m=+0.066943294 container cleanup 1a934a73fb9e89fc1c5173aaaca5e249298d72c0ef29a71f20c487def4f55eb7 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4)
Sep 30 18:16:03 compute-0 systemd[1]: libpod-conmon-1a934a73fb9e89fc1c5173aaaca5e249298d72c0ef29a71f20c487def4f55eb7.scope: Deactivated successfully.
Sep 30 18:16:03 compute-0 podman[310507]: 2025-09-30 18:16:03.898668047 +0000 UTC m=+0.061785800 container remove 1a934a73fb9e89fc1c5173aaaca5e249298d72c0ef29a71f20c487def4f55eb7 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Sep 30 18:16:03 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:16:03.903 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[e5e6c6d7-e16a-4c08-be81-7e20d0ec1f71]: (4, ("Tue Sep 30 06:16:03 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab (1a934a73fb9e89fc1c5173aaaca5e249298d72c0ef29a71f20c487def4f55eb7)\n1a934a73fb9e89fc1c5173aaaca5e249298d72c0ef29a71f20c487def4f55eb7\nTue Sep 30 06:16:03 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab (1a934a73fb9e89fc1c5173aaaca5e249298d72c0ef29a71f20c487def4f55eb7)\n1a934a73fb9e89fc1c5173aaaca5e249298d72c0ef29a71f20c487def4f55eb7\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:16:03 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:16:03.905 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[b01ba908-1554-4e47-9fef-0a0db223f5d9]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:16:03 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:16:03.905 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5fff1904-159a-4b76-8c46-feabf17f29ab.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5fff1904-159a-4b76-8c46-feabf17f29ab.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:16:03 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:16:03.905 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[6c0861b3-0fed-4149-88fb-b254969b46ba]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:16:03 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:16:03.906 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5fff1904-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:16:03 compute-0 nova_compute[265391]: 2025-09-30 18:16:03.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:03 compute-0 kernel: tap5fff1904-10: left promiscuous mode
Sep 30 18:16:03 compute-0 nova_compute[265391]: 2025-09-30 18:16:03.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:03 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:16:03.927 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[6228bf76-a600-4984-8c84-7a7d9938074d]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:16:03 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:16:03.967 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[7ff94169-b278-4471-bc8a-f8cab4b3c130]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:16:03 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:16:03.969 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[0979f475-aa17-4c15-8770-33e0d8ec95fe]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:16:03 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:16:03.985 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[a65c9c9a-94a4-443f-8fff-f92f10350f17]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456725, 'reachable_time': 35237, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310533, 'error': None, 'target': 'ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:16:03 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:16:03.989 166383 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5fff1904-159a-4b76-8c46-feabf17f29ab deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Sep 30 18:16:03 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:16:03.989 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[6316d05d-08a7-4efd-80b5-01090bf8194d]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:16:03 compute-0 systemd[1]: run-netns-ovnmeta\x2d5fff1904\x2d159a\x2d4b76\x2d8c46\x2dfeabf17f29ab.mount: Deactivated successfully.
Sep 30 18:16:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1123: 353 pgs: 353 active+clean; 123 MiB data, 233 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:16:04 compute-0 nova_compute[265391]: 2025-09-30 18:16:04.185 2 DEBUG nova.compute.manager [req-4a86603e-6529-4259-bc50-f5b10a7d1f08 req-5e20850b-c3db-487d-861d-a6797a2777cc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Received event network-vif-unplugged-26d718f0-c921-490f-815d-8221b976f012 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:16:04 compute-0 nova_compute[265391]: 2025-09-30 18:16:04.186 2 DEBUG oslo_concurrency.lockutils [req-4a86603e-6529-4259-bc50-f5b10a7d1f08 req-5e20850b-c3db-487d-861d-a6797a2777cc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:16:04 compute-0 nova_compute[265391]: 2025-09-30 18:16:04.186 2 DEBUG oslo_concurrency.lockutils [req-4a86603e-6529-4259-bc50-f5b10a7d1f08 req-5e20850b-c3db-487d-861d-a6797a2777cc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:16:04 compute-0 nova_compute[265391]: 2025-09-30 18:16:04.186 2 DEBUG oslo_concurrency.lockutils [req-4a86603e-6529-4259-bc50-f5b10a7d1f08 req-5e20850b-c3db-487d-861d-a6797a2777cc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:16:04 compute-0 nova_compute[265391]: 2025-09-30 18:16:04.186 2 DEBUG nova.compute.manager [req-4a86603e-6529-4259-bc50-f5b10a7d1f08 req-5e20850b-c3db-487d-861d-a6797a2777cc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] No waiting events found dispatching network-vif-unplugged-26d718f0-c921-490f-815d-8221b976f012 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:16:04 compute-0 nova_compute[265391]: 2025-09-30 18:16:04.186 2 DEBUG nova.compute.manager [req-4a86603e-6529-4259-bc50-f5b10a7d1f08 req-5e20850b-c3db-487d-861d-a6797a2777cc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Received event network-vif-unplugged-26d718f0-c921-490f-815d-8221b976f012 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:16:04 compute-0 nova_compute[265391]: 2025-09-30 18:16:04.316 2 DEBUG nova.virt.libvirt.vif [None req-cf3cbcf5-3459-42b5-91cf-024029ce6d43 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:11:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-1100695055',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-1100695055',id=4,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:12:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ddd1f985d8b64b449c79d55b0cbd6422',ramdisk_id='',reservation_id='r-9gd0nhai',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteActionsViaActuator-837729328',owner_user_name='tempest-TestExecuteActionsViaActuator-837729328-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:13:03Z,user_data=None,user_id='dc3bb71c425f484fbc46f90978029403',uuid=761dbb06-6272-4941-a65f-5ad2f8cfbb70,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "26d718f0-c921-490f-815d-8221b976f012", "address": "fa:16:3e:0e:22:1f", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d718f0-c9", "ovs_interfaceid": "26d718f0-c921-490f-815d-8221b976f012", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Sep 30 18:16:04 compute-0 nova_compute[265391]: 2025-09-30 18:16:04.316 2 DEBUG nova.network.os_vif_util [None req-cf3cbcf5-3459-42b5-91cf-024029ce6d43 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Converting VIF {"id": "26d718f0-c921-490f-815d-8221b976f012", "address": "fa:16:3e:0e:22:1f", "network": {"id": "5fff1904-159a-4b76-8c46-feabf17f29ab", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-50673167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250d452565a2459c8481b499c0227183", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d718f0-c9", "ovs_interfaceid": "26d718f0-c921-490f-815d-8221b976f012", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:16:04 compute-0 nova_compute[265391]: 2025-09-30 18:16:04.317 2 DEBUG nova.network.os_vif_util [None req-cf3cbcf5-3459-42b5-91cf-024029ce6d43 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0e:22:1f,bridge_name='br-int',has_traffic_filtering=True,id=26d718f0-c921-490f-815d-8221b976f012,network=Network(5fff1904-159a-4b76-8c46-feabf17f29ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26d718f0-c9') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:16:04 compute-0 nova_compute[265391]: 2025-09-30 18:16:04.317 2 DEBUG os_vif [None req-cf3cbcf5-3459-42b5-91cf-024029ce6d43 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:22:1f,bridge_name='br-int',has_traffic_filtering=True,id=26d718f0-c921-490f-815d-8221b976f012,network=Network(5fff1904-159a-4b76-8c46-feabf17f29ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26d718f0-c9') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Sep 30 18:16:04 compute-0 nova_compute[265391]: 2025-09-30 18:16:04.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:04 compute-0 nova_compute[265391]: 2025-09-30 18:16:04.320 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26d718f0-c9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:16:04 compute-0 nova_compute[265391]: 2025-09-30 18:16:04.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:04 compute-0 nova_compute[265391]: 2025-09-30 18:16:04.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:04 compute-0 nova_compute[265391]: 2025-09-30 18:16:04.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:04 compute-0 nova_compute[265391]: 2025-09-30 18:16:04.324 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=82434abe-6ab7-45c4-9039-448a91acd1b8) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:16:04 compute-0 nova_compute[265391]: 2025-09-30 18:16:04.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:04 compute-0 nova_compute[265391]: 2025-09-30 18:16:04.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:04 compute-0 nova_compute[265391]: 2025-09-30 18:16:04.327 2 INFO os_vif [None req-cf3cbcf5-3459-42b5-91cf-024029ce6d43 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:22:1f,bridge_name='br-int',has_traffic_filtering=True,id=26d718f0-c921-490f-815d-8221b976f012,network=Network(5fff1904-159a-4b76-8c46-feabf17f29ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26d718f0-c9')
Sep 30 18:16:04 compute-0 nova_compute[265391]: 2025-09-30 18:16:04.782 2 INFO nova.virt.libvirt.driver [None req-cf3cbcf5-3459-42b5-91cf-024029ce6d43 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Deleting instance files /var/lib/nova/instances/761dbb06-6272-4941-a65f-5ad2f8cfbb70_del
Sep 30 18:16:04 compute-0 nova_compute[265391]: 2025-09-30 18:16:04.783 2 INFO nova.virt.libvirt.driver [None req-cf3cbcf5-3459-42b5-91cf-024029ce6d43 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Deletion of /var/lib/nova/instances/761dbb06-6272-4941-a65f-5ad2f8cfbb70_del complete
Sep 30 18:16:04 compute-0 nova_compute[265391]: 2025-09-30 18:16:04.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:04 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003730 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:16:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:16:05.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:16:05 compute-0 ceph-mon[73755]: pgmap v1123: 353 pgs: 353 active+clean; 123 MiB data, 233 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:16:05 compute-0 nova_compute[265391]: 2025-09-30 18:16:05.296 2 INFO nova.compute.manager [None req-cf3cbcf5-3459-42b5-91cf-024029ce6d43 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Took 1.73 seconds to destroy the instance on the hypervisor.
Sep 30 18:16:05 compute-0 nova_compute[265391]: 2025-09-30 18:16:05.297 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-cf3cbcf5-3459-42b5-91cf-024029ce6d43 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Sep 30 18:16:05 compute-0 nova_compute[265391]: 2025-09-30 18:16:05.297 2 DEBUG nova.compute.manager [-] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Sep 30 18:16:05 compute-0 nova_compute[265391]: 2025-09-30 18:16:05.297 2 DEBUG nova.network.neutron [-] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Sep 30 18:16:05 compute-0 nova_compute[265391]: 2025-09-30 18:16:05.298 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:16:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:05 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c0032f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:16:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:16:05.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:05 compute-0 nova_compute[265391]: 2025-09-30 18:16:05.974 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:16:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1124: 353 pgs: 353 active+clean; 123 MiB data, 233 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:16:06 compute-0 nova_compute[265391]: 2025-09-30 18:16:06.253 2 DEBUG nova.compute.manager [req-a9bb1ae8-2140-461b-987b-565fcf091918 req-394e8009-ea3f-4626-84a2-267bb11dc899 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Received event network-vif-unplugged-26d718f0-c921-490f-815d-8221b976f012 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:16:06 compute-0 nova_compute[265391]: 2025-09-30 18:16:06.253 2 DEBUG oslo_concurrency.lockutils [req-a9bb1ae8-2140-461b-987b-565fcf091918 req-394e8009-ea3f-4626-84a2-267bb11dc899 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:16:06 compute-0 nova_compute[265391]: 2025-09-30 18:16:06.254 2 DEBUG oslo_concurrency.lockutils [req-a9bb1ae8-2140-461b-987b-565fcf091918 req-394e8009-ea3f-4626-84a2-267bb11dc899 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:16:06 compute-0 nova_compute[265391]: 2025-09-30 18:16:06.254 2 DEBUG oslo_concurrency.lockutils [req-a9bb1ae8-2140-461b-987b-565fcf091918 req-394e8009-ea3f-4626-84a2-267bb11dc899 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:16:06 compute-0 nova_compute[265391]: 2025-09-30 18:16:06.254 2 DEBUG nova.compute.manager [req-a9bb1ae8-2140-461b-987b-565fcf091918 req-394e8009-ea3f-4626-84a2-267bb11dc899 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] No waiting events found dispatching network-vif-unplugged-26d718f0-c921-490f-815d-8221b976f012 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:16:06 compute-0 nova_compute[265391]: 2025-09-30 18:16:06.255 2 DEBUG nova.compute.manager [req-a9bb1ae8-2140-461b-987b-565fcf091918 req-394e8009-ea3f-4626-84a2-267bb11dc899 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Received event network-vif-unplugged-26d718f0-c921-490f-815d-8221b976f012 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:16:06 compute-0 nova_compute[265391]: 2025-09-30 18:16:06.286 2 DEBUG nova.compute.manager [req-d3a56a9b-1dab-43e7-9120-b503603f9764 req-33d7944c-8cd9-4d0f-88ba-63f11d8bb6e5 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Received event network-vif-deleted-26d718f0-c921-490f-815d-8221b976f012 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:16:06 compute-0 nova_compute[265391]: 2025-09-30 18:16:06.287 2 INFO nova.compute.manager [req-d3a56a9b-1dab-43e7-9120-b503603f9764 req-33d7944c-8cd9-4d0f-88ba-63f11d8bb6e5 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Neutron deleted interface 26d718f0-c921-490f-815d-8221b976f012; detaching it from the instance and deleting it from the info cache
Sep 30 18:16:06 compute-0 nova_compute[265391]: 2025-09-30 18:16:06.287 2 DEBUG nova.network.neutron [req-d3a56a9b-1dab-43e7-9120-b503603f9764 req-33d7944c-8cd9-4d0f-88ba-63f11d8bb6e5 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:16:06 compute-0 nova_compute[265391]: 2025-09-30 18:16:06.732 2 DEBUG nova.network.neutron [-] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:16:06 compute-0 nova_compute[265391]: 2025-09-30 18:16:06.794 2 DEBUG nova.compute.manager [req-d3a56a9b-1dab-43e7-9120-b503603f9764 req-33d7944c-8cd9-4d0f-88ba-63f11d8bb6e5 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Detach interface failed, port_id=26d718f0-c921-490f-815d-8221b976f012, reason: Instance 761dbb06-6272-4941-a65f-5ad2f8cfbb70 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11646
Sep 30 18:16:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:06 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:16:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:16:07.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:16:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:16:07.185Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:16:07 compute-0 ceph-mon[73755]: pgmap v1124: 353 pgs: 353 active+clean; 123 MiB data, 233 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:16:07 compute-0 nova_compute[265391]: 2025-09-30 18:16:07.241 2 INFO nova.compute.manager [-] [instance: 761dbb06-6272-4941-a65f-5ad2f8cfbb70] Took 1.94 seconds to deallocate network for instance.
Sep 30 18:16:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:16:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:16:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:07 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002d10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011409126843621802 of space, bias 1.0, pg target 0.22818253687243603 quantized to 32 (current 32)
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:16:07
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'images', 'backups', 'default.rgw.log', '.nfs']
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:16:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:16:07 compute-0 nova_compute[265391]: 2025-09-30 18:16:07.762 2 DEBUG oslo_concurrency.lockutils [None req-cf3cbcf5-3459-42b5-91cf-024029ce6d43 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:16:07 compute-0 nova_compute[265391]: 2025-09-30 18:16:07.762 2 DEBUG oslo_concurrency.lockutils [None req-cf3cbcf5-3459-42b5-91cf-024029ce6d43 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:16:07 compute-0 nova_compute[265391]: 2025-09-30 18:16:07.815 2 DEBUG oslo_concurrency.processutils [None req-cf3cbcf5-3459-42b5-91cf-024029ce6d43 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:16:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 18:16:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:16:07.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 18:16:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1125: 353 pgs: 353 active+clean; 123 MiB data, 233 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:16:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:16:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:16:08 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2315580826' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:16:08 compute-0 nova_compute[265391]: 2025-09-30 18:16:08.290 2 DEBUG oslo_concurrency.processutils [None req-cf3cbcf5-3459-42b5-91cf-024029ce6d43 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:16:08 compute-0 nova_compute[265391]: 2025-09-30 18:16:08.299 2 DEBUG nova.compute.provider_tree [None req-cf3cbcf5-3459-42b5-91cf-024029ce6d43 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:16:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:16:08] "GET /metrics HTTP/1.1" 200 46644 "" "Prometheus/2.51.0"
Sep 30 18:16:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:16:08] "GET /metrics HTTP/1.1" 200 46644 "" "Prometheus/2.51.0"
Sep 30 18:16:08 compute-0 nova_compute[265391]: 2025-09-30 18:16:08.810 2 DEBUG nova.scheduler.client.report [None req-cf3cbcf5-3459-42b5-91cf-024029ce6d43 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:16:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:08 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:16:09.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:09 compute-0 ceph-mon[73755]: pgmap v1125: 353 pgs: 353 active+clean; 123 MiB data, 233 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:16:09 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2315580826' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:16:09 compute-0 sudo[310579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:16:09 compute-0 sudo[310579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:16:09 compute-0 sudo[310579]: pam_unix(sudo:session): session closed for user root
Sep 30 18:16:09 compute-0 nova_compute[265391]: 2025-09-30 18:16:09.322 2 DEBUG oslo_concurrency.lockutils [None req-cf3cbcf5-3459-42b5-91cf-024029ce6d43 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.560s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:16:09 compute-0 nova_compute[265391]: 2025-09-30 18:16:09.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:09 compute-0 nova_compute[265391]: 2025-09-30 18:16:09.357 2 INFO nova.scheduler.client.report [None req-cf3cbcf5-3459-42b5-91cf-024029ce6d43 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Deleted allocations for instance 761dbb06-6272-4941-a65f-5ad2f8cfbb70
Sep 30 18:16:09 compute-0 sudo[310604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:16:09 compute-0 sudo[310604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:16:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:09 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:16:09.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:09 compute-0 nova_compute[265391]: 2025-09-30 18:16:09.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:10 compute-0 sudo[310604]: pam_unix(sudo:session): session closed for user root
Sep 30 18:16:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1126: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Sep 30 18:16:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:16:10 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:16:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:16:10 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:16:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:16:10 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:16:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:16:10 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:16:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:16:10 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:16:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:16:10 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:16:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:16:10 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:16:10 compute-0 sudo[310663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:16:10 compute-0 sudo[310663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:16:10 compute-0 sudo[310663]: pam_unix(sudo:session): session closed for user root
Sep 30 18:16:10 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:16:10 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:16:10 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:16:10 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:16:10 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:16:10 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:16:10 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:16:10 compute-0 sudo[310688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:16:10 compute-0 sudo[310688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:16:10 compute-0 nova_compute[265391]: 2025-09-30 18:16:10.392 2 DEBUG oslo_concurrency.lockutils [None req-cf3cbcf5-3459-42b5-91cf-024029ce6d43 dc3bb71c425f484fbc46f90978029403 ddd1f985d8b64b449c79d55b0cbd6422 - - default default] Lock "761dbb06-6272-4941-a65f-5ad2f8cfbb70" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.354s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:16:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:16:10 compute-0 sudo[310741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:16:10 compute-0 sudo[310741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:16:10 compute-0 sudo[310741]: pam_unix(sudo:session): session closed for user root
Sep 30 18:16:10 compute-0 podman[310777]: 2025-09-30 18:16:10.822163678 +0000 UTC m=+0.040706621 container create 75e01f6782cdae5b69a324f5472aca483473a9c6eb5ab757c969c66561e6b954 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Sep 30 18:16:10 compute-0 systemd[1]: Started libpod-conmon-75e01f6782cdae5b69a324f5472aca483473a9c6eb5ab757c969c66561e6b954.scope.
Sep 30 18:16:10 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:16:10 compute-0 podman[310777]: 2025-09-30 18:16:10.803214894 +0000 UTC m=+0.021757817 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:16:10 compute-0 podman[310777]: 2025-09-30 18:16:10.904117552 +0000 UTC m=+0.122660465 container init 75e01f6782cdae5b69a324f5472aca483473a9c6eb5ab757c969c66561e6b954 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_ardinghelli, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:16:10 compute-0 podman[310777]: 2025-09-30 18:16:10.910629691 +0000 UTC m=+0.129172594 container start 75e01f6782cdae5b69a324f5472aca483473a9c6eb5ab757c969c66561e6b954 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_ardinghelli, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:16:10 compute-0 podman[310777]: 2025-09-30 18:16:10.914012869 +0000 UTC m=+0.132555782 container attach 75e01f6782cdae5b69a324f5472aca483473a9c6eb5ab757c969c66561e6b954 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid)
Sep 30 18:16:10 compute-0 recursing_ardinghelli[310793]: 167 167
Sep 30 18:16:10 compute-0 systemd[1]: libpod-75e01f6782cdae5b69a324f5472aca483473a9c6eb5ab757c969c66561e6b954.scope: Deactivated successfully.
Sep 30 18:16:10 compute-0 podman[310777]: 2025-09-30 18:16:10.917746197 +0000 UTC m=+0.136289130 container died 75e01f6782cdae5b69a324f5472aca483473a9c6eb5ab757c969c66561e6b954 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Sep 30 18:16:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-3052c6d8a89aeb090b5eaa5539b182595eb3a856b9bf56df0ae8a90bb197090d-merged.mount: Deactivated successfully.
Sep 30 18:16:10 compute-0 podman[310777]: 2025-09-30 18:16:10.953491437 +0000 UTC m=+0.172034340 container remove 75e01f6782cdae5b69a324f5472aca483473a9c6eb5ab757c969c66561e6b954 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_ardinghelli, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:16:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:10 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:10 compute-0 systemd[1]: libpod-conmon-75e01f6782cdae5b69a324f5472aca483473a9c6eb5ab757c969c66561e6b954.scope: Deactivated successfully.
Sep 30 18:16:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:16:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:16:11.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:16:11 compute-0 podman[310818]: 2025-09-30 18:16:11.169580014 +0000 UTC m=+0.074134542 container create 68bb3769356992fdc4c61aa2d5c25bec7034c01a3c4983cadfce30c7f9977f61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:16:11 compute-0 systemd[1]: Started libpod-conmon-68bb3769356992fdc4c61aa2d5c25bec7034c01a3c4983cadfce30c7f9977f61.scope.
Sep 30 18:16:11 compute-0 podman[310818]: 2025-09-30 18:16:11.13793965 +0000 UTC m=+0.042494208 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:16:11 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:16:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d43c68768584f0fbc20a848c575e16b72ad94390eb3e7a59ac445a1a13c4fa4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:16:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d43c68768584f0fbc20a848c575e16b72ad94390eb3e7a59ac445a1a13c4fa4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:16:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d43c68768584f0fbc20a848c575e16b72ad94390eb3e7a59ac445a1a13c4fa4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:16:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d43c68768584f0fbc20a848c575e16b72ad94390eb3e7a59ac445a1a13c4fa4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:16:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d43c68768584f0fbc20a848c575e16b72ad94390eb3e7a59ac445a1a13c4fa4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:16:11 compute-0 podman[310818]: 2025-09-30 18:16:11.276944659 +0000 UTC m=+0.181499237 container init 68bb3769356992fdc4c61aa2d5c25bec7034c01a3c4983cadfce30c7f9977f61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Sep 30 18:16:11 compute-0 ceph-mon[73755]: pgmap v1126: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Sep 30 18:16:11 compute-0 podman[310818]: 2025-09-30 18:16:11.292946696 +0000 UTC m=+0.197501224 container start 68bb3769356992fdc4c61aa2d5c25bec7034c01a3c4983cadfce30c7f9977f61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_clarke, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:16:11 compute-0 podman[310818]: 2025-09-30 18:16:11.296914599 +0000 UTC m=+0.201469217 container attach 68bb3769356992fdc4c61aa2d5c25bec7034c01a3c4983cadfce30c7f9977f61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_clarke, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 18:16:11 compute-0 brave_clarke[310834]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:16:11 compute-0 brave_clarke[310834]: --> All data devices are unavailable
Sep 30 18:16:11 compute-0 systemd[1]: libpod-68bb3769356992fdc4c61aa2d5c25bec7034c01a3c4983cadfce30c7f9977f61.scope: Deactivated successfully.
Sep 30 18:16:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:11 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002d10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:11 compute-0 podman[310818]: 2025-09-30 18:16:11.686960285 +0000 UTC m=+0.591514843 container died 68bb3769356992fdc4c61aa2d5c25bec7034c01a3c4983cadfce30c7f9977f61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 18:16:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d43c68768584f0fbc20a848c575e16b72ad94390eb3e7a59ac445a1a13c4fa4-merged.mount: Deactivated successfully.
Sep 30 18:16:11 compute-0 podman[310818]: 2025-09-30 18:16:11.741543066 +0000 UTC m=+0.646097604 container remove 68bb3769356992fdc4c61aa2d5c25bec7034c01a3c4983cadfce30c7f9977f61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:16:11 compute-0 systemd[1]: libpod-conmon-68bb3769356992fdc4c61aa2d5c25bec7034c01a3c4983cadfce30c7f9977f61.scope: Deactivated successfully.
Sep 30 18:16:11 compute-0 sudo[310688]: pam_unix(sudo:session): session closed for user root
Sep 30 18:16:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 18:16:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:16:11.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 18:16:11 compute-0 sudo[310863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:16:11 compute-0 sudo[310863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:16:11 compute-0 sudo[310863]: pam_unix(sudo:session): session closed for user root
Sep 30 18:16:11 compute-0 sudo[310888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:16:11 compute-0 sudo[310888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:16:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1127: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:16:12 compute-0 ceph-mon[73755]: pgmap v1127: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:16:12 compute-0 podman[310953]: 2025-09-30 18:16:12.432510697 +0000 UTC m=+0.090901188 container create 13d07da35cfc0f0b1ca8d6834f9f0cc30bf44cf4a0b05520576ec7e9af89c2df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_hermann, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:16:12 compute-0 podman[310953]: 2025-09-30 18:16:12.363224103 +0000 UTC m=+0.021614614 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:16:12 compute-0 systemd[1]: Started libpod-conmon-13d07da35cfc0f0b1ca8d6834f9f0cc30bf44cf4a0b05520576ec7e9af89c2df.scope.
Sep 30 18:16:12 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:16:12 compute-0 podman[310953]: 2025-09-30 18:16:12.513969068 +0000 UTC m=+0.172359609 container init 13d07da35cfc0f0b1ca8d6834f9f0cc30bf44cf4a0b05520576ec7e9af89c2df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:16:12 compute-0 podman[310953]: 2025-09-30 18:16:12.521657648 +0000 UTC m=+0.180048139 container start 13d07da35cfc0f0b1ca8d6834f9f0cc30bf44cf4a0b05520576ec7e9af89c2df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:16:12 compute-0 podman[310953]: 2025-09-30 18:16:12.52441048 +0000 UTC m=+0.182800971 container attach 13d07da35cfc0f0b1ca8d6834f9f0cc30bf44cf4a0b05520576ec7e9af89c2df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_hermann, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:16:12 compute-0 practical_hermann[310969]: 167 167
Sep 30 18:16:12 compute-0 systemd[1]: libpod-13d07da35cfc0f0b1ca8d6834f9f0cc30bf44cf4a0b05520576ec7e9af89c2df.scope: Deactivated successfully.
Sep 30 18:16:12 compute-0 podman[310953]: 2025-09-30 18:16:12.531189146 +0000 UTC m=+0.189579637 container died 13d07da35cfc0f0b1ca8d6834f9f0cc30bf44cf4a0b05520576ec7e9af89c2df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_hermann, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:16:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecf365b2ebf70cc5c1e0880b3548a6d596014f7bcb4f842d03e5ae0aee64459f-merged.mount: Deactivated successfully.
Sep 30 18:16:12 compute-0 podman[310953]: 2025-09-30 18:16:12.576773003 +0000 UTC m=+0.235163494 container remove 13d07da35cfc0f0b1ca8d6834f9f0cc30bf44cf4a0b05520576ec7e9af89c2df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:16:12 compute-0 systemd[1]: libpod-conmon-13d07da35cfc0f0b1ca8d6834f9f0cc30bf44cf4a0b05520576ec7e9af89c2df.scope: Deactivated successfully.
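[editor's note] The short-lived practical_hermann container above only prints "167 167", which appears to be a uid/gid probe cephadm runs before invoking ceph-volume (167 is the "ceph" account in the Ceph container image). A hedged sketch of an equivalent probe, which would have to run inside that image (assumption, not taken from the log):

import grp
import pwd

# Assumes a "ceph" account exists, as in the quay.io/ceph/ceph image;
# on that image this prints "167 167", matching the container output above.
user = pwd.getpwnam("ceph")
group = grp.getgrnam("ceph")
print(user.pw_uid, group.gr_gid)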
Sep 30 18:16:12 compute-0 podman[310992]: 2025-09-30 18:16:12.811566797 +0000 UTC m=+0.059945262 container create 18c6da9d38431eb92e25825726f6eea2f248b962b74420aa3f33f88e8ab2857c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_chebyshev, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:16:12 compute-0 systemd[1]: Started libpod-conmon-18c6da9d38431eb92e25825726f6eea2f248b962b74420aa3f33f88e8ab2857c.scope.
Sep 30 18:16:12 compute-0 podman[310992]: 2025-09-30 18:16:12.788700921 +0000 UTC m=+0.037079476 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:16:12 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:16:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b34458360c31829de4f04710d92bff36e69595768b7f2eea077ba0cab6897ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:16:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b34458360c31829de4f04710d92bff36e69595768b7f2eea077ba0cab6897ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:16:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b34458360c31829de4f04710d92bff36e69595768b7f2eea077ba0cab6897ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:16:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b34458360c31829de4f04710d92bff36e69595768b7f2eea077ba0cab6897ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:16:12 compute-0 podman[310992]: 2025-09-30 18:16:12.906970231 +0000 UTC m=+0.155348736 container init 18c6da9d38431eb92e25825726f6eea2f248b962b74420aa3f33f88e8ab2857c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Sep 30 18:16:12 compute-0 podman[310992]: 2025-09-30 18:16:12.916359115 +0000 UTC m=+0.164737580 container start 18c6da9d38431eb92e25825726f6eea2f248b962b74420aa3f33f88e8ab2857c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_chebyshev, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Sep 30 18:16:12 compute-0 podman[310992]: 2025-09-30 18:16:12.919585319 +0000 UTC m=+0.167963824 container attach 18c6da9d38431eb92e25825726f6eea2f248b962b74420aa3f33f88e8ab2857c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_chebyshev, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Sep 30 18:16:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:12 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002d10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:16:13.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
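[editor's note] The anonymous "HEAD / HTTP/1.0" 200 entries from 192.168.122.100/.101 recur roughly every two seconds throughout this excerpt and look like external health probes against the RGW beast frontend. A minimal sketch of such a probe; the target host and port are assumptions, since the log does not show which address the radosgw frontend listens on:

import http.client

# Hypothetical probe mirroring the anonymous HEAD requests logged above.
conn = http.client.HTTPConnection("compute-0.ctlplane.example.com", 8080, timeout=5)
conn.request("HEAD", "/")
resp = conn.getresponse()
print(resp.status)  # the beast lines report 200 with ~0-1 ms latency
conn.close()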
Sep 30 18:16:13 compute-0 magical_chebyshev[311009]: {
Sep 30 18:16:13 compute-0 magical_chebyshev[311009]:     "0": [
Sep 30 18:16:13 compute-0 magical_chebyshev[311009]:         {
Sep 30 18:16:13 compute-0 magical_chebyshev[311009]:             "devices": [
Sep 30 18:16:13 compute-0 magical_chebyshev[311009]:                 "/dev/loop3"
Sep 30 18:16:13 compute-0 magical_chebyshev[311009]:             ],
Sep 30 18:16:13 compute-0 magical_chebyshev[311009]:             "lv_name": "ceph_lv0",
Sep 30 18:16:13 compute-0 magical_chebyshev[311009]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:16:13 compute-0 magical_chebyshev[311009]:             "lv_size": "21470642176",
Sep 30 18:16:13 compute-0 magical_chebyshev[311009]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:16:13 compute-0 magical_chebyshev[311009]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:16:13 compute-0 magical_chebyshev[311009]:             "name": "ceph_lv0",
Sep 30 18:16:13 compute-0 magical_chebyshev[311009]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:16:13 compute-0 magical_chebyshev[311009]:             "tags": {
Sep 30 18:16:13 compute-0 magical_chebyshev[311009]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:16:13 compute-0 magical_chebyshev[311009]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:16:13 compute-0 magical_chebyshev[311009]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:16:13 compute-0 magical_chebyshev[311009]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:16:13 compute-0 magical_chebyshev[311009]:                 "ceph.cluster_name": "ceph",
Sep 30 18:16:13 compute-0 magical_chebyshev[311009]:                 "ceph.crush_device_class": "",
Sep 30 18:16:13 compute-0 magical_chebyshev[311009]:                 "ceph.encrypted": "0",
Sep 30 18:16:13 compute-0 magical_chebyshev[311009]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:16:13 compute-0 magical_chebyshev[311009]:                 "ceph.osd_id": "0",
Sep 30 18:16:13 compute-0 magical_chebyshev[311009]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:16:13 compute-0 magical_chebyshev[311009]:                 "ceph.type": "block",
Sep 30 18:16:13 compute-0 magical_chebyshev[311009]:                 "ceph.vdo": "0",
Sep 30 18:16:13 compute-0 magical_chebyshev[311009]:                 "ceph.with_tpm": "0"
Sep 30 18:16:13 compute-0 magical_chebyshev[311009]:             },
Sep 30 18:16:13 compute-0 magical_chebyshev[311009]:             "type": "block",
Sep 30 18:16:13 compute-0 magical_chebyshev[311009]:             "vg_name": "ceph_vg0"
Sep 30 18:16:13 compute-0 magical_chebyshev[311009]:         }
Sep 30 18:16:13 compute-0 magical_chebyshev[311009]:     ]
Sep 30 18:16:13 compute-0 magical_chebyshev[311009]: }
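[editor's note] The JSON emitted by the magical_chebyshev container is the output of the "ceph-volume ... lvm list --format json" call issued via cephadm above: a map of OSD id to a list of logical volumes with their devices and ceph.* LV tags. A small illustrative parser (not part of cephadm; the input file path is an assumption for the example):

import json

# Map OSD ids to their LV path, backing devices, and OSD fsid
# from a saved copy of the "lvm list --format json" output above.
with open("lvm_list.json") as fh:
    osds = json.load(fh)

for osd_id, volumes in osds.items():
    for vol in volumes:
        print(osd_id, vol["lv_path"], ",".join(vol["devices"]),
              vol["tags"].get("ceph.osd_fsid"))
# For the output above this prints:
# 0 /dev/ceph_vg0/ceph_lv0 /dev/loop3 a6bb176f-a2ce-4022-8226-399d42b79f3f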
Sep 30 18:16:13 compute-0 systemd[1]: libpod-18c6da9d38431eb92e25825726f6eea2f248b962b74420aa3f33f88e8ab2857c.scope: Deactivated successfully.
Sep 30 18:16:13 compute-0 podman[310992]: 2025-09-30 18:16:13.263009981 +0000 UTC m=+0.511388476 container died 18c6da9d38431eb92e25825726f6eea2f248b962b74420aa3f33f88e8ab2857c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_chebyshev, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Sep 30 18:16:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b34458360c31829de4f04710d92bff36e69595768b7f2eea077ba0cab6897ab-merged.mount: Deactivated successfully.
Sep 30 18:16:13 compute-0 podman[310992]: 2025-09-30 18:16:13.332023878 +0000 UTC m=+0.580402383 container remove 18c6da9d38431eb92e25825726f6eea2f248b962b74420aa3f33f88e8ab2857c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_chebyshev, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 18:16:13 compute-0 systemd[1]: libpod-conmon-18c6da9d38431eb92e25825726f6eea2f248b962b74420aa3f33f88e8ab2857c.scope: Deactivated successfully.
Sep 30 18:16:13 compute-0 sudo[310888]: pam_unix(sudo:session): session closed for user root
Sep 30 18:16:13 compute-0 sudo[311032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:16:13 compute-0 sudo[311032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:16:13 compute-0 sudo[311032]: pam_unix(sudo:session): session closed for user root
Sep 30 18:16:13 compute-0 sudo[311058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:16:13 compute-0 sudo[311058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:16:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:13 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:16:13.711Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
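[editor's note] Alertmanager repeatedly fails to deliver the ceph-dashboard webhook to compute-1 with "context deadline exceeded" (the same error recurs later in this excerpt). A quick, hedged reachability check against the receiver URL taken verbatim from the error message; the empty JSON body and timeout value are illustrative only:

import urllib.error
import urllib.request

# Diagnostic sketch: probe the receiver URL reported by Alertmanager above.
url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
req = urllib.request.Request(url, data=b"[]",
                             headers={"Content-Type": "application/json"},
                             method="POST")
try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        print("receiver answered:", resp.status)
except (urllib.error.URLError, TimeoutError) as exc:
    print("receiver unreachable:", exc)  # matches the "context deadline exceeded" symptom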
Sep 30 18:16:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:16:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:16:13.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:16:13 compute-0 podman[311126]: 2025-09-30 18:16:13.982273839 +0000 UTC m=+0.053944195 container create 1051d2d37f815d328da517b3b3561d68ee3cfdbc7c6f48196216f3e3bf0b1b6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_williams, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:16:14 compute-0 systemd[1]: Started libpod-conmon-1051d2d37f815d328da517b3b3561d68ee3cfdbc7c6f48196216f3e3bf0b1b6b.scope.
Sep 30 18:16:14 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:16:14 compute-0 podman[311126]: 2025-09-30 18:16:14.049450498 +0000 UTC m=+0.121120864 container init 1051d2d37f815d328da517b3b3561d68ee3cfdbc7c6f48196216f3e3bf0b1b6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_williams, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:16:14 compute-0 podman[311126]: 2025-09-30 18:16:13.957043192 +0000 UTC m=+0.028713578 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:16:14 compute-0 podman[311126]: 2025-09-30 18:16:14.059820618 +0000 UTC m=+0.131490964 container start 1051d2d37f815d328da517b3b3561d68ee3cfdbc7c6f48196216f3e3bf0b1b6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:16:14 compute-0 podman[311126]: 2025-09-30 18:16:14.06374194 +0000 UTC m=+0.135412316 container attach 1051d2d37f815d328da517b3b3561d68ee3cfdbc7c6f48196216f3e3bf0b1b6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:16:14 compute-0 eager_williams[311142]: 167 167
Sep 30 18:16:14 compute-0 systemd[1]: libpod-1051d2d37f815d328da517b3b3561d68ee3cfdbc7c6f48196216f3e3bf0b1b6b.scope: Deactivated successfully.
Sep 30 18:16:14 compute-0 podman[311126]: 2025-09-30 18:16:14.066628226 +0000 UTC m=+0.138298572 container died 1051d2d37f815d328da517b3b3561d68ee3cfdbc7c6f48196216f3e3bf0b1b6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_williams, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:16:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-86892a33a465a88700f902ed82c605d5aa379bee586f5982ac5e3891a5ca6307-merged.mount: Deactivated successfully.
Sep 30 18:16:14 compute-0 podman[311126]: 2025-09-30 18:16:14.11403487 +0000 UTC m=+0.185705216 container remove 1051d2d37f815d328da517b3b3561d68ee3cfdbc7c6f48196216f3e3bf0b1b6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_williams, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True)
Sep 30 18:16:14 compute-0 systemd[1]: libpod-conmon-1051d2d37f815d328da517b3b3561d68ee3cfdbc7c6f48196216f3e3bf0b1b6b.scope: Deactivated successfully.
Sep 30 18:16:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1128: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:16:14 compute-0 podman[311165]: 2025-09-30 18:16:14.32297426 +0000 UTC m=+0.046953253 container create 911d8c85e28b70838c6ea66619598d181cc41df6c5204ba8fe82cd10c1582e4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Sep 30 18:16:14 compute-0 nova_compute[265391]: 2025-09-30 18:16:14.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:14 compute-0 systemd[1]: Started libpod-conmon-911d8c85e28b70838c6ea66619598d181cc41df6c5204ba8fe82cd10c1582e4d.scope.
Sep 30 18:16:14 compute-0 podman[311165]: 2025-09-30 18:16:14.303397581 +0000 UTC m=+0.027376594 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:16:14 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:16:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/466f429e643961af5be0f75299aae4e168ef2253ae2a438679c3cfb9a97856d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:16:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/466f429e643961af5be0f75299aae4e168ef2253ae2a438679c3cfb9a97856d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:16:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/466f429e643961af5be0f75299aae4e168ef2253ae2a438679c3cfb9a97856d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:16:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/466f429e643961af5be0f75299aae4e168ef2253ae2a438679c3cfb9a97856d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:16:14 compute-0 podman[311165]: 2025-09-30 18:16:14.427243105 +0000 UTC m=+0.151222198 container init 911d8c85e28b70838c6ea66619598d181cc41df6c5204ba8fe82cd10c1582e4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_zhukovsky, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid)
Sep 30 18:16:14 compute-0 podman[311165]: 2025-09-30 18:16:14.440327446 +0000 UTC m=+0.164306439 container start 911d8c85e28b70838c6ea66619598d181cc41df6c5204ba8fe82cd10c1582e4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid)
Sep 30 18:16:14 compute-0 podman[311165]: 2025-09-30 18:16:14.444142745 +0000 UTC m=+0.168121838 container attach 911d8c85e28b70838c6ea66619598d181cc41df6c5204ba8fe82cd10c1582e4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_zhukovsky, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:16:14 compute-0 nova_compute[265391]: 2025-09-30 18:16:14.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:14 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:16:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:16:15.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:16:15 compute-0 lvm[311256]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:16:15 compute-0 lvm[311256]: VG ceph_vg0 finished
Sep 30 18:16:15 compute-0 ceph-mon[73755]: pgmap v1128: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:16:15 compute-0 recursing_zhukovsky[311181]: {}
Sep 30 18:16:15 compute-0 systemd[1]: libpod-911d8c85e28b70838c6ea66619598d181cc41df6c5204ba8fe82cd10c1582e4d.scope: Deactivated successfully.
Sep 30 18:16:15 compute-0 podman[311165]: 2025-09-30 18:16:15.299166558 +0000 UTC m=+1.023145591 container died 911d8c85e28b70838c6ea66619598d181cc41df6c5204ba8fe82cd10c1582e4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_zhukovsky, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:16:15 compute-0 systemd[1]: libpod-911d8c85e28b70838c6ea66619598d181cc41df6c5204ba8fe82cd10c1582e4d.scope: Consumed 1.401s CPU time.
Sep 30 18:16:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-466f429e643961af5be0f75299aae4e168ef2253ae2a438679c3cfb9a97856d4-merged.mount: Deactivated successfully.
Sep 30 18:16:15 compute-0 podman[311165]: 2025-09-30 18:16:15.352849346 +0000 UTC m=+1.076828349 container remove 911d8c85e28b70838c6ea66619598d181cc41df6c5204ba8fe82cd10c1582e4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Sep 30 18:16:15 compute-0 systemd[1]: libpod-conmon-911d8c85e28b70838c6ea66619598d181cc41df6c5204ba8fe82cd10c1582e4d.scope: Deactivated successfully.
Sep 30 18:16:15 compute-0 sudo[311058]: pam_unix(sudo:session): session closed for user root
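[editor's note] The second ceph-volume pass ("raw list --format json", answered with "{}" by the recursing_zhukovsky container) found no non-LVM/raw devices, consistent with the single LVM-backed OSD reported earlier. A purely illustrative sketch of combining the two listings; this is not cephadm's actual logic, and the raw-list keying assumed below is not shown in the log:

# Abridged from the JSON above ("lvm list") and the empty "raw list" result.
lvm = {"0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0", "devices": ["/dev/loop3"]}]}
raw = {}

devices_by_osd = {osd: [v["lv_path"] for v in vols] for osd, vols in lvm.items()}
for osd_fsid, entry in raw.items():  # assumed keying: OSD fsid -> device entry
    devices_by_osd.setdefault(str(entry.get("osd_id")), []).append(entry.get("device"))
print(devices_by_osd)  # {'0': ['/dev/ceph_vg0/ceph_lv0']}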
Sep 30 18:16:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:16:15 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:16:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:16:15 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:16:15 compute-0 sudo[311272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:16:15 compute-0 sudo[311272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:16:15 compute-0 sudo[311272]: pam_unix(sudo:session): session closed for user root
Sep 30 18:16:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:16:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:15 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384000d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:16:15.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1129: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:16:16 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:16:16 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:16:16 compute-0 ceph-mon[73755]: pgmap v1129: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:16:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:16 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002d10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:16:17.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:16:17.186Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:16:17 compute-0 nova_compute[265391]: 2025-09-30 18:16:17.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:17 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:16:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:16:17.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:16:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1130: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:16:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:16:18] "GET /metrics HTTP/1.1" 200 46644 "" "Prometheus/2.51.0"
Sep 30 18:16:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:16:18] "GET /metrics HTTP/1.1" 200 46644 "" "Prometheus/2.51.0"
Sep 30 18:16:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:18 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:16:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:16:19.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:16:19 compute-0 ceph-mon[73755]: pgmap v1130: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:16:19 compute-0 nova_compute[265391]: 2025-09-30 18:16:19.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:19 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:16:19.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:19 compute-0 nova_compute[265391]: 2025-09-30 18:16:19.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1131: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:16:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:16:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:20 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:16:21.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:21 compute-0 ceph-mon[73755]: pgmap v1131: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:16:21 compute-0 podman[311304]: 2025-09-30 18:16:21.555859508 +0000 UTC m=+0.078541586 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:16:21 compute-0 podman[311303]: 2025-09-30 18:16:21.635333347 +0000 UTC m=+0.157221394 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
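[editor's note] The two health_status entries above show edpm-managed containers passing their periodic podman health checks; the podman_exporter config_data publishes port 9882:9882 for Prometheus metrics. A hedged sketch of scraping that endpoint from the host; the plain-HTTP scheme and /metrics path are assumptions, since the exporter is started with a web.config.file that may enforce TLS or auth:

import urllib.request

# Sketch: read the first bytes of the podman_exporter metrics page on the
# published host port (9882, per the config_data shown above).
with urllib.request.urlopen("http://localhost:9882/metrics", timeout=5) as resp:
    print(resp.read(200).decode())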
Sep 30 18:16:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:21 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:16:21.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1132: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:16:22 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:16:22.265 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:16:22 compute-0 nova_compute[265391]: 2025-09-30 18:16:22.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:22 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:16:22.266 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:16:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:16:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
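[editor's note] The audit line records the mgr dispatching an "osd blocklist ls" mon command as part of its periodic refresh. The same query can be issued from the CLI; a small sketch using the standard ceph client (assumes admin credentials are available on the node):

import json
import subprocess

# Equivalent of the mon command dispatched above, issued via the ceph CLI.
out = subprocess.run(["ceph", "osd", "blocklist", "ls", "--format", "json"],
                     capture_output=True, text=True, check=True)
print(json.loads(out.stdout or "[]"))  # list of blocklisted client addresses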
Sep 30 18:16:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:22 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384001820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:16:23.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:23 compute-0 ceph-mon[73755]: pgmap v1132: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:16:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:16:23 compute-0 nova_compute[265391]: 2025-09-30 18:16:23.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:16:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:23 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002d10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:16:23.711Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:16:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:16:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:16:23.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:16:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1133: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:16:24 compute-0 nova_compute[265391]: 2025-09-30 18:16:24.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:24 compute-0 nova_compute[265391]: 2025-09-30 18:16:24.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:24 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:16:25.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:25 compute-0 ceph-mon[73755]: pgmap v1133: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:16:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:16:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:25 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:16:25.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1134: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:16:26 compute-0 nova_compute[265391]: 2025-09-30 18:16:26.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:16:26 compute-0 podman[311358]: 2025-09-30 18:16:26.518780661 +0000 UTC m=+0.058042962 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, tcib_managed=true)
Sep 30 18:16:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:26 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384001820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:16:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:16:27.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:16:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:16:27.188Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:16:27 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:16:27.267 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:16:27 compute-0 ceph-mon[73755]: pgmap v1134: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:16:27 compute-0 nova_compute[265391]: 2025-09-30 18:16:27.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:16:27 compute-0 nova_compute[265391]: 2025-09-30 18:16:27.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:16:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:27 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002d10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:16:27.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:27 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:16:27.903 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:56:60 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-cd077ee2-d26f-4989-8ea7-4aecbac7c636', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd077ee2-d26f-4989-8ea7-4aecbac7c636', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd66b07a980744cd29ee547eb08500706', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=afc38829-13e1-4bde-91a7-790387f17ce5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=44f8f232-e480-4bd7-ad4d-10a4684c061b) old=Port_Binding(mac=['fa:16:3e:9f:56:60'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-cd077ee2-d26f-4989-8ea7-4aecbac7c636', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd077ee2-d26f-4989-8ea7-4aecbac7c636', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd66b07a980744cd29ee547eb08500706', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:16:27 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:16:27.905 166158 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 44f8f232-e480-4bd7-ad4d-10a4684c061b in datapath cd077ee2-d26f-4989-8ea7-4aecbac7c636 updated
Sep 30 18:16:27 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:16:27.906 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cd077ee2-d26f-4989-8ea7-4aecbac7c636, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:16:27 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:16:27.908 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[09808502-cc1a-4235-b502-2449aed068c4]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:16:27 compute-0 nova_compute[265391]: 2025-09-30 18:16:27.946 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:16:27 compute-0 nova_compute[265391]: 2025-09-30 18:16:27.949 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.004s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:16:27 compute-0 nova_compute[265391]: 2025-09-30 18:16:27.950 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:16:27 compute-0 nova_compute[265391]: 2025-09-30 18:16:27.950 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:16:27 compute-0 nova_compute[265391]: 2025-09-30 18:16:27.951 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:16:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1135: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:16:28 compute-0 ceph-mon[73755]: pgmap v1135: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:16:28 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:16:28 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4200513826' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:16:28 compute-0 nova_compute[265391]: 2025-09-30 18:16:28.482 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:16:28 compute-0 nova_compute[265391]: 2025-09-30 18:16:28.675 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:16:28 compute-0 nova_compute[265391]: 2025-09-30 18:16:28.678 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:16:28 compute-0 nova_compute[265391]: 2025-09-30 18:16:28.712 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.034s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:16:28 compute-0 nova_compute[265391]: 2025-09-30 18:16:28.713 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4459MB free_disk=39.992183685302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:16:28 compute-0 nova_compute[265391]: 2025-09-30 18:16:28.713 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:16:28 compute-0 nova_compute[265391]: 2025-09-30 18:16:28.714 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:16:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:16:28] "GET /metrics HTTP/1.1" 200 46632 "" "Prometheus/2.51.0"
Sep 30 18:16:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:16:28] "GET /metrics HTTP/1.1" 200 46632 "" "Prometheus/2.51.0"
Sep 30 18:16:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:28 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:16:29.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:29 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/4200513826' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:16:29 compute-0 nova_compute[265391]: 2025-09-30 18:16:29.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:29 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:29 compute-0 podman[276673]: time="2025-09-30T18:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:16:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:16:29 compute-0 nova_compute[265391]: 2025-09-30 18:16:29.761 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:16:29 compute-0 nova_compute[265391]: 2025-09-30 18:16:29.762 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:16:28 up  1:19,  0 user,  load average: 0.36, 0.73, 0.85\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:16:29 compute-0 nova_compute[265391]: 2025-09-30 18:16:29.783 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:16:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10274 "" "Go-http-client/1.1"
Sep 30 18:16:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:16:29.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:29 compute-0 nova_compute[265391]: 2025-09-30 18:16:29.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1136: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:16:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:16:30 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/261227712' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:16:30 compute-0 nova_compute[265391]: 2025-09-30 18:16:30.264 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:16:30 compute-0 nova_compute[265391]: 2025-09-30 18:16:30.271 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:16:30 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3412734945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:16:30 compute-0 ceph-mon[73755]: pgmap v1136: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:16:30 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/261227712' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:16:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:16:30 compute-0 nova_compute[265391]: 2025-09-30 18:16:30.778 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:16:30 compute-0 sudo[311430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:16:30 compute-0 sudo[311430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:16:30 compute-0 sudo[311430]: pam_unix(sudo:session): session closed for user root
Sep 30 18:16:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:30 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:16:31.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:31 compute-0 nova_compute[265391]: 2025-09-30 18:16:31.292 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:16:31 compute-0 nova_compute[265391]: 2025-09-30 18:16:31.293 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.580s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:16:31 compute-0 openstack_network_exporter[279566]: ERROR   18:16:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:16:31 compute-0 openstack_network_exporter[279566]: ERROR   18:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:16:31 compute-0 openstack_network_exporter[279566]: ERROR   18:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:16:31 compute-0 openstack_network_exporter[279566]: ERROR   18:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:16:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:16:31 compute-0 openstack_network_exporter[279566]: ERROR   18:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:16:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:16:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:31 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002d10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:16:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:16:31.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:16:32 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/919231995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:16:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1137: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:16:32 compute-0 nova_compute[265391]: 2025-09-30 18:16:32.294 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:16:32 compute-0 nova_compute[265391]: 2025-09-30 18:16:32.294 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:16:32 compute-0 nova_compute[265391]: 2025-09-30 18:16:32.294 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:16:32 compute-0 nova_compute[265391]: 2025-09-30 18:16:32.295 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:16:32 compute-0 nova_compute[265391]: 2025-09-30 18:16:32.425 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:16:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:32 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002d10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:33 compute-0 ceph-mon[73755]: pgmap v1137: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:16:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:16:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:16:33.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:16:33 compute-0 podman[311465]: 2025-09-30 18:16:33.547780989 +0000 UTC m=+0.056740098 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-type=git, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, container_name=openstack_network_exporter)
Sep 30 18:16:33 compute-0 podman[311459]: 2025-09-30 18:16:33.547727388 +0000 UTC m=+0.068033733 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Sep 30 18:16:33 compute-0 podman[311458]: 2025-09-30 18:16:33.558268682 +0000 UTC m=+0.083407243 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_id=multipathd, io.buildah.version=1.41.4)
Sep 30 18:16:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:16:33.712Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:16:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:33 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840028c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:16:33.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:33 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 18:16:33 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2401.3 total, 600.0 interval
                                           Cumulative writes: 6864 writes, 30K keys, 6862 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 6864 writes, 6862 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1541 writes, 6469 keys, 1541 commit groups, 1.0 writes per commit group, ingest: 10.98 MB, 0.02 MB/s
                                           Interval WAL: 1541 writes, 1541 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     82.0      0.51              0.11        17    0.030       0      0       0.0       0.0
                                             L6      1/0   10.51 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.2    189.7    160.9      1.09              0.46        16    0.068     82K   8809       0.0       0.0
                                            Sum      1/0   10.51 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.2    129.1    135.7      1.60              0.57        33    0.048     82K   8809       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   6.0    194.6    193.7      0.22              0.13         6    0.037     18K   2054       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    189.7    160.9      1.09              0.46        16    0.068     82K   8809       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    155.4      0.27              0.11        16    0.017       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.24              0.00         1    0.242       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2401.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.041, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.21 GB write, 0.09 MB/s write, 0.20 GB read, 0.09 MB/s read, 1.6 seconds
                                           Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e76de37350#2 capacity: 304.00 MB usage: 18.58 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000187 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1079,17.95 MB,5.90555%) FilterBlock(34,232.86 KB,0.0748032%) IndexBlock(34,413.22 KB,0.132741%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Sep 30 18:16:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1138: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:16:34 compute-0 nova_compute[265391]: 2025-09-30 18:16:34.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:34 compute-0 nova_compute[265391]: 2025-09-30 18:16:34.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:34 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:35 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:16:35.076 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:1f:d0 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-d4d535c5-48f8-49f7-a49c-3e4867725c0f', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d4d535c5-48f8-49f7-a49c-3e4867725c0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eddde596e2d64cec889cb4c4d3642bc5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b05dbbe6-2588-4c99-ab02-108967f64b97, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=fb03bb2d-775c-44cf-a24f-e3e4a7cac8fd) old=Port_Binding(mac=['fa:16:3e:25:1f:d0'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-d4d535c5-48f8-49f7-a49c-3e4867725c0f', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d4d535c5-48f8-49f7-a49c-3e4867725c0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eddde596e2d64cec889cb4c4d3642bc5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:16:35 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:16:35.077 166158 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port fb03bb2d-775c-44cf-a24f-e3e4a7cac8fd in datapath d4d535c5-48f8-49f7-a49c-3e4867725c0f updated
Sep 30 18:16:35 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:16:35.078 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d4d535c5-48f8-49f7-a49c-3e4867725c0f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:16:35 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:16:35.079 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[43b9ec58-e438-403f-bb33-2b978764ef65]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:16:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:16:35.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:35 compute-0 ceph-mon[73755]: pgmap v1138: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:16:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:16:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:35 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002d10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:16:35.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1139: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:16:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:36 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4001d70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:16:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:16:37.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:16:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:16:37.189Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:16:37 compute-0 ceph-mon[73755]: pgmap v1139: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:16:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1462816782' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:16:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1462816782' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:16:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:16:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:16:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:16:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:16:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:16:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:16:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:16:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:16:37 compute-0 unix_chkpwd[311522]: password check failed for user (root)
Sep 30 18:16:37 compute-0 sshd-session[311519]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158  user=root
Sep 30 18:16:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:37 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840028c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:16:37.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1140: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:16:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:16:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:16:38] "GET /metrics HTTP/1.1" 200 46632 "" "Prometheus/2.51.0"
Sep 30 18:16:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:16:38] "GET /metrics HTTP/1.1" 200 46632 "" "Prometheus/2.51.0"
Sep 30 18:16:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:38 compute-0 sshd-session[311519]: Failed password for root from 45.252.249.158 port 60916 ssh2
Sep 30 18:16:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:16:39.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:39 compute-0 ceph-mon[73755]: pgmap v1140: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:16:39 compute-0 nova_compute[265391]: 2025-09-30 18:16:39.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:39 compute-0 sshd-session[311524]: Invalid user tim from 14.225.220.107 port 34580
Sep 30 18:16:39 compute-0 sshd-session[311524]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:16:39 compute-0 sshd-session[311524]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107
Sep 30 18:16:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:39 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002d10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:16:39.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:39 compute-0 nova_compute[265391]: 2025-09-30 18:16:39.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1141: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:16:40 compute-0 sshd-session[311519]: Received disconnect from 45.252.249.158 port 60916:11: Bye Bye [preauth]
Sep 30 18:16:40 compute-0 sshd-session[311519]: Disconnected from authenticating user root 45.252.249.158 port 60916 [preauth]
Sep 30 18:16:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:16:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:40 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4001d70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:16:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:16:41.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:16:41 compute-0 ceph-mon[73755]: pgmap v1141: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:16:41 compute-0 sshd-session[311524]: Failed password for invalid user tim from 14.225.220.107 port 34580 ssh2
Sep 30 18:16:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:41 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840028c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:16:41.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1142: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:16:42 compute-0 ceph-mon[73755]: pgmap v1142: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:16:42 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Sep 30 18:16:42 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:16:42.354527) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 18:16:42 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Sep 30 18:16:42 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256202354579, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2106, "num_deletes": 252, "total_data_size": 3732645, "memory_usage": 3787184, "flush_reason": "Manual Compaction"}
Sep 30 18:16:42 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Sep 30 18:16:42 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256202372123, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 3640207, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28988, "largest_seqno": 31093, "table_properties": {"data_size": 3630944, "index_size": 5693, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19917, "raw_average_key_size": 20, "raw_value_size": 3612101, "raw_average_value_size": 3704, "num_data_blocks": 248, "num_entries": 975, "num_filter_entries": 975, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759256000, "oldest_key_time": 1759256000, "file_creation_time": 1759256202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:16:42 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 17652 microseconds, and 9620 cpu microseconds.
Sep 30 18:16:42 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:16:42 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:16:42.372183) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 3640207 bytes OK
Sep 30 18:16:42 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:16:42.372209) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Sep 30 18:16:42 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:16:42.374248) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Sep 30 18:16:42 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:16:42.374268) EVENT_LOG_v1 {"time_micros": 1759256202374262, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 18:16:42 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:16:42.374293) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 18:16:42 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 3723951, prev total WAL file size 3723951, number of live WAL files 2.
Sep 30 18:16:42 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:16:42 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:16:42.376011) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Sep 30 18:16:42 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 18:16:42 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(3554KB)], [65(10MB)]
Sep 30 18:16:42 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256202376074, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 14656919, "oldest_snapshot_seqno": -1}
Sep 30 18:16:42 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 5993 keys, 12605658 bytes, temperature: kUnknown
Sep 30 18:16:42 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256202427092, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 12605658, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12565870, "index_size": 23678, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15045, "raw_key_size": 153085, "raw_average_key_size": 25, "raw_value_size": 12458089, "raw_average_value_size": 2078, "num_data_blocks": 956, "num_entries": 5993, "num_filter_entries": 5993, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759256202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:16:42 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:16:42 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:16:42.428896) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 12605658 bytes
Sep 30 18:16:42 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:16:42.431176) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 286.8 rd, 246.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 10.5 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(7.5) write-amplify(3.5) OK, records in: 6515, records dropped: 522 output_compression: NoCompression
Sep 30 18:16:42 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:16:42.431196) EVENT_LOG_v1 {"time_micros": 1759256202431188, "job": 36, "event": "compaction_finished", "compaction_time_micros": 51098, "compaction_time_cpu_micros": 28405, "output_level": 6, "num_output_files": 1, "total_output_size": 12605658, "num_input_records": 6515, "num_output_records": 5993, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 18:16:42 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:16:42 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256202431893, "job": 36, "event": "table_file_deletion", "file_number": 67}
Sep 30 18:16:42 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:16:42 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256202433965, "job": 36, "event": "table_file_deletion", "file_number": 65}
Sep 30 18:16:42 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:16:42.375864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:16:42 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:16:42.434097) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:16:42 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:16:42.434104) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:16:42 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:16:42.434107) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:16:42 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:16:42.434110) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:16:42 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:16:42.434113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:16:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:42 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:42 compute-0 sshd-session[311524]: Received disconnect from 14.225.220.107 port 34580:11: Bye Bye [preauth]
Sep 30 18:16:42 compute-0 sshd-session[311524]: Disconnected from invalid user tim 14.225.220.107 port 34580 [preauth]
Sep 30 18:16:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:16:43.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:16:43.712Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:16:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:43 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002d10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:16:43.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1143: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:16:44 compute-0 nova_compute[265391]: 2025-09-30 18:16:44.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:44 compute-0 nova_compute[265391]: 2025-09-30 18:16:44.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:44 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03980014d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:16:45.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:45 compute-0 ceph-mon[73755]: pgmap v1143: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:16:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:16:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:45 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4001d70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:45 compute-0 nova_compute[265391]: 2025-09-30 18:16:45.733 2 DEBUG oslo_concurrency.lockutils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Acquiring lock "26cb264e-21ae-425e-b5db-a6d24a90b6ca" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:16:45 compute-0 nova_compute[265391]: 2025-09-30 18:16:45.734 2 DEBUG oslo_concurrency.lockutils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Lock "26cb264e-21ae-425e-b5db-a6d24a90b6ca" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:16:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:16:45.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1144: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:16:46 compute-0 nova_compute[265391]: 2025-09-30 18:16:46.239 2 DEBUG nova.compute.manager [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Sep 30 18:16:46 compute-0 nova_compute[265391]: 2025-09-30 18:16:46.793 2 DEBUG oslo_concurrency.lockutils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:16:46 compute-0 nova_compute[265391]: 2025-09-30 18:16:46.794 2 DEBUG oslo_concurrency.lockutils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:16:46 compute-0 nova_compute[265391]: 2025-09-30 18:16:46.801 2 DEBUG nova.virt.hardware [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Sep 30 18:16:46 compute-0 nova_compute[265391]: 2025-09-30 18:16:46.802 2 INFO nova.compute.claims [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Claim successful on node compute-0.ctlplane.example.com
Sep 30 18:16:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:46 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:16:47.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:16:47.190Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:16:47 compute-0 ceph-mon[73755]: pgmap v1144: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:16:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:47 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002d10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:47 compute-0 nova_compute[265391]: 2025-09-30 18:16:47.854 2 DEBUG oslo_concurrency.processutils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:16:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:16:47.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1145: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:16:48 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:16:48 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/722398944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:16:48 compute-0 nova_compute[265391]: 2025-09-30 18:16:48.310 2 DEBUG oslo_concurrency.processutils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:16:48 compute-0 nova_compute[265391]: 2025-09-30 18:16:48.316 2 DEBUG nova.compute.provider_tree [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:16:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:16:48] "GET /metrics HTTP/1.1" 200 46632 "" "Prometheus/2.51.0"
Sep 30 18:16:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:16:48] "GET /metrics HTTP/1.1" 200 46632 "" "Prometheus/2.51.0"
Sep 30 18:16:48 compute-0 nova_compute[265391]: 2025-09-30 18:16:48.828 2 DEBUG nova.scheduler.client.report [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:16:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:48 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4001d70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:16:49.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:49 compute-0 ceph-mon[73755]: pgmap v1145: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:16:49 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/722398944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:16:49 compute-0 nova_compute[265391]: 2025-09-30 18:16:49.344 2 DEBUG oslo_concurrency.lockutils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.550s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:16:49 compute-0 nova_compute[265391]: 2025-09-30 18:16:49.344 2 DEBUG nova.compute.manager [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Sep 30 18:16:49 compute-0 nova_compute[265391]: 2025-09-30 18:16:49.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:49 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4001d70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:49 compute-0 nova_compute[265391]: 2025-09-30 18:16:49.858 2 DEBUG nova.compute.manager [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Sep 30 18:16:49 compute-0 nova_compute[265391]: 2025-09-30 18:16:49.858 2 DEBUG nova.network.neutron [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Sep 30 18:16:49 compute-0 nova_compute[265391]: 2025-09-30 18:16:49.859 2 WARNING neutronclient.v2_0.client [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:16:49 compute-0 nova_compute[265391]: 2025-09-30 18:16:49.860 2 WARNING neutronclient.v2_0.client [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:16:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:16:49.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:49 compute-0 nova_compute[265391]: 2025-09-30 18:16:49.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1146: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:16:50 compute-0 nova_compute[265391]: 2025-09-30 18:16:50.370 2 INFO nova.virt.libvirt.driver [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Sep 30 18:16:50 compute-0 ovn_controller[156242]: 2025-09-30T18:16:50Z|00093|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Sep 30 18:16:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:16:50 compute-0 nova_compute[265391]: 2025-09-30 18:16:50.880 2 DEBUG nova.compute.manager [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Sep 30 18:16:50 compute-0 sudo[311562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:16:50 compute-0 sudo[311562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:16:50 compute-0 sudo[311562]: pam_unix(sudo:session): session closed for user root
Sep 30 18:16:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:50 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:16:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:16:51.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:16:51 compute-0 ceph-mon[73755]: pgmap v1146: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:16:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:51 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002d10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:51 compute-0 nova_compute[265391]: 2025-09-30 18:16:51.856 2 DEBUG nova.network.neutron [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Successfully created port: e8db1337-aad5-4a75-89bc-1526d5c83cc6 _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Sep 30 18:16:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:16:51.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:51 compute-0 nova_compute[265391]: 2025-09-30 18:16:51.905 2 DEBUG nova.compute.manager [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Sep 30 18:16:51 compute-0 nova_compute[265391]: 2025-09-30 18:16:51.907 2 DEBUG nova.virt.libvirt.driver [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Sep 30 18:16:51 compute-0 nova_compute[265391]: 2025-09-30 18:16:51.907 2 INFO nova.virt.libvirt.driver [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Creating image(s)
Sep 30 18:16:51 compute-0 nova_compute[265391]: 2025-09-30 18:16:51.937 2 DEBUG nova.storage.rbd_utils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] rbd image 26cb264e-21ae-425e-b5db-a6d24a90b6ca_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:16:51 compute-0 nova_compute[265391]: 2025-09-30 18:16:51.970 2 DEBUG nova.storage.rbd_utils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] rbd image 26cb264e-21ae-425e-b5db-a6d24a90b6ca_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:16:51 compute-0 nova_compute[265391]: 2025-09-30 18:16:51.998 2 DEBUG nova.storage.rbd_utils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] rbd image 26cb264e-21ae-425e-b5db-a6d24a90b6ca_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:16:52 compute-0 nova_compute[265391]: 2025-09-30 18:16:52.002 2 DEBUG oslo_concurrency.processutils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:16:52 compute-0 nova_compute[265391]: 2025-09-30 18:16:52.059 2 DEBUG oslo_concurrency.processutils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:16:52 compute-0 nova_compute[265391]: 2025-09-30 18:16:52.061 2 DEBUG oslo_concurrency.lockutils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Acquiring lock "cb2d580238c9b109feae7f1462613dc547671457" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:16:52 compute-0 nova_compute[265391]: 2025-09-30 18:16:52.062 2 DEBUG oslo_concurrency.lockutils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:16:52 compute-0 nova_compute[265391]: 2025-09-30 18:16:52.063 2 DEBUG oslo_concurrency.lockutils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:16:52 compute-0 nova_compute[265391]: 2025-09-30 18:16:52.103 2 DEBUG nova.storage.rbd_utils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] rbd image 26cb264e-21ae-425e-b5db-a6d24a90b6ca_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:16:52 compute-0 nova_compute[265391]: 2025-09-30 18:16:52.108 2 DEBUG oslo_concurrency.processutils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 26cb264e-21ae-425e-b5db-a6d24a90b6ca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:16:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1147: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:16:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:16:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:16:52 compute-0 ceph-mon[73755]: pgmap v1147: 353 pgs: 353 active+clean; 41 MiB data, 180 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:16:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:16:52 compute-0 nova_compute[265391]: 2025-09-30 18:16:52.401 2 DEBUG oslo_concurrency.processutils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 26cb264e-21ae-425e-b5db-a6d24a90b6ca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:16:52 compute-0 nova_compute[265391]: 2025-09-30 18:16:52.496 2 DEBUG nova.storage.rbd_utils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] resizing rbd image 26cb264e-21ae-425e-b5db-a6d24a90b6ca_disk to 1073741824 resize /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:288
Sep 30 18:16:52 compute-0 podman[311718]: 2025-09-30 18:16:52.534292346 +0000 UTC m=+0.064510421 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:16:52 compute-0 nova_compute[265391]: 2025-09-30 18:16:52.539 2 DEBUG nova.network.neutron [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Successfully updated port: e8db1337-aad5-4a75-89bc-1526d5c83cc6 _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Sep 30 18:16:52 compute-0 podman[311701]: 2025-09-30 18:16:52.565914629 +0000 UTC m=+0.097114889 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_id=ovn_controller, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4)
Sep 30 18:16:52 compute-0 nova_compute[265391]: 2025-09-30 18:16:52.621 2 DEBUG nova.compute.manager [req-821efc76-aaa6-4044-bb3e-2a69cffcb247 req-c14f71d5-4149-4147-88b8-92340484d162 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Received event network-changed-e8db1337-aad5-4a75-89bc-1526d5c83cc6 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:16:52 compute-0 nova_compute[265391]: 2025-09-30 18:16:52.621 2 DEBUG nova.compute.manager [req-821efc76-aaa6-4044-bb3e-2a69cffcb247 req-c14f71d5-4149-4147-88b8-92340484d162 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Refreshing instance network info cache due to event network-changed-e8db1337-aad5-4a75-89bc-1526d5c83cc6. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:16:52 compute-0 nova_compute[265391]: 2025-09-30 18:16:52.622 2 DEBUG oslo_concurrency.lockutils [req-821efc76-aaa6-4044-bb3e-2a69cffcb247 req-c14f71d5-4149-4147-88b8-92340484d162 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-26cb264e-21ae-425e-b5db-a6d24a90b6ca" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:16:52 compute-0 nova_compute[265391]: 2025-09-30 18:16:52.622 2 DEBUG oslo_concurrency.lockutils [req-821efc76-aaa6-4044-bb3e-2a69cffcb247 req-c14f71d5-4149-4147-88b8-92340484d162 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-26cb264e-21ae-425e-b5db-a6d24a90b6ca" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:16:52 compute-0 nova_compute[265391]: 2025-09-30 18:16:52.622 2 DEBUG nova.network.neutron [req-821efc76-aaa6-4044-bb3e-2a69cffcb247 req-c14f71d5-4149-4147-88b8-92340484d162 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Refreshing network info cache for port e8db1337-aad5-4a75-89bc-1526d5c83cc6 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:16:52 compute-0 nova_compute[265391]: 2025-09-30 18:16:52.628 2 DEBUG nova.virt.libvirt.driver [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Sep 30 18:16:52 compute-0 nova_compute[265391]: 2025-09-30 18:16:52.629 2 DEBUG nova.virt.libvirt.driver [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Ensure instance console log exists: /var/lib/nova/instances/26cb264e-21ae-425e-b5db-a6d24a90b6ca/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Sep 30 18:16:52 compute-0 nova_compute[265391]: 2025-09-30 18:16:52.630 2 DEBUG oslo_concurrency.lockutils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:16:52 compute-0 nova_compute[265391]: 2025-09-30 18:16:52.630 2 DEBUG oslo_concurrency.lockutils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:16:52 compute-0 nova_compute[265391]: 2025-09-30 18:16:52.630 2 DEBUG oslo_concurrency.lockutils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:16:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:52 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398002280 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:53 compute-0 nova_compute[265391]: 2025-09-30 18:16:53.049 2 DEBUG oslo_concurrency.lockutils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Acquiring lock "refresh_cache-26cb264e-21ae-425e-b5db-a6d24a90b6ca" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:16:53 compute-0 nova_compute[265391]: 2025-09-30 18:16:53.132 2 WARNING neutronclient.v2_0.client [req-821efc76-aaa6-4044-bb3e-2a69cffcb247 req-c14f71d5-4149-4147-88b8-92340484d162 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:16:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:16:53.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:53 compute-0 nova_compute[265391]: 2025-09-30 18:16:53.247 2 DEBUG nova.network.neutron [req-821efc76-aaa6-4044-bb3e-2a69cffcb247 req-c14f71d5-4149-4147-88b8-92340484d162 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:16:53 compute-0 nova_compute[265391]: 2025-09-30 18:16:53.411 2 DEBUG nova.network.neutron [req-821efc76-aaa6-4044-bb3e-2a69cffcb247 req-c14f71d5-4149-4147-88b8-92340484d162 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:16:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:16:53.712Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:16:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:53 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4009990 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:16:53.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:53 compute-0 nova_compute[265391]: 2025-09-30 18:16:53.917 2 DEBUG oslo_concurrency.lockutils [req-821efc76-aaa6-4044-bb3e-2a69cffcb247 req-c14f71d5-4149-4147-88b8-92340484d162 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-26cb264e-21ae-425e-b5db-a6d24a90b6ca" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:16:53 compute-0 nova_compute[265391]: 2025-09-30 18:16:53.918 2 DEBUG oslo_concurrency.lockutils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Acquired lock "refresh_cache-26cb264e-21ae-425e-b5db-a6d24a90b6ca" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:16:53 compute-0 nova_compute[265391]: 2025-09-30 18:16:53.919 2 DEBUG nova.network.neutron [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:16:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1148: 353 pgs: 353 active+clean; 88 MiB data, 201 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:16:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:16:54.290 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:16:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:16:54.290 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:16:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:16:54.290 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:16:54 compute-0 nova_compute[265391]: 2025-09-30 18:16:54.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:54 compute-0 ceph-mon[73755]: pgmap v1148: 353 pgs: 353 active+clean; 88 MiB data, 201 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:16:54 compute-0 nova_compute[265391]: 2025-09-30 18:16:54.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:54 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003c30 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:54 compute-0 nova_compute[265391]: 2025-09-30 18:16:54.984 2 DEBUG nova.network.neutron [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:16:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:16:55.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:55 compute-0 nova_compute[265391]: 2025-09-30 18:16:55.197 2 WARNING neutronclient.v2_0.client [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:16:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:16:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:55 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002d10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:16:55.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:55 compute-0 nova_compute[265391]: 2025-09-30 18:16:55.978 2 DEBUG nova.network.neutron [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Updating instance_info_cache with network_info: [{"id": "e8db1337-aad5-4a75-89bc-1526d5c83cc6", "address": "fa:16:3e:a5:3e:ec", "network": {"id": "cd077ee2-d26f-4989-8ea7-4aecbac7c636", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1165176741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d66b07a980744cd29ee547eb08500706", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8db1337-aa", "ovs_interfaceid": "e8db1337-aad5-4a75-89bc-1526d5c83cc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:16:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1149: 353 pgs: 353 active+clean; 88 MiB data, 201 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:16:56 compute-0 nova_compute[265391]: 2025-09-30 18:16:56.485 2 DEBUG oslo_concurrency.lockutils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Releasing lock "refresh_cache-26cb264e-21ae-425e-b5db-a6d24a90b6ca" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:16:56 compute-0 nova_compute[265391]: 2025-09-30 18:16:56.486 2 DEBUG nova.compute.manager [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Instance network_info: |[{"id": "e8db1337-aad5-4a75-89bc-1526d5c83cc6", "address": "fa:16:3e:a5:3e:ec", "network": {"id": "cd077ee2-d26f-4989-8ea7-4aecbac7c636", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1165176741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d66b07a980744cd29ee547eb08500706", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8db1337-aa", "ovs_interfaceid": "e8db1337-aad5-4a75-89bc-1526d5c83cc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Sep 30 18:16:56 compute-0 nova_compute[265391]: 2025-09-30 18:16:56.489 2 DEBUG nova.virt.libvirt.driver [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Start _get_guest_xml network_info=[{"id": "e8db1337-aad5-4a75-89bc-1526d5c83cc6", "address": "fa:16:3e:a5:3e:ec", "network": {"id": "cd077ee2-d26f-4989-8ea7-4aecbac7c636", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1165176741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d66b07a980744cd29ee547eb08500706", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8db1337-aa", "ovs_interfaceid": "e8db1337-aad5-4a75-89bc-1526d5c83cc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'image_id': '5b99cbca-b655-4be5-8343-cf504005c42e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Sep 30 18:16:56 compute-0 nova_compute[265391]: 2025-09-30 18:16:56.493 2 WARNING nova.virt.libvirt.driver [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:16:56 compute-0 nova_compute[265391]: 2025-09-30 18:16:56.495 2 DEBUG nova.virt.driver [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='5b99cbca-b655-4be5-8343-cf504005c42e', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteBasicStrategy-server-1754120251', uuid='26cb264e-21ae-425e-b5db-a6d24a90b6ca'), owner=OwnerMeta(userid='54973270e5a040c8af5ec2225e3caec8', username='tempest-TestExecuteBasicStrategy-1755756413-project-admin', projectid='eddde596e2d64cec889cb4c4d3642bc5', projectname='tempest-TestExecuteBasicStrategy-1755756413'), image=ImageMeta(id='5b99cbca-b655-4be5-8343-cf504005c42e', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "e8db1337-aad5-4a75-89bc-1526d5c83cc6", "address": "fa:16:3e:a5:3e:ec", "network": {"id": "cd077ee2-d26f-4989-8ea7-4aecbac7c636", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1165176741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d66b07a980744cd29ee547eb08500706", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8db1337-aa", "ovs_interfaceid": "e8db1337-aad5-4a75-89bc-1526d5c83cc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20250919142712.b99a882.el10', creation_time=1759256216.494977) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Sep 30 18:16:56 compute-0 nova_compute[265391]: 2025-09-30 18:16:56.499 2 DEBUG nova.virt.libvirt.host [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Sep 30 18:16:56 compute-0 nova_compute[265391]: 2025-09-30 18:16:56.500 2 DEBUG nova.virt.libvirt.host [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Sep 30 18:16:56 compute-0 nova_compute[265391]: 2025-09-30 18:16:56.503 2 DEBUG nova.virt.libvirt.host [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Sep 30 18:16:56 compute-0 nova_compute[265391]: 2025-09-30 18:16:56.503 2 DEBUG nova.virt.libvirt.host [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
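The two probes above (CGroups V1 missing, CGroups V2 present) come down to whether the host exposes a cpu controller. On a cgroup-v2 host that is visible in /sys/fs/cgroup/cgroup.controllers, which is the standard kernel interface; the sketch below is an illustrative standalone check, not the nova code path itself.

    # Sketch: detect whether the unified cgroup-v2 hierarchy exposes the "cpu" controller.
    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        controllers_file = Path(root) / "cgroup.controllers"
        if not controllers_file.exists():
            return False  # not a cgroup-v2 (unified) host
        return "cpu" in controllers_file.read_text().split()

    if __name__ == "__main__":
        print("cgroup-v2 cpu controller present:", has_cgroupsv2_cpu_controller())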
Sep 30 18:16:56 compute-0 nova_compute[265391]: 2025-09-30 18:16:56.504 2 DEBUG nova.virt.libvirt.driver [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Sep 30 18:16:56 compute-0 nova_compute[265391]: 2025-09-30 18:16:56.504 2 DEBUG nova.virt.hardware [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-09-30T18:04:50Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Sep 30 18:16:56 compute-0 nova_compute[265391]: 2025-09-30 18:16:56.504 2 DEBUG nova.virt.hardware [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Sep 30 18:16:56 compute-0 nova_compute[265391]: 2025-09-30 18:16:56.504 2 DEBUG nova.virt.hardware [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Sep 30 18:16:56 compute-0 nova_compute[265391]: 2025-09-30 18:16:56.505 2 DEBUG nova.virt.hardware [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Sep 30 18:16:56 compute-0 nova_compute[265391]: 2025-09-30 18:16:56.505 2 DEBUG nova.virt.hardware [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Sep 30 18:16:56 compute-0 nova_compute[265391]: 2025-09-30 18:16:56.505 2 DEBUG nova.virt.hardware [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Sep 30 18:16:56 compute-0 nova_compute[265391]: 2025-09-30 18:16:56.505 2 DEBUG nova.virt.hardware [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Sep 30 18:16:56 compute-0 nova_compute[265391]: 2025-09-30 18:16:56.505 2 DEBUG nova.virt.hardware [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Sep 30 18:16:56 compute-0 nova_compute[265391]: 2025-09-30 18:16:56.505 2 DEBUG nova.virt.hardware [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Sep 30 18:16:56 compute-0 nova_compute[265391]: 2025-09-30 18:16:56.506 2 DEBUG nova.virt.hardware [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Sep 30 18:16:56 compute-0 nova_compute[265391]: 2025-09-30 18:16:56.506 2 DEBUG nova.virt.hardware [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
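The topology walk above (1 vCPU, no flavor or image limits, preferred 0:0:0 against a 65536:65536:65536 ceiling, yielding the single candidate 1 socket / 1 core / 1 thread) can be reproduced with a small enumeration. This is an illustrative re-derivation rather than nova's implementation; the 65536 defaults mirror the limits logged above.

    # Sketch: enumerate (sockets, cores, threads) triples whose product equals the vCPU count,
    # subject to per-dimension maxima. With vcpus=1 the only candidate is (1, 1, 1), matching
    # "Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)]" above.
    from itertools import product

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        topologies = []
        for sockets, cores, threads in product(range(1, min(vcpus, max_sockets) + 1),
                                               range(1, min(vcpus, max_cores) + 1),
                                               range(1, min(vcpus, max_threads) + 1)):
            if sockets * cores * threads == vcpus:
                topologies.append((sockets, cores, threads))
        return topologies

    print(possible_topologies(1))   # [(1, 1, 1)]
    print(possible_topologies(4))   # (1, 1, 4), (1, 2, 2), (2, 2, 1), (4, 1, 1), ...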
Sep 30 18:16:56 compute-0 nova_compute[265391]: 2025-09-30 18:16:56.508 2 DEBUG oslo_concurrency.processutils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:16:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:56 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398002280 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:16:56 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2973524139' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:16:57 compute-0 nova_compute[265391]: 2025-09-30 18:16:57.015 2 DEBUG oslo_concurrency.processutils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
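The "ceph mon dump --format=json" call above is how the RBD backend discovers monitor addresses before the <source protocol="rbd"> host list is written into the guest XML further down. A hedged standalone equivalent follows; the command and flags are copied from the log, while the JSON field names assume the usual "mons"/"addr" layout of ceph mon dump output.

    # Sketch: run the same "ceph mon dump" command seen in the log and extract monitor IP:port pairs.
    import json
    import subprocess

    def get_mon_addrs(client="openstack", conf="/etc/ceph/ceph.conf"):
        cmd = ["ceph", "mon", "dump", "--format=json", "--id", client, "--conf", conf]
        out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
        mon_dump = json.loads(out)
        # Each mon entry carries an "addr" such as "192.168.122.100:6789/0".
        return [mon["addr"].split("/")[0] for mon in mon_dump.get("mons", [])]

    if __name__ == "__main__":
        print(get_mon_addrs())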
Sep 30 18:16:57 compute-0 nova_compute[265391]: 2025-09-30 18:16:57.049 2 DEBUG nova.storage.rbd_utils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] rbd image 26cb264e-21ae-425e-b5db-a6d24a90b6ca_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:16:57 compute-0 nova_compute[265391]: 2025-09-30 18:16:57.053 2 DEBUG oslo_concurrency.processutils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:16:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:16:57.191Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:16:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:16:57.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:57 compute-0 ceph-mon[73755]: pgmap v1149: 353 pgs: 353 active+clean; 88 MiB data, 201 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:16:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2973524139' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:16:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:16:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/118095084' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:16:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:16:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/118095084' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:16:57 compute-0 podman[311869]: 2025-09-30 18:16:57.537898657 +0000 UTC m=+0.070964889 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.build-date=20250930)
Sep 30 18:16:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:16:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/800863491' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:16:57 compute-0 nova_compute[265391]: 2025-09-30 18:16:57.587 2 DEBUG oslo_concurrency.processutils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:16:57 compute-0 nova_compute[265391]: 2025-09-30 18:16:57.589 2 DEBUG nova.virt.libvirt.vif [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:16:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteBasicStrategy-server-1754120251',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutebasicstrategy-server-1754120251',id=10,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eddde596e2d64cec889cb4c4d3642bc5',ramdisk_id='',reservation_id='r-ks0ceyno',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteBasicStrategy-1755756413',owner_user_name='tempest-TestExecuteBasicStrategy-1755756413-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:16:50Z,user_data=None,user_id='54973270e5a040c8af5ec2225e3caec8',uuid=26cb264e-21ae-425e-b5db-a6d24a90b6ca,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e8db1337-aad5-4a75-89bc-1526d5c83cc6", "address": "fa:16:3e:a5:3e:ec", "network": {"id": "cd077ee2-d26f-4989-8ea7-4aecbac7c636", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1165176741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d66b07a980744cd29ee547eb08500706", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8db1337-aa", "ovs_interfaceid": "e8db1337-aad5-4a75-89bc-1526d5c83cc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:16:57 compute-0 nova_compute[265391]: 2025-09-30 18:16:57.589 2 DEBUG nova.network.os_vif_util [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Converting VIF {"id": "e8db1337-aad5-4a75-89bc-1526d5c83cc6", "address": "fa:16:3e:a5:3e:ec", "network": {"id": "cd077ee2-d26f-4989-8ea7-4aecbac7c636", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1165176741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d66b07a980744cd29ee547eb08500706", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8db1337-aa", "ovs_interfaceid": "e8db1337-aad5-4a75-89bc-1526d5c83cc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:16:57 compute-0 nova_compute[265391]: 2025-09-30 18:16:57.590 2 DEBUG nova.network.os_vif_util [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:3e:ec,bridge_name='br-int',has_traffic_filtering=True,id=e8db1337-aad5-4a75-89bc-1526d5c83cc6,network=Network(cd077ee2-d26f-4989-8ea7-4aecbac7c636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8db1337-aa') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:16:57 compute-0 nova_compute[265391]: 2025-09-30 18:16:57.591 2 DEBUG nova.objects.instance [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 26cb264e-21ae-425e-b5db-a6d24a90b6ca obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:16:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:57 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4009990 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:16:57.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:58 compute-0 nova_compute[265391]: 2025-09-30 18:16:58.101 2 DEBUG nova.virt.libvirt.driver [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] End _get_guest_xml xml=<domain type="kvm">
Sep 30 18:16:58 compute-0 nova_compute[265391]:   <uuid>26cb264e-21ae-425e-b5db-a6d24a90b6ca</uuid>
Sep 30 18:16:58 compute-0 nova_compute[265391]:   <name>instance-0000000a</name>
Sep 30 18:16:58 compute-0 nova_compute[265391]:   <memory>131072</memory>
Sep 30 18:16:58 compute-0 nova_compute[265391]:   <vcpu>1</vcpu>
Sep 30 18:16:58 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:16:58 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteBasicStrategy-server-1754120251</nova:name>
Sep 30 18:16:58 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:16:56</nova:creationTime>
Sep 30 18:16:58 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:16:58 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:16:58 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:16:58 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:16:58 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:16:58 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:16:58 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:16:58 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:16:58 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:16:58 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:16:58 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:16:58 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:16:58 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:16:58 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:16:58 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:16:58 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:16:58 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:16:58 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:16:58 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:16:58 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:16:58 compute-0 nova_compute[265391]:         <nova:user uuid="54973270e5a040c8af5ec2225e3caec8">tempest-TestExecuteBasicStrategy-1755756413-project-admin</nova:user>
Sep 30 18:16:58 compute-0 nova_compute[265391]:         <nova:project uuid="eddde596e2d64cec889cb4c4d3642bc5">tempest-TestExecuteBasicStrategy-1755756413</nova:project>
Sep 30 18:16:58 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:16:58 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:16:58 compute-0 nova_compute[265391]:         <nova:port uuid="e8db1337-aad5-4a75-89bc-1526d5c83cc6">
Sep 30 18:16:58 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:16:58 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:16:58 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:16:58 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <system>
Sep 30 18:16:58 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:16:58 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:16:58 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:16:58 compute-0 nova_compute[265391]:       <entry name="serial">26cb264e-21ae-425e-b5db-a6d24a90b6ca</entry>
Sep 30 18:16:58 compute-0 nova_compute[265391]:       <entry name="uuid">26cb264e-21ae-425e-b5db-a6d24a90b6ca</entry>
Sep 30 18:16:58 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     </system>
Sep 30 18:16:58 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:16:58 compute-0 nova_compute[265391]:   <os>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="q35">hvm</type>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:   </os>
Sep 30 18:16:58 compute-0 nova_compute[265391]:   <features>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <vmcoreinfo/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:   </features>
Sep 30 18:16:58 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:16:58 compute-0 nova_compute[265391]:   <cpu mode="host-model" match="exact">
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <topology sockets="1" cores="1" threads="1"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:16:58 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:16:58 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/26cb264e-21ae-425e-b5db-a6d24a90b6ca_disk">
Sep 30 18:16:58 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:       </source>
Sep 30 18:16:58 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:16:58 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:16:58 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:16:58 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/26cb264e-21ae-425e-b5db-a6d24a90b6ca_disk.config">
Sep 30 18:16:58 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:       </source>
Sep 30 18:16:58 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:16:58 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:16:58 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:16:58 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:a5:3e:ec"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:       <target dev="tape8db1337-aa"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:16:58 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/26cb264e-21ae-425e-b5db-a6d24a90b6ca/console.log" append="off"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <video>
Sep 30 18:16:58 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     </video>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:16:58 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <controller type="usb" index="0"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:16:58 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:16:58 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:16:58 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:16:58 compute-0 nova_compute[265391]: </domain>
Sep 30 18:16:58 compute-0 nova_compute[265391]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
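The domain XML that _get_guest_xml finished emitting above is ultimately handed to libvirt to define and start the guest. As a hedged sketch of that final step using the libvirt-python binding (the XML file path and connection URI are placeholders, not values taken from this log):

    # Sketch: define and start a domain from an XML document like the one logged above.
    # Requires the libvirt-python package and a reachable libvirtd/virtqemud socket.
    import libvirt

    def define_and_boot(xml_path, uri="qemu:///system"):
        with open(xml_path) as f:
            domain_xml = f.read()
        conn = libvirt.open(uri)
        try:
            dom = conn.defineXML(domain_xml)   # persist the domain definition
            dom.create()                       # boot it (equivalent to "virsh start")
            return dom.UUIDString()
        finally:
            conn.close()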
Sep 30 18:16:58 compute-0 nova_compute[265391]: 2025-09-30 18:16:58.101 2 DEBUG nova.compute.manager [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Preparing to wait for external event network-vif-plugged-e8db1337-aad5-4a75-89bc-1526d5c83cc6 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:16:58 compute-0 nova_compute[265391]: 2025-09-30 18:16:58.101 2 DEBUG oslo_concurrency.lockutils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Acquiring lock "26cb264e-21ae-425e-b5db-a6d24a90b6ca-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:16:58 compute-0 nova_compute[265391]: 2025-09-30 18:16:58.102 2 DEBUG oslo_concurrency.lockutils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Lock "26cb264e-21ae-425e-b5db-a6d24a90b6ca-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:16:58 compute-0 nova_compute[265391]: 2025-09-30 18:16:58.102 2 DEBUG oslo_concurrency.lockutils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Lock "26cb264e-21ae-425e-b5db-a6d24a90b6ca-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
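The lock acquire/release pair above guards a per-instance registry of expected external events (here "network-vif-plugged-e8db1337-aad5-4a75-89bc-1526d5c83cc6"), so that the spawn can later block until Neutron reports the port as up. A minimal sketch of that pattern with plain threading primitives; the event name format is taken from the log, but the registry class is illustrative, not nova's InstanceEvents implementation.

    # Sketch: register an expected external event under a lock, then wait on it while
    # another thread (standing in for the network backend callback) delivers it.
    import threading

    class EventRegistry:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}

        def prepare(self, name):
            with self._lock:                       # "Acquiring lock ... _create_or_get_event"
                return self._events.setdefault(name, threading.Event())

        def deliver(self, name):
            with self._lock:
                ev = self._events.get(name)
            if ev:
                ev.set()

    registry = EventRegistry()
    waiter = registry.prepare("network-vif-plugged-e8db1337-aad5-4a75-89bc-1526d5c83cc6")
    threading.Timer(0.1, registry.deliver,
                    args=("network-vif-plugged-e8db1337-aad5-4a75-89bc-1526d5c83cc6",)).start()
    print("vif plugged:", waiter.wait(timeout=5))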
Sep 30 18:16:58 compute-0 nova_compute[265391]: 2025-09-30 18:16:58.103 2 DEBUG nova.virt.libvirt.vif [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:16:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteBasicStrategy-server-1754120251',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutebasicstrategy-server-1754120251',id=10,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eddde596e2d64cec889cb4c4d3642bc5',ramdisk_id='',reservation_id='r-ks0ceyno',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteBasicStrategy-1755756413',owner_user_name='tempest-TestExecuteBasicStrategy-1755756413-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:16:50Z,user_data=None,user_id='54973270e5a040c8af5ec2225e3caec8',uuid=26cb264e-21ae-425e-b5db-a6d24a90b6ca,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e8db1337-aad5-4a75-89bc-1526d5c83cc6", "address": "fa:16:3e:a5:3e:ec", "network": {"id": "cd077ee2-d26f-4989-8ea7-4aecbac7c636", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1165176741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d66b07a980744cd29ee547eb08500706", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8db1337-aa", "ovs_interfaceid": "e8db1337-aad5-4a75-89bc-1526d5c83cc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Sep 30 18:16:58 compute-0 nova_compute[265391]: 2025-09-30 18:16:58.103 2 DEBUG nova.network.os_vif_util [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Converting VIF {"id": "e8db1337-aad5-4a75-89bc-1526d5c83cc6", "address": "fa:16:3e:a5:3e:ec", "network": {"id": "cd077ee2-d26f-4989-8ea7-4aecbac7c636", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1165176741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d66b07a980744cd29ee547eb08500706", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8db1337-aa", "ovs_interfaceid": "e8db1337-aad5-4a75-89bc-1526d5c83cc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:16:58 compute-0 nova_compute[265391]: 2025-09-30 18:16:58.104 2 DEBUG nova.network.os_vif_util [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:3e:ec,bridge_name='br-int',has_traffic_filtering=True,id=e8db1337-aad5-4a75-89bc-1526d5c83cc6,network=Network(cd077ee2-d26f-4989-8ea7-4aecbac7c636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8db1337-aa') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:16:58 compute-0 nova_compute[265391]: 2025-09-30 18:16:58.104 2 DEBUG os_vif [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:3e:ec,bridge_name='br-int',has_traffic_filtering=True,id=e8db1337-aad5-4a75-89bc-1526d5c83cc6,network=Network(cd077ee2-d26f-4989-8ea7-4aecbac7c636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8db1337-aa') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Sep 30 18:16:58 compute-0 nova_compute[265391]: 2025-09-30 18:16:58.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:58 compute-0 nova_compute[265391]: 2025-09-30 18:16:58.106 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:16:58 compute-0 nova_compute[265391]: 2025-09-30 18:16:58.106 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:16:58 compute-0 nova_compute[265391]: 2025-09-30 18:16:58.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:58 compute-0 nova_compute[265391]: 2025-09-30 18:16:58.107 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '630a64f5-bc86-5a1d-b597-382243d6f704', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:16:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1150: 353 pgs: 353 active+clean; 88 MiB data, 201 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:16:58 compute-0 nova_compute[265391]: 2025-09-30 18:16:58.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:58 compute-0 nova_compute[265391]: 2025-09-30 18:16:58.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:58 compute-0 nova_compute[265391]: 2025-09-30 18:16:58.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:58 compute-0 nova_compute[265391]: 2025-09-30 18:16:58.171 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape8db1337-aa, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:16:58 compute-0 nova_compute[265391]: 2025-09-30 18:16:58.172 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tape8db1337-aa, col_values=(('qos', UUID('f4932a99-3c9f-4f6a-8c7f-223a94cd4eac')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:16:58 compute-0 nova_compute[265391]: 2025-09-30 18:16:58.172 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tape8db1337-aa, col_values=(('external_ids', {'iface-id': 'e8db1337-aad5-4a75-89bc-1526d5c83cc6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:3e:ec', 'vm-uuid': '26cb264e-21ae-425e-b5db-a6d24a90b6ca'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
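The ovsdbapp transaction above (add tape8db1337-aa to br-int, attach a QoS row, then set the Interface external_ids that ovn-controller matches on) corresponds to a short ovs-vsctl sequence. The sketch below drives ovs-vsctl through subprocess rather than the ovsdbapp IDL the service uses; the bridge, port, and ID values are copied from the log, the wrapper itself is illustrative.

    # Sketch: reproduce the logged transaction with ovs-vsctl. The external_ids written on the
    # Interface row (iface-id, attached-mac, vm-uuid) are what ovn-controller uses to claim the port.
    import subprocess

    def plug_ovs_port(bridge, port, iface_id, mac, vm_uuid):
        subprocess.run(["ovs-vsctl", "--may-exist", "add-port", bridge, port,
                        "--", "set", "Interface", port,
                        f"external_ids:iface-id={iface_id}",
                        "external_ids:iface-status=active",
                        f"external_ids:attached-mac={mac}",
                        f"external_ids:vm-uuid={vm_uuid}"],
                       check=True)

    plug_ovs_port("br-int", "tape8db1337-aa",
                  "e8db1337-aad5-4a75-89bc-1526d5c83cc6",
                  "fa:16:3e:a5:3e:ec",
                  "26cb264e-21ae-425e-b5db-a6d24a90b6ca")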
Sep 30 18:16:58 compute-0 nova_compute[265391]: 2025-09-30 18:16:58.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:58 compute-0 NetworkManager[45059]: <info>  [1759256218.1764] manager: (tape8db1337-aa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Sep 30 18:16:58 compute-0 nova_compute[265391]: 2025-09-30 18:16:58.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:16:58 compute-0 nova_compute[265391]: 2025-09-30 18:16:58.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:16:58 compute-0 nova_compute[265391]: 2025-09-30 18:16:58.185 2 INFO os_vif [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:3e:ec,bridge_name='br-int',has_traffic_filtering=True,id=e8db1337-aad5-4a75-89bc-1526d5c83cc6,network=Network(cd077ee2-d26f-4989-8ea7-4aecbac7c636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8db1337-aa')
Sep 30 18:16:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/118095084' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:16:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/118095084' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:16:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/800863491' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:16:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:16:58] "GET /metrics HTTP/1.1" 200 46635 "" "Prometheus/2.51.0"
Sep 30 18:16:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:16:58] "GET /metrics HTTP/1.1" 200 46635 "" "Prometheus/2.51.0"
Sep 30 18:16:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:58 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003c50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:16:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:16:59.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:16:59 compute-0 ceph-mon[73755]: pgmap v1150: 353 pgs: 353 active+clean; 88 MiB data, 201 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:16:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:16:59 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002d10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:16:59 compute-0 podman[276673]: time="2025-09-30T18:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:16:59 compute-0 nova_compute[265391]: 2025-09-30 18:16:59.760 2 DEBUG nova.virt.libvirt.driver [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:16:59 compute-0 nova_compute[265391]: 2025-09-30 18:16:59.761 2 DEBUG nova.virt.libvirt.driver [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:16:59 compute-0 nova_compute[265391]: 2025-09-30 18:16:59.761 2 DEBUG nova.virt.libvirt.driver [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] No VIF found with MAC fa:16:3e:a5:3e:ec, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Sep 30 18:16:59 compute-0 nova_compute[265391]: 2025-09-30 18:16:59.762 2 INFO nova.virt.libvirt.driver [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Using config drive
Sep 30 18:16:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:16:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10274 "" "Go-http-client/1.1"
Sep 30 18:16:59 compute-0 nova_compute[265391]: 2025-09-30 18:16:59.803 2 DEBUG nova.storage.rbd_utils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] rbd image 26cb264e-21ae-425e-b5db-a6d24a90b6ca_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:16:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:16:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:16:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:16:59.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:16:59 compute-0 nova_compute[265391]: 2025-09-30 18:16:59.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:17:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1151: 353 pgs: 353 active+clean; 88 MiB data, 201 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:17:00 compute-0 nova_compute[265391]: 2025-09-30 18:17:00.319 2 WARNING neutronclient.v2_0.client [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
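The warning above notes that the python-neutronclient bindings are deprecated in favour of openstacksdk. A hedged sketch of the replacement call pattern; the port ID is copied from this log only for illustration, and openstack.connect() reads credentials from clouds.yaml or OS_* environment variables.

    # Sketch: query a Neutron port through openstacksdk instead of neutronclient.
    import openstack

    conn = openstack.connect(cloud="envvars")  # or a named cloud from clouds.yaml
    port = conn.network.get_port("e8db1337-aad5-4a75-89bc-1526d5c83cc6")
    print(port.status, port.fixed_ips)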
Sep 30 18:17:00 compute-0 ceph-mon[73755]: pgmap v1151: 353 pgs: 353 active+clean; 88 MiB data, 201 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:17:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:17:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:00 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c001070 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:01 compute-0 nova_compute[265391]: 2025-09-30 18:17:01.087 2 INFO nova.virt.libvirt.driver [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Creating config drive at /var/lib/nova/instances/26cb264e-21ae-425e-b5db-a6d24a90b6ca/disk.config
Sep 30 18:17:01 compute-0 nova_compute[265391]: 2025-09-30 18:17:01.096 2 DEBUG oslo_concurrency.processutils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/26cb264e-21ae-425e-b5db-a6d24a90b6ca/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmpc71njz_x execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:17:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:17:01.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:01 compute-0 nova_compute[265391]: 2025-09-30 18:17:01.231 2 DEBUG oslo_concurrency.processutils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/26cb264e-21ae-425e-b5db-a6d24a90b6ca/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmpc71njz_x" returned: 0 in 0.136s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:17:01 compute-0 nova_compute[265391]: 2025-09-30 18:17:01.261 2 DEBUG nova.storage.rbd_utils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] rbd image 26cb264e-21ae-425e-b5db-a6d24a90b6ca_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:17:01 compute-0 nova_compute[265391]: 2025-09-30 18:17:01.264 2 DEBUG oslo_concurrency.processutils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/26cb264e-21ae-425e-b5db-a6d24a90b6ca/disk.config 26cb264e-21ae-425e-b5db-a6d24a90b6ca_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:17:01 compute-0 openstack_network_exporter[279566]: ERROR   18:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:17:01 compute-0 openstack_network_exporter[279566]: ERROR   18:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:17:01 compute-0 openstack_network_exporter[279566]: ERROR   18:17:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:17:01 compute-0 openstack_network_exporter[279566]: ERROR   18:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:17:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:17:01 compute-0 openstack_network_exporter[279566]: ERROR   18:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:17:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:17:01 compute-0 nova_compute[265391]: 2025-09-30 18:17:01.449 2 DEBUG oslo_concurrency.processutils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/26cb264e-21ae-425e-b5db-a6d24a90b6ca/disk.config 26cb264e-21ae-425e-b5db-a6d24a90b6ca_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.184s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:17:01 compute-0 nova_compute[265391]: 2025-09-30 18:17:01.449 2 INFO nova.virt.libvirt.driver [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Deleting local config drive /var/lib/nova/instances/26cb264e-21ae-425e-b5db-a6d24a90b6ca/disk.config because it was imported into RBD.
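[editor's note] The burst above shows the config-drive path end to end: mkisofs builds disk.config, rbd import pushes it into the 'vms' pool as <uuid>_disk.config, and Nova then deletes the local copy. The commands below are a minimal Python sketch of that same sequence for troubleshooting outside Nova; the binaries, pool, cephx user and paths are taken from the log lines, everything else (and the simplified publisher string) is illustrative, not the actual nova rbd_utils/driver code.

# Minimal sketch (not Nova's code): rebuild a config-drive ISO and import it
# into the Ceph 'vms' pool the way the log above shows Nova doing it.
# Assumes /usr/bin/mkisofs, the 'openstack' cephx user and /etc/ceph/ceph.conf
# exist as in the log; metadata_dir is the staging directory mkisofs reads.
import subprocess

instance = "26cb264e-21ae-425e-b5db-a6d24a90b6ca"            # from the log
local_iso = f"/var/lib/nova/instances/{instance}/disk.config"
metadata_dir = "/tmp/tmpc71njz_x"                            # staging dir from the log

# 1. Build the config-drive ISO (flags copied from the logged mkisofs call).
subprocess.run(
    ["/usr/bin/mkisofs", "-o", local_iso, "-ldots", "-allow-lowercase",
     "-allow-multidot", "-l", "-publisher", "OpenStack Compute",
     "-quiet", "-J", "-r", "-V", "config-2", metadata_dir],
    check=True)

# 2. Import it as an RBD image in the 'vms' pool (flags from the logged call).
subprocess.run(
    ["rbd", "import", "--pool", "vms", local_iso, f"{instance}_disk.config",
     "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True)

# 3. Nova then removes the local file, since the RBD image is now authoritative:
# os.remove(local_iso)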
Sep 30 18:17:01 compute-0 systemd[1]: Starting libvirt secret daemon...
Sep 30 18:17:01 compute-0 systemd[1]: Started libvirt secret daemon.
Sep 30 18:17:01 compute-0 kernel: tape8db1337-aa: entered promiscuous mode
Sep 30 18:17:01 compute-0 nova_compute[265391]: 2025-09-30 18:17:01.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:17:01 compute-0 NetworkManager[45059]: <info>  [1759256221.5500] manager: (tape8db1337-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/46)
Sep 30 18:17:01 compute-0 ovn_controller[156242]: 2025-09-30T18:17:01Z|00094|binding|INFO|Claiming lport e8db1337-aad5-4a75-89bc-1526d5c83cc6 for this chassis.
Sep 30 18:17:01 compute-0 ovn_controller[156242]: 2025-09-30T18:17:01Z|00095|binding|INFO|e8db1337-aad5-4a75-89bc-1526d5c83cc6: Claiming fa:16:3e:a5:3e:ec 10.100.0.14
Sep 30 18:17:01 compute-0 nova_compute[265391]: 2025-09-30 18:17:01.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:17:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:01.591 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:3e:ec 10.100.0.14'], port_security=['fa:16:3e:a5:3e:ec 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '26cb264e-21ae-425e-b5db-a6d24a90b6ca', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd077ee2-d26f-4989-8ea7-4aecbac7c636', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eddde596e2d64cec889cb4c4d3642bc5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f566abc7-3fe4-4e56-86df-377c1571ec04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=afc38829-13e1-4bde-91a7-790387f17ce5, chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=e8db1337-aad5-4a75-89bc-1526d5c83cc6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:17:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:01.592 166158 INFO neutron.agent.ovn.metadata.agent [-] Port e8db1337-aad5-4a75-89bc-1526d5c83cc6 in datapath cd077ee2-d26f-4989-8ea7-4aecbac7c636 bound to our chassis
Sep 30 18:17:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:01.594 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cd077ee2-d26f-4989-8ea7-4aecbac7c636
Sep 30 18:17:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:01.615 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[de6dcfa7-1265-4867-80f4-8cdee6fac271]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:17:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:01.617 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcd077ee2-d1 in ovnmeta-cd077ee2-d26f-4989-8ea7-4aecbac7c636 namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Sep 30 18:17:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:01.620 292173 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcd077ee2-d0 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Sep 30 18:17:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:01.621 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[f66ca705-2ebf-46b6-80f7-86e97a4a52af]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:17:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:01.621 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[0d8be5b2-bdf1-4699-91d1-e550fdfce6fb]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:17:01 compute-0 systemd-udevd[311986]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:17:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:01.634 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[24425c2b-add3-4f3f-bbc4-b9fbef5c3000]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:17:01 compute-0 systemd-machined[219917]: New machine qemu-7-instance-0000000a.
Sep 30 18:17:01 compute-0 NetworkManager[45059]: <info>  [1759256221.6451] device (tape8db1337-aa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 18:17:01 compute-0 NetworkManager[45059]: <info>  [1759256221.6464] device (tape8db1337-aa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Sep 30 18:17:01 compute-0 nova_compute[265391]: 2025-09-30 18:17:01.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:17:01 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-0000000a.
Sep 30 18:17:01 compute-0 ovn_controller[156242]: 2025-09-30T18:17:01Z|00096|binding|INFO|Setting lport e8db1337-aad5-4a75-89bc-1526d5c83cc6 ovn-installed in OVS
Sep 30 18:17:01 compute-0 ovn_controller[156242]: 2025-09-30T18:17:01Z|00097|binding|INFO|Setting lport e8db1337-aad5-4a75-89bc-1526d5c83cc6 up in Southbound
Sep 30 18:17:01 compute-0 nova_compute[265391]: 2025-09-30 18:17:01.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:17:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:01.655 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[bf7a2e5b-47ec-4ff8-9386-c9c4195cd8d5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:17:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:01.694 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[94a9895e-1edb-403d-9ef0-49b4aefe908c]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:17:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:01.697 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[258fe412-b37a-485e-871c-b82f05c12c79]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:17:01 compute-0 NetworkManager[45059]: <info>  [1759256221.6998] manager: (tapcd077ee2-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/47)
Sep 30 18:17:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:01.737 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[b490e72c-97cb-4ce1-85de-e67a28a29b89]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:17:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:01.740 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[c92b5903-c953-49d0-80ba-e46cc415bd74]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:17:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:01 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398002e80 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:01 compute-0 NetworkManager[45059]: <info>  [1759256221.7686] device (tapcd077ee2-d0): carrier: link connected
Sep 30 18:17:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:01.777 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[42e1cfd4-92c2-48e2-a51a-fc9e4eb022f8]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:17:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:01.794 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[93d58771-97d0-45d0-8ecd-7ea055713d6e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd077ee2-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:56:60'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 482046, 'reachable_time': 35537, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312019, 'error': None, 'target': 'ovnmeta-cd077ee2-d26f-4989-8ea7-4aecbac7c636', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:17:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:01.810 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[d1bfa31f-5b25-40f6-8a3c-8966ed8d7fea]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:5660'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 482046, 'tstamp': 482046}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312020, 'error': None, 'target': 'ovnmeta-cd077ee2-d26f-4989-8ea7-4aecbac7c636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:17:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:01.832 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[dce12df2-99bf-47d5-b297-07f836a3a34c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd077ee2-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:56:60'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 482046, 'reachable_time': 35537, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 312021, 'error': None, 'target': 'ovnmeta-cd077ee2-d26f-4989-8ea7-4aecbac7c636', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:17:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:01.875 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[c07c3d1c-8939-44c3-bd8f-5a91b8ee5470]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:17:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:17:01.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:01.972 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[56c4e01a-e2e7-4a87-b993-cddc05179e9a]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:17:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:01.974 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd077ee2-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:17:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:01.974 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:17:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:01.975 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd077ee2-d0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:17:01 compute-0 nova_compute[265391]: 2025-09-30 18:17:01.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:17:01 compute-0 kernel: tapcd077ee2-d0: entered promiscuous mode
Sep 30 18:17:01 compute-0 NetworkManager[45059]: <info>  [1759256221.9795] manager: (tapcd077ee2-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Sep 30 18:17:01 compute-0 nova_compute[265391]: 2025-09-30 18:17:01.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:17:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:01.981 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcd077ee2-d0, col_values=(('external_ids', {'iface-id': '44f8f232-e480-4bd7-ad4d-10a4684c061b'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:17:01 compute-0 nova_compute[265391]: 2025-09-30 18:17:01.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:17:01 compute-0 ovn_controller[156242]: 2025-09-30T18:17:01Z|00098|binding|INFO|Releasing lport 44f8f232-e480-4bd7-ad4d-10a4684c061b from this chassis (sb_readonly=0)
Sep 30 18:17:02 compute-0 nova_compute[265391]: 2025-09-30 18:17:02.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:02.015 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[19d7e8dd-4327-4fc0-bc52-6ff4e392cc43]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:02.016 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cd077ee2-d26f-4989-8ea7-4aecbac7c636.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cd077ee2-d26f-4989-8ea7-4aecbac7c636.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:02.016 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cd077ee2-d26f-4989-8ea7-4aecbac7c636.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cd077ee2-d26f-4989-8ea7-4aecbac7c636.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:02.016 166158 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for cd077ee2-d26f-4989-8ea7-4aecbac7c636 disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:02.017 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cd077ee2-d26f-4989-8ea7-4aecbac7c636.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cd077ee2-d26f-4989-8ea7-4aecbac7c636.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:02.017 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[54f0beac-1ce6-40ba-8f1b-507e47109526]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:02.017 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cd077ee2-d26f-4989-8ea7-4aecbac7c636.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cd077ee2-d26f-4989-8ea7-4aecbac7c636.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:02.017 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[e3380772-bae4-425e-a95b-1fe25a164c72]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:02.018 166158 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]: global
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]:     log         /dev/log local0 debug
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]:     log-tag     haproxy-metadata-proxy-cd077ee2-d26f-4989-8ea7-4aecbac7c636
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]:     user        root
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]:     group       root
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]:     maxconn     1024
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]:     pidfile     /var/lib/neutron/external/pids/cd077ee2-d26f-4989-8ea7-4aecbac7c636.pid.haproxy
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]:     daemon
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]: defaults
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]:     log global
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]:     mode http
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]:     option httplog
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]:     option dontlognull
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]:     option http-server-close
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]:     option forwardfor
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]:     retries                 3
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]:     timeout http-request    30s
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]:     timeout connect         30s
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]:     timeout client          32s
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]:     timeout server          32s
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]:     timeout http-keep-alive 30s
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]: listen listener
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]:     bind 169.254.169.254:80
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]:     
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]:     server metadata /var/lib/neutron/metadata_proxy
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]:     http-request add-header X-OVN-Network-ID cd077ee2-d26f-4989-8ea7-4aecbac7c636
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Sep 30 18:17:02 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:02.018 166158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cd077ee2-d26f-4989-8ea7-4aecbac7c636', 'env', 'PROCESS_TAG=haproxy-cd077ee2-d26f-4989-8ea7-4aecbac7c636', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cd077ee2-d26f-4989-8ea7-4aecbac7c636.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
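[editor's note] The haproxy_cfg dump and the rootwrap command above show how the metadata agent runs a per-network haproxy bound to 169.254.169.254:80 inside the ovnmeta-<network-id> namespace. The sketch below reproduces that launch by hand, assuming the namespace and config file from the log already exist; plain 'ip netns exec' stands in for the neutron-rootwrap wrapper (both need root), and this is illustrative rather than the neutron driver code.

# Illustrative sketch: start the metadata-proxy haproxy in the OVN metadata
# namespace against the config neutron generated (paths/IDs from the log).
import subprocess

network_id = "cd077ee2-d26f-4989-8ea7-4aecbac7c636"   # from the log
namespace = f"ovnmeta-{network_id}"
cfg = f"/var/lib/neutron/ovn-metadata-proxy/{network_id}.conf"

subprocess.run(
    ["ip", "netns", "exec", namespace,
     "env", f"PROCESS_TAG=haproxy-{network_id}",
     "haproxy", "-f", cfg],
    check=True)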
Sep 30 18:17:02 compute-0 nova_compute[265391]: 2025-09-30 18:17:02.112 2 DEBUG nova.compute.manager [req-d3469527-95cc-4bf4-88e9-5beff4934461 req-dd2322b1-9567-4058-8ff2-ca1440225255 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Received event network-vif-plugged-e8db1337-aad5-4a75-89bc-1526d5c83cc6 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:17:02 compute-0 nova_compute[265391]: 2025-09-30 18:17:02.112 2 DEBUG oslo_concurrency.lockutils [req-d3469527-95cc-4bf4-88e9-5beff4934461 req-dd2322b1-9567-4058-8ff2-ca1440225255 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "26cb264e-21ae-425e-b5db-a6d24a90b6ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:17:02 compute-0 nova_compute[265391]: 2025-09-30 18:17:02.112 2 DEBUG oslo_concurrency.lockutils [req-d3469527-95cc-4bf4-88e9-5beff4934461 req-dd2322b1-9567-4058-8ff2-ca1440225255 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "26cb264e-21ae-425e-b5db-a6d24a90b6ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:17:02 compute-0 nova_compute[265391]: 2025-09-30 18:17:02.112 2 DEBUG oslo_concurrency.lockutils [req-d3469527-95cc-4bf4-88e9-5beff4934461 req-dd2322b1-9567-4058-8ff2-ca1440225255 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "26cb264e-21ae-425e-b5db-a6d24a90b6ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:17:02 compute-0 nova_compute[265391]: 2025-09-30 18:17:02.113 2 DEBUG nova.compute.manager [req-d3469527-95cc-4bf4-88e9-5beff4934461 req-dd2322b1-9567-4058-8ff2-ca1440225255 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Processing event network-vif-plugged-e8db1337-aad5-4a75-89bc-1526d5c83cc6 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:17:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1152: 353 pgs: 353 active+clean; 88 MiB data, 201 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:17:02 compute-0 podman[312095]: 2025-09-30 18:17:02.423797133 +0000 UTC m=+0.048051971 container create af68064ec1493736b182556d8b6b77d56067e16f7f6aea037ba368e694511497 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-cd077ee2-d26f-4989-8ea7-4aecbac7c636, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Sep 30 18:17:02 compute-0 systemd[1]: Started libpod-conmon-af68064ec1493736b182556d8b6b77d56067e16f7f6aea037ba368e694511497.scope.
Sep 30 18:17:02 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:17:02 compute-0 podman[312095]: 2025-09-30 18:17:02.398469385 +0000 UTC m=+0.022724243 image pull 8925e336c8a9c8e5b40d98e4715ad60992d6f21ad0a398170a686c34f922c024 38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Sep 30 18:17:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b185b9b0fc9dba73d859564d02c2a33d810753bd08913b604b528682530b49f7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Sep 30 18:17:02 compute-0 podman[312095]: 2025-09-30 18:17:02.522731829 +0000 UTC m=+0.146986697 container init af68064ec1493736b182556d8b6b77d56067e16f7f6aea037ba368e694511497 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-cd077ee2-d26f-4989-8ea7-4aecbac7c636, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0)
Sep 30 18:17:02 compute-0 podman[312095]: 2025-09-30 18:17:02.530699067 +0000 UTC m=+0.154953915 container start af68064ec1493736b182556d8b6b77d56067e16f7f6aea037ba368e694511497 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-cd077ee2-d26f-4989-8ea7-4aecbac7c636, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Sep 30 18:17:02 compute-0 neutron-haproxy-ovnmeta-cd077ee2-d26f-4989-8ea7-4aecbac7c636[312110]: [NOTICE]   (312114) : New worker (312116) forked
Sep 30 18:17:02 compute-0 neutron-haproxy-ovnmeta-cd077ee2-d26f-4989-8ea7-4aecbac7c636[312110]: [NOTICE]   (312114) : Loading success.
Sep 30 18:17:02 compute-0 nova_compute[265391]: 2025-09-30 18:17:02.831 2 DEBUG nova.compute.manager [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:17:02 compute-0 nova_compute[265391]: 2025-09-30 18:17:02.836 2 DEBUG nova.virt.libvirt.driver [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Sep 30 18:17:02 compute-0 nova_compute[265391]: 2025-09-30 18:17:02.840 2 INFO nova.virt.libvirt.driver [-] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Instance spawned successfully.
Sep 30 18:17:02 compute-0 nova_compute[265391]: 2025-09-30 18:17:02.841 2 DEBUG nova.virt.libvirt.driver [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Sep 30 18:17:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:02 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003c50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:03 compute-0 nova_compute[265391]: 2025-09-30 18:17:03.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:17:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:17:03.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:03 compute-0 ceph-mon[73755]: pgmap v1152: 353 pgs: 353 active+clean; 88 MiB data, 201 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:17:03 compute-0 nova_compute[265391]: 2025-09-30 18:17:03.360 2 DEBUG nova.virt.libvirt.driver [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:17:03 compute-0 nova_compute[265391]: 2025-09-30 18:17:03.361 2 DEBUG nova.virt.libvirt.driver [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:17:03 compute-0 nova_compute[265391]: 2025-09-30 18:17:03.362 2 DEBUG nova.virt.libvirt.driver [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:17:03 compute-0 nova_compute[265391]: 2025-09-30 18:17:03.363 2 DEBUG nova.virt.libvirt.driver [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:17:03 compute-0 nova_compute[265391]: 2025-09-30 18:17:03.364 2 DEBUG nova.virt.libvirt.driver [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:17:03 compute-0 nova_compute[265391]: 2025-09-30 18:17:03.365 2 DEBUG nova.virt.libvirt.driver [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:17:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:17:03.713Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:17:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:03 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002d10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:03 compute-0 nova_compute[265391]: 2025-09-30 18:17:03.878 2 INFO nova.compute.manager [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Took 11.97 seconds to spawn the instance on the hypervisor.
Sep 30 18:17:03 compute-0 nova_compute[265391]: 2025-09-30 18:17:03.879 2 DEBUG nova.compute.manager [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Sep 30 18:17:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:17:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:17:03.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:17:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1153: 353 pgs: 353 active+clean; 88 MiB data, 201 MiB used, 40 GiB / 40 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Sep 30 18:17:04 compute-0 nova_compute[265391]: 2025-09-30 18:17:04.189 2 DEBUG nova.compute.manager [req-5e5b72f7-1afc-4ad8-b499-5b473e32df05 req-7e7fbb24-0ca4-43a1-8b26-7fe65e8259fc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Received event network-vif-plugged-e8db1337-aad5-4a75-89bc-1526d5c83cc6 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:17:04 compute-0 nova_compute[265391]: 2025-09-30 18:17:04.190 2 DEBUG oslo_concurrency.lockutils [req-5e5b72f7-1afc-4ad8-b499-5b473e32df05 req-7e7fbb24-0ca4-43a1-8b26-7fe65e8259fc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "26cb264e-21ae-425e-b5db-a6d24a90b6ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:17:04 compute-0 nova_compute[265391]: 2025-09-30 18:17:04.190 2 DEBUG oslo_concurrency.lockutils [req-5e5b72f7-1afc-4ad8-b499-5b473e32df05 req-7e7fbb24-0ca4-43a1-8b26-7fe65e8259fc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "26cb264e-21ae-425e-b5db-a6d24a90b6ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:17:04 compute-0 nova_compute[265391]: 2025-09-30 18:17:04.190 2 DEBUG oslo_concurrency.lockutils [req-5e5b72f7-1afc-4ad8-b499-5b473e32df05 req-7e7fbb24-0ca4-43a1-8b26-7fe65e8259fc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "26cb264e-21ae-425e-b5db-a6d24a90b6ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:17:04 compute-0 nova_compute[265391]: 2025-09-30 18:17:04.191 2 DEBUG nova.compute.manager [req-5e5b72f7-1afc-4ad8-b499-5b473e32df05 req-7e7fbb24-0ca4-43a1-8b26-7fe65e8259fc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] No waiting events found dispatching network-vif-plugged-e8db1337-aad5-4a75-89bc-1526d5c83cc6 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:17:04 compute-0 nova_compute[265391]: 2025-09-30 18:17:04.191 2 WARNING nova.compute.manager [req-5e5b72f7-1afc-4ad8-b499-5b473e32df05 req-7e7fbb24-0ca4-43a1-8b26-7fe65e8259fc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Received unexpected event network-vif-plugged-e8db1337-aad5-4a75-89bc-1526d5c83cc6 for instance with vm_state active and task_state None.
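[editor's note] The two network-vif-plugged deliveries above illustrate the waiter pattern: the first event is popped under the "<uuid>-events" lock and unblocks the spawn path ("Instance event wait completed"), while the second arrives after the instance is already active, finds no registered waiter, and produces the "Received unexpected event" warning. The following is a hypothetical, minimal sketch of that wait/notify pattern with standard threading primitives; names are illustrative and this is not Nova's InstanceEvents implementation.

# Hypothetical sketch of the wait/notify pattern visible in the log: the spawn
# thread registers an event name and waits; the external-event handler pops the
# waiter under a per-instance lock and signals it, or logs "unexpected" if none.
import threading

class InstanceEventWaiter:
    def __init__(self):
        self._lock = threading.Lock()      # plays the role of the "<uuid>-events" lock
        self._events = {}                  # event name -> threading.Event

    def prepare(self, name):
        with self._lock:
            return self._events.setdefault(name, threading.Event())

    def pop_and_signal(self, name):
        with self._lock:
            ev = self._events.pop(name, None)
        if ev is None:
            return False                   # corresponds to the "unexpected event" warning
        ev.set()
        return True

# spawn side (before plugging the VIF and starting the guest):
#   ev = waiter.prepare("network-vif-plugged-e8db1337-aad5-4a75-89bc-1526d5c83cc6")
#   ... plug VIF, define and start the domain ...
#   ev.wait(timeout=300)
# event side (driven by the neutron notification relayed through nova-api):
#   waiter.pop_and_signal("network-vif-plugged-e8db1337-aad5-4a75-89bc-1526d5c83cc6")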
Sep 30 18:17:04 compute-0 nova_compute[265391]: 2025-09-30 18:17:04.413 2 INFO nova.compute.manager [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Took 17.66 seconds to build instance.
Sep 30 18:17:04 compute-0 podman[312128]: 2025-09-30 18:17:04.559096382 +0000 UTC m=+0.087542771 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.license=GPLv2)
Sep 30 18:17:04 compute-0 podman[312129]: 2025-09-30 18:17:04.563480746 +0000 UTC m=+0.080445526 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, io.buildah.version=1.33.7, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, vendor=Red Hat, Inc.)
Sep 30 18:17:04 compute-0 podman[312127]: 2025-09-30 18:17:04.581589087 +0000 UTC m=+0.111321839 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:17:04 compute-0 nova_compute[265391]: 2025-09-30 18:17:04.919 2 DEBUG oslo_concurrency.lockutils [None req-462e412c-f158-4db0-9a2d-56ab61801c4b 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Lock "26cb264e-21ae-425e-b5db-a6d24a90b6ca" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.186s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:17:04 compute-0 nova_compute[265391]: 2025-09-30 18:17:04.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:17:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:04 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c001070 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:17:05.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:05 compute-0 ceph-mon[73755]: pgmap v1153: 353 pgs: 353 active+clean; 88 MiB data, 201 MiB used, 40 GiB / 40 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Sep 30 18:17:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:17:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:05 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c001070 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:17:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:17:05.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:17:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1154: 353 pgs: 353 active+clean; 88 MiB data, 201 MiB used, 40 GiB / 40 GiB avail; 7.6 KiB/s rd, 13 KiB/s wr, 10 op/s
Sep 30 18:17:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:06 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003c70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:17:07.191Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:17:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:17:07.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:07 compute-0 ceph-mon[73755]: pgmap v1154: 353 pgs: 353 active+clean; 88 MiB data, 201 MiB used, 40 GiB / 40 GiB avail; 7.6 KiB/s rd, 13 KiB/s wr, 10 op/s
Sep 30 18:17:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:17:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005229063904082829 of space, bias 1.0, pg target 0.10458127808165658 quantized to 32 (current 32)
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:17:07
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['images', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'default.rgw.log', '.rgw.root', '.nfs', 'vms', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta']
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:17:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:17:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:07 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002d10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:17:07.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1155: 353 pgs: 353 active+clean; 88 MiB data, 201 MiB used, 40 GiB / 40 GiB avail; 7.6 KiB/s rd, 13 KiB/s wr, 10 op/s
Sep 30 18:17:08 compute-0 nova_compute[265391]: 2025-09-30 18:17:08.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:17:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:17:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:17:08] "GET /metrics HTTP/1.1" 200 46645 "" "Prometheus/2.51.0"
Sep 30 18:17:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:17:08] "GET /metrics HTTP/1.1" 200 46645 "" "Prometheus/2.51.0"
Sep 30 18:17:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:08 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398002e80 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:17:09.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:09 compute-0 ceph-mon[73755]: pgmap v1155: 353 pgs: 353 active+clean; 88 MiB data, 201 MiB used, 40 GiB / 40 GiB avail; 7.6 KiB/s rd, 13 KiB/s wr, 10 op/s
Sep 30 18:17:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:09 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c001070 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:17:09.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:10 compute-0 nova_compute[265391]: 2025-09-30 18:17:10.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:17:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1156: 353 pgs: 353 active+clean; 88 MiB data, 201 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:17:10 compute-0 ceph-mon[73755]: pgmap v1156: 353 pgs: 353 active+clean; 88 MiB data, 201 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:17:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:17:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:10 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003c90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:11 compute-0 sudo[312189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:17:11 compute-0 sudo[312189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:17:11 compute-0 sudo[312189]: pam_unix(sudo:session): session closed for user root
Sep 30 18:17:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:17:11.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:11 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002d10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:17:11.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1157: 353 pgs: 353 active+clean; 88 MiB data, 201 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:17:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:12 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398002e80 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:13 compute-0 nova_compute[265391]: 2025-09-30 18:17:13.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:17:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:17:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:17:13.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:17:13 compute-0 ceph-mon[73755]: pgmap v1157: 353 pgs: 353 active+clean; 88 MiB data, 201 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:17:13 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2769658347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:17:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:17:13.714Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:17:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:13 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c001070 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:17:13.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1158: 353 pgs: 353 active+clean; 88 MiB data, 201 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:17:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:14 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:15 compute-0 nova_compute[265391]: 2025-09-30 18:17:15.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:17:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:17:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:17:15.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:17:15 compute-0 ceph-mon[73755]: pgmap v1158: 353 pgs: 353 active+clean; 88 MiB data, 201 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:17:15 compute-0 ovn_controller[156242]: 2025-09-30T18:17:15Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a5:3e:ec 10.100.0.14
Sep 30 18:17:15 compute-0 ovn_controller[156242]: 2025-09-30T18:17:15Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a5:3e:ec 10.100.0.14
Sep 30 18:17:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:17:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:15 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002d10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:15 compute-0 sudo[312221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:17:15 compute-0 sudo[312221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:17:15 compute-0 sudo[312221]: pam_unix(sudo:session): session closed for user root
Sep 30 18:17:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:17:15.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:15 compute-0 sudo[312246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:17:15 compute-0 sudo[312246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:17:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1159: 353 pgs: 353 active+clean; 88 MiB data, 201 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 64 op/s
Sep 30 18:17:16 compute-0 sudo[312246]: pam_unix(sudo:session): session closed for user root
Sep 30 18:17:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:17:16 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:17:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:17:16 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:17:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:17:16 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:17:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:17:16 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:17:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:17:16 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:17:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:17:16 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:17:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:17:16 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:17:16 compute-0 sudo[312301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:17:16 compute-0 sudo[312301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:17:16 compute-0 sudo[312301]: pam_unix(sudo:session): session closed for user root
Sep 30 18:17:16 compute-0 sudo[312326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:17:16 compute-0 sudo[312326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:17:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:16 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384001230 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:17:17.193Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:17:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:17:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:17:17.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:17:17 compute-0 ceph-mon[73755]: pgmap v1159: 353 pgs: 353 active+clean; 88 MiB data, 201 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 64 op/s
Sep 30 18:17:17 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:17:17 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:17:17 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:17:17 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:17:17 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:17:17 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:17:17 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:17:17 compute-0 podman[312390]: 2025-09-30 18:17:17.40747744 +0000 UTC m=+0.056713197 container create a5b4e43d88b990abcb3041756b18a9c36d57322e411954f33a36b9f71461b464 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_nobel, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid)
Sep 30 18:17:17 compute-0 systemd[1]: Started libpod-conmon-a5b4e43d88b990abcb3041756b18a9c36d57322e411954f33a36b9f71461b464.scope.
Sep 30 18:17:17 compute-0 podman[312390]: 2025-09-30 18:17:17.386251578 +0000 UTC m=+0.035487385 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:17:17 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:17:17 compute-0 podman[312390]: 2025-09-30 18:17:17.516705154 +0000 UTC m=+0.165940931 container init a5b4e43d88b990abcb3041756b18a9c36d57322e411954f33a36b9f71461b464 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_nobel, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:17:17 compute-0 podman[312390]: 2025-09-30 18:17:17.530103093 +0000 UTC m=+0.179338900 container start a5b4e43d88b990abcb3041756b18a9c36d57322e411954f33a36b9f71461b464 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Sep 30 18:17:17 compute-0 podman[312390]: 2025-09-30 18:17:17.534127108 +0000 UTC m=+0.183362955 container attach a5b4e43d88b990abcb3041756b18a9c36d57322e411954f33a36b9f71461b464 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:17:17 compute-0 competent_nobel[312407]: 167 167
Sep 30 18:17:17 compute-0 systemd[1]: libpod-a5b4e43d88b990abcb3041756b18a9c36d57322e411954f33a36b9f71461b464.scope: Deactivated successfully.
Sep 30 18:17:17 compute-0 conmon[312407]: conmon a5b4e43d88b990abcb30 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a5b4e43d88b990abcb3041756b18a9c36d57322e411954f33a36b9f71461b464.scope/container/memory.events
Sep 30 18:17:17 compute-0 podman[312390]: 2025-09-30 18:17:17.545043582 +0000 UTC m=+0.194279349 container died a5b4e43d88b990abcb3041756b18a9c36d57322e411954f33a36b9f71461b464 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:17:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-bdb5a3acf1ac3f7c19c3517f92458355cc4445f2de24d676ed7930a8d750c24f-merged.mount: Deactivated successfully.
Sep 30 18:17:17 compute-0 podman[312390]: 2025-09-30 18:17:17.588581366 +0000 UTC m=+0.237817173 container remove a5b4e43d88b990abcb3041756b18a9c36d57322e411954f33a36b9f71461b464 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_nobel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:17:17 compute-0 systemd[1]: libpod-conmon-a5b4e43d88b990abcb3041756b18a9c36d57322e411954f33a36b9f71461b464.scope: Deactivated successfully.
Sep 30 18:17:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:17 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c001070 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:17 compute-0 podman[312431]: 2025-09-30 18:17:17.835908686 +0000 UTC m=+0.081056532 container create d9e65c19ab4445f6e974f1cbd90bb10b889997ae26eb502adb15f354f8c56bd5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Sep 30 18:17:17 compute-0 systemd[1]: Started libpod-conmon-d9e65c19ab4445f6e974f1cbd90bb10b889997ae26eb502adb15f354f8c56bd5.scope.
Sep 30 18:17:17 compute-0 podman[312431]: 2025-09-30 18:17:17.802873246 +0000 UTC m=+0.048021152 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:17:17 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:17:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a3ba58e83f8f17d7bf8c184321fd3a3554f78905943a861a883c33f3d2e3edb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:17:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a3ba58e83f8f17d7bf8c184321fd3a3554f78905943a861a883c33f3d2e3edb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:17:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a3ba58e83f8f17d7bf8c184321fd3a3554f78905943a861a883c33f3d2e3edb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:17:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a3ba58e83f8f17d7bf8c184321fd3a3554f78905943a861a883c33f3d2e3edb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:17:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a3ba58e83f8f17d7bf8c184321fd3a3554f78905943a861a883c33f3d2e3edb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:17:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:17:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:17:17.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:17:17 compute-0 podman[312431]: 2025-09-30 18:17:17.944682958 +0000 UTC m=+0.189830864 container init d9e65c19ab4445f6e974f1cbd90bb10b889997ae26eb502adb15f354f8c56bd5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Sep 30 18:17:17 compute-0 podman[312431]: 2025-09-30 18:17:17.959214097 +0000 UTC m=+0.204361903 container start d9e65c19ab4445f6e974f1cbd90bb10b889997ae26eb502adb15f354f8c56bd5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_mccarthy, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:17:17 compute-0 podman[312431]: 2025-09-30 18:17:17.962975695 +0000 UTC m=+0.208123521 container attach d9e65c19ab4445f6e974f1cbd90bb10b889997ae26eb502adb15f354f8c56bd5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:17:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1160: 353 pgs: 353 active+clean; 88 MiB data, 201 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 64 op/s
Sep 30 18:17:18 compute-0 nova_compute[265391]: 2025-09-30 18:17:18.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:17:18 compute-0 eager_mccarthy[312448]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:17:18 compute-0 eager_mccarthy[312448]: --> All data devices are unavailable
Sep 30 18:17:18 compute-0 systemd[1]: libpod-d9e65c19ab4445f6e974f1cbd90bb10b889997ae26eb502adb15f354f8c56bd5.scope: Deactivated successfully.
Sep 30 18:17:18 compute-0 podman[312463]: 2025-09-30 18:17:18.422820608 +0000 UTC m=+0.048674839 container died d9e65c19ab4445f6e974f1cbd90bb10b889997ae26eb502adb15f354f8c56bd5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_mccarthy, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:17:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a3ba58e83f8f17d7bf8c184321fd3a3554f78905943a861a883c33f3d2e3edb-merged.mount: Deactivated successfully.
Sep 30 18:17:18 compute-0 podman[312463]: 2025-09-30 18:17:18.484037952 +0000 UTC m=+0.109892133 container remove d9e65c19ab4445f6e974f1cbd90bb10b889997ae26eb502adb15f354f8c56bd5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Sep 30 18:17:18 compute-0 systemd[1]: libpod-conmon-d9e65c19ab4445f6e974f1cbd90bb10b889997ae26eb502adb15f354f8c56bd5.scope: Deactivated successfully.
Sep 30 18:17:18 compute-0 sudo[312326]: pam_unix(sudo:session): session closed for user root
Sep 30 18:17:18 compute-0 sudo[312478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:17:18 compute-0 sudo[312478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:17:18 compute-0 sudo[312478]: pam_unix(sudo:session): session closed for user root
Sep 30 18:17:18 compute-0 sudo[312503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:17:18 compute-0 sudo[312503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:17:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:17:18] "GET /metrics HTTP/1.1" 200 46645 "" "Prometheus/2.51.0"
Sep 30 18:17:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:17:18] "GET /metrics HTTP/1.1" 200 46645 "" "Prometheus/2.51.0"
Sep 30 18:17:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:18 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:19 compute-0 podman[312568]: 2025-09-30 18:17:19.206957995 +0000 UTC m=+0.045511556 container create 839c4e78473219e79b9a8c043f98d28bdbd15f3a27152b93897e1f30d2edd0f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_boyd, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:17:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:17:19.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:19 compute-0 systemd[1]: Started libpod-conmon-839c4e78473219e79b9a8c043f98d28bdbd15f3a27152b93897e1f30d2edd0f8.scope.
Sep 30 18:17:19 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:17:19 compute-0 ceph-mon[73755]: pgmap v1160: 353 pgs: 353 active+clean; 88 MiB data, 201 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 64 op/s
Sep 30 18:17:19 compute-0 podman[312568]: 2025-09-30 18:17:19.185944198 +0000 UTC m=+0.024497799 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:17:19 compute-0 podman[312568]: 2025-09-30 18:17:19.29127036 +0000 UTC m=+0.129823921 container init 839c4e78473219e79b9a8c043f98d28bdbd15f3a27152b93897e1f30d2edd0f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_boyd, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Sep 30 18:17:19 compute-0 podman[312568]: 2025-09-30 18:17:19.298705674 +0000 UTC m=+0.137259275 container start 839c4e78473219e79b9a8c043f98d28bdbd15f3a27152b93897e1f30d2edd0f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Sep 30 18:17:19 compute-0 podman[312568]: 2025-09-30 18:17:19.303247242 +0000 UTC m=+0.141800833 container attach 839c4e78473219e79b9a8c043f98d28bdbd15f3a27152b93897e1f30d2edd0f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_boyd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Sep 30 18:17:19 compute-0 competent_boyd[312584]: 167 167
Sep 30 18:17:19 compute-0 systemd[1]: libpod-839c4e78473219e79b9a8c043f98d28bdbd15f3a27152b93897e1f30d2edd0f8.scope: Deactivated successfully.
Sep 30 18:17:19 compute-0 podman[312568]: 2025-09-30 18:17:19.304676449 +0000 UTC m=+0.143230040 container died 839c4e78473219e79b9a8c043f98d28bdbd15f3a27152b93897e1f30d2edd0f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_boyd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Sep 30 18:17:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa9944e669f266fe735f8187a6a7947ad57f6705d651985266b628c3d1cef7ad-merged.mount: Deactivated successfully.
Sep 30 18:17:19 compute-0 podman[312568]: 2025-09-30 18:17:19.347805642 +0000 UTC m=+0.186359193 container remove 839c4e78473219e79b9a8c043f98d28bdbd15f3a27152b93897e1f30d2edd0f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Sep 30 18:17:19 compute-0 systemd[1]: libpod-conmon-839c4e78473219e79b9a8c043f98d28bdbd15f3a27152b93897e1f30d2edd0f8.scope: Deactivated successfully.
Sep 30 18:17:19 compute-0 podman[312608]: 2025-09-30 18:17:19.561425255 +0000 UTC m=+0.052476308 container create 3d234244799262bd806825212950fd4f63cf97f577799ebced2f8b4d07ba61b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:17:19 compute-0 systemd[1]: Started libpod-conmon-3d234244799262bd806825212950fd4f63cf97f577799ebced2f8b4d07ba61b7.scope.
Sep 30 18:17:19 compute-0 podman[312608]: 2025-09-30 18:17:19.543809986 +0000 UTC m=+0.034861069 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:17:19 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:17:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e13ee6b19076ee824497692c2ca32542e215df80b899257c09330f31ec22714/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:17:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e13ee6b19076ee824497692c2ca32542e215df80b899257c09330f31ec22714/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:17:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e13ee6b19076ee824497692c2ca32542e215df80b899257c09330f31ec22714/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:17:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e13ee6b19076ee824497692c2ca32542e215df80b899257c09330f31ec22714/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:17:19 compute-0 podman[312608]: 2025-09-30 18:17:19.687506017 +0000 UTC m=+0.178557160 container init 3d234244799262bd806825212950fd4f63cf97f577799ebced2f8b4d07ba61b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:17:19 compute-0 podman[312608]: 2025-09-30 18:17:19.699066208 +0000 UTC m=+0.190117291 container start 3d234244799262bd806825212950fd4f63cf97f577799ebced2f8b4d07ba61b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_aryabhata, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid)
Sep 30 18:17:19 compute-0 podman[312608]: 2025-09-30 18:17:19.704396997 +0000 UTC m=+0.195448100 container attach 3d234244799262bd806825212950fd4f63cf97f577799ebced2f8b4d07ba61b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_aryabhata, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:17:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:19 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002d10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:17:19.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:20 compute-0 serene_aryabhata[312625]: {
Sep 30 18:17:20 compute-0 serene_aryabhata[312625]:     "0": [
Sep 30 18:17:20 compute-0 serene_aryabhata[312625]:         {
Sep 30 18:17:20 compute-0 serene_aryabhata[312625]:             "devices": [
Sep 30 18:17:20 compute-0 serene_aryabhata[312625]:                 "/dev/loop3"
Sep 30 18:17:20 compute-0 serene_aryabhata[312625]:             ],
Sep 30 18:17:20 compute-0 serene_aryabhata[312625]:             "lv_name": "ceph_lv0",
Sep 30 18:17:20 compute-0 serene_aryabhata[312625]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:17:20 compute-0 serene_aryabhata[312625]:             "lv_size": "21470642176",
Sep 30 18:17:20 compute-0 serene_aryabhata[312625]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:17:20 compute-0 serene_aryabhata[312625]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:17:20 compute-0 serene_aryabhata[312625]:             "name": "ceph_lv0",
Sep 30 18:17:20 compute-0 serene_aryabhata[312625]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:17:20 compute-0 serene_aryabhata[312625]:             "tags": {
Sep 30 18:17:20 compute-0 serene_aryabhata[312625]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:17:20 compute-0 serene_aryabhata[312625]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:17:20 compute-0 serene_aryabhata[312625]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:17:20 compute-0 serene_aryabhata[312625]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:17:20 compute-0 serene_aryabhata[312625]:                 "ceph.cluster_name": "ceph",
Sep 30 18:17:20 compute-0 serene_aryabhata[312625]:                 "ceph.crush_device_class": "",
Sep 30 18:17:20 compute-0 serene_aryabhata[312625]:                 "ceph.encrypted": "0",
Sep 30 18:17:20 compute-0 serene_aryabhata[312625]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:17:20 compute-0 serene_aryabhata[312625]:                 "ceph.osd_id": "0",
Sep 30 18:17:20 compute-0 serene_aryabhata[312625]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:17:20 compute-0 serene_aryabhata[312625]:                 "ceph.type": "block",
Sep 30 18:17:20 compute-0 serene_aryabhata[312625]:                 "ceph.vdo": "0",
Sep 30 18:17:20 compute-0 serene_aryabhata[312625]:                 "ceph.with_tpm": "0"
Sep 30 18:17:20 compute-0 serene_aryabhata[312625]:             },
Sep 30 18:17:20 compute-0 serene_aryabhata[312625]:             "type": "block",
Sep 30 18:17:20 compute-0 serene_aryabhata[312625]:             "vg_name": "ceph_vg0"
Sep 30 18:17:20 compute-0 serene_aryabhata[312625]:         }
Sep 30 18:17:20 compute-0 serene_aryabhata[312625]:     ]
Sep 30 18:17:20 compute-0 serene_aryabhata[312625]: }
Sep 30 18:17:20 compute-0 nova_compute[265391]: 2025-09-30 18:17:20.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:17:20 compute-0 systemd[1]: libpod-3d234244799262bd806825212950fd4f63cf97f577799ebced2f8b4d07ba61b7.scope: Deactivated successfully.
Sep 30 18:17:20 compute-0 podman[312608]: 2025-09-30 18:17:20.063532418 +0000 UTC m=+0.554583541 container died 3d234244799262bd806825212950fd4f63cf97f577799ebced2f8b4d07ba61b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_aryabhata, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:17:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e13ee6b19076ee824497692c2ca32542e215df80b899257c09330f31ec22714-merged.mount: Deactivated successfully.
Sep 30 18:17:20 compute-0 podman[312608]: 2025-09-30 18:17:20.117994796 +0000 UTC m=+0.609045889 container remove 3d234244799262bd806825212950fd4f63cf97f577799ebced2f8b4d07ba61b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_aryabhata, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Sep 30 18:17:20 compute-0 systemd[1]: libpod-conmon-3d234244799262bd806825212950fd4f63cf97f577799ebced2f8b4d07ba61b7.scope: Deactivated successfully.
Sep 30 18:17:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1161: 353 pgs: 353 active+clean; 167 MiB data, 248 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 154 op/s
Sep 30 18:17:20 compute-0 sudo[312503]: pam_unix(sudo:session): session closed for user root
Sep 30 18:17:20 compute-0 sudo[312648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:17:20 compute-0 sudo[312648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:17:20 compute-0 sudo[312648]: pam_unix(sudo:session): session closed for user root
Sep 30 18:17:20 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3550832362' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:17:20 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/318708065' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:17:20 compute-0 sudo[312673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:17:20 compute-0 sudo[312673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:17:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:17:20 compute-0 podman[312741]: 2025-09-30 18:17:20.80297869 +0000 UTC m=+0.044259853 container create b7dee706f903b24c53dab8a9933ce31f4e0e4c287f6039f471a6a414e422825f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_hodgkin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:17:20 compute-0 systemd[1]: Started libpod-conmon-b7dee706f903b24c53dab8a9933ce31f4e0e4c287f6039f471a6a414e422825f.scope.
Sep 30 18:17:20 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:17:20 compute-0 podman[312741]: 2025-09-30 18:17:20.782801425 +0000 UTC m=+0.024082618 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:17:20 compute-0 podman[312741]: 2025-09-30 18:17:20.896701101 +0000 UTC m=+0.137982314 container init b7dee706f903b24c53dab8a9933ce31f4e0e4c287f6039f471a6a414e422825f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_hodgkin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Sep 30 18:17:20 compute-0 podman[312741]: 2025-09-30 18:17:20.906637379 +0000 UTC m=+0.147918542 container start b7dee706f903b24c53dab8a9933ce31f4e0e4c287f6039f471a6a414e422825f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_hodgkin, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Sep 30 18:17:20 compute-0 podman[312741]: 2025-09-30 18:17:20.911800534 +0000 UTC m=+0.153081747 container attach b7dee706f903b24c53dab8a9933ce31f4e0e4c287f6039f471a6a414e422825f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_hodgkin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:17:20 compute-0 serene_hodgkin[312757]: 167 167
Sep 30 18:17:20 compute-0 systemd[1]: libpod-b7dee706f903b24c53dab8a9933ce31f4e0e4c287f6039f471a6a414e422825f.scope: Deactivated successfully.
Sep 30 18:17:20 compute-0 podman[312741]: 2025-09-30 18:17:20.916078395 +0000 UTC m=+0.157359558 container died b7dee706f903b24c53dab8a9933ce31f4e0e4c287f6039f471a6a414e422825f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Sep 30 18:17:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-3116e4e1bc86c7a39b6e7d292697aaf5cb8e4d7b0bfd3537b65732f547b56a66-merged.mount: Deactivated successfully.
Sep 30 18:17:20 compute-0 podman[312741]: 2025-09-30 18:17:20.954455524 +0000 UTC m=+0.195736687 container remove b7dee706f903b24c53dab8a9933ce31f4e0e4c287f6039f471a6a414e422825f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:17:20 compute-0 systemd[1]: libpod-conmon-b7dee706f903b24c53dab8a9933ce31f4e0e4c287f6039f471a6a414e422825f.scope: Deactivated successfully.
Sep 30 18:17:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:20 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384001230 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:21 compute-0 podman[312781]: 2025-09-30 18:17:21.146822783 +0000 UTC m=+0.040082435 container create a61c4eb20c104192d33560e991719337c44dfe1a44e25a8e3404206b724dfff3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Sep 30 18:17:21 compute-0 systemd[1]: Started libpod-conmon-a61c4eb20c104192d33560e991719337c44dfe1a44e25a8e3404206b724dfff3.scope.
Sep 30 18:17:21 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:17:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/619a8dfafdab5ce74fb186b9f77e39359b23ae35a058410ded2fd2f21cecbeef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:17:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/619a8dfafdab5ce74fb186b9f77e39359b23ae35a058410ded2fd2f21cecbeef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:17:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/619a8dfafdab5ce74fb186b9f77e39359b23ae35a058410ded2fd2f21cecbeef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:17:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/619a8dfafdab5ce74fb186b9f77e39359b23ae35a058410ded2fd2f21cecbeef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:17:21 compute-0 podman[312781]: 2025-09-30 18:17:21.130626111 +0000 UTC m=+0.023885793 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:17:21 compute-0 podman[312781]: 2025-09-30 18:17:21.236055727 +0000 UTC m=+0.129315449 container init a61c4eb20c104192d33560e991719337c44dfe1a44e25a8e3404206b724dfff3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:17:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:17:21.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:21 compute-0 podman[312781]: 2025-09-30 18:17:21.245653446 +0000 UTC m=+0.138913108 container start a61c4eb20c104192d33560e991719337c44dfe1a44e25a8e3404206b724dfff3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_carson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid)
Sep 30 18:17:21 compute-0 podman[312781]: 2025-09-30 18:17:21.25117947 +0000 UTC m=+0.144439222 container attach a61c4eb20c104192d33560e991719337c44dfe1a44e25a8e3404206b724dfff3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_carson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 18:17:21 compute-0 ceph-mon[73755]: pgmap v1161: 353 pgs: 353 active+clean; 167 MiB data, 248 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 154 op/s
Sep 30 18:17:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:21 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c003af0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:17:21.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:22 compute-0 lvm[312874]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:17:22 compute-0 lvm[312874]: VG ceph_vg0 finished
Sep 30 18:17:22 compute-0 compassionate_carson[312797]: {}
Sep 30 18:17:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-crash-compute-0[79187]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Sep 30 18:17:22 compute-0 systemd[1]: libpod-a61c4eb20c104192d33560e991719337c44dfe1a44e25a8e3404206b724dfff3.scope: Deactivated successfully.
Sep 30 18:17:22 compute-0 systemd[1]: libpod-a61c4eb20c104192d33560e991719337c44dfe1a44e25a8e3404206b724dfff3.scope: Consumed 1.322s CPU time.
Sep 30 18:17:22 compute-0 podman[312781]: 2025-09-30 18:17:22.057926066 +0000 UTC m=+0.951185768 container died a61c4eb20c104192d33560e991719337c44dfe1a44e25a8e3404206b724dfff3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_carson, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Sep 30 18:17:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-619a8dfafdab5ce74fb186b9f77e39359b23ae35a058410ded2fd2f21cecbeef-merged.mount: Deactivated successfully.
Sep 30 18:17:22 compute-0 podman[312781]: 2025-09-30 18:17:22.117432576 +0000 UTC m=+1.010692228 container remove a61c4eb20c104192d33560e991719337c44dfe1a44e25a8e3404206b724dfff3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_carson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:17:22 compute-0 systemd[1]: libpod-conmon-a61c4eb20c104192d33560e991719337c44dfe1a44e25a8e3404206b724dfff3.scope: Deactivated successfully.
Sep 30 18:17:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1162: 353 pgs: 353 active+clean; 167 MiB data, 248 MiB used, 40 GiB / 40 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Sep 30 18:17:22 compute-0 sudo[312673]: pam_unix(sudo:session): session closed for user root
Sep 30 18:17:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:17:22 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:17:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:17:22 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:17:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:17:22 compute-0 sudo[312892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:17:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:17:22 compute-0 sudo[312892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:17:22 compute-0 sudo[312892]: pam_unix(sudo:session): session closed for user root
Sep 30 18:17:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:22 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:23 compute-0 nova_compute[265391]: 2025-09-30 18:17:23.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:17:23 compute-0 ceph-mon[73755]: pgmap v1162: 353 pgs: 353 active+clean; 167 MiB data, 248 MiB used, 40 GiB / 40 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Sep 30 18:17:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:17:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:17:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:17:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:17:23.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:23 compute-0 podman[312919]: 2025-09-30 18:17:23.548316232 +0000 UTC m=+0.077222512 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Sep 30 18:17:23 compute-0 podman[312918]: 2025-09-30 18:17:23.599542876 +0000 UTC m=+0.128564329 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_controller, org.label-schema.build-date=20250930)
Sep 30 18:17:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:17:23.715Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:17:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:23 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002d10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:17:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:17:23.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:17:24 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:24.007 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:17:24 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:24.008 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:17:24 compute-0 nova_compute[265391]: 2025-09-30 18:17:24.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:17:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1163: 353 pgs: 353 active+clean; 167 MiB data, 248 MiB used, 40 GiB / 40 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Sep 30 18:17:24 compute-0 nova_compute[265391]: 2025-09-30 18:17:24.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:17:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:24 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384001230 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:25 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:25.011 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:17:25 compute-0 nova_compute[265391]: 2025-09-30 18:17:25.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:17:25 compute-0 ceph-mon[73755]: pgmap v1163: 353 pgs: 353 active+clean; 167 MiB data, 248 MiB used, 40 GiB / 40 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Sep 30 18:17:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:17:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:17:25.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:17:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:17:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:25 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c003af0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:17:25.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1164: 353 pgs: 353 active+clean; 167 MiB data, 248 MiB used, 40 GiB / 40 GiB avail; 341 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Sep 30 18:17:26 compute-0 nova_compute[265391]: 2025-09-30 18:17:26.429 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:17:26 compute-0 ceph-mon[73755]: pgmap v1164: 353 pgs: 353 active+clean; 167 MiB data, 248 MiB used, 40 GiB / 40 GiB avail; 341 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Sep 30 18:17:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:27 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:17:27.194Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:17:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:17:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:17:27.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:17:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:27 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002d10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:17:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:17:27.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:17:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1165: 353 pgs: 353 active+clean; 167 MiB data, 248 MiB used, 40 GiB / 40 GiB avail; 341 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Sep 30 18:17:28 compute-0 nova_compute[265391]: 2025-09-30 18:17:28.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:17:28 compute-0 nova_compute[265391]: 2025-09-30 18:17:28.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:17:28 compute-0 nova_compute[265391]: 2025-09-30 18:17:28.429 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:17:28 compute-0 podman[312973]: 2025-09-30 18:17:28.528544805 +0000 UTC m=+0.070059765 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Sep 30 18:17:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:17:28] "GET /metrics HTTP/1.1" 200 46650 "" "Prometheus/2.51.0"
Sep 30 18:17:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:17:28] "GET /metrics HTTP/1.1" 200 46650 "" "Prometheus/2.51.0"
Sep 30 18:17:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:28 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840013d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:29 compute-0 nova_compute[265391]: 2025-09-30 18:17:28.998 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:17:29 compute-0 nova_compute[265391]: 2025-09-30 18:17:28.999 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:17:29 compute-0 nova_compute[265391]: 2025-09-30 18:17:29.000 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:17:29 compute-0 nova_compute[265391]: 2025-09-30 18:17:29.000 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:17:29 compute-0 nova_compute[265391]: 2025-09-30 18:17:29.000 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:17:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:17:29.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:29 compute-0 ceph-mon[73755]: pgmap v1165: 353 pgs: 353 active+clean; 167 MiB data, 248 MiB used, 40 GiB / 40 GiB avail; 341 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Sep 30 18:17:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:17:29 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3474917705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:17:29 compute-0 nova_compute[265391]: 2025-09-30 18:17:29.493 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:17:29 compute-0 podman[276673]: time="2025-09-30T18:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:17:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:17:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10741 "" "Go-http-client/1.1"
Sep 30 18:17:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:29 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c003af0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:17:29.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:30 compute-0 nova_compute[265391]: 2025-09-30 18:17:30.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:17:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1166: 353 pgs: 353 active+clean; 167 MiB data, 266 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Sep 30 18:17:30 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3474917705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:17:30 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1931493676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:17:30 compute-0 nova_compute[265391]: 2025-09-30 18:17:30.547 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:17:30 compute-0 nova_compute[265391]: 2025-09-30 18:17:30.548 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:17:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:17:30 compute-0 nova_compute[265391]: 2025-09-30 18:17:30.768 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:17:30 compute-0 nova_compute[265391]: 2025-09-30 18:17:30.770 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:17:30 compute-0 nova_compute[265391]: 2025-09-30 18:17:30.802 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.032s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:17:30 compute-0 nova_compute[265391]: 2025-09-30 18:17:30.804 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4218MB free_disk=39.92593765258789GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:17:30 compute-0 nova_compute[265391]: 2025-09-30 18:17:30.804 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:17:30 compute-0 nova_compute[265391]: 2025-09-30 18:17:30.805 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:17:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:30 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:31 compute-0 sudo[313020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:17:31 compute-0 sudo[313020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:17:31 compute-0 sudo[313020]: pam_unix(sudo:session): session closed for user root
Sep 30 18:17:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:17:31.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:31 compute-0 ceph-mon[73755]: pgmap v1166: 353 pgs: 353 active+clean; 167 MiB data, 266 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Sep 30 18:17:31 compute-0 openstack_network_exporter[279566]: ERROR   18:17:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:17:31 compute-0 openstack_network_exporter[279566]: ERROR   18:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:17:31 compute-0 openstack_network_exporter[279566]: ERROR   18:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:17:31 compute-0 openstack_network_exporter[279566]: ERROR   18:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:17:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:17:31 compute-0 openstack_network_exporter[279566]: ERROR   18:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:17:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:17:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:31 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002d10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:31 compute-0 nova_compute[265391]: 2025-09-30 18:17:31.859 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Instance 26cb264e-21ae-425e-b5db-a6d24a90b6ca actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Sep 30 18:17:31 compute-0 nova_compute[265391]: 2025-09-30 18:17:31.860 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:17:31 compute-0 nova_compute[265391]: 2025-09-30 18:17:31.860 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=39GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:17:30 up  1:20,  0 user,  load average: 0.66, 0.74, 0.84\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_None': '1', 'num_os_type_None': '1', 'num_proj_eddde596e2d64cec889cb4c4d3642bc5': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:17:31 compute-0 nova_compute[265391]: 2025-09-30 18:17:31.900 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:17:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:17:31.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1167: 353 pgs: 353 active+clean; 167 MiB data, 266 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 74 op/s
Sep 30 18:17:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:17:32 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/639538289' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:17:32 compute-0 nova_compute[265391]: 2025-09-30 18:17:32.390 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:17:32 compute-0 nova_compute[265391]: 2025-09-30 18:17:32.396 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:17:32 compute-0 nova_compute[265391]: 2025-09-30 18:17:32.905 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:17:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:32 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b40012b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:33 compute-0 nova_compute[265391]: 2025-09-30 18:17:33.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:17:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:17:33.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:33 compute-0 ceph-mon[73755]: pgmap v1167: 353 pgs: 353 active+clean; 167 MiB data, 266 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 74 op/s
Sep 30 18:17:33 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/639538289' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:17:33 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1496519701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:17:33 compute-0 nova_compute[265391]: 2025-09-30 18:17:33.415 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:17:33 compute-0 nova_compute[265391]: 2025-09-30 18:17:33.415 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.610s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:17:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:17:33.717Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:17:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:33 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c003af0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:17:33.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1168: 353 pgs: 353 active+clean; 167 MiB data, 266 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 74 op/s
Sep 30 18:17:34 compute-0 ceph-mon[73755]: pgmap v1168: 353 pgs: 353 active+clean; 167 MiB data, 266 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 74 op/s
Sep 30 18:17:34 compute-0 nova_compute[265391]: 2025-09-30 18:17:34.415 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:17:34 compute-0 nova_compute[265391]: 2025-09-30 18:17:34.416 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:17:34 compute-0 nova_compute[265391]: 2025-09-30 18:17:34.416 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:17:34 compute-0 nova_compute[265391]: 2025-09-30 18:17:34.417 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:17:34 compute-0 nova_compute[265391]: 2025-09-30 18:17:34.417 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:17:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:34 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:35 compute-0 nova_compute[265391]: 2025-09-30 18:17:35.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:17:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:17:35.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:35 compute-0 podman[313073]: 2025-09-30 18:17:35.52053899 +0000 UTC m=+0.065002914 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, tcib_build_tag=watcher_latest, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Sep 30 18:17:35 compute-0 podman[313075]: 2025-09-30 18:17:35.537929783 +0000 UTC m=+0.071187115 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.6, io.openshift.tags=minimal rhel9, distribution-scope=public, release=1755695350, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., architecture=x86_64, vcs-type=git)
Sep 30 18:17:35 compute-0 podman[313074]: 2025-09-30 18:17:35.542336468 +0000 UTC m=+0.072017317 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=watcher_latest)
Sep 30 18:17:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:17:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:35 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002d10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:17:35.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1169: 353 pgs: 353 active+clean; 167 MiB data, 266 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 74 op/s
Sep 30 18:17:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:17:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3679108769' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:17:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:17:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3679108769' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:17:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:36 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b40012b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:17:37.195Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:17:37 compute-0 ceph-mon[73755]: pgmap v1169: 353 pgs: 353 active+clean; 167 MiB data, 266 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 74 op/s
Sep 30 18:17:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3679108769' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:17:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3679108769' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:17:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:17:37.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:17:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:17:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:17:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:17:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:17:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:17:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:17:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:17:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:37 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c003af0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:17:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:17:37.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:17:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1170: 353 pgs: 353 active+clean; 167 MiB data, 266 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 74 op/s
Sep 30 18:17:38 compute-0 nova_compute[265391]: 2025-09-30 18:17:38.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:17:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:17:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:17:38] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:17:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:17:38] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:17:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:39 compute-0 sshd-session[313133]: Invalid user test from 45.252.249.158 port 53586
Sep 30 18:17:39 compute-0 sshd-session[313133]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:17:39 compute-0 sshd-session[313133]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158
Sep 30 18:17:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:17:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:17:39.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:17:39 compute-0 ceph-mon[73755]: pgmap v1170: 353 pgs: 353 active+clean; 167 MiB data, 266 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 74 op/s
Sep 30 18:17:39 compute-0 nova_compute[265391]: 2025-09-30 18:17:39.425 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:17:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:39 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002d10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:17:39.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:40 compute-0 nova_compute[265391]: 2025-09-30 18:17:40.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:17:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1171: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Sep 30 18:17:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:17:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:40 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b40012b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:17:41.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:41 compute-0 ceph-mon[73755]: pgmap v1171: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Sep 30 18:17:41 compute-0 sshd-session[313133]: Failed password for invalid user test from 45.252.249.158 port 53586 ssh2
Sep 30 18:17:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:41 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c003af0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 18:17:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:17:41.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 18:17:42 compute-0 sshd-session[313133]: Received disconnect from 45.252.249.158 port 53586:11: Bye Bye [preauth]
Sep 30 18:17:42 compute-0 sshd-session[313133]: Disconnected from invalid user test 45.252.249.158 port 53586 [preauth]
Sep 30 18:17:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1172: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Sep 30 18:17:42 compute-0 ceph-mon[73755]: pgmap v1172: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Sep 30 18:17:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:43 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:43 compute-0 nova_compute[265391]: 2025-09-30 18:17:43.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:17:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 18:17:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:17:43.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 18:17:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:17:43.718Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:17:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:17:43.719Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:17:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:43 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002d10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:17:43.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1173: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:17:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:45 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4001450 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:45 compute-0 nova_compute[265391]: 2025-09-30 18:17:45.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:17:45 compute-0 ceph-mon[73755]: pgmap v1173: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:17:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:17:45.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:17:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:45 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c003af0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:17:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:17:45.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:17:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1174: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Sep 30 18:17:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:47 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:17:47.195Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:17:47 compute-0 ceph-mon[73755]: pgmap v1174: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Sep 30 18:17:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 18:17:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:17:47.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 18:17:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:47 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002d10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 18:17:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:17:47.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 18:17:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1175: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Sep 30 18:17:48 compute-0 nova_compute[265391]: 2025-09-30 18:17:48.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:17:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:17:48] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:17:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:17:48] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:17:48 compute-0 sshd-session[313148]: Invalid user violet from 14.225.220.107 port 35710
Sep 30 18:17:48 compute-0 sshd-session[313148]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:17:48 compute-0 sshd-session[313148]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107
Sep 30 18:17:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:49 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398001e90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:49 compute-0 ceph-mon[73755]: pgmap v1175: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Sep 30 18:17:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:17:49.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:49 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c003af0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:17:49.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:50 compute-0 nova_compute[265391]: 2025-09-30 18:17:50.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:17:50 compute-0 sshd-session[313148]: Failed password for invalid user violet from 14.225.220.107 port 35710 ssh2
Sep 30 18:17:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1176: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:17:50 compute-0 sshd-session[313148]: Received disconnect from 14.225.220.107 port 35710:11: Bye Bye [preauth]
Sep 30 18:17:50 compute-0 sshd-session[313148]: Disconnected from invalid user violet 14.225.220.107 port 35710 [preauth]
Sep 30 18:17:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:17:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:51 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:17:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:17:51.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:17:51 compute-0 ceph-mon[73755]: pgmap v1176: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:17:51 compute-0 sudo[313153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:17:51 compute-0 sudo[313153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:17:51 compute-0 sudo[313153]: pam_unix(sudo:session): session closed for user root
Sep 30 18:17:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:51 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002d10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:17:51.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1177: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:17:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:17:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:17:52 compute-0 ceph-mon[73755]: pgmap v1177: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:17:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:17:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:53 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398001e90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:53 compute-0 nova_compute[265391]: 2025-09-30 18:17:53.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:17:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:17:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:17:53.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:17:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:17:53.720Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:17:53 compute-0 ovn_controller[156242]: 2025-09-30T18:17:53Z|00099|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Sep 30 18:17:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:53 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c003af0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:17:53.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1178: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:17:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:54.291 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:17:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:54.291 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:17:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:17:54.292 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:17:54 compute-0 podman[313184]: 2025-09-30 18:17:54.536417079 +0000 UTC m=+0.069772408 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Sep 30 18:17:54 compute-0 podman[313183]: 2025-09-30 18:17:54.598192697 +0000 UTC m=+0.135317434 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0)
Sep 30 18:17:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:55 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:55 compute-0 nova_compute[265391]: 2025-09-30 18:17:55.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:17:55 compute-0 ceph-mon[73755]: pgmap v1178: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:17:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:17:55.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:17:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:55 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002d10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:17:55.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1179: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:17:56 compute-0 nova_compute[265391]: 2025-09-30 18:17:56.239 2 DEBUG nova.virt.libvirt.driver [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7f660b4a-3177-4f85-985d-90a46be506e6] Creating tmpfile /var/lib/nova/instances/tmpwyoivnak to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10944
Sep 30 18:17:56 compute-0 nova_compute[265391]: 2025-09-30 18:17:56.239 2 WARNING neutronclient.v2_0.client [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:17:56 compute-0 nova_compute[265391]: 2025-09-30 18:17:56.334 2 DEBUG nova.compute.manager [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwyoivnak',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.12/site-packages/nova/compute/manager.py:9086
Sep 30 18:17:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:57 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398001e90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:17:57.196Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:17:57 compute-0 ceph-mon[73755]: pgmap v1179: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:17:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:17:57.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:17:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/407820157' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:17:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:17:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/407820157' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:17:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:57 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c003af0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:17:57.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1180: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:17:58 compute-0 nova_compute[265391]: 2025-09-30 18:17:58.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:17:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/407820157' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:17:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/407820157' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:17:58 compute-0 nova_compute[265391]: 2025-09-30 18:17:58.375 2 WARNING neutronclient.v2_0.client [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:17:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:17:58] "GET /metrics HTTP/1.1" 200 46648 "" "Prometheus/2.51.0"
Sep 30 18:17:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:17:58] "GET /metrics HTTP/1.1" 200 46648 "" "Prometheus/2.51.0"
Sep 30 18:17:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:59 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:17:59.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:17:59 compute-0 ceph-mon[73755]: pgmap v1180: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:17:59 compute-0 podman[313237]: 2025-09-30 18:17:59.543726784 +0000 UTC m=+0.074704546 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Sep 30 18:17:59 compute-0 podman[276673]: time="2025-09-30T18:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:17:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:17:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10739 "" "Go-http-client/1.1"
Sep 30 18:17:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:17:59 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002d10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:17:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:17:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:17:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:17:59.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:00 compute-0 nova_compute[265391]: 2025-09-30 18:18:00.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1181: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 16 KiB/s wr, 1 op/s
Sep 30 18:18:00 compute-0 ceph-mon[73755]: pgmap v1181: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 16 KiB/s wr, 1 op/s
Sep 30 18:18:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:18:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:01 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398001e90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:18:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:18:01.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:18:01 compute-0 openstack_network_exporter[279566]: ERROR   18:18:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:18:01 compute-0 openstack_network_exporter[279566]: ERROR   18:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:18:01 compute-0 openstack_network_exporter[279566]: ERROR   18:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:18:01 compute-0 openstack_network_exporter[279566]: ERROR   18:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:18:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:18:01 compute-0 openstack_network_exporter[279566]: ERROR   18:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:18:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:18:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:01 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c003af0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:18:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:18:01.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:18:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1182: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 1023 B/s wr, 0 op/s
Sep 30 18:18:02 compute-0 nova_compute[265391]: 2025-09-30 18:18:02.189 2 DEBUG nova.compute.manager [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwyoivnak',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='7f660b4a-3177-4f85-985d-90a46be506e6',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9311
Sep 30 18:18:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:03 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:03 compute-0 nova_compute[265391]: 2025-09-30 18:18:03.206 2 DEBUG oslo_concurrency.lockutils [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-7f660b4a-3177-4f85-985d-90a46be506e6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:18:03 compute-0 nova_compute[265391]: 2025-09-30 18:18:03.207 2 DEBUG oslo_concurrency.lockutils [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-7f660b4a-3177-4f85-985d-90a46be506e6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:18:03 compute-0 nova_compute[265391]: 2025-09-30 18:18:03.207 2 DEBUG nova.network.neutron [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7f660b4a-3177-4f85-985d-90a46be506e6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:18:03 compute-0 nova_compute[265391]: 2025-09-30 18:18:03.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:18:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:18:03.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:18:03 compute-0 ceph-mon[73755]: pgmap v1182: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 1023 B/s wr, 0 op/s
Sep 30 18:18:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:18:03.721Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:18:03 compute-0 nova_compute[265391]: 2025-09-30 18:18:03.722 2 WARNING neutronclient.v2_0.client [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:18:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:03 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398001e90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:18:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:18:03.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:18:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1183: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 9.1 KiB/s wr, 1 op/s
Sep 30 18:18:04 compute-0 ceph-mon[73755]: pgmap v1183: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 9.1 KiB/s wr, 1 op/s
Sep 30 18:18:04 compute-0 nova_compute[265391]: 2025-09-30 18:18:04.392 2 WARNING neutronclient.v2_0.client [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:18:04 compute-0 nova_compute[265391]: 2025-09-30 18:18:04.642 2 DEBUG nova.network.neutron [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7f660b4a-3177-4f85-985d-90a46be506e6] Updating instance_info_cache with network_info: [{"id": "57967583-5fed-40cd-bc09-455163ece536", "address": "fa:16:3e:1d:cc:b5", "network": {"id": "cd077ee2-d26f-4989-8ea7-4aecbac7c636", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1165176741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d66b07a980744cd29ee547eb08500706", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57967583-5f", "ovs_interfaceid": "57967583-5fed-40cd-bc09-455163ece536", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:18:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:05 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384001ce0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:05 compute-0 nova_compute[265391]: 2025-09-30 18:18:05.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:05 compute-0 nova_compute[265391]: 2025-09-30 18:18:05.150 2 DEBUG oslo_concurrency.lockutils [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-7f660b4a-3177-4f85-985d-90a46be506e6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:18:05 compute-0 nova_compute[265391]: 2025-09-30 18:18:05.167 2 DEBUG nova.virt.libvirt.driver [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7f660b4a-3177-4f85-985d-90a46be506e6] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwyoivnak',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='7f660b4a-3177-4f85-985d-90a46be506e6',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11737
Sep 30 18:18:05 compute-0 nova_compute[265391]: 2025-09-30 18:18:05.168 2 DEBUG nova.virt.libvirt.driver [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7f660b4a-3177-4f85-985d-90a46be506e6] Creating instance directory: /var/lib/nova/instances/7f660b4a-3177-4f85-985d-90a46be506e6 pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11750
Sep 30 18:18:05 compute-0 nova_compute[265391]: 2025-09-30 18:18:05.169 2 DEBUG nova.virt.libvirt.driver [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7f660b4a-3177-4f85-985d-90a46be506e6] Ensure instance console log exists: /var/lib/nova/instances/7f660b4a-3177-4f85-985d-90a46be506e6/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Sep 30 18:18:05 compute-0 nova_compute[265391]: 2025-09-30 18:18:05.170 2 DEBUG nova.virt.libvirt.driver [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7f660b4a-3177-4f85-985d-90a46be506e6] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11704
Sep 30 18:18:05 compute-0 nova_compute[265391]: 2025-09-30 18:18:05.172 2 DEBUG nova.virt.libvirt.vif [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='True',created_at=2025-09-30T18:17:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteBasicStrategy-server-1361986283',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutebasicstrategy-server-1361986283',id=11,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:17:25Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='eddde596e2d64cec889cb4c4d3642bc5',ramdisk_id='',reservation_id='r-reelvu4g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteBasicStrategy-1755756413',owner_user_name='tempest-TestExecuteBasicStrategy-1755756413-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:17:26Z,user_data=None,user_id='54973270e5a040c8af5ec2225e3caec8',uuid=7f660b4a-3177-4f85-985d-90a46be506e6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "57967583-5fed-40cd-bc09-455163ece536", "address": "fa:16:3e:1d:cc:b5", "network": {"id": "cd077ee2-d26f-4989-8ea7-4aecbac7c636", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1165176741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d66b07a980744cd29ee547eb08500706", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap57967583-5f", "ovs_interfaceid": "57967583-5fed-40cd-bc09-455163ece536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Sep 30 18:18:05 compute-0 nova_compute[265391]: 2025-09-30 18:18:05.173 2 DEBUG nova.network.os_vif_util [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "57967583-5fed-40cd-bc09-455163ece536", "address": "fa:16:3e:1d:cc:b5", "network": {"id": "cd077ee2-d26f-4989-8ea7-4aecbac7c636", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1165176741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d66b07a980744cd29ee547eb08500706", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap57967583-5f", "ovs_interfaceid": "57967583-5fed-40cd-bc09-455163ece536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:18:05 compute-0 nova_compute[265391]: 2025-09-30 18:18:05.174 2 DEBUG nova.network.os_vif_util [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:cc:b5,bridge_name='br-int',has_traffic_filtering=True,id=57967583-5fed-40cd-bc09-455163ece536,network=Network(cd077ee2-d26f-4989-8ea7-4aecbac7c636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57967583-5f') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:18:05 compute-0 nova_compute[265391]: 2025-09-30 18:18:05.175 2 DEBUG os_vif [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:cc:b5,bridge_name='br-int',has_traffic_filtering=True,id=57967583-5fed-40cd-bc09-455163ece536,network=Network(cd077ee2-d26f-4989-8ea7-4aecbac7c636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57967583-5f') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Sep 30 18:18:05 compute-0 nova_compute[265391]: 2025-09-30 18:18:05.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:05 compute-0 nova_compute[265391]: 2025-09-30 18:18:05.176 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:18:05 compute-0 nova_compute[265391]: 2025-09-30 18:18:05.177 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:18:05 compute-0 nova_compute[265391]: 2025-09-30 18:18:05.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:05 compute-0 nova_compute[265391]: 2025-09-30 18:18:05.179 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': 'd75364db-889d-5cdf-a369-e34f5dc79543', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:18:05 compute-0 nova_compute[265391]: 2025-09-30 18:18:05.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:05 compute-0 nova_compute[265391]: 2025-09-30 18:18:05.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:18:05 compute-0 nova_compute[265391]: 2025-09-30 18:18:05.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:05 compute-0 nova_compute[265391]: 2025-09-30 18:18:05.228 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap57967583-5f, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:18:05 compute-0 nova_compute[265391]: 2025-09-30 18:18:05.229 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap57967583-5f, col_values=(('qos', UUID('8915ac0f-2550-46f3-ab5f-f1baaf1d878d')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:18:05 compute-0 nova_compute[265391]: 2025-09-30 18:18:05.229 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap57967583-5f, col_values=(('external_ids', {'iface-id': '57967583-5fed-40cd-bc09-455163ece536', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1d:cc:b5', 'vm-uuid': '7f660b4a-3177-4f85-985d-90a46be506e6'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:18:05 compute-0 nova_compute[265391]: 2025-09-30 18:18:05.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:05 compute-0 NetworkManager[45059]: <info>  [1759256285.2325] manager: (tap57967583-5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Sep 30 18:18:05 compute-0 nova_compute[265391]: 2025-09-30 18:18:05.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:18:05 compute-0 nova_compute[265391]: 2025-09-30 18:18:05.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:05 compute-0 nova_compute[265391]: 2025-09-30 18:18:05.244 2 INFO os_vif [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:cc:b5,bridge_name='br-int',has_traffic_filtering=True,id=57967583-5fed-40cd-bc09-455163ece536,network=Network(cd077ee2-d26f-4989-8ea7-4aecbac7c636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57967583-5f')
Sep 30 18:18:05 compute-0 nova_compute[265391]: 2025-09-30 18:18:05.245 2 DEBUG nova.virt.libvirt.driver [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11851
Sep 30 18:18:05 compute-0 nova_compute[265391]: 2025-09-30 18:18:05.245 2 DEBUG nova.compute.manager [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwyoivnak',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='7f660b4a-3177-4f85-985d-90a46be506e6',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9377
Sep 30 18:18:05 compute-0 nova_compute[265391]: 2025-09-30 18:18:05.245 2 WARNING neutronclient.v2_0.client [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:18:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:18:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:18:05.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:18:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:05 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c003af0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:18:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:18:05.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:06 compute-0 nova_compute[265391]: 2025-09-30 18:18:06.023 2 WARNING neutronclient.v2_0.client [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:18:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1184: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 9.1 KiB/s wr, 1 op/s
Sep 30 18:18:06 compute-0 podman[313267]: 2025-09-30 18:18:06.548160566 +0000 UTC m=+0.075170788 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.vendor=CentOS)
Sep 30 18:18:06 compute-0 podman[313268]: 2025-09-30 18:18:06.558163457 +0000 UTC m=+0.080293042 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-type=git, managed_by=edpm_ansible, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.33.7, architecture=x86_64, release=1755695350, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Sep 30 18:18:06 compute-0 podman[313266]: 2025-09-30 18:18:06.566688949 +0000 UTC m=+0.093079465 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, org.label-schema.build-date=20250930)
Sep 30 18:18:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:07 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003cd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:18:07.198Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:18:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:18:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:18:07 compute-0 ceph-mon[73755]: pgmap v1184: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 9.1 KiB/s wr, 1 op/s
Sep 30 18:18:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:18:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:18:07.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:18:07
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['images', 'vms', '.nfs', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'volumes', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data']
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002277151451699551 of space, bias 1.0, pg target 0.45543029033991017 quantized to 32 (current 32)
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:18:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:18:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:07 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398001e90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:18:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:18:07.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:08 compute-0 nova_compute[265391]: 2025-09-30 18:18:08.133 2 DEBUG nova.network.neutron [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7f660b4a-3177-4f85-985d-90a46be506e6] Port 57967583-5fed-40cd-bc09-455163ece536 updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.12/site-packages/nova/network/neutron.py:356
Sep 30 18:18:08 compute-0 nova_compute[265391]: 2025-09-30 18:18:08.150 2 DEBUG nova.compute.manager [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwyoivnak',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='7f660b4a-3177-4f85-985d-90a46be506e6',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9443
Sep 30 18:18:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1185: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 9.1 KiB/s wr, 1 op/s
Sep 30 18:18:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:18:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:18:08] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:18:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:18:08] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:18:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:09 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384001ce0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:18:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:18:09.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:09 compute-0 ceph-mon[73755]: pgmap v1185: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 9.1 KiB/s wr, 1 op/s
Sep 30 18:18:09 compute-0 systemd[1]: Starting libvirt proxy daemon...
Sep 30 18:18:09 compute-0 systemd[1]: Started libvirt proxy daemon.
Sep 30 18:18:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:09 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c004410 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:09 compute-0 kernel: tap57967583-5f: entered promiscuous mode
Sep 30 18:18:09 compute-0 NetworkManager[45059]: <info>  [1759256289.9984] manager: (tap57967583-5f): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Sep 30 18:18:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:18:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:18:09.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:18:10 compute-0 nova_compute[265391]: 2025-09-30 18:18:10.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:10 compute-0 ovn_controller[156242]: 2025-09-30T18:18:10Z|00100|binding|INFO|Claiming lport 57967583-5fed-40cd-bc09-455163ece536 for this additional chassis.
Sep 30 18:18:10 compute-0 ovn_controller[156242]: 2025-09-30T18:18:10Z|00101|binding|INFO|57967583-5fed-40cd-bc09-455163ece536: Claiming fa:16:3e:1d:cc:b5 10.100.0.6
Sep 30 18:18:10 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:10.010 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:cc:b5 10.100.0.6'], port_security=['fa:16:3e:1d:cc:b5 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp'}, parent_port=[], requested_additional_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7f660b4a-3177-4f85-985d-90a46be506e6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd077ee2-d26f-4989-8ea7-4aecbac7c636', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eddde596e2d64cec889cb4c4d3642bc5', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'f566abc7-3fe4-4e56-86df-377c1571ec04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=afc38829-13e1-4bde-91a7-790387f17ce5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=57967583-5fed-40cd-bc09-455163ece536) old=Port_Binding(additional_chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:18:10 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:10.011 166158 INFO neutron.agent.ovn.metadata.agent [-] Port 57967583-5fed-40cd-bc09-455163ece536 in datapath cd077ee2-d26f-4989-8ea7-4aecbac7c636 unbound from our chassis
Sep 30 18:18:10 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:10.013 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cd077ee2-d26f-4989-8ea7-4aecbac7c636
Sep 30 18:18:10 compute-0 ovn_controller[156242]: 2025-09-30T18:18:10Z|00102|binding|INFO|Setting lport 57967583-5fed-40cd-bc09-455163ece536 ovn-installed in OVS
Sep 30 18:18:10 compute-0 nova_compute[265391]: 2025-09-30 18:18:10.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:10 compute-0 nova_compute[265391]: 2025-09-30 18:18:10.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:10 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:10.031 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[6a479fed-bd28-4e11-8647-97889374becf]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:18:10 compute-0 systemd-machined[219917]: New machine qemu-8-instance-0000000b.
Sep 30 18:18:10 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-0000000b.
Sep 30 18:18:10 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:10.069 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[6313a1c1-2f0c-4112-a2b3-f99bbb1c2fe8]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:18:10 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:10.073 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[6997bb14-5742-4333-8a92-6d55dbaa96c5]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:18:10 compute-0 systemd-udevd[313366]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:18:10 compute-0 nova_compute[265391]: 2025-09-30 18:18:10.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:10 compute-0 NetworkManager[45059]: <info>  [1759256290.1003] device (tap57967583-5f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 18:18:10 compute-0 NetworkManager[45059]: <info>  [1759256290.1022] device (tap57967583-5f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Sep 30 18:18:10 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:10.124 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[8e576be5-c7ea-4f83-a253-4ebfdd534a01]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:18:10 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:10.149 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[67b4b339-0dd2-4fe4-a62f-af4cd05a78b7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd077ee2-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:56:60'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 482046, 'reachable_time': 35537, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313375, 'error': None, 'target': 'ovnmeta-cd077ee2-d26f-4989-8ea7-4aecbac7c636', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:18:10 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:10.173 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[9b2884e8-255c-4c5b-b3b0-3470282624ac]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcd077ee2-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 482060, 'tstamp': 482060}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313377, 'error': None, 'target': 'ovnmeta-cd077ee2-d26f-4989-8ea7-4aecbac7c636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcd077ee2-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 482065, 'tstamp': 482065}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313377, 'error': None, 'target': 'ovnmeta-cd077ee2-d26f-4989-8ea7-4aecbac7c636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:18:10 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:10.175 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd077ee2-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:18:10 compute-0 nova_compute[265391]: 2025-09-30 18:18:10.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:10 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:10.178 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd077ee2-d0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:18:10 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:10.179 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:18:10 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:10.179 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcd077ee2-d0, col_values=(('external_ids', {'iface-id': '44f8f232-e480-4bd7-ad4d-10a4684c061b'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:18:10 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:10.179 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:18:10 compute-0 nova_compute[265391]: 2025-09-30 18:18:10.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:10 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:10.181 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[ee2322b9-dd67-46ec-a4fb-3ed68ff6f92f]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-cd077ee2-d26f-4989-8ea7-4aecbac7c636\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/cd077ee2-d26f-4989-8ea7-4aecbac7c636.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID cd077ee2-d26f-4989-8ea7-4aecbac7c636\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:18:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1186: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 9.1 KiB/s wr, 1 op/s
Sep 30 18:18:10 compute-0 nova_compute[265391]: 2025-09-30 18:18:10.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:10 compute-0 ceph-mon[73755]: pgmap v1186: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 9.1 KiB/s wr, 1 op/s
Sep 30 18:18:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:18:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:11 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003cf0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:18:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:18:11.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:11 compute-0 sudo[313415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:18:11 compute-0 sudo[313415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:18:11 compute-0 sudo[313415]: pam_unix(sudo:session): session closed for user root
Sep 30 18:18:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:11 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398001e90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:18:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:18:12.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1187: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 8.1 KiB/s wr, 1 op/s
Sep 30 18:18:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:13 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384001ce0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:13 compute-0 ceph-mon[73755]: pgmap v1187: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 8.1 KiB/s wr, 1 op/s
Sep 30 18:18:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:18:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:18:13.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:13 compute-0 ovn_controller[156242]: 2025-09-30T18:18:13Z|00103|binding|INFO|Claiming lport 57967583-5fed-40cd-bc09-455163ece536 for this chassis.
Sep 30 18:18:13 compute-0 ovn_controller[156242]: 2025-09-30T18:18:13Z|00104|binding|INFO|57967583-5fed-40cd-bc09-455163ece536: Claiming fa:16:3e:1d:cc:b5 10.100.0.6
Sep 30 18:18:13 compute-0 ovn_controller[156242]: 2025-09-30T18:18:13Z|00105|binding|INFO|Setting lport 57967583-5fed-40cd-bc09-455163ece536 up in Southbound
Sep 30 18:18:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:18:13.723Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:18:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:13 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c004410 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:18:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:18:14.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:18:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1188: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 4.7 KiB/s rd, 10 KiB/s wr, 7 op/s
Sep 30 18:18:14 compute-0 nova_compute[265391]: 2025-09-30 18:18:14.739 2 INFO nova.compute.manager [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7f660b4a-3177-4f85-985d-90a46be506e6] Post operation of migration started
Sep 30 18:18:14 compute-0 nova_compute[265391]: 2025-09-30 18:18:14.740 2 WARNING neutronclient.v2_0.client [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:18:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:15 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003d10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:15 compute-0 nova_compute[265391]: 2025-09-30 18:18:15.022 2 WARNING neutronclient.v2_0.client [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:18:15 compute-0 nova_compute[265391]: 2025-09-30 18:18:15.023 2 WARNING neutronclient.v2_0.client [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:18:15 compute-0 nova_compute[265391]: 2025-09-30 18:18:15.083 2 DEBUG oslo_concurrency.lockutils [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-7f660b4a-3177-4f85-985d-90a46be506e6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:18:15 compute-0 nova_compute[265391]: 2025-09-30 18:18:15.084 2 DEBUG oslo_concurrency.lockutils [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-7f660b4a-3177-4f85-985d-90a46be506e6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:18:15 compute-0 nova_compute[265391]: 2025-09-30 18:18:15.084 2 DEBUG nova.network.neutron [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7f660b4a-3177-4f85-985d-90a46be506e6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:18:15 compute-0 nova_compute[265391]: 2025-09-30 18:18:15.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:15 compute-0 nova_compute[265391]: 2025-09-30 18:18:15.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:15 compute-0 ceph-mon[73755]: pgmap v1188: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 4.7 KiB/s rd, 10 KiB/s wr, 7 op/s
Sep 30 18:18:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:18:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:18:15.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:15 compute-0 nova_compute[265391]: 2025-09-30 18:18:15.594 2 WARNING neutronclient.v2_0.client [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:18:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:18:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:15 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398001e90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:18:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:18:16.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:18:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1189: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 4.6 KiB/s rd, 2.3 KiB/s wr, 5 op/s
Sep 30 18:18:16 compute-0 nova_compute[265391]: 2025-09-30 18:18:16.354 2 WARNING neutronclient.v2_0.client [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:18:16 compute-0 nova_compute[265391]: 2025-09-30 18:18:16.519 2 DEBUG nova.network.neutron [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7f660b4a-3177-4f85-985d-90a46be506e6] Updating instance_info_cache with network_info: [{"id": "57967583-5fed-40cd-bc09-455163ece536", "address": "fa:16:3e:1d:cc:b5", "network": {"id": "cd077ee2-d26f-4989-8ea7-4aecbac7c636", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1165176741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d66b07a980744cd29ee547eb08500706", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57967583-5f", "ovs_interfaceid": "57967583-5fed-40cd-bc09-455163ece536", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:18:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:17 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384001ce0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:17 compute-0 nova_compute[265391]: 2025-09-30 18:18:17.027 2 DEBUG oslo_concurrency.lockutils [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-7f660b4a-3177-4f85-985d-90a46be506e6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:18:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:18:17.199Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:18:17 compute-0 ceph-mon[73755]: pgmap v1189: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 4.6 KiB/s rd, 2.3 KiB/s wr, 5 op/s
Sep 30 18:18:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:18:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:18:17.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:17 compute-0 nova_compute[265391]: 2025-09-30 18:18:17.555 2 DEBUG oslo_concurrency.lockutils [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:18:17 compute-0 nova_compute[265391]: 2025-09-30 18:18:17.556 2 DEBUG oslo_concurrency.lockutils [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:18:17 compute-0 nova_compute[265391]: 2025-09-30 18:18:17.556 2 DEBUG oslo_concurrency.lockutils [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:18:17 compute-0 nova_compute[265391]: 2025-09-30 18:18:17.560 2 INFO nova.virt.libvirt.driver [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7f660b4a-3177-4f85-985d-90a46be506e6] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Sep 30 18:18:17 compute-0 virtqemud[265263]: Domain id=8 name='instance-0000000b' uuid=7f660b4a-3177-4f85-985d-90a46be506e6 is tainted: custom-monitor
Sep 30 18:18:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:17 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c004410 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:18:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:18:18.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:18:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1190: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 4.6 KiB/s rd, 2.3 KiB/s wr, 5 op/s
Sep 30 18:18:18 compute-0 ceph-mon[73755]: pgmap v1190: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 4.6 KiB/s rd, 2.3 KiB/s wr, 5 op/s
Sep 30 18:18:18 compute-0 nova_compute[265391]: 2025-09-30 18:18:18.565 2 INFO nova.virt.libvirt.driver [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7f660b4a-3177-4f85-985d-90a46be506e6] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Sep 30 18:18:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:18:18] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:18:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:18:18] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:18:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:19 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003d30 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:18:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:18:19.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:19 compute-0 nova_compute[265391]: 2025-09-30 18:18:19.573 2 INFO nova.virt.libvirt.driver [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7f660b4a-3177-4f85-985d-90a46be506e6] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Sep 30 18:18:19 compute-0 nova_compute[265391]: 2025-09-30 18:18:19.577 2 DEBUG nova.compute.manager [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7f660b4a-3177-4f85-985d-90a46be506e6] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Sep 30 18:18:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:19 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398001e90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:18:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:18:20.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:18:20 compute-0 nova_compute[265391]: 2025-09-30 18:18:20.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:20 compute-0 nova_compute[265391]: 2025-09-30 18:18:20.137 2 DEBUG nova.objects.instance [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7f660b4a-3177-4f85-985d-90a46be506e6] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.12/site-packages/nova/objects/instance.py:1067
Sep 30 18:18:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1191: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 4.7 KiB/s rd, 2.3 KiB/s wr, 5 op/s
Sep 30 18:18:20 compute-0 nova_compute[265391]: 2025-09-30 18:18:20.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:18:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:21 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4002140 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:21 compute-0 nova_compute[265391]: 2025-09-30 18:18:21.167 2 WARNING neutronclient.v2_0.client [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:18:21 compute-0 nova_compute[265391]: 2025-09-30 18:18:21.249 2 WARNING neutronclient.v2_0.client [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:18:21 compute-0 nova_compute[265391]: 2025-09-30 18:18:21.250 2 WARNING neutronclient.v2_0.client [None req-812acc71-08bf-4d8e-9c71-e0d0e609aa7d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:18:21 compute-0 ceph-mon[73755]: pgmap v1191: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 4.7 KiB/s rd, 2.3 KiB/s wr, 5 op/s
Sep 30 18:18:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:18:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:18:21.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:21 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840030a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:18:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:18:22.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1192: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 4.6 KiB/s rd, 2.3 KiB/s wr, 5 op/s
Sep 30 18:18:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:18:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:18:22 compute-0 sudo[313461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:18:22 compute-0 sudo[313461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:18:22 compute-0 sudo[313461]: pam_unix(sudo:session): session closed for user root
Sep 30 18:18:22 compute-0 sudo[313486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Sep 30 18:18:22 compute-0 sudo[313486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:18:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:23 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003d50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:23 compute-0 sudo[313486]: pam_unix(sudo:session): session closed for user root
Sep 30 18:18:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:18:23 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:18:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:18:23 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:18:23 compute-0 sudo[313533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:18:23 compute-0 sudo[313533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:18:23 compute-0 sudo[313533]: pam_unix(sudo:session): session closed for user root
Sep 30 18:18:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 18:18:23 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:18:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 18:18:23 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:18:23 compute-0 sudo[313558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:18:23 compute-0 sudo[313558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:18:23 compute-0 ceph-mon[73755]: pgmap v1192: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 4.6 KiB/s rd, 2.3 KiB/s wr, 5 op/s
Sep 30 18:18:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:18:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:18:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:18:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:18:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:18:23 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/4051419468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:18:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:18:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:18:23.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:18:23.724Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:18:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:23 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398001e90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:23 compute-0 sudo[313558]: pam_unix(sudo:session): session closed for user root
Sep 30 18:18:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:18:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:18:24.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:18:24 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:18:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:18:24 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:18:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:18:24 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:18:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:18:24 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:18:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:18:24 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:18:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:18:24 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:18:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:18:24 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:18:24 compute-0 sudo[313617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:18:24 compute-0 sudo[313617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:18:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1193: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 4.7 KiB/s rd, 2.3 KiB/s wr, 5 op/s
Sep 30 18:18:24 compute-0 sudo[313617]: pam_unix(sudo:session): session closed for user root
Sep 30 18:18:24 compute-0 nova_compute[265391]: 2025-09-30 18:18:24.201 2 DEBUG oslo_concurrency.lockutils [None req-c1ecca4c-e3d5-45dc-85b7-73e9ca162766 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Acquiring lock "7f660b4a-3177-4f85-985d-90a46be506e6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:18:24 compute-0 nova_compute[265391]: 2025-09-30 18:18:24.201 2 DEBUG oslo_concurrency.lockutils [None req-c1ecca4c-e3d5-45dc-85b7-73e9ca162766 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Lock "7f660b4a-3177-4f85-985d-90a46be506e6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:18:24 compute-0 nova_compute[265391]: 2025-09-30 18:18:24.202 2 DEBUG oslo_concurrency.lockutils [None req-c1ecca4c-e3d5-45dc-85b7-73e9ca162766 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Acquiring lock "7f660b4a-3177-4f85-985d-90a46be506e6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:18:24 compute-0 nova_compute[265391]: 2025-09-30 18:18:24.202 2 DEBUG oslo_concurrency.lockutils [None req-c1ecca4c-e3d5-45dc-85b7-73e9ca162766 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Lock "7f660b4a-3177-4f85-985d-90a46be506e6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:18:24 compute-0 nova_compute[265391]: 2025-09-30 18:18:24.202 2 DEBUG oslo_concurrency.lockutils [None req-c1ecca4c-e3d5-45dc-85b7-73e9ca162766 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Lock "7f660b4a-3177-4f85-985d-90a46be506e6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:18:24 compute-0 nova_compute[265391]: 2025-09-30 18:18:24.224 2 INFO nova.compute.manager [None req-c1ecca4c-e3d5-45dc-85b7-73e9ca162766 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 7f660b4a-3177-4f85-985d-90a46be506e6] Terminating instance
Sep 30 18:18:24 compute-0 sudo[313642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:18:24 compute-0 sudo[313642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:18:24 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:18:24 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:18:24 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:18:24 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:18:24 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:18:24 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:18:24 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:18:24 compute-0 podman[313709]: 2025-09-30 18:18:24.711185929 +0000 UTC m=+0.052446536 container create 559fce002e5353c216486e498a1fb52e44422696bb76cac0aea47559cbbb02f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_margulis, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Sep 30 18:18:24 compute-0 nova_compute[265391]: 2025-09-30 18:18:24.740 2 DEBUG nova.compute.manager [None req-c1ecca4c-e3d5-45dc-85b7-73e9ca162766 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 7f660b4a-3177-4f85-985d-90a46be506e6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Sep 30 18:18:24 compute-0 systemd[1]: Started libpod-conmon-559fce002e5353c216486e498a1fb52e44422696bb76cac0aea47559cbbb02f6.scope.
Sep 30 18:18:24 compute-0 podman[313709]: 2025-09-30 18:18:24.682865952 +0000 UTC m=+0.024126579 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:18:24 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:18:24 compute-0 kernel: tap57967583-5f (unregistering): left promiscuous mode
Sep 30 18:18:24 compute-0 podman[313709]: 2025-09-30 18:18:24.817267801 +0000 UTC m=+0.158528428 container init 559fce002e5353c216486e498a1fb52e44422696bb76cac0aea47559cbbb02f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_margulis, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:18:24 compute-0 NetworkManager[45059]: <info>  [1759256304.8179] device (tap57967583-5f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 18:18:24 compute-0 podman[313709]: 2025-09-30 18:18:24.827526248 +0000 UTC m=+0.168786855 container start 559fce002e5353c216486e498a1fb52e44422696bb76cac0aea47559cbbb02f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_margulis, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True)
Sep 30 18:18:24 compute-0 nova_compute[265391]: 2025-09-30 18:18:24.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:24 compute-0 ovn_controller[156242]: 2025-09-30T18:18:24Z|00106|binding|INFO|Releasing lport 57967583-5fed-40cd-bc09-455163ece536 from this chassis (sb_readonly=0)
Sep 30 18:18:24 compute-0 ovn_controller[156242]: 2025-09-30T18:18:24Z|00107|binding|INFO|Setting lport 57967583-5fed-40cd-bc09-455163ece536 down in Southbound
Sep 30 18:18:24 compute-0 ovn_controller[156242]: 2025-09-30T18:18:24Z|00108|binding|INFO|Removing iface tap57967583-5f ovn-installed in OVS
Sep 30 18:18:24 compute-0 nova_compute[265391]: 2025-09-30 18:18:24.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:24 compute-0 podman[313709]: 2025-09-30 18:18:24.833988956 +0000 UTC m=+0.175249563 container attach 559fce002e5353c216486e498a1fb52e44422696bb76cac0aea47559cbbb02f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_margulis, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:18:24 compute-0 unruffled_margulis[313727]: 167 167
Sep 30 18:18:24 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:24.836 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:cc:b5 10.100.0.6'], port_security=['fa:16:3e:1d:cc:b5 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7f660b4a-3177-4f85-985d-90a46be506e6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd077ee2-d26f-4989-8ea7-4aecbac7c636', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eddde596e2d64cec889cb4c4d3642bc5', 'neutron:revision_number': '15', 'neutron:security_group_ids': 'f566abc7-3fe4-4e56-86df-377c1571ec04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=afc38829-13e1-4bde-91a7-790387f17ce5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=57967583-5fed-40cd-bc09-455163ece536) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:18:24 compute-0 systemd[1]: libpod-559fce002e5353c216486e498a1fb52e44422696bb76cac0aea47559cbbb02f6.scope: Deactivated successfully.
Sep 30 18:18:24 compute-0 podman[313709]: 2025-09-30 18:18:24.838153154 +0000 UTC m=+0.179413761 container died 559fce002e5353c216486e498a1fb52e44422696bb76cac0aea47559cbbb02f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_margulis, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:18:24 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:24.839 166158 INFO neutron.agent.ovn.metadata.agent [-] Port 57967583-5fed-40cd-bc09-455163ece536 in datapath cd077ee2-d26f-4989-8ea7-4aecbac7c636 unbound from our chassis
Sep 30 18:18:24 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:24.840 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cd077ee2-d26f-4989-8ea7-4aecbac7c636
Sep 30 18:18:24 compute-0 nova_compute[265391]: 2025-09-30 18:18:24.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:24 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:24.864 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[e96d5ceb-b191-4922-b77c-601443b3960e]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:18:24 compute-0 podman[313726]: 2025-09-30 18:18:24.867846417 +0000 UTC m=+0.094358847 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:18:24 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Sep 30 18:18:24 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d0000000b.scope: Consumed 3.245s CPU time.
Sep 30 18:18:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-91cd2e30cc39f7e319a676e7fee251e1c2e7eb23cc7f11a7d0f2ae46040cf73c-merged.mount: Deactivated successfully.
Sep 30 18:18:24 compute-0 systemd-machined[219917]: Machine qemu-8-instance-0000000b terminated.
Sep 30 18:18:24 compute-0 podman[313709]: 2025-09-30 18:18:24.904985414 +0000 UTC m=+0.246246021 container remove 559fce002e5353c216486e498a1fb52e44422696bb76cac0aea47559cbbb02f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:18:24 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:24.908 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[224b8fdf-f5b9-4be1-bc71-b0eeea76788e]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:18:24 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:24.913 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[bb817d69-fa46-487e-9f17-6575c8143cf8]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:18:24 compute-0 systemd[1]: libpod-conmon-559fce002e5353c216486e498a1fb52e44422696bb76cac0aea47559cbbb02f6.scope: Deactivated successfully.
Sep 30 18:18:24 compute-0 podman[313723]: 2025-09-30 18:18:24.935336205 +0000 UTC m=+0.160769727 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=watcher_latest)
Sep 30 18:18:24 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:24.949 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[b25f16cc-ff57-4426-b27b-a9c6fd117980]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:18:24 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:24.970 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[015fb3f6-3acc-4914-9868-cadaff910316]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd077ee2-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:56:60'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 482046, 'reachable_time': 35537, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313806, 'error': None, 'target': 'ovnmeta-cd077ee2-d26f-4989-8ea7-4aecbac7c636', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:18:24 compute-0 nova_compute[265391]: 2025-09-30 18:18:24.984 2 INFO nova.virt.libvirt.driver [-] [instance: 7f660b4a-3177-4f85-985d-90a46be506e6] Instance destroyed successfully.
Sep 30 18:18:24 compute-0 nova_compute[265391]: 2025-09-30 18:18:24.985 2 DEBUG nova.objects.instance [None req-c1ecca4c-e3d5-45dc-85b7-73e9ca162766 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Lazy-loading 'resources' on Instance uuid 7f660b4a-3177-4f85-985d-90a46be506e6 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:18:24 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:24.988 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[d5a954ee-2f88-4655-91fa-9750e6e8cb19]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcd077ee2-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 482060, 'tstamp': 482060}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313816, 'error': None, 'target': 'ovnmeta-cd077ee2-d26f-4989-8ea7-4aecbac7c636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcd077ee2-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 482065, 'tstamp': 482065}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313816, 'error': None, 'target': 'ovnmeta-cd077ee2-d26f-4989-8ea7-4aecbac7c636', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:18:24 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:24.989 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd077ee2-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:18:24 compute-0 nova_compute[265391]: 2025-09-30 18:18:24.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:24 compute-0 nova_compute[265391]: 2025-09-30 18:18:24.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:25 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:24.997 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd077ee2-d0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:18:25 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:24.998 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:18:25 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:24.998 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcd077ee2-d0, col_values=(('external_ids', {'iface-id': '44f8f232-e480-4bd7-ad4d-10a4684c061b'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:18:25 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:24.998 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:18:25 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:25.000 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[bceddc4a-9cf9-439a-a128-3056d813a5b7]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-cd077ee2-d26f-4989-8ea7-4aecbac7c636\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/cd077ee2-d26f-4989-8ea7-4aecbac7c636.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID cd077ee2-d26f-4989-8ea7-4aecbac7c636\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:18:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:25 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4002140 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:25 compute-0 nova_compute[265391]: 2025-09-30 18:18:25.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:25 compute-0 podman[313824]: 2025-09-30 18:18:25.111399549 +0000 UTC m=+0.055026084 container create 7d3aad817cb6b14ca3be71eb940828e80eb0f35c3d48bb74673346419037ba54 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Sep 30 18:18:25 compute-0 systemd[1]: Started libpod-conmon-7d3aad817cb6b14ca3be71eb940828e80eb0f35c3d48bb74673346419037ba54.scope.
Sep 30 18:18:25 compute-0 podman[313824]: 2025-09-30 18:18:25.081558512 +0000 UTC m=+0.025185097 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:18:25 compute-0 nova_compute[265391]: 2025-09-30 18:18:25.200 2 DEBUG nova.compute.manager [req-201965be-ade8-4755-98b2-e3e7a0736e0b req-ab32a6c2-a101-4fdb-b66b-a38baa1cb2b6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7f660b4a-3177-4f85-985d-90a46be506e6] Received event network-vif-unplugged-57967583-5fed-40cd-bc09-455163ece536 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:18:25 compute-0 nova_compute[265391]: 2025-09-30 18:18:25.201 2 DEBUG oslo_concurrency.lockutils [req-201965be-ade8-4755-98b2-e3e7a0736e0b req-ab32a6c2-a101-4fdb-b66b-a38baa1cb2b6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "7f660b4a-3177-4f85-985d-90a46be506e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:18:25 compute-0 nova_compute[265391]: 2025-09-30 18:18:25.201 2 DEBUG oslo_concurrency.lockutils [req-201965be-ade8-4755-98b2-e3e7a0736e0b req-ab32a6c2-a101-4fdb-b66b-a38baa1cb2b6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "7f660b4a-3177-4f85-985d-90a46be506e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:18:25 compute-0 nova_compute[265391]: 2025-09-30 18:18:25.201 2 DEBUG oslo_concurrency.lockutils [req-201965be-ade8-4755-98b2-e3e7a0736e0b req-ab32a6c2-a101-4fdb-b66b-a38baa1cb2b6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "7f660b4a-3177-4f85-985d-90a46be506e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:18:25 compute-0 nova_compute[265391]: 2025-09-30 18:18:25.201 2 DEBUG nova.compute.manager [req-201965be-ade8-4755-98b2-e3e7a0736e0b req-ab32a6c2-a101-4fdb-b66b-a38baa1cb2b6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7f660b4a-3177-4f85-985d-90a46be506e6] No waiting events found dispatching network-vif-unplugged-57967583-5fed-40cd-bc09-455163ece536 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:18:25 compute-0 nova_compute[265391]: 2025-09-30 18:18:25.201 2 DEBUG nova.compute.manager [req-201965be-ade8-4755-98b2-e3e7a0736e0b req-ab32a6c2-a101-4fdb-b66b-a38baa1cb2b6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7f660b4a-3177-4f85-985d-90a46be506e6] Received event network-vif-unplugged-57967583-5fed-40cd-bc09-455163ece536 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:18:25 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:18:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c516b244207cd77b037c526e419b5bf34d3f9a54b51eb0144f9e2cbfb0b89f6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:18:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c516b244207cd77b037c526e419b5bf34d3f9a54b51eb0144f9e2cbfb0b89f6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:18:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c516b244207cd77b037c526e419b5bf34d3f9a54b51eb0144f9e2cbfb0b89f6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:18:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c516b244207cd77b037c526e419b5bf34d3f9a54b51eb0144f9e2cbfb0b89f6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:18:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c516b244207cd77b037c526e419b5bf34d3f9a54b51eb0144f9e2cbfb0b89f6a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:18:25 compute-0 nova_compute[265391]: 2025-09-30 18:18:25.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:25 compute-0 podman[313824]: 2025-09-30 18:18:25.239451643 +0000 UTC m=+0.183078168 container init 7d3aad817cb6b14ca3be71eb940828e80eb0f35c3d48bb74673346419037ba54 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Sep 30 18:18:25 compute-0 podman[313824]: 2025-09-30 18:18:25.250812689 +0000 UTC m=+0.194439244 container start 7d3aad817cb6b14ca3be71eb940828e80eb0f35c3d48bb74673346419037ba54 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Sep 30 18:18:25 compute-0 podman[313824]: 2025-09-30 18:18:25.255205083 +0000 UTC m=+0.198831628 container attach 7d3aad817cb6b14ca3be71eb940828e80eb0f35c3d48bb74673346419037ba54 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_leakey, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 18:18:25 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:25.307 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:18:25 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:25.308 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:18:25 compute-0 nova_compute[265391]: 2025-09-30 18:18:25.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:25 compute-0 ceph-mon[73755]: pgmap v1193: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 4.7 KiB/s rd, 2.3 KiB/s wr, 5 op/s
Sep 30 18:18:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:18:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:18:25.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:25 compute-0 nova_compute[265391]: 2025-09-30 18:18:25.500 2 DEBUG nova.virt.libvirt.vif [None req-c1ecca4c-e3d5-45dc-85b7-73e9ca162766 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,compute_id=1,config_drive='True',created_at=2025-09-30T18:17:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteBasicStrategy-server-1361986283',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutebasicstrategy-server-1361986283',id=11,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:17:25Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddde596e2d64cec889cb4c4d3642bc5',ramdisk_id='',reservation_id='r-reelvu4g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',clean_attempts='1',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteBasicStrategy-1755756413',owner_user_name='tempest-TestExecuteBasicStrategy-1755756413-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:18:20Z,user_data=None,user_id='54973270e5a040c8af5ec2225e3caec8',uuid=7f660b4a-3177-4f85-985d-90a46be506e6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "57967583-5fed-40cd-bc09-455163ece536", "address": "fa:16:3e:1d:cc:b5", "network": {"id": "cd077ee2-d26f-4989-8ea7-4aecbac7c636", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1165176741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d66b07a980744cd29ee547eb08500706", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57967583-5f", "ovs_interfaceid": "57967583-5fed-40cd-bc09-455163ece536", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Sep 30 18:18:25 compute-0 nova_compute[265391]: 2025-09-30 18:18:25.500 2 DEBUG nova.network.os_vif_util [None req-c1ecca4c-e3d5-45dc-85b7-73e9ca162766 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Converting VIF {"id": "57967583-5fed-40cd-bc09-455163ece536", "address": "fa:16:3e:1d:cc:b5", "network": {"id": "cd077ee2-d26f-4989-8ea7-4aecbac7c636", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1165176741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d66b07a980744cd29ee547eb08500706", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57967583-5f", "ovs_interfaceid": "57967583-5fed-40cd-bc09-455163ece536", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:18:25 compute-0 nova_compute[265391]: 2025-09-30 18:18:25.501 2 DEBUG nova.network.os_vif_util [None req-c1ecca4c-e3d5-45dc-85b7-73e9ca162766 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1d:cc:b5,bridge_name='br-int',has_traffic_filtering=True,id=57967583-5fed-40cd-bc09-455163ece536,network=Network(cd077ee2-d26f-4989-8ea7-4aecbac7c636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57967583-5f') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:18:25 compute-0 nova_compute[265391]: 2025-09-30 18:18:25.502 2 DEBUG os_vif [None req-c1ecca4c-e3d5-45dc-85b7-73e9ca162766 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1d:cc:b5,bridge_name='br-int',has_traffic_filtering=True,id=57967583-5fed-40cd-bc09-455163ece536,network=Network(cd077ee2-d26f-4989-8ea7-4aecbac7c636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57967583-5f') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Sep 30 18:18:25 compute-0 nova_compute[265391]: 2025-09-30 18:18:25.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:25 compute-0 nova_compute[265391]: 2025-09-30 18:18:25.505 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap57967583-5f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:18:25 compute-0 nova_compute[265391]: 2025-09-30 18:18:25.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:25 compute-0 nova_compute[265391]: 2025-09-30 18:18:25.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:18:25 compute-0 nova_compute[265391]: 2025-09-30 18:18:25.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:25 compute-0 nova_compute[265391]: 2025-09-30 18:18:25.510 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=8915ac0f-2550-46f3-ab5f-f1baaf1d878d) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:18:25 compute-0 nova_compute[265391]: 2025-09-30 18:18:25.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:25 compute-0 nova_compute[265391]: 2025-09-30 18:18:25.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:25 compute-0 nova_compute[265391]: 2025-09-30 18:18:25.513 2 INFO os_vif [None req-c1ecca4c-e3d5-45dc-85b7-73e9ca162766 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1d:cc:b5,bridge_name='br-int',has_traffic_filtering=True,id=57967583-5fed-40cd-bc09-455163ece536,network=Network(cd077ee2-d26f-4989-8ea7-4aecbac7c636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57967583-5f')
Sep 30 18:18:25 compute-0 funny_leakey[313840]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:18:25 compute-0 funny_leakey[313840]: --> All data devices are unavailable
Sep 30 18:18:25 compute-0 systemd[1]: libpod-7d3aad817cb6b14ca3be71eb940828e80eb0f35c3d48bb74673346419037ba54.scope: Deactivated successfully.
Sep 30 18:18:25 compute-0 podman[313876]: 2025-09-30 18:18:25.705986741 +0000 UTC m=+0.032438846 container died 7d3aad817cb6b14ca3be71eb940828e80eb0f35c3d48bb74673346419037ba54 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Sep 30 18:18:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:18:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-c516b244207cd77b037c526e419b5bf34d3f9a54b51eb0144f9e2cbfb0b89f6a-merged.mount: Deactivated successfully.
Sep 30 18:18:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:25 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840030a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:25 compute-0 podman[313876]: 2025-09-30 18:18:25.955640261 +0000 UTC m=+0.282092366 container remove 7d3aad817cb6b14ca3be71eb940828e80eb0f35c3d48bb74673346419037ba54 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Sep 30 18:18:25 compute-0 systemd[1]: libpod-conmon-7d3aad817cb6b14ca3be71eb940828e80eb0f35c3d48bb74673346419037ba54.scope: Deactivated successfully.
Sep 30 18:18:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:18:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:18:26.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:18:26 compute-0 sudo[313642]: pam_unix(sudo:session): session closed for user root
Sep 30 18:18:26 compute-0 sudo[313893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:18:26 compute-0 sudo[313893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:18:26 compute-0 sudo[313893]: pam_unix(sudo:session): session closed for user root
Sep 30 18:18:26 compute-0 sudo[313918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:18:26 compute-0 sudo[313918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:18:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1194: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:18:26 compute-0 nova_compute[265391]: 2025-09-30 18:18:26.273 2 INFO nova.virt.libvirt.driver [None req-c1ecca4c-e3d5-45dc-85b7-73e9ca162766 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 7f660b4a-3177-4f85-985d-90a46be506e6] Deleting instance files /var/lib/nova/instances/7f660b4a-3177-4f85-985d-90a46be506e6_del
Sep 30 18:18:26 compute-0 nova_compute[265391]: 2025-09-30 18:18:26.275 2 INFO nova.virt.libvirt.driver [None req-c1ecca4c-e3d5-45dc-85b7-73e9ca162766 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 7f660b4a-3177-4f85-985d-90a46be506e6] Deletion of /var/lib/nova/instances/7f660b4a-3177-4f85-985d-90a46be506e6_del complete
Sep 30 18:18:26 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1562792068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:18:26 compute-0 ceph-mon[73755]: pgmap v1194: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:18:26 compute-0 nova_compute[265391]: 2025-09-30 18:18:26.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:18:26 compute-0 unix_chkpwd[313970]: password check failed for user (root)
Sep 30 18:18:26 compute-0 sshd-session[313457]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=154.125.120.7  user=root
Sep 30 18:18:26 compute-0 podman[313985]: 2025-09-30 18:18:26.659878058 +0000 UTC m=+0.049776697 container create 33f7b5d869d7a6cc77dafacf5efade938959d963bad080e3ec58393ba89439a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_babbage, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Sep 30 18:18:26 compute-0 systemd[1]: Started libpod-conmon-33f7b5d869d7a6cc77dafacf5efade938959d963bad080e3ec58393ba89439a9.scope.
Sep 30 18:18:26 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:18:26 compute-0 podman[313985]: 2025-09-30 18:18:26.64150571 +0000 UTC m=+0.031404369 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:18:26 compute-0 podman[313985]: 2025-09-30 18:18:26.754647556 +0000 UTC m=+0.144546225 container init 33f7b5d869d7a6cc77dafacf5efade938959d963bad080e3ec58393ba89439a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True)
Sep 30 18:18:26 compute-0 podman[313985]: 2025-09-30 18:18:26.765103008 +0000 UTC m=+0.155001647 container start 33f7b5d869d7a6cc77dafacf5efade938959d963bad080e3ec58393ba89439a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Sep 30 18:18:26 compute-0 podman[313985]: 2025-09-30 18:18:26.771044033 +0000 UTC m=+0.160942692 container attach 33f7b5d869d7a6cc77dafacf5efade938959d963bad080e3ec58393ba89439a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_babbage, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Sep 30 18:18:26 compute-0 distracted_babbage[314002]: 167 167
Sep 30 18:18:26 compute-0 podman[313985]: 2025-09-30 18:18:26.772808199 +0000 UTC m=+0.162706848 container died 33f7b5d869d7a6cc77dafacf5efade938959d963bad080e3ec58393ba89439a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Sep 30 18:18:26 compute-0 systemd[1]: libpod-33f7b5d869d7a6cc77dafacf5efade938959d963bad080e3ec58393ba89439a9.scope: Deactivated successfully.
Sep 30 18:18:26 compute-0 nova_compute[265391]: 2025-09-30 18:18:26.791 2 INFO nova.compute.manager [None req-c1ecca4c-e3d5-45dc-85b7-73e9ca162766 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 7f660b4a-3177-4f85-985d-90a46be506e6] Took 2.05 seconds to destroy the instance on the hypervisor.
Sep 30 18:18:26 compute-0 nova_compute[265391]: 2025-09-30 18:18:26.792 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-c1ecca4c-e3d5-45dc-85b7-73e9ca162766 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Sep 30 18:18:26 compute-0 nova_compute[265391]: 2025-09-30 18:18:26.793 2 DEBUG nova.compute.manager [-] [instance: 7f660b4a-3177-4f85-985d-90a46be506e6] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Sep 30 18:18:26 compute-0 nova_compute[265391]: 2025-09-30 18:18:26.793 2 DEBUG nova.network.neutron [-] [instance: 7f660b4a-3177-4f85-985d-90a46be506e6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Sep 30 18:18:26 compute-0 nova_compute[265391]: 2025-09-30 18:18:26.794 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:18:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-3810fcd021f0be7a4d2a5bda2d51952f2dc7de2df7292c1d79bba12f6b4ec147-merged.mount: Deactivated successfully.
Sep 30 18:18:26 compute-0 podman[313985]: 2025-09-30 18:18:26.828312204 +0000 UTC m=+0.218210843 container remove 33f7b5d869d7a6cc77dafacf5efade938959d963bad080e3ec58393ba89439a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_babbage, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Sep 30 18:18:26 compute-0 systemd[1]: libpod-conmon-33f7b5d869d7a6cc77dafacf5efade938959d963bad080e3ec58393ba89439a9.scope: Deactivated successfully.
Sep 30 18:18:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:27 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4002140 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:27 compute-0 nova_compute[265391]: 2025-09-30 18:18:27.056 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:18:27 compute-0 podman[314026]: 2025-09-30 18:18:27.062242505 +0000 UTC m=+0.068434053 container create abb10c00de4e8eb30917fabfebb4c15253e0d1ccf4e6990bf0fb20d4d574c9fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:18:27 compute-0 systemd[1]: Started libpod-conmon-abb10c00de4e8eb30917fabfebb4c15253e0d1ccf4e6990bf0fb20d4d574c9fa.scope.
Sep 30 18:18:27 compute-0 podman[314026]: 2025-09-30 18:18:27.038512507 +0000 UTC m=+0.044704065 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:18:27 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:18:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d1a1e962248e259493d0b89891b1363f124d4a52305288b7bafdc23cdc356b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:18:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d1a1e962248e259493d0b89891b1363f124d4a52305288b7bafdc23cdc356b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:18:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d1a1e962248e259493d0b89891b1363f124d4a52305288b7bafdc23cdc356b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:18:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d1a1e962248e259493d0b89891b1363f124d4a52305288b7bafdc23cdc356b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:18:27 compute-0 podman[314026]: 2025-09-30 18:18:27.161692934 +0000 UTC m=+0.167884542 container init abb10c00de4e8eb30917fabfebb4c15253e0d1ccf4e6990bf0fb20d4d574c9fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Sep 30 18:18:27 compute-0 podman[314026]: 2025-09-30 18:18:27.169319813 +0000 UTC m=+0.175511381 container start abb10c00de4e8eb30917fabfebb4c15253e0d1ccf4e6990bf0fb20d4d574c9fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_fermi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:18:27 compute-0 podman[314026]: 2025-09-30 18:18:27.173248005 +0000 UTC m=+0.179439643 container attach abb10c00de4e8eb30917fabfebb4c15253e0d1ccf4e6990bf0fb20d4d574c9fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_fermi, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:18:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:18:27.200Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:18:27 compute-0 nova_compute[265391]: 2025-09-30 18:18:27.275 2 DEBUG nova.compute.manager [req-51a56564-943d-4ade-a314-ca7dff1e11c2 req-8e58f6d2-d91e-4e20-bb21-f57f2758baa6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7f660b4a-3177-4f85-985d-90a46be506e6] Received event network-vif-unplugged-57967583-5fed-40cd-bc09-455163ece536 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:18:27 compute-0 nova_compute[265391]: 2025-09-30 18:18:27.276 2 DEBUG oslo_concurrency.lockutils [req-51a56564-943d-4ade-a314-ca7dff1e11c2 req-8e58f6d2-d91e-4e20-bb21-f57f2758baa6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "7f660b4a-3177-4f85-985d-90a46be506e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:18:27 compute-0 nova_compute[265391]: 2025-09-30 18:18:27.276 2 DEBUG oslo_concurrency.lockutils [req-51a56564-943d-4ade-a314-ca7dff1e11c2 req-8e58f6d2-d91e-4e20-bb21-f57f2758baa6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "7f660b4a-3177-4f85-985d-90a46be506e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:18:27 compute-0 nova_compute[265391]: 2025-09-30 18:18:27.277 2 DEBUG oslo_concurrency.lockutils [req-51a56564-943d-4ade-a314-ca7dff1e11c2 req-8e58f6d2-d91e-4e20-bb21-f57f2758baa6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "7f660b4a-3177-4f85-985d-90a46be506e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:18:27 compute-0 nova_compute[265391]: 2025-09-30 18:18:27.277 2 DEBUG nova.compute.manager [req-51a56564-943d-4ade-a314-ca7dff1e11c2 req-8e58f6d2-d91e-4e20-bb21-f57f2758baa6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7f660b4a-3177-4f85-985d-90a46be506e6] No waiting events found dispatching network-vif-unplugged-57967583-5fed-40cd-bc09-455163ece536 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:18:27 compute-0 nova_compute[265391]: 2025-09-30 18:18:27.278 2 DEBUG nova.compute.manager [req-51a56564-943d-4ade-a314-ca7dff1e11c2 req-8e58f6d2-d91e-4e20-bb21-f57f2758baa6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7f660b4a-3177-4f85-985d-90a46be506e6] Received event network-vif-unplugged-57967583-5fed-40cd-bc09-455163ece536 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:18:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:18:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:18:27.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:27 compute-0 cranky_fermi[314042]: {
Sep 30 18:18:27 compute-0 cranky_fermi[314042]:     "0": [
Sep 30 18:18:27 compute-0 cranky_fermi[314042]:         {
Sep 30 18:18:27 compute-0 cranky_fermi[314042]:             "devices": [
Sep 30 18:18:27 compute-0 cranky_fermi[314042]:                 "/dev/loop3"
Sep 30 18:18:27 compute-0 cranky_fermi[314042]:             ],
Sep 30 18:18:27 compute-0 cranky_fermi[314042]:             "lv_name": "ceph_lv0",
Sep 30 18:18:27 compute-0 cranky_fermi[314042]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:18:27 compute-0 cranky_fermi[314042]:             "lv_size": "21470642176",
Sep 30 18:18:27 compute-0 cranky_fermi[314042]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:18:27 compute-0 cranky_fermi[314042]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:18:27 compute-0 cranky_fermi[314042]:             "name": "ceph_lv0",
Sep 30 18:18:27 compute-0 cranky_fermi[314042]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:18:27 compute-0 cranky_fermi[314042]:             "tags": {
Sep 30 18:18:27 compute-0 cranky_fermi[314042]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:18:27 compute-0 cranky_fermi[314042]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:18:27 compute-0 cranky_fermi[314042]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:18:27 compute-0 cranky_fermi[314042]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:18:27 compute-0 cranky_fermi[314042]:                 "ceph.cluster_name": "ceph",
Sep 30 18:18:27 compute-0 cranky_fermi[314042]:                 "ceph.crush_device_class": "",
Sep 30 18:18:27 compute-0 cranky_fermi[314042]:                 "ceph.encrypted": "0",
Sep 30 18:18:27 compute-0 cranky_fermi[314042]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:18:27 compute-0 cranky_fermi[314042]:                 "ceph.osd_id": "0",
Sep 30 18:18:27 compute-0 cranky_fermi[314042]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:18:27 compute-0 cranky_fermi[314042]:                 "ceph.type": "block",
Sep 30 18:18:27 compute-0 cranky_fermi[314042]:                 "ceph.vdo": "0",
Sep 30 18:18:27 compute-0 cranky_fermi[314042]:                 "ceph.with_tpm": "0"
Sep 30 18:18:27 compute-0 cranky_fermi[314042]:             },
Sep 30 18:18:27 compute-0 cranky_fermi[314042]:             "type": "block",
Sep 30 18:18:27 compute-0 cranky_fermi[314042]:             "vg_name": "ceph_vg0"
Sep 30 18:18:27 compute-0 cranky_fermi[314042]:         }
Sep 30 18:18:27 compute-0 cranky_fermi[314042]:     ]
Sep 30 18:18:27 compute-0 cranky_fermi[314042]: }
Sep 30 18:18:27 compute-0 systemd[1]: libpod-abb10c00de4e8eb30917fabfebb4c15253e0d1ccf4e6990bf0fb20d4d574c9fa.scope: Deactivated successfully.
Sep 30 18:18:27 compute-0 podman[314026]: 2025-09-30 18:18:27.521851152 +0000 UTC m=+0.528042690 container died abb10c00de4e8eb30917fabfebb4c15253e0d1ccf4e6990bf0fb20d4d574c9fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Sep 30 18:18:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d1a1e962248e259493d0b89891b1363f124d4a52305288b7bafdc23cdc356b6-merged.mount: Deactivated successfully.
Sep 30 18:18:27 compute-0 podman[314026]: 2025-09-30 18:18:27.581866815 +0000 UTC m=+0.588058353 container remove abb10c00de4e8eb30917fabfebb4c15253e0d1ccf4e6990bf0fb20d4d574c9fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Sep 30 18:18:27 compute-0 systemd[1]: libpod-conmon-abb10c00de4e8eb30917fabfebb4c15253e0d1ccf4e6990bf0fb20d4d574c9fa.scope: Deactivated successfully.
Sep 30 18:18:27 compute-0 sudo[313918]: pam_unix(sudo:session): session closed for user root
Sep 30 18:18:27 compute-0 sudo[314066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:18:27 compute-0 sudo[314066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:18:27 compute-0 sudo[314066]: pam_unix(sudo:session): session closed for user root
Sep 30 18:18:27 compute-0 nova_compute[265391]: 2025-09-30 18:18:27.792 2 DEBUG nova.network.neutron [-] [instance: 7f660b4a-3177-4f85-985d-90a46be506e6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:18:27 compute-0 sudo[314092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:18:27 compute-0 sudo[314092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:18:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:27 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003d70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:27 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 18:18:27 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 14K writes, 52K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 14K writes, 4353 syncs, 3.30 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4735 writes, 17K keys, 4735 commit groups, 1.0 writes per commit group, ingest: 19.06 MB, 0.03 MB/s
                                           Interval WAL: 4735 writes, 1896 syncs, 2.50 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Sep 30 18:18:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:18:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:18:28.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1195: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:18:28 compute-0 sshd-session[313457]: Failed password for root from 154.125.120.7 port 48985 ssh2
Sep 30 18:18:28 compute-0 nova_compute[265391]: 2025-09-30 18:18:28.309 2 INFO nova.compute.manager [-] [instance: 7f660b4a-3177-4f85-985d-90a46be506e6] Took 1.52 seconds to deallocate network for instance.
Sep 30 18:18:28 compute-0 podman[314157]: 2025-09-30 18:18:28.243662266 +0000 UTC m=+0.038946525 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:18:28 compute-0 podman[314157]: 2025-09-30 18:18:28.349538602 +0000 UTC m=+0.144822771 container create 61e166015128359999e785872b86f57e3d4e2883cea2cc1d9888111d1cd9cc5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Sep 30 18:18:28 compute-0 systemd[1]: Started libpod-conmon-61e166015128359999e785872b86f57e3d4e2883cea2cc1d9888111d1cd9cc5e.scope.
Sep 30 18:18:28 compute-0 nova_compute[265391]: 2025-09-30 18:18:28.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:18:28 compute-0 nova_compute[265391]: 2025-09-30 18:18:28.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:18:28 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:18:28 compute-0 podman[314157]: 2025-09-30 18:18:28.465016089 +0000 UTC m=+0.260300288 container init 61e166015128359999e785872b86f57e3d4e2883cea2cc1d9888111d1cd9cc5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_bartik, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:18:28 compute-0 podman[314157]: 2025-09-30 18:18:28.477255867 +0000 UTC m=+0.272540036 container start 61e166015128359999e785872b86f57e3d4e2883cea2cc1d9888111d1cd9cc5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_bartik, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:18:28 compute-0 serene_bartik[314173]: 167 167
Sep 30 18:18:28 compute-0 systemd[1]: libpod-61e166015128359999e785872b86f57e3d4e2883cea2cc1d9888111d1cd9cc5e.scope: Deactivated successfully.
Sep 30 18:18:28 compute-0 podman[314157]: 2025-09-30 18:18:28.555792112 +0000 UTC m=+0.351076391 container attach 61e166015128359999e785872b86f57e3d4e2883cea2cc1d9888111d1cd9cc5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_bartik, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:18:28 compute-0 podman[314157]: 2025-09-30 18:18:28.556532372 +0000 UTC m=+0.351816541 container died 61e166015128359999e785872b86f57e3d4e2883cea2cc1d9888111d1cd9cc5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:18:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:18:28] "GET /metrics HTTP/1.1" 200 46650 "" "Prometheus/2.51.0"
Sep 30 18:18:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:18:28] "GET /metrics HTTP/1.1" 200 46650 "" "Prometheus/2.51.0"
Sep 30 18:18:28 compute-0 nova_compute[265391]: 2025-09-30 18:18:28.873 2 DEBUG oslo_concurrency.lockutils [None req-c1ecca4c-e3d5-45dc-85b7-73e9ca162766 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:18:28 compute-0 nova_compute[265391]: 2025-09-30 18:18:28.874 2 DEBUG oslo_concurrency.lockutils [None req-c1ecca4c-e3d5-45dc-85b7-73e9ca162766 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:18:28 compute-0 nova_compute[265391]: 2025-09-30 18:18:28.881 2 DEBUG oslo_concurrency.lockutils [None req-c1ecca4c-e3d5-45dc-85b7-73e9ca162766 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.008s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:18:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d93523887012a128168008b117e7e72a6b52a9fe42e020653b609c1aa483220-merged.mount: Deactivated successfully.
Sep 30 18:18:29 compute-0 nova_compute[265391]: 2025-09-30 18:18:29.013 2 INFO nova.scheduler.client.report [None req-c1ecca4c-e3d5-45dc-85b7-73e9ca162766 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Deleted allocations for instance 7f660b4a-3177-4f85-985d-90a46be506e6
Sep 30 18:18:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:29 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398001e90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:29 compute-0 podman[314157]: 2025-09-30 18:18:29.289300741 +0000 UTC m=+1.084584950 container remove 61e166015128359999e785872b86f57e3d4e2883cea2cc1d9888111d1cd9cc5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_bartik, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:18:29 compute-0 ceph-mon[73755]: pgmap v1195: 353 pgs: 353 active+clean; 200 MiB data, 326 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:18:29 compute-0 nova_compute[265391]: 2025-09-30 18:18:29.329 2 DEBUG nova.compute.manager [req-a277ea67-d97d-4409-869a-d335e4ccaa07 req-ebd1fba7-e1b6-4319-9a62-dd7840ff9ab4 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7f660b4a-3177-4f85-985d-90a46be506e6] Received event network-vif-deleted-57967583-5fed-40cd-bc09-455163ece536 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:18:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:18:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:18:29.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:18:29 compute-0 systemd[1]: libpod-conmon-61e166015128359999e785872b86f57e3d4e2883cea2cc1d9888111d1cd9cc5e.scope: Deactivated successfully.
Sep 30 18:18:29 compute-0 sshd-session[313457]: Received disconnect from 154.125.120.7 port 48985:11: Bye Bye [preauth]
Sep 30 18:18:29 compute-0 sshd-session[313457]: Disconnected from authenticating user root 154.125.120.7 port 48985 [preauth]
Sep 30 18:18:29 compute-0 podman[314198]: 2025-09-30 18:18:29.480951211 +0000 UTC m=+0.031865770 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:18:29 compute-0 podman[314198]: 2025-09-30 18:18:29.576886579 +0000 UTC m=+0.127801128 container create 687a72a66062e25c91f49e63736937c3e4fd3a8e67a2fe56717b786f0f21a264 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_lumiere, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:18:29 compute-0 systemd[1]: Started libpod-conmon-687a72a66062e25c91f49e63736937c3e4fd3a8e67a2fe56717b786f0f21a264.scope.
Sep 30 18:18:29 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0468b6154b6610cadacb25bab7735664f4167a45c6f0895d62fa0aad627d8ab1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0468b6154b6610cadacb25bab7735664f4167a45c6f0895d62fa0aad627d8ab1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0468b6154b6610cadacb25bab7735664f4167a45c6f0895d62fa0aad627d8ab1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0468b6154b6610cadacb25bab7735664f4167a45c6f0895d62fa0aad627d8ab1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:18:29 compute-0 podman[276673]: time="2025-09-30T18:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:18:29 compute-0 podman[314198]: 2025-09-30 18:18:29.751803104 +0000 UTC m=+0.302717673 container init 687a72a66062e25c91f49e63736937c3e4fd3a8e67a2fe56717b786f0f21a264 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:18:29 compute-0 podman[314198]: 2025-09-30 18:18:29.76047845 +0000 UTC m=+0.311392989 container start 687a72a66062e25c91f49e63736937c3e4fd3a8e67a2fe56717b786f0f21a264 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:18:29 compute-0 podman[314198]: 2025-09-30 18:18:29.79120948 +0000 UTC m=+0.342124019 container attach 687a72a66062e25c91f49e63736937c3e4fd3a8e67a2fe56717b786f0f21a264 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 18:18:29 compute-0 podman[314212]: 2025-09-30 18:18:29.807326819 +0000 UTC m=+0.182193464 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Sep 30 18:18:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 44176 "" "Go-http-client/1.1"
Sep 30 18:18:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 11155 "" "Go-http-client/1.1"
Sep 30 18:18:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:29 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840030a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:18:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:18:30.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:30 compute-0 nova_compute[265391]: 2025-09-30 18:18:30.093 2 DEBUG oslo_concurrency.lockutils [None req-c1ecca4c-e3d5-45dc-85b7-73e9ca162766 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Lock "7f660b4a-3177-4f85-985d-90a46be506e6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.892s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:18:30 compute-0 nova_compute[265391]: 2025-09-30 18:18:30.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1196: 353 pgs: 353 active+clean; 121 MiB data, 286 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 9.2 KiB/s wr, 29 op/s
Sep 30 18:18:30 compute-0 ceph-mon[73755]: pgmap v1196: 353 pgs: 353 active+clean; 121 MiB data, 286 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 9.2 KiB/s wr, 29 op/s
Sep 30 18:18:30 compute-0 nova_compute[265391]: 2025-09-30 18:18:30.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:18:30 compute-0 nova_compute[265391]: 2025-09-30 18:18:30.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:18:30 compute-0 nova_compute[265391]: 2025-09-30 18:18:30.509 2 DEBUG oslo_concurrency.lockutils [None req-7f9deaf7-5c48-49f4-8315-788ef03fece9 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Acquiring lock "26cb264e-21ae-425e-b5db-a6d24a90b6ca" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:18:30 compute-0 nova_compute[265391]: 2025-09-30 18:18:30.510 2 DEBUG oslo_concurrency.lockutils [None req-7f9deaf7-5c48-49f4-8315-788ef03fece9 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Lock "26cb264e-21ae-425e-b5db-a6d24a90b6ca" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:18:30 compute-0 nova_compute[265391]: 2025-09-30 18:18:30.510 2 DEBUG oslo_concurrency.lockutils [None req-7f9deaf7-5c48-49f4-8315-788ef03fece9 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Acquiring lock "26cb264e-21ae-425e-b5db-a6d24a90b6ca-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:18:30 compute-0 nova_compute[265391]: 2025-09-30 18:18:30.510 2 DEBUG oslo_concurrency.lockutils [None req-7f9deaf7-5c48-49f4-8315-788ef03fece9 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Lock "26cb264e-21ae-425e-b5db-a6d24a90b6ca-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:18:30 compute-0 nova_compute[265391]: 2025-09-30 18:18:30.510 2 DEBUG oslo_concurrency.lockutils [None req-7f9deaf7-5c48-49f4-8315-788ef03fece9 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Lock "26cb264e-21ae-425e-b5db-a6d24a90b6ca-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:18:30 compute-0 nova_compute[265391]: 2025-09-30 18:18:30.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:30 compute-0 nova_compute[265391]: 2025-09-30 18:18:30.521 2 INFO nova.compute.manager [None req-7f9deaf7-5c48-49f4-8315-788ef03fece9 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Terminating instance
Sep 30 18:18:30 compute-0 lvm[314309]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:18:30 compute-0 lvm[314309]: VG ceph_vg0 finished
Sep 30 18:18:30 compute-0 sharp_lumiere[314229]: {}
Sep 30 18:18:30 compute-0 systemd[1]: libpod-687a72a66062e25c91f49e63736937c3e4fd3a8e67a2fe56717b786f0f21a264.scope: Deactivated successfully.
Sep 30 18:18:30 compute-0 systemd[1]: libpod-687a72a66062e25c91f49e63736937c3e4fd3a8e67a2fe56717b786f0f21a264.scope: Consumed 1.352s CPU time.
Sep 30 18:18:30 compute-0 podman[314312]: 2025-09-30 18:18:30.655856843 +0000 UTC m=+0.031849380 container died 687a72a66062e25c91f49e63736937c3e4fd3a8e67a2fe56717b786f0f21a264 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Sep 30 18:18:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-0468b6154b6610cadacb25bab7735664f4167a45c6f0895d62fa0aad627d8ab1-merged.mount: Deactivated successfully.
Sep 30 18:18:30 compute-0 podman[314312]: 2025-09-30 18:18:30.712432297 +0000 UTC m=+0.088424834 container remove 687a72a66062e25c91f49e63736937c3e4fd3a8e67a2fe56717b786f0f21a264 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Sep 30 18:18:30 compute-0 systemd[1]: libpod-conmon-687a72a66062e25c91f49e63736937c3e4fd3a8e67a2fe56717b786f0f21a264.scope: Deactivated successfully.
Sep 30 18:18:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:18:30 compute-0 sudo[314092]: pam_unix(sudo:session): session closed for user root
Sep 30 18:18:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:18:30 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:18:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:18:30 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:18:30 compute-0 sudo[314327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:18:30 compute-0 sudo[314327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:18:30 compute-0 sudo[314327]: pam_unix(sudo:session): session closed for user root
Sep 30 18:18:30 compute-0 nova_compute[265391]: 2025-09-30 18:18:30.945 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:18:30 compute-0 nova_compute[265391]: 2025-09-30 18:18:30.946 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:18:30 compute-0 nova_compute[265391]: 2025-09-30 18:18:30.946 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:18:30 compute-0 nova_compute[265391]: 2025-09-30 18:18:30.946 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:18:30 compute-0 nova_compute[265391]: 2025-09-30 18:18:30.947 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:18:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:31 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b40095a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:31 compute-0 nova_compute[265391]: 2025-09-30 18:18:31.040 2 DEBUG nova.compute.manager [None req-7f9deaf7-5c48-49f4-8315-788ef03fece9 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Sep 30 18:18:31 compute-0 kernel: tape8db1337-aa (unregistering): left promiscuous mode
Sep 30 18:18:31 compute-0 NetworkManager[45059]: <info>  [1759256311.1296] device (tape8db1337-aa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 18:18:31 compute-0 nova_compute[265391]: 2025-09-30 18:18:31.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:31 compute-0 ovn_controller[156242]: 2025-09-30T18:18:31Z|00109|binding|INFO|Releasing lport e8db1337-aad5-4a75-89bc-1526d5c83cc6 from this chassis (sb_readonly=0)
Sep 30 18:18:31 compute-0 ovn_controller[156242]: 2025-09-30T18:18:31Z|00110|binding|INFO|Setting lport e8db1337-aad5-4a75-89bc-1526d5c83cc6 down in Southbound
Sep 30 18:18:31 compute-0 ovn_controller[156242]: 2025-09-30T18:18:31Z|00111|binding|INFO|Removing iface tape8db1337-aa ovn-installed in OVS
Sep 30 18:18:31 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:31.147 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:3e:ec 10.100.0.14'], port_security=['fa:16:3e:a5:3e:ec 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '26cb264e-21ae-425e-b5db-a6d24a90b6ca', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd077ee2-d26f-4989-8ea7-4aecbac7c636', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eddde596e2d64cec889cb4c4d3642bc5', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'f566abc7-3fe4-4e56-86df-377c1571ec04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=afc38829-13e1-4bde-91a7-790387f17ce5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=e8db1337-aad5-4a75-89bc-1526d5c83cc6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:18:31 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:31.148 166158 INFO neutron.agent.ovn.metadata.agent [-] Port e8db1337-aad5-4a75-89bc-1526d5c83cc6 in datapath cd077ee2-d26f-4989-8ea7-4aecbac7c636 unbound from our chassis
Sep 30 18:18:31 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:31.149 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cd077ee2-d26f-4989-8ea7-4aecbac7c636, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:18:31 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:31.150 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[0d98389a-1d4b-4d6c-adf3-6ef589d8a0df]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:18:31 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:31.150 166158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cd077ee2-d26f-4989-8ea7-4aecbac7c636 namespace which is not needed anymore
Sep 30 18:18:31 compute-0 nova_compute[265391]: 2025-09-30 18:18:31.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:31 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Sep 30 18:18:31 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000a.scope: Consumed 17.118s CPU time.
Sep 30 18:18:31 compute-0 systemd-machined[219917]: Machine qemu-7-instance-0000000a terminated.
Sep 30 18:18:31 compute-0 nova_compute[265391]: 2025-09-30 18:18:31.286 2 INFO nova.virt.libvirt.driver [-] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Instance destroyed successfully.
Sep 30 18:18:31 compute-0 nova_compute[265391]: 2025-09-30 18:18:31.287 2 DEBUG nova.objects.instance [None req-7f9deaf7-5c48-49f4-8315-788ef03fece9 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Lazy-loading 'resources' on Instance uuid 26cb264e-21ae-425e-b5db-a6d24a90b6ca obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:18:31 compute-0 neutron-haproxy-ovnmeta-cd077ee2-d26f-4989-8ea7-4aecbac7c636[312110]: [NOTICE]   (312114) : haproxy version is 3.0.5-8e879a5
Sep 30 18:18:31 compute-0 neutron-haproxy-ovnmeta-cd077ee2-d26f-4989-8ea7-4aecbac7c636[312110]: [NOTICE]   (312114) : path to executable is /usr/sbin/haproxy
Sep 30 18:18:31 compute-0 neutron-haproxy-ovnmeta-cd077ee2-d26f-4989-8ea7-4aecbac7c636[312110]: [WARNING]  (312114) : Exiting Master process...
Sep 30 18:18:31 compute-0 podman[314394]: 2025-09-30 18:18:31.321102415 +0000 UTC m=+0.050550147 container kill af68064ec1493736b182556d8b6b77d56067e16f7f6aea037ba368e694511497 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-cd077ee2-d26f-4989-8ea7-4aecbac7c636, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Sep 30 18:18:31 compute-0 neutron-haproxy-ovnmeta-cd077ee2-d26f-4989-8ea7-4aecbac7c636[312110]: [ALERT]    (312114) : Current worker (312116) exited with code 143 (Terminated)
Sep 30 18:18:31 compute-0 neutron-haproxy-ovnmeta-cd077ee2-d26f-4989-8ea7-4aecbac7c636[312110]: [WARNING]  (312114) : All workers exited. Exiting... (0)
Sep 30 18:18:31 compute-0 systemd[1]: libpod-af68064ec1493736b182556d8b6b77d56067e16f7f6aea037ba368e694511497.scope: Deactivated successfully.
Sep 30 18:18:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:18:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:18:31.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:31 compute-0 podman[314418]: 2025-09-30 18:18:31.363035187 +0000 UTC m=+0.024588201 container died af68064ec1493736b182556d8b6b77d56067e16f7f6aea037ba368e694511497 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-cd077ee2-d26f-4989-8ea7-4aecbac7c636, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest)
Sep 30 18:18:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:18:31 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3283538771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:18:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-af68064ec1493736b182556d8b6b77d56067e16f7f6aea037ba368e694511497-userdata-shm.mount: Deactivated successfully.
Sep 30 18:18:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-b185b9b0fc9dba73d859564d02c2a33d810753bd08913b604b528682530b49f7-merged.mount: Deactivated successfully.
Sep 30 18:18:31 compute-0 nova_compute[265391]: 2025-09-30 18:18:31.412 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:18:31 compute-0 openstack_network_exporter[279566]: ERROR   18:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:18:31 compute-0 openstack_network_exporter[279566]: ERROR   18:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:18:31 compute-0 openstack_network_exporter[279566]: ERROR   18:18:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:18:31 compute-0 openstack_network_exporter[279566]: ERROR   18:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:18:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:18:31 compute-0 openstack_network_exporter[279566]: ERROR   18:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:18:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:18:31 compute-0 podman[314418]: 2025-09-30 18:18:31.431967202 +0000 UTC m=+0.093520166 container cleanup af68064ec1493736b182556d8b6b77d56067e16f7f6aea037ba368e694511497 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-cd077ee2-d26f-4989-8ea7-4aecbac7c636, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:18:31 compute-0 systemd[1]: libpod-conmon-af68064ec1493736b182556d8b6b77d56067e16f7f6aea037ba368e694511497.scope: Deactivated successfully.
Sep 30 18:18:31 compute-0 podman[314427]: 2025-09-30 18:18:31.457448695 +0000 UTC m=+0.098760382 container remove af68064ec1493736b182556d8b6b77d56067e16f7f6aea037ba368e694511497 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-cd077ee2-d26f-4989-8ea7-4aecbac7c636, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:18:31 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:31.472 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[0d038d10-f8c6-49a3-a36e-2b182659964d]: (4, ("Tue Sep 30 06:18:31 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-cd077ee2-d26f-4989-8ea7-4aecbac7c636 (af68064ec1493736b182556d8b6b77d56067e16f7f6aea037ba368e694511497)\naf68064ec1493736b182556d8b6b77d56067e16f7f6aea037ba368e694511497\nTue Sep 30 06:18:31 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cd077ee2-d26f-4989-8ea7-4aecbac7c636 (af68064ec1493736b182556d8b6b77d56067e16f7f6aea037ba368e694511497)\naf68064ec1493736b182556d8b6b77d56067e16f7f6aea037ba368e694511497\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:18:31 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:31.474 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[32cc73c2-0dc4-4047-8a87-0ef510bc95b4]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:18:31 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:31.474 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cd077ee2-d26f-4989-8ea7-4aecbac7c636.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cd077ee2-d26f-4989-8ea7-4aecbac7c636.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:18:31 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:31.475 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[51e9f59a-cfff-4d32-8e3d-9e4b7ffd1521]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:18:31 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:31.475 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd077ee2-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:18:31 compute-0 nova_compute[265391]: 2025-09-30 18:18:31.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:31 compute-0 kernel: tapcd077ee2-d0: left promiscuous mode
Sep 30 18:18:31 compute-0 nova_compute[265391]: 2025-09-30 18:18:31.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:31 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:31.515 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[3a17f771-6c32-47e2-ba2c-d3862e178c7a]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:18:31 compute-0 sudo[314456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:18:31 compute-0 sudo[314456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:18:31 compute-0 sudo[314456]: pam_unix(sudo:session): session closed for user root
Sep 30 18:18:31 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:31.557 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[496b967b-af68-431b-8a2b-39a4faecaec4]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:18:31 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:31.559 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[8cad8616-3de7-479f-abbd-1a6eb248bffe]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:18:31 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:31.580 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[bb51384b-ca56-42cf-a670-9e1b2f880bde]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 482038, 'reachable_time': 32885, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314485, 'error': None, 'target': 'ovnmeta-cd077ee2-d26f-4989-8ea7-4aecbac7c636', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:18:31 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:31.582 166383 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cd077ee2-d26f-4989-8ea7-4aecbac7c636 deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Sep 30 18:18:31 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:31.582 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[96bd2af1-0022-46ec-8acb-e67b6cbd9985]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:18:31 compute-0 systemd[1]: run-netns-ovnmeta\x2dcd077ee2\x2dd26f\x2d4989\x2d8ea7\x2d4aecbac7c636.mount: Deactivated successfully.
Sep 30 18:18:31 compute-0 nova_compute[265391]: 2025-09-30 18:18:31.587 2 DEBUG nova.compute.manager [req-6981448e-bcf2-4fe5-96e5-0b70c43fa031 req-3b86e89c-fe68-4fd9-89b5-31d0540dd7bf 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Received event network-vif-unplugged-e8db1337-aad5-4a75-89bc-1526d5c83cc6 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:18:31 compute-0 nova_compute[265391]: 2025-09-30 18:18:31.588 2 DEBUG oslo_concurrency.lockutils [req-6981448e-bcf2-4fe5-96e5-0b70c43fa031 req-3b86e89c-fe68-4fd9-89b5-31d0540dd7bf 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "26cb264e-21ae-425e-b5db-a6d24a90b6ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:18:31 compute-0 nova_compute[265391]: 2025-09-30 18:18:31.588 2 DEBUG oslo_concurrency.lockutils [req-6981448e-bcf2-4fe5-96e5-0b70c43fa031 req-3b86e89c-fe68-4fd9-89b5-31d0540dd7bf 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "26cb264e-21ae-425e-b5db-a6d24a90b6ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:18:31 compute-0 nova_compute[265391]: 2025-09-30 18:18:31.588 2 DEBUG oslo_concurrency.lockutils [req-6981448e-bcf2-4fe5-96e5-0b70c43fa031 req-3b86e89c-fe68-4fd9-89b5-31d0540dd7bf 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "26cb264e-21ae-425e-b5db-a6d24a90b6ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:18:31 compute-0 nova_compute[265391]: 2025-09-30 18:18:31.588 2 DEBUG nova.compute.manager [req-6981448e-bcf2-4fe5-96e5-0b70c43fa031 req-3b86e89c-fe68-4fd9-89b5-31d0540dd7bf 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] No waiting events found dispatching network-vif-unplugged-e8db1337-aad5-4a75-89bc-1526d5c83cc6 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:18:31 compute-0 nova_compute[265391]: 2025-09-30 18:18:31.588 2 DEBUG nova.compute.manager [req-6981448e-bcf2-4fe5-96e5-0b70c43fa031 req-3b86e89c-fe68-4fd9-89b5-31d0540dd7bf 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Received event network-vif-unplugged-e8db1337-aad5-4a75-89bc-1526d5c83cc6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:18:31 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:18:31 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:18:31 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3283538771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:18:31 compute-0 nova_compute[265391]: 2025-09-30 18:18:31.813 2 DEBUG nova.virt.libvirt.vif [None req-7f9deaf7-5c48-49f4-8315-788ef03fece9 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:16:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteBasicStrategy-server-1754120251',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutebasicstrategy-server-1754120251',id=10,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:17:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddde596e2d64cec889cb4c4d3642bc5',ramdisk_id='',reservation_id='r-ks0ceyno',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteBasicStrategy-1755756413',owner_user_name='tempest-TestExecuteBasicStrategy-1755756413-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:17:03Z,user_data=None,user_id='54973270e5a040c8af5ec2225e3caec8',uuid=26cb264e-21ae-425e-b5db-a6d24a90b6ca,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e8db1337-aad5-4a75-89bc-1526d5c83cc6", "address": "fa:16:3e:a5:3e:ec", "network": {"id": "cd077ee2-d26f-4989-8ea7-4aecbac7c636", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1165176741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d66b07a980744cd29ee547eb08500706", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8db1337-aa", "ovs_interfaceid": "e8db1337-aad5-4a75-89bc-1526d5c83cc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Sep 30 18:18:31 compute-0 nova_compute[265391]: 2025-09-30 18:18:31.813 2 DEBUG nova.network.os_vif_util [None req-7f9deaf7-5c48-49f4-8315-788ef03fece9 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Converting VIF {"id": "e8db1337-aad5-4a75-89bc-1526d5c83cc6", "address": "fa:16:3e:a5:3e:ec", "network": {"id": "cd077ee2-d26f-4989-8ea7-4aecbac7c636", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1165176741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d66b07a980744cd29ee547eb08500706", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8db1337-aa", "ovs_interfaceid": "e8db1337-aad5-4a75-89bc-1526d5c83cc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:18:31 compute-0 nova_compute[265391]: 2025-09-30 18:18:31.814 2 DEBUG nova.network.os_vif_util [None req-7f9deaf7-5c48-49f4-8315-788ef03fece9 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:3e:ec,bridge_name='br-int',has_traffic_filtering=True,id=e8db1337-aad5-4a75-89bc-1526d5c83cc6,network=Network(cd077ee2-d26f-4989-8ea7-4aecbac7c636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8db1337-aa') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:18:31 compute-0 nova_compute[265391]: 2025-09-30 18:18:31.814 2 DEBUG os_vif [None req-7f9deaf7-5c48-49f4-8315-788ef03fece9 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:3e:ec,bridge_name='br-int',has_traffic_filtering=True,id=e8db1337-aad5-4a75-89bc-1526d5c83cc6,network=Network(cd077ee2-d26f-4989-8ea7-4aecbac7c636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8db1337-aa') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Sep 30 18:18:31 compute-0 nova_compute[265391]: 2025-09-30 18:18:31.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:31 compute-0 nova_compute[265391]: 2025-09-30 18:18:31.816 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape8db1337-aa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:18:31 compute-0 nova_compute[265391]: 2025-09-30 18:18:31.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:31 compute-0 nova_compute[265391]: 2025-09-30 18:18:31.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:31 compute-0 nova_compute[265391]: 2025-09-30 18:18:31.820 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=f4932a99-3c9f-4f6a-8c7f-223a94cd4eac) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:18:31 compute-0 nova_compute[265391]: 2025-09-30 18:18:31.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:31 compute-0 nova_compute[265391]: 2025-09-30 18:18:31.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:31 compute-0 nova_compute[265391]: 2025-09-30 18:18:31.824 2 INFO os_vif [None req-7f9deaf7-5c48-49f4-8315-788ef03fece9 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:3e:ec,bridge_name='br-int',has_traffic_filtering=True,id=e8db1337-aad5-4a75-89bc-1526d5c83cc6,network=Network(cd077ee2-d26f-4989-8ea7-4aecbac7c636),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8db1337-aa')
Sep 30 18:18:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:31 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003d90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:18:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:18:32.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:18:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1197: 353 pgs: 353 active+clean; 121 MiB data, 286 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 9.2 KiB/s wr, 29 op/s
Sep 30 18:18:32 compute-0 nova_compute[265391]: 2025-09-30 18:18:32.309 2 INFO nova.virt.libvirt.driver [None req-7f9deaf7-5c48-49f4-8315-788ef03fece9 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Deleting instance files /var/lib/nova/instances/26cb264e-21ae-425e-b5db-a6d24a90b6ca_del
Sep 30 18:18:32 compute-0 nova_compute[265391]: 2025-09-30 18:18:32.310 2 INFO nova.virt.libvirt.driver [None req-7f9deaf7-5c48-49f4-8315-788ef03fece9 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Deletion of /var/lib/nova/instances/26cb264e-21ae-425e-b5db-a6d24a90b6ca_del complete
Sep 30 18:18:32 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:32.310 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:18:32 compute-0 nova_compute[265391]: 2025-09-30 18:18:32.464 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:18:32 compute-0 nova_compute[265391]: 2025-09-30 18:18:32.464 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:18:32 compute-0 nova_compute[265391]: 2025-09-30 18:18:32.605 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:18:32 compute-0 nova_compute[265391]: 2025-09-30 18:18:32.607 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:18:32 compute-0 nova_compute[265391]: 2025-09-30 18:18:32.631 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.024s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:18:32 compute-0 nova_compute[265391]: 2025-09-30 18:18:32.632 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4378MB free_disk=39.946632385253906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:18:32 compute-0 nova_compute[265391]: 2025-09-30 18:18:32.632 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:18:32 compute-0 nova_compute[265391]: 2025-09-30 18:18:32.632 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:18:32 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1937590608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:18:32 compute-0 ceph-mon[73755]: pgmap v1197: 353 pgs: 353 active+clean; 121 MiB data, 286 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 9.2 KiB/s wr, 29 op/s
Sep 30 18:18:32 compute-0 nova_compute[265391]: 2025-09-30 18:18:32.824 2 INFO nova.compute.manager [None req-7f9deaf7-5c48-49f4-8315-788ef03fece9 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Took 1.78 seconds to destroy the instance on the hypervisor.
Sep 30 18:18:32 compute-0 nova_compute[265391]: 2025-09-30 18:18:32.825 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-7f9deaf7-5c48-49f4-8315-788ef03fece9 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Sep 30 18:18:32 compute-0 nova_compute[265391]: 2025-09-30 18:18:32.826 2 DEBUG nova.compute.manager [-] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Sep 30 18:18:32 compute-0 nova_compute[265391]: 2025-09-30 18:18:32.826 2 DEBUG nova.network.neutron [-] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Sep 30 18:18:32 compute-0 nova_compute[265391]: 2025-09-30 18:18:32.826 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:18:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:33 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398001e90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:33 compute-0 nova_compute[265391]: 2025-09-30 18:18:33.053 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:18:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:18:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:18:33.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:33 compute-0 nova_compute[265391]: 2025-09-30 18:18:33.668 2 DEBUG nova.compute.manager [req-8e848b5b-f701-4e7b-bab6-9929a11a24d3 req-f264bfef-232f-4060-908e-f4f78bd359ca 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Received event network-vif-unplugged-e8db1337-aad5-4a75-89bc-1526d5c83cc6 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:18:33 compute-0 nova_compute[265391]: 2025-09-30 18:18:33.669 2 DEBUG oslo_concurrency.lockutils [req-8e848b5b-f701-4e7b-bab6-9929a11a24d3 req-f264bfef-232f-4060-908e-f4f78bd359ca 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "26cb264e-21ae-425e-b5db-a6d24a90b6ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:18:33 compute-0 nova_compute[265391]: 2025-09-30 18:18:33.669 2 DEBUG oslo_concurrency.lockutils [req-8e848b5b-f701-4e7b-bab6-9929a11a24d3 req-f264bfef-232f-4060-908e-f4f78bd359ca 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "26cb264e-21ae-425e-b5db-a6d24a90b6ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:18:33 compute-0 nova_compute[265391]: 2025-09-30 18:18:33.669 2 DEBUG oslo_concurrency.lockutils [req-8e848b5b-f701-4e7b-bab6-9929a11a24d3 req-f264bfef-232f-4060-908e-f4f78bd359ca 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "26cb264e-21ae-425e-b5db-a6d24a90b6ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:18:33 compute-0 nova_compute[265391]: 2025-09-30 18:18:33.670 2 DEBUG nova.compute.manager [req-8e848b5b-f701-4e7b-bab6-9929a11a24d3 req-f264bfef-232f-4060-908e-f4f78bd359ca 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] No waiting events found dispatching network-vif-unplugged-e8db1337-aad5-4a75-89bc-1526d5c83cc6 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:18:33 compute-0 nova_compute[265391]: 2025-09-30 18:18:33.670 2 DEBUG nova.compute.manager [req-8e848b5b-f701-4e7b-bab6-9929a11a24d3 req-f264bfef-232f-4060-908e-f4f78bd359ca 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Received event network-vif-unplugged-e8db1337-aad5-4a75-89bc-1526d5c83cc6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:18:33 compute-0 nova_compute[265391]: 2025-09-30 18:18:33.671 2 DEBUG nova.compute.manager [req-8e848b5b-f701-4e7b-bab6-9929a11a24d3 req-f264bfef-232f-4060-908e-f4f78bd359ca 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Received event network-vif-deleted-e8db1337-aad5-4a75-89bc-1526d5c83cc6 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:18:33 compute-0 nova_compute[265391]: 2025-09-30 18:18:33.671 2 INFO nova.compute.manager [req-8e848b5b-f701-4e7b-bab6-9929a11a24d3 req-f264bfef-232f-4060-908e-f4f78bd359ca 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Neutron deleted interface e8db1337-aad5-4a75-89bc-1526d5c83cc6; detaching it from the instance and deleting it from the info cache
Sep 30 18:18:33 compute-0 nova_compute[265391]: 2025-09-30 18:18:33.671 2 DEBUG nova.network.neutron [req-8e848b5b-f701-4e7b-bab6-9929a11a24d3 req-f264bfef-232f-4060-908e-f4f78bd359ca 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:18:33 compute-0 nova_compute[265391]: 2025-09-30 18:18:33.695 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Instance 26cb264e-21ae-425e-b5db-a6d24a90b6ca actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Sep 30 18:18:33 compute-0 nova_compute[265391]: 2025-09-30 18:18:33.696 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:18:33 compute-0 nova_compute[265391]: 2025-09-30 18:18:33.696 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=39GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:18:32 up  1:21,  0 user,  load average: 0.87, 0.76, 0.84\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_deleting': '1', 'num_os_type_None': '1', 'num_proj_eddde596e2d64cec889cb4c4d3642bc5': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:18:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:18:33.724Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:18:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:18:33.725Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:18:33 compute-0 nova_compute[265391]: 2025-09-30 18:18:33.727 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:18:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:33 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840041a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:33 compute-0 nova_compute[265391]: 2025-09-30 18:18:33.903 2 DEBUG nova.network.neutron [-] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:18:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:18:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:18:34.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:18:34 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/4084878060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:18:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:18:34 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2856771135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:18:34 compute-0 nova_compute[265391]: 2025-09-30 18:18:34.182 2 DEBUG nova.compute.manager [req-8e848b5b-f701-4e7b-bab6-9929a11a24d3 req-f264bfef-232f-4060-908e-f4f78bd359ca 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Detach interface failed, port_id=e8db1337-aad5-4a75-89bc-1526d5c83cc6, reason: Instance 26cb264e-21ae-425e-b5db-a6d24a90b6ca could not be found. _process_instance_vif_deleted_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11646
Sep 30 18:18:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1198: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 10 KiB/s wr, 57 op/s
Sep 30 18:18:34 compute-0 nova_compute[265391]: 2025-09-30 18:18:34.201 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:18:34 compute-0 nova_compute[265391]: 2025-09-30 18:18:34.208 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:18:34 compute-0 nova_compute[265391]: 2025-09-30 18:18:34.412 2 INFO nova.compute.manager [-] [instance: 26cb264e-21ae-425e-b5db-a6d24a90b6ca] Took 1.59 seconds to deallocate network for instance.
Sep 30 18:18:34 compute-0 nova_compute[265391]: 2025-09-30 18:18:34.716 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:18:34 compute-0 nova_compute[265391]: 2025-09-30 18:18:34.933 2 DEBUG oslo_concurrency.lockutils [None req-7f9deaf7-5c48-49f4-8315-788ef03fece9 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:18:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:35 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b40095a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:35 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2856771135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:18:35 compute-0 ceph-mon[73755]: pgmap v1198: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 10 KiB/s wr, 57 op/s
Sep 30 18:18:35 compute-0 nova_compute[265391]: 2025-09-30 18:18:35.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:35 compute-0 nova_compute[265391]: 2025-09-30 18:18:35.236 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:18:35 compute-0 nova_compute[265391]: 2025-09-30 18:18:35.236 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.604s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:18:35 compute-0 nova_compute[265391]: 2025-09-30 18:18:35.236 2 DEBUG oslo_concurrency.lockutils [None req-7f9deaf7-5c48-49f4-8315-788ef03fece9 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.304s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:18:35 compute-0 nova_compute[265391]: 2025-09-30 18:18:35.272 2 DEBUG oslo_concurrency.processutils [None req-7f9deaf7-5c48-49f4-8315-788ef03fece9 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:18:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:18:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:18:35.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:18:35 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4172187211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:18:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:18:35 compute-0 nova_compute[265391]: 2025-09-30 18:18:35.757 2 DEBUG oslo_concurrency.processutils [None req-7f9deaf7-5c48-49f4-8315-788ef03fece9 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:18:35 compute-0 nova_compute[265391]: 2025-09-30 18:18:35.764 2 DEBUG nova.compute.provider_tree [None req-7f9deaf7-5c48-49f4-8315-788ef03fece9 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:18:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:35 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003db0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:18:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:18:36.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/4172187211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:18:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1199: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 10 KiB/s wr, 57 op/s
Sep 30 18:18:36 compute-0 nova_compute[265391]: 2025-09-30 18:18:36.237 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:18:36 compute-0 nova_compute[265391]: 2025-09-30 18:18:36.238 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:18:36 compute-0 nova_compute[265391]: 2025-09-30 18:18:36.238 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:18:36 compute-0 nova_compute[265391]: 2025-09-30 18:18:36.238 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:18:36 compute-0 nova_compute[265391]: 2025-09-30 18:18:36.309 2 DEBUG nova.scheduler.client.report [None req-7f9deaf7-5c48-49f4-8315-788ef03fece9 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:18:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:18:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3601153906' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:18:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:18:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3601153906' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:18:36 compute-0 nova_compute[265391]: 2025-09-30 18:18:36.819 2 DEBUG oslo_concurrency.lockutils [None req-7f9deaf7-5c48-49f4-8315-788ef03fece9 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.583s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:18:36 compute-0 nova_compute[265391]: 2025-09-30 18:18:36.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:36 compute-0 nova_compute[265391]: 2025-09-30 18:18:36.849 2 INFO nova.scheduler.client.report [None req-7f9deaf7-5c48-49f4-8315-788ef03fece9 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Deleted allocations for instance 26cb264e-21ae-425e-b5db-a6d24a90b6ca
Sep 30 18:18:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:37 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a40008d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:37 compute-0 ceph-mon[73755]: pgmap v1199: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 10 KiB/s wr, 57 op/s
Sep 30 18:18:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3601153906' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:18:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3601153906' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:18:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:18:37.201Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:18:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:18:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:18:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:18:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:18:37.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:18:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:18:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:18:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:18:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:18:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:18:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:18:37 compute-0 podman[314557]: 2025-09-30 18:18:37.549563159 +0000 UTC m=+0.079930672 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20250930)
Sep 30 18:18:37 compute-0 podman[314558]: 2025-09-30 18:18:37.568453921 +0000 UTC m=+0.096522505 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Sep 30 18:18:37 compute-0 podman[314559]: 2025-09-30 18:18:37.57995217 +0000 UTC m=+0.102810948 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_id=edpm)
Sep 30 18:18:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:37 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840041a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:37 compute-0 nova_compute[265391]: 2025-09-30 18:18:37.884 2 DEBUG oslo_concurrency.lockutils [None req-7f9deaf7-5c48-49f4-8315-788ef03fece9 54973270e5a040c8af5ec2225e3caec8 eddde596e2d64cec889cb4c4d3642bc5 - - default default] Lock "26cb264e-21ae-425e-b5db-a6d24a90b6ca" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.375s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:18:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:18:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:18:38.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:18:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1200: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 10 KiB/s wr, 57 op/s
Sep 30 18:18:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:18:38] "GET /metrics HTTP/1.1" 200 46632 "" "Prometheus/2.51.0"
Sep 30 18:18:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:18:38] "GET /metrics HTTP/1.1" 200 46632 "" "Prometheus/2.51.0"
Sep 30 18:18:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:39 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840041a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:39 compute-0 ceph-mon[73755]: pgmap v1200: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 10 KiB/s wr, 57 op/s
Sep 30 18:18:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:18:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:18:39.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:39 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003dd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:18:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:18:40.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:18:40 compute-0 nova_compute[265391]: 2025-09-30 18:18:40.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1201: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 10 KiB/s wr, 57 op/s
Sep 30 18:18:40 compute-0 sshd-session[314614]: Invalid user postgres from 45.252.249.158 port 36584
Sep 30 18:18:40 compute-0 sshd-session[314614]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:18:40 compute-0 sshd-session[314614]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158
Sep 30 18:18:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:18:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:41 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003dd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:41 compute-0 ceph-mon[73755]: pgmap v1201: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 10 KiB/s wr, 57 op/s
Sep 30 18:18:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:18:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:18:41.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:41 compute-0 nova_compute[265391]: 2025-09-30 18:18:41.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:41 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:18:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:18:42.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:42 compute-0 nova_compute[265391]: 2025-09-30 18:18:42.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1202: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:18:42 compute-0 ceph-mon[73755]: pgmap v1202: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:18:42 compute-0 sshd-session[314614]: Failed password for invalid user postgres from 45.252.249.158 port 36584 ssh2
Sep 30 18:18:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:43 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398004640 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:18:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:18:43.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:18:43.725Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:18:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:43 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4009bc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:18:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:18:44.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1203: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:18:44 compute-0 sshd-session[314614]: Received disconnect from 45.252.249.158 port 36584:11: Bye Bye [preauth]
Sep 30 18:18:44 compute-0 sshd-session[314614]: Disconnected from invalid user postgres 45.252.249.158 port 36584 [preauth]
Sep 30 18:18:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:45 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:45 compute-0 nova_compute[265391]: 2025-09-30 18:18:45.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:45 compute-0 ceph-mon[73755]: pgmap v1203: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:18:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:18:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:18:45.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:18:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:45 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:18:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:18:46.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1204: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:18:46 compute-0 nova_compute[265391]: 2025-09-30 18:18:46.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:47 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398004640 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:18:47.202Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:18:47 compute-0 ceph-mon[73755]: pgmap v1204: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:18:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:18:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:18:47.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:47 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4009bc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:18:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:18:48.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1205: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:18:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:18:48] "GET /metrics HTTP/1.1" 200 46632 "" "Prometheus/2.51.0"
Sep 30 18:18:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:18:48] "GET /metrics HTTP/1.1" 200 46632 "" "Prometheus/2.51.0"
Sep 30 18:18:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:49 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4009bc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:49 compute-0 ceph-mon[73755]: pgmap v1205: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:18:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:18:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:18:49.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:18:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:49 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003e10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:18:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:18:50.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:18:50 compute-0 nova_compute[265391]: 2025-09-30 18:18:50.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:50 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:50.140 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ce:2f:79 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-fb58e9fa-a47f-46f8-8fc6-4c39220a3c7c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fb58e9fa-a47f-46f8-8fc6-4c39220a3c7c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4967e90f79a346799e6308bad2720c19', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24ff55c2-8d35-4c40-8cdb-b69990431aae, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=db72a019-8f7d-4b59-bddf-b93d21d66f5c) old=Port_Binding(mac=['fa:16:3e:ce:2f:79'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-fb58e9fa-a47f-46f8-8fc6-4c39220a3c7c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fb58e9fa-a47f-46f8-8fc6-4c39220a3c7c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4967e90f79a346799e6308bad2720c19', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:18:50 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:50.141 166158 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port db72a019-8f7d-4b59-bddf-b93d21d66f5c in datapath fb58e9fa-a47f-46f8-8fc6-4c39220a3c7c updated
Sep 30 18:18:50 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:50.142 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fb58e9fa-a47f-46f8-8fc6-4c39220a3c7c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:18:50 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:50.143 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[acf86d23-4d2b-47bd-9c31-e5284bdda9f4]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:18:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1206: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:18:50 compute-0 ceph-mon[73755]: pgmap v1206: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:18:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:18:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:51 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398004640 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:18:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:18:51.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:18:51 compute-0 sudo[314632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:18:51 compute-0 sudo[314632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:18:51 compute-0 sudo[314632]: pam_unix(sudo:session): session closed for user root
Sep 30 18:18:51 compute-0 nova_compute[265391]: 2025-09-30 18:18:51.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:51 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03800016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:18:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:18:52.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:18:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1207: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:18:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:18:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:18:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:53 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4009b30 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:53 compute-0 ceph-mon[73755]: pgmap v1207: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:18:53 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:18:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:18:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:18:53.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:18:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:18:53.726Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:18:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:53 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003e30 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:18:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:18:54.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1208: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:18:54 compute-0 sshd-session[314658]: Invalid user minecraft from 14.225.220.107 port 34990
Sep 30 18:18:54 compute-0 sshd-session[314658]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:18:54 compute-0 sshd-session[314658]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107
Sep 30 18:18:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:54.293 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:18:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:54.293 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:18:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:54.293 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:18:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:55 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398004640 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:55 compute-0 nova_compute[265391]: 2025-09-30 18:18:55.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:55 compute-0 ceph-mon[73755]: pgmap v1208: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:18:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:18:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:18:55.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:18:55 compute-0 podman[314665]: 2025-09-30 18:18:55.531516724 +0000 UTC m=+0.061192124 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:18:55 compute-0 sshd-session[314658]: Failed password for invalid user minecraft from 14.225.220.107 port 34990 ssh2
Sep 30 18:18:55 compute-0 podman[314664]: 2025-09-30 18:18:55.600674365 +0000 UTC m=+0.134695578 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, container_name=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true)
Sep 30 18:18:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:18:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:55 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:18:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:18:56.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:18:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1209: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:18:56 compute-0 ceph-mon[73755]: pgmap v1209: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:18:56 compute-0 sshd-session[314658]: Received disconnect from 14.225.220.107 port 34990:11: Bye Bye [preauth]
Sep 30 18:18:56 compute-0 sshd-session[314658]: Disconnected from invalid user minecraft 14.225.220.107 port 34990 [preauth]
Sep 30 18:18:56 compute-0 nova_compute[265391]: 2025-09-30 18:18:56.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:18:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:57 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4009b30 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:57.158 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:c3:8a 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-813545df-e959-4c8c-a60c-e9381ec1d1af', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-813545df-e959-4c8c-a60c-e9381ec1d1af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6dd6f492f1f4129bcd0c59ee535a610', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dad94911-5503-46ce-bc71-e5a28f8991cf, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d9446fba-5ece-445f-b9a1-52303f9bbf8d) old=Port_Binding(mac=['fa:16:3e:be:c3:8a'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-813545df-e959-4c8c-a60c-e9381ec1d1af', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-813545df-e959-4c8c-a60c-e9381ec1d1af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6dd6f492f1f4129bcd0c59ee535a610', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:18:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:57.159 166158 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d9446fba-5ece-445f-b9a1-52303f9bbf8d in datapath 813545df-e959-4c8c-a60c-e9381ec1d1af updated
Sep 30 18:18:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:57.160 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 813545df-e959-4c8c-a60c-e9381ec1d1af, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:18:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:18:57.161 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[3408beb5-dac0-4552-9397-d38d7771be99]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:18:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:18:57.204Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:18:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:18:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:18:57.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:18:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:18:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3555197970' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:18:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:18:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3555197970' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:18:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3555197970' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:18:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3555197970' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:18:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:57 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003e50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:18:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:18:58.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:18:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1210: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:18:58 compute-0 ceph-mon[73755]: pgmap v1210: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:18:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:18:58] "GET /metrics HTTP/1.1" 200 46633 "" "Prometheus/2.51.0"
Sep 30 18:18:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:18:58] "GET /metrics HTTP/1.1" 200 46633 "" "Prometheus/2.51.0"
Sep 30 18:18:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:59 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398004640 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:18:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:18:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:18:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:18:59.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:18:59 compute-0 podman[276673]: time="2025-09-30T18:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:18:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:18:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10279 "" "Go-http-client/1.1"
Sep 30 18:18:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:18:59 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:19:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:19:00.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:19:00 compute-0 nova_compute[265391]: 2025-09-30 18:19:00.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:19:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1211: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:19:00 compute-0 podman[314716]: 2025-09-30 18:19:00.529263734 +0000 UTC m=+0.064391508 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:19:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:19:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:01 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4009b30 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:01 compute-0 ceph-mon[73755]: pgmap v1211: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:19:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:19:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:19:01.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:19:01 compute-0 openstack_network_exporter[279566]: ERROR   18:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:19:01 compute-0 openstack_network_exporter[279566]: ERROR   18:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:19:01 compute-0 openstack_network_exporter[279566]: ERROR   18:19:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:19:01 compute-0 openstack_network_exporter[279566]: ERROR   18:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:19:01 compute-0 openstack_network_exporter[279566]: ERROR   18:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:19:01 compute-0 nova_compute[265391]: 2025-09-30 18:19:01.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:19:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:01 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003e70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:19:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:19:02.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:19:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1212: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:19:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:03 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398004640 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:03 compute-0 ceph-mon[73755]: pgmap v1212: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:19:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:19:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:19:03.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:19:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:19:03.726Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:19:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:03 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002f00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:19:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:19:04.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:19:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1213: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:19:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:05 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4009b30 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:05 compute-0 nova_compute[265391]: 2025-09-30 18:19:05.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:19:05 compute-0 ceph-mon[73755]: pgmap v1213: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:19:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:19:05.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:19:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:05 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003e90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:19:06.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1214: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:19:06 compute-0 ceph-mon[73755]: pgmap v1214: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:19:06 compute-0 nova_compute[265391]: 2025-09-30 18:19:06.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:19:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:07 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398004640 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:19:07.205Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:19:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:19:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:19:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:19:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:19:07.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:19:07
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['.nfs', 'default.rgw.meta', 'volumes', '.rgw.root', 'default.rgw.log', 'images', 'default.rgw.control', 'vms', '.mgr', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data']
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:19:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:19:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:07 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002f00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:19:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:19:08.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:19:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1215: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:19:08 compute-0 ceph-mon[73755]: pgmap v1215: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:19:08 compute-0 podman[314745]: 2025-09-30 18:19:08.547718445 +0000 UTC m=+0.072803537 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true)
Sep 30 18:19:08 compute-0 podman[314744]: 2025-09-30 18:19:08.562392817 +0000 UTC m=+0.086914224 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:19:08 compute-0 podman[314746]: 2025-09-30 18:19:08.56288964 +0000 UTC m=+0.075319862 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vendor=Red Hat, Inc., version=9.6, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Sep 30 18:19:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:19:08] "GET /metrics HTTP/1.1" 200 46629 "" "Prometheus/2.51.0"
Sep 30 18:19:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:19:08] "GET /metrics HTTP/1.1" 200 46629 "" "Prometheus/2.51.0"
Sep 30 18:19:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:09 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4009b30 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:19:09.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:09 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003eb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:19:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:19:10.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:19:10 compute-0 nova_compute[265391]: 2025-09-30 18:19:10.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:19:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1216: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:19:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:19:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:11 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398004640 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:11 compute-0 ceph-mon[73755]: pgmap v1216: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:19:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:19:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:19:11.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:19:11 compute-0 sudo[314807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:19:11 compute-0 sudo[314807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:19:11 compute-0 sudo[314807]: pam_unix(sudo:session): session closed for user root
Sep 30 18:19:11 compute-0 nova_compute[265391]: 2025-09-30 18:19:11.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:19:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:11 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002f00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:19:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:19:12.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:19:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1217: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:19:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:13 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a40008d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:13 compute-0 ceph-mon[73755]: pgmap v1217: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:19:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:19:13.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:19:13.727Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:19:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:13 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003ed0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:19:14.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1218: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:19:14 compute-0 ceph-mon[73755]: pgmap v1218: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:19:14 compute-0 ovn_controller[156242]: 2025-09-30T18:19:14Z|00112|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Sep 30 18:19:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:15 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840028c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:15 compute-0 nova_compute[265391]: 2025-09-30 18:19:15.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:19:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:19:15.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:19:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:15 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380004000 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:19:16.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1219: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:19:16 compute-0 nova_compute[265391]: 2025-09-30 18:19:16.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:19:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:17 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a40008d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:19:17.206Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:19:17 compute-0 ceph-mon[73755]: pgmap v1219: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:19:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:19:17.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:17 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003ef0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:19:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:19:18.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:19:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1220: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:19:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:19:18] "GET /metrics HTTP/1.1" 200 46629 "" "Prometheus/2.51.0"
Sep 30 18:19:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:19:18] "GET /metrics HTTP/1.1" 200 46629 "" "Prometheus/2.51.0"
Sep 30 18:19:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:19 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840028c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:19 compute-0 ceph-mon[73755]: pgmap v1220: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:19:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:19:19.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:19 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380004000 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:19:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:19:20.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:19:20 compute-0 nova_compute[265391]: 2025-09-30 18:19:20.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:19:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1221: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:19:20 compute-0 ceph-mon[73755]: pgmap v1221: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:19:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:19:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:21 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a40008d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:19:21.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:21 compute-0 nova_compute[265391]: 2025-09-30 18:19:21.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:19:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:21 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003f10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:19:22.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1222: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:19:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:19:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:19:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:23 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840028c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:23 compute-0 ceph-mon[73755]: pgmap v1222: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:19:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:19:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:19:23.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:19:23.728Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:19:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:23 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380004000 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:19:24.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1223: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:19:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/181925 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 18:19:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:25 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002b20 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:25 compute-0 nova_compute[265391]: 2025-09-30 18:19:25.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:19:25 compute-0 ceph-mon[73755]: pgmap v1223: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:19:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:19:25.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:19:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:25 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003f30 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.002000052s ======
Sep 30 18:19:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:19:26.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Sep 30 18:19:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1224: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:19:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:19:26.264 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:19:26 compute-0 nova_compute[265391]: 2025-09-30 18:19:26.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:19:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:19:26.265 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:19:26 compute-0 podman[314849]: 2025-09-30 18:19:26.526587631 +0000 UTC m=+0.063746700 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Sep 30 18:19:26 compute-0 podman[314848]: 2025-09-30 18:19:26.573949645 +0000 UTC m=+0.116118595 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, container_name=ovn_controller, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Sep 30 18:19:26 compute-0 nova_compute[265391]: 2025-09-30 18:19:26.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:19:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:27 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840028c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:19:27.207Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:19:27 compute-0 ceph-mon[73755]: pgmap v1224: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:19:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:19:27.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:27 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380004000 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:19:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:19:28.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:19:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1225: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:19:28 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:19:28.267 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:19:28 compute-0 ceph-mon[73755]: pgmap v1225: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:19:28 compute-0 nova_compute[265391]: 2025-09-30 18:19:28.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:19:28 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:19:28.686 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:7f:4d 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-443be7ca-f628-4a45-95b6-620d37172d7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '269f60e72ce1460a98da519466c89da6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=96eb21b8-879c-4e72-963b-37e37ae3d0c5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=031d2cff-b142-4423-ba99-772183b7a667) old=Port_Binding(mac=['fa:16:3e:c4:7f:4d'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-443be7ca-f628-4a45-95b6-620d37172d7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '269f60e72ce1460a98da519466c89da6', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:19:28 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:19:28.687 166158 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 031d2cff-b142-4423-ba99-772183b7a667 in datapath 443be7ca-f628-4a45-95b6-620d37172d7b updated
Sep 30 18:19:28 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:19:28.689 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 443be7ca-f628-4a45-95b6-620d37172d7b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:19:28 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:19:28.690 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[bf70fb12-288e-4cb1-adc2-c451386804ca]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:19:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:19:28] "GET /metrics HTTP/1.1" 200 46630 "" "Prometheus/2.51.0"
Sep 30 18:19:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:19:28] "GET /metrics HTTP/1.1" 200 46630 "" "Prometheus/2.51.0"
Sep 30 18:19:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:29 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002b20 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:29 compute-0 nova_compute[265391]: 2025-09-30 18:19:29.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:19:29 compute-0 nova_compute[265391]: 2025-09-30 18:19:29.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:19:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:19:29.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:29 compute-0 podman[276673]: time="2025-09-30T18:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:19:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:19:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10273 "" "Go-http-client/1.1"
Sep 30 18:19:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:29 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003f50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:30 compute-0 sshd-session[314898]: Invalid user pi from 185.156.73.233 port 29906
Sep 30 18:19:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:19:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:19:30.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:19:30 compute-0 nova_compute[265391]: 2025-09-30 18:19:30.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:19:30 compute-0 sshd-session[314898]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:19:30 compute-0 sshd-session[314898]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=185.156.73.233
Sep 30 18:19:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1226: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:19:30 compute-0 nova_compute[265391]: 2025-09-30 18:19:30.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:19:30 compute-0 nova_compute[265391]: 2025-09-30 18:19:30.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:19:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:19:30 compute-0 nova_compute[265391]: 2025-09-30 18:19:30.938 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:19:30 compute-0 nova_compute[265391]: 2025-09-30 18:19:30.939 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:19:30 compute-0 nova_compute[265391]: 2025-09-30 18:19:30.939 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:19:30 compute-0 nova_compute[265391]: 2025-09-30 18:19:30.939 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:19:30 compute-0 nova_compute[265391]: 2025-09-30 18:19:30.939 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:19:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:31 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840028c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:31 compute-0 sudo[314922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:19:31 compute-0 sudo[314922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:19:31 compute-0 sudo[314922]: pam_unix(sudo:session): session closed for user root
Sep 30 18:19:31 compute-0 ceph-mon[73755]: pgmap v1226: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:19:31 compute-0 podman[314946]: 2025-09-30 18:19:31.318705197 +0000 UTC m=+0.062149949 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930)
Sep 30 18:19:31 compute-0 sudo[314953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:19:31 compute-0 sudo[314953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:19:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:19:31 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2976751412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:19:31 compute-0 openstack_network_exporter[279566]: ERROR   18:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:19:31 compute-0 openstack_network_exporter[279566]: ERROR   18:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:19:31 compute-0 openstack_network_exporter[279566]: ERROR   18:19:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:19:31 compute-0 openstack_network_exporter[279566]: ERROR   18:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:19:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:19:31 compute-0 openstack_network_exporter[279566]: ERROR   18:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:19:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:19:31 compute-0 nova_compute[265391]: 2025-09-30 18:19:31.424 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:19:31 compute-0 nova_compute[265391]: 2025-09-30 18:19:31.584 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:19:31 compute-0 nova_compute[265391]: 2025-09-30 18:19:31.585 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:19:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:19:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:19:31.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:19:31 compute-0 nova_compute[265391]: 2025-09-30 18:19:31.611 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.026s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:19:31 compute-0 nova_compute[265391]: 2025-09-30 18:19:31.612 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4415MB free_disk=39.992183685302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:19:31 compute-0 nova_compute[265391]: 2025-09-30 18:19:31.612 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:19:31 compute-0 nova_compute[265391]: 2025-09-30 18:19:31.612 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:19:31 compute-0 sshd-session[314898]: Failed password for invalid user pi from 185.156.73.233 port 29906 ssh2
Sep 30 18:19:31 compute-0 sudo[315013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:19:31 compute-0 sudo[315013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:19:31 compute-0 sudo[315013]: pam_unix(sudo:session): session closed for user root
Sep 30 18:19:31 compute-0 nova_compute[265391]: 2025-09-30 18:19:31.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:19:31 compute-0 sudo[314953]: pam_unix(sudo:session): session closed for user root
Sep 30 18:19:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:31 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840028c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:19:32 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:19:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:19:32 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:19:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:19:32 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:19:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:19:32 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:19:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:19:32 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:19:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:19:32.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:19:32 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:19:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:19:32 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:19:32 compute-0 sudo[315050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:19:32 compute-0 sudo[315050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:19:32 compute-0 sudo[315050]: pam_unix(sudo:session): session closed for user root
Sep 30 18:19:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1227: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:19:32 compute-0 sudo[315075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:19:32 compute-0 sudo[315075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:19:32 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2976751412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:19:32 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:19:32 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:19:32 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:19:32 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:19:32 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:19:32 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:19:32 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:19:32 compute-0 sshd-session[314898]: Connection closed by invalid user pi 185.156.73.233 port 29906 [preauth]
Sep 30 18:19:32 compute-0 nova_compute[265391]: 2025-09-30 18:19:32.661 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:19:32 compute-0 nova_compute[265391]: 2025-09-30 18:19:32.662 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:19:31 up  1:22,  0 user,  load average: 0.50, 0.68, 0.81\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:19:32 compute-0 nova_compute[265391]: 2025-09-30 18:19:32.678 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Refreshing inventories for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:822
Sep 30 18:19:32 compute-0 nova_compute[265391]: 2025-09-30 18:19:32.697 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Updating ProviderTree inventory for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:786
Sep 30 18:19:32 compute-0 nova_compute[265391]: 2025-09-30 18:19:32.698 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Updating inventory in ProviderTree for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Sep 30 18:19:32 compute-0 nova_compute[265391]: 2025-09-30 18:19:32.711 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Refreshing aggregate associations for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2, aggregates: None _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:831
Sep 30 18:19:32 compute-0 nova_compute[265391]: 2025-09-30 18:19:32.728 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Refreshing trait associations for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2, traits: COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SOUND_MODEL_SB16,COMPUTE_ARCH_X86_64,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIRTIO_PACKED,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_TIS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOUND_MODEL_ICH9,COMPUTE_SOUND_MODEL_USB,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOUND_MODEL_PCSPK,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ADDRESS_SPACE_EMULATED,COMPUTE_RESCUE_BFV,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_STATELESS_FIRMWARE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_IGB,HW_ARCH_X86_64,COMPUTE_ACCELERATORS,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOUND_MODEL_ES1370,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_CRB,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_VIRTIO_FS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ADDRESS_SPACE_PASSTHROUGH,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_F16C,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOUND_MODEL_ICH6,COMPUTE_SOUND_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SOUND_MODEL_AC97,HW_CPU_X86_SSE42 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:843
Sep 30 18:19:32 compute-0 nova_compute[265391]: 2025-09-30 18:19:32.743 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:19:32 compute-0 podman[315141]: 2025-09-30 18:19:32.757462668 +0000 UTC m=+0.061171573 container create ca101e4de3483f802bc251b05626f4b86a3d2290578c2d3cf035505ab60cb170 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Sep 30 18:19:32 compute-0 systemd[1]: Started libpod-conmon-ca101e4de3483f802bc251b05626f4b86a3d2290578c2d3cf035505ab60cb170.scope.
Sep 30 18:19:32 compute-0 podman[315141]: 2025-09-30 18:19:32.722713204 +0000 UTC m=+0.026422159 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:19:32 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:19:32 compute-0 podman[315141]: 2025-09-30 18:19:32.877956505 +0000 UTC m=+0.181665460 container init ca101e4de3483f802bc251b05626f4b86a3d2290578c2d3cf035505ab60cb170 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_kare, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default)
Sep 30 18:19:32 compute-0 podman[315141]: 2025-09-30 18:19:32.888651614 +0000 UTC m=+0.192360479 container start ca101e4de3483f802bc251b05626f4b86a3d2290578c2d3cf035505ab60cb170 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_kare, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True)
Sep 30 18:19:32 compute-0 podman[315141]: 2025-09-30 18:19:32.892510644 +0000 UTC m=+0.196219539 container attach ca101e4de3483f802bc251b05626f4b86a3d2290578c2d3cf035505ab60cb170 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_kare, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid)
Sep 30 18:19:32 compute-0 admiring_kare[315158]: 167 167
Sep 30 18:19:32 compute-0 systemd[1]: libpod-ca101e4de3483f802bc251b05626f4b86a3d2290578c2d3cf035505ab60cb170.scope: Deactivated successfully.
Sep 30 18:19:32 compute-0 podman[315141]: 2025-09-30 18:19:32.897303449 +0000 UTC m=+0.201012384 container died ca101e4de3483f802bc251b05626f4b86a3d2290578c2d3cf035505ab60cb170 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_kare, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325)
Sep 30 18:19:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-57bd41b1c2cb6d5ae864329922006e1106b702b9c384a2c17cfdbebf6d53451f-merged.mount: Deactivated successfully.
Sep 30 18:19:32 compute-0 podman[315141]: 2025-09-30 18:19:32.952785564 +0000 UTC m=+0.256494429 container remove ca101e4de3483f802bc251b05626f4b86a3d2290578c2d3cf035505ab60cb170 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_kare, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 18:19:32 compute-0 systemd[1]: libpod-conmon-ca101e4de3483f802bc251b05626f4b86a3d2290578c2d3cf035505ab60cb170.scope: Deactivated successfully.
Sep 30 18:19:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:33 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002b20 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:33 compute-0 podman[315201]: 2025-09-30 18:19:33.152427682 +0000 UTC m=+0.041741338 container create 5a16d67c4d94197e4871fadec5b9795d79b3385bd55882a995b51ca90be2f66f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_poitras, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:19:33 compute-0 systemd[1]: Started libpod-conmon-5a16d67c4d94197e4871fadec5b9795d79b3385bd55882a995b51ca90be2f66f.scope.
Sep 30 18:19:33 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:19:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eff7be7c26ef13fb7f4d70cf091e56c0a44807a420655ad420dde42c33af8b1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:19:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eff7be7c26ef13fb7f4d70cf091e56c0a44807a420655ad420dde42c33af8b1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:19:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eff7be7c26ef13fb7f4d70cf091e56c0a44807a420655ad420dde42c33af8b1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:19:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eff7be7c26ef13fb7f4d70cf091e56c0a44807a420655ad420dde42c33af8b1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:19:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eff7be7c26ef13fb7f4d70cf091e56c0a44807a420655ad420dde42c33af8b1a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:19:33 compute-0 podman[315201]: 2025-09-30 18:19:33.136907318 +0000 UTC m=+0.026220974 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:19:33 compute-0 podman[315201]: 2025-09-30 18:19:33.245952757 +0000 UTC m=+0.135266443 container init 5a16d67c4d94197e4871fadec5b9795d79b3385bd55882a995b51ca90be2f66f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_poitras, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:19:33 compute-0 podman[315201]: 2025-09-30 18:19:33.25453275 +0000 UTC m=+0.143846406 container start 5a16d67c4d94197e4871fadec5b9795d79b3385bd55882a995b51ca90be2f66f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:19:33 compute-0 podman[315201]: 2025-09-30 18:19:33.258379001 +0000 UTC m=+0.147692687 container attach 5a16d67c4d94197e4871fadec5b9795d79b3385bd55882a995b51ca90be2f66f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_poitras, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Sep 30 18:19:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:19:33 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2659287100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:19:33 compute-0 ceph-mon[73755]: pgmap v1227: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:19:33 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2659287100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:19:33 compute-0 nova_compute[265391]: 2025-09-30 18:19:33.328 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:19:33 compute-0 nova_compute[265391]: 2025-09-30 18:19:33.336 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:19:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:33 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:19:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:19:33.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:33 compute-0 sharp_poitras[315218]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:19:33 compute-0 sharp_poitras[315218]: --> All data devices are unavailable
Sep 30 18:19:33 compute-0 systemd[1]: libpod-5a16d67c4d94197e4871fadec5b9795d79b3385bd55882a995b51ca90be2f66f.scope: Deactivated successfully.
Sep 30 18:19:33 compute-0 podman[315201]: 2025-09-30 18:19:33.641872736 +0000 UTC m=+0.531186402 container died 5a16d67c4d94197e4871fadec5b9795d79b3385bd55882a995b51ca90be2f66f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_poitras, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:19:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-eff7be7c26ef13fb7f4d70cf091e56c0a44807a420655ad420dde42c33af8b1a-merged.mount: Deactivated successfully.
Sep 30 18:19:33 compute-0 podman[315201]: 2025-09-30 18:19:33.681665292 +0000 UTC m=+0.570978958 container remove 5a16d67c4d94197e4871fadec5b9795d79b3385bd55882a995b51ca90be2f66f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325)
Sep 30 18:19:33 compute-0 systemd[1]: libpod-conmon-5a16d67c4d94197e4871fadec5b9795d79b3385bd55882a995b51ca90be2f66f.scope: Deactivated successfully.
Sep 30 18:19:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:19:33.729Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:19:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:19:33.730Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:19:33 compute-0 sudo[315075]: pam_unix(sudo:session): session closed for user root
Sep 30 18:19:33 compute-0 sudo[315249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:19:33 compute-0 sudo[315249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:19:33 compute-0 sudo[315249]: pam_unix(sudo:session): session closed for user root
Sep 30 18:19:33 compute-0 nova_compute[265391]: 2025-09-30 18:19:33.853 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:19:33 compute-0 sudo[315274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:19:33 compute-0 sudo[315274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:19:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:33 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003f70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:19:34.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1228: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 18:19:34 compute-0 podman[315339]: 2025-09-30 18:19:34.305784253 +0000 UTC m=+0.049883830 container create 8a0eed61443991eef8697f8ad0610a4d5b479f2fcd0a3d42d25dfafe36ca6089 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_sutherland, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 18:19:34 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/299869197' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:19:34 compute-0 systemd[1]: Started libpod-conmon-8a0eed61443991eef8697f8ad0610a4d5b479f2fcd0a3d42d25dfafe36ca6089.scope.
Sep 30 18:19:34 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:19:34 compute-0 nova_compute[265391]: 2025-09-30 18:19:34.363 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:19:34 compute-0 nova_compute[265391]: 2025-09-30 18:19:34.364 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.752s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:19:34 compute-0 nova_compute[265391]: 2025-09-30 18:19:34.364 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:19:34 compute-0 nova_compute[265391]: 2025-09-30 18:19:34.364 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11909
Sep 30 18:19:34 compute-0 podman[315339]: 2025-09-30 18:19:34.378231609 +0000 UTC m=+0.122331216 container init 8a0eed61443991eef8697f8ad0610a4d5b479f2fcd0a3d42d25dfafe36ca6089 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Sep 30 18:19:34 compute-0 podman[315339]: 2025-09-30 18:19:34.284641322 +0000 UTC m=+0.028740949 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:19:34 compute-0 podman[315339]: 2025-09-30 18:19:34.385633582 +0000 UTC m=+0.129733159 container start 8a0eed61443991eef8697f8ad0610a4d5b479f2fcd0a3d42d25dfafe36ca6089 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_sutherland, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default)
Sep 30 18:19:34 compute-0 magical_sutherland[315355]: 167 167
Sep 30 18:19:34 compute-0 podman[315339]: 2025-09-30 18:19:34.389851232 +0000 UTC m=+0.133950839 container attach 8a0eed61443991eef8697f8ad0610a4d5b479f2fcd0a3d42d25dfafe36ca6089 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_sutherland, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Sep 30 18:19:34 compute-0 systemd[1]: libpod-8a0eed61443991eef8697f8ad0610a4d5b479f2fcd0a3d42d25dfafe36ca6089.scope: Deactivated successfully.
Sep 30 18:19:34 compute-0 podman[315339]: 2025-09-30 18:19:34.390054467 +0000 UTC m=+0.134154044 container died 8a0eed61443991eef8697f8ad0610a4d5b479f2fcd0a3d42d25dfafe36ca6089 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_sutherland, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Sep 30 18:19:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-21c07120f4c2a4e02050ea53dd8e0f44cdb54354ebfa33abf7aae579e9703996-merged.mount: Deactivated successfully.
Sep 30 18:19:34 compute-0 podman[315339]: 2025-09-30 18:19:34.426516676 +0000 UTC m=+0.170616253 container remove 8a0eed61443991eef8697f8ad0610a4d5b479f2fcd0a3d42d25dfafe36ca6089 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:19:34 compute-0 systemd[1]: libpod-conmon-8a0eed61443991eef8697f8ad0610a4d5b479f2fcd0a3d42d25dfafe36ca6089.scope: Deactivated successfully.
Sep 30 18:19:34 compute-0 podman[315379]: 2025-09-30 18:19:34.573428202 +0000 UTC m=+0.041588744 container create 261c268fd4202567dc12088e718fb70ba1d6495024852ea1ddd7c7ee939ae600 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_cray, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid)
Sep 30 18:19:34 compute-0 systemd[1]: Started libpod-conmon-261c268fd4202567dc12088e718fb70ba1d6495024852ea1ddd7c7ee939ae600.scope.
Sep 30 18:19:34 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:19:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2c916eb0ea2034685a0e63dc72415d67dd61eec1c4b0204800fc6a004bde68a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:19:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2c916eb0ea2034685a0e63dc72415d67dd61eec1c4b0204800fc6a004bde68a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:19:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2c916eb0ea2034685a0e63dc72415d67dd61eec1c4b0204800fc6a004bde68a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:19:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2c916eb0ea2034685a0e63dc72415d67dd61eec1c4b0204800fc6a004bde68a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:19:34 compute-0 podman[315379]: 2025-09-30 18:19:34.557075896 +0000 UTC m=+0.025236468 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:19:34 compute-0 podman[315379]: 2025-09-30 18:19:34.654825601 +0000 UTC m=+0.122986173 container init 261c268fd4202567dc12088e718fb70ba1d6495024852ea1ddd7c7ee939ae600 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_cray, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:19:34 compute-0 podman[315379]: 2025-09-30 18:19:34.665119259 +0000 UTC m=+0.133279821 container start 261c268fd4202567dc12088e718fb70ba1d6495024852ea1ddd7c7ee939ae600 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:19:34 compute-0 podman[315379]: 2025-09-30 18:19:34.668500017 +0000 UTC m=+0.136660649 container attach 261c268fd4202567dc12088e718fb70ba1d6495024852ea1ddd7c7ee939ae600 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_cray, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:19:34 compute-0 nova_compute[265391]: 2025-09-30 18:19:34.871 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11918
Sep 30 18:19:34 compute-0 nova_compute[265391]: 2025-09-30 18:19:34.872 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:19:34 compute-0 thirsty_cray[315396]: {
Sep 30 18:19:34 compute-0 thirsty_cray[315396]:     "0": [
Sep 30 18:19:34 compute-0 thirsty_cray[315396]:         {
Sep 30 18:19:34 compute-0 thirsty_cray[315396]:             "devices": [
Sep 30 18:19:34 compute-0 thirsty_cray[315396]:                 "/dev/loop3"
Sep 30 18:19:34 compute-0 thirsty_cray[315396]:             ],
Sep 30 18:19:34 compute-0 thirsty_cray[315396]:             "lv_name": "ceph_lv0",
Sep 30 18:19:34 compute-0 thirsty_cray[315396]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:19:34 compute-0 thirsty_cray[315396]:             "lv_size": "21470642176",
Sep 30 18:19:34 compute-0 thirsty_cray[315396]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:19:34 compute-0 thirsty_cray[315396]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:19:34 compute-0 thirsty_cray[315396]:             "name": "ceph_lv0",
Sep 30 18:19:34 compute-0 thirsty_cray[315396]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:19:34 compute-0 thirsty_cray[315396]:             "tags": {
Sep 30 18:19:34 compute-0 thirsty_cray[315396]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:19:34 compute-0 thirsty_cray[315396]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:19:34 compute-0 thirsty_cray[315396]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:19:34 compute-0 thirsty_cray[315396]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:19:34 compute-0 thirsty_cray[315396]:                 "ceph.cluster_name": "ceph",
Sep 30 18:19:34 compute-0 thirsty_cray[315396]:                 "ceph.crush_device_class": "",
Sep 30 18:19:34 compute-0 thirsty_cray[315396]:                 "ceph.encrypted": "0",
Sep 30 18:19:34 compute-0 thirsty_cray[315396]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:19:34 compute-0 thirsty_cray[315396]:                 "ceph.osd_id": "0",
Sep 30 18:19:34 compute-0 thirsty_cray[315396]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:19:34 compute-0 thirsty_cray[315396]:                 "ceph.type": "block",
Sep 30 18:19:34 compute-0 thirsty_cray[315396]:                 "ceph.vdo": "0",
Sep 30 18:19:34 compute-0 thirsty_cray[315396]:                 "ceph.with_tpm": "0"
Sep 30 18:19:34 compute-0 thirsty_cray[315396]:             },
Sep 30 18:19:34 compute-0 thirsty_cray[315396]:             "type": "block",
Sep 30 18:19:34 compute-0 thirsty_cray[315396]:             "vg_name": "ceph_vg0"
Sep 30 18:19:34 compute-0 thirsty_cray[315396]:         }
Sep 30 18:19:34 compute-0 thirsty_cray[315396]:     ]
Sep 30 18:19:34 compute-0 thirsty_cray[315396]: }
Sep 30 18:19:34 compute-0 systemd[1]: libpod-261c268fd4202567dc12088e718fb70ba1d6495024852ea1ddd7c7ee939ae600.scope: Deactivated successfully.
Sep 30 18:19:34 compute-0 podman[315379]: 2025-09-30 18:19:34.999148856 +0000 UTC m=+0.467309418 container died 261c268fd4202567dc12088e718fb70ba1d6495024852ea1ddd7c7ee939ae600 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_cray, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:19:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2c916eb0ea2034685a0e63dc72415d67dd61eec1c4b0204800fc6a004bde68a-merged.mount: Deactivated successfully.
Sep 30 18:19:35 compute-0 podman[315379]: 2025-09-30 18:19:35.049439726 +0000 UTC m=+0.517600278 container remove 261c268fd4202567dc12088e718fb70ba1d6495024852ea1ddd7c7ee939ae600 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 18:19:35 compute-0 systemd[1]: libpod-conmon-261c268fd4202567dc12088e718fb70ba1d6495024852ea1ddd7c7ee939ae600.scope: Deactivated successfully.
Sep 30 18:19:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:35 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840028c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:35 compute-0 sudo[315274]: pam_unix(sudo:session): session closed for user root
Sep 30 18:19:35 compute-0 nova_compute[265391]: 2025-09-30 18:19:35.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:19:35 compute-0 sudo[315417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:19:35 compute-0 sudo[315417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:19:35 compute-0 sudo[315417]: pam_unix(sudo:session): session closed for user root
Sep 30 18:19:35 compute-0 sudo[315442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:19:35 compute-0 sudo[315442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:19:35 compute-0 ceph-mon[73755]: pgmap v1228: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 18:19:35 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/293762595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:19:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:19:35.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:35 compute-0 podman[315508]: 2025-09-30 18:19:35.715404956 +0000 UTC m=+0.044506650 container create f10b63ba7cfec345c9f3b9ccbf082b1eead538159f57bc4732eee833f43e43eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_swartz, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Sep 30 18:19:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:19:35 compute-0 systemd[1]: Started libpod-conmon-f10b63ba7cfec345c9f3b9ccbf082b1eead538159f57bc4732eee833f43e43eb.scope.
Sep 30 18:19:35 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:19:35 compute-0 podman[315508]: 2025-09-30 18:19:35.698419404 +0000 UTC m=+0.027521078 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:19:35 compute-0 podman[315508]: 2025-09-30 18:19:35.81309386 +0000 UTC m=+0.142195544 container init f10b63ba7cfec345c9f3b9ccbf082b1eead538159f57bc4732eee833f43e43eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:19:35 compute-0 podman[315508]: 2025-09-30 18:19:35.821316324 +0000 UTC m=+0.150417988 container start f10b63ba7cfec345c9f3b9ccbf082b1eead538159f57bc4732eee833f43e43eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_swartz, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:19:35 compute-0 podman[315508]: 2025-09-30 18:19:35.825674607 +0000 UTC m=+0.154776271 container attach f10b63ba7cfec345c9f3b9ccbf082b1eead538159f57bc4732eee833f43e43eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_swartz, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:19:35 compute-0 friendly_swartz[315525]: 167 167
Sep 30 18:19:35 compute-0 systemd[1]: libpod-f10b63ba7cfec345c9f3b9ccbf082b1eead538159f57bc4732eee833f43e43eb.scope: Deactivated successfully.
Sep 30 18:19:35 compute-0 podman[315508]: 2025-09-30 18:19:35.829897807 +0000 UTC m=+0.158999471 container died f10b63ba7cfec345c9f3b9ccbf082b1eead538159f57bc4732eee833f43e43eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_swartz, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Sep 30 18:19:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-35260f892cc01de7ad972ba1d42534bd4a01b451c2c613c15865a0c7f32c520c-merged.mount: Deactivated successfully.
Sep 30 18:19:35 compute-0 podman[315508]: 2025-09-30 18:19:35.875224598 +0000 UTC m=+0.204326262 container remove f10b63ba7cfec345c9f3b9ccbf082b1eead538159f57bc4732eee833f43e43eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:19:35 compute-0 systemd[1]: libpod-conmon-f10b63ba7cfec345c9f3b9ccbf082b1eead538159f57bc4732eee833f43e43eb.scope: Deactivated successfully.
Sep 30 18:19:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:35 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380004000 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:36 compute-0 podman[315548]: 2025-09-30 18:19:36.093055729 +0000 UTC m=+0.047826706 container create addffea708b39536125d3543c64e8f46226f1b8042618ae4031b63ad8b88649c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_robinson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Sep 30 18:19:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:19:36.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:36 compute-0 systemd[1]: Started libpod-conmon-addffea708b39536125d3543c64e8f46226f1b8042618ae4031b63ad8b88649c.scope.
Sep 30 18:19:36 compute-0 podman[315548]: 2025-09-30 18:19:36.073375307 +0000 UTC m=+0.028146284 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:19:36 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:19:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f7f4017cb9f5e89dac357e76bea54b302c21ec55c88d0207a7a2c1af87a2b06/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:19:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f7f4017cb9f5e89dac357e76bea54b302c21ec55c88d0207a7a2c1af87a2b06/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:19:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f7f4017cb9f5e89dac357e76bea54b302c21ec55c88d0207a7a2c1af87a2b06/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:19:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f7f4017cb9f5e89dac357e76bea54b302c21ec55c88d0207a7a2c1af87a2b06/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:19:36 compute-0 podman[315548]: 2025-09-30 18:19:36.213051664 +0000 UTC m=+0.167822681 container init addffea708b39536125d3543c64e8f46226f1b8042618ae4031b63ad8b88649c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_robinson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 18:19:36 compute-0 podman[315548]: 2025-09-30 18:19:36.228104166 +0000 UTC m=+0.182875173 container start addffea708b39536125d3543c64e8f46226f1b8042618ae4031b63ad8b88649c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_robinson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Sep 30 18:19:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1229: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 18:19:36 compute-0 podman[315548]: 2025-09-30 18:19:36.232547161 +0000 UTC m=+0.187318238 container attach addffea708b39536125d3543c64e8f46226f1b8042618ae4031b63ad8b88649c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_robinson, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:19:36 compute-0 ceph-mon[73755]: pgmap v1229: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 18:19:36 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:19:36.369 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:65:37 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-e6c80bed-2509-4221-8bbf-987d2791d74d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e6c80bed-2509-4221-8bbf-987d2791d74d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'af4ef07c582847419a03275af50c6ffc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e767dc28-6aa0-411a-9e2c-d95e473b8f79, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b7aad425-853a-4eec-bc20-e21ec9b6e1a6) old=Port_Binding(mac=['fa:16:3e:30:65:37'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-e6c80bed-2509-4221-8bbf-987d2791d74d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e6c80bed-2509-4221-8bbf-987d2791d74d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'af4ef07c582847419a03275af50c6ffc', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:19:36 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:19:36.371 166158 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b7aad425-853a-4eec-bc20-e21ec9b6e1a6 in datapath e6c80bed-2509-4221-8bbf-987d2791d74d updated
Sep 30 18:19:36 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:19:36.373 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e6c80bed-2509-4221-8bbf-987d2791d74d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:19:36 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:19:36.374 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[2dbeef04-2089-49a7-9273-8630464053b1]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:19:36 compute-0 nova_compute[265391]: 2025-09-30 18:19:36.379 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:19:36 compute-0 nova_compute[265391]: 2025-09-30 18:19:36.379 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:19:36 compute-0 nova_compute[265391]: 2025-09-30 18:19:36.379 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:19:36 compute-0 nova_compute[265391]: 2025-09-30 18:19:36.380 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:19:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:19:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/84034973' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:19:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:19:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/84034973' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:19:36 compute-0 sshd[192864]: Timeout before authentication for connection from 115.190.39.222 to 38.102.83.202, pid = 313071
Sep 30 18:19:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:36 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:19:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:36 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:19:36 compute-0 lvm[315639]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:19:36 compute-0 lvm[315639]: VG ceph_vg0 finished
Sep 30 18:19:36 compute-0 nova_compute[265391]: 2025-09-30 18:19:36.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:19:36 compute-0 nostalgic_robinson[315564]: {}
Sep 30 18:19:36 compute-0 systemd[1]: libpod-addffea708b39536125d3543c64e8f46226f1b8042618ae4031b63ad8b88649c.scope: Deactivated successfully.
Sep 30 18:19:36 compute-0 podman[315548]: 2025-09-30 18:19:36.973050421 +0000 UTC m=+0.927821418 container died addffea708b39536125d3543c64e8f46226f1b8042618ae4031b63ad8b88649c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:19:36 compute-0 systemd[1]: libpod-addffea708b39536125d3543c64e8f46226f1b8042618ae4031b63ad8b88649c.scope: Consumed 1.179s CPU time.
Sep 30 18:19:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f7f4017cb9f5e89dac357e76bea54b302c21ec55c88d0207a7a2c1af87a2b06-merged.mount: Deactivated successfully.
Sep 30 18:19:37 compute-0 podman[315548]: 2025-09-30 18:19:37.025192029 +0000 UTC m=+0.979963006 container remove addffea708b39536125d3543c64e8f46226f1b8042618ae4031b63ad8b88649c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_robinson, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:19:37 compute-0 systemd[1]: libpod-conmon-addffea708b39536125d3543c64e8f46226f1b8042618ae4031b63ad8b88649c.scope: Deactivated successfully.
Sep 30 18:19:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:37 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4003c20 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:37 compute-0 sudo[315442]: pam_unix(sudo:session): session closed for user root
Sep 30 18:19:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:19:37 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:19:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:19:37 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:19:37 compute-0 sudo[315656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:19:37 compute-0 sudo[315656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:19:37 compute-0 sudo[315656]: pam_unix(sudo:session): session closed for user root
Sep 30 18:19:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:19:37.208Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:19:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:19:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:19:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/84034973' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:19:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/84034973' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:19:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:19:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:19:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:19:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:19:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:19:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:19:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:19:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:19:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:19:37 compute-0 nova_compute[265391]: 2025-09-30 18:19:37.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:19:37 compute-0 nova_compute[265391]: 2025-09-30 18:19:37.427 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.12/site-packages/nova/compute/manager.py:11947
Sep 30 18:19:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:19:37.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:37 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003f90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:19:38.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1230: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 18:19:38 compute-0 ceph-mon[73755]: pgmap v1230: 353 pgs: 353 active+clean; 41 MiB data, 234 MiB used, 40 GiB / 40 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 18:19:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:19:38] "GET /metrics HTTP/1.1" 200 46632 "" "Prometheus/2.51.0"
Sep 30 18:19:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:19:38] "GET /metrics HTTP/1.1" 200 46632 "" "Prometheus/2.51.0"
Sep 30 18:19:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:39 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840028c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:39 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 18:19:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:19:39.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:39 compute-0 podman[315684]: 2025-09-30 18:19:39.607454066 +0000 UTC m=+0.116924526 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Sep 30 18:19:39 compute-0 podman[315686]: 2025-09-30 18:19:39.616321396 +0000 UTC m=+0.117154411 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_id=edpm, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=)
Sep 30 18:19:39 compute-0 podman[315685]: 2025-09-30 18:19:39.633310419 +0000 UTC m=+0.143053756 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:19:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:39 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380004000 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:19:40.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:40 compute-0 nova_compute[265391]: 2025-09-30 18:19:40.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:19:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1231: 353 pgs: 353 active+clean; 41 MiB data, 233 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 18:19:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:19:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:41 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380004000 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:41 compute-0 ceph-mon[73755]: pgmap v1231: 353 pgs: 353 active+clean; 41 MiB data, 233 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 18:19:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:19:41.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:41 compute-0 nova_compute[265391]: 2025-09-30 18:19:41.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:19:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:41 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003fb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 18:19:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:19:42.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 18:19:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1232: 353 pgs: 353 active+clean; 41 MiB data, 233 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 18:19:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:43 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03980029e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:43 compute-0 ceph-mon[73755]: pgmap v1232: 353 pgs: 353 active+clean; 41 MiB data, 233 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 18:19:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:19:43.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:19:43.731Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:19:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:19:43.731Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:19:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:43 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4002140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:19:44.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1233: 353 pgs: 353 active+clean; 41 MiB data, 233 MiB used, 40 GiB / 40 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 18:19:44 compute-0 ceph-mon[73755]: pgmap v1233: 353 pgs: 353 active+clean; 41 MiB data, 233 MiB used, 40 GiB / 40 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 18:19:44 compute-0 nova_compute[265391]: 2025-09-30 18:19:44.794 2 DEBUG oslo_concurrency.lockutils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Acquiring lock "cbbd84c1-d174-40d7-be54-3123704f0e0b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:19:44 compute-0 nova_compute[265391]: 2025-09-30 18:19:44.794 2 DEBUG oslo_concurrency.lockutils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Lock "cbbd84c1-d174-40d7-be54-3123704f0e0b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:19:44 compute-0 nova_compute[265391]: 2025-09-30 18:19:44.802 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:19:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/181945 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 18:19:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:45 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004540 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:45 compute-0 nova_compute[265391]: 2025-09-30 18:19:45.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:19:45 compute-0 nova_compute[265391]: 2025-09-30 18:19:45.300 2 DEBUG nova.compute.manager [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Sep 30 18:19:45 compute-0 nova_compute[265391]: 2025-09-30 18:19:45.309 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:19:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:19:45.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:19:45 compute-0 nova_compute[265391]: 2025-09-30 18:19:45.846 2 DEBUG oslo_concurrency.lockutils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:19:45 compute-0 nova_compute[265391]: 2025-09-30 18:19:45.847 2 DEBUG oslo_concurrency.lockutils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:19:45 compute-0 nova_compute[265391]: 2025-09-30 18:19:45.852 2 DEBUG nova.virt.hardware [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Sep 30 18:19:45 compute-0 nova_compute[265391]: 2025-09-30 18:19:45.853 2 INFO nova.compute.claims [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Claim successful on node compute-0.ctlplane.example.com
Sep 30 18:19:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:45 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003fd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:19:46.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1234: 353 pgs: 353 active+clean; 41 MiB data, 233 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 18:19:46 compute-0 nova_compute[265391]: 2025-09-30 18:19:46.902 2 DEBUG oslo_concurrency.processutils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:19:46 compute-0 nova_compute[265391]: 2025-09-30 18:19:46.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:19:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:47 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03980029e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:19:47.210Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:19:47 compute-0 ceph-mon[73755]: pgmap v1234: 353 pgs: 353 active+clean; 41 MiB data, 233 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 18:19:47 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:19:47 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1814289124' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:19:47 compute-0 nova_compute[265391]: 2025-09-30 18:19:47.378 2 DEBUG oslo_concurrency.processutils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:19:47 compute-0 nova_compute[265391]: 2025-09-30 18:19:47.385 2 DEBUG nova.compute.provider_tree [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:19:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:19:47.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:47 compute-0 nova_compute[265391]: 2025-09-30 18:19:47.894 2 DEBUG nova.scheduler.client.report [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:19:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:47 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4002140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:19:48.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1235: 353 pgs: 353 active+clean; 41 MiB data, 233 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 18:19:48 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1814289124' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:19:48 compute-0 ceph-mon[73755]: pgmap v1235: 353 pgs: 353 active+clean; 41 MiB data, 233 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 18:19:48 compute-0 nova_compute[265391]: 2025-09-30 18:19:48.408 2 DEBUG oslo_concurrency.lockutils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.561s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:19:48 compute-0 nova_compute[265391]: 2025-09-30 18:19:48.408 2 DEBUG nova.compute.manager [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Sep 30 18:19:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:19:48] "GET /metrics HTTP/1.1" 200 46632 "" "Prometheus/2.51.0"
Sep 30 18:19:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:19:48] "GET /metrics HTTP/1.1" 200 46632 "" "Prometheus/2.51.0"
Sep 30 18:19:48 compute-0 nova_compute[265391]: 2025-09-30 18:19:48.919 2 DEBUG nova.compute.manager [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Sep 30 18:19:48 compute-0 nova_compute[265391]: 2025-09-30 18:19:48.920 2 DEBUG nova.network.neutron [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Sep 30 18:19:48 compute-0 nova_compute[265391]: 2025-09-30 18:19:48.920 2 WARNING neutronclient.v2_0.client [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:19:48 compute-0 nova_compute[265391]: 2025-09-30 18:19:48.923 2 WARNING neutronclient.v2_0.client [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:19:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:49 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004540 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:49 compute-0 nova_compute[265391]: 2025-09-30 18:19:49.432 2 INFO nova.virt.libvirt.driver [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Sep 30 18:19:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:19:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:19:49.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:19:49 compute-0 nova_compute[265391]: 2025-09-30 18:19:49.822 2 DEBUG nova.network.neutron [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Successfully created port: ca4e45b7-a42b-4e47-80d8-749194caf98a _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Sep 30 18:19:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:49 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390004170 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:49 compute-0 nova_compute[265391]: 2025-09-30 18:19:49.941 2 DEBUG nova.compute.manager [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Sep 30 18:19:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:19:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:19:50.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:19:50 compute-0 nova_compute[265391]: 2025-09-30 18:19:50.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:19:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1236: 353 pgs: 353 active+clean; 41 MiB data, 233 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 18:19:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:19:50 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Sep 30 18:19:50 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:19:50.762529) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 18:19:50 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Sep 30 18:19:50 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256390762565, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2140, "num_deletes": 505, "total_data_size": 3548784, "memory_usage": 3626632, "flush_reason": "Manual Compaction"}
Sep 30 18:19:50 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Sep 30 18:19:50 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256390775750, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 2098906, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31094, "largest_seqno": 33233, "table_properties": {"data_size": 2092011, "index_size": 3200, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 20774, "raw_average_key_size": 19, "raw_value_size": 2074751, "raw_average_value_size": 1977, "num_data_blocks": 144, "num_entries": 1049, "num_filter_entries": 1049, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759256204, "oldest_key_time": 1759256204, "file_creation_time": 1759256390, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:19:50 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 13304 microseconds, and 6734 cpu microseconds.
Sep 30 18:19:50 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:19:50 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:19:50.775823) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 2098906 bytes OK
Sep 30 18:19:50 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:19:50.775853) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Sep 30 18:19:50 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:19:50.777441) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Sep 30 18:19:50 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:19:50.777455) EVENT_LOG_v1 {"time_micros": 1759256390777450, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 18:19:50 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:19:50.777479) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 18:19:50 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 3538855, prev total WAL file size 3538855, number of live WAL files 2.
Sep 30 18:19:50 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:19:50 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:19:50.778601) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303032' seq:72057594037927935, type:22 .. '6D6772737461740031323533' seq:0, type:0; will stop at (end)
Sep 30 18:19:50 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 18:19:50 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(2049KB)], [68(12MB)]
Sep 30 18:19:50 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256390778706, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 14704564, "oldest_snapshot_seqno": -1}
Sep 30 18:19:50 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6103 keys, 12087933 bytes, temperature: kUnknown
Sep 30 18:19:50 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256390865059, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 12087933, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12049028, "index_size": 22539, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15301, "raw_key_size": 156789, "raw_average_key_size": 25, "raw_value_size": 11940918, "raw_average_value_size": 1956, "num_data_blocks": 909, "num_entries": 6103, "num_filter_entries": 6103, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759256390, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:19:50 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:19:50 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:19:50.865425) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 12087933 bytes
Sep 30 18:19:50 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:19:50.867256) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.0 rd, 139.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 12.0 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(12.8) write-amplify(5.8) OK, records in: 7042, records dropped: 939 output_compression: NoCompression
Sep 30 18:19:50 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:19:50.867279) EVENT_LOG_v1 {"time_micros": 1759256390867268, "job": 38, "event": "compaction_finished", "compaction_time_micros": 86478, "compaction_time_cpu_micros": 56676, "output_level": 6, "num_output_files": 1, "total_output_size": 12087933, "num_input_records": 7042, "num_output_records": 6103, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 18:19:50 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:19:50 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256390868081, "job": 38, "event": "table_file_deletion", "file_number": 70}
Sep 30 18:19:50 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:19:50 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256390871306, "job": 38, "event": "table_file_deletion", "file_number": 68}
Sep 30 18:19:50 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:19:50.778314) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:19:50 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:19:50.871523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:19:50 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:19:50.871531) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:19:50 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:19:50.871533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:19:50 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:19:50.871535) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:19:50 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:19:50.871543) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:19:50 compute-0 nova_compute[265391]: 2025-09-30 18:19:50.965 2 DEBUG nova.compute.manager [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Sep 30 18:19:50 compute-0 nova_compute[265391]: 2025-09-30 18:19:50.966 2 DEBUG nova.virt.libvirt.driver [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Sep 30 18:19:50 compute-0 nova_compute[265391]: 2025-09-30 18:19:50.966 2 INFO nova.virt.libvirt.driver [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Creating image(s)
Sep 30 18:19:50 compute-0 sshd-session[315768]: Invalid user abcd from 45.252.249.158 port 54420
Sep 30 18:19:50 compute-0 nova_compute[265391]: 2025-09-30 18:19:50.994 2 DEBUG nova.storage.rbd_utils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] rbd image cbbd84c1-d174-40d7-be54-3123704f0e0b_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:19:51 compute-0 sshd-session[315768]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:19:51 compute-0 sshd-session[315768]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158
Sep 30 18:19:51 compute-0 nova_compute[265391]: 2025-09-30 18:19:51.026 2 DEBUG nova.storage.rbd_utils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] rbd image cbbd84c1-d174-40d7-be54-3123704f0e0b_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:19:51 compute-0 nova_compute[265391]: 2025-09-30 18:19:51.054 2 DEBUG nova.storage.rbd_utils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] rbd image cbbd84c1-d174-40d7-be54-3123704f0e0b_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:19:51 compute-0 nova_compute[265391]: 2025-09-30 18:19:51.059 2 DEBUG oslo_concurrency.processutils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:19:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:51 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03980029e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:51 compute-0 nova_compute[265391]: 2025-09-30 18:19:51.131 2 DEBUG oslo_concurrency.processutils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:19:51 compute-0 nova_compute[265391]: 2025-09-30 18:19:51.132 2 DEBUG oslo_concurrency.lockutils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Acquiring lock "cb2d580238c9b109feae7f1462613dc547671457" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:19:51 compute-0 nova_compute[265391]: 2025-09-30 18:19:51.133 2 DEBUG oslo_concurrency.lockutils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:19:51 compute-0 nova_compute[265391]: 2025-09-30 18:19:51.133 2 DEBUG oslo_concurrency.lockutils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:19:51 compute-0 nova_compute[265391]: 2025-09-30 18:19:51.155 2 DEBUG nova.storage.rbd_utils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] rbd image cbbd84c1-d174-40d7-be54-3123704f0e0b_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:19:51 compute-0 nova_compute[265391]: 2025-09-30 18:19:51.159 2 DEBUG oslo_concurrency.processutils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 cbbd84c1-d174-40d7-be54-3123704f0e0b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:19:51 compute-0 ceph-mon[73755]: pgmap v1236: 353 pgs: 353 active+clean; 41 MiB data, 233 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 18:19:51 compute-0 nova_compute[265391]: 2025-09-30 18:19:51.315 2 DEBUG nova.network.neutron [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Successfully updated port: ca4e45b7-a42b-4e47-80d8-749194caf98a _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Sep 30 18:19:51 compute-0 nova_compute[265391]: 2025-09-30 18:19:51.402 2 DEBUG nova.compute.manager [req-c29f59ee-9511-45ff-a3b4-c125171e7b00 req-7ee733fa-c3a8-4564-b36e-2fb3814633d1 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Received event network-changed-ca4e45b7-a42b-4e47-80d8-749194caf98a external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:19:51 compute-0 nova_compute[265391]: 2025-09-30 18:19:51.402 2 DEBUG nova.compute.manager [req-c29f59ee-9511-45ff-a3b4-c125171e7b00 req-7ee733fa-c3a8-4564-b36e-2fb3814633d1 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Refreshing instance network info cache due to event network-changed-ca4e45b7-a42b-4e47-80d8-749194caf98a. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:19:51 compute-0 nova_compute[265391]: 2025-09-30 18:19:51.403 2 DEBUG oslo_concurrency.lockutils [req-c29f59ee-9511-45ff-a3b4-c125171e7b00 req-7ee733fa-c3a8-4564-b36e-2fb3814633d1 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-cbbd84c1-d174-40d7-be54-3123704f0e0b" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:19:51 compute-0 nova_compute[265391]: 2025-09-30 18:19:51.403 2 DEBUG oslo_concurrency.lockutils [req-c29f59ee-9511-45ff-a3b4-c125171e7b00 req-7ee733fa-c3a8-4564-b36e-2fb3814633d1 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-cbbd84c1-d174-40d7-be54-3123704f0e0b" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:19:51 compute-0 nova_compute[265391]: 2025-09-30 18:19:51.403 2 DEBUG nova.network.neutron [req-c29f59ee-9511-45ff-a3b4-c125171e7b00 req-7ee733fa-c3a8-4564-b36e-2fb3814633d1 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Refreshing network info cache for port ca4e45b7-a42b-4e47-80d8-749194caf98a _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:19:51 compute-0 nova_compute[265391]: 2025-09-30 18:19:51.466 2 DEBUG oslo_concurrency.processutils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 cbbd84c1-d174-40d7-be54-3123704f0e0b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.307s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:19:51 compute-0 nova_compute[265391]: 2025-09-30 18:19:51.547 2 DEBUG nova.storage.rbd_utils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] resizing rbd image cbbd84c1-d174-40d7-be54-3123704f0e0b_disk to 1073741824 resize /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:288
Sep 30 18:19:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:19:51.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:51 compute-0 nova_compute[265391]: 2025-09-30 18:19:51.664 2 DEBUG nova.virt.libvirt.driver [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Sep 30 18:19:51 compute-0 nova_compute[265391]: 2025-09-30 18:19:51.665 2 DEBUG nova.virt.libvirt.driver [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Ensure instance console log exists: /var/lib/nova/instances/cbbd84c1-d174-40d7-be54-3123704f0e0b/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Sep 30 18:19:51 compute-0 nova_compute[265391]: 2025-09-30 18:19:51.665 2 DEBUG oslo_concurrency.lockutils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:19:51 compute-0 nova_compute[265391]: 2025-09-30 18:19:51.666 2 DEBUG oslo_concurrency.lockutils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:19:51 compute-0 nova_compute[265391]: 2025-09-30 18:19:51.666 2 DEBUG oslo_concurrency.lockutils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:19:51 compute-0 nova_compute[265391]: 2025-09-30 18:19:51.831 2 DEBUG oslo_concurrency.lockutils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Acquiring lock "refresh_cache-cbbd84c1-d174-40d7-be54-3123704f0e0b" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:19:51 compute-0 nova_compute[265391]: 2025-09-30 18:19:51.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:19:51 compute-0 nova_compute[265391]: 2025-09-30 18:19:51.926 2 WARNING neutronclient.v2_0.client [req-c29f59ee-9511-45ff-a3b4-c125171e7b00 req-7ee733fa-c3a8-4564-b36e-2fb3814633d1 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:19:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:51 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4002140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:52 compute-0 sudo[315944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:19:52 compute-0 sudo[315944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:19:52 compute-0 sudo[315944]: pam_unix(sudo:session): session closed for user root
Sep 30 18:19:52 compute-0 nova_compute[265391]: 2025-09-30 18:19:52.086 2 DEBUG nova.network.neutron [req-c29f59ee-9511-45ff-a3b4-c125171e7b00 req-7ee733fa-c3a8-4564-b36e-2fb3814633d1 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:19:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:19:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:19:52.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:19:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1237: 353 pgs: 353 active+clean; 41 MiB data, 233 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 18:19:52 compute-0 nova_compute[265391]: 2025-09-30 18:19:52.262 2 DEBUG nova.network.neutron [req-c29f59ee-9511-45ff-a3b4-c125171e7b00 req-7ee733fa-c3a8-4564-b36e-2fb3814633d1 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:19:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:19:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:19:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:19:52 compute-0 sshd-session[315768]: Failed password for invalid user abcd from 45.252.249.158 port 54420 ssh2
Sep 30 18:19:52 compute-0 nova_compute[265391]: 2025-09-30 18:19:52.771 2 DEBUG oslo_concurrency.lockutils [req-c29f59ee-9511-45ff-a3b4-c125171e7b00 req-7ee733fa-c3a8-4564-b36e-2fb3814633d1 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-cbbd84c1-d174-40d7-be54-3123704f0e0b" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:19:52 compute-0 nova_compute[265391]: 2025-09-30 18:19:52.772 2 DEBUG oslo_concurrency.lockutils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Acquired lock "refresh_cache-cbbd84c1-d174-40d7-be54-3123704f0e0b" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:19:52 compute-0 nova_compute[265391]: 2025-09-30 18:19:52.772 2 DEBUG nova.network.neutron [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:19:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:53 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004540 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:53 compute-0 sshd-session[315768]: Received disconnect from 45.252.249.158 port 54420:11: Bye Bye [preauth]
Sep 30 18:19:53 compute-0 sshd-session[315768]: Disconnected from invalid user abcd 45.252.249.158 port 54420 [preauth]
Sep 30 18:19:53 compute-0 ceph-mon[73755]: pgmap v1237: 353 pgs: 353 active+clean; 41 MiB data, 233 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 18:19:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:19:53.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:19:53.733Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:19:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:53 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390004170 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:54 compute-0 nova_compute[265391]: 2025-09-30 18:19:54.084 2 DEBUG nova.network.neutron [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:19:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:19:54.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1238: 353 pgs: 353 active+clean; 88 MiB data, 254 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:19:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:19:54.295 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:19:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:19:54.295 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:19:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:19:54.295 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:19:54 compute-0 nova_compute[265391]: 2025-09-30 18:19:54.303 2 WARNING neutronclient.v2_0.client [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:19:54 compute-0 ceph-mon[73755]: pgmap v1238: 353 pgs: 353 active+clean; 88 MiB data, 254 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:19:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:55 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03980029e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:55 compute-0 nova_compute[265391]: 2025-09-30 18:19:55.160 2 DEBUG nova.network.neutron [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Updating instance_info_cache with network_info: [{"id": "ca4e45b7-a42b-4e47-80d8-749194caf98a", "address": "fa:16:3e:7e:6b:bd", "network": {"id": "443be7ca-f628-4a45-95b6-620d37172d7b", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-1888091317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "269f60e72ce1460a98da519466c89da6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca4e45b7-a4", "ovs_interfaceid": "ca4e45b7-a42b-4e47-80d8-749194caf98a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:19:55 compute-0 nova_compute[265391]: 2025-09-30 18:19:55.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:19:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:19:55.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:55 compute-0 nova_compute[265391]: 2025-09-30 18:19:55.668 2 DEBUG oslo_concurrency.lockutils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Releasing lock "refresh_cache-cbbd84c1-d174-40d7-be54-3123704f0e0b" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:19:55 compute-0 nova_compute[265391]: 2025-09-30 18:19:55.669 2 DEBUG nova.compute.manager [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Instance network_info: |[{"id": "ca4e45b7-a42b-4e47-80d8-749194caf98a", "address": "fa:16:3e:7e:6b:bd", "network": {"id": "443be7ca-f628-4a45-95b6-620d37172d7b", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-1888091317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "269f60e72ce1460a98da519466c89da6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca4e45b7-a4", "ovs_interfaceid": "ca4e45b7-a42b-4e47-80d8-749194caf98a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Sep 30 18:19:55 compute-0 nova_compute[265391]: 2025-09-30 18:19:55.671 2 DEBUG nova.virt.libvirt.driver [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Start _get_guest_xml network_info=[{"id": "ca4e45b7-a42b-4e47-80d8-749194caf98a", "address": "fa:16:3e:7e:6b:bd", "network": {"id": "443be7ca-f628-4a45-95b6-620d37172d7b", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-1888091317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "269f60e72ce1460a98da519466c89da6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca4e45b7-a4", "ovs_interfaceid": "ca4e45b7-a42b-4e47-80d8-749194caf98a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'image_id': '5b99cbca-b655-4be5-8343-cf504005c42e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Sep 30 18:19:55 compute-0 nova_compute[265391]: 2025-09-30 18:19:55.676 2 WARNING nova.virt.libvirt.driver [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:19:55 compute-0 nova_compute[265391]: 2025-09-30 18:19:55.678 2 DEBUG nova.virt.driver [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='5b99cbca-b655-4be5-8343-cf504005c42e', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteHostMaintenanceStrategy-server-1407640112', uuid='cbbd84c1-d174-40d7-be54-3123704f0e0b'), owner=OwnerMeta(userid='57be6c3d2e0d431dae0127ac659de1e0', username='tempest-TestExecuteHostMaintenanceStrategy-1597156537-project-admin', projectid='af4ef07c582847419a03275af50c6ffc', projectname='tempest-TestExecuteHostMaintenanceStrategy-1597156537'), image=ImageMeta(id='5b99cbca-b655-4be5-8343-cf504005c42e', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "ca4e45b7-a42b-4e47-80d8-749194caf98a", "address": "fa:16:3e:7e:6b:bd", "network": {"id": "443be7ca-f628-4a45-95b6-620d37172d7b", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-1888091317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "269f60e72ce1460a98da519466c89da6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca4e45b7-a4", "ovs_interfaceid": "ca4e45b7-a42b-4e47-80d8-749194caf98a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20250919142712.b99a882.el10', creation_time=1759256395.6779275) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Sep 30 18:19:55 compute-0 nova_compute[265391]: 2025-09-30 18:19:55.681 2 DEBUG nova.virt.libvirt.host [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Sep 30 18:19:55 compute-0 nova_compute[265391]: 2025-09-30 18:19:55.682 2 DEBUG nova.virt.libvirt.host [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Sep 30 18:19:55 compute-0 nova_compute[265391]: 2025-09-30 18:19:55.684 2 DEBUG nova.virt.libvirt.host [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Sep 30 18:19:55 compute-0 nova_compute[265391]: 2025-09-30 18:19:55.685 2 DEBUG nova.virt.libvirt.host [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Sep 30 18:19:55 compute-0 nova_compute[265391]: 2025-09-30 18:19:55.685 2 DEBUG nova.virt.libvirt.driver [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Sep 30 18:19:55 compute-0 nova_compute[265391]: 2025-09-30 18:19:55.685 2 DEBUG nova.virt.hardware [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-09-30T18:04:50Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Sep 30 18:19:55 compute-0 nova_compute[265391]: 2025-09-30 18:19:55.686 2 DEBUG nova.virt.hardware [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Sep 30 18:19:55 compute-0 nova_compute[265391]: 2025-09-30 18:19:55.686 2 DEBUG nova.virt.hardware [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Sep 30 18:19:55 compute-0 nova_compute[265391]: 2025-09-30 18:19:55.686 2 DEBUG nova.virt.hardware [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Sep 30 18:19:55 compute-0 nova_compute[265391]: 2025-09-30 18:19:55.687 2 DEBUG nova.virt.hardware [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Sep 30 18:19:55 compute-0 nova_compute[265391]: 2025-09-30 18:19:55.687 2 DEBUG nova.virt.hardware [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Sep 30 18:19:55 compute-0 nova_compute[265391]: 2025-09-30 18:19:55.687 2 DEBUG nova.virt.hardware [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Sep 30 18:19:55 compute-0 nova_compute[265391]: 2025-09-30 18:19:55.687 2 DEBUG nova.virt.hardware [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Sep 30 18:19:55 compute-0 nova_compute[265391]: 2025-09-30 18:19:55.688 2 DEBUG nova.virt.hardware [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Sep 30 18:19:55 compute-0 nova_compute[265391]: 2025-09-30 18:19:55.688 2 DEBUG nova.virt.hardware [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Sep 30 18:19:55 compute-0 nova_compute[265391]: 2025-09-30 18:19:55.688 2 DEBUG nova.virt.hardware [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Sep 30 18:19:55 compute-0 nova_compute[265391]: 2025-09-30 18:19:55.692 2 DEBUG oslo_concurrency.processutils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:19:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:19:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:55 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4002140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:19:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:19:56.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:19:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:19:56 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2871397625' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:19:56 compute-0 nova_compute[265391]: 2025-09-30 18:19:56.175 2 DEBUG oslo_concurrency.processutils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:19:56 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2871397625' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:19:56 compute-0 nova_compute[265391]: 2025-09-30 18:19:56.215 2 DEBUG nova.storage.rbd_utils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] rbd image cbbd84c1-d174-40d7-be54-3123704f0e0b_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:19:56 compute-0 nova_compute[265391]: 2025-09-30 18:19:56.221 2 DEBUG oslo_concurrency.processutils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:19:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1239: 353 pgs: 353 active+clean; 88 MiB data, 254 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:19:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:19:56 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3463357897' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:19:56 compute-0 nova_compute[265391]: 2025-09-30 18:19:56.766 2 DEBUG oslo_concurrency.processutils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:19:56 compute-0 nova_compute[265391]: 2025-09-30 18:19:56.768 2 DEBUG nova.virt.libvirt.vif [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:19:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-1407640112',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-1407640112',id=12,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='af4ef07c582847419a03275af50c6ffc',ramdisk_id='',reservation_id='r-mj5oq7ov',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-1597156537',owner_user_name='tempest-TestExecuteHostMaintenanceStrategy-1597156537-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:19:49Z,user_data=None,user_id='57be6c3d2e0d431dae0127ac659de1e0',uuid=cbbd84c1-d174-40d7-be54-3123704f0e0b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ca4e45b7-a42b-4e47-80d8-749194caf98a", "address": "fa:16:3e:7e:6b:bd", "network": {"id": "443be7ca-f628-4a45-95b6-620d37172d7b", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-1888091317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "269f60e72ce1460a98da519466c89da6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca4e45b7-a4", "ovs_interfaceid": "ca4e45b7-a42b-4e47-80d8-749194caf98a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:19:56 compute-0 nova_compute[265391]: 2025-09-30 18:19:56.768 2 DEBUG nova.network.os_vif_util [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Converting VIF {"id": "ca4e45b7-a42b-4e47-80d8-749194caf98a", "address": "fa:16:3e:7e:6b:bd", "network": {"id": "443be7ca-f628-4a45-95b6-620d37172d7b", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-1888091317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "269f60e72ce1460a98da519466c89da6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca4e45b7-a4", "ovs_interfaceid": "ca4e45b7-a42b-4e47-80d8-749194caf98a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:19:56 compute-0 nova_compute[265391]: 2025-09-30 18:19:56.769 2 DEBUG nova.network.os_vif_util [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:6b:bd,bridge_name='br-int',has_traffic_filtering=True,id=ca4e45b7-a42b-4e47-80d8-749194caf98a,network=Network(443be7ca-f628-4a45-95b6-620d37172d7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca4e45b7-a4') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:19:56 compute-0 nova_compute[265391]: 2025-09-30 18:19:56.770 2 DEBUG nova.objects.instance [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Lazy-loading 'pci_devices' on Instance uuid cbbd84c1-d174-40d7-be54-3123704f0e0b obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:19:56 compute-0 nova_compute[265391]: 2025-09-30 18:19:56.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:19:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:57 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004540 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:19:57.211Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:19:57 compute-0 ceph-mon[73755]: pgmap v1239: 353 pgs: 353 active+clean; 88 MiB data, 254 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:19:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3463357897' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:19:57 compute-0 nova_compute[265391]: 2025-09-30 18:19:57.277 2 DEBUG nova.virt.libvirt.driver [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] End _get_guest_xml xml=<domain type="kvm">
Sep 30 18:19:57 compute-0 nova_compute[265391]:   <uuid>cbbd84c1-d174-40d7-be54-3123704f0e0b</uuid>
Sep 30 18:19:57 compute-0 nova_compute[265391]:   <name>instance-0000000c</name>
Sep 30 18:19:57 compute-0 nova_compute[265391]:   <memory>131072</memory>
Sep 30 18:19:57 compute-0 nova_compute[265391]:   <vcpu>1</vcpu>
Sep 30 18:19:57 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:19:57 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteHostMaintenanceStrategy-server-1407640112</nova:name>
Sep 30 18:19:57 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:19:55</nova:creationTime>
Sep 30 18:19:57 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:19:57 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:19:57 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:19:57 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:19:57 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:19:57 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:19:57 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:19:57 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:19:57 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:19:57 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:19:57 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:19:57 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:19:57 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:19:57 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:19:57 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:19:57 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:19:57 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:19:57 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:19:57 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:19:57 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:19:57 compute-0 nova_compute[265391]:         <nova:user uuid="57be6c3d2e0d431dae0127ac659de1e0">tempest-TestExecuteHostMaintenanceStrategy-1597156537-project-admin</nova:user>
Sep 30 18:19:57 compute-0 nova_compute[265391]:         <nova:project uuid="af4ef07c582847419a03275af50c6ffc">tempest-TestExecuteHostMaintenanceStrategy-1597156537</nova:project>
Sep 30 18:19:57 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:19:57 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:19:57 compute-0 nova_compute[265391]:         <nova:port uuid="ca4e45b7-a42b-4e47-80d8-749194caf98a">
Sep 30 18:19:57 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:19:57 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:19:57 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:19:57 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <system>
Sep 30 18:19:57 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:19:57 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:19:57 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:19:57 compute-0 nova_compute[265391]:       <entry name="serial">cbbd84c1-d174-40d7-be54-3123704f0e0b</entry>
Sep 30 18:19:57 compute-0 nova_compute[265391]:       <entry name="uuid">cbbd84c1-d174-40d7-be54-3123704f0e0b</entry>
Sep 30 18:19:57 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     </system>
Sep 30 18:19:57 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:19:57 compute-0 nova_compute[265391]:   <os>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="q35">hvm</type>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:   </os>
Sep 30 18:19:57 compute-0 nova_compute[265391]:   <features>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <vmcoreinfo/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:   </features>
Sep 30 18:19:57 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:19:57 compute-0 nova_compute[265391]:   <cpu mode="host-model" match="exact">
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <topology sockets="1" cores="1" threads="1"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:19:57 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:19:57 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/cbbd84c1-d174-40d7-be54-3123704f0e0b_disk">
Sep 30 18:19:57 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:       </source>
Sep 30 18:19:57 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:19:57 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:19:57 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:19:57 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/cbbd84c1-d174-40d7-be54-3123704f0e0b_disk.config">
Sep 30 18:19:57 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:       </source>
Sep 30 18:19:57 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:19:57 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:19:57 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:19:57 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:7e:6b:bd"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:       <target dev="tapca4e45b7-a4"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:19:57 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/cbbd84c1-d174-40d7-be54-3123704f0e0b/console.log" append="off"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <video>
Sep 30 18:19:57 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     </video>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:19:57 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <controller type="usb" index="0"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:19:57 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:19:57 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:19:57 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:19:57 compute-0 nova_compute[265391]: </domain>
Sep 30 18:19:57 compute-0 nova_compute[265391]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Sep 30 18:19:57 compute-0 nova_compute[265391]: 2025-09-30 18:19:57.278 2 DEBUG nova.compute.manager [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Preparing to wait for external event network-vif-plugged-ca4e45b7-a42b-4e47-80d8-749194caf98a prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:19:57 compute-0 nova_compute[265391]: 2025-09-30 18:19:57.278 2 DEBUG oslo_concurrency.lockutils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Acquiring lock "cbbd84c1-d174-40d7-be54-3123704f0e0b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:19:57 compute-0 nova_compute[265391]: 2025-09-30 18:19:57.278 2 DEBUG oslo_concurrency.lockutils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Lock "cbbd84c1-d174-40d7-be54-3123704f0e0b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:19:57 compute-0 nova_compute[265391]: 2025-09-30 18:19:57.279 2 DEBUG oslo_concurrency.lockutils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Lock "cbbd84c1-d174-40d7-be54-3123704f0e0b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:19:57 compute-0 nova_compute[265391]: 2025-09-30 18:19:57.279 2 DEBUG nova.virt.libvirt.vif [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:19:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-1407640112',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-1407640112',id=12,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='af4ef07c582847419a03275af50c6ffc',ramdisk_id='',reservation_id='r-mj5oq7ov',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-1597156537',owner_user_name='tempest-TestExecuteHostMaintenanceStrategy-1597156537-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:19:49Z,user_data=None,user_id='57be6c3d2e0d431dae0127ac659de1e0',uuid=cbbd84c1-d174-40d7-be54-3123704f0e0b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ca4e45b7-a42b-4e47-80d8-749194caf98a", "address": "fa:16:3e:7e:6b:bd", "network": {"id": "443be7ca-f628-4a45-95b6-620d37172d7b", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-1888091317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "269f60e72ce1460a98da519466c89da6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca4e45b7-a4", "ovs_interfaceid": "ca4e45b7-a42b-4e47-80d8-749194caf98a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Sep 30 18:19:57 compute-0 nova_compute[265391]: 2025-09-30 18:19:57.280 2 DEBUG nova.network.os_vif_util [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Converting VIF {"id": "ca4e45b7-a42b-4e47-80d8-749194caf98a", "address": "fa:16:3e:7e:6b:bd", "network": {"id": "443be7ca-f628-4a45-95b6-620d37172d7b", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-1888091317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "269f60e72ce1460a98da519466c89da6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca4e45b7-a4", "ovs_interfaceid": "ca4e45b7-a42b-4e47-80d8-749194caf98a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:19:57 compute-0 nova_compute[265391]: 2025-09-30 18:19:57.280 2 DEBUG nova.network.os_vif_util [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:6b:bd,bridge_name='br-int',has_traffic_filtering=True,id=ca4e45b7-a42b-4e47-80d8-749194caf98a,network=Network(443be7ca-f628-4a45-95b6-620d37172d7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca4e45b7-a4') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:19:57 compute-0 nova_compute[265391]: 2025-09-30 18:19:57.281 2 DEBUG os_vif [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:6b:bd,bridge_name='br-int',has_traffic_filtering=True,id=ca4e45b7-a42b-4e47-80d8-749194caf98a,network=Network(443be7ca-f628-4a45-95b6-620d37172d7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca4e45b7-a4') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Sep 30 18:19:57 compute-0 nova_compute[265391]: 2025-09-30 18:19:57.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:19:57 compute-0 nova_compute[265391]: 2025-09-30 18:19:57.282 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:19:57 compute-0 nova_compute[265391]: 2025-09-30 18:19:57.282 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:19:57 compute-0 nova_compute[265391]: 2025-09-30 18:19:57.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:19:57 compute-0 nova_compute[265391]: 2025-09-30 18:19:57.284 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '05e3604c-43d4-556c-880d-c45ed32da082', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:19:57 compute-0 nova_compute[265391]: 2025-09-30 18:19:57.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:19:57 compute-0 nova_compute[265391]: 2025-09-30 18:19:57.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:19:57 compute-0 nova_compute[265391]: 2025-09-30 18:19:57.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:19:57 compute-0 nova_compute[265391]: 2025-09-30 18:19:57.296 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapca4e45b7-a4, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:19:57 compute-0 nova_compute[265391]: 2025-09-30 18:19:57.296 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tapca4e45b7-a4, col_values=(('qos', UUID('84145521-92e3-4b41-83e2-1de977780c23')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:19:57 compute-0 nova_compute[265391]: 2025-09-30 18:19:57.297 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tapca4e45b7-a4, col_values=(('external_ids', {'iface-id': 'ca4e45b7-a42b-4e47-80d8-749194caf98a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7e:6b:bd', 'vm-uuid': 'cbbd84c1-d174-40d7-be54-3123704f0e0b'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:19:57 compute-0 nova_compute[265391]: 2025-09-30 18:19:57.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:19:57 compute-0 NetworkManager[45059]: <info>  [1759256397.3007] manager: (tapca4e45b7-a4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Sep 30 18:19:57 compute-0 nova_compute[265391]: 2025-09-30 18:19:57.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:19:57 compute-0 nova_compute[265391]: 2025-09-30 18:19:57.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:19:57 compute-0 nova_compute[265391]: 2025-09-30 18:19:57.309 2 INFO os_vif [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:6b:bd,bridge_name='br-int',has_traffic_filtering=True,id=ca4e45b7-a42b-4e47-80d8-749194caf98a,network=Network(443be7ca-f628-4a45-95b6-620d37172d7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca4e45b7-a4')
Sep 30 18:19:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:19:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3492562645' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:19:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:19:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3492562645' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:19:57 compute-0 podman[316042]: 2025-09-30 18:19:57.53466203 +0000 UTC m=+0.063048789 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Sep 30 18:19:57 compute-0 podman[316041]: 2025-09-30 18:19:57.601303392 +0000 UTC m=+0.133740097 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Sep 30 18:19:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:19:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:19:57.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:19:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:57 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390004170 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:19:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:19:58.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:19:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1240: 353 pgs: 353 active+clean; 88 MiB data, 254 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:19:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3492562645' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:19:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3492562645' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:19:58 compute-0 unix_chkpwd[316092]: password check failed for user (root)
Sep 30 18:19:58 compute-0 sshd-session[316036]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107  user=root
Sep 30 18:19:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:19:58] "GET /metrics HTTP/1.1" 200 46631 "" "Prometheus/2.51.0"
Sep 30 18:19:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:19:58] "GET /metrics HTTP/1.1" 200 46631 "" "Prometheus/2.51.0"
Sep 30 18:19:58 compute-0 nova_compute[265391]: 2025-09-30 18:19:58.858 2 DEBUG nova.virt.libvirt.driver [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:19:58 compute-0 nova_compute[265391]: 2025-09-30 18:19:58.858 2 DEBUG nova.virt.libvirt.driver [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:19:58 compute-0 nova_compute[265391]: 2025-09-30 18:19:58.859 2 DEBUG nova.virt.libvirt.driver [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] No VIF found with MAC fa:16:3e:7e:6b:bd, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Sep 30 18:19:58 compute-0 nova_compute[265391]: 2025-09-30 18:19:58.860 2 INFO nova.virt.libvirt.driver [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Using config drive
Sep 30 18:19:58 compute-0 nova_compute[265391]: 2025-09-30 18:19:58.901 2 DEBUG nova.storage.rbd_utils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] rbd image cbbd84c1-d174-40d7-be54-3123704f0e0b_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:19:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:59 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03980029e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:59 compute-0 ceph-mon[73755]: pgmap v1240: 353 pgs: 353 active+clean; 88 MiB data, 254 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:19:59 compute-0 nova_compute[265391]: 2025-09-30 18:19:59.417 2 WARNING neutronclient.v2_0.client [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:19:59 compute-0 nova_compute[265391]: 2025-09-30 18:19:59.560 2 INFO nova.virt.libvirt.driver [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Creating config drive at /var/lib/nova/instances/cbbd84c1-d174-40d7-be54-3123704f0e0b/disk.config
Sep 30 18:19:59 compute-0 nova_compute[265391]: 2025-09-30 18:19:59.571 2 DEBUG oslo_concurrency.processutils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cbbd84c1-d174-40d7-be54-3123704f0e0b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmp3yysfga4 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:19:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:19:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:19:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:19:59.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:19:59 compute-0 nova_compute[265391]: 2025-09-30 18:19:59.723 2 DEBUG oslo_concurrency.processutils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cbbd84c1-d174-40d7-be54-3123704f0e0b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmp3yysfga4" returned: 0 in 0.152s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:19:59 compute-0 podman[276673]: time="2025-09-30T18:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:19:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:19:59 compute-0 nova_compute[265391]: 2025-09-30 18:19:59.765 2 DEBUG nova.storage.rbd_utils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] rbd image cbbd84c1-d174-40d7-be54-3123704f0e0b_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:19:59 compute-0 nova_compute[265391]: 2025-09-30 18:19:59.771 2 DEBUG oslo_concurrency.processutils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cbbd84c1-d174-40d7-be54-3123704f0e0b/disk.config cbbd84c1-d174-40d7-be54-3123704f0e0b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:19:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10274 "" "Go-http-client/1.1"
Sep 30 18:19:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:19:59 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4002140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:19:59 compute-0 nova_compute[265391]: 2025-09-30 18:19:59.975 2 DEBUG oslo_concurrency.processutils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cbbd84c1-d174-40d7-be54-3123704f0e0b/disk.config cbbd84c1-d174-40d7-be54-3123704f0e0b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:19:59 compute-0 nova_compute[265391]: 2025-09-30 18:19:59.976 2 INFO nova.virt.libvirt.driver [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Deleting local config drive /var/lib/nova/instances/cbbd84c1-d174-40d7-be54-3123704f0e0b/disk.config because it was imported into RBD.
Sep 30 18:20:00 compute-0 ceph-mon[73755]: log_channel(cluster) log [WRN] : overall HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore
Sep 30 18:20:00 compute-0 kernel: tapca4e45b7-a4: entered promiscuous mode
Sep 30 18:20:00 compute-0 NetworkManager[45059]: <info>  [1759256400.0449] manager: (tapca4e45b7-a4): new Tun device (/org/freedesktop/NetworkManager/Devices/52)
Sep 30 18:20:00 compute-0 ovn_controller[156242]: 2025-09-30T18:20:00Z|00113|binding|INFO|Claiming lport ca4e45b7-a42b-4e47-80d8-749194caf98a for this chassis.
Sep 30 18:20:00 compute-0 nova_compute[265391]: 2025-09-30 18:20:00.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:00 compute-0 ovn_controller[156242]: 2025-09-30T18:20:00Z|00114|binding|INFO|ca4e45b7-a42b-4e47-80d8-749194caf98a: Claiming fa:16:3e:7e:6b:bd 10.100.0.7
Sep 30 18:20:00 compute-0 nova_compute[265391]: 2025-09-30 18:20:00.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:00.061 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:6b:bd 10.100.0.7'], port_security=['fa:16:3e:7e:6b:bd 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'cbbd84c1-d174-40d7-be54-3123704f0e0b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-443be7ca-f628-4a45-95b6-620d37172d7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'af4ef07c582847419a03275af50c6ffc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '518a9c00-28f9-47ab-a122-e672192eedea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=96eb21b8-879c-4e72-963b-37e37ae3d0c5, chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=ca4e45b7-a42b-4e47-80d8-749194caf98a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:00.062 166158 INFO neutron.agent.ovn.metadata.agent [-] Port ca4e45b7-a42b-4e47-80d8-749194caf98a in datapath 443be7ca-f628-4a45-95b6-620d37172d7b bound to our chassis
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:00.063 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 443be7ca-f628-4a45-95b6-620d37172d7b
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:00.080 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[359482bf-d19a-4dee-bf9c-c1ee101d7d41]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:00.081 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap443be7ca-f1 in ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:00.083 292173 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap443be7ca-f0 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:00.083 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[0a1d91d5-5270-4574-9513-67425ebd2f56]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:00.084 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[f44a6a29-7817-4dbe-87b9-a4fe80e3ccce]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:20:00 compute-0 systemd-machined[219917]: New machine qemu-9-instance-0000000c.
Sep 30 18:20:00 compute-0 systemd-udevd[316167]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:00.099 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[5a13122c-39f3-4517-9c1a-219b1e79b776]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:20:00 compute-0 NetworkManager[45059]: <info>  [1759256400.1087] device (tapca4e45b7-a4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 18:20:00 compute-0 NetworkManager[45059]: <info>  [1759256400.1098] device (tapca4e45b7-a4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Sep 30 18:20:00 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-0000000c.
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:00.117 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[c9452433-5392-4539-a4fc-30f07baf2ff7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:20:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:20:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:20:00.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:20:00 compute-0 nova_compute[265391]: 2025-09-30 18:20:00.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:00 compute-0 ovn_controller[156242]: 2025-09-30T18:20:00Z|00115|binding|INFO|Setting lport ca4e45b7-a42b-4e47-80d8-749194caf98a ovn-installed in OVS
Sep 30 18:20:00 compute-0 ovn_controller[156242]: 2025-09-30T18:20:00Z|00116|binding|INFO|Setting lport ca4e45b7-a42b-4e47-80d8-749194caf98a up in Southbound
Sep 30 18:20:00 compute-0 nova_compute[265391]: 2025-09-30 18:20:00.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:00.156 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[2887c511-c574-4d58-b6ee-533814e3a249]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:00.160 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[4816e629-d234-4938-b0b5-d1268daa6574]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:20:00 compute-0 NetworkManager[45059]: <info>  [1759256400.1621] manager: (tap443be7ca-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/53)
Sep 30 18:20:00 compute-0 systemd-udevd[316171]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:20:00 compute-0 nova_compute[265391]: 2025-09-30 18:20:00.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:00.202 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[392f19b3-82d8-405c-b52a-f771720e81ca]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:00.205 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[94ae3990-0015-4e2e-8dd5-87a9570763d3]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:20:00 compute-0 NetworkManager[45059]: <info>  [1759256400.2306] device (tap443be7ca-f0): carrier: link connected
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:00.237 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[ebce9b7c-88b1-498f-98af-0ef286150047]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:20:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1241: 353 pgs: 353 active+clean; 88 MiB data, 254 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:00.258 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[543e5c5b-bc55-4eee-a625-7b89740f0f08]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap443be7ca-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:7f:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499892, 'reachable_time': 31043, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316199, 'error': None, 'target': 'ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:00.278 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[e9221d42-b725-41da-b4a8-8bff55aedf1e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec4:7f4d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499892, 'tstamp': 499892}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316200, 'error': None, 'target': 'ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:00.303 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[75b3dd51-2c36-4636-919d-a89d9ed3cf81]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap443be7ca-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:7f:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499892, 'reachable_time': 31043, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316201, 'error': None, 'target': 'ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:20:00 compute-0 ceph-mon[73755]: overall HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore
Sep 30 18:20:00 compute-0 nova_compute[265391]: 2025-09-30 18:20:00.332 2 DEBUG nova.compute.manager [req-1a220a21-2871-4f7e-95fb-00e34969158b req-7dc3d0e8-e22f-4b92-8c51-f09fe070ea8b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Received event network-vif-plugged-ca4e45b7-a42b-4e47-80d8-749194caf98a external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:20:00 compute-0 nova_compute[265391]: 2025-09-30 18:20:00.333 2 DEBUG oslo_concurrency.lockutils [req-1a220a21-2871-4f7e-95fb-00e34969158b req-7dc3d0e8-e22f-4b92-8c51-f09fe070ea8b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "cbbd84c1-d174-40d7-be54-3123704f0e0b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:20:00 compute-0 nova_compute[265391]: 2025-09-30 18:20:00.333 2 DEBUG oslo_concurrency.lockutils [req-1a220a21-2871-4f7e-95fb-00e34969158b req-7dc3d0e8-e22f-4b92-8c51-f09fe070ea8b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "cbbd84c1-d174-40d7-be54-3123704f0e0b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:20:00 compute-0 nova_compute[265391]: 2025-09-30 18:20:00.333 2 DEBUG oslo_concurrency.lockutils [req-1a220a21-2871-4f7e-95fb-00e34969158b req-7dc3d0e8-e22f-4b92-8c51-f09fe070ea8b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "cbbd84c1-d174-40d7-be54-3123704f0e0b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:20:00 compute-0 nova_compute[265391]: 2025-09-30 18:20:00.333 2 DEBUG nova.compute.manager [req-1a220a21-2871-4f7e-95fb-00e34969158b req-7dc3d0e8-e22f-4b92-8c51-f09fe070ea8b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Processing event network-vif-plugged-ca4e45b7-a42b-4e47-80d8-749194caf98a _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:00.344 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[99e017fe-f8b4-4109-bd3e-7dd0a54ab9bb]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:00.414 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[57226780-ac98-479f-adcf-47ab8ffd2897]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:00.415 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap443be7ca-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:00.416 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:00.417 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap443be7ca-f0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:20:00 compute-0 nova_compute[265391]: 2025-09-30 18:20:00.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:00 compute-0 NetworkManager[45059]: <info>  [1759256400.4198] manager: (tap443be7ca-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Sep 30 18:20:00 compute-0 kernel: tap443be7ca-f0: entered promiscuous mode
Sep 30 18:20:00 compute-0 nova_compute[265391]: 2025-09-30 18:20:00.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:00.424 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap443be7ca-f0, col_values=(('external_ids', {'iface-id': '031d2cff-b142-4423-ba99-772183b7a667'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:20:00 compute-0 nova_compute[265391]: 2025-09-30 18:20:00.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:00 compute-0 ovn_controller[156242]: 2025-09-30T18:20:00Z|00117|binding|INFO|Releasing lport 031d2cff-b142-4423-ba99-772183b7a667 from this chassis (sb_readonly=0)
Sep 30 18:20:00 compute-0 nova_compute[265391]: 2025-09-30 18:20:00.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:00.443 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[90af262a-d9ac-4cd7-a801-3444929c648c]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:00.444 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/443be7ca-f628-4a45-95b6-620d37172d7b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/443be7ca-f628-4a45-95b6-620d37172d7b.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:00.444 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/443be7ca-f628-4a45-95b6-620d37172d7b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/443be7ca-f628-4a45-95b6-620d37172d7b.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:00.444 166158 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for 443be7ca-f628-4a45-95b6-620d37172d7b disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:00.444 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/443be7ca-f628-4a45-95b6-620d37172d7b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/443be7ca-f628-4a45-95b6-620d37172d7b.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:00.444 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[b0213d72-4d68-4bb7-91a7-64e75adab2c7]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:00.445 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/443be7ca-f628-4a45-95b6-620d37172d7b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/443be7ca-f628-4a45-95b6-620d37172d7b.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:00.446 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[e3092bfa-1c72-45e0-9c0e-1c230d5428ab]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:00.446 166158 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: global
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]:     log         /dev/log local0 debug
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]:     log-tag     haproxy-metadata-proxy-443be7ca-f628-4a45-95b6-620d37172d7b
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]:     user        root
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]:     group       root
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]:     maxconn     1024
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]:     pidfile     /var/lib/neutron/external/pids/443be7ca-f628-4a45-95b6-620d37172d7b.pid.haproxy
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]:     daemon
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: defaults
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]:     log global
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]:     mode http
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]:     option httplog
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]:     option dontlognull
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]:     option http-server-close
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]:     option forwardfor
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]:     retries                 3
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]:     timeout http-request    30s
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]:     timeout connect         30s
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]:     timeout client          32s
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]:     timeout server          32s
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]:     timeout http-keep-alive 30s
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: listen listener
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]:     bind 169.254.169.254:80
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]:     
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]:     server metadata /var/lib/neutron/metadata_proxy
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]:     http-request add-header X-OVN-Network-ID 443be7ca-f628-4a45-95b6-620d37172d7b
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Sep 30 18:20:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:00.446 166158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b', 'env', 'PROCESS_TAG=haproxy-443be7ca-f628-4a45-95b6-620d37172d7b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/443be7ca-f628-4a45-95b6-620d37172d7b.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Sep 30 18:20:00 compute-0 sshd-session[316036]: Failed password for root from 14.225.220.107 port 59530 ssh2
Sep 30 18:20:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:20:00 compute-0 podman[316275]: 2025-09-30 18:20:00.902030118 +0000 UTC m=+0.075653017 container create 103b01bdf4342bd3cc2ccef06d8d243d6042b302ce88ef931c8fb9b351336f67 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 18:20:00 compute-0 podman[316275]: 2025-09-30 18:20:00.853561369 +0000 UTC m=+0.027184298 image pull 8925e336c8a9c8e5b40d98e4715ad60992d6f21ad0a398170a686c34f922c024 38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Sep 30 18:20:00 compute-0 systemd[1]: Started libpod-conmon-103b01bdf4342bd3cc2ccef06d8d243d6042b302ce88ef931c8fb9b351336f67.scope.
Sep 30 18:20:00 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:20:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/292c7d43b6e9cb949f8a8579c52bb67a026ce99614bf818beb8f37547d4e02c3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Sep 30 18:20:01 compute-0 podman[316275]: 2025-09-30 18:20:01.011219436 +0000 UTC m=+0.184842355 container init 103b01bdf4342bd3cc2ccef06d8d243d6042b302ce88ef931c8fb9b351336f67 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=watcher_latest, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20250930)
Sep 30 18:20:01 compute-0 podman[316275]: 2025-09-30 18:20:01.017251423 +0000 UTC m=+0.190874322 container start 103b01bdf4342bd3cc2ccef06d8d243d6042b302ce88ef931c8fb9b351336f67 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b, tcib_build_tag=watcher_latest, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, tcib_managed=true)
Sep 30 18:20:01 compute-0 neutron-haproxy-ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b[316290]: [NOTICE]   (316294) : New worker (316296) forked
Sep 30 18:20:01 compute-0 neutron-haproxy-ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b[316290]: [NOTICE]   (316294) : Loading success.
Sep 30 18:20:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:01 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004540 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:01 compute-0 nova_compute[265391]: 2025-09-30 18:20:01.086 2 DEBUG nova.compute.manager [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:20:01 compute-0 nova_compute[265391]: 2025-09-30 18:20:01.088 2 DEBUG nova.virt.libvirt.driver [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Sep 30 18:20:01 compute-0 nova_compute[265391]: 2025-09-30 18:20:01.093 2 INFO nova.virt.libvirt.driver [-] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Instance spawned successfully.
Sep 30 18:20:01 compute-0 nova_compute[265391]: 2025-09-30 18:20:01.093 2 DEBUG nova.virt.libvirt.driver [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Sep 30 18:20:01 compute-0 ceph-mon[73755]: pgmap v1241: 353 pgs: 353 active+clean; 88 MiB data, 254 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:20:01 compute-0 openstack_network_exporter[279566]: ERROR   18:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:20:01 compute-0 openstack_network_exporter[279566]: ERROR   18:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:20:01 compute-0 openstack_network_exporter[279566]: ERROR   18:20:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:20:01 compute-0 openstack_network_exporter[279566]: ERROR   18:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:20:01 compute-0 openstack_network_exporter[279566]: ERROR   18:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:20:01 compute-0 podman[316306]: 2025-09-30 18:20:01.534561477 +0000 UTC m=+0.075340779 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20250930, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Sep 30 18:20:01 compute-0 nova_compute[265391]: 2025-09-30 18:20:01.608 2 DEBUG nova.virt.libvirt.driver [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:20:01 compute-0 nova_compute[265391]: 2025-09-30 18:20:01.608 2 DEBUG nova.virt.libvirt.driver [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:20:01 compute-0 nova_compute[265391]: 2025-09-30 18:20:01.608 2 DEBUG nova.virt.libvirt.driver [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:20:01 compute-0 nova_compute[265391]: 2025-09-30 18:20:01.609 2 DEBUG nova.virt.libvirt.driver [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:20:01 compute-0 nova_compute[265391]: 2025-09-30 18:20:01.609 2 DEBUG nova.virt.libvirt.driver [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:20:01 compute-0 nova_compute[265391]: 2025-09-30 18:20:01.609 2 DEBUG nova.virt.libvirt.driver [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:20:01 compute-0 sshd-session[316036]: Received disconnect from 14.225.220.107 port 59530:11: Bye Bye [preauth]
Sep 30 18:20:01 compute-0 sshd-session[316036]: Disconnected from authenticating user root 14.225.220.107 port 59530 [preauth]
Sep 30 18:20:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:20:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:20:01.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:20:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:01 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390004170 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:02 compute-0 nova_compute[265391]: 2025-09-30 18:20:02.118 2 INFO nova.compute.manager [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Took 11.15 seconds to spawn the instance on the hypervisor.
Sep 30 18:20:02 compute-0 nova_compute[265391]: 2025-09-30 18:20:02.118 2 DEBUG nova.compute.manager [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Sep 30 18:20:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:20:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:20:02.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:20:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1242: 353 pgs: 353 active+clean; 88 MiB data, 254 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:20:02 compute-0 nova_compute[265391]: 2025-09-30 18:20:02.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:02 compute-0 ceph-mon[73755]: pgmap v1242: 353 pgs: 353 active+clean; 88 MiB data, 254 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:20:02 compute-0 nova_compute[265391]: 2025-09-30 18:20:02.409 2 DEBUG nova.compute.manager [req-3c9506ab-940d-40ba-a8f8-e70015d5d62a req-71445010-394e-4a23-8865-5d0833168124 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Received event network-vif-plugged-ca4e45b7-a42b-4e47-80d8-749194caf98a external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:20:02 compute-0 nova_compute[265391]: 2025-09-30 18:20:02.409 2 DEBUG oslo_concurrency.lockutils [req-3c9506ab-940d-40ba-a8f8-e70015d5d62a req-71445010-394e-4a23-8865-5d0833168124 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "cbbd84c1-d174-40d7-be54-3123704f0e0b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:20:02 compute-0 nova_compute[265391]: 2025-09-30 18:20:02.410 2 DEBUG oslo_concurrency.lockutils [req-3c9506ab-940d-40ba-a8f8-e70015d5d62a req-71445010-394e-4a23-8865-5d0833168124 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "cbbd84c1-d174-40d7-be54-3123704f0e0b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:20:02 compute-0 nova_compute[265391]: 2025-09-30 18:20:02.410 2 DEBUG oslo_concurrency.lockutils [req-3c9506ab-940d-40ba-a8f8-e70015d5d62a req-71445010-394e-4a23-8865-5d0833168124 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "cbbd84c1-d174-40d7-be54-3123704f0e0b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:20:02 compute-0 nova_compute[265391]: 2025-09-30 18:20:02.410 2 DEBUG nova.compute.manager [req-3c9506ab-940d-40ba-a8f8-e70015d5d62a req-71445010-394e-4a23-8865-5d0833168124 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] No waiting events found dispatching network-vif-plugged-ca4e45b7-a42b-4e47-80d8-749194caf98a pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:20:02 compute-0 nova_compute[265391]: 2025-09-30 18:20:02.410 2 WARNING nova.compute.manager [req-3c9506ab-940d-40ba-a8f8-e70015d5d62a req-71445010-394e-4a23-8865-5d0833168124 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Received unexpected event network-vif-plugged-ca4e45b7-a42b-4e47-80d8-749194caf98a for instance with vm_state active and task_state None.
Sep 30 18:20:02 compute-0 nova_compute[265391]: 2025-09-30 18:20:02.660 2 INFO nova.compute.manager [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Took 16.85 seconds to build instance.
Sep 30 18:20:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:03 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390004170 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:03 compute-0 nova_compute[265391]: 2025-09-30 18:20:03.168 2 DEBUG oslo_concurrency.lockutils [None req-ee62fd23-64b3-4ea6-a0df-c1d60b722d4b 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Lock "cbbd84c1-d174-40d7-be54-3123704f0e0b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.374s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:20:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:20:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:20:03.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:20:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:20:03.733Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:20:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:03 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4002140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:20:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:20:04.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:20:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1243: 353 pgs: 353 active+clean; 88 MiB data, 254 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 18:20:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:05 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004540 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:05 compute-0 nova_compute[265391]: 2025-09-30 18:20:05.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:05 compute-0 ceph-mon[73755]: pgmap v1243: 353 pgs: 353 active+clean; 88 MiB data, 254 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 18:20:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:20:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:20:05.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:20:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:20:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:05 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390004170 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:20:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:20:06.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:20:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1244: 353 pgs: 353 active+clean; 88 MiB data, 254 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:20:06 compute-0 ceph-mon[73755]: pgmap v1244: 353 pgs: 353 active+clean; 88 MiB data, 254 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:20:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:07 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390004170 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:20:07.212Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:20:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:20:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:20:07 compute-0 nova_compute[265391]: 2025-09-30 18:20:07.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:20:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:20:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:20:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:20:07.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005229063904082829 of space, bias 1.0, pg target 0.10458127808165658 quantized to 32 (current 32)
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:20:07
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['images', 'default.rgw.meta', '.nfs', 'backups', 'volumes', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'default.rgw.control', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data']
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:20:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:20:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:07 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4002140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:20:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:20:08.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:20:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1245: 353 pgs: 353 active+clean; 88 MiB data, 254 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:20:08 compute-0 ceph-mon[73755]: pgmap v1245: 353 pgs: 353 active+clean; 88 MiB data, 254 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:20:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:20:08] "GET /metrics HTTP/1.1" 200 46644 "" "Prometheus/2.51.0"
Sep 30 18:20:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:20:08] "GET /metrics HTTP/1.1" 200 46644 "" "Prometheus/2.51.0"
Sep 30 18:20:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:09 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004540 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:09 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/4170402657' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:20:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:20:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:20:09.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:20:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:09 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390004170 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:20:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:20:10.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:20:10 compute-0 nova_compute[265391]: 2025-09-30 18:20:10.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1246: 353 pgs: 353 active+clean; 88 MiB data, 254 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:20:10 compute-0 ceph-mon[73755]: pgmap v1246: 353 pgs: 353 active+clean; 88 MiB data, 254 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:20:10 compute-0 podman[316336]: 2025-09-30 18:20:10.54036327 +0000 UTC m=+0.074650411 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20250930, container_name=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Sep 30 18:20:10 compute-0 podman[316335]: 2025-09-30 18:20:10.542468425 +0000 UTC m=+0.077824174 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, container_name=multipathd)
Sep 30 18:20:10 compute-0 podman[316337]: 2025-09-30 18:20:10.559069106 +0000 UTC m=+0.081025366 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, config_id=edpm, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Sep 30 18:20:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:20:10 compute-0 radosgw[96126]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Sep 30 18:20:10 compute-0 radosgw[96126]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Sep 30 18:20:10 compute-0 radosgw[96126]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Sep 30 18:20:11 compute-0 radosgw[96126]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Sep 30 18:20:11 compute-0 radosgw[96126]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Sep 30 18:20:11 compute-0 radosgw[96126]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Sep 30 18:20:11 compute-0 radosgw[96126]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Sep 30 18:20:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:11 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390004170 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:20:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:20:11.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:20:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:11 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390004170 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:12 compute-0 sudo[316395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:20:12 compute-0 sudo[316395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:20:12 compute-0 sudo[316395]: pam_unix(sudo:session): session closed for user root
Sep 30 18:20:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.003000078s ======
Sep 30 18:20:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:20:12.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000078s
Sep 30 18:20:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1247: 353 pgs: 353 active+clean; 88 MiB data, 254 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:20:12 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Sep 30 18:20:12 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:20:12.320115) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 18:20:12 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Sep 30 18:20:12 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256412320149, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 456, "num_deletes": 251, "total_data_size": 381210, "memory_usage": 389928, "flush_reason": "Manual Compaction"}
Sep 30 18:20:12 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Sep 30 18:20:12 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256412324871, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 375774, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33234, "largest_seqno": 33689, "table_properties": {"data_size": 373204, "index_size": 606, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6304, "raw_average_key_size": 18, "raw_value_size": 368112, "raw_average_value_size": 1098, "num_data_blocks": 27, "num_entries": 335, "num_filter_entries": 335, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759256391, "oldest_key_time": 1759256391, "file_creation_time": 1759256412, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:20:12 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 4799 microseconds, and 2425 cpu microseconds.
Sep 30 18:20:12 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:20:12 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:20:12.324908) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 375774 bytes OK
Sep 30 18:20:12 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:20:12.324933) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Sep 30 18:20:12 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:20:12.326576) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Sep 30 18:20:12 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:20:12.326595) EVENT_LOG_v1 {"time_micros": 1759256412326589, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 18:20:12 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:20:12.326615) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 18:20:12 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 378487, prev total WAL file size 378487, number of live WAL files 2.
Sep 30 18:20:12 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:20:12 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:20:12.327090) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Sep 30 18:20:12 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 18:20:12 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(366KB)], [71(11MB)]
Sep 30 18:20:12 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256412327120, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 12463707, "oldest_snapshot_seqno": -1}
Sep 30 18:20:12 compute-0 nova_compute[265391]: 2025-09-30 18:20:12.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:12 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 5928 keys, 10451596 bytes, temperature: kUnknown
Sep 30 18:20:12 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256412395768, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 10451596, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10415219, "index_size": 20459, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14853, "raw_key_size": 153839, "raw_average_key_size": 25, "raw_value_size": 10311504, "raw_average_value_size": 1739, "num_data_blocks": 815, "num_entries": 5928, "num_filter_entries": 5928, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759256412, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:20:12 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:20:12 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:20:12.396031) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 10451596 bytes
Sep 30 18:20:12 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:20:12.397468) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.3 rd, 152.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 11.5 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(61.0) write-amplify(27.8) OK, records in: 6438, records dropped: 510 output_compression: NoCompression
Sep 30 18:20:12 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:20:12.397489) EVENT_LOG_v1 {"time_micros": 1759256412397480, "job": 40, "event": "compaction_finished", "compaction_time_micros": 68730, "compaction_time_cpu_micros": 37021, "output_level": 6, "num_output_files": 1, "total_output_size": 10451596, "num_input_records": 6438, "num_output_records": 5928, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 18:20:12 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:20:12 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256412397718, "job": 40, "event": "table_file_deletion", "file_number": 73}
Sep 30 18:20:12 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:20:12 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256412400918, "job": 40, "event": "table_file_deletion", "file_number": 71}
Sep 30 18:20:12 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:20:12.327002) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:20:12 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:20:12.400971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:20:12 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:20:12.400976) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:20:12 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:20:12.400978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:20:12 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:20:12.400980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:20:12 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:20:12.400982) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:20:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:13 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004540 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:13 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Sep 30 18:20:13 compute-0 ceph-mon[73755]: pgmap v1247: 353 pgs: 353 active+clean; 88 MiB data, 254 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:20:13 compute-0 ovn_controller[156242]: 2025-09-30T18:20:13Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7e:6b:bd 10.100.0.7
Sep 30 18:20:13 compute-0 ovn_controller[156242]: 2025-09-30T18:20:13Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7e:6b:bd 10.100.0.7
Sep 30 18:20:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:20:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:20:13.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:20:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:20:13.734Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:20:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:13 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840028c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:20:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:20:14.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:20:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1248: 353 pgs: 353 active+clean; 159 MiB data, 304 MiB used, 40 GiB / 40 GiB avail; 2.3 MiB/s rd, 3.8 MiB/s wr, 243 op/s
Sep 30 18:20:14 compute-0 ceph-mon[73755]: pgmap v1248: 353 pgs: 353 active+clean; 159 MiB data, 304 MiB used, 40 GiB / 40 GiB avail; 2.3 MiB/s rd, 3.8 MiB/s wr, 243 op/s
Sep 30 18:20:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:15 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4002140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:15 compute-0 nova_compute[265391]: 2025-09-30 18:20:15.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:20:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:20:15.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:20:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:20:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:15 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390004170 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:20:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:20:16.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:20:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1249: 353 pgs: 353 active+clean; 159 MiB data, 304 MiB used, 40 GiB / 40 GiB avail; 364 KiB/s rd, 3.8 MiB/s wr, 169 op/s
Sep 30 18:20:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:17 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004540 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:20:17.212Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:20:17 compute-0 ceph-mon[73755]: pgmap v1249: 353 pgs: 353 active+clean; 159 MiB data, 304 MiB used, 40 GiB / 40 GiB avail; 364 KiB/s rd, 3.8 MiB/s wr, 169 op/s
Sep 30 18:20:17 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3613281835' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:20:17 compute-0 nova_compute[265391]: 2025-09-30 18:20:17.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:20:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:20:17.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:20:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:17 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840028c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:20:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:20:18.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:20:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1250: 353 pgs: 353 active+clean; 159 MiB data, 304 MiB used, 40 GiB / 40 GiB avail; 364 KiB/s rd, 3.8 MiB/s wr, 170 op/s
Sep 30 18:20:18 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1623189252' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:20:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:20:18] "GET /metrics HTTP/1.1" 200 46644 "" "Prometheus/2.51.0"
Sep 30 18:20:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:20:18] "GET /metrics HTTP/1.1" 200 46644 "" "Prometheus/2.51.0"
Sep 30 18:20:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:19 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4002140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:19 compute-0 ceph-mon[73755]: pgmap v1250: 353 pgs: 353 active+clean; 159 MiB data, 304 MiB used, 40 GiB / 40 GiB avail; 364 KiB/s rd, 3.8 MiB/s wr, 170 op/s
Sep 30 18:20:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.002000052s ======
Sep 30 18:20:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:20:19.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Sep 30 18:20:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:19 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390004310 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:20:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:20:20.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:20:20 compute-0 nova_compute[265391]: 2025-09-30 18:20:20.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1251: 353 pgs: 353 active+clean; 167 MiB data, 305 MiB used, 40 GiB / 40 GiB avail; 407 KiB/s rd, 3.9 MiB/s wr, 199 op/s
Sep 30 18:20:20 compute-0 ceph-mon[73755]: pgmap v1251: 353 pgs: 353 active+clean; 167 MiB data, 305 MiB used, 40 GiB / 40 GiB avail; 407 KiB/s rd, 3.9 MiB/s wr, 199 op/s
Sep 30 18:20:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:20:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:21 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004540 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:20:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:20:21.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:20:21 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:21.712 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:20:21 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:21.713 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:20:21 compute-0 nova_compute[265391]: 2025-09-30 18:20:21.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:21 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:21.715 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:20:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:21 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840028c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:20:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:20:22.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:20:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1252: 353 pgs: 353 active+clean; 167 MiB data, 305 MiB used, 40 GiB / 40 GiB avail; 407 KiB/s rd, 3.9 MiB/s wr, 199 op/s
Sep 30 18:20:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:20:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:20:22 compute-0 nova_compute[265391]: 2025-09-30 18:20:22.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:22 compute-0 ceph-mon[73755]: pgmap v1252: 353 pgs: 353 active+clean; 167 MiB data, 305 MiB used, 40 GiB / 40 GiB avail; 407 KiB/s rd, 3.9 MiB/s wr, 199 op/s
Sep 30 18:20:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:20:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:23 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4002160 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:20:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:20:23.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:20:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:20:23.735Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:20:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:23 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390004310 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:20:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:20:24.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:20:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1253: 353 pgs: 353 active+clean; 167 MiB data, 309 MiB used, 40 GiB / 40 GiB avail; 684 KiB/s rd, 3.9 MiB/s wr, 219 op/s
Sep 30 18:20:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:25 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004540 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:25 compute-0 nova_compute[265391]: 2025-09-30 18:20:25.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:25 compute-0 ceph-mon[73755]: pgmap v1253: 353 pgs: 353 active+clean; 167 MiB data, 309 MiB used, 40 GiB / 40 GiB avail; 684 KiB/s rd, 3.9 MiB/s wr, 219 op/s
Sep 30 18:20:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:20:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:20:25.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:20:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:20:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:25 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840028c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.002000052s ======
Sep 30 18:20:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:20:26.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Sep 30 18:20:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1254: 353 pgs: 353 active+clean; 167 MiB data, 309 MiB used, 40 GiB / 40 GiB avail; 320 KiB/s rd, 112 KiB/s wr, 49 op/s
Sep 30 18:20:26 compute-0 ceph-mon[73755]: pgmap v1254: 353 pgs: 353 active+clean; 167 MiB data, 309 MiB used, 40 GiB / 40 GiB avail; 320 KiB/s rd, 112 KiB/s wr, 49 op/s
Sep 30 18:20:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:27 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840028c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:20:27.213Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:20:27 compute-0 nova_compute[265391]: 2025-09-30 18:20:27.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:20:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:20:27.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:20:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:27 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390004310 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:20:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:20:28.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:20:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1255: 353 pgs: 353 active+clean; 167 MiB data, 309 MiB used, 40 GiB / 40 GiB avail; 320 KiB/s rd, 112 KiB/s wr, 49 op/s
Sep 30 18:20:28 compute-0 podman[316439]: 2025-09-30 18:20:28.530311524 +0000 UTC m=+0.059725324 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:20:28 compute-0 podman[316438]: 2025-09-30 18:20:28.577984543 +0000 UTC m=+0.112209078 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Sep 30 18:20:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:20:28] "GET /metrics HTTP/1.1" 200 46646 "" "Prometheus/2.51.0"
Sep 30 18:20:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:20:28] "GET /metrics HTTP/1.1" 200 46646 "" "Prometheus/2.51.0"
Sep 30 18:20:28 compute-0 nova_compute[265391]: 2025-09-30 18:20:28.936 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:20:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:29 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390004310 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:29 compute-0 ceph-mon[73755]: pgmap v1255: 353 pgs: 353 active+clean; 167 MiB data, 309 MiB used, 40 GiB / 40 GiB avail; 320 KiB/s rd, 112 KiB/s wr, 49 op/s
Sep 30 18:20:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:20:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:20:29.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:20:29 compute-0 podman[276673]: time="2025-09-30T18:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:20:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:20:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10743 "" "Go-http-client/1.1"
Sep 30 18:20:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:29 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840028c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:20:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:20:30.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:20:30 compute-0 nova_compute[265391]: 2025-09-30 18:20:30.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1256: 353 pgs: 353 active+clean; 167 MiB data, 309 MiB used, 40 GiB / 40 GiB avail; 2.0 MiB/s rd, 112 KiB/s wr, 103 op/s
Sep 30 18:20:30 compute-0 ceph-mon[73755]: pgmap v1256: 353 pgs: 353 active+clean; 167 MiB data, 309 MiB used, 40 GiB / 40 GiB avail; 2.0 MiB/s rd, 112 KiB/s wr, 103 op/s
Sep 30 18:20:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:20:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:31 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b40021c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:31 compute-0 openstack_network_exporter[279566]: ERROR   18:20:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:20:31 compute-0 openstack_network_exporter[279566]: ERROR   18:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:20:31 compute-0 openstack_network_exporter[279566]: ERROR   18:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:20:31 compute-0 openstack_network_exporter[279566]: ERROR   18:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:20:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:20:31 compute-0 openstack_network_exporter[279566]: ERROR   18:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:20:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:20:31 compute-0 nova_compute[265391]: 2025-09-30 18:20:31.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:20:31 compute-0 nova_compute[265391]: 2025-09-30 18:20:31.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:20:31 compute-0 nova_compute[265391]: 2025-09-30 18:20:31.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:20:31 compute-0 nova_compute[265391]: 2025-09-30 18:20:31.429 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:20:31 compute-0 nova_compute[265391]: 2025-09-30 18:20:31.429 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:20:31 compute-0 nova_compute[265391]: 2025-09-30 18:20:31.510 2 DEBUG nova.compute.manager [None req-31439350-bf08-4f24-8e16-c686131de110 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Adding trait COMPUTE_STATUS_DISABLED to compute node resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 in placement. update_compute_provider_status /usr/lib/python3.12/site-packages/nova/compute/manager.py:635
Sep 30 18:20:31 compute-0 nova_compute[265391]: 2025-09-30 18:20:31.670 2 DEBUG nova.compute.provider_tree [None req-31439350-bf08-4f24-8e16-c686131de110 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Updating resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 generation from 17 to 19 during operation: update_traits _update_generation /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:164
Sep 30 18:20:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:20:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:20:31.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:20:31 compute-0 nova_compute[265391]: 2025-09-30 18:20:31.949 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:20:31 compute-0 nova_compute[265391]: 2025-09-30 18:20:31.950 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:20:31 compute-0 nova_compute[265391]: 2025-09-30 18:20:31.951 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:20:31 compute-0 nova_compute[265391]: 2025-09-30 18:20:31.951 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:20:31 compute-0 nova_compute[265391]: 2025-09-30 18:20:31.951 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:20:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:31 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004540 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:20:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:20:32.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:20:32 compute-0 sudo[316514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:20:32 compute-0 sudo[316514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:20:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1257: 353 pgs: 353 active+clean; 167 MiB data, 309 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 74 op/s
Sep 30 18:20:32 compute-0 sudo[316514]: pam_unix(sudo:session): session closed for user root
Sep 30 18:20:32 compute-0 podman[316538]: 2025-09-30 18:20:32.330475111 +0000 UTC m=+0.065615237 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:20:32 compute-0 nova_compute[265391]: 2025-09-30 18:20:32.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:20:32 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1890354514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:20:32 compute-0 nova_compute[265391]: 2025-09-30 18:20:32.446 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:20:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:33 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390004310 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:33 compute-0 ceph-mon[73755]: pgmap v1257: 353 pgs: 353 active+clean; 167 MiB data, 309 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 74 op/s
Sep 30 18:20:33 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1890354514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:20:33 compute-0 nova_compute[265391]: 2025-09-30 18:20:33.501 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:20:33 compute-0 nova_compute[265391]: 2025-09-30 18:20:33.501 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:20:33 compute-0 nova_compute[265391]: 2025-09-30 18:20:33.662 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:20:33 compute-0 nova_compute[265391]: 2025-09-30 18:20:33.666 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:20:33 compute-0 nova_compute[265391]: 2025-09-30 18:20:33.697 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.030s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:20:33 compute-0 nova_compute[265391]: 2025-09-30 18:20:33.698 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4193MB free_disk=39.925777435302734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:20:33 compute-0 nova_compute[265391]: 2025-09-30 18:20:33.698 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:20:33 compute-0 nova_compute[265391]: 2025-09-30 18:20:33.698 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:20:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:20:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:20:33.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:20:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:20:33.737Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:20:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:20:33.737Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:20:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:33 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840028c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:34 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Sep 30 18:20:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:20:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:20:34.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:20:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1258: 353 pgs: 353 active+clean; 167 MiB data, 309 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 75 op/s
Sep 30 18:20:34 compute-0 ceph-mon[73755]: pgmap v1258: 353 pgs: 353 active+clean; 167 MiB data, 309 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 75 op/s
Sep 30 18:20:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:35 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b40021e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:35 compute-0 nova_compute[265391]: 2025-09-30 18:20:35.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:35 compute-0 nova_compute[265391]: 2025-09-30 18:20:35.259 2 INFO nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Instance 75d9dfb1-2eb5-41de-bab1-af56e2a3c1a7 has allocations against this compute host but is not found in the database.
Sep 30 18:20:35 compute-0 nova_compute[265391]: 2025-09-30 18:20:35.259 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:20:35 compute-0 nova_compute[265391]: 2025-09-30 18:20:35.260 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=39GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:20:33 up  1:23,  0 user,  load average: 1.12, 0.84, 0.86\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_None': '1', 'num_os_type_None': '1', 'num_proj_af4ef07c582847419a03275af50c6ffc': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:20:35 compute-0 nova_compute[265391]: 2025-09-30 18:20:35.288 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:20:35 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/4144592548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:20:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:20:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:20:35.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:20:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:20:35 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/477551565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:20:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:20:35 compute-0 nova_compute[265391]: 2025-09-30 18:20:35.781 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:20:35 compute-0 nova_compute[265391]: 2025-09-30 18:20:35.791 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:20:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:35 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004540 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:20:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:20:36.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:20:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1259: 353 pgs: 353 active+clean; 167 MiB data, 309 MiB used, 40 GiB / 40 GiB avail; 1.6 MiB/s rd, 1023 B/s wr, 54 op/s
Sep 30 18:20:36 compute-0 nova_compute[265391]: 2025-09-30 18:20:36.299 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:20:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/477551565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:20:36 compute-0 ceph-mon[73755]: pgmap v1259: 353 pgs: 353 active+clean; 167 MiB data, 309 MiB used, 40 GiB / 40 GiB avail; 1.6 MiB/s rd, 1023 B/s wr, 54 op/s
Sep 30 18:20:36 compute-0 nova_compute[265391]: 2025-09-30 18:20:36.817 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:20:36 compute-0 nova_compute[265391]: 2025-09-30 18:20:36.817 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.119s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:20:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:37 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390004310 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:20:37.215Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:20:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:20:37.215Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:20:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:20:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:20:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:20:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:20:37 compute-0 nova_compute[265391]: 2025-09-30 18:20:37.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:20:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:20:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:20:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:20:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:20:37 compute-0 sudo[316587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:20:37 compute-0 sudo[316587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:20:37 compute-0 sudo[316587]: pam_unix(sudo:session): session closed for user root
Sep 30 18:20:37 compute-0 sudo[316613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:20:37 compute-0 sudo[316613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:20:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:20:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:20:37.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:20:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:37 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840028c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:38 compute-0 sudo[316613]: pam_unix(sudo:session): session closed for user root
Sep 30 18:20:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:20:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:20:38.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:20:38 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:20:38 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:20:38 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:20:38 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:20:38 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:20:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1260: 353 pgs: 353 active+clean; 167 MiB data, 309 MiB used, 40 GiB / 40 GiB avail; 1.6 MiB/s rd, 1023 B/s wr, 54 op/s
Sep 30 18:20:38 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:20:38 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:20:38 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:20:38 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:20:38 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:20:38 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:20:38 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:20:38 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:20:38 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:20:38 compute-0 sudo[316671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:20:38 compute-0 sudo[316671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:20:38 compute-0 sudo[316671]: pam_unix(sudo:session): session closed for user root
Sep 30 18:20:38 compute-0 sudo[316696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:20:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:20:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:20:38 compute-0 ceph-mon[73755]: pgmap v1260: 353 pgs: 353 active+clean; 167 MiB data, 309 MiB used, 40 GiB / 40 GiB avail; 1.6 MiB/s rd, 1023 B/s wr, 54 op/s
Sep 30 18:20:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:20:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:20:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:20:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:20:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:20:38 compute-0 sudo[316696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:20:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:20:38] "GET /metrics HTTP/1.1" 200 46648 "" "Prometheus/2.51.0"
Sep 30 18:20:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:20:38] "GET /metrics HTTP/1.1" 200 46648 "" "Prometheus/2.51.0"
Sep 30 18:20:38 compute-0 podman[316763]: 2025-09-30 18:20:38.979181682 +0000 UTC m=+0.074655521 container create 7d5f9e7f31f7fe20c58bf9044738d22ab02017f3f2e996cd29d1125c44306650 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hypatia, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Sep 30 18:20:39 compute-0 podman[316763]: 2025-09-30 18:20:38.93025152 +0000 UTC m=+0.025725339 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:20:39 compute-0 systemd[1]: Started libpod-conmon-7d5f9e7f31f7fe20c58bf9044738d22ab02017f3f2e996cd29d1125c44306650.scope.
Sep 30 18:20:39 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:20:39 compute-0 podman[316763]: 2025-09-30 18:20:39.093948965 +0000 UTC m=+0.189422784 container init 7d5f9e7f31f7fe20c58bf9044738d22ab02017f3f2e996cd29d1125c44306650 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hypatia, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Sep 30 18:20:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:39 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4002200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:39 compute-0 podman[316763]: 2025-09-30 18:20:39.10299334 +0000 UTC m=+0.198467139 container start 7d5f9e7f31f7fe20c58bf9044738d22ab02017f3f2e996cd29d1125c44306650 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hypatia, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 18:20:39 compute-0 podman[316763]: 2025-09-30 18:20:39.106643685 +0000 UTC m=+0.202117664 container attach 7d5f9e7f31f7fe20c58bf9044738d22ab02017f3f2e996cd29d1125c44306650 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hypatia, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:20:39 compute-0 systemd[1]: libpod-7d5f9e7f31f7fe20c58bf9044738d22ab02017f3f2e996cd29d1125c44306650.scope: Deactivated successfully.
Sep 30 18:20:39 compute-0 quizzical_hypatia[316780]: 167 167
Sep 30 18:20:39 compute-0 conmon[316780]: conmon 7d5f9e7f31f7fe20c58b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7d5f9e7f31f7fe20c58bf9044738d22ab02017f3f2e996cd29d1125c44306650.scope/container/memory.events
Sep 30 18:20:39 compute-0 podman[316763]: 2025-09-30 18:20:39.113448511 +0000 UTC m=+0.208922320 container died 7d5f9e7f31f7fe20c58bf9044738d22ab02017f3f2e996cd29d1125c44306650 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hypatia, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Sep 30 18:20:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-b83bd6a5bede2927dfd86661cfbb4abc95dd5924b408696efa530cc265f2ff1a-merged.mount: Deactivated successfully.
Sep 30 18:20:39 compute-0 podman[316763]: 2025-09-30 18:20:39.16342276 +0000 UTC m=+0.258896559 container remove 7d5f9e7f31f7fe20c58bf9044738d22ab02017f3f2e996cd29d1125c44306650 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hypatia, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:20:39 compute-0 systemd[1]: libpod-conmon-7d5f9e7f31f7fe20c58bf9044738d22ab02017f3f2e996cd29d1125c44306650.scope: Deactivated successfully.
Sep 30 18:20:39 compute-0 podman[316805]: 2025-09-30 18:20:39.392095504 +0000 UTC m=+0.062047754 container create e30241246ea9039438055261ade0cf959c055abb4a91737f3e9d69489a53e569 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Sep 30 18:20:39 compute-0 systemd[1]: Started libpod-conmon-e30241246ea9039438055261ade0cf959c055abb4a91737f3e9d69489a53e569.scope.
Sep 30 18:20:39 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2844711224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:20:39 compute-0 podman[316805]: 2025-09-30 18:20:39.373053359 +0000 UTC m=+0.043005639 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:20:39 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:20:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db39dfe08d429fa874bcef4514f077d2912e1f8b9c0014540997939acebec026/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:20:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db39dfe08d429fa874bcef4514f077d2912e1f8b9c0014540997939acebec026/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:20:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db39dfe08d429fa874bcef4514f077d2912e1f8b9c0014540997939acebec026/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:20:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db39dfe08d429fa874bcef4514f077d2912e1f8b9c0014540997939acebec026/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:20:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db39dfe08d429fa874bcef4514f077d2912e1f8b9c0014540997939acebec026/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:20:39 compute-0 podman[316805]: 2025-09-30 18:20:39.493115729 +0000 UTC m=+0.163067979 container init e30241246ea9039438055261ade0cf959c055abb4a91737f3e9d69489a53e569 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_golick, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 18:20:39 compute-0 podman[316805]: 2025-09-30 18:20:39.505240594 +0000 UTC m=+0.175192844 container start e30241246ea9039438055261ade0cf959c055abb4a91737f3e9d69489a53e569 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Sep 30 18:20:39 compute-0 podman[316805]: 2025-09-30 18:20:39.510198953 +0000 UTC m=+0.180151203 container attach e30241246ea9039438055261ade0cf959c055abb4a91737f3e9d69489a53e569 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_golick, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:20:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:20:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:20:39.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:20:39 compute-0 nova_compute[265391]: 2025-09-30 18:20:39.816 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:20:39 compute-0 nova_compute[265391]: 2025-09-30 18:20:39.818 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:20:39 compute-0 nova_compute[265391]: 2025-09-30 18:20:39.818 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:20:39 compute-0 nova_compute[265391]: 2025-09-30 18:20:39.824 2 DEBUG nova.virt.libvirt.driver [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Check if temp file /var/lib/nova/instances/tmpczj9za1h exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10968
Sep 30 18:20:39 compute-0 nova_compute[265391]: 2025-09-30 18:20:39.830 2 DEBUG nova.compute.manager [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpczj9za1h',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='cbbd84c1-d174-40d7-be54-3123704f0e0b',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.12/site-packages/nova/compute/manager.py:9294
Sep 30 18:20:39 compute-0 hardcore_golick[316822]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:20:39 compute-0 hardcore_golick[316822]: --> All data devices are unavailable
Sep 30 18:20:39 compute-0 systemd[1]: libpod-e30241246ea9039438055261ade0cf959c055abb4a91737f3e9d69489a53e569.scope: Deactivated successfully.
Sep 30 18:20:39 compute-0 podman[316805]: 2025-09-30 18:20:39.875930549 +0000 UTC m=+0.545882799 container died e30241246ea9039438055261ade0cf959c055abb4a91737f3e9d69489a53e569 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_golick, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 18:20:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-db39dfe08d429fa874bcef4514f077d2912e1f8b9c0014540997939acebec026-merged.mount: Deactivated successfully.
Sep 30 18:20:39 compute-0 podman[316805]: 2025-09-30 18:20:39.92559679 +0000 UTC m=+0.595549040 container remove e30241246ea9039438055261ade0cf959c055abb4a91737f3e9d69489a53e569 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_golick, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default)
Sep 30 18:20:39 compute-0 systemd[1]: libpod-conmon-e30241246ea9039438055261ade0cf959c055abb4a91737f3e9d69489a53e569.scope: Deactivated successfully.
Sep 30 18:20:39 compute-0 sudo[316696]: pam_unix(sudo:session): session closed for user root
Sep 30 18:20:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:39 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4002200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:40 compute-0 sudo[316855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:20:40 compute-0 sudo[316855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:20:40 compute-0 sudo[316855]: pam_unix(sudo:session): session closed for user root
Sep 30 18:20:40 compute-0 sudo[316880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:20:40 compute-0 sudo[316880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:20:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:20:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:20:40.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:20:40 compute-0 nova_compute[265391]: 2025-09-30 18:20:40.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1261: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 115 op/s
Sep 30 18:20:40 compute-0 ceph-mon[73755]: pgmap v1261: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 115 op/s
Sep 30 18:20:40 compute-0 podman[316947]: 2025-09-30 18:20:40.588594561 +0000 UTC m=+0.058731947 container create 7916c3b9c612c5bbf090f3af38ff5ab0691553330d5f8fc5d9258ae154df3bcc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_merkle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Sep 30 18:20:40 compute-0 systemd[1]: Started libpod-conmon-7916c3b9c612c5bbf090f3af38ff5ab0691553330d5f8fc5d9258ae154df3bcc.scope.
Sep 30 18:20:40 compute-0 podman[316947]: 2025-09-30 18:20:40.565325857 +0000 UTC m=+0.035463333 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:20:40 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:20:40 compute-0 podman[316947]: 2025-09-30 18:20:40.703834656 +0000 UTC m=+0.173972052 container init 7916c3b9c612c5bbf090f3af38ff5ab0691553330d5f8fc5d9258ae154df3bcc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_merkle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True)
Sep 30 18:20:40 compute-0 podman[316947]: 2025-09-30 18:20:40.719143814 +0000 UTC m=+0.189281200 container start 7916c3b9c612c5bbf090f3af38ff5ab0691553330d5f8fc5d9258ae154df3bcc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_merkle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Sep 30 18:20:40 compute-0 podman[316947]: 2025-09-30 18:20:40.722741408 +0000 UTC m=+0.192878844 container attach 7916c3b9c612c5bbf090f3af38ff5ab0691553330d5f8fc5d9258ae154df3bcc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:20:40 compute-0 affectionate_merkle[316966]: 167 167
Sep 30 18:20:40 compute-0 podman[316947]: 2025-09-30 18:20:40.730617922 +0000 UTC m=+0.200755308 container died 7916c3b9c612c5bbf090f3af38ff5ab0691553330d5f8fc5d9258ae154df3bcc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:20:40 compute-0 systemd[1]: libpod-7916c3b9c612c5bbf090f3af38ff5ab0691553330d5f8fc5d9258ae154df3bcc.scope: Deactivated successfully.
Sep 30 18:20:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-806803b0662a88f73c78a5cd700464f3563790b40f515ae4f614c2ae917117b2-merged.mount: Deactivated successfully.
Sep 30 18:20:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:20:40 compute-0 podman[316947]: 2025-09-30 18:20:40.776050352 +0000 UTC m=+0.246187738 container remove 7916c3b9c612c5bbf090f3af38ff5ab0691553330d5f8fc5d9258ae154df3bcc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_merkle, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:20:40 compute-0 podman[316961]: 2025-09-30 18:20:40.783820924 +0000 UTC m=+0.116826516 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Sep 30 18:20:40 compute-0 systemd[1]: libpod-conmon-7916c3b9c612c5bbf090f3af38ff5ab0691553330d5f8fc5d9258ae154df3bcc.scope: Deactivated successfully.
Sep 30 18:20:40 compute-0 podman[316965]: 2025-09-30 18:20:40.794267266 +0000 UTC m=+0.127435122 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, maintainer=Red Hat, Inc., release=1755695350, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm)
Sep 30 18:20:40 compute-0 podman[316964]: 2025-09-30 18:20:40.802144171 +0000 UTC m=+0.135320777 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Sep 30 18:20:40 compute-0 podman[317044]: 2025-09-30 18:20:40.966180764 +0000 UTC m=+0.052049004 container create cb83c2a1f24a549b0f479808267091168e633e5ca42b0e2724b18b0737b429ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Sep 30 18:20:41 compute-0 systemd[1]: Started libpod-conmon-cb83c2a1f24a549b0f479808267091168e633e5ca42b0e2724b18b0737b429ab.scope.
Sep 30 18:20:41 compute-0 podman[317044]: 2025-09-30 18:20:40.944573352 +0000 UTC m=+0.030441612 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:20:41 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73399ae3748a6c30e4b2e5f5ed7d7249e3ad02487c185388cbb8418003979531/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73399ae3748a6c30e4b2e5f5ed7d7249e3ad02487c185388cbb8418003979531/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73399ae3748a6c30e4b2e5f5ed7d7249e3ad02487c185388cbb8418003979531/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73399ae3748a6c30e4b2e5f5ed7d7249e3ad02487c185388cbb8418003979531/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:20:41 compute-0 podman[317044]: 2025-09-30 18:20:41.064206512 +0000 UTC m=+0.150074812 container init cb83c2a1f24a549b0f479808267091168e633e5ca42b0e2724b18b0737b429ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_cori, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 18:20:41 compute-0 podman[317044]: 2025-09-30 18:20:41.073537444 +0000 UTC m=+0.159405684 container start cb83c2a1f24a549b0f479808267091168e633e5ca42b0e2724b18b0737b429ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 18:20:41 compute-0 podman[317044]: 2025-09-30 18:20:41.077681022 +0000 UTC m=+0.163549272 container attach cb83c2a1f24a549b0f479808267091168e633e5ca42b0e2724b18b0737b429ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_cori, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 18:20:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:41 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4002200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:41 compute-0 epic_cori[317061]: {
Sep 30 18:20:41 compute-0 epic_cori[317061]:     "0": [
Sep 30 18:20:41 compute-0 epic_cori[317061]:         {
Sep 30 18:20:41 compute-0 epic_cori[317061]:             "devices": [
Sep 30 18:20:41 compute-0 epic_cori[317061]:                 "/dev/loop3"
Sep 30 18:20:41 compute-0 epic_cori[317061]:             ],
Sep 30 18:20:41 compute-0 epic_cori[317061]:             "lv_name": "ceph_lv0",
Sep 30 18:20:41 compute-0 epic_cori[317061]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:20:41 compute-0 epic_cori[317061]:             "lv_size": "21470642176",
Sep 30 18:20:41 compute-0 epic_cori[317061]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:20:41 compute-0 epic_cori[317061]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:20:41 compute-0 epic_cori[317061]:             "name": "ceph_lv0",
Sep 30 18:20:41 compute-0 epic_cori[317061]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:20:41 compute-0 epic_cori[317061]:             "tags": {
Sep 30 18:20:41 compute-0 epic_cori[317061]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:20:41 compute-0 epic_cori[317061]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:20:41 compute-0 epic_cori[317061]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:20:41 compute-0 epic_cori[317061]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:20:41 compute-0 epic_cori[317061]:                 "ceph.cluster_name": "ceph",
Sep 30 18:20:41 compute-0 epic_cori[317061]:                 "ceph.crush_device_class": "",
Sep 30 18:20:41 compute-0 epic_cori[317061]:                 "ceph.encrypted": "0",
Sep 30 18:20:41 compute-0 epic_cori[317061]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:20:41 compute-0 epic_cori[317061]:                 "ceph.osd_id": "0",
Sep 30 18:20:41 compute-0 epic_cori[317061]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:20:41 compute-0 epic_cori[317061]:                 "ceph.type": "block",
Sep 30 18:20:41 compute-0 epic_cori[317061]:                 "ceph.vdo": "0",
Sep 30 18:20:41 compute-0 epic_cori[317061]:                 "ceph.with_tpm": "0"
Sep 30 18:20:41 compute-0 epic_cori[317061]:             },
Sep 30 18:20:41 compute-0 epic_cori[317061]:             "type": "block",
Sep 30 18:20:41 compute-0 epic_cori[317061]:             "vg_name": "ceph_vg0"
Sep 30 18:20:41 compute-0 epic_cori[317061]:         }
Sep 30 18:20:41 compute-0 epic_cori[317061]:     ]
Sep 30 18:20:41 compute-0 epic_cori[317061]: }
Sep 30 18:20:41 compute-0 systemd[1]: libpod-cb83c2a1f24a549b0f479808267091168e633e5ca42b0e2724b18b0737b429ab.scope: Deactivated successfully.
Sep 30 18:20:41 compute-0 podman[317044]: 2025-09-30 18:20:41.429240949 +0000 UTC m=+0.515109219 container died cb83c2a1f24a549b0f479808267091168e633e5ca42b0e2724b18b0737b429ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_cori, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:20:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-73399ae3748a6c30e4b2e5f5ed7d7249e3ad02487c185388cbb8418003979531-merged.mount: Deactivated successfully.
Sep 30 18:20:41 compute-0 podman[317044]: 2025-09-30 18:20:41.488427537 +0000 UTC m=+0.574295797 container remove cb83c2a1f24a549b0f479808267091168e633e5ca42b0e2724b18b0737b429ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Sep 30 18:20:41 compute-0 systemd[1]: libpod-conmon-cb83c2a1f24a549b0f479808267091168e633e5ca42b0e2724b18b0737b429ab.scope: Deactivated successfully.
Sep 30 18:20:41 compute-0 sudo[316880]: pam_unix(sudo:session): session closed for user root
Sep 30 18:20:41 compute-0 sudo[317083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:20:41 compute-0 sudo[317083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:20:41 compute-0 sudo[317083]: pam_unix(sudo:session): session closed for user root
Sep 30 18:20:41 compute-0 sudo[317108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:20:41 compute-0 sudo[317108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:20:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:20:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:20:41.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:20:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:41 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840028c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:42 compute-0 podman[317173]: 2025-09-30 18:20:42.164371665 +0000 UTC m=+0.065896403 container create 81a8cd55ef037a421728ac195c2d4e08a2661ef92c6a8ed0f9309ce968d71229 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid)
Sep 30 18:20:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:20:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:20:42.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:20:42 compute-0 systemd[1]: Started libpod-conmon-81a8cd55ef037a421728ac195c2d4e08a2661ef92c6a8ed0f9309ce968d71229.scope.
Sep 30 18:20:42 compute-0 podman[317173]: 2025-09-30 18:20:42.14105899 +0000 UTC m=+0.042583748 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:20:42 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:20:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1262: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Sep 30 18:20:42 compute-0 podman[317173]: 2025-09-30 18:20:42.267006643 +0000 UTC m=+0.168531401 container init 81a8cd55ef037a421728ac195c2d4e08a2661ef92c6a8ed0f9309ce968d71229 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_aryabhata, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:20:42 compute-0 podman[317173]: 2025-09-30 18:20:42.277010033 +0000 UTC m=+0.178534771 container start 81a8cd55ef037a421728ac195c2d4e08a2661ef92c6a8ed0f9309ce968d71229 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Sep 30 18:20:42 compute-0 podman[317173]: 2025-09-30 18:20:42.280653848 +0000 UTC m=+0.182178596 container attach 81a8cd55ef037a421728ac195c2d4e08a2661ef92c6a8ed0f9309ce968d71229 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_aryabhata, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:20:42 compute-0 awesome_aryabhata[317189]: 167 167
Sep 30 18:20:42 compute-0 systemd[1]: libpod-81a8cd55ef037a421728ac195c2d4e08a2661ef92c6a8ed0f9309ce968d71229.scope: Deactivated successfully.
Sep 30 18:20:42 compute-0 podman[317173]: 2025-09-30 18:20:42.286055408 +0000 UTC m=+0.187580156 container died 81a8cd55ef037a421728ac195c2d4e08a2661ef92c6a8ed0f9309ce968d71229 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_aryabhata, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Sep 30 18:20:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-f66411efc0f7753c46e2126014ea7b1b32ea7f298ff6bafa4ad2f56ae0bf7c1b-merged.mount: Deactivated successfully.
Sep 30 18:20:42 compute-0 podman[317173]: 2025-09-30 18:20:42.325722089 +0000 UTC m=+0.227246827 container remove 81a8cd55ef037a421728ac195c2d4e08a2661ef92c6a8ed0f9309ce968d71229 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:20:42 compute-0 systemd[1]: libpod-conmon-81a8cd55ef037a421728ac195c2d4e08a2661ef92c6a8ed0f9309ce968d71229.scope: Deactivated successfully.
Sep 30 18:20:42 compute-0 nova_compute[265391]: 2025-09-30 18:20:42.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:42 compute-0 podman[317213]: 2025-09-30 18:20:42.508155331 +0000 UTC m=+0.049324983 container create 77563b00f489c923a6402278a196baf6a06a74213cec03ab960a2fd7b1093c71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hellman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:20:42 compute-0 systemd[1]: Started libpod-conmon-77563b00f489c923a6402278a196baf6a06a74213cec03ab960a2fd7b1093c71.scope.
Sep 30 18:20:42 compute-0 podman[317213]: 2025-09-30 18:20:42.488536011 +0000 UTC m=+0.029705683 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:20:42 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:20:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91a44e0bea65ee3ceb504fd311449855dbe0eb9ac3901b7f1ffc4bc0040a7987/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:20:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91a44e0bea65ee3ceb504fd311449855dbe0eb9ac3901b7f1ffc4bc0040a7987/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:20:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91a44e0bea65ee3ceb504fd311449855dbe0eb9ac3901b7f1ffc4bc0040a7987/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:20:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91a44e0bea65ee3ceb504fd311449855dbe0eb9ac3901b7f1ffc4bc0040a7987/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:20:42 compute-0 podman[317213]: 2025-09-30 18:20:42.606190789 +0000 UTC m=+0.147360461 container init 77563b00f489c923a6402278a196baf6a06a74213cec03ab960a2fd7b1093c71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:20:42 compute-0 podman[317213]: 2025-09-30 18:20:42.614092704 +0000 UTC m=+0.155262356 container start 77563b00f489c923a6402278a196baf6a06a74213cec03ab960a2fd7b1093c71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hellman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:20:42 compute-0 podman[317213]: 2025-09-30 18:20:42.617668997 +0000 UTC m=+0.158838699 container attach 77563b00f489c923a6402278a196baf6a06a74213cec03ab960a2fd7b1093c71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hellman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 18:20:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:43 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03980029e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:43 compute-0 ceph-mon[73755]: pgmap v1262: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Sep 30 18:20:43 compute-0 lvm[317304]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:20:43 compute-0 lvm[317304]: VG ceph_vg0 finished
Sep 30 18:20:43 compute-0 elegant_hellman[317230]: {}
Sep 30 18:20:43 compute-0 systemd[1]: libpod-77563b00f489c923a6402278a196baf6a06a74213cec03ab960a2fd7b1093c71.scope: Deactivated successfully.
Sep 30 18:20:43 compute-0 systemd[1]: libpod-77563b00f489c923a6402278a196baf6a06a74213cec03ab960a2fd7b1093c71.scope: Consumed 1.224s CPU time.
Sep 30 18:20:43 compute-0 podman[317213]: 2025-09-30 18:20:43.40231919 +0000 UTC m=+0.943488882 container died 77563b00f489c923a6402278a196baf6a06a74213cec03ab960a2fd7b1093c71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid)
Sep 30 18:20:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-91a44e0bea65ee3ceb504fd311449855dbe0eb9ac3901b7f1ffc4bc0040a7987-merged.mount: Deactivated successfully.
Sep 30 18:20:43 compute-0 podman[317213]: 2025-09-30 18:20:43.473070719 +0000 UTC m=+1.014240401 container remove 77563b00f489c923a6402278a196baf6a06a74213cec03ab960a2fd7b1093c71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:20:43 compute-0 systemd[1]: libpod-conmon-77563b00f489c923a6402278a196baf6a06a74213cec03ab960a2fd7b1093c71.scope: Deactivated successfully.
Sep 30 18:20:43 compute-0 sudo[317108]: pam_unix(sudo:session): session closed for user root
Sep 30 18:20:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:20:43 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:20:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:20:43 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:20:43 compute-0 sudo[317321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:20:43 compute-0 sudo[317321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:20:43 compute-0 sudo[317321]: pam_unix(sudo:session): session closed for user root
Sep 30 18:20:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:20:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:20:43.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:20:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:20:43.739Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:20:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:43 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4002200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:20:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:20:44.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:20:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1263: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Sep 30 18:20:44 compute-0 nova_compute[265391]: 2025-09-30 18:20:44.289 2 DEBUG nova.compute.manager [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Preparing to wait for external event network-vif-plugged-ca4e45b7-a42b-4e47-80d8-749194caf98a prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:20:44 compute-0 nova_compute[265391]: 2025-09-30 18:20:44.289 2 DEBUG oslo_concurrency.lockutils [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "cbbd84c1-d174-40d7-be54-3123704f0e0b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:20:44 compute-0 nova_compute[265391]: 2025-09-30 18:20:44.289 2 DEBUG oslo_concurrency.lockutils [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "cbbd84c1-d174-40d7-be54-3123704f0e0b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:20:44 compute-0 nova_compute[265391]: 2025-09-30 18:20:44.290 2 DEBUG oslo_concurrency.lockutils [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "cbbd84c1-d174-40d7-be54-3123704f0e0b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:20:44 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:20:44 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:20:44 compute-0 ceph-mon[73755]: pgmap v1263: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Sep 30 18:20:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:45 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4002200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:45 compute-0 nova_compute[265391]: 2025-09-30 18:20:45.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:20:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:20:45.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:20:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:20:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:45 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840028c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:20:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:20:46.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:20:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1264: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 321 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Sep 30 18:20:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:47 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03980029e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:20:47.216Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:20:47 compute-0 ceph-mon[73755]: pgmap v1264: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 321 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Sep 30 18:20:47 compute-0 nova_compute[265391]: 2025-09-30 18:20:47.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:20:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:20:47.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:20:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:47 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4002200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:20:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:20:48.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:20:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1265: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 321 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Sep 30 18:20:48 compute-0 ceph-mon[73755]: pgmap v1265: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 321 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Sep 30 18:20:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:20:48] "GET /metrics HTTP/1.1" 200 46648 "" "Prometheus/2.51.0"
Sep 30 18:20:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:20:48] "GET /metrics HTTP/1.1" 200 46648 "" "Prometheus/2.51.0"
Sep 30 18:20:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:49 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4002200 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:49 compute-0 nova_compute[265391]: 2025-09-30 18:20:49.153 2 DEBUG nova.compute.manager [req-a0808d4c-7717-45c0-bb37-2bc8899a74d3 req-dbc755ec-c692-46d9-b331-b86798eca75b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Received event network-vif-unplugged-ca4e45b7-a42b-4e47-80d8-749194caf98a external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:20:49 compute-0 nova_compute[265391]: 2025-09-30 18:20:49.154 2 DEBUG oslo_concurrency.lockutils [req-a0808d4c-7717-45c0-bb37-2bc8899a74d3 req-dbc755ec-c692-46d9-b331-b86798eca75b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "cbbd84c1-d174-40d7-be54-3123704f0e0b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:20:49 compute-0 nova_compute[265391]: 2025-09-30 18:20:49.154 2 DEBUG oslo_concurrency.lockutils [req-a0808d4c-7717-45c0-bb37-2bc8899a74d3 req-dbc755ec-c692-46d9-b331-b86798eca75b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "cbbd84c1-d174-40d7-be54-3123704f0e0b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:20:49 compute-0 nova_compute[265391]: 2025-09-30 18:20:49.154 2 DEBUG oslo_concurrency.lockutils [req-a0808d4c-7717-45c0-bb37-2bc8899a74d3 req-dbc755ec-c692-46d9-b331-b86798eca75b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "cbbd84c1-d174-40d7-be54-3123704f0e0b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:20:49 compute-0 nova_compute[265391]: 2025-09-30 18:20:49.154 2 DEBUG nova.compute.manager [req-a0808d4c-7717-45c0-bb37-2bc8899a74d3 req-dbc755ec-c692-46d9-b331-b86798eca75b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] No event matching network-vif-unplugged-ca4e45b7-a42b-4e47-80d8-749194caf98a in dict_keys([('network-vif-plugged', 'ca4e45b7-a42b-4e47-80d8-749194caf98a')]) pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:349
Sep 30 18:20:49 compute-0 nova_compute[265391]: 2025-09-30 18:20:49.154 2 DEBUG nova.compute.manager [req-a0808d4c-7717-45c0-bb37-2bc8899a74d3 req-dbc755ec-c692-46d9-b331-b86798eca75b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Received event network-vif-unplugged-ca4e45b7-a42b-4e47-80d8-749194caf98a for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:20:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:20:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:20:49.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:20:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:49 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840028c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:20:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:20:50.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:20:50 compute-0 nova_compute[265391]: 2025-09-30 18:20:50.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1266: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 321 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Sep 30 18:20:50 compute-0 nova_compute[265391]: 2025-09-30 18:20:50.308 2 INFO nova.compute.manager [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Took 6.02 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Sep 30 18:20:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:20:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:51 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03980029e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:51 compute-0 nova_compute[265391]: 2025-09-30 18:20:51.204 2 DEBUG nova.compute.manager [req-ba1b137b-a810-4442-bfd2-9480a2d65bc3 req-69cfd470-aeda-4a40-bed8-70d7c29deaee 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Received event network-vif-plugged-ca4e45b7-a42b-4e47-80d8-749194caf98a external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:20:51 compute-0 nova_compute[265391]: 2025-09-30 18:20:51.204 2 DEBUG oslo_concurrency.lockutils [req-ba1b137b-a810-4442-bfd2-9480a2d65bc3 req-69cfd470-aeda-4a40-bed8-70d7c29deaee 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "cbbd84c1-d174-40d7-be54-3123704f0e0b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:20:51 compute-0 nova_compute[265391]: 2025-09-30 18:20:51.205 2 DEBUG oslo_concurrency.lockutils [req-ba1b137b-a810-4442-bfd2-9480a2d65bc3 req-69cfd470-aeda-4a40-bed8-70d7c29deaee 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "cbbd84c1-d174-40d7-be54-3123704f0e0b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:20:51 compute-0 nova_compute[265391]: 2025-09-30 18:20:51.205 2 DEBUG oslo_concurrency.lockutils [req-ba1b137b-a810-4442-bfd2-9480a2d65bc3 req-69cfd470-aeda-4a40-bed8-70d7c29deaee 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "cbbd84c1-d174-40d7-be54-3123704f0e0b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:20:51 compute-0 nova_compute[265391]: 2025-09-30 18:20:51.205 2 DEBUG nova.compute.manager [req-ba1b137b-a810-4442-bfd2-9480a2d65bc3 req-69cfd470-aeda-4a40-bed8-70d7c29deaee 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Processing event network-vif-plugged-ca4e45b7-a42b-4e47-80d8-749194caf98a _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:20:51 compute-0 nova_compute[265391]: 2025-09-30 18:20:51.205 2 DEBUG nova.compute.manager [req-ba1b137b-a810-4442-bfd2-9480a2d65bc3 req-69cfd470-aeda-4a40-bed8-70d7c29deaee 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Received event network-changed-ca4e45b7-a42b-4e47-80d8-749194caf98a external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:20:51 compute-0 nova_compute[265391]: 2025-09-30 18:20:51.206 2 DEBUG nova.compute.manager [req-ba1b137b-a810-4442-bfd2-9480a2d65bc3 req-69cfd470-aeda-4a40-bed8-70d7c29deaee 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Refreshing instance network info cache due to event network-changed-ca4e45b7-a42b-4e47-80d8-749194caf98a. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:20:51 compute-0 nova_compute[265391]: 2025-09-30 18:20:51.206 2 DEBUG oslo_concurrency.lockutils [req-ba1b137b-a810-4442-bfd2-9480a2d65bc3 req-69cfd470-aeda-4a40-bed8-70d7c29deaee 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-cbbd84c1-d174-40d7-be54-3123704f0e0b" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:20:51 compute-0 nova_compute[265391]: 2025-09-30 18:20:51.206 2 DEBUG oslo_concurrency.lockutils [req-ba1b137b-a810-4442-bfd2-9480a2d65bc3 req-69cfd470-aeda-4a40-bed8-70d7c29deaee 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-cbbd84c1-d174-40d7-be54-3123704f0e0b" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:20:51 compute-0 nova_compute[265391]: 2025-09-30 18:20:51.206 2 DEBUG nova.network.neutron [req-ba1b137b-a810-4442-bfd2-9480a2d65bc3 req-69cfd470-aeda-4a40-bed8-70d7c29deaee 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Refreshing network info cache for port ca4e45b7-a42b-4e47-80d8-749194caf98a _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:20:51 compute-0 nova_compute[265391]: 2025-09-30 18:20:51.208 2 DEBUG nova.compute.manager [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:20:51 compute-0 ceph-mon[73755]: pgmap v1266: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 321 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Sep 30 18:20:51 compute-0 ovn_controller[156242]: 2025-09-30T18:20:51Z|00118|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Sep 30 18:20:51 compute-0 nova_compute[265391]: 2025-09-30 18:20:51.713 2 WARNING neutronclient.v2_0.client [req-ba1b137b-a810-4442-bfd2-9480a2d65bc3 req-69cfd470-aeda-4a40-bed8-70d7c29deaee 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:20:51 compute-0 nova_compute[265391]: 2025-09-30 18:20:51.721 2 DEBUG nova.compute.manager [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpczj9za1h',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='cbbd84c1-d174-40d7-be54-3123704f0e0b',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(75d9dfb1-2eb5-41de-bab1-af56e2a3c1a7),old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9659
Sep 30 18:20:51 compute-0 nova_compute[265391]: 2025-09-30 18:20:51.727 2 DEBUG nova.objects.instance [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lazy-loading 'migration_context' on Instance uuid cbbd84c1-d174-40d7-be54-3123704f0e0b obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:20:51 compute-0 nova_compute[265391]: 2025-09-30 18:20:51.729 2 DEBUG nova.virt.libvirt.driver [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Starting monitoring of live migration _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11543
Sep 30 18:20:51 compute-0 nova_compute[265391]: 2025-09-30 18:20:51.731 2 DEBUG nova.virt.libvirt.driver [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Sep 30 18:20:51 compute-0 nova_compute[265391]: 2025-09-30 18:20:51.731 2 DEBUG nova.virt.libvirt.driver [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Sep 30 18:20:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:20:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:20:51.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:20:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:51 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03980029e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:20:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:20:52.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:20:52 compute-0 nova_compute[265391]: 2025-09-30 18:20:52.233 2 DEBUG nova.virt.libvirt.driver [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Sep 30 18:20:52 compute-0 nova_compute[265391]: 2025-09-30 18:20:52.234 2 DEBUG nova.virt.libvirt.driver [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Sep 30 18:20:52 compute-0 nova_compute[265391]: 2025-09-30 18:20:52.243 2 DEBUG nova.virt.libvirt.vif [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:19:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-1407640112',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-1407640112',id=12,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:20:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='af4ef07c582847419a03275af50c6ffc',ramdisk_id='',reservation_id='r-mj5oq7ov',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-1597156537',owner_user_name='tempest-TestExecuteHostMaintenanceStrategy-1597156537-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:20:02Z,user_data=None,user_id='57be6c3d2e0d431dae0127ac659de1e0',uuid=cbbd84c1-d174-40d7-be54-3123704f0e0b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ca4e45b7-a42b-4e47-80d8-749194caf98a", "address": "fa:16:3e:7e:6b:bd", "network": {"id": "443be7ca-f628-4a45-95b6-620d37172d7b", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-1888091317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "269f60e72ce1460a98da519466c89da6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapca4e45b7-a4", "ovs_interfaceid": "ca4e45b7-a42b-4e47-80d8-749194caf98a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
Sep 30 18:20:52 compute-0 nova_compute[265391]: /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:20:52 compute-0 nova_compute[265391]: 2025-09-30 18:20:52.243 2 DEBUG nova.network.os_vif_util [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "ca4e45b7-a42b-4e47-80d8-749194caf98a", "address": "fa:16:3e:7e:6b:bd", "network": {"id": "443be7ca-f628-4a45-95b6-620d37172d7b", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-1888091317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "269f60e72ce1460a98da519466c89da6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapca4e45b7-a4", "ovs_interfaceid": "ca4e45b7-a42b-4e47-80d8-749194caf98a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:20:52 compute-0 nova_compute[265391]: 2025-09-30 18:20:52.244 2 DEBUG nova.network.os_vif_util [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:6b:bd,bridge_name='br-int',has_traffic_filtering=True,id=ca4e45b7-a42b-4e47-80d8-749194caf98a,network=Network(443be7ca-f628-4a45-95b6-620d37172d7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca4e45b7-a4') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:20:52 compute-0 nova_compute[265391]: 2025-09-30 18:20:52.244 2 DEBUG nova.virt.libvirt.migration [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Updating guest XML with vif config: <interface type="ethernet">
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <mac address="fa:16:3e:7e:6b:bd"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <model type="virtio"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <mtu size="1442"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <target dev="tapca4e45b7-a4"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]: </interface>
Sep 30 18:20:52 compute-0 nova_compute[265391]:  _update_vif_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:534
Sep 30 18:20:52 compute-0 nova_compute[265391]: 2025-09-30 18:20:52.245 2 DEBUG nova.virt.libvirt.migration [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _remove_cpu_shared_set_xml input xml=<domain type="kvm">
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <name>instance-0000000c</name>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <uuid>cbbd84c1-d174-40d7-be54-3123704f0e0b</uuid>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteHostMaintenanceStrategy-server-1407640112</nova:name>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:19:55</nova:creationTime>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:20:52 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:20:52 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:user uuid="57be6c3d2e0d431dae0127ac659de1e0">tempest-TestExecuteHostMaintenanceStrategy-1597156537-project-admin</nova:user>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:project uuid="af4ef07c582847419a03275af50c6ffc">tempest-TestExecuteHostMaintenanceStrategy-1597156537</nova:project>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:port uuid="ca4e45b7-a42b-4e47-80d8-749194caf98a">
Sep 30 18:20:52 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <system>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <entry name="serial">cbbd84c1-d174-40d7-be54-3123704f0e0b</entry>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <entry name="uuid">cbbd84c1-d174-40d7-be54-3123704f0e0b</entry>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </system>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <os>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   </os>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <features>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   </features>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/cbbd84c1-d174-40d7-be54-3123704f0e0b_disk">
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       </source>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/cbbd84c1-d174-40d7-be54-3123704f0e0b_disk.config">
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       </source>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:7e:6b:bd"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target dev="tapca4e45b7-a4"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/cbbd84c1-d174-40d7-be54-3123704f0e0b/console.log" append="off"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       </target>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/cbbd84c1-d174-40d7-be54-3123704f0e0b/console.log" append="off"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </console>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </input>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <video>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </video>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]: </domain>
Sep 30 18:20:52 compute-0 nova_compute[265391]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:241
Sep 30 18:20:52 compute-0 nova_compute[265391]: 2025-09-30 18:20:52.251 2 DEBUG nova.virt.libvirt.migration [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _remove_cpu_shared_set_xml output xml=<domain type="kvm">
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <name>instance-0000000c</name>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <uuid>cbbd84c1-d174-40d7-be54-3123704f0e0b</uuid>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteHostMaintenanceStrategy-server-1407640112</nova:name>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:19:55</nova:creationTime>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:20:52 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:20:52 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:user uuid="57be6c3d2e0d431dae0127ac659de1e0">tempest-TestExecuteHostMaintenanceStrategy-1597156537-project-admin</nova:user>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:project uuid="af4ef07c582847419a03275af50c6ffc">tempest-TestExecuteHostMaintenanceStrategy-1597156537</nova:project>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:port uuid="ca4e45b7-a42b-4e47-80d8-749194caf98a">
Sep 30 18:20:52 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <system>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <entry name="serial">cbbd84c1-d174-40d7-be54-3123704f0e0b</entry>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <entry name="uuid">cbbd84c1-d174-40d7-be54-3123704f0e0b</entry>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </system>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <os>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   </os>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <features>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   </features>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/cbbd84c1-d174-40d7-be54-3123704f0e0b_disk">
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       </source>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/cbbd84c1-d174-40d7-be54-3123704f0e0b_disk.config">
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       </source>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:7e:6b:bd"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target dev="tapca4e45b7-a4"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/cbbd84c1-d174-40d7-be54-3123704f0e0b/console.log" append="off"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       </target>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/cbbd84c1-d174-40d7-be54-3123704f0e0b/console.log" append="off"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </console>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </input>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <video>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </video>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]: </domain>
Sep 30 18:20:52 compute-0 nova_compute[265391]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:250
Sep 30 18:20:52 compute-0 nova_compute[265391]: 2025-09-30 18:20:52.252 2 DEBUG nova.virt.libvirt.migration [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _update_pci_xml output xml=<domain type="kvm">
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <name>instance-0000000c</name>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <uuid>cbbd84c1-d174-40d7-be54-3123704f0e0b</uuid>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteHostMaintenanceStrategy-server-1407640112</nova:name>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:19:55</nova:creationTime>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:20:52 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:20:52 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:user uuid="57be6c3d2e0d431dae0127ac659de1e0">tempest-TestExecuteHostMaintenanceStrategy-1597156537-project-admin</nova:user>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:project uuid="af4ef07c582847419a03275af50c6ffc">tempest-TestExecuteHostMaintenanceStrategy-1597156537</nova:project>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <nova:port uuid="ca4e45b7-a42b-4e47-80d8-749194caf98a">
Sep 30 18:20:52 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <system>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <entry name="serial">cbbd84c1-d174-40d7-be54-3123704f0e0b</entry>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <entry name="uuid">cbbd84c1-d174-40d7-be54-3123704f0e0b</entry>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </system>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <os>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   </os>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <features>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   </features>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/cbbd84c1-d174-40d7-be54-3123704f0e0b_disk">
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       </source>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/cbbd84c1-d174-40d7-be54-3123704f0e0b_disk.config">
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       </source>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:7e:6b:bd"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target dev="tapca4e45b7-a4"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/cbbd84c1-d174-40d7-be54-3123704f0e0b/console.log" append="off"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:20:52 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       </target>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/cbbd84c1-d174-40d7-be54-3123704f0e0b/console.log" append="off"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </console>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </input>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <video>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </video>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:20:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:20:52 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:20:52 compute-0 nova_compute[265391]: </domain>
Sep 30 18:20:52 compute-0 nova_compute[265391]:  _update_pci_dev_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:166
Sep 30 18:20:52 compute-0 nova_compute[265391]: 2025-09-30 18:20:52.252 2 DEBUG nova.virt.libvirt.driver [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] About to invoke the migrate API _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11175
Sep 30 18:20:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1267: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 14 KiB/s wr, 1 op/s
Sep 30 18:20:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:20:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:20:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:20:52 compute-0 sudo[317355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:20:52 compute-0 sudo[317355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:20:52 compute-0 sudo[317355]: pam_unix(sudo:session): session closed for user root
Sep 30 18:20:52 compute-0 nova_compute[265391]: 2025-09-30 18:20:52.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:52 compute-0 nova_compute[265391]: 2025-09-30 18:20:52.515 2 WARNING neutronclient.v2_0.client [req-ba1b137b-a810-4442-bfd2-9480a2d65bc3 req-69cfd470-aeda-4a40-bed8-70d7c29deaee 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:20:52 compute-0 nova_compute[265391]: 2025-09-30 18:20:52.686 2 DEBUG nova.network.neutron [req-ba1b137b-a810-4442-bfd2-9480a2d65bc3 req-69cfd470-aeda-4a40-bed8-70d7c29deaee 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Updated VIF entry in instance network info cache for port ca4e45b7-a42b-4e47-80d8-749194caf98a. _build_network_info_model /usr/lib/python3.12/site-packages/nova/network/neutron.py:3542
Sep 30 18:20:52 compute-0 nova_compute[265391]: 2025-09-30 18:20:52.686 2 DEBUG nova.network.neutron [req-ba1b137b-a810-4442-bfd2-9480a2d65bc3 req-69cfd470-aeda-4a40-bed8-70d7c29deaee 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Updating instance_info_cache with network_info: [{"id": "ca4e45b7-a42b-4e47-80d8-749194caf98a", "address": "fa:16:3e:7e:6b:bd", "network": {"id": "443be7ca-f628-4a45-95b6-620d37172d7b", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-1888091317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "269f60e72ce1460a98da519466c89da6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca4e45b7-a4", "ovs_interfaceid": "ca4e45b7-a42b-4e47-80d8-749194caf98a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:20:52 compute-0 nova_compute[265391]: 2025-09-30 18:20:52.737 2 DEBUG nova.virt.libvirt.migration [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Current None elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Sep 30 18:20:52 compute-0 nova_compute[265391]: 2025-09-30 18:20:52.738 2 INFO nova.virt.libvirt.migration [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Increasing downtime to 50 ms after 0 sec elapsed time
Sep 30 18:20:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:53 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004cc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:53 compute-0 nova_compute[265391]: 2025-09-30 18:20:53.194 2 DEBUG oslo_concurrency.lockutils [req-ba1b137b-a810-4442-bfd2-9480a2d65bc3 req-69cfd470-aeda-4a40-bed8-70d7c29deaee 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-cbbd84c1-d174-40d7-be54-3123704f0e0b" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:20:53 compute-0 ceph-mon[73755]: pgmap v1267: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 14 KiB/s wr, 1 op/s
Sep 30 18:20:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:20:53.740Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:20:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:20:53.741Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:20:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:20:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:20:53.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:20:53 compute-0 nova_compute[265391]: 2025-09-30 18:20:53.754 2 INFO nova.virt.libvirt.driver [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Migration running for 1 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Sep 30 18:20:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:54 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840028c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 18:20:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:20:54.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 18:20:54 compute-0 nova_compute[265391]: 2025-09-30 18:20:54.259 2 DEBUG nova.virt.libvirt.migration [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Sep 30 18:20:54 compute-0 nova_compute[265391]: 2025-09-30 18:20:54.260 2 DEBUG nova.virt.libvirt.migration [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Downtime does not need to change update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:671
Sep 30 18:20:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1268: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.6 KiB/s rd, 14 KiB/s wr, 6 op/s
Sep 30 18:20:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:54.296 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:20:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:54.297 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:20:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:54.297 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:20:54 compute-0 ceph-mon[73755]: pgmap v1268: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.6 KiB/s rd, 14 KiB/s wr, 6 op/s
Sep 30 18:20:54 compute-0 kernel: tapca4e45b7-a4 (unregistering): left promiscuous mode
Sep 30 18:20:54 compute-0 NetworkManager[45059]: <info>  [1759256454.6312] device (tapca4e45b7-a4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 18:20:54 compute-0 nova_compute[265391]: 2025-09-30 18:20:54.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:54 compute-0 ovn_controller[156242]: 2025-09-30T18:20:54Z|00119|binding|INFO|Releasing lport ca4e45b7-a42b-4e47-80d8-749194caf98a from this chassis (sb_readonly=0)
Sep 30 18:20:54 compute-0 ovn_controller[156242]: 2025-09-30T18:20:54Z|00120|binding|INFO|Setting lport ca4e45b7-a42b-4e47-80d8-749194caf98a down in Southbound
Sep 30 18:20:54 compute-0 ovn_controller[156242]: 2025-09-30T18:20:54Z|00121|binding|INFO|Removing iface tapca4e45b7-a4 ovn-installed in OVS
Sep 30 18:20:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:54.651 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:6b:bd 10.100.0.7'], port_security=['fa:16:3e:7e:6b:bd 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '81ab3fff-d6d4-4262-9f24-1b212876e52c'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'cbbd84c1-d174-40d7-be54-3123704f0e0b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-443be7ca-f628-4a45-95b6-620d37172d7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'af4ef07c582847419a03275af50c6ffc', 'neutron:revision_number': '10', 'neutron:security_group_ids': '518a9c00-28f9-47ab-a122-e672192eedea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=96eb21b8-879c-4e72-963b-37e37ae3d0c5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=ca4e45b7-a42b-4e47-80d8-749194caf98a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:20:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:54.652 166158 INFO neutron.agent.ovn.metadata.agent [-] Port ca4e45b7-a42b-4e47-80d8-749194caf98a in datapath 443be7ca-f628-4a45-95b6-620d37172d7b unbound from our chassis
Sep 30 18:20:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:54.654 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 443be7ca-f628-4a45-95b6-620d37172d7b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:20:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:54.656 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[b854bf59-83a2-4fc8-92f0-5f8c93dd587f]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:20:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:54.657 166158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b namespace which is not needed anymore
Sep 30 18:20:54 compute-0 nova_compute[265391]: 2025-09-30 18:20:54.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:54 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Sep 30 18:20:54 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d0000000c.scope: Consumed 15.057s CPU time.
Sep 30 18:20:54 compute-0 systemd-machined[219917]: Machine qemu-9-instance-0000000c terminated.
Sep 30 18:20:54 compute-0 virtqemud[265263]: Unable to get XATTR trusted.libvirt.security.ref_selinux on cbbd84c1-d174-40d7-be54-3123704f0e0b_disk: No such file or directory
Sep 30 18:20:54 compute-0 virtqemud[265263]: Unable to get XATTR trusted.libvirt.security.ref_dac on cbbd84c1-d174-40d7-be54-3123704f0e0b_disk: No such file or directory
Sep 30 18:20:54 compute-0 nova_compute[265391]: 2025-09-30 18:20:54.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:54 compute-0 nova_compute[265391]: 2025-09-30 18:20:54.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:54 compute-0 nova_compute[265391]: 2025-09-30 18:20:54.801 2 DEBUG nova.virt.libvirt.guest [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.12/site-packages/nova/virt/libvirt/guest.py:687
Sep 30 18:20:54 compute-0 nova_compute[265391]: 2025-09-30 18:20:54.802 2 INFO nova.virt.libvirt.driver [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Migration operation has completed
Sep 30 18:20:54 compute-0 nova_compute[265391]: 2025-09-30 18:20:54.803 2 INFO nova.compute.manager [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] _post_live_migration() is started..
Sep 30 18:20:54 compute-0 nova_compute[265391]: 2025-09-30 18:20:54.806 2 DEBUG nova.virt.libvirt.driver [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Migrate API has completed _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11182
Sep 30 18:20:54 compute-0 nova_compute[265391]: 2025-09-30 18:20:54.807 2 DEBUG nova.virt.libvirt.driver [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Migration operation thread has finished _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11230
Sep 30 18:20:54 compute-0 nova_compute[265391]: 2025-09-30 18:20:54.808 2 DEBUG nova.virt.libvirt.driver [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Migration operation thread notification thread_finished /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11533
Sep 30 18:20:54 compute-0 nova_compute[265391]: 2025-09-30 18:20:54.820 2 WARNING neutronclient.v2_0.client [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:20:54 compute-0 nova_compute[265391]: 2025-09-30 18:20:54.821 2 WARNING neutronclient.v2_0.client [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:20:54 compute-0 neutron-haproxy-ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b[316290]: [NOTICE]   (316294) : haproxy version is 3.0.5-8e879a5
Sep 30 18:20:54 compute-0 neutron-haproxy-ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b[316290]: [NOTICE]   (316294) : path to executable is /usr/sbin/haproxy
Sep 30 18:20:54 compute-0 neutron-haproxy-ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b[316290]: [WARNING]  (316294) : Exiting Master process...
Sep 30 18:20:54 compute-0 podman[317411]: 2025-09-30 18:20:54.83621336 +0000 UTC m=+0.041287294 container kill 103b01bdf4342bd3cc2ccef06d8d243d6042b302ce88ef931c8fb9b351336f67 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2)
Sep 30 18:20:54 compute-0 neutron-haproxy-ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b[316290]: [ALERT]    (316294) : Current worker (316296) exited with code 143 (Terminated)
Sep 30 18:20:54 compute-0 neutron-haproxy-ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b[316290]: [WARNING]  (316294) : All workers exited. Exiting... (0)
Sep 30 18:20:54 compute-0 systemd[1]: libpod-103b01bdf4342bd3cc2ccef06d8d243d6042b302ce88ef931c8fb9b351336f67.scope: Deactivated successfully.
Sep 30 18:20:54 compute-0 podman[317432]: 2025-09-30 18:20:54.88699512 +0000 UTC m=+0.029758984 container died 103b01bdf4342bd3cc2ccef06d8d243d6042b302ce88ef931c8fb9b351336f67 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest)
Sep 30 18:20:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-103b01bdf4342bd3cc2ccef06d8d243d6042b302ce88ef931c8fb9b351336f67-userdata-shm.mount: Deactivated successfully.
Sep 30 18:20:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-292c7d43b6e9cb949f8a8579c52bb67a026ce99614bf818beb8f37547d4e02c3-merged.mount: Deactivated successfully.
Sep 30 18:20:54 compute-0 podman[317432]: 2025-09-30 18:20:54.939172006 +0000 UTC m=+0.081935800 container cleanup 103b01bdf4342bd3cc2ccef06d8d243d6042b302ce88ef931c8fb9b351336f67 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4)
Sep 30 18:20:54 compute-0 systemd[1]: libpod-conmon-103b01bdf4342bd3cc2ccef06d8d243d6042b302ce88ef931c8fb9b351336f67.scope: Deactivated successfully.
Sep 30 18:20:54 compute-0 podman[317434]: 2025-09-30 18:20:54.961566448 +0000 UTC m=+0.095126703 container remove 103b01bdf4342bd3cc2ccef06d8d243d6042b302ce88ef931c8fb9b351336f67 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest)
Sep 30 18:20:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:54.967 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[654b9602-ef7e-49af-b331-5c5d81d78b8d]: (4, ("Tue Sep 30 06:20:54 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b (103b01bdf4342bd3cc2ccef06d8d243d6042b302ce88ef931c8fb9b351336f67)\n103b01bdf4342bd3cc2ccef06d8d243d6042b302ce88ef931c8fb9b351336f67\nTue Sep 30 06:20:54 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b (103b01bdf4342bd3cc2ccef06d8d243d6042b302ce88ef931c8fb9b351336f67)\n103b01bdf4342bd3cc2ccef06d8d243d6042b302ce88ef931c8fb9b351336f67\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:20:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:54.969 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[c84cc237-80b8-4aa4-8fe2-87a908808528]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:20:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:54.969 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/443be7ca-f628-4a45-95b6-620d37172d7b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/443be7ca-f628-4a45-95b6-620d37172d7b.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:20:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:54.969 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[2426725c-f2ad-42ef-a25e-309c5fcefbdd]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:20:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:54.970 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap443be7ca-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:20:54 compute-0 nova_compute[265391]: 2025-09-30 18:20:54.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:54 compute-0 kernel: tap443be7ca-f0: left promiscuous mode
Sep 30 18:20:54 compute-0 nova_compute[265391]: 2025-09-30 18:20:54.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:54 compute-0 nova_compute[265391]: 2025-09-30 18:20:54.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:54.994 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[489d3a3c-4102-4c51-9c7a-e6364fb18b89]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:20:55 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:55.021 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[cd6fe2e8-bc32-4ae7-bb36-31b7164bdedf]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:20:55 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:55.023 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[821dd5e2-0b2d-49e5-9d24-a82ba28cf160]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:20:55 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:55.042 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[c1481e6d-3a91-4d60-ae16-3a9414c3f162]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499884, 'reachable_time': 25345, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317468, 'error': None, 'target': 'ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:20:55 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:55.046 166383 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Sep 30 18:20:55 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:20:55.046 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[638c3b6b-8bac-4a97-a2b7-5046e0f35c1c]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:20:55 compute-0 systemd[1]: run-netns-ovnmeta\x2d443be7ca\x2df628\x2d4a45\x2d95b6\x2d620d37172d7b.mount: Deactivated successfully.
Sep 30 18:20:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:55 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840028c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:55 compute-0 nova_compute[265391]: 2025-09-30 18:20:55.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:55 compute-0 nova_compute[265391]: 2025-09-30 18:20:55.303 2 DEBUG nova.compute.manager [req-3794baa0-f4cb-425a-a1ea-0def2bfbbceb req-faf9d4f9-0241-42fb-b4a9-b731820693ba 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Received event network-vif-unplugged-ca4e45b7-a42b-4e47-80d8-749194caf98a external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:20:55 compute-0 nova_compute[265391]: 2025-09-30 18:20:55.303 2 DEBUG oslo_concurrency.lockutils [req-3794baa0-f4cb-425a-a1ea-0def2bfbbceb req-faf9d4f9-0241-42fb-b4a9-b731820693ba 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "cbbd84c1-d174-40d7-be54-3123704f0e0b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:20:55 compute-0 nova_compute[265391]: 2025-09-30 18:20:55.304 2 DEBUG oslo_concurrency.lockutils [req-3794baa0-f4cb-425a-a1ea-0def2bfbbceb req-faf9d4f9-0241-42fb-b4a9-b731820693ba 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "cbbd84c1-d174-40d7-be54-3123704f0e0b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:20:55 compute-0 nova_compute[265391]: 2025-09-30 18:20:55.304 2 DEBUG oslo_concurrency.lockutils [req-3794baa0-f4cb-425a-a1ea-0def2bfbbceb req-faf9d4f9-0241-42fb-b4a9-b731820693ba 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "cbbd84c1-d174-40d7-be54-3123704f0e0b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:20:55 compute-0 nova_compute[265391]: 2025-09-30 18:20:55.304 2 DEBUG nova.compute.manager [req-3794baa0-f4cb-425a-a1ea-0def2bfbbceb req-faf9d4f9-0241-42fb-b4a9-b731820693ba 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] No waiting events found dispatching network-vif-unplugged-ca4e45b7-a42b-4e47-80d8-749194caf98a pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:20:55 compute-0 nova_compute[265391]: 2025-09-30 18:20:55.305 2 DEBUG nova.compute.manager [req-3794baa0-f4cb-425a-a1ea-0def2bfbbceb req-faf9d4f9-0241-42fb-b4a9-b731820693ba 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Received event network-vif-unplugged-ca4e45b7-a42b-4e47-80d8-749194caf98a for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:20:55 compute-0 nova_compute[265391]: 2025-09-30 18:20:55.547 2 DEBUG nova.network.neutron [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Activated binding for port ca4e45b7-a42b-4e47-80d8-749194caf98a and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.12/site-packages/nova/network/neutron.py:3241
Sep 30 18:20:55 compute-0 nova_compute[265391]: 2025-09-30 18:20:55.548 2 DEBUG nova.compute.manager [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "ca4e45b7-a42b-4e47-80d8-749194caf98a", "address": "fa:16:3e:7e:6b:bd", "network": {"id": "443be7ca-f628-4a45-95b6-620d37172d7b", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-1888091317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "269f60e72ce1460a98da519466c89da6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca4e45b7-a4", "ovs_interfaceid": "ca4e45b7-a42b-4e47-80d8-749194caf98a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10059
Sep 30 18:20:55 compute-0 nova_compute[265391]: 2025-09-30 18:20:55.549 2 DEBUG nova.virt.libvirt.vif [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:19:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-1407640112',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-1407640112',id=12,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:20:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='af4ef07c582847419a03275af50c6ffc',ramdisk_id='',reservation_id='r-mj5oq7ov',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-1597156537',owner_user_name='tempest-TestExecuteHostMaintenanceStrategy-1597156537-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:20:33Z,user_data=None,user_id='57be6c3d2e0d431dae0127ac659de1e0',uuid=cbbd84c1-d174-40d7-be54-3123704f0e0b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ca4e45b7-a42b-4e47-80d8-749194caf98a", "address": "fa:16:3e:7e:6b:bd", "network": {"id": "443be7ca-f628-4a45-95b6-620d37172d7b", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-1888091317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "269f60e72ce1460a98da519466c89da6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca4e45b7-a4", "ovs_interfaceid": "ca4e45b7-a42b-4e47-80d8-749194caf98a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug 
/usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Sep 30 18:20:55 compute-0 nova_compute[265391]: 2025-09-30 18:20:55.550 2 DEBUG nova.network.os_vif_util [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "ca4e45b7-a42b-4e47-80d8-749194caf98a", "address": "fa:16:3e:7e:6b:bd", "network": {"id": "443be7ca-f628-4a45-95b6-620d37172d7b", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-1888091317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "269f60e72ce1460a98da519466c89da6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca4e45b7-a4", "ovs_interfaceid": "ca4e45b7-a42b-4e47-80d8-749194caf98a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:20:55 compute-0 nova_compute[265391]: 2025-09-30 18:20:55.550 2 DEBUG nova.network.os_vif_util [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:6b:bd,bridge_name='br-int',has_traffic_filtering=True,id=ca4e45b7-a42b-4e47-80d8-749194caf98a,network=Network(443be7ca-f628-4a45-95b6-620d37172d7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca4e45b7-a4') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:20:55 compute-0 nova_compute[265391]: 2025-09-30 18:20:55.551 2 DEBUG os_vif [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:6b:bd,bridge_name='br-int',has_traffic_filtering=True,id=ca4e45b7-a42b-4e47-80d8-749194caf98a,network=Network(443be7ca-f628-4a45-95b6-620d37172d7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca4e45b7-a4') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Sep 30 18:20:55 compute-0 nova_compute[265391]: 2025-09-30 18:20:55.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:55 compute-0 nova_compute[265391]: 2025-09-30 18:20:55.554 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca4e45b7-a4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:20:55 compute-0 nova_compute[265391]: 2025-09-30 18:20:55.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:55 compute-0 nova_compute[265391]: 2025-09-30 18:20:55.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:20:55 compute-0 nova_compute[265391]: 2025-09-30 18:20:55.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:55 compute-0 nova_compute[265391]: 2025-09-30 18:20:55.560 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=84145521-92e3-4b41-83e2-1de977780c23) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:20:55 compute-0 nova_compute[265391]: 2025-09-30 18:20:55.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:55 compute-0 nova_compute[265391]: 2025-09-30 18:20:55.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:20:55 compute-0 nova_compute[265391]: 2025-09-30 18:20:55.563 2 INFO os_vif [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:6b:bd,bridge_name='br-int',has_traffic_filtering=True,id=ca4e45b7-a42b-4e47-80d8-749194caf98a,network=Network(443be7ca-f628-4a45-95b6-620d37172d7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca4e45b7-a4')
Sep 30 18:20:55 compute-0 nova_compute[265391]: 2025-09-30 18:20:55.564 2 DEBUG oslo_concurrency.lockutils [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:20:55 compute-0 nova_compute[265391]: 2025-09-30 18:20:55.564 2 DEBUG oslo_concurrency.lockutils [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:20:55 compute-0 nova_compute[265391]: 2025-09-30 18:20:55.564 2 DEBUG oslo_concurrency.lockutils [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:20:55 compute-0 nova_compute[265391]: 2025-09-30 18:20:55.565 2 DEBUG nova.compute.manager [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10082
Sep 30 18:20:55 compute-0 nova_compute[265391]: 2025-09-30 18:20:55.565 2 INFO nova.virt.libvirt.driver [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Deleting instance files /var/lib/nova/instances/cbbd84c1-d174-40d7-be54-3123704f0e0b_del
Sep 30 18:20:55 compute-0 nova_compute[265391]: 2025-09-30 18:20:55.565 2 INFO nova.virt.libvirt.driver [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Deletion of /var/lib/nova/instances/cbbd84c1-d174-40d7-be54-3123704f0e0b_del complete
Sep 30 18:20:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:20:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:20:55.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:20:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:20:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:56 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03980029e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:20:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:20:56.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:20:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1269: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.4 KiB/s rd, 2.1 KiB/s wr, 5 op/s
Sep 30 18:20:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:57 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004cc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:20:57.217Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:20:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:20:57.218Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:20:57 compute-0 ceph-mon[73755]: pgmap v1269: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.4 KiB/s rd, 2.1 KiB/s wr, 5 op/s
Sep 30 18:20:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:20:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2873177461' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:20:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:20:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2873177461' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:20:57 compute-0 nova_compute[265391]: 2025-09-30 18:20:57.582 2 DEBUG nova.compute.manager [req-40b56e0c-7888-42bc-b1a4-b4c672fb4dbf req-68289bf1-b1b5-4272-9e0b-7ccda9ad4ecd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Received event network-vif-plugged-ca4e45b7-a42b-4e47-80d8-749194caf98a external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:20:57 compute-0 nova_compute[265391]: 2025-09-30 18:20:57.582 2 DEBUG oslo_concurrency.lockutils [req-40b56e0c-7888-42bc-b1a4-b4c672fb4dbf req-68289bf1-b1b5-4272-9e0b-7ccda9ad4ecd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "cbbd84c1-d174-40d7-be54-3123704f0e0b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:20:57 compute-0 nova_compute[265391]: 2025-09-30 18:20:57.582 2 DEBUG oslo_concurrency.lockutils [req-40b56e0c-7888-42bc-b1a4-b4c672fb4dbf req-68289bf1-b1b5-4272-9e0b-7ccda9ad4ecd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "cbbd84c1-d174-40d7-be54-3123704f0e0b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:20:57 compute-0 nova_compute[265391]: 2025-09-30 18:20:57.582 2 DEBUG oslo_concurrency.lockutils [req-40b56e0c-7888-42bc-b1a4-b4c672fb4dbf req-68289bf1-b1b5-4272-9e0b-7ccda9ad4ecd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "cbbd84c1-d174-40d7-be54-3123704f0e0b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:20:57 compute-0 nova_compute[265391]: 2025-09-30 18:20:57.583 2 DEBUG nova.compute.manager [req-40b56e0c-7888-42bc-b1a4-b4c672fb4dbf req-68289bf1-b1b5-4272-9e0b-7ccda9ad4ecd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] No waiting events found dispatching network-vif-plugged-ca4e45b7-a42b-4e47-80d8-749194caf98a pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:20:57 compute-0 nova_compute[265391]: 2025-09-30 18:20:57.583 2 WARNING nova.compute.manager [req-40b56e0c-7888-42bc-b1a4-b4c672fb4dbf req-68289bf1-b1b5-4272-9e0b-7ccda9ad4ecd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Received unexpected event network-vif-plugged-ca4e45b7-a42b-4e47-80d8-749194caf98a for instance with vm_state active and task_state migrating.
Sep 30 18:20:57 compute-0 nova_compute[265391]: 2025-09-30 18:20:57.583 2 DEBUG nova.compute.manager [req-40b56e0c-7888-42bc-b1a4-b4c672fb4dbf req-68289bf1-b1b5-4272-9e0b-7ccda9ad4ecd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Received event network-vif-unplugged-ca4e45b7-a42b-4e47-80d8-749194caf98a external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:20:57 compute-0 nova_compute[265391]: 2025-09-30 18:20:57.583 2 DEBUG oslo_concurrency.lockutils [req-40b56e0c-7888-42bc-b1a4-b4c672fb4dbf req-68289bf1-b1b5-4272-9e0b-7ccda9ad4ecd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "cbbd84c1-d174-40d7-be54-3123704f0e0b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:20:57 compute-0 nova_compute[265391]: 2025-09-30 18:20:57.583 2 DEBUG oslo_concurrency.lockutils [req-40b56e0c-7888-42bc-b1a4-b4c672fb4dbf req-68289bf1-b1b5-4272-9e0b-7ccda9ad4ecd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "cbbd84c1-d174-40d7-be54-3123704f0e0b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:20:57 compute-0 nova_compute[265391]: 2025-09-30 18:20:57.584 2 DEBUG oslo_concurrency.lockutils [req-40b56e0c-7888-42bc-b1a4-b4c672fb4dbf req-68289bf1-b1b5-4272-9e0b-7ccda9ad4ecd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "cbbd84c1-d174-40d7-be54-3123704f0e0b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:20:57 compute-0 nova_compute[265391]: 2025-09-30 18:20:57.584 2 DEBUG nova.compute.manager [req-40b56e0c-7888-42bc-b1a4-b4c672fb4dbf req-68289bf1-b1b5-4272-9e0b-7ccda9ad4ecd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] No waiting events found dispatching network-vif-unplugged-ca4e45b7-a42b-4e47-80d8-749194caf98a pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:20:57 compute-0 nova_compute[265391]: 2025-09-30 18:20:57.584 2 DEBUG nova.compute.manager [req-40b56e0c-7888-42bc-b1a4-b4c672fb4dbf req-68289bf1-b1b5-4272-9e0b-7ccda9ad4ecd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Received event network-vif-unplugged-ca4e45b7-a42b-4e47-80d8-749194caf98a for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:20:57 compute-0 nova_compute[265391]: 2025-09-30 18:20:57.584 2 DEBUG nova.compute.manager [req-40b56e0c-7888-42bc-b1a4-b4c672fb4dbf req-68289bf1-b1b5-4272-9e0b-7ccda9ad4ecd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Received event network-vif-plugged-ca4e45b7-a42b-4e47-80d8-749194caf98a external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:20:57 compute-0 nova_compute[265391]: 2025-09-30 18:20:57.585 2 DEBUG oslo_concurrency.lockutils [req-40b56e0c-7888-42bc-b1a4-b4c672fb4dbf req-68289bf1-b1b5-4272-9e0b-7ccda9ad4ecd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "cbbd84c1-d174-40d7-be54-3123704f0e0b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:20:57 compute-0 nova_compute[265391]: 2025-09-30 18:20:57.585 2 DEBUG oslo_concurrency.lockutils [req-40b56e0c-7888-42bc-b1a4-b4c672fb4dbf req-68289bf1-b1b5-4272-9e0b-7ccda9ad4ecd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "cbbd84c1-d174-40d7-be54-3123704f0e0b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:20:57 compute-0 nova_compute[265391]: 2025-09-30 18:20:57.585 2 DEBUG oslo_concurrency.lockutils [req-40b56e0c-7888-42bc-b1a4-b4c672fb4dbf req-68289bf1-b1b5-4272-9e0b-7ccda9ad4ecd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "cbbd84c1-d174-40d7-be54-3123704f0e0b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:20:57 compute-0 nova_compute[265391]: 2025-09-30 18:20:57.585 2 DEBUG nova.compute.manager [req-40b56e0c-7888-42bc-b1a4-b4c672fb4dbf req-68289bf1-b1b5-4272-9e0b-7ccda9ad4ecd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] No waiting events found dispatching network-vif-plugged-ca4e45b7-a42b-4e47-80d8-749194caf98a pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:20:57 compute-0 nova_compute[265391]: 2025-09-30 18:20:57.586 2 WARNING nova.compute.manager [req-40b56e0c-7888-42bc-b1a4-b4c672fb4dbf req-68289bf1-b1b5-4272-9e0b-7ccda9ad4ecd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Received unexpected event network-vif-plugged-ca4e45b7-a42b-4e47-80d8-749194caf98a for instance with vm_state active and task_state migrating.
Sep 30 18:20:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:20:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:20:57.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:20:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:58 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b40023a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:20:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:20:58.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:20:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1270: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.4 KiB/s rd, 2.1 KiB/s wr, 5 op/s
Sep 30 18:20:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2873177461' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:20:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2873177461' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:20:58 compute-0 ceph-mon[73755]: pgmap v1270: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.4 KiB/s rd, 2.1 KiB/s wr, 5 op/s
Sep 30 18:20:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:20:58] "GET /metrics HTTP/1.1" 200 46648 "" "Prometheus/2.51.0"
Sep 30 18:20:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:20:58] "GET /metrics HTTP/1.1" 200 46648 "" "Prometheus/2.51.0"
Sep 30 18:20:58 compute-0 unix_chkpwd[317475]: password check failed for user (root)
Sep 30 18:20:58 compute-0 sshd-session[317471]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158  user=root
Sep 30 18:20:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:20:59 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840028c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:20:59 compute-0 podman[317478]: 2025-09-30 18:20:59.536274785 +0000 UTC m=+0.072398642 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:20:59 compute-0 podman[317476]: 2025-09-30 18:20:59.564466578 +0000 UTC m=+0.102439673 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Sep 30 18:20:59 compute-0 podman[276673]: time="2025-09-30T18:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:20:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:20:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:20:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:20:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:20:59.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:20:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10288 "" "Go-http-client/1.1"
Sep 30 18:21:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:00 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398004250 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:21:00.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:21:00 compute-0 nova_compute[265391]: 2025-09-30 18:21:00.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1271: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.6 KiB/s rd, 3.1 KiB/s wr, 6 op/s
Sep 30 18:21:00 compute-0 nova_compute[265391]: 2025-09-30 18:21:00.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:21:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:01 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004cc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:01 compute-0 sshd-session[317471]: Failed password for root from 45.252.249.158 port 42098 ssh2
Sep 30 18:21:01 compute-0 ceph-mon[73755]: pgmap v1271: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.6 KiB/s rd, 3.1 KiB/s wr, 6 op/s
Sep 30 18:21:01 compute-0 openstack_network_exporter[279566]: ERROR   18:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:21:01 compute-0 openstack_network_exporter[279566]: ERROR   18:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:21:01 compute-0 openstack_network_exporter[279566]: ERROR   18:21:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:21:01 compute-0 openstack_network_exporter[279566]: ERROR   18:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:21:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:21:01 compute-0 openstack_network_exporter[279566]: ERROR   18:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:21:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:21:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:21:01.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:21:01 compute-0 sshd-session[317471]: Received disconnect from 45.252.249.158 port 42098:11: Bye Bye [preauth]
Sep 30 18:21:01 compute-0 sshd-session[317471]: Disconnected from authenticating user root 45.252.249.158 port 42098 [preauth]
Sep 30 18:21:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:02 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b40023a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:21:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:21:02.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:21:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1272: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.6 KiB/s rd, 1.1 KiB/s wr, 5 op/s
Sep 30 18:21:02 compute-0 ceph-mon[73755]: pgmap v1272: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.6 KiB/s rd, 1.1 KiB/s wr, 5 op/s
Sep 30 18:21:02 compute-0 podman[317528]: 2025-09-30 18:21:02.524932301 +0000 UTC m=+0.058893272 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20250930, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Sep 30 18:21:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:03 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840028c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:21:03.742Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:21:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:21:03.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:21:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:04 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398004250 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:21:04.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:21:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1273: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 5.1 KiB/s rd, 8.8 KiB/s wr, 7 op/s
Sep 30 18:21:04 compute-0 ceph-mon[73755]: pgmap v1273: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 5.1 KiB/s rd, 8.8 KiB/s wr, 7 op/s
Sep 30 18:21:04 compute-0 nova_compute[265391]: 2025-09-30 18:21:04.602 2 DEBUG oslo_concurrency.lockutils [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "cbbd84c1-d174-40d7-be54-3123704f0e0b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:21:04 compute-0 nova_compute[265391]: 2025-09-30 18:21:04.602 2 DEBUG oslo_concurrency.lockutils [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "cbbd84c1-d174-40d7-be54-3123704f0e0b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:21:04 compute-0 nova_compute[265391]: 2025-09-30 18:21:04.602 2 DEBUG oslo_concurrency.lockutils [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "cbbd84c1-d174-40d7-be54-3123704f0e0b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:21:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:05 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004cc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:05 compute-0 nova_compute[265391]: 2025-09-30 18:21:05.119 2 DEBUG oslo_concurrency.lockutils [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:21:05 compute-0 nova_compute[265391]: 2025-09-30 18:21:05.120 2 DEBUG oslo_concurrency.lockutils [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:21:05 compute-0 nova_compute[265391]: 2025-09-30 18:21:05.120 2 DEBUG oslo_concurrency.lockutils [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:21:05 compute-0 nova_compute[265391]: 2025-09-30 18:21:05.120 2 DEBUG nova.compute.resource_tracker [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:21:05 compute-0 nova_compute[265391]: 2025-09-30 18:21:05.121 2 DEBUG oslo_concurrency.processutils [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:21:05 compute-0 nova_compute[265391]: 2025-09-30 18:21:05.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:05 compute-0 nova_compute[265391]: 2025-09-30 18:21:05.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:21:05 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/565731300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:21:05 compute-0 nova_compute[265391]: 2025-09-30 18:21:05.618 2 DEBUG oslo_concurrency.processutils [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:21:05 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/565731300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:21:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:21:05.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:21:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:21:05 compute-0 nova_compute[265391]: 2025-09-30 18:21:05.826 2 WARNING nova.virt.libvirt.driver [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:21:05 compute-0 nova_compute[265391]: 2025-09-30 18:21:05.828 2 DEBUG oslo_concurrency.processutils [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:21:05 compute-0 nova_compute[265391]: 2025-09-30 18:21:05.865 2 DEBUG oslo_concurrency.processutils [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.037s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:21:05 compute-0 nova_compute[265391]: 2025-09-30 18:21:05.866 2 DEBUG nova.compute.resource_tracker [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4409MB free_disk=39.90115737915039GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:21:05 compute-0 nova_compute[265391]: 2025-09-30 18:21:05.866 2 DEBUG oslo_concurrency.lockutils [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:21:05 compute-0 nova_compute[265391]: 2025-09-30 18:21:05.867 2 DEBUG oslo_concurrency.lockutils [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:21:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:06 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b40023a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:21:06.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:21:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1274: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 682 B/s rd, 8.7 KiB/s wr, 2 op/s
Sep 30 18:21:06 compute-0 ceph-mon[73755]: pgmap v1274: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 682 B/s rd, 8.7 KiB/s wr, 2 op/s
Sep 30 18:21:06 compute-0 nova_compute[265391]: 2025-09-30 18:21:06.888 2 DEBUG nova.compute.resource_tracker [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Migration for instance cbbd84c1-d174-40d7-be54-3123704f0e0b refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Sep 30 18:21:07 compute-0 sshd-session[317573]: Invalid user sri from 14.225.220.107 port 49256
Sep 30 18:21:07 compute-0 sshd-session[317573]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:21:07 compute-0 sshd-session[317573]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107
Sep 30 18:21:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:07 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038400bca0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:21:07.219Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:21:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:21:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:21:07 compute-0 nova_compute[265391]: 2025-09-30 18:21:07.398 2 DEBUG nova.compute.resource_tracker [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1596
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:21:07 compute-0 nova_compute[265391]: 2025-09-30 18:21:07.430 2 DEBUG nova.compute.resource_tracker [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Migration 75d9dfb1-2eb5-41de-bab1-af56e2a3c1a7 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Sep 30 18:21:07 compute-0 nova_compute[265391]: 2025-09-30 18:21:07.431 2 DEBUG nova.compute.resource_tracker [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:21:07 compute-0 nova_compute[265391]: 2025-09-30 18:21:07.431 2 DEBUG nova.compute.resource_tracker [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:21:05 up  1:24,  0 user,  load average: 1.28, 0.92, 0.88\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:21:07 compute-0 nova_compute[265391]: 2025-09-30 18:21:07.468 2 DEBUG oslo_concurrency.processutils [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:21:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:21:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.002000052s ======
Sep 30 18:21:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:21:07.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002276197591082243 of space, bias 1.0, pg target 0.4552395182164486 quantized to 32 (current 32)
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:21:07
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['vms', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'volumes', 'images', '.mgr', '.nfs']
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:21:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:21:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:21:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2676924550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:21:07 compute-0 nova_compute[265391]: 2025-09-30 18:21:07.948 2 DEBUG oslo_concurrency.processutils [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:21:07 compute-0 nova_compute[265391]: 2025-09-30 18:21:07.958 2 DEBUG nova.compute.provider_tree [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:21:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:08 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398004250 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:21:08.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:21:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1275: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 682 B/s rd, 8.7 KiB/s wr, 2 op/s
Sep 30 18:21:08 compute-0 nova_compute[265391]: 2025-09-30 18:21:08.468 2 DEBUG nova.scheduler.client.report [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:21:08 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2676924550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:21:08 compute-0 ceph-mon[73755]: pgmap v1275: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 682 B/s rd, 8.7 KiB/s wr, 2 op/s
Sep 30 18:21:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:21:08] "GET /metrics HTTP/1.1" 200 46648 "" "Prometheus/2.51.0"
Sep 30 18:21:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:21:08] "GET /metrics HTTP/1.1" 200 46648 "" "Prometheus/2.51.0"
Sep 30 18:21:08 compute-0 sshd-session[317573]: Failed password for invalid user sri from 14.225.220.107 port 49256 ssh2
Sep 30 18:21:08 compute-0 nova_compute[265391]: 2025-09-30 18:21:08.980 2 DEBUG nova.compute.resource_tracker [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:21:08 compute-0 nova_compute[265391]: 2025-09-30 18:21:08.981 2 DEBUG oslo_concurrency.lockutils [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.114s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:21:09 compute-0 nova_compute[265391]: 2025-09-30 18:21:08.999 2 INFO nova.compute.manager [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Sep 30 18:21:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:09 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004cc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:09 compute-0 ceph-mgr[74051]: [devicehealth INFO root] Check health
Sep 30 18:21:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:21:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:21:09.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:21:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:10 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b40023a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:10 compute-0 nova_compute[265391]: 2025-09-30 18:21:10.064 2 INFO nova.scheduler.client.report [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Deleted allocation for migration 75d9dfb1-2eb5-41de-bab1-af56e2a3c1a7
Sep 30 18:21:10 compute-0 nova_compute[265391]: 2025-09-30 18:21:10.065 2 DEBUG nova.virt.libvirt.driver [None req-5ff3dc87-0e66-4db3-a7ee-76ad4b78a511 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: cbbd84c1-d174-40d7-be54-3123704f0e0b] Live migration monitoring is all done _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11566
Sep 30 18:21:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:21:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:21:10.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:21:10 compute-0 nova_compute[265391]: 2025-09-30 18:21:10.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1276: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 682 B/s rd, 11 KiB/s wr, 2 op/s
Sep 30 18:21:10 compute-0 ceph-mon[73755]: pgmap v1276: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 682 B/s rd, 11 KiB/s wr, 2 op/s
Sep 30 18:21:10 compute-0 sshd-session[317573]: Received disconnect from 14.225.220.107 port 49256:11: Bye Bye [preauth]
Sep 30 18:21:10 compute-0 sshd-session[317573]: Disconnected from invalid user sri 14.225.220.107 port 49256 [preauth]
Sep 30 18:21:10 compute-0 nova_compute[265391]: 2025-09-30 18:21:10.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:21:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:11 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b40023a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:11 compute-0 podman[317610]: 2025-09-30 18:21:11.536290898 +0000 UTC m=+0.061565441 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vcs-type=git, version=9.6, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41)
Sep 30 18:21:11 compute-0 podman[317609]: 2025-09-30 18:21:11.567387946 +0000 UTC m=+0.094590339 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.4, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Sep 30 18:21:11 compute-0 podman[317608]: 2025-09-30 18:21:11.568247959 +0000 UTC m=+0.101657763 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, tcib_managed=true, container_name=multipathd, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2)
Sep 30 18:21:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:21:11.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:21:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:12 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398004250 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:21:12.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:21:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1277: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 10 KiB/s wr, 2 op/s
Sep 30 18:21:12 compute-0 sudo[317666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:21:12 compute-0 sudo[317666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:21:12 compute-0 sudo[317666]: pam_unix(sudo:session): session closed for user root
Sep 30 18:21:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:13 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b40023a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:13 compute-0 ceph-mon[73755]: pgmap v1277: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 10 KiB/s wr, 2 op/s
Sep 30 18:21:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:21:13.744Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:21:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:21:13.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:21:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:14 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038400bca0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:21:14.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:21:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1278: 353 pgs: 353 active+clean; 121 MiB data, 289 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 11 KiB/s wr, 30 op/s
Sep 30 18:21:14 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2323369020' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:21:14 compute-0 ceph-mon[73755]: pgmap v1278: 353 pgs: 353 active+clean; 121 MiB data, 289 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 11 KiB/s wr, 30 op/s
Sep 30 18:21:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:15 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380001090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:15 compute-0 nova_compute[265391]: 2025-09-30 18:21:15.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:15 compute-0 nova_compute[265391]: 2025-09-30 18:21:15.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:21:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:21:15.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:21:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:16 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398004250 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:21:16.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:21:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1279: 353 pgs: 353 active+clean; 121 MiB data, 289 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Sep 30 18:21:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:17 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b40023a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:21:17.220Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:21:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:21:17.221Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:21:17 compute-0 ceph-mon[73755]: pgmap v1279: 353 pgs: 353 active+clean; 121 MiB data, 289 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Sep 30 18:21:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:21:17.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:21:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:18 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038400bca0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:21:18.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:21:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1280: 353 pgs: 353 active+clean; 121 MiB data, 289 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Sep 30 18:21:18 compute-0 ceph-mon[73755]: pgmap v1280: 353 pgs: 353 active+clean; 121 MiB data, 289 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Sep 30 18:21:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:21:18] "GET /metrics HTTP/1.1" 200 46648 "" "Prometheus/2.51.0"
Sep 30 18:21:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:21:18] "GET /metrics HTTP/1.1" 200 46648 "" "Prometheus/2.51.0"
Sep 30 18:21:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:19 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380001090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:21:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:21:19.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:21:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:20 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398004250 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:21:20.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:21:20 compute-0 nova_compute[265391]: 2025-09-30 18:21:20.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1281: 353 pgs: 353 active+clean; 121 MiB data, 289 MiB used, 40 GiB / 40 GiB avail; 21 KiB/s rd, 3.5 KiB/s wr, 31 op/s
Sep 30 18:21:20 compute-0 ceph-mon[73755]: pgmap v1281: 353 pgs: 353 active+clean; 121 MiB data, 289 MiB used, 40 GiB / 40 GiB avail; 21 KiB/s rd, 3.5 KiB/s wr, 31 op/s
Sep 30 18:21:20 compute-0 nova_compute[265391]: 2025-09-30 18:21:20.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:21:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:21 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b40023c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:21 compute-0 sshd-session[317604]: error: kex_exchange_identification: read: Connection timed out
Sep 30 18:21:21 compute-0 sshd-session[317604]: banner exchange: Connection from 115.190.39.222 port 37846: Connection timed out
Sep 30 18:21:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:21:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:21:21.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:21:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:22 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038400bca0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:21:22.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:21:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1282: 353 pgs: 353 active+clean; 121 MiB data, 289 MiB used, 40 GiB / 40 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Sep 30 18:21:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:21:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:21:22 compute-0 ceph-mon[73755]: pgmap v1282: 353 pgs: 353 active+clean; 121 MiB data, 289 MiB used, 40 GiB / 40 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Sep 30 18:21:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:21:22 compute-0 sshd-session[317699]: error: kex_exchange_identification: read: Connection reset by peer
Sep 30 18:21:22 compute-0 sshd-session[317699]: Connection reset by 154.125.120.7 port 35921
Sep 30 18:21:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:23 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002970 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/182123 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 18:21:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=cleanup t=2025-09-30T18:21:23.510257165Z level=info msg="Completed cleanup jobs" duration=14.131597ms
Sep 30 18:21:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=plugins.update.checker t=2025-09-30T18:21:23.616136437Z level=info msg="Update check succeeded" duration=52.795673ms
Sep 30 18:21:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=grafana.update.checker t=2025-09-30T18:21:23.617735639Z level=info msg="Update check succeeded" duration=49.190869ms
Sep 30 18:21:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:21:23.744Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:21:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:21:23.745Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:21:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:21:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:21:23.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:21:24 compute-0 sshd-session[317705]: error: kex_exchange_identification: read: Connection reset by peer
Sep 30 18:21:24 compute-0 sshd-session[317705]: Connection reset by 45.140.17.97 port 9676
Sep 30 18:21:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:24 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398004250 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:21:24.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:21:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1283: 353 pgs: 353 active+clean; 41 MiB data, 242 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Sep 30 18:21:24 compute-0 ceph-mon[73755]: pgmap v1283: 353 pgs: 353 active+clean; 41 MiB data, 242 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Sep 30 18:21:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:25 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b40023e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:25 compute-0 nova_compute[265391]: 2025-09-30 18:21:25.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:25 compute-0 nova_compute[265391]: 2025-09-30 18:21:25.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:21:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:21:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:21:25.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:21:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:26 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038400bca0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:21:26.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:21:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1284: 353 pgs: 353 active+clean; 41 MiB data, 242 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:21:26 compute-0 ceph-mon[73755]: pgmap v1284: 353 pgs: 353 active+clean; 41 MiB data, 242 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:21:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:27 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002970 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:21:27.221Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:21:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:21:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:21:27.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:21:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:28 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398004250 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:21:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:21:28.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:21:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1285: 353 pgs: 353 active+clean; 41 MiB data, 242 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:21:28 compute-0 ceph-mon[73755]: pgmap v1285: 353 pgs: 353 active+clean; 41 MiB data, 242 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:21:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:21:28] "GET /metrics HTTP/1.1" 200 46647 "" "Prometheus/2.51.0"
Sep 30 18:21:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:21:28] "GET /metrics HTTP/1.1" 200 46647 "" "Prometheus/2.51.0"
Sep 30 18:21:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:29 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400aaf0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:29 compute-0 podman[276673]: time="2025-09-30T18:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:21:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:21:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10281 "" "Go-http-client/1.1"
Sep 30 18:21:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:21:29.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:21:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:30 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038400bca0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:21:30.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:21:30 compute-0 nova_compute[265391]: 2025-09-30 18:21:30.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1286: 353 pgs: 353 active+clean; 41 MiB data, 242 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:21:30 compute-0 ceph-mon[73755]: pgmap v1286: 353 pgs: 353 active+clean; 41 MiB data, 242 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:21:30 compute-0 nova_compute[265391]: 2025-09-30 18:21:30.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:21:30 compute-0 podman[317713]: 2025-09-30 18:21:30.538666425 +0000 UTC m=+0.070874663 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Sep 30 18:21:30 compute-0 nova_compute[265391]: 2025-09-30 18:21:30.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:30 compute-0 podman[317712]: 2025-09-30 18:21:30.577246768 +0000 UTC m=+0.114958989 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2)
Sep 30 18:21:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:21:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:31 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002970 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:31 compute-0 openstack_network_exporter[279566]: ERROR   18:21:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:21:31 compute-0 openstack_network_exporter[279566]: ERROR   18:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:21:31 compute-0 openstack_network_exporter[279566]: ERROR   18:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:21:31 compute-0 openstack_network_exporter[279566]: ERROR   18:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:21:31 compute-0 openstack_network_exporter[279566]: ERROR   18:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:21:31 compute-0 nova_compute[265391]: 2025-09-30 18:21:31.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:21:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:21:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:21:31.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:21:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:32 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398004250 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:21:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:21:32.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:21:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1287: 353 pgs: 353 active+clean; 41 MiB data, 242 MiB used, 40 GiB / 40 GiB avail; 16 KiB/s rd, 1.2 KiB/s wr, 24 op/s
Sep 30 18:21:32 compute-0 ceph-mon[73755]: pgmap v1287: 353 pgs: 353 active+clean; 41 MiB data, 242 MiB used, 40 GiB / 40 GiB avail; 16 KiB/s rd, 1.2 KiB/s wr, 24 op/s
Sep 30 18:21:32 compute-0 nova_compute[265391]: 2025-09-30 18:21:32.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:21:32 compute-0 nova_compute[265391]: 2025-09-30 18:21:32.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:21:32 compute-0 sudo[317766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:21:32 compute-0 sudo[317766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:21:32 compute-0 sudo[317766]: pam_unix(sudo:session): session closed for user root
Sep 30 18:21:32 compute-0 podman[317790]: 2025-09-30 18:21:32.705906432 +0000 UTC m=+0.076942381 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent)
Sep 30 18:21:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:32 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:21:32 compute-0 nova_compute[265391]: 2025-09-30 18:21:32.943 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:21:32 compute-0 nova_compute[265391]: 2025-09-30 18:21:32.944 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:21:32 compute-0 nova_compute[265391]: 2025-09-30 18:21:32.944 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:21:32 compute-0 nova_compute[265391]: 2025-09-30 18:21:32.944 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:21:32 compute-0 nova_compute[265391]: 2025-09-30 18:21:32.944 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:21:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:33 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400aaf0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:21:33 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2861977476' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:21:33 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2861977476' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:21:33 compute-0 nova_compute[265391]: 2025-09-30 18:21:33.436 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:21:33 compute-0 nova_compute[265391]: 2025-09-30 18:21:33.587 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:21:33 compute-0 nova_compute[265391]: 2025-09-30 18:21:33.588 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:21:33 compute-0 nova_compute[265391]: 2025-09-30 18:21:33.612 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.024s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:21:33 compute-0 nova_compute[265391]: 2025-09-30 18:21:33.613 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4409MB free_disk=39.992183685302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:21:33 compute-0 nova_compute[265391]: 2025-09-30 18:21:33.613 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:21:33 compute-0 nova_compute[265391]: 2025-09-30 18:21:33.613 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:21:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:21:33.747Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:21:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:21:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:21:33.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:21:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:34 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038400bca0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:21:34.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:21:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1288: 353 pgs: 353 active+clean; 41 MiB data, 242 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.6 KiB/s wr, 26 op/s
Sep 30 18:21:34 compute-0 ceph-mon[73755]: pgmap v1288: 353 pgs: 353 active+clean; 41 MiB data, 242 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.6 KiB/s wr, 26 op/s
Sep 30 18:21:34 compute-0 nova_compute[265391]: 2025-09-30 18:21:34.686 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:21:34 compute-0 nova_compute[265391]: 2025-09-30 18:21:34.687 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:21:33 up  1:24,  0 user,  load average: 0.89, 0.86, 0.87\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:21:34 compute-0 nova_compute[265391]: 2025-09-30 18:21:34.737 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:21:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:35 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038400bca0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:21:35 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2315415786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:21:35 compute-0 nova_compute[265391]: 2025-09-30 18:21:35.197 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:21:35 compute-0 nova_compute[265391]: 2025-09-30 18:21:35.204 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:21:35 compute-0 nova_compute[265391]: 2025-09-30 18:21:35.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:35 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2315415786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:21:35 compute-0 nova_compute[265391]: 2025-09-30 18:21:35.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:35 compute-0 nova_compute[265391]: 2025-09-30 18:21:35.713 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:21:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:21:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:21:35.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:21:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:35 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:21:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:35 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:21:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:36 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398004250 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:36 compute-0 nova_compute[265391]: 2025-09-30 18:21:36.225 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:21:36 compute-0 nova_compute[265391]: 2025-09-30 18:21:36.225 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.612s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:21:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:21:36.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:21:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1289: 353 pgs: 353 active+clean; 41 MiB data, 242 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Sep 30 18:21:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/904585573' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:21:36 compute-0 ceph-mon[73755]: pgmap v1289: 353 pgs: 353 active+clean; 41 MiB data, 242 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Sep 30 18:21:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:21:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2895475536' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:21:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:21:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2895475536' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:21:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:37 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400aaf0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:21:37.222Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:21:37 compute-0 nova_compute[265391]: 2025-09-30 18:21:37.225 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:21:37 compute-0 nova_compute[265391]: 2025-09-30 18:21:37.226 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:21:37 compute-0 nova_compute[265391]: 2025-09-30 18:21:37.226 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:21:37 compute-0 nova_compute[265391]: 2025-09-30 18:21:37.226 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:21:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:21:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:21:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:21:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:21:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:21:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:21:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:21:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:21:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2895475536' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:21:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2895475536' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:21:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:21:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:21:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:21:37.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:21:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038400bca0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:21:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:21:38.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:21:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1290: 353 pgs: 353 active+clean; 41 MiB data, 242 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Sep 30 18:21:38 compute-0 nova_compute[265391]: 2025-09-30 18:21:38.424 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:21:38 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/447436546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:21:38 compute-0 ceph-mon[73755]: pgmap v1290: 353 pgs: 353 active+clean; 41 MiB data, 242 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Sep 30 18:21:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:21:38] "GET /metrics HTTP/1.1" 200 46631 "" "Prometheus/2.51.0"
Sep 30 18:21:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:21:38] "GET /metrics HTTP/1.1" 200 46631 "" "Prometheus/2.51.0"
Sep 30 18:21:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 18:21:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:39 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038400bca0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:39 compute-0 nova_compute[265391]: 2025-09-30 18:21:39.185 2 DEBUG nova.compute.manager [None req-090cd9a0-e4be-4d96-bcbb-743993eef154 e33f9dc9fbb84319b00517567fe4b47e 4e2dde567e5c4b1c9802c64cfc281b6d - - default default] Removing trait COMPUTE_STATUS_DISABLED from compute node resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 in placement. update_compute_provider_status /usr/lib/python3.12/site-packages/nova/compute/manager.py:631
Sep 30 18:21:39 compute-0 nova_compute[265391]: 2025-09-30 18:21:39.226 2 DEBUG nova.compute.provider_tree [None req-090cd9a0-e4be-4d96-bcbb-743993eef154 e33f9dc9fbb84319b00517567fe4b47e 4e2dde567e5c4b1c9802c64cfc281b6d - - default default] Updating resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 generation from 19 to 22 during operation: update_traits _update_generation /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:164
Sep 30 18:21:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:21:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:21:39.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:21:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:40 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398004b70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:21:40.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:21:40 compute-0 nova_compute[265391]: 2025-09-30 18:21:40.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1291: 353 pgs: 353 active+clean; 41 MiB data, 242 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 18:21:40 compute-0 ceph-mon[73755]: pgmap v1291: 353 pgs: 353 active+clean; 41 MiB data, 242 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 18:21:40 compute-0 nova_compute[265391]: 2025-09-30 18:21:40.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:21:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:41 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400aaf0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:21:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:21:41.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:21:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:42 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038400bca0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:21:42.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:21:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1292: 353 pgs: 353 active+clean; 41 MiB data, 242 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 18:21:42 compute-0 ceph-mon[73755]: pgmap v1292: 353 pgs: 353 active+clean; 41 MiB data, 242 MiB used, 40 GiB / 40 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s
Sep 30 18:21:42 compute-0 podman[317868]: 2025-09-30 18:21:42.544254173 +0000 UTC m=+0.071116169 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=iscsid)
Sep 30 18:21:42 compute-0 podman[317869]: 2025-09-30 18:21:42.568285908 +0000 UTC m=+0.086244203 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., name=ubi9-minimal, config_id=edpm, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, release=1755695350, distribution-scope=public, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6)
Sep 30 18:21:42 compute-0 podman[317867]: 2025-09-30 18:21:42.56913715 +0000 UTC m=+0.098464270 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, config_id=multipathd, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:21:42 compute-0 nova_compute[265391]: 2025-09-30 18:21:42.834 2 DEBUG oslo_concurrency.lockutils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Acquiring lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:21:42 compute-0 nova_compute[265391]: 2025-09-30 18:21:42.834 2 DEBUG oslo_concurrency.lockutils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:21:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:43 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002340 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:43 compute-0 nova_compute[265391]: 2025-09-30 18:21:43.342 2 DEBUG nova.compute.manager [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Sep 30 18:21:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:21:43.748Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:21:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:21:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:21:43.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:21:43 compute-0 nova_compute[265391]: 2025-09-30 18:21:43.889 2 DEBUG oslo_concurrency.lockutils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:21:43 compute-0 nova_compute[265391]: 2025-09-30 18:21:43.890 2 DEBUG oslo_concurrency.lockutils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:21:43 compute-0 nova_compute[265391]: 2025-09-30 18:21:43.895 2 DEBUG nova.virt.hardware [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Sep 30 18:21:43 compute-0 nova_compute[265391]: 2025-09-30 18:21:43.895 2 INFO nova.compute.claims [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Claim successful on node compute-0.ctlplane.example.com
Sep 30 18:21:43 compute-0 sudo[317930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:21:43 compute-0 sudo[317930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:21:43 compute-0 sudo[317930]: pam_unix(sudo:session): session closed for user root
Sep 30 18:21:44 compute-0 sudo[317955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:21:44 compute-0 sudo[317955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:21:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:44 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002970 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:21:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:21:44.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:21:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1293: 353 pgs: 353 active+clean; 41 MiB data, 242 MiB used, 40 GiB / 40 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 18:21:44 compute-0 nova_compute[265391]: 2025-09-30 18:21:44.423 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:21:44 compute-0 ceph-mon[73755]: pgmap v1293: 353 pgs: 353 active+clean; 41 MiB data, 242 MiB used, 40 GiB / 40 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 18:21:44 compute-0 sudo[317955]: pam_unix(sudo:session): session closed for user root
Sep 30 18:21:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Sep 30 18:21:44 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Sep 30 18:21:45 compute-0 nova_compute[265391]: 2025-09-30 18:21:45.034 2 DEBUG oslo_concurrency.processutils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:21:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:45 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398004b70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/182145 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 18:21:45 compute-0 nova_compute[265391]: 2025-09-30 18:21:45.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:45 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Sep 30 18:21:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:21:45 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3815262515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:21:45 compute-0 nova_compute[265391]: 2025-09-30 18:21:45.557 2 DEBUG oslo_concurrency.processutils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:21:45 compute-0 nova_compute[265391]: 2025-09-30 18:21:45.566 2 DEBUG nova.compute.provider_tree [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:21:45 compute-0 nova_compute[265391]: 2025-09-30 18:21:45.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:21:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:21:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:21:45.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:21:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:46 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038400bca0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:46 compute-0 nova_compute[265391]: 2025-09-30 18:21:46.084 2 DEBUG nova.scheduler.client.report [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:21:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:21:46.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:21:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1294: 353 pgs: 353 active+clean; 41 MiB data, 242 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Sep 30 18:21:46 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3815262515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:21:46 compute-0 ceph-mon[73755]: pgmap v1294: 353 pgs: 353 active+clean; 41 MiB data, 242 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Sep 30 18:21:46 compute-0 nova_compute[265391]: 2025-09-30 18:21:46.597 2 DEBUG oslo_concurrency.lockutils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.707s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:21:46 compute-0 nova_compute[265391]: 2025-09-30 18:21:46.598 2 DEBUG nova.compute.manager [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Sep 30 18:21:47 compute-0 nova_compute[265391]: 2025-09-30 18:21:47.112 2 DEBUG nova.compute.manager [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Sep 30 18:21:47 compute-0 nova_compute[265391]: 2025-09-30 18:21:47.112 2 DEBUG nova.network.neutron [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Sep 30 18:21:47 compute-0 nova_compute[265391]: 2025-09-30 18:21:47.113 2 WARNING neutronclient.v2_0.client [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:21:47 compute-0 nova_compute[265391]: 2025-09-30 18:21:47.113 2 WARNING neutronclient.v2_0.client [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:21:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:47 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002340 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:21:47.223Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:21:47 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 18:21:47 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:21:47 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 18:21:47 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:21:47 compute-0 nova_compute[265391]: 2025-09-30 18:21:47.622 2 INFO nova.virt.libvirt.driver [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Sep 30 18:21:47 compute-0 nova_compute[265391]: 2025-09-30 18:21:47.684 2 DEBUG nova.network.neutron [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Successfully created port: c08e30da-2028-4b45-9b18-b77d81894e93 _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Sep 30 18:21:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:21:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:21:47.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:21:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:48 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002970 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:48 compute-0 nova_compute[265391]: 2025-09-30 18:21:48.136 2 DEBUG nova.compute.manager [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Sep 30 18:21:48 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Sep 30 18:21:48 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Sep 30 18:21:48 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:21:48 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:21:48 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:21:48 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:21:48 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:21:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:21:48.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:21:48 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:21:48 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:21:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1295: 353 pgs: 353 active+clean; 41 MiB data, 242 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Sep 30 18:21:48 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:21:48 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:21:48 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:21:48 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:21:48 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:21:48 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:21:48 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:21:48 compute-0 sudo[318037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:21:48 compute-0 sudo[318037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:21:48 compute-0 sudo[318037]: pam_unix(sudo:session): session closed for user root
Sep 30 18:21:48 compute-0 sudo[318062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:21:48 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:21:48 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:21:48 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Sep 30 18:21:48 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:21:48 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:21:48 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:21:48 compute-0 ceph-mon[73755]: pgmap v1295: 353 pgs: 353 active+clean; 41 MiB data, 242 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Sep 30 18:21:48 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:21:48 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:21:48 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:21:48 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:21:48 compute-0 sudo[318062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:21:48 compute-0 nova_compute[265391]: 2025-09-30 18:21:48.683 2 DEBUG nova.network.neutron [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Successfully updated port: c08e30da-2028-4b45-9b18-b77d81894e93 _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Sep 30 18:21:48 compute-0 nova_compute[265391]: 2025-09-30 18:21:48.745 2 DEBUG nova.compute.manager [req-87e89ebf-6acc-43a7-8c24-849e7d6def36 req-a6bbdb54-a28a-40fa-b2ce-8e0c9db3a584 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Received event network-changed-c08e30da-2028-4b45-9b18-b77d81894e93 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:21:48 compute-0 nova_compute[265391]: 2025-09-30 18:21:48.746 2 DEBUG nova.compute.manager [req-87e89ebf-6acc-43a7-8c24-849e7d6def36 req-a6bbdb54-a28a-40fa-b2ce-8e0c9db3a584 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Refreshing instance network info cache due to event network-changed-c08e30da-2028-4b45-9b18-b77d81894e93. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:21:48 compute-0 nova_compute[265391]: 2025-09-30 18:21:48.746 2 DEBUG oslo_concurrency.lockutils [req-87e89ebf-6acc-43a7-8c24-849e7d6def36 req-a6bbdb54-a28a-40fa-b2ce-8e0c9db3a584 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-71a2a65c-86a0-4257-9bd1-1cd4e706fb69" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:21:48 compute-0 nova_compute[265391]: 2025-09-30 18:21:48.746 2 DEBUG oslo_concurrency.lockutils [req-87e89ebf-6acc-43a7-8c24-849e7d6def36 req-a6bbdb54-a28a-40fa-b2ce-8e0c9db3a584 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-71a2a65c-86a0-4257-9bd1-1cd4e706fb69" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:21:48 compute-0 nova_compute[265391]: 2025-09-30 18:21:48.746 2 DEBUG nova.network.neutron [req-87e89ebf-6acc-43a7-8c24-849e7d6def36 req-a6bbdb54-a28a-40fa-b2ce-8e0c9db3a584 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Refreshing network info cache for port c08e30da-2028-4b45-9b18-b77d81894e93 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:21:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:21:48] "GET /metrics HTTP/1.1" 200 46631 "" "Prometheus/2.51.0"
Sep 30 18:21:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:21:48] "GET /metrics HTTP/1.1" 200 46631 "" "Prometheus/2.51.0"
Sep 30 18:21:48 compute-0 podman[318126]: 2025-09-30 18:21:48.894485886 +0000 UTC m=+0.049651871 container create 7c0a3a37335dd219eaad17e523e46269769a1de8c6e003aeb6cf264940bc90b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_galileo, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:21:48 compute-0 systemd[1]: Started libpod-conmon-7c0a3a37335dd219eaad17e523e46269769a1de8c6e003aeb6cf264940bc90b4.scope.
Sep 30 18:21:48 compute-0 podman[318126]: 2025-09-30 18:21:48.869931548 +0000 UTC m=+0.025097563 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:21:48 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:21:49 compute-0 podman[318126]: 2025-09-30 18:21:49.019381452 +0000 UTC m=+0.174547527 container init 7c0a3a37335dd219eaad17e523e46269769a1de8c6e003aeb6cf264940bc90b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_galileo, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:21:49 compute-0 podman[318126]: 2025-09-30 18:21:49.029542707 +0000 UTC m=+0.184708732 container start 7c0a3a37335dd219eaad17e523e46269769a1de8c6e003aeb6cf264940bc90b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_galileo, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Sep 30 18:21:49 compute-0 podman[318126]: 2025-09-30 18:21:49.033714995 +0000 UTC m=+0.188881020 container attach 7c0a3a37335dd219eaad17e523e46269769a1de8c6e003aeb6cf264940bc90b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 18:21:49 compute-0 beautiful_galileo[318142]: 167 167
Sep 30 18:21:49 compute-0 systemd[1]: libpod-7c0a3a37335dd219eaad17e523e46269769a1de8c6e003aeb6cf264940bc90b4.scope: Deactivated successfully.
Sep 30 18:21:49 compute-0 podman[318126]: 2025-09-30 18:21:49.040735248 +0000 UTC m=+0.195901233 container died 7c0a3a37335dd219eaad17e523e46269769a1de8c6e003aeb6cf264940bc90b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_galileo, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:21:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc17d7de894a8a98c258b7ad0a61ef4046efbb94cfd274c7ae08da6f09567489-merged.mount: Deactivated successfully.
Sep 30 18:21:49 compute-0 podman[318126]: 2025-09-30 18:21:49.093302504 +0000 UTC m=+0.248468509 container remove 7c0a3a37335dd219eaad17e523e46269769a1de8c6e003aeb6cf264940bc90b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_galileo, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Sep 30 18:21:49 compute-0 systemd[1]: libpod-conmon-7c0a3a37335dd219eaad17e523e46269769a1de8c6e003aeb6cf264940bc90b4.scope: Deactivated successfully.
Sep 30 18:21:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:49 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398004b70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:49 compute-0 nova_compute[265391]: 2025-09-30 18:21:49.167 2 DEBUG nova.compute.manager [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Sep 30 18:21:49 compute-0 nova_compute[265391]: 2025-09-30 18:21:49.169 2 DEBUG nova.virt.libvirt.driver [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Sep 30 18:21:49 compute-0 nova_compute[265391]: 2025-09-30 18:21:49.169 2 INFO nova.virt.libvirt.driver [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Creating image(s)
Sep 30 18:21:49 compute-0 nova_compute[265391]: 2025-09-30 18:21:49.203 2 DEBUG nova.storage.rbd_utils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] rbd image 71a2a65c-86a0-4257-9bd1-1cd4e706fb69_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:21:49 compute-0 nova_compute[265391]: 2025-09-30 18:21:49.248 2 DEBUG nova.storage.rbd_utils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] rbd image 71a2a65c-86a0-4257-9bd1-1cd4e706fb69_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:21:49 compute-0 nova_compute[265391]: 2025-09-30 18:21:49.287 2 DEBUG nova.storage.rbd_utils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] rbd image 71a2a65c-86a0-4257-9bd1-1cd4e706fb69_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:21:49 compute-0 nova_compute[265391]: 2025-09-30 18:21:49.294 2 DEBUG oslo_concurrency.processutils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:21:49 compute-0 nova_compute[265391]: 2025-09-30 18:21:49.308 2 DEBUG oslo_concurrency.lockutils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Acquiring lock "refresh_cache-71a2a65c-86a0-4257-9bd1-1cd4e706fb69" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:21:49 compute-0 nova_compute[265391]: 2025-09-30 18:21:49.311 2 WARNING neutronclient.v2_0.client [req-87e89ebf-6acc-43a7-8c24-849e7d6def36 req-a6bbdb54-a28a-40fa-b2ce-8e0c9db3a584 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:21:49 compute-0 podman[318218]: 2025-09-30 18:21:49.338296401 +0000 UTC m=+0.051582811 container create 6254e7162a804a45557311e05111bdbcab3eb0240685bba813b8642df30b2fa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_maxwell, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:21:49 compute-0 systemd[1]: Started libpod-conmon-6254e7162a804a45557311e05111bdbcab3eb0240685bba813b8642df30b2fa7.scope.
Sep 30 18:21:49 compute-0 nova_compute[265391]: 2025-09-30 18:21:49.377 2 DEBUG oslo_concurrency.processutils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:21:49 compute-0 nova_compute[265391]: 2025-09-30 18:21:49.378 2 DEBUG oslo_concurrency.lockutils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Acquiring lock "cb2d580238c9b109feae7f1462613dc547671457" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:21:49 compute-0 nova_compute[265391]: 2025-09-30 18:21:49.379 2 DEBUG oslo_concurrency.lockutils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:21:49 compute-0 nova_compute[265391]: 2025-09-30 18:21:49.379 2 DEBUG oslo_concurrency.lockutils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:21:49 compute-0 nova_compute[265391]: 2025-09-30 18:21:49.406 2 DEBUG nova.storage.rbd_utils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] rbd image 71a2a65c-86a0-4257-9bd1-1cd4e706fb69_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:21:49 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:21:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/702b58a6c79a0f7dacd06563eb06f54183cd9b448bcf71b80d0fd074253078d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:21:49 compute-0 podman[318218]: 2025-09-30 18:21:49.318972239 +0000 UTC m=+0.032258669 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:21:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/702b58a6c79a0f7dacd06563eb06f54183cd9b448bcf71b80d0fd074253078d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:21:49 compute-0 nova_compute[265391]: 2025-09-30 18:21:49.415 2 DEBUG oslo_concurrency.processutils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 71a2a65c-86a0-4257-9bd1-1cd4e706fb69_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:21:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/702b58a6c79a0f7dacd06563eb06f54183cd9b448bcf71b80d0fd074253078d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:21:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/702b58a6c79a0f7dacd06563eb06f54183cd9b448bcf71b80d0fd074253078d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:21:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/702b58a6c79a0f7dacd06563eb06f54183cd9b448bcf71b80d0fd074253078d2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:21:49 compute-0 podman[318218]: 2025-09-30 18:21:49.427955951 +0000 UTC m=+0.141242361 container init 6254e7162a804a45557311e05111bdbcab3eb0240685bba813b8642df30b2fa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_maxwell, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 18:21:49 compute-0 podman[318218]: 2025-09-30 18:21:49.438131666 +0000 UTC m=+0.151418076 container start 6254e7162a804a45557311e05111bdbcab3eb0240685bba813b8642df30b2fa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Sep 30 18:21:49 compute-0 podman[318218]: 2025-09-30 18:21:49.441590326 +0000 UTC m=+0.154876726 container attach 6254e7162a804a45557311e05111bdbcab3eb0240685bba813b8642df30b2fa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:21:49 compute-0 nova_compute[265391]: 2025-09-30 18:21:49.465 2 DEBUG nova.network.neutron [req-87e89ebf-6acc-43a7-8c24-849e7d6def36 req-a6bbdb54-a28a-40fa-b2ce-8e0c9db3a584 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:21:49 compute-0 nova_compute[265391]: 2025-09-30 18:21:49.623 2 DEBUG nova.network.neutron [req-87e89ebf-6acc-43a7-8c24-849e7d6def36 req-a6bbdb54-a28a-40fa-b2ce-8e0c9db3a584 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:21:49 compute-0 nova_compute[265391]: 2025-09-30 18:21:49.717 2 DEBUG oslo_concurrency.processutils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 71a2a65c-86a0-4257-9bd1-1cd4e706fb69_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.301s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:21:49 compute-0 nova_compute[265391]: 2025-09-30 18:21:49.782 2 DEBUG nova.storage.rbd_utils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] resizing rbd image 71a2a65c-86a0-4257-9bd1-1cd4e706fb69_disk to 1073741824 resize /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:288
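The two nova_compute entries above show the libvirt image backend importing the cached base image into the Ceph "vms" pool and then resizing the new instance disk to the flavor's 1 GiB root size. A rough sketch of those two steps driven from Python via the rbd CLI follows; the pool, client id, conf path, and image names are copied from the log, while using the CLI for the resize (nova actually goes through librbd in rbd_utils) and the script itself are assumptions for illustration.

import subprocess

BASE = "/var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457"
DISK = "71a2a65c-86a0-4257-9bd1-1cd4e706fb69_disk"
CEPH_ARGS = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

# Import the flat base image as a format-2 RBD image in the "vms" pool,
# matching the CMD recorded by oslo_concurrency.processutils above.
subprocess.run(["rbd", "import", "--pool", "vms", BASE, DISK,
                "--image-format=2", *CEPH_ARGS], check=True)

# Grow the imported image to the 1 GiB (1024 MB) root disk requested by the
# flavor; the CLI call stands in for the librbd resize nova performs.
subprocess.run(["rbd", "resize", "--pool", "vms", "--size", "1024", DISK,
                *CEPH_ARGS], check=True)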
Sep 30 18:21:49 compute-0 compassionate_maxwell[318238]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:21:49 compute-0 compassionate_maxwell[318238]: --> All data devices are unavailable
Sep 30 18:21:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:21:49.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
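The radosgw "beast" access lines here and below record anonymous HEAD / requests arriving roughly once per second from 192.168.122.100 and 192.168.122.101, the pattern of a load-balancer health probe rather than S3 traffic. A minimal probe of the same shape is sketched below; the target host and port are assumptions, since the listener address is not visible in these entries.

import http.client

# Hypothetical health probe: issue an anonymous HEAD / against radosgw and
# report the HTTP status, mirroring the 200 responses logged by beast.
conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)  # port assumed
conn.request("HEAD", "/")
print(conn.getresponse().status)
conn.close()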
Sep 30 18:21:49 compute-0 systemd[1]: libpod-6254e7162a804a45557311e05111bdbcab3eb0240685bba813b8642df30b2fa7.scope: Deactivated successfully.
Sep 30 18:21:49 compute-0 podman[318218]: 2025-09-30 18:21:49.858611764 +0000 UTC m=+0.571898214 container died 6254e7162a804a45557311e05111bdbcab3eb0240685bba813b8642df30b2fa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_maxwell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:21:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-702b58a6c79a0f7dacd06563eb06f54183cd9b448bcf71b80d0fd074253078d2-merged.mount: Deactivated successfully.
Sep 30 18:21:49 compute-0 podman[318218]: 2025-09-30 18:21:49.905742589 +0000 UTC m=+0.619028999 container remove 6254e7162a804a45557311e05111bdbcab3eb0240685bba813b8642df30b2fa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_maxwell, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:21:49 compute-0 nova_compute[265391]: 2025-09-30 18:21:49.907 2 DEBUG nova.virt.libvirt.driver [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Sep 30 18:21:49 compute-0 nova_compute[265391]: 2025-09-30 18:21:49.908 2 DEBUG nova.virt.libvirt.driver [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Ensure instance console log exists: /var/lib/nova/instances/71a2a65c-86a0-4257-9bd1-1cd4e706fb69/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Sep 30 18:21:49 compute-0 nova_compute[265391]: 2025-09-30 18:21:49.909 2 DEBUG oslo_concurrency.lockutils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:21:49 compute-0 nova_compute[265391]: 2025-09-30 18:21:49.910 2 DEBUG oslo_concurrency.lockutils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:21:49 compute-0 nova_compute[265391]: 2025-09-30 18:21:49.910 2 DEBUG oslo_concurrency.lockutils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:21:49 compute-0 systemd[1]: libpod-conmon-6254e7162a804a45557311e05111bdbcab3eb0240685bba813b8642df30b2fa7.scope: Deactivated successfully.
Sep 30 18:21:49 compute-0 sudo[318062]: pam_unix(sudo:session): session closed for user root
Sep 30 18:21:50 compute-0 sudo[318378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:21:50 compute-0 sudo[318378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:21:50 compute-0 sudo[318378]: pam_unix(sudo:session): session closed for user root
Sep 30 18:21:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:50 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038400bca0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:50 compute-0 sudo[318403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:21:50 compute-0 sudo[318403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:21:50 compute-0 nova_compute[265391]: 2025-09-30 18:21:50.133 2 DEBUG oslo_concurrency.lockutils [req-87e89ebf-6acc-43a7-8c24-849e7d6def36 req-a6bbdb54-a28a-40fa-b2ce-8e0c9db3a584 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-71a2a65c-86a0-4257-9bd1-1cd4e706fb69" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:21:50 compute-0 nova_compute[265391]: 2025-09-30 18:21:50.134 2 DEBUG oslo_concurrency.lockutils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Acquired lock "refresh_cache-71a2a65c-86a0-4257-9bd1-1cd4e706fb69" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:21:50 compute-0 nova_compute[265391]: 2025-09-30 18:21:50.134 2 DEBUG nova.network.neutron [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:21:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:21:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:21:50.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:21:50 compute-0 nova_compute[265391]: 2025-09-30 18:21:50.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1296: 353 pgs: 353 active+clean; 41 MiB data, 242 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Sep 30 18:21:50 compute-0 podman[318470]: 2025-09-30 18:21:50.525897227 +0000 UTC m=+0.070324838 container create 63c03184e521470e8f87f9ab116630badbca0986dc897a74ed20eacc32158236 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hofstadter, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:21:50 compute-0 systemd[1]: Started libpod-conmon-63c03184e521470e8f87f9ab116630badbca0986dc897a74ed20eacc32158236.scope.
Sep 30 18:21:50 compute-0 nova_compute[265391]: 2025-09-30 18:21:50.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:50 compute-0 podman[318470]: 2025-09-30 18:21:50.498841394 +0000 UTC m=+0.043269095 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:21:50 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:21:50 compute-0 podman[318470]: 2025-09-30 18:21:50.63681786 +0000 UTC m=+0.181245561 container init 63c03184e521470e8f87f9ab116630badbca0986dc897a74ed20eacc32158236 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hofstadter, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:21:50 compute-0 podman[318470]: 2025-09-30 18:21:50.646463331 +0000 UTC m=+0.190890932 container start 63c03184e521470e8f87f9ab116630badbca0986dc897a74ed20eacc32158236 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:21:50 compute-0 podman[318470]: 2025-09-30 18:21:50.651132272 +0000 UTC m=+0.195559963 container attach 63c03184e521470e8f87f9ab116630badbca0986dc897a74ed20eacc32158236 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hofstadter, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:21:50 compute-0 dreamy_hofstadter[318486]: 167 167
Sep 30 18:21:50 compute-0 systemd[1]: libpod-63c03184e521470e8f87f9ab116630badbca0986dc897a74ed20eacc32158236.scope: Deactivated successfully.
Sep 30 18:21:50 compute-0 podman[318470]: 2025-09-30 18:21:50.655928277 +0000 UTC m=+0.200355938 container died 63c03184e521470e8f87f9ab116630badbca0986dc897a74ed20eacc32158236 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:21:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bdaf6a9b55d352527896746563fa97bb789e95d81fdac4a29b197b8d4f7b4e0-merged.mount: Deactivated successfully.
Sep 30 18:21:50 compute-0 podman[318470]: 2025-09-30 18:21:50.711275645 +0000 UTC m=+0.255703256 container remove 63c03184e521470e8f87f9ab116630badbca0986dc897a74ed20eacc32158236 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hofstadter, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 18:21:50 compute-0 systemd[1]: libpod-conmon-63c03184e521470e8f87f9ab116630badbca0986dc897a74ed20eacc32158236.scope: Deactivated successfully.
Sep 30 18:21:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:21:50 compute-0 podman[318509]: 2025-09-30 18:21:50.931222222 +0000 UTC m=+0.055773331 container create 6f636d558ba4941dbd934f30a1bbc9547306c279cd7158540d213e39e65db730 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:21:50 compute-0 systemd[1]: Started libpod-conmon-6f636d558ba4941dbd934f30a1bbc9547306c279cd7158540d213e39e65db730.scope.
Sep 30 18:21:50 compute-0 podman[318509]: 2025-09-30 18:21:50.905600496 +0000 UTC m=+0.030151605 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:21:51 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:21:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d11fa74236c9e47738b6ef0833e9a86fec71da5db014270a086d0ea68f638fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:21:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d11fa74236c9e47738b6ef0833e9a86fec71da5db014270a086d0ea68f638fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:21:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d11fa74236c9e47738b6ef0833e9a86fec71da5db014270a086d0ea68f638fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:21:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d11fa74236c9e47738b6ef0833e9a86fec71da5db014270a086d0ea68f638fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:21:51 compute-0 podman[318509]: 2025-09-30 18:21:51.042933145 +0000 UTC m=+0.167484234 container init 6f636d558ba4941dbd934f30a1bbc9547306c279cd7158540d213e39e65db730 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_nobel, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:21:51 compute-0 podman[318509]: 2025-09-30 18:21:51.056702933 +0000 UTC m=+0.181254012 container start 6f636d558ba4941dbd934f30a1bbc9547306c279cd7158540d213e39e65db730 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_nobel, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:21:51 compute-0 podman[318509]: 2025-09-30 18:21:51.060726518 +0000 UTC m=+0.185277607 container attach 6f636d558ba4941dbd934f30a1bbc9547306c279cd7158540d213e39e65db730 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_nobel, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:21:51 compute-0 nova_compute[265391]: 2025-09-30 18:21:51.146 2 DEBUG nova.network.neutron [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:21:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:51 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4002340 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:51 compute-0 ceph-mon[73755]: pgmap v1296: 353 pgs: 353 active+clean; 41 MiB data, 242 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Sep 30 18:21:51 compute-0 sleepy_nobel[318525]: {
Sep 30 18:21:51 compute-0 sleepy_nobel[318525]:     "0": [
Sep 30 18:21:51 compute-0 sleepy_nobel[318525]:         {
Sep 30 18:21:51 compute-0 sleepy_nobel[318525]:             "devices": [
Sep 30 18:21:51 compute-0 sleepy_nobel[318525]:                 "/dev/loop3"
Sep 30 18:21:51 compute-0 sleepy_nobel[318525]:             ],
Sep 30 18:21:51 compute-0 sleepy_nobel[318525]:             "lv_name": "ceph_lv0",
Sep 30 18:21:51 compute-0 sleepy_nobel[318525]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:21:51 compute-0 sleepy_nobel[318525]:             "lv_size": "21470642176",
Sep 30 18:21:51 compute-0 sleepy_nobel[318525]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:21:51 compute-0 sleepy_nobel[318525]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:21:51 compute-0 sleepy_nobel[318525]:             "name": "ceph_lv0",
Sep 30 18:21:51 compute-0 sleepy_nobel[318525]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:21:51 compute-0 sleepy_nobel[318525]:             "tags": {
Sep 30 18:21:51 compute-0 sleepy_nobel[318525]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:21:51 compute-0 sleepy_nobel[318525]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:21:51 compute-0 sleepy_nobel[318525]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:21:51 compute-0 sleepy_nobel[318525]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:21:51 compute-0 sleepy_nobel[318525]:                 "ceph.cluster_name": "ceph",
Sep 30 18:21:51 compute-0 sleepy_nobel[318525]:                 "ceph.crush_device_class": "",
Sep 30 18:21:51 compute-0 sleepy_nobel[318525]:                 "ceph.encrypted": "0",
Sep 30 18:21:51 compute-0 sleepy_nobel[318525]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:21:51 compute-0 sleepy_nobel[318525]:                 "ceph.osd_id": "0",
Sep 30 18:21:51 compute-0 sleepy_nobel[318525]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:21:51 compute-0 sleepy_nobel[318525]:                 "ceph.type": "block",
Sep 30 18:21:51 compute-0 sleepy_nobel[318525]:                 "ceph.vdo": "0",
Sep 30 18:21:51 compute-0 sleepy_nobel[318525]:                 "ceph.with_tpm": "0"
Sep 30 18:21:51 compute-0 sleepy_nobel[318525]:             },
Sep 30 18:21:51 compute-0 sleepy_nobel[318525]:             "type": "block",
Sep 30 18:21:51 compute-0 sleepy_nobel[318525]:             "vg_name": "ceph_vg0"
Sep 30 18:21:51 compute-0 sleepy_nobel[318525]:         }
Sep 30 18:21:51 compute-0 sleepy_nobel[318525]:     ]
Sep 30 18:21:51 compute-0 sleepy_nobel[318525]: }
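The JSON block printed by the sleepy_nobel container above is the output of the "ceph-volume ... lvm list --format json" call that cephadm launched a few entries earlier: a mapping from OSD id to the logical volumes backing it, with the ceph.* LV tags expanded. A short parser for that structure, assumed here purely for illustration (loading the report from a file is part of the assumption):

import json

# `report` holds the JSON shown above.
with open("lvm_list.json") as fh:
    report = json.load(fh)

for osd_id, volumes in report.items():
    for vol in volumes:
        tags = vol.get("tags", {})
        print(f"osd.{osd_id}: lv={vol['lv_path']} "
              f"devices={','.join(vol['devices'])} "
              f"osd_fsid={tags.get('ceph.osd_fsid', '?')}")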
Sep 30 18:21:51 compute-0 nova_compute[265391]: 2025-09-30 18:21:51.387 2 WARNING neutronclient.v2_0.client [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:21:51 compute-0 systemd[1]: libpod-6f636d558ba4941dbd934f30a1bbc9547306c279cd7158540d213e39e65db730.scope: Deactivated successfully.
Sep 30 18:21:51 compute-0 conmon[318525]: conmon 6f636d558ba4941dbd93 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6f636d558ba4941dbd934f30a1bbc9547306c279cd7158540d213e39e65db730.scope/container/memory.events
Sep 30 18:21:51 compute-0 podman[318509]: 2025-09-30 18:21:51.42748911 +0000 UTC m=+0.552040169 container died 6f636d558ba4941dbd934f30a1bbc9547306c279cd7158540d213e39e65db730 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:21:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d11fa74236c9e47738b6ef0833e9a86fec71da5db014270a086d0ea68f638fd-merged.mount: Deactivated successfully.
Sep 30 18:21:51 compute-0 podman[318509]: 2025-09-30 18:21:51.479599535 +0000 UTC m=+0.604150614 container remove 6f636d558ba4941dbd934f30a1bbc9547306c279cd7158540d213e39e65db730 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Sep 30 18:21:51 compute-0 systemd[1]: libpod-conmon-6f636d558ba4941dbd934f30a1bbc9547306c279cd7158540d213e39e65db730.scope: Deactivated successfully.
Sep 30 18:21:51 compute-0 sudo[318403]: pam_unix(sudo:session): session closed for user root
Sep 30 18:21:51 compute-0 sudo[318546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:21:51 compute-0 sudo[318546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:21:51 compute-0 sudo[318546]: pam_unix(sudo:session): session closed for user root
Sep 30 18:21:51 compute-0 sudo[318571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:21:51 compute-0 sudo[318571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:21:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:21:51.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:21:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:52 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002970 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:52 compute-0 podman[318637]: 2025-09-30 18:21:52.19160439 +0000 UTC m=+0.040368230 container create 99cb5a674bacf60f9c3b34019da9f08a658c9101053271fe8bda385efc78283a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 18:21:52 compute-0 nova_compute[265391]: 2025-09-30 18:21:52.203 2 DEBUG nova.network.neutron [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Updating instance_info_cache with network_info: [{"id": "c08e30da-2028-4b45-9b18-b77d81894e93", "address": "fa:16:3e:ab:60:7a", "network": {"id": "443be7ca-f628-4a45-95b6-620d37172d7b", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-1888091317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "269f60e72ce1460a98da519466c89da6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc08e30da-20", "ovs_interfaceid": "c08e30da-2028-4b45-9b18-b77d81894e93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
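The instance_info_cache update above carries the full Neutron VIF description for port c08e30da-2028-4b45-9b18-b77d81894e93 (OVS/OVN, MAC fa:16:3e:ab:60:7a, fixed IP 10.100.0.12 on 10.100.0.0/28). A small sketch of how such a network_info list can be summarized once the logged value is parsed as JSON; the summarize helper is illustrative and not part of nova.

def summarize(network_info):
    # network_info: the parsed JSON list logged above (a list of VIF dicts).
    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["id"], vif["address"], vif["network"]["label"], ips)

# Usage would be: summarize(json.loads(cached_network_info_string))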
Sep 30 18:21:52 compute-0 systemd[1]: Started libpod-conmon-99cb5a674bacf60f9c3b34019da9f08a658c9101053271fe8bda385efc78283a.scope.
Sep 30 18:21:52 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:21:52 compute-0 podman[318637]: 2025-09-30 18:21:52.176020035 +0000 UTC m=+0.024783895 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:21:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:21:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:21:52.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:21:52 compute-0 podman[318637]: 2025-09-30 18:21:52.291448215 +0000 UTC m=+0.140212075 container init 99cb5a674bacf60f9c3b34019da9f08a658c9101053271fe8bda385efc78283a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_germain, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:21:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1297: 353 pgs: 353 active+clean; 41 MiB data, 242 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 18:21:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:21:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:21:52 compute-0 podman[318637]: 2025-09-30 18:21:52.304248458 +0000 UTC m=+0.153012308 container start 99cb5a674bacf60f9c3b34019da9f08a658c9101053271fe8bda385efc78283a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_germain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:21:52 compute-0 podman[318637]: 2025-09-30 18:21:52.308688233 +0000 UTC m=+0.157452113 container attach 99cb5a674bacf60f9c3b34019da9f08a658c9101053271fe8bda385efc78283a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_germain, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:21:52 compute-0 charming_germain[318653]: 167 167
Sep 30 18:21:52 compute-0 systemd[1]: libpod-99cb5a674bacf60f9c3b34019da9f08a658c9101053271fe8bda385efc78283a.scope: Deactivated successfully.
Sep 30 18:21:52 compute-0 podman[318637]: 2025-09-30 18:21:52.312446631 +0000 UTC m=+0.161210481 container died 99cb5a674bacf60f9c3b34019da9f08a658c9101053271fe8bda385efc78283a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_germain, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:21:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:21:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd2d2b09efbd4c75cea25ca592b145a39642690b8a0dcd257772f133f1bccdcc-merged.mount: Deactivated successfully.
Sep 30 18:21:52 compute-0 podman[318637]: 2025-09-30 18:21:52.366237478 +0000 UTC m=+0.215001328 container remove 99cb5a674bacf60f9c3b34019da9f08a658c9101053271fe8bda385efc78283a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_germain, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:21:52 compute-0 systemd[1]: libpod-conmon-99cb5a674bacf60f9c3b34019da9f08a658c9101053271fe8bda385efc78283a.scope: Deactivated successfully.
Sep 30 18:21:52 compute-0 podman[318678]: 2025-09-30 18:21:52.578447103 +0000 UTC m=+0.055247727 container create 6e62a65f728629ecb8122602306e5f8706ee0f7fe93e630d883c4d7b6e3777ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True)
Sep 30 18:21:52 compute-0 systemd[1]: Started libpod-conmon-6e62a65f728629ecb8122602306e5f8706ee0f7fe93e630d883c4d7b6e3777ee.scope.
Sep 30 18:21:52 compute-0 podman[318678]: 2025-09-30 18:21:52.554383498 +0000 UTC m=+0.031184102 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:21:52 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84c7a85b99d51c4e6abc32d76905fa0f35e576256d83dd6a0dfebf3e7241b51a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84c7a85b99d51c4e6abc32d76905fa0f35e576256d83dd6a0dfebf3e7241b51a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84c7a85b99d51c4e6abc32d76905fa0f35e576256d83dd6a0dfebf3e7241b51a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84c7a85b99d51c4e6abc32d76905fa0f35e576256d83dd6a0dfebf3e7241b51a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:21:52 compute-0 sudo[318695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:21:52 compute-0 sudo[318695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:21:52 compute-0 sudo[318695]: pam_unix(sudo:session): session closed for user root
Sep 30 18:21:52 compute-0 podman[318678]: 2025-09-30 18:21:52.709555251 +0000 UTC m=+0.186355885 container init 6e62a65f728629ecb8122602306e5f8706ee0f7fe93e630d883c4d7b6e3777ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_mirzakhani, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Sep 30 18:21:52 compute-0 nova_compute[265391]: 2025-09-30 18:21:52.709 2 DEBUG oslo_concurrency.lockutils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Releasing lock "refresh_cache-71a2a65c-86a0-4257-9bd1-1cd4e706fb69" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:21:52 compute-0 nova_compute[265391]: 2025-09-30 18:21:52.711 2 DEBUG nova.compute.manager [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Instance network_info: |[{"id": "c08e30da-2028-4b45-9b18-b77d81894e93", "address": "fa:16:3e:ab:60:7a", "network": {"id": "443be7ca-f628-4a45-95b6-620d37172d7b", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-1888091317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "269f60e72ce1460a98da519466c89da6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc08e30da-20", "ovs_interfaceid": "c08e30da-2028-4b45-9b18-b77d81894e93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Sep 30 18:21:52 compute-0 nova_compute[265391]: 2025-09-30 18:21:52.712 2 DEBUG nova.virt.libvirt.driver [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Start _get_guest_xml network_info=[{"id": "c08e30da-2028-4b45-9b18-b77d81894e93", "address": "fa:16:3e:ab:60:7a", "network": {"id": "443be7ca-f628-4a45-95b6-620d37172d7b", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-1888091317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "269f60e72ce1460a98da519466c89da6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc08e30da-20", "ovs_interfaceid": "c08e30da-2028-4b45-9b18-b77d81894e93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'image_id': '5b99cbca-b655-4be5-8343-cf504005c42e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Sep 30 18:21:52 compute-0 nova_compute[265391]: 2025-09-30 18:21:52.717 2 WARNING nova.virt.libvirt.driver [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:21:52 compute-0 nova_compute[265391]: 2025-09-30 18:21:52.718 2 DEBUG nova.virt.driver [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='5b99cbca-b655-4be5-8343-cf504005c42e', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteHostMaintenanceStrategy-server-1057470913', uuid='71a2a65c-86a0-4257-9bd1-1cd4e706fb69'), owner=OwnerMeta(userid='57be6c3d2e0d431dae0127ac659de1e0', username='tempest-TestExecuteHostMaintenanceStrategy-1597156537-project-admin', projectid='af4ef07c582847419a03275af50c6ffc', projectname='tempest-TestExecuteHostMaintenanceStrategy-1597156537'), image=ImageMeta(id='5b99cbca-b655-4be5-8343-cf504005c42e', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "c08e30da-2028-4b45-9b18-b77d81894e93", "address": "fa:16:3e:ab:60:7a", "network": {"id": "443be7ca-f628-4a45-95b6-620d37172d7b", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-1888091317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "269f60e72ce1460a98da519466c89da6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc08e30da-20", "ovs_interfaceid": "c08e30da-2028-4b45-9b18-b77d81894e93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20250919142712.b99a882.el10', creation_time=1759256512.7185814) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Sep 30 18:21:52 compute-0 podman[318678]: 2025-09-30 18:21:52.720929596 +0000 UTC m=+0.197730180 container start 6e62a65f728629ecb8122602306e5f8706ee0f7fe93e630d883c4d7b6e3777ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Sep 30 18:21:52 compute-0 nova_compute[265391]: 2025-09-30 18:21:52.723 2 DEBUG nova.virt.libvirt.host [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Sep 30 18:21:52 compute-0 nova_compute[265391]: 2025-09-30 18:21:52.725 2 DEBUG nova.virt.libvirt.host [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Sep 30 18:21:52 compute-0 podman[318678]: 2025-09-30 18:21:52.725792763 +0000 UTC m=+0.202593347 container attach 6e62a65f728629ecb8122602306e5f8706ee0f7fe93e630d883c4d7b6e3777ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:21:52 compute-0 nova_compute[265391]: 2025-09-30 18:21:52.728 2 DEBUG nova.virt.libvirt.host [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Sep 30 18:21:52 compute-0 nova_compute[265391]: 2025-09-30 18:21:52.729 2 DEBUG nova.virt.libvirt.host [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Sep 30 18:21:52 compute-0 nova_compute[265391]: 2025-09-30 18:21:52.729 2 DEBUG nova.virt.libvirt.driver [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Sep 30 18:21:52 compute-0 nova_compute[265391]: 2025-09-30 18:21:52.729 2 DEBUG nova.virt.hardware [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-09-30T18:04:50Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Sep 30 18:21:52 compute-0 nova_compute[265391]: 2025-09-30 18:21:52.730 2 DEBUG nova.virt.hardware [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Sep 30 18:21:52 compute-0 nova_compute[265391]: 2025-09-30 18:21:52.730 2 DEBUG nova.virt.hardware [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Sep 30 18:21:52 compute-0 nova_compute[265391]: 2025-09-30 18:21:52.730 2 DEBUG nova.virt.hardware [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Sep 30 18:21:52 compute-0 nova_compute[265391]: 2025-09-30 18:21:52.731 2 DEBUG nova.virt.hardware [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Sep 30 18:21:52 compute-0 nova_compute[265391]: 2025-09-30 18:21:52.731 2 DEBUG nova.virt.hardware [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Sep 30 18:21:52 compute-0 nova_compute[265391]: 2025-09-30 18:21:52.731 2 DEBUG nova.virt.hardware [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Sep 30 18:21:52 compute-0 nova_compute[265391]: 2025-09-30 18:21:52.731 2 DEBUG nova.virt.hardware [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Sep 30 18:21:52 compute-0 nova_compute[265391]: 2025-09-30 18:21:52.732 2 DEBUG nova.virt.hardware [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Sep 30 18:21:52 compute-0 nova_compute[265391]: 2025-09-30 18:21:52.732 2 DEBUG nova.virt.hardware [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Sep 30 18:21:52 compute-0 nova_compute[265391]: 2025-09-30 18:21:52.732 2 DEBUG nova.virt.hardware [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Sep 30 18:21:52 compute-0 nova_compute[265391]: 2025-09-30 18:21:52.735 2 DEBUG oslo_concurrency.processutils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:21:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:53 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398004b70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:53 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:21:53 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/377580213' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:21:53 compute-0 nova_compute[265391]: 2025-09-30 18:21:53.204 2 DEBUG oslo_concurrency.processutils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:21:53 compute-0 nova_compute[265391]: 2025-09-30 18:21:53.236 2 DEBUG nova.storage.rbd_utils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] rbd image 71a2a65c-86a0-4257-9bd1-1cd4e706fb69_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:21:53 compute-0 nova_compute[265391]: 2025-09-30 18:21:53.244 2 DEBUG oslo_concurrency.processutils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:21:53 compute-0 ceph-mon[73755]: pgmap v1297: 353 pgs: 353 active+clean; 41 MiB data, 242 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 18:21:53 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/377580213' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:21:53 compute-0 lvm[318857]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:21:53 compute-0 lvm[318857]: VG ceph_vg0 finished
Sep 30 18:21:53 compute-0 loving_mirzakhani[318696]: {}
Sep 30 18:21:53 compute-0 systemd[1]: libpod-6e62a65f728629ecb8122602306e5f8706ee0f7fe93e630d883c4d7b6e3777ee.scope: Deactivated successfully.
Sep 30 18:21:53 compute-0 systemd[1]: libpod-6e62a65f728629ecb8122602306e5f8706ee0f7fe93e630d883c4d7b6e3777ee.scope: Consumed 1.323s CPU time.
Sep 30 18:21:53 compute-0 podman[318678]: 2025-09-30 18:21:53.549898662 +0000 UTC m=+1.026699256 container died 6e62a65f728629ecb8122602306e5f8706ee0f7fe93e630d883c4d7b6e3777ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:21:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-84c7a85b99d51c4e6abc32d76905fa0f35e576256d83dd6a0dfebf3e7241b51a-merged.mount: Deactivated successfully.
Sep 30 18:21:53 compute-0 podman[318678]: 2025-09-30 18:21:53.608208117 +0000 UTC m=+1.085008701 container remove 6e62a65f728629ecb8122602306e5f8706ee0f7fe93e630d883c4d7b6e3777ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_mirzakhani, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:21:53 compute-0 systemd[1]: libpod-conmon-6e62a65f728629ecb8122602306e5f8706ee0f7fe93e630d883c4d7b6e3777ee.scope: Deactivated successfully.
Sep 30 18:21:53 compute-0 sudo[318571]: pam_unix(sudo:session): session closed for user root
Sep 30 18:21:53 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:21:53 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:21:53 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:21:53 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:21:53 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:21:53 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4008188633' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:21:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:21:53.749Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:21:53 compute-0 nova_compute[265391]: 2025-09-30 18:21:53.784 2 DEBUG oslo_concurrency.processutils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:21:53 compute-0 nova_compute[265391]: 2025-09-30 18:21:53.787 2 DEBUG nova.virt.libvirt.vif [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:21:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-1057470913',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-1057470913',id=14,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='af4ef07c582847419a03275af50c6ffc',ramdisk_id='',reservation_id='r-1zyfmf1o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-1597156537',owner_user_name='tempest-TestExecuteHostMaintenanceStrategy-1597156537-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:21:48Z,user_data=None,user_id='57be6c3d2e0d431dae0127ac659de1e0',uuid=71a2a65c-86a0-4257-9bd1-1cd4e706fb69,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c08e30da-2028-4b45-9b18-b77d81894e93", "address": "fa:16:3e:ab:60:7a", "network": {"id": "443be7ca-f628-4a45-95b6-620d37172d7b", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-1888091317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "269f60e72ce1460a98da519466c89da6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc08e30da-20", "ovs_interfaceid": "c08e30da-2028-4b45-9b18-b77d81894e93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:21:53 compute-0 sudo[318874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:21:53 compute-0 nova_compute[265391]: 2025-09-30 18:21:53.788 2 DEBUG nova.network.os_vif_util [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Converting VIF {"id": "c08e30da-2028-4b45-9b18-b77d81894e93", "address": "fa:16:3e:ab:60:7a", "network": {"id": "443be7ca-f628-4a45-95b6-620d37172d7b", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-1888091317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "269f60e72ce1460a98da519466c89da6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc08e30da-20", "ovs_interfaceid": "c08e30da-2028-4b45-9b18-b77d81894e93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:21:53 compute-0 nova_compute[265391]: 2025-09-30 18:21:53.790 2 DEBUG nova.network.os_vif_util [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:60:7a,bridge_name='br-int',has_traffic_filtering=True,id=c08e30da-2028-4b45-9b18-b77d81894e93,network=Network(443be7ca-f628-4a45-95b6-620d37172d7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc08e30da-20') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:21:53 compute-0 nova_compute[265391]: 2025-09-30 18:21:53.792 2 DEBUG nova.objects.instance [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Lazy-loading 'pci_devices' on Instance uuid 71a2a65c-86a0-4257-9bd1-1cd4e706fb69 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:21:53 compute-0 sudo[318874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:21:53 compute-0 sudo[318874]: pam_unix(sudo:session): session closed for user root
Sep 30 18:21:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:21:53.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:21:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:54 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038400bca0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:21:54.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:21:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:54.299 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:21:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:54.299 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:21:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:54.299 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:21:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1298: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:21:54 compute-0 nova_compute[265391]: 2025-09-30 18:21:54.303 2 DEBUG nova.virt.libvirt.driver [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] End _get_guest_xml xml=<domain type="kvm">
Sep 30 18:21:54 compute-0 nova_compute[265391]:   <uuid>71a2a65c-86a0-4257-9bd1-1cd4e706fb69</uuid>
Sep 30 18:21:54 compute-0 nova_compute[265391]:   <name>instance-0000000e</name>
Sep 30 18:21:54 compute-0 nova_compute[265391]:   <memory>131072</memory>
Sep 30 18:21:54 compute-0 nova_compute[265391]:   <vcpu>1</vcpu>
Sep 30 18:21:54 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:21:54 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteHostMaintenanceStrategy-server-1057470913</nova:name>
Sep 30 18:21:54 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:21:52</nova:creationTime>
Sep 30 18:21:54 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:21:54 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:21:54 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:21:54 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:21:54 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:21:54 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:21:54 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:21:54 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:21:54 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:21:54 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:21:54 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:21:54 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:21:54 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:21:54 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:21:54 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:21:54 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:21:54 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:21:54 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:21:54 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:21:54 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:21:54 compute-0 nova_compute[265391]:         <nova:user uuid="57be6c3d2e0d431dae0127ac659de1e0">tempest-TestExecuteHostMaintenanceStrategy-1597156537-project-admin</nova:user>
Sep 30 18:21:54 compute-0 nova_compute[265391]:         <nova:project uuid="af4ef07c582847419a03275af50c6ffc">tempest-TestExecuteHostMaintenanceStrategy-1597156537</nova:project>
Sep 30 18:21:54 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:21:54 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:21:54 compute-0 nova_compute[265391]:         <nova:port uuid="c08e30da-2028-4b45-9b18-b77d81894e93">
Sep 30 18:21:54 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:21:54 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:21:54 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:21:54 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <system>
Sep 30 18:21:54 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:21:54 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:21:54 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:21:54 compute-0 nova_compute[265391]:       <entry name="serial">71a2a65c-86a0-4257-9bd1-1cd4e706fb69</entry>
Sep 30 18:21:54 compute-0 nova_compute[265391]:       <entry name="uuid">71a2a65c-86a0-4257-9bd1-1cd4e706fb69</entry>
Sep 30 18:21:54 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     </system>
Sep 30 18:21:54 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:21:54 compute-0 nova_compute[265391]:   <os>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="q35">hvm</type>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:   </os>
Sep 30 18:21:54 compute-0 nova_compute[265391]:   <features>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <vmcoreinfo/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:   </features>
Sep 30 18:21:54 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:21:54 compute-0 nova_compute[265391]:   <cpu mode="host-model" match="exact">
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <topology sockets="1" cores="1" threads="1"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:21:54 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:21:54 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/71a2a65c-86a0-4257-9bd1-1cd4e706fb69_disk">
Sep 30 18:21:54 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:       </source>
Sep 30 18:21:54 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:21:54 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:21:54 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:21:54 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/71a2a65c-86a0-4257-9bd1-1cd4e706fb69_disk.config">
Sep 30 18:21:54 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:       </source>
Sep 30 18:21:54 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:21:54 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:21:54 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:21:54 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:ab:60:7a"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:       <target dev="tapc08e30da-20"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:21:54 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/71a2a65c-86a0-4257-9bd1-1cd4e706fb69/console.log" append="off"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <video>
Sep 30 18:21:54 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     </video>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:21:54 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <controller type="usb" index="0"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:21:54 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:21:54 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:21:54 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:21:54 compute-0 nova_compute[265391]: </domain>
Sep 30 18:21:54 compute-0 nova_compute[265391]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Sep 30 18:21:54 compute-0 nova_compute[265391]: 2025-09-30 18:21:54.304 2 DEBUG nova.compute.manager [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Preparing to wait for external event network-vif-plugged-c08e30da-2028-4b45-9b18-b77d81894e93 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:21:54 compute-0 nova_compute[265391]: 2025-09-30 18:21:54.304 2 DEBUG oslo_concurrency.lockutils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Acquiring lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:21:54 compute-0 nova_compute[265391]: 2025-09-30 18:21:54.304 2 DEBUG oslo_concurrency.lockutils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:21:54 compute-0 nova_compute[265391]: 2025-09-30 18:21:54.305 2 DEBUG oslo_concurrency.lockutils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:21:54 compute-0 nova_compute[265391]: 2025-09-30 18:21:54.305 2 DEBUG nova.virt.libvirt.vif [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:21:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-1057470913',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-1057470913',id=14,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='af4ef07c582847419a03275af50c6ffc',ramdisk_id='',reservation_id='r-1zyfmf1o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-1597156537',owner_user_name='tempest-TestExecuteHostMaintenanceStrategy-1597156537-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:21:48Z,user_data=None,user_id='57be6c3d2e0d431dae0127ac659de1e0',uuid=71a2a65c-86a0-4257-9bd1-1cd4e706fb69,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c08e30da-2028-4b45-9b18-b77d81894e93", "address": "fa:16:3e:ab:60:7a", "network": {"id": "443be7ca-f628-4a45-95b6-620d37172d7b", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-1888091317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "269f60e72ce1460a98da519466c89da6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc08e30da-20", "ovs_interfaceid": "c08e30da-2028-4b45-9b18-b77d81894e93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Sep 30 18:21:54 compute-0 nova_compute[265391]: 2025-09-30 18:21:54.306 2 DEBUG nova.network.os_vif_util [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Converting VIF {"id": "c08e30da-2028-4b45-9b18-b77d81894e93", "address": "fa:16:3e:ab:60:7a", "network": {"id": "443be7ca-f628-4a45-95b6-620d37172d7b", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-1888091317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "269f60e72ce1460a98da519466c89da6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc08e30da-20", "ovs_interfaceid": "c08e30da-2028-4b45-9b18-b77d81894e93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:21:54 compute-0 nova_compute[265391]: 2025-09-30 18:21:54.306 2 DEBUG nova.network.os_vif_util [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:60:7a,bridge_name='br-int',has_traffic_filtering=True,id=c08e30da-2028-4b45-9b18-b77d81894e93,network=Network(443be7ca-f628-4a45-95b6-620d37172d7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc08e30da-20') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:21:54 compute-0 nova_compute[265391]: 2025-09-30 18:21:54.306 2 DEBUG os_vif [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:60:7a,bridge_name='br-int',has_traffic_filtering=True,id=c08e30da-2028-4b45-9b18-b77d81894e93,network=Network(443be7ca-f628-4a45-95b6-620d37172d7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc08e30da-20') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Sep 30 18:21:54 compute-0 nova_compute[265391]: 2025-09-30 18:21:54.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:54 compute-0 nova_compute[265391]: 2025-09-30 18:21:54.307 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:21:54 compute-0 nova_compute[265391]: 2025-09-30 18:21:54.308 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:21:54 compute-0 nova_compute[265391]: 2025-09-30 18:21:54.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:54 compute-0 nova_compute[265391]: 2025-09-30 18:21:54.309 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': 'a8d03729-f345-5c8b-be8a-7b89f1b0e918', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:21:54 compute-0 nova_compute[265391]: 2025-09-30 18:21:54.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:54 compute-0 nova_compute[265391]: 2025-09-30 18:21:54.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:54 compute-0 nova_compute[265391]: 2025-09-30 18:21:54.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:54 compute-0 nova_compute[265391]: 2025-09-30 18:21:54.316 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc08e30da-20, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:21:54 compute-0 nova_compute[265391]: 2025-09-30 18:21:54.316 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tapc08e30da-20, col_values=(('qos', UUID('dc4a77ac-0049-47eb-aba1-f55689bbda5c')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:21:54 compute-0 nova_compute[265391]: 2025-09-30 18:21:54.316 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tapc08e30da-20, col_values=(('external_ids', {'iface-id': 'c08e30da-2028-4b45-9b18-b77d81894e93', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ab:60:7a', 'vm-uuid': '71a2a65c-86a0-4257-9bd1-1cd4e706fb69'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:21:54 compute-0 nova_compute[265391]: 2025-09-30 18:21:54.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:54 compute-0 NetworkManager[45059]: <info>  [1759256514.3193] manager: (tapc08e30da-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Sep 30 18:21:54 compute-0 nova_compute[265391]: 2025-09-30 18:21:54.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:21:54 compute-0 nova_compute[265391]: 2025-09-30 18:21:54.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:54 compute-0 nova_compute[265391]: 2025-09-30 18:21:54.328 2 INFO os_vif [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:60:7a,bridge_name='br-int',has_traffic_filtering=True,id=c08e30da-2028-4b45-9b18-b77d81894e93,network=Network(443be7ca-f628-4a45-95b6-620d37172d7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc08e30da-20')
Sep 30 18:21:54 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:21:54 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:21:54 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/4008188633' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:21:54 compute-0 ceph-mon[73755]: pgmap v1298: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:21:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:55 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4000e60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:55 compute-0 nova_compute[265391]: 2025-09-30 18:21:55.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:21:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:21:55.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:21:55 compute-0 nova_compute[265391]: 2025-09-30 18:21:55.871 2 DEBUG nova.virt.libvirt.driver [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:21:55 compute-0 nova_compute[265391]: 2025-09-30 18:21:55.872 2 DEBUG nova.virt.libvirt.driver [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:21:55 compute-0 nova_compute[265391]: 2025-09-30 18:21:55.872 2 DEBUG nova.virt.libvirt.driver [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] No VIF found with MAC fa:16:3e:ab:60:7a, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Sep 30 18:21:55 compute-0 nova_compute[265391]: 2025-09-30 18:21:55.873 2 INFO nova.virt.libvirt.driver [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Using config drive
Sep 30 18:21:55 compute-0 nova_compute[265391]: 2025-09-30 18:21:55.905 2 DEBUG nova.storage.rbd_utils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] rbd image 71a2a65c-86a0-4257-9bd1-1cd4e706fb69_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:21:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:56 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:21:56.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:21:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1299: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:21:56 compute-0 nova_compute[265391]: 2025-09-30 18:21:56.419 2 WARNING neutronclient.v2_0.client [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:21:56 compute-0 ceph-mon[73755]: pgmap v1299: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:21:57 compute-0 nova_compute[265391]: 2025-09-30 18:21:57.146 2 INFO nova.virt.libvirt.driver [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Creating config drive at /var/lib/nova/instances/71a2a65c-86a0-4257-9bd1-1cd4e706fb69/disk.config
Sep 30 18:21:57 compute-0 nova_compute[265391]: 2025-09-30 18:21:57.153 2 DEBUG oslo_concurrency.processutils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/71a2a65c-86a0-4257-9bd1-1cd4e706fb69/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmptjy4wixc execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:21:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:57 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398004b70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:21:57.224Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:21:57 compute-0 nova_compute[265391]: 2025-09-30 18:21:57.291 2 DEBUG oslo_concurrency.processutils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/71a2a65c-86a0-4257-9bd1-1cd4e706fb69/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmptjy4wixc" returned: 0 in 0.138s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:21:57 compute-0 nova_compute[265391]: 2025-09-30 18:21:57.343 2 DEBUG nova.storage.rbd_utils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] rbd image 71a2a65c-86a0-4257-9bd1-1cd4e706fb69_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:21:57 compute-0 nova_compute[265391]: 2025-09-30 18:21:57.348 2 DEBUG oslo_concurrency.processutils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/71a2a65c-86a0-4257-9bd1-1cd4e706fb69/disk.config 71a2a65c-86a0-4257-9bd1-1cd4e706fb69_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:21:57 compute-0 nova_compute[265391]: 2025-09-30 18:21:57.531 2 DEBUG oslo_concurrency.processutils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/71a2a65c-86a0-4257-9bd1-1cd4e706fb69/disk.config 71a2a65c-86a0-4257-9bd1-1cd4e706fb69_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:21:57 compute-0 nova_compute[265391]: 2025-09-30 18:21:57.532 2 INFO nova.virt.libvirt.driver [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Deleting local config drive /var/lib/nova/instances/71a2a65c-86a0-4257-9bd1-1cd4e706fb69/disk.config because it was imported into RBD.
Sep 30 18:21:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:21:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2571417486' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:21:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:21:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2571417486' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:21:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2571417486' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:21:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2571417486' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:21:57 compute-0 kernel: tapc08e30da-20: entered promiscuous mode
Sep 30 18:21:57 compute-0 NetworkManager[45059]: <info>  [1759256517.6081] manager: (tapc08e30da-20): new Tun device (/org/freedesktop/NetworkManager/Devices/56)
Sep 30 18:21:57 compute-0 ovn_controller[156242]: 2025-09-30T18:21:57Z|00122|binding|INFO|Claiming lport c08e30da-2028-4b45-9b18-b77d81894e93 for this chassis.
Sep 30 18:21:57 compute-0 ovn_controller[156242]: 2025-09-30T18:21:57Z|00123|binding|INFO|c08e30da-2028-4b45-9b18-b77d81894e93: Claiming fa:16:3e:ab:60:7a 10.100.0.12
Sep 30 18:21:57 compute-0 nova_compute[265391]: 2025-09-30 18:21:57.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:57.618 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:60:7a 10.100.0.12'], port_security=['fa:16:3e:ab:60:7a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '71a2a65c-86a0-4257-9bd1-1cd4e706fb69', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-443be7ca-f628-4a45-95b6-620d37172d7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'af4ef07c582847419a03275af50c6ffc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '518a9c00-28f9-47ab-a122-e672192eedea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=96eb21b8-879c-4e72-963b-37e37ae3d0c5, chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=c08e30da-2028-4b45-9b18-b77d81894e93) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:21:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:57.624 166158 INFO neutron.agent.ovn.metadata.agent [-] Port c08e30da-2028-4b45-9b18-b77d81894e93 in datapath 443be7ca-f628-4a45-95b6-620d37172d7b bound to our chassis
Sep 30 18:21:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:57.625 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 443be7ca-f628-4a45-95b6-620d37172d7b
Sep 30 18:21:57 compute-0 ovn_controller[156242]: 2025-09-30T18:21:57Z|00124|binding|INFO|Setting lport c08e30da-2028-4b45-9b18-b77d81894e93 ovn-installed in OVS
Sep 30 18:21:57 compute-0 ovn_controller[156242]: 2025-09-30T18:21:57Z|00125|binding|INFO|Setting lport c08e30da-2028-4b45-9b18-b77d81894e93 up in Southbound
Sep 30 18:21:57 compute-0 nova_compute[265391]: 2025-09-30 18:21:57.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:57 compute-0 nova_compute[265391]: 2025-09-30 18:21:57.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:57.650 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[0a0e1081-9d10-433b-9005-7e7c3278623e]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:21:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:57.651 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap443be7ca-f1 in ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Sep 30 18:21:57 compute-0 systemd-machined[219917]: New machine qemu-10-instance-0000000e.
Sep 30 18:21:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:57.655 292173 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap443be7ca-f0 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Sep 30 18:21:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:57.655 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[4816424f-70e5-4050-8ff2-837d2d127985]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:21:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:57.656 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[44e6e268-07b6-426d-93da-bc2c25613241]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:21:57 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000e.
Sep 30 18:21:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:57.675 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[fe3fb335-d5c3-4f1f-aae0-6ff4f5c6f520]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:21:57 compute-0 systemd-udevd[318981]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:21:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:57.695 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[b53d3569-630c-4430-af00-2922f3bb903b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:21:57 compute-0 NetworkManager[45059]: <info>  [1759256517.7123] device (tapc08e30da-20): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 18:21:57 compute-0 NetworkManager[45059]: <info>  [1759256517.7153] device (tapc08e30da-20): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Sep 30 18:21:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:57.736 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[b528b6a9-d0b8-411e-8952-19e1454596ad]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:21:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:57.742 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[b3961378-6677-4d3a-ab5c-c19d92fa545f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:21:57 compute-0 NetworkManager[45059]: <info>  [1759256517.7437] manager: (tap443be7ca-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/57)
Sep 30 18:21:57 compute-0 systemd-udevd[318985]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:21:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:57.782 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[c12958ac-8497-4da5-bded-3188cd55430a]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:21:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:57.786 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[c540e6fd-2802-435c-a03b-dfb26932f4ac]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:21:57 compute-0 NetworkManager[45059]: <info>  [1759256517.8214] device (tap443be7ca-f0): carrier: link connected
Sep 30 18:21:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:57.832 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[b3401e81-0c65-4e63-ad47-e20e1f5ff22d]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:21:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:57.861 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[3a44d53a-0fc0-4a22-aa06-47e9cd72af3c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap443be7ca-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:7f:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 511651, 'reachable_time': 21816, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319013, 'error': None, 'target': 'ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:21:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:21:57.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:21:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:57.879 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[322e7881-65ba-4e1a-bdca-03ac883f5c3e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec4:7f4d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 511651, 'tstamp': 511651}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319014, 'error': None, 'target': 'ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:21:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:57.905 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[2f4a7946-8658-4ffe-9278-f9c9771a66fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap443be7ca-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:7f:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 511651, 'reachable_time': 21816, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 319015, 'error': None, 'target': 'ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:21:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:57.952 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[1b8e91a5-20b5-4835-96f6-1952abbf4442]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:58.035 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[3180d0a4-d3db-4fbc-ba3b-c5aa50e45a6c]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:58.037 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap443be7ca-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:58.037 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:58.038 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap443be7ca-f0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:21:58 compute-0 kernel: tap443be7ca-f0: entered promiscuous mode
Sep 30 18:21:58 compute-0 nova_compute[265391]: 2025-09-30 18:21:58.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:58 compute-0 NetworkManager[45059]: <info>  [1759256518.0690] manager: (tap443be7ca-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:58.069 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap443be7ca-f0, col_values=(('external_ids', {'iface-id': '031d2cff-b142-4423-ba99-772183b7a667'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:21:58 compute-0 ovn_controller[156242]: 2025-09-30T18:21:58Z|00126|binding|INFO|Releasing lport 031d2cff-b142-4423-ba99-772183b7a667 from this chassis (sb_readonly=0)
Sep 30 18:21:58 compute-0 nova_compute[265391]: 2025-09-30 18:21:58.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:58 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038400bca0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:58 compute-0 nova_compute[265391]: 2025-09-30 18:21:58.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:58.085 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[7312a93d-9edc-402e-9b40-c85b9a0bb32f]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:58.086 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/443be7ca-f628-4a45-95b6-620d37172d7b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/443be7ca-f628-4a45-95b6-620d37172d7b.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:58.086 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/443be7ca-f628-4a45-95b6-620d37172d7b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/443be7ca-f628-4a45-95b6-620d37172d7b.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:58.087 166158 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for 443be7ca-f628-4a45-95b6-620d37172d7b disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:58.087 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/443be7ca-f628-4a45-95b6-620d37172d7b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/443be7ca-f628-4a45-95b6-620d37172d7b.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:58.087 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[348366e4-5810-4557-8584-e3ae803aa0d8]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:58.088 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/443be7ca-f628-4a45-95b6-620d37172d7b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/443be7ca-f628-4a45-95b6-620d37172d7b.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:58.088 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[4153f45b-2031-44d9-b057-eb0d1f35e9d7]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:58.089 166158 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]: global
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]:     log         /dev/log local0 debug
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]:     log-tag     haproxy-metadata-proxy-443be7ca-f628-4a45-95b6-620d37172d7b
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]:     user        root
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]:     group       root
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]:     maxconn     1024
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]:     pidfile     /var/lib/neutron/external/pids/443be7ca-f628-4a45-95b6-620d37172d7b.pid.haproxy
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]:     daemon
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]: defaults
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]:     log global
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]:     mode http
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]:     option httplog
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]:     option dontlognull
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]:     option http-server-close
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]:     option forwardfor
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]:     retries                 3
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]:     timeout http-request    30s
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]:     timeout connect         30s
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]:     timeout client          32s
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]:     timeout server          32s
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]:     timeout http-keep-alive 30s
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]: listen listener
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]:     bind 169.254.169.254:80
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]:     
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]:     server metadata /var/lib/neutron/metadata_proxy
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]:     http-request add-header X-OVN-Network-ID 443be7ca-f628-4a45-95b6-620d37172d7b
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:58.089 166158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b', 'env', 'PROCESS_TAG=haproxy-443be7ca-f628-4a45-95b6-620d37172d7b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/443be7ca-f628-4a45-95b6-620d37172d7b.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Sep 30 18:21:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:21:58.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:21:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1300: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:21:58 compute-0 nova_compute[265391]: 2025-09-30 18:21:58.525 2 DEBUG nova.compute.manager [req-006528bd-2fa3-464e-9621-041176c6a1c6 req-54654ad2-5471-4417-8159-48c3b2801dae 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Received event network-vif-plugged-c08e30da-2028-4b45-9b18-b77d81894e93 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:21:58 compute-0 nova_compute[265391]: 2025-09-30 18:21:58.527 2 DEBUG oslo_concurrency.lockutils [req-006528bd-2fa3-464e-9621-041176c6a1c6 req-54654ad2-5471-4417-8159-48c3b2801dae 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:21:58 compute-0 nova_compute[265391]: 2025-09-30 18:21:58.529 2 DEBUG oslo_concurrency.lockutils [req-006528bd-2fa3-464e-9621-041176c6a1c6 req-54654ad2-5471-4417-8159-48c3b2801dae 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:21:58 compute-0 nova_compute[265391]: 2025-09-30 18:21:58.529 2 DEBUG oslo_concurrency.lockutils [req-006528bd-2fa3-464e-9621-041176c6a1c6 req-54654ad2-5471-4417-8159-48c3b2801dae 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:21:58 compute-0 nova_compute[265391]: 2025-09-30 18:21:58.529 2 DEBUG nova.compute.manager [req-006528bd-2fa3-464e-9621-041176c6a1c6 req-54654ad2-5471-4417-8159-48c3b2801dae 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Processing event network-vif-plugged-c08e30da-2028-4b45-9b18-b77d81894e93 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:21:58 compute-0 podman[319089]: 2025-09-30 18:21:58.542121851 +0000 UTC m=+0.064127937 container create a973011ad361748e9d87f253aaa1e9d63bab9b84ad7a630c27e00fea30b64568 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0)
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:58.578 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:21:58 compute-0 nova_compute[265391]: 2025-09-30 18:21:58.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:58 compute-0 systemd[1]: Started libpod-conmon-a973011ad361748e9d87f253aaa1e9d63bab9b84ad7a630c27e00fea30b64568.scope.
Sep 30 18:21:58 compute-0 ceph-mon[73755]: pgmap v1300: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:21:58 compute-0 podman[319089]: 2025-09-30 18:21:58.510625383 +0000 UTC m=+0.032631499 image pull 8925e336c8a9c8e5b40d98e4715ad60992d6f21ad0a398170a686c34f922c024 38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Sep 30 18:21:58 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:21:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36caf887c8f80ff07d13b1097e2204126f87daa6cd27af25b7f4dfa48a844a61/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Sep 30 18:21:58 compute-0 podman[319089]: 2025-09-30 18:21:58.647445489 +0000 UTC m=+0.169451595 container init a973011ad361748e9d87f253aaa1e9d63bab9b84ad7a630c27e00fea30b64568 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b, tcib_build_tag=watcher_latest, org.label-schema.build-date=20250930, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Sep 30 18:21:58 compute-0 podman[319089]: 2025-09-30 18:21:58.658403264 +0000 UTC m=+0.180409350 container start a973011ad361748e9d87f253aaa1e9d63bab9b84ad7a630c27e00fea30b64568 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250930, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2)
Sep 30 18:21:58 compute-0 nova_compute[265391]: 2025-09-30 18:21:58.680 2 DEBUG nova.compute.manager [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:21:58 compute-0 nova_compute[265391]: 2025-09-30 18:21:58.685 2 DEBUG nova.virt.libvirt.driver [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Sep 30 18:21:58 compute-0 nova_compute[265391]: 2025-09-30 18:21:58.689 2 INFO nova.virt.libvirt.driver [-] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Instance spawned successfully.
Sep 30 18:21:58 compute-0 nova_compute[265391]: 2025-09-30 18:21:58.689 2 DEBUG nova.virt.libvirt.driver [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Sep 30 18:21:58 compute-0 neutron-haproxy-ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b[319105]: [NOTICE]   (319109) : New worker (319111) forked
Sep 30 18:21:58 compute-0 neutron-haproxy-ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b[319105]: [NOTICE]   (319109) : Loading success.
Sep 30 18:21:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:21:58.744 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:21:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:21:58] "GET /metrics HTTP/1.1" 200 46631 "" "Prometheus/2.51.0"
Sep 30 18:21:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:21:58] "GET /metrics HTTP/1.1" 200 46631 "" "Prometheus/2.51.0"
Sep 30 18:21:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:21:59 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4000e60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:21:59 compute-0 nova_compute[265391]: 2025-09-30 18:21:59.204 2 DEBUG nova.virt.libvirt.driver [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:21:59 compute-0 nova_compute[265391]: 2025-09-30 18:21:59.205 2 DEBUG nova.virt.libvirt.driver [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:21:59 compute-0 nova_compute[265391]: 2025-09-30 18:21:59.206 2 DEBUG nova.virt.libvirt.driver [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:21:59 compute-0 nova_compute[265391]: 2025-09-30 18:21:59.207 2 DEBUG nova.virt.libvirt.driver [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:21:59 compute-0 nova_compute[265391]: 2025-09-30 18:21:59.208 2 DEBUG nova.virt.libvirt.driver [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:21:59 compute-0 nova_compute[265391]: 2025-09-30 18:21:59.209 2 DEBUG nova.virt.libvirt.driver [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:21:59 compute-0 nova_compute[265391]: 2025-09-30 18:21:59.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:21:59 compute-0 nova_compute[265391]: 2025-09-30 18:21:59.722 2 INFO nova.compute.manager [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Took 10.55 seconds to spawn the instance on the hypervisor.
Sep 30 18:21:59 compute-0 nova_compute[265391]: 2025-09-30 18:21:59.722 2 DEBUG nova.compute.manager [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Sep 30 18:21:59 compute-0 podman[276673]: time="2025-09-30T18:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:21:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:21:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10741 "" "Go-http-client/1.1"
Sep 30 18:21:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:21:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:21:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:21:59.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:22:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:00 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4000e60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:00 compute-0 nova_compute[265391]: 2025-09-30 18:22:00.254 2 INFO nova.compute.manager [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Took 16.40 seconds to build instance.
Sep 30 18:22:00 compute-0 nova_compute[265391]: 2025-09-30 18:22:00.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:22:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:22:00.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:22:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1301: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Sep 30 18:22:00 compute-0 ceph-mon[73755]: pgmap v1301: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Sep 30 18:22:00 compute-0 nova_compute[265391]: 2025-09-30 18:22:00.580 2 DEBUG nova.compute.manager [req-d20d0bbf-cf84-4f48-8567-1a69b3e3ffaa req-ee0a652c-65c7-4057-9005-84fe5b335f04 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Received event network-vif-plugged-c08e30da-2028-4b45-9b18-b77d81894e93 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:22:00 compute-0 nova_compute[265391]: 2025-09-30 18:22:00.581 2 DEBUG oslo_concurrency.lockutils [req-d20d0bbf-cf84-4f48-8567-1a69b3e3ffaa req-ee0a652c-65c7-4057-9005-84fe5b335f04 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:22:00 compute-0 nova_compute[265391]: 2025-09-30 18:22:00.581 2 DEBUG oslo_concurrency.lockutils [req-d20d0bbf-cf84-4f48-8567-1a69b3e3ffaa req-ee0a652c-65c7-4057-9005-84fe5b335f04 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:22:00 compute-0 nova_compute[265391]: 2025-09-30 18:22:00.582 2 DEBUG oslo_concurrency.lockutils [req-d20d0bbf-cf84-4f48-8567-1a69b3e3ffaa req-ee0a652c-65c7-4057-9005-84fe5b335f04 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:22:00 compute-0 nova_compute[265391]: 2025-09-30 18:22:00.582 2 DEBUG nova.compute.manager [req-d20d0bbf-cf84-4f48-8567-1a69b3e3ffaa req-ee0a652c-65c7-4057-9005-84fe5b335f04 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] No waiting events found dispatching network-vif-plugged-c08e30da-2028-4b45-9b18-b77d81894e93 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:22:00 compute-0 nova_compute[265391]: 2025-09-30 18:22:00.582 2 WARNING nova.compute.manager [req-d20d0bbf-cf84-4f48-8567-1a69b3e3ffaa req-ee0a652c-65c7-4057-9005-84fe5b335f04 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Received unexpected event network-vif-plugged-c08e30da-2028-4b45-9b18-b77d81894e93 for instance with vm_state active and task_state None.
Sep 30 18:22:00 compute-0 nova_compute[265391]: 2025-09-30 18:22:00.759 2 DEBUG oslo_concurrency.lockutils [None req-da0c185b-bd19-4f3e-8338-e4d92ca9415e 57be6c3d2e0d431dae0127ac659de1e0 af4ef07c582847419a03275af50c6ffc - - default default] Lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.925s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:22:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:22:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:01 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398004b90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:01 compute-0 openstack_network_exporter[279566]: ERROR   18:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:22:01 compute-0 openstack_network_exporter[279566]: ERROR   18:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:22:01 compute-0 openstack_network_exporter[279566]: ERROR   18:22:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:22:01 compute-0 openstack_network_exporter[279566]: ERROR   18:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:22:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:22:01 compute-0 openstack_network_exporter[279566]: ERROR   18:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:22:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:22:01 compute-0 podman[319125]: 2025-09-30 18:22:01.557144602 +0000 UTC m=+0.071170801 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Sep 30 18:22:01 compute-0 podman[319124]: 2025-09-30 18:22:01.594821902 +0000 UTC m=+0.116776757 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Sep 30 18:22:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:22:01.746 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:22:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:22:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:22:01.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:22:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:02 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038400bca0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:22:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:22:02.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:22:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1302: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Sep 30 18:22:02 compute-0 ceph-mon[73755]: pgmap v1302: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Sep 30 18:22:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:03 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4000e60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:03 compute-0 podman[319174]: 2025-09-30 18:22:03.518899828 +0000 UTC m=+0.054640731 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20250930, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4)
Sep 30 18:22:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:22:03.751Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:22:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:22:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:22:03.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:22:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:04 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4000e60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:22:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:22:04.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:22:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1303: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 18:22:04 compute-0 nova_compute[265391]: 2025-09-30 18:22:04.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:04 compute-0 ceph-mon[73755]: pgmap v1303: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 18:22:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:05 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398004bb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:05 compute-0 nova_compute[265391]: 2025-09-30 18:22:05.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:22:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:22:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:22:05.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:22:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:06 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038400bca0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:22:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:22:06.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:22:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1304: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:22:06 compute-0 ceph-mon[73755]: pgmap v1304: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:22:07 compute-0 sshd-session[319196]: Invalid user testuser from 45.252.249.158 port 50956
Sep 30 18:22:07 compute-0 sshd-session[319196]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:22:07 compute-0 sshd-session[319196]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158
Sep 30 18:22:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:07 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4000e60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:22:07.225Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:22:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:22:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:22:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005229063904082829 of space, bias 1.0, pg target 0.10458127808165658 quantized to 32 (current 32)
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
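Editor's note: the pg_autoscaler lines above all follow the same arithmetic: raw pg target = (fraction of the root's capacity the pool uses) x bias x the root's PG budget, then the result is rounded to a power of two and compared with the pool's current pg_num. The budget implied by these numbers is 200, consistent with the default mon_target_pg_per_osd of 100 on a two-OSD root, though that split is an inference from the log rather than something it states. A small sketch of the arithmetic using the '.mgr' and 'cephfs.cephfs.meta' values logged above.

# Hedged sketch of the pg_autoscaler arithmetic visible in the log.
# pg_budget=200 is inferred from the logged ratios and targets, not stated there.
def raw_pg_target(capacity_ratio, bias, pg_budget=200):
    return capacity_ratio * bias * pg_budget

def quantize_pow2(target, minimum=1):
    # round the raw target up to the next power of two, floored at `minimum`
    n = minimum
    while n < target:
        n *= 2
    return n

print(raw_pg_target(1.0778624975581169e-05, 1.0))  # ~0.002156, the '.mgr' line
print(raw_pg_target(7.630884938464543e-07, 4.0))   # ~0.000610, 'cephfs.cephfs.meta'
print(quantize_pow2(0.000610, minimum=16))          # 16, matching "quantized to 16"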
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:22:07
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', '.nfs', '.mgr', 'volumes', '.rgw.root', 'images', 'vms', 'default.rgw.log']
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:22:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:22:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:22:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:22:07.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:22:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:08 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4000e60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:22:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:22:08.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:22:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1305: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:22:08 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1822497323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:22:08 compute-0 ceph-mon[73755]: pgmap v1305: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:22:08 compute-0 sshd-session[319196]: Failed password for invalid user testuser from 45.252.249.158 port 50956 ssh2
Sep 30 18:22:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:22:08] "GET /metrics HTTP/1.1" 200 46648 "" "Prometheus/2.51.0"
Sep 30 18:22:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:22:08] "GET /metrics HTTP/1.1" 200 46648 "" "Prometheus/2.51.0"
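Editor's note: Prometheus is scraping the ceph-mgr prometheus module every 10 seconds (the GET /metrics entries at :08, :18 and :28). A hedged sketch of the same scrape; port 9283 is the module's usual default and an assumption here, since the log only shows the scraping client, not the listening port.

# Hedged sketch: fetch the ceph-mgr prometheus endpoint and print a few series.
# Port 9283 is an assumed default, not taken from this log.
import urllib.request

with urllib.request.urlopen("http://192.168.122.100:9283/metrics", timeout=5) as resp:
    for line in resp.read().decode().splitlines():
        if line.startswith("ceph_health_status"):
            print(line)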
Sep 30 18:22:09 compute-0 sshd-session[319196]: Received disconnect from 45.252.249.158 port 50956:11: Bye Bye [preauth]
Sep 30 18:22:09 compute-0 sshd-session[319196]: Disconnected from invalid user testuser 45.252.249.158 port 50956 [preauth]
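Editor's note: the sshd-session sequence above (invalid user testuser, a PAM failure, a failed password, then a pre-auth disconnect, all from 45.252.249.158) is routine brute-force noise; a second attempt from 14.225.220.107 follows a few seconds later. A small sketch that tallies such attempts per source address from a saved copy of this log; the file path is a placeholder.

# Hedged sketch: count failed SSH login attempts per source IP in a syslog excerpt.
# The path is a placeholder for wherever this log is stored.
import re
from collections import Counter

pattern = re.compile(r"(?:Invalid user \S+|Failed password for .*) from (\d+\.\d+\.\d+\.\d+)")
hits = Counter()
with open("/var/log/messages") as fh:
    for line in fh:
        m = pattern.search(line)
        if m:
            hits[m.group(1)] += 1

for ip, count in hits.most_common(10):
    print(f"{ip}\t{count}")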
Sep 30 18:22:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:09 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398004bd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:09 compute-0 nova_compute[265391]: 2025-09-30 18:22:09.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:22:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:22:09.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:22:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:10 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038400bca0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:10 compute-0 nova_compute[265391]: 2025-09-30 18:22:10.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:22:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:22:10.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:22:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1306: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:22:10 compute-0 ceph-mon[73755]: pgmap v1306: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:22:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:22:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:11 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4000e60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:11 compute-0 ovn_controller[156242]: 2025-09-30T18:22:11Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ab:60:7a 10.100.0.12
Sep 30 18:22:11 compute-0 ovn_controller[156242]: 2025-09-30T18:22:11Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ab:60:7a 10.100.0.12
Sep 30 18:22:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:22:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:22:11.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:22:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:12 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380004450 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:22:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:22:12.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:22:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1307: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 63 op/s
Sep 30 18:22:12 compute-0 ceph-mon[73755]: pgmap v1307: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 63 op/s
Sep 30 18:22:12 compute-0 sudo[319206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:22:12 compute-0 sudo[319206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:22:12 compute-0 sudo[319206]: pam_unix(sudo:session): session closed for user root
Sep 30 18:22:12 compute-0 podman[319232]: 2025-09-30 18:22:12.927647655 +0000 UTC m=+0.066861489 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, release=1755695350, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, version=9.6, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Sep 30 18:22:12 compute-0 podman[319231]: 2025-09-30 18:22:12.927828039 +0000 UTC m=+0.070008040 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20250930, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Sep 30 18:22:12 compute-0 podman[319230]: 2025-09-30 18:22:12.969502112 +0000 UTC m=+0.114081646 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=multipathd)
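Editor's note: the three podman health_status events above are the periodic healthcheck timers for openstack_network_exporter, iscsid and multipathd, all reporting healthy with a zero failing streak. A hedged sketch that asks podman for the same verdict on demand; the container names are taken from the log, and root access to the podman instance running them is assumed.

# Hedged sketch: re-run the healthchecks for the containers whose periodic
# health_status events appear above. Assumes local root podman access.
import subprocess

for name in ("openstack_network_exporter", "iscsid", "multipathd"):
    rc = subprocess.run(["podman", "healthcheck", "run", name],
                        capture_output=True, text=True).returncode
    print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")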
Sep 30 18:22:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:13 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038400bca0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:22:13.753Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:22:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:22:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:22:13.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:22:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:14 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4000e60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1308: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 154 op/s
Sep 30 18:22:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:22:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:22:14.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:22:14 compute-0 nova_compute[265391]: 2025-09-30 18:22:14.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:14 compute-0 ceph-mon[73755]: pgmap v1308: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 154 op/s
Sep 30 18:22:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:15 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c001a70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:15 compute-0 nova_compute[265391]: 2025-09-30 18:22:15.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:15 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1168738203' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:22:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:22:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:22:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:22:15.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:22:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:16 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380004450 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1309: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Sep 30 18:22:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:22:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:22:16.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:22:16 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/777441767' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:22:16 compute-0 ceph-mon[73755]: pgmap v1309: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Sep 30 18:22:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:17 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380004450 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:22:17.225Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:22:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:22:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:22:17.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:22:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:18 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4000e60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1310: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Sep 30 18:22:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:22:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:22:18.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:22:18 compute-0 ceph-mon[73755]: pgmap v1310: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Sep 30 18:22:18 compute-0 sshd-session[319297]: Invalid user superadmin from 14.225.220.107 port 60096
Sep 30 18:22:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:22:18] "GET /metrics HTTP/1.1" 200 46648 "" "Prometheus/2.51.0"
Sep 30 18:22:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:22:18] "GET /metrics HTTP/1.1" 200 46648 "" "Prometheus/2.51.0"
Sep 30 18:22:18 compute-0 sshd-session[319297]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:22:18 compute-0 sshd-session[319297]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107
Sep 30 18:22:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:19 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c001a70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:19 compute-0 nova_compute[265391]: 2025-09-30 18:22:19.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:22:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:22:19.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:22:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:20 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380004450 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:20 compute-0 nova_compute[265391]: 2025-09-30 18:22:20.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1311: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Sep 30 18:22:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:22:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:22:20.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:22:20 compute-0 sshd-session[319297]: Failed password for invalid user superadmin from 14.225.220.107 port 60096 ssh2
Sep 30 18:22:20 compute-0 ceph-mon[73755]: pgmap v1311: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Sep 30 18:22:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:22:21 compute-0 sshd-session[319297]: Received disconnect from 14.225.220.107 port 60096:11: Bye Bye [preauth]
Sep 30 18:22:21 compute-0 sshd-session[319297]: Disconnected from invalid user superadmin 14.225.220.107 port 60096 [preauth]
Sep 30 18:22:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:21 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4002460 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:22:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:22:21.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:22:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:22 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4000e60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:22:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:22:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1312: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Sep 30 18:22:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:22:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:22:22.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:22:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:22:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:23 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:23 compute-0 ceph-mon[73755]: pgmap v1312: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Sep 30 18:22:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:22:23.755Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:22:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:22:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:22:23.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:22:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:24 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380004450 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1313: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Sep 30 18:22:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:22:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:22:24.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:22:24 compute-0 nova_compute[265391]: 2025-09-30 18:22:24.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:24 compute-0 ceph-mon[73755]: pgmap v1313: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Sep 30 18:22:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:25 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4002460 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:25 compute-0 nova_compute[265391]: 2025-09-30 18:22:25.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:22:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:22:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:22:25.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:22:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:26 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4000e60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1314: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 74 op/s
Sep 30 18:22:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:22:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:22:26.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:22:26 compute-0 ceph-mon[73755]: pgmap v1314: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 74 op/s
Sep 30 18:22:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:27 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:27 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]

Sep 30 18:22:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:22:27.226Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:22:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:22:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:22:27.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:22:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:28 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380004450 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1315: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 74 op/s
Sep 30 18:22:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:22:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:22:28.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:22:28 compute-0 ceph-mon[73755]: pgmap v1315: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 74 op/s
Sep 30 18:22:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:22:28] "GET /metrics HTTP/1.1" 200 46645 "" "Prometheus/2.51.0"
Sep 30 18:22:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:22:28] "GET /metrics HTTP/1.1" 200 46645 "" "Prometheus/2.51.0"
Sep 30 18:22:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:29 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4002460 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:29 compute-0 nova_compute[265391]: 2025-09-30 18:22:29.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:29 compute-0 podman[276673]: time="2025-09-30T18:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:22:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:22:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:22:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10744 "" "Go-http-client/1.1"
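Editor's note: the podman[276673] entries above are a local client (most likely the podman exporter, though the log does not name it) walking the libpod REST API over the podman socket: one containers/json listing and one stats call. A hedged sketch of the same containers/json request using only the standard library; the socket path is the usual root default and an assumption here.

# Hedged sketch: repeat the libpod "list containers" call from the log over the
# local podman socket. The socket path is a conventional default, not logged here.
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, path):
        super().__init__("localhost")
        self._path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self._path)
        self.sock = sock

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
print(len(containers), "containers")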
Sep 30 18:22:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:22:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:22:29.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:22:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:30 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4000e60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1316: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 74 op/s
Sep 30 18:22:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:22:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:22:30.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:22:30 compute-0 nova_compute[265391]: 2025-09-30 18:22:30.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:30 compute-0 ceph-mon[73755]: pgmap v1316: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 74 op/s
Sep 30 18:22:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:22:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:31 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4000e60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:31 compute-0 openstack_network_exporter[279566]: ERROR   18:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:22:31 compute-0 openstack_network_exporter[279566]: ERROR   18:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:22:31 compute-0 openstack_network_exporter[279566]: ERROR   18:22:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:22:31 compute-0 openstack_network_exporter[279566]: ERROR   18:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:22:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:22:31 compute-0 openstack_network_exporter[279566]: ERROR   18:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:22:31 compute-0 openstack_network_exporter[279566]: 
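Editor's note: the openstack_network_exporter errors above are unsurprising on a compute node: it looks for ovn-northd and ovsdb-server control sockets that only exist on hosts running those daemons, and the dpif-netdev appctl calls only succeed with a userspace (netdev) datapath, which this host apparently does not use. A hedged sketch of the same socket check; the run directories are the conventional OVS/OVN defaults and are assumptions here, not paths taken from this log.

# Hedged sketch: look for the control sockets the exporter complains about.
# The run directories are conventional defaults, not taken from this log.
import glob

for pattern in ("/var/run/ovn/ovn-northd.*.ctl",
                "/var/run/openvswitch/ovsdb-server.*.ctl",
                "/var/run/openvswitch/ovs-vswitchd.*.ctl"):
    matches = glob.glob(pattern)
    print(pattern, "->", matches or "not found")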
Sep 30 18:22:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:22:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:22:31.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:22:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:32 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380004450 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1317: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:22:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:22:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:22:32.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:22:32 compute-0 ceph-mon[73755]: pgmap v1317: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:22:32 compute-0 nova_compute[265391]: 2025-09-30 18:22:32.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:22:32 compute-0 nova_compute[265391]: 2025-09-30 18:22:32.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:22:32 compute-0 podman[319316]: 2025-09-30 18:22:32.557048838 +0000 UTC m=+0.085153474 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:22:32 compute-0 podman[319315]: 2025-09-30 18:22:32.587324965 +0000 UTC m=+0.117517776 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_controller, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest)
Sep 30 18:22:32 compute-0 nova_compute[265391]: 2025-09-30 18:22:32.591 2 DEBUG nova.compute.manager [None req-6e18150d-dffa-46a7-8f2f-ed62302427f4 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Adding trait COMPUTE_STATUS_DISABLED to compute node resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 in placement. update_compute_provider_status /usr/lib/python3.12/site-packages/nova/compute/manager.py:635
Sep 30 18:22:32 compute-0 nova_compute[265391]: 2025-09-30 18:22:32.653 2 DEBUG nova.compute.provider_tree [None req-6e18150d-dffa-46a7-8f2f-ed62302427f4 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Updating resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 generation from 22 to 24 during operation: update_traits _update_generation /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:164
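Editor's note: the two nova-compute entries above record placement being updated with the COMPUTE_STATUS_DISABLED trait, which nova applies when the compute service on the host is disabled, bumping the resource provider generation from 22 to 24. A hedged way to confirm from a client host; it assumes admin credentials in the environment and the osc-placement CLI plugin, and the provider UUID is copied from the log line.

# Hedged sketch: check the service status and the provider traits nova just
# updated. Assumes admin credentials and the osc-placement plugin are available;
# the provider UUID is copied from the log.
import subprocess

for cmd in (
    ["openstack", "compute", "service", "list", "--service", "nova-compute"],
    ["openstack", "resource", "provider", "trait", "list",
     "5403d2fc-3ca9-4ee2-946b-a15032cca0c2"],
):
    subprocess.run(cmd, check=False)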
Sep 30 18:22:33 compute-0 sudo[319362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:22:33 compute-0 sudo[319362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:22:33 compute-0 sudo[319362]: pam_unix(sudo:session): session closed for user root
Sep 30 18:22:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:33 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:33 compute-0 nova_compute[265391]: 2025-09-30 18:22:33.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:22:33 compute-0 nova_compute[265391]: 2025-09-30 18:22:33.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:22:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:22:33.756Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:22:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:22:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:22:33.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:22:33 compute-0 nova_compute[265391]: 2025-09-30 18:22:33.943 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:22:33 compute-0 nova_compute[265391]: 2025-09-30 18:22:33.944 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:22:33 compute-0 nova_compute[265391]: 2025-09-30 18:22:33.944 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:22:33 compute-0 nova_compute[265391]: 2025-09-30 18:22:33.944 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:22:33 compute-0 nova_compute[265391]: 2025-09-30 18:22:33.945 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:22:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:34 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4002460 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1318: 353 pgs: 353 active+clean; 188 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 117 op/s
Sep 30 18:22:34 compute-0 nova_compute[265391]: 2025-09-30 18:22:34.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:22:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:22:34.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:22:34 compute-0 ceph-mon[73755]: pgmap v1318: 353 pgs: 353 active+clean; 188 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 117 op/s
Sep 30 18:22:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:22:34 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3235448362' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:22:34 compute-0 nova_compute[265391]: 2025-09-30 18:22:34.467 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:22:34 compute-0 podman[319411]: 2025-09-30 18:22:34.520276983 +0000 UTC m=+0.062275309 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:22:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:35 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4000e60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:35 compute-0 nova_compute[265391]: 2025-09-30 18:22:35.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:35 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3235448362' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:22:35 compute-0 nova_compute[265391]: 2025-09-30 18:22:35.519 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:22:35 compute-0 nova_compute[265391]: 2025-09-30 18:22:35.520 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:22:35 compute-0 nova_compute[265391]: 2025-09-30 18:22:35.686 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:22:35 compute-0 nova_compute[265391]: 2025-09-30 18:22:35.688 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:22:35 compute-0 nova_compute[265391]: 2025-09-30 18:22:35.722 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.033s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:22:35 compute-0 nova_compute[265391]: 2025-09-30 18:22:35.723 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4197MB free_disk=39.902034759521484GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:22:35 compute-0 nova_compute[265391]: 2025-09-30 18:22:35.723 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:22:35 compute-0 nova_compute[265391]: 2025-09-30 18:22:35.723 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:22:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:22:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:22:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:22:35.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:22:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:36 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380004450 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1319: 353 pgs: 353 active+clean; 188 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 233 KiB/s rd, 2.0 MiB/s wr, 43 op/s
Sep 30 18:22:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:22:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:22:36.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:22:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3008231978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:22:36 compute-0 ceph-mon[73755]: pgmap v1319: 353 pgs: 353 active+clean; 188 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 233 KiB/s rd, 2.0 MiB/s wr, 43 op/s
Sep 30 18:22:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:37 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:22:37.228Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:22:37 compute-0 nova_compute[265391]: 2025-09-30 18:22:37.277 2 INFO nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Instance 654528f8-f4cf-47a1-950c-aa6142fa5fa7 has allocations against this compute host but is not found in the database.
Sep 30 18:22:37 compute-0 nova_compute[265391]: 2025-09-30 18:22:37.279 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:22:37 compute-0 nova_compute[265391]: 2025-09-30 18:22:37.279 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=39GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:22:35 up  1:25,  0 user,  load average: 1.03, 0.90, 0.88\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_migrating': '1', 'num_os_type_None': '1', 'num_proj_af4ef07c582847419a03275af50c6ffc': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:22:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:22:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:22:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:22:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:22:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:22:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:22:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:22:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:22:37 compute-0 nova_compute[265391]: 2025-09-30 18:22:37.446 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:22:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2408598724' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:22:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2408598724' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:22:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:22:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:22:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2919677126' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:22:37 compute-0 nova_compute[265391]: 2025-09-30 18:22:37.927 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:22:37 compute-0 nova_compute[265391]: 2025-09-30 18:22:37.934 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:22:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:22:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:22:37.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:22:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4009f20 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1320: 353 pgs: 353 active+clean; 188 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 233 KiB/s rd, 2.0 MiB/s wr, 43 op/s
Sep 30 18:22:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:22:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:22:38.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:22:38 compute-0 nova_compute[265391]: 2025-09-30 18:22:38.444 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:22:38 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2919677126' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:22:38 compute-0 ceph-mon[73755]: pgmap v1320: 353 pgs: 353 active+clean; 188 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 233 KiB/s rd, 2.0 MiB/s wr, 43 op/s
Sep 30 18:22:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:22:38] "GET /metrics HTTP/1.1" 200 46650 "" "Prometheus/2.51.0"
Sep 30 18:22:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:22:38] "GET /metrics HTTP/1.1" 200 46650 "" "Prometheus/2.51.0"
Sep 30 18:22:38 compute-0 nova_compute[265391]: 2025-09-30 18:22:38.960 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:22:38 compute-0 nova_compute[265391]: 2025-09-30 18:22:38.961 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.237s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:22:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:39 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4000e60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:39 compute-0 nova_compute[265391]: 2025-09-30 18:22:39.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:39 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1698415441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:22:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:22:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:22:39.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:22:39 compute-0 nova_compute[265391]: 2025-09-30 18:22:39.961 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:22:39 compute-0 nova_compute[265391]: 2025-09-30 18:22:39.962 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:22:39 compute-0 nova_compute[265391]: 2025-09-30 18:22:39.962 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:22:39 compute-0 nova_compute[265391]: 2025-09-30 18:22:39.963 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:22:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:40 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380004450 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1321: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:22:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:22:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:22:40.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:22:40 compute-0 nova_compute[265391]: 2025-09-30 18:22:40.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:40 compute-0 nova_compute[265391]: 2025-09-30 18:22:40.425 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:22:40 compute-0 ceph-mon[73755]: pgmap v1321: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:22:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:22:41 compute-0 nova_compute[265391]: 2025-09-30 18:22:41.043 2 DEBUG nova.virt.libvirt.driver [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Check if temp file /var/lib/nova/instances/tmp8q7mx0yg exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10968
Sep 30 18:22:41 compute-0 nova_compute[265391]: 2025-09-30 18:22:41.048 2 DEBUG nova.compute.manager [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp8q7mx0yg',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='71a2a65c-86a0-4257-9bd1-1cd4e706fb69',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.12/site-packages/nova/compute/manager.py:9294
Sep 30 18:22:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:41 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:22:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:22:41.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:22:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:42 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4009f20 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1322: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:22:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:22:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:22:42.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:22:42 compute-0 ceph-mon[73755]: pgmap v1322: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:22:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:43 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4000e60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:43 compute-0 podman[319462]: 2025-09-30 18:22:43.540649624 +0000 UTC m=+0.069597859 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd)
Sep 30 18:22:43 compute-0 podman[319465]: 2025-09-30 18:22:43.548483638 +0000 UTC m=+0.073474521 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., release=1755695350, config_id=edpm, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc.)
Sep 30 18:22:43 compute-0 podman[319464]: 2025-09-30 18:22:43.571541897 +0000 UTC m=+0.089038475 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Sep 30 18:22:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:22:43.757Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:22:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:22:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:22:43.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:22:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:44 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380004450 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1323: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 18:22:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:22:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:22:44.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:22:44 compute-0 nova_compute[265391]: 2025-09-30 18:22:44.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:44 compute-0 ceph-mon[73755]: pgmap v1323: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 18:22:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:45 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:45 compute-0 nova_compute[265391]: 2025-09-30 18:22:45.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:45 compute-0 nova_compute[265391]: 2025-09-30 18:22:45.470 2 DEBUG nova.compute.manager [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Preparing to wait for external event network-vif-plugged-c08e30da-2028-4b45-9b18-b77d81894e93 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:22:45 compute-0 nova_compute[265391]: 2025-09-30 18:22:45.471 2 DEBUG oslo_concurrency.lockutils [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:22:45 compute-0 nova_compute[265391]: 2025-09-30 18:22:45.471 2 DEBUG oslo_concurrency.lockutils [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:22:45 compute-0 nova_compute[265391]: 2025-09-30 18:22:45.471 2 DEBUG oslo_concurrency.lockutils [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:22:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:22:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:22:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:22:45.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:22:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:46 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4009f20 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1324: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 94 KiB/s rd, 106 KiB/s wr, 21 op/s
Sep 30 18:22:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:22:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:22:46.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:22:46 compute-0 ceph-mon[73755]: pgmap v1324: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 94 KiB/s rd, 106 KiB/s wr, 21 op/s
Sep 30 18:22:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:47 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003a70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:22:47.228Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:22:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:22:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:22:47.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:22:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:48 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384002b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1325: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 94 KiB/s rd, 106 KiB/s wr, 21 op/s
Sep 30 18:22:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:22:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:22:48.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:22:48 compute-0 ceph-mon[73755]: pgmap v1325: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 94 KiB/s rd, 106 KiB/s wr, 21 op/s
Sep 30 18:22:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:22:48] "GET /metrics HTTP/1.1" 200 46650 "" "Prometheus/2.51.0"
Sep 30 18:22:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:22:48] "GET /metrics HTTP/1.1" 200 46650 "" "Prometheus/2.51.0"
Sep 30 18:22:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:49 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:49 compute-0 nova_compute[265391]: 2025-09-30 18:22:49.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:49 compute-0 nova_compute[265391]: 2025-09-30 18:22:49.753 2 DEBUG nova.compute.manager [req-f7e1bd87-2e44-4d2a-b6bb-f382a8e1f72e req-c928fc9c-df55-4948-b97f-773cf5c0cf19 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Received event network-vif-unplugged-c08e30da-2028-4b45-9b18-b77d81894e93 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:22:49 compute-0 nova_compute[265391]: 2025-09-30 18:22:49.754 2 DEBUG oslo_concurrency.lockutils [req-f7e1bd87-2e44-4d2a-b6bb-f382a8e1f72e req-c928fc9c-df55-4948-b97f-773cf5c0cf19 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:22:49 compute-0 nova_compute[265391]: 2025-09-30 18:22:49.755 2 DEBUG oslo_concurrency.lockutils [req-f7e1bd87-2e44-4d2a-b6bb-f382a8e1f72e req-c928fc9c-df55-4948-b97f-773cf5c0cf19 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:22:49 compute-0 nova_compute[265391]: 2025-09-30 18:22:49.755 2 DEBUG oslo_concurrency.lockutils [req-f7e1bd87-2e44-4d2a-b6bb-f382a8e1f72e req-c928fc9c-df55-4948-b97f-773cf5c0cf19 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:22:49 compute-0 nova_compute[265391]: 2025-09-30 18:22:49.755 2 DEBUG nova.compute.manager [req-f7e1bd87-2e44-4d2a-b6bb-f382a8e1f72e req-c928fc9c-df55-4948-b97f-773cf5c0cf19 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] No event matching network-vif-unplugged-c08e30da-2028-4b45-9b18-b77d81894e93 in dict_keys([('network-vif-plugged', 'c08e30da-2028-4b45-9b18-b77d81894e93')]) pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:349
Sep 30 18:22:49 compute-0 nova_compute[265391]: 2025-09-30 18:22:49.756 2 DEBUG nova.compute.manager [req-f7e1bd87-2e44-4d2a-b6bb-f382a8e1f72e req-c928fc9c-df55-4948-b97f-773cf5c0cf19 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Received event network-vif-unplugged-c08e30da-2028-4b45-9b18-b77d81894e93 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:22:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:22:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:22:49.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:22:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:50 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4009f20 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1326: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 94 KiB/s rd, 107 KiB/s wr, 21 op/s
Sep 30 18:22:50 compute-0 nova_compute[265391]: 2025-09-30 18:22:50.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:22:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:22:50.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:22:50 compute-0 ceph-mon[73755]: pgmap v1326: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 94 KiB/s rd, 107 KiB/s wr, 21 op/s
Sep 30 18:22:50 compute-0 ovn_controller[156242]: 2025-09-30T18:22:50Z|00127|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Sep 30 18:22:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:22:50 compute-0 nova_compute[265391]: 2025-09-30 18:22:50.986 2 INFO nova.compute.manager [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Took 5.51 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Sep 30 18:22:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:51 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003a70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:51 compute-0 nova_compute[265391]: 2025-09-30 18:22:51.856 2 DEBUG nova.compute.manager [req-cf5b3c5c-fae4-430d-8b62-9e2a0f604b04 req-7b80a9a4-e7c5-4fe2-9b1d-969e856507da 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Received event network-vif-plugged-c08e30da-2028-4b45-9b18-b77d81894e93 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:22:51 compute-0 nova_compute[265391]: 2025-09-30 18:22:51.857 2 DEBUG oslo_concurrency.lockutils [req-cf5b3c5c-fae4-430d-8b62-9e2a0f604b04 req-7b80a9a4-e7c5-4fe2-9b1d-969e856507da 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:22:51 compute-0 nova_compute[265391]: 2025-09-30 18:22:51.857 2 DEBUG oslo_concurrency.lockutils [req-cf5b3c5c-fae4-430d-8b62-9e2a0f604b04 req-7b80a9a4-e7c5-4fe2-9b1d-969e856507da 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:22:51 compute-0 nova_compute[265391]: 2025-09-30 18:22:51.858 2 DEBUG oslo_concurrency.lockutils [req-cf5b3c5c-fae4-430d-8b62-9e2a0f604b04 req-7b80a9a4-e7c5-4fe2-9b1d-969e856507da 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:22:51 compute-0 nova_compute[265391]: 2025-09-30 18:22:51.858 2 DEBUG nova.compute.manager [req-cf5b3c5c-fae4-430d-8b62-9e2a0f604b04 req-7b80a9a4-e7c5-4fe2-9b1d-969e856507da 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Processing event network-vif-plugged-c08e30da-2028-4b45-9b18-b77d81894e93 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:22:51 compute-0 nova_compute[265391]: 2025-09-30 18:22:51.858 2 DEBUG nova.compute.manager [req-cf5b3c5c-fae4-430d-8b62-9e2a0f604b04 req-7b80a9a4-e7c5-4fe2-9b1d-969e856507da 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Received event network-changed-c08e30da-2028-4b45-9b18-b77d81894e93 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:22:51 compute-0 nova_compute[265391]: 2025-09-30 18:22:51.859 2 DEBUG nova.compute.manager [req-cf5b3c5c-fae4-430d-8b62-9e2a0f604b04 req-7b80a9a4-e7c5-4fe2-9b1d-969e856507da 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Refreshing instance network info cache due to event network-changed-c08e30da-2028-4b45-9b18-b77d81894e93. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:22:51 compute-0 nova_compute[265391]: 2025-09-30 18:22:51.859 2 DEBUG oslo_concurrency.lockutils [req-cf5b3c5c-fae4-430d-8b62-9e2a0f604b04 req-7b80a9a4-e7c5-4fe2-9b1d-969e856507da 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-71a2a65c-86a0-4257-9bd1-1cd4e706fb69" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:22:51 compute-0 nova_compute[265391]: 2025-09-30 18:22:51.859 2 DEBUG oslo_concurrency.lockutils [req-cf5b3c5c-fae4-430d-8b62-9e2a0f604b04 req-7b80a9a4-e7c5-4fe2-9b1d-969e856507da 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-71a2a65c-86a0-4257-9bd1-1cd4e706fb69" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:22:51 compute-0 nova_compute[265391]: 2025-09-30 18:22:51.859 2 DEBUG nova.network.neutron [req-cf5b3c5c-fae4-430d-8b62-9e2a0f604b04 req-7b80a9a4-e7c5-4fe2-9b1d-969e856507da 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Refreshing network info cache for port c08e30da-2028-4b45-9b18-b77d81894e93 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:22:51 compute-0 nova_compute[265391]: 2025-09-30 18:22:51.861 2 DEBUG nova.compute.manager [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:22:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:22:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:22:51.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:22:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:52 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384002b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:22:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:22:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1327: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 16 KiB/s wr, 1 op/s
Sep 30 18:22:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:22:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:22:52.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:22:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:22:52 compute-0 nova_compute[265391]: 2025-09-30 18:22:52.370 2 WARNING neutronclient.v2_0.client [req-cf5b3c5c-fae4-430d-8b62-9e2a0f604b04 req-7b80a9a4-e7c5-4fe2-9b1d-969e856507da 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:22:52 compute-0 nova_compute[265391]: 2025-09-30 18:22:52.375 2 DEBUG nova.compute.manager [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp8q7mx0yg',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='71a2a65c-86a0-4257-9bd1-1cd4e706fb69',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(654528f8-f4cf-47a1-950c-aa6142fa5fa7),old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9659
Sep 30 18:22:52 compute-0 nova_compute[265391]: 2025-09-30 18:22:52.380 2 DEBUG nova.objects.instance [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lazy-loading 'migration_context' on Instance uuid 71a2a65c-86a0-4257-9bd1-1cd4e706fb69 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:22:52 compute-0 nova_compute[265391]: 2025-09-30 18:22:52.381 2 DEBUG nova.virt.libvirt.driver [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Starting monitoring of live migration _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11543
Sep 30 18:22:52 compute-0 nova_compute[265391]: 2025-09-30 18:22:52.383 2 DEBUG nova.virt.libvirt.driver [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Sep 30 18:22:52 compute-0 nova_compute[265391]: 2025-09-30 18:22:52.383 2 DEBUG nova.virt.libvirt.driver [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Sep 30 18:22:52 compute-0 nova_compute[265391]: 2025-09-30 18:22:52.885 2 DEBUG nova.virt.libvirt.driver [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Sep 30 18:22:52 compute-0 nova_compute[265391]: 2025-09-30 18:22:52.886 2 DEBUG nova.virt.libvirt.driver [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Sep 30 18:22:52 compute-0 nova_compute[265391]: 2025-09-30 18:22:52.901 2 DEBUG nova.virt.libvirt.vif [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:21:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-1057470913',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-1057470913',id=14,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:21:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='af4ef07c582847419a03275af50c6ffc',ramdisk_id='',reservation_id='r-1zyfmf1o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-1597156537',owner_user_name='tempest-TestExecuteHostMaintenanceStrategy-1597156537-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:21:59Z,user_data=None,user_id='57be6c3d2e0d431dae0127ac659de1e0',uuid=71a2a65c-86a0-4257-9bd1-1cd4e706fb69,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c08e30da-2028-4b45-9b18-b77d81894e93", "address": "fa:16:3e:ab:60:7a", "network": {"id": "443be7ca-f628-4a45-95b6-620d37172d7b", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-1888091317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "269f60e72ce1460a98da519466c89da6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapc08e30da-20", "ovs_interfaceid": "c08e30da-2028-4b45-9b18-b77d81894e93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
Sep 30 18:22:52 compute-0 nova_compute[265391]: /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:22:52 compute-0 nova_compute[265391]: 2025-09-30 18:22:52.902 2 DEBUG nova.network.os_vif_util [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "c08e30da-2028-4b45-9b18-b77d81894e93", "address": "fa:16:3e:ab:60:7a", "network": {"id": "443be7ca-f628-4a45-95b6-620d37172d7b", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-1888091317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "269f60e72ce1460a98da519466c89da6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapc08e30da-20", "ovs_interfaceid": "c08e30da-2028-4b45-9b18-b77d81894e93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:22:52 compute-0 nova_compute[265391]: 2025-09-30 18:22:52.902 2 DEBUG nova.network.os_vif_util [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:60:7a,bridge_name='br-int',has_traffic_filtering=True,id=c08e30da-2028-4b45-9b18-b77d81894e93,network=Network(443be7ca-f628-4a45-95b6-620d37172d7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc08e30da-20') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:22:52 compute-0 nova_compute[265391]: 2025-09-30 18:22:52.903 2 DEBUG nova.virt.libvirt.migration [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Updating guest XML with vif config: <interface type="ethernet">
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <mac address="fa:16:3e:ab:60:7a"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <model type="virtio"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <mtu size="1442"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <target dev="tapc08e30da-20"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]: </interface>
Sep 30 18:22:52 compute-0 nova_compute[265391]:  _update_vif_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:534
Sep 30 18:22:52 compute-0 nova_compute[265391]: 2025-09-30 18:22:52.904 2 DEBUG nova.virt.libvirt.migration [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _remove_cpu_shared_set_xml input xml=<domain type="kvm">
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <name>instance-0000000e</name>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <uuid>71a2a65c-86a0-4257-9bd1-1cd4e706fb69</uuid>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteHostMaintenanceStrategy-server-1057470913</nova:name>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:21:52</nova:creationTime>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:22:52 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:22:52 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:user uuid="57be6c3d2e0d431dae0127ac659de1e0">tempest-TestExecuteHostMaintenanceStrategy-1597156537-project-admin</nova:user>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:project uuid="af4ef07c582847419a03275af50c6ffc">tempest-TestExecuteHostMaintenanceStrategy-1597156537</nova:project>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:port uuid="c08e30da-2028-4b45-9b18-b77d81894e93">
Sep 30 18:22:52 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <system>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <entry name="serial">71a2a65c-86a0-4257-9bd1-1cd4e706fb69</entry>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <entry name="uuid">71a2a65c-86a0-4257-9bd1-1cd4e706fb69</entry>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </system>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <os>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   </os>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <features>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   </features>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/71a2a65c-86a0-4257-9bd1-1cd4e706fb69_disk">
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       </source>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/71a2a65c-86a0-4257-9bd1-1cd4e706fb69_disk.config">
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       </source>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <interface type="ethernet"><mac address="fa:16:3e:ab:60:7a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc08e30da-20"/><address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </interface><serial type="pty">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/71a2a65c-86a0-4257-9bd1-1cd4e706fb69/console.log" append="off"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       </target>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/71a2a65c-86a0-4257-9bd1-1cd4e706fb69/console.log" append="off"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </console>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </input>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <video>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </video>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]: </domain>
Sep 30 18:22:52 compute-0 nova_compute[265391]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:241
Sep 30 18:22:52 compute-0 nova_compute[265391]: 2025-09-30 18:22:52.904 2 DEBUG nova.virt.libvirt.migration [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _remove_cpu_shared_set_xml output xml=<domain type="kvm">
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <name>instance-0000000e</name>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <uuid>71a2a65c-86a0-4257-9bd1-1cd4e706fb69</uuid>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteHostMaintenanceStrategy-server-1057470913</nova:name>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:21:52</nova:creationTime>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:22:52 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:22:52 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:user uuid="57be6c3d2e0d431dae0127ac659de1e0">tempest-TestExecuteHostMaintenanceStrategy-1597156537-project-admin</nova:user>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:project uuid="af4ef07c582847419a03275af50c6ffc">tempest-TestExecuteHostMaintenanceStrategy-1597156537</nova:project>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:port uuid="c08e30da-2028-4b45-9b18-b77d81894e93">
Sep 30 18:22:52 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <system>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <entry name="serial">71a2a65c-86a0-4257-9bd1-1cd4e706fb69</entry>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <entry name="uuid">71a2a65c-86a0-4257-9bd1-1cd4e706fb69</entry>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </system>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <os>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   </os>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <features>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   </features>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/71a2a65c-86a0-4257-9bd1-1cd4e706fb69_disk">
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       </source>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/71a2a65c-86a0-4257-9bd1-1cd4e706fb69_disk.config">
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       </source>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <interface type="ethernet"><mac address="fa:16:3e:ab:60:7a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc08e30da-20"/><address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </interface><serial type="pty">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/71a2a65c-86a0-4257-9bd1-1cd4e706fb69/console.log" append="off"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       </target>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/71a2a65c-86a0-4257-9bd1-1cd4e706fb69/console.log" append="off"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </console>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </input>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <video>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </video>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]: </domain>
Sep 30 18:22:52 compute-0 nova_compute[265391]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:250
Sep 30 18:22:52 compute-0 nova_compute[265391]: 2025-09-30 18:22:52.904 2 DEBUG nova.virt.libvirt.migration [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _update_pci_xml output xml=<domain type="kvm">
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <name>instance-0000000e</name>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <uuid>71a2a65c-86a0-4257-9bd1-1cd4e706fb69</uuid>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteHostMaintenanceStrategy-server-1057470913</nova:name>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:21:52</nova:creationTime>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:22:52 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:22:52 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:user uuid="57be6c3d2e0d431dae0127ac659de1e0">tempest-TestExecuteHostMaintenanceStrategy-1597156537-project-admin</nova:user>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:project uuid="af4ef07c582847419a03275af50c6ffc">tempest-TestExecuteHostMaintenanceStrategy-1597156537</nova:project>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <nova:port uuid="c08e30da-2028-4b45-9b18-b77d81894e93">
Sep 30 18:22:52 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <system>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <entry name="serial">71a2a65c-86a0-4257-9bd1-1cd4e706fb69</entry>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <entry name="uuid">71a2a65c-86a0-4257-9bd1-1cd4e706fb69</entry>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </system>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <os>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   </os>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <features>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   </features>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/71a2a65c-86a0-4257-9bd1-1cd4e706fb69_disk">
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       </source>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/71a2a65c-86a0-4257-9bd1-1cd4e706fb69_disk.config">
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       </source>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:ab:60:7a"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target dev="tapc08e30da-20"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/71a2a65c-86a0-4257-9bd1-1cd4e706fb69/console.log" append="off"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:22:52 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       </target>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/71a2a65c-86a0-4257-9bd1-1cd4e706fb69/console.log" append="off"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </console>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </input>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <video>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </video>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:22:52 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:22:52 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:22:52 compute-0 nova_compute[265391]: </domain>
Sep 30 18:22:52 compute-0 nova_compute[265391]:  _update_pci_dev_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:166
Sep 30 18:22:52 compute-0 nova_compute[265391]: 2025-09-30 18:22:52.905 2 DEBUG nova.virt.libvirt.driver [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] About to invoke the migrate API _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11175
Sep 30 18:22:53 compute-0 sudo[319535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:22:53 compute-0 sudo[319535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:22:53 compute-0 sudo[319535]: pam_unix(sudo:session): session closed for user root
Sep 30 18:22:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:53 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:53 compute-0 ceph-mon[73755]: pgmap v1327: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 16 KiB/s wr, 1 op/s
Sep 30 18:22:53 compute-0 nova_compute[265391]: 2025-09-30 18:22:53.388 2 DEBUG nova.virt.libvirt.migration [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Current None elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Sep 30 18:22:53 compute-0 nova_compute[265391]: 2025-09-30 18:22:53.389 2 INFO nova.virt.libvirt.migration [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Increasing downtime to 50 ms after 0 sec elapsed time
Sep 30 18:22:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:22:53.758Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:22:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:22:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:22:53.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:22:54 compute-0 sudo[319562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:22:54 compute-0 sudo[319562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:22:54 compute-0 sudo[319562]: pam_unix(sudo:session): session closed for user root
Sep 30 18:22:54 compute-0 sudo[319587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Sep 30 18:22:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:54 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b4009f20 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:54 compute-0 sudo[319587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:22:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:22:54.300 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:22:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:22:54.301 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:22:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:22:54.301 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:22:54 compute-0 nova_compute[265391]: 2025-09-30 18:22:54.299 2 WARNING neutronclient.v2_0.client [req-cf5b3c5c-fae4-430d-8b62-9e2a0f604b04 req-7b80a9a4-e7c5-4fe2-9b1d-969e856507da 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:22:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1328: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 17 KiB/s wr, 1 op/s
Sep 30 18:22:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:22:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:22:54.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:22:54 compute-0 nova_compute[265391]: 2025-09-30 18:22:54.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:54 compute-0 ceph-mon[73755]: pgmap v1328: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 17 KiB/s wr, 1 op/s
Sep 30 18:22:54 compute-0 nova_compute[265391]: 2025-09-30 18:22:54.407 2 INFO nova.virt.libvirt.driver [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Migration running for 1 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Sep 30 18:22:54 compute-0 podman[319686]: 2025-09-30 18:22:54.742618587 +0000 UTC m=+0.067831224 container exec 28cb2903608cec8e7c5a39aa3f398cadea7d807beef2cb8cca333795d18a1341 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:22:54 compute-0 podman[319686]: 2025-09-30 18:22:54.87086563 +0000 UTC m=+0.196078277 container exec_died 28cb2903608cec8e7c5a39aa3f398cadea7d807beef2cb8cca333795d18a1341 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:22:54 compute-0 nova_compute[265391]: 2025-09-30 18:22:54.912 2 DEBUG nova.virt.libvirt.migration [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Sep 30 18:22:54 compute-0 nova_compute[265391]: 2025-09-30 18:22:54.913 2 DEBUG nova.virt.libvirt.migration [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Downtime does not need to change update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:671
Sep 30 18:22:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:55 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003a70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:55 compute-0 nova_compute[265391]: 2025-09-30 18:22:55.270 2 DEBUG nova.network.neutron [req-cf5b3c5c-fae4-430d-8b62-9e2a0f604b04 req-7b80a9a4-e7c5-4fe2-9b1d-969e856507da 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Updated VIF entry in instance network info cache for port c08e30da-2028-4b45-9b18-b77d81894e93. _build_network_info_model /usr/lib/python3.12/site-packages/nova/network/neutron.py:3542
Sep 30 18:22:55 compute-0 nova_compute[265391]: 2025-09-30 18:22:55.271 2 DEBUG nova.network.neutron [req-cf5b3c5c-fae4-430d-8b62-9e2a0f604b04 req-7b80a9a4-e7c5-4fe2-9b1d-969e856507da 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Updating instance_info_cache with network_info: [{"id": "c08e30da-2028-4b45-9b18-b77d81894e93", "address": "fa:16:3e:ab:60:7a", "network": {"id": "443be7ca-f628-4a45-95b6-620d37172d7b", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-1888091317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "269f60e72ce1460a98da519466c89da6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc08e30da-20", "ovs_interfaceid": "c08e30da-2028-4b45-9b18-b77d81894e93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:22:55 compute-0 nova_compute[265391]: 2025-09-30 18:22:55.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:55 compute-0 nova_compute[265391]: 2025-09-30 18:22:55.416 2 DEBUG nova.virt.libvirt.migration [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Current 50 elapsed 3 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Sep 30 18:22:55 compute-0 nova_compute[265391]: 2025-09-30 18:22:55.416 2 DEBUG nova.virt.libvirt.migration [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Downtime does not need to change update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:671
Sep 30 18:22:55 compute-0 podman[319809]: 2025-09-30 18:22:55.422066396 +0000 UTC m=+0.063826640 container exec 0a64c261699b14850432928cca94e24743e97435fdc1d2cca08edcee1f870789 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:22:55 compute-0 podman[319809]: 2025-09-30 18:22:55.43375374 +0000 UTC m=+0.075513964 container exec_died 0a64c261699b14850432928cca94e24743e97435fdc1d2cca08edcee1f870789 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:22:55 compute-0 kernel: tapc08e30da-20 (unregistering): left promiscuous mode
Sep 30 18:22:55 compute-0 NetworkManager[45059]: <info>  [1759256575.4911] device (tapc08e30da-20): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 18:22:55 compute-0 ovn_controller[156242]: 2025-09-30T18:22:55Z|00128|binding|INFO|Releasing lport c08e30da-2028-4b45-9b18-b77d81894e93 from this chassis (sb_readonly=0)
Sep 30 18:22:55 compute-0 ovn_controller[156242]: 2025-09-30T18:22:55Z|00129|binding|INFO|Setting lport c08e30da-2028-4b45-9b18-b77d81894e93 down in Southbound
Sep 30 18:22:55 compute-0 ovn_controller[156242]: 2025-09-30T18:22:55Z|00130|binding|INFO|Removing iface tapc08e30da-20 ovn-installed in OVS
Sep 30 18:22:55 compute-0 nova_compute[265391]: 2025-09-30 18:22:55.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:55 compute-0 nova_compute[265391]: 2025-09-30 18:22:55.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:55 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:22:55.512 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:60:7a 10.100.0.12'], port_security=['fa:16:3e:ab:60:7a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '81ab3fff-d6d4-4262-9f24-1b212876e52c'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '71a2a65c-86a0-4257-9bd1-1cd4e706fb69', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-443be7ca-f628-4a45-95b6-620d37172d7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'af4ef07c582847419a03275af50c6ffc', 'neutron:revision_number': '10', 'neutron:security_group_ids': '518a9c00-28f9-47ab-a122-e672192eedea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=96eb21b8-879c-4e72-963b-37e37ae3d0c5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=c08e30da-2028-4b45-9b18-b77d81894e93) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:22:55 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:22:55.513 166158 INFO neutron.agent.ovn.metadata.agent [-] Port c08e30da-2028-4b45-9b18-b77d81894e93 in datapath 443be7ca-f628-4a45-95b6-620d37172d7b unbound from our chassis
Sep 30 18:22:55 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:22:55.514 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 443be7ca-f628-4a45-95b6-620d37172d7b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:22:55 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:22:55.515 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[618e01ca-17db-44a7-a0c8-939c7a353f2d]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:22:55 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:22:55.515 166158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b namespace which is not needed anymore
Sep 30 18:22:55 compute-0 nova_compute[265391]: 2025-09-30 18:22:55.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:55 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Sep 30 18:22:55 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000e.scope: Consumed 15.450s CPU time.
Sep 30 18:22:55 compute-0 systemd-machined[219917]: Machine qemu-10-instance-0000000e terminated.
Sep 30 18:22:55 compute-0 virtqemud[265263]: Unable to get XATTR trusted.libvirt.security.ref_selinux on 71a2a65c-86a0-4257-9bd1-1cd4e706fb69_disk: No such file or directory
Sep 30 18:22:55 compute-0 virtqemud[265263]: Unable to get XATTR trusted.libvirt.security.ref_dac on 71a2a65c-86a0-4257-9bd1-1cd4e706fb69_disk: No such file or directory
Sep 30 18:22:55 compute-0 neutron-haproxy-ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b[319105]: [NOTICE]   (319109) : haproxy version is 3.0.5-8e879a5
Sep 30 18:22:55 compute-0 neutron-haproxy-ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b[319105]: [NOTICE]   (319109) : path to executable is /usr/sbin/haproxy
Sep 30 18:22:55 compute-0 neutron-haproxy-ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b[319105]: [WARNING]  (319109) : Exiting Master process...
Sep 30 18:22:55 compute-0 podman[319899]: 2025-09-30 18:22:55.64652895 +0000 UTC m=+0.032149957 container kill a973011ad361748e9d87f253aaa1e9d63bab9b84ad7a630c27e00fea30b64568 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:22:55 compute-0 neutron-haproxy-ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b[319105]: [ALERT]    (319109) : Current worker (319111) exited with code 143 (Terminated)
Sep 30 18:22:55 compute-0 neutron-haproxy-ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b[319105]: [WARNING]  (319109) : All workers exited. Exiting... (0)
Sep 30 18:22:55 compute-0 systemd[1]: libpod-a973011ad361748e9d87f253aaa1e9d63bab9b84ad7a630c27e00fea30b64568.scope: Deactivated successfully.
Sep 30 18:22:55 compute-0 nova_compute[265391]: 2025-09-30 18:22:55.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:55 compute-0 nova_compute[265391]: 2025-09-30 18:22:55.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:55 compute-0 nova_compute[265391]: 2025-09-30 18:22:55.676 2 DEBUG nova.virt.libvirt.driver [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Migrate API has completed _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11182
Sep 30 18:22:55 compute-0 nova_compute[265391]: 2025-09-30 18:22:55.677 2 DEBUG nova.virt.libvirt.driver [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Migration operation thread has finished _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11230
Sep 30 18:22:55 compute-0 nova_compute[265391]: 2025-09-30 18:22:55.677 2 DEBUG nova.virt.libvirt.driver [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Migration operation thread notification thread_finished /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11533
Sep 30 18:22:55 compute-0 podman[319929]: 2025-09-30 18:22:55.698563912 +0000 UTC m=+0.027988168 container died a973011ad361748e9d87f253aaa1e9d63bab9b84ad7a630c27e00fea30b64568 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:22:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a973011ad361748e9d87f253aaa1e9d63bab9b84ad7a630c27e00fea30b64568-userdata-shm.mount: Deactivated successfully.
Sep 30 18:22:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-36caf887c8f80ff07d13b1097e2204126f87daa6cd27af25b7f4dfa48a844a61-merged.mount: Deactivated successfully.
Sep 30 18:22:55 compute-0 podman[319929]: 2025-09-30 18:22:55.739077886 +0000 UTC m=+0.068502132 container cleanup a973011ad361748e9d87f253aaa1e9d63bab9b84ad7a630c27e00fea30b64568 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Sep 30 18:22:55 compute-0 systemd[1]: libpod-conmon-a973011ad361748e9d87f253aaa1e9d63bab9b84ad7a630c27e00fea30b64568.scope: Deactivated successfully.
Sep 30 18:22:55 compute-0 podman[319934]: 2025-09-30 18:22:55.761593081 +0000 UTC m=+0.079759134 container remove a973011ad361748e9d87f253aaa1e9d63bab9b84ad7a630c27e00fea30b64568 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4)
Sep 30 18:22:55 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:22:55.769 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[151d6d2a-beb6-4558-a4a3-ede570344b88]: (4, ("Tue Sep 30 06:22:55 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b (a973011ad361748e9d87f253aaa1e9d63bab9b84ad7a630c27e00fea30b64568)\na973011ad361748e9d87f253aaa1e9d63bab9b84ad7a630c27e00fea30b64568\nTue Sep 30 06:22:55 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b (a973011ad361748e9d87f253aaa1e9d63bab9b84ad7a630c27e00fea30b64568)\na973011ad361748e9d87f253aaa1e9d63bab9b84ad7a630c27e00fea30b64568\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:22:55 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:22:55.772 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[813a8ed6-d4f5-41ac-9629-55bded3f9bea]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:22:55 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:22:55.773 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/443be7ca-f628-4a45-95b6-620d37172d7b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/443be7ca-f628-4a45-95b6-620d37172d7b.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:22:55 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:22:55.773 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[3a35505f-839e-4fee-a912-ff80b464e248]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:22:55 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:22:55.774 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap443be7ca-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:22:55 compute-0 nova_compute[265391]: 2025-09-30 18:22:55.779 2 DEBUG oslo_concurrency.lockutils [req-cf5b3c5c-fae4-430d-8b62-9e2a0f604b04 req-7b80a9a4-e7c5-4fe2-9b1d-969e856507da 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-71a2a65c-86a0-4257-9bd1-1cd4e706fb69" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:22:55 compute-0 nova_compute[265391]: 2025-09-30 18:22:55.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:22:55 compute-0 kernel: tap443be7ca-f0: left promiscuous mode
Sep 30 18:22:55 compute-0 nova_compute[265391]: 2025-09-30 18:22:55.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:55 compute-0 podman[319977]: 2025-09-30 18:22:55.828996112 +0000 UTC m=+0.085917984 container exec cc4f545686f6a742167a6aa20553cb06d2dd8d75a807947df3eaf52b21deff1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:22:55 compute-0 nova_compute[265391]: 2025-09-30 18:22:55.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:55 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:22:55.833 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[8dabce1a-d433-4111-ac36-2af151ae07ac]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:22:55 compute-0 podman[319977]: 2025-09-30 18:22:55.871785705 +0000 UTC m=+0.128707557 container exec_died cc4f545686f6a742167a6aa20553cb06d2dd8d75a807947df3eaf52b21deff1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:22:55 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:22:55.871 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[9c44b9ec-6c5a-4d06-b593-7f5bd425bad7]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:22:55 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:22:55.873 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[f993f9a3-5cf8-4180-91dd-fcae03c3fc3f]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:22:55 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:22:55.891 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[0d81c77e-6289-4e7c-8f96-7f61c887461b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 511641, 'reachable_time': 44498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320016, 'error': None, 'target': 'ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:22:55 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:22:55.895 166383 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-443be7ca-f628-4a45-95b6-620d37172d7b deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Sep 30 18:22:55 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:22:55.895 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[8c156260-892e-4b63-b086-557edd13225a]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:22:55 compute-0 systemd[1]: run-netns-ovnmeta\x2d443be7ca\x2df628\x2d4a45\x2d95b6\x2d620d37172d7b.mount: Deactivated successfully.
Sep 30 18:22:55 compute-0 nova_compute[265391]: 2025-09-30 18:22:55.919 2 DEBUG nova.virt.libvirt.guest [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid '71a2a65c-86a0-4257-9bd1-1cd4e706fb69' (instance-0000000e) get_job_info /usr/lib/python3.12/site-packages/nova/virt/libvirt/guest.py:687
Sep 30 18:22:55 compute-0 nova_compute[265391]: 2025-09-30 18:22:55.920 2 INFO nova.virt.libvirt.driver [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Migration operation has completed
Sep 30 18:22:55 compute-0 nova_compute[265391]: 2025-09-30 18:22:55.920 2 INFO nova.compute.manager [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] _post_live_migration() is started..
Sep 30 18:22:55 compute-0 nova_compute[265391]: 2025-09-30 18:22:55.937 2 WARNING neutronclient.v2_0.client [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:22:55 compute-0 nova_compute[265391]: 2025-09-30 18:22:55.937 2 WARNING neutronclient.v2_0.client [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:22:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:22:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:22:55.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:22:56 compute-0 podman[320050]: 2025-09-30 18:22:56.111745261 +0000 UTC m=+0.059103027 container exec e1cdc51276e16f535030605d2a18f629e9fe31bf73cf0a7411e18214526b68e5 (image=quay.io/ceph/haproxy:2.3, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha)
Sep 30 18:22:56 compute-0 podman[320050]: 2025-09-30 18:22:56.119652117 +0000 UTC m=+0.067009883 container exec_died e1cdc51276e16f535030605d2a18f629e9fe31bf73cf0a7411e18214526b68e5 (image=quay.io/ceph/haproxy:2.3, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha)
Sep 30 18:22:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:56 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384002b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1329: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 2.0 KiB/s wr, 0 op/s
Sep 30 18:22:56 compute-0 nova_compute[265391]: 2025-09-30 18:22:56.358 2 DEBUG nova.compute.manager [req-58f58502-fea8-49a4-8881-74da02251029 req-6fc76f59-eda8-409c-af63-462abf174d7d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Received event network-vif-unplugged-c08e30da-2028-4b45-9b18-b77d81894e93 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:22:56 compute-0 nova_compute[265391]: 2025-09-30 18:22:56.359 2 DEBUG oslo_concurrency.lockutils [req-58f58502-fea8-49a4-8881-74da02251029 req-6fc76f59-eda8-409c-af63-462abf174d7d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:22:56 compute-0 nova_compute[265391]: 2025-09-30 18:22:56.359 2 DEBUG oslo_concurrency.lockutils [req-58f58502-fea8-49a4-8881-74da02251029 req-6fc76f59-eda8-409c-af63-462abf174d7d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:22:56 compute-0 nova_compute[265391]: 2025-09-30 18:22:56.359 2 DEBUG oslo_concurrency.lockutils [req-58f58502-fea8-49a4-8881-74da02251029 req-6fc76f59-eda8-409c-af63-462abf174d7d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:22:56 compute-0 nova_compute[265391]: 2025-09-30 18:22:56.359 2 DEBUG nova.compute.manager [req-58f58502-fea8-49a4-8881-74da02251029 req-6fc76f59-eda8-409c-af63-462abf174d7d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] No waiting events found dispatching network-vif-unplugged-c08e30da-2028-4b45-9b18-b77d81894e93 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:22:56 compute-0 nova_compute[265391]: 2025-09-30 18:22:56.359 2 DEBUG nova.compute.manager [req-58f58502-fea8-49a4-8881-74da02251029 req-6fc76f59-eda8-409c-af63-462abf174d7d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Received event network-vif-unplugged-c08e30da-2028-4b45-9b18-b77d81894e93 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:22:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:22:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:22:56.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:22:56 compute-0 podman[320117]: 2025-09-30 18:22:56.366941464 +0000 UTC m=+0.066743686 container exec b5b34e9f9a1e5f8a3081bfbcbc3f8d4401b0873cf75447793762597a4ea89ff4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc, description=keepalived for Ceph, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.28.2, name=keepalived, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived)
Sep 30 18:22:56 compute-0 podman[320117]: 2025-09-30 18:22:56.380893177 +0000 UTC m=+0.080695409 container exec_died b5b34e9f9a1e5f8a3081bfbcbc3f8d4401b0873cf75447793762597a4ea89ff4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, release=1793, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, com.redhat.component=keepalived-container, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, distribution-scope=public, vcs-type=git, version=2.2.4)
Sep 30 18:22:56 compute-0 ceph-mon[73755]: pgmap v1329: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 2.0 KiB/s wr, 0 op/s
Sep 30 18:22:56 compute-0 podman[320182]: 2025-09-30 18:22:56.589406876 +0000 UTC m=+0.053742608 container exec 316307af09cabd347a032cf06e9544ea621ab1be46fe52d1c226ab05d1d9e331 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:22:56 compute-0 podman[320182]: 2025-09-30 18:22:56.615460393 +0000 UTC m=+0.079796105 container exec_died 316307af09cabd347a032cf06e9544ea621ab1be46fe52d1c226ab05d1d9e331 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:22:56 compute-0 podman[320256]: 2025-09-30 18:22:56.825624404 +0000 UTC m=+0.049229390 container exec cb74c136339a4f8b8601bbbbeb0c8f6158501bcf17f6fc4369db4a5ec1b442a2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 18:22:57 compute-0 podman[320256]: 2025-09-30 18:22:57.0174496 +0000 UTC m=+0.241054596 container exec_died cb74c136339a4f8b8601bbbbeb0c8f6158501bcf17f6fc4369db4a5ec1b442a2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 18:22:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:57 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:22:57.229Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:22:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:22:57.230Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:22:57 compute-0 nova_compute[265391]: 2025-09-30 18:22:57.285 2 DEBUG nova.network.neutron [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Activated binding for port c08e30da-2028-4b45-9b18-b77d81894e93 and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.12/site-packages/nova/network/neutron.py:3241
Sep 30 18:22:57 compute-0 nova_compute[265391]: 2025-09-30 18:22:57.286 2 DEBUG nova.compute.manager [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "c08e30da-2028-4b45-9b18-b77d81894e93", "address": "fa:16:3e:ab:60:7a", "network": {"id": "443be7ca-f628-4a45-95b6-620d37172d7b", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-1888091317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "269f60e72ce1460a98da519466c89da6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc08e30da-20", "ovs_interfaceid": "c08e30da-2028-4b45-9b18-b77d81894e93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10059
Sep 30 18:22:57 compute-0 nova_compute[265391]: 2025-09-30 18:22:57.287 2 DEBUG nova.virt.libvirt.vif [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:21:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-1057470913',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-1057470913',id=14,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:21:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='af4ef07c582847419a03275af50c6ffc',ramdisk_id='',reservation_id='r-1zyfmf1o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-1597156537',owner_user_name='tempest-TestExecuteHostMaintenanceStrategy-1597156537-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:22:35Z,user_data=None,user_id='57be6c3d2e0d431dae0127ac659de1e0',uuid=71a2a65c-86a0-4257-9bd1-1cd4e706fb69,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c08e30da-2028-4b45-9b18-b77d81894e93", "address": "fa:16:3e:ab:60:7a", "network": {"id": "443be7ca-f628-4a45-95b6-620d37172d7b", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-1888091317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "269f60e72ce1460a98da519466c89da6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc08e30da-20", "ovs_interfaceid": "c08e30da-2028-4b45-9b18-b77d81894e93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug 
/usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Sep 30 18:22:57 compute-0 nova_compute[265391]: 2025-09-30 18:22:57.287 2 DEBUG nova.network.os_vif_util [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "c08e30da-2028-4b45-9b18-b77d81894e93", "address": "fa:16:3e:ab:60:7a", "network": {"id": "443be7ca-f628-4a45-95b6-620d37172d7b", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-1888091317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "269f60e72ce1460a98da519466c89da6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc08e30da-20", "ovs_interfaceid": "c08e30da-2028-4b45-9b18-b77d81894e93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:22:57 compute-0 nova_compute[265391]: 2025-09-30 18:22:57.288 2 DEBUG nova.network.os_vif_util [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:60:7a,bridge_name='br-int',has_traffic_filtering=True,id=c08e30da-2028-4b45-9b18-b77d81894e93,network=Network(443be7ca-f628-4a45-95b6-620d37172d7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc08e30da-20') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:22:57 compute-0 nova_compute[265391]: 2025-09-30 18:22:57.288 2 DEBUG os_vif [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:60:7a,bridge_name='br-int',has_traffic_filtering=True,id=c08e30da-2028-4b45-9b18-b77d81894e93,network=Network(443be7ca-f628-4a45-95b6-620d37172d7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc08e30da-20') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Sep 30 18:22:57 compute-0 nova_compute[265391]: 2025-09-30 18:22:57.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:57 compute-0 nova_compute[265391]: 2025-09-30 18:22:57.292 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc08e30da-20, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:22:57 compute-0 nova_compute[265391]: 2025-09-30 18:22:57.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:57 compute-0 nova_compute[265391]: 2025-09-30 18:22:57.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:22:57 compute-0 nova_compute[265391]: 2025-09-30 18:22:57.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:57 compute-0 nova_compute[265391]: 2025-09-30 18:22:57.297 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=dc4a77ac-0049-47eb-aba1-f55689bbda5c) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:22:57 compute-0 nova_compute[265391]: 2025-09-30 18:22:57.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:57 compute-0 nova_compute[265391]: 2025-09-30 18:22:57.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:22:57 compute-0 nova_compute[265391]: 2025-09-30 18:22:57.301 2 INFO os_vif [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:60:7a,bridge_name='br-int',has_traffic_filtering=True,id=c08e30da-2028-4b45-9b18-b77d81894e93,network=Network(443be7ca-f628-4a45-95b6-620d37172d7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc08e30da-20')
Sep 30 18:22:57 compute-0 nova_compute[265391]: 2025-09-30 18:22:57.301 2 DEBUG oslo_concurrency.lockutils [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:22:57 compute-0 nova_compute[265391]: 2025-09-30 18:22:57.301 2 DEBUG oslo_concurrency.lockutils [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:22:57 compute-0 nova_compute[265391]: 2025-09-30 18:22:57.302 2 DEBUG oslo_concurrency.lockutils [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:22:57 compute-0 nova_compute[265391]: 2025-09-30 18:22:57.302 2 DEBUG nova.compute.manager [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10082
Sep 30 18:22:57 compute-0 nova_compute[265391]: 2025-09-30 18:22:57.302 2 INFO nova.virt.libvirt.driver [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Deleting instance files /var/lib/nova/instances/71a2a65c-86a0-4257-9bd1-1cd4e706fb69_del
Sep 30 18:22:57 compute-0 nova_compute[265391]: 2025-09-30 18:22:57.303 2 INFO nova.virt.libvirt.driver [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Deletion of /var/lib/nova/instances/71a2a65c-86a0-4257-9bd1-1cd4e706fb69_del complete
Sep 30 18:22:57 compute-0 podman[320368]: 2025-09-30 18:22:57.440733231 +0000 UTC m=+0.041572361 container exec 262a601a228f752d0c9dbf2b0c2d1b05fe66f88f5c16c2fc8adb324868709b53 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:22:57 compute-0 podman[320368]: 2025-09-30 18:22:57.487522437 +0000 UTC m=+0.088361547 container exec_died 262a601a228f752d0c9dbf2b0c2d1b05fe66f88f5c16c2fc8adb324868709b53 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:22:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:22:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3834446620' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:22:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:22:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3834446620' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:22:57 compute-0 sudo[319587]: pam_unix(sudo:session): session closed for user root
Sep 30 18:22:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:22:57 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:22:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:22:57 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:22:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3834446620' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:22:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3834446620' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:22:57 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:22:57 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:22:57 compute-0 sudo[320411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:22:57 compute-0 sudo[320411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:22:57 compute-0 sudo[320411]: pam_unix(sudo:session): session closed for user root
Sep 30 18:22:57 compute-0 sudo[320436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:22:57 compute-0 sudo[320436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:22:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:22:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:22:57.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:22:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:58 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400ac30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:58 compute-0 sudo[320436]: pam_unix(sudo:session): session closed for user root
Sep 30 18:22:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:22:58 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:22:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:22:58 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:22:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:22:58 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:22:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:22:58 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:22:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:22:58 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:22:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:22:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1330: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 2.0 KiB/s wr, 0 op/s
Sep 30 18:22:58 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:22:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:22:58 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:22:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:22:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:22:58.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:22:58 compute-0 sudo[320495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:22:58 compute-0 sudo[320495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:22:58 compute-0 sudo[320495]: pam_unix(sudo:session): session closed for user root
Sep 30 18:22:58 compute-0 nova_compute[265391]: 2025-09-30 18:22:58.413 2 DEBUG nova.compute.manager [req-d5116b4e-0db9-4d46-a3f5-d87018ef2dc1 req-2819bfa9-f791-4c9b-9a11-14df17252949 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Received event network-vif-plugged-c08e30da-2028-4b45-9b18-b77d81894e93 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:22:58 compute-0 nova_compute[265391]: 2025-09-30 18:22:58.414 2 DEBUG oslo_concurrency.lockutils [req-d5116b4e-0db9-4d46-a3f5-d87018ef2dc1 req-2819bfa9-f791-4c9b-9a11-14df17252949 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:22:58 compute-0 nova_compute[265391]: 2025-09-30 18:22:58.415 2 DEBUG oslo_concurrency.lockutils [req-d5116b4e-0db9-4d46-a3f5-d87018ef2dc1 req-2819bfa9-f791-4c9b-9a11-14df17252949 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:22:58 compute-0 nova_compute[265391]: 2025-09-30 18:22:58.415 2 DEBUG oslo_concurrency.lockutils [req-d5116b4e-0db9-4d46-a3f5-d87018ef2dc1 req-2819bfa9-f791-4c9b-9a11-14df17252949 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:22:58 compute-0 nova_compute[265391]: 2025-09-30 18:22:58.415 2 DEBUG nova.compute.manager [req-d5116b4e-0db9-4d46-a3f5-d87018ef2dc1 req-2819bfa9-f791-4c9b-9a11-14df17252949 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] No waiting events found dispatching network-vif-plugged-c08e30da-2028-4b45-9b18-b77d81894e93 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:22:58 compute-0 nova_compute[265391]: 2025-09-30 18:22:58.415 2 WARNING nova.compute.manager [req-d5116b4e-0db9-4d46-a3f5-d87018ef2dc1 req-2819bfa9-f791-4c9b-9a11-14df17252949 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Received unexpected event network-vif-plugged-c08e30da-2028-4b45-9b18-b77d81894e93 for instance with vm_state active and task_state migrating.
Sep 30 18:22:58 compute-0 nova_compute[265391]: 2025-09-30 18:22:58.415 2 DEBUG nova.compute.manager [req-d5116b4e-0db9-4d46-a3f5-d87018ef2dc1 req-2819bfa9-f791-4c9b-9a11-14df17252949 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Received event network-vif-unplugged-c08e30da-2028-4b45-9b18-b77d81894e93 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:22:58 compute-0 nova_compute[265391]: 2025-09-30 18:22:58.416 2 DEBUG oslo_concurrency.lockutils [req-d5116b4e-0db9-4d46-a3f5-d87018ef2dc1 req-2819bfa9-f791-4c9b-9a11-14df17252949 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:22:58 compute-0 nova_compute[265391]: 2025-09-30 18:22:58.416 2 DEBUG oslo_concurrency.lockutils [req-d5116b4e-0db9-4d46-a3f5-d87018ef2dc1 req-2819bfa9-f791-4c9b-9a11-14df17252949 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:22:58 compute-0 nova_compute[265391]: 2025-09-30 18:22:58.416 2 DEBUG oslo_concurrency.lockutils [req-d5116b4e-0db9-4d46-a3f5-d87018ef2dc1 req-2819bfa9-f791-4c9b-9a11-14df17252949 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:22:58 compute-0 nova_compute[265391]: 2025-09-30 18:22:58.416 2 DEBUG nova.compute.manager [req-d5116b4e-0db9-4d46-a3f5-d87018ef2dc1 req-2819bfa9-f791-4c9b-9a11-14df17252949 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] No waiting events found dispatching network-vif-unplugged-c08e30da-2028-4b45-9b18-b77d81894e93 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:22:58 compute-0 nova_compute[265391]: 2025-09-30 18:22:58.416 2 DEBUG nova.compute.manager [req-d5116b4e-0db9-4d46-a3f5-d87018ef2dc1 req-2819bfa9-f791-4c9b-9a11-14df17252949 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Received event network-vif-unplugged-c08e30da-2028-4b45-9b18-b77d81894e93 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:22:58 compute-0 nova_compute[265391]: 2025-09-30 18:22:58.416 2 DEBUG nova.compute.manager [req-d5116b4e-0db9-4d46-a3f5-d87018ef2dc1 req-2819bfa9-f791-4c9b-9a11-14df17252949 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Received event network-vif-plugged-c08e30da-2028-4b45-9b18-b77d81894e93 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:22:58 compute-0 nova_compute[265391]: 2025-09-30 18:22:58.417 2 DEBUG oslo_concurrency.lockutils [req-d5116b4e-0db9-4d46-a3f5-d87018ef2dc1 req-2819bfa9-f791-4c9b-9a11-14df17252949 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:22:58 compute-0 nova_compute[265391]: 2025-09-30 18:22:58.417 2 DEBUG oslo_concurrency.lockutils [req-d5116b4e-0db9-4d46-a3f5-d87018ef2dc1 req-2819bfa9-f791-4c9b-9a11-14df17252949 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:22:58 compute-0 nova_compute[265391]: 2025-09-30 18:22:58.417 2 DEBUG oslo_concurrency.lockutils [req-d5116b4e-0db9-4d46-a3f5-d87018ef2dc1 req-2819bfa9-f791-4c9b-9a11-14df17252949 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:22:58 compute-0 nova_compute[265391]: 2025-09-30 18:22:58.417 2 DEBUG nova.compute.manager [req-d5116b4e-0db9-4d46-a3f5-d87018ef2dc1 req-2819bfa9-f791-4c9b-9a11-14df17252949 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] No waiting events found dispatching network-vif-plugged-c08e30da-2028-4b45-9b18-b77d81894e93 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:22:58 compute-0 nova_compute[265391]: 2025-09-30 18:22:58.417 2 WARNING nova.compute.manager [req-d5116b4e-0db9-4d46-a3f5-d87018ef2dc1 req-2819bfa9-f791-4c9b-9a11-14df17252949 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Received unexpected event network-vif-plugged-c08e30da-2028-4b45-9b18-b77d81894e93 for instance with vm_state active and task_state migrating.
Sep 30 18:22:58 compute-0 sudo[320520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:22:58 compute-0 sudo[320520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:22:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:22:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:22:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:22:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:22:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:22:58 compute-0 ceph-mon[73755]: pgmap v1330: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 2.0 KiB/s wr, 0 op/s
Sep 30 18:22:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:22:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:22:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:22:58] "GET /metrics HTTP/1.1" 200 46648 "" "Prometheus/2.51.0"
Sep 30 18:22:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:22:58] "GET /metrics HTTP/1.1" 200 46648 "" "Prometheus/2.51.0"
Sep 30 18:22:58 compute-0 podman[320587]: 2025-09-30 18:22:58.921336163 +0000 UTC m=+0.046901150 container create 62dd6e4614a2c3eddb28ba9689bc975cbde200633b93ebb2d552f83ee3b6966d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Sep 30 18:22:58 compute-0 systemd[1]: Started libpod-conmon-62dd6e4614a2c3eddb28ba9689bc975cbde200633b93ebb2d552f83ee3b6966d.scope.
Sep 30 18:22:58 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:22:59 compute-0 podman[320587]: 2025-09-30 18:22:58.905182663 +0000 UTC m=+0.030747670 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:22:59 compute-0 podman[320587]: 2025-09-30 18:22:59.017490962 +0000 UTC m=+0.143055969 container init 62dd6e4614a2c3eddb28ba9689bc975cbde200633b93ebb2d552f83ee3b6966d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:22:59 compute-0 podman[320587]: 2025-09-30 18:22:59.026980469 +0000 UTC m=+0.152545466 container start 62dd6e4614a2c3eddb28ba9689bc975cbde200633b93ebb2d552f83ee3b6966d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_chebyshev, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:22:59 compute-0 podman[320587]: 2025-09-30 18:22:59.030388247 +0000 UTC m=+0.155953254 container attach 62dd6e4614a2c3eddb28ba9689bc975cbde200633b93ebb2d552f83ee3b6966d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 18:22:59 compute-0 happy_chebyshev[320604]: 167 167
Sep 30 18:22:59 compute-0 systemd[1]: libpod-62dd6e4614a2c3eddb28ba9689bc975cbde200633b93ebb2d552f83ee3b6966d.scope: Deactivated successfully.
Sep 30 18:22:59 compute-0 podman[320587]: 2025-09-30 18:22:59.039536425 +0000 UTC m=+0.165101422 container died 62dd6e4614a2c3eddb28ba9689bc975cbde200633b93ebb2d552f83ee3b6966d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:22:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-67371a5f3795fcd7ab8196416b209c87b23103b994f10c004a5cc5abefe1ef56-merged.mount: Deactivated successfully.
Sep 30 18:22:59 compute-0 podman[320587]: 2025-09-30 18:22:59.084823502 +0000 UTC m=+0.210388509 container remove 62dd6e4614a2c3eddb28ba9689bc975cbde200633b93ebb2d552f83ee3b6966d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_chebyshev, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:22:59 compute-0 systemd[1]: libpod-conmon-62dd6e4614a2c3eddb28ba9689bc975cbde200633b93ebb2d552f83ee3b6966d.scope: Deactivated successfully.
Sep 30 18:22:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:22:59 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003a90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:22:59 compute-0 podman[320630]: 2025-09-30 18:22:59.278105326 +0000 UTC m=+0.047737512 container create 5f2bbdc2cdfa8d41dd94c1c432dc39e0711f7f9de5801056f89f59477999debf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_cohen, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:22:59 compute-0 systemd[1]: Started libpod-conmon-5f2bbdc2cdfa8d41dd94c1c432dc39e0711f7f9de5801056f89f59477999debf.scope.
Sep 30 18:22:59 compute-0 podman[320630]: 2025-09-30 18:22:59.256945656 +0000 UTC m=+0.026577802 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:22:59 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:22:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ced0a1c828409b156867b56d7eeef644cc5e31d3eda4973026a6b51fc24c3728/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:22:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ced0a1c828409b156867b56d7eeef644cc5e31d3eda4973026a6b51fc24c3728/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:22:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ced0a1c828409b156867b56d7eeef644cc5e31d3eda4973026a6b51fc24c3728/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:22:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ced0a1c828409b156867b56d7eeef644cc5e31d3eda4973026a6b51fc24c3728/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:22:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ced0a1c828409b156867b56d7eeef644cc5e31d3eda4973026a6b51fc24c3728/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:22:59 compute-0 podman[320630]: 2025-09-30 18:22:59.381281177 +0000 UTC m=+0.150913343 container init 5f2bbdc2cdfa8d41dd94c1c432dc39e0711f7f9de5801056f89f59477999debf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:22:59 compute-0 podman[320630]: 2025-09-30 18:22:59.396902003 +0000 UTC m=+0.166534149 container start 5f2bbdc2cdfa8d41dd94c1c432dc39e0711f7f9de5801056f89f59477999debf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_cohen, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:22:59 compute-0 podman[320630]: 2025-09-30 18:22:59.400039575 +0000 UTC m=+0.169671741 container attach 5f2bbdc2cdfa8d41dd94c1c432dc39e0711f7f9de5801056f89f59477999debf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_cohen, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 18:22:59 compute-0 podman[276673]: time="2025-09-30T18:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:22:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43033 "" "Go-http-client/1.1"
Sep 30 18:22:59 compute-0 wizardly_cohen[320646]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:22:59 compute-0 wizardly_cohen[320646]: --> All data devices are unavailable
Sep 30 18:22:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10696 "" "Go-http-client/1.1"
Sep 30 18:22:59 compute-0 systemd[1]: libpod-5f2bbdc2cdfa8d41dd94c1c432dc39e0711f7f9de5801056f89f59477999debf.scope: Deactivated successfully.
Sep 30 18:22:59 compute-0 podman[320630]: 2025-09-30 18:22:59.817633388 +0000 UTC m=+0.587265554 container died 5f2bbdc2cdfa8d41dd94c1c432dc39e0711f7f9de5801056f89f59477999debf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_cohen, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:22:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-ced0a1c828409b156867b56d7eeef644cc5e31d3eda4973026a6b51fc24c3728-merged.mount: Deactivated successfully.
Sep 30 18:22:59 compute-0 podman[320630]: 2025-09-30 18:22:59.8669421 +0000 UTC m=+0.636574266 container remove 5f2bbdc2cdfa8d41dd94c1c432dc39e0711f7f9de5801056f89f59477999debf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_cohen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Sep 30 18:22:59 compute-0 systemd[1]: libpod-conmon-5f2bbdc2cdfa8d41dd94c1c432dc39e0711f7f9de5801056f89f59477999debf.scope: Deactivated successfully.
Sep 30 18:22:59 compute-0 sudo[320520]: pam_unix(sudo:session): session closed for user root
Sep 30 18:22:59 compute-0 sudo[320673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:22:59 compute-0 sudo[320673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:22:59 compute-0 sudo[320673]: pam_unix(sudo:session): session closed for user root
Sep 30 18:22:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:22:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:22:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:22:59.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:00 compute-0 sudo[320698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:23:00 compute-0 sudo[320698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:23:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:00 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384002b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1331: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.6 KiB/s rd, 2.1 KiB/s wr, 5 op/s
Sep 30 18:23:00 compute-0 nova_compute[265391]: 2025-09-30 18:23:00.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:23:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:23:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:23:00.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:23:00 compute-0 ceph-mon[73755]: pgmap v1331: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.6 KiB/s rd, 2.1 KiB/s wr, 5 op/s
Sep 30 18:23:00 compute-0 podman[320766]: 2025-09-30 18:23:00.519399016 +0000 UTC m=+0.051748916 container create 423962165e8b6cf028e4270c4facd05f40a022e98a7f542e1b5f538132c12b5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Sep 30 18:23:00 compute-0 systemd[1]: Started libpod-conmon-423962165e8b6cf028e4270c4facd05f40a022e98a7f542e1b5f538132c12b5d.scope.
Sep 30 18:23:00 compute-0 podman[320766]: 2025-09-30 18:23:00.500270089 +0000 UTC m=+0.032620019 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:23:00 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:23:00 compute-0 podman[320766]: 2025-09-30 18:23:00.627213269 +0000 UTC m=+0.159563249 container init 423962165e8b6cf028e4270c4facd05f40a022e98a7f542e1b5f538132c12b5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_jones, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 18:23:00 compute-0 podman[320766]: 2025-09-30 18:23:00.63611817 +0000 UTC m=+0.168468080 container start 423962165e8b6cf028e4270c4facd05f40a022e98a7f542e1b5f538132c12b5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid)
Sep 30 18:23:00 compute-0 podman[320766]: 2025-09-30 18:23:00.63956588 +0000 UTC m=+0.171915830 container attach 423962165e8b6cf028e4270c4facd05f40a022e98a7f542e1b5f538132c12b5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:23:00 compute-0 recursing_jones[320782]: 167 167
Sep 30 18:23:00 compute-0 systemd[1]: libpod-423962165e8b6cf028e4270c4facd05f40a022e98a7f542e1b5f538132c12b5d.scope: Deactivated successfully.
Sep 30 18:23:00 compute-0 conmon[320782]: conmon 423962165e8b6cf028e4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-423962165e8b6cf028e4270c4facd05f40a022e98a7f542e1b5f538132c12b5d.scope/container/memory.events
Sep 30 18:23:00 compute-0 podman[320766]: 2025-09-30 18:23:00.644707493 +0000 UTC m=+0.177057433 container died 423962165e8b6cf028e4270c4facd05f40a022e98a7f542e1b5f538132c12b5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_jones, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:23:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfa659981669a6a3ce5438c1728f9745e2bcc21d085a17fb4aa69417c4c0dbda-merged.mount: Deactivated successfully.
Sep 30 18:23:00 compute-0 podman[320766]: 2025-09-30 18:23:00.691426308 +0000 UTC m=+0.223776208 container remove 423962165e8b6cf028e4270c4facd05f40a022e98a7f542e1b5f538132c12b5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:23:00 compute-0 systemd[1]: libpod-conmon-423962165e8b6cf028e4270c4facd05f40a022e98a7f542e1b5f538132c12b5d.scope: Deactivated successfully.
Sep 30 18:23:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:23:00 compute-0 podman[320805]: 2025-09-30 18:23:00.897234417 +0000 UTC m=+0.060974436 container create fdc8ef6f7fc441c89ec15cd67c17bb61465facc1889e4791d8187e44c2c4dca7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_joliot, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default)
Sep 30 18:23:00 compute-0 systemd[1]: Started libpod-conmon-fdc8ef6f7fc441c89ec15cd67c17bb61465facc1889e4791d8187e44c2c4dca7.scope.
Sep 30 18:23:00 compute-0 podman[320805]: 2025-09-30 18:23:00.867322449 +0000 UTC m=+0.031062518 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:23:00 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9ee27b5562d3a4ae6cf3b5f6c5b0bb183d491f3a6e05c5a69b453fb5cd6056/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9ee27b5562d3a4ae6cf3b5f6c5b0bb183d491f3a6e05c5a69b453fb5cd6056/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9ee27b5562d3a4ae6cf3b5f6c5b0bb183d491f3a6e05c5a69b453fb5cd6056/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9ee27b5562d3a4ae6cf3b5f6c5b0bb183d491f3a6e05c5a69b453fb5cd6056/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:23:01 compute-0 podman[320805]: 2025-09-30 18:23:01.00278341 +0000 UTC m=+0.166523399 container init fdc8ef6f7fc441c89ec15cd67c17bb61465facc1889e4791d8187e44c2c4dca7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_joliot, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Sep 30 18:23:01 compute-0 podman[320805]: 2025-09-30 18:23:01.011852975 +0000 UTC m=+0.175592954 container start fdc8ef6f7fc441c89ec15cd67c17bb61465facc1889e4791d8187e44c2c4dca7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:23:01 compute-0 podman[320805]: 2025-09-30 18:23:01.014862024 +0000 UTC m=+0.178602003 container attach fdc8ef6f7fc441c89ec15cd67c17bb61465facc1889e4791d8187e44c2c4dca7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_joliot, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:23:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:01 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:01 compute-0 vigilant_joliot[320821]: {
Sep 30 18:23:01 compute-0 vigilant_joliot[320821]:     "0": [
Sep 30 18:23:01 compute-0 vigilant_joliot[320821]:         {
Sep 30 18:23:01 compute-0 vigilant_joliot[320821]:             "devices": [
Sep 30 18:23:01 compute-0 vigilant_joliot[320821]:                 "/dev/loop3"
Sep 30 18:23:01 compute-0 vigilant_joliot[320821]:             ],
Sep 30 18:23:01 compute-0 vigilant_joliot[320821]:             "lv_name": "ceph_lv0",
Sep 30 18:23:01 compute-0 vigilant_joliot[320821]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:23:01 compute-0 vigilant_joliot[320821]:             "lv_size": "21470642176",
Sep 30 18:23:01 compute-0 vigilant_joliot[320821]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:23:01 compute-0 vigilant_joliot[320821]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:23:01 compute-0 vigilant_joliot[320821]:             "name": "ceph_lv0",
Sep 30 18:23:01 compute-0 vigilant_joliot[320821]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:23:01 compute-0 vigilant_joliot[320821]:             "tags": {
Sep 30 18:23:01 compute-0 vigilant_joliot[320821]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:23:01 compute-0 vigilant_joliot[320821]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:23:01 compute-0 vigilant_joliot[320821]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:23:01 compute-0 vigilant_joliot[320821]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:23:01 compute-0 vigilant_joliot[320821]:                 "ceph.cluster_name": "ceph",
Sep 30 18:23:01 compute-0 vigilant_joliot[320821]:                 "ceph.crush_device_class": "",
Sep 30 18:23:01 compute-0 vigilant_joliot[320821]:                 "ceph.encrypted": "0",
Sep 30 18:23:01 compute-0 vigilant_joliot[320821]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:23:01 compute-0 vigilant_joliot[320821]:                 "ceph.osd_id": "0",
Sep 30 18:23:01 compute-0 vigilant_joliot[320821]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:23:01 compute-0 vigilant_joliot[320821]:                 "ceph.type": "block",
Sep 30 18:23:01 compute-0 vigilant_joliot[320821]:                 "ceph.vdo": "0",
Sep 30 18:23:01 compute-0 vigilant_joliot[320821]:                 "ceph.with_tpm": "0"
Sep 30 18:23:01 compute-0 vigilant_joliot[320821]:             },
Sep 30 18:23:01 compute-0 vigilant_joliot[320821]:             "type": "block",
Sep 30 18:23:01 compute-0 vigilant_joliot[320821]:             "vg_name": "ceph_vg0"
Sep 30 18:23:01 compute-0 vigilant_joliot[320821]:         }
Sep 30 18:23:01 compute-0 vigilant_joliot[320821]:     ]
Sep 30 18:23:01 compute-0 vigilant_joliot[320821]: }
Sep 30 18:23:01 compute-0 systemd[1]: libpod-fdc8ef6f7fc441c89ec15cd67c17bb61465facc1889e4791d8187e44c2c4dca7.scope: Deactivated successfully.
Sep 30 18:23:01 compute-0 podman[320805]: 2025-09-30 18:23:01.35985629 +0000 UTC m=+0.523596339 container died fdc8ef6f7fc441c89ec15cd67c17bb61465facc1889e4791d8187e44c2c4dca7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_joliot, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Sep 30 18:23:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e9ee27b5562d3a4ae6cf3b5f6c5b0bb183d491f3a6e05c5a69b453fb5cd6056-merged.mount: Deactivated successfully.
Sep 30 18:23:01 compute-0 openstack_network_exporter[279566]: ERROR   18:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:23:01 compute-0 openstack_network_exporter[279566]: ERROR   18:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:23:01 compute-0 openstack_network_exporter[279566]: ERROR   18:23:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:23:01 compute-0 openstack_network_exporter[279566]: ERROR   18:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:23:01 compute-0 openstack_network_exporter[279566]: ERROR   18:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:23:01 compute-0 podman[320805]: 2025-09-30 18:23:01.416603565 +0000 UTC m=+0.580343534 container remove fdc8ef6f7fc441c89ec15cd67c17bb61465facc1889e4791d8187e44c2c4dca7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_joliot, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 18:23:01 compute-0 systemd[1]: libpod-conmon-fdc8ef6f7fc441c89ec15cd67c17bb61465facc1889e4791d8187e44c2c4dca7.scope: Deactivated successfully.
Sep 30 18:23:01 compute-0 sudo[320698]: pam_unix(sudo:session): session closed for user root
Sep 30 18:23:01 compute-0 sudo[320844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:23:01 compute-0 sudo[320844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:23:01 compute-0 sudo[320844]: pam_unix(sudo:session): session closed for user root
Sep 30 18:23:01 compute-0 sudo[320869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:23:01 compute-0 sudo[320869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:23:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:23:01.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:02 compute-0 podman[320935]: 2025-09-30 18:23:02.059870434 +0000 UTC m=+0.047567887 container create ee62576f921816a48520e2bf076dd6fc6a4f215e6e44a4d84805ecd8df9c5222 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_hopper, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:23:02 compute-0 systemd[1]: Started libpod-conmon-ee62576f921816a48520e2bf076dd6fc6a4f215e6e44a4d84805ecd8df9c5222.scope.
Sep 30 18:23:02 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:23:02 compute-0 podman[320935]: 2025-09-30 18:23:02.126769193 +0000 UTC m=+0.114466656 container init ee62576f921816a48520e2bf076dd6fc6a4f215e6e44a4d84805ecd8df9c5222 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:23:02 compute-0 podman[320935]: 2025-09-30 18:23:02.132317947 +0000 UTC m=+0.120015390 container start ee62576f921816a48520e2bf076dd6fc6a4f215e6e44a4d84805ecd8df9c5222 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Sep 30 18:23:02 compute-0 podman[320935]: 2025-09-30 18:23:02.03971805 +0000 UTC m=+0.027415513 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:23:02 compute-0 optimistic_hopper[320951]: 167 167
Sep 30 18:23:02 compute-0 podman[320935]: 2025-09-30 18:23:02.136214488 +0000 UTC m=+0.123911971 container attach ee62576f921816a48520e2bf076dd6fc6a4f215e6e44a4d84805ecd8df9c5222 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 18:23:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:02 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400ac30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:02 compute-0 systemd[1]: libpod-ee62576f921816a48520e2bf076dd6fc6a4f215e6e44a4d84805ecd8df9c5222.scope: Deactivated successfully.
Sep 30 18:23:02 compute-0 podman[320935]: 2025-09-30 18:23:02.143165299 +0000 UTC m=+0.130862742 container died ee62576f921816a48520e2bf076dd6fc6a4f215e6e44a4d84805ecd8df9c5222 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_hopper, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1)
Sep 30 18:23:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-d58de8d441587ea1f729520487653e77bd7813aa2606e505533413d3d4f5f74b-merged.mount: Deactivated successfully.
Sep 30 18:23:02 compute-0 podman[320935]: 2025-09-30 18:23:02.181908306 +0000 UTC m=+0.169605749 container remove ee62576f921816a48520e2bf076dd6fc6a4f215e6e44a4d84805ecd8df9c5222 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 18:23:02 compute-0 systemd[1]: libpod-conmon-ee62576f921816a48520e2bf076dd6fc6a4f215e6e44a4d84805ecd8df9c5222.scope: Deactivated successfully.
Sep 30 18:23:02 compute-0 nova_compute[265391]: 2025-09-30 18:23:02.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:23:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1332: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.6 KiB/s rd, 1.1 KiB/s wr, 5 op/s
Sep 30 18:23:02 compute-0 podman[320974]: 2025-09-30 18:23:02.358826514 +0000 UTC m=+0.042631909 container create b3c0183be1233d916cbe50ab5775cc848dec985f838b883c0cd02a7fbae54abe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_poitras, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:23:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:23:02.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:02 compute-0 systemd[1]: Started libpod-conmon-b3c0183be1233d916cbe50ab5775cc848dec985f838b883c0cd02a7fbae54abe.scope.
Sep 30 18:23:02 compute-0 ceph-mon[73755]: pgmap v1332: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.6 KiB/s rd, 1.1 KiB/s wr, 5 op/s
Sep 30 18:23:02 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:23:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ec94025e99c052236d7edeec601f2715be87374931436e3d778a424fa6383f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:23:02 compute-0 podman[320974]: 2025-09-30 18:23:02.341814342 +0000 UTC m=+0.025619757 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:23:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ec94025e99c052236d7edeec601f2715be87374931436e3d778a424fa6383f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:23:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ec94025e99c052236d7edeec601f2715be87374931436e3d778a424fa6383f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:23:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ec94025e99c052236d7edeec601f2715be87374931436e3d778a424fa6383f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:23:02 compute-0 podman[320974]: 2025-09-30 18:23:02.460760713 +0000 UTC m=+0.144566158 container init b3c0183be1233d916cbe50ab5775cc848dec985f838b883c0cd02a7fbae54abe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_poitras, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:23:02 compute-0 podman[320974]: 2025-09-30 18:23:02.472027616 +0000 UTC m=+0.155833021 container start b3c0183be1233d916cbe50ab5775cc848dec985f838b883c0cd02a7fbae54abe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_poitras, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:23:02 compute-0 podman[320974]: 2025-09-30 18:23:02.475480246 +0000 UTC m=+0.159285661 container attach b3c0183be1233d916cbe50ab5775cc848dec985f838b883c0cd02a7fbae54abe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:23:03 compute-0 lvm[321078]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:23:03 compute-0 lvm[321078]: VG ceph_vg0 finished
Sep 30 18:23:03 compute-0 sad_poitras[320991]: {}
Sep 30 18:23:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:03 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003a90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:03 compute-0 systemd[1]: libpod-b3c0183be1233d916cbe50ab5775cc848dec985f838b883c0cd02a7fbae54abe.scope: Deactivated successfully.
Sep 30 18:23:03 compute-0 systemd[1]: libpod-b3c0183be1233d916cbe50ab5775cc848dec985f838b883c0cd02a7fbae54abe.scope: Consumed 1.267s CPU time.
Sep 30 18:23:03 compute-0 podman[320974]: 2025-09-30 18:23:03.255948931 +0000 UTC m=+0.939754326 container died b3c0183be1233d916cbe50ab5775cc848dec985f838b883c0cd02a7fbae54abe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_poitras, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:23:03 compute-0 lvm[321111]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:23:03 compute-0 lvm[321111]: VG ceph_vg0 finished
Sep 30 18:23:03 compute-0 podman[321065]: 2025-09-30 18:23:03.270082168 +0000 UTC m=+0.107913366 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:23:03 compute-0 podman[321064]: 2025-09-30 18:23:03.27899319 +0000 UTC m=+0.117035573 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Sep 30 18:23:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ec94025e99c052236d7edeec601f2715be87374931436e3d778a424fa6383f6-merged.mount: Deactivated successfully.
Sep 30 18:23:03 compute-0 podman[320974]: 2025-09-30 18:23:03.307929622 +0000 UTC m=+0.991735017 container remove b3c0183be1233d916cbe50ab5775cc848dec985f838b883c0cd02a7fbae54abe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_poitras, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid)
Sep 30 18:23:03 compute-0 systemd[1]: libpod-conmon-b3c0183be1233d916cbe50ab5775cc848dec985f838b883c0cd02a7fbae54abe.scope: Deactivated successfully.
Sep 30 18:23:03 compute-0 sudo[320869]: pam_unix(sudo:session): session closed for user root
Sep 30 18:23:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:23:03 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:23:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:23:03 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:23:03 compute-0 sudo[321129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:23:03 compute-0 sudo[321129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:23:03 compute-0 sudo[321129]: pam_unix(sudo:session): session closed for user root
Sep 30 18:23:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:23:03.759Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:23:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:23:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:23:03.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:23:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:04 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003a90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1333: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.7 KiB/s rd, 1.1 KiB/s wr, 5 op/s
Sep 30 18:23:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:04 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:23:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:04 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:23:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:23:04.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:05 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003a90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:05 compute-0 nova_compute[265391]: 2025-09-30 18:23:05.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:23:05 compute-0 ceph-mon[73755]: pgmap v1333: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.7 KiB/s rd, 1.1 KiB/s wr, 5 op/s
Sep 30 18:23:05 compute-0 podman[321156]: 2025-09-30 18:23:05.543028392 +0000 UTC m=+0.079286272 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS)
Sep 30 18:23:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:23:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:23:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:23:05.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:23:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:06 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400ac30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:06 compute-0 nova_compute[265391]: 2025-09-30 18:23:06.336 2 DEBUG oslo_concurrency.lockutils [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:23:06 compute-0 nova_compute[265391]: 2025-09-30 18:23:06.337 2 DEBUG oslo_concurrency.lockutils [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:23:06 compute-0 nova_compute[265391]: 2025-09-30 18:23:06.337 2 DEBUG oslo_concurrency.lockutils [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "71a2a65c-86a0-4257-9bd1-1cd4e706fb69-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:23:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1334: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.6 KiB/s rd, 85 B/s wr, 5 op/s
Sep 30 18:23:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:23:06.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:06 compute-0 ceph-mon[73755]: pgmap v1334: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.6 KiB/s rd, 85 B/s wr, 5 op/s
Sep 30 18:23:06 compute-0 nova_compute[265391]: 2025-09-30 18:23:06.852 2 DEBUG oslo_concurrency.lockutils [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:23:06 compute-0 nova_compute[265391]: 2025-09-30 18:23:06.853 2 DEBUG oslo_concurrency.lockutils [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:23:06 compute-0 nova_compute[265391]: 2025-09-30 18:23:06.854 2 DEBUG oslo_concurrency.lockutils [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:23:06 compute-0 nova_compute[265391]: 2025-09-30 18:23:06.854 2 DEBUG nova.compute.resource_tracker [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:23:06 compute-0 nova_compute[265391]: 2025-09-30 18:23:06.854 2 DEBUG oslo_concurrency.processutils [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:23:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:07 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:23:07.231Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:23:07 compute-0 nova_compute[265391]: 2025-09-30 18:23:07.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:23:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:23:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:23:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:23:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2209030715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:23:07 compute-0 nova_compute[265391]: 2025-09-30 18:23:07.365 2 DEBUG oslo_concurrency.processutils [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:23:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:23:07 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2209030715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:23:07 compute-0 nova_compute[265391]: 2025-09-30 18:23:07.527 2 WARNING nova.virt.libvirt.driver [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:23:07 compute-0 nova_compute[265391]: 2025-09-30 18:23:07.528 2 DEBUG oslo_concurrency.processutils [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:23:07 compute-0 nova_compute[265391]: 2025-09-30 18:23:07.551 2 DEBUG oslo_concurrency.processutils [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.023s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:23:07 compute-0 nova_compute[265391]: 2025-09-30 18:23:07.552 2 DEBUG nova.compute.resource_tracker [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4335MB free_disk=39.90114212036133GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:23:07 compute-0 nova_compute[265391]: 2025-09-30 18:23:07.552 2 DEBUG oslo_concurrency.lockutils [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:23:07 compute-0 nova_compute[265391]: 2025-09-30 18:23:07.552 2 DEBUG oslo_concurrency.lockutils [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0022765791353291658 of space, bias 1.0, pg target 0.4553158270658332 quantized to 32 (current 32)
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:23:07
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['.mgr', 'backups', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', '.nfs', 'cephfs.cephfs.data', 'volumes', 'vms', 'default.rgw.log', 'default.rgw.control']
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:23:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:23:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:23:08.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:08 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003a90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1335: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.6 KiB/s rd, 85 B/s wr, 5 op/s
Sep 30 18:23:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:23:08.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:08 compute-0 ceph-mon[73755]: pgmap v1335: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.6 KiB/s rd, 85 B/s wr, 5 op/s
Sep 30 18:23:08 compute-0 nova_compute[265391]: 2025-09-30 18:23:08.571 2 DEBUG nova.compute.resource_tracker [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Migration for instance 71a2a65c-86a0-4257-9bd1-1cd4e706fb69 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Sep 30 18:23:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:23:08] "GET /metrics HTTP/1.1" 200 46648 "" "Prometheus/2.51.0"
Sep 30 18:23:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:23:08] "GET /metrics HTTP/1.1" 200 46648 "" "Prometheus/2.51.0"
Sep 30 18:23:09 compute-0 nova_compute[265391]: 2025-09-30 18:23:09.080 2 DEBUG nova.compute.resource_tracker [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1596
Sep 30 18:23:09 compute-0 nova_compute[265391]: 2025-09-30 18:23:09.111 2 DEBUG nova.compute.resource_tracker [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Migration 654528f8-f4cf-47a1-950c-aa6142fa5fa7 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Sep 30 18:23:09 compute-0 nova_compute[265391]: 2025-09-30 18:23:09.111 2 DEBUG nova.compute.resource_tracker [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:23:09 compute-0 nova_compute[265391]: 2025-09-30 18:23:09.111 2 DEBUG nova.compute.resource_tracker [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:23:07 up  1:26,  0 user,  load average: 0.83, 0.87, 0.87\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:23:09 compute-0 nova_compute[265391]: 2025-09-30 18:23:09.138 2 DEBUG oslo_concurrency.processutils [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:23:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:09 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384002b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:23:09 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1217540508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:23:09 compute-0 nova_compute[265391]: 2025-09-30 18:23:09.611 2 DEBUG oslo_concurrency.processutils [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:23:09 compute-0 nova_compute[265391]: 2025-09-30 18:23:09.617 2 DEBUG nova.compute.provider_tree [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:23:09 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1217540508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:23:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:23:10.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:10 compute-0 nova_compute[265391]: 2025-09-30 18:23:10.128 2 DEBUG nova.scheduler.client.report [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:23:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:10 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400ac30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1336: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.7 KiB/s rd, 2.4 KiB/s wr, 6 op/s
Sep 30 18:23:10 compute-0 nova_compute[265391]: 2025-09-30 18:23:10.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:23:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:23:10.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:10 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:23:10.407 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:23:10 compute-0 nova_compute[265391]: 2025-09-30 18:23:10.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:23:10 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:23:10.409 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:23:10 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:23:10.410 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:23:10 compute-0 nova_compute[265391]: 2025-09-30 18:23:10.639 2 DEBUG nova.compute.resource_tracker [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:23:10 compute-0 nova_compute[265391]: 2025-09-30 18:23:10.640 2 DEBUG oslo_concurrency.lockutils [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.088s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:23:10 compute-0 ceph-mon[73755]: pgmap v1336: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.7 KiB/s rd, 2.4 KiB/s wr, 6 op/s
Sep 30 18:23:10 compute-0 nova_compute[265391]: 2025-09-30 18:23:10.655 2 INFO nova.compute.manager [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Sep 30 18:23:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:23:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:11 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:11 compute-0 nova_compute[265391]: 2025-09-30 18:23:11.743 2 INFO nova.scheduler.client.report [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Deleted allocation for migration 654528f8-f4cf-47a1-950c-aa6142fa5fa7
Sep 30 18:23:11 compute-0 nova_compute[265391]: 2025-09-30 18:23:11.744 2 DEBUG nova.virt.libvirt.driver [None req-133c1b82-b925-4555-94b9-892a28b8d3a1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 71a2a65c-86a0-4257-9bd1-1cd4e706fb69] Live migration monitoring is all done _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11566
Sep 30 18:23:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:23:12.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:12 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003a90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:12 compute-0 nova_compute[265391]: 2025-09-30 18:23:12.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:23:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1337: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 2.3 KiB/s wr, 0 op/s
Sep 30 18:23:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:23:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:23:12.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:23:12 compute-0 ceph-mon[73755]: pgmap v1337: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 2.3 KiB/s wr, 0 op/s
Sep 30 18:23:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:13 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384002b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:13 compute-0 sudo[321233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:23:13 compute-0 sudo[321233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:23:13 compute-0 sudo[321233]: pam_unix(sudo:session): session closed for user root
Sep 30 18:23:13 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3217688103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:23:13 compute-0 sshd-session[321231]: Invalid user ahmed from 45.252.249.158 port 57428
Sep 30 18:23:13 compute-0 sshd-session[321231]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:23:13 compute-0 sshd-session[321231]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158
Sep 30 18:23:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:23:13.762Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:23:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:23:14.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:14 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400ac30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1338: 353 pgs: 353 active+clean; 121 MiB data, 289 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 30 op/s
Sep 30 18:23:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:23:14.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:14 compute-0 ceph-mon[73755]: pgmap v1338: 353 pgs: 353 active+clean; 121 MiB data, 289 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 30 op/s
Sep 30 18:23:14 compute-0 podman[321260]: 2025-09-30 18:23:14.552184571 +0000 UTC m=+0.077681989 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:23:14 compute-0 podman[321261]: 2025-09-30 18:23:14.580959549 +0000 UTC m=+0.100980095 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.4)
Sep 30 18:23:14 compute-0 podman[321262]: 2025-09-30 18:23:14.57982499 +0000 UTC m=+0.099193579 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, name=ubi9-minimal, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, vcs-type=git, architecture=x86_64, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Sep 30 18:23:15 compute-0 sshd-session[321231]: Failed password for invalid user ahmed from 45.252.249.158 port 57428 ssh2
Sep 30 18:23:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:15 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:15 compute-0 nova_compute[265391]: 2025-09-30 18:23:15.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:23:15 compute-0 sshd-session[321231]: Received disconnect from 45.252.249.158 port 57428:11: Bye Bye [preauth]
Sep 30 18:23:15 compute-0 sshd-session[321231]: Disconnected from invalid user ahmed 45.252.249.158 port 57428 [preauth]
Sep 30 18:23:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:23:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:23:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:23:16.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:23:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:16 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384002b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1339: 353 pgs: 353 active+clean; 121 MiB data, 289 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 30 op/s
Sep 30 18:23:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:23:16.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:16 compute-0 ceph-mon[73755]: pgmap v1339: 353 pgs: 353 active+clean; 121 MiB data, 289 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 30 op/s
Sep 30 18:23:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:23:17.231Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:23:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:17 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:17 compute-0 nova_compute[265391]: 2025-09-30 18:23:17.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:23:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:23:18.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:18 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400ac30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1340: 353 pgs: 353 active+clean; 121 MiB data, 289 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 30 op/s
Sep 30 18:23:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:23:18.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:18 compute-0 ceph-mon[73755]: pgmap v1340: 353 pgs: 353 active+clean; 121 MiB data, 289 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 30 op/s
Sep 30 18:23:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:23:18] "GET /metrics HTTP/1.1" 200 46648 "" "Prometheus/2.51.0"
Sep 30 18:23:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:23:18] "GET /metrics HTTP/1.1" 200 46648 "" "Prometheus/2.51.0"
Sep 30 18:23:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:19 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4003f30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:23:20.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:20 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1341: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 13 KiB/s wr, 57 op/s
Sep 30 18:23:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:23:20.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:20 compute-0 nova_compute[265391]: 2025-09-30 18:23:20.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:23:20 compute-0 ceph-mon[73755]: pgmap v1341: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 13 KiB/s wr, 57 op/s
Sep 30 18:23:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:23:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:21 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:23:22.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:22 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400ac30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:22 compute-0 nova_compute[265391]: 2025-09-30 18:23:22.233 2 DEBUG nova.compute.manager [None req-b61d234b-53c9-4971-bfc1-022681edeeb6 e33f9dc9fbb84319b00517567fe4b47e 4e2dde567e5c4b1c9802c64cfc281b6d - - default default] Removing trait COMPUTE_STATUS_DISABLED from compute node resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 in placement. update_compute_provider_status /usr/lib/python3.12/site-packages/nova/compute/manager.py:631
Sep 30 18:23:22 compute-0 nova_compute[265391]: 2025-09-30 18:23:22.285 2 DEBUG nova.compute.provider_tree [None req-b61d234b-53c9-4971-bfc1-022681edeeb6 e33f9dc9fbb84319b00517567fe4b47e 4e2dde567e5c4b1c9802c64cfc281b6d - - default default] Updating resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 generation from 24 to 27 during operation: update_traits _update_generation /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:164
Sep 30 18:23:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:23:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:23:22 compute-0 nova_compute[265391]: 2025-09-30 18:23:22.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:23:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1342: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 10 KiB/s wr, 57 op/s
Sep 30 18:23:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:23:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:23:22.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:23 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4003f30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:23 compute-0 ceph-mon[73755]: pgmap v1342: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 10 KiB/s wr, 57 op/s
Sep 30 18:23:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:23:23.764Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:23:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:23:23.765Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:23:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:23:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:23:24.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:23:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:24 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1343: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 10 KiB/s wr, 57 op/s
Sep 30 18:23:24 compute-0 ceph-mon[73755]: pgmap v1343: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 10 KiB/s wr, 57 op/s
Sep 30 18:23:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:23:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:23:24.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:23:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:25 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:25 compute-0 nova_compute[265391]: 2025-09-30 18:23:25.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:23:25 compute-0 nova_compute[265391]: 2025-09-30 18:23:25.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:23:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:23:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:23:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:23:26.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:23:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:26 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400ac30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1344: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:23:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:23:26.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:26 compute-0 ceph-mon[73755]: pgmap v1344: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:23:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:23:27.232Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:23:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:27 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4003f30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:27 compute-0 nova_compute[265391]: 2025-09-30 18:23:27.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:23:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:23:28.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:28 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1345: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:23:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:23:28.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:28 compute-0 ceph-mon[73755]: pgmap v1345: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:23:28 compute-0 sshd-session[321332]: Invalid user me from 14.225.220.107 port 54852
Sep 30 18:23:28 compute-0 sshd-session[321332]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:23:28 compute-0 sshd-session[321332]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107
Sep 30 18:23:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:23:28] "GET /metrics HTTP/1.1" 200 46631 "" "Prometheus/2.51.0"
Sep 30 18:23:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:23:28] "GET /metrics HTTP/1.1" 200 46631 "" "Prometheus/2.51.0"
Sep 30 18:23:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:29 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:29 compute-0 podman[276673]: time="2025-09-30T18:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:23:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:23:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10294 "" "Go-http-client/1.1"
Sep 30 18:23:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:23:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:23:30.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:23:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:30 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400ac30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1346: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:23:30 compute-0 nova_compute[265391]: 2025-09-30 18:23:30.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:23:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:23:30.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:30 compute-0 ceph-mon[73755]: pgmap v1346: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:23:30 compute-0 sshd-session[321332]: Failed password for invalid user me from 14.225.220.107 port 54852 ssh2
Sep 30 18:23:30 compute-0 sshd-session[321332]: Received disconnect from 14.225.220.107 port 54852:11: Bye Bye [preauth]
Sep 30 18:23:30 compute-0 sshd-session[321332]: Disconnected from invalid user me 14.225.220.107 port 54852 [preauth]
Sep 30 18:23:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:23:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:31 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4003f30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:31 compute-0 openstack_network_exporter[279566]: ERROR   18:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:23:31 compute-0 openstack_network_exporter[279566]: ERROR   18:23:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:23:31 compute-0 openstack_network_exporter[279566]: ERROR   18:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:23:31 compute-0 openstack_network_exporter[279566]: ERROR   18:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:23:31 compute-0 openstack_network_exporter[279566]: ERROR   18:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:23:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:23:32.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:32 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:32 compute-0 nova_compute[265391]: 2025-09-30 18:23:32.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:23:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1347: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:23:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:23:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:23:32.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:23:32 compute-0 ceph-mon[73755]: pgmap v1347: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:23:32 compute-0 nova_compute[265391]: 2025-09-30 18:23:32.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:23:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:33 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:33 compute-0 sudo[321340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:23:33 compute-0 sudo[321340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:23:33 compute-0 sudo[321340]: pam_unix(sudo:session): session closed for user root
Sep 30 18:23:33 compute-0 podman[321365]: 2025-09-30 18:23:33.439460496 +0000 UTC m=+0.069066316 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Sep 30 18:23:33 compute-0 podman[321364]: 2025-09-30 18:23:33.455308888 +0000 UTC m=+0.099328612 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20250930, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:23:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:23:33.766Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:23:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:23:33.766Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:23:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:23:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:23:34.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:23:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:34 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400ac30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1348: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:23:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:23:34.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:34 compute-0 ceph-mon[73755]: pgmap v1348: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 341 B/s rd, 0 op/s
Sep 30 18:23:34 compute-0 nova_compute[265391]: 2025-09-30 18:23:34.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:23:34 compute-0 nova_compute[265391]: 2025-09-30 18:23:34.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:23:34 compute-0 nova_compute[265391]: 2025-09-30 18:23:34.428 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:23:34 compute-0 nova_compute[265391]: 2025-09-30 18:23:34.429 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:23:34 compute-0 nova_compute[265391]: 2025-09-30 18:23:34.942 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:23:34 compute-0 nova_compute[265391]: 2025-09-30 18:23:34.943 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:23:34 compute-0 nova_compute[265391]: 2025-09-30 18:23:34.943 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:23:34 compute-0 nova_compute[265391]: 2025-09-30 18:23:34.943 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:23:34 compute-0 nova_compute[265391]: 2025-09-30 18:23:34.943 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:23:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:35 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4003f30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:23:35 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3386502316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:23:35 compute-0 nova_compute[265391]: 2025-09-30 18:23:35.391 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:23:35 compute-0 nova_compute[265391]: 2025-09-30 18:23:35.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:23:35 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3386502316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:23:35 compute-0 nova_compute[265391]: 2025-09-30 18:23:35.563 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:23:35 compute-0 nova_compute[265391]: 2025-09-30 18:23:35.564 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:23:35 compute-0 nova_compute[265391]: 2025-09-30 18:23:35.590 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.026s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:23:35 compute-0 nova_compute[265391]: 2025-09-30 18:23:35.590 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4387MB free_disk=39.992183685302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:23:35 compute-0 nova_compute[265391]: 2025-09-30 18:23:35.591 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:23:35 compute-0 nova_compute[265391]: 2025-09-30 18:23:35.591 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:23:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:23:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:23:36.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:36 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1349: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:23:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:23:36.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/348427045' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:23:36 compute-0 ceph-mon[73755]: pgmap v1349: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:23:36 compute-0 podman[321438]: 2025-09-30 18:23:36.524499967 +0000 UTC m=+0.055865893 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:23:36 compute-0 nova_compute[265391]: 2025-09-30 18:23:36.644 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:23:36 compute-0 nova_compute[265391]: 2025-09-30 18:23:36.644 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:23:35 up  1:26,  0 user,  load average: 0.75, 0.85, 0.86\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:23:36 compute-0 nova_compute[265391]: 2025-09-30 18:23:36.666 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:23:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:23:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3754555794' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:23:37 compute-0 nova_compute[265391]: 2025-09-30 18:23:37.141 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:23:37 compute-0 nova_compute[265391]: 2025-09-30 18:23:37.147 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:23:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:23:37.233Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:23:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:37 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:23:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:23:37 compute-0 nova_compute[265391]: 2025-09-30 18:23:37.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:23:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:23:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:23:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:23:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:23:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:23:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:23:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/4203769896' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:23:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/4203769896' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:23:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3754555794' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:23:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:23:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:23:37.589 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:1b:1b 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-275a3eeb-ef50-4b9a-853e-ab955980469b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-275a3eeb-ef50-4b9a-853e-ab955980469b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0c53b06039bf4f348ffe63a9201c8e5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c860265-d2f7-41ef-be31-8eb602810d53, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=c620806e-4b76-4583-9d88-892c2b24da08) old=Port_Binding(mac=['fa:16:3e:bb:1b:1b'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-275a3eeb-ef50-4b9a-853e-ab955980469b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-275a3eeb-ef50-4b9a-853e-ab955980469b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0c53b06039bf4f348ffe63a9201c8e5f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:23:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:23:37.590 166158 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port c620806e-4b76-4583-9d88-892c2b24da08 in datapath 275a3eeb-ef50-4b9a-853e-ab955980469b updated
Sep 30 18:23:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:23:37.591 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 275a3eeb-ef50-4b9a-853e-ab955980469b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:23:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:23:37.592 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[fc34eaa3-e993-466b-9fe4-3e09dfc5d4ef]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:23:37 compute-0 nova_compute[265391]: 2025-09-30 18:23:37.657 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:23:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:23:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:23:38.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:23:38 compute-0 nova_compute[265391]: 2025-09-30 18:23:38.170 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:23:38 compute-0 nova_compute[265391]: 2025-09-30 18:23:38.170 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.579s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:23:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400ac30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1350: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:23:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:23:38.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:38 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Sep 30 18:23:38 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:23:38.474709) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 18:23:38 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Sep 30 18:23:38 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256618474743, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 2097, "num_deletes": 251, "total_data_size": 3849103, "memory_usage": 3901904, "flush_reason": "Manual Compaction"}
Sep 30 18:23:38 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Sep 30 18:23:38 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/4159245570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:23:38 compute-0 ceph-mon[73755]: pgmap v1350: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:23:38 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256618491441, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 3737815, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33690, "largest_seqno": 35786, "table_properties": {"data_size": 3728541, "index_size": 5768, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19633, "raw_average_key_size": 20, "raw_value_size": 3709833, "raw_average_value_size": 3844, "num_data_blocks": 252, "num_entries": 965, "num_filter_entries": 965, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759256413, "oldest_key_time": 1759256413, "file_creation_time": 1759256618, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:23:38 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 16782 microseconds, and 8463 cpu microseconds.
Sep 30 18:23:38 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:23:38 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:23:38.491487) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 3737815 bytes OK
Sep 30 18:23:38 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:23:38.491512) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Sep 30 18:23:38 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:23:38.493207) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Sep 30 18:23:38 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:23:38.493224) EVENT_LOG_v1 {"time_micros": 1759256618493219, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 18:23:38 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:23:38.493243) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 18:23:38 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 3840449, prev total WAL file size 3840449, number of live WAL files 2.
Sep 30 18:23:38 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:23:38 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:23:38.494147) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Sep 30 18:23:38 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 18:23:38 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(3650KB)], [74(10206KB)]
Sep 30 18:23:38 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256618494229, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 14189411, "oldest_snapshot_seqno": -1}
Sep 30 18:23:38 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6375 keys, 12282302 bytes, temperature: kUnknown
Sep 30 18:23:38 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256618549117, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 12282302, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12241477, "index_size": 23755, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16005, "raw_key_size": 163852, "raw_average_key_size": 25, "raw_value_size": 12128598, "raw_average_value_size": 1902, "num_data_blocks": 951, "num_entries": 6375, "num_filter_entries": 6375, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759256618, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:23:38 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:23:38 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:23:38.549465) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 12282302 bytes
Sep 30 18:23:38 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:23:38.550532) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 258.0 rd, 223.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 10.0 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(7.1) write-amplify(3.3) OK, records in: 6893, records dropped: 518 output_compression: NoCompression
Sep 30 18:23:38 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:23:38.550551) EVENT_LOG_v1 {"time_micros": 1759256618550541, "job": 42, "event": "compaction_finished", "compaction_time_micros": 54990, "compaction_time_cpu_micros": 27948, "output_level": 6, "num_output_files": 1, "total_output_size": 12282302, "num_input_records": 6893, "num_output_records": 6375, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 18:23:38 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:23:38 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256618551370, "job": 42, "event": "table_file_deletion", "file_number": 76}
Sep 30 18:23:38 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:23:38 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256618553648, "job": 42, "event": "table_file_deletion", "file_number": 74}
Sep 30 18:23:38 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:23:38.494023) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:23:38 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:23:38.553698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:23:38 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:23:38.553705) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:23:38 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:23:38.553706) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:23:38 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:23:38.553708) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:23:38 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:23:38.553709) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:23:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:23:38] "GET /metrics HTTP/1.1" 200 46632 "" "Prometheus/2.51.0"
Sep 30 18:23:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:23:38] "GET /metrics HTTP/1.1" 200 46632 "" "Prometheus/2.51.0"
Sep 30 18:23:39 compute-0 nova_compute[265391]: 2025-09-30 18:23:39.170 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:23:39 compute-0 nova_compute[265391]: 2025-09-30 18:23:39.171 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:23:39 compute-0 nova_compute[265391]: 2025-09-30 18:23:39.171 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:23:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:39 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4003f30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:23:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:23:40.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:23:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:40 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1351: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:23:40 compute-0 nova_compute[265391]: 2025-09-30 18:23:40.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:23:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:23:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:23:40.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:23:40 compute-0 ceph-mon[73755]: pgmap v1351: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:23:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:23:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:41 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/182341 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 18:23:41 compute-0 nova_compute[265391]: 2025-09-30 18:23:41.424 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:23:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:23:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:23:42.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:23:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:42 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400ac30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1352: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:23:42 compute-0 nova_compute[265391]: 2025-09-30 18:23:42.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:23:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:23:42.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:42 compute-0 ceph-mon[73755]: pgmap v1352: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:23:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:43 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4003f30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:23:43.767Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:23:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:23:44.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:44 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1353: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:23:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:23:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:23:44.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:23:44 compute-0 ceph-mon[73755]: pgmap v1353: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 255 B/s rd, 0 op/s
Sep 30 18:23:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:45 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:45 compute-0 nova_compute[265391]: 2025-09-30 18:23:45.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:23:45 compute-0 podman[321487]: 2025-09-30 18:23:45.532636042 +0000 UTC m=+0.069343113 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Sep 30 18:23:45 compute-0 podman[321489]: 2025-09-30 18:23:45.542797846 +0000 UTC m=+0.071164831 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250930, config_id=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Sep 30 18:23:45 compute-0 podman[321490]: 2025-09-30 18:23:45.57375571 +0000 UTC m=+0.097878254 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, name=ubi9-minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64)
Sep 30 18:23:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:23:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:23:46.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:46 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400ac30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:46 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:23:46.338 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:33:a9 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-7b746d23-00c6-4893-9766-0d92e4633a53', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b746d23-00c6-4893-9766-0d92e4633a53', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31f85dcb85374df695d9e661ebe35eab', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f1059929-d89b-4274-b16c-528ada6d21cb, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=f6f0e667-67f4-4d4e-acc8-cfbe5cf0907b) old=Port_Binding(mac=['fa:16:3e:c7:33:a9'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-7b746d23-00c6-4893-9766-0d92e4633a53', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b746d23-00c6-4893-9766-0d92e4633a53', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31f85dcb85374df695d9e661ebe35eab', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:23:46 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:23:46.339 166158 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port f6f0e667-67f4-4d4e-acc8-cfbe5cf0907b in datapath 7b746d23-00c6-4893-9766-0d92e4633a53 updated
Sep 30 18:23:46 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:23:46.341 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7b746d23-00c6-4893-9766-0d92e4633a53, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:23:46 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:23:46.342 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[d85458f1-5e81-4f3d-8ad1-2a11d54ec0ec]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:23:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1354: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:23:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:23:46.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:46 compute-0 nova_compute[265391]: 2025-09-30 18:23:46.423 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:23:46 compute-0 ceph-mon[73755]: pgmap v1354: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:23:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:23:47.235Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:23:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:47 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4003f30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:47 compute-0 nova_compute[265391]: 2025-09-30 18:23:47.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:23:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:23:48.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:48 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1355: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:23:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:23:48.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:48 compute-0 ceph-mon[73755]: pgmap v1355: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:23:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:23:48] "GET /metrics HTTP/1.1" 200 46632 "" "Prometheus/2.51.0"
Sep 30 18:23:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:23:48] "GET /metrics HTTP/1.1" 200 46632 "" "Prometheus/2.51.0"
Sep 30 18:23:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:49 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384002bf0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:23:50.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:50 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400ac30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1356: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:23:50 compute-0 nova_compute[265391]: 2025-09-30 18:23:50.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:23:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:23:50.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:50 compute-0 ceph-mon[73755]: pgmap v1356: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:23:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:23:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:51 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:23:52.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:52 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4003f30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:23:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:23:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1357: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:23:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:23:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:52 compute-0 nova_compute[265391]: 2025-09-30 18:23:52.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:23:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:23:52.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:53 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384002c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:53 compute-0 ceph-mon[73755]: pgmap v1357: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:23:53 compute-0 sudo[321556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:23:53 compute-0 sudo[321556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:23:53 compute-0 sudo[321556]: pam_unix(sudo:session): session closed for user root
Sep 30 18:23:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:23:53.768Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:23:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:23:54.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:54 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400ac30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:23:54.302 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:23:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:23:54.303 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:23:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:23:54.303 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:23:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1358: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:23:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:23:54.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:55 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:55 compute-0 ceph-mon[73755]: pgmap v1358: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:23:55 compute-0 nova_compute[265391]: 2025-09-30 18:23:55.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:23:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:23:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:23:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:23:56.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:23:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:56 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4003f30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1359: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:23:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:23:56.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:23:57.236Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:23:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:57 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384002c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:57 compute-0 ceph-mon[73755]: pgmap v1359: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:23:57 compute-0 nova_compute[265391]: 2025-09-30 18:23:57.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:23:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:23:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2428006301' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:23:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:23:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2428006301' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:23:57 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Sep 30 18:23:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:23:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:23:58.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:23:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:58 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400ac30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1360: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:23:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2428006301' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:23:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2428006301' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:23:58 compute-0 ceph-mon[73755]: pgmap v1360: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:23:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:23:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:23:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:23:58.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:23:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:23:58] "GET /metrics HTTP/1.1" 200 46633 "" "Prometheus/2.51.0"
Sep 30 18:23:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:23:58] "GET /metrics HTTP/1.1" 200 46633 "" "Prometheus/2.51.0"
Sep 30 18:23:59 compute-0 ovn_controller[156242]: 2025-09-30T18:23:59Z|00131|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Sep 30 18:23:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:23:59 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400ac30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:23:59 compute-0 podman[276673]: time="2025-09-30T18:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:23:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:23:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10295 "" "Go-http-client/1.1"
Sep 30 18:24:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:24:00.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:00 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1361: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:24:00 compute-0 nova_compute[265391]: 2025-09-30 18:24:00.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:00 compute-0 ceph-mon[73755]: pgmap v1361: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:24:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:24:00.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:24:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:01 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384003a20 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:01 compute-0 openstack_network_exporter[279566]: ERROR   18:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:24:01 compute-0 openstack_network_exporter[279566]: ERROR   18:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:24:01 compute-0 openstack_network_exporter[279566]: ERROR   18:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:24:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:24:01 compute-0 openstack_network_exporter[279566]: ERROR   18:24:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:24:01 compute-0 openstack_network_exporter[279566]: ERROR   18:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:24:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:24:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:24:02.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:02 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400ac30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1362: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:24:02 compute-0 ceph-mon[73755]: pgmap v1362: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:24:02 compute-0 nova_compute[265391]: 2025-09-30 18:24:02.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:24:02.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:03 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400ac30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:03 compute-0 sudo[321594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:24:03 compute-0 sudo[321594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:24:03 compute-0 sudo[321594]: pam_unix(sudo:session): session closed for user root
Sep 30 18:24:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:24:03.768Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:24:03 compute-0 sudo[321626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:24:03 compute-0 sudo[321626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:24:03 compute-0 podman[321619]: 2025-09-30 18:24:03.817255092 +0000 UTC m=+0.074565859 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Sep 30 18:24:03 compute-0 podman[321618]: 2025-09-30 18:24:03.855300441 +0000 UTC m=+0.115376110 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Sep 30 18:24:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:24:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:24:04.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:24:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:04 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:04 compute-0 sudo[321626]: pam_unix(sudo:session): session closed for user root
Sep 30 18:24:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1363: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:24:04 compute-0 ceph-mon[73755]: pgmap v1363: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:24:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:24:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:24:04.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:24:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:24:04 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:24:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:24:04 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:24:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:24:04 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:24:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:24:04 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:24:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:24:04 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:24:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:24:04 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:24:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:24:04 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:24:04 compute-0 sudo[321725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:24:04 compute-0 sudo[321725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:24:04 compute-0 sudo[321725]: pam_unix(sudo:session): session closed for user root
Sep 30 18:24:04 compute-0 sudo[321750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:24:04 compute-0 sudo[321750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:24:05 compute-0 podman[321816]: 2025-09-30 18:24:05.13152817 +0000 UTC m=+0.063270816 container create cc647cff3b43b67532fa50b406c7103e2588da03c3b314cd18ac92cb3ca2a8d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_elion, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:24:05 compute-0 systemd[1]: Started libpod-conmon-cc647cff3b43b67532fa50b406c7103e2588da03c3b314cd18ac92cb3ca2a8d2.scope.
Sep 30 18:24:05 compute-0 podman[321816]: 2025-09-30 18:24:05.099238871 +0000 UTC m=+0.030981577 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:24:05 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:24:05 compute-0 podman[321816]: 2025-09-30 18:24:05.231277402 +0000 UTC m=+0.163020058 container init cc647cff3b43b67532fa50b406c7103e2588da03c3b314cd18ac92cb3ca2a8d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_elion, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Sep 30 18:24:05 compute-0 podman[321816]: 2025-09-30 18:24:05.240353878 +0000 UTC m=+0.172096494 container start cc647cff3b43b67532fa50b406c7103e2588da03c3b314cd18ac92cb3ca2a8d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_elion, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Sep 30 18:24:05 compute-0 podman[321816]: 2025-09-30 18:24:05.243998823 +0000 UTC m=+0.175741489 container attach cc647cff3b43b67532fa50b406c7103e2588da03c3b314cd18ac92cb3ca2a8d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_elion, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Sep 30 18:24:05 compute-0 adoring_elion[321832]: 167 167
Sep 30 18:24:05 compute-0 systemd[1]: libpod-cc647cff3b43b67532fa50b406c7103e2588da03c3b314cd18ac92cb3ca2a8d2.scope: Deactivated successfully.
Sep 30 18:24:05 compute-0 podman[321816]: 2025-09-30 18:24:05.248838279 +0000 UTC m=+0.180580905 container died cc647cff3b43b67532fa50b406c7103e2588da03c3b314cd18ac92cb3ca2a8d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid)
Sep 30 18:24:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-23945a609ac9187a3b230e4b3d8201900c20f978a383101daa5eccfccae3b3d5-merged.mount: Deactivated successfully.
Sep 30 18:24:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:05 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384003a20 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:05 compute-0 podman[321816]: 2025-09-30 18:24:05.296712473 +0000 UTC m=+0.228455079 container remove cc647cff3b43b67532fa50b406c7103e2588da03c3b314cd18ac92cb3ca2a8d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_elion, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Sep 30 18:24:05 compute-0 systemd[1]: libpod-conmon-cc647cff3b43b67532fa50b406c7103e2588da03c3b314cd18ac92cb3ca2a8d2.scope: Deactivated successfully.
Sep 30 18:24:05 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:05.315 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:35:41:2a 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6901f664-336b-42d2-bbf7-58951befc8d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '08fc2cbd16474855b7ae474fa9859f76', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c24d9de-651b-4bf8-842a-1286ab88b11d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=5b6cbf18-1826-41d0-920f-e9db4f1a1832) old=Port_Binding(mac=['fa:16:3e:35:41:2a'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6901f664-336b-42d2-bbf7-58951befc8d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '08fc2cbd16474855b7ae474fa9859f76', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:24:05 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:05.317 166158 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 5b6cbf18-1826-41d0-920f-e9db4f1a1832 in datapath 6901f664-336b-42d2-bbf7-58951befc8d1 updated
Sep 30 18:24:05 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:05.318 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6901f664-336b-42d2-bbf7-58951befc8d1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:24:05 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:05.319 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[f14f99fc-b669-44f1-b910-5126b66312f7]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:24:05 compute-0 nova_compute[265391]: 2025-09-30 18:24:05.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:05 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:24:05 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:24:05 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:24:05 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:24:05 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:24:05 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:24:05 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:24:05 compute-0 podman[321856]: 2025-09-30 18:24:05.504278608 +0000 UTC m=+0.058638275 container create 93a02e8866dcfd3e66a8ee0ed3c87ecf833d6bfdf20671de71c82a6abbdc1efc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_albattani, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:24:05 compute-0 systemd[1]: Started libpod-conmon-93a02e8866dcfd3e66a8ee0ed3c87ecf833d6bfdf20671de71c82a6abbdc1efc.scope.
Sep 30 18:24:05 compute-0 podman[321856]: 2025-09-30 18:24:05.477900442 +0000 UTC m=+0.032260169 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:24:05 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:24:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9756edd8642c5dfa10157b08acc5bd9ba61588d5699607c2f3c8890df040e228/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:24:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9756edd8642c5dfa10157b08acc5bd9ba61588d5699607c2f3c8890df040e228/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:24:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9756edd8642c5dfa10157b08acc5bd9ba61588d5699607c2f3c8890df040e228/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:24:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9756edd8642c5dfa10157b08acc5bd9ba61588d5699607c2f3c8890df040e228/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:24:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9756edd8642c5dfa10157b08acc5bd9ba61588d5699607c2f3c8890df040e228/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:24:05 compute-0 podman[321856]: 2025-09-30 18:24:05.614863102 +0000 UTC m=+0.169222879 container init 93a02e8866dcfd3e66a8ee0ed3c87ecf833d6bfdf20671de71c82a6abbdc1efc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_albattani, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:24:05 compute-0 podman[321856]: 2025-09-30 18:24:05.626653918 +0000 UTC m=+0.181013585 container start 93a02e8866dcfd3e66a8ee0ed3c87ecf833d6bfdf20671de71c82a6abbdc1efc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_albattani, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325)
Sep 30 18:24:05 compute-0 podman[321856]: 2025-09-30 18:24:05.630740235 +0000 UTC m=+0.185100002 container attach 93a02e8866dcfd3e66a8ee0ed3c87ecf833d6bfdf20671de71c82a6abbdc1efc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_albattani, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:24:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:24:05 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Sep 30 18:24:05 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:24:05.850135) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 18:24:05 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Sep 30 18:24:05 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256645850186, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 481, "num_deletes": 255, "total_data_size": 448754, "memory_usage": 459488, "flush_reason": "Manual Compaction"}
Sep 30 18:24:05 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Sep 30 18:24:05 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256645856003, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 442828, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35787, "largest_seqno": 36267, "table_properties": {"data_size": 440144, "index_size": 720, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6451, "raw_average_key_size": 18, "raw_value_size": 434658, "raw_average_value_size": 1241, "num_data_blocks": 33, "num_entries": 350, "num_filter_entries": 350, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759256620, "oldest_key_time": 1759256620, "file_creation_time": 1759256645, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:24:05 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 5926 microseconds, and 3464 cpu microseconds.
Sep 30 18:24:05 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:24:05 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:24:05.856061) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 442828 bytes OK
Sep 30 18:24:05 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:24:05.856087) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Sep 30 18:24:05 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:24:05.858066) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Sep 30 18:24:05 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:24:05.858089) EVENT_LOG_v1 {"time_micros": 1759256645858082, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 18:24:05 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:24:05.858110) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 18:24:05 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 445900, prev total WAL file size 445900, number of live WAL files 2.
Sep 30 18:24:05 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:24:05 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:24:05.858704) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303032' seq:72057594037927935, type:22 .. '6C6F676D0031323533' seq:0, type:0; will stop at (end)
Sep 30 18:24:05 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 18:24:05 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(432KB)], [77(11MB)]
Sep 30 18:24:05 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256645858748, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 12725130, "oldest_snapshot_seqno": -1}
Sep 30 18:24:05 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6204 keys, 12615631 bytes, temperature: kUnknown
Sep 30 18:24:05 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256645918866, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 12615631, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12575052, "index_size": 23957, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15557, "raw_key_size": 161240, "raw_average_key_size": 25, "raw_value_size": 12464269, "raw_average_value_size": 2009, "num_data_blocks": 958, "num_entries": 6204, "num_filter_entries": 6204, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759256645, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:24:05 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:24:05 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:24:05.919252) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 12615631 bytes
Sep 30 18:24:05 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:24:05.921041) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 211.2 rd, 209.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 11.7 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(57.2) write-amplify(28.5) OK, records in: 6725, records dropped: 521 output_compression: NoCompression
Sep 30 18:24:05 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:24:05.921077) EVENT_LOG_v1 {"time_micros": 1759256645921060, "job": 44, "event": "compaction_finished", "compaction_time_micros": 60251, "compaction_time_cpu_micros": 36376, "output_level": 6, "num_output_files": 1, "total_output_size": 12615631, "num_input_records": 6725, "num_output_records": 6204, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 18:24:05 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:24:05 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256645921538, "job": 44, "event": "table_file_deletion", "file_number": 79}
Sep 30 18:24:05 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:24:05 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256645926505, "job": 44, "event": "table_file_deletion", "file_number": 77}
Sep 30 18:24:05 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:24:05.858622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:24:05 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:24:05.926619) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:24:05 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:24:05.926628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:24:05 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:24:05.926633) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:24:05 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:24:05.926638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:24:05 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:24:05.926641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:24:06 compute-0 dreamy_albattani[321873]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:24:06 compute-0 dreamy_albattani[321873]: --> All data devices are unavailable
Sep 30 18:24:06 compute-0 systemd[1]: libpod-93a02e8866dcfd3e66a8ee0ed3c87ecf833d6bfdf20671de71c82a6abbdc1efc.scope: Deactivated successfully.
Sep 30 18:24:06 compute-0 podman[321856]: 2025-09-30 18:24:06.053326698 +0000 UTC m=+0.607686435 container died 93a02e8866dcfd3e66a8ee0ed3c87ecf833d6bfdf20671de71c82a6abbdc1efc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Sep 30 18:24:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:24:06.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-9756edd8642c5dfa10157b08acc5bd9ba61588d5699607c2f3c8890df040e228-merged.mount: Deactivated successfully.
Sep 30 18:24:06 compute-0 podman[321856]: 2025-09-30 18:24:06.121575202 +0000 UTC m=+0.675934909 container remove 93a02e8866dcfd3e66a8ee0ed3c87ecf833d6bfdf20671de71c82a6abbdc1efc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:24:06 compute-0 systemd[1]: libpod-conmon-93a02e8866dcfd3e66a8ee0ed3c87ecf833d6bfdf20671de71c82a6abbdc1efc.scope: Deactivated successfully.
Sep 30 18:24:06 compute-0 sudo[321750]: pam_unix(sudo:session): session closed for user root
Sep 30 18:24:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:06 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384003a20 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:06 compute-0 sudo[321902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:24:06 compute-0 sudo[321902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:24:06 compute-0 sudo[321902]: pam_unix(sudo:session): session closed for user root
Sep 30 18:24:06 compute-0 sudo[321927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:24:06 compute-0 sudo[321927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:24:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1364: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:24:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:24:06.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:06 compute-0 ceph-mon[73755]: pgmap v1364: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:24:06 compute-0 podman[321995]: 2025-09-30 18:24:06.910927877 +0000 UTC m=+0.048980384 container create db43b46fbc3b25123215b52d24ed44a74eb4ac2589e41becaaaabc5226080b33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Sep 30 18:24:06 compute-0 systemd[1]: Started libpod-conmon-db43b46fbc3b25123215b52d24ed44a74eb4ac2589e41becaaaabc5226080b33.scope.
Sep 30 18:24:06 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:24:06 compute-0 podman[321995]: 2025-09-30 18:24:06.894443679 +0000 UTC m=+0.032496226 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:24:07 compute-0 podman[321995]: 2025-09-30 18:24:07.005911916 +0000 UTC m=+0.143964453 container init db43b46fbc3b25123215b52d24ed44a74eb4ac2589e41becaaaabc5226080b33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_darwin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid)
Sep 30 18:24:07 compute-0 podman[321995]: 2025-09-30 18:24:07.014029357 +0000 UTC m=+0.152081874 container start db43b46fbc3b25123215b52d24ed44a74eb4ac2589e41becaaaabc5226080b33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_darwin, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:24:07 compute-0 podman[321995]: 2025-09-30 18:24:07.018186225 +0000 UTC m=+0.156238742 container attach db43b46fbc3b25123215b52d24ed44a74eb4ac2589e41becaaaabc5226080b33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_darwin, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 18:24:07 compute-0 elastic_darwin[322011]: 167 167
Sep 30 18:24:07 compute-0 systemd[1]: libpod-db43b46fbc3b25123215b52d24ed44a74eb4ac2589e41becaaaabc5226080b33.scope: Deactivated successfully.
Sep 30 18:24:07 compute-0 conmon[322011]: conmon db43b46fbc3b25123215 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-db43b46fbc3b25123215b52d24ed44a74eb4ac2589e41becaaaabc5226080b33.scope/container/memory.events
Sep 30 18:24:07 compute-0 podman[321995]: 2025-09-30 18:24:07.021924252 +0000 UTC m=+0.159976769 container died db43b46fbc3b25123215b52d24ed44a74eb4ac2589e41becaaaabc5226080b33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_darwin, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 18:24:07 compute-0 podman[322008]: 2025-09-30 18:24:07.030614008 +0000 UTC m=+0.073146222 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent)
Sep 30 18:24:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fba280f34c9c21592c4eb593cf199d44801d5665ba581dec8f4add45ea71abf-merged.mount: Deactivated successfully.
Sep 30 18:24:07 compute-0 podman[321995]: 2025-09-30 18:24:07.06572354 +0000 UTC m=+0.203776057 container remove db43b46fbc3b25123215b52d24ed44a74eb4ac2589e41becaaaabc5226080b33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_darwin, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 18:24:07 compute-0 systemd[1]: libpod-conmon-db43b46fbc3b25123215b52d24ed44a74eb4ac2589e41becaaaabc5226080b33.scope: Deactivated successfully.
Sep 30 18:24:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:24:07.237Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:24:07 compute-0 podman[322055]: 2025-09-30 18:24:07.250983395 +0000 UTC m=+0.044115997 container create 42fd78a3bf7f47dbda912b64cdefcaff0ecd8d9c7a70e086c84424f29629bf78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_morse, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:24:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:07 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:07 compute-0 systemd[1]: Started libpod-conmon-42fd78a3bf7f47dbda912b64cdefcaff0ecd8d9c7a70e086c84424f29629bf78.scope.
Sep 30 18:24:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:24:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:24:07 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:24:07 compute-0 podman[322055]: 2025-09-30 18:24:07.233885661 +0000 UTC m=+0.027018303 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:24:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9dcd8dae87131d17f596369b8051d551bc9e8cddc0ee7c6816ecbf784831ffa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:24:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9dcd8dae87131d17f596369b8051d551bc9e8cddc0ee7c6816ecbf784831ffa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:24:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9dcd8dae87131d17f596369b8051d551bc9e8cddc0ee7c6816ecbf784831ffa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:24:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9dcd8dae87131d17f596369b8051d551bc9e8cddc0ee7c6816ecbf784831ffa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:24:07 compute-0 podman[322055]: 2025-09-30 18:24:07.34928455 +0000 UTC m=+0.142417172 container init 42fd78a3bf7f47dbda912b64cdefcaff0ecd8d9c7a70e086c84424f29629bf78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_morse, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 18:24:07 compute-0 podman[322055]: 2025-09-30 18:24:07.356245081 +0000 UTC m=+0.149377673 container start 42fd78a3bf7f47dbda912b64cdefcaff0ecd8d9c7a70e086c84424f29629bf78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_morse, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:24:07 compute-0 podman[322055]: 2025-09-30 18:24:07.360385549 +0000 UTC m=+0.153518151 container attach 42fd78a3bf7f47dbda912b64cdefcaff0ecd8d9c7a70e086c84424f29629bf78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_morse, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:24:07 compute-0 nova_compute[265391]: 2025-09-30 18:24:07.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:07 compute-0 adoring_morse[322071]: {
Sep 30 18:24:07 compute-0 adoring_morse[322071]:     "0": [
Sep 30 18:24:07 compute-0 adoring_morse[322071]:         {
Sep 30 18:24:07 compute-0 adoring_morse[322071]:             "devices": [
Sep 30 18:24:07 compute-0 adoring_morse[322071]:                 "/dev/loop3"
Sep 30 18:24:07 compute-0 adoring_morse[322071]:             ],
Sep 30 18:24:07 compute-0 adoring_morse[322071]:             "lv_name": "ceph_lv0",
Sep 30 18:24:07 compute-0 adoring_morse[322071]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:24:07 compute-0 adoring_morse[322071]:             "lv_size": "21470642176",
Sep 30 18:24:07 compute-0 adoring_morse[322071]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:24:07 compute-0 adoring_morse[322071]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:24:07 compute-0 adoring_morse[322071]:             "name": "ceph_lv0",
Sep 30 18:24:07 compute-0 adoring_morse[322071]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:24:07 compute-0 adoring_morse[322071]:             "tags": {
Sep 30 18:24:07 compute-0 adoring_morse[322071]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:24:07 compute-0 adoring_morse[322071]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:24:07 compute-0 adoring_morse[322071]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:24:07 compute-0 adoring_morse[322071]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:24:07 compute-0 adoring_morse[322071]:                 "ceph.cluster_name": "ceph",
Sep 30 18:24:07 compute-0 adoring_morse[322071]:                 "ceph.crush_device_class": "",
Sep 30 18:24:07 compute-0 adoring_morse[322071]:                 "ceph.encrypted": "0",
Sep 30 18:24:07 compute-0 adoring_morse[322071]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:24:07 compute-0 adoring_morse[322071]:                 "ceph.osd_id": "0",
Sep 30 18:24:07 compute-0 adoring_morse[322071]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:24:07 compute-0 adoring_morse[322071]:                 "ceph.type": "block",
Sep 30 18:24:07 compute-0 adoring_morse[322071]:                 "ceph.vdo": "0",
Sep 30 18:24:07 compute-0 adoring_morse[322071]:                 "ceph.with_tpm": "0"
Sep 30 18:24:07 compute-0 adoring_morse[322071]:             },
Sep 30 18:24:07 compute-0 adoring_morse[322071]:             "type": "block",
Sep 30 18:24:07 compute-0 adoring_morse[322071]:             "vg_name": "ceph_vg0"
Sep 30 18:24:07 compute-0 adoring_morse[322071]:         }
Sep 30 18:24:07 compute-0 adoring_morse[322071]:     ]
Sep 30 18:24:07 compute-0 adoring_morse[322071]: }
Sep 30 18:24:07 compute-0 systemd[1]: libpod-42fd78a3bf7f47dbda912b64cdefcaff0ecd8d9c7a70e086c84424f29629bf78.scope: Deactivated successfully.
Sep 30 18:24:07 compute-0 podman[322055]: 2025-09-30 18:24:07.655616112 +0000 UTC m=+0.448748714 container died 42fd78a3bf7f47dbda912b64cdefcaff0ecd8d9c7a70e086c84424f29629bf78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_morse, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:24:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9dcd8dae87131d17f596369b8051d551bc9e8cddc0ee7c6816ecbf784831ffa-merged.mount: Deactivated successfully.
Sep 30 18:24:07 compute-0 podman[322055]: 2025-09-30 18:24:07.708971369 +0000 UTC m=+0.502103981 container remove 42fd78a3bf7f47dbda912b64cdefcaff0ecd8d9c7a70e086c84424f29629bf78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_morse, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Sep 30 18:24:07 compute-0 systemd[1]: libpod-conmon-42fd78a3bf7f47dbda912b64cdefcaff0ecd8d9c7a70e086c84424f29629bf78.scope: Deactivated successfully.
Sep 30 18:24:07 compute-0 sudo[321927]: pam_unix(sudo:session): session closed for user root
Sep 30 18:24:07 compute-0 sudo[322093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:24:07 compute-0 sudo[322093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:24:07 compute-0 sudo[322093]: pam_unix(sudo:session): session closed for user root
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:24:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:24:07
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['.nfs', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'images', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log']
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:24:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:24:07 compute-0 sudo[322118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:24:07 compute-0 sudo[322118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:24:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:24:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:24:08.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:24:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:08 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1365: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:24:08 compute-0 podman[322186]: 2025-09-30 18:24:08.402919144 +0000 UTC m=+0.039476027 container create 86394c667fc65aa49e802068d5704290fc1793700d219b654b727fccf65ff949 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_turing, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:24:08 compute-0 systemd[1]: Started libpod-conmon-86394c667fc65aa49e802068d5704290fc1793700d219b654b727fccf65ff949.scope.
Sep 30 18:24:08 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:24:08 compute-0 podman[322186]: 2025-09-30 18:24:08.479827403 +0000 UTC m=+0.116384306 container init 86394c667fc65aa49e802068d5704290fc1793700d219b654b727fccf65ff949 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_turing, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:24:08 compute-0 podman[322186]: 2025-09-30 18:24:08.386889927 +0000 UTC m=+0.023446830 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:24:08 compute-0 podman[322186]: 2025-09-30 18:24:08.488367875 +0000 UTC m=+0.124924798 container start 86394c667fc65aa49e802068d5704290fc1793700d219b654b727fccf65ff949 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_turing, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:24:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:24:08.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:08 compute-0 podman[322186]: 2025-09-30 18:24:08.492187484 +0000 UTC m=+0.128744397 container attach 86394c667fc65aa49e802068d5704290fc1793700d219b654b727fccf65ff949 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_turing, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS)
Sep 30 18:24:08 compute-0 modest_turing[322202]: 167 167
Sep 30 18:24:08 compute-0 systemd[1]: libpod-86394c667fc65aa49e802068d5704290fc1793700d219b654b727fccf65ff949.scope: Deactivated successfully.
Sep 30 18:24:08 compute-0 podman[322186]: 2025-09-30 18:24:08.494647278 +0000 UTC m=+0.131204181 container died 86394c667fc65aa49e802068d5704290fc1793700d219b654b727fccf65ff949 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_turing, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:24:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f363944edd6abfef4579e4fd1fb0cad9e463f442e42c51c04f9f674949dfef5-merged.mount: Deactivated successfully.
Sep 30 18:24:08 compute-0 podman[322186]: 2025-09-30 18:24:08.532724257 +0000 UTC m=+0.169281140 container remove 86394c667fc65aa49e802068d5704290fc1793700d219b654b727fccf65ff949 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:24:08 compute-0 systemd[1]: libpod-conmon-86394c667fc65aa49e802068d5704290fc1793700d219b654b727fccf65ff949.scope: Deactivated successfully.
Sep 30 18:24:08 compute-0 podman[322224]: 2025-09-30 18:24:08.744787419 +0000 UTC m=+0.069608160 container create 4095af3daad7a6da68ff09d6ac200f0cd14e1f2024246872a6146ce2dab91308 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_kalam, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:24:08 compute-0 systemd[1]: Started libpod-conmon-4095af3daad7a6da68ff09d6ac200f0cd14e1f2024246872a6146ce2dab91308.scope.
Sep 30 18:24:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:24:08] "GET /metrics HTTP/1.1" 200 46630 "" "Prometheus/2.51.0"
Sep 30 18:24:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:24:08] "GET /metrics HTTP/1.1" 200 46630 "" "Prometheus/2.51.0"
Sep 30 18:24:08 compute-0 podman[322224]: 2025-09-30 18:24:08.715511568 +0000 UTC m=+0.040332389 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:24:08 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:24:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac782d7ee66a136c80f8827f608cbce51be55b3cbe9654eca968fb9b8e90700/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:24:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac782d7ee66a136c80f8827f608cbce51be55b3cbe9654eca968fb9b8e90700/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:24:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac782d7ee66a136c80f8827f608cbce51be55b3cbe9654eca968fb9b8e90700/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:24:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac782d7ee66a136c80f8827f608cbce51be55b3cbe9654eca968fb9b8e90700/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:24:08 compute-0 podman[322224]: 2025-09-30 18:24:08.83408908 +0000 UTC m=+0.158909821 container init 4095af3daad7a6da68ff09d6ac200f0cd14e1f2024246872a6146ce2dab91308 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_kalam, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Sep 30 18:24:08 compute-0 podman[322224]: 2025-09-30 18:24:08.843069724 +0000 UTC m=+0.167890475 container start 4095af3daad7a6da68ff09d6ac200f0cd14e1f2024246872a6146ce2dab91308 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:24:08 compute-0 podman[322224]: 2025-09-30 18:24:08.84677552 +0000 UTC m=+0.171596271 container attach 4095af3daad7a6da68ff09d6ac200f0cd14e1f2024246872a6146ce2dab91308 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 18:24:08 compute-0 ceph-mon[73755]: pgmap v1365: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:24:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:09 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384003a20 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:09 compute-0 lvm[322315]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:24:09 compute-0 lvm[322315]: VG ceph_vg0 finished
Sep 30 18:24:09 compute-0 quizzical_kalam[322240]: {}
Sep 30 18:24:09 compute-0 systemd[1]: libpod-4095af3daad7a6da68ff09d6ac200f0cd14e1f2024246872a6146ce2dab91308.scope: Deactivated successfully.
Sep 30 18:24:09 compute-0 podman[322224]: 2025-09-30 18:24:09.627671296 +0000 UTC m=+0.952492037 container died 4095af3daad7a6da68ff09d6ac200f0cd14e1f2024246872a6146ce2dab91308 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_kalam, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:24:09 compute-0 systemd[1]: libpod-4095af3daad7a6da68ff09d6ac200f0cd14e1f2024246872a6146ce2dab91308.scope: Consumed 1.267s CPU time.
Sep 30 18:24:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-fac782d7ee66a136c80f8827f608cbce51be55b3cbe9654eca968fb9b8e90700-merged.mount: Deactivated successfully.
Sep 30 18:24:09 compute-0 podman[322224]: 2025-09-30 18:24:09.674235716 +0000 UTC m=+0.999056467 container remove 4095af3daad7a6da68ff09d6ac200f0cd14e1f2024246872a6146ce2dab91308 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_kalam, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:24:09 compute-0 systemd[1]: libpod-conmon-4095af3daad7a6da68ff09d6ac200f0cd14e1f2024246872a6146ce2dab91308.scope: Deactivated successfully.
Sep 30 18:24:09 compute-0 sudo[322118]: pam_unix(sudo:session): session closed for user root
Sep 30 18:24:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:24:09 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:24:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:24:09 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:24:09 compute-0 sudo[322331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:24:09 compute-0 sudo[322331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:24:09 compute-0 sudo[322331]: pam_unix(sudo:session): session closed for user root
Sep 30 18:24:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:24:10.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:10 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1366: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:24:10 compute-0 nova_compute[265391]: 2025-09-30 18:24:10.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:24:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:24:10.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:24:10 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:24:10 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:24:10 compute-0 ceph-mon[73755]: pgmap v1366: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:24:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:24:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:11 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400ac30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:24:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:24:12.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:24:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:12 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1367: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:24:12 compute-0 ceph-mon[73755]: pgmap v1367: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:24:12 compute-0 nova_compute[265391]: 2025-09-30 18:24:12.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:24:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:24:12.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:24:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:13 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:13 compute-0 sudo[322360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:24:13 compute-0 sudo[322360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:24:13 compute-0 sudo[322360]: pam_unix(sudo:session): session closed for user root
Sep 30 18:24:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:24:13.769Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:24:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:24:14.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:14 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384003a20 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1368: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:24:14 compute-0 ceph-mon[73755]: pgmap v1368: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:24:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:14.492 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:24:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:14.493 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:24:14 compute-0 nova_compute[265391]: 2025-09-30 18:24:14.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:24:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:24:14.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:24:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:15 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400ac30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:15 compute-0 nova_compute[265391]: 2025-09-30 18:24:15.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:15 compute-0 unix_chkpwd[322390]: password check failed for user (root)
Sep 30 18:24:15 compute-0 sshd-session[322386]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158  user=root
Sep 30 18:24:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:24:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:24:16.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:16 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1369: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:24:16 compute-0 ceph-mon[73755]: pgmap v1369: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:24:16 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:16.483 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:36:fd 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-a03d08ec-972f-45ad-9eed-86a07dbccb55', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a03d08ec-972f-45ad-9eed-86a07dbccb55', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c634e1c17ed54907969576a0eb8eff50', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d8f0f2ed-64e2-4f02-8e96-5263bb1056ff, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=c1a76e60-8e45-44a5-9199-b4a5de182dea) old=Port_Binding(mac=['fa:16:3e:e4:36:fd'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-a03d08ec-972f-45ad-9eed-86a07dbccb55', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a03d08ec-972f-45ad-9eed-86a07dbccb55', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c634e1c17ed54907969576a0eb8eff50', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:24:16 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:16.483 166158 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port c1a76e60-8e45-44a5-9199-b4a5de182dea in datapath a03d08ec-972f-45ad-9eed-86a07dbccb55 updated
Sep 30 18:24:16 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:16.485 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a03d08ec-972f-45ad-9eed-86a07dbccb55, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:24:16 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:16.486 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[34e6569e-617b-4292-b84b-3b1a58151be6]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:24:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:24:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:24:16.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:24:16 compute-0 podman[322393]: 2025-09-30 18:24:16.555032069 +0000 UTC m=+0.076247843 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true)
Sep 30 18:24:16 compute-0 podman[322394]: 2025-09-30 18:24:16.565444139 +0000 UTC m=+0.078427679 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, vcs-type=git, build-date=2025-08-20T13:12:41, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, distribution-scope=public, release=1755695350, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9)
Sep 30 18:24:16 compute-0 podman[322392]: 2025-09-30 18:24:16.585440099 +0000 UTC m=+0.102020383 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Sep 30 18:24:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:24:17.238Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:24:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:17 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001fc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:17 compute-0 sshd-session[322386]: Failed password for root from 45.252.249.158 port 42276 ssh2
Sep 30 18:24:17 compute-0 nova_compute[265391]: 2025-09-30 18:24:17.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:24:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:24:18.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:24:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:18 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384003a20 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1370: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:24:18 compute-0 ceph-mon[73755]: pgmap v1370: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:24:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:24:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:24:18.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:24:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:24:18] "GET /metrics HTTP/1.1" 200 46630 "" "Prometheus/2.51.0"
Sep 30 18:24:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:24:18] "GET /metrics HTTP/1.1" 200 46630 "" "Prometheus/2.51.0"
Sep 30 18:24:18 compute-0 sshd-session[322386]: Received disconnect from 45.252.249.158 port 42276:11: Bye Bye [preauth]
Sep 30 18:24:18 compute-0 sshd-session[322386]: Disconnected from authenticating user root 45.252.249.158 port 42276 [preauth]
Sep 30 18:24:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:19 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384003a20 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:24:20.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:20 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1371: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:24:20 compute-0 nova_compute[265391]: 2025-09-30 18:24:20.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:20 compute-0 ceph-mon[73755]: pgmap v1371: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:24:20 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:20.495 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:24:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:24:20.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:24:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:21 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001fc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:24:22.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:22 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400ac30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:24:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:24:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1372: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:24:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:24:22 compute-0 nova_compute[265391]: 2025-09-30 18:24:22.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:24:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:24:22.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:24:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:23 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384003a20 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:23 compute-0 ceph-mon[73755]: pgmap v1372: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:24:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:24:23.770Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:24:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:24:24.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:24 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1373: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:24:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:24:24.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:25 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:25 compute-0 ceph-mon[73755]: pgmap v1373: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:24:25 compute-0 nova_compute[265391]: 2025-09-30 18:24:25.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:25 compute-0 sshd-session[322450]: Invalid user elite from 154.125.120.7 port 53163
Sep 30 18:24:25 compute-0 sshd-session[322450]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:24:25 compute-0 sshd-session[322450]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=154.125.120.7
Sep 30 18:24:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:24:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:24:26.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:26 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400ac30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1374: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:24:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:24:26.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:27 compute-0 sshd-session[322450]: Failed password for invalid user elite from 154.125.120.7 port 53163 ssh2
Sep 30 18:24:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:24:27.239Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:24:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:27 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384003a20 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:27 compute-0 ceph-mon[73755]: pgmap v1374: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:24:27 compute-0 nova_compute[265391]: 2025-09-30 18:24:27.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:27 compute-0 sshd-session[322450]: Received disconnect from 154.125.120.7 port 53163:11: Bye Bye [preauth]
Sep 30 18:24:27 compute-0 sshd-session[322450]: Disconnected from invalid user elite 154.125.120.7 port 53163 [preauth]
Sep 30 18:24:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:24:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:24:28.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:24:28 compute-0 nova_compute[265391]: 2025-09-30 18:24:28.130 2 DEBUG oslo_concurrency.lockutils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "7569c2e7-ec68-4b21-b36b-7c828ac8af52" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:24:28 compute-0 nova_compute[265391]: 2025-09-30 18:24:28.131 2 DEBUG oslo_concurrency.lockutils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "7569c2e7-ec68-4b21-b36b-7c828ac8af52" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:24:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:28 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1375: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:24:28 compute-0 ceph-mon[73755]: pgmap v1375: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:24:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:24:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:24:28.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:24:28 compute-0 nova_compute[265391]: 2025-09-30 18:24:28.636 2 DEBUG nova.compute.manager [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Sep 30 18:24:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:24:28] "GET /metrics HTTP/1.1" 200 46634 "" "Prometheus/2.51.0"
Sep 30 18:24:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:24:28] "GET /metrics HTTP/1.1" 200 46634 "" "Prometheus/2.51.0"
Sep 30 18:24:29 compute-0 nova_compute[265391]: 2025-09-30 18:24:29.196 2 DEBUG oslo_concurrency.lockutils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:24:29 compute-0 nova_compute[265391]: 2025-09-30 18:24:29.197 2 DEBUG oslo_concurrency.lockutils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:24:29 compute-0 nova_compute[265391]: 2025-09-30 18:24:29.206 2 DEBUG nova.virt.hardware [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Sep 30 18:24:29 compute-0 nova_compute[265391]: 2025-09-30 18:24:29.206 2 INFO nova.compute.claims [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Claim successful on node compute-0.ctlplane.example.com
Sep 30 18:24:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:29 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:29 compute-0 podman[276673]: time="2025-09-30T18:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:24:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:24:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10287 "" "Go-http-client/1.1"
Sep 30 18:24:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:24:30.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:30 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400ac30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:30 compute-0 nova_compute[265391]: 2025-09-30 18:24:30.263 2 DEBUG oslo_concurrency.processutils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:24:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1376: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:24:30 compute-0 nova_compute[265391]: 2025-09-30 18:24:30.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:30 compute-0 ceph-mon[73755]: pgmap v1376: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:24:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:24:30.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:24:30 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2520679321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:24:30 compute-0 nova_compute[265391]: 2025-09-30 18:24:30.788 2 DEBUG oslo_concurrency.processutils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:24:30 compute-0 nova_compute[265391]: 2025-09-30 18:24:30.796 2 DEBUG nova.compute.provider_tree [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:24:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:24:31 compute-0 nova_compute[265391]: 2025-09-30 18:24:31.308 2 DEBUG nova.scheduler.client.report [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:24:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:31 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384003a20 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:31 compute-0 openstack_network_exporter[279566]: ERROR   18:24:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:24:31 compute-0 openstack_network_exporter[279566]: ERROR   18:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:24:31 compute-0 openstack_network_exporter[279566]: ERROR   18:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:24:31 compute-0 openstack_network_exporter[279566]: ERROR   18:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:24:31 compute-0 openstack_network_exporter[279566]: ERROR   18:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:24:31 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2520679321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:24:31 compute-0 nova_compute[265391]: 2025-09-30 18:24:31.821 2 DEBUG oslo_concurrency.lockutils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.624s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:24:31 compute-0 nova_compute[265391]: 2025-09-30 18:24:31.822 2 DEBUG nova.compute.manager [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Sep 30 18:24:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:24:32.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:32 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:32 compute-0 nova_compute[265391]: 2025-09-30 18:24:32.341 2 DEBUG nova.compute.manager [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Sep 30 18:24:32 compute-0 nova_compute[265391]: 2025-09-30 18:24:32.341 2 DEBUG nova.network.neutron [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Sep 30 18:24:32 compute-0 nova_compute[265391]: 2025-09-30 18:24:32.342 2 WARNING neutronclient.v2_0.client [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:24:32 compute-0 nova_compute[265391]: 2025-09-30 18:24:32.343 2 WARNING neutronclient.v2_0.client [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:24:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1377: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:24:32 compute-0 nova_compute[265391]: 2025-09-30 18:24:32.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:32 compute-0 ceph-mon[73755]: pgmap v1377: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:24:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:24:32.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:32 compute-0 nova_compute[265391]: 2025-09-30 18:24:32.849 2 INFO nova.virt.libvirt.driver [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Sep 30 18:24:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:33 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:33 compute-0 nova_compute[265391]: 2025-09-30 18:24:33.367 2 DEBUG nova.compute.manager [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Sep 30 18:24:33 compute-0 nova_compute[265391]: 2025-09-30 18:24:33.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:24:33 compute-0 nova_compute[265391]: 2025-09-30 18:24:33.429 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:24:33 compute-0 nova_compute[265391]: 2025-09-30 18:24:33.607 2 DEBUG nova.network.neutron [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Successfully created port: fba1be8e-f7c4-4bd0-b8a7-6e854986df69 _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Sep 30 18:24:33 compute-0 sudo[322492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:24:33 compute-0 sudo[322492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:24:33 compute-0 sudo[322492]: pam_unix(sudo:session): session closed for user root
Sep 30 18:24:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:24:33.771Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:24:33 compute-0 unix_chkpwd[322518]: password check failed for user (root)
Sep 30 18:24:33 compute-0 sshd-session[322489]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107  user=root
Sep 30 18:24:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:24:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:24:34.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:24:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:34 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400ac30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1378: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:24:34 compute-0 nova_compute[265391]: 2025-09-30 18:24:34.395 2 DEBUG nova.compute.manager [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Sep 30 18:24:34 compute-0 nova_compute[265391]: 2025-09-30 18:24:34.398 2 DEBUG nova.virt.libvirt.driver [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Sep 30 18:24:34 compute-0 nova_compute[265391]: 2025-09-30 18:24:34.399 2 INFO nova.virt.libvirt.driver [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Creating image(s)
Sep 30 18:24:34 compute-0 nova_compute[265391]: 2025-09-30 18:24:34.443 2 DEBUG nova.storage.rbd_utils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 7569c2e7-ec68-4b21-b36b-7c828ac8af52_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:24:34 compute-0 ceph-mon[73755]: pgmap v1378: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:24:34 compute-0 nova_compute[265391]: 2025-09-30 18:24:34.477 2 DEBUG nova.storage.rbd_utils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 7569c2e7-ec68-4b21-b36b-7c828ac8af52_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:24:34 compute-0 nova_compute[265391]: 2025-09-30 18:24:34.520 2 DEBUG nova.storage.rbd_utils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 7569c2e7-ec68-4b21-b36b-7c828ac8af52_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:24:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:24:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:24:34.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:24:34 compute-0 nova_compute[265391]: 2025-09-30 18:24:34.525 2 DEBUG oslo_concurrency.processutils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:24:34 compute-0 podman[322538]: 2025-09-30 18:24:34.566640164 +0000 UTC m=+0.095308458 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:24:34 compute-0 podman[322535]: 2025-09-30 18:24:34.580680639 +0000 UTC m=+0.121379836 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Sep 30 18:24:34 compute-0 nova_compute[265391]: 2025-09-30 18:24:34.594 2 DEBUG oslo_concurrency.processutils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
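[editor's note] The two processutils entries above show the base-image inspection being run through `python3 -m oslo_concurrency.prlimit`, which caps the child's address space at 1 GiB and its CPU time at 30 s before invoking qemu-img. A minimal standard-library sketch of the same guarded call, assuming qemu-img is on PATH; the helper name and defaults are illustrative, not nova code:

    import json
    import os
    import resource
    import subprocess

    def qemu_img_info(path, as_bytes=1024**3, cpu_seconds=30):
        # Apply the same RLIMIT_AS / RLIMIT_CPU caps in the child that the
        # prlimit wrapper in the log applies, then parse qemu-img's JSON output.
        def _limit():
            resource.setrlimit(resource.RLIMIT_AS, (as_bytes, as_bytes))
            resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))

        proc = subprocess.run(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            env={**os.environ, "LC_ALL": "C", "LANG": "C"},
            capture_output=True, check=True, text=True, preexec_fn=_limit,
        )
        return json.loads(proc.stdout)

    # e.g. (path taken from the log line above):
    # info = qemu_img_info("/var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457")
    # print(info["format"], info["virtual-size"])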
Sep 30 18:24:34 compute-0 nova_compute[265391]: 2025-09-30 18:24:34.595 2 DEBUG oslo_concurrency.lockutils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "cb2d580238c9b109feae7f1462613dc547671457" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:24:34 compute-0 nova_compute[265391]: 2025-09-30 18:24:34.595 2 DEBUG oslo_concurrency.lockutils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:24:34 compute-0 nova_compute[265391]: 2025-09-30 18:24:34.595 2 DEBUG oslo_concurrency.lockutils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
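[editor's note] The three lockutils lines above are the standard acquire / acquired-after-wait / released-after-hold pattern that oslo.concurrency logs around a named in-process lock (the same pattern seen for "compute_resources" and "vgpu_resources" elsewhere in this log). A minimal standard-library sketch that reproduces the pattern; the helper and its log format are illustrative, not oslo code:

    import logging
    import threading
    import time
    from contextlib import contextmanager

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger("lock_sketch")
    _locks, _guard = {}, threading.Lock()

    @contextmanager
    def named_lock(name, caller):
        # One in-process lock per name, created lazily (standard library only).
        with _guard:
            lock = _locks.setdefault(name, threading.Lock())
        LOG.debug('Acquiring lock "%s" by "%s"', name, caller)
        t0 = time.monotonic()
        lock.acquire()
        LOG.debug('Lock "%s" acquired by "%s" :: waited %.3fs',
                  name, caller, time.monotonic() - t0)
        t1 = time.monotonic()
        try:
            yield
        finally:
            lock.release()
            LOG.debug('Lock "%s" "released" by "%s" :: held %.3fs',
                      name, caller, time.monotonic() - t1)

    if __name__ == "__main__":
        with named_lock("cb2d580238c9b109feae7f1462613dc547671457", "fetch_func_sync"):
            pass  # the guarded image-cache fetch would run here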
Sep 30 18:24:34 compute-0 nova_compute[265391]: 2025-09-30 18:24:34.623 2 DEBUG nova.storage.rbd_utils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 7569c2e7-ec68-4b21-b36b-7c828ac8af52_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:24:34 compute-0 nova_compute[265391]: 2025-09-30 18:24:34.627 2 DEBUG oslo_concurrency.processutils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 7569c2e7-ec68-4b21-b36b-7c828ac8af52_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:24:34 compute-0 nova_compute[265391]: 2025-09-30 18:24:34.688 2 DEBUG nova.network.neutron [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Successfully updated port: fba1be8e-f7c4-4bd0-b8a7-6e854986df69 _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Sep 30 18:24:34 compute-0 nova_compute[265391]: 2025-09-30 18:24:34.783 2 DEBUG nova.compute.manager [req-34024375-fe92-4164-a303-a31ea50f00af req-1e3e9041-8fad-402c-8144-fda1025f3915 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Received event network-changed-fba1be8e-f7c4-4bd0-b8a7-6e854986df69 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:24:34 compute-0 nova_compute[265391]: 2025-09-30 18:24:34.784 2 DEBUG nova.compute.manager [req-34024375-fe92-4164-a303-a31ea50f00af req-1e3e9041-8fad-402c-8144-fda1025f3915 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Refreshing instance network info cache due to event network-changed-fba1be8e-f7c4-4bd0-b8a7-6e854986df69. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:24:34 compute-0 nova_compute[265391]: 2025-09-30 18:24:34.785 2 DEBUG oslo_concurrency.lockutils [req-34024375-fe92-4164-a303-a31ea50f00af req-1e3e9041-8fad-402c-8144-fda1025f3915 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-7569c2e7-ec68-4b21-b36b-7c828ac8af52" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:24:34 compute-0 nova_compute[265391]: 2025-09-30 18:24:34.785 2 DEBUG oslo_concurrency.lockutils [req-34024375-fe92-4164-a303-a31ea50f00af req-1e3e9041-8fad-402c-8144-fda1025f3915 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-7569c2e7-ec68-4b21-b36b-7c828ac8af52" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:24:34 compute-0 nova_compute[265391]: 2025-09-30 18:24:34.786 2 DEBUG nova.network.neutron [req-34024375-fe92-4164-a303-a31ea50f00af req-1e3e9041-8fad-402c-8144-fda1025f3915 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Refreshing network info cache for port fba1be8e-f7c4-4bd0-b8a7-6e854986df69 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:24:34 compute-0 nova_compute[265391]: 2025-09-30 18:24:34.911 2 DEBUG oslo_concurrency.processutils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 7569c2e7-ec68-4b21-b36b-7c828ac8af52_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:24:34 compute-0 nova_compute[265391]: 2025-09-30 18:24:34.984 2 DEBUG nova.storage.rbd_utils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] resizing rbd image 7569c2e7-ec68-4b21-b36b-7c828ac8af52_disk to 1073741824 resize /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:288
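[editor's note] The `rbd import` above and the resize logged at 18:24:34.984 materialise the cached base image as the instance's root disk in the "vms" pool and grow it to the flavor's 1 GiB root size. A command-line-equivalent sketch under those assumptions; nova itself performs the resize through the librbd Python binding (rbd_utils.resize), so the helper below is illustrative only:

    import subprocess

    def import_and_resize(base_path, pool, image_name, size_gb,
                          ceph_user="openstack", conf="/etc/ceph/ceph.conf"):
        # The import matches the command recorded in the log verbatim.
        subprocess.run(
            ["rbd", "import", "--pool", pool, base_path, image_name,
             "--image-format=2", "--id", ceph_user, "--conf", conf],
            check=True,
        )
        # CLI analogue of the librbd resize step (size given with a G suffix).
        subprocess.run(
            ["rbd", "resize", f"{pool}/{image_name}", "--size", f"{size_gb}G",
             "--id", ceph_user, "--conf", conf],
            check=True,
        )

    # e.g. import_and_resize(
    #     "/var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457",
    #     "vms", "7569c2e7-ec68-4b21-b36b-7c828ac8af52_disk", 1)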
Sep 30 18:24:35 compute-0 nova_compute[265391]: 2025-09-30 18:24:35.100 2 DEBUG nova.virt.libvirt.driver [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Sep 30 18:24:35 compute-0 nova_compute[265391]: 2025-09-30 18:24:35.100 2 DEBUG nova.virt.libvirt.driver [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Ensure instance console log exists: /var/lib/nova/instances/7569c2e7-ec68-4b21-b36b-7c828ac8af52/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Sep 30 18:24:35 compute-0 nova_compute[265391]: 2025-09-30 18:24:35.101 2 DEBUG oslo_concurrency.lockutils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:24:35 compute-0 nova_compute[265391]: 2025-09-30 18:24:35.101 2 DEBUG oslo_concurrency.lockutils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:24:35 compute-0 nova_compute[265391]: 2025-09-30 18:24:35.101 2 DEBUG oslo_concurrency.lockutils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:24:35 compute-0 nova_compute[265391]: 2025-09-30 18:24:35.200 2 DEBUG oslo_concurrency.lockutils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "refresh_cache-7569c2e7-ec68-4b21-b36b-7c828ac8af52" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:24:35 compute-0 nova_compute[265391]: 2025-09-30 18:24:35.293 2 WARNING neutronclient.v2_0.client [req-34024375-fe92-4164-a303-a31ea50f00af req-1e3e9041-8fad-402c-8144-fda1025f3915 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:24:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:35 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384003a20 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:35 compute-0 nova_compute[265391]: 2025-09-30 18:24:35.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:24:35 compute-0 nova_compute[265391]: 2025-09-30 18:24:35.934 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:24:36 compute-0 sshd-session[322489]: Failed password for root from 14.225.220.107 port 59652 ssh2
Sep 30 18:24:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2280115228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:24:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:24:36.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:36 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:36 compute-0 nova_compute[265391]: 2025-09-30 18:24:36.278 2 DEBUG nova.network.neutron [req-34024375-fe92-4164-a303-a31ea50f00af req-1e3e9041-8fad-402c-8144-fda1025f3915 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:24:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1379: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:24:36 compute-0 nova_compute[265391]: 2025-09-30 18:24:36.450 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:24:36 compute-0 nova_compute[265391]: 2025-09-30 18:24:36.451 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:24:36 compute-0 nova_compute[265391]: 2025-09-30 18:24:36.451 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:24:36 compute-0 nova_compute[265391]: 2025-09-30 18:24:36.452 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:24:36 compute-0 nova_compute[265391]: 2025-09-30 18:24:36.452 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:24:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:24:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2209775993' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:24:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:24:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2209775993' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:24:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:24:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:24:36.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:24:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:24:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/878725520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:24:36 compute-0 nova_compute[265391]: 2025-09-30 18:24:36.912 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
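[editor's note] The `ceph df --format=json` call that just returned is how the periodic resource audit sizes the RBD-backed disk pool before reporting free_disk. A hedged parsing sketch: the field names below are what current `ceph df` JSON exposes per pool, and the helper is illustrative, not the nova code path:

    import json
    import subprocess

    def rbd_pool_usage(pool="vms", ceph_user="openstack", conf="/etc/ceph/ceph.conf"):
        raw = subprocess.run(
            ["ceph", "df", "--format=json", "--id", ceph_user, "--conf", conf],
            capture_output=True, check=True, text=True,
        ).stdout
        for p in json.loads(raw)["pools"]:
            if p["name"] == pool:
                stats = p["stats"]
                # bytes_used / max_avail are the per-pool counters in the JSON;
                # the exact fields nova reads can vary between releases.
                return {"used_gb": stats["bytes_used"] / 1024**3,
                        "free_gb": stats["max_avail"] / 1024**3}
        raise LookupError(f"pool {pool!r} not present in ceph df output")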
Sep 30 18:24:36 compute-0 sshd-session[322489]: Received disconnect from 14.225.220.107 port 59652:11: Bye Bye [preauth]
Sep 30 18:24:36 compute-0 sshd-session[322489]: Disconnected from authenticating user root 14.225.220.107 port 59652 [preauth]
Sep 30 18:24:37 compute-0 nova_compute[265391]: 2025-09-30 18:24:37.077 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:24:37 compute-0 nova_compute[265391]: 2025-09-30 18:24:37.079 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:24:37 compute-0 ceph-mon[73755]: pgmap v1379: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:24:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2209775993' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:24:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2209775993' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:24:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/878725520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:24:37 compute-0 nova_compute[265391]: 2025-09-30 18:24:37.102 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.023s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:24:37 compute-0 nova_compute[265391]: 2025-09-30 18:24:37.102 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4413MB free_disk=39.992183685302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:24:37 compute-0 nova_compute[265391]: 2025-09-30 18:24:37.103 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:24:37 compute-0 nova_compute[265391]: 2025-09-30 18:24:37.103 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:24:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:24:37.240Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:24:37 compute-0 nova_compute[265391]: 2025-09-30 18:24:37.269 2 DEBUG nova.network.neutron [req-34024375-fe92-4164-a303-a31ea50f00af req-1e3e9041-8fad-402c-8144-fda1025f3915 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:24:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:24:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:24:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:37 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390003390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:24:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:24:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:24:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:24:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:24:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:24:37 compute-0 nova_compute[265391]: 2025-09-30 18:24:37.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:37 compute-0 podman[322760]: 2025-09-30 18:24:37.517544579 +0000 UTC m=+0.052796753 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_managed=true)
Sep 30 18:24:37 compute-0 nova_compute[265391]: 2025-09-30 18:24:37.774 2 DEBUG oslo_concurrency.lockutils [req-34024375-fe92-4164-a303-a31ea50f00af req-1e3e9041-8fad-402c-8144-fda1025f3915 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-7569c2e7-ec68-4b21-b36b-7c828ac8af52" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:24:37 compute-0 nova_compute[265391]: 2025-09-30 18:24:37.776 2 DEBUG oslo_concurrency.lockutils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquired lock "refresh_cache-7569c2e7-ec68-4b21-b36b-7c828ac8af52" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:24:37 compute-0 nova_compute[265391]: 2025-09-30 18:24:37.776 2 DEBUG nova.network.neutron [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:24:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:24:38 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/510085707' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:24:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:24:38.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:38 compute-0 nova_compute[265391]: 2025-09-30 18:24:38.151 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Instance 7569c2e7-ec68-4b21-b36b-7c828ac8af52 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Sep 30 18:24:38 compute-0 nova_compute[265391]: 2025-09-30 18:24:38.151 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:24:38 compute-0 nova_compute[265391]: 2025-09-30 18:24:38.152 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=39GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:24:37 up  1:27,  0 user,  load average: 0.48, 0.75, 0.83\n', 'num_instances': '1', 'num_vm_building': '1', 'num_task_spawning': '1', 'num_os_type_None': '1', 'num_proj_c634e1c17ed54907969576a0eb8eff50': '1', 'io_workload': '1'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:24:38 compute-0 nova_compute[265391]: 2025-09-30 18:24:38.188 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Refreshing inventories for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:822
Sep 30 18:24:38 compute-0 nova_compute[265391]: 2025-09-30 18:24:38.218 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Updating ProviderTree inventory for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:786
Sep 30 18:24:38 compute-0 nova_compute[265391]: 2025-09-30 18:24:38.219 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Updating inventory in ProviderTree for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
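[editor's note] The inventory pushed to the provider tree above fixes total, reserved and allocation_ratio per resource class; the schedulable capacity placement derives from such a record is (total - reserved) * allocation_ratio. A small worked example with the values from this log line (plain arithmetic, not nova code):

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 39,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    capacity = {rc: (v["total"] - v["reserved"]) * v["allocation_ratio"]
                for rc, v in inventory.items()}
    print(capacity)   # VCPU -> 32.0, MEMORY_MB -> 7167.0, DISK_GB -> ~34.2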
Sep 30 18:24:38 compute-0 nova_compute[265391]: 2025-09-30 18:24:38.239 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Refreshing aggregate associations for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2, aggregates: None _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:831
Sep 30 18:24:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400ac30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:38 compute-0 nova_compute[265391]: 2025-09-30 18:24:38.258 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Refreshing trait associations for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2, traits: COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SOUND_MODEL_SB16,COMPUTE_ARCH_X86_64,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIRTIO_PACKED,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_TIS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOUND_MODEL_ICH9,COMPUTE_SOUND_MODEL_USB,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOUND_MODEL_PCSPK,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ADDRESS_SPACE_EMULATED,COMPUTE_RESCUE_BFV,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_STATELESS_FIRMWARE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_IGB,HW_ARCH_X86_64,COMPUTE_ACCELERATORS,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOUND_MODEL_ES1370,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_CRB,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_VIRTIO_FS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ADDRESS_SPACE_PASSTHROUGH,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_F16C,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOUND_MODEL_ICH6,COMPUTE_SOUND_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SOUND_MODEL_AC97,HW_CPU_X86_SSE42 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:843
Sep 30 18:24:38 compute-0 nova_compute[265391]: 2025-09-30 18:24:38.294 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:24:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1380: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:24:38 compute-0 nova_compute[265391]: 2025-09-30 18:24:38.425 2 DEBUG nova.network.neutron [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:24:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:24:38.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:38 compute-0 nova_compute[265391]: 2025-09-30 18:24:38.740 2 WARNING neutronclient.v2_0.client [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:24:38 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:24:38 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2232404500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:24:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:24:38] "GET /metrics HTTP/1.1" 200 46630 "" "Prometheus/2.51.0"
Sep 30 18:24:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:24:38] "GET /metrics HTTP/1.1" 200 46630 "" "Prometheus/2.51.0"
Sep 30 18:24:38 compute-0 nova_compute[265391]: 2025-09-30 18:24:38.796 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:24:38 compute-0 nova_compute[265391]: 2025-09-30 18:24:38.800 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:24:38 compute-0 nova_compute[265391]: 2025-09-30 18:24:38.907 2 DEBUG nova.network.neutron [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Updating instance_info_cache with network_info: [{"id": "fba1be8e-f7c4-4bd0-b8a7-6e854986df69", "address": "fa:16:3e:8b:86:5f", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfba1be8e-f7", "ovs_interfaceid": "fba1be8e-f7c4-4bd0-b8a7-6e854986df69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
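[editor's note] The instance_info_cache update above carries the full VIF list as JSON; the port id, MAC address and fixed IP later used to wire the guest can be read straight out of it. A small illustrative parse, with the data trimmed to the fields shown in the log line:

    import json

    # Trimmed copy of the network_info recorded above (illustrative only).
    network_info = json.loads("""
    [{"id": "fba1be8e-f7c4-4bd0-b8a7-6e854986df69",
      "address": "fa:16:3e:8b:86:5f",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
                               "ips": [{"address": "10.100.0.13"}]}]}}]
    """)
    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["id"], vif["address"], ips)   # port id, MAC, fixed IPs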
Sep 30 18:24:39 compute-0 ceph-mon[73755]: pgmap v1380: 353 pgs: 353 active+clean; 41 MiB data, 246 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:24:39 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2232404500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:24:39 compute-0 nova_compute[265391]: 2025-09-30 18:24:39.307 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:24:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:39 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384003a20 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:39 compute-0 nova_compute[265391]: 2025-09-30 18:24:39.413 2 DEBUG oslo_concurrency.lockutils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Releasing lock "refresh_cache-7569c2e7-ec68-4b21-b36b-7c828ac8af52" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:24:39 compute-0 nova_compute[265391]: 2025-09-30 18:24:39.413 2 DEBUG nova.compute.manager [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Instance network_info: |[{"id": "fba1be8e-f7c4-4bd0-b8a7-6e854986df69", "address": "fa:16:3e:8b:86:5f", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfba1be8e-f7", "ovs_interfaceid": "fba1be8e-f7c4-4bd0-b8a7-6e854986df69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Sep 30 18:24:39 compute-0 nova_compute[265391]: 2025-09-30 18:24:39.417 2 DEBUG nova.virt.libvirt.driver [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Start _get_guest_xml network_info=[{"id": "fba1be8e-f7c4-4bd0-b8a7-6e854986df69", "address": "fa:16:3e:8b:86:5f", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfba1be8e-f7", "ovs_interfaceid": "fba1be8e-f7c4-4bd0-b8a7-6e854986df69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'image_id': '5b99cbca-b655-4be5-8343-cf504005c42e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Sep 30 18:24:39 compute-0 nova_compute[265391]: 2025-09-30 18:24:39.422 2 WARNING nova.virt.libvirt.driver [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:24:39 compute-0 nova_compute[265391]: 2025-09-30 18:24:39.424 2 DEBUG nova.virt.driver [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='5b99cbca-b655-4be5-8343-cf504005c42e', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteStrategies-server-2011896342', uuid='7569c2e7-ec68-4b21-b36b-7c828ac8af52'), owner=OwnerMeta(userid='623ef4a55c9e4fc28bb65e49246b5008', username='tempest-TestExecuteStrategies-1883747907-project-admin', projectid='c634e1c17ed54907969576a0eb8eff50', projectname='tempest-TestExecuteStrategies-1883747907'), image=ImageMeta(id='5b99cbca-b655-4be5-8343-cf504005c42e', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "fba1be8e-f7c4-4bd0-b8a7-6e854986df69", "address": "fa:16:3e:8b:86:5f", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfba1be8e-f7", "ovs_interfaceid": "fba1be8e-f7c4-4bd0-b8a7-6e854986df69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20250919142712.b99a882.el10', creation_time=1759256679.4244888) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Sep 30 18:24:39 compute-0 nova_compute[265391]: 2025-09-30 18:24:39.430 2 DEBUG nova.virt.libvirt.host [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Sep 30 18:24:39 compute-0 nova_compute[265391]: 2025-09-30 18:24:39.431 2 DEBUG nova.virt.libvirt.host [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Sep 30 18:24:39 compute-0 nova_compute[265391]: 2025-09-30 18:24:39.434 2 DEBUG nova.virt.libvirt.host [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Sep 30 18:24:39 compute-0 nova_compute[265391]: 2025-09-30 18:24:39.434 2 DEBUG nova.virt.libvirt.host [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Sep 30 18:24:39 compute-0 nova_compute[265391]: 2025-09-30 18:24:39.435 2 DEBUG nova.virt.libvirt.driver [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Sep 30 18:24:39 compute-0 nova_compute[265391]: 2025-09-30 18:24:39.435 2 DEBUG nova.virt.hardware [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-09-30T18:04:50Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Sep 30 18:24:39 compute-0 nova_compute[265391]: 2025-09-30 18:24:39.436 2 DEBUG nova.virt.hardware [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Sep 30 18:24:39 compute-0 nova_compute[265391]: 2025-09-30 18:24:39.436 2 DEBUG nova.virt.hardware [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Sep 30 18:24:39 compute-0 nova_compute[265391]: 2025-09-30 18:24:39.436 2 DEBUG nova.virt.hardware [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Sep 30 18:24:39 compute-0 nova_compute[265391]: 2025-09-30 18:24:39.436 2 DEBUG nova.virt.hardware [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Sep 30 18:24:39 compute-0 nova_compute[265391]: 2025-09-30 18:24:39.437 2 DEBUG nova.virt.hardware [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Sep 30 18:24:39 compute-0 nova_compute[265391]: 2025-09-30 18:24:39.437 2 DEBUG nova.virt.hardware [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Sep 30 18:24:39 compute-0 nova_compute[265391]: 2025-09-30 18:24:39.437 2 DEBUG nova.virt.hardware [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Sep 30 18:24:39 compute-0 nova_compute[265391]: 2025-09-30 18:24:39.438 2 DEBUG nova.virt.hardware [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Sep 30 18:24:39 compute-0 nova_compute[265391]: 2025-09-30 18:24:39.438 2 DEBUG nova.virt.hardware [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Sep 30 18:24:39 compute-0 nova_compute[265391]: 2025-09-30 18:24:39.438 2 DEBUG nova.virt.hardware [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Sep 30 18:24:39 compute-0 nova_compute[265391]: 2025-09-30 18:24:39.442 2 DEBUG oslo_concurrency.processutils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:24:39 compute-0 nova_compute[265391]: 2025-09-30 18:24:39.818 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:24:39 compute-0 nova_compute[265391]: 2025-09-30 18:24:39.820 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.716s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:24:39 compute-0 nova_compute[265391]: 2025-09-30 18:24:39.820 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:24:39 compute-0 nova_compute[265391]: 2025-09-30 18:24:39.821 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11909
Sep 30 18:24:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:24:39 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1824539300' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:24:39 compute-0 nova_compute[265391]: 2025-09-30 18:24:39.933 2 DEBUG oslo_concurrency.processutils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:24:39 compute-0 nova_compute[265391]: 2025-09-30 18:24:39.962 2 DEBUG nova.storage.rbd_utils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 7569c2e7-ec68-4b21-b36b-7c828ac8af52_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:24:39 compute-0 nova_compute[265391]: 2025-09-30 18:24:39.967 2 DEBUG oslo_concurrency.processutils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:24:40 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1824539300' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:24:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:24:40.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:40 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.330 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11918
Sep 30 18:24:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1381: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:24:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:24:40 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/56867910' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.454 2 DEBUG oslo_concurrency.processutils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.456 2 DEBUG nova.virt.libvirt.vif [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:24:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-2011896342',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-2011896342',id=16,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c634e1c17ed54907969576a0eb8eff50',ramdisk_id='',reservation_id='r-ccq6xsdf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteStrategies-1883747907',owner_user_name='tempest-TestExecuteStrategies-1883747907-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:24:33Z,user_data=None,user_id='623ef4a55c9e4fc28bb65e49246b5008',uuid=7569c2e7-ec68-4b21-b36b-7c828ac8af52,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fba1be8e-f7c4-4bd0-b8a7-6e854986df69", "address": "fa:16:3e:8b:86:5f", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfba1be8e-f7", "ovs_interfaceid": "fba1be8e-f7c4-4bd0-b8a7-6e854986df69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.456 2 DEBUG nova.network.os_vif_util [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converting VIF {"id": "fba1be8e-f7c4-4bd0-b8a7-6e854986df69", "address": "fa:16:3e:8b:86:5f", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfba1be8e-f7", "ovs_interfaceid": "fba1be8e-f7c4-4bd0-b8a7-6e854986df69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.457 2 DEBUG nova.network.os_vif_util [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:86:5f,bridge_name='br-int',has_traffic_filtering=True,id=fba1be8e-f7c4-4bd0-b8a7-6e854986df69,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfba1be8e-f7') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.458 2 DEBUG nova.objects.instance [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7569c2e7-ec68-4b21-b36b-7c828ac8af52 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:24:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:24:40.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.824 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.825 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.826 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.827 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.827 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.827 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:24:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.968 2 DEBUG nova.virt.libvirt.driver [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] End _get_guest_xml xml=<domain type="kvm">
Sep 30 18:24:40 compute-0 nova_compute[265391]:   <uuid>7569c2e7-ec68-4b21-b36b-7c828ac8af52</uuid>
Sep 30 18:24:40 compute-0 nova_compute[265391]:   <name>instance-00000010</name>
Sep 30 18:24:40 compute-0 nova_compute[265391]:   <memory>131072</memory>
Sep 30 18:24:40 compute-0 nova_compute[265391]:   <vcpu>1</vcpu>
Sep 30 18:24:40 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:24:40 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteStrategies-server-2011896342</nova:name>
Sep 30 18:24:40 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:24:39</nova:creationTime>
Sep 30 18:24:40 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:24:40 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:24:40 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:24:40 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:24:40 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:24:40 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:24:40 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:24:40 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:24:40 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:24:40 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:24:40 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:24:40 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:24:40 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:24:40 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:24:40 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:24:40 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:24:40 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:24:40 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:24:40 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:24:40 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:24:40 compute-0 nova_compute[265391]:         <nova:user uuid="623ef4a55c9e4fc28bb65e49246b5008">tempest-TestExecuteStrategies-1883747907-project-admin</nova:user>
Sep 30 18:24:40 compute-0 nova_compute[265391]:         <nova:project uuid="c634e1c17ed54907969576a0eb8eff50">tempest-TestExecuteStrategies-1883747907</nova:project>
Sep 30 18:24:40 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:24:40 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:24:40 compute-0 nova_compute[265391]:         <nova:port uuid="fba1be8e-f7c4-4bd0-b8a7-6e854986df69">
Sep 30 18:24:40 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:24:40 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:24:40 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:24:40 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <system>
Sep 30 18:24:40 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:24:40 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:24:40 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:24:40 compute-0 nova_compute[265391]:       <entry name="serial">7569c2e7-ec68-4b21-b36b-7c828ac8af52</entry>
Sep 30 18:24:40 compute-0 nova_compute[265391]:       <entry name="uuid">7569c2e7-ec68-4b21-b36b-7c828ac8af52</entry>
Sep 30 18:24:40 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     </system>
Sep 30 18:24:40 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:24:40 compute-0 nova_compute[265391]:   <os>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="q35">hvm</type>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:   </os>
Sep 30 18:24:40 compute-0 nova_compute[265391]:   <features>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <vmcoreinfo/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:   </features>
Sep 30 18:24:40 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:24:40 compute-0 nova_compute[265391]:   <cpu mode="host-model" match="exact">
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <topology sockets="1" cores="1" threads="1"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:24:40 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:24:40 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/7569c2e7-ec68-4b21-b36b-7c828ac8af52_disk">
Sep 30 18:24:40 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:       </source>
Sep 30 18:24:40 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:24:40 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:24:40 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:24:40 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/7569c2e7-ec68-4b21-b36b-7c828ac8af52_disk.config">
Sep 30 18:24:40 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:       </source>
Sep 30 18:24:40 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:24:40 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:24:40 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:24:40 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:8b:86:5f"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:       <target dev="tapfba1be8e-f7"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:24:40 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/7569c2e7-ec68-4b21-b36b-7c828ac8af52/console.log" append="off"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <video>
Sep 30 18:24:40 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     </video>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:24:40 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <controller type="usb" index="0"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:24:40 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:24:40 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:24:40 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:24:40 compute-0 nova_compute[265391]: </domain>
Sep 30 18:24:40 compute-0 nova_compute[265391]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.969 2 DEBUG nova.compute.manager [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Preparing to wait for external event network-vif-plugged-fba1be8e-f7c4-4bd0-b8a7-6e854986df69 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.969 2 DEBUG oslo_concurrency.lockutils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "7569c2e7-ec68-4b21-b36b-7c828ac8af52-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.969 2 DEBUG oslo_concurrency.lockutils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "7569c2e7-ec68-4b21-b36b-7c828ac8af52-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.970 2 DEBUG oslo_concurrency.lockutils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "7569c2e7-ec68-4b21-b36b-7c828ac8af52-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.971 2 DEBUG nova.virt.libvirt.vif [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:24:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-2011896342',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-2011896342',id=16,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c634e1c17ed54907969576a0eb8eff50',ramdisk_id='',reservation_id='r-ccq6xsdf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteStrategies-1883747907',owner_user_name='tempest-TestExecuteStrategies-1883747907-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:24:33Z,user_data=None,user_id='623ef4a55c9e4fc28bb65e49246b5008',uuid=7569c2e7-ec68-4b21-b36b-7c828ac8af52,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fba1be8e-f7c4-4bd0-b8a7-6e854986df69", "address": "fa:16:3e:8b:86:5f", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfba1be8e-f7", "ovs_interfaceid": "fba1be8e-f7c4-4bd0-b8a7-6e854986df69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.972 2 DEBUG nova.network.os_vif_util [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converting VIF {"id": "fba1be8e-f7c4-4bd0-b8a7-6e854986df69", "address": "fa:16:3e:8b:86:5f", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfba1be8e-f7", "ovs_interfaceid": "fba1be8e-f7c4-4bd0-b8a7-6e854986df69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.973 2 DEBUG nova.network.os_vif_util [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:86:5f,bridge_name='br-int',has_traffic_filtering=True,id=fba1be8e-f7c4-4bd0-b8a7-6e854986df69,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfba1be8e-f7') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.973 2 DEBUG os_vif [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:86:5f,bridge_name='br-int',has_traffic_filtering=True,id=fba1be8e-f7c4-4bd0-b8a7-6e854986df69,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfba1be8e-f7') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.975 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.976 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.977 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '5398334e-effb-5d09-a3eb-be5c20eafe27', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.988 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfba1be8e-f7, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.988 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tapfba1be8e-f7, col_values=(('qos', UUID('41dc3b88-b9e9-44ce-8341-713443690be4')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.989 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tapfba1be8e-f7, col_values=(('external_ids', {'iface-id': 'fba1be8e-f7c4-4bd0-b8a7-6e854986df69', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8b:86:5f', 'vm-uuid': '7569c2e7-ec68-4b21-b36b-7c828ac8af52'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:40 compute-0 NetworkManager[45059]: <info>  [1759256680.9918] manager: (tapfba1be8e-f7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Sep 30 18:24:40 compute-0 nova_compute[265391]: 2025-09-30 18:24:40.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:24:41 compute-0 nova_compute[265391]: 2025-09-30 18:24:41.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:41 compute-0 nova_compute[265391]: 2025-09-30 18:24:41.004 2 INFO os_vif [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:86:5f,bridge_name='br-int',has_traffic_filtering=True,id=fba1be8e-f7c4-4bd0-b8a7-6e854986df69,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfba1be8e-f7')
Sep 30 18:24:41 compute-0 ceph-mon[73755]: pgmap v1381: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:24:41 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/56867910' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:24:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:41 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:41 compute-0 nova_compute[265391]: 2025-09-30 18:24:41.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:24:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:24:42.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:42 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400ac30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1382: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:24:42 compute-0 ceph-mon[73755]: pgmap v1382: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:24:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:24:42.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:42 compute-0 nova_compute[265391]: 2025-09-30 18:24:42.554 2 DEBUG nova.virt.libvirt.driver [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:24:42 compute-0 nova_compute[265391]: 2025-09-30 18:24:42.554 2 DEBUG nova.virt.libvirt.driver [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:24:42 compute-0 nova_compute[265391]: 2025-09-30 18:24:42.554 2 DEBUG nova.virt.libvirt.driver [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] No VIF found with MAC fa:16:3e:8b:86:5f, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Sep 30 18:24:42 compute-0 nova_compute[265391]: 2025-09-30 18:24:42.555 2 INFO nova.virt.libvirt.driver [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Using config drive
Sep 30 18:24:42 compute-0 nova_compute[265391]: 2025-09-30 18:24:42.578 2 DEBUG nova.storage.rbd_utils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 7569c2e7-ec68-4b21-b36b-7c828ac8af52_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:24:43 compute-0 nova_compute[265391]: 2025-09-30 18:24:43.094 2 WARNING neutronclient.v2_0.client [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:24:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:43 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384003a20 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:43 compute-0 nova_compute[265391]: 2025-09-30 18:24:43.333 2 INFO nova.virt.libvirt.driver [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Creating config drive at /var/lib/nova/instances/7569c2e7-ec68-4b21-b36b-7c828ac8af52/disk.config
Sep 30 18:24:43 compute-0 nova_compute[265391]: 2025-09-30 18:24:43.344 2 DEBUG oslo_concurrency.processutils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7569c2e7-ec68-4b21-b36b-7c828ac8af52/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmpyd_3pwau execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:24:43 compute-0 nova_compute[265391]: 2025-09-30 18:24:43.482 2 DEBUG oslo_concurrency.processutils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7569c2e7-ec68-4b21-b36b-7c828ac8af52/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmpyd_3pwau" returned: 0 in 0.138s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:24:43 compute-0 nova_compute[265391]: 2025-09-30 18:24:43.514 2 DEBUG nova.storage.rbd_utils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 7569c2e7-ec68-4b21-b36b-7c828ac8af52_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:24:43 compute-0 nova_compute[265391]: 2025-09-30 18:24:43.520 2 DEBUG oslo_concurrency.processutils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7569c2e7-ec68-4b21-b36b-7c828ac8af52/disk.config 7569c2e7-ec68-4b21-b36b-7c828ac8af52_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:24:43 compute-0 nova_compute[265391]: 2025-09-30 18:24:43.702 2 DEBUG oslo_concurrency.processutils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7569c2e7-ec68-4b21-b36b-7c828ac8af52/disk.config 7569c2e7-ec68-4b21-b36b-7c828ac8af52_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:24:43 compute-0 nova_compute[265391]: 2025-09-30 18:24:43.704 2 INFO nova.virt.libvirt.driver [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Deleting local config drive /var/lib/nova/instances/7569c2e7-ec68-4b21-b36b-7c828ac8af52/disk.config because it was imported into RBD.
Sep 30 18:24:43 compute-0 systemd[1]: Starting libvirt secret daemon...
Sep 30 18:24:43 compute-0 systemd[1]: Started libvirt secret daemon.
Sep 30 18:24:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:24:43.772Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:24:43 compute-0 kernel: tapfba1be8e-f7: entered promiscuous mode
Sep 30 18:24:43 compute-0 ovn_controller[156242]: 2025-09-30T18:24:43Z|00132|binding|INFO|Claiming lport fba1be8e-f7c4-4bd0-b8a7-6e854986df69 for this chassis.
Sep 30 18:24:43 compute-0 ovn_controller[156242]: 2025-09-30T18:24:43Z|00133|binding|INFO|fba1be8e-f7c4-4bd0-b8a7-6e854986df69: Claiming fa:16:3e:8b:86:5f 10.100.0.13
Sep 30 18:24:43 compute-0 NetworkManager[45059]: <info>  [1759256683.8320] manager: (tapfba1be8e-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/60)
Sep 30 18:24:43 compute-0 nova_compute[265391]: 2025-09-30 18:24:43.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:43 compute-0 nova_compute[265391]: 2025-09-30 18:24:43.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:43 compute-0 nova_compute[265391]: 2025-09-30 18:24:43.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:43 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:43.860 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:86:5f 10.100.0.13'], port_security=['fa:16:3e:8b:86:5f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7569c2e7-ec68-4b21-b36b-7c828ac8af52', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6901f664-336b-42d2-bbf7-58951befc8d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c634e1c17ed54907969576a0eb8eff50', 'neutron:revision_number': '4', 'neutron:security_group_ids': '11cf84c1-9641-409c-b5f5-6c5fe3a8afe5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c24d9de-651b-4bf8-842a-1286ab88b11d, chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=fba1be8e-f7c4-4bd0-b8a7-6e854986df69) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:24:43 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:43.862 166158 INFO neutron.agent.ovn.metadata.agent [-] Port fba1be8e-f7c4-4bd0-b8a7-6e854986df69 in datapath 6901f664-336b-42d2-bbf7-58951befc8d1 bound to our chassis
Sep 30 18:24:43 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:43.864 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6901f664-336b-42d2-bbf7-58951befc8d1
Sep 30 18:24:43 compute-0 systemd-machined[219917]: New machine qemu-11-instance-00000010.
Sep 30 18:24:43 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:43.884 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[ed3db829-7e8a-4eff-a51a-8e9703037f55]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:24:43 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:43.886 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6901f664-31 in ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1 namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Sep 30 18:24:43 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:43.889 292173 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6901f664-30 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Sep 30 18:24:43 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:43.889 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[e1515b3f-ba0b-4c10-83d5-dbe4703fd314]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:24:43 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:43.890 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[161a1034-3ec1-49b2-8d1d-7634d9659c05]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:24:43 compute-0 systemd-udevd[322964]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:24:43 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:43.907 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[569d8691-048c-4c00-9f95-e156b95525d7]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:24:43 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-00000010.
Sep 30 18:24:43 compute-0 NetworkManager[45059]: <info>  [1759256683.9130] device (tapfba1be8e-f7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 18:24:43 compute-0 NetworkManager[45059]: <info>  [1759256683.9145] device (tapfba1be8e-f7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Sep 30 18:24:43 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:43.925 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[5b122d7e-50f6-4327-9402-cb7626bbf305]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:24:43 compute-0 nova_compute[265391]: 2025-09-30 18:24:43.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:43 compute-0 ovn_controller[156242]: 2025-09-30T18:24:43Z|00134|binding|INFO|Setting lport fba1be8e-f7c4-4bd0-b8a7-6e854986df69 ovn-installed in OVS
Sep 30 18:24:43 compute-0 ovn_controller[156242]: 2025-09-30T18:24:43Z|00135|binding|INFO|Setting lport fba1be8e-f7c4-4bd0-b8a7-6e854986df69 up in Southbound
Sep 30 18:24:43 compute-0 nova_compute[265391]: 2025-09-30 18:24:43.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:43 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:43.963 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[0dd9f028-cb01-4749-bcef-36bff8c12860]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:24:43 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:43.971 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[caa78d58-b628-4a99-84ae-fba79fbee9a4]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:24:43 compute-0 NetworkManager[45059]: <info>  [1759256683.9731] manager: (tap6901f664-30): new Veth device (/org/freedesktop/NetworkManager/Devices/61)
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:44.017 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[96d699ac-b1d1-45df-8433-e2b73cb32431]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:44.024 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[487493f4-9cba-4c4d-beab-358570798202]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:24:44 compute-0 NetworkManager[45059]: <info>  [1759256684.0551] device (tap6901f664-30): carrier: link connected
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:44.067 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[606a2102-8c54-44d0-8363-e5a780e3740a]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:44.092 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[b3a426bc-efe8-446c-b8ca-73f2df4e37f1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6901f664-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:41:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528275, 'reachable_time': 37779, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322996, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:44.116 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[ec43af91-7aa7-4d2f-9951-096e08a0dfce]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe35:412a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 528275, 'tstamp': 528275}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322997, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:24:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:24:44.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:44.150 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[58f24e6d-56f8-417c-b086-b9a8d9a47429]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6901f664-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:41:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528275, 'reachable_time': 37779, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322998, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:44.188 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[478c31f0-9cf9-4603-8699-efb98e97391d]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:24:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:44 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:44.263 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[a952df8d-a323-4a11-aeba-a3af69ce0d35]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:44.264 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6901f664-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:44.265 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:44.265 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6901f664-30, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:24:44 compute-0 NetworkManager[45059]: <info>  [1759256684.2679] manager: (tap6901f664-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Sep 30 18:24:44 compute-0 nova_compute[265391]: 2025-09-30 18:24:44.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:44 compute-0 kernel: tap6901f664-30: entered promiscuous mode
Sep 30 18:24:44 compute-0 nova_compute[265391]: 2025-09-30 18:24:44.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:44.271 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6901f664-30, col_values=(('external_ids', {'iface-id': '5b6cbf18-1826-41d0-920f-e9db4f1a1832'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:24:44 compute-0 nova_compute[265391]: 2025-09-30 18:24:44.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:44 compute-0 ovn_controller[156242]: 2025-09-30T18:24:44Z|00136|binding|INFO|Releasing lport 5b6cbf18-1826-41d0-920f-e9db4f1a1832 from this chassis (sb_readonly=0)
Sep 30 18:24:44 compute-0 nova_compute[265391]: 2025-09-30 18:24:44.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:44 compute-0 nova_compute[265391]: 2025-09-30 18:24:44.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:44.287 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[b6e3905b-5253-433b-b249-45bd3b16c093]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:44.288 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:44.288 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:44.288 166158 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for 6901f664-336b-42d2-bbf7-58951befc8d1 disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:44.288 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:44.289 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[4d764196-9a07-4fe4-a5a7-088742308058]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:44.289 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:44.289 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[a9c7c4a9-b640-4ad2-9137-7dc393dc8cb8]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:44.290 166158 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]: global
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]:     log         /dev/log local0 debug
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]:     log-tag     haproxy-metadata-proxy-6901f664-336b-42d2-bbf7-58951befc8d1
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]:     user        root
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]:     group       root
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]:     maxconn     1024
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]:     pidfile     /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]:     daemon
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]: defaults
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]:     log global
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]:     mode http
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]:     option httplog
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]:     option dontlognull
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]:     option http-server-close
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]:     option forwardfor
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]:     retries                 3
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]:     timeout http-request    30s
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]:     timeout connect         30s
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]:     timeout client          32s
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]:     timeout server          32s
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]:     timeout http-keep-alive 30s
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]: listen listener
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]:     bind 169.254.169.254:80
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]:     
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]:     server metadata /var/lib/neutron/metadata_proxy
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]:     http-request add-header X-OVN-Network-ID 6901f664-336b-42d2-bbf7-58951befc8d1
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Sep 30 18:24:44 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:44.290 166158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'env', 'PROCESS_TAG=haproxy-6901f664-336b-42d2-bbf7-58951befc8d1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6901f664-336b-42d2-bbf7-58951befc8d1.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Sep 30 18:24:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1383: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:24:44 compute-0 nova_compute[265391]: 2025-09-30 18:24:44.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:24:44 compute-0 nova_compute[265391]: 2025-09-30 18:24:44.428 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.12/site-packages/nova/compute/manager.py:11947
Sep 30 18:24:44 compute-0 ceph-mon[73755]: pgmap v1383: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:24:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:24:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:24:44.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:24:44 compute-0 podman[323030]: 2025-09-30 18:24:44.694967321 +0000 UTC m=+0.058173573 container create ab3a5d69973bea5cd2d0f5e4669967f57d81f081c7c9c817f68a02484802c94a (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4)
Sep 30 18:24:44 compute-0 nova_compute[265391]: 2025-09-30 18:24:44.715 2 DEBUG nova.compute.manager [req-e202324f-b279-4cb6-806d-edb6db9c0197 req-4cbc65ce-402b-433b-9edd-7ecf7fb0a175 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Received event network-vif-plugged-fba1be8e-f7c4-4bd0-b8a7-6e854986df69 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:24:44 compute-0 nova_compute[265391]: 2025-09-30 18:24:44.716 2 DEBUG oslo_concurrency.lockutils [req-e202324f-b279-4cb6-806d-edb6db9c0197 req-4cbc65ce-402b-433b-9edd-7ecf7fb0a175 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "7569c2e7-ec68-4b21-b36b-7c828ac8af52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:24:44 compute-0 nova_compute[265391]: 2025-09-30 18:24:44.716 2 DEBUG oslo_concurrency.lockutils [req-e202324f-b279-4cb6-806d-edb6db9c0197 req-4cbc65ce-402b-433b-9edd-7ecf7fb0a175 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "7569c2e7-ec68-4b21-b36b-7c828ac8af52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:24:44 compute-0 nova_compute[265391]: 2025-09-30 18:24:44.716 2 DEBUG oslo_concurrency.lockutils [req-e202324f-b279-4cb6-806d-edb6db9c0197 req-4cbc65ce-402b-433b-9edd-7ecf7fb0a175 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "7569c2e7-ec68-4b21-b36b-7c828ac8af52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:24:44 compute-0 nova_compute[265391]: 2025-09-30 18:24:44.717 2 DEBUG nova.compute.manager [req-e202324f-b279-4cb6-806d-edb6db9c0197 req-4cbc65ce-402b-433b-9edd-7ecf7fb0a175 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Processing event network-vif-plugged-fba1be8e-f7c4-4bd0-b8a7-6e854986df69 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:24:44 compute-0 systemd[1]: Started libpod-conmon-ab3a5d69973bea5cd2d0f5e4669967f57d81f081c7c9c817f68a02484802c94a.scope.
Sep 30 18:24:44 compute-0 podman[323030]: 2025-09-30 18:24:44.666421899 +0000 UTC m=+0.029628201 image pull 8925e336c8a9c8e5b40d98e4715ad60992d6f21ad0a398170a686c34f922c024 38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Sep 30 18:24:44 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:24:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b187512898779f8d62265ded9c5761bdda2ea1ea8eb8e613b2c7862b17c60478/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Sep 30 18:24:44 compute-0 podman[323030]: 2025-09-30 18:24:44.790898824 +0000 UTC m=+0.154105086 container init ab3a5d69973bea5cd2d0f5e4669967f57d81f081c7c9c817f68a02484802c94a (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.build-date=20250930, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:24:44 compute-0 podman[323030]: 2025-09-30 18:24:44.800163265 +0000 UTC m=+0.163369517 container start ab3a5d69973bea5cd2d0f5e4669967f57d81f081c7c9c817f68a02484802c94a (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0)
Sep 30 18:24:44 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[323079]: [NOTICE]   (323091) : New worker (323093) forked
Sep 30 18:24:44 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[323079]: [NOTICE]   (323091) : Loading success.
Sep 30 18:24:45 compute-0 nova_compute[265391]: 2025-09-30 18:24:45.311 2 DEBUG nova.compute.manager [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:24:45 compute-0 nova_compute[265391]: 2025-09-30 18:24:45.315 2 DEBUG nova.virt.libvirt.driver [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Sep 30 18:24:45 compute-0 nova_compute[265391]: 2025-09-30 18:24:45.319 2 INFO nova.virt.libvirt.driver [-] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Instance spawned successfully.
Sep 30 18:24:45 compute-0 nova_compute[265391]: 2025-09-30 18:24:45.319 2 DEBUG nova.virt.libvirt.driver [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Sep 30 18:24:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:45 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:45 compute-0 nova_compute[265391]: 2025-09-30 18:24:45.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:45 compute-0 nova_compute[265391]: 2025-09-30 18:24:45.836 2 DEBUG nova.virt.libvirt.driver [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:24:45 compute-0 nova_compute[265391]: 2025-09-30 18:24:45.837 2 DEBUG nova.virt.libvirt.driver [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:24:45 compute-0 nova_compute[265391]: 2025-09-30 18:24:45.838 2 DEBUG nova.virt.libvirt.driver [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:24:45 compute-0 nova_compute[265391]: 2025-09-30 18:24:45.838 2 DEBUG nova.virt.libvirt.driver [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:24:45 compute-0 nova_compute[265391]: 2025-09-30 18:24:45.839 2 DEBUG nova.virt.libvirt.driver [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:24:45 compute-0 nova_compute[265391]: 2025-09-30 18:24:45.840 2 DEBUG nova.virt.libvirt.driver [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:24:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:24:45 compute-0 nova_compute[265391]: 2025-09-30 18:24:45.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:24:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:24:46.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:24:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:46 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400ac30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:46 compute-0 nova_compute[265391]: 2025-09-30 18:24:46.350 2 INFO nova.compute.manager [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Took 11.95 seconds to spawn the instance on the hypervisor.
Sep 30 18:24:46 compute-0 nova_compute[265391]: 2025-09-30 18:24:46.351 2 DEBUG nova.compute.manager [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Sep 30 18:24:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1384: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:24:46 compute-0 ceph-mon[73755]: pgmap v1384: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:24:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:24:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:24:46.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:24:46 compute-0 nova_compute[265391]: 2025-09-30 18:24:46.804 2 DEBUG nova.compute.manager [req-0aa1e151-afde-46c3-8275-098c7b94d842 req-2e342849-5369-4250-bae1-ce50e23a2d84 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Received event network-vif-plugged-fba1be8e-f7c4-4bd0-b8a7-6e854986df69 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:24:46 compute-0 nova_compute[265391]: 2025-09-30 18:24:46.805 2 DEBUG oslo_concurrency.lockutils [req-0aa1e151-afde-46c3-8275-098c7b94d842 req-2e342849-5369-4250-bae1-ce50e23a2d84 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "7569c2e7-ec68-4b21-b36b-7c828ac8af52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:24:46 compute-0 nova_compute[265391]: 2025-09-30 18:24:46.805 2 DEBUG oslo_concurrency.lockutils [req-0aa1e151-afde-46c3-8275-098c7b94d842 req-2e342849-5369-4250-bae1-ce50e23a2d84 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "7569c2e7-ec68-4b21-b36b-7c828ac8af52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:24:46 compute-0 nova_compute[265391]: 2025-09-30 18:24:46.805 2 DEBUG oslo_concurrency.lockutils [req-0aa1e151-afde-46c3-8275-098c7b94d842 req-2e342849-5369-4250-bae1-ce50e23a2d84 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "7569c2e7-ec68-4b21-b36b-7c828ac8af52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:24:46 compute-0 nova_compute[265391]: 2025-09-30 18:24:46.806 2 DEBUG nova.compute.manager [req-0aa1e151-afde-46c3-8275-098c7b94d842 req-2e342849-5369-4250-bae1-ce50e23a2d84 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] No waiting events found dispatching network-vif-plugged-fba1be8e-f7c4-4bd0-b8a7-6e854986df69 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:24:46 compute-0 nova_compute[265391]: 2025-09-30 18:24:46.806 2 WARNING nova.compute.manager [req-0aa1e151-afde-46c3-8275-098c7b94d842 req-2e342849-5369-4250-bae1-ce50e23a2d84 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Received unexpected event network-vif-plugged-fba1be8e-f7c4-4bd0-b8a7-6e854986df69 for instance with vm_state active and task_state None.
Sep 30 18:24:46 compute-0 nova_compute[265391]: 2025-09-30 18:24:46.878 2 INFO nova.compute.manager [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Took 17.73 seconds to build instance.
Sep 30 18:24:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:24:47.240Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:24:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:47 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384003a20 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:47 compute-0 nova_compute[265391]: 2025-09-30 18:24:47.383 2 DEBUG oslo_concurrency.lockutils [None req-7c5f19a0-94e3-427b-ab88-89d4231d1d4e 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "7569c2e7-ec68-4b21-b36b-7c828ac8af52" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.253s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:24:47 compute-0 podman[323108]: 2025-09-30 18:24:47.540741234 +0000 UTC m=+0.063807360 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, container_name=iscsid, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Sep 30 18:24:47 compute-0 podman[323109]: 2025-09-30 18:24:47.545368474 +0000 UTC m=+0.066982362 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, vendor=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, architecture=x86_64, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Sep 30 18:24:47 compute-0 podman[323106]: 2025-09-30 18:24:47.54752378 +0000 UTC m=+0.072599558 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, container_name=multipathd)
Sep 30 18:24:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:24:48.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:48 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1385: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 69 op/s
Sep 30 18:24:48 compute-0 ceph-mon[73755]: pgmap v1385: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 69 op/s
Sep 30 18:24:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:24:48.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:24:48] "GET /metrics HTTP/1.1" 200 46630 "" "Prometheus/2.51.0"
Sep 30 18:24:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:24:48] "GET /metrics HTTP/1.1" 200 46630 "" "Prometheus/2.51.0"
Sep 30 18:24:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:49 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:24:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:24:50.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:24:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:50 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400ac30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1386: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Sep 30 18:24:50 compute-0 ceph-mon[73755]: pgmap v1386: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Sep 30 18:24:50 compute-0 nova_compute[265391]: 2025-09-30 18:24:50.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:24:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:24:50.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:24:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:24:50 compute-0 nova_compute[265391]: 2025-09-30 18:24:50.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:51 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384003a20 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:24:52.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:52 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:24:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:24:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:24:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1387: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:24:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:24:52.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:53 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:53 compute-0 ceph-mon[73755]: pgmap v1387: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:24:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:24:53.774Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:24:53 compute-0 sudo[323173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:24:53 compute-0 sudo[323173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:24:53 compute-0 sudo[323173]: pam_unix(sudo:session): session closed for user root
Sep 30 18:24:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:24:54.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:54 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400ac30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:54.304 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:24:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:54.304 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:24:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:24:54.305 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:24:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1388: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:24:54 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/4264755529' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:24:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:24:54.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:55 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384003a20 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:55 compute-0 ceph-mon[73755]: pgmap v1388: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:24:55 compute-0 nova_compute[265391]: 2025-09-30 18:24:55.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:24:55 compute-0 nova_compute[265391]: 2025-09-30 18:24:55.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:24:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:24:56.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:56 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1389: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:24:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:24:56.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:24:57.241Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:24:57 compute-0 ovn_controller[156242]: 2025-09-30T18:24:57Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8b:86:5f 10.100.0.13
Sep 30 18:24:57 compute-0 ovn_controller[156242]: 2025-09-30T18:24:57Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8b:86:5f 10.100.0.13
Sep 30 18:24:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:57 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:57 compute-0 ceph-mon[73755]: pgmap v1389: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:24:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:24:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/861102180' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:24:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:24:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/861102180' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:24:57 compute-0 sshd-session[323105]: error: kex_exchange_identification: read: Connection timed out
Sep 30 18:24:57 compute-0 sshd-session[323105]: banner exchange: Connection from 115.190.39.222 port 54938: Connection timed out
Sep 30 18:24:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:24:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:24:58.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:24:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:58 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400ac30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1390: 353 pgs: 353 active+clean; 121 MiB data, 279 MiB used, 40 GiB / 40 GiB avail; 2.1 MiB/s rd, 1.4 MiB/s wr, 116 op/s
Sep 30 18:24:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/861102180' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:24:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/861102180' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:24:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:24:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:24:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:24:58.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:24:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:24:58] "GET /metrics HTTP/1.1" 200 46646 "" "Prometheus/2.51.0"
Sep 30 18:24:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:24:58] "GET /metrics HTTP/1.1" 200 46646 "" "Prometheus/2.51.0"
Sep 30 18:24:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:24:59 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384003a20 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:24:59 compute-0 ceph-mon[73755]: pgmap v1390: 353 pgs: 353 active+clean; 121 MiB data, 279 MiB used, 40 GiB / 40 GiB avail; 2.1 MiB/s rd, 1.4 MiB/s wr, 116 op/s
Sep 30 18:24:59 compute-0 podman[276673]: time="2025-09-30T18:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:24:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:24:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10746 "" "Go-http-client/1.1"
Sep 30 18:25:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:25:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:25:00.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:25:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:00 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1391: 353 pgs: 353 active+clean; 157 MiB data, 289 MiB used, 40 GiB / 40 GiB avail; 1.1 MiB/s rd, 3.5 MiB/s wr, 105 op/s
Sep 30 18:25:00 compute-0 ceph-mon[73755]: pgmap v1391: 353 pgs: 353 active+clean; 157 MiB data, 289 MiB used, 40 GiB / 40 GiB avail; 1.1 MiB/s rd, 3.5 MiB/s wr, 105 op/s
Sep 30 18:25:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:25:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:25:00.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:25:00 compute-0 nova_compute[265391]: 2025-09-30 18:25:00.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:25:00 compute-0 nova_compute[265391]: 2025-09-30 18:25:00.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:01 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:01 compute-0 openstack_network_exporter[279566]: ERROR   18:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:25:01 compute-0 openstack_network_exporter[279566]: ERROR   18:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:25:01 compute-0 openstack_network_exporter[279566]: ERROR   18:25:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:25:01 compute-0 openstack_network_exporter[279566]: ERROR   18:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:25:01 compute-0 openstack_network_exporter[279566]: ERROR   18:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:25:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:25:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:25:02.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:25:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:02 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400ac30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1392: 353 pgs: 353 active+clean; 157 MiB data, 289 MiB used, 40 GiB / 40 GiB avail; 294 KiB/s rd, 3.5 MiB/s wr, 74 op/s
Sep 30 18:25:02 compute-0 ceph-mon[73755]: pgmap v1392: 353 pgs: 353 active+clean; 157 MiB data, 289 MiB used, 40 GiB / 40 GiB avail; 294 KiB/s rd, 3.5 MiB/s wr, 74 op/s
Sep 30 18:25:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:25:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:25:02.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:25:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:03 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384003a20 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:25:03.777Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:25:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:25:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:25:04.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:25:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:04 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1393: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 304 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Sep 30 18:25:04 compute-0 ceph-mon[73755]: pgmap v1393: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 304 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Sep 30 18:25:04 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3160135630' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:25:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:25:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:25:04.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:25:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:05 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:05 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/187030609' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:25:05 compute-0 podman[323211]: 2025-09-30 18:25:05.528461019 +0000 UTC m=+0.064904208 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Sep 30 18:25:05 compute-0 podman[323209]: 2025-09-30 18:25:05.552488973 +0000 UTC m=+0.088916812 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:25:05 compute-0 nova_compute[265391]: 2025-09-30 18:25:05.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:25:06 compute-0 nova_compute[265391]: 2025-09-30 18:25:06.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:25:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:25:06.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:25:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:06 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400ac30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1394: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 304 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Sep 30 18:25:06 compute-0 ceph-mon[73755]: pgmap v1394: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 304 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Sep 30 18:25:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:25:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:25:06.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:25:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:25:07.242Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:25:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:25:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:25:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:07 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384003a20 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:25:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016533266079800742 of space, bias 1.0, pg target 0.3306653215960148 quantized to 32 (current 32)
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:25:07
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['volumes', 'default.rgw.log', '.mgr', 'backups', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', '.nfs', 'cephfs.cephfs.data', 'images', '.rgw.root', 'default.rgw.meta']
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:25:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:25:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:25:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:25:08.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:25:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:08 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1395: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 304 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Sep 30 18:25:08 compute-0 ceph-mon[73755]: pgmap v1395: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 304 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Sep 30 18:25:08 compute-0 podman[323259]: 2025-09-30 18:25:08.533594093 +0000 UTC m=+0.061047967 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:25:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:25:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:25:08.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:25:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:25:08] "GET /metrics HTTP/1.1" 200 46647 "" "Prometheus/2.51.0"
Sep 30 18:25:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:25:08] "GET /metrics HTTP/1.1" 200 46647 "" "Prometheus/2.51.0"
Sep 30 18:25:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:09 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:10 compute-0 sudo[323283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:25:10 compute-0 sudo[323283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:25:10 compute-0 sudo[323283]: pam_unix(sudo:session): session closed for user root
Sep 30 18:25:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:25:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:25:10.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:25:10 compute-0 sudo[323308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:25:10 compute-0 sudo[323308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:25:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:10 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400ac30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1396: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 144 KiB/s rd, 2.5 MiB/s wr, 45 op/s
Sep 30 18:25:10 compute-0 ceph-mon[73755]: pgmap v1396: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 144 KiB/s rd, 2.5 MiB/s wr, 45 op/s
Sep 30 18:25:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:25:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:25:10.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:25:10 compute-0 nova_compute[265391]: 2025-09-30 18:25:10.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:10 compute-0 sudo[323308]: pam_unix(sudo:session): session closed for user root
Sep 30 18:25:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:25:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:25:10 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:25:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:25:10 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:25:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:25:10 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:25:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:25:10 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:25:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:25:10 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:25:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:25:10 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:25:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:25:10 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:25:11 compute-0 nova_compute[265391]: 2025-09-30 18:25:11.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:11 compute-0 sudo[323366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:25:11 compute-0 sudo[323366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:25:11 compute-0 sudo[323366]: pam_unix(sudo:session): session closed for user root
Sep 30 18:25:11 compute-0 sudo[323391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:25:11 compute-0 sudo[323391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:25:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:11 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001bd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:11 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:25:11 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:25:11 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:25:11 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:25:11 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:25:11 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:25:11 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:25:11 compute-0 podman[323458]: 2025-09-30 18:25:11.690612905 +0000 UTC m=+0.055540555 container create cd3171403658e3c8270f431bf0b4044c46ab232348f00f079f68c08eddecdb3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Sep 30 18:25:11 compute-0 systemd[1]: Started libpod-conmon-cd3171403658e3c8270f431bf0b4044c46ab232348f00f079f68c08eddecdb3f.scope.
Sep 30 18:25:11 compute-0 podman[323458]: 2025-09-30 18:25:11.666254022 +0000 UTC m=+0.031181752 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:25:11 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:25:11 compute-0 podman[323458]: 2025-09-30 18:25:11.78851919 +0000 UTC m=+0.153446860 container init cd3171403658e3c8270f431bf0b4044c46ab232348f00f079f68c08eddecdb3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_gauss, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 18:25:11 compute-0 podman[323458]: 2025-09-30 18:25:11.80278109 +0000 UTC m=+0.167708780 container start cd3171403658e3c8270f431bf0b4044c46ab232348f00f079f68c08eddecdb3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_gauss, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Sep 30 18:25:11 compute-0 podman[323458]: 2025-09-30 18:25:11.807210085 +0000 UTC m=+0.172137765 container attach cd3171403658e3c8270f431bf0b4044c46ab232348f00f079f68c08eddecdb3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 18:25:11 compute-0 pensive_gauss[323475]: 167 167
Sep 30 18:25:11 compute-0 systemd[1]: libpod-cd3171403658e3c8270f431bf0b4044c46ab232348f00f079f68c08eddecdb3f.scope: Deactivated successfully.
Sep 30 18:25:11 compute-0 podman[323458]: 2025-09-30 18:25:11.813767736 +0000 UTC m=+0.178695406 container died cd3171403658e3c8270f431bf0b4044c46ab232348f00f079f68c08eddecdb3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325)
Sep 30 18:25:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd73c9ef133f6d6f21e907086d4c2d550a4a142640177096bf9e6190962e663a-merged.mount: Deactivated successfully.
Sep 30 18:25:11 compute-0 podman[323458]: 2025-09-30 18:25:11.862272767 +0000 UTC m=+0.227200437 container remove cd3171403658e3c8270f431bf0b4044c46ab232348f00f079f68c08eddecdb3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Sep 30 18:25:11 compute-0 systemd[1]: libpod-conmon-cd3171403658e3c8270f431bf0b4044c46ab232348f00f079f68c08eddecdb3f.scope: Deactivated successfully.
Sep 30 18:25:12 compute-0 podman[323498]: 2025-09-30 18:25:12.087613333 +0000 UTC m=+0.065441962 container create 1e8fa6be904bd915a58269924964acf276d07d306fa13948ce1ce8db5bbbe06e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:25:12 compute-0 systemd[1]: Started libpod-conmon-1e8fa6be904bd915a58269924964acf276d07d306fa13948ce1ce8db5bbbe06e.scope.
Sep 30 18:25:12 compute-0 podman[323498]: 2025-09-30 18:25:12.056549706 +0000 UTC m=+0.034378325 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:25:12 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:25:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa8022247e929dd29cbc6b1c8123b2353632bcc00fef874bc6a2b1abfc9cec91/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:25:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa8022247e929dd29cbc6b1c8123b2353632bcc00fef874bc6a2b1abfc9cec91/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:25:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa8022247e929dd29cbc6b1c8123b2353632bcc00fef874bc6a2b1abfc9cec91/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:25:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa8022247e929dd29cbc6b1c8123b2353632bcc00fef874bc6a2b1abfc9cec91/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:25:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa8022247e929dd29cbc6b1c8123b2353632bcc00fef874bc6a2b1abfc9cec91/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:25:12 compute-0 podman[323498]: 2025-09-30 18:25:12.181295838 +0000 UTC m=+0.159124437 container init 1e8fa6be904bd915a58269924964acf276d07d306fa13948ce1ce8db5bbbe06e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:25:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:25:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:25:12.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:25:12 compute-0 podman[323498]: 2025-09-30 18:25:12.193938187 +0000 UTC m=+0.171766786 container start 1e8fa6be904bd915a58269924964acf276d07d306fa13948ce1ce8db5bbbe06e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:25:12 compute-0 podman[323498]: 2025-09-30 18:25:12.197582161 +0000 UTC m=+0.175410780 container attach 1e8fa6be904bd915a58269924964acf276d07d306fa13948ce1ce8db5bbbe06e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_jepsen, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 18:25:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:12 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1397: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 11 KiB/s rd, 445 KiB/s wr, 14 op/s
Sep 30 18:25:12 compute-0 ceph-mon[73755]: pgmap v1397: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 11 KiB/s rd, 445 KiB/s wr, 14 op/s
Sep 30 18:25:12 compute-0 reverent_jepsen[323514]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:25:12 compute-0 reverent_jepsen[323514]: --> All data devices are unavailable
Sep 30 18:25:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:25:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:25:12.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:25:12 compute-0 systemd[1]: libpod-1e8fa6be904bd915a58269924964acf276d07d306fa13948ce1ce8db5bbbe06e.scope: Deactivated successfully.
Sep 30 18:25:12 compute-0 podman[323529]: 2025-09-30 18:25:12.624277191 +0000 UTC m=+0.027912096 container died 1e8fa6be904bd915a58269924964acf276d07d306fa13948ce1ce8db5bbbe06e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:25:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa8022247e929dd29cbc6b1c8123b2353632bcc00fef874bc6a2b1abfc9cec91-merged.mount: Deactivated successfully.
Sep 30 18:25:12 compute-0 podman[323529]: 2025-09-30 18:25:12.665654227 +0000 UTC m=+0.069289102 container remove 1e8fa6be904bd915a58269924964acf276d07d306fa13948ce1ce8db5bbbe06e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_jepsen, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Sep 30 18:25:12 compute-0 systemd[1]: libpod-conmon-1e8fa6be904bd915a58269924964acf276d07d306fa13948ce1ce8db5bbbe06e.scope: Deactivated successfully.
Sep 30 18:25:12 compute-0 sudo[323391]: pam_unix(sudo:session): session closed for user root
Sep 30 18:25:12 compute-0 sudo[323544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:25:12 compute-0 sudo[323544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:25:12 compute-0 sudo[323544]: pam_unix(sudo:session): session closed for user root
Sep 30 18:25:12 compute-0 sudo[323569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:25:12 compute-0 sudo[323569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:25:13 compute-0 podman[323636]: 2025-09-30 18:25:13.335785163 +0000 UTC m=+0.047159017 container create ba8cfe12768790192774dc627b3d267b8440ad58e08e79875b87797f56cbe8fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:25:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:13 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:13 compute-0 systemd[1]: Started libpod-conmon-ba8cfe12768790192774dc627b3d267b8440ad58e08e79875b87797f56cbe8fa.scope.
Sep 30 18:25:13 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:25:13 compute-0 podman[323636]: 2025-09-30 18:25:13.313896094 +0000 UTC m=+0.025269918 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:25:13 compute-0 podman[323636]: 2025-09-30 18:25:13.41225227 +0000 UTC m=+0.123626114 container init ba8cfe12768790192774dc627b3d267b8440ad58e08e79875b87797f56cbe8fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Sep 30 18:25:13 compute-0 podman[323636]: 2025-09-30 18:25:13.423119513 +0000 UTC m=+0.134493337 container start ba8cfe12768790192774dc627b3d267b8440ad58e08e79875b87797f56cbe8fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_poincare, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:25:13 compute-0 podman[323636]: 2025-09-30 18:25:13.426841159 +0000 UTC m=+0.138215003 container attach ba8cfe12768790192774dc627b3d267b8440ad58e08e79875b87797f56cbe8fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_poincare, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Sep 30 18:25:13 compute-0 agitated_poincare[323653]: 167 167
Sep 30 18:25:13 compute-0 systemd[1]: libpod-ba8cfe12768790192774dc627b3d267b8440ad58e08e79875b87797f56cbe8fa.scope: Deactivated successfully.
Sep 30 18:25:13 compute-0 podman[323636]: 2025-09-30 18:25:13.428947394 +0000 UTC m=+0.140321218 container died ba8cfe12768790192774dc627b3d267b8440ad58e08e79875b87797f56cbe8fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_poincare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Sep 30 18:25:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b3f70f0d3fe6163759c261b48c647fbb7e66c64776fe3e200052425548d6ae0-merged.mount: Deactivated successfully.
Sep 30 18:25:13 compute-0 podman[323636]: 2025-09-30 18:25:13.468530363 +0000 UTC m=+0.179904187 container remove ba8cfe12768790192774dc627b3d267b8440ad58e08e79875b87797f56cbe8fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_poincare, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Sep 30 18:25:13 compute-0 systemd[1]: libpod-conmon-ba8cfe12768790192774dc627b3d267b8440ad58e08e79875b87797f56cbe8fa.scope: Deactivated successfully.
Sep 30 18:25:13 compute-0 podman[323676]: 2025-09-30 18:25:13.646242282 +0000 UTC m=+0.042836235 container create 95767ac2c8c91efe506dce12500fe14ae5050fed64f4f50063e8fc91206bb9a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_kepler, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:25:13 compute-0 systemd[1]: Started libpod-conmon-95767ac2c8c91efe506dce12500fe14ae5050fed64f4f50063e8fc91206bb9a2.scope.
Sep 30 18:25:13 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:25:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6320ac95a8a01f13ee7d01ebcd951f5930a6c6ba48b1cc833e1095a713c35eee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:25:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6320ac95a8a01f13ee7d01ebcd951f5930a6c6ba48b1cc833e1095a713c35eee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:25:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6320ac95a8a01f13ee7d01ebcd951f5930a6c6ba48b1cc833e1095a713c35eee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:25:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6320ac95a8a01f13ee7d01ebcd951f5930a6c6ba48b1cc833e1095a713c35eee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:25:13 compute-0 podman[323676]: 2025-09-30 18:25:13.719407073 +0000 UTC m=+0.116001026 container init 95767ac2c8c91efe506dce12500fe14ae5050fed64f4f50063e8fc91206bb9a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_kepler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:25:13 compute-0 podman[323676]: 2025-09-30 18:25:13.630096832 +0000 UTC m=+0.026690805 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:25:13 compute-0 podman[323676]: 2025-09-30 18:25:13.726885348 +0000 UTC m=+0.123479301 container start 95767ac2c8c91efe506dce12500fe14ae5050fed64f4f50063e8fc91206bb9a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_kepler, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Sep 30 18:25:13 compute-0 podman[323676]: 2025-09-30 18:25:13.729916617 +0000 UTC m=+0.126510660 container attach 95767ac2c8c91efe506dce12500fe14ae5050fed64f4f50063e8fc91206bb9a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_kepler, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:25:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:25:13.778Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:25:13 compute-0 sudo[323699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:25:13 compute-0 sudo[323699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:25:13 compute-0 sudo[323699]: pam_unix(sudo:session): session closed for user root
Sep 30 18:25:14 compute-0 lucid_kepler[323693]: {
Sep 30 18:25:14 compute-0 lucid_kepler[323693]:     "0": [
Sep 30 18:25:14 compute-0 lucid_kepler[323693]:         {
Sep 30 18:25:14 compute-0 lucid_kepler[323693]:             "devices": [
Sep 30 18:25:14 compute-0 lucid_kepler[323693]:                 "/dev/loop3"
Sep 30 18:25:14 compute-0 lucid_kepler[323693]:             ],
Sep 30 18:25:14 compute-0 lucid_kepler[323693]:             "lv_name": "ceph_lv0",
Sep 30 18:25:14 compute-0 lucid_kepler[323693]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:25:14 compute-0 lucid_kepler[323693]:             "lv_size": "21470642176",
Sep 30 18:25:14 compute-0 lucid_kepler[323693]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:25:14 compute-0 lucid_kepler[323693]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:25:14 compute-0 lucid_kepler[323693]:             "name": "ceph_lv0",
Sep 30 18:25:14 compute-0 lucid_kepler[323693]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:25:14 compute-0 lucid_kepler[323693]:             "tags": {
Sep 30 18:25:14 compute-0 lucid_kepler[323693]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:25:14 compute-0 lucid_kepler[323693]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:25:14 compute-0 lucid_kepler[323693]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:25:14 compute-0 lucid_kepler[323693]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:25:14 compute-0 lucid_kepler[323693]:                 "ceph.cluster_name": "ceph",
Sep 30 18:25:14 compute-0 lucid_kepler[323693]:                 "ceph.crush_device_class": "",
Sep 30 18:25:14 compute-0 lucid_kepler[323693]:                 "ceph.encrypted": "0",
Sep 30 18:25:14 compute-0 lucid_kepler[323693]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:25:14 compute-0 lucid_kepler[323693]:                 "ceph.osd_id": "0",
Sep 30 18:25:14 compute-0 lucid_kepler[323693]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:25:14 compute-0 lucid_kepler[323693]:                 "ceph.type": "block",
Sep 30 18:25:14 compute-0 lucid_kepler[323693]:                 "ceph.vdo": "0",
Sep 30 18:25:14 compute-0 lucid_kepler[323693]:                 "ceph.with_tpm": "0"
Sep 30 18:25:14 compute-0 lucid_kepler[323693]:             },
Sep 30 18:25:14 compute-0 lucid_kepler[323693]:             "type": "block",
Sep 30 18:25:14 compute-0 lucid_kepler[323693]:             "vg_name": "ceph_vg0"
Sep 30 18:25:14 compute-0 lucid_kepler[323693]:         }
Sep 30 18:25:14 compute-0 lucid_kepler[323693]:     ]
Sep 30 18:25:14 compute-0 lucid_kepler[323693]: }
Sep 30 18:25:14 compute-0 systemd[1]: libpod-95767ac2c8c91efe506dce12500fe14ae5050fed64f4f50063e8fc91206bb9a2.scope: Deactivated successfully.
Sep 30 18:25:14 compute-0 podman[323676]: 2025-09-30 18:25:14.052166802 +0000 UTC m=+0.448760755 container died 95767ac2c8c91efe506dce12500fe14ae5050fed64f4f50063e8fc91206bb9a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_kepler, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:25:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-6320ac95a8a01f13ee7d01ebcd951f5930a6c6ba48b1cc833e1095a713c35eee-merged.mount: Deactivated successfully.
Sep 30 18:25:14 compute-0 podman[323676]: 2025-09-30 18:25:14.095512978 +0000 UTC m=+0.492106961 container remove 95767ac2c8c91efe506dce12500fe14ae5050fed64f4f50063e8fc91206bb9a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_kepler, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Sep 30 18:25:14 compute-0 systemd[1]: libpod-conmon-95767ac2c8c91efe506dce12500fe14ae5050fed64f4f50063e8fc91206bb9a2.scope: Deactivated successfully.
Sep 30 18:25:14 compute-0 sudo[323569]: pam_unix(sudo:session): session closed for user root
Sep 30 18:25:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:25:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:25:14.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:25:14 compute-0 sudo[323740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:25:14 compute-0 sudo[323740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:25:14 compute-0 sudo[323740]: pam_unix(sudo:session): session closed for user root
Sep 30 18:25:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:14 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:14 compute-0 sudo[323765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:25:14 compute-0 sudo[323765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:25:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1398: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 459 KiB/s wr, 88 op/s
Sep 30 18:25:14 compute-0 ceph-mon[73755]: pgmap v1398: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 459 KiB/s wr, 88 op/s
Sep 30 18:25:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:25:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:25:14.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:25:14 compute-0 podman[323835]: 2025-09-30 18:25:14.709843315 +0000 UTC m=+0.042654589 container create 0067254c68bb62598017412a29d52042ebd740c3d69f1bf14ad8f4af461d9f3d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_elbakyan, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:25:14 compute-0 systemd[1]: Started libpod-conmon-0067254c68bb62598017412a29d52042ebd740c3d69f1bf14ad8f4af461d9f3d.scope.
Sep 30 18:25:14 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:25:14 compute-0 podman[323835]: 2025-09-30 18:25:14.692163956 +0000 UTC m=+0.024975260 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:25:14 compute-0 podman[323835]: 2025-09-30 18:25:14.787431582 +0000 UTC m=+0.120242876 container init 0067254c68bb62598017412a29d52042ebd740c3d69f1bf14ad8f4af461d9f3d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_elbakyan, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:25:14 compute-0 podman[323835]: 2025-09-30 18:25:14.794264709 +0000 UTC m=+0.127075983 container start 0067254c68bb62598017412a29d52042ebd740c3d69f1bf14ad8f4af461d9f3d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:25:14 compute-0 podman[323835]: 2025-09-30 18:25:14.797693118 +0000 UTC m=+0.130504392 container attach 0067254c68bb62598017412a29d52042ebd740c3d69f1bf14ad8f4af461d9f3d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_elbakyan, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 18:25:14 compute-0 funny_elbakyan[323853]: 167 167
Sep 30 18:25:14 compute-0 systemd[1]: libpod-0067254c68bb62598017412a29d52042ebd740c3d69f1bf14ad8f4af461d9f3d.scope: Deactivated successfully.
Sep 30 18:25:14 compute-0 podman[323835]: 2025-09-30 18:25:14.799912926 +0000 UTC m=+0.132724200 container died 0067254c68bb62598017412a29d52042ebd740c3d69f1bf14ad8f4af461d9f3d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:25:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-132b76178deb63af402ac42526b2976609c87515bd5c85f81db2c7cb2c7ed2e3-merged.mount: Deactivated successfully.
Sep 30 18:25:14 compute-0 podman[323835]: 2025-09-30 18:25:14.83506409 +0000 UTC m=+0.167875364 container remove 0067254c68bb62598017412a29d52042ebd740c3d69f1bf14ad8f4af461d9f3d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:25:14 compute-0 systemd[1]: libpod-conmon-0067254c68bb62598017412a29d52042ebd740c3d69f1bf14ad8f4af461d9f3d.scope: Deactivated successfully.
Sep 30 18:25:15 compute-0 podman[323877]: 2025-09-30 18:25:15.007848461 +0000 UTC m=+0.045089583 container create 1471a6d7bc82cc78e46db33e47f5515b014a954c9724081041af3369327fb956 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_chaum, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:25:15 compute-0 systemd[1]: Started libpod-conmon-1471a6d7bc82cc78e46db33e47f5515b014a954c9724081041af3369327fb956.scope.
Sep 30 18:25:15 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:25:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89bd071876a09dae5626acfa9f34d9434ece53a14331699cbc003f4bc88b6f5d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:25:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89bd071876a09dae5626acfa9f34d9434ece53a14331699cbc003f4bc88b6f5d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:25:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89bd071876a09dae5626acfa9f34d9434ece53a14331699cbc003f4bc88b6f5d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:25:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89bd071876a09dae5626acfa9f34d9434ece53a14331699cbc003f4bc88b6f5d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:25:15 compute-0 podman[323877]: 2025-09-30 18:25:14.989967076 +0000 UTC m=+0.027208218 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:25:15 compute-0 podman[323877]: 2025-09-30 18:25:15.095090968 +0000 UTC m=+0.132332120 container init 1471a6d7bc82cc78e46db33e47f5515b014a954c9724081041af3369327fb956 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Sep 30 18:25:15 compute-0 podman[323877]: 2025-09-30 18:25:15.102223423 +0000 UTC m=+0.139464545 container start 1471a6d7bc82cc78e46db33e47f5515b014a954c9724081041af3369327fb956 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_chaum, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:25:15 compute-0 podman[323877]: 2025-09-30 18:25:15.106477684 +0000 UTC m=+0.143718926 container attach 1471a6d7bc82cc78e46db33e47f5515b014a954c9724081041af3369327fb956 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Sep 30 18:25:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:15 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_47] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001bd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:15 compute-0 nova_compute[265391]: 2025-09-30 18:25:15.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:15 compute-0 sshd-session[323790]: Invalid user superadmin from 45.252.249.158 port 35918
Sep 30 18:25:15 compute-0 sshd-session[323790]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:25:15 compute-0 sshd-session[323790]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158
Sep 30 18:25:15 compute-0 lvm[323970]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:25:15 compute-0 lvm[323970]: VG ceph_vg0 finished
Sep 30 18:25:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:25:15 compute-0 trusting_chaum[323894]: {}
Sep 30 18:25:15 compute-0 systemd[1]: libpod-1471a6d7bc82cc78e46db33e47f5515b014a954c9724081041af3369327fb956.scope: Deactivated successfully.
Sep 30 18:25:15 compute-0 systemd[1]: libpod-1471a6d7bc82cc78e46db33e47f5515b014a954c9724081041af3369327fb956.scope: Consumed 1.274s CPU time.
Sep 30 18:25:15 compute-0 podman[323877]: 2025-09-30 18:25:15.923471238 +0000 UTC m=+0.960712390 container died 1471a6d7bc82cc78e46db33e47f5515b014a954c9724081041af3369327fb956 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Sep 30 18:25:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-89bd071876a09dae5626acfa9f34d9434ece53a14331699cbc003f4bc88b6f5d-merged.mount: Deactivated successfully.
Sep 30 18:25:15 compute-0 podman[323877]: 2025-09-30 18:25:15.979697619 +0000 UTC m=+1.016938741 container remove 1471a6d7bc82cc78e46db33e47f5515b014a954c9724081041af3369327fb956 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_chaum, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Sep 30 18:25:15 compute-0 systemd[1]: libpod-conmon-1471a6d7bc82cc78e46db33e47f5515b014a954c9724081041af3369327fb956.scope: Deactivated successfully.
Sep 30 18:25:16 compute-0 nova_compute[265391]: 2025-09-30 18:25:16.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:16 compute-0 sudo[323765]: pam_unix(sudo:session): session closed for user root
Sep 30 18:25:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:25:16 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:25:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:25:16 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:25:16 compute-0 sudo[323986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:25:16 compute-0 sudo[323986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:25:16 compute-0 sudo[323986]: pam_unix(sudo:session): session closed for user root
Sep 30 18:25:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:25:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:25:16.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:25:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:16 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1399: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 74 op/s
Sep 30 18:25:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:25:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:25:16.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:25:17 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:25:17 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:25:17 compute-0 ceph-mon[73755]: pgmap v1399: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 74 op/s
Sep 30 18:25:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:25:17.243Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:25:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:25:17.243Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:25:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:17 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:17 compute-0 sshd-session[323790]: Failed password for invalid user superadmin from 45.252.249.158 port 35918 ssh2
Sep 30 18:25:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:25:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:25:18.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:25:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:18 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1400: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 74 op/s
Sep 30 18:25:18 compute-0 ceph-mon[73755]: pgmap v1400: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 74 op/s
Sep 30 18:25:18 compute-0 podman[324014]: 2025-09-30 18:25:18.550608408 +0000 UTC m=+0.079948119 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=iscsid, io.buildah.version=1.41.4, org.label-schema.build-date=20250930)
Sep 30 18:25:18 compute-0 podman[324013]: 2025-09-30 18:25:18.55108787 +0000 UTC m=+0.075207686 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, config_id=multipathd, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true)
Sep 30 18:25:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:25:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:25:18.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:25:18 compute-0 podman[324015]: 2025-09-30 18:25:18.585443993 +0000 UTC m=+0.105022971 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, name=ubi9-minimal, build-date=2025-08-20T13:12:41, vcs-type=git, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, container_name=openstack_network_exporter, release=1755695350)
Sep 30 18:25:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:25:18] "GET /metrics HTTP/1.1" 200 46647 "" "Prometheus/2.51.0"
Sep 30 18:25:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:25:18] "GET /metrics HTTP/1.1" 200 46647 "" "Prometheus/2.51.0"
Sep 30 18:25:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:19 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_47] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001bd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:20 compute-0 sshd-session[323790]: Received disconnect from 45.252.249.158 port 35918:11: Bye Bye [preauth]
Sep 30 18:25:20 compute-0 sshd-session[323790]: Disconnected from invalid user superadmin 45.252.249.158 port 35918 [preauth]
Sep 30 18:25:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:25:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:25:20.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:25:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:20 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1401: 353 pgs: 353 active+clean; 167 MiB data, 311 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 74 op/s
Sep 30 18:25:20 compute-0 ceph-mon[73755]: pgmap v1401: 353 pgs: 353 active+clean; 167 MiB data, 311 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 74 op/s
Sep 30 18:25:20 compute-0 nova_compute[265391]: 2025-09-30 18:25:20.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:25:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:25:20.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:25:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:25:21 compute-0 nova_compute[265391]: 2025-09-30 18:25:21.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:21 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:25:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:25:22.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:25:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:22 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_47] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:25:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:25:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:25:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1402: 353 pgs: 353 active+clean; 167 MiB data, 311 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Sep 30 18:25:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.002000052s ======
Sep 30 18:25:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:25:22.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Sep 30 18:25:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:23 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_47] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001bd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:23 compute-0 ceph-mon[73755]: pgmap v1402: 353 pgs: 353 active+clean; 167 MiB data, 311 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Sep 30 18:25:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:25:23.778Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:25:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:25:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:25:24.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:25:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:24 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1403: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Sep 30 18:25:24 compute-0 ceph-mon[73755]: pgmap v1403: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Sep 30 18:25:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:25:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:25:24.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:25:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:25 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:25 compute-0 nova_compute[265391]: 2025-09-30 18:25:25.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:25:26 compute-0 nova_compute[265391]: 2025-09-30 18:25:26.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:25:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:25:26.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:25:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:26 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1404: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Sep 30 18:25:26 compute-0 ceph-mon[73755]: pgmap v1404: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Sep 30 18:25:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:25:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:25:26.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:25:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:25:27.244Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:25:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:27 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:25:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:25:28.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:25:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:28 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_47] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1405: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:25:28 compute-0 ceph-mon[73755]: pgmap v1405: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:25:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:25:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:25:28.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:25:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:25:28] "GET /metrics HTTP/1.1" 200 46646 "" "Prometheus/2.51.0"
Sep 30 18:25:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:25:28] "GET /metrics HTTP/1.1" 200 46646 "" "Prometheus/2.51.0"
Sep 30 18:25:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:29 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001bd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:29 compute-0 podman[276673]: time="2025-09-30T18:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:25:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:25:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10744 "" "Go-http-client/1.1"
Sep 30 18:25:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:25:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:25:30.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:25:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:30 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1406: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:25:30 compute-0 ceph-mon[73755]: pgmap v1406: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:25:30 compute-0 nova_compute[265391]: 2025-09-30 18:25:30.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:25:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:25:30.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:25:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:25:31 compute-0 nova_compute[265391]: 2025-09-30 18:25:31.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:31 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:31 compute-0 openstack_network_exporter[279566]: ERROR   18:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:25:31 compute-0 openstack_network_exporter[279566]: ERROR   18:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:25:31 compute-0 openstack_network_exporter[279566]: ERROR   18:25:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:25:31 compute-0 openstack_network_exporter[279566]: ERROR   18:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:25:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:25:31 compute-0 openstack_network_exporter[279566]: ERROR   18:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:25:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:25:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:25:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:25:32.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:25:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:32 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_47] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1407: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:25:32 compute-0 ceph-mon[73755]: pgmap v1407: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:25:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:25:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:25:32.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:25:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:33 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001bd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:25:33.780Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:25:34 compute-0 sudo[324086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:25:34 compute-0 sudo[324086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:25:34 compute-0 sudo[324086]: pam_unix(sudo:session): session closed for user root
Sep 30 18:25:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:25:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:25:34.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:25:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:34 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001bd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1408: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:25:34 compute-0 ceph-mon[73755]: pgmap v1408: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:25:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:25:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:25:34.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:25:34 compute-0 nova_compute[265391]: 2025-09-30 18:25:34.947 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:25:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:35 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:35 compute-0 nova_compute[265391]: 2025-09-30 18:25:35.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:25:36 compute-0 nova_compute[265391]: 2025-09-30 18:25:36.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2006376901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:25:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:25:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:25:36.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:25:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:36 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_47] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1409: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 15 KiB/s wr, 0 op/s
Sep 30 18:25:36 compute-0 nova_compute[265391]: 2025-09-30 18:25:36.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:25:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:25:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2752651912' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:25:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:25:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2752651912' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:25:36 compute-0 podman[324116]: 2025-09-30 18:25:36.557651155 +0000 UTC m=+0.078135592 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Sep 30 18:25:36 compute-0 podman[324115]: 2025-09-30 18:25:36.571113865 +0000 UTC m=+0.103436630 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:25:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:25:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:25:36.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:25:37 compute-0 ceph-mon[73755]: pgmap v1409: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 15 KiB/s wr, 0 op/s
Sep 30 18:25:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2752651912' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:25:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2752651912' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:25:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:25:37.245Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:25:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:25:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:25:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:37 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:25:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:25:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:25:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:25:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:25:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:25:37 compute-0 nova_compute[265391]: 2025-09-30 18:25:37.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:25:37 compute-0 nova_compute[265391]: 2025-09-30 18:25:37.429 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:25:37 compute-0 nova_compute[265391]: 2025-09-30 18:25:37.429 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:25:37 compute-0 nova_compute[265391]: 2025-09-30 18:25:37.429 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:25:37 compute-0 sshd-session[324113]: Invalid user cuser from 14.225.220.107 port 44918
Sep 30 18:25:37 compute-0 sshd-session[324113]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:25:37 compute-0 sshd-session[324113]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107
Sep 30 18:25:37 compute-0 nova_compute[265391]: 2025-09-30 18:25:37.944 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:25:37 compute-0 nova_compute[265391]: 2025-09-30 18:25:37.945 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:25:37 compute-0 nova_compute[265391]: 2025-09-30 18:25:37.946 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:25:37 compute-0 nova_compute[265391]: 2025-09-30 18:25:37.947 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:25:37 compute-0 nova_compute[265391]: 2025-09-30 18:25:37.947 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:25:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:25:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:25:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:25:38.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:25:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:38 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:25:38 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2733581921' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:25:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1410: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 16 KiB/s wr, 1 op/s
Sep 30 18:25:38 compute-0 nova_compute[265391]: 2025-09-30 18:25:38.425 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:25:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:25:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:25:38.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:25:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:25:38] "GET /metrics HTTP/1.1" 200 46645 "" "Prometheus/2.51.0"
Sep 30 18:25:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:25:38] "GET /metrics HTTP/1.1" 200 46645 "" "Prometheus/2.51.0"
Sep 30 18:25:39 compute-0 ovn_controller[156242]: 2025-09-30T18:25:39Z|00137|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Sep 30 18:25:39 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2733581921' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:25:39 compute-0 ceph-mon[73755]: pgmap v1410: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 16 KiB/s wr, 1 op/s
Sep 30 18:25:39 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/376485844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:25:39 compute-0 sshd-session[324113]: Failed password for invalid user cuser from 14.225.220.107 port 44918 ssh2
Sep 30 18:25:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:39 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:39 compute-0 nova_compute[265391]: 2025-09-30 18:25:39.489 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:25:39 compute-0 nova_compute[265391]: 2025-09-30 18:25:39.489 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:25:39 compute-0 podman[324193]: 2025-09-30 18:25:39.55745947 +0000 UTC m=+0.079093047 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest)
Sep 30 18:25:39 compute-0 nova_compute[265391]: 2025-09-30 18:25:39.661 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:25:39 compute-0 nova_compute[265391]: 2025-09-30 18:25:39.663 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:25:39 compute-0 nova_compute[265391]: 2025-09-30 18:25:39.712 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.049s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:25:39 compute-0 nova_compute[265391]: 2025-09-30 18:25:39.713 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4207MB free_disk=39.90116500854492GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:25:39 compute-0 nova_compute[265391]: 2025-09-30 18:25:39.713 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:25:39 compute-0 nova_compute[265391]: 2025-09-30 18:25:39.713 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:25:40 compute-0 sshd-session[324113]: Received disconnect from 14.225.220.107 port 44918:11: Bye Bye [preauth]
Sep 30 18:25:40 compute-0 sshd-session[324113]: Disconnected from invalid user cuser 14.225.220.107 port 44918 [preauth]
Sep 30 18:25:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:25:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:25:40.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:25:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:40 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1411: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 3.3 KiB/s wr, 0 op/s
Sep 30 18:25:40 compute-0 ceph-mon[73755]: pgmap v1411: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 3.3 KiB/s wr, 0 op/s
Sep 30 18:25:40 compute-0 nova_compute[265391]: 2025-09-30 18:25:40.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:25:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:25:40.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:25:40 compute-0 nova_compute[265391]: 2025-09-30 18:25:40.839 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Instance 7569c2e7-ec68-4b21-b36b-7c828ac8af52 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Sep 30 18:25:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:25:41 compute-0 nova_compute[265391]: 2025-09-30 18:25:41.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:41 compute-0 nova_compute[265391]: 2025-09-30 18:25:41.346 2 WARNING nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Instance 05075776-ca3e-4416-bdd4-558a62d1cf69 has been moved to another host compute-1.ctlplane.example.com(compute-1.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}.
Sep 30 18:25:41 compute-0 nova_compute[265391]: 2025-09-30 18:25:41.347 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:25:41 compute-0 nova_compute[265391]: 2025-09-30 18:25:41.347 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=39GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:25:39 up  1:28,  0 user,  load average: 0.54, 0.71, 0.81\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_None': '1', 'num_os_type_None': '1', 'num_proj_c634e1c17ed54907969576a0eb8eff50': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:25:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:41 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_49] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384003240 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:41 compute-0 nova_compute[265391]: 2025-09-30 18:25:41.458 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:25:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:25:41 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/874261876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:25:41 compute-0 nova_compute[265391]: 2025-09-30 18:25:41.912 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:25:41 compute-0 nova_compute[265391]: 2025-09-30 18:25:41.918 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:25:41 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/874261876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:25:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:25:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:25:42.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:25:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:42 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_48] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1412: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 1023 B/s wr, 0 op/s
Sep 30 18:25:42 compute-0 nova_compute[265391]: 2025-09-30 18:25:42.427 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:25:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:25:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:25:42.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:25:42 compute-0 nova_compute[265391]: 2025-09-30 18:25:42.654 2 DEBUG nova.virt.libvirt.driver [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] Creating tmpfile /var/lib/nova/instances/tmp3yp66xia to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10944
Sep 30 18:25:42 compute-0 nova_compute[265391]: 2025-09-30 18:25:42.656 2 WARNING neutronclient.v2_0.client [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:25:42 compute-0 nova_compute[265391]: 2025-09-30 18:25:42.660 2 DEBUG nova.compute.manager [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3yp66xia',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.12/site-packages/nova/compute/manager.py:9086
Sep 30 18:25:42 compute-0 nova_compute[265391]: 2025-09-30 18:25:42.943 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:25:42 compute-0 nova_compute[265391]: 2025-09-30 18:25:42.944 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.231s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:25:42 compute-0 ceph-mon[73755]: pgmap v1412: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 1023 B/s wr, 0 op/s
Sep 30 18:25:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:43 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:25:43.781Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:25:43 compute-0 nova_compute[265391]: 2025-09-30 18:25:43.943 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:25:43 compute-0 nova_compute[265391]: 2025-09-30 18:25:43.944 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:25:43 compute-0 nova_compute[265391]: 2025-09-30 18:25:43.945 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:25:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:25:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:25:44.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:25:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:44 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1413: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 2.0 KiB/s wr, 0 op/s
Sep 30 18:25:44 compute-0 ceph-mon[73755]: pgmap v1413: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 2.0 KiB/s wr, 0 op/s
Sep 30 18:25:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:25:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:25:44.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:25:44 compute-0 nova_compute[265391]: 2025-09-30 18:25:44.688 2 WARNING neutronclient.v2_0.client [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:25:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:45 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_49] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384003240 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:45 compute-0 nova_compute[265391]: 2025-09-30 18:25:45.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:25:46 compute-0 nova_compute[265391]: 2025-09-30 18:25:46.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:25:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:25:46.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:25:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:46 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_48] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1414: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 2.0 KiB/s wr, 0 op/s
Sep 30 18:25:46 compute-0 nova_compute[265391]: 2025-09-30 18:25:46.425 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:25:46 compute-0 ceph-mon[73755]: pgmap v1414: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 2.0 KiB/s wr, 0 op/s
Sep 30 18:25:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:25:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:25:46.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:25:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:25:47.246Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:25:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:47 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001bd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:25:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:25:48.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:25:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:48 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1415: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 10 KiB/s wr, 1 op/s
Sep 30 18:25:48 compute-0 ceph-mon[73755]: pgmap v1415: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 10 KiB/s wr, 1 op/s
Sep 30 18:25:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.004000104s ======
Sep 30 18:25:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:25:48.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000104s
Sep 30 18:25:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:25:48] "GET /metrics HTTP/1.1" 200 46645 "" "Prometheus/2.51.0"
Sep 30 18:25:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:25:48] "GET /metrics HTTP/1.1" 200 46645 "" "Prometheus/2.51.0"
Sep 30 18:25:49 compute-0 nova_compute[265391]: 2025-09-30 18:25:49.234 2 DEBUG nova.compute.manager [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3yp66xia',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='05075776-ca3e-4416-bdd4-558a62d1cf69',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9311
Sep 30 18:25:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:49 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_49] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384003b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:49 compute-0 podman[324247]: 2025-09-30 18:25:49.552850323 +0000 UTC m=+0.067536676 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250930, container_name=iscsid)
Sep 30 18:25:49 compute-0 podman[324245]: 2025-09-30 18:25:49.557248847 +0000 UTC m=+0.074538598 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Sep 30 18:25:49 compute-0 podman[324248]: 2025-09-30 18:25:49.56158947 +0000 UTC m=+0.066770426 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, distribution-scope=public, release=1755695350, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Sep 30 18:25:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:50 compute-0 nova_compute[265391]: 2025-09-30 18:25:50.246 2 DEBUG oslo_concurrency.lockutils [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-05075776-ca3e-4416-bdd4-558a62d1cf69" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:25:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:25:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:25:50.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:25:50 compute-0 nova_compute[265391]: 2025-09-30 18:25:50.247 2 DEBUG oslo_concurrency.lockutils [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-05075776-ca3e-4416-bdd4-558a62d1cf69" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:25:50 compute-0 nova_compute[265391]: 2025-09-30 18:25:50.247 2 DEBUG nova.network.neutron [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:25:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:50 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_48] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1416: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 9.1 KiB/s wr, 1 op/s
Sep 30 18:25:50 compute-0 ceph-mon[73755]: pgmap v1416: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 9.1 KiB/s wr, 1 op/s
Sep 30 18:25:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:25:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:25:50.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:25:50 compute-0 nova_compute[265391]: 2025-09-30 18:25:50.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:50 compute-0 nova_compute[265391]: 2025-09-30 18:25:50.754 2 WARNING neutronclient.v2_0.client [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:25:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:25:51 compute-0 nova_compute[265391]: 2025-09-30 18:25:51.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:51 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001bd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:51 compute-0 nova_compute[265391]: 2025-09-30 18:25:51.644 2 WARNING neutronclient.v2_0.client [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:25:51 compute-0 nova_compute[265391]: 2025-09-30 18:25:51.788 2 DEBUG nova.network.neutron [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] Updating instance_info_cache with network_info: [{"id": "295c346d-8de9-4a50-883e-9a7e1ccdccc7", "address": "fa:16:3e:ec:82:fd", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap295c346d-8d", "ovs_interfaceid": "295c346d-8de9-4a50-883e-9a7e1ccdccc7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:25:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:25:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:25:52.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:25:52 compute-0 nova_compute[265391]: 2025-09-30 18:25:52.295 2 DEBUG oslo_concurrency.lockutils [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-05075776-ca3e-4416-bdd4-558a62d1cf69" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:25:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:25:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:25:52 compute-0 nova_compute[265391]: 2025-09-30 18:25:52.310 2 DEBUG nova.virt.libvirt.driver [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3yp66xia',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='05075776-ca3e-4416-bdd4-558a62d1cf69',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11737
Sep 30 18:25:52 compute-0 nova_compute[265391]: 2025-09-30 18:25:52.311 2 DEBUG nova.virt.libvirt.driver [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] Creating instance directory: /var/lib/nova/instances/05075776-ca3e-4416-bdd4-558a62d1cf69 pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11750
Sep 30 18:25:52 compute-0 nova_compute[265391]: 2025-09-30 18:25:52.312 2 DEBUG nova.virt.libvirt.driver [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] Ensure instance console log exists: /var/lib/nova/instances/05075776-ca3e-4416-bdd4-558a62d1cf69/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Sep 30 18:25:52 compute-0 nova_compute[265391]: 2025-09-30 18:25:52.312 2 DEBUG nova.virt.libvirt.driver [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11704
Sep 30 18:25:52 compute-0 nova_compute[265391]: 2025-09-30 18:25:52.314 2 DEBUG nova.virt.libvirt.vif [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='True',created_at=2025-09-30T18:24:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-763414646',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-763414646',id=17,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:25:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c634e1c17ed54907969576a0eb8eff50',ramdisk_id='',reservation_id='r-u3g6u6z0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteStrategies-1883747907',owner_user_name='tempest-TestExecuteStrategies-1883747907-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:25:11Z,user_data=None,user_id='623ef4a55c9e4fc28bb65e49246b5008',uuid=05075776-ca3e-4416-bdd4-558a62d1cf69,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "295c346d-8de9-4a50-883e-9a7e1ccdccc7", "address": "fa:16:3e:ec:82:fd", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap295c346d-8d", "ovs_interfaceid": "295c346d-8de9-4a50-883e-9a7e1ccdccc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Sep 30 18:25:52 compute-0 nova_compute[265391]: 2025-09-30 18:25:52.314 2 DEBUG nova.network.os_vif_util [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "295c346d-8de9-4a50-883e-9a7e1ccdccc7", "address": "fa:16:3e:ec:82:fd", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap295c346d-8d", "ovs_interfaceid": "295c346d-8de9-4a50-883e-9a7e1ccdccc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:25:52 compute-0 nova_compute[265391]: 2025-09-30 18:25:52.315 2 DEBUG nova.network.os_vif_util [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:82:fd,bridge_name='br-int',has_traffic_filtering=True,id=295c346d-8de9-4a50-883e-9a7e1ccdccc7,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap295c346d-8d') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:25:52 compute-0 nova_compute[265391]: 2025-09-30 18:25:52.316 2 DEBUG os_vif [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:82:fd,bridge_name='br-int',has_traffic_filtering=True,id=295c346d-8de9-4a50-883e-9a7e1ccdccc7,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap295c346d-8d') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Sep 30 18:25:52 compute-0 nova_compute[265391]: 2025-09-30 18:25:52.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:52 compute-0 nova_compute[265391]: 2025-09-30 18:25:52.317 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:25:52 compute-0 nova_compute[265391]: 2025-09-30 18:25:52.318 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:25:52 compute-0 nova_compute[265391]: 2025-09-30 18:25:52.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:52 compute-0 nova_compute[265391]: 2025-09-30 18:25:52.319 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': 'ddddbef0-2ef1-507f-a9ee-3f4ffec99c70', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:25:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:52 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:52 compute-0 nova_compute[265391]: 2025-09-30 18:25:52.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:52 compute-0 nova_compute[265391]: 2025-09-30 18:25:52.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:52 compute-0 nova_compute[265391]: 2025-09-30 18:25:52.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:52 compute-0 nova_compute[265391]: 2025-09-30 18:25:52.328 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap295c346d-8d, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:25:52 compute-0 nova_compute[265391]: 2025-09-30 18:25:52.328 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap295c346d-8d, col_values=(('qos', UUID('9ed5af49-621f-4616-966c-e41e744f5fbf')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:25:52 compute-0 nova_compute[265391]: 2025-09-30 18:25:52.328 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap295c346d-8d, col_values=(('external_ids', {'iface-id': '295c346d-8de9-4a50-883e-9a7e1ccdccc7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ec:82:fd', 'vm-uuid': '05075776-ca3e-4416-bdd4-558a62d1cf69'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:25:52 compute-0 nova_compute[265391]: 2025-09-30 18:25:52.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:52 compute-0 nova_compute[265391]: 2025-09-30 18:25:52.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:25:52 compute-0 NetworkManager[45059]: <info>  [1759256752.3330] manager: (tap295c346d-8d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Sep 30 18:25:52 compute-0 nova_compute[265391]: 2025-09-30 18:25:52.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:52 compute-0 nova_compute[265391]: 2025-09-30 18:25:52.342 2 INFO os_vif [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:82:fd,bridge_name='br-int',has_traffic_filtering=True,id=295c346d-8de9-4a50-883e-9a7e1ccdccc7,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap295c346d-8d')
Sep 30 18:25:52 compute-0 nova_compute[265391]: 2025-09-30 18:25:52.342 2 DEBUG nova.virt.libvirt.driver [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11851
Sep 30 18:25:52 compute-0 nova_compute[265391]: 2025-09-30 18:25:52.342 2 DEBUG nova.compute.manager [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3yp66xia',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='05075776-ca3e-4416-bdd4-558a62d1cf69',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9377
Sep 30 18:25:52 compute-0 nova_compute[265391]: 2025-09-30 18:25:52.343 2 WARNING neutronclient.v2_0.client [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:25:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:25:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1417: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 9.1 KiB/s wr, 1 op/s
Sep 30 18:25:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:25:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:25:52.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:25:53 compute-0 nova_compute[265391]: 2025-09-30 18:25:53.247 2 WARNING neutronclient.v2_0.client [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:25:53 compute-0 ceph-mon[73755]: pgmap v1417: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 9.1 KiB/s wr, 1 op/s
Sep 30 18:25:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:53 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384003b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:25:53.782Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:25:54 compute-0 sudo[324312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:25:54 compute-0 sudo[324312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:25:54 compute-0 sudo[324312]: pam_unix(sudo:session): session closed for user root
Sep 30 18:25:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:25:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:25:54.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:25:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:25:54.306 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:25:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:25:54.307 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:25:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:25:54.308 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:25:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:54 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_48] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:25:54.332 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:25:54 compute-0 nova_compute[265391]: 2025-09-30 18:25:54.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:25:54.333 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:25:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1418: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 9.1 KiB/s wr, 1 op/s
Sep 30 18:25:54 compute-0 ceph-mon[73755]: pgmap v1418: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 9.1 KiB/s wr, 1 op/s
Sep 30 18:25:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:25:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:25:54.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:25:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:55 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_49] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004c30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:55 compute-0 nova_compute[265391]: 2025-09-30 18:25:55.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:25:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:25:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:25:56.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:25:56 compute-0 nova_compute[265391]: 2025-09-30 18:25:56.307 2 DEBUG nova.network.neutron [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] Port 295c346d-8de9-4a50-883e-9a7e1ccdccc7 updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.12/site-packages/nova/network/neutron.py:356
Sep 30 18:25:56 compute-0 nova_compute[265391]: 2025-09-30 18:25:56.319 2 DEBUG nova.compute.manager [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3yp66xia',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='05075776-ca3e-4416-bdd4-558a62d1cf69',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9443
Sep 30 18:25:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:56 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004c30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1419: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 8.1 KiB/s wr, 1 op/s
Sep 30 18:25:56 compute-0 ceph-mon[73755]: pgmap v1419: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 8.1 KiB/s wr, 1 op/s
Sep 30 18:25:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:25:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:25:56.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:25:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:25:57.247Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:25:57 compute-0 nova_compute[265391]: 2025-09-30 18:25:57.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:57 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384003b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:25:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1146159520' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:25:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:25:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1146159520' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:25:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1146159520' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:25:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1146159520' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:25:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:25:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:25:58.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:25:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:58 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_48] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:58 compute-0 systemd[1]: Starting libvirt proxy daemon...
Sep 30 18:25:58 compute-0 systemd[1]: Started libvirt proxy daemon.
Sep 30 18:25:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1420: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 10 KiB/s wr, 1 op/s
Sep 30 18:25:58 compute-0 kernel: tap295c346d-8d: entered promiscuous mode
Sep 30 18:25:58 compute-0 NetworkManager[45059]: <info>  [1759256758.5255] manager: (tap295c346d-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/64)
Sep 30 18:25:58 compute-0 nova_compute[265391]: 2025-09-30 18:25:58.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:58 compute-0 ovn_controller[156242]: 2025-09-30T18:25:58Z|00138|binding|INFO|Claiming lport 295c346d-8de9-4a50-883e-9a7e1ccdccc7 for this additional chassis.
Sep 30 18:25:58 compute-0 ovn_controller[156242]: 2025-09-30T18:25:58Z|00139|binding|INFO|295c346d-8de9-4a50-883e-9a7e1ccdccc7: Claiming fa:16:3e:ec:82:fd 10.100.0.9
Sep 30 18:25:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:25:58.537 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:82:fd 10.100.0.9'], port_security=['fa:16:3e:ec:82:fd 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp'}, parent_port=[], requested_additional_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '05075776-ca3e-4416-bdd4-558a62d1cf69', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6901f664-336b-42d2-bbf7-58951befc8d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c634e1c17ed54907969576a0eb8eff50', 'neutron:revision_number': '10', 'neutron:security_group_ids': '11cf84c1-9641-409c-b5f5-6c5fe3a8afe5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c24d9de-651b-4bf8-842a-1286ab88b11d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=295c346d-8de9-4a50-883e-9a7e1ccdccc7) old=Port_Binding(additional_chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:25:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:25:58.538 166158 INFO neutron.agent.ovn.metadata.agent [-] Port 295c346d-8de9-4a50-883e-9a7e1ccdccc7 in datapath 6901f664-336b-42d2-bbf7-58951befc8d1 unbound from our chassis
Sep 30 18:25:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:25:58.540 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6901f664-336b-42d2-bbf7-58951befc8d1
Sep 30 18:25:58 compute-0 ovn_controller[156242]: 2025-09-30T18:25:58Z|00140|binding|INFO|Setting lport 295c346d-8de9-4a50-883e-9a7e1ccdccc7 ovn-installed in OVS
Sep 30 18:25:58 compute-0 nova_compute[265391]: 2025-09-30 18:25:58.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:25:58.561 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[2d4ffa37-824b-4c24-a2f6-2df9e7e441b9]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:25:58 compute-0 systemd-udevd[324374]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:25:58 compute-0 systemd-machined[219917]: New machine qemu-12-instance-00000011.
Sep 30 18:25:58 compute-0 NetworkManager[45059]: <info>  [1759256758.5819] device (tap295c346d-8d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 18:25:58 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-00000011.
Sep 30 18:25:58 compute-0 NetworkManager[45059]: <info>  [1759256758.5833] device (tap295c346d-8d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Sep 30 18:25:58 compute-0 ceph-mon[73755]: pgmap v1420: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 10 KiB/s wr, 1 op/s
Sep 30 18:25:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:25:58.600 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[700bcbaf-f180-4cb7-8bad-4abe53625fca]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:25:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:25:58.604 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[ace93512-e174-4d92-9a40-6e2c5ef166e1]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:25:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:25:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:25:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:25:58.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:25:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:25:58.636 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[699d7dfa-6df1-460e-91b8-914553326678]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:25:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:25:58.655 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[ddc3ab1c-88ef-43d9-9445-91caa0e78864]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6901f664-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:41:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528275, 'reachable_time': 37779, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324386, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:25:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:25:58.675 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[c41044d2-9972-4a80-9f19-b7cf765b5b78]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6901f664-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 528291, 'tstamp': 528291}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324387, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6901f664-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 528295, 'tstamp': 528295}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324387, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:25:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:25:58.677 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6901f664-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:25:58 compute-0 nova_compute[265391]: 2025-09-30 18:25:58.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:58 compute-0 nova_compute[265391]: 2025-09-30 18:25:58.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:25:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:25:58.683 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6901f664-30, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:25:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:25:58.683 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:25:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:25:58.683 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6901f664-30, col_values=(('external_ids', {'iface-id': '5b6cbf18-1826-41d0-920f-e9db4f1a1832'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:25:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:25:58.684 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
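The DelPortCommand/AddPortCommand/DbSetCommand entries above are ovsdbapp IDL transactions; "Transaction caused no change" means the desired state was already present. A minimal sketch of issuing the same three commands with ovsdbapp, assuming the default local ovsdb socket (the agent reuses its own long-lived IDL connection instead of connecting per call):

    # Sketch of the ovsdbapp calls behind the logged DelPort/AddPort/DbSet commands.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap6901f664-30', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap6901f664-30', may_exist=True))
        txn.add(api.db_set('Interface', 'tap6901f664-30',
                           ('external_ids', {'iface-id': '5b6cbf18-1826-41d0-920f-e9db4f1a1832'})))

The if_exists / may_exist flags make the sequence idempotent, which is why re-running it can legitimately report "Transaction caused no change".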
Sep 30 18:25:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:25:58.685 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[ef357084-6263-4a04-91d1-932631cbb247]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-6901f664-336b-42d2-bbf7-58951befc8d1\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 6901f664-336b-42d2-bbf7-58951befc8d1\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
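Decoded (with the \n escapes expanded; content otherwise unchanged), the haproxy configuration carried in that privsep reply is the per-network metadata proxy config:

    global
        log         /dev/log local0 debug
        log-tag     haproxy-metadata-proxy-6901f664-336b-42d2-bbf7-58951befc8d1
        user        root
        group       root
        maxconn     1024
        pidfile     /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy
        daemon

    defaults
        log global
        mode http
        option httplog
        option dontlognull
        option http-server-close
        option forwardfor
        retries                 3
        timeout http-request    30s
        timeout connect         30s
        timeout client          32s
        timeout server          32s
        timeout http-keep-alive 30s

    listen listener
        bind 169.254.169.254:80

        server metadata /var/lib/neutron/metadata_proxy

        http-request add-header X-OVN-Network-ID 6901f664-336b-42d2-bbf7-58951befc8d1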
Sep 30 18:25:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:25:58] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:25:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:25:58] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:25:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:25:59 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:25:59 compute-0 podman[276673]: time="2025-09-30T18:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:25:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:25:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10751 "" "Go-http-client/1.1"
Sep 30 18:26:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:26:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:26:00.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:26:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:00 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001bd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1421: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 2.4 KiB/s wr, 0 op/s
Sep 30 18:26:00 compute-0 ceph-mon[73755]: pgmap v1421: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 2.4 KiB/s wr, 0 op/s
Sep 30 18:26:00 compute-0 nova_compute[265391]: 2025-09-30 18:26:00.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:26:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:26:00.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:26:00 compute-0 ovn_controller[156242]: 2025-09-30T18:26:00Z|00141|binding|INFO|Claiming lport 295c346d-8de9-4a50-883e-9a7e1ccdccc7 for this chassis.
Sep 30 18:26:00 compute-0 ovn_controller[156242]: 2025-09-30T18:26:00Z|00142|binding|INFO|295c346d-8de9-4a50-883e-9a7e1ccdccc7: Claiming fa:16:3e:ec:82:fd 10.100.0.9
Sep 30 18:26:00 compute-0 ovn_controller[156242]: 2025-09-30T18:26:00Z|00143|binding|INFO|Setting lport 295c346d-8de9-4a50-883e-9a7e1ccdccc7 up in Southbound
Sep 30 18:26:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:26:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:01 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001bd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:01 compute-0 openstack_network_exporter[279566]: ERROR   18:26:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:26:01 compute-0 openstack_network_exporter[279566]: ERROR   18:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:26:01 compute-0 openstack_network_exporter[279566]: ERROR   18:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:26:01 compute-0 openstack_network_exporter[279566]: ERROR   18:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:26:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:26:01 compute-0 openstack_network_exporter[279566]: ERROR   18:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:26:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:26:01 compute-0 anacron[4242]: Job `cron.monthly' started
Sep 30 18:26:01 compute-0 anacron[4242]: Job `cron.monthly' terminated
Sep 30 18:26:01 compute-0 anacron[4242]: Normal exit (3 jobs run)
Sep 30 18:26:01 compute-0 nova_compute[265391]: 2025-09-30 18:26:01.920 2 INFO nova.compute.manager [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] Post operation of migration started
Sep 30 18:26:01 compute-0 nova_compute[265391]: 2025-09-30 18:26:01.921 2 WARNING neutronclient.v2_0.client [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
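The warning recommends openstacksdk over the deprecated neutronclient bindings. A minimal sketch of the SDK equivalent of the port lookups nova performs during this post-migration step, assuming credentials come from clouds.yaml or OS_* environment variables; the instance UUID is the one from this log:

    # Sketch only: list the Neutron ports attached to the migrating instance via openstacksdk.
    import openstack

    conn = openstack.connect()   # reads clouds.yaml or OS_* environment variables
    for port in conn.network.ports(device_id='05075776-ca3e-4416-bdd4-558a62d1cf69'):
        print(port.id, port.mac_address,
              [ip['ip_address'] for ip in port.fixed_ips])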
Sep 30 18:26:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:26:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:26:02.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:26:02 compute-0 nova_compute[265391]: 2025-09-30 18:26:02.323 2 WARNING neutronclient.v2_0.client [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:26:02 compute-0 nova_compute[265391]: 2025-09-30 18:26:02.324 2 WARNING neutronclient.v2_0.client [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:26:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:02 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_48] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001bd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:02 compute-0 nova_compute[265391]: 2025-09-30 18:26:02.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:02 compute-0 nova_compute[265391]: 2025-09-30 18:26:02.403 2 DEBUG oslo_concurrency.lockutils [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-05075776-ca3e-4416-bdd4-558a62d1cf69" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:26:02 compute-0 nova_compute[265391]: 2025-09-30 18:26:02.404 2 DEBUG oslo_concurrency.lockutils [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-05075776-ca3e-4416-bdd4-558a62d1cf69" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:26:02 compute-0 nova_compute[265391]: 2025-09-30 18:26:02.404 2 DEBUG nova.network.neutron [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:26:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1422: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 2.3 KiB/s wr, 0 op/s
Sep 30 18:26:02 compute-0 ceph-mon[73755]: pgmap v1422: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 2.3 KiB/s wr, 0 op/s
Sep 30 18:26:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:26:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:26:02.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:26:02 compute-0 nova_compute[265391]: 2025-09-30 18:26:02.910 2 WARNING neutronclient.v2_0.client [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:26:03 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:03.334 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:26:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:03 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_48] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:03 compute-0 nova_compute[265391]: 2025-09-30 18:26:03.482 2 WARNING neutronclient.v2_0.client [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:26:03 compute-0 nova_compute[265391]: 2025-09-30 18:26:03.670 2 DEBUG nova.network.neutron [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] Updating instance_info_cache with network_info: [{"id": "295c346d-8de9-4a50-883e-9a7e1ccdccc7", "address": "fa:16:3e:ec:82:fd", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap295c346d-8d", "ovs_interfaceid": "295c346d-8de9-4a50-883e-9a7e1ccdccc7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
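The network_info blob in the cache update above is plain JSON. A short sketch of pulling out the fields the rest of this sequence relies on (device name, MAC, fixed IP), using a trimmed copy of the logged data:

    import json

    # Trimmed copy of the network_info JSON from the log line above (most keys omitted).
    blob = '''[{"id": "295c346d-8de9-4a50-883e-9a7e1ccdccc7",
                "address": "fa:16:3e:ec:82:fd",
                "devname": "tap295c346d-8d",
                "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1",
                            "subnets": [{"cidr": "10.100.0.0/28",
                                         "ips": [{"address": "10.100.0.9", "type": "fixed"}]}]}}]'''

    for vif in json.loads(blob):
        fixed = vif["network"]["subnets"][0]["ips"][0]["address"]
        print(vif["devname"], vif["address"], fixed)   # tap295c346d-8d fa:16:3e:ec:82:fd 10.100.0.9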
Sep 30 18:26:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:26:03.783Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:26:04 compute-0 nova_compute[265391]: 2025-09-30 18:26:04.177 2 DEBUG oslo_concurrency.lockutils [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-05075776-ca3e-4416-bdd4-558a62d1cf69" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:26:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:26:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:26:04.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:26:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:04 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_49] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004c90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1423: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.6 KiB/s rd, 2.3 KiB/s wr, 5 op/s
Sep 30 18:26:04 compute-0 ceph-mon[73755]: pgmap v1423: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.6 KiB/s rd, 2.3 KiB/s wr, 5 op/s
Sep 30 18:26:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:26:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:26:04.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:26:04 compute-0 nova_compute[265391]: 2025-09-30 18:26:04.700 2 DEBUG oslo_concurrency.lockutils [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:26:04 compute-0 nova_compute[265391]: 2025-09-30 18:26:04.701 2 DEBUG oslo_concurrency.lockutils [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:26:04 compute-0 nova_compute[265391]: 2025-09-30 18:26:04.701 2 DEBUG oslo_concurrency.lockutils [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
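The Acquiring/acquired/released triplet above comes from oslo.concurrency's lockutils, which nova wraps with a "nova-" lock prefix. A minimal sketch of the same primitive; the function body is a placeholder:

    from oslo_concurrency import lockutils

    synchronized = lockutils.synchronized_with_prefix('nova-')

    @synchronized('compute_resources')
    def allocate_pci_devices_for_instance(context, instance):
        # work done while holding the "compute_resources" lock
        pass

    # Equivalent context-manager form:
    with lockutils.lock('compute_resources', lock_file_prefix='nova-'):
        pass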
Sep 30 18:26:04 compute-0 nova_compute[265391]: 2025-09-30 18:26:04.708 2 INFO nova.virt.libvirt.driver [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Sep 30 18:26:04 compute-0 virtqemud[265263]: Domain id=12 name='instance-00000011' uuid=05075776-ca3e-4416-bdd4-558a62d1cf69 is tainted: custom-monitor
Sep 30 18:26:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:05 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001bd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:05 compute-0 nova_compute[265391]: 2025-09-30 18:26:05.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:05 compute-0 nova_compute[265391]: 2025-09-30 18:26:05.718 2 INFO nova.virt.libvirt.driver [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Sep 30 18:26:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:26:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:26:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:26:06.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:26:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:06 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384003b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1424: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.5 KiB/s rd, 2.3 KiB/s wr, 5 op/s
Sep 30 18:26:06 compute-0 ceph-mon[73755]: pgmap v1424: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.5 KiB/s rd, 2.3 KiB/s wr, 5 op/s
Sep 30 18:26:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:26:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:26:06.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:26:06 compute-0 nova_compute[265391]: 2025-09-30 18:26:06.726 2 INFO nova.virt.libvirt.driver [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] Sending announce-self command to QEMU monitor. Attempt 3 of 3
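announce-self is a QMP command that makes QEMU re-send self-announcement frames for the guest's NICs so the fabric relearns its location after live migration; nova issues it via libvirt's monitor passthrough, which is what produced the 'custom-monitor' taint logged earlier, and retries it three times as seen in the Attempt 1/2/3 lines. A rough standalone sketch of the raw QMP exchange, with an assumed monitor socket path and illustrative timing arguments:

    import json, socket

    # Assumed monitor socket path for instance-00000011; in reality nova goes through libvirt.
    s = socket.socket(socket.AF_UNIX)
    s.connect('/run/qemu/qmp-instance-00000011.sock')
    r = s.makefile('r')

    def qmp(cmd, **args):
        msg = {"execute": cmd}
        if args:
            msg["arguments"] = args
        s.sendall(json.dumps(msg).encode() + b'\r\n')
        return json.loads(r.readline())

    print(r.readline())                      # QMP greeting banner sent on connect
    qmp("qmp_capabilities")                  # capability negotiation must come first
    qmp("announce-self", initial=100, max=550, rounds=5, step=100)   # timings are illustrative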
Sep 30 18:26:06 compute-0 nova_compute[265391]: 2025-09-30 18:26:06.731 2 DEBUG nova.compute.manager [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Sep 30 18:26:07 compute-0 nova_compute[265391]: 2025-09-30 18:26:07.243 2 DEBUG nova.objects.instance [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.12/site-packages/nova/objects/instance.py:1067
Sep 30 18:26:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:26:07.248Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:26:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:26:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:26:07 compute-0 nova_compute[265391]: 2025-09-30 18:26:07.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:07 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_49] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0384003b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:26:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
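The audit entries show the mgr dispatching "osd blocklist ls" to the monitor. A minimal sketch of the same monitor command through the librados Python binding, assuming a readable /etc/ceph/ceph.conf and client keyring on the node (the real caller here is the mgr daemon, not an external client):

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "osd blocklist ls", "format": "json"}), b'')
    print(ret, json.loads(outbuf or b'[]'))
    cluster.shutdown()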
Sep 30 18:26:07 compute-0 podman[324442]: 2025-09-30 18:26:07.544303275 +0000 UTC m=+0.077466105 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Sep 30 18:26:07 compute-0 podman[324441]: 2025-09-30 18:26:07.578631917 +0000 UTC m=+0.114595599 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.4)
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002276674521390897 of space, bias 1.0, pg target 0.45533490427817935 quantized to 32 (current 32)
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
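Each pg_autoscaler line reports the pool's capacity ratio, bias, raw pg target, and the quantized result. From the logged numbers alone, the raw target here is ratio x bias x 200 for every pool, and the quantized value is that target rounded to a power of two but never below the per-pool floor observed in this log (32 for most pools, 16 for cephfs.cephfs.meta). A quick check of that arithmetic, treating the factor 200 as an observation from these lines rather than Ceph's documented formula:

    # Re-deriving the pg_autoscaler targets logged above; 200 is inferred from the data.
    pools = {
        '.mgr':               (1.0778624975581169e-05, 1.0),   # logged target 0.0021557249951162337
        'vms':                (0.002276674521390897,   1.0),   # logged target 0.45533490427817935
        'images':             (0.000998787452383278,   1.0),   # logged target 0.1997574904766556
        'cephfs.cephfs.meta': (7.630884938464543e-07,  4.0),   # logged target 0.0006104707950771635
    }
    for name, (ratio, bias) in pools.items():
        raw = ratio * bias * 200    # reproduces the logged targets up to last-digit float rounding
        print(f'{name:20s} {raw}')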
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:26:07
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['.nfs', '.rgw.root', 'default.rgw.meta', 'backups', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'vms', 'volumes']
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:26:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:26:08 compute-0 nova_compute[265391]: 2025-09-30 18:26:08.262 2 WARNING neutronclient.v2_0.client [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:26:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:26:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:26:08.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:26:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:08 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_48] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004cb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:08 compute-0 nova_compute[265391]: 2025-09-30 18:26:08.373 2 WARNING neutronclient.v2_0.client [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:26:08 compute-0 nova_compute[265391]: 2025-09-30 18:26:08.374 2 WARNING neutronclient.v2_0.client [None req-0b86d113-34eb-4c36-aeb5-1552c729aee1 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:26:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1425: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.5 KiB/s rd, 2.3 KiB/s wr, 5 op/s
Sep 30 18:26:08 compute-0 ceph-mon[73755]: pgmap v1425: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.5 KiB/s rd, 2.3 KiB/s wr, 5 op/s
Sep 30 18:26:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:26:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:26:08.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:26:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:26:08] "GET /metrics HTTP/1.1" 200 46645 "" "Prometheus/2.51.0"
Sep 30 18:26:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:26:08] "GET /metrics HTTP/1.1" 200 46645 "" "Prometheus/2.51.0"
Sep 30 18:26:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:09 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001bd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:26:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:26:10.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:26:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:10 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1426: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.5 KiB/s rd, 0 B/s wr, 5 op/s
Sep 30 18:26:10 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1603123180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:26:10 compute-0 ceph-mon[73755]: pgmap v1426: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.5 KiB/s rd, 0 B/s wr, 5 op/s
Sep 30 18:26:10 compute-0 podman[324500]: 2025-09-30 18:26:10.518411303 +0000 UTC m=+0.054370595 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:26:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:26:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:26:10.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:26:10 compute-0 nova_compute[265391]: 2025-09-30 18:26:10.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:26:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:11 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_49] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:11 compute-0 nova_compute[265391]: 2025-09-30 18:26:11.908 2 DEBUG oslo_concurrency.lockutils [None req-7d845c7a-eca9-4163-af49-c169e029900a 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "05075776-ca3e-4416-bdd4-558a62d1cf69" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:26:11 compute-0 nova_compute[265391]: 2025-09-30 18:26:11.909 2 DEBUG oslo_concurrency.lockutils [None req-7d845c7a-eca9-4163-af49-c169e029900a 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "05075776-ca3e-4416-bdd4-558a62d1cf69" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:26:11 compute-0 nova_compute[265391]: 2025-09-30 18:26:11.910 2 DEBUG oslo_concurrency.lockutils [None req-7d845c7a-eca9-4163-af49-c169e029900a 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "05075776-ca3e-4416-bdd4-558a62d1cf69-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:26:11 compute-0 nova_compute[265391]: 2025-09-30 18:26:11.910 2 DEBUG oslo_concurrency.lockutils [None req-7d845c7a-eca9-4163-af49-c169e029900a 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "05075776-ca3e-4416-bdd4-558a62d1cf69-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:26:11 compute-0 nova_compute[265391]: 2025-09-30 18:26:11.910 2 DEBUG oslo_concurrency.lockutils [None req-7d845c7a-eca9-4163-af49-c169e029900a 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "05075776-ca3e-4416-bdd4-558a62d1cf69-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:26:11 compute-0 nova_compute[265391]: 2025-09-30 18:26:11.931 2 INFO nova.compute.manager [None req-7d845c7a-eca9-4163-af49-c169e029900a 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] Terminating instance
Sep 30 18:26:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:26:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:26:12.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:26:12 compute-0 nova_compute[265391]: 2025-09-30 18:26:12.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:12 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_51] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004cb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1427: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.5 KiB/s rd, 0 B/s wr, 5 op/s
Sep 30 18:26:12 compute-0 nova_compute[265391]: 2025-09-30 18:26:12.451 2 DEBUG nova.compute.manager [None req-7d845c7a-eca9-4163-af49-c169e029900a 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Sep 30 18:26:12 compute-0 ceph-mon[73755]: pgmap v1427: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.5 KiB/s rd, 0 B/s wr, 5 op/s
Sep 30 18:26:12 compute-0 kernel: tap295c346d-8d (unregistering): left promiscuous mode
Sep 30 18:26:12 compute-0 NetworkManager[45059]: <info>  [1759256772.5100] device (tap295c346d-8d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 18:26:12 compute-0 nova_compute[265391]: 2025-09-30 18:26:12.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:12 compute-0 nova_compute[265391]: 2025-09-30 18:26:12.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:12 compute-0 ovn_controller[156242]: 2025-09-30T18:26:12Z|00144|binding|INFO|Releasing lport 295c346d-8de9-4a50-883e-9a7e1ccdccc7 from this chassis (sb_readonly=0)
Sep 30 18:26:12 compute-0 ovn_controller[156242]: 2025-09-30T18:26:12Z|00145|binding|INFO|Setting lport 295c346d-8de9-4a50-883e-9a7e1ccdccc7 down in Southbound
Sep 30 18:26:12 compute-0 ovn_controller[156242]: 2025-09-30T18:26:12Z|00146|binding|INFO|Removing iface tap295c346d-8d ovn-installed in OVS
Sep 30 18:26:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:12.528 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:82:fd 10.100.0.9'], port_security=['fa:16:3e:ec:82:fd 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '05075776-ca3e-4416-bdd4-558a62d1cf69', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6901f664-336b-42d2-bbf7-58951befc8d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c634e1c17ed54907969576a0eb8eff50', 'neutron:revision_number': '17', 'neutron:security_group_ids': '11cf84c1-9641-409c-b5f5-6c5fe3a8afe5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c24d9de-651b-4bf8-842a-1286ab88b11d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=295c346d-8de9-4a50-883e-9a7e1ccdccc7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:26:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:12.529 166158 INFO neutron.agent.ovn.metadata.agent [-] Port 295c346d-8de9-4a50-883e-9a7e1ccdccc7 in datapath 6901f664-336b-42d2-bbf7-58951befc8d1 unbound from our chassis
Sep 30 18:26:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:12.532 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6901f664-336b-42d2-bbf7-58951befc8d1
Sep 30 18:26:12 compute-0 nova_compute[265391]: 2025-09-30 18:26:12.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:12.549 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[2be95d2c-a2eb-4b41-96a9-1582ea9139a1]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:12 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000011.scope: Deactivated successfully.
Sep 30 18:26:12 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000011.scope: Consumed 1.824s CPU time.
Sep 30 18:26:12 compute-0 systemd-machined[219917]: Machine qemu-12-instance-00000011 terminated.
Sep 30 18:26:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:12.581 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[fd925b88-278d-4925-a6cf-a48fcd8d3465]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:12.584 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[1a0752dc-3201-493f-a80c-7c017b67fe3a]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:12.618 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[e88e27a9-9148-4dc9-ac28-b910d51de93f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:12.637 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[8161711c-2ed0-4f09-986d-9fa141d06bed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6901f664-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:41:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528275, 'reachable_time': 37779, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324534, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:26:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:26:12.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:26:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:12.659 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[726c02c2-03ea-467e-9cff-41f0f8662921]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6901f664-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 528291, 'tstamp': 528291}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324535, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6901f664-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 528295, 'tstamp': 528295}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324535, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:12.662 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6901f664-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:26:12 compute-0 nova_compute[265391]: 2025-09-30 18:26:12.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:12 compute-0 nova_compute[265391]: 2025-09-30 18:26:12.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:12 compute-0 nova_compute[265391]: 2025-09-30 18:26:12.702 2 DEBUG nova.compute.manager [req-f58477d1-4054-46c3-9215-1b498aa20383 req-430ab8df-87c1-4a22-b0fa-f6c01ecd2835 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] Received event network-vif-unplugged-295c346d-8de9-4a50-883e-9a7e1ccdccc7 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:26:12 compute-0 nova_compute[265391]: 2025-09-30 18:26:12.703 2 DEBUG oslo_concurrency.lockutils [req-f58477d1-4054-46c3-9215-1b498aa20383 req-430ab8df-87c1-4a22-b0fa-f6c01ecd2835 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "05075776-ca3e-4416-bdd4-558a62d1cf69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:26:12 compute-0 nova_compute[265391]: 2025-09-30 18:26:12.703 2 DEBUG oslo_concurrency.lockutils [req-f58477d1-4054-46c3-9215-1b498aa20383 req-430ab8df-87c1-4a22-b0fa-f6c01ecd2835 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "05075776-ca3e-4416-bdd4-558a62d1cf69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:26:12 compute-0 nova_compute[265391]: 2025-09-30 18:26:12.703 2 DEBUG oslo_concurrency.lockutils [req-f58477d1-4054-46c3-9215-1b498aa20383 req-430ab8df-87c1-4a22-b0fa-f6c01ecd2835 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "05075776-ca3e-4416-bdd4-558a62d1cf69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:26:12 compute-0 nova_compute[265391]: 2025-09-30 18:26:12.703 2 DEBUG nova.compute.manager [req-f58477d1-4054-46c3-9215-1b498aa20383 req-430ab8df-87c1-4a22-b0fa-f6c01ecd2835 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] No waiting events found dispatching network-vif-unplugged-295c346d-8de9-4a50-883e-9a7e1ccdccc7 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:26:12 compute-0 nova_compute[265391]: 2025-09-30 18:26:12.703 2 DEBUG nova.compute.manager [req-f58477d1-4054-46c3-9215-1b498aa20383 req-430ab8df-87c1-4a22-b0fa-f6c01ecd2835 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] Received event network-vif-unplugged-295c346d-8de9-4a50-883e-9a7e1ccdccc7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:26:12 compute-0 nova_compute[265391]: 2025-09-30 18:26:12.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:12.709 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6901f664-30, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:26:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:12.709 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:26:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:12.709 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6901f664-30, col_values=(('external_ids', {'iface-id': '5b6cbf18-1826-41d0-920f-e9db4f1a1832'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:26:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:12.710 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:26:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:12.712 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[2c4b36d7-d21f-4f1b-a7dc-4cc485dc04ba]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-6901f664-336b-42d2-bbf7-58951befc8d1\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 6901f664-336b-42d2-bbf7-58951befc8d1\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:12 compute-0 nova_compute[265391]: 2025-09-30 18:26:12.717 2 INFO nova.virt.libvirt.driver [-] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] Instance destroyed successfully.
Sep 30 18:26:12 compute-0 nova_compute[265391]: 2025-09-30 18:26:12.717 2 DEBUG nova.objects.instance [None req-7d845c7a-eca9-4163-af49-c169e029900a 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lazy-loading 'resources' on Instance uuid 05075776-ca3e-4416-bdd4-558a62d1cf69 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:26:13 compute-0 nova_compute[265391]: 2025-09-30 18:26:13.226 2 DEBUG nova.virt.libvirt.vif [None req-7d845c7a-eca9-4163-af49-c169e029900a 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,compute_id=1,config_drive='True',created_at=2025-09-30T18:24:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-763414646',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-763414646',id=17,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:25:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c634e1c17ed54907969576a0eb8eff50',ramdisk_id='',reservation_id='r-u3g6u6z0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,manager,admin',clean_attempts='1',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteStrategies-1883747907',owner_user_name='tempest-TestExecuteStrategies-1883747907-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:26:07Z,user_data=None,user_id='623ef4a55c9e4fc28bb65e49246b5008',uuid=05075776-ca3e-4416-bdd4-558a62d1cf69,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "295c346d-8de9-4a50-883e-9a7e1ccdccc7", "address": "fa:16:3e:ec:82:fd", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap295c346d-8d", "ovs_interfaceid": "295c346d-8de9-4a50-883e-9a7e1ccdccc7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Sep 30 18:26:13 compute-0 nova_compute[265391]: 2025-09-30 18:26:13.227 2 DEBUG nova.network.os_vif_util [None req-7d845c7a-eca9-4163-af49-c169e029900a 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converting VIF {"id": "295c346d-8de9-4a50-883e-9a7e1ccdccc7", "address": "fa:16:3e:ec:82:fd", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap295c346d-8d", "ovs_interfaceid": "295c346d-8de9-4a50-883e-9a7e1ccdccc7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:26:13 compute-0 nova_compute[265391]: 2025-09-30 18:26:13.229 2 DEBUG nova.network.os_vif_util [None req-7d845c7a-eca9-4163-af49-c169e029900a 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ec:82:fd,bridge_name='br-int',has_traffic_filtering=True,id=295c346d-8de9-4a50-883e-9a7e1ccdccc7,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap295c346d-8d') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:26:13 compute-0 nova_compute[265391]: 2025-09-30 18:26:13.229 2 DEBUG os_vif [None req-7d845c7a-eca9-4163-af49-c169e029900a 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:82:fd,bridge_name='br-int',has_traffic_filtering=True,id=295c346d-8de9-4a50-883e-9a7e1ccdccc7,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap295c346d-8d') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Sep 30 18:26:13 compute-0 nova_compute[265391]: 2025-09-30 18:26:13.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:13 compute-0 nova_compute[265391]: 2025-09-30 18:26:13.234 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap295c346d-8d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:26:13 compute-0 nova_compute[265391]: 2025-09-30 18:26:13.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:13 compute-0 nova_compute[265391]: 2025-09-30 18:26:13.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:26:13 compute-0 nova_compute[265391]: 2025-09-30 18:26:13.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:13 compute-0 nova_compute[265391]: 2025-09-30 18:26:13.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:13 compute-0 nova_compute[265391]: 2025-09-30 18:26:13.240 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=9ed5af49-621f-4616-966c-e41e744f5fbf) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:26:13 compute-0 nova_compute[265391]: 2025-09-30 18:26:13.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:13 compute-0 nova_compute[265391]: 2025-09-30 18:26:13.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:13 compute-0 nova_compute[265391]: 2025-09-30 18:26:13.247 2 INFO os_vif [None req-7d845c7a-eca9-4163-af49-c169e029900a 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:82:fd,bridge_name='br-int',has_traffic_filtering=True,id=295c346d-8de9-4a50-883e-9a7e1ccdccc7,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap295c346d-8d')
Sep 30 18:26:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:13 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001bd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:13 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/459333809' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:26:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:26:13.784Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:26:13 compute-0 nova_compute[265391]: 2025-09-30 18:26:13.863 2 INFO nova.virt.libvirt.driver [None req-7d845c7a-eca9-4163-af49-c169e029900a 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] Deleting instance files /var/lib/nova/instances/05075776-ca3e-4416-bdd4-558a62d1cf69_del
Sep 30 18:26:13 compute-0 nova_compute[265391]: 2025-09-30 18:26:13.864 2 INFO nova.virt.libvirt.driver [None req-7d845c7a-eca9-4163-af49-c169e029900a 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] Deletion of /var/lib/nova/instances/05075776-ca3e-4416-bdd4-558a62d1cf69_del complete
Sep 30 18:26:14 compute-0 sudo[324568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:26:14 compute-0 sudo[324568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:26:14 compute-0 sudo[324568]: pam_unix(sudo:session): session closed for user root
Sep 30 18:26:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:26:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:26:14.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:26:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:14 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c002550 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:14 compute-0 nova_compute[265391]: 2025-09-30 18:26:14.381 2 INFO nova.compute.manager [None req-7d845c7a-eca9-4163-af49-c169e029900a 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] Took 1.93 seconds to destroy the instance on the hypervisor.
Sep 30 18:26:14 compute-0 nova_compute[265391]: 2025-09-30 18:26:14.382 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-7d845c7a-eca9-4163-af49-c169e029900a 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Sep 30 18:26:14 compute-0 nova_compute[265391]: 2025-09-30 18:26:14.382 2 DEBUG nova.compute.manager [-] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Sep 30 18:26:14 compute-0 nova_compute[265391]: 2025-09-30 18:26:14.383 2 DEBUG nova.network.neutron [-] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Sep 30 18:26:14 compute-0 nova_compute[265391]: 2025-09-30 18:26:14.383 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:26:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1428: 353 pgs: 353 active+clean; 159 MiB data, 316 MiB used, 40 GiB / 40 GiB avail; 16 KiB/s rd, 8.5 KiB/s wr, 22 op/s
Sep 30 18:26:14 compute-0 ceph-mon[73755]: pgmap v1428: 353 pgs: 353 active+clean; 159 MiB data, 316 MiB used, 40 GiB / 40 GiB avail; 16 KiB/s rd, 8.5 KiB/s wr, 22 op/s
Sep 30 18:26:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:26:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:26:14.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:26:14 compute-0 nova_compute[265391]: 2025-09-30 18:26:14.785 2 DEBUG nova.compute.manager [req-fc14e5ec-cb73-4e95-8f89-9db4e69ea972 req-13cb1ed2-8ad3-4272-9ee1-b5b09484aced 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] Received event network-vif-unplugged-295c346d-8de9-4a50-883e-9a7e1ccdccc7 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:26:14 compute-0 nova_compute[265391]: 2025-09-30 18:26:14.785 2 DEBUG oslo_concurrency.lockutils [req-fc14e5ec-cb73-4e95-8f89-9db4e69ea972 req-13cb1ed2-8ad3-4272-9ee1-b5b09484aced 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "05075776-ca3e-4416-bdd4-558a62d1cf69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:26:14 compute-0 nova_compute[265391]: 2025-09-30 18:26:14.786 2 DEBUG oslo_concurrency.lockutils [req-fc14e5ec-cb73-4e95-8f89-9db4e69ea972 req-13cb1ed2-8ad3-4272-9ee1-b5b09484aced 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "05075776-ca3e-4416-bdd4-558a62d1cf69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:26:14 compute-0 nova_compute[265391]: 2025-09-30 18:26:14.786 2 DEBUG oslo_concurrency.lockutils [req-fc14e5ec-cb73-4e95-8f89-9db4e69ea972 req-13cb1ed2-8ad3-4272-9ee1-b5b09484aced 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "05075776-ca3e-4416-bdd4-558a62d1cf69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:26:14 compute-0 nova_compute[265391]: 2025-09-30 18:26:14.787 2 DEBUG nova.compute.manager [req-fc14e5ec-cb73-4e95-8f89-9db4e69ea972 req-13cb1ed2-8ad3-4272-9ee1-b5b09484aced 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] No waiting events found dispatching network-vif-unplugged-295c346d-8de9-4a50-883e-9a7e1ccdccc7 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:26:14 compute-0 nova_compute[265391]: 2025-09-30 18:26:14.787 2 DEBUG nova.compute.manager [req-fc14e5ec-cb73-4e95-8f89-9db4e69ea972 req-13cb1ed2-8ad3-4272-9ee1-b5b09484aced 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] Received event network-vif-unplugged-295c346d-8de9-4a50-883e-9a7e1ccdccc7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:26:15 compute-0 nova_compute[265391]: 2025-09-30 18:26:15.073 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:26:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:15 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_49] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:15 compute-0 nova_compute[265391]: 2025-09-30 18:26:15.480 2 DEBUG nova.compute.manager [req-43e91c38-3794-4de9-9df9-e599cce81be5 req-c71dc809-40da-4940-a360-eac7acb547a6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] Received event network-vif-deleted-295c346d-8de9-4a50-883e-9a7e1ccdccc7 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:26:15 compute-0 nova_compute[265391]: 2025-09-30 18:26:15.481 2 INFO nova.compute.manager [req-43e91c38-3794-4de9-9df9-e599cce81be5 req-c71dc809-40da-4940-a360-eac7acb547a6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] Neutron deleted interface 295c346d-8de9-4a50-883e-9a7e1ccdccc7; detaching it from the instance and deleting it from the info cache
Sep 30 18:26:15 compute-0 nova_compute[265391]: 2025-09-30 18:26:15.481 2 DEBUG nova.network.neutron [req-43e91c38-3794-4de9-9df9-e599cce81be5 req-c71dc809-40da-4940-a360-eac7acb547a6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:26:15 compute-0 nova_compute[265391]: 2025-09-30 18:26:15.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:26:15 compute-0 nova_compute[265391]: 2025-09-30 18:26:15.913 2 DEBUG nova.network.neutron [-] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:26:15 compute-0 nova_compute[265391]: 2025-09-30 18:26:15.992 2 DEBUG nova.compute.manager [req-43e91c38-3794-4de9-9df9-e599cce81be5 req-c71dc809-40da-4940-a360-eac7acb547a6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] Detach interface failed, port_id=295c346d-8de9-4a50-883e-9a7e1ccdccc7, reason: Instance 05075776-ca3e-4416-bdd4-558a62d1cf69 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11646
Sep 30 18:26:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:26:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:26:16.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:26:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:16 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_51] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004cb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:16 compute-0 sudo[324595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:26:16 compute-0 sudo[324595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:26:16 compute-0 nova_compute[265391]: 2025-09-30 18:26:16.422 2 INFO nova.compute.manager [-] [instance: 05075776-ca3e-4416-bdd4-558a62d1cf69] Took 2.04 seconds to deallocate network for instance.
Sep 30 18:26:16 compute-0 sudo[324595]: pam_unix(sudo:session): session closed for user root
Sep 30 18:26:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1429: 353 pgs: 353 active+clean; 159 MiB data, 316 MiB used, 40 GiB / 40 GiB avail; 11 KiB/s rd, 8.5 KiB/s wr, 17 op/s
Sep 30 18:26:16 compute-0 ceph-mon[73755]: pgmap v1429: 353 pgs: 353 active+clean; 159 MiB data, 316 MiB used, 40 GiB / 40 GiB avail; 11 KiB/s rd, 8.5 KiB/s wr, 17 op/s
Sep 30 18:26:16 compute-0 sudo[324622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:26:16 compute-0 sudo[324622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:26:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:26:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:26:16.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:26:16 compute-0 nova_compute[265391]: 2025-09-30 18:26:16.952 2 DEBUG oslo_concurrency.lockutils [None req-7d845c7a-eca9-4163-af49-c169e029900a 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:26:16 compute-0 nova_compute[265391]: 2025-09-30 18:26:16.953 2 DEBUG oslo_concurrency.lockutils [None req-7d845c7a-eca9-4163-af49-c169e029900a 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:26:16 compute-0 nova_compute[265391]: 2025-09-30 18:26:16.959 2 DEBUG oslo_concurrency.lockutils [None req-7d845c7a-eca9-4163-af49-c169e029900a 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.006s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:26:16 compute-0 nova_compute[265391]: 2025-09-30 18:26:16.993 2 INFO nova.scheduler.client.report [None req-7d845c7a-eca9-4163-af49-c169e029900a 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Deleted allocations for instance 05075776-ca3e-4416-bdd4-558a62d1cf69
Sep 30 18:26:17 compute-0 sudo[324622]: pam_unix(sudo:session): session closed for user root
Sep 30 18:26:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:26:17.248Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:26:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:26:17 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:26:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:26:17 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:26:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:26:17 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:26:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:26:17 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:26:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:26:17 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:26:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:26:17 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:26:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:26:17 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:26:17 compute-0 sudo[324678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:26:17 compute-0 sudo[324678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:26:17 compute-0 sudo[324678]: pam_unix(sudo:session): session closed for user root
Sep 30 18:26:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:17 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001bd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:17 compute-0 sudo[324703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:26:17 compute-0 sudo[324703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:26:17 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:26:17 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:26:17 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:26:17 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:26:17 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:26:17 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:26:17 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:26:17 compute-0 sshd-session[324618]: Invalid user violet from 45.252.249.158 port 34966
Sep 30 18:26:17 compute-0 sshd-session[324618]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:26:17 compute-0 sshd-session[324618]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158
Sep 30 18:26:17 compute-0 podman[324773]: 2025-09-30 18:26:17.883602746 +0000 UTC m=+0.043532113 container create 62daae2497979b9920d3412da8e71d02fd6324c3a7d798582c73835e062c97c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Sep 30 18:26:17 compute-0 systemd[1]: Started libpod-conmon-62daae2497979b9920d3412da8e71d02fd6324c3a7d798582c73835e062c97c0.scope.
Sep 30 18:26:17 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:26:17 compute-0 podman[324773]: 2025-09-30 18:26:17.861517382 +0000 UTC m=+0.021446739 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:26:17 compute-0 podman[324773]: 2025-09-30 18:26:17.984039496 +0000 UTC m=+0.143968853 container init 62daae2497979b9920d3412da8e71d02fd6324c3a7d798582c73835e062c97c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0)
Sep 30 18:26:18 compute-0 podman[324773]: 2025-09-30 18:26:18.001154371 +0000 UTC m=+0.161083718 container start 62daae2497979b9920d3412da8e71d02fd6324c3a7d798582c73835e062c97c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_pare, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Sep 30 18:26:18 compute-0 podman[324773]: 2025-09-30 18:26:18.005262398 +0000 UTC m=+0.165191725 container attach 62daae2497979b9920d3412da8e71d02fd6324c3a7d798582c73835e062c97c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_pare, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:26:18 compute-0 clever_pare[324789]: 167 167
Sep 30 18:26:18 compute-0 systemd[1]: libpod-62daae2497979b9920d3412da8e71d02fd6324c3a7d798582c73835e062c97c0.scope: Deactivated successfully.
Sep 30 18:26:18 compute-0 podman[324773]: 2025-09-30 18:26:18.013370608 +0000 UTC m=+0.173299975 container died 62daae2497979b9920d3412da8e71d02fd6324c3a7d798582c73835e062c97c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Sep 30 18:26:18 compute-0 nova_compute[265391]: 2025-09-30 18:26:18.026 2 DEBUG oslo_concurrency.lockutils [None req-7d845c7a-eca9-4163-af49-c169e029900a 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "05075776-ca3e-4416-bdd4-558a62d1cf69" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.116s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:26:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-13a8bac4bccee860efb396a8759c7a23ca83a788afba5e2627037260dbeb3c18-merged.mount: Deactivated successfully.
Sep 30 18:26:18 compute-0 podman[324773]: 2025-09-30 18:26:18.071641023 +0000 UTC m=+0.231570360 container remove 62daae2497979b9920d3412da8e71d02fd6324c3a7d798582c73835e062c97c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_pare, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Sep 30 18:26:18 compute-0 systemd[1]: libpod-conmon-62daae2497979b9920d3412da8e71d02fd6324c3a7d798582c73835e062c97c0.scope: Deactivated successfully.
Sep 30 18:26:18 compute-0 nova_compute[265391]: 2025-09-30 18:26:18.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:18 compute-0 podman[324815]: 2025-09-30 18:26:18.258380746 +0000 UTC m=+0.043074460 container create 09a0b369e4de19d220b7b0cc2f0d3938c94407e524ef6c599bd71f9bb22e234c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:26:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:26:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:26:18.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:26:18 compute-0 systemd[1]: Started libpod-conmon-09a0b369e4de19d220b7b0cc2f0d3938c94407e524ef6c599bd71f9bb22e234c.scope.
Sep 30 18:26:18 compute-0 podman[324815]: 2025-09-30 18:26:18.241099277 +0000 UTC m=+0.025793021 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:26:18 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:26:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44ff9ff860732274b4f0a76533ceda847a0e79d7a727a517a9b23dad73d2f5b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:26:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44ff9ff860732274b4f0a76533ceda847a0e79d7a727a517a9b23dad73d2f5b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:26:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:18 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c004720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44ff9ff860732274b4f0a76533ceda847a0e79d7a727a517a9b23dad73d2f5b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:26:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44ff9ff860732274b4f0a76533ceda847a0e79d7a727a517a9b23dad73d2f5b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:26:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44ff9ff860732274b4f0a76533ceda847a0e79d7a727a517a9b23dad73d2f5b5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:26:18 compute-0 podman[324815]: 2025-09-30 18:26:18.3673953 +0000 UTC m=+0.152089054 container init 09a0b369e4de19d220b7b0cc2f0d3938c94407e524ef6c599bd71f9bb22e234c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Sep 30 18:26:18 compute-0 podman[324815]: 2025-09-30 18:26:18.377486282 +0000 UTC m=+0.162180016 container start 09a0b369e4de19d220b7b0cc2f0d3938c94407e524ef6c599bd71f9bb22e234c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 18:26:18 compute-0 podman[324815]: 2025-09-30 18:26:18.380777497 +0000 UTC m=+0.165471241 container attach 09a0b369e4de19d220b7b0cc2f0d3938c94407e524ef6c599bd71f9bb22e234c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 18:26:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1430: 353 pgs: 353 active+clean; 121 MiB data, 292 MiB used, 40 GiB / 40 GiB avail; 15 KiB/s rd, 9.2 KiB/s wr, 24 op/s
Sep 30 18:26:18 compute-0 ceph-mon[73755]: pgmap v1430: 353 pgs: 353 active+clean; 121 MiB data, 292 MiB used, 40 GiB / 40 GiB avail; 15 KiB/s rd, 9.2 KiB/s wr, 24 op/s
Sep 30 18:26:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:26:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:26:18.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:26:18 compute-0 xenodochial_ardinghelli[324832]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:26:18 compute-0 xenodochial_ardinghelli[324832]: --> All data devices are unavailable
Sep 30 18:26:18 compute-0 systemd[1]: libpod-09a0b369e4de19d220b7b0cc2f0d3938c94407e524ef6c599bd71f9bb22e234c.scope: Deactivated successfully.
Sep 30 18:26:18 compute-0 podman[324815]: 2025-09-30 18:26:18.784842509 +0000 UTC m=+0.569536273 container died 09a0b369e4de19d220b7b0cc2f0d3938c94407e524ef6c599bd71f9bb22e234c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 18:26:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:26:18] "GET /metrics HTTP/1.1" 200 46645 "" "Prometheus/2.51.0"
Sep 30 18:26:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:26:18] "GET /metrics HTTP/1.1" 200 46645 "" "Prometheus/2.51.0"
Sep 30 18:26:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-44ff9ff860732274b4f0a76533ceda847a0e79d7a727a517a9b23dad73d2f5b5-merged.mount: Deactivated successfully.
Sep 30 18:26:18 compute-0 podman[324815]: 2025-09-30 18:26:18.837481477 +0000 UTC m=+0.622175211 container remove 09a0b369e4de19d220b7b0cc2f0d3938c94407e524ef6c599bd71f9bb22e234c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Sep 30 18:26:18 compute-0 systemd[1]: libpod-conmon-09a0b369e4de19d220b7b0cc2f0d3938c94407e524ef6c599bd71f9bb22e234c.scope: Deactivated successfully.
Sep 30 18:26:18 compute-0 sudo[324703]: pam_unix(sudo:session): session closed for user root
Sep 30 18:26:18 compute-0 nova_compute[265391]: 2025-09-30 18:26:18.889 2 DEBUG oslo_concurrency.lockutils [None req-6a73b3cd-7e58-4b4b-9c14-b82baeb69652 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "7569c2e7-ec68-4b21-b36b-7c828ac8af52" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:26:18 compute-0 nova_compute[265391]: 2025-09-30 18:26:18.890 2 DEBUG oslo_concurrency.lockutils [None req-6a73b3cd-7e58-4b4b-9c14-b82baeb69652 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "7569c2e7-ec68-4b21-b36b-7c828ac8af52" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:26:18 compute-0 nova_compute[265391]: 2025-09-30 18:26:18.891 2 DEBUG oslo_concurrency.lockutils [None req-6a73b3cd-7e58-4b4b-9c14-b82baeb69652 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "7569c2e7-ec68-4b21-b36b-7c828ac8af52-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:26:18 compute-0 nova_compute[265391]: 2025-09-30 18:26:18.891 2 DEBUG oslo_concurrency.lockutils [None req-6a73b3cd-7e58-4b4b-9c14-b82baeb69652 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "7569c2e7-ec68-4b21-b36b-7c828ac8af52-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:26:18 compute-0 nova_compute[265391]: 2025-09-30 18:26:18.891 2 DEBUG oslo_concurrency.lockutils [None req-6a73b3cd-7e58-4b4b-9c14-b82baeb69652 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "7569c2e7-ec68-4b21-b36b-7c828ac8af52-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:26:18 compute-0 nova_compute[265391]: 2025-09-30 18:26:18.907 2 INFO nova.compute.manager [None req-6a73b3cd-7e58-4b4b-9c14-b82baeb69652 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Terminating instance
Sep 30 18:26:18 compute-0 sudo[324859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:26:18 compute-0 sudo[324859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:26:18 compute-0 sudo[324859]: pam_unix(sudo:session): session closed for user root
Sep 30 18:26:19 compute-0 sudo[324884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:26:19 compute-0 sudo[324884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:26:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:19 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_49] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:19 compute-0 nova_compute[265391]: 2025-09-30 18:26:19.425 2 DEBUG nova.compute.manager [None req-6a73b3cd-7e58-4b4b-9c14-b82baeb69652 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Sep 30 18:26:19 compute-0 kernel: tapfba1be8e-f7 (unregistering): left promiscuous mode
Sep 30 18:26:19 compute-0 NetworkManager[45059]: <info>  [1759256779.4907] device (tapfba1be8e-f7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 18:26:19 compute-0 ovn_controller[156242]: 2025-09-30T18:26:19Z|00147|binding|INFO|Releasing lport fba1be8e-f7c4-4bd0-b8a7-6e854986df69 from this chassis (sb_readonly=0)
Sep 30 18:26:19 compute-0 ovn_controller[156242]: 2025-09-30T18:26:19Z|00148|binding|INFO|Setting lport fba1be8e-f7c4-4bd0-b8a7-6e854986df69 down in Southbound
Sep 30 18:26:19 compute-0 ovn_controller[156242]: 2025-09-30T18:26:19Z|00149|binding|INFO|Removing iface tapfba1be8e-f7 ovn-installed in OVS
Sep 30 18:26:19 compute-0 nova_compute[265391]: 2025-09-30 18:26:19.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:19.515 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:86:5f 10.100.0.13'], port_security=['fa:16:3e:8b:86:5f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7569c2e7-ec68-4b21-b36b-7c828ac8af52', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6901f664-336b-42d2-bbf7-58951befc8d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c634e1c17ed54907969576a0eb8eff50', 'neutron:revision_number': '5', 'neutron:security_group_ids': '11cf84c1-9641-409c-b5f5-6c5fe3a8afe5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c24d9de-651b-4bf8-842a-1286ab88b11d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=fba1be8e-f7c4-4bd0-b8a7-6e854986df69) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:26:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:19.516 166158 INFO neutron.agent.ovn.metadata.agent [-] Port fba1be8e-f7c4-4bd0-b8a7-6e854986df69 in datapath 6901f664-336b-42d2-bbf7-58951befc8d1 unbound from our chassis
Sep 30 18:26:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:19.518 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6901f664-336b-42d2-bbf7-58951befc8d1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:26:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:19.518 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[cf14e67e-e7c9-4fee-965b-1bc16d20ac63]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:19.519 166158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1 namespace which is not needed anymore
Sep 30 18:26:19 compute-0 podman[324950]: 2025-09-30 18:26:19.522861821 +0000 UTC m=+0.051280244 container create 4e4b2e5d03f3387ede89e43dcb7419ac9e57e9ce723f9c32b8e59d8c1919c84e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_dijkstra, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:26:19 compute-0 nova_compute[265391]: 2025-09-30 18:26:19.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:19 compute-0 systemd[1]: Started libpod-conmon-4e4b2e5d03f3387ede89e43dcb7419ac9e57e9ce723f9c32b8e59d8c1919c84e.scope.
Sep 30 18:26:19 compute-0 podman[324950]: 2025-09-30 18:26:19.499022191 +0000 UTC m=+0.027440634 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:26:19 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000010.scope: Deactivated successfully.
Sep 30 18:26:19 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000010.scope: Consumed 17.029s CPU time.
Sep 30 18:26:19 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:26:19 compute-0 systemd-machined[219917]: Machine qemu-11-instance-00000010 terminated.
Sep 30 18:26:19 compute-0 podman[324950]: 2025-09-30 18:26:19.650656192 +0000 UTC m=+0.179074655 container init 4e4b2e5d03f3387ede89e43dcb7419ac9e57e9ce723f9c32b8e59d8c1919c84e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Sep 30 18:26:19 compute-0 kernel: tapfba1be8e-f7: entered promiscuous mode
Sep 30 18:26:19 compute-0 NetworkManager[45059]: <info>  [1759256779.6560] manager: (tapfba1be8e-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/65)
Sep 30 18:26:19 compute-0 systemd-udevd[324973]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:26:19 compute-0 kernel: tapfba1be8e-f7 (unregistering): left promiscuous mode
Sep 30 18:26:19 compute-0 nova_compute[265391]: 2025-09-30 18:26:19.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:19 compute-0 ovn_controller[156242]: 2025-09-30T18:26:19Z|00150|binding|INFO|Claiming lport fba1be8e-f7c4-4bd0-b8a7-6e854986df69 for this chassis.
Sep 30 18:26:19 compute-0 ovn_controller[156242]: 2025-09-30T18:26:19Z|00151|binding|INFO|fba1be8e-f7c4-4bd0-b8a7-6e854986df69: Claiming fa:16:3e:8b:86:5f 10.100.0.13
Sep 30 18:26:19 compute-0 podman[324950]: 2025-09-30 18:26:19.66173904 +0000 UTC m=+0.190157473 container start 4e4b2e5d03f3387ede89e43dcb7419ac9e57e9ce723f9c32b8e59d8c1919c84e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:26:19 compute-0 podman[324950]: 2025-09-30 18:26:19.666979756 +0000 UTC m=+0.195398229 container attach 4e4b2e5d03f3387ede89e43dcb7419ac9e57e9ce723f9c32b8e59d8c1919c84e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_dijkstra, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default)
Sep 30 18:26:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:19.668 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:86:5f 10.100.0.13'], port_security=['fa:16:3e:8b:86:5f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7569c2e7-ec68-4b21-b36b-7c828ac8af52', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6901f664-336b-42d2-bbf7-58951befc8d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c634e1c17ed54907969576a0eb8eff50', 'neutron:revision_number': '5', 'neutron:security_group_ids': '11cf84c1-9641-409c-b5f5-6c5fe3a8afe5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c24d9de-651b-4bf8-842a-1286ab88b11d, chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=fba1be8e-f7c4-4bd0-b8a7-6e854986df69) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:26:19 compute-0 amazing_dijkstra[324990]: 167 167
Sep 30 18:26:19 compute-0 systemd[1]: libpod-4e4b2e5d03f3387ede89e43dcb7419ac9e57e9ce723f9c32b8e59d8c1919c84e.scope: Deactivated successfully.
Sep 30 18:26:19 compute-0 conmon[324990]: conmon 4e4b2e5d03f3387ede89 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4e4b2e5d03f3387ede89e43dcb7419ac9e57e9ce723f9c32b8e59d8c1919c84e.scope/container/memory.events
Sep 30 18:26:19 compute-0 podman[324950]: 2025-09-30 18:26:19.678540167 +0000 UTC m=+0.206958610 container died 4e4b2e5d03f3387ede89e43dcb7419ac9e57e9ce723f9c32b8e59d8c1919c84e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Sep 30 18:26:19 compute-0 nova_compute[265391]: 2025-09-30 18:26:19.688 2 INFO nova.virt.libvirt.driver [-] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Instance destroyed successfully.
Sep 30 18:26:19 compute-0 nova_compute[265391]: 2025-09-30 18:26:19.689 2 DEBUG nova.objects.instance [None req-6a73b3cd-7e58-4b4b-9c14-b82baeb69652 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lazy-loading 'resources' on Instance uuid 7569c2e7-ec68-4b21-b36b-7c828ac8af52 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:26:19 compute-0 ovn_controller[156242]: 2025-09-30T18:26:19Z|00152|binding|INFO|Setting lport fba1be8e-f7c4-4bd0-b8a7-6e854986df69 ovn-installed in OVS
Sep 30 18:26:19 compute-0 ovn_controller[156242]: 2025-09-30T18:26:19Z|00153|binding|INFO|Setting lport fba1be8e-f7c4-4bd0-b8a7-6e854986df69 up in Southbound
Sep 30 18:26:19 compute-0 ovn_controller[156242]: 2025-09-30T18:26:19Z|00154|binding|INFO|Releasing lport fba1be8e-f7c4-4bd0-b8a7-6e854986df69 from this chassis (sb_readonly=1)
Sep 30 18:26:19 compute-0 ovn_controller[156242]: 2025-09-30T18:26:19Z|00155|binding|INFO|Removing iface tapfba1be8e-f7 ovn-installed in OVS
Sep 30 18:26:19 compute-0 ovn_controller[156242]: 2025-09-30T18:26:19Z|00156|if_status|INFO|Dropped 2 log messages in last 822 seconds (most recently, 822 seconds ago) due to excessive rate
Sep 30 18:26:19 compute-0 ovn_controller[156242]: 2025-09-30T18:26:19Z|00157|if_status|INFO|Not setting lport fba1be8e-f7c4-4bd0-b8a7-6e854986df69 down as sb is readonly
Sep 30 18:26:19 compute-0 nova_compute[265391]: 2025-09-30 18:26:19.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:19 compute-0 ovn_controller[156242]: 2025-09-30T18:26:19Z|00158|binding|INFO|Releasing lport fba1be8e-f7c4-4bd0-b8a7-6e854986df69 from this chassis (sb_readonly=0)
Sep 30 18:26:19 compute-0 ovn_controller[156242]: 2025-09-30T18:26:19Z|00159|binding|INFO|Setting lport fba1be8e-f7c4-4bd0-b8a7-6e854986df69 down in Southbound
Sep 30 18:26:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:19.702 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:86:5f 10.100.0.13'], port_security=['fa:16:3e:8b:86:5f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7569c2e7-ec68-4b21-b36b-7c828ac8af52', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6901f664-336b-42d2-bbf7-58951befc8d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c634e1c17ed54907969576a0eb8eff50', 'neutron:revision_number': '5', 'neutron:security_group_ids': '11cf84c1-9641-409c-b5f5-6c5fe3a8afe5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c24d9de-651b-4bf8-842a-1286ab88b11d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=fba1be8e-f7c4-4bd0-b8a7-6e854986df69) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:26:19 compute-0 nova_compute[265391]: 2025-09-30 18:26:19.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bd062874d1ff650a598b66e19c415e63407776ef4052bd4f15fa8f0ee1db104-merged.mount: Deactivated successfully.
Sep 30 18:26:19 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[323079]: [NOTICE]   (323091) : haproxy version is 3.0.5-8e879a5
Sep 30 18:26:19 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[323079]: [NOTICE]   (323091) : path to executable is /usr/sbin/haproxy
Sep 30 18:26:19 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[323079]: [WARNING]  (323091) : Exiting Master process...
Sep 30 18:26:19 compute-0 podman[325022]: 2025-09-30 18:26:19.727731675 +0000 UTC m=+0.084623990 container kill ab3a5d69973bea5cd2d0f5e4669967f57d81f081c7c9c817f68a02484802c94a (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Sep 30 18:26:19 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[323079]: [ALERT]    (323091) : Current worker (323093) exited with code 143 (Terminated)
Sep 30 18:26:19 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[323079]: [WARNING]  (323091) : All workers exited. Exiting... (0)
Sep 30 18:26:19 compute-0 systemd[1]: libpod-ab3a5d69973bea5cd2d0f5e4669967f57d81f081c7c9c817f68a02484802c94a.scope: Deactivated successfully.
Sep 30 18:26:19 compute-0 podman[324993]: 2025-09-30 18:26:19.735036455 +0000 UTC m=+0.121941220 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Sep 30 18:26:19 compute-0 podman[324950]: 2025-09-30 18:26:19.73985359 +0000 UTC m=+0.268272013 container remove 4e4b2e5d03f3387ede89e43dcb7419ac9e57e9ce723f9c32b8e59d8c1919c84e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_dijkstra, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:26:19 compute-0 podman[324991]: 2025-09-30 18:26:19.745869727 +0000 UTC m=+0.136549490 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Sep 30 18:26:19 compute-0 systemd[1]: libpod-conmon-4e4b2e5d03f3387ede89e43dcb7419ac9e57e9ce723f9c32b8e59d8c1919c84e.scope: Deactivated successfully.
Sep 30 18:26:19 compute-0 podman[324994]: 2025-09-30 18:26:19.762326065 +0000 UTC m=+0.137000402 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, vcs-type=git, build-date=2025-08-20T13:12:41, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, release=1755695350, io.openshift.expose-services=, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.buildah.version=1.33.7)
Sep 30 18:26:19 compute-0 podman[325077]: 2025-09-30 18:26:19.783434753 +0000 UTC m=+0.028721857 container died ab3a5d69973bea5cd2d0f5e4669967f57d81f081c7c9c817f68a02484802c94a (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Sep 30 18:26:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-b187512898779f8d62265ded9c5761bdda2ea1ea8eb8e613b2c7862b17c60478-merged.mount: Deactivated successfully.
Sep 30 18:26:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ab3a5d69973bea5cd2d0f5e4669967f57d81f081c7c9c817f68a02484802c94a-userdata-shm.mount: Deactivated successfully.
Sep 30 18:26:19 compute-0 podman[325077]: 2025-09-30 18:26:19.827888799 +0000 UTC m=+0.073175893 container cleanup ab3a5d69973bea5cd2d0f5e4669967f57d81f081c7c9c817f68a02484802c94a (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Sep 30 18:26:19 compute-0 systemd[1]: libpod-conmon-ab3a5d69973bea5cd2d0f5e4669967f57d81f081c7c9c817f68a02484802c94a.scope: Deactivated successfully.
Sep 30 18:26:19 compute-0 podman[325083]: 2025-09-30 18:26:19.844415988 +0000 UTC m=+0.072131986 container remove ab3a5d69973bea5cd2d0f5e4669967f57d81f081c7c9c817f68a02484802c94a (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2)
Sep 30 18:26:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:19.851 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[1dbebbc0-1183-48b5-b288-5a564a0fbbdc]: (4, ("Tue Sep 30 06:26:19 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1 (ab3a5d69973bea5cd2d0f5e4669967f57d81f081c7c9c817f68a02484802c94a)\nab3a5d69973bea5cd2d0f5e4669967f57d81f081c7c9c817f68a02484802c94a\nTue Sep 30 06:26:19 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1 (ab3a5d69973bea5cd2d0f5e4669967f57d81f081c7c9c817f68a02484802c94a)\nab3a5d69973bea5cd2d0f5e4669967f57d81f081c7c9c817f68a02484802c94a\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:19.853 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[b5abf3d5-6fd0-450a-8ffd-a94873eff842]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:19.854 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:26:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:19.855 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[931a6f66-937d-45c1-a798-cf5adb6efe8e]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:19.855 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6901f664-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:26:19 compute-0 nova_compute[265391]: 2025-09-30 18:26:19.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:19 compute-0 kernel: tap6901f664-30: left promiscuous mode
Sep 30 18:26:19 compute-0 nova_compute[265391]: 2025-09-30 18:26:19.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:19.879 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[afbc3482-0ab0-4da7-86db-25bed27d3bd0]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:19.909 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[a70f7098-b243-4a0b-9288-3ee2e0e4d06f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:19.912 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[2031074b-f74b-4d08-8bcb-8a69b1034381]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:19 compute-0 podman[325120]: 2025-09-30 18:26:19.925032873 +0000 UTC m=+0.044704412 container create 99c5ea0620889913db67d6335bc0e178b484f98ae0924c07d2660c98803c524d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_hellman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:26:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:19.928 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[a6a2a71b-1067-4e72-8aeb-96c66d2cc3da]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528264, 'reachable_time': 32113, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325138, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:19.930 166383 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1 deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Sep 30 18:26:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:19.930 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[95224bf6-6917-40de-9d28-dbc75d049e7a]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:19.931 166158 INFO neutron.agent.ovn.metadata.agent [-] Port fba1be8e-f7c4-4bd0-b8a7-6e854986df69 in datapath 6901f664-336b-42d2-bbf7-58951befc8d1 unbound from our chassis
Sep 30 18:26:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:19.931 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6901f664-336b-42d2-bbf7-58951befc8d1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:26:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d6901f664\x2d336b\x2d42d2\x2dbbf7\x2d58951befc8d1.mount: Deactivated successfully.
Sep 30 18:26:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:19.932 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[a3322de5-86f9-409e-b90e-f49d10f0c8a5]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:19.932 166158 INFO neutron.agent.ovn.metadata.agent [-] Port fba1be8e-f7c4-4bd0-b8a7-6e854986df69 in datapath 6901f664-336b-42d2-bbf7-58951befc8d1 unbound from our chassis
Sep 30 18:26:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:19.933 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6901f664-336b-42d2-bbf7-58951befc8d1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:26:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:19.933 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[a7cbdf9c-a93a-4026-8bfd-164f6a0baf67]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:19 compute-0 sshd-session[324618]: Failed password for invalid user violet from 45.252.249.158 port 34966 ssh2
Sep 30 18:26:19 compute-0 systemd[1]: Started libpod-conmon-99c5ea0620889913db67d6335bc0e178b484f98ae0924c07d2660c98803c524d.scope.
Sep 30 18:26:20 compute-0 podman[325120]: 2025-09-30 18:26:19.905549947 +0000 UTC m=+0.025221506 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:26:20 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:26:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9482eceb51bd9ccfaa51f025bd95398fb776e0ac6f626df9b2586b719a1749f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:26:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9482eceb51bd9ccfaa51f025bd95398fb776e0ac6f626df9b2586b719a1749f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:26:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9482eceb51bd9ccfaa51f025bd95398fb776e0ac6f626df9b2586b719a1749f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:26:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9482eceb51bd9ccfaa51f025bd95398fb776e0ac6f626df9b2586b719a1749f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:26:20 compute-0 podman[325120]: 2025-09-30 18:26:20.034435547 +0000 UTC m=+0.154107116 container init 99c5ea0620889913db67d6335bc0e178b484f98ae0924c07d2660c98803c524d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_hellman, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:26:20 compute-0 podman[325120]: 2025-09-30 18:26:20.043173604 +0000 UTC m=+0.162845143 container start 99c5ea0620889913db67d6335bc0e178b484f98ae0924c07d2660c98803c524d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Sep 30 18:26:20 compute-0 podman[325120]: 2025-09-30 18:26:20.046740837 +0000 UTC m=+0.166412386 container attach 99c5ea0620889913db67d6335bc0e178b484f98ae0924c07d2660c98803c524d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_hellman, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 18:26:20 compute-0 nova_compute[265391]: 2025-09-30 18:26:20.198 2 DEBUG nova.virt.libvirt.vif [None req-6a73b3cd-7e58-4b4b-9c14-b82baeb69652 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:24:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-2011896342',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-2011896342',id=16,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:24:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c634e1c17ed54907969576a0eb8eff50',ramdisk_id='',reservation_id='r-ccq6xsdf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteStrategies-1883747907',owner_user_name='tempest-TestExecuteStrategies-1883747907-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:24:46Z,user_data=None,user_id='623ef4a55c9e4fc28bb65e49246b5008',uuid=7569c2e7-ec68-4b21-b36b-7c828ac8af52,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fba1be8e-f7c4-4bd0-b8a7-6e854986df69", "address": "fa:16:3e:8b:86:5f", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfba1be8e-f7", "ovs_interfaceid": "fba1be8e-f7c4-4bd0-b8a7-6e854986df69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Sep 30 18:26:20 compute-0 nova_compute[265391]: 2025-09-30 18:26:20.199 2 DEBUG nova.network.os_vif_util [None req-6a73b3cd-7e58-4b4b-9c14-b82baeb69652 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converting VIF {"id": "fba1be8e-f7c4-4bd0-b8a7-6e854986df69", "address": "fa:16:3e:8b:86:5f", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfba1be8e-f7", "ovs_interfaceid": "fba1be8e-f7c4-4bd0-b8a7-6e854986df69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:26:20 compute-0 nova_compute[265391]: 2025-09-30 18:26:20.200 2 DEBUG nova.network.os_vif_util [None req-6a73b3cd-7e58-4b4b-9c14-b82baeb69652 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:86:5f,bridge_name='br-int',has_traffic_filtering=True,id=fba1be8e-f7c4-4bd0-b8a7-6e854986df69,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfba1be8e-f7') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:26:20 compute-0 nova_compute[265391]: 2025-09-30 18:26:20.200 2 DEBUG os_vif [None req-6a73b3cd-7e58-4b4b-9c14-b82baeb69652 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:86:5f,bridge_name='br-int',has_traffic_filtering=True,id=fba1be8e-f7c4-4bd0-b8a7-6e854986df69,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfba1be8e-f7') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Sep 30 18:26:20 compute-0 nova_compute[265391]: 2025-09-30 18:26:20.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:20 compute-0 nova_compute[265391]: 2025-09-30 18:26:20.203 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfba1be8e-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:26:20 compute-0 nova_compute[265391]: 2025-09-30 18:26:20.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:20 compute-0 nova_compute[265391]: 2025-09-30 18:26:20.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:26:20 compute-0 nova_compute[265391]: 2025-09-30 18:26:20.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:20 compute-0 nova_compute[265391]: 2025-09-30 18:26:20.262 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=41dc3b88-b9e9-44ce-8341-713443690be4) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:26:20 compute-0 nova_compute[265391]: 2025-09-30 18:26:20.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:20 compute-0 nova_compute[265391]: 2025-09-30 18:26:20.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:26:20 compute-0 nova_compute[265391]: 2025-09-30 18:26:20.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:20 compute-0 nova_compute[265391]: 2025-09-30 18:26:20.266 2 INFO os_vif [None req-6a73b3cd-7e58-4b4b-9c14-b82baeb69652 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:86:5f,bridge_name='br-int',has_traffic_filtering=True,id=fba1be8e-f7c4-4bd0-b8a7-6e854986df69,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfba1be8e-f7')
Sep 30 18:26:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:26:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:26:20.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:26:20 compute-0 festive_hellman[325142]: {
Sep 30 18:26:20 compute-0 festive_hellman[325142]:     "0": [
Sep 30 18:26:20 compute-0 festive_hellman[325142]:         {
Sep 30 18:26:20 compute-0 festive_hellman[325142]:             "devices": [
Sep 30 18:26:20 compute-0 festive_hellman[325142]:                 "/dev/loop3"
Sep 30 18:26:20 compute-0 festive_hellman[325142]:             ],
Sep 30 18:26:20 compute-0 festive_hellman[325142]:             "lv_name": "ceph_lv0",
Sep 30 18:26:20 compute-0 festive_hellman[325142]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:26:20 compute-0 festive_hellman[325142]:             "lv_size": "21470642176",
Sep 30 18:26:20 compute-0 festive_hellman[325142]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:26:20 compute-0 festive_hellman[325142]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:26:20 compute-0 festive_hellman[325142]:             "name": "ceph_lv0",
Sep 30 18:26:20 compute-0 festive_hellman[325142]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:26:20 compute-0 festive_hellman[325142]:             "tags": {
Sep 30 18:26:20 compute-0 festive_hellman[325142]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:26:20 compute-0 festive_hellman[325142]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:26:20 compute-0 festive_hellman[325142]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:26:20 compute-0 festive_hellman[325142]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:26:20 compute-0 festive_hellman[325142]:                 "ceph.cluster_name": "ceph",
Sep 30 18:26:20 compute-0 festive_hellman[325142]:                 "ceph.crush_device_class": "",
Sep 30 18:26:20 compute-0 festive_hellman[325142]:                 "ceph.encrypted": "0",
Sep 30 18:26:20 compute-0 festive_hellman[325142]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:26:20 compute-0 festive_hellman[325142]:                 "ceph.osd_id": "0",
Sep 30 18:26:20 compute-0 festive_hellman[325142]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:26:20 compute-0 festive_hellman[325142]:                 "ceph.type": "block",
Sep 30 18:26:20 compute-0 festive_hellman[325142]:                 "ceph.vdo": "0",
Sep 30 18:26:20 compute-0 festive_hellman[325142]:                 "ceph.with_tpm": "0"
Sep 30 18:26:20 compute-0 festive_hellman[325142]:             },
Sep 30 18:26:20 compute-0 festive_hellman[325142]:             "type": "block",
Sep 30 18:26:20 compute-0 festive_hellman[325142]:             "vg_name": "ceph_vg0"
Sep 30 18:26:20 compute-0 festive_hellman[325142]:         }
Sep 30 18:26:20 compute-0 festive_hellman[325142]:     ]
Sep 30 18:26:20 compute-0 festive_hellman[325142]: }
Sep 30 18:26:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:20 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_51] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004cb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:20 compute-0 systemd[1]: libpod-99c5ea0620889913db67d6335bc0e178b484f98ae0924c07d2660c98803c524d.scope: Deactivated successfully.
Sep 30 18:26:20 compute-0 podman[325120]: 2025-09-30 18:26:20.376407755 +0000 UTC m=+0.496079334 container died 99c5ea0620889913db67d6335bc0e178b484f98ae0924c07d2660c98803c524d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_hellman, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Sep 30 18:26:20 compute-0 podman[325120]: 2025-09-30 18:26:20.428628512 +0000 UTC m=+0.548300071 container remove 99c5ea0620889913db67d6335bc0e178b484f98ae0924c07d2660c98803c524d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_hellman, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Sep 30 18:26:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1431: 353 pgs: 353 active+clean; 121 MiB data, 292 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 9.2 KiB/s wr, 29 op/s
Sep 30 18:26:20 compute-0 systemd[1]: libpod-conmon-99c5ea0620889913db67d6335bc0e178b484f98ae0924c07d2660c98803c524d.scope: Deactivated successfully.
Sep 30 18:26:20 compute-0 nova_compute[265391]: 2025-09-30 18:26:20.446 2 DEBUG nova.compute.manager [req-be73c419-e007-4311-b563-20a1f2194d47 req-54512e04-6393-42dc-b9d5-00ae2b185081 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Received event network-vif-unplugged-fba1be8e-f7c4-4bd0-b8a7-6e854986df69 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:26:20 compute-0 nova_compute[265391]: 2025-09-30 18:26:20.447 2 DEBUG oslo_concurrency.lockutils [req-be73c419-e007-4311-b563-20a1f2194d47 req-54512e04-6393-42dc-b9d5-00ae2b185081 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "7569c2e7-ec68-4b21-b36b-7c828ac8af52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:26:20 compute-0 nova_compute[265391]: 2025-09-30 18:26:20.447 2 DEBUG oslo_concurrency.lockutils [req-be73c419-e007-4311-b563-20a1f2194d47 req-54512e04-6393-42dc-b9d5-00ae2b185081 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "7569c2e7-ec68-4b21-b36b-7c828ac8af52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:26:20 compute-0 nova_compute[265391]: 2025-09-30 18:26:20.448 2 DEBUG oslo_concurrency.lockutils [req-be73c419-e007-4311-b563-20a1f2194d47 req-54512e04-6393-42dc-b9d5-00ae2b185081 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "7569c2e7-ec68-4b21-b36b-7c828ac8af52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:26:20 compute-0 nova_compute[265391]: 2025-09-30 18:26:20.448 2 DEBUG nova.compute.manager [req-be73c419-e007-4311-b563-20a1f2194d47 req-54512e04-6393-42dc-b9d5-00ae2b185081 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] No waiting events found dispatching network-vif-unplugged-fba1be8e-f7c4-4bd0-b8a7-6e854986df69 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:26:20 compute-0 nova_compute[265391]: 2025-09-30 18:26:20.448 2 DEBUG nova.compute.manager [req-be73c419-e007-4311-b563-20a1f2194d47 req-54512e04-6393-42dc-b9d5-00ae2b185081 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Received event network-vif-unplugged-fba1be8e-f7c4-4bd0-b8a7-6e854986df69 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:26:20 compute-0 sudo[324884]: pam_unix(sudo:session): session closed for user root
Sep 30 18:26:20 compute-0 ceph-mon[73755]: pgmap v1431: 353 pgs: 353 active+clean; 121 MiB data, 292 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 9.2 KiB/s wr, 29 op/s
Sep 30 18:26:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9482eceb51bd9ccfaa51f025bd95398fb776e0ac6f626df9b2586b719a1749f-merged.mount: Deactivated successfully.
Sep 30 18:26:20 compute-0 sudo[325181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:26:20 compute-0 sudo[325181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:26:20 compute-0 sudo[325181]: pam_unix(sudo:session): session closed for user root
Sep 30 18:26:20 compute-0 sudo[325207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:26:20 compute-0 sudo[325207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:26:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:26:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:26:20.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:26:20 compute-0 nova_compute[265391]: 2025-09-30 18:26:20.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:20 compute-0 nova_compute[265391]: 2025-09-30 18:26:20.724 2 INFO nova.virt.libvirt.driver [None req-6a73b3cd-7e58-4b4b-9c14-b82baeb69652 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Deleting instance files /var/lib/nova/instances/7569c2e7-ec68-4b21-b36b-7c828ac8af52_del
Sep 30 18:26:20 compute-0 nova_compute[265391]: 2025-09-30 18:26:20.725 2 INFO nova.virt.libvirt.driver [None req-6a73b3cd-7e58-4b4b-9c14-b82baeb69652 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Deletion of /var/lib/nova/instances/7569c2e7-ec68-4b21-b36b-7c828ac8af52_del complete
Sep 30 18:26:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:26:20 compute-0 sshd-session[324618]: Received disconnect from 45.252.249.158 port 34966:11: Bye Bye [preauth]
Sep 30 18:26:20 compute-0 sshd-session[324618]: Disconnected from invalid user violet 45.252.249.158 port 34966 [preauth]
Sep 30 18:26:21 compute-0 podman[325272]: 2025-09-30 18:26:21.10884215 +0000 UTC m=+0.047765122 container create ad9e87e194822856ccf709f713a67b6dde1d34f1d4a7db290ac9308e2272e7f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:26:21 compute-0 systemd[1]: Started libpod-conmon-ad9e87e194822856ccf709f713a67b6dde1d34f1d4a7db290ac9308e2272e7f6.scope.
Sep 30 18:26:21 compute-0 podman[325272]: 2025-09-30 18:26:21.087201198 +0000 UTC m=+0.026124180 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:26:21 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:26:21 compute-0 podman[325272]: 2025-09-30 18:26:21.210612695 +0000 UTC m=+0.149535747 container init ad9e87e194822856ccf709f713a67b6dde1d34f1d4a7db290ac9308e2272e7f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_noether, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:26:21 compute-0 podman[325272]: 2025-09-30 18:26:21.222991187 +0000 UTC m=+0.161914189 container start ad9e87e194822856ccf709f713a67b6dde1d34f1d4a7db290ac9308e2272e7f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:26:21 compute-0 podman[325272]: 2025-09-30 18:26:21.227287939 +0000 UTC m=+0.166211001 container attach ad9e87e194822856ccf709f713a67b6dde1d34f1d4a7db290ac9308e2272e7f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_noether, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Sep 30 18:26:21 compute-0 awesome_noether[325289]: 167 167
Sep 30 18:26:21 compute-0 systemd[1]: libpod-ad9e87e194822856ccf709f713a67b6dde1d34f1d4a7db290ac9308e2272e7f6.scope: Deactivated successfully.
Sep 30 18:26:21 compute-0 podman[325272]: 2025-09-30 18:26:21.230735698 +0000 UTC m=+0.169658680 container died ad9e87e194822856ccf709f713a67b6dde1d34f1d4a7db290ac9308e2272e7f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_noether, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Sep 30 18:26:21 compute-0 nova_compute[265391]: 2025-09-30 18:26:21.242 2 INFO nova.compute.manager [None req-6a73b3cd-7e58-4b4b-9c14-b82baeb69652 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Took 1.82 seconds to destroy the instance on the hypervisor.
Sep 30 18:26:21 compute-0 nova_compute[265391]: 2025-09-30 18:26:21.244 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-6a73b3cd-7e58-4b4b-9c14-b82baeb69652 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Sep 30 18:26:21 compute-0 nova_compute[265391]: 2025-09-30 18:26:21.245 2 DEBUG nova.compute.manager [-] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Sep 30 18:26:21 compute-0 nova_compute[265391]: 2025-09-30 18:26:21.245 2 DEBUG nova.network.neutron [-] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Sep 30 18:26:21 compute-0 nova_compute[265391]: 2025-09-30 18:26:21.246 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:26:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea6be4ce5c9f3a3abe91d3c774bdc4034c8f01df74db97a210f7750239bb2dc1-merged.mount: Deactivated successfully.
Sep 30 18:26:21 compute-0 podman[325272]: 2025-09-30 18:26:21.282448352 +0000 UTC m=+0.221371344 container remove ad9e87e194822856ccf709f713a67b6dde1d34f1d4a7db290ac9308e2272e7f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_noether, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Sep 30 18:26:21 compute-0 systemd[1]: libpod-conmon-ad9e87e194822856ccf709f713a67b6dde1d34f1d4a7db290ac9308e2272e7f6.scope: Deactivated successfully.
Sep 30 18:26:21 compute-0 nova_compute[265391]: 2025-09-30 18:26:21.378 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:26:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:21 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001bd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:21 compute-0 podman[325312]: 2025-09-30 18:26:21.492620405 +0000 UTC m=+0.061392347 container create 7878107e61f8dbc582f40188026898781877e2ca7b8cb95f98aee1af4893db80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_hellman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Sep 30 18:26:21 compute-0 systemd[1]: Started libpod-conmon-7878107e61f8dbc582f40188026898781877e2ca7b8cb95f98aee1af4893db80.scope.
Sep 30 18:26:21 compute-0 podman[325312]: 2025-09-30 18:26:21.465675465 +0000 UTC m=+0.034447467 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:26:21 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:26:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/110d44f12f2a86d6485c309aa80ccddf012d79cdc88a63c0449ca39b843ad3c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:26:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/110d44f12f2a86d6485c309aa80ccddf012d79cdc88a63c0449ca39b843ad3c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:26:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/110d44f12f2a86d6485c309aa80ccddf012d79cdc88a63c0449ca39b843ad3c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:26:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/110d44f12f2a86d6485c309aa80ccddf012d79cdc88a63c0449ca39b843ad3c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:26:21 compute-0 podman[325312]: 2025-09-30 18:26:21.601994118 +0000 UTC m=+0.170766070 container init 7878107e61f8dbc582f40188026898781877e2ca7b8cb95f98aee1af4893db80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_hellman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid)
Sep 30 18:26:21 compute-0 podman[325312]: 2025-09-30 18:26:21.61479707 +0000 UTC m=+0.183568972 container start 7878107e61f8dbc582f40188026898781877e2ca7b8cb95f98aee1af4893db80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_hellman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Sep 30 18:26:21 compute-0 podman[325312]: 2025-09-30 18:26:21.618544438 +0000 UTC m=+0.187316350 container attach 7878107e61f8dbc582f40188026898781877e2ca7b8cb95f98aee1af4893db80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Sep 30 18:26:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:26:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:26:22.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:26:22 compute-0 lvm[325404]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:26:22 compute-0 lvm[325404]: VG ceph_vg0 finished
Sep 30 18:26:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:26:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:26:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:22 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c004720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:22 compute-0 sad_hellman[325330]: {}
Sep 30 18:26:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:26:22 compute-0 systemd[1]: libpod-7878107e61f8dbc582f40188026898781877e2ca7b8cb95f98aee1af4893db80.scope: Deactivated successfully.
Sep 30 18:26:22 compute-0 systemd[1]: libpod-7878107e61f8dbc582f40188026898781877e2ca7b8cb95f98aee1af4893db80.scope: Consumed 1.307s CPU time.
Sep 30 18:26:22 compute-0 podman[325312]: 2025-09-30 18:26:22.403819757 +0000 UTC m=+0.972591669 container died 7878107e61f8dbc582f40188026898781877e2ca7b8cb95f98aee1af4893db80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_hellman, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Sep 30 18:26:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1432: 353 pgs: 353 active+clean; 121 MiB data, 292 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 9.2 KiB/s wr, 29 op/s
Sep 30 18:26:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-110d44f12f2a86d6485c309aa80ccddf012d79cdc88a63c0449ca39b843ad3c1-merged.mount: Deactivated successfully.
Sep 30 18:26:22 compute-0 podman[325312]: 2025-09-30 18:26:22.459106764 +0000 UTC m=+1.027878676 container remove 7878107e61f8dbc582f40188026898781877e2ca7b8cb95f98aee1af4893db80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_hellman, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Sep 30 18:26:22 compute-0 systemd[1]: libpod-conmon-7878107e61f8dbc582f40188026898781877e2ca7b8cb95f98aee1af4893db80.scope: Deactivated successfully.
Sep 30 18:26:22 compute-0 nova_compute[265391]: 2025-09-30 18:26:22.493 2 DEBUG nova.network.neutron [-] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:26:22 compute-0 sudo[325207]: pam_unix(sudo:session): session closed for user root
Sep 30 18:26:22 compute-0 nova_compute[265391]: 2025-09-30 18:26:22.514 2 DEBUG nova.compute.manager [req-b5d51f35-859c-421f-9c5b-0b534405ccb1 req-d2e40182-4d2b-4e1e-92e8-e3cf4c8c805d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Received event network-vif-unplugged-fba1be8e-f7c4-4bd0-b8a7-6e854986df69 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:26:22 compute-0 nova_compute[265391]: 2025-09-30 18:26:22.514 2 DEBUG oslo_concurrency.lockutils [req-b5d51f35-859c-421f-9c5b-0b534405ccb1 req-d2e40182-4d2b-4e1e-92e8-e3cf4c8c805d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "7569c2e7-ec68-4b21-b36b-7c828ac8af52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:26:22 compute-0 nova_compute[265391]: 2025-09-30 18:26:22.514 2 DEBUG oslo_concurrency.lockutils [req-b5d51f35-859c-421f-9c5b-0b534405ccb1 req-d2e40182-4d2b-4e1e-92e8-e3cf4c8c805d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "7569c2e7-ec68-4b21-b36b-7c828ac8af52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:26:22 compute-0 nova_compute[265391]: 2025-09-30 18:26:22.515 2 DEBUG oslo_concurrency.lockutils [req-b5d51f35-859c-421f-9c5b-0b534405ccb1 req-d2e40182-4d2b-4e1e-92e8-e3cf4c8c805d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "7569c2e7-ec68-4b21-b36b-7c828ac8af52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:26:22 compute-0 nova_compute[265391]: 2025-09-30 18:26:22.515 2 DEBUG nova.compute.manager [req-b5d51f35-859c-421f-9c5b-0b534405ccb1 req-d2e40182-4d2b-4e1e-92e8-e3cf4c8c805d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] No waiting events found dispatching network-vif-unplugged-fba1be8e-f7c4-4bd0-b8a7-6e854986df69 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:26:22 compute-0 nova_compute[265391]: 2025-09-30 18:26:22.515 2 DEBUG nova.compute.manager [req-b5d51f35-859c-421f-9c5b-0b534405ccb1 req-d2e40182-4d2b-4e1e-92e8-e3cf4c8c805d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Received event network-vif-unplugged-fba1be8e-f7c4-4bd0-b8a7-6e854986df69 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:26:22 compute-0 nova_compute[265391]: 2025-09-30 18:26:22.515 2 DEBUG nova.compute.manager [req-b5d51f35-859c-421f-9c5b-0b534405ccb1 req-d2e40182-4d2b-4e1e-92e8-e3cf4c8c805d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Received event network-vif-plugged-fba1be8e-f7c4-4bd0-b8a7-6e854986df69 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:26:22 compute-0 nova_compute[265391]: 2025-09-30 18:26:22.515 2 DEBUG oslo_concurrency.lockutils [req-b5d51f35-859c-421f-9c5b-0b534405ccb1 req-d2e40182-4d2b-4e1e-92e8-e3cf4c8c805d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "7569c2e7-ec68-4b21-b36b-7c828ac8af52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:26:22 compute-0 nova_compute[265391]: 2025-09-30 18:26:22.516 2 DEBUG oslo_concurrency.lockutils [req-b5d51f35-859c-421f-9c5b-0b534405ccb1 req-d2e40182-4d2b-4e1e-92e8-e3cf4c8c805d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "7569c2e7-ec68-4b21-b36b-7c828ac8af52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:26:22 compute-0 nova_compute[265391]: 2025-09-30 18:26:22.516 2 DEBUG oslo_concurrency.lockutils [req-b5d51f35-859c-421f-9c5b-0b534405ccb1 req-d2e40182-4d2b-4e1e-92e8-e3cf4c8c805d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "7569c2e7-ec68-4b21-b36b-7c828ac8af52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:26:22 compute-0 nova_compute[265391]: 2025-09-30 18:26:22.516 2 DEBUG nova.compute.manager [req-b5d51f35-859c-421f-9c5b-0b534405ccb1 req-d2e40182-4d2b-4e1e-92e8-e3cf4c8c805d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] No waiting events found dispatching network-vif-plugged-fba1be8e-f7c4-4bd0-b8a7-6e854986df69 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:26:22 compute-0 nova_compute[265391]: 2025-09-30 18:26:22.516 2 WARNING nova.compute.manager [req-b5d51f35-859c-421f-9c5b-0b534405ccb1 req-d2e40182-4d2b-4e1e-92e8-e3cf4c8c805d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Received unexpected event network-vif-plugged-fba1be8e-f7c4-4bd0-b8a7-6e854986df69 for instance with vm_state active and task_state deleting.
Sep 30 18:26:22 compute-0 nova_compute[265391]: 2025-09-30 18:26:22.517 2 DEBUG nova.compute.manager [req-b5d51f35-859c-421f-9c5b-0b534405ccb1 req-d2e40182-4d2b-4e1e-92e8-e3cf4c8c805d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Received event network-vif-deleted-fba1be8e-f7c4-4bd0-b8a7-6e854986df69 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:26:22 compute-0 nova_compute[265391]: 2025-09-30 18:26:22.517 2 INFO nova.compute.manager [req-b5d51f35-859c-421f-9c5b-0b534405ccb1 req-d2e40182-4d2b-4e1e-92e8-e3cf4c8c805d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Neutron deleted interface fba1be8e-f7c4-4bd0-b8a7-6e854986df69; detaching it from the instance and deleting it from the info cache
Sep 30 18:26:22 compute-0 nova_compute[265391]: 2025-09-30 18:26:22.517 2 DEBUG nova.network.neutron [req-b5d51f35-859c-421f-9c5b-0b534405ccb1 req-d2e40182-4d2b-4e1e-92e8-e3cf4c8c805d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:26:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:26:22 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:26:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:26:22 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:26:22 compute-0 sudo[325419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:26:22 compute-0 sudo[325419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:26:22 compute-0 sudo[325419]: pam_unix(sudo:session): session closed for user root
Sep 30 18:26:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:26:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:26:22.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:26:23 compute-0 nova_compute[265391]: 2025-09-30 18:26:23.003 2 INFO nova.compute.manager [-] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Took 1.76 seconds to deallocate network for instance.
Sep 30 18:26:23 compute-0 nova_compute[265391]: 2025-09-30 18:26:23.026 2 DEBUG nova.compute.manager [req-b5d51f35-859c-421f-9c5b-0b534405ccb1 req-d2e40182-4d2b-4e1e-92e8-e3cf4c8c805d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 7569c2e7-ec68-4b21-b36b-7c828ac8af52] Detach interface failed, port_id=fba1be8e-f7c4-4bd0-b8a7-6e854986df69, reason: Instance 7569c2e7-ec68-4b21-b36b-7c828ac8af52 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11646
Sep 30 18:26:23 compute-0 ceph-mon[73755]: pgmap v1432: 353 pgs: 353 active+clean; 121 MiB data, 292 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 9.2 KiB/s wr, 29 op/s
Sep 30 18:26:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:26:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:26:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:23 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_51] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c004720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:23 compute-0 nova_compute[265391]: 2025-09-30 18:26:23.529 2 DEBUG oslo_concurrency.lockutils [None req-6a73b3cd-7e58-4b4b-9c14-b82baeb69652 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:26:23 compute-0 nova_compute[265391]: 2025-09-30 18:26:23.530 2 DEBUG oslo_concurrency.lockutils [None req-6a73b3cd-7e58-4b4b-9c14-b82baeb69652 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:26:23 compute-0 nova_compute[265391]: 2025-09-30 18:26:23.574 2 DEBUG oslo_concurrency.processutils [None req-6a73b3cd-7e58-4b4b-9c14-b82baeb69652 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:26:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:26:23.785Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:26:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:26:24 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1156298293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:26:24 compute-0 nova_compute[265391]: 2025-09-30 18:26:24.075 2 DEBUG oslo_concurrency.processutils [None req-6a73b3cd-7e58-4b4b-9c14-b82baeb69652 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:26:24 compute-0 nova_compute[265391]: 2025-09-30 18:26:24.086 2 DEBUG nova.compute.provider_tree [None req-6a73b3cd-7e58-4b4b-9c14-b82baeb69652 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:26:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:26:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:26:24.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:26:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:24 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_51] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c004720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:24 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1156298293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:26:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1433: 353 pgs: 353 active+clean; 41 MiB data, 243 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 10 KiB/s wr, 57 op/s
Sep 30 18:26:24 compute-0 nova_compute[265391]: 2025-09-30 18:26:24.598 2 DEBUG nova.scheduler.client.report [None req-6a73b3cd-7e58-4b4b-9c14-b82baeb69652 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:26:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:26:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:26:24.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:26:25 compute-0 nova_compute[265391]: 2025-09-30 18:26:25.107 2 DEBUG oslo_concurrency.lockutils [None req-6a73b3cd-7e58-4b4b-9c14-b82baeb69652 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.577s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:26:25 compute-0 nova_compute[265391]: 2025-09-30 18:26:25.128 2 INFO nova.scheduler.client.report [None req-6a73b3cd-7e58-4b4b-9c14-b82baeb69652 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Deleted allocations for instance 7569c2e7-ec68-4b21-b36b-7c828ac8af52
Sep 30 18:26:25 compute-0 nova_compute[265391]: 2025-09-30 18:26:25.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:25 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_51] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c004720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:25 compute-0 ceph-mon[73755]: pgmap v1433: 353 pgs: 353 active+clean; 41 MiB data, 243 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 10 KiB/s wr, 57 op/s
Sep 30 18:26:25 compute-0 nova_compute[265391]: 2025-09-30 18:26:25.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:26:26 compute-0 nova_compute[265391]: 2025-09-30 18:26:26.154 2 DEBUG oslo_concurrency.lockutils [None req-6a73b3cd-7e58-4b4b-9c14-b82baeb69652 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "7569c2e7-ec68-4b21-b36b-7c828ac8af52" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.264s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:26:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:26:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:26:26.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:26:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:26 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_51] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001bd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1434: 353 pgs: 353 active+clean; 41 MiB data, 243 MiB used, 40 GiB / 40 GiB avail; 27 KiB/s rd, 1.9 KiB/s wr, 40 op/s
Sep 30 18:26:26 compute-0 ceph-mon[73755]: pgmap v1434: 353 pgs: 353 active+clean; 41 MiB data, 243 MiB used, 40 GiB / 40 GiB avail; 27 KiB/s rd, 1.9 KiB/s wr, 40 op/s
Sep 30 18:26:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:26:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:26:26.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:26:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:26:27.250Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:26:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:27 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c004720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:26:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:26:28.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:26:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:28 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_49] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004cb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1435: 353 pgs: 353 active+clean; 41 MiB data, 243 MiB used, 40 GiB / 40 GiB avail; 27 KiB/s rd, 1.9 KiB/s wr, 40 op/s
Sep 30 18:26:28 compute-0 ceph-mon[73755]: pgmap v1435: 353 pgs: 353 active+clean; 41 MiB data, 243 MiB used, 40 GiB / 40 GiB avail; 27 KiB/s rd, 1.9 KiB/s wr, 40 op/s
Sep 30 18:26:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:26:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:26:28.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:26:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:26:28] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:26:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:26:28] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:26:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:29 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:29 compute-0 podman[276673]: time="2025-09-30T18:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:26:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:26:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10297 "" "Go-http-client/1.1"
Sep 30 18:26:30 compute-0 nova_compute[265391]: 2025-09-30 18:26:30.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:26:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:26:30.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:26:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:30 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_51] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001bd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1436: 353 pgs: 353 active+clean; 41 MiB data, 243 MiB used, 40 GiB / 40 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 33 op/s
Sep 30 18:26:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:26:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:26:30.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:26:30 compute-0 nova_compute[265391]: 2025-09-30 18:26:30.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:26:31 compute-0 openstack_network_exporter[279566]: ERROR   18:26:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:26:31 compute-0 openstack_network_exporter[279566]: ERROR   18:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:26:31 compute-0 openstack_network_exporter[279566]: ERROR   18:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:26:31 compute-0 openstack_network_exporter[279566]: ERROR   18:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:26:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:26:31 compute-0 openstack_network_exporter[279566]: ERROR   18:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:26:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:26:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:31 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c004720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:31 compute-0 ceph-mon[73755]: pgmap v1436: 353 pgs: 353 active+clean; 41 MiB data, 243 MiB used, 40 GiB / 40 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 33 op/s
Sep 30 18:26:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:26:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:26:32.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:26:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:32 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_49] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004cb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1437: 353 pgs: 353 active+clean; 41 MiB data, 243 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:26:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:26:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:26:32.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:26:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:33 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:33 compute-0 ceph-mon[73755]: pgmap v1437: 353 pgs: 353 active+clean; 41 MiB data, 243 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:26:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:26:33.786Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:26:33 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 18:26:33 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3001.3 total, 600.0 interval
                                           Cumulative writes: 8376 writes, 37K keys, 8373 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 8376 writes, 8373 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1512 writes, 6827 keys, 1511 commit groups, 1.0 writes per commit group, ingest: 10.57 MB, 0.02 MB/s
                                           Interval WAL: 1512 writes, 1511 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     90.8      0.57              0.15        22    0.026       0      0       0.0       0.0
                                             L6      1/0   12.03 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.5    193.0    164.8      1.41              0.64        21    0.067    116K    11K       0.0       0.0
                                            Sum      1/0   12.03 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.5    137.4    143.5      1.98              0.79        43    0.046    116K    11K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.8    172.5    176.5      0.38              0.22        10    0.038     33K   3010       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    193.0    164.8      1.41              0.64        21    0.067    116K    11K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    157.7      0.33              0.15        21    0.016       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.24              0.00         1    0.242       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3001.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.050, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.28 GB write, 0.09 MB/s write, 0.27 GB read, 0.09 MB/s read, 2.0 seconds
                                           Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e76de37350#2 capacity: 304.00 MB usage: 26.39 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000207 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1511,25.54 MB,8.40061%) FilterBlock(44,322.92 KB,0.103735%) IndexBlock(44,546.62 KB,0.175597%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Sep 30 18:26:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:26:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:26:34.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:26:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:34 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_51] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001bd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:34 compute-0 sudo[325478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:26:34 compute-0 sudo[325478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:26:34 compute-0 sudo[325478]: pam_unix(sudo:session): session closed for user root
Sep 30 18:26:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1438: 353 pgs: 353 active+clean; 41 MiB data, 243 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:26:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:26:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:26:34.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:26:35 compute-0 nova_compute[265391]: 2025-09-30 18:26:35.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:35 compute-0 nova_compute[265391]: 2025-09-30 18:26:35.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:26:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:35 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_46] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c004720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:35 compute-0 ceph-mon[73755]: pgmap v1438: 353 pgs: 353 active+clean; 41 MiB data, 243 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:26:35 compute-0 nova_compute[265391]: 2025-09-30 18:26:35.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:26:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:26:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:26:36.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:26:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:36 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_49] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004cd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:36 compute-0 nova_compute[265391]: 2025-09-30 18:26:36.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:26:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1439: 353 pgs: 353 active+clean; 41 MiB data, 243 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:26:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:26:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4202001670' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:26:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:26:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4202001670' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:26:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/4202001670' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:26:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/4202001670' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:26:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:26:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:26:36.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:26:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:26:37.251Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:26:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:26:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:26:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:26:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:26:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:26:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:26:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:26:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:26:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:37 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:37 compute-0 ceph-mon[73755]: pgmap v1439: 353 pgs: 353 active+clean; 41 MiB data, 243 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:26:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:26:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:26:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:26:38.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:26:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_51] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001bd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:38 compute-0 nova_compute[265391]: 2025-09-30 18:26:38.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:26:38 compute-0 nova_compute[265391]: 2025-09-30 18:26:38.427 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:26:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1440: 353 pgs: 353 active+clean; 41 MiB data, 243 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:26:38 compute-0 podman[325508]: 2025-09-30 18:26:38.530597847 +0000 UTC m=+0.064014715 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:26:38 compute-0 podman[325507]: 2025-09-30 18:26:38.562908047 +0000 UTC m=+0.098213234 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Sep 30 18:26:38 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3175932376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:26:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:26:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:26:38.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:26:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:26:38] "GET /metrics HTTP/1.1" 200 46629 "" "Prometheus/2.51.0"
Sep 30 18:26:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:26:38] "GET /metrics HTTP/1.1" 200 46629 "" "Prometheus/2.51.0"
Sep 30 18:26:39 compute-0 nova_compute[265391]: 2025-09-30 18:26:39.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:26:39 compute-0 nova_compute[265391]: 2025-09-30 18:26:39.429 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:26:39 compute-0 nova_compute[265391]: 2025-09-30 18:26:39.429 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:26:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:39 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_49] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001bd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:39 compute-0 ceph-mon[73755]: pgmap v1440: 353 pgs: 353 active+clean; 41 MiB data, 243 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:26:39 compute-0 nova_compute[265391]: 2025-09-30 18:26:39.947 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:26:39 compute-0 nova_compute[265391]: 2025-09-30 18:26:39.949 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:26:39 compute-0 nova_compute[265391]: 2025-09-30 18:26:39.949 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:26:39 compute-0 nova_compute[265391]: 2025-09-30 18:26:39.949 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:26:39 compute-0 nova_compute[265391]: 2025-09-30 18:26:39.950 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:26:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:26:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:26:40.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:26:40 compute-0 nova_compute[265391]: 2025-09-30 18:26:40.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:40 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_49] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004cf0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:26:40 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1560769864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:26:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1441: 353 pgs: 353 active+clean; 41 MiB data, 243 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:26:40 compute-0 nova_compute[265391]: 2025-09-30 18:26:40.455 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:26:40 compute-0 nova_compute[265391]: 2025-09-30 18:26:40.598 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:26:40 compute-0 nova_compute[265391]: 2025-09-30 18:26:40.601 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:26:40 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3534214684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:26:40 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1560769864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:26:40 compute-0 nova_compute[265391]: 2025-09-30 18:26:40.625 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.025s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:26:40 compute-0 nova_compute[265391]: 2025-09-30 18:26:40.626 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4359MB free_disk=39.992183685302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:26:40 compute-0 nova_compute[265391]: 2025-09-30 18:26:40.626 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:26:40 compute-0 nova_compute[265391]: 2025-09-30 18:26:40.627 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:26:40 compute-0 nova_compute[265391]: 2025-09-30 18:26:40.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:26:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:26:40.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:26:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:26:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:41 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:41 compute-0 podman[325583]: 2025-09-30 18:26:41.541549522 +0000 UTC m=+0.083817570 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 18:26:41 compute-0 ceph-mon[73755]: pgmap v1441: 353 pgs: 353 active+clean; 41 MiB data, 243 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:26:41 compute-0 nova_compute[265391]: 2025-09-30 18:26:41.673 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:26:41 compute-0 nova_compute[265391]: 2025-09-30 18:26:41.673 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:26:40 up  1:29,  0 user,  load average: 0.37, 0.63, 0.77\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:26:41 compute-0 nova_compute[265391]: 2025-09-30 18:26:41.688 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:26:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:26:42 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3568289352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:26:42 compute-0 nova_compute[265391]: 2025-09-30 18:26:42.162 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:26:42 compute-0 nova_compute[265391]: 2025-09-30 18:26:42.167 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:26:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:26:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:26:42.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:26:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:42 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_51] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840033e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1442: 353 pgs: 353 active+clean; 41 MiB data, 243 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:26:42 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3568289352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:26:42 compute-0 nova_compute[265391]: 2025-09-30 18:26:42.679 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:26:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:26:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:26:42.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:26:42 compute-0 nova_compute[265391]: 2025-09-30 18:26:42.824 2 DEBUG oslo_concurrency.lockutils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "d188c0fb-8668-4ab2-b174-49e0e20505ba" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:26:42 compute-0 nova_compute[265391]: 2025-09-30 18:26:42.825 2 DEBUG oslo_concurrency.lockutils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "d188c0fb-8668-4ab2-b174-49e0e20505ba" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:26:43 compute-0 nova_compute[265391]: 2025-09-30 18:26:43.193 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:26:43 compute-0 nova_compute[265391]: 2025-09-30 18:26:43.193 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.566s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:26:43 compute-0 nova_compute[265391]: 2025-09-30 18:26:43.331 2 DEBUG nova.compute.manager [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Sep 30 18:26:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:43 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_53] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c004720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:43 compute-0 ceph-mon[73755]: pgmap v1442: 353 pgs: 353 active+clean; 41 MiB data, 243 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:26:43 compute-0 sshd-session[325626]: Invalid user faisal from 14.225.220.107 port 57570
Sep 30 18:26:43 compute-0 sshd-session[325626]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:26:43 compute-0 sshd-session[325626]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107
Sep 30 18:26:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:26:43.787Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:26:43 compute-0 nova_compute[265391]: 2025-09-30 18:26:43.870 2 DEBUG oslo_concurrency.lockutils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:26:43 compute-0 nova_compute[265391]: 2025-09-30 18:26:43.871 2 DEBUG oslo_concurrency.lockutils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:26:43 compute-0 nova_compute[265391]: 2025-09-30 18:26:43.876 2 DEBUG nova.virt.hardware [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Sep 30 18:26:43 compute-0 nova_compute[265391]: 2025-09-30 18:26:43.877 2 INFO nova.compute.claims [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Claim successful on node compute-0.ctlplane.example.com
Sep 30 18:26:44 compute-0 nova_compute[265391]: 2025-09-30 18:26:44.191 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:26:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:26:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:26:44.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:26:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:44 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c004720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:44 compute-0 nova_compute[265391]: 2025-09-30 18:26:44.424 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:26:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1443: 353 pgs: 353 active+clean; 41 MiB data, 243 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:26:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:26:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:26:44.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:26:44 compute-0 nova_compute[265391]: 2025-09-30 18:26:44.943 2 DEBUG oslo_concurrency.processutils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:26:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:26:45 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/941839852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:26:45 compute-0 nova_compute[265391]: 2025-09-30 18:26:45.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:45 compute-0 nova_compute[265391]: 2025-09-30 18:26:45.368 2 DEBUG oslo_concurrency.processutils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:26:45 compute-0 nova_compute[265391]: 2025-09-30 18:26:45.374 2 DEBUG nova.compute.provider_tree [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:26:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:45 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_51] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c004720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:45 compute-0 ceph-mon[73755]: pgmap v1443: 353 pgs: 353 active+clean; 41 MiB data, 243 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 0 op/s
Sep 30 18:26:45 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/941839852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:26:45 compute-0 nova_compute[265391]: 2025-09-30 18:26:45.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:45 compute-0 sshd-session[325626]: Failed password for invalid user faisal from 14.225.220.107 port 57570 ssh2
Sep 30 18:26:45 compute-0 nova_compute[265391]: 2025-09-30 18:26:45.886 2 DEBUG nova.scheduler.client.report [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:26:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:26:46 compute-0 sshd-session[325626]: Received disconnect from 14.225.220.107 port 57570:11: Bye Bye [preauth]
Sep 30 18:26:46 compute-0 sshd-session[325626]: Disconnected from invalid user faisal 14.225.220.107 port 57570 [preauth]
Sep 30 18:26:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:26:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:26:46.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:26:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:46 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_51] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840033e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:46 compute-0 nova_compute[265391]: 2025-09-30 18:26:46.396 2 DEBUG oslo_concurrency.lockutils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.526s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:26:46 compute-0 nova_compute[265391]: 2025-09-30 18:26:46.398 2 DEBUG nova.compute.manager [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Sep 30 18:26:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1444: 353 pgs: 353 active+clean; 41 MiB data, 243 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:26:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:26:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:26:46.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:26:46 compute-0 nova_compute[265391]: 2025-09-30 18:26:46.910 2 DEBUG nova.compute.manager [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Sep 30 18:26:46 compute-0 nova_compute[265391]: 2025-09-30 18:26:46.911 2 DEBUG nova.network.neutron [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Sep 30 18:26:46 compute-0 nova_compute[265391]: 2025-09-30 18:26:46.911 2 WARNING neutronclient.v2_0.client [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:26:46 compute-0 nova_compute[265391]: 2025-09-30 18:26:46.912 2 WARNING neutronclient.v2_0.client [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:26:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:26:47.252Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:26:47 compute-0 nova_compute[265391]: 2025-09-30 18:26:47.419 2 INFO nova.virt.libvirt.driver [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Sep 30 18:26:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:47 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_53] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004d30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:47 compute-0 ceph-mon[73755]: pgmap v1444: 353 pgs: 353 active+clean; 41 MiB data, 243 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:26:47 compute-0 nova_compute[265391]: 2025-09-30 18:26:47.803 2 DEBUG nova.network.neutron [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Successfully created port: ac8e3da5-cb09-4223-89d5-318d077ea35e _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Sep 30 18:26:47 compute-0 nova_compute[265391]: 2025-09-30 18:26:47.927 2 DEBUG nova.compute.manager [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Sep 30 18:26:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:26:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:26:48.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:26:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:48 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c004720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1445: 353 pgs: 353 active+clean; 41 MiB data, 243 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:26:48 compute-0 nova_compute[265391]: 2025-09-30 18:26:48.680 2 DEBUG nova.network.neutron [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Successfully updated port: ac8e3da5-cb09-4223-89d5-318d077ea35e _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Sep 30 18:26:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:26:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:26:48.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:26:48 compute-0 nova_compute[265391]: 2025-09-30 18:26:48.743 2 DEBUG nova.compute.manager [req-03e93422-36db-4379-a25b-5163a45a2e43 req-1691ef38-09fc-48f7-bf3d-c5d7d09ff110 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Received event network-changed-ac8e3da5-cb09-4223-89d5-318d077ea35e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:26:48 compute-0 nova_compute[265391]: 2025-09-30 18:26:48.746 2 DEBUG nova.compute.manager [req-03e93422-36db-4379-a25b-5163a45a2e43 req-1691ef38-09fc-48f7-bf3d-c5d7d09ff110 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Refreshing instance network info cache due to event network-changed-ac8e3da5-cb09-4223-89d5-318d077ea35e. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:26:48 compute-0 nova_compute[265391]: 2025-09-30 18:26:48.746 2 DEBUG oslo_concurrency.lockutils [req-03e93422-36db-4379-a25b-5163a45a2e43 req-1691ef38-09fc-48f7-bf3d-c5d7d09ff110 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-d188c0fb-8668-4ab2-b174-49e0e20505ba" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:26:48 compute-0 nova_compute[265391]: 2025-09-30 18:26:48.747 2 DEBUG oslo_concurrency.lockutils [req-03e93422-36db-4379-a25b-5163a45a2e43 req-1691ef38-09fc-48f7-bf3d-c5d7d09ff110 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-d188c0fb-8668-4ab2-b174-49e0e20505ba" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:26:48 compute-0 nova_compute[265391]: 2025-09-30 18:26:48.747 2 DEBUG nova.network.neutron [req-03e93422-36db-4379-a25b-5163a45a2e43 req-1691ef38-09fc-48f7-bf3d-c5d7d09ff110 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Refreshing network info cache for port ac8e3da5-cb09-4223-89d5-318d077ea35e _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:26:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:26:48] "GET /metrics HTTP/1.1" 200 46629 "" "Prometheus/2.51.0"
Sep 30 18:26:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:26:48] "GET /metrics HTTP/1.1" 200 46629 "" "Prometheus/2.51.0"
Sep 30 18:26:48 compute-0 nova_compute[265391]: 2025-09-30 18:26:48.948 2 DEBUG nova.compute.manager [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Sep 30 18:26:48 compute-0 nova_compute[265391]: 2025-09-30 18:26:48.951 2 DEBUG nova.virt.libvirt.driver [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Sep 30 18:26:48 compute-0 nova_compute[265391]: 2025-09-30 18:26:48.951 2 INFO nova.virt.libvirt.driver [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Creating image(s)
Sep 30 18:26:48 compute-0 nova_compute[265391]: 2025-09-30 18:26:48.989 2 DEBUG nova.storage.rbd_utils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image d188c0fb-8668-4ab2-b174-49e0e20505ba_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:26:49 compute-0 nova_compute[265391]: 2025-09-30 18:26:49.020 2 DEBUG nova.storage.rbd_utils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image d188c0fb-8668-4ab2-b174-49e0e20505ba_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:26:49 compute-0 nova_compute[265391]: 2025-09-30 18:26:49.062 2 DEBUG nova.storage.rbd_utils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image d188c0fb-8668-4ab2-b174-49e0e20505ba_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:26:49 compute-0 nova_compute[265391]: 2025-09-30 18:26:49.068 2 DEBUG oslo_concurrency.processutils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:26:49 compute-0 nova_compute[265391]: 2025-09-30 18:26:49.164 2 DEBUG oslo_concurrency.processutils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
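
The two oslo_concurrency.processutils lines above show the compute service probing the cached base image with `qemu-img info --force-share --output=json` before building the instance disk. A minimal sketch, assuming the same base-image path from the log and leaving out the oslo prlimit resource wrapper, that reproduces the probe and reads the standard qemu-img JSON fields:

    # Minimal sketch: reproduce the base-image probe seen in the log above.
    # Assumes qemu-img is installed and the path (copied from the log) exists
    # on the compute host; adjust for your environment.
    import json
    import subprocess

    BASE = "/var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457"

    def probe_image(path: str) -> dict:
        """Run `qemu-img info` in JSON mode, as the log shows, and parse it."""
        out = subprocess.run(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    if __name__ == "__main__":
        info = probe_image(BASE)
        # "format" and "virtual-size" are standard qemu-img JSON output fields.
        print(info.get("format"), info.get("virtual-size"))
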
Sep 30 18:26:49 compute-0 nova_compute[265391]: 2025-09-30 18:26:49.165 2 DEBUG oslo_concurrency.lockutils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "cb2d580238c9b109feae7f1462613dc547671457" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:26:49 compute-0 nova_compute[265391]: 2025-09-30 18:26:49.165 2 DEBUG oslo_concurrency.lockutils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:26:49 compute-0 nova_compute[265391]: 2025-09-30 18:26:49.166 2 DEBUG oslo_concurrency.lockutils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:26:49 compute-0 nova_compute[265391]: 2025-09-30 18:26:49.194 2 DEBUG nova.storage.rbd_utils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image d188c0fb-8668-4ab2-b174-49e0e20505ba_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:26:49 compute-0 nova_compute[265391]: 2025-09-30 18:26:49.200 2 DEBUG oslo_concurrency.processutils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 d188c0fb-8668-4ab2-b174-49e0e20505ba_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:26:49 compute-0 nova_compute[265391]: 2025-09-30 18:26:49.213 2 DEBUG oslo_concurrency.lockutils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "refresh_cache-d188c0fb-8668-4ab2-b174-49e0e20505ba" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:26:49 compute-0 nova_compute[265391]: 2025-09-30 18:26:49.254 2 WARNING neutronclient.v2_0.client [req-03e93422-36db-4379-a25b-5163a45a2e43 req-1691ef38-09fc-48f7-bf3d-c5d7d09ff110 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:26:49 compute-0 nova_compute[265391]: 2025-09-30 18:26:49.391 2 DEBUG nova.network.neutron [req-03e93422-36db-4379-a25b-5163a45a2e43 req-1691ef38-09fc-48f7-bf3d-c5d7d09ff110 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:26:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:49 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_51] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c004720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:49 compute-0 nova_compute[265391]: 2025-09-30 18:26:49.500 2 DEBUG oslo_concurrency.processutils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 d188c0fb-8668-4ab2-b174-49e0e20505ba_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.301s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
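
The rbd import command above copies the cached base image into the Ceph "vms" pool as d188c0fb-8668-4ab2-b174-49e0e20505ba_disk; a few lines later (18:26:49.568) rbd_utils resizes that image to 1073741824 bytes, the 1 GiB root disk of the m1.nano flavor. A rough sketch of those two steps using the rbd CLI (nova itself performs the resize through the librbd bindings), with pool, image name, client id, and conf path copied from the log:

    # Rough sketch of the import + resize sequence the log records; values are
    # copied from the log, and this is not the nova implementation itself.
    import subprocess

    BASE = "/var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457"
    IMAGE = "d188c0fb-8668-4ab2-b174-49e0e20505ba_disk"
    CEPH_ARGS = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    def import_and_resize(size_mb: int = 1024) -> None:
        # Import the cached base image into the "vms" pool, as logged.
        subprocess.run(
            ["rbd", "import", "--pool", "vms", BASE, IMAGE, "--image-format=2"] + CEPH_ARGS,
            check=True,
        )
        # Grow the image to the flavor root disk; rbd --size is in MB by
        # default, so 1024 MB matches the 1073741824-byte resize in the log.
        subprocess.run(
            ["rbd", "resize", "--pool", "vms", IMAGE, "--size", str(size_mb)] + CEPH_ARGS,
            check=True,
        )

    if __name__ == "__main__":
        import_and_resize()
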
Sep 30 18:26:49 compute-0 nova_compute[265391]: 2025-09-30 18:26:49.562 2 DEBUG nova.network.neutron [req-03e93422-36db-4379-a25b-5163a45a2e43 req-1691ef38-09fc-48f7-bf3d-c5d7d09ff110 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:26:49 compute-0 nova_compute[265391]: 2025-09-30 18:26:49.568 2 DEBUG nova.storage.rbd_utils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] resizing rbd image d188c0fb-8668-4ab2-b174-49e0e20505ba_disk to 1073741824 resize /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:288
Sep 30 18:26:49 compute-0 ceph-mon[73755]: pgmap v1445: 353 pgs: 353 active+clean; 41 MiB data, 243 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:26:49 compute-0 nova_compute[265391]: 2025-09-30 18:26:49.787 2 DEBUG nova.virt.libvirt.driver [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Sep 30 18:26:49 compute-0 nova_compute[265391]: 2025-09-30 18:26:49.787 2 DEBUG nova.virt.libvirt.driver [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Ensure instance console log exists: /var/lib/nova/instances/d188c0fb-8668-4ab2-b174-49e0e20505ba/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Sep 30 18:26:49 compute-0 nova_compute[265391]: 2025-09-30 18:26:49.789 2 DEBUG oslo_concurrency.lockutils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:26:49 compute-0 nova_compute[265391]: 2025-09-30 18:26:49.789 2 DEBUG oslo_concurrency.lockutils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:26:49 compute-0 nova_compute[265391]: 2025-09-30 18:26:49.789 2 DEBUG oslo_concurrency.lockutils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:26:50 compute-0 nova_compute[265391]: 2025-09-30 18:26:50.195 2 DEBUG oslo_concurrency.lockutils [req-03e93422-36db-4379-a25b-5163a45a2e43 req-1691ef38-09fc-48f7-bf3d-c5d7d09ff110 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-d188c0fb-8668-4ab2-b174-49e0e20505ba" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:26:50 compute-0 nova_compute[265391]: 2025-09-30 18:26:50.196 2 DEBUG oslo_concurrency.lockutils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquired lock "refresh_cache-d188c0fb-8668-4ab2-b174-49e0e20505ba" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:26:50 compute-0 nova_compute[265391]: 2025-09-30 18:26:50.196 2 DEBUG nova.network.neutron [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:26:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:26:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:26:50.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:26:50 compute-0 nova_compute[265391]: 2025-09-30 18:26:50.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:50 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_49] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840033e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1446: 353 pgs: 353 active+clean; 41 MiB data, 243 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:26:50 compute-0 podman[325826]: 2025-09-30 18:26:50.549267284 +0000 UTC m=+0.074091656 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vendor=Red Hat, Inc., distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, name=ubi9-minimal, release=1755695350, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Sep 30 18:26:50 compute-0 podman[325824]: 2025-09-30 18:26:50.561885432 +0000 UTC m=+0.084598160 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, container_name=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20250930)
Sep 30 18:26:50 compute-0 podman[325825]: 2025-09-30 18:26:50.583460483 +0000 UTC m=+0.107446584 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.4)
Sep 30 18:26:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:26:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:26:50.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:26:50 compute-0 nova_compute[265391]: 2025-09-30 18:26:50.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:26:51 compute-0 nova_compute[265391]: 2025-09-30 18:26:51.393 2 DEBUG nova.network.neutron [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:26:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:51 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_53] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004d50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:51 compute-0 nova_compute[265391]: 2025-09-30 18:26:51.646 2 WARNING neutronclient.v2_0.client [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:26:51 compute-0 ceph-mon[73755]: pgmap v1446: 353 pgs: 353 active+clean; 41 MiB data, 243 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:26:51 compute-0 nova_compute[265391]: 2025-09-30 18:26:51.827 2 DEBUG nova.network.neutron [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Updating instance_info_cache with network_info: [{"id": "ac8e3da5-cb09-4223-89d5-318d077ea35e", "address": "fa:16:3e:bb:e0:b1", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac8e3da5-cb", "ovs_interfaceid": "ac8e3da5-cb09-4223-89d5-318d077ea35e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:26:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:26:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:26:52 compute-0 nova_compute[265391]: 2025-09-30 18:26:52.335 2 DEBUG oslo_concurrency.lockutils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Releasing lock "refresh_cache-d188c0fb-8668-4ab2-b174-49e0e20505ba" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:26:52 compute-0 nova_compute[265391]: 2025-09-30 18:26:52.336 2 DEBUG nova.compute.manager [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Instance network_info: |[{"id": "ac8e3da5-cb09-4223-89d5-318d077ea35e", "address": "fa:16:3e:bb:e0:b1", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac8e3da5-cb", "ovs_interfaceid": "ac8e3da5-cb09-4223-89d5-318d077ea35e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
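
The network_info blob cached above carries the values that reappear in the <interface> element of the guest XML generated further down: MAC fa:16:3e:bb:e0:b1, tenant network MTU 1442, tap device tapac8e3da5-cb, and fixed IP 10.100.0.14. A small illustrative sketch (the JSON below is a trimmed copy of the logged blob, not a nova API) pulling those fields out:

    # Illustrative only: extract the fields from the logged network_info entry
    # that later surface in the guest XML <interface> element.
    import json

    network_info_json = """[{"id": "ac8e3da5-cb09-4223-89d5-318d077ea35e",
      "address": "fa:16:3e:bb:e0:b1",
      "network": {"meta": {"mtu": 1442},
                  "subnets": [{"ips": [{"address": "10.100.0.14"}]}]},
      "devname": "tapac8e3da5-cb"}]"""

    vif = json.loads(network_info_json)[0]
    print(vif["address"])                                    # -> <mac address="fa:16:3e:bb:e0:b1"/>
    print(vif["network"]["meta"]["mtu"])                     # -> <mtu size="1442"/>
    print(vif["devname"])                                    # -> <target dev="tapac8e3da5-cb"/>
    print(vif["network"]["subnets"][0]["ips"][0]["address"]) # fixed IP 10.100.0.14
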
Sep 30 18:26:52 compute-0 nova_compute[265391]: 2025-09-30 18:26:52.338 2 DEBUG nova.virt.libvirt.driver [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Start _get_guest_xml network_info=[{"id": "ac8e3da5-cb09-4223-89d5-318d077ea35e", "address": "fa:16:3e:bb:e0:b1", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac8e3da5-cb", "ovs_interfaceid": "ac8e3da5-cb09-4223-89d5-318d077ea35e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'image_id': '5b99cbca-b655-4be5-8343-cf504005c42e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Sep 30 18:26:52 compute-0 nova_compute[265391]: 2025-09-30 18:26:52.342 2 WARNING nova.virt.libvirt.driver [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:26:52 compute-0 nova_compute[265391]: 2025-09-30 18:26:52.343 2 DEBUG nova.virt.driver [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='5b99cbca-b655-4be5-8343-cf504005c42e', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteStrategies-server-1528882058', uuid='d188c0fb-8668-4ab2-b174-49e0e20505ba'), owner=OwnerMeta(userid='623ef4a55c9e4fc28bb65e49246b5008', username='tempest-TestExecuteStrategies-1883747907-project-admin', projectid='c634e1c17ed54907969576a0eb8eff50', projectname='tempest-TestExecuteStrategies-1883747907'), image=ImageMeta(id='5b99cbca-b655-4be5-8343-cf504005c42e', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "ac8e3da5-cb09-4223-89d5-318d077ea35e", "address": "fa:16:3e:bb:e0:b1", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac8e3da5-cb", "ovs_interfaceid": "ac8e3da5-cb09-4223-89d5-318d077ea35e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20250919142712.b99a882.el10', creation_time=1759256812.3435745) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Sep 30 18:26:52 compute-0 nova_compute[265391]: 2025-09-30 18:26:52.348 2 DEBUG nova.virt.libvirt.host [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Sep 30 18:26:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:52 compute-0 nova_compute[265391]: 2025-09-30 18:26:52.348 2 DEBUG nova.virt.libvirt.host [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Sep 30 18:26:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:26:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:26:52.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:26:52 compute-0 nova_compute[265391]: 2025-09-30 18:26:52.351 2 DEBUG nova.virt.libvirt.host [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Sep 30 18:26:52 compute-0 nova_compute[265391]: 2025-09-30 18:26:52.351 2 DEBUG nova.virt.libvirt.host [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
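
The two libvirt.host debug lines above first fail to find a cgroup v1 CPU controller and then find one under cgroups v2, as expected on a host running the unified hierarchy. A hedged sketch of roughly what that v2 check amounts to (the enabled controllers of a unified hierarchy are listed in /sys/fs/cgroup/cgroup.controllers; this is not nova's exact code):

    # Hedged sketch of the cgroup v2 "cpu controller" check described above.
    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root: str = "/sys/fs/cgroup") -> bool:
        controllers = Path(root, "cgroup.controllers")
        if not controllers.exists():
            return False  # not a cgroup v2 (unified) hierarchy
        # The file lists enabled controllers space-separated, e.g. "cpuset cpu io memory ...".
        return "cpu" in controllers.read_text().split()

    print(has_cgroupsv2_cpu_controller())
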
Sep 30 18:26:52 compute-0 nova_compute[265391]: 2025-09-30 18:26:52.352 2 DEBUG nova.virt.libvirt.driver [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Sep 30 18:26:52 compute-0 nova_compute[265391]: 2025-09-30 18:26:52.352 2 DEBUG nova.virt.hardware [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-09-30T18:04:50Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Sep 30 18:26:52 compute-0 nova_compute[265391]: 2025-09-30 18:26:52.352 2 DEBUG nova.virt.hardware [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Sep 30 18:26:52 compute-0 nova_compute[265391]: 2025-09-30 18:26:52.353 2 DEBUG nova.virt.hardware [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Sep 30 18:26:52 compute-0 nova_compute[265391]: 2025-09-30 18:26:52.353 2 DEBUG nova.virt.hardware [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Sep 30 18:26:52 compute-0 nova_compute[265391]: 2025-09-30 18:26:52.353 2 DEBUG nova.virt.hardware [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Sep 30 18:26:52 compute-0 nova_compute[265391]: 2025-09-30 18:26:52.353 2 DEBUG nova.virt.hardware [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Sep 30 18:26:52 compute-0 nova_compute[265391]: 2025-09-30 18:26:52.353 2 DEBUG nova.virt.hardware [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Sep 30 18:26:52 compute-0 nova_compute[265391]: 2025-09-30 18:26:52.354 2 DEBUG nova.virt.hardware [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Sep 30 18:26:52 compute-0 nova_compute[265391]: 2025-09-30 18:26:52.354 2 DEBUG nova.virt.hardware [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Sep 30 18:26:52 compute-0 nova_compute[265391]: 2025-09-30 18:26:52.354 2 DEBUG nova.virt.hardware [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Sep 30 18:26:52 compute-0 nova_compute[265391]: 2025-09-30 18:26:52.354 2 DEBUG nova.virt.hardware [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
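
The nova.virt.hardware lines above walk through topology selection for the 1-vCPU m1.nano flavor: neither flavor nor image sets limits or preferences, the defaults cap sockets, cores, and threads at 65536 each, and the only layout that multiplies out to one vCPU is 1:1:1. A simplified illustration of that search (not nova's actual implementation):

    # Simplified illustration of the topology enumeration the debug lines describe.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # No single dimension can exceed the vCPU count, so cap the search at
        # vcpus; this keeps the enumeration tiny for small flavors.
        topologies = []
        for sockets in range(1, min(max_sockets, vcpus) + 1):
            for cores in range(1, min(max_cores, vcpus) + 1):
                for threads in range(1, min(max_threads, vcpus) + 1):
                    if sockets * cores * threads == vcpus:
                        topologies.append((sockets, cores, threads))
        return topologies

    # For the 1-vCPU flavor this yields a single layout, matching
    # "Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)]".
    print(possible_topologies(1))  # [(1, 1, 1)]
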
Sep 30 18:26:52 compute-0 nova_compute[265391]: 2025-09-30 18:26:52.357 2 DEBUG oslo_concurrency.processutils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:26:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:52 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1447: 353 pgs: 353 active+clean; 41 MiB data, 243 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:26:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:26:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:26:52.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:26:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:26:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:26:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2570281823' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:26:52 compute-0 nova_compute[265391]: 2025-09-30 18:26:52.812 2 DEBUG oslo_concurrency.processutils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:26:52 compute-0 nova_compute[265391]: 2025-09-30 18:26:52.845 2 DEBUG nova.storage.rbd_utils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image d188c0fb-8668-4ab2-b174-49e0e20505ba_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:26:52 compute-0 nova_compute[265391]: 2025-09-30 18:26:52.849 2 DEBUG oslo_concurrency.processutils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:26:53 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:26:53 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/114652845' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:26:53 compute-0 nova_compute[265391]: 2025-09-30 18:26:53.320 2 DEBUG oslo_concurrency.processutils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
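
The two `ceph mon dump --format=json` runs above (one per rbd-backed disk) are how the driver learns the monitor addresses that appear as the <host name="..." port="6789"/> entries inside the rbd disk sources of the guest XML below. A hedged sketch that runs the same command and extracts host/port pairs; the exact mon dump JSON layout (a "mons" list whose entries carry an "addr" such as "192.168.122.100:6789/0") is an assumption about this Ceph release, so it parses defensively:

    # Hedged sketch: rerun the logged `ceph mon dump` command and pull out
    # monitor host:port pairs; the JSON field names are assumptions.
    import json
    import subprocess

    def monitor_endpoints():
        out = subprocess.run(
            ["ceph", "mon", "dump", "--format=json",
             "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
            check=True, capture_output=True, text=True,
        ).stdout
        dump = json.loads(out)
        endpoints = []
        for mon in dump.get("mons", []):
            addr = mon.get("addr", "")         # e.g. "192.168.122.100:6789/0"
            host_port = addr.split("/", 1)[0]  # drop the trailing "/nonce"
            if ":" in host_port:
                host, port = host_port.rsplit(":", 1)
                endpoints.append((host, port))
        return endpoints

    if __name__ == "__main__":
        # Expected to yield the hosts seen in the guest XML below, e.g.
        # [("192.168.122.100", "6789"), ("192.168.122.101", "6789")].
        print(monitor_endpoints())
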
Sep 30 18:26:53 compute-0 nova_compute[265391]: 2025-09-30 18:26:53.322 2 DEBUG nova.virt.libvirt.vif [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:26:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-1528882058',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-1528882058',id=18,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c634e1c17ed54907969576a0eb8eff50',ramdisk_id='',reservation_id='r-1gsulz3p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteStrategies-1883747907',owner_user_name='tempest-TestExecuteStrategies-1883747907-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:26:47Z,user_data=None,user_id='623ef4a55c9e4fc28bb65e49246b5008',uuid=d188c0fb-8668-4ab2-b174-49e0e20505ba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac8e3da5-cb09-4223-89d5-318d077ea35e", "address": "fa:16:3e:bb:e0:b1", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac8e3da5-cb", "ovs_interfaceid": "ac8e3da5-cb09-4223-89d5-318d077ea35e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:26:53 compute-0 nova_compute[265391]: 2025-09-30 18:26:53.322 2 DEBUG nova.network.os_vif_util [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converting VIF {"id": "ac8e3da5-cb09-4223-89d5-318d077ea35e", "address": "fa:16:3e:bb:e0:b1", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac8e3da5-cb", "ovs_interfaceid": "ac8e3da5-cb09-4223-89d5-318d077ea35e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:26:53 compute-0 nova_compute[265391]: 2025-09-30 18:26:53.323 2 DEBUG nova.network.os_vif_util [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:e0:b1,bridge_name='br-int',has_traffic_filtering=True,id=ac8e3da5-cb09-4223-89d5-318d077ea35e,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac8e3da5-cb') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:26:53 compute-0 nova_compute[265391]: 2025-09-30 18:26:53.324 2 DEBUG nova.objects.instance [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lazy-loading 'pci_devices' on Instance uuid d188c0fb-8668-4ab2-b174-49e0e20505ba obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:26:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:53 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_51] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c004720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:53 compute-0 ceph-mon[73755]: pgmap v1447: 353 pgs: 353 active+clean; 41 MiB data, 243 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:26:53 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2570281823' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:26:53 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/114652845' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:26:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:26:53.788Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:26:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:26:53.788Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:26:53 compute-0 nova_compute[265391]: 2025-09-30 18:26:53.836 2 DEBUG nova.virt.libvirt.driver [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] End _get_guest_xml xml=<domain type="kvm">
Sep 30 18:26:53 compute-0 nova_compute[265391]:   <uuid>d188c0fb-8668-4ab2-b174-49e0e20505ba</uuid>
Sep 30 18:26:53 compute-0 nova_compute[265391]:   <name>instance-00000012</name>
Sep 30 18:26:53 compute-0 nova_compute[265391]:   <memory>131072</memory>
Sep 30 18:26:53 compute-0 nova_compute[265391]:   <vcpu>1</vcpu>
Sep 30 18:26:53 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:26:53 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteStrategies-server-1528882058</nova:name>
Sep 30 18:26:53 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:26:52</nova:creationTime>
Sep 30 18:26:53 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:26:53 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:26:53 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:26:53 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:26:53 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:26:53 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:26:53 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:26:53 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:26:53 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:26:53 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:26:53 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:26:53 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:26:53 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:26:53 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:26:53 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:26:53 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:26:53 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:26:53 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:26:53 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:26:53 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:26:53 compute-0 nova_compute[265391]:         <nova:user uuid="623ef4a55c9e4fc28bb65e49246b5008">tempest-TestExecuteStrategies-1883747907-project-admin</nova:user>
Sep 30 18:26:53 compute-0 nova_compute[265391]:         <nova:project uuid="c634e1c17ed54907969576a0eb8eff50">tempest-TestExecuteStrategies-1883747907</nova:project>
Sep 30 18:26:53 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:26:53 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:26:53 compute-0 nova_compute[265391]:         <nova:port uuid="ac8e3da5-cb09-4223-89d5-318d077ea35e">
Sep 30 18:26:53 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:26:53 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:26:53 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:26:53 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <system>
Sep 30 18:26:53 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:26:53 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:26:53 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:26:53 compute-0 nova_compute[265391]:       <entry name="serial">d188c0fb-8668-4ab2-b174-49e0e20505ba</entry>
Sep 30 18:26:53 compute-0 nova_compute[265391]:       <entry name="uuid">d188c0fb-8668-4ab2-b174-49e0e20505ba</entry>
Sep 30 18:26:53 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     </system>
Sep 30 18:26:53 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:26:53 compute-0 nova_compute[265391]:   <os>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="q35">hvm</type>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:   </os>
Sep 30 18:26:53 compute-0 nova_compute[265391]:   <features>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <vmcoreinfo/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:   </features>
Sep 30 18:26:53 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:26:53 compute-0 nova_compute[265391]:   <cpu mode="host-model" match="exact">
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <topology sockets="1" cores="1" threads="1"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:26:53 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:26:53 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/d188c0fb-8668-4ab2-b174-49e0e20505ba_disk">
Sep 30 18:26:53 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:       </source>
Sep 30 18:26:53 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:26:53 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:26:53 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:26:53 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/d188c0fb-8668-4ab2-b174-49e0e20505ba_disk.config">
Sep 30 18:26:53 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:       </source>
Sep 30 18:26:53 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:26:53 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:26:53 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:26:53 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:bb:e0:b1"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:       <target dev="tapac8e3da5-cb"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:26:53 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/d188c0fb-8668-4ab2-b174-49e0e20505ba/console.log" append="off"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <video>
Sep 30 18:26:53 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     </video>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:26:53 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <controller type="usb" index="0"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:26:53 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:26:53 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:26:53 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:26:53 compute-0 nova_compute[265391]: </domain>
Sep 30 18:26:53 compute-0 nova_compute[265391]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
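[editor's note] The guest definition dumped by _get_guest_xml above is plain libvirt domain XML. As a minimal illustration only (not part of the log), the sketch below shows how the RBD disk sources and guest target devices could be pulled out of such a document with the Python standard library; the filename "domain.xml" is hypothetical.

    # Sketch: parse a libvirt domain XML like the one logged above and list
    # each disk's protocol/source name and guest target device.
    import xml.etree.ElementTree as ET

    tree = ET.parse("domain.xml")  # hypothetical local copy of the XML above
    for disk in tree.getroot().findall("./devices/disk"):
        source = disk.find("source")
        target = disk.find("target")
        if source is None or target is None:
            continue
        proto = source.get("protocol", "file")
        name = source.get("name") or source.get("file") or ""
        hosts = [f'{h.get("name")}:{h.get("port")}' for h in source.findall("host")]
        print(f'{target.get("dev")} ({disk.get("device")}): {proto}:{name} via {hosts}')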
Sep 30 18:26:53 compute-0 nova_compute[265391]: 2025-09-30 18:26:53.837 2 DEBUG nova.compute.manager [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Preparing to wait for external event network-vif-plugged-ac8e3da5-cb09-4223-89d5-318d077ea35e prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:26:53 compute-0 nova_compute[265391]: 2025-09-30 18:26:53.837 2 DEBUG oslo_concurrency.lockutils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "d188c0fb-8668-4ab2-b174-49e0e20505ba-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:26:53 compute-0 nova_compute[265391]: 2025-09-30 18:26:53.837 2 DEBUG oslo_concurrency.lockutils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "d188c0fb-8668-4ab2-b174-49e0e20505ba-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:26:53 compute-0 nova_compute[265391]: 2025-09-30 18:26:53.838 2 DEBUG oslo_concurrency.lockutils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "d188c0fb-8668-4ab2-b174-49e0e20505ba-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:26:53 compute-0 nova_compute[265391]: 2025-09-30 18:26:53.838 2 DEBUG nova.virt.libvirt.vif [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:26:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-1528882058',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-1528882058',id=18,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c634e1c17ed54907969576a0eb8eff50',ramdisk_id='',reservation_id='r-1gsulz3p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteStrategies-1883747907',owner_user_name='tempest-TestExecuteStrategies-1883747907-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:26:47Z,user_data=None,user_id='623ef4a55c9e4fc28bb65e49246b5008',uuid=d188c0fb-8668-4ab2-b174-49e0e20505ba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac8e3da5-cb09-4223-89d5-318d077ea35e", "address": "fa:16:3e:bb:e0:b1", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac8e3da5-cb", "ovs_interfaceid": "ac8e3da5-cb09-4223-89d5-318d077ea35e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Sep 30 18:26:53 compute-0 nova_compute[265391]: 2025-09-30 18:26:53.839 2 DEBUG nova.network.os_vif_util [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converting VIF {"id": "ac8e3da5-cb09-4223-89d5-318d077ea35e", "address": "fa:16:3e:bb:e0:b1", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac8e3da5-cb", "ovs_interfaceid": "ac8e3da5-cb09-4223-89d5-318d077ea35e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:26:53 compute-0 nova_compute[265391]: 2025-09-30 18:26:53.839 2 DEBUG nova.network.os_vif_util [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:e0:b1,bridge_name='br-int',has_traffic_filtering=True,id=ac8e3da5-cb09-4223-89d5-318d077ea35e,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac8e3da5-cb') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:26:53 compute-0 nova_compute[265391]: 2025-09-30 18:26:53.839 2 DEBUG os_vif [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:e0:b1,bridge_name='br-int',has_traffic_filtering=True,id=ac8e3da5-cb09-4223-89d5-318d077ea35e,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac8e3da5-cb') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Sep 30 18:26:53 compute-0 nova_compute[265391]: 2025-09-30 18:26:53.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:53 compute-0 nova_compute[265391]: 2025-09-30 18:26:53.840 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:26:53 compute-0 nova_compute[265391]: 2025-09-30 18:26:53.841 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:26:53 compute-0 nova_compute[265391]: 2025-09-30 18:26:53.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:53 compute-0 nova_compute[265391]: 2025-09-30 18:26:53.842 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '59bbf51e-6ca6-5dd3-bbfe-36699f40d458', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:26:53 compute-0 nova_compute[265391]: 2025-09-30 18:26:53.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:53 compute-0 nova_compute[265391]: 2025-09-30 18:26:53.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:26:53 compute-0 nova_compute[265391]: 2025-09-30 18:26:53.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:53 compute-0 nova_compute[265391]: 2025-09-30 18:26:53.848 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac8e3da5-cb, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:26:53 compute-0 nova_compute[265391]: 2025-09-30 18:26:53.848 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tapac8e3da5-cb, col_values=(('qos', UUID('52525aad-96f1-4dd5-b652-e19f3bc034ad')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:26:53 compute-0 nova_compute[265391]: 2025-09-30 18:26:53.849 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tapac8e3da5-cb, col_values=(('external_ids', {'iface-id': 'ac8e3da5-cb09-4223-89d5-318d077ea35e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bb:e0:b1', 'vm-uuid': 'd188c0fb-8668-4ab2-b174-49e0e20505ba'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:26:53 compute-0 nova_compute[265391]: 2025-09-30 18:26:53.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:53 compute-0 NetworkManager[45059]: <info>  [1759256813.8514] manager: (tapac8e3da5-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Sep 30 18:26:53 compute-0 nova_compute[265391]: 2025-09-30 18:26:53.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:26:53 compute-0 nova_compute[265391]: 2025-09-30 18:26:53.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:53 compute-0 nova_compute[265391]: 2025-09-30 18:26:53.860 2 INFO os_vif [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:e0:b1,bridge_name='br-int',has_traffic_filtering=True,id=ac8e3da5-cb09-4223-89d5-318d077ea35e,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac8e3da5-cb')
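[editor's note] The ovsdbapp transaction at 18:26:53 (AddPortCommand plus DbSetCommand) adds the tap port to br-int and stamps the Interface row with the Neutron port metadata. Purely as an illustration, the sketch below shows roughly equivalent ovs-vsctl calls with the same values taken from the log; os-vif itself talks to OVSDB through the IDL, as the transaction lines show, so this is not how the plug is actually performed.

    # Rough, hypothetical equivalent of the logged AddPortCommand/DbSetCommand
    # transaction, expressed as ovs-vsctl invocations.
    import subprocess

    port = "tapac8e3da5-cb"                              # from the log above
    iface_id = "ac8e3da5-cb09-4223-89d5-318d077ea35e"
    mac = "fa:16:3e:bb:e0:b1"
    vm_uuid = "d188c0fb-8668-4ab2-b174-49e0e20505ba"

    subprocess.run(["ovs-vsctl", "--may-exist", "add-port", "br-int", port], check=True)
    subprocess.run([
        "ovs-vsctl", "set", "Interface", port,
        f"external_ids:iface-id={iface_id}",
        "external_ids:iface-status=active",
        f"external_ids:attached-mac={mac}",
        f"external_ids:vm-uuid={vm_uuid}",
    ], check=True)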
Sep 30 18:26:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:54.308 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:26:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:54.309 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:26:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:54.309 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:26:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:26:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:26:54.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:26:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:54 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_49] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840033e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1448: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:26:54 compute-0 sudo[325953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:26:54 compute-0 sudo[325953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:26:54 compute-0 sudo[325953]: pam_unix(sudo:session): session closed for user root
Sep 30 18:26:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:26:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:26:54.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:26:55 compute-0 nova_compute[265391]: 2025-09-30 18:26:55.408 2 DEBUG nova.virt.libvirt.driver [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:26:55 compute-0 nova_compute[265391]: 2025-09-30 18:26:55.409 2 DEBUG nova.virt.libvirt.driver [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:26:55 compute-0 nova_compute[265391]: 2025-09-30 18:26:55.409 2 DEBUG nova.virt.libvirt.driver [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] No VIF found with MAC fa:16:3e:bb:e0:b1, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Sep 30 18:26:55 compute-0 nova_compute[265391]: 2025-09-30 18:26:55.410 2 INFO nova.virt.libvirt.driver [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Using config drive
Sep 30 18:26:55 compute-0 nova_compute[265391]: 2025-09-30 18:26:55.435 2 DEBUG nova.storage.rbd_utils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image d188c0fb-8668-4ab2-b174-49e0e20505ba_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:26:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:55 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_53] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004d70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:55 compute-0 nova_compute[265391]: 2025-09-30 18:26:55.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:55 compute-0 ceph-mon[73755]: pgmap v1448: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:26:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:26:55 compute-0 nova_compute[265391]: 2025-09-30 18:26:55.950 2 WARNING neutronclient.v2_0.client [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:26:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:26:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:26:56.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:26:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:56 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:56 compute-0 nova_compute[265391]: 2025-09-30 18:26:56.439 2 INFO nova.virt.libvirt.driver [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Creating config drive at /var/lib/nova/instances/d188c0fb-8668-4ab2-b174-49e0e20505ba/disk.config
Sep 30 18:26:56 compute-0 nova_compute[265391]: 2025-09-30 18:26:56.444 2 DEBUG oslo_concurrency.processutils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d188c0fb-8668-4ab2-b174-49e0e20505ba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmpa8n2xp66 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:26:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1449: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:26:56 compute-0 nova_compute[265391]: 2025-09-30 18:26:56.580 2 DEBUG oslo_concurrency.processutils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d188c0fb-8668-4ab2-b174-49e0e20505ba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmpa8n2xp66" returned: 0 in 0.136s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:26:56 compute-0 nova_compute[265391]: 2025-09-30 18:26:56.611 2 DEBUG nova.storage.rbd_utils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image d188c0fb-8668-4ab2-b174-49e0e20505ba_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:26:56 compute-0 nova_compute[265391]: 2025-09-30 18:26:56.615 2 DEBUG oslo_concurrency.processutils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d188c0fb-8668-4ab2-b174-49e0e20505ba/disk.config d188c0fb-8668-4ab2-b174-49e0e20505ba_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:26:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:26:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:26:56.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:26:56 compute-0 nova_compute[265391]: 2025-09-30 18:26:56.808 2 DEBUG oslo_concurrency.processutils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d188c0fb-8668-4ab2-b174-49e0e20505ba/disk.config d188c0fb-8668-4ab2-b174-49e0e20505ba_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:26:56 compute-0 nova_compute[265391]: 2025-09-30 18:26:56.809 2 INFO nova.virt.libvirt.driver [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Deleting local config drive /var/lib/nova/instances/d188c0fb-8668-4ab2-b174-49e0e20505ba/disk.config because it was imported into RBD.
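[editor's note] The config-drive sequence above is: build an ISO with mkisofs from a temporary metadata directory, import it into the "vms" RBD pool, then delete the local copy. The sketch below reproduces those two commands with subprocess, reusing the paths and flags seen in the log; it is an illustration of the flow, not Nova's implementation, and the staging directory is only the temporary path that happened to appear in this run.

    # Sketch of the config-drive flow logged above: mkisofs, rbd import, cleanup.
    import os
    import subprocess

    uuid = "d188c0fb-8668-4ab2-b174-49e0e20505ba"
    iso = f"/var/lib/nova/instances/{uuid}/disk.config"
    metadata_dir = "/tmp/tmpa8n2xp66"  # staging directory shown in the log

    subprocess.run([
        "/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
        "-allow-multidot", "-l", "-J", "-r", "-V", "config-2", metadata_dir,
    ], check=True)
    subprocess.run([
        "rbd", "import", "--pool", "vms", iso, f"{uuid}_disk.config",
        "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ], check=True)
    os.remove(iso)  # local copy is deleted once imported into RBD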
Sep 30 18:26:56 compute-0 kernel: tapac8e3da5-cb: entered promiscuous mode
Sep 30 18:26:56 compute-0 ovn_controller[156242]: 2025-09-30T18:26:56Z|00160|binding|INFO|Claiming lport ac8e3da5-cb09-4223-89d5-318d077ea35e for this chassis.
Sep 30 18:26:56 compute-0 ovn_controller[156242]: 2025-09-30T18:26:56Z|00161|binding|INFO|ac8e3da5-cb09-4223-89d5-318d077ea35e: Claiming fa:16:3e:bb:e0:b1 10.100.0.14
Sep 30 18:26:56 compute-0 nova_compute[265391]: 2025-09-30 18:26:56.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:56 compute-0 NetworkManager[45059]: <info>  [1759256816.8791] manager: (tapac8e3da5-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/67)
Sep 30 18:26:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:56.884 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:e0:b1 10.100.0.14'], port_security=['fa:16:3e:bb:e0:b1 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd188c0fb-8668-4ab2-b174-49e0e20505ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6901f664-336b-42d2-bbf7-58951befc8d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c634e1c17ed54907969576a0eb8eff50', 'neutron:revision_number': '4', 'neutron:security_group_ids': '11cf84c1-9641-409c-b5f5-6c5fe3a8afe5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c24d9de-651b-4bf8-842a-1286ab88b11d, chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=ac8e3da5-cb09-4223-89d5-318d077ea35e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:26:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:56.885 166158 INFO neutron.agent.ovn.metadata.agent [-] Port ac8e3da5-cb09-4223-89d5-318d077ea35e in datapath 6901f664-336b-42d2-bbf7-58951befc8d1 bound to our chassis
Sep 30 18:26:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:56.887 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6901f664-336b-42d2-bbf7-58951befc8d1
Sep 30 18:26:56 compute-0 nova_compute[265391]: 2025-09-30 18:26:56.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:56 compute-0 ovn_controller[156242]: 2025-09-30T18:26:56Z|00162|binding|INFO|Setting lport ac8e3da5-cb09-4223-89d5-318d077ea35e ovn-installed in OVS
Sep 30 18:26:56 compute-0 ovn_controller[156242]: 2025-09-30T18:26:56Z|00163|binding|INFO|Setting lport ac8e3da5-cb09-4223-89d5-318d077ea35e up in Southbound
Sep 30 18:26:56 compute-0 nova_compute[265391]: 2025-09-30 18:26:56.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:56.906 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[da47166f-bd64-4695-ae66-1830f2e5329d]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:56.908 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6901f664-31 in ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1 namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Sep 30 18:26:56 compute-0 nova_compute[265391]: 2025-09-30 18:26:56.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:56.910 292173 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6901f664-30 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Sep 30 18:26:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:56.910 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[fe734ed0-b707-4568-9fdb-91baf4d5e73f]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:56.911 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[a15768f4-ce42-4009-a95a-84586cb5e501]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:56 compute-0 systemd-machined[219917]: New machine qemu-13-instance-00000012.
Sep 30 18:26:56 compute-0 systemd-udevd[326052]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:26:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:56.925 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[7e51209c-30a2-48b7-aa2b-1086a481086a]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:56 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-00000012.
Sep 30 18:26:56 compute-0 NetworkManager[45059]: <info>  [1759256816.9352] device (tapac8e3da5-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 18:26:56 compute-0 NetworkManager[45059]: <info>  [1759256816.9360] device (tapac8e3da5-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Sep 30 18:26:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:56.943 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[d39b5277-0065-409d-ac55-1749666d2cf3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:56.970 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[43f49124-ac6e-4f88-983d-e01c12b7747c]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:56 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:56.976 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[0940cd43-51ca-4163-ba69-f57023093486]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:56 compute-0 NetworkManager[45059]: <info>  [1759256816.9775] manager: (tap6901f664-30): new Veth device (/org/freedesktop/NetworkManager/Devices/68)
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:57.006 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[3fd603b2-0b44-4494-be81-68213f673aff]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:57.009 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[a6b2e04d-2506-4567-82de-f08666f95553]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:57 compute-0 NetworkManager[45059]: <info>  [1759256817.0267] device (tap6901f664-30): carrier: link connected
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:57.031 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[5db911d6-4ff9-463f-aed6-f0aaa7eea32c]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:57.049 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[ba7ad0c0-2be7-4895-a159-524ec69c63da]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6901f664-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:41:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 541572, 'reachable_time': 25782, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326084, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:57 compute-0 nova_compute[265391]: 2025-09-30 18:26:57.060 2 DEBUG nova.compute.manager [req-0dc23271-177e-43b6-ba33-a1e4d721cf95 req-015e5c60-d1c4-4e3a-9dc8-dc1d0c0d9a8a 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Received event network-vif-plugged-ac8e3da5-cb09-4223-89d5-318d077ea35e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:26:57 compute-0 nova_compute[265391]: 2025-09-30 18:26:57.060 2 DEBUG oslo_concurrency.lockutils [req-0dc23271-177e-43b6-ba33-a1e4d721cf95 req-015e5c60-d1c4-4e3a-9dc8-dc1d0c0d9a8a 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "d188c0fb-8668-4ab2-b174-49e0e20505ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:26:57 compute-0 nova_compute[265391]: 2025-09-30 18:26:57.061 2 DEBUG oslo_concurrency.lockutils [req-0dc23271-177e-43b6-ba33-a1e4d721cf95 req-015e5c60-d1c4-4e3a-9dc8-dc1d0c0d9a8a 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "d188c0fb-8668-4ab2-b174-49e0e20505ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:26:57 compute-0 nova_compute[265391]: 2025-09-30 18:26:57.061 2 DEBUG oslo_concurrency.lockutils [req-0dc23271-177e-43b6-ba33-a1e4d721cf95 req-015e5c60-d1c4-4e3a-9dc8-dc1d0c0d9a8a 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "d188c0fb-8668-4ab2-b174-49e0e20505ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:26:57 compute-0 nova_compute[265391]: 2025-09-30 18:26:57.061 2 DEBUG nova.compute.manager [req-0dc23271-177e-43b6-ba33-a1e4d721cf95 req-015e5c60-d1c4-4e3a-9dc8-dc1d0c0d9a8a 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Processing event network-vif-plugged-ac8e3da5-cb09-4223-89d5-318d077ea35e _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:57.068 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[f7e8ab51-0436-4251-a8b8-ed78d3adede0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe35:412a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 541572, 'tstamp': 541572}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326085, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:57.086 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[786be097-8e86-4704-b1de-76d5aeb4bc39]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6901f664-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:41:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 541572, 'reachable_time': 25782, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 326086, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:57.115 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[03fe6d85-ec14-4e60-87ce-33cbf6caccea]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:57.178 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[ec2f0d6b-b331-42f5-830d-e50d36f74fd2]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:57.179 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6901f664-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:57.180 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:57.181 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6901f664-30, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:26:57 compute-0 nova_compute[265391]: 2025-09-30 18:26:57.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:57 compute-0 kernel: tap6901f664-30: entered promiscuous mode
Sep 30 18:26:57 compute-0 NetworkManager[45059]: <info>  [1759256817.1847] manager: (tap6901f664-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Sep 30 18:26:57 compute-0 nova_compute[265391]: 2025-09-30 18:26:57.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:57.187 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6901f664-30, col_values=(('external_ids', {'iface-id': '5b6cbf18-1826-41d0-920f-e9db4f1a1832'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:26:57 compute-0 nova_compute[265391]: 2025-09-30 18:26:57.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:57 compute-0 nova_compute[265391]: 2025-09-30 18:26:57.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:57 compute-0 ovn_controller[156242]: 2025-09-30T18:26:57Z|00164|binding|INFO|Releasing lport 5b6cbf18-1826-41d0-920f-e9db4f1a1832 from this chassis (sb_readonly=0)
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:57.192 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[282106ce-9bca-4e72-84f1-f918de318e64]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:57.192 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:57.193 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:57.193 166158 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for 6901f664-336b-42d2-bbf7-58951befc8d1 disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:57.193 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:57.193 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[4af2e4c7-fa28-4737-8b7c-cd4e7cd8532e]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:57.193 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:57.194 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[5a5ffb4c-6965-4965-9990-ab63c5f1da07]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:57.194 166158 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]: global
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]:     log         /dev/log local0 debug
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]:     log-tag     haproxy-metadata-proxy-6901f664-336b-42d2-bbf7-58951befc8d1
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]:     user        root
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]:     group       root
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]:     maxconn     1024
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]:     pidfile     /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]:     daemon
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]: defaults
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]:     log global
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]:     mode http
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]:     option httplog
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]:     option dontlognull
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]:     option http-server-close
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]:     option forwardfor
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]:     retries                 3
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]:     timeout http-request    30s
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]:     timeout connect         30s
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]:     timeout client          32s
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]:     timeout server          32s
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]:     timeout http-keep-alive 30s
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]: listen listener
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]:     bind 169.254.169.254:80
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]:     
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]:     server metadata /var/lib/neutron/metadata_proxy
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]:     http-request add-header X-OVN-Network-ID 6901f664-336b-42d2-bbf7-58951befc8d1
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Sep 30 18:26:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:26:57.195 166158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'env', 'PROCESS_TAG=haproxy-6901f664-336b-42d2-bbf7-58951befc8d1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6901f664-336b-42d2-bbf7-58951befc8d1.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
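
The lines above capture the OVN metadata agent bringing up a per-network metadata proxy: the repeated "[Errno 2] No such file or directory" DEBUG messages are the expected case of a pidfile that does not exist yet, after which the agent renders the haproxy configuration shown and launches haproxy inside the ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1 namespace via neutron-rootwrap. A minimal sketch of the ENOENT-tolerant pidfile lookup, assuming a hypothetical helper name and simplified error handling rather than Neutron's actual get_value_from_file implementation:

def read_pid_if_running(pid_path: str) -> int | None:
    """Return the PID stored in pid_path, or None when the proxy is not
    running yet (the '[Errno 2] No such file or directory' case above)."""
    try:
        with open(pid_path) as f:
            return int(f.read().strip())
    except FileNotFoundError:
        # A missing pidfile simply means no haproxy instance has been spawned
        # for this network yet, so the agent proceeds to create one.
        return None
    except ValueError:
        # Empty or partially written pidfile; treat it as not running.
        return None

# Hypothetical usage with the path that appears in the log:
pid = read_pid_if_running(
    "/var/lib/neutron/external/pids/"
    "6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy")
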
Sep 30 18:26:57 compute-0 nova_compute[265391]: 2025-09-30 18:26:57.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:26:57.252Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:26:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:57 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_51] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c004720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:26:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3463281651' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:26:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:26:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3463281651' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:26:57 compute-0 podman[326162]: 2025-09-30 18:26:57.647516739 +0000 UTC m=+0.050761200 container create 7f1913d0547baa11d5113cf7291bdea0d065b4d1a9ba872dc3fda2810eba83f3 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4)
Sep 30 18:26:57 compute-0 systemd[1]: Started libpod-conmon-7f1913d0547baa11d5113cf7291bdea0d065b4d1a9ba872dc3fda2810eba83f3.scope.
Sep 30 18:26:57 compute-0 podman[326162]: 2025-09-30 18:26:57.620126847 +0000 UTC m=+0.023371328 image pull 8925e336c8a9c8e5b40d98e4715ad60992d6f21ad0a398170a686c34f922c024 38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Sep 30 18:26:57 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:26:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/207d859d15ddc00e99276997fd3e5eba296ccc0537e9679a76845cc18742ab8e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Sep 30 18:26:57 compute-0 nova_compute[265391]: 2025-09-30 18:26:57.734 2 DEBUG nova.compute.manager [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:26:57 compute-0 nova_compute[265391]: 2025-09-30 18:26:57.737 2 DEBUG nova.virt.libvirt.driver [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Sep 30 18:26:57 compute-0 nova_compute[265391]: 2025-09-30 18:26:57.740 2 INFO nova.virt.libvirt.driver [-] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Instance spawned successfully.
Sep 30 18:26:57 compute-0 nova_compute[265391]: 2025-09-30 18:26:57.741 2 DEBUG nova.virt.libvirt.driver [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Sep 30 18:26:57 compute-0 podman[326162]: 2025-09-30 18:26:57.744758967 +0000 UTC m=+0.148003448 container init 7f1913d0547baa11d5113cf7291bdea0d065b4d1a9ba872dc3fda2810eba83f3 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:26:57 compute-0 podman[326162]: 2025-09-30 18:26:57.750961258 +0000 UTC m=+0.154205719 container start 7f1913d0547baa11d5113cf7291bdea0d065b4d1a9ba872dc3fda2810eba83f3 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=watcher_latest, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2)
Sep 30 18:26:57 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[326177]: [NOTICE]   (326182) : New worker (326185) forked
Sep 30 18:26:57 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[326177]: [NOTICE]   (326182) : Loading success.
Sep 30 18:26:57 compute-0 ceph-mon[73755]: pgmap v1449: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
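
The pgmap summaries throughout this section report "88 MiB data, 264 MiB used": raw usage is exactly three times the logical data, which is consistent with 3x replicated pools (an assumption; erasure coding or other overhead would give a different ratio). The arithmetic check is trivial:

data_mib, used_mib = 88, 264
assert used_mib / data_mib == 3.0   # consistent with size=3 replicated pools (assumed)
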
Sep 30 18:26:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3463281651' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:26:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3463281651' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:26:58 compute-0 nova_compute[265391]: 2025-09-30 18:26:58.255 2 DEBUG nova.virt.libvirt.driver [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:26:58 compute-0 nova_compute[265391]: 2025-09-30 18:26:58.256 2 DEBUG nova.virt.libvirt.driver [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:26:58 compute-0 nova_compute[265391]: 2025-09-30 18:26:58.256 2 DEBUG nova.virt.libvirt.driver [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:26:58 compute-0 nova_compute[265391]: 2025-09-30 18:26:58.257 2 DEBUG nova.virt.libvirt.driver [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:26:58 compute-0 nova_compute[265391]: 2025-09-30 18:26:58.257 2 DEBUG nova.virt.libvirt.driver [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:26:58 compute-0 nova_compute[265391]: 2025-09-30 18:26:58.258 2 DEBUG nova.virt.libvirt.driver [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:26:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:26:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:26:58.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:26:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:58 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_49] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840033e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1450: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 420 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Sep 30 18:26:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:26:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:26:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:26:58.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:26:58 compute-0 nova_compute[265391]: 2025-09-30 18:26:58.767 2 INFO nova.compute.manager [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Took 9.82 seconds to spawn the instance on the hypervisor.
Sep 30 18:26:58 compute-0 nova_compute[265391]: 2025-09-30 18:26:58.768 2 DEBUG nova.compute.manager [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Sep 30 18:26:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:26:58] "GET /metrics HTTP/1.1" 200 46625 "" "Prometheus/2.51.0"
Sep 30 18:26:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:26:58] "GET /metrics HTTP/1.1" 200 46625 "" "Prometheus/2.51.0"
Sep 30 18:26:58 compute-0 nova_compute[265391]: 2025-09-30 18:26:58.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:26:59 compute-0 nova_compute[265391]: 2025-09-30 18:26:59.308 2 INFO nova.compute.manager [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Took 15.47 seconds to build instance.
Sep 30 18:26:59 compute-0 nova_compute[265391]: 2025-09-30 18:26:59.396 2 DEBUG nova.compute.manager [req-41138173-3738-4421-bc95-0aa0763a0d44 req-fa69b1ce-edc0-4182-9249-01654a047c9d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Received event network-vif-plugged-ac8e3da5-cb09-4223-89d5-318d077ea35e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:26:59 compute-0 nova_compute[265391]: 2025-09-30 18:26:59.397 2 DEBUG oslo_concurrency.lockutils [req-41138173-3738-4421-bc95-0aa0763a0d44 req-fa69b1ce-edc0-4182-9249-01654a047c9d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "d188c0fb-8668-4ab2-b174-49e0e20505ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:26:59 compute-0 nova_compute[265391]: 2025-09-30 18:26:59.397 2 DEBUG oslo_concurrency.lockutils [req-41138173-3738-4421-bc95-0aa0763a0d44 req-fa69b1ce-edc0-4182-9249-01654a047c9d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "d188c0fb-8668-4ab2-b174-49e0e20505ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:26:59 compute-0 nova_compute[265391]: 2025-09-30 18:26:59.397 2 DEBUG oslo_concurrency.lockutils [req-41138173-3738-4421-bc95-0aa0763a0d44 req-fa69b1ce-edc0-4182-9249-01654a047c9d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "d188c0fb-8668-4ab2-b174-49e0e20505ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:26:59 compute-0 nova_compute[265391]: 2025-09-30 18:26:59.398 2 DEBUG nova.compute.manager [req-41138173-3738-4421-bc95-0aa0763a0d44 req-fa69b1ce-edc0-4182-9249-01654a047c9d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] No waiting events found dispatching network-vif-plugged-ac8e3da5-cb09-4223-89d5-318d077ea35e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:26:59 compute-0 nova_compute[265391]: 2025-09-30 18:26:59.398 2 WARNING nova.compute.manager [req-41138173-3738-4421-bc95-0aa0763a0d44 req-fa69b1ce-edc0-4182-9249-01654a047c9d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Received unexpected event network-vif-plugged-ac8e3da5-cb09-4223-89d5-318d077ea35e for instance with vm_state active and task_state None.
Sep 30 18:26:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:26:59 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_53] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004d90 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:26:59 compute-0 podman[276673]: time="2025-09-30T18:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:26:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:26:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10746 "" "Go-http-client/1.1"
Sep 30 18:26:59 compute-0 ceph-mon[73755]: pgmap v1450: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 420 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Sep 30 18:26:59 compute-0 nova_compute[265391]: 2025-09-30 18:26:59.824 2 DEBUG oslo_concurrency.lockutils [None req-ebe9ebaf-21b7-47c8-8081-3ece94ab6e08 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "d188c0fb-8668-4ab2-b174-49e0e20505ba" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.999s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:27:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:27:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:27:00.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:27:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:00 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1451: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 787 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Sep 30 18:27:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:27:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:27:00.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:27:00 compute-0 nova_compute[265391]: 2025-09-30 18:27:00.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:27:01 compute-0 openstack_network_exporter[279566]: ERROR   18:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:27:01 compute-0 openstack_network_exporter[279566]: ERROR   18:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:27:01 compute-0 openstack_network_exporter[279566]: ERROR   18:27:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:27:01 compute-0 openstack_network_exporter[279566]: ERROR   18:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:27:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:27:01 compute-0 openstack_network_exporter[279566]: ERROR   18:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:27:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:27:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:01 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_51] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c004720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:01 compute-0 ceph-mon[73755]: pgmap v1451: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 787 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Sep 30 18:27:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:27:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:27:02.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:27:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:02 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_49] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840033e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1452: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 787 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Sep 30 18:27:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:27:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:27:02.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:27:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:03 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_53] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:27:03.789Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:27:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:27:03.790Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:27:03 compute-0 ceph-mon[73755]: pgmap v1452: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 787 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Sep 30 18:27:03 compute-0 nova_compute[265391]: 2025-09-30 18:27:03.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:27:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:27:04.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:27:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:04 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1453: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Sep 30 18:27:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:27:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:27:04.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:27:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:05 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_51] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c004720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:05 compute-0 nova_compute[265391]: 2025-09-30 18:27:05.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:05 compute-0 ceph-mon[73755]: pgmap v1453: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Sep 30 18:27:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:27:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:27:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:27:06.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:27:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:06 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_49] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840033e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1454: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:27:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:27:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:27:06.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:27:06 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Sep 30 18:27:06 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:27:06.880948) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 18:27:06 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Sep 30 18:27:06 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256826881004, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1851, "num_deletes": 251, "total_data_size": 3289425, "memory_usage": 3346824, "flush_reason": "Manual Compaction"}
Sep 30 18:27:06 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Sep 30 18:27:06 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256826904474, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 3204166, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36268, "largest_seqno": 38118, "table_properties": {"data_size": 3196021, "index_size": 4895, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17410, "raw_average_key_size": 20, "raw_value_size": 3179510, "raw_average_value_size": 3679, "num_data_blocks": 215, "num_entries": 864, "num_filter_entries": 864, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759256646, "oldest_key_time": 1759256646, "file_creation_time": 1759256826, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:27:06 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 24086 microseconds, and 15142 cpu microseconds.
Sep 30 18:27:06 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:27:06 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:27:06.905029) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 3204166 bytes OK
Sep 30 18:27:06 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:27:06.905219) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Sep 30 18:27:06 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:27:06.907135) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Sep 30 18:27:06 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:27:06.907164) EVENT_LOG_v1 {"time_micros": 1759256826907155, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 18:27:06 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:27:06.907190) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 18:27:06 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 3281713, prev total WAL file size 3281713, number of live WAL files 2.
Sep 30 18:27:06 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:27:06 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:27:06.909821) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Sep 30 18:27:06 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 18:27:06 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(3129KB)], [80(12MB)]
Sep 30 18:27:06 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256826909913, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 15819797, "oldest_snapshot_seqno": -1}
Sep 30 18:27:06 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6552 keys, 13825757 bytes, temperature: kUnknown
Sep 30 18:27:06 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256826971988, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 13825757, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13782008, "index_size": 26293, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16389, "raw_key_size": 169138, "raw_average_key_size": 25, "raw_value_size": 13664287, "raw_average_value_size": 2085, "num_data_blocks": 1052, "num_entries": 6552, "num_filter_entries": 6552, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759256826, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:27:06 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:27:06 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:27:06.972386) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 13825757 bytes
Sep 30 18:27:06 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:27:06.973717) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 254.4 rd, 222.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 12.0 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(9.3) write-amplify(4.3) OK, records in: 7068, records dropped: 516 output_compression: NoCompression
Sep 30 18:27:06 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:27:06.973741) EVENT_LOG_v1 {"time_micros": 1759256826973729, "job": 46, "event": "compaction_finished", "compaction_time_micros": 62175, "compaction_time_cpu_micros": 31631, "output_level": 6, "num_output_files": 1, "total_output_size": 13825757, "num_input_records": 7068, "num_output_records": 6552, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 18:27:06 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:27:06 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256826974572, "job": 46, "event": "table_file_deletion", "file_number": 82}
Sep 30 18:27:06 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:27:06 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256826977312, "job": 46, "event": "table_file_deletion", "file_number": 80}
Sep 30 18:27:06 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:27:06.909646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:27:06 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:27:06.977433) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:27:06 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:27:06.977443) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:27:06 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:27:06.977446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:27:06 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:27:06.977484) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:27:06 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:27:06.977488) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
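
The RocksDB flush (job 45) and manual compaction (job 46) entries above are self-consistent, and the throughput and amplification figures can be reproduced from the EVENT_LOG_v1 fields alone. A small worked check using only numbers quoted in the log:

# Numbers taken from the job 46 "compaction_started"/"compaction_finished" events:
input_total = 15_819_797   # input_data_size: L0 table #82 plus L6 table #80
l0_input    = 3_204_166    # file_size of the freshly flushed L0 table #82
output      = 13_825_757   # total_output_size: the new L6 table #83
micros      = 62_175       # compaction_time_micros

read_rate  = input_total / micros              # bytes/us == MB/s -> ~254.4 ("254.4 rd")
write_rate = output / micros                   # ~222.4 ("222.4 wr")
write_amp  = output / l0_input                 # ~4.3   ("write-amplify(4.3)")
rw_amp     = (input_total + output) / l0_input # ~9.3   ("read-write-amplify(9.3)")
dropped    = 7068 - 6552                       # 516, matching "records dropped: 516"
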
Sep 30 18:27:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:27:07.253Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:27:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:27:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:27:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:07 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_53] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004dd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:07 compute-0 ceph-mon[73755]: pgmap v1454: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:27:07 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3567038494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:27:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005229063904082829 of space, bias 1.0, pg target 0.10458127808165658 quantized to 32 (current 32)
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
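
The pg_autoscaler output above follows one formula: each pool's raw PG target is its capacity ratio times its bias times a cluster-wide PG budget, and the result is then quantized (to a power of two, subject to per-pool minimums, which is why most pools stay at 32). Every logged value is consistent with a budget of 200 PGs; attributing that to, say, mon_target_pg_per_osd=100 across 2 OSDs is an assumption, only the factor 200 itself is derivable from the log. A small check against two of the lines above:

PG_BUDGET = 200  # reproduces every "pg target" value printed above (assumed constant)

def pg_target(capacity_ratio: float, bias: float) -> float:
    return capacity_ratio * bias * PG_BUDGET

# 'vms':                using 0.0005229063904082829, bias 1.0 -> pg target 0.10458127808165658
assert abs(pg_target(0.0005229063904082829, 1.0) - 0.10458127808165658) < 1e-12
# 'cephfs.cephfs.meta': using 7.630884938464543e-07, bias 4.0 -> pg target 0.0006104707950771635
assert abs(pg_target(7.630884938464543e-07, 4.0) - 0.0006104707950771635) < 1e-12
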
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:27:07
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['vms', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'volumes', '.nfs', 'images', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log']
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:27:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:27:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:27:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:27:08.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:27:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:08 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_53] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0380002580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1455: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:27:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:27:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:27:08.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:27:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:27:08] "GET /metrics HTTP/1.1" 200 46646 "" "Prometheus/2.51.0"
Sep 30 18:27:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:27:08] "GET /metrics HTTP/1.1" 200 46646 "" "Prometheus/2.51.0"
Sep 30 18:27:08 compute-0 nova_compute[265391]: 2025-09-30 18:27:08.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:09 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:27:09.267 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:27:09 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:27:09.267 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:27:09 compute-0 nova_compute[265391]: 2025-09-30 18:27:09.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:09 compute-0 ovn_controller[156242]: 2025-09-30T18:27:09Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bb:e0:b1 10.100.0.14
Sep 30 18:27:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:09 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_51] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c004720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:09 compute-0 ovn_controller[156242]: 2025-09-30T18:27:09Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bb:e0:b1 10.100.0.14
Sep 30 18:27:09 compute-0 podman[326207]: 2025-09-30 18:27:09.548755865 +0000 UTC m=+0.075146514 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:27:09 compute-0 podman[326206]: 2025-09-30 18:27:09.562293457 +0000 UTC m=+0.095700318 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.4)
Sep 30 18:27:09 compute-0 ceph-mon[73755]: pgmap v1455: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:27:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:27:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:27:10.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:27:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:10 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_51] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004dd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1456: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 1.5 MiB/s rd, 18 KiB/s wr, 63 op/s
Sep 30 18:27:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:27:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:27:10.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:27:10 compute-0 nova_compute[265391]: 2025-09-30 18:27:10.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:27:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:11 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_53] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4004dd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:11 compute-0 ceph-mon[73755]: pgmap v1456: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 1.5 MiB/s rd, 18 KiB/s wr, 63 op/s
Sep 30 18:27:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 18:27:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:27:12.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 18:27:12 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:12 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_54] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c004720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1457: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 1.2 MiB/s rd, 4.5 KiB/s wr, 43 op/s
Sep 30 18:27:12 compute-0 podman[326264]: 2025-09-30 18:27:12.542691108 +0000 UTC m=+0.073022409 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Sep 30 18:27:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:27:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:27:12.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:27:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:13 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_49] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:27:13.791Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:27:13 compute-0 nova_compute[265391]: 2025-09-30 18:27:13.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:13 compute-0 ceph-mon[73755]: pgmap v1457: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 40 GiB / 40 GiB avail; 1.2 MiB/s rd, 4.5 KiB/s wr, 43 op/s
Sep 30 18:27:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:27:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:27:14.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:27:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:14 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_54] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840033e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1458: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 128 op/s
Sep 30 18:27:14 compute-0 sudo[326285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:27:14 compute-0 sudo[326285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:27:14 compute-0 sudo[326285]: pam_unix(sudo:session): session closed for user root
Sep 30 18:27:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:27:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:27:14.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:27:15 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:15 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_53] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:15 compute-0 nova_compute[265391]: 2025-09-30 18:27:15.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:27:15 compute-0 ceph-mon[73755]: pgmap v1458: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 128 op/s
Sep 30 18:27:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:27:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:27:16.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:27:16 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:16 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_55] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c004720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1459: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Sep 30 18:27:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:27:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:27:16.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:27:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:27:17.254Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:27:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:17 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_49] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:17 compute-0 ceph-mon[73755]: pgmap v1459: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Sep 30 18:27:17 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/719707145' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:27:18 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:27:18.269 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:27:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:27:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:27:18.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:27:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:18 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_54] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840033e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1460: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Sep 30 18:27:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:27:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:27:18.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:27:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:27:18] "GET /metrics HTTP/1.1" 200 46646 "" "Prometheus/2.51.0"
Sep 30 18:27:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:27:18] "GET /metrics HTTP/1.1" 200 46646 "" "Prometheus/2.51.0"
Sep 30 18:27:18 compute-0 nova_compute[265391]: 2025-09-30 18:27:18.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:18 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1840254682' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:27:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:19 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_53] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:19 compute-0 ceph-mon[73755]: pgmap v1460: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Sep 30 18:27:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:27:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:27:20.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:27:20 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:20 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_55] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c004720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1461: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Sep 30 18:27:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:27:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:27:20.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:27:20 compute-0 nova_compute[265391]: 2025-09-30 18:27:20.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:27:21 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:21 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_49] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:21 compute-0 podman[326319]: 2025-09-30 18:27:21.53842467 +0000 UTC m=+0.071611132 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid)
Sep 30 18:27:21 compute-0 podman[326318]: 2025-09-30 18:27:21.561559281 +0000 UTC m=+0.087424633 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, config_id=multipathd, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:27:21 compute-0 podman[326321]: 2025-09-30 18:27:21.568389819 +0000 UTC m=+0.097093415 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, vcs-type=git, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, distribution-scope=public, io.buildah.version=1.33.7, name=ubi9-minimal, config_id=edpm, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container)
Sep 30 18:27:22 compute-0 ceph-mon[73755]: pgmap v1461: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Sep 30 18:27:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-crash-compute-0[79187]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Sep 30 18:27:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:27:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:27:22 compute-0 sshd-session[326316]: Invalid user foundry from 45.252.249.158 port 49536
Sep 30 18:27:22 compute-0 sshd-session[326316]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:27:22 compute-0 sshd-session[326316]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158
Sep 30 18:27:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:27:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:27:22.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:27:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:22 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_54] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840033e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1462: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 325 KiB/s rd, 3.9 MiB/s wr, 85 op/s
Sep 30 18:27:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:27:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:27:22.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:27:22 compute-0 sudo[326377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:27:22 compute-0 sudo[326377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:27:22 compute-0 sudo[326377]: pam_unix(sudo:session): session closed for user root
Sep 30 18:27:22 compute-0 sudo[326402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:27:22 compute-0 sudo[326402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:27:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:27:23 compute-0 sudo[326402]: pam_unix(sudo:session): session closed for user root
Sep 30 18:27:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:23 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_55] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840033e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:27:23 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:27:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:27:23 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:27:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:27:23 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:27:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:27:23 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:27:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:27:23 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:27:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:27:23 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:27:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:27:23 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:27:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:27:23.792Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:27:23 compute-0 sudo[326459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:27:23 compute-0 sudo[326459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:27:23 compute-0 sudo[326459]: pam_unix(sudo:session): session closed for user root
Sep 30 18:27:23 compute-0 nova_compute[265391]: 2025-09-30 18:27:23.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:23 compute-0 sudo[326484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:27:23 compute-0 sudo[326484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:27:24 compute-0 ceph-mon[73755]: pgmap v1462: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 325 KiB/s rd, 3.9 MiB/s wr, 85 op/s
Sep 30 18:27:24 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:27:24 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:27:24 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:27:24 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:27:24 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:27:24 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:27:24 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:27:24 compute-0 podman[326549]: 2025-09-30 18:27:24.316428271 +0000 UTC m=+0.045438202 container create 14aa01cfbc72b7952b39ba49ec43e75e3afbf39217560edfbe5ca75ccb192631 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Sep 30 18:27:24 compute-0 systemd[1]: Started libpod-conmon-14aa01cfbc72b7952b39ba49ec43e75e3afbf39217560edfbe5ca75ccb192631.scope.
Sep 30 18:27:24 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:27:24 compute-0 podman[326549]: 2025-09-30 18:27:24.296930184 +0000 UTC m=+0.025940135 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:27:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:27:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:27:24.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:27:24 compute-0 podman[326549]: 2025-09-30 18:27:24.4006546 +0000 UTC m=+0.129664551 container init 14aa01cfbc72b7952b39ba49ec43e75e3afbf39217560edfbe5ca75ccb192631 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_einstein, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid)
Sep 30 18:27:24 compute-0 podman[326549]: 2025-09-30 18:27:24.40796141 +0000 UTC m=+0.136971341 container start 14aa01cfbc72b7952b39ba49ec43e75e3afbf39217560edfbe5ca75ccb192631 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Sep 30 18:27:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:24 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_55] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c004720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:24 compute-0 podman[326549]: 2025-09-30 18:27:24.411533853 +0000 UTC m=+0.140543794 container attach 14aa01cfbc72b7952b39ba49ec43e75e3afbf39217560edfbe5ca75ccb192631 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_einstein, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:27:24 compute-0 systemd[1]: libpod-14aa01cfbc72b7952b39ba49ec43e75e3afbf39217560edfbe5ca75ccb192631.scope: Deactivated successfully.
Sep 30 18:27:24 compute-0 confident_einstein[326566]: 167 167
Sep 30 18:27:24 compute-0 conmon[326566]: conmon 14aa01cfbc72b7952b39 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-14aa01cfbc72b7952b39ba49ec43e75e3afbf39217560edfbe5ca75ccb192631.scope/container/memory.events
Sep 30 18:27:24 compute-0 podman[326549]: 2025-09-30 18:27:24.41682489 +0000 UTC m=+0.145834861 container died 14aa01cfbc72b7952b39ba49ec43e75e3afbf39217560edfbe5ca75ccb192631 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_einstein, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 18:27:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-0085c32a7fc04bf3d9478647863f94d3c86a678efe9406b1cc9dae0b59967670-merged.mount: Deactivated successfully.
Sep 30 18:27:24 compute-0 podman[326549]: 2025-09-30 18:27:24.460428343 +0000 UTC m=+0.189438274 container remove 14aa01cfbc72b7952b39ba49ec43e75e3afbf39217560edfbe5ca75ccb192631 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_einstein, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:27:24 compute-0 systemd[1]: libpod-conmon-14aa01cfbc72b7952b39ba49ec43e75e3afbf39217560edfbe5ca75ccb192631.scope: Deactivated successfully.
Sep 30 18:27:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1463: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 332 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Sep 30 18:27:24 compute-0 podman[326590]: 2025-09-30 18:27:24.629698683 +0000 UTC m=+0.035625387 container create 7fe740a72b0848d369230374c865c04b0a1c77c833a3e5b4b9c7131cecc6c683 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_banach, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:27:24 compute-0 systemd[1]: Started libpod-conmon-7fe740a72b0848d369230374c865c04b0a1c77c833a3e5b4b9c7131cecc6c683.scope.
Sep 30 18:27:24 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:27:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e35909318fa28eb32bd4215aa16810452a3bdf5f3e13b496610c2efd145234f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:27:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e35909318fa28eb32bd4215aa16810452a3bdf5f3e13b496610c2efd145234f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:27:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e35909318fa28eb32bd4215aa16810452a3bdf5f3e13b496610c2efd145234f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:27:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e35909318fa28eb32bd4215aa16810452a3bdf5f3e13b496610c2efd145234f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:27:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e35909318fa28eb32bd4215aa16810452a3bdf5f3e13b496610c2efd145234f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:27:24 compute-0 podman[326590]: 2025-09-30 18:27:24.614010365 +0000 UTC m=+0.019937089 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:27:24 compute-0 podman[326590]: 2025-09-30 18:27:24.712763022 +0000 UTC m=+0.118689746 container init 7fe740a72b0848d369230374c865c04b0a1c77c833a3e5b4b9c7131cecc6c683 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Sep 30 18:27:24 compute-0 podman[326590]: 2025-09-30 18:27:24.720423421 +0000 UTC m=+0.126350115 container start 7fe740a72b0848d369230374c865c04b0a1c77c833a3e5b4b9c7131cecc6c683 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_banach, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:27:24 compute-0 podman[326590]: 2025-09-30 18:27:24.724020054 +0000 UTC m=+0.129946758 container attach 7fe740a72b0848d369230374c865c04b0a1c77c833a3e5b4b9c7131cecc6c683 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_banach, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:27:24 compute-0 sshd-session[326316]: Failed password for invalid user foundry from 45.252.249.158 port 49536 ssh2
Sep 30 18:27:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:27:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:27:24.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:27:25 compute-0 kind_banach[326606]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:27:25 compute-0 kind_banach[326606]: --> All data devices are unavailable
Sep 30 18:27:25 compute-0 systemd[1]: libpod-7fe740a72b0848d369230374c865c04b0a1c77c833a3e5b4b9c7131cecc6c683.scope: Deactivated successfully.
Sep 30 18:27:25 compute-0 podman[326590]: 2025-09-30 18:27:25.092185923 +0000 UTC m=+0.498112627 container died 7fe740a72b0848d369230374c865c04b0a1c77c833a3e5b4b9c7131cecc6c683 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_banach, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:27:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e35909318fa28eb32bd4215aa16810452a3bdf5f3e13b496610c2efd145234f-merged.mount: Deactivated successfully.
Sep 30 18:27:25 compute-0 podman[326590]: 2025-09-30 18:27:25.130993562 +0000 UTC m=+0.536920266 container remove 7fe740a72b0848d369230374c865c04b0a1c77c833a3e5b4b9c7131cecc6c683 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_banach, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:27:25 compute-0 systemd[1]: libpod-conmon-7fe740a72b0848d369230374c865c04b0a1c77c833a3e5b4b9c7131cecc6c683.scope: Deactivated successfully.
Sep 30 18:27:25 compute-0 sudo[326484]: pam_unix(sudo:session): session closed for user root
Sep 30 18:27:25 compute-0 sudo[326634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:27:25 compute-0 sudo[326634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:27:25 compute-0 sudo[326634]: pam_unix(sudo:session): session closed for user root
Sep 30 18:27:25 compute-0 sudo[326659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:27:25 compute-0 sudo[326659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:27:25 compute-0 sshd-session[326316]: Received disconnect from 45.252.249.158 port 49536:11: Bye Bye [preauth]
Sep 30 18:27:25 compute-0 sshd-session[326316]: Disconnected from invalid user foundry 45.252.249.158 port 49536 [preauth]
Sep 30 18:27:25 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:25 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_49] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:25 compute-0 podman[326728]: 2025-09-30 18:27:25.754774933 +0000 UTC m=+0.045720769 container create 279b99d9b166f105d40fed6daf47b06ec07918eb8a5cd882b02a29483a999907 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True)
Sep 30 18:27:25 compute-0 systemd[1]: Started libpod-conmon-279b99d9b166f105d40fed6daf47b06ec07918eb8a5cd882b02a29483a999907.scope.
Sep 30 18:27:25 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:27:25 compute-0 nova_compute[265391]: 2025-09-30 18:27:25.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:25 compute-0 podman[326728]: 2025-09-30 18:27:25.734044874 +0000 UTC m=+0.024990770 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:27:25 compute-0 podman[326728]: 2025-09-30 18:27:25.850930412 +0000 UTC m=+0.141876288 container init 279b99d9b166f105d40fed6daf47b06ec07918eb8a5cd882b02a29483a999907 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Sep 30 18:27:25 compute-0 podman[326728]: 2025-09-30 18:27:25.865806639 +0000 UTC m=+0.156752505 container start 279b99d9b166f105d40fed6daf47b06ec07918eb8a5cd882b02a29483a999907 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True)
Sep 30 18:27:25 compute-0 podman[326728]: 2025-09-30 18:27:25.869626068 +0000 UTC m=+0.160571904 container attach 279b99d9b166f105d40fed6daf47b06ec07918eb8a5cd882b02a29483a999907 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_nobel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:27:25 compute-0 angry_nobel[326745]: 167 167
Sep 30 18:27:25 compute-0 systemd[1]: libpod-279b99d9b166f105d40fed6daf47b06ec07918eb8a5cd882b02a29483a999907.scope: Deactivated successfully.
Sep 30 18:27:25 compute-0 podman[326728]: 2025-09-30 18:27:25.875413859 +0000 UTC m=+0.166359785 container died 279b99d9b166f105d40fed6daf47b06ec07918eb8a5cd882b02a29483a999907 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_nobel, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Sep 30 18:27:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:27:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5b188da200e66014c6aa42fff343c9c6f38df9e8acf22efed14d3c28590ba70-merged.mount: Deactivated successfully.
Sep 30 18:27:25 compute-0 podman[326728]: 2025-09-30 18:27:25.920019908 +0000 UTC m=+0.210965754 container remove 279b99d9b166f105d40fed6daf47b06ec07918eb8a5cd882b02a29483a999907 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 18:27:25 compute-0 systemd[1]: libpod-conmon-279b99d9b166f105d40fed6daf47b06ec07918eb8a5cd882b02a29483a999907.scope: Deactivated successfully.
Sep 30 18:27:26 compute-0 ceph-mon[73755]: pgmap v1463: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 332 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Sep 30 18:27:26 compute-0 podman[326772]: 2025-09-30 18:27:26.129228195 +0000 UTC m=+0.057828414 container create 43cebaefb2b13fd2bf33aae7a1118fae1dc297a6fdb36e8eecbec7e97ae8d70b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:27:26 compute-0 systemd[1]: Started libpod-conmon-43cebaefb2b13fd2bf33aae7a1118fae1dc297a6fdb36e8eecbec7e97ae8d70b.scope.
Sep 30 18:27:26 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:27:26 compute-0 podman[326772]: 2025-09-30 18:27:26.109202595 +0000 UTC m=+0.037802864 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:27:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eee4f94ef6c411b4adc9ca94bc5b882be7ff669a2184b5799980d222f8096a61/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:27:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eee4f94ef6c411b4adc9ca94bc5b882be7ff669a2184b5799980d222f8096a61/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:27:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eee4f94ef6c411b4adc9ca94bc5b882be7ff669a2184b5799980d222f8096a61/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:27:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eee4f94ef6c411b4adc9ca94bc5b882be7ff669a2184b5799980d222f8096a61/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:27:26 compute-0 podman[326772]: 2025-09-30 18:27:26.22022385 +0000 UTC m=+0.148824139 container init 43cebaefb2b13fd2bf33aae7a1118fae1dc297a6fdb36e8eecbec7e97ae8d70b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Sep 30 18:27:26 compute-0 podman[326772]: 2025-09-30 18:27:26.228320511 +0000 UTC m=+0.156920770 container start 43cebaefb2b13fd2bf33aae7a1118fae1dc297a6fdb36e8eecbec7e97ae8d70b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:27:26 compute-0 podman[326772]: 2025-09-30 18:27:26.233264789 +0000 UTC m=+0.161865028 container attach 43cebaefb2b13fd2bf33aae7a1118fae1dc297a6fdb36e8eecbec7e97ae8d70b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_sutherland, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True)
Sep 30 18:27:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:27:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:27:26.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:27:26 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:26 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_54] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840033e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1464: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 7.3 KiB/s rd, 26 KiB/s wr, 10 op/s
Sep 30 18:27:26 compute-0 priceless_sutherland[326789]: {
Sep 30 18:27:26 compute-0 priceless_sutherland[326789]:     "0": [
Sep 30 18:27:26 compute-0 priceless_sutherland[326789]:         {
Sep 30 18:27:26 compute-0 priceless_sutherland[326789]:             "devices": [
Sep 30 18:27:26 compute-0 priceless_sutherland[326789]:                 "/dev/loop3"
Sep 30 18:27:26 compute-0 priceless_sutherland[326789]:             ],
Sep 30 18:27:26 compute-0 priceless_sutherland[326789]:             "lv_name": "ceph_lv0",
Sep 30 18:27:26 compute-0 priceless_sutherland[326789]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:27:26 compute-0 priceless_sutherland[326789]:             "lv_size": "21470642176",
Sep 30 18:27:26 compute-0 priceless_sutherland[326789]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:27:26 compute-0 priceless_sutherland[326789]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:27:26 compute-0 priceless_sutherland[326789]:             "name": "ceph_lv0",
Sep 30 18:27:26 compute-0 priceless_sutherland[326789]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:27:26 compute-0 priceless_sutherland[326789]:             "tags": {
Sep 30 18:27:26 compute-0 priceless_sutherland[326789]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:27:26 compute-0 priceless_sutherland[326789]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:27:26 compute-0 priceless_sutherland[326789]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:27:26 compute-0 priceless_sutherland[326789]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:27:26 compute-0 priceless_sutherland[326789]:                 "ceph.cluster_name": "ceph",
Sep 30 18:27:26 compute-0 priceless_sutherland[326789]:                 "ceph.crush_device_class": "",
Sep 30 18:27:26 compute-0 priceless_sutherland[326789]:                 "ceph.encrypted": "0",
Sep 30 18:27:26 compute-0 priceless_sutherland[326789]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:27:26 compute-0 priceless_sutherland[326789]:                 "ceph.osd_id": "0",
Sep 30 18:27:26 compute-0 priceless_sutherland[326789]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:27:26 compute-0 priceless_sutherland[326789]:                 "ceph.type": "block",
Sep 30 18:27:26 compute-0 priceless_sutherland[326789]:                 "ceph.vdo": "0",
Sep 30 18:27:26 compute-0 priceless_sutherland[326789]:                 "ceph.with_tpm": "0"
Sep 30 18:27:26 compute-0 priceless_sutherland[326789]:             },
Sep 30 18:27:26 compute-0 priceless_sutherland[326789]:             "type": "block",
Sep 30 18:27:26 compute-0 priceless_sutherland[326789]:             "vg_name": "ceph_vg0"
Sep 30 18:27:26 compute-0 priceless_sutherland[326789]:         }
Sep 30 18:27:26 compute-0 priceless_sutherland[326789]:     ]
Sep 30 18:27:26 compute-0 priceless_sutherland[326789]: }
Sep 30 18:27:26 compute-0 podman[326772]: 2025-09-30 18:27:26.568460111 +0000 UTC m=+0.497060350 container died 43cebaefb2b13fd2bf33aae7a1118fae1dc297a6fdb36e8eecbec7e97ae8d70b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:27:26 compute-0 systemd[1]: libpod-43cebaefb2b13fd2bf33aae7a1118fae1dc297a6fdb36e8eecbec7e97ae8d70b.scope: Deactivated successfully.
Sep 30 18:27:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-eee4f94ef6c411b4adc9ca94bc5b882be7ff669a2184b5799980d222f8096a61-merged.mount: Deactivated successfully.
Sep 30 18:27:26 compute-0 podman[326772]: 2025-09-30 18:27:26.606833999 +0000 UTC m=+0.535434228 container remove 43cebaefb2b13fd2bf33aae7a1118fae1dc297a6fdb36e8eecbec7e97ae8d70b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:27:26 compute-0 systemd[1]: libpod-conmon-43cebaefb2b13fd2bf33aae7a1118fae1dc297a6fdb36e8eecbec7e97ae8d70b.scope: Deactivated successfully.
Sep 30 18:27:26 compute-0 sudo[326659]: pam_unix(sudo:session): session closed for user root
Sep 30 18:27:26 compute-0 sudo[326812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:27:26 compute-0 sudo[326812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:27:26 compute-0 sudo[326812]: pam_unix(sudo:session): session closed for user root
Sep 30 18:27:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:27:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:27:26.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:27:26 compute-0 sudo[326837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:27:26 compute-0 sudo[326837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:27:27 compute-0 podman[326904]: 2025-09-30 18:27:27.244711677 +0000 UTC m=+0.044772384 container create 9d6771436a296cae67bee88c24c2455562a974eaefa7851020db1ddd3d9caa5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:27:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:27:27.255Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:27:27 compute-0 systemd[1]: Started libpod-conmon-9d6771436a296cae67bee88c24c2455562a974eaefa7851020db1ddd3d9caa5c.scope.
Sep 30 18:27:27 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:27:27 compute-0 podman[326904]: 2025-09-30 18:27:27.224253826 +0000 UTC m=+0.024314623 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:27:27 compute-0 podman[326904]: 2025-09-30 18:27:27.329518061 +0000 UTC m=+0.129578778 container init 9d6771436a296cae67bee88c24c2455562a974eaefa7851020db1ddd3d9caa5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_booth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:27:27 compute-0 podman[326904]: 2025-09-30 18:27:27.337081858 +0000 UTC m=+0.137142565 container start 9d6771436a296cae67bee88c24c2455562a974eaefa7851020db1ddd3d9caa5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:27:27 compute-0 podman[326904]: 2025-09-30 18:27:27.340368114 +0000 UTC m=+0.140428911 container attach 9d6771436a296cae67bee88c24c2455562a974eaefa7851020db1ddd3d9caa5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_booth, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:27:27 compute-0 jovial_booth[326920]: 167 167
Sep 30 18:27:27 compute-0 systemd[1]: libpod-9d6771436a296cae67bee88c24c2455562a974eaefa7851020db1ddd3d9caa5c.scope: Deactivated successfully.
Sep 30 18:27:27 compute-0 podman[326904]: 2025-09-30 18:27:27.345319422 +0000 UTC m=+0.145380129 container died 9d6771436a296cae67bee88c24c2455562a974eaefa7851020db1ddd3d9caa5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_booth, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid)
Sep 30 18:27:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-672bfb6e3b3a7e038f558d6877c52e36366f86bde040a8cc1b40c40a1108ad0f-merged.mount: Deactivated successfully.
Sep 30 18:27:27 compute-0 podman[326904]: 2025-09-30 18:27:27.381452371 +0000 UTC m=+0.181513078 container remove 9d6771436a296cae67bee88c24c2455562a974eaefa7851020db1ddd3d9caa5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_booth, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:27:27 compute-0 systemd[1]: libpod-conmon-9d6771436a296cae67bee88c24c2455562a974eaefa7851020db1ddd3d9caa5c.scope: Deactivated successfully.
Sep 30 18:27:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:27 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_55] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840033e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:27 compute-0 podman[326945]: 2025-09-30 18:27:27.54800383 +0000 UTC m=+0.041714575 container create 29de8debe6286d88924da54d8b8710bcca499fa1e91f82173f2b4365810b668b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 18:27:27 compute-0 systemd[1]: Started libpod-conmon-29de8debe6286d88924da54d8b8710bcca499fa1e91f82173f2b4365810b668b.scope.
Sep 30 18:27:27 compute-0 podman[326945]: 2025-09-30 18:27:27.532559819 +0000 UTC m=+0.026270594 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:27:27 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38bfbbb35cf748b66c854458624779216187410bbb9717e08025571a61f2d7b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38bfbbb35cf748b66c854458624779216187410bbb9717e08025571a61f2d7b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38bfbbb35cf748b66c854458624779216187410bbb9717e08025571a61f2d7b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38bfbbb35cf748b66c854458624779216187410bbb9717e08025571a61f2d7b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:27:27 compute-0 podman[326945]: 2025-09-30 18:27:27.646662204 +0000 UTC m=+0.140372969 container init 29de8debe6286d88924da54d8b8710bcca499fa1e91f82173f2b4365810b668b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:27:27 compute-0 podman[326945]: 2025-09-30 18:27:27.660168615 +0000 UTC m=+0.153879390 container start 29de8debe6286d88924da54d8b8710bcca499fa1e91f82173f2b4365810b668b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_thompson, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 18:27:27 compute-0 podman[326945]: 2025-09-30 18:27:27.664681423 +0000 UTC m=+0.158392248 container attach 29de8debe6286d88924da54d8b8710bcca499fa1e91f82173f2b4365810b668b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:27:28 compute-0 ceph-mon[73755]: pgmap v1464: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 7.3 KiB/s rd, 26 KiB/s wr, 10 op/s
Sep 30 18:27:28 compute-0 lvm[327037]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:27:28 compute-0 lvm[327037]: VG ceph_vg0 finished
Sep 30 18:27:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:27:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:27:28.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:27:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:28 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_55] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c004720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:28 compute-0 vibrant_thompson[326961]: {}
Sep 30 18:27:28 compute-0 systemd[1]: libpod-29de8debe6286d88924da54d8b8710bcca499fa1e91f82173f2b4365810b668b.scope: Deactivated successfully.
Sep 30 18:27:28 compute-0 systemd[1]: libpod-29de8debe6286d88924da54d8b8710bcca499fa1e91f82173f2b4365810b668b.scope: Consumed 1.263s CPU time.
Sep 30 18:27:28 compute-0 podman[326945]: 2025-09-30 18:27:28.447455837 +0000 UTC m=+0.941166652 container died 29de8debe6286d88924da54d8b8710bcca499fa1e91f82173f2b4365810b668b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:27:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1465: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 1.1 MiB/s rd, 26 KiB/s wr, 47 op/s
Sep 30 18:27:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-38bfbbb35cf748b66c854458624779216187410bbb9717e08025571a61f2d7b4-merged.mount: Deactivated successfully.
Sep 30 18:27:28 compute-0 podman[326945]: 2025-09-30 18:27:28.50296223 +0000 UTC m=+0.996672975 container remove 29de8debe6286d88924da54d8b8710bcca499fa1e91f82173f2b4365810b668b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:27:28 compute-0 systemd[1]: libpod-conmon-29de8debe6286d88924da54d8b8710bcca499fa1e91f82173f2b4365810b668b.scope: Deactivated successfully.
Sep 30 18:27:28 compute-0 sudo[326837]: pam_unix(sudo:session): session closed for user root
Sep 30 18:27:28 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:27:28 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:27:28 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:27:28 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:27:28 compute-0 sudo[327052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:27:28 compute-0 sudo[327052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:27:28 compute-0 sudo[327052]: pam_unix(sudo:session): session closed for user root
Sep 30 18:27:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:27:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:27:28.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:27:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:27:28] "GET /metrics HTTP/1.1" 200 46647 "" "Prometheus/2.51.0"
Sep 30 18:27:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:27:28] "GET /metrics HTTP/1.1" 200 46647 "" "Prometheus/2.51.0"
Sep 30 18:27:28 compute-0 nova_compute[265391]: 2025-09-30 18:27:28.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:29 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_49] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:29 compute-0 ceph-mon[73755]: pgmap v1465: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 1.1 MiB/s rd, 26 KiB/s wr, 47 op/s
Sep 30 18:27:29 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:27:29 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:27:29 compute-0 podman[276673]: time="2025-09-30T18:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:27:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:27:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10747 "" "Go-http-client/1.1"
Sep 30 18:27:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:27:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:27:30.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:27:30 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:30 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_53] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1466: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 74 op/s
Sep 30 18:27:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:27:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:27:30.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:27:30 compute-0 nova_compute[265391]: 2025-09-30 18:27:30.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:27:31 compute-0 openstack_network_exporter[279566]: ERROR   18:27:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:27:31 compute-0 openstack_network_exporter[279566]: ERROR   18:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:27:31 compute-0 openstack_network_exporter[279566]: ERROR   18:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:27:31 compute-0 openstack_network_exporter[279566]: ERROR   18:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:27:31 compute-0 openstack_network_exporter[279566]: ERROR   18:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:27:31 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:31 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_49] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:31 compute-0 ceph-mon[73755]: pgmap v1466: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 74 op/s
Sep 30 18:27:31 compute-0 unix_chkpwd[327082]: password check failed for user (root)
Sep 30 18:27:31 compute-0 sshd-session[326687]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=154.125.120.7  user=root
Sep 30 18:27:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:27:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:27:32.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:27:32 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:32 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_54] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400a3b0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1467: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:27:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:27:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:27:32.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:27:32 compute-0 nova_compute[265391]: 2025-09-30 18:27:32.844 2 DEBUG nova.compute.manager [None req-31579029-4c7d-4de7-a893-e966719cab5d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Adding trait COMPUTE_STATUS_DISABLED to compute node resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 in placement. update_compute_provider_status /usr/lib/python3.12/site-packages/nova/compute/manager.py:635
Sep 30 18:27:32 compute-0 nova_compute[265391]: 2025-09-30 18:27:32.892 2 DEBUG nova.compute.provider_tree [None req-31579029-4c7d-4de7-a893-e966719cab5d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Updating resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 generation from 28 to 31 during operation: update_traits _update_generation /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:164
Sep 30 18:27:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:33 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_56] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840033e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:33 compute-0 ceph-mon[73755]: pgmap v1467: 353 pgs: 353 active+clean; 167 MiB data, 310 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Sep 30 18:27:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:27:33.792Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:27:33 compute-0 sshd-session[326687]: Failed password for root from 154.125.120.7 port 43666 ssh2
Sep 30 18:27:33 compute-0 nova_compute[265391]: 2025-09-30 18:27:33.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:27:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:27:34.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:27:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:34 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_53] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c004720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1468: 353 pgs: 353 active+clean; 167 MiB data, 311 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Sep 30 18:27:34 compute-0 sudo[327085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:27:34 compute-0 sudo[327085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:27:34 compute-0 sudo[327085]: pam_unix(sudo:session): session closed for user root
Sep 30 18:27:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:27:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:27:34.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:27:35 compute-0 sshd-session[326687]: Received disconnect from 154.125.120.7 port 43666:11: Bye Bye [preauth]
Sep 30 18:27:35 compute-0 sshd-session[326687]: Disconnected from authenticating user root 154.125.120.7 port 43666 [preauth]
Sep 30 18:27:35 compute-0 nova_compute[265391]: 2025-09-30 18:27:35.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:27:35 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:35 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_49] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:35 compute-0 ceph-mon[73755]: pgmap v1468: 353 pgs: 353 active+clean; 167 MiB data, 311 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Sep 30 18:27:35 compute-0 nova_compute[265391]: 2025-09-30 18:27:35.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:27:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:27:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:27:36.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:27:36 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:36 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_54] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400a3b0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:36 compute-0 nova_compute[265391]: 2025-09-30 18:27:36.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:27:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1469: 353 pgs: 353 active+clean; 167 MiB data, 311 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1023 B/s wr, 64 op/s
Sep 30 18:27:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:27:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/800605053' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:27:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:27:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/800605053' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:27:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/800605053' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:27:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/800605053' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:27:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:27:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:27:36.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:27:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:27:37.257Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:27:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:27:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:27:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:27:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:27:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:27:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:27:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:27:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:27:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:37 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_56] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840033e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:37 compute-0 ceph-mon[73755]: pgmap v1469: 353 pgs: 353 active+clean; 167 MiB data, 311 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1023 B/s wr, 64 op/s
Sep 30 18:27:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:27:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:27:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:27:38.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:27:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:38 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_53] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c004720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1470: 353 pgs: 353 active+clean; 188 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 107 op/s
Sep 30 18:27:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:27:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:27:38.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:27:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:27:38] "GET /metrics HTTP/1.1" 200 46645 "" "Prometheus/2.51.0"
Sep 30 18:27:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:27:38] "GET /metrics HTTP/1.1" 200 46645 "" "Prometheus/2.51.0"
Sep 30 18:27:38 compute-0 nova_compute[265391]: 2025-09-30 18:27:38.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:39 compute-0 nova_compute[265391]: 2025-09-30 18:27:39.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:27:39 compute-0 nova_compute[265391]: 2025-09-30 18:27:39.428 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:27:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:39 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_49] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:39 compute-0 ceph-mon[73755]: pgmap v1470: 353 pgs: 353 active+clean; 188 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 107 op/s
Sep 30 18:27:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:27:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:27:40.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:27:40 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:40 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_54] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400a3b0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:40 compute-0 nova_compute[265391]: 2025-09-30 18:27:40.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:27:40 compute-0 nova_compute[265391]: 2025-09-30 18:27:40.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:27:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1471: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 90 op/s
Sep 30 18:27:40 compute-0 podman[327118]: 2025-09-30 18:27:40.537807099 +0000 UTC m=+0.071696464 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Sep 30 18:27:40 compute-0 podman[327117]: 2025-09-30 18:27:40.60440501 +0000 UTC m=+0.130158154 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=ovn_controller)
Sep 30 18:27:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:27:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:27:40.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:27:40 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/957335910' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:27:40 compute-0 nova_compute[265391]: 2025-09-30 18:27:40.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:27:40 compute-0 nova_compute[265391]: 2025-09-30 18:27:40.942 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:27:40 compute-0 nova_compute[265391]: 2025-09-30 18:27:40.942 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:27:40 compute-0 nova_compute[265391]: 2025-09-30 18:27:40.942 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:27:40 compute-0 nova_compute[265391]: 2025-09-30 18:27:40.942 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:27:40 compute-0 nova_compute[265391]: 2025-09-30 18:27:40.943 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:27:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:27:41 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2859010351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:27:41 compute-0 nova_compute[265391]: 2025-09-30 18:27:41.461 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:27:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:41 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_56] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840033e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:41 compute-0 ceph-mon[73755]: pgmap v1471: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 90 op/s
Sep 30 18:27:41 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2859010351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:27:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:27:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:27:42.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:27:42 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:42 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_53] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c004720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1472: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Sep 30 18:27:42 compute-0 nova_compute[265391]: 2025-09-30 18:27:42.517 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:27:42 compute-0 nova_compute[265391]: 2025-09-30 18:27:42.517 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:27:42 compute-0 nova_compute[265391]: 2025-09-30 18:27:42.660 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:27:42 compute-0 nova_compute[265391]: 2025-09-30 18:27:42.661 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:27:42 compute-0 nova_compute[265391]: 2025-09-30 18:27:42.687 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.025s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:27:42 compute-0 nova_compute[265391]: 2025-09-30 18:27:42.687 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4153MB free_disk=39.90130615234375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:27:42 compute-0 nova_compute[265391]: 2025-09-30 18:27:42.688 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:27:42 compute-0 nova_compute[265391]: 2025-09-30 18:27:42.688 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:27:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:27:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:27:42.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:27:43 compute-0 nova_compute[265391]: 2025-09-30 18:27:43.226 2 DEBUG nova.virt.libvirt.driver [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Check if temp file /var/lib/nova/instances/tmpextsh8xs exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10968
Sep 30 18:27:43 compute-0 nova_compute[265391]: 2025-09-30 18:27:43.231 2 DEBUG nova.compute.manager [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpextsh8xs',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='d188c0fb-8668-4ab2-b174-49e0e20505ba',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.12/site-packages/nova/compute/manager.py:9294
Sep 30 18:27:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:43 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_49] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:43 compute-0 podman[327195]: 2025-09-30 18:27:43.523553349 +0000 UTC m=+0.059243590 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true)
Sep 30 18:27:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:27:43.793Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:27:43 compute-0 ceph-mon[73755]: pgmap v1472: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Sep 30 18:27:43 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3835132069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:27:43 compute-0 nova_compute[265391]: 2025-09-30 18:27:43.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:44 compute-0 nova_compute[265391]: 2025-09-30 18:27:44.239 2 INFO nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Instance ff4515a6-7f9a-40de-9516-7071910d467b has allocations against this compute host but is not found in the database.
Sep 30 18:27:44 compute-0 nova_compute[265391]: 2025-09-30 18:27:44.240 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:27:44 compute-0 nova_compute[265391]: 2025-09-30 18:27:44.240 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=39GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:27:42 up  1:31,  0 user,  load average: 0.43, 0.61, 0.75\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_migrating': '1', 'num_os_type_None': '1', 'num_proj_c634e1c17ed54907969576a0eb8eff50': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:27:44 compute-0 nova_compute[265391]: 2025-09-30 18:27:44.302 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:27:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:44 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_53] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400a3b0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:27:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:27:44.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:27:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1473: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:27:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:27:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:27:44.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:27:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:27:44 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4068927978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:27:44 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/4068927978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:27:44 compute-0 nova_compute[265391]: 2025-09-30 18:27:44.825 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:27:44 compute-0 nova_compute[265391]: 2025-09-30 18:27:44.832 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:27:45 compute-0 nova_compute[265391]: 2025-09-30 18:27:45.340 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:27:45 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:45 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_56] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:45 compute-0 nova_compute[265391]: 2025-09-30 18:27:45.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:45 compute-0 ceph-mon[73755]: pgmap v1473: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:27:45 compute-0 nova_compute[265391]: 2025-09-30 18:27:45.851 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:27:45 compute-0 nova_compute[265391]: 2025-09-30 18:27:45.851 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.163s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:27:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:27:46 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:46 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_56] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c004720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:27:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:27:46.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:27:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1474: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:27:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:27:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:27:46.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:27:46 compute-0 nova_compute[265391]: 2025-09-30 18:27:46.851 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:27:46 compute-0 nova_compute[265391]: 2025-09-30 18:27:46.852 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:27:46 compute-0 nova_compute[265391]: 2025-09-30 18:27:46.852 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:27:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:27:47.258Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:27:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:47 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_49] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:47 compute-0 nova_compute[265391]: 2025-09-30 18:27:47.635 2 DEBUG nova.compute.manager [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Preparing to wait for external event network-vif-plugged-ac8e3da5-cb09-4223-89d5-318d077ea35e prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:27:47 compute-0 nova_compute[265391]: 2025-09-30 18:27:47.635 2 DEBUG oslo_concurrency.lockutils [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "d188c0fb-8668-4ab2-b174-49e0e20505ba-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:27:47 compute-0 nova_compute[265391]: 2025-09-30 18:27:47.635 2 DEBUG oslo_concurrency.lockutils [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "d188c0fb-8668-4ab2-b174-49e0e20505ba-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:27:47 compute-0 nova_compute[265391]: 2025-09-30 18:27:47.635 2 DEBUG oslo_concurrency.lockutils [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "d188c0fb-8668-4ab2-b174-49e0e20505ba-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:27:47 compute-0 ceph-mon[73755]: pgmap v1474: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:27:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:48 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_54] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840033e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:27:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:27:48.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:27:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1475: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:27:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:27:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:27:48.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:27:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:27:48] "GET /metrics HTTP/1.1" 200 46645 "" "Prometheus/2.51.0"
Sep 30 18:27:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:27:48] "GET /metrics HTTP/1.1" 200 46645 "" "Prometheus/2.51.0"
Sep 30 18:27:48 compute-0 nova_compute[265391]: 2025-09-30 18:27:48.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:49 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_53] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400a3b0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:50 compute-0 ceph-mon[73755]: pgmap v1475: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:27:50 compute-0 nova_compute[265391]: 2025-09-30 18:27:50.423 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:27:50 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:50 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_56] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c004720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:27:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:27:50.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:27:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1476: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 138 KiB/s rd, 947 KiB/s wr, 21 op/s
Sep 30 18:27:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:27:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:27:50.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:27:50 compute-0 nova_compute[265391]: 2025-09-30 18:27:50.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:27:51 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:51 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_49] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:52 compute-0 ovn_controller[156242]: 2025-09-30T18:27:52Z|00165|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Sep 30 18:27:52 compute-0 ceph-mon[73755]: pgmap v1476: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 138 KiB/s rd, 947 KiB/s wr, 21 op/s
Sep 30 18:27:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:27:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:27:52 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:52 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_54] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840033e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:27:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:27:52.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:27:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1477: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 15 KiB/s wr, 0 op/s
Sep 30 18:27:52 compute-0 podman[327248]: 2025-09-30 18:27:52.540361969 +0000 UTC m=+0.067828864 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Sep 30 18:27:52 compute-0 podman[327249]: 2025-09-30 18:27:52.559916428 +0000 UTC m=+0.079734234 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_id=iscsid, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:27:52 compute-0 podman[327250]: 2025-09-30 18:27:52.573784578 +0000 UTC m=+0.084964979 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.openshift.tags=minimal rhel9, architecture=x86_64, config_id=edpm, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, container_name=openstack_network_exporter)
Sep 30 18:27:52 compute-0 unix_chkpwd[327308]: password check failed for user (root)
Sep 30 18:27:52 compute-0 sshd-session[327244]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107  user=root
Sep 30 18:27:52 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:27:52.697 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:27:52 compute-0 nova_compute[265391]: 2025-09-30 18:27:52.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:52 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:27:52.699 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:27:52 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:27:52.700 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:27:52 compute-0 nova_compute[265391]: 2025-09-30 18:27:52.710 2 DEBUG nova.compute.manager [req-709e6771-d865-4bb3-99a3-a0703e79964c req-3e51a1fc-55a4-49c3-aaae-57ffbabd8221 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Received event network-vif-unplugged-ac8e3da5-cb09-4223-89d5-318d077ea35e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:27:52 compute-0 nova_compute[265391]: 2025-09-30 18:27:52.710 2 DEBUG oslo_concurrency.lockutils [req-709e6771-d865-4bb3-99a3-a0703e79964c req-3e51a1fc-55a4-49c3-aaae-57ffbabd8221 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "d188c0fb-8668-4ab2-b174-49e0e20505ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:27:52 compute-0 nova_compute[265391]: 2025-09-30 18:27:52.711 2 DEBUG oslo_concurrency.lockutils [req-709e6771-d865-4bb3-99a3-a0703e79964c req-3e51a1fc-55a4-49c3-aaae-57ffbabd8221 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "d188c0fb-8668-4ab2-b174-49e0e20505ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:27:52 compute-0 nova_compute[265391]: 2025-09-30 18:27:52.711 2 DEBUG oslo_concurrency.lockutils [req-709e6771-d865-4bb3-99a3-a0703e79964c req-3e51a1fc-55a4-49c3-aaae-57ffbabd8221 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "d188c0fb-8668-4ab2-b174-49e0e20505ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:27:52 compute-0 nova_compute[265391]: 2025-09-30 18:27:52.711 2 DEBUG nova.compute.manager [req-709e6771-d865-4bb3-99a3-a0703e79964c req-3e51a1fc-55a4-49c3-aaae-57ffbabd8221 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] No event matching network-vif-unplugged-ac8e3da5-cb09-4223-89d5-318d077ea35e in dict_keys([('network-vif-plugged', 'ac8e3da5-cb09-4223-89d5-318d077ea35e')]) pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:349
Sep 30 18:27:52 compute-0 nova_compute[265391]: 2025-09-30 18:27:52.711 2 DEBUG nova.compute.manager [req-709e6771-d865-4bb3-99a3-a0703e79964c req-3e51a1fc-55a4-49c3-aaae-57ffbabd8221 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Received event network-vif-unplugged-ac8e3da5-cb09-4223-89d5-318d077ea35e for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:27:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:27:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:27:52.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:27:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:53 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_53] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400a3b0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:53 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:27:53 compute-0 nova_compute[265391]: 2025-09-30 18:27:53.655 2 INFO nova.compute.manager [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Took 6.02 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Sep 30 18:27:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:27:53.794Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:27:53 compute-0 nova_compute[265391]: 2025-09-30 18:27:53.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:27:54.310 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:27:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:27:54.310 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:27:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:27:54.310 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:27:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:54 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_56] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c004720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:27:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:27:54.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:27:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1478: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:27:54 compute-0 ceph-mon[73755]: pgmap v1477: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 15 KiB/s wr, 0 op/s
Sep 30 18:27:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:27:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:27:54.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:27:54 compute-0 nova_compute[265391]: 2025-09-30 18:27:54.782 2 DEBUG nova.compute.manager [req-eefed5df-703c-49ed-9f67-f31041b67c46 req-710c6526-a5eb-478e-8351-b85bd472d37f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Received event network-vif-plugged-ac8e3da5-cb09-4223-89d5-318d077ea35e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:27:54 compute-0 nova_compute[265391]: 2025-09-30 18:27:54.782 2 DEBUG oslo_concurrency.lockutils [req-eefed5df-703c-49ed-9f67-f31041b67c46 req-710c6526-a5eb-478e-8351-b85bd472d37f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "d188c0fb-8668-4ab2-b174-49e0e20505ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:27:54 compute-0 nova_compute[265391]: 2025-09-30 18:27:54.783 2 DEBUG oslo_concurrency.lockutils [req-eefed5df-703c-49ed-9f67-f31041b67c46 req-710c6526-a5eb-478e-8351-b85bd472d37f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "d188c0fb-8668-4ab2-b174-49e0e20505ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:27:54 compute-0 nova_compute[265391]: 2025-09-30 18:27:54.783 2 DEBUG oslo_concurrency.lockutils [req-eefed5df-703c-49ed-9f67-f31041b67c46 req-710c6526-a5eb-478e-8351-b85bd472d37f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "d188c0fb-8668-4ab2-b174-49e0e20505ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:27:54 compute-0 nova_compute[265391]: 2025-09-30 18:27:54.783 2 DEBUG nova.compute.manager [req-eefed5df-703c-49ed-9f67-f31041b67c46 req-710c6526-a5eb-478e-8351-b85bd472d37f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Processing event network-vif-plugged-ac8e3da5-cb09-4223-89d5-318d077ea35e _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:27:54 compute-0 nova_compute[265391]: 2025-09-30 18:27:54.783 2 DEBUG nova.compute.manager [req-eefed5df-703c-49ed-9f67-f31041b67c46 req-710c6526-a5eb-478e-8351-b85bd472d37f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Received event network-changed-ac8e3da5-cb09-4223-89d5-318d077ea35e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:27:54 compute-0 nova_compute[265391]: 2025-09-30 18:27:54.784 2 DEBUG nova.compute.manager [req-eefed5df-703c-49ed-9f67-f31041b67c46 req-710c6526-a5eb-478e-8351-b85bd472d37f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Refreshing instance network info cache due to event network-changed-ac8e3da5-cb09-4223-89d5-318d077ea35e. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:27:54 compute-0 nova_compute[265391]: 2025-09-30 18:27:54.784 2 DEBUG oslo_concurrency.lockutils [req-eefed5df-703c-49ed-9f67-f31041b67c46 req-710c6526-a5eb-478e-8351-b85bd472d37f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-d188c0fb-8668-4ab2-b174-49e0e20505ba" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:27:54 compute-0 nova_compute[265391]: 2025-09-30 18:27:54.784 2 DEBUG oslo_concurrency.lockutils [req-eefed5df-703c-49ed-9f67-f31041b67c46 req-710c6526-a5eb-478e-8351-b85bd472d37f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-d188c0fb-8668-4ab2-b174-49e0e20505ba" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:27:54 compute-0 nova_compute[265391]: 2025-09-30 18:27:54.785 2 DEBUG nova.network.neutron [req-eefed5df-703c-49ed-9f67-f31041b67c46 req-710c6526-a5eb-478e-8351-b85bd472d37f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Refreshing network info cache for port ac8e3da5-cb09-4223-89d5-318d077ea35e _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:27:54 compute-0 nova_compute[265391]: 2025-09-30 18:27:54.786 2 DEBUG nova.compute.manager [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:27:54 compute-0 sudo[327313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:27:54 compute-0 sudo[327313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:27:54 compute-0 sudo[327313]: pam_unix(sudo:session): session closed for user root
Sep 30 18:27:55 compute-0 sshd-session[327244]: Failed password for root from 14.225.220.107 port 38264 ssh2
Sep 30 18:27:55 compute-0 nova_compute[265391]: 2025-09-30 18:27:55.292 2 WARNING neutronclient.v2_0.client [req-eefed5df-703c-49ed-9f67-f31041b67c46 req-710c6526-a5eb-478e-8351-b85bd472d37f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:27:55 compute-0 nova_compute[265391]: 2025-09-30 18:27:55.301 2 DEBUG nova.compute.manager [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpextsh8xs',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='d188c0fb-8668-4ab2-b174-49e0e20505ba',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(ff4515a6-7f9a-40de-9516-7071910d467b),old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9659
Sep 30 18:27:55 compute-0 nova_compute[265391]: 2025-09-30 18:27:55.308 2 DEBUG nova.objects.instance [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lazy-loading 'migration_context' on Instance uuid d188c0fb-8668-4ab2-b174-49e0e20505ba obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:27:55 compute-0 nova_compute[265391]: 2025-09-30 18:27:55.310 2 DEBUG nova.virt.libvirt.driver [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Starting monitoring of live migration _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11543
Sep 30 18:27:55 compute-0 nova_compute[265391]: 2025-09-30 18:27:55.312 2 DEBUG nova.virt.libvirt.driver [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Sep 30 18:27:55 compute-0 nova_compute[265391]: 2025-09-30 18:27:55.313 2 DEBUG nova.virt.libvirt.driver [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Sep 30 18:27:55 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:55 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_54] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c004720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:55 compute-0 sshd-session[327244]: Received disconnect from 14.225.220.107 port 38264:11: Bye Bye [preauth]
Sep 30 18:27:55 compute-0 sshd-session[327244]: Disconnected from authenticating user root 14.225.220.107 port 38264 [preauth]
Sep 30 18:27:55 compute-0 nova_compute[265391]: 2025-09-30 18:27:55.702 2 WARNING neutronclient.v2_0.client [req-eefed5df-703c-49ed-9f67-f31041b67c46 req-710c6526-a5eb-478e-8351-b85bd472d37f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:27:55 compute-0 nova_compute[265391]: 2025-09-30 18:27:55.815 2 DEBUG nova.virt.libvirt.driver [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Sep 30 18:27:55 compute-0 nova_compute[265391]: 2025-09-30 18:27:55.815 2 DEBUG nova.virt.libvirt.driver [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Sep 30 18:27:55 compute-0 nova_compute[265391]: 2025-09-30 18:27:55.821 2 DEBUG nova.virt.libvirt.vif [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:26:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-1528882058',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-1528882058',id=18,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:26:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c634e1c17ed54907969576a0eb8eff50',ramdisk_id='',reservation_id='r-1gsulz3p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteStrategies-1883747907',owner_user_name='tempest-TestExecuteStrategies-1883747907-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:26:58Z,user_data=None,user_id='623ef4a55c9e4fc28bb65e49246b5008',uuid=d188c0fb-8668-4ab2-b174-49e0e20505ba,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ac8e3da5-cb09-4223-89d5-318d077ea35e", "address": "fa:16:3e:bb:e0:b1", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapac8e3da5-cb", "ovs_interfaceid": "ac8e3da5-cb09-4223-89d5-318d077ea35e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:27:55 compute-0 nova_compute[265391]: 2025-09-30 18:27:55.821 2 DEBUG nova.network.os_vif_util [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "ac8e3da5-cb09-4223-89d5-318d077ea35e", "address": "fa:16:3e:bb:e0:b1", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapac8e3da5-cb", "ovs_interfaceid": "ac8e3da5-cb09-4223-89d5-318d077ea35e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:27:55 compute-0 nova_compute[265391]: 2025-09-30 18:27:55.822 2 DEBUG nova.network.os_vif_util [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:e0:b1,bridge_name='br-int',has_traffic_filtering=True,id=ac8e3da5-cb09-4223-89d5-318d077ea35e,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac8e3da5-cb') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:27:55 compute-0 nova_compute[265391]: 2025-09-30 18:27:55.823 2 DEBUG nova.virt.libvirt.migration [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Updating guest XML with vif config: <interface type="ethernet">
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <mac address="fa:16:3e:bb:e0:b1"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <model type="virtio"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <mtu size="1442"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <target dev="tapac8e3da5-cb"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]: </interface>
Sep 30 18:27:55 compute-0 nova_compute[265391]:  _update_vif_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:534
Sep 30 18:27:55 compute-0 nova_compute[265391]: 2025-09-30 18:27:55.823 2 DEBUG nova.virt.libvirt.migration [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _remove_cpu_shared_set_xml input xml=<domain type="kvm">
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <name>instance-00000012</name>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <uuid>d188c0fb-8668-4ab2-b174-49e0e20505ba</uuid>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteStrategies-server-1528882058</nova:name>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:26:52</nova:creationTime>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:27:55 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:27:55 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:user uuid="623ef4a55c9e4fc28bb65e49246b5008">tempest-TestExecuteStrategies-1883747907-project-admin</nova:user>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:project uuid="c634e1c17ed54907969576a0eb8eff50">tempest-TestExecuteStrategies-1883747907</nova:project>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:port uuid="ac8e3da5-cb09-4223-89d5-318d077ea35e">
Sep 30 18:27:55 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <system>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <entry name="serial">d188c0fb-8668-4ab2-b174-49e0e20505ba</entry>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <entry name="uuid">d188c0fb-8668-4ab2-b174-49e0e20505ba</entry>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </system>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <os>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   </os>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <features>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   </features>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/d188c0fb-8668-4ab2-b174-49e0e20505ba_disk">
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       </source>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/d188c0fb-8668-4ab2-b174-49e0e20505ba_disk.config">
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       </source>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:bb:e0:b1"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target dev="tapac8e3da5-cb"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/d188c0fb-8668-4ab2-b174-49e0e20505ba/console.log" append="off"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       </target>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/d188c0fb-8668-4ab2-b174-49e0e20505ba/console.log" append="off"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </console>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </input>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <video>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </video>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]: </domain>
Sep 30 18:27:55 compute-0 nova_compute[265391]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:241
Sep 30 18:27:55 compute-0 nova_compute[265391]: 2025-09-30 18:27:55.826 2 DEBUG nova.virt.libvirt.migration [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _remove_cpu_shared_set_xml output xml=<domain type="kvm">
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <name>instance-00000012</name>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <uuid>d188c0fb-8668-4ab2-b174-49e0e20505ba</uuid>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteStrategies-server-1528882058</nova:name>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:26:52</nova:creationTime>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:27:55 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:27:55 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:user uuid="623ef4a55c9e4fc28bb65e49246b5008">tempest-TestExecuteStrategies-1883747907-project-admin</nova:user>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:project uuid="c634e1c17ed54907969576a0eb8eff50">tempest-TestExecuteStrategies-1883747907</nova:project>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:port uuid="ac8e3da5-cb09-4223-89d5-318d077ea35e">
Sep 30 18:27:55 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <system>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <entry name="serial">d188c0fb-8668-4ab2-b174-49e0e20505ba</entry>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <entry name="uuid">d188c0fb-8668-4ab2-b174-49e0e20505ba</entry>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </system>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <os>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   </os>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <features>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   </features>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/d188c0fb-8668-4ab2-b174-49e0e20505ba_disk">
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       </source>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/d188c0fb-8668-4ab2-b174-49e0e20505ba_disk.config">
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       </source>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:bb:e0:b1"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target dev="tapac8e3da5-cb"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/d188c0fb-8668-4ab2-b174-49e0e20505ba/console.log" append="off"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       </target>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/d188c0fb-8668-4ab2-b174-49e0e20505ba/console.log" append="off"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </console>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </input>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <video>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </video>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]: </domain>
Sep 30 18:27:55 compute-0 nova_compute[265391]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:250
Sep 30 18:27:55 compute-0 nova_compute[265391]: 2025-09-30 18:27:55.828 2 DEBUG nova.virt.libvirt.migration [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _update_pci_xml output xml=<domain type="kvm">
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <name>instance-00000012</name>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <uuid>d188c0fb-8668-4ab2-b174-49e0e20505ba</uuid>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteStrategies-server-1528882058</nova:name>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:26:52</nova:creationTime>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:27:55 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:27:55 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:user uuid="623ef4a55c9e4fc28bb65e49246b5008">tempest-TestExecuteStrategies-1883747907-project-admin</nova:user>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:project uuid="c634e1c17ed54907969576a0eb8eff50">tempest-TestExecuteStrategies-1883747907</nova:project>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <nova:port uuid="ac8e3da5-cb09-4223-89d5-318d077ea35e">
Sep 30 18:27:55 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <system>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <entry name="serial">d188c0fb-8668-4ab2-b174-49e0e20505ba</entry>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <entry name="uuid">d188c0fb-8668-4ab2-b174-49e0e20505ba</entry>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </system>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <os>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   </os>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <features>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   </features>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/d188c0fb-8668-4ab2-b174-49e0e20505ba_disk">
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       </source>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/d188c0fb-8668-4ab2-b174-49e0e20505ba_disk.config">
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       </source>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:bb:e0:b1"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target dev="tapac8e3da5-cb"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/d188c0fb-8668-4ab2-b174-49e0e20505ba/console.log" append="off"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:27:55 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       </target>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/d188c0fb-8668-4ab2-b174-49e0e20505ba/console.log" append="off"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </console>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </input>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <video>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </video>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:27:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:27:55 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:27:55 compute-0 nova_compute[265391]: </domain>
Sep 30 18:27:55 compute-0 nova_compute[265391]:  _update_pci_dev_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:166
Sep 30 18:27:55 compute-0 nova_compute[265391]: 2025-09-30 18:27:55.828 2 DEBUG nova.virt.libvirt.driver [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] About to invoke the migrate API _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11175
Sep 30 18:27:55 compute-0 nova_compute[265391]: 2025-09-30 18:27:55.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:55 compute-0 nova_compute[265391]: 2025-09-30 18:27:55.898 2 DEBUG nova.network.neutron [req-eefed5df-703c-49ed-9f67-f31041b67c46 req-710c6526-a5eb-478e-8351-b85bd472d37f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Updated VIF entry in instance network info cache for port ac8e3da5-cb09-4223-89d5-318d077ea35e. _build_network_info_model /usr/lib/python3.12/site-packages/nova/network/neutron.py:3542
Sep 30 18:27:55 compute-0 nova_compute[265391]: 2025-09-30 18:27:55.898 2 DEBUG nova.network.neutron [req-eefed5df-703c-49ed-9f67-f31041b67c46 req-710c6526-a5eb-478e-8351-b85bd472d37f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Updating instance_info_cache with network_info: [{"id": "ac8e3da5-cb09-4223-89d5-318d077ea35e", "address": "fa:16:3e:bb:e0:b1", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac8e3da5-cb", "ovs_interfaceid": "ac8e3da5-cb09-4223-89d5-318d077ea35e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:27:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:27:56 compute-0 ceph-mon[73755]: pgmap v1478: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:27:56 compute-0 nova_compute[265391]: 2025-09-30 18:27:56.318 2 DEBUG nova.virt.libvirt.migration [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Current None elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Sep 30 18:27:56 compute-0 nova_compute[265391]: 2025-09-30 18:27:56.318 2 INFO nova.virt.libvirt.migration [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Increasing downtime to 50 ms after 0 sec elapsed time
Sep 30 18:27:56 compute-0 nova_compute[265391]: 2025-09-30 18:27:56.407 2 DEBUG oslo_concurrency.lockutils [req-eefed5df-703c-49ed-9f67-f31041b67c46 req-710c6526-a5eb-478e-8351-b85bd472d37f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-d188c0fb-8668-4ab2-b174-49e0e20505ba" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:27:56 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:56 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_49] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840033e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:27:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:27:56.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:27:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1479: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:27:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:27:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:27:56.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:27:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:27:57.259Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:27:57 compute-0 nova_compute[265391]: 2025-09-30 18:27:57.335 2 INFO nova.virt.libvirt.driver [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Migration running for 1 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Sep 30 18:27:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:57 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_53] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400a3b0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:27:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4203785252' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:27:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:27:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4203785252' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:27:57 compute-0 nova_compute[265391]: 2025-09-30 18:27:57.839 2 DEBUG nova.virt.libvirt.migration [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Sep 30 18:27:57 compute-0 nova_compute[265391]: 2025-09-30 18:27:57.840 2 DEBUG nova.virt.libvirt.migration [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Downtime does not need to change update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:671
Sep 30 18:27:58 compute-0 ceph-mon[73755]: pgmap v1479: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 0 op/s
Sep 30 18:27:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/4203785252' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:27:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/4203785252' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:27:58 compute-0 kernel: tapac8e3da5-cb (unregistering): left promiscuous mode
Sep 30 18:27:58 compute-0 NetworkManager[45059]: <info>  [1759256878.4204] device (tapac8e3da5-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 18:27:58 compute-0 nova_compute[265391]: 2025-09-30 18:27:58.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:58 compute-0 ovn_controller[156242]: 2025-09-30T18:27:58Z|00166|binding|INFO|Releasing lport ac8e3da5-cb09-4223-89d5-318d077ea35e from this chassis (sb_readonly=0)
Sep 30 18:27:58 compute-0 ovn_controller[156242]: 2025-09-30T18:27:58Z|00167|binding|INFO|Setting lport ac8e3da5-cb09-4223-89d5-318d077ea35e down in Southbound
Sep 30 18:27:58 compute-0 ovn_controller[156242]: 2025-09-30T18:27:58Z|00168|binding|INFO|Removing iface tapac8e3da5-cb ovn-installed in OVS
Sep 30 18:27:58 compute-0 nova_compute[265391]: 2025-09-30 18:27:58.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:58 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_54] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400a3b0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:27:58.444 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:e0:b1 10.100.0.14'], port_security=['fa:16:3e:bb:e0:b1 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '81ab3fff-d6d4-4262-9f24-1b212876e52c'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd188c0fb-8668-4ab2-b174-49e0e20505ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6901f664-336b-42d2-bbf7-58951befc8d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c634e1c17ed54907969576a0eb8eff50', 'neutron:revision_number': '10', 'neutron:security_group_ids': '11cf84c1-9641-409c-b5f5-6c5fe3a8afe5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c24d9de-651b-4bf8-842a-1286ab88b11d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=ac8e3da5-cb09-4223-89d5-318d077ea35e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:27:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:27:58.446 166158 INFO neutron.agent.ovn.metadata.agent [-] Port ac8e3da5-cb09-4223-89d5-318d077ea35e in datapath 6901f664-336b-42d2-bbf7-58951befc8d1 unbound from our chassis
Sep 30 18:27:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:27:58.448 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6901f664-336b-42d2-bbf7-58951befc8d1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:27:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:27:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:27:58.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:27:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:27:58.452 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[2e445f0e-89d7-4cc7-91c3-74c992daa7e9]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:27:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:27:58.454 166158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1 namespace which is not needed anymore
Sep 30 18:27:58 compute-0 nova_compute[265391]: 2025-09-30 18:27:58.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:58 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000012.scope: Deactivated successfully.
Sep 30 18:27:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1480: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 2.2 KiB/s rd, 9.2 KiB/s wr, 3 op/s
Sep 30 18:27:58 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000012.scope: Consumed 15.104s CPU time.
Sep 30 18:27:58 compute-0 systemd-machined[219917]: Machine qemu-13-instance-00000012 terminated.
Sep 30 18:27:58 compute-0 virtqemud[265263]: Unable to get XATTR trusted.libvirt.security.ref_selinux on d188c0fb-8668-4ab2-b174-49e0e20505ba_disk: No such file or directory
Sep 30 18:27:58 compute-0 virtqemud[265263]: Unable to get XATTR trusted.libvirt.security.ref_dac on d188c0fb-8668-4ab2-b174-49e0e20505ba_disk: No such file or directory
Sep 30 18:27:58 compute-0 nova_compute[265391]: 2025-09-30 18:27:58.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:58 compute-0 nova_compute[265391]: 2025-09-30 18:27:58.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:58 compute-0 nova_compute[265391]: 2025-09-30 18:27:58.544 2 DEBUG nova.virt.libvirt.guest [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.12/site-packages/nova/virt/libvirt/guest.py:687
Sep 30 18:27:58 compute-0 nova_compute[265391]: 2025-09-30 18:27:58.544 2 INFO nova.virt.libvirt.driver [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Migration operation has completed
Sep 30 18:27:58 compute-0 nova_compute[265391]: 2025-09-30 18:27:58.544 2 INFO nova.compute.manager [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] _post_live_migration() is started..
Sep 30 18:27:58 compute-0 nova_compute[265391]: 2025-09-30 18:27:58.547 2 DEBUG nova.virt.libvirt.driver [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Migrate API has completed _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11182
Sep 30 18:27:58 compute-0 nova_compute[265391]: 2025-09-30 18:27:58.547 2 DEBUG nova.virt.libvirt.driver [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Migration operation thread has finished _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11230
Sep 30 18:27:58 compute-0 nova_compute[265391]: 2025-09-30 18:27:58.547 2 DEBUG nova.virt.libvirt.driver [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Migration operation thread notification thread_finished /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11533
Sep 30 18:27:58 compute-0 nova_compute[265391]: 2025-09-30 18:27:58.557 2 WARNING neutronclient.v2_0.client [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:27:58 compute-0 nova_compute[265391]: 2025-09-30 18:27:58.558 2 WARNING neutronclient.v2_0.client [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:27:58 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[326177]: [NOTICE]   (326182) : haproxy version is 3.0.5-8e879a5
Sep 30 18:27:58 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[326177]: [NOTICE]   (326182) : path to executable is /usr/sbin/haproxy
Sep 30 18:27:58 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[326177]: [WARNING]  (326182) : Exiting Master process...
Sep 30 18:27:58 compute-0 podman[327377]: 2025-09-30 18:27:58.598948873 +0000 UTC m=+0.029046856 container kill 7f1913d0547baa11d5113cf7291bdea0d065b4d1a9ba872dc3fda2810eba83f3 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:27:58 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[326177]: [ALERT]    (326182) : Current worker (326185) exited with code 143 (Terminated)
Sep 30 18:27:58 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[326177]: [WARNING]  (326182) : All workers exited. Exiting... (0)
Sep 30 18:27:58 compute-0 systemd[1]: libpod-7f1913d0547baa11d5113cf7291bdea0d065b4d1a9ba872dc3fda2810eba83f3.scope: Deactivated successfully.
Sep 30 18:27:58 compute-0 podman[327392]: 2025-09-30 18:27:58.643823399 +0000 UTC m=+0.025989396 container died 7f1913d0547baa11d5113cf7291bdea0d065b4d1a9ba872dc3fda2810eba83f3 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:27:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7f1913d0547baa11d5113cf7291bdea0d065b4d1a9ba872dc3fda2810eba83f3-userdata-shm.mount: Deactivated successfully.
Sep 30 18:27:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-207d859d15ddc00e99276997fd3e5eba296ccc0537e9679a76845cc18742ab8e-merged.mount: Deactivated successfully.
Sep 30 18:27:58 compute-0 podman[327392]: 2025-09-30 18:27:58.689042245 +0000 UTC m=+0.071208212 container cleanup 7f1913d0547baa11d5113cf7291bdea0d065b4d1a9ba872dc3fda2810eba83f3 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, tcib_managed=true, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Sep 30 18:27:58 compute-0 systemd[1]: libpod-conmon-7f1913d0547baa11d5113cf7291bdea0d065b4d1a9ba872dc3fda2810eba83f3.scope: Deactivated successfully.
Sep 30 18:27:58 compute-0 podman[327394]: 2025-09-30 18:27:58.708511401 +0000 UTC m=+0.079739274 container remove 7f1913d0547baa11d5113cf7291bdea0d065b4d1a9ba872dc3fda2810eba83f3 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2)
Sep 30 18:27:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:27:58.723 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[d34f024e-96ba-4830-afbb-49c0bfeb1069]: (4, ("Tue Sep 30 06:27:58 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1 (7f1913d0547baa11d5113cf7291bdea0d065b4d1a9ba872dc3fda2810eba83f3)\n7f1913d0547baa11d5113cf7291bdea0d065b4d1a9ba872dc3fda2810eba83f3\nTue Sep 30 06:27:58 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1 (7f1913d0547baa11d5113cf7291bdea0d065b4d1a9ba872dc3fda2810eba83f3)\n7f1913d0547baa11d5113cf7291bdea0d065b4d1a9ba872dc3fda2810eba83f3\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:27:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:27:58.725 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[ff843fe8-26c5-4fc9-b155-4f7a5be34d8f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:27:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:27:58.726 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:27:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:27:58.726 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[89e06212-0ade-4fa3-ba21-d77da41fcf85]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:27:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:27:58.727 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6901f664-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:27:58 compute-0 nova_compute[265391]: 2025-09-30 18:27:58.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:58 compute-0 kernel: tap6901f664-30: left promiscuous mode
Sep 30 18:27:58 compute-0 nova_compute[265391]: 2025-09-30 18:27:58.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:27:58.748 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[6ad06923-b6bd-4428-aa82-90958b9f24e8]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:27:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:27:58.780 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[672185be-ff0c-4dd5-b916-60904b977a9b]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:27:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:27:58.782 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[0515eb11-3416-4e1c-8204-9ca2e2524364]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:27:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:27:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:27:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:27:58.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:27:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:27:58] "GET /metrics HTTP/1.1" 200 46646 "" "Prometheus/2.51.0"
Sep 30 18:27:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:27:58] "GET /metrics HTTP/1.1" 200 46646 "" "Prometheus/2.51.0"
Sep 30 18:27:58 compute-0 nova_compute[265391]: 2025-09-30 18:27:58.797 2 DEBUG nova.compute.manager [req-ae79b906-50f4-42ba-8390-db8602cf15db req-578dd1c0-ac0e-4156-b318-b90fc19bd5fc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Received event network-vif-unplugged-ac8e3da5-cb09-4223-89d5-318d077ea35e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:27:58 compute-0 nova_compute[265391]: 2025-09-30 18:27:58.798 2 DEBUG oslo_concurrency.lockutils [req-ae79b906-50f4-42ba-8390-db8602cf15db req-578dd1c0-ac0e-4156-b318-b90fc19bd5fc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "d188c0fb-8668-4ab2-b174-49e0e20505ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:27:58 compute-0 nova_compute[265391]: 2025-09-30 18:27:58.798 2 DEBUG oslo_concurrency.lockutils [req-ae79b906-50f4-42ba-8390-db8602cf15db req-578dd1c0-ac0e-4156-b318-b90fc19bd5fc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "d188c0fb-8668-4ab2-b174-49e0e20505ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:27:58 compute-0 nova_compute[265391]: 2025-09-30 18:27:58.798 2 DEBUG oslo_concurrency.lockutils [req-ae79b906-50f4-42ba-8390-db8602cf15db req-578dd1c0-ac0e-4156-b318-b90fc19bd5fc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "d188c0fb-8668-4ab2-b174-49e0e20505ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:27:58 compute-0 nova_compute[265391]: 2025-09-30 18:27:58.798 2 DEBUG nova.compute.manager [req-ae79b906-50f4-42ba-8390-db8602cf15db req-578dd1c0-ac0e-4156-b318-b90fc19bd5fc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] No waiting events found dispatching network-vif-unplugged-ac8e3da5-cb09-4223-89d5-318d077ea35e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:27:58 compute-0 nova_compute[265391]: 2025-09-30 18:27:58.798 2 DEBUG nova.compute.manager [req-ae79b906-50f4-42ba-8390-db8602cf15db req-578dd1c0-ac0e-4156-b318-b90fc19bd5fc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Received event network-vif-unplugged-ac8e3da5-cb09-4223-89d5-318d077ea35e for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:27:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:27:58.800 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[6171311f-757b-482e-a330-8fe00f815163]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 541566, 'reachable_time': 30152, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327426, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:27:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:27:58.805 166383 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1 deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Sep 30 18:27:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:27:58.805 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[4f59ee05-dd40-45ba-942f-79775bbdf190]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:27:58 compute-0 systemd[1]: run-netns-ovnmeta\x2d6901f664\x2d336b\x2d42d2\x2dbbf7\x2d58951befc8d1.mount: Deactivated successfully.
Sep 30 18:27:58 compute-0 nova_compute[265391]: 2025-09-30 18:27:58.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:59 compute-0 nova_compute[265391]: 2025-09-30 18:27:59.139 2 DEBUG nova.network.neutron [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Activated binding for port ac8e3da5-cb09-4223-89d5-318d077ea35e and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.12/site-packages/nova/network/neutron.py:3241
Sep 30 18:27:59 compute-0 nova_compute[265391]: 2025-09-30 18:27:59.140 2 DEBUG nova.compute.manager [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "ac8e3da5-cb09-4223-89d5-318d077ea35e", "address": "fa:16:3e:bb:e0:b1", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac8e3da5-cb", "ovs_interfaceid": "ac8e3da5-cb09-4223-89d5-318d077ea35e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10059
Sep 30 18:27:59 compute-0 nova_compute[265391]: 2025-09-30 18:27:59.141 2 DEBUG nova.virt.libvirt.vif [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:26:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-1528882058',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-1528882058',id=18,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:26:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c634e1c17ed54907969576a0eb8eff50',ramdisk_id='',reservation_id='r-1gsulz3p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteStrategies-1883747907',owner_user_name='tempest-TestExecuteStrategies-1883747907-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:27:37Z,user_data=None,user_id='623ef4a55c9e4fc28bb65e49246b5008',uuid=d188c0fb-8668-4ab2-b174-49e0e20505ba,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ac8e3da5-cb09-4223-89d5-318d077ea35e", "address": "fa:16:3e:bb:e0:b1", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac8e3da5-cb", "ovs_interfaceid": "ac8e3da5-cb09-4223-89d5-318d077ea35e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Sep 30 18:27:59 compute-0 nova_compute[265391]: 2025-09-30 18:27:59.142 2 DEBUG nova.network.os_vif_util [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "ac8e3da5-cb09-4223-89d5-318d077ea35e", "address": "fa:16:3e:bb:e0:b1", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac8e3da5-cb", "ovs_interfaceid": "ac8e3da5-cb09-4223-89d5-318d077ea35e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:27:59 compute-0 nova_compute[265391]: 2025-09-30 18:27:59.143 2 DEBUG nova.network.os_vif_util [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:e0:b1,bridge_name='br-int',has_traffic_filtering=True,id=ac8e3da5-cb09-4223-89d5-318d077ea35e,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac8e3da5-cb') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:27:59 compute-0 nova_compute[265391]: 2025-09-30 18:27:59.144 2 DEBUG os_vif [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:e0:b1,bridge_name='br-int',has_traffic_filtering=True,id=ac8e3da5-cb09-4223-89d5-318d077ea35e,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac8e3da5-cb') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Sep 30 18:27:59 compute-0 nova_compute[265391]: 2025-09-30 18:27:59.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:59 compute-0 nova_compute[265391]: 2025-09-30 18:27:59.147 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac8e3da5-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:27:59 compute-0 nova_compute[265391]: 2025-09-30 18:27:59.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:59 compute-0 nova_compute[265391]: 2025-09-30 18:27:59.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:59 compute-0 nova_compute[265391]: 2025-09-30 18:27:59.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:59 compute-0 nova_compute[265391]: 2025-09-30 18:27:59.153 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=52525aad-96f1-4dd5-b652-e19f3bc034ad) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:27:59 compute-0 nova_compute[265391]: 2025-09-30 18:27:59.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:59 compute-0 nova_compute[265391]: 2025-09-30 18:27:59.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:27:59 compute-0 nova_compute[265391]: 2025-09-30 18:27:59.161 2 INFO os_vif [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:e0:b1,bridge_name='br-int',has_traffic_filtering=True,id=ac8e3da5-cb09-4223-89d5-318d077ea35e,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac8e3da5-cb')
Sep 30 18:27:59 compute-0 nova_compute[265391]: 2025-09-30 18:27:59.162 2 DEBUG oslo_concurrency.lockutils [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:27:59 compute-0 nova_compute[265391]: 2025-09-30 18:27:59.163 2 DEBUG oslo_concurrency.lockutils [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:27:59 compute-0 nova_compute[265391]: 2025-09-30 18:27:59.163 2 DEBUG oslo_concurrency.lockutils [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:27:59 compute-0 nova_compute[265391]: 2025-09-30 18:27:59.164 2 DEBUG nova.compute.manager [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10082
Sep 30 18:27:59 compute-0 nova_compute[265391]: 2025-09-30 18:27:59.164 2 INFO nova.virt.libvirt.driver [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Deleting instance files /var/lib/nova/instances/d188c0fb-8668-4ab2-b174-49e0e20505ba_del
Sep 30 18:27:59 compute-0 nova_compute[265391]: 2025-09-30 18:27:59.165 2 INFO nova.virt.libvirt.driver [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Deletion of /var/lib/nova/instances/d188c0fb-8668-4ab2-b174-49e0e20505ba_del complete
Sep 30 18:27:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:27:59 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_54] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f038c004720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:27:59 compute-0 podman[276673]: time="2025-09-30T18:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:27:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:27:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10292 "" "Go-http-client/1.1"
Sep 30 18:28:00 compute-0 ceph-mon[73755]: pgmap v1480: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 2.2 KiB/s rd, 9.2 KiB/s wr, 3 op/s
Sep 30 18:28:00 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Sep 30 18:28:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:28:00 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_49] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840033e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:28:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:28:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:28:00.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:28:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1481: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.5 KiB/s rd, 9.2 KiB/s wr, 6 op/s
Sep 30 18:28:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:28:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:28:00.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:28:00 compute-0 nova_compute[265391]: 2025-09-30 18:28:00.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:00 compute-0 nova_compute[265391]: 2025-09-30 18:28:00.849 2 DEBUG nova.compute.manager [req-20cd32c6-1856-48d9-93c1-b61f3c490592 req-41ad2f1e-4992-4563-b636-1b1b435d2143 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Received event network-vif-plugged-ac8e3da5-cb09-4223-89d5-318d077ea35e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:28:00 compute-0 nova_compute[265391]: 2025-09-30 18:28:00.849 2 DEBUG oslo_concurrency.lockutils [req-20cd32c6-1856-48d9-93c1-b61f3c490592 req-41ad2f1e-4992-4563-b636-1b1b435d2143 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "d188c0fb-8668-4ab2-b174-49e0e20505ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:28:00 compute-0 nova_compute[265391]: 2025-09-30 18:28:00.849 2 DEBUG oslo_concurrency.lockutils [req-20cd32c6-1856-48d9-93c1-b61f3c490592 req-41ad2f1e-4992-4563-b636-1b1b435d2143 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "d188c0fb-8668-4ab2-b174-49e0e20505ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:28:00 compute-0 nova_compute[265391]: 2025-09-30 18:28:00.849 2 DEBUG oslo_concurrency.lockutils [req-20cd32c6-1856-48d9-93c1-b61f3c490592 req-41ad2f1e-4992-4563-b636-1b1b435d2143 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "d188c0fb-8668-4ab2-b174-49e0e20505ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:28:00 compute-0 nova_compute[265391]: 2025-09-30 18:28:00.850 2 DEBUG nova.compute.manager [req-20cd32c6-1856-48d9-93c1-b61f3c490592 req-41ad2f1e-4992-4563-b636-1b1b435d2143 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] No waiting events found dispatching network-vif-plugged-ac8e3da5-cb09-4223-89d5-318d077ea35e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:28:00 compute-0 nova_compute[265391]: 2025-09-30 18:28:00.850 2 WARNING nova.compute.manager [req-20cd32c6-1856-48d9-93c1-b61f3c490592 req-41ad2f1e-4992-4563-b636-1b1b435d2143 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Received unexpected event network-vif-plugged-ac8e3da5-cb09-4223-89d5-318d077ea35e for instance with vm_state active and task_state migrating.
Sep 30 18:28:00 compute-0 nova_compute[265391]: 2025-09-30 18:28:00.850 2 DEBUG nova.compute.manager [req-20cd32c6-1856-48d9-93c1-b61f3c490592 req-41ad2f1e-4992-4563-b636-1b1b435d2143 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Received event network-vif-unplugged-ac8e3da5-cb09-4223-89d5-318d077ea35e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:28:00 compute-0 nova_compute[265391]: 2025-09-30 18:28:00.850 2 DEBUG oslo_concurrency.lockutils [req-20cd32c6-1856-48d9-93c1-b61f3c490592 req-41ad2f1e-4992-4563-b636-1b1b435d2143 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "d188c0fb-8668-4ab2-b174-49e0e20505ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:28:00 compute-0 nova_compute[265391]: 2025-09-30 18:28:00.850 2 DEBUG oslo_concurrency.lockutils [req-20cd32c6-1856-48d9-93c1-b61f3c490592 req-41ad2f1e-4992-4563-b636-1b1b435d2143 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "d188c0fb-8668-4ab2-b174-49e0e20505ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:28:00 compute-0 nova_compute[265391]: 2025-09-30 18:28:00.850 2 DEBUG oslo_concurrency.lockutils [req-20cd32c6-1856-48d9-93c1-b61f3c490592 req-41ad2f1e-4992-4563-b636-1b1b435d2143 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "d188c0fb-8668-4ab2-b174-49e0e20505ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:28:00 compute-0 nova_compute[265391]: 2025-09-30 18:28:00.850 2 DEBUG nova.compute.manager [req-20cd32c6-1856-48d9-93c1-b61f3c490592 req-41ad2f1e-4992-4563-b636-1b1b435d2143 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] No waiting events found dispatching network-vif-unplugged-ac8e3da5-cb09-4223-89d5-318d077ea35e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:28:00 compute-0 nova_compute[265391]: 2025-09-30 18:28:00.850 2 DEBUG nova.compute.manager [req-20cd32c6-1856-48d9-93c1-b61f3c490592 req-41ad2f1e-4992-4563-b636-1b1b435d2143 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Received event network-vif-unplugged-ac8e3da5-cb09-4223-89d5-318d077ea35e for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:28:00 compute-0 nova_compute[265391]: 2025-09-30 18:28:00.851 2 DEBUG nova.compute.manager [req-20cd32c6-1856-48d9-93c1-b61f3c490592 req-41ad2f1e-4992-4563-b636-1b1b435d2143 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Received event network-vif-plugged-ac8e3da5-cb09-4223-89d5-318d077ea35e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:28:00 compute-0 nova_compute[265391]: 2025-09-30 18:28:00.851 2 DEBUG oslo_concurrency.lockutils [req-20cd32c6-1856-48d9-93c1-b61f3c490592 req-41ad2f1e-4992-4563-b636-1b1b435d2143 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "d188c0fb-8668-4ab2-b174-49e0e20505ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:28:00 compute-0 nova_compute[265391]: 2025-09-30 18:28:00.851 2 DEBUG oslo_concurrency.lockutils [req-20cd32c6-1856-48d9-93c1-b61f3c490592 req-41ad2f1e-4992-4563-b636-1b1b435d2143 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "d188c0fb-8668-4ab2-b174-49e0e20505ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:28:00 compute-0 nova_compute[265391]: 2025-09-30 18:28:00.851 2 DEBUG oslo_concurrency.lockutils [req-20cd32c6-1856-48d9-93c1-b61f3c490592 req-41ad2f1e-4992-4563-b636-1b1b435d2143 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "d188c0fb-8668-4ab2-b174-49e0e20505ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:28:00 compute-0 nova_compute[265391]: 2025-09-30 18:28:00.851 2 DEBUG nova.compute.manager [req-20cd32c6-1856-48d9-93c1-b61f3c490592 req-41ad2f1e-4992-4563-b636-1b1b435d2143 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] No waiting events found dispatching network-vif-plugged-ac8e3da5-cb09-4223-89d5-318d077ea35e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:28:00 compute-0 nova_compute[265391]: 2025-09-30 18:28:00.851 2 WARNING nova.compute.manager [req-20cd32c6-1856-48d9-93c1-b61f3c490592 req-41ad2f1e-4992-4563-b636-1b1b435d2143 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Received unexpected event network-vif-plugged-ac8e3da5-cb09-4223-89d5-318d077ea35e for instance with vm_state active and task_state migrating.
Sep 30 18:28:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:28:01 compute-0 openstack_network_exporter[279566]: ERROR   18:28:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:28:01 compute-0 openstack_network_exporter[279566]: ERROR   18:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:28:01 compute-0 openstack_network_exporter[279566]: ERROR   18:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:28:01 compute-0 openstack_network_exporter[279566]: ERROR   18:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:28:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:28:01 compute-0 openstack_network_exporter[279566]: ERROR   18:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:28:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:28:01 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:28:01 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_53] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03b400a3b0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:28:02 compute-0 ceph-mon[73755]: pgmap v1481: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.5 KiB/s rd, 9.2 KiB/s wr, 6 op/s
Sep 30 18:28:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:28:02 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_57] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:28:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:28:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:28:02.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:28:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1482: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.5 KiB/s rd, 9.2 KiB/s wr, 6 op/s
Sep 30 18:28:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:28:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:28:02.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:28:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:28:03 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_54] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:28:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:28:03.794Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:28:04 compute-0 nova_compute[265391]: 2025-09-30 18:28:04.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:04 compute-0 ceph-mon[73755]: pgmap v1482: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.5 KiB/s rd, 9.2 KiB/s wr, 6 op/s
Sep 30 18:28:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:28:04 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_58] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840033e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:28:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:28:04.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1483: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.6 KiB/s rd, 9.2 KiB/s wr, 6 op/s
Sep 30 18:28:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:28:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:28:04.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:28:05 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:28:05 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_53] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4000a70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:28:05 compute-0 nova_compute[265391]: 2025-09-30 18:28:05.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:28:06 compute-0 ceph-mon[73755]: pgmap v1483: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.6 KiB/s rd, 9.2 KiB/s wr, 6 op/s
Sep 30 18:28:06 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:28:06 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_57] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0390001090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:28:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:28:06.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1484: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.5 KiB/s rd, 9.2 KiB/s wr, 6 op/s
Sep 30 18:28:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:28:06.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:28:07.260Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:28:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:28:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:28:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:28:07 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_54] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003ab0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0022762929771439734 of space, bias 1.0, pg target 0.4552585954287947 quantized to 32 (current 32)
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:28:07
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', '.nfs', 'backups', 'default.rgw.control', 'images', 'volumes', '.rgw.root', '.mgr', 'default.rgw.log']
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:28:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:28:08 compute-0 nova_compute[265391]: 2025-09-30 18:28:08.202 2 DEBUG oslo_concurrency.lockutils [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "d188c0fb-8668-4ab2-b174-49e0e20505ba-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:28:08 compute-0 nova_compute[265391]: 2025-09-30 18:28:08.202 2 DEBUG oslo_concurrency.lockutils [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "d188c0fb-8668-4ab2-b174-49e0e20505ba-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:28:08 compute-0 nova_compute[265391]: 2025-09-30 18:28:08.202 2 DEBUG oslo_concurrency.lockutils [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "d188c0fb-8668-4ab2-b174-49e0e20505ba-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:28:08 compute-0 ceph-mon[73755]: pgmap v1484: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.5 KiB/s rd, 9.2 KiB/s wr, 6 op/s
Sep 30 18:28:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:28:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:28:08 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_58] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03840033e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:28:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:28:08.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1485: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.5 KiB/s rd, 10 KiB/s wr, 7 op/s
Sep 30 18:28:08 compute-0 nova_compute[265391]: 2025-09-30 18:28:08.719 2 DEBUG oslo_concurrency.lockutils [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:28:08 compute-0 nova_compute[265391]: 2025-09-30 18:28:08.720 2 DEBUG oslo_concurrency.lockutils [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:28:08 compute-0 nova_compute[265391]: 2025-09-30 18:28:08.721 2 DEBUG oslo_concurrency.lockutils [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:28:08 compute-0 nova_compute[265391]: 2025-09-30 18:28:08.721 2 DEBUG nova.compute.resource_tracker [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:28:08 compute-0 nova_compute[265391]: 2025-09-30 18:28:08.722 2 DEBUG oslo_concurrency.processutils [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:28:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:28:08] "GET /metrics HTTP/1.1" 200 46646 "" "Prometheus/2.51.0"
Sep 30 18:28:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:28:08] "GET /metrics HTTP/1.1" 200 46646 "" "Prometheus/2.51.0"
Sep 30 18:28:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:28:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:28:08.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:28:09 compute-0 nova_compute[265391]: 2025-09-30 18:28:09.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:28:09 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1208046519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:28:09 compute-0 nova_compute[265391]: 2025-09-30 18:28:09.193 2 DEBUG oslo_concurrency.processutils [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:28:09 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1208046519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:28:09 compute-0 nova_compute[265391]: 2025-09-30 18:28:09.385 2 WARNING nova.virt.libvirt.driver [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:28:09 compute-0 nova_compute[265391]: 2025-09-30 18:28:09.386 2 DEBUG oslo_concurrency.processutils [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:28:09 compute-0 nova_compute[265391]: 2025-09-30 18:28:09.414 2 DEBUG oslo_concurrency.processutils [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.028s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:28:09 compute-0 nova_compute[265391]: 2025-09-30 18:28:09.415 2 DEBUG nova.compute.resource_tracker [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4359MB free_disk=39.901153564453125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:28:09 compute-0 nova_compute[265391]: 2025-09-30 18:28:09.415 2 DEBUG oslo_concurrency.lockutils [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:28:09 compute-0 nova_compute[265391]: 2025-09-30 18:28:09.416 2 DEBUG oslo_concurrency.lockutils [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:28:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:28:09 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_53] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4000a70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:28:09 compute-0 nova_compute[265391]: 2025-09-30 18:28:09.890 2 DEBUG nova.compute.manager [None req-213dbd59-e9a6-4e52-b0cd-fd7adca32fbb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Removing trait COMPUTE_STATUS_DISABLED from compute node resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 in placement. update_compute_provider_status /usr/lib/python3.12/site-packages/nova/compute/manager.py:631
Sep 30 18:28:09 compute-0 nova_compute[265391]: 2025-09-30 18:28:09.962 2 DEBUG nova.compute.provider_tree [None req-213dbd59-e9a6-4e52-b0cd-fd7adca32fbb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Updating resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 generation from 31 to 33 during operation: update_traits _update_generation /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:164
Sep 30 18:28:10 compute-0 ceph-mon[73755]: pgmap v1485: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 4.5 KiB/s rd, 10 KiB/s wr, 7 op/s
Sep 30 18:28:10 compute-0 nova_compute[265391]: 2025-09-30 18:28:10.436 2 DEBUG nova.compute.resource_tracker [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Migration for instance d188c0fb-8668-4ab2-b174-49e0e20505ba refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Sep 30 18:28:10 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:28:10 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_54] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f03a4000a70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Sep 30 18:28:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:28:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:28:10.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:28:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1486: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 2.4 KiB/s rd, 2.3 KiB/s wr, 3 op/s
Sep 30 18:28:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:28:10.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:10 compute-0 nova_compute[265391]: 2025-09-30 18:28:10.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:28:10 compute-0 nova_compute[265391]: 2025-09-30 18:28:10.942 2 DEBUG nova.compute.resource_tracker [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1596
Sep 30 18:28:10 compute-0 nova_compute[265391]: 2025-09-30 18:28:10.980 2 DEBUG nova.compute.resource_tracker [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Migration ff4515a6-7f9a-40de-9516-7071910d467b is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Sep 30 18:28:10 compute-0 nova_compute[265391]: 2025-09-30 18:28:10.980 2 DEBUG nova.compute.resource_tracker [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:28:10 compute-0 nova_compute[265391]: 2025-09-30 18:28:10.981 2 DEBUG nova.compute.resource_tracker [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:28:09 up  1:31,  0 user,  load average: 0.28, 0.56, 0.73\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:28:11 compute-0 nova_compute[265391]: 2025-09-30 18:28:11.026 2 DEBUG oslo_concurrency.processutils [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:28:11 compute-0 kernel: ganesha.nfsd[327432]: segfault at 50 ip 00007f0462dbc32e sp 00007f04357f9210 error 4 in libntirpc.so.5.8[7f0462da1000+2c000] likely on CPU 7 (core 0, socket 7)
Sep 30 18:28:11 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Sep 30 18:28:11 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[310061]: 30/09/2025 18:28:11 : epoch 68dc1e3e : compute-0 : ganesha.nfsd-2[svc_54] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0398003ab0 fd 47 proxy ignored for local
Sep 30 18:28:11 compute-0 systemd[1]: Started Process Core Dump (PID 327506/UID 0).
Sep 30 18:28:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:28:11 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1707156152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:28:11 compute-0 podman[327488]: 2025-09-30 18:28:11.553282809 +0000 UTC m=+0.077776512 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Sep 30 18:28:11 compute-0 nova_compute[265391]: 2025-09-30 18:28:11.601 2 DEBUG oslo_concurrency.processutils [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:28:11 compute-0 nova_compute[265391]: 2025-09-30 18:28:11.607 2 DEBUG nova.compute.provider_tree [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:28:11 compute-0 podman[327487]: 2025-09-30 18:28:11.610279231 +0000 UTC m=+0.135861212 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS)
Sep 30 18:28:12 compute-0 nova_compute[265391]: 2025-09-30 18:28:12.115 2 DEBUG nova.scheduler.client.report [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
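The inventory reported above carries, per resource class, a total, a reserved amount and an allocation ratio. As an illustrative sketch, assuming the usual Placement convention of capacity = (total - reserved) * allocation_ratio, the schedulable capacity implied by those numbers works out as follows.

# Illustrative sketch: effective capacity implied by the inventory logged above,
# assuming capacity = (total - reserved) * allocation_ratio per resource class.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 39,   "reserved": 1,   "allocation_ratio": 0.9},
}

def effective_capacity(inv: dict) -> dict:
    """Return the schedulable amount per resource class."""
    return {
        rc: int((v["total"] - v["reserved"]) * v["allocation_ratio"])
        for rc, v in inv.items()
    }

print(effective_capacity(inventory))
# {'VCPU': 32, 'MEMORY_MB': 7167, 'DISK_GB': 34}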
Sep 30 18:28:12 compute-0 ceph-mon[73755]: pgmap v1486: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 2.4 KiB/s rd, 2.3 KiB/s wr, 3 op/s
Sep 30 18:28:12 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1707156152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:28:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:28:12.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1487: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 2.3 KiB/s wr, 0 op/s
Sep 30 18:28:12 compute-0 nova_compute[265391]: 2025-09-30 18:28:12.632 2 DEBUG nova.compute.resource_tracker [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:28:12 compute-0 nova_compute[265391]: 2025-09-30 18:28:12.632 2 DEBUG oslo_concurrency.lockutils [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.216s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:28:12 compute-0 nova_compute[265391]: 2025-09-30 18:28:12.649 2 INFO nova.compute.manager [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Sep 30 18:28:12 compute-0 systemd-coredump[327516]: Process 310065 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 103:
                                                    #0  0x00007f0462dbc32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
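For the ganesha.nfsd segfault recorded by the kernel at 18:28:11, the faulting instruction pointer and the base and size of the libntirpc.so.5.8 mapping are all present in that line, so a mapping-relative offset can be computed for later symbolization. This is an illustrative calculation only; the "+ 0x2232e" in the systemd-coredump trace is taken relative to the ELF load address and need not equal the mapping-relative value.

# Illustrative sketch: values copied from the kernel line
# "segfault at 50 ip 00007f0462dbc32e ... in libntirpc.so.5.8[7f0462da1000+2c000]".
ip = 0x7F0462DBC32E          # faulting instruction pointer
map_base = 0x7F0462DA1000    # start of the libntirpc.so.5.8 mapping
map_size = 0x2C000           # size of that mapping

offset = ip - map_base
assert 0 <= offset < map_size
print(f"fault at libntirpc.so.5.8 mapping offset {offset:#x}")   # 0x1b32e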
Sep 30 18:28:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:28:12.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:12 compute-0 systemd[1]: systemd-coredump@15-327506-0.service: Deactivated successfully.
Sep 30 18:28:12 compute-0 systemd[1]: systemd-coredump@15-327506-0.service: Consumed 1.251s CPU time.
Sep 30 18:28:12 compute-0 podman[327546]: 2025-09-30 18:28:12.920720499 +0000 UTC m=+0.031610363 container died cc4f545686f6a742167a6aa20553cb06d2dd8d75a807947df3eaf52b21deff1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Sep 30 18:28:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-6fbadb344fc5678ed011abf7011f737b84ba3cc45182d1e6e258b4b84c49bd95-merged.mount: Deactivated successfully.
Sep 30 18:28:12 compute-0 podman[327546]: 2025-09-30 18:28:12.972364391 +0000 UTC m=+0.083254195 container remove cc4f545686f6a742167a6aa20553cb06d2dd8d75a807947df3eaf52b21deff1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Sep 30 18:28:12 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Main process exited, code=exited, status=139/n/a
Sep 30 18:28:13 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Failed with result 'exit-code'.
Sep 30 18:28:13 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Consumed 2.790s CPU time.
Sep 30 18:28:13 compute-0 nova_compute[265391]: 2025-09-30 18:28:13.745 2 INFO nova.scheduler.client.report [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Deleted allocation for migration ff4515a6-7f9a-40de-9516-7071910d467b
Sep 30 18:28:13 compute-0 nova_compute[265391]: 2025-09-30 18:28:13.746 2 DEBUG nova.virt.libvirt.driver [None req-5277a133-1ae2-45fd-93c2-84ac17b4e412 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: d188c0fb-8668-4ab2-b174-49e0e20505ba] Live migration monitoring is all done _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11566
Sep 30 18:28:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:28:13.797Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:28:14 compute-0 nova_compute[265391]: 2025-09-30 18:28:14.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:14 compute-0 ceph-mon[73755]: pgmap v1487: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 2.3 KiB/s wr, 0 op/s
Sep 30 18:28:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:28:14.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1488: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 2.3 KiB/s wr, 0 op/s
Sep 30 18:28:14 compute-0 podman[327592]: 2025-09-30 18:28:14.555180369 +0000 UTC m=+0.085513363 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Sep 30 18:28:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:28:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:28:14.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:28:14 compute-0 sudo[327612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:28:14 compute-0 sudo[327612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:28:14 compute-0 sudo[327612]: pam_unix(sudo:session): session closed for user root
Sep 30 18:28:15 compute-0 nova_compute[265391]: 2025-09-30 18:28:15.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:28:15 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Sep 30 18:28:15 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:28:15.936144) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 18:28:15 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Sep 30 18:28:15 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256895936178, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 839, "num_deletes": 252, "total_data_size": 1232592, "memory_usage": 1251800, "flush_reason": "Manual Compaction"}
Sep 30 18:28:15 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Sep 30 18:28:15 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256895943585, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 787897, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38119, "largest_seqno": 38957, "table_properties": {"data_size": 784448, "index_size": 1229, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9568, "raw_average_key_size": 20, "raw_value_size": 776960, "raw_average_value_size": 1692, "num_data_blocks": 54, "num_entries": 459, "num_filter_entries": 459, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759256827, "oldest_key_time": 1759256827, "file_creation_time": 1759256895, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:28:15 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 7496 microseconds, and 2682 cpu microseconds.
Sep 30 18:28:15 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:28:15 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:28:15.943640) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 787897 bytes OK
Sep 30 18:28:15 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:28:15.943659) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Sep 30 18:28:15 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:28:15.946860) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Sep 30 18:28:15 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:28:15.946877) EVENT_LOG_v1 {"time_micros": 1759256895946872, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 18:28:15 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:28:15.946897) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 18:28:15 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 1228471, prev total WAL file size 1228471, number of live WAL files 2.
Sep 30 18:28:15 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:28:15 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:28:15.947644) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323532' seq:72057594037927935, type:22 .. '6D6772737461740031353035' seq:0, type:0; will stop at (end)
Sep 30 18:28:15 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 18:28:15 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(769KB)], [83(13MB)]
Sep 30 18:28:15 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256895947731, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 14613654, "oldest_snapshot_seqno": -1}
Sep 30 18:28:16 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6521 keys, 11169027 bytes, temperature: kUnknown
Sep 30 18:28:16 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256896030212, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 11169027, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11129495, "index_size": 22142, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 168708, "raw_average_key_size": 25, "raw_value_size": 11016278, "raw_average_value_size": 1689, "num_data_blocks": 880, "num_entries": 6521, "num_filter_entries": 6521, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759256895, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:28:16 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:28:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:28:16.030946) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 11169027 bytes
Sep 30 18:28:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:28:16.033481) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 176.1 rd, 134.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 13.2 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(32.7) write-amplify(14.2) OK, records in: 7011, records dropped: 490 output_compression: NoCompression
Sep 30 18:28:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:28:16.033506) EVENT_LOG_v1 {"time_micros": 1759256896033494, "job": 48, "event": "compaction_finished", "compaction_time_micros": 82976, "compaction_time_cpu_micros": 53787, "output_level": 6, "num_output_files": 1, "total_output_size": 11169027, "num_input_records": 7011, "num_output_records": 6521, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
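The amplification figures in the compaction summary two lines up can be re-derived from the byte counts logged for job 48, assuming the conventional definitions (bytes written, and total bytes read plus written, each divided by the bytes read from the start level). A rough check, for illustration only:

# Illustrative check using the byte counts logged for job 48 above.
l0_in    = 787_897        # table #85, read from L0 (the start level)
total_in = 14_613_654     # "input_data_size" from the compaction_started event
out      = 11_169_027     # table #86, written to L6

write_amplify      = out / l0_in                # ~14.2
read_write_amplify = (total_in + out) / l0_in   # ~32.7
print(f"write-amplify ~{write_amplify:.1f}, read-write-amplify ~{read_write_amplify:.1f}")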
Sep 30 18:28:16 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:28:16 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256896034068, "job": 48, "event": "table_file_deletion", "file_number": 85}
Sep 30 18:28:16 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:28:16 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759256896037965, "job": 48, "event": "table_file_deletion", "file_number": 83}
Sep 30 18:28:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:28:15.947469) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:28:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:28:16.038056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:28:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:28:16.038064) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:28:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:28:16.038065) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:28:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:28:16.038067) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:28:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:28:16.038068) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:28:16 compute-0 ceph-mon[73755]: pgmap v1488: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 170 B/s rd, 2.3 KiB/s wr, 0 op/s
Sep 30 18:28:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:28:16.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1489: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 2.3 KiB/s wr, 0 op/s
Sep 30 18:28:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:28:16.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:28:17.262Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:28:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [WARNING] 272/182817 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 18:28:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha[98063]: [ALERT] 272/182817 (4) : backend 'backend' has no server available!
Sep 30 18:28:18 compute-0 ceph-mon[73755]: pgmap v1489: 353 pgs: 353 active+clean; 200 MiB data, 336 MiB used, 40 GiB / 40 GiB avail; 85 B/s rd, 2.3 KiB/s wr, 0 op/s
Sep 30 18:28:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:28:18.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1490: 353 pgs: 353 active+clean; 151 MiB data, 313 MiB used, 40 GiB / 40 GiB avail; 12 KiB/s rd, 3.5 KiB/s wr, 19 op/s
Sep 30 18:28:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:28:18] "GET /metrics HTTP/1.1" 200 46646 "" "Prometheus/2.51.0"
Sep 30 18:28:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:28:18] "GET /metrics HTTP/1.1" 200 46646 "" "Prometheus/2.51.0"
Sep 30 18:28:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:28:18.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:19 compute-0 nova_compute[265391]: 2025-09-30 18:28:19.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:19 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/41704762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:28:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:28:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:28:20.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:28:20 compute-0 ceph-mon[73755]: pgmap v1490: 353 pgs: 353 active+clean; 151 MiB data, 313 MiB used, 40 GiB / 40 GiB avail; 12 KiB/s rd, 3.5 KiB/s wr, 19 op/s
Sep 30 18:28:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1491: 353 pgs: 353 active+clean; 121 MiB data, 293 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 2.5 KiB/s wr, 28 op/s
Sep 30 18:28:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:28:20.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:20 compute-0 nova_compute[265391]: 2025-09-30 18:28:20.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:28:21 compute-0 ceph-mon[73755]: pgmap v1491: 353 pgs: 353 active+clean; 121 MiB data, 293 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 2.5 KiB/s wr, 28 op/s
Sep 30 18:28:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:28:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:28:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:28:22.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1492: 353 pgs: 353 active+clean; 121 MiB data, 293 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:28:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:28:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:28:22.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:23 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Scheduled restart job, restart counter is at 16.
Sep 30 18:28:23 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 18:28:23 compute-0 systemd[1]: ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b@nfs.cephfs.1.0.compute-0.syzvbh.service: Consumed 2.790s CPU time.
Sep 30 18:28:23 compute-0 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b...
Sep 30 18:28:23 compute-0 podman[327646]: 2025-09-30 18:28:23.55630816 +0000 UTC m=+0.083193047 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:28:23 compute-0 podman[327647]: 2025-09-30 18:28:23.561054243 +0000 UTC m=+0.086046961 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=iscsid, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, tcib_managed=true)
Sep 30 18:28:23 compute-0 podman[327649]: 2025-09-30 18:28:23.566687719 +0000 UTC m=+0.097519059 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, release=1755695350, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers)
Sep 30 18:28:23 compute-0 ceph-mon[73755]: pgmap v1492: 353 pgs: 353 active+clean; 121 MiB data, 293 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:28:23 compute-0 podman[327746]: 2025-09-30 18:28:23.691433014 +0000 UTC m=+0.046678392 container create 96c1a4d1476c3fe56b2b4855037bb3aa81f60f8974668b12bc71055b46c71430 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Sep 30 18:28:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba557ac151f62990da84aa7bc581bb19173d668e816b4aee4fd2761bacf7dae/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Sep 30 18:28:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba557ac151f62990da84aa7bc581bb19173d668e816b4aee4fd2761bacf7dae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:28:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba557ac151f62990da84aa7bc581bb19173d668e816b4aee4fd2761bacf7dae/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:28:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba557ac151f62990da84aa7bc581bb19173d668e816b4aee4fd2761bacf7dae/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-0.syzvbh-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:28:23 compute-0 podman[327746]: 2025-09-30 18:28:23.757150387 +0000 UTC m=+0.112395785 container init 96c1a4d1476c3fe56b2b4855037bb3aa81f60f8974668b12bc71055b46c71430 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325)
Sep 30 18:28:23 compute-0 podman[327746]: 2025-09-30 18:28:23.668400056 +0000 UTC m=+0.023645454 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:28:23 compute-0 podman[327746]: 2025-09-30 18:28:23.764417146 +0000 UTC m=+0.119662524 container start 96c1a4d1476c3fe56b2b4855037bb3aa81f60f8974668b12bc71055b46c71430 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Sep 30 18:28:23 compute-0 bash[327746]: 96c1a4d1476c3fe56b2b4855037bb3aa81f60f8974668b12bc71055b46c71430
Sep 30 18:28:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:28:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Sep 30 18:28:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:28:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Sep 30 18:28:23 compute-0 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-0.syzvbh for 63d32c6a-fa18-54ed-8711-9a3915cc367b.
Sep 30 18:28:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:28:23.798Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:28:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:28:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Sep 30 18:28:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:28:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Sep 30 18:28:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:28:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Sep 30 18:28:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:28:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Sep 30 18:28:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:28:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Sep 30 18:28:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:28:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:28:24 compute-0 nova_compute[265391]: 2025-09-30 18:28:24.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:28:24.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1493: 353 pgs: 353 active+clean; 121 MiB data, 293 MiB used, 40 GiB / 40 GiB avail; 24 KiB/s rd, 1.2 KiB/s wr, 34 op/s
Sep 30 18:28:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:28:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:28:24.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:28:25 compute-0 ceph-mon[73755]: pgmap v1493: 353 pgs: 353 active+clean; 121 MiB data, 293 MiB used, 40 GiB / 40 GiB avail; 24 KiB/s rd, 1.2 KiB/s wr, 34 op/s
Sep 30 18:28:25 compute-0 nova_compute[265391]: 2025-09-30 18:28:25.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:28:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:28:26.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1494: 353 pgs: 353 active+clean; 121 MiB data, 293 MiB used, 40 GiB / 40 GiB avail; 24 KiB/s rd, 1.2 KiB/s wr, 34 op/s
Sep 30 18:28:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.002000052s ======
Sep 30 18:28:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:28:26.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Sep 30 18:28:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:28:27.263Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:28:27 compute-0 ceph-mon[73755]: pgmap v1494: 353 pgs: 353 active+clean; 121 MiB data, 293 MiB used, 40 GiB / 40 GiB avail; 24 KiB/s rd, 1.2 KiB/s wr, 34 op/s
Sep 30 18:28:27 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 18:28:27 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 18K writes, 66K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 18K writes, 5870 syncs, 3.10 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3854 writes, 14K keys, 3854 commit groups, 1.0 writes per commit group, ingest: 16.96 MB, 0.03 MB/s
                                           Interval WAL: 3854 writes, 1517 syncs, 2.54 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Sep 30 18:28:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:28:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:28:28.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:28:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1495: 353 pgs: 353 active+clean; 84 MiB data, 271 MiB used, 40 GiB / 40 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 41 op/s
Sep 30 18:28:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:28:28] "GET /metrics HTTP/1.1" 200 46648 "" "Prometheus/2.51.0"
Sep 30 18:28:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:28:28] "GET /metrics HTTP/1.1" 200 46648 "" "Prometheus/2.51.0"
Sep 30 18:28:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:28:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:28:28.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:28:28 compute-0 sudo[327809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:28:28 compute-0 sudo[327809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:28:28 compute-0 sudo[327809]: pam_unix(sudo:session): session closed for user root
Sep 30 18:28:28 compute-0 sudo[327834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Sep 30 18:28:28 compute-0 sudo[327834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:28:29 compute-0 nova_compute[265391]: 2025-09-30 18:28:29.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:29 compute-0 sudo[327834]: pam_unix(sudo:session): session closed for user root
Sep 30 18:28:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:28:29 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:28:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:28:29 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:28:29 compute-0 sudo[327880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:28:29 compute-0 sudo[327880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:28:29 compute-0 sudo[327880]: pam_unix(sudo:session): session closed for user root
Sep 30 18:28:29 compute-0 sudo[327905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:28:29 compute-0 sudo[327905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:28:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 18:28:29 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:28:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 18:28:29 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:28:29 compute-0 podman[276673]: time="2025-09-30T18:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:28:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:28:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10285 "" "Go-http-client/1.1"
Sep 30 18:28:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:28:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:28:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:28:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:28:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:28:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:28:29 compute-0 sudo[327905]: pam_unix(sudo:session): session closed for user root
Sep 30 18:28:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:28:30 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:28:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:28:30 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:28:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:28:30 compute-0 ceph-mon[73755]: pgmap v1495: 353 pgs: 353 active+clean; 84 MiB data, 271 MiB used, 40 GiB / 40 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 41 op/s
Sep 30 18:28:30 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:28:30 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:28:30 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:28:30 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:28:30 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:28:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:28:30 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:28:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:28:30 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:28:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:28:30 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:28:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:28:30 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:28:30 compute-0 sudo[327965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:28:30 compute-0 sudo[327965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:28:30 compute-0 sudo[327965]: pam_unix(sudo:session): session closed for user root
Sep 30 18:28:30 compute-0 sudo[327990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:28:30 compute-0 sudo[327990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:28:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:28:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:28:30.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:28:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1496: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 40 GiB / 40 GiB avail; 26 KiB/s rd, 1.4 KiB/s wr, 37 op/s
Sep 30 18:28:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:28:30.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:30 compute-0 nova_compute[265391]: 2025-09-30 18:28:30.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:30 compute-0 podman[328057]: 2025-09-30 18:28:30.919904037 +0000 UTC m=+0.052340388 container create c93ae7cf9f0cb686428d81516e2ac50fe548c9ad65fe3c42b5ca6c6200b713d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_spence, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Sep 30 18:28:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:28:30 compute-0 systemd[1]: Started libpod-conmon-c93ae7cf9f0cb686428d81516e2ac50fe548c9ad65fe3c42b5ca6c6200b713d5.scope.
Sep 30 18:28:30 compute-0 podman[328057]: 2025-09-30 18:28:30.891798138 +0000 UTC m=+0.024234409 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:28:30 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:28:31 compute-0 podman[328057]: 2025-09-30 18:28:31.016116781 +0000 UTC m=+0.148553052 container init c93ae7cf9f0cb686428d81516e2ac50fe548c9ad65fe3c42b5ca6c6200b713d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_spence, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Sep 30 18:28:31 compute-0 podman[328057]: 2025-09-30 18:28:31.023913053 +0000 UTC m=+0.156349304 container start c93ae7cf9f0cb686428d81516e2ac50fe548c9ad65fe3c42b5ca6c6200b713d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_spence, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Sep 30 18:28:31 compute-0 podman[328057]: 2025-09-30 18:28:31.027540137 +0000 UTC m=+0.159976408 container attach c93ae7cf9f0cb686428d81516e2ac50fe548c9ad65fe3c42b5ca6c6200b713d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_spence, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:28:31 compute-0 systemd[1]: libpod-c93ae7cf9f0cb686428d81516e2ac50fe548c9ad65fe3c42b5ca6c6200b713d5.scope: Deactivated successfully.
Sep 30 18:28:31 compute-0 dazzling_spence[328073]: 167 167
Sep 30 18:28:31 compute-0 conmon[328073]: conmon c93ae7cf9f0cb686428d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c93ae7cf9f0cb686428d81516e2ac50fe548c9ad65fe3c42b5ca6c6200b713d5.scope/container/memory.events
Sep 30 18:28:31 compute-0 podman[328057]: 2025-09-30 18:28:31.033894532 +0000 UTC m=+0.166330843 container died c93ae7cf9f0cb686428d81516e2ac50fe548c9ad65fe3c42b5ca6c6200b713d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:28:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca1d79bb77189d9bb3758222665fa6721bb163b838fb3836c451e33e0286157a-merged.mount: Deactivated successfully.
Sep 30 18:28:31 compute-0 podman[328057]: 2025-09-30 18:28:31.067063122 +0000 UTC m=+0.199499373 container remove c93ae7cf9f0cb686428d81516e2ac50fe548c9ad65fe3c42b5ca6c6200b713d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:28:31 compute-0 systemd[1]: libpod-conmon-c93ae7cf9f0cb686428d81516e2ac50fe548c9ad65fe3c42b5ca6c6200b713d5.scope: Deactivated successfully.
Sep 30 18:28:31 compute-0 podman[328097]: 2025-09-30 18:28:31.284216972 +0000 UTC m=+0.046647100 container create e7be2c7f633311671a3f2899c4fc56e9af8cb94a9cb05bd1c38c9f0520d1189a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_chaplygin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Sep 30 18:28:31 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:28:31 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:28:31 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:28:31 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:28:31 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:28:31 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:28:31 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:28:31 compute-0 systemd[1]: Started libpod-conmon-e7be2c7f633311671a3f2899c4fc56e9af8cb94a9cb05bd1c38c9f0520d1189a.scope.
Sep 30 18:28:31 compute-0 podman[328097]: 2025-09-30 18:28:31.267686923 +0000 UTC m=+0.030117051 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:28:31 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd0c18291b916b47652d952f9841b4faa774ccdf01f1a71b4486446af2f4fba9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd0c18291b916b47652d952f9841b4faa774ccdf01f1a71b4486446af2f4fba9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd0c18291b916b47652d952f9841b4faa774ccdf01f1a71b4486446af2f4fba9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd0c18291b916b47652d952f9841b4faa774ccdf01f1a71b4486446af2f4fba9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd0c18291b916b47652d952f9841b4faa774ccdf01f1a71b4486446af2f4fba9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:28:31 compute-0 podman[328097]: 2025-09-30 18:28:31.387061218 +0000 UTC m=+0.149491346 container init e7be2c7f633311671a3f2899c4fc56e9af8cb94a9cb05bd1c38c9f0520d1189a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_chaplygin, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Sep 30 18:28:31 compute-0 podman[328097]: 2025-09-30 18:28:31.397575241 +0000 UTC m=+0.160005369 container start e7be2c7f633311671a3f2899c4fc56e9af8cb94a9cb05bd1c38c9f0520d1189a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_chaplygin, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid)
Sep 30 18:28:31 compute-0 podman[328097]: 2025-09-30 18:28:31.400462586 +0000 UTC m=+0.162892714 container attach e7be2c7f633311671a3f2899c4fc56e9af8cb94a9cb05bd1c38c9f0520d1189a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_chaplygin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True)
Sep 30 18:28:31 compute-0 openstack_network_exporter[279566]: ERROR   18:28:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:28:31 compute-0 openstack_network_exporter[279566]: ERROR   18:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:28:31 compute-0 openstack_network_exporter[279566]: ERROR   18:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:28:31 compute-0 openstack_network_exporter[279566]: ERROR   18:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:28:31 compute-0 openstack_network_exporter[279566]: ERROR   18:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:28:31 compute-0 recursing_chaplygin[328113]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:28:31 compute-0 recursing_chaplygin[328113]: --> All data devices are unavailable
Sep 30 18:28:31 compute-0 systemd[1]: libpod-e7be2c7f633311671a3f2899c4fc56e9af8cb94a9cb05bd1c38c9f0520d1189a.scope: Deactivated successfully.
Sep 30 18:28:31 compute-0 podman[328097]: 2025-09-30 18:28:31.829063118 +0000 UTC m=+0.591493306 container died e7be2c7f633311671a3f2899c4fc56e9af8cb94a9cb05bd1c38c9f0520d1189a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_chaplygin, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:28:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd0c18291b916b47652d952f9841b4faa774ccdf01f1a71b4486446af2f4fba9-merged.mount: Deactivated successfully.
Sep 30 18:28:31 compute-0 podman[328097]: 2025-09-30 18:28:31.887083182 +0000 UTC m=+0.649513320 container remove e7be2c7f633311671a3f2899c4fc56e9af8cb94a9cb05bd1c38c9f0520d1189a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_chaplygin, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Sep 30 18:28:31 compute-0 systemd[1]: libpod-conmon-e7be2c7f633311671a3f2899c4fc56e9af8cb94a9cb05bd1c38c9f0520d1189a.scope: Deactivated successfully.
Sep 30 18:28:31 compute-0 sudo[327990]: pam_unix(sudo:session): session closed for user root
Sep 30 18:28:32 compute-0 sudo[328141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:28:32 compute-0 sudo[328141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:28:32 compute-0 sudo[328141]: pam_unix(sudo:session): session closed for user root
Sep 30 18:28:32 compute-0 sudo[328166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:28:32 compute-0 sudo[328166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:28:32 compute-0 nova_compute[265391]: 2025-09-30 18:28:32.126 2 DEBUG oslo_concurrency.lockutils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "a39db459-001a-467e-8721-1dca3120f5ee" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:28:32 compute-0 nova_compute[265391]: 2025-09-30 18:28:32.127 2 DEBUG oslo_concurrency.lockutils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "a39db459-001a-467e-8721-1dca3120f5ee" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:28:32 compute-0 ceph-mon[73755]: pgmap v1496: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 40 GiB / 40 GiB avail; 26 KiB/s rd, 1.4 KiB/s wr, 37 op/s
Sep 30 18:28:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:28:32.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1497: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 28 op/s
Sep 30 18:28:32 compute-0 podman[328233]: 2025-09-30 18:28:32.593539707 +0000 UTC m=+0.060341825 container create 8e99bbca2eac3b609639a94dbcb2dabc5b8454f2bcaef2949e06d4464746ca1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_pike, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:28:32 compute-0 sshd-session[327644]: error: kex_exchange_identification: read: Connection timed out
Sep 30 18:28:32 compute-0 sshd-session[327644]: banner exchange: Connection from 115.190.39.222 port 34836: Connection timed out
Sep 30 18:28:32 compute-0 nova_compute[265391]: 2025-09-30 18:28:32.633 2 DEBUG nova.compute.manager [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Sep 30 18:28:32 compute-0 systemd[1]: Started libpod-conmon-8e99bbca2eac3b609639a94dbcb2dabc5b8454f2bcaef2949e06d4464746ca1b.scope.
Sep 30 18:28:32 compute-0 podman[328233]: 2025-09-30 18:28:32.564432463 +0000 UTC m=+0.031234651 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:28:32 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:28:32 compute-0 podman[328233]: 2025-09-30 18:28:32.702279227 +0000 UTC m=+0.169081365 container init 8e99bbca2eac3b609639a94dbcb2dabc5b8454f2bcaef2949e06d4464746ca1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Sep 30 18:28:32 compute-0 podman[328233]: 2025-09-30 18:28:32.714437302 +0000 UTC m=+0.181239410 container start 8e99bbca2eac3b609639a94dbcb2dabc5b8454f2bcaef2949e06d4464746ca1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Sep 30 18:28:32 compute-0 mystifying_pike[328249]: 167 167
Sep 30 18:28:32 compute-0 systemd[1]: libpod-8e99bbca2eac3b609639a94dbcb2dabc5b8454f2bcaef2949e06d4464746ca1b.scope: Deactivated successfully.
Sep 30 18:28:32 compute-0 conmon[328249]: conmon 8e99bbca2eac3b609639 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8e99bbca2eac3b609639a94dbcb2dabc5b8454f2bcaef2949e06d4464746ca1b.scope/container/memory.events
Sep 30 18:28:32 compute-0 podman[328233]: 2025-09-30 18:28:32.721643749 +0000 UTC m=+0.188445887 container attach 8e99bbca2eac3b609639a94dbcb2dabc5b8454f2bcaef2949e06d4464746ca1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:28:32 compute-0 podman[328233]: 2025-09-30 18:28:32.7224764 +0000 UTC m=+0.189278528 container died 8e99bbca2eac3b609639a94dbcb2dabc5b8454f2bcaef2949e06d4464746ca1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_pike, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 18:28:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c0b075fdf9a5ace90c8bc767fb90974ed364fcaa0db107e6b9cf7b66a993f39-merged.mount: Deactivated successfully.
Sep 30 18:28:32 compute-0 podman[328233]: 2025-09-30 18:28:32.779493288 +0000 UTC m=+0.246295396 container remove 8e99bbca2eac3b609639a94dbcb2dabc5b8454f2bcaef2949e06d4464746ca1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_pike, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 18:28:32 compute-0 systemd[1]: libpod-conmon-8e99bbca2eac3b609639a94dbcb2dabc5b8454f2bcaef2949e06d4464746ca1b.scope: Deactivated successfully.
Sep 30 18:28:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:28:32.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:32 compute-0 podman[328274]: 2025-09-30 18:28:32.975858509 +0000 UTC m=+0.058820536 container create 5d39413848cc96be84a9fe3b28a6e2ba88b6e205db7f5f8f4c98c00af6ee9611 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_jones, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:28:33 compute-0 systemd[1]: Started libpod-conmon-5d39413848cc96be84a9fe3b28a6e2ba88b6e205db7f5f8f4c98c00af6ee9611.scope.
Sep 30 18:28:33 compute-0 podman[328274]: 2025-09-30 18:28:32.942936496 +0000 UTC m=+0.025898563 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:28:33 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:28:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a658a8bf6ff0b27ef32df07b6a5205d4d96d9905814ff761ca8d41e6f99561/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:28:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a658a8bf6ff0b27ef32df07b6a5205d4d96d9905814ff761ca8d41e6f99561/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:28:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a658a8bf6ff0b27ef32df07b6a5205d4d96d9905814ff761ca8d41e6f99561/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:28:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a658a8bf6ff0b27ef32df07b6a5205d4d96d9905814ff761ca8d41e6f99561/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:28:33 compute-0 podman[328274]: 2025-09-30 18:28:33.079334122 +0000 UTC m=+0.162296179 container init 5d39413848cc96be84a9fe3b28a6e2ba88b6e205db7f5f8f4c98c00af6ee9611 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:28:33 compute-0 podman[328274]: 2025-09-30 18:28:33.084832875 +0000 UTC m=+0.167794892 container start 5d39413848cc96be84a9fe3b28a6e2ba88b6e205db7f5f8f4c98c00af6ee9611 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_jones, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 18:28:33 compute-0 podman[328274]: 2025-09-30 18:28:33.093629983 +0000 UTC m=+0.176592030 container attach 5d39413848cc96be84a9fe3b28a6e2ba88b6e205db7f5f8f4c98c00af6ee9611 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Sep 30 18:28:33 compute-0 nova_compute[265391]: 2025-09-30 18:28:33.193 2 DEBUG oslo_concurrency.lockutils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:28:33 compute-0 nova_compute[265391]: 2025-09-30 18:28:33.196 2 DEBUG oslo_concurrency.lockutils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:28:33 compute-0 nova_compute[265391]: 2025-09-30 18:28:33.205 2 DEBUG nova.virt.hardware [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Sep 30 18:28:33 compute-0 nova_compute[265391]: 2025-09-30 18:28:33.206 2 INFO nova.compute.claims [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Claim successful on node compute-0.ctlplane.example.com
Sep 30 18:28:33 compute-0 elegant_jones[328290]: {
Sep 30 18:28:33 compute-0 elegant_jones[328290]:     "0": [
Sep 30 18:28:33 compute-0 elegant_jones[328290]:         {
Sep 30 18:28:33 compute-0 elegant_jones[328290]:             "devices": [
Sep 30 18:28:33 compute-0 elegant_jones[328290]:                 "/dev/loop3"
Sep 30 18:28:33 compute-0 elegant_jones[328290]:             ],
Sep 30 18:28:33 compute-0 elegant_jones[328290]:             "lv_name": "ceph_lv0",
Sep 30 18:28:33 compute-0 elegant_jones[328290]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:28:33 compute-0 elegant_jones[328290]:             "lv_size": "21470642176",
Sep 30 18:28:33 compute-0 elegant_jones[328290]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:28:33 compute-0 elegant_jones[328290]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:28:33 compute-0 elegant_jones[328290]:             "name": "ceph_lv0",
Sep 30 18:28:33 compute-0 elegant_jones[328290]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:28:33 compute-0 elegant_jones[328290]:             "tags": {
Sep 30 18:28:33 compute-0 elegant_jones[328290]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:28:33 compute-0 elegant_jones[328290]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:28:33 compute-0 elegant_jones[328290]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:28:33 compute-0 elegant_jones[328290]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:28:33 compute-0 elegant_jones[328290]:                 "ceph.cluster_name": "ceph",
Sep 30 18:28:33 compute-0 elegant_jones[328290]:                 "ceph.crush_device_class": "",
Sep 30 18:28:33 compute-0 elegant_jones[328290]:                 "ceph.encrypted": "0",
Sep 30 18:28:33 compute-0 elegant_jones[328290]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:28:33 compute-0 elegant_jones[328290]:                 "ceph.osd_id": "0",
Sep 30 18:28:33 compute-0 elegant_jones[328290]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:28:33 compute-0 elegant_jones[328290]:                 "ceph.type": "block",
Sep 30 18:28:33 compute-0 elegant_jones[328290]:                 "ceph.vdo": "0",
Sep 30 18:28:33 compute-0 elegant_jones[328290]:                 "ceph.with_tpm": "0"
Sep 30 18:28:33 compute-0 elegant_jones[328290]:             },
Sep 30 18:28:33 compute-0 elegant_jones[328290]:             "type": "block",
Sep 30 18:28:33 compute-0 elegant_jones[328290]:             "vg_name": "ceph_vg0"
Sep 30 18:28:33 compute-0 elegant_jones[328290]:         }
Sep 30 18:28:33 compute-0 elegant_jones[328290]:     ]
Sep 30 18:28:33 compute-0 elegant_jones[328290]: }
Sep 30 18:28:33 compute-0 systemd[1]: libpod-5d39413848cc96be84a9fe3b28a6e2ba88b6e205db7f5f8f4c98c00af6ee9611.scope: Deactivated successfully.
Sep 30 18:28:33 compute-0 podman[328274]: 2025-09-30 18:28:33.3851679 +0000 UTC m=+0.468129937 container died 5d39413848cc96be84a9fe3b28a6e2ba88b6e205db7f5f8f4c98c00af6ee9611 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_jones, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:28:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7a658a8bf6ff0b27ef32df07b6a5205d4d96d9905814ff761ca8d41e6f99561-merged.mount: Deactivated successfully.
Sep 30 18:28:33 compute-0 podman[328274]: 2025-09-30 18:28:33.434718665 +0000 UTC m=+0.517680692 container remove 5d39413848cc96be84a9fe3b28a6e2ba88b6e205db7f5f8f4c98c00af6ee9611 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_jones, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:28:33 compute-0 systemd[1]: libpod-conmon-5d39413848cc96be84a9fe3b28a6e2ba88b6e205db7f5f8f4c98c00af6ee9611.scope: Deactivated successfully.
Sep 30 18:28:33 compute-0 sudo[328166]: pam_unix(sudo:session): session closed for user root
Sep 30 18:28:33 compute-0 sudo[328313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:28:33 compute-0 sudo[328313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:28:33 compute-0 sudo[328313]: pam_unix(sudo:session): session closed for user root
Sep 30 18:28:33 compute-0 sudo[328338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:28:33 compute-0 sudo[328338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:28:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:28:33.799Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:28:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:28:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:28:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:28:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:28:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:28:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:28:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:28:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:28:34 compute-0 podman[328402]: 2025-09-30 18:28:34.055475338 +0000 UTC m=+0.047740968 container create fe7d83437450d4cdcc42ac87831ee9906e2f9de316adfd1461eb7630f26fe462 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_solomon, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:28:34 compute-0 systemd[1]: Started libpod-conmon-fe7d83437450d4cdcc42ac87831ee9906e2f9de316adfd1461eb7630f26fe462.scope.
Sep 30 18:28:34 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:28:34 compute-0 podman[328402]: 2025-09-30 18:28:34.031297352 +0000 UTC m=+0.023563002 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:28:34 compute-0 podman[328402]: 2025-09-30 18:28:34.139974599 +0000 UTC m=+0.132240249 container init fe7d83437450d4cdcc42ac87831ee9906e2f9de316adfd1461eb7630f26fe462 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_solomon, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Sep 30 18:28:34 compute-0 podman[328402]: 2025-09-30 18:28:34.149498396 +0000 UTC m=+0.141764026 container start fe7d83437450d4cdcc42ac87831ee9906e2f9de316adfd1461eb7630f26fe462 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:28:34 compute-0 podman[328402]: 2025-09-30 18:28:34.153241673 +0000 UTC m=+0.145507333 container attach fe7d83437450d4cdcc42ac87831ee9906e2f9de316adfd1461eb7630f26fe462 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_solomon, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 18:28:34 compute-0 priceless_solomon[328418]: 167 167
Sep 30 18:28:34 compute-0 systemd[1]: libpod-fe7d83437450d4cdcc42ac87831ee9906e2f9de316adfd1461eb7630f26fe462.scope: Deactivated successfully.
Sep 30 18:28:34 compute-0 podman[328402]: 2025-09-30 18:28:34.154987108 +0000 UTC m=+0.147252738 container died fe7d83437450d4cdcc42ac87831ee9906e2f9de316adfd1461eb7630f26fe462 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_solomon, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Sep 30 18:28:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-88d936782b5e59c8c37744fa27afa1f3bbbc90836066a2fc27058ffab41fc73c-merged.mount: Deactivated successfully.
Sep 30 18:28:34 compute-0 podman[328402]: 2025-09-30 18:28:34.217685734 +0000 UTC m=+0.209951354 container remove fe7d83437450d4cdcc42ac87831ee9906e2f9de316adfd1461eb7630f26fe462 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_solomon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:28:34 compute-0 nova_compute[265391]: 2025-09-30 18:28:34.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:34 compute-0 systemd[1]: libpod-conmon-fe7d83437450d4cdcc42ac87831ee9906e2f9de316adfd1461eb7630f26fe462.scope: Deactivated successfully.
Sep 30 18:28:34 compute-0 nova_compute[265391]: 2025-09-30 18:28:34.267 2 DEBUG oslo_concurrency.processutils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:28:34 compute-0 ceph-mon[73755]: pgmap v1497: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 28 op/s
Sep 30 18:28:34 compute-0 podman[328443]: 2025-09-30 18:28:34.431032405 +0000 UTC m=+0.070699704 container create 1029cbac2261bc3034bd51cf03b0b1e3573cc57b31c11147528b521f5ae15d70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_hermann, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:28:34 compute-0 systemd[1]: Started libpod-conmon-1029cbac2261bc3034bd51cf03b0b1e3573cc57b31c11147528b521f5ae15d70.scope.
Sep 30 18:28:34 compute-0 podman[328443]: 2025-09-30 18:28:34.391923521 +0000 UTC m=+0.031590840 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:28:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1498: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 28 op/s
Sep 30 18:28:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:28:34.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:34 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1f309d5000ebf6ec49975e15e1305fd63d2cf8b08da1a31092eb2200b954f61/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1f309d5000ebf6ec49975e15e1305fd63d2cf8b08da1a31092eb2200b954f61/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1f309d5000ebf6ec49975e15e1305fd63d2cf8b08da1a31092eb2200b954f61/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1f309d5000ebf6ec49975e15e1305fd63d2cf8b08da1a31092eb2200b954f61/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:28:34 compute-0 podman[328443]: 2025-09-30 18:28:34.540030911 +0000 UTC m=+0.179698230 container init 1029cbac2261bc3034bd51cf03b0b1e3573cc57b31c11147528b521f5ae15d70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_hermann, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:28:34 compute-0 podman[328443]: 2025-09-30 18:28:34.547606927 +0000 UTC m=+0.187274216 container start 1029cbac2261bc3034bd51cf03b0b1e3573cc57b31c11147528b521f5ae15d70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:28:34 compute-0 podman[328443]: 2025-09-30 18:28:34.552691799 +0000 UTC m=+0.192359118 container attach 1029cbac2261bc3034bd51cf03b0b1e3573cc57b31c11147528b521f5ae15d70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_hermann, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:28:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:28:34 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1049937000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:28:34 compute-0 nova_compute[265391]: 2025-09-30 18:28:34.741 2 DEBUG oslo_concurrency.processutils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:28:34 compute-0 nova_compute[265391]: 2025-09-30 18:28:34.749 2 DEBUG nova.compute.provider_tree [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:28:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:28:34.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:35 compute-0 sudo[328541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:28:35 compute-0 sudo[328541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:28:35 compute-0 sudo[328541]: pam_unix(sudo:session): session closed for user root
Sep 30 18:28:35 compute-0 lvm[328579]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:28:35 compute-0 lvm[328579]: VG ceph_vg0 finished
Sep 30 18:28:35 compute-0 lvm[328583]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:28:35 compute-0 lvm[328583]: VG ceph_vg0 finished
Sep 30 18:28:35 compute-0 eager_hermann[328478]: {}
Sep 30 18:28:35 compute-0 systemd[1]: libpod-1029cbac2261bc3034bd51cf03b0b1e3573cc57b31c11147528b521f5ae15d70.scope: Deactivated successfully.
Sep 30 18:28:35 compute-0 systemd[1]: libpod-1029cbac2261bc3034bd51cf03b0b1e3573cc57b31c11147528b521f5ae15d70.scope: Consumed 1.131s CPU time.
Sep 30 18:28:35 compute-0 podman[328443]: 2025-09-30 18:28:35.253511039 +0000 UTC m=+0.893178388 container died 1029cbac2261bc3034bd51cf03b0b1e3573cc57b31c11147528b521f5ae15d70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_hermann, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Sep 30 18:28:35 compute-0 nova_compute[265391]: 2025-09-30 18:28:35.258 2 DEBUG nova.scheduler.client.report [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:28:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1f309d5000ebf6ec49975e15e1305fd63d2cf8b08da1a31092eb2200b954f61-merged.mount: Deactivated successfully.
Sep 30 18:28:35 compute-0 podman[328443]: 2025-09-30 18:28:35.363593273 +0000 UTC m=+1.003260572 container remove 1029cbac2261bc3034bd51cf03b0b1e3573cc57b31c11147528b521f5ae15d70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_hermann, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Sep 30 18:28:35 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1049937000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:28:35 compute-0 systemd[1]: libpod-conmon-1029cbac2261bc3034bd51cf03b0b1e3573cc57b31c11147528b521f5ae15d70.scope: Deactivated successfully.
Sep 30 18:28:35 compute-0 sudo[328338]: pam_unix(sudo:session): session closed for user root
Sep 30 18:28:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:28:35 compute-0 nova_compute[265391]: 2025-09-30 18:28:35.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:28:35 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:28:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:28:35 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:28:35 compute-0 sudo[328599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:28:35 compute-0 sudo[328599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:28:35 compute-0 sudo[328599]: pam_unix(sudo:session): session closed for user root
Sep 30 18:28:35 compute-0 nova_compute[265391]: 2025-09-30 18:28:35.768 2 DEBUG oslo_concurrency.lockutils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.573s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:28:35 compute-0 nova_compute[265391]: 2025-09-30 18:28:35.769 2 DEBUG nova.compute.manager [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Sep 30 18:28:35 compute-0 nova_compute[265391]: 2025-09-30 18:28:35.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:28:36 compute-0 nova_compute[265391]: 2025-09-30 18:28:36.305 2 DEBUG nova.compute.manager [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Sep 30 18:28:36 compute-0 nova_compute[265391]: 2025-09-30 18:28:36.306 2 DEBUG nova.network.neutron [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Sep 30 18:28:36 compute-0 nova_compute[265391]: 2025-09-30 18:28:36.306 2 WARNING neutronclient.v2_0.client [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:28:36 compute-0 nova_compute[265391]: 2025-09-30 18:28:36.307 2 WARNING neutronclient.v2_0.client [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:28:36 compute-0 ceph-mon[73755]: pgmap v1498: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 28 op/s
Sep 30 18:28:36 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:28:36 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:28:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:28:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3565038591' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:28:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:28:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3565038591' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:28:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1499: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 40 GiB / 40 GiB avail; 14 KiB/s rd, 1.4 KiB/s wr, 21 op/s
Sep 30 18:28:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:28:36.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:36 compute-0 nova_compute[265391]: 2025-09-30 18:28:36.818 2 INFO nova.virt.libvirt.driver [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Sep 30 18:28:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:28:36.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:36 compute-0 nova_compute[265391]: 2025-09-30 18:28:36.852 2 DEBUG nova.network.neutron [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Successfully created port: 4cd1879d-b7f9-410d-8517-ebcb79e59e3c _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Sep 30 18:28:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:28:37.263Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:28:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:28:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:28:37 compute-0 nova_compute[265391]: 2025-09-30 18:28:37.326 2 DEBUG nova.compute.manager [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Sep 30 18:28:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:28:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:28:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:28:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:28:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:28:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:28:37 compute-0 nova_compute[265391]: 2025-09-30 18:28:37.429 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:28:37 compute-0 nova_compute[265391]: 2025-09-30 18:28:37.437 2 DEBUG nova.network.neutron [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Successfully updated port: 4cd1879d-b7f9-410d-8517-ebcb79e59e3c _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Sep 30 18:28:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3565038591' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:28:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3565038591' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:28:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:28:37 compute-0 nova_compute[265391]: 2025-09-30 18:28:37.495 2 DEBUG nova.compute.manager [req-eaf1b303-39a7-4192-a68f-e115f548395a req-d889e206-94f6-4873-a3ed-9a297af85507 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Received event network-changed-4cd1879d-b7f9-410d-8517-ebcb79e59e3c external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:28:37 compute-0 nova_compute[265391]: 2025-09-30 18:28:37.495 2 DEBUG nova.compute.manager [req-eaf1b303-39a7-4192-a68f-e115f548395a req-d889e206-94f6-4873-a3ed-9a297af85507 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Refreshing instance network info cache due to event network-changed-4cd1879d-b7f9-410d-8517-ebcb79e59e3c. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:28:37 compute-0 nova_compute[265391]: 2025-09-30 18:28:37.496 2 DEBUG oslo_concurrency.lockutils [req-eaf1b303-39a7-4192-a68f-e115f548395a req-d889e206-94f6-4873-a3ed-9a297af85507 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-a39db459-001a-467e-8721-1dca3120f5ee" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:28:37 compute-0 nova_compute[265391]: 2025-09-30 18:28:37.496 2 DEBUG oslo_concurrency.lockutils [req-eaf1b303-39a7-4192-a68f-e115f548395a req-d889e206-94f6-4873-a3ed-9a297af85507 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-a39db459-001a-467e-8721-1dca3120f5ee" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:28:37 compute-0 nova_compute[265391]: 2025-09-30 18:28:37.496 2 DEBUG nova.network.neutron [req-eaf1b303-39a7-4192-a68f-e115f548395a req-d889e206-94f6-4873-a3ed-9a297af85507 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Refreshing network info cache for port 4cd1879d-b7f9-410d-8517-ebcb79e59e3c _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:28:37 compute-0 nova_compute[265391]: 2025-09-30 18:28:37.944 2 DEBUG oslo_concurrency.lockutils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "refresh_cache-a39db459-001a-467e-8721-1dca3120f5ee" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:28:38 compute-0 nova_compute[265391]: 2025-09-30 18:28:38.002 2 WARNING neutronclient.v2_0.client [req-eaf1b303-39a7-4192-a68f-e115f548395a req-d889e206-94f6-4873-a3ed-9a297af85507 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:28:38 compute-0 nova_compute[265391]: 2025-09-30 18:28:38.089 2 DEBUG nova.network.neutron [req-eaf1b303-39a7-4192-a68f-e115f548395a req-d889e206-94f6-4873-a3ed-9a297af85507 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:28:38 compute-0 nova_compute[265391]: 2025-09-30 18:28:38.266 2 DEBUG nova.network.neutron [req-eaf1b303-39a7-4192-a68f-e115f548395a req-d889e206-94f6-4873-a3ed-9a297af85507 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:28:38 compute-0 nova_compute[265391]: 2025-09-30 18:28:38.349 2 DEBUG nova.compute.manager [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Sep 30 18:28:38 compute-0 nova_compute[265391]: 2025-09-30 18:28:38.350 2 DEBUG nova.virt.libvirt.driver [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Sep 30 18:28:38 compute-0 nova_compute[265391]: 2025-09-30 18:28:38.351 2 INFO nova.virt.libvirt.driver [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Creating image(s)
Sep 30 18:28:38 compute-0 nova_compute[265391]: 2025-09-30 18:28:38.373 2 DEBUG nova.storage.rbd_utils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image a39db459-001a-467e-8721-1dca3120f5ee_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:28:38 compute-0 nova_compute[265391]: 2025-09-30 18:28:38.396 2 DEBUG nova.storage.rbd_utils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image a39db459-001a-467e-8721-1dca3120f5ee_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:28:38 compute-0 nova_compute[265391]: 2025-09-30 18:28:38.418 2 DEBUG nova.storage.rbd_utils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image a39db459-001a-467e-8721-1dca3120f5ee_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:28:38 compute-0 nova_compute[265391]: 2025-09-30 18:28:38.422 2 DEBUG oslo_concurrency.processutils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:28:38 compute-0 ceph-mon[73755]: pgmap v1499: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 40 GiB / 40 GiB avail; 14 KiB/s rd, 1.4 KiB/s wr, 21 op/s
Sep 30 18:28:38 compute-0 nova_compute[265391]: 2025-09-30 18:28:38.485 2 DEBUG oslo_concurrency.processutils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:28:38 compute-0 nova_compute[265391]: 2025-09-30 18:28:38.486 2 DEBUG oslo_concurrency.lockutils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "cb2d580238c9b109feae7f1462613dc547671457" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:28:38 compute-0 nova_compute[265391]: 2025-09-30 18:28:38.487 2 DEBUG oslo_concurrency.lockutils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:28:38 compute-0 nova_compute[265391]: 2025-09-30 18:28:38.487 2 DEBUG oslo_concurrency.lockutils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:28:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1500: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 40 GiB / 40 GiB avail; 14 KiB/s rd, 1.4 KiB/s wr, 21 op/s
Sep 30 18:28:38 compute-0 nova_compute[265391]: 2025-09-30 18:28:38.508 2 DEBUG nova.storage.rbd_utils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image a39db459-001a-467e-8721-1dca3120f5ee_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:28:38 compute-0 nova_compute[265391]: 2025-09-30 18:28:38.512 2 DEBUG oslo_concurrency.processutils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 a39db459-001a-467e-8721-1dca3120f5ee_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:28:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:28:38.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:38 compute-0 nova_compute[265391]: 2025-09-30 18:28:38.776 2 DEBUG oslo_concurrency.lockutils [req-eaf1b303-39a7-4192-a68f-e115f548395a req-d889e206-94f6-4873-a3ed-9a297af85507 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-a39db459-001a-467e-8721-1dca3120f5ee" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:28:38 compute-0 nova_compute[265391]: 2025-09-30 18:28:38.777 2 DEBUG oslo_concurrency.lockutils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquired lock "refresh_cache-a39db459-001a-467e-8721-1dca3120f5ee" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:28:38 compute-0 nova_compute[265391]: 2025-09-30 18:28:38.777 2 DEBUG nova.network.neutron [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:28:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:28:38] "GET /metrics HTTP/1.1" 200 46627 "" "Prometheus/2.51.0"
Sep 30 18:28:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:28:38] "GET /metrics HTTP/1.1" 200 46627 "" "Prometheus/2.51.0"
Sep 30 18:28:38 compute-0 nova_compute[265391]: 2025-09-30 18:28:38.798 2 DEBUG oslo_concurrency.processutils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 a39db459-001a-467e-8721-1dca3120f5ee_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.286s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:28:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:28:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:28:38.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:28:38 compute-0 nova_compute[265391]: 2025-09-30 18:28:38.874 2 DEBUG nova.storage.rbd_utils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] resizing rbd image a39db459-001a-467e-8721-1dca3120f5ee_disk to 1073741824 resize /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:288
Sep 30 18:28:38 compute-0 nova_compute[265391]: 2025-09-30 18:28:38.998 2 DEBUG nova.virt.libvirt.driver [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Sep 30 18:28:38 compute-0 nova_compute[265391]: 2025-09-30 18:28:38.998 2 DEBUG nova.virt.libvirt.driver [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Ensure instance console log exists: /var/lib/nova/instances/a39db459-001a-467e-8721-1dca3120f5ee/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Sep 30 18:28:38 compute-0 nova_compute[265391]: 2025-09-30 18:28:38.999 2 DEBUG oslo_concurrency.lockutils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:28:38 compute-0 nova_compute[265391]: 2025-09-30 18:28:38.999 2 DEBUG oslo_concurrency.lockutils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:28:38 compute-0 nova_compute[265391]: 2025-09-30 18:28:38.999 2 DEBUG oslo_concurrency.lockutils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:28:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:28:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:28:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:28:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:28:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:28:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:28:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:28:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:28:39 compute-0 nova_compute[265391]: 2025-09-30 18:28:39.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:39 compute-0 nova_compute[265391]: 2025-09-30 18:28:39.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:28:39 compute-0 nova_compute[265391]: 2025-09-30 18:28:39.428 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:28:39 compute-0 nova_compute[265391]: 2025-09-30 18:28:39.433 2 DEBUG nova.network.neutron [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:28:39 compute-0 ceph-mon[73755]: pgmap v1500: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 40 GiB / 40 GiB avail; 14 KiB/s rd, 1.4 KiB/s wr, 21 op/s
Sep 30 18:28:39 compute-0 nova_compute[265391]: 2025-09-30 18:28:39.621 2 WARNING neutronclient.v2_0.client [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:28:39 compute-0 nova_compute[265391]: 2025-09-30 18:28:39.974 2 DEBUG nova.network.neutron [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Updating instance_info_cache with network_info: [{"id": "4cd1879d-b7f9-410d-8517-ebcb79e59e3c", "address": "fa:16:3e:4e:9a:24", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cd1879d-b7", "ovs_interfaceid": "4cd1879d-b7f9-410d-8517-ebcb79e59e3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:28:40 compute-0 nova_compute[265391]: 2025-09-30 18:28:40.481 2 DEBUG oslo_concurrency.lockutils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Releasing lock "refresh_cache-a39db459-001a-467e-8721-1dca3120f5ee" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:28:40 compute-0 nova_compute[265391]: 2025-09-30 18:28:40.482 2 DEBUG nova.compute.manager [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Instance network_info: |[{"id": "4cd1879d-b7f9-410d-8517-ebcb79e59e3c", "address": "fa:16:3e:4e:9a:24", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cd1879d-b7", "ovs_interfaceid": "4cd1879d-b7f9-410d-8517-ebcb79e59e3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Sep 30 18:28:40 compute-0 nova_compute[265391]: 2025-09-30 18:28:40.486 2 DEBUG nova.virt.libvirt.driver [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Start _get_guest_xml network_info=[{"id": "4cd1879d-b7f9-410d-8517-ebcb79e59e3c", "address": "fa:16:3e:4e:9a:24", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cd1879d-b7", "ovs_interfaceid": "4cd1879d-b7f9-410d-8517-ebcb79e59e3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'image_id': '5b99cbca-b655-4be5-8343-cf504005c42e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Sep 30 18:28:40 compute-0 nova_compute[265391]: 2025-09-30 18:28:40.491 2 WARNING nova.virt.libvirt.driver [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:28:40 compute-0 nova_compute[265391]: 2025-09-30 18:28:40.493 2 DEBUG nova.virt.driver [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='5b99cbca-b655-4be5-8343-cf504005c42e', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteStrategies-server-1579591444', uuid='a39db459-001a-467e-8721-1dca3120f5ee'), owner=OwnerMeta(userid='623ef4a55c9e4fc28bb65e49246b5008', username='tempest-TestExecuteStrategies-1883747907-project-admin', projectid='c634e1c17ed54907969576a0eb8eff50', projectname='tempest-TestExecuteStrategies-1883747907'), image=ImageMeta(id='5b99cbca-b655-4be5-8343-cf504005c42e', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "4cd1879d-b7f9-410d-8517-ebcb79e59e3c", "address": "fa:16:3e:4e:9a:24", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cd1879d-b7", "ovs_interfaceid": "4cd1879d-b7f9-410d-8517-ebcb79e59e3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20250919142712.b99a882.el10', creation_time=1759256920.4930434) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Sep 30 18:28:40 compute-0 nova_compute[265391]: 2025-09-30 18:28:40.497 2 DEBUG nova.virt.libvirt.host [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Sep 30 18:28:40 compute-0 nova_compute[265391]: 2025-09-30 18:28:40.498 2 DEBUG nova.virt.libvirt.host [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Sep 30 18:28:40 compute-0 nova_compute[265391]: 2025-09-30 18:28:40.502 2 DEBUG nova.virt.libvirt.host [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Sep 30 18:28:40 compute-0 nova_compute[265391]: 2025-09-30 18:28:40.503 2 DEBUG nova.virt.libvirt.host [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Sep 30 18:28:40 compute-0 nova_compute[265391]: 2025-09-30 18:28:40.504 2 DEBUG nova.virt.libvirt.driver [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Sep 30 18:28:40 compute-0 nova_compute[265391]: 2025-09-30 18:28:40.504 2 DEBUG nova.virt.hardware [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-09-30T18:04:50Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Sep 30 18:28:40 compute-0 nova_compute[265391]: 2025-09-30 18:28:40.504 2 DEBUG nova.virt.hardware [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Sep 30 18:28:40 compute-0 nova_compute[265391]: 2025-09-30 18:28:40.505 2 DEBUG nova.virt.hardware [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Sep 30 18:28:40 compute-0 nova_compute[265391]: 2025-09-30 18:28:40.505 2 DEBUG nova.virt.hardware [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Sep 30 18:28:40 compute-0 nova_compute[265391]: 2025-09-30 18:28:40.505 2 DEBUG nova.virt.hardware [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Sep 30 18:28:40 compute-0 nova_compute[265391]: 2025-09-30 18:28:40.505 2 DEBUG nova.virt.hardware [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Sep 30 18:28:40 compute-0 nova_compute[265391]: 2025-09-30 18:28:40.505 2 DEBUG nova.virt.hardware [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Sep 30 18:28:40 compute-0 nova_compute[265391]: 2025-09-30 18:28:40.505 2 DEBUG nova.virt.hardware [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Sep 30 18:28:40 compute-0 nova_compute[265391]: 2025-09-30 18:28:40.506 2 DEBUG nova.virt.hardware [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Sep 30 18:28:40 compute-0 nova_compute[265391]: 2025-09-30 18:28:40.506 2 DEBUG nova.virt.hardware [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Sep 30 18:28:40 compute-0 nova_compute[265391]: 2025-09-30 18:28:40.506 2 DEBUG nova.virt.hardware [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Sep 30 18:28:40 compute-0 nova_compute[265391]: 2025-09-30 18:28:40.509 2 DEBUG oslo_concurrency.processutils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:28:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1501: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 40 GiB / 40 GiB avail; 10 KiB/s rd, 938 B/s wr, 15 op/s
Sep 30 18:28:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:28:40.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:40 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/868335414' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:28:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:28:40.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:40 compute-0 nova_compute[265391]: 2025-09-30 18:28:40.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:28:40 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1228077198' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:28:40 compute-0 nova_compute[265391]: 2025-09-30 18:28:40.930 2 DEBUG oslo_concurrency.processutils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:28:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:28:40 compute-0 nova_compute[265391]: 2025-09-30 18:28:40.966 2 DEBUG nova.storage.rbd_utils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image a39db459-001a-467e-8721-1dca3120f5ee_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:28:40 compute-0 nova_compute[265391]: 2025-09-30 18:28:40.970 2 DEBUG oslo_concurrency.processutils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:28:41 compute-0 nova_compute[265391]: 2025-09-30 18:28:41.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:28:41 compute-0 nova_compute[265391]: 2025-09-30 18:28:41.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:28:41 compute-0 nova_compute[265391]: 2025-09-30 18:28:41.429 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:28:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:28:41 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/743270422' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:28:41 compute-0 nova_compute[265391]: 2025-09-30 18:28:41.478 2 DEBUG oslo_concurrency.processutils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:28:41 compute-0 nova_compute[265391]: 2025-09-30 18:28:41.481 2 DEBUG nova.virt.libvirt.vif [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:28:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-1579591444',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-1579591444',id=20,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c634e1c17ed54907969576a0eb8eff50',ramdisk_id='',reservation_id='r-ad15oqdw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteStrategies-1883747907',owner_user_name='tempest-TestExecuteStrategies-1883747907-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:28:37Z,user_data=None,user_id='623ef4a55c9e4fc28bb65e49246b5008',uuid=a39db459-001a-467e-8721-1dca3120f5ee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4cd1879d-b7f9-410d-8517-ebcb79e59e3c", "address": "fa:16:3e:4e:9a:24", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cd1879d-b7", "ovs_interfaceid": "4cd1879d-b7f9-410d-8517-ebcb79e59e3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:28:41 compute-0 nova_compute[265391]: 2025-09-30 18:28:41.481 2 DEBUG nova.network.os_vif_util [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converting VIF {"id": "4cd1879d-b7f9-410d-8517-ebcb79e59e3c", "address": "fa:16:3e:4e:9a:24", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cd1879d-b7", "ovs_interfaceid": "4cd1879d-b7f9-410d-8517-ebcb79e59e3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:28:41 compute-0 nova_compute[265391]: 2025-09-30 18:28:41.482 2 DEBUG nova.network.os_vif_util [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:9a:24,bridge_name='br-int',has_traffic_filtering=True,id=4cd1879d-b7f9-410d-8517-ebcb79e59e3c,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cd1879d-b7') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:28:41 compute-0 nova_compute[265391]: 2025-09-30 18:28:41.484 2 DEBUG nova.objects.instance [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lazy-loading 'pci_devices' on Instance uuid a39db459-001a-467e-8721-1dca3120f5ee obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:28:41 compute-0 ceph-mon[73755]: pgmap v1501: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 40 GiB / 40 GiB avail; 10 KiB/s rd, 938 B/s wr, 15 op/s
Sep 30 18:28:41 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1228077198' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:28:41 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/743270422' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:28:41 compute-0 nova_compute[265391]: 2025-09-30 18:28:41.955 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:28:41 compute-0 nova_compute[265391]: 2025-09-30 18:28:41.955 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:28:41 compute-0 nova_compute[265391]: 2025-09-30 18:28:41.956 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:28:41 compute-0 nova_compute[265391]: 2025-09-30 18:28:41.956 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:28:41 compute-0 nova_compute[265391]: 2025-09-30 18:28:41.956 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:28:41 compute-0 nova_compute[265391]: 2025-09-30 18:28:41.993 2 DEBUG nova.virt.libvirt.driver [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] End _get_guest_xml xml=<domain type="kvm">
Sep 30 18:28:41 compute-0 nova_compute[265391]:   <uuid>a39db459-001a-467e-8721-1dca3120f5ee</uuid>
Sep 30 18:28:41 compute-0 nova_compute[265391]:   <name>instance-00000014</name>
Sep 30 18:28:41 compute-0 nova_compute[265391]:   <memory>131072</memory>
Sep 30 18:28:41 compute-0 nova_compute[265391]:   <vcpu>1</vcpu>
Sep 30 18:28:41 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:28:41 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteStrategies-server-1579591444</nova:name>
Sep 30 18:28:41 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:28:40</nova:creationTime>
Sep 30 18:28:41 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:28:41 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:28:41 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:28:41 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:28:41 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:28:41 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:28:41 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:28:41 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:28:41 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:28:41 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:28:41 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:28:41 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:28:41 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:28:41 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:28:41 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:28:41 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:28:41 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:28:41 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:28:41 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:28:41 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:28:41 compute-0 nova_compute[265391]:         <nova:user uuid="623ef4a55c9e4fc28bb65e49246b5008">tempest-TestExecuteStrategies-1883747907-project-admin</nova:user>
Sep 30 18:28:41 compute-0 nova_compute[265391]:         <nova:project uuid="c634e1c17ed54907969576a0eb8eff50">tempest-TestExecuteStrategies-1883747907</nova:project>
Sep 30 18:28:41 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:28:41 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:28:41 compute-0 nova_compute[265391]:         <nova:port uuid="4cd1879d-b7f9-410d-8517-ebcb79e59e3c">
Sep 30 18:28:41 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:28:41 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:28:41 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:28:41 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <system>
Sep 30 18:28:41 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:28:41 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:28:41 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:28:41 compute-0 nova_compute[265391]:       <entry name="serial">a39db459-001a-467e-8721-1dca3120f5ee</entry>
Sep 30 18:28:41 compute-0 nova_compute[265391]:       <entry name="uuid">a39db459-001a-467e-8721-1dca3120f5ee</entry>
Sep 30 18:28:41 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     </system>
Sep 30 18:28:41 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:28:41 compute-0 nova_compute[265391]:   <os>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="q35">hvm</type>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:   </os>
Sep 30 18:28:41 compute-0 nova_compute[265391]:   <features>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <vmcoreinfo/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:   </features>
Sep 30 18:28:41 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:28:41 compute-0 nova_compute[265391]:   <cpu mode="host-model" match="exact">
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <topology sockets="1" cores="1" threads="1"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:28:41 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:28:41 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/a39db459-001a-467e-8721-1dca3120f5ee_disk">
Sep 30 18:28:41 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:       </source>
Sep 30 18:28:41 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:28:41 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:28:41 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:28:41 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/a39db459-001a-467e-8721-1dca3120f5ee_disk.config">
Sep 30 18:28:41 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:       </source>
Sep 30 18:28:41 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:28:41 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:28:41 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:28:41 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:4e:9a:24"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:       <target dev="tap4cd1879d-b7"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:28:41 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/a39db459-001a-467e-8721-1dca3120f5ee/console.log" append="off"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <video>
Sep 30 18:28:41 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     </video>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:28:41 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <controller type="usb" index="0"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:28:41 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:28:41 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:28:41 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:28:41 compute-0 nova_compute[265391]: </domain>
Sep 30 18:28:41 compute-0 nova_compute[265391]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Sep 30 18:28:41 compute-0 nova_compute[265391]: 2025-09-30 18:28:41.994 2 DEBUG nova.compute.manager [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Preparing to wait for external event network-vif-plugged-4cd1879d-b7f9-410d-8517-ebcb79e59e3c prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:28:41 compute-0 nova_compute[265391]: 2025-09-30 18:28:41.994 2 DEBUG oslo_concurrency.lockutils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "a39db459-001a-467e-8721-1dca3120f5ee-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:28:41 compute-0 nova_compute[265391]: 2025-09-30 18:28:41.994 2 DEBUG oslo_concurrency.lockutils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "a39db459-001a-467e-8721-1dca3120f5ee-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:28:41 compute-0 nova_compute[265391]: 2025-09-30 18:28:41.995 2 DEBUG oslo_concurrency.lockutils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "a39db459-001a-467e-8721-1dca3120f5ee-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:28:41 compute-0 nova_compute[265391]: 2025-09-30 18:28:41.995 2 DEBUG nova.virt.libvirt.vif [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:28:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-1579591444',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-1579591444',id=20,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c634e1c17ed54907969576a0eb8eff50',ramdisk_id='',reservation_id='r-ad15oqdw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteStrategies-1883747907',owner_user_name='tempest-TestExecuteStrategies-1883747907-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:28:37Z,user_data=None,user_id='623ef4a55c9e4fc28bb65e49246b5008',uuid=a39db459-001a-467e-8721-1dca3120f5ee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4cd1879d-b7f9-410d-8517-ebcb79e59e3c", "address": "fa:16:3e:4e:9a:24", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cd1879d-b7", "ovs_interfaceid": "4cd1879d-b7f9-410d-8517-ebcb79e59e3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Sep 30 18:28:41 compute-0 nova_compute[265391]: 2025-09-30 18:28:41.996 2 DEBUG nova.network.os_vif_util [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converting VIF {"id": "4cd1879d-b7f9-410d-8517-ebcb79e59e3c", "address": "fa:16:3e:4e:9a:24", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cd1879d-b7", "ovs_interfaceid": "4cd1879d-b7f9-410d-8517-ebcb79e59e3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:28:41 compute-0 nova_compute[265391]: 2025-09-30 18:28:41.996 2 DEBUG nova.network.os_vif_util [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:9a:24,bridge_name='br-int',has_traffic_filtering=True,id=4cd1879d-b7f9-410d-8517-ebcb79e59e3c,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cd1879d-b7') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:28:41 compute-0 nova_compute[265391]: 2025-09-30 18:28:41.997 2 DEBUG os_vif [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:9a:24,bridge_name='br-int',has_traffic_filtering=True,id=4cd1879d-b7f9-410d-8517-ebcb79e59e3c,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cd1879d-b7') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Sep 30 18:28:41 compute-0 nova_compute[265391]: 2025-09-30 18:28:41.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:41 compute-0 nova_compute[265391]: 2025-09-30 18:28:41.998 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:28:41 compute-0 nova_compute[265391]: 2025-09-30 18:28:41.998 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:28:41 compute-0 nova_compute[265391]: 2025-09-30 18:28:41.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:41 compute-0 nova_compute[265391]: 2025-09-30 18:28:41.999 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': 'c20c8ca2-450c-5769-8c14-f3be44346c88', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:28:42 compute-0 nova_compute[265391]: 2025-09-30 18:28:42.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:42 compute-0 nova_compute[265391]: 2025-09-30 18:28:42.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:42 compute-0 nova_compute[265391]: 2025-09-30 18:28:42.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:42 compute-0 nova_compute[265391]: 2025-09-30 18:28:42.004 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4cd1879d-b7, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:28:42 compute-0 nova_compute[265391]: 2025-09-30 18:28:42.005 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap4cd1879d-b7, col_values=(('qos', UUID('ee7b29fb-20ee-48ae-aa4c-13f2a302c0e0')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:28:42 compute-0 nova_compute[265391]: 2025-09-30 18:28:42.005 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap4cd1879d-b7, col_values=(('external_ids', {'iface-id': '4cd1879d-b7f9-410d-8517-ebcb79e59e3c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4e:9a:24', 'vm-uuid': 'a39db459-001a-467e-8721-1dca3120f5ee'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:28:42 compute-0 nova_compute[265391]: 2025-09-30 18:28:42.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:42 compute-0 NetworkManager[45059]: <info>  [1759256922.0070] manager: (tap4cd1879d-b7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Sep 30 18:28:42 compute-0 nova_compute[265391]: 2025-09-30 18:28:42.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:28:42 compute-0 nova_compute[265391]: 2025-09-30 18:28:42.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:42 compute-0 nova_compute[265391]: 2025-09-30 18:28:42.012 2 INFO os_vif [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:9a:24,bridge_name='br-int',has_traffic_filtering=True,id=4cd1879d-b7f9-410d-8517-ebcb79e59e3c,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cd1879d-b7')
Sep 30 18:28:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:28:42 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3165206529' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:28:42 compute-0 nova_compute[265391]: 2025-09-30 18:28:42.409 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:28:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1502: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 40 GiB / 40 GiB avail; 597 B/s rd, 0 op/s
Sep 30 18:28:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:28:42.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:42 compute-0 podman[328884]: 2025-09-30 18:28:42.523328885 +0000 UTC m=+0.059002321 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:28:42 compute-0 podman[328883]: 2025-09-30 18:28:42.573386353 +0000 UTC m=+0.111562884 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Sep 30 18:28:42 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/360614735' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:28:42 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3165206529' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:28:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:28:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:28:42.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:28:43 compute-0 nova_compute[265391]: 2025-09-30 18:28:43.440 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:28:43 compute-0 nova_compute[265391]: 2025-09-30 18:28:43.441 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:28:43 compute-0 nova_compute[265391]: 2025-09-30 18:28:43.551 2 DEBUG nova.virt.libvirt.driver [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:28:43 compute-0 nova_compute[265391]: 2025-09-30 18:28:43.552 2 DEBUG nova.virt.libvirt.driver [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:28:43 compute-0 nova_compute[265391]: 2025-09-30 18:28:43.552 2 DEBUG nova.virt.libvirt.driver [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] No VIF found with MAC fa:16:3e:4e:9a:24, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Sep 30 18:28:43 compute-0 nova_compute[265391]: 2025-09-30 18:28:43.553 2 INFO nova.virt.libvirt.driver [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Using config drive
Sep 30 18:28:43 compute-0 nova_compute[265391]: 2025-09-30 18:28:43.582 2 DEBUG nova.storage.rbd_utils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image a39db459-001a-467e-8721-1dca3120f5ee_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:28:43 compute-0 ceph-mon[73755]: pgmap v1502: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 40 GiB / 40 GiB avail; 597 B/s rd, 0 op/s
Sep 30 18:28:43 compute-0 nova_compute[265391]: 2025-09-30 18:28:43.661 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:28:43 compute-0 nova_compute[265391]: 2025-09-30 18:28:43.662 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:28:43 compute-0 nova_compute[265391]: 2025-09-30 18:28:43.693 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.030s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:28:43 compute-0 nova_compute[265391]: 2025-09-30 18:28:43.694 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4287MB free_disk=39.992183685302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:28:43 compute-0 nova_compute[265391]: 2025-09-30 18:28:43.694 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:28:43 compute-0 nova_compute[265391]: 2025-09-30 18:28:43.694 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:28:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:28:43.800Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:28:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:28:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:28:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:28:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:28:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:28:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:28:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:28:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:28:44 compute-0 nova_compute[265391]: 2025-09-30 18:28:44.099 2 WARNING neutronclient.v2_0.client [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:28:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1503: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:28:44 compute-0 nova_compute[265391]: 2025-09-30 18:28:44.513 2 INFO nova.virt.libvirt.driver [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Creating config drive at /var/lib/nova/instances/a39db459-001a-467e-8721-1dca3120f5ee/disk.config
Sep 30 18:28:44 compute-0 nova_compute[265391]: 2025-09-30 18:28:44.520 2 DEBUG oslo_concurrency.processutils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a39db459-001a-467e-8721-1dca3120f5ee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmp1ixjso4p execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:28:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:28:44.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:44 compute-0 nova_compute[265391]: 2025-09-30 18:28:44.655 2 DEBUG oslo_concurrency.processutils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a39db459-001a-467e-8721-1dca3120f5ee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmp1ixjso4p" returned: 0 in 0.134s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:28:44 compute-0 nova_compute[265391]: 2025-09-30 18:28:44.684 2 DEBUG nova.storage.rbd_utils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image a39db459-001a-467e-8721-1dca3120f5ee_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:28:44 compute-0 nova_compute[265391]: 2025-09-30 18:28:44.688 2 DEBUG oslo_concurrency.processutils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a39db459-001a-467e-8721-1dca3120f5ee/disk.config a39db459-001a-467e-8721-1dca3120f5ee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:28:44 compute-0 nova_compute[265391]: 2025-09-30 18:28:44.787 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Instance a39db459-001a-467e-8721-1dca3120f5ee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Sep 30 18:28:44 compute-0 nova_compute[265391]: 2025-09-30 18:28:44.788 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:28:44 compute-0 nova_compute[265391]: 2025-09-30 18:28:44.788 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=39GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:28:43 up  1:32,  0 user,  load average: 2.60, 1.23, 0.96\n', 'num_instances': '1', 'num_vm_building': '1', 'num_task_spawning': '1', 'num_os_type_None': '1', 'num_proj_c634e1c17ed54907969576a0eb8eff50': '1', 'io_workload': '1'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:28:44 compute-0 nova_compute[265391]: 2025-09-30 18:28:44.829 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:28:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:28:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:28:44.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:28:44 compute-0 nova_compute[265391]: 2025-09-30 18:28:44.902 2 DEBUG oslo_concurrency.processutils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a39db459-001a-467e-8721-1dca3120f5ee/disk.config a39db459-001a-467e-8721-1dca3120f5ee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:28:44 compute-0 nova_compute[265391]: 2025-09-30 18:28:44.904 2 INFO nova.virt.libvirt.driver [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Deleting local config drive /var/lib/nova/instances/a39db459-001a-467e-8721-1dca3120f5ee/disk.config because it was imported into RBD.
Sep 30 18:28:44 compute-0 kernel: tap4cd1879d-b7: entered promiscuous mode
Sep 30 18:28:44 compute-0 ovn_controller[156242]: 2025-09-30T18:28:44Z|00169|binding|INFO|Claiming lport 4cd1879d-b7f9-410d-8517-ebcb79e59e3c for this chassis.
Sep 30 18:28:44 compute-0 ovn_controller[156242]: 2025-09-30T18:28:44Z|00170|binding|INFO|4cd1879d-b7f9-410d-8517-ebcb79e59e3c: Claiming fa:16:3e:4e:9a:24 10.100.0.8
Sep 30 18:28:44 compute-0 nova_compute[265391]: 2025-09-30 18:28:44.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:44 compute-0 NetworkManager[45059]: <info>  [1759256924.9682] manager: (tap4cd1879d-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/71)
Sep 30 18:28:44 compute-0 nova_compute[265391]: 2025-09-30 18:28:44.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:44 compute-0 nova_compute[265391]: 2025-09-30 18:28:44.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:44 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:44.990 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4e:9a:24 10.100.0.8'], port_security=['fa:16:3e:4e:9a:24 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'a39db459-001a-467e-8721-1dca3120f5ee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6901f664-336b-42d2-bbf7-58951befc8d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c634e1c17ed54907969576a0eb8eff50', 'neutron:revision_number': '4', 'neutron:security_group_ids': '11cf84c1-9641-409c-b5f5-6c5fe3a8afe5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c24d9de-651b-4bf8-842a-1286ab88b11d, chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=4cd1879d-b7f9-410d-8517-ebcb79e59e3c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:28:44 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:44.991 166158 INFO neutron.agent.ovn.metadata.agent [-] Port 4cd1879d-b7f9-410d-8517-ebcb79e59e3c in datapath 6901f664-336b-42d2-bbf7-58951befc8d1 bound to our chassis
Sep 30 18:28:44 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:44.993 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6901f664-336b-42d2-bbf7-58951befc8d1
Sep 30 18:28:45 compute-0 ovn_controller[156242]: 2025-09-30T18:28:45Z|00171|binding|INFO|Setting lport 4cd1879d-b7f9-410d-8517-ebcb79e59e3c ovn-installed in OVS
Sep 30 18:28:45 compute-0 ovn_controller[156242]: 2025-09-30T18:28:45Z|00172|binding|INFO|Setting lport 4cd1879d-b7f9-410d-8517-ebcb79e59e3c up in Southbound
Sep 30 18:28:45 compute-0 nova_compute[265391]: 2025-09-30 18:28:45.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:45.008 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[9bd2e7f8-4d7e-4ec8-a692-89c5d13bb204]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:45.009 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6901f664-31 in ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1 namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:45.012 292173 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6901f664-30 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:45.012 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[83cc0c91-f5a3-414d-b0e1-1ea0e5957b7f]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:45.012 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[30048dd2-210e-40b2-bc99-ec9d8190cd8c]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:28:45 compute-0 systemd-udevd[329026]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:28:45 compute-0 systemd-machined[219917]: New machine qemu-14-instance-00000014.
Sep 30 18:28:45 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-00000014.
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:45.032 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[cd6f0930-0133-4782-a764-b72c3c424727]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:28:45 compute-0 NetworkManager[45059]: <info>  [1759256925.0390] device (tap4cd1879d-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 18:28:45 compute-0 NetworkManager[45059]: <info>  [1759256925.0415] device (tap4cd1879d-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Sep 30 18:28:45 compute-0 podman[329005]: 2025-09-30 18:28:45.051486858 +0000 UTC m=+0.066304950 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:45.051 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[6eb5a232-3ed8-4e3b-a3b8-54bd05be0c96]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:45.091 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[5836c264-5d2b-49a7-8002-28a1c2837b9c]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:45.097 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[e7571219-f82f-447f-b954-4cd01a401881]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:28:45 compute-0 NetworkManager[45059]: <info>  [1759256925.0978] manager: (tap6901f664-30): new Veth device (/org/freedesktop/NetworkManager/Devices/72)
Sep 30 18:28:45 compute-0 systemd-udevd[329051]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:45.133 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[a2afed8b-e1c8-43d0-9cac-f6ef34de38f0]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:45.135 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[a06f7d7c-41e2-4b7a-8afc-efcc6f4a472e]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:28:45 compute-0 NetworkManager[45059]: <info>  [1759256925.1592] device (tap6901f664-30): carrier: link connected
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:45.169 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[84c992f1-342a-4ea2-8f48-192f2c409680]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:45.190 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[3d9f8ab3-6c0b-4c7f-a31f-781a02beb98a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6901f664-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:41:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 552385, 'reachable_time': 38492, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329079, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:45.211 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[bfe7ffa6-ac82-44b9-8aa4-6431ef9240bd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe35:412a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 552385, 'tstamp': 552385}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329087, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
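[annotation] The two privsep replies above are netlink RTM_NEWLINK / RTM_NEWADDR dumps for tap6901f664-31 taken inside the ovnmeta-6901f664-... namespace. A rough equivalent using pyroute2 (the library neutron's privileged ip_lib wraps); the exact calls are an approximation for illustration, not neutron's own code path:

from pyroute2 import NetNS

ns_name = "ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1"      # namespace from the log
with NetNS(ns_name) as ns:
    idx = ns.link_lookup(ifname="tap6901f664-31")[0]          # interface index in that netns
    link = ns.get_links(idx)[0]                               # RTM_NEWLINK-style message
    addrs = ns.get_addr(index=idx)                            # RTM_NEWADDR-style messages
    print(link.get_attr("IFLA_ADDRESS"))                      # fa:16:3e:35:41:2a in the dump
    print([a.get_attr("IFA_ADDRESS") for a in addrs])         # fe80::f816:3eff:fe35:412a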
Sep 30 18:28:45 compute-0 nova_compute[265391]: 2025-09-30 18:28:45.221 2 DEBUG nova.compute.manager [req-aabded19-8ad0-43de-8fee-d98eb9e9c343 req-1eeb7268-cd4b-4fb0-9c6d-5016951f7ba9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Received event network-vif-plugged-4cd1879d-b7f9-410d-8517-ebcb79e59e3c external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:28:45 compute-0 nova_compute[265391]: 2025-09-30 18:28:45.222 2 DEBUG oslo_concurrency.lockutils [req-aabded19-8ad0-43de-8fee-d98eb9e9c343 req-1eeb7268-cd4b-4fb0-9c6d-5016951f7ba9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "a39db459-001a-467e-8721-1dca3120f5ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:28:45 compute-0 nova_compute[265391]: 2025-09-30 18:28:45.222 2 DEBUG oslo_concurrency.lockutils [req-aabded19-8ad0-43de-8fee-d98eb9e9c343 req-1eeb7268-cd4b-4fb0-9c6d-5016951f7ba9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "a39db459-001a-467e-8721-1dca3120f5ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:28:45 compute-0 nova_compute[265391]: 2025-09-30 18:28:45.223 2 DEBUG oslo_concurrency.lockutils [req-aabded19-8ad0-43de-8fee-d98eb9e9c343 req-1eeb7268-cd4b-4fb0-9c6d-5016951f7ba9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "a39db459-001a-467e-8721-1dca3120f5ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:28:45 compute-0 nova_compute[265391]: 2025-09-30 18:28:45.223 2 DEBUG nova.compute.manager [req-aabded19-8ad0-43de-8fee-d98eb9e9c343 req-1eeb7268-cd4b-4fb0-9c6d-5016951f7ba9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Processing event network-vif-plugged-4cd1879d-b7f9-410d-8517-ebcb79e59e3c _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:45.232 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[d2a3478c-d93b-4c40-a9cf-7faf194e8f74]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6901f664-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:41:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 552385, 'reachable_time': 38492, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 329097, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:45.272 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[326484fe-a5e5-4b97-9df7-01dd8a197f59]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:28:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:28:45 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1401032979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:45.339 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[3b7dd41d-ef12-49b9-a435-12701d489382]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:45.341 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6901f664-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:45.341 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:45.341 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6901f664-30, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:28:45 compute-0 nova_compute[265391]: 2025-09-30 18:28:45.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:45 compute-0 NetworkManager[45059]: <info>  [1759256925.3438] manager: (tap6901f664-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Sep 30 18:28:45 compute-0 kernel: tap6901f664-30: entered promiscuous mode
Sep 30 18:28:45 compute-0 nova_compute[265391]: 2025-09-30 18:28:45.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:45.345 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6901f664-30, col_values=(('external_ids', {'iface-id': '5b6cbf18-1826-41d0-920f-e9db4f1a1832'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
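[annotation] The three ovsdbapp transactions above (DelPortCommand on br-ex, AddPortCommand on br-int, DbSetCommand on the Interface row) move the metadata veth onto the integration bridge and point it at OVN logical port 5b6cbf18-.... A hedged sketch of the equivalent ovs-vsctl calls driven from Python, with the names copied from the log:

import subprocess

iface = "tap6901f664-30"
iface_id = "5b6cbf18-1826-41d0-920f-e9db4f1a1832"   # OVN logical port id from the log

for cmd in (
    ["ovs-vsctl", "--if-exists", "del-port", "br-ex", iface],       # DelPortCommand
    ["ovs-vsctl", "--may-exist", "add-port", "br-int", iface],      # AddPortCommand
    ["ovs-vsctl", "set", "Interface", iface,                        # DbSetCommand
     f"external_ids:iface-id={iface_id}"],
):
    subprocess.run(cmd, check=True)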
Sep 30 18:28:45 compute-0 nova_compute[265391]: 2025-09-30 18:28:45.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:45 compute-0 ovn_controller[156242]: 2025-09-30T18:28:45Z|00173|binding|INFO|Releasing lport 5b6cbf18-1826-41d0-920f-e9db4f1a1832 from this chassis (sb_readonly=0)
Sep 30 18:28:45 compute-0 nova_compute[265391]: 2025-09-30 18:28:45.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:45.348 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[3742cd33-b44e-4439-be15-f7056c4ca3d9]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:45.355 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:45.356 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:45.356 166158 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for 6901f664-336b-42d2-bbf7-58951befc8d1 disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:45.356 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:45.356 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[686ff844-2a21-4501-be39-7aaca8ad8e32]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:45.357 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:45.357 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[b94aa98c-dbc3-42e1-8f60-8f1f87573a5f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:45.357 166158 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: global
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]:     log         /dev/log local0 debug
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]:     log-tag     haproxy-metadata-proxy-6901f664-336b-42d2-bbf7-58951befc8d1
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]:     user        root
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]:     group       root
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]:     maxconn     1024
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]:     pidfile     /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]:     daemon
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: defaults
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]:     log global
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]:     mode http
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]:     option httplog
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]:     option dontlognull
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]:     option http-server-close
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]:     option forwardfor
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]:     retries                 3
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]:     timeout http-request    30s
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]:     timeout connect         30s
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]:     timeout client          32s
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]:     timeout server          32s
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]:     timeout http-keep-alive 30s
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: listen listener
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]:     bind 169.254.169.254:80
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]:     
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]:     server metadata /var/lib/neutron/metadata_proxy
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]:     http-request add-header X-OVN-Network-ID 6901f664-336b-42d2-bbf7-58951befc8d1
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Sep 30 18:28:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:45.358 166158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'env', 'PROCESS_TAG=haproxy-6901f664-336b-42d2-bbf7-58951befc8d1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6901f664-336b-42d2-bbf7-58951befc8d1.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
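[annotation] The haproxy_cfg dump above is rendered to a per-network config file and then launched inside the ovnmeta namespace via the rootwrap command on the previous line. A simplified Python equivalent of that launch, with rootwrap, the PROCESS_TAG environment variable and the haproxy wrapper script omitted (root privileges assumed):

import subprocess

network_id = "6901f664-336b-42d2-bbf7-58951befc8d1"
conf = f"/var/lib/neutron/ovn-metadata-proxy/{network_id}.conf"   # rendered haproxy_cfg

subprocess.run(
    ["ip", "netns", "exec", f"ovnmeta-{network_id}", "haproxy", "-f", conf],
    check=True,
)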
Sep 30 18:28:45 compute-0 nova_compute[265391]: 2025-09-30 18:28:45.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:45 compute-0 nova_compute[265391]: 2025-09-30 18:28:45.390 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
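[annotation] The "ceph df --format=json" call above is how nova's RBD image backend refreshes pool usage for the resource tracker. A small sketch that reruns the same query and reads the cluster totals; the JSON keys shown are the ones ceph df normally emits, assumed rather than taken from this log:

import json
import subprocess

out = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True, capture_output=True, text=True,
).stdout
stats = json.loads(out)
# Typical top-level layout: {"stats": {...}, "pools": [...]}
print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])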
Sep 30 18:28:45 compute-0 nova_compute[265391]: 2025-09-30 18:28:45.397 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:28:45 compute-0 ceph-mon[73755]: pgmap v1503: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:28:45 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1401032979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:28:45 compute-0 nova_compute[265391]: 2025-09-30 18:28:45.784 2 DEBUG nova.compute.manager [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:28:45 compute-0 nova_compute[265391]: 2025-09-30 18:28:45.788 2 DEBUG nova.virt.libvirt.driver [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Sep 30 18:28:45 compute-0 nova_compute[265391]: 2025-09-30 18:28:45.791 2 INFO nova.virt.libvirt.driver [-] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Instance spawned successfully.
Sep 30 18:28:45 compute-0 nova_compute[265391]: 2025-09-30 18:28:45.792 2 DEBUG nova.virt.libvirt.driver [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Sep 30 18:28:45 compute-0 podman[329159]: 2025-09-30 18:28:45.815040614 +0000 UTC m=+0.061991348 container create 25b90d2268c64728cec456bfe261a34d0e9712e5af6c24c0e6e1dfaf0da623ab (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:28:45 compute-0 systemd[1]: Started libpod-conmon-25b90d2268c64728cec456bfe261a34d0e9712e5af6c24c0e6e1dfaf0da623ab.scope.
Sep 30 18:28:45 compute-0 sshd-session[328953]: Invalid user cma from 45.252.249.158 port 34116
Sep 30 18:28:45 compute-0 podman[329159]: 2025-09-30 18:28:45.781379962 +0000 UTC m=+0.028330716 image pull 8925e336c8a9c8e5b40d98e4715ad60992d6f21ad0a398170a686c34f922c024 38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Sep 30 18:28:45 compute-0 nova_compute[265391]: 2025-09-30 18:28:45.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:45 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:28:45 compute-0 sshd-session[328953]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:28:45 compute-0 sshd-session[328953]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158
Sep 30 18:28:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/054f6af12218a8a29cc9c09c7d8be13cf7c3621445666a9ad6c5f0a644dcffc0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Sep 30 18:28:45 compute-0 nova_compute[265391]: 2025-09-30 18:28:45.905 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
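[annotation] The inventory dict above maps to schedulable capacity with Placement's usual formula, capacity = (total - reserved) * allocation_ratio. Plugging in the logged values as a worked check:

inventory = {   # values copied from the log line above
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 39,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 34.2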
Sep 30 18:28:45 compute-0 podman[329159]: 2025-09-30 18:28:45.916990577 +0000 UTC m=+0.163941331 container init 25b90d2268c64728cec456bfe261a34d0e9712e5af6c24c0e6e1dfaf0da623ab (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250930, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:28:45 compute-0 podman[329159]: 2025-09-30 18:28:45.922959802 +0000 UTC m=+0.169910536 container start 25b90d2268c64728cec456bfe261a34d0e9712e5af6c24c0e6e1dfaf0da623ab (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:28:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:28:45 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[329175]: [NOTICE]   (329179) : New worker (329181) forked
Sep 30 18:28:45 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[329175]: [NOTICE]   (329179) : Loading success.
Sep 30 18:28:46 compute-0 nova_compute[265391]: 2025-09-30 18:28:46.313 2 DEBUG nova.virt.libvirt.driver [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:28:46 compute-0 nova_compute[265391]: 2025-09-30 18:28:46.314 2 DEBUG nova.virt.libvirt.driver [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:28:46 compute-0 nova_compute[265391]: 2025-09-30 18:28:46.315 2 DEBUG nova.virt.libvirt.driver [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:28:46 compute-0 nova_compute[265391]: 2025-09-30 18:28:46.316 2 DEBUG nova.virt.libvirt.driver [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:28:46 compute-0 nova_compute[265391]: 2025-09-30 18:28:46.317 2 DEBUG nova.virt.libvirt.driver [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:28:46 compute-0 nova_compute[265391]: 2025-09-30 18:28:46.317 2 DEBUG nova.virt.libvirt.driver [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:28:46 compute-0 nova_compute[265391]: 2025-09-30 18:28:46.418 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:28:46 compute-0 nova_compute[265391]: 2025-09-30 18:28:46.418 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.724s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:28:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1504: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:28:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:28:46.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:28:46.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:46 compute-0 nova_compute[265391]: 2025-09-30 18:28:46.842 2 INFO nova.compute.manager [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Took 8.49 seconds to spawn the instance on the hypervisor.
Sep 30 18:28:46 compute-0 nova_compute[265391]: 2025-09-30 18:28:46.843 2 DEBUG nova.compute.manager [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Sep 30 18:28:47 compute-0 nova_compute[265391]: 2025-09-30 18:28:47.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:28:47.265Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:28:47 compute-0 nova_compute[265391]: 2025-09-30 18:28:47.290 2 DEBUG nova.compute.manager [req-432bddcf-2dbd-4d7d-9dd3-d18edec8c9a5 req-89bca5bb-f8d0-46f6-b39a-813e153f8ab3 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Received event network-vif-plugged-4cd1879d-b7f9-410d-8517-ebcb79e59e3c external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:28:47 compute-0 nova_compute[265391]: 2025-09-30 18:28:47.290 2 DEBUG oslo_concurrency.lockutils [req-432bddcf-2dbd-4d7d-9dd3-d18edec8c9a5 req-89bca5bb-f8d0-46f6-b39a-813e153f8ab3 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "a39db459-001a-467e-8721-1dca3120f5ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:28:47 compute-0 nova_compute[265391]: 2025-09-30 18:28:47.291 2 DEBUG oslo_concurrency.lockutils [req-432bddcf-2dbd-4d7d-9dd3-d18edec8c9a5 req-89bca5bb-f8d0-46f6-b39a-813e153f8ab3 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "a39db459-001a-467e-8721-1dca3120f5ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:28:47 compute-0 nova_compute[265391]: 2025-09-30 18:28:47.291 2 DEBUG oslo_concurrency.lockutils [req-432bddcf-2dbd-4d7d-9dd3-d18edec8c9a5 req-89bca5bb-f8d0-46f6-b39a-813e153f8ab3 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "a39db459-001a-467e-8721-1dca3120f5ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:28:47 compute-0 nova_compute[265391]: 2025-09-30 18:28:47.291 2 DEBUG nova.compute.manager [req-432bddcf-2dbd-4d7d-9dd3-d18edec8c9a5 req-89bca5bb-f8d0-46f6-b39a-813e153f8ab3 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] No waiting events found dispatching network-vif-plugged-4cd1879d-b7f9-410d-8517-ebcb79e59e3c pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:28:47 compute-0 nova_compute[265391]: 2025-09-30 18:28:47.291 2 WARNING nova.compute.manager [req-432bddcf-2dbd-4d7d-9dd3-d18edec8c9a5 req-89bca5bb-f8d0-46f6-b39a-813e153f8ab3 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Received unexpected event network-vif-plugged-4cd1879d-b7f9-410d-8517-ebcb79e59e3c for instance with vm_state active and task_state None.
Sep 30 18:28:47 compute-0 sshd-session[328953]: Failed password for invalid user cma from 45.252.249.158 port 34116 ssh2
Sep 30 18:28:47 compute-0 nova_compute[265391]: 2025-09-30 18:28:47.383 2 INFO nova.compute.manager [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Took 14.24 seconds to build instance.
Sep 30 18:28:47 compute-0 nova_compute[265391]: 2025-09-30 18:28:47.418 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:28:47 compute-0 nova_compute[265391]: 2025-09-30 18:28:47.422 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:28:47 compute-0 ceph-mon[73755]: pgmap v1504: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:28:47 compute-0 nova_compute[265391]: 2025-09-30 18:28:47.887 2 DEBUG oslo_concurrency.lockutils [None req-580fb608-8aea-4b98-8b44-5042b9552680 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "a39db459-001a-467e-8721-1dca3120f5ee" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.760s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:28:47 compute-0 sshd-session[328953]: Received disconnect from 45.252.249.158 port 34116:11: Bye Bye [preauth]
Sep 30 18:28:47 compute-0 sshd-session[328953]: Disconnected from invalid user cma 45.252.249.158 port 34116 [preauth]
Sep 30 18:28:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1505: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 40 GiB / 40 GiB avail; 655 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Sep 30 18:28:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:28:48.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:28:48] "GET /metrics HTTP/1.1" 200 46627 "" "Prometheus/2.51.0"
Sep 30 18:28:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:28:48] "GET /metrics HTTP/1.1" 200 46627 "" "Prometheus/2.51.0"
Sep 30 18:28:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:28:48.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:28:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:28:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:28:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:28:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:28:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:28:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:28:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:28:49 compute-0 ceph-mon[73755]: pgmap v1505: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 40 GiB / 40 GiB avail; 655 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Sep 30 18:28:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1506: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 18:28:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:28:50.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.002000052s ======
Sep 30 18:28:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:28:50.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Sep 30 18:28:50 compute-0 nova_compute[265391]: 2025-09-30 18:28:50.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:28:51 compute-0 ceph-mon[73755]: pgmap v1506: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 18:28:52 compute-0 nova_compute[265391]: 2025-09-30 18:28:52.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:28:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:28:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1507: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 18:28:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:28:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:28:52.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:28:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:28:52 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/509009086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:28:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:28:52.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:28:53.801Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:28:53 compute-0 ceph-mon[73755]: pgmap v1507: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 18:28:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:28:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:28:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:28:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:28:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:28:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:28:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:28:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:28:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:54.311 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:28:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:54.311 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:28:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:28:54.312 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:28:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1508: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 18:28:54 compute-0 podman[329200]: 2025-09-30 18:28:54.528577579 +0000 UTC m=+0.067027519 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:28:54 compute-0 podman[329199]: 2025-09-30 18:28:54.532447439 +0000 UTC m=+0.071724360 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, container_name=multipathd, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Sep 30 18:28:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:28:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:28:54.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:28:54 compute-0 podman[329201]: 2025-09-30 18:28:54.568303859 +0000 UTC m=+0.104320266 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, container_name=openstack_network_exporter, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350)
Sep 30 18:28:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:28:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:28:54.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:28:55 compute-0 sudo[329254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:28:55 compute-0 sudo[329254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:28:55 compute-0 sudo[329254]: pam_unix(sudo:session): session closed for user root
Sep 30 18:28:55 compute-0 ceph-mon[73755]: pgmap v1508: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 18:28:55 compute-0 nova_compute[265391]: 2025-09-30 18:28:55.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:28:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1509: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:28:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:28:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:28:56.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:28:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:28:56.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:57 compute-0 nova_compute[265391]: 2025-09-30 18:28:57.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:28:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:28:57.266Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:28:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:28:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/554884993' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:28:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:28:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/554884993' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:28:57 compute-0 ceph-mon[73755]: pgmap v1509: 353 pgs: 353 active+clean; 88 MiB data, 272 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:28:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/554884993' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:28:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/554884993' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:28:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1510: 353 pgs: 353 active+clean; 133 MiB data, 288 MiB used, 40 GiB / 40 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 113 op/s
Sep 30 18:28:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:28:58.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:28:58] "GET /metrics HTTP/1.1" 200 46642 "" "Prometheus/2.51.0"
Sep 30 18:28:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:28:58] "GET /metrics HTTP/1.1" 200 46642 "" "Prometheus/2.51.0"
Sep 30 18:28:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:28:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:28:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:28:58.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:28:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:28:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:28:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:28:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:28:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:28:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:28:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:28:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:28:59 compute-0 ovn_controller[156242]: 2025-09-30T18:28:59Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4e:9a:24 10.100.0.8
Sep 30 18:28:59 compute-0 ovn_controller[156242]: 2025-09-30T18:28:59Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4e:9a:24 10.100.0.8
Sep 30 18:28:59 compute-0 podman[276673]: time="2025-09-30T18:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:28:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:28:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10754 "" "Go-http-client/1.1"
Sep 30 18:28:59 compute-0 ceph-mon[73755]: pgmap v1510: 353 pgs: 353 active+clean; 133 MiB data, 288 MiB used, 40 GiB / 40 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 113 op/s
Sep 30 18:29:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1511: 353 pgs: 353 active+clean; 167 MiB data, 318 MiB used, 40 GiB / 40 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 135 op/s
Sep 30 18:29:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:29:00.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:29:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:29:00.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:29:00 compute-0 nova_compute[265391]: 2025-09-30 18:29:00.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:29:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:29:01 compute-0 sshd-session[329284]: Invalid user ubuntu from 14.225.220.107 port 50354
Sep 30 18:29:01 compute-0 sshd-session[329284]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:29:01 compute-0 sshd-session[329284]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107
Sep 30 18:29:01 compute-0 openstack_network_exporter[279566]: ERROR   18:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:29:01 compute-0 openstack_network_exporter[279566]: ERROR   18:29:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:29:01 compute-0 openstack_network_exporter[279566]: ERROR   18:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:29:01 compute-0 openstack_network_exporter[279566]: ERROR   18:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:29:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:29:01 compute-0 openstack_network_exporter[279566]: ERROR   18:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:29:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:29:01 compute-0 ceph-mon[73755]: pgmap v1511: 353 pgs: 353 active+clean; 167 MiB data, 318 MiB used, 40 GiB / 40 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 135 op/s
Sep 30 18:29:02 compute-0 nova_compute[265391]: 2025-09-30 18:29:02.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:29:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1512: 353 pgs: 353 active+clean; 167 MiB data, 318 MiB used, 40 GiB / 40 GiB avail; 210 KiB/s rd, 3.9 MiB/s wr, 84 op/s
Sep 30 18:29:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:29:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:29:02.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:29:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:29:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:29:02.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:29:02 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2129834140' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:29:03 compute-0 sshd-session[329284]: Failed password for invalid user ubuntu from 14.225.220.107 port 50354 ssh2
Sep 30 18:29:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:29:03.802Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:29:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:29:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:29:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:29:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:29:04 compute-0 ceph-mon[73755]: pgmap v1512: 353 pgs: 353 active+clean; 167 MiB data, 318 MiB used, 40 GiB / 40 GiB avail; 210 KiB/s rd, 3.9 MiB/s wr, 84 op/s
Sep 30 18:29:04 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1810602024' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:29:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1513: 353 pgs: 353 active+clean; 167 MiB data, 318 MiB used, 40 GiB / 40 GiB avail; 211 KiB/s rd, 3.9 MiB/s wr, 85 op/s
Sep 30 18:29:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:29:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:29:04.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:29:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:29:04.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:05 compute-0 sshd-session[329284]: Received disconnect from 14.225.220.107 port 50354:11: Bye Bye [preauth]
Sep 30 18:29:05 compute-0 sshd-session[329284]: Disconnected from invalid user ubuntu 14.225.220.107 port 50354 [preauth]
Sep 30 18:29:05 compute-0 nova_compute[265391]: 2025-09-30 18:29:05.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:29:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:29:06 compute-0 ceph-mon[73755]: pgmap v1513: 353 pgs: 353 active+clean; 167 MiB data, 318 MiB used, 40 GiB / 40 GiB avail; 211 KiB/s rd, 3.9 MiB/s wr, 85 op/s
Sep 30 18:29:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1514: 353 pgs: 353 active+clean; 167 MiB data, 318 MiB used, 40 GiB / 40 GiB avail; 211 KiB/s rd, 3.9 MiB/s wr, 84 op/s
Sep 30 18:29:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:29:06.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:29:06.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:07 compute-0 nova_compute[265391]: 2025-09-30 18:29:07.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:29:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:29:07.267Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:29:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:29:07.267Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:29:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:29:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016533266079800742 of space, bias 1.0, pg target 0.3306653215960148 quantized to 32 (current 32)
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:29:07
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['default.rgw.log', 'images', 'default.rgw.control', '.mgr', 'vms', '.nfs', 'volumes', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data']
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:29:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:29:08 compute-0 ceph-mon[73755]: pgmap v1514: 353 pgs: 353 active+clean; 167 MiB data, 318 MiB used, 40 GiB / 40 GiB avail; 211 KiB/s rd, 3.9 MiB/s wr, 84 op/s
Sep 30 18:29:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:29:08 compute-0 sshd-session[329294]: Invalid user steam from 80.94.95.115 port 21084
Sep 30 18:29:08 compute-0 sshd-session[329294]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:29:08 compute-0 sshd-session[329294]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.95.115
Sep 30 18:29:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1515: 353 pgs: 353 active+clean; 167 MiB data, 318 MiB used, 40 GiB / 40 GiB avail; 213 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Sep 30 18:29:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:29:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:29:08.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:29:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:29:08] "GET /metrics HTTP/1.1" 200 46646 "" "Prometheus/2.51.0"
Sep 30 18:29:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:29:08] "GET /metrics HTTP/1.1" 200 46646 "" "Prometheus/2.51.0"
Sep 30 18:29:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:29:08.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:29:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:29:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:29:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:29:10 compute-0 ceph-mon[73755]: pgmap v1515: 353 pgs: 353 active+clean; 167 MiB data, 318 MiB used, 40 GiB / 40 GiB avail; 213 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Sep 30 18:29:10 compute-0 sshd-session[329294]: Failed password for invalid user steam from 80.94.95.115 port 21084 ssh2
Sep 30 18:29:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1516: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 40 GiB / 40 GiB avail; 127 KiB/s rd, 2.5 MiB/s wr, 56 op/s
Sep 30 18:29:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:29:10.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:10 compute-0 sshd-session[329294]: Connection closed by invalid user steam 80.94.95.115 port 21084 [preauth]
Sep 30 18:29:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:29:10.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:10 compute-0 nova_compute[265391]: 2025-09-30 18:29:10.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:29:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:29:12 compute-0 nova_compute[265391]: 2025-09-30 18:29:12.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:29:12 compute-0 ceph-mon[73755]: pgmap v1516: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 40 GiB / 40 GiB avail; 127 KiB/s rd, 2.5 MiB/s wr, 56 op/s
Sep 30 18:29:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1517: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 40 GiB / 40 GiB avail; 8.6 KiB/s rd, 25 KiB/s wr, 11 op/s
Sep 30 18:29:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:29:12.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:29:12.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:13 compute-0 podman[329304]: 2025-09-30 18:29:13.577045424 +0000 UTC m=+0.101576764 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:29:13 compute-0 podman[329303]: 2025-09-30 18:29:13.598423449 +0000 UTC m=+0.124456778 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest)
Sep 30 18:29:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:29:13.802Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:29:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:29:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:29:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:29:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:29:14 compute-0 ceph-mon[73755]: pgmap v1517: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 40 GiB / 40 GiB avail; 8.6 KiB/s rd, 25 KiB/s wr, 11 op/s
Sep 30 18:29:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1518: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 75 op/s
Sep 30 18:29:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:29:14.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:29:14.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:15 compute-0 sudo[329355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:29:15 compute-0 sudo[329355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:29:15 compute-0 sudo[329355]: pam_unix(sudo:session): session closed for user root
Sep 30 18:29:15 compute-0 podman[329379]: 2025-09-30 18:29:15.399429221 +0000 UTC m=+0.065914130 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250930, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.4)
Sep 30 18:29:15 compute-0 nova_compute[265391]: 2025-09-30 18:29:15.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:29:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:29:16 compute-0 ceph-mon[73755]: pgmap v1518: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 75 op/s
Sep 30 18:29:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1519: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 74 op/s
Sep 30 18:29:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:29:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:29:16.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:29:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:29:16.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:17 compute-0 nova_compute[265391]: 2025-09-30 18:29:17.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:29:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:29:17.268Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:29:18 compute-0 ceph-mon[73755]: pgmap v1519: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 74 op/s
Sep 30 18:29:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1520: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 75 op/s
Sep 30 18:29:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:29:18.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:29:18] "GET /metrics HTTP/1.1" 200 46646 "" "Prometheus/2.51.0"
Sep 30 18:29:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:29:18] "GET /metrics HTTP/1.1" 200 46646 "" "Prometheus/2.51.0"
Sep 30 18:29:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:29:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:29:18.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:29:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:29:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:29:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:29:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:29:20 compute-0 ceph-mon[73755]: pgmap v1520: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 75 op/s
Sep 30 18:29:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1521: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 72 op/s
Sep 30 18:29:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:29:20.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:29:20.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:20 compute-0 nova_compute[265391]: 2025-09-30 18:29:20.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:29:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:29:22 compute-0 nova_compute[265391]: 2025-09-30 18:29:22.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:29:22 compute-0 ceph-mon[73755]: pgmap v1521: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 72 op/s
Sep 30 18:29:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:29:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:29:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1522: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 4.7 KiB/s wr, 65 op/s
Sep 30 18:29:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:29:22.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:22 compute-0 nova_compute[265391]: 2025-09-30 18:29:22.610 2 DEBUG nova.virt.libvirt.driver [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Check if temp file /var/lib/nova/instances/tmp45k9azpg exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10968
Sep 30 18:29:22 compute-0 nova_compute[265391]: 2025-09-30 18:29:22.617 2 DEBUG nova.compute.manager [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp45k9azpg',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='a39db459-001a-467e-8721-1dca3120f5ee',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.12/site-packages/nova/compute/manager.py:9294
Sep 30 18:29:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:29:22.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:29:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:29:23.803Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:29:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:29:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:29:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:29:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:29:24 compute-0 ceph-mon[73755]: pgmap v1522: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 4.7 KiB/s wr, 65 op/s
Sep 30 18:29:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1523: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Sep 30 18:29:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:29:24.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:29:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:29:24.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:29:25 compute-0 podman[329411]: 2025-09-30 18:29:25.541376515 +0000 UTC m=+0.076596767 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=iscsid, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Sep 30 18:29:25 compute-0 podman[329412]: 2025-09-30 18:29:25.550750058 +0000 UTC m=+0.079470211 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, release=1755695350, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64)
Sep 30 18:29:25 compute-0 podman[329410]: 2025-09-30 18:29:25.567181984 +0000 UTC m=+0.097967731 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible)
Sep 30 18:29:25 compute-0 nova_compute[265391]: 2025-09-30 18:29:25.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:29:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:29:26 compute-0 ceph-mon[73755]: pgmap v1523: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Sep 30 18:29:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1524: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 40 GiB / 40 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 18:29:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:29:26.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:29:26.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:27 compute-0 nova_compute[265391]: 2025-09-30 18:29:27.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:29:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:29:27.269Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:29:27 compute-0 nova_compute[265391]: 2025-09-30 18:29:27.554 2 DEBUG nova.compute.manager [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Preparing to wait for external event network-vif-plugged-4cd1879d-b7f9-410d-8517-ebcb79e59e3c prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:29:27 compute-0 nova_compute[265391]: 2025-09-30 18:29:27.554 2 DEBUG oslo_concurrency.lockutils [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "a39db459-001a-467e-8721-1dca3120f5ee-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:29:27 compute-0 nova_compute[265391]: 2025-09-30 18:29:27.555 2 DEBUG oslo_concurrency.lockutils [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "a39db459-001a-467e-8721-1dca3120f5ee-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:29:27 compute-0 nova_compute[265391]: 2025-09-30 18:29:27.555 2 DEBUG oslo_concurrency.lockutils [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "a39db459-001a-467e-8721-1dca3120f5ee-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:29:28 compute-0 ceph-mon[73755]: pgmap v1524: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 40 GiB / 40 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 18:29:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1525: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 40 GiB / 40 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Sep 30 18:29:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:29:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:29:28.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:29:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:29:28] "GET /metrics HTTP/1.1" 200 46644 "" "Prometheus/2.51.0"
Sep 30 18:29:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:29:28] "GET /metrics HTTP/1.1" 200 46644 "" "Prometheus/2.51.0"
Sep 30 18:29:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:29:28.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:29:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:29:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:29:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:29:29 compute-0 podman[276673]: time="2025-09-30T18:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:29:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:29:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10754 "" "Go-http-client/1.1"
Sep 30 18:29:30 compute-0 ceph-mon[73755]: pgmap v1525: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 40 GiB / 40 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Sep 30 18:29:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1526: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 40 GiB / 40 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 18:29:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:29:30.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:29:30.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:30 compute-0 nova_compute[265391]: 2025-09-30 18:29:30.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:29:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:29:31 compute-0 openstack_network_exporter[279566]: ERROR   18:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:29:31 compute-0 openstack_network_exporter[279566]: ERROR   18:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:29:31 compute-0 openstack_network_exporter[279566]: ERROR   18:29:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:29:31 compute-0 openstack_network_exporter[279566]: ERROR   18:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:29:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:29:31 compute-0 openstack_network_exporter[279566]: ERROR   18:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:29:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:29:31 compute-0 nova_compute[265391]: 2025-09-30 18:29:31.808 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:29:32 compute-0 nova_compute[265391]: 2025-09-30 18:29:32.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:29:32 compute-0 ceph-mon[73755]: pgmap v1526: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 40 GiB / 40 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 18:29:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1527: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 40 GiB / 40 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 18:29:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:29:32.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:32 compute-0 nova_compute[265391]: 2025-09-30 18:29:32.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:29:32 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:29:32.775 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:29:32 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:29:32.776 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:29:32 compute-0 nova_compute[265391]: 2025-09-30 18:29:32.852 2 DEBUG nova.compute.manager [req-599a5c24-f23a-4a48-95bb-83ee75b82aeb req-a56a5cb6-b8bf-406a-9593-51e6ab1d5120 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Received event network-vif-unplugged-4cd1879d-b7f9-410d-8517-ebcb79e59e3c external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:29:32 compute-0 nova_compute[265391]: 2025-09-30 18:29:32.853 2 DEBUG oslo_concurrency.lockutils [req-599a5c24-f23a-4a48-95bb-83ee75b82aeb req-a56a5cb6-b8bf-406a-9593-51e6ab1d5120 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "a39db459-001a-467e-8721-1dca3120f5ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:29:32 compute-0 nova_compute[265391]: 2025-09-30 18:29:32.853 2 DEBUG oslo_concurrency.lockutils [req-599a5c24-f23a-4a48-95bb-83ee75b82aeb req-a56a5cb6-b8bf-406a-9593-51e6ab1d5120 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "a39db459-001a-467e-8721-1dca3120f5ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:29:32 compute-0 nova_compute[265391]: 2025-09-30 18:29:32.853 2 DEBUG oslo_concurrency.lockutils [req-599a5c24-f23a-4a48-95bb-83ee75b82aeb req-a56a5cb6-b8bf-406a-9593-51e6ab1d5120 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "a39db459-001a-467e-8721-1dca3120f5ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:29:32 compute-0 nova_compute[265391]: 2025-09-30 18:29:32.853 2 DEBUG nova.compute.manager [req-599a5c24-f23a-4a48-95bb-83ee75b82aeb req-a56a5cb6-b8bf-406a-9593-51e6ab1d5120 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] No event matching network-vif-unplugged-4cd1879d-b7f9-410d-8517-ebcb79e59e3c in dict_keys([('network-vif-plugged', '4cd1879d-b7f9-410d-8517-ebcb79e59e3c')]) pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:349
Sep 30 18:29:32 compute-0 nova_compute[265391]: 2025-09-30 18:29:32.854 2 DEBUG nova.compute.manager [req-599a5c24-f23a-4a48-95bb-83ee75b82aeb req-a56a5cb6-b8bf-406a-9593-51e6ab1d5120 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Received event network-vif-unplugged-4cd1879d-b7f9-410d-8517-ebcb79e59e3c for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:29:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:29:32.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:29:33.805Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:29:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:29:33.806Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:29:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:29:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:29:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:29:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:29:34 compute-0 nova_compute[265391]: 2025-09-30 18:29:34.076 2 INFO nova.compute.manager [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Took 6.52 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Sep 30 18:29:34 compute-0 ceph-mon[73755]: pgmap v1527: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 40 GiB / 40 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 18:29:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1528: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 40 GiB / 40 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Sep 30 18:29:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:29:34.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:29:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:29:34.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:29:34 compute-0 nova_compute[265391]: 2025-09-30 18:29:34.917 2 DEBUG nova.compute.manager [req-06a6dbda-693f-4347-bd82-ab16eaaa554a req-ac1475ef-f37d-46ca-9990-ffe76dcb09e4 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Received event network-vif-plugged-4cd1879d-b7f9-410d-8517-ebcb79e59e3c external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:29:34 compute-0 nova_compute[265391]: 2025-09-30 18:29:34.917 2 DEBUG oslo_concurrency.lockutils [req-06a6dbda-693f-4347-bd82-ab16eaaa554a req-ac1475ef-f37d-46ca-9990-ffe76dcb09e4 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "a39db459-001a-467e-8721-1dca3120f5ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:29:34 compute-0 nova_compute[265391]: 2025-09-30 18:29:34.917 2 DEBUG oslo_concurrency.lockutils [req-06a6dbda-693f-4347-bd82-ab16eaaa554a req-ac1475ef-f37d-46ca-9990-ffe76dcb09e4 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "a39db459-001a-467e-8721-1dca3120f5ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:29:34 compute-0 nova_compute[265391]: 2025-09-30 18:29:34.917 2 DEBUG oslo_concurrency.lockutils [req-06a6dbda-693f-4347-bd82-ab16eaaa554a req-ac1475ef-f37d-46ca-9990-ffe76dcb09e4 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "a39db459-001a-467e-8721-1dca3120f5ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:29:34 compute-0 nova_compute[265391]: 2025-09-30 18:29:34.918 2 DEBUG nova.compute.manager [req-06a6dbda-693f-4347-bd82-ab16eaaa554a req-ac1475ef-f37d-46ca-9990-ffe76dcb09e4 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Processing event network-vif-plugged-4cd1879d-b7f9-410d-8517-ebcb79e59e3c _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:29:34 compute-0 nova_compute[265391]: 2025-09-30 18:29:34.918 2 DEBUG nova.compute.manager [req-06a6dbda-693f-4347-bd82-ab16eaaa554a req-ac1475ef-f37d-46ca-9990-ffe76dcb09e4 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Received event network-changed-4cd1879d-b7f9-410d-8517-ebcb79e59e3c external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:29:34 compute-0 nova_compute[265391]: 2025-09-30 18:29:34.918 2 DEBUG nova.compute.manager [req-06a6dbda-693f-4347-bd82-ab16eaaa554a req-ac1475ef-f37d-46ca-9990-ffe76dcb09e4 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Refreshing instance network info cache due to event network-changed-4cd1879d-b7f9-410d-8517-ebcb79e59e3c. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:29:34 compute-0 nova_compute[265391]: 2025-09-30 18:29:34.918 2 DEBUG oslo_concurrency.lockutils [req-06a6dbda-693f-4347-bd82-ab16eaaa554a req-ac1475ef-f37d-46ca-9990-ffe76dcb09e4 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-a39db459-001a-467e-8721-1dca3120f5ee" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:29:34 compute-0 nova_compute[265391]: 2025-09-30 18:29:34.918 2 DEBUG oslo_concurrency.lockutils [req-06a6dbda-693f-4347-bd82-ab16eaaa554a req-ac1475ef-f37d-46ca-9990-ffe76dcb09e4 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-a39db459-001a-467e-8721-1dca3120f5ee" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:29:34 compute-0 nova_compute[265391]: 2025-09-30 18:29:34.918 2 DEBUG nova.network.neutron [req-06a6dbda-693f-4347-bd82-ab16eaaa554a req-ac1475ef-f37d-46ca-9990-ffe76dcb09e4 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Refreshing network info cache for port 4cd1879d-b7f9-410d-8517-ebcb79e59e3c _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:29:34 compute-0 nova_compute[265391]: 2025-09-30 18:29:34.919 2 DEBUG nova.compute.manager [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:29:35 compute-0 sudo[329481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:29:35 compute-0 sudo[329481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:29:35 compute-0 sudo[329481]: pam_unix(sudo:session): session closed for user root
Sep 30 18:29:35 compute-0 nova_compute[265391]: 2025-09-30 18:29:35.428 2 WARNING neutronclient.v2_0.client [req-06a6dbda-693f-4347-bd82-ab16eaaa554a req-ac1475ef-f37d-46ca-9990-ffe76dcb09e4 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:29:35 compute-0 nova_compute[265391]: 2025-09-30 18:29:35.437 2 DEBUG nova.compute.manager [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp45k9azpg',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='a39db459-001a-467e-8721-1dca3120f5ee',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(091476e8-6365-4164-85ee-66b48227aab9),old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9659
Sep 30 18:29:35 compute-0 nova_compute[265391]: 2025-09-30 18:29:35.443 2 DEBUG nova.objects.instance [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lazy-loading 'migration_context' on Instance uuid a39db459-001a-467e-8721-1dca3120f5ee obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:29:35 compute-0 nova_compute[265391]: 2025-09-30 18:29:35.444 2 DEBUG nova.virt.libvirt.driver [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Starting monitoring of live migration _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11543
Sep 30 18:29:35 compute-0 nova_compute[265391]: 2025-09-30 18:29:35.447 2 DEBUG nova.virt.libvirt.driver [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Sep 30 18:29:35 compute-0 nova_compute[265391]: 2025-09-30 18:29:35.447 2 DEBUG nova.virt.libvirt.driver [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Sep 30 18:29:35 compute-0 sudo[329507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:29:35 compute-0 sudo[329507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:29:35 compute-0 sudo[329507]: pam_unix(sudo:session): session closed for user root
Sep 30 18:29:35 compute-0 sudo[329533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:29:35 compute-0 sudo[329533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:29:35 compute-0 nova_compute[265391]: 2025-09-30 18:29:35.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:29:35 compute-0 nova_compute[265391]: 2025-09-30 18:29:35.942 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:29:35 compute-0 nova_compute[265391]: 2025-09-30 18:29:35.949 2 DEBUG nova.virt.libvirt.driver [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Sep 30 18:29:35 compute-0 nova_compute[265391]: 2025-09-30 18:29:35.950 2 DEBUG nova.virt.libvirt.driver [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Sep 30 18:29:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:29:35 compute-0 nova_compute[265391]: 2025-09-30 18:29:35.959 2 DEBUG nova.virt.libvirt.vif [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:28:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-1579591444',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-1579591444',id=20,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:28:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c634e1c17ed54907969576a0eb8eff50',ramdisk_id='',reservation_id='r-ad15oqdw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteStrategies-1883747907',owner_user_name='tempest-TestExecuteStrategies-1883747907-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:28:46Z,user_data=None,user_id='623ef4a55c9e4fc28bb65e49246b5008',uuid=a39db459-001a-467e-8721-1dca3120f5ee,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4cd1879d-b7f9-410d-8517-ebcb79e59e3c", "address": "fa:16:3e:4e:9a:24", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap4cd1879d-b7", "ovs_interfaceid": "4cd1879d-b7f9-410d-8517-ebcb79e59e3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:29:35 compute-0 nova_compute[265391]: 2025-09-30 18:29:35.959 2 DEBUG nova.network.os_vif_util [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "4cd1879d-b7f9-410d-8517-ebcb79e59e3c", "address": "fa:16:3e:4e:9a:24", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap4cd1879d-b7", "ovs_interfaceid": "4cd1879d-b7f9-410d-8517-ebcb79e59e3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:29:35 compute-0 nova_compute[265391]: 2025-09-30 18:29:35.960 2 DEBUG nova.network.os_vif_util [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:9a:24,bridge_name='br-int',has_traffic_filtering=True,id=4cd1879d-b7f9-410d-8517-ebcb79e59e3c,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cd1879d-b7') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:29:35 compute-0 nova_compute[265391]: 2025-09-30 18:29:35.961 2 DEBUG nova.virt.libvirt.migration [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Updating guest XML with vif config: <interface type="ethernet">
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <mac address="fa:16:3e:4e:9a:24"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <model type="virtio"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <mtu size="1442"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <target dev="tap4cd1879d-b7"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]: </interface>
Sep 30 18:29:35 compute-0 nova_compute[265391]:  _update_vif_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:534
Sep 30 18:29:35 compute-0 nova_compute[265391]: 2025-09-30 18:29:35.961 2 DEBUG nova.virt.libvirt.migration [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _remove_cpu_shared_set_xml input xml=<domain type="kvm">
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <name>instance-00000014</name>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <uuid>a39db459-001a-467e-8721-1dca3120f5ee</uuid>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteStrategies-server-1579591444</nova:name>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:28:40</nova:creationTime>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:29:35 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:29:35 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:user uuid="623ef4a55c9e4fc28bb65e49246b5008">tempest-TestExecuteStrategies-1883747907-project-admin</nova:user>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:project uuid="c634e1c17ed54907969576a0eb8eff50">tempest-TestExecuteStrategies-1883747907</nova:project>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:port uuid="4cd1879d-b7f9-410d-8517-ebcb79e59e3c">
Sep 30 18:29:35 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <system>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <entry name="serial">a39db459-001a-467e-8721-1dca3120f5ee</entry>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <entry name="uuid">a39db459-001a-467e-8721-1dca3120f5ee</entry>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </system>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <os>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   </os>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <features>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   </features>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/a39db459-001a-467e-8721-1dca3120f5ee_disk">
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       </source>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/a39db459-001a-467e-8721-1dca3120f5ee_disk.config">
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       </source>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:4e:9a:24"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target dev="tap4cd1879d-b7"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/a39db459-001a-467e-8721-1dca3120f5ee/console.log" append="off"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       </target>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/a39db459-001a-467e-8721-1dca3120f5ee/console.log" append="off"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </console>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </input>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <video>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </video>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]: </domain>
Sep 30 18:29:35 compute-0 nova_compute[265391]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:241
Sep 30 18:29:35 compute-0 nova_compute[265391]: 2025-09-30 18:29:35.963 2 DEBUG nova.virt.libvirt.migration [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _remove_cpu_shared_set_xml output xml=<domain type="kvm">
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <name>instance-00000014</name>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <uuid>a39db459-001a-467e-8721-1dca3120f5ee</uuid>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteStrategies-server-1579591444</nova:name>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:28:40</nova:creationTime>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:29:35 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:29:35 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:user uuid="623ef4a55c9e4fc28bb65e49246b5008">tempest-TestExecuteStrategies-1883747907-project-admin</nova:user>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:project uuid="c634e1c17ed54907969576a0eb8eff50">tempest-TestExecuteStrategies-1883747907</nova:project>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:port uuid="4cd1879d-b7f9-410d-8517-ebcb79e59e3c">
Sep 30 18:29:35 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <system>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <entry name="serial">a39db459-001a-467e-8721-1dca3120f5ee</entry>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <entry name="uuid">a39db459-001a-467e-8721-1dca3120f5ee</entry>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </system>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <os>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   </os>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <features>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   </features>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/a39db459-001a-467e-8721-1dca3120f5ee_disk">
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       </source>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/a39db459-001a-467e-8721-1dca3120f5ee_disk.config">
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       </source>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:4e:9a:24"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target dev="tap4cd1879d-b7"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/a39db459-001a-467e-8721-1dca3120f5ee/console.log" append="off"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       </target>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/a39db459-001a-467e-8721-1dca3120f5ee/console.log" append="off"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </console>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </input>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <video>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </video>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]: </domain>
Sep 30 18:29:35 compute-0 nova_compute[265391]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:250
Sep 30 18:29:35 compute-0 nova_compute[265391]: 2025-09-30 18:29:35.966 2 DEBUG nova.virt.libvirt.migration [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _update_pci_xml output xml=<domain type="kvm">
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <name>instance-00000014</name>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <uuid>a39db459-001a-467e-8721-1dca3120f5ee</uuid>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteStrategies-server-1579591444</nova:name>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:28:40</nova:creationTime>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:29:35 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:29:35 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:user uuid="623ef4a55c9e4fc28bb65e49246b5008">tempest-TestExecuteStrategies-1883747907-project-admin</nova:user>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:project uuid="c634e1c17ed54907969576a0eb8eff50">tempest-TestExecuteStrategies-1883747907</nova:project>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <nova:port uuid="4cd1879d-b7f9-410d-8517-ebcb79e59e3c">
Sep 30 18:29:35 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <system>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <entry name="serial">a39db459-001a-467e-8721-1dca3120f5ee</entry>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <entry name="uuid">a39db459-001a-467e-8721-1dca3120f5ee</entry>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </system>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <os>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   </os>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <features>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   </features>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/a39db459-001a-467e-8721-1dca3120f5ee_disk">
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       </source>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/a39db459-001a-467e-8721-1dca3120f5ee_disk.config">
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       </source>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <interface type="ethernet"><mac address="fa:16:3e:4e:9a:24"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4cd1879d-b7"/><address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </interface><serial type="pty">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/a39db459-001a-467e-8721-1dca3120f5ee/console.log" append="off"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:29:35 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       </target>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/a39db459-001a-467e-8721-1dca3120f5ee/console.log" append="off"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </console>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </input>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <video>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </video>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:29:35 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:29:35 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:29:35 compute-0 nova_compute[265391]: </domain>
Sep 30 18:29:36 compute-0 nova_compute[265391]:  _update_pci_dev_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:166
Sep 30 18:29:36 compute-0 nova_compute[265391]: 2025-09-30 18:29:35.966 2 DEBUG nova.virt.libvirt.driver [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] About to invoke the migrate API _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11175
Sep 30 18:29:36 compute-0 ceph-mon[73755]: pgmap v1528: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 40 GiB / 40 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Sep 30 18:29:36 compute-0 nova_compute[265391]: 2025-09-30 18:29:36.387 2 WARNING neutronclient.v2_0.client [req-06a6dbda-693f-4347-bd82-ab16eaaa554a req-ac1475ef-f37d-46ca-9990-ffe76dcb09e4 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:29:36 compute-0 sudo[329533]: pam_unix(sudo:session): session closed for user root
Sep 30 18:29:36 compute-0 nova_compute[265391]: 2025-09-30 18:29:36.452 2 DEBUG nova.virt.libvirt.migration [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Current None elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Sep 30 18:29:36 compute-0 nova_compute[265391]: 2025-09-30 18:29:36.452 2 INFO nova.virt.libvirt.migration [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Increasing downtime to 50 ms after 0 sec elapsed time
Sep 30 18:29:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:29:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3567577078' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:29:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:29:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3567577078' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:29:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1529: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 16 KiB/s wr, 1 op/s
Sep 30 18:29:36 compute-0 nova_compute[265391]: 2025-09-30 18:29:36.582 2 DEBUG nova.network.neutron [req-06a6dbda-693f-4347-bd82-ab16eaaa554a req-ac1475ef-f37d-46ca-9990-ffe76dcb09e4 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Updated VIF entry in instance network info cache for port 4cd1879d-b7f9-410d-8517-ebcb79e59e3c. _build_network_info_model /usr/lib/python3.12/site-packages/nova/network/neutron.py:3542
Sep 30 18:29:36 compute-0 nova_compute[265391]: 2025-09-30 18:29:36.582 2 DEBUG nova.network.neutron [req-06a6dbda-693f-4347-bd82-ab16eaaa554a req-ac1475ef-f37d-46ca-9990-ffe76dcb09e4 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Updating instance_info_cache with network_info: [{"id": "4cd1879d-b7f9-410d-8517-ebcb79e59e3c", "address": "fa:16:3e:4e:9a:24", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cd1879d-b7", "ovs_interfaceid": "4cd1879d-b7f9-410d-8517-ebcb79e59e3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:29:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:29:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:29:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:29:36 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:29:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:29:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:29:36.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:36 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:29:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:29:36 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:29:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:29:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:29:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:29:36 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:29:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:29:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:29:36 compute-0 sudo[329588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:29:36 compute-0 sudo[329588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:29:36 compute-0 sudo[329588]: pam_unix(sudo:session): session closed for user root
Sep 30 18:29:36 compute-0 sudo[329613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:29:36 compute-0 sudo[329613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:29:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:29:36.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:37 compute-0 nova_compute[265391]: 2025-09-30 18:29:37.088 2 DEBUG oslo_concurrency.lockutils [req-06a6dbda-693f-4347-bd82-ab16eaaa554a req-ac1475ef-f37d-46ca-9990-ffe76dcb09e4 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-a39db459-001a-467e-8721-1dca3120f5ee" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:29:37 compute-0 ovn_controller[156242]: 2025-09-30T18:29:37Z|00174|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Sep 30 18:29:37 compute-0 nova_compute[265391]: 2025-09-30 18:29:37.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:29:37 compute-0 podman[329680]: 2025-09-30 18:29:37.244055738 +0000 UTC m=+0.094080291 container create d20ce6ee90646b5a2a23d189bc1273b5cc98ae0583718becc0a96749c8b82f6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Sep 30 18:29:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:29:37.269Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:29:37 compute-0 podman[329680]: 2025-09-30 18:29:37.184991256 +0000 UTC m=+0.035015829 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:29:37 compute-0 systemd[1]: Started libpod-conmon-d20ce6ee90646b5a2a23d189bc1273b5cc98ae0583718becc0a96749c8b82f6e.scope.
Sep 30 18:29:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:29:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:29:37 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:29:37 compute-0 podman[329680]: 2025-09-30 18:29:37.33285719 +0000 UTC m=+0.182881763 container init d20ce6ee90646b5a2a23d189bc1273b5cc98ae0583718becc0a96749c8b82f6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_mendeleev, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 18:29:37 compute-0 podman[329680]: 2025-09-30 18:29:37.34058505 +0000 UTC m=+0.190609593 container start d20ce6ee90646b5a2a23d189bc1273b5cc98ae0583718becc0a96749c8b82f6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Sep 30 18:29:37 compute-0 podman[329680]: 2025-09-30 18:29:37.34366575 +0000 UTC m=+0.193690293 container attach d20ce6ee90646b5a2a23d189bc1273b5cc98ae0583718becc0a96749c8b82f6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_mendeleev, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:29:37 compute-0 eloquent_mendeleev[329696]: 167 167
Sep 30 18:29:37 compute-0 systemd[1]: libpod-d20ce6ee90646b5a2a23d189bc1273b5cc98ae0583718becc0a96749c8b82f6e.scope: Deactivated successfully.
Sep 30 18:29:37 compute-0 podman[329680]: 2025-09-30 18:29:37.346936305 +0000 UTC m=+0.196960858 container died d20ce6ee90646b5a2a23d189bc1273b5cc98ae0583718becc0a96749c8b82f6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_mendeleev, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Sep 30 18:29:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3567577078' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:29:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3567577078' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:29:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:29:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:29:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:29:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:29:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:29:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:29:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:29:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:29:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-4505ffe45c3d8bf50a65bcdc8c53bb5b4352aea529175821677c2d8f7b1f83e9-merged.mount: Deactivated successfully.
Sep 30 18:29:37 compute-0 podman[329680]: 2025-09-30 18:29:37.394306503 +0000 UTC m=+0.244331046 container remove d20ce6ee90646b5a2a23d189bc1273b5cc98ae0583718becc0a96749c8b82f6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_mendeleev, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:29:37 compute-0 systemd[1]: libpod-conmon-d20ce6ee90646b5a2a23d189bc1273b5cc98ae0583718becc0a96749c8b82f6e.scope: Deactivated successfully.
Sep 30 18:29:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:29:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:29:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:29:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:29:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:29:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:29:37 compute-0 nova_compute[265391]: 2025-09-30 18:29:37.475 2 INFO nova.virt.libvirt.driver [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Migration running for 1 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Sep 30 18:29:37 compute-0 podman[329721]: 2025-09-30 18:29:37.555563263 +0000 UTC m=+0.040888781 container create c515bace0e2738d78708149b142f68039f6767df6b0444a935b920f56dca9bc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_proskuriakova, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Sep 30 18:29:37 compute-0 systemd[1]: Started libpod-conmon-c515bace0e2738d78708149b142f68039f6767df6b0444a935b920f56dca9bc2.scope.
Sep 30 18:29:37 compute-0 podman[329721]: 2025-09-30 18:29:37.538209634 +0000 UTC m=+0.023535172 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:29:37 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:29:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/635062057cbf7b086564db6084367f261fb63014f2e2a5c086f5454ed8ae6335/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:29:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/635062057cbf7b086564db6084367f261fb63014f2e2a5c086f5454ed8ae6335/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:29:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/635062057cbf7b086564db6084367f261fb63014f2e2a5c086f5454ed8ae6335/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:29:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/635062057cbf7b086564db6084367f261fb63014f2e2a5c086f5454ed8ae6335/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:29:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/635062057cbf7b086564db6084367f261fb63014f2e2a5c086f5454ed8ae6335/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:29:37 compute-0 podman[329721]: 2025-09-30 18:29:37.656732835 +0000 UTC m=+0.142058353 container init c515bace0e2738d78708149b142f68039f6767df6b0444a935b920f56dca9bc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_proskuriakova, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Sep 30 18:29:37 compute-0 podman[329721]: 2025-09-30 18:29:37.675059501 +0000 UTC m=+0.160385019 container start c515bace0e2738d78708149b142f68039f6767df6b0444a935b920f56dca9bc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_proskuriakova, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:29:37 compute-0 podman[329721]: 2025-09-30 18:29:37.678859709 +0000 UTC m=+0.164185247 container attach c515bace0e2738d78708149b142f68039f6767df6b0444a935b920f56dca9bc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_proskuriakova, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:29:37 compute-0 nova_compute[265391]: 2025-09-30 18:29:37.979 2 DEBUG nova.virt.libvirt.migration [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Sep 30 18:29:37 compute-0 nova_compute[265391]: 2025-09-30 18:29:37.983 2 DEBUG nova.virt.libvirt.migration [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Downtime does not need to change update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:671
Sep 30 18:29:38 compute-0 pedantic_proskuriakova[329737]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:29:38 compute-0 pedantic_proskuriakova[329737]: --> All data devices are unavailable
Sep 30 18:29:38 compute-0 systemd[1]: libpod-c515bace0e2738d78708149b142f68039f6767df6b0444a935b920f56dca9bc2.scope: Deactivated successfully.
Sep 30 18:29:38 compute-0 podman[329756]: 2025-09-30 18:29:38.082010061 +0000 UTC m=+0.027193366 container died c515bace0e2738d78708149b142f68039f6767df6b0444a935b920f56dca9bc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_proskuriakova, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:29:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-635062057cbf7b086564db6084367f261fb63014f2e2a5c086f5454ed8ae6335-merged.mount: Deactivated successfully.
Sep 30 18:29:38 compute-0 podman[329756]: 2025-09-30 18:29:38.127835979 +0000 UTC m=+0.073019274 container remove c515bace0e2738d78708149b142f68039f6767df6b0444a935b920f56dca9bc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_proskuriakova, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 18:29:38 compute-0 systemd[1]: libpod-conmon-c515bace0e2738d78708149b142f68039f6767df6b0444a935b920f56dca9bc2.scope: Deactivated successfully.
Sep 30 18:29:38 compute-0 sudo[329613]: pam_unix(sudo:session): session closed for user root
Sep 30 18:29:38 compute-0 sudo[329771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:29:38 compute-0 sudo[329771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:29:38 compute-0 sudo[329771]: pam_unix(sudo:session): session closed for user root
Sep 30 18:29:38 compute-0 kernel: tap4cd1879d-b7 (unregistering): left promiscuous mode
Sep 30 18:29:38 compute-0 NetworkManager[45059]: <info>  [1759256978.2933] device (tap4cd1879d-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 18:29:38 compute-0 ovn_controller[156242]: 2025-09-30T18:29:38Z|00175|binding|INFO|Releasing lport 4cd1879d-b7f9-410d-8517-ebcb79e59e3c from this chassis (sb_readonly=0)
Sep 30 18:29:38 compute-0 ovn_controller[156242]: 2025-09-30T18:29:38Z|00176|binding|INFO|Setting lport 4cd1879d-b7f9-410d-8517-ebcb79e59e3c down in Southbound
Sep 30 18:29:38 compute-0 nova_compute[265391]: 2025-09-30 18:29:38.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:29:38 compute-0 ovn_controller[156242]: 2025-09-30T18:29:38Z|00177|binding|INFO|Removing iface tap4cd1879d-b7 ovn-installed in OVS
Sep 30 18:29:38 compute-0 nova_compute[265391]: 2025-09-30 18:29:38.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:29:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:29:38.348 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4e:9a:24 10.100.0.8'], port_security=['fa:16:3e:4e:9a:24 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '81ab3fff-d6d4-4262-9f24-1b212876e52c'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'a39db459-001a-467e-8721-1dca3120f5ee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6901f664-336b-42d2-bbf7-58951befc8d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c634e1c17ed54907969576a0eb8eff50', 'neutron:revision_number': '10', 'neutron:security_group_ids': '11cf84c1-9641-409c-b5f5-6c5fe3a8afe5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c24d9de-651b-4bf8-842a-1286ab88b11d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=4cd1879d-b7f9-410d-8517-ebcb79e59e3c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:29:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:29:38.350 166158 INFO neutron.agent.ovn.metadata.agent [-] Port 4cd1879d-b7f9-410d-8517-ebcb79e59e3c in datapath 6901f664-336b-42d2-bbf7-58951befc8d1 unbound from our chassis
Sep 30 18:29:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:29:38.351 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6901f664-336b-42d2-bbf7-58951befc8d1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:29:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:29:38.352 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[502ac593-c10a-4f68-90ee-f8a090ba4755]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:29:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:29:38.353 166158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1 namespace which is not needed anymore
Sep 30 18:29:38 compute-0 nova_compute[265391]: 2025-09-30 18:29:38.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:29:38 compute-0 sudo[329796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:29:38 compute-0 sudo[329796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:29:38 compute-0 ceph-mon[73755]: pgmap v1529: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 16 KiB/s wr, 1 op/s
Sep 30 18:29:38 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d00000014.scope: Deactivated successfully.
Sep 30 18:29:38 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d00000014.scope: Consumed 14.473s CPU time.
Sep 30 18:29:38 compute-0 systemd-machined[219917]: Machine qemu-14-instance-00000014 terminated.
Sep 30 18:29:38 compute-0 nova_compute[265391]: 2025-09-30 18:29:38.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:29:38 compute-0 virtqemud[265263]: Unable to get XATTR trusted.libvirt.security.ref_selinux on a39db459-001a-467e-8721-1dca3120f5ee_disk: No such file or directory
Sep 30 18:29:38 compute-0 virtqemud[265263]: Unable to get XATTR trusted.libvirt.security.ref_dac on a39db459-001a-467e-8721-1dca3120f5ee_disk: No such file or directory
Sep 30 18:29:38 compute-0 nova_compute[265391]: 2025-09-30 18:29:38.484 2 DEBUG nova.virt.libvirt.driver [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Migrate API has completed _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11182
Sep 30 18:29:38 compute-0 nova_compute[265391]: 2025-09-30 18:29:38.485 2 DEBUG nova.virt.libvirt.driver [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Migration operation thread has finished _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11230
Sep 30 18:29:38 compute-0 nova_compute[265391]: 2025-09-30 18:29:38.485 2 DEBUG nova.virt.libvirt.driver [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Migration operation thread notification thread_finished /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11533
Sep 30 18:29:38 compute-0 nova_compute[265391]: 2025-09-30 18:29:38.486 2 DEBUG nova.virt.libvirt.guest [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid 'a39db459-001a-467e-8721-1dca3120f5ee' (instance-00000014) get_job_info /usr/lib/python3.12/site-packages/nova/virt/libvirt/guest.py:687
Sep 30 18:29:38 compute-0 nova_compute[265391]: 2025-09-30 18:29:38.486 2 INFO nova.virt.libvirt.driver [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Migration operation has completed
Sep 30 18:29:38 compute-0 nova_compute[265391]: 2025-09-30 18:29:38.486 2 INFO nova.compute.manager [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] _post_live_migration() is started..
Sep 30 18:29:38 compute-0 nova_compute[265391]: 2025-09-30 18:29:38.503 2 WARNING neutronclient.v2_0.client [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:29:38 compute-0 nova_compute[265391]: 2025-09-30 18:29:38.503 2 WARNING neutronclient.v2_0.client [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:29:38 compute-0 podman[329843]: 2025-09-30 18:29:38.533776044 +0000 UTC m=+0.051781304 container kill 25b90d2268c64728cec456bfe261a34d0e9712e5af6c24c0e6e1dfaf0da623ab (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Sep 30 18:29:38 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[329175]: [NOTICE]   (329179) : haproxy version is 3.0.5-8e879a5
Sep 30 18:29:38 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[329175]: [NOTICE]   (329179) : path to executable is /usr/sbin/haproxy
Sep 30 18:29:38 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[329175]: [WARNING]  (329179) : Exiting Master process...
Sep 30 18:29:38 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[329175]: [ALERT]    (329179) : Current worker (329181) exited with code 143 (Terminated)
Sep 30 18:29:38 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[329175]: [WARNING]  (329179) : All workers exited. Exiting... (0)
Sep 30 18:29:38 compute-0 systemd[1]: libpod-25b90d2268c64728cec456bfe261a34d0e9712e5af6c24c0e6e1dfaf0da623ab.scope: Deactivated successfully.
Sep 30 18:29:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1530: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 16 KiB/s wr, 2 op/s
Sep 30 18:29:38 compute-0 nova_compute[265391]: 2025-09-30 18:29:38.567 2 DEBUG nova.compute.manager [req-911e8cac-77ca-42ff-b33c-c809e8fb4f5b req-98362346-67b6-4af4-a192-0604658b2afc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Received event network-vif-unplugged-4cd1879d-b7f9-410d-8517-ebcb79e59e3c external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:29:38 compute-0 nova_compute[265391]: 2025-09-30 18:29:38.567 2 DEBUG oslo_concurrency.lockutils [req-911e8cac-77ca-42ff-b33c-c809e8fb4f5b req-98362346-67b6-4af4-a192-0604658b2afc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "a39db459-001a-467e-8721-1dca3120f5ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:29:38 compute-0 nova_compute[265391]: 2025-09-30 18:29:38.567 2 DEBUG oslo_concurrency.lockutils [req-911e8cac-77ca-42ff-b33c-c809e8fb4f5b req-98362346-67b6-4af4-a192-0604658b2afc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "a39db459-001a-467e-8721-1dca3120f5ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:29:38 compute-0 nova_compute[265391]: 2025-09-30 18:29:38.568 2 DEBUG oslo_concurrency.lockutils [req-911e8cac-77ca-42ff-b33c-c809e8fb4f5b req-98362346-67b6-4af4-a192-0604658b2afc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "a39db459-001a-467e-8721-1dca3120f5ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:29:38 compute-0 nova_compute[265391]: 2025-09-30 18:29:38.568 2 DEBUG nova.compute.manager [req-911e8cac-77ca-42ff-b33c-c809e8fb4f5b req-98362346-67b6-4af4-a192-0604658b2afc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] No waiting events found dispatching network-vif-unplugged-4cd1879d-b7f9-410d-8517-ebcb79e59e3c pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:29:38 compute-0 nova_compute[265391]: 2025-09-30 18:29:38.568 2 DEBUG nova.compute.manager [req-911e8cac-77ca-42ff-b33c-c809e8fb4f5b req-98362346-67b6-4af4-a192-0604658b2afc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Received event network-vif-unplugged-4cd1879d-b7f9-410d-8517-ebcb79e59e3c for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:29:38 compute-0 podman[329868]: 2025-09-30 18:29:38.584754925 +0000 UTC m=+0.029484835 container died 25b90d2268c64728cec456bfe261a34d0e9712e5af6c24c0e6e1dfaf0da623ab (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:29:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:29:38.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-25b90d2268c64728cec456bfe261a34d0e9712e5af6c24c0e6e1dfaf0da623ab-userdata-shm.mount: Deactivated successfully.
Sep 30 18:29:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-054f6af12218a8a29cc9c09c7d8be13cf7c3621445666a9ad6c5f0a644dcffc0-merged.mount: Deactivated successfully.
Sep 30 18:29:38 compute-0 podman[329868]: 2025-09-30 18:29:38.624883546 +0000 UTC m=+0.069613406 container cleanup 25b90d2268c64728cec456bfe261a34d0e9712e5af6c24c0e6e1dfaf0da623ab (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Sep 30 18:29:38 compute-0 systemd[1]: libpod-conmon-25b90d2268c64728cec456bfe261a34d0e9712e5af6c24c0e6e1dfaf0da623ab.scope: Deactivated successfully.
Sep 30 18:29:38 compute-0 podman[329870]: 2025-09-30 18:29:38.644832783 +0000 UTC m=+0.080894958 container remove 25b90d2268c64728cec456bfe261a34d0e9712e5af6c24c0e6e1dfaf0da623ab (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Sep 30 18:29:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:29:38.659 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[60d71208-965b-408b-85ae-47696d3dd245]: (4, ("Tue Sep 30 06:29:38 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1 (25b90d2268c64728cec456bfe261a34d0e9712e5af6c24c0e6e1dfaf0da623ab)\n25b90d2268c64728cec456bfe261a34d0e9712e5af6c24c0e6e1dfaf0da623ab\nTue Sep 30 06:29:38 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1 (25b90d2268c64728cec456bfe261a34d0e9712e5af6c24c0e6e1dfaf0da623ab)\n25b90d2268c64728cec456bfe261a34d0e9712e5af6c24c0e6e1dfaf0da623ab\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:29:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:29:38.660 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[5937d437-4003-4e1a-a7fc-031ff2d2da84]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:29:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:29:38.660 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:29:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:29:38.661 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[b0eec9e5-0aa6-4764-80ba-c5b52341848a]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:29:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:29:38.661 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6901f664-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:29:38 compute-0 kernel: tap6901f664-30: left promiscuous mode
Sep 30 18:29:38 compute-0 nova_compute[265391]: 2025-09-30 18:29:38.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:29:38 compute-0 nova_compute[265391]: 2025-09-30 18:29:38.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:29:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:29:38.684 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[067dd4cc-7045-46e9-be30-1853740944b5]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:29:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:29:38.706 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[031339e0-85a3-4763-9fd5-836e9e91b280]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:29:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:29:38.708 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[fd2bb1ac-b9a4-4c8d-8da0-2356d6a52a05]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:29:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:29:38.726 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[9de5bd1a-9da2-42fe-8ae6-a072cd44766e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 552377, 'reachable_time': 27115, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329934, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:29:38 compute-0 systemd[1]: run-netns-ovnmeta\x2d6901f664\x2d336b\x2d42d2\x2dbbf7\x2d58951befc8d1.mount: Deactivated successfully.
Sep 30 18:29:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:29:38.732 166383 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1 deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Sep 30 18:29:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:29:38.732 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[97893436-bbdb-46e6-adcb-7bb7fb35ad90]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:29:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:29:38.777 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:29:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:29:38] "GET /metrics HTTP/1.1" 200 46645 "" "Prometheus/2.51.0"
Sep 30 18:29:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:29:38] "GET /metrics HTTP/1.1" 200 46645 "" "Prometheus/2.51.0"
Sep 30 18:29:38 compute-0 podman[329940]: 2025-09-30 18:29:38.814378299 +0000 UTC m=+0.041692242 container create 6d4e335ac1762d15abd671a622b447027bb00d9e0b58106513c5c90376066688 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Sep 30 18:29:38 compute-0 systemd[1]: Started libpod-conmon-6d4e335ac1762d15abd671a622b447027bb00d9e0b58106513c5c90376066688.scope.
Sep 30 18:29:38 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:29:38 compute-0 podman[329940]: 2025-09-30 18:29:38.797758808 +0000 UTC m=+0.025072771 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:29:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:29:38.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:38 compute-0 podman[329940]: 2025-09-30 18:29:38.907262467 +0000 UTC m=+0.134576430 container init 6d4e335ac1762d15abd671a622b447027bb00d9e0b58106513c5c90376066688 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_franklin, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:29:38 compute-0 podman[329940]: 2025-09-30 18:29:38.913825877 +0000 UTC m=+0.141139820 container start 6d4e335ac1762d15abd671a622b447027bb00d9e0b58106513c5c90376066688 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_franklin, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Sep 30 18:29:38 compute-0 podman[329940]: 2025-09-30 18:29:38.917987005 +0000 UTC m=+0.145300978 container attach 6d4e335ac1762d15abd671a622b447027bb00d9e0b58106513c5c90376066688 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_franklin, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Sep 30 18:29:38 compute-0 brave_franklin[329957]: 167 167
Sep 30 18:29:38 compute-0 systemd[1]: libpod-6d4e335ac1762d15abd671a622b447027bb00d9e0b58106513c5c90376066688.scope: Deactivated successfully.
Sep 30 18:29:38 compute-0 conmon[329957]: conmon 6d4e335ac1762d15abd6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6d4e335ac1762d15abd671a622b447027bb00d9e0b58106513c5c90376066688.scope/container/memory.events
Sep 30 18:29:38 compute-0 podman[329940]: 2025-09-30 18:29:38.9231996 +0000 UTC m=+0.150513543 container died 6d4e335ac1762d15abd671a622b447027bb00d9e0b58106513c5c90376066688 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Sep 30 18:29:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-835076b55f477528e49226bfe34c97814856c0001e357ca826ecf830f82c6a14-merged.mount: Deactivated successfully.
Sep 30 18:29:38 compute-0 podman[329940]: 2025-09-30 18:29:38.961584645 +0000 UTC m=+0.188898608 container remove 6d4e335ac1762d15abd671a622b447027bb00d9e0b58106513c5c90376066688 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_franklin, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:29:38 compute-0 systemd[1]: libpod-conmon-6d4e335ac1762d15abd671a622b447027bb00d9e0b58106513c5c90376066688.scope: Deactivated successfully.
Sep 30 18:29:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:29:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:29:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:29:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:29:39 compute-0 nova_compute[265391]: 2025-09-30 18:29:39.076 2 DEBUG nova.compute.manager [req-0bf3e0d1-2d72-471d-9c0d-194cfa59984d req-90a97f8b-e945-4310-8044-3fc4f5918a95 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Received event network-vif-unplugged-4cd1879d-b7f9-410d-8517-ebcb79e59e3c external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:29:39 compute-0 nova_compute[265391]: 2025-09-30 18:29:39.077 2 DEBUG oslo_concurrency.lockutils [req-0bf3e0d1-2d72-471d-9c0d-194cfa59984d req-90a97f8b-e945-4310-8044-3fc4f5918a95 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "a39db459-001a-467e-8721-1dca3120f5ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:29:39 compute-0 nova_compute[265391]: 2025-09-30 18:29:39.077 2 DEBUG oslo_concurrency.lockutils [req-0bf3e0d1-2d72-471d-9c0d-194cfa59984d req-90a97f8b-e945-4310-8044-3fc4f5918a95 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "a39db459-001a-467e-8721-1dca3120f5ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:29:39 compute-0 nova_compute[265391]: 2025-09-30 18:29:39.077 2 DEBUG oslo_concurrency.lockutils [req-0bf3e0d1-2d72-471d-9c0d-194cfa59984d req-90a97f8b-e945-4310-8044-3fc4f5918a95 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "a39db459-001a-467e-8721-1dca3120f5ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:29:39 compute-0 nova_compute[265391]: 2025-09-30 18:29:39.077 2 DEBUG nova.compute.manager [req-0bf3e0d1-2d72-471d-9c0d-194cfa59984d req-90a97f8b-e945-4310-8044-3fc4f5918a95 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] No waiting events found dispatching network-vif-unplugged-4cd1879d-b7f9-410d-8517-ebcb79e59e3c pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:29:39 compute-0 nova_compute[265391]: 2025-09-30 18:29:39.077 2 DEBUG nova.compute.manager [req-0bf3e0d1-2d72-471d-9c0d-194cfa59984d req-90a97f8b-e945-4310-8044-3fc4f5918a95 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Received event network-vif-unplugged-4cd1879d-b7f9-410d-8517-ebcb79e59e3c for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:29:39 compute-0 podman[329981]: 2025-09-30 18:29:39.156418356 +0000 UTC m=+0.038666623 container create 539c71760fe543205f2d076a7eb324fb4c1d5a5949c9f64158c38c1777893b53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_turing, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:29:39 compute-0 systemd[1]: Started libpod-conmon-539c71760fe543205f2d076a7eb324fb4c1d5a5949c9f64158c38c1777893b53.scope.
Sep 30 18:29:39 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:29:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/534309921ad4f609526295dbd7301b6fcd7d131f5972d9e761b6314dcad0c945/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:29:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/534309921ad4f609526295dbd7301b6fcd7d131f5972d9e761b6314dcad0c945/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:29:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/534309921ad4f609526295dbd7301b6fcd7d131f5972d9e761b6314dcad0c945/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:29:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/534309921ad4f609526295dbd7301b6fcd7d131f5972d9e761b6314dcad0c945/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:29:39 compute-0 podman[329981]: 2025-09-30 18:29:39.226803581 +0000 UTC m=+0.109051868 container init 539c71760fe543205f2d076a7eb324fb4c1d5a5949c9f64158c38c1777893b53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 18:29:39 compute-0 podman[329981]: 2025-09-30 18:29:39.232778266 +0000 UTC m=+0.115026533 container start 539c71760fe543205f2d076a7eb324fb4c1d5a5949c9f64158c38c1777893b53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_turing, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:29:39 compute-0 nova_compute[265391]: 2025-09-30 18:29:39.232 2 DEBUG nova.network.neutron [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Activated binding for port 4cd1879d-b7f9-410d-8517-ebcb79e59e3c and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.12/site-packages/nova/network/neutron.py:3241
Sep 30 18:29:39 compute-0 nova_compute[265391]: 2025-09-30 18:29:39.234 2 DEBUG nova.compute.manager [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "4cd1879d-b7f9-410d-8517-ebcb79e59e3c", "address": "fa:16:3e:4e:9a:24", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cd1879d-b7", "ovs_interfaceid": "4cd1879d-b7f9-410d-8517-ebcb79e59e3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10059
Sep 30 18:29:39 compute-0 nova_compute[265391]: 2025-09-30 18:29:39.234 2 DEBUG nova.virt.libvirt.vif [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:28:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-1579591444',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-1579591444',id=20,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:28:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c634e1c17ed54907969576a0eb8eff50',ramdisk_id='',reservation_id='r-ad15oqdw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteStrategies-1883747907',owner_user_name='tempest-TestExecuteStrategies-1883747907-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:29:17Z,user_data=None,user_id='623ef4a55c9e4fc28bb65e49246b5008',uuid=a39db459-001a-467e-8721-1dca3120f5ee,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4cd1879d-b7f9-410d-8517-ebcb79e59e3c", "address": "fa:16:3e:4e:9a:24", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cd1879d-b7", "ovs_interfaceid": "4cd1879d-b7f9-410d-8517-ebcb79e59e3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Sep 30 18:29:39 compute-0 nova_compute[265391]: 2025-09-30 18:29:39.235 2 DEBUG nova.network.os_vif_util [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "4cd1879d-b7f9-410d-8517-ebcb79e59e3c", "address": "fa:16:3e:4e:9a:24", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cd1879d-b7", "ovs_interfaceid": "4cd1879d-b7f9-410d-8517-ebcb79e59e3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:29:39 compute-0 podman[329981]: 2025-09-30 18:29:39.139822106 +0000 UTC m=+0.022070383 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:29:39 compute-0 podman[329981]: 2025-09-30 18:29:39.23563328 +0000 UTC m=+0.117881547 container attach 539c71760fe543205f2d076a7eb324fb4c1d5a5949c9f64158c38c1777893b53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_turing, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:29:39 compute-0 nova_compute[265391]: 2025-09-30 18:29:39.235 2 DEBUG nova.network.os_vif_util [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:9a:24,bridge_name='br-int',has_traffic_filtering=True,id=4cd1879d-b7f9-410d-8517-ebcb79e59e3c,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cd1879d-b7') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:29:39 compute-0 nova_compute[265391]: 2025-09-30 18:29:39.236 2 DEBUG os_vif [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:9a:24,bridge_name='br-int',has_traffic_filtering=True,id=4cd1879d-b7f9-410d-8517-ebcb79e59e3c,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cd1879d-b7') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Sep 30 18:29:39 compute-0 nova_compute[265391]: 2025-09-30 18:29:39.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:29:39 compute-0 nova_compute[265391]: 2025-09-30 18:29:39.238 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4cd1879d-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:29:39 compute-0 nova_compute[265391]: 2025-09-30 18:29:39.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:29:39 compute-0 nova_compute[265391]: 2025-09-30 18:29:39.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:29:39 compute-0 nova_compute[265391]: 2025-09-30 18:29:39.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:29:39 compute-0 nova_compute[265391]: 2025-09-30 18:29:39.242 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=ee7b29fb-20ee-48ae-aa4c-13f2a302c0e0) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:29:39 compute-0 nova_compute[265391]: 2025-09-30 18:29:39.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:29:39 compute-0 nova_compute[265391]: 2025-09-30 18:29:39.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:29:39 compute-0 nova_compute[265391]: 2025-09-30 18:29:39.247 2 INFO os_vif [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:9a:24,bridge_name='br-int',has_traffic_filtering=True,id=4cd1879d-b7f9-410d-8517-ebcb79e59e3c,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cd1879d-b7')
Sep 30 18:29:39 compute-0 nova_compute[265391]: 2025-09-30 18:29:39.247 2 DEBUG oslo_concurrency.lockutils [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:29:39 compute-0 nova_compute[265391]: 2025-09-30 18:29:39.247 2 DEBUG oslo_concurrency.lockutils [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:29:39 compute-0 nova_compute[265391]: 2025-09-30 18:29:39.248 2 DEBUG oslo_concurrency.lockutils [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:29:39 compute-0 nova_compute[265391]: 2025-09-30 18:29:39.248 2 DEBUG nova.compute.manager [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10082
Sep 30 18:29:39 compute-0 nova_compute[265391]: 2025-09-30 18:29:39.248 2 INFO nova.virt.libvirt.driver [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Deleting instance files /var/lib/nova/instances/a39db459-001a-467e-8721-1dca3120f5ee_del
Sep 30 18:29:39 compute-0 nova_compute[265391]: 2025-09-30 18:29:39.249 2 INFO nova.virt.libvirt.driver [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Deletion of /var/lib/nova/instances/a39db459-001a-467e-8721-1dca3120f5ee_del complete
Sep 30 18:29:39 compute-0 nova_compute[265391]: 2025-09-30 18:29:39.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:29:39 compute-0 nova_compute[265391]: 2025-09-30 18:29:39.427 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:29:39 compute-0 competent_turing[329998]: {
Sep 30 18:29:39 compute-0 competent_turing[329998]:     "0": [
Sep 30 18:29:39 compute-0 competent_turing[329998]:         {
Sep 30 18:29:39 compute-0 competent_turing[329998]:             "devices": [
Sep 30 18:29:39 compute-0 competent_turing[329998]:                 "/dev/loop3"
Sep 30 18:29:39 compute-0 competent_turing[329998]:             ],
Sep 30 18:29:39 compute-0 competent_turing[329998]:             "lv_name": "ceph_lv0",
Sep 30 18:29:39 compute-0 competent_turing[329998]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:29:39 compute-0 competent_turing[329998]:             "lv_size": "21470642176",
Sep 30 18:29:39 compute-0 competent_turing[329998]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:29:39 compute-0 competent_turing[329998]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:29:39 compute-0 competent_turing[329998]:             "name": "ceph_lv0",
Sep 30 18:29:39 compute-0 competent_turing[329998]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:29:39 compute-0 competent_turing[329998]:             "tags": {
Sep 30 18:29:39 compute-0 competent_turing[329998]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:29:39 compute-0 competent_turing[329998]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:29:39 compute-0 competent_turing[329998]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:29:39 compute-0 competent_turing[329998]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:29:39 compute-0 competent_turing[329998]:                 "ceph.cluster_name": "ceph",
Sep 30 18:29:39 compute-0 competent_turing[329998]:                 "ceph.crush_device_class": "",
Sep 30 18:29:39 compute-0 competent_turing[329998]:                 "ceph.encrypted": "0",
Sep 30 18:29:39 compute-0 competent_turing[329998]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:29:39 compute-0 competent_turing[329998]:                 "ceph.osd_id": "0",
Sep 30 18:29:39 compute-0 competent_turing[329998]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:29:39 compute-0 competent_turing[329998]:                 "ceph.type": "block",
Sep 30 18:29:39 compute-0 competent_turing[329998]:                 "ceph.vdo": "0",
Sep 30 18:29:39 compute-0 competent_turing[329998]:                 "ceph.with_tpm": "0"
Sep 30 18:29:39 compute-0 competent_turing[329998]:             },
Sep 30 18:29:39 compute-0 competent_turing[329998]:             "type": "block",
Sep 30 18:29:39 compute-0 competent_turing[329998]:             "vg_name": "ceph_vg0"
Sep 30 18:29:39 compute-0 competent_turing[329998]:         }
Sep 30 18:29:39 compute-0 competent_turing[329998]:     ]
Sep 30 18:29:39 compute-0 competent_turing[329998]: }
Sep 30 18:29:39 compute-0 systemd[1]: libpod-539c71760fe543205f2d076a7eb324fb4c1d5a5949c9f64158c38c1777893b53.scope: Deactivated successfully.
Sep 30 18:29:39 compute-0 podman[329981]: 2025-09-30 18:29:39.573394367 +0000 UTC m=+0.455642674 container died 539c71760fe543205f2d076a7eb324fb4c1d5a5949c9f64158c38c1777893b53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_turing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 18:29:39 compute-0 podman[329981]: 2025-09-30 18:29:39.623383683 +0000 UTC m=+0.505631990 container remove 539c71760fe543205f2d076a7eb324fb4c1d5a5949c9f64158c38c1777893b53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_turing, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 18:29:39 compute-0 systemd[1]: libpod-conmon-539c71760fe543205f2d076a7eb324fb4c1d5a5949c9f64158c38c1777893b53.scope: Deactivated successfully.
Sep 30 18:29:39 compute-0 sudo[329796]: pam_unix(sudo:session): session closed for user root
Sep 30 18:29:39 compute-0 sudo[330020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:29:39 compute-0 sudo[330020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:29:39 compute-0 sudo[330020]: pam_unix(sudo:session): session closed for user root
Sep 30 18:29:39 compute-0 sudo[330045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:29:39 compute-0 sudo[330045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:29:40 compute-0 podman[330111]: 2025-09-30 18:29:40.193788851 +0000 UTC m=+0.044884044 container create 992a9f11df92d3dd3b481b9258cf01a147b4c80b74e8e3d206d86eac7224c3f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_cray, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:29:40 compute-0 systemd[1]: Started libpod-conmon-992a9f11df92d3dd3b481b9258cf01a147b4c80b74e8e3d206d86eac7224c3f0.scope.
Sep 30 18:29:40 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:29:40 compute-0 podman[330111]: 2025-09-30 18:29:40.263237242 +0000 UTC m=+0.114332445 container init 992a9f11df92d3dd3b481b9258cf01a147b4c80b74e8e3d206d86eac7224c3f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_cray, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:29:40 compute-0 podman[330111]: 2025-09-30 18:29:40.171951345 +0000 UTC m=+0.023046568 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:29:40 compute-0 podman[330111]: 2025-09-30 18:29:40.270087399 +0000 UTC m=+0.121182592 container start 992a9f11df92d3dd3b481b9258cf01a147b4c80b74e8e3d206d86eac7224c3f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_cray, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:29:40 compute-0 practical_cray[330128]: 167 167
Sep 30 18:29:40 compute-0 systemd[1]: libpod-992a9f11df92d3dd3b481b9258cf01a147b4c80b74e8e3d206d86eac7224c3f0.scope: Deactivated successfully.
Sep 30 18:29:40 compute-0 podman[330111]: 2025-09-30 18:29:40.274452463 +0000 UTC m=+0.125547686 container attach 992a9f11df92d3dd3b481b9258cf01a147b4c80b74e8e3d206d86eac7224c3f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True)
Sep 30 18:29:40 compute-0 podman[330111]: 2025-09-30 18:29:40.275009057 +0000 UTC m=+0.126104260 container died 992a9f11df92d3dd3b481b9258cf01a147b4c80b74e8e3d206d86eac7224c3f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_cray, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:29:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f3f97bbd9f18375f42c46ad2d9424d429a1061519f228ff379a877f8301a040-merged.mount: Deactivated successfully.
Sep 30 18:29:40 compute-0 podman[330111]: 2025-09-30 18:29:40.31216262 +0000 UTC m=+0.163257803 container remove 992a9f11df92d3dd3b481b9258cf01a147b4c80b74e8e3d206d86eac7224c3f0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:29:40 compute-0 systemd[1]: libpod-conmon-992a9f11df92d3dd3b481b9258cf01a147b4c80b74e8e3d206d86eac7224c3f0.scope: Deactivated successfully.
Sep 30 18:29:40 compute-0 ceph-mon[73755]: pgmap v1530: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 16 KiB/s wr, 2 op/s
Sep 30 18:29:40 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1506951630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:29:40 compute-0 podman[330154]: 2025-09-30 18:29:40.472313042 +0000 UTC m=+0.046483756 container create 5f9286cac0a62b9ab43ca57f6eff44be16106cbe2e9e44ad15fd2f9f9de3bf13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_gould, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:29:40 compute-0 systemd[1]: Started libpod-conmon-5f9286cac0a62b9ab43ca57f6eff44be16106cbe2e9e44ad15fd2f9f9de3bf13.scope.
Sep 30 18:29:40 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:29:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf09fca1463fac15272f8c582e054ca9e0667da3cf154eabf1febf66d5c6405/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:29:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf09fca1463fac15272f8c582e054ca9e0667da3cf154eabf1febf66d5c6405/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:29:40 compute-0 podman[330154]: 2025-09-30 18:29:40.45564412 +0000 UTC m=+0.029814864 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:29:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf09fca1463fac15272f8c582e054ca9e0667da3cf154eabf1febf66d5c6405/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:29:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf09fca1463fac15272f8c582e054ca9e0667da3cf154eabf1febf66d5c6405/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:29:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1531: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 40 GiB / 40 GiB avail; 5.2 KiB/s rd, 4.3 KiB/s wr, 6 op/s
Sep 30 18:29:40 compute-0 podman[330154]: 2025-09-30 18:29:40.568962698 +0000 UTC m=+0.143133452 container init 5f9286cac0a62b9ab43ca57f6eff44be16106cbe2e9e44ad15fd2f9f9de3bf13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_gould, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 18:29:40 compute-0 podman[330154]: 2025-09-30 18:29:40.58292984 +0000 UTC m=+0.157100564 container start 5f9286cac0a62b9ab43ca57f6eff44be16106cbe2e9e44ad15fd2f9f9de3bf13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:29:40 compute-0 podman[330154]: 2025-09-30 18:29:40.586517513 +0000 UTC m=+0.160688237 container attach 5f9286cac0a62b9ab43ca57f6eff44be16106cbe2e9e44ad15fd2f9f9de3bf13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_gould, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:29:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:29:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:29:40.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:29:40 compute-0 nova_compute[265391]: 2025-09-30 18:29:40.620 2 DEBUG nova.compute.manager [req-f461301a-ec0c-4401-8726-ffd3aa048479 req-d20f1c99-f3c5-4983-8d1f-0c5ea728aca0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Received event network-vif-plugged-4cd1879d-b7f9-410d-8517-ebcb79e59e3c external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:29:40 compute-0 nova_compute[265391]: 2025-09-30 18:29:40.621 2 DEBUG oslo_concurrency.lockutils [req-f461301a-ec0c-4401-8726-ffd3aa048479 req-d20f1c99-f3c5-4983-8d1f-0c5ea728aca0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "a39db459-001a-467e-8721-1dca3120f5ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:29:40 compute-0 nova_compute[265391]: 2025-09-30 18:29:40.622 2 DEBUG oslo_concurrency.lockutils [req-f461301a-ec0c-4401-8726-ffd3aa048479 req-d20f1c99-f3c5-4983-8d1f-0c5ea728aca0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "a39db459-001a-467e-8721-1dca3120f5ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:29:40 compute-0 nova_compute[265391]: 2025-09-30 18:29:40.622 2 DEBUG oslo_concurrency.lockutils [req-f461301a-ec0c-4401-8726-ffd3aa048479 req-d20f1c99-f3c5-4983-8d1f-0c5ea728aca0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "a39db459-001a-467e-8721-1dca3120f5ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:29:40 compute-0 nova_compute[265391]: 2025-09-30 18:29:40.622 2 DEBUG nova.compute.manager [req-f461301a-ec0c-4401-8726-ffd3aa048479 req-d20f1c99-f3c5-4983-8d1f-0c5ea728aca0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] No waiting events found dispatching network-vif-plugged-4cd1879d-b7f9-410d-8517-ebcb79e59e3c pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:29:40 compute-0 nova_compute[265391]: 2025-09-30 18:29:40.623 2 WARNING nova.compute.manager [req-f461301a-ec0c-4401-8726-ffd3aa048479 req-d20f1c99-f3c5-4983-8d1f-0c5ea728aca0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Received unexpected event network-vif-plugged-4cd1879d-b7f9-410d-8517-ebcb79e59e3c for instance with vm_state active and task_state migrating.
Sep 30 18:29:40 compute-0 nova_compute[265391]: 2025-09-30 18:29:40.623 2 DEBUG nova.compute.manager [req-f461301a-ec0c-4401-8726-ffd3aa048479 req-d20f1c99-f3c5-4983-8d1f-0c5ea728aca0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Received event network-vif-unplugged-4cd1879d-b7f9-410d-8517-ebcb79e59e3c external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:29:40 compute-0 nova_compute[265391]: 2025-09-30 18:29:40.623 2 DEBUG oslo_concurrency.lockutils [req-f461301a-ec0c-4401-8726-ffd3aa048479 req-d20f1c99-f3c5-4983-8d1f-0c5ea728aca0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "a39db459-001a-467e-8721-1dca3120f5ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:29:40 compute-0 nova_compute[265391]: 2025-09-30 18:29:40.623 2 DEBUG oslo_concurrency.lockutils [req-f461301a-ec0c-4401-8726-ffd3aa048479 req-d20f1c99-f3c5-4983-8d1f-0c5ea728aca0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "a39db459-001a-467e-8721-1dca3120f5ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:29:40 compute-0 nova_compute[265391]: 2025-09-30 18:29:40.624 2 DEBUG oslo_concurrency.lockutils [req-f461301a-ec0c-4401-8726-ffd3aa048479 req-d20f1c99-f3c5-4983-8d1f-0c5ea728aca0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "a39db459-001a-467e-8721-1dca3120f5ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:29:40 compute-0 nova_compute[265391]: 2025-09-30 18:29:40.624 2 DEBUG nova.compute.manager [req-f461301a-ec0c-4401-8726-ffd3aa048479 req-d20f1c99-f3c5-4983-8d1f-0c5ea728aca0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] No waiting events found dispatching network-vif-unplugged-4cd1879d-b7f9-410d-8517-ebcb79e59e3c pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:29:40 compute-0 nova_compute[265391]: 2025-09-30 18:29:40.624 2 DEBUG nova.compute.manager [req-f461301a-ec0c-4401-8726-ffd3aa048479 req-d20f1c99-f3c5-4983-8d1f-0c5ea728aca0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Received event network-vif-unplugged-4cd1879d-b7f9-410d-8517-ebcb79e59e3c for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:29:40 compute-0 nova_compute[265391]: 2025-09-30 18:29:40.624 2 DEBUG nova.compute.manager [req-f461301a-ec0c-4401-8726-ffd3aa048479 req-d20f1c99-f3c5-4983-8d1f-0c5ea728aca0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Received event network-vif-plugged-4cd1879d-b7f9-410d-8517-ebcb79e59e3c external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:29:40 compute-0 nova_compute[265391]: 2025-09-30 18:29:40.625 2 DEBUG oslo_concurrency.lockutils [req-f461301a-ec0c-4401-8726-ffd3aa048479 req-d20f1c99-f3c5-4983-8d1f-0c5ea728aca0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "a39db459-001a-467e-8721-1dca3120f5ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:29:40 compute-0 nova_compute[265391]: 2025-09-30 18:29:40.625 2 DEBUG oslo_concurrency.lockutils [req-f461301a-ec0c-4401-8726-ffd3aa048479 req-d20f1c99-f3c5-4983-8d1f-0c5ea728aca0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "a39db459-001a-467e-8721-1dca3120f5ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:29:40 compute-0 nova_compute[265391]: 2025-09-30 18:29:40.625 2 DEBUG oslo_concurrency.lockutils [req-f461301a-ec0c-4401-8726-ffd3aa048479 req-d20f1c99-f3c5-4983-8d1f-0c5ea728aca0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "a39db459-001a-467e-8721-1dca3120f5ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:29:40 compute-0 nova_compute[265391]: 2025-09-30 18:29:40.625 2 DEBUG nova.compute.manager [req-f461301a-ec0c-4401-8726-ffd3aa048479 req-d20f1c99-f3c5-4983-8d1f-0c5ea728aca0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] No waiting events found dispatching network-vif-plugged-4cd1879d-b7f9-410d-8517-ebcb79e59e3c pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:29:40 compute-0 nova_compute[265391]: 2025-09-30 18:29:40.625 2 WARNING nova.compute.manager [req-f461301a-ec0c-4401-8726-ffd3aa048479 req-d20f1c99-f3c5-4983-8d1f-0c5ea728aca0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Received unexpected event network-vif-plugged-4cd1879d-b7f9-410d-8517-ebcb79e59e3c for instance with vm_state active and task_state migrating.
Sep 30 18:29:40 compute-0 nova_compute[265391]: 2025-09-30 18:29:40.626 2 DEBUG nova.compute.manager [req-f461301a-ec0c-4401-8726-ffd3aa048479 req-d20f1c99-f3c5-4983-8d1f-0c5ea728aca0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Received event network-vif-plugged-4cd1879d-b7f9-410d-8517-ebcb79e59e3c external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:29:40 compute-0 nova_compute[265391]: 2025-09-30 18:29:40.626 2 DEBUG oslo_concurrency.lockutils [req-f461301a-ec0c-4401-8726-ffd3aa048479 req-d20f1c99-f3c5-4983-8d1f-0c5ea728aca0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "a39db459-001a-467e-8721-1dca3120f5ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:29:40 compute-0 nova_compute[265391]: 2025-09-30 18:29:40.626 2 DEBUG oslo_concurrency.lockutils [req-f461301a-ec0c-4401-8726-ffd3aa048479 req-d20f1c99-f3c5-4983-8d1f-0c5ea728aca0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "a39db459-001a-467e-8721-1dca3120f5ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:29:40 compute-0 nova_compute[265391]: 2025-09-30 18:29:40.626 2 DEBUG oslo_concurrency.lockutils [req-f461301a-ec0c-4401-8726-ffd3aa048479 req-d20f1c99-f3c5-4983-8d1f-0c5ea728aca0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "a39db459-001a-467e-8721-1dca3120f5ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:29:40 compute-0 nova_compute[265391]: 2025-09-30 18:29:40.626 2 DEBUG nova.compute.manager [req-f461301a-ec0c-4401-8726-ffd3aa048479 req-d20f1c99-f3c5-4983-8d1f-0c5ea728aca0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] No waiting events found dispatching network-vif-plugged-4cd1879d-b7f9-410d-8517-ebcb79e59e3c pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:29:40 compute-0 nova_compute[265391]: 2025-09-30 18:29:40.627 2 WARNING nova.compute.manager [req-f461301a-ec0c-4401-8726-ffd3aa048479 req-d20f1c99-f3c5-4983-8d1f-0c5ea728aca0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Received unexpected event network-vif-plugged-4cd1879d-b7f9-410d-8517-ebcb79e59e3c for instance with vm_state active and task_state migrating.
Sep 30 18:29:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:29:40.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:40 compute-0 nova_compute[265391]: 2025-09-30 18:29:40.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:29:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:29:41 compute-0 lvm[330243]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:29:41 compute-0 lvm[330243]: VG ceph_vg0 finished
Sep 30 18:29:41 compute-0 romantic_gould[330170]: {}
Sep 30 18:29:41 compute-0 systemd[1]: libpod-5f9286cac0a62b9ab43ca57f6eff44be16106cbe2e9e44ad15fd2f9f9de3bf13.scope: Deactivated successfully.
Sep 30 18:29:41 compute-0 podman[330154]: 2025-09-30 18:29:41.349031461 +0000 UTC m=+0.923202205 container died 5f9286cac0a62b9ab43ca57f6eff44be16106cbe2e9e44ad15fd2f9f9de3bf13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:29:41 compute-0 systemd[1]: libpod-5f9286cac0a62b9ab43ca57f6eff44be16106cbe2e9e44ad15fd2f9f9de3bf13.scope: Consumed 1.234s CPU time.
Sep 30 18:29:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cf09fca1463fac15272f8c582e054ca9e0667da3cf154eabf1febf66d5c6405-merged.mount: Deactivated successfully.
Sep 30 18:29:41 compute-0 podman[330154]: 2025-09-30 18:29:41.412423565 +0000 UTC m=+0.986594289 container remove 5f9286cac0a62b9ab43ca57f6eff44be16106cbe2e9e44ad15fd2f9f9de3bf13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_gould, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Sep 30 18:29:41 compute-0 systemd[1]: libpod-conmon-5f9286cac0a62b9ab43ca57f6eff44be16106cbe2e9e44ad15fd2f9f9de3bf13.scope: Deactivated successfully.
Sep 30 18:29:41 compute-0 nova_compute[265391]: 2025-09-30 18:29:41.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:29:41 compute-0 sudo[330045]: pam_unix(sudo:session): session closed for user root
Sep 30 18:29:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:29:41 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:29:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:29:41 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:29:41 compute-0 sudo[330260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:29:41 compute-0 sudo[330260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:29:41 compute-0 sudo[330260]: pam_unix(sudo:session): session closed for user root
Sep 30 18:29:41 compute-0 nova_compute[265391]: 2025-09-30 18:29:41.942 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:29:41 compute-0 nova_compute[265391]: 2025-09-30 18:29:41.943 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:29:41 compute-0 nova_compute[265391]: 2025-09-30 18:29:41.943 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:29:41 compute-0 nova_compute[265391]: 2025-09-30 18:29:41.943 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:29:41 compute-0 nova_compute[265391]: 2025-09-30 18:29:41.943 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:29:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:29:42 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3269182293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:29:42 compute-0 nova_compute[265391]: 2025-09-30 18:29:42.381 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:29:42 compute-0 ceph-mon[73755]: pgmap v1531: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 40 GiB / 40 GiB avail; 5.2 KiB/s rd, 4.3 KiB/s wr, 6 op/s
Sep 30 18:29:42 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:29:42 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:29:42 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3269182293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:29:42 compute-0 nova_compute[265391]: 2025-09-30 18:29:42.516 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:29:42 compute-0 nova_compute[265391]: 2025-09-30 18:29:42.518 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:29:42 compute-0 nova_compute[265391]: 2025-09-30 18:29:42.539 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.021s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:29:42 compute-0 nova_compute[265391]: 2025-09-30 18:29:42.540 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4305MB free_disk=39.901123046875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:29:42 compute-0 nova_compute[265391]: 2025-09-30 18:29:42.540 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:29:42 compute-0 nova_compute[265391]: 2025-09-30 18:29:42.540 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:29:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1532: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 3.3 KiB/s wr, 6 op/s
Sep 30 18:29:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:29:42.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:29:42.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:43 compute-0 nova_compute[265391]: 2025-09-30 18:29:43.558 2 INFO nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Updating resource usage from migration 091476e8-6365-4164-85ee-66b48227aab9
Sep 30 18:29:43 compute-0 nova_compute[265391]: 2025-09-30 18:29:43.586 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Migration 091476e8-6365-4164-85ee-66b48227aab9 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Sep 30 18:29:43 compute-0 nova_compute[265391]: 2025-09-30 18:29:43.587 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:29:43 compute-0 nova_compute[265391]: 2025-09-30 18:29:43.587 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=39GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:29:42 up  1:33,  0 user,  load average: 1.35, 1.11, 0.94\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_migrating': '1', 'num_os_type_None': '1', 'num_proj_c634e1c17ed54907969576a0eb8eff50': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:29:43 compute-0 nova_compute[265391]: 2025-09-30 18:29:43.604 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Refreshing inventories for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:822
Sep 30 18:29:43 compute-0 nova_compute[265391]: 2025-09-30 18:29:43.622 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Updating ProviderTree inventory for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:786
Sep 30 18:29:43 compute-0 nova_compute[265391]: 2025-09-30 18:29:43.623 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Updating inventory in ProviderTree for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Sep 30 18:29:43 compute-0 nova_compute[265391]: 2025-09-30 18:29:43.633 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Refreshing aggregate associations for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2, aggregates: None _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:831
Sep 30 18:29:43 compute-0 nova_compute[265391]: 2025-09-30 18:29:43.655 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Refreshing trait associations for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2, traits: COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SOUND_MODEL_SB16,COMPUTE_ARCH_X86_64,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIRTIO_PACKED,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_TIS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOUND_MODEL_ICH9,COMPUTE_SOUND_MODEL_USB,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOUND_MODEL_PCSPK,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ADDRESS_SPACE_EMULATED,COMPUTE_RESCUE_BFV,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_STATELESS_FIRMWARE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_IGB,HW_ARCH_X86_64,COMPUTE_ACCELERATORS,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOUND_MODEL_ES1370,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_CRB,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_VIRTIO_FS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ADDRESS_SPACE_PASSTHROUGH,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_F16C,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOUND_MODEL_ICH6,COMPUTE_SOUND_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SOUND_MODEL_AC97,HW_CPU_X86_SSE42 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:843
Sep 30 18:29:43 compute-0 nova_compute[265391]: 2025-09-30 18:29:43.698 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:29:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:29:43.807Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:29:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:29:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:29:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:29:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:29:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:29:44 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3575853102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:29:44 compute-0 nova_compute[265391]: 2025-09-30 18:29:44.155 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:29:44 compute-0 nova_compute[265391]: 2025-09-30 18:29:44.162 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:29:44 compute-0 nova_compute[265391]: 2025-09-30 18:29:44.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:29:44 compute-0 ceph-mon[73755]: pgmap v1532: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 3.3 KiB/s wr, 6 op/s
Sep 30 18:29:44 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3575853102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:29:44 compute-0 podman[330335]: 2025-09-30 18:29:44.529148489 +0000 UTC m=+0.071120355 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Sep 30 18:29:44 compute-0 podman[330334]: 2025-09-30 18:29:44.549154388 +0000 UTC m=+0.091126194 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Sep 30 18:29:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1533: 353 pgs: 353 active+clean; 200 MiB data, 362 MiB used, 40 GiB / 40 GiB avail; 5.2 KiB/s rd, 4.3 KiB/s wr, 6 op/s
Sep 30 18:29:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:29:44.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:44 compute-0 nova_compute[265391]: 2025-09-30 18:29:44.671 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:29:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:29:44.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:45 compute-0 nova_compute[265391]: 2025-09-30 18:29:45.182 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:29:45 compute-0 nova_compute[265391]: 2025-09-30 18:29:45.182 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.642s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:29:45 compute-0 nova_compute[265391]: 2025-09-30 18:29:45.183 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:29:45 compute-0 nova_compute[265391]: 2025-09-30 18:29:45.183 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11909
Sep 30 18:29:45 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/4220618568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:29:45 compute-0 podman[330385]: 2025-09-30 18:29:45.527186613 +0000 UTC m=+0.069207295 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true)
Sep 30 18:29:45 compute-0 nova_compute[265391]: 2025-09-30 18:29:45.691 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11918
Sep 30 18:29:45 compute-0 nova_compute[265391]: 2025-09-30 18:29:45.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:29:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:29:46 compute-0 nova_compute[265391]: 2025-09-30 18:29:46.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:29:46 compute-0 nova_compute[265391]: 2025-09-30 18:29:46.429 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:29:46 compute-0 nova_compute[265391]: 2025-09-30 18:29:46.429 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:29:46 compute-0 nova_compute[265391]: 2025-09-30 18:29:46.429 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:29:46 compute-0 ceph-mon[73755]: pgmap v1533: 353 pgs: 353 active+clean; 200 MiB data, 362 MiB used, 40 GiB / 40 GiB avail; 5.2 KiB/s rd, 4.3 KiB/s wr, 6 op/s
Sep 30 18:29:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1534: 353 pgs: 353 active+clean; 200 MiB data, 362 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 1023 B/s wr, 5 op/s
Sep 30 18:29:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:29:46.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:29:46.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:29:47.271Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:29:47 compute-0 nova_compute[265391]: 2025-09-30 18:29:47.930 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:29:48 compute-0 ceph-mon[73755]: pgmap v1534: 353 pgs: 353 active+clean; 200 MiB data, 362 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 1023 B/s wr, 5 op/s
Sep 30 18:29:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1535: 353 pgs: 353 active+clean; 200 MiB data, 362 MiB used, 40 GiB / 40 GiB avail; 5.1 KiB/s rd, 1023 B/s wr, 6 op/s
Sep 30 18:29:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:29:48.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:29:48] "GET /metrics HTTP/1.1" 200 46645 "" "Prometheus/2.51.0"
Sep 30 18:29:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:29:48] "GET /metrics HTTP/1.1" 200 46645 "" "Prometheus/2.51.0"
Sep 30 18:29:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:29:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:29:48.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:29:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:29:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:29:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:29:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:29:49 compute-0 nova_compute[265391]: 2025-09-30 18:29:49.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:29:50 compute-0 nova_compute[265391]: 2025-09-30 18:29:50.287 2 DEBUG oslo_concurrency.lockutils [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "a39db459-001a-467e-8721-1dca3120f5ee-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:29:50 compute-0 nova_compute[265391]: 2025-09-30 18:29:50.288 2 DEBUG oslo_concurrency.lockutils [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "a39db459-001a-467e-8721-1dca3120f5ee-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:29:50 compute-0 nova_compute[265391]: 2025-09-30 18:29:50.288 2 DEBUG oslo_concurrency.lockutils [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "a39db459-001a-467e-8721-1dca3120f5ee-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:29:50 compute-0 ceph-mon[73755]: pgmap v1535: 353 pgs: 353 active+clean; 200 MiB data, 362 MiB used, 40 GiB / 40 GiB avail; 5.1 KiB/s rd, 1023 B/s wr, 6 op/s
Sep 30 18:29:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1536: 353 pgs: 353 active+clean; 200 MiB data, 362 MiB used, 40 GiB / 40 GiB avail; 4.8 KiB/s rd, 9.1 KiB/s wr, 6 op/s
Sep 30 18:29:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:29:50.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:50 compute-0 nova_compute[265391]: 2025-09-30 18:29:50.800 2 DEBUG oslo_concurrency.lockutils [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:29:50 compute-0 nova_compute[265391]: 2025-09-30 18:29:50.801 2 DEBUG oslo_concurrency.lockutils [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:29:50 compute-0 nova_compute[265391]: 2025-09-30 18:29:50.801 2 DEBUG oslo_concurrency.lockutils [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:29:50 compute-0 nova_compute[265391]: 2025-09-30 18:29:50.802 2 DEBUG nova.compute.resource_tracker [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:29:50 compute-0 nova_compute[265391]: 2025-09-30 18:29:50.803 2 DEBUG oslo_concurrency.processutils [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:29:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:29:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:29:50.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:29:50 compute-0 nova_compute[265391]: 2025-09-30 18:29:50.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:29:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:29:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:29:51 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2085539696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:29:51 compute-0 nova_compute[265391]: 2025-09-30 18:29:51.244 2 DEBUG oslo_concurrency.processutils [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:29:51 compute-0 nova_compute[265391]: 2025-09-30 18:29:51.383 2 WARNING nova.virt.libvirt.driver [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:29:51 compute-0 nova_compute[265391]: 2025-09-30 18:29:51.384 2 DEBUG oslo_concurrency.processutils [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:29:51 compute-0 nova_compute[265391]: 2025-09-30 18:29:51.405 2 DEBUG oslo_concurrency.processutils [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.021s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:29:51 compute-0 nova_compute[265391]: 2025-09-30 18:29:51.406 2 DEBUG nova.compute.resource_tracker [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4333MB free_disk=39.9011116027832GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:29:51 compute-0 nova_compute[265391]: 2025-09-30 18:29:51.406 2 DEBUG oslo_concurrency.lockutils [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:29:51 compute-0 nova_compute[265391]: 2025-09-30 18:29:51.406 2 DEBUG oslo_concurrency.lockutils [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:29:51 compute-0 ceph-mon[73755]: pgmap v1536: 353 pgs: 353 active+clean; 200 MiB data, 362 MiB used, 40 GiB / 40 GiB avail; 4.8 KiB/s rd, 9.1 KiB/s wr, 6 op/s
Sep 30 18:29:51 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2085539696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:29:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:29:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:29:52 compute-0 nova_compute[265391]: 2025-09-30 18:29:52.426 2 DEBUG nova.compute.resource_tracker [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Migration for instance a39db459-001a-467e-8721-1dca3120f5ee refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Sep 30 18:29:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:29:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1537: 353 pgs: 353 active+clean; 200 MiB data, 362 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 9.1 KiB/s wr, 2 op/s
Sep 30 18:29:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:29:52.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:29:52.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:52 compute-0 nova_compute[265391]: 2025-09-30 18:29:52.935 2 DEBUG nova.compute.resource_tracker [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1596
Sep 30 18:29:52 compute-0 nova_compute[265391]: 2025-09-30 18:29:52.970 2 DEBUG nova.compute.resource_tracker [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Migration 091476e8-6365-4164-85ee-66b48227aab9 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Sep 30 18:29:52 compute-0 nova_compute[265391]: 2025-09-30 18:29:52.970 2 DEBUG nova.compute.resource_tracker [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:29:52 compute-0 nova_compute[265391]: 2025-09-30 18:29:52.971 2 DEBUG nova.compute.resource_tracker [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:29:51 up  1:33,  0 user,  load average: 1.24, 1.10, 0.93\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:29:53 compute-0 nova_compute[265391]: 2025-09-30 18:29:53.021 2 DEBUG oslo_concurrency.processutils [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:29:53 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:29:53 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3629675832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:29:53 compute-0 nova_compute[265391]: 2025-09-30 18:29:53.494 2 DEBUG oslo_concurrency.processutils [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:29:53 compute-0 nova_compute[265391]: 2025-09-30 18:29:53.501 2 DEBUG nova.compute.provider_tree [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:29:53 compute-0 ceph-mon[73755]: pgmap v1537: 353 pgs: 353 active+clean; 200 MiB data, 362 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 9.1 KiB/s wr, 2 op/s
Sep 30 18:29:53 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3629675832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:29:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:29:53.807Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:29:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:29:53.809Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:29:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:29:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:29:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:29:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:29:54 compute-0 nova_compute[265391]: 2025-09-30 18:29:54.010 2 DEBUG nova.scheduler.client.report [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:29:54 compute-0 nova_compute[265391]: 2025-09-30 18:29:54.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:29:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:29:54.313 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:29:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:29:54.313 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:29:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:29:54.313 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:29:54 compute-0 nova_compute[265391]: 2025-09-30 18:29:54.523 2 DEBUG nova.compute.resource_tracker [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:29:54 compute-0 nova_compute[265391]: 2025-09-30 18:29:54.523 2 DEBUG oslo_concurrency.lockutils [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.117s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:29:54 compute-0 nova_compute[265391]: 2025-09-30 18:29:54.542 2 INFO nova.compute.manager [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Sep 30 18:29:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1538: 353 pgs: 353 active+clean; 121 MiB data, 328 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 30 op/s
Sep 30 18:29:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:29:54.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:29:54.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:55 compute-0 nova_compute[265391]: 2025-09-30 18:29:55.423 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:29:55 compute-0 sudo[330461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:29:55 compute-0 sudo[330461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:29:55 compute-0 sudo[330461]: pam_unix(sudo:session): session closed for user root
Sep 30 18:29:55 compute-0 nova_compute[265391]: 2025-09-30 18:29:55.605 2 INFO nova.scheduler.client.report [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Deleted allocation for migration 091476e8-6365-4164-85ee-66b48227aab9
Sep 30 18:29:55 compute-0 nova_compute[265391]: 2025-09-30 18:29:55.605 2 DEBUG nova.virt.libvirt.driver [None req-932e932d-fdac-471f-b447-67e8c697892d 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: a39db459-001a-467e-8721-1dca3120f5ee] Live migration monitoring is all done _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11566
Sep 30 18:29:55 compute-0 ceph-mon[73755]: pgmap v1538: 353 pgs: 353 active+clean; 121 MiB data, 328 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 30 op/s
Sep 30 18:29:55 compute-0 nova_compute[265391]: 2025-09-30 18:29:55.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:29:55 compute-0 nova_compute[265391]: 2025-09-30 18:29:55.933 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:29:55 compute-0 nova_compute[265391]: 2025-09-30 18:29:55.934 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.12/site-packages/nova/compute/manager.py:11947
Sep 30 18:29:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:29:56 compute-0 podman[330489]: 2025-09-30 18:29:56.526144928 +0000 UTC m=+0.060165761 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, distribution-scope=public, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., config_id=edpm, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, vcs-type=git, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal)
Sep 30 18:29:56 compute-0 podman[330488]: 2025-09-30 18:29:56.526142928 +0000 UTC m=+0.059344580 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0)
Sep 30 18:29:56 compute-0 podman[330487]: 2025-09-30 18:29:56.542986794 +0000 UTC m=+0.081854803 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Sep 30 18:29:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1539: 353 pgs: 353 active+clean; 121 MiB data, 328 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 12 KiB/s wr, 30 op/s
Sep 30 18:29:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:56 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/823296709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:29:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:29:56.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:29:56.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:29:57.272Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:29:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:29:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3006712251' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:29:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:29:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3006712251' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:29:57 compute-0 ceph-mon[73755]: pgmap v1539: 353 pgs: 353 active+clean; 121 MiB data, 328 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 12 KiB/s wr, 30 op/s
Sep 30 18:29:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3006712251' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:29:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3006712251' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:29:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1540: 353 pgs: 353 active+clean; 121 MiB data, 328 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 12 KiB/s wr, 30 op/s
Sep 30 18:29:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:29:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:29:58.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:29:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:29:58] "GET /metrics HTTP/1.1" 200 46647 "" "Prometheus/2.51.0"
Sep 30 18:29:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:29:58] "GET /metrics HTTP/1.1" 200 46647 "" "Prometheus/2.51.0"
Sep 30 18:29:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:29:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:29:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:29:58.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:29:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:29:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:29:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:29:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:29:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:29:59 compute-0 nova_compute[265391]: 2025-09-30 18:29:59.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:29:59 compute-0 ceph-mon[73755]: pgmap v1540: 353 pgs: 353 active+clean; 121 MiB data, 328 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 12 KiB/s wr, 30 op/s
Sep 30 18:29:59 compute-0 podman[276673]: time="2025-09-30T18:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:29:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:29:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10288 "" "Go-http-client/1.1"
Sep 30 18:30:00 compute-0 ceph-mon[73755]: log_channel(cluster) log [WRN] : overall HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore
Sep 30 18:30:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1541: 353 pgs: 353 active+clean; 121 MiB data, 328 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 12 KiB/s wr, 30 op/s
Sep 30 18:30:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:30:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:30:00.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:30:00 compute-0 ceph-mon[73755]: overall HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore
Sep 30 18:30:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:30:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:30:00.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:30:00 compute-0 nova_compute[265391]: 2025-09-30 18:30:00.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:30:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:30:01 compute-0 openstack_network_exporter[279566]: ERROR   18:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:30:01 compute-0 openstack_network_exporter[279566]: ERROR   18:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:30:01 compute-0 openstack_network_exporter[279566]: ERROR   18:30:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:30:01 compute-0 openstack_network_exporter[279566]: ERROR   18:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:30:01 compute-0 openstack_network_exporter[279566]: ERROR   18:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:30:01 compute-0 ceph-mon[73755]: pgmap v1541: 353 pgs: 353 active+clean; 121 MiB data, 328 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 12 KiB/s wr, 30 op/s
Sep 30 18:30:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1542: 353 pgs: 353 active+clean; 121 MiB data, 328 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Sep 30 18:30:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:30:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:30:02.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:30:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:30:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:30:02.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:30:03 compute-0 ceph-mon[73755]: pgmap v1542: 353 pgs: 353 active+clean; 121 MiB data, 328 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Sep 30 18:30:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:30:03.809Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:30:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:30:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:30:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:30:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:30:04 compute-0 nova_compute[265391]: 2025-09-30 18:30:04.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:30:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1543: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 4.7 KiB/s wr, 56 op/s
Sep 30 18:30:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:30:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:30:04.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:30:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:30:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:30:04.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:30:05 compute-0 ceph-mon[73755]: pgmap v1543: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 4.7 KiB/s wr, 56 op/s
Sep 30 18:30:05 compute-0 nova_compute[265391]: 2025-09-30 18:30:05.809 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:30:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:30:05 compute-0 nova_compute[265391]: 2025-09-30 18:30:05.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:30:06 compute-0 sshd-session[330557]: Invalid user test from 14.225.220.107 port 33902
Sep 30 18:30:06 compute-0 sshd-session[330557]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:30:06 compute-0 sshd-session[330557]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107
Sep 30 18:30:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1544: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:30:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:30:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:30:06.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:30:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:30:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:30:06.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:30:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:30:07.272Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:30:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:30:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:30:07 compute-0 ceph-mon[73755]: pgmap v1544: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:30:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:30:07
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'default.rgw.control', '.mgr', '.rgw.root', 'default.rgw.meta', 'volumes', 'images', '.nfs', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms']
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:30:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:30:08 compute-0 sshd-session[330557]: Failed password for invalid user test from 14.225.220.107 port 33902 ssh2
Sep 30 18:30:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1545: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:30:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:30:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:30:08.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:30:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:30:08] "GET /metrics HTTP/1.1" 200 46633 "" "Prometheus/2.51.0"
Sep 30 18:30:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:30:08] "GET /metrics HTTP/1.1" 200 46633 "" "Prometheus/2.51.0"
Sep 30 18:30:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:30:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:30:08.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:30:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:30:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:30:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:30:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:30:09 compute-0 sshd-session[330557]: Received disconnect from 14.225.220.107 port 33902:11: Bye Bye [preauth]
Sep 30 18:30:09 compute-0 sshd-session[330557]: Disconnected from invalid user test 14.225.220.107 port 33902 [preauth]
Sep 30 18:30:09 compute-0 nova_compute[265391]: 2025-09-30 18:30:09.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:30:09 compute-0 ceph-mon[73755]: pgmap v1545: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:30:10 compute-0 nova_compute[265391]: 2025-09-30 18:30:10.305 2 DEBUG oslo_concurrency.lockutils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "87f50c89-ad94-41dc-9263-c5715083d91b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:30:10 compute-0 nova_compute[265391]: 2025-09-30 18:30:10.306 2 DEBUG oslo_concurrency.lockutils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "87f50c89-ad94-41dc-9263-c5715083d91b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:30:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1546: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:30:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:30:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:30:10.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:30:10 compute-0 sshd-session[330561]: Invalid user ubuntu from 45.252.249.158 port 41422
Sep 30 18:30:10 compute-0 sshd-session[330561]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:30:10 compute-0 sshd-session[330561]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158
Sep 30 18:30:10 compute-0 nova_compute[265391]: 2025-09-30 18:30:10.813 2 DEBUG nova.compute.manager [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Sep 30 18:30:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:30:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:30:10.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:30:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:30:10 compute-0 nova_compute[265391]: 2025-09-30 18:30:10.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:30:11 compute-0 nova_compute[265391]: 2025-09-30 18:30:11.371 2 DEBUG oslo_concurrency.lockutils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:30:11 compute-0 nova_compute[265391]: 2025-09-30 18:30:11.371 2 DEBUG oslo_concurrency.lockutils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:30:11 compute-0 nova_compute[265391]: 2025-09-30 18:30:11.380 2 DEBUG nova.virt.hardware [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Sep 30 18:30:11 compute-0 nova_compute[265391]: 2025-09-30 18:30:11.381 2 INFO nova.compute.claims [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Claim successful on node compute-0.ctlplane.example.com
Sep 30 18:30:11 compute-0 ceph-mon[73755]: pgmap v1546: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:30:12 compute-0 nova_compute[265391]: 2025-09-30 18:30:12.443 2 DEBUG oslo_concurrency.processutils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:30:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1547: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:30:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:30:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:30:12.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:30:12 compute-0 sshd-session[330561]: Failed password for invalid user ubuntu from 45.252.249.158 port 41422 ssh2
Sep 30 18:30:12 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:30:12 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1619267689' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:30:12 compute-0 nova_compute[265391]: 2025-09-30 18:30:12.913 2 DEBUG oslo_concurrency.processutils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:30:12 compute-0 nova_compute[265391]: 2025-09-30 18:30:12.918 2 DEBUG nova.compute.provider_tree [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:30:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:30:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:30:12.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:30:13 compute-0 sshd-session[330561]: Received disconnect from 45.252.249.158 port 41422:11: Bye Bye [preauth]
Sep 30 18:30:13 compute-0 sshd-session[330561]: Disconnected from invalid user ubuntu 45.252.249.158 port 41422 [preauth]
Sep 30 18:30:13 compute-0 nova_compute[265391]: 2025-09-30 18:30:13.774 2 DEBUG nova.scheduler.client.report [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:30:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:30:13.811Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:30:13 compute-0 ceph-mon[73755]: pgmap v1547: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:30:13 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1619267689' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:30:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:30:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:30:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:30:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:30:14 compute-0 nova_compute[265391]: 2025-09-30 18:30:14.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:30:14 compute-0 nova_compute[265391]: 2025-09-30 18:30:14.286 2 DEBUG oslo_concurrency.lockutils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.915s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:30:14 compute-0 nova_compute[265391]: 2025-09-30 18:30:14.287 2 DEBUG nova.compute.manager [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Sep 30 18:30:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1548: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 40 GiB / 40 GiB avail; 60 KiB/s rd, 1.2 KiB/s wr, 95 op/s
Sep 30 18:30:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:30:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:30:14.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:30:14 compute-0 nova_compute[265391]: 2025-09-30 18:30:14.801 2 DEBUG nova.compute.manager [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Sep 30 18:30:14 compute-0 nova_compute[265391]: 2025-09-30 18:30:14.801 2 DEBUG nova.network.neutron [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Sep 30 18:30:14 compute-0 nova_compute[265391]: 2025-09-30 18:30:14.802 2 WARNING neutronclient.v2_0.client [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:30:14 compute-0 nova_compute[265391]: 2025-09-30 18:30:14.802 2 WARNING neutronclient.v2_0.client [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:30:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:30:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:30:14.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:30:15 compute-0 nova_compute[265391]: 2025-09-30 18:30:15.319 2 INFO nova.virt.libvirt.driver [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Sep 30 18:30:15 compute-0 podman[330595]: 2025-09-30 18:30:15.554520653 +0000 UTC m=+0.079712408 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Sep 30 18:30:15 compute-0 podman[330594]: 2025-09-30 18:30:15.612773143 +0000 UTC m=+0.140987216 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0)
Sep 30 18:30:15 compute-0 sudo[330641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:30:15 compute-0 sudo[330641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:30:15 compute-0 podman[330637]: 2025-09-30 18:30:15.64696109 +0000 UTC m=+0.064856443 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.build-date=20250930, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:30:15 compute-0 sudo[330641]: pam_unix(sudo:session): session closed for user root
Sep 30 18:30:15 compute-0 nova_compute[265391]: 2025-09-30 18:30:15.826 2 DEBUG nova.compute.manager [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Sep 30 18:30:15 compute-0 ceph-mon[73755]: pgmap v1548: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 40 GiB / 40 GiB avail; 60 KiB/s rd, 1.2 KiB/s wr, 95 op/s
Sep 30 18:30:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:30:15 compute-0 nova_compute[265391]: 2025-09-30 18:30:15.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:30:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1549: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 40 GiB / 40 GiB avail; 41 KiB/s rd, 0 B/s wr, 67 op/s
Sep 30 18:30:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:30:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:30:16.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:30:16 compute-0 nova_compute[265391]: 2025-09-30 18:30:16.848 2 DEBUG nova.compute.manager [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Sep 30 18:30:16 compute-0 nova_compute[265391]: 2025-09-30 18:30:16.850 2 DEBUG nova.virt.libvirt.driver [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Sep 30 18:30:16 compute-0 nova_compute[265391]: 2025-09-30 18:30:16.851 2 INFO nova.virt.libvirt.driver [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Creating image(s)
Sep 30 18:30:16 compute-0 nova_compute[265391]: 2025-09-30 18:30:16.885 2 DEBUG nova.storage.rbd_utils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 87f50c89-ad94-41dc-9263-c5715083d91b_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:30:16 compute-0 nova_compute[265391]: 2025-09-30 18:30:16.920 2 DEBUG nova.storage.rbd_utils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 87f50c89-ad94-41dc-9263-c5715083d91b_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:30:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:30:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:30:16.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:30:16 compute-0 nova_compute[265391]: 2025-09-30 18:30:16.954 2 DEBUG nova.storage.rbd_utils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 87f50c89-ad94-41dc-9263-c5715083d91b_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:30:16 compute-0 nova_compute[265391]: 2025-09-30 18:30:16.960 2 DEBUG oslo_concurrency.processutils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:30:16 compute-0 nova_compute[265391]: 2025-09-30 18:30:16.974 2 DEBUG nova.network.neutron [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Successfully created port: 688f7db7-db25-41cc-8c99-d97dc36a693a _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Sep 30 18:30:17 compute-0 nova_compute[265391]: 2025-09-30 18:30:17.027 2 DEBUG oslo_concurrency.processutils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
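The prlimit-wrapped command above probes the cached base image with `qemu-img info --output=json`. A stand-alone sketch of the same probe (illustrative only; the image path is copied verbatim from the log and will differ on other hosts):

    import json
    import os
    import subprocess

    path = "/var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457"
    out = subprocess.check_output(
        ["qemu-img", "info", path, "--force-share", "--output=json"],
        env={**os.environ, "LC_ALL": "C", "LANG": "C"},  # same locale pinning as the log
    )
    info = json.loads(out)
    # qemu-img's JSON output includes "format", "virtual-size" and
    # "actual-size"; sizes are reported in bytes.
    print(info["format"], info["virtual-size"])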
Sep 30 18:30:17 compute-0 nova_compute[265391]: 2025-09-30 18:30:17.027 2 DEBUG oslo_concurrency.lockutils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "cb2d580238c9b109feae7f1462613dc547671457" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:30:17 compute-0 nova_compute[265391]: 2025-09-30 18:30:17.028 2 DEBUG oslo_concurrency.lockutils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:30:17 compute-0 nova_compute[265391]: 2025-09-30 18:30:17.028 2 DEBUG oslo_concurrency.lockutils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:30:17 compute-0 nova_compute[265391]: 2025-09-30 18:30:17.053 2 DEBUG nova.storage.rbd_utils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 87f50c89-ad94-41dc-9263-c5715083d91b_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:30:17 compute-0 nova_compute[265391]: 2025-09-30 18:30:17.057 2 DEBUG oslo_concurrency.processutils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 87f50c89-ad94-41dc-9263-c5715083d91b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:30:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:30:17.273Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:30:17 compute-0 nova_compute[265391]: 2025-09-30 18:30:17.333 2 DEBUG oslo_concurrency.processutils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 87f50c89-ad94-41dc-9263-c5715083d91b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.276s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:30:17 compute-0 nova_compute[265391]: 2025-09-30 18:30:17.423 2 DEBUG nova.storage.rbd_utils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] resizing rbd image 87f50c89-ad94-41dc-9263-c5715083d91b_disk to 1073741824 resize /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:288
Sep 30 18:30:17 compute-0 nova_compute[265391]: 2025-09-30 18:30:17.528 2 DEBUG nova.virt.libvirt.driver [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
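Taken together, the import and resize steps above copy the cached base image into the `vms` pool and grow it to the flavor's 1 GiB root disk (the logged target of 1073741824 bytes). Nova performs the resize through librbd; a rough CLI-based equivalent, reusing the pool, image name, client id and conf path seen in the log, might look like:

    import subprocess

    base = "/var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457"
    image = "87f50c89-ad94-41dc-9263-c5715083d91b_disk"
    root_gb = 1                               # m1.nano root disk size
    assert root_gb * 1024**3 == 1073741824    # the resize target logged above

    ceph_args = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    subprocess.check_call(["rbd", "import", "--pool", "vms", base, image,
                           "--image-format=2", *ceph_args])
    subprocess.check_call(["rbd", "resize", "--pool", "vms",
                           "--size", f"{root_gb}G", image, *ceph_args])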
Sep 30 18:30:17 compute-0 nova_compute[265391]: 2025-09-30 18:30:17.529 2 DEBUG nova.virt.libvirt.driver [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Ensure instance console log exists: /var/lib/nova/instances/87f50c89-ad94-41dc-9263-c5715083d91b/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Sep 30 18:30:17 compute-0 nova_compute[265391]: 2025-09-30 18:30:17.530 2 DEBUG oslo_concurrency.lockutils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:30:17 compute-0 nova_compute[265391]: 2025-09-30 18:30:17.531 2 DEBUG oslo_concurrency.lockutils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:30:17 compute-0 nova_compute[265391]: 2025-09-30 18:30:17.531 2 DEBUG oslo_concurrency.lockutils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:30:17 compute-0 nova_compute[265391]: 2025-09-30 18:30:17.628 2 DEBUG nova.network.neutron [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Successfully updated port: 688f7db7-db25-41cc-8c99-d97dc36a693a _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Sep 30 18:30:17 compute-0 nova_compute[265391]: 2025-09-30 18:30:17.719 2 DEBUG nova.compute.manager [req-b6a1b75e-3c3e-47a0-92b0-5eb804be16b3 req-66ded37b-33f7-4a1f-8fdc-fcdf8e1d961b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Received event network-changed-688f7db7-db25-41cc-8c99-d97dc36a693a external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:30:17 compute-0 nova_compute[265391]: 2025-09-30 18:30:17.720 2 DEBUG nova.compute.manager [req-b6a1b75e-3c3e-47a0-92b0-5eb804be16b3 req-66ded37b-33f7-4a1f-8fdc-fcdf8e1d961b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Refreshing instance network info cache due to event network-changed-688f7db7-db25-41cc-8c99-d97dc36a693a. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:30:17 compute-0 nova_compute[265391]: 2025-09-30 18:30:17.720 2 DEBUG oslo_concurrency.lockutils [req-b6a1b75e-3c3e-47a0-92b0-5eb804be16b3 req-66ded37b-33f7-4a1f-8fdc-fcdf8e1d961b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-87f50c89-ad94-41dc-9263-c5715083d91b" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:30:17 compute-0 nova_compute[265391]: 2025-09-30 18:30:17.720 2 DEBUG oslo_concurrency.lockutils [req-b6a1b75e-3c3e-47a0-92b0-5eb804be16b3 req-66ded37b-33f7-4a1f-8fdc-fcdf8e1d961b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-87f50c89-ad94-41dc-9263-c5715083d91b" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:30:17 compute-0 nova_compute[265391]: 2025-09-30 18:30:17.720 2 DEBUG nova.network.neutron [req-b6a1b75e-3c3e-47a0-92b0-5eb804be16b3 req-66ded37b-33f7-4a1f-8fdc-fcdf8e1d961b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Refreshing network info cache for port 688f7db7-db25-41cc-8c99-d97dc36a693a _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:30:17 compute-0 ceph-mon[73755]: pgmap v1549: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 40 GiB / 40 GiB avail; 41 KiB/s rd, 0 B/s wr, 67 op/s
Sep 30 18:30:18 compute-0 nova_compute[265391]: 2025-09-30 18:30:18.135 2 DEBUG oslo_concurrency.lockutils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "refresh_cache-87f50c89-ad94-41dc-9263-c5715083d91b" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:30:18 compute-0 nova_compute[265391]: 2025-09-30 18:30:18.225 2 WARNING neutronclient.v2_0.client [req-b6a1b75e-3c3e-47a0-92b0-5eb804be16b3 req-66ded37b-33f7-4a1f-8fdc-fcdf8e1d961b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:30:18 compute-0 nova_compute[265391]: 2025-09-30 18:30:18.475 2 DEBUG nova.network.neutron [req-b6a1b75e-3c3e-47a0-92b0-5eb804be16b3 req-66ded37b-33f7-4a1f-8fdc-fcdf8e1d961b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:30:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1550: 353 pgs: 353 active+clean; 65 MiB data, 279 MiB used, 40 GiB / 40 GiB avail; 69 KiB/s rd, 901 KiB/s wr, 111 op/s
Sep 30 18:30:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:30:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:30:18.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:30:18 compute-0 nova_compute[265391]: 2025-09-30 18:30:18.701 2 DEBUG nova.network.neutron [req-b6a1b75e-3c3e-47a0-92b0-5eb804be16b3 req-66ded37b-33f7-4a1f-8fdc-fcdf8e1d961b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:30:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:30:18] "GET /metrics HTTP/1.1" 200 46633 "" "Prometheus/2.51.0"
Sep 30 18:30:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:30:18] "GET /metrics HTTP/1.1" 200 46633 "" "Prometheus/2.51.0"
Sep 30 18:30:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:30:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:30:18.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:30:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:30:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:30:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:30:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:30:19 compute-0 nova_compute[265391]: 2025-09-30 18:30:19.216 2 DEBUG oslo_concurrency.lockutils [req-b6a1b75e-3c3e-47a0-92b0-5eb804be16b3 req-66ded37b-33f7-4a1f-8fdc-fcdf8e1d961b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-87f50c89-ad94-41dc-9263-c5715083d91b" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:30:19 compute-0 nova_compute[265391]: 2025-09-30 18:30:19.217 2 DEBUG oslo_concurrency.lockutils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquired lock "refresh_cache-87f50c89-ad94-41dc-9263-c5715083d91b" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:30:19 compute-0 nova_compute[265391]: 2025-09-30 18:30:19.218 2 DEBUG nova.network.neutron [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:30:19 compute-0 nova_compute[265391]: 2025-09-30 18:30:19.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:30:19 compute-0 ceph-mon[73755]: pgmap v1550: 353 pgs: 353 active+clean; 65 MiB data, 279 MiB used, 40 GiB / 40 GiB avail; 69 KiB/s rd, 901 KiB/s wr, 111 op/s
Sep 30 18:30:20 compute-0 nova_compute[265391]: 2025-09-30 18:30:20.471 2 DEBUG nova.network.neutron [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:30:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1551: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 40 GiB / 40 GiB avail; 89 KiB/s rd, 1.8 MiB/s wr, 146 op/s
Sep 30 18:30:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:30:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:30:20.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:30:20 compute-0 nova_compute[265391]: 2025-09-30 18:30:20.689 2 WARNING neutronclient.v2_0.client [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:30:20 compute-0 nova_compute[265391]: 2025-09-30 18:30:20.828 2 DEBUG nova.network.neutron [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Updating instance_info_cache with network_info: [{"id": "688f7db7-db25-41cc-8c99-d97dc36a693a", "address": "fa:16:3e:c5:9e:a8", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap688f7db7-db", "ovs_interfaceid": "688f7db7-db25-41cc-8c99-d97dc36a693a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
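The network_info record cached above is what later drives VIF plugging and the guest XML: it carries the port id, MAC address, tap device name, fixed IP and MTU. A small sketch (not from the log) that pulls those fields out of such a record, trimmed to just the keys used here:

    # Extract the fields of interest from a network_info entry like the one
    # cached above (data abbreviated to the keys actually read below).
    vif = {
        "id": "688f7db7-db25-41cc-8c99-d97dc36a693a",
        "address": "fa:16:3e:c5:9e:a8",
        "devname": "tap688f7db7-db",
        "network": {
            "subnets": [
                {"cidr": "10.100.0.0/28",
                 "ips": [{"address": "10.100.0.4", "type": "fixed"}]}
            ],
            "meta": {"mtu": 1442},
        },
    }

    fixed_ips = [ip["address"]
                 for subnet in vif["network"]["subnets"]
                 for ip in subnet["ips"] if ip["type"] == "fixed"]
    print(vif["devname"], vif["address"], fixed_ips, vif["network"]["meta"]["mtu"])
    # tap688f7db7-db fa:16:3e:c5:9e:a8 ['10.100.0.4'] 1442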
Sep 30 18:30:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:30:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:30:20.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:30:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:30:20 compute-0 nova_compute[265391]: 2025-09-30 18:30:20.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:30:21 compute-0 nova_compute[265391]: 2025-09-30 18:30:21.340 2 DEBUG oslo_concurrency.lockutils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Releasing lock "refresh_cache-87f50c89-ad94-41dc-9263-c5715083d91b" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:30:21 compute-0 nova_compute[265391]: 2025-09-30 18:30:21.341 2 DEBUG nova.compute.manager [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Instance network_info: |[{"id": "688f7db7-db25-41cc-8c99-d97dc36a693a", "address": "fa:16:3e:c5:9e:a8", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap688f7db7-db", "ovs_interfaceid": "688f7db7-db25-41cc-8c99-d97dc36a693a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Sep 30 18:30:21 compute-0 nova_compute[265391]: 2025-09-30 18:30:21.343 2 DEBUG nova.virt.libvirt.driver [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Start _get_guest_xml network_info=[{"id": "688f7db7-db25-41cc-8c99-d97dc36a693a", "address": "fa:16:3e:c5:9e:a8", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap688f7db7-db", "ovs_interfaceid": "688f7db7-db25-41cc-8c99-d97dc36a693a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'image_id': '5b99cbca-b655-4be5-8343-cf504005c42e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Sep 30 18:30:21 compute-0 nova_compute[265391]: 2025-09-30 18:30:21.347 2 WARNING nova.virt.libvirt.driver [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:30:21 compute-0 nova_compute[265391]: 2025-09-30 18:30:21.348 2 DEBUG nova.virt.driver [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='5b99cbca-b655-4be5-8343-cf504005c42e', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteStrategies-server-836614812', uuid='87f50c89-ad94-41dc-9263-c5715083d91b'), owner=OwnerMeta(userid='623ef4a55c9e4fc28bb65e49246b5008', username='tempest-TestExecuteStrategies-1883747907-project-admin', projectid='c634e1c17ed54907969576a0eb8eff50', projectname='tempest-TestExecuteStrategies-1883747907'), image=ImageMeta(id='5b99cbca-b655-4be5-8343-cf504005c42e', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "688f7db7-db25-41cc-8c99-d97dc36a693a", "address": "fa:16:3e:c5:9e:a8", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap688f7db7-db", "ovs_interfaceid": "688f7db7-db25-41cc-8c99-d97dc36a693a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20250919142712.b99a882.el10', creation_time=1759257021.3485372) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Sep 30 18:30:21 compute-0 nova_compute[265391]: 2025-09-30 18:30:21.352 2 DEBUG nova.virt.libvirt.host [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Sep 30 18:30:21 compute-0 nova_compute[265391]: 2025-09-30 18:30:21.353 2 DEBUG nova.virt.libvirt.host [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Sep 30 18:30:21 compute-0 nova_compute[265391]: 2025-09-30 18:30:21.357 2 DEBUG nova.virt.libvirt.host [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Sep 30 18:30:21 compute-0 nova_compute[265391]: 2025-09-30 18:30:21.358 2 DEBUG nova.virt.libvirt.host [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Sep 30 18:30:21 compute-0 nova_compute[265391]: 2025-09-30 18:30:21.359 2 DEBUG nova.virt.libvirt.driver [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Sep 30 18:30:21 compute-0 nova_compute[265391]: 2025-09-30 18:30:21.359 2 DEBUG nova.virt.hardware [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-09-30T18:04:50Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Sep 30 18:30:21 compute-0 nova_compute[265391]: 2025-09-30 18:30:21.360 2 DEBUG nova.virt.hardware [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Sep 30 18:30:21 compute-0 nova_compute[265391]: 2025-09-30 18:30:21.361 2 DEBUG nova.virt.hardware [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Sep 30 18:30:21 compute-0 nova_compute[265391]: 2025-09-30 18:30:21.361 2 DEBUG nova.virt.hardware [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Sep 30 18:30:21 compute-0 nova_compute[265391]: 2025-09-30 18:30:21.361 2 DEBUG nova.virt.hardware [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Sep 30 18:30:21 compute-0 nova_compute[265391]: 2025-09-30 18:30:21.362 2 DEBUG nova.virt.hardware [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Sep 30 18:30:21 compute-0 nova_compute[265391]: 2025-09-30 18:30:21.362 2 DEBUG nova.virt.hardware [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Sep 30 18:30:21 compute-0 nova_compute[265391]: 2025-09-30 18:30:21.363 2 DEBUG nova.virt.hardware [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Sep 30 18:30:21 compute-0 nova_compute[265391]: 2025-09-30 18:30:21.363 2 DEBUG nova.virt.hardware [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Sep 30 18:30:21 compute-0 nova_compute[265391]: 2025-09-30 18:30:21.364 2 DEBUG nova.virt.hardware [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Sep 30 18:30:21 compute-0 nova_compute[265391]: 2025-09-30 18:30:21.364 2 DEBUG nova.virt.hardware [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Sep 30 18:30:21 compute-0 nova_compute[265391]: 2025-09-30 18:30:21.369 2 DEBUG oslo_concurrency.processutils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:30:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:30:21 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/12074083' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:30:21 compute-0 nova_compute[265391]: 2025-09-30 18:30:21.824 2 DEBUG oslo_concurrency.processutils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
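The `ceph mon dump --format=json` calls in this window are how the RBD image backend learns the monitor addresses to embed in the guest's disk definition. A sketch of the same probe, assuming the usual monmap JSON layout with a top-level "mons" list:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    )
    dump = json.loads(out)
    for mon in dump.get("mons", []):
        # Each entry typically carries the monitor name and its address.
        print(mon.get("name"), mon.get("addr"))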
Sep 30 18:30:21 compute-0 nova_compute[265391]: 2025-09-30 18:30:21.850 2 DEBUG nova.storage.rbd_utils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 87f50c89-ad94-41dc-9263-c5715083d91b_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:30:21 compute-0 nova_compute[265391]: 2025-09-30 18:30:21.856 2 DEBUG oslo_concurrency.processutils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:30:21 compute-0 ceph-mon[73755]: pgmap v1551: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 40 GiB / 40 GiB avail; 89 KiB/s rd, 1.8 MiB/s wr, 146 op/s
Sep 30 18:30:21 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/12074083' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:30:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:30:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:30:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:30:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3754482804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:30:22 compute-0 nova_compute[265391]: 2025-09-30 18:30:22.349 2 DEBUG oslo_concurrency.processutils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:30:22 compute-0 nova_compute[265391]: 2025-09-30 18:30:22.351 2 DEBUG nova.virt.libvirt.vif [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:30:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-836614812',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-836614812',id=22,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c634e1c17ed54907969576a0eb8eff50',ramdisk_id='',reservation_id='r-12s0ytrx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteStrategies-1883747907',owner_user_name='tempest-TestExecuteStrategies-1883747907-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:30:15Z,user_data=None,user_id='623ef4a55c9e4fc28bb65e49246b5008',uuid=87f50c89-ad94-41dc-9263-c5715083d91b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "688f7db7-db25-41cc-8c99-d97dc36a693a", "address": "fa:16:3e:c5:9e:a8", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap688f7db7-db", "ovs_interfaceid": "688f7db7-db25-41cc-8c99-d97dc36a693a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:30:22 compute-0 nova_compute[265391]: 2025-09-30 18:30:22.351 2 DEBUG nova.network.os_vif_util [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converting VIF {"id": "688f7db7-db25-41cc-8c99-d97dc36a693a", "address": "fa:16:3e:c5:9e:a8", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap688f7db7-db", "ovs_interfaceid": "688f7db7-db25-41cc-8c99-d97dc36a693a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:30:22 compute-0 nova_compute[265391]: 2025-09-30 18:30:22.352 2 DEBUG nova.network.os_vif_util [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:9e:a8,bridge_name='br-int',has_traffic_filtering=True,id=688f7db7-db25-41cc-8c99-d97dc36a693a,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap688f7db7-db') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:30:22 compute-0 nova_compute[265391]: 2025-09-30 18:30:22.353 2 DEBUG nova.objects.instance [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lazy-loading 'pci_devices' on Instance uuid 87f50c89-ad94-41dc-9263-c5715083d91b obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:30:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1552: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 40 GiB / 40 GiB avail; 89 KiB/s rd, 1.8 MiB/s wr, 146 op/s
Sep 30 18:30:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:30:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:30:22.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:30:22 compute-0 nova_compute[265391]: 2025-09-30 18:30:22.870 2 DEBUG nova.virt.libvirt.driver [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] End _get_guest_xml xml=<domain type="kvm">
Sep 30 18:30:22 compute-0 nova_compute[265391]:   <uuid>87f50c89-ad94-41dc-9263-c5715083d91b</uuid>
Sep 30 18:30:22 compute-0 nova_compute[265391]:   <name>instance-00000016</name>
Sep 30 18:30:22 compute-0 nova_compute[265391]:   <memory>131072</memory>
Sep 30 18:30:22 compute-0 nova_compute[265391]:   <vcpu>1</vcpu>
Sep 30 18:30:22 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:30:22 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteStrategies-server-836614812</nova:name>
Sep 30 18:30:22 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:30:21</nova:creationTime>
Sep 30 18:30:22 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:30:22 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:30:22 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:30:22 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:30:22 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:30:22 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:30:22 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:30:22 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:30:22 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:30:22 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:30:22 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:30:22 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:30:22 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:30:22 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:30:22 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:30:22 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:30:22 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:30:22 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:30:22 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:30:22 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:30:22 compute-0 nova_compute[265391]:         <nova:user uuid="623ef4a55c9e4fc28bb65e49246b5008">tempest-TestExecuteStrategies-1883747907-project-admin</nova:user>
Sep 30 18:30:22 compute-0 nova_compute[265391]:         <nova:project uuid="c634e1c17ed54907969576a0eb8eff50">tempest-TestExecuteStrategies-1883747907</nova:project>
Sep 30 18:30:22 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:30:22 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:30:22 compute-0 nova_compute[265391]:         <nova:port uuid="688f7db7-db25-41cc-8c99-d97dc36a693a">
Sep 30 18:30:22 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:30:22 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:30:22 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:30:22 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <system>
Sep 30 18:30:22 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:30:22 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:30:22 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:30:22 compute-0 nova_compute[265391]:       <entry name="serial">87f50c89-ad94-41dc-9263-c5715083d91b</entry>
Sep 30 18:30:22 compute-0 nova_compute[265391]:       <entry name="uuid">87f50c89-ad94-41dc-9263-c5715083d91b</entry>
Sep 30 18:30:22 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     </system>
Sep 30 18:30:22 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:30:22 compute-0 nova_compute[265391]:   <os>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="q35">hvm</type>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:   </os>
Sep 30 18:30:22 compute-0 nova_compute[265391]:   <features>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <vmcoreinfo/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:   </features>
Sep 30 18:30:22 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:30:22 compute-0 nova_compute[265391]:   <cpu mode="host-model" match="exact">
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <topology sockets="1" cores="1" threads="1"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:30:22 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:30:22 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/87f50c89-ad94-41dc-9263-c5715083d91b_disk">
Sep 30 18:30:22 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:       </source>
Sep 30 18:30:22 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:30:22 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:30:22 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:30:22 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/87f50c89-ad94-41dc-9263-c5715083d91b_disk.config">
Sep 30 18:30:22 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:       </source>
Sep 30 18:30:22 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:30:22 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:30:22 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:30:22 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:c5:9e:a8"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:       <target dev="tap688f7db7-db"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:30:22 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/87f50c89-ad94-41dc-9263-c5715083d91b/console.log" append="off"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <video>
Sep 30 18:30:22 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     </video>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:30:22 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <controller type="usb" index="0"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:30:22 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:30:22 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:30:22 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:30:22 compute-0 nova_compute[265391]: </domain>
Sep 30 18:30:22 compute-0 nova_compute[265391]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
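[editor's note] The _get_guest_xml dump above is the complete libvirt domain definition Nova generated for instance-00000016: the m1.nano flavor metadata, two network disks backed by the Ceph "vms" pool, and a single virtio interface on tap688f7db7-db. A minimal sketch for pulling those same pieces back out of the XML with the Python standard library, assuming the document has been saved to domain.xml (for example via virsh dumpxml instance-00000016 > domain.xml):

    # Minimal sketch: extract disk sources and interface targets from a libvirt
    # domain XML like the one Nova logged above. Assumes the XML was saved to
    # "domain.xml"; the <devices> elements are not namespaced, so plain paths work.
    import xml.etree.ElementTree as ET

    root = ET.parse("domain.xml").getroot()

    for disk in root.findall("./devices/disk"):
        src = disk.find("source")
        tgt = disk.find("target")
        # For the instance above this prints lines such as:
        #   disk vda <- rbd vms/87f50c89-ad94-41dc-9263-c5715083d91b_disk
        print("disk", tgt.get("dev"), "<-", src.get("protocol"), src.get("name"))

    for iface in root.findall("./devices/interface"):
        mac = iface.find("mac").get("address")
        tap = iface.find("target").get("dev")
        print("interface", tap, "mac", mac)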
Sep 30 18:30:22 compute-0 nova_compute[265391]: 2025-09-30 18:30:22.870 2 DEBUG nova.compute.manager [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Preparing to wait for external event network-vif-plugged-688f7db7-db25-41cc-8c99-d97dc36a693a prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:30:22 compute-0 nova_compute[265391]: 2025-09-30 18:30:22.871 2 DEBUG oslo_concurrency.lockutils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "87f50c89-ad94-41dc-9263-c5715083d91b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:30:22 compute-0 nova_compute[265391]: 2025-09-30 18:30:22.871 2 DEBUG oslo_concurrency.lockutils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "87f50c89-ad94-41dc-9263-c5715083d91b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:30:22 compute-0 nova_compute[265391]: 2025-09-30 18:30:22.872 2 DEBUG oslo_concurrency.lockutils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "87f50c89-ad94-41dc-9263-c5715083d91b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:30:22 compute-0 nova_compute[265391]: 2025-09-30 18:30:22.873 2 DEBUG nova.virt.libvirt.vif [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:30:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-836614812',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-836614812',id=22,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c634e1c17ed54907969576a0eb8eff50',ramdisk_id='',reservation_id='r-12s0ytrx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteStrategies-1883747907',owner_user_name='tempest-TestExecuteStrategies-1883747907-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:30:15Z,user_data=None,user_id='623ef4a55c9e4fc28bb65e49246b5008',uuid=87f50c89-ad94-41dc-9263-c5715083d91b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "688f7db7-db25-41cc-8c99-d97dc36a693a", "address": "fa:16:3e:c5:9e:a8", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap688f7db7-db", "ovs_interfaceid": "688f7db7-db25-41cc-8c99-d97dc36a693a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Sep 30 18:30:22 compute-0 nova_compute[265391]: 2025-09-30 18:30:22.873 2 DEBUG nova.network.os_vif_util [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converting VIF {"id": "688f7db7-db25-41cc-8c99-d97dc36a693a", "address": "fa:16:3e:c5:9e:a8", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap688f7db7-db", "ovs_interfaceid": "688f7db7-db25-41cc-8c99-d97dc36a693a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:30:22 compute-0 nova_compute[265391]: 2025-09-30 18:30:22.874 2 DEBUG nova.network.os_vif_util [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:9e:a8,bridge_name='br-int',has_traffic_filtering=True,id=688f7db7-db25-41cc-8c99-d97dc36a693a,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap688f7db7-db') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:30:22 compute-0 nova_compute[265391]: 2025-09-30 18:30:22.875 2 DEBUG os_vif [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:9e:a8,bridge_name='br-int',has_traffic_filtering=True,id=688f7db7-db25-41cc-8c99-d97dc36a693a,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap688f7db7-db') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Sep 30 18:30:22 compute-0 nova_compute[265391]: 2025-09-30 18:30:22.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:30:22 compute-0 nova_compute[265391]: 2025-09-30 18:30:22.877 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:30:22 compute-0 nova_compute[265391]: 2025-09-30 18:30:22.877 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:30:22 compute-0 nova_compute[265391]: 2025-09-30 18:30:22.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:30:22 compute-0 nova_compute[265391]: 2025-09-30 18:30:22.879 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '0e519a53-4a23-51e8-94b4-356923a5f5c5', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:30:22 compute-0 nova_compute[265391]: 2025-09-30 18:30:22.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:30:22 compute-0 nova_compute[265391]: 2025-09-30 18:30:22.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:30:22 compute-0 nova_compute[265391]: 2025-09-30 18:30:22.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:30:22 compute-0 nova_compute[265391]: 2025-09-30 18:30:22.886 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap688f7db7-db, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:30:22 compute-0 nova_compute[265391]: 2025-09-30 18:30:22.887 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap688f7db7-db, col_values=(('qos', UUID('0d303356-f790-4e67-9602-e485bddedde6')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:30:22 compute-0 nova_compute[265391]: 2025-09-30 18:30:22.887 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap688f7db7-db, col_values=(('external_ids', {'iface-id': '688f7db7-db25-41cc-8c99-d97dc36a693a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c5:9e:a8', 'vm-uuid': '87f50c89-ad94-41dc-9263-c5715083d91b'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:30:22 compute-0 nova_compute[265391]: 2025-09-30 18:30:22.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:30:22 compute-0 NetworkManager[45059]: <info>  [1759257022.8896] manager: (tap688f7db7-db): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Sep 30 18:30:22 compute-0 nova_compute[265391]: 2025-09-30 18:30:22.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:30:22 compute-0 nova_compute[265391]: 2025-09-30 18:30:22.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:30:22 compute-0 nova_compute[265391]: 2025-09-30 18:30:22.899 2 INFO os_vif [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:9e:a8,bridge_name='br-int',has_traffic_filtering=True,id=688f7db7-db25-41cc-8c99-d97dc36a693a,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap688f7db7-db')
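[editor's note] The ovsdbapp transactions above (AddBridgeCommand, AddPortCommand, DbSetCommand) add tap688f7db7-db to br-int and stamp the Interface external_ids with the Neutron port id, MAC and instance UUID so ovn-controller can bind the port. A rough sketch of the resulting end state expressed as ovs-vsctl calls; this is only an illustration under that assumption, not the code path os-vif actually uses (it talks to OVSDB directly), and the QoS row step from the log is omitted:

    # Sketch: the end state of the os-vif plug above, expressed as ovs-vsctl calls.
    import subprocess

    bridge = "br-int"
    port = "tap688f7db7-db"
    neutron_port_id = "688f7db7-db25-41cc-8c99-d97dc36a693a"
    mac = "fa:16:3e:c5:9e:a8"
    vm_uuid = "87f50c89-ad94-41dc-9263-c5715083d91b"

    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", bridge, port,
         "--", "set", "Interface", port,
         f"external_ids:iface-id={neutron_port_id}",
         "external_ids:iface-status=active",
         f"external_ids:attached-mac={mac}",
         f"external_ids:vm-uuid={vm_uuid}"],
        check=True)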
Sep 30 18:30:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:30:22 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3754482804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:30:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:30:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:30:22.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:30:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:30:23.812Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:30:23 compute-0 ceph-mon[73755]: pgmap v1552: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 40 GiB / 40 GiB avail; 89 KiB/s rd, 1.8 MiB/s wr, 146 op/s
Sep 30 18:30:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:30:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:30:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:30:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:30:24 compute-0 nova_compute[265391]: 2025-09-30 18:30:24.434 2 DEBUG nova.virt.libvirt.driver [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:30:24 compute-0 nova_compute[265391]: 2025-09-30 18:30:24.435 2 DEBUG nova.virt.libvirt.driver [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:30:24 compute-0 nova_compute[265391]: 2025-09-30 18:30:24.435 2 DEBUG nova.virt.libvirt.driver [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] No VIF found with MAC fa:16:3e:c5:9e:a8, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Sep 30 18:30:24 compute-0 nova_compute[265391]: 2025-09-30 18:30:24.435 2 INFO nova.virt.libvirt.driver [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Using config drive
Sep 30 18:30:24 compute-0 nova_compute[265391]: 2025-09-30 18:30:24.467 2 DEBUG nova.storage.rbd_utils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 87f50c89-ad94-41dc-9263-c5715083d91b_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:30:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1553: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 40 GiB / 40 GiB avail; 89 KiB/s rd, 1.8 MiB/s wr, 146 op/s
Sep 30 18:30:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:30:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:30:24.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:30:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:30:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:30:24.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:30:24 compute-0 nova_compute[265391]: 2025-09-30 18:30:24.989 2 WARNING neutronclient.v2_0.client [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:30:25 compute-0 nova_compute[265391]: 2025-09-30 18:30:25.562 2 INFO nova.virt.libvirt.driver [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Creating config drive at /var/lib/nova/instances/87f50c89-ad94-41dc-9263-c5715083d91b/disk.config
Sep 30 18:30:25 compute-0 nova_compute[265391]: 2025-09-30 18:30:25.571 2 DEBUG oslo_concurrency.processutils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/87f50c89-ad94-41dc-9263-c5715083d91b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmpm4cnex5j execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:30:25 compute-0 nova_compute[265391]: 2025-09-30 18:30:25.722 2 DEBUG oslo_concurrency.processutils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/87f50c89-ad94-41dc-9263-c5715083d91b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmpm4cnex5j" returned: 0 in 0.151s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:30:25 compute-0 nova_compute[265391]: 2025-09-30 18:30:25.756 2 DEBUG nova.storage.rbd_utils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 87f50c89-ad94-41dc-9263-c5715083d91b_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:30:25 compute-0 nova_compute[265391]: 2025-09-30 18:30:25.762 2 DEBUG oslo_concurrency.processutils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/87f50c89-ad94-41dc-9263-c5715083d91b/disk.config 87f50c89-ad94-41dc-9263-c5715083d91b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:30:25 compute-0 nova_compute[265391]: 2025-09-30 18:30:25.940 2 DEBUG oslo_concurrency.processutils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/87f50c89-ad94-41dc-9263-c5715083d91b/disk.config 87f50c89-ad94-41dc-9263-c5715083d91b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:30:25 compute-0 nova_compute[265391]: 2025-09-30 18:30:25.941 2 INFO nova.virt.libvirt.driver [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Deleting local config drive /var/lib/nova/instances/87f50c89-ad94-41dc-9263-c5715083d91b/disk.config because it was imported into RBD.
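[editor's note] The config drive is built locally with mkisofs and then imported into the Ceph "vms" pool as <uuid>_disk.config before the local ISO is deleted, which is why the cdrom device in the domain XML points at an RBD image. A sketch reproducing the two commands the log shows; the metadata directory is a placeholder (Nova used a temporary directory, /tmp/tmpm4cnex5j), and the -publisher/-quiet options are dropped for brevity:

    # Sketch: build the config-drive ISO and import it into the "vms" pool,
    # mirroring the two commands logged above.
    import subprocess

    uuid = "87f50c89-ad94-41dc-9263-c5715083d91b"
    iso = f"/var/lib/nova/instances/{uuid}/disk.config"
    metadata_dir = "/path/to/configdrive/contents"  # placeholder

    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-J", "-r", "-V", "config-2", metadata_dir],
        check=True)

    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{uuid}_disk.config",
         "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True)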
Sep 30 18:30:25 compute-0 ceph-mon[73755]: pgmap v1553: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 40 GiB / 40 GiB avail; 89 KiB/s rd, 1.8 MiB/s wr, 146 op/s
Sep 30 18:30:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:30:25 compute-0 nova_compute[265391]: 2025-09-30 18:30:25.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:30:25 compute-0 kernel: tap688f7db7-db: entered promiscuous mode
Sep 30 18:30:25 compute-0 NetworkManager[45059]: <info>  [1759257025.9968] manager: (tap688f7db7-db): new Tun device (/org/freedesktop/NetworkManager/Devices/75)
Sep 30 18:30:25 compute-0 ovn_controller[156242]: 2025-09-30T18:30:25Z|00178|binding|INFO|Claiming lport 688f7db7-db25-41cc-8c99-d97dc36a693a for this chassis.
Sep 30 18:30:25 compute-0 ovn_controller[156242]: 2025-09-30T18:30:25Z|00179|binding|INFO|688f7db7-db25-41cc-8c99-d97dc36a693a: Claiming fa:16:3e:c5:9e:a8 10.100.0.4
Sep 30 18:30:25 compute-0 nova_compute[265391]: 2025-09-30 18:30:25.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:26.005 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:9e:a8 10.100.0.4'], port_security=['fa:16:3e:c5:9e:a8 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '87f50c89-ad94-41dc-9263-c5715083d91b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6901f664-336b-42d2-bbf7-58951befc8d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c634e1c17ed54907969576a0eb8eff50', 'neutron:revision_number': '4', 'neutron:security_group_ids': '11cf84c1-9641-409c-b5f5-6c5fe3a8afe5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c24d9de-651b-4bf8-842a-1286ab88b11d, chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=688f7db7-db25-41cc-8c99-d97dc36a693a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:26.006 166158 INFO neutron.agent.ovn.metadata.agent [-] Port 688f7db7-db25-41cc-8c99-d97dc36a693a in datapath 6901f664-336b-42d2-bbf7-58951befc8d1 bound to our chassis
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:26.007 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6901f664-336b-42d2-bbf7-58951befc8d1
Sep 30 18:30:26 compute-0 ovn_controller[156242]: 2025-09-30T18:30:26Z|00180|binding|INFO|Setting lport 688f7db7-db25-41cc-8c99-d97dc36a693a ovn-installed in OVS
Sep 30 18:30:26 compute-0 ovn_controller[156242]: 2025-09-30T18:30:26Z|00181|binding|INFO|Setting lport 688f7db7-db25-41cc-8c99-d97dc36a693a up in Southbound
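[editor's note] ovn-controller has now claimed the logical port for this chassis and marked it up in the Southbound database; that is what will eventually trigger the network-vif-plugged event Nova is waiting for. A sketch of how one might confirm the binding from the command line, assuming ovn-sbctl is available on this host and can reach the SB database:

    # Sketch: query the OVN Southbound Port_Binding row for the logical port
    # that ovn-controller just claimed above.
    import subprocess

    logical_port = "688f7db7-db25-41cc-8c99-d97dc36a693a"
    result = subprocess.run(
        ["ovn-sbctl", "--columns=chassis,up,mac", "find", "Port_Binding",
         f"logical_port={logical_port}"],
        check=True, capture_output=True, text=True)
    print(result.stdout)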
Sep 30 18:30:26 compute-0 nova_compute[265391]: 2025-09-30 18:30:26.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:26.022 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[a762ad81-24b5-41e9-ac0a-10b19285620e]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:26.023 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6901f664-31 in ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1 namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:26.024 292173 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6901f664-30 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:26.024 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[aff9bd3b-146a-41f9-a494-51d7da274cf1]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:26.025 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[aca7be83-8b9f-4071-90a2-4f3699389b7f]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:30:26 compute-0 systemd-machined[219917]: New machine qemu-15-instance-00000016.
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:26.041 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[efe58cba-bad6-433f-995f-a993349f83bc]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:30:26 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-00000016.
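[editor's note] systemd-machined registering and starting qemu-15-instance-00000016 means libvirt has launched the qemu process for the instance. A sketch of a quick follow-up check with virsh; depending on how libvirt is deployed here it may need to be run inside the libvirt container rather than directly on the host:

    # Sketch: confirm the domain systemd just reported as started is running.
    import subprocess

    for cmd in ("domstate", "dominfo"):
        subprocess.run(["virsh", "--connect", "qemu:///system", cmd,
                        "instance-00000016"], check=True)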
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:26.057 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[135df424-d365-45e5-a622-e747b29a051c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:30:26 compute-0 systemd-udevd[331003]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:30:26 compute-0 NetworkManager[45059]: <info>  [1759257026.0797] device (tap688f7db7-db): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 18:30:26 compute-0 NetworkManager[45059]: <info>  [1759257026.0808] device (tap688f7db7-db): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:26.095 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[4c620290-0d73-4b06-8009-3f663add64c0]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:26.100 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[cbbd0bf1-0555-49d8-ad77-0ab11e847e2d]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:30:26 compute-0 NetworkManager[45059]: <info>  [1759257026.1011] manager: (tap6901f664-30): new Veth device (/org/freedesktop/NetworkManager/Devices/76)
Sep 30 18:30:26 compute-0 systemd-udevd[331007]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:26.134 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[b6084c69-e8e6-4759-8a1c-09b914f2db6e]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:26.136 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[8ede313c-f5ba-4048-95de-4a512e6879f9]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:30:26 compute-0 NetworkManager[45059]: <info>  [1759257026.1608] device (tap6901f664-30): carrier: link connected
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:26.168 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[b8c88d2b-9b76-45f5-926a-d6d027385bc1]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:26.186 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[403949e5-f2a8-44a2-bfad-b4d4ed3229d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6901f664-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:41:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562485, 'reachable_time': 41624, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331033, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:26.201 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[942a6fa2-0674-4fa8-856b-eea32febaf9a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe35:412a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 562485, 'tstamp': 562485}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331034, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:26.220 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[4db79151-e1d0-43e4-a14c-b61efe518702]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6901f664-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:41:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562485, 'reachable_time': 41624, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 331035, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:26.259 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[09f84185-5a3b-4ee0-a43d-4adbf3fe89ac]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:26.322 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[d76cf039-2a5b-41cc-99ec-cadb75c06295]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:26.324 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6901f664-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:26.324 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:26.324 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6901f664-30, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:30:26 compute-0 NetworkManager[45059]: <info>  [1759257026.3270] manager: (tap6901f664-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Sep 30 18:30:26 compute-0 kernel: tap6901f664-30: entered promiscuous mode
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:26.329 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6901f664-30, col_values=(('external_ids', {'iface-id': '5b6cbf18-1826-41d0-920f-e9db4f1a1832'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:30:26 compute-0 ovn_controller[156242]: 2025-09-30T18:30:26Z|00182|binding|INFO|Releasing lport 5b6cbf18-1826-41d0-920f-e9db4f1a1832 from this chassis (sb_readonly=0)
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:26.332 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[dc8b6e2a-ecde-45b1-bf2d-8319d5290107]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:26.333 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:26.333 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:26.333 166158 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for 6901f664-336b-42d2-bbf7-58951befc8d1 disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:26.333 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:26.333 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[7d9b53cb-5aea-4d1f-871b-4317f0b38b9b]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:26.334 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:26.334 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[379d12a9-2970-461a-8540-440c04914ace]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:26.335 166158 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: global
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]:     log         /dev/log local0 debug
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]:     log-tag     haproxy-metadata-proxy-6901f664-336b-42d2-bbf7-58951befc8d1
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]:     user        root
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]:     group       root
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]:     maxconn     1024
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]:     pidfile     /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]:     daemon
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: defaults
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]:     log global
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]:     mode http
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]:     option httplog
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]:     option dontlognull
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]:     option http-server-close
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]:     option forwardfor
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]:     retries                 3
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]:     timeout http-request    30s
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]:     timeout connect         30s
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]:     timeout client          32s
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]:     timeout server          32s
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]:     timeout http-keep-alive 30s
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: listen listener
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]:     bind 169.254.169.254:80
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]:     
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]:     server metadata /var/lib/neutron/metadata_proxy
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]:     http-request add-header X-OVN-Network-ID 6901f664-336b-42d2-bbf7-58951befc8d1
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Sep 30 18:30:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:26.335 166158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'env', 'PROCESS_TAG=haproxy-6901f664-336b-42d2-bbf7-58951befc8d1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6901f664-336b-42d2-bbf7-58951befc8d1.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
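[editor's note] The entries above show the usual first-spawn sequence for an OVN metadata proxy: the agent probes the per-network haproxy pidfile (the repeated "[Errno 2] No such file or directory" lines mean no proxy is running for this network yet), renders the haproxy configuration printed above, and then launches haproxy inside the ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1 namespace via neutron-rootwrap. Below is a minimal sketch of that pidfile probe, assuming only the path taken from the log; the read_pid helper is illustrative and is not Neutron's own code.

# Sketch of the pidfile probe seen in the log above (get_value_from_file lookups).
# Only the path is taken from the log; the helper itself is hypothetical.
from pathlib import Path

PID_FILE = Path("/var/lib/neutron/external/pids/"
                "6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy")

def read_pid(path: Path) -> int | None:
    """Return the PID recorded in the file, or None if no proxy was started yet."""
    try:
        return int(path.read_text().strip())
    except (FileNotFoundError, ValueError):
        # FileNotFoundError corresponds to the "[Errno 2]" entries above:
        # no haproxy has been spawned for this network so far.
        return None

if __name__ == "__main__":
    pid = read_pid(PID_FILE)
    print("haproxy not started yet" if pid is None else f"haproxy pid {pid}")

[end editor's note]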
Sep 30 18:30:26 compute-0 nova_compute[265391]: 2025-09-30 18:30:26.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:30:26 compute-0 nova_compute[265391]: 2025-09-30 18:30:26.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:30:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1554: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 40 GiB / 40 GiB avail; 49 KiB/s rd, 1.8 MiB/s wr, 79 op/s
Sep 30 18:30:26 compute-0 nova_compute[265391]: 2025-09-30 18:30:26.580 2 DEBUG nova.compute.manager [req-ed4287da-b194-4348-91da-cc7cbca3358c req-3ef3d4a4-9a8a-4362-bfe1-5ffa36591318 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Received event network-vif-plugged-688f7db7-db25-41cc-8c99-d97dc36a693a external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:30:26 compute-0 nova_compute[265391]: 2025-09-30 18:30:26.580 2 DEBUG oslo_concurrency.lockutils [req-ed4287da-b194-4348-91da-cc7cbca3358c req-3ef3d4a4-9a8a-4362-bfe1-5ffa36591318 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "87f50c89-ad94-41dc-9263-c5715083d91b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:30:26 compute-0 nova_compute[265391]: 2025-09-30 18:30:26.580 2 DEBUG oslo_concurrency.lockutils [req-ed4287da-b194-4348-91da-cc7cbca3358c req-3ef3d4a4-9a8a-4362-bfe1-5ffa36591318 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "87f50c89-ad94-41dc-9263-c5715083d91b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:30:26 compute-0 nova_compute[265391]: 2025-09-30 18:30:26.580 2 DEBUG oslo_concurrency.lockutils [req-ed4287da-b194-4348-91da-cc7cbca3358c req-3ef3d4a4-9a8a-4362-bfe1-5ffa36591318 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "87f50c89-ad94-41dc-9263-c5715083d91b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:30:26 compute-0 nova_compute[265391]: 2025-09-30 18:30:26.581 2 DEBUG nova.compute.manager [req-ed4287da-b194-4348-91da-cc7cbca3358c req-3ef3d4a4-9a8a-4362-bfe1-5ffa36591318 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Processing event network-vif-plugged-688f7db7-db25-41cc-8c99-d97dc36a693a _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:30:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:30:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:30:26.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:30:26 compute-0 podman[331108]: 2025-09-30 18:30:26.742399616 +0000 UTC m=+0.054227467 container create 9981e5a67f11a4fbba5fe4bd0e589b268e47786717fbe6fb055699a4e9504920 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Sep 30 18:30:26 compute-0 systemd[1]: Started libpod-conmon-9981e5a67f11a4fbba5fe4bd0e589b268e47786717fbe6fb055699a4e9504920.scope.
Sep 30 18:30:26 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:30:26 compute-0 podman[331108]: 2025-09-30 18:30:26.714504003 +0000 UTC m=+0.026331874 image pull 8925e336c8a9c8e5b40d98e4715ad60992d6f21ad0a398170a686c34f922c024 38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Sep 30 18:30:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51f6eb0e1a79ffc0642adab20aa024446318f120575687c49d1f560926be481e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Sep 30 18:30:26 compute-0 podman[331108]: 2025-09-30 18:30:26.828001525 +0000 UTC m=+0.139829396 container init 9981e5a67f11a4fbba5fe4bd0e589b268e47786717fbe6fb055699a4e9504920 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:30:26 compute-0 podman[331108]: 2025-09-30 18:30:26.834696759 +0000 UTC m=+0.146524610 container start 9981e5a67f11a4fbba5fe4bd0e589b268e47786717fbe6fb055699a4e9504920 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20250930, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest)
Sep 30 18:30:26 compute-0 podman[331125]: 2025-09-30 18:30:26.861306979 +0000 UTC m=+0.073387874 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.build-date=20250930)
Sep 30 18:30:26 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[331145]: [NOTICE]   (331181) : New worker (331189) forked
Sep 30 18:30:26 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[331145]: [NOTICE]   (331181) : Loading success.
Sep 30 18:30:26 compute-0 podman[331126]: 2025-09-30 18:30:26.864264175 +0000 UTC m=+0.079309417 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, name=ubi9-minimal, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible)
Sep 30 18:30:26 compute-0 podman[331122]: 2025-09-30 18:30:26.875872306 +0000 UTC m=+0.098846114 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd)
Sep 30 18:30:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:30:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:30:26.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:30:27 compute-0 nova_compute[265391]: 2025-09-30 18:30:27.154 2 DEBUG nova.compute.manager [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:30:27 compute-0 nova_compute[265391]: 2025-09-30 18:30:27.159 2 DEBUG nova.virt.libvirt.driver [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Sep 30 18:30:27 compute-0 nova_compute[265391]: 2025-09-30 18:30:27.163 2 INFO nova.virt.libvirt.driver [-] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Instance spawned successfully.
Sep 30 18:30:27 compute-0 nova_compute[265391]: 2025-09-30 18:30:27.163 2 DEBUG nova.virt.libvirt.driver [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Sep 30 18:30:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:30:27.274Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:30:27 compute-0 nova_compute[265391]: 2025-09-30 18:30:27.675 2 DEBUG nova.virt.libvirt.driver [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:30:27 compute-0 nova_compute[265391]: 2025-09-30 18:30:27.676 2 DEBUG nova.virt.libvirt.driver [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:30:27 compute-0 nova_compute[265391]: 2025-09-30 18:30:27.676 2 DEBUG nova.virt.libvirt.driver [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:30:27 compute-0 nova_compute[265391]: 2025-09-30 18:30:27.676 2 DEBUG nova.virt.libvirt.driver [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:30:27 compute-0 nova_compute[265391]: 2025-09-30 18:30:27.677 2 DEBUG nova.virt.libvirt.driver [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:30:27 compute-0 nova_compute[265391]: 2025-09-30 18:30:27.677 2 DEBUG nova.virt.libvirt.driver [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:30:27 compute-0 nova_compute[265391]: 2025-09-30 18:30:27.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:30:27 compute-0 ceph-mon[73755]: pgmap v1554: 353 pgs: 353 active+clean; 88 MiB data, 289 MiB used, 40 GiB / 40 GiB avail; 49 KiB/s rd, 1.8 MiB/s wr, 79 op/s
Sep 30 18:30:28 compute-0 nova_compute[265391]: 2025-09-30 18:30:28.187 2 INFO nova.compute.manager [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Took 11.34 seconds to spawn the instance on the hypervisor.
Sep 30 18:30:28 compute-0 nova_compute[265391]: 2025-09-30 18:30:28.188 2 DEBUG nova.compute.manager [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Sep 30 18:30:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1555: 353 pgs: 353 active+clean; 88 MiB data, 290 MiB used, 40 GiB / 40 GiB avail; 603 KiB/s rd, 1.8 MiB/s wr, 106 op/s
Sep 30 18:30:28 compute-0 nova_compute[265391]: 2025-09-30 18:30:28.649 2 DEBUG nova.compute.manager [req-f4e5a7b6-0cc8-4ca9-9643-e4b6944f5b03 req-b9d08f76-0776-4f92-801c-55b425bd3814 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Received event network-vif-plugged-688f7db7-db25-41cc-8c99-d97dc36a693a external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:30:28 compute-0 nova_compute[265391]: 2025-09-30 18:30:28.650 2 DEBUG oslo_concurrency.lockutils [req-f4e5a7b6-0cc8-4ca9-9643-e4b6944f5b03 req-b9d08f76-0776-4f92-801c-55b425bd3814 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "87f50c89-ad94-41dc-9263-c5715083d91b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:30:28 compute-0 nova_compute[265391]: 2025-09-30 18:30:28.650 2 DEBUG oslo_concurrency.lockutils [req-f4e5a7b6-0cc8-4ca9-9643-e4b6944f5b03 req-b9d08f76-0776-4f92-801c-55b425bd3814 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "87f50c89-ad94-41dc-9263-c5715083d91b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:30:28 compute-0 nova_compute[265391]: 2025-09-30 18:30:28.650 2 DEBUG oslo_concurrency.lockutils [req-f4e5a7b6-0cc8-4ca9-9643-e4b6944f5b03 req-b9d08f76-0776-4f92-801c-55b425bd3814 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "87f50c89-ad94-41dc-9263-c5715083d91b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:30:28 compute-0 nova_compute[265391]: 2025-09-30 18:30:28.650 2 DEBUG nova.compute.manager [req-f4e5a7b6-0cc8-4ca9-9643-e4b6944f5b03 req-b9d08f76-0776-4f92-801c-55b425bd3814 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] No waiting events found dispatching network-vif-plugged-688f7db7-db25-41cc-8c99-d97dc36a693a pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:30:28 compute-0 nova_compute[265391]: 2025-09-30 18:30:28.651 2 WARNING nova.compute.manager [req-f4e5a7b6-0cc8-4ca9-9643-e4b6944f5b03 req-b9d08f76-0776-4f92-801c-55b425bd3814 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Received unexpected event network-vif-plugged-688f7db7-db25-41cc-8c99-d97dc36a693a for instance with vm_state active and task_state None.
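[editor's note] The acquire/release/pop sequence above is Nova's external-event bookkeeping: each network-vif-plugged event is popped under a per-instance "<uuid>-events" lock, and when no waiter is registered the manager logs the event as unexpected, exactly as in the WARNING above. A minimal sketch of that pattern follows, using plain threading rather than Nova's oslo.concurrency wrappers; the InstanceEvents class and its method names here are illustrative only, not Nova's actual implementation.

# Sketch (hypothetical, not Nova's code) of popping a waiter for an external event
# under a per-instance lock; if nothing is registered, the event is "unexpected".
import threading

class InstanceEvents:
    def __init__(self) -> None:
        self._lock = threading.Lock()
        # (instance_uuid, event_name) -> threading.Event set by the waiter
        self._waiters: dict[tuple[str, str], threading.Event] = {}

    def prepare(self, instance_uuid: str, event_name: str) -> threading.Event:
        """Register interest in an event before starting the operation."""
        ev = threading.Event()
        with self._lock:
            self._waiters[(instance_uuid, event_name)] = ev
        return ev

    def pop(self, instance_uuid: str, event_name: str) -> threading.Event | None:
        """Called when the external event arrives; None means nobody is waiting."""
        with self._lock:
            return self._waiters.pop((instance_uuid, event_name), None)

events = InstanceEvents()
waiter = events.pop("87f50c89-ad94-41dc-9263-c5715083d91b",
                    "network-vif-plugged-688f7db7-db25-41cc-8c99-d97dc36a693a")
if waiter is None:
    print("no waiting events found; treating event as unexpected")

[end editor's note]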
Sep 30 18:30:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:30:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:30:28.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:30:28 compute-0 nova_compute[265391]: 2025-09-30 18:30:28.728 2 INFO nova.compute.manager [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Took 17.40 seconds to build instance.
Sep 30 18:30:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:30:28] "GET /metrics HTTP/1.1" 200 46652 "" "Prometheus/2.51.0"
Sep 30 18:30:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:30:28] "GET /metrics HTTP/1.1" 200 46652 "" "Prometheus/2.51.0"
Sep 30 18:30:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:30:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:30:28.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:30:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:30:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:30:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:30:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:30:29 compute-0 nova_compute[265391]: 2025-09-30 18:30:29.234 2 DEBUG oslo_concurrency.lockutils [None req-13bcc3d8-e744-4f65-a525-3aa9c2402489 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "87f50c89-ad94-41dc-9263-c5715083d91b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.928s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:30:29 compute-0 podman[276673]: time="2025-09-30T18:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:30:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:30:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10755 "" "Go-http-client/1.1"
Sep 30 18:30:29 compute-0 ceph-mon[73755]: pgmap v1555: 353 pgs: 353 active+clean; 88 MiB data, 290 MiB used, 40 GiB / 40 GiB avail; 603 KiB/s rd, 1.8 MiB/s wr, 106 op/s
Sep 30 18:30:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1556: 353 pgs: 353 active+clean; 88 MiB data, 290 MiB used, 40 GiB / 40 GiB avail; 1.5 MiB/s rd, 927 KiB/s wr, 97 op/s
Sep 30 18:30:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:30:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:30:30.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:30:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:30:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:30:30.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:30:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:30:30 compute-0 nova_compute[265391]: 2025-09-30 18:30:30.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:30:31 compute-0 openstack_network_exporter[279566]: ERROR   18:30:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:30:31 compute-0 openstack_network_exporter[279566]: ERROR   18:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:30:31 compute-0 openstack_network_exporter[279566]: ERROR   18:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:30:31 compute-0 openstack_network_exporter[279566]: ERROR   18:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:30:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:30:31 compute-0 openstack_network_exporter[279566]: ERROR   18:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:30:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:30:32 compute-0 ceph-mon[73755]: pgmap v1556: 353 pgs: 353 active+clean; 88 MiB data, 290 MiB used, 40 GiB / 40 GiB avail; 1.5 MiB/s rd, 927 KiB/s wr, 97 op/s
Sep 30 18:30:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1557: 353 pgs: 353 active+clean; 88 MiB data, 290 MiB used, 40 GiB / 40 GiB avail; 1.5 MiB/s rd, 13 KiB/s wr, 61 op/s
Sep 30 18:30:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:30:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:30:32.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:30:32 compute-0 nova_compute[265391]: 2025-09-30 18:30:32.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:30:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:30:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:30:32.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:30:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:30:33.813Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:30:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:30:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:30:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:30:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:30:34 compute-0 ceph-mon[73755]: pgmap v1557: 353 pgs: 353 active+clean; 88 MiB data, 290 MiB used, 40 GiB / 40 GiB avail; 1.5 MiB/s rd, 13 KiB/s wr, 61 op/s
Sep 30 18:30:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1558: 353 pgs: 353 active+clean; 88 MiB data, 290 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:30:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:30:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:30:34.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:30:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:30:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:30:34.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:30:35 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2703800171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:30:35 compute-0 sudo[331210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:30:35 compute-0 sudo[331210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:30:35 compute-0 sudo[331210]: pam_unix(sudo:session): session closed for user root
Sep 30 18:30:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:30:35 compute-0 nova_compute[265391]: 2025-09-30 18:30:35.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:30:36 compute-0 ceph-mon[73755]: pgmap v1558: 353 pgs: 353 active+clean; 88 MiB data, 290 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:30:36 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Sep 30 18:30:36 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:30:36.067176) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 18:30:36 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Sep 30 18:30:36 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257036067225, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1509, "num_deletes": 251, "total_data_size": 2661849, "memory_usage": 2700856, "flush_reason": "Manual Compaction"}
Sep 30 18:30:36 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Sep 30 18:30:36 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257036079524, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 2587406, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38958, "largest_seqno": 40466, "table_properties": {"data_size": 2580464, "index_size": 3949, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15023, "raw_average_key_size": 20, "raw_value_size": 2566478, "raw_average_value_size": 3454, "num_data_blocks": 173, "num_entries": 743, "num_filter_entries": 743, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759256896, "oldest_key_time": 1759256896, "file_creation_time": 1759257036, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:30:36 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 12503 microseconds, and 7012 cpu microseconds.
Sep 30 18:30:36 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:30:36 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:30:36.079677) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 2587406 bytes OK
Sep 30 18:30:36 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:30:36.079733) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Sep 30 18:30:36 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:30:36.081583) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Sep 30 18:30:36 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:30:36.081600) EVENT_LOG_v1 {"time_micros": 1759257036081594, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 18:30:36 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:30:36.081621) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 18:30:36 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 2655307, prev total WAL file size 2655307, number of live WAL files 2.
Sep 30 18:30:36 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:30:36 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:30:36.082761) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Sep 30 18:30:36 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 18:30:36 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(2526KB)], [86(10MB)]
Sep 30 18:30:36 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257036082817, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 13756433, "oldest_snapshot_seqno": -1}
Sep 30 18:30:36 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 6746 keys, 11791029 bytes, temperature: kUnknown
Sep 30 18:30:36 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257036128658, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 11791029, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11749452, "index_size": 23609, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16901, "raw_key_size": 174111, "raw_average_key_size": 25, "raw_value_size": 11631864, "raw_average_value_size": 1724, "num_data_blocks": 937, "num_entries": 6746, "num_filter_entries": 6746, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759257036, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:30:36 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:30:36 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:30:36.128936) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 11791029 bytes
Sep 30 18:30:36 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:30:36.129900) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 299.6 rd, 256.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 10.7 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(9.9) write-amplify(4.6) OK, records in: 7264, records dropped: 518 output_compression: NoCompression
Sep 30 18:30:36 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:30:36.129918) EVENT_LOG_v1 {"time_micros": 1759257036129909, "job": 50, "event": "compaction_finished", "compaction_time_micros": 45919, "compaction_time_cpu_micros": 24084, "output_level": 6, "num_output_files": 1, "total_output_size": 11791029, "num_input_records": 7264, "num_output_records": 6746, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 18:30:36 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:30:36 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257036130430, "job": 50, "event": "table_file_deletion", "file_number": 88}
Sep 30 18:30:36 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:30:36 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257036132074, "job": 50, "event": "table_file_deletion", "file_number": 86}
Sep 30 18:30:36 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:30:36.082666) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:30:36 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:30:36.132134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:30:36 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:30:36.132141) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:30:36 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:30:36.132143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:30:36 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:30:36.132145) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:30:36 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:30:36.132147) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
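[editor's note] The JOB 50 summary above reports "MB in(2.5, 10.7 ...) out(11.2 ...), read-write-amplify(9.9) write-amplify(4.6)". Those figures can be cross-checked from the byte counts logged for the same job. The arithmetic below uses only numbers that appear in the log lines above; the amplification formulas (bytes written relative to the freshly flushed L0 bytes) are the conventional RocksDB ones and are an assumption here, not something stated in this log.

# Consistency check of the rocksdb JOB 50 compaction summary above.
l0_input = 2_587_406      # table #88, the freshly flushed L0 file (JOB 49)
total_in = 13_756_433     # "input_data_size" for job 50 (L0 file 88 + L6 file 86)
l6_input = total_in - l0_input
written  = 11_791_029     # new L6 table #89

write_amp = written / l0_input                    # ~4.56  -> logged as 4.6
read_write_amp = (total_in + written) / l0_input  # ~9.87  -> logged as 9.9

print(f"L6 input ~ {l6_input / 2**20:.1f} MiB")   # ~10.7 MiB, matches the summary
print(f"write-amplify      ~ {write_amp:.1f}")
print(f"read-write-amplify ~ {read_write_amp:.1f}")

[end editor's note]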
Sep 30 18:30:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:30:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3601109290' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:30:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:30:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3601109290' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:30:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1559: 353 pgs: 353 active+clean; 88 MiB data, 290 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:30:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:30:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:30:36.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:30:36 compute-0 nova_compute[265391]: 2025-09-30 18:30:36.938 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:30:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:30:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:30:36.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:30:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3601109290' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:30:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3601109290' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:30:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:30:37.275Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:30:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:30:37.276Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:30:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:30:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:30:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:30:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:30:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:30:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:30:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:30:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:30:37 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Sep 30 18:30:37 compute-0 nova_compute[265391]: 2025-09-30 18:30:37.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:30:38 compute-0 ceph-mon[73755]: pgmap v1559: 353 pgs: 353 active+clean; 88 MiB data, 290 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:30:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:30:38 compute-0 nova_compute[265391]: 2025-09-30 18:30:38.429 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:30:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1560: 353 pgs: 353 active+clean; 102 MiB data, 321 MiB used, 40 GiB / 40 GiB avail; 2.1 MiB/s rd, 1.1 MiB/s wr, 102 op/s
Sep 30 18:30:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:30:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:30:38.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:30:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:30:38] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:30:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:30:38] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:30:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:38.948 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:30:38 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:38.949 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:30:38 compute-0 nova_compute[265391]: 2025-09-30 18:30:38.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:30:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:30:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:30:38.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:30:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:30:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:30:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:30:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:30:39 compute-0 ovn_controller[156242]: 2025-09-30T18:30:39Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c5:9e:a8 10.100.0.4
Sep 30 18:30:39 compute-0 ovn_controller[156242]: 2025-09-30T18:30:39Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c5:9e:a8 10.100.0.4
Sep 30 18:30:40 compute-0 ceph-mon[73755]: pgmap v1560: 353 pgs: 353 active+clean; 102 MiB data, 321 MiB used, 40 GiB / 40 GiB avail; 2.1 MiB/s rd, 1.1 MiB/s wr, 102 op/s
Sep 30 18:30:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1561: 353 pgs: 353 active+clean; 113 MiB data, 349 MiB used, 40 GiB / 40 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 98 op/s
Sep 30 18:30:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:30:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:30:40.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:30:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:30:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:30:40.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:30:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:30:40 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Sep 30 18:30:40 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:30:40.976388) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 18:30:40 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Sep 30 18:30:40 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257040976428, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 297, "num_deletes": 255, "total_data_size": 81812, "memory_usage": 87624, "flush_reason": "Manual Compaction"}
Sep 30 18:30:40 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Sep 30 18:30:40 compute-0 nova_compute[265391]: 2025-09-30 18:30:40.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:30:40 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257040979055, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 81300, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40467, "largest_seqno": 40763, "table_properties": {"data_size": 79372, "index_size": 156, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4880, "raw_average_key_size": 17, "raw_value_size": 75526, "raw_average_value_size": 271, "num_data_blocks": 7, "num_entries": 278, "num_filter_entries": 278, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759257037, "oldest_key_time": 1759257037, "file_creation_time": 1759257040, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:30:40 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 2690 microseconds, and 810 cpu microseconds.
Sep 30 18:30:40 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:30:40 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:30:40.979083) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 81300 bytes OK
Sep 30 18:30:40 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:30:40.979098) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Sep 30 18:30:40 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:30:40.980390) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Sep 30 18:30:40 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:30:40.980403) EVENT_LOG_v1 {"time_micros": 1759257040980399, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 18:30:40 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:30:40.980416) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 18:30:40 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 79622, prev total WAL file size 79622, number of live WAL files 2.
Sep 30 18:30:40 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:30:40 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:30:40.980780) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323532' seq:72057594037927935, type:22 .. '6C6F676D0031353033' seq:0, type:0; will stop at (end)
Sep 30 18:30:40 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 18:30:40 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(79KB)], [89(11MB)]
Sep 30 18:30:40 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257040980836, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 11872329, "oldest_snapshot_seqno": -1}
Sep 30 18:30:41 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 6507 keys, 11757403 bytes, temperature: kUnknown
Sep 30 18:30:41 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257041040552, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 11757403, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11716847, "index_size": 23166, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 170089, "raw_average_key_size": 26, "raw_value_size": 11602853, "raw_average_value_size": 1783, "num_data_blocks": 915, "num_entries": 6507, "num_filter_entries": 6507, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759257040, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:30:41 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:30:41 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:30:41.040887) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 11757403 bytes
Sep 30 18:30:41 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:30:41.042233) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 198.5 rd, 196.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 11.2 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(290.6) write-amplify(144.6) OK, records in: 7024, records dropped: 517 output_compression: NoCompression
Sep 30 18:30:41 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:30:41.042256) EVENT_LOG_v1 {"time_micros": 1759257041042245, "job": 52, "event": "compaction_finished", "compaction_time_micros": 59810, "compaction_time_cpu_micros": 37935, "output_level": 6, "num_output_files": 1, "total_output_size": 11757403, "num_input_records": 7024, "num_output_records": 6507, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 18:30:41 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:30:41 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257041042440, "job": 52, "event": "table_file_deletion", "file_number": 91}
Sep 30 18:30:41 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:30:41 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257041045264, "job": 52, "event": "table_file_deletion", "file_number": 89}
Sep 30 18:30:41 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:30:40.980678) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:30:41 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:30:41.045329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:30:41 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:30:41.045338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:30:41 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:30:41.045403) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:30:41 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:30:41.045408) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:30:41 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:30:41.045412) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:30:41 compute-0 nova_compute[265391]: 2025-09-30 18:30:41.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:30:41 compute-0 nova_compute[265391]: 2025-09-30 18:30:41.428 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:30:41 compute-0 sudo[331243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:30:41 compute-0 sudo[331243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:30:41 compute-0 sudo[331243]: pam_unix(sudo:session): session closed for user root
Sep 30 18:30:41 compute-0 sudo[331269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:30:41 compute-0 sudo[331269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:30:41 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:41.951 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:30:42 compute-0 ceph-mon[73755]: pgmap v1561: 353 pgs: 353 active+clean; 113 MiB data, 349 MiB used, 40 GiB / 40 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 98 op/s
Sep 30 18:30:42 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1136952851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:30:42 compute-0 nova_compute[265391]: 2025-09-30 18:30:42.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:30:42 compute-0 nova_compute[265391]: 2025-09-30 18:30:42.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:30:42 compute-0 sudo[331269]: pam_unix(sudo:session): session closed for user root
Sep 30 18:30:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1562: 353 pgs: 353 active+clean; 113 MiB data, 349 MiB used, 40 GiB / 40 GiB avail; 735 KiB/s rd, 2.0 MiB/s wr, 64 op/s
Sep 30 18:30:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:30:42 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:30:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:30:42 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:30:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:30:42 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:30:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:30:42 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:30:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:30:42 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:30:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:30:42 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:30:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:30:42 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:30:42 compute-0 sudo[331325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:30:42 compute-0 sudo[331325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:30:42 compute-0 sudo[331325]: pam_unix(sudo:session): session closed for user root
Sep 30 18:30:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:30:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:30:42.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:30:42 compute-0 sudo[331350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:30:42 compute-0 sudo[331350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:30:42 compute-0 nova_compute[265391]: 2025-09-30 18:30:42.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:30:42 compute-0 nova_compute[265391]: 2025-09-30 18:30:42.939 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:30:42 compute-0 nova_compute[265391]: 2025-09-30 18:30:42.940 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:30:42 compute-0 nova_compute[265391]: 2025-09-30 18:30:42.940 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:30:42 compute-0 nova_compute[265391]: 2025-09-30 18:30:42.940 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:30:42 compute-0 nova_compute[265391]: 2025-09-30 18:30:42.940 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:30:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:30:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:30:42.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:30:43 compute-0 podman[331417]: 2025-09-30 18:30:43.110739315 +0000 UTC m=+0.042057481 container create 2ac26fe70eecee11d8781e716186b3393e708c84800338cb3a72ad413c820f86 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_greider, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Sep 30 18:30:43 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:30:43 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:30:43 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:30:43 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:30:43 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:30:43 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:30:43 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:30:43 compute-0 systemd[1]: Started libpod-conmon-2ac26fe70eecee11d8781e716186b3393e708c84800338cb3a72ad413c820f86.scope.
Sep 30 18:30:43 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:30:43 compute-0 podman[331417]: 2025-09-30 18:30:43.089914225 +0000 UTC m=+0.021232411 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:30:43 compute-0 podman[331417]: 2025-09-30 18:30:43.189872557 +0000 UTC m=+0.121190723 container init 2ac26fe70eecee11d8781e716186b3393e708c84800338cb3a72ad413c820f86 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:30:43 compute-0 podman[331417]: 2025-09-30 18:30:43.199133597 +0000 UTC m=+0.130451763 container start 2ac26fe70eecee11d8781e716186b3393e708c84800338cb3a72ad413c820f86 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:30:43 compute-0 podman[331417]: 2025-09-30 18:30:43.202748591 +0000 UTC m=+0.134066807 container attach 2ac26fe70eecee11d8781e716186b3393e708c84800338cb3a72ad413c820f86 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_greider, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:30:43 compute-0 interesting_greider[331451]: 167 167
Sep 30 18:30:43 compute-0 systemd[1]: libpod-2ac26fe70eecee11d8781e716186b3393e708c84800338cb3a72ad413c820f86.scope: Deactivated successfully.
Sep 30 18:30:43 compute-0 podman[331417]: 2025-09-30 18:30:43.207105304 +0000 UTC m=+0.138423470 container died 2ac26fe70eecee11d8781e716186b3393e708c84800338cb3a72ad413c820f86 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_greider, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2)
Sep 30 18:30:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-b506fa0e92539bb5660ccb1b30ba4b2d31ba707241ff74626a4b03eeba772779-merged.mount: Deactivated successfully.
Sep 30 18:30:43 compute-0 podman[331417]: 2025-09-30 18:30:43.242530372 +0000 UTC m=+0.173848538 container remove 2ac26fe70eecee11d8781e716186b3393e708c84800338cb3a72ad413c820f86 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_greider, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:30:43 compute-0 systemd[1]: libpod-conmon-2ac26fe70eecee11d8781e716186b3393e708c84800338cb3a72ad413c820f86.scope: Deactivated successfully.
Sep 30 18:30:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:30:43 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2209848134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:30:43 compute-0 nova_compute[265391]: 2025-09-30 18:30:43.381 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:30:43 compute-0 podman[331479]: 2025-09-30 18:30:43.426947623 +0000 UTC m=+0.045218363 container create 6ef13f3660a512f2ad25201547c6668dc7a2b29d45809cc7470535db685db379 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Sep 30 18:30:43 compute-0 systemd[1]: Started libpod-conmon-6ef13f3660a512f2ad25201547c6668dc7a2b29d45809cc7470535db685db379.scope.
Sep 30 18:30:43 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:30:43 compute-0 podman[331479]: 2025-09-30 18:30:43.40906924 +0000 UTC m=+0.027340000 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:30:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1961ddee3e81b21c900d1c1b44d6739501d58b41a53985a8abd430acc6ae7cec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:30:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1961ddee3e81b21c900d1c1b44d6739501d58b41a53985a8abd430acc6ae7cec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:30:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1961ddee3e81b21c900d1c1b44d6739501d58b41a53985a8abd430acc6ae7cec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:30:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1961ddee3e81b21c900d1c1b44d6739501d58b41a53985a8abd430acc6ae7cec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:30:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1961ddee3e81b21c900d1c1b44d6739501d58b41a53985a8abd430acc6ae7cec/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:30:43 compute-0 podman[331479]: 2025-09-30 18:30:43.518296902 +0000 UTC m=+0.136567652 container init 6ef13f3660a512f2ad25201547c6668dc7a2b29d45809cc7470535db685db379 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:30:43 compute-0 podman[331479]: 2025-09-30 18:30:43.527022798 +0000 UTC m=+0.145293538 container start 6ef13f3660a512f2ad25201547c6668dc7a2b29d45809cc7470535db685db379 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_galois, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Sep 30 18:30:43 compute-0 podman[331479]: 2025-09-30 18:30:43.530520359 +0000 UTC m=+0.148791099 container attach 6ef13f3660a512f2ad25201547c6668dc7a2b29d45809cc7470535db685db379 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_galois, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Sep 30 18:30:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:30:43.814Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:30:43 compute-0 competent_galois[331497]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:30:43 compute-0 competent_galois[331497]: --> All data devices are unavailable
Sep 30 18:30:43 compute-0 systemd[1]: libpod-6ef13f3660a512f2ad25201547c6668dc7a2b29d45809cc7470535db685db379.scope: Deactivated successfully.
Sep 30 18:30:43 compute-0 podman[331479]: 2025-09-30 18:30:43.86228033 +0000 UTC m=+0.480551110 container died 6ef13f3660a512f2ad25201547c6668dc7a2b29d45809cc7470535db685db379 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_galois, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Sep 30 18:30:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-1961ddee3e81b21c900d1c1b44d6739501d58b41a53985a8abd430acc6ae7cec-merged.mount: Deactivated successfully.
Sep 30 18:30:43 compute-0 podman[331479]: 2025-09-30 18:30:43.903518969 +0000 UTC m=+0.521789709 container remove 6ef13f3660a512f2ad25201547c6668dc7a2b29d45809cc7470535db685db379 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:30:43 compute-0 systemd[1]: libpod-conmon-6ef13f3660a512f2ad25201547c6668dc7a2b29d45809cc7470535db685db379.scope: Deactivated successfully.
Sep 30 18:30:43 compute-0 sudo[331350]: pam_unix(sudo:session): session closed for user root
Sep 30 18:30:43 compute-0 sudo[331525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:30:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:30:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:30:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:30:44 compute-0 sudo[331525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:30:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:30:44 compute-0 sudo[331525]: pam_unix(sudo:session): session closed for user root
Sep 30 18:30:44 compute-0 sudo[331550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:30:44 compute-0 sudo[331550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:30:44 compute-0 ceph-mon[73755]: pgmap v1562: 353 pgs: 353 active+clean; 113 MiB data, 349 MiB used, 40 GiB / 40 GiB avail; 735 KiB/s rd, 2.0 MiB/s wr, 64 op/s
Sep 30 18:30:44 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2209848134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:30:44 compute-0 nova_compute[265391]: 2025-09-30 18:30:44.432 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:30:44 compute-0 nova_compute[265391]: 2025-09-30 18:30:44.432 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:30:44 compute-0 podman[331615]: 2025-09-30 18:30:44.436012034 +0000 UTC m=+0.038572401 container create 080c0a19aa1bc64e7a865a23dbd724cc865b254b554b5eef4260bb24cf24c44b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:30:44 compute-0 systemd[1]: Started libpod-conmon-080c0a19aa1bc64e7a865a23dbd724cc865b254b554b5eef4260bb24cf24c44b.scope.
Sep 30 18:30:44 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:30:44 compute-0 podman[331615]: 2025-09-30 18:30:44.419846165 +0000 UTC m=+0.022406562 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:30:44 compute-0 podman[331615]: 2025-09-30 18:30:44.525270558 +0000 UTC m=+0.127830945 container init 080c0a19aa1bc64e7a865a23dbd724cc865b254b554b5eef4260bb24cf24c44b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mendel, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Sep 30 18:30:44 compute-0 podman[331615]: 2025-09-30 18:30:44.531092099 +0000 UTC m=+0.133652466 container start 080c0a19aa1bc64e7a865a23dbd724cc865b254b554b5eef4260bb24cf24c44b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:30:44 compute-0 podman[331615]: 2025-09-30 18:30:44.533866441 +0000 UTC m=+0.136426808 container attach 080c0a19aa1bc64e7a865a23dbd724cc865b254b554b5eef4260bb24cf24c44b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 18:30:44 compute-0 naughty_mendel[331632]: 167 167
Sep 30 18:30:44 compute-0 systemd[1]: libpod-080c0a19aa1bc64e7a865a23dbd724cc865b254b554b5eef4260bb24cf24c44b.scope: Deactivated successfully.
Sep 30 18:30:44 compute-0 podman[331637]: 2025-09-30 18:30:44.580015758 +0000 UTC m=+0.028446289 container died 080c0a19aa1bc64e7a865a23dbd724cc865b254b554b5eef4260bb24cf24c44b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:30:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1563: 353 pgs: 353 active+clean; 167 MiB data, 371 MiB used, 40 GiB / 40 GiB avail; 814 KiB/s rd, 3.9 MiB/s wr, 106 op/s
Sep 30 18:30:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-cbdcf4b94cf06619150bf9d1a2ad490ca79647017092afecb50879dfdf26a9d9-merged.mount: Deactivated successfully.
Sep 30 18:30:44 compute-0 nova_compute[265391]: 2025-09-30 18:30:44.606 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:30:44 compute-0 nova_compute[265391]: 2025-09-30 18:30:44.608 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:30:44 compute-0 podman[331637]: 2025-09-30 18:30:44.615136118 +0000 UTC m=+0.063566639 container remove 080c0a19aa1bc64e7a865a23dbd724cc865b254b554b5eef4260bb24cf24c44b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:30:44 compute-0 systemd[1]: libpod-conmon-080c0a19aa1bc64e7a865a23dbd724cc865b254b554b5eef4260bb24cf24c44b.scope: Deactivated successfully.
Sep 30 18:30:44 compute-0 nova_compute[265391]: 2025-09-30 18:30:44.638 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.029s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:30:44 compute-0 nova_compute[265391]: 2025-09-30 18:30:44.638 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4158MB free_disk=39.947513580322266GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:30:44 compute-0 nova_compute[265391]: 2025-09-30 18:30:44.639 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:30:44 compute-0 nova_compute[265391]: 2025-09-30 18:30:44.640 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:30:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:30:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:30:44.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:30:44 compute-0 podman[331659]: 2025-09-30 18:30:44.78606473 +0000 UTC m=+0.040992744 container create fe2cf82394f91535b79640c097e4dce6ee2a9f28ccef6bd51f1271645726c752 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_williams, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:30:44 compute-0 systemd[1]: Started libpod-conmon-fe2cf82394f91535b79640c097e4dce6ee2a9f28ccef6bd51f1271645726c752.scope.
Sep 30 18:30:44 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:30:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21bbdd821403493c6c292cca74d2a7780faba6bbf35c27964c8e444d29277800/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:30:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21bbdd821403493c6c292cca74d2a7780faba6bbf35c27964c8e444d29277800/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:30:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21bbdd821403493c6c292cca74d2a7780faba6bbf35c27964c8e444d29277800/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:30:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21bbdd821403493c6c292cca74d2a7780faba6bbf35c27964c8e444d29277800/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:30:44 compute-0 podman[331659]: 2025-09-30 18:30:44.768458513 +0000 UTC m=+0.023386557 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:30:44 compute-0 podman[331659]: 2025-09-30 18:30:44.871250638 +0000 UTC m=+0.126178672 container init fe2cf82394f91535b79640c097e4dce6ee2a9f28ccef6bd51f1271645726c752 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_williams, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Sep 30 18:30:44 compute-0 podman[331659]: 2025-09-30 18:30:44.877666125 +0000 UTC m=+0.132594139 container start fe2cf82394f91535b79640c097e4dce6ee2a9f28ccef6bd51f1271645726c752 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_williams, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:30:44 compute-0 podman[331659]: 2025-09-30 18:30:44.881218647 +0000 UTC m=+0.136146751 container attach fe2cf82394f91535b79640c097e4dce6ee2a9f28ccef6bd51f1271645726c752 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_williams, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Sep 30 18:30:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:30:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:30:44.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:30:45 compute-0 cool_williams[331675]: {
Sep 30 18:30:45 compute-0 cool_williams[331675]:     "0": [
Sep 30 18:30:45 compute-0 cool_williams[331675]:         {
Sep 30 18:30:45 compute-0 cool_williams[331675]:             "devices": [
Sep 30 18:30:45 compute-0 cool_williams[331675]:                 "/dev/loop3"
Sep 30 18:30:45 compute-0 cool_williams[331675]:             ],
Sep 30 18:30:45 compute-0 cool_williams[331675]:             "lv_name": "ceph_lv0",
Sep 30 18:30:45 compute-0 cool_williams[331675]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:30:45 compute-0 cool_williams[331675]:             "lv_size": "21470642176",
Sep 30 18:30:45 compute-0 cool_williams[331675]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:30:45 compute-0 cool_williams[331675]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:30:45 compute-0 cool_williams[331675]:             "name": "ceph_lv0",
Sep 30 18:30:45 compute-0 cool_williams[331675]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:30:45 compute-0 cool_williams[331675]:             "tags": {
Sep 30 18:30:45 compute-0 cool_williams[331675]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:30:45 compute-0 cool_williams[331675]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:30:45 compute-0 cool_williams[331675]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:30:45 compute-0 cool_williams[331675]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:30:45 compute-0 cool_williams[331675]:                 "ceph.cluster_name": "ceph",
Sep 30 18:30:45 compute-0 cool_williams[331675]:                 "ceph.crush_device_class": "",
Sep 30 18:30:45 compute-0 cool_williams[331675]:                 "ceph.encrypted": "0",
Sep 30 18:30:45 compute-0 cool_williams[331675]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:30:45 compute-0 cool_williams[331675]:                 "ceph.osd_id": "0",
Sep 30 18:30:45 compute-0 cool_williams[331675]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:30:45 compute-0 cool_williams[331675]:                 "ceph.type": "block",
Sep 30 18:30:45 compute-0 cool_williams[331675]:                 "ceph.vdo": "0",
Sep 30 18:30:45 compute-0 cool_williams[331675]:                 "ceph.with_tpm": "0"
Sep 30 18:30:45 compute-0 cool_williams[331675]:             },
Sep 30 18:30:45 compute-0 cool_williams[331675]:             "type": "block",
Sep 30 18:30:45 compute-0 cool_williams[331675]:             "vg_name": "ceph_vg0"
Sep 30 18:30:45 compute-0 cool_williams[331675]:         }
Sep 30 18:30:45 compute-0 cool_williams[331675]:     ]
Sep 30 18:30:45 compute-0 cool_williams[331675]: }
Sep 30 18:30:45 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2822621828' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:30:45 compute-0 systemd[1]: libpod-fe2cf82394f91535b79640c097e4dce6ee2a9f28ccef6bd51f1271645726c752.scope: Deactivated successfully.
Sep 30 18:30:45 compute-0 podman[331659]: 2025-09-30 18:30:45.183705669 +0000 UTC m=+0.438633683 container died fe2cf82394f91535b79640c097e4dce6ee2a9f28ccef6bd51f1271645726c752 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_williams, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:30:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-21bbdd821403493c6c292cca74d2a7780faba6bbf35c27964c8e444d29277800-merged.mount: Deactivated successfully.
Sep 30 18:30:45 compute-0 podman[331659]: 2025-09-30 18:30:45.223795458 +0000 UTC m=+0.478723472 container remove fe2cf82394f91535b79640c097e4dce6ee2a9f28ccef6bd51f1271645726c752 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_williams, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:30:45 compute-0 systemd[1]: libpod-conmon-fe2cf82394f91535b79640c097e4dce6ee2a9f28ccef6bd51f1271645726c752.scope: Deactivated successfully.
Sep 30 18:30:45 compute-0 sudo[331550]: pam_unix(sudo:session): session closed for user root
Sep 30 18:30:45 compute-0 sudo[331696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:30:45 compute-0 sudo[331696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:30:45 compute-0 sudo[331696]: pam_unix(sudo:session): session closed for user root
Sep 30 18:30:45 compute-0 sudo[331721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:30:45 compute-0 sudo[331721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:30:45 compute-0 nova_compute[265391]: 2025-09-30 18:30:45.747 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Instance 87f50c89-ad94-41dc-9263-c5715083d91b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Sep 30 18:30:45 compute-0 nova_compute[265391]: 2025-09-30 18:30:45.748 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:30:45 compute-0 nova_compute[265391]: 2025-09-30 18:30:45.748 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=39GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:30:44 up  1:34,  0 user,  load average: 0.89, 1.00, 0.91\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_None': '1', 'num_os_type_None': '1', 'num_proj_c634e1c17ed54907969576a0eb8eff50': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:30:45 compute-0 nova_compute[265391]: 2025-09-30 18:30:45.860 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:30:45 compute-0 podman[331791]: 2025-09-30 18:30:45.904569197 +0000 UTC m=+0.046570758 container create 9334702dafadfdc97e173b72b5bb9d4a60d64cff53759dca08ba059e9c81cf37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_edison, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid)
Sep 30 18:30:45 compute-0 systemd[1]: Started libpod-conmon-9334702dafadfdc97e173b72b5bb9d4a60d64cff53759dca08ba059e9c81cf37.scope.
Sep 30 18:30:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:30:45 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:30:45 compute-0 podman[331791]: 2025-09-30 18:30:45.882613888 +0000 UTC m=+0.024615499 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:30:45 compute-0 nova_compute[265391]: 2025-09-30 18:30:45.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:30:45 compute-0 podman[331791]: 2025-09-30 18:30:45.989098858 +0000 UTC m=+0.131100429 container init 9334702dafadfdc97e173b72b5bb9d4a60d64cff53759dca08ba059e9c81cf37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_edison, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:30:45 compute-0 podman[331791]: 2025-09-30 18:30:45.9980397 +0000 UTC m=+0.140041261 container start 9334702dafadfdc97e173b72b5bb9d4a60d64cff53759dca08ba059e9c81cf37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:30:46 compute-0 objective_edison[331821]: 167 167
Sep 30 18:30:46 compute-0 podman[331791]: 2025-09-30 18:30:46.004558099 +0000 UTC m=+0.146559680 container attach 9334702dafadfdc97e173b72b5bb9d4a60d64cff53759dca08ba059e9c81cf37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_edison, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 18:30:46 compute-0 podman[331806]: 2025-09-30 18:30:46.005427582 +0000 UTC m=+0.060851609 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:30:46 compute-0 systemd[1]: libpod-9334702dafadfdc97e173b72b5bb9d4a60d64cff53759dca08ba059e9c81cf37.scope: Deactivated successfully.
Sep 30 18:30:46 compute-0 conmon[331821]: conmon 9334702dafadfdc97e17 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9334702dafadfdc97e173b72b5bb9d4a60d64cff53759dca08ba059e9c81cf37.scope/container/memory.events
Sep 30 18:30:46 compute-0 podman[331791]: 2025-09-30 18:30:46.008715547 +0000 UTC m=+0.150717108 container died 9334702dafadfdc97e173b72b5bb9d4a60d64cff53759dca08ba059e9c81cf37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_edison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Sep 30 18:30:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-40c6b30b6cf861279d21a3af9b4087f2c24e2a9566ee5b5688bf165b3d8996d5-merged.mount: Deactivated successfully.
Sep 30 18:30:46 compute-0 podman[331791]: 2025-09-30 18:30:46.046520837 +0000 UTC m=+0.188522388 container remove 9334702dafadfdc97e173b72b5bb9d4a60d64cff53759dca08ba059e9c81cf37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_edison, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1)
Sep 30 18:30:46 compute-0 podman[331809]: 2025-09-30 18:30:46.048244572 +0000 UTC m=+0.102558820 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Sep 30 18:30:46 compute-0 podman[331810]: 2025-09-30 18:30:46.054530215 +0000 UTC m=+0.103224617 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:30:46 compute-0 systemd[1]: libpod-conmon-9334702dafadfdc97e173b72b5bb9d4a60d64cff53759dca08ba059e9c81cf37.scope: Deactivated successfully.
Sep 30 18:30:46 compute-0 ceph-mon[73755]: pgmap v1563: 353 pgs: 353 active+clean; 167 MiB data, 371 MiB used, 40 GiB / 40 GiB avail; 814 KiB/s rd, 3.9 MiB/s wr, 106 op/s
Sep 30 18:30:46 compute-0 podman[331916]: 2025-09-30 18:30:46.210973051 +0000 UTC m=+0.040636585 container create 9f484805c8eec90e0a66bb292dfa2c0150ffe5585dadecd3bbc8bf2d57f11e06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_elion, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:30:46 compute-0 systemd[1]: Started libpod-conmon-9f484805c8eec90e0a66bb292dfa2c0150ffe5585dadecd3bbc8bf2d57f11e06.scope.
Sep 30 18:30:46 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:30:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74f214974479b6604a656e021554d33709da886fdb72ae89543167cee82a25f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:30:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74f214974479b6604a656e021554d33709da886fdb72ae89543167cee82a25f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:30:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74f214974479b6604a656e021554d33709da886fdb72ae89543167cee82a25f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:30:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74f214974479b6604a656e021554d33709da886fdb72ae89543167cee82a25f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:30:46 compute-0 podman[331916]: 2025-09-30 18:30:46.197182933 +0000 UTC m=+0.026846487 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:30:46 compute-0 podman[331916]: 2025-09-30 18:30:46.301565479 +0000 UTC m=+0.131229033 container init 9f484805c8eec90e0a66bb292dfa2c0150ffe5585dadecd3bbc8bf2d57f11e06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_elion, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:30:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:30:46 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/953508383' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:30:46 compute-0 podman[331916]: 2025-09-30 18:30:46.311339573 +0000 UTC m=+0.141003107 container start 9f484805c8eec90e0a66bb292dfa2c0150ffe5585dadecd3bbc8bf2d57f11e06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_elion, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:30:46 compute-0 podman[331916]: 2025-09-30 18:30:46.314242548 +0000 UTC m=+0.143906112 container attach 9f484805c8eec90e0a66bb292dfa2c0150ffe5585dadecd3bbc8bf2d57f11e06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 18:30:46 compute-0 nova_compute[265391]: 2025-09-30 18:30:46.334 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:30:46 compute-0 nova_compute[265391]: 2025-09-30 18:30:46.340 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:30:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1564: 353 pgs: 353 active+clean; 167 MiB data, 371 MiB used, 40 GiB / 40 GiB avail; 407 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Sep 30 18:30:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:30:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:30:46.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:30:46 compute-0 nova_compute[265391]: 2025-09-30 18:30:46.850 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:30:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:30:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:30:46.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:30:46 compute-0 lvm[332011]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:30:46 compute-0 lvm[332011]: VG ceph_vg0 finished
Sep 30 18:30:47 compute-0 friendly_elion[331934]: {}
Sep 30 18:30:47 compute-0 systemd[1]: libpod-9f484805c8eec90e0a66bb292dfa2c0150ffe5585dadecd3bbc8bf2d57f11e06.scope: Deactivated successfully.
Sep 30 18:30:47 compute-0 systemd[1]: libpod-9f484805c8eec90e0a66bb292dfa2c0150ffe5585dadecd3bbc8bf2d57f11e06.scope: Consumed 1.244s CPU time.
Sep 30 18:30:47 compute-0 podman[331916]: 2025-09-30 18:30:47.05219919 +0000 UTC m=+0.881862744 container died 9f484805c8eec90e0a66bb292dfa2c0150ffe5585dadecd3bbc8bf2d57f11e06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_elion, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Sep 30 18:30:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-74f214974479b6604a656e021554d33709da886fdb72ae89543167cee82a25f8-merged.mount: Deactivated successfully.
Sep 30 18:30:47 compute-0 podman[331916]: 2025-09-30 18:30:47.093880961 +0000 UTC m=+0.923544495 container remove 9f484805c8eec90e0a66bb292dfa2c0150ffe5585dadecd3bbc8bf2d57f11e06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_elion, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:30:47 compute-0 systemd[1]: libpod-conmon-9f484805c8eec90e0a66bb292dfa2c0150ffe5585dadecd3bbc8bf2d57f11e06.scope: Deactivated successfully.
Sep 30 18:30:47 compute-0 sudo[331721]: pam_unix(sudo:session): session closed for user root
Sep 30 18:30:47 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:30:47 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:30:47 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:30:47 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:30:47 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/953508383' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:30:47 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:30:47 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:30:47 compute-0 sudo[332027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:30:47 compute-0 sudo[332027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:30:47 compute-0 sudo[332027]: pam_unix(sudo:session): session closed for user root
Sep 30 18:30:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:30:47.277Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:30:47 compute-0 nova_compute[265391]: 2025-09-30 18:30:47.360 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:30:47 compute-0 nova_compute[265391]: 2025-09-30 18:30:47.360 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.721s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:30:47 compute-0 nova_compute[265391]: 2025-09-30 18:30:47.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:30:48 compute-0 ceph-mon[73755]: pgmap v1564: 353 pgs: 353 active+clean; 167 MiB data, 371 MiB used, 40 GiB / 40 GiB avail; 407 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Sep 30 18:30:48 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1857050196' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:30:48 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3195939406' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:30:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1565: 353 pgs: 353 active+clean; 167 MiB data, 371 MiB used, 40 GiB / 40 GiB avail; 407 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Sep 30 18:30:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:30:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:30:48.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:30:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:30:48] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:30:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:30:48] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:30:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:30:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:30:48.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:30:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:30:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:30:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:30:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:30:49 compute-0 nova_compute[265391]: 2025-09-30 18:30:49.361 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:30:49 compute-0 nova_compute[265391]: 2025-09-30 18:30:49.362 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:30:49 compute-0 nova_compute[265391]: 2025-09-30 18:30:49.362 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:30:50 compute-0 ceph-mon[73755]: pgmap v1565: 353 pgs: 353 active+clean; 167 MiB data, 371 MiB used, 40 GiB / 40 GiB avail; 407 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Sep 30 18:30:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1566: 353 pgs: 353 active+clean; 167 MiB data, 371 MiB used, 40 GiB / 40 GiB avail; 229 KiB/s rd, 2.8 MiB/s wr, 65 op/s
Sep 30 18:30:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:30:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:30:50.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:30:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:30:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:30:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:30:50.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:30:50 compute-0 nova_compute[265391]: 2025-09-30 18:30:50.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:30:52 compute-0 ceph-mon[73755]: pgmap v1566: 353 pgs: 353 active+clean; 167 MiB data, 371 MiB used, 40 GiB / 40 GiB avail; 229 KiB/s rd, 2.8 MiB/s wr, 65 op/s
Sep 30 18:30:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:30:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:30:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1567: 353 pgs: 353 active+clean; 167 MiB data, 371 MiB used, 40 GiB / 40 GiB avail; 79 KiB/s rd, 1.9 MiB/s wr, 42 op/s
Sep 30 18:30:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:30:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:30:52.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:30:52 compute-0 nova_compute[265391]: 2025-09-30 18:30:52.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:30:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:30:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:30:52.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:30:53 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:30:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:30:53.814Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:30:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:30:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:30:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:30:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:30:54 compute-0 ceph-mon[73755]: pgmap v1567: 353 pgs: 353 active+clean; 167 MiB data, 371 MiB used, 40 GiB / 40 GiB avail; 79 KiB/s rd, 1.9 MiB/s wr, 42 op/s
Sep 30 18:30:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:54.315 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:30:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:54.315 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:30:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:30:54.316 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:30:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1568: 353 pgs: 353 active+clean; 167 MiB data, 372 MiB used, 40 GiB / 40 GiB avail; 86 KiB/s rd, 1.9 MiB/s wr, 52 op/s
Sep 30 18:30:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:30:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:30:54.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:30:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:30:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:30:54.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:30:55 compute-0 sudo[332063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:30:55 compute-0 sudo[332063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:30:55 compute-0 sudo[332063]: pam_unix(sudo:session): session closed for user root
Sep 30 18:30:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:30:55 compute-0 nova_compute[265391]: 2025-09-30 18:30:55.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:30:56 compute-0 ceph-mon[73755]: pgmap v1568: 353 pgs: 353 active+clean; 167 MiB data, 372 MiB used, 40 GiB / 40 GiB avail; 86 KiB/s rd, 1.9 MiB/s wr, 52 op/s
Sep 30 18:30:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1569: 353 pgs: 353 active+clean; 167 MiB data, 372 MiB used, 40 GiB / 40 GiB avail; 7.7 KiB/s rd, 28 KiB/s wr, 11 op/s
Sep 30 18:30:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:30:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:30:56.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:30:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:30:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:30:56.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:30:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:30:57.280Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:30:57 compute-0 podman[332089]: 2025-09-30 18:30:57.5322027 +0000 UTC m=+0.066379001 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:30:57 compute-0 podman[332088]: 2025-09-30 18:30:57.540581418 +0000 UTC m=+0.074720418 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Sep 30 18:30:57 compute-0 podman[332090]: 2025-09-30 18:30:57.540758682 +0000 UTC m=+0.068255520 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_id=edpm, distribution-scope=public, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Sep 30 18:30:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:30:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3237373477' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:30:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:30:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3237373477' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:30:57 compute-0 nova_compute[265391]: 2025-09-30 18:30:57.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:30:58 compute-0 ceph-mon[73755]: pgmap v1569: 353 pgs: 353 active+clean; 167 MiB data, 372 MiB used, 40 GiB / 40 GiB avail; 7.7 KiB/s rd, 28 KiB/s wr, 11 op/s
Sep 30 18:30:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3237373477' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:30:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3237373477' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:30:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1570: 353 pgs: 353 active+clean; 167 MiB data, 372 MiB used, 40 GiB / 40 GiB avail; 706 KiB/s rd, 28 KiB/s wr, 33 op/s
Sep 30 18:30:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:30:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:30:58.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:30:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:30:58] "GET /metrics HTTP/1.1" 200 46650 "" "Prometheus/2.51.0"
Sep 30 18:30:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:30:58] "GET /metrics HTTP/1.1" 200 46650 "" "Prometheus/2.51.0"
Sep 30 18:30:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:30:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:30:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:30:58.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:30:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:30:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:30:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:30:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:30:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:30:59 compute-0 podman[276673]: time="2025-09-30T18:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:30:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:30:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10751 "" "Go-http-client/1.1"
Sep 30 18:31:00 compute-0 ceph-mon[73755]: pgmap v1570: 353 pgs: 353 active+clean; 167 MiB data, 372 MiB used, 40 GiB / 40 GiB avail; 706 KiB/s rd, 28 KiB/s wr, 33 op/s
Sep 30 18:31:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1571: 353 pgs: 353 active+clean; 167 MiB data, 372 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 75 op/s
Sep 30 18:31:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:31:00.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:31:00 compute-0 nova_compute[265391]: 2025-09-30 18:31:00.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:31:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:31:00.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:31:01 compute-0 openstack_network_exporter[279566]: ERROR   18:31:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:31:01 compute-0 openstack_network_exporter[279566]: ERROR   18:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:31:01 compute-0 openstack_network_exporter[279566]: ERROR   18:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:31:01 compute-0 openstack_network_exporter[279566]: ERROR   18:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:31:01 compute-0 openstack_network_exporter[279566]: ERROR   18:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:31:02 compute-0 ceph-mon[73755]: pgmap v1571: 353 pgs: 353 active+clean; 167 MiB data, 372 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 75 op/s
Sep 30 18:31:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1572: 353 pgs: 353 active+clean; 167 MiB data, 372 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 74 op/s
Sep 30 18:31:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:31:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:31:02.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:31:02 compute-0 nova_compute[265391]: 2025-09-30 18:31:02.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:31:02.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:31:03.815Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:31:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:31:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:31:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:31:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:31:04 compute-0 ceph-mon[73755]: pgmap v1572: 353 pgs: 353 active+clean; 167 MiB data, 372 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 74 op/s
Sep 30 18:31:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1573: 353 pgs: 353 active+clean; 167 MiB data, 372 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 75 op/s
Sep 30 18:31:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:31:04.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:31:04.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:31:05 compute-0 nova_compute[265391]: 2025-09-30 18:31:05.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:06 compute-0 ceph-mon[73755]: pgmap v1573: 353 pgs: 353 active+clean; 167 MiB data, 372 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 75 op/s
Sep 30 18:31:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1574: 353 pgs: 353 active+clean; 167 MiB data, 372 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1023 B/s wr, 65 op/s
Sep 30 18:31:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:31:06.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:31:06.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:31:07.280Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:31:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:31:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:31:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:31:07 compute-0 nova_compute[265391]: 2025-09-30 18:31:07.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016617205814123852 of space, bias 1.0, pg target 0.33234411628247706 quantized to 32 (current 32)
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:31:07
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'vms', 'default.rgw.control', 'volumes', '.nfs', 'images']
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:31:07 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:31:08 compute-0 ceph-mon[73755]: pgmap v1574: 353 pgs: 353 active+clean; 167 MiB data, 372 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1023 B/s wr, 65 op/s
Sep 30 18:31:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1575: 353 pgs: 353 active+clean; 186 MiB data, 387 MiB used, 40 GiB / 40 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 102 op/s
Sep 30 18:31:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:31:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:31:08.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:31:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:31:08] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:31:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:31:08] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:31:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:31:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:31:08.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:31:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:31:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:31:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:31:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:31:09 compute-0 ceph-mgr[74051]: [devicehealth INFO root] Check health
Sep 30 18:31:10 compute-0 sshd-session[332156]: Invalid user ftpuser from 14.225.220.107 port 47100
Sep 30 18:31:10 compute-0 sshd-session[332156]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:31:10 compute-0 sshd-session[332156]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107
Sep 30 18:31:10 compute-0 ceph-mon[73755]: pgmap v1575: 353 pgs: 353 active+clean; 186 MiB data, 387 MiB used, 40 GiB / 40 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 102 op/s
Sep 30 18:31:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1576: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 100 op/s
Sep 30 18:31:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.002000052s ======
Sep 30 18:31:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:31:10.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Sep 30 18:31:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:31:10 compute-0 nova_compute[265391]: 2025-09-30 18:31:10.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:31:10.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:11 compute-0 sshd-session[332160]: Invalid user ftpuser from 45.252.249.158 port 57264
Sep 30 18:31:11 compute-0 sshd-session[332160]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:31:11 compute-0 sshd-session[332160]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158
Sep 30 18:31:12 compute-0 sshd-session[332156]: Failed password for invalid user ftpuser from 14.225.220.107 port 47100 ssh2
Sep 30 18:31:12 compute-0 ceph-mon[73755]: pgmap v1576: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 100 op/s
Sep 30 18:31:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1577: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 183 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Sep 30 18:31:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:31:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:31:12.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:31:12 compute-0 nova_compute[265391]: 2025-09-30 18:31:12.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:31:12.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:13 compute-0 sshd-session[332160]: Failed password for invalid user ftpuser from 45.252.249.158 port 57264 ssh2
Sep 30 18:31:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:31:13.816Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:31:13 compute-0 sshd-session[332156]: Received disconnect from 14.225.220.107 port 47100:11: Bye Bye [preauth]
Sep 30 18:31:13 compute-0 sshd-session[332156]: Disconnected from invalid user ftpuser 14.225.220.107 port 47100 [preauth]
Sep 30 18:31:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:31:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:31:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:31:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:31:14 compute-0 ceph-mon[73755]: pgmap v1577: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 183 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Sep 30 18:31:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1578: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 184 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Sep 30 18:31:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:31:14.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:31:15.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:15 compute-0 sshd-session[332160]: Received disconnect from 45.252.249.158 port 57264:11: Bye Bye [preauth]
Sep 30 18:31:15 compute-0 sshd-session[332160]: Disconnected from invalid user ftpuser 45.252.249.158 port 57264 [preauth]
Sep 30 18:31:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:31:15 compute-0 sudo[332169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:31:15 compute-0 sudo[332169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:31:15 compute-0 nova_compute[265391]: 2025-09-30 18:31:15.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:15 compute-0 sudo[332169]: pam_unix(sudo:session): session closed for user root
Sep 30 18:31:16 compute-0 ceph-mon[73755]: pgmap v1578: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 184 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Sep 30 18:31:16 compute-0 podman[332194]: 2025-09-30 18:31:16.538297876 +0000 UTC m=+0.073704312 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0)
Sep 30 18:31:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1579: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 180 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Sep 30 18:31:16 compute-0 podman[332196]: 2025-09-30 18:31:16.610159439 +0000 UTC m=+0.133253656 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:31:16 compute-0 podman[332195]: 2025-09-30 18:31:16.625109357 +0000 UTC m=+0.153287125 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:31:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:31:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:31:16.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:31:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:31:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:31:17.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:31:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:31:17.281Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:31:17 compute-0 nova_compute[265391]: 2025-09-30 18:31:17.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:18 compute-0 ceph-mon[73755]: pgmap v1579: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 180 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Sep 30 18:31:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1580: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 180 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Sep 30 18:31:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:31:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:31:18.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:31:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:31:18] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:31:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:31:18] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:31:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:31:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:31:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:31:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:31:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:31:19.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:20 compute-0 ceph-mon[73755]: pgmap v1580: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 180 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Sep 30 18:31:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1581: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 79 KiB/s rd, 822 KiB/s wr, 21 op/s
Sep 30 18:31:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:31:20.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:31:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:31:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:31:21.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:31:21 compute-0 nova_compute[265391]: 2025-09-30 18:31:21.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:21 compute-0 ovn_controller[156242]: 2025-09-30T18:31:21Z|00183|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Sep 30 18:31:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:31:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:31:22 compute-0 ceph-mon[73755]: pgmap v1581: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 79 KiB/s rd, 822 KiB/s wr, 21 op/s
Sep 30 18:31:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:31:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1582: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:31:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:31:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:31:22.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:31:22 compute-0 nova_compute[265391]: 2025-09-30 18:31:22.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:31:23.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=cleanup t=2025-09-30T18:31:23.528692048Z level=info msg="Completed cleanup jobs" duration=31.762674ms
Sep 30 18:31:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=plugins.update.checker t=2025-09-30T18:31:23.641935074Z level=info msg="Update check succeeded" duration=78.881615ms
Sep 30 18:31:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=grafana.update.checker t=2025-09-30T18:31:23.64257462Z level=info msg="Update check succeeded" duration=74.715017ms
Sep 30 18:31:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:31:23.816Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:31:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:31:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:31:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:31:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:31:24 compute-0 ceph-mon[73755]: pgmap v1582: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:31:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1583: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:31:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:31:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:31:24.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:31:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:31:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:31:25.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:31:25 compute-0 ceph-mon[73755]: pgmap v1583: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:31:25 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:31:26 compute-0 nova_compute[265391]: 2025-09-30 18:31:26.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:26 compute-0 nova_compute[265391]: 2025-09-30 18:31:26.482 2 DEBUG nova.virt.libvirt.driver [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Creating tmpfile /var/lib/nova/instances/tmpbmq9sq_z to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10944
Sep 30 18:31:26 compute-0 nova_compute[265391]: 2025-09-30 18:31:26.484 2 WARNING neutronclient.v2_0.client [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:31:26 compute-0 nova_compute[265391]: 2025-09-30 18:31:26.581 2 DEBUG nova.compute.manager [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpbmq9sq_z',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.12/site-packages/nova/compute/manager.py:9086
Sep 30 18:31:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1584: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:31:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:31:26.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:31:27.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:31:27.282Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:31:27 compute-0 ceph-mon[73755]: pgmap v1584: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:31:27 compute-0 nova_compute[265391]: 2025-09-30 18:31:27.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:28 compute-0 podman[332276]: 2025-09-30 18:31:28.529600672 +0000 UTC m=+0.070339445 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Sep 30 18:31:28 compute-0 podman[332277]: 2025-09-30 18:31:28.540379731 +0000 UTC m=+0.069379590 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, container_name=iscsid, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Sep 30 18:31:28 compute-0 podman[332278]: 2025-09-30 18:31:28.564423134 +0000 UTC m=+0.095951528 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, architecture=x86_64, distribution-scope=public, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Sep 30 18:31:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1585: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:31:28 compute-0 nova_compute[265391]: 2025-09-30 18:31:28.616 2 WARNING neutronclient.v2_0.client [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:31:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 18:31:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:31:28.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 18:31:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:31:28] "GET /metrics HTTP/1.1" 200 46647 "" "Prometheus/2.51.0"
Sep 30 18:31:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:31:28] "GET /metrics HTTP/1.1" 200 46647 "" "Prometheus/2.51.0"
Sep 30 18:31:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:31:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:31:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:31:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:31:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:31:29.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:29 compute-0 ceph-mon[73755]: pgmap v1585: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:31:29 compute-0 podman[276673]: time="2025-09-30T18:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:31:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:31:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10762 "" "Go-http-client/1.1"
Sep 30 18:31:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1586: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 1023 B/s wr, 0 op/s
Sep 30 18:31:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:31:30.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:31:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:31:31.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:31 compute-0 nova_compute[265391]: 2025-09-30 18:31:31.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:31 compute-0 openstack_network_exporter[279566]: ERROR   18:31:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:31:31 compute-0 openstack_network_exporter[279566]: ERROR   18:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:31:31 compute-0 openstack_network_exporter[279566]: ERROR   18:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:31:31 compute-0 openstack_network_exporter[279566]: ERROR   18:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:31:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:31:31 compute-0 openstack_network_exporter[279566]: ERROR   18:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:31:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:31:31 compute-0 ceph-mon[73755]: pgmap v1586: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 1023 B/s wr, 0 op/s
Sep 30 18:31:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1587: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 1023 B/s wr, 0 op/s
Sep 30 18:31:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:31:32.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:32 compute-0 nova_compute[265391]: 2025-09-30 18:31:32.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:31:33.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:33 compute-0 nova_compute[265391]: 2025-09-30 18:31:33.127 2 DEBUG nova.compute.manager [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpbmq9sq_z',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='5321575c-f6c1-4500-adf7-285c22df2e73',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9311
Sep 30 18:31:33 compute-0 ceph-mon[73755]: pgmap v1587: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 1023 B/s wr, 0 op/s
Sep 30 18:31:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:31:33.817Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:31:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:31:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:31:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:31:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:31:34 compute-0 nova_compute[265391]: 2025-09-30 18:31:34.143 2 DEBUG oslo_concurrency.lockutils [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-5321575c-f6c1-4500-adf7-285c22df2e73" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:31:34 compute-0 nova_compute[265391]: 2025-09-30 18:31:34.144 2 DEBUG oslo_concurrency.lockutils [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-5321575c-f6c1-4500-adf7-285c22df2e73" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:31:34 compute-0 nova_compute[265391]: 2025-09-30 18:31:34.144 2 DEBUG nova.network.neutron [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:31:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1588: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 1023 B/s wr, 0 op/s
Sep 30 18:31:34 compute-0 nova_compute[265391]: 2025-09-30 18:31:34.656 2 WARNING neutronclient.v2_0.client [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:31:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:31:34.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:31:35.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:35 compute-0 ceph-mon[73755]: pgmap v1588: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 1023 B/s wr, 0 op/s
Sep 30 18:31:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:31:36 compute-0 nova_compute[265391]: 2025-09-30 18:31:36.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:36 compute-0 sudo[332345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:31:36 compute-0 sudo[332345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:31:36 compute-0 sudo[332345]: pam_unix(sudo:session): session closed for user root
Sep 30 18:31:36 compute-0 nova_compute[265391]: 2025-09-30 18:31:36.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:31:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:31:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4234911310' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:31:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:31:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4234911310' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:31:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1589: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 1023 B/s wr, 0 op/s
Sep 30 18:31:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/4234911310' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:31:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/4234911310' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:31:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:31:36.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.002000052s ======
Sep 30 18:31:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:31:37.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Sep 30 18:31:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:31:37.283Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:31:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:31:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:31:37 compute-0 nova_compute[265391]: 2025-09-30 18:31:37.350 2 WARNING neutronclient.v2_0.client [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:31:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:31:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:31:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:31:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:31:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:31:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:31:37 compute-0 nova_compute[265391]: 2025-09-30 18:31:37.627 2 DEBUG nova.network.neutron [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Updating instance_info_cache with network_info: [{"id": "afaa4f9e-eab6-432e-9b39-d80bb074577d", "address": "fa:16:3e:be:37:f0", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafaa4f9e-ea", "ovs_interfaceid": "afaa4f9e-eab6-432e-9b39-d80bb074577d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:31:37 compute-0 ceph-mon[73755]: pgmap v1589: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 1023 B/s wr, 0 op/s
Sep 30 18:31:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:31:38 compute-0 nova_compute[265391]: 2025-09-30 18:31:38.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:38 compute-0 nova_compute[265391]: 2025-09-30 18:31:38.135 2 DEBUG oslo_concurrency.lockutils [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-5321575c-f6c1-4500-adf7-285c22df2e73" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:31:38 compute-0 nova_compute[265391]: 2025-09-30 18:31:38.148 2 DEBUG nova.virt.libvirt.driver [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpbmq9sq_z',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='5321575c-f6c1-4500-adf7-285c22df2e73',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11737
Sep 30 18:31:38 compute-0 nova_compute[265391]: 2025-09-30 18:31:38.149 2 DEBUG nova.virt.libvirt.driver [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Creating instance directory: /var/lib/nova/instances/5321575c-f6c1-4500-adf7-285c22df2e73 pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11750
Sep 30 18:31:38 compute-0 nova_compute[265391]: 2025-09-30 18:31:38.149 2 DEBUG nova.virt.libvirt.driver [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Ensure instance console log exists: /var/lib/nova/instances/5321575c-f6c1-4500-adf7-285c22df2e73/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Sep 30 18:31:38 compute-0 nova_compute[265391]: 2025-09-30 18:31:38.150 2 DEBUG nova.virt.libvirt.driver [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11704
Sep 30 18:31:38 compute-0 nova_compute[265391]: 2025-09-30 18:31:38.150 2 DEBUG nova.virt.libvirt.vif [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='True',created_at=2025-09-30T18:30:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-876288154',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-876288154',id=23,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:30:54Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c634e1c17ed54907969576a0eb8eff50',ramdisk_id='',reservation_id='r-ce22n40f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteStrategies-1883747907',owner_user_name='tempest-TestExecuteStrategies-1883747907-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:30:54Z,user_data=None,user_id='623ef4a55c9e4fc28bb65e49246b5008',uuid=5321575c-f6c1-4500-adf7-285c22df2e73,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "afaa4f9e-eab6-432e-9b39-d80bb074577d", "address": "fa:16:3e:be:37:f0", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapafaa4f9e-ea", "ovs_interfaceid": "afaa4f9e-eab6-432e-9b39-d80bb074577d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Sep 30 18:31:38 compute-0 nova_compute[265391]: 2025-09-30 18:31:38.151 2 DEBUG nova.network.os_vif_util [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "afaa4f9e-eab6-432e-9b39-d80bb074577d", "address": "fa:16:3e:be:37:f0", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapafaa4f9e-ea", "ovs_interfaceid": "afaa4f9e-eab6-432e-9b39-d80bb074577d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:31:38 compute-0 nova_compute[265391]: 2025-09-30 18:31:38.151 2 DEBUG nova.network.os_vif_util [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:37:f0,bridge_name='br-int',has_traffic_filtering=True,id=afaa4f9e-eab6-432e-9b39-d80bb074577d,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafaa4f9e-ea') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:31:38 compute-0 nova_compute[265391]: 2025-09-30 18:31:38.152 2 DEBUG os_vif [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:37:f0,bridge_name='br-int',has_traffic_filtering=True,id=afaa4f9e-eab6-432e-9b39-d80bb074577d,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafaa4f9e-ea') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Sep 30 18:31:38 compute-0 nova_compute[265391]: 2025-09-30 18:31:38.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:38 compute-0 nova_compute[265391]: 2025-09-30 18:31:38.153 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:31:38 compute-0 nova_compute[265391]: 2025-09-30 18:31:38.153 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:31:38 compute-0 nova_compute[265391]: 2025-09-30 18:31:38.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:38 compute-0 nova_compute[265391]: 2025-09-30 18:31:38.154 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': 'af99def9-07df-5fb6-bc06-1985311686d7', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:31:38 compute-0 nova_compute[265391]: 2025-09-30 18:31:38.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:38 compute-0 nova_compute[265391]: 2025-09-30 18:31:38.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:38 compute-0 nova_compute[265391]: 2025-09-30 18:31:38.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:38 compute-0 nova_compute[265391]: 2025-09-30 18:31:38.159 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapafaa4f9e-ea, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:31:38 compute-0 nova_compute[265391]: 2025-09-30 18:31:38.159 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tapafaa4f9e-ea, col_values=(('qos', UUID('676aedfd-c145-45d2-97d6-d0f4cf26720b')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:31:38 compute-0 nova_compute[265391]: 2025-09-30 18:31:38.159 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tapafaa4f9e-ea, col_values=(('external_ids', {'iface-id': 'afaa4f9e-eab6-432e-9b39-d80bb074577d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:be:37:f0', 'vm-uuid': '5321575c-f6c1-4500-adf7-285c22df2e73'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:31:38 compute-0 nova_compute[265391]: 2025-09-30 18:31:38.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:38 compute-0 nova_compute[265391]: 2025-09-30 18:31:38.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:31:38 compute-0 NetworkManager[45059]: <info>  [1759257098.1628] manager: (tapafaa4f9e-ea): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Sep 30 18:31:38 compute-0 nova_compute[265391]: 2025-09-30 18:31:38.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:38 compute-0 nova_compute[265391]: 2025-09-30 18:31:38.171 2 INFO os_vif [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:37:f0,bridge_name='br-int',has_traffic_filtering=True,id=afaa4f9e-eab6-432e-9b39-d80bb074577d,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafaa4f9e-ea')
Sep 30 18:31:38 compute-0 nova_compute[265391]: 2025-09-30 18:31:38.172 2 DEBUG nova.virt.libvirt.driver [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11851
Sep 30 18:31:38 compute-0 nova_compute[265391]: 2025-09-30 18:31:38.172 2 DEBUG nova.compute.manager [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpbmq9sq_z',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='5321575c-f6c1-4500-adf7-285c22df2e73',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9377
Sep 30 18:31:38 compute-0 nova_compute[265391]: 2025-09-30 18:31:38.173 2 WARNING neutronclient.v2_0.client [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:31:38 compute-0 nova_compute[265391]: 2025-09-30 18:31:38.491 2 WARNING neutronclient.v2_0.client [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:31:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1590: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 2.3 KiB/s wr, 0 op/s
Sep 30 18:31:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:31:38] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:31:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:31:38] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:31:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:31:38.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:31:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:31:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:31:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:31:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:31:39.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:39 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:31:39.050 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:31:39 compute-0 nova_compute[265391]: 2025-09-30 18:31:39.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:39 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:31:39.052 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:31:39 compute-0 nova_compute[265391]: 2025-09-30 18:31:39.570 2 DEBUG nova.network.neutron [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Port afaa4f9e-eab6-432e-9b39-d80bb074577d updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.12/site-packages/nova/network/neutron.py:356
Sep 30 18:31:39 compute-0 nova_compute[265391]: 2025-09-30 18:31:39.583 2 DEBUG nova.compute.manager [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpbmq9sq_z',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='5321575c-f6c1-4500-adf7-285c22df2e73',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9443
Sep 30 18:31:39 compute-0 ceph-mon[73755]: pgmap v1590: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 2.3 KiB/s wr, 0 op/s
Sep 30 18:31:40 compute-0 nova_compute[265391]: 2025-09-30 18:31:40.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:31:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1591: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 3.3 KiB/s wr, 1 op/s
Sep 30 18:31:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:31:40.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:31:41 compute-0 nova_compute[265391]: 2025-09-30 18:31:41.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:31:41.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:41 compute-0 ceph-mon[73755]: pgmap v1591: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 3.3 KiB/s wr, 1 op/s
Sep 30 18:31:42 compute-0 systemd[1]: Starting libvirt proxy daemon...
Sep 30 18:31:42 compute-0 systemd[1]: Started libvirt proxy daemon.
Sep 30 18:31:42 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:31:42.054 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:31:42 compute-0 kernel: tapafaa4f9e-ea: entered promiscuous mode
Sep 30 18:31:42 compute-0 NetworkManager[45059]: <info>  [1759257102.1975] manager: (tapafaa4f9e-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/79)
Sep 30 18:31:42 compute-0 ovn_controller[156242]: 2025-09-30T18:31:42Z|00184|binding|INFO|Claiming lport afaa4f9e-eab6-432e-9b39-d80bb074577d for this additional chassis.
Sep 30 18:31:42 compute-0 ovn_controller[156242]: 2025-09-30T18:31:42Z|00185|binding|INFO|afaa4f9e-eab6-432e-9b39-d80bb074577d: Claiming fa:16:3e:be:37:f0 10.100.0.10
Sep 30 18:31:42 compute-0 nova_compute[265391]: 2025-09-30 18:31:42.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:42 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:31:42.204 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:37:f0 10.100.0.10'], port_security=['fa:16:3e:be:37:f0 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp'}, parent_port=[], requested_additional_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5321575c-f6c1-4500-adf7-285c22df2e73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6901f664-336b-42d2-bbf7-58951befc8d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c634e1c17ed54907969576a0eb8eff50', 'neutron:revision_number': '10', 'neutron:security_group_ids': '11cf84c1-9641-409c-b5f5-6c5fe3a8afe5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c24d9de-651b-4bf8-842a-1286ab88b11d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=afaa4f9e-eab6-432e-9b39-d80bb074577d) old=Port_Binding(additional_chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:31:42 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:31:42.205 166158 INFO neutron.agent.ovn.metadata.agent [-] Port afaa4f9e-eab6-432e-9b39-d80bb074577d in datapath 6901f664-336b-42d2-bbf7-58951befc8d1 unbound from our chassis
Sep 30 18:31:42 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:31:42.208 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6901f664-336b-42d2-bbf7-58951befc8d1
Sep 30 18:31:42 compute-0 nova_compute[265391]: 2025-09-30 18:31:42.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:42 compute-0 ovn_controller[156242]: 2025-09-30T18:31:42Z|00186|binding|INFO|Setting lport afaa4f9e-eab6-432e-9b39-d80bb074577d ovn-installed in OVS
Sep 30 18:31:42 compute-0 nova_compute[265391]: 2025-09-30 18:31:42.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:42 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:31:42.229 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[b6d6e805-dfa4-4441-a59a-b19146c52380]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:31:42 compute-0 systemd-machined[219917]: New machine qemu-16-instance-00000017.
Sep 30 18:31:42 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-00000017.
Sep 30 18:31:42 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:31:42.263 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[1b0fc2b2-efde-4d47-8551-18efed798ca7]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:31:42 compute-0 systemd-udevd[332412]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:31:42 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:31:42.266 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[0895353c-c945-40db-b5e8-93a692e18030]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:31:42 compute-0 NetworkManager[45059]: <info>  [1759257102.2795] device (tapafaa4f9e-ea): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 18:31:42 compute-0 NetworkManager[45059]: <info>  [1759257102.2807] device (tapafaa4f9e-ea): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Sep 30 18:31:42 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:31:42.301 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[8a91a7e8-d66f-43e5-a916-09741b3303b9]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:31:42 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:31:42.325 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[fdcbb418-2369-49a2-bdc3-03ced3436123]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6901f664-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:41:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562485, 'reachable_time': 41624, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332422, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:31:42 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:31:42.347 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[e1845cf7-498c-4051-96e8-3ec8cafbc297]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6901f664-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 562497, 'tstamp': 562497}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332424, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6901f664-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 562500, 'tstamp': 562500}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332424, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:31:42 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:31:42.348 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6901f664-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:31:42 compute-0 nova_compute[265391]: 2025-09-30 18:31:42.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:42 compute-0 nova_compute[265391]: 2025-09-30 18:31:42.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:42 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:31:42.351 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6901f664-30, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:31:42 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:31:42.351 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:31:42 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:31:42.352 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6901f664-30, col_values=(('external_ids', {'iface-id': '5b6cbf18-1826-41d0-920f-e9db4f1a1832'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:31:42 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:31:42.352 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:31:42 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:31:42.353 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[c87003f2-1505-41c6-801f-31f7ff89a828]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-6901f664-336b-42d2-bbf7-58951befc8d1\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 6901f664-336b-42d2-bbf7-58951befc8d1\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:31:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1592: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 2.3 KiB/s wr, 0 op/s
Sep 30 18:31:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:31:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:31:42.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:31:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:31:43.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:43 compute-0 nova_compute[265391]: 2025-09-30 18:31:43.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:43 compute-0 nova_compute[265391]: 2025-09-30 18:31:43.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:31:43 compute-0 nova_compute[265391]: 2025-09-30 18:31:43.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:31:43 compute-0 nova_compute[265391]: 2025-09-30 18:31:43.429 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:31:43 compute-0 nova_compute[265391]: 2025-09-30 18:31:43.429 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:31:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:31:43.818Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:31:43 compute-0 ceph-mon[73755]: pgmap v1592: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 2.3 KiB/s wr, 0 op/s
Sep 30 18:31:43 compute-0 nova_compute[265391]: 2025-09-30 18:31:43.939 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:31:43 compute-0 nova_compute[265391]: 2025-09-30 18:31:43.940 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:31:43 compute-0 nova_compute[265391]: 2025-09-30 18:31:43.940 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:31:43 compute-0 nova_compute[265391]: 2025-09-30 18:31:43.941 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:31:43 compute-0 nova_compute[265391]: 2025-09-30 18:31:43.941 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:31:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:31:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:31:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:31:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:31:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:31:44 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3761297730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:31:44 compute-0 nova_compute[265391]: 2025-09-30 18:31:44.405 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:31:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1593: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 5.0 KiB/s rd, 10 KiB/s wr, 7 op/s
Sep 30 18:31:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:31:44.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:44 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3294471334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:31:44 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3761297730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:31:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:31:45.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:45 compute-0 nova_compute[265391]: 2025-09-30 18:31:45.464 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:31:45 compute-0 nova_compute[265391]: 2025-09-30 18:31:45.464 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:31:45 compute-0 nova_compute[265391]: 2025-09-30 18:31:45.469 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:31:45 compute-0 nova_compute[265391]: 2025-09-30 18:31:45.470 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:31:45 compute-0 nova_compute[265391]: 2025-09-30 18:31:45.690 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:31:45 compute-0 nova_compute[265391]: 2025-09-30 18:31:45.691 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:31:45 compute-0 nova_compute[265391]: 2025-09-30 18:31:45.715 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.024s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:31:45 compute-0 nova_compute[265391]: 2025-09-30 18:31:45.716 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4040MB free_disk=39.901119232177734GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:31:45 compute-0 nova_compute[265391]: 2025-09-30 18:31:45.716 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:31:45 compute-0 nova_compute[265391]: 2025-09-30 18:31:45.716 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:31:45 compute-0 ceph-mon[73755]: pgmap v1593: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 5.0 KiB/s rd, 10 KiB/s wr, 7 op/s
Sep 30 18:31:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:31:46 compute-0 nova_compute[265391]: 2025-09-30 18:31:46.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1594: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 4.7 KiB/s rd, 10 KiB/s wr, 7 op/s
Sep 30 18:31:46 compute-0 ovn_controller[156242]: 2025-09-30T18:31:46Z|00187|binding|INFO|Claiming lport afaa4f9e-eab6-432e-9b39-d80bb074577d for this chassis.
Sep 30 18:31:46 compute-0 ovn_controller[156242]: 2025-09-30T18:31:46Z|00188|binding|INFO|afaa4f9e-eab6-432e-9b39-d80bb074577d: Claiming fa:16:3e:be:37:f0 10.100.0.10
Sep 30 18:31:46 compute-0 ovn_controller[156242]: 2025-09-30T18:31:46Z|00189|binding|INFO|Setting lport afaa4f9e-eab6-432e-9b39-d80bb074577d up in Southbound
Sep 30 18:31:46 compute-0 nova_compute[265391]: 2025-09-30 18:31:46.741 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Migration for instance 5321575c-f6c1-4500-adf7-285c22df2e73 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Sep 30 18:31:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:31:46.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:31:47.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:47 compute-0 nova_compute[265391]: 2025-09-30 18:31:47.247 2 INFO nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Updating resource usage from migration 1f9d60b1-2650-404b-96aa-1154ab475694
Sep 30 18:31:47 compute-0 nova_compute[265391]: 2025-09-30 18:31:47.247 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Starting to track incoming migration 1f9d60b1-2650-404b-96aa-1154ab475694 with flavor c83dc7f1-0795-47db-adcb-fb90be11684a _update_usage_from_migration /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1536
Sep 30 18:31:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:31:47.285Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:31:47 compute-0 sudo[332495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:31:47 compute-0 sudo[332495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:31:47 compute-0 sudo[332495]: pam_unix(sudo:session): session closed for user root
Sep 30 18:31:47 compute-0 podman[332515]: 2025-09-30 18:31:47.569463085 +0000 UTC m=+0.087650444 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:31:47 compute-0 sudo[332558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:31:47 compute-0 podman[332499]: 2025-09-30 18:31:47.581825455 +0000 UTC m=+0.101419830 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20250930, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Sep 30 18:31:47 compute-0 sudo[332558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:31:47 compute-0 podman[332510]: 2025-09-30 18:31:47.60631732 +0000 UTC m=+0.127247640 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Sep 30 18:31:47 compute-0 nova_compute[265391]: 2025-09-30 18:31:47.781 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Instance 87f50c89-ad94-41dc-9263-c5715083d91b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Sep 30 18:31:47 compute-0 ceph-mon[73755]: pgmap v1594: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 4.7 KiB/s rd, 10 KiB/s wr, 7 op/s
Sep 30 18:31:47 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/749390180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:31:48 compute-0 sudo[332558]: pam_unix(sudo:session): session closed for user root
Sep 30 18:31:48 compute-0 sshd-session[331238]: error: kex_exchange_identification: read: Connection reset by peer
Sep 30 18:31:48 compute-0 sshd-session[331238]: Connection reset by 154.125.120.7 port 33628
Sep 30 18:31:48 compute-0 nova_compute[265391]: 2025-09-30 18:31:48.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:48 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Sep 30 18:31:48 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Sep 30 18:31:48 compute-0 nova_compute[265391]: 2025-09-30 18:31:48.288 2 WARNING nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Instance 5321575c-f6c1-4500-adf7-285c22df2e73 has been moved to another host compute-1.ctlplane.example.com(compute-1.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}.
Sep 30 18:31:48 compute-0 nova_compute[265391]: 2025-09-30 18:31:48.289 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:31:48 compute-0 nova_compute[265391]: 2025-09-30 18:31:48.289 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=39GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:31:45 up  1:35,  0 user,  load average: 0.58, 0.91, 0.88\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_None': '1', 'num_os_type_None': '1', 'num_proj_c634e1c17ed54907969576a0eb8eff50': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:31:48 compute-0 nova_compute[265391]: 2025-09-30 18:31:48.326 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:31:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1595: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 10 KiB/s wr, 7 op/s
Sep 30 18:31:48 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:31:48 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3912130262' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:31:48 compute-0 nova_compute[265391]: 2025-09-30 18:31:48.766 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:31:48 compute-0 nova_compute[265391]: 2025-09-30 18:31:48.773 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:31:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:31:48] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:31:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:31:48] "GET /metrics HTTP/1.1" 200 46649 "" "Prometheus/2.51.0"
Sep 30 18:31:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:31:48.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:48 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Sep 30 18:31:48 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3912130262' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:31:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:31:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:31:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:31:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:31:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:49 compute-0 nova_compute[265391]: 2025-09-30 18:31:49.036 2 INFO nova.compute.manager [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Post operation of migration started
Sep 30 18:31:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:31:49.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:49 compute-0 nova_compute[265391]: 2025-09-30 18:31:49.038 2 WARNING neutronclient.v2_0.client [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:31:49 compute-0 nova_compute[265391]: 2025-09-30 18:31:49.280 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:31:49 compute-0 nova_compute[265391]: 2025-09-30 18:31:49.439 2 WARNING neutronclient.v2_0.client [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:31:49 compute-0 nova_compute[265391]: 2025-09-30 18:31:49.439 2 WARNING neutronclient.v2_0.client [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:31:49 compute-0 nova_compute[265391]: 2025-09-30 18:31:49.541 2 DEBUG oslo_concurrency.lockutils [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-5321575c-f6c1-4500-adf7-285c22df2e73" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:31:49 compute-0 nova_compute[265391]: 2025-09-30 18:31:49.542 2 DEBUG oslo_concurrency.lockutils [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-5321575c-f6c1-4500-adf7-285c22df2e73" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:31:49 compute-0 nova_compute[265391]: 2025-09-30 18:31:49.542 2 DEBUG nova.network.neutron [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:31:49 compute-0 nova_compute[265391]: 2025-09-30 18:31:49.792 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:31:49 compute-0 nova_compute[265391]: 2025-09-30 18:31:49.793 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.077s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:31:49 compute-0 ceph-mon[73755]: pgmap v1595: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 10 KiB/s wr, 7 op/s
Sep 30 18:31:50 compute-0 nova_compute[265391]: 2025-09-30 18:31:50.050 2 WARNING neutronclient.v2_0.client [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:31:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1596: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 5.2 KiB/s rd, 9.2 KiB/s wr, 7 op/s
Sep 30 18:31:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 18:31:50 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:31:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 18:31:50 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:31:50 compute-0 nova_compute[265391]: 2025-09-30 18:31:50.794 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:31:50 compute-0 nova_compute[265391]: 2025-09-30 18:31:50.794 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:31:50 compute-0 nova_compute[265391]: 2025-09-30 18:31:50.795 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:31:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:31:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:31:50.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:31:50 compute-0 nova_compute[265391]: 2025-09-30 18:31:50.845 2 WARNING neutronclient.v2_0.client [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:31:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:31:50 compute-0 nova_compute[265391]: 2025-09-30 18:31:50.995 2 DEBUG nova.network.neutron [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Updating instance_info_cache with network_info: [{"id": "afaa4f9e-eab6-432e-9b39-d80bb074577d", "address": "fa:16:3e:be:37:f0", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafaa4f9e-ea", "ovs_interfaceid": "afaa4f9e-eab6-432e-9b39-d80bb074577d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:31:51 compute-0 nova_compute[265391]: 2025-09-30 18:31:51.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:31:51.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:51 compute-0 nova_compute[265391]: 2025-09-30 18:31:51.503 2 DEBUG oslo_concurrency.lockutils [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-5321575c-f6c1-4500-adf7-285c22df2e73" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:31:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Sep 30 18:31:51 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Sep 30 18:31:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:31:51 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:31:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:31:51 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:31:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1597: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 5.4 KiB/s rd, 8.9 KiB/s wr, 7 op/s
Sep 30 18:31:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:31:51 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:31:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:31:51 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:31:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:31:51 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:31:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:31:51 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:31:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:31:51 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:31:51 compute-0 sudo[332668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:31:51 compute-0 sudo[332668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:31:51 compute-0 sudo[332668]: pam_unix(sudo:session): session closed for user root
Sep 30 18:31:51 compute-0 ceph-mon[73755]: pgmap v1596: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 5.2 KiB/s rd, 9.2 KiB/s wr, 7 op/s
Sep 30 18:31:51 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:31:51 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:31:51 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Sep 30 18:31:51 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:31:51 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:31:51 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:31:51 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:31:51 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:31:51 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:31:51 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:31:51 compute-0 sudo[332693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:31:51 compute-0 sudo[332693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:31:51 compute-0 ceph-mon[73755]: log_channel(cluster) log [WRN] : Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Sep 30 18:31:52 compute-0 nova_compute[265391]: 2025-09-30 18:31:52.025 2 DEBUG oslo_concurrency.lockutils [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:31:52 compute-0 nova_compute[265391]: 2025-09-30 18:31:52.027 2 DEBUG oslo_concurrency.lockutils [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:31:52 compute-0 nova_compute[265391]: 2025-09-30 18:31:52.027 2 DEBUG oslo_concurrency.lockutils [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:31:52 compute-0 nova_compute[265391]: 2025-09-30 18:31:52.033 2 INFO nova.virt.libvirt.driver [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Sep 30 18:31:52 compute-0 virtqemud[265263]: Domain id=16 name='instance-00000017' uuid=5321575c-f6c1-4500-adf7-285c22df2e73 is tainted: custom-monitor
Sep 30 18:31:52 compute-0 podman[332759]: 2025-09-30 18:31:52.132923626 +0000 UTC m=+0.038947221 container create 602876850a489dc7018c6ab4afe947c5a72fad23c70459b07632b6023cbcf921 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_chebyshev, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:31:52 compute-0 systemd[1]: Started libpod-conmon-602876850a489dc7018c6ab4afe947c5a72fad23c70459b07632b6023cbcf921.scope.
Sep 30 18:31:52 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:31:52 compute-0 podman[332759]: 2025-09-30 18:31:52.116536371 +0000 UTC m=+0.022559986 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:31:52 compute-0 podman[332759]: 2025-09-30 18:31:52.215885997 +0000 UTC m=+0.121909612 container init 602876850a489dc7018c6ab4afe947c5a72fad23c70459b07632b6023cbcf921 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_chebyshev, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:31:52 compute-0 podman[332759]: 2025-09-30 18:31:52.222754505 +0000 UTC m=+0.128778100 container start 602876850a489dc7018c6ab4afe947c5a72fad23c70459b07632b6023cbcf921 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_chebyshev, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2)
Sep 30 18:31:52 compute-0 podman[332759]: 2025-09-30 18:31:52.227289512 +0000 UTC m=+0.133313197 container attach 602876850a489dc7018c6ab4afe947c5a72fad23c70459b07632b6023cbcf921 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Sep 30 18:31:52 compute-0 reverent_chebyshev[332775]: 167 167
Sep 30 18:31:52 compute-0 systemd[1]: libpod-602876850a489dc7018c6ab4afe947c5a72fad23c70459b07632b6023cbcf921.scope: Deactivated successfully.
Sep 30 18:31:52 compute-0 podman[332759]: 2025-09-30 18:31:52.228832542 +0000 UTC m=+0.134856157 container died 602876850a489dc7018c6ab4afe947c5a72fad23c70459b07632b6023cbcf921 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS)
Sep 30 18:31:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-70f6f8330648b9cb2ba1f4683926fb82f25dcef53ecd2b0b5982fa78fa5807e4-merged.mount: Deactivated successfully.
Sep 30 18:31:52 compute-0 podman[332759]: 2025-09-30 18:31:52.274536757 +0000 UTC m=+0.180560352 container remove 602876850a489dc7018c6ab4afe947c5a72fad23c70459b07632b6023cbcf921 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_chebyshev, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Sep 30 18:31:52 compute-0 systemd[1]: libpod-conmon-602876850a489dc7018c6ab4afe947c5a72fad23c70459b07632b6023cbcf921.scope: Deactivated successfully.
Sep 30 18:31:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Sep 30 18:31:52 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:31:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:31:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:31:52 compute-0 podman[332801]: 2025-09-30 18:31:52.454094353 +0000 UTC m=+0.039244519 container create 8d0b89173e00916ccbd5ccd4816b3a933744067c9478b02ec212b113d66ff546 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:31:52 compute-0 systemd[1]: Started libpod-conmon-8d0b89173e00916ccbd5ccd4816b3a933744067c9478b02ec212b113d66ff546.scope.
Sep 30 18:31:52 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:31:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/894cb2cb66fe9e55dd030d9b154a892c11df53a459085cb137b0ff1ac494a547/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:31:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/894cb2cb66fe9e55dd030d9b154a892c11df53a459085cb137b0ff1ac494a547/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:31:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/894cb2cb66fe9e55dd030d9b154a892c11df53a459085cb137b0ff1ac494a547/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:31:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/894cb2cb66fe9e55dd030d9b154a892c11df53a459085cb137b0ff1ac494a547/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:31:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/894cb2cb66fe9e55dd030d9b154a892c11df53a459085cb137b0ff1ac494a547/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:31:52 compute-0 podman[332801]: 2025-09-30 18:31:52.521655304 +0000 UTC m=+0.106805490 container init 8d0b89173e00916ccbd5ccd4816b3a933744067c9478b02ec212b113d66ff546 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_gagarin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:31:52 compute-0 podman[332801]: 2025-09-30 18:31:52.527478755 +0000 UTC m=+0.112628921 container start 8d0b89173e00916ccbd5ccd4816b3a933744067c9478b02ec212b113d66ff546 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_gagarin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:31:52 compute-0 podman[332801]: 2025-09-30 18:31:52.530583336 +0000 UTC m=+0.115733502 container attach 8d0b89173e00916ccbd5ccd4816b3a933744067c9478b02ec212b113d66ff546 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_gagarin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:31:52 compute-0 podman[332801]: 2025-09-30 18:31:52.438372505 +0000 UTC m=+0.023522691 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:31:52 compute-0 ceph-mon[73755]: pgmap v1597: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 5.4 KiB/s rd, 8.9 KiB/s wr, 7 op/s
Sep 30 18:31:52 compute-0 ceph-mon[73755]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Sep 30 18:31:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:31:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:31:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.002000052s ======
Sep 30 18:31:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:31:52.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Sep 30 18:31:52 compute-0 reverent_gagarin[332817]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:31:52 compute-0 reverent_gagarin[332817]: --> All data devices are unavailable
Sep 30 18:31:52 compute-0 systemd[1]: libpod-8d0b89173e00916ccbd5ccd4816b3a933744067c9478b02ec212b113d66ff546.scope: Deactivated successfully.
Sep 30 18:31:52 compute-0 podman[332801]: 2025-09-30 18:31:52.850970282 +0000 UTC m=+0.436120448 container died 8d0b89173e00916ccbd5ccd4816b3a933744067c9478b02ec212b113d66ff546 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_gagarin, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Sep 30 18:31:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-894cb2cb66fe9e55dd030d9b154a892c11df53a459085cb137b0ff1ac494a547-merged.mount: Deactivated successfully.
Sep 30 18:31:52 compute-0 podman[332801]: 2025-09-30 18:31:52.88794089 +0000 UTC m=+0.473091056 container remove 8d0b89173e00916ccbd5ccd4816b3a933744067c9478b02ec212b113d66ff546 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_gagarin, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:31:52 compute-0 systemd[1]: libpod-conmon-8d0b89173e00916ccbd5ccd4816b3a933744067c9478b02ec212b113d66ff546.scope: Deactivated successfully.
Sep 30 18:31:52 compute-0 sudo[332693]: pam_unix(sudo:session): session closed for user root
Sep 30 18:31:52 compute-0 sudo[332844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:31:52 compute-0 sudo[332844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:31:52 compute-0 sudo[332844]: pam_unix(sudo:session): session closed for user root
Sep 30 18:31:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:31:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:31:53.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:31:53 compute-0 nova_compute[265391]: 2025-09-30 18:31:53.044 2 INFO nova.virt.libvirt.driver [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Sep 30 18:31:53 compute-0 sudo[332869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:31:53 compute-0 sudo[332869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:31:53 compute-0 nova_compute[265391]: 2025-09-30 18:31:53.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:53 compute-0 podman[332935]: 2025-09-30 18:31:53.463255476 +0000 UTC m=+0.054513994 container create 723d642801744e4d801e44e23922c51b7d81b7bcf5e3340ed2faaad223ccfa54 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Sep 30 18:31:53 compute-0 systemd[1]: Started libpod-conmon-723d642801744e4d801e44e23922c51b7d81b7bcf5e3340ed2faaad223ccfa54.scope.
Sep 30 18:31:53 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:31:53 compute-0 podman[332935]: 2025-09-30 18:31:53.444865059 +0000 UTC m=+0.036123587 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:31:53 compute-0 podman[332935]: 2025-09-30 18:31:53.553623069 +0000 UTC m=+0.144881587 container init 723d642801744e4d801e44e23922c51b7d81b7bcf5e3340ed2faaad223ccfa54 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_bhaskara, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Sep 30 18:31:53 compute-0 podman[332935]: 2025-09-30 18:31:53.561963405 +0000 UTC m=+0.153221903 container start 723d642801744e4d801e44e23922c51b7d81b7bcf5e3340ed2faaad223ccfa54 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Sep 30 18:31:53 compute-0 podman[332935]: 2025-09-30 18:31:53.565355663 +0000 UTC m=+0.156614171 container attach 723d642801744e4d801e44e23922c51b7d81b7bcf5e3340ed2faaad223ccfa54 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_bhaskara, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:31:53 compute-0 hopeful_bhaskara[332952]: 167 167
Sep 30 18:31:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1598: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 5.4 KiB/s rd, 8.9 KiB/s wr, 7 op/s
Sep 30 18:31:53 compute-0 systemd[1]: libpod-723d642801744e4d801e44e23922c51b7d81b7bcf5e3340ed2faaad223ccfa54.scope: Deactivated successfully.
Sep 30 18:31:53 compute-0 podman[332935]: 2025-09-30 18:31:53.57024648 +0000 UTC m=+0.161504988 container died 723d642801744e4d801e44e23922c51b7d81b7bcf5e3340ed2faaad223ccfa54 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:31:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-0be021f20cdd3d9c322cfe66db1d4bc03edc41dacf9c55f973d75f3cee32c1bc-merged.mount: Deactivated successfully.
Sep 30 18:31:53 compute-0 podman[332935]: 2025-09-30 18:31:53.620608876 +0000 UTC m=+0.211867374 container remove 723d642801744e4d801e44e23922c51b7d81b7bcf5e3340ed2faaad223ccfa54 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_bhaskara, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Sep 30 18:31:53 compute-0 systemd[1]: libpod-conmon-723d642801744e4d801e44e23922c51b7d81b7bcf5e3340ed2faaad223ccfa54.scope: Deactivated successfully.
Sep 30 18:31:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:31:53.819Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:31:53 compute-0 podman[332976]: 2025-09-30 18:31:53.85341042 +0000 UTC m=+0.060123180 container create a1a62e84289aec033b5fbed0cddf6b0984e747776db01194e7bca1e6f984c1cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:31:53 compute-0 systemd[1]: Started libpod-conmon-a1a62e84289aec033b5fbed0cddf6b0984e747776db01194e7bca1e6f984c1cc.scope.
Sep 30 18:31:53 compute-0 podman[332976]: 2025-09-30 18:31:53.825494396 +0000 UTC m=+0.032207236 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:31:53 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:31:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/053bdbfc4b7e9fc99e7e0705c6fdbb62d75927938a1922784aad1e3dcc572cae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:31:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/053bdbfc4b7e9fc99e7e0705c6fdbb62d75927938a1922784aad1e3dcc572cae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:31:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/053bdbfc4b7e9fc99e7e0705c6fdbb62d75927938a1922784aad1e3dcc572cae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:31:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/053bdbfc4b7e9fc99e7e0705c6fdbb62d75927938a1922784aad1e3dcc572cae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:31:53 compute-0 podman[332976]: 2025-09-30 18:31:53.970615769 +0000 UTC m=+0.177328619 container init a1a62e84289aec033b5fbed0cddf6b0984e747776db01194e7bca1e6f984c1cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_roentgen, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Sep 30 18:31:53 compute-0 podman[332976]: 2025-09-30 18:31:53.981962093 +0000 UTC m=+0.188674843 container start a1a62e84289aec033b5fbed0cddf6b0984e747776db01194e7bca1e6f984c1cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_roentgen, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:31:53 compute-0 podman[332976]: 2025-09-30 18:31:53.985054993 +0000 UTC m=+0.191767833 container attach a1a62e84289aec033b5fbed0cddf6b0984e747776db01194e7bca1e6f984c1cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Sep 30 18:31:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:31:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:31:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:31:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:31:54 compute-0 nova_compute[265391]: 2025-09-30 18:31:54.051 2 INFO nova.virt.libvirt.driver [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Sep 30 18:31:54 compute-0 nova_compute[265391]: 2025-09-30 18:31:54.057 2 DEBUG nova.compute.manager [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Sep 30 18:31:54 compute-0 unruffled_roentgen[332992]: {
Sep 30 18:31:54 compute-0 unruffled_roentgen[332992]:     "0": [
Sep 30 18:31:54 compute-0 unruffled_roentgen[332992]:         {
Sep 30 18:31:54 compute-0 unruffled_roentgen[332992]:             "devices": [
Sep 30 18:31:54 compute-0 unruffled_roentgen[332992]:                 "/dev/loop3"
Sep 30 18:31:54 compute-0 unruffled_roentgen[332992]:             ],
Sep 30 18:31:54 compute-0 unruffled_roentgen[332992]:             "lv_name": "ceph_lv0",
Sep 30 18:31:54 compute-0 unruffled_roentgen[332992]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:31:54 compute-0 unruffled_roentgen[332992]:             "lv_size": "21470642176",
Sep 30 18:31:54 compute-0 unruffled_roentgen[332992]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:31:54 compute-0 unruffled_roentgen[332992]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:31:54 compute-0 unruffled_roentgen[332992]:             "name": "ceph_lv0",
Sep 30 18:31:54 compute-0 unruffled_roentgen[332992]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:31:54 compute-0 unruffled_roentgen[332992]:             "tags": {
Sep 30 18:31:54 compute-0 unruffled_roentgen[332992]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:31:54 compute-0 unruffled_roentgen[332992]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:31:54 compute-0 unruffled_roentgen[332992]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:31:54 compute-0 unruffled_roentgen[332992]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:31:54 compute-0 unruffled_roentgen[332992]:                 "ceph.cluster_name": "ceph",
Sep 30 18:31:54 compute-0 unruffled_roentgen[332992]:                 "ceph.crush_device_class": "",
Sep 30 18:31:54 compute-0 unruffled_roentgen[332992]:                 "ceph.encrypted": "0",
Sep 30 18:31:54 compute-0 unruffled_roentgen[332992]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:31:54 compute-0 unruffled_roentgen[332992]:                 "ceph.osd_id": "0",
Sep 30 18:31:54 compute-0 unruffled_roentgen[332992]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:31:54 compute-0 unruffled_roentgen[332992]:                 "ceph.type": "block",
Sep 30 18:31:54 compute-0 unruffled_roentgen[332992]:                 "ceph.vdo": "0",
Sep 30 18:31:54 compute-0 unruffled_roentgen[332992]:                 "ceph.with_tpm": "0"
Sep 30 18:31:54 compute-0 unruffled_roentgen[332992]:             },
Sep 30 18:31:54 compute-0 unruffled_roentgen[332992]:             "type": "block",
Sep 30 18:31:54 compute-0 unruffled_roentgen[332992]:             "vg_name": "ceph_vg0"
Sep 30 18:31:54 compute-0 unruffled_roentgen[332992]:         }
Sep 30 18:31:54 compute-0 unruffled_roentgen[332992]:     ]
Sep 30 18:31:54 compute-0 unruffled_roentgen[332992]: }
Sep 30 18:31:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:31:54.318 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:31:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:31:54.318 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:31:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:31:54.319 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:31:54 compute-0 systemd[1]: libpod-a1a62e84289aec033b5fbed0cddf6b0984e747776db01194e7bca1e6f984c1cc.scope: Deactivated successfully.
Sep 30 18:31:54 compute-0 podman[332976]: 2025-09-30 18:31:54.343751213 +0000 UTC m=+0.550463973 container died a1a62e84289aec033b5fbed0cddf6b0984e747776db01194e7bca1e6f984c1cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Sep 30 18:31:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-053bdbfc4b7e9fc99e7e0705c6fdbb62d75927938a1922784aad1e3dcc572cae-merged.mount: Deactivated successfully.
Sep 30 18:31:54 compute-0 podman[332976]: 2025-09-30 18:31:54.390586837 +0000 UTC m=+0.597299587 container remove a1a62e84289aec033b5fbed0cddf6b0984e747776db01194e7bca1e6f984c1cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_roentgen, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:31:54 compute-0 systemd[1]: libpod-conmon-a1a62e84289aec033b5fbed0cddf6b0984e747776db01194e7bca1e6f984c1cc.scope: Deactivated successfully.
Sep 30 18:31:54 compute-0 sudo[332869]: pam_unix(sudo:session): session closed for user root
Sep 30 18:31:54 compute-0 sudo[333015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:31:54 compute-0 sudo[333015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:31:54 compute-0 sudo[333015]: pam_unix(sudo:session): session closed for user root
Sep 30 18:31:54 compute-0 nova_compute[265391]: 2025-09-30 18:31:54.568 2 DEBUG nova.objects.instance [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.12/site-packages/nova/objects/instance.py:1067
Sep 30 18:31:54 compute-0 sudo[333040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:31:54 compute-0 sudo[333040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:31:54 compute-0 ceph-mon[73755]: pgmap v1598: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 5.4 KiB/s rd, 8.9 KiB/s wr, 7 op/s
Sep 30 18:31:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:31:54.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:55 compute-0 podman[333105]: 2025-09-30 18:31:55.040065555 +0000 UTC m=+0.036485097 container create d3ab7099eb3df1994a0de147bbe84c0728b70eacb195ff2e0a0d740b3c5cedf7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:31:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:31:55.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:55 compute-0 systemd[1]: Started libpod-conmon-d3ab7099eb3df1994a0de147bbe84c0728b70eacb195ff2e0a0d740b3c5cedf7.scope.
Sep 30 18:31:55 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:31:55 compute-0 podman[333105]: 2025-09-30 18:31:55.023959978 +0000 UTC m=+0.020379540 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:31:55 compute-0 podman[333105]: 2025-09-30 18:31:55.130161971 +0000 UTC m=+0.126581573 container init d3ab7099eb3df1994a0de147bbe84c0728b70eacb195ff2e0a0d740b3c5cedf7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_murdock, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Sep 30 18:31:55 compute-0 podman[333105]: 2025-09-30 18:31:55.136995858 +0000 UTC m=+0.133415400 container start d3ab7099eb3df1994a0de147bbe84c0728b70eacb195ff2e0a0d740b3c5cedf7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:31:55 compute-0 podman[333105]: 2025-09-30 18:31:55.140944161 +0000 UTC m=+0.137363703 container attach d3ab7099eb3df1994a0de147bbe84c0728b70eacb195ff2e0a0d740b3c5cedf7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Sep 30 18:31:55 compute-0 suspicious_murdock[333121]: 167 167
Sep 30 18:31:55 compute-0 systemd[1]: libpod-d3ab7099eb3df1994a0de147bbe84c0728b70eacb195ff2e0a0d740b3c5cedf7.scope: Deactivated successfully.
Sep 30 18:31:55 compute-0 podman[333105]: 2025-09-30 18:31:55.145506639 +0000 UTC m=+0.141926181 container died d3ab7099eb3df1994a0de147bbe84c0728b70eacb195ff2e0a0d740b3c5cedf7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_murdock, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:31:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-675768daf3af477ae4621b1cfee06650b181ee9c132165fe0c6f9f0dc036f538-merged.mount: Deactivated successfully.
Sep 30 18:31:55 compute-0 podman[333105]: 2025-09-30 18:31:55.182465007 +0000 UTC m=+0.178884559 container remove d3ab7099eb3df1994a0de147bbe84c0728b70eacb195ff2e0a0d740b3c5cedf7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_murdock, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Sep 30 18:31:55 compute-0 systemd[1]: libpod-conmon-d3ab7099eb3df1994a0de147bbe84c0728b70eacb195ff2e0a0d740b3c5cedf7.scope: Deactivated successfully.
Sep 30 18:31:55 compute-0 podman[333144]: 2025-09-30 18:31:55.395383637 +0000 UTC m=+0.048381335 container create 901ace04588cd34990d55a97768b72b7034bb1af2cacc53e563f4ac3f5822183 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_hofstadter, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Sep 30 18:31:55 compute-0 systemd[1]: Started libpod-conmon-901ace04588cd34990d55a97768b72b7034bb1af2cacc53e563f4ac3f5822183.scope.
Sep 30 18:31:55 compute-0 podman[333144]: 2025-09-30 18:31:55.377008481 +0000 UTC m=+0.030006229 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:31:55 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:31:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9422ee15741b8b76e4c6f313ec335b9543edec510b5d019400042acbce04d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:31:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9422ee15741b8b76e4c6f313ec335b9543edec510b5d019400042acbce04d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:31:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9422ee15741b8b76e4c6f313ec335b9543edec510b5d019400042acbce04d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:31:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9422ee15741b8b76e4c6f313ec335b9543edec510b5d019400042acbce04d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:31:55 compute-0 podman[333144]: 2025-09-30 18:31:55.503169352 +0000 UTC m=+0.156167070 container init 901ace04588cd34990d55a97768b72b7034bb1af2cacc53e563f4ac3f5822183 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_hofstadter, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:31:55 compute-0 podman[333144]: 2025-09-30 18:31:55.511775315 +0000 UTC m=+0.164773023 container start 901ace04588cd34990d55a97768b72b7034bb1af2cacc53e563f4ac3f5822183 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_hofstadter, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Sep 30 18:31:55 compute-0 podman[333144]: 2025-09-30 18:31:55.516312692 +0000 UTC m=+0.169310390 container attach 901ace04588cd34990d55a97768b72b7034bb1af2cacc53e563f4ac3f5822183 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_hofstadter, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS)
Sep 30 18:31:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1599: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 747 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:31:55 compute-0 nova_compute[265391]: 2025-09-30 18:31:55.587 2 WARNING neutronclient.v2_0.client [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:31:55 compute-0 nova_compute[265391]: 2025-09-30 18:31:55.880 2 WARNING neutronclient.v2_0.client [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:31:55 compute-0 nova_compute[265391]: 2025-09-30 18:31:55.881 2 WARNING neutronclient.v2_0.client [None req-9d63bf12-12ae-476e-ae19-3562886a5a07 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:31:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:31:56 compute-0 nova_compute[265391]: 2025-09-30 18:31:56.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:56 compute-0 lvm[333255]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:31:56 compute-0 lvm[333255]: VG ceph_vg0 finished
Sep 30 18:31:56 compute-0 sudo[333236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:31:56 compute-0 sudo[333236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:31:56 compute-0 sudo[333236]: pam_unix(sudo:session): session closed for user root
Sep 30 18:31:56 compute-0 confident_hofstadter[333160]: {}
Sep 30 18:31:56 compute-0 systemd[1]: libpod-901ace04588cd34990d55a97768b72b7034bb1af2cacc53e563f4ac3f5822183.scope: Deactivated successfully.
Sep 30 18:31:56 compute-0 systemd[1]: libpod-901ace04588cd34990d55a97768b72b7034bb1af2cacc53e563f4ac3f5822183.scope: Consumed 1.242s CPU time.
Sep 30 18:31:56 compute-0 podman[333144]: 2025-09-30 18:31:56.279650993 +0000 UTC m=+0.932648711 container died 901ace04588cd34990d55a97768b72b7034bb1af2cacc53e563f4ac3f5822183 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 18:31:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc9422ee15741b8b76e4c6f313ec335b9543edec510b5d019400042acbce04d0-merged.mount: Deactivated successfully.
Sep 30 18:31:56 compute-0 podman[333144]: 2025-09-30 18:31:56.31851409 +0000 UTC m=+0.971511788 container remove 901ace04588cd34990d55a97768b72b7034bb1af2cacc53e563f4ac3f5822183 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:31:56 compute-0 systemd[1]: libpod-conmon-901ace04588cd34990d55a97768b72b7034bb1af2cacc53e563f4ac3f5822183.scope: Deactivated successfully.
Sep 30 18:31:56 compute-0 sudo[333040]: pam_unix(sudo:session): session closed for user root
Sep 30 18:31:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:31:56 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:31:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:31:56 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:31:56 compute-0 sudo[333276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:31:56 compute-0 sudo[333276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:31:56 compute-0 sudo[333276]: pam_unix(sudo:session): session closed for user root
Sep 30 18:31:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:31:56.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:31:57.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:31:57.286Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:31:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:31:57.287Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:31:57 compute-0 ceph-mon[73755]: pgmap v1599: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 747 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:31:57 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:31:57 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:31:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:31:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1007055275' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:31:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:31:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1007055275' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:31:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1600: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 747 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:31:58 compute-0 nova_compute[265391]: 2025-09-30 18:31:58.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1007055275' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:31:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1007055275' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:31:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1355569676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:31:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:31:58] "GET /metrics HTTP/1.1" 200 46724 "" "Prometheus/2.51.0"
Sep 30 18:31:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:31:58] "GET /metrics HTTP/1.1" 200 46724 "" "Prometheus/2.51.0"
Sep 30 18:31:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:31:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:31:58.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:31:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:31:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:31:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:31:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:31:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:31:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:31:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:31:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:31:59.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:31:59 compute-0 nova_compute[265391]: 2025-09-30 18:31:59.405 2 DEBUG oslo_concurrency.lockutils [None req-e69b46a4-c9bc-471d-80ba-60e19e4a52b9 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "5321575c-f6c1-4500-adf7-285c22df2e73" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:31:59 compute-0 nova_compute[265391]: 2025-09-30 18:31:59.406 2 DEBUG oslo_concurrency.lockutils [None req-e69b46a4-c9bc-471d-80ba-60e19e4a52b9 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "5321575c-f6c1-4500-adf7-285c22df2e73" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:31:59 compute-0 nova_compute[265391]: 2025-09-30 18:31:59.406 2 DEBUG oslo_concurrency.lockutils [None req-e69b46a4-c9bc-471d-80ba-60e19e4a52b9 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "5321575c-f6c1-4500-adf7-285c22df2e73-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:31:59 compute-0 nova_compute[265391]: 2025-09-30 18:31:59.406 2 DEBUG oslo_concurrency.lockutils [None req-e69b46a4-c9bc-471d-80ba-60e19e4a52b9 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "5321575c-f6c1-4500-adf7-285c22df2e73-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:31:59 compute-0 nova_compute[265391]: 2025-09-30 18:31:59.406 2 DEBUG oslo_concurrency.lockutils [None req-e69b46a4-c9bc-471d-80ba-60e19e4a52b9 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "5321575c-f6c1-4500-adf7-285c22df2e73-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:31:59 compute-0 ceph-mon[73755]: pgmap v1600: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 747 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:31:59 compute-0 nova_compute[265391]: 2025-09-30 18:31:59.418 2 INFO nova.compute.manager [None req-e69b46a4-c9bc-471d-80ba-60e19e4a52b9 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Terminating instance
Sep 30 18:31:59 compute-0 nova_compute[265391]: 2025-09-30 18:31:59.423 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:31:59 compute-0 podman[333304]: 2025-09-30 18:31:59.531766356 +0000 UTC m=+0.067184153 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Sep 30 18:31:59 compute-0 podman[333305]: 2025-09-30 18:31:59.55778885 +0000 UTC m=+0.093261169 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:31:59 compute-0 podman[333306]: 2025-09-30 18:31:59.557834331 +0000 UTC m=+0.089276525 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, release=1755695350, vcs-type=git, version=9.6, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9)
Sep 30 18:31:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1601: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 841 B/s rd, 0 op/s
Sep 30 18:31:59 compute-0 podman[276673]: time="2025-09-30T18:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:31:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:31:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10771 "" "Go-http-client/1.1"
Sep 30 18:31:59 compute-0 nova_compute[265391]: 2025-09-30 18:31:59.936 2 DEBUG nova.compute.manager [None req-e69b46a4-c9bc-471d-80ba-60e19e4a52b9 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Sep 30 18:31:59 compute-0 kernel: tapafaa4f9e-ea (unregistering): left promiscuous mode
Sep 30 18:31:59 compute-0 NetworkManager[45059]: <info>  [1759257119.9850] device (tapafaa4f9e-ea): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 18:31:59 compute-0 ovn_controller[156242]: 2025-09-30T18:31:59Z|00190|binding|INFO|Releasing lport afaa4f9e-eab6-432e-9b39-d80bb074577d from this chassis (sb_readonly=0)
Sep 30 18:31:59 compute-0 ovn_controller[156242]: 2025-09-30T18:31:59Z|00191|binding|INFO|Setting lport afaa4f9e-eab6-432e-9b39-d80bb074577d down in Southbound
Sep 30 18:31:59 compute-0 nova_compute[265391]: 2025-09-30 18:31:59.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:31:59 compute-0 ovn_controller[156242]: 2025-09-30T18:31:59Z|00192|binding|INFO|Removing iface tapafaa4f9e-ea ovn-installed in OVS
Sep 30 18:31:59 compute-0 nova_compute[265391]: 2025-09-30 18:31:59.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.003 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:37:f0 10.100.0.10'], port_security=['fa:16:3e:be:37:f0 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5321575c-f6c1-4500-adf7-285c22df2e73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6901f664-336b-42d2-bbf7-58951befc8d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c634e1c17ed54907969576a0eb8eff50', 'neutron:revision_number': '14', 'neutron:security_group_ids': '11cf84c1-9641-409c-b5f5-6c5fe3a8afe5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c24d9de-651b-4bf8-842a-1286ab88b11d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=afaa4f9e-eab6-432e-9b39-d80bb074577d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.005 166158 INFO neutron.agent.ovn.metadata.agent [-] Port afaa4f9e-eab6-432e-9b39-d80bb074577d in datapath 6901f664-336b-42d2-bbf7-58951befc8d1 unbound from our chassis
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.007 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6901f664-336b-42d2-bbf7-58951befc8d1
Sep 30 18:32:00 compute-0 nova_compute[265391]: 2025-09-30 18:32:00.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.034 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[718ae705-4503-466a-be61-42539f668b84]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:00 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000017.scope: Deactivated successfully.
Sep 30 18:32:00 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000017.scope: Consumed 2.623s CPU time.
Sep 30 18:32:00 compute-0 systemd-machined[219917]: Machine qemu-16-instance-00000017 terminated.
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.069 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[9f357618-d9d7-49c7-af74-d6b0d7747e07]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.072 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[346f7441-1f3f-49bc-ba3d-292cb85fe774]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.102 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[11d39e74-12ce-44e6-ad6c-72841d451f1f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.119 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[ed091610-0827-402f-beb3-0070e8b4ad91]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6901f664-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:41:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562485, 'reachable_time': 41624, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333371, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.140 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[c4564ff6-471b-402c-a4c6-54f0936d348c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6901f664-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 562497, 'tstamp': 562497}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333372, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6901f664-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 562500, 'tstamp': 562500}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333372, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.141 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6901f664-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:32:00 compute-0 nova_compute[265391]: 2025-09-30 18:32:00.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:00 compute-0 nova_compute[265391]: 2025-09-30 18:32:00.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.150 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6901f664-30, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.150 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.151 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6901f664-30, col_values=(('external_ids', {'iface-id': '5b6cbf18-1826-41d0-920f-e9db4f1a1832'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.151 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.153 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[c2a0512f-1c19-4070-8ddb-e0cdf3f47c56]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-6901f664-336b-42d2-bbf7-58951befc8d1\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 6901f664-336b-42d2-bbf7-58951befc8d1\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:00 compute-0 kernel: tapafaa4f9e-ea: entered promiscuous mode
Sep 30 18:32:00 compute-0 NetworkManager[45059]: <info>  [1759257120.1581] manager: (tapafaa4f9e-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/80)
Sep 30 18:32:00 compute-0 ovn_controller[156242]: 2025-09-30T18:32:00Z|00193|binding|INFO|Claiming lport afaa4f9e-eab6-432e-9b39-d80bb074577d for this chassis.
Sep 30 18:32:00 compute-0 ovn_controller[156242]: 2025-09-30T18:32:00Z|00194|binding|INFO|afaa4f9e-eab6-432e-9b39-d80bb074577d: Claiming fa:16:3e:be:37:f0 10.100.0.10
Sep 30 18:32:00 compute-0 kernel: tapafaa4f9e-ea (unregistering): left promiscuous mode
Sep 30 18:32:00 compute-0 nova_compute[265391]: 2025-09-30 18:32:00.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:00 compute-0 nova_compute[265391]: 2025-09-30 18:32:00.163 2 DEBUG nova.compute.manager [req-a49c3a51-c962-4fb4-8464-27aa8df2c894 req-7b142868-5c61-4b50-a8ce-db8260974a3a 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Received event network-vif-unplugged-afaa4f9e-eab6-432e-9b39-d80bb074577d external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:32:00 compute-0 nova_compute[265391]: 2025-09-30 18:32:00.163 2 DEBUG oslo_concurrency.lockutils [req-a49c3a51-c962-4fb4-8464-27aa8df2c894 req-7b142868-5c61-4b50-a8ce-db8260974a3a 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "5321575c-f6c1-4500-adf7-285c22df2e73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:32:00 compute-0 nova_compute[265391]: 2025-09-30 18:32:00.164 2 DEBUG oslo_concurrency.lockutils [req-a49c3a51-c962-4fb4-8464-27aa8df2c894 req-7b142868-5c61-4b50-a8ce-db8260974a3a 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "5321575c-f6c1-4500-adf7-285c22df2e73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:32:00 compute-0 nova_compute[265391]: 2025-09-30 18:32:00.164 2 DEBUG oslo_concurrency.lockutils [req-a49c3a51-c962-4fb4-8464-27aa8df2c894 req-7b142868-5c61-4b50-a8ce-db8260974a3a 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "5321575c-f6c1-4500-adf7-285c22df2e73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:32:00 compute-0 nova_compute[265391]: 2025-09-30 18:32:00.164 2 DEBUG nova.compute.manager [req-a49c3a51-c962-4fb4-8464-27aa8df2c894 req-7b142868-5c61-4b50-a8ce-db8260974a3a 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] No waiting events found dispatching network-vif-unplugged-afaa4f9e-eab6-432e-9b39-d80bb074577d pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:32:00 compute-0 nova_compute[265391]: 2025-09-30 18:32:00.164 2 DEBUG nova.compute.manager [req-a49c3a51-c962-4fb4-8464-27aa8df2c894 req-7b142868-5c61-4b50-a8ce-db8260974a3a 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Received event network-vif-unplugged-afaa4f9e-eab6-432e-9b39-d80bb074577d for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.169 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:37:f0 10.100.0.10'], port_security=['fa:16:3e:be:37:f0 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5321575c-f6c1-4500-adf7-285c22df2e73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6901f664-336b-42d2-bbf7-58951befc8d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c634e1c17ed54907969576a0eb8eff50', 'neutron:revision_number': '14', 'neutron:security_group_ids': '11cf84c1-9641-409c-b5f5-6c5fe3a8afe5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c24d9de-651b-4bf8-842a-1286ab88b11d, chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=afaa4f9e-eab6-432e-9b39-d80bb074577d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.170 166158 INFO neutron.agent.ovn.metadata.agent [-] Port afaa4f9e-eab6-432e-9b39-d80bb074577d in datapath 6901f664-336b-42d2-bbf7-58951befc8d1 bound to our chassis
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.172 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6901f664-336b-42d2-bbf7-58951befc8d1
Sep 30 18:32:00 compute-0 ovn_controller[156242]: 2025-09-30T18:32:00Z|00195|binding|INFO|Setting lport afaa4f9e-eab6-432e-9b39-d80bb074577d ovn-installed in OVS
Sep 30 18:32:00 compute-0 ovn_controller[156242]: 2025-09-30T18:32:00Z|00196|binding|INFO|Setting lport afaa4f9e-eab6-432e-9b39-d80bb074577d up in Southbound
Sep 30 18:32:00 compute-0 ovn_controller[156242]: 2025-09-30T18:32:00Z|00197|binding|INFO|Releasing lport afaa4f9e-eab6-432e-9b39-d80bb074577d from this chassis (sb_readonly=1)
Sep 30 18:32:00 compute-0 ovn_controller[156242]: 2025-09-30T18:32:00Z|00198|if_status|INFO|Dropped 1 log messages in last 340 seconds (most recently, 340 seconds ago) due to excessive rate
Sep 30 18:32:00 compute-0 ovn_controller[156242]: 2025-09-30T18:32:00Z|00199|if_status|INFO|Not setting lport afaa4f9e-eab6-432e-9b39-d80bb074577d down as sb is readonly
Sep 30 18:32:00 compute-0 nova_compute[265391]: 2025-09-30 18:32:00.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:00 compute-0 ovn_controller[156242]: 2025-09-30T18:32:00Z|00200|binding|INFO|Removing iface tapafaa4f9e-ea ovn-installed in OVS
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.191 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[ce481cec-cee9-433a-ae71-c7e01e7ae82f]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:00 compute-0 ovn_controller[156242]: 2025-09-30T18:32:00Z|00201|binding|INFO|Releasing lport afaa4f9e-eab6-432e-9b39-d80bb074577d from this chassis (sb_readonly=0)
Sep 30 18:32:00 compute-0 ovn_controller[156242]: 2025-09-30T18:32:00Z|00202|binding|INFO|Setting lport afaa4f9e-eab6-432e-9b39-d80bb074577d down in Southbound
Sep 30 18:32:00 compute-0 nova_compute[265391]: 2025-09-30 18:32:00.194 2 INFO nova.virt.libvirt.driver [-] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Instance destroyed successfully.
Sep 30 18:32:00 compute-0 nova_compute[265391]: 2025-09-30 18:32:00.195 2 DEBUG nova.objects.instance [None req-e69b46a4-c9bc-471d-80ba-60e19e4a52b9 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lazy-loading 'resources' on Instance uuid 5321575c-f6c1-4500-adf7-285c22df2e73 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.206 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:37:f0 10.100.0.10'], port_security=['fa:16:3e:be:37:f0 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5321575c-f6c1-4500-adf7-285c22df2e73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6901f664-336b-42d2-bbf7-58951befc8d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c634e1c17ed54907969576a0eb8eff50', 'neutron:revision_number': '14', 'neutron:security_group_ids': '11cf84c1-9641-409c-b5f5-6c5fe3a8afe5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c24d9de-651b-4bf8-842a-1286ab88b11d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=afaa4f9e-eab6-432e-9b39-d80bb074577d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:32:00 compute-0 nova_compute[265391]: 2025-09-30 18:32:00.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.229 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[221bdeb4-14d7-4bc7-a5d4-c93fa9992016]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.231 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[4a9b891c-c2c9-4e5d-9cc3-a26355201c4a]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.264 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[7e8cec1a-7b12-474d-9ec9-d4e691f88164]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.284 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[14a8ba6f-e984-4a37-b041-d6ad06af5a90]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6901f664-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:41:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 9, 'rx_bytes': 1756, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 9, 'rx_bytes': 1756, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562485, 'reachable_time': 41624, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333383, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.303 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[4c6df950-6424-4b4a-b103-f135d1d49254]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6901f664-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 562497, 'tstamp': 562497}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333384, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6901f664-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 562500, 'tstamp': 562500}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333384, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.304 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6901f664-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:32:00 compute-0 nova_compute[265391]: 2025-09-30 18:32:00.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:00 compute-0 nova_compute[265391]: 2025-09-30 18:32:00.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.311 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6901f664-30, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.311 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.311 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6901f664-30, col_values=(('external_ids', {'iface-id': '5b6cbf18-1826-41d0-920f-e9db4f1a1832'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.311 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.312 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[17dc5e32-501c-4c23-a966-c3879cda1086]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-6901f664-336b-42d2-bbf7-58951befc8d1\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 6901f664-336b-42d2-bbf7-58951befc8d1\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.313 166158 INFO neutron.agent.ovn.metadata.agent [-] Port afaa4f9e-eab6-432e-9b39-d80bb074577d in datapath 6901f664-336b-42d2-bbf7-58951befc8d1 unbound from our chassis
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.314 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6901f664-336b-42d2-bbf7-58951befc8d1
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.333 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[70874e29-fb71-4397-9c52-cdf3b50f480c]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.368 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[dee56033-2a94-488e-91fb-f5819386f33b]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.370 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[fc7f7296-1ec3-4ad8-958c-2ce6ad14f887]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.400 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[233e789e-8c37-495c-a084-d86c19f6a8d6]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.433 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[34b8790e-14fb-43f0-8289-621664153379]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6901f664-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:41:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 11, 'rx_bytes': 1756, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 11, 'rx_bytes': 1756, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562485, 'reachable_time': 41624, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333391, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.455 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[842b9669-2fb9-4458-a634-35824359cb59]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6901f664-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 562497, 'tstamp': 562497}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333392, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6901f664-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 562500, 'tstamp': 562500}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333392, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.456 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6901f664-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:32:00 compute-0 nova_compute[265391]: 2025-09-30 18:32:00.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:00 compute-0 nova_compute[265391]: 2025-09-30 18:32:00.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.465 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6901f664-30, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.466 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.466 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6901f664-30, col_values=(('external_ids', {'iface-id': '5b6cbf18-1826-41d0-920f-e9db4f1a1832'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.466 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:32:00 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:00.468 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[1eefcb32-e997-4b16-9b71-47d8a8d1eefd]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-6901f664-336b-42d2-bbf7-58951befc8d1\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 6901f664-336b-42d2-bbf7-58951befc8d1\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:00 compute-0 nova_compute[265391]: 2025-09-30 18:32:00.710 2 DEBUG nova.virt.libvirt.vif [None req-e69b46a4-c9bc-471d-80ba-60e19e4a52b9 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,compute_id=1,config_drive='True',created_at=2025-09-30T18:30:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-876288154',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-876288154',id=23,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:30:54Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c634e1c17ed54907969576a0eb8eff50',ramdisk_id='',reservation_id='r-ce22n40f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,manager,admin',clean_attempts='1',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteStrategies-1883747907',owner_user_name='tempest-TestExecuteStrategies-1883747907-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:31:55Z,user_data=None,user_id='623ef4a55c9e4fc28bb65e49246b5008',uuid=5321575c-f6c1-4500-adf7-285c22df2e73,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "afaa4f9e-eab6-432e-9b39-d80bb074577d", "address": "fa:16:3e:be:37:f0", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafaa4f9e-ea", "ovs_interfaceid": "afaa4f9e-eab6-432e-9b39-d80bb074577d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Sep 30 18:32:00 compute-0 nova_compute[265391]: 2025-09-30 18:32:00.711 2 DEBUG nova.network.os_vif_util [None req-e69b46a4-c9bc-471d-80ba-60e19e4a52b9 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converting VIF {"id": "afaa4f9e-eab6-432e-9b39-d80bb074577d", "address": "fa:16:3e:be:37:f0", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafaa4f9e-ea", "ovs_interfaceid": "afaa4f9e-eab6-432e-9b39-d80bb074577d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:32:00 compute-0 nova_compute[265391]: 2025-09-30 18:32:00.711 2 DEBUG nova.network.os_vif_util [None req-e69b46a4-c9bc-471d-80ba-60e19e4a52b9 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:be:37:f0,bridge_name='br-int',has_traffic_filtering=True,id=afaa4f9e-eab6-432e-9b39-d80bb074577d,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafaa4f9e-ea') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:32:00 compute-0 nova_compute[265391]: 2025-09-30 18:32:00.712 2 DEBUG os_vif [None req-e69b46a4-c9bc-471d-80ba-60e19e4a52b9 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:be:37:f0,bridge_name='br-int',has_traffic_filtering=True,id=afaa4f9e-eab6-432e-9b39-d80bb074577d,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafaa4f9e-ea') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Sep 30 18:32:00 compute-0 nova_compute[265391]: 2025-09-30 18:32:00.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:00 compute-0 nova_compute[265391]: 2025-09-30 18:32:00.714 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapafaa4f9e-ea, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:32:00 compute-0 nova_compute[265391]: 2025-09-30 18:32:00.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:00 compute-0 nova_compute[265391]: 2025-09-30 18:32:00.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:00 compute-0 nova_compute[265391]: 2025-09-30 18:32:00.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:00 compute-0 nova_compute[265391]: 2025-09-30 18:32:00.717 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=676aedfd-c145-45d2-97d6-d0f4cf26720b) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:32:00 compute-0 nova_compute[265391]: 2025-09-30 18:32:00.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:00 compute-0 nova_compute[265391]: 2025-09-30 18:32:00.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:00 compute-0 nova_compute[265391]: 2025-09-30 18:32:00.721 2 INFO os_vif [None req-e69b46a4-c9bc-471d-80ba-60e19e4a52b9 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:be:37:f0,bridge_name='br-int',has_traffic_filtering=True,id=afaa4f9e-eab6-432e-9b39-d80bb074577d,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafaa4f9e-ea')
Sep 30 18:32:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:32:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:32:00.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:32:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:32:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:32:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:32:01.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:32:01 compute-0 nova_compute[265391]: 2025-09-30 18:32:01.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:01 compute-0 nova_compute[265391]: 2025-09-30 18:32:01.121 2 INFO nova.virt.libvirt.driver [None req-e69b46a4-c9bc-471d-80ba-60e19e4a52b9 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Deleting instance files /var/lib/nova/instances/5321575c-f6c1-4500-adf7-285c22df2e73_del
Sep 30 18:32:01 compute-0 nova_compute[265391]: 2025-09-30 18:32:01.121 2 INFO nova.virt.libvirt.driver [None req-e69b46a4-c9bc-471d-80ba-60e19e4a52b9 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Deletion of /var/lib/nova/instances/5321575c-f6c1-4500-adf7-285c22df2e73_del complete
Sep 30 18:32:01 compute-0 openstack_network_exporter[279566]: ERROR   18:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:32:01 compute-0 openstack_network_exporter[279566]: ERROR   18:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:32:01 compute-0 openstack_network_exporter[279566]: ERROR   18:32:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:32:01 compute-0 openstack_network_exporter[279566]: ERROR   18:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:32:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:32:01 compute-0 openstack_network_exporter[279566]: ERROR   18:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:32:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:32:01 compute-0 ceph-mon[73755]: pgmap v1601: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 841 B/s rd, 0 op/s
Sep 30 18:32:01 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1818660566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:32:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1602: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 560 B/s rd, 0 op/s
Sep 30 18:32:01 compute-0 nova_compute[265391]: 2025-09-30 18:32:01.632 2 INFO nova.compute.manager [None req-e69b46a4-c9bc-471d-80ba-60e19e4a52b9 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Took 1.70 seconds to destroy the instance on the hypervisor.
Sep 30 18:32:01 compute-0 nova_compute[265391]: 2025-09-30 18:32:01.632 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-e69b46a4-c9bc-471d-80ba-60e19e4a52b9 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Sep 30 18:32:01 compute-0 nova_compute[265391]: 2025-09-30 18:32:01.633 2 DEBUG nova.compute.manager [-] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Sep 30 18:32:01 compute-0 nova_compute[265391]: 2025-09-30 18:32:01.633 2 DEBUG nova.network.neutron [-] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Sep 30 18:32:01 compute-0 nova_compute[265391]: 2025-09-30 18:32:01.633 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:32:02 compute-0 nova_compute[265391]: 2025-09-30 18:32:02.256 2 DEBUG nova.compute.manager [req-b3773cae-e6fb-485d-ac8b-6ab43f9cf6ad req-9fc11520-8a92-4884-a303-35c4a14bc936 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Received event network-vif-unplugged-afaa4f9e-eab6-432e-9b39-d80bb074577d external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:32:02 compute-0 nova_compute[265391]: 2025-09-30 18:32:02.256 2 DEBUG oslo_concurrency.lockutils [req-b3773cae-e6fb-485d-ac8b-6ab43f9cf6ad req-9fc11520-8a92-4884-a303-35c4a14bc936 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "5321575c-f6c1-4500-adf7-285c22df2e73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:32:02 compute-0 nova_compute[265391]: 2025-09-30 18:32:02.257 2 DEBUG oslo_concurrency.lockutils [req-b3773cae-e6fb-485d-ac8b-6ab43f9cf6ad req-9fc11520-8a92-4884-a303-35c4a14bc936 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "5321575c-f6c1-4500-adf7-285c22df2e73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:32:02 compute-0 nova_compute[265391]: 2025-09-30 18:32:02.257 2 DEBUG oslo_concurrency.lockutils [req-b3773cae-e6fb-485d-ac8b-6ab43f9cf6ad req-9fc11520-8a92-4884-a303-35c4a14bc936 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "5321575c-f6c1-4500-adf7-285c22df2e73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:32:02 compute-0 nova_compute[265391]: 2025-09-30 18:32:02.257 2 DEBUG nova.compute.manager [req-b3773cae-e6fb-485d-ac8b-6ab43f9cf6ad req-9fc11520-8a92-4884-a303-35c4a14bc936 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] No waiting events found dispatching network-vif-unplugged-afaa4f9e-eab6-432e-9b39-d80bb074577d pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:32:02 compute-0 nova_compute[265391]: 2025-09-30 18:32:02.257 2 DEBUG nova.compute.manager [req-b3773cae-e6fb-485d-ac8b-6ab43f9cf6ad req-9fc11520-8a92-4884-a303-35c4a14bc936 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Received event network-vif-unplugged-afaa4f9e-eab6-432e-9b39-d80bb074577d for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:32:02 compute-0 nova_compute[265391]: 2025-09-30 18:32:02.258 2 DEBUG nova.compute.manager [req-b3773cae-e6fb-485d-ac8b-6ab43f9cf6ad req-9fc11520-8a92-4884-a303-35c4a14bc936 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Received event network-vif-plugged-afaa4f9e-eab6-432e-9b39-d80bb074577d external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:32:02 compute-0 nova_compute[265391]: 2025-09-30 18:32:02.258 2 DEBUG oslo_concurrency.lockutils [req-b3773cae-e6fb-485d-ac8b-6ab43f9cf6ad req-9fc11520-8a92-4884-a303-35c4a14bc936 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "5321575c-f6c1-4500-adf7-285c22df2e73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:32:02 compute-0 nova_compute[265391]: 2025-09-30 18:32:02.258 2 DEBUG oslo_concurrency.lockutils [req-b3773cae-e6fb-485d-ac8b-6ab43f9cf6ad req-9fc11520-8a92-4884-a303-35c4a14bc936 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "5321575c-f6c1-4500-adf7-285c22df2e73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:32:02 compute-0 nova_compute[265391]: 2025-09-30 18:32:02.258 2 DEBUG oslo_concurrency.lockutils [req-b3773cae-e6fb-485d-ac8b-6ab43f9cf6ad req-9fc11520-8a92-4884-a303-35c4a14bc936 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "5321575c-f6c1-4500-adf7-285c22df2e73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:32:02 compute-0 nova_compute[265391]: 2025-09-30 18:32:02.258 2 DEBUG nova.compute.manager [req-b3773cae-e6fb-485d-ac8b-6ab43f9cf6ad req-9fc11520-8a92-4884-a303-35c4a14bc936 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] No waiting events found dispatching network-vif-plugged-afaa4f9e-eab6-432e-9b39-d80bb074577d pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:32:02 compute-0 nova_compute[265391]: 2025-09-30 18:32:02.259 2 WARNING nova.compute.manager [req-b3773cae-e6fb-485d-ac8b-6ab43f9cf6ad req-9fc11520-8a92-4884-a303-35c4a14bc936 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Received unexpected event network-vif-plugged-afaa4f9e-eab6-432e-9b39-d80bb074577d for instance with vm_state active and task_state deleting.
Sep 30 18:32:02 compute-0 nova_compute[265391]: 2025-09-30 18:32:02.259 2 DEBUG nova.compute.manager [req-b3773cae-e6fb-485d-ac8b-6ab43f9cf6ad req-9fc11520-8a92-4884-a303-35c4a14bc936 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Received event network-vif-plugged-afaa4f9e-eab6-432e-9b39-d80bb074577d external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:32:02 compute-0 nova_compute[265391]: 2025-09-30 18:32:02.259 2 DEBUG oslo_concurrency.lockutils [req-b3773cae-e6fb-485d-ac8b-6ab43f9cf6ad req-9fc11520-8a92-4884-a303-35c4a14bc936 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "5321575c-f6c1-4500-adf7-285c22df2e73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:32:02 compute-0 nova_compute[265391]: 2025-09-30 18:32:02.259 2 DEBUG oslo_concurrency.lockutils [req-b3773cae-e6fb-485d-ac8b-6ab43f9cf6ad req-9fc11520-8a92-4884-a303-35c4a14bc936 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "5321575c-f6c1-4500-adf7-285c22df2e73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:32:02 compute-0 nova_compute[265391]: 2025-09-30 18:32:02.259 2 DEBUG oslo_concurrency.lockutils [req-b3773cae-e6fb-485d-ac8b-6ab43f9cf6ad req-9fc11520-8a92-4884-a303-35c4a14bc936 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "5321575c-f6c1-4500-adf7-285c22df2e73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:32:02 compute-0 nova_compute[265391]: 2025-09-30 18:32:02.260 2 DEBUG nova.compute.manager [req-b3773cae-e6fb-485d-ac8b-6ab43f9cf6ad req-9fc11520-8a92-4884-a303-35c4a14bc936 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] No waiting events found dispatching network-vif-plugged-afaa4f9e-eab6-432e-9b39-d80bb074577d pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:32:02 compute-0 nova_compute[265391]: 2025-09-30 18:32:02.260 2 WARNING nova.compute.manager [req-b3773cae-e6fb-485d-ac8b-6ab43f9cf6ad req-9fc11520-8a92-4884-a303-35c4a14bc936 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Received unexpected event network-vif-plugged-afaa4f9e-eab6-432e-9b39-d80bb074577d for instance with vm_state active and task_state deleting.
Sep 30 18:32:02 compute-0 nova_compute[265391]: 2025-09-30 18:32:02.260 2 DEBUG nova.compute.manager [req-b3773cae-e6fb-485d-ac8b-6ab43f9cf6ad req-9fc11520-8a92-4884-a303-35c4a14bc936 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Received event network-vif-unplugged-afaa4f9e-eab6-432e-9b39-d80bb074577d external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:32:02 compute-0 nova_compute[265391]: 2025-09-30 18:32:02.260 2 DEBUG oslo_concurrency.lockutils [req-b3773cae-e6fb-485d-ac8b-6ab43f9cf6ad req-9fc11520-8a92-4884-a303-35c4a14bc936 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "5321575c-f6c1-4500-adf7-285c22df2e73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:32:02 compute-0 nova_compute[265391]: 2025-09-30 18:32:02.260 2 DEBUG oslo_concurrency.lockutils [req-b3773cae-e6fb-485d-ac8b-6ab43f9cf6ad req-9fc11520-8a92-4884-a303-35c4a14bc936 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "5321575c-f6c1-4500-adf7-285c22df2e73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:32:02 compute-0 nova_compute[265391]: 2025-09-30 18:32:02.261 2 DEBUG oslo_concurrency.lockutils [req-b3773cae-e6fb-485d-ac8b-6ab43f9cf6ad req-9fc11520-8a92-4884-a303-35c4a14bc936 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "5321575c-f6c1-4500-adf7-285c22df2e73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:32:02 compute-0 nova_compute[265391]: 2025-09-30 18:32:02.261 2 DEBUG nova.compute.manager [req-b3773cae-e6fb-485d-ac8b-6ab43f9cf6ad req-9fc11520-8a92-4884-a303-35c4a14bc936 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] No waiting events found dispatching network-vif-unplugged-afaa4f9e-eab6-432e-9b39-d80bb074577d pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:32:02 compute-0 nova_compute[265391]: 2025-09-30 18:32:02.261 2 DEBUG nova.compute.manager [req-b3773cae-e6fb-485d-ac8b-6ab43f9cf6ad req-9fc11520-8a92-4884-a303-35c4a14bc936 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Received event network-vif-unplugged-afaa4f9e-eab6-432e-9b39-d80bb074577d for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:32:02 compute-0 nova_compute[265391]: 2025-09-30 18:32:02.261 2 DEBUG nova.compute.manager [req-b3773cae-e6fb-485d-ac8b-6ab43f9cf6ad req-9fc11520-8a92-4884-a303-35c4a14bc936 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Received event network-vif-unplugged-afaa4f9e-eab6-432e-9b39-d80bb074577d external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:32:02 compute-0 nova_compute[265391]: 2025-09-30 18:32:02.261 2 DEBUG oslo_concurrency.lockutils [req-b3773cae-e6fb-485d-ac8b-6ab43f9cf6ad req-9fc11520-8a92-4884-a303-35c4a14bc936 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "5321575c-f6c1-4500-adf7-285c22df2e73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:32:02 compute-0 nova_compute[265391]: 2025-09-30 18:32:02.262 2 DEBUG oslo_concurrency.lockutils [req-b3773cae-e6fb-485d-ac8b-6ab43f9cf6ad req-9fc11520-8a92-4884-a303-35c4a14bc936 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "5321575c-f6c1-4500-adf7-285c22df2e73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:32:02 compute-0 nova_compute[265391]: 2025-09-30 18:32:02.262 2 DEBUG oslo_concurrency.lockutils [req-b3773cae-e6fb-485d-ac8b-6ab43f9cf6ad req-9fc11520-8a92-4884-a303-35c4a14bc936 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "5321575c-f6c1-4500-adf7-285c22df2e73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:32:02 compute-0 nova_compute[265391]: 2025-09-30 18:32:02.262 2 DEBUG nova.compute.manager [req-b3773cae-e6fb-485d-ac8b-6ab43f9cf6ad req-9fc11520-8a92-4884-a303-35c4a14bc936 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] No waiting events found dispatching network-vif-unplugged-afaa4f9e-eab6-432e-9b39-d80bb074577d pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:32:02 compute-0 nova_compute[265391]: 2025-09-30 18:32:02.262 2 DEBUG nova.compute.manager [req-b3773cae-e6fb-485d-ac8b-6ab43f9cf6ad req-9fc11520-8a92-4884-a303-35c4a14bc936 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Received event network-vif-unplugged-afaa4f9e-eab6-432e-9b39-d80bb074577d for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:32:02 compute-0 nova_compute[265391]: 2025-09-30 18:32:02.498 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:32:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:32:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:32:02.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:32:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:32:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:32:03.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:32:03 compute-0 ceph-mon[73755]: pgmap v1602: 353 pgs: 353 active+clean; 200 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 560 B/s rd, 0 op/s
Sep 30 18:32:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1603: 353 pgs: 353 active+clean; 163 MiB data, 374 MiB used, 40 GiB / 40 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 18 op/s
Sep 30 18:32:03 compute-0 nova_compute[265391]: 2025-09-30 18:32:03.814 2 DEBUG nova.network.neutron [-] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:32:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:32:03.820Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:32:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:32:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:32:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:32:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:32:04 compute-0 nova_compute[265391]: 2025-09-30 18:32:04.322 2 INFO nova.compute.manager [-] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Took 2.69 seconds to deallocate network for instance.
Sep 30 18:32:04 compute-0 nova_compute[265391]: 2025-09-30 18:32:04.327 2 DEBUG nova.compute.manager [req-d603083d-e4f4-4ed0-9db9-1259f185f69b req-48a70176-d2b3-4631-9624-a6d371c04212 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 5321575c-f6c1-4500-adf7-285c22df2e73] Received event network-vif-deleted-afaa4f9e-eab6-432e-9b39-d80bb074577d external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:32:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:32:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:32:04.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:32:04 compute-0 nova_compute[265391]: 2025-09-30 18:32:04.844 2 DEBUG oslo_concurrency.lockutils [None req-e69b46a4-c9bc-471d-80ba-60e19e4a52b9 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:32:04 compute-0 nova_compute[265391]: 2025-09-30 18:32:04.844 2 DEBUG oslo_concurrency.lockutils [None req-e69b46a4-c9bc-471d-80ba-60e19e4a52b9 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:32:04 compute-0 nova_compute[265391]: 2025-09-30 18:32:04.852 2 DEBUG oslo_concurrency.lockutils [None req-e69b46a4-c9bc-471d-80ba-60e19e4a52b9 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.007s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:32:04 compute-0 nova_compute[265391]: 2025-09-30 18:32:04.912 2 INFO nova.scheduler.client.report [None req-e69b46a4-c9bc-471d-80ba-60e19e4a52b9 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Deleted allocations for instance 5321575c-f6c1-4500-adf7-285c22df2e73
Sep 30 18:32:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:32:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:32:05.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:32:05 compute-0 ceph-mon[73755]: pgmap v1603: 353 pgs: 353 active+clean; 163 MiB data, 374 MiB used, 40 GiB / 40 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 18 op/s
Sep 30 18:32:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1604: 353 pgs: 353 active+clean; 121 MiB data, 352 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:32:05 compute-0 nova_compute[265391]: 2025-09-30 18:32:05.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:05 compute-0 nova_compute[265391]: 2025-09-30 18:32:05.945 2 DEBUG oslo_concurrency.lockutils [None req-e69b46a4-c9bc-471d-80ba-60e19e4a52b9 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "5321575c-f6c1-4500-adf7-285c22df2e73" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.539s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:32:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:32:06 compute-0 nova_compute[265391]: 2025-09-30 18:32:06.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:06 compute-0 nova_compute[265391]: 2025-09-30 18:32:06.360 2 DEBUG oslo_concurrency.lockutils [None req-fbd2a862-a54e-4139-b4e6-95dc39836044 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "87f50c89-ad94-41dc-9263-c5715083d91b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:32:06 compute-0 nova_compute[265391]: 2025-09-30 18:32:06.361 2 DEBUG oslo_concurrency.lockutils [None req-fbd2a862-a54e-4139-b4e6-95dc39836044 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "87f50c89-ad94-41dc-9263-c5715083d91b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:32:06 compute-0 nova_compute[265391]: 2025-09-30 18:32:06.361 2 DEBUG oslo_concurrency.lockutils [None req-fbd2a862-a54e-4139-b4e6-95dc39836044 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "87f50c89-ad94-41dc-9263-c5715083d91b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:32:06 compute-0 nova_compute[265391]: 2025-09-30 18:32:06.361 2 DEBUG oslo_concurrency.lockutils [None req-fbd2a862-a54e-4139-b4e6-95dc39836044 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "87f50c89-ad94-41dc-9263-c5715083d91b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:32:06 compute-0 nova_compute[265391]: 2025-09-30 18:32:06.361 2 DEBUG oslo_concurrency.lockutils [None req-fbd2a862-a54e-4139-b4e6-95dc39836044 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "87f50c89-ad94-41dc-9263-c5715083d91b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:32:06 compute-0 nova_compute[265391]: 2025-09-30 18:32:06.375 2 INFO nova.compute.manager [None req-fbd2a862-a54e-4139-b4e6-95dc39836044 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Terminating instance
Sep 30 18:32:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:32:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:32:06.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:32:06 compute-0 nova_compute[265391]: 2025-09-30 18:32:06.893 2 DEBUG nova.compute.manager [None req-fbd2a862-a54e-4139-b4e6-95dc39836044 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Sep 30 18:32:06 compute-0 kernel: tap688f7db7-db (unregistering): left promiscuous mode
Sep 30 18:32:06 compute-0 NetworkManager[45059]: <info>  [1759257126.9518] device (tap688f7db7-db): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 18:32:06 compute-0 ovn_controller[156242]: 2025-09-30T18:32:06Z|00203|binding|INFO|Releasing lport 688f7db7-db25-41cc-8c99-d97dc36a693a from this chassis (sb_readonly=0)
Sep 30 18:32:06 compute-0 ovn_controller[156242]: 2025-09-30T18:32:06Z|00204|binding|INFO|Setting lport 688f7db7-db25-41cc-8c99-d97dc36a693a down in Southbound
Sep 30 18:32:06 compute-0 ovn_controller[156242]: 2025-09-30T18:32:06Z|00205|binding|INFO|Removing iface tap688f7db7-db ovn-installed in OVS
Sep 30 18:32:06 compute-0 nova_compute[265391]: 2025-09-30 18:32:06.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:06 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:06.966 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:9e:a8 10.100.0.4'], port_security=['fa:16:3e:c5:9e:a8 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '87f50c89-ad94-41dc-9263-c5715083d91b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6901f664-336b-42d2-bbf7-58951befc8d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c634e1c17ed54907969576a0eb8eff50', 'neutron:revision_number': '5', 'neutron:security_group_ids': '11cf84c1-9641-409c-b5f5-6c5fe3a8afe5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c24d9de-651b-4bf8-842a-1286ab88b11d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=688f7db7-db25-41cc-8c99-d97dc36a693a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:32:06 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:06.968 166158 INFO neutron.agent.ovn.metadata.agent [-] Port 688f7db7-db25-41cc-8c99-d97dc36a693a in datapath 6901f664-336b-42d2-bbf7-58951befc8d1 unbound from our chassis
Sep 30 18:32:06 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:06.969 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6901f664-336b-42d2-bbf7-58951befc8d1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:32:06 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:06.970 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[fdbd5390-d103-4096-b5bc-2aee655c243b]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:06 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:06.971 166158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1 namespace which is not needed anymore
Sep 30 18:32:06 compute-0 nova_compute[265391]: 2025-09-30 18:32:06.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:07 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d00000016.scope: Deactivated successfully.
Sep 30 18:32:07 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d00000016.scope: Consumed 16.000s CPU time.
Sep 30 18:32:07 compute-0 systemd-machined[219917]: Machine qemu-15-instance-00000016 terminated.
Sep 30 18:32:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:32:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:32:07.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:32:07 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[331145]: [NOTICE]   (331181) : haproxy version is 3.0.5-8e879a5
Sep 30 18:32:07 compute-0 podman[333443]: 2025-09-30 18:32:07.106616391 +0000 UTC m=+0.038270113 container kill 9981e5a67f11a4fbba5fe4bd0e589b268e47786717fbe6fb055699a4e9504920 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Sep 30 18:32:07 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[331145]: [NOTICE]   (331181) : path to executable is /usr/sbin/haproxy
Sep 30 18:32:07 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[331145]: [WARNING]  (331181) : Exiting Master process...
Sep 30 18:32:07 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[331145]: [ALERT]    (331181) : Current worker (331189) exited with code 143 (Terminated)
Sep 30 18:32:07 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[331145]: [WARNING]  (331181) : All workers exited. Exiting... (0)
Sep 30 18:32:07 compute-0 systemd[1]: libpod-9981e5a67f11a4fbba5fe4bd0e589b268e47786717fbe6fb055699a4e9504920.scope: Deactivated successfully.
Sep 30 18:32:07 compute-0 nova_compute[265391]: 2025-09-30 18:32:07.132 2 INFO nova.virt.libvirt.driver [-] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Instance destroyed successfully.
Sep 30 18:32:07 compute-0 nova_compute[265391]: 2025-09-30 18:32:07.133 2 DEBUG nova.objects.instance [None req-fbd2a862-a54e-4139-b4e6-95dc39836044 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lazy-loading 'resources' on Instance uuid 87f50c89-ad94-41dc-9263-c5715083d91b obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:32:07 compute-0 nova_compute[265391]: 2025-09-30 18:32:07.152 2 DEBUG nova.compute.manager [req-93ff77f3-e290-4a11-8f69-916893aa94bb req-1d2717a6-eeff-4d96-a432-4f32e9ae039b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Received event network-vif-unplugged-688f7db7-db25-41cc-8c99-d97dc36a693a external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:32:07 compute-0 nova_compute[265391]: 2025-09-30 18:32:07.153 2 DEBUG oslo_concurrency.lockutils [req-93ff77f3-e290-4a11-8f69-916893aa94bb req-1d2717a6-eeff-4d96-a432-4f32e9ae039b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "87f50c89-ad94-41dc-9263-c5715083d91b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:32:07 compute-0 nova_compute[265391]: 2025-09-30 18:32:07.153 2 DEBUG oslo_concurrency.lockutils [req-93ff77f3-e290-4a11-8f69-916893aa94bb req-1d2717a6-eeff-4d96-a432-4f32e9ae039b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "87f50c89-ad94-41dc-9263-c5715083d91b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:32:07 compute-0 nova_compute[265391]: 2025-09-30 18:32:07.153 2 DEBUG oslo_concurrency.lockutils [req-93ff77f3-e290-4a11-8f69-916893aa94bb req-1d2717a6-eeff-4d96-a432-4f32e9ae039b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "87f50c89-ad94-41dc-9263-c5715083d91b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:32:07 compute-0 nova_compute[265391]: 2025-09-30 18:32:07.154 2 DEBUG nova.compute.manager [req-93ff77f3-e290-4a11-8f69-916893aa94bb req-1d2717a6-eeff-4d96-a432-4f32e9ae039b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] No waiting events found dispatching network-vif-unplugged-688f7db7-db25-41cc-8c99-d97dc36a693a pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:32:07 compute-0 nova_compute[265391]: 2025-09-30 18:32:07.154 2 DEBUG nova.compute.manager [req-93ff77f3-e290-4a11-8f69-916893aa94bb req-1d2717a6-eeff-4d96-a432-4f32e9ae039b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Received event network-vif-unplugged-688f7db7-db25-41cc-8c99-d97dc36a693a for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:32:07 compute-0 podman[333462]: 2025-09-30 18:32:07.159356839 +0000 UTC m=+0.031475677 container died 9981e5a67f11a4fbba5fe4bd0e589b268e47786717fbe6fb055699a4e9504920 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:32:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9981e5a67f11a4fbba5fe4bd0e589b268e47786717fbe6fb055699a4e9504920-userdata-shm.mount: Deactivated successfully.
Sep 30 18:32:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-51f6eb0e1a79ffc0642adab20aa024446318f120575687c49d1f560926be481e-merged.mount: Deactivated successfully.
Sep 30 18:32:07 compute-0 podman[333462]: 2025-09-30 18:32:07.201249305 +0000 UTC m=+0.073368123 container cleanup 9981e5a67f11a4fbba5fe4bd0e589b268e47786717fbe6fb055699a4e9504920 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 18:32:07 compute-0 systemd[1]: libpod-conmon-9981e5a67f11a4fbba5fe4bd0e589b268e47786717fbe6fb055699a4e9504920.scope: Deactivated successfully.
Sep 30 18:32:07 compute-0 podman[333472]: 2025-09-30 18:32:07.22303507 +0000 UTC m=+0.082887270 container remove 9981e5a67f11a4fbba5fe4bd0e589b268e47786717fbe6fb055699a4e9504920 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 18:32:07 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:07.228 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[0a56ea27-9ebe-485c-b3e8-b89c0ec02d18]: (4, ("Tue Sep 30 06:32:07 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1 (9981e5a67f11a4fbba5fe4bd0e589b268e47786717fbe6fb055699a4e9504920)\n9981e5a67f11a4fbba5fe4bd0e589b268e47786717fbe6fb055699a4e9504920\nTue Sep 30 06:32:07 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1 (9981e5a67f11a4fbba5fe4bd0e589b268e47786717fbe6fb055699a4e9504920)\n9981e5a67f11a4fbba5fe4bd0e589b268e47786717fbe6fb055699a4e9504920\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:07 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:07.230 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[b10c47ac-251f-488a-8489-a75cdb46c0cd]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:07 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:07.231 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:32:07 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:07.231 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[a058cc6e-e1f1-433d-b6ee-ba872a3b980d]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:07 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:07.232 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6901f664-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:32:07 compute-0 nova_compute[265391]: 2025-09-30 18:32:07.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:07 compute-0 kernel: tap6901f664-30: left promiscuous mode
Sep 30 18:32:07 compute-0 nova_compute[265391]: 2025-09-30 18:32:07.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:07 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:07.257 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[a342f3ab-4048-4698-a349-3360353b2ca7]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:07 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:07.284 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[88ab56ab-0ffa-46ec-88c6-04cefefc328e]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:07 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:07.286 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[b32536d1-11fc-45e2-942c-3a9a3b444537]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:32:07.287Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:32:07 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:07.304 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[a1cada47-c066-4ff7-9908-ae4c069adfa0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562478, 'reachable_time': 36294, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333503, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:07 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:07.306 166383 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1 deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Sep 30 18:32:07 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:07.306 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[5632cc8d-e474-45cd-954b-80719516ecc5]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:07 compute-0 systemd[1]: run-netns-ovnmeta\x2d6901f664\x2d336b\x2d42d2\x2dbbf7\x2d58951befc8d1.mount: Deactivated successfully.
Sep 30 18:32:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:32:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:32:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:32:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:32:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:32:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:32:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:32:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:32:07 compute-0 ceph-mon[73755]: pgmap v1604: 353 pgs: 353 active+clean; 121 MiB data, 352 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:32:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:32:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1605: 353 pgs: 353 active+clean; 121 MiB data, 352 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:32:07 compute-0 nova_compute[265391]: 2025-09-30 18:32:07.640 2 DEBUG nova.virt.libvirt.vif [None req-fbd2a862-a54e-4139-b4e6-95dc39836044 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:30:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-836614812',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-836614812',id=22,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:30:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c634e1c17ed54907969576a0eb8eff50',ramdisk_id='',reservation_id='r-12s0ytrx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteStrategies-1883747907',owner_user_name='tempest-TestExecuteStrategies-1883747907-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:30:28Z,user_data=None,user_id='623ef4a55c9e4fc28bb65e49246b5008',uuid=87f50c89-ad94-41dc-9263-c5715083d91b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "688f7db7-db25-41cc-8c99-d97dc36a693a", "address": "fa:16:3e:c5:9e:a8", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap688f7db7-db", "ovs_interfaceid": "688f7db7-db25-41cc-8c99-d97dc36a693a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Sep 30 18:32:07 compute-0 nova_compute[265391]: 2025-09-30 18:32:07.641 2 DEBUG nova.network.os_vif_util [None req-fbd2a862-a54e-4139-b4e6-95dc39836044 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converting VIF {"id": "688f7db7-db25-41cc-8c99-d97dc36a693a", "address": "fa:16:3e:c5:9e:a8", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap688f7db7-db", "ovs_interfaceid": "688f7db7-db25-41cc-8c99-d97dc36a693a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:32:07 compute-0 nova_compute[265391]: 2025-09-30 18:32:07.641 2 DEBUG nova.network.os_vif_util [None req-fbd2a862-a54e-4139-b4e6-95dc39836044 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:9e:a8,bridge_name='br-int',has_traffic_filtering=True,id=688f7db7-db25-41cc-8c99-d97dc36a693a,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap688f7db7-db') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:32:07 compute-0 nova_compute[265391]: 2025-09-30 18:32:07.642 2 DEBUG os_vif [None req-fbd2a862-a54e-4139-b4e6-95dc39836044 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:9e:a8,bridge_name='br-int',has_traffic_filtering=True,id=688f7db7-db25-41cc-8c99-d97dc36a693a,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap688f7db7-db') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Sep 30 18:32:07 compute-0 nova_compute[265391]: 2025-09-30 18:32:07.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:07 compute-0 nova_compute[265391]: 2025-09-30 18:32:07.644 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap688f7db7-db, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:32:07 compute-0 nova_compute[265391]: 2025-09-30 18:32:07.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:07 compute-0 nova_compute[265391]: 2025-09-30 18:32:07.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:07 compute-0 nova_compute[265391]: 2025-09-30 18:32:07.648 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=0d303356-f790-4e67-9602-e485bddedde6) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:32:07 compute-0 nova_compute[265391]: 2025-09-30 18:32:07.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:07 compute-0 nova_compute[265391]: 2025-09-30 18:32:07.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:07 compute-0 nova_compute[265391]: 2025-09-30 18:32:07.652 2 INFO os_vif [None req-fbd2a862-a54e-4139-b4e6-95dc39836044 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:9e:a8,bridge_name='br-int',has_traffic_filtering=True,id=688f7db7-db25-41cc-8c99-d97dc36a693a,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap688f7db7-db')
Sep 30 18:32:07 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:32:07 compute-0 sshd-session[333301]: error: kex_exchange_identification: read: Connection timed out
Sep 30 18:32:07 compute-0 sshd-session[333301]: banner exchange: Connection from 115.190.39.222 port 36054: Connection timed out
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001139004963127564 of space, bias 1.0, pg target 0.2278009926255128 quantized to 32 (current 32)
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:32:08
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['backups', 'vms', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', '.nfs', '.mgr', '.rgw.root', 'images', 'volumes', 'cephfs.cephfs.data']
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:32:08 compute-0 nova_compute[265391]: 2025-09-30 18:32:08.029 2 INFO nova.virt.libvirt.driver [None req-fbd2a862-a54e-4139-b4e6-95dc39836044 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Deleting instance files /var/lib/nova/instances/87f50c89-ad94-41dc-9263-c5715083d91b_del
Sep 30 18:32:08 compute-0 nova_compute[265391]: 2025-09-30 18:32:08.030 2 INFO nova.virt.libvirt.driver [None req-fbd2a862-a54e-4139-b4e6-95dc39836044 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Deletion of /var/lib/nova/instances/87f50c89-ad94-41dc-9263-c5715083d91b_del complete
Sep 30 18:32:08 compute-0 nova_compute[265391]: 2025-09-30 18:32:08.541 2 INFO nova.compute.manager [None req-fbd2a862-a54e-4139-b4e6-95dc39836044 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Took 1.65 seconds to destroy the instance on the hypervisor.
Sep 30 18:32:08 compute-0 nova_compute[265391]: 2025-09-30 18:32:08.541 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-fbd2a862-a54e-4139-b4e6-95dc39836044 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Sep 30 18:32:08 compute-0 nova_compute[265391]: 2025-09-30 18:32:08.541 2 DEBUG nova.compute.manager [-] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Sep 30 18:32:08 compute-0 nova_compute[265391]: 2025-09-30 18:32:08.542 2 DEBUG nova.network.neutron [-] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Sep 30 18:32:08 compute-0 nova_compute[265391]: 2025-09-30 18:32:08.542 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:32:08 compute-0 nova_compute[265391]: 2025-09-30 18:32:08.669 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:32:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:32:08] "GET /metrics HTTP/1.1" 200 46723 "" "Prometheus/2.51.0"
Sep 30 18:32:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:32:08] "GET /metrics HTTP/1.1" 200 46723 "" "Prometheus/2.51.0"
Sep 30 18:32:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:32:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:32:08.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:32:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:32:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:32:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:32:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:32:09 compute-0 nova_compute[265391]: 2025-09-30 18:32:09.015 2 DEBUG nova.compute.manager [req-56cdbe9c-d735-47c3-a60e-d4531db8f1ba req-4fc6b043-1325-4078-896a-4d6e8fe795e3 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Received event network-vif-deleted-688f7db7-db25-41cc-8c99-d97dc36a693a external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:32:09 compute-0 nova_compute[265391]: 2025-09-30 18:32:09.015 2 INFO nova.compute.manager [req-56cdbe9c-d735-47c3-a60e-d4531db8f1ba req-4fc6b043-1325-4078-896a-4d6e8fe795e3 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Neutron deleted interface 688f7db7-db25-41cc-8c99-d97dc36a693a; detaching it from the instance and deleting it from the info cache
Sep 30 18:32:09 compute-0 nova_compute[265391]: 2025-09-30 18:32:09.016 2 DEBUG nova.network.neutron [req-56cdbe9c-d735-47c3-a60e-d4531db8f1ba req-4fc6b043-1325-4078-896a-4d6e8fe795e3 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:32:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:32:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:32:09.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:32:09 compute-0 nova_compute[265391]: 2025-09-30 18:32:09.251 2 DEBUG nova.compute.manager [req-b97e8ee0-fde0-4797-bbc8-e06ffda703a7 req-55299bb0-7e4c-4b47-882c-1f2305ec8529 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Received event network-vif-unplugged-688f7db7-db25-41cc-8c99-d97dc36a693a external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:32:09 compute-0 nova_compute[265391]: 2025-09-30 18:32:09.252 2 DEBUG oslo_concurrency.lockutils [req-b97e8ee0-fde0-4797-bbc8-e06ffda703a7 req-55299bb0-7e4c-4b47-882c-1f2305ec8529 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "87f50c89-ad94-41dc-9263-c5715083d91b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:32:09 compute-0 nova_compute[265391]: 2025-09-30 18:32:09.252 2 DEBUG oslo_concurrency.lockutils [req-b97e8ee0-fde0-4797-bbc8-e06ffda703a7 req-55299bb0-7e4c-4b47-882c-1f2305ec8529 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "87f50c89-ad94-41dc-9263-c5715083d91b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:32:09 compute-0 nova_compute[265391]: 2025-09-30 18:32:09.253 2 DEBUG oslo_concurrency.lockutils [req-b97e8ee0-fde0-4797-bbc8-e06ffda703a7 req-55299bb0-7e4c-4b47-882c-1f2305ec8529 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "87f50c89-ad94-41dc-9263-c5715083d91b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:32:09 compute-0 nova_compute[265391]: 2025-09-30 18:32:09.253 2 DEBUG nova.compute.manager [req-b97e8ee0-fde0-4797-bbc8-e06ffda703a7 req-55299bb0-7e4c-4b47-882c-1f2305ec8529 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] No waiting events found dispatching network-vif-unplugged-688f7db7-db25-41cc-8c99-d97dc36a693a pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:32:09 compute-0 nova_compute[265391]: 2025-09-30 18:32:09.253 2 DEBUG nova.compute.manager [req-b97e8ee0-fde0-4797-bbc8-e06ffda703a7 req-55299bb0-7e4c-4b47-882c-1f2305ec8529 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Received event network-vif-unplugged-688f7db7-db25-41cc-8c99-d97dc36a693a for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:32:09 compute-0 nova_compute[265391]: 2025-09-30 18:32:09.439 2 DEBUG nova.network.neutron [-] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:32:09 compute-0 ceph-mon[73755]: pgmap v1605: 353 pgs: 353 active+clean; 121 MiB data, 352 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:32:09 compute-0 nova_compute[265391]: 2025-09-30 18:32:09.527 2 DEBUG nova.compute.manager [req-56cdbe9c-d735-47c3-a60e-d4531db8f1ba req-4fc6b043-1325-4078-896a-4d6e8fe795e3 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Detach interface failed, port_id=688f7db7-db25-41cc-8c99-d97dc36a693a, reason: Instance 87f50c89-ad94-41dc-9263-c5715083d91b could not be found. _process_instance_vif_deleted_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11646
Sep 30 18:32:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1606: 353 pgs: 353 active+clean; 41 MiB data, 304 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Sep 30 18:32:09 compute-0 nova_compute[265391]: 2025-09-30 18:32:09.947 2 INFO nova.compute.manager [-] [instance: 87f50c89-ad94-41dc-9263-c5715083d91b] Took 1.41 seconds to deallocate network for instance.
Sep 30 18:32:10 compute-0 nova_compute[265391]: 2025-09-30 18:32:10.471 2 DEBUG oslo_concurrency.lockutils [None req-fbd2a862-a54e-4139-b4e6-95dc39836044 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:32:10 compute-0 nova_compute[265391]: 2025-09-30 18:32:10.472 2 DEBUG oslo_concurrency.lockutils [None req-fbd2a862-a54e-4139-b4e6-95dc39836044 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:32:10 compute-0 nova_compute[265391]: 2025-09-30 18:32:10.522 2 DEBUG oslo_concurrency.processutils [None req-fbd2a862-a54e-4139-b4e6-95dc39836044 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:32:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:32:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:32:10.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:32:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:32:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:32:11 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1081377242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:32:11 compute-0 nova_compute[265391]: 2025-09-30 18:32:11.058 2 DEBUG oslo_concurrency.processutils [None req-fbd2a862-a54e-4139-b4e6-95dc39836044 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:32:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:32:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:32:11.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:32:11 compute-0 nova_compute[265391]: 2025-09-30 18:32:11.066 2 DEBUG nova.compute.provider_tree [None req-fbd2a862-a54e-4139-b4e6-95dc39836044 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:32:11 compute-0 nova_compute[265391]: 2025-09-30 18:32:11.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:11 compute-0 ceph-mon[73755]: pgmap v1606: 353 pgs: 353 active+clean; 41 MiB data, 304 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Sep 30 18:32:11 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1081377242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:32:11 compute-0 nova_compute[265391]: 2025-09-30 18:32:11.578 2 DEBUG nova.scheduler.client.report [None req-fbd2a862-a54e-4139-b4e6-95dc39836044 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:32:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1607: 353 pgs: 353 active+clean; 41 MiB data, 304 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Sep 30 18:32:12 compute-0 nova_compute[265391]: 2025-09-30 18:32:12.089 2 DEBUG oslo_concurrency.lockutils [None req-fbd2a862-a54e-4139-b4e6-95dc39836044 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.617s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:32:12 compute-0 nova_compute[265391]: 2025-09-30 18:32:12.120 2 INFO nova.scheduler.client.report [None req-fbd2a862-a54e-4139-b4e6-95dc39836044 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Deleted allocations for instance 87f50c89-ad94-41dc-9263-c5715083d91b
Sep 30 18:32:12 compute-0 nova_compute[265391]: 2025-09-30 18:32:12.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:32:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:32:12.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:32:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:32:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:32:13.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:32:13 compute-0 nova_compute[265391]: 2025-09-30 18:32:13.151 2 DEBUG oslo_concurrency.lockutils [None req-fbd2a862-a54e-4139-b4e6-95dc39836044 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "87f50c89-ad94-41dc-9263-c5715083d91b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.790s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:32:13 compute-0 ceph-mon[73755]: pgmap v1607: 353 pgs: 353 active+clean; 41 MiB data, 304 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Sep 30 18:32:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1608: 353 pgs: 353 active+clean; 41 MiB data, 304 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Sep 30 18:32:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:32:13.821Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:32:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:32:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:32:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:32:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:32:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.002000052s ======
Sep 30 18:32:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:32:14.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Sep 30 18:32:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:32:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:32:15.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:32:15 compute-0 ceph-mon[73755]: pgmap v1608: 353 pgs: 353 active+clean; 41 MiB data, 304 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Sep 30 18:32:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1609: 353 pgs: 353 active+clean; 41 MiB data, 304 MiB used, 40 GiB / 40 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Sep 30 18:32:15 compute-0 unix_chkpwd[333557]: password check failed for user (root)
Sep 30 18:32:15 compute-0 sshd-session[333554]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107  user=root
Sep 30 18:32:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:32:16 compute-0 nova_compute[265391]: 2025-09-30 18:32:16.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:16 compute-0 sudo[333561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:32:16 compute-0 sudo[333561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:32:16 compute-0 sudo[333561]: pam_unix(sudo:session): session closed for user root
Sep 30 18:32:16 compute-0 ceph-mon[73755]: pgmap v1609: 353 pgs: 353 active+clean; 41 MiB data, 304 MiB used, 40 GiB / 40 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Sep 30 18:32:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:32:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:32:16.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:32:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:32:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:32:17.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:32:17 compute-0 sshd-session[333558]: Invalid user sample from 45.252.249.158 port 52164
Sep 30 18:32:17 compute-0 sshd-session[333558]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:32:17 compute-0 sshd-session[333558]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158
Sep 30 18:32:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:32:17.288Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:32:17 compute-0 sshd-session[333554]: Failed password for root from 14.225.220.107 port 37826 ssh2
Sep 30 18:32:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1610: 353 pgs: 353 active+clean; 41 MiB data, 304 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:32:17 compute-0 nova_compute[265391]: 2025-09-30 18:32:17.631 2 DEBUG oslo_concurrency.lockutils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "78e1566d-9c5e-49b1-a044-0c46cf002c66" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:32:17 compute-0 nova_compute[265391]: 2025-09-30 18:32:17.632 2 DEBUG oslo_concurrency.lockutils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "78e1566d-9c5e-49b1-a044-0c46cf002c66" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:32:17 compute-0 nova_compute[265391]: 2025-09-30 18:32:17.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:18 compute-0 nova_compute[265391]: 2025-09-30 18:32:18.140 2 DEBUG nova.compute.manager [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Sep 30 18:32:18 compute-0 podman[333588]: 2025-09-30 18:32:18.534560377 +0000 UTC m=+0.067392858 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930)
Sep 30 18:32:18 compute-0 podman[333590]: 2025-09-30 18:32:18.571244388 +0000 UTC m=+0.105185498 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Sep 30 18:32:18 compute-0 podman[333589]: 2025-09-30 18:32:18.615247899 +0000 UTC m=+0.149058396 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 18:32:18 compute-0 ceph-mon[73755]: pgmap v1610: 353 pgs: 353 active+clean; 41 MiB data, 304 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:32:18 compute-0 sshd-session[333554]: Received disconnect from 14.225.220.107 port 37826:11: Bye Bye [preauth]
Sep 30 18:32:18 compute-0 sshd-session[333554]: Disconnected from authenticating user root 14.225.220.107 port 37826 [preauth]
Sep 30 18:32:18 compute-0 nova_compute[265391]: 2025-09-30 18:32:18.690 2 DEBUG oslo_concurrency.lockutils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:32:18 compute-0 nova_compute[265391]: 2025-09-30 18:32:18.690 2 DEBUG oslo_concurrency.lockutils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:32:18 compute-0 nova_compute[265391]: 2025-09-30 18:32:18.698 2 DEBUG nova.virt.hardware [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Sep 30 18:32:18 compute-0 nova_compute[265391]: 2025-09-30 18:32:18.698 2 INFO nova.compute.claims [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Claim successful on node compute-0.ctlplane.example.com
Sep 30 18:32:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:32:18] "GET /metrics HTTP/1.1" 200 46723 "" "Prometheus/2.51.0"
Sep 30 18:32:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:32:18] "GET /metrics HTTP/1.1" 200 46723 "" "Prometheus/2.51.0"
Sep 30 18:32:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:32:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:32:18.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:32:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:32:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:32:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:32:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:32:19 compute-0 sshd-session[333558]: Failed password for invalid user sample from 45.252.249.158 port 52164 ssh2
Sep 30 18:32:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:32:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:32:19.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:32:19 compute-0 sshd-session[333558]: Received disconnect from 45.252.249.158 port 52164:11: Bye Bye [preauth]
Sep 30 18:32:19 compute-0 sshd-session[333558]: Disconnected from invalid user sample 45.252.249.158 port 52164 [preauth]
Sep 30 18:32:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1611: 353 pgs: 353 active+clean; 41 MiB data, 304 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:32:19 compute-0 nova_compute[265391]: 2025-09-30 18:32:19.744 2 DEBUG oslo_concurrency.processutils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:32:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:32:20 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2254099041' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:32:20 compute-0 nova_compute[265391]: 2025-09-30 18:32:20.240 2 DEBUG oslo_concurrency.processutils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:32:20 compute-0 nova_compute[265391]: 2025-09-30 18:32:20.247 2 DEBUG nova.compute.provider_tree [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:32:20 compute-0 ceph-mon[73755]: pgmap v1611: 353 pgs: 353 active+clean; 41 MiB data, 304 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:32:20 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2254099041' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:32:20 compute-0 nova_compute[265391]: 2025-09-30 18:32:20.754 2 DEBUG nova.scheduler.client.report [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:32:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:32:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:32:20.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:32:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:32:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:32:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:32:21.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:32:21 compute-0 nova_compute[265391]: 2025-09-30 18:32:21.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:21 compute-0 nova_compute[265391]: 2025-09-30 18:32:21.267 2 DEBUG oslo_concurrency.lockutils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.576s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:32:21 compute-0 nova_compute[265391]: 2025-09-30 18:32:21.267 2 DEBUG nova.compute.manager [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Sep 30 18:32:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1612: 353 pgs: 353 active+clean; 41 MiB data, 304 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:32:21 compute-0 nova_compute[265391]: 2025-09-30 18:32:21.778 2 DEBUG nova.compute.manager [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Sep 30 18:32:21 compute-0 nova_compute[265391]: 2025-09-30 18:32:21.778 2 DEBUG nova.network.neutron [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Sep 30 18:32:21 compute-0 nova_compute[265391]: 2025-09-30 18:32:21.779 2 WARNING neutronclient.v2_0.client [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:32:21 compute-0 nova_compute[265391]: 2025-09-30 18:32:21.779 2 WARNING neutronclient.v2_0.client [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:32:22 compute-0 nova_compute[265391]: 2025-09-30 18:32:22.288 2 INFO nova.virt.libvirt.driver [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Sep 30 18:32:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:32:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:32:22 compute-0 nova_compute[265391]: 2025-09-30 18:32:22.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:22 compute-0 ceph-mon[73755]: pgmap v1612: 353 pgs: 353 active+clean; 41 MiB data, 304 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:32:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:32:22 compute-0 nova_compute[265391]: 2025-09-30 18:32:22.806 2 DEBUG nova.compute.manager [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Sep 30 18:32:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:32:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:32:22.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:32:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:32:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:32:23.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:32:23 compute-0 nova_compute[265391]: 2025-09-30 18:32:23.283 2 DEBUG nova.network.neutron [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Successfully created port: 48ba4743-596d-47a6-a246-afe70e6e1fc6 _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Sep 30 18:32:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1613: 353 pgs: 353 active+clean; 41 MiB data, 304 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:32:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:32:23.822Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:32:23 compute-0 nova_compute[265391]: 2025-09-30 18:32:23.824 2 DEBUG nova.compute.manager [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Sep 30 18:32:23 compute-0 nova_compute[265391]: 2025-09-30 18:32:23.825 2 DEBUG nova.virt.libvirt.driver [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Sep 30 18:32:23 compute-0 nova_compute[265391]: 2025-09-30 18:32:23.826 2 INFO nova.virt.libvirt.driver [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Creating image(s)
Sep 30 18:32:23 compute-0 nova_compute[265391]: 2025-09-30 18:32:23.851 2 DEBUG nova.storage.rbd_utils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 78e1566d-9c5e-49b1-a044-0c46cf002c66_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:32:23 compute-0 nova_compute[265391]: 2025-09-30 18:32:23.877 2 DEBUG nova.storage.rbd_utils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 78e1566d-9c5e-49b1-a044-0c46cf002c66_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:32:23 compute-0 nova_compute[265391]: 2025-09-30 18:32:23.905 2 DEBUG nova.storage.rbd_utils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 78e1566d-9c5e-49b1-a044-0c46cf002c66_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:32:23 compute-0 nova_compute[265391]: 2025-09-30 18:32:23.909 2 DEBUG oslo_concurrency.processutils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:32:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:32:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:32:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:32:24 compute-0 nova_compute[265391]: 2025-09-30 18:32:24.000 2 DEBUG oslo_concurrency.processutils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:32:24 compute-0 nova_compute[265391]: 2025-09-30 18:32:24.001 2 DEBUG oslo_concurrency.lockutils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "cb2d580238c9b109feae7f1462613dc547671457" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:32:24 compute-0 nova_compute[265391]: 2025-09-30 18:32:24.002 2 DEBUG oslo_concurrency.lockutils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:32:24 compute-0 nova_compute[265391]: 2025-09-30 18:32:24.002 2 DEBUG oslo_concurrency.lockutils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:32:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:32:24 compute-0 nova_compute[265391]: 2025-09-30 18:32:24.026 2 DEBUG nova.storage.rbd_utils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 78e1566d-9c5e-49b1-a044-0c46cf002c66_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:32:24 compute-0 nova_compute[265391]: 2025-09-30 18:32:24.030 2 DEBUG oslo_concurrency.processutils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 78e1566d-9c5e-49b1-a044-0c46cf002c66_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:32:24 compute-0 nova_compute[265391]: 2025-09-30 18:32:24.295 2 DEBUG oslo_concurrency.processutils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 78e1566d-9c5e-49b1-a044-0c46cf002c66_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.265s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:32:24 compute-0 nova_compute[265391]: 2025-09-30 18:32:24.377 2 DEBUG nova.storage.rbd_utils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] resizing rbd image 78e1566d-9c5e-49b1-a044-0c46cf002c66_disk to 1073741824 resize /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:288
Sep 30 18:32:24 compute-0 nova_compute[265391]: 2025-09-30 18:32:24.488 2 DEBUG nova.virt.libvirt.driver [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Sep 30 18:32:24 compute-0 nova_compute[265391]: 2025-09-30 18:32:24.488 2 DEBUG nova.virt.libvirt.driver [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Ensure instance console log exists: /var/lib/nova/instances/78e1566d-9c5e-49b1-a044-0c46cf002c66/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Sep 30 18:32:24 compute-0 nova_compute[265391]: 2025-09-30 18:32:24.489 2 DEBUG oslo_concurrency.lockutils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:32:24 compute-0 nova_compute[265391]: 2025-09-30 18:32:24.489 2 DEBUG oslo_concurrency.lockutils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:32:24 compute-0 nova_compute[265391]: 2025-09-30 18:32:24.490 2 DEBUG oslo_concurrency.lockutils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:32:24 compute-0 nova_compute[265391]: 2025-09-30 18:32:24.540 2 DEBUG nova.network.neutron [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Successfully updated port: 48ba4743-596d-47a6-a246-afe70e6e1fc6 _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Sep 30 18:32:24 compute-0 nova_compute[265391]: 2025-09-30 18:32:24.613 2 DEBUG nova.compute.manager [req-e54d7f93-0c35-40f7-beca-e7d8acd9bf76 req-018e5d93-8636-4f1f-af1e-aa27d5ea083b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Received event network-changed-48ba4743-596d-47a6-a246-afe70e6e1fc6 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:32:24 compute-0 nova_compute[265391]: 2025-09-30 18:32:24.614 2 DEBUG nova.compute.manager [req-e54d7f93-0c35-40f7-beca-e7d8acd9bf76 req-018e5d93-8636-4f1f-af1e-aa27d5ea083b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Refreshing instance network info cache due to event network-changed-48ba4743-596d-47a6-a246-afe70e6e1fc6. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:32:24 compute-0 nova_compute[265391]: 2025-09-30 18:32:24.614 2 DEBUG oslo_concurrency.lockutils [req-e54d7f93-0c35-40f7-beca-e7d8acd9bf76 req-018e5d93-8636-4f1f-af1e-aa27d5ea083b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-78e1566d-9c5e-49b1-a044-0c46cf002c66" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:32:24 compute-0 nova_compute[265391]: 2025-09-30 18:32:24.615 2 DEBUG oslo_concurrency.lockutils [req-e54d7f93-0c35-40f7-beca-e7d8acd9bf76 req-018e5d93-8636-4f1f-af1e-aa27d5ea083b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-78e1566d-9c5e-49b1-a044-0c46cf002c66" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:32:24 compute-0 nova_compute[265391]: 2025-09-30 18:32:24.615 2 DEBUG nova.network.neutron [req-e54d7f93-0c35-40f7-beca-e7d8acd9bf76 req-018e5d93-8636-4f1f-af1e-aa27d5ea083b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Refreshing network info cache for port 48ba4743-596d-47a6-a246-afe70e6e1fc6 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:32:24 compute-0 ceph-mon[73755]: pgmap v1613: 353 pgs: 353 active+clean; 41 MiB data, 304 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:32:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:32:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:32:24.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:32:25 compute-0 nova_compute[265391]: 2025-09-30 18:32:25.047 2 DEBUG oslo_concurrency.lockutils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "refresh_cache-78e1566d-9c5e-49b1-a044-0c46cf002c66" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:32:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:32:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:32:25.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:32:25 compute-0 nova_compute[265391]: 2025-09-30 18:32:25.121 2 WARNING neutronclient.v2_0.client [req-e54d7f93-0c35-40f7-beca-e7d8acd9bf76 req-018e5d93-8636-4f1f-af1e-aa27d5ea083b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:32:25 compute-0 nova_compute[265391]: 2025-09-30 18:32:25.494 2 DEBUG nova.network.neutron [req-e54d7f93-0c35-40f7-beca-e7d8acd9bf76 req-018e5d93-8636-4f1f-af1e-aa27d5ea083b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:32:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1614: 353 pgs: 353 active+clean; 41 MiB data, 304 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:32:25 compute-0 nova_compute[265391]: 2025-09-30 18:32:25.665 2 DEBUG nova.network.neutron [req-e54d7f93-0c35-40f7-beca-e7d8acd9bf76 req-018e5d93-8636-4f1f-af1e-aa27d5ea083b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:32:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:32:26 compute-0 nova_compute[265391]: 2025-09-30 18:32:26.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:26 compute-0 nova_compute[265391]: 2025-09-30 18:32:26.171 2 DEBUG oslo_concurrency.lockutils [req-e54d7f93-0c35-40f7-beca-e7d8acd9bf76 req-018e5d93-8636-4f1f-af1e-aa27d5ea083b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-78e1566d-9c5e-49b1-a044-0c46cf002c66" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:32:26 compute-0 nova_compute[265391]: 2025-09-30 18:32:26.172 2 DEBUG oslo_concurrency.lockutils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquired lock "refresh_cache-78e1566d-9c5e-49b1-a044-0c46cf002c66" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:32:26 compute-0 nova_compute[265391]: 2025-09-30 18:32:26.173 2 DEBUG nova.network.neutron [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:32:26 compute-0 ceph-mon[73755]: pgmap v1614: 353 pgs: 353 active+clean; 41 MiB data, 304 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:32:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:32:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:32:26.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:32:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:32:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:32:27.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:32:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:32:27.289Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:32:27 compute-0 nova_compute[265391]: 2025-09-30 18:32:27.385 2 DEBUG nova.network.neutron [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:32:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1615: 353 pgs: 353 active+clean; 41 MiB data, 304 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:32:27 compute-0 nova_compute[265391]: 2025-09-30 18:32:27.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:28 compute-0 nova_compute[265391]: 2025-09-30 18:32:28.475 2 WARNING neutronclient.v2_0.client [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:32:28 compute-0 nova_compute[265391]: 2025-09-30 18:32:28.638 2 DEBUG nova.network.neutron [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Updating instance_info_cache with network_info: [{"id": "48ba4743-596d-47a6-a246-afe70e6e1fc6", "address": "fa:16:3e:47:e6:35", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48ba4743-59", "ovs_interfaceid": "48ba4743-596d-47a6-a246-afe70e6e1fc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:32:28 compute-0 ceph-mon[73755]: pgmap v1615: 353 pgs: 353 active+clean; 41 MiB data, 304 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:32:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:32:28] "GET /metrics HTTP/1.1" 200 46710 "" "Prometheus/2.51.0"
Sep 30 18:32:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:32:28] "GET /metrics HTTP/1.1" 200 46710 "" "Prometheus/2.51.0"
Sep 30 18:32:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:32:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:32:28.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:32:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:32:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:32:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:32:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:32:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:32:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:32:29.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:32:29 compute-0 nova_compute[265391]: 2025-09-30 18:32:29.145 2 DEBUG oslo_concurrency.lockutils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Releasing lock "refresh_cache-78e1566d-9c5e-49b1-a044-0c46cf002c66" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:32:29 compute-0 nova_compute[265391]: 2025-09-30 18:32:29.146 2 DEBUG nova.compute.manager [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Instance network_info: |[{"id": "48ba4743-596d-47a6-a246-afe70e6e1fc6", "address": "fa:16:3e:47:e6:35", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48ba4743-59", "ovs_interfaceid": "48ba4743-596d-47a6-a246-afe70e6e1fc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Sep 30 18:32:29 compute-0 nova_compute[265391]: 2025-09-30 18:32:29.150 2 DEBUG nova.virt.libvirt.driver [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Start _get_guest_xml network_info=[{"id": "48ba4743-596d-47a6-a246-afe70e6e1fc6", "address": "fa:16:3e:47:e6:35", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48ba4743-59", "ovs_interfaceid": "48ba4743-596d-47a6-a246-afe70e6e1fc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'image_id': '5b99cbca-b655-4be5-8343-cf504005c42e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Sep 30 18:32:29 compute-0 nova_compute[265391]: 2025-09-30 18:32:29.155 2 WARNING nova.virt.libvirt.driver [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:32:29 compute-0 nova_compute[265391]: 2025-09-30 18:32:29.156 2 DEBUG nova.virt.driver [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='5b99cbca-b655-4be5-8343-cf504005c42e', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteStrategies-server-760306639', uuid='78e1566d-9c5e-49b1-a044-0c46cf002c66'), owner=OwnerMeta(userid='623ef4a55c9e4fc28bb65e49246b5008', username='tempest-TestExecuteStrategies-1883747907-project-admin', projectid='c634e1c17ed54907969576a0eb8eff50', projectname='tempest-TestExecuteStrategies-1883747907'), image=ImageMeta(id='5b99cbca-b655-4be5-8343-cf504005c42e', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "48ba4743-596d-47a6-a246-afe70e6e1fc6", "address": "fa:16:3e:47:e6:35", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48ba4743-59", "ovs_interfaceid": "48ba4743-596d-47a6-a246-afe70e6e1fc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20250919142712.b99a882.el10', creation_time=1759257149.1568532) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Sep 30 18:32:29 compute-0 nova_compute[265391]: 2025-09-30 18:32:29.161 2 DEBUG nova.virt.libvirt.host [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Sep 30 18:32:29 compute-0 nova_compute[265391]: 2025-09-30 18:32:29.162 2 DEBUG nova.virt.libvirt.host [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Sep 30 18:32:29 compute-0 nova_compute[265391]: 2025-09-30 18:32:29.165 2 DEBUG nova.virt.libvirt.host [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Sep 30 18:32:29 compute-0 nova_compute[265391]: 2025-09-30 18:32:29.165 2 DEBUG nova.virt.libvirt.host [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Sep 30 18:32:29 compute-0 nova_compute[265391]: 2025-09-30 18:32:29.166 2 DEBUG nova.virt.libvirt.driver [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Sep 30 18:32:29 compute-0 nova_compute[265391]: 2025-09-30 18:32:29.167 2 DEBUG nova.virt.hardware [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-09-30T18:04:50Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Sep 30 18:32:29 compute-0 nova_compute[265391]: 2025-09-30 18:32:29.168 2 DEBUG nova.virt.hardware [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Sep 30 18:32:29 compute-0 nova_compute[265391]: 2025-09-30 18:32:29.168 2 DEBUG nova.virt.hardware [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Sep 30 18:32:29 compute-0 nova_compute[265391]: 2025-09-30 18:32:29.168 2 DEBUG nova.virt.hardware [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Sep 30 18:32:29 compute-0 nova_compute[265391]: 2025-09-30 18:32:29.169 2 DEBUG nova.virt.hardware [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Sep 30 18:32:29 compute-0 nova_compute[265391]: 2025-09-30 18:32:29.169 2 DEBUG nova.virt.hardware [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Sep 30 18:32:29 compute-0 nova_compute[265391]: 2025-09-30 18:32:29.169 2 DEBUG nova.virt.hardware [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Sep 30 18:32:29 compute-0 nova_compute[265391]: 2025-09-30 18:32:29.170 2 DEBUG nova.virt.hardware [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Sep 30 18:32:29 compute-0 nova_compute[265391]: 2025-09-30 18:32:29.170 2 DEBUG nova.virt.hardware [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Sep 30 18:32:29 compute-0 nova_compute[265391]: 2025-09-30 18:32:29.171 2 DEBUG nova.virt.hardware [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Sep 30 18:32:29 compute-0 nova_compute[265391]: 2025-09-30 18:32:29.171 2 DEBUG nova.virt.hardware [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Sep 30 18:32:29 compute-0 nova_compute[265391]: 2025-09-30 18:32:29.176 2 DEBUG oslo_concurrency.processutils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:32:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1616: 353 pgs: 353 active+clean; 88 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:32:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:32:29 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/921557445' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:32:29 compute-0 nova_compute[265391]: 2025-09-30 18:32:29.653 2 DEBUG oslo_concurrency.processutils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:32:29 compute-0 nova_compute[265391]: 2025-09-30 18:32:29.680 2 DEBUG nova.storage.rbd_utils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 78e1566d-9c5e-49b1-a044-0c46cf002c66_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:32:29 compute-0 nova_compute[265391]: 2025-09-30 18:32:29.683 2 DEBUG oslo_concurrency.processutils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:32:29 compute-0 podman[276673]: time="2025-09-30T18:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:32:29 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/921557445' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:32:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:32:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10293 "" "Go-http-client/1.1"
Sep 30 18:32:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:32:30 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4270793738' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:32:30 compute-0 nova_compute[265391]: 2025-09-30 18:32:30.110 2 DEBUG oslo_concurrency.processutils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:32:30 compute-0 nova_compute[265391]: 2025-09-30 18:32:30.112 2 DEBUG nova.virt.libvirt.vif [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:32:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-760306639',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-760306639',id=24,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c634e1c17ed54907969576a0eb8eff50',ramdisk_id='',reservation_id='r-7ntnt7t9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteStrategies-1883747907',owner_user_name='tempest-TestExecuteStrategies-1883747907-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:32:22Z,user_data=None,user_id='623ef4a55c9e4fc28bb65e49246b5008',uuid=78e1566d-9c5e-49b1-a044-0c46cf002c66,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "48ba4743-596d-47a6-a246-afe70e6e1fc6", "address": "fa:16:3e:47:e6:35", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48ba4743-59", "ovs_interfaceid": "48ba4743-596d-47a6-a246-afe70e6e1fc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:32:30 compute-0 nova_compute[265391]: 2025-09-30 18:32:30.112 2 DEBUG nova.network.os_vif_util [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converting VIF {"id": "48ba4743-596d-47a6-a246-afe70e6e1fc6", "address": "fa:16:3e:47:e6:35", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48ba4743-59", "ovs_interfaceid": "48ba4743-596d-47a6-a246-afe70e6e1fc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:32:30 compute-0 nova_compute[265391]: 2025-09-30 18:32:30.113 2 DEBUG nova.network.os_vif_util [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:e6:35,bridge_name='br-int',has_traffic_filtering=True,id=48ba4743-596d-47a6-a246-afe70e6e1fc6,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48ba4743-59') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:32:30 compute-0 nova_compute[265391]: 2025-09-30 18:32:30.114 2 DEBUG nova.objects.instance [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lazy-loading 'pci_devices' on Instance uuid 78e1566d-9c5e-49b1-a044-0c46cf002c66 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:32:30 compute-0 podman[333919]: 2025-09-30 18:32:30.529260599 +0000 UTC m=+0.058680802 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, container_name=multipathd, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:32:30 compute-0 podman[333920]: 2025-09-30 18:32:30.529500525 +0000 UTC m=+0.058432526 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=iscsid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Sep 30 18:32:30 compute-0 podman[333921]: 2025-09-30 18:32:30.546455085 +0000 UTC m=+0.070311804 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, distribution-scope=public, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6)
Sep 30 18:32:30 compute-0 nova_compute[265391]: 2025-09-30 18:32:30.627 2 DEBUG nova.virt.libvirt.driver [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] End _get_guest_xml xml=<domain type="kvm">
Sep 30 18:32:30 compute-0 nova_compute[265391]:   <uuid>78e1566d-9c5e-49b1-a044-0c46cf002c66</uuid>
Sep 30 18:32:30 compute-0 nova_compute[265391]:   <name>instance-00000018</name>
Sep 30 18:32:30 compute-0 nova_compute[265391]:   <memory>131072</memory>
Sep 30 18:32:30 compute-0 nova_compute[265391]:   <vcpu>1</vcpu>
Sep 30 18:32:30 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:32:30 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteStrategies-server-760306639</nova:name>
Sep 30 18:32:30 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:32:29</nova:creationTime>
Sep 30 18:32:30 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:32:30 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:32:30 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:32:30 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:32:30 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:32:30 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:32:30 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:32:30 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:32:30 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:32:30 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:32:30 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:32:30 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:32:30 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:32:30 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:32:30 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:32:30 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:32:30 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:32:30 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:32:30 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:32:30 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:32:30 compute-0 nova_compute[265391]:         <nova:user uuid="623ef4a55c9e4fc28bb65e49246b5008">tempest-TestExecuteStrategies-1883747907-project-admin</nova:user>
Sep 30 18:32:30 compute-0 nova_compute[265391]:         <nova:project uuid="c634e1c17ed54907969576a0eb8eff50">tempest-TestExecuteStrategies-1883747907</nova:project>
Sep 30 18:32:30 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:32:30 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:32:30 compute-0 nova_compute[265391]:         <nova:port uuid="48ba4743-596d-47a6-a246-afe70e6e1fc6">
Sep 30 18:32:30 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:32:30 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:32:30 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:32:30 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <system>
Sep 30 18:32:30 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:32:30 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:32:30 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:32:30 compute-0 nova_compute[265391]:       <entry name="serial">78e1566d-9c5e-49b1-a044-0c46cf002c66</entry>
Sep 30 18:32:30 compute-0 nova_compute[265391]:       <entry name="uuid">78e1566d-9c5e-49b1-a044-0c46cf002c66</entry>
Sep 30 18:32:30 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     </system>
Sep 30 18:32:30 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:32:30 compute-0 nova_compute[265391]:   <os>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="q35">hvm</type>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:   </os>
Sep 30 18:32:30 compute-0 nova_compute[265391]:   <features>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <vmcoreinfo/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:   </features>
Sep 30 18:32:30 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:32:30 compute-0 nova_compute[265391]:   <cpu mode="host-model" match="exact">
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <topology sockets="1" cores="1" threads="1"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:32:30 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:32:30 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/78e1566d-9c5e-49b1-a044-0c46cf002c66_disk">
Sep 30 18:32:30 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:       </source>
Sep 30 18:32:30 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:32:30 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:32:30 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:32:30 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/78e1566d-9c5e-49b1-a044-0c46cf002c66_disk.config">
Sep 30 18:32:30 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:       </source>
Sep 30 18:32:30 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:32:30 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:32:30 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:32:30 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:47:e6:35"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:       <target dev="tap48ba4743-59"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:32:30 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/78e1566d-9c5e-49b1-a044-0c46cf002c66/console.log" append="off"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <video>
Sep 30 18:32:30 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     </video>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:32:30 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <controller type="usb" index="0"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:32:30 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:32:30 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:32:30 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:32:30 compute-0 nova_compute[265391]: </domain>
Sep 30 18:32:30 compute-0 nova_compute[265391]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Sep 30 18:32:30 compute-0 nova_compute[265391]: 2025-09-30 18:32:30.628 2 DEBUG nova.compute.manager [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Preparing to wait for external event network-vif-plugged-48ba4743-596d-47a6-a246-afe70e6e1fc6 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:32:30 compute-0 nova_compute[265391]: 2025-09-30 18:32:30.628 2 DEBUG oslo_concurrency.lockutils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:32:30 compute-0 nova_compute[265391]: 2025-09-30 18:32:30.628 2 DEBUG oslo_concurrency.lockutils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:32:30 compute-0 nova_compute[265391]: 2025-09-30 18:32:30.629 2 DEBUG oslo_concurrency.lockutils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:32:30 compute-0 nova_compute[265391]: 2025-09-30 18:32:30.629 2 DEBUG nova.virt.libvirt.vif [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:32:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-760306639',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-760306639',id=24,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c634e1c17ed54907969576a0eb8eff50',ramdisk_id='',reservation_id='r-7ntnt7t9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteStrategies-1883747907',owner_user_name='tempest-TestExecuteStrategies-1883747907-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:32:22Z,user_data=None,user_id='623ef4a55c9e4fc28bb65e49246b5008',uuid=78e1566d-9c5e-49b1-a044-0c46cf002c66,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "48ba4743-596d-47a6-a246-afe70e6e1fc6", "address": "fa:16:3e:47:e6:35", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48ba4743-59", "ovs_interfaceid": "48ba4743-596d-47a6-a246-afe70e6e1fc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Sep 30 18:32:30 compute-0 nova_compute[265391]: 2025-09-30 18:32:30.630 2 DEBUG nova.network.os_vif_util [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converting VIF {"id": "48ba4743-596d-47a6-a246-afe70e6e1fc6", "address": "fa:16:3e:47:e6:35", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48ba4743-59", "ovs_interfaceid": "48ba4743-596d-47a6-a246-afe70e6e1fc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:32:30 compute-0 nova_compute[265391]: 2025-09-30 18:32:30.630 2 DEBUG nova.network.os_vif_util [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:e6:35,bridge_name='br-int',has_traffic_filtering=True,id=48ba4743-596d-47a6-a246-afe70e6e1fc6,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48ba4743-59') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:32:30 compute-0 nova_compute[265391]: 2025-09-30 18:32:30.631 2 DEBUG os_vif [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:e6:35,bridge_name='br-int',has_traffic_filtering=True,id=48ba4743-596d-47a6-a246-afe70e6e1fc6,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48ba4743-59') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Sep 30 18:32:30 compute-0 nova_compute[265391]: 2025-09-30 18:32:30.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:30 compute-0 nova_compute[265391]: 2025-09-30 18:32:30.632 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:32:30 compute-0 nova_compute[265391]: 2025-09-30 18:32:30.632 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:32:30 compute-0 nova_compute[265391]: 2025-09-30 18:32:30.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:30 compute-0 nova_compute[265391]: 2025-09-30 18:32:30.633 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '5094654b-b05d-5aeb-b948-dd78faf9fed3', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:32:30 compute-0 nova_compute[265391]: 2025-09-30 18:32:30.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:30 compute-0 nova_compute[265391]: 2025-09-30 18:32:30.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:30 compute-0 nova_compute[265391]: 2025-09-30 18:32:30.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:30 compute-0 nova_compute[265391]: 2025-09-30 18:32:30.687 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap48ba4743-59, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:32:30 compute-0 nova_compute[265391]: 2025-09-30 18:32:30.687 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap48ba4743-59, col_values=(('qos', UUID('5ee073c3-9534-4a44-9860-f7fb4ba8acd9')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:32:30 compute-0 nova_compute[265391]: 2025-09-30 18:32:30.688 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap48ba4743-59, col_values=(('external_ids', {'iface-id': '48ba4743-596d-47a6-a246-afe70e6e1fc6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:e6:35', 'vm-uuid': '78e1566d-9c5e-49b1-a044-0c46cf002c66'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:32:30 compute-0 nova_compute[265391]: 2025-09-30 18:32:30.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:30 compute-0 nova_compute[265391]: 2025-09-30 18:32:30.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:32:30 compute-0 NetworkManager[45059]: <info>  [1759257150.6902] manager: (tap48ba4743-59): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Sep 30 18:32:30 compute-0 nova_compute[265391]: 2025-09-30 18:32:30.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:30 compute-0 nova_compute[265391]: 2025-09-30 18:32:30.695 2 INFO os_vif [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:e6:35,bridge_name='br-int',has_traffic_filtering=True,id=48ba4743-596d-47a6-a246-afe70e6e1fc6,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48ba4743-59')
Sep 30 18:32:30 compute-0 ceph-mon[73755]: pgmap v1616: 353 pgs: 353 active+clean; 88 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:32:30 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/4270793738' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:32:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:32:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:32:30.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:32:30 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Sep 30 18:32:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:32:31 compute-0 nova_compute[265391]: 2025-09-30 18:32:31.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:32:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:32:31.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:32:31 compute-0 openstack_network_exporter[279566]: ERROR   18:32:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:32:31 compute-0 openstack_network_exporter[279566]: ERROR   18:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:32:31 compute-0 openstack_network_exporter[279566]: ERROR   18:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:32:31 compute-0 openstack_network_exporter[279566]: ERROR   18:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:32:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:32:31 compute-0 openstack_network_exporter[279566]: ERROR   18:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:32:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:32:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1617: 353 pgs: 353 active+clean; 88 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:32:32 compute-0 nova_compute[265391]: 2025-09-30 18:32:32.240 2 DEBUG nova.virt.libvirt.driver [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:32:32 compute-0 nova_compute[265391]: 2025-09-30 18:32:32.241 2 DEBUG nova.virt.libvirt.driver [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:32:32 compute-0 nova_compute[265391]: 2025-09-30 18:32:32.241 2 DEBUG nova.virt.libvirt.driver [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] No VIF found with MAC fa:16:3e:47:e6:35, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Sep 30 18:32:32 compute-0 nova_compute[265391]: 2025-09-30 18:32:32.241 2 INFO nova.virt.libvirt.driver [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Using config drive
Sep 30 18:32:32 compute-0 nova_compute[265391]: 2025-09-30 18:32:32.267 2 DEBUG nova.storage.rbd_utils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 78e1566d-9c5e-49b1-a044-0c46cf002c66_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:32:32 compute-0 nova_compute[265391]: 2025-09-30 18:32:32.788 2 WARNING neutronclient.v2_0.client [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:32:32 compute-0 ceph-mon[73755]: pgmap v1617: 353 pgs: 353 active+clean; 88 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:32:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:32:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:32:32.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:32:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:32:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:32:33.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:32:33 compute-0 nova_compute[265391]: 2025-09-30 18:32:33.347 2 INFO nova.virt.libvirt.driver [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Creating config drive at /var/lib/nova/instances/78e1566d-9c5e-49b1-a044-0c46cf002c66/disk.config
Sep 30 18:32:33 compute-0 nova_compute[265391]: 2025-09-30 18:32:33.355 2 DEBUG oslo_concurrency.processutils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/78e1566d-9c5e-49b1-a044-0c46cf002c66/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmpp6m0hsut execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:32:33 compute-0 nova_compute[265391]: 2025-09-30 18:32:33.490 2 DEBUG oslo_concurrency.processutils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/78e1566d-9c5e-49b1-a044-0c46cf002c66/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmpp6m0hsut" returned: 0 in 0.136s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:32:33 compute-0 nova_compute[265391]: 2025-09-30 18:32:33.527 2 DEBUG nova.storage.rbd_utils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 78e1566d-9c5e-49b1-a044-0c46cf002c66_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:32:33 compute-0 nova_compute[265391]: 2025-09-30 18:32:33.532 2 DEBUG oslo_concurrency.processutils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/78e1566d-9c5e-49b1-a044-0c46cf002c66/disk.config 78e1566d-9c5e-49b1-a044-0c46cf002c66_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:32:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1618: 353 pgs: 353 active+clean; 88 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:32:33 compute-0 nova_compute[265391]: 2025-09-30 18:32:33.694 2 DEBUG oslo_concurrency.processutils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/78e1566d-9c5e-49b1-a044-0c46cf002c66/disk.config 78e1566d-9c5e-49b1-a044-0c46cf002c66_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:32:33 compute-0 nova_compute[265391]: 2025-09-30 18:32:33.695 2 INFO nova.virt.libvirt.driver [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Deleting local config drive /var/lib/nova/instances/78e1566d-9c5e-49b1-a044-0c46cf002c66/disk.config because it was imported into RBD.
Sep 30 18:32:33 compute-0 kernel: tap48ba4743-59: entered promiscuous mode
Sep 30 18:32:33 compute-0 NetworkManager[45059]: <info>  [1759257153.7531] manager: (tap48ba4743-59): new Tun device (/org/freedesktop/NetworkManager/Devices/82)
Sep 30 18:32:33 compute-0 ovn_controller[156242]: 2025-09-30T18:32:33Z|00206|binding|INFO|Claiming lport 48ba4743-596d-47a6-a246-afe70e6e1fc6 for this chassis.
Sep 30 18:32:33 compute-0 ovn_controller[156242]: 2025-09-30T18:32:33Z|00207|binding|INFO|48ba4743-596d-47a6-a246-afe70e6e1fc6: Claiming fa:16:3e:47:e6:35 10.100.0.12
Sep 30 18:32:33 compute-0 nova_compute[265391]: 2025-09-30 18:32:33.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:33 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:33.761 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:e6:35 10.100.0.12'], port_security=['fa:16:3e:47:e6:35 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '78e1566d-9c5e-49b1-a044-0c46cf002c66', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6901f664-336b-42d2-bbf7-58951befc8d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c634e1c17ed54907969576a0eb8eff50', 'neutron:revision_number': '4', 'neutron:security_group_ids': '11cf84c1-9641-409c-b5f5-6c5fe3a8afe5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c24d9de-651b-4bf8-842a-1286ab88b11d, chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=48ba4743-596d-47a6-a246-afe70e6e1fc6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:32:33 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:33.761 166158 INFO neutron.agent.ovn.metadata.agent [-] Port 48ba4743-596d-47a6-a246-afe70e6e1fc6 in datapath 6901f664-336b-42d2-bbf7-58951befc8d1 bound to our chassis
Sep 30 18:32:33 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:33.763 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6901f664-336b-42d2-bbf7-58951befc8d1
Sep 30 18:32:33 compute-0 ovn_controller[156242]: 2025-09-30T18:32:33Z|00208|binding|INFO|Setting lport 48ba4743-596d-47a6-a246-afe70e6e1fc6 up in Southbound
Sep 30 18:32:33 compute-0 ovn_controller[156242]: 2025-09-30T18:32:33Z|00209|binding|INFO|Setting lport 48ba4743-596d-47a6-a246-afe70e6e1fc6 ovn-installed in OVS
Sep 30 18:32:33 compute-0 nova_compute[265391]: 2025-09-30 18:32:33.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:33 compute-0 nova_compute[265391]: 2025-09-30 18:32:33.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:33 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:33.778 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[963ee8e4-7bed-4fe0-8a8b-a7b76a53678d]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:33 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:33.778 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6901f664-31 in ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1 namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Sep 30 18:32:33 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:33.780 292173 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6901f664-30 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Sep 30 18:32:33 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:33.780 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[b469b7cc-741e-47b8-a34b-238d6bfc15be]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:33 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:33.781 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[103118fb-10f0-45c4-af4b-54bb4dbe8e50]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:33 compute-0 systemd-udevd[334053]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:32:33 compute-0 systemd-machined[219917]: New machine qemu-17-instance-00000018.
Sep 30 18:32:33 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:33.796 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[839a0bc4-c6a6-4d54-8c77-3a82ca5a8c63]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:33 compute-0 NetworkManager[45059]: <info>  [1759257153.7997] device (tap48ba4743-59): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 18:32:33 compute-0 NetworkManager[45059]: <info>  [1759257153.8006] device (tap48ba4743-59): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Sep 30 18:32:33 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:33.802 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[2cd71992-2e1f-4f2e-8eeb-e1305d6b5988]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:33 compute-0 systemd[1]: Started Virtual Machine qemu-17-instance-00000018.
Sep 30 18:32:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:32:33.822Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:32:33 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:33.836 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[24fc5b7e-63e0-47d1-a42f-fe81c1fbfbd4]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:33 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:33.840 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[5be3d32b-5ff8-4bdb-99bd-e765804980e4]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:33 compute-0 systemd-udevd[334058]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:32:33 compute-0 NetworkManager[45059]: <info>  [1759257153.8423] manager: (tap6901f664-30): new Veth device (/org/freedesktop/NetworkManager/Devices/83)
Sep 30 18:32:33 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:33.870 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[49b0ad8a-9259-439e-81c6-5df7b3122a12]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:33 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:33.873 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[65bcfb1d-496b-402c-8d1d-c3ce2df8737b]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:33 compute-0 NetworkManager[45059]: <info>  [1759257153.8988] device (tap6901f664-30): carrier: link connected
Sep 30 18:32:33 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:33.903 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[a3afd9a7-6050-414b-aa5f-9986ad8a4f21]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:33 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:33.920 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[02543d0c-7b91-4a58-a503-bcecc8dc2ed1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6901f664-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:41:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 575259, 'reachable_time': 34563, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334086, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:33 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:33.937 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[2de266aa-40d2-4de9-9528-edbbeb8303ef]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe35:412a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 575259, 'tstamp': 575259}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334087, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:33 compute-0 nova_compute[265391]: 2025-09-30 18:32:33.941 2 DEBUG nova.compute.manager [req-c76a55ba-e3d2-4ed9-af13-c00164c26504 req-96f934d1-ae6a-4794-8094-88b793df7478 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Received event network-vif-plugged-48ba4743-596d-47a6-a246-afe70e6e1fc6 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:32:33 compute-0 nova_compute[265391]: 2025-09-30 18:32:33.942 2 DEBUG oslo_concurrency.lockutils [req-c76a55ba-e3d2-4ed9-af13-c00164c26504 req-96f934d1-ae6a-4794-8094-88b793df7478 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:32:33 compute-0 nova_compute[265391]: 2025-09-30 18:32:33.942 2 DEBUG oslo_concurrency.lockutils [req-c76a55ba-e3d2-4ed9-af13-c00164c26504 req-96f934d1-ae6a-4794-8094-88b793df7478 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:32:33 compute-0 nova_compute[265391]: 2025-09-30 18:32:33.942 2 DEBUG oslo_concurrency.lockutils [req-c76a55ba-e3d2-4ed9-af13-c00164c26504 req-96f934d1-ae6a-4794-8094-88b793df7478 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:32:33 compute-0 nova_compute[265391]: 2025-09-30 18:32:33.942 2 DEBUG nova.compute.manager [req-c76a55ba-e3d2-4ed9-af13-c00164c26504 req-96f934d1-ae6a-4794-8094-88b793df7478 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Processing event network-vif-plugged-48ba4743-596d-47a6-a246-afe70e6e1fc6 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:32:33 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:33.965 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[cbaa5135-c071-43de-9132-b7cd7a3bdb53]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6901f664-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:41:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 575259, 'reachable_time': 34563, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 334088, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:32:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:32:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:32:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:34.007 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[feeeea30-e32a-495a-96de-7d9743c2de16]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:34.092 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[16abd2ed-0c34-4400-9960-4ba9a16ed8b1]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:34.094 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6901f664-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:34.094 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:34.095 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6901f664-30, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:32:34 compute-0 nova_compute[265391]: 2025-09-30 18:32:34.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:34 compute-0 NetworkManager[45059]: <info>  [1759257154.0984] manager: (tap6901f664-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Sep 30 18:32:34 compute-0 kernel: tap6901f664-30: entered promiscuous mode
Sep 30 18:32:34 compute-0 nova_compute[265391]: 2025-09-30 18:32:34.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:34.104 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6901f664-30, col_values=(('external_ids', {'iface-id': '5b6cbf18-1826-41d0-920f-e9db4f1a1832'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
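The three ovsdbapp transactions above re-plug the metadata tap: DelPortCommand drops tap6901f664-30 from br-ex if it is present, AddPortCommand attaches it to br-int, and DbSetCommand stamps external_ids:iface-id so ovn-controller can bind the port. A rough command-line equivalent follows, as a sketch for illustration only; the agent drives the OVSDB IDL directly rather than ovs-vsctl.

    import subprocess

    PORT = 'tap6901f664-30'
    IFACE_ID = '5b6cbf18-1826-41d0-920f-e9db4f1a1832'

    for cmd in (
        # DelPortCommand(..., if_exists=True) on br-ex
        ['ovs-vsctl', '--if-exists', 'del-port', 'br-ex', PORT],
        # AddPortCommand(..., may_exist=True) on br-int
        ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', PORT],
        # DbSetCommand(table=Interface, col_values=(('external_ids', ...),))
        ['ovs-vsctl', 'set', 'Interface', PORT,
         f'external_ids:iface-id={IFACE_ID}'],
    ):
        subprocess.run(cmd, check=True)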
Sep 30 18:32:34 compute-0 nova_compute[265391]: 2025-09-30 18:32:34.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:34 compute-0 ovn_controller[156242]: 2025-09-30T18:32:34Z|00210|binding|INFO|Releasing lport 5b6cbf18-1826-41d0-920f-e9db4f1a1832 from this chassis (sb_readonly=0)
Sep 30 18:32:34 compute-0 nova_compute[265391]: 2025-09-30 18:32:34.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:34 compute-0 nova_compute[265391]: 2025-09-30 18:32:34.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:34.138 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[b62c6d2f-1986-481d-bdd9-0670ef054768]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:34.139 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:34.140 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:34.140 166158 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for 6901f664-336b-42d2-bbf7-58951befc8d1 disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:34.140 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:34.140 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[5aa0fb9a-89ca-4fd9-806e-e2bf3ee6294e]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:34.141 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:34.141 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[ef2a2c4f-a4bb-43df-a421-1d21bb3d2650]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:34.142 166158 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]: global
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]:     log         /dev/log local0 debug
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]:     log-tag     haproxy-metadata-proxy-6901f664-336b-42d2-bbf7-58951befc8d1
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]:     user        root
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]:     group       root
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]:     maxconn     1024
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]:     pidfile     /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]:     daemon
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]: defaults
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]:     log global
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]:     mode http
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]:     option httplog
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]:     option dontlognull
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]:     option http-server-close
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]:     option forwardfor
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]:     retries                 3
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]:     timeout http-request    30s
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]:     timeout connect         30s
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]:     timeout client          32s
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]:     timeout server          32s
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]:     timeout http-keep-alive 30s
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]: listen listener
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]:     bind 169.254.169.254:80
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]:     
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]:     server metadata /var/lib/neutron/metadata_proxy
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]:     http-request add-header X-OVN-Network-ID 6901f664-336b-42d2-bbf7-58951befc8d1
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Sep 30 18:32:34 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:34.142 166158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'env', 'PROCESS_TAG=haproxy-6901f664-336b-42d2-bbf7-58951befc8d1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6901f664-336b-42d2-bbf7-58951befc8d1.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
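The block above is the per-network haproxy configuration the agent renders, followed by the rootwrap command that launches it inside the ovnmeta namespace: the proxy binds 169.254.169.254:80, forwards to the agent's UNIX socket at /var/lib/neutron/metadata_proxy, and adds the X-OVN-Network-ID header. A sketch of validating and starting such a config by hand is below; the namespace and config path are taken from the log, and the haproxy -c syntax check is an extra step added here for illustration.

    import subprocess

    NETNS = 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1'
    CONF = '/var/lib/neutron/ovn-metadata-proxy/6901f664-336b-42d2-bbf7-58951befc8d1.conf'

    # Parse-check the rendered config first; haproxy -c exits without starting.
    subprocess.run(['haproxy', '-c', '-f', CONF], check=True)

    # Launch it inside the network namespace, as the rootwrap command above does.
    subprocess.run(['ip', 'netns', 'exec', NETNS, 'haproxy', '-f', CONF], check=True)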
Sep 30 18:32:34 compute-0 podman[334162]: 2025-09-30 18:32:34.644396577 +0000 UTC m=+0.074077801 container create 7ec97ada63ff762fbe17ffe9f28aa39e23933405c2900185797397d4c0ee7c36 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20250930)
Sep 30 18:32:34 compute-0 systemd[1]: Started libpod-conmon-7ec97ada63ff762fbe17ffe9f28aa39e23933405c2900185797397d4c0ee7c36.scope.
Sep 30 18:32:34 compute-0 podman[334162]: 2025-09-30 18:32:34.610312174 +0000 UTC m=+0.039993448 image pull 8925e336c8a9c8e5b40d98e4715ad60992d6f21ad0a398170a686c34f922c024 38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Sep 30 18:32:34 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:32:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b1ea676acc4104f1f9e406f0986aed3bb04854be8d2494bc378fc60048d53d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Sep 30 18:32:34 compute-0 podman[334162]: 2025-09-30 18:32:34.736192657 +0000 UTC m=+0.165873921 container init 7ec97ada63ff762fbe17ffe9f28aa39e23933405c2900185797397d4c0ee7c36 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Sep 30 18:32:34 compute-0 podman[334162]: 2025-09-30 18:32:34.741145286 +0000 UTC m=+0.170826540 container start 7ec97ada63ff762fbe17ffe9f28aa39e23933405c2900185797397d4c0ee7c36 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4)
Sep 30 18:32:34 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[334178]: [NOTICE]   (334182) : New worker (334184) forked
Sep 30 18:32:34 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[334178]: [NOTICE]   (334182) : Loading success.
Sep 30 18:32:34 compute-0 ceph-mon[73755]: pgmap v1618: 353 pgs: 353 active+clean; 88 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:32:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:32:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:32:34.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:32:34 compute-0 nova_compute[265391]: 2025-09-30 18:32:34.904 2 DEBUG nova.compute.manager [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:32:34 compute-0 nova_compute[265391]: 2025-09-30 18:32:34.908 2 DEBUG nova.virt.libvirt.driver [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Sep 30 18:32:34 compute-0 nova_compute[265391]: 2025-09-30 18:32:34.911 2 INFO nova.virt.libvirt.driver [-] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Instance spawned successfully.
Sep 30 18:32:34 compute-0 nova_compute[265391]: 2025-09-30 18:32:34.911 2 DEBUG nova.virt.libvirt.driver [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Sep 30 18:32:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:32:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:32:35.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:32:35 compute-0 nova_compute[265391]: 2025-09-30 18:32:35.424 2 DEBUG nova.virt.libvirt.driver [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:32:35 compute-0 nova_compute[265391]: 2025-09-30 18:32:35.424 2 DEBUG nova.virt.libvirt.driver [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:32:35 compute-0 nova_compute[265391]: 2025-09-30 18:32:35.425 2 DEBUG nova.virt.libvirt.driver [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:32:35 compute-0 nova_compute[265391]: 2025-09-30 18:32:35.425 2 DEBUG nova.virt.libvirt.driver [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:32:35 compute-0 nova_compute[265391]: 2025-09-30 18:32:35.425 2 DEBUG nova.virt.libvirt.driver [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:32:35 compute-0 nova_compute[265391]: 2025-09-30 18:32:35.426 2 DEBUG nova.virt.libvirt.driver [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:32:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1619: 353 pgs: 353 active+clean; 88 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Sep 30 18:32:35 compute-0 nova_compute[265391]: 2025-09-30 18:32:35.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:35 compute-0 nova_compute[265391]: 2025-09-30 18:32:35.934 2 INFO nova.compute.manager [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Took 12.11 seconds to spawn the instance on the hypervisor.
Sep 30 18:32:35 compute-0 nova_compute[265391]: 2025-09-30 18:32:35.935 2 DEBUG nova.compute.manager [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Sep 30 18:32:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:32:36 compute-0 nova_compute[265391]: 2025-09-30 18:32:36.009 2 DEBUG nova.compute.manager [req-127b065b-291a-46fb-aba5-7cfc6b38873d req-32a957ad-f10f-42c7-8059-384a52306a7e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Received event network-vif-plugged-48ba4743-596d-47a6-a246-afe70e6e1fc6 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:32:36 compute-0 nova_compute[265391]: 2025-09-30 18:32:36.009 2 DEBUG oslo_concurrency.lockutils [req-127b065b-291a-46fb-aba5-7cfc6b38873d req-32a957ad-f10f-42c7-8059-384a52306a7e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:32:36 compute-0 nova_compute[265391]: 2025-09-30 18:32:36.009 2 DEBUG oslo_concurrency.lockutils [req-127b065b-291a-46fb-aba5-7cfc6b38873d req-32a957ad-f10f-42c7-8059-384a52306a7e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:32:36 compute-0 nova_compute[265391]: 2025-09-30 18:32:36.009 2 DEBUG oslo_concurrency.lockutils [req-127b065b-291a-46fb-aba5-7cfc6b38873d req-32a957ad-f10f-42c7-8059-384a52306a7e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:32:36 compute-0 nova_compute[265391]: 2025-09-30 18:32:36.010 2 DEBUG nova.compute.manager [req-127b065b-291a-46fb-aba5-7cfc6b38873d req-32a957ad-f10f-42c7-8059-384a52306a7e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] No waiting events found dispatching network-vif-plugged-48ba4743-596d-47a6-a246-afe70e6e1fc6 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:32:36 compute-0 nova_compute[265391]: 2025-09-30 18:32:36.010 2 WARNING nova.compute.manager [req-127b065b-291a-46fb-aba5-7cfc6b38873d req-32a957ad-f10f-42c7-8059-384a52306a7e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Received unexpected event network-vif-plugged-48ba4743-596d-47a6-a246-afe70e6e1fc6 for instance with vm_state building and task_state spawning.
Sep 30 18:32:36 compute-0 nova_compute[265391]: 2025-09-30 18:32:36.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:36 compute-0 nova_compute[265391]: 2025-09-30 18:32:36.466 2 INFO nova.compute.manager [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Took 17.81 seconds to build instance.
Sep 30 18:32:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:32:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3187322812' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:32:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:32:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3187322812' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:32:36 compute-0 sudo[334195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:32:36 compute-0 sudo[334195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:32:36 compute-0 sudo[334195]: pam_unix(sudo:session): session closed for user root
Sep 30 18:32:36 compute-0 ceph-mon[73755]: pgmap v1619: 353 pgs: 353 active+clean; 88 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Sep 30 18:32:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3187322812' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:32:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3187322812' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:32:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:32:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:32:36.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:32:36 compute-0 nova_compute[265391]: 2025-09-30 18:32:36.975 2 DEBUG oslo_concurrency.lockutils [None req-41e7c6ef-5138-40da-a2cc-8f3cbf231faf 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "78e1566d-9c5e-49b1-a044-0c46cf002c66" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.343s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:32:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:32:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:32:37.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:32:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:32:37.290Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:32:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:32:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:32:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:32:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:32:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:32:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:32:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:32:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:32:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1620: 353 pgs: 353 active+clean; 88 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Sep 30 18:32:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:32:38 compute-0 nova_compute[265391]: 2025-09-30 18:32:38.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:32:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:32:38] "GET /metrics HTTP/1.1" 200 46726 "" "Prometheus/2.51.0"
Sep 30 18:32:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:32:38] "GET /metrics HTTP/1.1" 200 46726 "" "Prometheus/2.51.0"
Sep 30 18:32:38 compute-0 ceph-mgr[74051]: [dashboard INFO request] [192.168.122.100:52750] [POST] [200] [0.003s] [4.0B] [ac81b037-21d7-40c6-a37e-74543546a1a2] /api/prometheus_receiver
Sep 30 18:32:38 compute-0 ceph-mon[73755]: pgmap v1620: 353 pgs: 353 active+clean; 88 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Sep 30 18:32:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:32:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:32:38.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:32:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:32:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:32:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:32:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:32:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:32:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:32:39.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:32:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1621: 353 pgs: 353 active+clean; 88 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 18:32:40 compute-0 nova_compute[265391]: 2025-09-30 18:32:40.578 2 DEBUG oslo_concurrency.lockutils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "29a2fe9a-add5-43c1-948a-9df854aa4261" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:32:40 compute-0 nova_compute[265391]: 2025-09-30 18:32:40.579 2 DEBUG oslo_concurrency.lockutils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "29a2fe9a-add5-43c1-948a-9df854aa4261" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:32:40 compute-0 nova_compute[265391]: 2025-09-30 18:32:40.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:40 compute-0 ceph-mon[73755]: pgmap v1621: 353 pgs: 353 active+clean; 88 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 18:32:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:32:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:32:40.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:32:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:32:41 compute-0 nova_compute[265391]: 2025-09-30 18:32:41.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:41 compute-0 nova_compute[265391]: 2025-09-30 18:32:41.086 2 DEBUG nova.compute.manager [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Sep 30 18:32:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:32:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:32:41.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:32:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1622: 353 pgs: 353 active+clean; 88 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:32:41 compute-0 nova_compute[265391]: 2025-09-30 18:32:41.656 2 DEBUG oslo_concurrency.lockutils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:32:41 compute-0 nova_compute[265391]: 2025-09-30 18:32:41.656 2 DEBUG oslo_concurrency.lockutils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:32:41 compute-0 nova_compute[265391]: 2025-09-30 18:32:41.667 2 DEBUG nova.virt.hardware [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Sep 30 18:32:41 compute-0 nova_compute[265391]: 2025-09-30 18:32:41.668 2 INFO nova.compute.claims [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Claim successful on node compute-0.ctlplane.example.com
Sep 30 18:32:42 compute-0 nova_compute[265391]: 2025-09-30 18:32:42.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:32:42 compute-0 nova_compute[265391]: 2025-09-30 18:32:42.746 2 DEBUG oslo_concurrency.processutils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:32:42 compute-0 ceph-mon[73755]: pgmap v1622: 353 pgs: 353 active+clean; 88 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:32:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:32:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:32:42.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:32:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:32:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:32:43.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:32:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:32:43 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/988617778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:32:43 compute-0 nova_compute[265391]: 2025-09-30 18:32:43.238 2 DEBUG oslo_concurrency.processutils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
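Both the instance-claim path and the periodic resource audit shell out to ceph df --format=json (0.49 s here) to size the RBD-backed storage. A hedged sketch of issuing the same query and reading the totals follows; the JSON key names reflect the usual ceph df layout and should be treated as assumptions rather than guaranteed fields.

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout

    df = json.loads(out)
    stats = df['stats']                        # cluster-wide totals (assumed layout)
    total_gib = stats['total_bytes'] / 2**30
    avail_gib = stats['total_avail_bytes'] / 2**30
    print(f'cluster: {avail_gib:.1f} GiB free of {total_gib:.1f} GiB')

    for pool in df.get('pools', []):           # per-pool usage, e.g. 'volumes', 'vms'
        print(pool['name'], pool['stats'].get('max_avail'))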
Sep 30 18:32:43 compute-0 nova_compute[265391]: 2025-09-30 18:32:43.244 2 DEBUG nova.compute.provider_tree [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:32:43 compute-0 nova_compute[265391]: 2025-09-30 18:32:43.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:32:43 compute-0 nova_compute[265391]: 2025-09-30 18:32:43.429 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:32:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1623: 353 pgs: 353 active+clean; 88 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:32:43 compute-0 nova_compute[265391]: 2025-09-30 18:32:43.756 2 DEBUG nova.scheduler.client.report [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
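The inventory reported to Placement above determines schedulable capacity as (total - reserved) x allocation_ratio, which is why this 8-vCPU, 7679 MB, 39 GB host can accept claims beyond its physical totals. A quick check using the numbers from the log line:

    # Placement capacity = (total - reserved) * allocation_ratio
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 39,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 34.2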
Sep 30 18:32:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:32:43.823Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:32:43 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/988617778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:32:43 compute-0 nova_compute[265391]: 2025-09-30 18:32:43.939 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:32:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:32:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:32:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:32:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:32:44 compute-0 nova_compute[265391]: 2025-09-30 18:32:44.270 2 DEBUG oslo_concurrency.lockutils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.614s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:32:44 compute-0 nova_compute[265391]: 2025-09-30 18:32:44.271 2 DEBUG nova.compute.manager [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Sep 30 18:32:44 compute-0 nova_compute[265391]: 2025-09-30 18:32:44.274 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.335s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:32:44 compute-0 nova_compute[265391]: 2025-09-30 18:32:44.275 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:32:44 compute-0 nova_compute[265391]: 2025-09-30 18:32:44.275 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:32:44 compute-0 nova_compute[265391]: 2025-09-30 18:32:44.275 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:32:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:32:44 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2521701379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:32:44 compute-0 nova_compute[265391]: 2025-09-30 18:32:44.735 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:32:44 compute-0 nova_compute[265391]: 2025-09-30 18:32:44.795 2 DEBUG nova.compute.manager [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Sep 30 18:32:44 compute-0 nova_compute[265391]: 2025-09-30 18:32:44.796 2 DEBUG nova.network.neutron [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Sep 30 18:32:44 compute-0 nova_compute[265391]: 2025-09-30 18:32:44.796 2 WARNING neutronclient.v2_0.client [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:32:44 compute-0 nova_compute[265391]: 2025-09-30 18:32:44.797 2 WARNING neutronclient.v2_0.client [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
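The warning repeated above is neutronclient advising a move to openstacksdk before the Python bindings are removed. A minimal sketch of an equivalent port create with the SDK is below; the cloud name is a placeholder and nova's own allocation path differs, only the network UUID is taken from this log.

    import openstack

    # 'mycloud' is a placeholder clouds.yaml entry, not something from this log.
    conn = openstack.connect(cloud='mycloud')

    port = conn.network.create_port(
        network_id='6901f664-336b-42d2-bbf7-58951befc8d1',  # network UUID from the log
        name='sdk-example-port',
    )
    print(port.id)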
Sep 30 18:32:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:32:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:32:44.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:32:44 compute-0 ceph-mon[73755]: pgmap v1623: 353 pgs: 353 active+clean; 88 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:32:44 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3091147046' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:32:44 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2521701379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:32:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:32:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:32:45.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:32:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:45.158 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:32:45 compute-0 nova_compute[265391]: 2025-09-30 18:32:45.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:45.159 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:32:45 compute-0 nova_compute[265391]: 2025-09-30 18:32:45.302 2 INFO nova.virt.libvirt.driver [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Sep 30 18:32:45 compute-0 nova_compute[265391]: 2025-09-30 18:32:45.306 2 DEBUG nova.network.neutron [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Successfully created port: f942c9c9-85a4-47cf-9428-7e266b83b49b _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Sep 30 18:32:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1624: 353 pgs: 353 active+clean; 88 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:32:45 compute-0 nova_compute[265391]: 2025-09-30 18:32:45.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:45 compute-0 nova_compute[265391]: 2025-09-30 18:32:45.803 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:32:45 compute-0 nova_compute[265391]: 2025-09-30 18:32:45.804 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:32:45 compute-0 nova_compute[265391]: 2025-09-30 18:32:45.813 2 DEBUG nova.compute.manager [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Sep 30 18:32:45 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1948820474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:32:45 compute-0 nova_compute[265391]: 2025-09-30 18:32:45.986 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:32:45 compute-0 nova_compute[265391]: 2025-09-30 18:32:45.987 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:32:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:32:46 compute-0 nova_compute[265391]: 2025-09-30 18:32:46.014 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.026s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:32:46 compute-0 nova_compute[265391]: 2025-09-30 18:32:46.014 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4167MB free_disk=39.971275329589844GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:32:46 compute-0 nova_compute[265391]: 2025-09-30 18:32:46.015 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:32:46 compute-0 nova_compute[265391]: 2025-09-30 18:32:46.015 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:32:46 compute-0 nova_compute[265391]: 2025-09-30 18:32:46.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:46 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:46.161 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:32:46 compute-0 nova_compute[265391]: 2025-09-30 18:32:46.833 2 DEBUG nova.compute.manager [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Sep 30 18:32:46 compute-0 nova_compute[265391]: 2025-09-30 18:32:46.835 2 DEBUG nova.virt.libvirt.driver [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Sep 30 18:32:46 compute-0 nova_compute[265391]: 2025-09-30 18:32:46.835 2 INFO nova.virt.libvirt.driver [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Creating image(s)
Sep 30 18:32:46 compute-0 nova_compute[265391]: 2025-09-30 18:32:46.863 2 DEBUG nova.storage.rbd_utils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 29a2fe9a-add5-43c1-948a-9df854aa4261_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:32:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:32:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:32:46.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:32:46 compute-0 nova_compute[265391]: 2025-09-30 18:32:46.902 2 DEBUG nova.storage.rbd_utils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 29a2fe9a-add5-43c1-948a-9df854aa4261_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:32:46 compute-0 ceph-mon[73755]: pgmap v1624: 353 pgs: 353 active+clean; 88 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:32:46 compute-0 nova_compute[265391]: 2025-09-30 18:32:46.941 2 DEBUG nova.storage.rbd_utils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 29a2fe9a-add5-43c1-948a-9df854aa4261_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:32:46 compute-0 nova_compute[265391]: 2025-09-30 18:32:46.949 2 DEBUG oslo_concurrency.processutils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:32:47 compute-0 nova_compute[265391]: 2025-09-30 18:32:47.015 2 DEBUG oslo_concurrency.processutils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:32:47 compute-0 nova_compute[265391]: 2025-09-30 18:32:47.016 2 DEBUG oslo_concurrency.lockutils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "cb2d580238c9b109feae7f1462613dc547671457" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:32:47 compute-0 nova_compute[265391]: 2025-09-30 18:32:47.017 2 DEBUG oslo_concurrency.lockutils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:32:47 compute-0 nova_compute[265391]: 2025-09-30 18:32:47.017 2 DEBUG oslo_concurrency.lockutils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:32:47 compute-0 nova_compute[265391]: 2025-09-30 18:32:47.042 2 DEBUG nova.storage.rbd_utils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 29a2fe9a-add5-43c1-948a-9df854aa4261_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:32:47 compute-0 nova_compute[265391]: 2025-09-30 18:32:47.047 2 DEBUG oslo_concurrency.processutils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 29a2fe9a-add5-43c1-948a-9df854aa4261_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:32:47 compute-0 nova_compute[265391]: 2025-09-30 18:32:47.081 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Instance 78e1566d-9c5e-49b1-a044-0c46cf002c66 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Sep 30 18:32:47 compute-0 nova_compute[265391]: 2025-09-30 18:32:47.082 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Instance 29a2fe9a-add5-43c1-948a-9df854aa4261 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Sep 30 18:32:47 compute-0 nova_compute[265391]: 2025-09-30 18:32:47.082 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:32:47 compute-0 nova_compute[265391]: 2025-09-30 18:32:47.083 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=39GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:32:46 up  1:36,  0 user,  load average: 0.57, 0.86, 0.87\n', 'num_instances': '2', 'num_vm_active': '1', 'num_task_None': '1', 'num_os_type_None': '2', 'num_proj_c634e1c17ed54907969576a0eb8eff50': '2', 'io_workload': '1', 'num_vm_building': '1', 'num_task_block_device_mapping': '1'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:32:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:32:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:32:47.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:32:47 compute-0 nova_compute[265391]: 2025-09-30 18:32:47.158 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:32:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:32:47.291Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:32:47 compute-0 nova_compute[265391]: 2025-09-30 18:32:47.320 2 DEBUG oslo_concurrency.processutils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 29a2fe9a-add5-43c1-948a-9df854aa4261_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.273s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:32:47 compute-0 ovn_controller[156242]: 2025-09-30T18:32:47Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:47:e6:35 10.100.0.12
Sep 30 18:32:47 compute-0 ovn_controller[156242]: 2025-09-30T18:32:47Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:47:e6:35 10.100.0.12
Sep 30 18:32:47 compute-0 nova_compute[265391]: 2025-09-30 18:32:47.381 2 DEBUG nova.storage.rbd_utils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] resizing rbd image 29a2fe9a-add5-43c1-948a-9df854aa4261_disk to 1073741824 resize /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:288
Sep 30 18:32:47 compute-0 nova_compute[265391]: 2025-09-30 18:32:47.468 2 DEBUG nova.virt.libvirt.driver [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Sep 30 18:32:47 compute-0 nova_compute[265391]: 2025-09-30 18:32:47.468 2 DEBUG nova.virt.libvirt.driver [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Ensure instance console log exists: /var/lib/nova/instances/29a2fe9a-add5-43c1-948a-9df854aa4261/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Sep 30 18:32:47 compute-0 nova_compute[265391]: 2025-09-30 18:32:47.469 2 DEBUG oslo_concurrency.lockutils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:32:47 compute-0 nova_compute[265391]: 2025-09-30 18:32:47.469 2 DEBUG oslo_concurrency.lockutils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:32:47 compute-0 nova_compute[265391]: 2025-09-30 18:32:47.469 2 DEBUG oslo_concurrency.lockutils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:32:47 compute-0 nova_compute[265391]: 2025-09-30 18:32:47.557 2 DEBUG nova.network.neutron [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Successfully updated port: f942c9c9-85a4-47cf-9428-7e266b83b49b _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Sep 30 18:32:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1625: 353 pgs: 353 active+clean; 88 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 72 op/s
Sep 30 18:32:47 compute-0 nova_compute[265391]: 2025-09-30 18:32:47.615 2 DEBUG nova.compute.manager [req-ffb48872-c870-43e7-b126-45bf01820d1e req-ca8eb338-1d0c-4262-a2fe-7b82a4f52d64 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Received event network-changed-f942c9c9-85a4-47cf-9428-7e266b83b49b external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:32:47 compute-0 nova_compute[265391]: 2025-09-30 18:32:47.615 2 DEBUG nova.compute.manager [req-ffb48872-c870-43e7-b126-45bf01820d1e req-ca8eb338-1d0c-4262-a2fe-7b82a4f52d64 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Refreshing instance network info cache due to event network-changed-f942c9c9-85a4-47cf-9428-7e266b83b49b. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:32:47 compute-0 nova_compute[265391]: 2025-09-30 18:32:47.616 2 DEBUG oslo_concurrency.lockutils [req-ffb48872-c870-43e7-b126-45bf01820d1e req-ca8eb338-1d0c-4262-a2fe-7b82a4f52d64 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-29a2fe9a-add5-43c1-948a-9df854aa4261" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:32:47 compute-0 nova_compute[265391]: 2025-09-30 18:32:47.616 2 DEBUG oslo_concurrency.lockutils [req-ffb48872-c870-43e7-b126-45bf01820d1e req-ca8eb338-1d0c-4262-a2fe-7b82a4f52d64 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-29a2fe9a-add5-43c1-948a-9df854aa4261" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:32:47 compute-0 nova_compute[265391]: 2025-09-30 18:32:47.616 2 DEBUG nova.network.neutron [req-ffb48872-c870-43e7-b126-45bf01820d1e req-ca8eb338-1d0c-4262-a2fe-7b82a4f52d64 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Refreshing network info cache for port f942c9c9-85a4-47cf-9428-7e266b83b49b _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:32:47 compute-0 nova_compute[265391]: 2025-09-30 18:32:47.669 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:32:47 compute-0 nova_compute[265391]: 2025-09-30 18:32:47.677 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:32:47 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/717575760' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:32:48 compute-0 nova_compute[265391]: 2025-09-30 18:32:48.064 2 DEBUG oslo_concurrency.lockutils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "refresh_cache-29a2fe9a-add5-43c1-948a-9df854aa4261" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:32:48 compute-0 nova_compute[265391]: 2025-09-30 18:32:48.122 2 WARNING neutronclient.v2_0.client [req-ffb48872-c870-43e7-b126-45bf01820d1e req-ca8eb338-1d0c-4262-a2fe-7b82a4f52d64 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:32:48 compute-0 nova_compute[265391]: 2025-09-30 18:32:48.184 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:32:48 compute-0 nova_compute[265391]: 2025-09-30 18:32:48.496 2 DEBUG nova.network.neutron [req-ffb48872-c870-43e7-b126-45bf01820d1e req-ca8eb338-1d0c-4262-a2fe-7b82a4f52d64 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:32:48 compute-0 nova_compute[265391]: 2025-09-30 18:32:48.654 2 DEBUG nova.network.neutron [req-ffb48872-c870-43e7-b126-45bf01820d1e req-ca8eb338-1d0c-4262-a2fe-7b82a4f52d64 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:32:48 compute-0 nova_compute[265391]: 2025-09-30 18:32:48.693 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:32:48 compute-0 nova_compute[265391]: 2025-09-30 18:32:48.693 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.678s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:32:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:32:48] "GET /metrics HTTP/1.1" 200 46726 "" "Prometheus/2.51.0"
Sep 30 18:32:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:32:48] "GET /metrics HTTP/1.1" 200 46726 "" "Prometheus/2.51.0"
Sep 30 18:32:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:32:48.844Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:32:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:32:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:32:48.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:32:48 compute-0 ceph-mon[73755]: pgmap v1625: 353 pgs: 353 active+clean; 88 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 72 op/s
Sep 30 18:32:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:32:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:32:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:32:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:32:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:32:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:32:49.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:32:49 compute-0 nova_compute[265391]: 2025-09-30 18:32:49.160 2 DEBUG oslo_concurrency.lockutils [req-ffb48872-c870-43e7-b126-45bf01820d1e req-ca8eb338-1d0c-4262-a2fe-7b82a4f52d64 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-29a2fe9a-add5-43c1-948a-9df854aa4261" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:32:49 compute-0 nova_compute[265391]: 2025-09-30 18:32:49.162 2 DEBUG oslo_concurrency.lockutils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquired lock "refresh_cache-29a2fe9a-add5-43c1-948a-9df854aa4261" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:32:49 compute-0 nova_compute[265391]: 2025-09-30 18:32:49.162 2 DEBUG nova.network.neutron [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:32:49 compute-0 podman[334469]: 2025-09-30 18:32:49.551226867 +0000 UTC m=+0.076794362 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Sep 30 18:32:49 compute-0 podman[334467]: 2025-09-30 18:32:49.580454415 +0000 UTC m=+0.104216963 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.build-date=20250930, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 18:32:49 compute-0 podman[334468]: 2025-09-30 18:32:49.585090745 +0000 UTC m=+0.110687551 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, tcib_managed=true, managed_by=edpm_ansible)
Sep 30 18:32:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1626: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 161 op/s
Sep 30 18:32:50 compute-0 nova_compute[265391]: 2025-09-30 18:32:50.491 2 DEBUG nova.network.neutron [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:32:50 compute-0 nova_compute[265391]: 2025-09-30 18:32:50.692 2 WARNING neutronclient.v2_0.client [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:32:50 compute-0 nova_compute[265391]: 2025-09-30 18:32:50.696 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:32:50 compute-0 nova_compute[265391]: 2025-09-30 18:32:50.696 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:32:50 compute-0 nova_compute[265391]: 2025-09-30 18:32:50.697 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:32:50 compute-0 nova_compute[265391]: 2025-09-30 18:32:50.697 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:32:50 compute-0 nova_compute[265391]: 2025-09-30 18:32:50.698 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:32:50 compute-0 nova_compute[265391]: 2025-09-30 18:32:50.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:32:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:32:50.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:32:50 compute-0 ceph-mon[73755]: pgmap v1626: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 161 op/s
Sep 30 18:32:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:32:51 compute-0 nova_compute[265391]: 2025-09-30 18:32:51.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:32:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:32:51.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:32:51 compute-0 nova_compute[265391]: 2025-09-30 18:32:51.574 2 DEBUG nova.network.neutron [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Updating instance_info_cache with network_info: [{"id": "f942c9c9-85a4-47cf-9428-7e266b83b49b", "address": "fa:16:3e:2b:2d:3d", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf942c9c9-85", "ovs_interfaceid": "f942c9c9-85a4-47cf-9428-7e266b83b49b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:32:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1627: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 323 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Sep 30 18:32:52 compute-0 nova_compute[265391]: 2025-09-30 18:32:52.082 2 DEBUG oslo_concurrency.lockutils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Releasing lock "refresh_cache-29a2fe9a-add5-43c1-948a-9df854aa4261" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:32:52 compute-0 nova_compute[265391]: 2025-09-30 18:32:52.083 2 DEBUG nova.compute.manager [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Instance network_info: |[{"id": "f942c9c9-85a4-47cf-9428-7e266b83b49b", "address": "fa:16:3e:2b:2d:3d", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf942c9c9-85", "ovs_interfaceid": "f942c9c9-85a4-47cf-9428-7e266b83b49b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Sep 30 18:32:52 compute-0 nova_compute[265391]: 2025-09-30 18:32:52.086 2 DEBUG nova.virt.libvirt.driver [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Start _get_guest_xml network_info=[{"id": "f942c9c9-85a4-47cf-9428-7e266b83b49b", "address": "fa:16:3e:2b:2d:3d", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf942c9c9-85", "ovs_interfaceid": "f942c9c9-85a4-47cf-9428-7e266b83b49b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'image_id': '5b99cbca-b655-4be5-8343-cf504005c42e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Sep 30 18:32:52 compute-0 nova_compute[265391]: 2025-09-30 18:32:52.090 2 WARNING nova.virt.libvirt.driver [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:32:52 compute-0 nova_compute[265391]: 2025-09-30 18:32:52.091 2 DEBUG nova.virt.driver [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='5b99cbca-b655-4be5-8343-cf504005c42e', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteStrategies-server-1847336220', uuid='29a2fe9a-add5-43c1-948a-9df854aa4261'), owner=OwnerMeta(userid='623ef4a55c9e4fc28bb65e49246b5008', username='tempest-TestExecuteStrategies-1883747907-project-admin', projectid='c634e1c17ed54907969576a0eb8eff50', projectname='tempest-TestExecuteStrategies-1883747907'), image=ImageMeta(id='5b99cbca-b655-4be5-8343-cf504005c42e', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "f942c9c9-85a4-47cf-9428-7e266b83b49b", "address": "fa:16:3e:2b:2d:3d", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf942c9c9-85", "ovs_interfaceid": "f942c9c9-85a4-47cf-9428-7e266b83b49b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20250919142712.b99a882.el10', creation_time=1759257172.091499) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Sep 30 18:32:52 compute-0 nova_compute[265391]: 2025-09-30 18:32:52.095 2 DEBUG nova.virt.libvirt.host [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Sep 30 18:32:52 compute-0 nova_compute[265391]: 2025-09-30 18:32:52.096 2 DEBUG nova.virt.libvirt.host [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Sep 30 18:32:52 compute-0 nova_compute[265391]: 2025-09-30 18:32:52.099 2 DEBUG nova.virt.libvirt.host [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Sep 30 18:32:52 compute-0 nova_compute[265391]: 2025-09-30 18:32:52.099 2 DEBUG nova.virt.libvirt.host [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Sep 30 18:32:52 compute-0 nova_compute[265391]: 2025-09-30 18:32:52.100 2 DEBUG nova.virt.libvirt.driver [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Sep 30 18:32:52 compute-0 nova_compute[265391]: 2025-09-30 18:32:52.100 2 DEBUG nova.virt.hardware [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-09-30T18:04:50Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Sep 30 18:32:52 compute-0 nova_compute[265391]: 2025-09-30 18:32:52.101 2 DEBUG nova.virt.hardware [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Sep 30 18:32:52 compute-0 nova_compute[265391]: 2025-09-30 18:32:52.101 2 DEBUG nova.virt.hardware [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Sep 30 18:32:52 compute-0 nova_compute[265391]: 2025-09-30 18:32:52.101 2 DEBUG nova.virt.hardware [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Sep 30 18:32:52 compute-0 nova_compute[265391]: 2025-09-30 18:32:52.101 2 DEBUG nova.virt.hardware [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Sep 30 18:32:52 compute-0 nova_compute[265391]: 2025-09-30 18:32:52.102 2 DEBUG nova.virt.hardware [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Sep 30 18:32:52 compute-0 nova_compute[265391]: 2025-09-30 18:32:52.102 2 DEBUG nova.virt.hardware [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Sep 30 18:32:52 compute-0 nova_compute[265391]: 2025-09-30 18:32:52.102 2 DEBUG nova.virt.hardware [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Sep 30 18:32:52 compute-0 nova_compute[265391]: 2025-09-30 18:32:52.103 2 DEBUG nova.virt.hardware [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Sep 30 18:32:52 compute-0 nova_compute[265391]: 2025-09-30 18:32:52.103 2 DEBUG nova.virt.hardware [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Sep 30 18:32:52 compute-0 nova_compute[265391]: 2025-09-30 18:32:52.103 2 DEBUG nova.virt.hardware [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Sep 30 18:32:52 compute-0 nova_compute[265391]: 2025-09-30 18:32:52.107 2 DEBUG oslo_concurrency.processutils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:32:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:32:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:32:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:32:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2430757105' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:32:52 compute-0 nova_compute[265391]: 2025-09-30 18:32:52.624 2 DEBUG oslo_concurrency.processutils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:32:52 compute-0 nova_compute[265391]: 2025-09-30 18:32:52.658 2 DEBUG nova.storage.rbd_utils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 29a2fe9a-add5-43c1-948a-9df854aa4261_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:32:52 compute-0 nova_compute[265391]: 2025-09-30 18:32:52.663 2 DEBUG oslo_concurrency.processutils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:32:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:32:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:32:52.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:32:53 compute-0 ceph-mon[73755]: pgmap v1627: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 323 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Sep 30 18:32:53 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:32:53 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2430757105' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:32:53 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:32:53 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2036967863' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:32:53 compute-0 nova_compute[265391]: 2025-09-30 18:32:53.089 2 DEBUG oslo_concurrency.processutils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:32:53 compute-0 nova_compute[265391]: 2025-09-30 18:32:53.091 2 DEBUG nova.virt.libvirt.vif [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:32:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-1847336220',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-1847336220',id=25,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c634e1c17ed54907969576a0eb8eff50',ramdisk_id='',reservation_id='r-my5o1s4h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteStrategies-1883747907',owner_user_name='tempest-TestExecuteStrategies-1883747907-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:32:45Z,user_data=None,user_id='623ef4a55c9e4fc28bb65e49246b5008',uuid=29a2fe9a-add5-43c1-948a-9df854aa4261,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f942c9c9-85a4-47cf-9428-7e266b83b49b", "address": "fa:16:3e:2b:2d:3d", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf942c9c9-85", "ovs_interfaceid": "f942c9c9-85a4-47cf-9428-7e266b83b49b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:32:53 compute-0 nova_compute[265391]: 2025-09-30 18:32:53.091 2 DEBUG nova.network.os_vif_util [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converting VIF {"id": "f942c9c9-85a4-47cf-9428-7e266b83b49b", "address": "fa:16:3e:2b:2d:3d", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf942c9c9-85", "ovs_interfaceid": "f942c9c9-85a4-47cf-9428-7e266b83b49b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:32:53 compute-0 nova_compute[265391]: 2025-09-30 18:32:53.092 2 DEBUG nova.network.os_vif_util [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:2d:3d,bridge_name='br-int',has_traffic_filtering=True,id=f942c9c9-85a4-47cf-9428-7e266b83b49b,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf942c9c9-85') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:32:53 compute-0 nova_compute[265391]: 2025-09-30 18:32:53.094 2 DEBUG nova.objects.instance [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lazy-loading 'pci_devices' on Instance uuid 29a2fe9a-add5-43c1-948a-9df854aa4261 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:32:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:32:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:32:53.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:32:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1628: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 323 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Sep 30 18:32:53 compute-0 nova_compute[265391]: 2025-09-30 18:32:53.604 2 DEBUG nova.virt.libvirt.driver [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] End _get_guest_xml xml=<domain type="kvm">
Sep 30 18:32:53 compute-0 nova_compute[265391]:   <uuid>29a2fe9a-add5-43c1-948a-9df854aa4261</uuid>
Sep 30 18:32:53 compute-0 nova_compute[265391]:   <name>instance-00000019</name>
Sep 30 18:32:53 compute-0 nova_compute[265391]:   <memory>131072</memory>
Sep 30 18:32:53 compute-0 nova_compute[265391]:   <vcpu>1</vcpu>
Sep 30 18:32:53 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:32:53 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteStrategies-server-1847336220</nova:name>
Sep 30 18:32:53 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:32:52</nova:creationTime>
Sep 30 18:32:53 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:32:53 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:32:53 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:32:53 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:32:53 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:32:53 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:32:53 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:32:53 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:32:53 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:32:53 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:32:53 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:32:53 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:32:53 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:32:53 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:32:53 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:32:53 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:32:53 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:32:53 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:32:53 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:32:53 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:32:53 compute-0 nova_compute[265391]:         <nova:user uuid="623ef4a55c9e4fc28bb65e49246b5008">tempest-TestExecuteStrategies-1883747907-project-admin</nova:user>
Sep 30 18:32:53 compute-0 nova_compute[265391]:         <nova:project uuid="c634e1c17ed54907969576a0eb8eff50">tempest-TestExecuteStrategies-1883747907</nova:project>
Sep 30 18:32:53 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:32:53 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:32:53 compute-0 nova_compute[265391]:         <nova:port uuid="f942c9c9-85a4-47cf-9428-7e266b83b49b">
Sep 30 18:32:53 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:32:53 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:32:53 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:32:53 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <system>
Sep 30 18:32:53 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:32:53 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:32:53 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:32:53 compute-0 nova_compute[265391]:       <entry name="serial">29a2fe9a-add5-43c1-948a-9df854aa4261</entry>
Sep 30 18:32:53 compute-0 nova_compute[265391]:       <entry name="uuid">29a2fe9a-add5-43c1-948a-9df854aa4261</entry>
Sep 30 18:32:53 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     </system>
Sep 30 18:32:53 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:32:53 compute-0 nova_compute[265391]:   <os>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="q35">hvm</type>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:   </os>
Sep 30 18:32:53 compute-0 nova_compute[265391]:   <features>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <vmcoreinfo/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:   </features>
Sep 30 18:32:53 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:32:53 compute-0 nova_compute[265391]:   <cpu mode="host-model" match="exact">
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <topology sockets="1" cores="1" threads="1"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:32:53 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:32:53 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/29a2fe9a-add5-43c1-948a-9df854aa4261_disk">
Sep 30 18:32:53 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:       </source>
Sep 30 18:32:53 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:32:53 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:32:53 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:32:53 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/29a2fe9a-add5-43c1-948a-9df854aa4261_disk.config">
Sep 30 18:32:53 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:       </source>
Sep 30 18:32:53 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:32:53 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:32:53 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:32:53 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:2b:2d:3d"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:       <target dev="tapf942c9c9-85"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:32:53 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/29a2fe9a-add5-43c1-948a-9df854aa4261/console.log" append="off"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <video>
Sep 30 18:32:53 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     </video>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:32:53 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <controller type="usb" index="0"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:32:53 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:32:53 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:32:53 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:32:53 compute-0 nova_compute[265391]: </domain>
Sep 30 18:32:53 compute-0 nova_compute[265391]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Sep 30 18:32:53 compute-0 nova_compute[265391]: 2025-09-30 18:32:53.606 2 DEBUG nova.compute.manager [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Preparing to wait for external event network-vif-plugged-f942c9c9-85a4-47cf-9428-7e266b83b49b prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:32:53 compute-0 nova_compute[265391]: 2025-09-30 18:32:53.607 2 DEBUG oslo_concurrency.lockutils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:32:53 compute-0 nova_compute[265391]: 2025-09-30 18:32:53.607 2 DEBUG oslo_concurrency.lockutils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:32:53 compute-0 nova_compute[265391]: 2025-09-30 18:32:53.608 2 DEBUG oslo_concurrency.lockutils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:32:53 compute-0 nova_compute[265391]: 2025-09-30 18:32:53.609 2 DEBUG nova.virt.libvirt.vif [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:32:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-1847336220',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-1847336220',id=25,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c634e1c17ed54907969576a0eb8eff50',ramdisk_id='',reservation_id='r-my5o1s4h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteStrategies-1883747907',owner_user_name='tempest-TestExecuteStrategies-1883747907-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:32:45Z,user_data=None,user_id='623ef4a55c9e4fc28bb65e49246b5008',uuid=29a2fe9a-add5-43c1-948a-9df854aa4261,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f942c9c9-85a4-47cf-9428-7e266b83b49b", "address": "fa:16:3e:2b:2d:3d", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf942c9c9-85", "ovs_interfaceid": "f942c9c9-85a4-47cf-9428-7e266b83b49b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Sep 30 18:32:53 compute-0 nova_compute[265391]: 2025-09-30 18:32:53.609 2 DEBUG nova.network.os_vif_util [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converting VIF {"id": "f942c9c9-85a4-47cf-9428-7e266b83b49b", "address": "fa:16:3e:2b:2d:3d", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf942c9c9-85", "ovs_interfaceid": "f942c9c9-85a4-47cf-9428-7e266b83b49b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:32:53 compute-0 nova_compute[265391]: 2025-09-30 18:32:53.610 2 DEBUG nova.network.os_vif_util [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:2d:3d,bridge_name='br-int',has_traffic_filtering=True,id=f942c9c9-85a4-47cf-9428-7e266b83b49b,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf942c9c9-85') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:32:53 compute-0 nova_compute[265391]: 2025-09-30 18:32:53.611 2 DEBUG os_vif [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:2d:3d,bridge_name='br-int',has_traffic_filtering=True,id=f942c9c9-85a4-47cf-9428-7e266b83b49b,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf942c9c9-85') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Sep 30 18:32:53 compute-0 nova_compute[265391]: 2025-09-30 18:32:53.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:53 compute-0 nova_compute[265391]: 2025-09-30 18:32:53.613 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:32:53 compute-0 nova_compute[265391]: 2025-09-30 18:32:53.613 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:32:53 compute-0 nova_compute[265391]: 2025-09-30 18:32:53.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:53 compute-0 nova_compute[265391]: 2025-09-30 18:32:53.615 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '1f8aa9c4-a74e-54b3-8225-6274dca4c305', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:32:53 compute-0 nova_compute[265391]: 2025-09-30 18:32:53.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:53 compute-0 nova_compute[265391]: 2025-09-30 18:32:53.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:53 compute-0 nova_compute[265391]: 2025-09-30 18:32:53.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:53 compute-0 nova_compute[265391]: 2025-09-30 18:32:53.622 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf942c9c9-85, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:32:53 compute-0 nova_compute[265391]: 2025-09-30 18:32:53.623 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tapf942c9c9-85, col_values=(('qos', UUID('626c86aa-7b02-4ad9-baa6-59db390be984')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:32:53 compute-0 nova_compute[265391]: 2025-09-30 18:32:53.623 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tapf942c9c9-85, col_values=(('external_ids', {'iface-id': 'f942c9c9-85a4-47cf-9428-7e266b83b49b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2b:2d:3d', 'vm-uuid': '29a2fe9a-add5-43c1-948a-9df854aa4261'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:32:53 compute-0 nova_compute[265391]: 2025-09-30 18:32:53.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:53 compute-0 NetworkManager[45059]: <info>  [1759257173.6260] manager: (tapf942c9c9-85): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Sep 30 18:32:53 compute-0 nova_compute[265391]: 2025-09-30 18:32:53.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:32:53 compute-0 nova_compute[265391]: 2025-09-30 18:32:53.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:53 compute-0 nova_compute[265391]: 2025-09-30 18:32:53.634 2 INFO os_vif [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:2d:3d,bridge_name='br-int',has_traffic_filtering=True,id=f942c9c9-85a4-47cf-9428-7e266b83b49b,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf942c9c9-85')
Sep 30 18:32:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:32:53.826Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:32:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:32:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:32:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:32:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:32:54 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2036967863' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:32:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:54.320 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:32:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:54.321 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:32:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:54.321 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:32:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:32:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:32:54.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:32:55 compute-0 ceph-mon[73755]: pgmap v1628: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 323 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Sep 30 18:32:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:32:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:32:55.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:32:55 compute-0 nova_compute[265391]: 2025-09-30 18:32:55.179 2 DEBUG nova.virt.libvirt.driver [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:32:55 compute-0 nova_compute[265391]: 2025-09-30 18:32:55.179 2 DEBUG nova.virt.libvirt.driver [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:32:55 compute-0 nova_compute[265391]: 2025-09-30 18:32:55.179 2 DEBUG nova.virt.libvirt.driver [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] No VIF found with MAC fa:16:3e:2b:2d:3d, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Sep 30 18:32:55 compute-0 nova_compute[265391]: 2025-09-30 18:32:55.180 2 INFO nova.virt.libvirt.driver [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Using config drive
Sep 30 18:32:55 compute-0 nova_compute[265391]: 2025-09-30 18:32:55.208 2 DEBUG nova.storage.rbd_utils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 29a2fe9a-add5-43c1-948a-9df854aa4261_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:32:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1629: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 323 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Sep 30 18:32:55 compute-0 nova_compute[265391]: 2025-09-30 18:32:55.725 2 WARNING neutronclient.v2_0.client [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:32:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:32:56 compute-0 nova_compute[265391]: 2025-09-30 18:32:56.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:56 compute-0 nova_compute[265391]: 2025-09-30 18:32:56.612 2 INFO nova.virt.libvirt.driver [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Creating config drive at /var/lib/nova/instances/29a2fe9a-add5-43c1-948a-9df854aa4261/disk.config
Sep 30 18:32:56 compute-0 nova_compute[265391]: 2025-09-30 18:32:56.627 2 DEBUG oslo_concurrency.processutils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/29a2fe9a-add5-43c1-948a-9df854aa4261/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmpaqg6rd6t execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:32:56 compute-0 sudo[334627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:32:56 compute-0 sudo[334627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:32:56 compute-0 sudo[334627]: pam_unix(sudo:session): session closed for user root
Sep 30 18:32:56 compute-0 sudo[334650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:32:56 compute-0 sudo[334650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:32:56 compute-0 sudo[334650]: pam_unix(sudo:session): session closed for user root
Sep 30 18:32:56 compute-0 nova_compute[265391]: 2025-09-30 18:32:56.776 2 DEBUG oslo_concurrency.processutils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/29a2fe9a-add5-43c1-948a-9df854aa4261/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmpaqg6rd6t" returned: 0 in 0.149s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:32:56 compute-0 sudo[334680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:32:56 compute-0 sudo[334680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:32:56 compute-0 nova_compute[265391]: 2025-09-30 18:32:56.813 2 DEBUG nova.storage.rbd_utils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 29a2fe9a-add5-43c1-948a-9df854aa4261_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:32:56 compute-0 nova_compute[265391]: 2025-09-30 18:32:56.816 2 DEBUG oslo_concurrency.processutils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/29a2fe9a-add5-43c1-948a-9df854aa4261/disk.config 29a2fe9a-add5-43c1-948a-9df854aa4261_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:32:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:32:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:32:56.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:32:57 compute-0 nova_compute[265391]: 2025-09-30 18:32:57.008 2 DEBUG oslo_concurrency.processutils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/29a2fe9a-add5-43c1-948a-9df854aa4261/disk.config 29a2fe9a-add5-43c1-948a-9df854aa4261_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.192s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:32:57 compute-0 nova_compute[265391]: 2025-09-30 18:32:57.010 2 INFO nova.virt.libvirt.driver [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Deleting local config drive /var/lib/nova/instances/29a2fe9a-add5-43c1-948a-9df854aa4261/disk.config because it was imported into RBD.
Sep 30 18:32:57 compute-0 ceph-mon[73755]: pgmap v1629: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 323 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Sep 30 18:32:57 compute-0 kernel: tapf942c9c9-85: entered promiscuous mode
Sep 30 18:32:57 compute-0 ovn_controller[156242]: 2025-09-30T18:32:57Z|00211|binding|INFO|Claiming lport f942c9c9-85a4-47cf-9428-7e266b83b49b for this chassis.
Sep 30 18:32:57 compute-0 ovn_controller[156242]: 2025-09-30T18:32:57Z|00212|binding|INFO|f942c9c9-85a4-47cf-9428-7e266b83b49b: Claiming fa:16:3e:2b:2d:3d 10.100.0.5
Sep 30 18:32:57 compute-0 nova_compute[265391]: 2025-09-30 18:32:57.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:57 compute-0 NetworkManager[45059]: <info>  [1759257177.0895] manager: (tapf942c9c9-85): new Tun device (/org/freedesktop/NetworkManager/Devices/86)
Sep 30 18:32:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:57.092 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:2d:3d 10.100.0.5'], port_security=['fa:16:3e:2b:2d:3d 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '29a2fe9a-add5-43c1-948a-9df854aa4261', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6901f664-336b-42d2-bbf7-58951befc8d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c634e1c17ed54907969576a0eb8eff50', 'neutron:revision_number': '4', 'neutron:security_group_ids': '11cf84c1-9641-409c-b5f5-6c5fe3a8afe5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c24d9de-651b-4bf8-842a-1286ab88b11d, chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=f942c9c9-85a4-47cf-9428-7e266b83b49b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:32:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:57.093 166158 INFO neutron.agent.ovn.metadata.agent [-] Port f942c9c9-85a4-47cf-9428-7e266b83b49b in datapath 6901f664-336b-42d2-bbf7-58951befc8d1 bound to our chassis
Sep 30 18:32:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:57.094 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6901f664-336b-42d2-bbf7-58951befc8d1
Sep 30 18:32:57 compute-0 ovn_controller[156242]: 2025-09-30T18:32:57Z|00213|binding|INFO|Setting lport f942c9c9-85a4-47cf-9428-7e266b83b49b ovn-installed in OVS
Sep 30 18:32:57 compute-0 ovn_controller[156242]: 2025-09-30T18:32:57Z|00214|binding|INFO|Setting lport f942c9c9-85a4-47cf-9428-7e266b83b49b up in Southbound
Sep 30 18:32:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:32:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:32:57.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:32:57 compute-0 nova_compute[265391]: 2025-09-30 18:32:57.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:57 compute-0 systemd-udevd[334767]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:32:57 compute-0 nova_compute[265391]: 2025-09-30 18:32:57.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:57.119 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[6625fc9c-2cf6-449a-83b2-6547d35b90e7]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:57 compute-0 NetworkManager[45059]: <info>  [1759257177.1319] device (tapf942c9c9-85): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 18:32:57 compute-0 NetworkManager[45059]: <info>  [1759257177.1333] device (tapf942c9c9-85): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Sep 30 18:32:57 compute-0 systemd-machined[219917]: New machine qemu-18-instance-00000019.
Sep 30 18:32:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:57.155 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[04c94c4e-5319-428f-993b-f17311d1b304]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:57.157 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[c97417bd-e9ef-4ce3-aa0c-550bbc0e93a0]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:57 compute-0 systemd[1]: Started Virtual Machine qemu-18-instance-00000019.
Sep 30 18:32:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:57.190 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[dd130ea7-4fb1-464e-b6fb-2a340e8519c9]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:57.211 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[b9d50620-52f9-4339-acb5-beb85b4e0975]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6901f664-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:41:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 575259, 'reachable_time': 34563, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334782, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:57.236 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[c2c2f720-b0a0-486d-8b9d-5d609268b433]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6901f664-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 575273, 'tstamp': 575273}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334784, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6901f664-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 575277, 'tstamp': 575277}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334784, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:57.237 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6901f664-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:32:57 compute-0 nova_compute[265391]: 2025-09-30 18:32:57.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:57 compute-0 nova_compute[265391]: 2025-09-30 18:32:57.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:57.240 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6901f664-30, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:32:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:57.240 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:32:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:57.241 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6901f664-30, col_values=(('external_ids', {'iface-id': '5b6cbf18-1826-41d0-920f-e9db4f1a1832'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:32:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:57.241 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:32:57 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:32:57.242 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[c9ef91a0-c374-46a7-930b-461da37222e0]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-6901f664-336b-42d2-bbf7-58951befc8d1\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 6901f664-336b-42d2-bbf7-58951befc8d1\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:32:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:32:57.292Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:32:57 compute-0 sudo[334680]: pam_unix(sudo:session): session closed for user root
Sep 30 18:32:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:32:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:32:57 compute-0 nova_compute[265391]: 2025-09-30 18:32:57.522 2 DEBUG nova.compute.manager [req-04ea8b1b-a140-47b5-889d-8a2730bacd1b req-93923c05-7269-4d4d-ad76-c8beb1ac2082 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Received event network-vif-plugged-f942c9c9-85a4-47cf-9428-7e266b83b49b external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:32:57 compute-0 nova_compute[265391]: 2025-09-30 18:32:57.523 2 DEBUG oslo_concurrency.lockutils [req-04ea8b1b-a140-47b5-889d-8a2730bacd1b req-93923c05-7269-4d4d-ad76-c8beb1ac2082 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:32:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:32:57 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:32:57 compute-0 nova_compute[265391]: 2025-09-30 18:32:57.524 2 DEBUG oslo_concurrency.lockutils [req-04ea8b1b-a140-47b5-889d-8a2730bacd1b req-93923c05-7269-4d4d-ad76-c8beb1ac2082 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:32:57 compute-0 nova_compute[265391]: 2025-09-30 18:32:57.524 2 DEBUG oslo_concurrency.lockutils [req-04ea8b1b-a140-47b5-889d-8a2730bacd1b req-93923c05-7269-4d4d-ad76-c8beb1ac2082 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:32:57 compute-0 nova_compute[265391]: 2025-09-30 18:32:57.524 2 DEBUG nova.compute.manager [req-04ea8b1b-a140-47b5-889d-8a2730bacd1b req-93923c05-7269-4d4d-ad76-c8beb1ac2082 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Processing event network-vif-plugged-f942c9c9-85a4-47cf-9428-7e266b83b49b _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:32:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1630: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 325 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Sep 30 18:32:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:32:57 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:32:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:32:57 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:32:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:32:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:32:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:32:57 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:32:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:32:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:32:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:32:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3666147916' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:32:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:32:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3666147916' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:32:57 compute-0 sudo[334816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:32:57 compute-0 sudo[334816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:32:57 compute-0 sudo[334816]: pam_unix(sudo:session): session closed for user root
Sep 30 18:32:57 compute-0 sudo[334867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:32:57 compute-0 sudo[334867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:32:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:32:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:32:58 compute-0 ceph-mon[73755]: pgmap v1630: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 325 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Sep 30 18:32:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:32:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:32:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:32:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:32:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:32:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3666147916' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:32:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3666147916' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:32:58 compute-0 podman[334933]: 2025-09-30 18:32:58.083400019 +0000 UTC m=+0.044685738 container create aa1640d7e6d1890d4534a49dff81aae2695af9d50cbedaa40ffc7f752a801942 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:32:58 compute-0 nova_compute[265391]: 2025-09-30 18:32:58.104 2 DEBUG nova.compute.manager [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:32:58 compute-0 nova_compute[265391]: 2025-09-30 18:32:58.107 2 DEBUG nova.virt.libvirt.driver [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Sep 30 18:32:58 compute-0 nova_compute[265391]: 2025-09-30 18:32:58.111 2 INFO nova.virt.libvirt.driver [-] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Instance spawned successfully.
Sep 30 18:32:58 compute-0 nova_compute[265391]: 2025-09-30 18:32:58.111 2 DEBUG nova.virt.libvirt.driver [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Sep 30 18:32:58 compute-0 systemd[1]: Started libpod-conmon-aa1640d7e6d1890d4534a49dff81aae2695af9d50cbedaa40ffc7f752a801942.scope.
Sep 30 18:32:58 compute-0 podman[334933]: 2025-09-30 18:32:58.062517599 +0000 UTC m=+0.023803318 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:32:58 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:32:58 compute-0 podman[334933]: 2025-09-30 18:32:58.184807668 +0000 UTC m=+0.146093397 container init aa1640d7e6d1890d4534a49dff81aae2695af9d50cbedaa40ffc7f752a801942 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 18:32:58 compute-0 podman[334933]: 2025-09-30 18:32:58.192823516 +0000 UTC m=+0.154109235 container start aa1640d7e6d1890d4534a49dff81aae2695af9d50cbedaa40ffc7f752a801942 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_chebyshev, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 18:32:58 compute-0 hopeful_chebyshev[334949]: 167 167
Sep 30 18:32:58 compute-0 systemd[1]: libpod-aa1640d7e6d1890d4534a49dff81aae2695af9d50cbedaa40ffc7f752a801942.scope: Deactivated successfully.
Sep 30 18:32:58 compute-0 podman[334933]: 2025-09-30 18:32:58.200740191 +0000 UTC m=+0.162026101 container attach aa1640d7e6d1890d4534a49dff81aae2695af9d50cbedaa40ffc7f752a801942 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_chebyshev, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Sep 30 18:32:58 compute-0 podman[334933]: 2025-09-30 18:32:58.201876511 +0000 UTC m=+0.163162250 container died aa1640d7e6d1890d4534a49dff81aae2695af9d50cbedaa40ffc7f752a801942 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Sep 30 18:32:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-0380d6422bdedeec2540b641fcd70951992a2b749987887ed5ffc23740d412d4-merged.mount: Deactivated successfully.
Sep 30 18:32:58 compute-0 podman[334933]: 2025-09-30 18:32:58.245116842 +0000 UTC m=+0.206402561 container remove aa1640d7e6d1890d4534a49dff81aae2695af9d50cbedaa40ffc7f752a801942 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:32:58 compute-0 systemd[1]: libpod-conmon-aa1640d7e6d1890d4534a49dff81aae2695af9d50cbedaa40ffc7f752a801942.scope: Deactivated successfully.
Sep 30 18:32:58 compute-0 podman[334973]: 2025-09-30 18:32:58.428609259 +0000 UTC m=+0.040066880 container create c9f0cd6eae1a7a2c671528350d7390301f2a3da9009a4ab77a3749d4c62599b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_hofstadter, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 18:32:58 compute-0 systemd[1]: Started libpod-conmon-c9f0cd6eae1a7a2c671528350d7390301f2a3da9009a4ab77a3749d4c62599b0.scope.
Sep 30 18:32:58 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:32:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5cf7d594984ee5c8b6cd64b9feb9f5341bc0a5f3f4e6854ea1b16c7ed80bcea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:32:58 compute-0 podman[334973]: 2025-09-30 18:32:58.412211404 +0000 UTC m=+0.023669045 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:32:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5cf7d594984ee5c8b6cd64b9feb9f5341bc0a5f3f4e6854ea1b16c7ed80bcea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:32:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5cf7d594984ee5c8b6cd64b9feb9f5341bc0a5f3f4e6854ea1b16c7ed80bcea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:32:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5cf7d594984ee5c8b6cd64b9feb9f5341bc0a5f3f4e6854ea1b16c7ed80bcea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:32:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5cf7d594984ee5c8b6cd64b9feb9f5341bc0a5f3f4e6854ea1b16c7ed80bcea/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:32:58 compute-0 podman[334973]: 2025-09-30 18:32:58.532740169 +0000 UTC m=+0.144197820 container init c9f0cd6eae1a7a2c671528350d7390301f2a3da9009a4ab77a3749d4c62599b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_hofstadter, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:32:58 compute-0 podman[334973]: 2025-09-30 18:32:58.543909678 +0000 UTC m=+0.155367289 container start c9f0cd6eae1a7a2c671528350d7390301f2a3da9009a4ab77a3749d4c62599b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:32:58 compute-0 podman[334973]: 2025-09-30 18:32:58.54821425 +0000 UTC m=+0.159671901 container attach c9f0cd6eae1a7a2c671528350d7390301f2a3da9009a4ab77a3749d4c62599b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Sep 30 18:32:58 compute-0 nova_compute[265391]: 2025-09-30 18:32:58.626 2 DEBUG nova.virt.libvirt.driver [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:32:58 compute-0 nova_compute[265391]: 2025-09-30 18:32:58.626 2 DEBUG nova.virt.libvirt.driver [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:32:58 compute-0 nova_compute[265391]: 2025-09-30 18:32:58.627 2 DEBUG nova.virt.libvirt.driver [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:32:58 compute-0 nova_compute[265391]: 2025-09-30 18:32:58.627 2 DEBUG nova.virt.libvirt.driver [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:32:58 compute-0 nova_compute[265391]: 2025-09-30 18:32:58.628 2 DEBUG nova.virt.libvirt.driver [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:32:58 compute-0 nova_compute[265391]: 2025-09-30 18:32:58.628 2 DEBUG nova.virt.libvirt.driver [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:32:58 compute-0 nova_compute[265391]: 2025-09-30 18:32:58.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:32:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:32:58] "GET /metrics HTTP/1.1" 200 46722 "" "Prometheus/2.51.0"
Sep 30 18:32:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:32:58] "GET /metrics HTTP/1.1" 200 46722 "" "Prometheus/2.51.0"
Sep 30 18:32:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:32:58.845Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:32:58 compute-0 sweet_hofstadter[334990]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:32:58 compute-0 sweet_hofstadter[334990]: --> All data devices are unavailable
Sep 30 18:32:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:32:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:32:58.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:32:58 compute-0 systemd[1]: libpod-c9f0cd6eae1a7a2c671528350d7390301f2a3da9009a4ab77a3749d4c62599b0.scope: Deactivated successfully.
Sep 30 18:32:58 compute-0 podman[334973]: 2025-09-30 18:32:58.947267876 +0000 UTC m=+0.558725507 container died c9f0cd6eae1a7a2c671528350d7390301f2a3da9009a4ab77a3749d4c62599b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:32:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5cf7d594984ee5c8b6cd64b9feb9f5341bc0a5f3f4e6854ea1b16c7ed80bcea-merged.mount: Deactivated successfully.
Sep 30 18:32:58 compute-0 podman[334973]: 2025-09-30 18:32:58.997302223 +0000 UTC m=+0.608759844 container remove c9f0cd6eae1a7a2c671528350d7390301f2a3da9009a4ab77a3749d4c62599b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_hofstadter, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:32:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:32:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:32:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:32:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:32:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:32:59 compute-0 systemd[1]: libpod-conmon-c9f0cd6eae1a7a2c671528350d7390301f2a3da9009a4ab77a3749d4c62599b0.scope: Deactivated successfully.
Sep 30 18:32:59 compute-0 sudo[334867]: pam_unix(sudo:session): session closed for user root
Sep 30 18:32:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:32:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:32:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:32:59.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:32:59 compute-0 nova_compute[265391]: 2025-09-30 18:32:59.138 2 INFO nova.compute.manager [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Took 12.30 seconds to spawn the instance on the hypervisor.
Sep 30 18:32:59 compute-0 nova_compute[265391]: 2025-09-30 18:32:59.139 2 DEBUG nova.compute.manager [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Sep 30 18:32:59 compute-0 sudo[335018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:32:59 compute-0 sudo[335018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:32:59 compute-0 sudo[335018]: pam_unix(sudo:session): session closed for user root
Sep 30 18:32:59 compute-0 sudo[335043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:32:59 compute-0 sudo[335043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:32:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1631: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 335 KiB/s rd, 3.9 MiB/s wr, 100 op/s
Sep 30 18:32:59 compute-0 nova_compute[265391]: 2025-09-30 18:32:59.583 2 DEBUG nova.compute.manager [req-37713e82-5f52-47d7-8a9b-18a211adc906 req-37a45e13-d801-48e0-9e41-136e8d5857ce 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Received event network-vif-plugged-f942c9c9-85a4-47cf-9428-7e266b83b49b external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:32:59 compute-0 nova_compute[265391]: 2025-09-30 18:32:59.583 2 DEBUG oslo_concurrency.lockutils [req-37713e82-5f52-47d7-8a9b-18a211adc906 req-37a45e13-d801-48e0-9e41-136e8d5857ce 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:32:59 compute-0 nova_compute[265391]: 2025-09-30 18:32:59.584 2 DEBUG oslo_concurrency.lockutils [req-37713e82-5f52-47d7-8a9b-18a211adc906 req-37a45e13-d801-48e0-9e41-136e8d5857ce 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:32:59 compute-0 nova_compute[265391]: 2025-09-30 18:32:59.584 2 DEBUG oslo_concurrency.lockutils [req-37713e82-5f52-47d7-8a9b-18a211adc906 req-37a45e13-d801-48e0-9e41-136e8d5857ce 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:32:59 compute-0 nova_compute[265391]: 2025-09-30 18:32:59.584 2 DEBUG nova.compute.manager [req-37713e82-5f52-47d7-8a9b-18a211adc906 req-37a45e13-d801-48e0-9e41-136e8d5857ce 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] No waiting events found dispatching network-vif-plugged-f942c9c9-85a4-47cf-9428-7e266b83b49b pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:32:59 compute-0 nova_compute[265391]: 2025-09-30 18:32:59.584 2 WARNING nova.compute.manager [req-37713e82-5f52-47d7-8a9b-18a211adc906 req-37a45e13-d801-48e0-9e41-136e8d5857ce 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Received unexpected event network-vif-plugged-f942c9c9-85a4-47cf-9428-7e266b83b49b for instance with vm_state active and task_state None.
Sep 30 18:32:59 compute-0 ceph-mon[73755]: pgmap v1631: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 335 KiB/s rd, 3.9 MiB/s wr, 100 op/s
Sep 30 18:32:59 compute-0 podman[335109]: 2025-09-30 18:32:59.624461043 +0000 UTC m=+0.036379804 container create 5a4c2567b0c38356859c4871b85b9524433d6526074e6a4b115013cb8500138f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_montalcini, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Sep 30 18:32:59 compute-0 systemd[1]: Started libpod-conmon-5a4c2567b0c38356859c4871b85b9524433d6526074e6a4b115013cb8500138f.scope.
Sep 30 18:32:59 compute-0 nova_compute[265391]: 2025-09-30 18:32:59.676 2 INFO nova.compute.manager [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Took 18.07 seconds to build instance.
Sep 30 18:32:59 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:32:59 compute-0 podman[335109]: 2025-09-30 18:32:59.697386893 +0000 UTC m=+0.109305654 container init 5a4c2567b0c38356859c4871b85b9524433d6526074e6a4b115013cb8500138f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 18:32:59 compute-0 podman[335109]: 2025-09-30 18:32:59.705285648 +0000 UTC m=+0.117204399 container start 5a4c2567b0c38356859c4871b85b9524433d6526074e6a4b115013cb8500138f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_montalcini, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:32:59 compute-0 podman[335109]: 2025-09-30 18:32:59.610252444 +0000 UTC m=+0.022171225 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:32:59 compute-0 podman[335109]: 2025-09-30 18:32:59.708192854 +0000 UTC m=+0.120111615 container attach 5a4c2567b0c38356859c4871b85b9524433d6526074e6a4b115013cb8500138f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True)
Sep 30 18:32:59 compute-0 romantic_montalcini[335126]: 167 167
Sep 30 18:32:59 compute-0 systemd[1]: libpod-5a4c2567b0c38356859c4871b85b9524433d6526074e6a4b115013cb8500138f.scope: Deactivated successfully.
Sep 30 18:32:59 compute-0 podman[335109]: 2025-09-30 18:32:59.711236942 +0000 UTC m=+0.123155703 container died 5a4c2567b0c38356859c4871b85b9524433d6526074e6a4b115013cb8500138f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Sep 30 18:32:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-f36662c30610ab28d7ab62c2f51b90ecf02998e5b2b4933475aa9eeb128e5c60-merged.mount: Deactivated successfully.
Sep 30 18:32:59 compute-0 podman[276673]: time="2025-09-30T18:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:32:59 compute-0 podman[335109]: 2025-09-30 18:32:59.747040031 +0000 UTC m=+0.158958792 container remove 5a4c2567b0c38356859c4871b85b9524433d6526074e6a4b115013cb8500138f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid)
Sep 30 18:32:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:32:59 compute-0 systemd[1]: libpod-conmon-5a4c2567b0c38356859c4871b85b9524433d6526074e6a4b115013cb8500138f.scope: Deactivated successfully.
Sep 30 18:32:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10759 "" "Go-http-client/1.1"
Sep 30 18:32:59 compute-0 podman[335152]: 2025-09-30 18:32:59.928146586 +0000 UTC m=+0.042576685 container create 62516797965e5bc0adad38498ebaf1714722ed700dbec91ab2286184c12dc19b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_merkle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:32:59 compute-0 systemd[1]: Started libpod-conmon-62516797965e5bc0adad38498ebaf1714722ed700dbec91ab2286184c12dc19b.scope.
Sep 30 18:32:59 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e39c2d3fefe7f02dfb10f381a0598446b6defe664b0c659bbb4ca975382ec82f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e39c2d3fefe7f02dfb10f381a0598446b6defe664b0c659bbb4ca975382ec82f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e39c2d3fefe7f02dfb10f381a0598446b6defe664b0c659bbb4ca975382ec82f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e39c2d3fefe7f02dfb10f381a0598446b6defe664b0c659bbb4ca975382ec82f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:33:00 compute-0 podman[335152]: 2025-09-30 18:32:59.911423512 +0000 UTC m=+0.025853641 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:33:00 compute-0 podman[335152]: 2025-09-30 18:33:00.022631146 +0000 UTC m=+0.137061285 container init 62516797965e5bc0adad38498ebaf1714722ed700dbec91ab2286184c12dc19b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_merkle, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Sep 30 18:33:00 compute-0 podman[335152]: 2025-09-30 18:33:00.032860691 +0000 UTC m=+0.147290800 container start 62516797965e5bc0adad38498ebaf1714722ed700dbec91ab2286184c12dc19b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:33:00 compute-0 podman[335152]: 2025-09-30 18:33:00.039869283 +0000 UTC m=+0.154299432 container attach 62516797965e5bc0adad38498ebaf1714722ed700dbec91ab2286184c12dc19b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_merkle, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:33:00 compute-0 nova_compute[265391]: 2025-09-30 18:33:00.181 2 DEBUG oslo_concurrency.lockutils [None req-64550adb-0f6e-4235-85d2-7352342b766c 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "29a2fe9a-add5-43c1-948a-9df854aa4261" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.602s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:33:00 compute-0 competent_merkle[335168]: {
Sep 30 18:33:00 compute-0 competent_merkle[335168]:     "0": [
Sep 30 18:33:00 compute-0 competent_merkle[335168]:         {
Sep 30 18:33:00 compute-0 competent_merkle[335168]:             "devices": [
Sep 30 18:33:00 compute-0 competent_merkle[335168]:                 "/dev/loop3"
Sep 30 18:33:00 compute-0 competent_merkle[335168]:             ],
Sep 30 18:33:00 compute-0 competent_merkle[335168]:             "lv_name": "ceph_lv0",
Sep 30 18:33:00 compute-0 competent_merkle[335168]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:33:00 compute-0 competent_merkle[335168]:             "lv_size": "21470642176",
Sep 30 18:33:00 compute-0 competent_merkle[335168]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:33:00 compute-0 competent_merkle[335168]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:33:00 compute-0 competent_merkle[335168]:             "name": "ceph_lv0",
Sep 30 18:33:00 compute-0 competent_merkle[335168]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:33:00 compute-0 competent_merkle[335168]:             "tags": {
Sep 30 18:33:00 compute-0 competent_merkle[335168]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:33:00 compute-0 competent_merkle[335168]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:33:00 compute-0 competent_merkle[335168]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:33:00 compute-0 competent_merkle[335168]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:33:00 compute-0 competent_merkle[335168]:                 "ceph.cluster_name": "ceph",
Sep 30 18:33:00 compute-0 competent_merkle[335168]:                 "ceph.crush_device_class": "",
Sep 30 18:33:00 compute-0 competent_merkle[335168]:                 "ceph.encrypted": "0",
Sep 30 18:33:00 compute-0 competent_merkle[335168]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:33:00 compute-0 competent_merkle[335168]:                 "ceph.osd_id": "0",
Sep 30 18:33:00 compute-0 competent_merkle[335168]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:33:00 compute-0 competent_merkle[335168]:                 "ceph.type": "block",
Sep 30 18:33:00 compute-0 competent_merkle[335168]:                 "ceph.vdo": "0",
Sep 30 18:33:00 compute-0 competent_merkle[335168]:                 "ceph.with_tpm": "0"
Sep 30 18:33:00 compute-0 competent_merkle[335168]:             },
Sep 30 18:33:00 compute-0 competent_merkle[335168]:             "type": "block",
Sep 30 18:33:00 compute-0 competent_merkle[335168]:             "vg_name": "ceph_vg0"
Sep 30 18:33:00 compute-0 competent_merkle[335168]:         }
Sep 30 18:33:00 compute-0 competent_merkle[335168]:     ]
Sep 30 18:33:00 compute-0 competent_merkle[335168]: }
Sep 30 18:33:00 compute-0 podman[335152]: 2025-09-30 18:33:00.341197975 +0000 UTC m=+0.455628074 container died 62516797965e5bc0adad38498ebaf1714722ed700dbec91ab2286184c12dc19b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_merkle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 18:33:00 compute-0 systemd[1]: libpod-62516797965e5bc0adad38498ebaf1714722ed700dbec91ab2286184c12dc19b.scope: Deactivated successfully.
Sep 30 18:33:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-e39c2d3fefe7f02dfb10f381a0598446b6defe664b0c659bbb4ca975382ec82f-merged.mount: Deactivated successfully.
Sep 30 18:33:00 compute-0 podman[335152]: 2025-09-30 18:33:00.395762879 +0000 UTC m=+0.510193028 container remove 62516797965e5bc0adad38498ebaf1714722ed700dbec91ab2286184c12dc19b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:33:00 compute-0 systemd[1]: libpod-conmon-62516797965e5bc0adad38498ebaf1714722ed700dbec91ab2286184c12dc19b.scope: Deactivated successfully.
Sep 30 18:33:00 compute-0 sudo[335043]: pam_unix(sudo:session): session closed for user root
Sep 30 18:33:00 compute-0 sudo[335190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:33:00 compute-0 sudo[335190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:33:00 compute-0 sudo[335190]: pam_unix(sudo:session): session closed for user root
Sep 30 18:33:00 compute-0 podman[335214]: 2025-09-30 18:33:00.654633271 +0000 UTC m=+0.080815476 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Sep 30 18:33:00 compute-0 sudo[335235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:33:00 compute-0 sudo[335235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:33:00 compute-0 podman[335215]: 2025-09-30 18:33:00.680309007 +0000 UTC m=+0.080697034 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Sep 30 18:33:00 compute-0 podman[335216]: 2025-09-30 18:33:00.680280226 +0000 UTC m=+0.088117176 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, managed_by=edpm_ansible, architecture=x86_64, container_name=openstack_network_exporter, name=ubi9-minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm)
Sep 30 18:33:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:33:00.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:33:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:33:01.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:01 compute-0 nova_compute[265391]: 2025-09-30 18:33:01.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:33:01 compute-0 podman[335337]: 2025-09-30 18:33:01.168253797 +0000 UTC m=+0.051058545 container create 821f2418c835399577faa166ff47c79f67653a9fd6fc7905099807d4603057e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_williamson, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Sep 30 18:33:01 compute-0 systemd[1]: Started libpod-conmon-821f2418c835399577faa166ff47c79f67653a9fd6fc7905099807d4603057e0.scope.
Sep 30 18:33:01 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:33:01 compute-0 podman[335337]: 2025-09-30 18:33:01.139317567 +0000 UTC m=+0.022122335 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:33:01 compute-0 podman[335337]: 2025-09-30 18:33:01.253106987 +0000 UTC m=+0.135911785 container init 821f2418c835399577faa166ff47c79f67653a9fd6fc7905099807d4603057e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Sep 30 18:33:01 compute-0 podman[335337]: 2025-09-30 18:33:01.262458349 +0000 UTC m=+0.145263107 container start 821f2418c835399577faa166ff47c79f67653a9fd6fc7905099807d4603057e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:33:01 compute-0 podman[335337]: 2025-09-30 18:33:01.266263028 +0000 UTC m=+0.149067826 container attach 821f2418c835399577faa166ff47c79f67653a9fd6fc7905099807d4603057e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_williamson, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 18:33:01 compute-0 inspiring_williamson[335353]: 167 167
Sep 30 18:33:01 compute-0 systemd[1]: libpod-821f2418c835399577faa166ff47c79f67653a9fd6fc7905099807d4603057e0.scope: Deactivated successfully.
Sep 30 18:33:01 compute-0 podman[335337]: 2025-09-30 18:33:01.271973886 +0000 UTC m=+0.154778654 container died 821f2418c835399577faa166ff47c79f67653a9fd6fc7905099807d4603057e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_williamson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid)
Sep 30 18:33:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-3aef47787e8f204ae6716f35644619bcffa4a27992bf60ce982c48617a3f27b8-merged.mount: Deactivated successfully.
Sep 30 18:33:01 compute-0 podman[335337]: 2025-09-30 18:33:01.310454094 +0000 UTC m=+0.193258842 container remove 821f2418c835399577faa166ff47c79f67653a9fd6fc7905099807d4603057e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Sep 30 18:33:01 compute-0 systemd[1]: libpod-conmon-821f2418c835399577faa166ff47c79f67653a9fd6fc7905099807d4603057e0.scope: Deactivated successfully.
Sep 30 18:33:01 compute-0 openstack_network_exporter[279566]: ERROR   18:33:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:33:01 compute-0 openstack_network_exporter[279566]: ERROR   18:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:33:01 compute-0 openstack_network_exporter[279566]: ERROR   18:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:33:01 compute-0 openstack_network_exporter[279566]: ERROR   18:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:33:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:33:01 compute-0 openstack_network_exporter[279566]: ERROR   18:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:33:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:33:01 compute-0 podman[335377]: 2025-09-30 18:33:01.506461725 +0000 UTC m=+0.042989515 container create 1a9036e87cb81b1babbf7dd0366ac588e8d93974894377d85b4c8707c4dfffbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:33:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1632: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 11 KiB/s rd, 26 KiB/s wr, 11 op/s
Sep 30 18:33:01 compute-0 systemd[1]: Started libpod-conmon-1a9036e87cb81b1babbf7dd0366ac588e8d93974894377d85b4c8707c4dfffbc.scope.
Sep 30 18:33:01 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:33:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b104377d3a8114511cff7256fd4b38402200bdc2291ad0cdd1a0004fba7c027/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:33:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b104377d3a8114511cff7256fd4b38402200bdc2291ad0cdd1a0004fba7c027/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:33:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b104377d3a8114511cff7256fd4b38402200bdc2291ad0cdd1a0004fba7c027/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:33:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b104377d3a8114511cff7256fd4b38402200bdc2291ad0cdd1a0004fba7c027/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:33:01 compute-0 podman[335377]: 2025-09-30 18:33:01.48966175 +0000 UTC m=+0.026189570 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:33:01 compute-0 podman[335377]: 2025-09-30 18:33:01.588947864 +0000 UTC m=+0.125475694 container init 1a9036e87cb81b1babbf7dd0366ac588e8d93974894377d85b4c8707c4dfffbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_nightingale, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:33:01 compute-0 podman[335377]: 2025-09-30 18:33:01.596197042 +0000 UTC m=+0.132724842 container start 1a9036e87cb81b1babbf7dd0366ac588e8d93974894377d85b4c8707c4dfffbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Sep 30 18:33:01 compute-0 podman[335377]: 2025-09-30 18:33:01.598875471 +0000 UTC m=+0.135403301 container attach 1a9036e87cb81b1babbf7dd0366ac588e8d93974894377d85b4c8707c4dfffbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_nightingale, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:33:01 compute-0 ceph-mon[73755]: pgmap v1632: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 11 KiB/s rd, 26 KiB/s wr, 11 op/s
Sep 30 18:33:02 compute-0 lvm[335470]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:33:02 compute-0 lvm[335470]: VG ceph_vg0 finished
Sep 30 18:33:02 compute-0 ecstatic_nightingale[335395]: {}
Sep 30 18:33:02 compute-0 systemd[1]: libpod-1a9036e87cb81b1babbf7dd0366ac588e8d93974894377d85b4c8707c4dfffbc.scope: Deactivated successfully.
Sep 30 18:33:02 compute-0 systemd[1]: libpod-1a9036e87cb81b1babbf7dd0366ac588e8d93974894377d85b4c8707c4dfffbc.scope: Consumed 1.091s CPU time.
Sep 30 18:33:02 compute-0 podman[335377]: 2025-09-30 18:33:02.29083522 +0000 UTC m=+0.827363040 container died 1a9036e87cb81b1babbf7dd0366ac588e8d93974894377d85b4c8707c4dfffbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Sep 30 18:33:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b104377d3a8114511cff7256fd4b38402200bdc2291ad0cdd1a0004fba7c027-merged.mount: Deactivated successfully.
Sep 30 18:33:02 compute-0 podman[335377]: 2025-09-30 18:33:02.346494973 +0000 UTC m=+0.883022783 container remove 1a9036e87cb81b1babbf7dd0366ac588e8d93974894377d85b4c8707c4dfffbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Sep 30 18:33:02 compute-0 systemd[1]: libpod-conmon-1a9036e87cb81b1babbf7dd0366ac588e8d93974894377d85b4c8707c4dfffbc.scope: Deactivated successfully.
Sep 30 18:33:02 compute-0 sudo[335235]: pam_unix(sudo:session): session closed for user root
Sep 30 18:33:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:33:02 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:33:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:33:02 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:33:02 compute-0 sudo[335489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:33:02 compute-0 sudo[335489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:33:02 compute-0 sudo[335489]: pam_unix(sudo:session): session closed for user root
Sep 30 18:33:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:33:02.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:33:03.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:03 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:33:03 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:33:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1633: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 73 op/s
Sep 30 18:33:03 compute-0 nova_compute[265391]: 2025-09-30 18:33:03.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:33:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:33:03.826Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:33:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:33:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:33:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:33:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:33:04 compute-0 ceph-mon[73755]: pgmap v1633: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 73 op/s
Sep 30 18:33:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:33:04.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:33:05.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1634: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 75 op/s
Sep 30 18:33:05 compute-0 ceph-mon[73755]: pgmap v1634: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 75 op/s
Sep 30 18:33:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:33:06 compute-0 nova_compute[265391]: 2025-09-30 18:33:06.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:33:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:33:06.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:33:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:33:07.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:33:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:33:07.293Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:33:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:33:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:33:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:33:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:33:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:33:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:33:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:33:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:33:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:33:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1635: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016606713347333464 of space, bias 1.0, pg target 0.33213426694666925 quantized to 32 (current 32)
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:33:08
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'cephfs.cephfs.meta', 'backups', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'vms', '.mgr', '.nfs', 'default.rgw.control']
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:33:08 compute-0 ceph-mon[73755]: pgmap v1635: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Sep 30 18:33:08 compute-0 nova_compute[265391]: 2025-09-30 18:33:08.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:33:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:33:08] "GET /metrics HTTP/1.1" 200 46721 "" "Prometheus/2.51.0"
Sep 30 18:33:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:33:08] "GET /metrics HTTP/1.1" 200 46721 "" "Prometheus/2.51.0"
Sep 30 18:33:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:33:08.846Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:33:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:33:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:33:08.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:33:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:33:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:33:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:33:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:33:09 compute-0 ovn_controller[156242]: 2025-09-30T18:33:09Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2b:2d:3d 10.100.0.5
Sep 30 18:33:09 compute-0 ovn_controller[156242]: 2025-09-30T18:33:09Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2b:2d:3d 10.100.0.5
Sep 30 18:33:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:33:09.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1636: 353 pgs: 353 active+clean; 186 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Sep 30 18:33:09 compute-0 ceph-mon[73755]: pgmap v1636: 353 pgs: 353 active+clean; 186 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Sep 30 18:33:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:33:10.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:33:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:33:11.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:11 compute-0 nova_compute[265391]: 2025-09-30 18:33:11.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:33:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1637: 353 pgs: 353 active+clean; 186 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 86 op/s
Sep 30 18:33:11 compute-0 ceph-mon[73755]: pgmap v1637: 353 pgs: 353 active+clean; 186 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 86 op/s
Sep 30 18:33:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:33:12.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:33:13.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1638: 353 pgs: 353 active+clean; 196 MiB data, 412 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 114 op/s
Sep 30 18:33:13 compute-0 nova_compute[265391]: 2025-09-30 18:33:13.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:33:13 compute-0 ceph-mon[73755]: pgmap v1638: 353 pgs: 353 active+clean; 196 MiB data, 412 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 114 op/s
Sep 30 18:33:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:33:13.827Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:33:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:33:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:33:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:33:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:33:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:33:14.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:33:15.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1639: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 386 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Sep 30 18:33:15 compute-0 ceph-mon[73755]: pgmap v1639: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 386 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Sep 30 18:33:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:33:16 compute-0 nova_compute[265391]: 2025-09-30 18:33:16.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:33:16 compute-0 sudo[335529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:33:16 compute-0 sudo[335529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:33:16 compute-0 sudo[335529]: pam_unix(sudo:session): session closed for user root
Sep 30 18:33:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:33:16.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:33:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:33:17.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:33:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:33:17.294Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:33:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1640: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:33:17 compute-0 ceph-mon[73755]: pgmap v1640: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:33:18 compute-0 nova_compute[265391]: 2025-09-30 18:33:18.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:33:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:33:18] "GET /metrics HTTP/1.1" 200 46721 "" "Prometheus/2.51.0"
Sep 30 18:33:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:33:18] "GET /metrics HTTP/1.1" 200 46721 "" "Prometheus/2.51.0"
Sep 30 18:33:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:33:18.847Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:33:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.002000052s ======
Sep 30 18:33:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:33:18.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Sep 30 18:33:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:33:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:33:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:33:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:33:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:33:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:33:19.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:33:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1641: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 18:33:19 compute-0 ceph-mon[73755]: pgmap v1641: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 18:33:20 compute-0 podman[335558]: 2025-09-30 18:33:20.510164006 +0000 UTC m=+0.052564993 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Sep 30 18:33:20 compute-0 podman[335560]: 2025-09-30 18:33:20.545371989 +0000 UTC m=+0.082979102 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Sep 30 18:33:20 compute-0 podman[335559]: 2025-09-30 18:33:20.62332185 +0000 UTC m=+0.155411670 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.4)
Sep 30 18:33:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:33:20.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:33:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:33:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:33:21.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:33:21 compute-0 nova_compute[265391]: 2025-09-30 18:33:21.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:33:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1642: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 296 KiB/s rd, 404 KiB/s wr, 42 op/s
Sep 30 18:33:21 compute-0 ceph-mon[73755]: pgmap v1642: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 296 KiB/s rd, 404 KiB/s wr, 42 op/s
Sep 30 18:33:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:33:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:33:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:33:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:33:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:33:22.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:33:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:33:23.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:23 compute-0 unix_chkpwd[335627]: password check failed for user (root)
Sep 30 18:33:23 compute-0 sshd-session[335623]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158  user=root
Sep 30 18:33:23 compute-0 sshd-session[335625]: Invalid user testuser from 14.225.220.107 port 38574
Sep 30 18:33:23 compute-0 sshd-session[335625]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:33:23 compute-0 sshd-session[335625]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107
Sep 30 18:33:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1643: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 296 KiB/s rd, 404 KiB/s wr, 42 op/s
Sep 30 18:33:23 compute-0 ceph-mon[73755]: pgmap v1643: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 296 KiB/s rd, 404 KiB/s wr, 42 op/s
Sep 30 18:33:23 compute-0 nova_compute[265391]: 2025-09-30 18:33:23.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:33:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:33:23.827Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:33:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:33:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:33:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:33:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:33:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:33:24.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:33:25.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1644: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 48 KiB/s rd, 31 KiB/s wr, 15 op/s
Sep 30 18:33:25 compute-0 ceph-mon[73755]: pgmap v1644: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 48 KiB/s rd, 31 KiB/s wr, 15 op/s
Sep 30 18:33:25 compute-0 sshd-session[335623]: Failed password for root from 45.252.249.158 port 40416 ssh2
Sep 30 18:33:25 compute-0 sshd-session[335625]: Failed password for invalid user testuser from 14.225.220.107 port 38574 ssh2
Sep 30 18:33:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:33:26 compute-0 nova_compute[265391]: 2025-09-30 18:33:26.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:33:26 compute-0 sshd-session[335623]: Received disconnect from 45.252.249.158 port 40416:11: Bye Bye [preauth]
Sep 30 18:33:26 compute-0 sshd-session[335623]: Disconnected from authenticating user root 45.252.249.158 port 40416 [preauth]
Sep 30 18:33:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:33:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:33:26.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:33:27 compute-0 sshd-session[335625]: Received disconnect from 14.225.220.107 port 38574:11: Bye Bye [preauth]
Sep 30 18:33:27 compute-0 sshd-session[335625]: Disconnected from invalid user testuser 14.225.220.107 port 38574 [preauth]
Sep 30 18:33:27 compute-0 ovn_controller[156242]: 2025-09-30T18:33:27Z|00215|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Sep 30 18:33:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:33:27.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:33:27.295Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:33:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1645: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:33:27 compute-0 ceph-mon[73755]: pgmap v1645: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:33:28 compute-0 nova_compute[265391]: 2025-09-30 18:33:28.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:33:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:33:28] "GET /metrics HTTP/1.1" 200 46725 "" "Prometheus/2.51.0"
Sep 30 18:33:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:33:28] "GET /metrics HTTP/1.1" 200 46725 "" "Prometheus/2.51.0"
Sep 30 18:33:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:33:28.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:33:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:33:28.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:33:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:33:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:33:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:33:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:33:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:33:29.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:33:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1646: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 16 KiB/s wr, 1 op/s
Sep 30 18:33:29 compute-0 ceph-mon[73755]: pgmap v1646: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 16 KiB/s wr, 1 op/s
Sep 30 18:33:29 compute-0 podman[276673]: time="2025-09-30T18:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:33:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:33:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10762 "" "Go-http-client/1.1"
Sep 30 18:33:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:33:30.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:33:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:33:31.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:31 compute-0 nova_compute[265391]: 2025-09-30 18:33:31.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:33:31 compute-0 openstack_network_exporter[279566]: ERROR   18:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:33:31 compute-0 openstack_network_exporter[279566]: ERROR   18:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:33:31 compute-0 openstack_network_exporter[279566]: ERROR   18:33:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:33:31 compute-0 openstack_network_exporter[279566]: ERROR   18:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:33:31 compute-0 openstack_network_exporter[279566]: ERROR   18:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:33:31 compute-0 podman[335636]: 2025-09-30 18:33:31.538079175 +0000 UTC m=+0.067319536 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:33:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1647: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 1023 B/s wr, 0 op/s
Sep 30 18:33:31 compute-0 podman[335638]: 2025-09-30 18:33:31.566324508 +0000 UTC m=+0.080673313 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.expose-services=, name=ubi9-minimal, managed_by=edpm_ansible, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, version=9.6, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Sep 30 18:33:31 compute-0 podman[335637]: 2025-09-30 18:33:31.566194624 +0000 UTC m=+0.093496765 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_id=iscsid, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid)
Sep 30 18:33:31 compute-0 ceph-mon[73755]: pgmap v1647: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 1023 B/s wr, 0 op/s
Sep 30 18:33:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:33:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:33:32.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:33:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:33:33.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:33 compute-0 nova_compute[265391]: 2025-09-30 18:33:33.345 2 DEBUG nova.virt.libvirt.driver [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Check if temp file /var/lib/nova/instances/tmpwp_k9oqd exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10968
Sep 30 18:33:33 compute-0 nova_compute[265391]: 2025-09-30 18:33:33.350 2 DEBUG nova.compute.manager [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwp_k9oqd',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='78e1566d-9c5e-49b1-a044-0c46cf002c66',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.12/site-packages/nova/compute/manager.py:9294
Sep 30 18:33:33 compute-0 nova_compute[265391]: 2025-09-30 18:33:33.513 2 DEBUG nova.virt.libvirt.driver [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Check if temp file /var/lib/nova/instances/tmp3f_adtk9 exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10968
Sep 30 18:33:33 compute-0 nova_compute[265391]: 2025-09-30 18:33:33.519 2 DEBUG nova.compute.manager [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3f_adtk9',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='29a2fe9a-add5-43c1-948a-9df854aa4261',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.12/site-packages/nova/compute/manager.py:9294
Sep 30 18:33:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1648: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 1023 B/s wr, 0 op/s
Sep 30 18:33:33 compute-0 ceph-mon[73755]: pgmap v1648: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 1023 B/s wr, 0 op/s
Sep 30 18:33:33 compute-0 nova_compute[265391]: 2025-09-30 18:33:33.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:33:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:33:33.829Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:33:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:33:33.829Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:33:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:33:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:33:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:33:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:33:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:33:34.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:33:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:33:35.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:33:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1649: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 9.1 KiB/s wr, 2 op/s
Sep 30 18:33:35 compute-0 ceph-mon[73755]: pgmap v1649: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 9.1 KiB/s wr, 2 op/s
Sep 30 18:33:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:33:36 compute-0 nova_compute[265391]: 2025-09-30 18:33:36.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:33:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:33:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3722427631' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:33:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:33:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3722427631' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:33:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3722427631' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:33:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3722427631' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:33:36 compute-0 sudo[335699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:33:36 compute-0 sudo[335699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:33:36 compute-0 sudo[335699]: pam_unix(sudo:session): session closed for user root
Sep 30 18:33:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:33:36.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:33:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:33:37.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:33:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:33:37.296Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:33:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:33:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:33:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:33:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:33:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:33:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:33:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:33:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:33:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1650: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 9.1 KiB/s wr, 1 op/s
Sep 30 18:33:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:33:37 compute-0 ceph-mon[73755]: pgmap v1650: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 9.1 KiB/s wr, 1 op/s
Sep 30 18:33:38 compute-0 nova_compute[265391]: 2025-09-30 18:33:38.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:33:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:33:38] "GET /metrics HTTP/1.1" 200 46722 "" "Prometheus/2.51.0"
Sep 30 18:33:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:33:38] "GET /metrics HTTP/1.1" 200 46722 "" "Prometheus/2.51.0"
Sep 30 18:33:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:33:38.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:33:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:33:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:33:38.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:33:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:33:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:33:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:33:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:33:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:33:39.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:39 compute-0 nova_compute[265391]: 2025-09-30 18:33:39.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:33:39 compute-0 nova_compute[265391]: 2025-09-30 18:33:39.531 2 DEBUG nova.compute.manager [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Preparing to wait for external event network-vif-plugged-48ba4743-596d-47a6-a246-afe70e6e1fc6 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:33:39 compute-0 nova_compute[265391]: 2025-09-30 18:33:39.531 2 DEBUG oslo_concurrency.lockutils [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:33:39 compute-0 nova_compute[265391]: 2025-09-30 18:33:39.531 2 DEBUG oslo_concurrency.lockutils [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:33:39 compute-0 nova_compute[265391]: 2025-09-30 18:33:39.531 2 DEBUG oslo_concurrency.lockutils [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:33:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1651: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 9.1 KiB/s wr, 2 op/s
Sep 30 18:33:39 compute-0 ceph-mon[73755]: pgmap v1651: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 9.1 KiB/s wr, 2 op/s
Sep 30 18:33:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:33:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:33:40.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:33:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:33:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:33:41.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:41 compute-0 nova_compute[265391]: 2025-09-30 18:33:41.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:33:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1652: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 8.1 KiB/s wr, 1 op/s
Sep 30 18:33:41 compute-0 ceph-mon[73755]: pgmap v1652: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 8.1 KiB/s wr, 1 op/s
Sep 30 18:33:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:33:43.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:33:43.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1653: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 9.4 KiB/s wr, 2 op/s
Sep 30 18:33:43 compute-0 ceph-mon[73755]: pgmap v1653: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 9.4 KiB/s wr, 2 op/s
Sep 30 18:33:43 compute-0 nova_compute[265391]: 2025-09-30 18:33:43.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:33:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:33:43.830Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:33:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:33:43.830Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:33:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:33:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:33:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:33:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:33:44 compute-0 nova_compute[265391]: 2025-09-30 18:33:44.430 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:33:44 compute-0 nova_compute[265391]: 2025-09-30 18:33:44.430 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:33:44 compute-0 nova_compute[265391]: 2025-09-30 18:33:44.431 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:33:44 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Sep 30 18:33:44 compute-0 nova_compute[265391]: 2025-09-30 18:33:44.811 2 DEBUG nova.compute.manager [req-03d4b808-3d3e-4501-831e-7c07584f604c req-b1e3a9d0-db55-4b59-b746-f5e39f489645 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Received event network-vif-unplugged-48ba4743-596d-47a6-a246-afe70e6e1fc6 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:33:44 compute-0 nova_compute[265391]: 2025-09-30 18:33:44.812 2 DEBUG oslo_concurrency.lockutils [req-03d4b808-3d3e-4501-831e-7c07584f604c req-b1e3a9d0-db55-4b59-b746-f5e39f489645 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:33:44 compute-0 nova_compute[265391]: 2025-09-30 18:33:44.812 2 DEBUG oslo_concurrency.lockutils [req-03d4b808-3d3e-4501-831e-7c07584f604c req-b1e3a9d0-db55-4b59-b746-f5e39f489645 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:33:44 compute-0 nova_compute[265391]: 2025-09-30 18:33:44.812 2 DEBUG oslo_concurrency.lockutils [req-03d4b808-3d3e-4501-831e-7c07584f604c req-b1e3a9d0-db55-4b59-b746-f5e39f489645 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:33:44 compute-0 nova_compute[265391]: 2025-09-30 18:33:44.812 2 DEBUG nova.compute.manager [req-03d4b808-3d3e-4501-831e-7c07584f604c req-b1e3a9d0-db55-4b59-b746-f5e39f489645 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] No event matching network-vif-unplugged-48ba4743-596d-47a6-a246-afe70e6e1fc6 in dict_keys([('network-vif-plugged', '48ba4743-596d-47a6-a246-afe70e6e1fc6')]) pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:349
Sep 30 18:33:44 compute-0 nova_compute[265391]: 2025-09-30 18:33:44.812 2 DEBUG nova.compute.manager [req-03d4b808-3d3e-4501-831e-7c07584f604c req-b1e3a9d0-db55-4b59-b746-f5e39f489645 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Received event network-vif-unplugged-48ba4743-596d-47a6-a246-afe70e6e1fc6 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:33:44 compute-0 nova_compute[265391]: 2025-09-30 18:33:44.952 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:33:44 compute-0 nova_compute[265391]: 2025-09-30 18:33:44.953 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:33:44 compute-0 nova_compute[265391]: 2025-09-30 18:33:44.953 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:33:44 compute-0 nova_compute[265391]: 2025-09-30 18:33:44.954 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:33:44 compute-0 nova_compute[265391]: 2025-09-30 18:33:44.954 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:33:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:33:45.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:33:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:33:45.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:33:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:33:45 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2498694099' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:33:45 compute-0 nova_compute[265391]: 2025-09-30 18:33:45.377 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:33:45 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2498694099' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:33:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1654: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 10 KiB/s wr, 2 op/s
Sep 30 18:33:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:33:45.741 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:33:45 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:33:45.741 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:33:45 compute-0 nova_compute[265391]: 2025-09-30 18:33:45.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:33:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:33:46 compute-0 nova_compute[265391]: 2025-09-30 18:33:46.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:33:46 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/885221029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:33:46 compute-0 ceph-mon[73755]: pgmap v1654: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 10 KiB/s wr, 2 op/s
Sep 30 18:33:46 compute-0 nova_compute[265391]: 2025-09-30 18:33:46.520 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:33:46 compute-0 nova_compute[265391]: 2025-09-30 18:33:46.520 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:33:46 compute-0 nova_compute[265391]: 2025-09-30 18:33:46.524 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:33:46 compute-0 nova_compute[265391]: 2025-09-30 18:33:46.524 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:33:46 compute-0 nova_compute[265391]: 2025-09-30 18:33:46.709 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:33:46 compute-0 nova_compute[265391]: 2025-09-30 18:33:46.710 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:33:46 compute-0 nova_compute[265391]: 2025-09-30 18:33:46.745 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.035s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:33:46 compute-0 nova_compute[265391]: 2025-09-30 18:33:46.745 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3980MB free_disk=39.901153564453125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:33:46 compute-0 nova_compute[265391]: 2025-09-30 18:33:46.746 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:33:46 compute-0 nova_compute[265391]: 2025-09-30 18:33:46.746 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:33:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:33:47.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:33:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:33:47.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:33:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:33:47.297Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:33:47 compute-0 nova_compute[265391]: 2025-09-30 18:33:47.498 2 DEBUG nova.compute.manager [req-ce1a2996-0cd8-49a5-84b5-38a25d9a1871 req-cd8613f1-3d58-4c4a-9f68-a322fd437053 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Received event network-vif-plugged-48ba4743-596d-47a6-a246-afe70e6e1fc6 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:33:47 compute-0 nova_compute[265391]: 2025-09-30 18:33:47.498 2 DEBUG oslo_concurrency.lockutils [req-ce1a2996-0cd8-49a5-84b5-38a25d9a1871 req-cd8613f1-3d58-4c4a-9f68-a322fd437053 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:33:47 compute-0 nova_compute[265391]: 2025-09-30 18:33:47.499 2 DEBUG oslo_concurrency.lockutils [req-ce1a2996-0cd8-49a5-84b5-38a25d9a1871 req-cd8613f1-3d58-4c4a-9f68-a322fd437053 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:33:47 compute-0 nova_compute[265391]: 2025-09-30 18:33:47.499 2 DEBUG oslo_concurrency.lockutils [req-ce1a2996-0cd8-49a5-84b5-38a25d9a1871 req-cd8613f1-3d58-4c4a-9f68-a322fd437053 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:33:47 compute-0 nova_compute[265391]: 2025-09-30 18:33:47.499 2 DEBUG nova.compute.manager [req-ce1a2996-0cd8-49a5-84b5-38a25d9a1871 req-cd8613f1-3d58-4c4a-9f68-a322fd437053 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Processing event network-vif-plugged-48ba4743-596d-47a6-a246-afe70e6e1fc6 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:33:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1655: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 2.3 KiB/s wr, 0 op/s
Sep 30 18:33:47 compute-0 ceph-mon[73755]: pgmap v1655: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 2.3 KiB/s wr, 0 op/s
Sep 30 18:33:47 compute-0 nova_compute[265391]: 2025-09-30 18:33:47.769 2 INFO nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Updating resource usage from migration 3076aa2f-e697-4bdd-98f3-898820dd8d4b
Sep 30 18:33:47 compute-0 nova_compute[265391]: 2025-09-30 18:33:47.770 2 INFO nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Updating resource usage from migration 6fbb2434-2a54-4726-8c0b-62fb0288d275
Sep 30 18:33:47 compute-0 nova_compute[265391]: 2025-09-30 18:33:47.801 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Migration 6fbb2434-2a54-4726-8c0b-62fb0288d275 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Sep 30 18:33:47 compute-0 nova_compute[265391]: 2025-09-30 18:33:47.801 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Migration 3076aa2f-e697-4bdd-98f3-898820dd8d4b is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Sep 30 18:33:47 compute-0 nova_compute[265391]: 2025-09-30 18:33:47.802 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:33:47 compute-0 nova_compute[265391]: 2025-09-30 18:33:47.802 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=39GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:33:46 up  1:37,  0 user,  load average: 0.55, 0.83, 0.86\n', 'num_instances': '2', 'num_vm_active': '2', 'num_task_migrating': '2', 'num_os_type_None': '2', 'num_proj_c634e1c17ed54907969576a0eb8eff50': '2', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:33:47 compute-0 nova_compute[265391]: 2025-09-30 18:33:47.855 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:33:48 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:33:48 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2685991844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:33:48 compute-0 nova_compute[265391]: 2025-09-30 18:33:48.295 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:33:48 compute-0 nova_compute[265391]: 2025-09-30 18:33:48.299 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:33:48 compute-0 nova_compute[265391]: 2025-09-30 18:33:48.555 2 INFO nova.compute.manager [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Took 9.02 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Sep 30 18:33:48 compute-0 nova_compute[265391]: 2025-09-30 18:33:48.555 2 DEBUG nova.compute.manager [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:33:48 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2685991844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:33:48 compute-0 nova_compute[265391]: 2025-09-30 18:33:48.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:33:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:33:48] "GET /metrics HTTP/1.1" 200 46722 "" "Prometheus/2.51.0"
Sep 30 18:33:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:33:48] "GET /metrics HTTP/1.1" 200 46722 "" "Prometheus/2.51.0"
Sep 30 18:33:48 compute-0 nova_compute[265391]: 2025-09-30 18:33:48.806 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:33:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:33:48.849Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:33:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:33:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:33:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:33:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:33:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:33:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:33:49.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:33:49 compute-0 nova_compute[265391]: 2025-09-30 18:33:49.063 2 DEBUG nova.compute.manager [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwp_k9oqd',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='78e1566d-9c5e-49b1-a044-0c46cf002c66',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(6fbb2434-2a54-4726-8c0b-62fb0288d275),old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9659
Sep 30 18:33:49 compute-0 nova_compute[265391]: 2025-09-30 18:33:49.067 2 DEBUG nova.objects.instance [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lazy-loading 'migration_context' on Instance uuid 78e1566d-9c5e-49b1-a044-0c46cf002c66 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:33:49 compute-0 nova_compute[265391]: 2025-09-30 18:33:49.068 2 DEBUG nova.virt.libvirt.driver [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Starting monitoring of live migration _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11543
Sep 30 18:33:49 compute-0 nova_compute[265391]: 2025-09-30 18:33:49.069 2 DEBUG nova.virt.libvirt.driver [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Sep 30 18:33:49 compute-0 nova_compute[265391]: 2025-09-30 18:33:49.069 2 DEBUG nova.virt.libvirt.driver [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Sep 30 18:33:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:33:49.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:49 compute-0 nova_compute[265391]: 2025-09-30 18:33:49.318 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:33:49 compute-0 nova_compute[265391]: 2025-09-30 18:33:49.318 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.573s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:33:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1656: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 2.3 KiB/s wr, 1 op/s
Sep 30 18:33:49 compute-0 nova_compute[265391]: 2025-09-30 18:33:49.571 2 DEBUG nova.virt.libvirt.driver [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Sep 30 18:33:49 compute-0 nova_compute[265391]: 2025-09-30 18:33:49.572 2 DEBUG nova.virt.libvirt.driver [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Sep 30 18:33:49 compute-0 nova_compute[265391]: 2025-09-30 18:33:49.576 2 DEBUG nova.compute.manager [req-0e2d378e-e281-4d5d-80f4-fbcbc0aa89e3 req-b21fb9a1-552b-4ebc-a72d-af0abd792399 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Received event network-changed-48ba4743-596d-47a6-a246-afe70e6e1fc6 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:33:49 compute-0 nova_compute[265391]: 2025-09-30 18:33:49.577 2 DEBUG nova.compute.manager [req-0e2d378e-e281-4d5d-80f4-fbcbc0aa89e3 req-b21fb9a1-552b-4ebc-a72d-af0abd792399 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Refreshing instance network info cache due to event network-changed-48ba4743-596d-47a6-a246-afe70e6e1fc6. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:33:49 compute-0 nova_compute[265391]: 2025-09-30 18:33:49.577 2 DEBUG oslo_concurrency.lockutils [req-0e2d378e-e281-4d5d-80f4-fbcbc0aa89e3 req-b21fb9a1-552b-4ebc-a72d-af0abd792399 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-78e1566d-9c5e-49b1-a044-0c46cf002c66" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:33:49 compute-0 nova_compute[265391]: 2025-09-30 18:33:49.578 2 DEBUG oslo_concurrency.lockutils [req-0e2d378e-e281-4d5d-80f4-fbcbc0aa89e3 req-b21fb9a1-552b-4ebc-a72d-af0abd792399 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-78e1566d-9c5e-49b1-a044-0c46cf002c66" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:33:49 compute-0 nova_compute[265391]: 2025-09-30 18:33:49.578 2 DEBUG nova.network.neutron [req-0e2d378e-e281-4d5d-80f4-fbcbc0aa89e3 req-b21fb9a1-552b-4ebc-a72d-af0abd792399 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Refreshing network info cache for port 48ba4743-596d-47a6-a246-afe70e6e1fc6 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:33:49 compute-0 nova_compute[265391]: 2025-09-30 18:33:49.583 2 DEBUG nova.virt.libvirt.vif [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:32:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-760306639',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-760306639',id=24,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:32:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c634e1c17ed54907969576a0eb8eff50',ramdisk_id='',reservation_id='r-7ntnt7t9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteStrategies-1883747907',owner_user_name='tempest-TestExecuteStrategies-1883747907-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:32:35Z,user_data=None,user_id='623ef4a55c9e4fc28bb65e49246b5008',uuid=78e1566d-9c5e-49b1-a044-0c46cf002c66,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "48ba4743-596d-47a6-a246-afe70e6e1fc6", "address": "fa:16:3e:47:e6:35", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap48ba4743-59", "ovs_interfaceid": "48ba4743-596d-47a6-a246-afe70e6e1fc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:33:49 compute-0 nova_compute[265391]: 2025-09-30 18:33:49.583 2 DEBUG nova.network.os_vif_util [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "48ba4743-596d-47a6-a246-afe70e6e1fc6", "address": "fa:16:3e:47:e6:35", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap48ba4743-59", "ovs_interfaceid": "48ba4743-596d-47a6-a246-afe70e6e1fc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:33:49 compute-0 nova_compute[265391]: 2025-09-30 18:33:49.584 2 DEBUG nova.network.os_vif_util [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:e6:35,bridge_name='br-int',has_traffic_filtering=True,id=48ba4743-596d-47a6-a246-afe70e6e1fc6,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48ba4743-59') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:33:49 compute-0 nova_compute[265391]: 2025-09-30 18:33:49.585 2 DEBUG nova.virt.libvirt.migration [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Updating guest XML with vif config: <interface type="ethernet">
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <mac address="fa:16:3e:47:e6:35"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <model type="virtio"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <mtu size="1442"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <target dev="tap48ba4743-59"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]: </interface>
Sep 30 18:33:49 compute-0 nova_compute[265391]:  _update_vif_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:534
Sep 30 18:33:49 compute-0 nova_compute[265391]: 2025-09-30 18:33:49.586 2 DEBUG nova.virt.libvirt.migration [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _remove_cpu_shared_set_xml input xml=<domain type="kvm">
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <name>instance-00000018</name>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <uuid>78e1566d-9c5e-49b1-a044-0c46cf002c66</uuid>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteStrategies-server-760306639</nova:name>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:32:29</nova:creationTime>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:33:49 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:33:49 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:user uuid="623ef4a55c9e4fc28bb65e49246b5008">tempest-TestExecuteStrategies-1883747907-project-admin</nova:user>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:project uuid="c634e1c17ed54907969576a0eb8eff50">tempest-TestExecuteStrategies-1883747907</nova:project>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:port uuid="48ba4743-596d-47a6-a246-afe70e6e1fc6">
Sep 30 18:33:49 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <system>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <entry name="serial">78e1566d-9c5e-49b1-a044-0c46cf002c66</entry>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <entry name="uuid">78e1566d-9c5e-49b1-a044-0c46cf002c66</entry>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </system>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <os>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   </os>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <features>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   </features>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/78e1566d-9c5e-49b1-a044-0c46cf002c66_disk">
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       </source>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/78e1566d-9c5e-49b1-a044-0c46cf002c66_disk.config">
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       </source>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:47:e6:35"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target dev="tap48ba4743-59"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/78e1566d-9c5e-49b1-a044-0c46cf002c66/console.log" append="off"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       </target>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/78e1566d-9c5e-49b1-a044-0c46cf002c66/console.log" append="off"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </console>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </input>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <video>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </video>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]: </domain>
Sep 30 18:33:49 compute-0 nova_compute[265391]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:241
Sep 30 18:33:49 compute-0 nova_compute[265391]: 2025-09-30 18:33:49.589 2 DEBUG nova.virt.libvirt.migration [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _remove_cpu_shared_set_xml output xml=<domain type="kvm">
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <name>instance-00000018</name>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <uuid>78e1566d-9c5e-49b1-a044-0c46cf002c66</uuid>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteStrategies-server-760306639</nova:name>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:32:29</nova:creationTime>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:33:49 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:33:49 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:user uuid="623ef4a55c9e4fc28bb65e49246b5008">tempest-TestExecuteStrategies-1883747907-project-admin</nova:user>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:project uuid="c634e1c17ed54907969576a0eb8eff50">tempest-TestExecuteStrategies-1883747907</nova:project>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:port uuid="48ba4743-596d-47a6-a246-afe70e6e1fc6">
Sep 30 18:33:49 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <system>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <entry name="serial">78e1566d-9c5e-49b1-a044-0c46cf002c66</entry>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <entry name="uuid">78e1566d-9c5e-49b1-a044-0c46cf002c66</entry>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </system>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <os>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   </os>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <features>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   </features>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/78e1566d-9c5e-49b1-a044-0c46cf002c66_disk">
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       </source>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/78e1566d-9c5e-49b1-a044-0c46cf002c66_disk.config">
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       </source>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:47:e6:35"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target dev="tap48ba4743-59"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/78e1566d-9c5e-49b1-a044-0c46cf002c66/console.log" append="off"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       </target>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/78e1566d-9c5e-49b1-a044-0c46cf002c66/console.log" append="off"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </console>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </input>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <video>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </video>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]: </domain>
Sep 30 18:33:49 compute-0 nova_compute[265391]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:250
Sep 30 18:33:49 compute-0 nova_compute[265391]: 2025-09-30 18:33:49.589 2 DEBUG nova.virt.libvirt.migration [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _update_pci_xml output xml=<domain type="kvm">
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <name>instance-00000018</name>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <uuid>78e1566d-9c5e-49b1-a044-0c46cf002c66</uuid>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteStrategies-server-760306639</nova:name>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:32:29</nova:creationTime>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:33:49 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:33:49 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:user uuid="623ef4a55c9e4fc28bb65e49246b5008">tempest-TestExecuteStrategies-1883747907-project-admin</nova:user>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:project uuid="c634e1c17ed54907969576a0eb8eff50">tempest-TestExecuteStrategies-1883747907</nova:project>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <nova:port uuid="48ba4743-596d-47a6-a246-afe70e6e1fc6">
Sep 30 18:33:49 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <system>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <entry name="serial">78e1566d-9c5e-49b1-a044-0c46cf002c66</entry>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <entry name="uuid">78e1566d-9c5e-49b1-a044-0c46cf002c66</entry>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </system>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <os>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   </os>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <features>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   </features>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/78e1566d-9c5e-49b1-a044-0c46cf002c66_disk">
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       </source>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/78e1566d-9c5e-49b1-a044-0c46cf002c66_disk.config">
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       </source>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:47:e6:35"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target dev="tap48ba4743-59"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/78e1566d-9c5e-49b1-a044-0c46cf002c66/console.log" append="off"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:33:49 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       </target>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/78e1566d-9c5e-49b1-a044-0c46cf002c66/console.log" append="off"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </console>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </input>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <video>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </video>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:33:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:33:49 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:33:49 compute-0 nova_compute[265391]: </domain>
Sep 30 18:33:49 compute-0 nova_compute[265391]:  _update_pci_dev_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:166
Sep 30 18:33:49 compute-0 nova_compute[265391]: 2025-09-30 18:33:49.590 2 DEBUG nova.virt.libvirt.driver [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] About to invoke the migrate API _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11175
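The XML dumps above are the domain definitions Nova prepares for the destination host, and the "About to invoke the migrate API" line marks the hand-off to libvirt's migration call. A rough, hypothetical sketch of what such a call looks like through the libvirt Python bindings; the connection URIs, file name and flag choices here are assumptions for illustration, not Nova's exact code:

    # Illustrative only: drive a peer-to-peer live migration with an
    # updated destination XML, roughly mirroring the log flow above.
    import libvirt

    conn = libvirt.open("qemu:///system")                # source hypervisor (assumed URI)
    dom = conn.lookupByName("instance-00000018")

    dest_xml = open("updated-domain.xml").read()         # XML like the dump above (placeholder file)

    flags = (libvirt.VIR_MIGRATE_LIVE
             | libvirt.VIR_MIGRATE_PEER2PEER
             | libvirt.VIR_MIGRATE_UNDEFINE_SOURCE
             | libvirt.VIR_MIGRATE_PERSIST_DEST)

    params = {
        # Domain XML to use on the target host
        libvirt.VIR_MIGRATE_PARAM_DEST_XML: dest_xml,
        # Migration transport URI (hostname is a placeholder)
        libvirt.VIR_MIGRATE_PARAM_URI: "tcp://compute-1.ctlplane.example.com",
    }

    # Blocks until the migration finishes or fails; a caller would normally
    # run this in a worker thread and poll job stats in parallel, which is
    # what produces the "Migration running for N secs" lines below.
    dom.migrateToURI3("qemu+tcp://compute-1.ctlplane.example.com/system", params, flags)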
Sep 30 18:33:49 compute-0 ceph-mon[73755]: pgmap v1656: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 2.3 KiB/s wr, 1 op/s
Sep 30 18:33:50 compute-0 nova_compute[265391]: 2025-09-30 18:33:50.076 2 DEBUG nova.virt.libvirt.migration [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Current None elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Sep 30 18:33:50 compute-0 nova_compute[265391]: 2025-09-30 18:33:50.077 2 INFO nova.virt.libvirt.migration [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Increasing downtime to 50 ms after 0 sec elapsed time
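The update_downtime debug line above prints the downtime ramp used during the migration as (elapsed seconds, allowed downtime in ms) pairs, starting at 50 ms and ending at 500 ms after 1500 s. A minimal sketch (parameter names assumed, not Nova's exact code) that reproduces the same schedule from a 500 ms maximum downtime, 10 steps, and a 150 s delay between steps:

    # Reproduce the schedule shown in the log:
    # [(0, 50), (150, 95), (300, 140), ..., (1500, 500)]
    def downtime_steps(max_downtime_ms=500, steps=10, delay_s=150):
        base = max_downtime_ms // steps               # first downtime value: 50 ms
        offset = (max_downtime_ms - base) / steps     # per-step increment: 45 ms
        for i in range(steps + 1):
            yield int(delay_s * i), int(base + offset * i)

    print(list(downtime_steps()))
    # [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275),
    #  (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)]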
Sep 30 18:33:50 compute-0 nova_compute[265391]: 2025-09-30 18:33:50.092 2 WARNING neutronclient.v2_0.client [req-0e2d378e-e281-4d5d-80f4-fbcbc0aa89e3 req-b21fb9a1-552b-4ebc-a72d-af0abd792399 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:33:50 compute-0 nova_compute[265391]: 2025-09-30 18:33:50.513 2 WARNING neutronclient.v2_0.client [req-0e2d378e-e281-4d5d-80f4-fbcbc0aa89e3 req-b21fb9a1-552b-4ebc-a72d-af0abd792399 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:33:50 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3464279147' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:33:50 compute-0 nova_compute[265391]: 2025-09-30 18:33:50.658 2 DEBUG nova.network.neutron [req-0e2d378e-e281-4d5d-80f4-fbcbc0aa89e3 req-b21fb9a1-552b-4ebc-a72d-af0abd792399 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Updated VIF entry in instance network info cache for port 48ba4743-596d-47a6-a246-afe70e6e1fc6. _build_network_info_model /usr/lib/python3.12/site-packages/nova/network/neutron.py:3542
Sep 30 18:33:50 compute-0 nova_compute[265391]: 2025-09-30 18:33:50.659 2 DEBUG nova.network.neutron [req-0e2d378e-e281-4d5d-80f4-fbcbc0aa89e3 req-b21fb9a1-552b-4ebc-a72d-af0abd792399 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Updating instance_info_cache with network_info: [{"id": "48ba4743-596d-47a6-a246-afe70e6e1fc6", "address": "fa:16:3e:47:e6:35", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48ba4743-59", "ovs_interfaceid": "48ba4743-596d-47a6-a246-afe70e6e1fc6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:33:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:33:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:33:51.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:33:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:33:51 compute-0 nova_compute[265391]: 2025-09-30 18:33:51.094 2 INFO nova.virt.libvirt.driver [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Migration running for 1 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Sep 30 18:33:51 compute-0 nova_compute[265391]: 2025-09-30 18:33:51.166 2 DEBUG oslo_concurrency.lockutils [req-0e2d378e-e281-4d5d-80f4-fbcbc0aa89e3 req-b21fb9a1-552b-4ebc-a72d-af0abd792399 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-78e1566d-9c5e-49b1-a044-0c46cf002c66" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:33:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:33:51.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:51 compute-0 nova_compute[265391]: 2025-09-30 18:33:51.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:33:51 compute-0 podman[335792]: 2025-09-30 18:33:51.538152053 +0000 UTC m=+0.057708448 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:33:51 compute-0 podman[335790]: 2025-09-30 18:33:51.545114603 +0000 UTC m=+0.070815747 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250930, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Sep 30 18:33:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1657: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 2.3 KiB/s wr, 0 op/s
Sep 30 18:33:51 compute-0 nova_compute[265391]: 2025-09-30 18:33:51.613 2 DEBUG nova.virt.libvirt.migration [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Sep 30 18:33:51 compute-0 nova_compute[265391]: 2025-09-30 18:33:51.613 2 DEBUG nova.virt.libvirt.migration [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Downtime does not need to change update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:671
Sep 30 18:33:51 compute-0 podman[335791]: 2025-09-30 18:33:51.62984906 +0000 UTC m=+0.156019576 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Sep 30 18:33:51 compute-0 ceph-mon[73755]: pgmap v1657: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 2.3 KiB/s wr, 0 op/s
Sep 30 18:33:51 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:33:51.743 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:33:51 compute-0 kernel: tap48ba4743-59 (unregistering): left promiscuous mode
Sep 30 18:33:51 compute-0 NetworkManager[45059]: <info>  [1759257231.7501] device (tap48ba4743-59): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 18:33:51 compute-0 nova_compute[265391]: 2025-09-30 18:33:51.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:33:51 compute-0 ovn_controller[156242]: 2025-09-30T18:33:51Z|00216|binding|INFO|Releasing lport 48ba4743-596d-47a6-a246-afe70e6e1fc6 from this chassis (sb_readonly=0)
Sep 30 18:33:51 compute-0 ovn_controller[156242]: 2025-09-30T18:33:51Z|00217|binding|INFO|Setting lport 48ba4743-596d-47a6-a246-afe70e6e1fc6 down in Southbound
Sep 30 18:33:51 compute-0 ovn_controller[156242]: 2025-09-30T18:33:51Z|00218|binding|INFO|Removing iface tap48ba4743-59 ovn-installed in OVS
Sep 30 18:33:51 compute-0 nova_compute[265391]: 2025-09-30 18:33:51.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:33:51 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:33:51.767 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:e6:35 10.100.0.12'], port_security=['fa:16:3e:47:e6:35 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '81ab3fff-d6d4-4262-9f24-1b212876e52c'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '78e1566d-9c5e-49b1-a044-0c46cf002c66', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6901f664-336b-42d2-bbf7-58951befc8d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c634e1c17ed54907969576a0eb8eff50', 'neutron:revision_number': '10', 'neutron:security_group_ids': '11cf84c1-9641-409c-b5f5-6c5fe3a8afe5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c24d9de-651b-4bf8-842a-1286ab88b11d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=48ba4743-596d-47a6-a246-afe70e6e1fc6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:33:51 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:33:51.768 166158 INFO neutron.agent.ovn.metadata.agent [-] Port 48ba4743-596d-47a6-a246-afe70e6e1fc6 in datapath 6901f664-336b-42d2-bbf7-58951befc8d1 unbound from our chassis
Sep 30 18:33:51 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:33:51.770 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6901f664-336b-42d2-bbf7-58951befc8d1
Sep 30 18:33:51 compute-0 nova_compute[265391]: 2025-09-30 18:33:51.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:33:51 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:33:51.793 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[39169619-35c5-4d9b-8657-b53483287de8]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:33:51 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000018.scope: Deactivated successfully.
Sep 30 18:33:51 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000018.scope: Consumed 15.779s CPU time.
Sep 30 18:33:51 compute-0 systemd-machined[219917]: Machine qemu-17-instance-00000018 terminated.
Sep 30 18:33:51 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:33:51.825 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[c66f4c12-ea01-4e39-90d6-e4bd140b4ed2]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:33:51 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:33:51.828 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[c5880c6a-f400-463b-90f3-5fbe03944dfb]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:33:51 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:33:51.857 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[2528d150-2e8b-4bea-b0b3-2134625073e7]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:33:51 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:33:51.873 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[ffa42c71-96b2-450b-b433-44cbb13afd34]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6901f664-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:41:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 575259, 'reachable_time': 34563, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335872, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:33:51 compute-0 virtqemud[265263]: Unable to get XATTR trusted.libvirt.security.ref_selinux on 78e1566d-9c5e-49b1-a044-0c46cf002c66_disk: No such file or directory
Sep 30 18:33:51 compute-0 virtqemud[265263]: Unable to get XATTR trusted.libvirt.security.ref_dac on 78e1566d-9c5e-49b1-a044-0c46cf002c66_disk: No such file or directory
Sep 30 18:33:51 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:33:51.893 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[2f28b36d-95d9-47d3-a68c-03b74f329635]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6901f664-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 575273, 'tstamp': 575273}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335873, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6901f664-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 575277, 'tstamp': 575277}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335873, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
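The RTM_NEWADDR reply above lists 10.100.0.2/28 and 169.254.169.254/32 on tap6901f664-31 inside the ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1 namespace, alongside the instance port at 10.100.0.12/28 from the earlier Port_Binding update. A small illustrative Python check, using only values taken from these log lines, confirms the addressing is internally consistent:

import ipaddress

subnet = ipaddress.ip_network("10.100.0.0/28")          # from neutron:cidrs 10.100.0.12/28
assert ipaddress.ip_address("10.100.0.12") in subnet    # instance port address
assert ipaddress.ip_address("10.100.0.2") in subnet     # tap6901f664-31 IFA_LOCAL
assert str(subnet.broadcast_address) == "10.100.0.15"   # matches IFA_BROADCAST above
# 169.254.169.254/32 is the metadata service address bound on the same interface;
# the haproxy config further down listens on 169.254.169.254:80.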
Sep 30 18:33:51 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:33:51.894 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6901f664-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:33:51 compute-0 nova_compute[265391]: 2025-09-30 18:33:51.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:33:51 compute-0 NetworkManager[45059]: <info>  [1759257231.8993] manager: (tap48ba4743-59): new Tun device (/org/freedesktop/NetworkManager/Devices/87)
Sep 30 18:33:51 compute-0 nova_compute[265391]: 2025-09-30 18:33:51.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:33:51 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:33:51.904 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6901f664-30, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:33:51 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:33:51.904 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:33:51 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:33:51.904 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6901f664-30, col_values=(('external_ids', {'iface-id': '5b6cbf18-1826-41d0-920f-e9db4f1a1832'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:33:51 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:33:51.904 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:33:51 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:33:51.906 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[4453633b-0ca6-4d6d-bcaa-7a6cde189557]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-6901f664-336b-42d2-bbf7-58951befc8d1\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 6901f664-336b-42d2-bbf7-58951befc8d1\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:33:51 compute-0 nova_compute[265391]: 2025-09-30 18:33:51.912 2 DEBUG nova.compute.manager [req-9225e4b1-88cb-4e34-a964-f737f392aac9 req-c1476f2b-a808-4ebf-acca-27d447e6bbb2 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Received event network-vif-unplugged-48ba4743-596d-47a6-a246-afe70e6e1fc6 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:33:51 compute-0 nova_compute[265391]: 2025-09-30 18:33:51.912 2 DEBUG oslo_concurrency.lockutils [req-9225e4b1-88cb-4e34-a964-f737f392aac9 req-c1476f2b-a808-4ebf-acca-27d447e6bbb2 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:33:51 compute-0 nova_compute[265391]: 2025-09-30 18:33:51.912 2 DEBUG oslo_concurrency.lockutils [req-9225e4b1-88cb-4e34-a964-f737f392aac9 req-c1476f2b-a808-4ebf-acca-27d447e6bbb2 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:33:51 compute-0 nova_compute[265391]: 2025-09-30 18:33:51.912 2 DEBUG oslo_concurrency.lockutils [req-9225e4b1-88cb-4e34-a964-f737f392aac9 req-c1476f2b-a808-4ebf-acca-27d447e6bbb2 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:33:51 compute-0 nova_compute[265391]: 2025-09-30 18:33:51.913 2 DEBUG nova.compute.manager [req-9225e4b1-88cb-4e34-a964-f737f392aac9 req-c1476f2b-a808-4ebf-acca-27d447e6bbb2 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] No waiting events found dispatching network-vif-unplugged-48ba4743-596d-47a6-a246-afe70e6e1fc6 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:33:51 compute-0 nova_compute[265391]: 2025-09-30 18:33:51.913 2 DEBUG nova.compute.manager [req-9225e4b1-88cb-4e34-a964-f737f392aac9 req-c1476f2b-a808-4ebf-acca-27d447e6bbb2 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Received event network-vif-unplugged-48ba4743-596d-47a6-a246-afe70e6e1fc6 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:33:51 compute-0 nova_compute[265391]: 2025-09-30 18:33:51.922 2 DEBUG nova.virt.libvirt.driver [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Migrate API has completed _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11182
Sep 30 18:33:51 compute-0 nova_compute[265391]: 2025-09-30 18:33:51.922 2 DEBUG nova.virt.libvirt.driver [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Migration operation thread has finished _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11230
Sep 30 18:33:51 compute-0 nova_compute[265391]: 2025-09-30 18:33:51.923 2 DEBUG nova.virt.libvirt.driver [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Migration operation thread notification thread_finished /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11533
Sep 30 18:33:52 compute-0 nova_compute[265391]: 2025-09-30 18:33:52.116 2 DEBUG nova.virt.libvirt.guest [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid '78e1566d-9c5e-49b1-a044-0c46cf002c66' (instance-00000018) get_job_info /usr/lib/python3.12/site-packages/nova/virt/libvirt/guest.py:687
Sep 30 18:33:52 compute-0 nova_compute[265391]: 2025-09-30 18:33:52.117 2 INFO nova.virt.libvirt.driver [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Migration operation has completed
Sep 30 18:33:52 compute-0 nova_compute[265391]: 2025-09-30 18:33:52.117 2 INFO nova.compute.manager [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] _post_live_migration() is started..
Sep 30 18:33:52 compute-0 nova_compute[265391]: 2025-09-30 18:33:52.128 2 WARNING neutronclient.v2_0.client [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:33:52 compute-0 nova_compute[265391]: 2025-09-30 18:33:52.128 2 WARNING neutronclient.v2_0.client [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:33:52 compute-0 nova_compute[265391]: 2025-09-30 18:33:52.316 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:33:52 compute-0 nova_compute[265391]: 2025-09-30 18:33:52.316 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:33:52 compute-0 nova_compute[265391]: 2025-09-30 18:33:52.316 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:33:52 compute-0 nova_compute[265391]: 2025-09-30 18:33:52.316 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:33:52 compute-0 nova_compute[265391]: 2025-09-30 18:33:52.317 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:33:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:33:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:33:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
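The mon is handling an "osd blocklist ls" command dispatched by the mgr in the three entries above. For reference, a hedged CLI equivalent of the same query (assumes a usable ceph.conf and client keyring on the node; illustrative only, the mgr issues the command internally over the mon interface as logged):

import json
import subprocess

# Same query the mgr dispatches above, issued via the ceph CLI.
out = subprocess.run(
    ["ceph", "osd", "blocklist", "ls", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout
entries = json.loads(out) if out.strip() else []
print(f"{len(entries)} blocklist entries")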
Sep 30 18:33:52 compute-0 nova_compute[265391]: 2025-09-30 18:33:52.937 2 DEBUG nova.network.neutron [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Activated binding for port 48ba4743-596d-47a6-a246-afe70e6e1fc6 and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.12/site-packages/nova/network/neutron.py:3241
Sep 30 18:33:52 compute-0 nova_compute[265391]: 2025-09-30 18:33:52.938 2 DEBUG nova.compute.manager [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "48ba4743-596d-47a6-a246-afe70e6e1fc6", "address": "fa:16:3e:47:e6:35", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48ba4743-59", "ovs_interfaceid": "48ba4743-596d-47a6-a246-afe70e6e1fc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10059
Sep 30 18:33:52 compute-0 nova_compute[265391]: 2025-09-30 18:33:52.939 2 DEBUG nova.virt.libvirt.vif [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:32:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-760306639',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-760306639',id=24,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:32:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c634e1c17ed54907969576a0eb8eff50',ramdisk_id='',reservation_id='r-7ntnt7t9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteStrategies-1883747907',owner_user_name='tempest-TestExecuteStrategies-1883747907-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:33:28Z,user_data=None,user_id='623ef4a55c9e4fc28bb65e49246b5008',uuid=78e1566d-9c5e-49b1-a044-0c46cf002c66,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "48ba4743-596d-47a6-a246-afe70e6e1fc6", "address": "fa:16:3e:47:e6:35", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48ba4743-59", "ovs_interfaceid": "48ba4743-596d-47a6-a246-afe70e6e1fc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Sep 30 18:33:52 compute-0 nova_compute[265391]: 2025-09-30 18:33:52.939 2 DEBUG nova.network.os_vif_util [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "48ba4743-596d-47a6-a246-afe70e6e1fc6", "address": "fa:16:3e:47:e6:35", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48ba4743-59", "ovs_interfaceid": "48ba4743-596d-47a6-a246-afe70e6e1fc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:33:52 compute-0 nova_compute[265391]: 2025-09-30 18:33:52.940 2 DEBUG nova.network.os_vif_util [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:e6:35,bridge_name='br-int',has_traffic_filtering=True,id=48ba4743-596d-47a6-a246-afe70e6e1fc6,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48ba4743-59') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:33:52 compute-0 nova_compute[265391]: 2025-09-30 18:33:52.940 2 DEBUG os_vif [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:e6:35,bridge_name='br-int',has_traffic_filtering=True,id=48ba4743-596d-47a6-a246-afe70e6e1fc6,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48ba4743-59') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Sep 30 18:33:52 compute-0 nova_compute[265391]: 2025-09-30 18:33:52.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:33:52 compute-0 nova_compute[265391]: 2025-09-30 18:33:52.943 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48ba4743-59, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:33:52 compute-0 nova_compute[265391]: 2025-09-30 18:33:52.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:33:52 compute-0 nova_compute[265391]: 2025-09-30 18:33:52.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:33:52 compute-0 nova_compute[265391]: 2025-09-30 18:33:52.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:33:52 compute-0 nova_compute[265391]: 2025-09-30 18:33:52.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:33:52 compute-0 nova_compute[265391]: 2025-09-30 18:33:52.949 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=5ee073c3-9534-4a44-9860-f7fb4ba8acd9) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:33:52 compute-0 nova_compute[265391]: 2025-09-30 18:33:52.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:33:52 compute-0 nova_compute[265391]: 2025-09-30 18:33:52.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:33:52 compute-0 nova_compute[265391]: 2025-09-30 18:33:52.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:33:52 compute-0 nova_compute[265391]: 2025-09-30 18:33:52.955 2 INFO os_vif [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:e6:35,bridge_name='br-int',has_traffic_filtering=True,id=48ba4743-596d-47a6-a246-afe70e6e1fc6,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48ba4743-59')
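The unplug sequence above removes the instance tap from br-int through ovsdbapp's native OVSDB connection (a DelPortCommand followed by a DbDestroyCommand on a QoS row). A rough manual equivalent for the port removal only, shown purely as an illustration and not what Nova executes:

import subprocess

# Remove the released instance tap from the integration bridge;
# --if-exists makes the call a no-op if the port is already gone.
subprocess.run(
    ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tap48ba4743-59"],
    check=True,
)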
Sep 30 18:33:52 compute-0 nova_compute[265391]: 2025-09-30 18:33:52.956 2 DEBUG oslo_concurrency.lockutils [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:33:52 compute-0 nova_compute[265391]: 2025-09-30 18:33:52.956 2 DEBUG oslo_concurrency.lockutils [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:33:52 compute-0 nova_compute[265391]: 2025-09-30 18:33:52.956 2 DEBUG oslo_concurrency.lockutils [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:33:52 compute-0 nova_compute[265391]: 2025-09-30 18:33:52.956 2 DEBUG nova.compute.manager [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10082
Sep 30 18:33:52 compute-0 nova_compute[265391]: 2025-09-30 18:33:52.957 2 INFO nova.virt.libvirt.driver [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Deleting instance files /var/lib/nova/instances/78e1566d-9c5e-49b1-a044-0c46cf002c66_del
Sep 30 18:33:52 compute-0 nova_compute[265391]: 2025-09-30 18:33:52.957 2 INFO nova.virt.libvirt.driver [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Deletion of /var/lib/nova/instances/78e1566d-9c5e-49b1-a044-0c46cf002c66_del complete
Sep 30 18:33:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:33:53.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:33:53.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1658: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 1023 B/s rd, 2.3 KiB/s wr, 1 op/s
Sep 30 18:33:53 compute-0 sshd-session[335732]: Connection closed by 154.125.120.7 port 52263 [preauth]
Sep 30 18:33:53 compute-0 ceph-mon[73755]: pgmap v1658: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 1023 B/s rd, 2.3 KiB/s wr, 1 op/s
Sep 30 18:33:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:33:53.831Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:33:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:33:53.831Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
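Alertmanager's webhook delivery to the Ceph dashboard receiver on compute-1 is timing out in the two entries above. A minimal probe of the same endpoint (host and port taken from the log; everything else is illustrative) reproduces the symptom when the port is unreachable:

import socket

target = ("compute-1.ctlplane.example.com", 8443)  # resolves to 192.168.122.101 per the log
try:
    with socket.create_connection(target, timeout=5):
        print("TCP connect succeeded; the webhook failure is above the transport layer")
except OSError as exc:
    print(f"TCP connect failed: {exc}")  # consistent with the logged dial tcp i/o timeout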
Sep 30 18:33:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:33:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:33:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:33:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:33:54 compute-0 nova_compute[265391]: 2025-09-30 18:33:54.016 2 DEBUG nova.compute.manager [req-a1dc7d1e-4e15-406a-8ce7-2e5bf9ac5462 req-d86c047b-51b8-4880-9def-0e08f7084042 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Received event network-vif-plugged-48ba4743-596d-47a6-a246-afe70e6e1fc6 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:33:54 compute-0 nova_compute[265391]: 2025-09-30 18:33:54.016 2 DEBUG oslo_concurrency.lockutils [req-a1dc7d1e-4e15-406a-8ce7-2e5bf9ac5462 req-d86c047b-51b8-4880-9def-0e08f7084042 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:33:54 compute-0 nova_compute[265391]: 2025-09-30 18:33:54.016 2 DEBUG oslo_concurrency.lockutils [req-a1dc7d1e-4e15-406a-8ce7-2e5bf9ac5462 req-d86c047b-51b8-4880-9def-0e08f7084042 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:33:54 compute-0 nova_compute[265391]: 2025-09-30 18:33:54.017 2 DEBUG oslo_concurrency.lockutils [req-a1dc7d1e-4e15-406a-8ce7-2e5bf9ac5462 req-d86c047b-51b8-4880-9def-0e08f7084042 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:33:54 compute-0 nova_compute[265391]: 2025-09-30 18:33:54.017 2 DEBUG nova.compute.manager [req-a1dc7d1e-4e15-406a-8ce7-2e5bf9ac5462 req-d86c047b-51b8-4880-9def-0e08f7084042 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] No waiting events found dispatching network-vif-plugged-48ba4743-596d-47a6-a246-afe70e6e1fc6 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:33:54 compute-0 nova_compute[265391]: 2025-09-30 18:33:54.017 2 WARNING nova.compute.manager [req-a1dc7d1e-4e15-406a-8ce7-2e5bf9ac5462 req-d86c047b-51b8-4880-9def-0e08f7084042 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Received unexpected event network-vif-plugged-48ba4743-596d-47a6-a246-afe70e6e1fc6 for instance with vm_state active and task_state migrating.
Sep 30 18:33:54 compute-0 nova_compute[265391]: 2025-09-30 18:33:54.017 2 DEBUG nova.compute.manager [req-a1dc7d1e-4e15-406a-8ce7-2e5bf9ac5462 req-d86c047b-51b8-4880-9def-0e08f7084042 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Received event network-vif-unplugged-48ba4743-596d-47a6-a246-afe70e6e1fc6 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:33:54 compute-0 nova_compute[265391]: 2025-09-30 18:33:54.017 2 DEBUG oslo_concurrency.lockutils [req-a1dc7d1e-4e15-406a-8ce7-2e5bf9ac5462 req-d86c047b-51b8-4880-9def-0e08f7084042 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:33:54 compute-0 nova_compute[265391]: 2025-09-30 18:33:54.018 2 DEBUG oslo_concurrency.lockutils [req-a1dc7d1e-4e15-406a-8ce7-2e5bf9ac5462 req-d86c047b-51b8-4880-9def-0e08f7084042 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:33:54 compute-0 nova_compute[265391]: 2025-09-30 18:33:54.018 2 DEBUG oslo_concurrency.lockutils [req-a1dc7d1e-4e15-406a-8ce7-2e5bf9ac5462 req-d86c047b-51b8-4880-9def-0e08f7084042 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:33:54 compute-0 nova_compute[265391]: 2025-09-30 18:33:54.018 2 DEBUG nova.compute.manager [req-a1dc7d1e-4e15-406a-8ce7-2e5bf9ac5462 req-d86c047b-51b8-4880-9def-0e08f7084042 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] No waiting events found dispatching network-vif-unplugged-48ba4743-596d-47a6-a246-afe70e6e1fc6 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:33:54 compute-0 nova_compute[265391]: 2025-09-30 18:33:54.018 2 DEBUG nova.compute.manager [req-a1dc7d1e-4e15-406a-8ce7-2e5bf9ac5462 req-d86c047b-51b8-4880-9def-0e08f7084042 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Received event network-vif-unplugged-48ba4743-596d-47a6-a246-afe70e6e1fc6 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:33:54 compute-0 nova_compute[265391]: 2025-09-30 18:33:54.018 2 DEBUG nova.compute.manager [req-a1dc7d1e-4e15-406a-8ce7-2e5bf9ac5462 req-d86c047b-51b8-4880-9def-0e08f7084042 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Received event network-vif-unplugged-48ba4743-596d-47a6-a246-afe70e6e1fc6 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:33:54 compute-0 nova_compute[265391]: 2025-09-30 18:33:54.018 2 DEBUG oslo_concurrency.lockutils [req-a1dc7d1e-4e15-406a-8ce7-2e5bf9ac5462 req-d86c047b-51b8-4880-9def-0e08f7084042 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:33:54 compute-0 nova_compute[265391]: 2025-09-30 18:33:54.018 2 DEBUG oslo_concurrency.lockutils [req-a1dc7d1e-4e15-406a-8ce7-2e5bf9ac5462 req-d86c047b-51b8-4880-9def-0e08f7084042 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:33:54 compute-0 nova_compute[265391]: 2025-09-30 18:33:54.018 2 DEBUG oslo_concurrency.lockutils [req-a1dc7d1e-4e15-406a-8ce7-2e5bf9ac5462 req-d86c047b-51b8-4880-9def-0e08f7084042 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:33:54 compute-0 nova_compute[265391]: 2025-09-30 18:33:54.019 2 DEBUG nova.compute.manager [req-a1dc7d1e-4e15-406a-8ce7-2e5bf9ac5462 req-d86c047b-51b8-4880-9def-0e08f7084042 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] No waiting events found dispatching network-vif-unplugged-48ba4743-596d-47a6-a246-afe70e6e1fc6 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:33:54 compute-0 nova_compute[265391]: 2025-09-30 18:33:54.019 2 DEBUG nova.compute.manager [req-a1dc7d1e-4e15-406a-8ce7-2e5bf9ac5462 req-d86c047b-51b8-4880-9def-0e08f7084042 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Received event network-vif-unplugged-48ba4743-596d-47a6-a246-afe70e6e1fc6 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:33:54 compute-0 nova_compute[265391]: 2025-09-30 18:33:54.019 2 DEBUG nova.compute.manager [req-a1dc7d1e-4e15-406a-8ce7-2e5bf9ac5462 req-d86c047b-51b8-4880-9def-0e08f7084042 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Received event network-vif-plugged-48ba4743-596d-47a6-a246-afe70e6e1fc6 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:33:54 compute-0 nova_compute[265391]: 2025-09-30 18:33:54.019 2 DEBUG oslo_concurrency.lockutils [req-a1dc7d1e-4e15-406a-8ce7-2e5bf9ac5462 req-d86c047b-51b8-4880-9def-0e08f7084042 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:33:54 compute-0 nova_compute[265391]: 2025-09-30 18:33:54.019 2 DEBUG oslo_concurrency.lockutils [req-a1dc7d1e-4e15-406a-8ce7-2e5bf9ac5462 req-d86c047b-51b8-4880-9def-0e08f7084042 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:33:54 compute-0 nova_compute[265391]: 2025-09-30 18:33:54.019 2 DEBUG oslo_concurrency.lockutils [req-a1dc7d1e-4e15-406a-8ce7-2e5bf9ac5462 req-d86c047b-51b8-4880-9def-0e08f7084042 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:33:54 compute-0 nova_compute[265391]: 2025-09-30 18:33:54.019 2 DEBUG nova.compute.manager [req-a1dc7d1e-4e15-406a-8ce7-2e5bf9ac5462 req-d86c047b-51b8-4880-9def-0e08f7084042 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] No waiting events found dispatching network-vif-plugged-48ba4743-596d-47a6-a246-afe70e6e1fc6 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:33:54 compute-0 nova_compute[265391]: 2025-09-30 18:33:54.019 2 WARNING nova.compute.manager [req-a1dc7d1e-4e15-406a-8ce7-2e5bf9ac5462 req-d86c047b-51b8-4880-9def-0e08f7084042 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Received unexpected event network-vif-plugged-48ba4743-596d-47a6-a246-afe70e6e1fc6 for instance with vm_state active and task_state migrating.
Sep 30 18:33:54 compute-0 nova_compute[265391]: 2025-09-30 18:33:54.020 2 DEBUG nova.compute.manager [req-a1dc7d1e-4e15-406a-8ce7-2e5bf9ac5462 req-d86c047b-51b8-4880-9def-0e08f7084042 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Received event network-vif-plugged-48ba4743-596d-47a6-a246-afe70e6e1fc6 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:33:54 compute-0 nova_compute[265391]: 2025-09-30 18:33:54.020 2 DEBUG oslo_concurrency.lockutils [req-a1dc7d1e-4e15-406a-8ce7-2e5bf9ac5462 req-d86c047b-51b8-4880-9def-0e08f7084042 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:33:54 compute-0 nova_compute[265391]: 2025-09-30 18:33:54.020 2 DEBUG oslo_concurrency.lockutils [req-a1dc7d1e-4e15-406a-8ce7-2e5bf9ac5462 req-d86c047b-51b8-4880-9def-0e08f7084042 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:33:54 compute-0 nova_compute[265391]: 2025-09-30 18:33:54.020 2 DEBUG oslo_concurrency.lockutils [req-a1dc7d1e-4e15-406a-8ce7-2e5bf9ac5462 req-d86c047b-51b8-4880-9def-0e08f7084042 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:33:54 compute-0 nova_compute[265391]: 2025-09-30 18:33:54.020 2 DEBUG nova.compute.manager [req-a1dc7d1e-4e15-406a-8ce7-2e5bf9ac5462 req-d86c047b-51b8-4880-9def-0e08f7084042 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] No waiting events found dispatching network-vif-plugged-48ba4743-596d-47a6-a246-afe70e6e1fc6 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:33:54 compute-0 nova_compute[265391]: 2025-09-30 18:33:54.020 2 WARNING nova.compute.manager [req-a1dc7d1e-4e15-406a-8ce7-2e5bf9ac5462 req-d86c047b-51b8-4880-9def-0e08f7084042 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Received unexpected event network-vif-plugged-48ba4743-596d-47a6-a246-afe70e6e1fc6 for instance with vm_state active and task_state migrating.
Sep 30 18:33:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:33:54.322 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:33:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:33:54.322 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:33:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:33:54.323 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:33:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:33:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:33:55.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:33:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:33:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:33:55.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:33:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1659: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 5.2 KiB/s rd, 1.2 KiB/s wr, 6 op/s
Sep 30 18:33:55 compute-0 ceph-mon[73755]: pgmap v1659: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 5.2 KiB/s rd, 1.2 KiB/s wr, 6 op/s
Sep 30 18:33:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:33:56 compute-0 nova_compute[265391]: 2025-09-30 18:33:56.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:33:56 compute-0 sudo[335889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:33:56 compute-0 sudo[335889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:33:56 compute-0 sudo[335889]: pam_unix(sudo:session): session closed for user root
Sep 30 18:33:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:33:57.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:33:57.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:33:57.298Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:33:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:33:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3502070360' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:33:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:33:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3502070360' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:33:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1660: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 170 B/s wr, 6 op/s
Sep 30 18:33:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3502070360' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:33:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3502070360' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:33:57 compute-0 ceph-mon[73755]: pgmap v1660: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 170 B/s wr, 6 op/s
Sep 30 18:33:57 compute-0 nova_compute[265391]: 2025-09-30 18:33:57.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:33:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:33:58] "GET /metrics HTTP/1.1" 200 46725 "" "Prometheus/2.51.0"
Sep 30 18:33:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:33:58] "GET /metrics HTTP/1.1" 200 46725 "" "Prometheus/2.51.0"
Sep 30 18:33:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:33:58.850Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:33:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:33:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:33:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:33:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:33:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:33:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:33:59.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:33:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:33:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:33:59.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:33:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1661: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 5.2 KiB/s rd, 170 B/s wr, 6 op/s
Sep 30 18:33:59 compute-0 ceph-mon[73755]: pgmap v1661: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 5.2 KiB/s rd, 170 B/s wr, 6 op/s
Sep 30 18:33:59 compute-0 podman[276673]: time="2025-09-30T18:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:33:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:33:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10765 "" "Go-http-client/1.1"
Sep 30 18:34:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:34:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:34:01.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:34:01.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:01 compute-0 nova_compute[265391]: 2025-09-30 18:34:01.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:34:01 compute-0 openstack_network_exporter[279566]: ERROR   18:34:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:34:01 compute-0 openstack_network_exporter[279566]: ERROR   18:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:34:01 compute-0 openstack_network_exporter[279566]: ERROR   18:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:34:01 compute-0 openstack_network_exporter[279566]: ERROR   18:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:34:01 compute-0 openstack_network_exporter[279566]: ERROR   18:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:34:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1662: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 170 B/s wr, 6 op/s
Sep 30 18:34:01 compute-0 ceph-mon[73755]: pgmap v1662: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 170 B/s wr, 6 op/s
Sep 30 18:34:02 compute-0 nova_compute[265391]: 2025-09-30 18:34:02.494 2 DEBUG oslo_concurrency.lockutils [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:34:02 compute-0 nova_compute[265391]: 2025-09-30 18:34:02.494 2 DEBUG oslo_concurrency.lockutils [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:34:02 compute-0 nova_compute[265391]: 2025-09-30 18:34:02.494 2 DEBUG oslo_concurrency.lockutils [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "78e1566d-9c5e-49b1-a044-0c46cf002c66-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:34:02 compute-0 podman[335920]: 2025-09-30 18:34:02.546090639 +0000 UTC m=+0.069543573 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20250930, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:34:02 compute-0 podman[335921]: 2025-09-30 18:34:02.546563041 +0000 UTC m=+0.068852355 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Sep 30 18:34:02 compute-0 podman[335922]: 2025-09-30 18:34:02.546579272 +0000 UTC m=+0.062604113 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.buildah.version=1.33.7, vcs-type=git, io.openshift.expose-services=, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Sep 30 18:34:02 compute-0 sudo[335977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:34:02 compute-0 sudo[335977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:34:02 compute-0 sudo[335977]: pam_unix(sudo:session): session closed for user root
Sep 30 18:34:02 compute-0 sudo[336002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Sep 30 18:34:02 compute-0 sudo[336002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:34:02 compute-0 nova_compute[265391]: 2025-09-30 18:34:02.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:34:03 compute-0 nova_compute[265391]: 2025-09-30 18:34:03.009 2 DEBUG oslo_concurrency.lockutils [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:34:03 compute-0 nova_compute[265391]: 2025-09-30 18:34:03.010 2 DEBUG oslo_concurrency.lockutils [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:34:03 compute-0 nova_compute[265391]: 2025-09-30 18:34:03.010 2 DEBUG oslo_concurrency.lockutils [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:34:03 compute-0 nova_compute[265391]: 2025-09-30 18:34:03.011 2 DEBUG nova.compute.resource_tracker [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:34:03 compute-0 nova_compute[265391]: 2025-09-30 18:34:03.011 2 DEBUG oslo_concurrency.processutils [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:34:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:34:03.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:34:03.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:03 compute-0 podman[336118]: 2025-09-30 18:34:03.345511525 +0000 UTC m=+0.072451989 container exec 28cb2903608cec8e7c5a39aa3f398cadea7d807beef2cb8cca333795d18a1341 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True)
Sep 30 18:34:03 compute-0 podman[336118]: 2025-09-30 18:34:03.444802489 +0000 UTC m=+0.171742953 container exec_died 28cb2903608cec8e7c5a39aa3f398cadea7d807beef2cb8cca333795d18a1341 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 18:34:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:34:03 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2211126201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:34:03 compute-0 nova_compute[265391]: 2025-09-30 18:34:03.477 2 DEBUG oslo_concurrency.processutils [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:34:03 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2211126201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:34:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1663: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 170 B/s wr, 6 op/s
Sep 30 18:34:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:34:03.832Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:34:03 compute-0 podman[336242]: 2025-09-30 18:34:03.937140894 +0000 UTC m=+0.056050044 container exec 0a64c261699b14850432928cca94e24743e97435fdc1d2cca08edcee1f870789 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:34:03 compute-0 podman[336242]: 2025-09-30 18:34:03.973749513 +0000 UTC m=+0.092658643 container exec_died 0a64c261699b14850432928cca94e24743e97435fdc1d2cca08edcee1f870789 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:34:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:34:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:34:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:34:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:34:04 compute-0 podman[336332]: 2025-09-30 18:34:04.29261054 +0000 UTC m=+0.055795108 container exec 96c1a4d1476c3fe56b2b4855037bb3aa81f60f8974668b12bc71055b46c71430 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Sep 30 18:34:04 compute-0 podman[336332]: 2025-09-30 18:34:04.303649476 +0000 UTC m=+0.066834044 container exec_died 96c1a4d1476c3fe56b2b4855037bb3aa81f60f8974668b12bc71055b46c71430 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 18:34:04 compute-0 nova_compute[265391]: 2025-09-30 18:34:04.424 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:34:04 compute-0 podman[336398]: 2025-09-30 18:34:04.494137414 +0000 UTC m=+0.051739862 container exec e1cdc51276e16f535030605d2a18f629e9fe31bf73cf0a7411e18214526b68e5 (image=quay.io/ceph/haproxy:2.3, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha)
Sep 30 18:34:04 compute-0 podman[336398]: 2025-09-30 18:34:04.504614286 +0000 UTC m=+0.062216694 container exec_died e1cdc51276e16f535030605d2a18f629e9fe31bf73cf0a7411e18214526b68e5 (image=quay.io/ceph/haproxy:2.3, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha)
Sep 30 18:34:04 compute-0 ceph-mon[73755]: pgmap v1663: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 170 B/s wr, 6 op/s
Sep 30 18:34:04 compute-0 nova_compute[265391]: 2025-09-30 18:34:04.524 2 DEBUG nova.virt.libvirt.driver [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:34:04 compute-0 nova_compute[265391]: 2025-09-30 18:34:04.525 2 DEBUG nova.virt.libvirt.driver [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:34:04 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Sep 30 18:34:04 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:34:04.530121) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 18:34:04 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Sep 30 18:34:04 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257244530151, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 2055, "num_deletes": 251, "total_data_size": 3785685, "memory_usage": 3839952, "flush_reason": "Manual Compaction"}
Sep 30 18:34:04 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Sep 30 18:34:04 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257244543915, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 3684699, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40764, "largest_seqno": 42818, "table_properties": {"data_size": 3675513, "index_size": 5680, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19402, "raw_average_key_size": 20, "raw_value_size": 3657048, "raw_average_value_size": 3845, "num_data_blocks": 247, "num_entries": 951, "num_filter_entries": 951, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759257041, "oldest_key_time": 1759257041, "file_creation_time": 1759257244, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:34:04 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 13851 microseconds, and 7299 cpu microseconds.
Sep 30 18:34:04 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:34:04 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:34:04.543964) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 3684699 bytes OK
Sep 30 18:34:04 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:34:04.543989) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Sep 30 18:34:04 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:34:04.545510) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Sep 30 18:34:04 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:34:04.545533) EVENT_LOG_v1 {"time_micros": 1759257244545526, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 18:34:04 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:34:04.545551) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 18:34:04 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 3777169, prev total WAL file size 3777169, number of live WAL files 2.
Sep 30 18:34:04 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:34:04 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:34:04.546927) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Sep 30 18:34:04 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 18:34:04 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(3598KB)], [92(11MB)]
Sep 30 18:34:04 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257244546951, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 15442102, "oldest_snapshot_seqno": -1}
Sep 30 18:34:04 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 6939 keys, 13530559 bytes, temperature: kUnknown
Sep 30 18:34:04 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257244607812, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 13530559, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13485693, "index_size": 26386, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17413, "raw_key_size": 179819, "raw_average_key_size": 25, "raw_value_size": 13362824, "raw_average_value_size": 1925, "num_data_blocks": 1047, "num_entries": 6939, "num_filter_entries": 6939, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759257244, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:34:04 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:34:04 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:34:04.608036) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 13530559 bytes
Sep 30 18:34:04 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:34:04.609195) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 253.4 rd, 222.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 11.2 +0.0 blob) out(12.9 +0.0 blob), read-write-amplify(7.9) write-amplify(3.7) OK, records in: 7458, records dropped: 519 output_compression: NoCompression
Sep 30 18:34:04 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:34:04.609212) EVENT_LOG_v1 {"time_micros": 1759257244609204, "job": 54, "event": "compaction_finished", "compaction_time_micros": 60928, "compaction_time_cpu_micros": 32701, "output_level": 6, "num_output_files": 1, "total_output_size": 13530559, "num_input_records": 7458, "num_output_records": 6939, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 18:34:04 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:34:04 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257244609918, "job": 54, "event": "table_file_deletion", "file_number": 94}
Sep 30 18:34:04 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:34:04 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257244612440, "job": 54, "event": "table_file_deletion", "file_number": 92}
Sep 30 18:34:04 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:34:04.546846) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:34:04 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:34:04.612559) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:34:04 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:34:04.612566) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:34:04 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:34:04.612570) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:34:04 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:34:04.612574) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:34:04 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:34:04.612578) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:34:04 compute-0 nova_compute[265391]: 2025-09-30 18:34:04.680 2 WARNING nova.virt.libvirt.driver [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:34:04 compute-0 nova_compute[265391]: 2025-09-30 18:34:04.681 2 DEBUG oslo_concurrency.processutils [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:34:04 compute-0 nova_compute[265391]: 2025-09-30 18:34:04.704 2 DEBUG oslo_concurrency.processutils [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.023s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:34:04 compute-0 nova_compute[265391]: 2025-09-30 18:34:04.705 2 DEBUG nova.compute.resource_tracker [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4099MB free_disk=39.90114974975586GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:34:04 compute-0 nova_compute[265391]: 2025-09-30 18:34:04.705 2 DEBUG oslo_concurrency.lockutils [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:34:04 compute-0 nova_compute[265391]: 2025-09-30 18:34:04.706 2 DEBUG oslo_concurrency.lockutils [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:34:04 compute-0 podman[336462]: 2025-09-30 18:34:04.736014815 +0000 UTC m=+0.060069868 container exec b5b34e9f9a1e5f8a3081bfbcbc3f8d4401b0873cf75447793762597a4ea89ff4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc, version=2.2.4, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.28.2, release=1793, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Sep 30 18:34:04 compute-0 podman[336462]: 2025-09-30 18:34:04.748689624 +0000 UTC m=+0.072744667 container exec_died b5b34e9f9a1e5f8a3081bfbcbc3f8d4401b0873cf75447793762597a4ea89ff4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, vcs-type=git, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., description=keepalived for Ceph, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Sep 30 18:34:04 compute-0 podman[336528]: 2025-09-30 18:34:04.97418241 +0000 UTC m=+0.054412882 container exec 316307af09cabd347a032cf06e9544ea621ab1be46fe52d1c226ab05d1d9e331 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:34:05 compute-0 podman[336528]: 2025-09-30 18:34:05.005746218 +0000 UTC m=+0.085976630 container exec_died 316307af09cabd347a032cf06e9544ea621ab1be46fe52d1c226ab05d1d9e331 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:34:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:34:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:34:05.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:34:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:34:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:34:05.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:34:05 compute-0 podman[336604]: 2025-09-30 18:34:05.252508306 +0000 UTC m=+0.072621614 container exec cb74c136339a4f8b8601bbbbeb0c8f6158501bcf17f6fc4369db4a5ec1b442a2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 18:34:05 compute-0 podman[336604]: 2025-09-30 18:34:05.433024126 +0000 UTC m=+0.253137434 container exec_died cb74c136339a4f8b8601bbbbeb0c8f6158501bcf17f6fc4369db4a5ec1b442a2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 18:34:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1664: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 4.7 KiB/s rd, 170 B/s wr, 5 op/s
Sep 30 18:34:05 compute-0 ceph-mon[73755]: pgmap v1664: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 4.7 KiB/s rd, 170 B/s wr, 5 op/s
Sep 30 18:34:05 compute-0 nova_compute[265391]: 2025-09-30 18:34:05.743 2 DEBUG nova.compute.resource_tracker [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Migration for instance 78e1566d-9c5e-49b1-a044-0c46cf002c66 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Sep 30 18:34:05 compute-0 podman[336719]: 2025-09-30 18:34:05.792319391 +0000 UTC m=+0.051042634 container exec 262a601a228f752d0c9dbf2b0c2d1b05fe66f88f5c16c2fc8adb324868709b53 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:34:05 compute-0 podman[336719]: 2025-09-30 18:34:05.835852649 +0000 UTC m=+0.094575862 container exec_died 262a601a228f752d0c9dbf2b0c2d1b05fe66f88f5c16c2fc8adb324868709b53 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:34:05 compute-0 sudo[336002]: pam_unix(sudo:session): session closed for user root
Sep 30 18:34:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:34:05 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:34:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:34:05 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:34:05 compute-0 sudo[336764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:34:05 compute-0 sudo[336764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:34:05 compute-0 sudo[336764]: pam_unix(sudo:session): session closed for user root
Sep 30 18:34:06 compute-0 sudo[336789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:34:06 compute-0 sudo[336789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:34:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:34:06 compute-0 nova_compute[265391]: 2025-09-30 18:34:06.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:34:06 compute-0 nova_compute[265391]: 2025-09-30 18:34:06.252 2 DEBUG nova.compute.resource_tracker [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1596
Sep 30 18:34:06 compute-0 nova_compute[265391]: 2025-09-30 18:34:06.252 2 INFO nova.compute.resource_tracker [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Updating resource usage from migration 3076aa2f-e697-4bdd-98f3-898820dd8d4b
Sep 30 18:34:06 compute-0 nova_compute[265391]: 2025-09-30 18:34:06.280 2 DEBUG nova.compute.resource_tracker [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Migration 6fbb2434-2a54-4726-8c0b-62fb0288d275 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Sep 30 18:34:06 compute-0 nova_compute[265391]: 2025-09-30 18:34:06.281 2 DEBUG nova.compute.resource_tracker [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Migration 3076aa2f-e697-4bdd-98f3-898820dd8d4b is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Sep 30 18:34:06 compute-0 nova_compute[265391]: 2025-09-30 18:34:06.281 2 DEBUG nova.compute.resource_tracker [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:34:06 compute-0 nova_compute[265391]: 2025-09-30 18:34:06.281 2 DEBUG nova.compute.resource_tracker [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=39GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:34:04 up  1:37,  0 user,  load average: 0.71, 0.84, 0.86\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_migrating': '1', 'num_os_type_None': '1', 'num_proj_c634e1c17ed54907969576a0eb8eff50': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:34:06 compute-0 nova_compute[265391]: 2025-09-30 18:34:06.316 2 DEBUG oslo_concurrency.processutils [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:34:06 compute-0 sudo[336789]: pam_unix(sudo:session): session closed for user root
Sep 30 18:34:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:34:06 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:34:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:34:06 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:34:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1665: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 556 B/s rd, 0 op/s
Sep 30 18:34:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1666: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 678 B/s rd, 0 op/s
Sep 30 18:34:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:34:06 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:34:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:34:06 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:34:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:34:06 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:34:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:34:06 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:34:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:34:06 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:34:06 compute-0 sudo[336864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:34:06 compute-0 sudo[336864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:34:06 compute-0 sudo[336864]: pam_unix(sudo:session): session closed for user root
Sep 30 18:34:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:34:06 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2320034758' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:34:06 compute-0 sudo[336889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:34:06 compute-0 sudo[336889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:34:06 compute-0 nova_compute[265391]: 2025-09-30 18:34:06.777 2 DEBUG oslo_concurrency.processutils [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:34:06 compute-0 nova_compute[265391]: 2025-09-30 18:34:06.784 2 DEBUG nova.compute.provider_tree [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:34:06 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:34:06 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:34:06 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:34:06 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:34:06 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:34:06 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:34:06 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:34:06 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:34:06 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:34:06 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2320034758' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:34:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:34:07.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:34:07.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:07 compute-0 podman[336957]: 2025-09-30 18:34:07.195996511 +0000 UTC m=+0.043012136 container create 2898387376c7fb80d6b4c90db94b94b62f1359990b25cd6e4c4364fc7e929af4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Sep 30 18:34:07 compute-0 systemd[1]: Started libpod-conmon-2898387376c7fb80d6b4c90db94b94b62f1359990b25cd6e4c4364fc7e929af4.scope.
Sep 30 18:34:07 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:34:07 compute-0 podman[336957]: 2025-09-30 18:34:07.178730464 +0000 UTC m=+0.025746109 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:34:07 compute-0 podman[336957]: 2025-09-30 18:34:07.2919939 +0000 UTC m=+0.139009555 container init 2898387376c7fb80d6b4c90db94b94b62f1359990b25cd6e4c4364fc7e929af4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_jepsen, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:34:07 compute-0 nova_compute[265391]: 2025-09-30 18:34:07.292 2 DEBUG nova.scheduler.client.report [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:34:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:34:07.299Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:34:07 compute-0 podman[336957]: 2025-09-30 18:34:07.303037477 +0000 UTC m=+0.150053102 container start 2898387376c7fb80d6b4c90db94b94b62f1359990b25cd6e4c4364fc7e929af4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True)
Sep 30 18:34:07 compute-0 podman[336957]: 2025-09-30 18:34:07.306230289 +0000 UTC m=+0.153245934 container attach 2898387376c7fb80d6b4c90db94b94b62f1359990b25cd6e4c4364fc7e929af4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:34:07 compute-0 lucid_jepsen[336974]: 167 167
Sep 30 18:34:07 compute-0 systemd[1]: libpod-2898387376c7fb80d6b4c90db94b94b62f1359990b25cd6e4c4364fc7e929af4.scope: Deactivated successfully.
Sep 30 18:34:07 compute-0 podman[336957]: 2025-09-30 18:34:07.309405762 +0000 UTC m=+0.156421397 container died 2898387376c7fb80d6b4c90db94b94b62f1359990b25cd6e4c4364fc7e929af4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_jepsen, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Sep 30 18:34:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:34:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:34:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca4f7e31a83c91cbba240ec385f5eded79f45cf54dd8605ab60b33539aa34bed-merged.mount: Deactivated successfully.
Sep 30 18:34:07 compute-0 podman[336957]: 2025-09-30 18:34:07.348887885 +0000 UTC m=+0.195903510 container remove 2898387376c7fb80d6b4c90db94b94b62f1359990b25cd6e4c4364fc7e929af4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:34:07 compute-0 systemd[1]: libpod-conmon-2898387376c7fb80d6b4c90db94b94b62f1359990b25cd6e4c4364fc7e929af4.scope: Deactivated successfully.
Sep 30 18:34:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:34:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:34:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:34:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:34:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:34:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:34:07 compute-0 podman[336997]: 2025-09-30 18:34:07.520815713 +0000 UTC m=+0.038630553 container create fcfcf48835c35fd103cf016a8a59469ecfe37cfe8949034858a88868ea237f48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_greider, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Sep 30 18:34:07 compute-0 systemd[1]: Started libpod-conmon-fcfcf48835c35fd103cf016a8a59469ecfe37cfe8949034858a88868ea237f48.scope.
Sep 30 18:34:07 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae1e7eed396b66bf4c4ad2bf7d1e79dd22daa1e6a81247759573767296e1e0eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae1e7eed396b66bf4c4ad2bf7d1e79dd22daa1e6a81247759573767296e1e0eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae1e7eed396b66bf4c4ad2bf7d1e79dd22daa1e6a81247759573767296e1e0eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae1e7eed396b66bf4c4ad2bf7d1e79dd22daa1e6a81247759573767296e1e0eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae1e7eed396b66bf4c4ad2bf7d1e79dd22daa1e6a81247759573767296e1e0eb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:34:07 compute-0 podman[336997]: 2025-09-30 18:34:07.584680499 +0000 UTC m=+0.102495269 container init fcfcf48835c35fd103cf016a8a59469ecfe37cfe8949034858a88868ea237f48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_greider, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:34:07 compute-0 podman[336997]: 2025-09-30 18:34:07.591171547 +0000 UTC m=+0.108986287 container start fcfcf48835c35fd103cf016a8a59469ecfe37cfe8949034858a88868ea237f48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:34:07 compute-0 podman[336997]: 2025-09-30 18:34:07.594285047 +0000 UTC m=+0.112099807 container attach fcfcf48835c35fd103cf016a8a59469ecfe37cfe8949034858a88868ea237f48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_greider, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Sep 30 18:34:07 compute-0 podman[336997]: 2025-09-30 18:34:07.503913965 +0000 UTC m=+0.021728735 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:34:07 compute-0 nova_compute[265391]: 2025-09-30 18:34:07.803 2 DEBUG nova.compute.resource_tracker [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:34:07 compute-0 nova_compute[265391]: 2025-09-30 18:34:07.804 2 DEBUG oslo_concurrency.lockutils [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.098s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:34:07 compute-0 nova_compute[265391]: 2025-09-30 18:34:07.821 2 INFO nova.compute.manager [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Sep 30 18:34:07 compute-0 interesting_greider[337015]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:34:07 compute-0 interesting_greider[337015]: --> All data devices are unavailable
Sep 30 18:34:07 compute-0 systemd[1]: libpod-fcfcf48835c35fd103cf016a8a59469ecfe37cfe8949034858a88868ea237f48.scope: Deactivated successfully.
Sep 30 18:34:07 compute-0 podman[336997]: 2025-09-30 18:34:07.921024269 +0000 UTC m=+0.438839039 container died fcfcf48835c35fd103cf016a8a59469ecfe37cfe8949034858a88868ea237f48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:34:07 compute-0 ceph-mon[73755]: pgmap v1665: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 556 B/s rd, 0 op/s
Sep 30 18:34:07 compute-0 ceph-mon[73755]: pgmap v1666: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 678 B/s rd, 0 op/s
Sep 30 18:34:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:34:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae1e7eed396b66bf4c4ad2bf7d1e79dd22daa1e6a81247759573767296e1e0eb-merged.mount: Deactivated successfully.
Sep 30 18:34:07 compute-0 nova_compute[265391]: 2025-09-30 18:34:07.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:34:07 compute-0 podman[336997]: 2025-09-30 18:34:07.962008211 +0000 UTC m=+0.479822961 container remove fcfcf48835c35fd103cf016a8a59469ecfe37cfe8949034858a88868ea237f48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Sep 30 18:34:07 compute-0 systemd[1]: libpod-conmon-fcfcf48835c35fd103cf016a8a59469ecfe37cfe8949034858a88868ea237f48.scope: Deactivated successfully.
Sep 30 18:34:08 compute-0 sudo[336889]: pam_unix(sudo:session): session closed for user root
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002276388363205704 of space, bias 1.0, pg target 0.45527767264114083 quantized to 32 (current 32)
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:34:08
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['backups', 'vms', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'images', 'cephfs.cephfs.meta', '.nfs', 'default.rgw.control', 'default.rgw.log', '.mgr']
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:34:08 compute-0 sudo[337045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:34:08 compute-0 sudo[337045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:34:08 compute-0 sudo[337045]: pam_unix(sudo:session): session closed for user root
Sep 30 18:34:08 compute-0 sudo[337070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:34:08 compute-0 sudo[337070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:34:08 compute-0 podman[337137]: 2025-09-30 18:34:08.499324091 +0000 UTC m=+0.036658701 container create e13e13d212811aec24c02b4939eb11554eb86185205f97af2cd09525a3d5c5ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Sep 30 18:34:08 compute-0 systemd[1]: Started libpod-conmon-e13e13d212811aec24c02b4939eb11554eb86185205f97af2cd09525a3d5c5ec.scope.
Sep 30 18:34:08 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:34:08 compute-0 podman[337137]: 2025-09-30 18:34:08.572589031 +0000 UTC m=+0.109923671 container init e13e13d212811aec24c02b4939eb11554eb86185205f97af2cd09525a3d5c5ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_liskov, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:34:08 compute-0 podman[337137]: 2025-09-30 18:34:08.482132276 +0000 UTC m=+0.019466916 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:34:08 compute-0 podman[337137]: 2025-09-30 18:34:08.578301529 +0000 UTC m=+0.115636139 container start e13e13d212811aec24c02b4939eb11554eb86185205f97af2cd09525a3d5c5ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_liskov, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:34:08 compute-0 podman[337137]: 2025-09-30 18:34:08.581168753 +0000 UTC m=+0.118503363 container attach e13e13d212811aec24c02b4939eb11554eb86185205f97af2cd09525a3d5c5ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Sep 30 18:34:08 compute-0 friendly_liskov[337153]: 167 167
Sep 30 18:34:08 compute-0 systemd[1]: libpod-e13e13d212811aec24c02b4939eb11554eb86185205f97af2cd09525a3d5c5ec.scope: Deactivated successfully.
Sep 30 18:34:08 compute-0 conmon[337153]: conmon e13e13d212811aec24c0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e13e13d212811aec24c02b4939eb11554eb86185205f97af2cd09525a3d5c5ec.scope/container/memory.events
Sep 30 18:34:08 compute-0 podman[337137]: 2025-09-30 18:34:08.583647518 +0000 UTC m=+0.120982128 container died e13e13d212811aec24c02b4939eb11554eb86185205f97af2cd09525a3d5c5ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1667: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 339 B/s rd, 0 op/s
Sep 30 18:34:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7583486ce5acc458a6de9b0726669cbfadb3b1f3fde34f797d6286055f8fd35-merged.mount: Deactivated successfully.
Sep 30 18:34:08 compute-0 podman[337137]: 2025-09-30 18:34:08.617545096 +0000 UTC m=+0.154879706 container remove e13e13d212811aec24c02b4939eb11554eb86185205f97af2cd09525a3d5c5ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Sep 30 18:34:08 compute-0 systemd[1]: libpod-conmon-e13e13d212811aec24c02b4939eb11554eb86185205f97af2cd09525a3d5c5ec.scope: Deactivated successfully.
Sep 30 18:34:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:34:08] "GET /metrics HTTP/1.1" 200 46725 "" "Prometheus/2.51.0"
Sep 30 18:34:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:34:08] "GET /metrics HTTP/1.1" 200 46725 "" "Prometheus/2.51.0"
Sep 30 18:34:08 compute-0 podman[337176]: 2025-09-30 18:34:08.806768812 +0000 UTC m=+0.038604192 container create de5cf8d6bfd93dad06a1c9179e5a66a4820d8cb8f4bcfa7247cf7599307f6da4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_mendel, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Sep 30 18:34:08 compute-0 systemd[1]: Started libpod-conmon-de5cf8d6bfd93dad06a1c9179e5a66a4820d8cb8f4bcfa7247cf7599307f6da4.scope.
Sep 30 18:34:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:34:08.851Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:34:08 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ec257c6f2a803b6d0d1fddf154fe92a74e8a92a028c77b767da5dc2dd570b45/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ec257c6f2a803b6d0d1fddf154fe92a74e8a92a028c77b767da5dc2dd570b45/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ec257c6f2a803b6d0d1fddf154fe92a74e8a92a028c77b767da5dc2dd570b45/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ec257c6f2a803b6d0d1fddf154fe92a74e8a92a028c77b767da5dc2dd570b45/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:34:08 compute-0 podman[337176]: 2025-09-30 18:34:08.790263084 +0000 UTC m=+0.022098474 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:34:08 compute-0 podman[337176]: 2025-09-30 18:34:08.8876708 +0000 UTC m=+0.119506200 container init de5cf8d6bfd93dad06a1c9179e5a66a4820d8cb8f4bcfa7247cf7599307f6da4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_mendel, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:34:08 compute-0 nova_compute[265391]: 2025-09-30 18:34:08.893 2 INFO nova.scheduler.client.report [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Deleted allocation for migration 6fbb2434-2a54-4726-8c0b-62fb0288d275
Sep 30 18:34:08 compute-0 podman[337176]: 2025-09-30 18:34:08.893980223 +0000 UTC m=+0.125815603 container start de5cf8d6bfd93dad06a1c9179e5a66a4820d8cb8f4bcfa7247cf7599307f6da4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_mendel, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:34:08 compute-0 nova_compute[265391]: 2025-09-30 18:34:08.894 2 DEBUG nova.virt.libvirt.driver [None req-7087f7f3-1215-4d86-811d-00bd39ad11e9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 78e1566d-9c5e-49b1-a044-0c46cf002c66] Live migration monitoring is all done _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11566
Sep 30 18:34:08 compute-0 podman[337176]: 2025-09-30 18:34:08.896820677 +0000 UTC m=+0.128656057 container attach de5cf8d6bfd93dad06a1c9179e5a66a4820d8cb8f4bcfa7247cf7599307f6da4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_mendel, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Sep 30 18:34:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:34:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:34:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:34:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:34:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:34:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:34:09.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:34:09 compute-0 xenodochial_mendel[337192]: {
Sep 30 18:34:09 compute-0 xenodochial_mendel[337192]:     "0": [
Sep 30 18:34:09 compute-0 xenodochial_mendel[337192]:         {
Sep 30 18:34:09 compute-0 xenodochial_mendel[337192]:             "devices": [
Sep 30 18:34:09 compute-0 xenodochial_mendel[337192]:                 "/dev/loop3"
Sep 30 18:34:09 compute-0 xenodochial_mendel[337192]:             ],
Sep 30 18:34:09 compute-0 xenodochial_mendel[337192]:             "lv_name": "ceph_lv0",
Sep 30 18:34:09 compute-0 xenodochial_mendel[337192]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:34:09 compute-0 xenodochial_mendel[337192]:             "lv_size": "21470642176",
Sep 30 18:34:09 compute-0 xenodochial_mendel[337192]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:34:09 compute-0 xenodochial_mendel[337192]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:34:09 compute-0 xenodochial_mendel[337192]:             "name": "ceph_lv0",
Sep 30 18:34:09 compute-0 xenodochial_mendel[337192]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:34:09 compute-0 xenodochial_mendel[337192]:             "tags": {
Sep 30 18:34:09 compute-0 xenodochial_mendel[337192]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:34:09 compute-0 xenodochial_mendel[337192]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:34:09 compute-0 xenodochial_mendel[337192]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:34:09 compute-0 xenodochial_mendel[337192]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:34:09 compute-0 xenodochial_mendel[337192]:                 "ceph.cluster_name": "ceph",
Sep 30 18:34:09 compute-0 xenodochial_mendel[337192]:                 "ceph.crush_device_class": "",
Sep 30 18:34:09 compute-0 xenodochial_mendel[337192]:                 "ceph.encrypted": "0",
Sep 30 18:34:09 compute-0 xenodochial_mendel[337192]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:34:09 compute-0 xenodochial_mendel[337192]:                 "ceph.osd_id": "0",
Sep 30 18:34:09 compute-0 xenodochial_mendel[337192]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:34:09 compute-0 xenodochial_mendel[337192]:                 "ceph.type": "block",
Sep 30 18:34:09 compute-0 xenodochial_mendel[337192]:                 "ceph.vdo": "0",
Sep 30 18:34:09 compute-0 xenodochial_mendel[337192]:                 "ceph.with_tpm": "0"
Sep 30 18:34:09 compute-0 xenodochial_mendel[337192]:             },
Sep 30 18:34:09 compute-0 xenodochial_mendel[337192]:             "type": "block",
Sep 30 18:34:09 compute-0 xenodochial_mendel[337192]:             "vg_name": "ceph_vg0"
Sep 30 18:34:09 compute-0 xenodochial_mendel[337192]:         }
Sep 30 18:34:09 compute-0 xenodochial_mendel[337192]:     ]
Sep 30 18:34:09 compute-0 xenodochial_mendel[337192]: }
Sep 30 18:34:09 compute-0 systemd[1]: libpod-de5cf8d6bfd93dad06a1c9179e5a66a4820d8cb8f4bcfa7247cf7599307f6da4.scope: Deactivated successfully.
Sep 30 18:34:09 compute-0 podman[337176]: 2025-09-30 18:34:09.172030462 +0000 UTC m=+0.403865852 container died de5cf8d6bfd93dad06a1c9179e5a66a4820d8cb8f4bcfa7247cf7599307f6da4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_mendel, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:34:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:34:09.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ec257c6f2a803b6d0d1fddf154fe92a74e8a92a028c77b767da5dc2dd570b45-merged.mount: Deactivated successfully.
Sep 30 18:34:09 compute-0 podman[337176]: 2025-09-30 18:34:09.215383526 +0000 UTC m=+0.447218906 container remove de5cf8d6bfd93dad06a1c9179e5a66a4820d8cb8f4bcfa7247cf7599307f6da4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_mendel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Sep 30 18:34:09 compute-0 systemd[1]: libpod-conmon-de5cf8d6bfd93dad06a1c9179e5a66a4820d8cb8f4bcfa7247cf7599307f6da4.scope: Deactivated successfully.
Sep 30 18:34:09 compute-0 sudo[337070]: pam_unix(sudo:session): session closed for user root
Sep 30 18:34:09 compute-0 sudo[337213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:34:09 compute-0 sudo[337213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:34:09 compute-0 sudo[337213]: pam_unix(sudo:session): session closed for user root
Sep 30 18:34:09 compute-0 sudo[337238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:34:09 compute-0 sudo[337238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:34:09 compute-0 podman[337307]: 2025-09-30 18:34:09.876942466 +0000 UTC m=+0.057036299 container create fee45d784689e978c8c05a5a6d0a6e5c5443eef7e14763089713ab0583d73567 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_jemison, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 18:34:09 compute-0 nova_compute[265391]: 2025-09-30 18:34:09.911 2 DEBUG nova.compute.manager [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Preparing to wait for external event network-vif-plugged-f942c9c9-85a4-47cf-9428-7e266b83b49b prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:34:09 compute-0 nova_compute[265391]: 2025-09-30 18:34:09.911 2 DEBUG oslo_concurrency.lockutils [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:34:09 compute-0 nova_compute[265391]: 2025-09-30 18:34:09.911 2 DEBUG oslo_concurrency.lockutils [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:34:09 compute-0 nova_compute[265391]: 2025-09-30 18:34:09.912 2 DEBUG oslo_concurrency.lockutils [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:34:09 compute-0 systemd[1]: Started libpod-conmon-fee45d784689e978c8c05a5a6d0a6e5c5443eef7e14763089713ab0583d73567.scope.
Sep 30 18:34:09 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:34:09 compute-0 podman[337307]: 2025-09-30 18:34:09.841541219 +0000 UTC m=+0.021635072 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:34:09 compute-0 ceph-mon[73755]: pgmap v1667: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 339 B/s rd, 0 op/s
Sep 30 18:34:09 compute-0 podman[337307]: 2025-09-30 18:34:09.950670158 +0000 UTC m=+0.130763971 container init fee45d784689e978c8c05a5a6d0a6e5c5443eef7e14763089713ab0583d73567 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Sep 30 18:34:09 compute-0 podman[337307]: 2025-09-30 18:34:09.958303055 +0000 UTC m=+0.138396848 container start fee45d784689e978c8c05a5a6d0a6e5c5443eef7e14763089713ab0583d73567 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_jemison, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Sep 30 18:34:09 compute-0 podman[337307]: 2025-09-30 18:34:09.961626132 +0000 UTC m=+0.141719935 container attach fee45d784689e978c8c05a5a6d0a6e5c5443eef7e14763089713ab0583d73567 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:34:09 compute-0 compassionate_jemison[337323]: 167 167
Sep 30 18:34:09 compute-0 systemd[1]: libpod-fee45d784689e978c8c05a5a6d0a6e5c5443eef7e14763089713ab0583d73567.scope: Deactivated successfully.
Sep 30 18:34:09 compute-0 podman[337307]: 2025-09-30 18:34:09.964847415 +0000 UTC m=+0.144941208 container died fee45d784689e978c8c05a5a6d0a6e5c5443eef7e14763089713ab0583d73567 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_jemison, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Sep 30 18:34:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-791e21a4bbce4b0efb3770cbb5f47dbfb91b19b966916e36c9bfe535da6382e5-merged.mount: Deactivated successfully.
Sep 30 18:34:09 compute-0 podman[337307]: 2025-09-30 18:34:09.997661556 +0000 UTC m=+0.177755349 container remove fee45d784689e978c8c05a5a6d0a6e5c5443eef7e14763089713ab0583d73567 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_jemison, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True)
Sep 30 18:34:10 compute-0 systemd[1]: libpod-conmon-fee45d784689e978c8c05a5a6d0a6e5c5443eef7e14763089713ab0583d73567.scope: Deactivated successfully.
Sep 30 18:34:10 compute-0 podman[337348]: 2025-09-30 18:34:10.165718883 +0000 UTC m=+0.041436785 container create 8f3fb2d6e880d6eb2ea6fc90dc44768e64d9b1ff0f0f29446f01fb7566cff5a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_beaver, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Sep 30 18:34:10 compute-0 systemd[1]: Started libpod-conmon-8f3fb2d6e880d6eb2ea6fc90dc44768e64d9b1ff0f0f29446f01fb7566cff5a9.scope.
Sep 30 18:34:10 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:34:10 compute-0 podman[337348]: 2025-09-30 18:34:10.145473818 +0000 UTC m=+0.021191770 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:34:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16272c3ae17ee55c93da1809798a90168098693a4383d0209110eeddc6e744c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:34:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16272c3ae17ee55c93da1809798a90168098693a4383d0209110eeddc6e744c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:34:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16272c3ae17ee55c93da1809798a90168098693a4383d0209110eeddc6e744c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:34:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16272c3ae17ee55c93da1809798a90168098693a4383d0209110eeddc6e744c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:34:10 compute-0 podman[337348]: 2025-09-30 18:34:10.256428405 +0000 UTC m=+0.132146327 container init 8f3fb2d6e880d6eb2ea6fc90dc44768e64d9b1ff0f0f29446f01fb7566cff5a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_beaver, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Sep 30 18:34:10 compute-0 podman[337348]: 2025-09-30 18:34:10.261822395 +0000 UTC m=+0.137540297 container start 8f3fb2d6e880d6eb2ea6fc90dc44768e64d9b1ff0f0f29446f01fb7566cff5a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Sep 30 18:34:10 compute-0 podman[337348]: 2025-09-30 18:34:10.264885824 +0000 UTC m=+0.140603756 container attach 8f3fb2d6e880d6eb2ea6fc90dc44768e64d9b1ff0f0f29446f01fb7566cff5a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_beaver, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:34:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1668: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 679 B/s rd, 0 op/s
Sep 30 18:34:10 compute-0 lvm[337439]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:34:10 compute-0 lvm[337439]: VG ceph_vg0 finished
Sep 30 18:34:10 compute-0 lucid_beaver[337365]: {}
Sep 30 18:34:10 compute-0 systemd[1]: libpod-8f3fb2d6e880d6eb2ea6fc90dc44768e64d9b1ff0f0f29446f01fb7566cff5a9.scope: Deactivated successfully.
Sep 30 18:34:10 compute-0 systemd[1]: libpod-8f3fb2d6e880d6eb2ea6fc90dc44768e64d9b1ff0f0f29446f01fb7566cff5a9.scope: Consumed 1.077s CPU time.
Sep 30 18:34:10 compute-0 podman[337348]: 2025-09-30 18:34:10.955993492 +0000 UTC m=+0.831711384 container died 8f3fb2d6e880d6eb2ea6fc90dc44768e64d9b1ff0f0f29446f01fb7566cff5a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_beaver, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 18:34:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-16272c3ae17ee55c93da1809798a90168098693a4383d0209110eeddc6e744c8-merged.mount: Deactivated successfully.
Sep 30 18:34:10 compute-0 podman[337348]: 2025-09-30 18:34:10.998272508 +0000 UTC m=+0.873990420 container remove 8f3fb2d6e880d6eb2ea6fc90dc44768e64d9b1ff0f0f29446f01fb7566cff5a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:34:11 compute-0 systemd[1]: libpod-conmon-8f3fb2d6e880d6eb2ea6fc90dc44768e64d9b1ff0f0f29446f01fb7566cff5a9.scope: Deactivated successfully.
Sep 30 18:34:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:34:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:34:11.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:11 compute-0 sudo[337238]: pam_unix(sudo:session): session closed for user root
Sep 30 18:34:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:34:11 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:34:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:34:11 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:34:11 compute-0 sudo[337455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:34:11 compute-0 sudo[337455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:34:11 compute-0 sudo[337455]: pam_unix(sudo:session): session closed for user root
Sep 30 18:34:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:34:11.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:11 compute-0 nova_compute[265391]: 2025-09-30 18:34:11.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:34:12 compute-0 ceph-mon[73755]: pgmap v1668: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 679 B/s rd, 0 op/s
Sep 30 18:34:12 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:34:12 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:34:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1669: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 678 B/s rd, 0 op/s
Sep 30 18:34:12 compute-0 nova_compute[265391]: 2025-09-30 18:34:12.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:34:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:34:13.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:34:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:34:13.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:34:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:34:13.832Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:34:13 compute-0 nova_compute[265391]: 2025-09-30 18:34:13.863 2 DEBUG nova.compute.manager [req-369cb39b-d8df-40b0-82e6-cda39f888454 req-49df384d-ecde-4861-bd67-8865d5798374 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Received event network-vif-unplugged-f942c9c9-85a4-47cf-9428-7e266b83b49b external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:34:13 compute-0 nova_compute[265391]: 2025-09-30 18:34:13.864 2 DEBUG oslo_concurrency.lockutils [req-369cb39b-d8df-40b0-82e6-cda39f888454 req-49df384d-ecde-4861-bd67-8865d5798374 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:34:13 compute-0 nova_compute[265391]: 2025-09-30 18:34:13.864 2 DEBUG oslo_concurrency.lockutils [req-369cb39b-d8df-40b0-82e6-cda39f888454 req-49df384d-ecde-4861-bd67-8865d5798374 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:34:13 compute-0 nova_compute[265391]: 2025-09-30 18:34:13.864 2 DEBUG oslo_concurrency.lockutils [req-369cb39b-d8df-40b0-82e6-cda39f888454 req-49df384d-ecde-4861-bd67-8865d5798374 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:34:13 compute-0 nova_compute[265391]: 2025-09-30 18:34:13.864 2 DEBUG nova.compute.manager [req-369cb39b-d8df-40b0-82e6-cda39f888454 req-49df384d-ecde-4861-bd67-8865d5798374 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] No event matching network-vif-unplugged-f942c9c9-85a4-47cf-9428-7e266b83b49b in dict_keys([('network-vif-plugged', 'f942c9c9-85a4-47cf-9428-7e266b83b49b')]) pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:349
Sep 30 18:34:13 compute-0 nova_compute[265391]: 2025-09-30 18:34:13.864 2 DEBUG nova.compute.manager [req-369cb39b-d8df-40b0-82e6-cda39f888454 req-49df384d-ecde-4861-bd67-8865d5798374 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Received event network-vif-unplugged-f942c9c9-85a4-47cf-9428-7e266b83b49b for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:34:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:34:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:34:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:34:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:34:14 compute-0 ceph-mon[73755]: pgmap v1669: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 678 B/s rd, 0 op/s
Sep 30 18:34:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1670: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 678 B/s rd, 11 KiB/s wr, 2 op/s
Sep 30 18:34:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:34:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:34:15.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:34:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:34:15.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:15 compute-0 nova_compute[265391]: 2025-09-30 18:34:15.435 2 INFO nova.compute.manager [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Took 5.52 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Sep 30 18:34:15 compute-0 nova_compute[265391]: 2025-09-30 18:34:15.953 2 DEBUG nova.compute.manager [req-7b387c6d-7f86-47c3-8a70-c0179186dbcd req-a2716b92-eb00-48fe-ab5f-32f1504c58c8 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Received event network-vif-plugged-f942c9c9-85a4-47cf-9428-7e266b83b49b external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:34:15 compute-0 nova_compute[265391]: 2025-09-30 18:34:15.954 2 DEBUG oslo_concurrency.lockutils [req-7b387c6d-7f86-47c3-8a70-c0179186dbcd req-a2716b92-eb00-48fe-ab5f-32f1504c58c8 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:34:15 compute-0 nova_compute[265391]: 2025-09-30 18:34:15.954 2 DEBUG oslo_concurrency.lockutils [req-7b387c6d-7f86-47c3-8a70-c0179186dbcd req-a2716b92-eb00-48fe-ab5f-32f1504c58c8 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:34:15 compute-0 nova_compute[265391]: 2025-09-30 18:34:15.954 2 DEBUG oslo_concurrency.lockutils [req-7b387c6d-7f86-47c3-8a70-c0179186dbcd req-a2716b92-eb00-48fe-ab5f-32f1504c58c8 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:34:15 compute-0 nova_compute[265391]: 2025-09-30 18:34:15.954 2 DEBUG nova.compute.manager [req-7b387c6d-7f86-47c3-8a70-c0179186dbcd req-a2716b92-eb00-48fe-ab5f-32f1504c58c8 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Processing event network-vif-plugged-f942c9c9-85a4-47cf-9428-7e266b83b49b _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:34:15 compute-0 nova_compute[265391]: 2025-09-30 18:34:15.954 2 DEBUG nova.compute.manager [req-7b387c6d-7f86-47c3-8a70-c0179186dbcd req-a2716b92-eb00-48fe-ab5f-32f1504c58c8 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Received event network-changed-f942c9c9-85a4-47cf-9428-7e266b83b49b external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:34:15 compute-0 nova_compute[265391]: 2025-09-30 18:34:15.954 2 DEBUG nova.compute.manager [req-7b387c6d-7f86-47c3-8a70-c0179186dbcd req-a2716b92-eb00-48fe-ab5f-32f1504c58c8 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Refreshing instance network info cache due to event network-changed-f942c9c9-85a4-47cf-9428-7e266b83b49b. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:34:15 compute-0 nova_compute[265391]: 2025-09-30 18:34:15.955 2 DEBUG oslo_concurrency.lockutils [req-7b387c6d-7f86-47c3-8a70-c0179186dbcd req-a2716b92-eb00-48fe-ab5f-32f1504c58c8 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-29a2fe9a-add5-43c1-948a-9df854aa4261" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:34:15 compute-0 nova_compute[265391]: 2025-09-30 18:34:15.955 2 DEBUG oslo_concurrency.lockutils [req-7b387c6d-7f86-47c3-8a70-c0179186dbcd req-a2716b92-eb00-48fe-ab5f-32f1504c58c8 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-29a2fe9a-add5-43c1-948a-9df854aa4261" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:34:15 compute-0 nova_compute[265391]: 2025-09-30 18:34:15.955 2 DEBUG nova.network.neutron [req-7b387c6d-7f86-47c3-8a70-c0179186dbcd req-a2716b92-eb00-48fe-ab5f-32f1504c58c8 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Refreshing network info cache for port f942c9c9-85a4-47cf-9428-7e266b83b49b _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:34:15 compute-0 nova_compute[265391]: 2025-09-30 18:34:15.956 2 DEBUG nova.compute.manager [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:34:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:34:16 compute-0 ceph-mon[73755]: pgmap v1670: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 678 B/s rd, 11 KiB/s wr, 2 op/s
Sep 30 18:34:16 compute-0 nova_compute[265391]: 2025-09-30 18:34:16.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:34:16 compute-0 nova_compute[265391]: 2025-09-30 18:34:16.463 2 WARNING neutronclient.v2_0.client [req-7b387c6d-7f86-47c3-8a70-c0179186dbcd req-a2716b92-eb00-48fe-ab5f-32f1504c58c8 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:34:16 compute-0 nova_compute[265391]: 2025-09-30 18:34:16.468 2 DEBUG nova.compute.manager [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3f_adtk9',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='29a2fe9a-add5-43c1-948a-9df854aa4261',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(3076aa2f-e697-4bdd-98f3-898820dd8d4b),old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9659
Sep 30 18:34:16 compute-0 nova_compute[265391]: 2025-09-30 18:34:16.471 2 DEBUG nova.objects.instance [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lazy-loading 'migration_context' on Instance uuid 29a2fe9a-add5-43c1-948a-9df854aa4261 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:34:16 compute-0 nova_compute[265391]: 2025-09-30 18:34:16.472 2 DEBUG nova.virt.libvirt.driver [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Starting monitoring of live migration _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11543
Sep 30 18:34:16 compute-0 nova_compute[265391]: 2025-09-30 18:34:16.474 2 DEBUG nova.virt.libvirt.driver [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Sep 30 18:34:16 compute-0 nova_compute[265391]: 2025-09-30 18:34:16.474 2 DEBUG nova.virt.libvirt.driver [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Sep 30 18:34:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1671: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 614 B/s rd, 9.7 KiB/s wr, 2 op/s
Sep 30 18:34:16 compute-0 nova_compute[265391]: 2025-09-30 18:34:16.976 2 DEBUG nova.virt.libvirt.driver [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Sep 30 18:34:16 compute-0 nova_compute[265391]: 2025-09-30 18:34:16.977 2 DEBUG nova.virt.libvirt.driver [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Sep 30 18:34:16 compute-0 nova_compute[265391]: 2025-09-30 18:34:16.985 2 DEBUG nova.virt.libvirt.vif [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:32:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-1847336220',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-1847336220',id=25,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:32:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c634e1c17ed54907969576a0eb8eff50',ramdisk_id='',reservation_id='r-my5o1s4h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteStrategies-1883747907',owner_user_name='tempest-TestExecuteStrategies-1883747907-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:32:59Z,user_data=None,user_id='623ef4a55c9e4fc28bb65e49246b5008',uuid=29a2fe9a-add5-43c1-948a-9df854aa4261,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f942c9c9-85a4-47cf-9428-7e266b83b49b", "address": "fa:16:3e:2b:2d:3d", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapf942c9c9-85", "ovs_interfaceid": "f942c9c9-85a4-47cf-9428-7e266b83b49b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:34:16 compute-0 nova_compute[265391]: 2025-09-30 18:34:16.985 2 DEBUG nova.network.os_vif_util [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "f942c9c9-85a4-47cf-9428-7e266b83b49b", "address": "fa:16:3e:2b:2d:3d", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapf942c9c9-85", "ovs_interfaceid": "f942c9c9-85a4-47cf-9428-7e266b83b49b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:34:16 compute-0 nova_compute[265391]: 2025-09-30 18:34:16.986 2 DEBUG nova.network.os_vif_util [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:2d:3d,bridge_name='br-int',has_traffic_filtering=True,id=f942c9c9-85a4-47cf-9428-7e266b83b49b,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf942c9c9-85') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:34:16 compute-0 nova_compute[265391]: 2025-09-30 18:34:16.986 2 DEBUG nova.virt.libvirt.migration [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Updating guest XML with vif config: <interface type="ethernet">
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <mac address="fa:16:3e:2b:2d:3d"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <model type="virtio"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <mtu size="1442"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <target dev="tapf942c9c9-85"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]: </interface>
Sep 30 18:34:16 compute-0 nova_compute[265391]:  _update_vif_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:534
Sep 30 18:34:16 compute-0 nova_compute[265391]: 2025-09-30 18:34:16.987 2 DEBUG nova.virt.libvirt.migration [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _remove_cpu_shared_set_xml input xml=<domain type="kvm">
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <name>instance-00000019</name>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <uuid>29a2fe9a-add5-43c1-948a-9df854aa4261</uuid>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteStrategies-server-1847336220</nova:name>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:32:52</nova:creationTime>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:34:16 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:34:16 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:user uuid="623ef4a55c9e4fc28bb65e49246b5008">tempest-TestExecuteStrategies-1883747907-project-admin</nova:user>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:project uuid="c634e1c17ed54907969576a0eb8eff50">tempest-TestExecuteStrategies-1883747907</nova:project>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:port uuid="f942c9c9-85a4-47cf-9428-7e266b83b49b">
Sep 30 18:34:16 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <system>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <entry name="serial">29a2fe9a-add5-43c1-948a-9df854aa4261</entry>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <entry name="uuid">29a2fe9a-add5-43c1-948a-9df854aa4261</entry>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </system>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <os>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   </os>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <features>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   </features>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/29a2fe9a-add5-43c1-948a-9df854aa4261_disk">
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       </source>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/29a2fe9a-add5-43c1-948a-9df854aa4261_disk.config">
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       </source>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:2b:2d:3d"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target dev="tapf942c9c9-85"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/29a2fe9a-add5-43c1-948a-9df854aa4261/console.log" append="off"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       </target>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/29a2fe9a-add5-43c1-948a-9df854aa4261/console.log" append="off"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </console>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </input>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <video>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </video>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]: </domain>
Sep 30 18:34:16 compute-0 nova_compute[265391]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:241
Sep 30 18:34:16 compute-0 nova_compute[265391]: 2025-09-30 18:34:16.989 2 DEBUG nova.virt.libvirt.migration [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _remove_cpu_shared_set_xml output xml=<domain type="kvm">
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <name>instance-00000019</name>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <uuid>29a2fe9a-add5-43c1-948a-9df854aa4261</uuid>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteStrategies-server-1847336220</nova:name>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:32:52</nova:creationTime>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:34:16 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:34:16 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:user uuid="623ef4a55c9e4fc28bb65e49246b5008">tempest-TestExecuteStrategies-1883747907-project-admin</nova:user>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:project uuid="c634e1c17ed54907969576a0eb8eff50">tempest-TestExecuteStrategies-1883747907</nova:project>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:port uuid="f942c9c9-85a4-47cf-9428-7e266b83b49b">
Sep 30 18:34:16 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <system>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <entry name="serial">29a2fe9a-add5-43c1-948a-9df854aa4261</entry>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <entry name="uuid">29a2fe9a-add5-43c1-948a-9df854aa4261</entry>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </system>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <os>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   </os>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <features>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   </features>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/29a2fe9a-add5-43c1-948a-9df854aa4261_disk">
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       </source>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/29a2fe9a-add5-43c1-948a-9df854aa4261_disk.config">
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       </source>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:2b:2d:3d"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target dev="tapf942c9c9-85"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/29a2fe9a-add5-43c1-948a-9df854aa4261/console.log" append="off"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       </target>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/29a2fe9a-add5-43c1-948a-9df854aa4261/console.log" append="off"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </console>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </input>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <video>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </video>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]: </domain>
Sep 30 18:34:16 compute-0 nova_compute[265391]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:250
Sep 30 18:34:16 compute-0 nova_compute[265391]: 2025-09-30 18:34:16.991 2 DEBUG nova.virt.libvirt.migration [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _update_pci_xml output xml=<domain type="kvm">
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <name>instance-00000019</name>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <uuid>29a2fe9a-add5-43c1-948a-9df854aa4261</uuid>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteStrategies-server-1847336220</nova:name>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:32:52</nova:creationTime>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:34:16 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:34:16 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:user uuid="623ef4a55c9e4fc28bb65e49246b5008">tempest-TestExecuteStrategies-1883747907-project-admin</nova:user>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:project uuid="c634e1c17ed54907969576a0eb8eff50">tempest-TestExecuteStrategies-1883747907</nova:project>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         <nova:port uuid="f942c9c9-85a4-47cf-9428-7e266b83b49b">
Sep 30 18:34:16 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <system>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <entry name="serial">29a2fe9a-add5-43c1-948a-9df854aa4261</entry>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <entry name="uuid">29a2fe9a-add5-43c1-948a-9df854aa4261</entry>
Sep 30 18:34:16 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     </system>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <os>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   </os>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <features>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   </features>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:34:16 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:34:16 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:34:17 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/29a2fe9a-add5-43c1-948a-9df854aa4261_disk">
Sep 30 18:34:17 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       </source>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:34:17 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/29a2fe9a-add5-43c1-948a-9df854aa4261_disk.config">
Sep 30 18:34:17 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       </source>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:2b:2d:3d"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <target dev="tapf942c9c9-85"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/29a2fe9a-add5-43c1-948a-9df854aa4261/console.log" append="off"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:34:17 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       </target>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/29a2fe9a-add5-43c1-948a-9df854aa4261/console.log" append="off"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </console>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </input>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <video>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </video>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:34:17 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:34:17 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:34:17 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:34:17 compute-0 nova_compute[265391]: </domain>
Sep 30 18:34:17 compute-0 nova_compute[265391]:  _update_pci_dev_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:166
Sep 30 18:34:17 compute-0 nova_compute[265391]: 2025-09-30 18:34:16.991 2 DEBUG nova.virt.libvirt.driver [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] About to invoke the migrate API _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11175
Sep 30 18:34:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:34:17.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:17 compute-0 sudo[337486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:34:17 compute-0 sudo[337486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:34:17 compute-0 sudo[337486]: pam_unix(sudo:session): session closed for user root
Sep 30 18:34:17 compute-0 nova_compute[265391]: 2025-09-30 18:34:17.081 2 WARNING neutronclient.v2_0.client [req-7b387c6d-7f86-47c3-8a70-c0179186dbcd req-a2716b92-eb00-48fe-ab5f-32f1504c58c8 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:34:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 18:34:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:34:17.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 18:34:17 compute-0 nova_compute[265391]: 2025-09-30 18:34:17.260 2 DEBUG nova.network.neutron [req-7b387c6d-7f86-47c3-8a70-c0179186dbcd req-a2716b92-eb00-48fe-ab5f-32f1504c58c8 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Updated VIF entry in instance network info cache for port f942c9c9-85a4-47cf-9428-7e266b83b49b. _build_network_info_model /usr/lib/python3.12/site-packages/nova/network/neutron.py:3542
Sep 30 18:34:17 compute-0 nova_compute[265391]: 2025-09-30 18:34:17.261 2 DEBUG nova.network.neutron [req-7b387c6d-7f86-47c3-8a70-c0179186dbcd req-a2716b92-eb00-48fe-ab5f-32f1504c58c8 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Updating instance_info_cache with network_info: [{"id": "f942c9c9-85a4-47cf-9428-7e266b83b49b", "address": "fa:16:3e:2b:2d:3d", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf942c9c9-85", "ovs_interfaceid": "f942c9c9-85a4-47cf-9428-7e266b83b49b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:34:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:34:17.301Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:34:17 compute-0 nova_compute[265391]: 2025-09-30 18:34:17.480 2 DEBUG nova.virt.libvirt.migration [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Current None elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Sep 30 18:34:17 compute-0 nova_compute[265391]: 2025-09-30 18:34:17.480 2 INFO nova.virt.libvirt.migration [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Increasing downtime to 50 ms after 0 sec elapsed time
Sep 30 18:34:17 compute-0 nova_compute[265391]: 2025-09-30 18:34:17.775 2 DEBUG oslo_concurrency.lockutils [req-7b387c6d-7f86-47c3-8a70-c0179186dbcd req-a2716b92-eb00-48fe-ab5f-32f1504c58c8 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-29a2fe9a-add5-43c1-948a-9df854aa4261" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:34:17 compute-0 nova_compute[265391]: 2025-09-30 18:34:17.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:34:18 compute-0 ceph-mon[73755]: pgmap v1671: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 614 B/s rd, 9.7 KiB/s wr, 2 op/s
Sep 30 18:34:18 compute-0 nova_compute[265391]: 2025-09-30 18:34:18.500 2 INFO nova.virt.libvirt.driver [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Migration running for 1 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Sep 30 18:34:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1672: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 2.7 KiB/s rd, 8.2 KiB/s wr, 4 op/s
Sep 30 18:34:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:34:18] "GET /metrics HTTP/1.1" 200 46725 "" "Prometheus/2.51.0"
Sep 30 18:34:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:34:18] "GET /metrics HTTP/1.1" 200 46725 "" "Prometheus/2.51.0"
Sep 30 18:34:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:34:18.852Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:34:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:34:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:34:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:34:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:34:19 compute-0 nova_compute[265391]: 2025-09-30 18:34:19.005 2 DEBUG nova.virt.libvirt.migration [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Sep 30 18:34:19 compute-0 nova_compute[265391]: 2025-09-30 18:34:19.005 2 DEBUG nova.virt.libvirt.migration [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Downtime does not need to change update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:671
Sep 30 18:34:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:34:19.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:19 compute-0 kernel: tapf942c9c9-85 (unregistering): left promiscuous mode
Sep 30 18:34:19 compute-0 NetworkManager[45059]: <info>  [1759257259.1556] device (tapf942c9c9-85): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 18:34:19 compute-0 ovn_controller[156242]: 2025-09-30T18:34:19Z|00219|binding|INFO|Releasing lport f942c9c9-85a4-47cf-9428-7e266b83b49b from this chassis (sb_readonly=0)
Sep 30 18:34:19 compute-0 ovn_controller[156242]: 2025-09-30T18:34:19Z|00220|binding|INFO|Setting lport f942c9c9-85a4-47cf-9428-7e266b83b49b down in Southbound
Sep 30 18:34:19 compute-0 ovn_controller[156242]: 2025-09-30T18:34:19Z|00221|binding|INFO|Removing iface tapf942c9c9-85 ovn-installed in OVS
Sep 30 18:34:19 compute-0 nova_compute[265391]: 2025-09-30 18:34:19.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:34:19 compute-0 nova_compute[265391]: 2025-09-30 18:34:19.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:34:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:34:19.172 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:2d:3d 10.100.0.5'], port_security=['fa:16:3e:2b:2d:3d 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '81ab3fff-d6d4-4262-9f24-1b212876e52c'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '29a2fe9a-add5-43c1-948a-9df854aa4261', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6901f664-336b-42d2-bbf7-58951befc8d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c634e1c17ed54907969576a0eb8eff50', 'neutron:revision_number': '10', 'neutron:security_group_ids': '11cf84c1-9641-409c-b5f5-6c5fe3a8afe5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c24d9de-651b-4bf8-842a-1286ab88b11d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=f942c9c9-85a4-47cf-9428-7e266b83b49b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:34:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:34:19.173 166158 INFO neutron.agent.ovn.metadata.agent [-] Port f942c9c9-85a4-47cf-9428-7e266b83b49b in datapath 6901f664-336b-42d2-bbf7-58951befc8d1 unbound from our chassis
Sep 30 18:34:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:34:19.174 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6901f664-336b-42d2-bbf7-58951befc8d1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:34:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:34:19.175 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[f6e7a37e-e3ba-4ebb-9b49-f6a5643b350a]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:34:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:34:19.176 166158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1 namespace which is not needed anymore
Sep 30 18:34:19 compute-0 nova_compute[265391]: 2025-09-30 18:34:19.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:34:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:34:19.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:19 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000019.scope: Deactivated successfully.
Sep 30 18:34:19 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000019.scope: Consumed 15.083s CPU time.
Sep 30 18:34:19 compute-0 systemd-machined[219917]: Machine qemu-18-instance-00000019 terminated.
Sep 30 18:34:19 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[334178]: [NOTICE]   (334182) : haproxy version is 3.0.5-8e879a5
Sep 30 18:34:19 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[334178]: [NOTICE]   (334182) : path to executable is /usr/sbin/haproxy
Sep 30 18:34:19 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[334178]: [WARNING]  (334182) : Exiting Master process...
Sep 30 18:34:19 compute-0 podman[337542]: 2025-09-30 18:34:19.297720267 +0000 UTC m=+0.029578618 container kill 7ec97ada63ff762fbe17ffe9f28aa39e23933405c2900185797397d4c0ee7c36 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Sep 30 18:34:19 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[334178]: [ALERT]    (334182) : Current worker (334184) exited with code 143 (Terminated)
Sep 30 18:34:19 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[334178]: [WARNING]  (334182) : All workers exited. Exiting... (0)
Sep 30 18:34:19 compute-0 systemd[1]: libpod-7ec97ada63ff762fbe17ffe9f28aa39e23933405c2900185797397d4c0ee7c36.scope: Deactivated successfully.
Sep 30 18:34:19 compute-0 virtqemud[265263]: Unable to get XATTR trusted.libvirt.security.ref_selinux on 29a2fe9a-add5-43c1-948a-9df854aa4261_disk: No such file or directory
Sep 30 18:34:19 compute-0 virtqemud[265263]: Unable to get XATTR trusted.libvirt.security.ref_dac on 29a2fe9a-add5-43c1-948a-9df854aa4261_disk: No such file or directory
Sep 30 18:34:19 compute-0 podman[337557]: 2025-09-30 18:34:19.349112369 +0000 UTC m=+0.034367872 container died 7ec97ada63ff762fbe17ffe9f28aa39e23933405c2900185797397d4c0ee7c36 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.build-date=20250930)
Sep 30 18:34:19 compute-0 nova_compute[265391]: 2025-09-30 18:34:19.354 2 DEBUG nova.virt.libvirt.driver [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Migrate API has completed _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11182
Sep 30 18:34:19 compute-0 nova_compute[265391]: 2025-09-30 18:34:19.355 2 DEBUG nova.virt.libvirt.driver [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Migration operation thread has finished _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11230
Sep 30 18:34:19 compute-0 nova_compute[265391]: 2025-09-30 18:34:19.356 2 DEBUG nova.virt.libvirt.driver [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Migration operation thread notification thread_finished /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11533
Sep 30 18:34:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7ec97ada63ff762fbe17ffe9f28aa39e23933405c2900185797397d4c0ee7c36-userdata-shm.mount: Deactivated successfully.
Sep 30 18:34:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7b1ea676acc4104f1f9e406f0986aed3bb04854be8d2494bc378fc60048d53d-merged.mount: Deactivated successfully.
Sep 30 18:34:19 compute-0 podman[337557]: 2025-09-30 18:34:19.392801302 +0000 UTC m=+0.078056785 container cleanup 7ec97ada63ff762fbe17ffe9f28aa39e23933405c2900185797397d4c0ee7c36 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.build-date=20250930, tcib_managed=true)
Sep 30 18:34:19 compute-0 systemd[1]: libpod-conmon-7ec97ada63ff762fbe17ffe9f28aa39e23933405c2900185797397d4c0ee7c36.scope: Deactivated successfully.
Sep 30 18:34:19 compute-0 podman[337559]: 2025-09-30 18:34:19.408950801 +0000 UTC m=+0.060646614 container remove 7ec97ada63ff762fbe17ffe9f28aa39e23933405c2900185797397d4c0ee7c36 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Sep 30 18:34:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:34:19.415 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[83777718-cbc7-48ea-84fb-f6240f9d64a8]: (4, ("Tue Sep 30 06:34:19 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1 (7ec97ada63ff762fbe17ffe9f28aa39e23933405c2900185797397d4c0ee7c36)\n7ec97ada63ff762fbe17ffe9f28aa39e23933405c2900185797397d4c0ee7c36\nTue Sep 30 06:34:19 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1 (7ec97ada63ff762fbe17ffe9f28aa39e23933405c2900185797397d4c0ee7c36)\n7ec97ada63ff762fbe17ffe9f28aa39e23933405c2900185797397d4c0ee7c36\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:34:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:34:19.416 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[3717f989-3c2e-42f7-8116-55a8b8f7c987]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:34:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:34:19.417 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:34:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:34:19.417 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[12cf7b70-b120-4402-8522-617c5ef9459b]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:34:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:34:19.418 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6901f664-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:34:19 compute-0 nova_compute[265391]: 2025-09-30 18:34:19.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:34:19 compute-0 kernel: tap6901f664-30: left promiscuous mode
Sep 30 18:34:19 compute-0 nova_compute[265391]: 2025-09-30 18:34:19.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:34:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:34:19.437 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[42480472-a806-4d5d-8aa6-761aa6d45e87]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:34:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:34:19.466 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[ffb50218-1d66-476f-aa11-43500e880f08]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:34:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:34:19.468 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[a5f27c28-e5d3-4a07-80e8-57e61909975f]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:34:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:34:19.486 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[4f7921d4-8506-4591-aa78-f67e78e3380c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 575252, 'reachable_time': 34073, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337601, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:34:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:34:19.488 166383 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1 deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Sep 30 18:34:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:34:19.489 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[e13143ab-d1a4-40f6-8fa7-5e56fdb5efd3]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:34:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d6901f664\x2d336b\x2d42d2\x2dbbf7\x2d58951befc8d1.mount: Deactivated successfully.
Sep 30 18:34:19 compute-0 nova_compute[265391]: 2025-09-30 18:34:19.507 2 DEBUG nova.virt.libvirt.guest [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid '29a2fe9a-add5-43c1-948a-9df854aa4261' (instance-00000019) get_job_info /usr/lib/python3.12/site-packages/nova/virt/libvirt/guest.py:687
Sep 30 18:34:19 compute-0 nova_compute[265391]: 2025-09-30 18:34:19.507 2 INFO nova.virt.libvirt.driver [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Migration operation has completed
Sep 30 18:34:19 compute-0 nova_compute[265391]: 2025-09-30 18:34:19.507 2 INFO nova.compute.manager [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] _post_live_migration() is started..
Sep 30 18:34:19 compute-0 nova_compute[265391]: 2025-09-30 18:34:19.523 2 WARNING neutronclient.v2_0.client [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:34:19 compute-0 nova_compute[265391]: 2025-09-30 18:34:19.523 2 WARNING neutronclient.v2_0.client [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:34:19 compute-0 nova_compute[265391]: 2025-09-30 18:34:19.606 2 DEBUG nova.compute.manager [req-b298b88a-2b80-4158-a6e0-690853ab34d2 req-275415dc-c383-4241-9c8c-5e67b0d0fae6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Received event network-vif-unplugged-f942c9c9-85a4-47cf-9428-7e266b83b49b external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:34:19 compute-0 nova_compute[265391]: 2025-09-30 18:34:19.606 2 DEBUG oslo_concurrency.lockutils [req-b298b88a-2b80-4158-a6e0-690853ab34d2 req-275415dc-c383-4241-9c8c-5e67b0d0fae6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:34:19 compute-0 nova_compute[265391]: 2025-09-30 18:34:19.606 2 DEBUG oslo_concurrency.lockutils [req-b298b88a-2b80-4158-a6e0-690853ab34d2 req-275415dc-c383-4241-9c8c-5e67b0d0fae6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:34:19 compute-0 nova_compute[265391]: 2025-09-30 18:34:19.607 2 DEBUG oslo_concurrency.lockutils [req-b298b88a-2b80-4158-a6e0-690853ab34d2 req-275415dc-c383-4241-9c8c-5e67b0d0fae6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:34:19 compute-0 nova_compute[265391]: 2025-09-30 18:34:19.607 2 DEBUG nova.compute.manager [req-b298b88a-2b80-4158-a6e0-690853ab34d2 req-275415dc-c383-4241-9c8c-5e67b0d0fae6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] No waiting events found dispatching network-vif-unplugged-f942c9c9-85a4-47cf-9428-7e266b83b49b pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:34:19 compute-0 nova_compute[265391]: 2025-09-30 18:34:19.607 2 DEBUG nova.compute.manager [req-b298b88a-2b80-4158-a6e0-690853ab34d2 req-275415dc-c383-4241-9c8c-5e67b0d0fae6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Received event network-vif-unplugged-f942c9c9-85a4-47cf-9428-7e266b83b49b for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:34:20 compute-0 ceph-mon[73755]: pgmap v1672: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 2.7 KiB/s rd, 8.2 KiB/s wr, 4 op/s
Sep 30 18:34:20 compute-0 nova_compute[265391]: 2025-09-30 18:34:20.352 2 DEBUG nova.network.neutron [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Activated binding for port f942c9c9-85a4-47cf-9428-7e266b83b49b and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.12/site-packages/nova/network/neutron.py:3241
Sep 30 18:34:20 compute-0 nova_compute[265391]: 2025-09-30 18:34:20.352 2 DEBUG nova.compute.manager [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "f942c9c9-85a4-47cf-9428-7e266b83b49b", "address": "fa:16:3e:2b:2d:3d", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf942c9c9-85", "ovs_interfaceid": "f942c9c9-85a4-47cf-9428-7e266b83b49b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10059
Sep 30 18:34:20 compute-0 nova_compute[265391]: 2025-09-30 18:34:20.353 2 DEBUG nova.virt.libvirt.vif [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:32:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-1847336220',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-1847336220',id=25,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:32:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c634e1c17ed54907969576a0eb8eff50',ramdisk_id='',reservation_id='r-my5o1s4h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteStrategies-1883747907',owner_user_name='tempest-TestExecuteStrategies-1883747907-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:33:29Z,user_data=None,user_id='623ef4a55c9e4fc28bb65e49246b5008',uuid=29a2fe9a-add5-43c1-948a-9df854aa4261,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f942c9c9-85a4-47cf-9428-7e266b83b49b", "address": "fa:16:3e:2b:2d:3d", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf942c9c9-85", "ovs_interfaceid": "f942c9c9-85a4-47cf-9428-7e266b83b49b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Sep 30 18:34:20 compute-0 nova_compute[265391]: 2025-09-30 18:34:20.354 2 DEBUG nova.network.os_vif_util [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "f942c9c9-85a4-47cf-9428-7e266b83b49b", "address": "fa:16:3e:2b:2d:3d", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf942c9c9-85", "ovs_interfaceid": "f942c9c9-85a4-47cf-9428-7e266b83b49b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:34:20 compute-0 nova_compute[265391]: 2025-09-30 18:34:20.354 2 DEBUG nova.network.os_vif_util [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:2d:3d,bridge_name='br-int',has_traffic_filtering=True,id=f942c9c9-85a4-47cf-9428-7e266b83b49b,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf942c9c9-85') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:34:20 compute-0 nova_compute[265391]: 2025-09-30 18:34:20.355 2 DEBUG os_vif [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:2d:3d,bridge_name='br-int',has_traffic_filtering=True,id=f942c9c9-85a4-47cf-9428-7e266b83b49b,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf942c9c9-85') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Sep 30 18:34:20 compute-0 nova_compute[265391]: 2025-09-30 18:34:20.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:34:20 compute-0 nova_compute[265391]: 2025-09-30 18:34:20.357 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf942c9c9-85, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:34:20 compute-0 nova_compute[265391]: 2025-09-30 18:34:20.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:34:20 compute-0 nova_compute[265391]: 2025-09-30 18:34:20.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:34:20 compute-0 nova_compute[265391]: 2025-09-30 18:34:20.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:34:20 compute-0 nova_compute[265391]: 2025-09-30 18:34:20.362 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=626c86aa-7b02-4ad9-baa6-59db390be984) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:34:20 compute-0 nova_compute[265391]: 2025-09-30 18:34:20.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:34:20 compute-0 nova_compute[265391]: 2025-09-30 18:34:20.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:34:20 compute-0 nova_compute[265391]: 2025-09-30 18:34:20.367 2 INFO os_vif [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:2d:3d,bridge_name='br-int',has_traffic_filtering=True,id=f942c9c9-85a4-47cf-9428-7e266b83b49b,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf942c9c9-85')
Sep 30 18:34:20 compute-0 nova_compute[265391]: 2025-09-30 18:34:20.368 2 DEBUG oslo_concurrency.lockutils [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:34:20 compute-0 nova_compute[265391]: 2025-09-30 18:34:20.368 2 DEBUG oslo_concurrency.lockutils [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:34:20 compute-0 nova_compute[265391]: 2025-09-30 18:34:20.368 2 DEBUG oslo_concurrency.lockutils [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:34:20 compute-0 nova_compute[265391]: 2025-09-30 18:34:20.369 2 DEBUG nova.compute.manager [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10082
Sep 30 18:34:20 compute-0 nova_compute[265391]: 2025-09-30 18:34:20.369 2 INFO nova.virt.libvirt.driver [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Deleting instance files /var/lib/nova/instances/29a2fe9a-add5-43c1-948a-9df854aa4261_del
Sep 30 18:34:20 compute-0 nova_compute[265391]: 2025-09-30 18:34:20.369 2 INFO nova.virt.libvirt.driver [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Deletion of /var/lib/nova/instances/29a2fe9a-add5-43c1-948a-9df854aa4261_del complete
Sep 30 18:34:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1673: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 5.0 KiB/s rd, 8.2 KiB/s wr, 7 op/s
Sep 30 18:34:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:34:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:34:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:34:21.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:34:21 compute-0 nova_compute[265391]: 2025-09-30 18:34:21.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:34:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:34:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:34:21.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:34:21 compute-0 nova_compute[265391]: 2025-09-30 18:34:21.694 2 DEBUG nova.compute.manager [req-20fb1c13-8947-4d10-99fd-e1ee948d2794 req-5da56740-cd90-4b9c-8800-e85c4db55ecc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Received event network-vif-plugged-f942c9c9-85a4-47cf-9428-7e266b83b49b external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:34:21 compute-0 nova_compute[265391]: 2025-09-30 18:34:21.694 2 DEBUG oslo_concurrency.lockutils [req-20fb1c13-8947-4d10-99fd-e1ee948d2794 req-5da56740-cd90-4b9c-8800-e85c4db55ecc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:34:21 compute-0 nova_compute[265391]: 2025-09-30 18:34:21.694 2 DEBUG oslo_concurrency.lockutils [req-20fb1c13-8947-4d10-99fd-e1ee948d2794 req-5da56740-cd90-4b9c-8800-e85c4db55ecc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:34:21 compute-0 nova_compute[265391]: 2025-09-30 18:34:21.694 2 DEBUG oslo_concurrency.lockutils [req-20fb1c13-8947-4d10-99fd-e1ee948d2794 req-5da56740-cd90-4b9c-8800-e85c4db55ecc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:34:21 compute-0 nova_compute[265391]: 2025-09-30 18:34:21.694 2 DEBUG nova.compute.manager [req-20fb1c13-8947-4d10-99fd-e1ee948d2794 req-5da56740-cd90-4b9c-8800-e85c4db55ecc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] No waiting events found dispatching network-vif-plugged-f942c9c9-85a4-47cf-9428-7e266b83b49b pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:34:21 compute-0 nova_compute[265391]: 2025-09-30 18:34:21.695 2 WARNING nova.compute.manager [req-20fb1c13-8947-4d10-99fd-e1ee948d2794 req-5da56740-cd90-4b9c-8800-e85c4db55ecc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Received unexpected event network-vif-plugged-f942c9c9-85a4-47cf-9428-7e266b83b49b for instance with vm_state active and task_state migrating.
Sep 30 18:34:21 compute-0 nova_compute[265391]: 2025-09-30 18:34:21.695 2 DEBUG nova.compute.manager [req-20fb1c13-8947-4d10-99fd-e1ee948d2794 req-5da56740-cd90-4b9c-8800-e85c4db55ecc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Received event network-vif-unplugged-f942c9c9-85a4-47cf-9428-7e266b83b49b external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:34:21 compute-0 nova_compute[265391]: 2025-09-30 18:34:21.695 2 DEBUG oslo_concurrency.lockutils [req-20fb1c13-8947-4d10-99fd-e1ee948d2794 req-5da56740-cd90-4b9c-8800-e85c4db55ecc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:34:21 compute-0 nova_compute[265391]: 2025-09-30 18:34:21.695 2 DEBUG oslo_concurrency.lockutils [req-20fb1c13-8947-4d10-99fd-e1ee948d2794 req-5da56740-cd90-4b9c-8800-e85c4db55ecc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:34:21 compute-0 nova_compute[265391]: 2025-09-30 18:34:21.695 2 DEBUG oslo_concurrency.lockutils [req-20fb1c13-8947-4d10-99fd-e1ee948d2794 req-5da56740-cd90-4b9c-8800-e85c4db55ecc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:34:21 compute-0 nova_compute[265391]: 2025-09-30 18:34:21.695 2 DEBUG nova.compute.manager [req-20fb1c13-8947-4d10-99fd-e1ee948d2794 req-5da56740-cd90-4b9c-8800-e85c4db55ecc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] No waiting events found dispatching network-vif-unplugged-f942c9c9-85a4-47cf-9428-7e266b83b49b pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:34:21 compute-0 nova_compute[265391]: 2025-09-30 18:34:21.695 2 DEBUG nova.compute.manager [req-20fb1c13-8947-4d10-99fd-e1ee948d2794 req-5da56740-cd90-4b9c-8800-e85c4db55ecc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Received event network-vif-unplugged-f942c9c9-85a4-47cf-9428-7e266b83b49b for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:34:21 compute-0 nova_compute[265391]: 2025-09-30 18:34:21.696 2 DEBUG nova.compute.manager [req-20fb1c13-8947-4d10-99fd-e1ee948d2794 req-5da56740-cd90-4b9c-8800-e85c4db55ecc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Received event network-vif-unplugged-f942c9c9-85a4-47cf-9428-7e266b83b49b external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:34:21 compute-0 nova_compute[265391]: 2025-09-30 18:34:21.696 2 DEBUG oslo_concurrency.lockutils [req-20fb1c13-8947-4d10-99fd-e1ee948d2794 req-5da56740-cd90-4b9c-8800-e85c4db55ecc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:34:21 compute-0 nova_compute[265391]: 2025-09-30 18:34:21.696 2 DEBUG oslo_concurrency.lockutils [req-20fb1c13-8947-4d10-99fd-e1ee948d2794 req-5da56740-cd90-4b9c-8800-e85c4db55ecc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:34:21 compute-0 nova_compute[265391]: 2025-09-30 18:34:21.696 2 DEBUG oslo_concurrency.lockutils [req-20fb1c13-8947-4d10-99fd-e1ee948d2794 req-5da56740-cd90-4b9c-8800-e85c4db55ecc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:34:21 compute-0 nova_compute[265391]: 2025-09-30 18:34:21.696 2 DEBUG nova.compute.manager [req-20fb1c13-8947-4d10-99fd-e1ee948d2794 req-5da56740-cd90-4b9c-8800-e85c4db55ecc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] No waiting events found dispatching network-vif-unplugged-f942c9c9-85a4-47cf-9428-7e266b83b49b pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:34:21 compute-0 nova_compute[265391]: 2025-09-30 18:34:21.696 2 DEBUG nova.compute.manager [req-20fb1c13-8947-4d10-99fd-e1ee948d2794 req-5da56740-cd90-4b9c-8800-e85c4db55ecc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Received event network-vif-unplugged-f942c9c9-85a4-47cf-9428-7e266b83b49b for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:34:21 compute-0 nova_compute[265391]: 2025-09-30 18:34:21.696 2 DEBUG nova.compute.manager [req-20fb1c13-8947-4d10-99fd-e1ee948d2794 req-5da56740-cd90-4b9c-8800-e85c4db55ecc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Received event network-vif-plugged-f942c9c9-85a4-47cf-9428-7e266b83b49b external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:34:21 compute-0 nova_compute[265391]: 2025-09-30 18:34:21.697 2 DEBUG oslo_concurrency.lockutils [req-20fb1c13-8947-4d10-99fd-e1ee948d2794 req-5da56740-cd90-4b9c-8800-e85c4db55ecc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:34:21 compute-0 nova_compute[265391]: 2025-09-30 18:34:21.697 2 DEBUG oslo_concurrency.lockutils [req-20fb1c13-8947-4d10-99fd-e1ee948d2794 req-5da56740-cd90-4b9c-8800-e85c4db55ecc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:34:21 compute-0 nova_compute[265391]: 2025-09-30 18:34:21.697 2 DEBUG oslo_concurrency.lockutils [req-20fb1c13-8947-4d10-99fd-e1ee948d2794 req-5da56740-cd90-4b9c-8800-e85c4db55ecc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:34:21 compute-0 nova_compute[265391]: 2025-09-30 18:34:21.697 2 DEBUG nova.compute.manager [req-20fb1c13-8947-4d10-99fd-e1ee948d2794 req-5da56740-cd90-4b9c-8800-e85c4db55ecc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] No waiting events found dispatching network-vif-plugged-f942c9c9-85a4-47cf-9428-7e266b83b49b pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:34:21 compute-0 nova_compute[265391]: 2025-09-30 18:34:21.697 2 WARNING nova.compute.manager [req-20fb1c13-8947-4d10-99fd-e1ee948d2794 req-5da56740-cd90-4b9c-8800-e85c4db55ecc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Received unexpected event network-vif-plugged-f942c9c9-85a4-47cf-9428-7e266b83b49b for instance with vm_state active and task_state migrating.
Sep 30 18:34:21 compute-0 nova_compute[265391]: 2025-09-30 18:34:21.697 2 DEBUG nova.compute.manager [req-20fb1c13-8947-4d10-99fd-e1ee948d2794 req-5da56740-cd90-4b9c-8800-e85c4db55ecc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Received event network-vif-plugged-f942c9c9-85a4-47cf-9428-7e266b83b49b external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:34:21 compute-0 nova_compute[265391]: 2025-09-30 18:34:21.698 2 DEBUG oslo_concurrency.lockutils [req-20fb1c13-8947-4d10-99fd-e1ee948d2794 req-5da56740-cd90-4b9c-8800-e85c4db55ecc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:34:21 compute-0 nova_compute[265391]: 2025-09-30 18:34:21.698 2 DEBUG oslo_concurrency.lockutils [req-20fb1c13-8947-4d10-99fd-e1ee948d2794 req-5da56740-cd90-4b9c-8800-e85c4db55ecc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:34:21 compute-0 nova_compute[265391]: 2025-09-30 18:34:21.698 2 DEBUG oslo_concurrency.lockutils [req-20fb1c13-8947-4d10-99fd-e1ee948d2794 req-5da56740-cd90-4b9c-8800-e85c4db55ecc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:34:21 compute-0 nova_compute[265391]: 2025-09-30 18:34:21.698 2 DEBUG nova.compute.manager [req-20fb1c13-8947-4d10-99fd-e1ee948d2794 req-5da56740-cd90-4b9c-8800-e85c4db55ecc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] No waiting events found dispatching network-vif-plugged-f942c9c9-85a4-47cf-9428-7e266b83b49b pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:34:21 compute-0 nova_compute[265391]: 2025-09-30 18:34:21.698 2 WARNING nova.compute.manager [req-20fb1c13-8947-4d10-99fd-e1ee948d2794 req-5da56740-cd90-4b9c-8800-e85c4db55ecc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Received unexpected event network-vif-plugged-f942c9c9-85a4-47cf-9428-7e266b83b49b for instance with vm_state active and task_state migrating.
Sep 30 18:34:22 compute-0 ceph-mon[73755]: pgmap v1673: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 5.0 KiB/s rd, 8.2 KiB/s wr, 7 op/s
Sep 30 18:34:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:34:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:34:22 compute-0 podman[337608]: 2025-09-30 18:34:22.542164521 +0000 UTC m=+0.061625108 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:34:22 compute-0 podman[337606]: 2025-09-30 18:34:22.563367901 +0000 UTC m=+0.081634457 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true)
Sep 30 18:34:22 compute-0 podman[337607]: 2025-09-30 18:34:22.575072534 +0000 UTC m=+0.093256548 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Sep 30 18:34:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1674: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 4.7 KiB/s rd, 8.2 KiB/s wr, 6 op/s
Sep 30 18:34:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:34:23.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:34:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:34:23.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:34:23.833Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:34:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:34:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:34:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:34:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:34:24 compute-0 ceph-mon[73755]: pgmap v1674: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 4.7 KiB/s rd, 8.2 KiB/s wr, 6 op/s
Sep 30 18:34:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1675: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 5.2 KiB/s rd, 8.2 KiB/s wr, 7 op/s
Sep 30 18:34:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:34:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:34:25.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:34:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:34:25.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:25 compute-0 nova_compute[265391]: 2025-09-30 18:34:25.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:34:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:34:26 compute-0 nova_compute[265391]: 2025-09-30 18:34:26.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:34:26 compute-0 ceph-mon[73755]: pgmap v1675: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 5.2 KiB/s rd, 8.2 KiB/s wr, 7 op/s
Sep 30 18:34:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1676: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 170 B/s wr, 6 op/s
Sep 30 18:34:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:34:27.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:34:27.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:34:27.302Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:34:28 compute-0 ceph-mon[73755]: pgmap v1676: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 170 B/s wr, 6 op/s
Sep 30 18:34:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1677: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 170 B/s wr, 6 op/s
Sep 30 18:34:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:34:28] "GET /metrics HTTP/1.1" 200 46728 "" "Prometheus/2.51.0"
Sep 30 18:34:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:34:28] "GET /metrics HTTP/1.1" 200 46728 "" "Prometheus/2.51.0"
Sep 30 18:34:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:34:28.852Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:34:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:34:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:34:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:34:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:34:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:34:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:34:29.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:34:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:34:29.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:29 compute-0 podman[276673]: time="2025-09-30T18:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:34:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:34:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10296 "" "Go-http-client/1.1"
Sep 30 18:34:30 compute-0 ceph-mon[73755]: pgmap v1677: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 170 B/s wr, 6 op/s
Sep 30 18:34:30 compute-0 nova_compute[265391]: 2025-09-30 18:34:30.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:34:30 compute-0 nova_compute[265391]: 2025-09-30 18:34:30.409 2 DEBUG oslo_concurrency.lockutils [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:34:30 compute-0 nova_compute[265391]: 2025-09-30 18:34:30.410 2 DEBUG oslo_concurrency.lockutils [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:34:30 compute-0 nova_compute[265391]: 2025-09-30 18:34:30.410 2 DEBUG oslo_concurrency.lockutils [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "29a2fe9a-add5-43c1-948a-9df854aa4261-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:34:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1678: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 3.0 KiB/s rd, 85 B/s wr, 3 op/s
Sep 30 18:34:30 compute-0 nova_compute[265391]: 2025-09-30 18:34:30.922 2 DEBUG oslo_concurrency.lockutils [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:34:30 compute-0 nova_compute[265391]: 2025-09-30 18:34:30.923 2 DEBUG oslo_concurrency.lockutils [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:34:30 compute-0 nova_compute[265391]: 2025-09-30 18:34:30.923 2 DEBUG oslo_concurrency.lockutils [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:34:30 compute-0 nova_compute[265391]: 2025-09-30 18:34:30.923 2 DEBUG nova.compute.resource_tracker [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:34:30 compute-0 nova_compute[265391]: 2025-09-30 18:34:30.923 2 DEBUG oslo_concurrency.processutils [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:34:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:34:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:34:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:34:31.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:34:31 compute-0 nova_compute[265391]: 2025-09-30 18:34:31.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:34:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:34:31.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:34:31 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1148247937' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:34:31 compute-0 nova_compute[265391]: 2025-09-30 18:34:31.377 2 DEBUG oslo_concurrency.processutils [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:34:31 compute-0 openstack_network_exporter[279566]: ERROR   18:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:34:31 compute-0 openstack_network_exporter[279566]: ERROR   18:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:34:31 compute-0 openstack_network_exporter[279566]: ERROR   18:34:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:34:31 compute-0 openstack_network_exporter[279566]: ERROR   18:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:34:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:34:31 compute-0 openstack_network_exporter[279566]: ERROR   18:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:34:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:34:31 compute-0 nova_compute[265391]: 2025-09-30 18:34:31.569 2 WARNING nova.virt.libvirt.driver [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:34:31 compute-0 nova_compute[265391]: 2025-09-30 18:34:31.571 2 DEBUG oslo_concurrency.processutils [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:34:31 compute-0 nova_compute[265391]: 2025-09-30 18:34:31.595 2 DEBUG oslo_concurrency.processutils [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.024s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:34:31 compute-0 nova_compute[265391]: 2025-09-30 18:34:31.596 2 DEBUG nova.compute.resource_tracker [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4365MB free_disk=39.90114974975586GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:34:31 compute-0 nova_compute[265391]: 2025-09-30 18:34:31.596 2 DEBUG oslo_concurrency.lockutils [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:34:31 compute-0 nova_compute[265391]: 2025-09-30 18:34:31.596 2 DEBUG oslo_concurrency.lockutils [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:34:31 compute-0 sshd-session[337680]: Invalid user minecraft from 45.252.249.158 port 53634
Sep 30 18:34:31 compute-0 sshd-session[337680]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:34:31 compute-0 sshd-session[337680]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.252.249.158
Sep 30 18:34:32 compute-0 sshd-session[337682]: Invalid user sample from 14.225.220.107 port 44746
Sep 30 18:34:32 compute-0 sshd-session[337682]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:34:32 compute-0 sshd-session[337682]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107
Sep 30 18:34:32 compute-0 ceph-mon[73755]: pgmap v1678: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 3.0 KiB/s rd, 85 B/s wr, 3 op/s
Sep 30 18:34:32 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1148247937' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:34:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1679: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 682 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:34:32 compute-0 nova_compute[265391]: 2025-09-30 18:34:32.624 2 DEBUG nova.compute.resource_tracker [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Migration for instance 29a2fe9a-add5-43c1-948a-9df854aa4261 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Sep 30 18:34:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:34:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:34:33.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:34:33 compute-0 nova_compute[265391]: 2025-09-30 18:34:33.131 2 DEBUG nova.compute.resource_tracker [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1596
Sep 30 18:34:33 compute-0 nova_compute[265391]: 2025-09-30 18:34:33.161 2 DEBUG nova.compute.resource_tracker [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Migration 3076aa2f-e697-4bdd-98f3-898820dd8d4b is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Sep 30 18:34:33 compute-0 nova_compute[265391]: 2025-09-30 18:34:33.162 2 DEBUG nova.compute.resource_tracker [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:34:33 compute-0 nova_compute[265391]: 2025-09-30 18:34:33.162 2 DEBUG nova.compute.resource_tracker [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:34:31 up  1:37,  0 user,  load average: 0.75, 0.84, 0.86\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:34:33 compute-0 nova_compute[265391]: 2025-09-30 18:34:33.207 2 DEBUG oslo_concurrency.processutils [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:34:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:34:33.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:33 compute-0 sshd-session[337680]: Failed password for invalid user minecraft from 45.252.249.158 port 53634 ssh2
Sep 30 18:34:33 compute-0 podman[337730]: 2025-09-30 18:34:33.532999835 +0000 UTC m=+0.064995406 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_managed=true)
Sep 30 18:34:33 compute-0 podman[337732]: 2025-09-30 18:34:33.536792594 +0000 UTC m=+0.064985216 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7)
Sep 30 18:34:33 compute-0 podman[337731]: 2025-09-30 18:34:33.558095066 +0000 UTC m=+0.090295552 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Sep 30 18:34:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:34:33 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/698006576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:34:33 compute-0 nova_compute[265391]: 2025-09-30 18:34:33.655 2 DEBUG oslo_concurrency.processutils [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:34:33 compute-0 nova_compute[265391]: 2025-09-30 18:34:33.660 2 DEBUG nova.compute.provider_tree [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:34:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:34:33.834Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:34:33 compute-0 sshd-session[337680]: Received disconnect from 45.252.249.158 port 53634:11: Bye Bye [preauth]
Sep 30 18:34:33 compute-0 sshd-session[337680]: Disconnected from invalid user minecraft 45.252.249.158 port 53634 [preauth]
Sep 30 18:34:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:34:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:34:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:34:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:34:34 compute-0 sshd-session[337682]: Failed password for invalid user sample from 14.225.220.107 port 44746 ssh2
Sep 30 18:34:34 compute-0 nova_compute[265391]: 2025-09-30 18:34:34.167 2 DEBUG nova.scheduler.client.report [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:34:34 compute-0 ceph-mon[73755]: pgmap v1679: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 682 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:34:34 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/698006576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:34:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1680: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 0 B/s wr, 1 op/s
Sep 30 18:34:34 compute-0 nova_compute[265391]: 2025-09-30 18:34:34.680 2 DEBUG nova.compute.resource_tracker [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:34:34 compute-0 nova_compute[265391]: 2025-09-30 18:34:34.681 2 DEBUG oslo_concurrency.lockutils [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.085s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:34:34 compute-0 nova_compute[265391]: 2025-09-30 18:34:34.701 2 INFO nova.compute.manager [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Sep 30 18:34:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:34:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:34:35.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:34:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:34:35.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:35 compute-0 nova_compute[265391]: 2025-09-30 18:34:35.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:34:35 compute-0 nova_compute[265391]: 2025-09-30 18:34:35.802 2 INFO nova.scheduler.client.report [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Deleted allocation for migration 3076aa2f-e697-4bdd-98f3-898820dd8d4b
Sep 30 18:34:35 compute-0 nova_compute[265391]: 2025-09-30 18:34:35.802 2 DEBUG nova.virt.libvirt.driver [None req-5bbdd9fe-04ff-4dd1-80b6-ecbd7218c9ad 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 29a2fe9a-add5-43c1-948a-9df854aa4261] Live migration monitoring is all done _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11566
Sep 30 18:34:35 compute-0 sshd-session[337682]: Received disconnect from 14.225.220.107 port 44746:11: Bye Bye [preauth]
Sep 30 18:34:35 compute-0 sshd-session[337682]: Disconnected from invalid user sample 14.225.220.107 port 44746 [preauth]
Sep 30 18:34:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:34:36 compute-0 nova_compute[265391]: 2025-09-30 18:34:36.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:34:36 compute-0 ceph-mon[73755]: pgmap v1680: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 0 B/s wr, 1 op/s
Sep 30 18:34:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:34:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2476786179' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:34:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:34:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2476786179' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:34:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1681: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:34:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:34:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:34:37.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:34:37 compute-0 sudo[337796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:34:37 compute-0 sudo[337796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:34:37 compute-0 sudo[337796]: pam_unix(sudo:session): session closed for user root
Sep 30 18:34:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:34:37.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:34:37.302Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:34:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:34:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:34:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2476786179' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:34:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2476786179' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:34:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:34:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:34:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:34:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:34:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:34:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:34:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:34:38 compute-0 ceph-mon[73755]: pgmap v1681: 353 pgs: 353 active+clean; 200 MiB data, 418 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:34:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1682: 353 pgs: 353 active+clean; 140 MiB data, 380 MiB used, 40 GiB / 40 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 19 op/s
Sep 30 18:34:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:34:38] "GET /metrics HTTP/1.1" 200 46728 "" "Prometheus/2.51.0"
Sep 30 18:34:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:34:38] "GET /metrics HTTP/1.1" 200 46728 "" "Prometheus/2.51.0"
Sep 30 18:34:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:34:38.853Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:34:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:34:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:34:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:34:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:34:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:34:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:34:39.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:34:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:34:39.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:39 compute-0 nova_compute[265391]: 2025-09-30 18:34:39.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:34:40 compute-0 ceph-mon[73755]: pgmap v1682: 353 pgs: 353 active+clean; 140 MiB data, 380 MiB used, 40 GiB / 40 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 19 op/s
Sep 30 18:34:40 compute-0 nova_compute[265391]: 2025-09-30 18:34:40.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:34:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1683: 353 pgs: 353 active+clean; 121 MiB data, 372 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:34:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:34:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:34:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:34:41.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:34:41 compute-0 nova_compute[265391]: 2025-09-30 18:34:41.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:34:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:34:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:34:41.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:34:42 compute-0 ceph-mon[73755]: pgmap v1683: 353 pgs: 353 active+clean; 121 MiB data, 372 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:34:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1684: 353 pgs: 353 active+clean; 121 MiB data, 372 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:34:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:34:43.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:34:43.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:43 compute-0 nova_compute[265391]: 2025-09-30 18:34:43.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:34:43 compute-0 nova_compute[265391]: 2025-09-30 18:34:43.429 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11909
Sep 30 18:34:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:34:43.835Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:34:43 compute-0 nova_compute[265391]: 2025-09-30 18:34:43.935 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11918
Sep 30 18:34:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:34:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:34:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:34:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:34:44 compute-0 ceph-mon[73755]: pgmap v1684: 353 pgs: 353 active+clean; 121 MiB data, 372 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:34:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1685: 353 pgs: 353 active+clean; 41 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Sep 30 18:34:44 compute-0 nova_compute[265391]: 2025-09-30 18:34:44.935 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:34:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:34:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:34:45.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:34:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:34:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:34:45.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:34:45 compute-0 nova_compute[265391]: 2025-09-30 18:34:45.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:34:45 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1617100929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:34:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:34:46 compute-0 nova_compute[265391]: 2025-09-30 18:34:46.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:34:46 compute-0 nova_compute[265391]: 2025-09-30 18:34:46.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:34:46 compute-0 nova_compute[265391]: 2025-09-30 18:34:46.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:34:46 compute-0 ceph-mon[73755]: pgmap v1685: 353 pgs: 353 active+clean; 41 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Sep 30 18:34:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1686: 353 pgs: 353 active+clean; 41 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Sep 30 18:34:46 compute-0 nova_compute[265391]: 2025-09-30 18:34:46.940 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:34:46 compute-0 nova_compute[265391]: 2025-09-30 18:34:46.940 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:34:46 compute-0 nova_compute[265391]: 2025-09-30 18:34:46.941 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:34:46 compute-0 nova_compute[265391]: 2025-09-30 18:34:46.941 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:34:46 compute-0 nova_compute[265391]: 2025-09-30 18:34:46.941 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:34:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:34:47.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 18:34:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:34:47.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 18:34:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:34:47.303Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:34:47 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:34:47 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/24173303' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:34:47 compute-0 nova_compute[265391]: 2025-09-30 18:34:47.414 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:34:47 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/331097220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:34:47 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/24173303' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:34:47 compute-0 nova_compute[265391]: 2025-09-30 18:34:47.540 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:34:47 compute-0 nova_compute[265391]: 2025-09-30 18:34:47.541 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:34:47 compute-0 nova_compute[265391]: 2025-09-30 18:34:47.564 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.023s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:34:47 compute-0 nova_compute[265391]: 2025-09-30 18:34:47.565 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4422MB free_disk=39.992183685302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:34:47 compute-0 nova_compute[265391]: 2025-09-30 18:34:47.565 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:34:47 compute-0 nova_compute[265391]: 2025-09-30 18:34:47.566 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:34:48 compute-0 ceph-mon[73755]: pgmap v1686: 353 pgs: 353 active+clean; 41 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Sep 30 18:34:48 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2946357604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:34:48 compute-0 nova_compute[265391]: 2025-09-30 18:34:48.615 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:34:48 compute-0 nova_compute[265391]: 2025-09-30 18:34:48.616 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:34:47 up  1:38,  0 user,  load average: 0.62, 0.80, 0.85\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:34:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1687: 353 pgs: 353 active+clean; 41 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Sep 30 18:34:48 compute-0 nova_compute[265391]: 2025-09-30 18:34:48.644 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Refreshing inventories for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:822
Sep 30 18:34:48 compute-0 nova_compute[265391]: 2025-09-30 18:34:48.660 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Updating ProviderTree inventory for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:786
Sep 30 18:34:48 compute-0 nova_compute[265391]: 2025-09-30 18:34:48.660 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Updating inventory in ProviderTree for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Sep 30 18:34:48 compute-0 nova_compute[265391]: 2025-09-30 18:34:48.673 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Refreshing aggregate associations for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2, aggregates: None _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:831
Sep 30 18:34:48 compute-0 nova_compute[265391]: 2025-09-30 18:34:48.690 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Refreshing trait associations for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2, traits: COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SOUND_MODEL_SB16,COMPUTE_ARCH_X86_64,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIRTIO_PACKED,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_TIS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOUND_MODEL_ICH9,COMPUTE_SOUND_MODEL_USB,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOUND_MODEL_PCSPK,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ADDRESS_SPACE_EMULATED,COMPUTE_RESCUE_BFV,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_STATELESS_FIRMWARE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_IGB,HW_ARCH_X86_64,COMPUTE_ACCELERATORS,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOUND_MODEL_ES1370,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_CRB,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_VIRTIO_FS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ADDRESS_SPACE_PASSTHROUGH,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_F16C,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOUND_MODEL_ICH6,COMPUTE_SOUND_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SOUND_MODEL_AC97,HW_CPU_X86_SSE42 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:843
Sep 30 18:34:48 compute-0 nova_compute[265391]: 2025-09-30 18:34:48.714 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:34:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:34:48] "GET /metrics HTTP/1.1" 200 46728 "" "Prometheus/2.51.0"
Sep 30 18:34:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:34:48] "GET /metrics HTTP/1.1" 200 46728 "" "Prometheus/2.51.0"
Sep 30 18:34:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:34:48.854Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:34:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:34:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:34:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:34:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:34:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:34:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:34:49.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:34:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:34:49 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/591739360' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:34:49 compute-0 nova_compute[265391]: 2025-09-30 18:34:49.145 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:34:49 compute-0 nova_compute[265391]: 2025-09-30 18:34:49.150 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:34:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:34:49.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:49 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/591739360' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:34:49 compute-0 nova_compute[265391]: 2025-09-30 18:34:49.659 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:34:50 compute-0 nova_compute[265391]: 2025-09-30 18:34:50.170 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:34:50 compute-0 nova_compute[265391]: 2025-09-30 18:34:50.171 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.605s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:34:50 compute-0 nova_compute[265391]: 2025-09-30 18:34:50.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:34:50 compute-0 ceph-mon[73755]: pgmap v1687: 353 pgs: 353 active+clean; 41 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Sep 30 18:34:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1688: 353 pgs: 353 active+clean; 41 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 36 op/s
Sep 30 18:34:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:34:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:34:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:34:51.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:34:51 compute-0 nova_compute[265391]: 2025-09-30 18:34:51.170 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:34:51 compute-0 nova_compute[265391]: 2025-09-30 18:34:51.171 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:34:51 compute-0 nova_compute[265391]: 2025-09-30 18:34:51.171 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:34:51 compute-0 nova_compute[265391]: 2025-09-30 18:34:51.172 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:34:51 compute-0 nova_compute[265391]: 2025-09-30 18:34:51.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:34:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:34:51.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:51 compute-0 ceph-mon[73755]: pgmap v1688: 353 pgs: 353 active+clean; 41 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 36 op/s
Sep 30 18:34:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:34:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:34:52 compute-0 nova_compute[265391]: 2025-09-30 18:34:52.425 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:34:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1689: 353 pgs: 353 active+clean; 41 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:34:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:34:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:34:53.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:34:53.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:53 compute-0 podman[337884]: 2025-09-30 18:34:53.572261929 +0000 UTC m=+0.079672397 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Sep 30 18:34:53 compute-0 podman[337882]: 2025-09-30 18:34:53.580460482 +0000 UTC m=+0.101428311 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Sep 30 18:34:53 compute-0 podman[337883]: 2025-09-30 18:34:53.607213255 +0000 UTC m=+0.122089136 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20250930, tcib_managed=true, config_id=ovn_controller)
Sep 30 18:34:53 compute-0 ceph-mon[73755]: pgmap v1689: 353 pgs: 353 active+clean; 41 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:34:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:34:53.836Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:34:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:34:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:34:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:34:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:34:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:34:54.323 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:34:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:34:54.323 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:34:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:34:54.324 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:34:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1690: 353 pgs: 353 active+clean; 41 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:34:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:34:55.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:34:55.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:55 compute-0 nova_compute[265391]: 2025-09-30 18:34:55.440 2 DEBUG oslo_concurrency.lockutils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "342a3981-de33-491a-974b-5566045fba97" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:34:55 compute-0 nova_compute[265391]: 2025-09-30 18:34:55.440 2 DEBUG oslo_concurrency.lockutils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "342a3981-de33-491a-974b-5566045fba97" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:34:55 compute-0 nova_compute[265391]: 2025-09-30 18:34:55.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:34:55 compute-0 ceph-mon[73755]: pgmap v1690: 353 pgs: 353 active+clean; 41 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:34:55 compute-0 nova_compute[265391]: 2025-09-30 18:34:55.946 2 DEBUG nova.compute.manager [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Sep 30 18:34:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:34:56 compute-0 nova_compute[265391]: 2025-09-30 18:34:56.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:34:56 compute-0 nova_compute[265391]: 2025-09-30 18:34:56.501 2 DEBUG oslo_concurrency.lockutils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:34:56 compute-0 nova_compute[265391]: 2025-09-30 18:34:56.502 2 DEBUG oslo_concurrency.lockutils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:34:56 compute-0 nova_compute[265391]: 2025-09-30 18:34:56.511 2 DEBUG nova.virt.hardware [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Sep 30 18:34:56 compute-0 nova_compute[265391]: 2025-09-30 18:34:56.511 2 INFO nova.compute.claims [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Claim successful on node compute-0.ctlplane.example.com
Sep 30 18:34:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1691: 353 pgs: 353 active+clean; 41 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:34:57 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Sep 30 18:34:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:34:57.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:34:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:34:57.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:34:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:34:57.304Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:34:57 compute-0 sudo[337951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:34:57 compute-0 sudo[337951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:34:57 compute-0 sudo[337951]: pam_unix(sudo:session): session closed for user root
Sep 30 18:34:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:34:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3353939610' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:34:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:34:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3353939610' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:34:57 compute-0 nova_compute[265391]: 2025-09-30 18:34:57.561 2 DEBUG oslo_concurrency.processutils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:34:57 compute-0 ceph-mon[73755]: pgmap v1691: 353 pgs: 353 active+clean; 41 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:34:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3353939610' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:34:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3353939610' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:34:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:34:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4158659323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:34:57 compute-0 nova_compute[265391]: 2025-09-30 18:34:57.995 2 DEBUG oslo_concurrency.processutils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:34:58 compute-0 nova_compute[265391]: 2025-09-30 18:34:58.000 2 DEBUG nova.compute.provider_tree [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:34:58 compute-0 nova_compute[265391]: 2025-09-30 18:34:58.510 2 DEBUG nova.scheduler.client.report [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:34:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1692: 353 pgs: 353 active+clean; 41 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:34:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/4158659323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:34:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:34:58] "GET /metrics HTTP/1.1" 200 46708 "" "Prometheus/2.51.0"
Sep 30 18:34:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:34:58] "GET /metrics HTTP/1.1" 200 46708 "" "Prometheus/2.51.0"
Sep 30 18:34:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:34:58.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:34:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:34:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:34:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:34:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:34:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:34:59 compute-0 nova_compute[265391]: 2025-09-30 18:34:59.020 2 DEBUG oslo_concurrency.lockutils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.518s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:34:59 compute-0 nova_compute[265391]: 2025-09-30 18:34:59.021 2 DEBUG nova.compute.manager [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Sep 30 18:34:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:34:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:34:59.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:34:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:34:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:34:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:34:59.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:34:59 compute-0 nova_compute[265391]: 2025-09-30 18:34:59.535 2 DEBUG nova.compute.manager [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Sep 30 18:34:59 compute-0 nova_compute[265391]: 2025-09-30 18:34:59.535 2 DEBUG nova.network.neutron [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Sep 30 18:34:59 compute-0 nova_compute[265391]: 2025-09-30 18:34:59.536 2 WARNING neutronclient.v2_0.client [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:34:59 compute-0 nova_compute[265391]: 2025-09-30 18:34:59.536 2 WARNING neutronclient.v2_0.client [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:34:59 compute-0 podman[276673]: time="2025-09-30T18:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:34:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:34:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10299 "" "Go-http-client/1.1"
Sep 30 18:34:59 compute-0 ceph-mon[73755]: pgmap v1692: 353 pgs: 353 active+clean; 41 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:35:00 compute-0 nova_compute[265391]: 2025-09-30 18:35:00.047 2 INFO nova.virt.libvirt.driver [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Sep 30 18:35:00 compute-0 nova_compute[265391]: 2025-09-30 18:35:00.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:35:00 compute-0 nova_compute[265391]: 2025-09-30 18:35:00.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:00 compute-0 nova_compute[265391]: 2025-09-30 18:35:00.558 2 DEBUG nova.compute.manager [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Sep 30 18:35:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1693: 353 pgs: 353 active+clean; 41 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:35:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:35:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:35:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:35:01.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:35:01 compute-0 nova_compute[265391]: 2025-09-30 18:35:01.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:35:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:35:01.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:35:01 compute-0 openstack_network_exporter[279566]: ERROR   18:35:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:35:01 compute-0 openstack_network_exporter[279566]: ERROR   18:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:35:01 compute-0 openstack_network_exporter[279566]: ERROR   18:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:35:01 compute-0 openstack_network_exporter[279566]: ERROR   18:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:35:01 compute-0 openstack_network_exporter[279566]: ERROR   18:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:35:01 compute-0 nova_compute[265391]: 2025-09-30 18:35:01.578 2 DEBUG nova.compute.manager [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Sep 30 18:35:01 compute-0 nova_compute[265391]: 2025-09-30 18:35:01.579 2 DEBUG nova.virt.libvirt.driver [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Sep 30 18:35:01 compute-0 nova_compute[265391]: 2025-09-30 18:35:01.579 2 INFO nova.virt.libvirt.driver [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Creating image(s)
Sep 30 18:35:01 compute-0 nova_compute[265391]: 2025-09-30 18:35:01.602 2 DEBUG nova.storage.rbd_utils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 342a3981-de33-491a-974b-5566045fba97_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:35:01 compute-0 nova_compute[265391]: 2025-09-30 18:35:01.634 2 DEBUG nova.storage.rbd_utils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 342a3981-de33-491a-974b-5566045fba97_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:35:01 compute-0 nova_compute[265391]: 2025-09-30 18:35:01.661 2 DEBUG nova.storage.rbd_utils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 342a3981-de33-491a-974b-5566045fba97_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:35:01 compute-0 nova_compute[265391]: 2025-09-30 18:35:01.664 2 DEBUG oslo_concurrency.processutils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:35:01 compute-0 nova_compute[265391]: 2025-09-30 18:35:01.721 2 DEBUG oslo_concurrency.processutils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:35:01 compute-0 nova_compute[265391]: 2025-09-30 18:35:01.721 2 DEBUG oslo_concurrency.lockutils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "cb2d580238c9b109feae7f1462613dc547671457" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:35:01 compute-0 nova_compute[265391]: 2025-09-30 18:35:01.722 2 DEBUG oslo_concurrency.lockutils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:35:01 compute-0 nova_compute[265391]: 2025-09-30 18:35:01.722 2 DEBUG oslo_concurrency.lockutils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:35:01 compute-0 nova_compute[265391]: 2025-09-30 18:35:01.743 2 DEBUG nova.storage.rbd_utils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 342a3981-de33-491a-974b-5566045fba97_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:35:01 compute-0 nova_compute[265391]: 2025-09-30 18:35:01.746 2 DEBUG oslo_concurrency.processutils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 342a3981-de33-491a-974b-5566045fba97_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:35:01 compute-0 ceph-mon[73755]: pgmap v1693: 353 pgs: 353 active+clean; 41 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:35:01 compute-0 nova_compute[265391]: 2025-09-30 18:35:01.980 2 DEBUG oslo_concurrency.processutils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 342a3981-de33-491a-974b-5566045fba97_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.234s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:35:02 compute-0 nova_compute[265391]: 2025-09-30 18:35:02.046 2 DEBUG nova.storage.rbd_utils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] resizing rbd image 342a3981-de33-491a-974b-5566045fba97_disk to 1073741824 resize /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:288
Sep 30 18:35:02 compute-0 nova_compute[265391]: 2025-09-30 18:35:02.147 2 DEBUG nova.virt.libvirt.driver [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Sep 30 18:35:02 compute-0 nova_compute[265391]: 2025-09-30 18:35:02.148 2 DEBUG nova.virt.libvirt.driver [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Ensure instance console log exists: /var/lib/nova/instances/342a3981-de33-491a-974b-5566045fba97/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Sep 30 18:35:02 compute-0 nova_compute[265391]: 2025-09-30 18:35:02.149 2 DEBUG oslo_concurrency.lockutils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:35:02 compute-0 nova_compute[265391]: 2025-09-30 18:35:02.149 2 DEBUG oslo_concurrency.lockutils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:35:02 compute-0 nova_compute[265391]: 2025-09-30 18:35:02.149 2 DEBUG oslo_concurrency.lockutils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:35:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1694: 353 pgs: 353 active+clean; 41 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:35:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:35:03.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:35:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:35:03.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:35:03 compute-0 nova_compute[265391]: 2025-09-30 18:35:03.260 2 DEBUG nova.network.neutron [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Successfully created port: f05039eb-b7e1-4072-bc17-63c6787538a1 _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Sep 30 18:35:03 compute-0 ceph-mon[73755]: pgmap v1694: 353 pgs: 353 active+clean; 41 MiB data, 325 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:35:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:35:03.838Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:35:03 compute-0 nova_compute[265391]: 2025-09-30 18:35:03.934 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:35:03 compute-0 nova_compute[265391]: 2025-09-30 18:35:03.935 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.12/site-packages/nova/compute/manager.py:11947
Sep 30 18:35:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:35:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:35:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:35:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:35:04 compute-0 podman[338174]: 2025-09-30 18:35:04.527491539 +0000 UTC m=+0.061251679 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.build-date=20250930, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true)
Sep 30 18:35:04 compute-0 podman[338175]: 2025-09-30 18:35:04.542123088 +0000 UTC m=+0.069789010 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, vcs-type=git, version=9.6, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc.)
Sep 30 18:35:04 compute-0 podman[338173]: 2025-09-30 18:35:04.562077916 +0000 UTC m=+0.099395688 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Sep 30 18:35:04 compute-0 nova_compute[265391]: 2025-09-30 18:35:04.570 2 DEBUG nova.network.neutron [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Successfully updated port: f05039eb-b7e1-4072-bc17-63c6787538a1 _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Sep 30 18:35:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1695: 353 pgs: 353 active+clean; 88 MiB data, 346 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:35:04 compute-0 nova_compute[265391]: 2025-09-30 18:35:04.642 2 DEBUG nova.compute.manager [req-9700fdd7-4258-46f8-94c4-2f9aa95ee9b7 req-2a8e869a-5d64-496f-91fd-230fdaeb4690 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Received event network-changed-f05039eb-b7e1-4072-bc17-63c6787538a1 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:35:04 compute-0 nova_compute[265391]: 2025-09-30 18:35:04.643 2 DEBUG nova.compute.manager [req-9700fdd7-4258-46f8-94c4-2f9aa95ee9b7 req-2a8e869a-5d64-496f-91fd-230fdaeb4690 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Refreshing instance network info cache due to event network-changed-f05039eb-b7e1-4072-bc17-63c6787538a1. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:35:04 compute-0 nova_compute[265391]: 2025-09-30 18:35:04.643 2 DEBUG oslo_concurrency.lockutils [req-9700fdd7-4258-46f8-94c4-2f9aa95ee9b7 req-2a8e869a-5d64-496f-91fd-230fdaeb4690 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-342a3981-de33-491a-974b-5566045fba97" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:35:04 compute-0 nova_compute[265391]: 2025-09-30 18:35:04.643 2 DEBUG oslo_concurrency.lockutils [req-9700fdd7-4258-46f8-94c4-2f9aa95ee9b7 req-2a8e869a-5d64-496f-91fd-230fdaeb4690 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-342a3981-de33-491a-974b-5566045fba97" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:35:04 compute-0 nova_compute[265391]: 2025-09-30 18:35:04.643 2 DEBUG nova.network.neutron [req-9700fdd7-4258-46f8-94c4-2f9aa95ee9b7 req-2a8e869a-5d64-496f-91fd-230fdaeb4690 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Refreshing network info cache for port f05039eb-b7e1-4072-bc17-63c6787538a1 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:35:05 compute-0 nova_compute[265391]: 2025-09-30 18:35:05.077 2 DEBUG oslo_concurrency.lockutils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "refresh_cache-342a3981-de33-491a-974b-5566045fba97" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:35:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:35:05.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:05 compute-0 nova_compute[265391]: 2025-09-30 18:35:05.150 2 WARNING neutronclient.v2_0.client [req-9700fdd7-4258-46f8-94c4-2f9aa95ee9b7 req-2a8e869a-5d64-496f-91fd-230fdaeb4690 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:35:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:35:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:35:05.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:35:05 compute-0 nova_compute[265391]: 2025-09-30 18:35:05.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:05 compute-0 nova_compute[265391]: 2025-09-30 18:35:05.590 2 DEBUG nova.network.neutron [req-9700fdd7-4258-46f8-94c4-2f9aa95ee9b7 req-2a8e869a-5d64-496f-91fd-230fdaeb4690 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:35:05 compute-0 ceph-mon[73755]: pgmap v1695: 353 pgs: 353 active+clean; 88 MiB data, 346 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:35:05 compute-0 nova_compute[265391]: 2025-09-30 18:35:05.827 2 DEBUG nova.network.neutron [req-9700fdd7-4258-46f8-94c4-2f9aa95ee9b7 req-2a8e869a-5d64-496f-91fd-230fdaeb4690 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:35:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:35:06 compute-0 nova_compute[265391]: 2025-09-30 18:35:06.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:06 compute-0 nova_compute[265391]: 2025-09-30 18:35:06.333 2 DEBUG oslo_concurrency.lockutils [req-9700fdd7-4258-46f8-94c4-2f9aa95ee9b7 req-2a8e869a-5d64-496f-91fd-230fdaeb4690 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-342a3981-de33-491a-974b-5566045fba97" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:35:06 compute-0 nova_compute[265391]: 2025-09-30 18:35:06.333 2 DEBUG oslo_concurrency.lockutils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquired lock "refresh_cache-342a3981-de33-491a-974b-5566045fba97" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:35:06 compute-0 nova_compute[265391]: 2025-09-30 18:35:06.334 2 DEBUG nova.network.neutron [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:35:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1696: 353 pgs: 353 active+clean; 88 MiB data, 346 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:35:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:35:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:35:07.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:35:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:35:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:35:07.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:35:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:35:07.305Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:35:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:35:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:35:07 compute-0 nova_compute[265391]: 2025-09-30 18:35:07.417 2 DEBUG nova.network.neutron [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:35:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:35:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:35:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:35:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:35:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:35:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:35:07 compute-0 nova_compute[265391]: 2025-09-30 18:35:07.697 2 WARNING neutronclient.v2_0.client [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:35:07 compute-0 ceph-mon[73755]: pgmap v1696: 353 pgs: 353 active+clean; 88 MiB data, 346 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:35:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:35:07 compute-0 nova_compute[265391]: 2025-09-30 18:35:07.912 2 DEBUG nova.network.neutron [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Updating instance_info_cache with network_info: [{"id": "f05039eb-b7e1-4072-bc17-63c6787538a1", "address": "fa:16:3e:84:d4:b5", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf05039eb-b7", "ovs_interfaceid": "f05039eb-b7e1-4072-bc17-63c6787538a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005189955618773198 of space, bias 1.0, pg target 0.10379911237546396 quantized to 32 (current 32)
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:35:08
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', '.mgr', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'vms', 'default.rgw.meta', 'images', '.nfs']
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:35:08 compute-0 nova_compute[265391]: 2025-09-30 18:35:08.420 2 DEBUG oslo_concurrency.lockutils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Releasing lock "refresh_cache-342a3981-de33-491a-974b-5566045fba97" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:35:08 compute-0 nova_compute[265391]: 2025-09-30 18:35:08.420 2 DEBUG nova.compute.manager [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Instance network_info: |[{"id": "f05039eb-b7e1-4072-bc17-63c6787538a1", "address": "fa:16:3e:84:d4:b5", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf05039eb-b7", "ovs_interfaceid": "f05039eb-b7e1-4072-bc17-63c6787538a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Sep 30 18:35:08 compute-0 nova_compute[265391]: 2025-09-30 18:35:08.425 2 DEBUG nova.virt.libvirt.driver [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Start _get_guest_xml network_info=[{"id": "f05039eb-b7e1-4072-bc17-63c6787538a1", "address": "fa:16:3e:84:d4:b5", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf05039eb-b7", "ovs_interfaceid": "f05039eb-b7e1-4072-bc17-63c6787538a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'image_id': '5b99cbca-b655-4be5-8343-cf504005c42e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Sep 30 18:35:08 compute-0 nova_compute[265391]: 2025-09-30 18:35:08.430 2 WARNING nova.virt.libvirt.driver [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:35:08 compute-0 nova_compute[265391]: 2025-09-30 18:35:08.432 2 DEBUG nova.virt.driver [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='5b99cbca-b655-4be5-8343-cf504005c42e', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteStrategies-server-2049754641', uuid='342a3981-de33-491a-974b-5566045fba97'), owner=OwnerMeta(userid='623ef4a55c9e4fc28bb65e49246b5008', username='tempest-TestExecuteStrategies-1883747907-project-admin', projectid='c634e1c17ed54907969576a0eb8eff50', projectname='tempest-TestExecuteStrategies-1883747907'), image=ImageMeta(id='5b99cbca-b655-4be5-8343-cf504005c42e', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "f05039eb-b7e1-4072-bc17-63c6787538a1", "address": "fa:16:3e:84:d4:b5", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf05039eb-b7", "ovs_interfaceid": "f05039eb-b7e1-4072-bc17-63c6787538a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20250919142712.b99a882.el10', creation_time=1759257308.4319654) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Sep 30 18:35:08 compute-0 nova_compute[265391]: 2025-09-30 18:35:08.437 2 DEBUG nova.virt.libvirt.host [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Sep 30 18:35:08 compute-0 nova_compute[265391]: 2025-09-30 18:35:08.438 2 DEBUG nova.virt.libvirt.host [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Sep 30 18:35:08 compute-0 nova_compute[265391]: 2025-09-30 18:35:08.442 2 DEBUG nova.virt.libvirt.host [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Sep 30 18:35:08 compute-0 nova_compute[265391]: 2025-09-30 18:35:08.442 2 DEBUG nova.virt.libvirt.host [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Sep 30 18:35:08 compute-0 nova_compute[265391]: 2025-09-30 18:35:08.443 2 DEBUG nova.virt.libvirt.driver [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Sep 30 18:35:08 compute-0 nova_compute[265391]: 2025-09-30 18:35:08.443 2 DEBUG nova.virt.hardware [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-09-30T18:04:50Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Sep 30 18:35:08 compute-0 nova_compute[265391]: 2025-09-30 18:35:08.444 2 DEBUG nova.virt.hardware [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Sep 30 18:35:08 compute-0 nova_compute[265391]: 2025-09-30 18:35:08.445 2 DEBUG nova.virt.hardware [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Sep 30 18:35:08 compute-0 nova_compute[265391]: 2025-09-30 18:35:08.445 2 DEBUG nova.virt.hardware [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Sep 30 18:35:08 compute-0 nova_compute[265391]: 2025-09-30 18:35:08.446 2 DEBUG nova.virt.hardware [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Sep 30 18:35:08 compute-0 nova_compute[265391]: 2025-09-30 18:35:08.446 2 DEBUG nova.virt.hardware [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Sep 30 18:35:08 compute-0 nova_compute[265391]: 2025-09-30 18:35:08.446 2 DEBUG nova.virt.hardware [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Sep 30 18:35:08 compute-0 nova_compute[265391]: 2025-09-30 18:35:08.447 2 DEBUG nova.virt.hardware [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Sep 30 18:35:08 compute-0 nova_compute[265391]: 2025-09-30 18:35:08.447 2 DEBUG nova.virt.hardware [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Sep 30 18:35:08 compute-0 nova_compute[265391]: 2025-09-30 18:35:08.448 2 DEBUG nova.virt.hardware [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Sep 30 18:35:08 compute-0 nova_compute[265391]: 2025-09-30 18:35:08.448 2 DEBUG nova.virt.hardware [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Sep 30 18:35:08 compute-0 nova_compute[265391]: 2025-09-30 18:35:08.453 2 DEBUG oslo_concurrency.processutils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1697: 353 pgs: 353 active+clean; 88 MiB data, 346 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:35:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:35:08] "GET /metrics HTTP/1.1" 200 46724 "" "Prometheus/2.51.0"
Sep 30 18:35:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:35:08] "GET /metrics HTTP/1.1" 200 46724 "" "Prometheus/2.51.0"
Sep 30 18:35:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:35:08.856Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:35:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:35:08 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2763612461' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:35:08 compute-0 nova_compute[265391]: 2025-09-30 18:35:08.911 2 DEBUG oslo_concurrency.processutils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:35:08 compute-0 nova_compute[265391]: 2025-09-30 18:35:08.939 2 DEBUG nova.storage.rbd_utils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 342a3981-de33-491a-974b-5566045fba97_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:35:08 compute-0 nova_compute[265391]: 2025-09-30 18:35:08.943 2 DEBUG oslo_concurrency.processutils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:35:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:35:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:35:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:35:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:35:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:35:09.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:35:09.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:35:09 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1596084137' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:35:09 compute-0 nova_compute[265391]: 2025-09-30 18:35:09.390 2 DEBUG oslo_concurrency.processutils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:35:09 compute-0 nova_compute[265391]: 2025-09-30 18:35:09.393 2 DEBUG nova.virt.libvirt.vif [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:34:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-2049754641',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-2049754641',id=26,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c634e1c17ed54907969576a0eb8eff50',ramdisk_id='',reservation_id='r-ryu3h0vp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteStrategies-1883747907',owner_user_name='tempest-TestExecuteStrategies-1883747907-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:35:00Z,user_data=None,user_id='623ef4a55c9e4fc28bb65e49246b5008',uuid=342a3981-de33-491a-974b-5566045fba97,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f05039eb-b7e1-4072-bc17-63c6787538a1", "address": "fa:16:3e:84:d4:b5", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf05039eb-b7", "ovs_interfaceid": "f05039eb-b7e1-4072-bc17-63c6787538a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:35:09 compute-0 nova_compute[265391]: 2025-09-30 18:35:09.394 2 DEBUG nova.network.os_vif_util [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converting VIF {"id": "f05039eb-b7e1-4072-bc17-63c6787538a1", "address": "fa:16:3e:84:d4:b5", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf05039eb-b7", "ovs_interfaceid": "f05039eb-b7e1-4072-bc17-63c6787538a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:35:09 compute-0 nova_compute[265391]: 2025-09-30 18:35:09.396 2 DEBUG nova.network.os_vif_util [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:d4:b5,bridge_name='br-int',has_traffic_filtering=True,id=f05039eb-b7e1-4072-bc17-63c6787538a1,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf05039eb-b7') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:35:09 compute-0 nova_compute[265391]: 2025-09-30 18:35:09.398 2 DEBUG nova.objects.instance [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lazy-loading 'pci_devices' on Instance uuid 342a3981-de33-491a-974b-5566045fba97 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:35:09 compute-0 ceph-mon[73755]: pgmap v1697: 353 pgs: 353 active+clean; 88 MiB data, 346 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:35:09 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2763612461' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:35:09 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1596084137' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:35:09 compute-0 nova_compute[265391]: 2025-09-30 18:35:09.907 2 DEBUG nova.virt.libvirt.driver [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] End _get_guest_xml xml=<domain type="kvm">
Sep 30 18:35:09 compute-0 nova_compute[265391]:   <uuid>342a3981-de33-491a-974b-5566045fba97</uuid>
Sep 30 18:35:09 compute-0 nova_compute[265391]:   <name>instance-0000001a</name>
Sep 30 18:35:09 compute-0 nova_compute[265391]:   <memory>131072</memory>
Sep 30 18:35:09 compute-0 nova_compute[265391]:   <vcpu>1</vcpu>
Sep 30 18:35:09 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:35:09 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteStrategies-server-2049754641</nova:name>
Sep 30 18:35:09 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:35:08</nova:creationTime>
Sep 30 18:35:09 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:35:09 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:35:09 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:35:09 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:35:09 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:35:09 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:35:09 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:35:09 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:35:09 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:35:09 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:35:09 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:35:09 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:35:09 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:35:09 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:35:09 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:35:09 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:35:09 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:35:09 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:35:09 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:35:09 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:35:09 compute-0 nova_compute[265391]:         <nova:user uuid="623ef4a55c9e4fc28bb65e49246b5008">tempest-TestExecuteStrategies-1883747907-project-admin</nova:user>
Sep 30 18:35:09 compute-0 nova_compute[265391]:         <nova:project uuid="c634e1c17ed54907969576a0eb8eff50">tempest-TestExecuteStrategies-1883747907</nova:project>
Sep 30 18:35:09 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:35:09 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:35:09 compute-0 nova_compute[265391]:         <nova:port uuid="f05039eb-b7e1-4072-bc17-63c6787538a1">
Sep 30 18:35:09 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:35:09 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:35:09 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:35:09 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <system>
Sep 30 18:35:09 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:35:09 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:35:09 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:35:09 compute-0 nova_compute[265391]:       <entry name="serial">342a3981-de33-491a-974b-5566045fba97</entry>
Sep 30 18:35:09 compute-0 nova_compute[265391]:       <entry name="uuid">342a3981-de33-491a-974b-5566045fba97</entry>
Sep 30 18:35:09 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     </system>
Sep 30 18:35:09 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:35:09 compute-0 nova_compute[265391]:   <os>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="q35">hvm</type>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:   </os>
Sep 30 18:35:09 compute-0 nova_compute[265391]:   <features>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <vmcoreinfo/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:   </features>
Sep 30 18:35:09 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:35:09 compute-0 nova_compute[265391]:   <cpu mode="host-model" match="exact">
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <topology sockets="1" cores="1" threads="1"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:35:09 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:35:09 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/342a3981-de33-491a-974b-5566045fba97_disk">
Sep 30 18:35:09 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:       </source>
Sep 30 18:35:09 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:35:09 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:35:09 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:35:09 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/342a3981-de33-491a-974b-5566045fba97_disk.config">
Sep 30 18:35:09 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:       </source>
Sep 30 18:35:09 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:35:09 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:35:09 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:35:09 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:84:d4:b5"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:       <target dev="tapf05039eb-b7"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:35:09 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/342a3981-de33-491a-974b-5566045fba97/console.log" append="off"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <video>
Sep 30 18:35:09 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     </video>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:35:09 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <controller type="usb" index="0"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:35:09 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:35:09 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:35:09 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:35:09 compute-0 nova_compute[265391]: </domain>
Sep 30 18:35:09 compute-0 nova_compute[265391]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Sep 30 18:35:09 compute-0 nova_compute[265391]: 2025-09-30 18:35:09.909 2 DEBUG nova.compute.manager [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Preparing to wait for external event network-vif-plugged-f05039eb-b7e1-4072-bc17-63c6787538a1 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:35:09 compute-0 nova_compute[265391]: 2025-09-30 18:35:09.909 2 DEBUG oslo_concurrency.lockutils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "342a3981-de33-491a-974b-5566045fba97-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:35:09 compute-0 nova_compute[265391]: 2025-09-30 18:35:09.909 2 DEBUG oslo_concurrency.lockutils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "342a3981-de33-491a-974b-5566045fba97-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:35:09 compute-0 nova_compute[265391]: 2025-09-30 18:35:09.909 2 DEBUG oslo_concurrency.lockutils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "342a3981-de33-491a-974b-5566045fba97-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:35:09 compute-0 nova_compute[265391]: 2025-09-30 18:35:09.910 2 DEBUG nova.virt.libvirt.vif [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:34:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-2049754641',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-2049754641',id=26,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c634e1c17ed54907969576a0eb8eff50',ramdisk_id='',reservation_id='r-ryu3h0vp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteStrategies-1883747907',owner_user_name='tempest-TestExecuteStrategies-1883747907-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:35:00Z,user_data=None,user_id='623ef4a55c9e4fc28bb65e49246b5008',uuid=342a3981-de33-491a-974b-5566045fba97,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f05039eb-b7e1-4072-bc17-63c6787538a1", "address": "fa:16:3e:84:d4:b5", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf05039eb-b7", "ovs_interfaceid": "f05039eb-b7e1-4072-bc17-63c6787538a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Sep 30 18:35:09 compute-0 nova_compute[265391]: 2025-09-30 18:35:09.910 2 DEBUG nova.network.os_vif_util [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converting VIF {"id": "f05039eb-b7e1-4072-bc17-63c6787538a1", "address": "fa:16:3e:84:d4:b5", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf05039eb-b7", "ovs_interfaceid": "f05039eb-b7e1-4072-bc17-63c6787538a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:35:09 compute-0 nova_compute[265391]: 2025-09-30 18:35:09.911 2 DEBUG nova.network.os_vif_util [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:d4:b5,bridge_name='br-int',has_traffic_filtering=True,id=f05039eb-b7e1-4072-bc17-63c6787538a1,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf05039eb-b7') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:35:09 compute-0 nova_compute[265391]: 2025-09-30 18:35:09.911 2 DEBUG os_vif [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:d4:b5,bridge_name='br-int',has_traffic_filtering=True,id=f05039eb-b7e1-4072-bc17-63c6787538a1,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf05039eb-b7') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Sep 30 18:35:09 compute-0 nova_compute[265391]: 2025-09-30 18:35:09.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:09 compute-0 nova_compute[265391]: 2025-09-30 18:35:09.912 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:35:09 compute-0 nova_compute[265391]: 2025-09-30 18:35:09.912 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:35:09 compute-0 nova_compute[265391]: 2025-09-30 18:35:09.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:09 compute-0 nova_compute[265391]: 2025-09-30 18:35:09.913 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '4fd2d94c-8564-5ce6-b996-1a987f922b83', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:35:09 compute-0 nova_compute[265391]: 2025-09-30 18:35:09.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:09 compute-0 nova_compute[265391]: 2025-09-30 18:35:09.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:09 compute-0 nova_compute[265391]: 2025-09-30 18:35:09.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:09 compute-0 nova_compute[265391]: 2025-09-30 18:35:09.950 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf05039eb-b7, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:35:09 compute-0 nova_compute[265391]: 2025-09-30 18:35:09.951 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tapf05039eb-b7, col_values=(('qos', UUID('452d081b-12d2-4e75-9796-973ac4069ac5')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:35:09 compute-0 nova_compute[265391]: 2025-09-30 18:35:09.951 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tapf05039eb-b7, col_values=(('external_ids', {'iface-id': 'f05039eb-b7e1-4072-bc17-63c6787538a1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:84:d4:b5', 'vm-uuid': '342a3981-de33-491a-974b-5566045fba97'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:35:09 compute-0 nova_compute[265391]: 2025-09-30 18:35:09.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:09 compute-0 NetworkManager[45059]: <info>  [1759257309.9534] manager: (tapf05039eb-b7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Sep 30 18:35:09 compute-0 nova_compute[265391]: 2025-09-30 18:35:09.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:35:09 compute-0 nova_compute[265391]: 2025-09-30 18:35:09.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:09 compute-0 nova_compute[265391]: 2025-09-30 18:35:09.961 2 INFO os_vif [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:d4:b5,bridge_name='br-int',has_traffic_filtering=True,id=f05039eb-b7e1-4072-bc17-63c6787538a1,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf05039eb-b7')
Sep 30 18:35:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1698: 353 pgs: 353 active+clean; 88 MiB data, 346 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Sep 30 18:35:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:35:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:35:11.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:11 compute-0 nova_compute[265391]: 2025-09-30 18:35:11.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:35:11.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:11 compute-0 sudo[338302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:35:11 compute-0 sudo[338302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:35:11 compute-0 sudo[338302]: pam_unix(sudo:session): session closed for user root
Sep 30 18:35:11 compute-0 sudo[338327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:35:11 compute-0 sudo[338327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:35:11 compute-0 nova_compute[265391]: 2025-09-30 18:35:11.498 2 DEBUG nova.virt.libvirt.driver [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:35:11 compute-0 nova_compute[265391]: 2025-09-30 18:35:11.499 2 DEBUG nova.virt.libvirt.driver [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:35:11 compute-0 nova_compute[265391]: 2025-09-30 18:35:11.499 2 DEBUG nova.virt.libvirt.driver [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] No VIF found with MAC fa:16:3e:84:d4:b5, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Sep 30 18:35:11 compute-0 nova_compute[265391]: 2025-09-30 18:35:11.499 2 INFO nova.virt.libvirt.driver [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Using config drive
Sep 30 18:35:11 compute-0 nova_compute[265391]: 2025-09-30 18:35:11.517 2 DEBUG nova.storage.rbd_utils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 342a3981-de33-491a-974b-5566045fba97_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:35:11 compute-0 ceph-mon[73755]: pgmap v1698: 353 pgs: 353 active+clean; 88 MiB data, 346 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Sep 30 18:35:12 compute-0 sudo[338327]: pam_unix(sudo:session): session closed for user root
Sep 30 18:35:12 compute-0 nova_compute[265391]: 2025-09-30 18:35:12.029 2 WARNING neutronclient.v2_0.client [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:35:12 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:35:12 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:35:12 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:35:12 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:35:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1699: 353 pgs: 353 active+clean; 88 MiB data, 346 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Sep 30 18:35:12 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:35:12 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:35:12 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:35:12 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:35:12 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:35:12 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:35:12 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:35:12 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:35:12 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:35:12 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:35:12 compute-0 sudo[338405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:35:12 compute-0 sudo[338405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:35:12 compute-0 sudo[338405]: pam_unix(sudo:session): session closed for user root
Sep 30 18:35:12 compute-0 sudo[338430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:35:12 compute-0 sudo[338430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:35:12 compute-0 nova_compute[265391]: 2025-09-30 18:35:12.666 2 INFO nova.virt.libvirt.driver [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Creating config drive at /var/lib/nova/instances/342a3981-de33-491a-974b-5566045fba97/disk.config
Sep 30 18:35:12 compute-0 nova_compute[265391]: 2025-09-30 18:35:12.672 2 DEBUG oslo_concurrency.processutils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/342a3981-de33-491a-974b-5566045fba97/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmphi5hn3qz execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:35:12 compute-0 nova_compute[265391]: 2025-09-30 18:35:12.816 2 DEBUG oslo_concurrency.processutils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/342a3981-de33-491a-974b-5566045fba97/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmphi5hn3qz" returned: 0 in 0.144s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:35:12 compute-0 nova_compute[265391]: 2025-09-30 18:35:12.839 2 DEBUG nova.storage.rbd_utils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 342a3981-de33-491a-974b-5566045fba97_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:35:12 compute-0 nova_compute[265391]: 2025-09-30 18:35:12.843 2 DEBUG oslo_concurrency.processutils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/342a3981-de33-491a-974b-5566045fba97/disk.config 342a3981-de33-491a-974b-5566045fba97_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:35:12 compute-0 podman[338515]: 2025-09-30 18:35:12.892257482 +0000 UTC m=+0.040720687 container create 0663395e363962757e9f1021291dba2a3ee270ca5f5de60a1b5f613b2b758569 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:35:12 compute-0 systemd[1]: Started libpod-conmon-0663395e363962757e9f1021291dba2a3ee270ca5f5de60a1b5f613b2b758569.scope.
Sep 30 18:35:12 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:35:12 compute-0 podman[338515]: 2025-09-30 18:35:12.873132166 +0000 UTC m=+0.021595391 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:35:12 compute-0 podman[338515]: 2025-09-30 18:35:12.97508559 +0000 UTC m=+0.123548825 container init 0663395e363962757e9f1021291dba2a3ee270ca5f5de60a1b5f613b2b758569 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:35:12 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:35:12 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:35:12 compute-0 ceph-mon[73755]: pgmap v1699: 353 pgs: 353 active+clean; 88 MiB data, 346 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Sep 30 18:35:12 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:35:12 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:35:12 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:35:12 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:35:12 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:35:12 compute-0 podman[338515]: 2025-09-30 18:35:12.983403165 +0000 UTC m=+0.131866360 container start 0663395e363962757e9f1021291dba2a3ee270ca5f5de60a1b5f613b2b758569 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_boyd, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:35:12 compute-0 podman[338515]: 2025-09-30 18:35:12.986174937 +0000 UTC m=+0.134638142 container attach 0663395e363962757e9f1021291dba2a3ee270ca5f5de60a1b5f613b2b758569 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Sep 30 18:35:12 compute-0 busy_boyd[338547]: 167 167
Sep 30 18:35:12 compute-0 systemd[1]: libpod-0663395e363962757e9f1021291dba2a3ee270ca5f5de60a1b5f613b2b758569.scope: Deactivated successfully.
Sep 30 18:35:12 compute-0 conmon[338547]: conmon 0663395e363962757e9f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0663395e363962757e9f1021291dba2a3ee270ca5f5de60a1b5f613b2b758569.scope/container/memory.events
Sep 30 18:35:12 compute-0 podman[338515]: 2025-09-30 18:35:12.990776616 +0000 UTC m=+0.139239841 container died 0663395e363962757e9f1021291dba2a3ee270ca5f5de60a1b5f613b2b758569 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_boyd, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:35:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4ce3da589e274c0d3082bbe516ed0a70ad0f1ab0859568016afabc5ef824200-merged.mount: Deactivated successfully.
Sep 30 18:35:13 compute-0 nova_compute[265391]: 2025-09-30 18:35:13.021 2 DEBUG oslo_concurrency.processutils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/342a3981-de33-491a-974b-5566045fba97/disk.config 342a3981-de33-491a-974b-5566045fba97_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:35:13 compute-0 nova_compute[265391]: 2025-09-30 18:35:13.024 2 INFO nova.virt.libvirt.driver [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Deleting local config drive /var/lib/nova/instances/342a3981-de33-491a-974b-5566045fba97/disk.config because it was imported into RBD.
Sep 30 18:35:13 compute-0 podman[338515]: 2025-09-30 18:35:13.02794282 +0000 UTC m=+0.176406025 container remove 0663395e363962757e9f1021291dba2a3ee270ca5f5de60a1b5f613b2b758569 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:35:13 compute-0 systemd[1]: Starting libvirt secret daemon...
Sep 30 18:35:13 compute-0 systemd[1]: libpod-conmon-0663395e363962757e9f1021291dba2a3ee270ca5f5de60a1b5f613b2b758569.scope: Deactivated successfully.
Sep 30 18:35:13 compute-0 systemd[1]: Started libvirt secret daemon.
Sep 30 18:35:13 compute-0 kernel: tapf05039eb-b7: entered promiscuous mode
Sep 30 18:35:13 compute-0 NetworkManager[45059]: <info>  [1759257313.1397] manager: (tapf05039eb-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/89)
Sep 30 18:35:13 compute-0 ovn_controller[156242]: 2025-09-30T18:35:13Z|00222|binding|INFO|Claiming lport f05039eb-b7e1-4072-bc17-63c6787538a1 for this chassis.
Sep 30 18:35:13 compute-0 ovn_controller[156242]: 2025-09-30T18:35:13Z|00223|binding|INFO|f05039eb-b7e1-4072-bc17-63c6787538a1: Claiming fa:16:3e:84:d4:b5 10.100.0.7
Sep 30 18:35:13 compute-0 nova_compute[265391]: 2025-09-30 18:35:13.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:35:13.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:13.149 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:d4:b5 10.100.0.7'], port_security=['fa:16:3e:84:d4:b5 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '342a3981-de33-491a-974b-5566045fba97', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6901f664-336b-42d2-bbf7-58951befc8d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c634e1c17ed54907969576a0eb8eff50', 'neutron:revision_number': '4', 'neutron:security_group_ids': '11cf84c1-9641-409c-b5f5-6c5fe3a8afe5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c24d9de-651b-4bf8-842a-1286ab88b11d, chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=f05039eb-b7e1-4072-bc17-63c6787538a1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:13.150 166158 INFO neutron.agent.ovn.metadata.agent [-] Port f05039eb-b7e1-4072-bc17-63c6787538a1 in datapath 6901f664-336b-42d2-bbf7-58951befc8d1 bound to our chassis
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:13.151 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6901f664-336b-42d2-bbf7-58951befc8d1
Sep 30 18:35:13 compute-0 ovn_controller[156242]: 2025-09-30T18:35:13Z|00224|binding|INFO|Setting lport f05039eb-b7e1-4072-bc17-63c6787538a1 ovn-installed in OVS
Sep 30 18:35:13 compute-0 ovn_controller[156242]: 2025-09-30T18:35:13Z|00225|binding|INFO|Setting lport f05039eb-b7e1-4072-bc17-63c6787538a1 up in Southbound
Sep 30 18:35:13 compute-0 nova_compute[265391]: 2025-09-30 18:35:13.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:13.166 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[2958b79b-b4c4-4945-aaf5-e77e0f0d6616]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:13.166 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6901f664-31 in ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1 namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Sep 30 18:35:13 compute-0 nova_compute[265391]: 2025-09-30 18:35:13.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:13.168 292173 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6901f664-30 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:13.168 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[4fbebffe-fa7b-444e-9751-dba1c1f989bd]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:13.169 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[63a8ba32-d3dc-43c2-a31f-16461bb748a1]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:35:13 compute-0 systemd-udevd[338618]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:13.184 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[657aaf92-fd25-4a23-ac52-5535af6e1970]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:35:13 compute-0 systemd-machined[219917]: New machine qemu-19-instance-0000001a.
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:13.193 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[f50bddb3-3512-4d07-9684-231cfec638d0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:35:13 compute-0 systemd[1]: Started Virtual Machine qemu-19-instance-0000001a.
Sep 30 18:35:13 compute-0 NetworkManager[45059]: <info>  [1759257313.2057] device (tapf05039eb-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 18:35:13 compute-0 NetworkManager[45059]: <info>  [1759257313.2071] device (tapf05039eb-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Sep 30 18:35:13 compute-0 podman[338601]: 2025-09-30 18:35:13.210211635 +0000 UTC m=+0.055069668 container create 9b7855a469d01097a0022d6e160b7ea379bd181a5cb7546fefabeda3b0db2b38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_elion, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:13.230 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[1b66ec61-f2c3-410a-b565-5fa8e76fe51b]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:13.239 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[f4d6003f-c55a-4589-b78b-4ad5c3993aee]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:35:13 compute-0 NetworkManager[45059]: <info>  [1759257313.2402] manager: (tap6901f664-30): new Veth device (/org/freedesktop/NetworkManager/Devices/90)
Sep 30 18:35:13 compute-0 systemd[1]: Started libpod-conmon-9b7855a469d01097a0022d6e160b7ea379bd181a5cb7546fefabeda3b0db2b38.scope.
Sep 30 18:35:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:35:13.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:13.273 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[0f7f7144-651a-4b15-95ce-7b86eacee52d]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:13.276 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[d633ccc6-81c4-4c98-90f9-28a39ff65f0c]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:35:13 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:35:13 compute-0 podman[338601]: 2025-09-30 18:35:13.190261818 +0000 UTC m=+0.035119881 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:35:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5af1b7c57281102700a1c2e6dd1d906a1847e3d91ebca345c318edbe0138d1f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:35:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5af1b7c57281102700a1c2e6dd1d906a1847e3d91ebca345c318edbe0138d1f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:35:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5af1b7c57281102700a1c2e6dd1d906a1847e3d91ebca345c318edbe0138d1f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:35:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5af1b7c57281102700a1c2e6dd1d906a1847e3d91ebca345c318edbe0138d1f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:35:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5af1b7c57281102700a1c2e6dd1d906a1847e3d91ebca345c318edbe0138d1f0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:35:13 compute-0 nova_compute[265391]: 2025-09-30 18:35:13.307 2 DEBUG nova.compute.manager [req-209b45da-bdfd-4f19-8777-e0a53551ef9c req-4aaec3be-161e-434a-9e7a-32c13e7e4179 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Received event network-vif-plugged-f05039eb-b7e1-4072-bc17-63c6787538a1 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:35:13 compute-0 nova_compute[265391]: 2025-09-30 18:35:13.308 2 DEBUG oslo_concurrency.lockutils [req-209b45da-bdfd-4f19-8777-e0a53551ef9c req-4aaec3be-161e-434a-9e7a-32c13e7e4179 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "342a3981-de33-491a-974b-5566045fba97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:35:13 compute-0 nova_compute[265391]: 2025-09-30 18:35:13.309 2 DEBUG oslo_concurrency.lockutils [req-209b45da-bdfd-4f19-8777-e0a53551ef9c req-4aaec3be-161e-434a-9e7a-32c13e7e4179 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "342a3981-de33-491a-974b-5566045fba97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:35:13 compute-0 nova_compute[265391]: 2025-09-30 18:35:13.309 2 DEBUG oslo_concurrency.lockutils [req-209b45da-bdfd-4f19-8777-e0a53551ef9c req-4aaec3be-161e-434a-9e7a-32c13e7e4179 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "342a3981-de33-491a-974b-5566045fba97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:35:13 compute-0 nova_compute[265391]: 2025-09-30 18:35:13.309 2 DEBUG nova.compute.manager [req-209b45da-bdfd-4f19-8777-e0a53551ef9c req-4aaec3be-161e-434a-9e7a-32c13e7e4179 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Processing event network-vif-plugged-f05039eb-b7e1-4072-bc17-63c6787538a1 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:35:13 compute-0 podman[338601]: 2025-09-30 18:35:13.310313961 +0000 UTC m=+0.155172044 container init 9b7855a469d01097a0022d6e160b7ea379bd181a5cb7546fefabeda3b0db2b38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_elion, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:35:13 compute-0 NetworkManager[45059]: <info>  [1759257313.3160] device (tap6901f664-30): carrier: link connected
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:13.323 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[84a470b2-8921-4142-99eb-17b4ca69c911]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:35:13 compute-0 podman[338601]: 2025-09-30 18:35:13.325964696 +0000 UTC m=+0.170822729 container start 9b7855a469d01097a0022d6e160b7ea379bd181a5cb7546fefabeda3b0db2b38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_elion, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Sep 30 18:35:13 compute-0 podman[338601]: 2025-09-30 18:35:13.329121358 +0000 UTC m=+0.173979391 container attach 9b7855a469d01097a0022d6e160b7ea379bd181a5cb7546fefabeda3b0db2b38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_elion, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:13.341 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[6b7c70f5-068e-449b-a5f9-fa9c1e1401cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6901f664-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:41:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591200, 'reachable_time': 17513, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338659, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:13.376 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[ee84e5b9-aa9f-4e97-b03a-8a0a404242d0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe35:412a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591200, 'tstamp': 591200}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338660, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:13.406 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[e211fb56-234a-43a9-8462-9b3e52b96d05]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6901f664-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:41:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591200, 'reachable_time': 17513, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 338661, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:13.452 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[f6debb3c-d517-43d1-a3be-ed1f7b4cbf3c]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:13.517 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[0f7f44d9-5e86-4425-a041-3e8867b1427f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:13.523 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6901f664-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:13.523 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:13.523 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6901f664-30, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:35:13 compute-0 nova_compute[265391]: 2025-09-30 18:35:13.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:13 compute-0 kernel: tap6901f664-30: entered promiscuous mode
Sep 30 18:35:13 compute-0 NetworkManager[45059]: <info>  [1759257313.5257] manager: (tap6901f664-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/91)
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:13.529 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6901f664-30, col_values=(('external_ids', {'iface-id': '5b6cbf18-1826-41d0-920f-e9db4f1a1832'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:35:13 compute-0 nova_compute[265391]: 2025-09-30 18:35:13.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:13 compute-0 ovn_controller[156242]: 2025-09-30T18:35:13Z|00226|binding|INFO|Releasing lport 5b6cbf18-1826-41d0-920f-e9db4f1a1832 from this chassis (sb_readonly=0)
Sep 30 18:35:13 compute-0 nova_compute[265391]: 2025-09-30 18:35:13.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:13.552 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[904437f9-ea5c-49df-9de2-6700ede9098b]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:13.553 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:13.553 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:13.554 166158 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for 6901f664-336b-42d2-bbf7-58951befc8d1 disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:13.554 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:13.554 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[1107b98f-15ed-4aec-becc-3128930d67c8]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:13.555 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:13.555 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[b7a8925a-751d-4fc2-84c4-4018f878028c]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:13.556 166158 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: global
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]:     log         /dev/log local0 debug
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]:     log-tag     haproxy-metadata-proxy-6901f664-336b-42d2-bbf7-58951befc8d1
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]:     user        root
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]:     group       root
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]:     maxconn     1024
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]:     pidfile     /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]:     daemon
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: defaults
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]:     log global
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]:     mode http
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]:     option httplog
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]:     option dontlognull
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]:     option http-server-close
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]:     option forwardfor
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]:     retries                 3
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]:     timeout http-request    30s
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]:     timeout connect         30s
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]:     timeout client          32s
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]:     timeout server          32s
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]:     timeout http-keep-alive 30s
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: listen listener
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]:     bind 169.254.169.254:80
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]:     
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]:     server metadata /var/lib/neutron/metadata_proxy
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]:     http-request add-header X-OVN-Network-ID 6901f664-336b-42d2-bbf7-58951befc8d1
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Sep 30 18:35:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:13.558 166158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'env', 'PROCESS_TAG=haproxy-6901f664-336b-42d2-bbf7-58951befc8d1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6901f664-336b-42d2-bbf7-58951befc8d1.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Sep 30 18:35:13 compute-0 clever_elion[338634]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:35:13 compute-0 clever_elion[338634]: --> All data devices are unavailable
Sep 30 18:35:13 compute-0 systemd[1]: libpod-9b7855a469d01097a0022d6e160b7ea379bd181a5cb7546fefabeda3b0db2b38.scope: Deactivated successfully.
Sep 30 18:35:13 compute-0 podman[338601]: 2025-09-30 18:35:13.707659642 +0000 UTC m=+0.552517675 container died 9b7855a469d01097a0022d6e160b7ea379bd181a5cb7546fefabeda3b0db2b38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_elion, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Sep 30 18:35:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-5af1b7c57281102700a1c2e6dd1d906a1847e3d91ebca345c318edbe0138d1f0-merged.mount: Deactivated successfully.
Sep 30 18:35:13 compute-0 podman[338601]: 2025-09-30 18:35:13.751641012 +0000 UTC m=+0.596499045 container remove 9b7855a469d01097a0022d6e160b7ea379bd181a5cb7546fefabeda3b0db2b38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:35:13 compute-0 systemd[1]: libpod-conmon-9b7855a469d01097a0022d6e160b7ea379bd181a5cb7546fefabeda3b0db2b38.scope: Deactivated successfully.
Sep 30 18:35:13 compute-0 sudo[338430]: pam_unix(sudo:session): session closed for user root
Sep 30 18:35:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:35:13.839Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:35:13 compute-0 sudo[338715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:35:13 compute-0 sudo[338715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:35:13 compute-0 sudo[338715]: pam_unix(sudo:session): session closed for user root
Sep 30 18:35:13 compute-0 sudo[338773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:35:13 compute-0 sudo[338773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:35:13 compute-0 podman[338811]: 2025-09-30 18:35:13.987161289 +0000 UTC m=+0.061278040 container create dd810f3cc9d6ddc37bac76261e3bd7e218205941c3b83f53a3bd30b74d9718e7 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4)
Sep 30 18:35:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:35:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:35:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:35:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:35:14 compute-0 systemd[1]: Started libpod-conmon-dd810f3cc9d6ddc37bac76261e3bd7e218205941c3b83f53a3bd30b74d9718e7.scope.
Sep 30 18:35:14 compute-0 podman[338811]: 2025-09-30 18:35:13.951068693 +0000 UTC m=+0.025185464 image pull 8925e336c8a9c8e5b40d98e4715ad60992d6f21ad0a398170a686c34f922c024 38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Sep 30 18:35:14 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd575eac7b19b3fc574f216322c53321edcd97eb4f24a44744c03c0f44c898e3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Sep 30 18:35:14 compute-0 podman[338811]: 2025-09-30 18:35:14.08327136 +0000 UTC m=+0.157388141 container init dd810f3cc9d6ddc37bac76261e3bd7e218205941c3b83f53a3bd30b74d9718e7 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, tcib_managed=true)
Sep 30 18:35:14 compute-0 podman[338811]: 2025-09-30 18:35:14.088312641 +0000 UTC m=+0.162429392 container start dd810f3cc9d6ddc37bac76261e3bd7e218205941c3b83f53a3bd30b74d9718e7 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20250930)
Sep 30 18:35:14 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[338828]: [NOTICE]   (338832) : New worker (338841) forked
Sep 30 18:35:14 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[338828]: [NOTICE]   (338832) : Loading success.
Sep 30 18:35:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1700: 353 pgs: 353 active+clean; 88 MiB data, 347 MiB used, 40 GiB / 40 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Sep 30 18:35:14 compute-0 nova_compute[265391]: 2025-09-30 18:35:14.347 2 DEBUG nova.compute.manager [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:35:14 compute-0 nova_compute[265391]: 2025-09-30 18:35:14.350 2 DEBUG nova.virt.libvirt.driver [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Sep 30 18:35:14 compute-0 nova_compute[265391]: 2025-09-30 18:35:14.353 2 INFO nova.virt.libvirt.driver [-] [instance: 342a3981-de33-491a-974b-5566045fba97] Instance spawned successfully.
Sep 30 18:35:14 compute-0 nova_compute[265391]: 2025-09-30 18:35:14.354 2 DEBUG nova.virt.libvirt.driver [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Sep 30 18:35:14 compute-0 podman[338885]: 2025-09-30 18:35:14.364703096 +0000 UTC m=+0.048632892 container create adef057863c03de84a6aafa1a81a8cb7e8edd99ef628b69ae1d39aaa92a5a519 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_herschel, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:35:14 compute-0 systemd[1]: Started libpod-conmon-adef057863c03de84a6aafa1a81a8cb7e8edd99ef628b69ae1d39aaa92a5a519.scope.
Sep 30 18:35:14 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:35:14 compute-0 podman[338885]: 2025-09-30 18:35:14.342955032 +0000 UTC m=+0.026884858 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:35:14 compute-0 podman[338885]: 2025-09-30 18:35:14.451129366 +0000 UTC m=+0.135059182 container init adef057863c03de84a6aafa1a81a8cb7e8edd99ef628b69ae1d39aaa92a5a519 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_herschel, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:35:14 compute-0 podman[338885]: 2025-09-30 18:35:14.459594206 +0000 UTC m=+0.143524002 container start adef057863c03de84a6aafa1a81a8cb7e8edd99ef628b69ae1d39aaa92a5a519 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_herschel, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:35:14 compute-0 sad_herschel[338902]: 167 167
Sep 30 18:35:14 compute-0 systemd[1]: libpod-adef057863c03de84a6aafa1a81a8cb7e8edd99ef628b69ae1d39aaa92a5a519.scope: Deactivated successfully.
Sep 30 18:35:14 compute-0 conmon[338902]: conmon adef057863c03de84a6a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-adef057863c03de84a6aafa1a81a8cb7e8edd99ef628b69ae1d39aaa92a5a519.scope/container/memory.events
Sep 30 18:35:14 compute-0 podman[338885]: 2025-09-30 18:35:14.465815477 +0000 UTC m=+0.149745293 container attach adef057863c03de84a6aafa1a81a8cb7e8edd99ef628b69ae1d39aaa92a5a519 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:35:14 compute-0 podman[338885]: 2025-09-30 18:35:14.466892705 +0000 UTC m=+0.150822501 container died adef057863c03de84a6aafa1a81a8cb7e8edd99ef628b69ae1d39aaa92a5a519 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_herschel, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Sep 30 18:35:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-c14cbb7ec4732030767eae09421545aea1b078560306e44b61ab176df84932a3-merged.mount: Deactivated successfully.
Sep 30 18:35:14 compute-0 podman[338885]: 2025-09-30 18:35:14.500555288 +0000 UTC m=+0.184485074 container remove adef057863c03de84a6aafa1a81a8cb7e8edd99ef628b69ae1d39aaa92a5a519 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:35:14 compute-0 systemd[1]: libpod-conmon-adef057863c03de84a6aafa1a81a8cb7e8edd99ef628b69ae1d39aaa92a5a519.scope: Deactivated successfully.
Sep 30 18:35:14 compute-0 podman[338926]: 2025-09-30 18:35:14.695000989 +0000 UTC m=+0.050225513 container create 95f20698580feab8f33acb2c36a015b84c9b675473c36056256f39a0958818f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lichterman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Sep 30 18:35:14 compute-0 systemd[1]: Started libpod-conmon-95f20698580feab8f33acb2c36a015b84c9b675473c36056256f39a0958818f4.scope.
Sep 30 18:35:14 compute-0 podman[338926]: 2025-09-30 18:35:14.678975353 +0000 UTC m=+0.034199897 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:35:14 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488bec737c838e3f2139a5b6e9efefe1ec8b6137184b72e6679aeffbde750202/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488bec737c838e3f2139a5b6e9efefe1ec8b6137184b72e6679aeffbde750202/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488bec737c838e3f2139a5b6e9efefe1ec8b6137184b72e6679aeffbde750202/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488bec737c838e3f2139a5b6e9efefe1ec8b6137184b72e6679aeffbde750202/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:35:14 compute-0 podman[338926]: 2025-09-30 18:35:14.798964054 +0000 UTC m=+0.154188618 container init 95f20698580feab8f33acb2c36a015b84c9b675473c36056256f39a0958818f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Sep 30 18:35:14 compute-0 podman[338926]: 2025-09-30 18:35:14.805983976 +0000 UTC m=+0.161208490 container start 95f20698580feab8f33acb2c36a015b84c9b675473c36056256f39a0958818f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:35:14 compute-0 podman[338926]: 2025-09-30 18:35:14.811715485 +0000 UTC m=+0.166940029 container attach 95f20698580feab8f33acb2c36a015b84c9b675473c36056256f39a0958818f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lichterman, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 18:35:14 compute-0 nova_compute[265391]: 2025-09-30 18:35:14.880 2 DEBUG nova.virt.libvirt.driver [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:35:14 compute-0 nova_compute[265391]: 2025-09-30 18:35:14.882 2 DEBUG nova.virt.libvirt.driver [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:35:14 compute-0 nova_compute[265391]: 2025-09-30 18:35:14.883 2 DEBUG nova.virt.libvirt.driver [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:35:14 compute-0 nova_compute[265391]: 2025-09-30 18:35:14.883 2 DEBUG nova.virt.libvirt.driver [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:35:14 compute-0 nova_compute[265391]: 2025-09-30 18:35:14.884 2 DEBUG nova.virt.libvirt.driver [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:35:14 compute-0 nova_compute[265391]: 2025-09-30 18:35:14.884 2 DEBUG nova.virt.libvirt.driver [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:35:14 compute-0 nova_compute[265391]: 2025-09-30 18:35:14.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:15 compute-0 naughty_lichterman[338942]: {
Sep 30 18:35:15 compute-0 naughty_lichterman[338942]:     "0": [
Sep 30 18:35:15 compute-0 naughty_lichterman[338942]:         {
Sep 30 18:35:15 compute-0 naughty_lichterman[338942]:             "devices": [
Sep 30 18:35:15 compute-0 naughty_lichterman[338942]:                 "/dev/loop3"
Sep 30 18:35:15 compute-0 naughty_lichterman[338942]:             ],
Sep 30 18:35:15 compute-0 naughty_lichterman[338942]:             "lv_name": "ceph_lv0",
Sep 30 18:35:15 compute-0 naughty_lichterman[338942]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:35:15 compute-0 naughty_lichterman[338942]:             "lv_size": "21470642176",
Sep 30 18:35:15 compute-0 naughty_lichterman[338942]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:35:15 compute-0 naughty_lichterman[338942]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:35:15 compute-0 naughty_lichterman[338942]:             "name": "ceph_lv0",
Sep 30 18:35:15 compute-0 naughty_lichterman[338942]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:35:15 compute-0 naughty_lichterman[338942]:             "tags": {
Sep 30 18:35:15 compute-0 naughty_lichterman[338942]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:35:15 compute-0 naughty_lichterman[338942]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:35:15 compute-0 naughty_lichterman[338942]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:35:15 compute-0 naughty_lichterman[338942]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:35:15 compute-0 naughty_lichterman[338942]:                 "ceph.cluster_name": "ceph",
Sep 30 18:35:15 compute-0 naughty_lichterman[338942]:                 "ceph.crush_device_class": "",
Sep 30 18:35:15 compute-0 naughty_lichterman[338942]:                 "ceph.encrypted": "0",
Sep 30 18:35:15 compute-0 naughty_lichterman[338942]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:35:15 compute-0 naughty_lichterman[338942]:                 "ceph.osd_id": "0",
Sep 30 18:35:15 compute-0 naughty_lichterman[338942]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:35:15 compute-0 naughty_lichterman[338942]:                 "ceph.type": "block",
Sep 30 18:35:15 compute-0 naughty_lichterman[338942]:                 "ceph.vdo": "0",
Sep 30 18:35:15 compute-0 naughty_lichterman[338942]:                 "ceph.with_tpm": "0"
Sep 30 18:35:15 compute-0 naughty_lichterman[338942]:             },
Sep 30 18:35:15 compute-0 naughty_lichterman[338942]:             "type": "block",
Sep 30 18:35:15 compute-0 naughty_lichterman[338942]:             "vg_name": "ceph_vg0"
Sep 30 18:35:15 compute-0 naughty_lichterman[338942]:         }
Sep 30 18:35:15 compute-0 naughty_lichterman[338942]:     ]
Sep 30 18:35:15 compute-0 naughty_lichterman[338942]: }
Sep 30 18:35:15 compute-0 systemd[1]: libpod-95f20698580feab8f33acb2c36a015b84c9b675473c36056256f39a0958818f4.scope: Deactivated successfully.
Sep 30 18:35:15 compute-0 podman[338926]: 2025-09-30 18:35:15.120808698 +0000 UTC m=+0.476033272 container died 95f20698580feab8f33acb2c36a015b84c9b675473c36056256f39a0958818f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lichterman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:35:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:35:15.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-488bec737c838e3f2139a5b6e9efefe1ec8b6137184b72e6679aeffbde750202-merged.mount: Deactivated successfully.
Sep 30 18:35:15 compute-0 podman[338926]: 2025-09-30 18:35:15.176642516 +0000 UTC m=+0.531867040 container remove 95f20698580feab8f33acb2c36a015b84c9b675473c36056256f39a0958818f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:35:15 compute-0 systemd[1]: libpod-conmon-95f20698580feab8f33acb2c36a015b84c9b675473c36056256f39a0958818f4.scope: Deactivated successfully.
Sep 30 18:35:15 compute-0 sudo[338773]: pam_unix(sudo:session): session closed for user root
Sep 30 18:35:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:35:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:35:15.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:35:15 compute-0 sudo[338963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:35:15 compute-0 sudo[338963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:35:15 compute-0 sudo[338963]: pam_unix(sudo:session): session closed for user root
Sep 30 18:35:15 compute-0 sudo[338988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:35:15 compute-0 sudo[338988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:35:15 compute-0 nova_compute[265391]: 2025-09-30 18:35:15.372 2 DEBUG nova.compute.manager [req-509f26b2-8c2b-46f1-915c-c9452656447f req-ca92ed42-c9e9-4b4c-ab93-91a0d0847f3b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Received event network-vif-plugged-f05039eb-b7e1-4072-bc17-63c6787538a1 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:35:15 compute-0 nova_compute[265391]: 2025-09-30 18:35:15.373 2 DEBUG oslo_concurrency.lockutils [req-509f26b2-8c2b-46f1-915c-c9452656447f req-ca92ed42-c9e9-4b4c-ab93-91a0d0847f3b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "342a3981-de33-491a-974b-5566045fba97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:35:15 compute-0 nova_compute[265391]: 2025-09-30 18:35:15.373 2 DEBUG oslo_concurrency.lockutils [req-509f26b2-8c2b-46f1-915c-c9452656447f req-ca92ed42-c9e9-4b4c-ab93-91a0d0847f3b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "342a3981-de33-491a-974b-5566045fba97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:35:15 compute-0 nova_compute[265391]: 2025-09-30 18:35:15.374 2 DEBUG oslo_concurrency.lockutils [req-509f26b2-8c2b-46f1-915c-c9452656447f req-ca92ed42-c9e9-4b4c-ab93-91a0d0847f3b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "342a3981-de33-491a-974b-5566045fba97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:35:15 compute-0 nova_compute[265391]: 2025-09-30 18:35:15.374 2 DEBUG nova.compute.manager [req-509f26b2-8c2b-46f1-915c-c9452656447f req-ca92ed42-c9e9-4b4c-ab93-91a0d0847f3b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] No waiting events found dispatching network-vif-plugged-f05039eb-b7e1-4072-bc17-63c6787538a1 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:35:15 compute-0 nova_compute[265391]: 2025-09-30 18:35:15.374 2 WARNING nova.compute.manager [req-509f26b2-8c2b-46f1-915c-c9452656447f req-ca92ed42-c9e9-4b4c-ab93-91a0d0847f3b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Received unexpected event network-vif-plugged-f05039eb-b7e1-4072-bc17-63c6787538a1 for instance with vm_state building and task_state spawning.
Sep 30 18:35:15 compute-0 nova_compute[265391]: 2025-09-30 18:35:15.392 2 INFO nova.compute.manager [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Took 13.81 seconds to spawn the instance on the hypervisor.
Sep 30 18:35:15 compute-0 nova_compute[265391]: 2025-09-30 18:35:15.393 2 DEBUG nova.compute.manager [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Sep 30 18:35:15 compute-0 ceph-mon[73755]: pgmap v1700: 353 pgs: 353 active+clean; 88 MiB data, 347 MiB used, 40 GiB / 40 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Sep 30 18:35:15 compute-0 podman[339055]: 2025-09-30 18:35:15.78856489 +0000 UTC m=+0.040759577 container create ccd0798522b197305d415ab697cfd865279587662c69e2839ac9ed5bb47b6d1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_driscoll, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:35:15 compute-0 systemd[1]: Started libpod-conmon-ccd0798522b197305d415ab697cfd865279587662c69e2839ac9ed5bb47b6d1a.scope.
Sep 30 18:35:15 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:35:15 compute-0 podman[339055]: 2025-09-30 18:35:15.863702929 +0000 UTC m=+0.115897636 container init ccd0798522b197305d415ab697cfd865279587662c69e2839ac9ed5bb47b6d1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_driscoll, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:35:15 compute-0 podman[339055]: 2025-09-30 18:35:15.772317849 +0000 UTC m=+0.024512556 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:35:15 compute-0 podman[339055]: 2025-09-30 18:35:15.869667353 +0000 UTC m=+0.121862040 container start ccd0798522b197305d415ab697cfd865279587662c69e2839ac9ed5bb47b6d1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:35:15 compute-0 podman[339055]: 2025-09-30 18:35:15.873671717 +0000 UTC m=+0.125866424 container attach ccd0798522b197305d415ab697cfd865279587662c69e2839ac9ed5bb47b6d1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_driscoll, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Sep 30 18:35:15 compute-0 tender_driscoll[339073]: 167 167
Sep 30 18:35:15 compute-0 systemd[1]: libpod-ccd0798522b197305d415ab697cfd865279587662c69e2839ac9ed5bb47b6d1a.scope: Deactivated successfully.
Sep 30 18:35:15 compute-0 conmon[339073]: conmon ccd0798522b197305d41 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ccd0798522b197305d415ab697cfd865279587662c69e2839ac9ed5bb47b6d1a.scope/container/memory.events
Sep 30 18:35:15 compute-0 podman[339055]: 2025-09-30 18:35:15.877150437 +0000 UTC m=+0.129345134 container died ccd0798522b197305d415ab697cfd865279587662c69e2839ac9ed5bb47b6d1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Sep 30 18:35:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bf9334dbba7f7649486c5a1f07b3a11720cf14c05ce1ed7a8c98f4691c24fc1-merged.mount: Deactivated successfully.
Sep 30 18:35:15 compute-0 podman[339055]: 2025-09-30 18:35:15.911873167 +0000 UTC m=+0.164067854 container remove ccd0798522b197305d415ab697cfd865279587662c69e2839ac9ed5bb47b6d1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:35:15 compute-0 nova_compute[265391]: 2025-09-30 18:35:15.932 2 INFO nova.compute.manager [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Took 19.47 seconds to build instance.
Sep 30 18:35:15 compute-0 systemd[1]: libpod-conmon-ccd0798522b197305d415ab697cfd865279587662c69e2839ac9ed5bb47b6d1a.scope: Deactivated successfully.
Sep 30 18:35:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:35:16 compute-0 podman[339099]: 2025-09-30 18:35:16.076395983 +0000 UTC m=+0.043443578 container create 62dc4dbd5433264d759599dc21076f0c72099c8f50031e21504a2f69a5d4cc53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Sep 30 18:35:16 compute-0 systemd[1]: Started libpod-conmon-62dc4dbd5433264d759599dc21076f0c72099c8f50031e21504a2f69a5d4cc53.scope.
Sep 30 18:35:16 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad710787ba6295e840fbd2c23fbbc92c0bbdd2d6e77a3786bb768d01590ca664/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad710787ba6295e840fbd2c23fbbc92c0bbdd2d6e77a3786bb768d01590ca664/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad710787ba6295e840fbd2c23fbbc92c0bbdd2d6e77a3786bb768d01590ca664/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad710787ba6295e840fbd2c23fbbc92c0bbdd2d6e77a3786bb768d01590ca664/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:35:16 compute-0 podman[339099]: 2025-09-30 18:35:16.144194501 +0000 UTC m=+0.111242186 container init 62dc4dbd5433264d759599dc21076f0c72099c8f50031e21504a2f69a5d4cc53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:35:16 compute-0 podman[339099]: 2025-09-30 18:35:16.052796431 +0000 UTC m=+0.019844046 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:35:16 compute-0 podman[339099]: 2025-09-30 18:35:16.152947097 +0000 UTC m=+0.119994692 container start 62dc4dbd5433264d759599dc21076f0c72099c8f50031e21504a2f69a5d4cc53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_cannon, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:35:16 compute-0 podman[339099]: 2025-09-30 18:35:16.156331235 +0000 UTC m=+0.123378920 container attach 62dc4dbd5433264d759599dc21076f0c72099c8f50031e21504a2f69a5d4cc53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_cannon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:35:16 compute-0 nova_compute[265391]: 2025-09-30 18:35:16.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1701: 353 pgs: 353 active+clean; 88 MiB data, 347 MiB used, 40 GiB / 40 GiB avail; 4.1 KiB/s rd, 14 KiB/s wr, 6 op/s
Sep 30 18:35:16 compute-0 nova_compute[265391]: 2025-09-30 18:35:16.439 2 DEBUG oslo_concurrency.lockutils [None req-801dd077-0a39-4b6e-9131-5b7c1e496657 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "342a3981-de33-491a-974b-5566045fba97" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.998s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:35:16 compute-0 lvm[339191]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:35:16 compute-0 lvm[339191]: VG ceph_vg0 finished
Sep 30 18:35:16 compute-0 eloquent_cannon[339116]: {}
Sep 30 18:35:16 compute-0 systemd[1]: libpod-62dc4dbd5433264d759599dc21076f0c72099c8f50031e21504a2f69a5d4cc53.scope: Deactivated successfully.
Sep 30 18:35:16 compute-0 systemd[1]: libpod-62dc4dbd5433264d759599dc21076f0c72099c8f50031e21504a2f69a5d4cc53.scope: Consumed 1.251s CPU time.
Sep 30 18:35:16 compute-0 podman[339099]: 2025-09-30 18:35:16.919679926 +0000 UTC m=+0.886727551 container died 62dc4dbd5433264d759599dc21076f0c72099c8f50031e21504a2f69a5d4cc53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Sep 30 18:35:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad710787ba6295e840fbd2c23fbbc92c0bbdd2d6e77a3786bb768d01590ca664-merged.mount: Deactivated successfully.
Sep 30 18:35:16 compute-0 podman[339099]: 2025-09-30 18:35:16.96882677 +0000 UTC m=+0.935874395 container remove 62dc4dbd5433264d759599dc21076f0c72099c8f50031e21504a2f69a5d4cc53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS)
Sep 30 18:35:17 compute-0 systemd[1]: libpod-conmon-62dc4dbd5433264d759599dc21076f0c72099c8f50031e21504a2f69a5d4cc53.scope: Deactivated successfully.
Sep 30 18:35:17 compute-0 sudo[338988]: pam_unix(sudo:session): session closed for user root
Sep 30 18:35:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:35:17 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:35:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:35:17 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:35:17 compute-0 sudo[339205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:35:17 compute-0 sudo[339205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:35:17 compute-0 sudo[339205]: pam_unix(sudo:session): session closed for user root
Sep 30 18:35:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:35:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:35:17.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:35:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:35:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:35:17.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:35:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:35:17.306Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:35:17 compute-0 sudo[339230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:35:17 compute-0 sudo[339230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:35:17 compute-0 sudo[339230]: pam_unix(sudo:session): session closed for user root
Sep 30 18:35:17 compute-0 ceph-mon[73755]: pgmap v1701: 353 pgs: 353 active+clean; 88 MiB data, 347 MiB used, 40 GiB / 40 GiB avail; 4.1 KiB/s rd, 14 KiB/s wr, 6 op/s
Sep 30 18:35:17 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:35:17 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:35:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1702: 353 pgs: 353 active+clean; 88 MiB data, 347 MiB used, 40 GiB / 40 GiB avail; 4.1 KiB/s rd, 14 KiB/s wr, 6 op/s
Sep 30 18:35:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:35:18] "GET /metrics HTTP/1.1" 200 46724 "" "Prometheus/2.51.0"
Sep 30 18:35:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:35:18] "GET /metrics HTTP/1.1" 200 46724 "" "Prometheus/2.51.0"
Sep 30 18:35:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:35:18.857Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:35:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:35:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:35:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:35:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:35:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:35:19.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:35:19.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:19 compute-0 ceph-mon[73755]: pgmap v1702: 353 pgs: 353 active+clean; 88 MiB data, 347 MiB used, 40 GiB / 40 GiB avail; 4.1 KiB/s rd, 14 KiB/s wr, 6 op/s
Sep 30 18:35:19 compute-0 nova_compute[265391]: 2025-09-30 18:35:19.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1703: 353 pgs: 353 active+clean; 88 MiB data, 347 MiB used, 40 GiB / 40 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 77 op/s
Sep 30 18:35:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:35:21 compute-0 nova_compute[265391]: 2025-09-30 18:35:21.081 2 DEBUG oslo_concurrency.lockutils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:35:21 compute-0 nova_compute[265391]: 2025-09-30 18:35:21.081 2 DEBUG oslo_concurrency.lockutils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:35:21 compute-0 ceph-mon[73755]: pgmap v1703: 353 pgs: 353 active+clean; 88 MiB data, 347 MiB used, 40 GiB / 40 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 77 op/s
Sep 30 18:35:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:35:21.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:21 compute-0 nova_compute[265391]: 2025-09-30 18:35:21.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:35:21.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:21 compute-0 nova_compute[265391]: 2025-09-30 18:35:21.588 2 DEBUG nova.compute.manager [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Sep 30 18:35:22 compute-0 nova_compute[265391]: 2025-09-30 18:35:22.153 2 DEBUG oslo_concurrency.lockutils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:35:22 compute-0 nova_compute[265391]: 2025-09-30 18:35:22.153 2 DEBUG oslo_concurrency.lockutils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:35:22 compute-0 nova_compute[265391]: 2025-09-30 18:35:22.163 2 DEBUG nova.virt.hardware [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Sep 30 18:35:22 compute-0 nova_compute[265391]: 2025-09-30 18:35:22.163 2 INFO nova.compute.claims [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Claim successful on node compute-0.ctlplane.example.com
Sep 30 18:35:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1704: 353 pgs: 353 active+clean; 88 MiB data, 347 MiB used, 40 GiB / 40 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 76 op/s
Sep 30 18:35:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:35:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:35:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:35:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:35:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:35:23.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:35:23 compute-0 nova_compute[265391]: 2025-09-30 18:35:23.234 2 DEBUG oslo_concurrency.processutils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:35:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:35:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:35:23.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:35:23 compute-0 ceph-mon[73755]: pgmap v1704: 353 pgs: 353 active+clean; 88 MiB data, 347 MiB used, 40 GiB / 40 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 76 op/s
Sep 30 18:35:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:35:23 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3008404670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:35:23 compute-0 nova_compute[265391]: 2025-09-30 18:35:23.696 2 DEBUG oslo_concurrency.processutils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:35:23 compute-0 nova_compute[265391]: 2025-09-30 18:35:23.701 2 DEBUG nova.compute.provider_tree [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:35:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:35:23.839Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:35:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:35:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:35:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:35:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:35:24 compute-0 nova_compute[265391]: 2025-09-30 18:35:24.209 2 DEBUG nova.scheduler.client.report [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:35:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1705: 353 pgs: 353 active+clean; 88 MiB data, 347 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:35:24 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3008404670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:35:24 compute-0 podman[339288]: 2025-09-30 18:35:24.526290484 +0000 UTC m=+0.060618853 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Sep 30 18:35:24 compute-0 podman[339287]: 2025-09-30 18:35:24.547572025 +0000 UTC m=+0.084725977 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=ovn_controller, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Sep 30 18:35:24 compute-0 podman[339286]: 2025-09-30 18:35:24.547621167 +0000 UTC m=+0.084857761 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Sep 30 18:35:24 compute-0 nova_compute[265391]: 2025-09-30 18:35:24.717 2 DEBUG oslo_concurrency.lockutils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.564s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:35:24 compute-0 nova_compute[265391]: 2025-09-30 18:35:24.718 2 DEBUG nova.compute.manager [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Sep 30 18:35:24 compute-0 nova_compute[265391]: 2025-09-30 18:35:24.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:35:25.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:25 compute-0 nova_compute[265391]: 2025-09-30 18:35:25.229 2 DEBUG nova.compute.manager [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Sep 30 18:35:25 compute-0 nova_compute[265391]: 2025-09-30 18:35:25.229 2 DEBUG nova.network.neutron [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Sep 30 18:35:25 compute-0 nova_compute[265391]: 2025-09-30 18:35:25.230 2 WARNING neutronclient.v2_0.client [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:35:25 compute-0 nova_compute[265391]: 2025-09-30 18:35:25.230 2 WARNING neutronclient.v2_0.client [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:35:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:35:25.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:25 compute-0 ceph-mon[73755]: pgmap v1705: 353 pgs: 353 active+clean; 88 MiB data, 347 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:35:25 compute-0 nova_compute[265391]: 2025-09-30 18:35:25.742 2 INFO nova.virt.libvirt.driver [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Sep 30 18:35:25 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:25.939 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:35:25 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:25.940 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:35:25 compute-0 nova_compute[265391]: 2025-09-30 18:35:25.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:25 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:25.941 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:35:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:35:26 compute-0 nova_compute[265391]: 2025-09-30 18:35:26.187 2 DEBUG nova.network.neutron [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Successfully created port: 23538fed-fc3c-4080-bbea-55e12668af3b _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Sep 30 18:35:26 compute-0 nova_compute[265391]: 2025-09-30 18:35:26.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:26 compute-0 nova_compute[265391]: 2025-09-30 18:35:26.259 2 DEBUG nova.compute.manager [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Sep 30 18:35:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1706: 353 pgs: 353 active+clean; 88 MiB data, 347 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 69 op/s
Sep 30 18:35:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:35:27.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:27 compute-0 ceph-mon[73755]: pgmap v1706: 353 pgs: 353 active+clean; 88 MiB data, 347 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 69 op/s
Sep 30 18:35:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:35:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:35:27.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:35:27 compute-0 nova_compute[265391]: 2025-09-30 18:35:27.293 2 DEBUG nova.compute.manager [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Sep 30 18:35:27 compute-0 nova_compute[265391]: 2025-09-30 18:35:27.295 2 DEBUG nova.virt.libvirt.driver [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Sep 30 18:35:27 compute-0 nova_compute[265391]: 2025-09-30 18:35:27.295 2 INFO nova.virt.libvirt.driver [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Creating image(s)
Sep 30 18:35:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:35:27.309Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:35:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:35:27.310Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:35:27 compute-0 nova_compute[265391]: 2025-09-30 18:35:27.328 2 DEBUG nova.storage.rbd_utils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 741d9cb1-7a49-4d89-8b1a-78ae947f2c49_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:35:27 compute-0 nova_compute[265391]: 2025-09-30 18:35:27.366 2 DEBUG nova.storage.rbd_utils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 741d9cb1-7a49-4d89-8b1a-78ae947f2c49_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:35:27 compute-0 nova_compute[265391]: 2025-09-30 18:35:27.404 2 DEBUG nova.storage.rbd_utils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 741d9cb1-7a49-4d89-8b1a-78ae947f2c49_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:35:27 compute-0 nova_compute[265391]: 2025-09-30 18:35:27.409 2 DEBUG oslo_concurrency.processutils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:35:27 compute-0 nova_compute[265391]: 2025-09-30 18:35:27.479 2 DEBUG oslo_concurrency.processutils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:35:27 compute-0 nova_compute[265391]: 2025-09-30 18:35:27.480 2 DEBUG oslo_concurrency.lockutils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "cb2d580238c9b109feae7f1462613dc547671457" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:35:27 compute-0 nova_compute[265391]: 2025-09-30 18:35:27.480 2 DEBUG oslo_concurrency.lockutils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:35:27 compute-0 nova_compute[265391]: 2025-09-30 18:35:27.481 2 DEBUG oslo_concurrency.lockutils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:35:27 compute-0 nova_compute[265391]: 2025-09-30 18:35:27.508 2 DEBUG nova.storage.rbd_utils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 741d9cb1-7a49-4d89-8b1a-78ae947f2c49_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:35:27 compute-0 nova_compute[265391]: 2025-09-30 18:35:27.512 2 DEBUG oslo_concurrency.processutils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 741d9cb1-7a49-4d89-8b1a-78ae947f2c49_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:35:27 compute-0 nova_compute[265391]: 2025-09-30 18:35:27.874 2 DEBUG nova.network.neutron [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Successfully updated port: 23538fed-fc3c-4080-bbea-55e12668af3b _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Sep 30 18:35:27 compute-0 nova_compute[265391]: 2025-09-30 18:35:27.977 2 DEBUG nova.compute.manager [req-be55e399-c08b-47a9-a799-c873b9d166f9 req-decdb636-ea01-497d-88b6-adc754fe459a 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Received event network-changed-23538fed-fc3c-4080-bbea-55e12668af3b external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:35:27 compute-0 nova_compute[265391]: 2025-09-30 18:35:27.978 2 DEBUG nova.compute.manager [req-be55e399-c08b-47a9-a799-c873b9d166f9 req-decdb636-ea01-497d-88b6-adc754fe459a 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Refreshing instance network info cache due to event network-changed-23538fed-fc3c-4080-bbea-55e12668af3b. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:35:27 compute-0 nova_compute[265391]: 2025-09-30 18:35:27.979 2 DEBUG oslo_concurrency.lockutils [req-be55e399-c08b-47a9-a799-c873b9d166f9 req-decdb636-ea01-497d-88b6-adc754fe459a 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-741d9cb1-7a49-4d89-8b1a-78ae947f2c49" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:35:27 compute-0 nova_compute[265391]: 2025-09-30 18:35:27.979 2 DEBUG oslo_concurrency.lockutils [req-be55e399-c08b-47a9-a799-c873b9d166f9 req-decdb636-ea01-497d-88b6-adc754fe459a 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-741d9cb1-7a49-4d89-8b1a-78ae947f2c49" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:35:27 compute-0 nova_compute[265391]: 2025-09-30 18:35:27.979 2 DEBUG nova.network.neutron [req-be55e399-c08b-47a9-a799-c873b9d166f9 req-decdb636-ea01-497d-88b6-adc754fe459a 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Refreshing network info cache for port 23538fed-fc3c-4080-bbea-55e12668af3b _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:35:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1707: 353 pgs: 353 active+clean; 88 MiB data, 347 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 69 op/s
Sep 30 18:35:28 compute-0 nova_compute[265391]: 2025-09-30 18:35:28.383 2 DEBUG oslo_concurrency.lockutils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "refresh_cache-741d9cb1-7a49-4d89-8b1a-78ae947f2c49" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:35:28 compute-0 nova_compute[265391]: 2025-09-30 18:35:28.497 2 WARNING neutronclient.v2_0.client [req-be55e399-c08b-47a9-a799-c873b9d166f9 req-decdb636-ea01-497d-88b6-adc754fe459a 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:35:28 compute-0 nova_compute[265391]: 2025-09-30 18:35:28.532 2 DEBUG oslo_concurrency.processutils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 741d9cb1-7a49-4d89-8b1a-78ae947f2c49_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.020s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:35:28 compute-0 nova_compute[265391]: 2025-09-30 18:35:28.603 2 DEBUG nova.network.neutron [req-be55e399-c08b-47a9-a799-c873b9d166f9 req-decdb636-ea01-497d-88b6-adc754fe459a 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:35:28 compute-0 nova_compute[265391]: 2025-09-30 18:35:28.609 2 DEBUG nova.storage.rbd_utils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] resizing rbd image 741d9cb1-7a49-4d89-8b1a-78ae947f2c49_disk to 1073741824 resize /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:288
Sep 30 18:35:28 compute-0 ovn_controller[156242]: 2025-09-30T18:35:28Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:84:d4:b5 10.100.0.7
Sep 30 18:35:28 compute-0 ovn_controller[156242]: 2025-09-30T18:35:28Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:84:d4:b5 10.100.0.7
Sep 30 18:35:28 compute-0 nova_compute[265391]: 2025-09-30 18:35:28.705 2 DEBUG nova.virt.libvirt.driver [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Sep 30 18:35:28 compute-0 nova_compute[265391]: 2025-09-30 18:35:28.706 2 DEBUG nova.virt.libvirt.driver [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Ensure instance console log exists: /var/lib/nova/instances/741d9cb1-7a49-4d89-8b1a-78ae947f2c49/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Sep 30 18:35:28 compute-0 nova_compute[265391]: 2025-09-30 18:35:28.706 2 DEBUG oslo_concurrency.lockutils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:35:28 compute-0 nova_compute[265391]: 2025-09-30 18:35:28.706 2 DEBUG oslo_concurrency.lockutils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:35:28 compute-0 nova_compute[265391]: 2025-09-30 18:35:28.707 2 DEBUG oslo_concurrency.lockutils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:35:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:35:28] "GET /metrics HTTP/1.1" 200 46721 "" "Prometheus/2.51.0"
Sep 30 18:35:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:35:28] "GET /metrics HTTP/1.1" 200 46721 "" "Prometheus/2.51.0"
Sep 30 18:35:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:35:28.858Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:35:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:35:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:35:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:35:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:35:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:35:29.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:35:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:35:29.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:35:29 compute-0 nova_compute[265391]: 2025-09-30 18:35:29.564 2 DEBUG nova.network.neutron [req-be55e399-c08b-47a9-a799-c873b9d166f9 req-decdb636-ea01-497d-88b6-adc754fe459a 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:35:29 compute-0 ceph-mon[73755]: pgmap v1707: 353 pgs: 353 active+clean; 88 MiB data, 347 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 69 op/s
Sep 30 18:35:29 compute-0 podman[276673]: time="2025-09-30T18:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:35:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:35:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10758 "" "Go-http-client/1.1"
Sep 30 18:35:29 compute-0 nova_compute[265391]: 2025-09-30 18:35:29.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:30 compute-0 nova_compute[265391]: 2025-09-30 18:35:30.071 2 DEBUG oslo_concurrency.lockutils [req-be55e399-c08b-47a9-a799-c873b9d166f9 req-decdb636-ea01-497d-88b6-adc754fe459a 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-741d9cb1-7a49-4d89-8b1a-78ae947f2c49" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:35:30 compute-0 nova_compute[265391]: 2025-09-30 18:35:30.072 2 DEBUG oslo_concurrency.lockutils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquired lock "refresh_cache-741d9cb1-7a49-4d89-8b1a-78ae947f2c49" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:35:30 compute-0 nova_compute[265391]: 2025-09-30 18:35:30.072 2 DEBUG nova.network.neutron [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:35:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1708: 353 pgs: 353 active+clean; 161 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 138 op/s
Sep 30 18:35:30 compute-0 ceph-mon[73755]: pgmap v1708: 353 pgs: 353 active+clean; 161 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 138 op/s
Sep 30 18:35:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:35:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:35:31.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:31 compute-0 nova_compute[265391]: 2025-09-30 18:35:31.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:35:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:35:31.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:35:31 compute-0 openstack_network_exporter[279566]: ERROR   18:35:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:35:31 compute-0 openstack_network_exporter[279566]: ERROR   18:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:35:31 compute-0 openstack_network_exporter[279566]: ERROR   18:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:35:31 compute-0 openstack_network_exporter[279566]: ERROR   18:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:35:31 compute-0 openstack_network_exporter[279566]: ERROR   18:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:35:31 compute-0 nova_compute[265391]: 2025-09-30 18:35:31.577 2 DEBUG nova.network.neutron [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:35:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1709: 353 pgs: 353 active+clean; 161 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 338 KiB/s rd, 3.9 MiB/s wr, 69 op/s
Sep 30 18:35:32 compute-0 ceph-mon[73755]: pgmap v1709: 353 pgs: 353 active+clean; 161 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 338 KiB/s rd, 3.9 MiB/s wr, 69 op/s
Sep 30 18:35:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:35:33.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:35:33.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:33 compute-0 nova_compute[265391]: 2025-09-30 18:35:33.431 2 WARNING neutronclient.v2_0.client [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:35:33 compute-0 nova_compute[265391]: 2025-09-30 18:35:33.597 2 DEBUG nova.network.neutron [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Updating instance_info_cache with network_info: [{"id": "23538fed-fc3c-4080-bbea-55e12668af3b", "address": "fa:16:3e:1a:4f:5c", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23538fed-fc", "ovs_interfaceid": "23538fed-fc3c-4080-bbea-55e12668af3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:35:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:35:33.840Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:35:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:35:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:35:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:35:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:35:34 compute-0 nova_compute[265391]: 2025-09-30 18:35:34.103 2 DEBUG oslo_concurrency.lockutils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Releasing lock "refresh_cache-741d9cb1-7a49-4d89-8b1a-78ae947f2c49" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:35:34 compute-0 nova_compute[265391]: 2025-09-30 18:35:34.103 2 DEBUG nova.compute.manager [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Instance network_info: |[{"id": "23538fed-fc3c-4080-bbea-55e12668af3b", "address": "fa:16:3e:1a:4f:5c", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23538fed-fc", "ovs_interfaceid": "23538fed-fc3c-4080-bbea-55e12668af3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Sep 30 18:35:34 compute-0 nova_compute[265391]: 2025-09-30 18:35:34.106 2 DEBUG nova.virt.libvirt.driver [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Start _get_guest_xml network_info=[{"id": "23538fed-fc3c-4080-bbea-55e12668af3b", "address": "fa:16:3e:1a:4f:5c", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23538fed-fc", "ovs_interfaceid": "23538fed-fc3c-4080-bbea-55e12668af3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'image_id': '5b99cbca-b655-4be5-8343-cf504005c42e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Sep 30 18:35:34 compute-0 nova_compute[265391]: 2025-09-30 18:35:34.111 2 WARNING nova.virt.libvirt.driver [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:35:34 compute-0 nova_compute[265391]: 2025-09-30 18:35:34.113 2 DEBUG nova.virt.driver [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='5b99cbca-b655-4be5-8343-cf504005c42e', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteStrategies-server-1259001093', uuid='741d9cb1-7a49-4d89-8b1a-78ae947f2c49'), owner=OwnerMeta(userid='623ef4a55c9e4fc28bb65e49246b5008', username='tempest-TestExecuteStrategies-1883747907-project-admin', projectid='c634e1c17ed54907969576a0eb8eff50', projectname='tempest-TestExecuteStrategies-1883747907'), image=ImageMeta(id='5b99cbca-b655-4be5-8343-cf504005c42e', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "23538fed-fc3c-4080-bbea-55e12668af3b", "address": "fa:16:3e:1a:4f:5c", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23538fed-fc", "ovs_interfaceid": "23538fed-fc3c-4080-bbea-55e12668af3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20250919142712.b99a882.el10', creation_time=1759257334.1129858) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Sep 30 18:35:34 compute-0 nova_compute[265391]: 2025-09-30 18:35:34.117 2 DEBUG nova.virt.libvirt.host [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Sep 30 18:35:34 compute-0 nova_compute[265391]: 2025-09-30 18:35:34.117 2 DEBUG nova.virt.libvirt.host [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Sep 30 18:35:34 compute-0 nova_compute[265391]: 2025-09-30 18:35:34.123 2 DEBUG nova.virt.libvirt.host [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Sep 30 18:35:34 compute-0 nova_compute[265391]: 2025-09-30 18:35:34.124 2 DEBUG nova.virt.libvirt.host [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Sep 30 18:35:34 compute-0 nova_compute[265391]: 2025-09-30 18:35:34.125 2 DEBUG nova.virt.libvirt.driver [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Sep 30 18:35:34 compute-0 nova_compute[265391]: 2025-09-30 18:35:34.125 2 DEBUG nova.virt.hardware [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-09-30T18:04:50Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Sep 30 18:35:34 compute-0 nova_compute[265391]: 2025-09-30 18:35:34.126 2 DEBUG nova.virt.hardware [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Sep 30 18:35:34 compute-0 nova_compute[265391]: 2025-09-30 18:35:34.126 2 DEBUG nova.virt.hardware [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Sep 30 18:35:34 compute-0 nova_compute[265391]: 2025-09-30 18:35:34.127 2 DEBUG nova.virt.hardware [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Sep 30 18:35:34 compute-0 nova_compute[265391]: 2025-09-30 18:35:34.127 2 DEBUG nova.virt.hardware [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Sep 30 18:35:34 compute-0 nova_compute[265391]: 2025-09-30 18:35:34.127 2 DEBUG nova.virt.hardware [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Sep 30 18:35:34 compute-0 nova_compute[265391]: 2025-09-30 18:35:34.128 2 DEBUG nova.virt.hardware [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Sep 30 18:35:34 compute-0 nova_compute[265391]: 2025-09-30 18:35:34.128 2 DEBUG nova.virt.hardware [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Sep 30 18:35:34 compute-0 nova_compute[265391]: 2025-09-30 18:35:34.129 2 DEBUG nova.virt.hardware [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Sep 30 18:35:34 compute-0 nova_compute[265391]: 2025-09-30 18:35:34.129 2 DEBUG nova.virt.hardware [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Sep 30 18:35:34 compute-0 nova_compute[265391]: 2025-09-30 18:35:34.129 2 DEBUG nova.virt.hardware [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Sep 30 18:35:34 compute-0 nova_compute[265391]: 2025-09-30 18:35:34.134 2 DEBUG oslo_concurrency.processutils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:35:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1710: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 408 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Sep 30 18:35:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:35:34 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2597361341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:35:34 compute-0 nova_compute[265391]: 2025-09-30 18:35:34.604 2 DEBUG oslo_concurrency.processutils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:35:34 compute-0 nova_compute[265391]: 2025-09-30 18:35:34.638 2 DEBUG nova.storage.rbd_utils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 741d9cb1-7a49-4d89-8b1a-78ae947f2c49_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:35:34 compute-0 nova_compute[265391]: 2025-09-30 18:35:34.642 2 DEBUG oslo_concurrency.processutils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:35:34 compute-0 nova_compute[265391]: 2025-09-30 18:35:34.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:35:35 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2903216319' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:35:35 compute-0 nova_compute[265391]: 2025-09-30 18:35:35.108 2 DEBUG oslo_concurrency.processutils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:35:35 compute-0 nova_compute[265391]: 2025-09-30 18:35:35.110 2 DEBUG nova.virt.libvirt.vif [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:35:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-1259001093',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-1259001093',id=27,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c634e1c17ed54907969576a0eb8eff50',ramdisk_id='',reservation_id='r-9g37rry3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteStrategies-1883747907',owner_user_name='tempest-TestExecuteStrategies-1883747907-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:35:26Z,user_data=None,user_id='623ef4a55c9e4fc28bb65e49246b5008',uuid=741d9cb1-7a49-4d89-8b1a-78ae947f2c49,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "23538fed-fc3c-4080-bbea-55e12668af3b", "address": "fa:16:3e:1a:4f:5c", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23538fed-fc", "ovs_interfaceid": "23538fed-fc3c-4080-bbea-55e12668af3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:35:35 compute-0 nova_compute[265391]: 2025-09-30 18:35:35.110 2 DEBUG nova.network.os_vif_util [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converting VIF {"id": "23538fed-fc3c-4080-bbea-55e12668af3b", "address": "fa:16:3e:1a:4f:5c", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23538fed-fc", "ovs_interfaceid": "23538fed-fc3c-4080-bbea-55e12668af3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:35:35 compute-0 nova_compute[265391]: 2025-09-30 18:35:35.111 2 DEBUG nova.network.os_vif_util [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:4f:5c,bridge_name='br-int',has_traffic_filtering=True,id=23538fed-fc3c-4080-bbea-55e12668af3b,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23538fed-fc') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:35:35 compute-0 nova_compute[265391]: 2025-09-30 18:35:35.111 2 DEBUG nova.objects.instance [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lazy-loading 'pci_devices' on Instance uuid 741d9cb1-7a49-4d89-8b1a-78ae947f2c49 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:35:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:35:35.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:35:35.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:35 compute-0 podman[339595]: 2025-09-30 18:35:35.535573065 +0000 UTC m=+0.064723009 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Sep 30 18:35:35 compute-0 podman[339597]: 2025-09-30 18:35:35.544667281 +0000 UTC m=+0.063769615 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., version=9.6, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=edpm, container_name=openstack_network_exporter, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64)
Sep 30 18:35:35 compute-0 ceph-mon[73755]: pgmap v1710: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 408 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Sep 30 18:35:35 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2597361341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:35:35 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2903216319' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:35:35 compute-0 podman[339596]: 2025-09-30 18:35:35.579307668 +0000 UTC m=+0.087868889 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:35:35 compute-0 nova_compute[265391]: 2025-09-30 18:35:35.619 2 DEBUG nova.virt.libvirt.driver [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] End _get_guest_xml xml=<domain type="kvm">
Sep 30 18:35:35 compute-0 nova_compute[265391]:   <uuid>741d9cb1-7a49-4d89-8b1a-78ae947f2c49</uuid>
Sep 30 18:35:35 compute-0 nova_compute[265391]:   <name>instance-0000001b</name>
Sep 30 18:35:35 compute-0 nova_compute[265391]:   <memory>131072</memory>
Sep 30 18:35:35 compute-0 nova_compute[265391]:   <vcpu>1</vcpu>
Sep 30 18:35:35 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:35:35 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteStrategies-server-1259001093</nova:name>
Sep 30 18:35:35 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:35:34</nova:creationTime>
Sep 30 18:35:35 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:35:35 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:35:35 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:35:35 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:35:35 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:35:35 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:35:35 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:35:35 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:35:35 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:35:35 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:35:35 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:35:35 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:35:35 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:35:35 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:35:35 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:35:35 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:35:35 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:35:35 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:35:35 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:35:35 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:35:35 compute-0 nova_compute[265391]:         <nova:user uuid="623ef4a55c9e4fc28bb65e49246b5008">tempest-TestExecuteStrategies-1883747907-project-admin</nova:user>
Sep 30 18:35:35 compute-0 nova_compute[265391]:         <nova:project uuid="c634e1c17ed54907969576a0eb8eff50">tempest-TestExecuteStrategies-1883747907</nova:project>
Sep 30 18:35:35 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:35:35 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:35:35 compute-0 nova_compute[265391]:         <nova:port uuid="23538fed-fc3c-4080-bbea-55e12668af3b">
Sep 30 18:35:35 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:35:35 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:35:35 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:35:35 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <system>
Sep 30 18:35:35 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:35:35 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:35:35 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:35:35 compute-0 nova_compute[265391]:       <entry name="serial">741d9cb1-7a49-4d89-8b1a-78ae947f2c49</entry>
Sep 30 18:35:35 compute-0 nova_compute[265391]:       <entry name="uuid">741d9cb1-7a49-4d89-8b1a-78ae947f2c49</entry>
Sep 30 18:35:35 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     </system>
Sep 30 18:35:35 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:35:35 compute-0 nova_compute[265391]:   <os>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="q35">hvm</type>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:   </os>
Sep 30 18:35:35 compute-0 nova_compute[265391]:   <features>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <vmcoreinfo/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:   </features>
Sep 30 18:35:35 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:35:35 compute-0 nova_compute[265391]:   <cpu mode="host-model" match="exact">
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <topology sockets="1" cores="1" threads="1"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:35:35 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:35:35 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/741d9cb1-7a49-4d89-8b1a-78ae947f2c49_disk">
Sep 30 18:35:35 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:       </source>
Sep 30 18:35:35 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:35:35 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:35:35 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:35:35 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/741d9cb1-7a49-4d89-8b1a-78ae947f2c49_disk.config">
Sep 30 18:35:35 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:       </source>
Sep 30 18:35:35 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:35:35 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:35:35 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:35:35 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:1a:4f:5c"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:       <target dev="tap23538fed-fc"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:35:35 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/741d9cb1-7a49-4d89-8b1a-78ae947f2c49/console.log" append="off"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <video>
Sep 30 18:35:35 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     </video>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:35:35 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <controller type="usb" index="0"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:35:35 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:35:35 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:35:35 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:35:35 compute-0 nova_compute[265391]: </domain>
Sep 30 18:35:35 compute-0 nova_compute[265391]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Sep 30 18:35:35 compute-0 nova_compute[265391]: 2025-09-30 18:35:35.621 2 DEBUG nova.compute.manager [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Preparing to wait for external event network-vif-plugged-23538fed-fc3c-4080-bbea-55e12668af3b prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:35:35 compute-0 nova_compute[265391]: 2025-09-30 18:35:35.621 2 DEBUG oslo_concurrency.lockutils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Acquiring lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:35:35 compute-0 nova_compute[265391]: 2025-09-30 18:35:35.622 2 DEBUG oslo_concurrency.lockutils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:35:35 compute-0 nova_compute[265391]: 2025-09-30 18:35:35.622 2 DEBUG oslo_concurrency.lockutils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:35:35 compute-0 nova_compute[265391]: 2025-09-30 18:35:35.622 2 DEBUG nova.virt.libvirt.vif [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:35:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-1259001093',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-1259001093',id=27,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c634e1c17ed54907969576a0eb8eff50',ramdisk_id='',reservation_id='r-9g37rry3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteStrategies-1883747907',owner_user_name='tempest-TestExecuteStrategies-1883747907-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:35:26Z,user_data=None,user_id='623ef4a55c9e4fc28bb65e49246b5008',uuid=741d9cb1-7a49-4d89-8b1a-78ae947f2c49,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "23538fed-fc3c-4080-bbea-55e12668af3b", "address": "fa:16:3e:1a:4f:5c", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23538fed-fc", "ovs_interfaceid": "23538fed-fc3c-4080-bbea-55e12668af3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Sep 30 18:35:35 compute-0 nova_compute[265391]: 2025-09-30 18:35:35.623 2 DEBUG nova.network.os_vif_util [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converting VIF {"id": "23538fed-fc3c-4080-bbea-55e12668af3b", "address": "fa:16:3e:1a:4f:5c", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23538fed-fc", "ovs_interfaceid": "23538fed-fc3c-4080-bbea-55e12668af3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:35:35 compute-0 nova_compute[265391]: 2025-09-30 18:35:35.623 2 DEBUG nova.network.os_vif_util [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:4f:5c,bridge_name='br-int',has_traffic_filtering=True,id=23538fed-fc3c-4080-bbea-55e12668af3b,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23538fed-fc') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:35:35 compute-0 nova_compute[265391]: 2025-09-30 18:35:35.623 2 DEBUG os_vif [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:4f:5c,bridge_name='br-int',has_traffic_filtering=True,id=23538fed-fc3c-4080-bbea-55e12668af3b,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23538fed-fc') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Sep 30 18:35:35 compute-0 nova_compute[265391]: 2025-09-30 18:35:35.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:35 compute-0 nova_compute[265391]: 2025-09-30 18:35:35.624 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:35:35 compute-0 nova_compute[265391]: 2025-09-30 18:35:35.624 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:35:35 compute-0 nova_compute[265391]: 2025-09-30 18:35:35.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:35 compute-0 nova_compute[265391]: 2025-09-30 18:35:35.625 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '92c5bf17-6cf2-5fdf-90b2-77d036c7342b', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:35:35 compute-0 nova_compute[265391]: 2025-09-30 18:35:35.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:35 compute-0 nova_compute[265391]: 2025-09-30 18:35:35.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:35 compute-0 nova_compute[265391]: 2025-09-30 18:35:35.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:35 compute-0 nova_compute[265391]: 2025-09-30 18:35:35.642 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap23538fed-fc, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:35:35 compute-0 nova_compute[265391]: 2025-09-30 18:35:35.642 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap23538fed-fc, col_values=(('qos', UUID('9961ea65-e38d-4d4d-9dc7-d397e1b9882e')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:35:35 compute-0 nova_compute[265391]: 2025-09-30 18:35:35.643 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap23538fed-fc, col_values=(('external_ids', {'iface-id': '23538fed-fc3c-4080-bbea-55e12668af3b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:4f:5c', 'vm-uuid': '741d9cb1-7a49-4d89-8b1a-78ae947f2c49'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:35:35 compute-0 nova_compute[265391]: 2025-09-30 18:35:35.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:35 compute-0 NetworkManager[45059]: <info>  [1759257335.6447] manager: (tap23538fed-fc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/92)
Sep 30 18:35:35 compute-0 nova_compute[265391]: 2025-09-30 18:35:35.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:35:35 compute-0 nova_compute[265391]: 2025-09-30 18:35:35.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:35 compute-0 nova_compute[265391]: 2025-09-30 18:35:35.651 2 INFO os_vif [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:4f:5c,bridge_name='br-int',has_traffic_filtering=True,id=23538fed-fc3c-4080-bbea-55e12668af3b,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23538fed-fc')
Sep 30 18:35:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:35:36 compute-0 nova_compute[265391]: 2025-09-30 18:35:36.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1711: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 407 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Sep 30 18:35:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:35:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1599935679' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:35:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:35:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1599935679' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:35:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1599935679' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:35:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1599935679' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:35:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:35:37.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:37 compute-0 nova_compute[265391]: 2025-09-30 18:35:37.199 2 DEBUG nova.virt.libvirt.driver [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:35:37 compute-0 nova_compute[265391]: 2025-09-30 18:35:37.199 2 DEBUG nova.virt.libvirt.driver [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:35:37 compute-0 nova_compute[265391]: 2025-09-30 18:35:37.199 2 DEBUG nova.virt.libvirt.driver [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] No VIF found with MAC fa:16:3e:1a:4f:5c, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Sep 30 18:35:37 compute-0 nova_compute[265391]: 2025-09-30 18:35:37.200 2 INFO nova.virt.libvirt.driver [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Using config drive
Sep 30 18:35:37 compute-0 nova_compute[265391]: 2025-09-30 18:35:37.221 2 DEBUG nova.storage.rbd_utils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 741d9cb1-7a49-4d89-8b1a-78ae947f2c49_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:35:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:35:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:35:37.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:35:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:35:37.311Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:35:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:35:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:35:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:35:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:35:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:35:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:35:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:35:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:35:37 compute-0 sudo[339677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:35:37 compute-0 sudo[339677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:35:37 compute-0 sudo[339677]: pam_unix(sudo:session): session closed for user root
Sep 30 18:35:37 compute-0 ceph-mon[73755]: pgmap v1711: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 407 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Sep 30 18:35:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:35:37 compute-0 nova_compute[265391]: 2025-09-30 18:35:37.733 2 WARNING neutronclient.v2_0.client [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:35:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1712: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 407 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Sep 30 18:35:38 compute-0 nova_compute[265391]: 2025-09-30 18:35:38.686 2 INFO nova.virt.libvirt.driver [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Creating config drive at /var/lib/nova/instances/741d9cb1-7a49-4d89-8b1a-78ae947f2c49/disk.config
Sep 30 18:35:38 compute-0 nova_compute[265391]: 2025-09-30 18:35:38.699 2 DEBUG oslo_concurrency.processutils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/741d9cb1-7a49-4d89-8b1a-78ae947f2c49/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmp5vsxr01f execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:35:38 compute-0 ceph-mon[73755]: pgmap v1712: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 407 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Sep 30 18:35:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:35:38] "GET /metrics HTTP/1.1" 200 46726 "" "Prometheus/2.51.0"
Sep 30 18:35:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:35:38] "GET /metrics HTTP/1.1" 200 46726 "" "Prometheus/2.51.0"
Sep 30 18:35:38 compute-0 nova_compute[265391]: 2025-09-30 18:35:38.853 2 DEBUG oslo_concurrency.processutils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/741d9cb1-7a49-4d89-8b1a-78ae947f2c49/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmp5vsxr01f" returned: 0 in 0.154s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:35:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:35:38.859Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:35:38 compute-0 nova_compute[265391]: 2025-09-30 18:35:38.882 2 DEBUG nova.storage.rbd_utils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] rbd image 741d9cb1-7a49-4d89-8b1a-78ae947f2c49_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:35:38 compute-0 nova_compute[265391]: 2025-09-30 18:35:38.886 2 DEBUG oslo_concurrency.processutils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/741d9cb1-7a49-4d89-8b1a-78ae947f2c49/disk.config 741d9cb1-7a49-4d89-8b1a-78ae947f2c49_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:35:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:35:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:35:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:35:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:35:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:35:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:35:39.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:35:39 compute-0 nova_compute[265391]: 2025-09-30 18:35:39.227 2 DEBUG oslo_concurrency.processutils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/741d9cb1-7a49-4d89-8b1a-78ae947f2c49/disk.config 741d9cb1-7a49-4d89-8b1a-78ae947f2c49_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:35:39 compute-0 nova_compute[265391]: 2025-09-30 18:35:39.228 2 INFO nova.virt.libvirt.driver [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Deleting local config drive /var/lib/nova/instances/741d9cb1-7a49-4d89-8b1a-78ae947f2c49/disk.config because it was imported into RBD.
Sep 30 18:35:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:35:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:35:39.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:35:39 compute-0 kernel: tap23538fed-fc: entered promiscuous mode
Sep 30 18:35:39 compute-0 NetworkManager[45059]: <info>  [1759257339.2968] manager: (tap23538fed-fc): new Tun device (/org/freedesktop/NetworkManager/Devices/93)
Sep 30 18:35:39 compute-0 ovn_controller[156242]: 2025-09-30T18:35:39Z|00227|binding|INFO|Claiming lport 23538fed-fc3c-4080-bbea-55e12668af3b for this chassis.
Sep 30 18:35:39 compute-0 ovn_controller[156242]: 2025-09-30T18:35:39Z|00228|binding|INFO|23538fed-fc3c-4080-bbea-55e12668af3b: Claiming fa:16:3e:1a:4f:5c 10.100.0.10
Sep 30 18:35:39 compute-0 nova_compute[265391]: 2025-09-30 18:35:39.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:39 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:39.305 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:4f:5c 10.100.0.10'], port_security=['fa:16:3e:1a:4f:5c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '741d9cb1-7a49-4d89-8b1a-78ae947f2c49', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6901f664-336b-42d2-bbf7-58951befc8d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c634e1c17ed54907969576a0eb8eff50', 'neutron:revision_number': '4', 'neutron:security_group_ids': '11cf84c1-9641-409c-b5f5-6c5fe3a8afe5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c24d9de-651b-4bf8-842a-1286ab88b11d, chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=23538fed-fc3c-4080-bbea-55e12668af3b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:35:39 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:39.306 166158 INFO neutron.agent.ovn.metadata.agent [-] Port 23538fed-fc3c-4080-bbea-55e12668af3b in datapath 6901f664-336b-42d2-bbf7-58951befc8d1 bound to our chassis
Sep 30 18:35:39 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:39.308 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6901f664-336b-42d2-bbf7-58951befc8d1
Sep 30 18:35:39 compute-0 ovn_controller[156242]: 2025-09-30T18:35:39Z|00229|binding|INFO|Setting lport 23538fed-fc3c-4080-bbea-55e12668af3b ovn-installed in OVS
Sep 30 18:35:39 compute-0 ovn_controller[156242]: 2025-09-30T18:35:39Z|00230|binding|INFO|Setting lport 23538fed-fc3c-4080-bbea-55e12668af3b up in Southbound
Sep 30 18:35:39 compute-0 nova_compute[265391]: 2025-09-30 18:35:39.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:39 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:39.329 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[fe0ecf7a-7c35-4620-bbb2-b0d91a78b9b2]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:35:39 compute-0 systemd-machined[219917]: New machine qemu-20-instance-0000001b.
Sep 30 18:35:39 compute-0 systemd-udevd[339761]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:35:39 compute-0 systemd[1]: Started Virtual Machine qemu-20-instance-0000001b.
Sep 30 18:35:39 compute-0 NetworkManager[45059]: <info>  [1759257339.3596] device (tap23538fed-fc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 18:35:39 compute-0 NetworkManager[45059]: <info>  [1759257339.3603] device (tap23538fed-fc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Sep 30 18:35:39 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:39.363 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[258bc230-8ca0-4e47-8932-1708beeb16f8]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:35:39 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:39.366 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[612c3b5d-8184-4548-8e07-168fdd5dd2a6]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:35:39 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:39.403 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[7d61fe7e-1038-410b-af39-ae4d6bb61c9f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:35:39 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:39.421 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[1c8aa7c8-0bc1-4b91-a559-584db2a97524]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6901f664-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:41:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591200, 'reachable_time': 17513, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339772, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:35:39 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:39.443 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[5939a97a-0043-4f61-865b-4db53e3170c7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6901f664-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591217, 'tstamp': 591217}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339774, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6901f664-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591220, 'tstamp': 591220}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339774, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:35:39 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:39.445 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6901f664-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:35:39 compute-0 nova_compute[265391]: 2025-09-30 18:35:39.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:39 compute-0 nova_compute[265391]: 2025-09-30 18:35:39.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:39 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:39.447 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6901f664-30, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:35:39 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:39.447 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:35:39 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:39.448 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6901f664-30, col_values=(('external_ids', {'iface-id': '5b6cbf18-1826-41d0-920f-e9db4f1a1832'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:35:39 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:39.448 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
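The three ovsdbapp transactions above are the agent's idempotent re-wiring of the metadata tap: drop tap6901f664-30 from br-ex if present, ensure it exists on br-int, and stamp the Interface row with its Neutron iface-id. The second and third commits report "Transaction caused no change" because the port is already in the desired state. A minimal sketch of the same commands issued through ovsdbapp's Open_vSwitch API follows; the local ovsdb socket path is an assumption and error handling is omitted:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local Open_vSwitch database (socket path assumed).
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # Same commands as in the log; each is a no-op if the state already matches.
        txn.add(api.del_port('tap6901f664-30', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap6901f664-30', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap6901f664-30',
            ('external_ids', {'iface-id': '5b6cbf18-1826-41d0-920f-e9db4f1a1832'})))

The log runs these as three separate one-command transactions; grouping them here is only for brevity.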
Sep 30 18:35:39 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:39.449 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[6712bddb-373a-403b-a7e0-09e778b99af5]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-6901f664-336b-42d2-bbf7-58951befc8d1\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 6901f664-336b-42d2-bbf7-58951befc8d1\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:35:39 compute-0 sshd-session[339704]: Invalid user username from 14.225.220.107 port 53126
Sep 30 18:35:39 compute-0 sshd-session[339704]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:35:39 compute-0 sshd-session[339704]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.225.220.107
Sep 30 18:35:39 compute-0 nova_compute[265391]: 2025-09-30 18:35:39.607 2 DEBUG nova.compute.manager [req-d8f1d3ba-7d42-4e5d-904f-dec4e3e0903d req-e76a0d2c-13eb-4e6f-8fee-8f8166ccee81 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Received event network-vif-plugged-23538fed-fc3c-4080-bbea-55e12668af3b external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:35:39 compute-0 nova_compute[265391]: 2025-09-30 18:35:39.608 2 DEBUG oslo_concurrency.lockutils [req-d8f1d3ba-7d42-4e5d-904f-dec4e3e0903d req-e76a0d2c-13eb-4e6f-8fee-8f8166ccee81 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:35:39 compute-0 nova_compute[265391]: 2025-09-30 18:35:39.608 2 DEBUG oslo_concurrency.lockutils [req-d8f1d3ba-7d42-4e5d-904f-dec4e3e0903d req-e76a0d2c-13eb-4e6f-8fee-8f8166ccee81 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:35:39 compute-0 nova_compute[265391]: 2025-09-30 18:35:39.608 2 DEBUG oslo_concurrency.lockutils [req-d8f1d3ba-7d42-4e5d-904f-dec4e3e0903d req-e76a0d2c-13eb-4e6f-8fee-8f8166ccee81 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
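The acquire/waited/held triple above (and the longer-held "compute_resources" lock later in this capture) is oslo.concurrency's standard lock logging: the lock name, the callable that requested it, and how long it waited and held. A minimal sketch of the decorator form whose wrapper emits this logging, plus the equivalent context-manager form, assuming nothing beyond oslo.concurrency itself (the lock names are copied from the log):

    from oslo_concurrency import lockutils

    # Decorator form: the wrapper logs 'Acquiring lock "..." by "..."',
    # 'Lock "..." acquired ... waited Ns' and 'Lock "..." "released" ... held Ns',
    # matching the triple seen above for the per-instance "-events" lock.
    @lockutils.synchronized('741d9cb1-7a49-4d89-8b1a-78ae947f2c49-events')
    def _pop_event():
        pass  # mutate the pending-events map under the lock

    _pop_event()

    # Context-manager form for ad-hoc critical sections.
    with lockutils.lock('compute_resources'):
        pass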
Sep 30 18:35:39 compute-0 nova_compute[265391]: 2025-09-30 18:35:39.608 2 DEBUG nova.compute.manager [req-d8f1d3ba-7d42-4e5d-904f-dec4e3e0903d req-e76a0d2c-13eb-4e6f-8fee-8f8166ccee81 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Processing event network-vif-plugged-23538fed-fc3c-4080-bbea-55e12668af3b _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:35:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1713: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 408 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Sep 30 18:35:40 compute-0 nova_compute[265391]: 2025-09-30 18:35:40.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:35:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:35:41.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:41 compute-0 nova_compute[265391]: 2025-09-30 18:35:41.190 2 DEBUG nova.compute.manager [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:35:41 compute-0 nova_compute[265391]: 2025-09-30 18:35:41.193 2 DEBUG nova.virt.libvirt.driver [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Sep 30 18:35:41 compute-0 nova_compute[265391]: 2025-09-30 18:35:41.196 2 INFO nova.virt.libvirt.driver [-] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Instance spawned successfully.
Sep 30 18:35:41 compute-0 nova_compute[265391]: 2025-09-30 18:35:41.196 2 DEBUG nova.virt.libvirt.driver [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Sep 30 18:35:41 compute-0 nova_compute[265391]: 2025-09-30 18:35:41.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:35:41.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:41 compute-0 ceph-mon[73755]: pgmap v1713: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 408 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Sep 30 18:35:41 compute-0 sshd-session[339704]: Failed password for invalid user username from 14.225.220.107 port 53126 ssh2
Sep 30 18:35:41 compute-0 nova_compute[265391]: 2025-09-30 18:35:41.691 2 DEBUG nova.compute.manager [req-35208972-595c-40e7-986d-d738b12ee232 req-d7468455-86a7-411f-8677-fd0c593e0383 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Received event network-vif-plugged-23538fed-fc3c-4080-bbea-55e12668af3b external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:35:41 compute-0 nova_compute[265391]: 2025-09-30 18:35:41.692 2 DEBUG oslo_concurrency.lockutils [req-35208972-595c-40e7-986d-d738b12ee232 req-d7468455-86a7-411f-8677-fd0c593e0383 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:35:41 compute-0 nova_compute[265391]: 2025-09-30 18:35:41.692 2 DEBUG oslo_concurrency.lockutils [req-35208972-595c-40e7-986d-d738b12ee232 req-d7468455-86a7-411f-8677-fd0c593e0383 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:35:41 compute-0 nova_compute[265391]: 2025-09-30 18:35:41.693 2 DEBUG oslo_concurrency.lockutils [req-35208972-595c-40e7-986d-d738b12ee232 req-d7468455-86a7-411f-8677-fd0c593e0383 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:35:41 compute-0 nova_compute[265391]: 2025-09-30 18:35:41.693 2 DEBUG nova.compute.manager [req-35208972-595c-40e7-986d-d738b12ee232 req-d7468455-86a7-411f-8677-fd0c593e0383 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] No waiting events found dispatching network-vif-plugged-23538fed-fc3c-4080-bbea-55e12668af3b pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:35:41 compute-0 nova_compute[265391]: 2025-09-30 18:35:41.694 2 WARNING nova.compute.manager [req-35208972-595c-40e7-986d-d738b12ee232 req-d7468455-86a7-411f-8677-fd0c593e0383 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Received unexpected event network-vif-plugged-23538fed-fc3c-4080-bbea-55e12668af3b for instance with vm_state building and task_state spawning.
Sep 30 18:35:41 compute-0 nova_compute[265391]: 2025-09-30 18:35:41.708 2 DEBUG nova.virt.libvirt.driver [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:35:41 compute-0 nova_compute[265391]: 2025-09-30 18:35:41.709 2 DEBUG nova.virt.libvirt.driver [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:35:41 compute-0 nova_compute[265391]: 2025-09-30 18:35:41.709 2 DEBUG nova.virt.libvirt.driver [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:35:41 compute-0 nova_compute[265391]: 2025-09-30 18:35:41.710 2 DEBUG nova.virt.libvirt.driver [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:35:41 compute-0 nova_compute[265391]: 2025-09-30 18:35:41.711 2 DEBUG nova.virt.libvirt.driver [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:35:41 compute-0 nova_compute[265391]: 2025-09-30 18:35:41.711 2 DEBUG nova.virt.libvirt.driver [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:35:41 compute-0 nova_compute[265391]: 2025-09-30 18:35:41.934 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:35:42 compute-0 sshd-session[339704]: Received disconnect from 14.225.220.107 port 53126:11: Bye Bye [preauth]
Sep 30 18:35:42 compute-0 sshd-session[339704]: Disconnected from invalid user username 14.225.220.107 port 53126 [preauth]
Sep 30 18:35:42 compute-0 nova_compute[265391]: 2025-09-30 18:35:42.228 2 INFO nova.compute.manager [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Took 14.93 seconds to spawn the instance on the hypervisor.
Sep 30 18:35:42 compute-0 nova_compute[265391]: 2025-09-30 18:35:42.228 2 DEBUG nova.compute.manager [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Sep 30 18:35:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1714: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 70 KiB/s rd, 48 KiB/s wr, 23 op/s
Sep 30 18:35:42 compute-0 nova_compute[265391]: 2025-09-30 18:35:42.760 2 INFO nova.compute.manager [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Took 20.66 seconds to build instance.
Sep 30 18:35:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:35:43.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:43 compute-0 nova_compute[265391]: 2025-09-30 18:35:43.266 2 DEBUG oslo_concurrency.lockutils [None req-563d92b0-8593-4be5-b05e-eb1ff90d2e6d 623ef4a55c9e4fc28bb65e49246b5008 c634e1c17ed54907969576a0eb8eff50 - - default default] Lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 22.185s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:35:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:35:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:35:43.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:35:43 compute-0 ceph-mon[73755]: pgmap v1714: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 70 KiB/s rd, 48 KiB/s wr, 23 op/s
Sep 30 18:35:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:35:43.842Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:35:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:35:43.842Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:35:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:35:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:35:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:35:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:35:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1715: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 2.0 MiB/s rd, 61 KiB/s wr, 97 op/s
Sep 30 18:35:44 compute-0 sshd-session[339530]: error: kex_exchange_identification: read: Connection timed out
Sep 30 18:35:44 compute-0 sshd-session[339530]: banner exchange: Connection from 115.190.39.222 port 40274: Connection timed out
Sep 30 18:35:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:35:45.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:35:45.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:45 compute-0 nova_compute[265391]: 2025-09-30 18:35:45.429 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:35:45 compute-0 ceph-mon[73755]: pgmap v1715: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 2.0 MiB/s rd, 61 KiB/s wr, 97 op/s
Sep 30 18:35:45 compute-0 nova_compute[265391]: 2025-09-30 18:35:45.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:35:46 compute-0 nova_compute[265391]: 2025-09-30 18:35:46.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1716: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 74 op/s
Sep 30 18:35:46 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1171474475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:35:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:35:47.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:35:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:35:47.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:35:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:35:47.312Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:35:47 compute-0 nova_compute[265391]: 2025-09-30 18:35:47.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:35:47 compute-0 nova_compute[265391]: 2025-09-30 18:35:47.429 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:35:47 compute-0 ceph-mon[73755]: pgmap v1716: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 74 op/s
Sep 30 18:35:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1717: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 74 op/s
Sep 30 18:35:48 compute-0 nova_compute[265391]: 2025-09-30 18:35:48.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:35:48 compute-0 nova_compute[265391]: 2025-09-30 18:35:48.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:35:48 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3128617664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:35:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:35:48] "GET /metrics HTTP/1.1" 200 46726 "" "Prometheus/2.51.0"
Sep 30 18:35:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:35:48] "GET /metrics HTTP/1.1" 200 46726 "" "Prometheus/2.51.0"
Sep 30 18:35:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:35:48.860Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:35:48 compute-0 nova_compute[265391]: 2025-09-30 18:35:48.941 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:35:48 compute-0 nova_compute[265391]: 2025-09-30 18:35:48.942 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:35:48 compute-0 nova_compute[265391]: 2025-09-30 18:35:48.942 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:35:48 compute-0 nova_compute[265391]: 2025-09-30 18:35:48.942 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:35:48 compute-0 nova_compute[265391]: 2025-09-30 18:35:48.942 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
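To audit locally available storage, the resource tracker shells out to ceph df with the openstack client keyring via oslo.concurrency's processutils; the matching "CMD ... returned: 0" line a few entries below shows the call completing in about 0.46s. A minimal sketch of the same invocation and of reading pool usage out of the returned JSON (exact field names can vary slightly between Ceph releases):

    import json
    from oslo_concurrency import processutils

    # Same command as logged by nova_compute; returns (stdout, stderr).
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')

    stats = json.loads(out)
    print('cluster avail bytes:', stats['stats']['total_avail_bytes'])
    for pool in stats['pools']:
        print(pool['name'], pool['stats']['bytes_used'])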
Sep 30 18:35:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:35:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:35:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:35:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:35:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:35:49.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:35:49.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:35:49 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/306343805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:35:49 compute-0 nova_compute[265391]: 2025-09-30 18:35:49.400 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:35:49 compute-0 ceph-mon[73755]: pgmap v1717: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 74 op/s
Sep 30 18:35:49 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/306343805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:35:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1718: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 75 op/s
Sep 30 18:35:50 compute-0 nova_compute[265391]: 2025-09-30 18:35:50.440 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:35:50 compute-0 nova_compute[265391]: 2025-09-30 18:35:50.441 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:35:50 compute-0 nova_compute[265391]: 2025-09-30 18:35:50.444 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:35:50 compute-0 nova_compute[265391]: 2025-09-30 18:35:50.445 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:35:50 compute-0 nova_compute[265391]: 2025-09-30 18:35:50.601 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:35:50 compute-0 nova_compute[265391]: 2025-09-30 18:35:50.602 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:35:50 compute-0 nova_compute[265391]: 2025-09-30 18:35:50.636 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.033s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:35:50 compute-0 nova_compute[265391]: 2025-09-30 18:35:50.636 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4005MB free_disk=39.92577362060547GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:35:50 compute-0 nova_compute[265391]: 2025-09-30 18:35:50.637 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:35:50 compute-0 nova_compute[265391]: 2025-09-30 18:35:50.637 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:35:50 compute-0 nova_compute[265391]: 2025-09-30 18:35:50.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:35:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:35:51.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:51 compute-0 nova_compute[265391]: 2025-09-30 18:35:51.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:35:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:35:51.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:35:51 compute-0 ceph-mon[73755]: pgmap v1718: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 75 op/s
Sep 30 18:35:51 compute-0 nova_compute[265391]: 2025-09-30 18:35:51.726 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Instance 342a3981-de33-491a-974b-5566045fba97 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Sep 30 18:35:51 compute-0 nova_compute[265391]: 2025-09-30 18:35:51.726 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Instance 741d9cb1-7a49-4d89-8b1a-78ae947f2c49 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Sep 30 18:35:51 compute-0 nova_compute[265391]: 2025-09-30 18:35:51.727 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:35:51 compute-0 nova_compute[265391]: 2025-09-30 18:35:51.727 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=39GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:35:50 up  1:39,  0 user,  load average: 1.34, 0.95, 0.89\n', 'num_instances': '2', 'num_vm_active': '2', 'num_task_None': '2', 'num_os_type_None': '2', 'num_proj_c634e1c17ed54907969576a0eb8eff50': '2', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:35:51 compute-0 nova_compute[265391]: 2025-09-30 18:35:51.804 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:35:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:35:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3988588815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:35:52 compute-0 nova_compute[265391]: 2025-09-30 18:35:52.271 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:35:52 compute-0 nova_compute[265391]: 2025-09-30 18:35:52.278 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:35:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1719: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Sep 30 18:35:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:35:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:35:52 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3988588815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:35:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:35:52 compute-0 nova_compute[265391]: 2025-09-30 18:35:52.786 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:35:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:35:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:35:53.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:35:53 compute-0 nova_compute[265391]: 2025-09-30 18:35:53.297 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:35:53 compute-0 nova_compute[265391]: 2025-09-30 18:35:53.297 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.660s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:35:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:35:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:35:53.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:35:53 compute-0 ceph-mon[73755]: pgmap v1719: 353 pgs: 353 active+clean; 167 MiB data, 393 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Sep 30 18:35:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:35:53.843Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:35:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:35:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:35:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:35:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:35:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1720: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Sep 30 18:35:54 compute-0 nova_compute[265391]: 2025-09-30 18:35:54.297 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:35:54 compute-0 nova_compute[265391]: 2025-09-30 18:35:54.298 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:35:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:54.325 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:35:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:54.325 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:35:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:35:54.325 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:35:54 compute-0 nova_compute[265391]: 2025-09-30 18:35:54.423 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:35:54 compute-0 ovn_controller[156242]: 2025-09-30T18:35:54Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1a:4f:5c 10.100.0.10
Sep 30 18:35:54 compute-0 ovn_controller[156242]: 2025-09-30T18:35:54Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1a:4f:5c 10.100.0.10
Sep 30 18:35:54 compute-0 ceph-mon[73755]: pgmap v1720: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Sep 30 18:35:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:35:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:35:55.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:35:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:35:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:35:55.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:35:55 compute-0 podman[339883]: 2025-09-30 18:35:55.560545743 +0000 UTC m=+0.083804323 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Sep 30 18:35:55 compute-0 podman[339881]: 2025-09-30 18:35:55.567942815 +0000 UTC m=+0.092143210 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930)
Sep 30 18:35:55 compute-0 podman[339882]: 2025-09-30 18:35:55.615311823 +0000 UTC m=+0.137290610 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Sep 30 18:35:55 compute-0 nova_compute[265391]: 2025-09-30 18:35:55.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:35:56 compute-0 nova_compute[265391]: 2025-09-30 18:35:56.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:35:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1721: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:35:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:35:57.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:35:57.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:35:57.313Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:35:57 compute-0 ceph-mon[73755]: pgmap v1721: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:35:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:35:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4216399660' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:35:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:35:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4216399660' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:35:57 compute-0 sudo[339951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:35:57 compute-0 sudo[339951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:35:57 compute-0 sudo[339951]: pam_unix(sudo:session): session closed for user root
Sep 30 18:35:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1722: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:35:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/4216399660' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:35:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/4216399660' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:35:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:35:58] "GET /metrics HTTP/1.1" 200 46727 "" "Prometheus/2.51.0"
Sep 30 18:35:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:35:58] "GET /metrics HTTP/1.1" 200 46727 "" "Prometheus/2.51.0"
Sep 30 18:35:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:35:58.862Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:35:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:35:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:35:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:35:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:35:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:35:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:35:59.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:35:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:35:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:35:59.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:35:59 compute-0 ceph-mon[73755]: pgmap v1722: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:35:59 compute-0 podman[276673]: time="2025-09-30T18:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:35:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:35:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10766 "" "Go-http-client/1.1"
Sep 30 18:36:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1723: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:36:00 compute-0 nova_compute[265391]: 2025-09-30 18:36:00.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:36:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:36:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:36:01.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:36:01 compute-0 nova_compute[265391]: 2025-09-30 18:36:01.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:36:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:36:01.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:36:01 compute-0 openstack_network_exporter[279566]: ERROR   18:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:36:01 compute-0 openstack_network_exporter[279566]: ERROR   18:36:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:36:01 compute-0 openstack_network_exporter[279566]: ERROR   18:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:36:01 compute-0 openstack_network_exporter[279566]: ERROR   18:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:36:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:36:01 compute-0 openstack_network_exporter[279566]: ERROR   18:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:36:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:36:01 compute-0 ceph-mon[73755]: pgmap v1723: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:36:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1724: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:36:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:36:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:36:03.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:36:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:36:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:36:03.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:36:03 compute-0 ceph-mon[73755]: pgmap v1724: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:36:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:36:03.844Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:36:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:36:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:36:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:36:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:36:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1725: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 18:36:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:36:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:36:05.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:36:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:36:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:36:05.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:36:05 compute-0 ceph-mon[73755]: pgmap v1725: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 18:36:05 compute-0 nova_compute[265391]: 2025-09-30 18:36:05.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:36:06 compute-0 nova_compute[265391]: 2025-09-30 18:36:06.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1726: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 1.8 KiB/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:36:06 compute-0 podman[339985]: 2025-09-30 18:36:06.552948779 +0000 UTC m=+0.079656306 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20250930, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Sep 30 18:36:06 compute-0 podman[339987]: 2025-09-30 18:36:06.567002823 +0000 UTC m=+0.096139313 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_id=edpm, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vendor=Red Hat, Inc., distribution-scope=public, release=1755695350)
Sep 30 18:36:06 compute-0 podman[339986]: 2025-09-30 18:36:06.568292397 +0000 UTC m=+0.095771534 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, config_id=iscsid, io.buildah.version=1.41.4, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:36:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:36:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:36:07.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:36:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:36:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:36:07.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:36:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:36:07.315Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:36:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:36:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:36:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:36:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:36:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:36:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:36:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:36:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:36:07 compute-0 ceph-mon[73755]: pgmap v1726: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 1.8 KiB/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:36:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0022761022050205118 of space, bias 1.0, pg target 0.45522044100410236 quantized to 32 (current 32)
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:36:08
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['images', 'vms', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', '.nfs', 'default.rgw.meta', 'backups', '.mgr']
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1727: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 1.8 KiB/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:36:08 compute-0 nova_compute[265391]: 2025-09-30 18:36:08.422 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:36:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:36:08] "GET /metrics HTTP/1.1" 200 46727 "" "Prometheus/2.51.0"
Sep 30 18:36:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:36:08] "GET /metrics HTTP/1.1" 200 46727 "" "Prometheus/2.51.0"
Sep 30 18:36:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:36:08.863Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:36:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:36:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:36:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:36:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:36:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:36:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:36:09.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:36:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:36:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:36:09.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:36:09 compute-0 ovn_controller[156242]: 2025-09-30T18:36:09Z|00231|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Sep 30 18:36:09 compute-0 ceph-mon[73755]: pgmap v1727: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 1.8 KiB/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:36:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1728: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 2.1 KiB/s rd, 16 KiB/s wr, 2 op/s
Sep 30 18:36:10 compute-0 nova_compute[265391]: 2025-09-30 18:36:10.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:36:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:36:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:36:11.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:36:11 compute-0 nova_compute[265391]: 2025-09-30 18:36:11.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:36:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:36:11.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:36:11 compute-0 ceph-mon[73755]: pgmap v1728: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 2.1 KiB/s rd, 16 KiB/s wr, 2 op/s
Sep 30 18:36:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1729: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 16 KiB/s wr, 1 op/s
Sep 30 18:36:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:36:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:36:13.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:36:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:36:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:36:13.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:36:13 compute-0 nova_compute[265391]: 2025-09-30 18:36:13.428 2 DEBUG nova.virt.libvirt.driver [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Check if temp file /var/lib/nova/instances/tmpufrp_vl0 exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10968
Sep 30 18:36:13 compute-0 nova_compute[265391]: 2025-09-30 18:36:13.433 2 DEBUG nova.compute.manager [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpufrp_vl0',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='741d9cb1-7a49-4d89-8b1a-78ae947f2c49',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.12/site-packages/nova/compute/manager.py:9294
Sep 30 18:36:13 compute-0 ceph-mon[73755]: pgmap v1729: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 16 KiB/s wr, 1 op/s
Sep 30 18:36:13 compute-0 nova_compute[265391]: 2025-09-30 18:36:13.625 2 DEBUG nova.virt.libvirt.driver [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Check if temp file /var/lib/nova/instances/tmpjbxj26bw exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10968
Sep 30 18:36:13 compute-0 nova_compute[265391]: 2025-09-30 18:36:13.629 2 DEBUG nova.compute.manager [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpjbxj26bw',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='342a3981-de33-491a-974b-5566045fba97',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.12/site-packages/nova/compute/manager.py:9294
Sep 30 18:36:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:36:13.845Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:36:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:36:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:36:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:36:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:36:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1730: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 17 KiB/s wr, 1 op/s
Sep 30 18:36:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:36:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:36:15.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:36:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:36:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:36:15.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:36:15 compute-0 ceph-mon[73755]: pgmap v1730: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 17 KiB/s wr, 1 op/s
Sep 30 18:36:15 compute-0 nova_compute[265391]: 2025-09-30 18:36:15.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:36:16 compute-0 nova_compute[265391]: 2025-09-30 18:36:16.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1731: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 2.0 KiB/s wr, 0 op/s
Sep 30 18:36:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:36:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:36:17.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:36:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:36:17.316Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:36:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:36:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:36:17.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:36:17 compute-0 sudo[340055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:36:17 compute-0 sudo[340055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:36:17 compute-0 sudo[340055]: pam_unix(sudo:session): session closed for user root
Sep 30 18:36:17 compute-0 sudo[340080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:36:17 compute-0 sudo[340080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:36:17 compute-0 ceph-mon[73755]: pgmap v1731: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 2.0 KiB/s wr, 0 op/s
Sep 30 18:36:17 compute-0 nova_compute[265391]: 2025-09-30 18:36:17.707 2 DEBUG nova.compute.manager [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Preparing to wait for external event network-vif-plugged-23538fed-fc3c-4080-bbea-55e12668af3b prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:36:17 compute-0 nova_compute[265391]: 2025-09-30 18:36:17.707 2 DEBUG oslo_concurrency.lockutils [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:36:17 compute-0 nova_compute[265391]: 2025-09-30 18:36:17.707 2 DEBUG oslo_concurrency.lockutils [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:36:17 compute-0 nova_compute[265391]: 2025-09-30 18:36:17.708 2 DEBUG oslo_concurrency.lockutils [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:36:17 compute-0 sudo[340119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:36:17 compute-0 sudo[340119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:36:17 compute-0 sudo[340119]: pam_unix(sudo:session): session closed for user root
Sep 30 18:36:18 compute-0 sudo[340080]: pam_unix(sudo:session): session closed for user root
Sep 30 18:36:18 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:36:18 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:36:18 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:36:18 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:36:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1732: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 2.0 KiB/s wr, 0 op/s
Sep 30 18:36:18 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:36:18 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:36:18 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:36:18 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:36:18 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:36:18 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:36:18 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:36:18 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:36:18 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:36:18 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:36:18 compute-0 sudo[340162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:36:18 compute-0 sudo[340162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:36:18 compute-0 sudo[340162]: pam_unix(sudo:session): session closed for user root
Sep 30 18:36:18 compute-0 sudo[340187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:36:18 compute-0 sudo[340187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:36:18 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:36:18 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:36:18 compute-0 ceph-mon[73755]: pgmap v1732: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 2.0 KiB/s wr, 0 op/s
Sep 30 18:36:18 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:36:18 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:36:18 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:36:18 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:36:18 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:36:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:36:18] "GET /metrics HTTP/1.1" 200 46727 "" "Prometheus/2.51.0"
Sep 30 18:36:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:36:18] "GET /metrics HTTP/1.1" 200 46727 "" "Prometheus/2.51.0"
Sep 30 18:36:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:36:18.864Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:36:18 compute-0 podman[340255]: 2025-09-30 18:36:18.995058258 +0000 UTC m=+0.050775528 container create cd166e0a47c4a9d59dea3510e5402d0dc59948bc8e4fca3695b153bbb84ad3db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Sep 30 18:36:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:36:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:36:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:36:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:36:19 compute-0 systemd[1]: Started libpod-conmon-cd166e0a47c4a9d59dea3510e5402d0dc59948bc8e4fca3695b153bbb84ad3db.scope.
Sep 30 18:36:19 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:36:19 compute-0 podman[340255]: 2025-09-30 18:36:18.971942058 +0000 UTC m=+0.027659338 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:36:19 compute-0 podman[340255]: 2025-09-30 18:36:19.087607137 +0000 UTC m=+0.143324387 container init cd166e0a47c4a9d59dea3510e5402d0dc59948bc8e4fca3695b153bbb84ad3db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_driscoll, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Sep 30 18:36:19 compute-0 podman[340255]: 2025-09-30 18:36:19.099671 +0000 UTC m=+0.155388240 container start cd166e0a47c4a9d59dea3510e5402d0dc59948bc8e4fca3695b153bbb84ad3db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 18:36:19 compute-0 romantic_driscoll[340272]: 167 167
Sep 30 18:36:19 compute-0 systemd[1]: libpod-cd166e0a47c4a9d59dea3510e5402d0dc59948bc8e4fca3695b153bbb84ad3db.scope: Deactivated successfully.
Sep 30 18:36:19 compute-0 podman[340255]: 2025-09-30 18:36:19.106799015 +0000 UTC m=+0.162516255 container attach cd166e0a47c4a9d59dea3510e5402d0dc59948bc8e4fca3695b153bbb84ad3db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_driscoll, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:36:19 compute-0 podman[340255]: 2025-09-30 18:36:19.107122203 +0000 UTC m=+0.162839443 container died cd166e0a47c4a9d59dea3510e5402d0dc59948bc8e4fca3695b153bbb84ad3db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_driscoll, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Sep 30 18:36:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-882eef384932b3a8be70b37e208a8b373aa059fbbffcf8539e8e059cc9398d33-merged.mount: Deactivated successfully.
Sep 30 18:36:19 compute-0 podman[340255]: 2025-09-30 18:36:19.152243963 +0000 UTC m=+0.207961193 container remove cd166e0a47c4a9d59dea3510e5402d0dc59948bc8e4fca3695b153bbb84ad3db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_driscoll, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:36:19 compute-0 systemd[1]: libpod-conmon-cd166e0a47c4a9d59dea3510e5402d0dc59948bc8e4fca3695b153bbb84ad3db.scope: Deactivated successfully.
Sep 30 18:36:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:36:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:36:19.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:36:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:36:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:36:19.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:36:19 compute-0 podman[340296]: 2025-09-30 18:36:19.398535448 +0000 UTC m=+0.079027400 container create 2cb1e683ac312b8a38d1367b90b044adc388e26b7b326364b3c5b83d0d43b607 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Sep 30 18:36:19 compute-0 systemd[1]: Started libpod-conmon-2cb1e683ac312b8a38d1367b90b044adc388e26b7b326364b3c5b83d0d43b607.scope.
Sep 30 18:36:19 compute-0 podman[340296]: 2025-09-30 18:36:19.365419279 +0000 UTC m=+0.045911281 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:36:19 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:36:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a67307652b35ef729a4b7826b3f5e3b586e2928df3f50c3c56fa9abe7603ec9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:36:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a67307652b35ef729a4b7826b3f5e3b586e2928df3f50c3c56fa9abe7603ec9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:36:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a67307652b35ef729a4b7826b3f5e3b586e2928df3f50c3c56fa9abe7603ec9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:36:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a67307652b35ef729a4b7826b3f5e3b586e2928df3f50c3c56fa9abe7603ec9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:36:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a67307652b35ef729a4b7826b3f5e3b586e2928df3f50c3c56fa9abe7603ec9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:36:19 compute-0 podman[340296]: 2025-09-30 18:36:19.48539868 +0000 UTC m=+0.165890632 container init 2cb1e683ac312b8a38d1367b90b044adc388e26b7b326364b3c5b83d0d43b607 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_spence, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Sep 30 18:36:19 compute-0 podman[340296]: 2025-09-30 18:36:19.494698721 +0000 UTC m=+0.175190633 container start 2cb1e683ac312b8a38d1367b90b044adc388e26b7b326364b3c5b83d0d43b607 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_spence, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Sep 30 18:36:19 compute-0 podman[340296]: 2025-09-30 18:36:19.498407167 +0000 UTC m=+0.178899079 container attach 2cb1e683ac312b8a38d1367b90b044adc388e26b7b326364b3c5b83d0d43b607 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_spence, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Sep 30 18:36:19 compute-0 eager_spence[340312]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:36:19 compute-0 eager_spence[340312]: --> All data devices are unavailable
Sep 30 18:36:19 compute-0 systemd[1]: libpod-2cb1e683ac312b8a38d1367b90b044adc388e26b7b326364b3c5b83d0d43b607.scope: Deactivated successfully.
Sep 30 18:36:19 compute-0 podman[340296]: 2025-09-30 18:36:19.84449273 +0000 UTC m=+0.524984642 container died 2cb1e683ac312b8a38d1367b90b044adc388e26b7b326364b3c5b83d0d43b607 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_spence, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:36:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a67307652b35ef729a4b7826b3f5e3b586e2928df3f50c3c56fa9abe7603ec9-merged.mount: Deactivated successfully.
Sep 30 18:36:19 compute-0 podman[340296]: 2025-09-30 18:36:19.894626249 +0000 UTC m=+0.575118171 container remove 2cb1e683ac312b8a38d1367b90b044adc388e26b7b326364b3c5b83d0d43b607 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Sep 30 18:36:19 compute-0 systemd[1]: libpod-conmon-2cb1e683ac312b8a38d1367b90b044adc388e26b7b326364b3c5b83d0d43b607.scope: Deactivated successfully.
Sep 30 18:36:19 compute-0 sudo[340187]: pam_unix(sudo:session): session closed for user root
Sep 30 18:36:19 compute-0 sudo[340340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:36:19 compute-0 sudo[340340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:36:19 compute-0 sudo[340340]: pam_unix(sudo:session): session closed for user root
Sep 30 18:36:20 compute-0 sudo[340365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:36:20 compute-0 sudo[340365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:36:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1733: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 10 KiB/s wr, 2 op/s
Sep 30 18:36:20 compute-0 podman[340432]: 2025-09-30 18:36:20.502482509 +0000 UTC m=+0.049565586 container create 231a95b6557db5862212b5291fcef757dd23957977110591c7fd8510c8f70a16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_moser, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Sep 30 18:36:20 compute-0 systemd[1]: Started libpod-conmon-231a95b6557db5862212b5291fcef757dd23957977110591c7fd8510c8f70a16.scope.
Sep 30 18:36:20 compute-0 podman[340432]: 2025-09-30 18:36:20.47940473 +0000 UTC m=+0.026487807 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:36:20 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:36:20 compute-0 podman[340432]: 2025-09-30 18:36:20.607368438 +0000 UTC m=+0.154451495 container init 231a95b6557db5862212b5291fcef757dd23957977110591c7fd8510c8f70a16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_moser, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Sep 30 18:36:20 compute-0 podman[340432]: 2025-09-30 18:36:20.614754839 +0000 UTC m=+0.161837876 container start 231a95b6557db5862212b5291fcef757dd23957977110591c7fd8510c8f70a16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_moser, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:36:20 compute-0 podman[340432]: 2025-09-30 18:36:20.618278661 +0000 UTC m=+0.165361748 container attach 231a95b6557db5862212b5291fcef757dd23957977110591c7fd8510c8f70a16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_moser, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Sep 30 18:36:20 compute-0 zen_moser[340448]: 167 167
Sep 30 18:36:20 compute-0 systemd[1]: libpod-231a95b6557db5862212b5291fcef757dd23957977110591c7fd8510c8f70a16.scope: Deactivated successfully.
Sep 30 18:36:20 compute-0 podman[340432]: 2025-09-30 18:36:20.624060981 +0000 UTC m=+0.171144018 container died 231a95b6557db5862212b5291fcef757dd23957977110591c7fd8510c8f70a16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:36:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec1e99293fc666293a6fb5d4a11109ac98376cdad0a053cd281d222f0421ce7b-merged.mount: Deactivated successfully.
Sep 30 18:36:20 compute-0 podman[340432]: 2025-09-30 18:36:20.660437094 +0000 UTC m=+0.207520131 container remove 231a95b6557db5862212b5291fcef757dd23957977110591c7fd8510c8f70a16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_moser, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:36:20 compute-0 nova_compute[265391]: 2025-09-30 18:36:20.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:20 compute-0 systemd[1]: libpod-conmon-231a95b6557db5862212b5291fcef757dd23957977110591c7fd8510c8f70a16.scope: Deactivated successfully.
Sep 30 18:36:20 compute-0 podman[340474]: 2025-09-30 18:36:20.82845981 +0000 UTC m=+0.039910816 container create 3e0a034ccbb00b3c33ac40b5a960b5368ce05a5453e9fcaa5fc6a3d4ae677e41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_satoshi, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:36:20 compute-0 systemd[1]: Started libpod-conmon-3e0a034ccbb00b3c33ac40b5a960b5368ce05a5453e9fcaa5fc6a3d4ae677e41.scope.
Sep 30 18:36:20 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:36:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d639a312e9b0e3a3190c2d15acbe1d10f796472a7d591fccd150ec052cb455b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:36:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d639a312e9b0e3a3190c2d15acbe1d10f796472a7d591fccd150ec052cb455b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:36:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d639a312e9b0e3a3190c2d15acbe1d10f796472a7d591fccd150ec052cb455b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:36:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d639a312e9b0e3a3190c2d15acbe1d10f796472a7d591fccd150ec052cb455b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:36:20 compute-0 podman[340474]: 2025-09-30 18:36:20.813014279 +0000 UTC m=+0.024465315 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:36:20 compute-0 podman[340474]: 2025-09-30 18:36:20.906208046 +0000 UTC m=+0.117659082 container init 3e0a034ccbb00b3c33ac40b5a960b5368ce05a5453e9fcaa5fc6a3d4ae677e41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:36:20 compute-0 podman[340474]: 2025-09-30 18:36:20.916477682 +0000 UTC m=+0.127928698 container start 3e0a034ccbb00b3c33ac40b5a960b5368ce05a5453e9fcaa5fc6a3d4ae677e41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_satoshi, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 18:36:20 compute-0 podman[340474]: 2025-09-30 18:36:20.92025501 +0000 UTC m=+0.131706046 container attach 3e0a034ccbb00b3c33ac40b5a960b5368ce05a5453e9fcaa5fc6a3d4ae677e41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_satoshi, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Sep 30 18:36:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:36:21 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Sep 30 18:36:21 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:36:21.070040) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 18:36:21 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Sep 30 18:36:21 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257381070078, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 1470, "num_deletes": 251, "total_data_size": 2587321, "memory_usage": 2623160, "flush_reason": "Manual Compaction"}
Sep 30 18:36:21 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Sep 30 18:36:21 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257381086459, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 2504292, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42820, "largest_seqno": 44288, "table_properties": {"data_size": 2497533, "index_size": 3830, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 13635, "raw_average_key_size": 18, "raw_value_size": 2483952, "raw_average_value_size": 3416, "num_data_blocks": 169, "num_entries": 727, "num_filter_entries": 727, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759257244, "oldest_key_time": 1759257244, "file_creation_time": 1759257381, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:36:21 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 16502 microseconds, and 10962 cpu microseconds.
Sep 30 18:36:21 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:36:21 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:36:21.086534) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 2504292 bytes OK
Sep 30 18:36:21 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:36:21.086566) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Sep 30 18:36:21 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:36:21.090119) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Sep 30 18:36:21 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:36:21.090154) EVENT_LOG_v1 {"time_micros": 1759257381090144, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 18:36:21 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:36:21.090178) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 18:36:21 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 2580908, prev total WAL file size 2580908, number of live WAL files 2.
Sep 30 18:36:21 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:36:21 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:36:21.091545) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323530' seq:72057594037927935, type:22 .. '6B7600353032' seq:0, type:0; will stop at (end)
Sep 30 18:36:21 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 18:36:21 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(2445KB)], [95(12MB)]
Sep 30 18:36:21 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257381091580, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 16034851, "oldest_snapshot_seqno": -1}
Sep 30 18:36:21 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 7150 keys, 14628381 bytes, temperature: kUnknown
Sep 30 18:36:21 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257381164208, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 14628381, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14580823, "index_size": 28566, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17925, "raw_key_size": 185953, "raw_average_key_size": 26, "raw_value_size": 14452802, "raw_average_value_size": 2021, "num_data_blocks": 1132, "num_entries": 7150, "num_filter_entries": 7150, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759257381, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:36:21 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:36:21 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:36:21.164643) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 14628381 bytes
Sep 30 18:36:21 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:36:21.166220) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 220.3 rd, 201.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 12.9 +0.0 blob) out(14.0 +0.0 blob), read-write-amplify(12.2) write-amplify(5.8) OK, records in: 7666, records dropped: 516 output_compression: NoCompression
Sep 30 18:36:21 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:36:21.166255) EVENT_LOG_v1 {"time_micros": 1759257381166240, "job": 56, "event": "compaction_finished", "compaction_time_micros": 72785, "compaction_time_cpu_micros": 28994, "output_level": 6, "num_output_files": 1, "total_output_size": 14628381, "num_input_records": 7666, "num_output_records": 7150, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 18:36:21 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:36:21 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257381167202, "job": 56, "event": "table_file_deletion", "file_number": 97}
Sep 30 18:36:21 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:36:21 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257381172249, "job": 56, "event": "table_file_deletion", "file_number": 95}
Sep 30 18:36:21 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:36:21.091459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:36:21 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:36:21.172337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:36:21 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:36:21.172356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:36:21 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:36:21.172357) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:36:21 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:36:21.172358) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:36:21 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:36:21.172360) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:36:21 compute-0 determined_satoshi[340491]: {
Sep 30 18:36:21 compute-0 determined_satoshi[340491]:     "0": [
Sep 30 18:36:21 compute-0 determined_satoshi[340491]:         {
Sep 30 18:36:21 compute-0 determined_satoshi[340491]:             "devices": [
Sep 30 18:36:21 compute-0 determined_satoshi[340491]:                 "/dev/loop3"
Sep 30 18:36:21 compute-0 determined_satoshi[340491]:             ],
Sep 30 18:36:21 compute-0 determined_satoshi[340491]:             "lv_name": "ceph_lv0",
Sep 30 18:36:21 compute-0 determined_satoshi[340491]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:36:21 compute-0 determined_satoshi[340491]:             "lv_size": "21470642176",
Sep 30 18:36:21 compute-0 determined_satoshi[340491]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:36:21 compute-0 determined_satoshi[340491]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:36:21 compute-0 determined_satoshi[340491]:             "name": "ceph_lv0",
Sep 30 18:36:21 compute-0 determined_satoshi[340491]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:36:21 compute-0 determined_satoshi[340491]:             "tags": {
Sep 30 18:36:21 compute-0 determined_satoshi[340491]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:36:21 compute-0 determined_satoshi[340491]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:36:21 compute-0 determined_satoshi[340491]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:36:21 compute-0 determined_satoshi[340491]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:36:21 compute-0 determined_satoshi[340491]:                 "ceph.cluster_name": "ceph",
Sep 30 18:36:21 compute-0 determined_satoshi[340491]:                 "ceph.crush_device_class": "",
Sep 30 18:36:21 compute-0 determined_satoshi[340491]:                 "ceph.encrypted": "0",
Sep 30 18:36:21 compute-0 determined_satoshi[340491]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:36:21 compute-0 determined_satoshi[340491]:                 "ceph.osd_id": "0",
Sep 30 18:36:21 compute-0 determined_satoshi[340491]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:36:21 compute-0 determined_satoshi[340491]:                 "ceph.type": "block",
Sep 30 18:36:21 compute-0 determined_satoshi[340491]:                 "ceph.vdo": "0",
Sep 30 18:36:21 compute-0 determined_satoshi[340491]:                 "ceph.with_tpm": "0"
Sep 30 18:36:21 compute-0 determined_satoshi[340491]:             },
Sep 30 18:36:21 compute-0 determined_satoshi[340491]:             "type": "block",
Sep 30 18:36:21 compute-0 determined_satoshi[340491]:             "vg_name": "ceph_vg0"
Sep 30 18:36:21 compute-0 determined_satoshi[340491]:         }
Sep 30 18:36:21 compute-0 determined_satoshi[340491]:     ]
Sep 30 18:36:21 compute-0 determined_satoshi[340491]: }
Sep 30 18:36:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:36:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:36:21.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:36:21 compute-0 systemd[1]: libpod-3e0a034ccbb00b3c33ac40b5a960b5368ce05a5453e9fcaa5fc6a3d4ae677e41.scope: Deactivated successfully.
Sep 30 18:36:21 compute-0 podman[340474]: 2025-09-30 18:36:21.258447498 +0000 UTC m=+0.469898544 container died 3e0a034ccbb00b3c33ac40b5a960b5368ce05a5453e9fcaa5fc6a3d4ae677e41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Sep 30 18:36:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d639a312e9b0e3a3190c2d15acbe1d10f796472a7d591fccd150ec052cb455b-merged.mount: Deactivated successfully.
Sep 30 18:36:21 compute-0 nova_compute[265391]: 2025-09-30 18:36:21.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:21 compute-0 podman[340474]: 2025-09-30 18:36:21.307606972 +0000 UTC m=+0.519057998 container remove 3e0a034ccbb00b3c33ac40b5a960b5368ce05a5453e9fcaa5fc6a3d4ae677e41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325)
Sep 30 18:36:21 compute-0 systemd[1]: libpod-conmon-3e0a034ccbb00b3c33ac40b5a960b5368ce05a5453e9fcaa5fc6a3d4ae677e41.scope: Deactivated successfully.
Sep 30 18:36:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:36:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:36:21.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:36:21 compute-0 sudo[340365]: pam_unix(sudo:session): session closed for user root
Sep 30 18:36:21 compute-0 ceph-mon[73755]: pgmap v1733: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 10 KiB/s wr, 2 op/s
Sep 30 18:36:21 compute-0 sudo[340513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:36:21 compute-0 sudo[340513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:36:21 compute-0 sudo[340513]: pam_unix(sudo:session): session closed for user root
Sep 30 18:36:21 compute-0 sudo[340538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:36:21 compute-0 sudo[340538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:36:21 compute-0 podman[340606]: 2025-09-30 18:36:21.946122636 +0000 UTC m=+0.069962184 container create 943f3f453a375a8d12abfbe72ccffb47852e4fe433444d2cc8f19b059e3283ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:36:21 compute-0 systemd[1]: Started libpod-conmon-943f3f453a375a8d12abfbe72ccffb47852e4fe433444d2cc8f19b059e3283ec.scope.
Sep 30 18:36:22 compute-0 podman[340606]: 2025-09-30 18:36:21.917187136 +0000 UTC m=+0.041026744 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:36:22 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:36:22 compute-0 podman[340606]: 2025-09-30 18:36:22.032056644 +0000 UTC m=+0.155896162 container init 943f3f453a375a8d12abfbe72ccffb47852e4fe433444d2cc8f19b059e3283ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_tu, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Sep 30 18:36:22 compute-0 podman[340606]: 2025-09-30 18:36:22.038897262 +0000 UTC m=+0.162736780 container start 943f3f453a375a8d12abfbe72ccffb47852e4fe433444d2cc8f19b059e3283ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_tu, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 18:36:22 compute-0 podman[340606]: 2025-09-30 18:36:22.042168526 +0000 UTC m=+0.166008054 container attach 943f3f453a375a8d12abfbe72ccffb47852e4fe433444d2cc8f19b059e3283ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Sep 30 18:36:22 compute-0 unruffled_tu[340623]: 167 167
Sep 30 18:36:22 compute-0 systemd[1]: libpod-943f3f453a375a8d12abfbe72ccffb47852e4fe433444d2cc8f19b059e3283ec.scope: Deactivated successfully.
Sep 30 18:36:22 compute-0 conmon[340623]: conmon 943f3f453a375a8d12ab <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-943f3f453a375a8d12abfbe72ccffb47852e4fe433444d2cc8f19b059e3283ec.scope/container/memory.events
Sep 30 18:36:22 compute-0 podman[340606]: 2025-09-30 18:36:22.048426979 +0000 UTC m=+0.172266497 container died 943f3f453a375a8d12abfbe72ccffb47852e4fe433444d2cc8f19b059e3283ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_tu, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Sep 30 18:36:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-08e5948d4540f6e5bdab83f16ea705f01278a762e35088f27b3294d408ef3ec2-merged.mount: Deactivated successfully.
Sep 30 18:36:22 compute-0 podman[340606]: 2025-09-30 18:36:22.085397977 +0000 UTC m=+0.209237505 container remove 943f3f453a375a8d12abfbe72ccffb47852e4fe433444d2cc8f19b059e3283ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_tu, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:36:22 compute-0 systemd[1]: libpod-conmon-943f3f453a375a8d12abfbe72ccffb47852e4fe433444d2cc8f19b059e3283ec.scope: Deactivated successfully.
Sep 30 18:36:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1734: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 9.1 KiB/s wr, 1 op/s
Sep 30 18:36:22 compute-0 podman[340647]: 2025-09-30 18:36:22.307581897 +0000 UTC m=+0.046689582 container create 0a71d3dab0a155aa0806a5545c592ca070d3783a3315344d480a5ff0ac2885eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Sep 30 18:36:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:36:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:36:22 compute-0 systemd[1]: Started libpod-conmon-0a71d3dab0a155aa0806a5545c592ca070d3783a3315344d480a5ff0ac2885eb.scope.
Sep 30 18:36:22 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:36:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22560e2afd5b1d584e74db762ba0acf981d480a42f65e49fd2e9cd80fbb89e7e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:36:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22560e2afd5b1d584e74db762ba0acf981d480a42f65e49fd2e9cd80fbb89e7e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:36:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22560e2afd5b1d584e74db762ba0acf981d480a42f65e49fd2e9cd80fbb89e7e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:36:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22560e2afd5b1d584e74db762ba0acf981d480a42f65e49fd2e9cd80fbb89e7e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:36:22 compute-0 podman[340647]: 2025-09-30 18:36:22.288193664 +0000 UTC m=+0.027301389 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:36:22 compute-0 podman[340647]: 2025-09-30 18:36:22.386617866 +0000 UTC m=+0.125725651 container init 0a71d3dab0a155aa0806a5545c592ca070d3783a3315344d480a5ff0ac2885eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Sep 30 18:36:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:36:22 compute-0 podman[340647]: 2025-09-30 18:36:22.398627987 +0000 UTC m=+0.137735682 container start 0a71d3dab0a155aa0806a5545c592ca070d3783a3315344d480a5ff0ac2885eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:36:22 compute-0 podman[340647]: 2025-09-30 18:36:22.402186049 +0000 UTC m=+0.141293824 container attach 0a71d3dab0a155aa0806a5545c592ca070d3783a3315344d480a5ff0ac2885eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_montalcini, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Sep 30 18:36:23 compute-0 lvm[340738]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:36:23 compute-0 lvm[340738]: VG ceph_vg0 finished
Sep 30 18:36:23 compute-0 hungry_montalcini[340664]: {}
Sep 30 18:36:23 compute-0 systemd[1]: libpod-0a71d3dab0a155aa0806a5545c592ca070d3783a3315344d480a5ff0ac2885eb.scope: Deactivated successfully.
Sep 30 18:36:23 compute-0 podman[340647]: 2025-09-30 18:36:23.137959735 +0000 UTC m=+0.877067450 container died 0a71d3dab0a155aa0806a5545c592ca070d3783a3315344d480a5ff0ac2885eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_montalcini, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:36:23 compute-0 systemd[1]: libpod-0a71d3dab0a155aa0806a5545c592ca070d3783a3315344d480a5ff0ac2885eb.scope: Consumed 1.168s CPU time.
Sep 30 18:36:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-22560e2afd5b1d584e74db762ba0acf981d480a42f65e49fd2e9cd80fbb89e7e-merged.mount: Deactivated successfully.
Sep 30 18:36:23 compute-0 podman[340647]: 2025-09-30 18:36:23.188465264 +0000 UTC m=+0.927572959 container remove 0a71d3dab0a155aa0806a5545c592ca070d3783a3315344d480a5ff0ac2885eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Sep 30 18:36:23 compute-0 systemd[1]: libpod-conmon-0a71d3dab0a155aa0806a5545c592ca070d3783a3315344d480a5ff0ac2885eb.scope: Deactivated successfully.
Sep 30 18:36:23 compute-0 sudo[340538]: pam_unix(sudo:session): session closed for user root
Sep 30 18:36:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:36:23 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:36:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:36:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:36:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:36:23.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:36:23 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:36:23 compute-0 sudo[340752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:36:23 compute-0 sudo[340752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:36:23 compute-0 sudo[340752]: pam_unix(sudo:session): session closed for user root
Sep 30 18:36:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:36:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:36:23.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:36:23 compute-0 ceph-mon[73755]: pgmap v1734: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 9.1 KiB/s wr, 1 op/s
Sep 30 18:36:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:36:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:36:23 compute-0 nova_compute[265391]: 2025-09-30 18:36:23.647 2 DEBUG nova.compute.manager [req-84c10040-a897-4842-b4be-d1c015a116bb req-1edf0c03-cf27-41b0-ae95-c639a94ccc60 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Received event network-vif-unplugged-23538fed-fc3c-4080-bbea-55e12668af3b external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:36:23 compute-0 nova_compute[265391]: 2025-09-30 18:36:23.648 2 DEBUG oslo_concurrency.lockutils [req-84c10040-a897-4842-b4be-d1c015a116bb req-1edf0c03-cf27-41b0-ae95-c639a94ccc60 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:36:23 compute-0 nova_compute[265391]: 2025-09-30 18:36:23.648 2 DEBUG oslo_concurrency.lockutils [req-84c10040-a897-4842-b4be-d1c015a116bb req-1edf0c03-cf27-41b0-ae95-c639a94ccc60 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:36:23 compute-0 nova_compute[265391]: 2025-09-30 18:36:23.648 2 DEBUG oslo_concurrency.lockutils [req-84c10040-a897-4842-b4be-d1c015a116bb req-1edf0c03-cf27-41b0-ae95-c639a94ccc60 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:36:23 compute-0 nova_compute[265391]: 2025-09-30 18:36:23.648 2 DEBUG nova.compute.manager [req-84c10040-a897-4842-b4be-d1c015a116bb req-1edf0c03-cf27-41b0-ae95-c639a94ccc60 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] No event matching network-vif-unplugged-23538fed-fc3c-4080-bbea-55e12668af3b in dict_keys([('network-vif-plugged', '23538fed-fc3c-4080-bbea-55e12668af3b')]) pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:349
Sep 30 18:36:23 compute-0 nova_compute[265391]: 2025-09-30 18:36:23.648 2 DEBUG nova.compute.manager [req-84c10040-a897-4842-b4be-d1c015a116bb req-1edf0c03-cf27-41b0-ae95-c639a94ccc60 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Received event network-vif-unplugged-23538fed-fc3c-4080-bbea-55e12668af3b for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:36:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:36:23.845Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:36:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:36:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:36:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:36:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:36:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1735: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 9.1 KiB/s wr, 2 op/s
Sep 30 18:36:25 compute-0 nova_compute[265391]: 2025-09-30 18:36:25.241 2 INFO nova.compute.manager [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Took 7.53 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Sep 30 18:36:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:36:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:36:25.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:36:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:36:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:36:25.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:36:25 compute-0 ceph-mon[73755]: pgmap v1735: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 9.1 KiB/s wr, 2 op/s
Sep 30 18:36:25 compute-0 nova_compute[265391]: 2025-09-30 18:36:25.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:25 compute-0 nova_compute[265391]: 2025-09-30 18:36:25.727 2 DEBUG nova.compute.manager [req-f72b79de-0107-4c00-ba55-3afb7f522a08 req-4c25ebe8-7e43-445c-9b6d-c7aa63cdd45d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Received event network-vif-plugged-23538fed-fc3c-4080-bbea-55e12668af3b external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:36:25 compute-0 nova_compute[265391]: 2025-09-30 18:36:25.727 2 DEBUG oslo_concurrency.lockutils [req-f72b79de-0107-4c00-ba55-3afb7f522a08 req-4c25ebe8-7e43-445c-9b6d-c7aa63cdd45d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:36:25 compute-0 nova_compute[265391]: 2025-09-30 18:36:25.727 2 DEBUG oslo_concurrency.lockutils [req-f72b79de-0107-4c00-ba55-3afb7f522a08 req-4c25ebe8-7e43-445c-9b6d-c7aa63cdd45d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:36:25 compute-0 nova_compute[265391]: 2025-09-30 18:36:25.728 2 DEBUG oslo_concurrency.lockutils [req-f72b79de-0107-4c00-ba55-3afb7f522a08 req-4c25ebe8-7e43-445c-9b6d-c7aa63cdd45d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:36:25 compute-0 nova_compute[265391]: 2025-09-30 18:36:25.728 2 DEBUG nova.compute.manager [req-f72b79de-0107-4c00-ba55-3afb7f522a08 req-4c25ebe8-7e43-445c-9b6d-c7aa63cdd45d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Processing event network-vif-plugged-23538fed-fc3c-4080-bbea-55e12668af3b _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:36:25 compute-0 nova_compute[265391]: 2025-09-30 18:36:25.728 2 DEBUG nova.compute.manager [req-f72b79de-0107-4c00-ba55-3afb7f522a08 req-4c25ebe8-7e43-445c-9b6d-c7aa63cdd45d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Received event network-changed-23538fed-fc3c-4080-bbea-55e12668af3b external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:36:25 compute-0 nova_compute[265391]: 2025-09-30 18:36:25.728 2 DEBUG nova.compute.manager [req-f72b79de-0107-4c00-ba55-3afb7f522a08 req-4c25ebe8-7e43-445c-9b6d-c7aa63cdd45d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Refreshing instance network info cache due to event network-changed-23538fed-fc3c-4080-bbea-55e12668af3b. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:36:25 compute-0 nova_compute[265391]: 2025-09-30 18:36:25.728 2 DEBUG oslo_concurrency.lockutils [req-f72b79de-0107-4c00-ba55-3afb7f522a08 req-4c25ebe8-7e43-445c-9b6d-c7aa63cdd45d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-741d9cb1-7a49-4d89-8b1a-78ae947f2c49" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:36:25 compute-0 nova_compute[265391]: 2025-09-30 18:36:25.728 2 DEBUG oslo_concurrency.lockutils [req-f72b79de-0107-4c00-ba55-3afb7f522a08 req-4c25ebe8-7e43-445c-9b6d-c7aa63cdd45d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-741d9cb1-7a49-4d89-8b1a-78ae947f2c49" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:36:25 compute-0 nova_compute[265391]: 2025-09-30 18:36:25.728 2 DEBUG nova.network.neutron [req-f72b79de-0107-4c00-ba55-3afb7f522a08 req-4c25ebe8-7e43-445c-9b6d-c7aa63cdd45d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Refreshing network info cache for port 23538fed-fc3c-4080-bbea-55e12668af3b _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:36:25 compute-0 nova_compute[265391]: 2025-09-30 18:36:25.729 2 DEBUG nova.compute.manager [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:36:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:36:26 compute-0 nova_compute[265391]: 2025-09-30 18:36:26.237 2 WARNING neutronclient.v2_0.client [req-f72b79de-0107-4c00-ba55-3afb7f522a08 req-4c25ebe8-7e43-445c-9b6d-c7aa63cdd45d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:36:26 compute-0 nova_compute[265391]: 2025-09-30 18:36:26.241 2 DEBUG nova.compute.manager [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpufrp_vl0',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='741d9cb1-7a49-4d89-8b1a-78ae947f2c49',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(43ba0fc0-6a86-4b35-b358-2d6cb9bb939c),old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9659
Sep 30 18:36:26 compute-0 nova_compute[265391]: 2025-09-30 18:36:26.244 2 DEBUG nova.objects.instance [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lazy-loading 'migration_context' on Instance uuid 741d9cb1-7a49-4d89-8b1a-78ae947f2c49 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:36:26 compute-0 nova_compute[265391]: 2025-09-30 18:36:26.245 2 DEBUG nova.virt.libvirt.driver [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Starting monitoring of live migration _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11543
Sep 30 18:36:26 compute-0 nova_compute[265391]: 2025-09-30 18:36:26.246 2 DEBUG nova.virt.libvirt.driver [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Sep 30 18:36:26 compute-0 nova_compute[265391]: 2025-09-30 18:36:26.247 2 DEBUG nova.virt.libvirt.driver [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Sep 30 18:36:26 compute-0 nova_compute[265391]: 2025-09-30 18:36:26.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1736: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 8.1 KiB/s wr, 1 op/s
Sep 30 18:36:26 compute-0 podman[340781]: 2025-09-30 18:36:26.530047687 +0000 UTC m=+0.062400109 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Sep 30 18:36:26 compute-0 podman[340783]: 2025-09-30 18:36:26.543498936 +0000 UTC m=+0.075812597 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Sep 30 18:36:26 compute-0 podman[340782]: 2025-09-30 18:36:26.567301653 +0000 UTC m=+0.100610640 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Sep 30 18:36:26 compute-0 nova_compute[265391]: 2025-09-30 18:36:26.749 2 DEBUG nova.virt.libvirt.driver [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Sep 30 18:36:26 compute-0 nova_compute[265391]: 2025-09-30 18:36:26.749 2 DEBUG nova.virt.libvirt.driver [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Sep 30 18:36:26 compute-0 nova_compute[265391]: 2025-09-30 18:36:26.755 2 DEBUG nova.virt.libvirt.vif [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:35:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-1259001093',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-1259001093',id=27,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:35:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c634e1c17ed54907969576a0eb8eff50',ramdisk_id='',reservation_id='r-9g37rry3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteStrategies-1883747907',owner_user_name='tempest-TestExecuteStrategies-1883747907-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:35:42Z,user_data=None,user_id='623ef4a55c9e4fc28bb65e49246b5008',uuid=741d9cb1-7a49-4d89-8b1a-78ae947f2c49,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "23538fed-fc3c-4080-bbea-55e12668af3b", "address": "fa:16:3e:1a:4f:5c", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap23538fed-fc", "ovs_interfaceid": "23538fed-fc3c-4080-bbea-55e12668af3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:36:26 compute-0 nova_compute[265391]: 2025-09-30 18:36:26.755 2 DEBUG nova.network.os_vif_util [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "23538fed-fc3c-4080-bbea-55e12668af3b", "address": "fa:16:3e:1a:4f:5c", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap23538fed-fc", "ovs_interfaceid": "23538fed-fc3c-4080-bbea-55e12668af3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:36:26 compute-0 nova_compute[265391]: 2025-09-30 18:36:26.756 2 DEBUG nova.network.os_vif_util [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:4f:5c,bridge_name='br-int',has_traffic_filtering=True,id=23538fed-fc3c-4080-bbea-55e12668af3b,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23538fed-fc') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:36:26 compute-0 nova_compute[265391]: 2025-09-30 18:36:26.756 2 DEBUG nova.virt.libvirt.migration [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Updating guest XML with vif config: <interface type="ethernet">
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <mac address="fa:16:3e:1a:4f:5c"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <model type="virtio"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <mtu size="1442"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <target dev="tap23538fed-fc"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]: </interface>
Sep 30 18:36:26 compute-0 nova_compute[265391]:  _update_vif_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:534
Sep 30 18:36:26 compute-0 nova_compute[265391]: 2025-09-30 18:36:26.757 2 DEBUG nova.virt.libvirt.migration [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _remove_cpu_shared_set_xml input xml=<domain type="kvm">
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <name>instance-0000001b</name>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <uuid>741d9cb1-7a49-4d89-8b1a-78ae947f2c49</uuid>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteStrategies-server-1259001093</nova:name>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:35:34</nova:creationTime>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:36:26 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:36:26 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:user uuid="623ef4a55c9e4fc28bb65e49246b5008">tempest-TestExecuteStrategies-1883747907-project-admin</nova:user>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:project uuid="c634e1c17ed54907969576a0eb8eff50">tempest-TestExecuteStrategies-1883747907</nova:project>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:port uuid="23538fed-fc3c-4080-bbea-55e12668af3b">
Sep 30 18:36:26 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <system>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <entry name="serial">741d9cb1-7a49-4d89-8b1a-78ae947f2c49</entry>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <entry name="uuid">741d9cb1-7a49-4d89-8b1a-78ae947f2c49</entry>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </system>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <os>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   </os>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <features>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   </features>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/741d9cb1-7a49-4d89-8b1a-78ae947f2c49_disk">
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       </source>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/741d9cb1-7a49-4d89-8b1a-78ae947f2c49_disk.config">
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       </source>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <interface type="ethernet"><mac address="fa:16:3e:1a:4f:5c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap23538fed-fc"/><address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </interface><serial type="pty">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/741d9cb1-7a49-4d89-8b1a-78ae947f2c49/console.log" append="off"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       </target>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/741d9cb1-7a49-4d89-8b1a-78ae947f2c49/console.log" append="off"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </console>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </input>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <video>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </video>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]: </domain>
Sep 30 18:36:26 compute-0 nova_compute[265391]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:241
Sep 30 18:36:26 compute-0 nova_compute[265391]: 2025-09-30 18:36:26.759 2 DEBUG nova.virt.libvirt.migration [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _remove_cpu_shared_set_xml output xml=<domain type="kvm">
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <name>instance-0000001b</name>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <uuid>741d9cb1-7a49-4d89-8b1a-78ae947f2c49</uuid>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteStrategies-server-1259001093</nova:name>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:35:34</nova:creationTime>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:36:26 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:36:26 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:user uuid="623ef4a55c9e4fc28bb65e49246b5008">tempest-TestExecuteStrategies-1883747907-project-admin</nova:user>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:project uuid="c634e1c17ed54907969576a0eb8eff50">tempest-TestExecuteStrategies-1883747907</nova:project>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:port uuid="23538fed-fc3c-4080-bbea-55e12668af3b">
Sep 30 18:36:26 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <system>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <entry name="serial">741d9cb1-7a49-4d89-8b1a-78ae947f2c49</entry>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <entry name="uuid">741d9cb1-7a49-4d89-8b1a-78ae947f2c49</entry>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </system>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <os>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   </os>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <features>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   </features>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/741d9cb1-7a49-4d89-8b1a-78ae947f2c49_disk">
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       </source>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/741d9cb1-7a49-4d89-8b1a-78ae947f2c49_disk.config">
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       </source>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:1a:4f:5c"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target dev="tap23538fed-fc"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/741d9cb1-7a49-4d89-8b1a-78ae947f2c49/console.log" append="off"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       </target>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/741d9cb1-7a49-4d89-8b1a-78ae947f2c49/console.log" append="off"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </console>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </input>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <video>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </video>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]: </domain>
Sep 30 18:36:26 compute-0 nova_compute[265391]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:250
Sep 30 18:36:26 compute-0 nova_compute[265391]: 2025-09-30 18:36:26.759 2 DEBUG nova.virt.libvirt.migration [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _update_pci_xml output xml=<domain type="kvm">
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <name>instance-0000001b</name>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <uuid>741d9cb1-7a49-4d89-8b1a-78ae947f2c49</uuid>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteStrategies-server-1259001093</nova:name>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:35:34</nova:creationTime>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:36:26 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:36:26 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:user uuid="623ef4a55c9e4fc28bb65e49246b5008">tempest-TestExecuteStrategies-1883747907-project-admin</nova:user>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:project uuid="c634e1c17ed54907969576a0eb8eff50">tempest-TestExecuteStrategies-1883747907</nova:project>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <nova:port uuid="23538fed-fc3c-4080-bbea-55e12668af3b">
Sep 30 18:36:26 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <system>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <entry name="serial">741d9cb1-7a49-4d89-8b1a-78ae947f2c49</entry>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <entry name="uuid">741d9cb1-7a49-4d89-8b1a-78ae947f2c49</entry>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </system>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <os>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   </os>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <features>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   </features>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/741d9cb1-7a49-4d89-8b1a-78ae947f2c49_disk">
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       </source>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/741d9cb1-7a49-4d89-8b1a-78ae947f2c49_disk.config">
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       </source>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:1a:4f:5c"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target dev="tap23538fed-fc"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/741d9cb1-7a49-4d89-8b1a-78ae947f2c49/console.log" append="off"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:36:26 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       </target>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/741d9cb1-7a49-4d89-8b1a-78ae947f2c49/console.log" append="off"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </console>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </input>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <video>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </video>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:36:26 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:36:26 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:36:26 compute-0 nova_compute[265391]: </domain>
Sep 30 18:36:26 compute-0 nova_compute[265391]:  _update_pci_dev_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:166
Sep 30 18:36:26 compute-0 nova_compute[265391]: 2025-09-30 18:36:26.759 2 DEBUG nova.virt.libvirt.driver [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] About to invoke the migrate API _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11175
Sep 30 18:36:27 compute-0 nova_compute[265391]: 2025-09-30 18:36:27.252 2 DEBUG nova.virt.libvirt.migration [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Current None elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Sep 30 18:36:27 compute-0 nova_compute[265391]: 2025-09-30 18:36:27.252 2 INFO nova.virt.libvirt.migration [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Increasing downtime to 50 ms after 0 sec elapsed time
Sep 30 18:36:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:36:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:36:27.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:36:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:36:27.318Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:36:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:36:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:36:27.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:36:27 compute-0 nova_compute[265391]: 2025-09-30 18:36:27.435 2 WARNING neutronclient.v2_0.client [req-f72b79de-0107-4c00-ba55-3afb7f522a08 req-4c25ebe8-7e43-445c-9b6d-c7aa63cdd45d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:36:27 compute-0 ceph-mon[73755]: pgmap v1736: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 8.1 KiB/s wr, 1 op/s
Sep 30 18:36:27 compute-0 nova_compute[265391]: 2025-09-30 18:36:27.596 2 DEBUG nova.network.neutron [req-f72b79de-0107-4c00-ba55-3afb7f522a08 req-4c25ebe8-7e43-445c-9b6d-c7aa63cdd45d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Updated VIF entry in instance network info cache for port 23538fed-fc3c-4080-bbea-55e12668af3b. _build_network_info_model /usr/lib/python3.12/site-packages/nova/network/neutron.py:3542
Sep 30 18:36:27 compute-0 nova_compute[265391]: 2025-09-30 18:36:27.597 2 DEBUG nova.network.neutron [req-f72b79de-0107-4c00-ba55-3afb7f522a08 req-4c25ebe8-7e43-445c-9b6d-c7aa63cdd45d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Updating instance_info_cache with network_info: [{"id": "23538fed-fc3c-4080-bbea-55e12668af3b", "address": "fa:16:3e:1a:4f:5c", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23538fed-fc", "ovs_interfaceid": "23538fed-fc3c-4080-bbea-55e12668af3b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:36:28 compute-0 nova_compute[265391]: 2025-09-30 18:36:28.103 2 DEBUG oslo_concurrency.lockutils [req-f72b79de-0107-4c00-ba55-3afb7f522a08 req-4c25ebe8-7e43-445c-9b6d-c7aa63cdd45d 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-741d9cb1-7a49-4d89-8b1a-78ae947f2c49" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:36:28 compute-0 nova_compute[265391]: 2025-09-30 18:36:28.268 2 INFO nova.virt.libvirt.driver [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Migration running for 1 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Sep 30 18:36:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1737: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 8.1 KiB/s wr, 1 op/s
Sep 30 18:36:28 compute-0 nova_compute[265391]: 2025-09-30 18:36:28.771 2 DEBUG nova.virt.libvirt.migration [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Sep 30 18:36:28 compute-0 nova_compute[265391]: 2025-09-30 18:36:28.771 2 DEBUG nova.virt.libvirt.migration [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Downtime does not need to change update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:671
Sep 30 18:36:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:36:28] "GET /metrics HTTP/1.1" 200 46722 "" "Prometheus/2.51.0"
Sep 30 18:36:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:36:28] "GET /metrics HTTP/1.1" 200 46722 "" "Prometheus/2.51.0"
Sep 30 18:36:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:36:28.864Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:36:28 compute-0 kernel: tap23538fed-fc (unregistering): left promiscuous mode
Sep 30 18:36:28 compute-0 NetworkManager[45059]: <info>  [1759257388.9274] device (tap23538fed-fc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 18:36:28 compute-0 nova_compute[265391]: 2025-09-30 18:36:28.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:28 compute-0 ovn_controller[156242]: 2025-09-30T18:36:28Z|00232|binding|INFO|Releasing lport 23538fed-fc3c-4080-bbea-55e12668af3b from this chassis (sb_readonly=0)
Sep 30 18:36:28 compute-0 ovn_controller[156242]: 2025-09-30T18:36:28Z|00233|binding|INFO|Setting lport 23538fed-fc3c-4080-bbea-55e12668af3b down in Southbound
Sep 30 18:36:28 compute-0 ovn_controller[156242]: 2025-09-30T18:36:28Z|00234|binding|INFO|Removing iface tap23538fed-fc ovn-installed in OVS
Sep 30 18:36:28 compute-0 nova_compute[265391]: 2025-09-30 18:36:28.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:28 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:28.944 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:4f:5c 10.100.0.10'], port_security=['fa:16:3e:1a:4f:5c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '81ab3fff-d6d4-4262-9f24-1b212876e52c'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '741d9cb1-7a49-4d89-8b1a-78ae947f2c49', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6901f664-336b-42d2-bbf7-58951befc8d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c634e1c17ed54907969576a0eb8eff50', 'neutron:revision_number': '10', 'neutron:security_group_ids': '11cf84c1-9641-409c-b5f5-6c5fe3a8afe5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c24d9de-651b-4bf8-842a-1286ab88b11d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=23538fed-fc3c-4080-bbea-55e12668af3b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:36:28 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:28.945 166158 INFO neutron.agent.ovn.metadata.agent [-] Port 23538fed-fc3c-4080-bbea-55e12668af3b in datapath 6901f664-336b-42d2-bbf7-58951befc8d1 unbound from our chassis
Sep 30 18:36:28 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:28.947 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6901f664-336b-42d2-bbf7-58951befc8d1
Sep 30 18:36:28 compute-0 nova_compute[265391]: 2025-09-30 18:36:28.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:28 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:28.971 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[6da3b68b-1de9-402f-be6c-fd1937dfdbb1]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:36:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:36:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:36:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:36:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:36:29 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Sep 30 18:36:29 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d0000001b.scope: Consumed 15.469s CPU time.
Sep 30 18:36:29 compute-0 systemd-machined[219917]: Machine qemu-20-instance-0000001b terminated.
Sep 30 18:36:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:29.020 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[fa569134-477f-483a-9339-d364cdf03abc]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:36:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:29.027 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[4b6e9eba-8fc9-47a0-9fbf-67807f0cc85c]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:36:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:29.057 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[ef8d7389-eee8-43ad-91cf-a51d1dc6a31f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:36:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:29.075 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[feb8636f-d5d4-4e8a-8424-d9e2fbf5d6c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6901f664-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:41:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591200, 'reachable_time': 17513, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340864, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:36:29 compute-0 virtqemud[265263]: Unable to get XATTR trusted.libvirt.security.ref_selinux on 741d9cb1-7a49-4d89-8b1a-78ae947f2c49_disk: No such file or directory
Sep 30 18:36:29 compute-0 virtqemud[265263]: Unable to get XATTR trusted.libvirt.security.ref_dac on 741d9cb1-7a49-4d89-8b1a-78ae947f2c49_disk: No such file or directory
Sep 30 18:36:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:29.095 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[e5329d13-1c89-4c3c-a4a6-65bd8ab5a7c1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6901f664-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591217, 'tstamp': 591217}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340865, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6901f664-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591220, 'tstamp': 591220}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340865, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:36:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:29.098 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6901f664-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:29.108 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6901f664-30, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:36:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:29.108 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:36:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:29.108 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6901f664-30, col_values=(('external_ids', {'iface-id': '5b6cbf18-1826-41d0-920f-e9db4f1a1832'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:36:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:29.109 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:36:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:29.109 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[3566a6e7-ebab-4fa2-9976-9345f5838295]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-6901f664-336b-42d2-bbf7-58951befc8d1\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 6901f664-336b-42d2-bbf7-58951befc8d1\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.114 2 DEBUG nova.virt.libvirt.driver [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Migrate API has completed _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11182
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.114 2 DEBUG nova.virt.libvirt.driver [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Migration operation thread has finished _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11230
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.115 2 DEBUG nova.virt.libvirt.driver [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Migration operation thread notification thread_finished /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11533
Sep 30 18:36:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:36:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:36:29.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.273 2 DEBUG nova.virt.libvirt.guest [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid '741d9cb1-7a49-4d89-8b1a-78ae947f2c49' (instance-0000001b) get_job_info /usr/lib/python3.12/site-packages/nova/virt/libvirt/guest.py:687
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.273 2 INFO nova.virt.libvirt.driver [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Migration operation has completed
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.274 2 INFO nova.compute.manager [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] _post_live_migration() is started..
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.285 2 WARNING neutronclient.v2_0.client [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.286 2 WARNING neutronclient.v2_0.client [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:36:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:36:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:36:29.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:36:29 compute-0 ceph-mon[73755]: pgmap v1737: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 8.1 KiB/s wr, 1 op/s
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.577 2 DEBUG nova.compute.manager [req-ee5b3a55-50f6-4ac5-9d7f-1000dd4e290d req-629effe8-62b5-4b28-962a-9d423c029dc7 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Received event network-vif-unplugged-23538fed-fc3c-4080-bbea-55e12668af3b external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.577 2 DEBUG oslo_concurrency.lockutils [req-ee5b3a55-50f6-4ac5-9d7f-1000dd4e290d req-629effe8-62b5-4b28-962a-9d423c029dc7 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.578 2 DEBUG oslo_concurrency.lockutils [req-ee5b3a55-50f6-4ac5-9d7f-1000dd4e290d req-629effe8-62b5-4b28-962a-9d423c029dc7 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.578 2 DEBUG oslo_concurrency.lockutils [req-ee5b3a55-50f6-4ac5-9d7f-1000dd4e290d req-629effe8-62b5-4b28-962a-9d423c029dc7 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.578 2 DEBUG nova.compute.manager [req-ee5b3a55-50f6-4ac5-9d7f-1000dd4e290d req-629effe8-62b5-4b28-962a-9d423c029dc7 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] No waiting events found dispatching network-vif-unplugged-23538fed-fc3c-4080-bbea-55e12668af3b pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.578 2 DEBUG nova.compute.manager [req-ee5b3a55-50f6-4ac5-9d7f-1000dd4e290d req-629effe8-62b5-4b28-962a-9d423c029dc7 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Received event network-vif-unplugged-23538fed-fc3c-4080-bbea-55e12668af3b for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:36:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:29.740 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:36:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:29.741 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:29.743 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:36:29 compute-0 podman[276673]: time="2025-09-30T18:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:36:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:36:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10758 "" "Go-http-client/1.1"
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.827 2 DEBUG nova.network.neutron [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Activated binding for port 23538fed-fc3c-4080-bbea-55e12668af3b and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.12/site-packages/nova/network/neutron.py:3241
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.828 2 DEBUG nova.compute.manager [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "23538fed-fc3c-4080-bbea-55e12668af3b", "address": "fa:16:3e:1a:4f:5c", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23538fed-fc", "ovs_interfaceid": "23538fed-fc3c-4080-bbea-55e12668af3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10059
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.830 2 DEBUG nova.virt.libvirt.vif [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:35:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-1259001093',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-1259001093',id=27,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:35:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c634e1c17ed54907969576a0eb8eff50',ramdisk_id='',reservation_id='r-9g37rry3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteStrategies-1883747907',owner_user_name='tempest-TestExecuteStrategies-1883747907-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:36:08Z,user_data=None,user_id='623ef4a55c9e4fc28bb65e49246b5008',uuid=741d9cb1-7a49-4d89-8b1a-78ae947f2c49,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "23538fed-fc3c-4080-bbea-55e12668af3b", "address": "fa:16:3e:1a:4f:5c", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23538fed-fc", "ovs_interfaceid": "23538fed-fc3c-4080-bbea-55e12668af3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.831 2 DEBUG nova.network.os_vif_util [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "23538fed-fc3c-4080-bbea-55e12668af3b", "address": "fa:16:3e:1a:4f:5c", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23538fed-fc", "ovs_interfaceid": "23538fed-fc3c-4080-bbea-55e12668af3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.832 2 DEBUG nova.network.os_vif_util [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:4f:5c,bridge_name='br-int',has_traffic_filtering=True,id=23538fed-fc3c-4080-bbea-55e12668af3b,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23538fed-fc') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.833 2 DEBUG os_vif [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:4f:5c,bridge_name='br-int',has_traffic_filtering=True,id=23538fed-fc3c-4080-bbea-55e12668af3b,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23538fed-fc') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.836 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap23538fed-fc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.842 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=9961ea65-e38d-4d4d-9dc7-d397e1b9882e) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.846 2 INFO os_vif [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:4f:5c,bridge_name='br-int',has_traffic_filtering=True,id=23538fed-fc3c-4080-bbea-55e12668af3b,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23538fed-fc')
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.847 2 DEBUG oslo_concurrency.lockutils [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.847 2 DEBUG oslo_concurrency.lockutils [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.847 2 DEBUG oslo_concurrency.lockutils [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.848 2 DEBUG nova.compute.manager [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10082
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.848 2 INFO nova.virt.libvirt.driver [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Deleting instance files /var/lib/nova/instances/741d9cb1-7a49-4d89-8b1a-78ae947f2c49_del
Sep 30 18:36:29 compute-0 nova_compute[265391]: 2025-09-30 18:36:29.849 2 INFO nova.virt.libvirt.driver [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Deletion of /var/lib/nova/instances/741d9cb1-7a49-4d89-8b1a-78ae947f2c49_del complete
Sep 30 18:36:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1738: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 5.0 KiB/s rd, 10 KiB/s wr, 7 op/s
Sep 30 18:36:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:36:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:36:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:36:31.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:36:31 compute-0 nova_compute[265391]: 2025-09-30 18:36:31.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:36:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:36:31.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:36:31 compute-0 openstack_network_exporter[279566]: ERROR   18:36:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:36:31 compute-0 openstack_network_exporter[279566]: ERROR   18:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:36:31 compute-0 openstack_network_exporter[279566]: ERROR   18:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:36:31 compute-0 openstack_network_exporter[279566]: ERROR   18:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:36:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:36:31 compute-0 openstack_network_exporter[279566]: ERROR   18:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:36:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:36:31 compute-0 ceph-mon[73755]: pgmap v1738: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 5.0 KiB/s rd, 10 KiB/s wr, 7 op/s
Sep 30 18:36:31 compute-0 nova_compute[265391]: 2025-09-30 18:36:31.683 2 DEBUG nova.compute.manager [req-1241dd02-8753-4b7d-bb4f-f50c003c8ed2 req-6a7dcdee-0c00-4b5a-bc62-ecf50826bafc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Received event network-vif-plugged-23538fed-fc3c-4080-bbea-55e12668af3b external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:36:31 compute-0 nova_compute[265391]: 2025-09-30 18:36:31.683 2 DEBUG oslo_concurrency.lockutils [req-1241dd02-8753-4b7d-bb4f-f50c003c8ed2 req-6a7dcdee-0c00-4b5a-bc62-ecf50826bafc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:36:31 compute-0 nova_compute[265391]: 2025-09-30 18:36:31.683 2 DEBUG oslo_concurrency.lockutils [req-1241dd02-8753-4b7d-bb4f-f50c003c8ed2 req-6a7dcdee-0c00-4b5a-bc62-ecf50826bafc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:36:31 compute-0 nova_compute[265391]: 2025-09-30 18:36:31.683 2 DEBUG oslo_concurrency.lockutils [req-1241dd02-8753-4b7d-bb4f-f50c003c8ed2 req-6a7dcdee-0c00-4b5a-bc62-ecf50826bafc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:36:31 compute-0 nova_compute[265391]: 2025-09-30 18:36:31.684 2 DEBUG nova.compute.manager [req-1241dd02-8753-4b7d-bb4f-f50c003c8ed2 req-6a7dcdee-0c00-4b5a-bc62-ecf50826bafc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] No waiting events found dispatching network-vif-plugged-23538fed-fc3c-4080-bbea-55e12668af3b pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:36:31 compute-0 nova_compute[265391]: 2025-09-30 18:36:31.684 2 WARNING nova.compute.manager [req-1241dd02-8753-4b7d-bb4f-f50c003c8ed2 req-6a7dcdee-0c00-4b5a-bc62-ecf50826bafc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Received unexpected event network-vif-plugged-23538fed-fc3c-4080-bbea-55e12668af3b for instance with vm_state active and task_state migrating.
Sep 30 18:36:31 compute-0 nova_compute[265391]: 2025-09-30 18:36:31.684 2 DEBUG nova.compute.manager [req-1241dd02-8753-4b7d-bb4f-f50c003c8ed2 req-6a7dcdee-0c00-4b5a-bc62-ecf50826bafc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Received event network-vif-unplugged-23538fed-fc3c-4080-bbea-55e12668af3b external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:36:31 compute-0 nova_compute[265391]: 2025-09-30 18:36:31.684 2 DEBUG oslo_concurrency.lockutils [req-1241dd02-8753-4b7d-bb4f-f50c003c8ed2 req-6a7dcdee-0c00-4b5a-bc62-ecf50826bafc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:36:31 compute-0 nova_compute[265391]: 2025-09-30 18:36:31.684 2 DEBUG oslo_concurrency.lockutils [req-1241dd02-8753-4b7d-bb4f-f50c003c8ed2 req-6a7dcdee-0c00-4b5a-bc62-ecf50826bafc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:36:31 compute-0 nova_compute[265391]: 2025-09-30 18:36:31.685 2 DEBUG oslo_concurrency.lockutils [req-1241dd02-8753-4b7d-bb4f-f50c003c8ed2 req-6a7dcdee-0c00-4b5a-bc62-ecf50826bafc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:36:31 compute-0 nova_compute[265391]: 2025-09-30 18:36:31.685 2 DEBUG nova.compute.manager [req-1241dd02-8753-4b7d-bb4f-f50c003c8ed2 req-6a7dcdee-0c00-4b5a-bc62-ecf50826bafc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] No waiting events found dispatching network-vif-unplugged-23538fed-fc3c-4080-bbea-55e12668af3b pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:36:31 compute-0 nova_compute[265391]: 2025-09-30 18:36:31.685 2 DEBUG nova.compute.manager [req-1241dd02-8753-4b7d-bb4f-f50c003c8ed2 req-6a7dcdee-0c00-4b5a-bc62-ecf50826bafc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Received event network-vif-unplugged-23538fed-fc3c-4080-bbea-55e12668af3b for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:36:31 compute-0 nova_compute[265391]: 2025-09-30 18:36:31.685 2 DEBUG nova.compute.manager [req-1241dd02-8753-4b7d-bb4f-f50c003c8ed2 req-6a7dcdee-0c00-4b5a-bc62-ecf50826bafc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Received event network-vif-plugged-23538fed-fc3c-4080-bbea-55e12668af3b external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:36:31 compute-0 nova_compute[265391]: 2025-09-30 18:36:31.685 2 DEBUG oslo_concurrency.lockutils [req-1241dd02-8753-4b7d-bb4f-f50c003c8ed2 req-6a7dcdee-0c00-4b5a-bc62-ecf50826bafc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:36:31 compute-0 nova_compute[265391]: 2025-09-30 18:36:31.685 2 DEBUG oslo_concurrency.lockutils [req-1241dd02-8753-4b7d-bb4f-f50c003c8ed2 req-6a7dcdee-0c00-4b5a-bc62-ecf50826bafc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:36:31 compute-0 nova_compute[265391]: 2025-09-30 18:36:31.686 2 DEBUG oslo_concurrency.lockutils [req-1241dd02-8753-4b7d-bb4f-f50c003c8ed2 req-6a7dcdee-0c00-4b5a-bc62-ecf50826bafc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:36:31 compute-0 nova_compute[265391]: 2025-09-30 18:36:31.686 2 DEBUG nova.compute.manager [req-1241dd02-8753-4b7d-bb4f-f50c003c8ed2 req-6a7dcdee-0c00-4b5a-bc62-ecf50826bafc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] No waiting events found dispatching network-vif-plugged-23538fed-fc3c-4080-bbea-55e12668af3b pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:36:31 compute-0 nova_compute[265391]: 2025-09-30 18:36:31.686 2 WARNING nova.compute.manager [req-1241dd02-8753-4b7d-bb4f-f50c003c8ed2 req-6a7dcdee-0c00-4b5a-bc62-ecf50826bafc 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Received unexpected event network-vif-plugged-23538fed-fc3c-4080-bbea-55e12668af3b for instance with vm_state active and task_state migrating.
Sep 30 18:36:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1739: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 4.7 KiB/s rd, 2.3 KiB/s wr, 5 op/s
Sep 30 18:36:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:36:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:36:33.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:36:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:36:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:36:33.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:36:33 compute-0 ceph-mon[73755]: pgmap v1739: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 4.7 KiB/s rd, 2.3 KiB/s wr, 5 op/s
Sep 30 18:36:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:36:33.846Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:36:33 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 18:36:33 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3601.3 total, 600.0 interval
                                           Cumulative writes: 9880 writes, 44K keys, 9877 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 9880 writes, 9877 syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1504 writes, 6819 keys, 1504 commit groups, 1.0 writes per commit group, ingest: 10.66 MB, 0.02 MB/s
                                           Interval WAL: 1504 writes, 1504 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     99.0      0.65              0.19        28    0.023       0      0       0.0       0.0
                                             L6      1/0   13.95 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.8    198.1    170.2      1.79              0.85        27    0.066    159K    14K       0.0       0.0
                                            Sum      1/0   13.95 MB   0.0      0.3     0.1      0.3       0.4      0.1       0.0   5.8    145.6    151.4      2.44              1.04        55    0.044    159K    14K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.0    180.8    185.0      0.46              0.25        12    0.038     43K   3076       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0    198.1    170.2      1.79              0.85        27    0.066    159K    14K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    157.9      0.40              0.19        27    0.015       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.24              0.00         1    0.242       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3601.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.062, interval 0.012
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.36 GB write, 0.10 MB/s write, 0.35 GB read, 0.10 MB/s read, 2.4 seconds
                                           Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e76de37350#2 capacity: 304.00 MB usage: 34.38 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.00025 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1952,33.23 MB,10.9324%) FilterBlock(56,446.05 KB,0.143287%) IndexBlock(56,723.97 KB,0.232566%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
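(Aside: the "Block cache entry stats" line in the dump above prints each entry category both in bytes and as a portion of the cache. A minimal sketch, using only the 304.00 MB capacity and per-category sizes copied from that line, that re-derives the printed percentages; nothing here queries a live cache.)

    # Re-derive the "portion" percentages from the block cache entry stats line.
    # Sizes are taken verbatim from the log dump above.
    capacity_mb = 304.00

    entries = {
        "DataBlock": 33.23,              # MB
        "FilterBlock": 446.05 / 1024,    # KB -> MB
        "IndexBlock": 723.97 / 1024,     # KB -> MB
    }

    for name, size_mb in entries.items():
        portion = 100.0 * size_mb / capacity_mb
        print(f"{name}: {size_mb:.2f} MB = {portion:.4f}% of {capacity_mb:.0f} MB")
        # DataBlock -> ~10.93%, FilterBlock -> ~0.1433%, IndexBlock -> ~0.2326%,
        # matching the percentages RocksDB printed.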
Sep 30 18:36:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:36:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:36:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:36:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:36:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1740: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 5.2 KiB/s rd, 2.3 KiB/s wr, 6 op/s
Sep 30 18:36:34 compute-0 ceph-mon[73755]: pgmap v1740: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 5.2 KiB/s rd, 2.3 KiB/s wr, 6 op/s
Sep 30 18:36:34 compute-0 nova_compute[265391]: 2025-09-30 18:36:34.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:36:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:36:35.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:36:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:36:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:36:35.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
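(Aside: the radosgw "beast:" access-log lines above follow a fixed field layout. A small sketch, with the layout assumed from these samples rather than from radosgw documentation, that pulls out client IP, request line, status and latency.)

    import re

    # Field layout assumed from the beast access-log samples in this capture:
    # beast: <ptr>: <ip> - <user> [<ts>] "<request>" <status> <bytes> - - - latency=<sec>s
    BEAST_RE = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* '
        r'latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f25f11485d0: 192.168.122.101 - anonymous '
            '[30/Sep/2025:18:36:35.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')

    m = BEAST_RE.search(line)
    if m:
        print(m.group("ip"), m.group("request"), m.group("status"), m.group("latency"))
        # -> 192.168.122.101 HEAD / HTTP/1.0 200 0.000000000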
Sep 30 18:36:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:36:36 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Sep 30 18:36:36 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:36:36.080708) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 18:36:36 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Sep 30 18:36:36 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257396080759, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 386, "num_deletes": 251, "total_data_size": 324989, "memory_usage": 331424, "flush_reason": "Manual Compaction"}
Sep 30 18:36:36 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Sep 30 18:36:36 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257396084617, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 305175, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44289, "largest_seqno": 44674, "table_properties": {"data_size": 302795, "index_size": 480, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6329, "raw_average_key_size": 20, "raw_value_size": 298082, "raw_average_value_size": 964, "num_data_blocks": 20, "num_entries": 309, "num_filter_entries": 309, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759257381, "oldest_key_time": 1759257381, "file_creation_time": 1759257396, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:36:36 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 3956 microseconds, and 2001 cpu microseconds.
Sep 30 18:36:36 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:36:36 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:36:36.084664) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 305175 bytes OK
Sep 30 18:36:36 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:36:36.084685) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Sep 30 18:36:36 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:36:36.086636) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Sep 30 18:36:36 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:36:36.086655) EVENT_LOG_v1 {"time_micros": 1759257396086649, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 18:36:36 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:36:36.086675) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 18:36:36 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 322516, prev total WAL file size 322516, number of live WAL files 2.
Sep 30 18:36:36 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:36:36 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:36:36.087143) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353034' seq:72057594037927935, type:22 .. '6D6772737461740031373536' seq:0, type:0; will stop at (end)
Sep 30 18:36:36 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 18:36:36 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(298KB)], [98(13MB)]
Sep 30 18:36:36 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257396087179, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 14933556, "oldest_snapshot_seqno": -1}
Sep 30 18:36:36 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 6947 keys, 11092048 bytes, temperature: kUnknown
Sep 30 18:36:36 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257396143285, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 11092048, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11050575, "index_size": 22993, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17413, "raw_key_size": 181909, "raw_average_key_size": 26, "raw_value_size": 10930862, "raw_average_value_size": 1573, "num_data_blocks": 899, "num_entries": 6947, "num_filter_entries": 6947, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759257396, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:36:36 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:36:36 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:36:36.143598) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 11092048 bytes
Sep 30 18:36:36 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:36:36.145210) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 265.7 rd, 197.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 14.0 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(85.3) write-amplify(36.3) OK, records in: 7459, records dropped: 512 output_compression: NoCompression
Sep 30 18:36:36 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:36:36.145232) EVENT_LOG_v1 {"time_micros": 1759257396145222, "job": 58, "event": "compaction_finished", "compaction_time_micros": 56211, "compaction_time_cpu_micros": 34119, "output_level": 6, "num_output_files": 1, "total_output_size": 11092048, "num_input_records": 7459, "num_output_records": 6947, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
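(Aside: the JOB 58 summary above prints its own amplification figures. A short sketch, using only the byte counts from the event-log lines for tables #100, #98 and #101, that reproduces write-amplify(36.3) and read-write-amplify(85.3).)

    # Byte counts copied from the JOB 58 event log lines above.
    l0_in = 305_175                  # flushed L0 input, table #100
    total_in = 14_933_556            # "input_data_size" for job 58
    l6_in = total_in - l0_in         # existing L6 input, table #98
    out = 11_092_048                 # compaction output, table #101

    write_amp = out / l0_in
    read_write_amp = (l0_in + l6_in + out) / l0_in
    print(f"write-amplify      ~ {write_amp:.1f}")       # ~36.3
    print(f"read-write-amplify ~ {read_write_amp:.1f}")  # ~85.3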
Sep 30 18:36:36 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:36:36 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257396145475, "job": 58, "event": "table_file_deletion", "file_number": 100}
Sep 30 18:36:36 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:36:36 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257396148899, "job": 58, "event": "table_file_deletion", "file_number": 98}
Sep 30 18:36:36 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:36:36.087070) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:36:36 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:36:36.148981) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:36:36 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:36:36.148987) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:36:36 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:36:36.148994) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:36:36 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:36:36.148995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:36:36 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:36:36.148997) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:36:36 compute-0 nova_compute[265391]: 2025-09-30 18:36:36.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1741: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 2.3 KiB/s wr, 6 op/s
Sep 30 18:36:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:36:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1701818751' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:36:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:36:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1701818751' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:36:37 compute-0 ceph-mon[73755]: pgmap v1741: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 2.3 KiB/s wr, 6 op/s
Sep 30 18:36:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1701818751' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:36:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1701818751' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:36:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:36:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:36:37.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:36:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:36:37.320Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:36:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:36:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:36:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:36:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:36:37.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:36:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:36:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:36:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:36:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:36:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:36:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:36:37 compute-0 podman[340889]: 2025-09-30 18:36:37.528056885 +0000 UTC m=+0.064781150 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.openshift.expose-services=, distribution-scope=public, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_id=edpm, build-date=2025-08-20T13:12:41, name=ubi9-minimal, vcs-type=git)
Sep 30 18:36:37 compute-0 podman[340887]: 2025-09-30 18:36:37.557779336 +0000 UTC m=+0.101137983 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930)
Sep 30 18:36:37 compute-0 podman[340888]: 2025-09-30 18:36:37.558367151 +0000 UTC m=+0.089556693 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true)
Sep 30 18:36:37 compute-0 sudo[340944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:36:37 compute-0 sudo[340944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:36:37 compute-0 sudo[340944]: pam_unix(sudo:session): session closed for user root
Sep 30 18:36:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:36:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1742: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 2.3 KiB/s wr, 6 op/s
Sep 30 18:36:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:36:38] "GET /metrics HTTP/1.1" 200 46726 "" "Prometheus/2.51.0"
Sep 30 18:36:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:36:38] "GET /metrics HTTP/1.1" 200 46726 "" "Prometheus/2.51.0"
Sep 30 18:36:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:36:38.865Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:36:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:36:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:36:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:36:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:36:39 compute-0 ceph-mon[73755]: pgmap v1742: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 2.3 KiB/s wr, 6 op/s
Sep 30 18:36:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:36:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:36:39.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:36:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:36:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:36:39.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:36:39 compute-0 nova_compute[265391]: 2025-09-30 18:36:39.389 2 DEBUG oslo_concurrency.lockutils [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:36:39 compute-0 nova_compute[265391]: 2025-09-30 18:36:39.389 2 DEBUG oslo_concurrency.lockutils [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:36:39 compute-0 nova_compute[265391]: 2025-09-30 18:36:39.390 2 DEBUG oslo_concurrency.lockutils [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "741d9cb1-7a49-4d89-8b1a-78ae947f2c49-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:36:39 compute-0 nova_compute[265391]: 2025-09-30 18:36:39.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:39 compute-0 nova_compute[265391]: 2025-09-30 18:36:39.900 2 DEBUG oslo_concurrency.lockutils [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:36:39 compute-0 nova_compute[265391]: 2025-09-30 18:36:39.900 2 DEBUG oslo_concurrency.lockutils [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:36:39 compute-0 nova_compute[265391]: 2025-09-30 18:36:39.901 2 DEBUG oslo_concurrency.lockutils [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:36:39 compute-0 nova_compute[265391]: 2025-09-30 18:36:39.901 2 DEBUG nova.compute.resource_tracker [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:36:39 compute-0 nova_compute[265391]: 2025-09-30 18:36:39.901 2 DEBUG oslo_concurrency.processutils [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:36:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1743: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 5.2 KiB/s rd, 2.3 KiB/s wr, 6 op/s
Sep 30 18:36:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:36:40 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/940849169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:36:40 compute-0 nova_compute[265391]: 2025-09-30 18:36:40.368 2 DEBUG oslo_concurrency.processutils [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
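(Aside: the resource tracker audit above shells out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf". A minimal sketch of that call and of reading the cluster totals back; the "stats"/"total_avail_bytes" keys are assumed from typical ceph df JSON output, not taken from this log, so verify them against your Ceph release.)

    import json
    import subprocess

    # Same command nova_compute logs above, run as client.openstack.
    cmd = [
        "ceph", "df", "--format=json",
        "--id", "openstack",
        "--conf", "/etc/ceph/ceph.conf",
    ]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    stats = json.loads(out).get("stats", {})  # assumed key names

    avail_gib = stats.get("total_avail_bytes", 0) / 1024 ** 3
    total_gib = stats.get("total_bytes", 0) / 1024 ** 3
    print(f"ceph reports {avail_gib:.1f} GiB free of {total_gib:.1f} GiB")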
Sep 30 18:36:40 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/940849169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:36:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:36:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:36:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:36:41.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:36:41 compute-0 nova_compute[265391]: 2025-09-30 18:36:41.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:36:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:36:41.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:36:41 compute-0 ceph-mon[73755]: pgmap v1743: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 5.2 KiB/s rd, 2.3 KiB/s wr, 6 op/s
Sep 30 18:36:41 compute-0 nova_compute[265391]: 2025-09-30 18:36:41.410 2 DEBUG nova.virt.libvirt.driver [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:36:41 compute-0 nova_compute[265391]: 2025-09-30 18:36:41.410 2 DEBUG nova.virt.libvirt.driver [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:36:41 compute-0 nova_compute[265391]: 2025-09-30 18:36:41.553 2 WARNING nova.virt.libvirt.driver [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:36:41 compute-0 nova_compute[265391]: 2025-09-30 18:36:41.554 2 DEBUG oslo_concurrency.processutils [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:36:41 compute-0 nova_compute[265391]: 2025-09-30 18:36:41.587 2 DEBUG oslo_concurrency.processutils [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.033s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:36:41 compute-0 nova_compute[265391]: 2025-09-30 18:36:41.588 2 DEBUG nova.compute.resource_tracker [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4167MB free_disk=39.9011344909668GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:36:41 compute-0 nova_compute[265391]: 2025-09-30 18:36:41.588 2 DEBUG oslo_concurrency.lockutils [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:36:41 compute-0 nova_compute[265391]: 2025-09-30 18:36:41.589 2 DEBUG oslo_concurrency.lockutils [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:36:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1744: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 682 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:36:42 compute-0 nova_compute[265391]: 2025-09-30 18:36:42.610 2 DEBUG nova.compute.resource_tracker [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Migration for instance 741d9cb1-7a49-4d89-8b1a-78ae947f2c49 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Sep 30 18:36:43 compute-0 nova_compute[265391]: 2025-09-30 18:36:43.119 2 DEBUG nova.compute.resource_tracker [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1596
Sep 30 18:36:43 compute-0 nova_compute[265391]: 2025-09-30 18:36:43.119 2 INFO nova.compute.resource_tracker [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Updating resource usage from migration 3b051e9f-cf53-41ea-a9f1-d01148892c63
Sep 30 18:36:43 compute-0 nova_compute[265391]: 2025-09-30 18:36:43.169 2 DEBUG nova.compute.resource_tracker [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Migration 43ba0fc0-6a86-4b35-b358-2d6cb9bb939c is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Sep 30 18:36:43 compute-0 nova_compute[265391]: 2025-09-30 18:36:43.169 2 DEBUG nova.compute.resource_tracker [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Migration 3b051e9f-cf53-41ea-a9f1-d01148892c63 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Sep 30 18:36:43 compute-0 nova_compute[265391]: 2025-09-30 18:36:43.169 2 DEBUG nova.compute.resource_tracker [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:36:43 compute-0 nova_compute[265391]: 2025-09-30 18:36:43.170 2 DEBUG nova.compute.resource_tracker [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=39GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:36:41 up  1:40,  0 user,  load average: 1.15, 0.95, 0.90\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_migrating': '1', 'num_os_type_None': '1', 'num_proj_c634e1c17ed54907969576a0eb8eff50': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:36:43 compute-0 nova_compute[265391]: 2025-09-30 18:36:43.215 2 DEBUG oslo_concurrency.processutils [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:36:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:36:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:36:43.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:36:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:36:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:36:43.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:36:43 compute-0 ceph-mon[73755]: pgmap v1744: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 682 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:36:43 compute-0 nova_compute[265391]: 2025-09-30 18:36:43.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:36:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:36:43 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1619482483' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:36:43 compute-0 nova_compute[265391]: 2025-09-30 18:36:43.754 2 DEBUG oslo_concurrency.processutils [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:36:43 compute-0 nova_compute[265391]: 2025-09-30 18:36:43.761 2 DEBUG nova.compute.provider_tree [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:36:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:36:43.847Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:36:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:36:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:36:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:36:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:36:44 compute-0 nova_compute[265391]: 2025-09-30 18:36:44.271 2 DEBUG nova.scheduler.client.report [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
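(Aside: a sketch of the schedulable capacity placement would derive from the inventory nova just reported, assuming placement's usual rule capacity = (total - reserved) * allocation_ratio; the figures are copied from the inventory line above, not queried live.)

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 39,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: schedulable capacity ~ {capacity:g}")
        # VCPU -> 32, MEMORY_MB -> 7167, DISK_GB -> 34.2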
Sep 30 18:36:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1745: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 8.1 KiB/s wr, 2 op/s
Sep 30 18:36:44 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1619482483' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:36:44 compute-0 nova_compute[265391]: 2025-09-30 18:36:44.780 2 DEBUG nova.compute.resource_tracker [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:36:44 compute-0 nova_compute[265391]: 2025-09-30 18:36:44.780 2 DEBUG oslo_concurrency.lockutils [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.192s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:36:44 compute-0 nova_compute[265391]: 2025-09-30 18:36:44.800 2 INFO nova.compute.manager [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Sep 30 18:36:44 compute-0 nova_compute[265391]: 2025-09-30 18:36:44.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:36:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:36:45.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:36:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:36:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:36:45.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:36:45 compute-0 ceph-mon[73755]: pgmap v1745: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 8.1 KiB/s wr, 2 op/s
Sep 30 18:36:45 compute-0 nova_compute[265391]: 2025-09-30 18:36:45.874 2 INFO nova.scheduler.client.report [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Deleted allocation for migration 43ba0fc0-6a86-4b35-b358-2d6cb9bb939c
Sep 30 18:36:45 compute-0 nova_compute[265391]: 2025-09-30 18:36:45.875 2 DEBUG nova.virt.libvirt.driver [None req-1f1ec937-23ba-4f7b-a3e8-b3b26a415b42 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 741d9cb1-7a49-4d89-8b1a-78ae947f2c49] Live migration monitoring is all done _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11566
Sep 30 18:36:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:36:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1746: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 8.1 KiB/s wr, 2 op/s
Sep 30 18:36:46 compute-0 nova_compute[265391]: 2025-09-30 18:36:46.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:46 compute-0 nova_compute[265391]: 2025-09-30 18:36:46.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:36:46 compute-0 nova_compute[265391]: 2025-09-30 18:36:46.893 2 DEBUG nova.compute.manager [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Preparing to wait for external event network-vif-plugged-f05039eb-b7e1-4072-bc17-63c6787538a1 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:36:46 compute-0 nova_compute[265391]: 2025-09-30 18:36:46.893 2 DEBUG oslo_concurrency.lockutils [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "342a3981-de33-491a-974b-5566045fba97-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:36:46 compute-0 nova_compute[265391]: 2025-09-30 18:36:46.893 2 DEBUG oslo_concurrency.lockutils [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "342a3981-de33-491a-974b-5566045fba97-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:36:46 compute-0 nova_compute[265391]: 2025-09-30 18:36:46.894 2 DEBUG oslo_concurrency.lockutils [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "342a3981-de33-491a-974b-5566045fba97-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:36:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:36:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:36:47.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:36:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:36:47.321Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:36:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:36:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:36:47.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:36:47 compute-0 ceph-mon[73755]: pgmap v1746: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 8.1 KiB/s wr, 2 op/s
Sep 30 18:36:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1747: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 8.1 KiB/s wr, 2 op/s
Sep 30 18:36:48 compute-0 nova_compute[265391]: 2025-09-30 18:36:48.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:36:48 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/4262819077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:36:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:36:48] "GET /metrics HTTP/1.1" 200 46726 "" "Prometheus/2.51.0"
Sep 30 18:36:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:36:48] "GET /metrics HTTP/1.1" 200 46726 "" "Prometheus/2.51.0"
Sep 30 18:36:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:36:48.867Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:36:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:36:48.868Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:36:48 compute-0 nova_compute[265391]: 2025-09-30 18:36:48.939 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:36:48 compute-0 nova_compute[265391]: 2025-09-30 18:36:48.939 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:36:48 compute-0 nova_compute[265391]: 2025-09-30 18:36:48.939 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:36:48 compute-0 nova_compute[265391]: 2025-09-30 18:36:48.940 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:36:48 compute-0 nova_compute[265391]: 2025-09-30 18:36:48.940 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:36:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:36:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:36:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:36:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:36:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:36:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:36:49.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:36:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:36:49 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1350924747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:36:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:36:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:36:49.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:36:49 compute-0 nova_compute[265391]: 2025-09-30 18:36:49.356 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:36:49 compute-0 ceph-mon[73755]: pgmap v1747: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 8.1 KiB/s wr, 2 op/s
Sep 30 18:36:49 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1350924747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:36:49 compute-0 nova_compute[265391]: 2025-09-30 18:36:49.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1748: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 1.1 KiB/s rd, 8.1 KiB/s wr, 2 op/s
Sep 30 18:36:50 compute-0 nova_compute[265391]: 2025-09-30 18:36:50.394 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:36:50 compute-0 nova_compute[265391]: 2025-09-30 18:36:50.394 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:36:50 compute-0 nova_compute[265391]: 2025-09-30 18:36:50.544 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:36:50 compute-0 nova_compute[265391]: 2025-09-30 18:36:50.545 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:36:50 compute-0 nova_compute[265391]: 2025-09-30 18:36:50.561 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.016s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:36:50 compute-0 nova_compute[265391]: 2025-09-30 18:36:50.562 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4166MB free_disk=39.9011344909668GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:36:50 compute-0 nova_compute[265391]: 2025-09-30 18:36:50.562 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:36:50 compute-0 nova_compute[265391]: 2025-09-30 18:36:50.562 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:36:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:36:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:36:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:36:51.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:36:51 compute-0 nova_compute[265391]: 2025-09-30 18:36:51.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:36:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:36:51.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:36:51 compute-0 ceph-mon[73755]: pgmap v1748: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 1.1 KiB/s rd, 8.1 KiB/s wr, 2 op/s
Sep 30 18:36:51 compute-0 nova_compute[265391]: 2025-09-30 18:36:51.577 2 INFO nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] [instance: 342a3981-de33-491a-974b-5566045fba97] Updating resource usage from migration 3b051e9f-cf53-41ea-a9f1-d01148892c63
Sep 30 18:36:51 compute-0 nova_compute[265391]: 2025-09-30 18:36:51.615 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Migration 3b051e9f-cf53-41ea-a9f1-d01148892c63 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Sep 30 18:36:51 compute-0 nova_compute[265391]: 2025-09-30 18:36:51.615 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:36:51 compute-0 nova_compute[265391]: 2025-09-30 18:36:51.616 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=39GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:36:50 up  1:40,  0 user,  load average: 1.06, 0.93, 0.89\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_migrating': '1', 'num_os_type_None': '1', 'num_proj_c634e1c17ed54907969576a0eb8eff50': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:36:51 compute-0 nova_compute[265391]: 2025-09-30 18:36:51.654 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:36:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:36:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2468757023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:36:52 compute-0 nova_compute[265391]: 2025-09-30 18:36:52.081 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:36:52 compute-0 nova_compute[265391]: 2025-09-30 18:36:52.090 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:36:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1749: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 8.1 KiB/s wr, 2 op/s
Sep 30 18:36:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:36:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:36:52 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2468757023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:36:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:36:52 compute-0 nova_compute[265391]: 2025-09-30 18:36:52.599 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:36:52 compute-0 nova_compute[265391]: 2025-09-30 18:36:52.821 2 DEBUG nova.compute.manager [req-d4f75973-74f6-4ff7-b4f3-f346922c364a req-55cb283b-4e1b-49a4-85ce-9956d41b06b2 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Received event network-vif-unplugged-f05039eb-b7e1-4072-bc17-63c6787538a1 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:36:52 compute-0 nova_compute[265391]: 2025-09-30 18:36:52.822 2 DEBUG oslo_concurrency.lockutils [req-d4f75973-74f6-4ff7-b4f3-f346922c364a req-55cb283b-4e1b-49a4-85ce-9956d41b06b2 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "342a3981-de33-491a-974b-5566045fba97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:36:52 compute-0 nova_compute[265391]: 2025-09-30 18:36:52.822 2 DEBUG oslo_concurrency.lockutils [req-d4f75973-74f6-4ff7-b4f3-f346922c364a req-55cb283b-4e1b-49a4-85ce-9956d41b06b2 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "342a3981-de33-491a-974b-5566045fba97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:36:52 compute-0 nova_compute[265391]: 2025-09-30 18:36:52.823 2 DEBUG oslo_concurrency.lockutils [req-d4f75973-74f6-4ff7-b4f3-f346922c364a req-55cb283b-4e1b-49a4-85ce-9956d41b06b2 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "342a3981-de33-491a-974b-5566045fba97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:36:52 compute-0 nova_compute[265391]: 2025-09-30 18:36:52.823 2 DEBUG nova.compute.manager [req-d4f75973-74f6-4ff7-b4f3-f346922c364a req-55cb283b-4e1b-49a4-85ce-9956d41b06b2 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] No event matching network-vif-unplugged-f05039eb-b7e1-4072-bc17-63c6787538a1 in dict_keys([('network-vif-plugged', 'f05039eb-b7e1-4072-bc17-63c6787538a1')]) pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:349
Sep 30 18:36:52 compute-0 nova_compute[265391]: 2025-09-30 18:36:52.824 2 DEBUG nova.compute.manager [req-d4f75973-74f6-4ff7-b4f3-f346922c364a req-55cb283b-4e1b-49a4-85ce-9956d41b06b2 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Received event network-vif-unplugged-f05039eb-b7e1-4072-bc17-63c6787538a1 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:36:53 compute-0 nova_compute[265391]: 2025-09-30 18:36:53.107 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:36:53 compute-0 nova_compute[265391]: 2025-09-30 18:36:53.108 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.546s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:36:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:36:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:36:53.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:36:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:36:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:36:53.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:36:53 compute-0 ceph-mon[73755]: pgmap v1749: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 852 B/s rd, 8.1 KiB/s wr, 2 op/s
Sep 30 18:36:53 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2102236554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:36:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:36:53.847Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:36:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:36:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:36:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:36:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:36:54 compute-0 nova_compute[265391]: 2025-09-30 18:36:54.108 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:36:54 compute-0 nova_compute[265391]: 2025-09-30 18:36:54.108 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:36:54 compute-0 nova_compute[265391]: 2025-09-30 18:36:54.108 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:36:54 compute-0 nova_compute[265391]: 2025-09-30 18:36:54.109 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:36:54 compute-0 nova_compute[265391]: 2025-09-30 18:36:54.109 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:36:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1750: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 1.1 KiB/s rd, 8.1 KiB/s wr, 2 op/s
Sep 30 18:36:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:54.326 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:36:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:54.326 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:36:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:54.327 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:36:54 compute-0 nova_compute[265391]: 2025-09-30 18:36:54.413 2 INFO nova.compute.manager [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Took 7.52 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Sep 30 18:36:54 compute-0 nova_compute[265391]: 2025-09-30 18:36:54.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:54 compute-0 nova_compute[265391]: 2025-09-30 18:36:54.949 2 DEBUG nova.compute.manager [req-ce95c023-18d0-4123-bc55-fce12b5bc609 req-103a8be0-59a7-4842-a750-987bc3277285 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Received event network-vif-plugged-f05039eb-b7e1-4072-bc17-63c6787538a1 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:36:54 compute-0 nova_compute[265391]: 2025-09-30 18:36:54.949 2 DEBUG oslo_concurrency.lockutils [req-ce95c023-18d0-4123-bc55-fce12b5bc609 req-103a8be0-59a7-4842-a750-987bc3277285 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "342a3981-de33-491a-974b-5566045fba97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:36:54 compute-0 nova_compute[265391]: 2025-09-30 18:36:54.949 2 DEBUG oslo_concurrency.lockutils [req-ce95c023-18d0-4123-bc55-fce12b5bc609 req-103a8be0-59a7-4842-a750-987bc3277285 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "342a3981-de33-491a-974b-5566045fba97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:36:54 compute-0 nova_compute[265391]: 2025-09-30 18:36:54.950 2 DEBUG oslo_concurrency.lockutils [req-ce95c023-18d0-4123-bc55-fce12b5bc609 req-103a8be0-59a7-4842-a750-987bc3277285 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "342a3981-de33-491a-974b-5566045fba97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:36:54 compute-0 nova_compute[265391]: 2025-09-30 18:36:54.950 2 DEBUG nova.compute.manager [req-ce95c023-18d0-4123-bc55-fce12b5bc609 req-103a8be0-59a7-4842-a750-987bc3277285 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Processing event network-vif-plugged-f05039eb-b7e1-4072-bc17-63c6787538a1 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:36:54 compute-0 nova_compute[265391]: 2025-09-30 18:36:54.950 2 DEBUG nova.compute.manager [req-ce95c023-18d0-4123-bc55-fce12b5bc609 req-103a8be0-59a7-4842-a750-987bc3277285 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Received event network-changed-f05039eb-b7e1-4072-bc17-63c6787538a1 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:36:54 compute-0 nova_compute[265391]: 2025-09-30 18:36:54.950 2 DEBUG nova.compute.manager [req-ce95c023-18d0-4123-bc55-fce12b5bc609 req-103a8be0-59a7-4842-a750-987bc3277285 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Refreshing instance network info cache due to event network-changed-f05039eb-b7e1-4072-bc17-63c6787538a1. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:36:54 compute-0 nova_compute[265391]: 2025-09-30 18:36:54.950 2 DEBUG oslo_concurrency.lockutils [req-ce95c023-18d0-4123-bc55-fce12b5bc609 req-103a8be0-59a7-4842-a750-987bc3277285 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-342a3981-de33-491a-974b-5566045fba97" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:36:54 compute-0 nova_compute[265391]: 2025-09-30 18:36:54.950 2 DEBUG oslo_concurrency.lockutils [req-ce95c023-18d0-4123-bc55-fce12b5bc609 req-103a8be0-59a7-4842-a750-987bc3277285 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-342a3981-de33-491a-974b-5566045fba97" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:36:54 compute-0 nova_compute[265391]: 2025-09-30 18:36:54.950 2 DEBUG nova.network.neutron [req-ce95c023-18d0-4123-bc55-fce12b5bc609 req-103a8be0-59a7-4842-a750-987bc3277285 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Refreshing network info cache for port f05039eb-b7e1-4072-bc17-63c6787538a1 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:36:54 compute-0 nova_compute[265391]: 2025-09-30 18:36:54.951 2 DEBUG nova.compute.manager [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:36:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:36:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:36:55.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:36:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:36:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:36:55.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:36:55 compute-0 nova_compute[265391]: 2025-09-30 18:36:55.458 2 DEBUG nova.compute.manager [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpjbxj26bw',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='342a3981-de33-491a-974b-5566045fba97',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(3b051e9f-cf53-41ea-a9f1-d01148892c63),old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9659
Sep 30 18:36:55 compute-0 nova_compute[265391]: 2025-09-30 18:36:55.460 2 WARNING neutronclient.v2_0.client [req-ce95c023-18d0-4123-bc55-fce12b5bc609 req-103a8be0-59a7-4842-a750-987bc3277285 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:36:55 compute-0 nova_compute[265391]: 2025-09-30 18:36:55.470 2 DEBUG nova.objects.instance [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lazy-loading 'migration_context' on Instance uuid 342a3981-de33-491a-974b-5566045fba97 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:36:55 compute-0 nova_compute[265391]: 2025-09-30 18:36:55.472 2 DEBUG nova.virt.libvirt.driver [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Starting monitoring of live migration _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11543
Sep 30 18:36:55 compute-0 nova_compute[265391]: 2025-09-30 18:36:55.474 2 DEBUG nova.virt.libvirt.driver [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Sep 30 18:36:55 compute-0 nova_compute[265391]: 2025-09-30 18:36:55.475 2 DEBUG nova.virt.libvirt.driver [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Sep 30 18:36:55 compute-0 ceph-mon[73755]: pgmap v1750: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 1.1 KiB/s rd, 8.1 KiB/s wr, 2 op/s
Sep 30 18:36:55 compute-0 nova_compute[265391]: 2025-09-30 18:36:55.977 2 DEBUG nova.virt.libvirt.driver [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Sep 30 18:36:55 compute-0 nova_compute[265391]: 2025-09-30 18:36:55.978 2 DEBUG nova.virt.libvirt.driver [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Sep 30 18:36:55 compute-0 nova_compute[265391]: 2025-09-30 18:36:55.983 2 DEBUG nova.virt.libvirt.vif [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:34:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-2049754641',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-2049754641',id=26,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:35:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c634e1c17ed54907969576a0eb8eff50',ramdisk_id='',reservation_id='r-ryu3h0vp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteStrategies-1883747907',owner_user_name='tempest-TestExecuteStrategies-1883747907-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:35:15Z,user_data=None,user_id='623ef4a55c9e4fc28bb65e49246b5008',uuid=342a3981-de33-491a-974b-5566045fba97,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f05039eb-b7e1-4072-bc17-63c6787538a1", "address": "fa:16:3e:84:d4:b5", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapf05039eb-b7", "ovs_interfaceid": "f05039eb-b7e1-4072-bc17-63c6787538a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:36:55 compute-0 nova_compute[265391]: 2025-09-30 18:36:55.984 2 DEBUG nova.network.os_vif_util [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "f05039eb-b7e1-4072-bc17-63c6787538a1", "address": "fa:16:3e:84:d4:b5", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapf05039eb-b7", "ovs_interfaceid": "f05039eb-b7e1-4072-bc17-63c6787538a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:36:55 compute-0 nova_compute[265391]: 2025-09-30 18:36:55.984 2 DEBUG nova.network.os_vif_util [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:d4:b5,bridge_name='br-int',has_traffic_filtering=True,id=f05039eb-b7e1-4072-bc17-63c6787538a1,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf05039eb-b7') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:36:55 compute-0 nova_compute[265391]: 2025-09-30 18:36:55.984 2 DEBUG nova.virt.libvirt.migration [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Updating guest XML with vif config: <interface type="ethernet">
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <mac address="fa:16:3e:84:d4:b5"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <model type="virtio"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <mtu size="1442"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <target dev="tapf05039eb-b7"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]: </interface>
Sep 30 18:36:55 compute-0 nova_compute[265391]:  _update_vif_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:534
Sep 30 18:36:55 compute-0 nova_compute[265391]: 2025-09-30 18:36:55.985 2 DEBUG nova.virt.libvirt.migration [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _remove_cpu_shared_set_xml input xml=<domain type="kvm">
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <name>instance-0000001a</name>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <uuid>342a3981-de33-491a-974b-5566045fba97</uuid>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteStrategies-server-2049754641</nova:name>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:35:08</nova:creationTime>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:36:55 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:36:55 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:user uuid="623ef4a55c9e4fc28bb65e49246b5008">tempest-TestExecuteStrategies-1883747907-project-admin</nova:user>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:project uuid="c634e1c17ed54907969576a0eb8eff50">tempest-TestExecuteStrategies-1883747907</nova:project>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:port uuid="f05039eb-b7e1-4072-bc17-63c6787538a1">
Sep 30 18:36:55 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <system>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <entry name="serial">342a3981-de33-491a-974b-5566045fba97</entry>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <entry name="uuid">342a3981-de33-491a-974b-5566045fba97</entry>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </system>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <os>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   </os>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <features>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   </features>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/342a3981-de33-491a-974b-5566045fba97_disk">
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       </source>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/342a3981-de33-491a-974b-5566045fba97_disk.config">
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       </source>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:84:d4:b5"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target dev="tapf05039eb-b7"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/342a3981-de33-491a-974b-5566045fba97/console.log" append="off"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       </target>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/342a3981-de33-491a-974b-5566045fba97/console.log" append="off"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </console>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </input>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <video>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </video>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]: </domain>
Sep 30 18:36:55 compute-0 nova_compute[265391]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:241
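[editor's note] The debug call sites at migration.py:241 and :250 bracket _remove_cpu_shared_set_xml, so the dump ending above and the one that follows appear to be the guest definition before and after that step, which drops any CPU pinning the source host's cpu_shared_set injected so the destination can apply its own. As a minimal, illustrative sketch only (not nova's actual implementation), and assuming the shared set shows up as a cpuset attribute on the <vcpu> element, such a rewrite could look like this:

    # Illustrative sketch only, not nova's code: drop a source-host cpuset
    # pinning from the <vcpu> element of a libvirt domain XML string.
    import xml.etree.ElementTree as ET

    def remove_cpu_shared_set(domain_xml: str) -> str:
        root = ET.fromstring(domain_xml)          # the <domain> element
        vcpu = root.find("vcpu")
        if vcpu is not None and "cpuset" in vcpu.attrib:
            del vcpu.attrib["cpuset"]             # destination applies its own pinning
        return ET.tostring(root, encoding="unicode")

In this instance the <vcpu> element carries no cpuset, which is consistent with the output dump below being identical to the input.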
Sep 30 18:36:55 compute-0 nova_compute[265391]: 2025-09-30 18:36:55.987 2 DEBUG nova.virt.libvirt.migration [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _remove_cpu_shared_set_xml output xml=<domain type="kvm">
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <name>instance-0000001a</name>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <uuid>342a3981-de33-491a-974b-5566045fba97</uuid>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteStrategies-server-2049754641</nova:name>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:35:08</nova:creationTime>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:36:55 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:36:55 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:user uuid="623ef4a55c9e4fc28bb65e49246b5008">tempest-TestExecuteStrategies-1883747907-project-admin</nova:user>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:project uuid="c634e1c17ed54907969576a0eb8eff50">tempest-TestExecuteStrategies-1883747907</nova:project>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:port uuid="f05039eb-b7e1-4072-bc17-63c6787538a1">
Sep 30 18:36:55 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <system>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <entry name="serial">342a3981-de33-491a-974b-5566045fba97</entry>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <entry name="uuid">342a3981-de33-491a-974b-5566045fba97</entry>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </system>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <os>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   </os>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <features>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   </features>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/342a3981-de33-491a-974b-5566045fba97_disk">
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       </source>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/342a3981-de33-491a-974b-5566045fba97_disk.config">
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       </source>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:84:d4:b5"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target dev="tapf05039eb-b7"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/342a3981-de33-491a-974b-5566045fba97/console.log" append="off"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       </target>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/342a3981-de33-491a-974b-5566045fba97/console.log" append="off"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </console>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </input>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <video>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </video>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]: </domain>
Sep 30 18:36:55 compute-0 nova_compute[265391]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:250
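[editor's note] The next debug message logs the result of _update_pci_xml, the step that rewrites passthrough PCI device addresses in the domain XML so they refer to the destination host's devices rather than the source's. A minimal sketch of that kind of rewrite, assuming a caller-supplied mapping of source to destination PCI addresses (the mapping format here is invented for illustration and is not nova's data model):

    # Illustrative sketch only, not nova's code: retarget <hostdev> PCI source
    # addresses in a libvirt domain XML string for the destination host.
    import xml.etree.ElementTree as ET

    def update_pci_addresses(domain_xml: str, addr_map: dict) -> str:
        """addr_map maps (domain, bus, slot, function) attribute tuples on the
        source host to the corresponding tuples on the destination host."""
        root = ET.fromstring(domain_xml)
        for hostdev in root.findall("./devices/hostdev[@type='pci']"):
            src = hostdev.find("./source/address")
            if src is None:
                continue
            key = (src.get("domain"), src.get("bus"),
                   src.get("slot"), src.get("function"))
            if key in addr_map:
                dom, bus, slot, func = addr_map[key]
                src.set("domain", dom)
                src.set("bus", bus)
                src.set("slot", slot)
                src.set("function", func)
        return ET.tostring(root, encoding="unicode")

This guest defines no <hostdev> passthrough devices, so the step is effectively a no-op and the dump that follows matches the previous one.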
Sep 30 18:36:55 compute-0 nova_compute[265391]: 2025-09-30 18:36:55.988 2 DEBUG nova.virt.libvirt.migration [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _update_pci_xml output xml=<domain type="kvm">
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <name>instance-0000001a</name>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <uuid>342a3981-de33-491a-974b-5566045fba97</uuid>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteStrategies-server-2049754641</nova:name>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:35:08</nova:creationTime>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:36:55 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:36:55 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:user uuid="623ef4a55c9e4fc28bb65e49246b5008">tempest-TestExecuteStrategies-1883747907-project-admin</nova:user>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:project uuid="c634e1c17ed54907969576a0eb8eff50">tempest-TestExecuteStrategies-1883747907</nova:project>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <nova:port uuid="f05039eb-b7e1-4072-bc17-63c6787538a1">
Sep 30 18:36:55 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <system>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <entry name="serial">342a3981-de33-491a-974b-5566045fba97</entry>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <entry name="uuid">342a3981-de33-491a-974b-5566045fba97</entry>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </system>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <os>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   </os>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <features>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   </features>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/342a3981-de33-491a-974b-5566045fba97_disk">
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       </source>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/342a3981-de33-491a-974b-5566045fba97_disk.config">
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       </source>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:84:d4:b5"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target dev="tapf05039eb-b7"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/342a3981-de33-491a-974b-5566045fba97/console.log" append="off"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:36:55 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       </target>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/342a3981-de33-491a-974b-5566045fba97/console.log" append="off"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </console>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </input>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <video>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </video>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:36:55 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:36:55 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:36:55 compute-0 nova_compute[265391]: </domain>
Sep 30 18:36:55 compute-0 nova_compute[265391]:  _update_pci_dev_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:166
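The XML dump above is the domain definition nova hands to the libvirt migrate call, with device addressing adjusted by _update_pci_dev_xml for the destination. A minimal sketch of dumping the live definition directly from libvirt for comparison, assuming the libvirt-python bindings are installed and using the domain name instance-0000001a taken from the machine scope later in this log:

    import libvirt  # libvirt-python bindings

    # Connect read-only to the local system hypervisor, as "virsh -c qemu:///system" would.
    conn = libvirt.openReadOnly("qemu:///system")
    # Domain name inferred from the qemu-19-instance-0000001a machine scope in this log.
    dom = conn.lookupByName("instance-0000001a")
    # Dump the current domain XML while the guest is still defined on the source,
    # to compare against the XML nova logged above.
    print(dom.XMLDesc())
    conn.close()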
Sep 30 18:36:55 compute-0 nova_compute[265391]: 2025-09-30 18:36:55.989 2 DEBUG nova.virt.libvirt.driver [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] About to invoke the migrate API _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11175
Sep 30 18:36:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:36:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1751: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:36:56 compute-0 nova_compute[265391]: 2025-09-30 18:36:56.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:56 compute-0 nova_compute[265391]: 2025-09-30 18:36:56.423 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:36:56 compute-0 nova_compute[265391]: 2025-09-30 18:36:56.480 2 DEBUG nova.virt.libvirt.migration [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Current None elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Sep 30 18:36:56 compute-0 nova_compute[265391]: 2025-09-30 18:36:56.480 2 INFO nova.virt.libvirt.migration [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Increasing downtime to 50 ms after 0 sec elapsed time
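The step list logged above is nova's downtime ramp: pairs of (seconds elapsed, permitted downtime in ms), starting at 50 ms and ending at 500 ms. A minimal sketch that reproduces the logged schedule, assuming a 500 ms target downtime split over 10 steps with 150 s between steps (values inferred from the list itself, not read from this node's nova.conf):

    # Reproduce the downtime schedule from the update_downtime debug record above.
    max_downtime_ms = 500   # assumed final permitted downtime
    steps = 10              # assumed number of increase steps
    delay_s = 150           # assumed delay between steps, in seconds

    base = max_downtime_ms // steps                # 50 ms starting downtime
    increment = (max_downtime_ms - base) // steps  # 45 ms added per step

    schedule = [(i * delay_s, base + i * increment) for i in range(steps + 1)]
    print(schedule)
    # [(0, 50), (150, 95), (300, 140), ..., (1500, 500)]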
Sep 30 18:36:57 compute-0 sshd-session[341073]: Connection closed by 154.125.120.7 port 42911 [preauth]
Sep 30 18:36:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:36:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:36:57.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:36:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:36:57.322Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:36:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:36:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:36:57.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:36:57 compute-0 nova_compute[265391]: 2025-09-30 18:36:57.448 2 WARNING neutronclient.v2_0.client [req-ce95c023-18d0-4123-bc55-fce12b5bc609 req-103a8be0-59a7-4842-a750-987bc3277285 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:36:57 compute-0 nova_compute[265391]: 2025-09-30 18:36:57.502 2 INFO nova.virt.libvirt.driver [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Migration running for 1 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Sep 30 18:36:57 compute-0 podman[341082]: 2025-09-30 18:36:57.535579853 +0000 UTC m=+0.063438885 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Sep 30 18:36:57 compute-0 podman[341084]: 2025-09-30 18:36:57.544600557 +0000 UTC m=+0.068451676 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:36:57 compute-0 podman[341083]: 2025-09-30 18:36:57.561650279 +0000 UTC m=+0.085744334 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, container_name=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
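The three health_status=healthy records above come from podman's periodic healthcheck runs against the ovn_metadata_agent, podman_exporter and ovn_controller containers. A minimal sketch of triggering one of those checks by hand, assuming podman is available and using the container name from the record above:

    import subprocess

    # Run the container's configured healthcheck once; exit status 0 means healthy,
    # matching the health_status=healthy events logged above.
    result = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
    print("healthy" if result.returncode == 0 else "unhealthy")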
Sep 30 18:36:57 compute-0 ceph-mon[73755]: pgmap v1751: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:36:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:36:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1471880056' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:36:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:36:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1471880056' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
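The two mon_command dispatches above are the capacity and pool-quota queries that client.openstack issues while tracking volume usage. A minimal sketch of the equivalent queries run by hand, assuming the ceph CLI and a usable keyring are available on this host:

    import json
    import subprocess

    # Same monitor commands as in the audit log above, issued via the ceph CLI.
    df = json.loads(subprocess.check_output(
        ["ceph", "df", "--format", "json"], text=True))
    quota = json.loads(subprocess.check_output(
        ["ceph", "osd", "pool", "get-quota", "volumes", "--format", "json"], text=True))
    print(json.dumps(df["stats"], indent=2))   # cluster-wide totals
    print(json.dumps(quota, indent=2))         # quota on the "volumes" pool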
Sep 30 18:36:57 compute-0 sudo[341146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:36:57 compute-0 sudo[341146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:36:57 compute-0 sudo[341146]: pam_unix(sudo:session): session closed for user root
Sep 30 18:36:58 compute-0 nova_compute[265391]: 2025-09-30 18:36:58.004 2 DEBUG nova.virt.libvirt.migration [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Sep 30 18:36:58 compute-0 nova_compute[265391]: 2025-09-30 18:36:58.005 2 DEBUG nova.virt.libvirt.migration [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Downtime does not need to change update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:671
Sep 30 18:36:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1752: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:36:58 compute-0 nova_compute[265391]: 2025-09-30 18:36:58.508 2 DEBUG nova.virt.libvirt.migration [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Current 50 elapsed 3 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Sep 30 18:36:58 compute-0 nova_compute[265391]: 2025-09-30 18:36:58.509 2 DEBUG nova.virt.libvirt.migration [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Downtime does not need to change update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:671
Sep 30 18:36:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1471880056' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:36:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1471880056' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:36:58 compute-0 nova_compute[265391]: 2025-09-30 18:36:58.612 2 DEBUG nova.network.neutron [req-ce95c023-18d0-4123-bc55-fce12b5bc609 req-103a8be0-59a7-4842-a750-987bc3277285 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Updated VIF entry in instance network info cache for port f05039eb-b7e1-4072-bc17-63c6787538a1. _build_network_info_model /usr/lib/python3.12/site-packages/nova/network/neutron.py:3542
Sep 30 18:36:58 compute-0 nova_compute[265391]: 2025-09-30 18:36:58.612 2 DEBUG nova.network.neutron [req-ce95c023-18d0-4123-bc55-fce12b5bc609 req-103a8be0-59a7-4842-a750-987bc3277285 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Updating instance_info_cache with network_info: [{"id": "f05039eb-b7e1-4072-bc17-63c6787538a1", "address": "fa:16:3e:84:d4:b5", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf05039eb-b7", "ovs_interfaceid": "f05039eb-b7e1-4072-bc17-63c6787538a1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:36:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:36:58] "GET /metrics HTTP/1.1" 200 46725 "" "Prometheus/2.51.0"
Sep 30 18:36:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:36:58] "GET /metrics HTTP/1.1" 200 46725 "" "Prometheus/2.51.0"
Sep 30 18:36:58 compute-0 kernel: tapf05039eb-b7 (unregistering): left promiscuous mode
Sep 30 18:36:58 compute-0 NetworkManager[45059]: <info>  [1759257418.8623] device (tapf05039eb-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 18:36:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:36:58.868Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:36:58 compute-0 ovn_controller[156242]: 2025-09-30T18:36:58Z|00235|binding|INFO|Releasing lport f05039eb-b7e1-4072-bc17-63c6787538a1 from this chassis (sb_readonly=0)
Sep 30 18:36:58 compute-0 ovn_controller[156242]: 2025-09-30T18:36:58Z|00236|binding|INFO|Setting lport f05039eb-b7e1-4072-bc17-63c6787538a1 down in Southbound
Sep 30 18:36:58 compute-0 ovn_controller[156242]: 2025-09-30T18:36:58Z|00237|binding|INFO|Removing iface tapf05039eb-b7 ovn-installed in OVS
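ovn-controller has just released the logical port from this chassis and set it down in the southbound database. A minimal sketch for checking the resulting Port_Binding row, assuming ovn-sbctl can reach the southbound DB from this host (in this deployment that typically means running it inside the ovn_controller container):

    import subprocess

    # Query the southbound Port_Binding row for the logical port released above.
    port = "f05039eb-b7e1-4072-bc17-63c6787538a1"
    out = subprocess.check_output(
        ["ovn-sbctl", "--columns=chassis,up,requested_chassis",
         "find", "Port_Binding", f"logical_port={port}"],
        text=True)
    print(out)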
Sep 30 18:36:58 compute-0 nova_compute[265391]: 2025-09-30 18:36:58.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:58 compute-0 nova_compute[265391]: 2025-09-30 18:36:58.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:58.879 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:d4:b5 10.100.0.7'], port_security=['fa:16:3e:84:d4:b5 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '81ab3fff-d6d4-4262-9f24-1b212876e52c'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '342a3981-de33-491a-974b-5566045fba97', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6901f664-336b-42d2-bbf7-58951befc8d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c634e1c17ed54907969576a0eb8eff50', 'neutron:revision_number': '10', 'neutron:security_group_ids': '11cf84c1-9641-409c-b5f5-6c5fe3a8afe5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c24d9de-651b-4bf8-842a-1286ab88b11d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=f05039eb-b7e1-4072-bc17-63c6787538a1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:36:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:58.880 166158 INFO neutron.agent.ovn.metadata.agent [-] Port f05039eb-b7e1-4072-bc17-63c6787538a1 in datapath 6901f664-336b-42d2-bbf7-58951befc8d1 unbound from our chassis
Sep 30 18:36:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:58.881 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6901f664-336b-42d2-bbf7-58951befc8d1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:36:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:58.883 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[7e207531-b025-46e2-afb2-788afb0d4574]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:36:58 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:58.883 166158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1 namespace which is not needed anymore
Sep 30 18:36:58 compute-0 nova_compute[265391]: 2025-09-30 18:36:58.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:58 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Sep 30 18:36:58 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d0000001a.scope: Consumed 16.932s CPU time.
Sep 30 18:36:58 compute-0 systemd-machined[219917]: Machine qemu-19-instance-0000001a terminated.
Sep 30 18:36:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:36:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:36:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:36:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:36:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:36:59 compute-0 virtqemud[265263]: Unable to get XATTR trusted.libvirt.security.ref_selinux on 342a3981-de33-491a-974b-5566045fba97_disk: No such file or directory
Sep 30 18:36:59 compute-0 virtqemud[265263]: Unable to get XATTR trusted.libvirt.security.ref_dac on 342a3981-de33-491a-974b-5566045fba97_disk: No such file or directory
Sep 30 18:36:59 compute-0 nova_compute[265391]: 2025-09-30 18:36:59.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:59 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[338828]: [NOTICE]   (338832) : haproxy version is 3.0.5-8e879a5
Sep 30 18:36:59 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[338828]: [NOTICE]   (338832) : path to executable is /usr/sbin/haproxy
Sep 30 18:36:59 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[338828]: [WARNING]  (338832) : Exiting Master process...
Sep 30 18:36:59 compute-0 podman[341199]: 2025-09-30 18:36:59.034515877 +0000 UTC m=+0.032127833 container kill dd810f3cc9d6ddc37bac76261e3bd7e218205941c3b83f53a3bd30b74d9718e7 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:36:59 compute-0 nova_compute[265391]: 2025-09-30 18:36:59.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:59 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[338828]: [ALERT]    (338832) : Current worker (338841) exited with code 143 (Terminated)
Sep 30 18:36:59 compute-0 neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1[338828]: [WARNING]  (338832) : All workers exited. Exiting... (0)
Sep 30 18:36:59 compute-0 systemd[1]: libpod-dd810f3cc9d6ddc37bac76261e3bd7e218205941c3b83f53a3bd30b74d9718e7.scope: Deactivated successfully.
Sep 30 18:36:59 compute-0 conmon[338828]: conmon dd810f3cc9d6ddc37bac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dd810f3cc9d6ddc37bac76261e3bd7e218205941c3b83f53a3bd30b74d9718e7.scope/container/memory.events
Sep 30 18:36:59 compute-0 nova_compute[265391]: 2025-09-30 18:36:59.042 2 DEBUG nova.virt.libvirt.guest [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.12/site-packages/nova/virt/libvirt/guest.py:687
Sep 30 18:36:59 compute-0 nova_compute[265391]: 2025-09-30 18:36:59.043 2 INFO nova.virt.libvirt.driver [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Migration operation has completed
Sep 30 18:36:59 compute-0 nova_compute[265391]: 2025-09-30 18:36:59.044 2 INFO nova.compute.manager [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] _post_live_migration() is started..
Sep 30 18:36:59 compute-0 nova_compute[265391]: 2025-09-30 18:36:59.048 2 DEBUG nova.virt.libvirt.driver [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Migrate API has completed _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11182
Sep 30 18:36:59 compute-0 nova_compute[265391]: 2025-09-30 18:36:59.049 2 DEBUG nova.virt.libvirt.driver [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Migration operation thread has finished _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11230
Sep 30 18:36:59 compute-0 nova_compute[265391]: 2025-09-30 18:36:59.049 2 DEBUG nova.virt.libvirt.driver [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Migration operation thread notification thread_finished /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11533
Sep 30 18:36:59 compute-0 nova_compute[265391]: 2025-09-30 18:36:59.063 2 WARNING neutronclient.v2_0.client [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:36:59 compute-0 nova_compute[265391]: 2025-09-30 18:36:59.064 2 WARNING neutronclient.v2_0.client [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
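At this point the libvirt migration job has finished and _post_live_migration() is cleaning up the source node. A minimal sketch for confirming the instance's new home from the API side, assuming admin credentials are already loaded in the environment:

    import subprocess

    # Show the instance status and current host after the live migration; the
    # host column should now report compute-1 rather than compute-0.
    uuid = "342a3981-de33-491a-974b-5566045fba97"
    subprocess.run(
        ["openstack", "server", "show", uuid,
         "-c", "status", "-c", "OS-EXT-SRV-ATTR:host"],
        check=True)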
Sep 30 18:36:59 compute-0 podman[341225]: 2025-09-30 18:36:59.084126907 +0000 UTC m=+0.019477010 container died dd810f3cc9d6ddc37bac76261e3bd7e218205941c3b83f53a3bd30b74d9718e7 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Sep 30 18:36:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dd810f3cc9d6ddc37bac76261e3bd7e218205941c3b83f53a3bd30b74d9718e7-userdata-shm.mount: Deactivated successfully.
Sep 30 18:36:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd575eac7b19b3fc574f216322c53321edcd97eb4f24a44744c03c0f44c898e3-merged.mount: Deactivated successfully.
Sep 30 18:36:59 compute-0 podman[341225]: 2025-09-30 18:36:59.1198016 +0000 UTC m=+0.055151693 container cleanup dd810f3cc9d6ddc37bac76261e3bd7e218205941c3b83f53a3bd30b74d9718e7 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:36:59 compute-0 nova_compute[265391]: 2025-09-30 18:36:59.121 2 DEBUG oslo_concurrency.lockutils [req-ce95c023-18d0-4123-bc55-fce12b5bc609 req-103a8be0-59a7-4842-a750-987bc3277285 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-342a3981-de33-491a-974b-5566045fba97" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:36:59 compute-0 systemd[1]: libpod-conmon-dd810f3cc9d6ddc37bac76261e3bd7e218205941c3b83f53a3bd30b74d9718e7.scope: Deactivated successfully.
Sep 30 18:36:59 compute-0 podman[341226]: 2025-09-30 18:36:59.135207804 +0000 UTC m=+0.062732486 container remove dd810f3cc9d6ddc37bac76261e3bd7e218205941c3b83f53a3bd30b74d9718e7 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, tcib_managed=true, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Sep 30 18:36:59 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:59.140 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[b5304a95-ea39-4760-94c0-2b2bc5a807e7]: (4, ("Tue Sep 30 06:36:58 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1 (dd810f3cc9d6ddc37bac76261e3bd7e218205941c3b83f53a3bd30b74d9718e7)\ndd810f3cc9d6ddc37bac76261e3bd7e218205941c3b83f53a3bd30b74d9718e7\nTue Sep 30 06:36:59 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1 (dd810f3cc9d6ddc37bac76261e3bd7e218205941c3b83f53a3bd30b74d9718e7)\ndd810f3cc9d6ddc37bac76261e3bd7e218205941c3b83f53a3bd30b74d9718e7\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:36:59 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:59.141 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[15f71931-d2ad-48b2-baeb-2ac7f65d68bc]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:36:59 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:59.141 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6901f664-336b-42d2-bbf7-58951befc8d1.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:36:59 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:59.142 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[1ffadfc9-84ec-48db-8351-e38a435b9fad]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:36:59 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:59.142 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6901f664-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:36:59 compute-0 nova_compute[265391]: 2025-09-30 18:36:59.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:59 compute-0 kernel: tap6901f664-30: left promiscuous mode
Sep 30 18:36:59 compute-0 nova_compute[265391]: 2025-09-30 18:36:59.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:36:59 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:59.163 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[8032e10d-560b-4b95-8559-c9eee6cbdff0]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:36:59 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:59.194 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[f90a7d46-7b89-4cf8-9cfa-ede6f26b4a30]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:36:59 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:59.195 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[60bd5716-58e4-4241-9b08-95f2cd28c732]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:36:59 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:59.209 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[24ef8d84-5202-4117-ac07-0a238d3523ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591191, 'reachable_time': 39472, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341258, 'error': None, 'target': 'ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:36:59 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:59.211 166383 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1 deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Sep 30 18:36:59 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:36:59.211 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[d19d856f-cf38-4b8d-8e94-d0f4459182ab]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:36:59 compute-0 systemd[1]: run-netns-ovnmeta\x2d6901f664\x2d336b\x2d42d2\x2dbbf7\x2d58951befc8d1.mount: Deactivated successfully.
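With the last VIF gone from the network, the metadata agent has deleted its ovnmeta- namespace and systemd has released the corresponding bind mount under /run/netns. A minimal sketch for verifying the namespace is really gone, assuming iproute2 is available on the host:

    import subprocess

    # Namespace name taken from the cleanup messages above.
    ns = "ovnmeta-6901f664-336b-42d2-bbf7-58951befc8d1"
    namespaces = subprocess.check_output(["ip", "netns", "list"], text=True)
    print(ns in namespaces)   # expected: False once the teardown above completes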
Sep 30 18:36:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:36:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:36:59.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:36:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:36:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:36:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:36:59.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:36:59 compute-0 ceph-mon[73755]: pgmap v1752: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:36:59 compute-0 nova_compute[265391]: 2025-09-30 18:36:59.631 2 DEBUG nova.compute.manager [req-7e56938d-6c8a-49b1-818b-fe9927a77361 req-98ef9435-d349-46d2-8982-9152fe5cca36 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Received event network-vif-unplugged-f05039eb-b7e1-4072-bc17-63c6787538a1 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:36:59 compute-0 nova_compute[265391]: 2025-09-30 18:36:59.631 2 DEBUG oslo_concurrency.lockutils [req-7e56938d-6c8a-49b1-818b-fe9927a77361 req-98ef9435-d349-46d2-8982-9152fe5cca36 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "342a3981-de33-491a-974b-5566045fba97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:36:59 compute-0 nova_compute[265391]: 2025-09-30 18:36:59.632 2 DEBUG oslo_concurrency.lockutils [req-7e56938d-6c8a-49b1-818b-fe9927a77361 req-98ef9435-d349-46d2-8982-9152fe5cca36 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "342a3981-de33-491a-974b-5566045fba97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:36:59 compute-0 nova_compute[265391]: 2025-09-30 18:36:59.632 2 DEBUG oslo_concurrency.lockutils [req-7e56938d-6c8a-49b1-818b-fe9927a77361 req-98ef9435-d349-46d2-8982-9152fe5cca36 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "342a3981-de33-491a-974b-5566045fba97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:36:59 compute-0 nova_compute[265391]: 2025-09-30 18:36:59.632 2 DEBUG nova.compute.manager [req-7e56938d-6c8a-49b1-818b-fe9927a77361 req-98ef9435-d349-46d2-8982-9152fe5cca36 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] No waiting events found dispatching network-vif-unplugged-f05039eb-b7e1-4072-bc17-63c6787538a1 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:36:59 compute-0 nova_compute[265391]: 2025-09-30 18:36:59.633 2 DEBUG nova.compute.manager [req-7e56938d-6c8a-49b1-818b-fe9927a77361 req-98ef9435-d349-46d2-8982-9152fe5cca36 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Received event network-vif-unplugged-f05039eb-b7e1-4072-bc17-63c6787538a1 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:36:59 compute-0 podman[276673]: time="2025-09-30T18:36:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:36:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:36:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:36:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10303 "" "Go-http-client/1.1"
Sep 30 18:36:59 compute-0 nova_compute[265391]: 2025-09-30 18:36:59.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:37:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1753: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 5.0 KiB/s rd, 170 B/s wr, 5 op/s
Sep 30 18:37:00 compute-0 nova_compute[265391]: 2025-09-30 18:37:00.668 2 DEBUG nova.network.neutron [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Activated binding for port f05039eb-b7e1-4072-bc17-63c6787538a1 and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.12/site-packages/nova/network/neutron.py:3241
Sep 30 18:37:00 compute-0 nova_compute[265391]: 2025-09-30 18:37:00.669 2 DEBUG nova.compute.manager [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "f05039eb-b7e1-4072-bc17-63c6787538a1", "address": "fa:16:3e:84:d4:b5", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf05039eb-b7", "ovs_interfaceid": "f05039eb-b7e1-4072-bc17-63c6787538a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10059
Sep 30 18:37:00 compute-0 nova_compute[265391]: 2025-09-30 18:37:00.670 2 DEBUG nova.virt.libvirt.vif [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:34:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-2049754641',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-2049754641',id=26,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:35:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c634e1c17ed54907969576a0eb8eff50',ramdisk_id='',reservation_id='r-ryu3h0vp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteStrategies-1883747907',owner_user_name='tempest-TestExecuteStrategies-1883747907-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:36:09Z,user_data=None,user_id='623ef4a55c9e4fc28bb65e49246b5008',uuid=342a3981-de33-491a-974b-5566045fba97,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f05039eb-b7e1-4072-bc17-63c6787538a1", "address": "fa:16:3e:84:d4:b5", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf05039eb-b7", "ovs_interfaceid": "f05039eb-b7e1-4072-bc17-63c6787538a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Sep 30 18:37:00 compute-0 nova_compute[265391]: 2025-09-30 18:37:00.670 2 DEBUG nova.network.os_vif_util [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "f05039eb-b7e1-4072-bc17-63c6787538a1", "address": "fa:16:3e:84:d4:b5", "network": {"id": "6901f664-336b-42d2-bbf7-58951befc8d1", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1634944481-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08fc2cbd16474855b7ae474fa9859f76", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf05039eb-b7", "ovs_interfaceid": "f05039eb-b7e1-4072-bc17-63c6787538a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:37:00 compute-0 nova_compute[265391]: 2025-09-30 18:37:00.671 2 DEBUG nova.network.os_vif_util [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:d4:b5,bridge_name='br-int',has_traffic_filtering=True,id=f05039eb-b7e1-4072-bc17-63c6787538a1,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf05039eb-b7') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:37:00 compute-0 nova_compute[265391]: 2025-09-30 18:37:00.671 2 DEBUG os_vif [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:d4:b5,bridge_name='br-int',has_traffic_filtering=True,id=f05039eb-b7e1-4072-bc17-63c6787538a1,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf05039eb-b7') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Sep 30 18:37:00 compute-0 nova_compute[265391]: 2025-09-30 18:37:00.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:37:00 compute-0 nova_compute[265391]: 2025-09-30 18:37:00.673 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf05039eb-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:37:00 compute-0 ceph-mon[73755]: pgmap v1753: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 5.0 KiB/s rd, 170 B/s wr, 5 op/s
Sep 30 18:37:00 compute-0 nova_compute[265391]: 2025-09-30 18:37:00.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:37:00 compute-0 nova_compute[265391]: 2025-09-30 18:37:00.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:37:00 compute-0 nova_compute[265391]: 2025-09-30 18:37:00.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:37:00 compute-0 nova_compute[265391]: 2025-09-30 18:37:00.677 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=452d081b-12d2-4e75-9796-973ac4069ac5) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:37:00 compute-0 nova_compute[265391]: 2025-09-30 18:37:00.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:37:00 compute-0 nova_compute[265391]: 2025-09-30 18:37:00.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:37:00 compute-0 nova_compute[265391]: 2025-09-30 18:37:00.681 2 INFO os_vif [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:d4:b5,bridge_name='br-int',has_traffic_filtering=True,id=f05039eb-b7e1-4072-bc17-63c6787538a1,network=Network(6901f664-336b-42d2-bbf7-58951befc8d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf05039eb-b7')
Sep 30 18:37:00 compute-0 nova_compute[265391]: 2025-09-30 18:37:00.681 2 DEBUG oslo_concurrency.lockutils [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:37:00 compute-0 nova_compute[265391]: 2025-09-30 18:37:00.682 2 DEBUG oslo_concurrency.lockutils [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:37:00 compute-0 nova_compute[265391]: 2025-09-30 18:37:00.682 2 DEBUG oslo_concurrency.lockutils [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:37:00 compute-0 nova_compute[265391]: 2025-09-30 18:37:00.682 2 DEBUG nova.compute.manager [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10082
Sep 30 18:37:00 compute-0 nova_compute[265391]: 2025-09-30 18:37:00.682 2 INFO nova.virt.libvirt.driver [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Deleting instance files /var/lib/nova/instances/342a3981-de33-491a-974b-5566045fba97_del
Sep 30 18:37:00 compute-0 nova_compute[265391]: 2025-09-30 18:37:00.683 2 INFO nova.virt.libvirt.driver [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Deletion of /var/lib/nova/instances/342a3981-de33-491a-974b-5566045fba97_del complete
Sep 30 18:37:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:37:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:37:01.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:01 compute-0 nova_compute[265391]: 2025-09-30 18:37:01.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:37:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:37:01.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:01 compute-0 openstack_network_exporter[279566]: ERROR   18:37:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:37:01 compute-0 openstack_network_exporter[279566]: ERROR   18:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:37:01 compute-0 openstack_network_exporter[279566]: ERROR   18:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:37:01 compute-0 openstack_network_exporter[279566]: ERROR   18:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:37:01 compute-0 openstack_network_exporter[279566]: ERROR   18:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:37:01 compute-0 nova_compute[265391]: 2025-09-30 18:37:01.702 2 DEBUG nova.compute.manager [req-da8d6c0e-676c-4ccb-a4bd-4b05f47ba673 req-0c4b6662-da58-47ee-a470-0dbf7e2c5035 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Received event network-vif-plugged-f05039eb-b7e1-4072-bc17-63c6787538a1 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:37:01 compute-0 nova_compute[265391]: 2025-09-30 18:37:01.702 2 DEBUG oslo_concurrency.lockutils [req-da8d6c0e-676c-4ccb-a4bd-4b05f47ba673 req-0c4b6662-da58-47ee-a470-0dbf7e2c5035 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "342a3981-de33-491a-974b-5566045fba97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:37:01 compute-0 nova_compute[265391]: 2025-09-30 18:37:01.703 2 DEBUG oslo_concurrency.lockutils [req-da8d6c0e-676c-4ccb-a4bd-4b05f47ba673 req-0c4b6662-da58-47ee-a470-0dbf7e2c5035 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "342a3981-de33-491a-974b-5566045fba97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:37:01 compute-0 nova_compute[265391]: 2025-09-30 18:37:01.703 2 DEBUG oslo_concurrency.lockutils [req-da8d6c0e-676c-4ccb-a4bd-4b05f47ba673 req-0c4b6662-da58-47ee-a470-0dbf7e2c5035 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "342a3981-de33-491a-974b-5566045fba97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:37:01 compute-0 nova_compute[265391]: 2025-09-30 18:37:01.704 2 DEBUG nova.compute.manager [req-da8d6c0e-676c-4ccb-a4bd-4b05f47ba673 req-0c4b6662-da58-47ee-a470-0dbf7e2c5035 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] No waiting events found dispatching network-vif-plugged-f05039eb-b7e1-4072-bc17-63c6787538a1 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:37:01 compute-0 nova_compute[265391]: 2025-09-30 18:37:01.704 2 WARNING nova.compute.manager [req-da8d6c0e-676c-4ccb-a4bd-4b05f47ba673 req-0c4b6662-da58-47ee-a470-0dbf7e2c5035 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Received unexpected event network-vif-plugged-f05039eb-b7e1-4072-bc17-63c6787538a1 for instance with vm_state active and task_state migrating.
Sep 30 18:37:01 compute-0 nova_compute[265391]: 2025-09-30 18:37:01.704 2 DEBUG nova.compute.manager [req-da8d6c0e-676c-4ccb-a4bd-4b05f47ba673 req-0c4b6662-da58-47ee-a470-0dbf7e2c5035 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Received event network-vif-unplugged-f05039eb-b7e1-4072-bc17-63c6787538a1 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:37:01 compute-0 nova_compute[265391]: 2025-09-30 18:37:01.705 2 DEBUG oslo_concurrency.lockutils [req-da8d6c0e-676c-4ccb-a4bd-4b05f47ba673 req-0c4b6662-da58-47ee-a470-0dbf7e2c5035 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "342a3981-de33-491a-974b-5566045fba97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:37:01 compute-0 nova_compute[265391]: 2025-09-30 18:37:01.705 2 DEBUG oslo_concurrency.lockutils [req-da8d6c0e-676c-4ccb-a4bd-4b05f47ba673 req-0c4b6662-da58-47ee-a470-0dbf7e2c5035 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "342a3981-de33-491a-974b-5566045fba97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:37:01 compute-0 nova_compute[265391]: 2025-09-30 18:37:01.706 2 DEBUG oslo_concurrency.lockutils [req-da8d6c0e-676c-4ccb-a4bd-4b05f47ba673 req-0c4b6662-da58-47ee-a470-0dbf7e2c5035 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "342a3981-de33-491a-974b-5566045fba97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:37:01 compute-0 nova_compute[265391]: 2025-09-30 18:37:01.706 2 DEBUG nova.compute.manager [req-da8d6c0e-676c-4ccb-a4bd-4b05f47ba673 req-0c4b6662-da58-47ee-a470-0dbf7e2c5035 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] No waiting events found dispatching network-vif-unplugged-f05039eb-b7e1-4072-bc17-63c6787538a1 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:37:01 compute-0 nova_compute[265391]: 2025-09-30 18:37:01.707 2 DEBUG nova.compute.manager [req-da8d6c0e-676c-4ccb-a4bd-4b05f47ba673 req-0c4b6662-da58-47ee-a470-0dbf7e2c5035 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Received event network-vif-unplugged-f05039eb-b7e1-4072-bc17-63c6787538a1 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:37:01 compute-0 nova_compute[265391]: 2025-09-30 18:37:01.707 2 DEBUG nova.compute.manager [req-da8d6c0e-676c-4ccb-a4bd-4b05f47ba673 req-0c4b6662-da58-47ee-a470-0dbf7e2c5035 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Received event network-vif-plugged-f05039eb-b7e1-4072-bc17-63c6787538a1 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:37:01 compute-0 nova_compute[265391]: 2025-09-30 18:37:01.708 2 DEBUG oslo_concurrency.lockutils [req-da8d6c0e-676c-4ccb-a4bd-4b05f47ba673 req-0c4b6662-da58-47ee-a470-0dbf7e2c5035 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "342a3981-de33-491a-974b-5566045fba97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:37:01 compute-0 nova_compute[265391]: 2025-09-30 18:37:01.708 2 DEBUG oslo_concurrency.lockutils [req-da8d6c0e-676c-4ccb-a4bd-4b05f47ba673 req-0c4b6662-da58-47ee-a470-0dbf7e2c5035 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "342a3981-de33-491a-974b-5566045fba97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:37:01 compute-0 nova_compute[265391]: 2025-09-30 18:37:01.709 2 DEBUG oslo_concurrency.lockutils [req-da8d6c0e-676c-4ccb-a4bd-4b05f47ba673 req-0c4b6662-da58-47ee-a470-0dbf7e2c5035 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "342a3981-de33-491a-974b-5566045fba97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:37:01 compute-0 nova_compute[265391]: 2025-09-30 18:37:01.709 2 DEBUG nova.compute.manager [req-da8d6c0e-676c-4ccb-a4bd-4b05f47ba673 req-0c4b6662-da58-47ee-a470-0dbf7e2c5035 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] No waiting events found dispatching network-vif-plugged-f05039eb-b7e1-4072-bc17-63c6787538a1 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:37:01 compute-0 nova_compute[265391]: 2025-09-30 18:37:01.710 2 WARNING nova.compute.manager [req-da8d6c0e-676c-4ccb-a4bd-4b05f47ba673 req-0c4b6662-da58-47ee-a470-0dbf7e2c5035 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Received unexpected event network-vif-plugged-f05039eb-b7e1-4072-bc17-63c6787538a1 for instance with vm_state active and task_state migrating.
Sep 30 18:37:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1754: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 4.7 KiB/s rd, 170 B/s wr, 5 op/s
Sep 30 18:37:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:37:03.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:37:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:37:03.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:37:03 compute-0 ceph-mon[73755]: pgmap v1754: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 4.7 KiB/s rd, 170 B/s wr, 5 op/s
Sep 30 18:37:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:37:03.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:37:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:37:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:37:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:37:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:37:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1755: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 5.2 KiB/s rd, 170 B/s wr, 6 op/s
Sep 30 18:37:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:37:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:37:05.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:37:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:37:05.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:05 compute-0 ceph-mon[73755]: pgmap v1755: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 5.2 KiB/s rd, 170 B/s wr, 6 op/s
Sep 30 18:37:05 compute-0 nova_compute[265391]: 2025-09-30 18:37:05.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:37:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:37:06 compute-0 nova_compute[265391]: 2025-09-30 18:37:06.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:37:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1756: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 170 B/s wr, 6 op/s
Sep 30 18:37:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:37:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:37:07.322Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:37:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:37:07.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:37:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:37:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:37:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:37:07.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:37:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:37:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:37:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:37:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:37:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:37:07 compute-0 ceph-mon[73755]: pgmap v1756: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 170 B/s wr, 6 op/s
Sep 30 18:37:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0022767699074526275 of space, bias 1.0, pg target 0.4553539814905255 quantized to 32 (current 32)
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:37:08
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['vms', '.nfs', 'default.rgw.control', '.rgw.root', 'volumes', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'default.rgw.log', '.mgr']
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1757: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 170 B/s wr, 6 op/s
Sep 30 18:37:08 compute-0 podman[341270]: 2025-09-30 18:37:08.543149461 +0000 UTC m=+0.071083390 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=iscsid, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.build-date=20250930)
Sep 30 18:37:08 compute-0 podman[341269]: 2025-09-30 18:37:08.54778424 +0000 UTC m=+0.078466160 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest)
Sep 30 18:37:08 compute-0 podman[341271]: 2025-09-30 18:37:08.560045293 +0000 UTC m=+0.082557443 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, version=9.6, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-type=git)
Sep 30 18:37:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:37:08] "GET /metrics HTTP/1.1" 200 46724 "" "Prometheus/2.51.0"
Sep 30 18:37:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:37:08] "GET /metrics HTTP/1.1" 200 46724 "" "Prometheus/2.51.0"
Sep 30 18:37:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:37:08.870Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:37:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:37:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:37:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:37:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:37:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:37:09.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:37:09.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:09 compute-0 ceph-mon[73755]: pgmap v1757: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 170 B/s wr, 6 op/s
Sep 30 18:37:10 compute-0 nova_compute[265391]: 2025-09-30 18:37:10.228 2 DEBUG oslo_concurrency.lockutils [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "342a3981-de33-491a-974b-5566045fba97-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:37:10 compute-0 nova_compute[265391]: 2025-09-30 18:37:10.228 2 DEBUG oslo_concurrency.lockutils [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "342a3981-de33-491a-974b-5566045fba97-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:37:10 compute-0 nova_compute[265391]: 2025-09-30 18:37:10.229 2 DEBUG oslo_concurrency.lockutils [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "342a3981-de33-491a-974b-5566045fba97-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:37:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1758: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 5.2 KiB/s rd, 170 B/s wr, 6 op/s
Sep 30 18:37:10 compute-0 nova_compute[265391]: 2025-09-30 18:37:10.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:37:10 compute-0 nova_compute[265391]: 2025-09-30 18:37:10.743 2 DEBUG oslo_concurrency.lockutils [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:37:10 compute-0 nova_compute[265391]: 2025-09-30 18:37:10.744 2 DEBUG oslo_concurrency.lockutils [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:37:10 compute-0 nova_compute[265391]: 2025-09-30 18:37:10.744 2 DEBUG oslo_concurrency.lockutils [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:37:10 compute-0 nova_compute[265391]: 2025-09-30 18:37:10.744 2 DEBUG nova.compute.resource_tracker [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:37:10 compute-0 nova_compute[265391]: 2025-09-30 18:37:10.745 2 DEBUG oslo_concurrency.processutils [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:37:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:37:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:37:11 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/633928403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:37:11 compute-0 nova_compute[265391]: 2025-09-30 18:37:11.217 2 DEBUG oslo_concurrency.processutils [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:37:11 compute-0 nova_compute[265391]: 2025-09-30 18:37:11.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:37:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:37:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:37:11.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:37:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:37:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:37:11.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:37:11 compute-0 nova_compute[265391]: 2025-09-30 18:37:11.385 2 WARNING nova.virt.libvirt.driver [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:37:11 compute-0 nova_compute[265391]: 2025-09-30 18:37:11.387 2 DEBUG oslo_concurrency.processutils [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:37:11 compute-0 nova_compute[265391]: 2025-09-30 18:37:11.406 2 DEBUG oslo_concurrency.processutils [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.019s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:37:11 compute-0 nova_compute[265391]: 2025-09-30 18:37:11.407 2 DEBUG nova.compute.resource_tracker [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4386MB free_disk=39.9011344909668GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:37:11 compute-0 nova_compute[265391]: 2025-09-30 18:37:11.407 2 DEBUG oslo_concurrency.lockutils [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:37:11 compute-0 nova_compute[265391]: 2025-09-30 18:37:11.407 2 DEBUG oslo_concurrency.lockutils [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:37:11 compute-0 ceph-mon[73755]: pgmap v1758: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 5.2 KiB/s rd, 170 B/s wr, 6 op/s
Sep 30 18:37:11 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/633928403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:37:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1759: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 682 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:37:12 compute-0 nova_compute[265391]: 2025-09-30 18:37:12.426 2 DEBUG nova.compute.resource_tracker [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Migration for instance 342a3981-de33-491a-974b-5566045fba97 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Sep 30 18:37:12 compute-0 ceph-mon[73755]: pgmap v1759: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 682 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:37:12 compute-0 nova_compute[265391]: 2025-09-30 18:37:12.938 2 DEBUG nova.compute.resource_tracker [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1596
Sep 30 18:37:12 compute-0 nova_compute[265391]: 2025-09-30 18:37:12.962 2 DEBUG nova.compute.resource_tracker [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Migration 3b051e9f-cf53-41ea-a9f1-d01148892c63 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Sep 30 18:37:12 compute-0 nova_compute[265391]: 2025-09-30 18:37:12.962 2 DEBUG nova.compute.resource_tracker [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:37:12 compute-0 nova_compute[265391]: 2025-09-30 18:37:12.963 2 DEBUG nova.compute.resource_tracker [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:37:11 up  1:40,  0 user,  load average: 1.16, 0.97, 0.90\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:37:12 compute-0 nova_compute[265391]: 2025-09-30 18:37:12.989 2 DEBUG oslo_concurrency.processutils [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:37:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:37:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:37:13.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:37:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:37:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:37:13.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:37:13 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:37:13 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1208745743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:37:13 compute-0 nova_compute[265391]: 2025-09-30 18:37:13.457 2 DEBUG oslo_concurrency.processutils [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:37:13 compute-0 nova_compute[265391]: 2025-09-30 18:37:13.463 2 DEBUG nova.compute.provider_tree [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:37:13 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1208745743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:37:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:37:13.849Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:37:13 compute-0 nova_compute[265391]: 2025-09-30 18:37:13.972 2 DEBUG nova.scheduler.client.report [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:37:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:37:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:37:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:37:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:37:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1760: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s
Sep 30 18:37:14 compute-0 nova_compute[265391]: 2025-09-30 18:37:14.485 2 DEBUG nova.compute.resource_tracker [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:37:14 compute-0 nova_compute[265391]: 2025-09-30 18:37:14.486 2 DEBUG oslo_concurrency.lockutils [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.079s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:37:14 compute-0 nova_compute[265391]: 2025-09-30 18:37:14.513 2 INFO nova.compute.manager [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Sep 30 18:37:14 compute-0 ceph-mon[73755]: pgmap v1760: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s
Sep 30 18:37:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:37:15.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:37:15.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:15 compute-0 nova_compute[265391]: 2025-09-30 18:37:15.574 2 INFO nova.scheduler.client.report [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Deleted allocation for migration 3b051e9f-cf53-41ea-a9f1-d01148892c63
Sep 30 18:37:15 compute-0 nova_compute[265391]: 2025-09-30 18:37:15.575 2 DEBUG nova.virt.libvirt.driver [None req-1f634a2c-f1ad-4186-836f-397e55ef424a 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 342a3981-de33-491a-974b-5566045fba97] Live migration monitoring is all done _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11566
Sep 30 18:37:15 compute-0 nova_compute[265391]: 2025-09-30 18:37:15.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:37:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:37:16 compute-0 nova_compute[265391]: 2025-09-30 18:37:16.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:37:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1761: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 682 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:37:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:37:17.324Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:37:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:37:17.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:37:17.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:17 compute-0 ceph-mon[73755]: pgmap v1761: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 682 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:37:18 compute-0 sudo[341387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:37:18 compute-0 sudo[341387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:37:18 compute-0 sudo[341387]: pam_unix(sudo:session): session closed for user root
Sep 30 18:37:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1762: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 682 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:37:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:37:18] "GET /metrics HTTP/1.1" 200 46724 "" "Prometheus/2.51.0"
Sep 30 18:37:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:37:18] "GET /metrics HTTP/1.1" 200 46724 "" "Prometheus/2.51.0"
Sep 30 18:37:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:37:18.870Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:37:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:37:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:37:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:37:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:37:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:37:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:37:19.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:37:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:37:19.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:19 compute-0 ceph-mon[73755]: pgmap v1762: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 682 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:37:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1763: 353 pgs: 353 active+clean; 121 MiB data, 376 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:37:20 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1675240793' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:37:20 compute-0 nova_compute[265391]: 2025-09-30 18:37:20.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:37:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:37:21 compute-0 nova_compute[265391]: 2025-09-30 18:37:21.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:37:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:37:21.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:37:21.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:21 compute-0 ceph-mon[73755]: pgmap v1763: 353 pgs: 353 active+clean; 121 MiB data, 376 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:37:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-crash-compute-0[79187]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Sep 30 18:37:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:37:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1764: 353 pgs: 353 active+clean; 121 MiB data, 376 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:37:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:37:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:37:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:37:23.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:37:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:37:23.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:37:23 compute-0 ceph-mon[73755]: pgmap v1764: 353 pgs: 353 active+clean; 121 MiB data, 376 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:37:23 compute-0 sudo[341417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:37:23 compute-0 sudo[341417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:37:23 compute-0 sudo[341417]: pam_unix(sudo:session): session closed for user root
Sep 30 18:37:23 compute-0 sudo[341443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:37:23 compute-0 sudo[341443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:37:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:37:23.850Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:37:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:37:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:37:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:37:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:37:24 compute-0 sudo[341443]: pam_unix(sudo:session): session closed for user root
Sep 30 18:37:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1765: 353 pgs: 353 active+clean; 121 MiB data, 376 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:37:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:37:24 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:37:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:37:24 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:37:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1766: 353 pgs: 353 active+clean; 121 MiB data, 376 MiB used, 40 GiB / 40 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Sep 30 18:37:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:37:24 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:37:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:37:24 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:37:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:37:24 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:37:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:37:24 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:37:24 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:37:24 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:37:24 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:37:24 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:37:24 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:37:24 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:37:24 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:37:24 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:37:24 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:37:24 compute-0 sudo[341502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:37:24 compute-0 sudo[341502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:37:24 compute-0 sudo[341502]: pam_unix(sudo:session): session closed for user root
Sep 30 18:37:24 compute-0 sudo[341527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:37:24 compute-0 sudo[341527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:37:24 compute-0 podman[341594]: 2025-09-30 18:37:24.944944723 +0000 UTC m=+0.048392490 container create 002123abf4ff5099bbbd5920b7a453e60fd058ebc4a56f6e53083671ecc43af0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_shannon, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:37:24 compute-0 systemd[1]: Started libpod-conmon-002123abf4ff5099bbbd5920b7a453e60fd058ebc4a56f6e53083671ecc43af0.scope.
Sep 30 18:37:25 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:37:25 compute-0 podman[341594]: 2025-09-30 18:37:24.923944175 +0000 UTC m=+0.027391992 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:37:25 compute-0 podman[341594]: 2025-09-30 18:37:25.026661874 +0000 UTC m=+0.130109661 container init 002123abf4ff5099bbbd5920b7a453e60fd058ebc4a56f6e53083671ecc43af0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:37:25 compute-0 podman[341594]: 2025-09-30 18:37:25.032872343 +0000 UTC m=+0.136320110 container start 002123abf4ff5099bbbd5920b7a453e60fd058ebc4a56f6e53083671ecc43af0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_shannon, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Sep 30 18:37:25 compute-0 podman[341594]: 2025-09-30 18:37:25.035624183 +0000 UTC m=+0.139071950 container attach 002123abf4ff5099bbbd5920b7a453e60fd058ebc4a56f6e53083671ecc43af0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:37:25 compute-0 admiring_shannon[341610]: 167 167
Sep 30 18:37:25 compute-0 systemd[1]: libpod-002123abf4ff5099bbbd5920b7a453e60fd058ebc4a56f6e53083671ecc43af0.scope: Deactivated successfully.
Sep 30 18:37:25 compute-0 podman[341594]: 2025-09-30 18:37:25.038746953 +0000 UTC m=+0.142194750 container died 002123abf4ff5099bbbd5920b7a453e60fd058ebc4a56f6e53083671ecc43af0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Sep 30 18:37:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-c786c896b9de56e8f985c2139ec3a918680cf102ae8d2b1012afe91dd13b1ba7-merged.mount: Deactivated successfully.
Sep 30 18:37:25 compute-0 podman[341594]: 2025-09-30 18:37:25.071889772 +0000 UTC m=+0.175337539 container remove 002123abf4ff5099bbbd5920b7a453e60fd058ebc4a56f6e53083671ecc43af0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_shannon, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:37:25 compute-0 systemd[1]: libpod-conmon-002123abf4ff5099bbbd5920b7a453e60fd058ebc4a56f6e53083671ecc43af0.scope: Deactivated successfully.
Sep 30 18:37:25 compute-0 podman[341636]: 2025-09-30 18:37:25.237425258 +0000 UTC m=+0.040234671 container create d831d66293a3d41449e253ebfe2ff65bafe2a4ad925f05714b9e1181c35153e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_mcnulty, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:37:25 compute-0 systemd[1]: Started libpod-conmon-d831d66293a3d41449e253ebfe2ff65bafe2a4ad925f05714b9e1181c35153e5.scope.
Sep 30 18:37:25 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:37:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a807a3ac41cd919f384c0582e5b6043149ff2870abb6dac547374c44d45a7b47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:37:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a807a3ac41cd919f384c0582e5b6043149ff2870abb6dac547374c44d45a7b47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:37:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a807a3ac41cd919f384c0582e5b6043149ff2870abb6dac547374c44d45a7b47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:37:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a807a3ac41cd919f384c0582e5b6043149ff2870abb6dac547374c44d45a7b47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:37:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a807a3ac41cd919f384c0582e5b6043149ff2870abb6dac547374c44d45a7b47/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:37:25 compute-0 podman[341636]: 2025-09-30 18:37:25.219657713 +0000 UTC m=+0.022467156 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:37:25 compute-0 podman[341636]: 2025-09-30 18:37:25.316323527 +0000 UTC m=+0.119132990 container init d831d66293a3d41449e253ebfe2ff65bafe2a4ad925f05714b9e1181c35153e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_mcnulty, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:37:25 compute-0 podman[341636]: 2025-09-30 18:37:25.323738727 +0000 UTC m=+0.126548140 container start d831d66293a3d41449e253ebfe2ff65bafe2a4ad925f05714b9e1181c35153e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_mcnulty, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Sep 30 18:37:25 compute-0 podman[341636]: 2025-09-30 18:37:25.327251147 +0000 UTC m=+0.130060600 container attach d831d66293a3d41449e253ebfe2ff65bafe2a4ad925f05714b9e1181c35153e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_mcnulty, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:37:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:37:25.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:37:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:37:25.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:37:25 compute-0 ceph-mon[73755]: pgmap v1765: 353 pgs: 353 active+clean; 121 MiB data, 376 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:37:25 compute-0 ceph-mon[73755]: pgmap v1766: 353 pgs: 353 active+clean; 121 MiB data, 376 MiB used, 40 GiB / 40 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Sep 30 18:37:25 compute-0 hardcore_mcnulty[341653]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:37:25 compute-0 hardcore_mcnulty[341653]: --> All data devices are unavailable
Sep 30 18:37:25 compute-0 systemd[1]: libpod-d831d66293a3d41449e253ebfe2ff65bafe2a4ad925f05714b9e1181c35153e5.scope: Deactivated successfully.
Sep 30 18:37:25 compute-0 podman[341636]: 2025-09-30 18:37:25.711792787 +0000 UTC m=+0.514602200 container died d831d66293a3d41449e253ebfe2ff65bafe2a4ad925f05714b9e1181c35153e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_mcnulty, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Sep 30 18:37:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-a807a3ac41cd919f384c0582e5b6043149ff2870abb6dac547374c44d45a7b47-merged.mount: Deactivated successfully.
Sep 30 18:37:25 compute-0 podman[341636]: 2025-09-30 18:37:25.755130286 +0000 UTC m=+0.557939699 container remove d831d66293a3d41449e253ebfe2ff65bafe2a4ad925f05714b9e1181c35153e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Sep 30 18:37:25 compute-0 systemd[1]: libpod-conmon-d831d66293a3d41449e253ebfe2ff65bafe2a4ad925f05714b9e1181c35153e5.scope: Deactivated successfully.
Sep 30 18:37:25 compute-0 sudo[341527]: pam_unix(sudo:session): session closed for user root
Sep 30 18:37:25 compute-0 sudo[341683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:37:25 compute-0 sudo[341683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:37:25 compute-0 sudo[341683]: pam_unix(sudo:session): session closed for user root
Sep 30 18:37:25 compute-0 sudo[341708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:37:25 compute-0 sudo[341708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:37:25 compute-0 nova_compute[265391]: 2025-09-30 18:37:25.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:37:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:37:26 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Sep 30 18:37:26 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:37:26.091649) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 18:37:26 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Sep 30 18:37:26 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257446091689, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 720, "num_deletes": 259, "total_data_size": 868656, "memory_usage": 883112, "flush_reason": "Manual Compaction"}
Sep 30 18:37:26 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Sep 30 18:37:26 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257446097884, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 852435, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44675, "largest_seqno": 45394, "table_properties": {"data_size": 848866, "index_size": 1349, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8476, "raw_average_key_size": 18, "raw_value_size": 841465, "raw_average_value_size": 1882, "num_data_blocks": 60, "num_entries": 447, "num_filter_entries": 447, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759257397, "oldest_key_time": 1759257397, "file_creation_time": 1759257446, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:37:26 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 6286 microseconds, and 3414 cpu microseconds.
Sep 30 18:37:26 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:37:26 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:37:26.097934) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 852435 bytes OK
Sep 30 18:37:26 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:37:26.097954) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Sep 30 18:37:26 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:37:26.099437) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Sep 30 18:37:26 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:37:26.099455) EVENT_LOG_v1 {"time_micros": 1759257446099449, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 18:37:26 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:37:26.099470) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 18:37:26 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 864929, prev total WAL file size 864929, number of live WAL files 2.
Sep 30 18:37:26 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:37:26 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:37:26.099927) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353032' seq:72057594037927935, type:22 .. '6C6F676D0031373537' seq:0, type:0; will stop at (end)
Sep 30 18:37:26 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 18:37:26 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(832KB)], [101(10MB)]
Sep 30 18:37:26 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257446099959, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 11944483, "oldest_snapshot_seqno": -1}
Sep 30 18:37:26 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 6864 keys, 11833904 bytes, temperature: kUnknown
Sep 30 18:37:26 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257446144822, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 11833904, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11791708, "index_size": 23908, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17221, "raw_key_size": 181162, "raw_average_key_size": 26, "raw_value_size": 11672113, "raw_average_value_size": 1700, "num_data_blocks": 935, "num_entries": 6864, "num_filter_entries": 6864, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759257446, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:37:26 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:37:26 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:37:26.145026) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 11833904 bytes
Sep 30 18:37:26 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:37:26.146387) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 265.9 rd, 263.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 10.6 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(27.9) write-amplify(13.9) OK, records in: 7394, records dropped: 530 output_compression: NoCompression
Sep 30 18:37:26 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:37:26.146402) EVENT_LOG_v1 {"time_micros": 1759257446146395, "job": 60, "event": "compaction_finished", "compaction_time_micros": 44919, "compaction_time_cpu_micros": 23641, "output_level": 6, "num_output_files": 1, "total_output_size": 11833904, "num_input_records": 7394, "num_output_records": 6864, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 18:37:26 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:37:26 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257446146599, "job": 60, "event": "table_file_deletion", "file_number": 103}
Sep 30 18:37:26 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:37:26 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257446148203, "job": 60, "event": "table_file_deletion", "file_number": 101}
Sep 30 18:37:26 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:37:26.099862) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:37:26 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:37:26.148245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:37:26 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:37:26.148249) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:37:26 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:37:26.148250) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:37:26 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:37:26.148251) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:37:26 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:37:26.148253) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:37:26 compute-0 podman[341772]: 2025-09-30 18:37:26.255216824 +0000 UTC m=+0.031332913 container create 94e6ba7c2f5e6dbb87b01828efe3395f921afd3a8e0b852d21a27066b180d7b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:37:26 compute-0 systemd[1]: Started libpod-conmon-94e6ba7c2f5e6dbb87b01828efe3395f921afd3a8e0b852d21a27066b180d7b0.scope.
Sep 30 18:37:26 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:37:26 compute-0 podman[341772]: 2025-09-30 18:37:26.319660753 +0000 UTC m=+0.095776862 container init 94e6ba7c2f5e6dbb87b01828efe3395f921afd3a8e0b852d21a27066b180d7b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_germain, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Sep 30 18:37:26 compute-0 nova_compute[265391]: 2025-09-30 18:37:26.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:37:26 compute-0 podman[341772]: 2025-09-30 18:37:26.327159365 +0000 UTC m=+0.103275454 container start 94e6ba7c2f5e6dbb87b01828efe3395f921afd3a8e0b852d21a27066b180d7b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_germain, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Sep 30 18:37:26 compute-0 podman[341772]: 2025-09-30 18:37:26.330313286 +0000 UTC m=+0.106429395 container attach 94e6ba7c2f5e6dbb87b01828efe3395f921afd3a8e0b852d21a27066b180d7b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:37:26 compute-0 priceless_germain[341789]: 167 167
Sep 30 18:37:26 compute-0 systemd[1]: libpod-94e6ba7c2f5e6dbb87b01828efe3395f921afd3a8e0b852d21a27066b180d7b0.scope: Deactivated successfully.
Sep 30 18:37:26 compute-0 podman[341772]: 2025-09-30 18:37:26.331462085 +0000 UTC m=+0.107578194 container died 94e6ba7c2f5e6dbb87b01828efe3395f921afd3a8e0b852d21a27066b180d7b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_germain, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:37:26 compute-0 podman[341772]: 2025-09-30 18:37:26.242102638 +0000 UTC m=+0.018218737 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:37:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-d66327f1fc899211e52ca1c1c11124f1d37af0edaf95235fbee9f44713562d41-merged.mount: Deactivated successfully.
Sep 30 18:37:26 compute-0 podman[341772]: 2025-09-30 18:37:26.368057592 +0000 UTC m=+0.144173681 container remove 94e6ba7c2f5e6dbb87b01828efe3395f921afd3a8e0b852d21a27066b180d7b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Sep 30 18:37:26 compute-0 systemd[1]: libpod-conmon-94e6ba7c2f5e6dbb87b01828efe3395f921afd3a8e0b852d21a27066b180d7b0.scope: Deactivated successfully.
Sep 30 18:37:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1767: 353 pgs: 353 active+clean; 121 MiB data, 376 MiB used, 40 GiB / 40 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Sep 30 18:37:26 compute-0 podman[341812]: 2025-09-30 18:37:26.572027481 +0000 UTC m=+0.049145639 container create ee95ce1f3d54cc2eec17374dcc8e9400f7d1b39d91dc729864a75140912ebcd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_booth, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Sep 30 18:37:26 compute-0 systemd[1]: Started libpod-conmon-ee95ce1f3d54cc2eec17374dcc8e9400f7d1b39d91dc729864a75140912ebcd2.scope.
Sep 30 18:37:26 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:37:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43ec6cdd211f8328fee3186a8bed64e71bd249faf4e613f702f29ccd6dc7b9a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:37:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43ec6cdd211f8328fee3186a8bed64e71bd249faf4e613f702f29ccd6dc7b9a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:37:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43ec6cdd211f8328fee3186a8bed64e71bd249faf4e613f702f29ccd6dc7b9a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:37:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43ec6cdd211f8328fee3186a8bed64e71bd249faf4e613f702f29ccd6dc7b9a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:37:26 compute-0 podman[341812]: 2025-09-30 18:37:26.557701684 +0000 UTC m=+0.034819872 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:37:26 compute-0 podman[341812]: 2025-09-30 18:37:26.662769513 +0000 UTC m=+0.139887701 container init ee95ce1f3d54cc2eec17374dcc8e9400f7d1b39d91dc729864a75140912ebcd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_booth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:37:26 compute-0 podman[341812]: 2025-09-30 18:37:26.670381838 +0000 UTC m=+0.147499996 container start ee95ce1f3d54cc2eec17374dcc8e9400f7d1b39d91dc729864a75140912ebcd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_booth, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Sep 30 18:37:26 compute-0 podman[341812]: 2025-09-30 18:37:26.673877457 +0000 UTC m=+0.150995635 container attach ee95ce1f3d54cc2eec17374dcc8e9400f7d1b39d91dc729864a75140912ebcd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_booth, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Sep 30 18:37:26 compute-0 youthful_booth[341829]: {
Sep 30 18:37:26 compute-0 youthful_booth[341829]:     "0": [
Sep 30 18:37:26 compute-0 youthful_booth[341829]:         {
Sep 30 18:37:26 compute-0 youthful_booth[341829]:             "devices": [
Sep 30 18:37:26 compute-0 youthful_booth[341829]:                 "/dev/loop3"
Sep 30 18:37:26 compute-0 youthful_booth[341829]:             ],
Sep 30 18:37:26 compute-0 youthful_booth[341829]:             "lv_name": "ceph_lv0",
Sep 30 18:37:26 compute-0 youthful_booth[341829]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:37:26 compute-0 youthful_booth[341829]:             "lv_size": "21470642176",
Sep 30 18:37:26 compute-0 youthful_booth[341829]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:37:26 compute-0 youthful_booth[341829]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:37:26 compute-0 youthful_booth[341829]:             "name": "ceph_lv0",
Sep 30 18:37:26 compute-0 youthful_booth[341829]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:37:26 compute-0 youthful_booth[341829]:             "tags": {
Sep 30 18:37:26 compute-0 youthful_booth[341829]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:37:26 compute-0 youthful_booth[341829]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:37:26 compute-0 youthful_booth[341829]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:37:26 compute-0 youthful_booth[341829]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:37:26 compute-0 youthful_booth[341829]:                 "ceph.cluster_name": "ceph",
Sep 30 18:37:26 compute-0 youthful_booth[341829]:                 "ceph.crush_device_class": "",
Sep 30 18:37:26 compute-0 youthful_booth[341829]:                 "ceph.encrypted": "0",
Sep 30 18:37:26 compute-0 youthful_booth[341829]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:37:26 compute-0 youthful_booth[341829]:                 "ceph.osd_id": "0",
Sep 30 18:37:26 compute-0 youthful_booth[341829]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:37:26 compute-0 youthful_booth[341829]:                 "ceph.type": "block",
Sep 30 18:37:26 compute-0 youthful_booth[341829]:                 "ceph.vdo": "0",
Sep 30 18:37:26 compute-0 youthful_booth[341829]:                 "ceph.with_tpm": "0"
Sep 30 18:37:26 compute-0 youthful_booth[341829]:             },
Sep 30 18:37:26 compute-0 youthful_booth[341829]:             "type": "block",
Sep 30 18:37:26 compute-0 youthful_booth[341829]:             "vg_name": "ceph_vg0"
Sep 30 18:37:26 compute-0 youthful_booth[341829]:         }
Sep 30 18:37:26 compute-0 youthful_booth[341829]:     ]
Sep 30 18:37:26 compute-0 youthful_booth[341829]: }
Sep 30 18:37:26 compute-0 systemd[1]: libpod-ee95ce1f3d54cc2eec17374dcc8e9400f7d1b39d91dc729864a75140912ebcd2.scope: Deactivated successfully.
Sep 30 18:37:26 compute-0 podman[341812]: 2025-09-30 18:37:26.961079577 +0000 UTC m=+0.438197795 container died ee95ce1f3d54cc2eec17374dcc8e9400f7d1b39d91dc729864a75140912ebcd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Sep 30 18:37:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-43ec6cdd211f8328fee3186a8bed64e71bd249faf4e613f702f29ccd6dc7b9a0-merged.mount: Deactivated successfully.
Sep 30 18:37:27 compute-0 podman[341812]: 2025-09-30 18:37:27.013365745 +0000 UTC m=+0.490483913 container remove ee95ce1f3d54cc2eec17374dcc8e9400f7d1b39d91dc729864a75140912ebcd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Sep 30 18:37:27 compute-0 systemd[1]: libpod-conmon-ee95ce1f3d54cc2eec17374dcc8e9400f7d1b39d91dc729864a75140912ebcd2.scope: Deactivated successfully.
Sep 30 18:37:27 compute-0 sudo[341708]: pam_unix(sudo:session): session closed for user root
Sep 30 18:37:27 compute-0 ceph-mon[73755]: pgmap v1767: 353 pgs: 353 active+clean; 121 MiB data, 376 MiB used, 40 GiB / 40 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Sep 30 18:37:27 compute-0 sudo[341850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:37:27 compute-0 sudo[341850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:37:27 compute-0 sudo[341850]: pam_unix(sudo:session): session closed for user root
Sep 30 18:37:27 compute-0 sudo[341875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:37:27 compute-0 sudo[341875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:37:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:37:27.325Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:37:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:37:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:37:27.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:37:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:37:27.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:27 compute-0 podman[341942]: 2025-09-30 18:37:27.672865512 +0000 UTC m=+0.079740771 container create 40037254bb4cdf63540bb501dd5c143ca0adafa98b363370fdf97f4b102687d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_chatterjee, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:37:27 compute-0 systemd[1]: Started libpod-conmon-40037254bb4cdf63540bb501dd5c143ca0adafa98b363370fdf97f4b102687d4.scope.
Sep 30 18:37:27 compute-0 podman[341942]: 2025-09-30 18:37:27.635005223 +0000 UTC m=+0.041880572 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:37:27 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:37:27 compute-0 podman[341942]: 2025-09-30 18:37:27.78260373 +0000 UTC m=+0.189479019 container init 40037254bb4cdf63540bb501dd5c143ca0adafa98b363370fdf97f4b102687d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:37:27 compute-0 podman[341942]: 2025-09-30 18:37:27.789836196 +0000 UTC m=+0.196711455 container start 40037254bb4cdf63540bb501dd5c143ca0adafa98b363370fdf97f4b102687d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Sep 30 18:37:27 compute-0 podman[341942]: 2025-09-30 18:37:27.794386792 +0000 UTC m=+0.201262091 container attach 40037254bb4cdf63540bb501dd5c143ca0adafa98b363370fdf97f4b102687d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_chatterjee, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:37:27 compute-0 brave_chatterjee[341961]: 167 167
Sep 30 18:37:27 compute-0 systemd[1]: libpod-40037254bb4cdf63540bb501dd5c143ca0adafa98b363370fdf97f4b102687d4.scope: Deactivated successfully.
Sep 30 18:37:27 compute-0 conmon[341961]: conmon 40037254bb4cdf63540b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-40037254bb4cdf63540bb501dd5c143ca0adafa98b363370fdf97f4b102687d4.scope/container/memory.events
Sep 30 18:37:27 compute-0 podman[341942]: 2025-09-30 18:37:27.79901176 +0000 UTC m=+0.205887049 container died 40037254bb4cdf63540bb501dd5c143ca0adafa98b363370fdf97f4b102687d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Sep 30 18:37:27 compute-0 podman[341958]: 2025-09-30 18:37:27.814444545 +0000 UTC m=+0.086823423 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:37:27 compute-0 podman[341960]: 2025-09-30 18:37:27.814977269 +0000 UTC m=+0.082772229 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Sep 30 18:37:27 compute-0 podman[341959]: 2025-09-30 18:37:27.831540503 +0000 UTC m=+0.103362676 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Sep 30 18:37:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-7534bc5bb6228b92a0b7b9979c14089e1e16ff71b7f061e1e520e3b8201396a1-merged.mount: Deactivated successfully.
Sep 30 18:37:27 compute-0 podman[341942]: 2025-09-30 18:37:27.847186223 +0000 UTC m=+0.254061472 container remove 40037254bb4cdf63540bb501dd5c143ca0adafa98b363370fdf97f4b102687d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Sep 30 18:37:27 compute-0 systemd[1]: libpod-conmon-40037254bb4cdf63540bb501dd5c143ca0adafa98b363370fdf97f4b102687d4.scope: Deactivated successfully.
Sep 30 18:37:27 compute-0 podman[342049]: 2025-09-30 18:37:27.992119682 +0000 UTC m=+0.034058942 container create 6cb5109d14c28bcfe5d5b0303a929339431d0d9e3f453e35fca4a05c6d8c1853 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Sep 30 18:37:28 compute-0 systemd[1]: Started libpod-conmon-6cb5109d14c28bcfe5d5b0303a929339431d0d9e3f453e35fca4a05c6d8c1853.scope.
Sep 30 18:37:28 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:37:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c0daf8e3faabd4882de9f950e303ec89b5b8aaae00e08927781ab997e17a6be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:37:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c0daf8e3faabd4882de9f950e303ec89b5b8aaae00e08927781ab997e17a6be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:37:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c0daf8e3faabd4882de9f950e303ec89b5b8aaae00e08927781ab997e17a6be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:37:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c0daf8e3faabd4882de9f950e303ec89b5b8aaae00e08927781ab997e17a6be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:37:28 compute-0 podman[342049]: 2025-09-30 18:37:28.068171068 +0000 UTC m=+0.110110348 container init 6cb5109d14c28bcfe5d5b0303a929339431d0d9e3f453e35fca4a05c6d8c1853 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_hawking, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:37:28 compute-0 podman[342049]: 2025-09-30 18:37:27.977836077 +0000 UTC m=+0.019775357 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:37:28 compute-0 podman[342049]: 2025-09-30 18:37:28.079524099 +0000 UTC m=+0.121463359 container start 6cb5109d14c28bcfe5d5b0303a929339431d0d9e3f453e35fca4a05c6d8c1853 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_hawking, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:37:28 compute-0 podman[342049]: 2025-09-30 18:37:28.082626898 +0000 UTC m=+0.124566178 container attach 6cb5109d14c28bcfe5d5b0303a929339431d0d9e3f453e35fca4a05c6d8c1853 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:37:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1768: 353 pgs: 353 active+clean; 121 MiB data, 376 MiB used, 40 GiB / 40 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Sep 30 18:37:28 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Sep 30 18:37:28 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:37:28.448277) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 18:37:28 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Sep 30 18:37:28 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257448448310, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 277, "num_deletes": 251, "total_data_size": 77456, "memory_usage": 82992, "flush_reason": "Manual Compaction"}
Sep 30 18:37:28 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Sep 30 18:37:28 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257448450150, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 76971, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45395, "largest_seqno": 45671, "table_properties": {"data_size": 75043, "index_size": 156, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4859, "raw_average_key_size": 18, "raw_value_size": 71360, "raw_average_value_size": 268, "num_data_blocks": 7, "num_entries": 266, "num_filter_entries": 266, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759257447, "oldest_key_time": 1759257447, "file_creation_time": 1759257448, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:37:28 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 1899 microseconds, and 637 cpu microseconds.
Sep 30 18:37:28 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:37:28 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:37:28.450177) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 76971 bytes OK
Sep 30 18:37:28 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:37:28.450191) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Sep 30 18:37:28 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:37:28.450997) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Sep 30 18:37:28 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:37:28.451009) EVENT_LOG_v1 {"time_micros": 1759257448451005, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 18:37:28 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:37:28.451019) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 18:37:28 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 75362, prev total WAL file size 75362, number of live WAL files 2.
Sep 30 18:37:28 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:37:28 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:37:28.451294) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Sep 30 18:37:28 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 18:37:28 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(75KB)], [104(11MB)]
Sep 30 18:37:28 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257448451323, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 11910875, "oldest_snapshot_seqno": -1}
Sep 30 18:37:28 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 6621 keys, 9955637 bytes, temperature: kUnknown
Sep 30 18:37:28 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257448493545, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 9955637, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9916531, "index_size": 21458, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16581, "raw_key_size": 176802, "raw_average_key_size": 26, "raw_value_size": 9802553, "raw_average_value_size": 1480, "num_data_blocks": 825, "num_entries": 6621, "num_filter_entries": 6621, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759257448, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:37:28 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:37:28 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:37:28.493766) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 9955637 bytes
Sep 30 18:37:28 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:37:28.494853) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 281.6 rd, 235.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 11.3 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(284.1) write-amplify(129.3) OK, records in: 7130, records dropped: 509 output_compression: NoCompression
Sep 30 18:37:28 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:37:28.494870) EVENT_LOG_v1 {"time_micros": 1759257448494862, "job": 62, "event": "compaction_finished", "compaction_time_micros": 42301, "compaction_time_cpu_micros": 22667, "output_level": 6, "num_output_files": 1, "total_output_size": 9955637, "num_input_records": 7130, "num_output_records": 6621, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 18:37:28 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:37:28 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257448494974, "job": 62, "event": "table_file_deletion", "file_number": 106}
Sep 30 18:37:28 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:37:28 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257448496676, "job": 62, "event": "table_file_deletion", "file_number": 104}
Sep 30 18:37:28 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:37:28.451242) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:37:28 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:37:28.496735) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:37:28 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:37:28.496740) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:37:28 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:37:28.496742) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:37:28 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:37:28.496744) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:37:28 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:37:28.496746) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:37:28 compute-0 lvm[342139]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:37:28 compute-0 lvm[342139]: VG ceph_vg0 finished
Sep 30 18:37:28 compute-0 gallant_hawking[342065]: {}
Sep 30 18:37:28 compute-0 systemd[1]: libpod-6cb5109d14c28bcfe5d5b0303a929339431d0d9e3f453e35fca4a05c6d8c1853.scope: Deactivated successfully.
Sep 30 18:37:28 compute-0 systemd[1]: libpod-6cb5109d14c28bcfe5d5b0303a929339431d0d9e3f453e35fca4a05c6d8c1853.scope: Consumed 1.065s CPU time.
Sep 30 18:37:28 compute-0 conmon[342065]: conmon 6cb5109d14c28bcfe5d5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6cb5109d14c28bcfe5d5b0303a929339431d0d9e3f453e35fca4a05c6d8c1853.scope/container/memory.events
Sep 30 18:37:28 compute-0 podman[342049]: 2025-09-30 18:37:28.734904131 +0000 UTC m=+0.776843391 container died 6cb5109d14c28bcfe5d5b0303a929339431d0d9e3f453e35fca4a05c6d8c1853 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Sep 30 18:37:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c0daf8e3faabd4882de9f950e303ec89b5b8aaae00e08927781ab997e17a6be-merged.mount: Deactivated successfully.
Sep 30 18:37:28 compute-0 podman[342049]: 2025-09-30 18:37:28.780937939 +0000 UTC m=+0.822877199 container remove 6cb5109d14c28bcfe5d5b0303a929339431d0d9e3f453e35fca4a05c6d8c1853 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_hawking, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:37:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:37:28] "GET /metrics HTTP/1.1" 200 46726 "" "Prometheus/2.51.0"
Sep 30 18:37:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:37:28] "GET /metrics HTTP/1.1" 200 46726 "" "Prometheus/2.51.0"
Sep 30 18:37:28 compute-0 systemd[1]: libpod-conmon-6cb5109d14c28bcfe5d5b0303a929339431d0d9e3f453e35fca4a05c6d8c1853.scope: Deactivated successfully.
Sep 30 18:37:28 compute-0 sudo[341875]: pam_unix(sudo:session): session closed for user root
Sep 30 18:37:28 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:37:28 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:37:28 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:37:28 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:37:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:37:28.871Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:37:28 compute-0 sudo[342154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:37:28 compute-0 sudo[342154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:37:28 compute-0 sudo[342154]: pam_unix(sudo:session): session closed for user root
Sep 30 18:37:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:37:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:37:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:37:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:37:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:37:29.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:37:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:37:29.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:37:29 compute-0 ceph-mon[73755]: pgmap v1768: 353 pgs: 353 active+clean; 121 MiB data, 376 MiB used, 40 GiB / 40 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Sep 30 18:37:29 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:37:29 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:37:29 compute-0 podman[276673]: time="2025-09-30T18:37:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:37:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:37:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:37:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:37:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10304 "" "Go-http-client/1.1"
Sep 30 18:37:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:37:29.900 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:37:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:37:29.901 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:37:29 compute-0 nova_compute[265391]: 2025-09-30 18:37:29.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:37:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1769: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Sep 30 18:37:30 compute-0 sshd-session[341414]: error: kex_exchange_identification: read: Connection timed out
Sep 30 18:37:30 compute-0 sshd-session[341414]: banner exchange: Connection from 115.190.39.222 port 50586: Connection timed out
Sep 30 18:37:30 compute-0 nova_compute[265391]: 2025-09-30 18:37:30.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:37:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:37:31 compute-0 nova_compute[265391]: 2025-09-30 18:37:31.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:37:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:37:31.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:37:31.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:31 compute-0 openstack_network_exporter[279566]: ERROR   18:37:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:37:31 compute-0 openstack_network_exporter[279566]: ERROR   18:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:37:31 compute-0 openstack_network_exporter[279566]: ERROR   18:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:37:31 compute-0 openstack_network_exporter[279566]: ERROR   18:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:37:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:37:31 compute-0 openstack_network_exporter[279566]: ERROR   18:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:37:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:37:31 compute-0 ceph-mon[73755]: pgmap v1769: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Sep 30 18:37:31 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:37:31.903 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:37:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1770: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Sep 30 18:37:32 compute-0 nova_compute[265391]: 2025-09-30 18:37:32.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:37:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:37:33.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:37:33.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:33 compute-0 ceph-mon[73755]: pgmap v1770: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Sep 30 18:37:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:37:33.850Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:37:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:37:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:37:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:37:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:37:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1771: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Sep 30 18:37:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:37:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:37:35.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:37:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:37:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:37:35.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:37:35 compute-0 ceph-mon[73755]: pgmap v1771: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Sep 30 18:37:35 compute-0 nova_compute[265391]: 2025-09-30 18:37:35.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:37:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:37:36 compute-0 nova_compute[265391]: 2025-09-30 18:37:36.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:37:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1772: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:37:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:37:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2228795638' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:37:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:37:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2228795638' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:37:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2228795638' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:37:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2228795638' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:37:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:37:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:37:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:37:37.327Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:37:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:37:37.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:37:37.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:37:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:37:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:37:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:37:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:37:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:37:37 compute-0 ceph-mon[73755]: pgmap v1772: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:37:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:37:38 compute-0 sudo[342190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:37:38 compute-0 sudo[342190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:37:38 compute-0 sudo[342190]: pam_unix(sudo:session): session closed for user root
Sep 30 18:37:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1773: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:37:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:37:38] "GET /metrics HTTP/1.1" 200 46708 "" "Prometheus/2.51.0"
Sep 30 18:37:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:37:38] "GET /metrics HTTP/1.1" 200 46708 "" "Prometheus/2.51.0"
Sep 30 18:37:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:37:38.872Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:37:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:37:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:37:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:37:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:37:39 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Sep 30 18:37:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:37:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:37:39.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:37:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:37:39.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:39 compute-0 podman[342216]: 2025-09-30 18:37:39.416306346 +0000 UTC m=+0.121345326 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd)
Sep 30 18:37:39 compute-0 podman[342217]: 2025-09-30 18:37:39.417354913 +0000 UTC m=+0.114395658 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Sep 30 18:37:39 compute-0 podman[342218]: 2025-09-30 18:37:39.443652526 +0000 UTC m=+0.135394026 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-minimal-container, vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., distribution-scope=public, vendor=Red Hat, Inc.)
Sep 30 18:37:39 compute-0 ceph-mon[73755]: pgmap v1773: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:37:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1774: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:37:40 compute-0 nova_compute[265391]: 2025-09-30 18:37:40.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:37:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:37:41 compute-0 nova_compute[265391]: 2025-09-30 18:37:41.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:37:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:37:41.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:37:41.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:41 compute-0 ceph-mon[73755]: pgmap v1774: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:37:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1775: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:37:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:37:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:37:43.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:37:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:37:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:37:43.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:37:43 compute-0 ceph-mon[73755]: pgmap v1775: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:37:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:37:43.851Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:37:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:37:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:37:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:37:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:37:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1776: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:37:44 compute-0 nova_compute[265391]: 2025-09-30 18:37:44.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:37:44 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:37:44.964 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:e3:e0 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9692f6197b3545b1bf37bd84c3928d41', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aef43a77-fc58-48dd-8195-5e83e09646ef, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=475706b8-809c-4da9-92ac-7152f6d17fbe) old=Port_Binding(mac=['fa:16:3e:69:e3:e0'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9692f6197b3545b1bf37bd84c3928d41', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:37:44 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:37:44.965 166158 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 475706b8-809c-4da9-92ac-7152f6d17fbe in datapath e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4 updated
Sep 30 18:37:44 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:37:44.966 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:37:44 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:37:44.967 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[55861df9-f409-4090-bd7d-6f3910c61658]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:37:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:37:45.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:37:45.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:45 compute-0 ceph-mon[73755]: pgmap v1776: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:37:45 compute-0 nova_compute[265391]: 2025-09-30 18:37:45.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:37:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:37:46 compute-0 nova_compute[265391]: 2025-09-30 18:37:46.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:37:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1777: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:37:46 compute-0 nova_compute[265391]: 2025-09-30 18:37:46.429 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:37:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:37:47.328Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:37:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:37:47.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:37:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:37:47.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:37:47 compute-0 ceph-mon[73755]: pgmap v1777: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:37:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1778: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:37:48 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3838396643' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:37:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:37:48] "GET /metrics HTTP/1.1" 200 46708 "" "Prometheus/2.51.0"
Sep 30 18:37:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:37:48] "GET /metrics HTTP/1.1" 200 46708 "" "Prometheus/2.51.0"
Sep 30 18:37:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:37:48.873Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:37:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:37:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:37:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:37:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:37:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:37:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:37:49.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:37:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:37:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:37:49.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:37:49 compute-0 nova_compute[265391]: 2025-09-30 18:37:49.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:37:49 compute-0 nova_compute[265391]: 2025-09-30 18:37:49.428 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:37:49 compute-0 nova_compute[265391]: 2025-09-30 18:37:49.429 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:37:49 compute-0 ceph-mon[73755]: pgmap v1778: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:37:49 compute-0 nova_compute[265391]: 2025-09-30 18:37:49.943 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:37:49 compute-0 nova_compute[265391]: 2025-09-30 18:37:49.944 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:37:49 compute-0 nova_compute[265391]: 2025-09-30 18:37:49.944 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:37:49 compute-0 nova_compute[265391]: 2025-09-30 18:37:49.944 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:37:49 compute-0 nova_compute[265391]: 2025-09-30 18:37:49.945 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:37:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:37:50 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2736378507' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:37:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1779: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:37:50 compute-0 nova_compute[265391]: 2025-09-30 18:37:50.413 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:37:50 compute-0 nova_compute[265391]: 2025-09-30 18:37:50.570 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:37:50 compute-0 nova_compute[265391]: 2025-09-30 18:37:50.571 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:37:50 compute-0 nova_compute[265391]: 2025-09-30 18:37:50.589 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.018s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:37:50 compute-0 nova_compute[265391]: 2025-09-30 18:37:50.590 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4381MB free_disk=39.992183685302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:37:50 compute-0 nova_compute[265391]: 2025-09-30 18:37:50.590 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:37:50 compute-0 nova_compute[265391]: 2025-09-30 18:37:50.590 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:37:50 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/245067866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:37:50 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2736378507' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:37:50 compute-0 ceph-mon[73755]: pgmap v1779: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:37:50 compute-0 nova_compute[265391]: 2025-09-30 18:37:50.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:37:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:37:51 compute-0 nova_compute[265391]: 2025-09-30 18:37:51.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:37:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:37:51.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:37:51.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:51 compute-0 nova_compute[265391]: 2025-09-30 18:37:51.626 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:37:51 compute-0 nova_compute[265391]: 2025-09-30 18:37:51.626 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:37:50 up  1:41,  0 user,  load average: 0.65, 0.86, 0.87\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:37:51 compute-0 nova_compute[265391]: 2025-09-30 18:37:51.641 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:37:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:37:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2276774194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:37:52 compute-0 nova_compute[265391]: 2025-09-30 18:37:52.102 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:37:52 compute-0 nova_compute[265391]: 2025-09-30 18:37:52.107 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:37:52 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2276774194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:37:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:37:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:37:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1780: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:37:52 compute-0 nova_compute[265391]: 2025-09-30 18:37:52.615 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:37:53 compute-0 nova_compute[265391]: 2025-09-30 18:37:53.126 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:37:53 compute-0 nova_compute[265391]: 2025-09-30 18:37:53.126 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.536s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:37:53 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:37:53 compute-0 ceph-mon[73755]: pgmap v1780: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:37:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:37:53.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.002000051s ======
Sep 30 18:37:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:37:53.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Sep 30 18:37:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:37:53.852Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:37:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:37:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:37:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:37:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:37:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:37:54.233 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:a5:37 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-b9e368f8-9637-474a-a0f3-2785ed8b6bea', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9e368f8-9637-474a-a0f3-2785ed8b6bea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '003b1a96324d40b683381237c3cec243', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=340490f5-0a9d-48f0-992c-0f05c72a9a6c, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=3de11ef7-a7bb-4299-b982-f045dbd9b956) old=Port_Binding(mac=['fa:16:3e:db:a5:37'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-b9e368f8-9637-474a-a0f3-2785ed8b6bea', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9e368f8-9637-474a-a0f3-2785ed8b6bea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '003b1a96324d40b683381237c3cec243', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:37:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:37:54.234 166158 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 3de11ef7-a7bb-4299-b982-f045dbd9b956 in datapath b9e368f8-9637-474a-a0f3-2785ed8b6bea updated
Sep 30 18:37:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:37:54.236 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b9e368f8-9637-474a-a0f3-2785ed8b6bea, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:37:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:37:54.237 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[010ac5b7-0692-4796-b492-66af240b1608]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:37:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:37:54.327 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:37:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:37:54.328 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:37:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:37:54.328 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:37:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1781: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:37:55 compute-0 nova_compute[265391]: 2025-09-30 18:37:55.127 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:37:55 compute-0 nova_compute[265391]: 2025-09-30 18:37:55.127 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:37:55 compute-0 nova_compute[265391]: 2025-09-30 18:37:55.127 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:37:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:37:55.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:37:55.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:55 compute-0 ceph-mon[73755]: pgmap v1781: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:37:55 compute-0 nova_compute[265391]: 2025-09-30 18:37:55.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:37:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:37:56 compute-0 nova_compute[265391]: 2025-09-30 18:37:56.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:37:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1782: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:37:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:37:57.328Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:37:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:37:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:37:57.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:37:57 compute-0 nova_compute[265391]: 2025-09-30 18:37:57.424 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:37:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:37:57.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:57 compute-0 ceph-mon[73755]: pgmap v1782: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:37:58 compute-0 sudo[342344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:37:58 compute-0 sudo[342344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:37:58 compute-0 sudo[342344]: pam_unix(sudo:session): session closed for user root
Sep 30 18:37:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1783: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:37:58 compute-0 podman[342368]: 2025-09-30 18:37:58.401942792 +0000 UTC m=+0.072027084 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.4)
Sep 30 18:37:58 compute-0 podman[342370]: 2025-09-30 18:37:58.413088457 +0000 UTC m=+0.072264930 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:37:58 compute-0 podman[342369]: 2025-09-30 18:37:58.446107062 +0000 UTC m=+0.113230589 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Sep 30 18:37:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:37:58] "GET /metrics HTTP/1.1" 200 46711 "" "Prometheus/2.51.0"
Sep 30 18:37:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:37:58] "GET /metrics HTTP/1.1" 200 46711 "" "Prometheus/2.51.0"
Sep 30 18:37:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:37:58.874Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:37:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:37:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:37:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:37:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:37:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:37:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:37:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:37:59.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:37:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:37:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:37:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:37:59.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:37:59 compute-0 ceph-mon[73755]: pgmap v1783: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:37:59 compute-0 podman[276673]: time="2025-09-30T18:37:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:37:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:37:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:37:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:37:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10311 "" "Go-http-client/1.1"
Sep 30 18:38:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1784: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:38:00 compute-0 nova_compute[265391]: 2025-09-30 18:38:00.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:38:01 compute-0 nova_compute[265391]: 2025-09-30 18:38:01.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:01 compute-0 openstack_network_exporter[279566]: ERROR   18:38:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:38:01 compute-0 openstack_network_exporter[279566]: ERROR   18:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:38:01 compute-0 openstack_network_exporter[279566]: ERROR   18:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:38:01 compute-0 openstack_network_exporter[279566]: ERROR   18:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:38:01 compute-0 openstack_network_exporter[279566]: ERROR   18:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:38:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:38:01.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f10c75d0 =====
Sep 30 18:38:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f10c75d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:01 compute-0 radosgw[96126]: beast: 0x7f25f10c75d0: 192.168.122.101 - anonymous [30/Sep/2025:18:38:01.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:01 compute-0 ceph-mon[73755]: pgmap v1784: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:38:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1785: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:38:03 compute-0 nova_compute[265391]: 2025-09-30 18:38:03.354 2 DEBUG oslo_concurrency.lockutils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Acquiring lock "4f91975d-d44b-46af-9879-dbf7a693fbd2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:38:03 compute-0 nova_compute[265391]: 2025-09-30 18:38:03.354 2 DEBUG oslo_concurrency.lockutils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Lock "4f91975d-d44b-46af-9879-dbf7a693fbd2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:38:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:38:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:38:03.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:38:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:38:03.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:03 compute-0 ceph-mon[73755]: pgmap v1785: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:38:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:38:03.852Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:38:03 compute-0 nova_compute[265391]: 2025-09-30 18:38:03.860 2 DEBUG nova.compute.manager [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Sep 30 18:38:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:38:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:38:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:38:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:38:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1786: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:38:04 compute-0 nova_compute[265391]: 2025-09-30 18:38:04.401 2 DEBUG oslo_concurrency.lockutils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:38:04 compute-0 nova_compute[265391]: 2025-09-30 18:38:04.402 2 DEBUG oslo_concurrency.lockutils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:38:04 compute-0 nova_compute[265391]: 2025-09-30 18:38:04.407 2 DEBUG nova.virt.hardware [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Sep 30 18:38:04 compute-0 nova_compute[265391]: 2025-09-30 18:38:04.407 2 INFO nova.compute.claims [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Claim successful on node compute-0.ctlplane.example.com
Sep 30 18:38:04 compute-0 ceph-mon[73755]: pgmap v1786: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:38:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:38:05.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:38:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:38:05.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:38:05 compute-0 nova_compute[265391]: 2025-09-30 18:38:05.453 2 DEBUG oslo_concurrency.processutils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:38:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:38:05 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3689806097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:38:05 compute-0 nova_compute[265391]: 2025-09-30 18:38:05.867 2 DEBUG oslo_concurrency.processutils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:38:05 compute-0 nova_compute[265391]: 2025-09-30 18:38:05.873 2 DEBUG nova.compute.provider_tree [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:38:05 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3689806097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:38:05 compute-0 nova_compute[265391]: 2025-09-30 18:38:05.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:38:06 compute-0 ovn_controller[156242]: 2025-09-30T18:38:06Z|00238|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Sep 30 18:38:06 compute-0 nova_compute[265391]: 2025-09-30 18:38:06.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:06 compute-0 nova_compute[265391]: 2025-09-30 18:38:06.383 2 DEBUG nova.scheduler.client.report [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:38:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1787: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:38:06 compute-0 nova_compute[265391]: 2025-09-30 18:38:06.893 2 DEBUG oslo_concurrency.lockutils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.492s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:38:06 compute-0 nova_compute[265391]: 2025-09-30 18:38:06.895 2 DEBUG nova.compute.manager [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Sep 30 18:38:06 compute-0 ceph-mon[73755]: pgmap v1787: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:38:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:38:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:38:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:38:07.329Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:38:07 compute-0 nova_compute[265391]: 2025-09-30 18:38:07.407 2 DEBUG nova.compute.manager [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Sep 30 18:38:07 compute-0 nova_compute[265391]: 2025-09-30 18:38:07.407 2 DEBUG nova.network.neutron [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Sep 30 18:38:07 compute-0 nova_compute[265391]: 2025-09-30 18:38:07.407 2 WARNING neutronclient.v2_0.client [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:38:07 compute-0 nova_compute[265391]: 2025-09-30 18:38:07.408 2 WARNING neutronclient.v2_0.client [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:38:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:38:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:38:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:38:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:38:07.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:38:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:38:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:38:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:38:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:38:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:38:07.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:07 compute-0 nova_compute[265391]: 2025-09-30 18:38:07.918 2 INFO nova.virt.libvirt.driver [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Sep 30 18:38:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:38:08
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'images', 'volumes', 'backups', 'vms', '.rgw.root', 'default.rgw.control', '.nfs', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr']
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1788: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:38:08 compute-0 nova_compute[265391]: 2025-09-30 18:38:08.423 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:38:08 compute-0 nova_compute[265391]: 2025-09-30 18:38:08.426 2 DEBUG nova.compute.manager [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Sep 30 18:38:08 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:08.626 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:38:08 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:08.626 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:38:08 compute-0 nova_compute[265391]: 2025-09-30 18:38:08.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:08 compute-0 nova_compute[265391]: 2025-09-30 18:38:08.767 2 DEBUG nova.network.neutron [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Successfully created port: 6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Sep 30 18:38:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:38:08] "GET /metrics HTTP/1.1" 200 46706 "" "Prometheus/2.51.0"
Sep 30 18:38:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:38:08] "GET /metrics HTTP/1.1" 200 46706 "" "Prometheus/2.51.0"
Sep 30 18:38:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:38:08.875Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:38:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:38:08.876Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:38:08 compute-0 ceph-mon[73755]: pgmap v1788: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:38:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:38:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:38:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:38:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:38:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:38:09.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:09 compute-0 nova_compute[265391]: 2025-09-30 18:38:09.441 2 DEBUG nova.compute.manager [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Sep 30 18:38:09 compute-0 nova_compute[265391]: 2025-09-30 18:38:09.442 2 DEBUG nova.virt.libvirt.driver [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Sep 30 18:38:09 compute-0 nova_compute[265391]: 2025-09-30 18:38:09.442 2 INFO nova.virt.libvirt.driver [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Creating image(s)
Sep 30 18:38:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:38:09.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:09 compute-0 nova_compute[265391]: 2025-09-30 18:38:09.475 2 DEBUG nova.storage.rbd_utils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] rbd image 4f91975d-d44b-46af-9879-dbf7a693fbd2_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:38:09 compute-0 podman[342467]: 2025-09-30 18:38:09.512604921 +0000 UTC m=+0.053857748 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest)
Sep 30 18:38:09 compute-0 nova_compute[265391]: 2025-09-30 18:38:09.513 2 DEBUG nova.storage.rbd_utils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] rbd image 4f91975d-d44b-46af-9879-dbf7a693fbd2_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:38:09 compute-0 podman[342476]: 2025-09-30 18:38:09.515038893 +0000 UTC m=+0.047763112 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, name=ubi9-minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., distribution-scope=public, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, release=1755695350, vcs-type=git, config_id=edpm)
Sep 30 18:38:09 compute-0 nova_compute[265391]: 2025-09-30 18:38:09.543 2 DEBUG nova.storage.rbd_utils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] rbd image 4f91975d-d44b-46af-9879-dbf7a693fbd2_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:38:09 compute-0 podman[342475]: 2025-09-30 18:38:09.546912389 +0000 UTC m=+0.083842316 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid)
Sep 30 18:38:09 compute-0 nova_compute[265391]: 2025-09-30 18:38:09.550 2 DEBUG oslo_concurrency.processutils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:38:09 compute-0 nova_compute[265391]: 2025-09-30 18:38:09.606 2 DEBUG oslo_concurrency.processutils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:38:09 compute-0 nova_compute[265391]: 2025-09-30 18:38:09.607 2 DEBUG oslo_concurrency.lockutils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Acquiring lock "cb2d580238c9b109feae7f1462613dc547671457" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:38:09 compute-0 nova_compute[265391]: 2025-09-30 18:38:09.607 2 DEBUG oslo_concurrency.lockutils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:38:09 compute-0 nova_compute[265391]: 2025-09-30 18:38:09.608 2 DEBUG oslo_concurrency.lockutils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:38:09 compute-0 nova_compute[265391]: 2025-09-30 18:38:09.634 2 DEBUG nova.storage.rbd_utils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] rbd image 4f91975d-d44b-46af-9879-dbf7a693fbd2_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:38:09 compute-0 nova_compute[265391]: 2025-09-30 18:38:09.637 2 DEBUG oslo_concurrency.processutils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 4f91975d-d44b-46af-9879-dbf7a693fbd2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:38:10 compute-0 nova_compute[265391]: 2025-09-30 18:38:10.023 2 DEBUG nova.network.neutron [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Successfully updated port: 6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Sep 30 18:38:10 compute-0 nova_compute[265391]: 2025-09-30 18:38:10.128 2 DEBUG nova.compute.manager [req-ce5cf884-acc9-4737-8148-a633433c3722 req-cfce4513-371f-409e-8584-5267833bf761 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Received event network-changed-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:38:10 compute-0 nova_compute[265391]: 2025-09-30 18:38:10.129 2 DEBUG nova.compute.manager [req-ce5cf884-acc9-4737-8148-a633433c3722 req-cfce4513-371f-409e-8584-5267833bf761 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Refreshing instance network info cache due to event network-changed-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:38:10 compute-0 nova_compute[265391]: 2025-09-30 18:38:10.129 2 DEBUG oslo_concurrency.lockutils [req-ce5cf884-acc9-4737-8148-a633433c3722 req-cfce4513-371f-409e-8584-5267833bf761 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-4f91975d-d44b-46af-9879-dbf7a693fbd2" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:38:10 compute-0 nova_compute[265391]: 2025-09-30 18:38:10.129 2 DEBUG oslo_concurrency.lockutils [req-ce5cf884-acc9-4737-8148-a633433c3722 req-cfce4513-371f-409e-8584-5267833bf761 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-4f91975d-d44b-46af-9879-dbf7a693fbd2" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:38:10 compute-0 nova_compute[265391]: 2025-09-30 18:38:10.129 2 DEBUG nova.network.neutron [req-ce5cf884-acc9-4737-8148-a633433c3722 req-cfce4513-371f-409e-8584-5267833bf761 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Refreshing network info cache for port 6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:38:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1789: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:38:10 compute-0 nova_compute[265391]: 2025-09-30 18:38:10.426 2 DEBUG oslo_concurrency.processutils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 4f91975d-d44b-46af-9879-dbf7a693fbd2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.788s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:38:10 compute-0 nova_compute[265391]: 2025-09-30 18:38:10.503 2 DEBUG nova.storage.rbd_utils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] resizing rbd image 4f91975d-d44b-46af-9879-dbf7a693fbd2_disk to 1073741824 resize /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:288
Sep 30 18:38:10 compute-0 nova_compute[265391]: 2025-09-30 18:38:10.559 2 DEBUG oslo_concurrency.lockutils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Acquiring lock "refresh_cache-4f91975d-d44b-46af-9879-dbf7a693fbd2" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:38:10 compute-0 nova_compute[265391]: 2025-09-30 18:38:10.636 2 WARNING neutronclient.v2_0.client [req-ce5cf884-acc9-4737-8148-a633433c3722 req-cfce4513-371f-409e-8584-5267833bf761 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:38:10 compute-0 nova_compute[265391]: 2025-09-30 18:38:10.680 2 DEBUG nova.virt.libvirt.driver [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Sep 30 18:38:10 compute-0 nova_compute[265391]: 2025-09-30 18:38:10.681 2 DEBUG nova.virt.libvirt.driver [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Ensure instance console log exists: /var/lib/nova/instances/4f91975d-d44b-46af-9879-dbf7a693fbd2/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Sep 30 18:38:10 compute-0 nova_compute[265391]: 2025-09-30 18:38:10.681 2 DEBUG oslo_concurrency.lockutils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:38:10 compute-0 nova_compute[265391]: 2025-09-30 18:38:10.682 2 DEBUG oslo_concurrency.lockutils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:38:10 compute-0 nova_compute[265391]: 2025-09-30 18:38:10.682 2 DEBUG oslo_concurrency.lockutils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:38:10 compute-0 nova_compute[265391]: 2025-09-30 18:38:10.710 2 DEBUG nova.network.neutron [req-ce5cf884-acc9-4737-8148-a633433c3722 req-cfce4513-371f-409e-8584-5267833bf761 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:38:10 compute-0 nova_compute[265391]: 2025-09-30 18:38:10.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:38:11 compute-0 nova_compute[265391]: 2025-09-30 18:38:11.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:38:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:38:11.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:38:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:38:11.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:11 compute-0 ceph-mon[73755]: pgmap v1789: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:38:11 compute-0 nova_compute[265391]: 2025-09-30 18:38:11.534 2 DEBUG nova.network.neutron [req-ce5cf884-acc9-4737-8148-a633433c3722 req-cfce4513-371f-409e-8584-5267833bf761 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:38:12 compute-0 nova_compute[265391]: 2025-09-30 18:38:12.039 2 DEBUG oslo_concurrency.lockutils [req-ce5cf884-acc9-4737-8148-a633433c3722 req-cfce4513-371f-409e-8584-5267833bf761 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-4f91975d-d44b-46af-9879-dbf7a693fbd2" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:38:12 compute-0 nova_compute[265391]: 2025-09-30 18:38:12.040 2 DEBUG oslo_concurrency.lockutils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Acquired lock "refresh_cache-4f91975d-d44b-46af-9879-dbf7a693fbd2" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:38:12 compute-0 nova_compute[265391]: 2025-09-30 18:38:12.040 2 DEBUG nova.network.neutron [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:38:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1790: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:38:12 compute-0 nova_compute[265391]: 2025-09-30 18:38:12.634 2 DEBUG nova.network.neutron [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:38:12 compute-0 ceph-mon[73755]: pgmap v1790: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:38:12 compute-0 nova_compute[265391]: 2025-09-30 18:38:12.870 2 WARNING neutronclient.v2_0.client [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:38:13 compute-0 nova_compute[265391]: 2025-09-30 18:38:13.352 2 DEBUG nova.network.neutron [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Updating instance_info_cache with network_info: [{"id": "6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e", "address": "fa:16:3e:6c:be:81", "network": {"id": "e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-1931818952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9692f6197b3545b1bf37bd84c3928d41", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0ab88d-97", "ovs_interfaceid": "6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:38:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:38:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:38:13.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:38:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:38:13.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:13.628 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:38:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:38:13.853Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:38:13 compute-0 nova_compute[265391]: 2025-09-30 18:38:13.858 2 DEBUG oslo_concurrency.lockutils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Releasing lock "refresh_cache-4f91975d-d44b-46af-9879-dbf7a693fbd2" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:38:13 compute-0 nova_compute[265391]: 2025-09-30 18:38:13.858 2 DEBUG nova.compute.manager [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Instance network_info: |[{"id": "6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e", "address": "fa:16:3e:6c:be:81", "network": {"id": "e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-1931818952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9692f6197b3545b1bf37bd84c3928d41", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0ab88d-97", "ovs_interfaceid": "6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Sep 30 18:38:13 compute-0 nova_compute[265391]: 2025-09-30 18:38:13.860 2 DEBUG nova.virt.libvirt.driver [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Start _get_guest_xml network_info=[{"id": "6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e", "address": "fa:16:3e:6c:be:81", "network": {"id": "e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-1931818952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9692f6197b3545b1bf37bd84c3928d41", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0ab88d-97", "ovs_interfaceid": "6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'image_id': '5b99cbca-b655-4be5-8343-cf504005c42e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Sep 30 18:38:13 compute-0 nova_compute[265391]: 2025-09-30 18:38:13.863 2 WARNING nova.virt.libvirt.driver [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:38:13 compute-0 nova_compute[265391]: 2025-09-30 18:38:13.864 2 DEBUG nova.virt.driver [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='5b99cbca-b655-4be5-8343-cf504005c42e', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteVmWorkloadBalanceStrategy-server-965053835', uuid='4f91975d-d44b-46af-9879-dbf7a693fbd2'), owner=OwnerMeta(userid='eda3e60f66494c8682f36b8a8fa20793', username='tempest-TestExecuteVmWorkloadBalanceStrategy-765295423-project-admin', projectid='003b1a96324d40b683381237c3cec243', projectname='tempest-TestExecuteVmWorkloadBalanceStrategy-765295423'), image=ImageMeta(id='5b99cbca-b655-4be5-8343-cf504005c42e', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e", "address": "fa:16:3e:6c:be:81", "network": {"id": "e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-1931818952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9692f6197b3545b1bf37bd84c3928d41", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0ab88d-97", "ovs_interfaceid": "6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20250919142712.b99a882.el10', creation_time=1759257493.8646321) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Sep 30 18:38:13 compute-0 nova_compute[265391]: 2025-09-30 18:38:13.867 2 DEBUG nova.virt.libvirt.host [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Sep 30 18:38:13 compute-0 nova_compute[265391]: 2025-09-30 18:38:13.867 2 DEBUG nova.virt.libvirt.host [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Sep 30 18:38:13 compute-0 nova_compute[265391]: 2025-09-30 18:38:13.870 2 DEBUG nova.virt.libvirt.host [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Sep 30 18:38:13 compute-0 nova_compute[265391]: 2025-09-30 18:38:13.870 2 DEBUG nova.virt.libvirt.host [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Sep 30 18:38:13 compute-0 nova_compute[265391]: 2025-09-30 18:38:13.870 2 DEBUG nova.virt.libvirt.driver [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Sep 30 18:38:13 compute-0 nova_compute[265391]: 2025-09-30 18:38:13.871 2 DEBUG nova.virt.hardware [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-09-30T18:04:50Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Sep 30 18:38:13 compute-0 nova_compute[265391]: 2025-09-30 18:38:13.871 2 DEBUG nova.virt.hardware [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Sep 30 18:38:13 compute-0 nova_compute[265391]: 2025-09-30 18:38:13.871 2 DEBUG nova.virt.hardware [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Sep 30 18:38:13 compute-0 nova_compute[265391]: 2025-09-30 18:38:13.871 2 DEBUG nova.virt.hardware [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Sep 30 18:38:13 compute-0 nova_compute[265391]: 2025-09-30 18:38:13.871 2 DEBUG nova.virt.hardware [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Sep 30 18:38:13 compute-0 nova_compute[265391]: 2025-09-30 18:38:13.872 2 DEBUG nova.virt.hardware [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Sep 30 18:38:13 compute-0 nova_compute[265391]: 2025-09-30 18:38:13.872 2 DEBUG nova.virt.hardware [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Sep 30 18:38:13 compute-0 nova_compute[265391]: 2025-09-30 18:38:13.872 2 DEBUG nova.virt.hardware [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Sep 30 18:38:13 compute-0 nova_compute[265391]: 2025-09-30 18:38:13.872 2 DEBUG nova.virt.hardware [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Sep 30 18:38:13 compute-0 nova_compute[265391]: 2025-09-30 18:38:13.872 2 DEBUG nova.virt.hardware [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Sep 30 18:38:13 compute-0 nova_compute[265391]: 2025-09-30 18:38:13.872 2 DEBUG nova.virt.hardware [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
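[editor's note] The nova.virt.hardware lines above show the guest CPU topology being derived for the m1.nano flavor: no flavor or image limits (0:0:0), a 65536 cap per dimension, one vCPU, and therefore a single possible topology of 1 socket / 1 core / 1 thread. The following is a minimal, hypothetical Python sketch of that enumeration, not nova's actual implementation (which lives in nova/virt/hardware.py):

    # Hypothetical simplification of the topology choice logged above.
    from dataclasses import dataclass
    from itertools import product

    @dataclass(frozen=True)
    class Topology:
        sockets: int
        cores: int
        threads: int

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Enumerate socket/core/thread counts whose product equals the vCPU
        # count and which respect the per-dimension limits (65536 when
        # unconstrained, as in the log above).
        found = []
        for s, c, t in product(range(1, vcpus + 1), repeat=3):
            if s * c * t == vcpus and s <= max_sockets and c <= max_cores and t <= max_threads:
                found.append(Topology(s, c, t))
        return found

    # With 1 vCPU and no preferences this yields only 1:1:1, matching
    # "Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)]".
    print(possible_topologies(1))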
Sep 30 18:38:13 compute-0 nova_compute[265391]: 2025-09-30 18:38:13.875 2 DEBUG oslo_concurrency.processutils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:38:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:38:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:38:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:38:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:38:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:38:14 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3203473178' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:38:14 compute-0 nova_compute[265391]: 2025-09-30 18:38:14.333 2 DEBUG oslo_concurrency.processutils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:38:14 compute-0 nova_compute[265391]: 2025-09-30 18:38:14.362 2 DEBUG nova.storage.rbd_utils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] rbd image 4f91975d-d44b-46af-9879-dbf7a693fbd2_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:38:14 compute-0 nova_compute[265391]: 2025-09-30 18:38:14.365 2 DEBUG oslo_concurrency.processutils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:38:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1791: 353 pgs: 353 active+clean; 88 MiB data, 350 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:38:14 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3203473178' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:38:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:38:14 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/288309622' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:38:14 compute-0 nova_compute[265391]: 2025-09-30 18:38:14.800 2 DEBUG oslo_concurrency.processutils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
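[editor's note] The two oslo_concurrency.processutils entries above are nova's RBD backend shelling out to the ceph CLI to discover monitor addresses before writing the RBD <source> hosts into the guest XML. A hedged sketch of an equivalent lookup follows; the command line is copied from the log, while the JSON field names ("mons", "name", "addr") are assumptions about the `ceph mon dump --format=json` output and may differ between Ceph releases:

    # Reproduce the monitor lookup logged above; assumes the openstack keyring
    # and /etc/ceph/ceph.conf are readable by the caller.
    import json
    import subprocess

    cmd = [
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    mon_map = json.loads(out)

    # Field names are assumptions; inspect the raw JSON on your cluster if
    # they differ.
    for mon in mon_map.get("mons", []):
        print(mon.get("name"), mon.get("addr"))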
Sep 30 18:38:14 compute-0 nova_compute[265391]: 2025-09-30 18:38:14.802 2 DEBUG nova.virt.libvirt.vif [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:38:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteVmWorkloadBalanceStrategy-server-965053835',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutevmworkloadbalancestrategy-server-965053835',id=28,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='003b1a96324d40b683381237c3cec243',ramdisk_id='',reservation_id='r-x8rgjbtk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteVmWorkloadBalanceStrategy-765295423',owner_user_name='tempest-TestExecuteVmWorkloadBalanceStrategy-765295423-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:38:08Z,user_data=None,user_id='eda3e60f66494c8682f36b8a8fa20793',uuid=4f91975d-d44b-46af-9879-dbf7a693fbd2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e", "address": "fa:16:3e:6c:be:81", "network": {"id": "e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-1931818952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9692f6197b3545b1bf37bd84c3928d41", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0ab88d-97", "ovs_interfaceid": "6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:38:14 compute-0 nova_compute[265391]: 2025-09-30 18:38:14.802 2 DEBUG nova.network.os_vif_util [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Converting VIF {"id": "6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e", "address": "fa:16:3e:6c:be:81", "network": {"id": "e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-1931818952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9692f6197b3545b1bf37bd84c3928d41", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0ab88d-97", "ovs_interfaceid": "6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:38:14 compute-0 nova_compute[265391]: 2025-09-30 18:38:14.803 2 DEBUG nova.network.os_vif_util [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:be:81,bridge_name='br-int',has_traffic_filtering=True,id=6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e,network=Network(e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e0ab88d-97') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:38:14 compute-0 nova_compute[265391]: 2025-09-30 18:38:14.804 2 DEBUG nova.objects.instance [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4f91975d-d44b-46af-9879-dbf7a693fbd2 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:38:15 compute-0 nova_compute[265391]: 2025-09-30 18:38:15.312 2 DEBUG nova.virt.libvirt.driver [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] End _get_guest_xml xml=<domain type="kvm">
Sep 30 18:38:15 compute-0 nova_compute[265391]:   <uuid>4f91975d-d44b-46af-9879-dbf7a693fbd2</uuid>
Sep 30 18:38:15 compute-0 nova_compute[265391]:   <name>instance-0000001c</name>
Sep 30 18:38:15 compute-0 nova_compute[265391]:   <memory>131072</memory>
Sep 30 18:38:15 compute-0 nova_compute[265391]:   <vcpu>1</vcpu>
Sep 30 18:38:15 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:38:15 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteVmWorkloadBalanceStrategy-server-965053835</nova:name>
Sep 30 18:38:15 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:38:13</nova:creationTime>
Sep 30 18:38:15 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:38:15 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:38:15 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:38:15 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:38:15 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:38:15 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:38:15 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:38:15 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:38:15 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:38:15 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:38:15 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:38:15 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:38:15 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:38:15 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:38:15 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:38:15 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:38:15 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:38:15 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:38:15 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:38:15 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:38:15 compute-0 nova_compute[265391]:         <nova:user uuid="eda3e60f66494c8682f36b8a8fa20793">tempest-TestExecuteVmWorkloadBalanceStrategy-765295423-project-admin</nova:user>
Sep 30 18:38:15 compute-0 nova_compute[265391]:         <nova:project uuid="003b1a96324d40b683381237c3cec243">tempest-TestExecuteVmWorkloadBalanceStrategy-765295423</nova:project>
Sep 30 18:38:15 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:38:15 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:38:15 compute-0 nova_compute[265391]:         <nova:port uuid="6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e">
Sep 30 18:38:15 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:38:15 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:38:15 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:38:15 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <system>
Sep 30 18:38:15 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:38:15 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:38:15 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:38:15 compute-0 nova_compute[265391]:       <entry name="serial">4f91975d-d44b-46af-9879-dbf7a693fbd2</entry>
Sep 30 18:38:15 compute-0 nova_compute[265391]:       <entry name="uuid">4f91975d-d44b-46af-9879-dbf7a693fbd2</entry>
Sep 30 18:38:15 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     </system>
Sep 30 18:38:15 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:38:15 compute-0 nova_compute[265391]:   <os>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="q35">hvm</type>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:   </os>
Sep 30 18:38:15 compute-0 nova_compute[265391]:   <features>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <vmcoreinfo/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:   </features>
Sep 30 18:38:15 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:38:15 compute-0 nova_compute[265391]:   <cpu mode="host-model" match="exact">
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <topology sockets="1" cores="1" threads="1"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:38:15 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:38:15 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/4f91975d-d44b-46af-9879-dbf7a693fbd2_disk">
Sep 30 18:38:15 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:       </source>
Sep 30 18:38:15 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:38:15 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:38:15 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:38:15 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/4f91975d-d44b-46af-9879-dbf7a693fbd2_disk.config">
Sep 30 18:38:15 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:       </source>
Sep 30 18:38:15 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:38:15 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:38:15 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:38:15 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:6c:be:81"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:       <target dev="tap6e0ab88d-97"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:38:15 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/4f91975d-d44b-46af-9879-dbf7a693fbd2/console.log" append="off"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <video>
Sep 30 18:38:15 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     </video>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:38:15 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <controller type="usb" index="0"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:38:15 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:38:15 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:38:15 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:38:15 compute-0 nova_compute[265391]: </domain>
Sep 30 18:38:15 compute-0 nova_compute[265391]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
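[editor's note] The block above is the complete libvirt domain XML nova generated for instance-0000001c: an RBD-backed virtio root disk, the config-drive cdrom on SATA, and a single OVS-backed virtio NIC. As a rough illustration only, a domain built from XML like this could be defined and started with the libvirt Python bindings; nova drives this through its own libvirt driver rather than a standalone script, and the file path below is an assumption for the sketch:

    # Hedged sketch: define and start a guest from a domain XML string using
    # libvirt-python. Assumes the XML (e.g. the document logged above) was
    # saved to /tmp/instance-0000001c.xml and qemu:///system is reachable.
    import libvirt

    with open("/tmp/instance-0000001c.xml") as f:
        xml = f.read()

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(xml)   # make the domain persistent
        dom.create()                # boot it
        print(dom.name(), "started, id", dom.ID())
    finally:
        conn.close()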
Sep 30 18:38:15 compute-0 nova_compute[265391]: 2025-09-30 18:38:15.313 2 DEBUG nova.compute.manager [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Preparing to wait for external event network-vif-plugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:38:15 compute-0 nova_compute[265391]: 2025-09-30 18:38:15.313 2 DEBUG oslo_concurrency.lockutils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Acquiring lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:38:15 compute-0 nova_compute[265391]: 2025-09-30 18:38:15.314 2 DEBUG oslo_concurrency.lockutils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:38:15 compute-0 nova_compute[265391]: 2025-09-30 18:38:15.314 2 DEBUG oslo_concurrency.lockutils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:38:15 compute-0 nova_compute[265391]: 2025-09-30 18:38:15.315 2 DEBUG nova.virt.libvirt.vif [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:38:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteVmWorkloadBalanceStrategy-server-965053835',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutevmworkloadbalancestrategy-server-965053835',id=28,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='003b1a96324d40b683381237c3cec243',ramdisk_id='',reservation_id='r-x8rgjbtk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteVmWorkloadBalanceStrategy-765295423',owner_user_name='tempest-TestExecuteVmWorkloadBalanceStrategy-765295423-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:38:08Z,user_data=None,user_id='eda3e60f66494c8682f36b8a8fa20793',uuid=4f91975d-d44b-46af-9879-dbf7a693fbd2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e", "address": "fa:16:3e:6c:be:81", "network": {"id": "e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-1931818952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9692f6197b3545b1bf37bd84c3928d41", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0ab88d-97", "ovs_interfaceid": "6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Sep 30 18:38:15 compute-0 nova_compute[265391]: 2025-09-30 18:38:15.315 2 DEBUG nova.network.os_vif_util [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Converting VIF {"id": "6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e", "address": "fa:16:3e:6c:be:81", "network": {"id": "e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-1931818952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9692f6197b3545b1bf37bd84c3928d41", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0ab88d-97", "ovs_interfaceid": "6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:38:15 compute-0 nova_compute[265391]: 2025-09-30 18:38:15.316 2 DEBUG nova.network.os_vif_util [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:be:81,bridge_name='br-int',has_traffic_filtering=True,id=6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e,network=Network(e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e0ab88d-97') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:38:15 compute-0 nova_compute[265391]: 2025-09-30 18:38:15.316 2 DEBUG os_vif [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:be:81,bridge_name='br-int',has_traffic_filtering=True,id=6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e,network=Network(e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e0ab88d-97') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Sep 30 18:38:15 compute-0 nova_compute[265391]: 2025-09-30 18:38:15.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:15 compute-0 nova_compute[265391]: 2025-09-30 18:38:15.317 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:38:15 compute-0 nova_compute[265391]: 2025-09-30 18:38:15.318 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:38:15 compute-0 nova_compute[265391]: 2025-09-30 18:38:15.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:15 compute-0 nova_compute[265391]: 2025-09-30 18:38:15.319 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '7cb96a31-e69f-5c2d-84ad-c4f4dfd38576', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:38:15 compute-0 nova_compute[265391]: 2025-09-30 18:38:15.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:15 compute-0 nova_compute[265391]: 2025-09-30 18:38:15.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:15 compute-0 nova_compute[265391]: 2025-09-30 18:38:15.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:15 compute-0 nova_compute[265391]: 2025-09-30 18:38:15.324 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e0ab88d-97, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:38:15 compute-0 nova_compute[265391]: 2025-09-30 18:38:15.324 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap6e0ab88d-97, col_values=(('qos', UUID('bb974198-f5e8-43d3-859b-6a8ce7b978ad')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:38:15 compute-0 nova_compute[265391]: 2025-09-30 18:38:15.325 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap6e0ab88d-97, col_values=(('external_ids', {'iface-id': '6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6c:be:81', 'vm-uuid': '4f91975d-d44b-46af-9879-dbf7a693fbd2'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:38:15 compute-0 NetworkManager[45059]: <info>  [1759257495.3274] manager: (tap6e0ab88d-97): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/94)
Sep 30 18:38:15 compute-0 nova_compute[265391]: 2025-09-30 18:38:15.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:38:15 compute-0 nova_compute[265391]: 2025-09-30 18:38:15.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:15 compute-0 nova_compute[265391]: 2025-09-30 18:38:15.333 2 INFO os_vif [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:be:81,bridge_name='br-int',has_traffic_filtering=True,id=6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e,network=Network(e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e0ab88d-97')
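[editor's note] The ovsdbapp transactions above (AddBridgeCommand, AddPortCommand, and the two DbSetCommands) amount to attaching tap6e0ab88d-97 to br-int and stamping the interface with the Neutron port's external_ids so OVN can bind it. A hedged CLI equivalent using ovs-vsctl from Python, reusing the values that appear in the logged col_values (the QoS row creation is omitted here):

    # Rough CLI equivalent of the logged ovsdbapp transaction; run as root on
    # the compute host. Values are copied from the log lines above.
    import subprocess

    port = "tap6e0ab88d-97"
    external_ids = {
        "iface-id": "6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:6c:be:81",
        "vm-uuid": "4f91975d-d44b-46af-9879-dbf7a693fbd2",
    }

    cmd = ["ovs-vsctl", "--may-exist", "add-port", "br-int", port,
           "--", "set", "Interface", port]
    cmd += [f"external_ids:{key}={value}" for key, value in external_ids.items()]
    subprocess.run(cmd, check=True)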
Sep 30 18:38:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:38:15.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:38:15.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:15 compute-0 ceph-mon[73755]: pgmap v1791: 353 pgs: 353 active+clean; 88 MiB data, 350 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:38:15 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/288309622' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:38:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:38:16 compute-0 nova_compute[265391]: 2025-09-30 18:38:16.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1792: 353 pgs: 353 active+clean; 88 MiB data, 350 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:38:16 compute-0 ceph-mon[73755]: pgmap v1792: 353 pgs: 353 active+clean; 88 MiB data, 350 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:38:16 compute-0 nova_compute[265391]: 2025-09-30 18:38:16.892 2 DEBUG nova.virt.libvirt.driver [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:38:16 compute-0 nova_compute[265391]: 2025-09-30 18:38:16.892 2 DEBUG nova.virt.libvirt.driver [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:38:16 compute-0 nova_compute[265391]: 2025-09-30 18:38:16.892 2 DEBUG nova.virt.libvirt.driver [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] No VIF found with MAC fa:16:3e:6c:be:81, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Sep 30 18:38:16 compute-0 nova_compute[265391]: 2025-09-30 18:38:16.893 2 INFO nova.virt.libvirt.driver [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Using config drive
Sep 30 18:38:16 compute-0 nova_compute[265391]: 2025-09-30 18:38:16.919 2 DEBUG nova.storage.rbd_utils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] rbd image 4f91975d-d44b-46af-9879-dbf7a693fbd2_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:38:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:38:17.330Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
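[editor's note] The Alertmanager dispatcher keeps failing to deliver the ceph-dashboard webhook to compute-1 ("context deadline exceeded" after two attempts, here and at 18:38:13). A hedged way to check from this host whether the receiver endpoint is reachable at all, using only the URL taken from the error message; this is a diagnostic sketch, not part of the deployment:

    # Quick reachability probe for the webhook target in the Alertmanager
    # error above; a timeout or connection error reproduces the symptom
    # outside Alertmanager.
    import urllib.error
    import urllib.request

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            print("reachable, HTTP", resp.status)
    except urllib.error.HTTPError as exc:
        print("reachable, HTTP", exc.code)   # endpoint answered, even if with an error
    except (urllib.error.URLError, TimeoutError) as exc:
        print("not reachable:", exc)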
Sep 30 18:38:17 compute-0 nova_compute[265391]: 2025-09-30 18:38:17.435 2 WARNING neutronclient.v2_0.client [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:38:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:38:17.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:38:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:38:17.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:38:17 compute-0 nova_compute[265391]: 2025-09-30 18:38:17.732 2 INFO nova.virt.libvirt.driver [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Creating config drive at /var/lib/nova/instances/4f91975d-d44b-46af-9879-dbf7a693fbd2/disk.config
Sep 30 18:38:17 compute-0 nova_compute[265391]: 2025-09-30 18:38:17.738 2 DEBUG oslo_concurrency.processutils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4f91975d-d44b-46af-9879-dbf7a693fbd2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmpgn_m5sjr execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:38:17 compute-0 nova_compute[265391]: 2025-09-30 18:38:17.885 2 DEBUG oslo_concurrency.processutils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4f91975d-d44b-46af-9879-dbf7a693fbd2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmpgn_m5sjr" returned: 0 in 0.146s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:38:17 compute-0 nova_compute[265391]: 2025-09-30 18:38:17.929 2 DEBUG nova.storage.rbd_utils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] rbd image 4f91975d-d44b-46af-9879-dbf7a693fbd2_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:38:17 compute-0 nova_compute[265391]: 2025-09-30 18:38:17.934 2 DEBUG oslo_concurrency.processutils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4f91975d-d44b-46af-9879-dbf7a693fbd2/disk.config 4f91975d-d44b-46af-9879-dbf7a693fbd2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:38:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1793: 353 pgs: 353 active+clean; 88 MiB data, 350 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:38:18 compute-0 sudo[342824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:38:18 compute-0 sudo[342824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:38:18 compute-0 sudo[342824]: pam_unix(sudo:session): session closed for user root
Sep 30 18:38:18 compute-0 nova_compute[265391]: 2025-09-30 18:38:18.711 2 DEBUG oslo_concurrency.processutils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4f91975d-d44b-46af-9879-dbf7a693fbd2/disk.config 4f91975d-d44b-46af-9879-dbf7a693fbd2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.778s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:38:18 compute-0 nova_compute[265391]: 2025-09-30 18:38:18.713 2 INFO nova.virt.libvirt.driver [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Deleting local config drive /var/lib/nova/instances/4f91975d-d44b-46af-9879-dbf7a693fbd2/disk.config because it was imported into RBD.
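Annotation: the three steps above (build the ISO with mkisofs, import it into the vms RBD pool, delete the local copy) can be reproduced outside nova. A minimal sketch using the commands shown in the log; the staged metadata directory is a placeholder taken from the log context.

    # Sketch of the config-drive flow logged above: mkisofs -> rbd import -> local cleanup.
    import os
    import subprocess

    instance = "4f91975d-d44b-46af-9879-dbf7a693fbd2"
    iso = f"/var/lib/nova/instances/{instance}/disk.config"
    metadata_dir = "/tmp/tmpgn_m5sjr"  # staged config-drive contents (placeholder)

    # 1. Build the ISO9660 config drive with the "config-2" volume label cloud-init expects.
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase", "-allow-multidot",
         "-l", "-quiet", "-J", "-r", "-V", "config-2", metadata_dir],
        check=True,
    )

    # 2. Import it into the Ceph "vms" pool under the name nova uses for config drives.
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{instance}_disk.config",
         "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )

    # 3. Remove the local copy once it lives in RBD, mirroring the nova log message above.
    os.remove(iso)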
Sep 30 18:38:18 compute-0 systemd[1]: Starting libvirt secret daemon...
Sep 30 18:38:18 compute-0 systemd[1]: Started libvirt secret daemon.
Sep 30 18:38:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:38:18] "GET /metrics HTTP/1.1" 200 46706 "" "Prometheus/2.51.0"
Sep 30 18:38:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:38:18] "GET /metrics HTTP/1.1" 200 46706 "" "Prometheus/2.51.0"
Sep 30 18:38:18 compute-0 kernel: tap6e0ab88d-97: entered promiscuous mode
Sep 30 18:38:18 compute-0 NetworkManager[45059]: <info>  [1759257498.8195] manager: (tap6e0ab88d-97): new Tun device (/org/freedesktop/NetworkManager/Devices/95)
Sep 30 18:38:18 compute-0 nova_compute[265391]: 2025-09-30 18:38:18.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:18 compute-0 ovn_controller[156242]: 2025-09-30T18:38:18Z|00239|binding|INFO|Claiming lport 6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e for this chassis.
Sep 30 18:38:18 compute-0 ovn_controller[156242]: 2025-09-30T18:38:18Z|00240|binding|INFO|6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e: Claiming fa:16:3e:6c:be:81 10.100.0.11
Sep 30 18:38:18 compute-0 nova_compute[265391]: 2025-09-30 18:38:18.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:18 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:18.833 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:be:81 10.100.0.11'], port_security=['fa:16:3e:6c:be:81 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '4f91975d-d44b-46af-9879-dbf7a693fbd2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '003b1a96324d40b683381237c3cec243', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a12466c2-b0c7-418c-b73a-38db6de1f821', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aef43a77-fc58-48dd-8195-5e83e09646ef, chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:38:18 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:18.834 166158 INFO neutron.agent.ovn.metadata.agent [-] Port 6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e in datapath e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4 bound to our chassis
Sep 30 18:38:18 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:18.836 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4
Sep 30 18:38:18 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:18.848 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[bf0fb300-863b-491f-a3f7-508b765ce6b7]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:38:18 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:18.849 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape214ea0f-11 in ovnmeta-e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4 namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Sep 30 18:38:18 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:18.850 292173 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape214ea0f-10 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Sep 30 18:38:18 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:18.851 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[32b87cf7-df33-4322-9ac4-4fe39f0fdff1]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:38:18 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:18.851 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[a0590efd-84cc-4b00-a8fb-ca79017c8fc3]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:38:18 compute-0 systemd-udevd[342883]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:38:18 compute-0 NetworkManager[45059]: <info>  [1759257498.8694] device (tap6e0ab88d-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 18:38:18 compute-0 systemd-machined[219917]: New machine qemu-21-instance-0000001c.
Sep 30 18:38:18 compute-0 NetworkManager[45059]: <info>  [1759257498.8708] device (tap6e0ab88d-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Sep 30 18:38:18 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:18.868 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[5d026517-42bb-46db-921a-bb01fdcca9eb]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:38:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:38:18.876Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:38:18 compute-0 systemd[1]: Started Virtual Machine qemu-21-instance-0000001c.
Sep 30 18:38:18 compute-0 nova_compute[265391]: 2025-09-30 18:38:18.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:18 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:18.890 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[fd4c2c6e-b279-41a7-bf1f-473d6fff1818]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:38:18 compute-0 nova_compute[265391]: 2025-09-30 18:38:18.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:18 compute-0 ovn_controller[156242]: 2025-09-30T18:38:18Z|00241|binding|INFO|Setting lport 6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e ovn-installed in OVS
Sep 30 18:38:18 compute-0 ovn_controller[156242]: 2025-09-30T18:38:18Z|00242|binding|INFO|Setting lport 6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e up in Southbound
Sep 30 18:38:18 compute-0 nova_compute[265391]: 2025-09-30 18:38:18.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:18 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:18.917 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[f5c70a99-5798-4a8b-a91a-b92373aaaf37]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:38:18 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:18.921 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[c4c7f636-fcb3-40e2-a234-676e4aa53c06]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:38:18 compute-0 NetworkManager[45059]: <info>  [1759257498.9226] manager: (tape214ea0f-10): new Veth device (/org/freedesktop/NetworkManager/Devices/96)
Sep 30 18:38:18 compute-0 systemd-udevd[342886]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:38:18 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:18.953 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[865a3498-2353-4d2f-9537-ab367e5a8aea]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:38:18 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:18.955 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[cb1552ba-0be1-4f39-a7a4-91ec7424fe10]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:38:18 compute-0 NetworkManager[45059]: <info>  [1759257498.9770] device (tape214ea0f-10): carrier: link connected
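Annotation: at this point the metadata agent has created the tape214ea0f-10/tape214ea0f-11 VETH pair, with the -11 end inside the ovnmeta-e214ea0f-... namespace. A quick way to confirm this from the host, assuming standard iproute2 tools (sketch only):

    # Sketch: confirm the metadata namespace and its veth end exist, using iproute2 via subprocess.
    import subprocess

    ns = "ovnmeta-e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4"

    # List network namespaces known to iproute2, then show links inside the metadata namespace.
    subprocess.run(["ip", "netns", "list"], check=True)
    subprocess.run(["ip", "netns", "exec", ns, "ip", "link", "show"], check=True)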
Sep 30 18:38:18 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:18.982 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[e6a397dc-37c9-4e8d-8ae1-54599973e671]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:38:18 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:18.998 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[1a23fd64-a582-4cfc-9c1b-a5020f5ec7cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape214ea0f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:e3:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 68], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609767, 'reachable_time': 29458, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342915, 'error': None, 'target': 'ovnmeta-e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:38:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:38:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:38:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:38:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:19.012 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[808f1ef5-48a0-4b35-82fd-4531fdcd2ace]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:e3e0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609767, 'tstamp': 609767}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342916, 'error': None, 'target': 'ovnmeta-e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:19.029 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[386f21d9-9011-4730-9309-57241e65106b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape214ea0f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:e3:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 68], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609767, 'reachable_time': 29458, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 342917, 'error': None, 'target': 'ovnmeta-e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:19.062 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[cfc27639-c73b-42b1-9ca1-71c40fef227c]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:38:19 compute-0 nova_compute[265391]: 2025-09-30 18:38:19.079 2 DEBUG nova.compute.manager [req-0037f581-314e-4c1b-9560-fb83ede36be3 req-5645cec5-487f-422e-86da-212f9faf6518 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Received event network-vif-plugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:38:19 compute-0 nova_compute[265391]: 2025-09-30 18:38:19.079 2 DEBUG oslo_concurrency.lockutils [req-0037f581-314e-4c1b-9560-fb83ede36be3 req-5645cec5-487f-422e-86da-212f9faf6518 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:38:19 compute-0 nova_compute[265391]: 2025-09-30 18:38:19.079 2 DEBUG oslo_concurrency.lockutils [req-0037f581-314e-4c1b-9560-fb83ede36be3 req-5645cec5-487f-422e-86da-212f9faf6518 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:38:19 compute-0 nova_compute[265391]: 2025-09-30 18:38:19.080 2 DEBUG oslo_concurrency.lockutils [req-0037f581-314e-4c1b-9560-fb83ede36be3 req-5645cec5-487f-422e-86da-212f9faf6518 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:38:19 compute-0 nova_compute[265391]: 2025-09-30 18:38:19.080 2 DEBUG nova.compute.manager [req-0037f581-314e-4c1b-9560-fb83ede36be3 req-5645cec5-487f-422e-86da-212f9faf6518 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Processing event network-vif-plugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:19.124 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[976a40dd-059c-4fb1-9e3a-edcf46b1c318]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:19.125 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape214ea0f-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:19.126 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:19.127 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape214ea0f-10, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:38:19 compute-0 kernel: tape214ea0f-10: entered promiscuous mode
Sep 30 18:38:19 compute-0 nova_compute[265391]: 2025-09-30 18:38:19.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:19 compute-0 NetworkManager[45059]: <info>  [1759257499.1294] manager: (tape214ea0f-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/97)
Sep 30 18:38:19 compute-0 nova_compute[265391]: 2025-09-30 18:38:19.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:19.134 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape214ea0f-10, col_values=(('external_ids', {'iface-id': '475706b8-809c-4da9-92ac-7152f6d17fbe'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:38:19 compute-0 nova_compute[265391]: 2025-09-30 18:38:19.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:19 compute-0 ovn_controller[156242]: 2025-09-30T18:38:19Z|00243|binding|INFO|Releasing lport 475706b8-809c-4da9-92ac-7152f6d17fbe from this chassis (sb_readonly=0)
Sep 30 18:38:19 compute-0 nova_compute[265391]: 2025-09-30 18:38:19.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:19.158 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[bfbdf112-13af-44ed-be82-5b584eb01c37]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:19.158 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:19.159 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:19.159 166158 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4 disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:19.159 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:19.160 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[308ef335-38b0-4722-a951-221de111d2aa]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:19.160 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:19.161 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[8ae4effd-9f8c-4bcc-a19e-254f27dd07bb]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:19.161 166158 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]: global
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]:     log         /dev/log local0 debug
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]:     log-tag     haproxy-metadata-proxy-e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]:     user        root
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]:     group       root
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]:     maxconn     1024
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]:     pidfile     /var/lib/neutron/external/pids/e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4.pid.haproxy
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]:     daemon
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]: defaults
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]:     log global
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]:     mode http
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]:     option httplog
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]:     option dontlognull
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]:     option http-server-close
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]:     option forwardfor
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]:     retries                 3
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]:     timeout http-request    30s
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]:     timeout connect         30s
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]:     timeout client          32s
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]:     timeout server          32s
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]:     timeout http-keep-alive 30s
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]: listen listener
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]:     bind 169.254.169.254:80
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]:     
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]:     server metadata /var/lib/neutron/metadata_proxy
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]:     http-request add-header X-OVN-Network-ID e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Sep 30 18:38:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:19.162 166158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4', 'env', 'PROCESS_TAG=haproxy-e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
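Annotation: the config dumped above is written under /var/lib/neutron/ovn-metadata-proxy/ and haproxy is then started inside the ovnmeta namespace via rootwrap. A minimal liveness check based on the pidfile path from that config (sketch, not part of the agent):

    # Sketch: check that the metadata-proxy haproxy started from the config above is alive.
    # The pidfile path comes from the generated config; the /proc lookup is the assumption here.
    import os

    pidfile = "/var/lib/neutron/external/pids/e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4.pid.haproxy"

    with open(pidfile) as f:
        pid = int(f.read().strip())

    # If the process directory exists, the haproxy master process is still running.
    print("haproxy pid", pid, "alive:", os.path.exists(f"/proc/{pid}"))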
Sep 30 18:38:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:38:19.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:38:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:38:19.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:38:19 compute-0 podman[342967]: 2025-09-30 18:38:19.521981858 +0000 UTC m=+0.042652032 container create 778c8eb18d4466bea022c6d67bcca3404fa917a3dc9bba3a22f5ed1a0586c71b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:38:19 compute-0 systemd[1]: Started libpod-conmon-778c8eb18d4466bea022c6d67bcca3404fa917a3dc9bba3a22f5ed1a0586c71b.scope.
Sep 30 18:38:19 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:38:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75db38e2986d69eea3e5e76df5cd53a8fcd73f8e7f0cdfd99ea3c2b763906598/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Sep 30 18:38:19 compute-0 podman[342967]: 2025-09-30 18:38:19.499720269 +0000 UTC m=+0.020390473 image pull 8925e336c8a9c8e5b40d98e4715ad60992d6f21ad0a398170a686c34f922c024 38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Sep 30 18:38:19 compute-0 podman[342967]: 2025-09-30 18:38:19.59823431 +0000 UTC m=+0.118904514 container init 778c8eb18d4466bea022c6d67bcca3404fa917a3dc9bba3a22f5ed1a0586c71b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:38:19 compute-0 podman[342967]: 2025-09-30 18:38:19.608764159 +0000 UTC m=+0.129434333 container start 778c8eb18d4466bea022c6d67bcca3404fa917a3dc9bba3a22f5ed1a0586c71b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:38:19 compute-0 neutron-haproxy-ovnmeta-e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4[342998]: [NOTICE]   (343010) : New worker (343013) forked
Sep 30 18:38:19 compute-0 neutron-haproxy-ovnmeta-e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4[342998]: [NOTICE]   (343010) : Loading success.
Sep 30 18:38:19 compute-0 ceph-mon[73755]: pgmap v1793: 353 pgs: 353 active+clean; 88 MiB data, 350 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:38:20 compute-0 nova_compute[265391]: 2025-09-30 18:38:20.032 2 DEBUG nova.compute.manager [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:38:20 compute-0 nova_compute[265391]: 2025-09-30 18:38:20.035 2 DEBUG nova.virt.libvirt.driver [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Sep 30 18:38:20 compute-0 nova_compute[265391]: 2025-09-30 18:38:20.038 2 INFO nova.virt.libvirt.driver [-] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Instance spawned successfully.
Sep 30 18:38:20 compute-0 nova_compute[265391]: 2025-09-30 18:38:20.038 2 DEBUG nova.virt.libvirt.driver [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Sep 30 18:38:20 compute-0 nova_compute[265391]: 2025-09-30 18:38:20.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1794: 353 pgs: 353 active+clean; 88 MiB data, 350 MiB used, 40 GiB / 40 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Sep 30 18:38:20 compute-0 nova_compute[265391]: 2025-09-30 18:38:20.550 2 DEBUG nova.virt.libvirt.driver [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:38:20 compute-0 nova_compute[265391]: 2025-09-30 18:38:20.550 2 DEBUG nova.virt.libvirt.driver [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:38:20 compute-0 nova_compute[265391]: 2025-09-30 18:38:20.551 2 DEBUG nova.virt.libvirt.driver [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:38:20 compute-0 nova_compute[265391]: 2025-09-30 18:38:20.551 2 DEBUG nova.virt.libvirt.driver [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:38:20 compute-0 nova_compute[265391]: 2025-09-30 18:38:20.552 2 DEBUG nova.virt.libvirt.driver [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:38:20 compute-0 nova_compute[265391]: 2025-09-30 18:38:20.552 2 DEBUG nova.virt.libvirt.driver [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:38:20 compute-0 ceph-mon[73755]: pgmap v1794: 353 pgs: 353 active+clean; 88 MiB data, 350 MiB used, 40 GiB / 40 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Sep 30 18:38:21 compute-0 nova_compute[265391]: 2025-09-30 18:38:21.061 2 INFO nova.compute.manager [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Took 11.62 seconds to spawn the instance on the hypervisor.
Sep 30 18:38:21 compute-0 nova_compute[265391]: 2025-09-30 18:38:21.062 2 DEBUG nova.compute.manager [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Sep 30 18:38:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:38:21 compute-0 nova_compute[265391]: 2025-09-30 18:38:21.141 2 DEBUG nova.compute.manager [req-a2428b64-d2c7-4ff8-81b2-21bc8a7c46d3 req-349486e3-0a2b-4915-9719-803c1e83cfef 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Received event network-vif-plugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:38:21 compute-0 nova_compute[265391]: 2025-09-30 18:38:21.142 2 DEBUG oslo_concurrency.lockutils [req-a2428b64-d2c7-4ff8-81b2-21bc8a7c46d3 req-349486e3-0a2b-4915-9719-803c1e83cfef 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:38:21 compute-0 nova_compute[265391]: 2025-09-30 18:38:21.143 2 DEBUG oslo_concurrency.lockutils [req-a2428b64-d2c7-4ff8-81b2-21bc8a7c46d3 req-349486e3-0a2b-4915-9719-803c1e83cfef 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:38:21 compute-0 nova_compute[265391]: 2025-09-30 18:38:21.143 2 DEBUG oslo_concurrency.lockutils [req-a2428b64-d2c7-4ff8-81b2-21bc8a7c46d3 req-349486e3-0a2b-4915-9719-803c1e83cfef 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:38:21 compute-0 nova_compute[265391]: 2025-09-30 18:38:21.144 2 DEBUG nova.compute.manager [req-a2428b64-d2c7-4ff8-81b2-21bc8a7c46d3 req-349486e3-0a2b-4915-9719-803c1e83cfef 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] No waiting events found dispatching network-vif-plugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:38:21 compute-0 nova_compute[265391]: 2025-09-30 18:38:21.145 2 WARNING nova.compute.manager [req-a2428b64-d2c7-4ff8-81b2-21bc8a7c46d3 req-349486e3-0a2b-4915-9719-803c1e83cfef 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Received unexpected event network-vif-plugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e for instance with vm_state active and task_state None.
Sep 30 18:38:21 compute-0 nova_compute[265391]: 2025-09-30 18:38:21.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:38:21.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:38:21.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:21 compute-0 nova_compute[265391]: 2025-09-30 18:38:21.598 2 INFO nova.compute.manager [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Took 17.23 seconds to build instance.
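Annotation: with the build complete, the instance should report ACTIVE. A hedged check via openstacksdk, under the same clouds.yaml credential assumption as the earlier sketch:

    # Sketch: confirm the freshly built instance is ACTIVE using openstacksdk.
    # Cloud name/credentials are assumptions; the UUID is the instance from the log.
    import openstack

    conn = openstack.connect(cloud="default")
    server = conn.compute.get_server("4f91975d-d44b-46af-9879-dbf7a693fbd2")
    print(server.name, server.status)  # expected: ACTIVE after a successful spawn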
Sep 30 18:38:22 compute-0 nova_compute[265391]: 2025-09-30 18:38:22.106 2 DEBUG oslo_concurrency.lockutils [None req-8f1c6c10-07e2-4bde-84f3-b07287b9767f eda3e60f66494c8682f36b8a8fa20793 003b1a96324d40b683381237c3cec243 - - default default] Lock "4f91975d-d44b-46af-9879-dbf7a693fbd2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.752s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:38:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:38:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:38:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:38:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1795: 353 pgs: 353 active+clean; 88 MiB data, 350 MiB used, 40 GiB / 40 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Sep 30 18:38:23 compute-0 ceph-mon[73755]: pgmap v1795: 353 pgs: 353 active+clean; 88 MiB data, 350 MiB used, 40 GiB / 40 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Sep 30 18:38:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:38:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:38:23.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:38:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:38:23.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:38:23.854Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:38:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:38:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:38:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:38:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:38:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1796: 353 pgs: 353 active+clean; 88 MiB data, 351 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 18:38:25 compute-0 sshd-session[343027]: Invalid user mysql from 185.156.73.233 port 61600
Sep 30 18:38:25 compute-0 sshd-session[343027]: Failed none for invalid user mysql from 185.156.73.233 port 61600 ssh2
Sep 30 18:38:25 compute-0 nova_compute[265391]: 2025-09-30 18:38:25.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:25 compute-0 sshd-session[343027]: Connection closed by invalid user mysql 185.156.73.233 port 61600 [preauth]
Sep 30 18:38:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:38:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:38:25.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:38:25 compute-0 ceph-mon[73755]: pgmap v1796: 353 pgs: 353 active+clean; 88 MiB data, 351 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 18:38:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:38:25.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:38:26 compute-0 nova_compute[265391]: 2025-09-30 18:38:26.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1797: 353 pgs: 353 active+clean; 88 MiB data, 351 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:38:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:38:27.332Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:38:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:38:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:38:27.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:38:27 compute-0 ceph-mon[73755]: pgmap v1797: 353 pgs: 353 active+clean; 88 MiB data, 351 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:38:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:38:27.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 18:38:27 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 22K writes, 81K keys, 22K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 22K writes, 7463 syncs, 2.98 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4034 writes, 15K keys, 4034 commit groups, 1.0 writes per commit group, ingest: 17.63 MB, 0.03 MB/s
                                           Interval WAL: 4034 writes, 1593 syncs, 2.53 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Sep 30 18:38:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1798: 353 pgs: 353 active+clean; 88 MiB data, 351 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:38:28 compute-0 podman[343033]: 2025-09-30 18:38:28.53395156 +0000 UTC m=+0.064215884 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Sep 30 18:38:28 compute-0 podman[343034]: 2025-09-30 18:38:28.534126265 +0000 UTC m=+0.067196711 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Sep 30 18:38:28 compute-0 podman[343035]: 2025-09-30 18:38:28.560290034 +0000 UTC m=+0.091167304 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2)
Sep 30 18:38:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:38:28] "GET /metrics HTTP/1.1" 200 46730 "" "Prometheus/2.51.0"
Sep 30 18:38:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:38:28] "GET /metrics HTTP/1.1" 200 46730 "" "Prometheus/2.51.0"
Sep 30 18:38:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:38:28.877Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:38:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:38:28.877Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:38:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:38:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:38:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:38:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:38:29 compute-0 sudo[343099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:38:29 compute-0 sudo[343099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:38:29 compute-0 sudo[343099]: pam_unix(sudo:session): session closed for user root
Sep 30 18:38:29 compute-0 sudo[343124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:38:29 compute-0 sudo[343124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:38:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:38:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:38:29.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:38:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:38:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:38:29.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:38:29 compute-0 ceph-mon[73755]: pgmap v1798: 353 pgs: 353 active+clean; 88 MiB data, 351 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:38:29 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1467164217' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:38:29 compute-0 sudo[343124]: pam_unix(sudo:session): session closed for user root
Sep 30 18:38:29 compute-0 podman[276673]: time="2025-09-30T18:38:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:38:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:38:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:38:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:38:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10761 "" "Go-http-client/1.1"
Sep 30 18:38:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:38:29 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:38:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:38:29 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:38:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1799: 353 pgs: 353 active+clean; 88 MiB data, 351 MiB used, 40 GiB / 40 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 77 op/s
Sep 30 18:38:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:38:29 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:38:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:38:29 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:38:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:38:29 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:38:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:38:29 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:38:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:38:29 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:38:29 compute-0 sudo[343182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:38:29 compute-0 sudo[343182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:38:29 compute-0 sudo[343182]: pam_unix(sudo:session): session closed for user root
Sep 30 18:38:30 compute-0 sudo[343207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:38:30 compute-0 sudo[343207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:38:30 compute-0 nova_compute[265391]: 2025-09-30 18:38:30.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:30 compute-0 podman[343274]: 2025-09-30 18:38:30.408100252 +0000 UTC m=+0.039543563 container create 3392c5cff423fe895a369a123998e1ae6a39193f06167301ddd66f5d77fc5003 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:38:30 compute-0 systemd[1]: Started libpod-conmon-3392c5cff423fe895a369a123998e1ae6a39193f06167301ddd66f5d77fc5003.scope.
Sep 30 18:38:30 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:38:30 compute-0 podman[343274]: 2025-09-30 18:38:30.483631824 +0000 UTC m=+0.115075135 container init 3392c5cff423fe895a369a123998e1ae6a39193f06167301ddd66f5d77fc5003 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_margulis, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:38:30 compute-0 podman[343274]: 2025-09-30 18:38:30.392729238 +0000 UTC m=+0.024172579 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:38:30 compute-0 podman[343274]: 2025-09-30 18:38:30.491291481 +0000 UTC m=+0.122734802 container start 3392c5cff423fe895a369a123998e1ae6a39193f06167301ddd66f5d77fc5003 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_margulis, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:38:30 compute-0 podman[343274]: 2025-09-30 18:38:30.494765129 +0000 UTC m=+0.126208470 container attach 3392c5cff423fe895a369a123998e1ae6a39193f06167301ddd66f5d77fc5003 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_margulis, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:38:30 compute-0 affectionate_margulis[343290]: 167 167
Sep 30 18:38:30 compute-0 systemd[1]: libpod-3392c5cff423fe895a369a123998e1ae6a39193f06167301ddd66f5d77fc5003.scope: Deactivated successfully.
Sep 30 18:38:30 compute-0 conmon[343290]: conmon 3392c5cff423fe895a36 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3392c5cff423fe895a369a123998e1ae6a39193f06167301ddd66f5d77fc5003.scope/container/memory.events
Sep 30 18:38:30 compute-0 podman[343274]: 2025-09-30 18:38:30.499558102 +0000 UTC m=+0.131001413 container died 3392c5cff423fe895a369a123998e1ae6a39193f06167301ddd66f5d77fc5003 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_margulis, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 18:38:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe45126522ed43466c67dc2e37bcd69be178b2f7028b4e901c0065077536f23e-merged.mount: Deactivated successfully.
Sep 30 18:38:30 compute-0 podman[343274]: 2025-09-30 18:38:30.536726813 +0000 UTC m=+0.168170124 container remove 3392c5cff423fe895a369a123998e1ae6a39193f06167301ddd66f5d77fc5003 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_margulis, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Sep 30 18:38:30 compute-0 systemd[1]: libpod-conmon-3392c5cff423fe895a369a123998e1ae6a39193f06167301ddd66f5d77fc5003.scope: Deactivated successfully.
Sep 30 18:38:30 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:38:30 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:38:30 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:38:30 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:38:30 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:38:30 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:38:30 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:38:30 compute-0 podman[343316]: 2025-09-30 18:38:30.714053171 +0000 UTC m=+0.040142288 container create 4f8a3649befc20c48995f4d3bbdb3a189e0d95c26fd889c0c0ba0825e81228c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:38:30 compute-0 systemd[1]: Started libpod-conmon-4f8a3649befc20c48995f4d3bbdb3a189e0d95c26fd889c0c0ba0825e81228c6.scope.
Sep 30 18:38:30 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:38:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6147773a1a44bd80c7ba59cbc412395c69b00a3af5ecb7aae04114d8a36db975/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:38:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6147773a1a44bd80c7ba59cbc412395c69b00a3af5ecb7aae04114d8a36db975/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:38:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6147773a1a44bd80c7ba59cbc412395c69b00a3af5ecb7aae04114d8a36db975/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:38:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6147773a1a44bd80c7ba59cbc412395c69b00a3af5ecb7aae04114d8a36db975/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:38:30 compute-0 podman[343316]: 2025-09-30 18:38:30.697311943 +0000 UTC m=+0.023401060 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:38:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6147773a1a44bd80c7ba59cbc412395c69b00a3af5ecb7aae04114d8a36db975/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:38:30 compute-0 podman[343316]: 2025-09-30 18:38:30.8109038 +0000 UTC m=+0.136992947 container init 4f8a3649befc20c48995f4d3bbdb3a189e0d95c26fd889c0c0ba0825e81228c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_faraday, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:38:30 compute-0 podman[343316]: 2025-09-30 18:38:30.822464285 +0000 UTC m=+0.148553402 container start 4f8a3649befc20c48995f4d3bbdb3a189e0d95c26fd889c0c0ba0825e81228c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_faraday, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:38:30 compute-0 podman[343316]: 2025-09-30 18:38:30.825971295 +0000 UTC m=+0.152060432 container attach 4f8a3649befc20c48995f4d3bbdb3a189e0d95c26fd889c0c0ba0825e81228c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_faraday, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:38:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:38:31 compute-0 flamboyant_faraday[343333]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:38:31 compute-0 flamboyant_faraday[343333]: --> All data devices are unavailable
Sep 30 18:38:31 compute-0 systemd[1]: libpod-4f8a3649befc20c48995f4d3bbdb3a189e0d95c26fd889c0c0ba0825e81228c6.scope: Deactivated successfully.
Sep 30 18:38:31 compute-0 podman[343316]: 2025-09-30 18:38:31.248096407 +0000 UTC m=+0.574185534 container died 4f8a3649befc20c48995f4d3bbdb3a189e0d95c26fd889c0c0ba0825e81228c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Sep 30 18:38:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-6147773a1a44bd80c7ba59cbc412395c69b00a3af5ecb7aae04114d8a36db975-merged.mount: Deactivated successfully.
Sep 30 18:38:31 compute-0 podman[343316]: 2025-09-30 18:38:31.292481413 +0000 UTC m=+0.618570530 container remove 4f8a3649befc20c48995f4d3bbdb3a189e0d95c26fd889c0c0ba0825e81228c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_faraday, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 18:38:31 compute-0 systemd[1]: libpod-conmon-4f8a3649befc20c48995f4d3bbdb3a189e0d95c26fd889c0c0ba0825e81228c6.scope: Deactivated successfully.
Sep 30 18:38:31 compute-0 sudo[343207]: pam_unix(sudo:session): session closed for user root
Sep 30 18:38:31 compute-0 nova_compute[265391]: 2025-09-30 18:38:31.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:31 compute-0 sudo[343362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:38:31 compute-0 sudo[343362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:38:31 compute-0 sudo[343362]: pam_unix(sudo:session): session closed for user root
Sep 30 18:38:31 compute-0 openstack_network_exporter[279566]: ERROR   18:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:38:31 compute-0 openstack_network_exporter[279566]: ERROR   18:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:38:31 compute-0 openstack_network_exporter[279566]: ERROR   18:38:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:38:31 compute-0 openstack_network_exporter[279566]: ERROR   18:38:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:38:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:38:31 compute-0 openstack_network_exporter[279566]: ERROR   18:38:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:38:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:38:31 compute-0 sudo[343387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:38:31 compute-0 sudo[343387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:38:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:38:31.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:38:31.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:31 compute-0 ceph-mon[73755]: pgmap v1799: 353 pgs: 353 active+clean; 88 MiB data, 351 MiB used, 40 GiB / 40 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 77 op/s
Sep 30 18:38:31 compute-0 podman[343454]: 2025-09-30 18:38:31.825897233 +0000 UTC m=+0.038133847 container create b05b21fa7ebff06fb216ad4d60f86cb0512fcbc10a507eb35af204864e0faccf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_shamir, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default)
Sep 30 18:38:31 compute-0 systemd[1]: Started libpod-conmon-b05b21fa7ebff06fb216ad4d60f86cb0512fcbc10a507eb35af204864e0faccf.scope.
Sep 30 18:38:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1800: 353 pgs: 353 active+clean; 88 MiB data, 351 MiB used, 40 GiB / 40 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 72 op/s
Sep 30 18:38:31 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:38:31 compute-0 podman[343454]: 2025-09-30 18:38:31.90508751 +0000 UTC m=+0.117324124 container init b05b21fa7ebff06fb216ad4d60f86cb0512fcbc10a507eb35af204864e0faccf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_shamir, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:38:31 compute-0 podman[343454]: 2025-09-30 18:38:31.810024927 +0000 UTC m=+0.022261561 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:38:31 compute-0 podman[343454]: 2025-09-30 18:38:31.912030327 +0000 UTC m=+0.124266941 container start b05b21fa7ebff06fb216ad4d60f86cb0512fcbc10a507eb35af204864e0faccf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_shamir, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:38:31 compute-0 thirsty_shamir[343471]: 167 167
Sep 30 18:38:31 compute-0 podman[343454]: 2025-09-30 18:38:31.915376923 +0000 UTC m=+0.127613557 container attach b05b21fa7ebff06fb216ad4d60f86cb0512fcbc10a507eb35af204864e0faccf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_shamir, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:38:31 compute-0 systemd[1]: libpod-b05b21fa7ebff06fb216ad4d60f86cb0512fcbc10a507eb35af204864e0faccf.scope: Deactivated successfully.
Sep 30 18:38:31 compute-0 podman[343454]: 2025-09-30 18:38:31.916102462 +0000 UTC m=+0.128339076 container died b05b21fa7ebff06fb216ad4d60f86cb0512fcbc10a507eb35af204864e0faccf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_shamir, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:38:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-559e7eb55f92027ff36c3acffecf3845984eb459cf23f51e8be2adb9eaaa65cc-merged.mount: Deactivated successfully.
Sep 30 18:38:31 compute-0 podman[343454]: 2025-09-30 18:38:31.950930203 +0000 UTC m=+0.163166817 container remove b05b21fa7ebff06fb216ad4d60f86cb0512fcbc10a507eb35af204864e0faccf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_shamir, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Sep 30 18:38:31 compute-0 systemd[1]: libpod-conmon-b05b21fa7ebff06fb216ad4d60f86cb0512fcbc10a507eb35af204864e0faccf.scope: Deactivated successfully.
Sep 30 18:38:32 compute-0 podman[343495]: 2025-09-30 18:38:32.110968409 +0000 UTC m=+0.043732081 container create 2c77904613dcee72eb3aa561133c7f3261b6886c389a18a5e67ea2910b1d8041 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_lamarr, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:38:32 compute-0 systemd[1]: Started libpod-conmon-2c77904613dcee72eb3aa561133c7f3261b6886c389a18a5e67ea2910b1d8041.scope.
Sep 30 18:38:32 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:38:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a63bd0c04a0ceaec800bfe18948648bb7d1db1b40338a20dd52a453aee40d9ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:38:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a63bd0c04a0ceaec800bfe18948648bb7d1db1b40338a20dd52a453aee40d9ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:38:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a63bd0c04a0ceaec800bfe18948648bb7d1db1b40338a20dd52a453aee40d9ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:38:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a63bd0c04a0ceaec800bfe18948648bb7d1db1b40338a20dd52a453aee40d9ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:38:32 compute-0 podman[343495]: 2025-09-30 18:38:32.092141257 +0000 UTC m=+0.024904939 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:38:32 compute-0 podman[343495]: 2025-09-30 18:38:32.194173878 +0000 UTC m=+0.126937570 container init 2c77904613dcee72eb3aa561133c7f3261b6886c389a18a5e67ea2910b1d8041 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Sep 30 18:38:32 compute-0 podman[343495]: 2025-09-30 18:38:32.199880964 +0000 UTC m=+0.132644636 container start 2c77904613dcee72eb3aa561133c7f3261b6886c389a18a5e67ea2910b1d8041 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_lamarr, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Sep 30 18:38:32 compute-0 podman[343495]: 2025-09-30 18:38:32.204528153 +0000 UTC m=+0.137291845 container attach 2c77904613dcee72eb3aa561133c7f3261b6886c389a18a5e67ea2910b1d8041 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_lamarr, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:38:32 compute-0 xenodochial_lamarr[343511]: {
Sep 30 18:38:32 compute-0 xenodochial_lamarr[343511]:     "0": [
Sep 30 18:38:32 compute-0 xenodochial_lamarr[343511]:         {
Sep 30 18:38:32 compute-0 xenodochial_lamarr[343511]:             "devices": [
Sep 30 18:38:32 compute-0 xenodochial_lamarr[343511]:                 "/dev/loop3"
Sep 30 18:38:32 compute-0 xenodochial_lamarr[343511]:             ],
Sep 30 18:38:32 compute-0 xenodochial_lamarr[343511]:             "lv_name": "ceph_lv0",
Sep 30 18:38:32 compute-0 xenodochial_lamarr[343511]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:38:32 compute-0 xenodochial_lamarr[343511]:             "lv_size": "21470642176",
Sep 30 18:38:32 compute-0 xenodochial_lamarr[343511]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:38:32 compute-0 xenodochial_lamarr[343511]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:38:32 compute-0 xenodochial_lamarr[343511]:             "name": "ceph_lv0",
Sep 30 18:38:32 compute-0 xenodochial_lamarr[343511]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:38:32 compute-0 xenodochial_lamarr[343511]:             "tags": {
Sep 30 18:38:32 compute-0 xenodochial_lamarr[343511]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:38:32 compute-0 xenodochial_lamarr[343511]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:38:32 compute-0 xenodochial_lamarr[343511]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:38:32 compute-0 xenodochial_lamarr[343511]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:38:32 compute-0 xenodochial_lamarr[343511]:                 "ceph.cluster_name": "ceph",
Sep 30 18:38:32 compute-0 xenodochial_lamarr[343511]:                 "ceph.crush_device_class": "",
Sep 30 18:38:32 compute-0 xenodochial_lamarr[343511]:                 "ceph.encrypted": "0",
Sep 30 18:38:32 compute-0 xenodochial_lamarr[343511]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:38:32 compute-0 xenodochial_lamarr[343511]:                 "ceph.osd_id": "0",
Sep 30 18:38:32 compute-0 xenodochial_lamarr[343511]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:38:32 compute-0 xenodochial_lamarr[343511]:                 "ceph.type": "block",
Sep 30 18:38:32 compute-0 xenodochial_lamarr[343511]:                 "ceph.vdo": "0",
Sep 30 18:38:32 compute-0 xenodochial_lamarr[343511]:                 "ceph.with_tpm": "0"
Sep 30 18:38:32 compute-0 xenodochial_lamarr[343511]:             },
Sep 30 18:38:32 compute-0 xenodochial_lamarr[343511]:             "type": "block",
Sep 30 18:38:32 compute-0 xenodochial_lamarr[343511]:             "vg_name": "ceph_vg0"
Sep 30 18:38:32 compute-0 xenodochial_lamarr[343511]:         }
Sep 30 18:38:32 compute-0 xenodochial_lamarr[343511]:     ]
Sep 30 18:38:32 compute-0 xenodochial_lamarr[343511]: }
Sep 30 18:38:32 compute-0 systemd[1]: libpod-2c77904613dcee72eb3aa561133c7f3261b6886c389a18a5e67ea2910b1d8041.scope: Deactivated successfully.
Sep 30 18:38:32 compute-0 podman[343495]: 2025-09-30 18:38:32.530199238 +0000 UTC m=+0.462962920 container died 2c77904613dcee72eb3aa561133c7f3261b6886c389a18a5e67ea2910b1d8041 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Sep 30 18:38:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-a63bd0c04a0ceaec800bfe18948648bb7d1db1b40338a20dd52a453aee40d9ec-merged.mount: Deactivated successfully.
Sep 30 18:38:32 compute-0 podman[343495]: 2025-09-30 18:38:32.572695965 +0000 UTC m=+0.505459627 container remove 2c77904613dcee72eb3aa561133c7f3261b6886c389a18a5e67ea2910b1d8041 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_lamarr, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True)
Sep 30 18:38:32 compute-0 systemd[1]: libpod-conmon-2c77904613dcee72eb3aa561133c7f3261b6886c389a18a5e67ea2910b1d8041.scope: Deactivated successfully.
Sep 30 18:38:32 compute-0 sudo[343387]: pam_unix(sudo:session): session closed for user root
Sep 30 18:38:32 compute-0 sudo[343533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:38:32 compute-0 sudo[343533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:38:32 compute-0 sudo[343533]: pam_unix(sudo:session): session closed for user root
Sep 30 18:38:32 compute-0 sudo[343558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:38:32 compute-0 sudo[343558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:38:33 compute-0 podman[343622]: 2025-09-30 18:38:33.097993738 +0000 UTC m=+0.040171029 container create ad35fd192c5524578c1f4c8f75beb6794bfb91a6f97484c25df4e8f9364c879e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_cray, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:38:33 compute-0 systemd[1]: Started libpod-conmon-ad35fd192c5524578c1f4c8f75beb6794bfb91a6f97484c25df4e8f9364c879e.scope.
Sep 30 18:38:33 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:38:33 compute-0 podman[343622]: 2025-09-30 18:38:33.081502216 +0000 UTC m=+0.023679527 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:38:33 compute-0 podman[343622]: 2025-09-30 18:38:33.187090618 +0000 UTC m=+0.129267929 container init ad35fd192c5524578c1f4c8f75beb6794bfb91a6f97484c25df4e8f9364c879e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_cray, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Sep 30 18:38:33 compute-0 podman[343622]: 2025-09-30 18:38:33.193223225 +0000 UTC m=+0.135400516 container start ad35fd192c5524578c1f4c8f75beb6794bfb91a6f97484c25df4e8f9364c879e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Sep 30 18:38:33 compute-0 podman[343622]: 2025-09-30 18:38:33.195712318 +0000 UTC m=+0.137889609 container attach ad35fd192c5524578c1f4c8f75beb6794bfb91a6f97484c25df4e8f9364c879e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_cray, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:38:33 compute-0 mystifying_cray[343638]: 167 167
Sep 30 18:38:33 compute-0 systemd[1]: libpod-ad35fd192c5524578c1f4c8f75beb6794bfb91a6f97484c25df4e8f9364c879e.scope: Deactivated successfully.
Sep 30 18:38:33 compute-0 podman[343622]: 2025-09-30 18:38:33.1973228 +0000 UTC m=+0.139500091 container died ad35fd192c5524578c1f4c8f75beb6794bfb91a6f97484c25df4e8f9364c879e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_cray, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Sep 30 18:38:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8e69cf8e92527d6a2ff250798926074ac650a6e42eb0cb7848ff667a4f96c81-merged.mount: Deactivated successfully.
Sep 30 18:38:33 compute-0 podman[343622]: 2025-09-30 18:38:33.229831862 +0000 UTC m=+0.172009153 container remove ad35fd192c5524578c1f4c8f75beb6794bfb91a6f97484c25df4e8f9364c879e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_cray, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Sep 30 18:38:33 compute-0 systemd[1]: libpod-conmon-ad35fd192c5524578c1f4c8f75beb6794bfb91a6f97484c25df4e8f9364c879e.scope: Deactivated successfully.
Sep 30 18:38:33 compute-0 ovn_controller[156242]: 2025-09-30T18:38:33Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6c:be:81 10.100.0.11
Sep 30 18:38:33 compute-0 ovn_controller[156242]: 2025-09-30T18:38:33Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6c:be:81 10.100.0.11
Sep 30 18:38:33 compute-0 podman[343664]: 2025-09-30 18:38:33.384160021 +0000 UTC m=+0.035591842 container create e4e7120de48f17cb02419c1cf8199c499a96650bf6a78e26aea39dfc41d0a4ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_jemison, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:38:33 compute-0 systemd[1]: Started libpod-conmon-e4e7120de48f17cb02419c1cf8199c499a96650bf6a78e26aea39dfc41d0a4ae.scope.
Sep 30 18:38:33 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:38:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc4540c843d49dccd6f486f687565c191c03f4c2f5075c169741e93e5192031d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:38:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc4540c843d49dccd6f486f687565c191c03f4c2f5075c169741e93e5192031d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:38:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc4540c843d49dccd6f486f687565c191c03f4c2f5075c169741e93e5192031d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:38:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:38:33.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc4540c843d49dccd6f486f687565c191c03f4c2f5075c169741e93e5192031d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:38:33 compute-0 podman[343664]: 2025-09-30 18:38:33.369789303 +0000 UTC m=+0.021221154 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:38:33 compute-0 podman[343664]: 2025-09-30 18:38:33.481084531 +0000 UTC m=+0.132516412 container init e4e7120de48f17cb02419c1cf8199c499a96650bf6a78e26aea39dfc41d0a4ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_jemison, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Sep 30 18:38:33 compute-0 podman[343664]: 2025-09-30 18:38:33.488870661 +0000 UTC m=+0.140302492 container start e4e7120de48f17cb02419c1cf8199c499a96650bf6a78e26aea39dfc41d0a4ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_jemison, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Sep 30 18:38:33 compute-0 podman[343664]: 2025-09-30 18:38:33.492717919 +0000 UTC m=+0.144149750 container attach e4e7120de48f17cb02419c1cf8199c499a96650bf6a78e26aea39dfc41d0a4ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 18:38:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:38:33.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:33 compute-0 ceph-mon[73755]: pgmap v1800: 353 pgs: 353 active+clean; 88 MiB data, 351 MiB used, 40 GiB / 40 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 72 op/s
Sep 30 18:38:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:38:33.854Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:38:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:38:33.854Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:38:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1801: 353 pgs: 353 active+clean; 167 MiB data, 391 MiB used, 40 GiB / 40 GiB avail; 2.3 MiB/s rd, 4.1 MiB/s wr, 163 op/s
Sep 30 18:38:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:38:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:38:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:38:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:38:34 compute-0 lvm[343756]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:38:34 compute-0 lvm[343756]: VG ceph_vg0 finished
Sep 30 18:38:34 compute-0 eloquent_jemison[343680]: {}
Sep 30 18:38:34 compute-0 systemd[1]: libpod-e4e7120de48f17cb02419c1cf8199c499a96650bf6a78e26aea39dfc41d0a4ae.scope: Deactivated successfully.
Sep 30 18:38:34 compute-0 podman[343664]: 2025-09-30 18:38:34.176263781 +0000 UTC m=+0.827695612 container died e4e7120de48f17cb02419c1cf8199c499a96650bf6a78e26aea39dfc41d0a4ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True)
Sep 30 18:38:34 compute-0 systemd[1]: libpod-e4e7120de48f17cb02419c1cf8199c499a96650bf6a78e26aea39dfc41d0a4ae.scope: Consumed 1.057s CPU time.
Sep 30 18:38:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc4540c843d49dccd6f486f687565c191c03f4c2f5075c169741e93e5192031d-merged.mount: Deactivated successfully.
Sep 30 18:38:34 compute-0 podman[343664]: 2025-09-30 18:38:34.227470792 +0000 UTC m=+0.878902623 container remove e4e7120de48f17cb02419c1cf8199c499a96650bf6a78e26aea39dfc41d0a4ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_jemison, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Sep 30 18:38:34 compute-0 systemd[1]: libpod-conmon-e4e7120de48f17cb02419c1cf8199c499a96650bf6a78e26aea39dfc41d0a4ae.scope: Deactivated successfully.
Sep 30 18:38:34 compute-0 sudo[343558]: pam_unix(sudo:session): session closed for user root
Sep 30 18:38:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:38:34 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:38:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:38:34 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:38:34 compute-0 sudo[343773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:38:34 compute-0 sudo[343773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:38:34 compute-0 sudo[343773]: pam_unix(sudo:session): session closed for user root
Sep 30 18:38:35 compute-0 ceph-mon[73755]: pgmap v1801: 353 pgs: 353 active+clean; 167 MiB data, 391 MiB used, 40 GiB / 40 GiB avail; 2.3 MiB/s rd, 4.1 MiB/s wr, 163 op/s
Sep 30 18:38:35 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:38:35 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:38:35 compute-0 nova_compute[265391]: 2025-09-30 18:38:35.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:38:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:38:35.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:38:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:38:35.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1802: 353 pgs: 353 active+clean; 167 MiB data, 391 MiB used, 40 GiB / 40 GiB avail; 352 KiB/s rd, 4.1 MiB/s wr, 91 op/s
Sep 30 18:38:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:38:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2887908528' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:38:36 compute-0 nova_compute[265391]: 2025-09-30 18:38:36.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:38:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3144201013' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:38:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:38:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3144201013' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:38:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:38:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:38:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:38:37.333Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:38:37 compute-0 ceph-mon[73755]: pgmap v1802: 353 pgs: 353 active+clean; 167 MiB data, 391 MiB used, 40 GiB / 40 GiB avail; 352 KiB/s rd, 4.1 MiB/s wr, 91 op/s
Sep 30 18:38:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3144201013' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:38:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3144201013' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:38:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/405714817' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:38:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:38:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:38:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:38:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:38:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:38:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:38:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:38:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:38:37.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:38:37.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1803: 353 pgs: 353 active+clean; 167 MiB data, 391 MiB used, 40 GiB / 40 GiB avail; 352 KiB/s rd, 4.1 MiB/s wr, 91 op/s
Sep 30 18:38:38 compute-0 sudo[343802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:38:38 compute-0 sudo[343802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:38:38 compute-0 sudo[343802]: pam_unix(sudo:session): session closed for user root
Sep 30 18:38:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:38:38] "GET /metrics HTTP/1.1" 200 46728 "" "Prometheus/2.51.0"
Sep 30 18:38:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:38:38] "GET /metrics HTTP/1.1" 200 46728 "" "Prometheus/2.51.0"
Sep 30 18:38:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:38:38.878Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:38:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:38:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:38:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:38:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:38:39 compute-0 ceph-mon[73755]: pgmap v1803: 353 pgs: 353 active+clean; 167 MiB data, 391 MiB used, 40 GiB / 40 GiB avail; 352 KiB/s rd, 4.1 MiB/s wr, 91 op/s
Sep 30 18:38:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:38:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:38:39.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:38:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:38:39.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1804: 353 pgs: 353 active+clean; 167 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 354 KiB/s rd, 4.1 MiB/s wr, 94 op/s
Sep 30 18:38:40 compute-0 nova_compute[265391]: 2025-09-30 18:38:40.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:40 compute-0 podman[343830]: 2025-09-30 18:38:40.540728829 +0000 UTC m=+0.071130942 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 18:38:40 compute-0 podman[343829]: 2025-09-30 18:38:40.545240744 +0000 UTC m=+0.076695744 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Sep 30 18:38:40 compute-0 podman[343831]: 2025-09-30 18:38:40.576455743 +0000 UTC m=+0.097121597 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9)
Sep 30 18:38:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:38:41 compute-0 nova_compute[265391]: 2025-09-30 18:38:41.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:41 compute-0 ceph-mon[73755]: pgmap v1804: 353 pgs: 353 active+clean; 167 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 354 KiB/s rd, 4.1 MiB/s wr, 94 op/s
Sep 30 18:38:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:38:41.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:38:41.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1805: 353 pgs: 353 active+clean; 167 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 338 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Sep 30 18:38:43 compute-0 ceph-mon[73755]: pgmap v1805: 353 pgs: 353 active+clean; 167 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 338 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Sep 30 18:38:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:38:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:38:43.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:38:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:38:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:38:43.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:38:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:38:43.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:38:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1806: 353 pgs: 353 active+clean; 167 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 163 op/s
Sep 30 18:38:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:38:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:38:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:38:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:38:44 compute-0 nova_compute[265391]: 2025-09-30 18:38:44.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:38:45 compute-0 nova_compute[265391]: 2025-09-30 18:38:45.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:38:45.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:45 compute-0 ceph-mon[73755]: pgmap v1806: 353 pgs: 353 active+clean; 167 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 163 op/s
Sep 30 18:38:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:38:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:38:45.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:38:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1807: 353 pgs: 353 active+clean; 167 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 76 op/s
Sep 30 18:38:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:38:46 compute-0 nova_compute[265391]: 2025-09-30 18:38:46.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:46 compute-0 nova_compute[265391]: 2025-09-30 18:38:46.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:38:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:38:47.334Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:38:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:38:47.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:47 compute-0 ceph-mon[73755]: pgmap v1807: 353 pgs: 353 active+clean; 167 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 76 op/s
Sep 30 18:38:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:38:47.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1808: 353 pgs: 353 active+clean; 167 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 76 op/s
Sep 30 18:38:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:38:48] "GET /metrics HTTP/1.1" 200 46728 "" "Prometheus/2.51.0"
Sep 30 18:38:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:38:48] "GET /metrics HTTP/1.1" 200 46728 "" "Prometheus/2.51.0"
Sep 30 18:38:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:38:48.880Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:38:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:38:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:38:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:38:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:38:49 compute-0 nova_compute[265391]: 2025-09-30 18:38:49.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:38:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:38:49.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:38:49.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:49 compute-0 ceph-mon[73755]: pgmap v1808: 353 pgs: 353 active+clean; 167 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 76 op/s
Sep 30 18:38:49 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1976696295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:38:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1809: 353 pgs: 353 active+clean; 167 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 77 op/s
Sep 30 18:38:49 compute-0 nova_compute[265391]: 2025-09-30 18:38:49.939 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:38:49 compute-0 nova_compute[265391]: 2025-09-30 18:38:49.939 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:38:49 compute-0 nova_compute[265391]: 2025-09-30 18:38:49.940 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:38:49 compute-0 nova_compute[265391]: 2025-09-30 18:38:49.940 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:38:49 compute-0 nova_compute[265391]: 2025-09-30 18:38:49.940 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:38:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:38:50 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3195171026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:38:50 compute-0 nova_compute[265391]: 2025-09-30 18:38:50.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:50 compute-0 nova_compute[265391]: 2025-09-30 18:38:50.398 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:38:50 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3195171026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:38:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:38:51 compute-0 nova_compute[265391]: 2025-09-30 18:38:51.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:51 compute-0 nova_compute[265391]: 2025-09-30 18:38:51.439 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:38:51 compute-0 nova_compute[265391]: 2025-09-30 18:38:51.439 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:38:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:38:51.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:38:51.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:51 compute-0 nova_compute[265391]: 2025-09-30 18:38:51.585 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:38:51 compute-0 nova_compute[265391]: 2025-09-30 18:38:51.586 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:38:51 compute-0 nova_compute[265391]: 2025-09-30 18:38:51.615 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.029s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:38:51 compute-0 nova_compute[265391]: 2025-09-30 18:38:51.616 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4134MB free_disk=39.925785064697266GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:38:51 compute-0 nova_compute[265391]: 2025-09-30 18:38:51.616 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:38:51 compute-0 nova_compute[265391]: 2025-09-30 18:38:51.616 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:38:51 compute-0 ceph-mon[73755]: pgmap v1809: 353 pgs: 353 active+clean; 167 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 77 op/s
Sep 30 18:38:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1810: 353 pgs: 353 active+clean; 167 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 74 op/s
Sep 30 18:38:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:38:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:38:52 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2835959614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:38:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:38:52 compute-0 nova_compute[265391]: 2025-09-30 18:38:52.664 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Instance 4f91975d-d44b-46af-9879-dbf7a693fbd2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Sep 30 18:38:52 compute-0 nova_compute[265391]: 2025-09-30 18:38:52.664 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:38:52 compute-0 nova_compute[265391]: 2025-09-30 18:38:52.665 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=39GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:38:51 up  1:42,  0 user,  load average: 0.75, 0.84, 0.86\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_None': '1', 'num_os_type_None': '1', 'num_proj_003b1a96324d40b683381237c3cec243': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:38:52 compute-0 nova_compute[265391]: 2025-09-30 18:38:52.696 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:38:53 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:38:53 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/366044204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:38:53 compute-0 nova_compute[265391]: 2025-09-30 18:38:53.122 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:38:53 compute-0 nova_compute[265391]: 2025-09-30 18:38:53.127 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:38:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:38:53.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:38:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:38:53.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:38:53 compute-0 nova_compute[265391]: 2025-09-30 18:38:53.636 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:38:53 compute-0 ceph-mon[73755]: pgmap v1810: 353 pgs: 353 active+clean; 167 MiB data, 397 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 74 op/s
Sep 30 18:38:53 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/366044204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:38:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:38:53.857Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:38:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1811: 353 pgs: 353 active+clean; 188 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 119 op/s
Sep 30 18:38:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:38:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:38:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:38:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:38:54 compute-0 nova_compute[265391]: 2025-09-30 18:38:54.146 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:38:54 compute-0 nova_compute[265391]: 2025-09-30 18:38:54.146 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.530s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:38:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:54.328 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:38:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:54.329 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:38:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:38:54.329 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:38:54 compute-0 ceph-mon[73755]: pgmap v1811: 353 pgs: 353 active+clean; 188 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 119 op/s
Sep 30 18:38:55 compute-0 nova_compute[265391]: 2025-09-30 18:38:55.147 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:38:55 compute-0 nova_compute[265391]: 2025-09-30 18:38:55.148 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:38:55 compute-0 nova_compute[265391]: 2025-09-30 18:38:55.148 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:38:55 compute-0 nova_compute[265391]: 2025-09-30 18:38:55.149 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:38:55 compute-0 nova_compute[265391]: 2025-09-30 18:38:55.149 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:38:55 compute-0 nova_compute[265391]: 2025-09-30 18:38:55.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:38:55.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:38:55.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1812: 353 pgs: 353 active+clean; 188 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 234 KiB/s rd, 2.0 MiB/s wr, 45 op/s
Sep 30 18:38:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:38:56 compute-0 nova_compute[265391]: 2025-09-30 18:38:56.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:38:56 compute-0 ceph-mon[73755]: pgmap v1812: 353 pgs: 353 active+clean; 188 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 234 KiB/s rd, 2.0 MiB/s wr, 45 op/s
Sep 30 18:38:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:38:57.334Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:38:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:38:57.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:38:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2757991385' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:38:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:38:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2757991385' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:38:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:38:57.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1813: 353 pgs: 353 active+clean; 188 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 234 KiB/s rd, 2.0 MiB/s wr, 45 op/s
Sep 30 18:38:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2757991385' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:38:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2757991385' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:38:58 compute-0 sudo[343956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:38:58 compute-0 sudo[343956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:38:58 compute-0 sudo[343956]: pam_unix(sudo:session): session closed for user root
Sep 30 18:38:58 compute-0 podman[343982]: 2025-09-30 18:38:58.78158447 +0000 UTC m=+0.060501069 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Sep 30 18:38:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:38:58] "GET /metrics HTTP/1.1" 200 46723 "" "Prometheus/2.51.0"
Sep 30 18:38:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:38:58] "GET /metrics HTTP/1.1" 200 46723 "" "Prometheus/2.51.0"
Sep 30 18:38:58 compute-0 podman[343980]: 2025-09-30 18:38:58.806517988 +0000 UTC m=+0.086457673 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2)
Sep 30 18:38:58 compute-0 podman[343981]: 2025-09-30 18:38:58.835648774 +0000 UTC m=+0.112359527 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Sep 30 18:38:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:38:58.881Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:38:58 compute-0 ceph-mon[73755]: pgmap v1813: 353 pgs: 353 active+clean; 188 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 234 KiB/s rd, 2.0 MiB/s wr, 45 op/s
Sep 30 18:38:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:38:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:38:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:38:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:38:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:38:59 compute-0 nova_compute[265391]: 2025-09-30 18:38:59.424 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:38:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:38:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:38:59.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:38:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:38:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:38:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:38:59.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:38:59 compute-0 podman[276673]: time="2025-09-30T18:38:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:38:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:38:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:38:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:38:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10762 "" "Go-http-client/1.1"
Sep 30 18:38:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1814: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:39:00 compute-0 nova_compute[265391]: 2025-09-30 18:39:00.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:00 compute-0 ceph-mon[73755]: pgmap v1814: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:39:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:39:01 compute-0 nova_compute[265391]: 2025-09-30 18:39:01.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:01 compute-0 openstack_network_exporter[279566]: ERROR   18:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:39:01 compute-0 openstack_network_exporter[279566]: ERROR   18:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:39:01 compute-0 openstack_network_exporter[279566]: ERROR   18:39:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:39:01 compute-0 openstack_network_exporter[279566]: ERROR   18:39:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:39:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:39:01 compute-0 openstack_network_exporter[279566]: ERROR   18:39:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:39:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:39:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:39:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:39:01.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:39:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:39:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:39:01.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:39:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1815: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:39:03 compute-0 ceph-mon[73755]: pgmap v1815: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:39:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:39:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:39:03.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:39:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:39:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:39:03.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:39:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:39:03.858Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:39:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1816: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 18:39:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:39:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:39:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:39:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:39:04 compute-0 nova_compute[265391]: 2025-09-30 18:39:04.948 2 DEBUG nova.compute.manager [None req-32335a6c-add9-48d2-a810-6622e5d339d0 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Adding trait COMPUTE_STATUS_DISABLED to compute node resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 in placement. update_compute_provider_status /usr/lib/python3.12/site-packages/nova/compute/manager.py:635
Sep 30 18:39:05 compute-0 ceph-mon[73755]: pgmap v1816: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 18:39:05 compute-0 nova_compute[265391]: 2025-09-30 18:39:05.113 2 DEBUG nova.compute.provider_tree [None req-32335a6c-add9-48d2-a810-6622e5d339d0 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Updating resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 generation from 45 to 53 during operation: update_traits _update_generation /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:164
Sep 30 18:39:05 compute-0 nova_compute[265391]: 2025-09-30 18:39:05.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:39:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:39:05.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:39:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:39:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:39:05.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:39:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1817: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 91 KiB/s rd, 105 KiB/s wr, 20 op/s
Sep 30 18:39:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:39:06 compute-0 nova_compute[265391]: 2025-09-30 18:39:06.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:07 compute-0 ceph-mon[73755]: pgmap v1817: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 91 KiB/s rd, 105 KiB/s wr, 20 op/s
Sep 30 18:39:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:39:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:39:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:39:07.334Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:39:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:39:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:39:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:39:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:39:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:39:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:39:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:39:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:39:07.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:39:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:39:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:39:07.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:39:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1818: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 91 KiB/s rd, 105 KiB/s wr, 20 op/s
Sep 30 18:39:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0022767699074526275 of space, bias 1.0, pg target 0.4553539814905255 quantized to 32 (current 32)
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:39:08
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'backups', '.rgw.root', 'vms', '.nfs', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', 'volumes']
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:39:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:39:08] "GET /metrics HTTP/1.1" 200 46717 "" "Prometheus/2.51.0"
Sep 30 18:39:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:39:08] "GET /metrics HTTP/1.1" 200 46717 "" "Prometheus/2.51.0"
Sep 30 18:39:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:39:08.882Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:39:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:39:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:39:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:39:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:39:09 compute-0 ceph-mon[73755]: pgmap v1818: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 91 KiB/s rd, 105 KiB/s wr, 20 op/s
Sep 30 18:39:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:39:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:39:09.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:39:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:39:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:39:09.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:39:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1819: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 92 KiB/s rd, 105 KiB/s wr, 20 op/s
Sep 30 18:39:10 compute-0 nova_compute[265391]: 2025-09-30 18:39:10.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:10 compute-0 ovn_controller[156242]: 2025-09-30T18:39:10Z|00244|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Sep 30 18:39:11 compute-0 ceph-mon[73755]: pgmap v1819: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 92 KiB/s rd, 105 KiB/s wr, 20 op/s
Sep 30 18:39:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:39:11 compute-0 nova_compute[265391]: 2025-09-30 18:39:11.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:39:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:39:11.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:39:11 compute-0 podman[344061]: 2025-09-30 18:39:11.520191656 +0000 UTC m=+0.059576206 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:39:11 compute-0 podman[344068]: 2025-09-30 18:39:11.564558331 +0000 UTC m=+0.086382871 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, name=ubi9-minimal, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6)
Sep 30 18:39:11 compute-0 podman[344062]: 2025-09-30 18:39:11.574471565 +0000 UTC m=+0.097436125 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest)
Sep 30 18:39:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:39:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:39:11.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:39:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1820: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:39:13 compute-0 ceph-mon[73755]: pgmap v1820: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:39:13 compute-0 nova_compute[265391]: 2025-09-30 18:39:13.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:39:13 compute-0 nova_compute[265391]: 2025-09-30 18:39:13.428 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:39:13 compute-0 nova_compute[265391]: 2025-09-30 18:39:13.428 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:39:13 compute-0 nova_compute[265391]: 2025-09-30 18:39:13.429 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:39:13 compute-0 nova_compute[265391]: 2025-09-30 18:39:13.429 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:39:13 compute-0 nova_compute[265391]: 2025-09-30 18:39:13.430 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:39:13 compute-0 nova_compute[265391]: 2025-09-30 18:39:13.430 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:39:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:39:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:39:13.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:39:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:39:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:39:13.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:39:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:39:13.859Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:39:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1821: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 16 KiB/s wr, 1 op/s
Sep 30 18:39:13 compute-0 nova_compute[265391]: 2025-09-30 18:39:13.897 2 DEBUG nova.virt.libvirt.driver [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Check if temp file /var/lib/nova/instances/tmpmgf9x058 exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10968
Sep 30 18:39:13 compute-0 nova_compute[265391]: 2025-09-30 18:39:13.903 2 DEBUG nova.compute.manager [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpmgf9x058',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='4f91975d-d44b-46af-9879-dbf7a693fbd2',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.12/site-packages/nova/compute/manager.py:9294
Sep 30 18:39:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:39:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:39:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:39:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:39:14 compute-0 nova_compute[265391]: 2025-09-30 18:39:14.444 2 DEBUG nova.virt.libvirt.imagecache [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.12/site-packages/nova/virt/libvirt/imagecache.py:314
Sep 30 18:39:14 compute-0 nova_compute[265391]: 2025-09-30 18:39:14.445 2 DEBUG nova.virt.libvirt.imagecache [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Image id 5b99cbca-b655-4be5-8343-cf504005c42e yields fingerprint cb2d580238c9b109feae7f1462613dc547671457 _age_and_verify_cached_images /usr/lib/python3.12/site-packages/nova/virt/libvirt/imagecache.py:319
Sep 30 18:39:14 compute-0 nova_compute[265391]: 2025-09-30 18:39:14.445 2 INFO nova.virt.libvirt.imagecache [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] image 5b99cbca-b655-4be5-8343-cf504005c42e at (/var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457): checking
Sep 30 18:39:14 compute-0 nova_compute[265391]: 2025-09-30 18:39:14.445 2 DEBUG nova.virt.libvirt.imagecache [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] image 5b99cbca-b655-4be5-8343-cf504005c42e at (/var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457): image is in use _mark_in_use /usr/lib/python3.12/site-packages/nova/virt/libvirt/imagecache.py:279
Sep 30 18:39:14 compute-0 nova_compute[265391]: 2025-09-30 18:39:14.446 2 INFO oslo.privsep.daemon [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp267k65yk/privsep.sock']
Sep 30 18:39:15 compute-0 nova_compute[265391]: 2025-09-30 18:39:15.129 2 INFO oslo.privsep.daemon [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Spawned new privsep daemon via rootwrap
Sep 30 18:39:15 compute-0 nova_compute[265391]: 2025-09-30 18:39:14.994 8117 INFO oslo.privsep.daemon [-] privsep daemon starting
Sep 30 18:39:15 compute-0 nova_compute[265391]: 2025-09-30 18:39:14.997 8117 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Sep 30 18:39:15 compute-0 nova_compute[265391]: 2025-09-30 18:39:14.999 8117 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Sep 30 18:39:15 compute-0 nova_compute[265391]: 2025-09-30 18:39:14.999 8117 INFO oslo.privsep.daemon [-] privsep daemon running as pid 8117
Sep 30 18:39:15 compute-0 ceph-mon[73755]: pgmap v1821: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 16 KiB/s wr, 1 op/s
Sep 30 18:39:15 compute-0 nova_compute[265391]: 2025-09-30 18:39:15.251 2 DEBUG nova.virt.libvirt.imagecache [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.12/site-packages/nova/virt/libvirt/imagecache.py:319
Sep 30 18:39:15 compute-0 nova_compute[265391]: 2025-09-30 18:39:15.252 2 DEBUG nova.virt.libvirt.imagecache [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] 4f91975d-d44b-46af-9879-dbf7a693fbd2 is a valid instance name _list_backing_images /usr/lib/python3.12/site-packages/nova/virt/libvirt/imagecache.py:126
Sep 30 18:39:15 compute-0 nova_compute[265391]: 2025-09-30 18:39:15.252 2 INFO nova.virt.libvirt.imagecache [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Active base files: /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457
Sep 30 18:39:15 compute-0 nova_compute[265391]: 2025-09-30 18:39:15.252 2 DEBUG nova.virt.libvirt.imagecache [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.12/site-packages/nova/virt/libvirt/imagecache.py:350
Sep 30 18:39:15 compute-0 nova_compute[265391]: 2025-09-30 18:39:15.253 2 DEBUG nova.virt.libvirt.imagecache [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.12/site-packages/nova/virt/libvirt/imagecache.py:299
Sep 30 18:39:15 compute-0 nova_compute[265391]: 2025-09-30 18:39:15.253 2 DEBUG nova.virt.libvirt.imagecache [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.12/site-packages/nova/virt/libvirt/imagecache.py:284
Sep 30 18:39:15 compute-0 nova_compute[265391]: 2025-09-30 18:39:15.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:39:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:39:15.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:39:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:39:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:39:15.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:39:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1822: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 1023 B/s wr, 0 op/s
Sep 30 18:39:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:39:16 compute-0 nova_compute[265391]: 2025-09-30 18:39:16.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:17 compute-0 ceph-mon[73755]: pgmap v1822: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 1023 B/s wr, 0 op/s
Sep 30 18:39:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:39:17.337Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:39:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:39:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:39:17.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:39:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:39:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:39:17.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:39:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1823: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 1023 B/s wr, 0 op/s
Sep 30 18:39:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:39:18] "GET /metrics HTTP/1.1" 200 46717 "" "Prometheus/2.51.0"
Sep 30 18:39:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:39:18] "GET /metrics HTTP/1.1" 200 46717 "" "Prometheus/2.51.0"
Sep 30 18:39:18 compute-0 sudo[344135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:39:18 compute-0 sudo[344135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:39:18 compute-0 sudo[344135]: pam_unix(sudo:session): session closed for user root
Sep 30 18:39:18 compute-0 nova_compute[265391]: 2025-09-30 18:39:18.868 2 DEBUG nova.compute.manager [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Preparing to wait for external event network-vif-plugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:39:18 compute-0 nova_compute[265391]: 2025-09-30 18:39:18.869 2 DEBUG oslo_concurrency.lockutils [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:39:18 compute-0 nova_compute[265391]: 2025-09-30 18:39:18.869 2 DEBUG oslo_concurrency.lockutils [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:39:18 compute-0 nova_compute[265391]: 2025-09-30 18:39:18.870 2 DEBUG oslo_concurrency.lockutils [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:39:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:39:18.883Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:39:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:39:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:39:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:39:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:39:19 compute-0 ceph-mon[73755]: pgmap v1823: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 1023 B/s wr, 0 op/s
Sep 30 18:39:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:39:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:39:19.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:39:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:39:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:39:19.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:39:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1824: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 9.1 KiB/s wr, 2 op/s
Sep 30 18:39:20 compute-0 nova_compute[265391]: 2025-09-30 18:39:20.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:39:21 compute-0 ceph-mon[73755]: pgmap v1824: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 9.1 KiB/s wr, 2 op/s
Sep 30 18:39:21 compute-0 nova_compute[265391]: 2025-09-30 18:39:21.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:39:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:39:21.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:39:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:39:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:39:21.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:39:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1825: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 9.1 KiB/s wr, 1 op/s
Sep 30 18:39:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:39:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:39:23 compute-0 ceph-mon[73755]: pgmap v1825: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 9.1 KiB/s wr, 1 op/s
Sep 30 18:39:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:39:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:39:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:39:23.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:39:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:39:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:39:23.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:39:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:39:23.860Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:39:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1826: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 9.1 KiB/s wr, 2 op/s
Sep 30 18:39:23 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:39:23.923 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:39:23 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:39:23.924 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:39:23 compute-0 nova_compute[265391]: 2025-09-30 18:39:23.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:23 compute-0 nova_compute[265391]: 2025-09-30 18:39:23.933 2 DEBUG nova.compute.manager [req-0e4a1c1a-15bd-44b3-8c64-3953977378c2 req-5546c977-6024-45c3-8765-42c633c4cb16 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Received event network-vif-unplugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:39:23 compute-0 nova_compute[265391]: 2025-09-30 18:39:23.934 2 DEBUG oslo_concurrency.lockutils [req-0e4a1c1a-15bd-44b3-8c64-3953977378c2 req-5546c977-6024-45c3-8765-42c633c4cb16 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:39:23 compute-0 nova_compute[265391]: 2025-09-30 18:39:23.935 2 DEBUG oslo_concurrency.lockutils [req-0e4a1c1a-15bd-44b3-8c64-3953977378c2 req-5546c977-6024-45c3-8765-42c633c4cb16 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:39:23 compute-0 nova_compute[265391]: 2025-09-30 18:39:23.935 2 DEBUG oslo_concurrency.lockutils [req-0e4a1c1a-15bd-44b3-8c64-3953977378c2 req-5546c977-6024-45c3-8765-42c633c4cb16 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:39:23 compute-0 nova_compute[265391]: 2025-09-30 18:39:23.936 2 DEBUG nova.compute.manager [req-0e4a1c1a-15bd-44b3-8c64-3953977378c2 req-5546c977-6024-45c3-8765-42c633c4cb16 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] No event matching network-vif-unplugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e in dict_keys([('network-vif-plugged', '6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e')]) pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:349
Sep 30 18:39:23 compute-0 nova_compute[265391]: 2025-09-30 18:39:23.936 2 DEBUG nova.compute.manager [req-0e4a1c1a-15bd-44b3-8c64-3953977378c2 req-5546c977-6024-45c3-8765-42c633c4cb16 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Received event network-vif-unplugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:39:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:39:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:39:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:39:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:39:25 compute-0 ceph-mon[73755]: pgmap v1826: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 9.1 KiB/s wr, 2 op/s
Sep 30 18:39:25 compute-0 nova_compute[265391]: 2025-09-30 18:39:25.389 2 INFO nova.compute.manager [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Took 6.52 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Sep 30 18:39:25 compute-0 nova_compute[265391]: 2025-09-30 18:39:25.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:39:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:39:25.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:39:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:39:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:39:25.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:39:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1827: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 8.1 KiB/s wr, 1 op/s
Sep 30 18:39:25 compute-0 nova_compute[265391]: 2025-09-30 18:39:25.997 2 DEBUG nova.compute.manager [req-778d138b-4055-414b-a180-0082ea29d73f req-a235c7e5-068f-45df-9ffb-054ea336027f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Received event network-vif-plugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:39:25 compute-0 nova_compute[265391]: 2025-09-30 18:39:25.998 2 DEBUG oslo_concurrency.lockutils [req-778d138b-4055-414b-a180-0082ea29d73f req-a235c7e5-068f-45df-9ffb-054ea336027f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:39:26 compute-0 nova_compute[265391]: 2025-09-30 18:39:25.998 2 DEBUG oslo_concurrency.lockutils [req-778d138b-4055-414b-a180-0082ea29d73f req-a235c7e5-068f-45df-9ffb-054ea336027f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:39:26 compute-0 nova_compute[265391]: 2025-09-30 18:39:25.998 2 DEBUG oslo_concurrency.lockutils [req-778d138b-4055-414b-a180-0082ea29d73f req-a235c7e5-068f-45df-9ffb-054ea336027f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:39:26 compute-0 nova_compute[265391]: 2025-09-30 18:39:25.999 2 DEBUG nova.compute.manager [req-778d138b-4055-414b-a180-0082ea29d73f req-a235c7e5-068f-45df-9ffb-054ea336027f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Processing event network-vif-plugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:39:26 compute-0 nova_compute[265391]: 2025-09-30 18:39:25.999 2 DEBUG nova.compute.manager [req-778d138b-4055-414b-a180-0082ea29d73f req-a235c7e5-068f-45df-9ffb-054ea336027f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Received event network-changed-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:39:26 compute-0 nova_compute[265391]: 2025-09-30 18:39:25.999 2 DEBUG nova.compute.manager [req-778d138b-4055-414b-a180-0082ea29d73f req-a235c7e5-068f-45df-9ffb-054ea336027f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Refreshing instance network info cache due to event network-changed-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:39:26 compute-0 nova_compute[265391]: 2025-09-30 18:39:25.999 2 DEBUG oslo_concurrency.lockutils [req-778d138b-4055-414b-a180-0082ea29d73f req-a235c7e5-068f-45df-9ffb-054ea336027f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-4f91975d-d44b-46af-9879-dbf7a693fbd2" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:39:26 compute-0 nova_compute[265391]: 2025-09-30 18:39:25.999 2 DEBUG oslo_concurrency.lockutils [req-778d138b-4055-414b-a180-0082ea29d73f req-a235c7e5-068f-45df-9ffb-054ea336027f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-4f91975d-d44b-46af-9879-dbf7a693fbd2" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:39:26 compute-0 nova_compute[265391]: 2025-09-30 18:39:26.000 2 DEBUG nova.network.neutron [req-778d138b-4055-414b-a180-0082ea29d73f req-a235c7e5-068f-45df-9ffb-054ea336027f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Refreshing network info cache for port 6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:39:26 compute-0 nova_compute[265391]: 2025-09-30 18:39:26.001 2 DEBUG nova.compute.manager [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:39:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:39:26 compute-0 nova_compute[265391]: 2025-09-30 18:39:26.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:26 compute-0 nova_compute[265391]: 2025-09-30 18:39:26.509 2 WARNING neutronclient.v2_0.client [req-778d138b-4055-414b-a180-0082ea29d73f req-a235c7e5-068f-45df-9ffb-054ea336027f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:39:26 compute-0 nova_compute[265391]: 2025-09-30 18:39:26.514 2 DEBUG nova.compute.manager [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpmgf9x058',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='4f91975d-d44b-46af-9879-dbf7a693fbd2',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(7ede2a0e-7c1c-4cb9-abf9-796534210bb6),old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9659
Sep 30 18:39:26 compute-0 nova_compute[265391]: 2025-09-30 18:39:26.519 2 DEBUG nova.objects.instance [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lazy-loading 'migration_context' on Instance uuid 4f91975d-d44b-46af-9879-dbf7a693fbd2 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:39:26 compute-0 nova_compute[265391]: 2025-09-30 18:39:26.520 2 DEBUG nova.virt.libvirt.driver [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Starting monitoring of live migration _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11543
Sep 30 18:39:26 compute-0 nova_compute[265391]: 2025-09-30 18:39:26.522 2 DEBUG nova.virt.libvirt.driver [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Sep 30 18:39:26 compute-0 nova_compute[265391]: 2025-09-30 18:39:26.523 2 DEBUG nova.virt.libvirt.driver [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Sep 30 18:39:26 compute-0 nova_compute[265391]: 2025-09-30 18:39:26.939 2 WARNING neutronclient.v2_0.client [req-778d138b-4055-414b-a180-0082ea29d73f req-a235c7e5-068f-45df-9ffb-054ea336027f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:39:27 compute-0 nova_compute[265391]: 2025-09-30 18:39:27.025 2 DEBUG nova.virt.libvirt.driver [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Sep 30 18:39:27 compute-0 nova_compute[265391]: 2025-09-30 18:39:27.026 2 DEBUG nova.virt.libvirt.driver [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Sep 30 18:39:27 compute-0 nova_compute[265391]: 2025-09-30 18:39:27.035 2 DEBUG nova.virt.libvirt.vif [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:38:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteVmWorkloadBalanceStrategy-server-965053835',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutevmworkloadbalancestrategy-server-965053835',id=28,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:38:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='003b1a96324d40b683381237c3cec243',ramdisk_id='',reservation_id='r-x8rgjbtk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteVmWorkloadBalanceStrategy-765295423',owner_user_name='tempest-TestExecuteVmWorkloadBalanceStrategy-765295423-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:38:21Z,user_data=None,user_id='eda3e60f66494c8682f36b8a8fa20793',uuid=4f91975d-d44b-46af-9879-dbf7a693fbd2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e", "address": "fa:16:3e:6c:be:81", "network": {"id": "e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-1931818952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9692f6197b3545b1bf37bd84c3928d41", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap6e0ab88d-97", "ovs_interfaceid": "6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:39:27 compute-0 nova_compute[265391]: 2025-09-30 18:39:27.035 2 DEBUG nova.network.os_vif_util [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e", "address": "fa:16:3e:6c:be:81", "network": {"id": "e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-1931818952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9692f6197b3545b1bf37bd84c3928d41", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap6e0ab88d-97", "ovs_interfaceid": "6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:39:27 compute-0 nova_compute[265391]: 2025-09-30 18:39:27.036 2 DEBUG nova.network.os_vif_util [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:be:81,bridge_name='br-int',has_traffic_filtering=True,id=6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e,network=Network(e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e0ab88d-97') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:39:27 compute-0 nova_compute[265391]: 2025-09-30 18:39:27.037 2 DEBUG nova.virt.libvirt.migration [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Updating guest XML with vif config: <interface type="ethernet">
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <mac address="fa:16:3e:6c:be:81"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <model type="virtio"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <mtu size="1442"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <target dev="tap6e0ab88d-97"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]: </interface>
Sep 30 18:39:27 compute-0 nova_compute[265391]:  _update_vif_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:534
Sep 30 18:39:27 compute-0 nova_compute[265391]: 2025-09-30 18:39:27.038 2 DEBUG nova.virt.libvirt.migration [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _remove_cpu_shared_set_xml input xml=<domain type="kvm">
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <name>instance-0000001c</name>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <uuid>4f91975d-d44b-46af-9879-dbf7a693fbd2</uuid>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteVmWorkloadBalanceStrategy-server-965053835</nova:name>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:38:13</nova:creationTime>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:39:27 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:39:27 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:user uuid="eda3e60f66494c8682f36b8a8fa20793">tempest-TestExecuteVmWorkloadBalanceStrategy-765295423-project-admin</nova:user>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:project uuid="003b1a96324d40b683381237c3cec243">tempest-TestExecuteVmWorkloadBalanceStrategy-765295423</nova:project>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:port uuid="6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e">
Sep 30 18:39:27 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <system>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <entry name="serial">4f91975d-d44b-46af-9879-dbf7a693fbd2</entry>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <entry name="uuid">4f91975d-d44b-46af-9879-dbf7a693fbd2</entry>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </system>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <os>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   </os>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <features>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   </features>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/4f91975d-d44b-46af-9879-dbf7a693fbd2_disk">
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       </source>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/4f91975d-d44b-46af-9879-dbf7a693fbd2_disk.config">
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       </source>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:6c:be:81"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target dev="tap6e0ab88d-97"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/4f91975d-d44b-46af-9879-dbf7a693fbd2/console.log" append="off"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       </target>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/4f91975d-d44b-46af-9879-dbf7a693fbd2/console.log" append="off"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </console>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </input>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <video>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </video>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]: </domain>
Sep 30 18:39:27 compute-0 nova_compute[265391]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:241
Sep 30 18:39:27 compute-0 nova_compute[265391]: 2025-09-30 18:39:27.043 2 DEBUG nova.virt.libvirt.migration [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _remove_cpu_shared_set_xml output xml=<domain type="kvm">
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <name>instance-0000001c</name>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <uuid>4f91975d-d44b-46af-9879-dbf7a693fbd2</uuid>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteVmWorkloadBalanceStrategy-server-965053835</nova:name>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:38:13</nova:creationTime>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:39:27 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:39:27 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:user uuid="eda3e60f66494c8682f36b8a8fa20793">tempest-TestExecuteVmWorkloadBalanceStrategy-765295423-project-admin</nova:user>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:project uuid="003b1a96324d40b683381237c3cec243">tempest-TestExecuteVmWorkloadBalanceStrategy-765295423</nova:project>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:port uuid="6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e">
Sep 30 18:39:27 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <system>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <entry name="serial">4f91975d-d44b-46af-9879-dbf7a693fbd2</entry>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <entry name="uuid">4f91975d-d44b-46af-9879-dbf7a693fbd2</entry>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </system>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <os>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   </os>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <features>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   </features>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/4f91975d-d44b-46af-9879-dbf7a693fbd2_disk">
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       </source>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/4f91975d-d44b-46af-9879-dbf7a693fbd2_disk.config">
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       </source>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:6c:be:81"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target dev="tap6e0ab88d-97"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/4f91975d-d44b-46af-9879-dbf7a693fbd2/console.log" append="off"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       </target>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/4f91975d-d44b-46af-9879-dbf7a693fbd2/console.log" append="off"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </console>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </input>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <video>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </video>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]: </domain>
Sep 30 18:39:27 compute-0 nova_compute[265391]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:250
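
The dump ending above is the output of Nova's _remove_cpu_shared_set_xml pass (migration.py), which strips source-host CPU shared-set details from the domain definition before it is handed to the destination; for this unpinned m1.nano guest the XML is effectively unchanged. Below is only a minimal sketch of that kind of rewrite, using Python's standard xml.etree.ElementTree; the specific items it strips (the cpuset attribute on <vcpu> and pinning entries under <cputune>) are assumptions for illustration, not Nova's actual implementation.

    # Hypothetical illustration of a "remove CPU shared set" rewrite on libvirt
    # domain XML. NOT Nova's implementation; the stripped elements/attributes
    # are assumptions made for this example.
    import xml.etree.ElementTree as ET

    def remove_cpu_shared_set(domain_xml: str) -> str:
        root = ET.fromstring(domain_xml)

        # Drop any host-specific shared CPU set pinned on <vcpu>.
        vcpu = root.find("vcpu")
        if vcpu is not None and "cpuset" in vcpu.attrib:
            del vcpu.attrib["cpuset"]

        # Drop <cputune> pinning entries that reference source-host CPU IDs.
        cputune = root.find("cputune")
        if cputune is not None:
            for tag in ("emulatorpin", "vcpupin", "iothreadpin"):
                for elem in cputune.findall(tag):
                    cputune.remove(elem)
            # Remove the now-empty container so the output stays minimal.
            if len(cputune) == 0:
                root.remove(cputune)

        return ET.tostring(root, encoding="unicode")
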
Sep 30 18:39:27 compute-0 nova_compute[265391]: 2025-09-30 18:39:27.044 2 DEBUG nova.virt.libvirt.migration [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _update_pci_xml output xml=<domain type="kvm">
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <name>instance-0000001c</name>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <uuid>4f91975d-d44b-46af-9879-dbf7a693fbd2</uuid>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteVmWorkloadBalanceStrategy-server-965053835</nova:name>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:38:13</nova:creationTime>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:39:27 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:39:27 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:user uuid="eda3e60f66494c8682f36b8a8fa20793">tempest-TestExecuteVmWorkloadBalanceStrategy-765295423-project-admin</nova:user>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:project uuid="003b1a96324d40b683381237c3cec243">tempest-TestExecuteVmWorkloadBalanceStrategy-765295423</nova:project>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <nova:port uuid="6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e">
Sep 30 18:39:27 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <system>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <entry name="serial">4f91975d-d44b-46af-9879-dbf7a693fbd2</entry>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <entry name="uuid">4f91975d-d44b-46af-9879-dbf7a693fbd2</entry>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </system>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <os>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   </os>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <features>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   </features>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/4f91975d-d44b-46af-9879-dbf7a693fbd2_disk">
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       </source>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/4f91975d-d44b-46af-9879-dbf7a693fbd2_disk.config">
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       </source>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:6c:be:81"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target dev="tap6e0ab88d-97"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/4f91975d-d44b-46af-9879-dbf7a693fbd2/console.log" append="off"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:39:27 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       </target>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/4f91975d-d44b-46af-9879-dbf7a693fbd2/console.log" append="off"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </console>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </input>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <video>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </video>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:39:27 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:39:27 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:39:27 compute-0 nova_compute[265391]: </domain>
Sep 30 18:39:27 compute-0 nova_compute[265391]:  _update_pci_dev_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:166
Sep 30 18:39:27 compute-0 nova_compute[265391]: 2025-09-30 18:39:27.044 2 DEBUG nova.virt.libvirt.driver [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] About to invoke the migrate API _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11175
Sep 30 18:39:27 compute-0 nova_compute[265391]: 2025-09-30 18:39:27.092 2 DEBUG nova.network.neutron [req-778d138b-4055-414b-a180-0082ea29d73f req-a235c7e5-068f-45df-9ffb-054ea336027f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Updated VIF entry in instance network info cache for port 6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e. _build_network_info_model /usr/lib/python3.12/site-packages/nova/network/neutron.py:3542
Sep 30 18:39:27 compute-0 nova_compute[265391]: 2025-09-30 18:39:27.093 2 DEBUG nova.network.neutron [req-778d138b-4055-414b-a180-0082ea29d73f req-a235c7e5-068f-45df-9ffb-054ea336027f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Updating instance_info_cache with network_info: [{"id": "6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e", "address": "fa:16:3e:6c:be:81", "network": {"id": "e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-1931818952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9692f6197b3545b1bf37bd84c3928d41", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0ab88d-97", "ovs_interfaceid": "6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:39:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:39:27.338Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:39:27 compute-0 ceph-mon[73755]: pgmap v1827: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 8.1 KiB/s wr, 1 op/s
Sep 30 18:39:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:39:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:39:27.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:39:27 compute-0 nova_compute[265391]: 2025-09-30 18:39:27.528 2 DEBUG nova.virt.libvirt.migration [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Current None elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Sep 30 18:39:27 compute-0 nova_compute[265391]: 2025-09-30 18:39:27.529 2 INFO nova.virt.libvirt.migration [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Increasing downtime to 50 ms after 0 sec elapsed time
Sep 30 18:39:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:39:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:39:27.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:39:27 compute-0 nova_compute[265391]: 2025-09-30 18:39:27.623 2 DEBUG oslo_concurrency.lockutils [req-778d138b-4055-414b-a180-0082ea29d73f req-a235c7e5-068f-45df-9ffb-054ea336027f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-4f91975d-d44b-46af-9879-dbf7a693fbd2" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:39:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1828: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 8.1 KiB/s wr, 1 op/s
Sep 30 18:39:28 compute-0 nova_compute[265391]: 2025-09-30 18:39:28.548 2 INFO nova.virt.libvirt.driver [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Migration running for 1 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Sep 30 18:39:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:39:28] "GET /metrics HTTP/1.1" 200 46724 "" "Prometheus/2.51.0"
Sep 30 18:39:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:39:28] "GET /metrics HTTP/1.1" 200 46724 "" "Prometheus/2.51.0"
Sep 30 18:39:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:39:28.884Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:39:28 compute-0 kernel: tap6e0ab88d-97 (unregistering): left promiscuous mode
Sep 30 18:39:28 compute-0 NetworkManager[45059]: <info>  [1759257568.9590] device (tap6e0ab88d-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 18:39:28 compute-0 ovn_controller[156242]: 2025-09-30T18:39:28Z|00245|binding|INFO|Releasing lport 6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e from this chassis (sb_readonly=0)
Sep 30 18:39:28 compute-0 ovn_controller[156242]: 2025-09-30T18:39:28Z|00246|binding|INFO|Setting lport 6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e down in Southbound
Sep 30 18:39:28 compute-0 ovn_controller[156242]: 2025-09-30T18:39:28Z|00247|binding|INFO|Removing iface tap6e0ab88d-97 ovn-installed in OVS
Sep 30 18:39:28 compute-0 nova_compute[265391]: 2025-09-30 18:39:28.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:28 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:39:28.974 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:be:81 10.100.0.11'], port_security=['fa:16:3e:6c:be:81 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '81ab3fff-d6d4-4262-9f24-1b212876e52c'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '4f91975d-d44b-46af-9879-dbf7a693fbd2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '003b1a96324d40b683381237c3cec243', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'a12466c2-b0c7-418c-b73a-38db6de1f821', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aef43a77-fc58-48dd-8195-5e83e09646ef, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:39:28 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:39:28.975 166158 INFO neutron.agent.ovn.metadata.agent [-] Port 6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e in datapath e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4 unbound from our chassis
Sep 30 18:39:28 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:39:28.976 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:39:28 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:39:28.977 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[1327180d-9ba4-4f2c-89ec-d3df45ac9268]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:39:28 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:39:28.977 166158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4 namespace which is not needed anymore
Sep 30 18:39:28 compute-0 nova_compute[265391]: 2025-09-30 18:39:28.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:39:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:39:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:39:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:39:29 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Sep 30 18:39:29 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d0000001c.scope: Consumed 14.952s CPU time.
Sep 30 18:39:29 compute-0 systemd-machined[219917]: Machine qemu-21-instance-0000001c terminated.
Sep 30 18:39:29 compute-0 podman[344174]: 2025-09-30 18:39:29.052193248 +0000 UTC m=+0.068345590 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930)
Sep 30 18:39:29 compute-0 podman[344179]: 2025-09-30 18:39:29.076553771 +0000 UTC m=+0.084944515 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Sep 30 18:39:29 compute-0 podman[344178]: 2025-09-30 18:39:29.076964582 +0000 UTC m=+0.089492262 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20250930, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:39:29 compute-0 neutron-haproxy-ovnmeta-e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4[342998]: [NOTICE]   (343010) : haproxy version is 3.0.5-8e879a5
Sep 30 18:39:29 compute-0 neutron-haproxy-ovnmeta-e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4[342998]: [NOTICE]   (343010) : path to executable is /usr/sbin/haproxy
Sep 30 18:39:29 compute-0 neutron-haproxy-ovnmeta-e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4[342998]: [WARNING]  (343010) : Exiting Master process...
Sep 30 18:39:29 compute-0 neutron-haproxy-ovnmeta-e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4[342998]: [ALERT]    (343010) : Current worker (343013) exited with code 143 (Terminated)
Sep 30 18:39:29 compute-0 neutron-haproxy-ovnmeta-e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4[342998]: [WARNING]  (343010) : All workers exited. Exiting... (0)
Sep 30 18:39:29 compute-0 podman[344258]: 2025-09-30 18:39:29.097691262 +0000 UTC m=+0.031178509 container kill 778c8eb18d4466bea022c6d67bcca3404fa917a3dc9bba3a22f5ed1a0586c71b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:39:29 compute-0 systemd[1]: libpod-778c8eb18d4466bea022c6d67bcca3404fa917a3dc9bba3a22f5ed1a0586c71b.scope: Deactivated successfully.
Sep 30 18:39:29 compute-0 virtqemud[265263]: Unable to get XATTR trusted.libvirt.security.ref_selinux on 4f91975d-d44b-46af-9879-dbf7a693fbd2_disk: No such file or directory
Sep 30 18:39:29 compute-0 virtqemud[265263]: Unable to get XATTR trusted.libvirt.security.ref_dac on 4f91975d-d44b-46af-9879-dbf7a693fbd2_disk: No such file or directory
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.118 2 DEBUG nova.compute.manager [req-6845fca2-0fd7-475c-9a6d-ca116257fa7d req-236a050c-b169-44eb-a24b-cb5788ae6f34 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Received event network-vif-unplugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.118 2 DEBUG oslo_concurrency.lockutils [req-6845fca2-0fd7-475c-9a6d-ca116257fa7d req-236a050c-b169-44eb-a24b-cb5788ae6f34 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:39:29 compute-0 kernel: tap6e0ab88d-97: entered promiscuous mode
Sep 30 18:39:29 compute-0 NetworkManager[45059]: <info>  [1759257569.1215] manager: (tap6e0ab88d-97): new Tun device (/org/freedesktop/NetworkManager/Devices/98)
Sep 30 18:39:29 compute-0 systemd-udevd[344217]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.122 2 DEBUG oslo_concurrency.lockutils [req-6845fca2-0fd7-475c-9a6d-ca116257fa7d req-236a050c-b169-44eb-a24b-cb5788ae6f34 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.004s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.123 2 DEBUG oslo_concurrency.lockutils [req-6845fca2-0fd7-475c-9a6d-ca116257fa7d req-236a050c-b169-44eb-a24b-cb5788ae6f34 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.123 2 DEBUG nova.compute.manager [req-6845fca2-0fd7-475c-9a6d-ca116257fa7d req-236a050c-b169-44eb-a24b-cb5788ae6f34 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] No waiting events found dispatching network-vif-unplugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.123 2 DEBUG nova.compute.manager [req-6845fca2-0fd7-475c-9a6d-ca116257fa7d req-236a050c-b169-44eb-a24b-cb5788ae6f34 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Received event network-vif-unplugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:29 compute-0 ovn_controller[156242]: 2025-09-30T18:39:29Z|00248|binding|INFO|Claiming lport 6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e for this chassis.
Sep 30 18:39:29 compute-0 ovn_controller[156242]: 2025-09-30T18:39:29Z|00249|binding|INFO|6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e: Claiming fa:16:3e:6c:be:81 10.100.0.11
Sep 30 18:39:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:39:29.129 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:be:81 10.100.0.11'], port_security=['fa:16:3e:6c:be:81 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '81ab3fff-d6d4-4262-9f24-1b212876e52c'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '4f91975d-d44b-46af-9879-dbf7a693fbd2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '003b1a96324d40b683381237c3cec243', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'a12466c2-b0c7-418c-b73a-38db6de1f821', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aef43a77-fc58-48dd-8195-5e83e09646ef, chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:39:29 compute-0 kernel: tap6e0ab88d-97 (unregistering): left promiscuous mode
Sep 30 18:39:29 compute-0 ovn_controller[156242]: 2025-09-30T18:39:29Z|00250|binding|INFO|Setting lport 6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e ovn-installed in OVS
Sep 30 18:39:29 compute-0 ovn_controller[156242]: 2025-09-30T18:39:29Z|00251|binding|INFO|Setting lport 6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e up in Southbound
Sep 30 18:39:29 compute-0 ovn_controller[156242]: 2025-09-30T18:39:29Z|00252|binding|INFO|Releasing lport 6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e from this chassis (sb_readonly=1)
Sep 30 18:39:29 compute-0 ovn_controller[156242]: 2025-09-30T18:39:29Z|00253|if_status|INFO|Dropped 2 log messages in last 449 seconds (most recently, 449 seconds ago) due to excessive rate
Sep 30 18:39:29 compute-0 ovn_controller[156242]: 2025-09-30T18:39:29Z|00254|if_status|INFO|Not setting lport 6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e down as sb is readonly
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:29 compute-0 ovn_controller[156242]: 2025-09-30T18:39:29Z|00255|binding|INFO|Removing iface tap6e0ab88d-97 ovn-installed in OVS
Sep 30 18:39:29 compute-0 ovn_controller[156242]: 2025-09-30T18:39:29Z|00256|binding|INFO|Releasing lport 6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e from this chassis (sb_readonly=0)
Sep 30 18:39:29 compute-0 ovn_controller[156242]: 2025-09-30T18:39:29Z|00257|binding|INFO|Setting lport 6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e down in Southbound
Sep 30 18:39:29 compute-0 podman[344280]: 2025-09-30 18:39:29.146690576 +0000 UTC m=+0.032051541 container died 778c8eb18d4466bea022c6d67bcca3404fa917a3dc9bba3a22f5ed1a0586c71b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2)
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.150 2 DEBUG nova.virt.libvirt.guest [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.12/site-packages/nova/virt/libvirt/guest.py:687
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.150 2 INFO nova.virt.libvirt.driver [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Migration operation has completed
Sep 30 18:39:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:39:29.150 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:be:81 10.100.0.11'], port_security=['fa:16:3e:6c:be:81 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '81ab3fff-d6d4-4262-9f24-1b212876e52c'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '4f91975d-d44b-46af-9879-dbf7a693fbd2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '003b1a96324d40b683381237c3cec243', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'a12466c2-b0c7-418c-b73a-38db6de1f821', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aef43a77-fc58-48dd-8195-5e83e09646ef, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.151 2 INFO nova.compute.manager [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] _post_live_migration() is started..
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.153 2 DEBUG nova.virt.libvirt.driver [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Migrate API has completed _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11182
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.154 2 DEBUG nova.virt.libvirt.driver [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Migration operation thread has finished _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11230
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.154 2 DEBUG nova.virt.libvirt.driver [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Migration operation thread notification thread_finished /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11533
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.163 2 WARNING neutronclient.v2_0.client [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.163 2 WARNING neutronclient.v2_0.client [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:39:29 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-778c8eb18d4466bea022c6d67bcca3404fa917a3dc9bba3a22f5ed1a0586c71b-userdata-shm.mount: Deactivated successfully.
Sep 30 18:39:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-75db38e2986d69eea3e5e76df5cd53a8fcd73f8e7f0cdfd99ea3c2b763906598-merged.mount: Deactivated successfully.
Sep 30 18:39:29 compute-0 podman[344280]: 2025-09-30 18:39:29.190465076 +0000 UTC m=+0.075826041 container cleanup 778c8eb18d4466bea022c6d67bcca3404fa917a3dc9bba3a22f5ed1a0586c71b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4, tcib_build_tag=watcher_latest, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:39:29 compute-0 systemd[1]: libpod-conmon-778c8eb18d4466bea022c6d67bcca3404fa917a3dc9bba3a22f5ed1a0586c71b.scope: Deactivated successfully.
Sep 30 18:39:29 compute-0 podman[344283]: 2025-09-30 18:39:29.210643153 +0000 UTC m=+0.091084322 container remove 778c8eb18d4466bea022c6d67bcca3404fa917a3dc9bba3a22f5ed1a0586c71b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Sep 30 18:39:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:39:29.215 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[c5c3c6a1-4c29-4435-8433-ea25b59ea657]: (4, ("Tue Sep 30 06:39:29 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4 (778c8eb18d4466bea022c6d67bcca3404fa917a3dc9bba3a22f5ed1a0586c71b)\n778c8eb18d4466bea022c6d67bcca3404fa917a3dc9bba3a22f5ed1a0586c71b\nTue Sep 30 06:39:29 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4 (778c8eb18d4466bea022c6d67bcca3404fa917a3dc9bba3a22f5ed1a0586c71b)\n778c8eb18d4466bea022c6d67bcca3404fa917a3dc9bba3a22f5ed1a0586c71b\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:39:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:39:29.217 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[70f3449f-e367-4297-ba6e-c4d1e380df61]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:39:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:39:29.217 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:39:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:39:29.217 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[4d41563d-4852-41c2-b1ad-488f8dcbfc9e]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:39:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:39:29.218 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape214ea0f-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:29 compute-0 kernel: tape214ea0f-10: left promiscuous mode
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:39:29.237 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[375c0f41-d6a4-4ccd-8e0c-f744bae3f8cd]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:39:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:39:29.266 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[73e2d67e-f8c8-4404-9828-b0c8fd15d083]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:39:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:39:29.267 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[9b26078e-6616-4692-97aa-767df96fdce8]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:39:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:39:29.280 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[72dd3561-8610-43cb-ad4c-a7437f9212ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609760, 'reachable_time': 39439, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344322, 'error': None, 'target': 'ovnmeta-e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:39:29 compute-0 systemd[1]: run-netns-ovnmeta\x2de214ea0f\x2d1cc7\x2d4ff8\x2dad9d\x2d642bd3724cf4.mount: Deactivated successfully.
Sep 30 18:39:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:39:29.284 166383 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4 deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Sep 30 18:39:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:39:29.284 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[20e9fde7-e7ff-4aa3-b56e-88e82e8df016]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:39:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:39:29.285 166158 INFO neutron.agent.ovn.metadata.agent [-] Port 6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e in datapath e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4 unbound from our chassis
Sep 30 18:39:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:39:29.285 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:39:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:39:29.286 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[4e60cd3d-13f2-4fa6-b41f-fc60e1fac5df]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:39:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:39:29.286 166158 INFO neutron.agent.ovn.metadata.agent [-] Port 6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e in datapath e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4 unbound from our chassis
Sep 30 18:39:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:39:29.287 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:39:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:39:29.287 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[52945290-11c8-4df9-b10f-bb6a2dc8edae]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:39:29 compute-0 ceph-mon[73755]: pgmap v1828: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 8.1 KiB/s wr, 1 op/s
Sep 30 18:39:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:39:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:39:29.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:39:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:39:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:39:29.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.715 2 DEBUG nova.compute.manager [req-7bed18e2-c5be-4ab3-a675-40b75fa6c9c3 req-985e328a-126c-4209-90b6-abb38f162804 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Received event network-vif-unplugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.716 2 DEBUG oslo_concurrency.lockutils [req-7bed18e2-c5be-4ab3-a675-40b75fa6c9c3 req-985e328a-126c-4209-90b6-abb38f162804 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.716 2 DEBUG oslo_concurrency.lockutils [req-7bed18e2-c5be-4ab3-a675-40b75fa6c9c3 req-985e328a-126c-4209-90b6-abb38f162804 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.716 2 DEBUG oslo_concurrency.lockutils [req-7bed18e2-c5be-4ab3-a675-40b75fa6c9c3 req-985e328a-126c-4209-90b6-abb38f162804 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.716 2 DEBUG nova.compute.manager [req-7bed18e2-c5be-4ab3-a675-40b75fa6c9c3 req-985e328a-126c-4209-90b6-abb38f162804 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] No waiting events found dispatching network-vif-unplugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.717 2 DEBUG nova.compute.manager [req-7bed18e2-c5be-4ab3-a675-40b75fa6c9c3 req-985e328a-126c-4209-90b6-abb38f162804 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Received event network-vif-unplugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:39:29 compute-0 podman[276673]: time="2025-09-30T18:39:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:39:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:39:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:39:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:39:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10308 "" "Go-http-client/1.1"
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.807 2 DEBUG nova.network.neutron [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Activated binding for port 6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.12/site-packages/nova/network/neutron.py:3241
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.808 2 DEBUG nova.compute.manager [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e", "address": "fa:16:3e:6c:be:81", "network": {"id": "e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-1931818952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9692f6197b3545b1bf37bd84c3928d41", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0ab88d-97", "ovs_interfaceid": "6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10059
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.808 2 DEBUG nova.virt.libvirt.vif [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:38:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteVmWorkloadBalanceStrategy-server-965053835',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutevmworkloadbalancestrategy-server-965053835',id=28,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:38:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='003b1a96324d40b683381237c3cec243',ramdisk_id='',reservation_id='r-x8rgjbtk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteVmWorkloadBalanceStrategy-765295423',owner_user_name='tempest-TestExecuteVmWorkloadBalanceStrategy-765295423-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:39:09Z,user_data=None,user_id='eda3e60f66494c8682f36b8a8fa20793',uuid=4f91975d-d44b-46af-9879-dbf7a693fbd2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e", "address": "fa:16:3e:6c:be:81", "network": {"id": "e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-1931818952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9692f6197b3545b1bf37bd84c3928d41", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0ab88d-97", "ovs_interfaceid": "6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug 
/usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.809 2 DEBUG nova.network.os_vif_util [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e", "address": "fa:16:3e:6c:be:81", "network": {"id": "e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-1931818952-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9692f6197b3545b1bf37bd84c3928d41", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e0ab88d-97", "ovs_interfaceid": "6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.809 2 DEBUG nova.network.os_vif_util [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:be:81,bridge_name='br-int',has_traffic_filtering=True,id=6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e,network=Network(e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e0ab88d-97') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.809 2 DEBUG os_vif [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:be:81,bridge_name='br-int',has_traffic_filtering=True,id=6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e,network=Network(e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e0ab88d-97') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.811 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e0ab88d-97, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.815 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=bb974198-f5e8-43d3-859b-6a8ce7b978ad) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.819 2 INFO os_vif [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:be:81,bridge_name='br-int',has_traffic_filtering=True,id=6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e,network=Network(e214ea0f-1cc7-4ff8-ad9d-642bd3724cf4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e0ab88d-97')
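The unplug above is carried out through ovsdbapp: a DelPortCommand removes tap6e0ab88d-97 from br-int and a DbDestroyCommand drops the associated QoS row. A minimal sketch of the same port removal, assuming a local ovsdb-server on the default unix socket (connection details are illustrative, not taken from this host):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local Open vSwitch database (socket path is an assumption).
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Same shape as the DelPortCommand logged above: remove the tap device
    # from br-int, tolerating the case where it is already gone.
    ovs.del_port('tap6e0ab88d-97', bridge='br-int', if_exists=True).execute(
        check_error=True)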
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.819 2 DEBUG oslo_concurrency.lockutils [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.819 2 DEBUG oslo_concurrency.lockutils [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.819 2 DEBUG oslo_concurrency.lockutils [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
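The Acquiring/acquired/released triplets above are the DEBUG output of oslo.concurrency's lock decorator around the resource tracker method. A minimal sketch of the same pattern (the lock name matches the log; the function body is illustrative, not Nova's actual implementation):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def free_pci_device_allocations_for_instance(context, instance):
        # Runs only while holding the "compute_resources" lock; the
        # waited/held timings seen above are logged by lockutils itself.
        pass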
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.819 2 DEBUG nova.compute.manager [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10082
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.820 2 INFO nova.virt.libvirt.driver [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Deleting instance files /var/lib/nova/instances/4f91975d-d44b-46af-9879-dbf7a693fbd2_del
Sep 30 18:39:29 compute-0 nova_compute[265391]: 2025-09-30 18:39:29.820 2 INFO nova.virt.libvirt.driver [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Deletion of /var/lib/nova/instances/4f91975d-d44b-46af-9879-dbf7a693fbd2_del complete
Sep 30 18:39:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1829: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 5.0 KiB/s rd, 11 KiB/s wr, 7 op/s
Sep 30 18:39:29 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:39:29.925 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:39:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.172 2 DEBUG nova.compute.manager [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Received event network-vif-plugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.173 2 DEBUG oslo_concurrency.lockutils [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.173 2 DEBUG oslo_concurrency.lockutils [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.173 2 DEBUG oslo_concurrency.lockutils [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.173 2 DEBUG nova.compute.manager [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] No waiting events found dispatching network-vif-plugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.173 2 WARNING nova.compute.manager [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Received unexpected event network-vif-plugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e for instance with vm_state active and task_state migrating.
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.174 2 DEBUG nova.compute.manager [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Received event network-vif-unplugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.174 2 DEBUG oslo_concurrency.lockutils [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.174 2 DEBUG oslo_concurrency.lockutils [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.174 2 DEBUG oslo_concurrency.lockutils [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.174 2 DEBUG nova.compute.manager [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] No waiting events found dispatching network-vif-unplugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.174 2 DEBUG nova.compute.manager [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Received event network-vif-unplugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.174 2 DEBUG nova.compute.manager [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Received event network-vif-plugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.174 2 DEBUG oslo_concurrency.lockutils [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.175 2 DEBUG oslo_concurrency.lockutils [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.175 2 DEBUG oslo_concurrency.lockutils [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.175 2 DEBUG nova.compute.manager [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] No waiting events found dispatching network-vif-plugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.175 2 WARNING nova.compute.manager [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Received unexpected event network-vif-plugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e for instance with vm_state active and task_state migrating.
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.175 2 DEBUG nova.compute.manager [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Received event network-vif-unplugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.175 2 DEBUG oslo_concurrency.lockutils [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.175 2 DEBUG oslo_concurrency.lockutils [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.175 2 DEBUG oslo_concurrency.lockutils [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.176 2 DEBUG nova.compute.manager [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] No waiting events found dispatching network-vif-unplugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.176 2 DEBUG nova.compute.manager [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Received event network-vif-unplugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.176 2 DEBUG nova.compute.manager [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Received event network-vif-plugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.176 2 DEBUG oslo_concurrency.lockutils [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.176 2 DEBUG oslo_concurrency.lockutils [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.176 2 DEBUG oslo_concurrency.lockutils [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.176 2 DEBUG nova.compute.manager [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] No waiting events found dispatching network-vif-plugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.176 2 WARNING nova.compute.manager [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Received unexpected event network-vif-plugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e for instance with vm_state active and task_state migrating.
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.177 2 DEBUG nova.compute.manager [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Received event network-vif-plugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.177 2 DEBUG oslo_concurrency.lockutils [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.177 2 DEBUG oslo_concurrency.lockutils [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.177 2 DEBUG oslo_concurrency.lockutils [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.177 2 DEBUG nova.compute.manager [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] No waiting events found dispatching network-vif-plugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.177 2 WARNING nova.compute.manager [req-1353ab6d-6f68-4dd2-915a-329b540eb2f4 req-f9e26bcf-f984-40ae-b5af-2847a4b220aa 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Received unexpected event network-vif-plugged-6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e for instance with vm_state active and task_state migrating.
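The repeated network-vif-plugged/unplugged messages above are Neutron external events delivered through Nova's os-server-external-events API; since this source node has no waiter registered for the instance while task_state is migrating, each event is popped with "No waiting events found" and the plugged ones are flagged as unexpected. The payload Neutron posts has roughly this shape (illustrative; UUIDs copied from the instance and port in the log lines above):

    # One event as posted to POST /v2.1/os-server-external-events.
    event = {
        "events": [{
            "server_uuid": "4f91975d-d44b-46af-9879-dbf7a693fbd2",
            "name": "network-vif-plugged",
            "tag": "6e0ab88d-97b3-4ce8-9f18-d46c9c2fc74e",
            "status": "completed",
        }]
    }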
Sep 30 18:39:31 compute-0 nova_compute[265391]: 2025-09-30 18:39:31.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:31 compute-0 openstack_network_exporter[279566]: ERROR   18:39:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:39:31 compute-0 openstack_network_exporter[279566]: ERROR   18:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:39:31 compute-0 openstack_network_exporter[279566]: ERROR   18:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:39:31 compute-0 openstack_network_exporter[279566]: ERROR   18:39:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath

Sep 30 18:39:31 compute-0 openstack_network_exporter[279566]: ERROR   18:39:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:39:31 compute-0 ceph-mon[73755]: pgmap v1829: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 5.0 KiB/s rd, 11 KiB/s wr, 7 op/s
Sep 30 18:39:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:39:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:39:31.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:39:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:39:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:39:31.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:39:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1830: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 4.7 KiB/s rd, 2.5 KiB/s wr, 6 op/s
Sep 30 18:39:33 compute-0 ceph-mon[73755]: pgmap v1830: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 4.7 KiB/s rd, 2.5 KiB/s wr, 6 op/s
Sep 30 18:39:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:39:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:39:33.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:39:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:39:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:39:33.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:39:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:39:33.861Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:39:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1831: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 5.2 KiB/s rd, 2.5 KiB/s wr, 6 op/s
Sep 30 18:39:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:39:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:39:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:39:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:39:34 compute-0 sudo[344329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:39:34 compute-0 nova_compute[265391]: 2025-09-30 18:39:34.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:34 compute-0 sudo[344329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:39:34 compute-0 sudo[344329]: pam_unix(sudo:session): session closed for user root
Sep 30 18:39:34 compute-0 sudo[344354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Sep 30 18:39:34 compute-0 sudo[344354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:39:35 compute-0 sudo[344354]: pam_unix(sudo:session): session closed for user root
Sep 30 18:39:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:39:35 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:39:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:39:35 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:39:35 compute-0 sudo[344399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:39:35 compute-0 sudo[344399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:39:35 compute-0 sudo[344399]: pam_unix(sudo:session): session closed for user root
Sep 30 18:39:35 compute-0 sudo[344424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:39:35 compute-0 sudo[344424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:39:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 18:39:35 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:39:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 18:39:35 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:39:35 compute-0 ceph-mon[73755]: pgmap v1831: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 5.2 KiB/s rd, 2.5 KiB/s wr, 6 op/s
Sep 30 18:39:35 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:39:35 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:39:35 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:39:35 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:39:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:39:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:39:35.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:39:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:39:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:39:35.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:39:35 compute-0 sudo[344424]: pam_unix(sudo:session): session closed for user root
Sep 30 18:39:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1832: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 2.5 KiB/s wr, 6 op/s
Sep 30 18:39:35 compute-0 sudo[344483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:39:35 compute-0 sudo[344483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:39:35 compute-0 sudo[344483]: pam_unix(sudo:session): session closed for user root
Sep 30 18:39:36 compute-0 sudo[344508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- inventory --format=json-pretty --filter-for-batch
Sep 30 18:39:36 compute-0 sudo[344508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:39:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:39:36 compute-0 nova_compute[265391]: 2025-09-30 18:39:36.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:36 compute-0 podman[344574]: 2025-09-30 18:39:36.454864157 +0000 UTC m=+0.050233057 container create d0e25f60c32de863ab0f5fb2b85b5f70d7b8e695754c2a3cd14426963c2d4dc0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_bose, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:39:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:39:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2307088691' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:39:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:39:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2307088691' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:39:36 compute-0 systemd[1]: Started libpod-conmon-d0e25f60c32de863ab0f5fb2b85b5f70d7b8e695754c2a3cd14426963c2d4dc0.scope.
Sep 30 18:39:36 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:39:36 compute-0 podman[344574]: 2025-09-30 18:39:36.428104542 +0000 UTC m=+0.023473532 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:39:36 compute-0 podman[344574]: 2025-09-30 18:39:36.530662967 +0000 UTC m=+0.126031907 container init d0e25f60c32de863ab0f5fb2b85b5f70d7b8e695754c2a3cd14426963c2d4dc0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Sep 30 18:39:36 compute-0 podman[344574]: 2025-09-30 18:39:36.539739699 +0000 UTC m=+0.135108599 container start d0e25f60c32de863ab0f5fb2b85b5f70d7b8e695754c2a3cd14426963c2d4dc0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_bose, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Sep 30 18:39:36 compute-0 podman[344574]: 2025-09-30 18:39:36.543171097 +0000 UTC m=+0.138540017 container attach d0e25f60c32de863ab0f5fb2b85b5f70d7b8e695754c2a3cd14426963c2d4dc0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_bose, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:39:36 compute-0 magical_bose[344590]: 167 167
Sep 30 18:39:36 compute-0 systemd[1]: libpod-d0e25f60c32de863ab0f5fb2b85b5f70d7b8e695754c2a3cd14426963c2d4dc0.scope: Deactivated successfully.
Sep 30 18:39:36 compute-0 podman[344574]: 2025-09-30 18:39:36.546715487 +0000 UTC m=+0.142084387 container died d0e25f60c32de863ab0f5fb2b85b5f70d7b8e695754c2a3cd14426963c2d4dc0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Sep 30 18:39:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-f06fb9b9ee051ba9b29ea32379d03df0d205b01aef76aedbf37d24c4355a6efd-merged.mount: Deactivated successfully.
Sep 30 18:39:36 compute-0 podman[344574]: 2025-09-30 18:39:36.577976848 +0000 UTC m=+0.173345748 container remove d0e25f60c32de863ab0f5fb2b85b5f70d7b8e695754c2a3cd14426963c2d4dc0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_bose, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Sep 30 18:39:36 compute-0 systemd[1]: libpod-conmon-d0e25f60c32de863ab0f5fb2b85b5f70d7b8e695754c2a3cd14426963c2d4dc0.scope: Deactivated successfully.
Sep 30 18:39:36 compute-0 podman[344615]: 2025-09-30 18:39:36.729504425 +0000 UTC m=+0.043253338 container create 23c122faffb5196dcb2e01d44454e6a78e081344dfca04f5b74f1793f16fa398 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_leavitt, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Sep 30 18:39:36 compute-0 systemd[1]: Started libpod-conmon-23c122faffb5196dcb2e01d44454e6a78e081344dfca04f5b74f1793f16fa398.scope.
Sep 30 18:39:36 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:39:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87c6f5bc825c5cb4c99a1405b670009f60ec89636d160442a1ce0da5689695e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:39:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87c6f5bc825c5cb4c99a1405b670009f60ec89636d160442a1ce0da5689695e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:39:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87c6f5bc825c5cb4c99a1405b670009f60ec89636d160442a1ce0da5689695e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:39:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87c6f5bc825c5cb4c99a1405b670009f60ec89636d160442a1ce0da5689695e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:39:36 compute-0 podman[344615]: 2025-09-30 18:39:36.714572373 +0000 UTC m=+0.028321316 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:39:36 compute-0 podman[344615]: 2025-09-30 18:39:36.812977701 +0000 UTC m=+0.126726644 container init 23c122faffb5196dcb2e01d44454e6a78e081344dfca04f5b74f1793f16fa398 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_leavitt, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True)
Sep 30 18:39:36 compute-0 podman[344615]: 2025-09-30 18:39:36.818648256 +0000 UTC m=+0.132397199 container start 23c122faffb5196dcb2e01d44454e6a78e081344dfca04f5b74f1793f16fa398 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_leavitt, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:39:36 compute-0 podman[344615]: 2025-09-30 18:39:36.82153422 +0000 UTC m=+0.135283193 container attach 23c122faffb5196dcb2e01d44454e6a78e081344dfca04f5b74f1793f16fa398 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:39:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:39:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:39:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:39:37.339Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:39:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:39:37.341Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:39:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:39:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:39:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:39:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:39:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:39:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:39:37 compute-0 ceph-mon[73755]: pgmap v1832: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 2.5 KiB/s wr, 6 op/s
Sep 30 18:39:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2307088691' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:39:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2307088691' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:39:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]: [
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:     {
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:         "available": false,
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:         "being_replaced": false,
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:         "ceph_device_lvm": false,
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:         "device_id": "QEMU_DVD-ROM_QM00001",
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:         "lsm_data": {},
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:         "lvs": [],
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:         "path": "/dev/sr0",
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:         "rejected_reasons": [
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:             "Insufficient space (<5GB)",
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:             "Has a FileSystem"
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:         ],
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:         "sys_api": {
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:             "actuators": null,
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:             "device_nodes": [
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:                 "sr0"
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:             ],
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:             "devname": "sr0",
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:             "human_readable_size": "482.00 KB",
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:             "id_bus": "ata",
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:             "model": "QEMU DVD-ROM",
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:             "nr_requests": "2",
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:             "parent": "/dev/sr0",
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:             "partitions": {},
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:             "path": "/dev/sr0",
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:             "removable": "1",
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:             "rev": "2.5+",
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:             "ro": "0",
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:             "rotational": "0",
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:             "sas_address": "",
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:             "sas_device_handle": "",
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:             "scheduler_mode": "mq-deadline",
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:             "sectors": 0,
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:             "sectorsize": "2048",
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:             "size": 493568.0,
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:             "support_discard": "2048",
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:             "type": "disk",
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:             "vendor": "QEMU"
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:         }
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]:     }
Sep 30 18:39:37 compute-0 zealous_leavitt[344631]: ]
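The JSON above is the ceph-volume inventory that cephadm ran inside the short-lived container zealous_leavitt; the only device found, /dev/sr0, is rejected for insufficient space and an existing filesystem. A minimal sketch of issuing the same inventory call and filtering on the "available" flag (assumes a cephadm binary on PATH rather than the copied /var/lib/ceph/... script; the fsid is the one shown in the sudo line above):

    import json
    import subprocess

    out = subprocess.run(
        ["cephadm", "ceph-volume",
         "--fsid", "63d32c6a-fa18-54ed-8711-9a3915cc367b", "--",
         "inventory", "--format=json-pretty", "--filter-for-batch"],
        capture_output=True, text=True, check=True).stdout

    devices = json.loads(out)
    usable = [d["path"] for d in devices if d["available"]]
    # Empty on this host: /dev/sr0 is rejected ("Insufficient space (<5GB)",
    # "Has a FileSystem").
    print(usable)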
Sep 30 18:39:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:39:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:39:37.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:39:37 compute-0 systemd[1]: libpod-23c122faffb5196dcb2e01d44454e6a78e081344dfca04f5b74f1793f16fa398.scope: Deactivated successfully.
Sep 30 18:39:37 compute-0 podman[346095]: 2025-09-30 18:39:37.614152774 +0000 UTC m=+0.046880211 container died 23c122faffb5196dcb2e01d44454e6a78e081344dfca04f5b74f1793f16fa398 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_leavitt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Sep 30 18:39:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:39:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:39:37.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:39:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-87c6f5bc825c5cb4c99a1405b670009f60ec89636d160442a1ce0da5689695e9-merged.mount: Deactivated successfully.
Sep 30 18:39:37 compute-0 podman[346095]: 2025-09-30 18:39:37.658099239 +0000 UTC m=+0.090826676 container remove 23c122faffb5196dcb2e01d44454e6a78e081344dfca04f5b74f1793f16fa398 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_leavitt, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:39:37 compute-0 systemd[1]: libpod-conmon-23c122faffb5196dcb2e01d44454e6a78e081344dfca04f5b74f1793f16fa398.scope: Deactivated successfully.
Sep 30 18:39:37 compute-0 sudo[344508]: pam_unix(sudo:session): session closed for user root
Sep 30 18:39:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:39:37 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:39:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:39:37 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:39:37 compute-0 nova_compute[265391]: 2025-09-30 18:39:37.846 2 DEBUG oslo_concurrency.lockutils [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:39:37 compute-0 nova_compute[265391]: 2025-09-30 18:39:37.846 2 DEBUG oslo_concurrency.lockutils [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:39:37 compute-0 nova_compute[265391]: 2025-09-30 18:39:37.847 2 DEBUG oslo_concurrency.lockutils [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "4f91975d-d44b-46af-9879-dbf7a693fbd2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:39:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1833: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 2.5 KiB/s wr, 6 op/s
Sep 30 18:39:38 compute-0 nova_compute[265391]: 2025-09-30 18:39:38.360 2 DEBUG oslo_concurrency.lockutils [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:39:38 compute-0 nova_compute[265391]: 2025-09-30 18:39:38.361 2 DEBUG oslo_concurrency.lockutils [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:39:38 compute-0 nova_compute[265391]: 2025-09-30 18:39:38.361 2 DEBUG oslo_concurrency.lockutils [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:39:38 compute-0 nova_compute[265391]: 2025-09-30 18:39:38.361 2 DEBUG nova.compute.resource_tracker [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:39:38 compute-0 nova_compute[265391]: 2025-09-30 18:39:38.362 2 DEBUG oslo_concurrency.processutils [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:39:38 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:39:38 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3346492281' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:39:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:39:38] "GET /metrics HTTP/1.1" 200 46723 "" "Prometheus/2.51.0"
Sep 30 18:39:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:39:38] "GET /metrics HTTP/1.1" 200 46723 "" "Prometheus/2.51.0"
Sep 30 18:39:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:39:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:39:38 compute-0 ceph-mon[73755]: pgmap v1833: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 2.5 KiB/s wr, 6 op/s
Sep 30 18:39:38 compute-0 nova_compute[265391]: 2025-09-30 18:39:38.810 2 DEBUG oslo_concurrency.processutils [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:39:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:39:38.885Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:39:38 compute-0 sudo[346134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:39:38 compute-0 sudo[346134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:39:38 compute-0 sudo[346134]: pam_unix(sudo:session): session closed for user root
Sep 30 18:39:38 compute-0 nova_compute[265391]: 2025-09-30 18:39:38.964 2 WARNING nova.virt.libvirt.driver [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:39:38 compute-0 nova_compute[265391]: 2025-09-30 18:39:38.965 2 DEBUG oslo_concurrency.processutils [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:39:38 compute-0 nova_compute[265391]: 2025-09-30 18:39:38.987 2 DEBUG oslo_concurrency.processutils [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.022s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:39:38 compute-0 nova_compute[265391]: 2025-09-30 18:39:38.988 2 DEBUG nova.compute.resource_tracker [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4283MB free_disk=39.901119232177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:39:38 compute-0 nova_compute[265391]: 2025-09-30 18:39:38.988 2 DEBUG oslo_concurrency.lockutils [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:39:38 compute-0 nova_compute[265391]: 2025-09-30 18:39:38.988 2 DEBUG oslo_concurrency.lockutils [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:39:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:39:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:39:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:39:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:39:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:39:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:39:39.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:39:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:39:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:39:39.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:39:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 18:39:39 compute-0 nova_compute[265391]: 2025-09-30 18:39:39.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:39 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3346492281' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:39:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1834: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 5.2 KiB/s rd, 2.5 KiB/s wr, 6 op/s
Sep 30 18:39:39 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:39:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 18:39:39 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:39:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:39:39 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:39:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:39:39 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:39:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1835: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 810 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:39:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:39:40 compute-0 nova_compute[265391]: 2025-09-30 18:39:40.007 2 DEBUG nova.compute.resource_tracker [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Migration for instance 4f91975d-d44b-46af-9879-dbf7a693fbd2 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Sep 30 18:39:40 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:39:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:39:40 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:39:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:39:40 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:39:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:39:40 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:39:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:39:40 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:39:40 compute-0 sudo[346162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:39:40 compute-0 sudo[346162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:39:40 compute-0 sudo[346162]: pam_unix(sudo:session): session closed for user root
Sep 30 18:39:40 compute-0 sudo[346187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:39:40 compute-0 sudo[346187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:39:40 compute-0 nova_compute[265391]: 2025-09-30 18:39:40.516 2 DEBUG nova.compute.resource_tracker [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1596
Sep 30 18:39:40 compute-0 nova_compute[265391]: 2025-09-30 18:39:40.555 2 DEBUG nova.compute.resource_tracker [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Migration 7ede2a0e-7c1c-4cb9-abf9-796534210bb6 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Sep 30 18:39:40 compute-0 nova_compute[265391]: 2025-09-30 18:39:40.556 2 DEBUG nova.compute.resource_tracker [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:39:40 compute-0 nova_compute[265391]: 2025-09-30 18:39:40.556 2 DEBUG nova.compute.resource_tracker [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:39:38 up  1:42,  0 user,  load average: 0.55, 0.77, 0.84\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:39:40 compute-0 nova_compute[265391]: 2025-09-30 18:39:40.608 2 DEBUG oslo_concurrency.processutils [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:39:40 compute-0 podman[346253]: 2025-09-30 18:39:40.832373871 +0000 UTC m=+0.045906916 container create e529ecc7c0ce833e89d10db357eba24fd4f04e84ad7d3bb71b70c40b684f3227 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_allen, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Sep 30 18:39:40 compute-0 systemd[1]: Started libpod-conmon-e529ecc7c0ce833e89d10db357eba24fd4f04e84ad7d3bb71b70c40b684f3227.scope.
Sep 30 18:39:40 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:39:40 compute-0 podman[346253]: 2025-09-30 18:39:40.905574114 +0000 UTC m=+0.119107199 container init e529ecc7c0ce833e89d10db357eba24fd4f04e84ad7d3bb71b70c40b684f3227 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325)
Sep 30 18:39:40 compute-0 podman[346253]: 2025-09-30 18:39:40.815024587 +0000 UTC m=+0.028557692 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:39:40 compute-0 podman[346253]: 2025-09-30 18:39:40.91245378 +0000 UTC m=+0.125986825 container start e529ecc7c0ce833e89d10db357eba24fd4f04e84ad7d3bb71b70c40b684f3227 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_allen, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Sep 30 18:39:40 compute-0 podman[346253]: 2025-09-30 18:39:40.915426866 +0000 UTC m=+0.128959951 container attach e529ecc7c0ce833e89d10db357eba24fd4f04e84ad7d3bb71b70c40b684f3227 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:39:40 compute-0 optimistic_allen[346288]: 167 167
Sep 30 18:39:40 compute-0 systemd[1]: libpod-e529ecc7c0ce833e89d10db357eba24fd4f04e84ad7d3bb71b70c40b684f3227.scope: Deactivated successfully.
Sep 30 18:39:40 compute-0 podman[346253]: 2025-09-30 18:39:40.919370867 +0000 UTC m=+0.132903912 container died e529ecc7c0ce833e89d10db357eba24fd4f04e84ad7d3bb71b70c40b684f3227 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_allen, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid)
Sep 30 18:39:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-75946df277d86e2c522470d0f52775e05e7b1e49429415b21e9d3d39e6c49714-merged.mount: Deactivated successfully.
Sep 30 18:39:40 compute-0 podman[346253]: 2025-09-30 18:39:40.9573963 +0000 UTC m=+0.170929345 container remove e529ecc7c0ce833e89d10db357eba24fd4f04e84ad7d3bb71b70c40b684f3227 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_allen, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Sep 30 18:39:40 compute-0 systemd[1]: libpod-conmon-e529ecc7c0ce833e89d10db357eba24fd4f04e84ad7d3bb71b70c40b684f3227.scope: Deactivated successfully.
Sep 30 18:39:40 compute-0 ceph-mon[73755]: pgmap v1834: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 5.2 KiB/s rd, 2.5 KiB/s wr, 6 op/s
Sep 30 18:39:40 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:39:40 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:39:40 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:39:40 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:39:40 compute-0 ceph-mon[73755]: pgmap v1835: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 810 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:39:40 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:39:40 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:39:40 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:39:40 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:39:40 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:39:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:39:41 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3532344853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:39:41 compute-0 nova_compute[265391]: 2025-09-30 18:39:41.102 2 DEBUG oslo_concurrency.processutils [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:39:41 compute-0 nova_compute[265391]: 2025-09-30 18:39:41.112 2 DEBUG nova.compute.provider_tree [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:39:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:39:41 compute-0 podman[346315]: 2025-09-30 18:39:41.140696251 +0000 UTC m=+0.050766340 container create 5bd60041ec05e33610a348135745baed2965ac3cbf02a936cea0551b849515ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Sep 30 18:39:41 compute-0 systemd[1]: Started libpod-conmon-5bd60041ec05e33610a348135745baed2965ac3cbf02a936cea0551b849515ac.scope.
Sep 30 18:39:41 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:39:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd3e8b28067c2d740eee1b50496c310f74fc29bdf8fc1466c802cd7857d84c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:39:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd3e8b28067c2d740eee1b50496c310f74fc29bdf8fc1466c802cd7857d84c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:39:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd3e8b28067c2d740eee1b50496c310f74fc29bdf8fc1466c802cd7857d84c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:39:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd3e8b28067c2d740eee1b50496c310f74fc29bdf8fc1466c802cd7857d84c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:39:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd3e8b28067c2d740eee1b50496c310f74fc29bdf8fc1466c802cd7857d84c3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:39:41 compute-0 podman[346315]: 2025-09-30 18:39:41.116473941 +0000 UTC m=+0.026544060 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:39:41 compute-0 podman[346315]: 2025-09-30 18:39:41.21333656 +0000 UTC m=+0.123406689 container init 5bd60041ec05e33610a348135745baed2965ac3cbf02a936cea0551b849515ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:39:41 compute-0 podman[346315]: 2025-09-30 18:39:41.223289275 +0000 UTC m=+0.133359354 container start 5bd60041ec05e33610a348135745baed2965ac3cbf02a936cea0551b849515ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:39:41 compute-0 podman[346315]: 2025-09-30 18:39:41.22663499 +0000 UTC m=+0.136705069 container attach 5bd60041ec05e33610a348135745baed2965ac3cbf02a936cea0551b849515ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Sep 30 18:39:41 compute-0 nova_compute[265391]: 2025-09-30 18:39:41.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:39:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:39:41.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:39:41 compute-0 amazing_satoshi[346332]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:39:41 compute-0 amazing_satoshi[346332]: --> All data devices are unavailable
Sep 30 18:39:41 compute-0 systemd[1]: libpod-5bd60041ec05e33610a348135745baed2965ac3cbf02a936cea0551b849515ac.scope: Deactivated successfully.
Sep 30 18:39:41 compute-0 podman[346315]: 2025-09-30 18:39:41.564091766 +0000 UTC m=+0.474161855 container died 5bd60041ec05e33610a348135745baed2965ac3cbf02a936cea0551b849515ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Sep 30 18:39:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fd3e8b28067c2d740eee1b50496c310f74fc29bdf8fc1466c802cd7857d84c3-merged.mount: Deactivated successfully.
Sep 30 18:39:41 compute-0 podman[346315]: 2025-09-30 18:39:41.613040269 +0000 UTC m=+0.523110358 container remove 5bd60041ec05e33610a348135745baed2965ac3cbf02a936cea0551b849515ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_satoshi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Sep 30 18:39:41 compute-0 nova_compute[265391]: 2025-09-30 18:39:41.622 2 DEBUG nova.scheduler.client.report [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:39:41 compute-0 systemd[1]: libpod-conmon-5bd60041ec05e33610a348135745baed2965ac3cbf02a936cea0551b849515ac.scope: Deactivated successfully.
Sep 30 18:39:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:39:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:39:41.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:39:41 compute-0 sudo[346187]: pam_unix(sudo:session): session closed for user root
Sep 30 18:39:41 compute-0 podman[346349]: 2025-09-30 18:39:41.673522837 +0000 UTC m=+0.076465128 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20250930, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Sep 30 18:39:41 compute-0 podman[346358]: 2025-09-30 18:39:41.69007998 +0000 UTC m=+0.082347898 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.build-date=20250930, container_name=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Sep 30 18:39:41 compute-0 podman[346365]: 2025-09-30 18:39:41.695810917 +0000 UTC m=+0.078642994 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Sep 30 18:39:41 compute-0 sudo[346416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:39:41 compute-0 sudo[346416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:39:41 compute-0 sudo[346416]: pam_unix(sudo:session): session closed for user root
Sep 30 18:39:41 compute-0 sudo[346443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:39:41 compute-0 sudo[346443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:39:41 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3532344853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:39:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1836: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 810 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:39:42 compute-0 nova_compute[265391]: 2025-09-30 18:39:42.132 2 DEBUG nova.compute.resource_tracker [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:39:42 compute-0 nova_compute[265391]: 2025-09-30 18:39:42.133 2 DEBUG oslo_concurrency.lockutils [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.144s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:39:42 compute-0 nova_compute[265391]: 2025-09-30 18:39:42.154 2 INFO nova.compute.manager [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Sep 30 18:39:42 compute-0 podman[346512]: 2025-09-30 18:39:42.179579007 +0000 UTC m=+0.045357062 container create 7dfdf02973275f44ae33d27f1c0f0ff1261c85a85e02c2fa033ad3f43417df18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_dijkstra, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:39:42 compute-0 systemd[1]: Started libpod-conmon-7dfdf02973275f44ae33d27f1c0f0ff1261c85a85e02c2fa033ad3f43417df18.scope.
Sep 30 18:39:42 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:39:42 compute-0 podman[346512]: 2025-09-30 18:39:42.161960126 +0000 UTC m=+0.027738201 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:39:42 compute-0 podman[346512]: 2025-09-30 18:39:42.262229272 +0000 UTC m=+0.128007317 container init 7dfdf02973275f44ae33d27f1c0f0ff1261c85a85e02c2fa033ad3f43417df18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_dijkstra, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325)
Sep 30 18:39:42 compute-0 podman[346512]: 2025-09-30 18:39:42.268488312 +0000 UTC m=+0.134266377 container start 7dfdf02973275f44ae33d27f1c0f0ff1261c85a85e02c2fa033ad3f43417df18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_dijkstra, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Sep 30 18:39:42 compute-0 podman[346512]: 2025-09-30 18:39:42.272769002 +0000 UTC m=+0.138547067 container attach 7dfdf02973275f44ae33d27f1c0f0ff1261c85a85e02c2fa033ad3f43417df18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Sep 30 18:39:42 compute-0 pensive_dijkstra[346528]: 167 167
Sep 30 18:39:42 compute-0 systemd[1]: libpod-7dfdf02973275f44ae33d27f1c0f0ff1261c85a85e02c2fa033ad3f43417df18.scope: Deactivated successfully.
Sep 30 18:39:42 compute-0 podman[346512]: 2025-09-30 18:39:42.278001556 +0000 UTC m=+0.143779671 container died 7dfdf02973275f44ae33d27f1c0f0ff1261c85a85e02c2fa033ad3f43417df18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_dijkstra, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid)
Sep 30 18:39:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-81cde7a3add9b44f906855471634d1a107914bdca3a12c89e4fffd53cebdacfa-merged.mount: Deactivated successfully.
Sep 30 18:39:42 compute-0 podman[346512]: 2025-09-30 18:39:42.317970439 +0000 UTC m=+0.183748484 container remove 7dfdf02973275f44ae33d27f1c0f0ff1261c85a85e02c2fa033ad3f43417df18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_dijkstra, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:39:42 compute-0 systemd[1]: libpod-conmon-7dfdf02973275f44ae33d27f1c0f0ff1261c85a85e02c2fa033ad3f43417df18.scope: Deactivated successfully.
Sep 30 18:39:42 compute-0 podman[346554]: 2025-09-30 18:39:42.462699102 +0000 UTC m=+0.040897347 container create 1ed67df182d4237ad9be19002683c4affa761def37899c6e31eef61e0c7907ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Sep 30 18:39:42 compute-0 systemd[1]: Started libpod-conmon-1ed67df182d4237ad9be19002683c4affa761def37899c6e31eef61e0c7907ed.scope.
Sep 30 18:39:42 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:39:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0acc472a5d2083d9e7d42057b21a3636d284cb0bbbbe55ed10ef74bf2ebc830/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:39:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0acc472a5d2083d9e7d42057b21a3636d284cb0bbbbe55ed10ef74bf2ebc830/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:39:42 compute-0 podman[346554]: 2025-09-30 18:39:42.44191214 +0000 UTC m=+0.020110425 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:39:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0acc472a5d2083d9e7d42057b21a3636d284cb0bbbbe55ed10ef74bf2ebc830/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:39:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0acc472a5d2083d9e7d42057b21a3636d284cb0bbbbe55ed10ef74bf2ebc830/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:39:42 compute-0 podman[346554]: 2025-09-30 18:39:42.548108897 +0000 UTC m=+0.126307182 container init 1ed67df182d4237ad9be19002683c4affa761def37899c6e31eef61e0c7907ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_blackburn, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:39:42 compute-0 podman[346554]: 2025-09-30 18:39:42.564285811 +0000 UTC m=+0.142484076 container start 1ed67df182d4237ad9be19002683c4affa761def37899c6e31eef61e0c7907ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:39:42 compute-0 podman[346554]: 2025-09-30 18:39:42.567859633 +0000 UTC m=+0.146057888 container attach 1ed67df182d4237ad9be19002683c4affa761def37899c6e31eef61e0c7907ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Sep 30 18:39:42 compute-0 funny_blackburn[346571]: {
Sep 30 18:39:42 compute-0 funny_blackburn[346571]:     "0": [
Sep 30 18:39:42 compute-0 funny_blackburn[346571]:         {
Sep 30 18:39:42 compute-0 funny_blackburn[346571]:             "devices": [
Sep 30 18:39:42 compute-0 funny_blackburn[346571]:                 "/dev/loop3"
Sep 30 18:39:42 compute-0 funny_blackburn[346571]:             ],
Sep 30 18:39:42 compute-0 funny_blackburn[346571]:             "lv_name": "ceph_lv0",
Sep 30 18:39:42 compute-0 funny_blackburn[346571]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:39:42 compute-0 funny_blackburn[346571]:             "lv_size": "21470642176",
Sep 30 18:39:42 compute-0 funny_blackburn[346571]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:39:42 compute-0 funny_blackburn[346571]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:39:42 compute-0 funny_blackburn[346571]:             "name": "ceph_lv0",
Sep 30 18:39:42 compute-0 funny_blackburn[346571]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:39:42 compute-0 funny_blackburn[346571]:             "tags": {
Sep 30 18:39:42 compute-0 funny_blackburn[346571]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:39:42 compute-0 funny_blackburn[346571]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:39:42 compute-0 funny_blackburn[346571]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:39:42 compute-0 funny_blackburn[346571]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:39:42 compute-0 funny_blackburn[346571]:                 "ceph.cluster_name": "ceph",
Sep 30 18:39:42 compute-0 funny_blackburn[346571]:                 "ceph.crush_device_class": "",
Sep 30 18:39:42 compute-0 funny_blackburn[346571]:                 "ceph.encrypted": "0",
Sep 30 18:39:42 compute-0 funny_blackburn[346571]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:39:42 compute-0 funny_blackburn[346571]:                 "ceph.osd_id": "0",
Sep 30 18:39:42 compute-0 funny_blackburn[346571]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:39:42 compute-0 funny_blackburn[346571]:                 "ceph.type": "block",
Sep 30 18:39:42 compute-0 funny_blackburn[346571]:                 "ceph.vdo": "0",
Sep 30 18:39:42 compute-0 funny_blackburn[346571]:                 "ceph.with_tpm": "0"
Sep 30 18:39:42 compute-0 funny_blackburn[346571]:             },
Sep 30 18:39:42 compute-0 funny_blackburn[346571]:             "type": "block",
Sep 30 18:39:42 compute-0 funny_blackburn[346571]:             "vg_name": "ceph_vg0"
Sep 30 18:39:42 compute-0 funny_blackburn[346571]:         }
Sep 30 18:39:42 compute-0 funny_blackburn[346571]:     ]
Sep 30 18:39:42 compute-0 funny_blackburn[346571]: }
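The JSON block printed by the funny_blackburn container above is the device inventory for OSD 0: a single logical volume ceph_lv0 in ceph_vg0, backed by /dev/loop3, with its ceph.* LV tags (osd_id, osd_fsid, cluster_fsid, and so on). The command that produced it is not shown in this excerpt, though the layout matches what ceph-volume emits in JSON mode. A minimal sketch of post-processing that structure, assuming it has been saved to a hypothetical file osd_inventory.json:

    import json

    # Sketch only: the structure (OSD id -> list of LV records carrying
    # "devices", "lv_path" and a "tags" mapping) is copied from the JSON
    # captured in the log above; "osd_inventory.json" is a hypothetical capture.
    def summarize_osd_inventory(raw):
        rows = []
        for osd_id, lvs in json.loads(raw).items():
            for lv in lvs:
                tags = lv.get("tags", {})
                rows.append({
                    "osd_id": osd_id,
                    "devices": lv.get("devices", []),
                    "lv_path": lv.get("lv_path"),
                    "osd_fsid": tags.get("ceph.osd_fsid"),
                    "cluster_fsid": tags.get("ceph.cluster_fsid"),
                })
        return rows

    if __name__ == "__main__":
        with open("osd_inventory.json") as fh:  # hypothetical file holding the block above
            for row in summarize_osd_inventory(fh.read()):
                print(row)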
Sep 30 18:39:42 compute-0 systemd[1]: libpod-1ed67df182d4237ad9be19002683c4affa761def37899c6e31eef61e0c7907ed.scope: Deactivated successfully.
Sep 30 18:39:42 compute-0 podman[346554]: 2025-09-30 18:39:42.835067161 +0000 UTC m=+0.413265406 container died 1ed67df182d4237ad9be19002683c4affa761def37899c6e31eef61e0c7907ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_blackburn, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:39:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0acc472a5d2083d9e7d42057b21a3636d284cb0bbbbe55ed10ef74bf2ebc830-merged.mount: Deactivated successfully.
Sep 30 18:39:42 compute-0 podman[346554]: 2025-09-30 18:39:42.880790581 +0000 UTC m=+0.458988846 container remove 1ed67df182d4237ad9be19002683c4affa761def37899c6e31eef61e0c7907ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:39:42 compute-0 systemd[1]: libpod-conmon-1ed67df182d4237ad9be19002683c4affa761def37899c6e31eef61e0c7907ed.scope: Deactivated successfully.
Sep 30 18:39:42 compute-0 sudo[346443]: pam_unix(sudo:session): session closed for user root
Sep 30 18:39:42 compute-0 sudo[346593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:39:42 compute-0 sudo[346593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:39:42 compute-0 ceph-mon[73755]: pgmap v1836: 353 pgs: 353 active+clean; 200 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 810 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:39:42 compute-0 sudo[346593]: pam_unix(sudo:session): session closed for user root
Sep 30 18:39:43 compute-0 sudo[346618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:39:43 compute-0 sudo[346618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:39:43 compute-0 nova_compute[265391]: 2025-09-30 18:39:43.219 2 INFO nova.scheduler.client.report [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Deleted allocation for migration 7ede2a0e-7c1c-4cb9-abf9-796534210bb6
Sep 30 18:39:43 compute-0 nova_compute[265391]: 2025-09-30 18:39:43.220 2 DEBUG nova.virt.libvirt.driver [None req-a1cc36d7-36d8-4ec7-a1db-83dad0f9e880 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 4f91975d-d44b-46af-9879-dbf7a693fbd2] Live migration monitoring is all done _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11566
Sep 30 18:39:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:39:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:39:43.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:39:43 compute-0 podman[346686]: 2025-09-30 18:39:43.605801035 +0000 UTC m=+0.060517480 container create 883b301e85b7b99e81435a9cff2d992759f68b285d75b02430b43e6ff6789005 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:39:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:39:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:39:43.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:39:43 compute-0 systemd[1]: Started libpod-conmon-883b301e85b7b99e81435a9cff2d992759f68b285d75b02430b43e6ff6789005.scope.
Sep 30 18:39:43 compute-0 podman[346686]: 2025-09-30 18:39:43.577966012 +0000 UTC m=+0.032682497 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:39:43 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:39:43 compute-0 podman[346686]: 2025-09-30 18:39:43.710579716 +0000 UTC m=+0.165296151 container init 883b301e85b7b99e81435a9cff2d992759f68b285d75b02430b43e6ff6789005 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_leavitt, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:39:43 compute-0 podman[346686]: 2025-09-30 18:39:43.721010993 +0000 UTC m=+0.175727388 container start 883b301e85b7b99e81435a9cff2d992759f68b285d75b02430b43e6ff6789005 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Sep 30 18:39:43 compute-0 podman[346686]: 2025-09-30 18:39:43.724464491 +0000 UTC m=+0.179180926 container attach 883b301e85b7b99e81435a9cff2d992759f68b285d75b02430b43e6ff6789005 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Sep 30 18:39:43 compute-0 vigorous_leavitt[346703]: 167 167
Sep 30 18:39:43 compute-0 systemd[1]: libpod-883b301e85b7b99e81435a9cff2d992759f68b285d75b02430b43e6ff6789005.scope: Deactivated successfully.
Sep 30 18:39:43 compute-0 podman[346686]: 2025-09-30 18:39:43.727358385 +0000 UTC m=+0.182074780 container died 883b301e85b7b99e81435a9cff2d992759f68b285d75b02430b43e6ff6789005 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:39:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-e96e6f9066f315f9807a21948a6856d10e960cd60063801aa3b88ce7d05ec0c7-merged.mount: Deactivated successfully.
Sep 30 18:39:43 compute-0 podman[346686]: 2025-09-30 18:39:43.764841835 +0000 UTC m=+0.219558260 container remove 883b301e85b7b99e81435a9cff2d992759f68b285d75b02430b43e6ff6789005 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_leavitt, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:39:43 compute-0 systemd[1]: libpod-conmon-883b301e85b7b99e81435a9cff2d992759f68b285d75b02430b43e6ff6789005.scope: Deactivated successfully.
Sep 30 18:39:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:39:43.862Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:39:43 compute-0 podman[346729]: 2025-09-30 18:39:43.945452077 +0000 UTC m=+0.046953073 container create fa830b8ec937878ad7badc77b12a026f7119843206f8b8688a612b3d9ab481b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325)
Sep 30 18:39:43 compute-0 systemd[1]: Started libpod-conmon-fa830b8ec937878ad7badc77b12a026f7119843206f8b8688a612b3d9ab481b3.scope.
Sep 30 18:39:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:39:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:39:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:39:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1837: 353 pgs: 353 active+clean; 121 MiB data, 376 MiB used, 40 GiB / 40 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Sep 30 18:39:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:39:44 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:39:44 compute-0 podman[346729]: 2025-09-30 18:39:43.922445608 +0000 UTC m=+0.023946634 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:39:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd05d5c4e442ac64cc55a9fff1ae35ab963af4be3eb3e91560e3d5190264528d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:39:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd05d5c4e442ac64cc55a9fff1ae35ab963af4be3eb3e91560e3d5190264528d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:39:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd05d5c4e442ac64cc55a9fff1ae35ab963af4be3eb3e91560e3d5190264528d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:39:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd05d5c4e442ac64cc55a9fff1ae35ab963af4be3eb3e91560e3d5190264528d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:39:44 compute-0 podman[346729]: 2025-09-30 18:39:44.03937908 +0000 UTC m=+0.140880106 container init fa830b8ec937878ad7badc77b12a026f7119843206f8b8688a612b3d9ab481b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_golick, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 18:39:44 compute-0 podman[346729]: 2025-09-30 18:39:44.051649954 +0000 UTC m=+0.153150930 container start fa830b8ec937878ad7badc77b12a026f7119843206f8b8688a612b3d9ab481b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_golick, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Sep 30 18:39:44 compute-0 podman[346729]: 2025-09-30 18:39:44.055627766 +0000 UTC m=+0.157128782 container attach fa830b8ec937878ad7badc77b12a026f7119843206f8b8688a612b3d9ab481b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Sep 30 18:39:44 compute-0 lvm[346819]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:39:44 compute-0 lvm[346819]: VG ceph_vg0 finished
Sep 30 18:39:44 compute-0 lvm[346822]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:39:44 compute-0 lvm[346822]: VG ceph_vg0 finished
Sep 30 18:39:44 compute-0 happy_golick[346745]: {}
Sep 30 18:39:44 compute-0 systemd[1]: libpod-fa830b8ec937878ad7badc77b12a026f7119843206f8b8688a612b3d9ab481b3.scope: Deactivated successfully.
Sep 30 18:39:44 compute-0 systemd[1]: libpod-fa830b8ec937878ad7badc77b12a026f7119843206f8b8688a612b3d9ab481b3.scope: Consumed 1.089s CPU time.
Sep 30 18:39:44 compute-0 podman[346729]: 2025-09-30 18:39:44.750068147 +0000 UTC m=+0.851569163 container died fa830b8ec937878ad7badc77b12a026f7119843206f8b8688a612b3d9ab481b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Sep 30 18:39:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd05d5c4e442ac64cc55a9fff1ae35ab963af4be3eb3e91560e3d5190264528d-merged.mount: Deactivated successfully.
Sep 30 18:39:44 compute-0 podman[346729]: 2025-09-30 18:39:44.802179371 +0000 UTC m=+0.903680347 container remove fa830b8ec937878ad7badc77b12a026f7119843206f8b8688a612b3d9ab481b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_golick, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Sep 30 18:39:44 compute-0 systemd[1]: libpod-conmon-fa830b8ec937878ad7badc77b12a026f7119843206f8b8688a612b3d9ab481b3.scope: Deactivated successfully.
Sep 30 18:39:44 compute-0 nova_compute[265391]: 2025-09-30 18:39:44.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:44 compute-0 sudo[346618]: pam_unix(sudo:session): session closed for user root
Sep 30 18:39:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:39:44 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:39:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:39:44 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:39:44 compute-0 sudo[346837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:39:44 compute-0 sudo[346837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:39:44 compute-0 sudo[346837]: pam_unix(sudo:session): session closed for user root
Sep 30 18:39:45 compute-0 ceph-mon[73755]: pgmap v1837: 353 pgs: 353 active+clean; 121 MiB data, 376 MiB used, 40 GiB / 40 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Sep 30 18:39:45 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:39:45 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:39:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:39:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:39:45.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:39:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:39:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:39:45.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:39:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1838: 353 pgs: 353 active+clean; 121 MiB data, 376 MiB used, 40 GiB / 40 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Sep 30 18:39:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:39:46 compute-0 nova_compute[265391]: 2025-09-30 18:39:46.254 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:39:46 compute-0 nova_compute[265391]: 2025-09-30 18:39:46.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:47 compute-0 ceph-mon[73755]: pgmap v1838: 353 pgs: 353 active+clean; 121 MiB data, 376 MiB used, 40 GiB / 40 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Sep 30 18:39:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:39:47.341Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:39:47 compute-0 nova_compute[265391]: 2025-09-30 18:39:47.429 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:39:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:39:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:39:47.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:39:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:39:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:39:47.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:39:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1839: 353 pgs: 353 active+clean; 121 MiB data, 376 MiB used, 40 GiB / 40 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Sep 30 18:39:48 compute-0 nova_compute[265391]: 2025-09-30 18:39:48.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:39:48 compute-0 nova_compute[265391]: 2025-09-30 18:39:48.428 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11909
Sep 30 18:39:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:39:48] "GET /metrics HTTP/1.1" 200 46723 "" "Prometheus/2.51.0"
Sep 30 18:39:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:39:48] "GET /metrics HTTP/1.1" 200 46723 "" "Prometheus/2.51.0"
Sep 30 18:39:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:39:48.886Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:39:48 compute-0 nova_compute[265391]: 2025-09-30 18:39:48.935 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11918
Sep 30 18:39:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:39:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:39:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:39:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:39:49 compute-0 ceph-mon[73755]: pgmap v1839: 353 pgs: 353 active+clean; 121 MiB data, 376 MiB used, 40 GiB / 40 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Sep 30 18:39:49 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/434357496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:39:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:39:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:39:49.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:39:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:39:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:39:49.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:39:49 compute-0 nova_compute[265391]: 2025-09-30 18:39:49.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1840: 353 pgs: 353 active+clean; 121 MiB data, 376 MiB used, 40 GiB / 40 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Sep 30 18:39:50 compute-0 nova_compute[265391]: 2025-09-30 18:39:50.935 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:39:51 compute-0 ceph-mon[73755]: pgmap v1840: 353 pgs: 353 active+clean; 121 MiB data, 376 MiB used, 40 GiB / 40 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Sep 30 18:39:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:39:51 compute-0 nova_compute[265391]: 2025-09-30 18:39:51.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:51 compute-0 nova_compute[265391]: 2025-09-30 18:39:51.449 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:39:51 compute-0 nova_compute[265391]: 2025-09-30 18:39:51.449 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:39:51 compute-0 nova_compute[265391]: 2025-09-30 18:39:51.449 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:39:51 compute-0 nova_compute[265391]: 2025-09-30 18:39:51.450 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:39:51 compute-0 nova_compute[265391]: 2025-09-30 18:39:51.450 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:39:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:39:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:39:51.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:39:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:39:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:39:51.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:39:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:39:51 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/351520998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:39:51 compute-0 nova_compute[265391]: 2025-09-30 18:39:51.881 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
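The "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" subprocess above is how the nova-compute resource tracker samples Ceph capacity during its periodic update_available_resource audit (it appears again a few seconds later for the provider-tree refresh). A minimal sketch of running the same probe by hand, assuming the command is available on the host exactly as logged; the top-level layout of the returned JSON (e.g. a "stats" section) varies by Ceph release and is treated here as an assumption:

    import json
    import subprocess

    # Same command string that nova_compute logs above; needs the openstack
    # keyring and /etc/ceph/ceph.conf present on the host, as in this deployment.
    CMD = ["ceph", "df", "--format=json", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    def ceph_df():
        out = subprocess.run(CMD, check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    if __name__ == "__main__":
        report = ceph_df()
        # "stats" is where cluster-wide totals usually live; if the layout
        # differs on this Ceph release, fall back to listing the top-level keys.
        print(report.get("stats") or sorted(report))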
Sep 30 18:39:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1841: 353 pgs: 353 active+clean; 121 MiB data, 376 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:39:52 compute-0 nova_compute[265391]: 2025-09-30 18:39:52.046 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:39:52 compute-0 nova_compute[265391]: 2025-09-30 18:39:52.047 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:39:52 compute-0 nova_compute[265391]: 2025-09-30 18:39:52.067 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.020s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:39:52 compute-0 nova_compute[265391]: 2025-09-30 18:39:52.068 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4256MB free_disk=39.94663619995117GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:39:52 compute-0 nova_compute[265391]: 2025-09-30 18:39:52.068 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:39:52 compute-0 nova_compute[265391]: 2025-09-30 18:39:52.069 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:39:52 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/217708246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:39:52 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/351520998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:39:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:39:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:39:53 compute-0 ceph-mon[73755]: pgmap v1841: 353 pgs: 353 active+clean; 121 MiB data, 376 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:39:53 compute-0 nova_compute[265391]: 2025-09-30 18:39:53.121 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:39:53 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:39:53 compute-0 nova_compute[265391]: 2025-09-30 18:39:53.121 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:39:52 up  1:43,  0 user,  load average: 0.92, 0.85, 0.86\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:39:53 compute-0 nova_compute[265391]: 2025-09-30 18:39:53.142 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Refreshing inventories for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:822
Sep 30 18:39:53 compute-0 nova_compute[265391]: 2025-09-30 18:39:53.161 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Updating ProviderTree inventory for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:786
Sep 30 18:39:53 compute-0 nova_compute[265391]: 2025-09-30 18:39:53.162 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Updating inventory in ProviderTree for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Sep 30 18:39:53 compute-0 nova_compute[265391]: 2025-09-30 18:39:53.172 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Refreshing aggregate associations for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2, aggregates: None _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:831
Sep 30 18:39:53 compute-0 nova_compute[265391]: 2025-09-30 18:39:53.203 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Refreshing trait associations for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2, traits: COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SOUND_MODEL_SB16,COMPUTE_ARCH_X86_64,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIRTIO_PACKED,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,COMPUTE_STATUS_DISABLED,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_TIS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOUND_MODEL_ICH9,COMPUTE_SOUND_MODEL_USB,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOUND_MODEL_PCSPK,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ADDRESS_SPACE_EMULATED,COMPUTE_RESCUE_BFV,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_STATELESS_FIRMWARE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_IGB,HW_ARCH_X86_64,COMPUTE_ACCELERATORS,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOUND_MODEL_ES1370,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_CRB,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_VIRTIO_FS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ADDRESS_SPACE_PASSTHROUGH,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_F16C,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOUND_MODEL_ICH6,COMPUTE_SOUND_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SOUND_MODEL_AC97,HW_CPU_X86_SSE42 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:843
Sep 30 18:39:53 compute-0 nova_compute[265391]: 2025-09-30 18:39:53.227 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:39:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:39:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:39:53.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:39:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:39:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:39:53.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:39:53 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:39:53 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2408308363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:39:53 compute-0 nova_compute[265391]: 2025-09-30 18:39:53.703 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:39:53 compute-0 nova_compute[265391]: 2025-09-30 18:39:53.708 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:39:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:39:53.863Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
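The recurring Alertmanager dispatcher errors above show the ceph-dashboard webhook receiver on compute-1.ctlplane.example.com:8443 being unreachable: first "context deadline exceeded", then "dial tcp 192.168.122.101:8443: i/o timeout". A minimal reachability probe for that exact URL, as a sketch (the URL is copied verbatim from the log; the probe itself is hypothetical and only checks that a TCP connection to the receiver port can be opened, not that the API answers):

    import socket
    from urllib.parse import urlparse

    # URL taken verbatim from the Alertmanager error message above.
    URL = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"

    def can_connect(url, timeout=3.0):
        parsed = urlparse(url)
        port = parsed.port or (443 if parsed.scheme == "https" else 80)
        try:
            with socket.create_connection((parsed.hostname, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        state = "reachable" if can_connect(URL) else "unreachable (consistent with the i/o timeout above)"
        print(URL, "->", state)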
Sep 30 18:39:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:39:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:39:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:39:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:39:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1842: 353 pgs: 353 active+clean; 121 MiB data, 376 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:39:54 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2408308363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:39:54 compute-0 nova_compute[265391]: 2025-09-30 18:39:54.215 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:39:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:39:54.330 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:39:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:39:54.331 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:39:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:39:54.331 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:39:54 compute-0 nova_compute[265391]: 2025-09-30 18:39:54.727 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:39:54 compute-0 nova_compute[265391]: 2025-09-30 18:39:54.727 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.659s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:39:54 compute-0 nova_compute[265391]: 2025-09-30 18:39:54.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:55 compute-0 ceph-mon[73755]: pgmap v1842: 353 pgs: 353 active+clean; 121 MiB data, 376 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:39:55 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2147569816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:39:55 compute-0 nova_compute[265391]: 2025-09-30 18:39:55.220 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:39:55 compute-0 nova_compute[265391]: 2025-09-30 18:39:55.220 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:39:55 compute-0 nova_compute[265391]: 2025-09-30 18:39:55.220 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:39:55 compute-0 nova_compute[265391]: 2025-09-30 18:39:55.221 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:39:55 compute-0 nova_compute[265391]: 2025-09-30 18:39:55.221 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:39:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:39:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:39:55.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:39:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:39:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:39:55.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:39:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1843: 353 pgs: 353 active+clean; 121 MiB data, 376 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:39:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:39:56 compute-0 nova_compute[265391]: 2025-09-30 18:39:56.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:39:57 compute-0 ceph-mon[73755]: pgmap v1843: 353 pgs: 353 active+clean; 121 MiB data, 376 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:39:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:39:57.343Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:39:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:39:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:39:57.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:39:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:39:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/525515443' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:39:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:39:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/525515443' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:39:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:39:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:39:57.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:39:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1844: 353 pgs: 353 active+clean; 121 MiB data, 376 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:39:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/525515443' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:39:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/525515443' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:39:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2633899959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:39:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:39:58] "GET /metrics HTTP/1.1" 200 46722 "" "Prometheus/2.51.0"
Sep 30 18:39:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:39:58] "GET /metrics HTTP/1.1" 200 46722 "" "Prometheus/2.51.0"
Sep 30 18:39:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:39:58.887Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:39:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:39:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:39:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:39:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:39:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:39:59 compute-0 sudo[346922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:39:59 compute-0 sudo[346922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:39:59 compute-0 sudo[346922]: pam_unix(sudo:session): session closed for user root
Sep 30 18:39:59 compute-0 ceph-mon[73755]: pgmap v1844: 353 pgs: 353 active+clean; 121 MiB data, 376 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:39:59 compute-0 podman[346949]: 2025-09-30 18:39:59.528558728 +0000 UTC m=+0.060690634 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Sep 30 18:39:59 compute-0 podman[346948]: 2025-09-30 18:39:59.545540223 +0000 UTC m=+0.086490165 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Sep 30 18:39:59 compute-0 podman[346947]: 2025-09-30 18:39:59.54543264 +0000 UTC m=+0.087895400 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2)
Sep 30 18:39:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:39:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:39:59.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:39:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:39:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:39:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:39:59.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:39:59 compute-0 podman[276673]: time="2025-09-30T18:39:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:39:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:39:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:39:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:39:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10316 "" "Go-http-client/1.1"
Sep 30 18:39:59 compute-0 nova_compute[265391]: 2025-09-30 18:39:59.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:00 compute-0 ceph-mon[73755]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore; 1 failed cephadm daemon(s)
Sep 30 18:40:00 compute-0 ceph-mon[73755]: log_channel(cluster) log [WRN] : [WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
Sep 30 18:40:00 compute-0 ceph-mon[73755]: log_channel(cluster) log [WRN] :      osd.1 observed slow operation indications in BlueStore
Sep 30 18:40:00 compute-0 ceph-mon[73755]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Sep 30 18:40:00 compute-0 ceph-mon[73755]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.0.0.compute-1.bsnzkg on compute-1 is in error state
Sep 30 18:40:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1845: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:40:00 compute-0 ceph-mon[73755]: Health detail: HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore; 1 failed cephadm daemon(s)
Sep 30 18:40:00 compute-0 ceph-mon[73755]: [WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
Sep 30 18:40:00 compute-0 ceph-mon[73755]:      osd.1 observed slow operation indications in BlueStore
Sep 30 18:40:00 compute-0 ceph-mon[73755]: [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Sep 30 18:40:00 compute-0 ceph-mon[73755]:     daemon nfs.cephfs.0.0.compute-1.bsnzkg on compute-1 is in error state
Sep 30 18:40:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:40:01 compute-0 ceph-mon[73755]: pgmap v1845: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:40:01 compute-0 nova_compute[265391]: 2025-09-30 18:40:01.424 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:40:01 compute-0 openstack_network_exporter[279566]: ERROR   18:40:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:40:01 compute-0 openstack_network_exporter[279566]: ERROR   18:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:40:01 compute-0 openstack_network_exporter[279566]: ERROR   18:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:40:01 compute-0 nova_compute[265391]: 2025-09-30 18:40:01.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:01 compute-0 openstack_network_exporter[279566]: ERROR   18:40:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:40:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:40:01 compute-0 openstack_network_exporter[279566]: ERROR   18:40:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:40:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:40:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:40:01.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:01 compute-0 nova_compute[265391]: 2025-09-30 18:40:01.627 2 DEBUG nova.compute.manager [None req-7bf98f09-099e-4123-bed0-1e6648b17c98 e33f9dc9fbb84319b00517567fe4b47e 4e2dde567e5c4b1c9802c64cfc281b6d - - default default] Removing trait COMPUTE_STATUS_DISABLED from compute node resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 in placement. update_compute_provider_status /usr/lib/python3.12/site-packages/nova/compute/manager.py:631
Sep 30 18:40:01 compute-0 nova_compute[265391]: 2025-09-30 18:40:01.676 2 DEBUG nova.compute.provider_tree [None req-7bf98f09-099e-4123-bed0-1e6648b17c98 e33f9dc9fbb84319b00517567fe4b47e 4e2dde567e5c4b1c9802c64cfc281b6d - - default default] Updating resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 generation from 55 to 56 during operation: update_traits _update_generation /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:164
Sep 30 18:40:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:40:01.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1846: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:40:03 compute-0 ceph-mon[73755]: pgmap v1846: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:40:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:40:03.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:40:03.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:40:03.865Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:40:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:40:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:40:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:40:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:40:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1847: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:40:04 compute-0 unix_chkpwd[347022]: password check failed for user (root)
Sep 30 18:40:04 compute-0 sshd-session[347016]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=154.125.120.7  user=root
Sep 30 18:40:04 compute-0 nova_compute[265391]: 2025-09-30 18:40:04.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:04 compute-0 nova_compute[265391]: 2025-09-30 18:40:04.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:05 compute-0 ceph-mon[73755]: pgmap v1847: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:40:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:40:05.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:40:05.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1848: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:40:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:40:06 compute-0 sshd-session[347016]: Failed password for root from 154.125.120.7 port 60531 ssh2
Sep 30 18:40:06 compute-0 nova_compute[265391]: 2025-09-30 18:40:06.297 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:40:06 compute-0 nova_compute[265391]: 2025-09-30 18:40:06.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:06 compute-0 nova_compute[265391]: 2025-09-30 18:40:06.804 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:40:07 compute-0 sshd-session[347016]: Received disconnect from 154.125.120.7 port 60531:11: Bye Bye [preauth]
Sep 30 18:40:07 compute-0 sshd-session[347016]: Disconnected from authenticating user root 154.125.120.7 port 60531 [preauth]
Sep 30 18:40:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:40:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:40:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:40:07.344Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:40:07 compute-0 ceph-mon[73755]: pgmap v1848: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:40:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:40:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:40:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:40:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:40:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:40:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:40:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:40:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:40:07.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:40:07.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1849: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:40:08
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'backups', 'default.rgw.control', '.nfs', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'images', 'vms', 'default.rgw.meta']
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:40:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:40:08] "GET /metrics HTTP/1.1" 200 46707 "" "Prometheus/2.51.0"
Sep 30 18:40:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:40:08] "GET /metrics HTTP/1.1" 200 46707 "" "Prometheus/2.51.0"
Sep 30 18:40:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:40:08.887Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:40:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:40:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:40:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:40:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:40:09 compute-0 ceph-mon[73755]: pgmap v1849: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:40:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:40:09.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:40:09.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:09 compute-0 nova_compute[265391]: 2025-09-30 18:40:09.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:09 compute-0 nova_compute[265391]: 2025-09-30 18:40:09.933 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:40:09 compute-0 nova_compute[265391]: 2025-09-30 18:40:09.934 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.12/site-packages/nova/compute/manager.py:11947
Sep 30 18:40:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1850: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:40:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:40:11 compute-0 nova_compute[265391]: 2025-09-30 18:40:11.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:11 compute-0 ceph-mon[73755]: pgmap v1850: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:40:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:40:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:40:11.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:40:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:40:11.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1851: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:40:12 compute-0 podman[347032]: 2025-09-30 18:40:12.52314718 +0000 UTC m=+0.060101199 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible)
Sep 30 18:40:12 compute-0 podman[347033]: 2025-09-30 18:40:12.548736345 +0000 UTC m=+0.082458231 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Sep 30 18:40:12 compute-0 podman[347034]: 2025-09-30 18:40:12.558663809 +0000 UTC m=+0.077461063 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=edpm_ansible, vcs-type=git)
Sep 30 18:40:12 compute-0 nova_compute[265391]: 2025-09-30 18:40:12.934 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:40:13 compute-0 ceph-mon[73755]: pgmap v1851: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:40:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:40:13.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:40:13.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:40:13.866Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:40:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:40:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:40:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:40:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:40:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1852: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 45 KiB/s rd, 0 B/s wr, 75 op/s
Sep 30 18:40:14 compute-0 nova_compute[265391]: 2025-09-30 18:40:14.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:15 compute-0 ceph-mon[73755]: pgmap v1852: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 45 KiB/s rd, 0 B/s wr, 75 op/s
Sep 30 18:40:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:40:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:40:15.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:40:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:40:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:40:15.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:40:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1853: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 45 KiB/s rd, 0 B/s wr, 74 op/s
Sep 30 18:40:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:40:16 compute-0 nova_compute[265391]: 2025-09-30 18:40:16.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:16 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:16.720 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:bc:cf 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c9029a2856de43388bcee1a38d165449', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55e305e6-0f4d-40bc-a70b-ac91f882ec57, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d2e69f29-6b3a-46dc-9ed7-12031e1b7d2b) old=Port_Binding(mac=['fa:16:3e:48:bc:cf'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c9029a2856de43388bcee1a38d165449', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:40:16 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:16.721 166158 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d2e69f29-6b3a-46dc-9ed7-12031e1b7d2b in datapath c8484b9b-b34e-4c32-b987-029d8fcb2a28 updated
Sep 30 18:40:16 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:16.722 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c8484b9b-b34e-4c32-b987-029d8fcb2a28, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:40:16 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:16.723 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[aefa37e0-54a7-46d5-844b-80259c8a796c]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:40:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:40:17.345Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:40:17 compute-0 ceph-mon[73755]: pgmap v1853: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 45 KiB/s rd, 0 B/s wr, 74 op/s
Sep 30 18:40:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:40:17.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:40:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:40:17.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:40:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1854: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 45 KiB/s rd, 0 B/s wr, 74 op/s
Sep 30 18:40:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:40:18] "GET /metrics HTTP/1.1" 200 46707 "" "Prometheus/2.51.0"
Sep 30 18:40:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:40:18] "GET /metrics HTTP/1.1" 200 46707 "" "Prometheus/2.51.0"
Sep 30 18:40:18 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Sep 30 18:40:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:40:18.888Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:40:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:40:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:40:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:40:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:40:19 compute-0 sudo[347098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:40:19 compute-0 sudo[347098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:40:19 compute-0 sudo[347098]: pam_unix(sudo:session): session closed for user root
Sep 30 18:40:19 compute-0 ceph-mon[73755]: pgmap v1854: 353 pgs: 353 active+clean; 41 MiB data, 329 MiB used, 40 GiB / 40 GiB avail; 45 KiB/s rd, 0 B/s wr, 74 op/s
Sep 30 18:40:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:40:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:40:19.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:40:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:40:19.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:19 compute-0 nova_compute[265391]: 2025-09-30 18:40:19.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1855: 353 pgs: 353 active+clean; 41 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 88 KiB/s rd, 0 B/s wr, 146 op/s
Sep 30 18:40:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:40:21 compute-0 nova_compute[265391]: 2025-09-30 18:40:21.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:40:21.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:21 compute-0 ceph-mon[73755]: pgmap v1855: 353 pgs: 353 active+clean; 41 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 88 KiB/s rd, 0 B/s wr, 146 op/s
Sep 30 18:40:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:40:21.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1856: 353 pgs: 353 active+clean; 41 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 88 KiB/s rd, 0 B/s wr, 145 op/s
Sep 30 18:40:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:40:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:40:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:40:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:40:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:40:23.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:40:23 compute-0 ceph-mon[73755]: pgmap v1856: 353 pgs: 353 active+clean; 41 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 88 KiB/s rd, 0 B/s wr, 145 op/s
Sep 30 18:40:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:40:23.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:40:23.867Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:40:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:40:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:40:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:40:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:40:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1857: 353 pgs: 353 active+clean; 41 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 88 KiB/s rd, 0 B/s wr, 146 op/s
Sep 30 18:40:24 compute-0 nova_compute[265391]: 2025-09-30 18:40:24.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:24 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:24.918 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:40:24 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:24.918 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:40:24 compute-0 nova_compute[265391]: 2025-09-30 18:40:24.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:40:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:40:25.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:40:25 compute-0 ceph-mon[73755]: pgmap v1857: 353 pgs: 353 active+clean; 41 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 88 KiB/s rd, 0 B/s wr, 146 op/s
Sep 30 18:40:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:40:25.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1858: 353 pgs: 353 active+clean; 41 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 43 KiB/s rd, 0 B/s wr, 71 op/s
Sep 30 18:40:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:40:26 compute-0 nova_compute[265391]: 2025-09-30 18:40:26.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:26 compute-0 ceph-mon[73755]: pgmap v1858: 353 pgs: 353 active+clean; 41 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 43 KiB/s rd, 0 B/s wr, 71 op/s
Sep 30 18:40:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:26.975 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:1c:b1 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-eb4c3e7d-3f25-4a36-a408-04f4c58c1e3f', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb4c3e7d-3f25-4a36-a408-04f4c58c1e3f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63c45bef63ef4b9f895b3bab865e1a84', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e7a6956e-b7b4-4d5d-bbf4-3b65f9efc0ab, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=6e704049-dbf5-46c8-b5c8-bf30b26c3ece) old=Port_Binding(mac=['fa:16:3e:c5:1c:b1'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-eb4c3e7d-3f25-4a36-a408-04f4c58c1e3f', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb4c3e7d-3f25-4a36-a408-04f4c58c1e3f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63c45bef63ef4b9f895b3bab865e1a84', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:40:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:26.976 166158 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 6e704049-dbf5-46c8-b5c8-bf30b26c3ece in datapath eb4c3e7d-3f25-4a36-a408-04f4c58c1e3f updated
Sep 30 18:40:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:26.977 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eb4c3e7d-3f25-4a36-a408-04f4c58c1e3f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:40:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:26.977 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[3d9d019e-fd7f-4311-beea-666a64d8fe03]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:40:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:40:27.345Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:40:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:40:27.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:40:27.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1859: 353 pgs: 353 active+clean; 41 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 43 KiB/s rd, 0 B/s wr, 71 op/s
Sep 30 18:40:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:40:28] "GET /metrics HTTP/1.1" 200 46709 "" "Prometheus/2.51.0"
Sep 30 18:40:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:40:28] "GET /metrics HTTP/1.1" 200 46709 "" "Prometheus/2.51.0"
Sep 30 18:40:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:40:28.889Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:40:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:40:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:40:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:40:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:40:29 compute-0 ceph-mon[73755]: pgmap v1859: 353 pgs: 353 active+clean; 41 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 43 KiB/s rd, 0 B/s wr, 71 op/s
Sep 30 18:40:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:40:29.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:40:29.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:29 compute-0 podman[276673]: time="2025-09-30T18:40:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:40:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:40:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:40:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:40:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10306 "" "Go-http-client/1.1"
Sep 30 18:40:29 compute-0 nova_compute[265391]: 2025-09-30 18:40:29.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1860: 353 pgs: 353 active+clean; 41 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 43 KiB/s rd, 0 B/s wr, 71 op/s
Sep 30 18:40:30 compute-0 podman[347136]: 2025-09-30 18:40:30.527437539 +0000 UTC m=+0.062648575 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Sep 30 18:40:30 compute-0 podman[347141]: 2025-09-30 18:40:30.528194558 +0000 UTC m=+0.050698639 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Sep 30 18:40:30 compute-0 podman[347137]: 2025-09-30 18:40:30.558631747 +0000 UTC m=+0.088995799 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_id=ovn_controller, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Sep 30 18:40:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:40:31 compute-0 ceph-mon[73755]: pgmap v1860: 353 pgs: 353 active+clean; 41 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 43 KiB/s rd, 0 B/s wr, 71 op/s
Sep 30 18:40:31 compute-0 openstack_network_exporter[279566]: ERROR   18:40:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:40:31 compute-0 openstack_network_exporter[279566]: ERROR   18:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:40:31 compute-0 openstack_network_exporter[279566]: ERROR   18:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:40:31 compute-0 openstack_network_exporter[279566]: ERROR   18:40:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:40:31 compute-0 openstack_network_exporter[279566]: ERROR   18:40:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:40:31 compute-0 nova_compute[265391]: 2025-09-30 18:40:31.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:40:31.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:40:31.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1861: 353 pgs: 353 active+clean; 41 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:40:33 compute-0 ceph-mon[73755]: pgmap v1861: 353 pgs: 353 active+clean; 41 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:40:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:40:33.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:40:33.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:40:33.868Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:40:33 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:33.920 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:40:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:40:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:40:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:40:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:40:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1862: 353 pgs: 353 active+clean; 41 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:40:34 compute-0 nova_compute[265391]: 2025-09-30 18:40:34.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:35 compute-0 ceph-mon[73755]: pgmap v1862: 353 pgs: 353 active+clean; 41 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:40:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:40:35.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:40:35.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1863: 353 pgs: 353 active+clean; 41 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:40:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:40:36 compute-0 nova_compute[265391]: 2025-09-30 18:40:36.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:40:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987975264' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:40:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:40:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987975264' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:40:37 compute-0 ceph-mon[73755]: pgmap v1863: 353 pgs: 353 active+clean; 41 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:40:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1987975264' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:40:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1987975264' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:40:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:40:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:40:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:40:37.346Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:40:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:40:37.346Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:40:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:40:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:40:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:40:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:40:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:40:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:40:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:40:37.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:40:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:40:37.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:40:37 compute-0 ovn_controller[156242]: 2025-09-30T18:40:37Z|00258|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Sep 30 18:40:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1864: 353 pgs: 353 active+clean; 41 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:40:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:40:38 compute-0 nova_compute[265391]: 2025-09-30 18:40:38.570 2 DEBUG oslo_concurrency.lockutils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Acquiring lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:40:38 compute-0 nova_compute[265391]: 2025-09-30 18:40:38.570 2 DEBUG oslo_concurrency.lockutils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:40:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:40:38] "GET /metrics HTTP/1.1" 200 46708 "" "Prometheus/2.51.0"
Sep 30 18:40:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:40:38] "GET /metrics HTTP/1.1" 200 46708 "" "Prometheus/2.51.0"
Sep 30 18:40:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:40:38.890Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:40:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:40:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:40:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:40:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:40:39 compute-0 nova_compute[265391]: 2025-09-30 18:40:39.076 2 DEBUG nova.compute.manager [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Sep 30 18:40:39 compute-0 sudo[347210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:40:39 compute-0 sudo[347210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:40:39 compute-0 sudo[347210]: pam_unix(sudo:session): session closed for user root
Sep 30 18:40:39 compute-0 ceph-mon[73755]: pgmap v1864: 353 pgs: 353 active+clean; 41 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:40:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:40:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:40:39.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:40:39 compute-0 nova_compute[265391]: 2025-09-30 18:40:39.627 2 DEBUG oslo_concurrency.lockutils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:40:39 compute-0 nova_compute[265391]: 2025-09-30 18:40:39.628 2 DEBUG oslo_concurrency.lockutils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:40:39 compute-0 nova_compute[265391]: 2025-09-30 18:40:39.635 2 DEBUG nova.virt.hardware [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Sep 30 18:40:39 compute-0 nova_compute[265391]: 2025-09-30 18:40:39.635 2 INFO nova.compute.claims [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Claim successful on node compute-0.ctlplane.example.com
Sep 30 18:40:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:40:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:40:39.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:40:39 compute-0 nova_compute[265391]: 2025-09-30 18:40:39.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1865: 353 pgs: 353 active+clean; 41 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:40:40 compute-0 nova_compute[265391]: 2025-09-30 18:40:40.719 2 DEBUG oslo_concurrency.processutils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:40:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:40:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:40:41 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2973203013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:40:41 compute-0 nova_compute[265391]: 2025-09-30 18:40:41.157 2 DEBUG oslo_concurrency.processutils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:40:41 compute-0 nova_compute[265391]: 2025-09-30 18:40:41.165 2 DEBUG nova.compute.provider_tree [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:40:41 compute-0 ceph-mon[73755]: pgmap v1865: 353 pgs: 353 active+clean; 41 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:40:41 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2973203013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:40:41 compute-0 nova_compute[265391]: 2025-09-30 18:40:41.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:40:41.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:41 compute-0 nova_compute[265391]: 2025-09-30 18:40:41.675 2 DEBUG nova.scheduler.client.report [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:40:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:40:41.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1866: 353 pgs: 353 active+clean; 41 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:40:42 compute-0 nova_compute[265391]: 2025-09-30 18:40:42.189 2 DEBUG oslo_concurrency.lockutils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.561s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:40:42 compute-0 nova_compute[265391]: 2025-09-30 18:40:42.189 2 DEBUG nova.compute.manager [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Sep 30 18:40:42 compute-0 nova_compute[265391]: 2025-09-30 18:40:42.735 2 DEBUG nova.compute.manager [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Sep 30 18:40:42 compute-0 nova_compute[265391]: 2025-09-30 18:40:42.735 2 DEBUG nova.network.neutron [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Sep 30 18:40:42 compute-0 nova_compute[265391]: 2025-09-30 18:40:42.736 2 WARNING neutronclient.v2_0.client [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:40:42 compute-0 nova_compute[265391]: 2025-09-30 18:40:42.736 2 WARNING neutronclient.v2_0.client [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:40:43 compute-0 nova_compute[265391]: 2025-09-30 18:40:43.245 2 INFO nova.virt.libvirt.driver [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Sep 30 18:40:43 compute-0 ceph-mon[73755]: pgmap v1866: 353 pgs: 353 active+clean; 41 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:40:43 compute-0 podman[347262]: 2025-09-30 18:40:43.524192091 +0000 UTC m=+0.060597492 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Sep 30 18:40:43 compute-0 podman[347261]: 2025-09-30 18:40:43.524773366 +0000 UTC m=+0.065092297 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, container_name=multipathd)
Sep 30 18:40:43 compute-0 podman[347263]: 2025-09-30 18:40:43.542545291 +0000 UTC m=+0.067626922 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, name=ubi9-minimal, vendor=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.tags=minimal rhel9, config_id=edpm)
Sep 30 18:40:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:40:43.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:43 compute-0 nova_compute[265391]: 2025-09-30 18:40:43.698 2 DEBUG nova.network.neutron [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Successfully created port: c8821620-973c-4db8-9c4b-766e7751348e _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Sep 30 18:40:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:40:43.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:43 compute-0 nova_compute[265391]: 2025-09-30 18:40:43.756 2 DEBUG nova.compute.manager [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Sep 30 18:40:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:40:43.870Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:40:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:40:43.870Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:40:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:40:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:40:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:40:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:40:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1867: 353 pgs: 353 active+clean; 41 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:40:44 compute-0 nova_compute[265391]: 2025-09-30 18:40:44.334 2 DEBUG nova.network.neutron [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Successfully updated port: c8821620-973c-4db8-9c4b-766e7751348e _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Sep 30 18:40:44 compute-0 nova_compute[265391]: 2025-09-30 18:40:44.403 2 DEBUG nova.compute.manager [req-920f5727-2dc1-44a1-b4d7-b3f22a5844e1 req-6d23a36c-9d95-4fab-8979-afc5a0413239 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Received event network-changed-c8821620-973c-4db8-9c4b-766e7751348e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:40:44 compute-0 nova_compute[265391]: 2025-09-30 18:40:44.404 2 DEBUG nova.compute.manager [req-920f5727-2dc1-44a1-b4d7-b3f22a5844e1 req-6d23a36c-9d95-4fab-8979-afc5a0413239 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Refreshing instance network info cache due to event network-changed-c8821620-973c-4db8-9c4b-766e7751348e. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:40:44 compute-0 nova_compute[265391]: 2025-09-30 18:40:44.404 2 DEBUG oslo_concurrency.lockutils [req-920f5727-2dc1-44a1-b4d7-b3f22a5844e1 req-6d23a36c-9d95-4fab-8979-afc5a0413239 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-86b9b1e5-516e-43c2-b180-7ef40f7c1c67" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:40:44 compute-0 nova_compute[265391]: 2025-09-30 18:40:44.405 2 DEBUG oslo_concurrency.lockutils [req-920f5727-2dc1-44a1-b4d7-b3f22a5844e1 req-6d23a36c-9d95-4fab-8979-afc5a0413239 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-86b9b1e5-516e-43c2-b180-7ef40f7c1c67" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:40:44 compute-0 nova_compute[265391]: 2025-09-30 18:40:44.405 2 DEBUG nova.network.neutron [req-920f5727-2dc1-44a1-b4d7-b3f22a5844e1 req-6d23a36c-9d95-4fab-8979-afc5a0413239 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Refreshing network info cache for port c8821620-973c-4db8-9c4b-766e7751348e _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:40:44 compute-0 nova_compute[265391]: 2025-09-30 18:40:44.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:40:44 compute-0 nova_compute[265391]: 2025-09-30 18:40:44.781 2 DEBUG nova.compute.manager [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Sep 30 18:40:44 compute-0 nova_compute[265391]: 2025-09-30 18:40:44.783 2 DEBUG nova.virt.libvirt.driver [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Sep 30 18:40:44 compute-0 nova_compute[265391]: 2025-09-30 18:40:44.784 2 INFO nova.virt.libvirt.driver [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Creating image(s)
Sep 30 18:40:44 compute-0 nova_compute[265391]: 2025-09-30 18:40:44.817 2 DEBUG nova.storage.rbd_utils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] rbd image 86b9b1e5-516e-43c2-b180-7ef40f7c1c67_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:40:44 compute-0 nova_compute[265391]: 2025-09-30 18:40:44.844 2 DEBUG nova.storage.rbd_utils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] rbd image 86b9b1e5-516e-43c2-b180-7ef40f7c1c67_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:40:44 compute-0 nova_compute[265391]: 2025-09-30 18:40:44.872 2 DEBUG nova.storage.rbd_utils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] rbd image 86b9b1e5-516e-43c2-b180-7ef40f7c1c67_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:40:44 compute-0 nova_compute[265391]: 2025-09-30 18:40:44.876 2 DEBUG oslo_concurrency.processutils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:40:44 compute-0 nova_compute[265391]: 2025-09-30 18:40:44.886 2 DEBUG oslo_concurrency.lockutils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Acquiring lock "refresh_cache-86b9b1e5-516e-43c2-b180-7ef40f7c1c67" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:40:44 compute-0 nova_compute[265391]: 2025-09-30 18:40:44.914 2 WARNING neutronclient.v2_0.client [req-920f5727-2dc1-44a1-b4d7-b3f22a5844e1 req-6d23a36c-9d95-4fab-8979-afc5a0413239 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:40:44 compute-0 nova_compute[265391]: 2025-09-30 18:40:44.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:44 compute-0 nova_compute[265391]: 2025-09-30 18:40:44.960 2 DEBUG oslo_concurrency.processutils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:40:44 compute-0 nova_compute[265391]: 2025-09-30 18:40:44.960 2 DEBUG oslo_concurrency.lockutils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Acquiring lock "cb2d580238c9b109feae7f1462613dc547671457" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:40:44 compute-0 nova_compute[265391]: 2025-09-30 18:40:44.961 2 DEBUG oslo_concurrency.lockutils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:40:44 compute-0 nova_compute[265391]: 2025-09-30 18:40:44.961 2 DEBUG oslo_concurrency.lockutils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:40:44 compute-0 nova_compute[265391]: 2025-09-30 18:40:44.988 2 DEBUG nova.storage.rbd_utils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] rbd image 86b9b1e5-516e-43c2-b180-7ef40f7c1c67_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:40:44 compute-0 nova_compute[265391]: 2025-09-30 18:40:44.991 2 DEBUG oslo_concurrency.processutils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 86b9b1e5-516e-43c2-b180-7ef40f7c1c67_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:40:45 compute-0 sudo[347415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:40:45 compute-0 sudo[347415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:40:45 compute-0 sudo[347415]: pam_unix(sudo:session): session closed for user root
Sep 30 18:40:45 compute-0 sudo[347442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:40:45 compute-0 sudo[347442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:40:45 compute-0 nova_compute[265391]: 2025-09-30 18:40:45.329 2 DEBUG oslo_concurrency.processutils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 86b9b1e5-516e-43c2-b180-7ef40f7c1c67_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:40:45 compute-0 ceph-mon[73755]: pgmap v1867: 353 pgs: 353 active+clean; 41 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:40:45 compute-0 nova_compute[265391]: 2025-09-30 18:40:45.405 2 DEBUG nova.storage.rbd_utils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] resizing rbd image 86b9b1e5-516e-43c2-b180-7ef40f7c1c67_disk to 1073741824 resize /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:288
Sep 30 18:40:45 compute-0 nova_compute[265391]: 2025-09-30 18:40:45.517 2 DEBUG nova.virt.libvirt.driver [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Sep 30 18:40:45 compute-0 nova_compute[265391]: 2025-09-30 18:40:45.518 2 DEBUG nova.virt.libvirt.driver [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Ensure instance console log exists: /var/lib/nova/instances/86b9b1e5-516e-43c2-b180-7ef40f7c1c67/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Sep 30 18:40:45 compute-0 nova_compute[265391]: 2025-09-30 18:40:45.519 2 DEBUG oslo_concurrency.lockutils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:40:45 compute-0 nova_compute[265391]: 2025-09-30 18:40:45.519 2 DEBUG oslo_concurrency.lockutils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:40:45 compute-0 nova_compute[265391]: 2025-09-30 18:40:45.520 2 DEBUG oslo_concurrency.lockutils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:40:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:40:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:40:45.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:40:45 compute-0 nova_compute[265391]: 2025-09-30 18:40:45.742 2 DEBUG nova.network.neutron [req-920f5727-2dc1-44a1-b4d7-b3f22a5844e1 req-6d23a36c-9d95-4fab-8979-afc5a0413239 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:40:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:40:45.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:45 compute-0 sudo[347442]: pam_unix(sudo:session): session closed for user root
Sep 30 18:40:45 compute-0 nova_compute[265391]: 2025-09-30 18:40:45.888 2 DEBUG nova.network.neutron [req-920f5727-2dc1-44a1-b4d7-b3f22a5844e1 req-6d23a36c-9d95-4fab-8979-afc5a0413239 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:40:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1868: 353 pgs: 353 active+clean; 41 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:40:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:40:46 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:40:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:40:46 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:40:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1869: 353 pgs: 353 active+clean; 41 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 611 B/s rd, 0 op/s
Sep 30 18:40:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:40:46 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:40:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:40:46 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:40:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:40:46 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:40:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:40:46 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:40:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:40:46 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:40:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:40:46 compute-0 sudo[347571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:40:46 compute-0 sudo[347571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:40:46 compute-0 sudo[347571]: pam_unix(sudo:session): session closed for user root
Sep 30 18:40:46 compute-0 sudo[347596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:40:46 compute-0 sudo[347596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:40:46 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:40:46 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:40:46 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:40:46 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:40:46 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:40:46 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:40:46 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:40:46 compute-0 nova_compute[265391]: 2025-09-30 18:40:46.395 2 DEBUG oslo_concurrency.lockutils [req-920f5727-2dc1-44a1-b4d7-b3f22a5844e1 req-6d23a36c-9d95-4fab-8979-afc5a0413239 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-86b9b1e5-516e-43c2-b180-7ef40f7c1c67" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:40:46 compute-0 nova_compute[265391]: 2025-09-30 18:40:46.395 2 DEBUG oslo_concurrency.lockutils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Acquired lock "refresh_cache-86b9b1e5-516e-43c2-b180-7ef40f7c1c67" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:40:46 compute-0 nova_compute[265391]: 2025-09-30 18:40:46.395 2 DEBUG nova.network.neutron [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:40:46 compute-0 nova_compute[265391]: 2025-09-30 18:40:46.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:46 compute-0 podman[347660]: 2025-09-30 18:40:46.606626193 +0000 UTC m=+0.045880526 container create dca8648e95932b6f7b8a0ba3d7b31f5b98030c5950e89dab89beaf73e5bec9a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wescoff, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:40:46 compute-0 systemd[1]: Started libpod-conmon-dca8648e95932b6f7b8a0ba3d7b31f5b98030c5950e89dab89beaf73e5bec9a5.scope.
Sep 30 18:40:46 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:40:46 compute-0 podman[347660]: 2025-09-30 18:40:46.585615556 +0000 UTC m=+0.024869928 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:40:46 compute-0 podman[347660]: 2025-09-30 18:40:46.690960042 +0000 UTC m=+0.130214374 container init dca8648e95932b6f7b8a0ba3d7b31f5b98030c5950e89dab89beaf73e5bec9a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wescoff, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Sep 30 18:40:46 compute-0 podman[347660]: 2025-09-30 18:40:46.697085148 +0000 UTC m=+0.136339480 container start dca8648e95932b6f7b8a0ba3d7b31f5b98030c5950e89dab89beaf73e5bec9a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wescoff, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:40:46 compute-0 podman[347660]: 2025-09-30 18:40:46.701554423 +0000 UTC m=+0.140808865 container attach dca8648e95932b6f7b8a0ba3d7b31f5b98030c5950e89dab89beaf73e5bec9a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wescoff, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid)
Sep 30 18:40:46 compute-0 busy_wescoff[347677]: 167 167
Sep 30 18:40:46 compute-0 systemd[1]: libpod-dca8648e95932b6f7b8a0ba3d7b31f5b98030c5950e89dab89beaf73e5bec9a5.scope: Deactivated successfully.
Sep 30 18:40:46 compute-0 conmon[347677]: conmon dca8648e95932b6f7b8a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dca8648e95932b6f7b8a0ba3d7b31f5b98030c5950e89dab89beaf73e5bec9a5.scope/container/memory.events
Sep 30 18:40:46 compute-0 podman[347660]: 2025-09-30 18:40:46.705787731 +0000 UTC m=+0.145042063 container died dca8648e95932b6f7b8a0ba3d7b31f5b98030c5950e89dab89beaf73e5bec9a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wescoff, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 18:40:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a09bc67165b9edb1d77c5369c2846483be21bd3206dfe6f1752d9fa56a3ae95-merged.mount: Deactivated successfully.
Sep 30 18:40:46 compute-0 podman[347660]: 2025-09-30 18:40:46.763844057 +0000 UTC m=+0.203098399 container remove dca8648e95932b6f7b8a0ba3d7b31f5b98030c5950e89dab89beaf73e5bec9a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:40:46 compute-0 systemd[1]: libpod-conmon-dca8648e95932b6f7b8a0ba3d7b31f5b98030c5950e89dab89beaf73e5bec9a5.scope: Deactivated successfully.
Sep 30 18:40:46 compute-0 podman[347703]: 2025-09-30 18:40:46.934149535 +0000 UTC m=+0.055467970 container create 61a9263262fe54aaedc78d855b7f6dd50d3e36746fb81a25a840e1b27e98e489 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_antonelli, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Sep 30 18:40:46 compute-0 systemd[1]: Started libpod-conmon-61a9263262fe54aaedc78d855b7f6dd50d3e36746fb81a25a840e1b27e98e489.scope.
Sep 30 18:40:46 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:40:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e2dc13eee2d84709929e3875c2025937d4a380b45b4a6639dec41fa11792ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:40:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e2dc13eee2d84709929e3875c2025937d4a380b45b4a6639dec41fa11792ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:40:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e2dc13eee2d84709929e3875c2025937d4a380b45b4a6639dec41fa11792ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:40:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e2dc13eee2d84709929e3875c2025937d4a380b45b4a6639dec41fa11792ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:40:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e2dc13eee2d84709929e3875c2025937d4a380b45b4a6639dec41fa11792ff/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:40:46 compute-0 podman[347703]: 2025-09-30 18:40:46.901451788 +0000 UTC m=+0.022770243 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:40:46 compute-0 podman[347703]: 2025-09-30 18:40:46.997928966 +0000 UTC m=+0.119247431 container init 61a9263262fe54aaedc78d855b7f6dd50d3e36746fb81a25a840e1b27e98e489 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:40:47 compute-0 podman[347703]: 2025-09-30 18:40:47.005688505 +0000 UTC m=+0.127006940 container start 61a9263262fe54aaedc78d855b7f6dd50d3e36746fb81a25a840e1b27e98e489 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_antonelli, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Sep 30 18:40:47 compute-0 podman[347703]: 2025-09-30 18:40:47.00899075 +0000 UTC m=+0.130309185 container attach 61a9263262fe54aaedc78d855b7f6dd50d3e36746fb81a25a840e1b27e98e489 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 18:40:47 compute-0 inspiring_antonelli[347719]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:40:47 compute-0 inspiring_antonelli[347719]: --> All data devices are unavailable
Sep 30 18:40:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:40:47.347Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:40:47 compute-0 systemd[1]: libpod-61a9263262fe54aaedc78d855b7f6dd50d3e36746fb81a25a840e1b27e98e489.scope: Deactivated successfully.
Sep 30 18:40:47 compute-0 podman[347703]: 2025-09-30 18:40:47.363078421 +0000 UTC m=+0.484396876 container died 61a9263262fe54aaedc78d855b7f6dd50d3e36746fb81a25a840e1b27e98e489 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:40:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-23e2dc13eee2d84709929e3875c2025937d4a380b45b4a6639dec41fa11792ff-merged.mount: Deactivated successfully.
Sep 30 18:40:47 compute-0 podman[347703]: 2025-09-30 18:40:47.429289345 +0000 UTC m=+0.550607820 container remove 61a9263262fe54aaedc78d855b7f6dd50d3e36746fb81a25a840e1b27e98e489 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_antonelli, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 18:40:47 compute-0 systemd[1]: libpod-conmon-61a9263262fe54aaedc78d855b7f6dd50d3e36746fb81a25a840e1b27e98e489.scope: Deactivated successfully.
Sep 30 18:40:47 compute-0 ceph-mon[73755]: pgmap v1868: 353 pgs: 353 active+clean; 41 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:40:47 compute-0 ceph-mon[73755]: pgmap v1869: 353 pgs: 353 active+clean; 41 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 611 B/s rd, 0 op/s
Sep 30 18:40:47 compute-0 nova_compute[265391]: 2025-09-30 18:40:47.466 2 DEBUG nova.network.neutron [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:40:47 compute-0 sudo[347596]: pam_unix(sudo:session): session closed for user root
Sep 30 18:40:47 compute-0 sudo[347746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:40:47 compute-0 sudo[347746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:40:47 compute-0 sudo[347746]: pam_unix(sudo:session): session closed for user root
Sep 30 18:40:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:40:47.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:47 compute-0 sudo[347772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:40:47 compute-0 sudo[347772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:40:47 compute-0 nova_compute[265391]: 2025-09-30 18:40:47.659 2 WARNING neutronclient.v2_0.client [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:40:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:40:47.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:47 compute-0 nova_compute[265391]: 2025-09-30 18:40:47.848 2 DEBUG nova.network.neutron [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Updating instance_info_cache with network_info: [{"id": "c8821620-973c-4db8-9c4b-766e7751348e", "address": "fa:16:3e:49:7f:fe", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8821620-97", "ovs_interfaceid": "c8821620-973c-4db8-9c4b-766e7751348e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:40:47 compute-0 podman[347838]: 2025-09-30 18:40:47.993017942 +0000 UTC m=+0.042376346 container create ebb7b9151237aa3d63d743f71410e216888589fc2c727b0cd99e2eb1ad22a9b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_brahmagupta, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Sep 30 18:40:48 compute-0 systemd[1]: Started libpod-conmon-ebb7b9151237aa3d63d743f71410e216888589fc2c727b0cd99e2eb1ad22a9b3.scope.
Sep 30 18:40:48 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:40:48 compute-0 podman[347838]: 2025-09-30 18:40:47.972814735 +0000 UTC m=+0.022173169 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:40:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1870: 353 pgs: 353 active+clean; 41 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 611 B/s rd, 0 op/s
Sep 30 18:40:48 compute-0 podman[347838]: 2025-09-30 18:40:48.076901898 +0000 UTC m=+0.126260312 container init ebb7b9151237aa3d63d743f71410e216888589fc2c727b0cd99e2eb1ad22a9b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Sep 30 18:40:48 compute-0 podman[347838]: 2025-09-30 18:40:48.084233936 +0000 UTC m=+0.133592380 container start ebb7b9151237aa3d63d743f71410e216888589fc2c727b0cd99e2eb1ad22a9b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Sep 30 18:40:48 compute-0 youthful_brahmagupta[347854]: 167 167
Sep 30 18:40:48 compute-0 systemd[1]: libpod-ebb7b9151237aa3d63d743f71410e216888589fc2c727b0cd99e2eb1ad22a9b3.scope: Deactivated successfully.
Sep 30 18:40:48 compute-0 conmon[347854]: conmon ebb7b9151237aa3d63d7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ebb7b9151237aa3d63d743f71410e216888589fc2c727b0cd99e2eb1ad22a9b3.scope/container/memory.events
Sep 30 18:40:48 compute-0 podman[347838]: 2025-09-30 18:40:48.090712602 +0000 UTC m=+0.140071036 container attach ebb7b9151237aa3d63d743f71410e216888589fc2c727b0cd99e2eb1ad22a9b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_brahmagupta, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:40:48 compute-0 podman[347838]: 2025-09-30 18:40:48.091075871 +0000 UTC m=+0.140434285 container died ebb7b9151237aa3d63d743f71410e216888589fc2c727b0cd99e2eb1ad22a9b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_brahmagupta, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default)
Sep 30 18:40:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a4e153384e474a419c8056bbf81919787b9d2dc3158bb9bd2fe7ef53c356a06-merged.mount: Deactivated successfully.
Sep 30 18:40:48 compute-0 podman[347838]: 2025-09-30 18:40:48.130307335 +0000 UTC m=+0.179665749 container remove ebb7b9151237aa3d63d743f71410e216888589fc2c727b0cd99e2eb1ad22a9b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_brahmagupta, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:40:48 compute-0 systemd[1]: libpod-conmon-ebb7b9151237aa3d63d743f71410e216888589fc2c727b0cd99e2eb1ad22a9b3.scope: Deactivated successfully.
Sep 30 18:40:48 compute-0 podman[347877]: 2025-09-30 18:40:48.285485736 +0000 UTC m=+0.039774809 container create 190ec2b92771eeae4b8c8579838669122748c951641c304fbefe07c44b0332fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_haslett, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:40:48 compute-0 systemd[1]: Started libpod-conmon-190ec2b92771eeae4b8c8579838669122748c951641c304fbefe07c44b0332fd.scope.
Sep 30 18:40:48 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:40:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed178a4d9144842adc9f60989f96a28d076eb6848abc65b3e3b437c83f518f9c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:40:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed178a4d9144842adc9f60989f96a28d076eb6848abc65b3e3b437c83f518f9c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:40:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed178a4d9144842adc9f60989f96a28d076eb6848abc65b3e3b437c83f518f9c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:40:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed178a4d9144842adc9f60989f96a28d076eb6848abc65b3e3b437c83f518f9c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:40:48 compute-0 podman[347877]: 2025-09-30 18:40:48.355390985 +0000 UTC m=+0.109680108 container init 190ec2b92771eeae4b8c8579838669122748c951641c304fbefe07c44b0332fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 18:40:48 compute-0 nova_compute[265391]: 2025-09-30 18:40:48.356 2 DEBUG oslo_concurrency.lockutils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Releasing lock "refresh_cache-86b9b1e5-516e-43c2-b180-7ef40f7c1c67" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:40:48 compute-0 nova_compute[265391]: 2025-09-30 18:40:48.358 2 DEBUG nova.compute.manager [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Instance network_info: |[{"id": "c8821620-973c-4db8-9c4b-766e7751348e", "address": "fa:16:3e:49:7f:fe", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8821620-97", "ovs_interfaceid": "c8821620-973c-4db8-9c4b-766e7751348e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Sep 30 18:40:48 compute-0 nova_compute[265391]: 2025-09-30 18:40:48.361 2 DEBUG nova.virt.libvirt.driver [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Start _get_guest_xml network_info=[{"id": "c8821620-973c-4db8-9c4b-766e7751348e", "address": "fa:16:3e:49:7f:fe", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8821620-97", "ovs_interfaceid": "c8821620-973c-4db8-9c4b-766e7751348e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'image_id': '5b99cbca-b655-4be5-8343-cf504005c42e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Sep 30 18:40:48 compute-0 podman[347877]: 2025-09-30 18:40:48.267767793 +0000 UTC m=+0.022056896 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:40:48 compute-0 podman[347877]: 2025-09-30 18:40:48.363947654 +0000 UTC m=+0.118236767 container start 190ec2b92771eeae4b8c8579838669122748c951641c304fbefe07c44b0332fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_haslett, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Sep 30 18:40:48 compute-0 nova_compute[265391]: 2025-09-30 18:40:48.369 2 WARNING nova.virt.libvirt.driver [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:40:48 compute-0 nova_compute[265391]: 2025-09-30 18:40:48.370 2 DEBUG nova.virt.driver [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='5b99cbca-b655-4be5-8343-cf504005c42e', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteWorkloadBalanceStrategy-server-771828615', uuid='86b9b1e5-516e-43c2-b180-7ef40f7c1c67'), owner=OwnerMeta(userid='5717e8cb8548429b948a23763350ab4a', username='tempest-TestExecuteWorkloadBalanceStrategy-134702932-project-admin', projectid='63c45bef63ef4b9f895b3bab865e1a84', projectname='tempest-TestExecuteWorkloadBalanceStrategy-134702932'), image=ImageMeta(id='5b99cbca-b655-4be5-8343-cf504005c42e', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "c8821620-973c-4db8-9c4b-766e7751348e", "address": "fa:16:3e:49:7f:fe", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8821620-97", "ovs_interfaceid": "c8821620-973c-4db8-9c4b-766e7751348e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20250919142712.b99a882.el10', creation_time=1759257648.37063) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Sep 30 18:40:48 compute-0 podman[347877]: 2025-09-30 18:40:48.372846052 +0000 UTC m=+0.127135175 container attach 190ec2b92771eeae4b8c8579838669122748c951641c304fbefe07c44b0332fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Sep 30 18:40:48 compute-0 nova_compute[265391]: 2025-09-30 18:40:48.375 2 DEBUG nova.virt.libvirt.host [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Sep 30 18:40:48 compute-0 nova_compute[265391]: 2025-09-30 18:40:48.377 2 DEBUG nova.virt.libvirt.host [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Sep 30 18:40:48 compute-0 nova_compute[265391]: 2025-09-30 18:40:48.381 2 DEBUG nova.virt.libvirt.host [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Sep 30 18:40:48 compute-0 nova_compute[265391]: 2025-09-30 18:40:48.381 2 DEBUG nova.virt.libvirt.host [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Sep 30 18:40:48 compute-0 nova_compute[265391]: 2025-09-30 18:40:48.382 2 DEBUG nova.virt.libvirt.driver [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Sep 30 18:40:48 compute-0 nova_compute[265391]: 2025-09-30 18:40:48.382 2 DEBUG nova.virt.hardware [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-09-30T18:04:50Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Sep 30 18:40:48 compute-0 nova_compute[265391]: 2025-09-30 18:40:48.382 2 DEBUG nova.virt.hardware [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Sep 30 18:40:48 compute-0 nova_compute[265391]: 2025-09-30 18:40:48.383 2 DEBUG nova.virt.hardware [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Sep 30 18:40:48 compute-0 nova_compute[265391]: 2025-09-30 18:40:48.383 2 DEBUG nova.virt.hardware [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Sep 30 18:40:48 compute-0 nova_compute[265391]: 2025-09-30 18:40:48.383 2 DEBUG nova.virt.hardware [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Sep 30 18:40:48 compute-0 nova_compute[265391]: 2025-09-30 18:40:48.383 2 DEBUG nova.virt.hardware [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Sep 30 18:40:48 compute-0 nova_compute[265391]: 2025-09-30 18:40:48.383 2 DEBUG nova.virt.hardware [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Sep 30 18:40:48 compute-0 nova_compute[265391]: 2025-09-30 18:40:48.384 2 DEBUG nova.virt.hardware [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Sep 30 18:40:48 compute-0 nova_compute[265391]: 2025-09-30 18:40:48.384 2 DEBUG nova.virt.hardware [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Sep 30 18:40:48 compute-0 nova_compute[265391]: 2025-09-30 18:40:48.384 2 DEBUG nova.virt.hardware [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Sep 30 18:40:48 compute-0 nova_compute[265391]: 2025-09-30 18:40:48.384 2 DEBUG nova.virt.hardware [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
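The debug lines above trace nova's topology selection for the m1.nano flavor: with no flavor or image preferences (0:0:0) and per-dimension limits of 65536, the only topology whose product equals 1 vCPU is sockets=1, cores=1, threads=1. A minimal sketch of that enumeration, assuming only the vCPU count and limits visible in the log (editor's illustration, not nova's actual code):

# Editor's illustration (hedged): enumerate CPU topologies whose product
# equals the vCPU count, bounded by the per-dimension limits from the log.
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    topologies = []
    for sockets in range(1, min(max_sockets, vcpus) + 1):
        for cores in range(1, min(max_cores, vcpus) + 1):
            for threads in range(1, min(max_threads, vcpus) + 1):
                if sockets * cores * threads == vcpus:
                    topologies.append((sockets, cores, threads))
    return topologies

print(possible_topologies(1))   # [(1, 1, 1)] -- matches "Possible topologies" above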
Sep 30 18:40:48 compute-0 nova_compute[265391]: 2025-09-30 18:40:48.387 2 DEBUG oslo_concurrency.processutils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:40:48 compute-0 nova_compute[265391]: 2025-09-30 18:40:48.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:40:48 compute-0 hungry_haslett[347893]: {
Sep 30 18:40:48 compute-0 hungry_haslett[347893]:     "0": [
Sep 30 18:40:48 compute-0 hungry_haslett[347893]:         {
Sep 30 18:40:48 compute-0 hungry_haslett[347893]:             "devices": [
Sep 30 18:40:48 compute-0 hungry_haslett[347893]:                 "/dev/loop3"
Sep 30 18:40:48 compute-0 hungry_haslett[347893]:             ],
Sep 30 18:40:48 compute-0 hungry_haslett[347893]:             "lv_name": "ceph_lv0",
Sep 30 18:40:48 compute-0 hungry_haslett[347893]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:40:48 compute-0 hungry_haslett[347893]:             "lv_size": "21470642176",
Sep 30 18:40:48 compute-0 hungry_haslett[347893]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:40:48 compute-0 hungry_haslett[347893]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:40:48 compute-0 hungry_haslett[347893]:             "name": "ceph_lv0",
Sep 30 18:40:48 compute-0 hungry_haslett[347893]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:40:48 compute-0 hungry_haslett[347893]:             "tags": {
Sep 30 18:40:48 compute-0 hungry_haslett[347893]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:40:48 compute-0 hungry_haslett[347893]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:40:48 compute-0 hungry_haslett[347893]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:40:48 compute-0 hungry_haslett[347893]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:40:48 compute-0 hungry_haslett[347893]:                 "ceph.cluster_name": "ceph",
Sep 30 18:40:48 compute-0 hungry_haslett[347893]:                 "ceph.crush_device_class": "",
Sep 30 18:40:48 compute-0 hungry_haslett[347893]:                 "ceph.encrypted": "0",
Sep 30 18:40:48 compute-0 hungry_haslett[347893]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:40:48 compute-0 hungry_haslett[347893]:                 "ceph.osd_id": "0",
Sep 30 18:40:48 compute-0 hungry_haslett[347893]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:40:48 compute-0 hungry_haslett[347893]:                 "ceph.type": "block",
Sep 30 18:40:48 compute-0 hungry_haslett[347893]:                 "ceph.vdo": "0",
Sep 30 18:40:48 compute-0 hungry_haslett[347893]:                 "ceph.with_tpm": "0"
Sep 30 18:40:48 compute-0 hungry_haslett[347893]:             },
Sep 30 18:40:48 compute-0 hungry_haslett[347893]:             "type": "block",
Sep 30 18:40:48 compute-0 hungry_haslett[347893]:             "vg_name": "ceph_vg0"
Sep 30 18:40:48 compute-0 hungry_haslett[347893]:         }
Sep 30 18:40:48 compute-0 hungry_haslett[347893]:     ]
Sep 30 18:40:48 compute-0 hungry_haslett[347893]: }
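The JSON block above is the ceph-volume LVM inventory printed by the transient hungry_haslett container: OSD 0 is backed by logical volume /dev/ceph_vg0/ceph_lv0 on /dev/loop3. A minimal sketch of parsing that structure, assuming the field names shown in the log (editor's illustration):

import json

# Editor's illustration (hedged): extract OSD id, LV path and backing devices
# from the ceph-volume inventory JSON shown above.
def osds_from_lvm_list(raw_json):
    data = json.loads(raw_json)
    rows = []
    for osd_id, lvs in data.items():
        for lv in lvs:
            rows.append((osd_id, lv["lv_path"], lv.get("devices", [])))
    return rows

sample = '{"0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0", "devices": ["/dev/loop3"]}]}'
print(osds_from_lvm_list(sample))   # [('0', '/dev/ceph_vg0/ceph_lv0', ['/dev/loop3'])]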
Sep 30 18:40:48 compute-0 systemd[1]: libpod-190ec2b92771eeae4b8c8579838669122748c951641c304fbefe07c44b0332fd.scope: Deactivated successfully.
Sep 30 18:40:48 compute-0 podman[347877]: 2025-09-30 18:40:48.663544321 +0000 UTC m=+0.417833444 container died 190ec2b92771eeae4b8c8579838669122748c951641c304fbefe07c44b0332fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_haslett, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:40:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed178a4d9144842adc9f60989f96a28d076eb6848abc65b3e3b437c83f518f9c-merged.mount: Deactivated successfully.
Sep 30 18:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:40:48] "GET /metrics HTTP/1.1" 200 46708 "" "Prometheus/2.51.0"
Sep 30 18:40:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:40:48] "GET /metrics HTTP/1.1" 200 46708 "" "Prometheus/2.51.0"
Sep 30 18:40:48 compute-0 podman[347877]: 2025-09-30 18:40:48.803630756 +0000 UTC m=+0.557919869 container remove 190ec2b92771eeae4b8c8579838669122748c951641c304fbefe07c44b0332fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_haslett, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 18:40:48 compute-0 systemd[1]: libpod-conmon-190ec2b92771eeae4b8c8579838669122748c951641c304fbefe07c44b0332fd.scope: Deactivated successfully.
Sep 30 18:40:48 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:40:48 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4244829744' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:40:48 compute-0 nova_compute[265391]: 2025-09-30 18:40:48.851 2 DEBUG oslo_concurrency.processutils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
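The subprocess logged above is nova's monitor discovery before it touches RBD: it ran ceph mon dump as the openstack client and got the reply back in 0.464s. A hedged sketch of the same lookup, assuming the --id and --conf values from the log and the standard mon-dump JSON layout (editor's illustration, not nova.storage.rbd_utils itself):

import json
import subprocess

# Editor's illustration (hedged): run the same "ceph mon dump" query seen in
# the log and pull the monitor addresses out of the JSON reply.
def get_mon_addrs(client_id="openstack", conf="/etc/ceph/ceph.conf"):
    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json", "--id", client_id, "--conf", conf]
    )
    dump = json.loads(out)
    # Each mon entry reports an address such as "192.168.122.100:6789/0";
    # strip the trailing nonce to get host:port.
    return [mon["addr"].split("/")[0] for mon in dump.get("mons", [])]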
Sep 30 18:40:48 compute-0 sudo[347772]: pam_unix(sudo:session): session closed for user root
Sep 30 18:40:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:40:48.891Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:40:48 compute-0 nova_compute[265391]: 2025-09-30 18:40:48.894 2 DEBUG nova.storage.rbd_utils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] rbd image 86b9b1e5-516e-43c2-b180-7ef40f7c1c67_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:40:48 compute-0 nova_compute[265391]: 2025-09-30 18:40:48.901 2 DEBUG oslo_concurrency.processutils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:40:48 compute-0 sudo[347946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:40:48 compute-0 sudo[347946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:40:48 compute-0 sudo[347946]: pam_unix(sudo:session): session closed for user root
Sep 30 18:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:40:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:40:49 compute-0 sudo[347982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:40:49 compute-0 sudo[347982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:40:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:40:49 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/76621830' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:40:49 compute-0 nova_compute[265391]: 2025-09-30 18:40:49.357 2 DEBUG oslo_concurrency.processutils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:40:49 compute-0 nova_compute[265391]: 2025-09-30 18:40:49.359 2 DEBUG nova.virt.libvirt.vif [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:40:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-771828615',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-771828615',id=30,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='63c45bef63ef4b9f895b3bab865e1a84',ramdisk_id='',reservation_id='r-0wzw0y3q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-134702932',owner_user_name='tempest-TestExecuteWorkloadBalanceStrategy-134702932-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:40:43Z,user_data=None,user_id='5717e8cb8548429b948a23763350ab4a',uuid=86b9b1e5-516e-43c2-b180-7ef40f7c1c67,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c8821620-973c-4db8-9c4b-766e7751348e", "address": "fa:16:3e:49:7f:fe", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8821620-97", "ovs_interfaceid": "c8821620-973c-4db8-9c4b-766e7751348e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:40:49 compute-0 nova_compute[265391]: 2025-09-30 18:40:49.359 2 DEBUG nova.network.os_vif_util [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Converting VIF {"id": "c8821620-973c-4db8-9c4b-766e7751348e", "address": "fa:16:3e:49:7f:fe", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8821620-97", "ovs_interfaceid": "c8821620-973c-4db8-9c4b-766e7751348e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:40:49 compute-0 nova_compute[265391]: 2025-09-30 18:40:49.360 2 DEBUG nova.network.os_vif_util [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:7f:fe,bridge_name='br-int',has_traffic_filtering=True,id=c8821620-973c-4db8-9c4b-766e7751348e,network=Network(c8484b9b-b34e-4c32-b987-029d8fcb2a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8821620-97') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:40:49 compute-0 nova_compute[265391]: 2025-09-30 18:40:49.361 2 DEBUG nova.objects.instance [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lazy-loading 'pci_devices' on Instance uuid 86b9b1e5-516e-43c2-b180-7ef40f7c1c67 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:40:49 compute-0 ceph-mon[73755]: pgmap v1870: 353 pgs: 353 active+clean; 41 MiB data, 335 MiB used, 40 GiB / 40 GiB avail; 611 B/s rd, 0 op/s
Sep 30 18:40:49 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/4244829744' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:40:49 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/76621830' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:40:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:49 compute-0 podman[348069]: 2025-09-30 18:40:49.599994946 +0000 UTC m=+0.056924558 container create 4925fd7e85f506fa005159948e86d38efb19a2e4d967aa7880014ab4dbc42b3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_grothendieck, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:40:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.005000128s ======
Sep 30 18:40:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:40:49.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.005000128s
Sep 30 18:40:49 compute-0 systemd[1]: Started libpod-conmon-4925fd7e85f506fa005159948e86d38efb19a2e4d967aa7880014ab4dbc42b3b.scope.
Sep 30 18:40:49 compute-0 podman[348069]: 2025-09-30 18:40:49.570704776 +0000 UTC m=+0.027634468 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:40:49 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:40:49 compute-0 podman[348069]: 2025-09-30 18:40:49.706587174 +0000 UTC m=+0.163516796 container init 4925fd7e85f506fa005159948e86d38efb19a2e4d967aa7880014ab4dbc42b3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_grothendieck, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:40:49 compute-0 podman[348069]: 2025-09-30 18:40:49.714610109 +0000 UTC m=+0.171539751 container start 4925fd7e85f506fa005159948e86d38efb19a2e4d967aa7880014ab4dbc42b3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_grothendieck, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Sep 30 18:40:49 compute-0 cranky_grothendieck[348085]: 167 167
Sep 30 18:40:49 compute-0 systemd[1]: libpod-4925fd7e85f506fa005159948e86d38efb19a2e4d967aa7880014ab4dbc42b3b.scope: Deactivated successfully.
Sep 30 18:40:49 compute-0 podman[348069]: 2025-09-30 18:40:49.743129249 +0000 UTC m=+0.200058891 container attach 4925fd7e85f506fa005159948e86d38efb19a2e4d967aa7880014ab4dbc42b3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_grothendieck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:40:49 compute-0 podman[348069]: 2025-09-30 18:40:49.744203736 +0000 UTC m=+0.201133358 container died 4925fd7e85f506fa005159948e86d38efb19a2e4d967aa7880014ab4dbc42b3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:40:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:40:49.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-64ad04895f7cb9d465d3a477c2ecb51ab76fb14a9aed4c34d512efdb07a4abdc-merged.mount: Deactivated successfully.
Sep 30 18:40:49 compute-0 podman[348069]: 2025-09-30 18:40:49.83189737 +0000 UTC m=+0.288826992 container remove 4925fd7e85f506fa005159948e86d38efb19a2e4d967aa7880014ab4dbc42b3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_grothendieck, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:40:49 compute-0 systemd[1]: libpod-conmon-4925fd7e85f506fa005159948e86d38efb19a2e4d967aa7880014ab4dbc42b3b.scope: Deactivated successfully.
Sep 30 18:40:49 compute-0 nova_compute[265391]: 2025-09-30 18:40:49.884 2 DEBUG nova.virt.libvirt.driver [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] End _get_guest_xml xml=<domain type="kvm">
Sep 30 18:40:49 compute-0 nova_compute[265391]:   <uuid>86b9b1e5-516e-43c2-b180-7ef40f7c1c67</uuid>
Sep 30 18:40:49 compute-0 nova_compute[265391]:   <name>instance-0000001e</name>
Sep 30 18:40:49 compute-0 nova_compute[265391]:   <memory>131072</memory>
Sep 30 18:40:49 compute-0 nova_compute[265391]:   <vcpu>1</vcpu>
Sep 30 18:40:49 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:40:49 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteWorkloadBalanceStrategy-server-771828615</nova:name>
Sep 30 18:40:49 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:40:48</nova:creationTime>
Sep 30 18:40:49 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:40:49 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:40:49 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:40:49 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:40:49 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:40:49 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:40:49 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:40:49 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:40:49 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:40:49 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:40:49 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:40:49 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:40:49 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:40:49 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:40:49 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:40:49 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:40:49 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:40:49 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:40:49 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:40:49 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:40:49 compute-0 nova_compute[265391]:         <nova:user uuid="5717e8cb8548429b948a23763350ab4a">tempest-TestExecuteWorkloadBalanceStrategy-134702932-project-admin</nova:user>
Sep 30 18:40:49 compute-0 nova_compute[265391]:         <nova:project uuid="63c45bef63ef4b9f895b3bab865e1a84">tempest-TestExecuteWorkloadBalanceStrategy-134702932</nova:project>
Sep 30 18:40:49 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:40:49 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:40:49 compute-0 nova_compute[265391]:         <nova:port uuid="c8821620-973c-4db8-9c4b-766e7751348e">
Sep 30 18:40:49 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:40:49 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:40:49 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:40:49 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <system>
Sep 30 18:40:49 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:40:49 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:40:49 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:40:49 compute-0 nova_compute[265391]:       <entry name="serial">86b9b1e5-516e-43c2-b180-7ef40f7c1c67</entry>
Sep 30 18:40:49 compute-0 nova_compute[265391]:       <entry name="uuid">86b9b1e5-516e-43c2-b180-7ef40f7c1c67</entry>
Sep 30 18:40:49 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     </system>
Sep 30 18:40:49 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:40:49 compute-0 nova_compute[265391]:   <os>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="q35">hvm</type>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:   </os>
Sep 30 18:40:49 compute-0 nova_compute[265391]:   <features>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <vmcoreinfo/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:   </features>
Sep 30 18:40:49 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:40:49 compute-0 nova_compute[265391]:   <cpu mode="host-model" match="exact">
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <topology sockets="1" cores="1" threads="1"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:40:49 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:40:49 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/86b9b1e5-516e-43c2-b180-7ef40f7c1c67_disk">
Sep 30 18:40:49 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:       </source>
Sep 30 18:40:49 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:40:49 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:40:49 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:40:49 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/86b9b1e5-516e-43c2-b180-7ef40f7c1c67_disk.config">
Sep 30 18:40:49 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:       </source>
Sep 30 18:40:49 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:40:49 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:40:49 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:40:49 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:49:7f:fe"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:       <target dev="tapc8821620-97"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:40:49 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/86b9b1e5-516e-43c2-b180-7ef40f7c1c67/console.log" append="off"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <video>
Sep 30 18:40:49 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     </video>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:40:49 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <controller type="usb" index="0"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:40:49 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:40:49 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:40:49 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:40:49 compute-0 nova_compute[265391]: </domain>
Sep 30 18:40:49 compute-0 nova_compute[265391]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
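The XML above is the complete guest definition nova handed to libvirt: an RBD-backed vda plus config-drive cdrom pointed at the two monitors on 192.168.122.100/101, an OVS-backed virtio NIC with MTU 1442, and the virtio RNG requested by the flavor extra spec. A minimal sketch of defining and booting such a domain through the libvirt-python bindings (an assumption; nova's driver performs the equivalent steps internally):

import libvirt   # assumption: libvirt-python is available on the compute host

# Editor's illustration (hedged): persist a domain definition and boot it,
# the equivalent of what the driver does with the XML printed above.
def define_and_start(xml):
    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(xml)       # store the persistent definition
        dom.createWithFlags(0)          # start the guest
        return dom.UUIDString()
    finally:
        conn.close()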
Sep 30 18:40:49 compute-0 nova_compute[265391]: 2025-09-30 18:40:49.891 2 DEBUG nova.compute.manager [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Preparing to wait for external event network-vif-plugged-c8821620-973c-4db8-9c4b-766e7751348e prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:40:49 compute-0 nova_compute[265391]: 2025-09-30 18:40:49.892 2 DEBUG oslo_concurrency.lockutils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Acquiring lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:40:49 compute-0 nova_compute[265391]: 2025-09-30 18:40:49.892 2 DEBUG oslo_concurrency.lockutils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:40:49 compute-0 nova_compute[265391]: 2025-09-30 18:40:49.893 2 DEBUG oslo_concurrency.lockutils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:40:49 compute-0 nova_compute[265391]: 2025-09-30 18:40:49.894 2 DEBUG nova.virt.libvirt.vif [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:40:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-771828615',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-771828615',id=30,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='63c45bef63ef4b9f895b3bab865e1a84',ramdisk_id='',reservation_id='r-0wzw0y3q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-134702932',owner_user_name='tempest-TestExecuteWorkloadBalanceStrategy-134702932-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:40:43Z,user_data=None,user_id='5717e8cb8548429b948a23763350ab4a',uuid=86b9b1e5-516e-43c2-b180-7ef40f7c1c67,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c8821620-973c-4db8-9c4b-766e7751348e", "address": "fa:16:3e:49:7f:fe", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8821620-97", "ovs_interfaceid": "c8821620-973c-4db8-9c4b-766e7751348e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Sep 30 18:40:49 compute-0 nova_compute[265391]: 2025-09-30 18:40:49.894 2 DEBUG nova.network.os_vif_util [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Converting VIF {"id": "c8821620-973c-4db8-9c4b-766e7751348e", "address": "fa:16:3e:49:7f:fe", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8821620-97", "ovs_interfaceid": "c8821620-973c-4db8-9c4b-766e7751348e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:40:49 compute-0 nova_compute[265391]: 2025-09-30 18:40:49.895 2 DEBUG nova.network.os_vif_util [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:7f:fe,bridge_name='br-int',has_traffic_filtering=True,id=c8821620-973c-4db8-9c4b-766e7751348e,network=Network(c8484b9b-b34e-4c32-b987-029d8fcb2a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8821620-97') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:40:49 compute-0 nova_compute[265391]: 2025-09-30 18:40:49.896 2 DEBUG os_vif [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:7f:fe,bridge_name='br-int',has_traffic_filtering=True,id=c8821620-973c-4db8-9c4b-766e7751348e,network=Network(c8484b9b-b34e-4c32-b987-029d8fcb2a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8821620-97') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Sep 30 18:40:49 compute-0 nova_compute[265391]: 2025-09-30 18:40:49.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:49 compute-0 nova_compute[265391]: 2025-09-30 18:40:49.897 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:40:49 compute-0 nova_compute[265391]: 2025-09-30 18:40:49.898 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:40:49 compute-0 nova_compute[265391]: 2025-09-30 18:40:49.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:49 compute-0 nova_compute[265391]: 2025-09-30 18:40:49.899 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '924e3df8-7c4c-514f-9ad6-9490097638ec', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:40:49 compute-0 nova_compute[265391]: 2025-09-30 18:40:49.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:49 compute-0 nova_compute[265391]: 2025-09-30 18:40:49.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:40:49 compute-0 nova_compute[265391]: 2025-09-30 18:40:49.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:49 compute-0 nova_compute[265391]: 2025-09-30 18:40:49.908 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc8821620-97, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:40:49 compute-0 nova_compute[265391]: 2025-09-30 18:40:49.909 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tapc8821620-97, col_values=(('qos', UUID('feb95760-ec5d-479c-93a2-35095f2e20fb')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:40:49 compute-0 nova_compute[265391]: 2025-09-30 18:40:49.909 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tapc8821620-97, col_values=(('external_ids', {'iface-id': 'c8821620-973c-4db8-9c4b-766e7751348e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:49:7f:fe', 'vm-uuid': '86b9b1e5-516e-43c2-b180-7ef40f7c1c67'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:40:49 compute-0 nova_compute[265391]: 2025-09-30 18:40:49.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:49 compute-0 NetworkManager[45059]: <info>  [1759257649.9122] manager: (tapc8821620-97): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/99)
Sep 30 18:40:49 compute-0 nova_compute[265391]: 2025-09-30 18:40:49.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:40:49 compute-0 nova_compute[265391]: 2025-09-30 18:40:49.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:49 compute-0 nova_compute[265391]: 2025-09-30 18:40:49.919 2 INFO os_vif [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:7f:fe,bridge_name='br-int',has_traffic_filtering=True,id=c8821620-973c-4db8-9c4b-766e7751348e,network=Network(c8484b9b-b34e-4c32-b987-029d8fcb2a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8821620-97')
Sep 30 18:40:50 compute-0 podman[348114]: 2025-09-30 18:40:50.04720554 +0000 UTC m=+0.062825108 container create 1a06455eeeca95afbdf83ace10cdf13732aa77281feb46799a574db972de7d66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:40:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1871: 353 pgs: 353 active+clean; 88 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Sep 30 18:40:50 compute-0 podman[348114]: 2025-09-30 18:40:50.011026955 +0000 UTC m=+0.026646593 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:40:50 compute-0 systemd[1]: Started libpod-conmon-1a06455eeeca95afbdf83ace10cdf13732aa77281feb46799a574db972de7d66.scope.
Sep 30 18:40:50 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:40:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75eae0d4bf87c06fc8aa015dad784e84b23756d15b7ff1c867c2fbb056a64f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:40:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75eae0d4bf87c06fc8aa015dad784e84b23756d15b7ff1c867c2fbb056a64f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:40:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75eae0d4bf87c06fc8aa015dad784e84b23756d15b7ff1c867c2fbb056a64f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:40:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75eae0d4bf87c06fc8aa015dad784e84b23756d15b7ff1c867c2fbb056a64f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:40:50 compute-0 podman[348114]: 2025-09-30 18:40:50.191718349 +0000 UTC m=+0.207337997 container init 1a06455eeeca95afbdf83ace10cdf13732aa77281feb46799a574db972de7d66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Sep 30 18:40:50 compute-0 podman[348114]: 2025-09-30 18:40:50.202906505 +0000 UTC m=+0.218526083 container start 1a06455eeeca95afbdf83ace10cdf13732aa77281feb46799a574db972de7d66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_curran, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:40:50 compute-0 podman[348114]: 2025-09-30 18:40:50.207903733 +0000 UTC m=+0.223523381 container attach 1a06455eeeca95afbdf83ace10cdf13732aa77281feb46799a574db972de7d66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Sep 30 18:40:50 compute-0 lvm[348205]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:40:50 compute-0 lvm[348205]: VG ceph_vg0 finished
Sep 30 18:40:50 compute-0 magical_curran[348131]: {}
Sep 30 18:40:50 compute-0 systemd[1]: libpod-1a06455eeeca95afbdf83ace10cdf13732aa77281feb46799a574db972de7d66.scope: Deactivated successfully.
Sep 30 18:40:50 compute-0 systemd[1]: libpod-1a06455eeeca95afbdf83ace10cdf13732aa77281feb46799a574db972de7d66.scope: Consumed 1.159s CPU time.
Sep 30 18:40:50 compute-0 podman[348114]: 2025-09-30 18:40:50.944860301 +0000 UTC m=+0.960479879 container died 1a06455eeeca95afbdf83ace10cdf13732aa77281feb46799a574db972de7d66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_curran, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Sep 30 18:40:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-b75eae0d4bf87c06fc8aa015dad784e84b23756d15b7ff1c867c2fbb056a64f1-merged.mount: Deactivated successfully.
Sep 30 18:40:50 compute-0 podman[348114]: 2025-09-30 18:40:50.993583888 +0000 UTC m=+1.009203456 container remove 1a06455eeeca95afbdf83ace10cdf13732aa77281feb46799a574db972de7d66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:40:51 compute-0 systemd[1]: libpod-conmon-1a06455eeeca95afbdf83ace10cdf13732aa77281feb46799a574db972de7d66.scope: Deactivated successfully.
Sep 30 18:40:51 compute-0 sudo[347982]: pam_unix(sudo:session): session closed for user root
Sep 30 18:40:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:40:51 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:40:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:40:51 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:40:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:40:51 compute-0 sudo[348220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:40:51 compute-0 sudo[348220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:40:51 compute-0 sudo[348220]: pam_unix(sudo:session): session closed for user root
Sep 30 18:40:51 compute-0 nova_compute[265391]: 2025-09-30 18:40:51.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:40:51 compute-0 nova_compute[265391]: 2025-09-30 18:40:51.482 2 DEBUG nova.virt.libvirt.driver [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:40:51 compute-0 nova_compute[265391]: 2025-09-30 18:40:51.483 2 DEBUG nova.virt.libvirt.driver [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:40:51 compute-0 nova_compute[265391]: 2025-09-30 18:40:51.483 2 DEBUG nova.virt.libvirt.driver [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] No VIF found with MAC fa:16:3e:49:7f:fe, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Sep 30 18:40:51 compute-0 nova_compute[265391]: 2025-09-30 18:40:51.484 2 INFO nova.virt.libvirt.driver [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Using config drive
Sep 30 18:40:51 compute-0 ceph-mon[73755]: pgmap v1871: 353 pgs: 353 active+clean; 88 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Sep 30 18:40:51 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:40:51 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:40:51 compute-0 nova_compute[265391]: 2025-09-30 18:40:51.507 2 DEBUG nova.storage.rbd_utils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] rbd image 86b9b1e5-516e-43c2-b180-7ef40f7c1c67_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:40:51 compute-0 nova_compute[265391]: 2025-09-30 18:40:51.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:40:51.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:40:51.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:52 compute-0 nova_compute[265391]: 2025-09-30 18:40:52.021 2 WARNING neutronclient.v2_0.client [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:40:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1872: 353 pgs: 353 active+clean; 88 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Sep 30 18:40:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:40:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:40:52 compute-0 nova_compute[265391]: 2025-09-30 18:40:52.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:40:52 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2830130152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:40:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:40:52 compute-0 nova_compute[265391]: 2025-09-30 18:40:52.785 2 INFO nova.virt.libvirt.driver [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Creating config drive at /var/lib/nova/instances/86b9b1e5-516e-43c2-b180-7ef40f7c1c67/disk.config
Sep 30 18:40:52 compute-0 nova_compute[265391]: 2025-09-30 18:40:52.790 2 DEBUG oslo_concurrency.processutils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/86b9b1e5-516e-43c2-b180-7ef40f7c1c67/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmp28ex63ot execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:40:52 compute-0 nova_compute[265391]: 2025-09-30 18:40:52.917 2 DEBUG oslo_concurrency.processutils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/86b9b1e5-516e-43c2-b180-7ef40f7c1c67/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmp28ex63ot" returned: 0 in 0.126s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:40:52 compute-0 nova_compute[265391]: 2025-09-30 18:40:52.939 2 DEBUG nova.storage.rbd_utils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] rbd image 86b9b1e5-516e-43c2-b180-7ef40f7c1c67_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:40:52 compute-0 nova_compute[265391]: 2025-09-30 18:40:52.941 2 DEBUG oslo_concurrency.processutils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/86b9b1e5-516e-43c2-b180-7ef40f7c1c67/disk.config 86b9b1e5-516e-43c2-b180-7ef40f7c1c67_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:40:52 compute-0 nova_compute[265391]: 2025-09-30 18:40:52.950 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:40:52 compute-0 nova_compute[265391]: 2025-09-30 18:40:52.951 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:40:52 compute-0 nova_compute[265391]: 2025-09-30 18:40:52.952 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:40:52 compute-0 nova_compute[265391]: 2025-09-30 18:40:52.952 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:40:52 compute-0 nova_compute[265391]: 2025-09-30 18:40:52.952 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:40:53 compute-0 nova_compute[265391]: 2025-09-30 18:40:53.121 2 DEBUG oslo_concurrency.processutils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/86b9b1e5-516e-43c2-b180-7ef40f7c1c67/disk.config 86b9b1e5-516e-43c2-b180-7ef40f7c1c67_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:40:53 compute-0 nova_compute[265391]: 2025-09-30 18:40:53.123 2 INFO nova.virt.libvirt.driver [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Deleting local config drive /var/lib/nova/instances/86b9b1e5-516e-43c2-b180-7ef40f7c1c67/disk.config because it was imported into RBD.
Sep 30 18:40:53 compute-0 systemd[1]: Starting libvirt secret daemon...
Sep 30 18:40:53 compute-0 systemd[1]: Started libvirt secret daemon.
Sep 30 18:40:53 compute-0 kernel: tapc8821620-97: entered promiscuous mode
Sep 30 18:40:53 compute-0 NetworkManager[45059]: <info>  [1759257653.2128] manager: (tapc8821620-97): new Tun device (/org/freedesktop/NetworkManager/Devices/100)
Sep 30 18:40:53 compute-0 systemd-udevd[348208]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:40:53 compute-0 ovn_controller[156242]: 2025-09-30T18:40:53Z|00259|binding|INFO|Claiming lport c8821620-973c-4db8-9c4b-766e7751348e for this chassis.
Sep 30 18:40:53 compute-0 ovn_controller[156242]: 2025-09-30T18:40:53Z|00260|binding|INFO|c8821620-973c-4db8-9c4b-766e7751348e: Claiming fa:16:3e:49:7f:fe 10.100.0.8
Sep 30 18:40:53 compute-0 nova_compute[265391]: 2025-09-30 18:40:53.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:53 compute-0 nova_compute[265391]: 2025-09-30 18:40:53.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:53 compute-0 NetworkManager[45059]: <info>  [1759257653.2294] device (tapc8821620-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 18:40:53 compute-0 NetworkManager[45059]: <info>  [1759257653.2309] device (tapc8821620-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:53.241 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:7f:fe 10.100.0.8'], port_security=['fa:16:3e:49:7f:fe 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '86b9b1e5-516e-43c2-b180-7ef40f7c1c67', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63c45bef63ef4b9f895b3bab865e1a84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a9025550-4c18-4f21-a560-5b6f52684803', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55e305e6-0f4d-40bc-a70b-ac91f882ec57, chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=c8821620-973c-4db8-9c4b-766e7751348e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:53.242 166158 INFO neutron.agent.ovn.metadata.agent [-] Port c8821620-973c-4db8-9c4b-766e7751348e in datapath c8484b9b-b34e-4c32-b987-029d8fcb2a28 bound to our chassis
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:53.243 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c8484b9b-b34e-4c32-b987-029d8fcb2a28
Sep 30 18:40:53 compute-0 systemd-machined[219917]: New machine qemu-22-instance-0000001e.
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:53.256 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[561f4eee-936b-4d2c-a405-8d7af6031ffe]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:53.257 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc8484b9b-b1 in ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28 namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:53.258 292173 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc8484b9b-b0 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:53.258 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[2b2cb8b9-9369-44c4-bae3-478e7161a807]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:53.258 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[015acc74-c4ca-4a1a-826b-4da8775ae374]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:53.277 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[e35c943a-5652-4c51-a1c3-d9005bde6e65]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:40:53 compute-0 systemd[1]: Started Virtual Machine qemu-22-instance-0000001e.
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:53.295 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[85ef6200-b4dc-478e-bff9-f333ffd0d8ec]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:40:53 compute-0 ovn_controller[156242]: 2025-09-30T18:40:53Z|00261|binding|INFO|Setting lport c8821620-973c-4db8-9c4b-766e7751348e ovn-installed in OVS
Sep 30 18:40:53 compute-0 ovn_controller[156242]: 2025-09-30T18:40:53Z|00262|binding|INFO|Setting lport c8821620-973c-4db8-9c4b-766e7751348e up in Southbound
Sep 30 18:40:53 compute-0 nova_compute[265391]: 2025-09-30 18:40:53.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:53.333 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[6dd7cc9e-da45-4746-814b-0df2f6a06be4]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:53.344 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[dfa5baf0-a468-4f13-a245-ddb3df54b9af]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:40:53 compute-0 NetworkManager[45059]: <info>  [1759257653.3452] manager: (tapc8484b9b-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/101)
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:53.377 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[876a9c0e-b418-4222-b149-1a2234b2ccaa]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:53.381 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[dba21072-9c5d-4216-9cc3-68ef2fc62269]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:40:53 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:40:53 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3691236522' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:40:53 compute-0 NetworkManager[45059]: <info>  [1759257653.4015] device (tapc8484b9b-b0): carrier: link connected
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:53.407 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[f3634035-3aa8-4224-9659-edb749e0ac29]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:40:53 compute-0 nova_compute[265391]: 2025-09-30 18:40:53.420 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:53.428 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[f3855cb0-cd77-435d-96b7-53e20fdde7ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc8484b9b-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:48:bc:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 625209, 'reachable_time': 34390, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348388, 'error': None, 'target': 'ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:53.443 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[b4b0b70a-884f-4536-86d2-77760ec88f23]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe48:bccf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 625209, 'tstamp': 625209}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348389, 'error': None, 'target': 'ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:53.457 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[ff9974ea-9085-487b-a1c6-dc1365f3605f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc8484b9b-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:48:bc:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 625209, 'reachable_time': 34390, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 348390, 'error': None, 'target': 'ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:53.488 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[da4d0ba9-0c37-43e9-bb3a-42e1c4059905]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:40:53 compute-0 ceph-mon[73755]: pgmap v1872: 353 pgs: 353 active+clean; 88 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Sep 30 18:40:53 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3691236522' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:53.544 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[9ff53a13-ae11-499a-9146-cc14d2bd8adb]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:53.546 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8484b9b-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:53.546 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:53.546 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc8484b9b-b0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:40:53 compute-0 kernel: tapc8484b9b-b0: entered promiscuous mode
Sep 30 18:40:53 compute-0 NetworkManager[45059]: <info>  [1759257653.5481] manager: (tapc8484b9b-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/102)
Sep 30 18:40:53 compute-0 nova_compute[265391]: 2025-09-30 18:40:53.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:53.550 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc8484b9b-b0, col_values=(('external_ids', {'iface-id': 'd2e69f29-6b3a-46dc-9ed7-12031e1b7d2b'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:40:53 compute-0 ovn_controller[156242]: 2025-09-30T18:40:53Z|00263|binding|INFO|Releasing lport d2e69f29-6b3a-46dc-9ed7-12031e1b7d2b from this chassis (sb_readonly=0)
Sep 30 18:40:53 compute-0 nova_compute[265391]: 2025-09-30 18:40:53.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:53 compute-0 nova_compute[265391]: 2025-09-30 18:40:53.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:53.552 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[11b9efb8-456c-4b1e-9278-88ec61b5155e]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:53.553 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c8484b9b-b34e-4c32-b987-029d8fcb2a28.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c8484b9b-b34e-4c32-b987-029d8fcb2a28.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:53.553 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c8484b9b-b34e-4c32-b987-029d8fcb2a28.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c8484b9b-b34e-4c32-b987-029d8fcb2a28.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:53.553 166158 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for c8484b9b-b34e-4c32-b987-029d8fcb2a28 disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:53.553 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c8484b9b-b34e-4c32-b987-029d8fcb2a28.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c8484b9b-b34e-4c32-b987-029d8fcb2a28.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:53.553 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[e34da4e5-809c-460a-912b-baddf88d00a6]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:53.556 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c8484b9b-b34e-4c32-b987-029d8fcb2a28.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c8484b9b-b34e-4c32-b987-029d8fcb2a28.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:53.556 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[c0f52292-a450-4846-9ab0-195de8e10499]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:53.557 166158 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: global
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]:     log         /dev/log local0 debug
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]:     log-tag     haproxy-metadata-proxy-c8484b9b-b34e-4c32-b987-029d8fcb2a28
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]:     user        root
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]:     group       root
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]:     maxconn     1024
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]:     pidfile     /var/lib/neutron/external/pids/c8484b9b-b34e-4c32-b987-029d8fcb2a28.pid.haproxy
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]:     daemon
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: defaults
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]:     log global
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]:     mode http
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]:     option httplog
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]:     option dontlognull
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]:     option http-server-close
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]:     option forwardfor
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]:     retries                 3
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]:     timeout http-request    30s
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]:     timeout connect         30s
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]:     timeout client          32s
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]:     timeout server          32s
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]:     timeout http-keep-alive 30s
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: listen listener
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]:     bind 169.254.169.254:80
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]:     
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]:     server metadata /var/lib/neutron/metadata_proxy
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]:     http-request add-header X-OVN-Network-ID c8484b9b-b34e-4c32-b987-029d8fcb2a28
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Sep 30 18:40:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:53.557 166158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'env', 'PROCESS_TAG=haproxy-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c8484b9b-b34e-4c32-b987-029d8fcb2a28.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Sep 30 18:40:53 compute-0 nova_compute[265391]: 2025-09-30 18:40:53.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:53 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Sep 30 18:40:53 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:40:53.592502) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 18:40:53 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Sep 30 18:40:53 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257653592531, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 2104, "num_deletes": 251, "total_data_size": 4057492, "memory_usage": 4127216, "flush_reason": "Manual Compaction"}
Sep 30 18:40:53 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Sep 30 18:40:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:40:53.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:53 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257653607828, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 3879275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45672, "largest_seqno": 47775, "table_properties": {"data_size": 3869881, "index_size": 5888, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19774, "raw_average_key_size": 20, "raw_value_size": 3850981, "raw_average_value_size": 3970, "num_data_blocks": 257, "num_entries": 970, "num_filter_entries": 970, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759257448, "oldest_key_time": 1759257448, "file_creation_time": 1759257653, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:40:53 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 15363 microseconds, and 6330 cpu microseconds.
Sep 30 18:40:53 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:40:53 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:40:53.607864) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 3879275 bytes OK
Sep 30 18:40:53 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:40:53.607880) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Sep 30 18:40:53 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:40:53.610802) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Sep 30 18:40:53 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:40:53.610815) EVENT_LOG_v1 {"time_micros": 1759257653610811, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 18:40:53 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:40:53.610831) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 18:40:53 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 4048777, prev total WAL file size 4048777, number of live WAL files 2.
Sep 30 18:40:53 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:40:53 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:40:53.611572) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Sep 30 18:40:53 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 18:40:53 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(3788KB)], [107(9722KB)]
Sep 30 18:40:53 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257653611597, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 13834912, "oldest_snapshot_seqno": -1}
Sep 30 18:40:53 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 7073 keys, 11858731 bytes, temperature: kUnknown
Sep 30 18:40:53 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257653675236, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 11858731, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11815169, "index_size": 24762, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17733, "raw_key_size": 186956, "raw_average_key_size": 26, "raw_value_size": 11691992, "raw_average_value_size": 1653, "num_data_blocks": 964, "num_entries": 7073, "num_filter_entries": 7073, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759257653, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:40:53 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:40:53 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:40:53.675690) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 11858731 bytes
Sep 30 18:40:53 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:40:53.677944) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 216.3 rd, 185.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 9.5 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(6.6) write-amplify(3.1) OK, records in: 7591, records dropped: 518 output_compression: NoCompression
Sep 30 18:40:53 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:40:53.677959) EVENT_LOG_v1 {"time_micros": 1759257653677952, "job": 64, "event": "compaction_finished", "compaction_time_micros": 63955, "compaction_time_cpu_micros": 22621, "output_level": 6, "num_output_files": 1, "total_output_size": 11858731, "num_input_records": 7591, "num_output_records": 7073, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 18:40:53 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:40:53 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257653678538, "job": 64, "event": "table_file_deletion", "file_number": 109}
Sep 30 18:40:53 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:40:53 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257653679878, "job": 64, "event": "table_file_deletion", "file_number": 107}
Sep 30 18:40:53 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:40:53.611538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:40:53 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:40:53.679906) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:40:53 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:40:53.679910) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:40:53 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:40:53.679912) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:40:53 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:40:53.679913) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:40:53 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:40:53.679915) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:40:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:40:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:40:53.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:40:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:40:53.872Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:40:53 compute-0 nova_compute[265391]: 2025-09-30 18:40:53.874 2 DEBUG nova.compute.manager [req-8d3ffa7d-e1c3-4c38-8812-412afff210e4 req-84ba7149-1515-4b39-bcd5-76e42d0f1509 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Received event network-vif-plugged-c8821620-973c-4db8-9c4b-766e7751348e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:40:53 compute-0 nova_compute[265391]: 2025-09-30 18:40:53.875 2 DEBUG oslo_concurrency.lockutils [req-8d3ffa7d-e1c3-4c38-8812-412afff210e4 req-84ba7149-1515-4b39-bcd5-76e42d0f1509 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:40:53 compute-0 nova_compute[265391]: 2025-09-30 18:40:53.875 2 DEBUG oslo_concurrency.lockutils [req-8d3ffa7d-e1c3-4c38-8812-412afff210e4 req-84ba7149-1515-4b39-bcd5-76e42d0f1509 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:40:53 compute-0 nova_compute[265391]: 2025-09-30 18:40:53.875 2 DEBUG oslo_concurrency.lockutils [req-8d3ffa7d-e1c3-4c38-8812-412afff210e4 req-84ba7149-1515-4b39-bcd5-76e42d0f1509 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:40:53 compute-0 nova_compute[265391]: 2025-09-30 18:40:53.876 2 DEBUG nova.compute.manager [req-8d3ffa7d-e1c3-4c38-8812-412afff210e4 req-84ba7149-1515-4b39-bcd5-76e42d0f1509 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Processing event network-vif-plugged-c8821620-973c-4db8-9c4b-766e7751348e _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:40:53 compute-0 podman[348467]: 2025-09-30 18:40:53.951666769 +0000 UTC m=+0.054536657 container create 17f1091bc96cede5883832b8d7a2d2204533dc9b7560ae14ded9db3860efb748 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:40:53 compute-0 systemd[1]: Started libpod-conmon-17f1091bc96cede5883832b8d7a2d2204533dc9b7560ae14ded9db3860efb748.scope.
Sep 30 18:40:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:40:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:40:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:40:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:40:54 compute-0 podman[348467]: 2025-09-30 18:40:53.919807604 +0000 UTC m=+0.022677512 image pull 8925e336c8a9c8e5b40d98e4715ad60992d6f21ad0a398170a686c34f922c024 38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Sep 30 18:40:54 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:40:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fa65a589576a4f57ed7e46a73547b2b1956059a47bd2f65f6fe653543e64992/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Sep 30 18:40:54 compute-0 podman[348467]: 2025-09-30 18:40:54.040509432 +0000 UTC m=+0.143379330 container init 17f1091bc96cede5883832b8d7a2d2204533dc9b7560ae14ded9db3860efb748 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Sep 30 18:40:54 compute-0 podman[348467]: 2025-09-30 18:40:54.047416219 +0000 UTC m=+0.150286107 container start 17f1091bc96cede5883832b8d7a2d2204533dc9b7560ae14ded9db3860efb748 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Sep 30 18:40:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1873: 353 pgs: 353 active+clean; 88 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 29 KiB/s rd, 2.1 MiB/s wr, 44 op/s
Sep 30 18:40:54 compute-0 neutron-haproxy-ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28[348482]: [NOTICE]   (348486) : New worker (348488) forked
Sep 30 18:40:54 compute-0 neutron-haproxy-ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28[348482]: [NOTICE]   (348486) : Loading success.
Sep 30 18:40:54 compute-0 nova_compute[265391]: 2025-09-30 18:40:54.319 2 DEBUG nova.compute.manager [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:40:54 compute-0 nova_compute[265391]: 2025-09-30 18:40:54.323 2 DEBUG nova.virt.libvirt.driver [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Sep 30 18:40:54 compute-0 nova_compute[265391]: 2025-09-30 18:40:54.326 2 INFO nova.virt.libvirt.driver [-] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Instance spawned successfully.
Sep 30 18:40:54 compute-0 nova_compute[265391]: 2025-09-30 18:40:54.326 2 DEBUG nova.virt.libvirt.driver [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Sep 30 18:40:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:54.332 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:40:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:54.332 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:40:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:40:54.333 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:40:54 compute-0 nova_compute[265391]: 2025-09-30 18:40:54.458 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:40:54 compute-0 nova_compute[265391]: 2025-09-30 18:40:54.459 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:40:54 compute-0 nova_compute[265391]: 2025-09-30 18:40:54.600 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:40:54 compute-0 nova_compute[265391]: 2025-09-30 18:40:54.602 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:40:54 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/466932660' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:40:54 compute-0 nova_compute[265391]: 2025-09-30 18:40:54.623 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.021s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:40:54 compute-0 nova_compute[265391]: 2025-09-30 18:40:54.624 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4259MB free_disk=39.971431732177734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:40:54 compute-0 nova_compute[265391]: 2025-09-30 18:40:54.625 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:40:54 compute-0 nova_compute[265391]: 2025-09-30 18:40:54.625 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:40:54 compute-0 nova_compute[265391]: 2025-09-30 18:40:54.836 2 DEBUG nova.virt.libvirt.driver [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:40:54 compute-0 nova_compute[265391]: 2025-09-30 18:40:54.837 2 DEBUG nova.virt.libvirt.driver [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:40:54 compute-0 nova_compute[265391]: 2025-09-30 18:40:54.837 2 DEBUG nova.virt.libvirt.driver [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:40:54 compute-0 nova_compute[265391]: 2025-09-30 18:40:54.837 2 DEBUG nova.virt.libvirt.driver [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:40:54 compute-0 nova_compute[265391]: 2025-09-30 18:40:54.838 2 DEBUG nova.virt.libvirt.driver [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:40:54 compute-0 nova_compute[265391]: 2025-09-30 18:40:54.838 2 DEBUG nova.virt.libvirt.driver [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:40:54 compute-0 nova_compute[265391]: 2025-09-30 18:40:54.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:55 compute-0 nova_compute[265391]: 2025-09-30 18:40:55.349 2 INFO nova.compute.manager [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Took 10.57 seconds to spawn the instance on the hypervisor.
Sep 30 18:40:55 compute-0 nova_compute[265391]: 2025-09-30 18:40:55.350 2 DEBUG nova.compute.manager [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Sep 30 18:40:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:40:55.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:55 compute-0 ceph-mon[73755]: pgmap v1873: 353 pgs: 353 active+clean; 88 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 29 KiB/s rd, 2.1 MiB/s wr, 44 op/s
Sep 30 18:40:55 compute-0 nova_compute[265391]: 2025-09-30 18:40:55.726 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Instance 86b9b1e5-516e-43c2-b180-7ef40f7c1c67 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Sep 30 18:40:55 compute-0 nova_compute[265391]: 2025-09-30 18:40:55.726 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:40:55 compute-0 nova_compute[265391]: 2025-09-30 18:40:55.727 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=39GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:40:54 up  1:44,  0 user,  load average: 0.70, 0.80, 0.84\n', 'num_instances': '1', 'num_vm_building': '1', 'num_task_spawning': '1', 'num_os_type_None': '1', 'num_proj_63c45bef63ef4b9f895b3bab865e1a84': '1', 'io_workload': '1'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:40:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:40:55.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:55 compute-0 nova_compute[265391]: 2025-09-30 18:40:55.835 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:40:55 compute-0 nova_compute[265391]: 2025-09-30 18:40:55.892 2 INFO nova.compute.manager [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Took 16.30 seconds to build instance.
Sep 30 18:40:56 compute-0 nova_compute[265391]: 2025-09-30 18:40:56.038 2 DEBUG nova.compute.manager [req-70a5e445-c440-4eea-b3d0-7dffe109f96e req-367bb68c-36c3-45bb-9a5a-ab5acf9e9721 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Received event network-vif-plugged-c8821620-973c-4db8-9c4b-766e7751348e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:40:56 compute-0 nova_compute[265391]: 2025-09-30 18:40:56.039 2 DEBUG oslo_concurrency.lockutils [req-70a5e445-c440-4eea-b3d0-7dffe109f96e req-367bb68c-36c3-45bb-9a5a-ab5acf9e9721 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:40:56 compute-0 nova_compute[265391]: 2025-09-30 18:40:56.039 2 DEBUG oslo_concurrency.lockutils [req-70a5e445-c440-4eea-b3d0-7dffe109f96e req-367bb68c-36c3-45bb-9a5a-ab5acf9e9721 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:40:56 compute-0 nova_compute[265391]: 2025-09-30 18:40:56.040 2 DEBUG oslo_concurrency.lockutils [req-70a5e445-c440-4eea-b3d0-7dffe109f96e req-367bb68c-36c3-45bb-9a5a-ab5acf9e9721 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:40:56 compute-0 nova_compute[265391]: 2025-09-30 18:40:56.040 2 DEBUG nova.compute.manager [req-70a5e445-c440-4eea-b3d0-7dffe109f96e req-367bb68c-36c3-45bb-9a5a-ab5acf9e9721 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] No waiting events found dispatching network-vif-plugged-c8821620-973c-4db8-9c4b-766e7751348e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:40:56 compute-0 nova_compute[265391]: 2025-09-30 18:40:56.040 2 WARNING nova.compute.manager [req-70a5e445-c440-4eea-b3d0-7dffe109f96e req-367bb68c-36c3-45bb-9a5a-ab5acf9e9721 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Received unexpected event network-vif-plugged-c8821620-973c-4db8-9c4b-766e7751348e for instance with vm_state active and task_state None.
Sep 30 18:40:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1874: 353 pgs: 353 active+clean; 88 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 29 KiB/s rd, 2.1 MiB/s wr, 44 op/s
Sep 30 18:40:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:40:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:40:56 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4222444443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:40:56 compute-0 nova_compute[265391]: 2025-09-30 18:40:56.280 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:40:56 compute-0 nova_compute[265391]: 2025-09-30 18:40:56.285 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:40:56 compute-0 nova_compute[265391]: 2025-09-30 18:40:56.402 2 DEBUG oslo_concurrency.lockutils [None req-c25d0419-ea5d-4c6d-a992-8727203c7d88 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.831s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:40:56 compute-0 nova_compute[265391]: 2025-09-30 18:40:56.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:40:56 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/4222444443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:40:56 compute-0 nova_compute[265391]: 2025-09-30 18:40:56.793 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:40:57 compute-0 nova_compute[265391]: 2025-09-30 18:40:57.305 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:40:57 compute-0 nova_compute[265391]: 2025-09-30 18:40:57.305 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.680s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:40:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:40:57.347Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:40:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:40:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/590762389' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:40:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:40:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/590762389' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:40:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:40:57.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:57 compute-0 ceph-mon[73755]: pgmap v1874: 353 pgs: 353 active+clean; 88 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 29 KiB/s rd, 2.1 MiB/s wr, 44 op/s
Sep 30 18:40:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/590762389' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:40:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/590762389' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:40:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:40:57.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1875: 353 pgs: 353 active+clean; 88 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Sep 30 18:40:58 compute-0 ceph-mon[73755]: pgmap v1875: 353 pgs: 353 active+clean; 88 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Sep 30 18:40:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:40:58] "GET /metrics HTTP/1.1" 200 46725 "" "Prometheus/2.51.0"
Sep 30 18:40:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:40:58] "GET /metrics HTTP/1.1" 200 46725 "" "Prometheus/2.51.0"
Sep 30 18:40:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:40:58.892Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:40:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:40:58.893Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:40:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:40:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:40:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:40:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:40:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:40:59 compute-0 nova_compute[265391]: 2025-09-30 18:40:59.305 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:40:59 compute-0 nova_compute[265391]: 2025-09-30 18:40:59.306 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:40:59 compute-0 nova_compute[265391]: 2025-09-30 18:40:59.306 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:40:59 compute-0 nova_compute[265391]: 2025-09-30 18:40:59.306 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:40:59 compute-0 sudo[348526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:40:59 compute-0 sudo[348526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:40:59 compute-0 sudo[348526]: pam_unix(sudo:session): session closed for user root
Sep 30 18:40:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:40:59.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:59 compute-0 podman[276673]: time="2025-09-30T18:40:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:40:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:40:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:40:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:40:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:40:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10770 "" "Go-http-client/1.1"
Sep 30 18:40:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:40:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:40:59.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:40:59 compute-0 nova_compute[265391]: 2025-09-30 18:40:59.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1876: 353 pgs: 353 active+clean; 88 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 18:41:01 compute-0 ceph-mon[73755]: pgmap v1876: 353 pgs: 353 active+clean; 88 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 18:41:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:41:01 compute-0 openstack_network_exporter[279566]: ERROR   18:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:41:01 compute-0 openstack_network_exporter[279566]: ERROR   18:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:41:01 compute-0 openstack_network_exporter[279566]: ERROR   18:41:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:41:01 compute-0 openstack_network_exporter[279566]: ERROR   18:41:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:41:01 compute-0 openstack_network_exporter[279566]: ERROR   18:41:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:41:01 compute-0 nova_compute[265391]: 2025-09-30 18:41:01.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:41:01 compute-0 nova_compute[265391]: 2025-09-30 18:41:01.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:01 compute-0 nova_compute[265391]: 2025-09-30 18:41:01.508 2 DEBUG oslo_concurrency.lockutils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Acquiring lock "61db6fe2-de1c-4e4b-899f-4e40c50227b7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:41:01 compute-0 nova_compute[265391]: 2025-09-30 18:41:01.508 2 DEBUG oslo_concurrency.lockutils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "61db6fe2-de1c-4e4b-899f-4e40c50227b7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:41:01 compute-0 podman[348555]: 2025-09-30 18:41:01.53888997 +0000 UTC m=+0.071201313 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Sep 30 18:41:01 compute-0 podman[348554]: 2025-09-30 18:41:01.560575225 +0000 UTC m=+0.096502520 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, config_id=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_controller, io.buildah.version=1.41.4)
Sep 30 18:41:01 compute-0 podman[348553]: 2025-09-30 18:41:01.561387316 +0000 UTC m=+0.087809818 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Sep 30 18:41:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:41:01.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:41:01.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:02 compute-0 nova_compute[265391]: 2025-09-30 18:41:02.016 2 DEBUG nova.compute.manager [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Sep 30 18:41:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1877: 353 pgs: 353 active+clean; 88 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:41:02 compute-0 nova_compute[265391]: 2025-09-30 18:41:02.573 2 DEBUG oslo_concurrency.lockutils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:41:02 compute-0 nova_compute[265391]: 2025-09-30 18:41:02.573 2 DEBUG oslo_concurrency.lockutils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:41:02 compute-0 nova_compute[265391]: 2025-09-30 18:41:02.579 2 DEBUG nova.virt.hardware [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Sep 30 18:41:02 compute-0 nova_compute[265391]: 2025-09-30 18:41:02.580 2 INFO nova.compute.claims [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Claim successful on node compute-0.ctlplane.example.com
Sep 30 18:41:03 compute-0 ceph-mon[73755]: pgmap v1877: 353 pgs: 353 active+clean; 88 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:41:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:41:03.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:03 compute-0 nova_compute[265391]: 2025-09-30 18:41:03.645 2 DEBUG oslo_concurrency.processutils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:41:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:41:03.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:41:03.873Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:41:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:41:03.874Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:41:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:41:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:41:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:41:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:41:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:41:04 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1166076486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:41:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1878: 353 pgs: 353 active+clean; 88 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:41:04 compute-0 nova_compute[265391]: 2025-09-30 18:41:04.097 2 DEBUG oslo_concurrency.processutils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:41:04 compute-0 nova_compute[265391]: 2025-09-30 18:41:04.102 2 DEBUG nova.compute.provider_tree [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:41:04 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1166076486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:41:04 compute-0 nova_compute[265391]: 2025-09-30 18:41:04.610 2 DEBUG nova.scheduler.client.report [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
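The inventory payload above is what fixes this host's schedulable capacity. A quick check of the arithmetic, using the usual placement relation capacity = (total - reserved) * allocation_ratio (a sketch against the logged values, not the service's own code):

    # Back-of-envelope check of the schedulable capacity implied by the
    # inventory reported above. Values copied from the log line.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 39,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        # capacity = (total - reserved) * allocation_ratio, truncated to an int
        capacity = int((inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
        print(f"{rc}: {capacity} schedulable")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 34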
Sep 30 18:41:04 compute-0 nova_compute[265391]: 2025-09-30 18:41:04.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:05 compute-0 nova_compute[265391]: 2025-09-30 18:41:05.160 2 DEBUG oslo_concurrency.lockutils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.587s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:41:05 compute-0 nova_compute[265391]: 2025-09-30 18:41:05.161 2 DEBUG nova.compute.manager [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Sep 30 18:41:05 compute-0 ceph-mon[73755]: pgmap v1878: 353 pgs: 353 active+clean; 88 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:41:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:41:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:41:05.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:41:05 compute-0 nova_compute[265391]: 2025-09-30 18:41:05.673 2 DEBUG nova.compute.manager [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Sep 30 18:41:05 compute-0 nova_compute[265391]: 2025-09-30 18:41:05.674 2 DEBUG nova.network.neutron [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Sep 30 18:41:05 compute-0 nova_compute[265391]: 2025-09-30 18:41:05.674 2 WARNING neutronclient.v2_0.client [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:41:05 compute-0 nova_compute[265391]: 2025-09-30 18:41:05.675 2 WARNING neutronclient.v2_0.client [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:41:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:41:05.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1879: 353 pgs: 353 active+clean; 88 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 64 op/s
Sep 30 18:41:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:41:06 compute-0 nova_compute[265391]: 2025-09-30 18:41:06.187 2 INFO nova.virt.libvirt.driver [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Sep 30 18:41:06 compute-0 nova_compute[265391]: 2025-09-30 18:41:06.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:06 compute-0 nova_compute[265391]: 2025-09-30 18:41:06.696 2 DEBUG nova.compute.manager [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Sep 30 18:41:07 compute-0 ovn_controller[156242]: 2025-09-30T18:41:07Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:49:7f:fe 10.100.0.8
Sep 30 18:41:07 compute-0 ovn_controller[156242]: 2025-09-30T18:41:07Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:49:7f:fe 10.100.0.8
Sep 30 18:41:07 compute-0 ceph-mon[73755]: pgmap v1879: 353 pgs: 353 active+clean; 88 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 64 op/s
Sep 30 18:41:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:41:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:41:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:41:07.349Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:41:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:41:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:41:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:41:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:41:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:41:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:41:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:41:07.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:07 compute-0 nova_compute[265391]: 2025-09-30 18:41:07.719 2 DEBUG nova.compute.manager [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Sep 30 18:41:07 compute-0 nova_compute[265391]: 2025-09-30 18:41:07.720 2 DEBUG nova.virt.libvirt.driver [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Sep 30 18:41:07 compute-0 nova_compute[265391]: 2025-09-30 18:41:07.721 2 INFO nova.virt.libvirt.driver [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Creating image(s)
Sep 30 18:41:07 compute-0 nova_compute[265391]: 2025-09-30 18:41:07.744 2 DEBUG nova.storage.rbd_utils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] rbd image 61db6fe2-de1c-4e4b-899f-4e40c50227b7_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:41:07 compute-0 nova_compute[265391]: 2025-09-30 18:41:07.772 2 DEBUG nova.storage.rbd_utils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] rbd image 61db6fe2-de1c-4e4b-899f-4e40c50227b7_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:41:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:41:07.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:07 compute-0 nova_compute[265391]: 2025-09-30 18:41:07.795 2 DEBUG nova.storage.rbd_utils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] rbd image 61db6fe2-de1c-4e4b-899f-4e40c50227b7_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:41:07 compute-0 nova_compute[265391]: 2025-09-30 18:41:07.798 2 DEBUG oslo_concurrency.processutils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:41:07 compute-0 nova_compute[265391]: 2025-09-30 18:41:07.807 2 DEBUG nova.network.neutron [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Successfully created port: 25ca9495-1e4e-46a1-826a-4fcb2880a7ad _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Sep 30 18:41:07 compute-0 nova_compute[265391]: 2025-09-30 18:41:07.851 2 DEBUG oslo_concurrency.processutils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:41:07 compute-0 nova_compute[265391]: 2025-09-30 18:41:07.851 2 DEBUG oslo_concurrency.lockutils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Acquiring lock "cb2d580238c9b109feae7f1462613dc547671457" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:41:07 compute-0 nova_compute[265391]: 2025-09-30 18:41:07.852 2 DEBUG oslo_concurrency.lockutils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:41:07 compute-0 nova_compute[265391]: 2025-09-30 18:41:07.852 2 DEBUG oslo_concurrency.lockutils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:41:07 compute-0 nova_compute[265391]: 2025-09-30 18:41:07.874 2 DEBUG nova.storage.rbd_utils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] rbd image 61db6fe2-de1c-4e4b-899f-4e40c50227b7_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:41:07 compute-0 nova_compute[265391]: 2025-09-30 18:41:07.877 2 DEBUG oslo_concurrency.processutils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 61db6fe2-de1c-4e4b-899f-4e40c50227b7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1880: 353 pgs: 353 active+clean; 88 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 64 op/s
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005229063904082829 of space, bias 1.0, pg target 0.10458127808165658 quantized to 32 (current 32)
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
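The per-pool pg targets in the autoscaler pass above follow directly from the logged capacity ratios: each target is capacity_ratio * bias * a per-root factor of roughly 200 in this cluster (the 200 is inferred from these numbers, presumably mon_target_pg_per_osd times the OSD count, and is an assumption rather than something the log states). A minimal check:

    # Sanity-check of the pg_autoscaler numbers logged above.
    # ROOT_PG_TARGET = 200 is inferred from the ratios in this log, not read
    # from the cluster; treat it as an assumption.
    ROOT_PG_TARGET = 200

    pools = [
        # (name, capacity_ratio, bias, logged_pg_target)
        (".mgr",               1.0778624975581169e-05, 1.0, 0.0021557249951162337),
        ("vms",                0.0005229063904082829,  1.0, 0.10458127808165658),
        ("images",             0.000998787452383278,   1.0, 0.1997574904766556),
        ("cephfs.cephfs.meta", 7.630884938464543e-07,  4.0, 0.0006104707950771635),
    ]

    for name, ratio, bias, logged in pools:
        computed = ratio * bias * ROOT_PG_TARGET
        assert abs(computed - logged) < 1e-6, name
        print(f"{name}: {computed:.6g} (matches log)")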
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:41:08
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'images', '.rgw.root', 'vms', '.nfs', 'default.rgw.control', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta']
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:41:08 compute-0 nova_compute[265391]: 2025-09-30 18:41:08.156 2 DEBUG oslo_concurrency.processutils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 61db6fe2-de1c-4e4b-899f-4e40c50227b7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.279s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:41:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:41:08 compute-0 nova_compute[265391]: 2025-09-30 18:41:08.232 2 DEBUG nova.storage.rbd_utils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] resizing rbd image 61db6fe2-de1c-4e4b-899f-4e40c50227b7_disk to 1073741824 resize /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:288
Sep 30 18:41:08 compute-0 nova_compute[265391]: 2025-09-30 18:41:08.349 2 DEBUG nova.virt.libvirt.driver [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
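The "Created local disks" step above reduces to three external operations that appear verbatim in the log: a prlimit-wrapped qemu-img info on the cached base image, an rbd import of that image into the vms pool, and a resize to the flavor's 1 GiB root disk (1073741824 bytes). A sketch that replays the same sequence with subprocess (paths and image names copied from the log; the rbd resize CLI call stands in for Nova's librbd resize and is my substitution):

    import subprocess

    BASE = "/var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457"
    IMAGE = "61db6fe2-de1c-4e4b-899f-4e40c50227b7_disk"  # instance disk name from the log

    # 1. Inspect the cached base image (same prlimit-wrapped call as the log).
    subprocess.run(
        ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
         "--as=1073741824", "--cpu=30", "--",
         "env", "LC_ALL=C", "LANG=C",
         "qemu-img", "info", BASE, "--force-share", "--output=json"],
        check=True)

    # 2. Import it into the 'vms' pool as the instance disk.
    subprocess.run(
        ["rbd", "import", "--pool", "vms", BASE, IMAGE,
         "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True)

    # 3. Grow it to the flavor's 1 GiB root disk. Nova does this through librbd;
    #    the CLI equivalent here is an assumption.
    subprocess.run(
        ["rbd", "resize", "--pool", "vms", IMAGE, "--size", "1G",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True)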
Sep 30 18:41:08 compute-0 nova_compute[265391]: 2025-09-30 18:41:08.350 2 DEBUG nova.virt.libvirt.driver [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Ensure instance console log exists: /var/lib/nova/instances/61db6fe2-de1c-4e4b-899f-4e40c50227b7/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Sep 30 18:41:08 compute-0 nova_compute[265391]: 2025-09-30 18:41:08.350 2 DEBUG oslo_concurrency.lockutils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:41:08 compute-0 nova_compute[265391]: 2025-09-30 18:41:08.350 2 DEBUG oslo_concurrency.lockutils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:41:08 compute-0 nova_compute[265391]: 2025-09-30 18:41:08.351 2 DEBUG oslo_concurrency.lockutils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:41:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:41:08] "GET /metrics HTTP/1.1" 200 46720 "" "Prometheus/2.51.0"
Sep 30 18:41:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:41:08] "GET /metrics HTTP/1.1" 200 46720 "" "Prometheus/2.51.0"
Sep 30 18:41:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:41:08.894Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:41:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:41:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:41:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:41:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:41:09 compute-0 ceph-mon[73755]: pgmap v1880: 353 pgs: 353 active+clean; 88 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 64 op/s
Sep 30 18:41:09 compute-0 ceph-mgr[74051]: [devicehealth INFO root] Check health
Sep 30 18:41:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:41:09.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:41:09.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:09 compute-0 nova_compute[265391]: 2025-09-30 18:41:09.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1881: 353 pgs: 353 active+clean; 167 MiB data, 381 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 145 op/s
Sep 30 18:41:10 compute-0 nova_compute[265391]: 2025-09-30 18:41:10.689 2 DEBUG nova.network.neutron [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Successfully updated port: 25ca9495-1e4e-46a1-826a-4fcb2880a7ad _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Sep 30 18:41:10 compute-0 nova_compute[265391]: 2025-09-30 18:41:10.749 2 DEBUG nova.compute.manager [req-ba66707b-d493-4b82-97a5-239851dfc7ab req-aecf2ad4-d7aa-4bd9-b72b-555a4c60070c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Received event network-changed-25ca9495-1e4e-46a1-826a-4fcb2880a7ad external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:41:10 compute-0 nova_compute[265391]: 2025-09-30 18:41:10.749 2 DEBUG nova.compute.manager [req-ba66707b-d493-4b82-97a5-239851dfc7ab req-aecf2ad4-d7aa-4bd9-b72b-555a4c60070c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Refreshing instance network info cache due to event network-changed-25ca9495-1e4e-46a1-826a-4fcb2880a7ad. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:41:10 compute-0 nova_compute[265391]: 2025-09-30 18:41:10.750 2 DEBUG oslo_concurrency.lockutils [req-ba66707b-d493-4b82-97a5-239851dfc7ab req-aecf2ad4-d7aa-4bd9-b72b-555a4c60070c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-61db6fe2-de1c-4e4b-899f-4e40c50227b7" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:41:10 compute-0 nova_compute[265391]: 2025-09-30 18:41:10.750 2 DEBUG oslo_concurrency.lockutils [req-ba66707b-d493-4b82-97a5-239851dfc7ab req-aecf2ad4-d7aa-4bd9-b72b-555a4c60070c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-61db6fe2-de1c-4e4b-899f-4e40c50227b7" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:41:10 compute-0 nova_compute[265391]: 2025-09-30 18:41:10.750 2 DEBUG nova.network.neutron [req-ba66707b-d493-4b82-97a5-239851dfc7ab req-aecf2ad4-d7aa-4bd9-b72b-555a4c60070c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Refreshing network info cache for port 25ca9495-1e4e-46a1-826a-4fcb2880a7ad _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:41:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:41:11 compute-0 nova_compute[265391]: 2025-09-30 18:41:11.207 2 DEBUG oslo_concurrency.lockutils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Acquiring lock "refresh_cache-61db6fe2-de1c-4e4b-899f-4e40c50227b7" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:41:11 compute-0 nova_compute[265391]: 2025-09-30 18:41:11.261 2 WARNING neutronclient.v2_0.client [req-ba66707b-d493-4b82-97a5-239851dfc7ab req-aecf2ad4-d7aa-4bd9-b72b-555a4c60070c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:41:11 compute-0 ceph-mon[73755]: pgmap v1881: 353 pgs: 353 active+clean; 167 MiB data, 381 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 145 op/s
Sep 30 18:41:11 compute-0 nova_compute[265391]: 2025-09-30 18:41:11.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:11 compute-0 nova_compute[265391]: 2025-09-30 18:41:11.544 2 DEBUG nova.network.neutron [req-ba66707b-d493-4b82-97a5-239851dfc7ab req-aecf2ad4-d7aa-4bd9-b72b-555a4c60070c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:41:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:41:11.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:11 compute-0 nova_compute[265391]: 2025-09-30 18:41:11.692 2 DEBUG nova.network.neutron [req-ba66707b-d493-4b82-97a5-239851dfc7ab req-aecf2ad4-d7aa-4bd9-b72b-555a4c60070c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:41:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:41:11.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1882: 353 pgs: 353 active+clean; 167 MiB data, 381 MiB used, 40 GiB / 40 GiB avail; 335 KiB/s rd, 3.9 MiB/s wr, 80 op/s
Sep 30 18:41:12 compute-0 nova_compute[265391]: 2025-09-30 18:41:12.198 2 DEBUG oslo_concurrency.lockutils [req-ba66707b-d493-4b82-97a5-239851dfc7ab req-aecf2ad4-d7aa-4bd9-b72b-555a4c60070c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-61db6fe2-de1c-4e4b-899f-4e40c50227b7" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:41:12 compute-0 nova_compute[265391]: 2025-09-30 18:41:12.199 2 DEBUG oslo_concurrency.lockutils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Acquired lock "refresh_cache-61db6fe2-de1c-4e4b-899f-4e40c50227b7" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:41:12 compute-0 nova_compute[265391]: 2025-09-30 18:41:12.199 2 DEBUG nova.network.neutron [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:41:13 compute-0 nova_compute[265391]: 2025-09-30 18:41:13.270 2 DEBUG nova.network.neutron [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:41:13 compute-0 ceph-mon[73755]: pgmap v1882: 353 pgs: 353 active+clean; 167 MiB data, 381 MiB used, 40 GiB / 40 GiB avail; 335 KiB/s rd, 3.9 MiB/s wr, 80 op/s
Sep 30 18:41:13 compute-0 nova_compute[265391]: 2025-09-30 18:41:13.492 2 WARNING neutronclient.v2_0.client [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:41:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:41:13.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:13 compute-0 nova_compute[265391]: 2025-09-30 18:41:13.681 2 DEBUG nova.network.neutron [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Updating instance_info_cache with network_info: [{"id": "25ca9495-1e4e-46a1-826a-4fcb2880a7ad", "address": "fa:16:3e:b7:d3:fa", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ca9495-1e", "ovs_interfaceid": "25ca9495-1e4e-46a1-826a-4fcb2880a7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
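The network_info entry cached above is what the rest of the spawn path reads back; the handful of fields used downstream (port id, MAC, fixed IP, MTU, tap devname) can be pulled out directly. A sketch against the payload as logged, trimmed to just those fields:

    # Minimal extraction from the network_info entry logged above
    # (structure copied from the log; only the fields referenced below are kept).
    vif = {
        "id": "25ca9495-1e4e-46a1-826a-4fcb2880a7ad",
        "address": "fa:16:3e:b7:d3:fa",
        "devname": "tap25ca9495-1e",
        "network": {
            "subnets": [{
                "cidr": "10.100.0.0/28",
                "ips": [{"address": "10.100.0.13", "type": "fixed"}],
                "gateway": {"address": "10.100.0.1"},
            }],
            "meta": {"mtu": 1442},
        },
    }

    fixed_ip = vif["network"]["subnets"][0]["ips"][0]["address"]
    print(vif["id"], vif["address"], fixed_ip, vif["network"]["meta"]["mtu"], vif["devname"])
    # 25ca9495-1e4e-46a1-826a-4fcb2880a7ad fa:16:3e:b7:d3:fa 10.100.0.13 1442 tap25ca9495-1e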
Sep 30 18:41:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:41:13.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:41:13.874Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:41:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:41:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:41:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:41:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:41:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1883: 353 pgs: 353 active+clean; 167 MiB data, 403 MiB used, 40 GiB / 40 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Sep 30 18:41:14 compute-0 nova_compute[265391]: 2025-09-30 18:41:14.187 2 DEBUG oslo_concurrency.lockutils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Releasing lock "refresh_cache-61db6fe2-de1c-4e4b-899f-4e40c50227b7" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:41:14 compute-0 nova_compute[265391]: 2025-09-30 18:41:14.187 2 DEBUG nova.compute.manager [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Instance network_info: |[{"id": "25ca9495-1e4e-46a1-826a-4fcb2880a7ad", "address": "fa:16:3e:b7:d3:fa", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ca9495-1e", "ovs_interfaceid": "25ca9495-1e4e-46a1-826a-4fcb2880a7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Sep 30 18:41:14 compute-0 nova_compute[265391]: 2025-09-30 18:41:14.191 2 DEBUG nova.virt.libvirt.driver [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Start _get_guest_xml network_info=[{"id": "25ca9495-1e4e-46a1-826a-4fcb2880a7ad", "address": "fa:16:3e:b7:d3:fa", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ca9495-1e", "ovs_interfaceid": "25ca9495-1e4e-46a1-826a-4fcb2880a7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'image_id': '5b99cbca-b655-4be5-8343-cf504005c42e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Sep 30 18:41:14 compute-0 nova_compute[265391]: 2025-09-30 18:41:14.196 2 WARNING nova.virt.libvirt.driver [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:41:14 compute-0 nova_compute[265391]: 2025-09-30 18:41:14.198 2 DEBUG nova.virt.driver [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='5b99cbca-b655-4be5-8343-cf504005c42e', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteWorkloadBalanceStrategy-server-1220912296', uuid='61db6fe2-de1c-4e4b-899f-4e40c50227b7'), owner=OwnerMeta(userid='5717e8cb8548429b948a23763350ab4a', username='tempest-TestExecuteWorkloadBalanceStrategy-134702932-project-admin', projectid='63c45bef63ef4b9f895b3bab865e1a84', projectname='tempest-TestExecuteWorkloadBalanceStrategy-134702932'), image=ImageMeta(id='5b99cbca-b655-4be5-8343-cf504005c42e', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "25ca9495-1e4e-46a1-826a-4fcb2880a7ad", "address": "fa:16:3e:b7:d3:fa", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ca9495-1e", "ovs_interfaceid": "25ca9495-1e4e-46a1-826a-4fcb2880a7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20250919142712.b99a882.el10', creation_time=1759257674.1979337) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Sep 30 18:41:14 compute-0 nova_compute[265391]: 2025-09-30 18:41:14.204 2 DEBUG nova.virt.libvirt.host [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Sep 30 18:41:14 compute-0 nova_compute[265391]: 2025-09-30 18:41:14.205 2 DEBUG nova.virt.libvirt.host [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Sep 30 18:41:14 compute-0 nova_compute[265391]: 2025-09-30 18:41:14.208 2 DEBUG nova.virt.libvirt.host [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Sep 30 18:41:14 compute-0 nova_compute[265391]: 2025-09-30 18:41:14.208 2 DEBUG nova.virt.libvirt.host [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Sep 30 18:41:14 compute-0 nova_compute[265391]: 2025-09-30 18:41:14.209 2 DEBUG nova.virt.libvirt.driver [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Sep 30 18:41:14 compute-0 nova_compute[265391]: 2025-09-30 18:41:14.210 2 DEBUG nova.virt.hardware [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-09-30T18:04:50Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Sep 30 18:41:14 compute-0 nova_compute[265391]: 2025-09-30 18:41:14.210 2 DEBUG nova.virt.hardware [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Sep 30 18:41:14 compute-0 nova_compute[265391]: 2025-09-30 18:41:14.211 2 DEBUG nova.virt.hardware [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Sep 30 18:41:14 compute-0 nova_compute[265391]: 2025-09-30 18:41:14.211 2 DEBUG nova.virt.hardware [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Sep 30 18:41:14 compute-0 nova_compute[265391]: 2025-09-30 18:41:14.212 2 DEBUG nova.virt.hardware [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Sep 30 18:41:14 compute-0 nova_compute[265391]: 2025-09-30 18:41:14.212 2 DEBUG nova.virt.hardware [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Sep 30 18:41:14 compute-0 nova_compute[265391]: 2025-09-30 18:41:14.212 2 DEBUG nova.virt.hardware [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Sep 30 18:41:14 compute-0 nova_compute[265391]: 2025-09-30 18:41:14.213 2 DEBUG nova.virt.hardware [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Sep 30 18:41:14 compute-0 nova_compute[265391]: 2025-09-30 18:41:14.213 2 DEBUG nova.virt.hardware [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Sep 30 18:41:14 compute-0 nova_compute[265391]: 2025-09-30 18:41:14.213 2 DEBUG nova.virt.hardware [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Sep 30 18:41:14 compute-0 nova_compute[265391]: 2025-09-30 18:41:14.214 2 DEBUG nova.virt.hardware [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
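The topology lines above enumerate every sockets:cores:threads factorisation of the flavor's single vCPU under the default 65536 limits, which is why exactly one candidate, 1:1:1, comes back. A small re-statement of that enumeration (my own sketch of the logic, not nova.virt.hardware's code):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        """Every sockets*cores*threads factorisation that exactly covers vcpus."""
        found = []
        for s in range(1, min(max_sockets, vcpus) + 1):
            if vcpus % s:
                continue
            for c in range(1, min(max_cores, vcpus // s) + 1):
                if (vcpus // s) % c:
                    continue
                t = vcpus // (s * c)
                if t <= max_threads:
                    found.append((s, c, t))
        return found

    print(possible_topologies(1))  # [(1, 1, 1)] -- matches "Got 1 possible topologies"
    print(possible_topologies(4))  # (1,1,4), (1,2,2), (1,4,1), (2,1,2), (2,2,1), (4,1,1)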
Sep 30 18:41:14 compute-0 nova_compute[265391]: 2025-09-30 18:41:14.219 2 DEBUG oslo_concurrency.processutils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:41:14 compute-0 podman[348846]: 2025-09-30 18:41:14.54179416 +0000 UTC m=+0.065255351 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Sep 30 18:41:14 compute-0 podman[348845]: 2025-09-30 18:41:14.552281889 +0000 UTC m=+0.080228114 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.4, managed_by=edpm_ansible, container_name=multipathd)
Sep 30 18:41:14 compute-0 podman[348847]: 2025-09-30 18:41:14.570365092 +0000 UTC m=+0.081898447 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, version=9.6, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9-minimal, maintainer=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.buildah.version=1.33.7, io.openshift.expose-services=)
Sep 30 18:41:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:41:14 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1827812612' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:41:14 compute-0 nova_compute[265391]: 2025-09-30 18:41:14.665 2 DEBUG oslo_concurrency.processutils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:41:14 compute-0 nova_compute[265391]: 2025-09-30 18:41:14.687 2 DEBUG nova.storage.rbd_utils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] rbd image 61db6fe2-de1c-4e4b-899f-4e40c50227b7_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:41:14 compute-0 nova_compute[265391]: 2025-09-30 18:41:14.690 2 DEBUG oslo_concurrency.processutils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:41:14 compute-0 nova_compute[265391]: 2025-09-30 18:41:14.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:41:15 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/772842181' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:41:15 compute-0 nova_compute[265391]: 2025-09-30 18:41:15.106 2 DEBUG oslo_concurrency.processutils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:41:15 compute-0 nova_compute[265391]: 2025-09-30 18:41:15.107 2 DEBUG nova.virt.libvirt.vif [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:41:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-1220912296',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-1220912296',id=31,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='63c45bef63ef4b9f895b3bab865e1a84',ramdisk_id='',reservation_id='r-0qrb7r13',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-134702932',owner_user_name='tempest-TestExecuteWorkloadBalanceStrategy-134702932-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:41:06Z,user_data=None,user_id='5717e8cb8548429b948a23763350ab4a',uuid=61db6fe2-de1c-4e4b-899f-4e40c50227b7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25ca9495-1e4e-46a1-826a-4fcb2880a7ad", "address": "fa:16:3e:b7:d3:fa", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ca9495-1e", "ovs_interfaceid": "25ca9495-1e4e-46a1-826a-4fcb2880a7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:41:15 compute-0 nova_compute[265391]: 2025-09-30 18:41:15.108 2 DEBUG nova.network.os_vif_util [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Converting VIF {"id": "25ca9495-1e4e-46a1-826a-4fcb2880a7ad", "address": "fa:16:3e:b7:d3:fa", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ca9495-1e", "ovs_interfaceid": "25ca9495-1e4e-46a1-826a-4fcb2880a7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:41:15 compute-0 nova_compute[265391]: 2025-09-30 18:41:15.108 2 DEBUG nova.network.os_vif_util [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:d3:fa,bridge_name='br-int',has_traffic_filtering=True,id=25ca9495-1e4e-46a1-826a-4fcb2880a7ad,network=Network(c8484b9b-b34e-4c32-b987-029d8fcb2a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25ca9495-1e') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:41:15 compute-0 nova_compute[265391]: 2025-09-30 18:41:15.109 2 DEBUG nova.objects.instance [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lazy-loading 'pci_devices' on Instance uuid 61db6fe2-de1c-4e4b-899f-4e40c50227b7 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:41:15 compute-0 ceph-mon[73755]: pgmap v1883: 353 pgs: 353 active+clean; 167 MiB data, 403 MiB used, 40 GiB / 40 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Sep 30 18:41:15 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1827812612' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:41:15 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/772842181' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:41:15 compute-0 nova_compute[265391]: 2025-09-30 18:41:15.620 2 DEBUG nova.virt.libvirt.driver [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] End _get_guest_xml xml=<domain type="kvm">
Sep 30 18:41:15 compute-0 nova_compute[265391]:   <uuid>61db6fe2-de1c-4e4b-899f-4e40c50227b7</uuid>
Sep 30 18:41:15 compute-0 nova_compute[265391]:   <name>instance-0000001f</name>
Sep 30 18:41:15 compute-0 nova_compute[265391]:   <memory>131072</memory>
Sep 30 18:41:15 compute-0 nova_compute[265391]:   <vcpu>1</vcpu>
Sep 30 18:41:15 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:41:15 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteWorkloadBalanceStrategy-server-1220912296</nova:name>
Sep 30 18:41:15 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:41:14</nova:creationTime>
Sep 30 18:41:15 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:41:15 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:41:15 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:41:15 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:41:15 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:41:15 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:41:15 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:41:15 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:41:15 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:41:15 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:41:15 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:41:15 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:41:15 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:41:15 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:41:15 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:41:15 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:41:15 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:41:15 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:41:15 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:41:15 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:41:15 compute-0 nova_compute[265391]:         <nova:user uuid="5717e8cb8548429b948a23763350ab4a">tempest-TestExecuteWorkloadBalanceStrategy-134702932-project-admin</nova:user>
Sep 30 18:41:15 compute-0 nova_compute[265391]:         <nova:project uuid="63c45bef63ef4b9f895b3bab865e1a84">tempest-TestExecuteWorkloadBalanceStrategy-134702932</nova:project>
Sep 30 18:41:15 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:41:15 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:41:15 compute-0 nova_compute[265391]:         <nova:port uuid="25ca9495-1e4e-46a1-826a-4fcb2880a7ad">
Sep 30 18:41:15 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:41:15 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:41:15 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:41:15 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <system>
Sep 30 18:41:15 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:41:15 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:41:15 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:41:15 compute-0 nova_compute[265391]:       <entry name="serial">61db6fe2-de1c-4e4b-899f-4e40c50227b7</entry>
Sep 30 18:41:15 compute-0 nova_compute[265391]:       <entry name="uuid">61db6fe2-de1c-4e4b-899f-4e40c50227b7</entry>
Sep 30 18:41:15 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     </system>
Sep 30 18:41:15 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:41:15 compute-0 nova_compute[265391]:   <os>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="q35">hvm</type>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:   </os>
Sep 30 18:41:15 compute-0 nova_compute[265391]:   <features>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <vmcoreinfo/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:   </features>
Sep 30 18:41:15 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:41:15 compute-0 nova_compute[265391]:   <cpu mode="host-model" match="exact">
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <topology sockets="1" cores="1" threads="1"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:41:15 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:41:15 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/61db6fe2-de1c-4e4b-899f-4e40c50227b7_disk">
Sep 30 18:41:15 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:       </source>
Sep 30 18:41:15 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:41:15 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:41:15 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:41:15 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/61db6fe2-de1c-4e4b-899f-4e40c50227b7_disk.config">
Sep 30 18:41:15 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:       </source>
Sep 30 18:41:15 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:41:15 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:41:15 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:41:15 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:b7:d3:fa"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:       <target dev="tap25ca9495-1e"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:41:15 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/61db6fe2-de1c-4e4b-899f-4e40c50227b7/console.log" append="off"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <video>
Sep 30 18:41:15 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     </video>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:41:15 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <controller type="usb" index="0"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:41:15 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:41:15 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:41:15 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:41:15 compute-0 nova_compute[265391]: </domain>
Sep 30 18:41:15 compute-0 nova_compute[265391]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Sep 30 18:41:15 compute-0 nova_compute[265391]: 2025-09-30 18:41:15.621 2 DEBUG nova.compute.manager [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Preparing to wait for external event network-vif-plugged-25ca9495-1e4e-46a1-826a-4fcb2880a7ad prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:41:15 compute-0 nova_compute[265391]: 2025-09-30 18:41:15.622 2 DEBUG oslo_concurrency.lockutils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Acquiring lock "61db6fe2-de1c-4e4b-899f-4e40c50227b7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:41:15 compute-0 nova_compute[265391]: 2025-09-30 18:41:15.622 2 DEBUG oslo_concurrency.lockutils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "61db6fe2-de1c-4e4b-899f-4e40c50227b7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:41:15 compute-0 nova_compute[265391]: 2025-09-30 18:41:15.622 2 DEBUG oslo_concurrency.lockutils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "61db6fe2-de1c-4e4b-899f-4e40c50227b7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:41:15 compute-0 nova_compute[265391]: 2025-09-30 18:41:15.623 2 DEBUG nova.virt.libvirt.vif [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:41:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-1220912296',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-1220912296',id=31,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='63c45bef63ef4b9f895b3bab865e1a84',ramdisk_id='',reservation_id='r-0qrb7r13',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-134702932',owner_user_name='tempest-TestExecuteWorkloadBalanceStrategy-134702932-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:41:06Z,user_data=None,user_id='5717e8cb8548429b948a23763350ab4a',uuid=61db6fe2-de1c-4e4b-899f-4e40c50227b7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25ca9495-1e4e-46a1-826a-4fcb2880a7ad", "address": "fa:16:3e:b7:d3:fa", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ca9495-1e", "ovs_interfaceid": "25ca9495-1e4e-46a1-826a-4fcb2880a7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Sep 30 18:41:15 compute-0 nova_compute[265391]: 2025-09-30 18:41:15.623 2 DEBUG nova.network.os_vif_util [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Converting VIF {"id": "25ca9495-1e4e-46a1-826a-4fcb2880a7ad", "address": "fa:16:3e:b7:d3:fa", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ca9495-1e", "ovs_interfaceid": "25ca9495-1e4e-46a1-826a-4fcb2880a7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:41:15 compute-0 nova_compute[265391]: 2025-09-30 18:41:15.624 2 DEBUG nova.network.os_vif_util [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:d3:fa,bridge_name='br-int',has_traffic_filtering=True,id=25ca9495-1e4e-46a1-826a-4fcb2880a7ad,network=Network(c8484b9b-b34e-4c32-b987-029d8fcb2a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25ca9495-1e') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:41:15 compute-0 nova_compute[265391]: 2025-09-30 18:41:15.624 2 DEBUG os_vif [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:d3:fa,bridge_name='br-int',has_traffic_filtering=True,id=25ca9495-1e4e-46a1-826a-4fcb2880a7ad,network=Network(c8484b9b-b34e-4c32-b987-029d8fcb2a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25ca9495-1e') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Sep 30 18:41:15 compute-0 nova_compute[265391]: 2025-09-30 18:41:15.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:15 compute-0 nova_compute[265391]: 2025-09-30 18:41:15.625 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:41:15 compute-0 nova_compute[265391]: 2025-09-30 18:41:15.626 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:41:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:41:15.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:15 compute-0 nova_compute[265391]: 2025-09-30 18:41:15.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:15 compute-0 nova_compute[265391]: 2025-09-30 18:41:15.627 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': 'eda2f324-41ae-5546-92ec-e68ebdd5adfd', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:41:15 compute-0 nova_compute[265391]: 2025-09-30 18:41:15.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:15 compute-0 nova_compute[265391]: 2025-09-30 18:41:15.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:15 compute-0 nova_compute[265391]: 2025-09-30 18:41:15.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:15 compute-0 nova_compute[265391]: 2025-09-30 18:41:15.631 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25ca9495-1e, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:41:15 compute-0 nova_compute[265391]: 2025-09-30 18:41:15.632 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap25ca9495-1e, col_values=(('qos', UUID('a6977821-ca04-42b4-905c-58eaa252da8e')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:41:15 compute-0 nova_compute[265391]: 2025-09-30 18:41:15.632 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap25ca9495-1e, col_values=(('external_ids', {'iface-id': '25ca9495-1e4e-46a1-826a-4fcb2880a7ad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:d3:fa', 'vm-uuid': '61db6fe2-de1c-4e4b-899f-4e40c50227b7'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:41:15 compute-0 NetworkManager[45059]: <info>  [1759257675.6345] manager: (tap25ca9495-1e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/103)
Sep 30 18:41:15 compute-0 nova_compute[265391]: 2025-09-30 18:41:15.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:15 compute-0 nova_compute[265391]: 2025-09-30 18:41:15.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:41:15 compute-0 nova_compute[265391]: 2025-09-30 18:41:15.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:15 compute-0 nova_compute[265391]: 2025-09-30 18:41:15.644 2 INFO os_vif [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:d3:fa,bridge_name='br-int',has_traffic_filtering=True,id=25ca9495-1e4e-46a1-826a-4fcb2880a7ad,network=Network(c8484b9b-b34e-4c32-b987-029d8fcb2a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25ca9495-1e')
Sep 30 18:41:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:41:15.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1884: 353 pgs: 353 active+clean; 167 MiB data, 403 MiB used, 40 GiB / 40 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Sep 30 18:41:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:41:16 compute-0 nova_compute[265391]: 2025-09-30 18:41:16.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:17 compute-0 nova_compute[265391]: 2025-09-30 18:41:17.197 2 DEBUG nova.virt.libvirt.driver [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:41:17 compute-0 nova_compute[265391]: 2025-09-30 18:41:17.197 2 DEBUG nova.virt.libvirt.driver [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:41:17 compute-0 nova_compute[265391]: 2025-09-30 18:41:17.197 2 DEBUG nova.virt.libvirt.driver [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] No VIF found with MAC fa:16:3e:b7:d3:fa, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Sep 30 18:41:17 compute-0 nova_compute[265391]: 2025-09-30 18:41:17.198 2 INFO nova.virt.libvirt.driver [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Using config drive
Sep 30 18:41:17 compute-0 nova_compute[265391]: 2025-09-30 18:41:17.219 2 DEBUG nova.storage.rbd_utils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] rbd image 61db6fe2-de1c-4e4b-899f-4e40c50227b7_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:41:17 compute-0 ceph-mon[73755]: pgmap v1884: 353 pgs: 353 active+clean; 167 MiB data, 403 MiB used, 40 GiB / 40 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Sep 30 18:41:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:41:17.349Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:41:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:41:17.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:17 compute-0 nova_compute[265391]: 2025-09-30 18:41:17.735 2 WARNING neutronclient.v2_0.client [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:41:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:41:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:41:17.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:41:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1885: 353 pgs: 353 active+clean; 167 MiB data, 403 MiB used, 40 GiB / 40 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Sep 30 18:41:18 compute-0 nova_compute[265391]: 2025-09-30 18:41:18.761 2 INFO nova.virt.libvirt.driver [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Creating config drive at /var/lib/nova/instances/61db6fe2-de1c-4e4b-899f-4e40c50227b7/disk.config
Sep 30 18:41:18 compute-0 nova_compute[265391]: 2025-09-30 18:41:18.768 2 DEBUG oslo_concurrency.processutils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/61db6fe2-de1c-4e4b-899f-4e40c50227b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmp6eeo57uj execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:41:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:41:18] "GET /metrics HTTP/1.1" 200 46720 "" "Prometheus/2.51.0"
Sep 30 18:41:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:41:18] "GET /metrics HTTP/1.1" 200 46720 "" "Prometheus/2.51.0"
Sep 30 18:41:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:41:18.894Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:41:18 compute-0 nova_compute[265391]: 2025-09-30 18:41:18.895 2 DEBUG oslo_concurrency.processutils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/61db6fe2-de1c-4e4b-899f-4e40c50227b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmp6eeo57uj" returned: 0 in 0.127s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:41:18 compute-0 nova_compute[265391]: 2025-09-30 18:41:18.918 2 DEBUG nova.storage.rbd_utils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] rbd image 61db6fe2-de1c-4e4b-899f-4e40c50227b7_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:41:18 compute-0 nova_compute[265391]: 2025-09-30 18:41:18.921 2 DEBUG oslo_concurrency.processutils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/61db6fe2-de1c-4e4b-899f-4e40c50227b7/disk.config 61db6fe2-de1c-4e4b-899f-4e40c50227b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:41:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:41:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:41:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:41:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:41:19 compute-0 nova_compute[265391]: 2025-09-30 18:41:19.084 2 DEBUG oslo_concurrency.processutils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/61db6fe2-de1c-4e4b-899f-4e40c50227b7/disk.config 61db6fe2-de1c-4e4b-899f-4e40c50227b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:41:19 compute-0 nova_compute[265391]: 2025-09-30 18:41:19.085 2 INFO nova.virt.libvirt.driver [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Deleting local config drive /var/lib/nova/instances/61db6fe2-de1c-4e4b-899f-4e40c50227b7/disk.config because it was imported into RBD.
Sep 30 18:41:19 compute-0 kernel: tap25ca9495-1e: entered promiscuous mode
Sep 30 18:41:19 compute-0 NetworkManager[45059]: <info>  [1759257679.1324] manager: (tap25ca9495-1e): new Tun device (/org/freedesktop/NetworkManager/Devices/104)
Sep 30 18:41:19 compute-0 nova_compute[265391]: 2025-09-30 18:41:19.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:19 compute-0 ovn_controller[156242]: 2025-09-30T18:41:19Z|00264|binding|INFO|Claiming lport 25ca9495-1e4e-46a1-826a-4fcb2880a7ad for this chassis.
Sep 30 18:41:19 compute-0 ovn_controller[156242]: 2025-09-30T18:41:19Z|00265|binding|INFO|25ca9495-1e4e-46a1-826a-4fcb2880a7ad: Claiming fa:16:3e:b7:d3:fa 10.100.0.13
Sep 30 18:41:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:19.139 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:d3:fa 10.100.0.13'], port_security=['fa:16:3e:b7:d3:fa 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '61db6fe2-de1c-4e4b-899f-4e40c50227b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63c45bef63ef4b9f895b3bab865e1a84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a9025550-4c18-4f21-a560-5b6f52684803', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55e305e6-0f4d-40bc-a70b-ac91f882ec57, chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=25ca9495-1e4e-46a1-826a-4fcb2880a7ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:41:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:19.140 166158 INFO neutron.agent.ovn.metadata.agent [-] Port 25ca9495-1e4e-46a1-826a-4fcb2880a7ad in datapath c8484b9b-b34e-4c32-b987-029d8fcb2a28 bound to our chassis
Sep 30 18:41:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:19.141 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c8484b9b-b34e-4c32-b987-029d8fcb2a28
Sep 30 18:41:19 compute-0 ovn_controller[156242]: 2025-09-30T18:41:19Z|00266|binding|INFO|Setting lport 25ca9495-1e4e-46a1-826a-4fcb2880a7ad ovn-installed in OVS
Sep 30 18:41:19 compute-0 ovn_controller[156242]: 2025-09-30T18:41:19Z|00267|binding|INFO|Setting lport 25ca9495-1e4e-46a1-826a-4fcb2880a7ad up in Southbound
Sep 30 18:41:19 compute-0 nova_compute[265391]: 2025-09-30 18:41:19.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:19 compute-0 systemd-udevd[349023]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:41:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:19.161 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[9223048d-6581-470b-971a-822a9a7cc247]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:41:19 compute-0 systemd-machined[219917]: New machine qemu-23-instance-0000001f.
Sep 30 18:41:19 compute-0 NetworkManager[45059]: <info>  [1759257679.1712] device (tap25ca9495-1e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 18:41:19 compute-0 NetworkManager[45059]: <info>  [1759257679.1727] device (tap25ca9495-1e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Sep 30 18:41:19 compute-0 systemd[1]: Started Virtual Machine qemu-23-instance-0000001f.
Sep 30 18:41:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:19.189 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[d2e32cc7-ca89-4e14-87a6-b4155a2e3bc5]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:41:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:19.192 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[c3c60d53-febc-46c5-af6b-1be107bbcadb]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:41:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:19.215 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[182818a3-e2c2-4f9a-8352-92a72b8a94e5]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:41:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:19.232 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[b513cb94-885d-4a05-b2d3-252c35d19afa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc8484b9b-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:48:bc:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 625209, 'reachable_time': 34390, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349034, 'error': None, 'target': 'ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:41:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:19.252 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[e26dd497-759b-4ec1-8b2a-befdfd0138d0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc8484b9b-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 625220, 'tstamp': 625220}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349037, 'error': None, 'target': 'ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc8484b9b-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 625223, 'tstamp': 625223}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349037, 'error': None, 'target': 'ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:41:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:19.253 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8484b9b-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:41:19 compute-0 nova_compute[265391]: 2025-09-30 18:41:19.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:19 compute-0 nova_compute[265391]: 2025-09-30 18:41:19.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:19.256 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc8484b9b-b0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:41:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:19.256 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:41:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:19.256 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc8484b9b-b0, col_values=(('external_ids', {'iface-id': 'd2e69f29-6b3a-46dc-9ed7-12031e1b7d2b'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:41:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:19.257 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:41:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:19.257 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[4ddf82bf-54ad-4650-9ed3-e14b427412f4]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-c8484b9b-b34e-4c32-b987-029d8fcb2a28\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/c8484b9b-b34e-4c32-b987-029d8fcb2a28.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID c8484b9b-b34e-4c32-b987-029d8fcb2a28\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:41:19 compute-0 ceph-mon[73755]: pgmap v1885: 353 pgs: 353 active+clean; 167 MiB data, 403 MiB used, 40 GiB / 40 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Sep 30 18:41:19 compute-0 sudo[349039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:41:19 compute-0 sudo[349039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:41:19 compute-0 sudo[349039]: pam_unix(sudo:session): session closed for user root
Sep 30 18:41:19 compute-0 nova_compute[265391]: 2025-09-30 18:41:19.579 2 DEBUG nova.compute.manager [req-64f83cd7-660d-423e-8634-868ff0098c82 req-e46ac300-c88b-4762-b28e-2df7fb60aa0c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Received event network-vif-plugged-25ca9495-1e4e-46a1-826a-4fcb2880a7ad external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:41:19 compute-0 nova_compute[265391]: 2025-09-30 18:41:19.581 2 DEBUG oslo_concurrency.lockutils [req-64f83cd7-660d-423e-8634-868ff0098c82 req-e46ac300-c88b-4762-b28e-2df7fb60aa0c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "61db6fe2-de1c-4e4b-899f-4e40c50227b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:41:19 compute-0 nova_compute[265391]: 2025-09-30 18:41:19.582 2 DEBUG oslo_concurrency.lockutils [req-64f83cd7-660d-423e-8634-868ff0098c82 req-e46ac300-c88b-4762-b28e-2df7fb60aa0c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "61db6fe2-de1c-4e4b-899f-4e40c50227b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:41:19 compute-0 nova_compute[265391]: 2025-09-30 18:41:19.583 2 DEBUG oslo_concurrency.lockutils [req-64f83cd7-660d-423e-8634-868ff0098c82 req-e46ac300-c88b-4762-b28e-2df7fb60aa0c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "61db6fe2-de1c-4e4b-899f-4e40c50227b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:41:19 compute-0 nova_compute[265391]: 2025-09-30 18:41:19.583 2 DEBUG nova.compute.manager [req-64f83cd7-660d-423e-8634-868ff0098c82 req-e46ac300-c88b-4762-b28e-2df7fb60aa0c 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Processing event network-vif-plugged-25ca9495-1e4e-46a1-826a-4fcb2880a7ad _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:41:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:41:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:41:19.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:41:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:41:19.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1886: 353 pgs: 353 active+clean; 167 MiB data, 403 MiB used, 40 GiB / 40 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Sep 30 18:41:20 compute-0 nova_compute[265391]: 2025-09-30 18:41:20.238 2 DEBUG nova.compute.manager [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:41:20 compute-0 nova_compute[265391]: 2025-09-30 18:41:20.244 2 DEBUG nova.virt.libvirt.driver [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Sep 30 18:41:20 compute-0 nova_compute[265391]: 2025-09-30 18:41:20.249 2 INFO nova.virt.libvirt.driver [-] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Instance spawned successfully.
Sep 30 18:41:20 compute-0 nova_compute[265391]: 2025-09-30 18:41:20.250 2 DEBUG nova.virt.libvirt.driver [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Sep 30 18:41:20 compute-0 nova_compute[265391]: 2025-09-30 18:41:20.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:20 compute-0 nova_compute[265391]: 2025-09-30 18:41:20.762 2 DEBUG nova.virt.libvirt.driver [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:41:20 compute-0 nova_compute[265391]: 2025-09-30 18:41:20.763 2 DEBUG nova.virt.libvirt.driver [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:41:20 compute-0 nova_compute[265391]: 2025-09-30 18:41:20.763 2 DEBUG nova.virt.libvirt.driver [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:41:20 compute-0 nova_compute[265391]: 2025-09-30 18:41:20.763 2 DEBUG nova.virt.libvirt.driver [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:41:20 compute-0 nova_compute[265391]: 2025-09-30 18:41:20.764 2 DEBUG nova.virt.libvirt.driver [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:41:20 compute-0 nova_compute[265391]: 2025-09-30 18:41:20.764 2 DEBUG nova.virt.libvirt.driver [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:41:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:41:21 compute-0 nova_compute[265391]: 2025-09-30 18:41:21.277 2 INFO nova.compute.manager [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Took 13.56 seconds to spawn the instance on the hypervisor.
Sep 30 18:41:21 compute-0 nova_compute[265391]: 2025-09-30 18:41:21.278 2 DEBUG nova.compute.manager [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Sep 30 18:41:21 compute-0 ceph-mon[73755]: pgmap v1886: 353 pgs: 353 active+clean; 167 MiB data, 403 MiB used, 40 GiB / 40 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Sep 30 18:41:21 compute-0 nova_compute[265391]: 2025-09-30 18:41:21.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.002000051s ======
Sep 30 18:41:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:41:21.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Sep 30 18:41:21 compute-0 nova_compute[265391]: 2025-09-30 18:41:21.640 2 DEBUG nova.compute.manager [req-b088bff2-461f-4415-be0b-7b328d602552 req-06498cb3-7d20-478b-9cca-aea25903b811 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Received event network-vif-plugged-25ca9495-1e4e-46a1-826a-4fcb2880a7ad external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:41:21 compute-0 nova_compute[265391]: 2025-09-30 18:41:21.640 2 DEBUG oslo_concurrency.lockutils [req-b088bff2-461f-4415-be0b-7b328d602552 req-06498cb3-7d20-478b-9cca-aea25903b811 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "61db6fe2-de1c-4e4b-899f-4e40c50227b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:41:21 compute-0 nova_compute[265391]: 2025-09-30 18:41:21.640 2 DEBUG oslo_concurrency.lockutils [req-b088bff2-461f-4415-be0b-7b328d602552 req-06498cb3-7d20-478b-9cca-aea25903b811 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "61db6fe2-de1c-4e4b-899f-4e40c50227b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:41:21 compute-0 nova_compute[265391]: 2025-09-30 18:41:21.641 2 DEBUG oslo_concurrency.lockutils [req-b088bff2-461f-4415-be0b-7b328d602552 req-06498cb3-7d20-478b-9cca-aea25903b811 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "61db6fe2-de1c-4e4b-899f-4e40c50227b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:41:21 compute-0 nova_compute[265391]: 2025-09-30 18:41:21.641 2 DEBUG nova.compute.manager [req-b088bff2-461f-4415-be0b-7b328d602552 req-06498cb3-7d20-478b-9cca-aea25903b811 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] No waiting events found dispatching network-vif-plugged-25ca9495-1e4e-46a1-826a-4fcb2880a7ad pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:41:21 compute-0 nova_compute[265391]: 2025-09-30 18:41:21.641 2 WARNING nova.compute.manager [req-b088bff2-461f-4415-be0b-7b328d602552 req-06498cb3-7d20-478b-9cca-aea25903b811 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Received unexpected event network-vif-plugged-25ca9495-1e4e-46a1-826a-4fcb2880a7ad for instance with vm_state active and task_state None.
Sep 30 18:41:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:41:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:41:21.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:41:21 compute-0 nova_compute[265391]: 2025-09-30 18:41:21.809 2 INFO nova.compute.manager [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Took 19.28 seconds to build instance.
Sep 30 18:41:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1887: 353 pgs: 353 active+clean; 167 MiB data, 403 MiB used, 40 GiB / 40 GiB avail; 9.2 KiB/s rd, 12 KiB/s wr, 11 op/s
Sep 30 18:41:22 compute-0 nova_compute[265391]: 2025-09-30 18:41:22.314 2 DEBUG oslo_concurrency.lockutils [None req-8d51cee3-b8a1-4c46-a52d-50584e0f9356 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "61db6fe2-de1c-4e4b-899f-4e40c50227b7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.806s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:41:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:41:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:41:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:41:23 compute-0 ceph-mon[73755]: pgmap v1887: 353 pgs: 353 active+clean; 167 MiB data, 403 MiB used, 40 GiB / 40 GiB avail; 9.2 KiB/s rd, 12 KiB/s wr, 11 op/s
Sep 30 18:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=cleanup t=2025-09-30T18:41:23.550726513Z level=info msg="Completed cleanup jobs" duration=54.178876ms
Sep 30 18:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=plugins.update.checker t=2025-09-30T18:41:23.626747919Z level=info msg="Update check succeeded" duration=64.146602ms
Sep 30 18:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=grafana.update.checker t=2025-09-30T18:41:23.630117555Z level=info msg="Update check succeeded" duration=62.571101ms
Sep 30 18:41:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:41:23.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:41:23.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:41:23.875Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:41:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:41:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:41:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:41:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:41:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1888: 353 pgs: 353 active+clean; 167 MiB data, 403 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 85 op/s
Sep 30 18:41:25 compute-0 ceph-mon[73755]: pgmap v1888: 353 pgs: 353 active+clean; 167 MiB data, 403 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 85 op/s
Sep 30 18:41:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:41:25.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:25 compute-0 nova_compute[265391]: 2025-09-30 18:41:25.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:41:25.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1889: 353 pgs: 353 active+clean; 167 MiB data, 403 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:41:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:41:26 compute-0 nova_compute[265391]: 2025-09-30 18:41:26.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:41:27.350Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:41:27 compute-0 ceph-mon[73755]: pgmap v1889: 353 pgs: 353 active+clean; 167 MiB data, 403 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:41:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:41:27.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:27 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Sep 30 18:41:27 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Sep 30 18:41:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:41:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:41:27.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:41:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1890: 353 pgs: 353 active+clean; 167 MiB data, 403 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:41:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:41:28] "GET /metrics HTTP/1.1" 200 46725 "" "Prometheus/2.51.0"
Sep 30 18:41:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:41:28] "GET /metrics HTTP/1.1" 200 46725 "" "Prometheus/2.51.0"
Sep 30 18:41:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:41:28.895Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:41:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:41:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:41:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:41:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:41:29 compute-0 ceph-mon[73755]: pgmap v1890: 353 pgs: 353 active+clean; 167 MiB data, 403 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:41:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:41:29.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:29 compute-0 podman[276673]: time="2025-09-30T18:41:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:41:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:41:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:41:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:41:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10766 "" "Go-http-client/1.1"
Sep 30 18:41:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:41:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:41:29.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:41:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1891: 353 pgs: 353 active+clean; 167 MiB data, 403 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Sep 30 18:41:30 compute-0 nova_compute[265391]: 2025-09-30 18:41:30.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:30 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Sep 30 18:41:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:41:31 compute-0 openstack_network_exporter[279566]: ERROR   18:41:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:41:31 compute-0 openstack_network_exporter[279566]: ERROR   18:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:41:31 compute-0 openstack_network_exporter[279566]: ERROR   18:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:41:31 compute-0 openstack_network_exporter[279566]: ERROR   18:41:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:41:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:41:31 compute-0 openstack_network_exporter[279566]: ERROR   18:41:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:41:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:41:31 compute-0 nova_compute[265391]: 2025-09-30 18:41:31.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:31 compute-0 ceph-mon[73755]: pgmap v1891: 353 pgs: 353 active+clean; 167 MiB data, 403 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Sep 30 18:41:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:41:31.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:41:31.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:31 compute-0 ovn_controller[156242]: 2025-09-30T18:41:31Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b7:d3:fa 10.100.0.13
Sep 30 18:41:31 compute-0 ovn_controller[156242]: 2025-09-30T18:41:31Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b7:d3:fa 10.100.0.13
Sep 30 18:41:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1892: 353 pgs: 353 active+clean; 167 MiB data, 403 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Sep 30 18:41:32 compute-0 podman[349121]: 2025-09-30 18:41:32.517273704 +0000 UTC m=+0.056665251 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 18:41:32 compute-0 podman[349123]: 2025-09-30 18:41:32.52690695 +0000 UTC m=+0.060609022 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Sep 30 18:41:32 compute-0 podman[349122]: 2025-09-30 18:41:32.565082107 +0000 UTC m=+0.102699949 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Sep 30 18:41:33 compute-0 ceph-mon[73755]: pgmap v1892: 353 pgs: 353 active+clean; 167 MiB data, 403 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Sep 30 18:41:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:41:33.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:41:33.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:41:33.877Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:41:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:41:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:41:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:41:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:41:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1893: 353 pgs: 353 active+clean; 200 MiB data, 445 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Sep 30 18:41:35 compute-0 ceph-mon[73755]: pgmap v1893: 353 pgs: 353 active+clean; 200 MiB data, 445 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Sep 30 18:41:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:41:35.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:35 compute-0 nova_compute[265391]: 2025-09-30 18:41:35.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:41:35.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1894: 353 pgs: 353 active+clean; 200 MiB data, 445 MiB used, 40 GiB / 40 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:41:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:41:36 compute-0 nova_compute[265391]: 2025-09-30 18:41:36.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/546199454' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:41:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/546199454' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:41:37 compute-0 nova_compute[265391]: 2025-09-30 18:41:37.249 2 DEBUG nova.virt.libvirt.driver [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Check if temp file /var/lib/nova/instances/tmphvjyhh8m exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10968
Sep 30 18:41:37 compute-0 nova_compute[265391]: 2025-09-30 18:41:37.254 2 DEBUG nova.compute.manager [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmphvjyhh8m',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='86b9b1e5-516e-43c2-b180-7ef40f7c1c67',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.12/site-packages/nova/compute/manager.py:9294
Sep 30 18:41:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:41:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:41:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:41:37.350Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:41:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:41:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:41:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:41:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:41:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:41:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:41:37 compute-0 ceph-mon[73755]: pgmap v1894: 353 pgs: 353 active+clean; 200 MiB data, 445 MiB used, 40 GiB / 40 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:41:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:41:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:41:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:41:37.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:41:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:41:37.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1895: 353 pgs: 353 active+clean; 200 MiB data, 445 MiB used, 40 GiB / 40 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:41:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:41:38] "GET /metrics HTTP/1.1" 200 46720 "" "Prometheus/2.51.0"
Sep 30 18:41:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:41:38] "GET /metrics HTTP/1.1" 200 46720 "" "Prometheus/2.51.0"
Sep 30 18:41:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:41:38.896Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:41:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:41:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:41:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:41:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:41:39 compute-0 sudo[349192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:41:39 compute-0 sudo[349192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:41:39 compute-0 sudo[349192]: pam_unix(sudo:session): session closed for user root
Sep 30 18:41:39 compute-0 ceph-mon[73755]: pgmap v1895: 353 pgs: 353 active+clean; 200 MiB data, 445 MiB used, 40 GiB / 40 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:41:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:41:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:41:39.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:41:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:41:39.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1896: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Sep 30 18:41:40 compute-0 nova_compute[265391]: 2025-09-30 18:41:40.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:41:41 compute-0 nova_compute[265391]: 2025-09-30 18:41:41.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:41 compute-0 ceph-mon[73755]: pgmap v1896: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Sep 30 18:41:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:41:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:41:41.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:41:41 compute-0 nova_compute[265391]: 2025-09-30 18:41:41.689 2 DEBUG nova.compute.manager [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Preparing to wait for external event network-vif-plugged-c8821620-973c-4db8-9c4b-766e7751348e prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:41:41 compute-0 nova_compute[265391]: 2025-09-30 18:41:41.690 2 DEBUG oslo_concurrency.lockutils [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:41:41 compute-0 nova_compute[265391]: 2025-09-30 18:41:41.690 2 DEBUG oslo_concurrency.lockutils [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:41:41 compute-0 nova_compute[265391]: 2025-09-30 18:41:41.690 2 DEBUG oslo_concurrency.lockutils [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:41:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:41:41.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1897: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 18:41:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:41:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:41:43.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:41:43 compute-0 ceph-mon[73755]: pgmap v1897: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 18:41:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:41:43.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:41:43.878Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:41:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:41:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:41:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:41:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:41:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1898: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 18:41:45 compute-0 podman[349224]: 2025-09-30 18:41:45.554160417 +0000 UTC m=+0.078499370 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, container_name=openstack_network_exporter, io.buildah.version=1.33.7, release=1755695350, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Sep 30 18:41:45 compute-0 podman[349222]: 2025-09-30 18:41:45.574674252 +0000 UTC m=+0.099953519 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Sep 30 18:41:45 compute-0 podman[349223]: 2025-09-30 18:41:45.580948102 +0000 UTC m=+0.102052132 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=iscsid, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Sep 30 18:41:45 compute-0 nova_compute[265391]: 2025-09-30 18:41:45.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:41:45.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:45 compute-0 ceph-mon[73755]: pgmap v1898: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 18:41:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:41:45.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1899: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:41:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:41:46 compute-0 nova_compute[265391]: 2025-09-30 18:41:46.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:41:46 compute-0 nova_compute[265391]: 2025-09-30 18:41:46.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:46 compute-0 ceph-mon[73755]: pgmap v1899: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:41:47 compute-0 nova_compute[265391]: 2025-09-30 18:41:47.102 2 DEBUG nova.compute.manager [req-2c121e1b-9460-442d-90c4-16051c288ecf req-80ffb1d2-df04-4010-9283-0a07ff4c53e9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Received event network-vif-unplugged-c8821620-973c-4db8-9c4b-766e7751348e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:41:47 compute-0 nova_compute[265391]: 2025-09-30 18:41:47.102 2 DEBUG oslo_concurrency.lockutils [req-2c121e1b-9460-442d-90c4-16051c288ecf req-80ffb1d2-df04-4010-9283-0a07ff4c53e9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:41:47 compute-0 nova_compute[265391]: 2025-09-30 18:41:47.103 2 DEBUG oslo_concurrency.lockutils [req-2c121e1b-9460-442d-90c4-16051c288ecf req-80ffb1d2-df04-4010-9283-0a07ff4c53e9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:41:47 compute-0 nova_compute[265391]: 2025-09-30 18:41:47.103 2 DEBUG oslo_concurrency.lockutils [req-2c121e1b-9460-442d-90c4-16051c288ecf req-80ffb1d2-df04-4010-9283-0a07ff4c53e9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:41:47 compute-0 nova_compute[265391]: 2025-09-30 18:41:47.103 2 DEBUG nova.compute.manager [req-2c121e1b-9460-442d-90c4-16051c288ecf req-80ffb1d2-df04-4010-9283-0a07ff4c53e9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] No event matching network-vif-unplugged-c8821620-973c-4db8-9c4b-766e7751348e in dict_keys([('network-vif-plugged', 'c8821620-973c-4db8-9c4b-766e7751348e')]) pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:349
Sep 30 18:41:47 compute-0 nova_compute[265391]: 2025-09-30 18:41:47.104 2 DEBUG nova.compute.manager [req-2c121e1b-9460-442d-90c4-16051c288ecf req-80ffb1d2-df04-4010-9283-0a07ff4c53e9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Received event network-vif-unplugged-c8821620-973c-4db8-9c4b-766e7751348e for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:41:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:41:47.351Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:41:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:41:47.352Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:41:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:47.484 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:41:47 compute-0 nova_compute[265391]: 2025-09-30 18:41:47.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:47.485 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:41:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.002000052s ======
Sep 30 18:41:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:41:47.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Sep 30 18:41:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:41:47.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1900: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:41:48 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:48.487 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:41:48 compute-0 nova_compute[265391]: 2025-09-30 18:41:48.713 2 INFO nova.compute.manager [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Took 7.02 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Sep 30 18:41:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:41:48] "GET /metrics HTTP/1.1" 200 46720 "" "Prometheus/2.51.0"
Sep 30 18:41:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:41:48] "GET /metrics HTTP/1.1" 200 46720 "" "Prometheus/2.51.0"
Sep 30 18:41:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:41:48.897Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:41:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:41:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:41:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:41:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:41:49 compute-0 ceph-mon[73755]: pgmap v1900: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:41:49 compute-0 nova_compute[265391]: 2025-09-30 18:41:49.171 2 DEBUG nova.compute.manager [req-eb5acdd4-7836-4014-9855-94084ffa175f req-012016f1-5837-4618-a09c-e66cd1307f1e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Received event network-vif-plugged-c8821620-973c-4db8-9c4b-766e7751348e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:41:49 compute-0 nova_compute[265391]: 2025-09-30 18:41:49.172 2 DEBUG oslo_concurrency.lockutils [req-eb5acdd4-7836-4014-9855-94084ffa175f req-012016f1-5837-4618-a09c-e66cd1307f1e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:41:49 compute-0 nova_compute[265391]: 2025-09-30 18:41:49.172 2 DEBUG oslo_concurrency.lockutils [req-eb5acdd4-7836-4014-9855-94084ffa175f req-012016f1-5837-4618-a09c-e66cd1307f1e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:41:49 compute-0 nova_compute[265391]: 2025-09-30 18:41:49.172 2 DEBUG oslo_concurrency.lockutils [req-eb5acdd4-7836-4014-9855-94084ffa175f req-012016f1-5837-4618-a09c-e66cd1307f1e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:41:49 compute-0 nova_compute[265391]: 2025-09-30 18:41:49.172 2 DEBUG nova.compute.manager [req-eb5acdd4-7836-4014-9855-94084ffa175f req-012016f1-5837-4618-a09c-e66cd1307f1e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Processing event network-vif-plugged-c8821620-973c-4db8-9c4b-766e7751348e _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:41:49 compute-0 ovn_controller[156242]: 2025-09-30T18:41:49Z|00268|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Sep 30 18:41:49 compute-0 nova_compute[265391]: 2025-09-30 18:41:49.172 2 DEBUG nova.compute.manager [req-eb5acdd4-7836-4014-9855-94084ffa175f req-012016f1-5837-4618-a09c-e66cd1307f1e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Received event network-changed-c8821620-973c-4db8-9c4b-766e7751348e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:41:49 compute-0 nova_compute[265391]: 2025-09-30 18:41:49.173 2 DEBUG nova.compute.manager [req-eb5acdd4-7836-4014-9855-94084ffa175f req-012016f1-5837-4618-a09c-e66cd1307f1e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Refreshing instance network info cache due to event network-changed-c8821620-973c-4db8-9c4b-766e7751348e. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:41:49 compute-0 nova_compute[265391]: 2025-09-30 18:41:49.173 2 DEBUG oslo_concurrency.lockutils [req-eb5acdd4-7836-4014-9855-94084ffa175f req-012016f1-5837-4618-a09c-e66cd1307f1e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-86b9b1e5-516e-43c2-b180-7ef40f7c1c67" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:41:49 compute-0 nova_compute[265391]: 2025-09-30 18:41:49.173 2 DEBUG oslo_concurrency.lockutils [req-eb5acdd4-7836-4014-9855-94084ffa175f req-012016f1-5837-4618-a09c-e66cd1307f1e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-86b9b1e5-516e-43c2-b180-7ef40f7c1c67" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:41:49 compute-0 nova_compute[265391]: 2025-09-30 18:41:49.173 2 DEBUG nova.network.neutron [req-eb5acdd4-7836-4014-9855-94084ffa175f req-012016f1-5837-4618-a09c-e66cd1307f1e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Refreshing network info cache for port c8821620-973c-4db8-9c4b-766e7751348e _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:41:49 compute-0 nova_compute[265391]: 2025-09-30 18:41:49.174 2 DEBUG nova.compute.manager [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:41:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:41:49.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:49 compute-0 nova_compute[265391]: 2025-09-30 18:41:49.702 2 DEBUG nova.compute.manager [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmphvjyhh8m',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='86b9b1e5-516e-43c2-b180-7ef40f7c1c67',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(f282835a-0309-42be-a72c-7d2d96cd2df8),old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9659
Sep 30 18:41:49 compute-0 nova_compute[265391]: 2025-09-30 18:41:49.703 2 WARNING neutronclient.v2_0.client [req-eb5acdd4-7836-4014-9855-94084ffa175f req-012016f1-5837-4618-a09c-e66cd1307f1e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:41:49 compute-0 nova_compute[265391]: 2025-09-30 18:41:49.708 2 DEBUG nova.objects.instance [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lazy-loading 'migration_context' on Instance uuid 86b9b1e5-516e-43c2-b180-7ef40f7c1c67 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:41:49 compute-0 nova_compute[265391]: 2025-09-30 18:41:49.709 2 DEBUG nova.virt.libvirt.driver [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Starting monitoring of live migration _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11543
Sep 30 18:41:49 compute-0 nova_compute[265391]: 2025-09-30 18:41:49.711 2 DEBUG nova.virt.libvirt.driver [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Sep 30 18:41:49 compute-0 nova_compute[265391]: 2025-09-30 18:41:49.711 2 DEBUG nova.virt.libvirt.driver [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Sep 30 18:41:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:41:49.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1901: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:41:50 compute-0 nova_compute[265391]: 2025-09-30 18:41:50.213 2 DEBUG nova.virt.libvirt.driver [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Sep 30 18:41:50 compute-0 nova_compute[265391]: 2025-09-30 18:41:50.214 2 DEBUG nova.virt.libvirt.driver [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Sep 30 18:41:50 compute-0 nova_compute[265391]: 2025-09-30 18:41:50.221 2 DEBUG nova.virt.libvirt.vif [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:40:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-771828615',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-771828615',id=30,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:40:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='63c45bef63ef4b9f895b3bab865e1a84',ramdisk_id='',reservation_id='r-0wzw0y3q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-134702932',owner_user_name='tempest-TestExecuteWorkloadBalanceStrategy-134702932-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:40:55Z,user_data=None,user_id='5717e8cb8548429b948a23763350ab4a',uuid=86b9b1e5-516e-43c2-b180-7ef40f7c1c67,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c8821620-973c-4db8-9c4b-766e7751348e", "address": "fa:16:3e:49:7f:fe", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapc8821620-97", "ovs_interfaceid": "c8821620-973c-4db8-9c4b-766e7751348e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:41:50 compute-0 nova_compute[265391]: 2025-09-30 18:41:50.222 2 DEBUG nova.network.os_vif_util [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "c8821620-973c-4db8-9c4b-766e7751348e", "address": "fa:16:3e:49:7f:fe", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapc8821620-97", "ovs_interfaceid": "c8821620-973c-4db8-9c4b-766e7751348e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:41:50 compute-0 nova_compute[265391]: 2025-09-30 18:41:50.222 2 DEBUG nova.network.os_vif_util [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:7f:fe,bridge_name='br-int',has_traffic_filtering=True,id=c8821620-973c-4db8-9c4b-766e7751348e,network=Network(c8484b9b-b34e-4c32-b987-029d8fcb2a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8821620-97') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:41:50 compute-0 nova_compute[265391]: 2025-09-30 18:41:50.222 2 DEBUG nova.virt.libvirt.migration [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Updating guest XML with vif config: <interface type="ethernet">
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <mac address="fa:16:3e:49:7f:fe"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <model type="virtio"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <mtu size="1442"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <target dev="tapc8821620-97"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]: </interface>
Sep 30 18:41:50 compute-0 nova_compute[265391]:  _update_vif_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:534
Sep 30 18:41:50 compute-0 nova_compute[265391]: 2025-09-30 18:41:50.223 2 DEBUG nova.virt.libvirt.migration [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _remove_cpu_shared_set_xml input xml=<domain type="kvm">
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <name>instance-0000001e</name>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <uuid>86b9b1e5-516e-43c2-b180-7ef40f7c1c67</uuid>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteWorkloadBalanceStrategy-server-771828615</nova:name>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:40:48</nova:creationTime>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:41:50 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:41:50 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:user uuid="5717e8cb8548429b948a23763350ab4a">tempest-TestExecuteWorkloadBalanceStrategy-134702932-project-admin</nova:user>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:project uuid="63c45bef63ef4b9f895b3bab865e1a84">tempest-TestExecuteWorkloadBalanceStrategy-134702932</nova:project>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:port uuid="c8821620-973c-4db8-9c4b-766e7751348e">
Sep 30 18:41:50 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <system>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <entry name="serial">86b9b1e5-516e-43c2-b180-7ef40f7c1c67</entry>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <entry name="uuid">86b9b1e5-516e-43c2-b180-7ef40f7c1c67</entry>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </system>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <os>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   </os>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <features>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   </features>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/86b9b1e5-516e-43c2-b180-7ef40f7c1c67_disk">
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       </source>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/86b9b1e5-516e-43c2-b180-7ef40f7c1c67_disk.config">
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       </source>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:49:7f:fe"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target dev="tapc8821620-97"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/86b9b1e5-516e-43c2-b180-7ef40f7c1c67/console.log" append="off"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       </target>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/86b9b1e5-516e-43c2-b180-7ef40f7c1c67/console.log" append="off"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </console>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </input>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <video>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </video>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]: </domain>
Sep 30 18:41:50 compute-0 nova_compute[265391]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:241
Sep 30 18:41:50 compute-0 nova_compute[265391]: 2025-09-30 18:41:50.223 2 DEBUG nova.virt.libvirt.migration [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _remove_cpu_shared_set_xml output xml=<domain type="kvm">
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <name>instance-0000001e</name>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <uuid>86b9b1e5-516e-43c2-b180-7ef40f7c1c67</uuid>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteWorkloadBalanceStrategy-server-771828615</nova:name>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:40:48</nova:creationTime>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:41:50 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:41:50 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:user uuid="5717e8cb8548429b948a23763350ab4a">tempest-TestExecuteWorkloadBalanceStrategy-134702932-project-admin</nova:user>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:project uuid="63c45bef63ef4b9f895b3bab865e1a84">tempest-TestExecuteWorkloadBalanceStrategy-134702932</nova:project>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:port uuid="c8821620-973c-4db8-9c4b-766e7751348e">
Sep 30 18:41:50 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <system>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <entry name="serial">86b9b1e5-516e-43c2-b180-7ef40f7c1c67</entry>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <entry name="uuid">86b9b1e5-516e-43c2-b180-7ef40f7c1c67</entry>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </system>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <os>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   </os>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <features>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   </features>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/86b9b1e5-516e-43c2-b180-7ef40f7c1c67_disk">
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       </source>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/86b9b1e5-516e-43c2-b180-7ef40f7c1c67_disk.config">
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       </source>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:49:7f:fe"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target dev="tapc8821620-97"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/86b9b1e5-516e-43c2-b180-7ef40f7c1c67/console.log" append="off"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       </target>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/86b9b1e5-516e-43c2-b180-7ef40f7c1c67/console.log" append="off"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </console>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </input>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <video>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </video>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]: </domain>
Sep 30 18:41:50 compute-0 nova_compute[265391]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:250
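The dump ending above is logged as the output of _remove_cpu_shared_set_xml, one of the XML rewrite passes nova's libvirt migration helper applies to the domain definition before it is handed to the destination host. It is identical to the preceding dump, apparently because this m1.nano instance has no CPU shared set to strip. As a rough illustration only (the element handled here is an assumption for the example, not nova's actual logic), such a pass can be written with lxml: parse the XML, drop the offending element if present, and serialize the result.

    # Illustrative sketch only -- not nova's implementation. Drops a
    # hypothetical <cputune> element from a libvirt domain XML string.
    from lxml import etree

    def strip_cputune(domain_xml: str) -> str:
        root = etree.fromstring(domain_xml.encode())
        for cputune in root.findall("cputune"):
            root.remove(cputune)  # remove the element if the domain has one
        return etree.tostring(root, pretty_print=True).decode()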
Sep 30 18:41:50 compute-0 nova_compute[265391]: 2025-09-30 18:41:50.223 2 DEBUG nova.virt.libvirt.migration [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _update_pci_xml output xml=<domain type="kvm">
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <name>instance-0000001e</name>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <uuid>86b9b1e5-516e-43c2-b180-7ef40f7c1c67</uuid>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteWorkloadBalanceStrategy-server-771828615</nova:name>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:40:48</nova:creationTime>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:41:50 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:41:50 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:user uuid="5717e8cb8548429b948a23763350ab4a">tempest-TestExecuteWorkloadBalanceStrategy-134702932-project-admin</nova:user>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:project uuid="63c45bef63ef4b9f895b3bab865e1a84">tempest-TestExecuteWorkloadBalanceStrategy-134702932</nova:project>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <nova:port uuid="c8821620-973c-4db8-9c4b-766e7751348e">
Sep 30 18:41:50 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <system>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <entry name="serial">86b9b1e5-516e-43c2-b180-7ef40f7c1c67</entry>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <entry name="uuid">86b9b1e5-516e-43c2-b180-7ef40f7c1c67</entry>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </system>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <os>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   </os>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <features>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   </features>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/86b9b1e5-516e-43c2-b180-7ef40f7c1c67_disk">
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       </source>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/86b9b1e5-516e-43c2-b180-7ef40f7c1c67_disk.config">
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       </source>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:49:7f:fe"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target dev="tapc8821620-97"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/86b9b1e5-516e-43c2-b180-7ef40f7c1c67/console.log" append="off"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:41:50 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       </target>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/86b9b1e5-516e-43c2-b180-7ef40f7c1c67/console.log" append="off"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </console>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </input>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <video>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </video>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:41:50 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:41:50 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:41:50 compute-0 nova_compute[265391]: </domain>
Sep 30 18:41:50 compute-0 nova_compute[265391]:  _update_pci_dev_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:166
Sep 30 18:41:50 compute-0 nova_compute[265391]: 2025-09-30 18:41:50.224 2 DEBUG nova.virt.libvirt.driver [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] About to invoke the migrate API _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11175
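"About to invoke the migrate API" is the last driver log line before control passes to libvirt. A minimal sketch of what such a call looks like through libvirt-python follows; the connection URIs, the flag set, and the decision to pass the rewritten XML as the destination definition are assumptions for illustration, not values taken from this log or from nova's code.

    # Hedged sketch of a libvirt live-migration call; URIs and flags are
    # illustrative placeholders, not this deployment's actual settings.
    import libvirt

    def live_migrate(instance_uuid: str, dest_uri: str, dest_xml: str) -> None:
        conn = libvirt.open("qemu:///system")
        dom = conn.lookupByUUIDString(instance_uuid)
        params = {libvirt.VIR_MIGRATE_PARAM_DEST_XML: dest_xml}
        flags = (libvirt.VIR_MIGRATE_LIVE
                 | libvirt.VIR_MIGRATE_PEER2PEER
                 | libvirt.VIR_MIGRATE_UNDEFINE_SOURCE)
        dom.migrateToURI3(dest_uri, params, flags)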
Sep 30 18:41:50 compute-0 nova_compute[265391]: 2025-09-30 18:41:50.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:41:50 compute-0 nova_compute[265391]: 2025-09-30 18:41:50.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:50 compute-0 nova_compute[265391]: 2025-09-30 18:41:50.716 2 DEBUG nova.virt.libvirt.migration [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Current None elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Sep 30 18:41:50 compute-0 nova_compute[265391]: 2025-09-30 18:41:50.716 2 INFO nova.virt.libvirt.migration [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Increasing downtime to 50 ms after 0 sec elapsed time
Sep 30 18:41:51 compute-0 ceph-mon[73755]: pgmap v1901: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:41:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:41:51 compute-0 sudo[349289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:41:51 compute-0 sudo[349289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:41:51 compute-0 sudo[349289]: pam_unix(sudo:session): session closed for user root
Sep 30 18:41:51 compute-0 nova_compute[265391]: 2025-09-30 18:41:51.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:41:51 compute-0 sudo[349314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:41:51 compute-0 sudo[349314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:41:51 compute-0 nova_compute[265391]: 2025-09-30 18:41:51.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:41:51.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:51 compute-0 nova_compute[265391]: 2025-09-30 18:41:51.696 2 WARNING neutronclient.v2_0.client [req-eb5acdd4-7836-4014-9855-94084ffa175f req-012016f1-5837-4618-a09c-e66cd1307f1e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:41:51 compute-0 nova_compute[265391]: 2025-09-30 18:41:51.734 2 INFO nova.virt.libvirt.driver [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Migration running for 1 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Sep 30 18:41:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:41:51.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:51 compute-0 sudo[349314]: pam_unix(sudo:session): session closed for user root
Sep 30 18:41:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1902: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:41:52 compute-0 sudo[349370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:41:52 compute-0 sudo[349370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:41:52 compute-0 sudo[349370]: pam_unix(sudo:session): session closed for user root
Sep 30 18:41:52 compute-0 sudo[349395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Sep 30 18:41:52 compute-0 sudo[349395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:41:52 compute-0 nova_compute[265391]: 2025-09-30 18:41:52.237 2 DEBUG nova.virt.libvirt.migration [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Sep 30 18:41:52 compute-0 nova_compute[265391]: 2025-09-30 18:41:52.237 2 DEBUG nova.virt.libvirt.migration [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Downtime does not need to change update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:671
Sep 30 18:41:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:41:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:41:52 compute-0 sudo[349395]: pam_unix(sudo:session): session closed for user root
Sep 30 18:41:52 compute-0 nova_compute[265391]: 2025-09-30 18:41:52.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:41:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:41:52 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:41:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:41:52 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:41:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Sep 30 18:41:52 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Sep 30 18:41:52 compute-0 nova_compute[265391]: 2025-09-30 18:41:52.739 2 DEBUG nova.virt.libvirt.migration [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Current 50 elapsed 3 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Sep 30 18:41:52 compute-0 nova_compute[265391]: 2025-09-30 18:41:52.740 2 DEBUG nova.virt.libvirt.migration [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Downtime does not need to change update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:671
Sep 30 18:41:52 compute-0 nova_compute[265391]: 2025-09-30 18:41:52.938 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:41:52 compute-0 nova_compute[265391]: 2025-09-30 18:41:52.939 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:41:52 compute-0 nova_compute[265391]: 2025-09-30 18:41:52.939 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:41:52 compute-0 nova_compute[265391]: 2025-09-30 18:41:52.939 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:41:52 compute-0 nova_compute[265391]: 2025-09-30 18:41:52.939 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:41:53 compute-0 ceph-mon[73755]: pgmap v1902: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:41:53 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:41:53 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:41:53 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:41:53 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Sep 30 18:41:53 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2392918740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:41:53 compute-0 nova_compute[265391]: 2025-09-30 18:41:53.244 2 DEBUG nova.virt.libvirt.migration [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Current 50 elapsed 3 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Sep 30 18:41:53 compute-0 nova_compute[265391]: 2025-09-30 18:41:53.244 2 DEBUG nova.virt.libvirt.migration [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Downtime does not need to change update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:671
Sep 30 18:41:53 compute-0 nova_compute[265391]: 2025-09-30 18:41:53.269 2 DEBUG nova.network.neutron [req-eb5acdd4-7836-4014-9855-94084ffa175f req-012016f1-5837-4618-a09c-e66cd1307f1e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Updated VIF entry in instance network info cache for port c8821620-973c-4db8-9c4b-766e7751348e. _build_network_info_model /usr/lib/python3.12/site-packages/nova/network/neutron.py:3542
Sep 30 18:41:53 compute-0 nova_compute[265391]: 2025-09-30 18:41:53.269 2 DEBUG nova.network.neutron [req-eb5acdd4-7836-4014-9855-94084ffa175f req-012016f1-5837-4618-a09c-e66cd1307f1e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Updating instance_info_cache with network_info: [{"id": "c8821620-973c-4db8-9c4b-766e7751348e", "address": "fa:16:3e:49:7f:fe", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8821620-97", "ovs_interfaceid": "c8821620-973c-4db8-9c4b-766e7751348e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:41:53 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:41:53 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2657525076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:41:53 compute-0 nova_compute[265391]: 2025-09-30 18:41:53.429 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:41:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:41:53.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:53 compute-0 nova_compute[265391]: 2025-09-30 18:41:53.778 2 DEBUG oslo_concurrency.lockutils [req-eb5acdd4-7836-4014-9855-94084ffa175f req-012016f1-5837-4618-a09c-e66cd1307f1e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-86b9b1e5-516e-43c2-b180-7ef40f7c1c67" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:41:53 compute-0 kernel: tapc8821620-97 (unregistering): left promiscuous mode
Sep 30 18:41:53 compute-0 NetworkManager[45059]: <info>  [1759257713.7878] device (tapc8821620-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 18:41:53 compute-0 ovn_controller[156242]: 2025-09-30T18:41:53Z|00269|binding|INFO|Releasing lport c8821620-973c-4db8-9c4b-766e7751348e from this chassis (sb_readonly=0)
Sep 30 18:41:53 compute-0 ovn_controller[156242]: 2025-09-30T18:41:53Z|00270|binding|INFO|Setting lport c8821620-973c-4db8-9c4b-766e7751348e down in Southbound
Sep 30 18:41:53 compute-0 ovn_controller[156242]: 2025-09-30T18:41:53Z|00271|binding|INFO|Removing iface tapc8821620-97 ovn-installed in OVS
Sep 30 18:41:53 compute-0 nova_compute[265391]: 2025-09-30 18:41:53.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:53 compute-0 nova_compute[265391]: 2025-09-30 18:41:53.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:53.805 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:7f:fe 10.100.0.8'], port_security=['fa:16:3e:49:7f:fe 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '81ab3fff-d6d4-4262-9f24-1b212876e52c'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '86b9b1e5-516e-43c2-b180-7ef40f7c1c67', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63c45bef63ef4b9f895b3bab865e1a84', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'a9025550-4c18-4f21-a560-5b6f52684803', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55e305e6-0f4d-40bc-a70b-ac91f882ec57, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=c8821620-973c-4db8-9c4b-766e7751348e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:41:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:53.806 166158 INFO neutron.agent.ovn.metadata.agent [-] Port c8821620-973c-4db8-9c4b-766e7751348e in datapath c8484b9b-b34e-4c32-b987-029d8fcb2a28 unbound from our chassis
Sep 30 18:41:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:53.807 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c8484b9b-b34e-4c32-b987-029d8fcb2a28
Sep 30 18:41:53 compute-0 nova_compute[265391]: 2025-09-30 18:41:53.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:53.838 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[51449b8e-ae11-416c-8202-17fcbd09ec35]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:41:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:41:53.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:53 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Sep 30 18:41:53 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d0000001e.scope: Consumed 14.778s CPU time.
Sep 30 18:41:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:53.869 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[e0506b0f-4143-430d-b94f-7f5090e049c4]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:41:53 compute-0 systemd-machined[219917]: Machine qemu-22-instance-0000001e terminated.
Sep 30 18:41:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:53.872 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[a9b6849a-e5c5-4e92-929f-fa25fab75254]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:41:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:41:53.879Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:41:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:53.906 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[64ec9df8-297b-4c85-b82c-392c83ccae8a]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:41:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:53.924 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[9dea7398-3ab6-43c0-81f3-92e748acd45f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc8484b9b-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:48:bc:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 625209, 'reachable_time': 34390, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349479, 'error': None, 'target': 'ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:41:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:53.939 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[2de02c04-b630-4cfb-9f9b-74f5911f2ee5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc8484b9b-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 625220, 'tstamp': 625220}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349480, 'error': None, 'target': 'ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc8484b9b-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 625223, 'tstamp': 625223}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349480, 'error': None, 'target': 'ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:41:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:53.941 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8484b9b-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:41:53 compute-0 nova_compute[265391]: 2025-09-30 18:41:53.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:53 compute-0 virtqemud[265263]: Unable to get XATTR trusted.libvirt.security.ref_selinux on 86b9b1e5-516e-43c2-b180-7ef40f7c1c67_disk: No such file or directory
Sep 30 18:41:53 compute-0 virtqemud[265263]: Unable to get XATTR trusted.libvirt.security.ref_dac on 86b9b1e5-516e-43c2-b180-7ef40f7c1c67_disk: No such file or directory
Sep 30 18:41:53 compute-0 nova_compute[265391]: 2025-09-30 18:41:53.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:53.946 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc8484b9b-b0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:41:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:53.947 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:41:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:53.947 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc8484b9b-b0, col_values=(('external_ids', {'iface-id': 'd2e69f29-6b3a-46dc-9ed7-12031e1b7d2b'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:41:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:53.947 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:41:53 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:53.948 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[9e94c3eb-e8fe-46be-a0f7-b055b3f8e997]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-c8484b9b-b34e-4c32-b987-029d8fcb2a28\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/c8484b9b-b34e-4c32-b987-029d8fcb2a28.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID c8484b9b-b34e-4c32-b987-029d8fcb2a28\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:41:53 compute-0 nova_compute[265391]: 2025-09-30 18:41:53.975 2 DEBUG nova.virt.libvirt.guest [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.12/site-packages/nova/virt/libvirt/guest.py:687
Sep 30 18:41:53 compute-0 nova_compute[265391]: 2025-09-30 18:41:53.975 2 INFO nova.virt.libvirt.driver [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Migration operation has completed
Sep 30 18:41:53 compute-0 nova_compute[265391]: 2025-09-30 18:41:53.975 2 INFO nova.compute.manager [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] _post_live_migration() is started..
Sep 30 18:41:53 compute-0 nova_compute[265391]: 2025-09-30 18:41:53.978 2 DEBUG nova.virt.libvirt.driver [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Migrate API has completed _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11182
Sep 30 18:41:53 compute-0 nova_compute[265391]: 2025-09-30 18:41:53.978 2 DEBUG nova.virt.libvirt.driver [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Migration operation thread has finished _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11230
Sep 30 18:41:53 compute-0 nova_compute[265391]: 2025-09-30 18:41:53.978 2 DEBUG nova.virt.libvirt.driver [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Migration operation thread notification thread_finished /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11533
Sep 30 18:41:53 compute-0 nova_compute[265391]: 2025-09-30 18:41:53.990 2 WARNING neutronclient.v2_0.client [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:41:53 compute-0 nova_compute[265391]: 2025-09-30 18:41:53.991 2 WARNING neutronclient.v2_0.client [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:41:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:41:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:41:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:41:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:41:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1903: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 5.0 KiB/s rd, 9.2 KiB/s wr, 7 op/s
Sep 30 18:41:54 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2657525076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:41:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:54.333 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:41:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:54.333 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:41:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:41:54.334 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:41:54 compute-0 nova_compute[265391]: 2025-09-30 18:41:54.476 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:41:54 compute-0 nova_compute[265391]: 2025-09-30 18:41:54.476 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:41:54 compute-0 nova_compute[265391]: 2025-09-30 18:41:54.477 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Error from libvirt while getting description of instance-0000001e: [Error Code 42] Domain not found: no domain with matching uuid '86b9b1e5-516e-43c2-b180-7ef40f7c1c67' (instance-0000001e): libvirt.libvirtError: Domain not found: no domain with matching uuid '86b9b1e5-516e-43c2-b180-7ef40f7c1c67' (instance-0000001e)
Sep 30 18:41:54 compute-0 nova_compute[265391]: 2025-09-30 18:41:54.625 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:41:54 compute-0 nova_compute[265391]: 2025-09-30 18:41:54.626 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:41:54 compute-0 nova_compute[265391]: 2025-09-30 18:41:54.652 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.026s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:41:54 compute-0 nova_compute[265391]: 2025-09-30 18:41:54.652 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3944MB free_disk=39.90116500854492GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:41:54 compute-0 nova_compute[265391]: 2025-09-30 18:41:54.652 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:41:54 compute-0 nova_compute[265391]: 2025-09-30 18:41:54.653 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:41:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 18:41:54 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:41:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 18:41:54 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:41:54 compute-0 nova_compute[265391]: 2025-09-30 18:41:54.853 2 DEBUG nova.compute.manager [req-43a00b44-0f05-41c2-8469-70fa2cc9e4a8 req-9f969214-2062-416d-a10b-ac56d4a49c4b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Received event network-vif-unplugged-c8821620-973c-4db8-9c4b-766e7751348e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:41:54 compute-0 nova_compute[265391]: 2025-09-30 18:41:54.854 2 DEBUG oslo_concurrency.lockutils [req-43a00b44-0f05-41c2-8469-70fa2cc9e4a8 req-9f969214-2062-416d-a10b-ac56d4a49c4b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:41:54 compute-0 nova_compute[265391]: 2025-09-30 18:41:54.854 2 DEBUG oslo_concurrency.lockutils [req-43a00b44-0f05-41c2-8469-70fa2cc9e4a8 req-9f969214-2062-416d-a10b-ac56d4a49c4b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:41:54 compute-0 nova_compute[265391]: 2025-09-30 18:41:54.854 2 DEBUG oslo_concurrency.lockutils [req-43a00b44-0f05-41c2-8469-70fa2cc9e4a8 req-9f969214-2062-416d-a10b-ac56d4a49c4b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:41:54 compute-0 nova_compute[265391]: 2025-09-30 18:41:54.854 2 DEBUG nova.compute.manager [req-43a00b44-0f05-41c2-8469-70fa2cc9e4a8 req-9f969214-2062-416d-a10b-ac56d4a49c4b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] No waiting events found dispatching network-vif-unplugged-c8821620-973c-4db8-9c4b-766e7751348e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:41:54 compute-0 nova_compute[265391]: 2025-09-30 18:41:54.855 2 DEBUG nova.compute.manager [req-43a00b44-0f05-41c2-8469-70fa2cc9e4a8 req-9f969214-2062-416d-a10b-ac56d4a49c4b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Received event network-vif-unplugged-c8821620-973c-4db8-9c4b-766e7751348e for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:41:55 compute-0 ceph-mon[73755]: pgmap v1903: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 5.0 KiB/s rd, 9.2 KiB/s wr, 7 op/s
Sep 30 18:41:55 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:41:55 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:41:55 compute-0 nova_compute[265391]: 2025-09-30 18:41:55.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:41:55.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:55 compute-0 nova_compute[265391]: 2025-09-30 18:41:55.675 2 INFO nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Updating resource usage from migration f282835a-0309-42be-a72c-7d2d96cd2df8
Sep 30 18:41:55 compute-0 nova_compute[265391]: 2025-09-30 18:41:55.700 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Instance 61db6fe2-de1c-4e4b-899f-4e40c50227b7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Sep 30 18:41:55 compute-0 nova_compute[265391]: 2025-09-30 18:41:55.700 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Migration f282835a-0309-42be-a72c-7d2d96cd2df8 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Sep 30 18:41:55 compute-0 nova_compute[265391]: 2025-09-30 18:41:55.700 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:41:55 compute-0 nova_compute[265391]: 2025-09-30 18:41:55.701 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=39GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:41:54 up  1:45,  0 user,  load average: 0.84, 0.82, 0.85\n', 'num_instances': '2', 'num_vm_active': '2', 'num_task_migrating': '1', 'num_os_type_None': '2', 'num_proj_63c45bef63ef4b9f895b3bab865e1a84': '2', 'io_workload': '0', 'num_task_None': '1'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:41:55 compute-0 nova_compute[265391]: 2025-09-30 18:41:55.791 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:41:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:41:55.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.005 2 DEBUG nova.network.neutron [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Activated binding for port c8821620-973c-4db8-9c4b-766e7751348e and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.12/site-packages/nova/network/neutron.py:3241
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.006 2 DEBUG nova.compute.manager [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "c8821620-973c-4db8-9c4b-766e7751348e", "address": "fa:16:3e:49:7f:fe", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8821620-97", "ovs_interfaceid": "c8821620-973c-4db8-9c4b-766e7751348e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10059
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.007 2 DEBUG nova.virt.libvirt.vif [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:40:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-771828615',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-771828615',id=30,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:40:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='63c45bef63ef4b9f895b3bab865e1a84',ramdisk_id='',reservation_id='r-0wzw0y3q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-134702932',owner_user_name='tempest-TestExecuteWorkloadBalanceStrategy-134702932-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:41:32Z,user_data=None,user_id='5717e8cb8548429b948a23763350ab4a',uuid=86b9b1e5-516e-43c2-b180-7ef40f7c1c67,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c8821620-973c-4db8-9c4b-766e7751348e", "address": "fa:16:3e:49:7f:fe", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8821620-97", "ovs_interfaceid": "c8821620-973c-4db8-9c4b-766e7751348e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug 
/usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.007 2 DEBUG nova.network.os_vif_util [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "c8821620-973c-4db8-9c4b-766e7751348e", "address": "fa:16:3e:49:7f:fe", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8821620-97", "ovs_interfaceid": "c8821620-973c-4db8-9c4b-766e7751348e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.008 2 DEBUG nova.network.os_vif_util [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:7f:fe,bridge_name='br-int',has_traffic_filtering=True,id=c8821620-973c-4db8-9c4b-766e7751348e,network=Network(c8484b9b-b34e-4c32-b987-029d8fcb2a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8821620-97') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.009 2 DEBUG os_vif [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:7f:fe,bridge_name='br-int',has_traffic_filtering=True,id=c8821620-973c-4db8-9c4b-766e7751348e,network=Network(c8484b9b-b34e-4c32-b987-029d8fcb2a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8821620-97') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.011 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8821620-97, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.017 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=feb95760-ec5d-479c-93a2-35095f2e20fb) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.020 2 INFO os_vif [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:7f:fe,bridge_name='br-int',has_traffic_filtering=True,id=c8821620-973c-4db8-9c4b-766e7751348e,network=Network(c8484b9b-b34e-4c32-b987-029d8fcb2a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8821620-97')
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.021 2 DEBUG oslo_concurrency.lockutils [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:41:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1904: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 4.7 KiB/s rd, 9.2 KiB/s wr, 6 op/s
Sep 30 18:41:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:41:56 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2329631877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:41:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.208 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.215 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:41:56 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2329631877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:41:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 18:41:56 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:41:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 18:41:56 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:41:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Sep 30 18:41:56 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Sep 30 18:41:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:41:56 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:41:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:41:56 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:41:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1905: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 5.5 KiB/s rd, 11 KiB/s wr, 8 op/s
Sep 30 18:41:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:41:56 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:41:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:41:56 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:41:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:41:56 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:41:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:41:56 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:41:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:41:56 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:41:56 compute-0 sudo[349518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:41:56 compute-0 sudo[349518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:41:56 compute-0 sudo[349518]: pam_unix(sudo:session): session closed for user root
Sep 30 18:41:56 compute-0 sudo[349543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:41:56 compute-0 sudo[349543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.725 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:41:56 compute-0 podman[349609]: 2025-09-30 18:41:56.908648908 +0000 UTC m=+0.041043071 container create 47bee10e81e84f37e6b109e2656d1168aba3875d0601d8dde390c0239a6cd2c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_golick, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.922 2 DEBUG nova.compute.manager [req-aa5f2daf-8dfb-4bf0-b811-fb812f88be4b req-1cc11070-1332-4cbd-86ca-4fb2e8feea55 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Received event network-vif-plugged-c8821620-973c-4db8-9c4b-766e7751348e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.922 2 DEBUG oslo_concurrency.lockutils [req-aa5f2daf-8dfb-4bf0-b811-fb812f88be4b req-1cc11070-1332-4cbd-86ca-4fb2e8feea55 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.922 2 DEBUG oslo_concurrency.lockutils [req-aa5f2daf-8dfb-4bf0-b811-fb812f88be4b req-1cc11070-1332-4cbd-86ca-4fb2e8feea55 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.923 2 DEBUG oslo_concurrency.lockutils [req-aa5f2daf-8dfb-4bf0-b811-fb812f88be4b req-1cc11070-1332-4cbd-86ca-4fb2e8feea55 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.923 2 DEBUG nova.compute.manager [req-aa5f2daf-8dfb-4bf0-b811-fb812f88be4b req-1cc11070-1332-4cbd-86ca-4fb2e8feea55 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] No waiting events found dispatching network-vif-plugged-c8821620-973c-4db8-9c4b-766e7751348e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.923 2 WARNING nova.compute.manager [req-aa5f2daf-8dfb-4bf0-b811-fb812f88be4b req-1cc11070-1332-4cbd-86ca-4fb2e8feea55 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Received unexpected event network-vif-plugged-c8821620-973c-4db8-9c4b-766e7751348e for instance with vm_state active and task_state migrating.
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.923 2 DEBUG nova.compute.manager [req-aa5f2daf-8dfb-4bf0-b811-fb812f88be4b req-1cc11070-1332-4cbd-86ca-4fb2e8feea55 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Received event network-vif-unplugged-c8821620-973c-4db8-9c4b-766e7751348e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.923 2 DEBUG oslo_concurrency.lockutils [req-aa5f2daf-8dfb-4bf0-b811-fb812f88be4b req-1cc11070-1332-4cbd-86ca-4fb2e8feea55 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.923 2 DEBUG oslo_concurrency.lockutils [req-aa5f2daf-8dfb-4bf0-b811-fb812f88be4b req-1cc11070-1332-4cbd-86ca-4fb2e8feea55 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.923 2 DEBUG oslo_concurrency.lockutils [req-aa5f2daf-8dfb-4bf0-b811-fb812f88be4b req-1cc11070-1332-4cbd-86ca-4fb2e8feea55 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.924 2 DEBUG nova.compute.manager [req-aa5f2daf-8dfb-4bf0-b811-fb812f88be4b req-1cc11070-1332-4cbd-86ca-4fb2e8feea55 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] No waiting events found dispatching network-vif-unplugged-c8821620-973c-4db8-9c4b-766e7751348e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.924 2 DEBUG nova.compute.manager [req-aa5f2daf-8dfb-4bf0-b811-fb812f88be4b req-1cc11070-1332-4cbd-86ca-4fb2e8feea55 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Received event network-vif-unplugged-c8821620-973c-4db8-9c4b-766e7751348e for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.924 2 DEBUG nova.compute.manager [req-aa5f2daf-8dfb-4bf0-b811-fb812f88be4b req-1cc11070-1332-4cbd-86ca-4fb2e8feea55 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Received event network-vif-plugged-c8821620-973c-4db8-9c4b-766e7751348e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.924 2 DEBUG oslo_concurrency.lockutils [req-aa5f2daf-8dfb-4bf0-b811-fb812f88be4b req-1cc11070-1332-4cbd-86ca-4fb2e8feea55 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.924 2 DEBUG oslo_concurrency.lockutils [req-aa5f2daf-8dfb-4bf0-b811-fb812f88be4b req-1cc11070-1332-4cbd-86ca-4fb2e8feea55 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.925 2 DEBUG oslo_concurrency.lockutils [req-aa5f2daf-8dfb-4bf0-b811-fb812f88be4b req-1cc11070-1332-4cbd-86ca-4fb2e8feea55 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.925 2 DEBUG nova.compute.manager [req-aa5f2daf-8dfb-4bf0-b811-fb812f88be4b req-1cc11070-1332-4cbd-86ca-4fb2e8feea55 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] No waiting events found dispatching network-vif-plugged-c8821620-973c-4db8-9c4b-766e7751348e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:41:56 compute-0 nova_compute[265391]: 2025-09-30 18:41:56.925 2 WARNING nova.compute.manager [req-aa5f2daf-8dfb-4bf0-b811-fb812f88be4b req-1cc11070-1332-4cbd-86ca-4fb2e8feea55 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Received unexpected event network-vif-plugged-c8821620-973c-4db8-9c4b-766e7751348e for instance with vm_state active and task_state migrating.
Sep 30 18:41:56 compute-0 systemd[1]: Started libpod-conmon-47bee10e81e84f37e6b109e2656d1168aba3875d0601d8dde390c0239a6cd2c9.scope.
Sep 30 18:41:56 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:41:56 compute-0 podman[349609]: 2025-09-30 18:41:56.889787365 +0000 UTC m=+0.022181558 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:41:57 compute-0 podman[349609]: 2025-09-30 18:41:57.006864881 +0000 UTC m=+0.139259074 container init 47bee10e81e84f37e6b109e2656d1168aba3875d0601d8dde390c0239a6cd2c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_golick, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:41:57 compute-0 podman[349609]: 2025-09-30 18:41:57.015746599 +0000 UTC m=+0.148140772 container start 47bee10e81e84f37e6b109e2656d1168aba3875d0601d8dde390c0239a6cd2c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_golick, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Sep 30 18:41:57 compute-0 podman[349609]: 2025-09-30 18:41:57.019408132 +0000 UTC m=+0.151802335 container attach 47bee10e81e84f37e6b109e2656d1168aba3875d0601d8dde390c0239a6cd2c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:41:57 compute-0 nostalgic_golick[349625]: 167 167
Sep 30 18:41:57 compute-0 systemd[1]: libpod-47bee10e81e84f37e6b109e2656d1168aba3875d0601d8dde390c0239a6cd2c9.scope: Deactivated successfully.
Sep 30 18:41:57 compute-0 podman[349609]: 2025-09-30 18:41:57.023042765 +0000 UTC m=+0.155436938 container died 47bee10e81e84f37e6b109e2656d1168aba3875d0601d8dde390c0239a6cd2c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_golick, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325)
Sep 30 18:41:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-494e0ffe4c9990a5303c552f94d741b1dc8024e645e054e2e5a344dd2861803c-merged.mount: Deactivated successfully.
Sep 30 18:41:57 compute-0 podman[349609]: 2025-09-30 18:41:57.071766702 +0000 UTC m=+0.204160875 container remove 47bee10e81e84f37e6b109e2656d1168aba3875d0601d8dde390c0239a6cd2c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_golick, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Sep 30 18:41:57 compute-0 systemd[1]: libpod-conmon-47bee10e81e84f37e6b109e2656d1168aba3875d0601d8dde390c0239a6cd2c9.scope: Deactivated successfully.
Sep 30 18:41:57 compute-0 nova_compute[265391]: 2025-09-30 18:41:57.234 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:41:57 compute-0 nova_compute[265391]: 2025-09-30 18:41:57.235 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.583s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:41:57 compute-0 nova_compute[265391]: 2025-09-30 18:41:57.235 2 DEBUG oslo_concurrency.lockutils [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 1.215s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:41:57 compute-0 nova_compute[265391]: 2025-09-30 18:41:57.236 2 DEBUG oslo_concurrency.lockutils [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:41:57 compute-0 nova_compute[265391]: 2025-09-30 18:41:57.236 2 DEBUG nova.compute.manager [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10082
Sep 30 18:41:57 compute-0 nova_compute[265391]: 2025-09-30 18:41:57.237 2 INFO nova.virt.libvirt.driver [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Deleting instance files /var/lib/nova/instances/86b9b1e5-516e-43c2-b180-7ef40f7c1c67_del
Sep 30 18:41:57 compute-0 nova_compute[265391]: 2025-09-30 18:41:57.238 2 INFO nova.virt.libvirt.driver [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Deletion of /var/lib/nova/instances/86b9b1e5-516e-43c2-b180-7ef40f7c1c67_del complete
Sep 30 18:41:57 compute-0 podman[349648]: 2025-09-30 18:41:57.296944775 +0000 UTC m=+0.058094298 container create 3f6ef43f4a7f2a7adb9b9d3582314965e8de9e235601ce3b36d344a9fa2fc648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_cerf, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Sep 30 18:41:57 compute-0 systemd[1]: Started libpod-conmon-3f6ef43f4a7f2a7adb9b9d3582314965e8de9e235601ce3b36d344a9fa2fc648.scope.
Sep 30 18:41:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:41:57.353Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:41:57 compute-0 podman[349648]: 2025-09-30 18:41:57.268947308 +0000 UTC m=+0.030096871 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:41:57 compute-0 ceph-mon[73755]: pgmap v1904: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 4.7 KiB/s rd, 9.2 KiB/s wr, 6 op/s
Sep 30 18:41:57 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:41:57 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:41:57 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Sep 30 18:41:57 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:41:57 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:41:57 compute-0 ceph-mon[73755]: pgmap v1905: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 5.5 KiB/s rd, 11 KiB/s wr, 8 op/s
Sep 30 18:41:57 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:41:57 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:41:57 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:41:57 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:41:57 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:41:57 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:41:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2883496a66016887a1eaeb7bdf9ad9749e4cb698b1f29049c5675008a59e3655/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:41:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2883496a66016887a1eaeb7bdf9ad9749e4cb698b1f29049c5675008a59e3655/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:41:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2883496a66016887a1eaeb7bdf9ad9749e4cb698b1f29049c5675008a59e3655/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:41:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2883496a66016887a1eaeb7bdf9ad9749e4cb698b1f29049c5675008a59e3655/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:41:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2883496a66016887a1eaeb7bdf9ad9749e4cb698b1f29049c5675008a59e3655/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:41:57 compute-0 podman[349648]: 2025-09-30 18:41:57.399988182 +0000 UTC m=+0.161137755 container init 3f6ef43f4a7f2a7adb9b9d3582314965e8de9e235601ce3b36d344a9fa2fc648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_cerf, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:41:57 compute-0 podman[349648]: 2025-09-30 18:41:57.410142772 +0000 UTC m=+0.171292305 container start 3f6ef43f4a7f2a7adb9b9d3582314965e8de9e235601ce3b36d344a9fa2fc648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_cerf, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:41:57 compute-0 podman[349648]: 2025-09-30 18:41:57.414465132 +0000 UTC m=+0.175614695 container attach 3f6ef43f4a7f2a7adb9b9d3582314965e8de9e235601ce3b36d344a9fa2fc648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_cerf, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Sep 30 18:41:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:41:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2727541640' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:41:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:41:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2727541640' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:41:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:41:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:41:57.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:41:57 compute-0 angry_cerf[349664]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:41:57 compute-0 angry_cerf[349664]: --> All data devices are unavailable
Sep 30 18:41:57 compute-0 systemd[1]: libpod-3f6ef43f4a7f2a7adb9b9d3582314965e8de9e235601ce3b36d344a9fa2fc648.scope: Deactivated successfully.
Sep 30 18:41:57 compute-0 podman[349648]: 2025-09-30 18:41:57.795607746 +0000 UTC m=+0.556757339 container died 3f6ef43f4a7f2a7adb9b9d3582314965e8de9e235601ce3b36d344a9fa2fc648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_cerf, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325)
Sep 30 18:41:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-2883496a66016887a1eaeb7bdf9ad9749e4cb698b1f29049c5675008a59e3655-merged.mount: Deactivated successfully.
Sep 30 18:41:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:41:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:41:57.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:41:57 compute-0 podman[349648]: 2025-09-30 18:41:57.869643591 +0000 UTC m=+0.630793154 container remove 3f6ef43f4a7f2a7adb9b9d3582314965e8de9e235601ce3b36d344a9fa2fc648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_cerf, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:41:57 compute-0 systemd[1]: libpod-conmon-3f6ef43f4a7f2a7adb9b9d3582314965e8de9e235601ce3b36d344a9fa2fc648.scope: Deactivated successfully.
Sep 30 18:41:57 compute-0 sudo[349543]: pam_unix(sudo:session): session closed for user root
Sep 30 18:41:58 compute-0 sudo[349693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:41:58 compute-0 sudo[349693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:41:58 compute-0 sudo[349693]: pam_unix(sudo:session): session closed for user root
Sep 30 18:41:58 compute-0 sudo[349718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:41:58 compute-0 sudo[349718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:41:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1906: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 6.0 KiB/s rd, 11 KiB/s wr, 8 op/s
Sep 30 18:41:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/134415393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:41:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2727541640' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:41:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2727541640' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:41:58 compute-0 podman[349785]: 2025-09-30 18:41:58.621848099 +0000 UTC m=+0.041979845 container create 330a68e981abfcd591a9e0f69ad459ee4c06b48278e3c82a01aaa19158730d1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lamarr, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:41:58 compute-0 systemd[1]: Started libpod-conmon-330a68e981abfcd591a9e0f69ad459ee4c06b48278e3c82a01aaa19158730d1c.scope.
Sep 30 18:41:58 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:41:58 compute-0 podman[349785]: 2025-09-30 18:41:58.601191331 +0000 UTC m=+0.021323127 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:41:58 compute-0 podman[349785]: 2025-09-30 18:41:58.700754489 +0000 UTC m=+0.120886265 container init 330a68e981abfcd591a9e0f69ad459ee4c06b48278e3c82a01aaa19158730d1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lamarr, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Sep 30 18:41:58 compute-0 podman[349785]: 2025-09-30 18:41:58.708462216 +0000 UTC m=+0.128593962 container start 330a68e981abfcd591a9e0f69ad459ee4c06b48278e3c82a01aaa19158730d1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lamarr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Sep 30 18:41:58 compute-0 podman[349785]: 2025-09-30 18:41:58.712274483 +0000 UTC m=+0.132406259 container attach 330a68e981abfcd591a9e0f69ad459ee4c06b48278e3c82a01aaa19158730d1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Sep 30 18:41:58 compute-0 competent_lamarr[349801]: 167 167
Sep 30 18:41:58 compute-0 systemd[1]: libpod-330a68e981abfcd591a9e0f69ad459ee4c06b48278e3c82a01aaa19158730d1c.scope: Deactivated successfully.
Sep 30 18:41:58 compute-0 podman[349785]: 2025-09-30 18:41:58.715059405 +0000 UTC m=+0.135191181 container died 330a68e981abfcd591a9e0f69ad459ee4c06b48278e3c82a01aaa19158730d1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lamarr, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 18:41:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f288b08a7d32cefea69530bf4b1aa6f91154c80ab3c8a71476962b0a3683936-merged.mount: Deactivated successfully.
Sep 30 18:41:58 compute-0 podman[349785]: 2025-09-30 18:41:58.753219091 +0000 UTC m=+0.173350847 container remove 330a68e981abfcd591a9e0f69ad459ee4c06b48278e3c82a01aaa19158730d1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Sep 30 18:41:58 compute-0 systemd[1]: libpod-conmon-330a68e981abfcd591a9e0f69ad459ee4c06b48278e3c82a01aaa19158730d1c.scope: Deactivated successfully.
Sep 30 18:41:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:41:58] "GET /metrics HTTP/1.1" 200 46726 "" "Prometheus/2.51.0"
Sep 30 18:41:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:41:58] "GET /metrics HTTP/1.1" 200 46726 "" "Prometheus/2.51.0"
Sep 30 18:41:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:41:58.898Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:41:58 compute-0 podman[349824]: 2025-09-30 18:41:58.936998274 +0000 UTC m=+0.042359155 container create 0e1b3b5c96ae8304cee9a2ddffc4d2c249fa49aaef675a4f604ff863758f7100 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True)
Sep 30 18:41:58 compute-0 systemd[1]: Started libpod-conmon-0e1b3b5c96ae8304cee9a2ddffc4d2c249fa49aaef675a4f604ff863758f7100.scope.
Sep 30 18:41:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:41:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:41:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:41:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:41:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:41:59 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:41:59 compute-0 podman[349824]: 2025-09-30 18:41:58.917078805 +0000 UTC m=+0.022439666 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:41:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69debe24e84b558a1acfe60ba9509388de918f9420a4afa3b778649bef872bb0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:41:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69debe24e84b558a1acfe60ba9509388de918f9420a4afa3b778649bef872bb0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:41:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69debe24e84b558a1acfe60ba9509388de918f9420a4afa3b778649bef872bb0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:41:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69debe24e84b558a1acfe60ba9509388de918f9420a4afa3b778649bef872bb0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:41:59 compute-0 podman[349824]: 2025-09-30 18:41:59.034250333 +0000 UTC m=+0.139611184 container init 0e1b3b5c96ae8304cee9a2ddffc4d2c249fa49aaef675a4f604ff863758f7100 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_thompson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:41:59 compute-0 podman[349824]: 2025-09-30 18:41:59.04430418 +0000 UTC m=+0.149665021 container start 0e1b3b5c96ae8304cee9a2ddffc4d2c249fa49aaef675a4f604ff863758f7100 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:41:59 compute-0 podman[349824]: 2025-09-30 18:41:59.048048406 +0000 UTC m=+0.153409277 container attach 0e1b3b5c96ae8304cee9a2ddffc4d2c249fa49aaef675a4f604ff863758f7100 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Sep 30 18:41:59 compute-0 interesting_thompson[349840]: {
Sep 30 18:41:59 compute-0 interesting_thompson[349840]:     "0": [
Sep 30 18:41:59 compute-0 interesting_thompson[349840]:         {
Sep 30 18:41:59 compute-0 interesting_thompson[349840]:             "devices": [
Sep 30 18:41:59 compute-0 interesting_thompson[349840]:                 "/dev/loop3"
Sep 30 18:41:59 compute-0 interesting_thompson[349840]:             ],
Sep 30 18:41:59 compute-0 interesting_thompson[349840]:             "lv_name": "ceph_lv0",
Sep 30 18:41:59 compute-0 interesting_thompson[349840]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:41:59 compute-0 interesting_thompson[349840]:             "lv_size": "21470642176",
Sep 30 18:41:59 compute-0 interesting_thompson[349840]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:41:59 compute-0 interesting_thompson[349840]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:41:59 compute-0 interesting_thompson[349840]:             "name": "ceph_lv0",
Sep 30 18:41:59 compute-0 interesting_thompson[349840]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:41:59 compute-0 interesting_thompson[349840]:             "tags": {
Sep 30 18:41:59 compute-0 interesting_thompson[349840]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:41:59 compute-0 interesting_thompson[349840]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:41:59 compute-0 interesting_thompson[349840]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:41:59 compute-0 interesting_thompson[349840]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:41:59 compute-0 interesting_thompson[349840]:                 "ceph.cluster_name": "ceph",
Sep 30 18:41:59 compute-0 interesting_thompson[349840]:                 "ceph.crush_device_class": "",
Sep 30 18:41:59 compute-0 interesting_thompson[349840]:                 "ceph.encrypted": "0",
Sep 30 18:41:59 compute-0 interesting_thompson[349840]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:41:59 compute-0 interesting_thompson[349840]:                 "ceph.osd_id": "0",
Sep 30 18:41:59 compute-0 interesting_thompson[349840]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:41:59 compute-0 interesting_thompson[349840]:                 "ceph.type": "block",
Sep 30 18:41:59 compute-0 interesting_thompson[349840]:                 "ceph.vdo": "0",
Sep 30 18:41:59 compute-0 interesting_thompson[349840]:                 "ceph.with_tpm": "0"
Sep 30 18:41:59 compute-0 interesting_thompson[349840]:             },
Sep 30 18:41:59 compute-0 interesting_thompson[349840]:             "type": "block",
Sep 30 18:41:59 compute-0 interesting_thompson[349840]:             "vg_name": "ceph_vg0"
Sep 30 18:41:59 compute-0 interesting_thompson[349840]:         }
Sep 30 18:41:59 compute-0 interesting_thompson[349840]:     ]
Sep 30 18:41:59 compute-0 interesting_thompson[349840]: }
Sep 30 18:41:59 compute-0 systemd[1]: libpod-0e1b3b5c96ae8304cee9a2ddffc4d2c249fa49aaef675a4f604ff863758f7100.scope: Deactivated successfully.
Sep 30 18:41:59 compute-0 podman[349824]: 2025-09-30 18:41:59.335696197 +0000 UTC m=+0.441057038 container died 0e1b3b5c96ae8304cee9a2ddffc4d2c249fa49aaef675a4f604ff863758f7100 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_thompson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:41:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-69debe24e84b558a1acfe60ba9509388de918f9420a4afa3b778649bef872bb0-merged.mount: Deactivated successfully.
Sep 30 18:41:59 compute-0 podman[349824]: 2025-09-30 18:41:59.383955342 +0000 UTC m=+0.489316193 container remove 0e1b3b5c96ae8304cee9a2ddffc4d2c249fa49aaef675a4f604ff863758f7100 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:41:59 compute-0 systemd[1]: libpod-conmon-0e1b3b5c96ae8304cee9a2ddffc4d2c249fa49aaef675a4f604ff863758f7100.scope: Deactivated successfully.
Sep 30 18:41:59 compute-0 sudo[349718]: pam_unix(sudo:session): session closed for user root
Sep 30 18:41:59 compute-0 ceph-mon[73755]: pgmap v1906: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 6.0 KiB/s rd, 11 KiB/s wr, 8 op/s
Sep 30 18:41:59 compute-0 sudo[349866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:41:59 compute-0 sudo[349866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:41:59 compute-0 sudo[349866]: pam_unix(sudo:session): session closed for user root
Sep 30 18:41:59 compute-0 sudo[349891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:41:59 compute-0 sudo[349891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:41:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:41:59.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:41:59 compute-0 sudo[349917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:41:59 compute-0 sudo[349917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:41:59 compute-0 sudo[349917]: pam_unix(sudo:session): session closed for user root
Sep 30 18:41:59 compute-0 podman[276673]: time="2025-09-30T18:41:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:41:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:41:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:41:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10766 "" "Go-http-client/1.1"
Sep 30 18:41:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:41:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:41:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:41:59.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:00 compute-0 podman[349984]: 2025-09-30 18:42:00.002484491 +0000 UTC m=+0.036101275 container create c57f44f94e81be21351b5379f1f34c05b2a377b59bb69930918003b4930c122b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Sep 30 18:42:00 compute-0 systemd[1]: Started libpod-conmon-c57f44f94e81be21351b5379f1f34c05b2a377b59bb69930918003b4930c122b.scope.
Sep 30 18:42:00 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:42:00 compute-0 podman[349984]: 2025-09-30 18:42:00.079582114 +0000 UTC m=+0.113198948 container init c57f44f94e81be21351b5379f1f34c05b2a377b59bb69930918003b4930c122b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_hypatia, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:42:00 compute-0 podman[349984]: 2025-09-30 18:41:59.986867102 +0000 UTC m=+0.020483906 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:42:00 compute-0 podman[349984]: 2025-09-30 18:42:00.08607601 +0000 UTC m=+0.119692794 container start c57f44f94e81be21351b5379f1f34c05b2a377b59bb69930918003b4930c122b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_hypatia, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:42:00 compute-0 podman[349984]: 2025-09-30 18:42:00.089938249 +0000 UTC m=+0.123555083 container attach c57f44f94e81be21351b5379f1f34c05b2a377b59bb69930918003b4930c122b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_hypatia, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Sep 30 18:42:00 compute-0 cranky_hypatia[350000]: 167 167
Sep 30 18:42:00 compute-0 systemd[1]: libpod-c57f44f94e81be21351b5379f1f34c05b2a377b59bb69930918003b4930c122b.scope: Deactivated successfully.
Sep 30 18:42:00 compute-0 podman[349984]: 2025-09-30 18:42:00.09155063 +0000 UTC m=+0.125167424 container died c57f44f94e81be21351b5379f1f34c05b2a377b59bb69930918003b4930c122b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Sep 30 18:42:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac9a929d05242e147922aaa526d258c9c49a0617123e67a72986c775f81e3728-merged.mount: Deactivated successfully.
Sep 30 18:42:00 compute-0 podman[349984]: 2025-09-30 18:42:00.126401212 +0000 UTC m=+0.160018006 container remove c57f44f94e81be21351b5379f1f34c05b2a377b59bb69930918003b4930c122b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_hypatia, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:42:00 compute-0 systemd[1]: libpod-conmon-c57f44f94e81be21351b5379f1f34c05b2a377b59bb69930918003b4930c122b.scope: Deactivated successfully.
Sep 30 18:42:00 compute-0 podman[350024]: 2025-09-30 18:42:00.329067569 +0000 UTC m=+0.045834004 container create 3c053b686e020e2dcc243c76ed922784563fc54c1c050debb5a35232174931fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Sep 30 18:42:00 compute-0 systemd[1]: Started libpod-conmon-3c053b686e020e2dcc243c76ed922784563fc54c1c050debb5a35232174931fe.scope.
Sep 30 18:42:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1907: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 5.7 KiB/s rd, 11 KiB/s wr, 8 op/s
Sep 30 18:42:00 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:42:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc3244ac0ac1b51e123bc705e9dbe5081f05b7f61532433890e0592331ca5e68/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:42:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc3244ac0ac1b51e123bc705e9dbe5081f05b7f61532433890e0592331ca5e68/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:42:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc3244ac0ac1b51e123bc705e9dbe5081f05b7f61532433890e0592331ca5e68/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:42:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc3244ac0ac1b51e123bc705e9dbe5081f05b7f61532433890e0592331ca5e68/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:42:00 compute-0 podman[350024]: 2025-09-30 18:42:00.405483424 +0000 UTC m=+0.122249889 container init 3c053b686e020e2dcc243c76ed922784563fc54c1c050debb5a35232174931fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_chaplygin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:42:00 compute-0 podman[350024]: 2025-09-30 18:42:00.313607253 +0000 UTC m=+0.030373708 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:42:00 compute-0 podman[350024]: 2025-09-30 18:42:00.411284613 +0000 UTC m=+0.128051048 container start 3c053b686e020e2dcc243c76ed922784563fc54c1c050debb5a35232174931fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_chaplygin, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Sep 30 18:42:00 compute-0 podman[350024]: 2025-09-30 18:42:00.414617688 +0000 UTC m=+0.131384133 container attach 3c053b686e020e2dcc243c76ed922784563fc54c1c050debb5a35232174931fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_chaplygin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:42:00 compute-0 lvm[350115]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:42:00 compute-0 lvm[350115]: VG ceph_vg0 finished
Sep 30 18:42:00 compute-0 lvm[350119]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:42:00 compute-0 lvm[350119]: VG ceph_vg0 finished
Sep 30 18:42:00 compute-0 dazzling_chaplygin[350040]: {}
Sep 30 18:42:01 compute-0 systemd[1]: libpod-3c053b686e020e2dcc243c76ed922784563fc54c1c050debb5a35232174931fe.scope: Deactivated successfully.
Sep 30 18:42:01 compute-0 nova_compute[265391]: 2025-09-30 18:42:01.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:01 compute-0 podman[350120]: 2025-09-30 18:42:01.064476289 +0000 UTC m=+0.027319361 container died 3c053b686e020e2dcc243c76ed922784563fc54c1c050debb5a35232174931fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Sep 30 18:42:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc3244ac0ac1b51e123bc705e9dbe5081f05b7f61532433890e0592331ca5e68-merged.mount: Deactivated successfully.
Sep 30 18:42:01 compute-0 podman[350120]: 2025-09-30 18:42:01.107678074 +0000 UTC m=+0.070521106 container remove 3c053b686e020e2dcc243c76ed922784563fc54c1c050debb5a35232174931fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_chaplygin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:42:01 compute-0 systemd[1]: libpod-conmon-3c053b686e020e2dcc243c76ed922784563fc54c1c050debb5a35232174931fe.scope: Deactivated successfully.
Sep 30 18:42:01 compute-0 sudo[349891]: pam_unix(sudo:session): session closed for user root
Sep 30 18:42:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:42:01 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:42:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:42:01 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:42:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:42:01 compute-0 nova_compute[265391]: 2025-09-30 18:42:01.243 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:42:01 compute-0 nova_compute[265391]: 2025-09-30 18:42:01.243 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:42:01 compute-0 nova_compute[265391]: 2025-09-30 18:42:01.244 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:42:01 compute-0 nova_compute[265391]: 2025-09-30 18:42:01.244 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:42:01 compute-0 sudo[350135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:42:01 compute-0 sudo[350135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:42:01 compute-0 sudo[350135]: pam_unix(sudo:session): session closed for user root
Sep 30 18:42:01 compute-0 openstack_network_exporter[279566]: ERROR   18:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:42:01 compute-0 openstack_network_exporter[279566]: ERROR   18:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:42:01 compute-0 openstack_network_exporter[279566]: ERROR   18:42:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:42:01 compute-0 openstack_network_exporter[279566]: ERROR   18:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:42:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:42:01 compute-0 openstack_network_exporter[279566]: ERROR   18:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:42:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:42:01 compute-0 nova_compute[265391]: 2025-09-30 18:42:01.424 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:42:01 compute-0 ceph-mon[73755]: pgmap v1907: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 5.7 KiB/s rd, 11 KiB/s wr, 8 op/s
Sep 30 18:42:01 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:42:01 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:42:01 compute-0 nova_compute[265391]: 2025-09-30 18:42:01.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:42:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:42:01.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:42:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:42:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:42:01.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:42:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1908: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 5.7 KiB/s rd, 11 KiB/s wr, 8 op/s
Sep 30 18:42:02 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=infra.usagestats t=2025-09-30T18:42:02.514676809Z level=info msg="Usage stats are ready to report"
Sep 30 18:42:03 compute-0 ceph-mon[73755]: pgmap v1908: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 5.7 KiB/s rd, 11 KiB/s wr, 8 op/s
Sep 30 18:42:03 compute-0 podman[350162]: 2025-09-30 18:42:03.525041555 +0000 UTC m=+0.064732758 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Sep 30 18:42:03 compute-0 podman[350164]: 2025-09-30 18:42:03.527913618 +0000 UTC m=+0.067183790 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Sep 30 18:42:03 compute-0 podman[350163]: 2025-09-30 18:42:03.555377221 +0000 UTC m=+0.094815157 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_id=ovn_controller)
Sep 30 18:42:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:42:03.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:42:03.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:42:03.880Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:42:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:42:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:42:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:42:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:42:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1909: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 796 B/s rd, 2.7 KiB/s wr, 1 op/s
Sep 30 18:42:05 compute-0 ceph-mon[73755]: pgmap v1909: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 796 B/s rd, 2.7 KiB/s wr, 1 op/s
Sep 30 18:42:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:42:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:42:05.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:42:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:42:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:42:05.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:42:06 compute-0 nova_compute[265391]: 2025-09-30 18:42:06.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:42:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1910: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 796 B/s rd, 2.7 KiB/s wr, 1 op/s
Sep 30 18:42:06 compute-0 nova_compute[265391]: 2025-09-30 18:42:06.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:42:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:42:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:42:07.353Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:42:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:42:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:42:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:42:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:42:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:42:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:42:07 compute-0 ceph-mon[73755]: pgmap v1910: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 796 B/s rd, 2.7 KiB/s wr, 1 op/s
Sep 30 18:42:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:42:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:42:07.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:42:07.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002276388363205704 of space, bias 1.0, pg target 0.45527767264114083 quantized to 32 (current 32)
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:42:08
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', '.nfs', 'images', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', '.mgr', 'volumes']
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:42:08 compute-0 nova_compute[265391]: 2025-09-30 18:42:08.277 2 DEBUG oslo_concurrency.lockutils [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:42:08 compute-0 nova_compute[265391]: 2025-09-30 18:42:08.278 2 DEBUG oslo_concurrency.lockutils [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:42:08 compute-0 nova_compute[265391]: 2025-09-30 18:42:08.278 2 DEBUG oslo_concurrency.lockutils [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "86b9b1e5-516e-43c2-b180-7ef40f7c1c67-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1911: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 2.3 KiB/s wr, 1 op/s
Sep 30 18:42:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:42:08] "GET /metrics HTTP/1.1" 200 46724 "" "Prometheus/2.51.0"
Sep 30 18:42:08 compute-0 nova_compute[265391]: 2025-09-30 18:42:08.790 2 DEBUG oslo_concurrency.lockutils [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:42:08 compute-0 nova_compute[265391]: 2025-09-30 18:42:08.791 2 DEBUG oslo_concurrency.lockutils [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:42:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:42:08] "GET /metrics HTTP/1.1" 200 46724 "" "Prometheus/2.51.0"
Sep 30 18:42:08 compute-0 nova_compute[265391]: 2025-09-30 18:42:08.791 2 DEBUG oslo_concurrency.lockutils [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:42:08 compute-0 nova_compute[265391]: 2025-09-30 18:42:08.791 2 DEBUG nova.compute.resource_tracker [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:42:08 compute-0 nova_compute[265391]: 2025-09-30 18:42:08.792 2 DEBUG oslo_concurrency.processutils [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:42:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:42:08.898Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:42:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:42:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:42:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:42:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:42:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:42:09 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2338394468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:42:09 compute-0 nova_compute[265391]: 2025-09-30 18:42:09.222 2 DEBUG oslo_concurrency.processutils [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:42:09 compute-0 ceph-mon[73755]: pgmap v1911: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 2.3 KiB/s wr, 1 op/s
Sep 30 18:42:09 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2338394468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:42:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:42:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:42:09.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:42:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:42:09.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:10 compute-0 nova_compute[265391]: 2025-09-30 18:42:10.266 2 DEBUG nova.virt.libvirt.driver [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:42:10 compute-0 nova_compute[265391]: 2025-09-30 18:42:10.267 2 DEBUG nova.virt.libvirt.driver [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:42:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1912: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 2.3 KiB/s wr, 0 op/s
Sep 30 18:42:10 compute-0 nova_compute[265391]: 2025-09-30 18:42:10.404 2 WARNING nova.virt.libvirt.driver [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:42:10 compute-0 nova_compute[265391]: 2025-09-30 18:42:10.405 2 DEBUG oslo_concurrency.processutils [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:42:10 compute-0 nova_compute[265391]: 2025-09-30 18:42:10.426 2 DEBUG oslo_concurrency.processutils [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.022s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:42:10 compute-0 nova_compute[265391]: 2025-09-30 18:42:10.427 2 DEBUG nova.compute.resource_tracker [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4037MB free_disk=39.90114974975586GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:42:10 compute-0 nova_compute[265391]: 2025-09-30 18:42:10.428 2 DEBUG oslo_concurrency.lockutils [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:42:10 compute-0 nova_compute[265391]: 2025-09-30 18:42:10.428 2 DEBUG oslo_concurrency.lockutils [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:42:11 compute-0 nova_compute[265391]: 2025-09-30 18:42:11.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:42:11 compute-0 nova_compute[265391]: 2025-09-30 18:42:11.449 2 DEBUG nova.compute.resource_tracker [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Migration for instance 86b9b1e5-516e-43c2-b180-7ef40f7c1c67 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Sep 30 18:42:11 compute-0 ceph-mon[73755]: pgmap v1912: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 2.3 KiB/s wr, 0 op/s
Sep 30 18:42:11 compute-0 nova_compute[265391]: 2025-09-30 18:42:11.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:42:11.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:42:11.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:11 compute-0 nova_compute[265391]: 2025-09-30 18:42:11.957 2 DEBUG nova.compute.resource_tracker [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1596
Sep 30 18:42:12 compute-0 nova_compute[265391]: 2025-09-30 18:42:12.003 2 DEBUG oslo_concurrency.lockutils [None req-f75ebc9c-45fc-48e9-8455-33334783b207 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Acquiring lock "61db6fe2-de1c-4e4b-899f-4e40c50227b7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:42:12 compute-0 nova_compute[265391]: 2025-09-30 18:42:12.003 2 DEBUG oslo_concurrency.lockutils [None req-f75ebc9c-45fc-48e9-8455-33334783b207 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "61db6fe2-de1c-4e4b-899f-4e40c50227b7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:42:12 compute-0 nova_compute[265391]: 2025-09-30 18:42:12.004 2 DEBUG oslo_concurrency.lockutils [None req-f75ebc9c-45fc-48e9-8455-33334783b207 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Acquiring lock "61db6fe2-de1c-4e4b-899f-4e40c50227b7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:42:12 compute-0 nova_compute[265391]: 2025-09-30 18:42:12.004 2 DEBUG oslo_concurrency.lockutils [None req-f75ebc9c-45fc-48e9-8455-33334783b207 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "61db6fe2-de1c-4e4b-899f-4e40c50227b7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:42:12 compute-0 nova_compute[265391]: 2025-09-30 18:42:12.004 2 DEBUG oslo_concurrency.lockutils [None req-f75ebc9c-45fc-48e9-8455-33334783b207 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "61db6fe2-de1c-4e4b-899f-4e40c50227b7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:42:12 compute-0 nova_compute[265391]: 2025-09-30 18:42:12.017 2 INFO nova.compute.manager [None req-f75ebc9c-45fc-48e9-8455-33334783b207 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Terminating instance
Sep 30 18:42:12 compute-0 nova_compute[265391]: 2025-09-30 18:42:12.115 2 DEBUG nova.compute.resource_tracker [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Instance 61db6fe2-de1c-4e4b-899f-4e40c50227b7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Sep 30 18:42:12 compute-0 nova_compute[265391]: 2025-09-30 18:42:12.115 2 DEBUG nova.compute.resource_tracker [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Migration f282835a-0309-42be-a72c-7d2d96cd2df8 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Sep 30 18:42:12 compute-0 nova_compute[265391]: 2025-09-30 18:42:12.116 2 DEBUG nova.compute.resource_tracker [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:42:12 compute-0 nova_compute[265391]: 2025-09-30 18:42:12.116 2 DEBUG nova.compute.resource_tracker [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=39GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:42:10 up  1:45,  0 user,  load average: 0.72, 0.80, 0.84\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_None': '1', 'num_os_type_None': '1', 'num_proj_63c45bef63ef4b9f895b3bab865e1a84': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:42:12 compute-0 nova_compute[265391]: 2025-09-30 18:42:12.158 2 DEBUG oslo_concurrency.processutils [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:42:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1913: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 2.3 KiB/s wr, 0 op/s
Sep 30 18:42:12 compute-0 nova_compute[265391]: 2025-09-30 18:42:12.533 2 DEBUG nova.compute.manager [None req-f75ebc9c-45fc-48e9-8455-33334783b207 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Sep 30 18:42:12 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:42:12 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3676218880' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:42:12 compute-0 kernel: tap25ca9495-1e (unregistering): left promiscuous mode
Sep 30 18:42:12 compute-0 NetworkManager[45059]: <info>  [1759257732.5942] device (tap25ca9495-1e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 18:42:12 compute-0 nova_compute[265391]: 2025-09-30 18:42:12.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:12 compute-0 ovn_controller[156242]: 2025-09-30T18:42:12Z|00272|binding|INFO|Releasing lport 25ca9495-1e4e-46a1-826a-4fcb2880a7ad from this chassis (sb_readonly=0)
Sep 30 18:42:12 compute-0 ovn_controller[156242]: 2025-09-30T18:42:12Z|00273|binding|INFO|Setting lport 25ca9495-1e4e-46a1-826a-4fcb2880a7ad down in Southbound
Sep 30 18:42:12 compute-0 ovn_controller[156242]: 2025-09-30T18:42:12Z|00274|binding|INFO|Removing iface tap25ca9495-1e ovn-installed in OVS
Sep 30 18:42:12 compute-0 nova_compute[265391]: 2025-09-30 18:42:12.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:12.614 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:d3:fa 10.100.0.13'], port_security=['fa:16:3e:b7:d3:fa 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '61db6fe2-de1c-4e4b-899f-4e40c50227b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63c45bef63ef4b9f895b3bab865e1a84', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'a9025550-4c18-4f21-a560-5b6f52684803', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55e305e6-0f4d-40bc-a70b-ac91f882ec57, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=25ca9495-1e4e-46a1-826a-4fcb2880a7ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:42:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:12.615 166158 INFO neutron.agent.ovn.metadata.agent [-] Port 25ca9495-1e4e-46a1-826a-4fcb2880a7ad in datapath c8484b9b-b34e-4c32-b987-029d8fcb2a28 unbound from our chassis
Sep 30 18:42:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:12.619 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c8484b9b-b34e-4c32-b987-029d8fcb2a28, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:42:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:12.621 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[c948bdfe-82ea-4369-8d06-7ae7f9c66580]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:42:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:12.622 166158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28 namespace which is not needed anymore
Sep 30 18:42:12 compute-0 nova_compute[265391]: 2025-09-30 18:42:12.621 2 DEBUG oslo_concurrency.processutils [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:42:12 compute-0 nova_compute[265391]: 2025-09-30 18:42:12.633 2 DEBUG nova.compute.provider_tree [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:42:12 compute-0 nova_compute[265391]: 2025-09-30 18:42:12.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:12 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Sep 30 18:42:12 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d0000001f.scope: Consumed 14.162s CPU time.
Sep 30 18:42:12 compute-0 systemd-machined[219917]: Machine qemu-23-instance-0000001f terminated.
Sep 30 18:42:12 compute-0 neutron-haproxy-ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28[348482]: [NOTICE]   (348486) : haproxy version is 3.0.5-8e879a5
Sep 30 18:42:12 compute-0 neutron-haproxy-ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28[348482]: [NOTICE]   (348486) : path to executable is /usr/sbin/haproxy
Sep 30 18:42:12 compute-0 neutron-haproxy-ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28[348482]: [WARNING]  (348486) : Exiting Master process...
Sep 30 18:42:12 compute-0 podman[350311]: 2025-09-30 18:42:12.768401867 +0000 UTC m=+0.031601830 container kill 17f1091bc96cede5883832b8d7a2d2204533dc9b7560ae14ded9db3860efb748 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Sep 30 18:42:12 compute-0 neutron-haproxy-ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28[348482]: [ALERT]    (348486) : Current worker (348488) exited with code 143 (Terminated)
Sep 30 18:42:12 compute-0 neutron-haproxy-ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28[348482]: [WARNING]  (348486) : All workers exited. Exiting... (0)
Sep 30 18:42:12 compute-0 systemd[1]: libpod-17f1091bc96cede5883832b8d7a2d2204533dc9b7560ae14ded9db3860efb748.scope: Deactivated successfully.
Sep 30 18:42:12 compute-0 nova_compute[265391]: 2025-09-30 18:42:12.776 2 INFO nova.virt.libvirt.driver [-] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Instance destroyed successfully.
Sep 30 18:42:12 compute-0 nova_compute[265391]: 2025-09-30 18:42:12.776 2 DEBUG nova.objects.instance [None req-f75ebc9c-45fc-48e9-8455-33334783b207 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lazy-loading 'resources' on Instance uuid 61db6fe2-de1c-4e4b-899f-4e40c50227b7 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:42:12 compute-0 nova_compute[265391]: 2025-09-30 18:42:12.798 2 DEBUG nova.compute.manager [req-52c6376e-6fc8-4be4-ac97-4e1f8ac69160 req-a3bd2b4f-213d-4944-bba0-6280cad4696b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Received event network-vif-unplugged-25ca9495-1e4e-46a1-826a-4fcb2880a7ad external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:42:12 compute-0 nova_compute[265391]: 2025-09-30 18:42:12.798 2 DEBUG oslo_concurrency.lockutils [req-52c6376e-6fc8-4be4-ac97-4e1f8ac69160 req-a3bd2b4f-213d-4944-bba0-6280cad4696b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "61db6fe2-de1c-4e4b-899f-4e40c50227b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:42:12 compute-0 nova_compute[265391]: 2025-09-30 18:42:12.798 2 DEBUG oslo_concurrency.lockutils [req-52c6376e-6fc8-4be4-ac97-4e1f8ac69160 req-a3bd2b4f-213d-4944-bba0-6280cad4696b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "61db6fe2-de1c-4e4b-899f-4e40c50227b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:42:12 compute-0 nova_compute[265391]: 2025-09-30 18:42:12.798 2 DEBUG oslo_concurrency.lockutils [req-52c6376e-6fc8-4be4-ac97-4e1f8ac69160 req-a3bd2b4f-213d-4944-bba0-6280cad4696b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "61db6fe2-de1c-4e4b-899f-4e40c50227b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:42:12 compute-0 nova_compute[265391]: 2025-09-30 18:42:12.799 2 DEBUG nova.compute.manager [req-52c6376e-6fc8-4be4-ac97-4e1f8ac69160 req-a3bd2b4f-213d-4944-bba0-6280cad4696b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] No waiting events found dispatching network-vif-unplugged-25ca9495-1e4e-46a1-826a-4fcb2880a7ad pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:42:12 compute-0 nova_compute[265391]: 2025-09-30 18:42:12.799 2 DEBUG nova.compute.manager [req-52c6376e-6fc8-4be4-ac97-4e1f8ac69160 req-a3bd2b4f-213d-4944-bba0-6280cad4696b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Received event network-vif-unplugged-25ca9495-1e4e-46a1-826a-4fcb2880a7ad for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:42:12 compute-0 podman[350335]: 2025-09-30 18:42:12.810010082 +0000 UTC m=+0.024844867 container died 17f1091bc96cede5883832b8d7a2d2204533dc9b7560ae14ded9db3860efb748 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Sep 30 18:42:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-17f1091bc96cede5883832b8d7a2d2204533dc9b7560ae14ded9db3860efb748-userdata-shm.mount: Deactivated successfully.
Sep 30 18:42:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-6fa65a589576a4f57ed7e46a73547b2b1956059a47bd2f65f6fe653543e64992-merged.mount: Deactivated successfully.
Sep 30 18:42:12 compute-0 podman[350335]: 2025-09-30 18:42:12.854021638 +0000 UTC m=+0.068856393 container cleanup 17f1091bc96cede5883832b8d7a2d2204533dc9b7560ae14ded9db3860efb748 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Sep 30 18:42:12 compute-0 systemd[1]: libpod-conmon-17f1091bc96cede5883832b8d7a2d2204533dc9b7560ae14ded9db3860efb748.scope: Deactivated successfully.
Sep 30 18:42:12 compute-0 podman[350341]: 2025-09-30 18:42:12.873563367 +0000 UTC m=+0.072105375 container remove 17f1091bc96cede5883832b8d7a2d2204533dc9b7560ae14ded9db3860efb748 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Sep 30 18:42:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:12.880 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[10a7cc17-89b0-4c99-bef9-d721a9231ab9]: (4, ("Tue Sep 30 06:42:12 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28 (17f1091bc96cede5883832b8d7a2d2204533dc9b7560ae14ded9db3860efb748)\n17f1091bc96cede5883832b8d7a2d2204533dc9b7560ae14ded9db3860efb748\nTue Sep 30 06:42:12 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28 (17f1091bc96cede5883832b8d7a2d2204533dc9b7560ae14ded9db3860efb748)\n17f1091bc96cede5883832b8d7a2d2204533dc9b7560ae14ded9db3860efb748\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:42:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:12.881 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[fe19ca72-4b60-4c48-8a32-63712e4b36ae]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:42:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:12.882 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c8484b9b-b34e-4c32-b987-029d8fcb2a28.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c8484b9b-b34e-4c32-b987-029d8fcb2a28.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:42:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:12.882 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[c27d8e0e-0812-4618-9a0a-081a12b2f89f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:42:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:12.883 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8484b9b-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:42:12 compute-0 kernel: tapc8484b9b-b0: left promiscuous mode
Sep 30 18:42:12 compute-0 nova_compute[265391]: 2025-09-30 18:42:12.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:12 compute-0 nova_compute[265391]: 2025-09-30 18:42:12.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:12.906 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[9980f6e4-d851-46d4-b028-107f6cd75aa5]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:42:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:12.934 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[8c1727fb-2f12-4ba6-97ea-974a6214af9b]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:42:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:12.936 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[0944c5b8-70f8-4131-9ede-27163ec5b2f4]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:42:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:12.952 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[28f46298-d383-484c-8aed-516924783240]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 625201, 'reachable_time': 19375, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350369, 'error': None, 'target': 'ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:42:12 compute-0 systemd[1]: run-netns-ovnmeta\x2dc8484b9b\x2db34e\x2d4c32\x2db987\x2d029d8fcb2a28.mount: Deactivated successfully.
Sep 30 18:42:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:12.956 166383 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28 deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Sep 30 18:42:12 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:12.956 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[07d75a06-c3b4-41ce-8939-9adfc8e6ea21]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:42:13 compute-0 nova_compute[265391]: 2025-09-30 18:42:13.143 2 DEBUG nova.scheduler.client.report [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:42:13 compute-0 nova_compute[265391]: 2025-09-30 18:42:13.284 2 DEBUG nova.virt.libvirt.vif [None req-f75ebc9c-45fc-48e9-8455-33334783b207 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:41:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-1220912296',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-1220912296',id=31,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:41:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='63c45bef63ef4b9f895b3bab865e1a84',ramdisk_id='',reservation_id='r-0qrb7r13',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-134702932',owner_user_name='tempest-TestExecuteWorkloadBalanceStrategy-134702932-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:41:21Z,user_data=None,user_id='5717e8cb8548429b948a23763350ab4a',uuid=61db6fe2-de1c-4e4b-899f-4e40c50227b7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "25ca9495-1e4e-46a1-826a-4fcb2880a7ad", "address": "fa:16:3e:b7:d3:fa", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ca9495-1e", "ovs_interfaceid": "25ca9495-1e4e-46a1-826a-4fcb2880a7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Sep 30 18:42:13 compute-0 nova_compute[265391]: 2025-09-30 18:42:13.284 2 DEBUG nova.network.os_vif_util [None req-f75ebc9c-45fc-48e9-8455-33334783b207 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Converting VIF {"id": "25ca9495-1e4e-46a1-826a-4fcb2880a7ad", "address": "fa:16:3e:b7:d3:fa", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ca9495-1e", "ovs_interfaceid": "25ca9495-1e4e-46a1-826a-4fcb2880a7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:42:13 compute-0 nova_compute[265391]: 2025-09-30 18:42:13.285 2 DEBUG nova.network.os_vif_util [None req-f75ebc9c-45fc-48e9-8455-33334783b207 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:d3:fa,bridge_name='br-int',has_traffic_filtering=True,id=25ca9495-1e4e-46a1-826a-4fcb2880a7ad,network=Network(c8484b9b-b34e-4c32-b987-029d8fcb2a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25ca9495-1e') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:42:13 compute-0 nova_compute[265391]: 2025-09-30 18:42:13.286 2 DEBUG os_vif [None req-f75ebc9c-45fc-48e9-8455-33334783b207 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:d3:fa,bridge_name='br-int',has_traffic_filtering=True,id=25ca9495-1e4e-46a1-826a-4fcb2880a7ad,network=Network(c8484b9b-b34e-4c32-b987-029d8fcb2a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25ca9495-1e') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Sep 30 18:42:13 compute-0 nova_compute[265391]: 2025-09-30 18:42:13.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:13 compute-0 nova_compute[265391]: 2025-09-30 18:42:13.290 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25ca9495-1e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:42:13 compute-0 nova_compute[265391]: 2025-09-30 18:42:13.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:13 compute-0 nova_compute[265391]: 2025-09-30 18:42:13.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:42:13 compute-0 nova_compute[265391]: 2025-09-30 18:42:13.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:13 compute-0 nova_compute[265391]: 2025-09-30 18:42:13.348 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=a6977821-ca04-42b4-905c-58eaa252da8e) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:42:13 compute-0 nova_compute[265391]: 2025-09-30 18:42:13.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:13 compute-0 nova_compute[265391]: 2025-09-30 18:42:13.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:13 compute-0 nova_compute[265391]: 2025-09-30 18:42:13.351 2 INFO os_vif [None req-f75ebc9c-45fc-48e9-8455-33334783b207 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:d3:fa,bridge_name='br-int',has_traffic_filtering=True,id=25ca9495-1e4e-46a1-826a-4fcb2880a7ad,network=Network(c8484b9b-b34e-4c32-b987-029d8fcb2a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25ca9495-1e')
Sep 30 18:42:13 compute-0 ceph-mon[73755]: pgmap v1913: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 2.3 KiB/s wr, 0 op/s
Sep 30 18:42:13 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3676218880' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:42:13 compute-0 nova_compute[265391]: 2025-09-30 18:42:13.655 2 DEBUG nova.compute.resource_tracker [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:42:13 compute-0 nova_compute[265391]: 2025-09-30 18:42:13.656 2 DEBUG oslo_concurrency.lockutils [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.228s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:42:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:42:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:42:13.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:42:13 compute-0 nova_compute[265391]: 2025-09-30 18:42:13.683 2 INFO nova.compute.manager [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Sep 30 18:42:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:42:13.880Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:42:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:42:13.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:13 compute-0 nova_compute[265391]: 2025-09-30 18:42:13.909 2 INFO nova.virt.libvirt.driver [None req-f75ebc9c-45fc-48e9-8455-33334783b207 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Deleting instance files /var/lib/nova/instances/61db6fe2-de1c-4e4b-899f-4e40c50227b7_del
Sep 30 18:42:13 compute-0 nova_compute[265391]: 2025-09-30 18:42:13.911 2 INFO nova.virt.libvirt.driver [None req-f75ebc9c-45fc-48e9-8455-33334783b207 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Deletion of /var/lib/nova/instances/61db6fe2-de1c-4e4b-899f-4e40c50227b7_del complete
Sep 30 18:42:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:42:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:42:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:42:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:42:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1914: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 2.3 KiB/s wr, 1 op/s
Sep 30 18:42:14 compute-0 nova_compute[265391]: 2025-09-30 18:42:14.425 2 INFO nova.compute.manager [None req-f75ebc9c-45fc-48e9-8455-33334783b207 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Took 1.89 seconds to destroy the instance on the hypervisor.
Sep 30 18:42:14 compute-0 nova_compute[265391]: 2025-09-30 18:42:14.426 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-f75ebc9c-45fc-48e9-8455-33334783b207 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Sep 30 18:42:14 compute-0 nova_compute[265391]: 2025-09-30 18:42:14.427 2 DEBUG nova.compute.manager [-] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Sep 30 18:42:14 compute-0 nova_compute[265391]: 2025-09-30 18:42:14.427 2 DEBUG nova.network.neutron [-] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Sep 30 18:42:14 compute-0 nova_compute[265391]: 2025-09-30 18:42:14.428 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:42:14 compute-0 nova_compute[265391]: 2025-09-30 18:42:14.750 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:42:14 compute-0 nova_compute[265391]: 2025-09-30 18:42:14.766 2 INFO nova.scheduler.client.report [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Deleted allocation for migration f282835a-0309-42be-a72c-7d2d96cd2df8
Sep 30 18:42:14 compute-0 nova_compute[265391]: 2025-09-30 18:42:14.767 2 DEBUG nova.virt.libvirt.driver [None req-17a6f3dc-42be-4ce6-a356-87d256030da9 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 86b9b1e5-516e-43c2-b180-7ef40f7c1c67] Live migration monitoring is all done _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11566
Sep 30 18:42:14 compute-0 nova_compute[265391]: 2025-09-30 18:42:14.857 2 DEBUG nova.compute.manager [req-15b48d05-0421-4bef-8e89-14e7fecbaa8a req-4ccbf3b5-79ab-4888-9fa5-14bf4af3ebd5 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Received event network-vif-unplugged-25ca9495-1e4e-46a1-826a-4fcb2880a7ad external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:42:14 compute-0 nova_compute[265391]: 2025-09-30 18:42:14.857 2 DEBUG oslo_concurrency.lockutils [req-15b48d05-0421-4bef-8e89-14e7fecbaa8a req-4ccbf3b5-79ab-4888-9fa5-14bf4af3ebd5 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "61db6fe2-de1c-4e4b-899f-4e40c50227b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:42:14 compute-0 nova_compute[265391]: 2025-09-30 18:42:14.858 2 DEBUG oslo_concurrency.lockutils [req-15b48d05-0421-4bef-8e89-14e7fecbaa8a req-4ccbf3b5-79ab-4888-9fa5-14bf4af3ebd5 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "61db6fe2-de1c-4e4b-899f-4e40c50227b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:42:14 compute-0 nova_compute[265391]: 2025-09-30 18:42:14.858 2 DEBUG oslo_concurrency.lockutils [req-15b48d05-0421-4bef-8e89-14e7fecbaa8a req-4ccbf3b5-79ab-4888-9fa5-14bf4af3ebd5 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "61db6fe2-de1c-4e4b-899f-4e40c50227b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:42:14 compute-0 nova_compute[265391]: 2025-09-30 18:42:14.859 2 DEBUG nova.compute.manager [req-15b48d05-0421-4bef-8e89-14e7fecbaa8a req-4ccbf3b5-79ab-4888-9fa5-14bf4af3ebd5 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] No waiting events found dispatching network-vif-unplugged-25ca9495-1e4e-46a1-826a-4fcb2880a7ad pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:42:14 compute-0 nova_compute[265391]: 2025-09-30 18:42:14.859 2 DEBUG nova.compute.manager [req-15b48d05-0421-4bef-8e89-14e7fecbaa8a req-4ccbf3b5-79ab-4888-9fa5-14bf4af3ebd5 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Received event network-vif-unplugged-25ca9495-1e4e-46a1-826a-4fcb2880a7ad for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:42:15 compute-0 nova_compute[265391]: 2025-09-30 18:42:15.423 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:42:15 compute-0 ceph-mon[73755]: pgmap v1914: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 2.3 KiB/s wr, 1 op/s
Sep 30 18:42:15 compute-0 nova_compute[265391]: 2025-09-30 18:42:15.589 2 DEBUG nova.network.neutron [-] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:42:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:42:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:42:15.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:42:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:42:15.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:16 compute-0 nova_compute[265391]: 2025-09-30 18:42:16.095 2 INFO nova.compute.manager [-] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Took 1.67 seconds to deallocate network for instance.
Sep 30 18:42:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:42:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1915: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 682 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:42:16 compute-0 podman[350394]: 2025-09-30 18:42:16.530846679 +0000 UTC m=+0.062310636 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_id=iscsid)
Sep 30 18:42:16 compute-0 podman[350393]: 2025-09-30 18:42:16.538668569 +0000 UTC m=+0.070275660 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, container_name=multipathd)
Sep 30 18:42:16 compute-0 podman[350395]: 2025-09-30 18:42:16.539598603 +0000 UTC m=+0.062851380 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.openshift.expose-services=, container_name=openstack_network_exporter, distribution-scope=public, name=ubi9-minimal, config_id=edpm, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, version=9.6, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Sep 30 18:42:16 compute-0 nova_compute[265391]: 2025-09-30 18:42:16.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:16 compute-0 nova_compute[265391]: 2025-09-30 18:42:16.618 2 DEBUG oslo_concurrency.lockutils [None req-f75ebc9c-45fc-48e9-8455-33334783b207 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:42:16 compute-0 nova_compute[265391]: 2025-09-30 18:42:16.618 2 DEBUG oslo_concurrency.lockutils [None req-f75ebc9c-45fc-48e9-8455-33334783b207 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:42:16 compute-0 nova_compute[265391]: 2025-09-30 18:42:16.658 2 DEBUG oslo_concurrency.processutils [None req-f75ebc9c-45fc-48e9-8455-33334783b207 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:42:16 compute-0 nova_compute[265391]: 2025-09-30 18:42:16.935 2 DEBUG nova.compute.manager [req-122eac4e-a7e3-4b88-bd64-cee82c3ebd0f req-9b3508fe-447f-4fdd-826c-bc2f85cf0663 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 61db6fe2-de1c-4e4b-899f-4e40c50227b7] Received event network-vif-deleted-25ca9495-1e4e-46a1-826a-4fcb2880a7ad external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:42:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:42:17 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1109886704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:42:17 compute-0 nova_compute[265391]: 2025-09-30 18:42:17.100 2 DEBUG oslo_concurrency.processutils [None req-f75ebc9c-45fc-48e9-8455-33334783b207 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:42:17 compute-0 nova_compute[265391]: 2025-09-30 18:42:17.107 2 DEBUG nova.compute.provider_tree [None req-f75ebc9c-45fc-48e9-8455-33334783b207 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:42:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:42:17.355Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:42:17 compute-0 ceph-mon[73755]: pgmap v1915: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 682 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:42:17 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1109886704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:42:17 compute-0 nova_compute[265391]: 2025-09-30 18:42:17.617 2 DEBUG nova.scheduler.client.report [None req-f75ebc9c-45fc-48e9-8455-33334783b207 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:42:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:42:17.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:42:17.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:18 compute-0 nova_compute[265391]: 2025-09-30 18:42:18.127 2 DEBUG oslo_concurrency.lockutils [None req-f75ebc9c-45fc-48e9-8455-33334783b207 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.508s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:42:18 compute-0 nova_compute[265391]: 2025-09-30 18:42:18.151 2 INFO nova.scheduler.client.report [None req-f75ebc9c-45fc-48e9-8455-33334783b207 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Deleted allocations for instance 61db6fe2-de1c-4e4b-899f-4e40c50227b7
Sep 30 18:42:18 compute-0 nova_compute[265391]: 2025-09-30 18:42:18.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1916: 353 pgs: 353 active+clean; 121 MiB data, 403 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:42:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:42:18] "GET /metrics HTTP/1.1" 200 46724 "" "Prometheus/2.51.0"
Sep 30 18:42:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:42:18] "GET /metrics HTTP/1.1" 200 46724 "" "Prometheus/2.51.0"
Sep 30 18:42:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:42:18.899Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:42:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:42:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:42:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:42:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:42:19 compute-0 nova_compute[265391]: 2025-09-30 18:42:19.293 2 DEBUG oslo_concurrency.lockutils [None req-f75ebc9c-45fc-48e9-8455-33334783b207 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "61db6fe2-de1c-4e4b-899f-4e40c50227b7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.290s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:42:19 compute-0 ceph-mon[73755]: pgmap v1916: 353 pgs: 353 active+clean; 121 MiB data, 403 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:42:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:42:19.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:19 compute-0 sudo[350473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:42:19 compute-0 sudo[350473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:42:19 compute-0 sudo[350473]: pam_unix(sudo:session): session closed for user root
Sep 30 18:42:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:42:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:42:19.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:42:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1917: 353 pgs: 353 active+clean; 121 MiB data, 403 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:42:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:42:21 compute-0 nova_compute[265391]: 2025-09-30 18:42:21.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:21 compute-0 ceph-mon[73755]: pgmap v1917: 353 pgs: 353 active+clean; 121 MiB data, 403 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:42:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:42:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:42:21.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:42:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:42:21.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:42:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:42:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1918: 353 pgs: 353 active+clean; 121 MiB data, 403 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:42:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:42:23 compute-0 nova_compute[265391]: 2025-09-30 18:42:23.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:23 compute-0 ceph-mon[73755]: pgmap v1918: 353 pgs: 353 active+clean; 121 MiB data, 403 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:42:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:42:23.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:42:23.881Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:42:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:42:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:42:23.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:42:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:42:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:42:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:42:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:42:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1919: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Sep 30 18:42:24 compute-0 ceph-mon[73755]: pgmap v1919: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Sep 30 18:42:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:42:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:42:25.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:42:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:42:25.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:42:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1920: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Sep 30 18:42:26 compute-0 nova_compute[265391]: 2025-09-30 18:42:26.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:42:27.356Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:42:27 compute-0 ceph-mon[73755]: pgmap v1920: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Sep 30 18:42:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:42:27.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:42:27.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:28 compute-0 nova_compute[265391]: 2025-09-30 18:42:28.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1921: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Sep 30 18:42:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:42:28] "GET /metrics HTTP/1.1" 200 46722 "" "Prometheus/2.51.0"
Sep 30 18:42:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:42:28] "GET /metrics HTTP/1.1" 200 46722 "" "Prometheus/2.51.0"
Sep 30 18:42:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:42:28.900Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:42:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:42:28.900Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:42:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:42:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:42:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:42:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:42:29 compute-0 ceph-mon[73755]: pgmap v1921: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Sep 30 18:42:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:42:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:42:29.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:42:29 compute-0 podman[276673]: time="2025-09-30T18:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:42:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:42:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10310 "" "Go-http-client/1.1"
Sep 30 18:42:29 compute-0 nova_compute[265391]: 2025-09-30 18:42:29.793 2 DEBUG oslo_concurrency.lockutils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Acquiring lock "0bd62f93-1956-4b12-a38a-10deee907b16" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:42:29 compute-0 nova_compute[265391]: 2025-09-30 18:42:29.794 2 DEBUG oslo_concurrency.lockutils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "0bd62f93-1956-4b12-a38a-10deee907b16" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:42:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:42:29.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:30 compute-0 nova_compute[265391]: 2025-09-30 18:42:30.301 2 DEBUG nova.compute.manager [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Sep 30 18:42:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1922: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:42:30 compute-0 nova_compute[265391]: 2025-09-30 18:42:30.874 2 DEBUG oslo_concurrency.lockutils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:42:30 compute-0 nova_compute[265391]: 2025-09-30 18:42:30.874 2 DEBUG oslo_concurrency.lockutils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:42:30 compute-0 nova_compute[265391]: 2025-09-30 18:42:30.883 2 DEBUG nova.virt.hardware [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Sep 30 18:42:30 compute-0 nova_compute[265391]: 2025-09-30 18:42:30.884 2 INFO nova.compute.claims [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Claim successful on node compute-0.ctlplane.example.com
Sep 30 18:42:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:42:31 compute-0 openstack_network_exporter[279566]: ERROR   18:42:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:42:31 compute-0 openstack_network_exporter[279566]: ERROR   18:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:42:31 compute-0 openstack_network_exporter[279566]: ERROR   18:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:42:31 compute-0 openstack_network_exporter[279566]: ERROR   18:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:42:31 compute-0 openstack_network_exporter[279566]: ERROR   18:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:42:31 compute-0 ceph-mon[73755]: pgmap v1922: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:42:31 compute-0 nova_compute[265391]: 2025-09-30 18:42:31.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:42:31.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:42:31.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:31 compute-0 nova_compute[265391]: 2025-09-30 18:42:31.936 2 DEBUG oslo_concurrency.processutils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:42:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1923: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:42:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:42:32 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/474572149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:42:32 compute-0 nova_compute[265391]: 2025-09-30 18:42:32.425 2 DEBUG oslo_concurrency.processutils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:42:32 compute-0 nova_compute[265391]: 2025-09-30 18:42:32.431 2 DEBUG nova.compute.provider_tree [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:42:32 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/474572149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:42:32 compute-0 nova_compute[265391]: 2025-09-30 18:42:32.941 2 DEBUG nova.scheduler.client.report [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:42:33 compute-0 nova_compute[265391]: 2025-09-30 18:42:33.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:33 compute-0 nova_compute[265391]: 2025-09-30 18:42:33.460 2 DEBUG oslo_concurrency.lockutils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.585s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:42:33 compute-0 nova_compute[265391]: 2025-09-30 18:42:33.460 2 DEBUG nova.compute.manager [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Sep 30 18:42:33 compute-0 ceph-mon[73755]: pgmap v1923: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:42:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:42:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:42:33.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:42:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:42:33.882Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:42:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:42:33.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:33 compute-0 nova_compute[265391]: 2025-09-30 18:42:33.970 2 DEBUG nova.compute.manager [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Sep 30 18:42:33 compute-0 nova_compute[265391]: 2025-09-30 18:42:33.970 2 DEBUG nova.network.neutron [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Sep 30 18:42:33 compute-0 nova_compute[265391]: 2025-09-30 18:42:33.971 2 WARNING neutronclient.v2_0.client [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:42:33 compute-0 nova_compute[265391]: 2025-09-30 18:42:33.971 2 WARNING neutronclient.v2_0.client [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:42:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:42:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:42:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:42:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:42:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1924: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:42:34 compute-0 nova_compute[265391]: 2025-09-30 18:42:34.439 2 DEBUG nova.network.neutron [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Successfully created port: a362175f-2dba-4a9f-bc07-4260625a8ce0 _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Sep 30 18:42:34 compute-0 nova_compute[265391]: 2025-09-30 18:42:34.478 2 INFO nova.virt.libvirt.driver [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Sep 30 18:42:34 compute-0 podman[350538]: 2025-09-30 18:42:34.51519807 +0000 UTC m=+0.054199668 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Sep 30 18:42:34 compute-0 podman[350536]: 2025-09-30 18:42:34.540438756 +0000 UTC m=+0.086115324 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Sep 30 18:42:34 compute-0 podman[350537]: 2025-09-30 18:42:34.56011044 +0000 UTC m=+0.094190412 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, config_id=ovn_controller, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Sep 30 18:42:34 compute-0 nova_compute[265391]: 2025-09-30 18:42:34.987 2 DEBUG nova.compute.manager [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Sep 30 18:42:35 compute-0 ceph-mon[73755]: pgmap v1924: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:42:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:42:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:42:35.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:42:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:42:35.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:36 compute-0 nova_compute[265391]: 2025-09-30 18:42:36.006 2 DEBUG nova.compute.manager [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Sep 30 18:42:36 compute-0 nova_compute[265391]: 2025-09-30 18:42:36.008 2 DEBUG nova.virt.libvirt.driver [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Sep 30 18:42:36 compute-0 nova_compute[265391]: 2025-09-30 18:42:36.009 2 INFO nova.virt.libvirt.driver [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Creating image(s)
Sep 30 18:42:36 compute-0 nova_compute[265391]: 2025-09-30 18:42:36.049 2 DEBUG nova.storage.rbd_utils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] rbd image 0bd62f93-1956-4b12-a38a-10deee907b16_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:42:36 compute-0 nova_compute[265391]: 2025-09-30 18:42:36.095 2 DEBUG nova.storage.rbd_utils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] rbd image 0bd62f93-1956-4b12-a38a-10deee907b16_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:42:36 compute-0 nova_compute[265391]: 2025-09-30 18:42:36.135 2 DEBUG nova.storage.rbd_utils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] rbd image 0bd62f93-1956-4b12-a38a-10deee907b16_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:42:36 compute-0 nova_compute[265391]: 2025-09-30 18:42:36.139 2 DEBUG oslo_concurrency.processutils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:42:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:42:36 compute-0 nova_compute[265391]: 2025-09-30 18:42:36.216 2 DEBUG oslo_concurrency.processutils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:42:36 compute-0 nova_compute[265391]: 2025-09-30 18:42:36.218 2 DEBUG oslo_concurrency.lockutils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Acquiring lock "cb2d580238c9b109feae7f1462613dc547671457" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:42:36 compute-0 nova_compute[265391]: 2025-09-30 18:42:36.219 2 DEBUG oslo_concurrency.lockutils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:42:36 compute-0 nova_compute[265391]: 2025-09-30 18:42:36.219 2 DEBUG oslo_concurrency.lockutils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:42:36 compute-0 nova_compute[265391]: 2025-09-30 18:42:36.262 2 DEBUG nova.storage.rbd_utils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] rbd image 0bd62f93-1956-4b12-a38a-10deee907b16_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:42:36 compute-0 nova_compute[265391]: 2025-09-30 18:42:36.267 2 DEBUG oslo_concurrency.processutils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 0bd62f93-1956-4b12-a38a-10deee907b16_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:42:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1925: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:42:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3785611316' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:42:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3785611316' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:42:36 compute-0 nova_compute[265391]: 2025-09-30 18:42:36.597 2 DEBUG oslo_concurrency.processutils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 0bd62f93-1956-4b12-a38a-10deee907b16_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.329s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:42:36 compute-0 nova_compute[265391]: 2025-09-30 18:42:36.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:36 compute-0 nova_compute[265391]: 2025-09-30 18:42:36.672 2 DEBUG nova.storage.rbd_utils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] resizing rbd image 0bd62f93-1956-4b12-a38a-10deee907b16_disk to 1073741824 resize /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:288
Sep 30 18:42:36 compute-0 nova_compute[265391]: 2025-09-30 18:42:36.776 2 DEBUG nova.virt.libvirt.driver [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Sep 30 18:42:36 compute-0 nova_compute[265391]: 2025-09-30 18:42:36.777 2 DEBUG nova.virt.libvirt.driver [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Ensure instance console log exists: /var/lib/nova/instances/0bd62f93-1956-4b12-a38a-10deee907b16/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Sep 30 18:42:36 compute-0 nova_compute[265391]: 2025-09-30 18:42:36.777 2 DEBUG oslo_concurrency.lockutils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:42:36 compute-0 nova_compute[265391]: 2025-09-30 18:42:36.778 2 DEBUG oslo_concurrency.lockutils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:42:36 compute-0 nova_compute[265391]: 2025-09-30 18:42:36.778 2 DEBUG oslo_concurrency.lockutils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:42:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:42:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:42:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:42:37.357Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:42:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:42:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:42:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:42:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:42:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:42:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:42:37 compute-0 ceph-mon[73755]: pgmap v1925: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:42:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:42:37 compute-0 nova_compute[265391]: 2025-09-30 18:42:37.651 2 DEBUG nova.network.neutron [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Successfully updated port: a362175f-2dba-4a9f-bc07-4260625a8ce0 _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Sep 30 18:42:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:42:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:42:37.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
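[editor's note] The recurring radosgw/beast entries are anonymous "HEAD /" probes answered with 200 in about a millisecond, consistent with periodic health checks of the RGW endpoint. A quick equivalent probe; the endpoint port is an assumption since it is not shown in this excerpt:

    # Hedged sketch: issue the same anonymous HEAD / probe the access log records.
    import http.client

    conn = http.client.HTTPConnection('192.168.122.100', 8080, timeout=5)  # port assumed
    conn.request('HEAD', '/')
    print(conn.getresponse().status)   # 200 expected while radosgw is healthy
    conn.close()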
Sep 30 18:42:37 compute-0 nova_compute[265391]: 2025-09-30 18:42:37.710 2 DEBUG nova.compute.manager [req-59f95fa0-d328-44fe-9f75-cf80d3e12201 req-edeeb517-bdcc-407e-8f9a-924417c11eda 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Received event network-changed-a362175f-2dba-4a9f-bc07-4260625a8ce0 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:42:37 compute-0 nova_compute[265391]: 2025-09-30 18:42:37.710 2 DEBUG nova.compute.manager [req-59f95fa0-d328-44fe-9f75-cf80d3e12201 req-edeeb517-bdcc-407e-8f9a-924417c11eda 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Refreshing instance network info cache due to event network-changed-a362175f-2dba-4a9f-bc07-4260625a8ce0. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:42:37 compute-0 nova_compute[265391]: 2025-09-30 18:42:37.710 2 DEBUG oslo_concurrency.lockutils [req-59f95fa0-d328-44fe-9f75-cf80d3e12201 req-edeeb517-bdcc-407e-8f9a-924417c11eda 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-0bd62f93-1956-4b12-a38a-10deee907b16" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:42:37 compute-0 nova_compute[265391]: 2025-09-30 18:42:37.710 2 DEBUG oslo_concurrency.lockutils [req-59f95fa0-d328-44fe-9f75-cf80d3e12201 req-edeeb517-bdcc-407e-8f9a-924417c11eda 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-0bd62f93-1956-4b12-a38a-10deee907b16" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:42:37 compute-0 nova_compute[265391]: 2025-09-30 18:42:37.711 2 DEBUG nova.network.neutron [req-59f95fa0-d328-44fe-9f75-cf80d3e12201 req-edeeb517-bdcc-407e-8f9a-924417c11eda 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Refreshing network info cache for port a362175f-2dba-4a9f-bc07-4260625a8ce0 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:42:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:42:37.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:38 compute-0 nova_compute[265391]: 2025-09-30 18:42:38.157 2 DEBUG oslo_concurrency.lockutils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Acquiring lock "refresh_cache-0bd62f93-1956-4b12-a38a-10deee907b16" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:42:38 compute-0 nova_compute[265391]: 2025-09-30 18:42:38.218 2 WARNING neutronclient.v2_0.client [req-59f95fa0-d328-44fe-9f75-cf80d3e12201 req-edeeb517-bdcc-407e-8f9a-924417c11eda 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
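[editor's note] The warning above is neutronclient's standard deprecation notice pointing callers at the OpenStack SDK. Fetching the same port through the SDK would look roughly like the sketch below; the clouds.yaml entry name is an assumption.

    # Hedged sketch of the openstacksdk call the deprecation warning points to.
    import openstack

    conn = openstack.connect(cloud='overcloud')   # cloud name: assumption
    port = conn.network.get_port('a362175f-2dba-4a9f-bc07-4260625a8ce0')
    print(port.status, port.mac_address)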
Sep 30 18:42:38 compute-0 nova_compute[265391]: 2025-09-30 18:42:38.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1926: 353 pgs: 353 active+clean; 88 MiB data, 374 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:42:38 compute-0 nova_compute[265391]: 2025-09-30 18:42:38.717 2 DEBUG nova.network.neutron [req-59f95fa0-d328-44fe-9f75-cf80d3e12201 req-edeeb517-bdcc-407e-8f9a-924417c11eda 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:42:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:42:38] "GET /metrics HTTP/1.1" 200 46709 "" "Prometheus/2.51.0"
Sep 30 18:42:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:42:38] "GET /metrics HTTP/1.1" 200 46709 "" "Prometheus/2.51.0"
Sep 30 18:42:38 compute-0 nova_compute[265391]: 2025-09-30 18:42:38.856 2 DEBUG nova.network.neutron [req-59f95fa0-d328-44fe-9f75-cf80d3e12201 req-edeeb517-bdcc-407e-8f9a-924417c11eda 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:42:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:42:38.901Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:42:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:42:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:42:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:42:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:42:39 compute-0 nova_compute[265391]: 2025-09-30 18:42:39.364 2 DEBUG oslo_concurrency.lockutils [req-59f95fa0-d328-44fe-9f75-cf80d3e12201 req-edeeb517-bdcc-407e-8f9a-924417c11eda 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-0bd62f93-1956-4b12-a38a-10deee907b16" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:42:39 compute-0 nova_compute[265391]: 2025-09-30 18:42:39.365 2 DEBUG oslo_concurrency.lockutils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Acquired lock "refresh_cache-0bd62f93-1956-4b12-a38a-10deee907b16" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:42:39 compute-0 nova_compute[265391]: 2025-09-30 18:42:39.365 2 DEBUG nova.network.neutron [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:42:39 compute-0 ceph-mon[73755]: pgmap v1926: 353 pgs: 353 active+clean; 88 MiB data, 374 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:42:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:42:39.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:39 compute-0 sudo[350774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:42:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:42:39.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:39 compute-0 sudo[350774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:42:39 compute-0 sudo[350774]: pam_unix(sudo:session): session closed for user root
Sep 30 18:42:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1927: 353 pgs: 353 active+clean; 88 MiB data, 374 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:42:40 compute-0 nova_compute[265391]: 2025-09-30 18:42:40.716 2 DEBUG nova.network.neutron [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:42:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:42:41 compute-0 ceph-mon[73755]: pgmap v1927: 353 pgs: 353 active+clean; 88 MiB data, 374 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:42:41 compute-0 nova_compute[265391]: 2025-09-30 18:42:41.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:41 compute-0 nova_compute[265391]: 2025-09-30 18:42:41.622 2 WARNING neutronclient.v2_0.client [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:42:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:42:41.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:41 compute-0 nova_compute[265391]: 2025-09-30 18:42:41.774 2 DEBUG nova.network.neutron [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Updating instance_info_cache with network_info: [{"id": "a362175f-2dba-4a9f-bc07-4260625a8ce0", "address": "fa:16:3e:52:c0:eb", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa362175f-2d", "ovs_interfaceid": "a362175f-2dba-4a9f-bc07-4260625a8ce0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
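[editor's note] The network_info blob above carries everything the driver needs for the VIF: MAC fa:16:3e:52:c0:eb, fixed IP 10.100.0.8 on 10.100.0.0/28, MTU 1442, and an OVN-bound OVS port on br-int. A small sketch that pulls those fields back out of the logged JSON (here `raw` stands for the JSON list copied from the log line):

    # Hedged sketch: extract the key fields from the network_info JSON above.
    import json

    vif = json.loads(raw)[0]
    mac = vif['address']                              # 'fa:16:3e:52:c0:eb'
    mtu = vif['network']['meta']['mtu']               # 1442
    fixed_ips = [ip['address']
                 for subnet in vif['network']['subnets']
                 for ip in subnet['ips']]             # ['10.100.0.8']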
Sep 30 18:42:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:42:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:42:41.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:42:42 compute-0 nova_compute[265391]: 2025-09-30 18:42:42.312 2 DEBUG oslo_concurrency.lockutils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Releasing lock "refresh_cache-0bd62f93-1956-4b12-a38a-10deee907b16" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:42:42 compute-0 nova_compute[265391]: 2025-09-30 18:42:42.313 2 DEBUG nova.compute.manager [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Instance network_info: |[{"id": "a362175f-2dba-4a9f-bc07-4260625a8ce0", "address": "fa:16:3e:52:c0:eb", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa362175f-2d", "ovs_interfaceid": "a362175f-2dba-4a9f-bc07-4260625a8ce0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Sep 30 18:42:42 compute-0 nova_compute[265391]: 2025-09-30 18:42:42.316 2 DEBUG nova.virt.libvirt.driver [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Start _get_guest_xml network_info=[{"id": "a362175f-2dba-4a9f-bc07-4260625a8ce0", "address": "fa:16:3e:52:c0:eb", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa362175f-2d", "ovs_interfaceid": "a362175f-2dba-4a9f-bc07-4260625a8ce0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'image_id': '5b99cbca-b655-4be5-8343-cf504005c42e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Sep 30 18:42:42 compute-0 nova_compute[265391]: 2025-09-30 18:42:42.320 2 WARNING nova.virt.libvirt.driver [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:42:42 compute-0 nova_compute[265391]: 2025-09-30 18:42:42.322 2 DEBUG nova.virt.driver [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='5b99cbca-b655-4be5-8343-cf504005c42e', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteWorkloadBalanceStrategy-server-1791586084', uuid='0bd62f93-1956-4b12-a38a-10deee907b16'), owner=OwnerMeta(userid='5717e8cb8548429b948a23763350ab4a', username='tempest-TestExecuteWorkloadBalanceStrategy-134702932-project-admin', projectid='63c45bef63ef4b9f895b3bab865e1a84', projectname='tempest-TestExecuteWorkloadBalanceStrategy-134702932'), image=ImageMeta(id='5b99cbca-b655-4be5-8343-cf504005c42e', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='tempest-watcher_flavor-293987126', flavorid='dc3a14e6-3544-428c-a856-1da19a12bf48', memory_mb=1151, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={}, swap=0), network_info=[{"id": "a362175f-2dba-4a9f-bc07-4260625a8ce0", "address": "fa:16:3e:52:c0:eb", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa362175f-2d", "ovs_interfaceid": "a362175f-2dba-4a9f-bc07-4260625a8ce0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20250919142712.b99a882.el10', creation_time=1759257762.3221414) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Sep 30 18:42:42 compute-0 nova_compute[265391]: 2025-09-30 18:42:42.335 2 DEBUG nova.virt.libvirt.host [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Sep 30 18:42:42 compute-0 nova_compute[265391]: 2025-09-30 18:42:42.336 2 DEBUG nova.virt.libvirt.host [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Sep 30 18:42:42 compute-0 nova_compute[265391]: 2025-09-30 18:42:42.340 2 DEBUG nova.virt.libvirt.host [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Sep 30 18:42:42 compute-0 nova_compute[265391]: 2025-09-30 18:42:42.342 2 DEBUG nova.virt.libvirt.host [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
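[editor's note] The two host.py probes above first look for a cgroup v1 CPU controller (missing) and then find the v2 one. On a cgroup-v2 host that amounts to checking the unified hierarchy's controller list; nova's own check may differ in detail, but the kernel interface is cgroup.controllers:

    # Hedged sketch of a cgroup v2 CPU-controller probe.
    from pathlib import Path

    controllers = Path('/sys/fs/cgroup/cgroup.controllers').read_text().split()
    print('cpu' in controllers)   # True on this host, per the log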
Sep 30 18:42:42 compute-0 nova_compute[265391]: 2025-09-30 18:42:42.343 2 DEBUG nova.virt.libvirt.driver [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Sep 30 18:42:42 compute-0 nova_compute[265391]: 2025-09-30 18:42:42.343 2 DEBUG nova.virt.hardware [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-09-30T18:42:27Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={},flavorid='dc3a14e6-3544-428c-a856-1da19a12bf48',id=3,is_public=True,memory_mb=1151,name='tempest-watcher_flavor-293987126',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Sep 30 18:42:42 compute-0 nova_compute[265391]: 2025-09-30 18:42:42.344 2 DEBUG nova.virt.hardware [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Sep 30 18:42:42 compute-0 nova_compute[265391]: 2025-09-30 18:42:42.345 2 DEBUG nova.virt.hardware [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Sep 30 18:42:42 compute-0 nova_compute[265391]: 2025-09-30 18:42:42.345 2 DEBUG nova.virt.hardware [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Sep 30 18:42:42 compute-0 nova_compute[265391]: 2025-09-30 18:42:42.346 2 DEBUG nova.virt.hardware [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Sep 30 18:42:42 compute-0 nova_compute[265391]: 2025-09-30 18:42:42.346 2 DEBUG nova.virt.hardware [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Sep 30 18:42:42 compute-0 nova_compute[265391]: 2025-09-30 18:42:42.347 2 DEBUG nova.virt.hardware [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Sep 30 18:42:42 compute-0 nova_compute[265391]: 2025-09-30 18:42:42.347 2 DEBUG nova.virt.hardware [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Sep 30 18:42:42 compute-0 nova_compute[265391]: 2025-09-30 18:42:42.348 2 DEBUG nova.virt.hardware [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Sep 30 18:42:42 compute-0 nova_compute[265391]: 2025-09-30 18:42:42.348 2 DEBUG nova.virt.hardware [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Sep 30 18:42:42 compute-0 nova_compute[265391]: 2025-09-30 18:42:42.349 2 DEBUG nova.virt.hardware [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
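[editor's note] With one vCPU and no flavor or image constraints, the only factorization within the 65536 limits is sockets=1, cores=1, threads=1, which is exactly what lands in the <topology> element of the guest XML further down. A toy enumeration of the idea (not nova's exact code):

    # Hedged sketch: candidate (sockets, cores, threads) triples are
    # factorizations of the vCPU count within the given limits.
    def possible_topologies(vcpus, limit=65536):
        for s in range(1, min(vcpus, limit) + 1):
            for c in range(1, min(vcpus, limit) + 1):
                for t in range(1, min(vcpus, limit) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))   # [(1, 1, 1)], matching the log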
Sep 30 18:42:42 compute-0 nova_compute[265391]: 2025-09-30 18:42:42.354 2 DEBUG oslo_concurrency.processutils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:42:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1928: 353 pgs: 353 active+clean; 88 MiB data, 374 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:42:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:42:42 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/78543690' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:42:42 compute-0 nova_compute[265391]: 2025-09-30 18:42:42.908 2 DEBUG oslo_concurrency.processutils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:42:42 compute-0 nova_compute[265391]: 2025-09-30 18:42:42.930 2 DEBUG nova.storage.rbd_utils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] rbd image 0bd62f93-1956-4b12-a38a-10deee907b16_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:42:42 compute-0 nova_compute[265391]: 2025-09-30 18:42:42.933 2 DEBUG oslo_concurrency.processutils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:42:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:42:43 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2795778409' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:42:43 compute-0 nova_compute[265391]: 2025-09-30 18:42:43.364 2 DEBUG oslo_concurrency.processutils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
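[editor's note] rbd_utils shells out to `ceph mon dump` (twice here, once per RBD-backed disk) to learn the monitor addresses that end up as <host> elements in the disk XML below. The same call with a minimal parse of its JSON output:

    # Hedged sketch: run the monitor dump nova executes above and list the mons.
    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True).stdout
    print([m['name'] for m in json.loads(out)['mons']])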
Sep 30 18:42:43 compute-0 nova_compute[265391]: 2025-09-30 18:42:43.366 2 DEBUG nova.virt.libvirt.vif [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:42:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-1791586084',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-1791586084',id=32,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=1151,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='63c45bef63ef4b9f895b3bab865e1a84',ramdisk_id='',reservation_id='r-1jmywl2f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-134702932',owner_user_name='tempest-TestExecuteWorkloadBalanceStrategy-134702932-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:42:35Z,user_data=None,user_id='5717e8cb8548429b948a23763350ab4a',uuid=0bd62f93-1956-4b12-a38a-10deee907b16,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a362175f-2dba-4a9f-bc07-4260625a8ce0", "address": "fa:16:3e:52:c0:eb", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa362175f-2d", "ovs_interfaceid": "a362175f-2dba-4a9f-bc07-4260625a8ce0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:42:43 compute-0 nova_compute[265391]: 2025-09-30 18:42:43.366 2 DEBUG nova.network.os_vif_util [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Converting VIF {"id": "a362175f-2dba-4a9f-bc07-4260625a8ce0", "address": "fa:16:3e:52:c0:eb", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa362175f-2d", "ovs_interfaceid": "a362175f-2dba-4a9f-bc07-4260625a8ce0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:42:43 compute-0 nova_compute[265391]: 2025-09-30 18:42:43.367 2 DEBUG nova.network.os_vif_util [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:c0:eb,bridge_name='br-int',has_traffic_filtering=True,id=a362175f-2dba-4a9f-bc07-4260625a8ce0,network=Network(c8484b9b-b34e-4c32-b987-029d8fcb2a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa362175f-2d') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:42:43 compute-0 nova_compute[265391]: 2025-09-30 18:42:43.368 2 DEBUG nova.objects.instance [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0bd62f93-1956-4b12-a38a-10deee907b16 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:42:43 compute-0 nova_compute[265391]: 2025-09-30 18:42:43.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:43 compute-0 ceph-mon[73755]: pgmap v1928: 353 pgs: 353 active+clean; 88 MiB data, 374 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:42:43 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/78543690' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:42:43 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2795778409' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:42:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:42:43.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:43 compute-0 nova_compute[265391]: 2025-09-30 18:42:43.876 2 DEBUG nova.virt.libvirt.driver [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] End _get_guest_xml xml=<domain type="kvm">
Sep 30 18:42:43 compute-0 nova_compute[265391]:   <uuid>0bd62f93-1956-4b12-a38a-10deee907b16</uuid>
Sep 30 18:42:43 compute-0 nova_compute[265391]:   <name>instance-00000020</name>
Sep 30 18:42:43 compute-0 nova_compute[265391]:   <memory>1178624</memory>
Sep 30 18:42:43 compute-0 nova_compute[265391]:   <vcpu>1</vcpu>
Sep 30 18:42:43 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:42:43 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteWorkloadBalanceStrategy-server-1791586084</nova:name>
Sep 30 18:42:43 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:42:42</nova:creationTime>
Sep 30 18:42:43 compute-0 nova_compute[265391]:       <nova:flavor name="tempest-watcher_flavor-293987126" id="dc3a14e6-3544-428c-a856-1da19a12bf48">
Sep 30 18:42:43 compute-0 nova_compute[265391]:         <nova:memory>1151</nova:memory>
Sep 30 18:42:43 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:42:43 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:42:43 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:42:43 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:42:43 compute-0 nova_compute[265391]:         <nova:extraSpecs/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:42:43 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:42:43 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:42:43 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:42:43 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:42:43 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:42:43 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:42:43 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:42:43 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:42:43 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:42:43 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:42:43 compute-0 nova_compute[265391]:         <nova:user uuid="5717e8cb8548429b948a23763350ab4a">tempest-TestExecuteWorkloadBalanceStrategy-134702932-project-admin</nova:user>
Sep 30 18:42:43 compute-0 nova_compute[265391]:         <nova:project uuid="63c45bef63ef4b9f895b3bab865e1a84">tempest-TestExecuteWorkloadBalanceStrategy-134702932</nova:project>
Sep 30 18:42:43 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:42:43 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:42:43 compute-0 nova_compute[265391]:         <nova:port uuid="a362175f-2dba-4a9f-bc07-4260625a8ce0">
Sep 30 18:42:43 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:42:43 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:42:43 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:42:43 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <system>
Sep 30 18:42:43 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:42:43 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:42:43 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:42:43 compute-0 nova_compute[265391]:       <entry name="serial">0bd62f93-1956-4b12-a38a-10deee907b16</entry>
Sep 30 18:42:43 compute-0 nova_compute[265391]:       <entry name="uuid">0bd62f93-1956-4b12-a38a-10deee907b16</entry>
Sep 30 18:42:43 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     </system>
Sep 30 18:42:43 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:42:43 compute-0 nova_compute[265391]:   <os>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="q35">hvm</type>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:   </os>
Sep 30 18:42:43 compute-0 nova_compute[265391]:   <features>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <vmcoreinfo/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:   </features>
Sep 30 18:42:43 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:42:43 compute-0 nova_compute[265391]:   <cpu mode="host-model" match="exact">
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <topology sockets="1" cores="1" threads="1"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:42:43 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:42:43 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/0bd62f93-1956-4b12-a38a-10deee907b16_disk">
Sep 30 18:42:43 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:       </source>
Sep 30 18:42:43 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:42:43 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:42:43 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:42:43 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/0bd62f93-1956-4b12-a38a-10deee907b16_disk.config">
Sep 30 18:42:43 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:       </source>
Sep 30 18:42:43 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:42:43 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:42:43 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:42:43 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:52:c0:eb"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:       <target dev="tapa362175f-2d"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:42:43 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/0bd62f93-1956-4b12-a38a-10deee907b16/console.log" append="off"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <video>
Sep 30 18:42:43 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     </video>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:42:43 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <controller type="usb" index="0"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:42:43 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:42:43 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:42:43 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:42:43 compute-0 nova_compute[265391]: </domain>
Sep 30 18:42:43 compute-0 nova_compute[265391]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
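[editor's note] The domain XML above (RBD-backed vda, config-drive cdrom on SATA, virtio NIC on tapa362175f-2d, VNC graphics) is what the libvirt driver hands to libvirt to define and start the guest. A bare-bones equivalent with libvirt-python, assuming the XML has been saved to a hypothetical local file:

    # Hedged sketch: define and start a domain from the XML logged above.
    import libvirt

    with open('instance-00000020.xml') as f:   # hypothetical file holding that XML
        xml = f.read()

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)   # persist the definition
        dom.create()                # boot the guest
    finally:
        conn.close()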
Sep 30 18:42:43 compute-0 nova_compute[265391]: 2025-09-30 18:42:43.877 2 DEBUG nova.compute.manager [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Preparing to wait for external event network-vif-plugged-a362175f-2dba-4a9f-bc07-4260625a8ce0 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:42:43 compute-0 nova_compute[265391]: 2025-09-30 18:42:43.877 2 DEBUG oslo_concurrency.lockutils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Acquiring lock "0bd62f93-1956-4b12-a38a-10deee907b16-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:42:43 compute-0 nova_compute[265391]: 2025-09-30 18:42:43.878 2 DEBUG oslo_concurrency.lockutils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "0bd62f93-1956-4b12-a38a-10deee907b16-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:42:43 compute-0 nova_compute[265391]: 2025-09-30 18:42:43.878 2 DEBUG oslo_concurrency.lockutils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "0bd62f93-1956-4b12-a38a-10deee907b16-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:42:43 compute-0 nova_compute[265391]: 2025-09-30 18:42:43.879 2 DEBUG nova.virt.libvirt.vif [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:42:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-1791586084',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-1791586084',id=32,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=1151,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='63c45bef63ef4b9f895b3bab865e1a84',ramdisk_id='',reservation_id='r-1jmywl2f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-134702932',owner_user_name='tempest-TestExecuteWorkloadBalanceStrategy-134702932-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:42:35Z,user_data=None,user_id='5717e8cb8548429b948a23763350ab4a',uuid=0bd62f93-1956-4b12-a38a-10deee907b16,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a362175f-2dba-4a9f-bc07-4260625a8ce0", "address": "fa:16:3e:52:c0:eb", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa362175f-2d", "ovs_interfaceid": "a362175f-2dba-4a9f-bc07-4260625a8ce0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Sep 30 18:42:43 compute-0 nova_compute[265391]: 2025-09-30 18:42:43.879 2 DEBUG nova.network.os_vif_util [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Converting VIF {"id": "a362175f-2dba-4a9f-bc07-4260625a8ce0", "address": "fa:16:3e:52:c0:eb", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa362175f-2d", "ovs_interfaceid": "a362175f-2dba-4a9f-bc07-4260625a8ce0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:42:43 compute-0 nova_compute[265391]: 2025-09-30 18:42:43.879 2 DEBUG nova.network.os_vif_util [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:c0:eb,bridge_name='br-int',has_traffic_filtering=True,id=a362175f-2dba-4a9f-bc07-4260625a8ce0,network=Network(c8484b9b-b34e-4c32-b987-029d8fcb2a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa362175f-2d') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:42:43 compute-0 nova_compute[265391]: 2025-09-30 18:42:43.880 2 DEBUG os_vif [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:c0:eb,bridge_name='br-int',has_traffic_filtering=True,id=a362175f-2dba-4a9f-bc07-4260625a8ce0,network=Network(c8484b9b-b34e-4c32-b987-029d8fcb2a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa362175f-2d') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Sep 30 18:42:43 compute-0 nova_compute[265391]: 2025-09-30 18:42:43.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:43 compute-0 nova_compute[265391]: 2025-09-30 18:42:43.881 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:42:43 compute-0 nova_compute[265391]: 2025-09-30 18:42:43.881 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:42:43 compute-0 nova_compute[265391]: 2025-09-30 18:42:43.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:43 compute-0 nova_compute[265391]: 2025-09-30 18:42:43.882 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '990b8b1d-9c6f-5c00-b165-dbb844e49ff5', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:42:43 compute-0 nova_compute[265391]: 2025-09-30 18:42:43.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:42:43.883Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:42:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:42:43.884Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:42:43 compute-0 nova_compute[265391]: 2025-09-30 18:42:43.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:42:43 compute-0 nova_compute[265391]: 2025-09-30 18:42:43.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:43 compute-0 nova_compute[265391]: 2025-09-30 18:42:43.888 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa362175f-2d, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:42:43 compute-0 nova_compute[265391]: 2025-09-30 18:42:43.888 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tapa362175f-2d, col_values=(('qos', UUID('cdd527f0-d41f-4c24-9d69-d7c598a66fac')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:42:43 compute-0 nova_compute[265391]: 2025-09-30 18:42:43.888 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tapa362175f-2d, col_values=(('external_ids', {'iface-id': 'a362175f-2dba-4a9f-bc07-4260625a8ce0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:52:c0:eb', 'vm-uuid': '0bd62f93-1956-4b12-a38a-10deee907b16'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:42:43 compute-0 nova_compute[265391]: 2025-09-30 18:42:43.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:43 compute-0 NetworkManager[45059]: <info>  [1759257763.8902] manager: (tapa362175f-2d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/105)
Sep 30 18:42:43 compute-0 nova_compute[265391]: 2025-09-30 18:42:43.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:42:43 compute-0 nova_compute[265391]: 2025-09-30 18:42:43.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:43 compute-0 nova_compute[265391]: 2025-09-30 18:42:43.898 2 INFO os_vif [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:c0:eb,bridge_name='br-int',has_traffic_filtering=True,id=a362175f-2dba-4a9f-bc07-4260625a8ce0,network=Network(c8484b9b-b34e-4c32-b987-029d8fcb2a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa362175f-2d')
Sep 30 18:42:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:42:43.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:42:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:42:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:42:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:42:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1929: 353 pgs: 353 active+clean; 88 MiB data, 374 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:42:45 compute-0 nova_compute[265391]: 2025-09-30 18:42:45.450 2 DEBUG nova.virt.libvirt.driver [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:42:45 compute-0 nova_compute[265391]: 2025-09-30 18:42:45.451 2 DEBUG nova.virt.libvirt.driver [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:42:45 compute-0 nova_compute[265391]: 2025-09-30 18:42:45.451 2 DEBUG nova.virt.libvirt.driver [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] No VIF found with MAC fa:16:3e:52:c0:eb, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Sep 30 18:42:45 compute-0 nova_compute[265391]: 2025-09-30 18:42:45.451 2 INFO nova.virt.libvirt.driver [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Using config drive
Sep 30 18:42:45 compute-0 nova_compute[265391]: 2025-09-30 18:42:45.471 2 DEBUG nova.storage.rbd_utils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] rbd image 0bd62f93-1956-4b12-a38a-10deee907b16_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:42:45 compute-0 ceph-mon[73755]: pgmap v1929: 353 pgs: 353 active+clean; 88 MiB data, 374 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:42:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:42:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:42:45.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:42:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:42:45.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:45 compute-0 nova_compute[265391]: 2025-09-30 18:42:45.986 2 WARNING neutronclient.v2_0.client [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:42:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:42:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1930: 353 pgs: 353 active+clean; 88 MiB data, 374 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:42:46 compute-0 nova_compute[265391]: 2025-09-30 18:42:46.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:46 compute-0 ceph-mon[73755]: pgmap v1930: 353 pgs: 353 active+clean; 88 MiB data, 374 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:42:46 compute-0 nova_compute[265391]: 2025-09-30 18:42:46.894 2 INFO nova.virt.libvirt.driver [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Creating config drive at /var/lib/nova/instances/0bd62f93-1956-4b12-a38a-10deee907b16/disk.config
Sep 30 18:42:46 compute-0 nova_compute[265391]: 2025-09-30 18:42:46.900 2 DEBUG oslo_concurrency.processutils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0bd62f93-1956-4b12-a38a-10deee907b16/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmpj9x63vnt execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:42:47 compute-0 nova_compute[265391]: 2025-09-30 18:42:47.034 2 DEBUG oslo_concurrency.processutils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0bd62f93-1956-4b12-a38a-10deee907b16/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmpj9x63vnt" returned: 0 in 0.134s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:42:47 compute-0 nova_compute[265391]: 2025-09-30 18:42:47.073 2 DEBUG nova.storage.rbd_utils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] rbd image 0bd62f93-1956-4b12-a38a-10deee907b16_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:42:47 compute-0 nova_compute[265391]: 2025-09-30 18:42:47.077 2 DEBUG oslo_concurrency.processutils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0bd62f93-1956-4b12-a38a-10deee907b16/disk.config 0bd62f93-1956-4b12-a38a-10deee907b16_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:42:47 compute-0 nova_compute[265391]: 2025-09-30 18:42:47.292 2 DEBUG oslo_concurrency.processutils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0bd62f93-1956-4b12-a38a-10deee907b16/disk.config 0bd62f93-1956-4b12-a38a-10deee907b16_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:42:47 compute-0 nova_compute[265391]: 2025-09-30 18:42:47.294 2 INFO nova.virt.libvirt.driver [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Deleting local config drive /var/lib/nova/instances/0bd62f93-1956-4b12-a38a-10deee907b16/disk.config because it was imported into RBD.
Sep 30 18:42:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:42:47.358Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:42:47 compute-0 kernel: tapa362175f-2d: entered promiscuous mode
Sep 30 18:42:47 compute-0 NetworkManager[45059]: <info>  [1759257767.3690] manager: (tapa362175f-2d): new Tun device (/org/freedesktop/NetworkManager/Devices/106)
Sep 30 18:42:47 compute-0 ovn_controller[156242]: 2025-09-30T18:42:47Z|00275|binding|INFO|Claiming lport a362175f-2dba-4a9f-bc07-4260625a8ce0 for this chassis.
Sep 30 18:42:47 compute-0 ovn_controller[156242]: 2025-09-30T18:42:47Z|00276|binding|INFO|a362175f-2dba-4a9f-bc07-4260625a8ce0: Claiming fa:16:3e:52:c0:eb 10.100.0.8
Sep 30 18:42:47 compute-0 nova_compute[265391]: 2025-09-30 18:42:47.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:47 compute-0 nova_compute[265391]: 2025-09-30 18:42:47.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:47 compute-0 ovn_controller[156242]: 2025-09-30T18:42:47Z|00277|binding|INFO|Setting lport a362175f-2dba-4a9f-bc07-4260625a8ce0 ovn-installed in OVS
Sep 30 18:42:47 compute-0 nova_compute[265391]: 2025-09-30 18:42:47.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:47 compute-0 ovn_controller[156242]: 2025-09-30T18:42:47Z|00278|binding|INFO|Setting lport a362175f-2dba-4a9f-bc07-4260625a8ce0 up in Southbound
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:47.444 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:c0:eb 10.100.0.8'], port_security=['fa:16:3e:52:c0:eb 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0bd62f93-1956-4b12-a38a-10deee907b16', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63c45bef63ef4b9f895b3bab865e1a84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a9025550-4c18-4f21-a560-5b6f52684803', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55e305e6-0f4d-40bc-a70b-ac91f882ec57, chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=a362175f-2dba-4a9f-bc07-4260625a8ce0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:47.445 166158 INFO neutron.agent.ovn.metadata.agent [-] Port a362175f-2dba-4a9f-bc07-4260625a8ce0 in datapath c8484b9b-b34e-4c32-b987-029d8fcb2a28 bound to our chassis
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:47.446 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c8484b9b-b34e-4c32-b987-029d8fcb2a28
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:47.461 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[bd33e67f-c019-4895-9738-5b415ae0b605]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:47.462 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc8484b9b-b1 in ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28 namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:47.464 292173 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc8484b9b-b0 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:47.464 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[1fafeeb4-0f07-4062-a067-da9ad6507a5c]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:47.465 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[9d555de6-cb53-451f-b619-b07a7639246a]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:47.477 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[4f4def11-bd4b-46d2-a4ae-f1df7a9119f2]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:42:47 compute-0 systemd-machined[219917]: New machine qemu-24-instance-00000020.
Sep 30 18:42:47 compute-0 systemd[1]: Started Virtual Machine qemu-24-instance-00000020.
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:47.495 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[fef024f8-9fb2-451d-a56c-46fce5cabae3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:42:47 compute-0 systemd-udevd[350981]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:42:47 compute-0 NetworkManager[45059]: <info>  [1759257767.5255] device (tapa362175f-2d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 18:42:47 compute-0 NetworkManager[45059]: <info>  [1759257767.5268] device (tapa362175f-2d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:47.527 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[261c7327-6b68-4ae1-b0ae-a770a2643fb9]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:47.537 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[48620ad9-f9ec-4ccd-9149-459fdc01cb07]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:42:47 compute-0 NetworkManager[45059]: <info>  [1759257767.5382] manager: (tapc8484b9b-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/107)
Sep 30 18:42:47 compute-0 podman[350937]: 2025-09-30 18:42:47.559744539 +0000 UTC m=+0.109681778 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4)
Sep 30 18:42:47 compute-0 podman[350939]: 2025-09-30 18:42:47.563626798 +0000 UTC m=+0.113956567 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS)
Sep 30 18:42:47 compute-0 podman[350940]: 2025-09-30 18:42:47.569773496 +0000 UTC m=+0.105006249 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64)
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:47.577 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[47893f14-f528-41f0-9a67-3d97fa2bfbc1]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:47.580 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[25e7d734-39d1-4d01-b16f-0ccb39f551d4]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:42:47 compute-0 NetworkManager[45059]: <info>  [1759257767.5987] device (tapc8484b9b-b0): carrier: link connected
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:47.605 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[fd53c9d5-4631-4c14-8b5c-a2a18a0bdc8c]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:47.620 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[4648a39f-4d3a-44bb-be72-309c032d5024]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc8484b9b-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:48:bc:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636629, 'reachable_time': 26334, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351032, 'error': None, 'target': 'ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:47.637 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[bc22ae11-11f1-4c5a-b83c-ffb57e507e95]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe48:bccf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 636629, 'tstamp': 636629}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351033, 'error': None, 'target': 'ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:47.661 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[a8c5daea-8d76-4167-bb5c-f854a7cfd468]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc8484b9b-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:48:bc:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636629, 'reachable_time': 26334, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 351034, 'error': None, 'target': 'ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:47.692 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[a81c7903-33d7-49e5-bb74-4d5c9d40e2c1]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:42:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:42:47.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:47.762 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[bfeade40-3a6f-4cbc-b61b-cf2e43ddfaad]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:47.764 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8484b9b-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:47.765 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:47.765 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc8484b9b-b0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:42:47 compute-0 nova_compute[265391]: 2025-09-30 18:42:47.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:47 compute-0 NetworkManager[45059]: <info>  [1759257767.7678] manager: (tapc8484b9b-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/108)
Sep 30 18:42:47 compute-0 kernel: tapc8484b9b-b0: entered promiscuous mode
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:47.769 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc8484b9b-b0, col_values=(('external_ids', {'iface-id': 'd2e69f29-6b3a-46dc-9ed7-12031e1b7d2b'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:42:47 compute-0 ovn_controller[156242]: 2025-09-30T18:42:47Z|00279|binding|INFO|Releasing lport d2e69f29-6b3a-46dc-9ed7-12031e1b7d2b from this chassis (sb_readonly=0)
Sep 30 18:42:47 compute-0 nova_compute[265391]: 2025-09-30 18:42:47.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:47.774 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[751af3e4-5c3b-4056-a35a-ef72b8645450]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:47.775 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c8484b9b-b34e-4c32-b987-029d8fcb2a28.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c8484b9b-b34e-4c32-b987-029d8fcb2a28.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:47.775 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c8484b9b-b34e-4c32-b987-029d8fcb2a28.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c8484b9b-b34e-4c32-b987-029d8fcb2a28.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:47.775 166158 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for c8484b9b-b34e-4c32-b987-029d8fcb2a28 disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:47.775 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c8484b9b-b34e-4c32-b987-029d8fcb2a28.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c8484b9b-b34e-4c32-b987-029d8fcb2a28.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:47.776 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[3eb4dc2a-abe2-4fe7-8a4b-2923b05449f0]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:47.777 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c8484b9b-b34e-4c32-b987-029d8fcb2a28.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c8484b9b-b34e-4c32-b987-029d8fcb2a28.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:47.777 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[1ef3eadb-8655-4998-b258-9ba7f0aef01c]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:47.778 166158 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: global
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]:     log         /dev/log local0 debug
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]:     log-tag     haproxy-metadata-proxy-c8484b9b-b34e-4c32-b987-029d8fcb2a28
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]:     user        root
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]:     group       root
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]:     maxconn     1024
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]:     pidfile     /var/lib/neutron/external/pids/c8484b9b-b34e-4c32-b987-029d8fcb2a28.pid.haproxy
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]:     daemon
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: defaults
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]:     log global
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]:     mode http
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]:     option httplog
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]:     option dontlognull
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]:     option http-server-close
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]:     option forwardfor
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]:     retries                 3
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]:     timeout http-request    30s
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]:     timeout connect         30s
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]:     timeout client          32s
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]:     timeout server          32s
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]:     timeout http-keep-alive 30s
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: listen listener
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]:     bind 169.254.169.254:80
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]:     
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]:     server metadata /var/lib/neutron/metadata_proxy
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]:     http-request add-header X-OVN-Network-ID c8484b9b-b34e-4c32-b987-029d8fcb2a28
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Sep 30 18:42:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:47.779 166158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'env', 'PROCESS_TAG=haproxy-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c8484b9b-b34e-4c32-b987-029d8fcb2a28.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Sep 30 18:42:47 compute-0 nova_compute[265391]: 2025-09-30 18:42:47.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:42:47.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:47 compute-0 nova_compute[265391]: 2025-09-30 18:42:47.982 2 DEBUG nova.compute.manager [req-f0a2b655-976e-462f-8e7b-1d5c95abb557 req-2c221344-b237-43e7-9ea9-3ba3c0e6f30b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Received event network-vif-plugged-a362175f-2dba-4a9f-bc07-4260625a8ce0 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:42:47 compute-0 nova_compute[265391]: 2025-09-30 18:42:47.983 2 DEBUG oslo_concurrency.lockutils [req-f0a2b655-976e-462f-8e7b-1d5c95abb557 req-2c221344-b237-43e7-9ea9-3ba3c0e6f30b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "0bd62f93-1956-4b12-a38a-10deee907b16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:42:47 compute-0 nova_compute[265391]: 2025-09-30 18:42:47.984 2 DEBUG oslo_concurrency.lockutils [req-f0a2b655-976e-462f-8e7b-1d5c95abb557 req-2c221344-b237-43e7-9ea9-3ba3c0e6f30b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "0bd62f93-1956-4b12-a38a-10deee907b16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:42:47 compute-0 nova_compute[265391]: 2025-09-30 18:42:47.984 2 DEBUG oslo_concurrency.lockutils [req-f0a2b655-976e-462f-8e7b-1d5c95abb557 req-2c221344-b237-43e7-9ea9-3ba3c0e6f30b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "0bd62f93-1956-4b12-a38a-10deee907b16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:42:47 compute-0 nova_compute[265391]: 2025-09-30 18:42:47.984 2 DEBUG nova.compute.manager [req-f0a2b655-976e-462f-8e7b-1d5c95abb557 req-2c221344-b237-43e7-9ea9-3ba3c0e6f30b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Processing event network-vif-plugged-a362175f-2dba-4a9f-bc07-4260625a8ce0 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:42:48 compute-0 podman[351109]: 2025-09-30 18:42:48.167501452 +0000 UTC m=+0.062897611 container create c03284355065eecc0d4ef5163c9408261cf5cc7836df398abd949f44ba4867e0 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest)
Sep 30 18:42:48 compute-0 systemd[1]: Started libpod-conmon-c03284355065eecc0d4ef5163c9408261cf5cc7836df398abd949f44ba4867e0.scope.
Sep 30 18:42:48 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:42:48 compute-0 podman[351109]: 2025-09-30 18:42:48.134628571 +0000 UTC m=+0.030024780 image pull 8925e336c8a9c8e5b40d98e4715ad60992d6f21ad0a398170a686c34f922c024 38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Sep 30 18:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd425e4ebe9c3c9f621bb2c2cdd8421e3a7e4466b2ab8604de587c535df6051/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Sep 30 18:42:48 compute-0 podman[351109]: 2025-09-30 18:42:48.239850634 +0000 UTC m=+0.135246823 container init c03284355065eecc0d4ef5163c9408261cf5cc7836df398abd949f44ba4867e0 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:42:48 compute-0 podman[351109]: 2025-09-30 18:42:48.246153895 +0000 UTC m=+0.141550064 container start c03284355065eecc0d4ef5163c9408261cf5cc7836df398abd949f44ba4867e0 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=watcher_latest)
Sep 30 18:42:48 compute-0 neutron-haproxy-ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28[351125]: [NOTICE]   (351129) : New worker (351131) forked
Sep 30 18:42:48 compute-0 neutron-haproxy-ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28[351125]: [NOTICE]   (351129) : Loading success.
Sep 30 18:42:48 compute-0 nova_compute[265391]: 2025-09-30 18:42:48.353 2 DEBUG nova.compute.manager [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:42:48 compute-0 nova_compute[265391]: 2025-09-30 18:42:48.359 2 DEBUG nova.virt.libvirt.driver [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Sep 30 18:42:48 compute-0 nova_compute[265391]: 2025-09-30 18:42:48.363 2 INFO nova.virt.libvirt.driver [-] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Instance spawned successfully.
Sep 30 18:42:48 compute-0 nova_compute[265391]: 2025-09-30 18:42:48.364 2 DEBUG nova.virt.libvirt.driver [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Sep 30 18:42:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1931: 353 pgs: 353 active+clean; 88 MiB data, 374 MiB used, 40 GiB / 40 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Sep 30 18:42:48 compute-0 nova_compute[265391]: 2025-09-30 18:42:48.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:42:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:42:48] "GET /metrics HTTP/1.1" 200 46709 "" "Prometheus/2.51.0"
Sep 30 18:42:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:42:48] "GET /metrics HTTP/1.1" 200 46709 "" "Prometheus/2.51.0"
Sep 30 18:42:48 compute-0 nova_compute[265391]: 2025-09-30 18:42:48.881 2 DEBUG nova.virt.libvirt.driver [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:42:48 compute-0 nova_compute[265391]: 2025-09-30 18:42:48.882 2 DEBUG nova.virt.libvirt.driver [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:42:48 compute-0 nova_compute[265391]: 2025-09-30 18:42:48.882 2 DEBUG nova.virt.libvirt.driver [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:42:48 compute-0 nova_compute[265391]: 2025-09-30 18:42:48.883 2 DEBUG nova.virt.libvirt.driver [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:42:48 compute-0 nova_compute[265391]: 2025-09-30 18:42:48.883 2 DEBUG nova.virt.libvirt.driver [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:42:48 compute-0 nova_compute[265391]: 2025-09-30 18:42:48.883 2 DEBUG nova.virt.libvirt.driver [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:42:48 compute-0 nova_compute[265391]: 2025-09-30 18:42:48.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:42:48.902Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:42:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:42:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:42:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:42:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:42:49 compute-0 nova_compute[265391]: 2025-09-30 18:42:49.395 2 INFO nova.compute.manager [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Took 13.39 seconds to spawn the instance on the hypervisor.
Sep 30 18:42:49 compute-0 nova_compute[265391]: 2025-09-30 18:42:49.395 2 DEBUG nova.compute.manager [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Sep 30 18:42:49 compute-0 ceph-mon[73755]: pgmap v1931: 353 pgs: 353 active+clean; 88 MiB data, 374 MiB used, 40 GiB / 40 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Sep 30 18:42:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:42:49.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:49 compute-0 nova_compute[265391]: 2025-09-30 18:42:49.923 2 INFO nova.compute.manager [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Took 19.11 seconds to build instance.
Sep 30 18:42:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:42:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:42:49.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:42:50 compute-0 nova_compute[265391]: 2025-09-30 18:42:50.049 2 DEBUG nova.compute.manager [req-f29142c0-2ecf-45d6-ae28-a9c590fa52f8 req-b6eab166-76da-446c-8dba-6cc2cbee6318 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Received event network-vif-plugged-a362175f-2dba-4a9f-bc07-4260625a8ce0 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:42:50 compute-0 nova_compute[265391]: 2025-09-30 18:42:50.049 2 DEBUG oslo_concurrency.lockutils [req-f29142c0-2ecf-45d6-ae28-a9c590fa52f8 req-b6eab166-76da-446c-8dba-6cc2cbee6318 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "0bd62f93-1956-4b12-a38a-10deee907b16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:42:50 compute-0 nova_compute[265391]: 2025-09-30 18:42:50.049 2 DEBUG oslo_concurrency.lockutils [req-f29142c0-2ecf-45d6-ae28-a9c590fa52f8 req-b6eab166-76da-446c-8dba-6cc2cbee6318 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "0bd62f93-1956-4b12-a38a-10deee907b16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:42:50 compute-0 nova_compute[265391]: 2025-09-30 18:42:50.050 2 DEBUG oslo_concurrency.lockutils [req-f29142c0-2ecf-45d6-ae28-a9c590fa52f8 req-b6eab166-76da-446c-8dba-6cc2cbee6318 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "0bd62f93-1956-4b12-a38a-10deee907b16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:42:50 compute-0 nova_compute[265391]: 2025-09-30 18:42:50.050 2 DEBUG nova.compute.manager [req-f29142c0-2ecf-45d6-ae28-a9c590fa52f8 req-b6eab166-76da-446c-8dba-6cc2cbee6318 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] No waiting events found dispatching network-vif-plugged-a362175f-2dba-4a9f-bc07-4260625a8ce0 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:42:50 compute-0 nova_compute[265391]: 2025-09-30 18:42:50.050 2 WARNING nova.compute.manager [req-f29142c0-2ecf-45d6-ae28-a9c590fa52f8 req-b6eab166-76da-446c-8dba-6cc2cbee6318 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Received unexpected event network-vif-plugged-a362175f-2dba-4a9f-bc07-4260625a8ce0 for instance with vm_state active and task_state None.
Sep 30 18:42:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1932: 353 pgs: 353 active+clean; 88 MiB data, 374 MiB used, 40 GiB / 40 GiB avail; 7.7 KiB/s rd, 13 KiB/s wr, 10 op/s
Sep 30 18:42:50 compute-0 nova_compute[265391]: 2025-09-30 18:42:50.428 2 DEBUG oslo_concurrency.lockutils [None req-496b412e-603b-4d95-adbd-98c55d99cd67 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "0bd62f93-1956-4b12-a38a-10deee907b16" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.634s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:42:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:42:51 compute-0 ceph-mon[73755]: pgmap v1932: 353 pgs: 353 active+clean; 88 MiB data, 374 MiB used, 40 GiB / 40 GiB avail; 7.7 KiB/s rd, 13 KiB/s wr, 10 op/s
Sep 30 18:42:51 compute-0 nova_compute[265391]: 2025-09-30 18:42:51.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:42:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:42:51.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:42:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:42:51.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:42:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:42:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1933: 353 pgs: 353 active+clean; 88 MiB data, 374 MiB used, 40 GiB / 40 GiB avail; 7.7 KiB/s rd, 13 KiB/s wr, 10 op/s
Sep 30 18:42:52 compute-0 nova_compute[265391]: 2025-09-30 18:42:52.430 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:42:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:42:53 compute-0 nova_compute[265391]: 2025-09-30 18:42:53.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:42:53 compute-0 nova_compute[265391]: 2025-09-30 18:42:53.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:42:53 compute-0 ceph-mon[73755]: pgmap v1933: 353 pgs: 353 active+clean; 88 MiB data, 374 MiB used, 40 GiB / 40 GiB avail; 7.7 KiB/s rd, 13 KiB/s wr, 10 op/s
Sep 30 18:42:53 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/222232711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:42:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:42:53.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:42:53.885Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:42:53 compute-0 nova_compute[265391]: 2025-09-30 18:42:53.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:53 compute-0 nova_compute[265391]: 2025-09-30 18:42:53.946 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:42:53 compute-0 nova_compute[265391]: 2025-09-30 18:42:53.947 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:42:53 compute-0 nova_compute[265391]: 2025-09-30 18:42:53.947 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:42:53 compute-0 nova_compute[265391]: 2025-09-30 18:42:53.948 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:42:53 compute-0 nova_compute[265391]: 2025-09-30 18:42:53.948 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:42:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:42:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:42:53.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:42:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:42:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:42:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:42:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:42:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:54.334 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:42:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:54.335 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:42:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:42:54.335 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:42:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:42:54 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1660351972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:42:54 compute-0 nova_compute[265391]: 2025-09-30 18:42:54.387 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:42:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1934: 353 pgs: 353 active+clean; 88 MiB data, 374 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:42:54 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1660351972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:42:55 compute-0 nova_compute[265391]: 2025-09-30 18:42:55.432 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:42:55 compute-0 nova_compute[265391]: 2025-09-30 18:42:55.432 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:42:55 compute-0 nova_compute[265391]: 2025-09-30 18:42:55.573 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:42:55 compute-0 nova_compute[265391]: 2025-09-30 18:42:55.574 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:42:55 compute-0 nova_compute[265391]: 2025-09-30 18:42:55.612 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.038s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:42:55 compute-0 nova_compute[265391]: 2025-09-30 18:42:55.613 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4037MB free_disk=39.971275329589844GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:42:55 compute-0 nova_compute[265391]: 2025-09-30 18:42:55.613 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:42:55 compute-0 nova_compute[265391]: 2025-09-30 18:42:55.613 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:42:55 compute-0 ceph-mon[73755]: pgmap v1934: 353 pgs: 353 active+clean; 88 MiB data, 374 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:42:55 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/166605120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:42:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:42:55.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:42:55.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:42:56 compute-0 nova_compute[265391]: 2025-09-30 18:42:56.262 2 DEBUG oslo_concurrency.lockutils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Acquiring lock "1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:42:56 compute-0 nova_compute[265391]: 2025-09-30 18:42:56.263 2 DEBUG oslo_concurrency.lockutils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:42:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1935: 353 pgs: 353 active+clean; 88 MiB data, 374 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:42:56 compute-0 sshd[192864]: Timeout before authentication for connection from 115.190.39.222 to 38.102.83.202, pid = 348502
Sep 30 18:42:56 compute-0 nova_compute[265391]: 2025-09-30 18:42:56.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:56 compute-0 nova_compute[265391]: 2025-09-30 18:42:56.672 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Instance 0bd62f93-1956-4b12-a38a-10deee907b16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 1151, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Sep 30 18:42:56 compute-0 nova_compute[265391]: 2025-09-30 18:42:56.779 2 DEBUG nova.compute.manager [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Sep 30 18:42:57 compute-0 nova_compute[265391]: 2025-09-30 18:42:57.181 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Instance 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4 has been scheduled to this compute host, the scheduler has made an allocation against this compute node but the instance has yet to start. Skipping heal of allocation: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 1151, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1797
Sep 30 18:42:57 compute-0 nova_compute[265391]: 2025-09-30 18:42:57.181 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:42:57 compute-0 nova_compute[265391]: 2025-09-30 18:42:57.182 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1663MB phys_disk=39GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:42:55 up  1:46,  0 user,  load average: 1.57, 0.99, 0.90\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_None': '1', 'num_os_type_None': '1', 'num_proj_63c45bef63ef4b9f895b3bab865e1a84': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:42:57 compute-0 nova_compute[265391]: 2025-09-30 18:42:57.232 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:42:57 compute-0 nova_compute[265391]: 2025-09-30 18:42:57.314 2 DEBUG oslo_concurrency.lockutils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:42:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:42:57.359Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:42:57 compute-0 ceph-mgr[74051]: [dashboard INFO request] [192.168.122.100:55166] [POST] [200] [0.002s] [4.0B] [e4f34bc8-a90f-45e3-bae1-f4753db8d812] /api/prometheus_receiver
Sep 30 18:42:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:42:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3436248168' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:42:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:42:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3436248168' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:42:57 compute-0 ceph-mon[73755]: pgmap v1935: 353 pgs: 353 active+clean; 88 MiB data, 374 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:42:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3436248168' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:42:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3436248168' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:42:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:42:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2625569122' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:42:57 compute-0 nova_compute[265391]: 2025-09-30 18:42:57.705 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:42:57 compute-0 nova_compute[265391]: 2025-09-30 18:42:57.713 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:42:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:42:57.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:42:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:42:57.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:42:58 compute-0 nova_compute[265391]: 2025-09-30 18:42:58.221 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:42:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1936: 353 pgs: 353 active+clean; 88 MiB data, 374 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Sep 30 18:42:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2625569122' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:42:58 compute-0 nova_compute[265391]: 2025-09-30 18:42:58.730 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:42:58 compute-0 nova_compute[265391]: 2025-09-30 18:42:58.730 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.117s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:42:58 compute-0 nova_compute[265391]: 2025-09-30 18:42:58.731 2 DEBUG oslo_concurrency.lockutils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.417s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:42:58 compute-0 nova_compute[265391]: 2025-09-30 18:42:58.759 2 DEBUG nova.virt.hardware [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Sep 30 18:42:58 compute-0 nova_compute[265391]: 2025-09-30 18:42:58.760 2 INFO nova.compute.claims [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Claim successful on node compute-0.ctlplane.example.com
Sep 30 18:42:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:42:58] "GET /metrics HTTP/1.1" 200 46721 "" "Prometheus/2.51.0"
Sep 30 18:42:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:42:58] "GET /metrics HTTP/1.1" 200 46721 "" "Prometheus/2.51.0"
Sep 30 18:42:58 compute-0 nova_compute[265391]: 2025-09-30 18:42:58.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:42:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:42:58.902Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:42:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:42:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:42:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:42:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:42:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:42:59 compute-0 ceph-mon[73755]: pgmap v1936: 353 pgs: 353 active+clean; 88 MiB data, 374 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Sep 30 18:42:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:42:59.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:42:59 compute-0 podman[276673]: time="2025-09-30T18:42:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:42:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:42:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42600 "" "Go-http-client/1.1"
Sep 30 18:42:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:42:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10768 "" "Go-http-client/1.1"
Sep 30 18:42:59 compute-0 nova_compute[265391]: 2025-09-30 18:42:59.828 2 DEBUG oslo_concurrency.processutils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:42:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:42:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:42:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:42:59.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:00 compute-0 sudo[351219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:43:00 compute-0 sudo[351219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:43:00 compute-0 sudo[351219]: pam_unix(sudo:session): session closed for user root
Sep 30 18:43:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:43:00 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/124713597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:43:00 compute-0 nova_compute[265391]: 2025-09-30 18:43:00.295 2 DEBUG oslo_concurrency.processutils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:43:00 compute-0 nova_compute[265391]: 2025-09-30 18:43:00.303 2 DEBUG nova.compute.provider_tree [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:43:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1937: 353 pgs: 353 active+clean; 88 MiB data, 374 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 65 op/s
Sep 30 18:43:00 compute-0 ovn_controller[156242]: 2025-09-30T18:43:00Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:52:c0:eb 10.100.0.8
Sep 30 18:43:00 compute-0 ovn_controller[156242]: 2025-09-30T18:43:00Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:52:c0:eb 10.100.0.8
Sep 30 18:43:00 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/124713597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:43:00 compute-0 nova_compute[265391]: 2025-09-30 18:43:00.811 2 DEBUG nova.scheduler.client.report [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:43:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:43:01 compute-0 nova_compute[265391]: 2025-09-30 18:43:01.326 2 DEBUG oslo_concurrency.lockutils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.595s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:43:01 compute-0 nova_compute[265391]: 2025-09-30 18:43:01.327 2 DEBUG nova.compute.manager [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Sep 30 18:43:01 compute-0 openstack_network_exporter[279566]: ERROR   18:43:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:43:01 compute-0 openstack_network_exporter[279566]: ERROR   18:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:43:01 compute-0 openstack_network_exporter[279566]: ERROR   18:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:43:01 compute-0 openstack_network_exporter[279566]: ERROR   18:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:43:01 compute-0 openstack_network_exporter[279566]: ERROR   18:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:43:01 compute-0 sudo[351246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:43:01 compute-0 sudo[351246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:43:01 compute-0 sudo[351246]: pam_unix(sudo:session): session closed for user root
Sep 30 18:43:01 compute-0 sudo[351272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:43:01 compute-0 sudo[351272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:43:01 compute-0 nova_compute[265391]: 2025-09-30 18:43:01.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:01 compute-0 ceph-mon[73755]: pgmap v1937: 353 pgs: 353 active+clean; 88 MiB data, 374 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 65 op/s
Sep 30 18:43:01 compute-0 nova_compute[265391]: 2025-09-30 18:43:01.731 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:43:01 compute-0 nova_compute[265391]: 2025-09-30 18:43:01.731 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:43:01 compute-0 nova_compute[265391]: 2025-09-30 18:43:01.731 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:43:01 compute-0 nova_compute[265391]: 2025-09-30 18:43:01.731 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:43:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:43:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:43:01.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:43:01 compute-0 nova_compute[265391]: 2025-09-30 18:43:01.838 2 DEBUG nova.compute.manager [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Sep 30 18:43:01 compute-0 nova_compute[265391]: 2025-09-30 18:43:01.840 2 DEBUG nova.network.neutron [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Sep 30 18:43:01 compute-0 nova_compute[265391]: 2025-09-30 18:43:01.841 2 WARNING neutronclient.v2_0.client [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:43:01 compute-0 nova_compute[265391]: 2025-09-30 18:43:01.841 2 WARNING neutronclient.v2_0.client [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:43:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:43:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:43:01.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:43:02 compute-0 sudo[351272]: pam_unix(sudo:session): session closed for user root
Sep 30 18:43:02 compute-0 nova_compute[265391]: 2025-09-30 18:43:02.352 2 INFO nova.virt.libvirt.driver [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Sep 30 18:43:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:43:02 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:43:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:43:02 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:43:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1938: 353 pgs: 353 active+clean; 88 MiB data, 374 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 65 op/s
Sep 30 18:43:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:43:02 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:43:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:43:02 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:43:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:43:02 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:43:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:43:02 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:43:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:43:02 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:43:02 compute-0 nova_compute[265391]: 2025-09-30 18:43:02.423 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:43:02 compute-0 sudo[351331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:43:02 compute-0 sudo[351331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:43:02 compute-0 sudo[351331]: pam_unix(sudo:session): session closed for user root
Sep 30 18:43:02 compute-0 sudo[351356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:43:02 compute-0 sudo[351356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:43:02 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:43:02 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:43:02 compute-0 ceph-mon[73755]: pgmap v1938: 353 pgs: 353 active+clean; 88 MiB data, 374 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 65 op/s
Sep 30 18:43:02 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:43:02 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:43:02 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:43:02 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:43:02 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:43:02 compute-0 nova_compute[265391]: 2025-09-30 18:43:02.862 2 DEBUG nova.compute.manager [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Sep 30 18:43:02 compute-0 podman[351422]: 2025-09-30 18:43:02.887000716 +0000 UTC m=+0.036711010 container create d181ef571441d2c3d81c86f4a4cca324881e663aa956cb918b2688e64781cbf3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_curie, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default)
Sep 30 18:43:02 compute-0 systemd[1]: Started libpod-conmon-d181ef571441d2c3d81c86f4a4cca324881e663aa956cb918b2688e64781cbf3.scope.
Sep 30 18:43:02 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:43:02 compute-0 podman[351422]: 2025-09-30 18:43:02.870178066 +0000 UTC m=+0.019888380 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:43:02 compute-0 podman[351422]: 2025-09-30 18:43:02.974102894 +0000 UTC m=+0.123813188 container init d181ef571441d2c3d81c86f4a4cca324881e663aa956cb918b2688e64781cbf3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_curie, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:43:02 compute-0 podman[351422]: 2025-09-30 18:43:02.980400315 +0000 UTC m=+0.130110609 container start d181ef571441d2c3d81c86f4a4cca324881e663aa956cb918b2688e64781cbf3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True)
Sep 30 18:43:02 compute-0 podman[351422]: 2025-09-30 18:43:02.983215607 +0000 UTC m=+0.132925891 container attach d181ef571441d2c3d81c86f4a4cca324881e663aa956cb918b2688e64781cbf3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_curie, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:43:02 compute-0 cranky_curie[351438]: 167 167
Sep 30 18:43:02 compute-0 systemd[1]: libpod-d181ef571441d2c3d81c86f4a4cca324881e663aa956cb918b2688e64781cbf3.scope: Deactivated successfully.
Sep 30 18:43:02 compute-0 podman[351422]: 2025-09-30 18:43:02.989446707 +0000 UTC m=+0.139157041 container died d181ef571441d2c3d81c86f4a4cca324881e663aa956cb918b2688e64781cbf3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_curie, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:43:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa85ad6777ea293ec1bf66b72122aa14971f99553b4a88a27d8017def62f3e17-merged.mount: Deactivated successfully.
Sep 30 18:43:03 compute-0 podman[351422]: 2025-09-30 18:43:03.038062781 +0000 UTC m=+0.187773065 container remove d181ef571441d2c3d81c86f4a4cca324881e663aa956cb918b2688e64781cbf3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_curie, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 18:43:03 compute-0 systemd[1]: libpod-conmon-d181ef571441d2c3d81c86f4a4cca324881e663aa956cb918b2688e64781cbf3.scope: Deactivated successfully.
Sep 30 18:43:03 compute-0 podman[351462]: 2025-09-30 18:43:03.243320064 +0000 UTC m=+0.052043683 container create 7b141b68f54f80a3a3b808f7428dd9b31af8ac771f9bba8c31b72b43e63d2f80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_heyrovsky, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:43:03 compute-0 systemd[1]: Started libpod-conmon-7b141b68f54f80a3a3b808f7428dd9b31af8ac771f9bba8c31b72b43e63d2f80.scope.
Sep 30 18:43:03 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:43:03 compute-0 podman[351462]: 2025-09-30 18:43:03.223550088 +0000 UTC m=+0.032273727 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:43:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf249fe0bbbbf302719fa2c781f519019eb9b4dc58e503f22d1eb7bcb000fa2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:43:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf249fe0bbbbf302719fa2c781f519019eb9b4dc58e503f22d1eb7bcb000fa2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:43:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf249fe0bbbbf302719fa2c781f519019eb9b4dc58e503f22d1eb7bcb000fa2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:43:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf249fe0bbbbf302719fa2c781f519019eb9b4dc58e503f22d1eb7bcb000fa2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:43:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf249fe0bbbbf302719fa2c781f519019eb9b4dc58e503f22d1eb7bcb000fa2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:43:03 compute-0 podman[351462]: 2025-09-30 18:43:03.333294916 +0000 UTC m=+0.142018555 container init 7b141b68f54f80a3a3b808f7428dd9b31af8ac771f9bba8c31b72b43e63d2f80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_heyrovsky, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:43:03 compute-0 podman[351462]: 2025-09-30 18:43:03.34242184 +0000 UTC m=+0.151145459 container start 7b141b68f54f80a3a3b808f7428dd9b31af8ac771f9bba8c31b72b43e63d2f80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_heyrovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Sep 30 18:43:03 compute-0 podman[351462]: 2025-09-30 18:43:03.345538169 +0000 UTC m=+0.154261798 container attach 7b141b68f54f80a3a3b808f7428dd9b31af8ac771f9bba8c31b72b43e63d2f80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_heyrovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Sep 30 18:43:03 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:03.543 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:43:03 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:03.544 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:43:03 compute-0 nova_compute[265391]: 2025-09-30 18:43:03.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:03 compute-0 zen_heyrovsky[351477]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:43:03 compute-0 zen_heyrovsky[351477]: --> All data devices are unavailable
Sep 30 18:43:03 compute-0 systemd[1]: libpod-7b141b68f54f80a3a3b808f7428dd9b31af8ac771f9bba8c31b72b43e63d2f80.scope: Deactivated successfully.
Sep 30 18:43:03 compute-0 podman[351462]: 2025-09-30 18:43:03.69999417 +0000 UTC m=+0.508717819 container died 7b141b68f54f80a3a3b808f7428dd9b31af8ac771f9bba8c31b72b43e63d2f80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_heyrovsky, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:43:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cf249fe0bbbbf302719fa2c781f519019eb9b4dc58e503f22d1eb7bcb000fa2-merged.mount: Deactivated successfully.
Sep 30 18:43:03 compute-0 nova_compute[265391]: 2025-09-30 18:43:03.733 2 DEBUG nova.network.neutron [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Successfully created port: bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Sep 30 18:43:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:43:03.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:03 compute-0 podman[351462]: 2025-09-30 18:43:03.746257224 +0000 UTC m=+0.554980853 container remove 7b141b68f54f80a3a3b808f7428dd9b31af8ac771f9bba8c31b72b43e63d2f80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:43:03 compute-0 systemd[1]: libpod-conmon-7b141b68f54f80a3a3b808f7428dd9b31af8ac771f9bba8c31b72b43e63d2f80.scope: Deactivated successfully.
Sep 30 18:43:03 compute-0 sudo[351356]: pam_unix(sudo:session): session closed for user root
Sep 30 18:43:03 compute-0 sudo[351505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:43:03 compute-0 sudo[351505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:43:03 compute-0 sudo[351505]: pam_unix(sudo:session): session closed for user root
Sep 30 18:43:03 compute-0 nova_compute[265391]: 2025-09-30 18:43:03.882 2 DEBUG nova.compute.manager [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Sep 30 18:43:03 compute-0 nova_compute[265391]: 2025-09-30 18:43:03.883 2 DEBUG nova.virt.libvirt.driver [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Sep 30 18:43:03 compute-0 nova_compute[265391]: 2025-09-30 18:43:03.883 2 INFO nova.virt.libvirt.driver [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Creating image(s)
Sep 30 18:43:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:43:03.885Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:43:03 compute-0 sudo[351531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:43:03 compute-0 sudo[351531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:43:03 compute-0 nova_compute[265391]: 2025-09-30 18:43:03.911 2 DEBUG nova.storage.rbd_utils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] rbd image 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:43:03 compute-0 nova_compute[265391]: 2025-09-30 18:43:03.941 2 DEBUG nova.storage.rbd_utils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] rbd image 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:43:03 compute-0 nova_compute[265391]: 2025-09-30 18:43:03.967 2 DEBUG nova.storage.rbd_utils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] rbd image 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:43:03 compute-0 nova_compute[265391]: 2025-09-30 18:43:03.970 2 DEBUG oslo_concurrency.processutils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:43:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:43:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:43:03.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:43:03 compute-0 nova_compute[265391]: 2025-09-30 18:43:03.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:43:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:43:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:43:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:43:04 compute-0 nova_compute[265391]: 2025-09-30 18:43:04.023 2 DEBUG oslo_concurrency.processutils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:43:04 compute-0 nova_compute[265391]: 2025-09-30 18:43:04.023 2 DEBUG oslo_concurrency.lockutils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Acquiring lock "cb2d580238c9b109feae7f1462613dc547671457" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:43:04 compute-0 nova_compute[265391]: 2025-09-30 18:43:04.023 2 DEBUG oslo_concurrency.lockutils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:43:04 compute-0 nova_compute[265391]: 2025-09-30 18:43:04.024 2 DEBUG oslo_concurrency.lockutils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:43:04 compute-0 nova_compute[265391]: 2025-09-30 18:43:04.049 2 DEBUG nova.storage.rbd_utils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] rbd image 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:43:04 compute-0 nova_compute[265391]: 2025-09-30 18:43:04.053 2 DEBUG oslo_concurrency.processutils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:43:04 compute-0 nova_compute[265391]: 2025-09-30 18:43:04.333 2 DEBUG oslo_concurrency.processutils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.280s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:43:04 compute-0 podman[351691]: 2025-09-30 18:43:04.372803208 +0000 UTC m=+0.039836520 container create 2f42945c59df4c9ea6b682466225c6ed58894818e453bf03cbd80b7c1529f892 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Sep 30 18:43:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1939: 353 pgs: 353 active+clean; 121 MiB data, 399 MiB used, 40 GiB / 40 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Sep 30 18:43:04 compute-0 systemd[1]: Started libpod-conmon-2f42945c59df4c9ea6b682466225c6ed58894818e453bf03cbd80b7c1529f892.scope.
Sep 30 18:43:04 compute-0 nova_compute[265391]: 2025-09-30 18:43:04.414 2 DEBUG nova.storage.rbd_utils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] resizing rbd image 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4_disk to 1073741824 resize /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:288
Sep 30 18:43:04 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:43:04 compute-0 podman[351691]: 2025-09-30 18:43:04.440748897 +0000 UTC m=+0.107782209 container init 2f42945c59df4c9ea6b682466225c6ed58894818e453bf03cbd80b7c1529f892 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 18:43:04 compute-0 podman[351691]: 2025-09-30 18:43:04.448074084 +0000 UTC m=+0.115107376 container start 2f42945c59df4c9ea6b682466225c6ed58894818e453bf03cbd80b7c1529f892 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Sep 30 18:43:04 compute-0 podman[351691]: 2025-09-30 18:43:04.451320837 +0000 UTC m=+0.118354119 container attach 2f42945c59df4c9ea6b682466225c6ed58894818e453bf03cbd80b7c1529f892 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_liskov, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:43:04 compute-0 hungry_liskov[351742]: 167 167
Sep 30 18:43:04 compute-0 systemd[1]: libpod-2f42945c59df4c9ea6b682466225c6ed58894818e453bf03cbd80b7c1529f892.scope: Deactivated successfully.
Sep 30 18:43:04 compute-0 podman[351691]: 2025-09-30 18:43:04.356097391 +0000 UTC m=+0.023130713 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:43:04 compute-0 podman[351691]: 2025-09-30 18:43:04.453266497 +0000 UTC m=+0.120299819 container died 2f42945c59df4c9ea6b682466225c6ed58894818e453bf03cbd80b7c1529f892 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Sep 30 18:43:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-65e3aba57a21e5d2112abfc0d23ae9c60691199caf5e7680b27094ba7136e34c-merged.mount: Deactivated successfully.
Sep 30 18:43:04 compute-0 podman[351691]: 2025-09-30 18:43:04.488296864 +0000 UTC m=+0.155330156 container remove 2f42945c59df4c9ea6b682466225c6ed58894818e453bf03cbd80b7c1529f892 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 18:43:04 compute-0 systemd[1]: libpod-conmon-2f42945c59df4c9ea6b682466225c6ed58894818e453bf03cbd80b7c1529f892.scope: Deactivated successfully.
Sep 30 18:43:04 compute-0 nova_compute[265391]: 2025-09-30 18:43:04.528 2 DEBUG nova.virt.libvirt.driver [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Sep 30 18:43:04 compute-0 nova_compute[265391]: 2025-09-30 18:43:04.528 2 DEBUG nova.virt.libvirt.driver [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Ensure instance console log exists: /var/lib/nova/instances/1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Sep 30 18:43:04 compute-0 nova_compute[265391]: 2025-09-30 18:43:04.529 2 DEBUG oslo_concurrency.lockutils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:43:04 compute-0 nova_compute[265391]: 2025-09-30 18:43:04.529 2 DEBUG oslo_concurrency.lockutils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:43:04 compute-0 nova_compute[265391]: 2025-09-30 18:43:04.529 2 DEBUG oslo_concurrency.lockutils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:43:04 compute-0 podman[351804]: 2025-09-30 18:43:04.653123152 +0000 UTC m=+0.047168398 container create 6b3f591b1194eecbc2ff109847a2777c7b989226fd22605b3e810755b1a7b33a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_kapitsa, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:43:04 compute-0 systemd[1]: Started libpod-conmon-6b3f591b1194eecbc2ff109847a2777c7b989226fd22605b3e810755b1a7b33a.scope.
Sep 30 18:43:04 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:43:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d417237e6c6c9ca0820b168bc5c7b8a9b2fdbe3f3d6ca109c0f8a2d4a4ee267/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:43:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d417237e6c6c9ca0820b168bc5c7b8a9b2fdbe3f3d6ca109c0f8a2d4a4ee267/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:43:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d417237e6c6c9ca0820b168bc5c7b8a9b2fdbe3f3d6ca109c0f8a2d4a4ee267/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:43:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d417237e6c6c9ca0820b168bc5c7b8a9b2fdbe3f3d6ca109c0f8a2d4a4ee267/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:43:04 compute-0 podman[351804]: 2025-09-30 18:43:04.635136372 +0000 UTC m=+0.029181608 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:43:04 compute-0 podman[351804]: 2025-09-30 18:43:04.730930893 +0000 UTC m=+0.124976119 container init 6b3f591b1194eecbc2ff109847a2777c7b989226fd22605b3e810755b1a7b33a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:43:04 compute-0 podman[351804]: 2025-09-30 18:43:04.738476126 +0000 UTC m=+0.132521372 container start 6b3f591b1194eecbc2ff109847a2777c7b989226fd22605b3e810755b1a7b33a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:43:04 compute-0 podman[351804]: 2025-09-30 18:43:04.744384707 +0000 UTC m=+0.138429913 container attach 6b3f591b1194eecbc2ff109847a2777c7b989226fd22605b3e810755b1a7b33a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_kapitsa, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Sep 30 18:43:04 compute-0 podman[351818]: 2025-09-30 18:43:04.772382934 +0000 UTC m=+0.081406664 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Sep 30 18:43:04 compute-0 podman[351822]: 2025-09-30 18:43:04.788132947 +0000 UTC m=+0.084942645 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:43:04 compute-0 podman[351821]: 2025-09-30 18:43:04.811804063 +0000 UTC m=+0.120074284 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Sep 30 18:43:05 compute-0 jolly_kapitsa[351823]: {
Sep 30 18:43:05 compute-0 jolly_kapitsa[351823]:     "0": [
Sep 30 18:43:05 compute-0 jolly_kapitsa[351823]:         {
Sep 30 18:43:05 compute-0 jolly_kapitsa[351823]:             "devices": [
Sep 30 18:43:05 compute-0 jolly_kapitsa[351823]:                 "/dev/loop3"
Sep 30 18:43:05 compute-0 jolly_kapitsa[351823]:             ],
Sep 30 18:43:05 compute-0 jolly_kapitsa[351823]:             "lv_name": "ceph_lv0",
Sep 30 18:43:05 compute-0 jolly_kapitsa[351823]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:43:05 compute-0 jolly_kapitsa[351823]:             "lv_size": "21470642176",
Sep 30 18:43:05 compute-0 jolly_kapitsa[351823]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:43:05 compute-0 jolly_kapitsa[351823]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:43:05 compute-0 jolly_kapitsa[351823]:             "name": "ceph_lv0",
Sep 30 18:43:05 compute-0 jolly_kapitsa[351823]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:43:05 compute-0 jolly_kapitsa[351823]:             "tags": {
Sep 30 18:43:05 compute-0 jolly_kapitsa[351823]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:43:05 compute-0 jolly_kapitsa[351823]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:43:05 compute-0 jolly_kapitsa[351823]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:43:05 compute-0 jolly_kapitsa[351823]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:43:05 compute-0 jolly_kapitsa[351823]:                 "ceph.cluster_name": "ceph",
Sep 30 18:43:05 compute-0 jolly_kapitsa[351823]:                 "ceph.crush_device_class": "",
Sep 30 18:43:05 compute-0 jolly_kapitsa[351823]:                 "ceph.encrypted": "0",
Sep 30 18:43:05 compute-0 jolly_kapitsa[351823]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:43:05 compute-0 jolly_kapitsa[351823]:                 "ceph.osd_id": "0",
Sep 30 18:43:05 compute-0 jolly_kapitsa[351823]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:43:05 compute-0 jolly_kapitsa[351823]:                 "ceph.type": "block",
Sep 30 18:43:05 compute-0 jolly_kapitsa[351823]:                 "ceph.vdo": "0",
Sep 30 18:43:05 compute-0 jolly_kapitsa[351823]:                 "ceph.with_tpm": "0"
Sep 30 18:43:05 compute-0 jolly_kapitsa[351823]:             },
Sep 30 18:43:05 compute-0 jolly_kapitsa[351823]:             "type": "block",
Sep 30 18:43:05 compute-0 jolly_kapitsa[351823]:             "vg_name": "ceph_vg0"
Sep 30 18:43:05 compute-0 jolly_kapitsa[351823]:         }
Sep 30 18:43:05 compute-0 jolly_kapitsa[351823]:     ]
Sep 30 18:43:05 compute-0 jolly_kapitsa[351823]: }
Sep 30 18:43:05 compute-0 systemd[1]: libpod-6b3f591b1194eecbc2ff109847a2777c7b989226fd22605b3e810755b1a7b33a.scope: Deactivated successfully.
Sep 30 18:43:05 compute-0 podman[351804]: 2025-09-30 18:43:05.026016755 +0000 UTC m=+0.420061971 container died 6b3f591b1194eecbc2ff109847a2777c7b989226fd22605b3e810755b1a7b33a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:43:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d417237e6c6c9ca0820b168bc5c7b8a9b2fdbe3f3d6ca109c0f8a2d4a4ee267-merged.mount: Deactivated successfully.
Sep 30 18:43:05 compute-0 podman[351804]: 2025-09-30 18:43:05.082464339 +0000 UTC m=+0.476509545 container remove 6b3f591b1194eecbc2ff109847a2777c7b989226fd22605b3e810755b1a7b33a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:43:05 compute-0 systemd[1]: libpod-conmon-6b3f591b1194eecbc2ff109847a2777c7b989226fd22605b3e810755b1a7b33a.scope: Deactivated successfully.
Sep 30 18:43:05 compute-0 nova_compute[265391]: 2025-09-30 18:43:05.104 2 DEBUG nova.network.neutron [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Successfully updated port: bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Sep 30 18:43:05 compute-0 sudo[351531]: pam_unix(sudo:session): session closed for user root
Sep 30 18:43:05 compute-0 sudo[351912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:43:05 compute-0 sudo[351912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:43:05 compute-0 sudo[351912]: pam_unix(sudo:session): session closed for user root
Sep 30 18:43:05 compute-0 nova_compute[265391]: 2025-09-30 18:43:05.180 2 DEBUG nova.compute.manager [req-da37fb46-1bdc-4c84-9ee9-affe386831e3 req-4bbe6023-5604-40b7-9553-9c3095b2bec5 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Received event network-changed-bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:43:05 compute-0 nova_compute[265391]: 2025-09-30 18:43:05.180 2 DEBUG nova.compute.manager [req-da37fb46-1bdc-4c84-9ee9-affe386831e3 req-4bbe6023-5604-40b7-9553-9c3095b2bec5 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Refreshing instance network info cache due to event network-changed-bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:43:05 compute-0 nova_compute[265391]: 2025-09-30 18:43:05.180 2 DEBUG oslo_concurrency.lockutils [req-da37fb46-1bdc-4c84-9ee9-affe386831e3 req-4bbe6023-5604-40b7-9553-9c3095b2bec5 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:43:05 compute-0 nova_compute[265391]: 2025-09-30 18:43:05.180 2 DEBUG oslo_concurrency.lockutils [req-da37fb46-1bdc-4c84-9ee9-affe386831e3 req-4bbe6023-5604-40b7-9553-9c3095b2bec5 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:43:05 compute-0 nova_compute[265391]: 2025-09-30 18:43:05.180 2 DEBUG nova.network.neutron [req-da37fb46-1bdc-4c84-9ee9-affe386831e3 req-4bbe6023-5604-40b7-9553-9c3095b2bec5 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Refreshing network info cache for port bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:43:05 compute-0 sudo[351937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:43:05 compute-0 sudo[351937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:43:05 compute-0 ceph-mon[73755]: pgmap v1939: 353 pgs: 353 active+clean; 121 MiB data, 399 MiB used, 40 GiB / 40 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Sep 30 18:43:05 compute-0 nova_compute[265391]: 2025-09-30 18:43:05.612 2 DEBUG oslo_concurrency.lockutils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Acquiring lock "refresh_cache-1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:43:05 compute-0 podman[352003]: 2025-09-30 18:43:05.675216218 +0000 UTC m=+0.072895576 container create 89bfe46e1270fdb931c61dce5b54b7a70001c6292443fd9d5ade42cfa2e81d72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_diffie, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:43:05 compute-0 nova_compute[265391]: 2025-09-30 18:43:05.691 2 WARNING neutronclient.v2_0.client [req-da37fb46-1bdc-4c84-9ee9-affe386831e3 req-4bbe6023-5604-40b7-9553-9c3095b2bec5 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:43:05 compute-0 systemd[1]: Started libpod-conmon-89bfe46e1270fdb931c61dce5b54b7a70001c6292443fd9d5ade42cfa2e81d72.scope.
Sep 30 18:43:05 compute-0 podman[352003]: 2025-09-30 18:43:05.64478682 +0000 UTC m=+0.042466228 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:43:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:43:05.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:05 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:43:05 compute-0 podman[352003]: 2025-09-30 18:43:05.768309511 +0000 UTC m=+0.165988889 container init 89bfe46e1270fdb931c61dce5b54b7a70001c6292443fd9d5ade42cfa2e81d72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_diffie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:43:05 compute-0 podman[352003]: 2025-09-30 18:43:05.774375456 +0000 UTC m=+0.172054764 container start 89bfe46e1270fdb931c61dce5b54b7a70001c6292443fd9d5ade42cfa2e81d72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Sep 30 18:43:05 compute-0 podman[352003]: 2025-09-30 18:43:05.777711711 +0000 UTC m=+0.175391069 container attach 89bfe46e1270fdb931c61dce5b54b7a70001c6292443fd9d5ade42cfa2e81d72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_diffie, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:43:05 compute-0 stoic_diffie[352019]: 167 167
Sep 30 18:43:05 compute-0 systemd[1]: libpod-89bfe46e1270fdb931c61dce5b54b7a70001c6292443fd9d5ade42cfa2e81d72.scope: Deactivated successfully.
Sep 30 18:43:05 compute-0 podman[352003]: 2025-09-30 18:43:05.780862232 +0000 UTC m=+0.178541560 container died 89bfe46e1270fdb931c61dce5b54b7a70001c6292443fd9d5ade42cfa2e81d72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_diffie, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:43:05 compute-0 nova_compute[265391]: 2025-09-30 18:43:05.812 2 DEBUG nova.network.neutron [req-da37fb46-1bdc-4c84-9ee9-affe386831e3 req-4bbe6023-5604-40b7-9553-9c3095b2bec5 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:43:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-91c9528a00fbdf2ebb5de21128751a3383503467ab00f595f0d114fb63540cbf-merged.mount: Deactivated successfully.
Sep 30 18:43:05 compute-0 podman[352003]: 2025-09-30 18:43:05.825792472 +0000 UTC m=+0.223471800 container remove 89bfe46e1270fdb931c61dce5b54b7a70001c6292443fd9d5ade42cfa2e81d72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_diffie, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Sep 30 18:43:05 compute-0 systemd[1]: libpod-conmon-89bfe46e1270fdb931c61dce5b54b7a70001c6292443fd9d5ade42cfa2e81d72.scope: Deactivated successfully.
Sep 30 18:43:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:43:05.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:05 compute-0 podman[352045]: 2025-09-30 18:43:05.993336059 +0000 UTC m=+0.044330675 container create 1e23943c279b5dbad0541907eb0346ef25a64b16c0f02a8b7958ed358e8c3c44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_shtern, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Sep 30 18:43:06 compute-0 systemd[1]: Started libpod-conmon-1e23943c279b5dbad0541907eb0346ef25a64b16c0f02a8b7958ed358e8c3c44.scope.
Sep 30 18:43:06 compute-0 nova_compute[265391]: 2025-09-30 18:43:06.039 2 DEBUG nova.network.neutron [req-da37fb46-1bdc-4c84-9ee9-affe386831e3 req-4bbe6023-5604-40b7-9553-9c3095b2bec5 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:43:06 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b4f0ab2c5a4d875d584aa2db931b10cd5140e2805747334ed94924bc2da44da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b4f0ab2c5a4d875d584aa2db931b10cd5140e2805747334ed94924bc2da44da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b4f0ab2c5a4d875d584aa2db931b10cd5140e2805747334ed94924bc2da44da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b4f0ab2c5a4d875d584aa2db931b10cd5140e2805747334ed94924bc2da44da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:43:06 compute-0 podman[352045]: 2025-09-30 18:43:05.977035052 +0000 UTC m=+0.028029688 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:43:06 compute-0 podman[352045]: 2025-09-30 18:43:06.077586545 +0000 UTC m=+0.128581211 container init 1e23943c279b5dbad0541907eb0346ef25a64b16c0f02a8b7958ed358e8c3c44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_shtern, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:43:06 compute-0 podman[352045]: 2025-09-30 18:43:06.085765425 +0000 UTC m=+0.136760031 container start 1e23943c279b5dbad0541907eb0346ef25a64b16c0f02a8b7958ed358e8c3c44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Sep 30 18:43:06 compute-0 podman[352045]: 2025-09-30 18:43:06.088613348 +0000 UTC m=+0.139607984 container attach 1e23943c279b5dbad0541907eb0346ef25a64b16c0f02a8b7958ed358e8c3c44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_shtern, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:43:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:43:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1940: 353 pgs: 353 active+clean; 121 MiB data, 399 MiB used, 40 GiB / 40 GiB avail; 146 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Sep 30 18:43:06 compute-0 nova_compute[265391]: 2025-09-30 18:43:06.547 2 DEBUG oslo_concurrency.lockutils [req-da37fb46-1bdc-4c84-9ee9-affe386831e3 req-4bbe6023-5604-40b7-9553-9c3095b2bec5 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:43:06 compute-0 nova_compute[265391]: 2025-09-30 18:43:06.548 2 DEBUG oslo_concurrency.lockutils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Acquired lock "refresh_cache-1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:43:06 compute-0 nova_compute[265391]: 2025-09-30 18:43:06.548 2 DEBUG nova.network.neutron [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:43:06 compute-0 nova_compute[265391]: 2025-09-30 18:43:06.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:06 compute-0 lvm[352135]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:43:06 compute-0 lvm[352135]: VG ceph_vg0 finished
Sep 30 18:43:06 compute-0 boring_shtern[352061]: {}
Sep 30 18:43:06 compute-0 systemd[1]: libpod-1e23943c279b5dbad0541907eb0346ef25a64b16c0f02a8b7958ed358e8c3c44.scope: Deactivated successfully.
Sep 30 18:43:06 compute-0 podman[352045]: 2025-09-30 18:43:06.790662052 +0000 UTC m=+0.841656668 container died 1e23943c279b5dbad0541907eb0346ef25a64b16c0f02a8b7958ed358e8c3c44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Sep 30 18:43:06 compute-0 systemd[1]: libpod-1e23943c279b5dbad0541907eb0346ef25a64b16c0f02a8b7958ed358e8c3c44.scope: Consumed 1.031s CPU time.
Sep 30 18:43:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b4f0ab2c5a4d875d584aa2db931b10cd5140e2805747334ed94924bc2da44da-merged.mount: Deactivated successfully.
Sep 30 18:43:06 compute-0 podman[352045]: 2025-09-30 18:43:06.829392783 +0000 UTC m=+0.880387399 container remove 1e23943c279b5dbad0541907eb0346ef25a64b16c0f02a8b7958ed358e8c3c44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:43:06 compute-0 systemd[1]: libpod-conmon-1e23943c279b5dbad0541907eb0346ef25a64b16c0f02a8b7958ed358e8c3c44.scope: Deactivated successfully.
Sep 30 18:43:06 compute-0 sudo[351937]: pam_unix(sudo:session): session closed for user root
Sep 30 18:43:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:43:06 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:43:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:43:06 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:43:06 compute-0 sudo[352153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:43:06 compute-0 sudo[352153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:43:06 compute-0 sudo[352153]: pam_unix(sudo:session): session closed for user root
Sep 30 18:43:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:43:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:43:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:43:07.360Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:43:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:43:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:43:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:43:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:43:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:43:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:43:07 compute-0 ceph-mon[73755]: pgmap v1940: 353 pgs: 353 active+clean; 121 MiB data, 399 MiB used, 40 GiB / 40 GiB avail; 146 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Sep 30 18:43:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:43:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:43:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:43:07 compute-0 nova_compute[265391]: 2025-09-30 18:43:07.517 2 DEBUG nova.network.neutron [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:43:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:43:07.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:07 compute-0 nova_compute[265391]: 2025-09-30 18:43:07.797 2 WARNING neutronclient.v2_0.client [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:43:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:43:07.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:07 compute-0 nova_compute[265391]: 2025-09-30 18:43:07.984 2 DEBUG nova.network.neutron [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Updating instance_info_cache with network_info: [{"id": "bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e", "address": "fa:16:3e:b9:0d:ac", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb1a6fd9-a9", "ovs_interfaceid": "bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011344264321644852 of space, bias 1.0, pg target 0.22688528643289704 quantized to 32 (current 32)
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:43:08
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'backups', 'images', 'default.rgw.control', '.mgr', '.nfs']
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1941: 353 pgs: 353 active+clean; 167 MiB data, 420 MiB used, 40 GiB / 40 GiB avail; 163 KiB/s rd, 3.9 MiB/s wr, 83 op/s
Sep 30 18:43:08 compute-0 nova_compute[265391]: 2025-09-30 18:43:08.494 2 DEBUG oslo_concurrency.lockutils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Releasing lock "refresh_cache-1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:43:08 compute-0 nova_compute[265391]: 2025-09-30 18:43:08.495 2 DEBUG nova.compute.manager [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Instance network_info: |[{"id": "bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e", "address": "fa:16:3e:b9:0d:ac", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb1a6fd9-a9", "ovs_interfaceid": "bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Sep 30 18:43:08 compute-0 nova_compute[265391]: 2025-09-30 18:43:08.500 2 DEBUG nova.virt.libvirt.driver [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Start _get_guest_xml network_info=[{"id": "bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e", "address": "fa:16:3e:b9:0d:ac", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb1a6fd9-a9", "ovs_interfaceid": "bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'image_id': '5b99cbca-b655-4be5-8343-cf504005c42e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Sep 30 18:43:08 compute-0 nova_compute[265391]: 2025-09-30 18:43:08.505 2 WARNING nova.virt.libvirt.driver [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:43:08 compute-0 nova_compute[265391]: 2025-09-30 18:43:08.508 2 DEBUG nova.virt.driver [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='5b99cbca-b655-4be5-8343-cf504005c42e', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteWorkloadBalanceStrategy-server-1513928321', uuid='1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4'), owner=OwnerMeta(userid='5717e8cb8548429b948a23763350ab4a', username='tempest-TestExecuteWorkloadBalanceStrategy-134702932-project-admin', projectid='63c45bef63ef4b9f895b3bab865e1a84', projectname='tempest-TestExecuteWorkloadBalanceStrategy-134702932'), image=ImageMeta(id='5b99cbca-b655-4be5-8343-cf504005c42e', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='tempest-watcher_flavor-293987126', flavorid='dc3a14e6-3544-428c-a856-1da19a12bf48', memory_mb=1151, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={}, swap=0), network_info=[{"id": "bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e", "address": "fa:16:3e:b9:0d:ac", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb1a6fd9-a9", "ovs_interfaceid": "bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20250919142712.b99a882.el10', creation_time=1759257788.5078738) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Sep 30 18:43:08 compute-0 nova_compute[265391]: 2025-09-30 18:43:08.513 2 DEBUG nova.virt.libvirt.host [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Sep 30 18:43:08 compute-0 nova_compute[265391]: 2025-09-30 18:43:08.514 2 DEBUG nova.virt.libvirt.host [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Sep 30 18:43:08 compute-0 nova_compute[265391]: 2025-09-30 18:43:08.517 2 DEBUG nova.virt.libvirt.host [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Sep 30 18:43:08 compute-0 nova_compute[265391]: 2025-09-30 18:43:08.518 2 DEBUG nova.virt.libvirt.host [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Sep 30 18:43:08 compute-0 nova_compute[265391]: 2025-09-30 18:43:08.518 2 DEBUG nova.virt.libvirt.driver [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Sep 30 18:43:08 compute-0 nova_compute[265391]: 2025-09-30 18:43:08.519 2 DEBUG nova.virt.hardware [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-09-30T18:42:27Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={},flavorid='dc3a14e6-3544-428c-a856-1da19a12bf48',id=3,is_public=True,memory_mb=1151,name='tempest-watcher_flavor-293987126',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Sep 30 18:43:08 compute-0 nova_compute[265391]: 2025-09-30 18:43:08.520 2 DEBUG nova.virt.hardware [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Sep 30 18:43:08 compute-0 nova_compute[265391]: 2025-09-30 18:43:08.520 2 DEBUG nova.virt.hardware [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Sep 30 18:43:08 compute-0 nova_compute[265391]: 2025-09-30 18:43:08.520 2 DEBUG nova.virt.hardware [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Sep 30 18:43:08 compute-0 nova_compute[265391]: 2025-09-30 18:43:08.521 2 DEBUG nova.virt.hardware [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Sep 30 18:43:08 compute-0 nova_compute[265391]: 2025-09-30 18:43:08.521 2 DEBUG nova.virt.hardware [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Sep 30 18:43:08 compute-0 nova_compute[265391]: 2025-09-30 18:43:08.521 2 DEBUG nova.virt.hardware [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Sep 30 18:43:08 compute-0 nova_compute[265391]: 2025-09-30 18:43:08.522 2 DEBUG nova.virt.hardware [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Sep 30 18:43:08 compute-0 nova_compute[265391]: 2025-09-30 18:43:08.522 2 DEBUG nova.virt.hardware [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Sep 30 18:43:08 compute-0 nova_compute[265391]: 2025-09-30 18:43:08.522 2 DEBUG nova.virt.hardware [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Sep 30 18:43:08 compute-0 nova_compute[265391]: 2025-09-30 18:43:08.524 2 DEBUG nova.virt.hardware [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Sep 30 18:43:08 compute-0 nova_compute[265391]: 2025-09-30 18:43:08.527 2 DEBUG oslo_concurrency.processutils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:43:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:43:08] "GET /metrics HTTP/1.1" 200 46731 "" "Prometheus/2.51.0"
Sep 30 18:43:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:43:08] "GET /metrics HTTP/1.1" 200 46731 "" "Prometheus/2.51.0"
Sep 30 18:43:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:43:08.903Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:43:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:43:08 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1509196512' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:43:08 compute-0 nova_compute[265391]: 2025-09-30 18:43:08.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:08 compute-0 nova_compute[265391]: 2025-09-30 18:43:08.988 2 DEBUG oslo_concurrency.processutils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:43:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:43:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:43:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:43:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:43:09 compute-0 nova_compute[265391]: 2025-09-30 18:43:09.017 2 DEBUG nova.storage.rbd_utils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] rbd image 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:43:09 compute-0 nova_compute[265391]: 2025-09-30 18:43:09.021 2 DEBUG oslo_concurrency.processutils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:43:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:43:09 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2973940694' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:43:09 compute-0 nova_compute[265391]: 2025-09-30 18:43:09.470 2 DEBUG oslo_concurrency.processutils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:43:09 compute-0 ceph-mon[73755]: pgmap v1941: 353 pgs: 353 active+clean; 167 MiB data, 420 MiB used, 40 GiB / 40 GiB avail; 163 KiB/s rd, 3.9 MiB/s wr, 83 op/s
Sep 30 18:43:09 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1509196512' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:43:09 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2973940694' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:43:09 compute-0 nova_compute[265391]: 2025-09-30 18:43:09.473 2 DEBUG nova.virt.libvirt.vif [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:42:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-1513928321',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-1513928321',id=33,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=1151,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='63c45bef63ef4b9f895b3bab865e1a84',ramdisk_id='',reservation_id='r-rr510t3s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-134702932',owner_user_name='tempest-TestExecuteWorkloadBalanceStrategy-134702932-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:43:02Z,user_data=None,user_id='5717e8cb8548429b948a23763350ab4a',uuid=1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e", "address": "fa:16:3e:b9:0d:ac", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb1a6fd9-a9", "ovs_interfaceid": "bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:43:09 compute-0 nova_compute[265391]: 2025-09-30 18:43:09.473 2 DEBUG nova.network.os_vif_util [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Converting VIF {"id": "bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e", "address": "fa:16:3e:b9:0d:ac", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb1a6fd9-a9", "ovs_interfaceid": "bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:43:09 compute-0 nova_compute[265391]: 2025-09-30 18:43:09.474 2 DEBUG nova.network.os_vif_util [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:0d:ac,bridge_name='br-int',has_traffic_filtering=True,id=bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e,network=Network(c8484b9b-b34e-4c32-b987-029d8fcb2a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb1a6fd9-a9') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:43:09 compute-0 nova_compute[265391]: 2025-09-30 18:43:09.476 2 DEBUG nova.objects.instance [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:43:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:43:09.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:43:09.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:09 compute-0 nova_compute[265391]: 2025-09-30 18:43:09.987 2 DEBUG nova.virt.libvirt.driver [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] End _get_guest_xml xml=<domain type="kvm">
Sep 30 18:43:09 compute-0 nova_compute[265391]:   <uuid>1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4</uuid>
Sep 30 18:43:09 compute-0 nova_compute[265391]:   <name>instance-00000021</name>
Sep 30 18:43:09 compute-0 nova_compute[265391]:   <memory>1178624</memory>
Sep 30 18:43:09 compute-0 nova_compute[265391]:   <vcpu>1</vcpu>
Sep 30 18:43:09 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:43:09 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteWorkloadBalanceStrategy-server-1513928321</nova:name>
Sep 30 18:43:09 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:43:08</nova:creationTime>
Sep 30 18:43:09 compute-0 nova_compute[265391]:       <nova:flavor name="tempest-watcher_flavor-293987126" id="dc3a14e6-3544-428c-a856-1da19a12bf48">
Sep 30 18:43:09 compute-0 nova_compute[265391]:         <nova:memory>1151</nova:memory>
Sep 30 18:43:09 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:43:09 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:43:09 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:43:09 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:43:09 compute-0 nova_compute[265391]:         <nova:extraSpecs/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:43:09 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:43:09 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:43:09 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:43:09 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:43:09 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:43:09 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:43:09 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:43:09 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:43:09 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:43:09 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:43:09 compute-0 nova_compute[265391]:         <nova:user uuid="5717e8cb8548429b948a23763350ab4a">tempest-TestExecuteWorkloadBalanceStrategy-134702932-project-admin</nova:user>
Sep 30 18:43:09 compute-0 nova_compute[265391]:         <nova:project uuid="63c45bef63ef4b9f895b3bab865e1a84">tempest-TestExecuteWorkloadBalanceStrategy-134702932</nova:project>
Sep 30 18:43:09 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:43:09 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:43:09 compute-0 nova_compute[265391]:         <nova:port uuid="bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e">
Sep 30 18:43:09 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:43:09 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:43:09 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:43:09 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <system>
Sep 30 18:43:09 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:43:09 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:43:09 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:43:09 compute-0 nova_compute[265391]:       <entry name="serial">1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4</entry>
Sep 30 18:43:09 compute-0 nova_compute[265391]:       <entry name="uuid">1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4</entry>
Sep 30 18:43:09 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     </system>
Sep 30 18:43:09 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:43:09 compute-0 nova_compute[265391]:   <os>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="q35">hvm</type>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:   </os>
Sep 30 18:43:09 compute-0 nova_compute[265391]:   <features>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <vmcoreinfo/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:   </features>
Sep 30 18:43:09 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:43:09 compute-0 nova_compute[265391]:   <cpu mode="host-model" match="exact">
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <topology sockets="1" cores="1" threads="1"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:43:09 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:43:09 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4_disk">
Sep 30 18:43:09 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:       </source>
Sep 30 18:43:09 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:43:09 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:43:09 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:43:09 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4_disk.config">
Sep 30 18:43:09 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:       </source>
Sep 30 18:43:09 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:43:09 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:43:09 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:43:09 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:b9:0d:ac"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:       <target dev="tapbb1a6fd9-a9"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:43:09 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4/console.log" append="off"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <video>
Sep 30 18:43:09 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     </video>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:43:09 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <controller type="usb" index="0"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:43:09 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:43:09 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:43:09 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:43:09 compute-0 nova_compute[265391]: </domain>
Sep 30 18:43:09 compute-0 nova_compute[265391]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Sep 30 18:43:09 compute-0 nova_compute[265391]: 2025-09-30 18:43:09.988 2 DEBUG nova.compute.manager [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Preparing to wait for external event network-vif-plugged-bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:43:09 compute-0 nova_compute[265391]: 2025-09-30 18:43:09.989 2 DEBUG oslo_concurrency.lockutils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Acquiring lock "1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:43:09 compute-0 nova_compute[265391]: 2025-09-30 18:43:09.989 2 DEBUG oslo_concurrency.lockutils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:43:09 compute-0 nova_compute[265391]: 2025-09-30 18:43:09.990 2 DEBUG oslo_concurrency.lockutils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:43:09 compute-0 nova_compute[265391]: 2025-09-30 18:43:09.991 2 DEBUG nova.virt.libvirt.vif [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:42:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-1513928321',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-1513928321',id=33,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=1151,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='63c45bef63ef4b9f895b3bab865e1a84',ramdisk_id='',reservation_id='r-rr510t3s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-134702932',owner_user_name='tempest-TestExecuteWorkloadBalanceStrategy-134702932-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:43:02Z,user_data=None,user_id='5717e8cb8548429b948a23763350ab4a',uuid=1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e", "address": "fa:16:3e:b9:0d:ac", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb1a6fd9-a9", "ovs_interfaceid": "bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Sep 30 18:43:09 compute-0 nova_compute[265391]: 2025-09-30 18:43:09.991 2 DEBUG nova.network.os_vif_util [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Converting VIF {"id": "bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e", "address": "fa:16:3e:b9:0d:ac", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb1a6fd9-a9", "ovs_interfaceid": "bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:43:09 compute-0 nova_compute[265391]: 2025-09-30 18:43:09.992 2 DEBUG nova.network.os_vif_util [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:0d:ac,bridge_name='br-int',has_traffic_filtering=True,id=bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e,network=Network(c8484b9b-b34e-4c32-b987-029d8fcb2a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb1a6fd9-a9') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:43:09 compute-0 nova_compute[265391]: 2025-09-30 18:43:09.992 2 DEBUG os_vif [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:0d:ac,bridge_name='br-int',has_traffic_filtering=True,id=bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e,network=Network(c8484b9b-b34e-4c32-b987-029d8fcb2a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb1a6fd9-a9') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Sep 30 18:43:09 compute-0 nova_compute[265391]: 2025-09-30 18:43:09.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:09 compute-0 nova_compute[265391]: 2025-09-30 18:43:09.993 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:43:09 compute-0 nova_compute[265391]: 2025-09-30 18:43:09.994 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:43:09 compute-0 nova_compute[265391]: 2025-09-30 18:43:09.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:09 compute-0 nova_compute[265391]: 2025-09-30 18:43:09.995 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': 'c507e4eb-de78-58f4-b0bd-a1b804600b70', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:43:10 compute-0 nova_compute[265391]: 2025-09-30 18:43:10.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:10 compute-0 nova_compute[265391]: 2025-09-30 18:43:10.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:43:10 compute-0 nova_compute[265391]: 2025-09-30 18:43:10.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:10 compute-0 nova_compute[265391]: 2025-09-30 18:43:10.027 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbb1a6fd9-a9, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:43:10 compute-0 nova_compute[265391]: 2025-09-30 18:43:10.028 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tapbb1a6fd9-a9, col_values=(('qos', UUID('f6066c6d-f1e9-414a-81c0-4146af41e336')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:43:10 compute-0 nova_compute[265391]: 2025-09-30 18:43:10.028 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tapbb1a6fd9-a9, col_values=(('external_ids', {'iface-id': 'bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b9:0d:ac', 'vm-uuid': '1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:43:10 compute-0 nova_compute[265391]: 2025-09-30 18:43:10.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:10 compute-0 NetworkManager[45059]: <info>  [1759257790.0302] manager: (tapbb1a6fd9-a9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/109)
Sep 30 18:43:10 compute-0 nova_compute[265391]: 2025-09-30 18:43:10.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:43:10 compute-0 nova_compute[265391]: 2025-09-30 18:43:10.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:10 compute-0 nova_compute[265391]: 2025-09-30 18:43:10.038 2 INFO os_vif [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:0d:ac,bridge_name='br-int',has_traffic_filtering=True,id=bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e,network=Network(c8484b9b-b34e-4c32-b987-029d8fcb2a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb1a6fd9-a9')
Sep 30 18:43:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1942: 353 pgs: 353 active+clean; 167 MiB data, 420 MiB used, 40 GiB / 40 GiB avail; 160 KiB/s rd, 3.9 MiB/s wr, 82 op/s
Sep 30 18:43:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:43:11 compute-0 ceph-mon[73755]: pgmap v1942: 353 pgs: 353 active+clean; 167 MiB data, 420 MiB used, 40 GiB / 40 GiB avail; 160 KiB/s rd, 3.9 MiB/s wr, 82 op/s
Sep 30 18:43:11 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:11.548 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:43:11 compute-0 nova_compute[265391]: 2025-09-30 18:43:11.579 2 DEBUG nova.virt.libvirt.driver [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:43:11 compute-0 nova_compute[265391]: 2025-09-30 18:43:11.580 2 DEBUG nova.virt.libvirt.driver [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:43:11 compute-0 nova_compute[265391]: 2025-09-30 18:43:11.580 2 DEBUG nova.virt.libvirt.driver [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] No VIF found with MAC fa:16:3e:b9:0d:ac, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Sep 30 18:43:11 compute-0 nova_compute[265391]: 2025-09-30 18:43:11.580 2 INFO nova.virt.libvirt.driver [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Using config drive
Sep 30 18:43:11 compute-0 nova_compute[265391]: 2025-09-30 18:43:11.608 2 DEBUG nova.storage.rbd_utils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] rbd image 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:43:11 compute-0 nova_compute[265391]: 2025-09-30 18:43:11.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:43:11.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:43:11.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:12 compute-0 nova_compute[265391]: 2025-09-30 18:43:12.121 2 WARNING neutronclient.v2_0.client [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:43:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1943: 353 pgs: 353 active+clean; 167 MiB data, 420 MiB used, 40 GiB / 40 GiB avail; 160 KiB/s rd, 3.9 MiB/s wr, 82 op/s
Sep 30 18:43:12 compute-0 nova_compute[265391]: 2025-09-30 18:43:12.897 2 INFO nova.virt.libvirt.driver [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Creating config drive at /var/lib/nova/instances/1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4/disk.config
Sep 30 18:43:12 compute-0 nova_compute[265391]: 2025-09-30 18:43:12.904 2 DEBUG oslo_concurrency.processutils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmpgi4oajas execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:43:13 compute-0 nova_compute[265391]: 2025-09-30 18:43:13.032 2 DEBUG oslo_concurrency.processutils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmpgi4oajas" returned: 0 in 0.128s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:43:13 compute-0 nova_compute[265391]: 2025-09-30 18:43:13.059 2 DEBUG nova.storage.rbd_utils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] rbd image 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:43:13 compute-0 nova_compute[265391]: 2025-09-30 18:43:13.062 2 DEBUG oslo_concurrency.processutils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4/disk.config 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:43:13 compute-0 nova_compute[265391]: 2025-09-30 18:43:13.209 2 DEBUG oslo_concurrency.processutils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4/disk.config 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:43:13 compute-0 nova_compute[265391]: 2025-09-30 18:43:13.210 2 INFO nova.virt.libvirt.driver [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Deleting local config drive /var/lib/nova/instances/1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4/disk.config because it was imported into RBD.
Sep 30 18:43:13 compute-0 kernel: tapbb1a6fd9-a9: entered promiscuous mode
Sep 30 18:43:13 compute-0 NetworkManager[45059]: <info>  [1759257793.2578] manager: (tapbb1a6fd9-a9): new Tun device (/org/freedesktop/NetworkManager/Devices/110)
Sep 30 18:43:13 compute-0 ovn_controller[156242]: 2025-09-30T18:43:13Z|00280|binding|INFO|Claiming lport bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e for this chassis.
Sep 30 18:43:13 compute-0 ovn_controller[156242]: 2025-09-30T18:43:13Z|00281|binding|INFO|bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e: Claiming fa:16:3e:b9:0d:ac 10.100.0.6
Sep 30 18:43:13 compute-0 nova_compute[265391]: 2025-09-30 18:43:13.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:13.267 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:0d:ac 10.100.0.6'], port_security=['fa:16:3e:b9:0d:ac 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63c45bef63ef4b9f895b3bab865e1a84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a9025550-4c18-4f21-a560-5b6f52684803', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55e305e6-0f4d-40bc-a70b-ac91f882ec57, chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:43:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:13.267 166158 INFO neutron.agent.ovn.metadata.agent [-] Port bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e in datapath c8484b9b-b34e-4c32-b987-029d8fcb2a28 bound to our chassis
Sep 30 18:43:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:13.268 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c8484b9b-b34e-4c32-b987-029d8fcb2a28
Sep 30 18:43:13 compute-0 ovn_controller[156242]: 2025-09-30T18:43:13Z|00282|binding|INFO|Setting lport bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e ovn-installed in OVS
Sep 30 18:43:13 compute-0 ovn_controller[156242]: 2025-09-30T18:43:13Z|00283|binding|INFO|Setting lport bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e up in Southbound
Sep 30 18:43:13 compute-0 nova_compute[265391]: 2025-09-30 18:43:13.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:13.287 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[88372ac7-43a5-4d0b-a1bc-6bd1516a51ff]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:43:13 compute-0 systemd-udevd[352320]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:43:13 compute-0 systemd-machined[219917]: New machine qemu-25-instance-00000021.
Sep 30 18:43:13 compute-0 systemd[1]: Started Virtual Machine qemu-25-instance-00000021.
Sep 30 18:43:13 compute-0 NetworkManager[45059]: <info>  [1759257793.3057] device (tapbb1a6fd9-a9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 18:43:13 compute-0 NetworkManager[45059]: <info>  [1759257793.3063] device (tapbb1a6fd9-a9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Sep 30 18:43:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:13.320 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[387d1f78-1b56-40d8-bc3c-510bf536aa69]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:43:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:13.322 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[aed91ccd-b4ac-47b1-b691-16f8882bc5a9]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:43:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:13.346 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[e7ec4173-f3c3-4f48-87ce-7a504623a45e]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:43:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:13.363 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[509b4451-934f-45b3-bec5-456fe391a363]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc8484b9b-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:48:bc:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636629, 'reachable_time': 26334, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352332, 'error': None, 'target': 'ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:43:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:13.376 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[02fae934-8bfd-4c66-bcc5-235bfd95d1bb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc8484b9b-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 636641, 'tstamp': 636641}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352334, 'error': None, 'target': 'ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc8484b9b-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 636645, 'tstamp': 636645}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352334, 'error': None, 'target': 'ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:43:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:13.377 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8484b9b-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:43:13 compute-0 nova_compute[265391]: 2025-09-30 18:43:13.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:13 compute-0 nova_compute[265391]: 2025-09-30 18:43:13.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:13.430 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc8484b9b-b0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:43:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:13.430 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:43:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:13.431 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc8484b9b-b0, col_values=(('external_ids', {'iface-id': 'd2e69f29-6b3a-46dc-9ed7-12031e1b7d2b'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:43:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:13.431 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:43:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:13.432 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[c520b304-9f05-469c-82cd-dcbc7ea8f0ba]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-c8484b9b-b34e-4c32-b987-029d8fcb2a28\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/c8484b9b-b34e-4c32-b987-029d8fcb2a28.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID c8484b9b-b34e-4c32-b987-029d8fcb2a28\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:43:13 compute-0 ceph-mon[73755]: pgmap v1943: 353 pgs: 353 active+clean; 167 MiB data, 420 MiB used, 40 GiB / 40 GiB avail; 160 KiB/s rd, 3.9 MiB/s wr, 82 op/s
Sep 30 18:43:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:43:13.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:43:13.887Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:43:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:43:13.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:43:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:43:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:43:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:43:14 compute-0 nova_compute[265391]: 2025-09-30 18:43:14.046 2 DEBUG nova.compute.manager [req-43f84ead-508a-4bc7-b505-3b97cd7830e9 req-ed3d05a9-391d-40de-9470-a422f18aba69 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Received event network-vif-plugged-bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:43:14 compute-0 nova_compute[265391]: 2025-09-30 18:43:14.047 2 DEBUG oslo_concurrency.lockutils [req-43f84ead-508a-4bc7-b505-3b97cd7830e9 req-ed3d05a9-391d-40de-9470-a422f18aba69 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:43:14 compute-0 nova_compute[265391]: 2025-09-30 18:43:14.047 2 DEBUG oslo_concurrency.lockutils [req-43f84ead-508a-4bc7-b505-3b97cd7830e9 req-ed3d05a9-391d-40de-9470-a422f18aba69 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:43:14 compute-0 nova_compute[265391]: 2025-09-30 18:43:14.048 2 DEBUG oslo_concurrency.lockutils [req-43f84ead-508a-4bc7-b505-3b97cd7830e9 req-ed3d05a9-391d-40de-9470-a422f18aba69 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:43:14 compute-0 nova_compute[265391]: 2025-09-30 18:43:14.048 2 DEBUG nova.compute.manager [req-43f84ead-508a-4bc7-b505-3b97cd7830e9 req-ed3d05a9-391d-40de-9470-a422f18aba69 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Processing event network-vif-plugged-bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:43:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1944: 353 pgs: 353 active+clean; 167 MiB data, 420 MiB used, 40 GiB / 40 GiB avail; 162 KiB/s rd, 3.9 MiB/s wr, 87 op/s
Sep 30 18:43:14 compute-0 nova_compute[265391]: 2025-09-30 18:43:14.747 2 DEBUG nova.compute.manager [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:43:14 compute-0 nova_compute[265391]: 2025-09-30 18:43:14.750 2 DEBUG nova.virt.libvirt.driver [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Sep 30 18:43:14 compute-0 nova_compute[265391]: 2025-09-30 18:43:14.753 2 INFO nova.virt.libvirt.driver [-] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Instance spawned successfully.
Sep 30 18:43:14 compute-0 nova_compute[265391]: 2025-09-30 18:43:14.753 2 DEBUG nova.virt.libvirt.driver [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Sep 30 18:43:15 compute-0 nova_compute[265391]: 2025-09-30 18:43:15.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:15 compute-0 nova_compute[265391]: 2025-09-30 18:43:15.269 2 DEBUG nova.virt.libvirt.driver [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:43:15 compute-0 nova_compute[265391]: 2025-09-30 18:43:15.270 2 DEBUG nova.virt.libvirt.driver [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:43:15 compute-0 nova_compute[265391]: 2025-09-30 18:43:15.270 2 DEBUG nova.virt.libvirt.driver [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:43:15 compute-0 nova_compute[265391]: 2025-09-30 18:43:15.271 2 DEBUG nova.virt.libvirt.driver [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:43:15 compute-0 nova_compute[265391]: 2025-09-30 18:43:15.271 2 DEBUG nova.virt.libvirt.driver [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:43:15 compute-0 nova_compute[265391]: 2025-09-30 18:43:15.272 2 DEBUG nova.virt.libvirt.driver [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:43:15 compute-0 ceph-mon[73755]: pgmap v1944: 353 pgs: 353 active+clean; 167 MiB data, 420 MiB used, 40 GiB / 40 GiB avail; 162 KiB/s rd, 3.9 MiB/s wr, 87 op/s
Sep 30 18:43:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:43:15.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:15 compute-0 nova_compute[265391]: 2025-09-30 18:43:15.782 2 INFO nova.compute.manager [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Took 11.90 seconds to spawn the instance on the hypervisor.
Sep 30 18:43:15 compute-0 nova_compute[265391]: 2025-09-30 18:43:15.783 2 DEBUG nova.compute.manager [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Sep 30 18:43:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:43:15.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:16 compute-0 nova_compute[265391]: 2025-09-30 18:43:16.126 2 DEBUG nova.compute.manager [req-b35e7b91-fbbe-4ace-b45e-ab1580d997fe req-711d276c-c978-4347-b897-7dfcc23e59dd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Received event network-vif-plugged-bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:43:16 compute-0 nova_compute[265391]: 2025-09-30 18:43:16.127 2 DEBUG oslo_concurrency.lockutils [req-b35e7b91-fbbe-4ace-b45e-ab1580d997fe req-711d276c-c978-4347-b897-7dfcc23e59dd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:43:16 compute-0 nova_compute[265391]: 2025-09-30 18:43:16.128 2 DEBUG oslo_concurrency.lockutils [req-b35e7b91-fbbe-4ace-b45e-ab1580d997fe req-711d276c-c978-4347-b897-7dfcc23e59dd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:43:16 compute-0 nova_compute[265391]: 2025-09-30 18:43:16.128 2 DEBUG oslo_concurrency.lockutils [req-b35e7b91-fbbe-4ace-b45e-ab1580d997fe req-711d276c-c978-4347-b897-7dfcc23e59dd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:43:16 compute-0 nova_compute[265391]: 2025-09-30 18:43:16.129 2 DEBUG nova.compute.manager [req-b35e7b91-fbbe-4ace-b45e-ab1580d997fe req-711d276c-c978-4347-b897-7dfcc23e59dd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] No waiting events found dispatching network-vif-plugged-bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:43:16 compute-0 nova_compute[265391]: 2025-09-30 18:43:16.129 2 WARNING nova.compute.manager [req-b35e7b91-fbbe-4ace-b45e-ab1580d997fe req-711d276c-c978-4347-b897-7dfcc23e59dd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Received unexpected event network-vif-plugged-bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e for instance with vm_state active and task_state None.
Sep 30 18:43:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:43:16 compute-0 nova_compute[265391]: 2025-09-30 18:43:16.343 2 INFO nova.compute.manager [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Took 19.06 seconds to build instance.
Sep 30 18:43:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1945: 353 pgs: 353 active+clean; 167 MiB data, 420 MiB used, 40 GiB / 40 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Sep 30 18:43:16 compute-0 nova_compute[265391]: 2025-09-30 18:43:16.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:16 compute-0 nova_compute[265391]: 2025-09-30 18:43:16.849 2 DEBUG oslo_concurrency.lockutils [None req-1d0022ed-7f21-4cb2-a52e-c5bb9772bb47 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.586s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:43:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:43:17.362Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:43:17 compute-0 ceph-mon[73755]: pgmap v1945: 353 pgs: 353 active+clean; 167 MiB data, 420 MiB used, 40 GiB / 40 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Sep 30 18:43:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:43:17.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:43:17.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1946: 353 pgs: 353 active+clean; 167 MiB data, 420 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Sep 30 18:43:18 compute-0 podman[352385]: 2025-09-30 18:43:18.521129441 +0000 UTC m=+0.056113927 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20250930, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=multipathd)
Sep 30 18:43:18 compute-0 podman[352386]: 2025-09-30 18:43:18.522163208 +0000 UTC m=+0.055713427 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2)
Sep 30 18:43:18 compute-0 podman[352387]: 2025-09-30 18:43:18.527100334 +0000 UTC m=+0.058261572 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.expose-services=, version=9.6, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, release=1755695350, build-date=2025-08-20T13:12:41)
Sep 30 18:43:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:43:18] "GET /metrics HTTP/1.1" 200 46731 "" "Prometheus/2.51.0"
Sep 30 18:43:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:43:18] "GET /metrics HTTP/1.1" 200 46731 "" "Prometheus/2.51.0"
Sep 30 18:43:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:43:18.904Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:43:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:43:18.904Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:43:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:43:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:43:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:43:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:43:19 compute-0 ceph-mon[73755]: pgmap v1946: 353 pgs: 353 active+clean; 167 MiB data, 420 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Sep 30 18:43:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:43:19.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:43:19.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:20 compute-0 nova_compute[265391]: 2025-09-30 18:43:20.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:20 compute-0 sudo[352446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:43:20 compute-0 sudo[352446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:43:20 compute-0 sudo[352446]: pam_unix(sudo:session): session closed for user root
Sep 30 18:43:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1947: 353 pgs: 353 active+clean; 167 MiB data, 420 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Sep 30 18:43:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:43:21 compute-0 ceph-mon[73755]: pgmap v1947: 353 pgs: 353 active+clean; 167 MiB data, 420 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Sep 30 18:43:21 compute-0 nova_compute[265391]: 2025-09-30 18:43:21.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:43:21.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:43:21.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:43:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:43:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1948: 353 pgs: 353 active+clean; 167 MiB data, 420 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Sep 30 18:43:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:43:23 compute-0 ceph-mon[73755]: pgmap v1948: 353 pgs: 353 active+clean; 167 MiB data, 420 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Sep 30 18:43:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:43:23.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:43:23.889Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:43:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:43:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:43:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:43:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:43:24.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:43:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1949: 353 pgs: 353 active+clean; 167 MiB data, 420 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Sep 30 18:43:25 compute-0 nova_compute[265391]: 2025-09-30 18:43:25.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:25 compute-0 sshd-session[352383]: Connection closed by 154.125.120.7 port 53377 [preauth]
Sep 30 18:43:25 compute-0 ceph-mon[73755]: pgmap v1949: 353 pgs: 353 active+clean; 167 MiB data, 420 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Sep 30 18:43:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:43:25.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:43:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:43:26.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:43:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:43:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1950: 353 pgs: 353 active+clean; 167 MiB data, 420 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1023 B/s wr, 69 op/s
Sep 30 18:43:26 compute-0 nova_compute[265391]: 2025-09-30 18:43:26.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:26 compute-0 ovn_controller[156242]: 2025-09-30T18:43:26Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b9:0d:ac 10.100.0.6
Sep 30 18:43:26 compute-0 ovn_controller[156242]: 2025-09-30T18:43:26Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b9:0d:ac 10.100.0.6
Sep 30 18:43:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:43:27.363Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:43:27 compute-0 ceph-mon[73755]: pgmap v1950: 353 pgs: 353 active+clean; 167 MiB data, 420 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1023 B/s wr, 69 op/s
Sep 30 18:43:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:43:27.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:43:28.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1951: 353 pgs: 353 active+clean; 200 MiB data, 445 MiB used, 40 GiB / 40 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 125 op/s
Sep 30 18:43:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:43:28] "GET /metrics HTTP/1.1" 200 46724 "" "Prometheus/2.51.0"
Sep 30 18:43:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:43:28] "GET /metrics HTTP/1.1" 200 46724 "" "Prometheus/2.51.0"
Sep 30 18:43:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:43:28.905Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:43:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:43:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:43:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:43:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:43:29 compute-0 ceph-mon[73755]: pgmap v1951: 353 pgs: 353 active+clean; 200 MiB data, 445 MiB used, 40 GiB / 40 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 125 op/s
Sep 30 18:43:29 compute-0 podman[276673]: time="2025-09-30T18:43:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:43:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:43:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42600 "" "Go-http-client/1.1"
Sep 30 18:43:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:43:29.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:43:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10771 "" "Go-http-client/1.1"
Sep 30 18:43:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:43:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:43:30.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:43:30 compute-0 nova_compute[265391]: 2025-09-30 18:43:30.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1952: 353 pgs: 353 active+clean; 200 MiB data, 445 MiB used, 40 GiB / 40 GiB avail; 145 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Sep 30 18:43:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:43:31 compute-0 openstack_network_exporter[279566]: ERROR   18:43:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:43:31 compute-0 openstack_network_exporter[279566]: ERROR   18:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:43:31 compute-0 openstack_network_exporter[279566]: ERROR   18:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:43:31 compute-0 openstack_network_exporter[279566]: ERROR   18:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:43:31 compute-0 openstack_network_exporter[279566]: ERROR   18:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:43:31 compute-0 nova_compute[265391]: 2025-09-30 18:43:31.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:31 compute-0 ceph-mon[73755]: pgmap v1952: 353 pgs: 353 active+clean; 200 MiB data, 445 MiB used, 40 GiB / 40 GiB avail; 145 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Sep 30 18:43:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:43:31.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:43:32.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1953: 353 pgs: 353 active+clean; 200 MiB data, 445 MiB used, 40 GiB / 40 GiB avail; 145 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Sep 30 18:43:32 compute-0 ceph-mon[73755]: pgmap v1953: 353 pgs: 353 active+clean; 200 MiB data, 445 MiB used, 40 GiB / 40 GiB avail; 145 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Sep 30 18:43:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:43:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:43:33.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:43:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:43:33.891Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:43:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:43:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:43:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:43:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:43:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:43:34.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1954: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 146 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Sep 30 18:43:35 compute-0 nova_compute[265391]: 2025-09-30 18:43:35.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:35 compute-0 ceph-mon[73755]: pgmap v1954: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 146 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Sep 30 18:43:35 compute-0 podman[352488]: 2025-09-30 18:43:35.519242562 +0000 UTC m=+0.050293678 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Sep 30 18:43:35 compute-0 podman[352486]: 2025-09-30 18:43:35.519685323 +0000 UTC m=+0.054990058 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_managed=true, org.label-schema.vendor=CentOS)
Sep 30 18:43:35 compute-0 podman[352487]: 2025-09-30 18:43:35.548219803 +0000 UTC m=+0.083565179 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:43:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:43:35.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:43:36.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:43:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1955: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 145 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Sep 30 18:43:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:43:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1721697830' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:43:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:43:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1721697830' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:43:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1721697830' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:43:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1721697830' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:43:36 compute-0 nova_compute[265391]: 2025-09-30 18:43:36.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:37 compute-0 nova_compute[265391]: 2025-09-30 18:43:37.217 2 DEBUG nova.virt.libvirt.driver [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Check if temp file /var/lib/nova/instances/tmpgbao7i09 exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10968
Sep 30 18:43:37 compute-0 nova_compute[265391]: 2025-09-30 18:43:37.222 2 DEBUG nova.compute.manager [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpgbao7i09',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='0bd62f93-1956-4b12-a38a-10deee907b16',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.12/site-packages/nova/compute/manager.py:9294
Sep 30 18:43:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:43:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:43:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:43:37.364Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:43:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:43:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:43:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:43:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:43:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:43:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:43:37 compute-0 ceph-mon[73755]: pgmap v1955: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 145 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Sep 30 18:43:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:43:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:43:37.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:43:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:43:38.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:43:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1956: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 146 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Sep 30 18:43:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:43:38] "GET /metrics HTTP/1.1" 200 46724 "" "Prometheus/2.51.0"
Sep 30 18:43:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:43:38] "GET /metrics HTTP/1.1" 200 46724 "" "Prometheus/2.51.0"
Sep 30 18:43:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:43:38.906Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:43:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:43:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:43:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:43:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:43:39 compute-0 ceph-mon[73755]: pgmap v1956: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 146 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Sep 30 18:43:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:43:39.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:43:40.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:40 compute-0 nova_compute[265391]: 2025-09-30 18:43:40.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:40 compute-0 sudo[352553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:43:40 compute-0 sudo[352553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:43:40 compute-0 sudo[352553]: pam_unix(sudo:session): session closed for user root
Sep 30 18:43:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1957: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:43:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:43:41 compute-0 ceph-mon[73755]: pgmap v1957: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:43:41 compute-0 nova_compute[265391]: 2025-09-30 18:43:41.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:41 compute-0 nova_compute[265391]: 2025-09-30 18:43:41.685 2 DEBUG nova.compute.manager [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Preparing to wait for external event network-vif-plugged-a362175f-2dba-4a9f-bc07-4260625a8ce0 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:43:41 compute-0 nova_compute[265391]: 2025-09-30 18:43:41.686 2 DEBUG oslo_concurrency.lockutils [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "0bd62f93-1956-4b12-a38a-10deee907b16-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:43:41 compute-0 nova_compute[265391]: 2025-09-30 18:43:41.686 2 DEBUG oslo_concurrency.lockutils [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "0bd62f93-1956-4b12-a38a-10deee907b16-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:43:41 compute-0 nova_compute[265391]: 2025-09-30 18:43:41.687 2 DEBUG oslo_concurrency.lockutils [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "0bd62f93-1956-4b12-a38a-10deee907b16-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:43:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:43:41.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:43:42.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1958: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:43:43 compute-0 ovn_controller[156242]: 2025-09-30T18:43:43Z|00284|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Sep 30 18:43:43 compute-0 ceph-mon[73755]: pgmap v1958: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:43:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:43:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:43:43.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:43:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:43:43.892Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:43:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:43:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:43:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:43:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:43:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:43:44.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1959: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 23 KiB/s wr, 2 op/s
Sep 30 18:43:45 compute-0 nova_compute[265391]: 2025-09-30 18:43:45.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:45 compute-0 ceph-mon[73755]: pgmap v1959: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 23 KiB/s wr, 2 op/s
Sep 30 18:43:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:43:45.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:43:46.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:43:46 compute-0 nova_compute[265391]: 2025-09-30 18:43:46.225 2 DEBUG nova.compute.manager [req-d5c30d0d-5a8b-43bb-98d2-94c6c17b1ac3 req-c5984b8d-5cba-41d4-bce7-792192d0f3e5 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Received event network-vif-unplugged-a362175f-2dba-4a9f-bc07-4260625a8ce0 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:43:46 compute-0 nova_compute[265391]: 2025-09-30 18:43:46.225 2 DEBUG oslo_concurrency.lockutils [req-d5c30d0d-5a8b-43bb-98d2-94c6c17b1ac3 req-c5984b8d-5cba-41d4-bce7-792192d0f3e5 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "0bd62f93-1956-4b12-a38a-10deee907b16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:43:46 compute-0 nova_compute[265391]: 2025-09-30 18:43:46.226 2 DEBUG oslo_concurrency.lockutils [req-d5c30d0d-5a8b-43bb-98d2-94c6c17b1ac3 req-c5984b8d-5cba-41d4-bce7-792192d0f3e5 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "0bd62f93-1956-4b12-a38a-10deee907b16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:43:46 compute-0 nova_compute[265391]: 2025-09-30 18:43:46.226 2 DEBUG oslo_concurrency.lockutils [req-d5c30d0d-5a8b-43bb-98d2-94c6c17b1ac3 req-c5984b8d-5cba-41d4-bce7-792192d0f3e5 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "0bd62f93-1956-4b12-a38a-10deee907b16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:43:46 compute-0 nova_compute[265391]: 2025-09-30 18:43:46.226 2 DEBUG nova.compute.manager [req-d5c30d0d-5a8b-43bb-98d2-94c6c17b1ac3 req-c5984b8d-5cba-41d4-bce7-792192d0f3e5 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] No event matching network-vif-unplugged-a362175f-2dba-4a9f-bc07-4260625a8ce0 in dict_keys([('network-vif-plugged', 'a362175f-2dba-4a9f-bc07-4260625a8ce0')]) pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:349
Sep 30 18:43:46 compute-0 nova_compute[265391]: 2025-09-30 18:43:46.226 2 DEBUG nova.compute.manager [req-d5c30d0d-5a8b-43bb-98d2-94c6c17b1ac3 req-c5984b8d-5cba-41d4-bce7-792192d0f3e5 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Received event network-vif-unplugged-a362175f-2dba-4a9f-bc07-4260625a8ce0 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:43:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1960: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 8.1 KiB/s wr, 1 op/s
Sep 30 18:43:46 compute-0 nova_compute[265391]: 2025-09-30 18:43:46.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:43:47.365Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:43:47 compute-0 ceph-mon[73755]: pgmap v1960: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 8.1 KiB/s wr, 1 op/s
Sep 30 18:43:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:43:47.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:43:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:43:48.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:43:48 compute-0 nova_compute[265391]: 2025-09-30 18:43:48.286 2 DEBUG nova.compute.manager [req-aeba9839-22b7-4bb2-8802-94766bc88147 req-3aa8ae14-e602-43f8-836c-7ccef04931ea 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Received event network-vif-plugged-a362175f-2dba-4a9f-bc07-4260625a8ce0 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:43:48 compute-0 nova_compute[265391]: 2025-09-30 18:43:48.286 2 DEBUG oslo_concurrency.lockutils [req-aeba9839-22b7-4bb2-8802-94766bc88147 req-3aa8ae14-e602-43f8-836c-7ccef04931ea 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "0bd62f93-1956-4b12-a38a-10deee907b16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:43:48 compute-0 nova_compute[265391]: 2025-09-30 18:43:48.287 2 DEBUG oslo_concurrency.lockutils [req-aeba9839-22b7-4bb2-8802-94766bc88147 req-3aa8ae14-e602-43f8-836c-7ccef04931ea 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "0bd62f93-1956-4b12-a38a-10deee907b16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:43:48 compute-0 nova_compute[265391]: 2025-09-30 18:43:48.287 2 DEBUG oslo_concurrency.lockutils [req-aeba9839-22b7-4bb2-8802-94766bc88147 req-3aa8ae14-e602-43f8-836c-7ccef04931ea 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "0bd62f93-1956-4b12-a38a-10deee907b16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:43:48 compute-0 nova_compute[265391]: 2025-09-30 18:43:48.287 2 DEBUG nova.compute.manager [req-aeba9839-22b7-4bb2-8802-94766bc88147 req-3aa8ae14-e602-43f8-836c-7ccef04931ea 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Processing event network-vif-plugged-a362175f-2dba-4a9f-bc07-4260625a8ce0 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:43:48 compute-0 nova_compute[265391]: 2025-09-30 18:43:48.287 2 DEBUG nova.compute.manager [req-aeba9839-22b7-4bb2-8802-94766bc88147 req-3aa8ae14-e602-43f8-836c-7ccef04931ea 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Received event network-changed-a362175f-2dba-4a9f-bc07-4260625a8ce0 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:43:48 compute-0 nova_compute[265391]: 2025-09-30 18:43:48.287 2 DEBUG nova.compute.manager [req-aeba9839-22b7-4bb2-8802-94766bc88147 req-3aa8ae14-e602-43f8-836c-7ccef04931ea 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Refreshing instance network info cache due to event network-changed-a362175f-2dba-4a9f-bc07-4260625a8ce0. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:43:48 compute-0 nova_compute[265391]: 2025-09-30 18:43:48.287 2 DEBUG oslo_concurrency.lockutils [req-aeba9839-22b7-4bb2-8802-94766bc88147 req-3aa8ae14-e602-43f8-836c-7ccef04931ea 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-0bd62f93-1956-4b12-a38a-10deee907b16" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:43:48 compute-0 nova_compute[265391]: 2025-09-30 18:43:48.287 2 DEBUG oslo_concurrency.lockutils [req-aeba9839-22b7-4bb2-8802-94766bc88147 req-3aa8ae14-e602-43f8-836c-7ccef04931ea 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-0bd62f93-1956-4b12-a38a-10deee907b16" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:43:48 compute-0 nova_compute[265391]: 2025-09-30 18:43:48.288 2 DEBUG nova.network.neutron [req-aeba9839-22b7-4bb2-8802-94766bc88147 req-3aa8ae14-e602-43f8-836c-7ccef04931ea 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Refreshing network info cache for port a362175f-2dba-4a9f-bc07-4260625a8ce0 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:43:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1961: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 9.1 KiB/s wr, 2 op/s
Sep 30 18:43:48 compute-0 nova_compute[265391]: 2025-09-30 18:43:48.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:43:48 compute-0 nova_compute[265391]: 2025-09-30 18:43:48.709 2 INFO nova.compute.manager [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Took 7.02 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Sep 30 18:43:48 compute-0 nova_compute[265391]: 2025-09-30 18:43:48.710 2 DEBUG nova.compute.manager [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:43:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:43:48] "GET /metrics HTTP/1.1" 200 46724 "" "Prometheus/2.51.0"
Sep 30 18:43:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:43:48] "GET /metrics HTTP/1.1" 200 46724 "" "Prometheus/2.51.0"
Sep 30 18:43:48 compute-0 nova_compute[265391]: 2025-09-30 18:43:48.795 2 WARNING neutronclient.v2_0.client [req-aeba9839-22b7-4bb2-8802-94766bc88147 req-3aa8ae14-e602-43f8-836c-7ccef04931ea 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:43:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:43:48.907Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:43:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:43:48.907Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:43:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:43:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:43:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:43:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:43:49 compute-0 nova_compute[265391]: 2025-09-30 18:43:49.229 2 DEBUG nova.compute.manager [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpgbao7i09',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='0bd62f93-1956-4b12-a38a-10deee907b16',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(5921f771-62ce-4f71-be1d-1e67d936f2cc),old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9659
Sep 30 18:43:49 compute-0 nova_compute[265391]: 2025-09-30 18:43:49.232 2 DEBUG nova.objects.instance [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lazy-loading 'migration_context' on Instance uuid 0bd62f93-1956-4b12-a38a-10deee907b16 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:43:49 compute-0 nova_compute[265391]: 2025-09-30 18:43:49.233 2 DEBUG nova.virt.libvirt.driver [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Starting monitoring of live migration _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11543
Sep 30 18:43:49 compute-0 nova_compute[265391]: 2025-09-30 18:43:49.234 2 DEBUG nova.virt.libvirt.driver [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Sep 30 18:43:49 compute-0 nova_compute[265391]: 2025-09-30 18:43:49.234 2 DEBUG nova.virt.libvirt.driver [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Sep 30 18:43:49 compute-0 nova_compute[265391]: 2025-09-30 18:43:49.330 2 WARNING neutronclient.v2_0.client [req-aeba9839-22b7-4bb2-8802-94766bc88147 req-3aa8ae14-e602-43f8-836c-7ccef04931ea 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:43:49 compute-0 nova_compute[265391]: 2025-09-30 18:43:49.500 2 DEBUG nova.network.neutron [req-aeba9839-22b7-4bb2-8802-94766bc88147 req-3aa8ae14-e602-43f8-836c-7ccef04931ea 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Updated VIF entry in instance network info cache for port a362175f-2dba-4a9f-bc07-4260625a8ce0. _build_network_info_model /usr/lib/python3.12/site-packages/nova/network/neutron.py:3542
Sep 30 18:43:49 compute-0 nova_compute[265391]: 2025-09-30 18:43:49.501 2 DEBUG nova.network.neutron [req-aeba9839-22b7-4bb2-8802-94766bc88147 req-3aa8ae14-e602-43f8-836c-7ccef04931ea 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Updating instance_info_cache with network_info: [{"id": "a362175f-2dba-4a9f-bc07-4260625a8ce0", "address": "fa:16:3e:52:c0:eb", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa362175f-2d", "ovs_interfaceid": "a362175f-2dba-4a9f-bc07-4260625a8ce0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:43:49 compute-0 podman[352586]: 2025-09-30 18:43:49.543178934 +0000 UTC m=+0.075454421 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=watcher_latest, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Sep 30 18:43:49 compute-0 podman[352587]: 2025-09-30 18:43:49.552880522 +0000 UTC m=+0.085950649 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=iscsid)
Sep 30 18:43:49 compute-0 podman[352588]: 2025-09-30 18:43:49.564242283 +0000 UTC m=+0.083301782 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.6, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, release=1755695350, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64)
Sep 30 18:43:49 compute-0 ceph-mon[73755]: pgmap v1961: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 9.1 KiB/s wr, 2 op/s
Sep 30 18:43:49 compute-0 nova_compute[265391]: 2025-09-30 18:43:49.736 2 DEBUG nova.virt.libvirt.driver [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Sep 30 18:43:49 compute-0 nova_compute[265391]: 2025-09-30 18:43:49.736 2 DEBUG nova.virt.libvirt.driver [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Sep 30 18:43:49 compute-0 nova_compute[265391]: 2025-09-30 18:43:49.744 2 DEBUG nova.virt.libvirt.vif [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:42:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-1791586084',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-1791586084',id=32,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:42:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=1151,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='63c45bef63ef4b9f895b3bab865e1a84',ramdisk_id='',reservation_id='r-1jmywl2f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-134702932',owner_user_name='tempest-TestExecuteWorkloadBalanceStrategy-134702932-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:42:49Z,user_data=None,user_id='5717e8cb8548429b948a23763350ab4a',uuid=0bd62f93-1956-4b12-a38a-10deee907b16,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a362175f-2dba-4a9f-bc07-4260625a8ce0", "address": "fa:16:3e:52:c0:eb", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapa362175f-2d", "ovs_interfaceid": "a362175f-2dba-4a9f-bc07-4260625a8ce0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:43:49 compute-0 nova_compute[265391]: 2025-09-30 18:43:49.744 2 DEBUG nova.network.os_vif_util [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "a362175f-2dba-4a9f-bc07-4260625a8ce0", "address": "fa:16:3e:52:c0:eb", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapa362175f-2d", "ovs_interfaceid": "a362175f-2dba-4a9f-bc07-4260625a8ce0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:43:49 compute-0 nova_compute[265391]: 2025-09-30 18:43:49.745 2 DEBUG nova.network.os_vif_util [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:c0:eb,bridge_name='br-int',has_traffic_filtering=True,id=a362175f-2dba-4a9f-bc07-4260625a8ce0,network=Network(c8484b9b-b34e-4c32-b987-029d8fcb2a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa362175f-2d') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:43:49 compute-0 nova_compute[265391]: 2025-09-30 18:43:49.745 2 DEBUG nova.virt.libvirt.migration [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Updating guest XML with vif config: <interface type="ethernet">
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <mac address="fa:16:3e:52:c0:eb"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <model type="virtio"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <mtu size="1442"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <target dev="tapa362175f-2d"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]: </interface>
Sep 30 18:43:49 compute-0 nova_compute[265391]:  _update_vif_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:534
Sep 30 18:43:49 compute-0 nova_compute[265391]: 2025-09-30 18:43:49.746 2 DEBUG nova.virt.libvirt.migration [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _remove_cpu_shared_set_xml input xml=<domain type="kvm">
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <name>instance-00000020</name>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <uuid>0bd62f93-1956-4b12-a38a-10deee907b16</uuid>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteWorkloadBalanceStrategy-server-1791586084</nova:name>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:42:42</nova:creationTime>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <nova:flavor name="tempest-watcher_flavor-293987126" id="dc3a14e6-3544-428c-a856-1da19a12bf48">
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:memory>1151</nova:memory>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:extraSpecs/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:43:49 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:user uuid="5717e8cb8548429b948a23763350ab4a">tempest-TestExecuteWorkloadBalanceStrategy-134702932-project-admin</nova:user>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:project uuid="63c45bef63ef4b9f895b3bab865e1a84">tempest-TestExecuteWorkloadBalanceStrategy-134702932</nova:project>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:port uuid="a362175f-2dba-4a9f-bc07-4260625a8ce0">
Sep 30 18:43:49 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <memory unit="KiB">1178624</memory>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">1178624</currentMemory>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <system>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <entry name="serial">0bd62f93-1956-4b12-a38a-10deee907b16</entry>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <entry name="uuid">0bd62f93-1956-4b12-a38a-10deee907b16</entry>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </system>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <os>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   </os>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <features>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   </features>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/0bd62f93-1956-4b12-a38a-10deee907b16_disk">
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       </source>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/0bd62f93-1956-4b12-a38a-10deee907b16_disk.config">
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       </source>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <interface type="ethernet"><mac address="fa:16:3e:52:c0:eb"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa362175f-2d"/><address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </interface><serial type="pty">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/0bd62f93-1956-4b12-a38a-10deee907b16/console.log" append="off"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       </target>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/0bd62f93-1956-4b12-a38a-10deee907b16/console.log" append="off"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </console>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </input>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <video>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </video>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]: </domain>
Sep 30 18:43:49 compute-0 nova_compute[265391]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:241
Sep 30 18:43:49 compute-0 nova_compute[265391]: 2025-09-30 18:43:49.746 2 DEBUG nova.virt.libvirt.migration [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _remove_cpu_shared_set_xml output xml=<domain type="kvm">
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <name>instance-00000020</name>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <uuid>0bd62f93-1956-4b12-a38a-10deee907b16</uuid>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteWorkloadBalanceStrategy-server-1791586084</nova:name>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:42:42</nova:creationTime>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <nova:flavor name="tempest-watcher_flavor-293987126" id="dc3a14e6-3544-428c-a856-1da19a12bf48">
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:memory>1151</nova:memory>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:extraSpecs/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:43:49 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:user uuid="5717e8cb8548429b948a23763350ab4a">tempest-TestExecuteWorkloadBalanceStrategy-134702932-project-admin</nova:user>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:project uuid="63c45bef63ef4b9f895b3bab865e1a84">tempest-TestExecuteWorkloadBalanceStrategy-134702932</nova:project>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:port uuid="a362175f-2dba-4a9f-bc07-4260625a8ce0">
Sep 30 18:43:49 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <memory unit="KiB">1178624</memory>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">1178624</currentMemory>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <system>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <entry name="serial">0bd62f93-1956-4b12-a38a-10deee907b16</entry>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <entry name="uuid">0bd62f93-1956-4b12-a38a-10deee907b16</entry>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </system>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <os>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   </os>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <features>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   </features>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/0bd62f93-1956-4b12-a38a-10deee907b16_disk">
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       </source>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/0bd62f93-1956-4b12-a38a-10deee907b16_disk.config">
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       </source>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:52:c0:eb"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target dev="tapa362175f-2d"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/0bd62f93-1956-4b12-a38a-10deee907b16/console.log" append="off"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       </target>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/0bd62f93-1956-4b12-a38a-10deee907b16/console.log" append="off"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </console>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </input>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <video>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </video>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]: </domain>
Sep 30 18:43:49 compute-0 nova_compute[265391]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:250
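The two debug dumps above are the domain XML before and after nova's _remove_cpu_shared_set_xml pass; this instance carries no host-specific CPU pinning, so the input and output are identical. As a rough illustration only, and not a copy of nova.virt.libvirt.migration's real logic, a pre-migration rewrite of this kind can be expressed as a small ElementTree transformation. The cpuset attribute in the example below is an assumption added for demonstration; the XML dumped above does not have one. The later _update_pci_xml and _update_pci_dev_xml passes follow the same parse-edit-serialize pattern.

    # Hypothetical sketch, not nova's actual implementation: drop a
    # host-specific cpuset pin from the <vcpu> element of a libvirt domain
    # XML before handing the definition to the destination host.
    import xml.etree.ElementTree as ET

    def strip_vcpu_cpuset(domain_xml: str) -> str:
        root = ET.fromstring(domain_xml)
        vcpu = root.find("vcpu")
        if vcpu is not None and "cpuset" in vcpu.attrib:
            del vcpu.attrib["cpuset"]  # pinning that only made sense on the source
        return ET.tostring(root, encoding="unicode")

    # Example input/output (the cpuset value is invented for the demo):
    xml_in = '<domain type="kvm"><vcpu placement="static" cpuset="0-3">1</vcpu></domain>'
    print(strip_vcpu_cpuset(xml_in))
    # -> <domain type="kvm"><vcpu placement="static">1</vcpu></domain>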
Sep 30 18:43:49 compute-0 nova_compute[265391]: 2025-09-30 18:43:49.746 2 DEBUG nova.virt.libvirt.migration [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _update_pci_xml output xml=<domain type="kvm">
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <name>instance-00000020</name>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <uuid>0bd62f93-1956-4b12-a38a-10deee907b16</uuid>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteWorkloadBalanceStrategy-server-1791586084</nova:name>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:42:42</nova:creationTime>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <nova:flavor name="tempest-watcher_flavor-293987126" id="dc3a14e6-3544-428c-a856-1da19a12bf48">
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:memory>1151</nova:memory>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:extraSpecs/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:43:49 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:user uuid="5717e8cb8548429b948a23763350ab4a">tempest-TestExecuteWorkloadBalanceStrategy-134702932-project-admin</nova:user>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:project uuid="63c45bef63ef4b9f895b3bab865e1a84">tempest-TestExecuteWorkloadBalanceStrategy-134702932</nova:project>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <nova:port uuid="a362175f-2dba-4a9f-bc07-4260625a8ce0">
Sep 30 18:43:49 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <memory unit="KiB">1178624</memory>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">1178624</currentMemory>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <system>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <entry name="serial">0bd62f93-1956-4b12-a38a-10deee907b16</entry>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <entry name="uuid">0bd62f93-1956-4b12-a38a-10deee907b16</entry>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </system>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <os>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   </os>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <features>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   </features>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/0bd62f93-1956-4b12-a38a-10deee907b16_disk">
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       </source>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/0bd62f93-1956-4b12-a38a-10deee907b16_disk.config">
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       </source>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:52:c0:eb"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target dev="tapa362175f-2d"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/0bd62f93-1956-4b12-a38a-10deee907b16/console.log" append="off"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:43:49 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       </target>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/0bd62f93-1956-4b12-a38a-10deee907b16/console.log" append="off"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </console>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </input>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <video>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </video>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:43:49 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:43:49 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:43:49 compute-0 nova_compute[265391]: </domain>
Sep 30 18:43:49 compute-0 nova_compute[265391]:  _update_pci_dev_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:166
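The _update_pci_dev_xml step rewrites host PCI addresses of passed-through devices when the destination assigns different ones; this instance has no <hostdev> entries, so the dump above is once more unchanged. A hypothetical sketch of that kind of rewrite follows; the element path comes from libvirt's hostdev schema, and all address values are invented for illustration.

    # Hypothetical sketch only: repoint a passed-through PCI device at a new
    # host address. This instance has no hostdev devices, so the real log
    # shows no change; the values below are invented.
    import xml.etree.ElementTree as ET

    def update_hostdev_address(domain_xml: str, bus: str, slot: str, function: str) -> str:
        root = ET.fromstring(domain_xml)
        for addr in root.findall("./devices/hostdev/source/address"):
            addr.set("bus", bus)
            addr.set("slot", slot)
            addr.set("function", function)
        return ET.tostring(root, encoding="unicode")

    demo = ('<domain><devices><hostdev mode="subsystem" type="pci">'
            '<source><address domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>'
            '</source></hostdev></devices></domain>')
    print(update_hostdev_address(demo, "0x82", "0x10", "0x1"))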
Sep 30 18:43:49 compute-0 nova_compute[265391]: 2025-09-30 18:43:49.746 2 DEBUG nova.virt.libvirt.driver [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] About to invoke the migrate API _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11175
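"About to invoke the migrate API" marks the hand-off: the rewritten XML above becomes the destination XML for libvirt's migration call. A minimal sketch using the libvirt-python bindings is shown below; the connection URIs, the XML payload and the flag combination are assumptions for the example, not what nova actually passes, and only the instance name is taken from the log.

    # Minimal, assumed sketch of a live migration via libvirt-python.
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000020")

    # In the real flow this would be the rewritten XML dumped above; here we
    # simply reuse the current definition as a stand-in.
    updated_xml = dom.XMLDesc()
    params = {libvirt.VIR_MIGRATE_PARAM_DEST_XML: updated_xml}
    flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PEER2PEER

    # Destination URI is a placeholder, not a host from this deployment.
    dom.migrateToURI3("qemu+tcp://dest-host.example/system", params, flags)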
Sep 30 18:43:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:43:49.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:50 compute-0 nova_compute[265391]: 2025-09-30 18:43:50.008 2 DEBUG oslo_concurrency.lockutils [req-aeba9839-22b7-4bb2-8802-94766bc88147 req-3aa8ae14-e602-43f8-836c-7ccef04931ea 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-0bd62f93-1956-4b12-a38a-10deee907b16" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:43:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:43:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:43:50.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:43:50 compute-0 nova_compute[265391]: 2025-09-30 18:43:50.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:50 compute-0 nova_compute[265391]: 2025-09-30 18:43:50.240 2 DEBUG nova.virt.libvirt.migration [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Current None elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Sep 30 18:43:50 compute-0 nova_compute[265391]: 2025-09-30 18:43:50.241 2 INFO nova.virt.libvirt.migration [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Increasing downtime to 50 ms after 0 sec elapsed time
Sep 30 18:43:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1962: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 9.1 KiB/s wr, 1 op/s
Sep 30 18:43:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:43:51 compute-0 nova_compute[265391]: 2025-09-30 18:43:51.257 2 INFO nova.virt.libvirt.driver [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Migration running for 1 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Sep 30 18:43:51 compute-0 nova_compute[265391]: 2025-09-30 18:43:51.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:51 compute-0 ceph-mon[73755]: pgmap v1962: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 9.1 KiB/s wr, 1 op/s
Sep 30 18:43:51 compute-0 nova_compute[265391]: 2025-09-30 18:43:51.761 2 DEBUG nova.virt.libvirt.migration [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Sep 30 18:43:51 compute-0 nova_compute[265391]: 2025-09-30 18:43:51.762 2 DEBUG nova.virt.libvirt.migration [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Downtime does not need to change update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:671
Sep 30 18:43:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:43:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:43:51.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:43:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:43:52.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:52 compute-0 nova_compute[265391]: 2025-09-30 18:43:52.265 2 DEBUG nova.virt.libvirt.migration [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Current 50 elapsed 3 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Sep 30 18:43:52 compute-0 nova_compute[265391]: 2025-09-30 18:43:52.266 2 DEBUG nova.virt.libvirt.migration [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Downtime does not need to change update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:671
Sep 30 18:43:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:43:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:43:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1963: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 9.1 KiB/s wr, 1 op/s
Sep 30 18:43:52 compute-0 kernel: tapa362175f-2d (unregistering): left promiscuous mode
Sep 30 18:43:52 compute-0 NetworkManager[45059]: <info>  [1759257832.5023] device (tapa362175f-2d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 18:43:52 compute-0 nova_compute[265391]: 2025-09-30 18:43:52.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:52 compute-0 ovn_controller[156242]: 2025-09-30T18:43:52Z|00285|binding|INFO|Releasing lport a362175f-2dba-4a9f-bc07-4260625a8ce0 from this chassis (sb_readonly=0)
Sep 30 18:43:52 compute-0 ovn_controller[156242]: 2025-09-30T18:43:52Z|00286|binding|INFO|Setting lport a362175f-2dba-4a9f-bc07-4260625a8ce0 down in Southbound
Sep 30 18:43:52 compute-0 ovn_controller[156242]: 2025-09-30T18:43:52Z|00287|binding|INFO|Removing iface tapa362175f-2d ovn-installed in OVS
Sep 30 18:43:52 compute-0 nova_compute[265391]: 2025-09-30 18:43:52.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:52 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:52.527 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:c0:eb 10.100.0.8'], port_security=['fa:16:3e:52:c0:eb 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '81ab3fff-d6d4-4262-9f24-1b212876e52c'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0bd62f93-1956-4b12-a38a-10deee907b16', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63c45bef63ef4b9f895b3bab865e1a84', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'a9025550-4c18-4f21-a560-5b6f52684803', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55e305e6-0f4d-40bc-a70b-ac91f882ec57, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=a362175f-2dba-4a9f-bc07-4260625a8ce0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:43:52 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:52.529 166158 INFO neutron.agent.ovn.metadata.agent [-] Port a362175f-2dba-4a9f-bc07-4260625a8ce0 in datapath c8484b9b-b34e-4c32-b987-029d8fcb2a28 unbound from our chassis
Sep 30 18:43:52 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:52.533 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c8484b9b-b34e-4c32-b987-029d8fcb2a28
Sep 30 18:43:52 compute-0 nova_compute[265391]: 2025-09-30 18:43:52.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:52 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:52.549 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[c19c9151-17da-488d-b948-244e28e9a82f]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:43:52 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000020.scope: Deactivated successfully.
Sep 30 18:43:52 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000020.scope: Consumed 16.072s CPU time.
Sep 30 18:43:52 compute-0 systemd-machined[219917]: Machine qemu-24-instance-00000020 terminated.
Sep 30 18:43:52 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:52.585 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[4de4a6d1-e9a6-47ee-a992-06d530ffae3f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:43:52 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:52.590 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[db7ed88a-14a0-49d4-8e80-62664357e29c]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:43:52 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:52.613 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[d96e9acd-7588-44f5-a6dc-47b658974545]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:43:52 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:52.633 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[3a5952c3-f23a-457c-ba3c-70c7ded74a3b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc8484b9b-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:48:bc:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636629, 'reachable_time': 26334, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352664, 'error': None, 'target': 'ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:43:52 compute-0 virtqemud[265263]: Unable to get XATTR trusted.libvirt.security.ref_selinux on 0bd62f93-1956-4b12-a38a-10deee907b16_disk: No such file or directory
Sep 30 18:43:52 compute-0 virtqemud[265263]: Unable to get XATTR trusted.libvirt.security.ref_dac on 0bd62f93-1956-4b12-a38a-10deee907b16_disk: No such file or directory
Sep 30 18:43:52 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:52.648 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[1c101fff-16bd-4b90-84df-8e183743f67a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc8484b9b-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 636641, 'tstamp': 636641}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352665, 'error': None, 'target': 'ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc8484b9b-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 636645, 'tstamp': 636645}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352665, 'error': None, 'target': 'ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:43:52 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:52.651 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8484b9b-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:43:52 compute-0 nova_compute[265391]: 2025-09-30 18:43:52.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:52 compute-0 nova_compute[265391]: 2025-09-30 18:43:52.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:52 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:52.657 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc8484b9b-b0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:43:52 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:52.657 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:43:52 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:52.658 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc8484b9b-b0, col_values=(('external_ids', {'iface-id': 'd2e69f29-6b3a-46dc-9ed7-12031e1b7d2b'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:43:52 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:52.658 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:43:52 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:52.661 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[9b93afbd-f352-454a-bcbe-cfbbbc815751]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-c8484b9b-b34e-4c32-b987-029d8fcb2a28\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/c8484b9b-b34e-4c32-b987-029d8fcb2a28.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID c8484b9b-b34e-4c32-b987-029d8fcb2a28\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:43:52 compute-0 nova_compute[265391]: 2025-09-30 18:43:52.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:52 compute-0 nova_compute[265391]: 2025-09-30 18:43:52.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:52 compute-0 nova_compute[265391]: 2025-09-30 18:43:52.677 2 DEBUG nova.virt.libvirt.driver [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Migrate API has completed _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11182
Sep 30 18:43:52 compute-0 nova_compute[265391]: 2025-09-30 18:43:52.677 2 DEBUG nova.virt.libvirt.driver [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Migration operation thread has finished _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11230
Sep 30 18:43:52 compute-0 nova_compute[265391]: 2025-09-30 18:43:52.678 2 DEBUG nova.virt.libvirt.driver [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Migration operation thread notification thread_finished /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11533
Sep 30 18:43:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:43:52 compute-0 nova_compute[265391]: 2025-09-30 18:43:52.768 2 DEBUG nova.virt.libvirt.guest [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid '0bd62f93-1956-4b12-a38a-10deee907b16' (instance-00000020) get_job_info /usr/lib/python3.12/site-packages/nova/virt/libvirt/guest.py:687
Sep 30 18:43:52 compute-0 nova_compute[265391]: 2025-09-30 18:43:52.768 2 INFO nova.virt.libvirt.driver [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Migration operation has completed
Sep 30 18:43:52 compute-0 nova_compute[265391]: 2025-09-30 18:43:52.769 2 INFO nova.compute.manager [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] _post_live_migration() is started..
Sep 30 18:43:52 compute-0 nova_compute[265391]: 2025-09-30 18:43:52.783 2 WARNING neutronclient.v2_0.client [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:43:52 compute-0 nova_compute[265391]: 2025-09-30 18:43:52.783 2 WARNING neutronclient.v2_0.client [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:43:53 compute-0 nova_compute[265391]: 2025-09-30 18:43:53.096 2 DEBUG nova.compute.manager [req-7e1d5ae0-d323-4f73-a394-4625ee720311 req-864a89f3-6dd9-47d9-a0a0-5b069ac60d02 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Received event network-vif-unplugged-a362175f-2dba-4a9f-bc07-4260625a8ce0 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:43:53 compute-0 nova_compute[265391]: 2025-09-30 18:43:53.097 2 DEBUG oslo_concurrency.lockutils [req-7e1d5ae0-d323-4f73-a394-4625ee720311 req-864a89f3-6dd9-47d9-a0a0-5b069ac60d02 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "0bd62f93-1956-4b12-a38a-10deee907b16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:43:53 compute-0 nova_compute[265391]: 2025-09-30 18:43:53.097 2 DEBUG oslo_concurrency.lockutils [req-7e1d5ae0-d323-4f73-a394-4625ee720311 req-864a89f3-6dd9-47d9-a0a0-5b069ac60d02 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "0bd62f93-1956-4b12-a38a-10deee907b16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:43:53 compute-0 nova_compute[265391]: 2025-09-30 18:43:53.097 2 DEBUG oslo_concurrency.lockutils [req-7e1d5ae0-d323-4f73-a394-4625ee720311 req-864a89f3-6dd9-47d9-a0a0-5b069ac60d02 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "0bd62f93-1956-4b12-a38a-10deee907b16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:43:53 compute-0 nova_compute[265391]: 2025-09-30 18:43:53.098 2 DEBUG nova.compute.manager [req-7e1d5ae0-d323-4f73-a394-4625ee720311 req-864a89f3-6dd9-47d9-a0a0-5b069ac60d02 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] No waiting events found dispatching network-vif-unplugged-a362175f-2dba-4a9f-bc07-4260625a8ce0 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:43:53 compute-0 nova_compute[265391]: 2025-09-30 18:43:53.098 2 DEBUG nova.compute.manager [req-7e1d5ae0-d323-4f73-a394-4625ee720311 req-864a89f3-6dd9-47d9-a0a0-5b069ac60d02 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Received event network-vif-unplugged-a362175f-2dba-4a9f-bc07-4260625a8ce0 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:43:53 compute-0 nova_compute[265391]: 2025-09-30 18:43:53.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:43:53 compute-0 nova_compute[265391]: 2025-09-30 18:43:53.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:43:53 compute-0 nova_compute[265391]: 2025-09-30 18:43:53.431 2 DEBUG nova.network.neutron [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Activated binding for port a362175f-2dba-4a9f-bc07-4260625a8ce0 and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.12/site-packages/nova/network/neutron.py:3241
Sep 30 18:43:53 compute-0 nova_compute[265391]: 2025-09-30 18:43:53.431 2 DEBUG nova.compute.manager [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "a362175f-2dba-4a9f-bc07-4260625a8ce0", "address": "fa:16:3e:52:c0:eb", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa362175f-2d", "ovs_interfaceid": "a362175f-2dba-4a9f-bc07-4260625a8ce0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10059
Sep 30 18:43:53 compute-0 nova_compute[265391]: 2025-09-30 18:43:53.432 2 DEBUG nova.virt.libvirt.vif [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:42:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-1791586084',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-1791586084',id=32,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:42:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=1151,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='63c45bef63ef4b9f895b3bab865e1a84',ramdisk_id='',reservation_id='r-1jmywl2f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-134702932',owner_user_name='tempest-TestExecuteWorkloadBalanceStrategy-134702932-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:43:31Z,user_data=None,user_id='5717e8cb8548429b948a23763350ab4a',uuid=0bd62f93-1956-4b12-a38a-10deee907b16,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a362175f-2dba-4a9f-bc07-4260625a8ce0", "address": "fa:16:3e:52:c0:eb", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa362175f-2d", "ovs_interfaceid": "a362175f-2dba-4a9f-bc07-4260625a8ce0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Sep 30 18:43:53 compute-0 nova_compute[265391]: 2025-09-30 18:43:53.432 2 DEBUG nova.network.os_vif_util [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "a362175f-2dba-4a9f-bc07-4260625a8ce0", "address": "fa:16:3e:52:c0:eb", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa362175f-2d", "ovs_interfaceid": "a362175f-2dba-4a9f-bc07-4260625a8ce0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:43:53 compute-0 nova_compute[265391]: 2025-09-30 18:43:53.433 2 DEBUG nova.network.os_vif_util [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:c0:eb,bridge_name='br-int',has_traffic_filtering=True,id=a362175f-2dba-4a9f-bc07-4260625a8ce0,network=Network(c8484b9b-b34e-4c32-b987-029d8fcb2a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa362175f-2d') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:43:53 compute-0 nova_compute[265391]: 2025-09-30 18:43:53.433 2 DEBUG os_vif [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:c0:eb,bridge_name='br-int',has_traffic_filtering=True,id=a362175f-2dba-4a9f-bc07-4260625a8ce0,network=Network(c8484b9b-b34e-4c32-b987-029d8fcb2a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa362175f-2d') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Sep 30 18:43:53 compute-0 nova_compute[265391]: 2025-09-30 18:43:53.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:53 compute-0 nova_compute[265391]: 2025-09-30 18:43:53.436 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa362175f-2d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:43:53 compute-0 nova_compute[265391]: 2025-09-30 18:43:53.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:53 compute-0 nova_compute[265391]: 2025-09-30 18:43:53.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:43:53 compute-0 nova_compute[265391]: 2025-09-30 18:43:53.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:53 compute-0 nova_compute[265391]: 2025-09-30 18:43:53.474 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=cdd527f0-d41f-4c24-9d69-d7c598a66fac) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:43:53 compute-0 nova_compute[265391]: 2025-09-30 18:43:53.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:53 compute-0 nova_compute[265391]: 2025-09-30 18:43:53.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:53 compute-0 nova_compute[265391]: 2025-09-30 18:43:53.479 2 INFO os_vif [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:c0:eb,bridge_name='br-int',has_traffic_filtering=True,id=a362175f-2dba-4a9f-bc07-4260625a8ce0,network=Network(c8484b9b-b34e-4c32-b987-029d8fcb2a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa362175f-2d')
Sep 30 18:43:53 compute-0 nova_compute[265391]: 2025-09-30 18:43:53.480 2 DEBUG oslo_concurrency.lockutils [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:43:53 compute-0 nova_compute[265391]: 2025-09-30 18:43:53.481 2 DEBUG oslo_concurrency.lockutils [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:43:53 compute-0 nova_compute[265391]: 2025-09-30 18:43:53.482 2 DEBUG oslo_concurrency.lockutils [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:43:53 compute-0 nova_compute[265391]: 2025-09-30 18:43:53.482 2 DEBUG nova.compute.manager [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10082
Sep 30 18:43:53 compute-0 nova_compute[265391]: 2025-09-30 18:43:53.483 2 INFO nova.virt.libvirt.driver [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Deleting instance files /var/lib/nova/instances/0bd62f93-1956-4b12-a38a-10deee907b16_del
Sep 30 18:43:53 compute-0 nova_compute[265391]: 2025-09-30 18:43:53.484 2 INFO nova.virt.libvirt.driver [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Deletion of /var/lib/nova/instances/0bd62f93-1956-4b12-a38a-10deee907b16_del complete
Sep 30 18:43:53 compute-0 ceph-mon[73755]: pgmap v1963: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 9.1 KiB/s wr, 1 op/s
Sep 30 18:43:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:43:53.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:43:53.893Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:43:53 compute-0 nova_compute[265391]: 2025-09-30 18:43:53.970 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:43:53 compute-0 nova_compute[265391]: 2025-09-30 18:43:53.971 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:43:53 compute-0 nova_compute[265391]: 2025-09-30 18:43:53.971 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:43:53 compute-0 nova_compute[265391]: 2025-09-30 18:43:53.971 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:43:53 compute-0 nova_compute[265391]: 2025-09-30 18:43:53.971 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:43:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:43:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:43:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:43:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:43:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:43:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:43:54.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:43:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:54.336 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:43:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:54.336 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:43:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:43:54.337 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:43:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1964: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 5.2 KiB/s rd, 9.2 KiB/s wr, 7 op/s
Sep 30 18:43:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:43:54 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/505879848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:43:54 compute-0 nova_compute[265391]: 2025-09-30 18:43:54.428 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:43:54 compute-0 ceph-mon[73755]: pgmap v1964: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 5.2 KiB/s rd, 9.2 KiB/s wr, 7 op/s
Sep 30 18:43:54 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/505879848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:43:55 compute-0 nova_compute[265391]: 2025-09-30 18:43:55.160 2 DEBUG nova.compute.manager [req-4d5b54ea-0e11-4ea1-a0b8-6351b9377924 req-4a54f00b-d77b-4eb8-9a28-c1167a08b984 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Received event network-vif-plugged-a362175f-2dba-4a9f-bc07-4260625a8ce0 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:43:55 compute-0 nova_compute[265391]: 2025-09-30 18:43:55.161 2 DEBUG oslo_concurrency.lockutils [req-4d5b54ea-0e11-4ea1-a0b8-6351b9377924 req-4a54f00b-d77b-4eb8-9a28-c1167a08b984 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "0bd62f93-1956-4b12-a38a-10deee907b16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:43:55 compute-0 nova_compute[265391]: 2025-09-30 18:43:55.162 2 DEBUG oslo_concurrency.lockutils [req-4d5b54ea-0e11-4ea1-a0b8-6351b9377924 req-4a54f00b-d77b-4eb8-9a28-c1167a08b984 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "0bd62f93-1956-4b12-a38a-10deee907b16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:43:55 compute-0 nova_compute[265391]: 2025-09-30 18:43:55.162 2 DEBUG oslo_concurrency.lockutils [req-4d5b54ea-0e11-4ea1-a0b8-6351b9377924 req-4a54f00b-d77b-4eb8-9a28-c1167a08b984 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "0bd62f93-1956-4b12-a38a-10deee907b16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:43:55 compute-0 nova_compute[265391]: 2025-09-30 18:43:55.163 2 DEBUG nova.compute.manager [req-4d5b54ea-0e11-4ea1-a0b8-6351b9377924 req-4a54f00b-d77b-4eb8-9a28-c1167a08b984 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] No waiting events found dispatching network-vif-plugged-a362175f-2dba-4a9f-bc07-4260625a8ce0 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:43:55 compute-0 nova_compute[265391]: 2025-09-30 18:43:55.163 2 WARNING nova.compute.manager [req-4d5b54ea-0e11-4ea1-a0b8-6351b9377924 req-4a54f00b-d77b-4eb8-9a28-c1167a08b984 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Received unexpected event network-vif-plugged-a362175f-2dba-4a9f-bc07-4260625a8ce0 for instance with vm_state active and task_state migrating.
Sep 30 18:43:55 compute-0 nova_compute[265391]: 2025-09-30 18:43:55.163 2 DEBUG nova.compute.manager [req-4d5b54ea-0e11-4ea1-a0b8-6351b9377924 req-4a54f00b-d77b-4eb8-9a28-c1167a08b984 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Received event network-vif-unplugged-a362175f-2dba-4a9f-bc07-4260625a8ce0 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:43:55 compute-0 nova_compute[265391]: 2025-09-30 18:43:55.164 2 DEBUG oslo_concurrency.lockutils [req-4d5b54ea-0e11-4ea1-a0b8-6351b9377924 req-4a54f00b-d77b-4eb8-9a28-c1167a08b984 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "0bd62f93-1956-4b12-a38a-10deee907b16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:43:55 compute-0 nova_compute[265391]: 2025-09-30 18:43:55.164 2 DEBUG oslo_concurrency.lockutils [req-4d5b54ea-0e11-4ea1-a0b8-6351b9377924 req-4a54f00b-d77b-4eb8-9a28-c1167a08b984 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "0bd62f93-1956-4b12-a38a-10deee907b16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:43:55 compute-0 nova_compute[265391]: 2025-09-30 18:43:55.165 2 DEBUG oslo_concurrency.lockutils [req-4d5b54ea-0e11-4ea1-a0b8-6351b9377924 req-4a54f00b-d77b-4eb8-9a28-c1167a08b984 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "0bd62f93-1956-4b12-a38a-10deee907b16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:43:55 compute-0 nova_compute[265391]: 2025-09-30 18:43:55.165 2 DEBUG nova.compute.manager [req-4d5b54ea-0e11-4ea1-a0b8-6351b9377924 req-4a54f00b-d77b-4eb8-9a28-c1167a08b984 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] No waiting events found dispatching network-vif-unplugged-a362175f-2dba-4a9f-bc07-4260625a8ce0 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:43:55 compute-0 nova_compute[265391]: 2025-09-30 18:43:55.165 2 DEBUG nova.compute.manager [req-4d5b54ea-0e11-4ea1-a0b8-6351b9377924 req-4a54f00b-d77b-4eb8-9a28-c1167a08b984 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Received event network-vif-unplugged-a362175f-2dba-4a9f-bc07-4260625a8ce0 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:43:55 compute-0 nova_compute[265391]: 2025-09-30 18:43:55.166 2 DEBUG nova.compute.manager [req-4d5b54ea-0e11-4ea1-a0b8-6351b9377924 req-4a54f00b-d77b-4eb8-9a28-c1167a08b984 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Received event network-vif-plugged-a362175f-2dba-4a9f-bc07-4260625a8ce0 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:43:55 compute-0 nova_compute[265391]: 2025-09-30 18:43:55.166 2 DEBUG oslo_concurrency.lockutils [req-4d5b54ea-0e11-4ea1-a0b8-6351b9377924 req-4a54f00b-d77b-4eb8-9a28-c1167a08b984 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "0bd62f93-1956-4b12-a38a-10deee907b16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:43:55 compute-0 nova_compute[265391]: 2025-09-30 18:43:55.167 2 DEBUG oslo_concurrency.lockutils [req-4d5b54ea-0e11-4ea1-a0b8-6351b9377924 req-4a54f00b-d77b-4eb8-9a28-c1167a08b984 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "0bd62f93-1956-4b12-a38a-10deee907b16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:43:55 compute-0 nova_compute[265391]: 2025-09-30 18:43:55.167 2 DEBUG oslo_concurrency.lockutils [req-4d5b54ea-0e11-4ea1-a0b8-6351b9377924 req-4a54f00b-d77b-4eb8-9a28-c1167a08b984 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "0bd62f93-1956-4b12-a38a-10deee907b16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:43:55 compute-0 nova_compute[265391]: 2025-09-30 18:43:55.167 2 DEBUG nova.compute.manager [req-4d5b54ea-0e11-4ea1-a0b8-6351b9377924 req-4a54f00b-d77b-4eb8-9a28-c1167a08b984 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] No waiting events found dispatching network-vif-plugged-a362175f-2dba-4a9f-bc07-4260625a8ce0 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:43:55 compute-0 nova_compute[265391]: 2025-09-30 18:43:55.168 2 WARNING nova.compute.manager [req-4d5b54ea-0e11-4ea1-a0b8-6351b9377924 req-4a54f00b-d77b-4eb8-9a28-c1167a08b984 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Received unexpected event network-vif-plugged-a362175f-2dba-4a9f-bc07-4260625a8ce0 for instance with vm_state active and task_state migrating.
Sep 30 18:43:55 compute-0 nova_compute[265391]: 2025-09-30 18:43:55.474 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:43:55 compute-0 nova_compute[265391]: 2025-09-30 18:43:55.475 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:43:55 compute-0 nova_compute[265391]: 2025-09-30 18:43:55.649 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:43:55 compute-0 nova_compute[265391]: 2025-09-30 18:43:55.650 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:43:55 compute-0 nova_compute[265391]: 2025-09-30 18:43:55.675 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.025s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:43:55 compute-0 nova_compute[265391]: 2025-09-30 18:43:55.676 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4055MB free_disk=39.901153564453125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:43:55 compute-0 nova_compute[265391]: 2025-09-30 18:43:55.676 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:43:55 compute-0 nova_compute[265391]: 2025-09-30 18:43:55.677 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:43:55 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3433659692' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:43:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:43:55.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:43:56.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:43:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1965: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 1.2 KiB/s wr, 6 op/s
Sep 30 18:43:56 compute-0 nova_compute[265391]: 2025-09-30 18:43:56.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:56 compute-0 nova_compute[265391]: 2025-09-30 18:43:56.711 2 INFO nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Updating resource usage from migration 5921f771-62ce-4f71-be1d-1e67d936f2cc
Sep 30 18:43:56 compute-0 nova_compute[265391]: 2025-09-30 18:43:56.740 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Instance 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 1151, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Sep 30 18:43:56 compute-0 nova_compute[265391]: 2025-09-30 18:43:56.740 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Migration 5921f771-62ce-4f71-be1d-1e67d936f2cc is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 1151, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Sep 30 18:43:56 compute-0 nova_compute[265391]: 2025-09-30 18:43:56.741 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:43:56 compute-0 nova_compute[265391]: 2025-09-30 18:43:56.741 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2814MB phys_disk=39GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:43:55 up  1:47,  0 user,  load average: 1.02, 0.94, 0.89\n', 'num_instances': '2', 'num_vm_active': '2', 'num_task_migrating': '1', 'num_os_type_None': '2', 'num_proj_63c45bef63ef4b9f895b3bab865e1a84': '2', 'io_workload': '0', 'num_task_None': '1'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:43:56 compute-0 ceph-mon[73755]: pgmap v1965: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 1.2 KiB/s wr, 6 op/s
Sep 30 18:43:56 compute-0 nova_compute[265391]: 2025-09-30 18:43:56.784 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:43:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:43:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/323189244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:43:57 compute-0 nova_compute[265391]: 2025-09-30 18:43:57.195 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:43:57 compute-0 nova_compute[265391]: 2025-09-30 18:43:57.202 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:43:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:43:57.366Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:43:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:43:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3052451571' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:43:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:43:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3052451571' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:43:57 compute-0 nova_compute[265391]: 2025-09-30 18:43:57.710 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:43:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/323189244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:43:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3052451571' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:43:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3052451571' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:43:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:43:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:43:57.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:43:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:43:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:43:58.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:43:58 compute-0 nova_compute[265391]: 2025-09-30 18:43:58.220 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:43:58 compute-0 nova_compute[265391]: 2025-09-30 18:43:58.221 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.544s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:43:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1966: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 5.2 KiB/s rd, 3.5 KiB/s wr, 6 op/s
Sep 30 18:43:58 compute-0 nova_compute[265391]: 2025-09-30 18:43:58.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:43:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:43:58] "GET /metrics HTTP/1.1" 200 46722 "" "Prometheus/2.51.0"
Sep 30 18:43:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:43:58] "GET /metrics HTTP/1.1" 200 46722 "" "Prometheus/2.51.0"
Sep 30 18:43:58 compute-0 ceph-mon[73755]: pgmap v1966: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 5.2 KiB/s rd, 3.5 KiB/s wr, 6 op/s
Sep 30 18:43:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:43:58.907Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:43:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:43:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:43:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:43:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:43:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:43:59 compute-0 nova_compute[265391]: 2025-09-30 18:43:59.221 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:43:59 compute-0 nova_compute[265391]: 2025-09-30 18:43:59.221 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:43:59 compute-0 nova_compute[265391]: 2025-09-30 18:43:59.221 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:43:59 compute-0 nova_compute[265391]: 2025-09-30 18:43:59.221 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:43:59 compute-0 nova_compute[265391]: 2025-09-30 18:43:59.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:43:59 compute-0 podman[276673]: time="2025-09-30T18:43:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:43:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:43:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42600 "" "Go-http-client/1.1"
Sep 30 18:43:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:43:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10761 "" "Go-http-client/1.1"
Sep 30 18:43:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:43:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:43:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:43:59.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:43:59 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/285807711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:44:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:44:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:44:00.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:44:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1967: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 2.5 KiB/s wr, 6 op/s
Sep 30 18:44:00 compute-0 sudo[352732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:44:00 compute-0 sudo[352732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:44:00 compute-0 sudo[352732]: pam_unix(sudo:session): session closed for user root
Sep 30 18:44:00 compute-0 ceph-mon[73755]: pgmap v1967: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 2.5 KiB/s wr, 6 op/s
Sep 30 18:44:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:44:01 compute-0 openstack_network_exporter[279566]: ERROR   18:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:44:01 compute-0 openstack_network_exporter[279566]: ERROR   18:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:44:01 compute-0 openstack_network_exporter[279566]: ERROR   18:44:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:44:01 compute-0 openstack_network_exporter[279566]: ERROR   18:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:44:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:44:01 compute-0 openstack_network_exporter[279566]: ERROR   18:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:44:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:44:01 compute-0 nova_compute[265391]: 2025-09-30 18:44:01.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:44:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:44:01.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:44:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:44:02.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:44:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1968: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 2.5 KiB/s wr, 6 op/s
Sep 30 18:44:03 compute-0 nova_compute[265391]: 2025-09-30 18:44:03.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:44:03 compute-0 ceph-mon[73755]: pgmap v1968: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 2.5 KiB/s wr, 6 op/s
Sep 30 18:44:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:44:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:44:03.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:44:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:44:03.894Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:44:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:44:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:44:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:44:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:44:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:44:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:44:04.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:44:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1969: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 5.2 KiB/s rd, 2.5 KiB/s wr, 6 op/s
Sep 30 18:44:04 compute-0 nova_compute[265391]: 2025-09-30 18:44:04.423 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:44:05 compute-0 nova_compute[265391]: 2025-09-30 18:44:05.019 2 DEBUG oslo_concurrency.lockutils [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "0bd62f93-1956-4b12-a38a-10deee907b16-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:44:05 compute-0 nova_compute[265391]: 2025-09-30 18:44:05.020 2 DEBUG oslo_concurrency.lockutils [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "0bd62f93-1956-4b12-a38a-10deee907b16-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:44:05 compute-0 nova_compute[265391]: 2025-09-30 18:44:05.020 2 DEBUG oslo_concurrency.lockutils [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "0bd62f93-1956-4b12-a38a-10deee907b16-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:44:05 compute-0 ceph-mon[73755]: pgmap v1969: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 5.2 KiB/s rd, 2.5 KiB/s wr, 6 op/s
Sep 30 18:44:05 compute-0 nova_compute[265391]: 2025-09-30 18:44:05.530 2 DEBUG oslo_concurrency.lockutils [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:44:05 compute-0 nova_compute[265391]: 2025-09-30 18:44:05.530 2 DEBUG oslo_concurrency.lockutils [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:44:05 compute-0 nova_compute[265391]: 2025-09-30 18:44:05.531 2 DEBUG oslo_concurrency.lockutils [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:44:05 compute-0 nova_compute[265391]: 2025-09-30 18:44:05.531 2 DEBUG nova.compute.resource_tracker [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:44:05 compute-0 nova_compute[265391]: 2025-09-30 18:44:05.531 2 DEBUG oslo_concurrency.processutils [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:44:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:44:05.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:44:05 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3120071903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:44:05 compute-0 nova_compute[265391]: 2025-09-30 18:44:05.980 2 DEBUG oslo_concurrency.processutils [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:44:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:44:06.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:44:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1970: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 2.3 KiB/s wr, 0 op/s
Sep 30 18:44:06 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3120071903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:44:06 compute-0 podman[352785]: 2025-09-30 18:44:06.541478048 +0000 UTC m=+0.071902611 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20250930, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Sep 30 18:44:06 compute-0 podman[352787]: 2025-09-30 18:44:06.560384052 +0000 UTC m=+0.083356454 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Sep 30 18:44:06 compute-0 podman[352786]: 2025-09-30 18:44:06.561142411 +0000 UTC m=+0.090139807 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Sep 30 18:44:06 compute-0 nova_compute[265391]: 2025-09-30 18:44:06.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:44:07 compute-0 nova_compute[265391]: 2025-09-30 18:44:07.015 2 DEBUG nova.virt.libvirt.driver [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:44:07 compute-0 nova_compute[265391]: 2025-09-30 18:44:07.015 2 DEBUG nova.virt.libvirt.driver [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:44:07 compute-0 nova_compute[265391]: 2025-09-30 18:44:07.082 2 DEBUG oslo_concurrency.lockutils [None req-4e09328d-473c-4207-8cbd-7f6d99ef093c 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Acquiring lock "1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:44:07 compute-0 nova_compute[265391]: 2025-09-30 18:44:07.083 2 DEBUG oslo_concurrency.lockutils [None req-4e09328d-473c-4207-8cbd-7f6d99ef093c 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:44:07 compute-0 nova_compute[265391]: 2025-09-30 18:44:07.083 2 DEBUG oslo_concurrency.lockutils [None req-4e09328d-473c-4207-8cbd-7f6d99ef093c 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Acquiring lock "1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:44:07 compute-0 nova_compute[265391]: 2025-09-30 18:44:07.084 2 DEBUG oslo_concurrency.lockutils [None req-4e09328d-473c-4207-8cbd-7f6d99ef093c 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:44:07 compute-0 nova_compute[265391]: 2025-09-30 18:44:07.084 2 DEBUG oslo_concurrency.lockutils [None req-4e09328d-473c-4207-8cbd-7f6d99ef093c 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:44:07 compute-0 nova_compute[265391]: 2025-09-30 18:44:07.101 2 INFO nova.compute.manager [None req-4e09328d-473c-4207-8cbd-7f6d99ef093c 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Terminating instance
Sep 30 18:44:07 compute-0 nova_compute[265391]: 2025-09-30 18:44:07.201 2 WARNING nova.virt.libvirt.driver [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:44:07 compute-0 nova_compute[265391]: 2025-09-30 18:44:07.202 2 DEBUG oslo_concurrency.processutils [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:44:07 compute-0 sudo[352857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:44:07 compute-0 nova_compute[265391]: 2025-09-30 18:44:07.224 2 DEBUG oslo_concurrency.processutils [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.022s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:44:07 compute-0 sudo[352857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:44:07 compute-0 nova_compute[265391]: 2025-09-30 18:44:07.225 2 DEBUG nova.compute.resource_tracker [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4045MB free_disk=39.90114974975586GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:44:07 compute-0 nova_compute[265391]: 2025-09-30 18:44:07.226 2 DEBUG oslo_concurrency.lockutils [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:44:07 compute-0 nova_compute[265391]: 2025-09-30 18:44:07.226 2 DEBUG oslo_concurrency.lockutils [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:44:07 compute-0 sudo[352857]: pam_unix(sudo:session): session closed for user root
Sep 30 18:44:07 compute-0 sudo[352883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Sep 30 18:44:07 compute-0 sudo[352883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:44:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:44:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:44:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:44:07.367Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:44:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:44:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:44:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:44:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:44:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:44:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:44:07 compute-0 ceph-mon[73755]: pgmap v1970: 353 pgs: 353 active+clean; 200 MiB data, 446 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 2.3 KiB/s wr, 0 op/s
Sep 30 18:44:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:44:07 compute-0 nova_compute[265391]: 2025-09-30 18:44:07.617 2 DEBUG nova.compute.manager [None req-4e09328d-473c-4207-8cbd-7f6d99ef093c 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Sep 30 18:44:07 compute-0 kernel: tapbb1a6fd9-a9 (unregistering): left promiscuous mode
Sep 30 18:44:07 compute-0 NetworkManager[45059]: <info>  [1759257847.6767] device (tapbb1a6fd9-a9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 18:44:07 compute-0 ovn_controller[156242]: 2025-09-30T18:44:07Z|00288|binding|INFO|Releasing lport bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e from this chassis (sb_readonly=0)
Sep 30 18:44:07 compute-0 ovn_controller[156242]: 2025-09-30T18:44:07Z|00289|binding|INFO|Setting lport bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e down in Southbound
Sep 30 18:44:07 compute-0 ovn_controller[156242]: 2025-09-30T18:44:07Z|00290|binding|INFO|Removing iface tapbb1a6fd9-a9 ovn-installed in OVS
Sep 30 18:44:07 compute-0 nova_compute[265391]: 2025-09-30 18:44:07.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:44:07 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:44:07.699 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:0d:ac 10.100.0.6'], port_security=['fa:16:3e:b9:0d:ac 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63c45bef63ef4b9f895b3bab865e1a84', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'a9025550-4c18-4f21-a560-5b6f52684803', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55e305e6-0f4d-40bc-a70b-ac91f882ec57, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:44:07 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:44:07.701 166158 INFO neutron.agent.ovn.metadata.agent [-] Port bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e in datapath c8484b9b-b34e-4c32-b987-029d8fcb2a28 unbound from our chassis
Sep 30 18:44:07 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:44:07.703 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c8484b9b-b34e-4c32-b987-029d8fcb2a28, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:44:07 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:44:07.704 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[7084e97b-f5e1-42b7-9d92-45e097e4dd26]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:44:07 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:44:07.704 166158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28 namespace which is not needed anymore
Sep 30 18:44:07 compute-0 nova_compute[265391]: 2025-09-30 18:44:07.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:44:07 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000021.scope: Deactivated successfully.
Sep 30 18:44:07 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000021.scope: Consumed 14.707s CPU time.
Sep 30 18:44:07 compute-0 systemd-machined[219917]: Machine qemu-25-instance-00000021 terminated.
Sep 30 18:44:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:44:07.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:07 compute-0 nova_compute[265391]: 2025-09-30 18:44:07.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:44:07 compute-0 neutron-haproxy-ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28[351125]: [NOTICE]   (351129) : haproxy version is 3.0.5-8e879a5
Sep 30 18:44:07 compute-0 neutron-haproxy-ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28[351125]: [NOTICE]   (351129) : path to executable is /usr/sbin/haproxy
Sep 30 18:44:07 compute-0 neutron-haproxy-ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28[351125]: [WARNING]  (351129) : Exiting Master process...
Sep 30 18:44:07 compute-0 nova_compute[265391]: 2025-09-30 18:44:07.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:44:07 compute-0 podman[353002]: 2025-09-30 18:44:07.851739118 +0000 UTC m=+0.049080707 container kill c03284355065eecc0d4ef5163c9408261cf5cc7836df398abd949f44ba4867e0 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Sep 30 18:44:07 compute-0 neutron-haproxy-ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28[351125]: [ALERT]    (351129) : Current worker (351131) exited with code 143 (Terminated)
Sep 30 18:44:07 compute-0 neutron-haproxy-ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28[351125]: [WARNING]  (351129) : All workers exited. Exiting... (0)
Sep 30 18:44:07 compute-0 systemd[1]: libpod-c03284355065eecc0d4ef5163c9408261cf5cc7836df398abd949f44ba4867e0.scope: Deactivated successfully.
Sep 30 18:44:07 compute-0 nova_compute[265391]: 2025-09-30 18:44:07.865 2 INFO nova.virt.libvirt.driver [-] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Instance destroyed successfully.
Sep 30 18:44:07 compute-0 nova_compute[265391]: 2025-09-30 18:44:07.866 2 DEBUG nova.objects.instance [None req-4e09328d-473c-4207-8cbd-7f6d99ef093c 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lazy-loading 'resources' on Instance uuid 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:44:07 compute-0 podman[352999]: 2025-09-30 18:44:07.878421961 +0000 UTC m=+0.081008124 container exec 28cb2903608cec8e7c5a39aa3f398cadea7d807beef2cb8cca333795d18a1341 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Sep 30 18:44:07 compute-0 nova_compute[265391]: 2025-09-30 18:44:07.892 2 DEBUG nova.compute.manager [req-928b966f-0c67-4064-97d2-2a3e295f14bc req-18fc857a-4eac-4479-ab44-51e6c599e552 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Received event network-vif-unplugged-bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:44:07 compute-0 nova_compute[265391]: 2025-09-30 18:44:07.893 2 DEBUG oslo_concurrency.lockutils [req-928b966f-0c67-4064-97d2-2a3e295f14bc req-18fc857a-4eac-4479-ab44-51e6c599e552 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:44:07 compute-0 nova_compute[265391]: 2025-09-30 18:44:07.893 2 DEBUG oslo_concurrency.lockutils [req-928b966f-0c67-4064-97d2-2a3e295f14bc req-18fc857a-4eac-4479-ab44-51e6c599e552 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:44:07 compute-0 nova_compute[265391]: 2025-09-30 18:44:07.893 2 DEBUG oslo_concurrency.lockutils [req-928b966f-0c67-4064-97d2-2a3e295f14bc req-18fc857a-4eac-4479-ab44-51e6c599e552 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:44:07 compute-0 nova_compute[265391]: 2025-09-30 18:44:07.894 2 DEBUG nova.compute.manager [req-928b966f-0c67-4064-97d2-2a3e295f14bc req-18fc857a-4eac-4479-ab44-51e6c599e552 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] No waiting events found dispatching network-vif-unplugged-bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:44:07 compute-0 nova_compute[265391]: 2025-09-30 18:44:07.894 2 DEBUG nova.compute.manager [req-928b966f-0c67-4064-97d2-2a3e295f14bc req-18fc857a-4eac-4479-ab44-51e6c599e552 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Received event network-vif-unplugged-bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:44:07 compute-0 podman[353042]: 2025-09-30 18:44:07.914607947 +0000 UTC m=+0.030646475 container died c03284355065eecc0d4ef5163c9408261cf5cc7836df398abd949f44ba4867e0 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20250930, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 18:44:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-bdd425e4ebe9c3c9f621bb2c2cdd8421e3a7e4466b2ab8604de587c535df6051-merged.mount: Deactivated successfully.
Sep 30 18:44:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c03284355065eecc0d4ef5163c9408261cf5cc7836df398abd949f44ba4867e0-userdata-shm.mount: Deactivated successfully.
Sep 30 18:44:07 compute-0 podman[353042]: 2025-09-30 18:44:07.955677938 +0000 UTC m=+0.071716446 container cleanup c03284355065eecc0d4ef5163c9408261cf5cc7836df398abd949f44ba4867e0 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true)
Sep 30 18:44:07 compute-0 systemd[1]: libpod-conmon-c03284355065eecc0d4ef5163c9408261cf5cc7836df398abd949f44ba4867e0.scope: Deactivated successfully.
Sep 30 18:44:07 compute-0 podman[352999]: 2025-09-30 18:44:07.965357266 +0000 UTC m=+0.167943409 container exec_died 28cb2903608cec8e7c5a39aa3f398cadea7d807beef2cb8cca333795d18a1341 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:44:07 compute-0 podman[353043]: 2025-09-30 18:44:07.977318502 +0000 UTC m=+0.089652855 container remove c03284355065eecc0d4ef5163c9408261cf5cc7836df398abd949f44ba4867e0 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:44:07 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:44:07.988 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[ad41503f-0b58-4fef-9878-8a70483627d4]: (4, ("Tue Sep 30 06:44:07 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28 (c03284355065eecc0d4ef5163c9408261cf5cc7836df398abd949f44ba4867e0)\nc03284355065eecc0d4ef5163c9408261cf5cc7836df398abd949f44ba4867e0\nTue Sep 30 06:44:07 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28 (c03284355065eecc0d4ef5163c9408261cf5cc7836df398abd949f44ba4867e0)\nc03284355065eecc0d4ef5163c9408261cf5cc7836df398abd949f44ba4867e0\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:44:07 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:44:07.992 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[72449379-b239-40dc-af26-42fb813f4828]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:44:07 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:44:07.992 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c8484b9b-b34e-4c32-b987-029d8fcb2a28.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c8484b9b-b34e-4c32-b987-029d8fcb2a28.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:44:07 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:44:07.993 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[212411a0-8204-483e-aeba-8c5691ee20b3]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:44:07 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:44:07.993 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8484b9b-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:44:07 compute-0 nova_compute[265391]: 2025-09-30 18:44:07.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:44:07 compute-0 kernel: tapc8484b9b-b0: left promiscuous mode
Sep 30 18:44:08 compute-0 nova_compute[265391]: 2025-09-30 18:44:08.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:44:08 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:44:08.013 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[d65f48ad-f3c1-471a-a874-b045ea289bf9]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:44:08 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:44:08.014 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:44:08 compute-0 nova_compute[265391]: 2025-09-30 18:44:08.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:44:08 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:44:08.035 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[e8784fb0-2ca2-464c-9dff-2d2905c8458e]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:44:08 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:44:08.037 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[f5fc5896-b523-407a-9bd2-19906dade171]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:44:08 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:44:08.058 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[f643b42d-badc-4f41-9794-23e527dcec9e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636621, 'reachable_time': 23363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353093, 'error': None, 'target': 'ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:44:08 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:44:08.061 166383 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c8484b9b-b34e-4c32-b987-029d8fcb2a28 deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Sep 30 18:44:08 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:44:08.061 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[5844f0d6-138a-4ed8-903e-bf4a8cc916c0]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:44:08 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:44:08.062 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:44:08 compute-0 systemd[1]: run-netns-ovnmeta\x2dc8484b9b\x2db34e\x2d4c32\x2db987\x2d029d8fcb2a28.mount: Deactivated successfully.
Sep 30 18:44:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:44:08.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002276388363205704 of space, bias 1.0, pg target 0.45527767264114083 quantized to 32 (current 32)
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:44:08
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', '.nfs', '.mgr', 'default.rgw.control', 'backups', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'images', 'default.rgw.log']
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:44:08 compute-0 nova_compute[265391]: 2025-09-30 18:44:08.247 2 DEBUG nova.compute.resource_tracker [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Migration for instance 0bd62f93-1956-4b12-a38a-10deee907b16 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Sep 30 18:44:08 compute-0 nova_compute[265391]: 2025-09-30 18:44:08.372 2 DEBUG nova.virt.libvirt.vif [None req-4e09328d-473c-4207-8cbd-7f6d99ef093c 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:42:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-1513928321',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-1513928321',id=33,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:43:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=1151,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='63c45bef63ef4b9f895b3bab865e1a84',ramdisk_id='',reservation_id='r-rr510t3s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,manager,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-134702932',owner_user_name='tempest-TestExecuteWorkloadBalanceStrategy-134702932-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:43:15Z,user_data=None,user_id='5717e8cb8548429b948a23763350ab4a',uuid=1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e", "address": "fa:16:3e:b9:0d:ac", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb1a6fd9-a9", "ovs_interfaceid": "bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug 
/usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Sep 30 18:44:08 compute-0 nova_compute[265391]: 2025-09-30 18:44:08.372 2 DEBUG nova.network.os_vif_util [None req-4e09328d-473c-4207-8cbd-7f6d99ef093c 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Converting VIF {"id": "bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e", "address": "fa:16:3e:b9:0d:ac", "network": {"id": "c8484b9b-b34e-4c32-b987-029d8fcb2a28", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-862913257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9029a2856de43388bcee1a38d165449", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb1a6fd9-a9", "ovs_interfaceid": "bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:44:08 compute-0 nova_compute[265391]: 2025-09-30 18:44:08.373 2 DEBUG nova.network.os_vif_util [None req-4e09328d-473c-4207-8cbd-7f6d99ef093c 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:0d:ac,bridge_name='br-int',has_traffic_filtering=True,id=bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e,network=Network(c8484b9b-b34e-4c32-b987-029d8fcb2a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb1a6fd9-a9') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:44:08 compute-0 nova_compute[265391]: 2025-09-30 18:44:08.373 2 DEBUG os_vif [None req-4e09328d-473c-4207-8cbd-7f6d99ef093c 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:0d:ac,bridge_name='br-int',has_traffic_filtering=True,id=bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e,network=Network(c8484b9b-b34e-4c32-b987-029d8fcb2a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb1a6fd9-a9') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Sep 30 18:44:08 compute-0 nova_compute[265391]: 2025-09-30 18:44:08.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:44:08 compute-0 nova_compute[265391]: 2025-09-30 18:44:08.376 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbb1a6fd9-a9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:44:08 compute-0 nova_compute[265391]: 2025-09-30 18:44:08.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:44:08 compute-0 nova_compute[265391]: 2025-09-30 18:44:08.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:44:08 compute-0 nova_compute[265391]: 2025-09-30 18:44:08.381 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=f6066c6d-f1e9-414a-81c0-4146af41e336) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:44:08 compute-0 nova_compute[265391]: 2025-09-30 18:44:08.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:44:08 compute-0 nova_compute[265391]: 2025-09-30 18:44:08.386 2 INFO os_vif [None req-4e09328d-473c-4207-8cbd-7f6d99ef093c 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:0d:ac,bridge_name='br-int',has_traffic_filtering=True,id=bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e,network=Network(c8484b9b-b34e-4c32-b987-029d8fcb2a28),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb1a6fd9-a9')
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1971: 353 pgs: 353 active+clean; 200 MiB data, 450 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 2.3 KiB/s wr, 1 op/s
Sep 30 18:44:08 compute-0 podman[353175]: 2025-09-30 18:44:08.463305129 +0000 UTC m=+0.114402349 container exec 0a64c261699b14850432928cca94e24743e97435fdc1d2cca08edcee1f870789 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:44:08 compute-0 podman[353218]: 2025-09-30 18:44:08.534056209 +0000 UTC m=+0.051833227 container exec_died 0a64c261699b14850432928cca94e24743e97435fdc1d2cca08edcee1f870789 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:44:08 compute-0 podman[353175]: 2025-09-30 18:44:08.567219108 +0000 UTC m=+0.218316348 container exec_died 0a64c261699b14850432928cca94e24743e97435fdc1d2cca08edcee1f870789 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:44:08 compute-0 nova_compute[265391]: 2025-09-30 18:44:08.753 2 DEBUG nova.compute.resource_tracker [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1596
Sep 30 18:44:08 compute-0 nova_compute[265391]: 2025-09-30 18:44:08.780 2 DEBUG nova.compute.resource_tracker [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Instance 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 1151, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Sep 30 18:44:08 compute-0 nova_compute[265391]: 2025-09-30 18:44:08.780 2 DEBUG nova.compute.resource_tracker [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Migration 5921f771-62ce-4f71-be1d-1e67d936f2cc is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 1151, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Sep 30 18:44:08 compute-0 nova_compute[265391]: 2025-09-30 18:44:08.780 2 DEBUG nova.compute.resource_tracker [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:44:08 compute-0 nova_compute[265391]: 2025-09-30 18:44:08.781 2 DEBUG nova.compute.resource_tracker [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1663MB phys_disk=39GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:44:07 up  1:47,  0 user,  load average: 0.86, 0.91, 0.88\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_deleting': '1', 'num_os_type_None': '1', 'num_proj_63c45bef63ef4b9f895b3bab865e1a84': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:44:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:44:08] "GET /metrics HTTP/1.1" 200 46720 "" "Prometheus/2.51.0"
Sep 30 18:44:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:44:08] "GET /metrics HTTP/1.1" 200 46720 "" "Prometheus/2.51.0"
Sep 30 18:44:08 compute-0 nova_compute[265391]: 2025-09-30 18:44:08.834 2 DEBUG oslo_concurrency.processutils [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:44:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:44:08.909Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:44:08 compute-0 podman[353286]: 2025-09-30 18:44:08.924781048 +0000 UTC m=+0.061481934 container exec 96c1a4d1476c3fe56b2b4855037bb3aa81f60f8974668b12bc71055b46c71430 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Sep 30 18:44:08 compute-0 podman[353286]: 2025-09-30 18:44:08.941796894 +0000 UTC m=+0.078497780 container exec_died 96c1a4d1476c3fe56b2b4855037bb3aa81f60f8974668b12bc71055b46c71430 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:44:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:44:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:44:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:44:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:44:09 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:44:09.063 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:44:09 compute-0 nova_compute[265391]: 2025-09-30 18:44:09.067 2 INFO nova.virt.libvirt.driver [None req-4e09328d-473c-4207-8cbd-7f6d99ef093c 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Deleting instance files /var/lib/nova/instances/1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4_del
Sep 30 18:44:09 compute-0 nova_compute[265391]: 2025-09-30 18:44:09.068 2 INFO nova.virt.libvirt.driver [None req-4e09328d-473c-4207-8cbd-7f6d99ef093c 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Deletion of /var/lib/nova/instances/1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4_del complete
Sep 30 18:44:09 compute-0 podman[353373]: 2025-09-30 18:44:09.15423716 +0000 UTC m=+0.048101822 container exec e1cdc51276e16f535030605d2a18f629e9fe31bf73cf0a7411e18214526b68e5 (image=quay.io/ceph/haproxy:2.3, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha)
Sep 30 18:44:09 compute-0 podman[353373]: 2025-09-30 18:44:09.163744184 +0000 UTC m=+0.057608836 container exec_died e1cdc51276e16f535030605d2a18f629e9fe31bf73cf0a7411e18214526b68e5 (image=quay.io/ceph/haproxy:2.3, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha)
Sep 30 18:44:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:44:09 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3944740418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:44:09 compute-0 nova_compute[265391]: 2025-09-30 18:44:09.289 2 DEBUG oslo_concurrency.processutils [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:44:09 compute-0 nova_compute[265391]: 2025-09-30 18:44:09.294 2 DEBUG nova.compute.provider_tree [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:44:09 compute-0 podman[353442]: 2025-09-30 18:44:09.34530552 +0000 UTC m=+0.045906626 container exec b5b34e9f9a1e5f8a3081bfbcbc3f8d4401b0873cf75447793762597a4ea89ff4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, description=keepalived for Ceph, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., release=1793, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9)
Sep 30 18:44:09 compute-0 podman[353442]: 2025-09-30 18:44:09.366574904 +0000 UTC m=+0.067175990 container exec_died b5b34e9f9a1e5f8a3081bfbcbc3f8d4401b0873cf75447793762597a4ea89ff4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, com.redhat.component=keepalived-container, release=1793)
Sep 30 18:44:09 compute-0 ceph-mon[73755]: pgmap v1971: 353 pgs: 353 active+clean; 200 MiB data, 450 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 2.3 KiB/s wr, 1 op/s
Sep 30 18:44:09 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3944740418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:44:09 compute-0 nova_compute[265391]: 2025-09-30 18:44:09.581 2 INFO nova.compute.manager [None req-4e09328d-473c-4207-8cbd-7f6d99ef093c 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Took 1.96 seconds to destroy the instance on the hypervisor.
Sep 30 18:44:09 compute-0 nova_compute[265391]: 2025-09-30 18:44:09.581 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-4e09328d-473c-4207-8cbd-7f6d99ef093c 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Sep 30 18:44:09 compute-0 nova_compute[265391]: 2025-09-30 18:44:09.582 2 DEBUG nova.compute.manager [-] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Sep 30 18:44:09 compute-0 nova_compute[265391]: 2025-09-30 18:44:09.582 2 DEBUG nova.network.neutron [-] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Sep 30 18:44:09 compute-0 nova_compute[265391]: 2025-09-30 18:44:09.582 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:44:09 compute-0 podman[353508]: 2025-09-30 18:44:09.584820289 +0000 UTC m=+0.062657914 container exec 316307af09cabd347a032cf06e9544ea621ab1be46fe52d1c226ab05d1d9e331 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:44:09 compute-0 podman[353508]: 2025-09-30 18:44:09.619718152 +0000 UTC m=+0.097555777 container exec_died 316307af09cabd347a032cf06e9544ea621ab1be46fe52d1c226ab05d1d9e331 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:44:09 compute-0 nova_compute[265391]: 2025-09-30 18:44:09.801 2 DEBUG nova.scheduler.client.report [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:44:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:44:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:44:09.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:44:09 compute-0 podman[353584]: 2025-09-30 18:44:09.834447488 +0000 UTC m=+0.060751236 container exec cb74c136339a4f8b8601bbbbeb0c8f6158501bcf17f6fc4369db4a5ec1b442a2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 18:44:09 compute-0 nova_compute[265391]: 2025-09-30 18:44:09.934 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:44:09 compute-0 nova_compute[265391]: 2025-09-30 18:44:09.979 2 DEBUG nova.compute.manager [req-0ddb42a9-dc0e-40c0-883a-9619bc515c71 req-11463622-45d5-4ac5-92f9-6509268f666f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Received event network-vif-unplugged-bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:44:09 compute-0 nova_compute[265391]: 2025-09-30 18:44:09.980 2 DEBUG oslo_concurrency.lockutils [req-0ddb42a9-dc0e-40c0-883a-9619bc515c71 req-11463622-45d5-4ac5-92f9-6509268f666f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:44:09 compute-0 nova_compute[265391]: 2025-09-30 18:44:09.980 2 DEBUG oslo_concurrency.lockutils [req-0ddb42a9-dc0e-40c0-883a-9619bc515c71 req-11463622-45d5-4ac5-92f9-6509268f666f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:44:09 compute-0 nova_compute[265391]: 2025-09-30 18:44:09.981 2 DEBUG oslo_concurrency.lockutils [req-0ddb42a9-dc0e-40c0-883a-9619bc515c71 req-11463622-45d5-4ac5-92f9-6509268f666f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:44:09 compute-0 nova_compute[265391]: 2025-09-30 18:44:09.981 2 DEBUG nova.compute.manager [req-0ddb42a9-dc0e-40c0-883a-9619bc515c71 req-11463622-45d5-4ac5-92f9-6509268f666f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] No waiting events found dispatching network-vif-unplugged-bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:44:09 compute-0 nova_compute[265391]: 2025-09-30 18:44:09.981 2 DEBUG nova.compute.manager [req-0ddb42a9-dc0e-40c0-883a-9619bc515c71 req-11463622-45d5-4ac5-92f9-6509268f666f 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Received event network-vif-unplugged-bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:44:09 compute-0 podman[353584]: 2025-09-30 18:44:09.99789056 +0000 UTC m=+0.224194308 container exec_died cb74c136339a4f8b8601bbbbeb0c8f6158501bcf17f6fc4369db4a5ec1b442a2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 18:44:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:44:10.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:10 compute-0 nova_compute[265391]: 2025-09-30 18:44:10.312 2 DEBUG nova.compute.resource_tracker [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:44:10 compute-0 nova_compute[265391]: 2025-09-30 18:44:10.314 2 DEBUG oslo_concurrency.lockutils [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.088s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:44:10 compute-0 nova_compute[265391]: 2025-09-30 18:44:10.323 2 DEBUG nova.compute.manager [req-dda1e1ce-c054-4e42-a460-abbcc3e6a98a req-b28dc17a-fcb0-49e6-a877-402735aa0745 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Received event network-vif-deleted-bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:44:10 compute-0 nova_compute[265391]: 2025-09-30 18:44:10.323 2 INFO nova.compute.manager [req-dda1e1ce-c054-4e42-a460-abbcc3e6a98a req-b28dc17a-fcb0-49e6-a877-402735aa0745 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Neutron deleted interface bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e; detaching it from the instance and deleting it from the info cache
Sep 30 18:44:10 compute-0 nova_compute[265391]: 2025-09-30 18:44:10.324 2 DEBUG nova.network.neutron [req-dda1e1ce-c054-4e42-a460-abbcc3e6a98a req-b28dc17a-fcb0-49e6-a877-402735aa0745 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:44:10 compute-0 nova_compute[265391]: 2025-09-30 18:44:10.341 2 INFO nova.compute.manager [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Sep 30 18:44:10 compute-0 podman[353696]: 2025-09-30 18:44:10.395910176 +0000 UTC m=+0.062674955 container exec 262a601a228f752d0c9dbf2b0c2d1b05fe66f88f5c16c2fc8adb324868709b53 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:44:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1972: 353 pgs: 353 active+clean; 200 MiB data, 450 MiB used, 40 GiB / 40 GiB avail; 682 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:44:10 compute-0 podman[353696]: 2025-09-30 18:44:10.436824693 +0000 UTC m=+0.103589472 container exec_died 262a601a228f752d0c9dbf2b0c2d1b05fe66f88f5c16c2fc8adb324868709b53 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:44:10 compute-0 sudo[352883]: pam_unix(sudo:session): session closed for user root
Sep 30 18:44:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:44:10 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:44:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:44:10 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:44:10 compute-0 sudo[353739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:44:10 compute-0 sudo[353739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:44:10 compute-0 sudo[353739]: pam_unix(sudo:session): session closed for user root
Sep 30 18:44:10 compute-0 nova_compute[265391]: 2025-09-30 18:44:10.747 2 DEBUG nova.network.neutron [-] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:44:10 compute-0 sudo[353764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:44:10 compute-0 sudo[353764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:44:10 compute-0 nova_compute[265391]: 2025-09-30 18:44:10.837 2 DEBUG nova.compute.manager [req-dda1e1ce-c054-4e42-a460-abbcc3e6a98a req-b28dc17a-fcb0-49e6-a877-402735aa0745 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Detach interface failed, port_id=bb1a6fd9-a955-4c4e-ad73-a3d41cfa246e, reason: Instance 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11646
Sep 30 18:44:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:44:11 compute-0 nova_compute[265391]: 2025-09-30 18:44:11.255 2 INFO nova.compute.manager [-] [instance: 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4] Took 1.67 seconds to deallocate network for instance.
Sep 30 18:44:11 compute-0 sudo[353764]: pam_unix(sudo:session): session closed for user root
Sep 30 18:44:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:44:11 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:44:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:44:11 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:44:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1973: 353 pgs: 353 active+clean; 200 MiB data, 450 MiB used, 40 GiB / 40 GiB avail; 747 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:44:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:44:11 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:44:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:44:11 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:44:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:44:11 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:44:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:44:11 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:44:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:44:11 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:44:11 compute-0 nova_compute[265391]: 2025-09-30 18:44:11.402 2 INFO nova.scheduler.client.report [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Deleted allocation for migration 5921f771-62ce-4f71-be1d-1e67d936f2cc
Sep 30 18:44:11 compute-0 nova_compute[265391]: 2025-09-30 18:44:11.403 2 DEBUG nova.virt.libvirt.driver [None req-76722a76-1096-43b9-8cfa-ba0e0265636b 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 0bd62f93-1956-4b12-a38a-10deee907b16] Live migration monitoring is all done _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11566
Sep 30 18:44:11 compute-0 sudo[353822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:44:11 compute-0 sudo[353822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:44:11 compute-0 sudo[353822]: pam_unix(sudo:session): session closed for user root
Sep 30 18:44:11 compute-0 sudo[353847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:44:11 compute-0 sudo[353847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:44:11 compute-0 ceph-mon[73755]: pgmap v1972: 353 pgs: 353 active+clean; 200 MiB data, 450 MiB used, 40 GiB / 40 GiB avail; 682 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:44:11 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:44:11 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:44:11 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:44:11 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:44:11 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:44:11 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:44:11 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:44:11 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:44:11 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:44:11 compute-0 nova_compute[265391]: 2025-09-30 18:44:11.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:44:11 compute-0 nova_compute[265391]: 2025-09-30 18:44:11.771 2 DEBUG oslo_concurrency.lockutils [None req-4e09328d-473c-4207-8cbd-7f6d99ef093c 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:44:11 compute-0 nova_compute[265391]: 2025-09-30 18:44:11.772 2 DEBUG oslo_concurrency.lockutils [None req-4e09328d-473c-4207-8cbd-7f6d99ef093c 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:44:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:44:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:44:11.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:44:11 compute-0 nova_compute[265391]: 2025-09-30 18:44:11.823 2 DEBUG oslo_concurrency.processutils [None req-4e09328d-473c-4207-8cbd-7f6d99ef093c 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:44:11 compute-0 podman[353913]: 2025-09-30 18:44:11.899019591 +0000 UTC m=+0.047875026 container create 5ed1828149d3fa125503eeb0a69f3dab1641884f8b40d4daacb71c016b75040e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_rosalind, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Sep 30 18:44:11 compute-0 systemd[1]: Started libpod-conmon-5ed1828149d3fa125503eeb0a69f3dab1641884f8b40d4daacb71c016b75040e.scope.
Sep 30 18:44:11 compute-0 podman[353913]: 2025-09-30 18:44:11.877020038 +0000 UTC m=+0.025875493 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:44:11 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:44:11 compute-0 podman[353913]: 2025-09-30 18:44:11.992677048 +0000 UTC m=+0.141532483 container init 5ed1828149d3fa125503eeb0a69f3dab1641884f8b40d4daacb71c016b75040e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:44:12 compute-0 podman[353913]: 2025-09-30 18:44:12.002311464 +0000 UTC m=+0.151166869 container start 5ed1828149d3fa125503eeb0a69f3dab1641884f8b40d4daacb71c016b75040e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:44:12 compute-0 podman[353913]: 2025-09-30 18:44:12.005789253 +0000 UTC m=+0.154644658 container attach 5ed1828149d3fa125503eeb0a69f3dab1641884f8b40d4daacb71c016b75040e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_rosalind, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Sep 30 18:44:12 compute-0 practical_rosalind[353929]: 167 167
Sep 30 18:44:12 compute-0 systemd[1]: libpod-5ed1828149d3fa125503eeb0a69f3dab1641884f8b40d4daacb71c016b75040e.scope: Deactivated successfully.
Sep 30 18:44:12 compute-0 conmon[353929]: conmon 5ed1828149d3fa125503 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5ed1828149d3fa125503eeb0a69f3dab1641884f8b40d4daacb71c016b75040e.scope/container/memory.events
Sep 30 18:44:12 compute-0 podman[353913]: 2025-09-30 18:44:12.009317704 +0000 UTC m=+0.158173119 container died 5ed1828149d3fa125503eeb0a69f3dab1641884f8b40d4daacb71c016b75040e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_rosalind, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:44:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-808f784691541a47582e17b271e4dbdd408f2a4a4736caaae6848ea1e66b7e20-merged.mount: Deactivated successfully.
Sep 30 18:44:12 compute-0 podman[353913]: 2025-09-30 18:44:12.048417664 +0000 UTC m=+0.197273109 container remove 5ed1828149d3fa125503eeb0a69f3dab1641884f8b40d4daacb71c016b75040e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_rosalind, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:44:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:44:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:44:12.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:44:12 compute-0 systemd[1]: libpod-conmon-5ed1828149d3fa125503eeb0a69f3dab1641884f8b40d4daacb71c016b75040e.scope: Deactivated successfully.
Sep 30 18:44:12 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:44:12 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/470898929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:44:12 compute-0 nova_compute[265391]: 2025-09-30 18:44:12.248 2 DEBUG oslo_concurrency.processutils [None req-4e09328d-473c-4207-8cbd-7f6d99ef093c 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:44:12 compute-0 nova_compute[265391]: 2025-09-30 18:44:12.254 2 DEBUG nova.compute.provider_tree [None req-4e09328d-473c-4207-8cbd-7f6d99ef093c 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:44:12 compute-0 podman[353973]: 2025-09-30 18:44:12.255152165 +0000 UTC m=+0.056617810 container create b9a2546b34d800bd996ecacba071af8aa32a826708a8bdf9abe0c9c3c8f8220b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 18:44:12 compute-0 systemd[1]: Started libpod-conmon-b9a2546b34d800bd996ecacba071af8aa32a826708a8bdf9abe0c9c3c8f8220b.scope.
Sep 30 18:44:12 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:44:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1bb58bf6d9bc6417dde9363cc5c0f6faa453d7e5ed0cdba6236ea4e8bdf6087/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:44:12 compute-0 podman[353973]: 2025-09-30 18:44:12.23661735 +0000 UTC m=+0.038083025 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:44:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1bb58bf6d9bc6417dde9363cc5c0f6faa453d7e5ed0cdba6236ea4e8bdf6087/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:44:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1bb58bf6d9bc6417dde9363cc5c0f6faa453d7e5ed0cdba6236ea4e8bdf6087/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:44:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1bb58bf6d9bc6417dde9363cc5c0f6faa453d7e5ed0cdba6236ea4e8bdf6087/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:44:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1bb58bf6d9bc6417dde9363cc5c0f6faa453d7e5ed0cdba6236ea4e8bdf6087/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:44:12 compute-0 podman[353973]: 2025-09-30 18:44:12.350352251 +0000 UTC m=+0.151817916 container init b9a2546b34d800bd996ecacba071af8aa32a826708a8bdf9abe0c9c3c8f8220b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_lamport, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:44:12 compute-0 podman[353973]: 2025-09-30 18:44:12.358201972 +0000 UTC m=+0.159667617 container start b9a2546b34d800bd996ecacba071af8aa32a826708a8bdf9abe0c9c3c8f8220b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_lamport, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:44:12 compute-0 podman[353973]: 2025-09-30 18:44:12.361916277 +0000 UTC m=+0.163381952 container attach b9a2546b34d800bd996ecacba071af8aa32a826708a8bdf9abe0c9c3c8f8220b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Sep 30 18:44:12 compute-0 ceph-mon[73755]: pgmap v1973: 353 pgs: 353 active+clean; 200 MiB data, 450 MiB used, 40 GiB / 40 GiB avail; 747 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:44:12 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/470898929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:44:12 compute-0 laughing_lamport[353992]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:44:12 compute-0 laughing_lamport[353992]: --> All data devices are unavailable
Sep 30 18:44:12 compute-0 systemd[1]: libpod-b9a2546b34d800bd996ecacba071af8aa32a826708a8bdf9abe0c9c3c8f8220b.scope: Deactivated successfully.
Sep 30 18:44:12 compute-0 podman[353973]: 2025-09-30 18:44:12.696090749 +0000 UTC m=+0.497556394 container died b9a2546b34d800bd996ecacba071af8aa32a826708a8bdf9abe0c9c3c8f8220b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 18:44:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1bb58bf6d9bc6417dde9363cc5c0f6faa453d7e5ed0cdba6236ea4e8bdf6087-merged.mount: Deactivated successfully.
Sep 30 18:44:12 compute-0 podman[353973]: 2025-09-30 18:44:12.74222808 +0000 UTC m=+0.543693725 container remove b9a2546b34d800bd996ecacba071af8aa32a826708a8bdf9abe0c9c3c8f8220b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_lamport, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:44:12 compute-0 systemd[1]: libpod-conmon-b9a2546b34d800bd996ecacba071af8aa32a826708a8bdf9abe0c9c3c8f8220b.scope: Deactivated successfully.
Sep 30 18:44:12 compute-0 nova_compute[265391]: 2025-09-30 18:44:12.761 2 DEBUG nova.scheduler.client.report [None req-4e09328d-473c-4207-8cbd-7f6d99ef093c 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:44:12 compute-0 sudo[353847]: pam_unix(sudo:session): session closed for user root
Sep 30 18:44:12 compute-0 sudo[354018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:44:12 compute-0 sudo[354018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:44:12 compute-0 sudo[354018]: pam_unix(sudo:session): session closed for user root
Sep 30 18:44:12 compute-0 sudo[354043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:44:12 compute-0 sudo[354043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:44:13 compute-0 nova_compute[265391]: 2025-09-30 18:44:13.275 2 DEBUG oslo_concurrency.lockutils [None req-4e09328d-473c-4207-8cbd-7f6d99ef093c 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.503s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:44:13 compute-0 nova_compute[265391]: 2025-09-30 18:44:13.300 2 INFO nova.scheduler.client.report [None req-4e09328d-473c-4207-8cbd-7f6d99ef093c 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Deleted allocations for instance 1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4
Sep 30 18:44:13 compute-0 podman[354109]: 2025-09-30 18:44:13.341070884 +0000 UTC m=+0.037511330 container create 96e6e06763f1f30f3193933eb754298f1342894677ebf8a8a73e31ada680c7df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_cartwright, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:44:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1974: 353 pgs: 353 active+clean; 121 MiB data, 403 MiB used, 40 GiB / 40 GiB avail; 21 KiB/s rd, 1.3 KiB/s wr, 31 op/s
Sep 30 18:44:13 compute-0 systemd[1]: Started libpod-conmon-96e6e06763f1f30f3193933eb754298f1342894677ebf8a8a73e31ada680c7df.scope.
Sep 30 18:44:13 compute-0 nova_compute[265391]: 2025-09-30 18:44:13.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:44:13 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:44:13 compute-0 podman[354109]: 2025-09-30 18:44:13.323532656 +0000 UTC m=+0.019973122 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:44:13 compute-0 podman[354109]: 2025-09-30 18:44:13.422853507 +0000 UTC m=+0.119293963 container init 96e6e06763f1f30f3193933eb754298f1342894677ebf8a8a73e31ada680c7df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_cartwright, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 18:44:13 compute-0 podman[354109]: 2025-09-30 18:44:13.427962228 +0000 UTC m=+0.124402664 container start 96e6e06763f1f30f3193933eb754298f1342894677ebf8a8a73e31ada680c7df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid)
Sep 30 18:44:13 compute-0 podman[354109]: 2025-09-30 18:44:13.43155903 +0000 UTC m=+0.127999466 container attach 96e6e06763f1f30f3193933eb754298f1342894677ebf8a8a73e31ada680c7df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_cartwright, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Sep 30 18:44:13 compute-0 stupefied_cartwright[354125]: 167 167
Sep 30 18:44:13 compute-0 systemd[1]: libpod-96e6e06763f1f30f3193933eb754298f1342894677ebf8a8a73e31ada680c7df.scope: Deactivated successfully.
Sep 30 18:44:13 compute-0 podman[354109]: 2025-09-30 18:44:13.434780893 +0000 UTC m=+0.131221329 container died 96e6e06763f1f30f3193933eb754298f1342894677ebf8a8a73e31ada680c7df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1)
Sep 30 18:44:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-619de631f2a875a4baad81c62b43d6ae8e1320cd0595b323aae6ad6e44d2a6e1-merged.mount: Deactivated successfully.
Sep 30 18:44:13 compute-0 podman[354109]: 2025-09-30 18:44:13.469745787 +0000 UTC m=+0.166186223 container remove 96e6e06763f1f30f3193933eb754298f1342894677ebf8a8a73e31ada680c7df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_cartwright, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Sep 30 18:44:13 compute-0 systemd[1]: libpod-conmon-96e6e06763f1f30f3193933eb754298f1342894677ebf8a8a73e31ada680c7df.scope: Deactivated successfully.
Sep 30 18:44:13 compute-0 podman[354150]: 2025-09-30 18:44:13.660749675 +0000 UTC m=+0.045987648 container create 3aa99f1884dd0c434e4257ce62ec90da872c21f46e48dd4fe8e8eca682e87a1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 18:44:13 compute-0 systemd[1]: Started libpod-conmon-3aa99f1884dd0c434e4257ce62ec90da872c21f46e48dd4fe8e8eca682e87a1c.scope.
Sep 30 18:44:13 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:44:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37d073ebb5547170e6861d4b0c0cbe8c2cee3668141539c248641a6dd5ebdf0d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:44:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37d073ebb5547170e6861d4b0c0cbe8c2cee3668141539c248641a6dd5ebdf0d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:44:13 compute-0 podman[354150]: 2025-09-30 18:44:13.63905945 +0000 UTC m=+0.024297403 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:44:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37d073ebb5547170e6861d4b0c0cbe8c2cee3668141539c248641a6dd5ebdf0d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:44:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37d073ebb5547170e6861d4b0c0cbe8c2cee3668141539c248641a6dd5ebdf0d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:44:13 compute-0 podman[354150]: 2025-09-30 18:44:13.744836807 +0000 UTC m=+0.130074760 container init 3aa99f1884dd0c434e4257ce62ec90da872c21f46e48dd4fe8e8eca682e87a1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_agnesi, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:44:13 compute-0 podman[354150]: 2025-09-30 18:44:13.754745921 +0000 UTC m=+0.139983854 container start 3aa99f1884dd0c434e4257ce62ec90da872c21f46e48dd4fe8e8eca682e87a1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Sep 30 18:44:13 compute-0 podman[354150]: 2025-09-30 18:44:13.757702317 +0000 UTC m=+0.142940280 container attach 3aa99f1884dd0c434e4257ce62ec90da872c21f46e48dd4fe8e8eca682e87a1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_agnesi, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 18:44:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:44:13.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:44:13.895Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:44:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:44:13.896Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:44:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:44:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:44:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:44:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:44:14 compute-0 elated_agnesi[354166]: {
Sep 30 18:44:14 compute-0 elated_agnesi[354166]:     "0": [
Sep 30 18:44:14 compute-0 elated_agnesi[354166]:         {
Sep 30 18:44:14 compute-0 elated_agnesi[354166]:             "devices": [
Sep 30 18:44:14 compute-0 elated_agnesi[354166]:                 "/dev/loop3"
Sep 30 18:44:14 compute-0 elated_agnesi[354166]:             ],
Sep 30 18:44:14 compute-0 elated_agnesi[354166]:             "lv_name": "ceph_lv0",
Sep 30 18:44:14 compute-0 elated_agnesi[354166]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:44:14 compute-0 elated_agnesi[354166]:             "lv_size": "21470642176",
Sep 30 18:44:14 compute-0 elated_agnesi[354166]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:44:14 compute-0 elated_agnesi[354166]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:44:14 compute-0 elated_agnesi[354166]:             "name": "ceph_lv0",
Sep 30 18:44:14 compute-0 elated_agnesi[354166]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:44:14 compute-0 elated_agnesi[354166]:             "tags": {
Sep 30 18:44:14 compute-0 elated_agnesi[354166]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:44:14 compute-0 elated_agnesi[354166]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:44:14 compute-0 elated_agnesi[354166]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:44:14 compute-0 elated_agnesi[354166]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:44:14 compute-0 elated_agnesi[354166]:                 "ceph.cluster_name": "ceph",
Sep 30 18:44:14 compute-0 elated_agnesi[354166]:                 "ceph.crush_device_class": "",
Sep 30 18:44:14 compute-0 elated_agnesi[354166]:                 "ceph.encrypted": "0",
Sep 30 18:44:14 compute-0 elated_agnesi[354166]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:44:14 compute-0 elated_agnesi[354166]:                 "ceph.osd_id": "0",
Sep 30 18:44:14 compute-0 elated_agnesi[354166]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:44:14 compute-0 elated_agnesi[354166]:                 "ceph.type": "block",
Sep 30 18:44:14 compute-0 elated_agnesi[354166]:                 "ceph.vdo": "0",
Sep 30 18:44:14 compute-0 elated_agnesi[354166]:                 "ceph.with_tpm": "0"
Sep 30 18:44:14 compute-0 elated_agnesi[354166]:             },
Sep 30 18:44:14 compute-0 elated_agnesi[354166]:             "type": "block",
Sep 30 18:44:14 compute-0 elated_agnesi[354166]:             "vg_name": "ceph_vg0"
Sep 30 18:44:14 compute-0 elated_agnesi[354166]:         }
Sep 30 18:44:14 compute-0 elated_agnesi[354166]:     ]
Sep 30 18:44:14 compute-0 elated_agnesi[354166]: }
Sep 30 18:44:14 compute-0 systemd[1]: libpod-3aa99f1884dd0c434e4257ce62ec90da872c21f46e48dd4fe8e8eca682e87a1c.scope: Deactivated successfully.
Sep 30 18:44:14 compute-0 podman[354150]: 2025-09-30 18:44:14.077042879 +0000 UTC m=+0.462280822 container died 3aa99f1884dd0c434e4257ce62ec90da872c21f46e48dd4fe8e8eca682e87a1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_agnesi, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:44:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:44:14.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:14 compute-0 nova_compute[265391]: 2025-09-30 18:44:14.615 2 DEBUG oslo_concurrency.lockutils [None req-4e09328d-473c-4207-8cbd-7f6d99ef093c 5717e8cb8548429b948a23763350ab4a 63c45bef63ef4b9f895b3bab865e1a84 - - default default] Lock "1d83ccd0-1cb8-4a1c-b462-99a20ae7ede4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.532s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:44:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-37d073ebb5547170e6861d4b0c0cbe8c2cee3668141539c248641a6dd5ebdf0d-merged.mount: Deactivated successfully.
Sep 30 18:44:14 compute-0 ceph-mon[73755]: pgmap v1974: 353 pgs: 353 active+clean; 121 MiB data, 403 MiB used, 40 GiB / 40 GiB avail; 21 KiB/s rd, 1.3 KiB/s wr, 31 op/s
Sep 30 18:44:14 compute-0 podman[354150]: 2025-09-30 18:44:14.742770144 +0000 UTC m=+1.128008127 container remove 3aa99f1884dd0c434e4257ce62ec90da872c21f46e48dd4fe8e8eca682e87a1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_agnesi, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Sep 30 18:44:14 compute-0 sudo[354043]: pam_unix(sudo:session): session closed for user root
Sep 30 18:44:14 compute-0 systemd[1]: libpod-conmon-3aa99f1884dd0c434e4257ce62ec90da872c21f46e48dd4fe8e8eca682e87a1c.scope: Deactivated successfully.
Sep 30 18:44:14 compute-0 sudo[354189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:44:14 compute-0 sudo[354189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:44:14 compute-0 sudo[354189]: pam_unix(sudo:session): session closed for user root
Sep 30 18:44:14 compute-0 sudo[354214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:44:14 compute-0 sudo[354214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:44:15 compute-0 podman[354278]: 2025-09-30 18:44:15.344014861 +0000 UTC m=+0.046090741 container create fbc3747e2ca241e78c90c4b87cfe30cfee0cbfab3c6722322bd86178f5ee8932 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_rubin, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:44:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1975: 353 pgs: 353 active+clean; 121 MiB data, 403 MiB used, 40 GiB / 40 GiB avail; 21 KiB/s rd, 1.3 KiB/s wr, 30 op/s
Sep 30 18:44:15 compute-0 systemd[1]: Started libpod-conmon-fbc3747e2ca241e78c90c4b87cfe30cfee0cbfab3c6722322bd86178f5ee8932.scope.
Sep 30 18:44:15 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:44:15 compute-0 podman[354278]: 2025-09-30 18:44:15.323981728 +0000 UTC m=+0.026057658 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:44:15 compute-0 podman[354278]: 2025-09-30 18:44:15.427920748 +0000 UTC m=+0.129996648 container init fbc3747e2ca241e78c90c4b87cfe30cfee0cbfab3c6722322bd86178f5ee8932 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_rubin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 18:44:15 compute-0 podman[354278]: 2025-09-30 18:44:15.437312458 +0000 UTC m=+0.139388328 container start fbc3747e2ca241e78c90c4b87cfe30cfee0cbfab3c6722322bd86178f5ee8932 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Sep 30 18:44:15 compute-0 podman[354278]: 2025-09-30 18:44:15.441832594 +0000 UTC m=+0.143908524 container attach fbc3747e2ca241e78c90c4b87cfe30cfee0cbfab3c6722322bd86178f5ee8932 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_rubin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:44:15 compute-0 bold_rubin[354294]: 167 167
Sep 30 18:44:15 compute-0 systemd[1]: libpod-fbc3747e2ca241e78c90c4b87cfe30cfee0cbfab3c6722322bd86178f5ee8932.scope: Deactivated successfully.
Sep 30 18:44:15 compute-0 podman[354299]: 2025-09-30 18:44:15.496896443 +0000 UTC m=+0.033737354 container died fbc3747e2ca241e78c90c4b87cfe30cfee0cbfab3c6722322bd86178f5ee8932 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_rubin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Sep 30 18:44:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-603da303ecd9c1be9c57258144688b07934e6a6345e57d5afe5c15d243fa6c3e-merged.mount: Deactivated successfully.
Sep 30 18:44:15 compute-0 podman[354299]: 2025-09-30 18:44:15.536479996 +0000 UTC m=+0.073320847 container remove fbc3747e2ca241e78c90c4b87cfe30cfee0cbfab3c6722322bd86178f5ee8932 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_rubin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Sep 30 18:44:15 compute-0 systemd[1]: libpod-conmon-fbc3747e2ca241e78c90c4b87cfe30cfee0cbfab3c6722322bd86178f5ee8932.scope: Deactivated successfully.
Sep 30 18:44:15 compute-0 podman[354323]: 2025-09-30 18:44:15.750899903 +0000 UTC m=+0.063144947 container create 580a67bcc418c98303df10618f5e563eb0db1355267f1672e2ab554dac8aa0ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Sep 30 18:44:15 compute-0 systemd[1]: Started libpod-conmon-580a67bcc418c98303df10618f5e563eb0db1355267f1672e2ab554dac8aa0ba.scope.
Sep 30 18:44:15 compute-0 podman[354323]: 2025-09-30 18:44:15.720776332 +0000 UTC m=+0.033021416 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:44:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:15 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:44:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:44:15.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5760a5ff895656787ad587aa43c4975c6b45fbd493ada8465c314da16a2107b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:44:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5760a5ff895656787ad587aa43c4975c6b45fbd493ada8465c314da16a2107b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:44:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5760a5ff895656787ad587aa43c4975c6b45fbd493ada8465c314da16a2107b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:44:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5760a5ff895656787ad587aa43c4975c6b45fbd493ada8465c314da16a2107b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:44:15 compute-0 podman[354323]: 2025-09-30 18:44:15.837617892 +0000 UTC m=+0.149862916 container init 580a67bcc418c98303df10618f5e563eb0db1355267f1672e2ab554dac8aa0ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_babbage, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:44:15 compute-0 podman[354323]: 2025-09-30 18:44:15.849048495 +0000 UTC m=+0.161293509 container start 580a67bcc418c98303df10618f5e563eb0db1355267f1672e2ab554dac8aa0ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_babbage, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:44:15 compute-0 podman[354323]: 2025-09-30 18:44:15.852246937 +0000 UTC m=+0.164491961 container attach 580a67bcc418c98303df10618f5e563eb0db1355267f1672e2ab554dac8aa0ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_babbage, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Sep 30 18:44:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:44:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:44:16.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:44:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:44:16 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Sep 30 18:44:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:16.238752) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 18:44:16 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Sep 30 18:44:16 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257856238784, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 2072, "num_deletes": 256, "total_data_size": 3937122, "memory_usage": 4034664, "flush_reason": "Manual Compaction"}
Sep 30 18:44:16 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Sep 30 18:44:16 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257856259528, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 3796768, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47776, "largest_seqno": 49847, "table_properties": {"data_size": 3787484, "index_size": 5778, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19289, "raw_average_key_size": 20, "raw_value_size": 3768848, "raw_average_value_size": 3929, "num_data_blocks": 252, "num_entries": 959, "num_filter_entries": 959, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759257654, "oldest_key_time": 1759257654, "file_creation_time": 1759257856, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:44:16 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 20826 microseconds, and 9395 cpu microseconds.
Sep 30 18:44:16 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:44:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:16.259574) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 3796768 bytes OK
Sep 30 18:44:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:16.259595) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Sep 30 18:44:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:16.260854) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Sep 30 18:44:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:16.260870) EVENT_LOG_v1 {"time_micros": 1759257856260865, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 18:44:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:16.260889) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 18:44:16 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 3928518, prev total WAL file size 3928518, number of live WAL files 2.
Sep 30 18:44:16 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:44:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:16.261937) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373536' seq:72057594037927935, type:22 .. '6C6F676D0032303038' seq:0, type:0; will stop at (end)
Sep 30 18:44:16 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 18:44:16 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(3707KB)], [110(11MB)]
Sep 30 18:44:16 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257856261969, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 15655499, "oldest_snapshot_seqno": -1}
Sep 30 18:44:16 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 7504 keys, 15511214 bytes, temperature: kUnknown
Sep 30 18:44:16 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257856348010, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 15511214, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15461285, "index_size": 30041, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18821, "raw_key_size": 196908, "raw_average_key_size": 26, "raw_value_size": 15327195, "raw_average_value_size": 2042, "num_data_blocks": 1188, "num_entries": 7504, "num_filter_entries": 7504, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759257856, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:44:16 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:44:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:16.348203) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 15511214 bytes
Sep 30 18:44:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:16.350145) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.8 rd, 180.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 11.3 +0.0 blob) out(14.8 +0.0 blob), read-write-amplify(8.2) write-amplify(4.1) OK, records in: 8032, records dropped: 528 output_compression: NoCompression
Sep 30 18:44:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:16.350168) EVENT_LOG_v1 {"time_micros": 1759257856350157, "job": 66, "event": "compaction_finished", "compaction_time_micros": 86097, "compaction_time_cpu_micros": 39409, "output_level": 6, "num_output_files": 1, "total_output_size": 15511214, "num_input_records": 8032, "num_output_records": 7504, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 18:44:16 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:44:16 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257856350817, "job": 66, "event": "table_file_deletion", "file_number": 112}
Sep 30 18:44:16 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:44:16 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257856352796, "job": 66, "event": "table_file_deletion", "file_number": 110}
Sep 30 18:44:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:16.261864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:44:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:16.352916) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:44:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:16.352921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:44:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:16.352922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:44:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:16.352924) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:44:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:16.352925) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:44:16 compute-0 lvm[354414]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:44:16 compute-0 lvm[354414]: VG ceph_vg0 finished
Sep 30 18:44:16 compute-0 awesome_babbage[354339]: {}
Sep 30 18:44:16 compute-0 systemd[1]: libpod-580a67bcc418c98303df10618f5e563eb0db1355267f1672e2ab554dac8aa0ba.scope: Deactivated successfully.
Sep 30 18:44:16 compute-0 systemd[1]: libpod-580a67bcc418c98303df10618f5e563eb0db1355267f1672e2ab554dac8aa0ba.scope: Consumed 1.183s CPU time.
Sep 30 18:44:16 compute-0 podman[354323]: 2025-09-30 18:44:16.595292322 +0000 UTC m=+0.907537326 container died 580a67bcc418c98303df10618f5e563eb0db1355267f1672e2ab554dac8aa0ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_babbage, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:44:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-5760a5ff895656787ad587aa43c4975c6b45fbd493ada8465c314da16a2107b1-merged.mount: Deactivated successfully.
Sep 30 18:44:16 compute-0 podman[354323]: 2025-09-30 18:44:16.643230249 +0000 UTC m=+0.955475263 container remove 580a67bcc418c98303df10618f5e563eb0db1355267f1672e2ab554dac8aa0ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Sep 30 18:44:16 compute-0 systemd[1]: libpod-conmon-580a67bcc418c98303df10618f5e563eb0db1355267f1672e2ab554dac8aa0ba.scope: Deactivated successfully.
Sep 30 18:44:16 compute-0 nova_compute[265391]: 2025-09-30 18:44:16.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:44:16 compute-0 sudo[354214]: pam_unix(sudo:session): session closed for user root
Sep 30 18:44:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:44:16 compute-0 ceph-mon[73755]: pgmap v1975: 353 pgs: 353 active+clean; 121 MiB data, 403 MiB used, 40 GiB / 40 GiB avail; 21 KiB/s rd, 1.3 KiB/s wr, 30 op/s
Sep 30 18:44:16 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:44:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:44:16 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:44:16 compute-0 sudo[354432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:44:16 compute-0 sudo[354432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:44:16 compute-0 sudo[354432]: pam_unix(sudo:session): session closed for user root
Sep 30 18:44:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1976: 353 pgs: 353 active+clean; 121 MiB data, 403 MiB used, 40 GiB / 40 GiB avail; 21 KiB/s rd, 1.3 KiB/s wr, 30 op/s
Sep 30 18:44:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:44:17.370Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:44:17 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:44:17 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:44:17 compute-0 ceph-mon[73755]: pgmap v1976: 353 pgs: 353 active+clean; 121 MiB data, 403 MiB used, 40 GiB / 40 GiB avail; 21 KiB/s rd, 1.3 KiB/s wr, 30 op/s
Sep 30 18:44:17 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Sep 30 18:44:17 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:17.737388) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 18:44:17 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Sep 30 18:44:17 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257857737715, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 286, "num_deletes": 251, "total_data_size": 94983, "memory_usage": 100552, "flush_reason": "Manual Compaction"}
Sep 30 18:44:17 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Sep 30 18:44:17 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257857740702, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 94629, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49848, "largest_seqno": 50133, "table_properties": {"data_size": 92695, "index_size": 162, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5046, "raw_average_key_size": 18, "raw_value_size": 88883, "raw_average_value_size": 326, "num_data_blocks": 6, "num_entries": 272, "num_filter_entries": 272, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759257856, "oldest_key_time": 1759257856, "file_creation_time": 1759257857, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:44:17 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 3454 microseconds, and 1287 cpu microseconds.
Sep 30 18:44:17 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:44:17 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:17.740841) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 94629 bytes OK
Sep 30 18:44:17 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:17.740938) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Sep 30 18:44:17 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:17.742551) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Sep 30 18:44:17 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:17.742573) EVENT_LOG_v1 {"time_micros": 1759257857742566, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 18:44:17 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:17.742593) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 18:44:17 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 92850, prev total WAL file size 92850, number of live WAL files 2.
Sep 30 18:44:17 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:44:17 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:17.743667) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Sep 30 18:44:17 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 18:44:17 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(92KB)], [113(14MB)]
Sep 30 18:44:17 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257857743740, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 15605843, "oldest_snapshot_seqno": -1}
Sep 30 18:44:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:44:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:44:17.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:44:17 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 7266 keys, 13646720 bytes, temperature: kUnknown
Sep 30 18:44:17 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257857830618, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 13646720, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13599926, "index_size": 27546, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18181, "raw_key_size": 192654, "raw_average_key_size": 26, "raw_value_size": 13471336, "raw_average_value_size": 1854, "num_data_blocks": 1076, "num_entries": 7266, "num_filter_entries": 7266, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759257857, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:44:17 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:44:17 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:17.830888) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 13646720 bytes
Sep 30 18:44:17 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:17.832479) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 179.5 rd, 156.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 14.8 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(309.1) write-amplify(144.2) OK, records in: 7776, records dropped: 510 output_compression: NoCompression
Sep 30 18:44:17 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:17.832510) EVENT_LOG_v1 {"time_micros": 1759257857832496, "job": 68, "event": "compaction_finished", "compaction_time_micros": 86957, "compaction_time_cpu_micros": 26888, "output_level": 6, "num_output_files": 1, "total_output_size": 13646720, "num_input_records": 7776, "num_output_records": 7266, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 18:44:17 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:44:17 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257857832722, "job": 68, "event": "table_file_deletion", "file_number": 115}
Sep 30 18:44:17 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:44:17 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257857838138, "job": 68, "event": "table_file_deletion", "file_number": 113}
Sep 30 18:44:17 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:17.743484) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:44:17 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:17.838199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:44:17 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:17.838204) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:44:17 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:17.838206) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:44:17 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:17.838207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:44:17 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:17.838209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:44:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:44:18.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:18 compute-0 nova_compute[265391]: 2025-09-30 18:44:18.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:44:18 compute-0 nova_compute[265391]: 2025-09-30 18:44:18.423 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:44:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:44:18] "GET /metrics HTTP/1.1" 200 46720 "" "Prometheus/2.51.0"
Sep 30 18:44:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:44:18] "GET /metrics HTTP/1.1" 200 46720 "" "Prometheus/2.51.0"
Sep 30 18:44:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:44:18.910Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:44:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:44:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:44:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:44:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:44:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1977: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 41 KiB/s rd, 2.6 KiB/s wr, 60 op/s
Sep 30 18:44:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:44:19.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:44:20.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:20 compute-0 ceph-mon[73755]: pgmap v1977: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 41 KiB/s rd, 2.6 KiB/s wr, 60 op/s
Sep 30 18:44:20 compute-0 podman[354461]: 2025-09-30 18:44:20.535132105 +0000 UTC m=+0.064274835 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd)
Sep 30 18:44:20 compute-0 podman[354462]: 2025-09-30 18:44:20.53728346 +0000 UTC m=+0.066352569 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Sep 30 18:44:20 compute-0 sudo[354488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:44:20 compute-0 sudo[354488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:44:20 compute-0 sudo[354488]: pam_unix(sudo:session): session closed for user root
Sep 30 18:44:20 compute-0 podman[354463]: 2025-09-30 18:44:20.560264159 +0000 UTC m=+0.087390368 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, container_name=openstack_network_exporter, name=ubi9-minimal, version=9.6, architecture=x86_64, vcs-type=git, config_id=edpm, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., distribution-scope=public, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Sep 30 18:44:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:44:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1978: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 41 KiB/s rd, 2.6 KiB/s wr, 60 op/s
Sep 30 18:44:21 compute-0 nova_compute[265391]: 2025-09-30 18:44:21.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:44:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:44:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:44:21.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:44:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:44:22.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:44:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:44:22 compute-0 ceph-mon[73755]: pgmap v1978: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 41 KiB/s rd, 2.6 KiB/s wr, 60 op/s
Sep 30 18:44:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:44:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1979: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Sep 30 18:44:23 compute-0 nova_compute[265391]: 2025-09-30 18:44:23.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:44:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:44:23.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:44:23.897Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:44:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:44:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:44:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:44:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:44:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:44:24.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:24 compute-0 ceph-mon[73755]: pgmap v1979: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Sep 30 18:44:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1980: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:44:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:44:25.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:44:26.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:44:26 compute-0 ceph-mon[73755]: pgmap v1980: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:44:26 compute-0 nova_compute[265391]: 2025-09-30 18:44:26.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:44:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1981: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:44:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:44:27.372Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:44:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:44:27.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:27 compute-0 nova_compute[265391]: 2025-09-30 18:44:27.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:44:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:44:28.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:28 compute-0 nova_compute[265391]: 2025-09-30 18:44:28.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:44:28 compute-0 ceph-mon[73755]: pgmap v1981: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 18:44:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:44:28] "GET /metrics HTTP/1.1" 200 46708 "" "Prometheus/2.51.0"
Sep 30 18:44:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:44:28] "GET /metrics HTTP/1.1" 200 46708 "" "Prometheus/2.51.0"
Sep 30 18:44:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:44:28.910Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:44:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:44:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:44:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:44:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:44:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1982: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:44:29 compute-0 podman[276673]: time="2025-09-30T18:44:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:44:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:44:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:44:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:44:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10293 "" "Go-http-client/1.1"
Sep 30 18:44:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:44:29.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:44:30.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:30 compute-0 ceph-mon[73755]: pgmap v1982: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:44:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:44:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1983: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:44:31 compute-0 openstack_network_exporter[279566]: ERROR   18:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:44:31 compute-0 openstack_network_exporter[279566]: ERROR   18:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:44:31 compute-0 openstack_network_exporter[279566]: ERROR   18:44:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:44:31 compute-0 openstack_network_exporter[279566]: ERROR   18:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:44:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:44:31 compute-0 openstack_network_exporter[279566]: ERROR   18:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:44:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:44:31 compute-0 nova_compute[265391]: 2025-09-30 18:44:31.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:44:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:44:31.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:44:32.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:32 compute-0 ceph-mon[73755]: pgmap v1983: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:44:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1984: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:44:33 compute-0 nova_compute[265391]: 2025-09-30 18:44:33.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:44:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:44:33.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:44:33.898Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:44:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:44:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:44:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:44:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:44:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:44:34.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:34 compute-0 ceph-mon[73755]: pgmap v1984: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:44:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1985: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:44:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:44:35.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:44:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:44:36.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:44:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:44:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:44:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2356262577' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:44:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:44:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2356262577' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:44:36 compute-0 ceph-mon[73755]: pgmap v1985: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:44:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2356262577' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:44:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2356262577' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:44:36 compute-0 nova_compute[265391]: 2025-09-30 18:44:36.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:44:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:44:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:44:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:44:37.373Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:44:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1986: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:44:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:44:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:44:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:44:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:44:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:44:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:44:37 compute-0 podman[354559]: 2025-09-30 18:44:37.51528563 +0000 UTC m=+0.060095869 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest)
Sep 30 18:44:37 compute-0 podman[354561]: 2025-09-30 18:44:37.523212663 +0000 UTC m=+0.061478925 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Sep 30 18:44:37 compute-0 podman[354560]: 2025-09-30 18:44:37.547272409 +0000 UTC m=+0.083886328 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20250930, config_id=ovn_controller, io.buildah.version=1.41.4)
Sep 30 18:44:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:44:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:44:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:44:37.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:44:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:44:38.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:38 compute-0 nova_compute[265391]: 2025-09-30 18:44:38.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:44:38 compute-0 ceph-mon[73755]: pgmap v1986: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:44:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:44:38] "GET /metrics HTTP/1.1" 200 46706 "" "Prometheus/2.51.0"
Sep 30 18:44:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:44:38] "GET /metrics HTTP/1.1" 200 46706 "" "Prometheus/2.51.0"
Sep 30 18:44:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:44:38.911Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:44:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:44:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:44:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:44:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:44:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1987: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 766 B/s rd, 0 op/s
Sep 30 18:44:39 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:44:39.421 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:31:e0 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-4a724d83-7a03-449e-b06f-f9f1f1bb686e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a724d83-7a03-449e-b06f-f9f1f1bb686e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c994362300fc4b68b72392279f890ca7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=793ccd68-a96c-4ced-8449-bfb1c479c4b4, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d979d9c2-641a-4559-b95a-f55833182093) old=Port_Binding(mac=['fa:16:3e:1e:31:e0'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-4a724d83-7a03-449e-b06f-f9f1f1bb686e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a724d83-7a03-449e-b06f-f9f1f1bb686e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c994362300fc4b68b72392279f890ca7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:44:39 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:44:39.422 166158 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d979d9c2-641a-4559-b95a-f55833182093 in datapath 4a724d83-7a03-449e-b06f-f9f1f1bb686e updated
Sep 30 18:44:39 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:44:39.424 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4a724d83-7a03-449e-b06f-f9f1f1bb686e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:44:39 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:44:39.425 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[4607226c-3de0-43c0-91cc-73308f14f778]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:44:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:44:39.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:44:40.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:40 compute-0 sudo[354629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:44:40 compute-0 sudo[354629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:44:40 compute-0 sudo[354629]: pam_unix(sudo:session): session closed for user root
Sep 30 18:44:40 compute-0 ceph-mon[73755]: pgmap v1987: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 766 B/s rd, 0 op/s
Sep 30 18:44:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:44:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1988: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:44:41 compute-0 nova_compute[265391]: 2025-09-30 18:44:41.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:44:41 compute-0 ceph-mon[73755]: pgmap v1988: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:44:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:44:41.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:44:42.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1989: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 766 B/s rd, 0 op/s
Sep 30 18:44:43 compute-0 nova_compute[265391]: 2025-09-30 18:44:43.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:44:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:44:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:44:43.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:44:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:44:43.899Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:44:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:44:43.899Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:44:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:44:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:44:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:44:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:44:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:44:44.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:44 compute-0 ceph-mon[73755]: pgmap v1989: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 766 B/s rd, 0 op/s
Sep 30 18:44:44 compute-0 sshd-session[354555]: error: kex_exchange_identification: read: Connection timed out
Sep 30 18:44:44 compute-0 sshd-session[354555]: banner exchange: Connection from 115.190.39.222 port 52374: Connection timed out
Sep 30 18:44:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1990: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:44:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:44:45.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:44:46.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:44:46 compute-0 ceph-mon[73755]: pgmap v1990: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:44:46 compute-0 nova_compute[265391]: 2025-09-30 18:44:46.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:44:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:44:47.373Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:44:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1991: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:44:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:44:47.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:44:47.972 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:44:38 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-f866cf0f-e793-4238-baa0-f0c3f17801af', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f866cf0f-e793-4238-baa0-f0c3f17801af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0e1882e8f3e74aa3840e38f2ce263f25', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d7288d4-137d-4796-adf2-791325730c64, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=2539ca47-e3f3-40f9-9a18-3cc4160bcb4f) old=Port_Binding(mac=['fa:16:3e:65:44:38'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-f866cf0f-e793-4238-baa0-f0c3f17801af', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f866cf0f-e793-4238-baa0-f0c3f17801af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0e1882e8f3e74aa3840e38f2ce263f25', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:44:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:44:47.974 166158 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 2539ca47-e3f3-40f9-9a18-3cc4160bcb4f in datapath f866cf0f-e793-4238-baa0-f0c3f17801af updated
Sep 30 18:44:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:44:47.976 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f866cf0f-e793-4238-baa0-f0c3f17801af, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:44:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:44:47.976 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[8a3015af-9f21-45dd-bbe3-ee1a65237d31]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:44:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:44:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:44:48.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:44:48 compute-0 nova_compute[265391]: 2025-09-30 18:44:48.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:44:48 compute-0 ceph-mon[73755]: pgmap v1991: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:44:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:44:48] "GET /metrics HTTP/1.1" 200 46706 "" "Prometheus/2.51.0"
Sep 30 18:44:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:44:48] "GET /metrics HTTP/1.1" 200 46706 "" "Prometheus/2.51.0"
Sep 30 18:44:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:44:48.912Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:44:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:44:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:44:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:44:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:44:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1992: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:44:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:44:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:44:49.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:44:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:44:50.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:50 compute-0 nova_compute[265391]: 2025-09-30 18:44:50.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:44:50 compute-0 ceph-mon[73755]: pgmap v1992: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:44:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:44:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1993: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:44:51 compute-0 podman[354665]: 2025-09-30 18:44:51.521022697 +0000 UTC m=+0.051729925 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, container_name=iscsid, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_managed=true)
Sep 30 18:44:51 compute-0 podman[354664]: 2025-09-30 18:44:51.551382294 +0000 UTC m=+0.084739350 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest)
Sep 30 18:44:51 compute-0 podman[354666]: 2025-09-30 18:44:51.552574275 +0000 UTC m=+0.069923761 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., distribution-scope=public, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.buildah.version=1.33.7, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_id=edpm, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal)
Sep 30 18:44:51 compute-0 nova_compute[265391]: 2025-09-30 18:44:51.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:44:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:44:51.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:44:52.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:44:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:44:52 compute-0 ceph-mon[73755]: pgmap v1993: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:44:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:44:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1994: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:44:53 compute-0 nova_compute[265391]: 2025-09-30 18:44:53.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:44:53 compute-0 nova_compute[265391]: 2025-09-30 18:44:53.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:44:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:44:53.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:44:53.900Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:44:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:44:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:44:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:44:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:44:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:44:54.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:44:54.338 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:44:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:44:54.338 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:44:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:44:54.338 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:44:54 compute-0 nova_compute[265391]: 2025-09-30 18:44:54.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:44:54 compute-0 ceph-mon[73755]: pgmap v1994: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:44:55 compute-0 nova_compute[265391]: 2025-09-30 18:44:54.999 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:44:55 compute-0 nova_compute[265391]: 2025-09-30 18:44:55.000 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:44:55 compute-0 nova_compute[265391]: 2025-09-30 18:44:55.000 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:44:55 compute-0 nova_compute[265391]: 2025-09-30 18:44:55.000 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:44:55 compute-0 nova_compute[265391]: 2025-09-30 18:44:55.001 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:44:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1995: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:44:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:44:55 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1140021245' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:44:55 compute-0 nova_compute[265391]: 2025-09-30 18:44:55.461 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:44:55 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3303240435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:44:55 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1140021245' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:44:55 compute-0 nova_compute[265391]: 2025-09-30 18:44:55.616 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:44:55 compute-0 nova_compute[265391]: 2025-09-30 18:44:55.617 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:44:55 compute-0 nova_compute[265391]: 2025-09-30 18:44:55.636 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.019s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:44:55 compute-0 nova_compute[265391]: 2025-09-30 18:44:55.636 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4294MB free_disk=39.992183685302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:44:55 compute-0 nova_compute[265391]: 2025-09-30 18:44:55.637 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:44:55 compute-0 nova_compute[265391]: 2025-09-30 18:44:55.637 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:44:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:44:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:44:55.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:44:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:44:56.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:44:56 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Sep 30 18:44:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:56.252986) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 18:44:56 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Sep 30 18:44:56 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257896253017, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 546, "num_deletes": 252, "total_data_size": 674907, "memory_usage": 685728, "flush_reason": "Manual Compaction"}
Sep 30 18:44:56 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Sep 30 18:44:56 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257896258642, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 449251, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 50134, "largest_seqno": 50679, "table_properties": {"data_size": 446580, "index_size": 707, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 7169, "raw_average_key_size": 20, "raw_value_size": 441126, "raw_average_value_size": 1253, "num_data_blocks": 32, "num_entries": 352, "num_filter_entries": 352, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759257857, "oldest_key_time": 1759257857, "file_creation_time": 1759257896, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:44:56 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 5711 microseconds, and 2809 cpu microseconds.
Sep 30 18:44:56 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:44:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:56.258693) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 449251 bytes OK
Sep 30 18:44:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:56.258718) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Sep 30 18:44:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:56.260552) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Sep 30 18:44:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:56.260577) EVENT_LOG_v1 {"time_micros": 1759257896260569, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 18:44:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:56.260596) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 18:44:56 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 671865, prev total WAL file size 671865, number of live WAL files 2.
Sep 30 18:44:56 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:44:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:56.261276) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373535' seq:72057594037927935, type:22 .. '6D6772737461740032303038' seq:0, type:0; will stop at (end)
Sep 30 18:44:56 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 18:44:56 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(438KB)], [116(13MB)]
Sep 30 18:44:56 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257896261313, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 14095971, "oldest_snapshot_seqno": -1}
Sep 30 18:44:56 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 7120 keys, 10456989 bytes, temperature: kUnknown
Sep 30 18:44:56 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257896329789, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 10456989, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10415526, "index_size": 22535, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17861, "raw_key_size": 189803, "raw_average_key_size": 26, "raw_value_size": 10293886, "raw_average_value_size": 1445, "num_data_blocks": 868, "num_entries": 7120, "num_filter_entries": 7120, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759257896, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:44:56 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:44:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:56.330222) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 10456989 bytes
Sep 30 18:44:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:56.331872) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 205.4 rd, 152.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 13.0 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(54.7) write-amplify(23.3) OK, records in: 7618, records dropped: 498 output_compression: NoCompression
Sep 30 18:44:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:56.331906) EVENT_LOG_v1 {"time_micros": 1759257896331891, "job": 70, "event": "compaction_finished", "compaction_time_micros": 68625, "compaction_time_cpu_micros": 49210, "output_level": 6, "num_output_files": 1, "total_output_size": 10456989, "num_input_records": 7618, "num_output_records": 7120, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 18:44:56 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:44:56 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257896332179, "job": 70, "event": "table_file_deletion", "file_number": 118}
Sep 30 18:44:56 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:44:56 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759257896336884, "job": 70, "event": "table_file_deletion", "file_number": 116}
Sep 30 18:44:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:56.261178) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:44:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:56.336971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:44:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:56.336979) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:44:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:56.336982) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:44:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:56.336985) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:44:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:44:56.336988) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:44:56 compute-0 ceph-mon[73755]: pgmap v1995: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:44:56 compute-0 nova_compute[265391]: 2025-09-30 18:44:56.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:44:56 compute-0 nova_compute[265391]: 2025-09-30 18:44:56.699 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:44:56 compute-0 nova_compute[265391]: 2025-09-30 18:44:56.699 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:44:55 up  1:48,  0 user,  load average: 0.58, 0.83, 0.85\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:44:56 compute-0 nova_compute[265391]: 2025-09-30 18:44:56.720 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Refreshing inventories for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:822
Sep 30 18:44:56 compute-0 nova_compute[265391]: 2025-09-30 18:44:56.740 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Updating ProviderTree inventory for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:786
Sep 30 18:44:56 compute-0 nova_compute[265391]: 2025-09-30 18:44:56.740 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Updating inventory in ProviderTree for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Sep 30 18:44:56 compute-0 nova_compute[265391]: 2025-09-30 18:44:56.753 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Refreshing aggregate associations for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2, aggregates: None _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:831
Sep 30 18:44:56 compute-0 nova_compute[265391]: 2025-09-30 18:44:56.775 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Refreshing trait associations for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2, traits: COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SOUND_MODEL_SB16,COMPUTE_ARCH_X86_64,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIRTIO_PACKED,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_TIS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOUND_MODEL_ICH9,COMPUTE_SOUND_MODEL_USB,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOUND_MODEL_PCSPK,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ADDRESS_SPACE_EMULATED,COMPUTE_RESCUE_BFV,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_STATELESS_FIRMWARE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_IGB,HW_ARCH_X86_64,COMPUTE_ACCELERATORS,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOUND_MODEL_ES1370,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_CRB,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_VIRTIO_FS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ADDRESS_SPACE_PASSTHROUGH,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_F16C,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOUND_MODEL_ICH6,COMPUTE_SOUND_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SOUND_MODEL_AC97,HW_CPU_X86_SSE42 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:843
Sep 30 18:44:56 compute-0 nova_compute[265391]: 2025-09-30 18:44:56.804 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:44:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:44:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3713725337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:44:57 compute-0 nova_compute[265391]: 2025-09-30 18:44:57.247 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:44:57 compute-0 nova_compute[265391]: 2025-09-30 18:44:57.254 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:44:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:44:57.373Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:44:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1996: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:44:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2466129853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:44:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3713725337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:44:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:44:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/488943288' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:44:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:44:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/488943288' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:44:57 compute-0 nova_compute[265391]: 2025-09-30 18:44:57.768 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:44:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:44:57.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:44:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:44:58.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:44:58 compute-0 nova_compute[265391]: 2025-09-30 18:44:58.280 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:44:58 compute-0 nova_compute[265391]: 2025-09-30 18:44:58.280 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.643s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:44:58 compute-0 nova_compute[265391]: 2025-09-30 18:44:58.281 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:44:58 compute-0 nova_compute[265391]: 2025-09-30 18:44:58.281 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11909
Sep 30 18:44:58 compute-0 nova_compute[265391]: 2025-09-30 18:44:58.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:44:58 compute-0 ceph-mon[73755]: pgmap v1996: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:44:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/488943288' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:44:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/488943288' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:44:58 compute-0 nova_compute[265391]: 2025-09-30 18:44:58.787 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11918
Sep 30 18:44:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:44:58] "GET /metrics HTTP/1.1" 200 46709 "" "Prometheus/2.51.0"
Sep 30 18:44:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:44:58] "GET /metrics HTTP/1.1" 200 46709 "" "Prometheus/2.51.0"
Sep 30 18:44:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:44:58.913Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:44:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:44:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:44:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:44:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:44:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:44:59 compute-0 nova_compute[265391]: 2025-09-30 18:44:59.301 2 DEBUG oslo_concurrency.lockutils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Acquiring lock "232379f8-4475-4208-8b8e-8c2ed3c630a0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:44:59 compute-0 nova_compute[265391]: 2025-09-30 18:44:59.301 2 DEBUG oslo_concurrency.lockutils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lock "232379f8-4475-4208-8b8e-8c2ed3c630a0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:44:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1997: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:44:59 compute-0 podman[276673]: time="2025-09-30T18:44:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:44:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:44:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:44:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:44:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10317 "" "Go-http-client/1.1"
Sep 30 18:44:59 compute-0 nova_compute[265391]: 2025-09-30 18:44:59.806 2 DEBUG nova.compute.manager [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Sep 30 18:44:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:44:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:44:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:44:59.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:45:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:45:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:45:00.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:45:00 compute-0 nova_compute[265391]: 2025-09-30 18:45:00.346 2 DEBUG oslo_concurrency.lockutils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:45:00 compute-0 nova_compute[265391]: 2025-09-30 18:45:00.347 2 DEBUG oslo_concurrency.lockutils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:45:00 compute-0 nova_compute[265391]: 2025-09-30 18:45:00.355 2 DEBUG nova.virt.hardware [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Sep 30 18:45:00 compute-0 nova_compute[265391]: 2025-09-30 18:45:00.356 2 INFO nova.compute.claims [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Claim successful on node compute-0.ctlplane.example.com
Sep 30 18:45:00 compute-0 ceph-mon[73755]: pgmap v1997: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:45:00 compute-0 sudo[354779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:45:00 compute-0 sudo[354779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:45:00 compute-0 sudo[354779]: pam_unix(sudo:session): session closed for user root
Sep 30 18:45:00 compute-0 nova_compute[265391]: 2025-09-30 18:45:00.788 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:45:00 compute-0 nova_compute[265391]: 2025-09-30 18:45:00.788 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:45:00 compute-0 nova_compute[265391]: 2025-09-30 18:45:00.789 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:45:00 compute-0 nova_compute[265391]: 2025-09-30 18:45:00.789 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:45:00 compute-0 nova_compute[265391]: 2025-09-30 18:45:00.789 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:45:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:45:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1998: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:45:01 compute-0 nova_compute[265391]: 2025-09-30 18:45:01.406 2 DEBUG oslo_concurrency.processutils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:45:01 compute-0 openstack_network_exporter[279566]: ERROR   18:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:45:01 compute-0 openstack_network_exporter[279566]: ERROR   18:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:45:01 compute-0 openstack_network_exporter[279566]: ERROR   18:45:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:45:01 compute-0 openstack_network_exporter[279566]: ERROR   18:45:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:45:01 compute-0 openstack_network_exporter[279566]: ERROR   18:45:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:45:01 compute-0 nova_compute[265391]: 2025-09-30 18:45:01.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:45:01 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1412715386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:45:01 compute-0 nova_compute[265391]: 2025-09-30 18:45:01.828 2 DEBUG oslo_concurrency.processutils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:45:01 compute-0 nova_compute[265391]: 2025-09-30 18:45:01.835 2 DEBUG nova.compute.provider_tree [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:45:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:45:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:45:01.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:45:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:45:02.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:02 compute-0 ovn_controller[156242]: 2025-09-30T18:45:02Z|00291|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Sep 30 18:45:02 compute-0 nova_compute[265391]: 2025-09-30 18:45:02.348 2 DEBUG nova.scheduler.client.report [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:45:02 compute-0 ceph-mon[73755]: pgmap v1998: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:45:02 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1412715386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:45:02 compute-0 nova_compute[265391]: 2025-09-30 18:45:02.859 2 DEBUG oslo_concurrency.lockutils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.512s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:45:02 compute-0 nova_compute[265391]: 2025-09-30 18:45:02.860 2 DEBUG nova.compute.manager [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Sep 30 18:45:03 compute-0 nova_compute[265391]: 2025-09-30 18:45:03.373 2 DEBUG nova.compute.manager [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Sep 30 18:45:03 compute-0 nova_compute[265391]: 2025-09-30 18:45:03.373 2 DEBUG nova.network.neutron [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Sep 30 18:45:03 compute-0 nova_compute[265391]: 2025-09-30 18:45:03.374 2 WARNING neutronclient.v2_0.client [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:45:03 compute-0 nova_compute[265391]: 2025-09-30 18:45:03.375 2 WARNING neutronclient.v2_0.client [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:45:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v1999: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:45:03 compute-0 nova_compute[265391]: 2025-09-30 18:45:03.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:45:03.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:03 compute-0 nova_compute[265391]: 2025-09-30 18:45:03.886 2 INFO nova.virt.libvirt.driver [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Sep 30 18:45:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:45:03.900Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:45:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:45:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:45:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:45:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:45:04 compute-0 nova_compute[265391]: 2025-09-30 18:45:04.068 2 DEBUG nova.network.neutron [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Successfully created port: 43f79cf7-670d-4eb6-8f85-82c87ca3de15 _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Sep 30 18:45:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:45:04.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:04 compute-0 nova_compute[265391]: 2025-09-30 18:45:04.396 2 DEBUG nova.compute.manager [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Sep 30 18:45:04 compute-0 nova_compute[265391]: 2025-09-30 18:45:04.425 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:45:04 compute-0 ceph-mon[73755]: pgmap v1999: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:45:05 compute-0 nova_compute[265391]: 2025-09-30 18:45:05.119 2 DEBUG nova.network.neutron [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Successfully updated port: 43f79cf7-670d-4eb6-8f85-82c87ca3de15 _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Sep 30 18:45:05 compute-0 nova_compute[265391]: 2025-09-30 18:45:05.219 2 DEBUG nova.compute.manager [req-e85d1454-f6ed-44a8-bee7-10db98301bd0 req-2a9f4829-66e2-4ecd-97f8-f4955d196f0b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Received event network-changed-43f79cf7-670d-4eb6-8f85-82c87ca3de15 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:45:05 compute-0 nova_compute[265391]: 2025-09-30 18:45:05.219 2 DEBUG nova.compute.manager [req-e85d1454-f6ed-44a8-bee7-10db98301bd0 req-2a9f4829-66e2-4ecd-97f8-f4955d196f0b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Refreshing instance network info cache due to event network-changed-43f79cf7-670d-4eb6-8f85-82c87ca3de15. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:45:05 compute-0 nova_compute[265391]: 2025-09-30 18:45:05.220 2 DEBUG oslo_concurrency.lockutils [req-e85d1454-f6ed-44a8-bee7-10db98301bd0 req-2a9f4829-66e2-4ecd-97f8-f4955d196f0b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-232379f8-4475-4208-8b8e-8c2ed3c630a0" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:45:05 compute-0 nova_compute[265391]: 2025-09-30 18:45:05.220 2 DEBUG oslo_concurrency.lockutils [req-e85d1454-f6ed-44a8-bee7-10db98301bd0 req-2a9f4829-66e2-4ecd-97f8-f4955d196f0b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-232379f8-4475-4208-8b8e-8c2ed3c630a0" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:45:05 compute-0 nova_compute[265391]: 2025-09-30 18:45:05.220 2 DEBUG nova.network.neutron [req-e85d1454-f6ed-44a8-bee7-10db98301bd0 req-2a9f4829-66e2-4ecd-97f8-f4955d196f0b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Refreshing network info cache for port 43f79cf7-670d-4eb6-8f85-82c87ca3de15 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:45:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2000: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:45:05 compute-0 nova_compute[265391]: 2025-09-30 18:45:05.418 2 DEBUG nova.compute.manager [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Sep 30 18:45:05 compute-0 nova_compute[265391]: 2025-09-30 18:45:05.419 2 DEBUG nova.virt.libvirt.driver [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Sep 30 18:45:05 compute-0 nova_compute[265391]: 2025-09-30 18:45:05.419 2 INFO nova.virt.libvirt.driver [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Creating image(s)
Sep 30 18:45:05 compute-0 nova_compute[265391]: 2025-09-30 18:45:05.445 2 DEBUG nova.storage.rbd_utils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] rbd image 232379f8-4475-4208-8b8e-8c2ed3c630a0_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:45:05 compute-0 nova_compute[265391]: 2025-09-30 18:45:05.474 2 DEBUG nova.storage.rbd_utils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] rbd image 232379f8-4475-4208-8b8e-8c2ed3c630a0_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:45:05 compute-0 nova_compute[265391]: 2025-09-30 18:45:05.501 2 DEBUG nova.storage.rbd_utils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] rbd image 232379f8-4475-4208-8b8e-8c2ed3c630a0_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:45:05 compute-0 nova_compute[265391]: 2025-09-30 18:45:05.505 2 DEBUG oslo_concurrency.processutils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:45:05 compute-0 nova_compute[265391]: 2025-09-30 18:45:05.581 2 DEBUG oslo_concurrency.processutils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:45:05 compute-0 nova_compute[265391]: 2025-09-30 18:45:05.582 2 DEBUG oslo_concurrency.lockutils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Acquiring lock "cb2d580238c9b109feae7f1462613dc547671457" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:45:05 compute-0 nova_compute[265391]: 2025-09-30 18:45:05.582 2 DEBUG oslo_concurrency.lockutils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:45:05 compute-0 nova_compute[265391]: 2025-09-30 18:45:05.583 2 DEBUG oslo_concurrency.lockutils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:45:05 compute-0 nova_compute[265391]: 2025-09-30 18:45:05.603 2 DEBUG nova.storage.rbd_utils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] rbd image 232379f8-4475-4208-8b8e-8c2ed3c630a0_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:45:05 compute-0 nova_compute[265391]: 2025-09-30 18:45:05.606 2 DEBUG oslo_concurrency.processutils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 232379f8-4475-4208-8b8e-8c2ed3c630a0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:45:05 compute-0 nova_compute[265391]: 2025-09-30 18:45:05.628 2 DEBUG oslo_concurrency.lockutils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Acquiring lock "refresh_cache-232379f8-4475-4208-8b8e-8c2ed3c630a0" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:45:05 compute-0 nova_compute[265391]: 2025-09-30 18:45:05.728 2 WARNING neutronclient.v2_0.client [req-e85d1454-f6ed-44a8-bee7-10db98301bd0 req-2a9f4829-66e2-4ecd-97f8-f4955d196f0b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:45:05 compute-0 nova_compute[265391]: 2025-09-30 18:45:05.846 2 DEBUG oslo_concurrency.processutils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 232379f8-4475-4208-8b8e-8c2ed3c630a0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.240s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:45:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:45:05.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:05 compute-0 nova_compute[265391]: 2025-09-30 18:45:05.901 2 DEBUG nova.storage.rbd_utils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] resizing rbd image 232379f8-4475-4208-8b8e-8c2ed3c630a0_disk to 1073741824 resize /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:288
Sep 30 18:45:05 compute-0 nova_compute[265391]: 2025-09-30 18:45:05.988 2 DEBUG nova.network.neutron [req-e85d1454-f6ed-44a8-bee7-10db98301bd0 req-2a9f4829-66e2-4ecd-97f8-f4955d196f0b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:45:05 compute-0 nova_compute[265391]: 2025-09-30 18:45:05.994 2 DEBUG nova.virt.libvirt.driver [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Sep 30 18:45:05 compute-0 nova_compute[265391]: 2025-09-30 18:45:05.995 2 DEBUG nova.virt.libvirt.driver [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Ensure instance console log exists: /var/lib/nova/instances/232379f8-4475-4208-8b8e-8c2ed3c630a0/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Sep 30 18:45:05 compute-0 nova_compute[265391]: 2025-09-30 18:45:05.995 2 DEBUG oslo_concurrency.lockutils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:45:05 compute-0 nova_compute[265391]: 2025-09-30 18:45:05.996 2 DEBUG oslo_concurrency.lockutils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:45:05 compute-0 nova_compute[265391]: 2025-09-30 18:45:05.996 2 DEBUG oslo_concurrency.lockutils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:45:06 compute-0 nova_compute[265391]: 2025-09-30 18:45:06.124 2 DEBUG nova.network.neutron [req-e85d1454-f6ed-44a8-bee7-10db98301bd0 req-2a9f4829-66e2-4ecd-97f8-f4955d196f0b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:45:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:45:06.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:45:06 compute-0 nova_compute[265391]: 2025-09-30 18:45:06.633 2 DEBUG oslo_concurrency.lockutils [req-e85d1454-f6ed-44a8-bee7-10db98301bd0 req-2a9f4829-66e2-4ecd-97f8-f4955d196f0b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-232379f8-4475-4208-8b8e-8c2ed3c630a0" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:45:06 compute-0 nova_compute[265391]: 2025-09-30 18:45:06.634 2 DEBUG oslo_concurrency.lockutils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Acquired lock "refresh_cache-232379f8-4475-4208-8b8e-8c2ed3c630a0" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:45:06 compute-0 nova_compute[265391]: 2025-09-30 18:45:06.634 2 DEBUG nova.network.neutron [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:45:06 compute-0 ceph-mon[73755]: pgmap v2000: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:45:06 compute-0 nova_compute[265391]: 2025-09-30 18:45:06.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:45:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:45:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:45:07.375Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:45:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2001: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:45:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:45:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:45:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:45:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:45:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:45:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:45:07 compute-0 nova_compute[265391]: 2025-09-30 18:45:07.489 2 DEBUG nova.network.neutron [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:45:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:45:07 compute-0 nova_compute[265391]: 2025-09-30 18:45:07.717 2 WARNING neutronclient.v2_0.client [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:45:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:45:07.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:07 compute-0 nova_compute[265391]: 2025-09-30 18:45:07.891 2 DEBUG nova.network.neutron [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Updating instance_info_cache with network_info: [{"id": "43f79cf7-670d-4eb6-8f85-82c87ca3de15", "address": "fa:16:3e:40:ad:c0", "network": {"id": "4a724d83-7a03-449e-b06f-f9f1f1bb686e", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalancingStrategy-921789264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c994362300fc4b68b72392279f890ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f79cf7-67", "ovs_interfaceid": "43f79cf7-670d-4eb6-8f85-82c87ca3de15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:45:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:45:08.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:45:08
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['.rgw.root', '.nfs', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr', 'volumes', 'images', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control']
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:45:08 compute-0 nova_compute[265391]: 2025-09-30 18:45:08.400 2 DEBUG oslo_concurrency.lockutils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Releasing lock "refresh_cache-232379f8-4475-4208-8b8e-8c2ed3c630a0" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:45:08 compute-0 nova_compute[265391]: 2025-09-30 18:45:08.401 2 DEBUG nova.compute.manager [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Instance network_info: |[{"id": "43f79cf7-670d-4eb6-8f85-82c87ca3de15", "address": "fa:16:3e:40:ad:c0", "network": {"id": "4a724d83-7a03-449e-b06f-f9f1f1bb686e", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalancingStrategy-921789264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c994362300fc4b68b72392279f890ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f79cf7-67", "ovs_interfaceid": "43f79cf7-670d-4eb6-8f85-82c87ca3de15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Sep 30 18:45:08 compute-0 nova_compute[265391]: 2025-09-30 18:45:08.405 2 DEBUG nova.virt.libvirt.driver [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Start _get_guest_xml network_info=[{"id": "43f79cf7-670d-4eb6-8f85-82c87ca3de15", "address": "fa:16:3e:40:ad:c0", "network": {"id": "4a724d83-7a03-449e-b06f-f9f1f1bb686e", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalancingStrategy-921789264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c994362300fc4b68b72392279f890ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f79cf7-67", "ovs_interfaceid": "43f79cf7-670d-4eb6-8f85-82c87ca3de15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'image_id': '5b99cbca-b655-4be5-8343-cf504005c42e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Sep 30 18:45:08 compute-0 nova_compute[265391]: 2025-09-30 18:45:08.410 2 WARNING nova.virt.libvirt.driver [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:45:08 compute-0 nova_compute[265391]: 2025-09-30 18:45:08.412 2 DEBUG nova.virt.driver [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='5b99cbca-b655-4be5-8343-cf504005c42e', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteWorkloadBalancingStrategy-server-1435974356', uuid='232379f8-4475-4208-8b8e-8c2ed3c630a0'), owner=OwnerMeta(userid='a881d4a6df1c47b5ae0d12c72bd2a02b', username='tempest-TestExecuteWorkloadBalancingStrategy-1499811811-project-admin', projectid='0e1882e8f3e74aa3840e38f2ce263f25', projectname='tempest-TestExecuteWorkloadBalancingStrategy-1499811811'), image=ImageMeta(id='5b99cbca-b655-4be5-8343-cf504005c42e', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "43f79cf7-670d-4eb6-8f85-82c87ca3de15", "address": "fa:16:3e:40:ad:c0", "network": {"id": "4a724d83-7a03-449e-b06f-f9f1f1bb686e", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalancingStrategy-921789264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c994362300fc4b68b72392279f890ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f79cf7-67", "ovs_interfaceid": "43f79cf7-670d-4eb6-8f85-82c87ca3de15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20250919142712.b99a882.el10', creation_time=1759257908.4122784) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Sep 30 18:45:08 compute-0 nova_compute[265391]: 2025-09-30 18:45:08.417 2 DEBUG nova.virt.libvirt.host [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Sep 30 18:45:08 compute-0 nova_compute[265391]: 2025-09-30 18:45:08.418 2 DEBUG nova.virt.libvirt.host [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Sep 30 18:45:08 compute-0 nova_compute[265391]: 2025-09-30 18:45:08.422 2 DEBUG nova.virt.libvirt.host [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Sep 30 18:45:08 compute-0 nova_compute[265391]: 2025-09-30 18:45:08.423 2 DEBUG nova.virt.libvirt.host [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Sep 30 18:45:08 compute-0 nova_compute[265391]: 2025-09-30 18:45:08.424 2 DEBUG nova.virt.libvirt.driver [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Sep 30 18:45:08 compute-0 nova_compute[265391]: 2025-09-30 18:45:08.424 2 DEBUG nova.virt.hardware [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-09-30T18:04:50Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Sep 30 18:45:08 compute-0 nova_compute[265391]: 2025-09-30 18:45:08.425 2 DEBUG nova.virt.hardware [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Sep 30 18:45:08 compute-0 nova_compute[265391]: 2025-09-30 18:45:08.425 2 DEBUG nova.virt.hardware [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Sep 30 18:45:08 compute-0 nova_compute[265391]: 2025-09-30 18:45:08.426 2 DEBUG nova.virt.hardware [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Sep 30 18:45:08 compute-0 nova_compute[265391]: 2025-09-30 18:45:08.426 2 DEBUG nova.virt.hardware [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Sep 30 18:45:08 compute-0 nova_compute[265391]: 2025-09-30 18:45:08.427 2 DEBUG nova.virt.hardware [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Sep 30 18:45:08 compute-0 nova_compute[265391]: 2025-09-30 18:45:08.427 2 DEBUG nova.virt.hardware [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Sep 30 18:45:08 compute-0 nova_compute[265391]: 2025-09-30 18:45:08.428 2 DEBUG nova.virt.hardware [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Sep 30 18:45:08 compute-0 nova_compute[265391]: 2025-09-30 18:45:08.428 2 DEBUG nova.virt.hardware [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Sep 30 18:45:08 compute-0 nova_compute[265391]: 2025-09-30 18:45:08.428 2 DEBUG nova.virt.hardware [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Sep 30 18:45:08 compute-0 nova_compute[265391]: 2025-09-30 18:45:08.429 2 DEBUG nova.virt.hardware [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Sep 30 18:45:08 compute-0 nova_compute[265391]: 2025-09-30 18:45:08.434 2 DEBUG oslo_concurrency.processutils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:45:08 compute-0 nova_compute[265391]: 2025-09-30 18:45:08.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:08 compute-0 podman[355001]: 2025-09-30 18:45:08.565793785 +0000 UTC m=+0.094495229 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:45:08 compute-0 podman[355004]: 2025-09-30 18:45:08.566889273 +0000 UTC m=+0.081658491 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:45:08 compute-0 podman[355003]: 2025-09-30 18:45:08.6211013 +0000 UTC m=+0.134028371 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest)
Sep 30 18:45:08 compute-0 ceph-mon[73755]: pgmap v2001: 353 pgs: 353 active+clean; 41 MiB data, 356 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:45:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:45:08] "GET /metrics HTTP/1.1" 200 46709 "" "Prometheus/2.51.0"
Sep 30 18:45:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:45:08] "GET /metrics HTTP/1.1" 200 46709 "" "Prometheus/2.51.0"
Sep 30 18:45:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:45:08 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3958410412' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:45:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:45:08.913Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:45:08 compute-0 nova_compute[265391]: 2025-09-30 18:45:08.917 2 DEBUG oslo_concurrency.processutils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:45:08 compute-0 nova_compute[265391]: 2025-09-30 18:45:08.939 2 DEBUG nova.storage.rbd_utils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] rbd image 232379f8-4475-4208-8b8e-8c2ed3c630a0_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:45:08 compute-0 nova_compute[265391]: 2025-09-30 18:45:08.942 2 DEBUG oslo_concurrency.processutils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:45:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:45:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:45:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:45:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:45:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:45:09 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2119052121' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:45:09 compute-0 nova_compute[265391]: 2025-09-30 18:45:09.382 2 DEBUG oslo_concurrency.processutils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:45:09 compute-0 nova_compute[265391]: 2025-09-30 18:45:09.385 2 DEBUG nova.virt.libvirt.vif [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:44:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalancingStrategy-server-1435974356',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancingstrategy-server-1435974356',id=34,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0e1882e8f3e74aa3840e38f2ce263f25',ramdisk_id='',reservation_id='r-7s4vhh01',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteWorkloadBalancingStrategy-1499811811',owner_user_name='tempest-TestExecuteWorkloadBalancingStrategy-1499811811-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:45:04Z,user_data=None,user_id='a881d4a6df1c47b5ae0d12c72bd2a02b',uuid=232379f8-4475-4208-8b8e-8c2ed3c630a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "43f79cf7-670d-4eb6-8f85-82c87ca3de15", "address": "fa:16:3e:40:ad:c0", "network": {"id": "4a724d83-7a03-449e-b06f-f9f1f1bb686e", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalancingStrategy-921789264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c994362300fc4b68b72392279f890ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f79cf7-67", "ovs_interfaceid": "43f79cf7-670d-4eb6-8f85-82c87ca3de15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:45:09 compute-0 nova_compute[265391]: 2025-09-30 18:45:09.386 2 DEBUG nova.network.os_vif_util [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Converting VIF {"id": "43f79cf7-670d-4eb6-8f85-82c87ca3de15", "address": "fa:16:3e:40:ad:c0", "network": {"id": "4a724d83-7a03-449e-b06f-f9f1f1bb686e", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalancingStrategy-921789264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c994362300fc4b68b72392279f890ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f79cf7-67", "ovs_interfaceid": "43f79cf7-670d-4eb6-8f85-82c87ca3de15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:45:09 compute-0 nova_compute[265391]: 2025-09-30 18:45:09.388 2 DEBUG nova.network.os_vif_util [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:ad:c0,bridge_name='br-int',has_traffic_filtering=True,id=43f79cf7-670d-4eb6-8f85-82c87ca3de15,network=Network(4a724d83-7a03-449e-b06f-f9f1f1bb686e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43f79cf7-67') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:45:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2002: 353 pgs: 353 active+clean; 88 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:45:09 compute-0 nova_compute[265391]: 2025-09-30 18:45:09.390 2 DEBUG nova.objects.instance [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lazy-loading 'pci_devices' on Instance uuid 232379f8-4475-4208-8b8e-8c2ed3c630a0 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:45:09 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3958410412' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:45:09 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2119052121' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:45:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:45:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:45:09.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:45:09 compute-0 nova_compute[265391]: 2025-09-30 18:45:09.903 2 DEBUG nova.virt.libvirt.driver [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] End _get_guest_xml xml=<domain type="kvm">
Sep 30 18:45:09 compute-0 nova_compute[265391]:   <uuid>232379f8-4475-4208-8b8e-8c2ed3c630a0</uuid>
Sep 30 18:45:09 compute-0 nova_compute[265391]:   <name>instance-00000022</name>
Sep 30 18:45:09 compute-0 nova_compute[265391]:   <memory>131072</memory>
Sep 30 18:45:09 compute-0 nova_compute[265391]:   <vcpu>1</vcpu>
Sep 30 18:45:09 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:45:09 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteWorkloadBalancingStrategy-server-1435974356</nova:name>
Sep 30 18:45:09 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:45:08</nova:creationTime>
Sep 30 18:45:09 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:45:09 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:45:09 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:45:09 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:45:09 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:45:09 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:45:09 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:45:09 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:45:09 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:45:09 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:45:09 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:45:09 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:45:09 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:45:09 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:45:09 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:45:09 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:45:09 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:45:09 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:45:09 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:45:09 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:45:09 compute-0 nova_compute[265391]:         <nova:user uuid="a881d4a6df1c47b5ae0d12c72bd2a02b">tempest-TestExecuteWorkloadBalancingStrategy-1499811811-project-admin</nova:user>
Sep 30 18:45:09 compute-0 nova_compute[265391]:         <nova:project uuid="0e1882e8f3e74aa3840e38f2ce263f25">tempest-TestExecuteWorkloadBalancingStrategy-1499811811</nova:project>
Sep 30 18:45:09 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:45:09 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:45:09 compute-0 nova_compute[265391]:         <nova:port uuid="43f79cf7-670d-4eb6-8f85-82c87ca3de15">
Sep 30 18:45:09 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:45:09 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:45:09 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:45:09 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <system>
Sep 30 18:45:09 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:45:09 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:45:09 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:45:09 compute-0 nova_compute[265391]:       <entry name="serial">232379f8-4475-4208-8b8e-8c2ed3c630a0</entry>
Sep 30 18:45:09 compute-0 nova_compute[265391]:       <entry name="uuid">232379f8-4475-4208-8b8e-8c2ed3c630a0</entry>
Sep 30 18:45:09 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     </system>
Sep 30 18:45:09 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:45:09 compute-0 nova_compute[265391]:   <os>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="q35">hvm</type>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:   </os>
Sep 30 18:45:09 compute-0 nova_compute[265391]:   <features>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <vmcoreinfo/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:   </features>
Sep 30 18:45:09 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:45:09 compute-0 nova_compute[265391]:   <cpu mode="host-model" match="exact">
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <topology sockets="1" cores="1" threads="1"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:45:09 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:45:09 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/232379f8-4475-4208-8b8e-8c2ed3c630a0_disk">
Sep 30 18:45:09 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:       </source>
Sep 30 18:45:09 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:45:09 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:45:09 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:45:09 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/232379f8-4475-4208-8b8e-8c2ed3c630a0_disk.config">
Sep 30 18:45:09 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:       </source>
Sep 30 18:45:09 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:45:09 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:45:09 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:45:09 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:40:ad:c0"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:       <target dev="tap43f79cf7-67"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:45:09 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/232379f8-4475-4208-8b8e-8c2ed3c630a0/console.log" append="off"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <video>
Sep 30 18:45:09 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     </video>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:45:09 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <controller type="usb" index="0"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:45:09 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:45:09 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:45:09 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:45:09 compute-0 nova_compute[265391]: </domain>
Sep 30 18:45:09 compute-0 nova_compute[265391]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Sep 30 18:45:09 compute-0 nova_compute[265391]: 2025-09-30 18:45:09.904 2 DEBUG nova.compute.manager [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Preparing to wait for external event network-vif-plugged-43f79cf7-670d-4eb6-8f85-82c87ca3de15 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:45:09 compute-0 nova_compute[265391]: 2025-09-30 18:45:09.905 2 DEBUG oslo_concurrency.lockutils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Acquiring lock "232379f8-4475-4208-8b8e-8c2ed3c630a0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:45:09 compute-0 nova_compute[265391]: 2025-09-30 18:45:09.905 2 DEBUG oslo_concurrency.lockutils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lock "232379f8-4475-4208-8b8e-8c2ed3c630a0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:45:09 compute-0 nova_compute[265391]: 2025-09-30 18:45:09.905 2 DEBUG oslo_concurrency.lockutils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lock "232379f8-4475-4208-8b8e-8c2ed3c630a0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:45:09 compute-0 nova_compute[265391]: 2025-09-30 18:45:09.906 2 DEBUG nova.virt.libvirt.vif [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:44:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalancingStrategy-server-1435974356',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancingstrategy-server-1435974356',id=34,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0e1882e8f3e74aa3840e38f2ce263f25',ramdisk_id='',reservation_id='r-7s4vhh01',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteWorkloadBalancingStrategy-1499811811',owner_user_name='tempest-TestExecuteWorkloadBalancingStrategy-1499811811-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:45:04Z,user_data=None,user_id='a881d4a6df1c47b5ae0d12c72bd2a02b',uuid=232379f8-4475-4208-8b8e-8c2ed3c630a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "43f79cf7-670d-4eb6-8f85-82c87ca3de15", "address": "fa:16:3e:40:ad:c0", "network": {"id": "4a724d83-7a03-449e-b06f-f9f1f1bb686e", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalancingStrategy-921789264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c994362300fc4b68b72392279f890ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f79cf7-67", "ovs_interfaceid": "43f79cf7-670d-4eb6-8f85-82c87ca3de15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Sep 30 18:45:09 compute-0 nova_compute[265391]: 2025-09-30 18:45:09.906 2 DEBUG nova.network.os_vif_util [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Converting VIF {"id": "43f79cf7-670d-4eb6-8f85-82c87ca3de15", "address": "fa:16:3e:40:ad:c0", "network": {"id": "4a724d83-7a03-449e-b06f-f9f1f1bb686e", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalancingStrategy-921789264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c994362300fc4b68b72392279f890ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f79cf7-67", "ovs_interfaceid": "43f79cf7-670d-4eb6-8f85-82c87ca3de15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:45:09 compute-0 nova_compute[265391]: 2025-09-30 18:45:09.907 2 DEBUG nova.network.os_vif_util [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:ad:c0,bridge_name='br-int',has_traffic_filtering=True,id=43f79cf7-670d-4eb6-8f85-82c87ca3de15,network=Network(4a724d83-7a03-449e-b06f-f9f1f1bb686e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43f79cf7-67') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:45:09 compute-0 nova_compute[265391]: 2025-09-30 18:45:09.908 2 DEBUG os_vif [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:ad:c0,bridge_name='br-int',has_traffic_filtering=True,id=43f79cf7-670d-4eb6-8f85-82c87ca3de15,network=Network(4a724d83-7a03-449e-b06f-f9f1f1bb686e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43f79cf7-67') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Sep 30 18:45:09 compute-0 nova_compute[265391]: 2025-09-30 18:45:09.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:09 compute-0 nova_compute[265391]: 2025-09-30 18:45:09.909 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:45:09 compute-0 nova_compute[265391]: 2025-09-30 18:45:09.910 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:45:09 compute-0 nova_compute[265391]: 2025-09-30 18:45:09.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:09 compute-0 nova_compute[265391]: 2025-09-30 18:45:09.911 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '38403908-dd7c-59c9-9522-526c1909bca5', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:45:09 compute-0 nova_compute[265391]: 2025-09-30 18:45:09.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:09 compute-0 nova_compute[265391]: 2025-09-30 18:45:09.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:09 compute-0 nova_compute[265391]: 2025-09-30 18:45:09.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:09 compute-0 nova_compute[265391]: 2025-09-30 18:45:09.919 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43f79cf7-67, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:45:09 compute-0 nova_compute[265391]: 2025-09-30 18:45:09.919 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap43f79cf7-67, col_values=(('qos', UUID('e6aecf88-c3bc-4a9b-8e90-2ef2ceef6b09')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:45:09 compute-0 nova_compute[265391]: 2025-09-30 18:45:09.920 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap43f79cf7-67, col_values=(('external_ids', {'iface-id': '43f79cf7-670d-4eb6-8f85-82c87ca3de15', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:40:ad:c0', 'vm-uuid': '232379f8-4475-4208-8b8e-8c2ed3c630a0'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:45:09 compute-0 nova_compute[265391]: 2025-09-30 18:45:09.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:09 compute-0 NetworkManager[45059]: <info>  [1759257909.9228] manager: (tap43f79cf7-67): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Sep 30 18:45:09 compute-0 nova_compute[265391]: 2025-09-30 18:45:09.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:45:09 compute-0 nova_compute[265391]: 2025-09-30 18:45:09.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:09 compute-0 nova_compute[265391]: 2025-09-30 18:45:09.929 2 INFO os_vif [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:ad:c0,bridge_name='br-int',has_traffic_filtering=True,id=43f79cf7-670d-4eb6-8f85-82c87ca3de15,network=Network(4a724d83-7a03-449e-b06f-f9f1f1bb686e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43f79cf7-67')
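The plug sequence above (AddBridgeCommand, AddPortCommand, DbSetCommand) is os-vif writing the tap port and its Neutron identifiers into OVSDB. A minimal sketch of an equivalent operation follows, assuming a host with ovs-vsctl available; the port name, iface-id, MAC and VM UUID are copied from the transaction logged at 18:45:09, and shelling out is only an illustrative stand-in for the ovsdbapp transactions os-vif actually runs.

import subprocess

# Illustrative stand-in for the ovsdbapp transaction logged above
# (AddPortCommand + DbSetCommand); os-vif does NOT shell out like this.
PORT = "tap43f79cf7-67"
IFACE_ID = "43f79cf7-670d-4eb6-8f85-82c87ca3de15"
MAC = "fa:16:3e:40:ad:c0"
VM_UUID = "232379f8-4475-4208-8b8e-8c2ed3c630a0"

subprocess.run(
    ["ovs-vsctl", "--may-exist", "add-port", "br-int", PORT,
     "--", "set", "Interface", PORT,
     f"external_ids:iface-id={IFACE_ID}",
     "external_ids:iface-status=active",
     f"external_ids:attached-mac={MAC}",
     f"external_ids:vm-uuid={VM_UUID}"],
    check=True,  # raise CalledProcessError if ovs-vsctl fails
)

Once the interface carries that iface-id, ovn-controller claims the corresponding logical port for this chassis, which is what the binding messages at 18:45:13 below show.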
Sep 30 18:45:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:45:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:45:10.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:45:10 compute-0 nova_compute[265391]: 2025-09-30 18:45:10.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:45:10 compute-0 ceph-mon[73755]: pgmap v2002: 353 pgs: 353 active+clean; 88 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:45:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:45:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2003: 353 pgs: 353 active+clean; 88 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:45:11 compute-0 nova_compute[265391]: 2025-09-30 18:45:11.466 2 DEBUG nova.virt.libvirt.driver [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:45:11 compute-0 nova_compute[265391]: 2025-09-30 18:45:11.466 2 DEBUG nova.virt.libvirt.driver [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:45:11 compute-0 nova_compute[265391]: 2025-09-30 18:45:11.466 2 DEBUG nova.virt.libvirt.driver [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] No VIF found with MAC fa:16:3e:40:ad:c0, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Sep 30 18:45:11 compute-0 nova_compute[265391]: 2025-09-30 18:45:11.467 2 INFO nova.virt.libvirt.driver [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Using config drive
Sep 30 18:45:11 compute-0 nova_compute[265391]: 2025-09-30 18:45:11.487 2 DEBUG nova.storage.rbd_utils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] rbd image 232379f8-4475-4208-8b8e-8c2ed3c630a0_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:45:11 compute-0 nova_compute[265391]: 2025-09-30 18:45:11.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:45:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:45:11.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:45:12 compute-0 nova_compute[265391]: 2025-09-30 18:45:12.003 2 WARNING neutronclient.v2_0.client [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:45:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:45:12.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:12 compute-0 ceph-mon[73755]: pgmap v2003: 353 pgs: 353 active+clean; 88 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:45:13 compute-0 nova_compute[265391]: 2025-09-30 18:45:13.076 2 INFO nova.virt.libvirt.driver [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Creating config drive at /var/lib/nova/instances/232379f8-4475-4208-8b8e-8c2ed3c630a0/disk.config
Sep 30 18:45:13 compute-0 nova_compute[265391]: 2025-09-30 18:45:13.081 2 DEBUG oslo_concurrency.processutils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/232379f8-4475-4208-8b8e-8c2ed3c630a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmplxxkl3u_ execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:45:13 compute-0 nova_compute[265391]: 2025-09-30 18:45:13.212 2 DEBUG oslo_concurrency.processutils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/232379f8-4475-4208-8b8e-8c2ed3c630a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmplxxkl3u_" returned: 0 in 0.131s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:45:13 compute-0 nova_compute[265391]: 2025-09-30 18:45:13.234 2 DEBUG nova.storage.rbd_utils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] rbd image 232379f8-4475-4208-8b8e-8c2ed3c630a0_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:45:13 compute-0 nova_compute[265391]: 2025-09-30 18:45:13.237 2 DEBUG oslo_concurrency.processutils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/232379f8-4475-4208-8b8e-8c2ed3c630a0/disk.config 232379f8-4475-4208-8b8e-8c2ed3c630a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:45:13 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Sep 30 18:45:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2004: 353 pgs: 353 active+clean; 88 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:45:13 compute-0 nova_compute[265391]: 2025-09-30 18:45:13.601 2 DEBUG oslo_concurrency.processutils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/232379f8-4475-4208-8b8e-8c2ed3c630a0/disk.config 232379f8-4475-4208-8b8e-8c2ed3c630a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.363s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:45:13 compute-0 nova_compute[265391]: 2025-09-30 18:45:13.601 2 INFO nova.virt.libvirt.driver [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Deleting local config drive /var/lib/nova/instances/232379f8-4475-4208-8b8e-8c2ed3c630a0/disk.config because it was imported into RBD.
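The config-drive steps logged above are: build an ISO9660 image with mkisofs from a temporary metadata tree, import it into the Ceph vms pool as <instance-uuid>_disk.config, then delete the local copy. A simplified replay of those commands is sketched below, with paths and names copied from the log; nova runs them through oslo_concurrency.processutils rather than subprocess, the -publisher flag is dropped for brevity, and /tmp/tmplxxkl3u_ is the temporary tree nova generated.

import os
import subprocess

# Simplified replay of the config-drive build and RBD import logged above.
INSTANCE = "232379f8-4475-4208-8b8e-8c2ed3c630a0"
LOCAL_ISO = f"/var/lib/nova/instances/{INSTANCE}/disk.config"
METADATA_TREE = "/tmp/tmplxxkl3u_"  # temporary metadata tree built by nova (from the log)

# Build the ISO9660 config drive labelled "config-2".
subprocess.run(
    ["mkisofs", "-o", LOCAL_ISO, "-ldots", "-allow-lowercase", "-allow-multidot",
     "-l", "-quiet", "-J", "-r", "-V", "config-2", METADATA_TREE],
    check=True,
)
# Import it into the Ceph "vms" pool, then drop the local copy.
subprocess.run(
    ["rbd", "import", "--pool", "vms", LOCAL_ISO, f"{INSTANCE}_disk.config",
     "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True,
)
os.remove(LOCAL_ISO)  # mirrors "Deleting local config drive ... imported into RBD"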
Sep 30 18:45:13 compute-0 systemd[1]: Starting libvirt secret daemon...
Sep 30 18:45:13 compute-0 systemd[1]: Started libvirt secret daemon.
Sep 30 18:45:13 compute-0 kernel: tap43f79cf7-67: entered promiscuous mode
Sep 30 18:45:13 compute-0 NetworkManager[45059]: <info>  [1759257913.6856] manager: (tap43f79cf7-67): new Tun device (/org/freedesktop/NetworkManager/Devices/112)
Sep 30 18:45:13 compute-0 nova_compute[265391]: 2025-09-30 18:45:13.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:13 compute-0 ovn_controller[156242]: 2025-09-30T18:45:13Z|00292|binding|INFO|Claiming lport 43f79cf7-670d-4eb6-8f85-82c87ca3de15 for this chassis.
Sep 30 18:45:13 compute-0 ovn_controller[156242]: 2025-09-30T18:45:13Z|00293|binding|INFO|43f79cf7-670d-4eb6-8f85-82c87ca3de15: Claiming fa:16:3e:40:ad:c0 10.100.0.11
Sep 30 18:45:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:13.709 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:ad:c0 10.100.0.11'], port_security=['fa:16:3e:40:ad:c0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '232379f8-4475-4208-8b8e-8c2ed3c630a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a724d83-7a03-449e-b06f-f9f1f1bb686e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0e1882e8f3e74aa3840e38f2ce263f25', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ac9c09fc-7c26-43d2-bbe1-4bd3d1cfcf25', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=793ccd68-a96c-4ced-8449-bfb1c479c4b4, chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=43f79cf7-670d-4eb6-8f85-82c87ca3de15) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:45:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:13.710 166158 INFO neutron.agent.ovn.metadata.agent [-] Port 43f79cf7-670d-4eb6-8f85-82c87ca3de15 in datapath 4a724d83-7a03-449e-b06f-f9f1f1bb686e bound to our chassis
Sep 30 18:45:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:13.714 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4a724d83-7a03-449e-b06f-f9f1f1bb686e
Sep 30 18:45:13 compute-0 systemd-machined[219917]: New machine qemu-26-instance-00000022.
Sep 30 18:45:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:13.727 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[b263aaf3-281a-4a71-a7c1-144bbe1d08ec]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:45:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:13.728 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4a724d83-71 in ovnmeta-4a724d83-7a03-449e-b06f-f9f1f1bb686e namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Sep 30 18:45:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:13.729 292173 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4a724d83-70 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Sep 30 18:45:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:13.729 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[9ad38992-40a4-47ba-a756-e9e9a75988d9]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:45:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:13.730 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[6096fe6c-2112-4060-b2b5-26b05050a2ee]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:45:13 compute-0 systemd[1]: Started Virtual Machine qemu-26-instance-00000022.
Sep 30 18:45:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:13.742 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[aa44b5cd-298e-48ab-a236-e228bb2d768e]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:45:13 compute-0 systemd-udevd[355228]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:45:13 compute-0 NetworkManager[45059]: <info>  [1759257913.7584] device (tap43f79cf7-67): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 18:45:13 compute-0 NetworkManager[45059]: <info>  [1759257913.7597] device (tap43f79cf7-67): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Sep 30 18:45:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:13.759 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[4e1928b3-d40c-4488-a9e8-0095aeb395f6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:45:13 compute-0 nova_compute[265391]: 2025-09-30 18:45:13.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:13 compute-0 ceph-mon[73755]: pgmap v2004: 353 pgs: 353 active+clean; 88 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:45:13 compute-0 nova_compute[265391]: 2025-09-30 18:45:13.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:13 compute-0 ovn_controller[156242]: 2025-09-30T18:45:13Z|00294|binding|INFO|Setting lport 43f79cf7-670d-4eb6-8f85-82c87ca3de15 ovn-installed in OVS
Sep 30 18:45:13 compute-0 ovn_controller[156242]: 2025-09-30T18:45:13Z|00295|binding|INFO|Setting lport 43f79cf7-670d-4eb6-8f85-82c87ca3de15 up in Southbound
Sep 30 18:45:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:13.786 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[e7bd1000-1a3c-4cee-b7eb-304470199d18]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:45:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:13.791 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[16514762-3ffd-4673-9f65-d04d05ddacf5]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:45:13 compute-0 NetworkManager[45059]: <info>  [1759257913.7928] manager: (tap4a724d83-70): new Veth device (/org/freedesktop/NetworkManager/Devices/113)
Sep 30 18:45:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:13.827 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[dc70ca6f-3627-4cc3-affd-32c2509f1c39]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:45:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:13.832 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[fbd3d3ec-2db6-4ebc-b629-5f41ea059b40]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:45:13 compute-0 NetworkManager[45059]: <info>  [1759257913.8555] device (tap4a724d83-70): carrier: link connected
Sep 30 18:45:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:13.863 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[5f1cb206-88cf-4cce-a8a6-094b172ac424]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:45:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:45:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:45:13.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:45:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:13.880 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[c8be9596-fa7e-4cf4-bfd0-9472ea6ddbcb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a724d83-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:31:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 81], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 651255, 'reachable_time': 40989, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355260, 'error': None, 'target': 'ovnmeta-4a724d83-7a03-449e-b06f-f9f1f1bb686e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:45:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:13.896 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[65f24e34-fabf-4d15-87f9-a7a3170c56b7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1e:31e0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651255, 'tstamp': 651255}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355261, 'error': None, 'target': 'ovnmeta-4a724d83-7a03-449e-b06f-f9f1f1bb686e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:45:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:45:13.901Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:45:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:13.914 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[0b0ab342-c32c-409f-a54f-afeea97e02f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a724d83-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:31:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 81], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 651255, 'reachable_time': 40989, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 355262, 'error': None, 'target': 'ovnmeta-4a724d83-7a03-449e-b06f-f9f1f1bb686e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:45:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:13.946 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[b688e3c2-76ab-4028-af03-239520ed8199]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:45:14 compute-0 nova_compute[265391]: 2025-09-30 18:45:14.003 2 DEBUG nova.compute.manager [req-e9a4a1f9-1525-497d-bb89-ffbd02637faf req-57077b59-d41b-4de5-8421-270188276e22 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Received event network-vif-plugged-43f79cf7-670d-4eb6-8f85-82c87ca3de15 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:45:14 compute-0 nova_compute[265391]: 2025-09-30 18:45:14.003 2 DEBUG oslo_concurrency.lockutils [req-e9a4a1f9-1525-497d-bb89-ffbd02637faf req-57077b59-d41b-4de5-8421-270188276e22 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "232379f8-4475-4208-8b8e-8c2ed3c630a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:45:14 compute-0 nova_compute[265391]: 2025-09-30 18:45:14.004 2 DEBUG oslo_concurrency.lockutils [req-e9a4a1f9-1525-497d-bb89-ffbd02637faf req-57077b59-d41b-4de5-8421-270188276e22 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "232379f8-4475-4208-8b8e-8c2ed3c630a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:45:14 compute-0 nova_compute[265391]: 2025-09-30 18:45:14.004 2 DEBUG oslo_concurrency.lockutils [req-e9a4a1f9-1525-497d-bb89-ffbd02637faf req-57077b59-d41b-4de5-8421-270188276e22 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "232379f8-4475-4208-8b8e-8c2ed3c630a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:45:14 compute-0 nova_compute[265391]: 2025-09-30 18:45:14.004 2 DEBUG nova.compute.manager [req-e9a4a1f9-1525-497d-bb89-ffbd02637faf req-57077b59-d41b-4de5-8421-270188276e22 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Processing event network-vif-plugged-43f79cf7-670d-4eb6-8f85-82c87ca3de15 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:14.015 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[947526be-4bc9-4cd0-8a79-d47539ff7ada]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:45:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:45:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:45:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:14.029 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a724d83-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:14.029 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:14.029 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a724d83-70, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:45:14 compute-0 kernel: tap4a724d83-70: entered promiscuous mode
Sep 30 18:45:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:45:14 compute-0 NetworkManager[45059]: <info>  [1759257914.0323] manager: (tap4a724d83-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/114)
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:14.033 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4a724d83-70, col_values=(('external_ids', {'iface-id': 'd979d9c2-641a-4559-b95a-f55833182093'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
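Between 18:45:13.728 and 18:45:14.033 the metadata agent provisions the datapath: it creates a veth pair whose namespace end (tap4a724d83-71) lives in ovnmeta-4a724d83-7a03-449e-b06f-f9f1f1bb686e, then plugs the host end (tap4a724d83-70) into br-int and binds it to OVN port d979d9c2-641a-4559-b95a-f55833182093. The agent does this through pyroute2 under oslo.privsep plus ovsdbapp; the CLI-based sketch below is only a rough, assumption-level equivalent (address assignment inside the namespace is omitted), with names taken from the log.

import subprocess

# Rough manual equivalent of the metadata-datapath provisioning logged above.
# The real agent uses pyroute2 via oslo.privsep and ovsdbapp, not these CLI
# tools, and this needs root to run.
NET = "4a724d83-7a03-449e-b06f-f9f1f1bb686e"
NS = f"ovnmeta-{NET}"
HOST_IF, NS_IF = "tap4a724d83-70", "tap4a724d83-71"
METADATA_PORT = "d979d9c2-641a-4559-b95a-f55833182093"

def run(*cmd):
    subprocess.run(cmd, check=True)

run("ip", "netns", "add", NS)
run("ip", "link", "add", HOST_IF, "type", "veth", "peer", "name", NS_IF)
run("ip", "link", "set", NS_IF, "netns", NS)
run("ip", "netns", "exec", NS, "ip", "link", "set", NS_IF, "up")
run("ip", "link", "set", HOST_IF, "up")
# Attach the host end to br-int and bind it to the OVN metadata port.
run("ovs-vsctl", "--may-exist", "add-port", "br-int", HOST_IF,
    "--", "set", "Interface", HOST_IF, f"external_ids:iface-id={METADATA_PORT}")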
Sep 30 18:45:14 compute-0 ovn_controller[156242]: 2025-09-30T18:45:14Z|00296|binding|INFO|Releasing lport d979d9c2-641a-4559-b95a-f55833182093 from this chassis (sb_readonly=0)
Sep 30 18:45:14 compute-0 nova_compute[265391]: 2025-09-30 18:45:14.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:14.048 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:45:14 compute-0 nova_compute[265391]: 2025-09-30 18:45:14.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:14.050 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[c667fa87-d76a-49ea-b7de-a9d3a45df54f]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:14.051 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4a724d83-7a03-449e-b06f-f9f1f1bb686e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4a724d83-7a03-449e-b06f-f9f1f1bb686e.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:14.051 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4a724d83-7a03-449e-b06f-f9f1f1bb686e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4a724d83-7a03-449e-b06f-f9f1f1bb686e.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:14.051 166158 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for 4a724d83-7a03-449e-b06f-f9f1f1bb686e disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:14.051 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4a724d83-7a03-449e-b06f-f9f1f1bb686e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4a724d83-7a03-449e-b06f-f9f1f1bb686e.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:14.052 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[9d5cee5a-665e-4576-9bca-2dd50bc80187]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:14.052 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4a724d83-7a03-449e-b06f-f9f1f1bb686e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4a724d83-7a03-449e-b06f-f9f1f1bb686e.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:14.053 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[aeb0b82e-26fe-4c37-b1d7-73e86fb1a7a1]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:14.053 166158 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]: global
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]:     log         /dev/log local0 debug
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]:     log-tag     haproxy-metadata-proxy-4a724d83-7a03-449e-b06f-f9f1f1bb686e
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]:     user        root
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]:     group       root
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]:     maxconn     1024
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]:     pidfile     /var/lib/neutron/external/pids/4a724d83-7a03-449e-b06f-f9f1f1bb686e.pid.haproxy
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]:     daemon
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]: defaults
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]:     log global
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]:     mode http
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]:     option httplog
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]:     option dontlognull
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]:     option http-server-close
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]:     option forwardfor
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]:     retries                 3
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]:     timeout http-request    30s
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]:     timeout connect         30s
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]:     timeout client          32s
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]:     timeout server          32s
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]:     timeout http-keep-alive 30s
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]: listen listener
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]:     bind 169.254.169.254:80
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]:     
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]:     server metadata /var/lib/neutron/metadata_proxy
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]:     http-request add-header X-OVN-Network-ID 4a724d83-7a03-449e-b06f-f9f1f1bb686e
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:14.054 166158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4a724d83-7a03-449e-b06f-f9f1f1bb686e', 'env', 'PROCESS_TAG=haproxy-4a724d83-7a03-449e-b06f-f9f1f1bb686e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4a724d83-7a03-449e-b06f-f9f1f1bb686e.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
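The haproxy configuration printed above is rendered to /var/lib/neutron/ovn-metadata-proxy/4a724d83-7a03-449e-b06f-f9f1f1bb686e.conf and haproxy is started inside the ovnmeta- namespace via neutron-rootwrap, as the command line just logged shows. A direct equivalent of that invocation is sketched below; the agent goes through sudo and rootwrap instead, and this needs root to run.

import subprocess

# Direct equivalent of the rootwrap invocation logged above.
NET = "4a724d83-7a03-449e-b06f-f9f1f1bb686e"
subprocess.run(
    ["ip", "netns", "exec", f"ovnmeta-{NET}",
     "env", f"PROCESS_TAG=haproxy-{NET}",
     "haproxy", "-f", f"/var/lib/neutron/ovn-metadata-proxy/{NET}.conf"],
    check=True,
)

haproxy then forks a worker and binds 169.254.169.254:80 inside the namespace, which matches the "New worker ... forked" and "Loading success." notices from the neutron-haproxy-ovnmeta container that follow.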
Sep 30 18:45:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:45:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:45:14.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:45:14 compute-0 podman[355335]: 2025-09-30 18:45:14.434444367 +0000 UTC m=+0.051658213 container create f9a5681654dc5a16eb540bf2641c7d71d1272f8f2c63c16a920ec19741ae6273 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-4a724d83-7a03-449e-b06f-f9f1f1bb686e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.build-date=20250930, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4)
Sep 30 18:45:14 compute-0 systemd[1]: Started libpod-conmon-f9a5681654dc5a16eb540bf2641c7d71d1272f8f2c63c16a920ec19741ae6273.scope.
Sep 30 18:45:14 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:45:14 compute-0 podman[355335]: 2025-09-30 18:45:14.404199863 +0000 UTC m=+0.021413729 image pull 8925e336c8a9c8e5b40d98e4715ad60992d6f21ad0a398170a686c34f922c024 38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Sep 30 18:45:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc00bf7f2966f6032419279011829979354ba2079bc2f409e49f4ca8245c001e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Sep 30 18:45:14 compute-0 podman[355335]: 2025-09-30 18:45:14.512367541 +0000 UTC m=+0.129581407 container init f9a5681654dc5a16eb540bf2641c7d71d1272f8f2c63c16a920ec19741ae6273 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-4a724d83-7a03-449e-b06f-f9f1f1bb686e, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.build-date=20250930, tcib_managed=true)
Sep 30 18:45:14 compute-0 podman[355335]: 2025-09-30 18:45:14.517595915 +0000 UTC m=+0.134809761 container start f9a5681654dc5a16eb540bf2641c7d71d1272f8f2c63c16a920ec19741ae6273 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-4a724d83-7a03-449e-b06f-f9f1f1bb686e, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20250930, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Sep 30 18:45:14 compute-0 neutron-haproxy-ovnmeta-4a724d83-7a03-449e-b06f-f9f1f1bb686e[355352]: [NOTICE]   (355356) : New worker (355358) forked
Sep 30 18:45:14 compute-0 neutron-haproxy-ovnmeta-4a724d83-7a03-449e-b06f-f9f1f1bb686e[355352]: [NOTICE]   (355356) : Loading success.
Sep 30 18:45:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:14.582 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:45:14 compute-0 nova_compute[265391]: 2025-09-30 18:45:14.804 2 DEBUG nova.compute.manager [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:45:14 compute-0 nova_compute[265391]: 2025-09-30 18:45:14.807 2 DEBUG nova.virt.libvirt.driver [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Sep 30 18:45:14 compute-0 nova_compute[265391]: 2025-09-30 18:45:14.810 2 INFO nova.virt.libvirt.driver [-] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Instance spawned successfully.
Sep 30 18:45:14 compute-0 nova_compute[265391]: 2025-09-30 18:45:14.811 2 DEBUG nova.virt.libvirt.driver [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Sep 30 18:45:14 compute-0 nova_compute[265391]: 2025-09-30 18:45:14.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:15 compute-0 nova_compute[265391]: 2025-09-30 18:45:15.323 2 DEBUG nova.virt.libvirt.driver [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:45:15 compute-0 nova_compute[265391]: 2025-09-30 18:45:15.324 2 DEBUG nova.virt.libvirt.driver [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:45:15 compute-0 nova_compute[265391]: 2025-09-30 18:45:15.325 2 DEBUG nova.virt.libvirt.driver [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:45:15 compute-0 nova_compute[265391]: 2025-09-30 18:45:15.325 2 DEBUG nova.virt.libvirt.driver [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:45:15 compute-0 nova_compute[265391]: 2025-09-30 18:45:15.326 2 DEBUG nova.virt.libvirt.driver [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:45:15 compute-0 nova_compute[265391]: 2025-09-30 18:45:15.327 2 DEBUG nova.virt.libvirt.driver [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:45:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2005: 353 pgs: 353 active+clean; 88 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:45:15 compute-0 nova_compute[265391]: 2025-09-30 18:45:15.838 2 INFO nova.compute.manager [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Took 10.42 seconds to spawn the instance on the hypervisor.
Sep 30 18:45:15 compute-0 nova_compute[265391]: 2025-09-30 18:45:15.839 2 DEBUG nova.compute.manager [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Sep 30 18:45:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:45:15.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:16 compute-0 nova_compute[265391]: 2025-09-30 18:45:16.096 2 DEBUG nova.compute.manager [req-94f376b0-00b4-4ea7-9aeb-988dfcbbb34e req-68ee988a-7e9c-49cf-ac48-f23840c6804e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Received event network-vif-plugged-43f79cf7-670d-4eb6-8f85-82c87ca3de15 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:45:16 compute-0 nova_compute[265391]: 2025-09-30 18:45:16.096 2 DEBUG oslo_concurrency.lockutils [req-94f376b0-00b4-4ea7-9aeb-988dfcbbb34e req-68ee988a-7e9c-49cf-ac48-f23840c6804e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "232379f8-4475-4208-8b8e-8c2ed3c630a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:45:16 compute-0 nova_compute[265391]: 2025-09-30 18:45:16.097 2 DEBUG oslo_concurrency.lockutils [req-94f376b0-00b4-4ea7-9aeb-988dfcbbb34e req-68ee988a-7e9c-49cf-ac48-f23840c6804e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "232379f8-4475-4208-8b8e-8c2ed3c630a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:45:16 compute-0 nova_compute[265391]: 2025-09-30 18:45:16.097 2 DEBUG oslo_concurrency.lockutils [req-94f376b0-00b4-4ea7-9aeb-988dfcbbb34e req-68ee988a-7e9c-49cf-ac48-f23840c6804e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "232379f8-4475-4208-8b8e-8c2ed3c630a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:45:16 compute-0 nova_compute[265391]: 2025-09-30 18:45:16.097 2 DEBUG nova.compute.manager [req-94f376b0-00b4-4ea7-9aeb-988dfcbbb34e req-68ee988a-7e9c-49cf-ac48-f23840c6804e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] No waiting events found dispatching network-vif-plugged-43f79cf7-670d-4eb6-8f85-82c87ca3de15 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:45:16 compute-0 nova_compute[265391]: 2025-09-30 18:45:16.097 2 WARNING nova.compute.manager [req-94f376b0-00b4-4ea7-9aeb-988dfcbbb34e req-68ee988a-7e9c-49cf-ac48-f23840c6804e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Received unexpected event network-vif-plugged-43f79cf7-670d-4eb6-8f85-82c87ca3de15 for instance with vm_state active and task_state None.
Sep 30 18:45:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:45:16.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:45:16 compute-0 nova_compute[265391]: 2025-09-30 18:45:16.373 2 INFO nova.compute.manager [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Took 16.06 seconds to build instance.
Sep 30 18:45:16 compute-0 ceph-mon[73755]: pgmap v2005: 353 pgs: 353 active+clean; 88 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:45:16 compute-0 nova_compute[265391]: 2025-09-30 18:45:16.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:16 compute-0 nova_compute[265391]: 2025-09-30 18:45:16.879 2 DEBUG oslo_concurrency.lockutils [None req-8ea570b0-8fcc-4a80-a3c7-dbcf1600ff34 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lock "232379f8-4475-4208-8b8e-8c2ed3c630a0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.578s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:45:17 compute-0 sudo[355370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:45:17 compute-0 sudo[355370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:45:17 compute-0 sudo[355370]: pam_unix(sudo:session): session closed for user root
Sep 30 18:45:17 compute-0 sudo[355395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:45:17 compute-0 sudo[355395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:45:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:45:17.375Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:45:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:45:17.375Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:45:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2006: 353 pgs: 353 active+clean; 88 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:45:17 compute-0 sudo[355395]: pam_unix(sudo:session): session closed for user root
Sep 30 18:45:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:45:17 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:45:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:45:17 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:45:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2007: 353 pgs: 353 active+clean; 88 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Sep 30 18:45:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2008: 353 pgs: 353 active+clean; 88 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 362 B/s rd, 0 op/s
Sep 30 18:45:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:45:17 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:45:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:45:17 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:45:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:45:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:45:17.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:45:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:45:17 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:45:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:45:17 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:45:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:45:17 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:45:17 compute-0 sudo[355454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:45:17 compute-0 sudo[355454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:45:17 compute-0 sudo[355454]: pam_unix(sudo:session): session closed for user root
Sep 30 18:45:17 compute-0 sudo[355479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:45:17 compute-0 sudo[355479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:45:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:45:18.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:18 compute-0 podman[355547]: 2025-09-30 18:45:18.429508333 +0000 UTC m=+0.044804227 container create c2bb6b94c1dbc8e65a168cc7bd6ff466061c7f403b8076c03a82af722e20dd3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Sep 30 18:45:18 compute-0 systemd[1]: Started libpod-conmon-c2bb6b94c1dbc8e65a168cc7bd6ff466061c7f403b8076c03a82af722e20dd3f.scope.
Sep 30 18:45:18 compute-0 ceph-mon[73755]: pgmap v2006: 353 pgs: 353 active+clean; 88 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:45:18 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:45:18 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:45:18 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:45:18 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:45:18 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:45:18 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:45:18 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:45:18 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:45:18 compute-0 podman[355547]: 2025-09-30 18:45:18.4133751 +0000 UTC m=+0.028671014 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:45:18 compute-0 podman[355547]: 2025-09-30 18:45:18.520620705 +0000 UTC m=+0.135916619 container init c2bb6b94c1dbc8e65a168cc7bd6ff466061c7f403b8076c03a82af722e20dd3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_cerf, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:45:18 compute-0 podman[355547]: 2025-09-30 18:45:18.527920662 +0000 UTC m=+0.143216556 container start c2bb6b94c1dbc8e65a168cc7bd6ff466061c7f403b8076c03a82af722e20dd3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:45:18 compute-0 podman[355547]: 2025-09-30 18:45:18.531086183 +0000 UTC m=+0.146382087 container attach c2bb6b94c1dbc8e65a168cc7bd6ff466061c7f403b8076c03a82af722e20dd3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_cerf, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:45:18 compute-0 systemd[1]: libpod-c2bb6b94c1dbc8e65a168cc7bd6ff466061c7f403b8076c03a82af722e20dd3f.scope: Deactivated successfully.
Sep 30 18:45:18 compute-0 fervent_cerf[355563]: 167 167
Sep 30 18:45:18 compute-0 conmon[355563]: conmon c2bb6b94c1dbc8e65a16 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c2bb6b94c1dbc8e65a168cc7bd6ff466061c7f403b8076c03a82af722e20dd3f.scope/container/memory.events
Sep 30 18:45:18 compute-0 podman[355547]: 2025-09-30 18:45:18.534371867 +0000 UTC m=+0.149667771 container died c2bb6b94c1dbc8e65a168cc7bd6ff466061c7f403b8076c03a82af722e20dd3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_cerf, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:45:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b5cd51d72b803ec7f8bf1071841241e97fe969db81716e7e6bf4940f76152ee-merged.mount: Deactivated successfully.
Sep 30 18:45:18 compute-0 podman[355547]: 2025-09-30 18:45:18.576628368 +0000 UTC m=+0.191924272 container remove c2bb6b94c1dbc8e65a168cc7bd6ff466061c7f403b8076c03a82af722e20dd3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_cerf, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Sep 30 18:45:18 compute-0 systemd[1]: libpod-conmon-c2bb6b94c1dbc8e65a168cc7bd6ff466061c7f403b8076c03a82af722e20dd3f.scope: Deactivated successfully.
Sep 30 18:45:18 compute-0 podman[355587]: 2025-09-30 18:45:18.756767218 +0000 UTC m=+0.046189293 container create 69bf8bf1cd4fb33050aa0ff273ae11a86cc4b4e240b3f27faf5bd4366523a83b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_booth, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Sep 30 18:45:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:45:18] "GET /metrics HTTP/1.1" 200 46709 "" "Prometheus/2.51.0"
Sep 30 18:45:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:45:18] "GET /metrics HTTP/1.1" 200 46709 "" "Prometheus/2.51.0"
Sep 30 18:45:18 compute-0 systemd[1]: Started libpod-conmon-69bf8bf1cd4fb33050aa0ff273ae11a86cc4b4e240b3f27faf5bd4366523a83b.scope.
Sep 30 18:45:18 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:45:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbe5476672afb49d80b384ca589bcfdd1fca1547ea31931846978fb913e5b901/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:45:18 compute-0 podman[355587]: 2025-09-30 18:45:18.738768838 +0000 UTC m=+0.028190933 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:45:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbe5476672afb49d80b384ca589bcfdd1fca1547ea31931846978fb913e5b901/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:45:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbe5476672afb49d80b384ca589bcfdd1fca1547ea31931846978fb913e5b901/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:45:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbe5476672afb49d80b384ca589bcfdd1fca1547ea31931846978fb913e5b901/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:45:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbe5476672afb49d80b384ca589bcfdd1fca1547ea31931846978fb913e5b901/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:45:18 compute-0 podman[355587]: 2025-09-30 18:45:18.849128282 +0000 UTC m=+0.138550357 container init 69bf8bf1cd4fb33050aa0ff273ae11a86cc4b4e240b3f27faf5bd4366523a83b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_booth, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Sep 30 18:45:18 compute-0 podman[355587]: 2025-09-30 18:45:18.856287765 +0000 UTC m=+0.145709840 container start 69bf8bf1cd4fb33050aa0ff273ae11a86cc4b4e240b3f27faf5bd4366523a83b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_booth, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Sep 30 18:45:18 compute-0 podman[355587]: 2025-09-30 18:45:18.859439926 +0000 UTC m=+0.148862031 container attach 69bf8bf1cd4fb33050aa0ff273ae11a86cc4b4e240b3f27faf5bd4366523a83b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_booth, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Sep 30 18:45:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:45:18.914Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:45:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:45:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:45:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:45:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:45:19 compute-0 sad_booth[355604]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:45:19 compute-0 sad_booth[355604]: --> All data devices are unavailable
Sep 30 18:45:19 compute-0 systemd[1]: libpod-69bf8bf1cd4fb33050aa0ff273ae11a86cc4b4e240b3f27faf5bd4366523a83b.scope: Deactivated successfully.
Sep 30 18:45:19 compute-0 podman[355619]: 2025-09-30 18:45:19.278691874 +0000 UTC m=+0.035819348 container died 69bf8bf1cd4fb33050aa0ff273ae11a86cc4b4e240b3f27faf5bd4366523a83b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_booth, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:45:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbe5476672afb49d80b384ca589bcfdd1fca1547ea31931846978fb913e5b901-merged.mount: Deactivated successfully.
Sep 30 18:45:19 compute-0 podman[355619]: 2025-09-30 18:45:19.317353043 +0000 UTC m=+0.074480497 container remove 69bf8bf1cd4fb33050aa0ff273ae11a86cc4b4e240b3f27faf5bd4366523a83b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_booth, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Sep 30 18:45:19 compute-0 systemd[1]: libpod-conmon-69bf8bf1cd4fb33050aa0ff273ae11a86cc4b4e240b3f27faf5bd4366523a83b.scope: Deactivated successfully.
Sep 30 18:45:19 compute-0 sudo[355479]: pam_unix(sudo:session): session closed for user root
Sep 30 18:45:19 compute-0 sudo[355635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:45:19 compute-0 sudo[355635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:45:19 compute-0 sudo[355635]: pam_unix(sudo:session): session closed for user root
Sep 30 18:45:19 compute-0 ceph-mon[73755]: pgmap v2007: 353 pgs: 353 active+clean; 88 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Sep 30 18:45:19 compute-0 ceph-mon[73755]: pgmap v2008: 353 pgs: 353 active+clean; 88 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 362 B/s rd, 0 op/s
Sep 30 18:45:19 compute-0 sudo[355660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:45:19 compute-0 sudo[355660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:45:19 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:19.583 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:45:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2009: 353 pgs: 353 active+clean; 88 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 2.7 MiB/s rd, 19 KiB/s wr, 105 op/s
Sep 30 18:45:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:45:19.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:19 compute-0 nova_compute[265391]: 2025-09-30 18:45:19.943 2 DEBUG oslo_concurrency.lockutils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Acquiring lock "31444af1-147d-40c9-b93a-19a7618c7703" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:45:19 compute-0 nova_compute[265391]: 2025-09-30 18:45:19.943 2 DEBUG oslo_concurrency.lockutils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lock "31444af1-147d-40c9-b93a-19a7618c7703" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:45:19 compute-0 nova_compute[265391]: 2025-09-30 18:45:19.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:19 compute-0 podman[355726]: 2025-09-30 18:45:19.979631272 +0000 UTC m=+0.039981395 container create 8c97cfa7de1c2784e554fe64a16e9c692348f183104492e32082470ddf123f36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_banzai, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:45:20 compute-0 systemd[1]: Started libpod-conmon-8c97cfa7de1c2784e554fe64a16e9c692348f183104492e32082470ddf123f36.scope.
Sep 30 18:45:20 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:45:20 compute-0 podman[355726]: 2025-09-30 18:45:19.961522998 +0000 UTC m=+0.021873141 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:45:20 compute-0 podman[355726]: 2025-09-30 18:45:20.06552822 +0000 UTC m=+0.125878353 container init 8c97cfa7de1c2784e554fe64a16e9c692348f183104492e32082470ddf123f36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1)
Sep 30 18:45:20 compute-0 podman[355726]: 2025-09-30 18:45:20.072932109 +0000 UTC m=+0.133282252 container start 8c97cfa7de1c2784e554fe64a16e9c692348f183104492e32082470ddf123f36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_banzai, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Sep 30 18:45:20 compute-0 podman[355726]: 2025-09-30 18:45:20.076663975 +0000 UTC m=+0.137014118 container attach 8c97cfa7de1c2784e554fe64a16e9c692348f183104492e32082470ddf123f36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_banzai, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:45:20 compute-0 exciting_banzai[355742]: 167 167
Sep 30 18:45:20 compute-0 systemd[1]: libpod-8c97cfa7de1c2784e554fe64a16e9c692348f183104492e32082470ddf123f36.scope: Deactivated successfully.
Sep 30 18:45:20 compute-0 podman[355726]: 2025-09-30 18:45:20.078647265 +0000 UTC m=+0.138997388 container died 8c97cfa7de1c2784e554fe64a16e9c692348f183104492e32082470ddf123f36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_banzai, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:45:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-d653c658ba977d06d7c9504b33ecea53b815297f1e3be1f0282ecad680c7cf7a-merged.mount: Deactivated successfully.
Sep 30 18:45:20 compute-0 podman[355726]: 2025-09-30 18:45:20.120049585 +0000 UTC m=+0.180399718 container remove 8c97cfa7de1c2784e554fe64a16e9c692348f183104492e32082470ddf123f36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_banzai, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 18:45:20 compute-0 systemd[1]: libpod-conmon-8c97cfa7de1c2784e554fe64a16e9c692348f183104492e32082470ddf123f36.scope: Deactivated successfully.
Sep 30 18:45:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:45:20.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:20 compute-0 podman[355767]: 2025-09-30 18:45:20.329806983 +0000 UTC m=+0.077559726 container create 582addbd18056528971dc9c9ef7e8e39ade080f678503c0be249d11feafb408b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:45:20 compute-0 systemd[1]: Started libpod-conmon-582addbd18056528971dc9c9ef7e8e39ade080f678503c0be249d11feafb408b.scope.
Sep 30 18:45:20 compute-0 podman[355767]: 2025-09-30 18:45:20.2886815 +0000 UTC m=+0.036434333 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:45:20 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:45:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3fe7a2dbf1f8c0e97e0d3c5e55e36a280e2d82ee347357ddb84b7a5e5a55abc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:45:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3fe7a2dbf1f8c0e97e0d3c5e55e36a280e2d82ee347357ddb84b7a5e5a55abc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:45:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3fe7a2dbf1f8c0e97e0d3c5e55e36a280e2d82ee347357ddb84b7a5e5a55abc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:45:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3fe7a2dbf1f8c0e97e0d3c5e55e36a280e2d82ee347357ddb84b7a5e5a55abc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:45:20 compute-0 podman[355767]: 2025-09-30 18:45:20.433784754 +0000 UTC m=+0.181537577 container init 582addbd18056528971dc9c9ef7e8e39ade080f678503c0be249d11feafb408b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:45:20 compute-0 podman[355767]: 2025-09-30 18:45:20.443581065 +0000 UTC m=+0.191333828 container start 582addbd18056528971dc9c9ef7e8e39ade080f678503c0be249d11feafb408b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_easley, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Sep 30 18:45:20 compute-0 podman[355767]: 2025-09-30 18:45:20.447470104 +0000 UTC m=+0.195222937 container attach 582addbd18056528971dc9c9ef7e8e39ade080f678503c0be249d11feafb408b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_easley, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Sep 30 18:45:20 compute-0 nova_compute[265391]: 2025-09-30 18:45:20.459 2 DEBUG nova.compute.manager [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Sep 30 18:45:20 compute-0 practical_easley[355783]: {
Sep 30 18:45:20 compute-0 practical_easley[355783]:     "0": [
Sep 30 18:45:20 compute-0 practical_easley[355783]:         {
Sep 30 18:45:20 compute-0 practical_easley[355783]:             "devices": [
Sep 30 18:45:20 compute-0 practical_easley[355783]:                 "/dev/loop3"
Sep 30 18:45:20 compute-0 practical_easley[355783]:             ],
Sep 30 18:45:20 compute-0 practical_easley[355783]:             "lv_name": "ceph_lv0",
Sep 30 18:45:20 compute-0 practical_easley[355783]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:45:20 compute-0 practical_easley[355783]:             "lv_size": "21470642176",
Sep 30 18:45:20 compute-0 practical_easley[355783]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:45:20 compute-0 practical_easley[355783]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:45:20 compute-0 practical_easley[355783]:             "name": "ceph_lv0",
Sep 30 18:45:20 compute-0 practical_easley[355783]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:45:20 compute-0 practical_easley[355783]:             "tags": {
Sep 30 18:45:20 compute-0 practical_easley[355783]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:45:20 compute-0 practical_easley[355783]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:45:20 compute-0 practical_easley[355783]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:45:20 compute-0 practical_easley[355783]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:45:20 compute-0 practical_easley[355783]:                 "ceph.cluster_name": "ceph",
Sep 30 18:45:20 compute-0 practical_easley[355783]:                 "ceph.crush_device_class": "",
Sep 30 18:45:20 compute-0 practical_easley[355783]:                 "ceph.encrypted": "0",
Sep 30 18:45:20 compute-0 practical_easley[355783]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:45:20 compute-0 practical_easley[355783]:                 "ceph.osd_id": "0",
Sep 30 18:45:20 compute-0 practical_easley[355783]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:45:20 compute-0 practical_easley[355783]:                 "ceph.type": "block",
Sep 30 18:45:20 compute-0 practical_easley[355783]:                 "ceph.vdo": "0",
Sep 30 18:45:20 compute-0 practical_easley[355783]:                 "ceph.with_tpm": "0"
Sep 30 18:45:20 compute-0 practical_easley[355783]:             },
Sep 30 18:45:20 compute-0 practical_easley[355783]:             "type": "block",
Sep 30 18:45:20 compute-0 practical_easley[355783]:             "vg_name": "ceph_vg0"
Sep 30 18:45:20 compute-0 practical_easley[355783]:         }
Sep 30 18:45:20 compute-0 practical_easley[355783]:     ]
Sep 30 18:45:20 compute-0 practical_easley[355783]: }
Sep 30 18:45:20 compute-0 systemd[1]: libpod-582addbd18056528971dc9c9ef7e8e39ade080f678503c0be249d11feafb408b.scope: Deactivated successfully.
Sep 30 18:45:20 compute-0 podman[355767]: 2025-09-30 18:45:20.766386225 +0000 UTC m=+0.514138988 container died 582addbd18056528971dc9c9ef7e8e39ade080f678503c0be249d11feafb408b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325)
Sep 30 18:45:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3fe7a2dbf1f8c0e97e0d3c5e55e36a280e2d82ee347357ddb84b7a5e5a55abc-merged.mount: Deactivated successfully.
Sep 30 18:45:20 compute-0 podman[355767]: 2025-09-30 18:45:20.80487859 +0000 UTC m=+0.552631333 container remove 582addbd18056528971dc9c9ef7e8e39ade080f678503c0be249d11feafb408b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_easley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:45:20 compute-0 systemd[1]: libpod-conmon-582addbd18056528971dc9c9ef7e8e39ade080f678503c0be249d11feafb408b.scope: Deactivated successfully.
Sep 30 18:45:20 compute-0 sudo[355660]: pam_unix(sudo:session): session closed for user root
Sep 30 18:45:20 compute-0 sudo[355805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:45:20 compute-0 sudo[355805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:45:20 compute-0 sudo[355805]: pam_unix(sudo:session): session closed for user root
Sep 30 18:45:20 compute-0 sudo[355829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:45:20 compute-0 sudo[355829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:45:20 compute-0 sudo[355829]: pam_unix(sudo:session): session closed for user root
Sep 30 18:45:20 compute-0 sudo[355855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:45:20 compute-0 sudo[355855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:45:21 compute-0 nova_compute[265391]: 2025-09-30 18:45:21.019 2 DEBUG oslo_concurrency.lockutils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:45:21 compute-0 nova_compute[265391]: 2025-09-30 18:45:21.026 2 DEBUG oslo_concurrency.lockutils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.007s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:45:21 compute-0 nova_compute[265391]: 2025-09-30 18:45:21.032 2 DEBUG nova.virt.hardware [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Sep 30 18:45:21 compute-0 nova_compute[265391]: 2025-09-30 18:45:21.033 2 INFO nova.compute.claims [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Claim successful on node compute-0.ctlplane.example.com
Sep 30 18:45:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:45:21 compute-0 podman[355921]: 2025-09-30 18:45:21.392523799 +0000 UTC m=+0.041027031 container create b30fa1b756e9848144bed9860ad1fcbf19498da5f0a71506f03985db346befed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_rubin, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Sep 30 18:45:21 compute-0 systemd[1]: Started libpod-conmon-b30fa1b756e9848144bed9860ad1fcbf19498da5f0a71506f03985db346befed.scope.
Sep 30 18:45:21 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:45:21 compute-0 podman[355921]: 2025-09-30 18:45:21.373951554 +0000 UTC m=+0.022454846 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:45:21 compute-0 podman[355921]: 2025-09-30 18:45:21.490055265 +0000 UTC m=+0.138558527 container init b30fa1b756e9848144bed9860ad1fcbf19498da5f0a71506f03985db346befed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_rubin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:45:21 compute-0 podman[355921]: 2025-09-30 18:45:21.497448184 +0000 UTC m=+0.145951436 container start b30fa1b756e9848144bed9860ad1fcbf19498da5f0a71506f03985db346befed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Sep 30 18:45:21 compute-0 podman[355921]: 2025-09-30 18:45:21.500451801 +0000 UTC m=+0.148955143 container attach b30fa1b756e9848144bed9860ad1fcbf19498da5f0a71506f03985db346befed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_rubin, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:45:21 compute-0 angry_rubin[355938]: 167 167
Sep 30 18:45:21 compute-0 systemd[1]: libpod-b30fa1b756e9848144bed9860ad1fcbf19498da5f0a71506f03985db346befed.scope: Deactivated successfully.
Sep 30 18:45:21 compute-0 podman[355921]: 2025-09-30 18:45:21.502623346 +0000 UTC m=+0.151126618 container died b30fa1b756e9848144bed9860ad1fcbf19498da5f0a71506f03985db346befed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_rubin, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Sep 30 18:45:21 compute-0 ceph-mon[73755]: pgmap v2009: 353 pgs: 353 active+clean; 88 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 2.7 MiB/s rd, 19 KiB/s wr, 105 op/s
Sep 30 18:45:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-c50666dda0417c49ca47023f8a4be2041fa742888c1623891c4af6f0b379fa73-merged.mount: Deactivated successfully.
Sep 30 18:45:21 compute-0 podman[355921]: 2025-09-30 18:45:21.545975296 +0000 UTC m=+0.194478538 container remove b30fa1b756e9848144bed9860ad1fcbf19498da5f0a71506f03985db346befed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_rubin, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:45:21 compute-0 systemd[1]: libpod-conmon-b30fa1b756e9848144bed9860ad1fcbf19498da5f0a71506f03985db346befed.scope: Deactivated successfully.
Sep 30 18:45:21 compute-0 podman[355959]: 2025-09-30 18:45:21.654496643 +0000 UTC m=+0.070706101 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20250930, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4)
Sep 30 18:45:21 compute-0 podman[355956]: 2025-09-30 18:45:21.656183466 +0000 UTC m=+0.074207390 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=iscsid, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2)
Sep 30 18:45:21 compute-0 nova_compute[265391]: 2025-09-30 18:45:21.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:21 compute-0 podman[355960]: 2025-09-30 18:45:21.694220429 +0000 UTC m=+0.104567837 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, distribution-scope=public, release=1755695350, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, version=9.6, config_id=edpm)
Sep 30 18:45:21 compute-0 podman[356020]: 2025-09-30 18:45:21.729232315 +0000 UTC m=+0.043337210 container create 1963d15e42af1ff86d8dd391a423b43d384ca84939846ee76ba117b72bafca76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_curran, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 18:45:21 compute-0 systemd[1]: Started libpod-conmon-1963d15e42af1ff86d8dd391a423b43d384ca84939846ee76ba117b72bafca76.scope.
Sep 30 18:45:21 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:45:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/683bd43851a5d70016025dae3a7da8fc788e92afcdca262cd94aae24607bcb35/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:45:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/683bd43851a5d70016025dae3a7da8fc788e92afcdca262cd94aae24607bcb35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:45:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/683bd43851a5d70016025dae3a7da8fc788e92afcdca262cd94aae24607bcb35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:45:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/683bd43851a5d70016025dae3a7da8fc788e92afcdca262cd94aae24607bcb35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:45:21 compute-0 podman[356020]: 2025-09-30 18:45:21.708771782 +0000 UTC m=+0.022876707 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:45:21 compute-0 podman[356020]: 2025-09-30 18:45:21.810809753 +0000 UTC m=+0.124914678 container init 1963d15e42af1ff86d8dd391a423b43d384ca84939846ee76ba117b72bafca76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_curran, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Sep 30 18:45:21 compute-0 podman[356020]: 2025-09-30 18:45:21.817986367 +0000 UTC m=+0.132091262 container start 1963d15e42af1ff86d8dd391a423b43d384ca84939846ee76ba117b72bafca76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_curran, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:45:21 compute-0 podman[356020]: 2025-09-30 18:45:21.820692416 +0000 UTC m=+0.134797331 container attach 1963d15e42af1ff86d8dd391a423b43d384ca84939846ee76ba117b72bafca76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_curran, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:45:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2010: 353 pgs: 353 active+clean; 88 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 2.7 MiB/s rd, 19 KiB/s wr, 104 op/s
Sep 30 18:45:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:45:21.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:22 compute-0 nova_compute[265391]: 2025-09-30 18:45:22.094 2 DEBUG oslo_concurrency.processutils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:45:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:45:22.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:45:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:45:22 compute-0 lvm[356132]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:45:22 compute-0 lvm[356132]: VG ceph_vg0 finished
Sep 30 18:45:22 compute-0 festive_curran[356037]: {}
Sep 30 18:45:22 compute-0 systemd[1]: libpod-1963d15e42af1ff86d8dd391a423b43d384ca84939846ee76ba117b72bafca76.scope: Deactivated successfully.
Sep 30 18:45:22 compute-0 systemd[1]: libpod-1963d15e42af1ff86d8dd391a423b43d384ca84939846ee76ba117b72bafca76.scope: Consumed 1.048s CPU time.
Sep 30 18:45:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:45:22 compute-0 podman[356136]: 2025-09-30 18:45:22.54765321 +0000 UTC m=+0.024880428 container died 1963d15e42af1ff86d8dd391a423b43d384ca84939846ee76ba117b72bafca76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_curran, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:45:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:45:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2493896786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:45:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-683bd43851a5d70016025dae3a7da8fc788e92afcdca262cd94aae24607bcb35-merged.mount: Deactivated successfully.
Sep 30 18:45:22 compute-0 podman[356136]: 2025-09-30 18:45:22.589759086 +0000 UTC m=+0.066986284 container remove 1963d15e42af1ff86d8dd391a423b43d384ca84939846ee76ba117b72bafca76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Sep 30 18:45:22 compute-0 systemd[1]: libpod-conmon-1963d15e42af1ff86d8dd391a423b43d384ca84939846ee76ba117b72bafca76.scope: Deactivated successfully.
Sep 30 18:45:22 compute-0 nova_compute[265391]: 2025-09-30 18:45:22.610 2 DEBUG oslo_concurrency.processutils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:45:22 compute-0 nova_compute[265391]: 2025-09-30 18:45:22.616 2 DEBUG nova.compute.provider_tree [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:45:22 compute-0 sudo[355855]: pam_unix(sudo:session): session closed for user root
Sep 30 18:45:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:45:22 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:45:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:45:22 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:45:22 compute-0 sudo[356153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:45:22 compute-0 sudo[356153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:45:22 compute-0 sudo[356153]: pam_unix(sudo:session): session closed for user root
Sep 30 18:45:23 compute-0 nova_compute[265391]: 2025-09-30 18:45:23.126 2 DEBUG nova.scheduler.client.report [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:45:23 compute-0 ceph-mon[73755]: pgmap v2010: 353 pgs: 353 active+clean; 88 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 2.7 MiB/s rd, 19 KiB/s wr, 104 op/s
Sep 30 18:45:23 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2493896786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:45:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:45:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:45:23 compute-0 nova_compute[265391]: 2025-09-30 18:45:23.639 2 DEBUG oslo_concurrency.lockutils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.613s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:45:23 compute-0 nova_compute[265391]: 2025-09-30 18:45:23.639 2 DEBUG nova.compute.manager [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Sep 30 18:45:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2011: 353 pgs: 353 active+clean; 88 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 2.7 MiB/s rd, 19 KiB/s wr, 105 op/s
Sep 30 18:45:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:45:23.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:45:23.902Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:45:23 compute-0 nova_compute[265391]: 2025-09-30 18:45:23.935 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:45:23 compute-0 nova_compute[265391]: 2025-09-30 18:45:23.935 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.12/site-packages/nova/compute/manager.py:11947
Sep 30 18:45:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:45:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:45:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:45:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:45:24 compute-0 nova_compute[265391]: 2025-09-30 18:45:24.149 2 DEBUG nova.compute.manager [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Sep 30 18:45:24 compute-0 nova_compute[265391]: 2025-09-30 18:45:24.150 2 DEBUG nova.network.neutron [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Sep 30 18:45:24 compute-0 nova_compute[265391]: 2025-09-30 18:45:24.150 2 WARNING neutronclient.v2_0.client [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:45:24 compute-0 nova_compute[265391]: 2025-09-30 18:45:24.151 2 WARNING neutronclient.v2_0.client [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:45:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:45:24.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:24 compute-0 nova_compute[265391]: 2025-09-30 18:45:24.658 2 INFO nova.virt.libvirt.driver [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Sep 30 18:45:24 compute-0 nova_compute[265391]: 2025-09-30 18:45:24.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:25 compute-0 nova_compute[265391]: 2025-09-30 18:45:25.167 2 DEBUG nova.compute.manager [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Sep 30 18:45:25 compute-0 ceph-mon[73755]: pgmap v2011: 353 pgs: 353 active+clean; 88 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 2.7 MiB/s rd, 19 KiB/s wr, 105 op/s
Sep 30 18:45:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2012: 353 pgs: 353 active+clean; 88 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 2.7 MiB/s rd, 19 KiB/s wr, 105 op/s
Sep 30 18:45:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:45:25.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:26 compute-0 unix_chkpwd[356184]: password check failed for user (root)
Sep 30 18:45:26 compute-0 sshd-session[356180]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=106.36.198.78  user=root
Sep 30 18:45:26 compute-0 ovn_controller[156242]: 2025-09-30T18:45:26Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:40:ad:c0 10.100.0.11
Sep 30 18:45:26 compute-0 ovn_controller[156242]: 2025-09-30T18:45:26Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:40:ad:c0 10.100.0.11
Sep 30 18:45:26 compute-0 nova_compute[265391]: 2025-09-30 18:45:26.183 2 DEBUG nova.network.neutron [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Successfully created port: 2f1b70a8-5d22-4c58-b80e-ec83063488da _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Sep 30 18:45:26 compute-0 nova_compute[265391]: 2025-09-30 18:45:26.188 2 DEBUG nova.compute.manager [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Sep 30 18:45:26 compute-0 nova_compute[265391]: 2025-09-30 18:45:26.190 2 DEBUG nova.virt.libvirt.driver [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Sep 30 18:45:26 compute-0 nova_compute[265391]: 2025-09-30 18:45:26.191 2 INFO nova.virt.libvirt.driver [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Creating image(s)
Sep 30 18:45:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:45:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:45:26.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:45:26 compute-0 nova_compute[265391]: 2025-09-30 18:45:26.234 2 DEBUG nova.storage.rbd_utils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] rbd image 31444af1-147d-40c9-b93a-19a7618c7703_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:45:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:45:26 compute-0 nova_compute[265391]: 2025-09-30 18:45:26.266 2 DEBUG nova.storage.rbd_utils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] rbd image 31444af1-147d-40c9-b93a-19a7618c7703_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:45:26 compute-0 nova_compute[265391]: 2025-09-30 18:45:26.298 2 DEBUG nova.storage.rbd_utils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] rbd image 31444af1-147d-40c9-b93a-19a7618c7703_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:45:26 compute-0 nova_compute[265391]: 2025-09-30 18:45:26.301 2 DEBUG oslo_concurrency.processutils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:45:26 compute-0 nova_compute[265391]: 2025-09-30 18:45:26.360 2 DEBUG oslo_concurrency.processutils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:45:26 compute-0 nova_compute[265391]: 2025-09-30 18:45:26.360 2 DEBUG oslo_concurrency.lockutils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Acquiring lock "cb2d580238c9b109feae7f1462613dc547671457" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:45:26 compute-0 nova_compute[265391]: 2025-09-30 18:45:26.361 2 DEBUG oslo_concurrency.lockutils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:45:26 compute-0 nova_compute[265391]: 2025-09-30 18:45:26.361 2 DEBUG oslo_concurrency.lockutils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:45:26 compute-0 nova_compute[265391]: 2025-09-30 18:45:26.387 2 DEBUG nova.storage.rbd_utils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] rbd image 31444af1-147d-40c9-b93a-19a7618c7703_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:45:26 compute-0 nova_compute[265391]: 2025-09-30 18:45:26.390 2 DEBUG oslo_concurrency.processutils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 31444af1-147d-40c9-b93a-19a7618c7703_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:45:26 compute-0 nova_compute[265391]: 2025-09-30 18:45:26.660 2 DEBUG oslo_concurrency.processutils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 31444af1-147d-40c9-b93a-19a7618c7703_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.271s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:45:26 compute-0 nova_compute[265391]: 2025-09-30 18:45:26.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:26 compute-0 nova_compute[265391]: 2025-09-30 18:45:26.751 2 DEBUG nova.storage.rbd_utils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] resizing rbd image 31444af1-147d-40c9-b93a-19a7618c7703_disk to 1073741824 resize /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:288
Sep 30 18:45:26 compute-0 nova_compute[265391]: 2025-09-30 18:45:26.871 2 DEBUG nova.virt.libvirt.driver [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Sep 30 18:45:26 compute-0 nova_compute[265391]: 2025-09-30 18:45:26.872 2 DEBUG nova.virt.libvirt.driver [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Ensure instance console log exists: /var/lib/nova/instances/31444af1-147d-40c9-b93a-19a7618c7703/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Sep 30 18:45:26 compute-0 nova_compute[265391]: 2025-09-30 18:45:26.872 2 DEBUG oslo_concurrency.lockutils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:45:26 compute-0 nova_compute[265391]: 2025-09-30 18:45:26.873 2 DEBUG oslo_concurrency.lockutils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:45:26 compute-0 nova_compute[265391]: 2025-09-30 18:45:26.873 2 DEBUG oslo_concurrency.lockutils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:45:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:45:27.376Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:45:27 compute-0 ceph-mon[73755]: pgmap v2012: 353 pgs: 353 active+clean; 88 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 2.7 MiB/s rd, 19 KiB/s wr, 105 op/s
Sep 30 18:45:27 compute-0 nova_compute[265391]: 2025-09-30 18:45:27.740 2 DEBUG nova.network.neutron [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Successfully updated port: 2f1b70a8-5d22-4c58-b80e-ec83063488da _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Sep 30 18:45:27 compute-0 nova_compute[265391]: 2025-09-30 18:45:27.806 2 DEBUG nova.compute.manager [req-b0b5e3be-d8a9-487f-83a0-7b8c252167c8 req-fd0542bc-fff4-4e4c-8714-a0d833576a52 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Received event network-changed-2f1b70a8-5d22-4c58-b80e-ec83063488da external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:45:27 compute-0 nova_compute[265391]: 2025-09-30 18:45:27.806 2 DEBUG nova.compute.manager [req-b0b5e3be-d8a9-487f-83a0-7b8c252167c8 req-fd0542bc-fff4-4e4c-8714-a0d833576a52 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Refreshing instance network info cache due to event network-changed-2f1b70a8-5d22-4c58-b80e-ec83063488da. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:45:27 compute-0 nova_compute[265391]: 2025-09-30 18:45:27.806 2 DEBUG oslo_concurrency.lockutils [req-b0b5e3be-d8a9-487f-83a0-7b8c252167c8 req-fd0542bc-fff4-4e4c-8714-a0d833576a52 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-31444af1-147d-40c9-b93a-19a7618c7703" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:45:27 compute-0 nova_compute[265391]: 2025-09-30 18:45:27.807 2 DEBUG oslo_concurrency.lockutils [req-b0b5e3be-d8a9-487f-83a0-7b8c252167c8 req-fd0542bc-fff4-4e4c-8714-a0d833576a52 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-31444af1-147d-40c9-b93a-19a7618c7703" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:45:27 compute-0 nova_compute[265391]: 2025-09-30 18:45:27.807 2 DEBUG nova.network.neutron [req-b0b5e3be-d8a9-487f-83a0-7b8c252167c8 req-fd0542bc-fff4-4e4c-8714-a0d833576a52 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Refreshing network info cache for port 2f1b70a8-5d22-4c58-b80e-ec83063488da _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:45:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2013: 353 pgs: 353 active+clean; 88 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 89 op/s
Sep 30 18:45:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:45:27.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:45:28.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:28 compute-0 sshd-session[356180]: Failed password for root from 106.36.198.78 port 40350 ssh2
Sep 30 18:45:28 compute-0 nova_compute[265391]: 2025-09-30 18:45:28.249 2 DEBUG oslo_concurrency.lockutils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Acquiring lock "refresh_cache-31444af1-147d-40c9-b93a-19a7618c7703" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:45:28 compute-0 nova_compute[265391]: 2025-09-30 18:45:28.315 2 WARNING neutronclient.v2_0.client [req-b0b5e3be-d8a9-487f-83a0-7b8c252167c8 req-fd0542bc-fff4-4e4c-8714-a0d833576a52 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:45:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:45:28] "GET /metrics HTTP/1.1" 200 46730 "" "Prometheus/2.51.0"
Sep 30 18:45:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:45:28] "GET /metrics HTTP/1.1" 200 46730 "" "Prometheus/2.51.0"
Sep 30 18:45:28 compute-0 nova_compute[265391]: 2025-09-30 18:45:28.806 2 DEBUG nova.network.neutron [req-b0b5e3be-d8a9-487f-83a0-7b8c252167c8 req-fd0542bc-fff4-4e4c-8714-a0d833576a52 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:45:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:45:28.915Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:45:28 compute-0 nova_compute[265391]: 2025-09-30 18:45:28.976 2 DEBUG nova.network.neutron [req-b0b5e3be-d8a9-487f-83a0-7b8c252167c8 req-fd0542bc-fff4-4e4c-8714-a0d833576a52 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:45:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:45:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:45:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:45:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:45:29 compute-0 sshd-session[356180]: Received disconnect from 106.36.198.78 port 40350:11:  [preauth]
Sep 30 18:45:29 compute-0 sshd-session[356180]: Disconnected from authenticating user root 106.36.198.78 port 40350 [preauth]
Sep 30 18:45:29 compute-0 nova_compute[265391]: 2025-09-30 18:45:29.491 2 DEBUG oslo_concurrency.lockutils [req-b0b5e3be-d8a9-487f-83a0-7b8c252167c8 req-fd0542bc-fff4-4e4c-8714-a0d833576a52 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-31444af1-147d-40c9-b93a-19a7618c7703" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:45:29 compute-0 nova_compute[265391]: 2025-09-30 18:45:29.492 2 DEBUG oslo_concurrency.lockutils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Acquired lock "refresh_cache-31444af1-147d-40c9-b93a-19a7618c7703" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:45:29 compute-0 nova_compute[265391]: 2025-09-30 18:45:29.492 2 DEBUG nova.network.neutron [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:45:29 compute-0 ceph-mon[73755]: pgmap v2013: 353 pgs: 353 active+clean; 88 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 89 op/s
Sep 30 18:45:29 compute-0 podman[276673]: time="2025-09-30T18:45:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:45:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:45:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:45:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:45:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10767 "" "Go-http-client/1.1"
Sep 30 18:45:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2014: 353 pgs: 353 active+clean; 167 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Sep 30 18:45:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:45:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:45:29.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:45:29 compute-0 nova_compute[265391]: 2025-09-30 18:45:29.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:45:30.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:31 compute-0 nova_compute[265391]: 2025-09-30 18:45:31.007 2 DEBUG nova.network.neutron [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:45:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:45:31 compute-0 openstack_network_exporter[279566]: ERROR   18:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:45:31 compute-0 openstack_network_exporter[279566]: ERROR   18:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:45:31 compute-0 openstack_network_exporter[279566]: ERROR   18:45:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:45:31 compute-0 openstack_network_exporter[279566]: ERROR   18:45:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:45:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:45:31 compute-0 openstack_network_exporter[279566]: ERROR   18:45:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:45:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:45:31 compute-0 ceph-mon[73755]: pgmap v2014: 353 pgs: 353 active+clean; 167 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Sep 30 18:45:31 compute-0 nova_compute[265391]: 2025-09-30 18:45:31.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:31 compute-0 nova_compute[265391]: 2025-09-30 18:45:31.778 2 WARNING neutronclient.v2_0.client [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:45:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2015: 353 pgs: 353 active+clean; 167 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Sep 30 18:45:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:45:31.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:31 compute-0 nova_compute[265391]: 2025-09-30 18:45:31.958 2 DEBUG nova.network.neutron [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Updating instance_info_cache with network_info: [{"id": "2f1b70a8-5d22-4c58-b80e-ec83063488da", "address": "fa:16:3e:95:98:c3", "network": {"id": "4a724d83-7a03-449e-b06f-f9f1f1bb686e", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalancingStrategy-921789264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c994362300fc4b68b72392279f890ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f1b70a8-5d", "ovs_interfaceid": "2f1b70a8-5d22-4c58-b80e-ec83063488da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:45:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:45:32.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:32 compute-0 nova_compute[265391]: 2025-09-30 18:45:32.464 2 DEBUG oslo_concurrency.lockutils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Releasing lock "refresh_cache-31444af1-147d-40c9-b93a-19a7618c7703" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:45:32 compute-0 nova_compute[265391]: 2025-09-30 18:45:32.465 2 DEBUG nova.compute.manager [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Instance network_info: |[{"id": "2f1b70a8-5d22-4c58-b80e-ec83063488da", "address": "fa:16:3e:95:98:c3", "network": {"id": "4a724d83-7a03-449e-b06f-f9f1f1bb686e", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalancingStrategy-921789264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c994362300fc4b68b72392279f890ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f1b70a8-5d", "ovs_interfaceid": "2f1b70a8-5d22-4c58-b80e-ec83063488da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Sep 30 18:45:32 compute-0 nova_compute[265391]: 2025-09-30 18:45:32.467 2 DEBUG nova.virt.libvirt.driver [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Start _get_guest_xml network_info=[{"id": "2f1b70a8-5d22-4c58-b80e-ec83063488da", "address": "fa:16:3e:95:98:c3", "network": {"id": "4a724d83-7a03-449e-b06f-f9f1f1bb686e", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalancingStrategy-921789264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c994362300fc4b68b72392279f890ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f1b70a8-5d", "ovs_interfaceid": "2f1b70a8-5d22-4c58-b80e-ec83063488da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'image_id': '5b99cbca-b655-4be5-8343-cf504005c42e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Sep 30 18:45:32 compute-0 nova_compute[265391]: 2025-09-30 18:45:32.471 2 WARNING nova.virt.libvirt.driver [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:45:32 compute-0 nova_compute[265391]: 2025-09-30 18:45:32.472 2 DEBUG nova.virt.driver [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='5b99cbca-b655-4be5-8343-cf504005c42e', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteWorkloadBalancingStrategy-server-2085781867', uuid='31444af1-147d-40c9-b93a-19a7618c7703'), owner=OwnerMeta(userid='a881d4a6df1c47b5ae0d12c72bd2a02b', username='tempest-TestExecuteWorkloadBalancingStrategy-1499811811-project-admin', projectid='0e1882e8f3e74aa3840e38f2ce263f25', projectname='tempest-TestExecuteWorkloadBalancingStrategy-1499811811'), image=ImageMeta(id='5b99cbca-b655-4be5-8343-cf504005c42e', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "2f1b70a8-5d22-4c58-b80e-ec83063488da", "address": "fa:16:3e:95:98:c3", "network": {"id": "4a724d83-7a03-449e-b06f-f9f1f1bb686e", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalancingStrategy-921789264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c994362300fc4b68b72392279f890ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f1b70a8-5d", "ovs_interfaceid": "2f1b70a8-5d22-4c58-b80e-ec83063488da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20250919142712.b99a882.el10', creation_time=1759257932.4723678) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Sep 30 18:45:32 compute-0 nova_compute[265391]: 2025-09-30 18:45:32.476 2 DEBUG nova.virt.libvirt.host [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Sep 30 18:45:32 compute-0 nova_compute[265391]: 2025-09-30 18:45:32.477 2 DEBUG nova.virt.libvirt.host [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Sep 30 18:45:32 compute-0 nova_compute[265391]: 2025-09-30 18:45:32.479 2 DEBUG nova.virt.libvirt.host [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Sep 30 18:45:32 compute-0 nova_compute[265391]: 2025-09-30 18:45:32.480 2 DEBUG nova.virt.libvirt.host [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Sep 30 18:45:32 compute-0 nova_compute[265391]: 2025-09-30 18:45:32.480 2 DEBUG nova.virt.libvirt.driver [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Sep 30 18:45:32 compute-0 nova_compute[265391]: 2025-09-30 18:45:32.480 2 DEBUG nova.virt.hardware [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-09-30T18:04:50Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Sep 30 18:45:32 compute-0 nova_compute[265391]: 2025-09-30 18:45:32.481 2 DEBUG nova.virt.hardware [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Sep 30 18:45:32 compute-0 nova_compute[265391]: 2025-09-30 18:45:32.481 2 DEBUG nova.virt.hardware [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Sep 30 18:45:32 compute-0 nova_compute[265391]: 2025-09-30 18:45:32.481 2 DEBUG nova.virt.hardware [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Sep 30 18:45:32 compute-0 nova_compute[265391]: 2025-09-30 18:45:32.481 2 DEBUG nova.virt.hardware [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Sep 30 18:45:32 compute-0 nova_compute[265391]: 2025-09-30 18:45:32.481 2 DEBUG nova.virt.hardware [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Sep 30 18:45:32 compute-0 nova_compute[265391]: 2025-09-30 18:45:32.482 2 DEBUG nova.virt.hardware [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Sep 30 18:45:32 compute-0 nova_compute[265391]: 2025-09-30 18:45:32.482 2 DEBUG nova.virt.hardware [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Sep 30 18:45:32 compute-0 nova_compute[265391]: 2025-09-30 18:45:32.482 2 DEBUG nova.virt.hardware [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Sep 30 18:45:32 compute-0 nova_compute[265391]: 2025-09-30 18:45:32.482 2 DEBUG nova.virt.hardware [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Sep 30 18:45:32 compute-0 nova_compute[265391]: 2025-09-30 18:45:32.482 2 DEBUG nova.virt.hardware [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Sep 30 18:45:32 compute-0 nova_compute[265391]: 2025-09-30 18:45:32.485 2 DEBUG oslo_concurrency.processutils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:45:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:45:32 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3814742906' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:45:32 compute-0 nova_compute[265391]: 2025-09-30 18:45:32.933 2 DEBUG oslo_concurrency.processutils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:45:32 compute-0 nova_compute[265391]: 2025-09-30 18:45:32.958 2 DEBUG nova.storage.rbd_utils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] rbd image 31444af1-147d-40c9-b93a-19a7618c7703_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:45:32 compute-0 nova_compute[265391]: 2025-09-30 18:45:32.961 2 DEBUG oslo_concurrency.processutils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:45:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:45:33 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/336090798' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:45:33 compute-0 nova_compute[265391]: 2025-09-30 18:45:33.408 2 DEBUG oslo_concurrency.processutils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
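[editor's note] Before it can build the RBD disk definitions, nova-compute shells out to "ceph mon dump --format=json" (twice here, once per RBD-backed image) so it can learn the monitor endpoints that later appear as <host> entries in the domain XML. A rough stand-alone reconstruction of that call, assuming the ceph CLI and /etc/ceph/ceph.conf are available on the host; it uses plain subprocess rather than oslo_concurrency.processutils, but the command line is the one logged above:

    # Sketch: run the same "ceph mon dump" command and print the monitor
    # addresses from the returned JSON mon map.
    import json, subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout
    mon_map = json.loads(out)
    for mon in mon_map.get("mons", []):
        print(mon.get("name"), mon.get("public_addr"))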
Sep 30 18:45:33 compute-0 nova_compute[265391]: 2025-09-30 18:45:33.410 2 DEBUG nova.virt.libvirt.vif [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:45:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalancingStrategy-server-2085781867',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancingstrategy-server-2085781867',id=35,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0e1882e8f3e74aa3840e38f2ce263f25',ramdisk_id='',reservation_id='r-z8t0wt1j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteWorkloadBalancingStrategy-1499811811',owner_user_name='tempest-TestExecuteWorkloadBalancingStrategy-1499811811-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:45:25Z,user_data=None,user_id='a881d4a6df1c47b5ae0d12c72bd2a02b',uuid=31444af1-147d-40c9-b93a-19a7618c7703,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2f1b70a8-5d22-4c58-b80e-ec83063488da", "address": "fa:16:3e:95:98:c3", "network": {"id": "4a724d83-7a03-449e-b06f-f9f1f1bb686e", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalancingStrategy-921789264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c994362300fc4b68b72392279f890ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f1b70a8-5d", "ovs_interfaceid": "2f1b70a8-5d22-4c58-b80e-ec83063488da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:45:33 compute-0 nova_compute[265391]: 2025-09-30 18:45:33.410 2 DEBUG nova.network.os_vif_util [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Converting VIF {"id": "2f1b70a8-5d22-4c58-b80e-ec83063488da", "address": "fa:16:3e:95:98:c3", "network": {"id": "4a724d83-7a03-449e-b06f-f9f1f1bb686e", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalancingStrategy-921789264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c994362300fc4b68b72392279f890ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f1b70a8-5d", "ovs_interfaceid": "2f1b70a8-5d22-4c58-b80e-ec83063488da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:45:33 compute-0 nova_compute[265391]: 2025-09-30 18:45:33.411 2 DEBUG nova.network.os_vif_util [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:98:c3,bridge_name='br-int',has_traffic_filtering=True,id=2f1b70a8-5d22-4c58-b80e-ec83063488da,network=Network(4a724d83-7a03-449e-b06f-f9f1f1bb686e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f1b70a8-5d') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:45:33 compute-0 nova_compute[265391]: 2025-09-30 18:45:33.412 2 DEBUG nova.objects.instance [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lazy-loading 'pci_devices' on Instance uuid 31444af1-147d-40c9-b93a-19a7618c7703 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:45:33 compute-0 ceph-mon[73755]: pgmap v2015: 353 pgs: 353 active+clean; 167 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Sep 30 18:45:33 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3814742906' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:45:33 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/336090798' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:45:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2016: 353 pgs: 353 active+clean; 167 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Sep 30 18:45:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:45:33.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:45:33.903Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:45:33 compute-0 nova_compute[265391]: 2025-09-30 18:45:33.921 2 DEBUG nova.virt.libvirt.driver [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] End _get_guest_xml xml=<domain type="kvm">
Sep 30 18:45:33 compute-0 nova_compute[265391]:   <uuid>31444af1-147d-40c9-b93a-19a7618c7703</uuid>
Sep 30 18:45:33 compute-0 nova_compute[265391]:   <name>instance-00000023</name>
Sep 30 18:45:33 compute-0 nova_compute[265391]:   <memory>131072</memory>
Sep 30 18:45:33 compute-0 nova_compute[265391]:   <vcpu>1</vcpu>
Sep 30 18:45:33 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:45:33 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteWorkloadBalancingStrategy-server-2085781867</nova:name>
Sep 30 18:45:33 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:45:32</nova:creationTime>
Sep 30 18:45:33 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:45:33 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:45:33 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:45:33 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:45:33 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:45:33 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:45:33 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:45:33 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:45:33 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:45:33 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:45:33 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:45:33 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:45:33 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:45:33 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:45:33 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:45:33 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:45:33 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:45:33 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:45:33 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:45:33 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:45:33 compute-0 nova_compute[265391]:         <nova:user uuid="a881d4a6df1c47b5ae0d12c72bd2a02b">tempest-TestExecuteWorkloadBalancingStrategy-1499811811-project-admin</nova:user>
Sep 30 18:45:33 compute-0 nova_compute[265391]:         <nova:project uuid="0e1882e8f3e74aa3840e38f2ce263f25">tempest-TestExecuteWorkloadBalancingStrategy-1499811811</nova:project>
Sep 30 18:45:33 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:45:33 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:45:33 compute-0 nova_compute[265391]:         <nova:port uuid="2f1b70a8-5d22-4c58-b80e-ec83063488da">
Sep 30 18:45:33 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:45:33 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:45:33 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:45:33 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <system>
Sep 30 18:45:33 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:45:33 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:45:33 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:45:33 compute-0 nova_compute[265391]:       <entry name="serial">31444af1-147d-40c9-b93a-19a7618c7703</entry>
Sep 30 18:45:33 compute-0 nova_compute[265391]:       <entry name="uuid">31444af1-147d-40c9-b93a-19a7618c7703</entry>
Sep 30 18:45:33 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     </system>
Sep 30 18:45:33 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:45:33 compute-0 nova_compute[265391]:   <os>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="q35">hvm</type>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:   </os>
Sep 30 18:45:33 compute-0 nova_compute[265391]:   <features>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <vmcoreinfo/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:   </features>
Sep 30 18:45:33 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:45:33 compute-0 nova_compute[265391]:   <cpu mode="host-model" match="exact">
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <topology sockets="1" cores="1" threads="1"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:45:33 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:45:33 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/31444af1-147d-40c9-b93a-19a7618c7703_disk">
Sep 30 18:45:33 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:       </source>
Sep 30 18:45:33 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:45:33 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:45:33 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:45:33 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/31444af1-147d-40c9-b93a-19a7618c7703_disk.config">
Sep 30 18:45:33 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:       </source>
Sep 30 18:45:33 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:45:33 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:45:33 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:45:33 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:95:98:c3"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:       <target dev="tap2f1b70a8-5d"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:45:33 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/31444af1-147d-40c9-b93a-19a7618c7703/console.log" append="off"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <video>
Sep 30 18:45:33 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     </video>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:45:33 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <controller type="usb" index="0"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:45:33 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:45:33 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:45:33 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:45:33 compute-0 nova_compute[265391]: </domain>
Sep 30 18:45:33 compute-0 nova_compute[265391]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
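[editor's note] The domain XML dumped above is what nova hands to libvirt for this instance: an RBD-backed virtio root disk and a SATA config-drive cdrom in the "vms" pool, an ethernet/tap interface with MTU 1442 destined for br-int, and the q35/host-model machine taken from the image metadata. Purely as an illustration of reading that data back (not part of Nova), the following assumes the XML has been saved locally as guest.xml and extracts the RBD sources and monitor hosts:

    # Sketch: pull the RBD disk names and monitor endpoints out of the domain
    # XML that nova logged above (saved as guest.xml for this example).
    import xml.etree.ElementTree as ET

    root = ET.parse("guest.xml").getroot()
    for disk in root.findall("./devices/disk"):
        src = disk.find("source")
        if src is not None and src.get("protocol") == "rbd":
            hosts = [f'{h.get("name")}:{h.get("port")}' for h in src.findall("host")]
            print(disk.get("device"), src.get("name"), hosts)

    # Prints, for this instance:
    #   disk vms/31444af1-147d-40c9-b93a-19a7618c7703_disk ['192.168.122.100:6789', '192.168.122.101:6789']
    #   cdrom vms/31444af1-147d-40c9-b93a-19a7618c7703_disk.config ['192.168.122.100:6789', '192.168.122.101:6789']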
Sep 30 18:45:33 compute-0 nova_compute[265391]: 2025-09-30 18:45:33.922 2 DEBUG nova.compute.manager [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Preparing to wait for external event network-vif-plugged-2f1b70a8-5d22-4c58-b80e-ec83063488da prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:45:33 compute-0 nova_compute[265391]: 2025-09-30 18:45:33.923 2 DEBUG oslo_concurrency.lockutils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Acquiring lock "31444af1-147d-40c9-b93a-19a7618c7703-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:45:33 compute-0 nova_compute[265391]: 2025-09-30 18:45:33.923 2 DEBUG oslo_concurrency.lockutils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lock "31444af1-147d-40c9-b93a-19a7618c7703-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:45:33 compute-0 nova_compute[265391]: 2025-09-30 18:45:33.923 2 DEBUG oslo_concurrency.lockutils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lock "31444af1-147d-40c9-b93a-19a7618c7703-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
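[editor's note] The three lockutils lines show the per-instance "<uuid>-events" lock being acquired and released around registering the network-vif-plugged event that the compute manager will wait on before resuming the boot. A minimal sketch of that locking pattern with oslo.concurrency; the lock name follows the log, the body is a placeholder and not the compute manager's actual bookkeeping:

    # Sketch: serialize event registration per instance with an
    # oslo_concurrency.lockutils named lock, as the log above does.
    from oslo_concurrency import lockutils

    events = {}

    def prepare_for_instance_event(instance_uuid, event_name):
        # Only one thread at a time may create or fetch the pending-event
        # entry for this instance (the "<uuid>-events" lock in the log).
        with lockutils.lock(f"{instance_uuid}-events"):
            return events.setdefault((instance_uuid, event_name), object())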
Sep 30 18:45:33 compute-0 nova_compute[265391]: 2025-09-30 18:45:33.924 2 DEBUG nova.virt.libvirt.vif [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:45:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalancingStrategy-server-2085781867',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancingstrategy-server-2085781867',id=35,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0e1882e8f3e74aa3840e38f2ce263f25',ramdisk_id='',reservation_id='r-z8t0wt1j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteWorkloadBalancingStrategy-1499811811',owner_user_name='tempest-TestExecuteWorkloadBalancingStrategy-1499811811-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:45:25Z,user_data=None,user_id='a881d4a6df1c47b5ae0d12c72bd2a02b',uuid=31444af1-147d-40c9-b93a-19a7618c7703,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2f1b70a8-5d22-4c58-b80e-ec83063488da", "address": "fa:16:3e:95:98:c3", "network": {"id": "4a724d83-7a03-449e-b06f-f9f1f1bb686e", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalancingStrategy-921789264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c994362300fc4b68b72392279f890ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f1b70a8-5d", "ovs_interfaceid": "2f1b70a8-5d22-4c58-b80e-ec83063488da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Sep 30 18:45:33 compute-0 nova_compute[265391]: 2025-09-30 18:45:33.925 2 DEBUG nova.network.os_vif_util [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Converting VIF {"id": "2f1b70a8-5d22-4c58-b80e-ec83063488da", "address": "fa:16:3e:95:98:c3", "network": {"id": "4a724d83-7a03-449e-b06f-f9f1f1bb686e", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalancingStrategy-921789264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c994362300fc4b68b72392279f890ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f1b70a8-5d", "ovs_interfaceid": "2f1b70a8-5d22-4c58-b80e-ec83063488da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:45:33 compute-0 nova_compute[265391]: 2025-09-30 18:45:33.925 2 DEBUG nova.network.os_vif_util [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:98:c3,bridge_name='br-int',has_traffic_filtering=True,id=2f1b70a8-5d22-4c58-b80e-ec83063488da,network=Network(4a724d83-7a03-449e-b06f-f9f1f1bb686e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f1b70a8-5d') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:45:33 compute-0 nova_compute[265391]: 2025-09-30 18:45:33.926 2 DEBUG os_vif [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:98:c3,bridge_name='br-int',has_traffic_filtering=True,id=2f1b70a8-5d22-4c58-b80e-ec83063488da,network=Network(4a724d83-7a03-449e-b06f-f9f1f1bb686e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f1b70a8-5d') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Sep 30 18:45:33 compute-0 nova_compute[265391]: 2025-09-30 18:45:33.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:33 compute-0 nova_compute[265391]: 2025-09-30 18:45:33.927 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:45:33 compute-0 nova_compute[265391]: 2025-09-30 18:45:33.927 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:45:33 compute-0 nova_compute[265391]: 2025-09-30 18:45:33.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:33 compute-0 nova_compute[265391]: 2025-09-30 18:45:33.928 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': 'dee6db97-1e1c-5a93-beec-9da39085b41d', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:45:33 compute-0 nova_compute[265391]: 2025-09-30 18:45:33.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:45:33 compute-0 nova_compute[265391]: 2025-09-30 18:45:33.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:33 compute-0 nova_compute[265391]: 2025-09-30 18:45:33.934 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2f1b70a8-5d, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:45:33 compute-0 nova_compute[265391]: 2025-09-30 18:45:33.935 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap2f1b70a8-5d, col_values=(('qos', UUID('73221b52-5cef-45db-93dd-6183d3cd46b8')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:45:33 compute-0 nova_compute[265391]: 2025-09-30 18:45:33.935 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap2f1b70a8-5d, col_values=(('external_ids', {'iface-id': '2f1b70a8-5d22-4c58-b80e-ec83063488da', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:95:98:c3', 'vm-uuid': '31444af1-147d-40c9-b93a-19a7618c7703'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:45:33 compute-0 nova_compute[265391]: 2025-09-30 18:45:33.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:33 compute-0 NetworkManager[45059]: <info>  [1759257933.9377] manager: (tap2f1b70a8-5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Sep 30 18:45:33 compute-0 nova_compute[265391]: 2025-09-30 18:45:33.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:45:33 compute-0 nova_compute[265391]: 2025-09-30 18:45:33.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:33 compute-0 nova_compute[265391]: 2025-09-30 18:45:33.944 2 INFO os_vif [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:98:c3,bridge_name='br-int',has_traffic_filtering=True,id=2f1b70a8-5d22-4c58-b80e-ec83063488da,network=Network(4a724d83-7a03-449e-b06f-f9f1f1bb686e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f1b70a8-5d')
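[editor's note] The ovsdbapp transactions above are os-vif's OVS plugin at work: make sure br-int exists, create a linux-noop QoS row, add port tap2f1b70a8-5d, and stamp the interface's external_ids with the Neutron iface-id, attached-mac and vm-uuid so ovn-controller can claim the port a few seconds later. Roughly the same effect can be expressed with ovs-vsctl; the sketch below is an approximation of those transactions for illustration, not what os-vif actually executes (it talks to ovsdb directly):

    # Sketch: approximate the logged ovsdbapp transactions with ovs-vsctl.
    import subprocess

    def plug_ovs_vif(bridge, dev, iface_id, mac, vm_uuid):
        subprocess.run(["ovs-vsctl", "--may-exist", "add-br", bridge,
                        "--", "set", "Bridge", bridge, "datapath_type=system"],
                       check=True)
        subprocess.run(["ovs-vsctl", "--may-exist", "add-port", bridge, dev,
                        "--", "set", "Interface", dev,
                        f"external_ids:iface-id={iface_id}",
                        "external_ids:iface-status=active",
                        f"external_ids:attached-mac={mac}",
                        f"external_ids:vm-uuid={vm_uuid}"],
                       check=True)

    plug_ovs_vif("br-int", "tap2f1b70a8-5d",
                 "2f1b70a8-5d22-4c58-b80e-ec83063488da",
                 "fa:16:3e:95:98:c3",
                 "31444af1-147d-40c9-b93a-19a7618c7703")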
Sep 30 18:45:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:45:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:45:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:45:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:45:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:45:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:45:34.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:45:35 compute-0 nova_compute[265391]: 2025-09-30 18:45:35.520 2 DEBUG nova.virt.libvirt.driver [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:45:35 compute-0 nova_compute[265391]: 2025-09-30 18:45:35.521 2 DEBUG nova.virt.libvirt.driver [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:45:35 compute-0 nova_compute[265391]: 2025-09-30 18:45:35.521 2 DEBUG nova.virt.libvirt.driver [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] No VIF found with MAC fa:16:3e:95:98:c3, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Sep 30 18:45:35 compute-0 nova_compute[265391]: 2025-09-30 18:45:35.522 2 INFO nova.virt.libvirt.driver [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Using config drive
Sep 30 18:45:35 compute-0 nova_compute[265391]: 2025-09-30 18:45:35.550 2 DEBUG nova.storage.rbd_utils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] rbd image 31444af1-147d-40c9-b93a-19a7618c7703_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:45:35 compute-0 ceph-mon[73755]: pgmap v2016: 353 pgs: 353 active+clean; 167 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Sep 30 18:45:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2017: 353 pgs: 353 active+clean; 167 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Sep 30 18:45:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:45:35.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:36 compute-0 nova_compute[265391]: 2025-09-30 18:45:36.064 2 WARNING neutronclient.v2_0.client [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:45:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:45:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:45:36.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:45:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:45:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:45:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1833220988' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:45:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:45:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1833220988' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:45:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1833220988' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:45:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1833220988' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:45:36 compute-0 nova_compute[265391]: 2025-09-30 18:45:36.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:37 compute-0 nova_compute[265391]: 2025-09-30 18:45:37.139 2 INFO nova.virt.libvirt.driver [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Creating config drive at /var/lib/nova/instances/31444af1-147d-40c9-b93a-19a7618c7703/disk.config
Sep 30 18:45:37 compute-0 nova_compute[265391]: 2025-09-30 18:45:37.144 2 DEBUG oslo_concurrency.processutils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/31444af1-147d-40c9-b93a-19a7618c7703/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmprduxx6qa execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:45:37 compute-0 nova_compute[265391]: 2025-09-30 18:45:37.280 2 DEBUG oslo_concurrency.processutils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/31444af1-147d-40c9-b93a-19a7618c7703/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmprduxx6qa" returned: 0 in 0.136s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:45:37 compute-0 nova_compute[265391]: 2025-09-30 18:45:37.315 2 DEBUG nova.storage.rbd_utils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] rbd image 31444af1-147d-40c9-b93a-19a7618c7703_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:45:37 compute-0 nova_compute[265391]: 2025-09-30 18:45:37.319 2 DEBUG oslo_concurrency.processutils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/31444af1-147d-40c9-b93a-19a7618c7703/disk.config 31444af1-147d-40c9-b93a-19a7618c7703_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:45:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:45:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:45:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:45:37.377Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:45:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:45:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:45:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:45:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:45:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:45:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:45:37 compute-0 nova_compute[265391]: 2025-09-30 18:45:37.480 2 DEBUG oslo_concurrency.processutils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/31444af1-147d-40c9-b93a-19a7618c7703/disk.config 31444af1-147d-40c9-b93a-19a7618c7703_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:45:37 compute-0 nova_compute[265391]: 2025-09-30 18:45:37.481 2 INFO nova.virt.libvirt.driver [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Deleting local config drive /var/lib/nova/instances/31444af1-147d-40c9-b93a-19a7618c7703/disk.config because it was imported into RBD.
Sep 30 18:45:37 compute-0 kernel: tap2f1b70a8-5d: entered promiscuous mode
Sep 30 18:45:37 compute-0 NetworkManager[45059]: <info>  [1759257937.5300] manager: (tap2f1b70a8-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/116)
Sep 30 18:45:37 compute-0 nova_compute[265391]: 2025-09-30 18:45:37.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:37 compute-0 ovn_controller[156242]: 2025-09-30T18:45:37Z|00297|binding|INFO|Claiming lport 2f1b70a8-5d22-4c58-b80e-ec83063488da for this chassis.
Sep 30 18:45:37 compute-0 ovn_controller[156242]: 2025-09-30T18:45:37Z|00298|binding|INFO|2f1b70a8-5d22-4c58-b80e-ec83063488da: Claiming fa:16:3e:95:98:c3 10.100.0.6
Sep 30 18:45:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:37.539 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:98:c3 10.100.0.6'], port_security=['fa:16:3e:95:98:c3 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '31444af1-147d-40c9-b93a-19a7618c7703', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a724d83-7a03-449e-b06f-f9f1f1bb686e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0e1882e8f3e74aa3840e38f2ce263f25', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ac9c09fc-7c26-43d2-bbe1-4bd3d1cfcf25', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=793ccd68-a96c-4ced-8449-bfb1c479c4b4, chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=2f1b70a8-5d22-4c58-b80e-ec83063488da) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:45:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:37.540 166158 INFO neutron.agent.ovn.metadata.agent [-] Port 2f1b70a8-5d22-4c58-b80e-ec83063488da in datapath 4a724d83-7a03-449e-b06f-f9f1f1bb686e bound to our chassis
Sep 30 18:45:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:37.542 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4a724d83-7a03-449e-b06f-f9f1f1bb686e
Sep 30 18:45:37 compute-0 ovn_controller[156242]: 2025-09-30T18:45:37Z|00299|binding|INFO|Setting lport 2f1b70a8-5d22-4c58-b80e-ec83063488da ovn-installed in OVS
Sep 30 18:45:37 compute-0 ovn_controller[156242]: 2025-09-30T18:45:37Z|00300|binding|INFO|Setting lport 2f1b70a8-5d22-4c58-b80e-ec83063488da up in Southbound
Sep 30 18:45:37 compute-0 nova_compute[265391]: 2025-09-30 18:45:37.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:37.562 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[1e9f6209-421e-4b74-a6d0-67e992b8654c]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:45:37 compute-0 systemd-udevd[356499]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:45:37 compute-0 systemd-machined[219917]: New machine qemu-27-instance-00000023.
Sep 30 18:45:37 compute-0 NetworkManager[45059]: <info>  [1759257937.5801] device (tap2f1b70a8-5d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 18:45:37 compute-0 NetworkManager[45059]: <info>  [1759257937.5814] device (tap2f1b70a8-5d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Sep 30 18:45:37 compute-0 systemd[1]: Started Virtual Machine qemu-27-instance-00000023.
Sep 30 18:45:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:37.592 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[2b5d954d-78ab-4d83-a102-e5f0d82854f7]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:45:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:37.595 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[413e3f3f-9060-473d-858c-ca596ee941cb]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:45:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:37.628 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[a4598377-8293-4f81-8ff5-c58f7db7575c]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:45:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:37.647 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[860d9fed-5d44-4324-9323-c9bce50b3f2f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a724d83-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:31:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 81], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 651255, 'reachable_time': 40989, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356511, 'error': None, 'target': 'ovnmeta-4a724d83-7a03-449e-b06f-f9f1f1bb686e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:45:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:37.672 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[e17db9d0-30f0-4e60-ad5b-5a2f7a826292]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4a724d83-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651267, 'tstamp': 651267}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356513, 'error': None, 'target': 'ovnmeta-4a724d83-7a03-449e-b06f-f9f1f1bb686e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4a724d83-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651270, 'tstamp': 651270}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356513, 'error': None, 'target': 'ovnmeta-4a724d83-7a03-449e-b06f-f9f1f1bb686e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:45:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:37.673 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a724d83-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:45:37 compute-0 nova_compute[265391]: 2025-09-30 18:45:37.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:37 compute-0 nova_compute[265391]: 2025-09-30 18:45:37.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:37.677 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a724d83-70, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:45:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:37.677 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:45:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:37.677 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4a724d83-70, col_values=(('external_ids', {'iface-id': 'd979d9c2-641a-4559-b95a-f55833182093'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:45:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:37.677 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:45:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:37.679 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[380f14c6-d3f8-451c-85d1-295e6ac910b9]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-4a724d83-7a03-449e-b06f-f9f1f1bb686e\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/4a724d83-7a03-449e-b06f-f9f1f1bb686e.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 4a724d83-7a03-449e-b06f-f9f1f1bb686e\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:45:37 compute-0 ceph-mon[73755]: pgmap v2017: 353 pgs: 353 active+clean; 167 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Sep 30 18:45:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:45:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2018: 353 pgs: 353 active+clean; 167 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Sep 30 18:45:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:45:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:45:37.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:45:38 compute-0 nova_compute[265391]: 2025-09-30 18:45:38.132 2 DEBUG nova.compute.manager [req-73738dd1-600c-40d0-af53-8947dbef235b req-6f0b63f9-ca4d-43b8-9104-eb85d838db33 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Received event network-vif-plugged-2f1b70a8-5d22-4c58-b80e-ec83063488da external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:45:38 compute-0 nova_compute[265391]: 2025-09-30 18:45:38.132 2 DEBUG oslo_concurrency.lockutils [req-73738dd1-600c-40d0-af53-8947dbef235b req-6f0b63f9-ca4d-43b8-9104-eb85d838db33 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "31444af1-147d-40c9-b93a-19a7618c7703-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:45:38 compute-0 nova_compute[265391]: 2025-09-30 18:45:38.132 2 DEBUG oslo_concurrency.lockutils [req-73738dd1-600c-40d0-af53-8947dbef235b req-6f0b63f9-ca4d-43b8-9104-eb85d838db33 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "31444af1-147d-40c9-b93a-19a7618c7703-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:45:38 compute-0 nova_compute[265391]: 2025-09-30 18:45:38.133 2 DEBUG oslo_concurrency.lockutils [req-73738dd1-600c-40d0-af53-8947dbef235b req-6f0b63f9-ca4d-43b8-9104-eb85d838db33 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "31444af1-147d-40c9-b93a-19a7618c7703-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:45:38 compute-0 nova_compute[265391]: 2025-09-30 18:45:38.133 2 DEBUG nova.compute.manager [req-73738dd1-600c-40d0-af53-8947dbef235b req-6f0b63f9-ca4d-43b8-9104-eb85d838db33 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Processing event network-vif-plugged-2f1b70a8-5d22-4c58-b80e-ec83063488da _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:45:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:45:38.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:38 compute-0 nova_compute[265391]: 2025-09-30 18:45:38.520 2 DEBUG nova.compute.manager [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:45:38 compute-0 nova_compute[265391]: 2025-09-30 18:45:38.524 2 DEBUG nova.virt.libvirt.driver [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Sep 30 18:45:38 compute-0 nova_compute[265391]: 2025-09-30 18:45:38.528 2 INFO nova.virt.libvirt.driver [-] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Instance spawned successfully.
Sep 30 18:45:38 compute-0 nova_compute[265391]: 2025-09-30 18:45:38.528 2 DEBUG nova.virt.libvirt.driver [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Sep 30 18:45:38 compute-0 ceph-mon[73755]: pgmap v2018: 353 pgs: 353 active+clean; 167 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Sep 30 18:45:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:45:38] "GET /metrics HTTP/1.1" 200 46724 "" "Prometheus/2.51.0"
Sep 30 18:45:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:45:38] "GET /metrics HTTP/1.1" 200 46724 "" "Prometheus/2.51.0"
Sep 30 18:45:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:45:38.917Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:45:38 compute-0 nova_compute[265391]: 2025-09-30 18:45:38.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:45:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:45:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:45:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:45:39 compute-0 nova_compute[265391]: 2025-09-30 18:45:39.042 2 DEBUG nova.virt.libvirt.driver [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:45:39 compute-0 nova_compute[265391]: 2025-09-30 18:45:39.045 2 DEBUG nova.virt.libvirt.driver [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:45:39 compute-0 nova_compute[265391]: 2025-09-30 18:45:39.046 2 DEBUG nova.virt.libvirt.driver [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:45:39 compute-0 nova_compute[265391]: 2025-09-30 18:45:39.046 2 DEBUG nova.virt.libvirt.driver [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:45:39 compute-0 nova_compute[265391]: 2025-09-30 18:45:39.047 2 DEBUG nova.virt.libvirt.driver [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:45:39 compute-0 nova_compute[265391]: 2025-09-30 18:45:39.047 2 DEBUG nova.virt.libvirt.driver [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:45:39 compute-0 podman[356557]: 2025-09-30 18:45:39.542405309 +0000 UTC m=+0.076849923 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest)
Sep 30 18:45:39 compute-0 podman[356559]: 2025-09-30 18:45:39.558832765 +0000 UTC m=+0.084030959 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:45:39 compute-0 nova_compute[265391]: 2025-09-30 18:45:39.562 2 INFO nova.compute.manager [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Took 13.37 seconds to spawn the instance on the hypervisor.
Sep 30 18:45:39 compute-0 nova_compute[265391]: 2025-09-30 18:45:39.562 2 DEBUG nova.compute.manager [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Sep 30 18:45:39 compute-0 podman[356558]: 2025-09-30 18:45:39.574357757 +0000 UTC m=+0.107043335 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250930, config_id=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:45:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2019: 353 pgs: 353 active+clean; 167 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 350 KiB/s rd, 3.9 MiB/s wr, 101 op/s
Sep 30 18:45:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:45:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:45:39.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:45:40 compute-0 nova_compute[265391]: 2025-09-30 18:45:40.101 2 INFO nova.compute.manager [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Took 19.13 seconds to build instance.
Sep 30 18:45:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:45:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:45:40.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:45:40 compute-0 nova_compute[265391]: 2025-09-30 18:45:40.229 2 DEBUG nova.compute.manager [req-f7929860-f9da-4713-bee7-050f2aac17e8 req-239dfb0e-ddc0-43bc-86b7-937a684ddaf0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Received event network-vif-plugged-2f1b70a8-5d22-4c58-b80e-ec83063488da external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:45:40 compute-0 nova_compute[265391]: 2025-09-30 18:45:40.229 2 DEBUG oslo_concurrency.lockutils [req-f7929860-f9da-4713-bee7-050f2aac17e8 req-239dfb0e-ddc0-43bc-86b7-937a684ddaf0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "31444af1-147d-40c9-b93a-19a7618c7703-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:45:40 compute-0 nova_compute[265391]: 2025-09-30 18:45:40.230 2 DEBUG oslo_concurrency.lockutils [req-f7929860-f9da-4713-bee7-050f2aac17e8 req-239dfb0e-ddc0-43bc-86b7-937a684ddaf0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "31444af1-147d-40c9-b93a-19a7618c7703-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:45:40 compute-0 nova_compute[265391]: 2025-09-30 18:45:40.230 2 DEBUG oslo_concurrency.lockutils [req-f7929860-f9da-4713-bee7-050f2aac17e8 req-239dfb0e-ddc0-43bc-86b7-937a684ddaf0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "31444af1-147d-40c9-b93a-19a7618c7703-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:45:40 compute-0 nova_compute[265391]: 2025-09-30 18:45:40.231 2 DEBUG nova.compute.manager [req-f7929860-f9da-4713-bee7-050f2aac17e8 req-239dfb0e-ddc0-43bc-86b7-937a684ddaf0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] No waiting events found dispatching network-vif-plugged-2f1b70a8-5d22-4c58-b80e-ec83063488da pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:45:40 compute-0 nova_compute[265391]: 2025-09-30 18:45:40.231 2 WARNING nova.compute.manager [req-f7929860-f9da-4713-bee7-050f2aac17e8 req-239dfb0e-ddc0-43bc-86b7-937a684ddaf0 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Received unexpected event network-vif-plugged-2f1b70a8-5d22-4c58-b80e-ec83063488da for instance with vm_state active and task_state None.
Sep 30 18:45:40 compute-0 nova_compute[265391]: 2025-09-30 18:45:40.608 2 DEBUG oslo_concurrency.lockutils [None req-254bb2e9-dee2-49e2-b6b2-aa289ffa64e3 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lock "31444af1-147d-40c9-b93a-19a7618c7703" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.665s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:45:40 compute-0 ceph-mon[73755]: pgmap v2019: 353 pgs: 353 active+clean; 167 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 350 KiB/s rd, 3.9 MiB/s wr, 101 op/s
Sep 30 18:45:40 compute-0 sudo[356627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:45:40 compute-0 sudo[356627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:45:40 compute-0 sudo[356627]: pam_unix(sudo:session): session closed for user root
Sep 30 18:45:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:45:41 compute-0 nova_compute[265391]: 2025-09-30 18:45:41.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2020: 353 pgs: 353 active+clean; 167 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 7.7 KiB/s rd, 26 KiB/s wr, 10 op/s
Sep 30 18:45:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:45:41.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:45:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:45:42.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:45:42 compute-0 ceph-mon[73755]: pgmap v2020: 353 pgs: 353 active+clean; 167 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 7.7 KiB/s rd, 26 KiB/s wr, 10 op/s
Sep 30 18:45:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2021: 353 pgs: 353 active+clean; 167 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 74 op/s
Sep 30 18:45:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:45:43.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:45:43.903Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:45:43 compute-0 nova_compute[265391]: 2025-09-30 18:45:43.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:45:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:45:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:45:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:45:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:45:44.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:44 compute-0 ceph-mon[73755]: pgmap v2021: 353 pgs: 353 active+clean; 167 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 74 op/s
Sep 30 18:45:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2022: 353 pgs: 353 active+clean; 167 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:45:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:45:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:45:45.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:45:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:45:46.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:45:46 compute-0 nova_compute[265391]: 2025-09-30 18:45:46.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:46 compute-0 ceph-mon[73755]: pgmap v2022: 353 pgs: 353 active+clean; 167 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:45:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:45:47.377Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:45:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2023: 353 pgs: 353 active+clean; 167 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:45:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:45:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:45:47.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:45:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:45:48.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:45:48] "GET /metrics HTTP/1.1" 200 46724 "" "Prometheus/2.51.0"
Sep 30 18:45:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:45:48] "GET /metrics HTTP/1.1" 200 46724 "" "Prometheus/2.51.0"
Sep 30 18:45:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:45:48.918Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:45:48 compute-0 nova_compute[265391]: 2025-09-30 18:45:48.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:48 compute-0 ceph-mon[73755]: pgmap v2023: 353 pgs: 353 active+clean; 167 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:45:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:45:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:45:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:45:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:45:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2024: 353 pgs: 353 active+clean; 167 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Sep 30 18:45:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:45:49.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:45:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:45:50.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:45:50 compute-0 ovn_controller[156242]: 2025-09-30T18:45:50Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:95:98:c3 10.100.0.6
Sep 30 18:45:50 compute-0 ovn_controller[156242]: 2025-09-30T18:45:50Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:95:98:c3 10.100.0.6
Sep 30 18:45:51 compute-0 ceph-mon[73755]: pgmap v2024: 353 pgs: 353 active+clean; 167 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Sep 30 18:45:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:45:51 compute-0 nova_compute[265391]: 2025-09-30 18:45:51.543 2 DEBUG oslo_concurrency.lockutils [None req-9dd87049-f47a-45e4-ba24-bed76e8cfc05 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Acquiring lock "31444af1-147d-40c9-b93a-19a7618c7703" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:45:51 compute-0 nova_compute[265391]: 2025-09-30 18:45:51.543 2 DEBUG oslo_concurrency.lockutils [None req-9dd87049-f47a-45e4-ba24-bed76e8cfc05 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lock "31444af1-147d-40c9-b93a-19a7618c7703" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:45:51 compute-0 nova_compute[265391]: 2025-09-30 18:45:51.543 2 DEBUG oslo_concurrency.lockutils [None req-9dd87049-f47a-45e4-ba24-bed76e8cfc05 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Acquiring lock "31444af1-147d-40c9-b93a-19a7618c7703-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:45:51 compute-0 nova_compute[265391]: 2025-09-30 18:45:51.544 2 DEBUG oslo_concurrency.lockutils [None req-9dd87049-f47a-45e4-ba24-bed76e8cfc05 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lock "31444af1-147d-40c9-b93a-19a7618c7703-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:45:51 compute-0 nova_compute[265391]: 2025-09-30 18:45:51.544 2 DEBUG oslo_concurrency.lockutils [None req-9dd87049-f47a-45e4-ba24-bed76e8cfc05 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lock "31444af1-147d-40c9-b93a-19a7618c7703-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:45:51 compute-0 nova_compute[265391]: 2025-09-30 18:45:51.560 2 INFO nova.compute.manager [None req-9dd87049-f47a-45e4-ba24-bed76e8cfc05 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Terminating instance
Sep 30 18:45:51 compute-0 nova_compute[265391]: 2025-09-30 18:45:51.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2025: 353 pgs: 353 active+clean; 167 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1023 B/s wr, 64 op/s
Sep 30 18:45:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:45:51.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:52 compute-0 nova_compute[265391]: 2025-09-30 18:45:52.081 2 DEBUG nova.compute.manager [None req-9dd87049-f47a-45e4-ba24-bed76e8cfc05 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Sep 30 18:45:52 compute-0 kernel: tap2f1b70a8-5d (unregistering): left promiscuous mode
Sep 30 18:45:52 compute-0 NetworkManager[45059]: <info>  [1759257952.1374] device (tap2f1b70a8-5d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 18:45:52 compute-0 ovn_controller[156242]: 2025-09-30T18:45:52Z|00301|binding|INFO|Releasing lport 2f1b70a8-5d22-4c58-b80e-ec83063488da from this chassis (sb_readonly=0)
Sep 30 18:45:52 compute-0 ovn_controller[156242]: 2025-09-30T18:45:52Z|00302|binding|INFO|Setting lport 2f1b70a8-5d22-4c58-b80e-ec83063488da down in Southbound
Sep 30 18:45:52 compute-0 ovn_controller[156242]: 2025-09-30T18:45:52Z|00303|binding|INFO|Removing iface tap2f1b70a8-5d ovn-installed in OVS
Sep 30 18:45:52 compute-0 nova_compute[265391]: 2025-09-30 18:45:52.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:52 compute-0 nova_compute[265391]: 2025-09-30 18:45:52.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:52 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:52.150 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:98:c3 10.100.0.6'], port_security=['fa:16:3e:95:98:c3 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '31444af1-147d-40c9-b93a-19a7618c7703', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a724d83-7a03-449e-b06f-f9f1f1bb686e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0e1882e8f3e74aa3840e38f2ce263f25', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'ac9c09fc-7c26-43d2-bbe1-4bd3d1cfcf25', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=793ccd68-a96c-4ced-8449-bfb1c479c4b4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=2f1b70a8-5d22-4c58-b80e-ec83063488da) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:45:52 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:52.151 166158 INFO neutron.agent.ovn.metadata.agent [-] Port 2f1b70a8-5d22-4c58-b80e-ec83063488da in datapath 4a724d83-7a03-449e-b06f-f9f1f1bb686e unbound from our chassis
Sep 30 18:45:52 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:52.152 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4a724d83-7a03-449e-b06f-f9f1f1bb686e
Sep 30 18:45:52 compute-0 nova_compute[265391]: 2025-09-30 18:45:52.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:52 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:52.173 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[29e2e196-1c68-4317-b4b5-efae58bedfd4]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:45:52 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000023.scope: Deactivated successfully.
Sep 30 18:45:52 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000023.scope: Consumed 13.187s CPU time.
Sep 30 18:45:52 compute-0 systemd-machined[219917]: Machine qemu-27-instance-00000023 terminated.
Sep 30 18:45:52 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:52.214 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[ced4f7e4-c446-4d1f-8b36-8ed23ebbccc8]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:45:52 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:52.220 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[597f873c-abf3-429d-883e-3216ee8ea63d]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:45:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:45:52.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:52 compute-0 podman[356668]: 2025-09-30 18:45:52.24622284 +0000 UTC m=+0.077116620 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Sep 30 18:45:52 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:52.253 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[8a8a1113-387a-4c07-8aa4-3f76cb5bd9fc]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:45:52 compute-0 podman[356665]: 2025-09-30 18:45:52.270733475 +0000 UTC m=+0.102142498 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.4)
Sep 30 18:45:52 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:52.272 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[7029efb7-8678-459d-bf5d-3e33c5b85efe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a724d83-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:31:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 81], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 651255, 'reachable_time': 40989, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356730, 'error': None, 'target': 'ovnmeta-4a724d83-7a03-449e-b06f-f9f1f1bb686e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:45:52 compute-0 podman[356670]: 2025-09-30 18:45:52.28441075 +0000 UTC m=+0.108161575 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, version=9.6, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, build-date=2025-08-20T13:12:41, distribution-scope=public, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350)
Sep 30 18:45:52 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:52.288 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[1a213447-06d5-40da-8a75-68579ffa01f1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4a724d83-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651267, 'tstamp': 651267}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356731, 'error': None, 'target': 'ovnmeta-4a724d83-7a03-449e-b06f-f9f1f1bb686e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4a724d83-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651270, 'tstamp': 651270}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356731, 'error': None, 'target': 'ovnmeta-4a724d83-7a03-449e-b06f-f9f1f1bb686e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:45:52 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:52.290 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a724d83-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:45:52 compute-0 nova_compute[265391]: 2025-09-30 18:45:52.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:52 compute-0 nova_compute[265391]: 2025-09-30 18:45:52.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:52 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:52.297 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a724d83-70, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:45:52 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:52.297 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:45:52 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:52.297 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4a724d83-70, col_values=(('external_ids', {'iface-id': 'd979d9c2-641a-4559-b95a-f55833182093'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:45:52 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:52.298 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:45:52 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:52.298 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[3f8270d1-08b3-4507-9c29-6c1554b3c042]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-4a724d83-7a03-449e-b06f-f9f1f1bb686e\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/4a724d83-7a03-449e-b06f-f9f1f1bb686e.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 4a724d83-7a03-449e-b06f-f9f1f1bb686e\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:45:52 compute-0 nova_compute[265391]: 2025-09-30 18:45:52.316 2 INFO nova.virt.libvirt.driver [-] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Instance destroyed successfully.
Sep 30 18:45:52 compute-0 nova_compute[265391]: 2025-09-30 18:45:52.317 2 DEBUG nova.objects.instance [None req-9dd87049-f47a-45e4-ba24-bed76e8cfc05 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lazy-loading 'resources' on Instance uuid 31444af1-147d-40c9-b93a-19a7618c7703 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:45:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:45:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:45:52 compute-0 nova_compute[265391]: 2025-09-30 18:45:52.823 2 DEBUG nova.virt.libvirt.vif [None req-9dd87049-f47a-45e4-ba24-bed76e8cfc05 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:45:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalancingStrategy-server-2085781867',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancingstrategy-server-2085781867',id=35,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:45:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0e1882e8f3e74aa3840e38f2ce263f25',ramdisk_id='',reservation_id='r-z8t0wt1j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadBalancingStrategy-1499811811',owner_user_name='tempest-TestExecuteWorkloadBalancingStrategy-1499811811-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:45:39Z,user_data=None,user_id='a881d4a6df1c47b5ae0d12c72bd2a02b',uuid=31444af1-147d-40c9-b93a-19a7618c7703,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2f1b70a8-5d22-4c58-b80e-ec83063488da", "address": "fa:16:3e:95:98:c3", "network": {"id": "4a724d83-7a03-449e-b06f-f9f1f1bb686e", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalancingStrategy-921789264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c994362300fc4b68b72392279f890ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f1b70a8-5d", "ovs_interfaceid": "2f1b70a8-5d22-4c58-b80e-ec83063488da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Sep 30 18:45:52 compute-0 nova_compute[265391]: 2025-09-30 18:45:52.825 2 DEBUG nova.network.os_vif_util [None req-9dd87049-f47a-45e4-ba24-bed76e8cfc05 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Converting VIF {"id": "2f1b70a8-5d22-4c58-b80e-ec83063488da", "address": "fa:16:3e:95:98:c3", "network": {"id": "4a724d83-7a03-449e-b06f-f9f1f1bb686e", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalancingStrategy-921789264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c994362300fc4b68b72392279f890ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f1b70a8-5d", "ovs_interfaceid": "2f1b70a8-5d22-4c58-b80e-ec83063488da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:45:52 compute-0 nova_compute[265391]: 2025-09-30 18:45:52.826 2 DEBUG nova.network.os_vif_util [None req-9dd87049-f47a-45e4-ba24-bed76e8cfc05 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:98:c3,bridge_name='br-int',has_traffic_filtering=True,id=2f1b70a8-5d22-4c58-b80e-ec83063488da,network=Network(4a724d83-7a03-449e-b06f-f9f1f1bb686e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f1b70a8-5d') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:45:52 compute-0 nova_compute[265391]: 2025-09-30 18:45:52.826 2 DEBUG os_vif [None req-9dd87049-f47a-45e4-ba24-bed76e8cfc05 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:98:c3,bridge_name='br-int',has_traffic_filtering=True,id=2f1b70a8-5d22-4c58-b80e-ec83063488da,network=Network(4a724d83-7a03-449e-b06f-f9f1f1bb686e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f1b70a8-5d') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Sep 30 18:45:52 compute-0 nova_compute[265391]: 2025-09-30 18:45:52.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:52 compute-0 nova_compute[265391]: 2025-09-30 18:45:52.829 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f1b70a8-5d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:45:52 compute-0 nova_compute[265391]: 2025-09-30 18:45:52.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:52 compute-0 nova_compute[265391]: 2025-09-30 18:45:52.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:52 compute-0 nova_compute[265391]: 2025-09-30 18:45:52.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:52 compute-0 nova_compute[265391]: 2025-09-30 18:45:52.836 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=73221b52-5cef-45db-93dd-6183d3cd46b8) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:45:52 compute-0 nova_compute[265391]: 2025-09-30 18:45:52.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:52 compute-0 nova_compute[265391]: 2025-09-30 18:45:52.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:52 compute-0 nova_compute[265391]: 2025-09-30 18:45:52.841 2 INFO os_vif [None req-9dd87049-f47a-45e4-ba24-bed76e8cfc05 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:98:c3,bridge_name='br-int',has_traffic_filtering=True,id=2f1b70a8-5d22-4c58-b80e-ec83063488da,network=Network(4a724d83-7a03-449e-b06f-f9f1f1bb686e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f1b70a8-5d')
Sep 30 18:45:52 compute-0 nova_compute[265391]: 2025-09-30 18:45:52.933 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:45:53 compute-0 ceph-mon[73755]: pgmap v2025: 353 pgs: 353 active+clean; 167 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1023 B/s wr, 64 op/s
Sep 30 18:45:53 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:45:53 compute-0 nova_compute[265391]: 2025-09-30 18:45:53.119 2 DEBUG nova.compute.manager [req-297412a2-4e65-447b-9c46-8439ca8dcfc9 req-c6ee0aa9-5b57-4d82-960b-7cb0733f6188 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Received event network-vif-unplugged-2f1b70a8-5d22-4c58-b80e-ec83063488da external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:45:53 compute-0 nova_compute[265391]: 2025-09-30 18:45:53.120 2 DEBUG oslo_concurrency.lockutils [req-297412a2-4e65-447b-9c46-8439ca8dcfc9 req-c6ee0aa9-5b57-4d82-960b-7cb0733f6188 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "31444af1-147d-40c9-b93a-19a7618c7703-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:45:53 compute-0 nova_compute[265391]: 2025-09-30 18:45:53.121 2 DEBUG oslo_concurrency.lockutils [req-297412a2-4e65-447b-9c46-8439ca8dcfc9 req-c6ee0aa9-5b57-4d82-960b-7cb0733f6188 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "31444af1-147d-40c9-b93a-19a7618c7703-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:45:53 compute-0 nova_compute[265391]: 2025-09-30 18:45:53.122 2 DEBUG oslo_concurrency.lockutils [req-297412a2-4e65-447b-9c46-8439ca8dcfc9 req-c6ee0aa9-5b57-4d82-960b-7cb0733f6188 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "31444af1-147d-40c9-b93a-19a7618c7703-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:45:53 compute-0 nova_compute[265391]: 2025-09-30 18:45:53.122 2 DEBUG nova.compute.manager [req-297412a2-4e65-447b-9c46-8439ca8dcfc9 req-c6ee0aa9-5b57-4d82-960b-7cb0733f6188 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] No waiting events found dispatching network-vif-unplugged-2f1b70a8-5d22-4c58-b80e-ec83063488da pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:45:53 compute-0 nova_compute[265391]: 2025-09-30 18:45:53.122 2 DEBUG nova.compute.manager [req-297412a2-4e65-447b-9c46-8439ca8dcfc9 req-c6ee0aa9-5b57-4d82-960b-7cb0733f6188 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Received event network-vif-unplugged-2f1b70a8-5d22-4c58-b80e-ec83063488da for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:45:53 compute-0 nova_compute[265391]: 2025-09-30 18:45:53.231 2 INFO nova.virt.libvirt.driver [None req-9dd87049-f47a-45e4-ba24-bed76e8cfc05 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Deleting instance files /var/lib/nova/instances/31444af1-147d-40c9-b93a-19a7618c7703_del
Sep 30 18:45:53 compute-0 nova_compute[265391]: 2025-09-30 18:45:53.232 2 INFO nova.virt.libvirt.driver [None req-9dd87049-f47a-45e4-ba24-bed76e8cfc05 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Deletion of /var/lib/nova/instances/31444af1-147d-40c9-b93a-19a7618c7703_del complete
Sep 30 18:45:53 compute-0 nova_compute[265391]: 2025-09-30 18:45:53.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:45:53 compute-0 nova_compute[265391]: 2025-09-30 18:45:53.743 2 INFO nova.compute.manager [None req-9dd87049-f47a-45e4-ba24-bed76e8cfc05 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Took 1.66 seconds to destroy the instance on the hypervisor.
Sep 30 18:45:53 compute-0 nova_compute[265391]: 2025-09-30 18:45:53.744 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-9dd87049-f47a-45e4-ba24-bed76e8cfc05 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Sep 30 18:45:53 compute-0 nova_compute[265391]: 2025-09-30 18:45:53.744 2 DEBUG nova.compute.manager [-] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Sep 30 18:45:53 compute-0 nova_compute[265391]: 2025-09-30 18:45:53.744 2 DEBUG nova.network.neutron [-] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Sep 30 18:45:53 compute-0 nova_compute[265391]: 2025-09-30 18:45:53.745 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:45:53 compute-0 nova_compute[265391]: 2025-09-30 18:45:53.865 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:45:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2026: 353 pgs: 353 active+clean; 121 MiB data, 467 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 156 op/s
Sep 30 18:45:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:45:53.904Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:45:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:45:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:45:53.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:45:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:45:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:45:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:45:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:45:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:45:54.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:54.339 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:45:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:54.339 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:45:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:45:54.339 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:45:54 compute-0 nova_compute[265391]: 2025-09-30 18:45:54.655 2 DEBUG nova.network.neutron [-] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:45:55 compute-0 ceph-mon[73755]: pgmap v2026: 353 pgs: 353 active+clean; 121 MiB data, 467 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 156 op/s
Sep 30 18:45:55 compute-0 nova_compute[265391]: 2025-09-30 18:45:55.163 2 INFO nova.compute.manager [-] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Took 1.42 seconds to deallocate network for instance.
Sep 30 18:45:55 compute-0 nova_compute[265391]: 2025-09-30 18:45:55.213 2 DEBUG nova.compute.manager [req-c29fd266-86fb-4154-9f31-4cd86c5e4907 req-fa1913aa-0817-466e-8c0b-744231165554 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Received event network-vif-unplugged-2f1b70a8-5d22-4c58-b80e-ec83063488da external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:45:55 compute-0 nova_compute[265391]: 2025-09-30 18:45:55.214 2 DEBUG oslo_concurrency.lockutils [req-c29fd266-86fb-4154-9f31-4cd86c5e4907 req-fa1913aa-0817-466e-8c0b-744231165554 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "31444af1-147d-40c9-b93a-19a7618c7703-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:45:55 compute-0 nova_compute[265391]: 2025-09-30 18:45:55.215 2 DEBUG oslo_concurrency.lockutils [req-c29fd266-86fb-4154-9f31-4cd86c5e4907 req-fa1913aa-0817-466e-8c0b-744231165554 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "31444af1-147d-40c9-b93a-19a7618c7703-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:45:55 compute-0 nova_compute[265391]: 2025-09-30 18:45:55.215 2 DEBUG oslo_concurrency.lockutils [req-c29fd266-86fb-4154-9f31-4cd86c5e4907 req-fa1913aa-0817-466e-8c0b-744231165554 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "31444af1-147d-40c9-b93a-19a7618c7703-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:45:55 compute-0 nova_compute[265391]: 2025-09-30 18:45:55.215 2 DEBUG nova.compute.manager [req-c29fd266-86fb-4154-9f31-4cd86c5e4907 req-fa1913aa-0817-466e-8c0b-744231165554 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] No waiting events found dispatching network-vif-unplugged-2f1b70a8-5d22-4c58-b80e-ec83063488da pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:45:55 compute-0 nova_compute[265391]: 2025-09-30 18:45:55.215 2 DEBUG nova.compute.manager [req-c29fd266-86fb-4154-9f31-4cd86c5e4907 req-fa1913aa-0817-466e-8c0b-744231165554 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Received event network-vif-unplugged-2f1b70a8-5d22-4c58-b80e-ec83063488da for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:45:55 compute-0 nova_compute[265391]: 2025-09-30 18:45:55.216 2 DEBUG nova.compute.manager [req-c29fd266-86fb-4154-9f31-4cd86c5e4907 req-fa1913aa-0817-466e-8c0b-744231165554 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 31444af1-147d-40c9-b93a-19a7618c7703] Received event network-vif-deleted-2f1b70a8-5d22-4c58-b80e-ec83063488da external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:45:55 compute-0 nova_compute[265391]: 2025-09-30 18:45:55.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:45:55 compute-0 nova_compute[265391]: 2025-09-30 18:45:55.691 2 DEBUG oslo_concurrency.lockutils [None req-9dd87049-f47a-45e4-ba24-bed76e8cfc05 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:45:55 compute-0 nova_compute[265391]: 2025-09-30 18:45:55.692 2 DEBUG oslo_concurrency.lockutils [None req-9dd87049-f47a-45e4-ba24-bed76e8cfc05 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:45:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2027: 353 pgs: 353 active+clean; 121 MiB data, 467 MiB used, 40 GiB / 40 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Sep 30 18:45:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:45:55.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:56 compute-0 nova_compute[265391]: 2025-09-30 18:45:56.014 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:45:56 compute-0 nova_compute[265391]: 2025-09-30 18:45:56.057 2 DEBUG oslo_concurrency.processutils [None req-9dd87049-f47a-45e4-ba24-bed76e8cfc05 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:45:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:45:56.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:45:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:45:56 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/258217513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:45:56 compute-0 nova_compute[265391]: 2025-09-30 18:45:56.513 2 DEBUG oslo_concurrency.processutils [None req-9dd87049-f47a-45e4-ba24-bed76e8cfc05 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:45:56 compute-0 nova_compute[265391]: 2025-09-30 18:45:56.518 2 DEBUG nova.compute.provider_tree [None req-9dd87049-f47a-45e4-ba24-bed76e8cfc05 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:45:56 compute-0 nova_compute[265391]: 2025-09-30 18:45:56.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:57 compute-0 nova_compute[265391]: 2025-09-30 18:45:57.028 2 DEBUG nova.scheduler.client.report [None req-9dd87049-f47a-45e4-ba24-bed76e8cfc05 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:45:57 compute-0 ceph-mon[73755]: pgmap v2027: 353 pgs: 353 active+clean; 121 MiB data, 467 MiB used, 40 GiB / 40 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Sep 30 18:45:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/4265865493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:45:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/258217513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:45:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:45:57.379Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:45:57 compute-0 nova_compute[265391]: 2025-09-30 18:45:57.540 2 DEBUG oslo_concurrency.lockutils [None req-9dd87049-f47a-45e4-ba24-bed76e8cfc05 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.849s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:45:57 compute-0 nova_compute[265391]: 2025-09-30 18:45:57.542 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 1.528s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:45:57 compute-0 nova_compute[265391]: 2025-09-30 18:45:57.543 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:45:57 compute-0 nova_compute[265391]: 2025-09-30 18:45:57.543 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:45:57 compute-0 nova_compute[265391]: 2025-09-30 18:45:57.543 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:45:57 compute-0 nova_compute[265391]: 2025-09-30 18:45:57.577 2 INFO nova.scheduler.client.report [None req-9dd87049-f47a-45e4-ba24-bed76e8cfc05 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Deleted allocations for instance 31444af1-147d-40c9-b93a-19a7618c7703
Sep 30 18:45:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:45:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4248379773' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:45:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:45:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4248379773' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:45:57 compute-0 nova_compute[265391]: 2025-09-30 18:45:57.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:45:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2028: 353 pgs: 353 active+clean; 121 MiB data, 467 MiB used, 40 GiB / 40 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Sep 30 18:45:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:45:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:45:57.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:45:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:45:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1949252793' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:45:58 compute-0 nova_compute[265391]: 2025-09-30 18:45:58.005 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:45:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/4248379773' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:45:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/4248379773' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:45:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1949252793' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:45:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:45:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:45:58.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:45:58 compute-0 nova_compute[265391]: 2025-09-30 18:45:58.606 2 DEBUG oslo_concurrency.lockutils [None req-9dd87049-f47a-45e4-ba24-bed76e8cfc05 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lock "31444af1-147d-40c9-b93a-19a7618c7703" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.063s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:45:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:45:58] "GET /metrics HTTP/1.1" 200 46728 "" "Prometheus/2.51.0"
Sep 30 18:45:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:45:58] "GET /metrics HTTP/1.1" 200 46728 "" "Prometheus/2.51.0"
Sep 30 18:45:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:45:58.919Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:45:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:45:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:45:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:45:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:45:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:45:59 compute-0 nova_compute[265391]: 2025-09-30 18:45:59.053 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000022 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:45:59 compute-0 nova_compute[265391]: 2025-09-30 18:45:59.054 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000022 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:45:59 compute-0 ceph-mon[73755]: pgmap v2028: 353 pgs: 353 active+clean; 121 MiB data, 467 MiB used, 40 GiB / 40 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Sep 30 18:45:59 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/4201903888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:45:59 compute-0 nova_compute[265391]: 2025-09-30 18:45:59.230 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:45:59 compute-0 nova_compute[265391]: 2025-09-30 18:45:59.232 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:45:59 compute-0 nova_compute[265391]: 2025-09-30 18:45:59.251 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.019s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:45:59 compute-0 nova_compute[265391]: 2025-09-30 18:45:59.251 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4085MB free_disk=39.94667053222656GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:45:59 compute-0 nova_compute[265391]: 2025-09-30 18:45:59.251 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:45:59 compute-0 nova_compute[265391]: 2025-09-30 18:45:59.252 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:45:59 compute-0 podman[276673]: time="2025-09-30T18:45:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:45:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:45:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:45:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:45:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10768 "" "Go-http-client/1.1"
Sep 30 18:45:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2029: 353 pgs: 353 active+clean; 121 MiB data, 424 MiB used, 40 GiB / 40 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Sep 30 18:45:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:45:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:45:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:45:59.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:46:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:46:00.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:00 compute-0 nova_compute[265391]: 2025-09-30 18:46:00.286 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Instance 232379f8-4475-4208-8b8e-8c2ed3c630a0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Sep 30 18:46:00 compute-0 nova_compute[265391]: 2025-09-30 18:46:00.286 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:46:00 compute-0 nova_compute[265391]: 2025-09-30 18:46:00.286 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=39GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:45:59 up  1:49,  0 user,  load average: 1.03, 0.91, 0.88\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_None': '1', 'num_os_type_None': '1', 'num_proj_0e1882e8f3e74aa3840e38f2ce263f25': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:46:00 compute-0 nova_compute[265391]: 2025-09-30 18:46:00.314 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:46:00 compute-0 nova_compute[265391]: 2025-09-30 18:46:00.567 2 DEBUG oslo_concurrency.lockutils [None req-6ed1dda1-dfe9-48fe-8f70-b9a6fc64a9e8 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Acquiring lock "232379f8-4475-4208-8b8e-8c2ed3c630a0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:46:00 compute-0 nova_compute[265391]: 2025-09-30 18:46:00.570 2 DEBUG oslo_concurrency.lockutils [None req-6ed1dda1-dfe9-48fe-8f70-b9a6fc64a9e8 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lock "232379f8-4475-4208-8b8e-8c2ed3c630a0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:46:00 compute-0 nova_compute[265391]: 2025-09-30 18:46:00.571 2 DEBUG oslo_concurrency.lockutils [None req-6ed1dda1-dfe9-48fe-8f70-b9a6fc64a9e8 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Acquiring lock "232379f8-4475-4208-8b8e-8c2ed3c630a0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:46:00 compute-0 nova_compute[265391]: 2025-09-30 18:46:00.572 2 DEBUG oslo_concurrency.lockutils [None req-6ed1dda1-dfe9-48fe-8f70-b9a6fc64a9e8 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lock "232379f8-4475-4208-8b8e-8c2ed3c630a0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:46:00 compute-0 nova_compute[265391]: 2025-09-30 18:46:00.572 2 DEBUG oslo_concurrency.lockutils [None req-6ed1dda1-dfe9-48fe-8f70-b9a6fc64a9e8 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lock "232379f8-4475-4208-8b8e-8c2ed3c630a0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:46:00 compute-0 nova_compute[265391]: 2025-09-30 18:46:00.594 2 INFO nova.compute.manager [None req-6ed1dda1-dfe9-48fe-8f70-b9a6fc64a9e8 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Terminating instance
Sep 30 18:46:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:46:00 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/504335263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:46:00 compute-0 nova_compute[265391]: 2025-09-30 18:46:00.740 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:46:00 compute-0 nova_compute[265391]: 2025-09-30 18:46:00.744 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:46:01 compute-0 sudo[356841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:46:01 compute-0 sudo[356841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:46:01 compute-0 sudo[356841]: pam_unix(sudo:session): session closed for user root
Sep 30 18:46:01 compute-0 nova_compute[265391]: 2025-09-30 18:46:01.113 2 DEBUG nova.compute.manager [None req-6ed1dda1-dfe9-48fe-8f70-b9a6fc64a9e8 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Sep 30 18:46:01 compute-0 ceph-mon[73755]: pgmap v2029: 353 pgs: 353 active+clean; 121 MiB data, 424 MiB used, 40 GiB / 40 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Sep 30 18:46:01 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/504335263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:46:01 compute-0 kernel: tap43f79cf7-67 (unregistering): left promiscuous mode
Sep 30 18:46:01 compute-0 NetworkManager[45059]: <info>  [1759257961.1882] device (tap43f79cf7-67): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 18:46:01 compute-0 ovn_controller[156242]: 2025-09-30T18:46:01Z|00304|binding|INFO|Releasing lport 43f79cf7-670d-4eb6-8f85-82c87ca3de15 from this chassis (sb_readonly=0)
Sep 30 18:46:01 compute-0 ovn_controller[156242]: 2025-09-30T18:46:01Z|00305|binding|INFO|Setting lport 43f79cf7-670d-4eb6-8f85-82c87ca3de15 down in Southbound
Sep 30 18:46:01 compute-0 ovn_controller[156242]: 2025-09-30T18:46:01Z|00306|binding|INFO|Removing iface tap43f79cf7-67 ovn-installed in OVS
Sep 30 18:46:01 compute-0 nova_compute[265391]: 2025-09-30 18:46:01.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:46:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:46:01.204 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:ad:c0 10.100.0.11'], port_security=['fa:16:3e:40:ad:c0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '232379f8-4475-4208-8b8e-8c2ed3c630a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a724d83-7a03-449e-b06f-f9f1f1bb686e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0e1882e8f3e74aa3840e38f2ce263f25', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'ac9c09fc-7c26-43d2-bbe1-4bd3d1cfcf25', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=793ccd68-a96c-4ced-8449-bfb1c479c4b4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=43f79cf7-670d-4eb6-8f85-82c87ca3de15) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:46:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:46:01.204 166158 INFO neutron.agent.ovn.metadata.agent [-] Port 43f79cf7-670d-4eb6-8f85-82c87ca3de15 in datapath 4a724d83-7a03-449e-b06f-f9f1f1bb686e unbound from our chassis
Sep 30 18:46:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:46:01.206 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4a724d83-7a03-449e-b06f-f9f1f1bb686e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:46:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:46:01.207 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[c1d86543-ee62-4f80-9933-f2f020770cb3]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:46:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:46:01.209 166158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4a724d83-7a03-449e-b06f-f9f1f1bb686e namespace which is not needed anymore
Sep 30 18:46:01 compute-0 nova_compute[265391]: 2025-09-30 18:46:01.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:46:01 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000022.scope: Deactivated successfully.
Sep 30 18:46:01 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000022.scope: Consumed 13.732s CPU time.
Sep 30 18:46:01 compute-0 systemd-machined[219917]: Machine qemu-26-instance-00000022 terminated.
Sep 30 18:46:01 compute-0 nova_compute[265391]: 2025-09-30 18:46:01.252 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:46:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:46:01 compute-0 podman[356890]: 2025-09-30 18:46:01.322232344 +0000 UTC m=+0.030179023 container kill f9a5681654dc5a16eb540bf2641c7d71d1272f8f2c63c16a920ec19741ae6273 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-4a724d83-7a03-449e-b06f-f9f1f1bb686e, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:46:01 compute-0 neutron-haproxy-ovnmeta-4a724d83-7a03-449e-b06f-f9f1f1bb686e[355352]: [NOTICE]   (355356) : haproxy version is 3.0.5-8e879a5
Sep 30 18:46:01 compute-0 neutron-haproxy-ovnmeta-4a724d83-7a03-449e-b06f-f9f1f1bb686e[355352]: [NOTICE]   (355356) : path to executable is /usr/sbin/haproxy
Sep 30 18:46:01 compute-0 neutron-haproxy-ovnmeta-4a724d83-7a03-449e-b06f-f9f1f1bb686e[355352]: [WARNING]  (355356) : Exiting Master process...
Sep 30 18:46:01 compute-0 neutron-haproxy-ovnmeta-4a724d83-7a03-449e-b06f-f9f1f1bb686e[355352]: [ALERT]    (355356) : Current worker (355358) exited with code 143 (Terminated)
Sep 30 18:46:01 compute-0 neutron-haproxy-ovnmeta-4a724d83-7a03-449e-b06f-f9f1f1bb686e[355352]: [WARNING]  (355356) : All workers exited. Exiting... (0)
Sep 30 18:46:01 compute-0 systemd[1]: libpod-f9a5681654dc5a16eb540bf2641c7d71d1272f8f2c63c16a920ec19741ae6273.scope: Deactivated successfully.
Sep 30 18:46:01 compute-0 nova_compute[265391]: 2025-09-30 18:46:01.346 2 INFO nova.virt.libvirt.driver [-] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Instance destroyed successfully.
Sep 30 18:46:01 compute-0 nova_compute[265391]: 2025-09-30 18:46:01.351 2 DEBUG nova.objects.instance [None req-6ed1dda1-dfe9-48fe-8f70-b9a6fc64a9e8 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lazy-loading 'resources' on Instance uuid 232379f8-4475-4208-8b8e-8c2ed3c630a0 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:46:01 compute-0 podman[356912]: 2025-09-30 18:46:01.370365631 +0000 UTC m=+0.024517476 container died f9a5681654dc5a16eb540bf2641c7d71d1272f8f2c63c16a920ec19741ae6273 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-4a724d83-7a03-449e-b06f-f9f1f1bb686e, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest)
Sep 30 18:46:01 compute-0 nova_compute[265391]: 2025-09-30 18:46:01.383 2 DEBUG nova.compute.manager [req-208de47b-e9e5-486c-98d2-83bc6435b570 req-986b7161-3d3a-48c9-9b12-df46d1c9365b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Received event network-vif-unplugged-43f79cf7-670d-4eb6-8f85-82c87ca3de15 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:46:01 compute-0 nova_compute[265391]: 2025-09-30 18:46:01.384 2 DEBUG oslo_concurrency.lockutils [req-208de47b-e9e5-486c-98d2-83bc6435b570 req-986b7161-3d3a-48c9-9b12-df46d1c9365b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "232379f8-4475-4208-8b8e-8c2ed3c630a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:46:01 compute-0 nova_compute[265391]: 2025-09-30 18:46:01.384 2 DEBUG oslo_concurrency.lockutils [req-208de47b-e9e5-486c-98d2-83bc6435b570 req-986b7161-3d3a-48c9-9b12-df46d1c9365b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "232379f8-4475-4208-8b8e-8c2ed3c630a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:46:01 compute-0 nova_compute[265391]: 2025-09-30 18:46:01.385 2 DEBUG oslo_concurrency.lockutils [req-208de47b-e9e5-486c-98d2-83bc6435b570 req-986b7161-3d3a-48c9-9b12-df46d1c9365b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "232379f8-4475-4208-8b8e-8c2ed3c630a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:46:01 compute-0 nova_compute[265391]: 2025-09-30 18:46:01.385 2 DEBUG nova.compute.manager [req-208de47b-e9e5-486c-98d2-83bc6435b570 req-986b7161-3d3a-48c9-9b12-df46d1c9365b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] No waiting events found dispatching network-vif-unplugged-43f79cf7-670d-4eb6-8f85-82c87ca3de15 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:46:01 compute-0 nova_compute[265391]: 2025-09-30 18:46:01.386 2 DEBUG nova.compute.manager [req-208de47b-e9e5-486c-98d2-83bc6435b570 req-986b7161-3d3a-48c9-9b12-df46d1c9365b 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Received event network-vif-unplugged-43f79cf7-670d-4eb6-8f85-82c87ca3de15 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:46:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f9a5681654dc5a16eb540bf2641c7d71d1272f8f2c63c16a920ec19741ae6273-userdata-shm.mount: Deactivated successfully.
Sep 30 18:46:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc00bf7f2966f6032419279011829979354ba2079bc2f409e49f4ca8245c001e-merged.mount: Deactivated successfully.
Sep 30 18:46:01 compute-0 podman[356912]: 2025-09-30 18:46:01.403462389 +0000 UTC m=+0.057614214 container cleanup f9a5681654dc5a16eb540bf2641c7d71d1272f8f2c63c16a920ec19741ae6273 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-4a724d83-7a03-449e-b06f-f9f1f1bb686e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Sep 30 18:46:01 compute-0 systemd[1]: libpod-conmon-f9a5681654dc5a16eb540bf2641c7d71d1272f8f2c63c16a920ec19741ae6273.scope: Deactivated successfully.
Sep 30 18:46:01 compute-0 openstack_network_exporter[279566]: ERROR   18:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:46:01 compute-0 openstack_network_exporter[279566]: ERROR   18:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:46:01 compute-0 openstack_network_exporter[279566]: ERROR   18:46:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:46:01 compute-0 openstack_network_exporter[279566]: ERROR   18:46:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:46:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:46:01 compute-0 openstack_network_exporter[279566]: ERROR   18:46:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:46:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:46:01 compute-0 podman[356917]: 2025-09-30 18:46:01.421651091 +0000 UTC m=+0.069792410 container remove f9a5681654dc5a16eb540bf2641c7d71d1272f8f2c63c16a920ec19741ae6273 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-4a724d83-7a03-449e-b06f-f9f1f1bb686e, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true)
Sep 30 18:46:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:46:01.440 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[f47e4eb2-6ee7-4d40-ab2a-d166f833a09e]: (4, ("Tue Sep 30 06:46:01 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-4a724d83-7a03-449e-b06f-f9f1f1bb686e (f9a5681654dc5a16eb540bf2641c7d71d1272f8f2c63c16a920ec19741ae6273)\nf9a5681654dc5a16eb540bf2641c7d71d1272f8f2c63c16a920ec19741ae6273\nTue Sep 30 06:46:01 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4a724d83-7a03-449e-b06f-f9f1f1bb686e (f9a5681654dc5a16eb540bf2641c7d71d1272f8f2c63c16a920ec19741ae6273)\nf9a5681654dc5a16eb540bf2641c7d71d1272f8f2c63c16a920ec19741ae6273\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:46:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:46:01.442 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[3db4e416-f852-4835-bf3e-027cf559c958]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:46:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:46:01.442 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4a724d83-7a03-449e-b06f-f9f1f1bb686e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4a724d83-7a03-449e-b06f-f9f1f1bb686e.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:46:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:46:01.442 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[9cca0660-5d97-4300-ab82-e5f17a352ce9]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:46:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:46:01.443 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a724d83-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:46:01 compute-0 nova_compute[265391]: 2025-09-30 18:46:01.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:46:01 compute-0 kernel: tap4a724d83-70: left promiscuous mode
Sep 30 18:46:01 compute-0 nova_compute[265391]: 2025-09-30 18:46:01.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:46:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:46:01.463 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[7fc9ebf8-4431-40e9-9204-5849646f8fda]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:46:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:46:01.493 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[0a2cb590-cdd0-4ec7-80fc-ce2a908d6724]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:46:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:46:01.495 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[d18f3c0a-b311-4bf4-bdca-e6871e3a5736]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:46:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:46:01.510 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[424ab89c-b324-4045-9d9d-69536f4c986f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 651247, 'reachable_time': 24990, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356953, 'error': None, 'target': 'ovnmeta-4a724d83-7a03-449e-b06f-f9f1f1bb686e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:46:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:46:01.513 166383 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4a724d83-7a03-449e-b06f-f9f1f1bb686e deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Sep 30 18:46:01 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:46:01.513 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[8a6bc540-bb80-4db4-8727-e2f3d78102c9]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:46:01 compute-0 systemd[1]: run-netns-ovnmeta\x2d4a724d83\x2d7a03\x2d449e\x2db06f\x2df9f1f1bb686e.mount: Deactivated successfully.
Sep 30 18:46:01 compute-0 nova_compute[265391]: 2025-09-30 18:46:01.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:46:01 compute-0 nova_compute[265391]: 2025-09-30 18:46:01.761 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:46:01 compute-0 nova_compute[265391]: 2025-09-30 18:46:01.762 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.510s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:46:01 compute-0 nova_compute[265391]: 2025-09-30 18:46:01.857 2 DEBUG nova.virt.libvirt.vif [None req-6ed1dda1-dfe9-48fe-8f70-b9a6fc64a9e8 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:44:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalancingStrategy-server-1435974356',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancingstrategy-server-1435974356',id=34,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:45:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0e1882e8f3e74aa3840e38f2ce263f25',ramdisk_id='',reservation_id='r-7s4vhh01',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadBalancingStrategy-1499811811',owner_user_name='tempest-TestExecuteWorkloadBalancingStrategy-1499811811-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:45:15Z,user_data=None,user_id='a881d4a6df1c47b5ae0d12c72bd2a02b',uuid=232379f8-4475-4208-8b8e-8c2ed3c630a0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "43f79cf7-670d-4eb6-8f85-82c87ca3de15", "address": "fa:16:3e:40:ad:c0", "network": {"id": "4a724d83-7a03-449e-b06f-f9f1f1bb686e", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalancingStrategy-921789264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c994362300fc4b68b72392279f890ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f79cf7-67", "ovs_interfaceid": "43f79cf7-670d-4eb6-8f85-82c87ca3de15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Sep 30 18:46:01 compute-0 nova_compute[265391]: 2025-09-30 18:46:01.858 2 DEBUG nova.network.os_vif_util [None req-6ed1dda1-dfe9-48fe-8f70-b9a6fc64a9e8 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Converting VIF {"id": "43f79cf7-670d-4eb6-8f85-82c87ca3de15", "address": "fa:16:3e:40:ad:c0", "network": {"id": "4a724d83-7a03-449e-b06f-f9f1f1bb686e", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalancingStrategy-921789264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c994362300fc4b68b72392279f890ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f79cf7-67", "ovs_interfaceid": "43f79cf7-670d-4eb6-8f85-82c87ca3de15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:46:01 compute-0 nova_compute[265391]: 2025-09-30 18:46:01.859 2 DEBUG nova.network.os_vif_util [None req-6ed1dda1-dfe9-48fe-8f70-b9a6fc64a9e8 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:ad:c0,bridge_name='br-int',has_traffic_filtering=True,id=43f79cf7-670d-4eb6-8f85-82c87ca3de15,network=Network(4a724d83-7a03-449e-b06f-f9f1f1bb686e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43f79cf7-67') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:46:01 compute-0 nova_compute[265391]: 2025-09-30 18:46:01.859 2 DEBUG os_vif [None req-6ed1dda1-dfe9-48fe-8f70-b9a6fc64a9e8 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:ad:c0,bridge_name='br-int',has_traffic_filtering=True,id=43f79cf7-670d-4eb6-8f85-82c87ca3de15,network=Network(4a724d83-7a03-449e-b06f-f9f1f1bb686e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43f79cf7-67') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Sep 30 18:46:01 compute-0 nova_compute[265391]: 2025-09-30 18:46:01.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:46:01 compute-0 nova_compute[265391]: 2025-09-30 18:46:01.861 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43f79cf7-67, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:46:01 compute-0 nova_compute[265391]: 2025-09-30 18:46:01.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:46:01 compute-0 nova_compute[265391]: 2025-09-30 18:46:01.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:46:01 compute-0 nova_compute[265391]: 2025-09-30 18:46:01.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:46:01 compute-0 nova_compute[265391]: 2025-09-30 18:46:01.865 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=e6aecf88-c3bc-4a9b-8e90-2ef2ceef6b09) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:46:01 compute-0 nova_compute[265391]: 2025-09-30 18:46:01.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:46:01 compute-0 nova_compute[265391]: 2025-09-30 18:46:01.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:46:01 compute-0 nova_compute[265391]: 2025-09-30 18:46:01.869 2 INFO os_vif [None req-6ed1dda1-dfe9-48fe-8f70-b9a6fc64a9e8 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:ad:c0,bridge_name='br-int',has_traffic_filtering=True,id=43f79cf7-670d-4eb6-8f85-82c87ca3de15,network=Network(4a724d83-7a03-449e-b06f-f9f1f1bb686e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43f79cf7-67')
Sep 30 18:46:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2030: 353 pgs: 353 active+clean; 121 MiB data, 424 MiB used, 40 GiB / 40 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Sep 30 18:46:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:46:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:46:01.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:46:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:46:02.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:02 compute-0 nova_compute[265391]: 2025-09-30 18:46:02.353 2 INFO nova.virt.libvirt.driver [None req-6ed1dda1-dfe9-48fe-8f70-b9a6fc64a9e8 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Deleting instance files /var/lib/nova/instances/232379f8-4475-4208-8b8e-8c2ed3c630a0_del
Sep 30 18:46:02 compute-0 nova_compute[265391]: 2025-09-30 18:46:02.355 2 INFO nova.virt.libvirt.driver [None req-6ed1dda1-dfe9-48fe-8f70-b9a6fc64a9e8 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Deletion of /var/lib/nova/instances/232379f8-4475-4208-8b8e-8c2ed3c630a0_del complete
Sep 30 18:46:02 compute-0 nova_compute[265391]: 2025-09-30 18:46:02.869 2 INFO nova.compute.manager [None req-6ed1dda1-dfe9-48fe-8f70-b9a6fc64a9e8 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Took 1.75 seconds to destroy the instance on the hypervisor.
Sep 30 18:46:02 compute-0 nova_compute[265391]: 2025-09-30 18:46:02.870 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-6ed1dda1-dfe9-48fe-8f70-b9a6fc64a9e8 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Sep 30 18:46:02 compute-0 nova_compute[265391]: 2025-09-30 18:46:02.870 2 DEBUG nova.compute.manager [-] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Sep 30 18:46:02 compute-0 nova_compute[265391]: 2025-09-30 18:46:02.870 2 DEBUG nova.network.neutron [-] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Sep 30 18:46:02 compute-0 nova_compute[265391]: 2025-09-30 18:46:02.870 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:46:02 compute-0 nova_compute[265391]: 2025-09-30 18:46:02.987 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:46:03 compute-0 ceph-mon[73755]: pgmap v2030: 353 pgs: 353 active+clean; 121 MiB data, 424 MiB used, 40 GiB / 40 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Sep 30 18:46:03 compute-0 nova_compute[265391]: 2025-09-30 18:46:03.294 2 DEBUG nova.compute.manager [req-2e449378-d87d-438b-be95-3422e3263f9e req-fed6a6f7-70e2-42ae-9a15-c677d371a779 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Received event network-vif-deleted-43f79cf7-670d-4eb6-8f85-82c87ca3de15 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:46:03 compute-0 nova_compute[265391]: 2025-09-30 18:46:03.294 2 INFO nova.compute.manager [req-2e449378-d87d-438b-be95-3422e3263f9e req-fed6a6f7-70e2-42ae-9a15-c677d371a779 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Neutron deleted interface 43f79cf7-670d-4eb6-8f85-82c87ca3de15; detaching it from the instance and deleting it from the info cache
Sep 30 18:46:03 compute-0 nova_compute[265391]: 2025-09-30 18:46:03.294 2 DEBUG nova.network.neutron [req-2e449378-d87d-438b-be95-3422e3263f9e req-fed6a6f7-70e2-42ae-9a15-c677d371a779 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:46:03 compute-0 nova_compute[265391]: 2025-09-30 18:46:03.461 2 DEBUG nova.compute.manager [req-fbf888a9-b606-44f3-8abe-5df39cb6c364 req-4eda5eb7-20ef-4a44-b8d6-36c6a36a42d6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Received event network-vif-unplugged-43f79cf7-670d-4eb6-8f85-82c87ca3de15 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:46:03 compute-0 nova_compute[265391]: 2025-09-30 18:46:03.462 2 DEBUG oslo_concurrency.lockutils [req-fbf888a9-b606-44f3-8abe-5df39cb6c364 req-4eda5eb7-20ef-4a44-b8d6-36c6a36a42d6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "232379f8-4475-4208-8b8e-8c2ed3c630a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:46:03 compute-0 nova_compute[265391]: 2025-09-30 18:46:03.462 2 DEBUG oslo_concurrency.lockutils [req-fbf888a9-b606-44f3-8abe-5df39cb6c364 req-4eda5eb7-20ef-4a44-b8d6-36c6a36a42d6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "232379f8-4475-4208-8b8e-8c2ed3c630a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:46:03 compute-0 nova_compute[265391]: 2025-09-30 18:46:03.462 2 DEBUG oslo_concurrency.lockutils [req-fbf888a9-b606-44f3-8abe-5df39cb6c364 req-4eda5eb7-20ef-4a44-b8d6-36c6a36a42d6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "232379f8-4475-4208-8b8e-8c2ed3c630a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:46:03 compute-0 nova_compute[265391]: 2025-09-30 18:46:03.463 2 DEBUG nova.compute.manager [req-fbf888a9-b606-44f3-8abe-5df39cb6c364 req-4eda5eb7-20ef-4a44-b8d6-36c6a36a42d6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] No waiting events found dispatching network-vif-unplugged-43f79cf7-670d-4eb6-8f85-82c87ca3de15 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:46:03 compute-0 nova_compute[265391]: 2025-09-30 18:46:03.463 2 DEBUG nova.compute.manager [req-fbf888a9-b606-44f3-8abe-5df39cb6c364 req-4eda5eb7-20ef-4a44-b8d6-36c6a36a42d6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Received event network-vif-unplugged-43f79cf7-670d-4eb6-8f85-82c87ca3de15 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:46:03 compute-0 nova_compute[265391]: 2025-09-30 18:46:03.744 2 DEBUG nova.network.neutron [-] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:46:03 compute-0 nova_compute[265391]: 2025-09-30 18:46:03.802 2 DEBUG nova.compute.manager [req-2e449378-d87d-438b-be95-3422e3263f9e req-fed6a6f7-70e2-42ae-9a15-c677d371a779 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Detach interface failed, port_id=43f79cf7-670d-4eb6-8f85-82c87ca3de15, reason: Instance 232379f8-4475-4208-8b8e-8c2ed3c630a0 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11646
Sep 30 18:46:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2031: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 120 op/s
Sep 30 18:46:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:46:03.905Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:46:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:46:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:46:03.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:46:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:46:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:46:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:46:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:46:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:46:04.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:04 compute-0 nova_compute[265391]: 2025-09-30 18:46:04.252 2 INFO nova.compute.manager [-] [instance: 232379f8-4475-4208-8b8e-8c2ed3c630a0] Took 1.38 seconds to deallocate network for instance.
Sep 30 18:46:04 compute-0 nova_compute[265391]: 2025-09-30 18:46:04.762 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:46:04 compute-0 nova_compute[265391]: 2025-09-30 18:46:04.763 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:46:04 compute-0 nova_compute[265391]: 2025-09-30 18:46:04.763 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:46:04 compute-0 nova_compute[265391]: 2025-09-30 18:46:04.764 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:46:04 compute-0 nova_compute[265391]: 2025-09-30 18:46:04.764 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:46:04 compute-0 nova_compute[265391]: 2025-09-30 18:46:04.764 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:46:04 compute-0 nova_compute[265391]: 2025-09-30 18:46:04.779 2 DEBUG oslo_concurrency.lockutils [None req-6ed1dda1-dfe9-48fe-8f70-b9a6fc64a9e8 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:46:04 compute-0 nova_compute[265391]: 2025-09-30 18:46:04.779 2 DEBUG oslo_concurrency.lockutils [None req-6ed1dda1-dfe9-48fe-8f70-b9a6fc64a9e8 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:46:04 compute-0 nova_compute[265391]: 2025-09-30 18:46:04.818 2 DEBUG oslo_concurrency.processutils [None req-6ed1dda1-dfe9-48fe-8f70-b9a6fc64a9e8 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:46:05 compute-0 ceph-mon[73755]: pgmap v2031: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 120 op/s
Sep 30 18:46:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:46:05 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/783028067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:46:05 compute-0 nova_compute[265391]: 2025-09-30 18:46:05.269 2 DEBUG oslo_concurrency.processutils [None req-6ed1dda1-dfe9-48fe-8f70-b9a6fc64a9e8 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:46:05 compute-0 nova_compute[265391]: 2025-09-30 18:46:05.277 2 DEBUG nova.compute.provider_tree [None req-6ed1dda1-dfe9-48fe-8f70-b9a6fc64a9e8 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:46:05 compute-0 nova_compute[265391]: 2025-09-30 18:46:05.792 2 DEBUG nova.scheduler.client.report [None req-6ed1dda1-dfe9-48fe-8f70-b9a6fc64a9e8 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:46:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2032: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Sep 30 18:46:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:46:05.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:06 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/783028067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:46:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:46:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:46:06.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:46:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:46:06 compute-0 nova_compute[265391]: 2025-09-30 18:46:06.305 2 DEBUG oslo_concurrency.lockutils [None req-6ed1dda1-dfe9-48fe-8f70-b9a6fc64a9e8 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.525s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:46:06 compute-0 nova_compute[265391]: 2025-09-30 18:46:06.346 2 INFO nova.scheduler.client.report [None req-6ed1dda1-dfe9-48fe-8f70-b9a6fc64a9e8 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Deleted allocations for instance 232379f8-4475-4208-8b8e-8c2ed3c630a0
Sep 30 18:46:06 compute-0 nova_compute[265391]: 2025-09-30 18:46:06.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:46:06 compute-0 nova_compute[265391]: 2025-09-30 18:46:06.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:46:07 compute-0 ceph-mon[73755]: pgmap v2032: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Sep 30 18:46:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:46:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:46:07 compute-0 nova_compute[265391]: 2025-09-30 18:46:07.379 2 DEBUG oslo_concurrency.lockutils [None req-6ed1dda1-dfe9-48fe-8f70-b9a6fc64a9e8 a881d4a6df1c47b5ae0d12c72bd2a02b 0e1882e8f3e74aa3840e38f2ce263f25 - - default default] Lock "232379f8-4475-4208-8b8e-8c2ed3c630a0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.808s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:46:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:46:07.380Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:46:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:46:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:46:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:46:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:46:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:46:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:46:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2033: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Sep 30 18:46:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:46:07.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:46:08
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['images', 'vms', 'volumes', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'backups', '.nfs', 'cephfs.cephfs.meta']
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:46:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:46:08.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:46:08] "GET /metrics HTTP/1.1" 200 46709 "" "Prometheus/2.51.0"
Sep 30 18:46:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:46:08] "GET /metrics HTTP/1.1" 200 46709 "" "Prometheus/2.51.0"
Sep 30 18:46:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:46:08.919Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:46:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:46:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:46:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:46:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:46:09 compute-0 ceph-mon[73755]: pgmap v2033: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Sep 30 18:46:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2034: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Sep 30 18:46:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:46:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:46:09.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:46:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:46:10.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:10 compute-0 podman[357008]: 2025-09-30 18:46:10.567660289 +0000 UTC m=+0.087413006 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Sep 30 18:46:10 compute-0 podman[357006]: 2025-09-30 18:46:10.582286318 +0000 UTC m=+0.111686715 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:46:10 compute-0 podman[357007]: 2025-09-30 18:46:10.599088754 +0000 UTC m=+0.115667229 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Sep 30 18:46:11 compute-0 ceph-mon[73755]: pgmap v2034: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Sep 30 18:46:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:46:11 compute-0 nova_compute[265391]: 2025-09-30 18:46:11.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:46:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2035: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:46:11 compute-0 nova_compute[265391]: 2025-09-30 18:46:11.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:46:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:46:11.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:46:12.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:13 compute-0 ceph-mon[73755]: pgmap v2035: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:46:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2036: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:46:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:46:13.906Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:46:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:46:13.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:46:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:46:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:46:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:46:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:46:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:46:14.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:46:15 compute-0 ceph-mon[73755]: pgmap v2036: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:46:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2037: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:46:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:46:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:46:15.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:46:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:46:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:46:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:46:16.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:46:16 compute-0 nova_compute[265391]: 2025-09-30 18:46:16.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:46:16 compute-0 nova_compute[265391]: 2025-09-30 18:46:16.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:46:17 compute-0 ceph-mon[73755]: pgmap v2037: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:46:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:46:17.380Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:46:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2038: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:46:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:46:17.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:46:18.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:18 compute-0 nova_compute[265391]: 2025-09-30 18:46:18.425 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:46:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:46:18] "GET /metrics HTTP/1.1" 200 46709 "" "Prometheus/2.51.0"
Sep 30 18:46:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:46:18] "GET /metrics HTTP/1.1" 200 46709 "" "Prometheus/2.51.0"
Sep 30 18:46:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:46:18.920Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:46:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:46:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:46:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:46:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:46:19 compute-0 ceph-mon[73755]: pgmap v2038: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:46:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2039: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:46:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:46:19.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:46:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:46:20.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:46:21 compute-0 sudo[357083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:46:21 compute-0 sudo[357083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:46:21 compute-0 sudo[357083]: pam_unix(sudo:session): session closed for user root
Sep 30 18:46:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:46:21 compute-0 ceph-mon[73755]: pgmap v2039: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:46:21 compute-0 nova_compute[265391]: 2025-09-30 18:46:21.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:46:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2040: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:46:21 compute-0 nova_compute[265391]: 2025-09-30 18:46:21.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:46:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:46:21.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:46:22.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:46:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:46:22 compute-0 podman[357110]: 2025-09-30 18:46:22.534169421 +0000 UTC m=+0.064775440 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=multipathd, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930)
Sep 30 18:46:22 compute-0 podman[357112]: 2025-09-30 18:46:22.55962185 +0000 UTC m=+0.081704388 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, config_id=edpm, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.buildah.version=1.33.7, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350)
Sep 30 18:46:22 compute-0 podman[357111]: 2025-09-30 18:46:22.559926308 +0000 UTC m=+0.085986749 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20250930, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest)
Sep 30 18:46:22 compute-0 sudo[357171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:46:23 compute-0 sudo[357171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:46:23 compute-0 sudo[357171]: pam_unix(sudo:session): session closed for user root
Sep 30 18:46:23 compute-0 sudo[357196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:46:23 compute-0 sudo[357196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:46:23 compute-0 ceph-mon[73755]: pgmap v2040: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:46:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:46:23 compute-0 sudo[357196]: pam_unix(sudo:session): session closed for user root
Sep 30 18:46:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:46:23 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:46:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:46:23 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:46:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:46:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2041: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 776 B/s rd, 0 op/s
Sep 30 18:46:23 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:46:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:46:23 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:46:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:46:23 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:46:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:46:23 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:46:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:46:23 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:46:23 compute-0 sudo[357255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:46:23 compute-0 sudo[357255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:46:23 compute-0 sudo[357255]: pam_unix(sudo:session): session closed for user root
Sep 30 18:46:23 compute-0 sudo[357281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:46:23 compute-0 sudo[357281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:46:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:46:23.907Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:46:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:46:23.908Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:46:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:46:23.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:46:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:46:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:46:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:46:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:46:24.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:24 compute-0 podman[357348]: 2025-09-30 18:46:24.248929583 +0000 UTC m=+0.021063127 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:46:24 compute-0 podman[357348]: 2025-09-30 18:46:24.380182845 +0000 UTC m=+0.152316389 container create 448fca79ab10b441f8f627d6d7d8af1799e2ac9176a4b9820156e5cafc77b68b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_kirch, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 18:46:24 compute-0 systemd[1]: Started libpod-conmon-448fca79ab10b441f8f627d6d7d8af1799e2ac9176a4b9820156e5cafc77b68b.scope.
Sep 30 18:46:24 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:46:24 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:46:24 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:46:24 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:46:24 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:46:24 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:46:24 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:46:24 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:46:24 compute-0 podman[357348]: 2025-09-30 18:46:24.513837319 +0000 UTC m=+0.285970933 container init 448fca79ab10b441f8f627d6d7d8af1799e2ac9176a4b9820156e5cafc77b68b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_kirch, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:46:24 compute-0 podman[357348]: 2025-09-30 18:46:24.523284014 +0000 UTC m=+0.295417538 container start 448fca79ab10b441f8f627d6d7d8af1799e2ac9176a4b9820156e5cafc77b68b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_kirch, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Sep 30 18:46:24 compute-0 ecstatic_kirch[357364]: 167 167
Sep 30 18:46:24 compute-0 systemd[1]: libpod-448fca79ab10b441f8f627d6d7d8af1799e2ac9176a4b9820156e5cafc77b68b.scope: Deactivated successfully.
Sep 30 18:46:24 compute-0 podman[357348]: 2025-09-30 18:46:24.636372285 +0000 UTC m=+0.408505829 container attach 448fca79ab10b441f8f627d6d7d8af1799e2ac9176a4b9820156e5cafc77b68b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 18:46:24 compute-0 podman[357348]: 2025-09-30 18:46:24.638527811 +0000 UTC m=+0.410661375 container died 448fca79ab10b441f8f627d6d7d8af1799e2ac9176a4b9820156e5cafc77b68b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_kirch, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:46:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-09ae959973de5e18ad612c3e026f73bdefef50ae9682575e60290ba9a73c1612-merged.mount: Deactivated successfully.
Sep 30 18:46:24 compute-0 podman[357348]: 2025-09-30 18:46:24.886647772 +0000 UTC m=+0.658781306 container remove 448fca79ab10b441f8f627d6d7d8af1799e2ac9176a4b9820156e5cafc77b68b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:46:24 compute-0 systemd[1]: libpod-conmon-448fca79ab10b441f8f627d6d7d8af1799e2ac9176a4b9820156e5cafc77b68b.scope: Deactivated successfully.
Sep 30 18:46:25 compute-0 podman[357388]: 2025-09-30 18:46:25.109610731 +0000 UTC m=+0.059675027 container create efece04fe0fd149cf1ad43872a55c8e1f0b5cb1d6c366bdc1ddc803f0765772e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_sutherland, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:46:25 compute-0 podman[357388]: 2025-09-30 18:46:25.081892633 +0000 UTC m=+0.031956949 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:46:25 compute-0 systemd[1]: Started libpod-conmon-efece04fe0fd149cf1ad43872a55c8e1f0b5cb1d6c366bdc1ddc803f0765772e.scope.
Sep 30 18:46:25 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:46:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef1706057fd8e24ed5ab29f86a1962dc9614bb2226bd07cc61e88018a2eecfc6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:46:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef1706057fd8e24ed5ab29f86a1962dc9614bb2226bd07cc61e88018a2eecfc6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:46:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef1706057fd8e24ed5ab29f86a1962dc9614bb2226bd07cc61e88018a2eecfc6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:46:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef1706057fd8e24ed5ab29f86a1962dc9614bb2226bd07cc61e88018a2eecfc6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:46:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef1706057fd8e24ed5ab29f86a1962dc9614bb2226bd07cc61e88018a2eecfc6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:46:25 compute-0 podman[357388]: 2025-09-30 18:46:25.334520811 +0000 UTC m=+0.284585117 container init efece04fe0fd149cf1ad43872a55c8e1f0b5cb1d6c366bdc1ddc803f0765772e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_sutherland, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Sep 30 18:46:25 compute-0 podman[357388]: 2025-09-30 18:46:25.34721596 +0000 UTC m=+0.297280246 container start efece04fe0fd149cf1ad43872a55c8e1f0b5cb1d6c366bdc1ddc803f0765772e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_sutherland, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 18:46:25 compute-0 podman[357388]: 2025-09-30 18:46:25.379269851 +0000 UTC m=+0.329334157 container attach efece04fe0fd149cf1ad43872a55c8e1f0b5cb1d6c366bdc1ddc803f0765772e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_sutherland, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Sep 30 18:46:25 compute-0 ceph-mon[73755]: pgmap v2041: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 776 B/s rd, 0 op/s
Sep 30 18:46:25 compute-0 nostalgic_sutherland[357404]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:46:25 compute-0 nostalgic_sutherland[357404]: --> All data devices are unavailable
Sep 30 18:46:25 compute-0 systemd[1]: libpod-efece04fe0fd149cf1ad43872a55c8e1f0b5cb1d6c366bdc1ddc803f0765772e.scope: Deactivated successfully.
Sep 30 18:46:25 compute-0 podman[357388]: 2025-09-30 18:46:25.718044081 +0000 UTC m=+0.668108357 container died efece04fe0fd149cf1ad43872a55c8e1f0b5cb1d6c366bdc1ddc803f0765772e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:46:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2042: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 517 B/s rd, 0 op/s
Sep 30 18:46:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef1706057fd8e24ed5ab29f86a1962dc9614bb2226bd07cc61e88018a2eecfc6-merged.mount: Deactivated successfully.
Sep 30 18:46:25 compute-0 podman[357388]: 2025-09-30 18:46:25.841998444 +0000 UTC m=+0.792062710 container remove efece04fe0fd149cf1ad43872a55c8e1f0b5cb1d6c366bdc1ddc803f0765772e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Sep 30 18:46:25 compute-0 systemd[1]: libpod-conmon-efece04fe0fd149cf1ad43872a55c8e1f0b5cb1d6c366bdc1ddc803f0765772e.scope: Deactivated successfully.
Sep 30 18:46:25 compute-0 sudo[357281]: pam_unix(sudo:session): session closed for user root
Sep 30 18:46:25 compute-0 sudo[357434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:46:25 compute-0 sudo[357434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:46:25 compute-0 sudo[357434]: pam_unix(sudo:session): session closed for user root
Sep 30 18:46:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:46:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:46:25.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:46:25 compute-0 sudo[357459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:46:25 compute-0 sudo[357459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:46:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:46:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:46:26.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:46:26.292 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:46:26 compute-0 nova_compute[265391]: 2025-09-30 18:46:26.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:46:26 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:46:26.294 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:46:26 compute-0 podman[357526]: 2025-09-30 18:46:26.353743378 +0000 UTC m=+0.035332447 container create 402a3f2216aae867b7cb2401fda63d89b6ae5d51a49e902c8b9c5624e1bc9b73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_gauss, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:46:26 compute-0 systemd[1]: Started libpod-conmon-402a3f2216aae867b7cb2401fda63d89b6ae5d51a49e902c8b9c5624e1bc9b73.scope.
Sep 30 18:46:26 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:46:26 compute-0 podman[357526]: 2025-09-30 18:46:26.430811525 +0000 UTC m=+0.112400614 container init 402a3f2216aae867b7cb2401fda63d89b6ae5d51a49e902c8b9c5624e1bc9b73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_gauss, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:46:26 compute-0 podman[357526]: 2025-09-30 18:46:26.337676311 +0000 UTC m=+0.019265400 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:46:26 compute-0 podman[357526]: 2025-09-30 18:46:26.437302843 +0000 UTC m=+0.118891902 container start 402a3f2216aae867b7cb2401fda63d89b6ae5d51a49e902c8b9c5624e1bc9b73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:46:26 compute-0 quizzical_gauss[357542]: 167 167
Sep 30 18:46:26 compute-0 systemd[1]: libpod-402a3f2216aae867b7cb2401fda63d89b6ae5d51a49e902c8b9c5624e1bc9b73.scope: Deactivated successfully.
Sep 30 18:46:26 compute-0 podman[357526]: 2025-09-30 18:46:26.44489806 +0000 UTC m=+0.126487129 container attach 402a3f2216aae867b7cb2401fda63d89b6ae5d51a49e902c8b9c5624e1bc9b73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_gauss, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:46:26 compute-0 podman[357526]: 2025-09-30 18:46:26.447498338 +0000 UTC m=+0.129087407 container died 402a3f2216aae867b7cb2401fda63d89b6ae5d51a49e902c8b9c5624e1bc9b73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_gauss, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Sep 30 18:46:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-40c45a4fe30b29808a48f7f01c3faa1d0fdac68ceeecda20378fdb5a04ad7d76-merged.mount: Deactivated successfully.
Sep 30 18:46:26 compute-0 podman[357526]: 2025-09-30 18:46:26.494127346 +0000 UTC m=+0.175716415 container remove 402a3f2216aae867b7cb2401fda63d89b6ae5d51a49e902c8b9c5624e1bc9b73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_gauss, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Sep 30 18:46:26 compute-0 systemd[1]: libpod-conmon-402a3f2216aae867b7cb2401fda63d89b6ae5d51a49e902c8b9c5624e1bc9b73.scope: Deactivated successfully.
Sep 30 18:46:26 compute-0 nova_compute[265391]: 2025-09-30 18:46:26.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:46:26 compute-0 podman[357569]: 2025-09-30 18:46:26.643738534 +0000 UTC m=+0.037675298 container create 09f62feeb53d960aaad7e049cfd9b2e434d46ddf837d2dbbdc7b4b5922c36876 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_davinci, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Sep 30 18:46:26 compute-0 systemd[1]: Started libpod-conmon-09f62feeb53d960aaad7e049cfd9b2e434d46ddf837d2dbbdc7b4b5922c36876.scope.
Sep 30 18:46:26 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:46:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bec27eaa1000acba959ebb1639b418df0b085ead1c48158f072e86922ae7f0ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:46:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bec27eaa1000acba959ebb1639b418df0b085ead1c48158f072e86922ae7f0ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:46:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bec27eaa1000acba959ebb1639b418df0b085ead1c48158f072e86922ae7f0ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:46:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bec27eaa1000acba959ebb1639b418df0b085ead1c48158f072e86922ae7f0ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:46:26 compute-0 podman[357569]: 2025-09-30 18:46:26.627064552 +0000 UTC m=+0.021001336 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:46:26 compute-0 nova_compute[265391]: 2025-09-30 18:46:26.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:46:26 compute-0 podman[357569]: 2025-09-30 18:46:26.734117376 +0000 UTC m=+0.128054140 container init 09f62feeb53d960aaad7e049cfd9b2e434d46ddf837d2dbbdc7b4b5922c36876 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_davinci, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:46:26 compute-0 podman[357569]: 2025-09-30 18:46:26.748949401 +0000 UTC m=+0.142886165 container start 09f62feeb53d960aaad7e049cfd9b2e434d46ddf837d2dbbdc7b4b5922c36876 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_davinci, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Sep 30 18:46:26 compute-0 podman[357569]: 2025-09-30 18:46:26.752231876 +0000 UTC m=+0.146168640 container attach 09f62feeb53d960aaad7e049cfd9b2e434d46ddf837d2dbbdc7b4b5922c36876 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Sep 30 18:46:26 compute-0 nova_compute[265391]: 2025-09-30 18:46:26.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:46:27 compute-0 stupefied_davinci[357585]: {
Sep 30 18:46:27 compute-0 stupefied_davinci[357585]:     "0": [
Sep 30 18:46:27 compute-0 stupefied_davinci[357585]:         {
Sep 30 18:46:27 compute-0 stupefied_davinci[357585]:             "devices": [
Sep 30 18:46:27 compute-0 stupefied_davinci[357585]:                 "/dev/loop3"
Sep 30 18:46:27 compute-0 stupefied_davinci[357585]:             ],
Sep 30 18:46:27 compute-0 stupefied_davinci[357585]:             "lv_name": "ceph_lv0",
Sep 30 18:46:27 compute-0 stupefied_davinci[357585]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:46:27 compute-0 stupefied_davinci[357585]:             "lv_size": "21470642176",
Sep 30 18:46:27 compute-0 stupefied_davinci[357585]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:46:27 compute-0 stupefied_davinci[357585]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:46:27 compute-0 stupefied_davinci[357585]:             "name": "ceph_lv0",
Sep 30 18:46:27 compute-0 stupefied_davinci[357585]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:46:27 compute-0 stupefied_davinci[357585]:             "tags": {
Sep 30 18:46:27 compute-0 stupefied_davinci[357585]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:46:27 compute-0 stupefied_davinci[357585]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:46:27 compute-0 stupefied_davinci[357585]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:46:27 compute-0 stupefied_davinci[357585]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:46:27 compute-0 stupefied_davinci[357585]:                 "ceph.cluster_name": "ceph",
Sep 30 18:46:27 compute-0 stupefied_davinci[357585]:                 "ceph.crush_device_class": "",
Sep 30 18:46:27 compute-0 stupefied_davinci[357585]:                 "ceph.encrypted": "0",
Sep 30 18:46:27 compute-0 stupefied_davinci[357585]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:46:27 compute-0 stupefied_davinci[357585]:                 "ceph.osd_id": "0",
Sep 30 18:46:27 compute-0 stupefied_davinci[357585]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:46:27 compute-0 stupefied_davinci[357585]:                 "ceph.type": "block",
Sep 30 18:46:27 compute-0 stupefied_davinci[357585]:                 "ceph.vdo": "0",
Sep 30 18:46:27 compute-0 stupefied_davinci[357585]:                 "ceph.with_tpm": "0"
Sep 30 18:46:27 compute-0 stupefied_davinci[357585]:             },
Sep 30 18:46:27 compute-0 stupefied_davinci[357585]:             "type": "block",
Sep 30 18:46:27 compute-0 stupefied_davinci[357585]:             "vg_name": "ceph_vg0"
Sep 30 18:46:27 compute-0 stupefied_davinci[357585]:         }
Sep 30 18:46:27 compute-0 stupefied_davinci[357585]:     ]
Sep 30 18:46:27 compute-0 stupefied_davinci[357585]: }
Sep 30 18:46:27 compute-0 systemd[1]: libpod-09f62feeb53d960aaad7e049cfd9b2e434d46ddf837d2dbbdc7b4b5922c36876.scope: Deactivated successfully.
Sep 30 18:46:27 compute-0 podman[357569]: 2025-09-30 18:46:27.053945235 +0000 UTC m=+0.447881999 container died 09f62feeb53d960aaad7e049cfd9b2e434d46ddf837d2dbbdc7b4b5922c36876 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_davinci, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:46:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-bec27eaa1000acba959ebb1639b418df0b085ead1c48158f072e86922ae7f0ee-merged.mount: Deactivated successfully.
Sep 30 18:46:27 compute-0 podman[357569]: 2025-09-30 18:46:27.105499501 +0000 UTC m=+0.499436265 container remove 09f62feeb53d960aaad7e049cfd9b2e434d46ddf837d2dbbdc7b4b5922c36876 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_davinci, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325)
Sep 30 18:46:27 compute-0 systemd[1]: libpod-conmon-09f62feeb53d960aaad7e049cfd9b2e434d46ddf837d2dbbdc7b4b5922c36876.scope: Deactivated successfully.
Sep 30 18:46:27 compute-0 sudo[357459]: pam_unix(sudo:session): session closed for user root
Sep 30 18:46:27 compute-0 sudo[357607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:46:27 compute-0 sudo[357607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:46:27 compute-0 sudo[357607]: pam_unix(sudo:session): session closed for user root
Sep 30 18:46:27 compute-0 sudo[357632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:46:27 compute-0 sudo[357632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:46:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:46:27.381Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:46:27 compute-0 ceph-mon[73755]: pgmap v2042: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 517 B/s rd, 0 op/s
Sep 30 18:46:27 compute-0 podman[357701]: 2025-09-30 18:46:27.725032898 +0000 UTC m=+0.046961418 container create 38cf158b35b991076df79570d542b55cb192a34d75c37e41c975ec8b0c912122 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True)
Sep 30 18:46:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2043: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 517 B/s rd, 0 op/s
Sep 30 18:46:27 compute-0 systemd[1]: Started libpod-conmon-38cf158b35b991076df79570d542b55cb192a34d75c37e41c975ec8b0c912122.scope.
Sep 30 18:46:27 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:46:27 compute-0 podman[357701]: 2025-09-30 18:46:27.706129498 +0000 UTC m=+0.028058068 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:46:27 compute-0 podman[357701]: 2025-09-30 18:46:27.821053207 +0000 UTC m=+0.142981757 container init 38cf158b35b991076df79570d542b55cb192a34d75c37e41c975ec8b0c912122 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_hawking, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 18:46:27 compute-0 podman[357701]: 2025-09-30 18:46:27.835095951 +0000 UTC m=+0.157024481 container start 38cf158b35b991076df79570d542b55cb192a34d75c37e41c975ec8b0c912122 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_hawking, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:46:27 compute-0 podman[357701]: 2025-09-30 18:46:27.839478375 +0000 UTC m=+0.161406945 container attach 38cf158b35b991076df79570d542b55cb192a34d75c37e41c975ec8b0c912122 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_hawking, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:46:27 compute-0 focused_hawking[357717]: 167 167
Sep 30 18:46:27 compute-0 systemd[1]: libpod-38cf158b35b991076df79570d542b55cb192a34d75c37e41c975ec8b0c912122.scope: Deactivated successfully.
Sep 30 18:46:27 compute-0 podman[357701]: 2025-09-30 18:46:27.842639107 +0000 UTC m=+0.164567637 container died 38cf158b35b991076df79570d542b55cb192a34d75c37e41c975ec8b0c912122 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Sep 30 18:46:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-51cfc4fd500c897c34af1f7b53b587369bdf94d60a42f46cf24181659cfde074-merged.mount: Deactivated successfully.
Sep 30 18:46:27 compute-0 podman[357701]: 2025-09-30 18:46:27.891846302 +0000 UTC m=+0.213774872 container remove 38cf158b35b991076df79570d542b55cb192a34d75c37e41c975ec8b0c912122 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_hawking, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid)
Sep 30 18:46:27 compute-0 systemd[1]: libpod-conmon-38cf158b35b991076df79570d542b55cb192a34d75c37e41c975ec8b0c912122.scope: Deactivated successfully.
Sep 30 18:46:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:46:27.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:28 compute-0 podman[357744]: 2025-09-30 18:46:28.112609874 +0000 UTC m=+0.040101421 container create 2133c78b252954a792ea68253453407b50162ad5f5624fd399805252b4b8a84a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_tu, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Sep 30 18:46:28 compute-0 systemd[1]: Started libpod-conmon-2133c78b252954a792ea68253453407b50162ad5f5624fd399805252b4b8a84a.scope.
Sep 30 18:46:28 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:46:28 compute-0 podman[357744]: 2025-09-30 18:46:28.097052571 +0000 UTC m=+0.024544138 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:46:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c55287c481661390f588c2894b326631d3d5f8566d174ae562a4aab334369f01/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:46:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c55287c481661390f588c2894b326631d3d5f8566d174ae562a4aab334369f01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:46:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c55287c481661390f588c2894b326631d3d5f8566d174ae562a4aab334369f01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:46:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c55287c481661390f588c2894b326631d3d5f8566d174ae562a4aab334369f01/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:46:28 compute-0 podman[357744]: 2025-09-30 18:46:28.21083222 +0000 UTC m=+0.138323807 container init 2133c78b252954a792ea68253453407b50162ad5f5624fd399805252b4b8a84a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_tu, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Sep 30 18:46:28 compute-0 podman[357744]: 2025-09-30 18:46:28.220968582 +0000 UTC m=+0.148460129 container start 2133c78b252954a792ea68253453407b50162ad5f5624fd399805252b4b8a84a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:46:28 compute-0 podman[357744]: 2025-09-30 18:46:28.226638639 +0000 UTC m=+0.154130196 container attach 2133c78b252954a792ea68253453407b50162ad5f5624fd399805252b4b8a84a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 18:46:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:46:28.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:46:28] "GET /metrics HTTP/1.1" 200 46709 "" "Prometheus/2.51.0"
Sep 30 18:46:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:46:28] "GET /metrics HTTP/1.1" 200 46709 "" "Prometheus/2.51.0"
Sep 30 18:46:28 compute-0 lvm[357835]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:46:28 compute-0 lvm[357835]: VG ceph_vg0 finished
Sep 30 18:46:28 compute-0 condescending_tu[357761]: {}
Sep 30 18:46:28 compute-0 systemd[1]: libpod-2133c78b252954a792ea68253453407b50162ad5f5624fd399805252b4b8a84a.scope: Deactivated successfully.
Sep 30 18:46:28 compute-0 podman[357744]: 2025-09-30 18:46:28.911719736 +0000 UTC m=+0.839211283 container died 2133c78b252954a792ea68253453407b50162ad5f5624fd399805252b4b8a84a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:46:28 compute-0 systemd[1]: libpod-2133c78b252954a792ea68253453407b50162ad5f5624fd399805252b4b8a84a.scope: Consumed 1.042s CPU time.
Sep 30 18:46:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:46:28.920Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:46:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:46:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:46:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:46:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:46:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-c55287c481661390f588c2894b326631d3d5f8566d174ae562a4aab334369f01-merged.mount: Deactivated successfully.
Sep 30 18:46:29 compute-0 podman[357744]: 2025-09-30 18:46:29.334239267 +0000 UTC m=+1.261730844 container remove 2133c78b252954a792ea68253453407b50162ad5f5624fd399805252b4b8a84a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_tu, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Sep 30 18:46:29 compute-0 systemd[1]: libpod-conmon-2133c78b252954a792ea68253453407b50162ad5f5624fd399805252b4b8a84a.scope: Deactivated successfully.
Sep 30 18:46:29 compute-0 sudo[357632]: pam_unix(sudo:session): session closed for user root
Sep 30 18:46:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:46:29 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:46:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:46:29 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:46:29 compute-0 sudo[357854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:46:29 compute-0 sudo[357854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:46:29 compute-0 sudo[357854]: pam_unix(sudo:session): session closed for user root
Sep 30 18:46:29 compute-0 ceph-mon[73755]: pgmap v2043: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 517 B/s rd, 0 op/s
Sep 30 18:46:29 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:46:29 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:46:29 compute-0 podman[276673]: time="2025-09-30T18:46:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:46:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:46:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:46:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2044: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 776 B/s rd, 0 op/s
Sep 30 18:46:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:46:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10314 "" "Go-http-client/1.1"
Sep 30 18:46:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:46:29.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:46:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:46:30.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:46:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:46:31 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:46:31.296 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:46:31 compute-0 openstack_network_exporter[279566]: ERROR   18:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:46:31 compute-0 openstack_network_exporter[279566]: ERROR   18:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:46:31 compute-0 openstack_network_exporter[279566]: ERROR   18:46:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:46:31 compute-0 openstack_network_exporter[279566]: ERROR   18:46:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:46:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:46:31 compute-0 openstack_network_exporter[279566]: ERROR   18:46:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:46:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:46:31 compute-0 ceph-mon[73755]: pgmap v2044: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 776 B/s rd, 0 op/s
Sep 30 18:46:31 compute-0 nova_compute[265391]: 2025-09-30 18:46:31.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:46:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2045: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 517 B/s rd, 0 op/s
Sep 30 18:46:31 compute-0 nova_compute[265391]: 2025-09-30 18:46:31.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:46:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:46:31.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:46:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:46:32.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:46:33 compute-0 ceph-mon[73755]: pgmap v2045: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 517 B/s rd, 0 op/s
Sep 30 18:46:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2046: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 776 B/s rd, 0 op/s
Sep 30 18:46:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:46:33.909Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:46:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:46:33.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:33 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 18:46:33 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4201.3 total, 600.0 interval
                                           Cumulative writes: 11K writes, 51K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1518 writes, 7135 keys, 1518 commit groups, 1.0 writes per commit group, ingest: 11.01 MB, 0.02 MB/s
                                           Interval WAL: 1518 writes, 1518 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    103.7      0.70              0.22        35    0.020       0      0       0.0       0.0
                                             L6      1/0    9.97 MB   0.0      0.4     0.1      0.4       0.4      0.0       0.0   5.3    200.1    172.0      2.24              1.07        34    0.066    212K    18K       0.0       0.0
                                            Sum      1/0    9.97 MB   0.0      0.4     0.1      0.4       0.4      0.1       0.0   6.3    152.3    155.7      2.95              1.29        69    0.043    212K    18K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   9.9    184.5    176.6      0.51              0.24        14    0.036     53K   3605       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.4       0.4      0.0       0.0   0.0    200.1    172.0      2.24              1.07        34    0.066    212K    18K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    157.8      0.46              0.22        34    0.014       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.24              0.00         1    0.242       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4201.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.071, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.45 GB write, 0.11 MB/s write, 0.44 GB read, 0.11 MB/s read, 2.9 seconds
                                           Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.09 GB read, 0.16 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e76de37350#2 capacity: 304.00 MB usage: 43.03 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000227 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2414,41.55 MB,13.6669%) FilterBlock(70,596.11 KB,0.191493%) IndexBlock(70,925.98 KB,0.297461%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Sep 30 18:46:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:46:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:46:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:46:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:46:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:46:34.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:35 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:46:35.222 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:a8:99 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f4658d55-a8f9-48f1-846d-61df3d830821', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67cbb3b670e445a4b97abcc92749d126', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0884332-fe68-47c8-9c8c-5c6a7c53f7f5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=862fbe9e-132a-4b8a-83f6-7b020c6192ad) old=Port_Binding(mac=['fa:16:3e:37:a8:99'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f4658d55-a8f9-48f1-846d-61df3d830821', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67cbb3b670e445a4b97abcc92749d126', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:46:35 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:46:35.224 166158 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 862fbe9e-132a-4b8a-83f6-7b020c6192ad in datapath f4658d55-a8f9-48f1-846d-61df3d830821 updated
Sep 30 18:46:35 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:46:35.224 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f4658d55-a8f9-48f1-846d-61df3d830821, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:46:35 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:46:35.225 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[6baa883f-b86d-4492-8682-df55e6beecfe]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:46:35 compute-0 sshd-session[357737]: Connection closed by 154.125.120.7 port 43664 [preauth]
Sep 30 18:46:35 compute-0 ceph-mon[73755]: pgmap v2046: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 776 B/s rd, 0 op/s
Sep 30 18:46:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2047: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:46:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:46:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:46:35.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:46:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:46:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:46:36.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:46:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2367675944' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:46:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:46:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2367675944' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:46:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2367675944' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:46:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2367675944' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:46:36 compute-0 nova_compute[265391]: 2025-09-30 18:46:36.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:46:36 compute-0 nova_compute[265391]: 2025-09-30 18:46:36.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:46:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:46:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:46:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:46:37.383Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:46:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:46:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:46:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:46:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:46:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:46:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:46:37 compute-0 ceph-mon[73755]: pgmap v2047: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:46:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:46:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2048: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:46:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:46:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:46:37.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:46:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:46:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:46:38.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:46:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:46:38] "GET /metrics HTTP/1.1" 200 46712 "" "Prometheus/2.51.0"
Sep 30 18:46:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:46:38] "GET /metrics HTTP/1.1" 200 46712 "" "Prometheus/2.51.0"
Sep 30 18:46:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:46:38.920Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:46:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:46:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:46:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:46:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:46:39 compute-0 ceph-mon[73755]: pgmap v2048: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:46:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2049: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:46:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:46:39.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:46:40.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:46:41 compute-0 sudo[357891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:46:41 compute-0 sudo[357891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:46:41 compute-0 sudo[357891]: pam_unix(sudo:session): session closed for user root
Sep 30 18:46:41 compute-0 podman[357915]: 2025-09-30 18:46:41.424193545 +0000 UTC m=+0.063440976 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent)
Sep 30 18:46:41 compute-0 podman[357917]: 2025-09-30 18:46:41.424329768 +0000 UTC m=+0.057768428 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:46:41 compute-0 podman[357916]: 2025-09-30 18:46:41.460411103 +0000 UTC m=+0.098915174 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20250930, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:46:41 compute-0 ceph-mon[73755]: pgmap v2049: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:46:41 compute-0 nova_compute[265391]: 2025-09-30 18:46:41.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:46:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2050: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:46:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:46:41.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:42 compute-0 nova_compute[265391]: 2025-09-30 18:46:41.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:46:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:46:42.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:43 compute-0 ceph-mon[73755]: pgmap v2050: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:46:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2051: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:46:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:46:43.910Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:46:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:46:43.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:46:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:46:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:46:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:46:44 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:46:44.151 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:ac:29 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-d3db2eea-4a5b-4481-bccc-d3aa8bd5a6db', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d3db2eea-4a5b-4481-bccc-d3aa8bd5a6db', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3359c464e0344756a39ce5c7088b9eba', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49a549bb-c1b5-46c1-833a-6afe8a0d1bab, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=93d705df-d94d-4266-a1a5-9ccb90896904) old=Port_Binding(mac=['fa:16:3e:57:ac:29'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-d3db2eea-4a5b-4481-bccc-d3aa8bd5a6db', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d3db2eea-4a5b-4481-bccc-d3aa8bd5a6db', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3359c464e0344756a39ce5c7088b9eba', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:46:44 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:46:44.152 166158 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 93d705df-d94d-4266-a1a5-9ccb90896904 in datapath d3db2eea-4a5b-4481-bccc-d3aa8bd5a6db updated
Sep 30 18:46:44 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:46:44.153 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d3db2eea-4a5b-4481-bccc-d3aa8bd5a6db, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:46:44 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:46:44.153 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[23b2c239-3fc5-4843-bf43-88a114ba00c9]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:46:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:46:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:46:44.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:46:44 compute-0 ceph-mon[73755]: pgmap v2051: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:46:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2052: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:46:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:46:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:46:45.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:46:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:46:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:46:46.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:46 compute-0 nova_compute[265391]: 2025-09-30 18:46:46.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:46:46 compute-0 ceph-mon[73755]: pgmap v2052: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:46:47 compute-0 nova_compute[265391]: 2025-09-30 18:46:47.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:46:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:46:47.384Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:46:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2053: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:46:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:46:47.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:46:48.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:46:48] "GET /metrics HTTP/1.1" 200 46712 "" "Prometheus/2.51.0"
Sep 30 18:46:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:46:48] "GET /metrics HTTP/1.1" 200 46712 "" "Prometheus/2.51.0"
Sep 30 18:46:48 compute-0 ceph-mon[73755]: pgmap v2053: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:46:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:46:48.921Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:46:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:46:48.921Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:46:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:46:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:46:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:46:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:46:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2054: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:46:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:46:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:46:49.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:46:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:46:50.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:50 compute-0 ceph-mon[73755]: pgmap v2054: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:46:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:46:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2055: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:46:51 compute-0 nova_compute[265391]: 2025-09-30 18:46:51.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:46:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:46:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:46:51.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:46:52 compute-0 nova_compute[265391]: 2025-09-30 18:46:52.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:46:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:46:52.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:46:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:46:52 compute-0 ceph-mon[73755]: pgmap v2055: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:46:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:46:53 compute-0 nova_compute[265391]: 2025-09-30 18:46:53.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:46:53 compute-0 nova_compute[265391]: 2025-09-30 18:46:53.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:46:53 compute-0 podman[357997]: 2025-09-30 18:46:53.523105407 +0000 UTC m=+0.057514051 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2)
Sep 30 18:46:53 compute-0 podman[357996]: 2025-09-30 18:46:53.532652305 +0000 UTC m=+0.069195534 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Sep 30 18:46:53 compute-0 podman[357998]: 2025-09-30 18:46:53.533716683 +0000 UTC m=+0.064208746 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, build-date=2025-08-20T13:12:41, config_id=edpm, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vcs-type=git)
Sep 30 18:46:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2056: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:46:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:46:53.911Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:46:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:46:53.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:46:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:46:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:46:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:46:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:46:54.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:46:54.340 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:46:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:46:54.341 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:46:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:46:54.341 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:46:54 compute-0 ceph-mon[73755]: pgmap v2056: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:46:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2057: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:46:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:46:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:46:55.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:46:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:46:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:46:56.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:56 compute-0 nova_compute[265391]: 2025-09-30 18:46:56.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:46:56 compute-0 nova_compute[265391]: 2025-09-30 18:46:56.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:46:56 compute-0 ceph-mon[73755]: pgmap v2057: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:46:56 compute-0 nova_compute[265391]: 2025-09-30 18:46:56.942 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:46:56 compute-0 nova_compute[265391]: 2025-09-30 18:46:56.943 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:46:56 compute-0 nova_compute[265391]: 2025-09-30 18:46:56.943 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:46:56 compute-0 nova_compute[265391]: 2025-09-30 18:46:56.943 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:46:56 compute-0 nova_compute[265391]: 2025-09-30 18:46:56.944 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:46:57 compute-0 nova_compute[265391]: 2025-09-30 18:46:57.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:46:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:46:57.386Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:46:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:46:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4111823747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:46:57 compute-0 nova_compute[265391]: 2025-09-30 18:46:57.443 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:46:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:46:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/402236633' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:46:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:46:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/402236633' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:46:57 compute-0 nova_compute[265391]: 2025-09-30 18:46:57.594 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:46:57 compute-0 nova_compute[265391]: 2025-09-30 18:46:57.595 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:46:57 compute-0 nova_compute[265391]: 2025-09-30 18:46:57.618 2 DEBUG oslo_concurrency.lockutils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Acquiring lock "656a0137-3214-4992-a68a-cdbedf0336f6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:46:57 compute-0 nova_compute[265391]: 2025-09-30 18:46:57.619 2 DEBUG oslo_concurrency.lockutils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Lock "656a0137-3214-4992-a68a-cdbedf0336f6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:46:57 compute-0 nova_compute[265391]: 2025-09-30 18:46:57.620 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.025s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:46:57 compute-0 nova_compute[265391]: 2025-09-30 18:46:57.621 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4307MB free_disk=39.992183685302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:46:57 compute-0 nova_compute[265391]: 2025-09-30 18:46:57.621 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:46:57 compute-0 nova_compute[265391]: 2025-09-30 18:46:57.621 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:46:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2058: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:46:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1258656081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:46:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/4111823747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:46:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/402236633' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:46:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/402236633' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:46:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:46:57.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:58 compute-0 nova_compute[265391]: 2025-09-30 18:46:58.124 2 DEBUG nova.compute.manager [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Sep 30 18:46:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:46:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:46:58.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:46:58 compute-0 nova_compute[265391]: 2025-09-30 18:46:58.691 2 DEBUG oslo_concurrency.lockutils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:46:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:46:58] "GET /metrics HTTP/1.1" 200 46711 "" "Prometheus/2.51.0"
Sep 30 18:46:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:46:58] "GET /metrics HTTP/1.1" 200 46711 "" "Prometheus/2.51.0"
Sep 30 18:46:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:46:58 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3173180938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:46:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:46:58.922Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:46:58 compute-0 ceph-mon[73755]: pgmap v2058: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:46:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3173180938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:46:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:46:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:46:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:46:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:46:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:46:59 compute-0 nova_compute[265391]: 2025-09-30 18:46:59.196 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Instance 656a0137-3214-4992-a68a-cdbedf0336f6 has been scheduled to this compute host, the scheduler has made an allocation against this compute node but the instance has yet to start. Skipping heal of allocation: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1797
Sep 30 18:46:59 compute-0 nova_compute[265391]: 2025-09-30 18:46:59.197 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:46:59 compute-0 nova_compute[265391]: 2025-09-30 18:46:59.197 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:46:57 up  1:50,  0 user,  load average: 0.83, 0.88, 0.87\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:46:59 compute-0 nova_compute[265391]: 2025-09-30 18:46:59.240 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:46:59 compute-0 ovn_controller[156242]: 2025-09-30T18:46:59Z|00307|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Sep 30 18:46:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:46:59 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2989727732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:46:59 compute-0 nova_compute[265391]: 2025-09-30 18:46:59.741 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:46:59 compute-0 podman[276673]: time="2025-09-30T18:46:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:46:59 compute-0 nova_compute[265391]: 2025-09-30 18:46:59.748 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:46:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:46:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:46:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:46:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10307 "" "Go-http-client/1.1"
Sep 30 18:46:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2059: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:46:59 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2989727732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:46:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:46:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:46:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:46:59.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:47:00 compute-0 nova_compute[265391]: 2025-09-30 18:47:00.262 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:47:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:47:00.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:00 compute-0 nova_compute[265391]: 2025-09-30 18:47:00.776 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:47:00 compute-0 nova_compute[265391]: 2025-09-30 18:47:00.777 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.155s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:47:00 compute-0 nova_compute[265391]: 2025-09-30 18:47:00.777 2 DEBUG oslo_concurrency.lockutils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 2.086s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:47:00 compute-0 nova_compute[265391]: 2025-09-30 18:47:00.788 2 DEBUG nova.virt.hardware [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Sep 30 18:47:00 compute-0 nova_compute[265391]: 2025-09-30 18:47:00.788 2 INFO nova.compute.claims [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Claim successful on node compute-0.ctlplane.example.com
Sep 30 18:47:00 compute-0 ceph-mon[73755]: pgmap v2059: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:47:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:47:01 compute-0 openstack_network_exporter[279566]: ERROR   18:47:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:47:01 compute-0 openstack_network_exporter[279566]: ERROR   18:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:47:01 compute-0 openstack_network_exporter[279566]: ERROR   18:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:47:01 compute-0 openstack_network_exporter[279566]: ERROR   18:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:47:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:47:01 compute-0 openstack_network_exporter[279566]: ERROR   18:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:47:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:47:01 compute-0 sudo[358105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:47:01 compute-0 sudo[358105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:47:01 compute-0 sudo[358105]: pam_unix(sudo:session): session closed for user root
Sep 30 18:47:01 compute-0 nova_compute[265391]: 2025-09-30 18:47:01.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2060: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:47:01 compute-0 nova_compute[265391]: 2025-09-30 18:47:01.848 2 DEBUG oslo_concurrency.processutils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:47:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:47:01.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:02 compute-0 nova_compute[265391]: 2025-09-30 18:47:02.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:47:02 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1250951644' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:47:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:47:02.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:02 compute-0 nova_compute[265391]: 2025-09-30 18:47:02.360 2 DEBUG oslo_concurrency.processutils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:47:02 compute-0 nova_compute[265391]: 2025-09-30 18:47:02.369 2 DEBUG nova.compute.provider_tree [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:47:02 compute-0 nova_compute[265391]: 2025-09-30 18:47:02.882 2 DEBUG nova.scheduler.client.report [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:47:02 compute-0 ceph-mon[73755]: pgmap v2060: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:47:02 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1250951644' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:47:03 compute-0 nova_compute[265391]: 2025-09-30 18:47:03.395 2 DEBUG oslo_concurrency.lockutils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.617s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:47:03 compute-0 nova_compute[265391]: 2025-09-30 18:47:03.395 2 DEBUG nova.compute.manager [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Sep 30 18:47:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2061: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:47:03 compute-0 nova_compute[265391]: 2025-09-30 18:47:03.779 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:47:03 compute-0 nova_compute[265391]: 2025-09-30 18:47:03.780 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:47:03 compute-0 nova_compute[265391]: 2025-09-30 18:47:03.780 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:47:03 compute-0 nova_compute[265391]: 2025-09-30 18:47:03.780 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:47:03 compute-0 nova_compute[265391]: 2025-09-30 18:47:03.780 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:47:03 compute-0 nova_compute[265391]: 2025-09-30 18:47:03.908 2 DEBUG nova.compute.manager [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Sep 30 18:47:03 compute-0 nova_compute[265391]: 2025-09-30 18:47:03.908 2 DEBUG nova.network.neutron [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Sep 30 18:47:03 compute-0 nova_compute[265391]: 2025-09-30 18:47:03.909 2 WARNING neutronclient.v2_0.client [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:47:03 compute-0 nova_compute[265391]: 2025-09-30 18:47:03.909 2 WARNING neutronclient.v2_0.client [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:47:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:47:03.912Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:47:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:47:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:47:03.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:47:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:47:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:47:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:47:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:47:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:47:04.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:04 compute-0 nova_compute[265391]: 2025-09-30 18:47:04.420 2 DEBUG nova.network.neutron [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Successfully created port: d728eab4-88db-4811-b199-c75155b08c82 _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Sep 30 18:47:04 compute-0 nova_compute[265391]: 2025-09-30 18:47:04.424 2 INFO nova.virt.libvirt.driver [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Sep 30 18:47:04 compute-0 nova_compute[265391]: 2025-09-30 18:47:04.931 2 DEBUG nova.compute.manager [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Sep 30 18:47:05 compute-0 ceph-mon[73755]: pgmap v2061: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:47:05 compute-0 nova_compute[265391]: 2025-09-30 18:47:05.424 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:47:05 compute-0 nova_compute[265391]: 2025-09-30 18:47:05.430 2 DEBUG nova.network.neutron [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Successfully updated port: d728eab4-88db-4811-b199-c75155b08c82 _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Sep 30 18:47:05 compute-0 nova_compute[265391]: 2025-09-30 18:47:05.503 2 DEBUG nova.compute.manager [req-4c6fba7d-4d51-4712-8e4d-db5765c632b9 req-32175658-354e-4c76-b095-73492a516dd6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Received event network-changed-d728eab4-88db-4811-b199-c75155b08c82 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:47:05 compute-0 nova_compute[265391]: 2025-09-30 18:47:05.504 2 DEBUG nova.compute.manager [req-4c6fba7d-4d51-4712-8e4d-db5765c632b9 req-32175658-354e-4c76-b095-73492a516dd6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Refreshing instance network info cache due to event network-changed-d728eab4-88db-4811-b199-c75155b08c82. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:47:05 compute-0 nova_compute[265391]: 2025-09-30 18:47:05.504 2 DEBUG oslo_concurrency.lockutils [req-4c6fba7d-4d51-4712-8e4d-db5765c632b9 req-32175658-354e-4c76-b095-73492a516dd6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-656a0137-3214-4992-a68a-cdbedf0336f6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:47:05 compute-0 nova_compute[265391]: 2025-09-30 18:47:05.505 2 DEBUG oslo_concurrency.lockutils [req-4c6fba7d-4d51-4712-8e4d-db5765c632b9 req-32175658-354e-4c76-b095-73492a516dd6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-656a0137-3214-4992-a68a-cdbedf0336f6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:47:05 compute-0 nova_compute[265391]: 2025-09-30 18:47:05.505 2 DEBUG nova.network.neutron [req-4c6fba7d-4d51-4712-8e4d-db5765c632b9 req-32175658-354e-4c76-b095-73492a516dd6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Refreshing network info cache for port d728eab4-88db-4811-b199-c75155b08c82 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:47:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2062: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:47:05 compute-0 nova_compute[265391]: 2025-09-30 18:47:05.937 2 DEBUG oslo_concurrency.lockutils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Acquiring lock "refresh_cache-656a0137-3214-4992-a68a-cdbedf0336f6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:47:05 compute-0 nova_compute[265391]: 2025-09-30 18:47:05.953 2 DEBUG nova.compute.manager [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Sep 30 18:47:05 compute-0 nova_compute[265391]: 2025-09-30 18:47:05.955 2 DEBUG nova.virt.libvirt.driver [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Sep 30 18:47:05 compute-0 nova_compute[265391]: 2025-09-30 18:47:05.955 2 INFO nova.virt.libvirt.driver [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Creating image(s)
Sep 30 18:47:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:47:05.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:05 compute-0 nova_compute[265391]: 2025-09-30 18:47:05.991 2 DEBUG nova.storage.rbd_utils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] rbd image 656a0137-3214-4992-a68a-cdbedf0336f6_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:47:06 compute-0 nova_compute[265391]: 2025-09-30 18:47:06.022 2 DEBUG nova.storage.rbd_utils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] rbd image 656a0137-3214-4992-a68a-cdbedf0336f6_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:47:06 compute-0 nova_compute[265391]: 2025-09-30 18:47:06.062 2 DEBUG nova.storage.rbd_utils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] rbd image 656a0137-3214-4992-a68a-cdbedf0336f6_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:47:06 compute-0 nova_compute[265391]: 2025-09-30 18:47:06.068 2 DEBUG oslo_concurrency.processutils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:47:06 compute-0 nova_compute[265391]: 2025-09-30 18:47:06.082 2 WARNING neutronclient.v2_0.client [req-4c6fba7d-4d51-4712-8e4d-db5765c632b9 req-32175658-354e-4c76-b095-73492a516dd6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:47:06 compute-0 nova_compute[265391]: 2025-09-30 18:47:06.128 2 DEBUG oslo_concurrency.processutils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:47:06 compute-0 nova_compute[265391]: 2025-09-30 18:47:06.129 2 DEBUG oslo_concurrency.lockutils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Acquiring lock "cb2d580238c9b109feae7f1462613dc547671457" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:47:06 compute-0 nova_compute[265391]: 2025-09-30 18:47:06.129 2 DEBUG oslo_concurrency.lockutils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:47:06 compute-0 nova_compute[265391]: 2025-09-30 18:47:06.130 2 DEBUG oslo_concurrency.lockutils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:47:06 compute-0 nova_compute[265391]: 2025-09-30 18:47:06.159 2 DEBUG nova.storage.rbd_utils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] rbd image 656a0137-3214-4992-a68a-cdbedf0336f6_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:47:06 compute-0 nova_compute[265391]: 2025-09-30 18:47:06.163 2 DEBUG oslo_concurrency.processutils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 656a0137-3214-4992-a68a-cdbedf0336f6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:47:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:47:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:47:06.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:06 compute-0 nova_compute[265391]: 2025-09-30 18:47:06.428 2 DEBUG oslo_concurrency.processutils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 656a0137-3214-4992-a68a-cdbedf0336f6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.265s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:47:06 compute-0 nova_compute[265391]: 2025-09-30 18:47:06.490 2 DEBUG nova.storage.rbd_utils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] resizing rbd image 656a0137-3214-4992-a68a-cdbedf0336f6_disk to 1073741824 resize /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:288
Sep 30 18:47:06 compute-0 nova_compute[265391]: 2025-09-30 18:47:06.605 2 DEBUG nova.virt.libvirt.driver [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Sep 30 18:47:06 compute-0 nova_compute[265391]: 2025-09-30 18:47:06.606 2 DEBUG nova.virt.libvirt.driver [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Ensure instance console log exists: /var/lib/nova/instances/656a0137-3214-4992-a68a-cdbedf0336f6/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Sep 30 18:47:06 compute-0 nova_compute[265391]: 2025-09-30 18:47:06.606 2 DEBUG oslo_concurrency.lockutils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:47:06 compute-0 nova_compute[265391]: 2025-09-30 18:47:06.607 2 DEBUG oslo_concurrency.lockutils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:47:06 compute-0 nova_compute[265391]: 2025-09-30 18:47:06.607 2 DEBUG oslo_concurrency.lockutils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:47:06 compute-0 nova_compute[265391]: 2025-09-30 18:47:06.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:07 compute-0 nova_compute[265391]: 2025-09-30 18:47:07.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:07 compute-0 ceph-mon[73755]: pgmap v2062: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:47:07 compute-0 nova_compute[265391]: 2025-09-30 18:47:07.028 2 DEBUG nova.network.neutron [req-4c6fba7d-4d51-4712-8e4d-db5765c632b9 req-32175658-354e-4c76-b095-73492a516dd6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:47:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:47:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:47:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:47:07.386Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:47:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:47:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:47:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:47:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:47:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:47:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:47:07 compute-0 nova_compute[265391]: 2025-09-30 18:47:07.497 2 DEBUG nova.network.neutron [req-4c6fba7d-4d51-4712-8e4d-db5765c632b9 req-32175658-354e-4c76-b095-73492a516dd6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:47:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2063: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:47:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:47:07.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:08 compute-0 nova_compute[265391]: 2025-09-30 18:47:08.005 2 DEBUG oslo_concurrency.lockutils [req-4c6fba7d-4d51-4712-8e4d-db5765c632b9 req-32175658-354e-4c76-b095-73492a516dd6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-656a0137-3214-4992-a68a-cdbedf0336f6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:47:08 compute-0 nova_compute[265391]: 2025-09-30 18:47:08.006 2 DEBUG oslo_concurrency.lockutils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Acquired lock "refresh_cache-656a0137-3214-4992-a68a-cdbedf0336f6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:47:08 compute-0 nova_compute[265391]: 2025-09-30 18:47:08.006 2 DEBUG nova.network.neutron [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:47:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:47:08
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', '.mgr', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', '.rgw.root', '.nfs', 'vms']
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:47:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.002000052s ======
Sep 30 18:47:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:47:08.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Sep 30 18:47:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:47:08] "GET /metrics HTTP/1.1" 200 46707 "" "Prometheus/2.51.0"
Sep 30 18:47:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:47:08] "GET /metrics HTTP/1.1" 200 46707 "" "Prometheus/2.51.0"
Sep 30 18:47:08 compute-0 nova_compute[265391]: 2025-09-30 18:47:08.816 2 DEBUG nova.network.neutron [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:47:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:47:08.923Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:47:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:47:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:47:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:47:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:47:09 compute-0 nova_compute[265391]: 2025-09-30 18:47:09.034 2 WARNING neutronclient.v2_0.client [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:47:09 compute-0 ceph-mon[73755]: pgmap v2063: 353 pgs: 353 active+clean; 41 MiB data, 378 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:47:09 compute-0 nova_compute[265391]: 2025-09-30 18:47:09.210 2 DEBUG nova.network.neutron [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Updating instance_info_cache with network_info: [{"id": "d728eab4-88db-4811-b199-c75155b08c82", "address": "fa:16:3e:b3:57:7e", "network": {"id": "f4658d55-a8f9-48f1-846d-61df3d830821", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2093820932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67cbb3b670e445a4b97abcc92749d126", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd728eab4-88", "ovs_interfaceid": "d728eab4-88db-4811-b199-c75155b08c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:47:09 compute-0 nova_compute[265391]: 2025-09-30 18:47:09.716 2 DEBUG oslo_concurrency.lockutils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Releasing lock "refresh_cache-656a0137-3214-4992-a68a-cdbedf0336f6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:47:09 compute-0 nova_compute[265391]: 2025-09-30 18:47:09.716 2 DEBUG nova.compute.manager [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Instance network_info: |[{"id": "d728eab4-88db-4811-b199-c75155b08c82", "address": "fa:16:3e:b3:57:7e", "network": {"id": "f4658d55-a8f9-48f1-846d-61df3d830821", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2093820932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67cbb3b670e445a4b97abcc92749d126", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd728eab4-88", "ovs_interfaceid": "d728eab4-88db-4811-b199-c75155b08c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Sep 30 18:47:09 compute-0 nova_compute[265391]: 2025-09-30 18:47:09.718 2 DEBUG nova.virt.libvirt.driver [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Start _get_guest_xml network_info=[{"id": "d728eab4-88db-4811-b199-c75155b08c82", "address": "fa:16:3e:b3:57:7e", "network": {"id": "f4658d55-a8f9-48f1-846d-61df3d830821", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2093820932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67cbb3b670e445a4b97abcc92749d126", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd728eab4-88", "ovs_interfaceid": "d728eab4-88db-4811-b199-c75155b08c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'image_id': '5b99cbca-b655-4be5-8343-cf504005c42e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Sep 30 18:47:09 compute-0 nova_compute[265391]: 2025-09-30 18:47:09.723 2 WARNING nova.virt.libvirt.driver [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:47:09 compute-0 nova_compute[265391]: 2025-09-30 18:47:09.725 2 DEBUG nova.virt.driver [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='5b99cbca-b655-4be5-8343-cf504005c42e', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteZoneMigrationStrategy-server-752609519', uuid='656a0137-3214-4992-a68a-cdbedf0336f6'), owner=OwnerMeta(userid='f560266d133f4f1ba4a908e3cdcfa59d', username='tempest-TestExecuteZoneMigrationStrategy-613400940-project-admin', projectid='3359c464e0344756a39ce5c7088b9eba', projectname='tempest-TestExecuteZoneMigrationStrategy-613400940'), image=ImageMeta(id='5b99cbca-b655-4be5-8343-cf504005c42e', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "d728eab4-88db-4811-b199-c75155b08c82", "address": "fa:16:3e:b3:57:7e", "network": {"id": "f4658d55-a8f9-48f1-846d-61df3d830821", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2093820932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67cbb3b670e445a4b97abcc92749d126", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd728eab4-88", "ovs_interfaceid": "d728eab4-88db-4811-b199-c75155b08c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20250919142712.b99a882.el10', creation_time=1759258029.7253244) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Sep 30 18:47:09 compute-0 nova_compute[265391]: 2025-09-30 18:47:09.730 2 DEBUG nova.virt.libvirt.host [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Sep 30 18:47:09 compute-0 nova_compute[265391]: 2025-09-30 18:47:09.731 2 DEBUG nova.virt.libvirt.host [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Sep 30 18:47:09 compute-0 nova_compute[265391]: 2025-09-30 18:47:09.736 2 DEBUG nova.virt.libvirt.host [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Sep 30 18:47:09 compute-0 nova_compute[265391]: 2025-09-30 18:47:09.737 2 DEBUG nova.virt.libvirt.host [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Sep 30 18:47:09 compute-0 nova_compute[265391]: 2025-09-30 18:47:09.737 2 DEBUG nova.virt.libvirt.driver [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Sep 30 18:47:09 compute-0 nova_compute[265391]: 2025-09-30 18:47:09.737 2 DEBUG nova.virt.hardware [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-09-30T18:04:50Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Sep 30 18:47:09 compute-0 nova_compute[265391]: 2025-09-30 18:47:09.737 2 DEBUG nova.virt.hardware [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Sep 30 18:47:09 compute-0 nova_compute[265391]: 2025-09-30 18:47:09.738 2 DEBUG nova.virt.hardware [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Sep 30 18:47:09 compute-0 nova_compute[265391]: 2025-09-30 18:47:09.738 2 DEBUG nova.virt.hardware [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Sep 30 18:47:09 compute-0 nova_compute[265391]: 2025-09-30 18:47:09.738 2 DEBUG nova.virt.hardware [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Sep 30 18:47:09 compute-0 nova_compute[265391]: 2025-09-30 18:47:09.738 2 DEBUG nova.virt.hardware [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Sep 30 18:47:09 compute-0 nova_compute[265391]: 2025-09-30 18:47:09.738 2 DEBUG nova.virt.hardware [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Sep 30 18:47:09 compute-0 nova_compute[265391]: 2025-09-30 18:47:09.738 2 DEBUG nova.virt.hardware [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Sep 30 18:47:09 compute-0 nova_compute[265391]: 2025-09-30 18:47:09.739 2 DEBUG nova.virt.hardware [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Sep 30 18:47:09 compute-0 nova_compute[265391]: 2025-09-30 18:47:09.739 2 DEBUG nova.virt.hardware [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Sep 30 18:47:09 compute-0 nova_compute[265391]: 2025-09-30 18:47:09.739 2 DEBUG nova.virt.hardware [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Sep 30 18:47:09 compute-0 nova_compute[265391]: 2025-09-30 18:47:09.741 2 DEBUG oslo_concurrency.processutils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:47:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2064: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:47:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:47:09.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:47:10 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/478917871' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:47:10 compute-0 nova_compute[265391]: 2025-09-30 18:47:10.273 2 DEBUG oslo_concurrency.processutils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:47:10 compute-0 nova_compute[265391]: 2025-09-30 18:47:10.313 2 DEBUG nova.storage.rbd_utils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] rbd image 656a0137-3214-4992-a68a-cdbedf0336f6_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:47:10 compute-0 nova_compute[265391]: 2025-09-30 18:47:10.319 2 DEBUG oslo_concurrency.processutils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:47:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:47:10.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:47:10 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1807568960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:47:10 compute-0 nova_compute[265391]: 2025-09-30 18:47:10.736 2 DEBUG oslo_concurrency.processutils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:47:10 compute-0 nova_compute[265391]: 2025-09-30 18:47:10.739 2 DEBUG nova.virt.libvirt.vif [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:46:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteZoneMigrationStrategy-server-752609519',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutezonemigrationstrategy-server-752609519',id=36,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3359c464e0344756a39ce5c7088b9eba',ramdisk_id='',reservation_id='r-9hll55u7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteZoneMigrationStrategy-613400940',owner_user_name='tempest-TestExecuteZoneMigrationStrategy-613400940-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:47:04Z,user_data=None,user_id='f560266d133f4f1ba4a908e3cdcfa59d',uuid=656a0137-3214-4992-a68a-cdbedf0336f6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d728eab4-88db-4811-b199-c75155b08c82", "address": "fa:16:3e:b3:57:7e", "network": {"id": "f4658d55-a8f9-48f1-846d-61df3d830821", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2093820932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67cbb3b670e445a4b97abcc92749d126", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd728eab4-88", "ovs_interfaceid": "d728eab4-88db-4811-b199-c75155b08c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:47:10 compute-0 nova_compute[265391]: 2025-09-30 18:47:10.739 2 DEBUG nova.network.os_vif_util [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Converting VIF {"id": "d728eab4-88db-4811-b199-c75155b08c82", "address": "fa:16:3e:b3:57:7e", "network": {"id": "f4658d55-a8f9-48f1-846d-61df3d830821", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2093820932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67cbb3b670e445a4b97abcc92749d126", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd728eab4-88", "ovs_interfaceid": "d728eab4-88db-4811-b199-c75155b08c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:47:10 compute-0 nova_compute[265391]: 2025-09-30 18:47:10.741 2 DEBUG nova.network.os_vif_util [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:57:7e,bridge_name='br-int',has_traffic_filtering=True,id=d728eab4-88db-4811-b199-c75155b08c82,network=Network(f4658d55-a8f9-48f1-846d-61df3d830821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd728eab4-88') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:47:10 compute-0 nova_compute[265391]: 2025-09-30 18:47:10.742 2 DEBUG nova.objects.instance [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Lazy-loading 'pci_devices' on Instance uuid 656a0137-3214-4992-a68a-cdbedf0336f6 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:47:11 compute-0 ceph-mon[73755]: pgmap v2064: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:47:11 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/478917871' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:47:11 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1807568960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:47:11 compute-0 nova_compute[265391]: 2025-09-30 18:47:11.252 2 DEBUG nova.virt.libvirt.driver [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] End _get_guest_xml xml=<domain type="kvm">
Sep 30 18:47:11 compute-0 nova_compute[265391]:   <uuid>656a0137-3214-4992-a68a-cdbedf0336f6</uuid>
Sep 30 18:47:11 compute-0 nova_compute[265391]:   <name>instance-00000024</name>
Sep 30 18:47:11 compute-0 nova_compute[265391]:   <memory>131072</memory>
Sep 30 18:47:11 compute-0 nova_compute[265391]:   <vcpu>1</vcpu>
Sep 30 18:47:11 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:47:11 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteZoneMigrationStrategy-server-752609519</nova:name>
Sep 30 18:47:11 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:47:09</nova:creationTime>
Sep 30 18:47:11 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:47:11 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:47:11 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:47:11 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:47:11 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:47:11 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:47:11 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:47:11 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:47:11 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:47:11 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:47:11 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:47:11 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:47:11 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:47:11 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:47:11 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:47:11 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:47:11 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:47:11 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:47:11 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:47:11 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:47:11 compute-0 nova_compute[265391]:         <nova:user uuid="f560266d133f4f1ba4a908e3cdcfa59d">tempest-TestExecuteZoneMigrationStrategy-613400940-project-admin</nova:user>
Sep 30 18:47:11 compute-0 nova_compute[265391]:         <nova:project uuid="3359c464e0344756a39ce5c7088b9eba">tempest-TestExecuteZoneMigrationStrategy-613400940</nova:project>
Sep 30 18:47:11 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:47:11 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:47:11 compute-0 nova_compute[265391]:         <nova:port uuid="d728eab4-88db-4811-b199-c75155b08c82">
Sep 30 18:47:11 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:47:11 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:47:11 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:47:11 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <system>
Sep 30 18:47:11 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:47:11 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:47:11 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:47:11 compute-0 nova_compute[265391]:       <entry name="serial">656a0137-3214-4992-a68a-cdbedf0336f6</entry>
Sep 30 18:47:11 compute-0 nova_compute[265391]:       <entry name="uuid">656a0137-3214-4992-a68a-cdbedf0336f6</entry>
Sep 30 18:47:11 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     </system>
Sep 30 18:47:11 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:47:11 compute-0 nova_compute[265391]:   <os>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="q35">hvm</type>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:   </os>
Sep 30 18:47:11 compute-0 nova_compute[265391]:   <features>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <vmcoreinfo/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:   </features>
Sep 30 18:47:11 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:47:11 compute-0 nova_compute[265391]:   <cpu mode="host-model" match="exact">
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <topology sockets="1" cores="1" threads="1"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:47:11 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:47:11 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/656a0137-3214-4992-a68a-cdbedf0336f6_disk">
Sep 30 18:47:11 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:       </source>
Sep 30 18:47:11 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:47:11 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:47:11 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:47:11 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/656a0137-3214-4992-a68a-cdbedf0336f6_disk.config">
Sep 30 18:47:11 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:       </source>
Sep 30 18:47:11 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:47:11 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:47:11 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:47:11 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:b3:57:7e"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:       <target dev="tapd728eab4-88"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:47:11 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/656a0137-3214-4992-a68a-cdbedf0336f6/console.log" append="off"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <video>
Sep 30 18:47:11 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     </video>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:47:11 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <controller type="usb" index="0"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:47:11 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:47:11 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:47:11 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:47:11 compute-0 nova_compute[265391]: </domain>
Sep 30 18:47:11 compute-0 nova_compute[265391]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Sep 30 18:47:11 compute-0 nova_compute[265391]: 2025-09-30 18:47:11.254 2 DEBUG nova.compute.manager [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Preparing to wait for external event network-vif-plugged-d728eab4-88db-4811-b199-c75155b08c82 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:47:11 compute-0 nova_compute[265391]: 2025-09-30 18:47:11.254 2 DEBUG oslo_concurrency.lockutils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Acquiring lock "656a0137-3214-4992-a68a-cdbedf0336f6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:47:11 compute-0 nova_compute[265391]: 2025-09-30 18:47:11.255 2 DEBUG oslo_concurrency.lockutils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Lock "656a0137-3214-4992-a68a-cdbedf0336f6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:47:11 compute-0 nova_compute[265391]: 2025-09-30 18:47:11.255 2 DEBUG oslo_concurrency.lockutils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Lock "656a0137-3214-4992-a68a-cdbedf0336f6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:47:11 compute-0 nova_compute[265391]: 2025-09-30 18:47:11.256 2 DEBUG nova.virt.libvirt.vif [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:46:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteZoneMigrationStrategy-server-752609519',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutezonemigrationstrategy-server-752609519',id=36,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3359c464e0344756a39ce5c7088b9eba',ramdisk_id='',reservation_id='r-9hll55u7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteZoneMigrationStrategy-613400940',owner_user_name='tempest-TestExecuteZoneMigrationStrategy-613400940-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:47:04Z,user_data=None,user_id='f560266d133f4f1ba4a908e3cdcfa59d',uuid=656a0137-3214-4992-a68a-cdbedf0336f6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d728eab4-88db-4811-b199-c75155b08c82", "address": "fa:16:3e:b3:57:7e", "network": {"id": "f4658d55-a8f9-48f1-846d-61df3d830821", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2093820932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67cbb3b670e445a4b97abcc92749d126", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd728eab4-88", "ovs_interfaceid": "d728eab4-88db-4811-b199-c75155b08c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Sep 30 18:47:11 compute-0 nova_compute[265391]: 2025-09-30 18:47:11.256 2 DEBUG nova.network.os_vif_util [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Converting VIF {"id": "d728eab4-88db-4811-b199-c75155b08c82", "address": "fa:16:3e:b3:57:7e", "network": {"id": "f4658d55-a8f9-48f1-846d-61df3d830821", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2093820932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67cbb3b670e445a4b97abcc92749d126", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd728eab4-88", "ovs_interfaceid": "d728eab4-88db-4811-b199-c75155b08c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:47:11 compute-0 nova_compute[265391]: 2025-09-30 18:47:11.257 2 DEBUG nova.network.os_vif_util [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:57:7e,bridge_name='br-int',has_traffic_filtering=True,id=d728eab4-88db-4811-b199-c75155b08c82,network=Network(f4658d55-a8f9-48f1-846d-61df3d830821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd728eab4-88') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:47:11 compute-0 nova_compute[265391]: 2025-09-30 18:47:11.257 2 DEBUG os_vif [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:57:7e,bridge_name='br-int',has_traffic_filtering=True,id=d728eab4-88db-4811-b199-c75155b08c82,network=Network(f4658d55-a8f9-48f1-846d-61df3d830821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd728eab4-88') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Sep 30 18:47:11 compute-0 nova_compute[265391]: 2025-09-30 18:47:11.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:11 compute-0 nova_compute[265391]: 2025-09-30 18:47:11.258 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:47:11 compute-0 nova_compute[265391]: 2025-09-30 18:47:11.259 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:47:11 compute-0 nova_compute[265391]: 2025-09-30 18:47:11.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:11 compute-0 nova_compute[265391]: 2025-09-30 18:47:11.260 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '5d2e0724-05f8-585c-95a8-128e7bd904b0', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:47:11 compute-0 nova_compute[265391]: 2025-09-30 18:47:11.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:11 compute-0 nova_compute[265391]: 2025-09-30 18:47:11.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:47:11 compute-0 nova_compute[265391]: 2025-09-30 18:47:11.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:11 compute-0 nova_compute[265391]: 2025-09-30 18:47:11.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:11 compute-0 nova_compute[265391]: 2025-09-30 18:47:11.266 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd728eab4-88, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:47:11 compute-0 nova_compute[265391]: 2025-09-30 18:47:11.267 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tapd728eab4-88, col_values=(('qos', UUID('0d00f60e-6597-4de5-a8f7-bba6f911b803')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:47:11 compute-0 nova_compute[265391]: 2025-09-30 18:47:11.267 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tapd728eab4-88, col_values=(('external_ids', {'iface-id': 'd728eab4-88db-4811-b199-c75155b08c82', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:57:7e', 'vm-uuid': '656a0137-3214-4992-a68a-cdbedf0336f6'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:47:11 compute-0 nova_compute[265391]: 2025-09-30 18:47:11.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:11 compute-0 NetworkManager[45059]: <info>  [1759258031.2696] manager: (tapd728eab4-88): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/117)
Sep 30 18:47:11 compute-0 nova_compute[265391]: 2025-09-30 18:47:11.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:47:11 compute-0 nova_compute[265391]: 2025-09-30 18:47:11.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:11 compute-0 nova_compute[265391]: 2025-09-30 18:47:11.276 2 INFO os_vif [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:57:7e,bridge_name='br-int',has_traffic_filtering=True,id=d728eab4-88db-4811-b199-c75155b08c82,network=Network(f4658d55-a8f9-48f1-846d-61df3d830821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd728eab4-88')
Sep 30 18:47:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:47:11 compute-0 podman[358394]: 2025-09-30 18:47:11.52943706 +0000 UTC m=+0.056548477 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:47:11 compute-0 podman[358393]: 2025-09-30 18:47:11.579788735 +0000 UTC m=+0.113919364 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Sep 30 18:47:11 compute-0 podman[358438]: 2025-09-30 18:47:11.636148946 +0000 UTC m=+0.075131588 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:47:11 compute-0 nova_compute[265391]: 2025-09-30 18:47:11.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2065: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:47:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:47:11.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:47:12.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:12 compute-0 nova_compute[265391]: 2025-09-30 18:47:12.826 2 DEBUG nova.virt.libvirt.driver [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:47:12 compute-0 nova_compute[265391]: 2025-09-30 18:47:12.826 2 DEBUG nova.virt.libvirt.driver [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:47:12 compute-0 nova_compute[265391]: 2025-09-30 18:47:12.826 2 DEBUG nova.virt.libvirt.driver [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] No VIF found with MAC fa:16:3e:b3:57:7e, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Sep 30 18:47:12 compute-0 nova_compute[265391]: 2025-09-30 18:47:12.827 2 INFO nova.virt.libvirt.driver [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Using config drive
Sep 30 18:47:12 compute-0 nova_compute[265391]: 2025-09-30 18:47:12.855 2 DEBUG nova.storage.rbd_utils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] rbd image 656a0137-3214-4992-a68a-cdbedf0336f6_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:47:13 compute-0 ceph-mon[73755]: pgmap v2065: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:47:13 compute-0 nova_compute[265391]: 2025-09-30 18:47:13.372 2 WARNING neutronclient.v2_0.client [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:47:13 compute-0 nova_compute[265391]: 2025-09-30 18:47:13.561 2 INFO nova.virt.libvirt.driver [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Creating config drive at /var/lib/nova/instances/656a0137-3214-4992-a68a-cdbedf0336f6/disk.config
Sep 30 18:47:13 compute-0 nova_compute[265391]: 2025-09-30 18:47:13.567 2 DEBUG oslo_concurrency.processutils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/656a0137-3214-4992-a68a-cdbedf0336f6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmps2vawbt5 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:47:13 compute-0 nova_compute[265391]: 2025-09-30 18:47:13.694 2 DEBUG oslo_concurrency.processutils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/656a0137-3214-4992-a68a-cdbedf0336f6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmps2vawbt5" returned: 0 in 0.127s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:47:13 compute-0 nova_compute[265391]: 2025-09-30 18:47:13.721 2 DEBUG nova.storage.rbd_utils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] rbd image 656a0137-3214-4992-a68a-cdbedf0336f6_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:47:13 compute-0 nova_compute[265391]: 2025-09-30 18:47:13.725 2 DEBUG oslo_concurrency.processutils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/656a0137-3214-4992-a68a-cdbedf0336f6/disk.config 656a0137-3214-4992-a68a-cdbedf0336f6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:47:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2066: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:47:13 compute-0 nova_compute[265391]: 2025-09-30 18:47:13.874 2 DEBUG oslo_concurrency.processutils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/656a0137-3214-4992-a68a-cdbedf0336f6/disk.config 656a0137-3214-4992-a68a-cdbedf0336f6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:47:13 compute-0 nova_compute[265391]: 2025-09-30 18:47:13.876 2 INFO nova.virt.libvirt.driver [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Deleting local config drive /var/lib/nova/instances/656a0137-3214-4992-a68a-cdbedf0336f6/disk.config because it was imported into RBD.
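For readers following the config-drive sequence above: Nova builds the ISO locally with mkisofs, imports it into the Ceph "vms" pool with rbd import, and then deletes the local file. The snippet below is a minimal standalone replay of those two commands using the arguments recorded in the log entries above; it is an illustration only, not the Nova implementation, and the /tmp staging directory name is simply the one that happened to appear in the log.

# Hypothetical replay of the config-drive steps logged above (not Nova code).
# Assumes mkisofs and the rbd CLI are installed and /etc/ceph/ceph.conf is readable.
import subprocess

instance = "656a0137-3214-4992-a68a-cdbedf0336f6"
local_iso = f"/var/lib/nova/instances/{instance}/disk.config"

# 1. Build the config-drive ISO from the temporary staging directory
#    (flags copied from the log entry above; -publisher omitted for brevity).
subprocess.run(
    ["/usr/bin/mkisofs", "-o", local_iso, "-ldots", "-allow-lowercase",
     "-allow-multidot", "-l", "-quiet", "-J", "-r", "-V", "config-2",
     "/tmp/tmps2vawbt5"],
    check=True)

# 2. Import the ISO into the Ceph 'vms' pool as <uuid>_disk.config, exactly as logged.
subprocess.run(
    ["rbd", "import", "--pool", "vms", local_iso,
     f"{instance}_disk.config", "--image-format=2",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True)

# 3. Nova then removes the local ISO, since the RBD copy is now authoritative.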
Sep 30 18:47:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:47:13.913Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:47:13 compute-0 kernel: tapd728eab4-88: entered promiscuous mode
Sep 30 18:47:13 compute-0 NetworkManager[45059]: <info>  [1759258033.9550] manager: (tapd728eab4-88): new Tun device (/org/freedesktop/NetworkManager/Devices/118)
Sep 30 18:47:13 compute-0 ovn_controller[156242]: 2025-09-30T18:47:13Z|00308|binding|INFO|Claiming lport d728eab4-88db-4811-b199-c75155b08c82 for this chassis.
Sep 30 18:47:13 compute-0 ovn_controller[156242]: 2025-09-30T18:47:13Z|00309|binding|INFO|d728eab4-88db-4811-b199-c75155b08c82: Claiming fa:16:3e:b3:57:7e 10.100.0.14
Sep 30 18:47:13 compute-0 nova_compute[265391]: 2025-09-30 18:47:13.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:13.972 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:57:7e 10.100.0.14'], port_security=['fa:16:3e:b3:57:7e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '656a0137-3214-4992-a68a-cdbedf0336f6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f4658d55-a8f9-48f1-846d-61df3d830821', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3359c464e0344756a39ce5c7088b9eba', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3a57c776-d79c-4096-859e-411dcf78cfa5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0884332-fe68-47c8-9c8c-5c6a7c53f7f5, chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=d728eab4-88db-4811-b199-c75155b08c82) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:47:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:13.973 166158 INFO neutron.agent.ovn.metadata.agent [-] Port d728eab4-88db-4811-b199-c75155b08c82 in datapath f4658d55-a8f9-48f1-846d-61df3d830821 bound to our chassis
Sep 30 18:47:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:13.974 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f4658d55-a8f9-48f1-846d-61df3d830821
Sep 30 18:47:13 compute-0 systemd-udevd[358537]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:47:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:13.991 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[f846bdbe-3687-40a3-a814-d4c888c684ee]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:47:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:13.992 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf4658d55-a1 in ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821 namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Sep 30 18:47:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:13.993 292173 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf4658d55-a0 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Sep 30 18:47:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:13.994 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[7a8fc70a-69af-41ec-83f1-8b07a26f5511]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:47:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:13.994 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[4501bb42-ca4d-42c1-9ca1-a9bd5eed7d7d]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:47:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 18:47:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:47:13.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 18:47:14 compute-0 NetworkManager[45059]: <info>  [1759258034.0027] device (tapd728eab4-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 18:47:14 compute-0 NetworkManager[45059]: <info>  [1759258034.0041] device (tapd728eab4-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:14.006 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[cc041823-48fa-4047-88e9-2e961c9c5487]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:47:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:47:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:47:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:47:14 compute-0 systemd-machined[219917]: New machine qemu-28-instance-00000024.
Sep 30 18:47:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:47:14 compute-0 systemd[1]: Started Virtual Machine qemu-28-instance-00000024.
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:14.026 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[504ec2fb-18a9-4d2f-811d-5e415c1fbeb0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:47:14 compute-0 nova_compute[265391]: 2025-09-30 18:47:14.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:14 compute-0 ovn_controller[156242]: 2025-09-30T18:47:14Z|00310|binding|INFO|Setting lport d728eab4-88db-4811-b199-c75155b08c82 ovn-installed in OVS
Sep 30 18:47:14 compute-0 ovn_controller[156242]: 2025-09-30T18:47:14Z|00311|binding|INFO|Setting lport d728eab4-88db-4811-b199-c75155b08c82 up in Southbound
Sep 30 18:47:14 compute-0 nova_compute[265391]: 2025-09-30 18:47:14.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:14.055 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[1ecc6e59-ae65-4eea-8b2c-2dcb46da1ee8]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:14.059 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[3fa3b4fd-54d5-477f-9254-e823bb1f8c58]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:47:14 compute-0 NetworkManager[45059]: <info>  [1759258034.0616] manager: (tapf4658d55-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/119)
Sep 30 18:47:14 compute-0 systemd-udevd[358542]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:14.093 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[be08e6b8-080c-4dc5-97e7-776c4355818b]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:14.096 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[b1e26b14-a6c8-4664-9a8a-0dc4127fe414]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:47:14 compute-0 NetworkManager[45059]: <info>  [1759258034.1175] device (tapf4658d55-a0): carrier: link connected
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:14.121 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[f29d9895-bf4a-4736-a3e5-e0e0e38658ce]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:14.140 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[5d1ad722-3772-4412-a7ef-ec59e0c25f21]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf4658d55-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:37:a8:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 86], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663281, 'reachable_time': 24180, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358571, 'error': None, 'target': 'ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:14.155 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[a62bcb60-dab8-4fe4-b054-301e44d371ba]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe37:a899'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663281, 'tstamp': 663281}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358572, 'error': None, 'target': 'ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:14.170 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[7ff8c95b-d2e1-43e5-8dbf-d4880ba77356]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf4658d55-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:37:a8:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 86], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663281, 'reachable_time': 24180, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 358573, 'error': None, 'target': 'ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:14.202 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[bb497f7c-523b-462f-b77b-867ef353fe04]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:14.268 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[0c9c2d45-8058-4ec5-8d8c-78fbc70d52a8]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:14.269 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4658d55-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:14.269 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:14.270 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf4658d55-a0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:47:14 compute-0 NetworkManager[45059]: <info>  [1759258034.2727] manager: (tapf4658d55-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/120)
Sep 30 18:47:14 compute-0 kernel: tapf4658d55-a0: entered promiscuous mode
Sep 30 18:47:14 compute-0 nova_compute[265391]: 2025-09-30 18:47:14.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:14.276 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf4658d55-a0, col_values=(('external_ids', {'iface-id': '862fbe9e-132a-4b8a-83f6-7b020c6192ad'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
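The transaction above issues three ovsdbapp commands: drop the veth from br-ex if it is attached there, add it to br-int, and set external_ids:iface-id to the Neutron port UUID so ovn-controller can bind it. A rough command-line equivalent, assuming the standard ovs-vsctl client rather than the agent's ovsdbapp code path, might look like the sketch below.

# Hypothetical ovs-vsctl equivalent of the ovsdbapp transaction logged above.
import subprocess

port = "tapf4658d55-a0"
iface_id = "862fbe9e-132a-4b8a-83f6-7b020c6192ad"

# Remove the veth from br-ex if present (DelPortCommand, if_exists=True).
subprocess.run(["ovs-vsctl", "--if-exists", "del-port", "br-ex", port], check=True)
# Attach it to the integration bridge (AddPortCommand, may_exist=True).
subprocess.run(["ovs-vsctl", "--may-exist", "add-port", "br-int", port], check=True)
# Tag the interface with the Neutron port UUID (DbSetCommand on external_ids).
subprocess.run(["ovs-vsctl", "set", "Interface", port,
                f"external_ids:iface-id={iface_id}"], check=True)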
Sep 30 18:47:14 compute-0 nova_compute[265391]: 2025-09-30 18:47:14.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:14 compute-0 ovn_controller[156242]: 2025-09-30T18:47:14Z|00312|binding|INFO|Releasing lport 862fbe9e-132a-4b8a-83f6-7b020c6192ad from this chassis (sb_readonly=0)
Sep 30 18:47:14 compute-0 nova_compute[265391]: 2025-09-30 18:47:14.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:14.294 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[f9e3b89b-2abd-4d57-948b-c6d2dd776fb0]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:14.295 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f4658d55-a8f9-48f1-846d-61df3d830821.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f4658d55-a8f9-48f1-846d-61df3d830821.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:14.295 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f4658d55-a8f9-48f1-846d-61df3d830821.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f4658d55-a8f9-48f1-846d-61df3d830821.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:14.295 166158 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for f4658d55-a8f9-48f1-846d-61df3d830821 disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:14.295 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f4658d55-a8f9-48f1-846d-61df3d830821.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f4658d55-a8f9-48f1-846d-61df3d830821.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:14.296 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[dbbf9131-9e29-43d3-9d3d-92490785bc14]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:14.296 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f4658d55-a8f9-48f1-846d-61df3d830821.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f4658d55-a8f9-48f1-846d-61df3d830821.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:14.297 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[48e2e9b0-5165-46ee-943f-611e7e35979a]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:14.297 166158 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]: global
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]:     log         /dev/log local0 debug
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]:     log-tag     haproxy-metadata-proxy-f4658d55-a8f9-48f1-846d-61df3d830821
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]:     user        root
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]:     group       root
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]:     maxconn     1024
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]:     pidfile     /var/lib/neutron/external/pids/f4658d55-a8f9-48f1-846d-61df3d830821.pid.haproxy
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]:     daemon
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]: defaults
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]:     log global
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]:     mode http
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]:     option httplog
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]:     option dontlognull
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]:     option http-server-close
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]:     option forwardfor
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]:     retries                 3
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]:     timeout http-request    30s
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]:     timeout connect         30s
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]:     timeout client          32s
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]:     timeout server          32s
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]:     timeout http-keep-alive 30s
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]: listen listener
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]:     bind 169.254.169.254:80
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]:     
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]:     server metadata /var/lib/neutron/metadata_proxy
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]:     http-request add-header X-OVN-Network-ID f4658d55-a8f9-48f1-846d-61df3d830821
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Sep 30 18:47:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:14.298 166158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821', 'env', 'PROCESS_TAG=haproxy-f4658d55-a8f9-48f1-846d-61df3d830821', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f4658d55-a8f9-48f1-846d-61df3d830821.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Sep 30 18:47:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:47:14.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:14 compute-0 podman[358605]: 2025-09-30 18:47:14.659999798 +0000 UTC m=+0.051280290 container create d43328785802a36d5ca5459b881e9b193e62f9b26d2699313004c4d148b8a6ad (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4)
Sep 30 18:47:14 compute-0 systemd[1]: Started libpod-conmon-d43328785802a36d5ca5459b881e9b193e62f9b26d2699313004c4d148b8a6ad.scope.
Sep 30 18:47:14 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:47:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dddba190a93339fd9c7fb102870ba5499b724d50782140107a0ddd1626fd7bda/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Sep 30 18:47:14 compute-0 podman[358605]: 2025-09-30 18:47:14.630683058 +0000 UTC m=+0.021963580 image pull 8925e336c8a9c8e5b40d98e4715ad60992d6f21ad0a398170a686c34f922c024 38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Sep 30 18:47:14 compute-0 podman[358605]: 2025-09-30 18:47:14.727912428 +0000 UTC m=+0.119192950 container init d43328785802a36d5ca5459b881e9b193e62f9b26d2699313004c4d148b8a6ad (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest)
Sep 30 18:47:14 compute-0 podman[358605]: 2025-09-30 18:47:14.733942575 +0000 UTC m=+0.125223067 container start d43328785802a36d5ca5459b881e9b193e62f9b26d2699313004c4d148b8a6ad (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821, org.label-schema.build-date=20250930, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest)
Sep 30 18:47:14 compute-0 neutron-haproxy-ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821[358621]: [NOTICE]   (358640) : New worker (358652) forked
Sep 30 18:47:14 compute-0 neutron-haproxy-ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821[358621]: [NOTICE]   (358640) : Loading success.
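Putting the entries above together: the agent renders an haproxy configuration that binds 169.254.169.254:80 inside the ovnmeta- namespace and proxies requests to the /var/lib/neutron/metadata_proxy socket, then launches haproxy in that namespace via rootwrap (here inside the neutron-haproxy-ovnmeta podman container started above). The following is a hypothetical sketch of those two steps, not the agent's create_config_file/spawn implementation; names and paths are taken from the log and the config body is elided because it is dumped in full in the haproxy_cfg entry above.

# Hypothetical reconstruction of the metadata-proxy provisioning seen above.
import subprocess

network_id = "f4658d55-a8f9-48f1-846d-61df3d830821"
namespace = f"ovnmeta-{network_id}"
cfg_path = f"/var/lib/neutron/ovn-metadata-proxy/{network_id}.conf"

# 1. Write the rendered haproxy config (the exact text is the haproxy_cfg dump above).
haproxy_cfg = "..."  # elided; see the log entry above
with open(cfg_path, "w") as f:
    f.write(haproxy_cfg)

# 2. Launch haproxy inside the ovnmeta- namespace, mirroring the rootwrap command.
subprocess.run(
    ["ip", "netns", "exec", namespace,
     "env", f"PROCESS_TAG=haproxy-{network_id}",
     "haproxy", "-f", cfg_path],
    check=True)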
Sep 30 18:47:15 compute-0 ceph-mon[73755]: pgmap v2066: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:47:15 compute-0 nova_compute[265391]: 2025-09-30 18:47:15.140 2 DEBUG nova.compute.manager [req-d00e71a4-1e94-4dc2-819d-f8eeb1d95202 req-6fac2c3e-379c-4b84-91e4-073eee70b4e6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Received event network-vif-plugged-d728eab4-88db-4811-b199-c75155b08c82 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:47:15 compute-0 nova_compute[265391]: 2025-09-30 18:47:15.141 2 DEBUG oslo_concurrency.lockutils [req-d00e71a4-1e94-4dc2-819d-f8eeb1d95202 req-6fac2c3e-379c-4b84-91e4-073eee70b4e6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "656a0137-3214-4992-a68a-cdbedf0336f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:47:15 compute-0 nova_compute[265391]: 2025-09-30 18:47:15.141 2 DEBUG oslo_concurrency.lockutils [req-d00e71a4-1e94-4dc2-819d-f8eeb1d95202 req-6fac2c3e-379c-4b84-91e4-073eee70b4e6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "656a0137-3214-4992-a68a-cdbedf0336f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:47:15 compute-0 nova_compute[265391]: 2025-09-30 18:47:15.141 2 DEBUG oslo_concurrency.lockutils [req-d00e71a4-1e94-4dc2-819d-f8eeb1d95202 req-6fac2c3e-379c-4b84-91e4-073eee70b4e6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "656a0137-3214-4992-a68a-cdbedf0336f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:47:15 compute-0 nova_compute[265391]: 2025-09-30 18:47:15.141 2 DEBUG nova.compute.manager [req-d00e71a4-1e94-4dc2-819d-f8eeb1d95202 req-6fac2c3e-379c-4b84-91e4-073eee70b4e6 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Processing event network-vif-plugged-d728eab4-88db-4811-b199-c75155b08c82 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:47:15 compute-0 nova_compute[265391]: 2025-09-30 18:47:15.242 2 DEBUG nova.compute.manager [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:47:15 compute-0 nova_compute[265391]: 2025-09-30 18:47:15.246 2 DEBUG nova.virt.libvirt.driver [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Sep 30 18:47:15 compute-0 nova_compute[265391]: 2025-09-30 18:47:15.250 2 INFO nova.virt.libvirt.driver [-] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Instance spawned successfully.
Sep 30 18:47:15 compute-0 nova_compute[265391]: 2025-09-30 18:47:15.250 2 DEBUG nova.virt.libvirt.driver [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Sep 30 18:47:15 compute-0 nova_compute[265391]: 2025-09-30 18:47:15.763 2 DEBUG nova.virt.libvirt.driver [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:47:15 compute-0 nova_compute[265391]: 2025-09-30 18:47:15.764 2 DEBUG nova.virt.libvirt.driver [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:47:15 compute-0 nova_compute[265391]: 2025-09-30 18:47:15.764 2 DEBUG nova.virt.libvirt.driver [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:47:15 compute-0 nova_compute[265391]: 2025-09-30 18:47:15.764 2 DEBUG nova.virt.libvirt.driver [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:47:15 compute-0 nova_compute[265391]: 2025-09-30 18:47:15.764 2 DEBUG nova.virt.libvirt.driver [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:47:15 compute-0 nova_compute[265391]: 2025-09-30 18:47:15.765 2 DEBUG nova.virt.libvirt.driver [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:47:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2067: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:47:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:47:15.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:16 compute-0 nova_compute[265391]: 2025-09-30 18:47:16.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:16 compute-0 nova_compute[265391]: 2025-09-30 18:47:16.273 2 INFO nova.compute.manager [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Took 10.32 seconds to spawn the instance on the hypervisor.
Sep 30 18:47:16 compute-0 nova_compute[265391]: 2025-09-30 18:47:16.274 2 DEBUG nova.compute.manager [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Sep 30 18:47:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:47:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:47:16.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:16 compute-0 nova_compute[265391]: 2025-09-30 18:47:16.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:16 compute-0 nova_compute[265391]: 2025-09-30 18:47:16.809 2 INFO nova.compute.manager [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Took 18.17 seconds to build instance.
Sep 30 18:47:17 compute-0 ceph-mon[73755]: pgmap v2067: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:47:17 compute-0 nova_compute[265391]: 2025-09-30 18:47:17.233 2 DEBUG nova.compute.manager [req-5864d9b5-5091-4721-97ba-c7117bd09962 req-b0f7f5c5-3a39-445a-8c67-42e277fb9717 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Received event network-vif-plugged-d728eab4-88db-4811-b199-c75155b08c82 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:47:17 compute-0 nova_compute[265391]: 2025-09-30 18:47:17.234 2 DEBUG oslo_concurrency.lockutils [req-5864d9b5-5091-4721-97ba-c7117bd09962 req-b0f7f5c5-3a39-445a-8c67-42e277fb9717 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "656a0137-3214-4992-a68a-cdbedf0336f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:47:17 compute-0 nova_compute[265391]: 2025-09-30 18:47:17.234 2 DEBUG oslo_concurrency.lockutils [req-5864d9b5-5091-4721-97ba-c7117bd09962 req-b0f7f5c5-3a39-445a-8c67-42e277fb9717 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "656a0137-3214-4992-a68a-cdbedf0336f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:47:17 compute-0 nova_compute[265391]: 2025-09-30 18:47:17.234 2 DEBUG oslo_concurrency.lockutils [req-5864d9b5-5091-4721-97ba-c7117bd09962 req-b0f7f5c5-3a39-445a-8c67-42e277fb9717 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "656a0137-3214-4992-a68a-cdbedf0336f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:47:17 compute-0 nova_compute[265391]: 2025-09-30 18:47:17.235 2 DEBUG nova.compute.manager [req-5864d9b5-5091-4721-97ba-c7117bd09962 req-b0f7f5c5-3a39-445a-8c67-42e277fb9717 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] No waiting events found dispatching network-vif-plugged-d728eab4-88db-4811-b199-c75155b08c82 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:47:17 compute-0 nova_compute[265391]: 2025-09-30 18:47:17.235 2 WARNING nova.compute.manager [req-5864d9b5-5091-4721-97ba-c7117bd09962 req-b0f7f5c5-3a39-445a-8c67-42e277fb9717 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Received unexpected event network-vif-plugged-d728eab4-88db-4811-b199-c75155b08c82 for instance with vm_state active and task_state None.
Sep 30 18:47:17 compute-0 nova_compute[265391]: 2025-09-30 18:47:17.314 2 DEBUG oslo_concurrency.lockutils [None req-0cc318d3-9dd7-4c60-a888-97441d1d494b f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Lock "656a0137-3214-4992-a68a-cdbedf0336f6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.695s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:47:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:47:17.386Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:47:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2068: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:47:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:47:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:47:17.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:47:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:47:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:47:18.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:47:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:47:18] "GET /metrics HTTP/1.1" 200 46707 "" "Prometheus/2.51.0"
Sep 30 18:47:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:47:18] "GET /metrics HTTP/1.1" 200 46707 "" "Prometheus/2.51.0"
Sep 30 18:47:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:47:18.924Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:47:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:47:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:47:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:47:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:47:19 compute-0 ceph-mon[73755]: pgmap v2068: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:47:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2069: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 18:47:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:47:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:47:19.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:47:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:47:20.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:21 compute-0 ceph-mon[73755]: pgmap v2069: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 18:47:21 compute-0 nova_compute[265391]: 2025-09-30 18:47:21.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:47:21 compute-0 sudo[358684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:47:21 compute-0 sudo[358684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:47:21 compute-0 sudo[358684]: pam_unix(sudo:session): session closed for user root
Sep 30 18:47:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2070: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:47:21 compute-0 nova_compute[265391]: 2025-09-30 18:47:21.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:47:22.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-crash-compute-0[79187]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Sep 30 18:47:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:47:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:47:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:47:22.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:23 compute-0 ceph-mon[73755]: pgmap v2070: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:47:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:47:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2071: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:47:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:47:23.915Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:47:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:47:23.915Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:47:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:47:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:47:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:47:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:47:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:47:24.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:24 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/774896620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:47:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:47:24.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:24 compute-0 podman[358713]: 2025-09-30 18:47:24.525882064 +0000 UTC m=+0.064436411 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Sep 30 18:47:24 compute-0 podman[358715]: 2025-09-30 18:47:24.55080409 +0000 UTC m=+0.079368318 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-type=git, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, version=9.6, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41)
Sep 30 18:47:24 compute-0 podman[358714]: 2025-09-30 18:47:24.551157959 +0000 UTC m=+0.088124075 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.4, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest)
Sep 30 18:47:25 compute-0 ceph-mon[73755]: pgmap v2071: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:47:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2072: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:47:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:47:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:47:26.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:47:26 compute-0 nova_compute[265391]: 2025-09-30 18:47:26.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:47:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:47:26.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:26 compute-0 ovn_controller[156242]: 2025-09-30T18:47:26Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b3:57:7e 10.100.0.14
Sep 30 18:47:26 compute-0 ovn_controller[156242]: 2025-09-30T18:47:26Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b3:57:7e 10.100.0.14
Sep 30 18:47:26 compute-0 nova_compute[265391]: 2025-09-30 18:47:26.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:27 compute-0 ceph-mon[73755]: pgmap v2072: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:47:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:47:27.387Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:47:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2073: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:47:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:47:28.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:47:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:47:28.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:47:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:47:28] "GET /metrics HTTP/1.1" 200 46725 "" "Prometheus/2.51.0"
Sep 30 18:47:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:47:28] "GET /metrics HTTP/1.1" 200 46725 "" "Prometheus/2.51.0"
Sep 30 18:47:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:47:28.925Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:47:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:47:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:47:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:47:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:47:29 compute-0 ceph-mon[73755]: pgmap v2073: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:47:29 compute-0 sudo[358780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:47:29 compute-0 sudo[358780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:47:29 compute-0 sudo[358780]: pam_unix(sudo:session): session closed for user root
Sep 30 18:47:29 compute-0 podman[276673]: time="2025-09-30T18:47:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:47:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:47:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:47:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2074: 353 pgs: 353 active+clean; 167 MiB data, 456 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 163 op/s
Sep 30 18:47:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:47:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10775 "" "Go-http-client/1.1"
Sep 30 18:47:29 compute-0 sudo[358805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:47:29 compute-0 sudo[358805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:47:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 18:47:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:47:30.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 18:47:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:47:30.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:30 compute-0 sudo[358805]: pam_unix(sudo:session): session closed for user root
Sep 30 18:47:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:47:30 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:47:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:47:30 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:47:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:47:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2075: 353 pgs: 353 active+clean; 167 MiB data, 456 MiB used, 40 GiB / 40 GiB avail; 375 KiB/s rd, 4.3 MiB/s wr, 98 op/s
Sep 30 18:47:30 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:47:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:47:30 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:47:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:47:30 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:47:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:47:30 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:47:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:47:30 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:47:30 compute-0 sudo[358863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:47:30 compute-0 sudo[358863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:47:30 compute-0 sudo[358863]: pam_unix(sudo:session): session closed for user root
Sep 30 18:47:30 compute-0 sudo[358888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:47:30 compute-0 sudo[358888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:47:31 compute-0 podman[358954]: 2025-09-30 18:47:31.207430758 +0000 UTC m=+0.048320473 container create 5e1801258f21e1523f9cdd79c12d6e4f99c6668e0bfcd73884b25cc30c7625cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_mirzakhani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:47:31 compute-0 ceph-mon[73755]: pgmap v2074: 353 pgs: 353 active+clean; 167 MiB data, 456 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 163 op/s
Sep 30 18:47:31 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:47:31 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:47:31 compute-0 ceph-mon[73755]: pgmap v2075: 353 pgs: 353 active+clean; 167 MiB data, 456 MiB used, 40 GiB / 40 GiB avail; 375 KiB/s rd, 4.3 MiB/s wr, 98 op/s
Sep 30 18:47:31 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:47:31 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:47:31 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:47:31 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:47:31 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:47:31 compute-0 systemd[1]: Started libpod-conmon-5e1801258f21e1523f9cdd79c12d6e4f99c6668e0bfcd73884b25cc30c7625cc.scope.
Sep 30 18:47:31 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:47:31 compute-0 nova_compute[265391]: 2025-09-30 18:47:31.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:31 compute-0 podman[358954]: 2025-09-30 18:47:31.188492698 +0000 UTC m=+0.029382413 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:47:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:47:31 compute-0 podman[358954]: 2025-09-30 18:47:31.294117245 +0000 UTC m=+0.135006950 container init 5e1801258f21e1523f9cdd79c12d6e4f99c6668e0bfcd73884b25cc30c7625cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_mirzakhani, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Sep 30 18:47:31 compute-0 podman[358954]: 2025-09-30 18:47:31.301211959 +0000 UTC m=+0.142101654 container start 5e1801258f21e1523f9cdd79c12d6e4f99c6668e0bfcd73884b25cc30c7625cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_mirzakhani, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:47:31 compute-0 podman[358954]: 2025-09-30 18:47:31.304553486 +0000 UTC m=+0.145443191 container attach 5e1801258f21e1523f9cdd79c12d6e4f99c6668e0bfcd73884b25cc30c7625cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_mirzakhani, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:47:31 compute-0 friendly_mirzakhani[358970]: 167 167
Sep 30 18:47:31 compute-0 systemd[1]: libpod-5e1801258f21e1523f9cdd79c12d6e4f99c6668e0bfcd73884b25cc30c7625cc.scope: Deactivated successfully.
Sep 30 18:47:31 compute-0 podman[358954]: 2025-09-30 18:47:31.307134193 +0000 UTC m=+0.148023928 container died 5e1801258f21e1523f9cdd79c12d6e4f99c6668e0bfcd73884b25cc30c7625cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Sep 30 18:47:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-c960768348b34e4e875c9cc58209976c162f16b9d419721814946552e81c0c98-merged.mount: Deactivated successfully.
Sep 30 18:47:31 compute-0 podman[358954]: 2025-09-30 18:47:31.354962542 +0000 UTC m=+0.195852247 container remove 5e1801258f21e1523f9cdd79c12d6e4f99c6668e0bfcd73884b25cc30c7625cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_mirzakhani, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 18:47:31 compute-0 systemd[1]: libpod-conmon-5e1801258f21e1523f9cdd79c12d6e4f99c6668e0bfcd73884b25cc30c7625cc.scope: Deactivated successfully.
Sep 30 18:47:31 compute-0 openstack_network_exporter[279566]: ERROR   18:47:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:47:31 compute-0 openstack_network_exporter[279566]: ERROR   18:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:47:31 compute-0 openstack_network_exporter[279566]: ERROR   18:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:47:31 compute-0 openstack_network_exporter[279566]: ERROR   18:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:47:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:47:31 compute-0 openstack_network_exporter[279566]: ERROR   18:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:47:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:47:31 compute-0 podman[358997]: 2025-09-30 18:47:31.545089439 +0000 UTC m=+0.048107948 container create bda47d8ef9d7cdcc12b5c1a29e60ccfcb635ff5bd794b4a761b5edce47b0b45b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_morse, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Sep 30 18:47:31 compute-0 systemd[1]: Started libpod-conmon-bda47d8ef9d7cdcc12b5c1a29e60ccfcb635ff5bd794b4a761b5edce47b0b45b.scope.
Sep 30 18:47:31 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:47:31 compute-0 podman[358997]: 2025-09-30 18:47:31.524165657 +0000 UTC m=+0.027184196 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:47:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17c025033a2ff615e022a19c3f9ce24d55e0ed84c19f9be49c5bdf1285c35d00/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:47:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17c025033a2ff615e022a19c3f9ce24d55e0ed84c19f9be49c5bdf1285c35d00/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:47:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17c025033a2ff615e022a19c3f9ce24d55e0ed84c19f9be49c5bdf1285c35d00/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:47:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17c025033a2ff615e022a19c3f9ce24d55e0ed84c19f9be49c5bdf1285c35d00/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:47:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17c025033a2ff615e022a19c3f9ce24d55e0ed84c19f9be49c5bdf1285c35d00/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:47:31 compute-0 podman[358997]: 2025-09-30 18:47:31.645771129 +0000 UTC m=+0.148789648 container init bda47d8ef9d7cdcc12b5c1a29e60ccfcb635ff5bd794b4a761b5edce47b0b45b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:47:31 compute-0 podman[358997]: 2025-09-30 18:47:31.659660639 +0000 UTC m=+0.162679168 container start bda47d8ef9d7cdcc12b5c1a29e60ccfcb635ff5bd794b4a761b5edce47b0b45b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_morse, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 18:47:31 compute-0 podman[358997]: 2025-09-30 18:47:31.663741024 +0000 UTC m=+0.166759523 container attach bda47d8ef9d7cdcc12b5c1a29e60ccfcb635ff5bd794b4a761b5edce47b0b45b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Sep 30 18:47:31 compute-0 nova_compute[265391]: 2025-09-30 18:47:31.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:31 compute-0 clever_morse[359015]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:47:31 compute-0 clever_morse[359015]: --> All data devices are unavailable
Sep 30 18:47:32 compute-0 systemd[1]: libpod-bda47d8ef9d7cdcc12b5c1a29e60ccfcb635ff5bd794b4a761b5edce47b0b45b.scope: Deactivated successfully.
Sep 30 18:47:32 compute-0 podman[358997]: 2025-09-30 18:47:32.003222663 +0000 UTC m=+0.506241152 container died bda47d8ef9d7cdcc12b5c1a29e60ccfcb635ff5bd794b4a761b5edce47b0b45b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_morse, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:47:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:47:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:47:32.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:47:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-17c025033a2ff615e022a19c3f9ce24d55e0ed84c19f9be49c5bdf1285c35d00-merged.mount: Deactivated successfully.
Sep 30 18:47:32 compute-0 podman[358997]: 2025-09-30 18:47:32.048732243 +0000 UTC m=+0.551750742 container remove bda47d8ef9d7cdcc12b5c1a29e60ccfcb635ff5bd794b4a761b5edce47b0b45b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_morse, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Sep 30 18:47:32 compute-0 systemd[1]: libpod-conmon-bda47d8ef9d7cdcc12b5c1a29e60ccfcb635ff5bd794b4a761b5edce47b0b45b.scope: Deactivated successfully.
Sep 30 18:47:32 compute-0 sudo[358888]: pam_unix(sudo:session): session closed for user root
Sep 30 18:47:32 compute-0 sudo[359045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:47:32 compute-0 sudo[359045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:47:32 compute-0 sudo[359045]: pam_unix(sudo:session): session closed for user root
Sep 30 18:47:32 compute-0 sudo[359070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:47:32 compute-0 sudo[359070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:47:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:47:32.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2076: 353 pgs: 353 active+clean; 167 MiB data, 456 MiB used, 40 GiB / 40 GiB avail; 375 KiB/s rd, 4.3 MiB/s wr, 98 op/s
Sep 30 18:47:32 compute-0 podman[359135]: 2025-09-30 18:47:32.636917708 +0000 UTC m=+0.040421169 container create 059ba8e6b2c2367f5de90b3c4d627eb7dba487892effe2bc8d2bdcf6761a9961 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_hermann, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Sep 30 18:47:32 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3701305756' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:47:32 compute-0 systemd[1]: Started libpod-conmon-059ba8e6b2c2367f5de90b3c4d627eb7dba487892effe2bc8d2bdcf6761a9961.scope.
Sep 30 18:47:32 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:47:32 compute-0 podman[359135]: 2025-09-30 18:47:32.617920485 +0000 UTC m=+0.021423976 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:47:32 compute-0 podman[359135]: 2025-09-30 18:47:32.722952357 +0000 UTC m=+0.126455838 container init 059ba8e6b2c2367f5de90b3c4d627eb7dba487892effe2bc8d2bdcf6761a9961 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 18:47:32 compute-0 podman[359135]: 2025-09-30 18:47:32.732064864 +0000 UTC m=+0.135568325 container start 059ba8e6b2c2367f5de90b3c4d627eb7dba487892effe2bc8d2bdcf6761a9961 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Sep 30 18:47:32 compute-0 angry_hermann[359151]: 167 167
Sep 30 18:47:32 compute-0 podman[359135]: 2025-09-30 18:47:32.736394706 +0000 UTC m=+0.139898237 container attach 059ba8e6b2c2367f5de90b3c4d627eb7dba487892effe2bc8d2bdcf6761a9961 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Sep 30 18:47:32 compute-0 systemd[1]: libpod-059ba8e6b2c2367f5de90b3c4d627eb7dba487892effe2bc8d2bdcf6761a9961.scope: Deactivated successfully.
Sep 30 18:47:32 compute-0 conmon[359151]: conmon 059ba8e6b2c2367f5de9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-059ba8e6b2c2367f5de90b3c4d627eb7dba487892effe2bc8d2bdcf6761a9961.scope/container/memory.events
Sep 30 18:47:32 compute-0 podman[359135]: 2025-09-30 18:47:32.738266034 +0000 UTC m=+0.141769495 container died 059ba8e6b2c2367f5de90b3c4d627eb7dba487892effe2bc8d2bdcf6761a9961 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_hermann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:47:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d375371b2c0544580321bd3bb99a6e8c5c6d2ef6bbd99b21f75ce22da7a302b-merged.mount: Deactivated successfully.
Sep 30 18:47:32 compute-0 podman[359135]: 2025-09-30 18:47:32.781133015 +0000 UTC m=+0.184636476 container remove 059ba8e6b2c2367f5de90b3c4d627eb7dba487892effe2bc8d2bdcf6761a9961 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_hermann, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Sep 30 18:47:32 compute-0 systemd[1]: libpod-conmon-059ba8e6b2c2367f5de90b3c4d627eb7dba487892effe2bc8d2bdcf6761a9961.scope: Deactivated successfully.
Sep 30 18:47:32 compute-0 podman[359175]: 2025-09-30 18:47:32.951290126 +0000 UTC m=+0.035481301 container create db959d021df5ccd1d81ca8514aa0effb4a67024c1ac28f044588881a5a19cc25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Sep 30 18:47:32 compute-0 systemd[1]: Started libpod-conmon-db959d021df5ccd1d81ca8514aa0effb4a67024c1ac28f044588881a5a19cc25.scope.
Sep 30 18:47:33 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:47:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b78c2b859679a417b0d0e8c06c785d11ddfd75f9d44f2624521c9b0a863afa53/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:47:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b78c2b859679a417b0d0e8c06c785d11ddfd75f9d44f2624521c9b0a863afa53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:47:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b78c2b859679a417b0d0e8c06c785d11ddfd75f9d44f2624521c9b0a863afa53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:47:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b78c2b859679a417b0d0e8c06c785d11ddfd75f9d44f2624521c9b0a863afa53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:47:33 compute-0 podman[359175]: 2025-09-30 18:47:33.027806699 +0000 UTC m=+0.111997964 container init db959d021df5ccd1d81ca8514aa0effb4a67024c1ac28f044588881a5a19cc25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chandrasekhar, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 18:47:33 compute-0 podman[359175]: 2025-09-30 18:47:32.936131513 +0000 UTC m=+0.020322678 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:47:33 compute-0 podman[359175]: 2025-09-30 18:47:33.037756457 +0000 UTC m=+0.121947622 container start db959d021df5ccd1d81ca8514aa0effb4a67024c1ac28f044588881a5a19cc25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Sep 30 18:47:33 compute-0 podman[359175]: 2025-09-30 18:47:33.041461733 +0000 UTC m=+0.125652918 container attach db959d021df5ccd1d81ca8514aa0effb4a67024c1ac28f044588881a5a19cc25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:47:33 compute-0 dreamy_chandrasekhar[359191]: {
Sep 30 18:47:33 compute-0 dreamy_chandrasekhar[359191]:     "0": [
Sep 30 18:47:33 compute-0 dreamy_chandrasekhar[359191]:         {
Sep 30 18:47:33 compute-0 dreamy_chandrasekhar[359191]:             "devices": [
Sep 30 18:47:33 compute-0 dreamy_chandrasekhar[359191]:                 "/dev/loop3"
Sep 30 18:47:33 compute-0 dreamy_chandrasekhar[359191]:             ],
Sep 30 18:47:33 compute-0 dreamy_chandrasekhar[359191]:             "lv_name": "ceph_lv0",
Sep 30 18:47:33 compute-0 dreamy_chandrasekhar[359191]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:47:33 compute-0 dreamy_chandrasekhar[359191]:             "lv_size": "21470642176",
Sep 30 18:47:33 compute-0 dreamy_chandrasekhar[359191]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:47:33 compute-0 dreamy_chandrasekhar[359191]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:47:33 compute-0 dreamy_chandrasekhar[359191]:             "name": "ceph_lv0",
Sep 30 18:47:33 compute-0 dreamy_chandrasekhar[359191]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:47:33 compute-0 dreamy_chandrasekhar[359191]:             "tags": {
Sep 30 18:47:33 compute-0 dreamy_chandrasekhar[359191]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:47:33 compute-0 dreamy_chandrasekhar[359191]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:47:33 compute-0 dreamy_chandrasekhar[359191]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:47:33 compute-0 dreamy_chandrasekhar[359191]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:47:33 compute-0 dreamy_chandrasekhar[359191]:                 "ceph.cluster_name": "ceph",
Sep 30 18:47:33 compute-0 dreamy_chandrasekhar[359191]:                 "ceph.crush_device_class": "",
Sep 30 18:47:33 compute-0 dreamy_chandrasekhar[359191]:                 "ceph.encrypted": "0",
Sep 30 18:47:33 compute-0 dreamy_chandrasekhar[359191]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:47:33 compute-0 dreamy_chandrasekhar[359191]:                 "ceph.osd_id": "0",
Sep 30 18:47:33 compute-0 dreamy_chandrasekhar[359191]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:47:33 compute-0 dreamy_chandrasekhar[359191]:                 "ceph.type": "block",
Sep 30 18:47:33 compute-0 dreamy_chandrasekhar[359191]:                 "ceph.vdo": "0",
Sep 30 18:47:33 compute-0 dreamy_chandrasekhar[359191]:                 "ceph.with_tpm": "0"
Sep 30 18:47:33 compute-0 dreamy_chandrasekhar[359191]:             },
Sep 30 18:47:33 compute-0 dreamy_chandrasekhar[359191]:             "type": "block",
Sep 30 18:47:33 compute-0 dreamy_chandrasekhar[359191]:             "vg_name": "ceph_vg0"
Sep 30 18:47:33 compute-0 dreamy_chandrasekhar[359191]:         }
Sep 30 18:47:33 compute-0 dreamy_chandrasekhar[359191]:     ]
Sep 30 18:47:33 compute-0 dreamy_chandrasekhar[359191]: }
Sep 30 18:47:33 compute-0 systemd[1]: libpod-db959d021df5ccd1d81ca8514aa0effb4a67024c1ac28f044588881a5a19cc25.scope: Deactivated successfully.
Sep 30 18:47:33 compute-0 podman[359175]: 2025-09-30 18:47:33.31837714 +0000 UTC m=+0.402568315 container died db959d021df5ccd1d81ca8514aa0effb4a67024c1ac28f044588881a5a19cc25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid)
Sep 30 18:47:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-b78c2b859679a417b0d0e8c06c785d11ddfd75f9d44f2624521c9b0a863afa53-merged.mount: Deactivated successfully.
Sep 30 18:47:33 compute-0 podman[359175]: 2025-09-30 18:47:33.356157359 +0000 UTC m=+0.440348534 container remove db959d021df5ccd1d81ca8514aa0effb4a67024c1ac28f044588881a5a19cc25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chandrasekhar, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:47:33 compute-0 systemd[1]: libpod-conmon-db959d021df5ccd1d81ca8514aa0effb4a67024c1ac28f044588881a5a19cc25.scope: Deactivated successfully.
Sep 30 18:47:33 compute-0 sudo[359070]: pam_unix(sudo:session): session closed for user root
Sep 30 18:47:33 compute-0 sudo[359213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:47:33 compute-0 sudo[359213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:47:33 compute-0 sudo[359213]: pam_unix(sudo:session): session closed for user root
Sep 30 18:47:33 compute-0 sudo[359238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:47:33 compute-0 sudo[359238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:47:33 compute-0 ceph-mon[73755]: pgmap v2076: 353 pgs: 353 active+clean; 167 MiB data, 456 MiB used, 40 GiB / 40 GiB avail; 375 KiB/s rd, 4.3 MiB/s wr, 98 op/s
Sep 30 18:47:33 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2048103537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:47:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:47:33.916Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:47:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:47:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:47:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:47:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:47:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:47:34.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:34 compute-0 podman[359306]: 2025-09-30 18:47:34.037742155 +0000 UTC m=+0.074000789 container create 65db65d0aa863f975cd104582608cb23e662837030734e60ef85f71b6defe295 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_cray, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:47:34 compute-0 systemd[1]: Started libpod-conmon-65db65d0aa863f975cd104582608cb23e662837030734e60ef85f71b6defe295.scope.
Sep 30 18:47:34 compute-0 podman[359306]: 2025-09-30 18:47:34.008282571 +0000 UTC m=+0.044541265 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:47:34 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:47:34 compute-0 podman[359306]: 2025-09-30 18:47:34.125783637 +0000 UTC m=+0.162042311 container init 65db65d0aa863f975cd104582608cb23e662837030734e60ef85f71b6defe295 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_cray, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:47:34 compute-0 podman[359306]: 2025-09-30 18:47:34.1316829 +0000 UTC m=+0.167941514 container start 65db65d0aa863f975cd104582608cb23e662837030734e60ef85f71b6defe295 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_cray, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Sep 30 18:47:34 compute-0 podman[359306]: 2025-09-30 18:47:34.13593287 +0000 UTC m=+0.172191544 container attach 65db65d0aa863f975cd104582608cb23e662837030734e60ef85f71b6defe295 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_cray, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:47:34 compute-0 nifty_cray[359322]: 167 167
Sep 30 18:47:34 compute-0 systemd[1]: libpod-65db65d0aa863f975cd104582608cb23e662837030734e60ef85f71b6defe295.scope: Deactivated successfully.
Sep 30 18:47:34 compute-0 podman[359306]: 2025-09-30 18:47:34.138041514 +0000 UTC m=+0.174300188 container died 65db65d0aa863f975cd104582608cb23e662837030734e60ef85f71b6defe295 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_cray, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:47:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1c6f2c7a9af2bbce5e2269ba40ae7e5852e499363b9ca1351998ab3a234bfa4-merged.mount: Deactivated successfully.
Sep 30 18:47:34 compute-0 podman[359306]: 2025-09-30 18:47:34.182208789 +0000 UTC m=+0.218467393 container remove 65db65d0aa863f975cd104582608cb23e662837030734e60ef85f71b6defe295 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:47:34 compute-0 systemd[1]: libpod-conmon-65db65d0aa863f975cd104582608cb23e662837030734e60ef85f71b6defe295.scope: Deactivated successfully.
Sep 30 18:47:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:47:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:47:34.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:47:34 compute-0 podman[359346]: 2025-09-30 18:47:34.436208482 +0000 UTC m=+0.063731462 container create 572e2c28eb541a22e9a582cc9dca5ef5000d5dfb2cbf132f208a57e45cf33d12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Sep 30 18:47:34 compute-0 systemd[1]: Started libpod-conmon-572e2c28eb541a22e9a582cc9dca5ef5000d5dfb2cbf132f208a57e45cf33d12.scope.
Sep 30 18:47:34 compute-0 podman[359346]: 2025-09-30 18:47:34.416752438 +0000 UTC m=+0.044275438 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:47:34 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:47:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35cdd8a4bae5daa34f4eaefac3fe77e51556d74086fe0d2652fa8a4fc56158da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:47:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35cdd8a4bae5daa34f4eaefac3fe77e51556d74086fe0d2652fa8a4fc56158da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:47:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35cdd8a4bae5daa34f4eaefac3fe77e51556d74086fe0d2652fa8a4fc56158da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:47:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35cdd8a4bae5daa34f4eaefac3fe77e51556d74086fe0d2652fa8a4fc56158da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:47:34 compute-0 podman[359346]: 2025-09-30 18:47:34.542794865 +0000 UTC m=+0.170317895 container init 572e2c28eb541a22e9a582cc9dca5ef5000d5dfb2cbf132f208a57e45cf33d12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Sep 30 18:47:34 compute-0 podman[359346]: 2025-09-30 18:47:34.555133145 +0000 UTC m=+0.182656145 container start 572e2c28eb541a22e9a582cc9dca5ef5000d5dfb2cbf132f208a57e45cf33d12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:47:34 compute-0 podman[359346]: 2025-09-30 18:47:34.559429296 +0000 UTC m=+0.186952256 container attach 572e2c28eb541a22e9a582cc9dca5ef5000d5dfb2cbf132f208a57e45cf33d12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 18:47:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2077: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 375 KiB/s rd, 4.3 MiB/s wr, 99 op/s
Sep 30 18:47:35 compute-0 lvm[359436]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:47:35 compute-0 lvm[359436]: VG ceph_vg0 finished
Sep 30 18:47:35 compute-0 vigorous_tesla[359362]: {}
Sep 30 18:47:35 compute-0 podman[359346]: 2025-09-30 18:47:35.248433573 +0000 UTC m=+0.875956553 container died 572e2c28eb541a22e9a582cc9dca5ef5000d5dfb2cbf132f208a57e45cf33d12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_tesla, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 18:47:35 compute-0 systemd[1]: libpod-572e2c28eb541a22e9a582cc9dca5ef5000d5dfb2cbf132f208a57e45cf33d12.scope: Deactivated successfully.
Sep 30 18:47:35 compute-0 systemd[1]: libpod-572e2c28eb541a22e9a582cc9dca5ef5000d5dfb2cbf132f208a57e45cf33d12.scope: Consumed 1.031s CPU time.
Sep 30 18:47:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-35cdd8a4bae5daa34f4eaefac3fe77e51556d74086fe0d2652fa8a4fc56158da-merged.mount: Deactivated successfully.
Sep 30 18:47:35 compute-0 podman[359346]: 2025-09-30 18:47:35.294758574 +0000 UTC m=+0.922281574 container remove 572e2c28eb541a22e9a582cc9dca5ef5000d5dfb2cbf132f208a57e45cf33d12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Sep 30 18:47:35 compute-0 systemd[1]: libpod-conmon-572e2c28eb541a22e9a582cc9dca5ef5000d5dfb2cbf132f208a57e45cf33d12.scope: Deactivated successfully.
Sep 30 18:47:35 compute-0 sudo[359238]: pam_unix(sudo:session): session closed for user root
Sep 30 18:47:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:47:35 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:47:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:47:35 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:47:35 compute-0 sudo[359451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:47:35 compute-0 sudo[359451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:47:35 compute-0 sudo[359451]: pam_unix(sudo:session): session closed for user root
Sep 30 18:47:35 compute-0 ceph-mon[73755]: pgmap v2077: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 375 KiB/s rd, 4.3 MiB/s wr, 99 op/s
Sep 30 18:47:35 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:47:35 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:47:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:47:36.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:36 compute-0 nova_compute[265391]: 2025-09-30 18:47:36.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:47:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:47:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:47:36.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:47:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:47:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3922656506' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:47:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:47:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3922656506' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:47:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2078: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 375 KiB/s rd, 4.3 MiB/s wr, 99 op/s
Sep 30 18:47:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3922656506' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:47:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3922656506' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:47:36 compute-0 nova_compute[265391]: 2025-09-30 18:47:36.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:37.267 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:47:37 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:37.268 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:47:37 compute-0 nova_compute[265391]: 2025-09-30 18:47:37.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:47:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:47:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:47:37.388Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:47:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:47:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:47:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:47:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:47:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:47:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:47:37 compute-0 ceph-mon[73755]: pgmap v2078: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 375 KiB/s rd, 4.3 MiB/s wr, 99 op/s
Sep 30 18:47:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:47:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:47:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:47:38.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:47:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:47:38.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2079: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 383 KiB/s rd, 4.3 MiB/s wr, 110 op/s
Sep 30 18:47:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:47:38] "GET /metrics HTTP/1.1" 200 46721 "" "Prometheus/2.51.0"
Sep 30 18:47:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:47:38] "GET /metrics HTTP/1.1" 200 46721 "" "Prometheus/2.51.0"
Sep 30 18:47:38 compute-0 ceph-mon[73755]: pgmap v2079: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 383 KiB/s rd, 4.3 MiB/s wr, 110 op/s
Sep 30 18:47:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:47:38.926Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:47:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:47:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:47:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:47:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:47:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:47:40.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:47:40.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2080: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 8.6 KiB/s rd, 29 KiB/s wr, 11 op/s
Sep 30 18:47:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:47:41 compute-0 nova_compute[265391]: 2025-09-30 18:47:41.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:41 compute-0 sudo[359484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:47:41 compute-0 ceph-mon[73755]: pgmap v2080: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 8.6 KiB/s rd, 29 KiB/s wr, 11 op/s
Sep 30 18:47:41 compute-0 sudo[359484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:47:41 compute-0 sudo[359484]: pam_unix(sudo:session): session closed for user root
Sep 30 18:47:41 compute-0 podman[359508]: 2025-09-30 18:47:41.751096523 +0000 UTC m=+0.059557855 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_metadata_agent)
Sep 30 18:47:41 compute-0 podman[359510]: 2025-09-30 18:47:41.768166826 +0000 UTC m=+0.067296316 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:47:41 compute-0 nova_compute[265391]: 2025-09-30 18:47:41.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:41 compute-0 podman[359509]: 2025-09-30 18:47:41.822718549 +0000 UTC m=+0.126983272 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=ovn_controller)
Sep 30 18:47:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:47:42.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:47:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:47:42.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:47:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2081: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 7.7 KiB/s rd, 26 KiB/s wr, 10 op/s
Sep 30 18:47:43 compute-0 ceph-mon[73755]: pgmap v2081: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 7.7 KiB/s rd, 26 KiB/s wr, 10 op/s
Sep 30 18:47:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:47:43.918Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:47:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:47:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:47:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:47:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:47:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:47:44.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:47:44.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2082: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 74 op/s
Sep 30 18:47:45 compute-0 ceph-mon[73755]: pgmap v2082: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 74 op/s
Sep 30 18:47:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:47:46.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:47:46 compute-0 nova_compute[265391]: 2025-09-30 18:47:46.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:47:46.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2083: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:47:46 compute-0 nova_compute[265391]: 2025-09-30 18:47:46.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:47 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:47.269 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:47:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:47:47.388Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:47:47 compute-0 ceph-mon[73755]: pgmap v2083: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:47:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:47:48.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:47:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:47:48.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:47:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2084: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 75 op/s
Sep 30 18:47:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:47:48] "GET /metrics HTTP/1.1" 200 46721 "" "Prometheus/2.51.0"
Sep 30 18:47:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:47:48] "GET /metrics HTTP/1.1" 200 46721 "" "Prometheus/2.51.0"
Sep 30 18:47:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:47:48.927Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:47:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:47:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:47:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:47:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:47:49 compute-0 ceph-mon[73755]: pgmap v2084: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 75 op/s
Sep 30 18:47:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:47:50.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:47:50.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2085: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1023 B/s wr, 65 op/s
Sep 30 18:47:50 compute-0 ceph-mon[73755]: pgmap v2085: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1023 B/s wr, 65 op/s
Sep 30 18:47:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:47:51 compute-0 nova_compute[265391]: 2025-09-30 18:47:51.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:51 compute-0 nova_compute[265391]: 2025-09-30 18:47:51.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:47:52.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:47:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:47:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:47:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:47:52.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2086: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1023 B/s wr, 65 op/s
Sep 30 18:47:53 compute-0 ceph-mon[73755]: pgmap v2086: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1023 B/s wr, 65 op/s
Sep 30 18:47:53 compute-0 nova_compute[265391]: 2025-09-30 18:47:53.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:47:53 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Sep 30 18:47:53 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:47:53.432417) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 18:47:53 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Sep 30 18:47:53 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258073432486, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 1800, "num_deletes": 251, "total_data_size": 3295055, "memory_usage": 3349152, "flush_reason": "Manual Compaction"}
Sep 30 18:47:53 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Sep 30 18:47:53 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258073452144, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 3197779, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 50680, "largest_seqno": 52479, "table_properties": {"data_size": 3189620, "index_size": 4909, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17298, "raw_average_key_size": 20, "raw_value_size": 3173225, "raw_average_value_size": 3720, "num_data_blocks": 215, "num_entries": 853, "num_filter_entries": 853, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759257896, "oldest_key_time": 1759257896, "file_creation_time": 1759258073, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:47:53 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 19795 microseconds, and 12143 cpu microseconds.
Sep 30 18:47:53 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:47:53 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:47:53.452209) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 3197779 bytes OK
Sep 30 18:47:53 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:47:53.452228) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Sep 30 18:47:53 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:47:53.453925) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Sep 30 18:47:53 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:47:53.453938) EVENT_LOG_v1 {"time_micros": 1759258073453934, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 18:47:53 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:47:53.453953) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 18:47:53 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 3287448, prev total WAL file size 3287448, number of live WAL files 2.
Sep 30 18:47:53 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:47:53 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:47:53.454834) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Sep 30 18:47:53 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 18:47:53 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(3122KB)], [119(10211KB)]
Sep 30 18:47:53 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258073454899, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 13654768, "oldest_snapshot_seqno": -1}
Sep 30 18:47:53 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 7457 keys, 11599652 bytes, temperature: kUnknown
Sep 30 18:47:53 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258073498793, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 11599652, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11554960, "index_size": 24932, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18693, "raw_key_size": 197589, "raw_average_key_size": 26, "raw_value_size": 11426527, "raw_average_value_size": 1532, "num_data_blocks": 964, "num_entries": 7457, "num_filter_entries": 7457, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759258073, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:47:53 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:47:53 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:47:53.499010) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 11599652 bytes
Sep 30 18:47:53 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:47:53.500249) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 310.6 rd, 263.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 10.0 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(7.9) write-amplify(3.6) OK, records in: 7973, records dropped: 516 output_compression: NoCompression
Sep 30 18:47:53 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:47:53.500263) EVENT_LOG_v1 {"time_micros": 1759258073500256, "job": 72, "event": "compaction_finished", "compaction_time_micros": 43959, "compaction_time_cpu_micros": 23531, "output_level": 6, "num_output_files": 1, "total_output_size": 11599652, "num_input_records": 7973, "num_output_records": 7457, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 18:47:53 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:47:53 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258073500793, "job": 72, "event": "table_file_deletion", "file_number": 121}
Sep 30 18:47:53 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:47:53 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258073502488, "job": 72, "event": "table_file_deletion", "file_number": 119}
Sep 30 18:47:53 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:47:53.454664) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:47:53 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:47:53.502515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:47:53 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:47:53.502519) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:47:53 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:47:53.502520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:47:53 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:47:53.502522) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:47:53 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:47:53.502523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:47:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:47:53.919Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:47:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:47:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:47:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:47:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:47:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:47:54.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:54 compute-0 nova_compute[265391]: 2025-09-30 18:47:54.279 2 DEBUG nova.virt.libvirt.driver [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Check if temp file /var/lib/nova/instances/tmpfvxa598s exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10968
Sep 30 18:47:54 compute-0 nova_compute[265391]: 2025-09-30 18:47:54.284 2 DEBUG nova.compute.manager [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpfvxa598s',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='656a0137-3214-4992-a68a-cdbedf0336f6',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.12/site-packages/nova/compute/manager.py:9294
Sep 30 18:47:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:54.341 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:47:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:54.342 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:47:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:47:54.342 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:47:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:47:54.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:54 compute-0 nova_compute[265391]: 2025-09-30 18:47:54.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:47:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2087: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 125 op/s
Sep 30 18:47:55 compute-0 podman[359595]: 2025-09-30 18:47:55.54505302 +0000 UTC m=+0.063933638 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, version=9.6, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=edpm, io.openshift.tags=minimal rhel9, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.buildah.version=1.33.7)
Sep 30 18:47:55 compute-0 podman[359593]: 2025-09-30 18:47:55.553017046 +0000 UTC m=+0.074068820 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4)
Sep 30 18:47:55 compute-0 podman[359594]: 2025-09-30 18:47:55.553890839 +0000 UTC m=+0.072084949 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20250930, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Sep 30 18:47:55 compute-0 ceph-mon[73755]: pgmap v2087: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 125 op/s
Sep 30 18:47:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:47:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:47:56.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:47:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:47:56 compute-0 nova_compute[265391]: 2025-09-30 18:47:56.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:47:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:47:56.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:47:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2088: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 289 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Sep 30 18:47:56 compute-0 nova_compute[265391]: 2025-09-30 18:47:56.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:47:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:47:57.389Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:47:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:47:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3877174777' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:47:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:47:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3877174777' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:47:57 compute-0 ceph-mon[73755]: pgmap v2088: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 289 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Sep 30 18:47:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3877174777' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:47:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3877174777' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:47:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:47:58.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:58 compute-0 nova_compute[265391]: 2025-09-30 18:47:58.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:47:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:47:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:47:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:47:58.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:47:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2089: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 289 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Sep 30 18:47:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2680587084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:47:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:47:58] "GET /metrics HTTP/1.1" 200 46725 "" "Prometheus/2.51.0"
Sep 30 18:47:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:47:58] "GET /metrics HTTP/1.1" 200 46725 "" "Prometheus/2.51.0"
Sep 30 18:47:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:47:58.928Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:47:58 compute-0 nova_compute[265391]: 2025-09-30 18:47:58.942 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:47:58 compute-0 nova_compute[265391]: 2025-09-30 18:47:58.942 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:47:58 compute-0 nova_compute[265391]: 2025-09-30 18:47:58.943 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:47:58 compute-0 nova_compute[265391]: 2025-09-30 18:47:58.943 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:47:58 compute-0 nova_compute[265391]: 2025-09-30 18:47:58.943 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:47:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:47:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:47:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:47:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:47:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:47:59 compute-0 nova_compute[265391]: 2025-09-30 18:47:59.314 2 DEBUG nova.compute.manager [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Preparing to wait for external event network-vif-plugged-d728eab4-88db-4811-b199-c75155b08c82 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:47:59 compute-0 nova_compute[265391]: 2025-09-30 18:47:59.315 2 DEBUG oslo_concurrency.lockutils [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "656a0137-3214-4992-a68a-cdbedf0336f6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:47:59 compute-0 nova_compute[265391]: 2025-09-30 18:47:59.316 2 DEBUG oslo_concurrency.lockutils [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "656a0137-3214-4992-a68a-cdbedf0336f6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:47:59 compute-0 nova_compute[265391]: 2025-09-30 18:47:59.316 2 DEBUG oslo_concurrency.lockutils [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "656a0137-3214-4992-a68a-cdbedf0336f6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:47:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:47:59 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2375457157' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:47:59 compute-0 nova_compute[265391]: 2025-09-30 18:47:59.379 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:47:59 compute-0 ceph-mon[73755]: pgmap v2089: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 289 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Sep 30 18:47:59 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2375457157' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:47:59 compute-0 podman[276673]: time="2025-09-30T18:47:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:47:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:47:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:47:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:47:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10774 "" "Go-http-client/1.1"
Sep 30 18:48:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:48:00.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:48:00.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:00 compute-0 nova_compute[265391]: 2025-09-30 18:48:00.440 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:48:00 compute-0 nova_compute[265391]: 2025-09-30 18:48:00.440 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:48:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2090: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 286 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Sep 30 18:48:00 compute-0 nova_compute[265391]: 2025-09-30 18:48:00.645 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:48:00 compute-0 nova_compute[265391]: 2025-09-30 18:48:00.647 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:48:00 compute-0 nova_compute[265391]: 2025-09-30 18:48:00.671 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.024s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:48:00 compute-0 nova_compute[265391]: 2025-09-30 18:48:00.672 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4117MB free_disk=39.90117645263672GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:48:00 compute-0 nova_compute[265391]: 2025-09-30 18:48:00.672 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:48:00 compute-0 nova_compute[265391]: 2025-09-30 18:48:00.673 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:48:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:48:01 compute-0 nova_compute[265391]: 2025-09-30 18:48:01.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:48:01 compute-0 openstack_network_exporter[279566]: ERROR   18:48:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:48:01 compute-0 openstack_network_exporter[279566]: ERROR   18:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:48:01 compute-0 openstack_network_exporter[279566]: ERROR   18:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:48:01 compute-0 openstack_network_exporter[279566]: ERROR   18:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:48:01 compute-0 openstack_network_exporter[279566]: ERROR   18:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:48:01 compute-0 nova_compute[265391]: 2025-09-30 18:48:01.697 2 INFO nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Updating resource usage from migration 0f90f65f-b699-4d58-94bd-ef4a6ad3b688
Sep 30 18:48:01 compute-0 nova_compute[265391]: 2025-09-30 18:48:01.736 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Migration 0f90f65f-b699-4d58-94bd-ef4a6ad3b688 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Sep 30 18:48:01 compute-0 nova_compute[265391]: 2025-09-30 18:48:01.737 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:48:01 compute-0 nova_compute[265391]: 2025-09-30 18:48:01.737 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=39GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:48:00 up  1:51,  0 user,  load average: 0.61, 0.81, 0.85\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_migrating': '1', 'num_os_type_None': '1', 'num_proj_3359c464e0344756a39ce5c7088b9eba': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:48:01 compute-0 ceph-mon[73755]: pgmap v2090: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 286 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Sep 30 18:48:01 compute-0 nova_compute[265391]: 2025-09-30 18:48:01.780 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:48:01 compute-0 nova_compute[265391]: 2025-09-30 18:48:01.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:48:01 compute-0 sudo[359681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:48:01 compute-0 sudo[359681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:48:01 compute-0 sudo[359681]: pam_unix(sudo:session): session closed for user root
Sep 30 18:48:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:48:02.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:48:02 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3397991696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:48:02 compute-0 nova_compute[265391]: 2025-09-30 18:48:02.239 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:48:02 compute-0 nova_compute[265391]: 2025-09-30 18:48:02.245 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:48:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:48:02.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2091: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 286 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Sep 30 18:48:02 compute-0 nova_compute[265391]: 2025-09-30 18:48:02.753 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:48:02 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3397991696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:48:02 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2449857165' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:48:03 compute-0 nova_compute[265391]: 2025-09-30 18:48:03.266 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:48:03 compute-0 nova_compute[265391]: 2025-09-30 18:48:03.266 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.594s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:48:03 compute-0 ceph-mon[73755]: pgmap v2091: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 286 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Sep 30 18:48:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:48:03.920Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:48:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:48:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:48:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:48:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:48:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:48:04.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:04 compute-0 nova_compute[265391]: 2025-09-30 18:48:04.200 2 DEBUG nova.compute.manager [req-5c24bbf5-f475-4c37-b3f1-84aa54acc0d3 req-8e6780e4-240c-44b7-8eda-71e4d6beb6a9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Received event network-vif-unplugged-d728eab4-88db-4811-b199-c75155b08c82 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:48:04 compute-0 nova_compute[265391]: 2025-09-30 18:48:04.200 2 DEBUG oslo_concurrency.lockutils [req-5c24bbf5-f475-4c37-b3f1-84aa54acc0d3 req-8e6780e4-240c-44b7-8eda-71e4d6beb6a9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "656a0137-3214-4992-a68a-cdbedf0336f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:48:04 compute-0 nova_compute[265391]: 2025-09-30 18:48:04.201 2 DEBUG oslo_concurrency.lockutils [req-5c24bbf5-f475-4c37-b3f1-84aa54acc0d3 req-8e6780e4-240c-44b7-8eda-71e4d6beb6a9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "656a0137-3214-4992-a68a-cdbedf0336f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:48:04 compute-0 nova_compute[265391]: 2025-09-30 18:48:04.201 2 DEBUG oslo_concurrency.lockutils [req-5c24bbf5-f475-4c37-b3f1-84aa54acc0d3 req-8e6780e4-240c-44b7-8eda-71e4d6beb6a9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "656a0137-3214-4992-a68a-cdbedf0336f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:48:04 compute-0 nova_compute[265391]: 2025-09-30 18:48:04.201 2 DEBUG nova.compute.manager [req-5c24bbf5-f475-4c37-b3f1-84aa54acc0d3 req-8e6780e4-240c-44b7-8eda-71e4d6beb6a9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] No event matching network-vif-unplugged-d728eab4-88db-4811-b199-c75155b08c82 in dict_keys([('network-vif-plugged', 'd728eab4-88db-4811-b199-c75155b08c82')]) pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:349
Sep 30 18:48:04 compute-0 nova_compute[265391]: 2025-09-30 18:48:04.202 2 DEBUG nova.compute.manager [req-5c24bbf5-f475-4c37-b3f1-84aa54acc0d3 req-8e6780e4-240c-44b7-8eda-71e4d6beb6a9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Received event network-vif-unplugged-d728eab4-88db-4811-b199-c75155b08c82 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:48:04 compute-0 sshd-session[359731]: Connection closed by 114.66.3.37 port 56262
Sep 30 18:48:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:48:04.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2092: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 286 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Sep 30 18:48:04 compute-0 ceph-mon[73755]: pgmap v2092: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 286 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Sep 30 18:48:05 compute-0 nova_compute[265391]: 2025-09-30 18:48:05.267 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:48:05 compute-0 nova_compute[265391]: 2025-09-30 18:48:05.267 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:48:05 compute-0 nova_compute[265391]: 2025-09-30 18:48:05.268 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:48:05 compute-0 nova_compute[265391]: 2025-09-30 18:48:05.268 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:48:05 compute-0 nova_compute[265391]: 2025-09-30 18:48:05.268 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:48:05 compute-0 nova_compute[265391]: 2025-09-30 18:48:05.337 2 INFO nova.compute.manager [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Took 6.02 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Sep 30 18:48:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:48:06.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:06 compute-0 nova_compute[265391]: 2025-09-30 18:48:06.259 2 DEBUG nova.compute.manager [req-0687ffa9-3a2e-473d-8ee8-a0e90264821a req-e3951f29-f8f6-49d5-8422-5c79334d6426 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Received event network-vif-plugged-d728eab4-88db-4811-b199-c75155b08c82 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:48:06 compute-0 nova_compute[265391]: 2025-09-30 18:48:06.259 2 DEBUG oslo_concurrency.lockutils [req-0687ffa9-3a2e-473d-8ee8-a0e90264821a req-e3951f29-f8f6-49d5-8422-5c79334d6426 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "656a0137-3214-4992-a68a-cdbedf0336f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:48:06 compute-0 nova_compute[265391]: 2025-09-30 18:48:06.260 2 DEBUG oslo_concurrency.lockutils [req-0687ffa9-3a2e-473d-8ee8-a0e90264821a req-e3951f29-f8f6-49d5-8422-5c79334d6426 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "656a0137-3214-4992-a68a-cdbedf0336f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:48:06 compute-0 nova_compute[265391]: 2025-09-30 18:48:06.260 2 DEBUG oslo_concurrency.lockutils [req-0687ffa9-3a2e-473d-8ee8-a0e90264821a req-e3951f29-f8f6-49d5-8422-5c79334d6426 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "656a0137-3214-4992-a68a-cdbedf0336f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:48:06 compute-0 nova_compute[265391]: 2025-09-30 18:48:06.260 2 DEBUG nova.compute.manager [req-0687ffa9-3a2e-473d-8ee8-a0e90264821a req-e3951f29-f8f6-49d5-8422-5c79334d6426 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Processing event network-vif-plugged-d728eab4-88db-4811-b199-c75155b08c82 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:48:06 compute-0 nova_compute[265391]: 2025-09-30 18:48:06.260 2 DEBUG nova.compute.manager [req-0687ffa9-3a2e-473d-8ee8-a0e90264821a req-e3951f29-f8f6-49d5-8422-5c79334d6426 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Received event network-changed-d728eab4-88db-4811-b199-c75155b08c82 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:48:06 compute-0 nova_compute[265391]: 2025-09-30 18:48:06.260 2 DEBUG nova.compute.manager [req-0687ffa9-3a2e-473d-8ee8-a0e90264821a req-e3951f29-f8f6-49d5-8422-5c79334d6426 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Refreshing instance network info cache due to event network-changed-d728eab4-88db-4811-b199-c75155b08c82. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:48:06 compute-0 nova_compute[265391]: 2025-09-30 18:48:06.261 2 DEBUG oslo_concurrency.lockutils [req-0687ffa9-3a2e-473d-8ee8-a0e90264821a req-e3951f29-f8f6-49d5-8422-5c79334d6426 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-656a0137-3214-4992-a68a-cdbedf0336f6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:48:06 compute-0 nova_compute[265391]: 2025-09-30 18:48:06.261 2 DEBUG oslo_concurrency.lockutils [req-0687ffa9-3a2e-473d-8ee8-a0e90264821a req-e3951f29-f8f6-49d5-8422-5c79334d6426 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-656a0137-3214-4992-a68a-cdbedf0336f6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:48:06 compute-0 nova_compute[265391]: 2025-09-30 18:48:06.261 2 DEBUG nova.network.neutron [req-0687ffa9-3a2e-473d-8ee8-a0e90264821a req-e3951f29-f8f6-49d5-8422-5c79334d6426 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Refreshing network info cache for port d728eab4-88db-4811-b199-c75155b08c82 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:48:06 compute-0 nova_compute[265391]: 2025-09-30 18:48:06.263 2 DEBUG nova.compute.manager [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:48:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:48:06 compute-0 nova_compute[265391]: 2025-09-30 18:48:06.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:48:06 compute-0 nova_compute[265391]: 2025-09-30 18:48:06.424 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:48:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:48:06.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2093: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 17 KiB/s wr, 1 op/s
Sep 30 18:48:06 compute-0 nova_compute[265391]: 2025-09-30 18:48:06.769 2 WARNING neutronclient.v2_0.client [req-0687ffa9-3a2e-473d-8ee8-a0e90264821a req-e3951f29-f8f6-49d5-8422-5c79334d6426 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:48:06 compute-0 nova_compute[265391]: 2025-09-30 18:48:06.774 2 DEBUG nova.compute.manager [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpfvxa598s',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='656a0137-3214-4992-a68a-cdbedf0336f6',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(0f90f65f-b699-4d58-94bd-ef4a6ad3b688),old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9659
Sep 30 18:48:06 compute-0 nova_compute[265391]: 2025-09-30 18:48:06.779 2 DEBUG nova.objects.instance [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lazy-loading 'migration_context' on Instance uuid 656a0137-3214-4992-a68a-cdbedf0336f6 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:48:06 compute-0 nova_compute[265391]: 2025-09-30 18:48:06.780 2 DEBUG nova.virt.libvirt.driver [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Starting monitoring of live migration _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11543
Sep 30 18:48:06 compute-0 nova_compute[265391]: 2025-09-30 18:48:06.782 2 DEBUG nova.virt.libvirt.driver [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Sep 30 18:48:06 compute-0 nova_compute[265391]: 2025-09-30 18:48:06.782 2 DEBUG nova.virt.libvirt.driver [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Sep 30 18:48:06 compute-0 nova_compute[265391]: 2025-09-30 18:48:06.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:48:06 compute-0 ovn_controller[156242]: 2025-09-30T18:48:06Z|00313|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Sep 30 18:48:07 compute-0 nova_compute[265391]: 2025-09-30 18:48:07.285 2 DEBUG nova.virt.libvirt.driver [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Sep 30 18:48:07 compute-0 nova_compute[265391]: 2025-09-30 18:48:07.285 2 DEBUG nova.virt.libvirt.driver [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Sep 30 18:48:07 compute-0 nova_compute[265391]: 2025-09-30 18:48:07.296 2 DEBUG nova.virt.libvirt.vif [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:46:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteZoneMigrationStrategy-server-752609519',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutezonemigrationstrategy-server-752609519',id=36,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:47:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3359c464e0344756a39ce5c7088b9eba',ramdisk_id='',reservation_id='r-9hll55u7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteZoneMigrationStrategy-613400940',owner_user_name='tempest-TestExecuteZoneMigrationStrategy-613400940-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:47:16Z,user_data=None,user_id='f560266d133f4f1ba4a908e3cdcfa59d',uuid=656a0137-3214-4992-a68a-cdbedf0336f6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d728eab4-88db-4811-b199-c75155b08c82", "address": "fa:16:3e:b3:57:7e", "network": {"id": "f4658d55-a8f9-48f1-846d-61df3d830821", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2093820932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67cbb3b670e445a4b97abcc92749d126", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapd728eab4-88", "ovs_interfaceid": "d728eab4-88db-4811-b199-c75155b08c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
Sep 30 18:48:07 compute-0 nova_compute[265391]:  /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:48:07 compute-0 nova_compute[265391]: 2025-09-30 18:48:07.297 2 DEBUG nova.network.os_vif_util [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "d728eab4-88db-4811-b199-c75155b08c82", "address": "fa:16:3e:b3:57:7e", "network": {"id": "f4658d55-a8f9-48f1-846d-61df3d830821", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2093820932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67cbb3b670e445a4b97abcc92749d126", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapd728eab4-88", "ovs_interfaceid": "d728eab4-88db-4811-b199-c75155b08c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:48:07 compute-0 nova_compute[265391]: 2025-09-30 18:48:07.298 2 DEBUG nova.network.os_vif_util [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:57:7e,bridge_name='br-int',has_traffic_filtering=True,id=d728eab4-88db-4811-b199-c75155b08c82,network=Network(f4658d55-a8f9-48f1-846d-61df3d830821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd728eab4-88') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:48:07 compute-0 nova_compute[265391]: 2025-09-30 18:48:07.298 2 DEBUG nova.virt.libvirt.migration [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Updating guest XML with vif config: <interface type="ethernet">
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <mac address="fa:16:3e:b3:57:7e"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <model type="virtio"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <mtu size="1442"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <target dev="tapd728eab4-88"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]: </interface>
Sep 30 18:48:07 compute-0 nova_compute[265391]:  _update_vif_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:534
Sep 30 18:48:07 compute-0 nova_compute[265391]: 2025-09-30 18:48:07.299 2 DEBUG nova.virt.libvirt.migration [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _remove_cpu_shared_set_xml input xml=<domain type="kvm">
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <name>instance-00000024</name>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <uuid>656a0137-3214-4992-a68a-cdbedf0336f6</uuid>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteZoneMigrationStrategy-server-752609519</nova:name>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:47:09</nova:creationTime>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:48:07 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:48:07 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:user uuid="f560266d133f4f1ba4a908e3cdcfa59d">tempest-TestExecuteZoneMigrationStrategy-613400940-project-admin</nova:user>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:project uuid="3359c464e0344756a39ce5c7088b9eba">tempest-TestExecuteZoneMigrationStrategy-613400940</nova:project>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:port uuid="d728eab4-88db-4811-b199-c75155b08c82">
Sep 30 18:48:07 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <system>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <entry name="serial">656a0137-3214-4992-a68a-cdbedf0336f6</entry>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <entry name="uuid">656a0137-3214-4992-a68a-cdbedf0336f6</entry>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </system>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <os>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   </os>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <features>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   </features>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/656a0137-3214-4992-a68a-cdbedf0336f6_disk">
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       </source>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/656a0137-3214-4992-a68a-cdbedf0336f6_disk.config">
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       </source>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <interface type="ethernet"><mac address="fa:16:3e:b3:57:7e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapd728eab4-88"/><address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </interface><serial type="pty">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/656a0137-3214-4992-a68a-cdbedf0336f6/console.log" append="off"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       </target>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/656a0137-3214-4992-a68a-cdbedf0336f6/console.log" append="off"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </console>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </input>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <video>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </video>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]: </domain>
Sep 30 18:48:07 compute-0 nova_compute[265391]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:241
Sep 30 18:48:07 compute-0 nova_compute[265391]: 2025-09-30 18:48:07.301 2 DEBUG nova.virt.libvirt.migration [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _remove_cpu_shared_set_xml output xml=<domain type="kvm">
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <name>instance-00000024</name>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <uuid>656a0137-3214-4992-a68a-cdbedf0336f6</uuid>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteZoneMigrationStrategy-server-752609519</nova:name>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:47:09</nova:creationTime>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:48:07 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:48:07 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:user uuid="f560266d133f4f1ba4a908e3cdcfa59d">tempest-TestExecuteZoneMigrationStrategy-613400940-project-admin</nova:user>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:project uuid="3359c464e0344756a39ce5c7088b9eba">tempest-TestExecuteZoneMigrationStrategy-613400940</nova:project>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:port uuid="d728eab4-88db-4811-b199-c75155b08c82">
Sep 30 18:48:07 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <system>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <entry name="serial">656a0137-3214-4992-a68a-cdbedf0336f6</entry>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <entry name="uuid">656a0137-3214-4992-a68a-cdbedf0336f6</entry>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </system>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <os>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   </os>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <features>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   </features>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/656a0137-3214-4992-a68a-cdbedf0336f6_disk">
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       </source>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/656a0137-3214-4992-a68a-cdbedf0336f6_disk.config">
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       </source>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:b3:57:7e"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target dev="tapd728eab4-88"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/656a0137-3214-4992-a68a-cdbedf0336f6/console.log" append="off"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       </target>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/656a0137-3214-4992-a68a-cdbedf0336f6/console.log" append="off"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </console>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </input>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <video>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </video>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]: </domain>
Sep 30 18:48:07 compute-0 nova_compute[265391]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:250
Sep 30 18:48:07 compute-0 nova_compute[265391]: 2025-09-30 18:48:07.302 2 DEBUG nova.virt.libvirt.migration [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _update_pci_xml output xml=<domain type="kvm">
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <name>instance-00000024</name>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <uuid>656a0137-3214-4992-a68a-cdbedf0336f6</uuid>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteZoneMigrationStrategy-server-752609519</nova:name>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:47:09</nova:creationTime>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:48:07 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:48:07 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:user uuid="f560266d133f4f1ba4a908e3cdcfa59d">tempest-TestExecuteZoneMigrationStrategy-613400940-project-admin</nova:user>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:project uuid="3359c464e0344756a39ce5c7088b9eba">tempest-TestExecuteZoneMigrationStrategy-613400940</nova:project>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <nova:port uuid="d728eab4-88db-4811-b199-c75155b08c82">
Sep 30 18:48:07 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <system>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <entry name="serial">656a0137-3214-4992-a68a-cdbedf0336f6</entry>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <entry name="uuid">656a0137-3214-4992-a68a-cdbedf0336f6</entry>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </system>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <os>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   </os>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <features>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   </features>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/656a0137-3214-4992-a68a-cdbedf0336f6_disk">
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       </source>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/656a0137-3214-4992-a68a-cdbedf0336f6_disk.config">
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       </source>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:b3:57:7e"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target dev="tapd728eab4-88"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/656a0137-3214-4992-a68a-cdbedf0336f6/console.log" append="off"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:48:07 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       </target>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/656a0137-3214-4992-a68a-cdbedf0336f6/console.log" append="off"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </console>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </input>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <video>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </video>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:48:07 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:48:07 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:48:07 compute-0 nova_compute[265391]: </domain>
Sep 30 18:48:07 compute-0 nova_compute[265391]:  _update_pci_dev_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:166
Sep 30 18:48:07 compute-0 nova_compute[265391]: 2025-09-30 18:48:07.303 2 DEBUG nova.virt.libvirt.driver [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] About to invoke the migrate API _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11175
Sep 30 18:48:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:48:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:48:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:48:07.390Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:48:07 compute-0 nova_compute[265391]: 2025-09-30 18:48:07.396 2 WARNING neutronclient.v2_0.client [req-0687ffa9-3a2e-473d-8ee8-a0e90264821a req-e3951f29-f8f6-49d5-8422-5c79334d6426 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:48:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:48:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:48:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:48:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:48:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:48:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:48:07 compute-0 nova_compute[265391]: 2025-09-30 18:48:07.561 2 DEBUG nova.network.neutron [req-0687ffa9-3a2e-473d-8ee8-a0e90264821a req-e3951f29-f8f6-49d5-8422-5c79334d6426 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Updated VIF entry in instance network info cache for port d728eab4-88db-4811-b199-c75155b08c82. _build_network_info_model /usr/lib/python3.12/site-packages/nova/network/neutron.py:3542
Sep 30 18:48:07 compute-0 nova_compute[265391]: 2025-09-30 18:48:07.561 2 DEBUG nova.network.neutron [req-0687ffa9-3a2e-473d-8ee8-a0e90264821a req-e3951f29-f8f6-49d5-8422-5c79334d6426 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Updating instance_info_cache with network_info: [{"id": "d728eab4-88db-4811-b199-c75155b08c82", "address": "fa:16:3e:b3:57:7e", "network": {"id": "f4658d55-a8f9-48f1-846d-61df3d830821", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2093820932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67cbb3b670e445a4b97abcc92749d126", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd728eab4-88", "ovs_interfaceid": "d728eab4-88db-4811-b199-c75155b08c82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:48:07 compute-0 ceph-mon[73755]: pgmap v2093: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 17 KiB/s wr, 1 op/s
Sep 30 18:48:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:48:07 compute-0 nova_compute[265391]: 2025-09-30 18:48:07.788 2 DEBUG nova.virt.libvirt.migration [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Current None elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Sep 30 18:48:07 compute-0 nova_compute[265391]: 2025-09-30 18:48:07.789 2 INFO nova.virt.libvirt.migration [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Increasing downtime to 50 ms after 0 sec elapsed time
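The step list in the update_downtime lines above is an increasing downtime schedule: libvirt is initially allowed 50 ms of guest pause time, and the cap grows by 45 ms roughly every 150 s of elapsed migration time until it reaches 500 ms. A minimal Python sketch that reproduces the logged schedule (the 500 ms maximum, 10 steps and 150 s per-step delay are assumptions read off the logged values, not taken from this node's nova configuration):

    def downtime_steps(max_downtime_ms=500, steps=10, step_delay_s=150):
        """Yield (elapsed_seconds, allowed_downtime_ms) pairs matching the logged schedule."""
        base = max_downtime_ms / steps                 # 50 ms at step 0
        offset = (max_downtime_ms - base) / steps      # +45 ms per later step
        for i in range(steps + 1):
            yield (int(step_delay_s * i), int(base + offset * i))

    # list(downtime_steps()) -> [(0, 50), (150, 95), (300, 140), ..., (1500, 500)]

As the later update_downtime entries show, the driver compares elapsed migration time against these thresholds and only raises the allowed downtime once the next threshold has been passed.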
Sep 30 18:48:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.002000052s ======
Sep 30 18:48:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:48:08.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Sep 30 18:48:08 compute-0 nova_compute[265391]: 2025-09-30 18:48:08.069 2 DEBUG oslo_concurrency.lockutils [req-0687ffa9-3a2e-473d-8ee8-a0e90264821a req-e3951f29-f8f6-49d5-8422-5c79334d6426 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-656a0137-3214-4992-a68a-cdbedf0336f6" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0022767699074526275 of space, bias 1.0, pg target 0.4553539814905255 quantized to 32 (current 32)
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
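Each pg_autoscaler line above applies the same per-pool arithmetic: the pool's share of raw capacity is multiplied by its bias and by a cluster-wide PG budget (200, judging by the logged numbers), and the result is then quantized to a power of two, subject to per-pool minimums, before deciding whether pg_num should change. A rough sketch of that calculation using values copied from the 'vms' and 'cephfs.cephfs.meta' lines (the budget of 200 and the quantization behaviour are inferred from the log, not read from the cluster configuration):

    def pg_target(capacity_ratio, bias, pg_budget=200):
        # Raw, un-quantized PG target as printed by the autoscaler.
        return capacity_ratio * bias * pg_budget

    print(pg_target(0.0022767699074526275, 1.0))   # ~0.4554   -> quantized to 32 in the log
    print(pg_target(7.630884938464543e-07, 4.0))   # ~0.00061  -> quantized to 16 in the log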
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:48:08
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'backups', 'volumes', '.nfs', '.rgw.root', 'vms', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data']
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:48:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:48:08.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2094: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 5.0 KiB/s rd, 18 KiB/s wr, 7 op/s
Sep 30 18:48:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:48:08] "GET /metrics HTTP/1.1" 200 46722 "" "Prometheus/2.51.0"
Sep 30 18:48:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:48:08] "GET /metrics HTTP/1.1" 200 46722 "" "Prometheus/2.51.0"
Sep 30 18:48:08 compute-0 nova_compute[265391]: 2025-09-30 18:48:08.810 2 INFO nova.virt.libvirt.driver [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Migration running for 1 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Sep 30 18:48:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:48:08.929Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:48:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:48:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:48:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:48:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:48:09 compute-0 nova_compute[265391]: 2025-09-30 18:48:09.313 2 DEBUG nova.virt.libvirt.migration [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Sep 30 18:48:09 compute-0 nova_compute[265391]: 2025-09-30 18:48:09.313 2 DEBUG nova.virt.libvirt.migration [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Downtime does not need to change update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:671
Sep 30 18:48:09 compute-0 ceph-mon[73755]: pgmap v2094: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 5.0 KiB/s rd, 18 KiB/s wr, 7 op/s
Sep 30 18:48:09 compute-0 nova_compute[265391]: 2025-09-30 18:48:09.819 2 DEBUG nova.virt.libvirt.migration [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Current 50 elapsed 3 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Sep 30 18:48:09 compute-0 nova_compute[265391]: 2025-09-30 18:48:09.820 2 DEBUG nova.virt.libvirt.migration [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Downtime does not need to change update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:671
Sep 30 18:48:09 compute-0 kernel: tapd728eab4-88 (unregistering): left promiscuous mode
Sep 30 18:48:09 compute-0 NetworkManager[45059]: <info>  [1759258089.9923] device (tapd728eab4-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 18:48:10 compute-0 ovn_controller[156242]: 2025-09-30T18:48:10Z|00314|binding|INFO|Releasing lport d728eab4-88db-4811-b199-c75155b08c82 from this chassis (sb_readonly=0)
Sep 30 18:48:10 compute-0 ovn_controller[156242]: 2025-09-30T18:48:10Z|00315|binding|INFO|Setting lport d728eab4-88db-4811-b199-c75155b08c82 down in Southbound
Sep 30 18:48:10 compute-0 ovn_controller[156242]: 2025-09-30T18:48:10Z|00316|binding|INFO|Removing iface tapd728eab4-88 ovn-installed in OVS
Sep 30 18:48:10 compute-0 nova_compute[265391]: 2025-09-30 18:48:10.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:48:10 compute-0 nova_compute[265391]: 2025-09-30 18:48:10.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:48:10 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:48:10.011 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:57:7e 10.100.0.14'], port_security=['fa:16:3e:b3:57:7e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '81ab3fff-d6d4-4262-9f24-1b212876e52c'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '656a0137-3214-4992-a68a-cdbedf0336f6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f4658d55-a8f9-48f1-846d-61df3d830821', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3359c464e0344756a39ce5c7088b9eba', 'neutron:revision_number': '10', 'neutron:security_group_ids': '3a57c776-d79c-4096-859e-411dcf78cfa5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0884332-fe68-47c8-9c8c-5c6a7c53f7f5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=d728eab4-88db-4811-b199-c75155b08c82) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:48:10 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:48:10.012 166158 INFO neutron.agent.ovn.metadata.agent [-] Port d728eab4-88db-4811-b199-c75155b08c82 in datapath f4658d55-a8f9-48f1-846d-61df3d830821 unbound from our chassis
Sep 30 18:48:10 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:48:10.013 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f4658d55-a8f9-48f1-846d-61df3d830821, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:48:10 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:48:10.015 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[9e0da306-13b5-4e42-ad03-0a8b3498d32c]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:48:10 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:48:10.015 166158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821 namespace which is not needed anymore
Sep 30 18:48:10 compute-0 nova_compute[265391]: 2025-09-30 18:48:10.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:48:10 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000024.scope: Deactivated successfully.
Sep 30 18:48:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:48:10.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:10 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000024.scope: Consumed 14.309s CPU time.
Sep 30 18:48:10 compute-0 systemd-machined[219917]: Machine qemu-28-instance-00000024 terminated.
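The \x2d sequences in the scope name above are systemd unit-name escaping of '-' characters rather than corruption; decoded, the unit is machine-qemu-28-instance-00000024.scope, which matches the machine name systemd-machined reports as terminated. A one-line check of the decoding (plain Python string replacement, not a tool present on this node):

    name = r"machine-qemu\x2d28\x2dinstance\x2d00000024.scope"
    print(name.replace(r"\x2d", "-"))  # machine-qemu-28-instance-00000024.scope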
Sep 30 18:48:10 compute-0 neutron-haproxy-ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821[358621]: [NOTICE]   (358640) : haproxy version is 3.0.5-8e879a5
Sep 30 18:48:10 compute-0 neutron-haproxy-ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821[358621]: [NOTICE]   (358640) : path to executable is /usr/sbin/haproxy
Sep 30 18:48:10 compute-0 neutron-haproxy-ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821[358621]: [WARNING]  (358640) : Exiting Master process...
Sep 30 18:48:10 compute-0 neutron-haproxy-ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821[358621]: [ALERT]    (358640) : Current worker (358652) exited with code 143 (Terminated)
Sep 30 18:48:10 compute-0 neutron-haproxy-ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821[358621]: [WARNING]  (358640) : All workers exited. Exiting... (0)
Sep 30 18:48:10 compute-0 podman[359766]: 2025-09-30 18:48:10.134851702 +0000 UTC m=+0.026609540 container kill d43328785802a36d5ca5459b881e9b193e62f9b26d2699313004c4d148b8a6ad (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Sep 30 18:48:10 compute-0 systemd[1]: libpod-d43328785802a36d5ca5459b881e9b193e62f9b26d2699313004c4d148b8a6ad.scope: Deactivated successfully.
Sep 30 18:48:10 compute-0 virtqemud[265263]: Unable to get XATTR trusted.libvirt.security.ref_selinux on 656a0137-3214-4992-a68a-cdbedf0336f6_disk: No such file or directory
Sep 30 18:48:10 compute-0 virtqemud[265263]: Unable to get XATTR trusted.libvirt.security.ref_dac on 656a0137-3214-4992-a68a-cdbedf0336f6_disk: No such file or directory
Sep 30 18:48:10 compute-0 nova_compute[265391]: 2025-09-30 18:48:10.178 2 DEBUG nova.virt.libvirt.driver [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Migrate API has completed _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11182
Sep 30 18:48:10 compute-0 nova_compute[265391]: 2025-09-30 18:48:10.179 2 DEBUG nova.virt.libvirt.driver [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Migration operation thread has finished _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11230
Sep 30 18:48:10 compute-0 nova_compute[265391]: 2025-09-30 18:48:10.179 2 DEBUG nova.virt.libvirt.driver [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Migration operation thread notification thread_finished /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11533
Sep 30 18:48:10 compute-0 podman[359781]: 2025-09-30 18:48:10.179862769 +0000 UTC m=+0.029643889 container died d43328785802a36d5ca5459b881e9b193e62f9b26d2699313004c4d148b8a6ad (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:48:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-dddba190a93339fd9c7fb102870ba5499b724d50782140107a0ddd1626fd7bda-merged.mount: Deactivated successfully.
Sep 30 18:48:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d43328785802a36d5ca5459b881e9b193e62f9b26d2699313004c4d148b8a6ad-userdata-shm.mount: Deactivated successfully.
Sep 30 18:48:10 compute-0 podman[359781]: 2025-09-30 18:48:10.227052962 +0000 UTC m=+0.076834072 container cleanup d43328785802a36d5ca5459b881e9b193e62f9b26d2699313004c4d148b8a6ad (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4)
Sep 30 18:48:10 compute-0 systemd[1]: libpod-conmon-d43328785802a36d5ca5459b881e9b193e62f9b26d2699313004c4d148b8a6ad.scope: Deactivated successfully.
Sep 30 18:48:10 compute-0 podman[359784]: 2025-09-30 18:48:10.252018409 +0000 UTC m=+0.088488934 container remove d43328785802a36d5ca5459b881e9b193e62f9b26d2699313004c4d148b8a6ad (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Sep 30 18:48:10 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:48:10.258 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[1d1e3e3b-f27d-4f1c-b36d-b29f7adda0d6]: (4, ("Tue Sep 30 06:48:10 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821 (d43328785802a36d5ca5459b881e9b193e62f9b26d2699313004c4d148b8a6ad)\nd43328785802a36d5ca5459b881e9b193e62f9b26d2699313004c4d148b8a6ad\nTue Sep 30 06:48:10 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821 (d43328785802a36d5ca5459b881e9b193e62f9b26d2699313004c4d148b8a6ad)\nd43328785802a36d5ca5459b881e9b193e62f9b26d2699313004c4d148b8a6ad\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:48:10 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:48:10.259 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[70a9158a-7934-4220-8537-0c49016865f3]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:48:10 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:48:10.260 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f4658d55-a8f9-48f1-846d-61df3d830821.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f4658d55-a8f9-48f1-846d-61df3d830821.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:48:10 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:48:10.260 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[e1e4659b-f087-470d-9562-ef98260c9a3a]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:48:10 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:48:10.261 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4658d55-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:48:10 compute-0 kernel: tapf4658d55-a0: left promiscuous mode
Sep 30 18:48:10 compute-0 nova_compute[265391]: 2025-09-30 18:48:10.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:48:10 compute-0 nova_compute[265391]: 2025-09-30 18:48:10.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:48:10 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:48:10.286 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[77ce8492-1db4-4f7e-9101-c5b7523a9b80]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:48:10 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:48:10.315 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[44941a03-7ac6-435b-ba12-d09d850eb983]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:48:10 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:48:10.316 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[aeb6c0f3-77e2-404b-acb0-56f660ab5723]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:48:10 compute-0 nova_compute[265391]: 2025-09-30 18:48:10.322 2 DEBUG nova.virt.libvirt.guest [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid '656a0137-3214-4992-a68a-cdbedf0336f6' (instance-00000024) get_job_info /usr/lib/python3.12/site-packages/nova/virt/libvirt/guest.py:687
Sep 30 18:48:10 compute-0 nova_compute[265391]: 2025-09-30 18:48:10.322 2 INFO nova.virt.libvirt.driver [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Migration operation has completed
Sep 30 18:48:10 compute-0 nova_compute[265391]: 2025-09-30 18:48:10.322 2 INFO nova.compute.manager [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] _post_live_migration() is started..
Sep 30 18:48:10 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:48:10.334 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[1951bc79-89c8-465d-8f44-727dd7055b70]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663274, 'reachable_time': 33432, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359827, 'error': None, 'target': 'ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:48:10 compute-0 nova_compute[265391]: 2025-09-30 18:48:10.335 2 WARNING neutronclient.v2_0.client [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:48:10 compute-0 nova_compute[265391]: 2025-09-30 18:48:10.336 2 WARNING neutronclient.v2_0.client [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:48:10 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:48:10.337 166383 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821 deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Sep 30 18:48:10 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:48:10.337 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[8f70df42-fa89-43b4-b3dd-b6c550e5b892]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:48:10 compute-0 systemd[1]: run-netns-ovnmeta\x2df4658d55\x2da8f9\x2d48f1\x2d846d\x2d61df3d830821.mount: Deactivated successfully.
Sep 30 18:48:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:48:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:48:10.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:48:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2095: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 4.7 KiB/s rd, 4.7 KiB/s wr, 5 op/s
Sep 30 18:48:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:48:11 compute-0 nova_compute[265391]: 2025-09-30 18:48:11.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:48:11 compute-0 nova_compute[265391]: 2025-09-30 18:48:11.668 2 DEBUG nova.compute.manager [req-11f7547d-f169-485c-9930-d6218512ded1 req-e228e87a-8e13-4233-b418-6e9b8002ff6a 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Received event network-vif-unplugged-d728eab4-88db-4811-b199-c75155b08c82 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:48:11 compute-0 nova_compute[265391]: 2025-09-30 18:48:11.668 2 DEBUG oslo_concurrency.lockutils [req-11f7547d-f169-485c-9930-d6218512ded1 req-e228e87a-8e13-4233-b418-6e9b8002ff6a 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "656a0137-3214-4992-a68a-cdbedf0336f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:48:11 compute-0 nova_compute[265391]: 2025-09-30 18:48:11.668 2 DEBUG oslo_concurrency.lockutils [req-11f7547d-f169-485c-9930-d6218512ded1 req-e228e87a-8e13-4233-b418-6e9b8002ff6a 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "656a0137-3214-4992-a68a-cdbedf0336f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:48:11 compute-0 nova_compute[265391]: 2025-09-30 18:48:11.668 2 DEBUG oslo_concurrency.lockutils [req-11f7547d-f169-485c-9930-d6218512ded1 req-e228e87a-8e13-4233-b418-6e9b8002ff6a 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "656a0137-3214-4992-a68a-cdbedf0336f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:48:11 compute-0 nova_compute[265391]: 2025-09-30 18:48:11.669 2 DEBUG nova.compute.manager [req-11f7547d-f169-485c-9930-d6218512ded1 req-e228e87a-8e13-4233-b418-6e9b8002ff6a 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] No waiting events found dispatching network-vif-unplugged-d728eab4-88db-4811-b199-c75155b08c82 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:48:11 compute-0 nova_compute[265391]: 2025-09-30 18:48:11.669 2 DEBUG nova.compute.manager [req-11f7547d-f169-485c-9930-d6218512ded1 req-e228e87a-8e13-4233-b418-6e9b8002ff6a 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Received event network-vif-unplugged-d728eab4-88db-4811-b199-c75155b08c82 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:48:11 compute-0 ceph-mon[73755]: pgmap v2095: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 4.7 KiB/s rd, 4.7 KiB/s wr, 5 op/s
Sep 30 18:48:11 compute-0 nova_compute[265391]: 2025-09-30 18:48:11.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:48:11 compute-0 nova_compute[265391]: 2025-09-30 18:48:11.918 2 DEBUG nova.network.neutron [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Activated binding for port d728eab4-88db-4811-b199-c75155b08c82 and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.12/site-packages/nova/network/neutron.py:3241
Sep 30 18:48:11 compute-0 nova_compute[265391]: 2025-09-30 18:48:11.919 2 DEBUG nova.compute.manager [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "d728eab4-88db-4811-b199-c75155b08c82", "address": "fa:16:3e:b3:57:7e", "network": {"id": "f4658d55-a8f9-48f1-846d-61df3d830821", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2093820932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67cbb3b670e445a4b97abcc92749d126", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd728eab4-88", "ovs_interfaceid": "d728eab4-88db-4811-b199-c75155b08c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10059
Sep 30 18:48:11 compute-0 nova_compute[265391]: 2025-09-30 18:48:11.920 2 DEBUG nova.virt.libvirt.vif [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:46:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteZoneMigrationStrategy-server-752609519',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutezonemigrationstrategy-server-752609519',id=36,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:47:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3359c464e0344756a39ce5c7088b9eba',ramdisk_id='',reservation_id='r-9hll55u7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteZoneMigrationStrategy-613400940',owner_user_name='tempest-TestExecuteZoneMigrationStrategy-613400940-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:47:49Z,user_data=None,user_id='f560266d133f4f1ba4a908e3cdcfa59d',uuid=656a0137-3214-4992-a68a-cdbedf0336f6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d728eab4-88db-4811-b199-c75155b08c82", "address": "fa:16:3e:b3:57:7e", "network": {"id": "f4658d55-a8f9-48f1-846d-61df3d830821", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2093820932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67cbb3b670e445a4b97abcc92749d126", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd728eab4-88", "ovs_interfaceid": "d728eab4-88db-4811-b199-c75155b08c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Sep 30 18:48:11 compute-0 nova_compute[265391]: 2025-09-30 18:48:11.921 2 DEBUG nova.network.os_vif_util [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "d728eab4-88db-4811-b199-c75155b08c82", "address": "fa:16:3e:b3:57:7e", "network": {"id": "f4658d55-a8f9-48f1-846d-61df3d830821", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2093820932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67cbb3b670e445a4b97abcc92749d126", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd728eab4-88", "ovs_interfaceid": "d728eab4-88db-4811-b199-c75155b08c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:48:11 compute-0 nova_compute[265391]: 2025-09-30 18:48:11.922 2 DEBUG nova.network.os_vif_util [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:57:7e,bridge_name='br-int',has_traffic_filtering=True,id=d728eab4-88db-4811-b199-c75155b08c82,network=Network(f4658d55-a8f9-48f1-846d-61df3d830821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd728eab4-88') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:48:11 compute-0 nova_compute[265391]: 2025-09-30 18:48:11.922 2 DEBUG os_vif [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:57:7e,bridge_name='br-int',has_traffic_filtering=True,id=d728eab4-88db-4811-b199-c75155b08c82,network=Network(f4658d55-a8f9-48f1-846d-61df3d830821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd728eab4-88') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Sep 30 18:48:11 compute-0 nova_compute[265391]: 2025-09-30 18:48:11.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:48:11 compute-0 nova_compute[265391]: 2025-09-30 18:48:11.926 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd728eab4-88, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:48:11 compute-0 nova_compute[265391]: 2025-09-30 18:48:11.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:48:11 compute-0 nova_compute[265391]: 2025-09-30 18:48:11.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:48:11 compute-0 nova_compute[265391]: 2025-09-30 18:48:11.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:48:11 compute-0 nova_compute[265391]: 2025-09-30 18:48:11.933 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=0d00f60e-6597-4de5-a8f7-bba6f911b803) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:48:11 compute-0 nova_compute[265391]: 2025-09-30 18:48:11.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:48:11 compute-0 nova_compute[265391]: 2025-09-30 18:48:11.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:48:11 compute-0 nova_compute[265391]: 2025-09-30 18:48:11.937 2 INFO os_vif [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:57:7e,bridge_name='br-int',has_traffic_filtering=True,id=d728eab4-88db-4811-b199-c75155b08c82,network=Network(f4658d55-a8f9-48f1-846d-61df3d830821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd728eab4-88')
Sep 30 18:48:11 compute-0 nova_compute[265391]: 2025-09-30 18:48:11.938 2 DEBUG oslo_concurrency.lockutils [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:48:11 compute-0 nova_compute[265391]: 2025-09-30 18:48:11.938 2 DEBUG oslo_concurrency.lockutils [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:48:11 compute-0 nova_compute[265391]: 2025-09-30 18:48:11.939 2 DEBUG oslo_concurrency.lockutils [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:48:11 compute-0 nova_compute[265391]: 2025-09-30 18:48:11.939 2 DEBUG nova.compute.manager [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10082
Sep 30 18:48:11 compute-0 nova_compute[265391]: 2025-09-30 18:48:11.940 2 INFO nova.virt.libvirt.driver [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Deleting instance files /var/lib/nova/instances/656a0137-3214-4992-a68a-cdbedf0336f6_del
Sep 30 18:48:11 compute-0 nova_compute[265391]: 2025-09-30 18:48:11.940 2 INFO nova.virt.libvirt.driver [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Deletion of /var/lib/nova/instances/656a0137-3214-4992-a68a-cdbedf0336f6_del complete
Sep 30 18:48:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:48:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:48:12.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:48:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:48:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:48:12.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:48:12 compute-0 podman[359835]: 2025-09-30 18:48:12.538786408 +0000 UTC m=+0.066342440 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Sep 30 18:48:12 compute-0 podman[359833]: 2025-09-30 18:48:12.547295179 +0000 UTC m=+0.076083043 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:48:12 compute-0 podman[359834]: 2025-09-30 18:48:12.595418586 +0000 UTC m=+0.121372907 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250930, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:48:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2096: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 4.7 KiB/s rd, 4.7 KiB/s wr, 5 op/s
Sep 30 18:48:12 compute-0 sshd-session[359829]: Invalid user hacluster from 80.94.95.116 port 33610
Sep 30 18:48:12 compute-0 sshd-session[359829]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:48:12 compute-0 sshd-session[359829]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.95.116
Sep 30 18:48:13 compute-0 nova_compute[265391]: 2025-09-30 18:48:13.726 2 DEBUG nova.compute.manager [req-9b3b9a22-c017-447b-83e9-d0e33f31e813 req-62ce2c65-0b17-4a19-8214-c81915705de9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Received event network-vif-plugged-d728eab4-88db-4811-b199-c75155b08c82 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:48:13 compute-0 nova_compute[265391]: 2025-09-30 18:48:13.726 2 DEBUG oslo_concurrency.lockutils [req-9b3b9a22-c017-447b-83e9-d0e33f31e813 req-62ce2c65-0b17-4a19-8214-c81915705de9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "656a0137-3214-4992-a68a-cdbedf0336f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:48:13 compute-0 nova_compute[265391]: 2025-09-30 18:48:13.727 2 DEBUG oslo_concurrency.lockutils [req-9b3b9a22-c017-447b-83e9-d0e33f31e813 req-62ce2c65-0b17-4a19-8214-c81915705de9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "656a0137-3214-4992-a68a-cdbedf0336f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:48:13 compute-0 nova_compute[265391]: 2025-09-30 18:48:13.727 2 DEBUG oslo_concurrency.lockutils [req-9b3b9a22-c017-447b-83e9-d0e33f31e813 req-62ce2c65-0b17-4a19-8214-c81915705de9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "656a0137-3214-4992-a68a-cdbedf0336f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:48:13 compute-0 nova_compute[265391]: 2025-09-30 18:48:13.727 2 DEBUG nova.compute.manager [req-9b3b9a22-c017-447b-83e9-d0e33f31e813 req-62ce2c65-0b17-4a19-8214-c81915705de9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] No waiting events found dispatching network-vif-plugged-d728eab4-88db-4811-b199-c75155b08c82 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:48:13 compute-0 nova_compute[265391]: 2025-09-30 18:48:13.727 2 WARNING nova.compute.manager [req-9b3b9a22-c017-447b-83e9-d0e33f31e813 req-62ce2c65-0b17-4a19-8214-c81915705de9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Received unexpected event network-vif-plugged-d728eab4-88db-4811-b199-c75155b08c82 for instance with vm_state active and task_state migrating.
Sep 30 18:48:13 compute-0 nova_compute[265391]: 2025-09-30 18:48:13.727 2 DEBUG nova.compute.manager [req-9b3b9a22-c017-447b-83e9-d0e33f31e813 req-62ce2c65-0b17-4a19-8214-c81915705de9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Received event network-vif-unplugged-d728eab4-88db-4811-b199-c75155b08c82 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:48:13 compute-0 nova_compute[265391]: 2025-09-30 18:48:13.727 2 DEBUG oslo_concurrency.lockutils [req-9b3b9a22-c017-447b-83e9-d0e33f31e813 req-62ce2c65-0b17-4a19-8214-c81915705de9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "656a0137-3214-4992-a68a-cdbedf0336f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:48:13 compute-0 nova_compute[265391]: 2025-09-30 18:48:13.727 2 DEBUG oslo_concurrency.lockutils [req-9b3b9a22-c017-447b-83e9-d0e33f31e813 req-62ce2c65-0b17-4a19-8214-c81915705de9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "656a0137-3214-4992-a68a-cdbedf0336f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:48:13 compute-0 nova_compute[265391]: 2025-09-30 18:48:13.728 2 DEBUG oslo_concurrency.lockutils [req-9b3b9a22-c017-447b-83e9-d0e33f31e813 req-62ce2c65-0b17-4a19-8214-c81915705de9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "656a0137-3214-4992-a68a-cdbedf0336f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:48:13 compute-0 nova_compute[265391]: 2025-09-30 18:48:13.728 2 DEBUG nova.compute.manager [req-9b3b9a22-c017-447b-83e9-d0e33f31e813 req-62ce2c65-0b17-4a19-8214-c81915705de9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] No waiting events found dispatching network-vif-unplugged-d728eab4-88db-4811-b199-c75155b08c82 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:48:13 compute-0 nova_compute[265391]: 2025-09-30 18:48:13.728 2 DEBUG nova.compute.manager [req-9b3b9a22-c017-447b-83e9-d0e33f31e813 req-62ce2c65-0b17-4a19-8214-c81915705de9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Received event network-vif-unplugged-d728eab4-88db-4811-b199-c75155b08c82 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:48:13 compute-0 nova_compute[265391]: 2025-09-30 18:48:13.728 2 DEBUG nova.compute.manager [req-9b3b9a22-c017-447b-83e9-d0e33f31e813 req-62ce2c65-0b17-4a19-8214-c81915705de9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Received event network-vif-plugged-d728eab4-88db-4811-b199-c75155b08c82 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:48:13 compute-0 nova_compute[265391]: 2025-09-30 18:48:13.728 2 DEBUG oslo_concurrency.lockutils [req-9b3b9a22-c017-447b-83e9-d0e33f31e813 req-62ce2c65-0b17-4a19-8214-c81915705de9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "656a0137-3214-4992-a68a-cdbedf0336f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:48:13 compute-0 nova_compute[265391]: 2025-09-30 18:48:13.728 2 DEBUG oslo_concurrency.lockutils [req-9b3b9a22-c017-447b-83e9-d0e33f31e813 req-62ce2c65-0b17-4a19-8214-c81915705de9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "656a0137-3214-4992-a68a-cdbedf0336f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:48:13 compute-0 nova_compute[265391]: 2025-09-30 18:48:13.728 2 DEBUG oslo_concurrency.lockutils [req-9b3b9a22-c017-447b-83e9-d0e33f31e813 req-62ce2c65-0b17-4a19-8214-c81915705de9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "656a0137-3214-4992-a68a-cdbedf0336f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:48:13 compute-0 nova_compute[265391]: 2025-09-30 18:48:13.728 2 DEBUG nova.compute.manager [req-9b3b9a22-c017-447b-83e9-d0e33f31e813 req-62ce2c65-0b17-4a19-8214-c81915705de9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] No waiting events found dispatching network-vif-plugged-d728eab4-88db-4811-b199-c75155b08c82 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:48:13 compute-0 nova_compute[265391]: 2025-09-30 18:48:13.729 2 WARNING nova.compute.manager [req-9b3b9a22-c017-447b-83e9-d0e33f31e813 req-62ce2c65-0b17-4a19-8214-c81915705de9 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Received unexpected event network-vif-plugged-d728eab4-88db-4811-b199-c75155b08c82 for instance with vm_state active and task_state migrating.
Sep 30 18:48:13 compute-0 ceph-mon[73755]: pgmap v2096: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 4.7 KiB/s rd, 4.7 KiB/s wr, 5 op/s
Sep 30 18:48:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:48:13.921Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:48:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:48:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:48:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:48:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:48:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:48:14.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:48:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:48:14.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:48:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2097: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 5.2 KiB/s rd, 4.7 KiB/s wr, 6 op/s
Sep 30 18:48:14 compute-0 ceph-mon[73755]: pgmap v2097: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 5.2 KiB/s rd, 4.7 KiB/s wr, 6 op/s
Sep 30 18:48:14 compute-0 sshd-session[359829]: Failed password for invalid user hacluster from 80.94.95.116 port 33610 ssh2
Sep 30 18:48:15 compute-0 sshd-session[359829]: Connection closed by invalid user hacluster 80.94.95.116 port 33610 [preauth]
Sep 30 18:48:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:48:16.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:48:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:48:16.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2098: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 1.1 KiB/s wr, 6 op/s
Sep 30 18:48:16 compute-0 nova_compute[265391]: 2025-09-30 18:48:16.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:48:16 compute-0 nova_compute[265391]: 2025-09-30 18:48:16.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:48:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:48:17.391Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:48:17 compute-0 ceph-mon[73755]: pgmap v2098: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 1.1 KiB/s wr, 6 op/s
Sep 30 18:48:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:48:18.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:48:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:48:18.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:48:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2099: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 5.2 KiB/s rd, 1.1 KiB/s wr, 6 op/s
Sep 30 18:48:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:48:18] "GET /metrics HTTP/1.1" 200 46722 "" "Prometheus/2.51.0"
Sep 30 18:48:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:48:18] "GET /metrics HTTP/1.1" 200 46722 "" "Prometheus/2.51.0"
Sep 30 18:48:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:48:18.930Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:48:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:48:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:48:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:48:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:48:19 compute-0 ceph-mon[73755]: pgmap v2099: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 5.2 KiB/s rd, 1.1 KiB/s wr, 6 op/s
Sep 30 18:48:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:48:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:48:20.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:48:20 compute-0 nova_compute[265391]: 2025-09-30 18:48:20.472 2 DEBUG oslo_concurrency.lockutils [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "656a0137-3214-4992-a68a-cdbedf0336f6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:48:20 compute-0 nova_compute[265391]: 2025-09-30 18:48:20.472 2 DEBUG oslo_concurrency.lockutils [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "656a0137-3214-4992-a68a-cdbedf0336f6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:48:20 compute-0 nova_compute[265391]: 2025-09-30 18:48:20.472 2 DEBUG oslo_concurrency.lockutils [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "656a0137-3214-4992-a68a-cdbedf0336f6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:48:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:48:20.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2100: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 682 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:48:20 compute-0 sshd-session[359828]: error: kex_exchange_identification: read: Connection timed out
Sep 30 18:48:20 compute-0 sshd-session[359828]: banner exchange: Connection from 115.190.39.222 port 33978: Connection timed out
Sep 30 18:48:21 compute-0 nova_compute[265391]: 2025-09-30 18:48:21.035 2 DEBUG oslo_concurrency.lockutils [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:48:21 compute-0 nova_compute[265391]: 2025-09-30 18:48:21.035 2 DEBUG oslo_concurrency.lockutils [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:48:21 compute-0 nova_compute[265391]: 2025-09-30 18:48:21.035 2 DEBUG oslo_concurrency.lockutils [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:48:21 compute-0 nova_compute[265391]: 2025-09-30 18:48:21.035 2 DEBUG nova.compute.resource_tracker [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:48:21 compute-0 nova_compute[265391]: 2025-09-30 18:48:21.036 2 DEBUG oslo_concurrency.processutils [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:48:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:48:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:48:21 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2994145042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:48:21 compute-0 nova_compute[265391]: 2025-09-30 18:48:21.426 2 DEBUG oslo_concurrency.processutils [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.391s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:48:21 compute-0 nova_compute[265391]: 2025-09-30 18:48:21.580 2 WARNING nova.virt.libvirt.driver [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:48:21 compute-0 nova_compute[265391]: 2025-09-30 18:48:21.582 2 DEBUG oslo_concurrency.processutils [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:48:21 compute-0 nova_compute[265391]: 2025-09-30 18:48:21.603 2 DEBUG oslo_concurrency.processutils [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.022s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:48:21 compute-0 nova_compute[265391]: 2025-09-30 18:48:21.604 2 DEBUG nova.compute.resource_tracker [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4242MB free_disk=39.901123046875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:48:21 compute-0 nova_compute[265391]: 2025-09-30 18:48:21.604 2 DEBUG oslo_concurrency.lockutils [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:48:21 compute-0 nova_compute[265391]: 2025-09-30 18:48:21.604 2 DEBUG oslo_concurrency.lockutils [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:48:21 compute-0 ceph-mon[73755]: pgmap v2100: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 682 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:48:21 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2994145042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:48:21 compute-0 nova_compute[265391]: 2025-09-30 18:48:21.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:48:21 compute-0 sudo[359930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:48:21 compute-0 nova_compute[265391]: 2025-09-30 18:48:21.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:48:21 compute-0 sudo[359930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:48:21 compute-0 sudo[359930]: pam_unix(sudo:session): session closed for user root
Sep 30 18:48:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:48:22.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:48:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:48:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:48:22.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:22 compute-0 nova_compute[265391]: 2025-09-30 18:48:22.622 2 DEBUG nova.compute.resource_tracker [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Migration for instance 656a0137-3214-4992-a68a-cdbedf0336f6 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Sep 30 18:48:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2101: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 682 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:48:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:48:23 compute-0 nova_compute[265391]: 2025-09-30 18:48:23.129 2 DEBUG nova.compute.resource_tracker [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1596
Sep 30 18:48:23 compute-0 nova_compute[265391]: 2025-09-30 18:48:23.174 2 DEBUG nova.compute.resource_tracker [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Migration 0f90f65f-b699-4d58-94bd-ef4a6ad3b688 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Sep 30 18:48:23 compute-0 nova_compute[265391]: 2025-09-30 18:48:23.174 2 DEBUG nova.compute.resource_tracker [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:48:23 compute-0 nova_compute[265391]: 2025-09-30 18:48:23.174 2 DEBUG nova.compute.resource_tracker [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:48:21 up  1:51,  0 user,  load average: 0.58, 0.79, 0.84\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:48:23 compute-0 nova_compute[265391]: 2025-09-30 18:48:23.219 2 DEBUG oslo_concurrency.processutils [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:48:23 compute-0 nova_compute[265391]: 2025-09-30 18:48:23.423 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:48:23 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:48:23 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/997633198' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:48:23 compute-0 nova_compute[265391]: 2025-09-30 18:48:23.709 2 DEBUG oslo_concurrency.processutils [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:48:23 compute-0 nova_compute[265391]: 2025-09-30 18:48:23.717 2 DEBUG nova.compute.provider_tree [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:48:23 compute-0 ceph-mon[73755]: pgmap v2101: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 682 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:48:23 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/997633198' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:48:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:48:23.922Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:48:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:48:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:48:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:48:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:48:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:48:24.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:24 compute-0 nova_compute[265391]: 2025-09-30 18:48:24.226 2 DEBUG nova.scheduler.client.report [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:48:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:48:24.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2102: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 2.3 KiB/s wr, 1 op/s
Sep 30 18:48:24 compute-0 nova_compute[265391]: 2025-09-30 18:48:24.734 2 DEBUG nova.compute.resource_tracker [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:48:24 compute-0 nova_compute[265391]: 2025-09-30 18:48:24.735 2 DEBUG oslo_concurrency.lockutils [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.131s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:48:24 compute-0 nova_compute[265391]: 2025-09-30 18:48:24.753 2 INFO nova.compute.manager [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Sep 30 18:48:25 compute-0 ceph-mon[73755]: pgmap v2102: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 2.3 KiB/s wr, 1 op/s
Sep 30 18:48:25 compute-0 nova_compute[265391]: 2025-09-30 18:48:25.842 2 INFO nova.scheduler.client.report [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Deleted allocation for migration 0f90f65f-b699-4d58-94bd-ef4a6ad3b688
Sep 30 18:48:25 compute-0 nova_compute[265391]: 2025-09-30 18:48:25.842 2 DEBUG nova.virt.libvirt.driver [None req-9c1f2a99-b127-49c4-a2e1-55cce65836c2 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 656a0137-3214-4992-a68a-cdbedf0336f6] Live migration monitoring is all done _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11566
Sep 30 18:48:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:48:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:48:26.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:48:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:48:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:48:26.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:26 compute-0 podman[359982]: 2025-09-30 18:48:26.554751107 +0000 UTC m=+0.078228959 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.build-date=20250930, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Sep 30 18:48:26 compute-0 podman[359983]: 2025-09-30 18:48:26.558465483 +0000 UTC m=+0.086318708 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, release=1755695350, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Sep 30 18:48:26 compute-0 podman[359981]: 2025-09-30 18:48:26.559927491 +0000 UTC m=+0.086221435 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Sep 30 18:48:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2103: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 2.3 KiB/s wr, 0 op/s
Sep 30 18:48:26 compute-0 ceph-mon[73755]: pgmap v2103: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 2.3 KiB/s wr, 0 op/s
Sep 30 18:48:26 compute-0 nova_compute[265391]: 2025-09-30 18:48:26.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:48:26 compute-0 nova_compute[265391]: 2025-09-30 18:48:26.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:48:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:48:27.392Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:48:27 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 18:48:27 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 26K writes, 98K keys, 26K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s
                                           Cumulative WAL: 26K writes, 9158 syncs, 2.90 writes per sync, written: 0.09 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4299 writes, 16K keys, 4299 commit groups, 1.0 writes per commit group, ingest: 19.72 MB, 0.03 MB/s
                                           Interval WAL: 4299 writes, 1695 syncs, 2.54 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Sep 30 18:48:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:48:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:48:28.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:48:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:48:28.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2104: 353 pgs: 353 active+clean; 121 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 12 KiB/s wr, 30 op/s
Sep 30 18:48:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:48:28] "GET /metrics HTTP/1.1" 200 46728 "" "Prometheus/2.51.0"
Sep 30 18:48:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:48:28] "GET /metrics HTTP/1.1" 200 46728 "" "Prometheus/2.51.0"
Sep 30 18:48:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:48:28.931Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:48:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:48:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:48:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:48:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:48:29 compute-0 ceph-mon[73755]: pgmap v2104: 353 pgs: 353 active+clean; 121 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 12 KiB/s wr, 30 op/s
Sep 30 18:48:29 compute-0 podman[276673]: time="2025-09-30T18:48:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:48:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:48:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:48:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:48:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10322 "" "Go-http-client/1.1"
Sep 30 18:48:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:48:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:48:30.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:48:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:48:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:48:30.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:48:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2105: 353 pgs: 353 active+clean; 121 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 12 KiB/s wr, 30 op/s
Sep 30 18:48:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:48:31 compute-0 openstack_network_exporter[279566]: ERROR   18:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:48:31 compute-0 openstack_network_exporter[279566]: ERROR   18:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:48:31 compute-0 openstack_network_exporter[279566]: ERROR   18:48:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:48:31 compute-0 openstack_network_exporter[279566]: ERROR   18:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:48:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:48:31 compute-0 openstack_network_exporter[279566]: ERROR   18:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:48:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:48:31 compute-0 ceph-mon[73755]: pgmap v2105: 353 pgs: 353 active+clean; 121 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 12 KiB/s wr, 30 op/s
Sep 30 18:48:31 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1388053340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:48:31 compute-0 nova_compute[265391]: 2025-09-30 18:48:31.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:48:31 compute-0 nova_compute[265391]: 2025-09-30 18:48:31.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:48:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:48:32.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:48:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:48:32.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:48:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2106: 353 pgs: 353 active+clean; 121 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 12 KiB/s wr, 30 op/s
Sep 30 18:48:33 compute-0 ceph-mon[73755]: pgmap v2106: 353 pgs: 353 active+clean; 121 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 12 KiB/s wr, 30 op/s
Sep 30 18:48:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:48:33.923Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:48:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:48:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:48:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:48:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:48:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:48:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:48:34.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:48:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:48:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:48:34.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:48:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2107: 353 pgs: 353 active+clean; 121 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 12 KiB/s wr, 30 op/s
Sep 30 18:48:35 compute-0 sudo[360050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:48:35 compute-0 sudo[360050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:48:35 compute-0 sudo[360050]: pam_unix(sudo:session): session closed for user root
Sep 30 18:48:35 compute-0 ceph-mon[73755]: pgmap v2107: 353 pgs: 353 active+clean; 121 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 12 KiB/s wr, 30 op/s
Sep 30 18:48:35 compute-0 sudo[360075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:48:35 compute-0 sudo[360075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:48:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:48:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:48:36.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:48:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:48:36 compute-0 sudo[360075]: pam_unix(sudo:session): session closed for user root
Sep 30 18:48:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:48:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:48:36.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:48:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:48:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2151263999' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:48:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:48:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2151263999' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:48:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:48:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:48:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:48:36 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:48:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:48:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2108: 353 pgs: 353 active+clean; 121 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 9.3 KiB/s wr, 30 op/s
Sep 30 18:48:36 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:48:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:48:36 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:48:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:48:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:48:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:48:36 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:48:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:48:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:48:36 compute-0 sudo[360133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:48:36 compute-0 sudo[360133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:48:36 compute-0 sudo[360133]: pam_unix(sudo:session): session closed for user root
Sep 30 18:48:36 compute-0 sudo[360158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:48:36 compute-0 sudo[360158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:48:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2151263999' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:48:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2151263999' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:48:36 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:48:36 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:48:36 compute-0 ceph-mon[73755]: pgmap v2108: 353 pgs: 353 active+clean; 121 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 9.3 KiB/s wr, 30 op/s
Sep 30 18:48:36 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:48:36 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:48:36 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:48:36 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:48:36 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:48:36 compute-0 nova_compute[265391]: 2025-09-30 18:48:36.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:48:36 compute-0 nova_compute[265391]: 2025-09-30 18:48:36.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:48:37 compute-0 podman[360223]: 2025-09-30 18:48:37.203814781 +0000 UTC m=+0.058782145 container create 559226eab9cbe6f9300139f2712f384d2fe98c8b1a3807c70540efcf295b54fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_northcutt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Sep 30 18:48:37 compute-0 systemd[1]: Started libpod-conmon-559226eab9cbe6f9300139f2712f384d2fe98c8b1a3807c70540efcf295b54fd.scope.
Sep 30 18:48:37 compute-0 podman[360223]: 2025-09-30 18:48:37.172855119 +0000 UTC m=+0.027822533 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:48:37 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:48:37 compute-0 podman[360223]: 2025-09-30 18:48:37.330804962 +0000 UTC m=+0.185772406 container init 559226eab9cbe6f9300139f2712f384d2fe98c8b1a3807c70540efcf295b54fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1)
Sep 30 18:48:37 compute-0 podman[360223]: 2025-09-30 18:48:37.341437008 +0000 UTC m=+0.196404382 container start 559226eab9cbe6f9300139f2712f384d2fe98c8b1a3807c70540efcf295b54fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_northcutt, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:48:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:48:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:48:37 compute-0 podman[360223]: 2025-09-30 18:48:37.345909534 +0000 UTC m=+0.200876938 container attach 559226eab9cbe6f9300139f2712f384d2fe98c8b1a3807c70540efcf295b54fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:48:37 compute-0 cool_northcutt[360240]: 167 167
Sep 30 18:48:37 compute-0 systemd[1]: libpod-559226eab9cbe6f9300139f2712f384d2fe98c8b1a3807c70540efcf295b54fd.scope: Deactivated successfully.
Sep 30 18:48:37 compute-0 podman[360223]: 2025-09-30 18:48:37.354206479 +0000 UTC m=+0.209173883 container died 559226eab9cbe6f9300139f2712f384d2fe98c8b1a3807c70540efcf295b54fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_northcutt, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:48:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:48:37.393Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:48:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f80fd2d7838e91438caccbc5444f5c31b2743f1c2533ee85a334b1e046ca544-merged.mount: Deactivated successfully.
Sep 30 18:48:37 compute-0 podman[360223]: 2025-09-30 18:48:37.42329458 +0000 UTC m=+0.278261954 container remove 559226eab9cbe6f9300139f2712f384d2fe98c8b1a3807c70540efcf295b54fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Sep 30 18:48:37 compute-0 systemd[1]: libpod-conmon-559226eab9cbe6f9300139f2712f384d2fe98c8b1a3807c70540efcf295b54fd.scope: Deactivated successfully.
Sep 30 18:48:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:48:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:48:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:48:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:48:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:48:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:48:37 compute-0 podman[360265]: 2025-09-30 18:48:37.601978211 +0000 UTC m=+0.051760823 container create 1c24d8a5bea2ba436a677e8cae6034924178c84f8c53f79e2e06aedffddc6d50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jennings, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Sep 30 18:48:37 compute-0 systemd[1]: Started libpod-conmon-1c24d8a5bea2ba436a677e8cae6034924178c84f8c53f79e2e06aedffddc6d50.scope.
Sep 30 18:48:37 compute-0 podman[360265]: 2025-09-30 18:48:37.580466473 +0000 UTC m=+0.030249185 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:48:37 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:48:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b038cab939a32b24a14103602d2354a4ec7c3891e96204c2fd35f18b15df8ab7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:48:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b038cab939a32b24a14103602d2354a4ec7c3891e96204c2fd35f18b15df8ab7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:48:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b038cab939a32b24a14103602d2354a4ec7c3891e96204c2fd35f18b15df8ab7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:48:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b038cab939a32b24a14103602d2354a4ec7c3891e96204c2fd35f18b15df8ab7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:48:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b038cab939a32b24a14103602d2354a4ec7c3891e96204c2fd35f18b15df8ab7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:48:37 compute-0 podman[360265]: 2025-09-30 18:48:37.699303683 +0000 UTC m=+0.149086295 container init 1c24d8a5bea2ba436a677e8cae6034924178c84f8c53f79e2e06aedffddc6d50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jennings, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:48:37 compute-0 podman[360265]: 2025-09-30 18:48:37.706371697 +0000 UTC m=+0.156154309 container start 1c24d8a5bea2ba436a677e8cae6034924178c84f8c53f79e2e06aedffddc6d50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:48:37 compute-0 podman[360265]: 2025-09-30 18:48:37.710113344 +0000 UTC m=+0.159895956 container attach 1c24d8a5bea2ba436a677e8cae6034924178c84f8c53f79e2e06aedffddc6d50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jennings, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Sep 30 18:48:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:48:38 compute-0 stupefied_jennings[360281]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:48:38 compute-0 stupefied_jennings[360281]: --> All data devices are unavailable
Sep 30 18:48:38 compute-0 systemd[1]: libpod-1c24d8a5bea2ba436a677e8cae6034924178c84f8c53f79e2e06aedffddc6d50.scope: Deactivated successfully.
Sep 30 18:48:38 compute-0 podman[360265]: 2025-09-30 18:48:38.06339306 +0000 UTC m=+0.513175682 container died 1c24d8a5bea2ba436a677e8cae6034924178c84f8c53f79e2e06aedffddc6d50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jennings, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Sep 30 18:48:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-b038cab939a32b24a14103602d2354a4ec7c3891e96204c2fd35f18b15df8ab7-merged.mount: Deactivated successfully.
Sep 30 18:48:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:48:38.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:38 compute-0 podman[360265]: 2025-09-30 18:48:38.108963681 +0000 UTC m=+0.558746333 container remove 1c24d8a5bea2ba436a677e8cae6034924178c84f8c53f79e2e06aedffddc6d50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jennings, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Sep 30 18:48:38 compute-0 systemd[1]: libpod-conmon-1c24d8a5bea2ba436a677e8cae6034924178c84f8c53f79e2e06aedffddc6d50.scope: Deactivated successfully.
Sep 30 18:48:38 compute-0 sudo[360158]: pam_unix(sudo:session): session closed for user root
Sep 30 18:48:38 compute-0 sudo[360308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:48:38 compute-0 sudo[360308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:48:38 compute-0 sudo[360308]: pam_unix(sudo:session): session closed for user root
Sep 30 18:48:38 compute-0 sudo[360333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:48:38 compute-0 sudo[360333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:48:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:48:38.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2109: 353 pgs: 353 active+clean; 41 MiB data, 399 MiB used, 40 GiB / 40 GiB avail; 39 KiB/s rd, 10 KiB/s wr, 58 op/s
Sep 30 18:48:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:48:38] "GET /metrics HTTP/1.1" 200 46726 "" "Prometheus/2.51.0"
Sep 30 18:48:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:48:38] "GET /metrics HTTP/1.1" 200 46726 "" "Prometheus/2.51.0"
Sep 30 18:48:38 compute-0 podman[360399]: 2025-09-30 18:48:38.795194907 +0000 UTC m=+0.040999813 container create 3c69db437f27de4e3613384f3b34a87b62ff9591ba4376261e2bc639cba38526 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_roentgen, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:48:38 compute-0 ceph-mon[73755]: pgmap v2109: 353 pgs: 353 active+clean; 41 MiB data, 399 MiB used, 40 GiB / 40 GiB avail; 39 KiB/s rd, 10 KiB/s wr, 58 op/s
Sep 30 18:48:38 compute-0 systemd[1]: Started libpod-conmon-3c69db437f27de4e3613384f3b34a87b62ff9591ba4376261e2bc639cba38526.scope.
Sep 30 18:48:38 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:48:38 compute-0 podman[360399]: 2025-09-30 18:48:38.777438897 +0000 UTC m=+0.023243843 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:48:38 compute-0 podman[360399]: 2025-09-30 18:48:38.887088979 +0000 UTC m=+0.132893925 container init 3c69db437f27de4e3613384f3b34a87b62ff9591ba4376261e2bc639cba38526 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_roentgen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Sep 30 18:48:38 compute-0 podman[360399]: 2025-09-30 18:48:38.898598707 +0000 UTC m=+0.144403613 container start 3c69db437f27de4e3613384f3b34a87b62ff9591ba4376261e2bc639cba38526 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_roentgen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:48:38 compute-0 podman[360399]: 2025-09-30 18:48:38.902871998 +0000 UTC m=+0.148676944 container attach 3c69db437f27de4e3613384f3b34a87b62ff9591ba4376261e2bc639cba38526 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_roentgen, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:48:38 compute-0 sad_roentgen[360415]: 167 167
Sep 30 18:48:38 compute-0 systemd[1]: libpod-3c69db437f27de4e3613384f3b34a87b62ff9591ba4376261e2bc639cba38526.scope: Deactivated successfully.
Sep 30 18:48:38 compute-0 podman[360399]: 2025-09-30 18:48:38.907925419 +0000 UTC m=+0.153730315 container died 3c69db437f27de4e3613384f3b34a87b62ff9591ba4376261e2bc639cba38526 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 18:48:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:48:38.931Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:48:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-6243aa17e7cf0e99e028cf2c913cc95a3b98f34ef3bf2775588b8560a8e8611b-merged.mount: Deactivated successfully.
Sep 30 18:48:38 compute-0 podman[360399]: 2025-09-30 18:48:38.951895599 +0000 UTC m=+0.197700495 container remove 3c69db437f27de4e3613384f3b34a87b62ff9591ba4376261e2bc639cba38526 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_roentgen, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:48:38 compute-0 systemd[1]: libpod-conmon-3c69db437f27de4e3613384f3b34a87b62ff9591ba4376261e2bc639cba38526.scope: Deactivated successfully.
Sep 30 18:48:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:48:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:48:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:48:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:48:39 compute-0 podman[360440]: 2025-09-30 18:48:39.127294515 +0000 UTC m=+0.046015724 container create 3e5f2e388e7d56b073b2494e1a9369edb1ccc41e070b8bb5cff844212e8416c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_elbakyan, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Sep 30 18:48:39 compute-0 systemd[1]: Started libpod-conmon-3e5f2e388e7d56b073b2494e1a9369edb1ccc41e070b8bb5cff844212e8416c7.scope.
Sep 30 18:48:39 compute-0 podman[360440]: 2025-09-30 18:48:39.109127944 +0000 UTC m=+0.027849173 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:48:39 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:48:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd5cd7fcd8b7a9479406d2d53ab16f839eaa1b54843cafc9215498d541d2d3cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:48:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd5cd7fcd8b7a9479406d2d53ab16f839eaa1b54843cafc9215498d541d2d3cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:48:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd5cd7fcd8b7a9479406d2d53ab16f839eaa1b54843cafc9215498d541d2d3cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:48:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd5cd7fcd8b7a9479406d2d53ab16f839eaa1b54843cafc9215498d541d2d3cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:48:39 compute-0 podman[360440]: 2025-09-30 18:48:39.22819539 +0000 UTC m=+0.146916619 container init 3e5f2e388e7d56b073b2494e1a9369edb1ccc41e070b8bb5cff844212e8416c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_elbakyan, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:48:39 compute-0 podman[360440]: 2025-09-30 18:48:39.235889149 +0000 UTC m=+0.154610348 container start 3e5f2e388e7d56b073b2494e1a9369edb1ccc41e070b8bb5cff844212e8416c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_elbakyan, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Sep 30 18:48:39 compute-0 podman[360440]: 2025-09-30 18:48:39.238779294 +0000 UTC m=+0.157500593 container attach 3e5f2e388e7d56b073b2494e1a9369edb1ccc41e070b8bb5cff844212e8416c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_elbakyan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 18:48:39 compute-0 tender_elbakyan[360457]: {
Sep 30 18:48:39 compute-0 tender_elbakyan[360457]:     "0": [
Sep 30 18:48:39 compute-0 tender_elbakyan[360457]:         {
Sep 30 18:48:39 compute-0 tender_elbakyan[360457]:             "devices": [
Sep 30 18:48:39 compute-0 tender_elbakyan[360457]:                 "/dev/loop3"
Sep 30 18:48:39 compute-0 tender_elbakyan[360457]:             ],
Sep 30 18:48:39 compute-0 tender_elbakyan[360457]:             "lv_name": "ceph_lv0",
Sep 30 18:48:39 compute-0 tender_elbakyan[360457]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:48:39 compute-0 tender_elbakyan[360457]:             "lv_size": "21470642176",
Sep 30 18:48:39 compute-0 tender_elbakyan[360457]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:48:39 compute-0 tender_elbakyan[360457]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:48:39 compute-0 tender_elbakyan[360457]:             "name": "ceph_lv0",
Sep 30 18:48:39 compute-0 tender_elbakyan[360457]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:48:39 compute-0 tender_elbakyan[360457]:             "tags": {
Sep 30 18:48:39 compute-0 tender_elbakyan[360457]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:48:39 compute-0 tender_elbakyan[360457]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:48:39 compute-0 tender_elbakyan[360457]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:48:39 compute-0 tender_elbakyan[360457]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:48:39 compute-0 tender_elbakyan[360457]:                 "ceph.cluster_name": "ceph",
Sep 30 18:48:39 compute-0 tender_elbakyan[360457]:                 "ceph.crush_device_class": "",
Sep 30 18:48:39 compute-0 tender_elbakyan[360457]:                 "ceph.encrypted": "0",
Sep 30 18:48:39 compute-0 tender_elbakyan[360457]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:48:39 compute-0 tender_elbakyan[360457]:                 "ceph.osd_id": "0",
Sep 30 18:48:39 compute-0 tender_elbakyan[360457]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:48:39 compute-0 tender_elbakyan[360457]:                 "ceph.type": "block",
Sep 30 18:48:39 compute-0 tender_elbakyan[360457]:                 "ceph.vdo": "0",
Sep 30 18:48:39 compute-0 tender_elbakyan[360457]:                 "ceph.with_tpm": "0"
Sep 30 18:48:39 compute-0 tender_elbakyan[360457]:             },
Sep 30 18:48:39 compute-0 tender_elbakyan[360457]:             "type": "block",
Sep 30 18:48:39 compute-0 tender_elbakyan[360457]:             "vg_name": "ceph_vg0"
Sep 30 18:48:39 compute-0 tender_elbakyan[360457]:         }
Sep 30 18:48:39 compute-0 tender_elbakyan[360457]:     ]
Sep 30 18:48:39 compute-0 tender_elbakyan[360457]: }
Sep 30 18:48:39 compute-0 systemd[1]: libpod-3e5f2e388e7d56b073b2494e1a9369edb1ccc41e070b8bb5cff844212e8416c7.scope: Deactivated successfully.
Sep 30 18:48:39 compute-0 podman[360440]: 2025-09-30 18:48:39.62832239 +0000 UTC m=+0.547043609 container died 3e5f2e388e7d56b073b2494e1a9369edb1ccc41e070b8bb5cff844212e8416c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_elbakyan, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:48:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd5cd7fcd8b7a9479406d2d53ab16f839eaa1b54843cafc9215498d541d2d3cd-merged.mount: Deactivated successfully.
Sep 30 18:48:39 compute-0 podman[360440]: 2025-09-30 18:48:39.667163186 +0000 UTC m=+0.585884395 container remove 3e5f2e388e7d56b073b2494e1a9369edb1ccc41e070b8bb5cff844212e8416c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Sep 30 18:48:39 compute-0 systemd[1]: libpod-conmon-3e5f2e388e7d56b073b2494e1a9369edb1ccc41e070b8bb5cff844212e8416c7.scope: Deactivated successfully.
Sep 30 18:48:39 compute-0 sudo[360333]: pam_unix(sudo:session): session closed for user root
Sep 30 18:48:39 compute-0 sudo[360478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:48:39 compute-0 sudo[360478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:48:39 compute-0 sudo[360478]: pam_unix(sudo:session): session closed for user root
Sep 30 18:48:39 compute-0 sudo[360503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:48:39 compute-0 sudo[360503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:48:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:48:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:48:40.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:48:40 compute-0 podman[360570]: 2025-09-30 18:48:40.320167451 +0000 UTC m=+0.054919544 container create 7cdd62db98857ce23be399c68469dc641118dbadc48938129274e5fdadce2c0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:48:40 compute-0 systemd[1]: Started libpod-conmon-7cdd62db98857ce23be399c68469dc641118dbadc48938129274e5fdadce2c0c.scope.
Sep 30 18:48:40 compute-0 podman[360570]: 2025-09-30 18:48:40.291268032 +0000 UTC m=+0.026020185 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:48:40 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:48:40 compute-0 podman[360570]: 2025-09-30 18:48:40.409568378 +0000 UTC m=+0.144320441 container init 7cdd62db98857ce23be399c68469dc641118dbadc48938129274e5fdadce2c0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_varahamihira, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Sep 30 18:48:40 compute-0 podman[360570]: 2025-09-30 18:48:40.419078425 +0000 UTC m=+0.153830498 container start 7cdd62db98857ce23be399c68469dc641118dbadc48938129274e5fdadce2c0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_varahamihira, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Sep 30 18:48:40 compute-0 podman[360570]: 2025-09-30 18:48:40.422970996 +0000 UTC m=+0.157723049 container attach 7cdd62db98857ce23be399c68469dc641118dbadc48938129274e5fdadce2c0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_varahamihira, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:48:40 compute-0 recursing_varahamihira[360586]: 167 167
Sep 30 18:48:40 compute-0 systemd[1]: libpod-7cdd62db98857ce23be399c68469dc641118dbadc48938129274e5fdadce2c0c.scope: Deactivated successfully.
Sep 30 18:48:40 compute-0 podman[360591]: 2025-09-30 18:48:40.4752323 +0000 UTC m=+0.033697714 container died 7cdd62db98857ce23be399c68469dc641118dbadc48938129274e5fdadce2c0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_varahamihira, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:48:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac93bf46822c9176614fdff9885606f169e705edf98e8dbfb1fdbe32d55f1c3b-merged.mount: Deactivated successfully.
Sep 30 18:48:40 compute-0 podman[360591]: 2025-09-30 18:48:40.512582938 +0000 UTC m=+0.071048302 container remove 7cdd62db98857ce23be399c68469dc641118dbadc48938129274e5fdadce2c0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Sep 30 18:48:40 compute-0 systemd[1]: libpod-conmon-7cdd62db98857ce23be399c68469dc641118dbadc48938129274e5fdadce2c0c.scope: Deactivated successfully.
Sep 30 18:48:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:48:40.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2110: 353 pgs: 353 active+clean; 41 MiB data, 399 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:48:40 compute-0 podman[360613]: 2025-09-30 18:48:40.763481461 +0000 UTC m=+0.066294789 container create e184b6fca5b7d21981badfbcc76be20f6bfdaac0980688962865727e73f6bdf1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_franklin, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:48:40 compute-0 systemd[1]: Started libpod-conmon-e184b6fca5b7d21981badfbcc76be20f6bfdaac0980688962865727e73f6bdf1.scope.
Sep 30 18:48:40 compute-0 podman[360613]: 2025-09-30 18:48:40.736006399 +0000 UTC m=+0.038819737 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:48:40 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:48:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd8971390e2f07b01e9c5533b3b066d0678c0b9207399782d5b0445abba2178/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:48:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd8971390e2f07b01e9c5533b3b066d0678c0b9207399782d5b0445abba2178/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:48:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd8971390e2f07b01e9c5533b3b066d0678c0b9207399782d5b0445abba2178/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:48:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd8971390e2f07b01e9c5533b3b066d0678c0b9207399782d5b0445abba2178/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:48:40 compute-0 podman[360613]: 2025-09-30 18:48:40.880019212 +0000 UTC m=+0.182832540 container init e184b6fca5b7d21981badfbcc76be20f6bfdaac0980688962865727e73f6bdf1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_franklin, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:48:40 compute-0 podman[360613]: 2025-09-30 18:48:40.891471419 +0000 UTC m=+0.194284717 container start e184b6fca5b7d21981badfbcc76be20f6bfdaac0980688962865727e73f6bdf1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 18:48:40 compute-0 podman[360613]: 2025-09-30 18:48:40.895622876 +0000 UTC m=+0.198436184 container attach e184b6fca5b7d21981badfbcc76be20f6bfdaac0980688962865727e73f6bdf1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_franklin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:48:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:48:41 compute-0 lvm[360705]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:48:41 compute-0 lvm[360705]: VG ceph_vg0 finished
Sep 30 18:48:41 compute-0 ceph-mon[73755]: pgmap v2110: 353 pgs: 353 active+clean; 41 MiB data, 399 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:48:41 compute-0 ecstatic_franklin[360630]: {}
Sep 30 18:48:41 compute-0 systemd[1]: libpod-e184b6fca5b7d21981badfbcc76be20f6bfdaac0980688962865727e73f6bdf1.scope: Deactivated successfully.
Sep 30 18:48:41 compute-0 systemd[1]: libpod-e184b6fca5b7d21981badfbcc76be20f6bfdaac0980688962865727e73f6bdf1.scope: Consumed 1.323s CPU time.
Sep 30 18:48:41 compute-0 podman[360613]: 2025-09-30 18:48:41.670747226 +0000 UTC m=+0.973560524 container died e184b6fca5b7d21981badfbcc76be20f6bfdaac0980688962865727e73f6bdf1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:48:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-2bd8971390e2f07b01e9c5533b3b066d0678c0b9207399782d5b0445abba2178-merged.mount: Deactivated successfully.
Sep 30 18:48:41 compute-0 podman[360613]: 2025-09-30 18:48:41.727083656 +0000 UTC m=+1.029896984 container remove e184b6fca5b7d21981badfbcc76be20f6bfdaac0980688962865727e73f6bdf1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_franklin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Sep 30 18:48:41 compute-0 systemd[1]: libpod-conmon-e184b6fca5b7d21981badfbcc76be20f6bfdaac0980688962865727e73f6bdf1.scope: Deactivated successfully.
Sep 30 18:48:41 compute-0 sudo[360503]: pam_unix(sudo:session): session closed for user root
Sep 30 18:48:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:48:41 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:48:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:48:41 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:48:41 compute-0 nova_compute[265391]: 2025-09-30 18:48:41.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:48:41 compute-0 sudo[360720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:48:41 compute-0 sudo[360720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:48:41 compute-0 sudo[360720]: pam_unix(sudo:session): session closed for user root
Sep 30 18:48:41 compute-0 nova_compute[265391]: 2025-09-30 18:48:41.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:48:42 compute-0 sudo[360746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:48:42 compute-0 sudo[360746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:48:42 compute-0 sudo[360746]: pam_unix(sudo:session): session closed for user root
Sep 30 18:48:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:48:42.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:48:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:48:42.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:48:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2111: 353 pgs: 353 active+clean; 41 MiB data, 399 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:48:42 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:48:42 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:48:42 compute-0 ceph-mon[73755]: pgmap v2111: 353 pgs: 353 active+clean; 41 MiB data, 399 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:48:43 compute-0 podman[360771]: 2025-09-30 18:48:43.52790975 +0000 UTC m=+0.060600481 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent)
Sep 30 18:48:43 compute-0 podman[360773]: 2025-09-30 18:48:43.542657563 +0000 UTC m=+0.069368739 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Sep 30 18:48:43 compute-0 podman[360772]: 2025-09-30 18:48:43.587403122 +0000 UTC m=+0.107230100 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Sep 30 18:48:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:48:43.924Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:48:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:48:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:48:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:48:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:48:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:48:44.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:48:44.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2112: 353 pgs: 353 active+clean; 41 MiB data, 399 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:48:45 compute-0 ceph-mon[73755]: pgmap v2112: 353 pgs: 353 active+clean; 41 MiB data, 399 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:48:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:48:46.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:48:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:48:46.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2113: 353 pgs: 353 active+clean; 41 MiB data, 399 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:48:46 compute-0 nova_compute[265391]: 2025-09-30 18:48:46.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:48:46 compute-0 nova_compute[265391]: 2025-09-30 18:48:46.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:48:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:48:47.395Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:48:47 compute-0 ceph-mon[73755]: pgmap v2113: 353 pgs: 353 active+clean; 41 MiB data, 399 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:48:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:48:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:48:48.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:48:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:48:48.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2114: 353 pgs: 353 active+clean; 41 MiB data, 399 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:48:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:48:48] "GET /metrics HTTP/1.1" 200 46726 "" "Prometheus/2.51.0"
Sep 30 18:48:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:48:48] "GET /metrics HTTP/1.1" 200 46726 "" "Prometheus/2.51.0"
Sep 30 18:48:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:48:48.933Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:48:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:48:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:48:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:48:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:48:49 compute-0 ceph-mon[73755]: pgmap v2114: 353 pgs: 353 active+clean; 41 MiB data, 399 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:48:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:48:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:48:50.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:48:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:48:50.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2115: 353 pgs: 353 active+clean; 41 MiB data, 399 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:48:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:48:51 compute-0 ceph-mon[73755]: pgmap v2115: 353 pgs: 353 active+clean; 41 MiB data, 399 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:48:51 compute-0 nova_compute[265391]: 2025-09-30 18:48:51.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:48:51 compute-0 nova_compute[265391]: 2025-09-30 18:48:51.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:48:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:48:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:48:52.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:48:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:48:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:48:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:48:52.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2116: 353 pgs: 353 active+clean; 41 MiB data, 399 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:48:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:48:53 compute-0 ceph-mon[73755]: pgmap v2116: 353 pgs: 353 active+clean; 41 MiB data, 399 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:48:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:48:53.925Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:48:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:48:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:48:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:48:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:48:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:48:54.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:48:54.343 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:48:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:48:54.344 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:48:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:48:54.344 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:48:54 compute-0 nova_compute[265391]: 2025-09-30 18:48:54.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:48:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:48:54.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2117: 353 pgs: 353 active+clean; 41 MiB data, 399 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:48:55 compute-0 nova_compute[265391]: 2025-09-30 18:48:55.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:48:55 compute-0 ceph-mon[73755]: pgmap v2117: 353 pgs: 353 active+clean; 41 MiB data, 399 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:48:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:48:56.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:48:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:48:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:48:56.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:48:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2118: 353 pgs: 353 active+clean; 41 MiB data, 399 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:48:56 compute-0 nova_compute[265391]: 2025-09-30 18:48:56.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:48:56 compute-0 nova_compute[265391]: 2025-09-30 18:48:56.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:48:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:48:57.396Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:48:57 compute-0 podman[360852]: 2025-09-30 18:48:57.530277337 +0000 UTC m=+0.067503230 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, container_name=iscsid)
Sep 30 18:48:57 compute-0 podman[360851]: 2025-09-30 18:48:57.533571183 +0000 UTC m=+0.075237701 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20250930, io.buildah.version=1.41.4)
Sep 30 18:48:57 compute-0 podman[360853]: 2025-09-30 18:48:57.548034377 +0000 UTC m=+0.079522222 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal, release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Sep 30 18:48:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:48:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3074177764' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:48:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:48:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3074177764' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:48:57 compute-0 ceph-mon[73755]: pgmap v2118: 353 pgs: 353 active+clean; 41 MiB data, 399 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:48:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3074177764' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:48:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3074177764' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:48:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:48:58.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:48:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:48:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:48:58.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:48:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2119: 353 pgs: 353 active+clean; 41 MiB data, 399 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:48:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/4288130120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:48:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:48:58] "GET /metrics HTTP/1.1" 200 46710 "" "Prometheus/2.51.0"
Sep 30 18:48:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:48:58] "GET /metrics HTTP/1.1" 200 46710 "" "Prometheus/2.51.0"
Sep 30 18:48:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:48:58.934Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:48:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:48:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:48:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:48:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:48:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:48:59 compute-0 nova_compute[265391]: 2025-09-30 18:48:59.136 2 DEBUG oslo_concurrency.lockutils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Acquiring lock "67db1f19-3436-4e1e-bf63-266846e1380d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:48:59 compute-0 nova_compute[265391]: 2025-09-30 18:48:59.137 2 DEBUG oslo_concurrency.lockutils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Lock "67db1f19-3436-4e1e-bf63-266846e1380d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:48:59 compute-0 nova_compute[265391]: 2025-09-30 18:48:59.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:48:59 compute-0 nova_compute[265391]: 2025-09-30 18:48:59.641 2 DEBUG nova.compute.manager [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Sep 30 18:48:59 compute-0 podman[276673]: time="2025-09-30T18:48:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:48:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:48:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:48:59 compute-0 ceph-mon[73755]: pgmap v2119: 353 pgs: 353 active+clean; 41 MiB data, 399 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:48:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:48:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10316 "" "Go-http-client/1.1"
Sep 30 18:48:59 compute-0 nova_compute[265391]: 2025-09-30 18:48:59.944 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:48:59 compute-0 nova_compute[265391]: 2025-09-30 18:48:59.944 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:48:59 compute-0 nova_compute[265391]: 2025-09-30 18:48:59.944 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:48:59 compute-0 nova_compute[265391]: 2025-09-30 18:48:59.944 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:48:59 compute-0 nova_compute[265391]: 2025-09-30 18:48:59.945 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:49:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:49:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:49:00.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:49:00 compute-0 nova_compute[265391]: 2025-09-30 18:49:00.199 2 DEBUG oslo_concurrency.lockutils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:49:00 compute-0 nova_compute[265391]: 2025-09-30 18:49:00.200 2 DEBUG oslo_concurrency.lockutils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:49:00 compute-0 nova_compute[265391]: 2025-09-30 18:49:00.206 2 DEBUG nova.virt.hardware [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Sep 30 18:49:00 compute-0 nova_compute[265391]: 2025-09-30 18:49:00.206 2 INFO nova.compute.claims [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Claim successful on node compute-0.ctlplane.example.com
Sep 30 18:49:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:49:00 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1850080561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:49:00 compute-0 nova_compute[265391]: 2025-09-30 18:49:00.443 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:49:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:49:00.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:00 compute-0 nova_compute[265391]: 2025-09-30 18:49:00.573 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:49:00 compute-0 nova_compute[265391]: 2025-09-30 18:49:00.574 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:49:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2120: 353 pgs: 353 active+clean; 41 MiB data, 399 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:49:00 compute-0 nova_compute[265391]: 2025-09-30 18:49:00.593 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.019s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:49:00 compute-0 nova_compute[265391]: 2025-09-30 18:49:00.593 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4304MB free_disk=39.992183685302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:49:00 compute-0 nova_compute[265391]: 2025-09-30 18:49:00.593 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:49:00 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2510765074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:49:00 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1850080561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:49:00 compute-0 ceph-mon[73755]: pgmap v2120: 353 pgs: 353 active+clean; 41 MiB data, 399 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:49:01 compute-0 nova_compute[265391]: 2025-09-30 18:49:01.271 2 DEBUG oslo_concurrency.processutils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:49:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:49:01 compute-0 openstack_network_exporter[279566]: ERROR   18:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:49:01 compute-0 openstack_network_exporter[279566]: ERROR   18:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:49:01 compute-0 openstack_network_exporter[279566]: ERROR   18:49:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:49:01 compute-0 openstack_network_exporter[279566]: ERROR   18:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:49:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:49:01 compute-0 openstack_network_exporter[279566]: ERROR   18:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:49:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:49:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:49:01 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1207412588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:49:01 compute-0 nova_compute[265391]: 2025-09-30 18:49:01.795 2 DEBUG oslo_concurrency.processutils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:49:01 compute-0 nova_compute[265391]: 2025-09-30 18:49:01.801 2 DEBUG nova.compute.provider_tree [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:49:01 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1207412588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:49:01 compute-0 nova_compute[265391]: 2025-09-30 18:49:01.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:49:01 compute-0 nova_compute[265391]: 2025-09-30 18:49:01.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:49:01 compute-0 nova_compute[265391]: 2025-09-30 18:49:01.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.12/site-packages/ovs/reconnect.py:117
Sep 30 18:49:01 compute-0 nova_compute[265391]: 2025-09-30 18:49:01.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.12/site-packages/ovs/reconnect.py:519
Sep 30 18:49:01 compute-0 nova_compute[265391]: 2025-09-30 18:49:01.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:49:01 compute-0 nova_compute[265391]: 2025-09-30 18:49:01.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.12/site-packages/ovs/reconnect.py:519
Sep 30 18:49:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:49:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:49:02.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:49:02 compute-0 sudo[360962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:49:02 compute-0 sudo[360962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:49:02 compute-0 sudo[360962]: pam_unix(sudo:session): session closed for user root
Sep 30 18:49:02 compute-0 nova_compute[265391]: 2025-09-30 18:49:02.312 2 DEBUG nova.scheduler.client.report [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:49:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:49:02.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2121: 353 pgs: 353 active+clean; 41 MiB data, 399 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:49:02 compute-0 ceph-mon[73755]: pgmap v2121: 353 pgs: 353 active+clean; 41 MiB data, 399 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:49:02 compute-0 nova_compute[265391]: 2025-09-30 18:49:02.827 2 DEBUG oslo_concurrency.lockutils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.627s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:49:02 compute-0 nova_compute[265391]: 2025-09-30 18:49:02.828 2 DEBUG nova.compute.manager [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Sep 30 18:49:02 compute-0 nova_compute[265391]: 2025-09-30 18:49:02.829 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 2.236s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:49:03 compute-0 nova_compute[265391]: 2025-09-30 18:49:03.342 2 DEBUG nova.compute.manager [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Sep 30 18:49:03 compute-0 nova_compute[265391]: 2025-09-30 18:49:03.342 2 DEBUG nova.network.neutron [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Sep 30 18:49:03 compute-0 nova_compute[265391]: 2025-09-30 18:49:03.343 2 WARNING neutronclient.v2_0.client [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:49:03 compute-0 nova_compute[265391]: 2025-09-30 18:49:03.343 2 WARNING neutronclient.v2_0.client [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:49:03 compute-0 nova_compute[265391]: 2025-09-30 18:49:03.852 2 DEBUG nova.network.neutron [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Successfully created port: 2dab1b31-affc-4fc3-9d5e-698d2cd44d6e _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Sep 30 18:49:03 compute-0 nova_compute[265391]: 2025-09-30 18:49:03.856 2 INFO nova.virt.libvirt.driver [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Sep 30 18:49:03 compute-0 nova_compute[265391]: 2025-09-30 18:49:03.882 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Instance 67db1f19-3436-4e1e-bf63-266846e1380d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Sep 30 18:49:03 compute-0 nova_compute[265391]: 2025-09-30 18:49:03.882 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:49:03 compute-0 nova_compute[265391]: 2025-09-30 18:49:03.882 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=39GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:49:00 up  1:52,  0 user,  load average: 0.54, 0.75, 0.82\n', 'num_instances': '1', 'num_vm_building': '1', 'num_task_None': '1', 'num_os_type_None': '1', 'num_proj_3359c464e0344756a39ce5c7088b9eba': '1', 'io_workload': '1'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:49:03 compute-0 nova_compute[265391]: 2025-09-30 18:49:03.917 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:49:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:49:03.926Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:49:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:49:03.927Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:49:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:49:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:49:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:49:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:49:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:49:04.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:49:04 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1045675100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:49:04 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:04.350 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:49:04 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:04.351 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:49:04 compute-0 nova_compute[265391]: 2025-09-30 18:49:04.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:49:04 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:04.352 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:49:04 compute-0 nova_compute[265391]: 2025-09-30 18:49:04.356 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:49:04 compute-0 nova_compute[265391]: 2025-09-30 18:49:04.361 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:49:04 compute-0 nova_compute[265391]: 2025-09-30 18:49:04.371 2 DEBUG nova.compute.manager [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Sep 30 18:49:04 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1045675100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:49:04 compute-0 nova_compute[265391]: 2025-09-30 18:49:04.455 2 DEBUG nova.network.neutron [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Successfully updated port: 2dab1b31-affc-4fc3-9d5e-698d2cd44d6e _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Sep 30 18:49:04 compute-0 nova_compute[265391]: 2025-09-30 18:49:04.515 2 DEBUG nova.compute.manager [req-5060b6cb-0757-48e0-9a61-5755c1f5ba95 req-6eba6a07-f1ce-40ce-8e94-071f05e8168e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Received event network-changed-2dab1b31-affc-4fc3-9d5e-698d2cd44d6e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:49:04 compute-0 nova_compute[265391]: 2025-09-30 18:49:04.515 2 DEBUG nova.compute.manager [req-5060b6cb-0757-48e0-9a61-5755c1f5ba95 req-6eba6a07-f1ce-40ce-8e94-071f05e8168e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Refreshing instance network info cache due to event network-changed-2dab1b31-affc-4fc3-9d5e-698d2cd44d6e. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:49:04 compute-0 nova_compute[265391]: 2025-09-30 18:49:04.515 2 DEBUG oslo_concurrency.lockutils [req-5060b6cb-0757-48e0-9a61-5755c1f5ba95 req-6eba6a07-f1ce-40ce-8e94-071f05e8168e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-67db1f19-3436-4e1e-bf63-266846e1380d" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:49:04 compute-0 nova_compute[265391]: 2025-09-30 18:49:04.516 2 DEBUG oslo_concurrency.lockutils [req-5060b6cb-0757-48e0-9a61-5755c1f5ba95 req-6eba6a07-f1ce-40ce-8e94-071f05e8168e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-67db1f19-3436-4e1e-bf63-266846e1380d" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:49:04 compute-0 nova_compute[265391]: 2025-09-30 18:49:04.516 2 DEBUG nova.network.neutron [req-5060b6cb-0757-48e0-9a61-5755c1f5ba95 req-6eba6a07-f1ce-40ce-8e94-071f05e8168e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Refreshing network info cache for port 2dab1b31-affc-4fc3-9d5e-698d2cd44d6e _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:49:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:49:04.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2122: 353 pgs: 353 active+clean; 41 MiB data, 399 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:49:04 compute-0 nova_compute[265391]: 2025-09-30 18:49:04.871 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:49:04 compute-0 nova_compute[265391]: 2025-09-30 18:49:04.961 2 DEBUG oslo_concurrency.lockutils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Acquiring lock "refresh_cache-67db1f19-3436-4e1e-bf63-266846e1380d" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:49:05 compute-0 nova_compute[265391]: 2025-09-30 18:49:05.022 2 WARNING neutronclient.v2_0.client [req-5060b6cb-0757-48e0-9a61-5755c1f5ba95 req-6eba6a07-f1ce-40ce-8e94-071f05e8168e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:49:05 compute-0 nova_compute[265391]: 2025-09-30 18:49:05.387 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:49:05 compute-0 nova_compute[265391]: 2025-09-30 18:49:05.388 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.558s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:49:05 compute-0 nova_compute[265391]: 2025-09-30 18:49:05.389 2 DEBUG nova.compute.manager [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Sep 30 18:49:05 compute-0 nova_compute[265391]: 2025-09-30 18:49:05.391 2 DEBUG nova.virt.libvirt.driver [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Sep 30 18:49:05 compute-0 nova_compute[265391]: 2025-09-30 18:49:05.391 2 INFO nova.virt.libvirt.driver [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Creating image(s)
Sep 30 18:49:05 compute-0 ceph-mon[73755]: pgmap v2122: 353 pgs: 353 active+clean; 41 MiB data, 399 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:49:05 compute-0 nova_compute[265391]: 2025-09-30 18:49:05.420 2 DEBUG nova.storage.rbd_utils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] rbd image 67db1f19-3436-4e1e-bf63-266846e1380d_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:49:05 compute-0 nova_compute[265391]: 2025-09-30 18:49:05.443 2 DEBUG nova.storage.rbd_utils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] rbd image 67db1f19-3436-4e1e-bf63-266846e1380d_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:49:05 compute-0 nova_compute[265391]: 2025-09-30 18:49:05.468 2 DEBUG nova.storage.rbd_utils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] rbd image 67db1f19-3436-4e1e-bf63-266846e1380d_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:49:05 compute-0 nova_compute[265391]: 2025-09-30 18:49:05.471 2 DEBUG oslo_concurrency.processutils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:49:05 compute-0 nova_compute[265391]: 2025-09-30 18:49:05.554 2 DEBUG oslo_concurrency.processutils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:49:05 compute-0 nova_compute[265391]: 2025-09-30 18:49:05.555 2 DEBUG oslo_concurrency.lockutils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Acquiring lock "cb2d580238c9b109feae7f1462613dc547671457" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:49:05 compute-0 nova_compute[265391]: 2025-09-30 18:49:05.556 2 DEBUG oslo_concurrency.lockutils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:49:05 compute-0 nova_compute[265391]: 2025-09-30 18:49:05.556 2 DEBUG oslo_concurrency.lockutils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Lock "cb2d580238c9b109feae7f1462613dc547671457" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:49:05 compute-0 nova_compute[265391]: 2025-09-30 18:49:05.578 2 DEBUG nova.storage.rbd_utils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] rbd image 67db1f19-3436-4e1e-bf63-266846e1380d_disk does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:49:05 compute-0 nova_compute[265391]: 2025-09-30 18:49:05.582 2 DEBUG oslo_concurrency.processutils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 67db1f19-3436-4e1e-bf63-266846e1380d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:49:05 compute-0 nova_compute[265391]: 2025-09-30 18:49:05.852 2 DEBUG oslo_concurrency.processutils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cb2d580238c9b109feae7f1462613dc547671457 67db1f19-3436-4e1e-bf63-266846e1380d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.270s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:49:05 compute-0 nova_compute[265391]: 2025-09-30 18:49:05.919 2 DEBUG nova.storage.rbd_utils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] resizing rbd image 67db1f19-3436-4e1e-bf63-266846e1380d_disk to 1073741824 resize /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:288
Sep 30 18:49:06 compute-0 nova_compute[265391]: 2025-09-30 18:49:06.030 2 DEBUG nova.virt.libvirt.driver [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Sep 30 18:49:06 compute-0 nova_compute[265391]: 2025-09-30 18:49:06.031 2 DEBUG nova.virt.libvirt.driver [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Ensure instance console log exists: /var/lib/nova/instances/67db1f19-3436-4e1e-bf63-266846e1380d/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Sep 30 18:49:06 compute-0 nova_compute[265391]: 2025-09-30 18:49:06.032 2 DEBUG oslo_concurrency.lockutils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:49:06 compute-0 nova_compute[265391]: 2025-09-30 18:49:06.032 2 DEBUG oslo_concurrency.lockutils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:49:06 compute-0 nova_compute[265391]: 2025-09-30 18:49:06.033 2 DEBUG oslo_concurrency.lockutils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:49:06 compute-0 nova_compute[265391]: 2025-09-30 18:49:06.072 2 DEBUG nova.network.neutron [req-5060b6cb-0757-48e0-9a61-5755c1f5ba95 req-6eba6a07-f1ce-40ce-8e94-071f05e8168e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:49:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:49:06.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:49:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:49:06.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2123: 353 pgs: 353 active+clean; 41 MiB data, 399 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:49:07 compute-0 nova_compute[265391]: 2025-09-30 18:49:07.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:49:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:49:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:49:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:49:07.396Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:49:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:49:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:49:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:49:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:49:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:49:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:49:07 compute-0 nova_compute[265391]: 2025-09-30 18:49:07.478 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:49:07 compute-0 nova_compute[265391]: 2025-09-30 18:49:07.478 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:49:07 compute-0 nova_compute[265391]: 2025-09-30 18:49:07.478 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:49:07 compute-0 nova_compute[265391]: 2025-09-30 18:49:07.479 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:49:07 compute-0 nova_compute[265391]: 2025-09-30 18:49:07.479 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:49:07 compute-0 nova_compute[265391]: 2025-09-30 18:49:07.511 2 DEBUG nova.network.neutron [req-5060b6cb-0757-48e0-9a61-5755c1f5ba95 req-6eba6a07-f1ce-40ce-8e94-071f05e8168e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:49:07 compute-0 ceph-mon[73755]: pgmap v2123: 353 pgs: 353 active+clean; 41 MiB data, 399 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:49:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:49:08 compute-0 nova_compute[265391]: 2025-09-30 18:49:08.021 2 DEBUG oslo_concurrency.lockutils [req-5060b6cb-0757-48e0-9a61-5755c1f5ba95 req-6eba6a07-f1ce-40ce-8e94-071f05e8168e 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-67db1f19-3436-4e1e-bf63-266846e1380d" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:49:08 compute-0 nova_compute[265391]: 2025-09-30 18:49:08.022 2 DEBUG oslo_concurrency.lockutils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Acquired lock "refresh_cache-67db1f19-3436-4e1e-bf63-266846e1380d" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:49:08 compute-0 nova_compute[265391]: 2025-09-30 18:49:08.022 2 DEBUG nova.network.neutron [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Sep 30 18:49:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:49:08.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:49:08
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', '.nfs', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'images', 'vms', 'default.rgw.log', '.rgw.root', 'volumes']
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:49:08 compute-0 nova_compute[265391]: 2025-09-30 18:49:08.425 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:49:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:49:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:49:08.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2124: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:49:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:49:08] "GET /metrics HTTP/1.1" 200 46704 "" "Prometheus/2.51.0"
Sep 30 18:49:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:49:08] "GET /metrics HTTP/1.1" 200 46704 "" "Prometheus/2.51.0"
Sep 30 18:49:08 compute-0 nova_compute[265391]: 2025-09-30 18:49:08.833 2 DEBUG nova.network.neutron [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Sep 30 18:49:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:49:08.934Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:49:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:49:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:49:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:49:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:49:09 compute-0 nova_compute[265391]: 2025-09-30 18:49:09.219 2 WARNING neutronclient.v2_0.client [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:49:09 compute-0 nova_compute[265391]: 2025-09-30 18:49:09.476 2 DEBUG nova.network.neutron [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Updating instance_info_cache with network_info: [{"id": "2dab1b31-affc-4fc3-9d5e-698d2cd44d6e", "address": "fa:16:3e:2d:d1:75", "network": {"id": "f4658d55-a8f9-48f1-846d-61df3d830821", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2093820932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67cbb3b670e445a4b97abcc92749d126", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dab1b31-af", "ovs_interfaceid": "2dab1b31-affc-4fc3-9d5e-698d2cd44d6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Sep 30 18:49:09 compute-0 ceph-mon[73755]: pgmap v2124: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:49:09 compute-0 nova_compute[265391]: 2025-09-30 18:49:09.986 2 DEBUG oslo_concurrency.lockutils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Releasing lock "refresh_cache-67db1f19-3436-4e1e-bf63-266846e1380d" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:49:09 compute-0 nova_compute[265391]: 2025-09-30 18:49:09.987 2 DEBUG nova.compute.manager [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Instance network_info: |[{"id": "2dab1b31-affc-4fc3-9d5e-698d2cd44d6e", "address": "fa:16:3e:2d:d1:75", "network": {"id": "f4658d55-a8f9-48f1-846d-61df3d830821", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2093820932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67cbb3b670e445a4b97abcc92749d126", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dab1b31-af", "ovs_interfaceid": "2dab1b31-affc-4fc3-9d5e-698d2cd44d6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Sep 30 18:49:09 compute-0 nova_compute[265391]: 2025-09-30 18:49:09.990 2 DEBUG nova.virt.libvirt.driver [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Start _get_guest_xml network_info=[{"id": "2dab1b31-affc-4fc3-9d5e-698d2cd44d6e", "address": "fa:16:3e:2d:d1:75", "network": {"id": "f4658d55-a8f9-48f1-846d-61df3d830821", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2093820932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67cbb3b670e445a4b97abcc92749d126", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dab1b31-af", "ovs_interfaceid": "2dab1b31-affc-4fc3-9d5e-698d2cd44d6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'image_id': '5b99cbca-b655-4be5-8343-cf504005c42e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Sep 30 18:49:09 compute-0 nova_compute[265391]: 2025-09-30 18:49:09.996 2 WARNING nova.virt.libvirt.driver [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:49:10 compute-0 nova_compute[265391]: 2025-09-30 18:49:09.999 2 DEBUG nova.virt.driver [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='5b99cbca-b655-4be5-8343-cf504005c42e', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteZoneMigrationStrategy-server-983808530', uuid='67db1f19-3436-4e1e-bf63-266846e1380d'), owner=OwnerMeta(userid='f560266d133f4f1ba4a908e3cdcfa59d', username='tempest-TestExecuteZoneMigrationStrategy-613400940-project-admin', projectid='3359c464e0344756a39ce5c7088b9eba', projectname='tempest-TestExecuteZoneMigrationStrategy-613400940'), image=ImageMeta(id='5b99cbca-b655-4be5-8343-cf504005c42e', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "2dab1b31-affc-4fc3-9d5e-698d2cd44d6e", "address": "fa:16:3e:2d:d1:75", "network": {"id": "f4658d55-a8f9-48f1-846d-61df3d830821", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2093820932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67cbb3b670e445a4b97abcc92749d126", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dab1b31-af", "ovs_interfaceid": "2dab1b31-affc-4fc3-9d5e-698d2cd44d6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20250919142712.b99a882.el10', creation_time=1759258149.9996197) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Sep 30 18:49:10 compute-0 nova_compute[265391]: 2025-09-30 18:49:10.004 2 DEBUG nova.virt.libvirt.host [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Sep 30 18:49:10 compute-0 nova_compute[265391]: 2025-09-30 18:49:10.005 2 DEBUG nova.virt.libvirt.host [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Sep 30 18:49:10 compute-0 nova_compute[265391]: 2025-09-30 18:49:10.008 2 DEBUG nova.virt.libvirt.host [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Sep 30 18:49:10 compute-0 nova_compute[265391]: 2025-09-30 18:49:10.009 2 DEBUG nova.virt.libvirt.host [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Sep 30 18:49:10 compute-0 nova_compute[265391]: 2025-09-30 18:49:10.010 2 DEBUG nova.virt.libvirt.driver [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Sep 30 18:49:10 compute-0 nova_compute[265391]: 2025-09-30 18:49:10.010 2 DEBUG nova.virt.hardware [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-09-30T18:04:50Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c83dc7f1-0795-47db-adcb-fb90be11684a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T18:04:51Z,direct_url=<?>,disk_format='qcow2',id=5b99cbca-b655-4be5-8343-cf504005c42e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e2dde567e5c4b1c9802c64cfc281b6d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T18:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Sep 30 18:49:10 compute-0 nova_compute[265391]: 2025-09-30 18:49:10.011 2 DEBUG nova.virt.hardware [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Sep 30 18:49:10 compute-0 nova_compute[265391]: 2025-09-30 18:49:10.011 2 DEBUG nova.virt.hardware [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Sep 30 18:49:10 compute-0 nova_compute[265391]: 2025-09-30 18:49:10.012 2 DEBUG nova.virt.hardware [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Sep 30 18:49:10 compute-0 nova_compute[265391]: 2025-09-30 18:49:10.012 2 DEBUG nova.virt.hardware [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Sep 30 18:49:10 compute-0 nova_compute[265391]: 2025-09-30 18:49:10.012 2 DEBUG nova.virt.hardware [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Sep 30 18:49:10 compute-0 nova_compute[265391]: 2025-09-30 18:49:10.013 2 DEBUG nova.virt.hardware [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Sep 30 18:49:10 compute-0 nova_compute[265391]: 2025-09-30 18:49:10.013 2 DEBUG nova.virt.hardware [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Sep 30 18:49:10 compute-0 nova_compute[265391]: 2025-09-30 18:49:10.014 2 DEBUG nova.virt.hardware [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Sep 30 18:49:10 compute-0 nova_compute[265391]: 2025-09-30 18:49:10.014 2 DEBUG nova.virt.hardware [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Sep 30 18:49:10 compute-0 nova_compute[265391]: 2025-09-30 18:49:10.015 2 DEBUG nova.virt.hardware [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Sep 30 18:49:10 compute-0 nova_compute[265391]: 2025-09-30 18:49:10.020 2 DEBUG oslo_concurrency.processutils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:49:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:49:10.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:49:10 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4196521684' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:49:10 compute-0 nova_compute[265391]: 2025-09-30 18:49:10.481 2 DEBUG oslo_concurrency.processutils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:49:10 compute-0 nova_compute[265391]: 2025-09-30 18:49:10.507 2 DEBUG nova.storage.rbd_utils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] rbd image 67db1f19-3436-4e1e-bf63-266846e1380d_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:49:10 compute-0 nova_compute[265391]: 2025-09-30 18:49:10.511 2 DEBUG oslo_concurrency.processutils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:49:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:49:10.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2125: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:49:10 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/4196521684' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:49:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:49:10 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1626535892' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:49:10 compute-0 nova_compute[265391]: 2025-09-30 18:49:10.958 2 DEBUG oslo_concurrency.processutils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:49:10 compute-0 nova_compute[265391]: 2025-09-30 18:49:10.961 2 DEBUG nova.virt.libvirt.vif [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:48:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteZoneMigrationStrategy-server-983808530',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutezonemigrationstrategy-server-983808530',id=38,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3359c464e0344756a39ce5c7088b9eba',ramdisk_id='',reservation_id='r-wltr50iq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteZoneMigrationStrategy-613400940',owner_user_name='tempest-TestExecuteZoneMigrationStrategy-613400940-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:49:04Z,user_data=None,user_id='f560266d133f4f1ba4a908e3cdcfa59d',uuid=67db1f19-3436-4e1e-bf63-266846e1380d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2dab1b31-affc-4fc3-9d5e-698d2cd44d6e", "address": "fa:16:3e:2d:d1:75", "network": {"id": "f4658d55-a8f9-48f1-846d-61df3d830821", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2093820932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67cbb3b670e445a4b97abcc92749d126", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dab1b31-af", "ovs_interfaceid": "2dab1b31-affc-4fc3-9d5e-698d2cd44d6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:49:10 compute-0 nova_compute[265391]: 2025-09-30 18:49:10.962 2 DEBUG nova.network.os_vif_util [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Converting VIF {"id": "2dab1b31-affc-4fc3-9d5e-698d2cd44d6e", "address": "fa:16:3e:2d:d1:75", "network": {"id": "f4658d55-a8f9-48f1-846d-61df3d830821", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2093820932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67cbb3b670e445a4b97abcc92749d126", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dab1b31-af", "ovs_interfaceid": "2dab1b31-affc-4fc3-9d5e-698d2cd44d6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:49:10 compute-0 nova_compute[265391]: 2025-09-30 18:49:10.963 2 DEBUG nova.network.os_vif_util [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:d1:75,bridge_name='br-int',has_traffic_filtering=True,id=2dab1b31-affc-4fc3-9d5e-698d2cd44d6e,network=Network(f4658d55-a8f9-48f1-846d-61df3d830821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2dab1b31-af') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:49:10 compute-0 nova_compute[265391]: 2025-09-30 18:49:10.965 2 DEBUG nova.objects.instance [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Lazy-loading 'pci_devices' on Instance uuid 67db1f19-3436-4e1e-bf63-266846e1380d obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:49:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:49:11 compute-0 nova_compute[265391]: 2025-09-30 18:49:11.480 2 DEBUG nova.virt.libvirt.driver [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] End _get_guest_xml xml=<domain type="kvm">
Sep 30 18:49:11 compute-0 nova_compute[265391]:   <uuid>67db1f19-3436-4e1e-bf63-266846e1380d</uuid>
Sep 30 18:49:11 compute-0 nova_compute[265391]:   <name>instance-00000026</name>
Sep 30 18:49:11 compute-0 nova_compute[265391]:   <memory>131072</memory>
Sep 30 18:49:11 compute-0 nova_compute[265391]:   <vcpu>1</vcpu>
Sep 30 18:49:11 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:49:11 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteZoneMigrationStrategy-server-983808530</nova:name>
Sep 30 18:49:11 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:49:09</nova:creationTime>
Sep 30 18:49:11 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:49:11 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:49:11 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:49:11 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:49:11 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:49:11 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:49:11 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:49:11 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:49:11 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:49:11 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:49:11 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:49:11 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:49:11 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:49:11 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:49:11 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:49:11 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:49:11 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:49:11 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:49:11 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:49:11 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:49:11 compute-0 nova_compute[265391]:         <nova:user uuid="f560266d133f4f1ba4a908e3cdcfa59d">tempest-TestExecuteZoneMigrationStrategy-613400940-project-admin</nova:user>
Sep 30 18:49:11 compute-0 nova_compute[265391]:         <nova:project uuid="3359c464e0344756a39ce5c7088b9eba">tempest-TestExecuteZoneMigrationStrategy-613400940</nova:project>
Sep 30 18:49:11 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:49:11 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:49:11 compute-0 nova_compute[265391]:         <nova:port uuid="2dab1b31-affc-4fc3-9d5e-698d2cd44d6e">
Sep 30 18:49:11 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:49:11 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:49:11 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:49:11 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <system>
Sep 30 18:49:11 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:49:11 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:49:11 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:49:11 compute-0 nova_compute[265391]:       <entry name="serial">67db1f19-3436-4e1e-bf63-266846e1380d</entry>
Sep 30 18:49:11 compute-0 nova_compute[265391]:       <entry name="uuid">67db1f19-3436-4e1e-bf63-266846e1380d</entry>
Sep 30 18:49:11 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     </system>
Sep 30 18:49:11 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:49:11 compute-0 nova_compute[265391]:   <os>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="q35">hvm</type>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:   </os>
Sep 30 18:49:11 compute-0 nova_compute[265391]:   <features>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <vmcoreinfo/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:   </features>
Sep 30 18:49:11 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:49:11 compute-0 nova_compute[265391]:   <cpu mode="host-model" match="exact">
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <topology sockets="1" cores="1" threads="1"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:49:11 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:49:11 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/67db1f19-3436-4e1e-bf63-266846e1380d_disk">
Sep 30 18:49:11 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:       </source>
Sep 30 18:49:11 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:49:11 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:49:11 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:49:11 compute-0 nova_compute[265391]:       <driver type="raw" cache="none"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/67db1f19-3436-4e1e-bf63-266846e1380d_disk.config">
Sep 30 18:49:11 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:       </source>
Sep 30 18:49:11 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:49:11 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:49:11 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:49:11 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:2d:d1:75"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:       <target dev="tap2dab1b31-af"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:49:11 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/67db1f19-3436-4e1e-bf63-266846e1380d/console.log" append="off"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <video>
Sep 30 18:49:11 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     </video>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:49:11 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <controller type="usb" index="0"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:49:11 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:49:11 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:49:11 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:49:11 compute-0 nova_compute[265391]: </domain>
Sep 30 18:49:11 compute-0 nova_compute[265391]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
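Editorial note, not part of the captured log: the guest XML dumped above can be inspected offline with Python's standard library. A minimal sketch, assuming a local copy of that XML saved under a hypothetical filename; the element paths follow the <devices>/<disk> layout shown in the log.

```python
# Minimal sketch: list the disk sources from a saved copy of the domain XML above.
# "instance-00000026.xml" is a hypothetical local copy, not a file from the log.
import xml.etree.ElementTree as ET

tree = ET.parse("instance-00000026.xml")
for disk in tree.findall("./devices/disk"):
    source = disk.find("source")
    target = disk.find("target")
    if source is None or target is None:
        continue
    hosts = [h.get("name") for h in source.findall("host")]
    print(target.get("dev"),        # e.g. vda / sda
          disk.get("type"),         # e.g. network
          source.get("protocol"),   # e.g. rbd
          source.get("name"),       # e.g. vms/67db1f19-..._disk
          hosts)                    # e.g. ['192.168.122.100', '192.168.122.101']
```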
Sep 30 18:49:11 compute-0 nova_compute[265391]: 2025-09-30 18:49:11.481 2 DEBUG nova.compute.manager [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Preparing to wait for external event network-vif-plugged-2dab1b31-affc-4fc3-9d5e-698d2cd44d6e prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:49:11 compute-0 nova_compute[265391]: 2025-09-30 18:49:11.482 2 DEBUG oslo_concurrency.lockutils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Acquiring lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:49:11 compute-0 nova_compute[265391]: 2025-09-30 18:49:11.482 2 DEBUG oslo_concurrency.lockutils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:49:11 compute-0 nova_compute[265391]: 2025-09-30 18:49:11.482 2 DEBUG oslo_concurrency.lockutils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:49:11 compute-0 nova_compute[265391]: 2025-09-30 18:49:11.484 2 DEBUG nova.virt.libvirt.vif [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='',created_at=2025-09-30T18:48:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteZoneMigrationStrategy-server-983808530',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutezonemigrationstrategy-server-983808530',id=38,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3359c464e0344756a39ce5c7088b9eba',ramdisk_id='',reservation_id='r-wltr50iq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteZoneMigrationStrategy-613400940',owner_user_name='tempest-TestExecuteZoneMigrationStrategy-613400940-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T18:49:04Z,user_data=None,user_id='f560266d133f4f1ba4a908e3cdcfa59d',uuid=67db1f19-3436-4e1e-bf63-266846e1380d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2dab1b31-affc-4fc3-9d5e-698d2cd44d6e", "address": "fa:16:3e:2d:d1:75", "network": {"id": "f4658d55-a8f9-48f1-846d-61df3d830821", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2093820932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67cbb3b670e445a4b97abcc92749d126", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dab1b31-af", "ovs_interfaceid": "2dab1b31-affc-4fc3-9d5e-698d2cd44d6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Sep 30 18:49:11 compute-0 nova_compute[265391]: 2025-09-30 18:49:11.484 2 DEBUG nova.network.os_vif_util [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Converting VIF {"id": "2dab1b31-affc-4fc3-9d5e-698d2cd44d6e", "address": "fa:16:3e:2d:d1:75", "network": {"id": "f4658d55-a8f9-48f1-846d-61df3d830821", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2093820932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67cbb3b670e445a4b97abcc92749d126", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dab1b31-af", "ovs_interfaceid": "2dab1b31-affc-4fc3-9d5e-698d2cd44d6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:49:11 compute-0 nova_compute[265391]: 2025-09-30 18:49:11.485 2 DEBUG nova.network.os_vif_util [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:d1:75,bridge_name='br-int',has_traffic_filtering=True,id=2dab1b31-affc-4fc3-9d5e-698d2cd44d6e,network=Network(f4658d55-a8f9-48f1-846d-61df3d830821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2dab1b31-af') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:49:11 compute-0 nova_compute[265391]: 2025-09-30 18:49:11.485 2 DEBUG os_vif [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:d1:75,bridge_name='br-int',has_traffic_filtering=True,id=2dab1b31-affc-4fc3-9d5e-698d2cd44d6e,network=Network(f4658d55-a8f9-48f1-846d-61df3d830821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2dab1b31-af') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
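Editorial note, not part of the captured log: the os_vif "Plugging vif ..." step above uses os-vif's small public API. The sketch below is a hedged approximation of driving that API directly with the values from the log; it assumes the os-vif package and a reachable local OVSDB, trims the field set, and the exact constructor arguments should be treated as assumptions rather than a verified recipe for this release.

```python
# Hedged sketch only: plug a VIFOpenVSwitch like the one logged above via os-vif.
# Field names follow the repr in the log; not verified against this exact release.
import os_vif
from os_vif import objects

os_vif.initialize()  # loads the 'ovs' plugin among others

instance_info = objects.instance_info.InstanceInfo(
    uuid="67db1f19-3436-4e1e-bf63-266846e1380d",
    name="tempest-TestExecuteZoneMigrationStrategy-server-983808530")

vif = objects.vif.VIFOpenVSwitch(
    id="2dab1b31-affc-4fc3-9d5e-698d2cd44d6e",
    address="fa:16:3e:2d:d1:75",
    vif_name="tap2dab1b31-af",
    bridge_name="br-int",
    plugin="ovs",
    has_traffic_filtering=True,
    preserve_on_delete=False)

os_vif.plug(vif, instance_info)  # mirrors the "Plugging vif ..." line above
```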
Sep 30 18:49:11 compute-0 nova_compute[265391]: 2025-09-30 18:49:11.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:49:11 compute-0 nova_compute[265391]: 2025-09-30 18:49:11.487 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:49:11 compute-0 nova_compute[265391]: 2025-09-30 18:49:11.488 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:49:11 compute-0 nova_compute[265391]: 2025-09-30 18:49:11.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:49:11 compute-0 nova_compute[265391]: 2025-09-30 18:49:11.489 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '94ffd401-e8a1-59ae-b94d-40c8e86307d3', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:49:11 compute-0 nova_compute[265391]: 2025-09-30 18:49:11.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:49:11 compute-0 nova_compute[265391]: 2025-09-30 18:49:11.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:49:11 compute-0 nova_compute[265391]: 2025-09-30 18:49:11.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:49:11 compute-0 nova_compute[265391]: 2025-09-30 18:49:11.533 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2dab1b31-af, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:49:11 compute-0 nova_compute[265391]: 2025-09-30 18:49:11.533 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap2dab1b31-af, col_values=(('qos', UUID('412dca28-9826-4b12-b810-8d317e295575')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:49:11 compute-0 nova_compute[265391]: 2025-09-30 18:49:11.533 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap2dab1b31-af, col_values=(('external_ids', {'iface-id': '2dab1b31-affc-4fc3-9d5e-698d2cd44d6e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2d:d1:75', 'vm-uuid': '67db1f19-3436-4e1e-bf63-266846e1380d'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:49:11 compute-0 NetworkManager[45059]: <info>  [1759258151.5354] manager: (tap2dab1b31-af): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/121)
Sep 30 18:49:11 compute-0 nova_compute[265391]: 2025-09-30 18:49:11.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:49:11 compute-0 nova_compute[265391]: 2025-09-30 18:49:11.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:49:11 compute-0 nova_compute[265391]: 2025-09-30 18:49:11.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:49:11 compute-0 nova_compute[265391]: 2025-09-30 18:49:11.542 2 INFO os_vif [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:d1:75,bridge_name='br-int',has_traffic_filtering=True,id=2dab1b31-affc-4fc3-9d5e-698d2cd44d6e,network=Network(f4658d55-a8f9-48f1-846d-61df3d830821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2dab1b31-af')
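Editorial note, not part of the captured log: the ovsdbapp transaction logged at 18:49:11.533 (AddPortCommand plus the DbSetCommand on the Interface) corresponds to a single ovs-vsctl invocation. A minimal sketch, issued from Python only to keep the examples in one language; it assumes ovs-vsctl on PATH, copies the values from the log, and omits the QoS column update since that referenced a row UUID created in the same run.

```python
# Minimal sketch: ovs-vsctl equivalent of the logged transaction that adds
# tap2dab1b31-af to br-int and tags the Interface with the Neutron port metadata.
import subprocess

port = "tap2dab1b31-af"
subprocess.run(
    ["ovs-vsctl",
     "--may-exist", "add-port", "br-int", port, "--",
     "set", "Interface", port,
     "external_ids:iface-id=2dab1b31-affc-4fc3-9d5e-698d2cd44d6e",
     "external_ids:iface-status=active",
     "external_ids:attached-mac=fa:16:3e:2d:d1:75",
     "external_ids:vm-uuid=67db1f19-3436-4e1e-bf63-266846e1380d"],
    check=True)
```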
Sep 30 18:49:11 compute-0 ceph-mon[73755]: pgmap v2125: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:49:11 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1626535892' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:49:12 compute-0 nova_compute[265391]: 2025-09-30 18:49:12.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:49:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:49:12.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2126: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:49:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:49:12.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:13 compute-0 nova_compute[265391]: 2025-09-30 18:49:13.082 2 DEBUG nova.virt.libvirt.driver [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:49:13 compute-0 nova_compute[265391]: 2025-09-30 18:49:13.083 2 DEBUG nova.virt.libvirt.driver [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Sep 30 18:49:13 compute-0 nova_compute[265391]: 2025-09-30 18:49:13.083 2 DEBUG nova.virt.libvirt.driver [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] No VIF found with MAC fa:16:3e:2d:d1:75, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Sep 30 18:49:13 compute-0 nova_compute[265391]: 2025-09-30 18:49:13.084 2 INFO nova.virt.libvirt.driver [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Using config drive
Sep 30 18:49:13 compute-0 nova_compute[265391]: 2025-09-30 18:49:13.122 2 DEBUG nova.storage.rbd_utils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] rbd image 67db1f19-3436-4e1e-bf63-266846e1380d_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:49:13 compute-0 nova_compute[265391]: 2025-09-30 18:49:13.641 2 WARNING neutronclient.v2_0.client [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:49:13 compute-0 ceph-mon[73755]: pgmap v2126: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:49:13 compute-0 nova_compute[265391]: 2025-09-30 18:49:13.879 2 INFO nova.virt.libvirt.driver [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Creating config drive at /var/lib/nova/instances/67db1f19-3436-4e1e-bf63-266846e1380d/disk.config
Sep 30 18:49:13 compute-0 nova_compute[265391]: 2025-09-30 18:49:13.889 2 DEBUG oslo_concurrency.processutils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/67db1f19-3436-4e1e-bf63-266846e1380d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmpxwb_nzuq execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:49:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:49:13.927Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:49:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:49:13.928Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:49:13 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Sep 30 18:49:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:49:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:49:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:49:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:49:14 compute-0 nova_compute[265391]: 2025-09-30 18:49:14.015 2 DEBUG oslo_concurrency.processutils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/67db1f19-3436-4e1e-bf63-266846e1380d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10 -quiet -J -r -V config-2 /tmp/tmpxwb_nzuq" returned: 0 in 0.126s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
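Editorial note, not part of the captured log: the config-drive ISO is built with the mkisofs command logged above (the publisher string only looks unquoted because oslo joins the argument list with spaces). A minimal sketch reproducing that call; the staging directory /tmp/tmpxwb_nzuq was a temporary directory created during that run and will not exist elsewhere.

```python
# Minimal sketch: rebuild the config-drive ISO the way the logged mkisofs command does.
# Paths and flags are copied verbatim from the log lines above.
import subprocess

subprocess.run(
    ["/usr/bin/mkisofs",
     "-o", "/var/lib/nova/instances/67db1f19-3436-4e1e-bf63-266846e1380d/disk.config",
     "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
     "-publisher", "OpenStack Compute 32.1.0-0.20250919142712.b99a882.el10",
     "-quiet", "-J", "-r", "-V", "config-2",
     "/tmp/tmpxwb_nzuq"],
    check=True)
```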
Sep 30 18:49:14 compute-0 podman[361275]: 2025-09-30 18:49:14.030024221 +0000 UTC m=+0.061477525 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:49:14 compute-0 podman[361277]: 2025-09-30 18:49:14.043312705 +0000 UTC m=+0.066268239 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Sep 30 18:49:14 compute-0 nova_compute[265391]: 2025-09-30 18:49:14.050 2 DEBUG nova.storage.rbd_utils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] rbd image 67db1f19-3436-4e1e-bf63-266846e1380d_disk.config does not exist __init__ /usr/lib/python3.12/site-packages/nova/storage/rbd_utils.py:80
Sep 30 18:49:14 compute-0 nova_compute[265391]: 2025-09-30 18:49:14.054 2 DEBUG oslo_concurrency.processutils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/67db1f19-3436-4e1e-bf63-266846e1380d/disk.config 67db1f19-3436-4e1e-bf63-266846e1380d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:49:14 compute-0 podman[361276]: 2025-09-30 18:49:14.082118301 +0000 UTC m=+0.108481973 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, config_id=ovn_controller, org.label-schema.license=GPLv2)
Sep 30 18:49:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:49:14.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:14 compute-0 nova_compute[265391]: 2025-09-30 18:49:14.221 2 DEBUG oslo_concurrency.processutils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/67db1f19-3436-4e1e-bf63-266846e1380d/disk.config 67db1f19-3436-4e1e-bf63-266846e1380d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:49:14 compute-0 nova_compute[265391]: 2025-09-30 18:49:14.221 2 INFO nova.virt.libvirt.driver [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Deleting local config drive /var/lib/nova/instances/67db1f19-3436-4e1e-bf63-266846e1380d/disk.config because it was imported into RBD.
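Editorial note, not part of the captured log: the import of the local config drive into the vms pool, followed by deletion of the local copy, can be reproduced with the same rbd CLI call. A minimal sketch, assuming /etc/ceph/ceph.conf and the client.openstack credentials used in the log are available on the host.

```python
# Minimal sketch: the rbd import nova logged above, then removal of the local copy.
import os
import subprocess

local = "/var/lib/nova/instances/67db1f19-3436-4e1e-bf63-266846e1380d/disk.config"
subprocess.run(
    ["rbd", "import", "--pool", "vms", local,
     "67db1f19-3436-4e1e-bf63-266846e1380d_disk.config",
     "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True)
os.remove(local)  # nova deletes the local file once the import succeeds, as logged
```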
Sep 30 18:49:14 compute-0 systemd[1]: Starting libvirt secret daemon...
Sep 30 18:49:14 compute-0 systemd[1]: Started libvirt secret daemon.
Sep 30 18:49:14 compute-0 kernel: tap2dab1b31-af: entered promiscuous mode
Sep 30 18:49:14 compute-0 NetworkManager[45059]: <info>  [1759258154.3266] manager: (tap2dab1b31-af): new Tun device (/org/freedesktop/NetworkManager/Devices/122)
Sep 30 18:49:14 compute-0 ovn_controller[156242]: 2025-09-30T18:49:14Z|00317|binding|INFO|Claiming lport 2dab1b31-affc-4fc3-9d5e-698d2cd44d6e for this chassis.
Sep 30 18:49:14 compute-0 ovn_controller[156242]: 2025-09-30T18:49:14Z|00318|binding|INFO|2dab1b31-affc-4fc3-9d5e-698d2cd44d6e: Claiming fa:16:3e:2d:d1:75 10.100.0.6
Sep 30 18:49:14 compute-0 nova_compute[265391]: 2025-09-30 18:49:14.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:14.336 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:d1:75 10.100.0.6'], port_security=['fa:16:3e:2d:d1:75 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '67db1f19-3436-4e1e-bf63-266846e1380d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f4658d55-a8f9-48f1-846d-61df3d830821', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3359c464e0344756a39ce5c7088b9eba', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3a57c776-d79c-4096-859e-411dcf78cfa5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0884332-fe68-47c8-9c8c-5c6a7c53f7f5, chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=2dab1b31-affc-4fc3-9d5e-698d2cd44d6e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:14.337 166158 INFO neutron.agent.ovn.metadata.agent [-] Port 2dab1b31-affc-4fc3-9d5e-698d2cd44d6e in datapath f4658d55-a8f9-48f1-846d-61df3d830821 bound to our chassis
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:14.338 166158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f4658d55-a8f9-48f1-846d-61df3d830821
Sep 30 18:49:14 compute-0 ovn_controller[156242]: 2025-09-30T18:49:14Z|00319|binding|INFO|Setting lport 2dab1b31-affc-4fc3-9d5e-698d2cd44d6e ovn-installed in OVS
Sep 30 18:49:14 compute-0 ovn_controller[156242]: 2025-09-30T18:49:14Z|00320|binding|INFO|Setting lport 2dab1b31-affc-4fc3-9d5e-698d2cd44d6e up in Southbound
Sep 30 18:49:14 compute-0 nova_compute[265391]: 2025-09-30 18:49:14.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:14.358 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[3996869c-abf2-4139-baf2-7020345eb3ec]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:14.359 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf4658d55-a1 in ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821 namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Sep 30 18:49:14 compute-0 systemd-udevd[361408]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:14.361 292173 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf4658d55-a0 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:14.361 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[a622210c-f940-4f86-8563-213dd74d688c]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:14.362 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a19a94-f595-40a1-8608-7307d55d49d3]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:49:14 compute-0 NetworkManager[45059]: <info>  [1759258154.3730] device (tap2dab1b31-af): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 18:49:14 compute-0 NetworkManager[45059]: <info>  [1759258154.3744] device (tap2dab1b31-af): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Sep 30 18:49:14 compute-0 systemd-machined[219917]: New machine qemu-29-instance-00000026.
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:14.382 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[14a123b8-e55b-48c3-a43f-dead90f6e2d2]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:49:14 compute-0 systemd[1]: Started Virtual Machine qemu-29-instance-00000026.
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:14.389 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[5f75621c-fb52-446d-be3f-d957df636e7e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:14.423 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[315596f3-9f3a-4889-9bd7-91b7d434c3e9]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:14.427 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[56c2c6ea-5804-4225-a9ec-3aee3082cf89]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:49:14 compute-0 NetworkManager[45059]: <info>  [1759258154.4291] manager: (tapf4658d55-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/123)
Sep 30 18:49:14 compute-0 systemd-udevd[361412]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:14.462 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[40a9bdeb-33b0-413c-a42f-1a7ee635ca6e]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:14.464 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[ee674274-3bf8-4893-a688-ac66a0b587bb]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:49:14 compute-0 NetworkManager[45059]: <info>  [1759258154.4887] device (tapf4658d55-a0): carrier: link connected
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:14.496 297688 DEBUG oslo.privsep.daemon [-] privsep: reply[239c0f5e-6e1c-4896-b814-698f68ed4344]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:14.518 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[33751fd2-48d6-4204-9198-ff6308903b96]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf4658d55-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:37:a8:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675318, 'reachable_time': 39616, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361441, 'error': None, 'target': 'ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:14.536 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[627fe4f4-422f-448c-aa11-cfd0e11d53d2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe37:a899'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675318, 'tstamp': 675318}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361442, 'error': None, 'target': 'ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:14.550 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[2a274844-da4e-4128-9111-8378241ffe48]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf4658d55-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:37:a8:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675318, 'reachable_time': 39616, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 361443, 'error': None, 'target': 'ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:49:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2127: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:49:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:49:14.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:14.586 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[1d2d0d2d-8b7f-49f3-a16d-c55735e617cc]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:14.652 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[71e1e7fc-d28f-4a0e-968b-71602e8805e1]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:14.652 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4658d55-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:14.653 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:14.653 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf4658d55-a0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:49:14 compute-0 NetworkManager[45059]: <info>  [1759258154.6556] manager: (tapf4658d55-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/124)
Sep 30 18:49:14 compute-0 nova_compute[265391]: 2025-09-30 18:49:14.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:49:14 compute-0 kernel: tapf4658d55-a0: entered promiscuous mode
Sep 30 18:49:14 compute-0 nova_compute[265391]: 2025-09-30 18:49:14.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:14.658 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf4658d55-a0, col_values=(('external_ids', {'iface-id': '862fbe9e-132a-4b8a-83f6-7b020c6192ad'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
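Editorial note, not part of the captured log: the metadata provisioning above creates the ovnmeta namespace, a veth pair (tapf4658d55-a1 inside the namespace, tapf4658d55-a0 on the host) and plugs the host end into br-int. The sketch below is an approximation of that plumbing with plain ip/ovs-vsctl calls, not the agent's own privsep/pyroute2 code path; names and the iface-id are copied from the log.

```python
# Approximate sketch of the namespace/veth/OVS plumbing logged above.
import subprocess

ns = "ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821"
host_if, ns_if = "tapf4658d55-a0", "tapf4658d55-a1"

def run(*cmd):
    subprocess.run(cmd, check=True)

run("ip", "netns", "add", ns)
run("ip", "link", "add", host_if, "type", "veth", "peer", "name", ns_if)
run("ip", "link", "set", ns_if, "netns", ns)
run("ip", "-n", ns, "link", "set", ns_if, "up")
run("ip", "link", "set", host_if, "up")
# Plug the host end into br-int and tag it with the metadata port's iface-id,
# matching the AddPortCommand/DbSetCommand transaction logged above.
run("ovs-vsctl", "--may-exist", "add-port", "br-int", host_if, "--",
    "set", "Interface", host_if,
    "external_ids:iface-id=862fbe9e-132a-4b8a-83f6-7b020c6192ad")
```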
Sep 30 18:49:14 compute-0 nova_compute[265391]: 2025-09-30 18:49:14.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:49:14 compute-0 ovn_controller[156242]: 2025-09-30T18:49:14Z|00321|binding|INFO|Releasing lport 862fbe9e-132a-4b8a-83f6-7b020c6192ad from this chassis (sb_readonly=0)
Sep 30 18:49:14 compute-0 nova_compute[265391]: 2025-09-30 18:49:14.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:14.679 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[140d96ab-03bf-47f5-b14e-80988a9c51a4]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:14.680 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f4658d55-a8f9-48f1-846d-61df3d830821.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f4658d55-a8f9-48f1-846d-61df3d830821.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:14.680 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f4658d55-a8f9-48f1-846d-61df3d830821.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f4658d55-a8f9-48f1-846d-61df3d830821.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:14.680 166158 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for f4658d55-a8f9-48f1-846d-61df3d830821 disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:14.680 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f4658d55-a8f9-48f1-846d-61df3d830821.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f4658d55-a8f9-48f1-846d-61df3d830821.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:14.680 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[89aacaaf-a16d-4ec8-8fbe-512aa5c186bc]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:14.681 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f4658d55-a8f9-48f1-846d-61df3d830821.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f4658d55-a8f9-48f1-846d-61df3d830821.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:14.681 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[42edcb49-08d4-4a73-b01a-cfb346950d2c]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:14.681 166158 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: global
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]:     log         /dev/log local0 debug
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]:     log-tag     haproxy-metadata-proxy-f4658d55-a8f9-48f1-846d-61df3d830821
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]:     user        root
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]:     group       root
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]:     maxconn     1024
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]:     pidfile     /var/lib/neutron/external/pids/f4658d55-a8f9-48f1-846d-61df3d830821.pid.haproxy
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]:     daemon
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: defaults
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]:     log global
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]:     mode http
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]:     option httplog
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]:     option dontlognull
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]:     option http-server-close
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]:     option forwardfor
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]:     retries                 3
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]:     timeout http-request    30s
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]:     timeout connect         30s
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]:     timeout client          32s
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]:     timeout server          32s
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]:     timeout http-keep-alive 30s
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: listen listener
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]:     bind 169.254.169.254:80
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]:     
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]:     server metadata /var/lib/neutron/metadata_proxy
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]:     http-request add-header X-OVN-Network-ID f4658d55-a8f9-48f1-846d-61df3d830821
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Sep 30 18:49:14 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:14.682 166158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821', 'env', 'PROCESS_TAG=haproxy-f4658d55-a8f9-48f1-846d-61df3d830821', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f4658d55-a8f9-48f1-846d-61df3d830821.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Sep 30 18:49:15 compute-0 podman[361517]: 2025-09-30 18:49:15.084491221 +0000 UTC m=+0.065103029 container create 7d86d798a50dff7b2f760fd8aed9f9c370fcce14e2bc420d19c47338d1a05267 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:49:15 compute-0 systemd[1]: Started libpod-conmon-7d86d798a50dff7b2f760fd8aed9f9c370fcce14e2bc420d19c47338d1a05267.scope.
Sep 30 18:49:15 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:49:15 compute-0 podman[361517]: 2025-09-30 18:49:15.056880345 +0000 UTC m=+0.037492173 image pull 8925e336c8a9c8e5b40d98e4715ad60992d6f21ad0a398170a686c34f922c024 38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Sep 30 18:49:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7447b4c959a18d109737b2675c88120c32aaba9c1b61fe493cd5c11d7f324576/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Sep 30 18:49:15 compute-0 podman[361517]: 2025-09-30 18:49:15.161289001 +0000 UTC m=+0.141900819 container init 7d86d798a50dff7b2f760fd8aed9f9c370fcce14e2bc420d19c47338d1a05267 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930)
Sep 30 18:49:15 compute-0 podman[361517]: 2025-09-30 18:49:15.16663866 +0000 UTC m=+0.147250468 container start 7d86d798a50dff7b2f760fd8aed9f9c370fcce14e2bc420d19c47338d1a05267 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest)
Sep 30 18:49:15 compute-0 neutron-haproxy-ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821[361532]: [NOTICE]   (361536) : New worker (361538) forked
Sep 30 18:49:15 compute-0 neutron-haproxy-ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821[361532]: [NOTICE]   (361536) : Loading success.
Sep 30 18:49:15 compute-0 nova_compute[265391]: 2025-09-30 18:49:15.323 2 DEBUG nova.compute.manager [req-7f2c782d-b2ad-4b7f-b201-5919b766219e req-433463f1-807f-4bcb-a276-298fe88a4c39 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Received event network-vif-plugged-2dab1b31-affc-4fc3-9d5e-698d2cd44d6e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:49:15 compute-0 nova_compute[265391]: 2025-09-30 18:49:15.324 2 DEBUG oslo_concurrency.lockutils [req-7f2c782d-b2ad-4b7f-b201-5919b766219e req-433463f1-807f-4bcb-a276-298fe88a4c39 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:49:15 compute-0 nova_compute[265391]: 2025-09-30 18:49:15.324 2 DEBUG oslo_concurrency.lockutils [req-7f2c782d-b2ad-4b7f-b201-5919b766219e req-433463f1-807f-4bcb-a276-298fe88a4c39 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:49:15 compute-0 nova_compute[265391]: 2025-09-30 18:49:15.325 2 DEBUG oslo_concurrency.lockutils [req-7f2c782d-b2ad-4b7f-b201-5919b766219e req-433463f1-807f-4bcb-a276-298fe88a4c39 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:49:15 compute-0 nova_compute[265391]: 2025-09-30 18:49:15.325 2 DEBUG nova.compute.manager [req-7f2c782d-b2ad-4b7f-b201-5919b766219e req-433463f1-807f-4bcb-a276-298fe88a4c39 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Processing event network-vif-plugged-2dab1b31-affc-4fc3-9d5e-698d2cd44d6e _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:49:15 compute-0 nova_compute[265391]: 2025-09-30 18:49:15.326 2 DEBUG nova.compute.manager [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:49:15 compute-0 nova_compute[265391]: 2025-09-30 18:49:15.330 2 DEBUG nova.virt.libvirt.driver [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Sep 30 18:49:15 compute-0 nova_compute[265391]: 2025-09-30 18:49:15.335 2 INFO nova.virt.libvirt.driver [-] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Instance spawned successfully.
Sep 30 18:49:15 compute-0 nova_compute[265391]: 2025-09-30 18:49:15.335 2 DEBUG nova.virt.libvirt.driver [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Sep 30 18:49:15 compute-0 ceph-mon[73755]: pgmap v2127: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:49:15 compute-0 nova_compute[265391]: 2025-09-30 18:49:15.851 2 DEBUG nova.virt.libvirt.driver [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:49:15 compute-0 nova_compute[265391]: 2025-09-30 18:49:15.852 2 DEBUG nova.virt.libvirt.driver [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:49:15 compute-0 nova_compute[265391]: 2025-09-30 18:49:15.852 2 DEBUG nova.virt.libvirt.driver [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:49:15 compute-0 nova_compute[265391]: 2025-09-30 18:49:15.853 2 DEBUG nova.virt.libvirt.driver [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:49:15 compute-0 nova_compute[265391]: 2025-09-30 18:49:15.853 2 DEBUG nova.virt.libvirt.driver [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:49:15 compute-0 nova_compute[265391]: 2025-09-30 18:49:15.854 2 DEBUG nova.virt.libvirt.driver [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Sep 30 18:49:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:49:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:49:16.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:49:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:49:16 compute-0 nova_compute[265391]: 2025-09-30 18:49:16.365 2 INFO nova.compute.manager [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Took 10.98 seconds to spawn the instance on the hypervisor.
Sep 30 18:49:16 compute-0 nova_compute[265391]: 2025-09-30 18:49:16.365 2 DEBUG nova.compute.manager [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Sep 30 18:49:16 compute-0 nova_compute[265391]: 2025-09-30 18:49:16.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:49:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2128: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:49:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:49:16.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:16 compute-0 nova_compute[265391]: 2025-09-30 18:49:16.892 2 INFO nova.compute.manager [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Took 16.73 seconds to build instance.
Sep 30 18:49:17 compute-0 nova_compute[265391]: 2025-09-30 18:49:17.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:49:17 compute-0 nova_compute[265391]: 2025-09-30 18:49:17.396 2 DEBUG oslo_concurrency.lockutils [None req-921383f1-ea9d-4bbc-8710-a77b99a188d6 f560266d133f4f1ba4a908e3cdcfa59d 3359c464e0344756a39ce5c7088b9eba - - default default] Lock "67db1f19-3436-4e1e-bf63-266846e1380d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.260s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:49:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:49:17.397Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:49:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:49:17.398Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:49:17 compute-0 nova_compute[265391]: 2025-09-30 18:49:17.400 2 DEBUG nova.compute.manager [req-62219fec-ebbf-4055-9b27-8fc22e90747f req-e6e02811-23f9-4113-a4c3-9f18a6199bf4 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Received event network-vif-plugged-2dab1b31-affc-4fc3-9d5e-698d2cd44d6e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:49:17 compute-0 nova_compute[265391]: 2025-09-30 18:49:17.400 2 DEBUG oslo_concurrency.lockutils [req-62219fec-ebbf-4055-9b27-8fc22e90747f req-e6e02811-23f9-4113-a4c3-9f18a6199bf4 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:49:17 compute-0 nova_compute[265391]: 2025-09-30 18:49:17.401 2 DEBUG oslo_concurrency.lockutils [req-62219fec-ebbf-4055-9b27-8fc22e90747f req-e6e02811-23f9-4113-a4c3-9f18a6199bf4 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:49:17 compute-0 nova_compute[265391]: 2025-09-30 18:49:17.401 2 DEBUG oslo_concurrency.lockutils [req-62219fec-ebbf-4055-9b27-8fc22e90747f req-e6e02811-23f9-4113-a4c3-9f18a6199bf4 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:49:17 compute-0 nova_compute[265391]: 2025-09-30 18:49:17.401 2 DEBUG nova.compute.manager [req-62219fec-ebbf-4055-9b27-8fc22e90747f req-e6e02811-23f9-4113-a4c3-9f18a6199bf4 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] No waiting events found dispatching network-vif-plugged-2dab1b31-affc-4fc3-9d5e-698d2cd44d6e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:49:17 compute-0 nova_compute[265391]: 2025-09-30 18:49:17.401 2 WARNING nova.compute.manager [req-62219fec-ebbf-4055-9b27-8fc22e90747f req-e6e02811-23f9-4113-a4c3-9f18a6199bf4 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Received unexpected event network-vif-plugged-2dab1b31-affc-4fc3-9d5e-698d2cd44d6e for instance with vm_state active and task_state None.
Sep 30 18:49:17 compute-0 ceph-mon[73755]: pgmap v2128: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:49:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:49:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:49:18.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:49:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2129: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 18:49:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:49:18.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:49:18] "GET /metrics HTTP/1.1" 200 46704 "" "Prometheus/2.51.0"
Sep 30 18:49:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:49:18] "GET /metrics HTTP/1.1" 200 46704 "" "Prometheus/2.51.0"
Sep 30 18:49:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:49:18.936Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:49:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:49:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:49:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:49:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:49:19 compute-0 ceph-mon[73755]: pgmap v2129: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 18:49:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:49:20.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2130: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:49:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:49:20.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:20 compute-0 ceph-mon[73755]: pgmap v2130: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:49:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:49:21 compute-0 nova_compute[265391]: 2025-09-30 18:49:21.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:49:22 compute-0 nova_compute[265391]: 2025-09-30 18:49:22.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:49:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:49:22.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:22 compute-0 sudo[361555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:49:22 compute-0 sudo[361555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:49:22 compute-0 sudo[361555]: pam_unix(sudo:session): session closed for user root
Sep 30 18:49:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:49:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:49:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:49:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2131: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:49:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:49:22.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:23 compute-0 ceph-mon[73755]: pgmap v2131: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:49:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:49:23.929Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:49:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:49:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:49:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:49:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:49:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:49:24.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2132: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:49:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:49:24.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:25 compute-0 ceph-mon[73755]: pgmap v2132: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:49:25 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/758503153' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:49:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:49:26.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:49:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2133: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:49:26 compute-0 nova_compute[265391]: 2025-09-30 18:49:26.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:49:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:49:26.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:27 compute-0 nova_compute[265391]: 2025-09-30 18:49:27.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:49:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:49:27.398Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:49:27 compute-0 ovn_controller[156242]: 2025-09-30T18:49:27Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2d:d1:75 10.100.0.6
Sep 30 18:49:27 compute-0 ovn_controller[156242]: 2025-09-30T18:49:27Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2d:d1:75 10.100.0.6
Sep 30 18:49:27 compute-0 ceph-mon[73755]: pgmap v2133: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Sep 30 18:49:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:49:28.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:28 compute-0 podman[361587]: 2025-09-30 18:49:28.515493288 +0000 UTC m=+0.051334831 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_id=iscsid, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest)
Sep 30 18:49:28 compute-0 podman[361586]: 2025-09-30 18:49:28.521310959 +0000 UTC m=+0.061087174 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Sep 30 18:49:28 compute-0 podman[361588]: 2025-09-30 18:49:28.533374962 +0000 UTC m=+0.065560510 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2025-08-20T13:12:41, config_id=edpm, maintainer=Red Hat, Inc., version=9.6, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Sep 30 18:49:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2134: 353 pgs: 353 active+clean; 121 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Sep 30 18:49:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:49:28.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:49:28] "GET /metrics HTTP/1.1" 200 46725 "" "Prometheus/2.51.0"
Sep 30 18:49:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:49:28] "GET /metrics HTTP/1.1" 200 46725 "" "Prometheus/2.51.0"
Sep 30 18:49:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:49:28.937Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:49:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:49:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:49:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:49:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:49:29 compute-0 ceph-mon[73755]: pgmap v2134: 353 pgs: 353 active+clean; 121 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Sep 30 18:49:29 compute-0 podman[276673]: time="2025-09-30T18:49:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:49:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:49:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:49:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:49:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10768 "" "Go-http-client/1.1"
Sep 30 18:49:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:49:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:49:30.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:49:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2135: 353 pgs: 353 active+clean; 121 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:49:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:49:30.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:49:31 compute-0 openstack_network_exporter[279566]: ERROR   18:49:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:49:31 compute-0 openstack_network_exporter[279566]: ERROR   18:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:49:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:49:31 compute-0 openstack_network_exporter[279566]: ERROR   18:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:49:31 compute-0 openstack_network_exporter[279566]: ERROR   18:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:49:31 compute-0 openstack_network_exporter[279566]: ERROR   18:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:49:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:49:31 compute-0 nova_compute[265391]: 2025-09-30 18:49:31.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:49:31 compute-0 ceph-mon[73755]: pgmap v2135: 353 pgs: 353 active+clean; 121 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:49:32 compute-0 nova_compute[265391]: 2025-09-30 18:49:32.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:49:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:49:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:49:32.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:49:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2136: 353 pgs: 353 active+clean; 121 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:49:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:49:32.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:33 compute-0 ceph-mon[73755]: pgmap v2136: 353 pgs: 353 active+clean; 121 MiB data, 442 MiB used, 40 GiB / 40 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:49:33 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3376486055' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:49:33 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2880843357' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:49:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:49:33.931Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:49:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:49:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:49:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:49:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:49:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:49:34.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2137: 353 pgs: 353 active+clean; 167 MiB data, 463 MiB used, 40 GiB / 40 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Sep 30 18:49:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:49:34.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:35 compute-0 ceph-mon[73755]: pgmap v2137: 353 pgs: 353 active+clean; 167 MiB data, 463 MiB used, 40 GiB / 40 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Sep 30 18:49:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:49:36.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:49:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:49:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3251208783' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:49:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:49:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3251208783' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:49:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2138: 353 pgs: 353 active+clean; 167 MiB data, 463 MiB used, 40 GiB / 40 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Sep 30 18:49:36 compute-0 nova_compute[265391]: 2025-09-30 18:49:36.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:49:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:49:36.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3251208783' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:49:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3251208783' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:49:37 compute-0 nova_compute[265391]: 2025-09-30 18:49:37.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:49:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:49:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:49:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:49:37.399Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:49:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:49:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:49:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:49:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:49:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:49:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:49:37 compute-0 ceph-mon[73755]: pgmap v2138: 353 pgs: 353 active+clean; 167 MiB data, 463 MiB used, 40 GiB / 40 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Sep 30 18:49:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:49:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:49:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:49:38.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:49:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2139: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 347 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Sep 30 18:49:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:49:38.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:49:38] "GET /metrics HTTP/1.1" 200 46726 "" "Prometheus/2.51.0"
Sep 30 18:49:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:49:38] "GET /metrics HTTP/1.1" 200 46726 "" "Prometheus/2.51.0"
Sep 30 18:49:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:49:38.938Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:49:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:49:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:49:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:49:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:49:39 compute-0 ceph-mon[73755]: pgmap v2139: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 347 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Sep 30 18:49:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:49:40.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2140: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Sep 30 18:49:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:49:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:49:40.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:49:40 compute-0 ceph-mon[73755]: pgmap v2140: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Sep 30 18:49:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:49:41 compute-0 nova_compute[265391]: 2025-09-30 18:49:41.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:49:42 compute-0 nova_compute[265391]: 2025-09-30 18:49:42.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:49:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:49:42.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:42 compute-0 sudo[361659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:49:42 compute-0 sudo[361659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:49:42 compute-0 sudo[361659]: pam_unix(sudo:session): session closed for user root
Sep 30 18:49:42 compute-0 sudo[361684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Sep 30 18:49:42 compute-0 sudo[361684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:49:42 compute-0 sudo[361709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:49:42 compute-0 sudo[361709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:49:42 compute-0 sudo[361709]: pam_unix(sudo:session): session closed for user root
Sep 30 18:49:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2141: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Sep 30 18:49:42 compute-0 sudo[361684]: pam_unix(sudo:session): session closed for user root
Sep 30 18:49:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:49:42 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:49:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:49:42 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:49:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:49:42.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:42 compute-0 sudo[361756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:49:42 compute-0 sudo[361756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:49:42 compute-0 sudo[361756]: pam_unix(sudo:session): session closed for user root
Sep 30 18:49:42 compute-0 sudo[361781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:49:42 compute-0 sudo[361781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:49:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 18:49:42 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:49:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 18:49:42 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:49:43 compute-0 sudo[361781]: pam_unix(sudo:session): session closed for user root
Sep 30 18:49:43 compute-0 ceph-mon[73755]: pgmap v2141: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Sep 30 18:49:43 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:49:43 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:49:43 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:49:43 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:49:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:49:43 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:49:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:49:43 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:49:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:49:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2142: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 109 op/s
Sep 30 18:49:43 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:49:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:49:43 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:49:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:49:43 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:49:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:49:43 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:49:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:49:43 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:49:43 compute-0 sudo[361841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:49:43 compute-0 sudo[361841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:49:43 compute-0 sudo[361841]: pam_unix(sudo:session): session closed for user root
Sep 30 18:49:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:49:43.931Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:49:43 compute-0 sudo[361866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:49:43 compute-0 sudo[361866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:49:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:49:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:49:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:49:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:49:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:49:44.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:44 compute-0 podman[361934]: 2025-09-30 18:49:44.392506831 +0000 UTC m=+0.060217602 container create 4f43ae408c9dca5aa833ef4a3f21259d861c7353908bc9faa51c1dba3812a334 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:49:44 compute-0 systemd[1]: Started libpod-conmon-4f43ae408c9dca5aa833ef4a3f21259d861c7353908bc9faa51c1dba3812a334.scope.
Sep 30 18:49:44 compute-0 podman[361934]: 2025-09-30 18:49:44.361787254 +0000 UTC m=+0.029498085 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:49:44 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:49:44 compute-0 podman[361934]: 2025-09-30 18:49:44.49319237 +0000 UTC m=+0.160903181 container init 4f43ae408c9dca5aa833ef4a3f21259d861c7353908bc9faa51c1dba3812a334 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Sep 30 18:49:44 compute-0 podman[361934]: 2025-09-30 18:49:44.500399267 +0000 UTC m=+0.168109998 container start 4f43ae408c9dca5aa833ef4a3f21259d861c7353908bc9faa51c1dba3812a334 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_margulis, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:49:44 compute-0 podman[361934]: 2025-09-30 18:49:44.503624141 +0000 UTC m=+0.171334892 container attach 4f43ae408c9dca5aa833ef4a3f21259d861c7353908bc9faa51c1dba3812a334 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_margulis, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Sep 30 18:49:44 compute-0 romantic_margulis[361958]: 167 167
Sep 30 18:49:44 compute-0 systemd[1]: libpod-4f43ae408c9dca5aa833ef4a3f21259d861c7353908bc9faa51c1dba3812a334.scope: Deactivated successfully.
Sep 30 18:49:44 compute-0 podman[361948]: 2025-09-30 18:49:44.505724775 +0000 UTC m=+0.076916645 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20250930)
Sep 30 18:49:44 compute-0 conmon[361958]: conmon 4f43ae408c9dca5aa833 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4f43ae408c9dca5aa833ef4a3f21259d861c7353908bc9faa51c1dba3812a334.scope/container/memory.events
Sep 30 18:49:44 compute-0 podman[361952]: 2025-09-30 18:49:44.520410516 +0000 UTC m=+0.081255727 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Sep 30 18:49:44 compute-0 podman[362013]: 2025-09-30 18:49:44.547779785 +0000 UTC m=+0.022714260 container died 4f43ae408c9dca5aa833ef4a3f21259d861c7353908bc9faa51c1dba3812a334 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:49:44 compute-0 podman[361951]: 2025-09-30 18:49:44.568542053 +0000 UTC m=+0.121685095 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20250930)
Sep 30 18:49:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ddf24f0aab412e64d557d6b7de731f7f13eb96cb88bf48efb1e72476054076f-merged.mount: Deactivated successfully.
Sep 30 18:49:44 compute-0 podman[362013]: 2025-09-30 18:49:44.599284 +0000 UTC m=+0.074218455 container remove 4f43ae408c9dca5aa833ef4a3f21259d861c7353908bc9faa51c1dba3812a334 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_margulis, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:49:44 compute-0 systemd[1]: libpod-conmon-4f43ae408c9dca5aa833ef4a3f21259d861c7353908bc9faa51c1dba3812a334.scope: Deactivated successfully.
Sep 30 18:49:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:49:44.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:44 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:49:44 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:49:44 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:49:44 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:49:44 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:49:44 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:49:44 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:49:44 compute-0 podman[362046]: 2025-09-30 18:49:44.824962249 +0000 UTC m=+0.055074998 container create c150bf2a7ad6c02b619b223f9da500620d09241543252e461b274f37f2318cff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_bartik, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Sep 30 18:49:44 compute-0 systemd[1]: Started libpod-conmon-c150bf2a7ad6c02b619b223f9da500620d09241543252e461b274f37f2318cff.scope.
Sep 30 18:49:44 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:49:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed7b0030108382e7ccd91da9bac6cd9cc6dbe178cfdd95cfdc50d1bf4d632e32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:49:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed7b0030108382e7ccd91da9bac6cd9cc6dbe178cfdd95cfdc50d1bf4d632e32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:49:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed7b0030108382e7ccd91da9bac6cd9cc6dbe178cfdd95cfdc50d1bf4d632e32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:49:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed7b0030108382e7ccd91da9bac6cd9cc6dbe178cfdd95cfdc50d1bf4d632e32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:49:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed7b0030108382e7ccd91da9bac6cd9cc6dbe178cfdd95cfdc50d1bf4d632e32/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:49:44 compute-0 podman[362046]: 2025-09-30 18:49:44.901905623 +0000 UTC m=+0.132018392 container init c150bf2a7ad6c02b619b223f9da500620d09241543252e461b274f37f2318cff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_bartik, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:49:44 compute-0 podman[362046]: 2025-09-30 18:49:44.809465708 +0000 UTC m=+0.039578477 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:49:44 compute-0 podman[362046]: 2025-09-30 18:49:44.910430584 +0000 UTC m=+0.140543333 container start c150bf2a7ad6c02b619b223f9da500620d09241543252e461b274f37f2318cff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:49:44 compute-0 podman[362046]: 2025-09-30 18:49:44.913395011 +0000 UTC m=+0.143507760 container attach c150bf2a7ad6c02b619b223f9da500620d09241543252e461b274f37f2318cff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Sep 30 18:49:45 compute-0 fervent_bartik[362063]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:49:45 compute-0 fervent_bartik[362063]: --> All data devices are unavailable
Sep 30 18:49:45 compute-0 systemd[1]: libpod-c150bf2a7ad6c02b619b223f9da500620d09241543252e461b274f37f2318cff.scope: Deactivated successfully.
Sep 30 18:49:45 compute-0 podman[362046]: 2025-09-30 18:49:45.261568955 +0000 UTC m=+0.491681704 container died c150bf2a7ad6c02b619b223f9da500620d09241543252e461b274f37f2318cff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_bartik, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:49:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed7b0030108382e7ccd91da9bac6cd9cc6dbe178cfdd95cfdc50d1bf4d632e32-merged.mount: Deactivated successfully.
Sep 30 18:49:45 compute-0 podman[362046]: 2025-09-30 18:49:45.303379489 +0000 UTC m=+0.533492278 container remove c150bf2a7ad6c02b619b223f9da500620d09241543252e461b274f37f2318cff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Sep 30 18:49:45 compute-0 systemd[1]: libpod-conmon-c150bf2a7ad6c02b619b223f9da500620d09241543252e461b274f37f2318cff.scope: Deactivated successfully.
Sep 30 18:49:45 compute-0 sudo[361866]: pam_unix(sudo:session): session closed for user root
Sep 30 18:49:45 compute-0 sudo[362091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:49:45 compute-0 sudo[362091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:49:45 compute-0 sudo[362091]: pam_unix(sudo:session): session closed for user root
Sep 30 18:49:45 compute-0 sudo[362116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:49:45 compute-0 sudo[362116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:49:45 compute-0 ceph-mon[73755]: pgmap v2142: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 109 op/s
Sep 30 18:49:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2143: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 2.1 MiB/s rd, 28 KiB/s wr, 80 op/s
Sep 30 18:49:45 compute-0 podman[362183]: 2025-09-30 18:49:45.857545862 +0000 UTC m=+0.036640591 container create a03a454d8d168a4f9267331a5a7ce254e8fe7077ff2ab6e3a7d10a8dc4e2fa72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_franklin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:49:45 compute-0 systemd[1]: Started libpod-conmon-a03a454d8d168a4f9267331a5a7ce254e8fe7077ff2ab6e3a7d10a8dc4e2fa72.scope.
Sep 30 18:49:45 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:49:45 compute-0 podman[362183]: 2025-09-30 18:49:45.841837595 +0000 UTC m=+0.020932324 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:49:45 compute-0 podman[362183]: 2025-09-30 18:49:45.937687509 +0000 UTC m=+0.116782268 container init a03a454d8d168a4f9267331a5a7ce254e8fe7077ff2ab6e3a7d10a8dc4e2fa72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_franklin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 18:49:45 compute-0 podman[362183]: 2025-09-30 18:49:45.943856879 +0000 UTC m=+0.122951628 container start a03a454d8d168a4f9267331a5a7ce254e8fe7077ff2ab6e3a7d10a8dc4e2fa72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:49:45 compute-0 podman[362183]: 2025-09-30 18:49:45.948360666 +0000 UTC m=+0.127455375 container attach a03a454d8d168a4f9267331a5a7ce254e8fe7077ff2ab6e3a7d10a8dc4e2fa72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:49:45 compute-0 agitated_franklin[362201]: 167 167
Sep 30 18:49:45 compute-0 systemd[1]: libpod-a03a454d8d168a4f9267331a5a7ce254e8fe7077ff2ab6e3a7d10a8dc4e2fa72.scope: Deactivated successfully.
Sep 30 18:49:45 compute-0 podman[362183]: 2025-09-30 18:49:45.950856711 +0000 UTC m=+0.129951420 container died a03a454d8d168a4f9267331a5a7ce254e8fe7077ff2ab6e3a7d10a8dc4e2fa72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_franklin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:49:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-9da0d4bc9ec5913eeed306aebe24d8a5eba4272ed8fec45e112e8adf5a8b3989-merged.mount: Deactivated successfully.
Sep 30 18:49:45 compute-0 podman[362183]: 2025-09-30 18:49:45.990267412 +0000 UTC m=+0.169362121 container remove a03a454d8d168a4f9267331a5a7ce254e8fe7077ff2ab6e3a7d10a8dc4e2fa72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_franklin, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:49:46 compute-0 systemd[1]: libpod-conmon-a03a454d8d168a4f9267331a5a7ce254e8fe7077ff2ab6e3a7d10a8dc4e2fa72.scope: Deactivated successfully.
Sep 30 18:49:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:49:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:49:46.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:49:46 compute-0 podman[362224]: 2025-09-30 18:49:46.236927195 +0000 UTC m=+0.054597126 container create 08dca5e52a569e8fdb02b94814fd497eb4ae53d3350bbcd8e3710e1e25789f89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:49:46 compute-0 systemd[1]: Started libpod-conmon-08dca5e52a569e8fdb02b94814fd497eb4ae53d3350bbcd8e3710e1e25789f89.scope.
Sep 30 18:49:46 compute-0 podman[362224]: 2025-09-30 18:49:46.211745712 +0000 UTC m=+0.029415643 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:49:46 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:49:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:49:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc59707cf628860db11850e6e01fe5c825bd00c6339badbc74c51a4482151e72/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:49:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc59707cf628860db11850e6e01fe5c825bd00c6339badbc74c51a4482151e72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:49:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc59707cf628860db11850e6e01fe5c825bd00c6339badbc74c51a4482151e72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:49:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc59707cf628860db11850e6e01fe5c825bd00c6339badbc74c51a4482151e72/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:49:46 compute-0 podman[362224]: 2025-09-30 18:49:46.351260979 +0000 UTC m=+0.168930950 container init 08dca5e52a569e8fdb02b94814fd497eb4ae53d3350bbcd8e3710e1e25789f89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Sep 30 18:49:46 compute-0 podman[362224]: 2025-09-30 18:49:46.36287808 +0000 UTC m=+0.180547971 container start 08dca5e52a569e8fdb02b94814fd497eb4ae53d3350bbcd8e3710e1e25789f89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid)
Sep 30 18:49:46 compute-0 podman[362224]: 2025-09-30 18:49:46.366478663 +0000 UTC m=+0.184148594 container attach 08dca5e52a569e8fdb02b94814fd497eb4ae53d3350bbcd8e3710e1e25789f89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_leavitt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Sep 30 18:49:46 compute-0 nova_compute[265391]: 2025-09-30 18:49:46.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:49:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:49:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:49:46.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:49:46 compute-0 suspicious_leavitt[362240]: {
Sep 30 18:49:46 compute-0 suspicious_leavitt[362240]:     "0": [
Sep 30 18:49:46 compute-0 suspicious_leavitt[362240]:         {
Sep 30 18:49:46 compute-0 suspicious_leavitt[362240]:             "devices": [
Sep 30 18:49:46 compute-0 suspicious_leavitt[362240]:                 "/dev/loop3"
Sep 30 18:49:46 compute-0 suspicious_leavitt[362240]:             ],
Sep 30 18:49:46 compute-0 suspicious_leavitt[362240]:             "lv_name": "ceph_lv0",
Sep 30 18:49:46 compute-0 suspicious_leavitt[362240]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:49:46 compute-0 suspicious_leavitt[362240]:             "lv_size": "21470642176",
Sep 30 18:49:46 compute-0 suspicious_leavitt[362240]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:49:46 compute-0 suspicious_leavitt[362240]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:49:46 compute-0 suspicious_leavitt[362240]:             "name": "ceph_lv0",
Sep 30 18:49:46 compute-0 suspicious_leavitt[362240]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:49:46 compute-0 suspicious_leavitt[362240]:             "tags": {
Sep 30 18:49:46 compute-0 suspicious_leavitt[362240]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:49:46 compute-0 suspicious_leavitt[362240]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:49:46 compute-0 suspicious_leavitt[362240]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:49:46 compute-0 suspicious_leavitt[362240]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:49:46 compute-0 suspicious_leavitt[362240]:                 "ceph.cluster_name": "ceph",
Sep 30 18:49:46 compute-0 suspicious_leavitt[362240]:                 "ceph.crush_device_class": "",
Sep 30 18:49:46 compute-0 suspicious_leavitt[362240]:                 "ceph.encrypted": "0",
Sep 30 18:49:46 compute-0 suspicious_leavitt[362240]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:49:46 compute-0 suspicious_leavitt[362240]:                 "ceph.osd_id": "0",
Sep 30 18:49:46 compute-0 suspicious_leavitt[362240]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:49:46 compute-0 suspicious_leavitt[362240]:                 "ceph.type": "block",
Sep 30 18:49:46 compute-0 suspicious_leavitt[362240]:                 "ceph.vdo": "0",
Sep 30 18:49:46 compute-0 suspicious_leavitt[362240]:                 "ceph.with_tpm": "0"
Sep 30 18:49:46 compute-0 suspicious_leavitt[362240]:             },
Sep 30 18:49:46 compute-0 suspicious_leavitt[362240]:             "type": "block",
Sep 30 18:49:46 compute-0 suspicious_leavitt[362240]:             "vg_name": "ceph_vg0"
Sep 30 18:49:46 compute-0 suspicious_leavitt[362240]:         }
Sep 30 18:49:46 compute-0 suspicious_leavitt[362240]:     ]
Sep 30 18:49:46 compute-0 suspicious_leavitt[362240]: }
Sep 30 18:49:46 compute-0 systemd[1]: libpod-08dca5e52a569e8fdb02b94814fd497eb4ae53d3350bbcd8e3710e1e25789f89.scope: Deactivated successfully.
Sep 30 18:49:46 compute-0 podman[362224]: 2025-09-30 18:49:46.704686039 +0000 UTC m=+0.522356000 container died 08dca5e52a569e8fdb02b94814fd497eb4ae53d3350bbcd8e3710e1e25789f89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_leavitt, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:49:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc59707cf628860db11850e6e01fe5c825bd00c6339badbc74c51a4482151e72-merged.mount: Deactivated successfully.
Sep 30 18:49:46 compute-0 podman[362224]: 2025-09-30 18:49:46.761872681 +0000 UTC m=+0.579542582 container remove 08dca5e52a569e8fdb02b94814fd497eb4ae53d3350bbcd8e3710e1e25789f89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 18:49:46 compute-0 systemd[1]: libpod-conmon-08dca5e52a569e8fdb02b94814fd497eb4ae53d3350bbcd8e3710e1e25789f89.scope: Deactivated successfully.
Sep 30 18:49:46 compute-0 sudo[362116]: pam_unix(sudo:session): session closed for user root
Sep 30 18:49:46 compute-0 sudo[362261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:49:46 compute-0 sudo[362261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:49:46 compute-0 sudo[362261]: pam_unix(sudo:session): session closed for user root
Sep 30 18:49:46 compute-0 sudo[362286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:49:46 compute-0 sudo[362286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:49:47 compute-0 nova_compute[265391]: 2025-09-30 18:49:47.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:49:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:49:47.399Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:49:47 compute-0 podman[362354]: 2025-09-30 18:49:47.400329599 +0000 UTC m=+0.061936846 container create 73521830276fffcaddee8c8bb1d917db671f8fa846dc155a9fd810b7504c6054 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_panini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Sep 30 18:49:47 compute-0 systemd[1]: Started libpod-conmon-73521830276fffcaddee8c8bb1d917db671f8fa846dc155a9fd810b7504c6054.scope.
Sep 30 18:49:47 compute-0 podman[362354]: 2025-09-30 18:49:47.368935985 +0000 UTC m=+0.030543292 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:49:47 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:49:47 compute-0 podman[362354]: 2025-09-30 18:49:47.489744895 +0000 UTC m=+0.151352212 container init 73521830276fffcaddee8c8bb1d917db671f8fa846dc155a9fd810b7504c6054 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:49:47 compute-0 podman[362354]: 2025-09-30 18:49:47.497527817 +0000 UTC m=+0.159135034 container start 73521830276fffcaddee8c8bb1d917db671f8fa846dc155a9fd810b7504c6054 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_panini, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Sep 30 18:49:47 compute-0 podman[362354]: 2025-09-30 18:49:47.501325255 +0000 UTC m=+0.162932472 container attach 73521830276fffcaddee8c8bb1d917db671f8fa846dc155a9fd810b7504c6054 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Sep 30 18:49:47 compute-0 zealous_panini[362370]: 167 167
Sep 30 18:49:47 compute-0 systemd[1]: libpod-73521830276fffcaddee8c8bb1d917db671f8fa846dc155a9fd810b7504c6054.scope: Deactivated successfully.
Sep 30 18:49:47 compute-0 podman[362354]: 2025-09-30 18:49:47.504048596 +0000 UTC m=+0.165655813 container died 73521830276fffcaddee8c8bb1d917db671f8fa846dc155a9fd810b7504c6054 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_panini, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Sep 30 18:49:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-a155530232920895a4cb62331faf5ae27b8d457e3d9c7c391efd34a0f0015790-merged.mount: Deactivated successfully.
Sep 30 18:49:47 compute-0 podman[362354]: 2025-09-30 18:49:47.553631041 +0000 UTC m=+0.215238248 container remove 73521830276fffcaddee8c8bb1d917db671f8fa846dc155a9fd810b7504c6054 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_panini, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:49:47 compute-0 systemd[1]: libpod-conmon-73521830276fffcaddee8c8bb1d917db671f8fa846dc155a9fd810b7504c6054.scope: Deactivated successfully.
Sep 30 18:49:47 compute-0 ceph-mon[73755]: pgmap v2143: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 2.1 MiB/s rd, 28 KiB/s wr, 80 op/s
Sep 30 18:49:47 compute-0 podman[362394]: 2025-09-30 18:49:47.779782163 +0000 UTC m=+0.060699375 container create 5eeba77ff60a13c1d710d2ead6aee617f85fd7b5c9feb68f80e07827420ff435 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:49:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2144: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 2.1 MiB/s rd, 28 KiB/s wr, 80 op/s
Sep 30 18:49:47 compute-0 systemd[1]: Started libpod-conmon-5eeba77ff60a13c1d710d2ead6aee617f85fd7b5c9feb68f80e07827420ff435.scope.
Sep 30 18:49:47 compute-0 podman[362394]: 2025-09-30 18:49:47.758410839 +0000 UTC m=+0.039328051 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:49:47 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:49:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f9272a0c0680b000eddd1d6981354be5058e4ab7e8078c919c3f59e99ac4c6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:49:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f9272a0c0680b000eddd1d6981354be5058e4ab7e8078c919c3f59e99ac4c6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:49:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f9272a0c0680b000eddd1d6981354be5058e4ab7e8078c919c3f59e99ac4c6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:49:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f9272a0c0680b000eddd1d6981354be5058e4ab7e8078c919c3f59e99ac4c6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:49:47 compute-0 podman[362394]: 2025-09-30 18:49:47.882624258 +0000 UTC m=+0.163541460 container init 5eeba77ff60a13c1d710d2ead6aee617f85fd7b5c9feb68f80e07827420ff435 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:49:47 compute-0 podman[362394]: 2025-09-30 18:49:47.895479881 +0000 UTC m=+0.176397093 container start 5eeba77ff60a13c1d710d2ead6aee617f85fd7b5c9feb68f80e07827420ff435 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Sep 30 18:49:47 compute-0 podman[362394]: 2025-09-30 18:49:47.900236055 +0000 UTC m=+0.181153267 container attach 5eeba77ff60a13c1d710d2ead6aee617f85fd7b5c9feb68f80e07827420ff435 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_elion, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:49:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:49:48.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:48 compute-0 lvm[362486]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:49:48 compute-0 lvm[362486]: VG ceph_vg0 finished
Sep 30 18:49:48 compute-0 priceless_elion[362412]: {}
Sep 30 18:49:48 compute-0 systemd[1]: libpod-5eeba77ff60a13c1d710d2ead6aee617f85fd7b5c9feb68f80e07827420ff435.scope: Deactivated successfully.
Sep 30 18:49:48 compute-0 podman[362394]: 2025-09-30 18:49:48.598853022 +0000 UTC m=+0.879770234 container died 5eeba77ff60a13c1d710d2ead6aee617f85fd7b5c9feb68f80e07827420ff435 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Sep 30 18:49:48 compute-0 systemd[1]: libpod-5eeba77ff60a13c1d710d2ead6aee617f85fd7b5c9feb68f80e07827420ff435.scope: Consumed 1.070s CPU time.
Sep 30 18:49:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f9272a0c0680b000eddd1d6981354be5058e4ab7e8078c919c3f59e99ac4c6d-merged.mount: Deactivated successfully.
Sep 30 18:49:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:49:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:49:48.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:49:48 compute-0 podman[362394]: 2025-09-30 18:49:48.643069248 +0000 UTC m=+0.923986430 container remove 5eeba77ff60a13c1d710d2ead6aee617f85fd7b5c9feb68f80e07827420ff435 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:49:48 compute-0 systemd[1]: libpod-conmon-5eeba77ff60a13c1d710d2ead6aee617f85fd7b5c9feb68f80e07827420ff435.scope: Deactivated successfully.
Sep 30 18:49:48 compute-0 sudo[362286]: pam_unix(sudo:session): session closed for user root
Sep 30 18:49:48 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:49:48 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:49:48 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:49:48 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:49:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:49:48] "GET /metrics HTTP/1.1" 200 46726 "" "Prometheus/2.51.0"
Sep 30 18:49:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:49:48] "GET /metrics HTTP/1.1" 200 46726 "" "Prometheus/2.51.0"
Sep 30 18:49:48 compute-0 sudo[362501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:49:48 compute-0 sudo[362501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:49:48 compute-0 sudo[362501]: pam_unix(sudo:session): session closed for user root
Sep 30 18:49:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:49:48.940Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:49:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:49:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:49:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:49:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:49:49 compute-0 ceph-mon[73755]: pgmap v2144: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 2.1 MiB/s rd, 28 KiB/s wr, 80 op/s
Sep 30 18:49:49 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:49:49 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:49:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2145: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 2.0 MiB/s rd, 1.1 KiB/s wr, 74 op/s
Sep 30 18:49:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:49:50.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:49:50.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:49:51 compute-0 sshd-session[361898]: Received disconnect from 154.125.120.7 port 36156:11: Bye Bye [preauth]
Sep 30 18:49:51 compute-0 sshd-session[361898]: Disconnected from 154.125.120.7 port 36156 [preauth]
Sep 30 18:49:51 compute-0 nova_compute[265391]: 2025-09-30 18:49:51.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:49:51 compute-0 ceph-mon[73755]: pgmap v2145: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 2.0 MiB/s rd, 1.1 KiB/s wr, 74 op/s
Sep 30 18:49:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2146: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 2.0 MiB/s rd, 1.1 KiB/s wr, 74 op/s
Sep 30 18:49:52 compute-0 nova_compute[265391]: 2025-09-30 18:49:52.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:49:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:49:52.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:49:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:49:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:49:52.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:49:53 compute-0 nova_compute[265391]: 2025-09-30 18:49:53.508 2 DEBUG nova.virt.libvirt.driver [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Check if temp file /var/lib/nova/instances/tmp_10ptqj0 exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10968
Sep 30 18:49:53 compute-0 nova_compute[265391]: 2025-09-30 18:49:53.513 2 DEBUG nova.compute.manager [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp_10ptqj0',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='67db1f19-3436-4e1e-bf63-266846e1380d',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.12/site-packages/nova/compute/manager.py:9294
Sep 30 18:49:53 compute-0 ceph-mon[73755]: pgmap v2146: 353 pgs: 353 active+clean; 167 MiB data, 464 MiB used, 40 GiB / 40 GiB avail; 2.0 MiB/s rd, 1.1 KiB/s wr, 74 op/s
Sep 30 18:49:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2147: 353 pgs: 353 active+clean; 200 MiB data, 488 MiB used, 40 GiB / 40 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 137 op/s
Sep 30 18:49:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:49:53.932Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:49:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:49:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:49:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:49:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:49:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:49:54.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:54.345 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:54.346 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:49:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:49:54.346 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:49:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:49:54.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:54 compute-0 ceph-mon[73755]: pgmap v2147: 353 pgs: 353 active+clean; 200 MiB data, 488 MiB used, 40 GiB / 40 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 137 op/s
Sep 30 18:49:55 compute-0 nova_compute[265391]: 2025-09-30 18:49:55.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:49:55 compute-0 nova_compute[265391]: 2025-09-30 18:49:55.429 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:49:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2148: 353 pgs: 353 active+clean; 200 MiB data, 488 MiB used, 40 GiB / 40 GiB avail; 243 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Sep 30 18:49:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:49:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:49:56.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:49:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:49:56 compute-0 nova_compute[265391]: 2025-09-30 18:49:56.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:49:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:49:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:49:56.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:49:56 compute-0 ceph-mon[73755]: pgmap v2148: 353 pgs: 353 active+clean; 200 MiB data, 488 MiB used, 40 GiB / 40 GiB avail; 243 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Sep 30 18:49:57 compute-0 nova_compute[265391]: 2025-09-30 18:49:57.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:49:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:49:57.400Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:49:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:49:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3530658459' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:49:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:49:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3530658459' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:49:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2149: 353 pgs: 353 active+clean; 200 MiB data, 488 MiB used, 40 GiB / 40 GiB avail; 243 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Sep 30 18:49:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3530658459' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:49:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3530658459' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:49:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:49:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:49:58.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:49:58 compute-0 nova_compute[265391]: 2025-09-30 18:49:58.315 2 DEBUG nova.compute.manager [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Preparing to wait for external event network-vif-plugged-2dab1b31-affc-4fc3-9d5e-698d2cd44d6e prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Sep 30 18:49:58 compute-0 nova_compute[265391]: 2025-09-30 18:49:58.315 2 DEBUG oslo_concurrency.lockutils [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:49:58 compute-0 nova_compute[265391]: 2025-09-30 18:49:58.316 2 DEBUG oslo_concurrency.lockutils [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:49:58 compute-0 nova_compute[265391]: 2025-09-30 18:49:58.316 2 DEBUG oslo_concurrency.lockutils [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:49:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:49:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:49:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:49:58.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:49:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:49:58] "GET /metrics HTTP/1.1" 200 46721 "" "Prometheus/2.51.0"
Sep 30 18:49:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:49:58] "GET /metrics HTTP/1.1" 200 46721 "" "Prometheus/2.51.0"
Sep 30 18:49:58 compute-0 ceph-mon[73755]: pgmap v2149: 353 pgs: 353 active+clean; 200 MiB data, 488 MiB used, 40 GiB / 40 GiB avail; 243 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Sep 30 18:49:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/4283978356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:49:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:49:58.941Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:49:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:49:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:49:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:49:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:49:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:49:59 compute-0 podman[362539]: 2025-09-30 18:49:59.528194554 +0000 UTC m=+0.060436847 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=iscsid, io.buildah.version=1.41.4)
Sep 30 18:49:59 compute-0 podman[362540]: 2025-09-30 18:49:59.528188274 +0000 UTC m=+0.059989356 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, maintainer=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public)
Sep 30 18:49:59 compute-0 podman[362538]: 2025-09-30 18:49:59.561064216 +0000 UTC m=+0.090273421 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20250930, tcib_managed=true)
Sep 30 18:49:59 compute-0 podman[276673]: time="2025-09-30T18:49:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:49:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:49:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42601 "" "Go-http-client/1.1"
Sep 30 18:49:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:49:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10769 "" "Go-http-client/1.1"
Sep 30 18:49:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2150: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 244 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Sep 30 18:49:59 compute-0 ceph-mon[73755]: pgmap v2150: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 244 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Sep 30 18:50:00 compute-0 ceph-mon[73755]: log_channel(cluster) log [WRN] : overall HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore; 1 failed cephadm daemon(s)
Sep 30 18:50:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:50:00.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:00 compute-0 nova_compute[265391]: 2025-09-30 18:50:00.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:50:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:50:00.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:00 compute-0 ceph-mon[73755]: overall HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore; 1 failed cephadm daemon(s)
Sep 30 18:50:00 compute-0 nova_compute[265391]: 2025-09-30 18:50:00.938 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:50:00 compute-0 nova_compute[265391]: 2025-09-30 18:50:00.939 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:50:00 compute-0 nova_compute[265391]: 2025-09-30 18:50:00.939 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:50:00 compute-0 nova_compute[265391]: 2025-09-30 18:50:00.939 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:50:00 compute-0 nova_compute[265391]: 2025-09-30 18:50:00.939 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:50:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:50:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:50:01 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1476484408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:50:01 compute-0 nova_compute[265391]: 2025-09-30 18:50:01.359 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:50:01 compute-0 openstack_network_exporter[279566]: ERROR   18:50:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:50:01 compute-0 openstack_network_exporter[279566]: ERROR   18:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:50:01 compute-0 openstack_network_exporter[279566]: ERROR   18:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:50:01 compute-0 openstack_network_exporter[279566]: ERROR   18:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:50:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:50:01 compute-0 openstack_network_exporter[279566]: ERROR   18:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:50:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:50:01 compute-0 nova_compute[265391]: 2025-09-30 18:50:01.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2151: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 243 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Sep 30 18:50:01 compute-0 nova_compute[265391]: 2025-09-30 18:50:01.943 2 DEBUG nova.compute.manager [req-5228a631-7662-40a3-a213-5a037ccb389c req-178f2100-17c2-41a1-ae09-d292c095e223 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Received event network-vif-unplugged-2dab1b31-affc-4fc3-9d5e-698d2cd44d6e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:50:01 compute-0 nova_compute[265391]: 2025-09-30 18:50:01.944 2 DEBUG oslo_concurrency.lockutils [req-5228a631-7662-40a3-a213-5a037ccb389c req-178f2100-17c2-41a1-ae09-d292c095e223 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:50:01 compute-0 nova_compute[265391]: 2025-09-30 18:50:01.944 2 DEBUG oslo_concurrency.lockutils [req-5228a631-7662-40a3-a213-5a037ccb389c req-178f2100-17c2-41a1-ae09-d292c095e223 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:50:01 compute-0 nova_compute[265391]: 2025-09-30 18:50:01.944 2 DEBUG oslo_concurrency.lockutils [req-5228a631-7662-40a3-a213-5a037ccb389c req-178f2100-17c2-41a1-ae09-d292c095e223 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:50:01 compute-0 nova_compute[265391]: 2025-09-30 18:50:01.944 2 DEBUG nova.compute.manager [req-5228a631-7662-40a3-a213-5a037ccb389c req-178f2100-17c2-41a1-ae09-d292c095e223 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] No event matching network-vif-unplugged-2dab1b31-affc-4fc3-9d5e-698d2cd44d6e in dict_keys([('network-vif-plugged', '2dab1b31-affc-4fc3-9d5e-698d2cd44d6e')]) pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:349
Sep 30 18:50:01 compute-0 nova_compute[265391]: 2025-09-30 18:50:01.945 2 DEBUG nova.compute.manager [req-5228a631-7662-40a3-a213-5a037ccb389c req-178f2100-17c2-41a1-ae09-d292c095e223 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Received event network-vif-unplugged-2dab1b31-affc-4fc3-9d5e-698d2cd44d6e for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:50:01 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1476484408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:50:01 compute-0 ceph-mon[73755]: pgmap v2151: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 243 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Sep 30 18:50:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:50:02.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:02 compute-0 nova_compute[265391]: 2025-09-30 18:50:02.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:02 compute-0 nova_compute[265391]: 2025-09-30 18:50:02.416 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000026 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:50:02 compute-0 nova_compute[265391]: 2025-09-30 18:50:02.416 2 DEBUG nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] skipping disk for instance-00000026 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12136
Sep 30 18:50:02 compute-0 sudo[362625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:50:02 compute-0 sudo[362625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:50:02 compute-0 sudo[362625]: pam_unix(sudo:session): session closed for user root
Sep 30 18:50:02 compute-0 nova_compute[265391]: 2025-09-30 18:50:02.622 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:50:02 compute-0 nova_compute[265391]: 2025-09-30 18:50:02.623 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:50:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:50:02.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:02 compute-0 nova_compute[265391]: 2025-09-30 18:50:02.666 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.042s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:50:02 compute-0 nova_compute[265391]: 2025-09-30 18:50:02.666 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4095MB free_disk=39.90116500854492GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:50:02 compute-0 nova_compute[265391]: 2025-09-30 18:50:02.667 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:50:02 compute-0 nova_compute[265391]: 2025-09-30 18:50:02.667 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:50:02 compute-0 nova_compute[265391]: 2025-09-30 18:50:02.832 2 INFO nova.compute.manager [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Took 4.52 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Sep 30 18:50:02 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2857037570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:50:03 compute-0 nova_compute[265391]: 2025-09-30 18:50:03.698 2 INFO nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Updating resource usage from migration fea3269e-5d28-46ac-b65f-701e0a6ebefa
Sep 30 18:50:03 compute-0 nova_compute[265391]: 2025-09-30 18:50:03.726 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Migration fea3269e-5d28-46ac-b65f-701e0a6ebefa is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Sep 30 18:50:03 compute-0 nova_compute[265391]: 2025-09-30 18:50:03.727 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:50:03 compute-0 nova_compute[265391]: 2025-09-30 18:50:03.727 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=39GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:50:02 up  1:53,  0 user,  load average: 0.80, 0.78, 0.83\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_migrating': '1', 'num_os_type_None': '1', 'num_proj_3359c464e0344756a39ce5c7088b9eba': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:50:03 compute-0 nova_compute[265391]: 2025-09-30 18:50:03.742 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Refreshing inventories for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:822
Sep 30 18:50:03 compute-0 nova_compute[265391]: 2025-09-30 18:50:03.757 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Updating ProviderTree inventory for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:786
Sep 30 18:50:03 compute-0 nova_compute[265391]: 2025-09-30 18:50:03.757 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Updating inventory in ProviderTree for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Sep 30 18:50:03 compute-0 nova_compute[265391]: 2025-09-30 18:50:03.769 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Refreshing aggregate associations for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2, aggregates: None _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:831
Sep 30 18:50:03 compute-0 nova_compute[265391]: 2025-09-30 18:50:03.789 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Refreshing trait associations for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2, traits: COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SOUND_MODEL_SB16,COMPUTE_ARCH_X86_64,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIRTIO_PACKED,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_TIS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOUND_MODEL_ICH9,COMPUTE_SOUND_MODEL_USB,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOUND_MODEL_PCSPK,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ADDRESS_SPACE_EMULATED,COMPUTE_RESCUE_BFV,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_STATELESS_FIRMWARE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_IGB,HW_ARCH_X86_64,COMPUTE_ACCELERATORS,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOUND_MODEL_ES1370,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_CRB,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_VIRTIO_FS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ADDRESS_SPACE_PASSTHROUGH,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_F16C,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOUND_MODEL_ICH6,COMPUTE_SOUND_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SOUND_MODEL_AC97,HW_CPU_X86_SSE42 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:843
Sep 30 18:50:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2152: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 244 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Sep 30 18:50:03 compute-0 nova_compute[265391]: 2025-09-30 18:50:03.819 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:50:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:50:03.933Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:50:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:50:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:50:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:50:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:50:04 compute-0 nova_compute[265391]: 2025-09-30 18:50:04.008 2 DEBUG nova.compute.manager [req-7c006239-7f35-4f3f-a834-36d1967821ed req-53a15a66-2650-458f-bf5c-1e0b784a15dd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Received event network-vif-plugged-2dab1b31-affc-4fc3-9d5e-698d2cd44d6e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:50:04 compute-0 nova_compute[265391]: 2025-09-30 18:50:04.008 2 DEBUG oslo_concurrency.lockutils [req-7c006239-7f35-4f3f-a834-36d1967821ed req-53a15a66-2650-458f-bf5c-1e0b784a15dd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:50:04 compute-0 nova_compute[265391]: 2025-09-30 18:50:04.009 2 DEBUG oslo_concurrency.lockutils [req-7c006239-7f35-4f3f-a834-36d1967821ed req-53a15a66-2650-458f-bf5c-1e0b784a15dd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:50:04 compute-0 nova_compute[265391]: 2025-09-30 18:50:04.009 2 DEBUG oslo_concurrency.lockutils [req-7c006239-7f35-4f3f-a834-36d1967821ed req-53a15a66-2650-458f-bf5c-1e0b784a15dd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:50:04 compute-0 nova_compute[265391]: 2025-09-30 18:50:04.009 2 DEBUG nova.compute.manager [req-7c006239-7f35-4f3f-a834-36d1967821ed req-53a15a66-2650-458f-bf5c-1e0b784a15dd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Processing event network-vif-plugged-2dab1b31-affc-4fc3-9d5e-698d2cd44d6e _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Sep 30 18:50:04 compute-0 nova_compute[265391]: 2025-09-30 18:50:04.009 2 DEBUG nova.compute.manager [req-7c006239-7f35-4f3f-a834-36d1967821ed req-53a15a66-2650-458f-bf5c-1e0b784a15dd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Received event network-changed-2dab1b31-affc-4fc3-9d5e-698d2cd44d6e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:50:04 compute-0 nova_compute[265391]: 2025-09-30 18:50:04.010 2 DEBUG nova.compute.manager [req-7c006239-7f35-4f3f-a834-36d1967821ed req-53a15a66-2650-458f-bf5c-1e0b784a15dd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Refreshing instance network info cache due to event network-changed-2dab1b31-affc-4fc3-9d5e-698d2cd44d6e. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Sep 30 18:50:04 compute-0 nova_compute[265391]: 2025-09-30 18:50:04.010 2 DEBUG oslo_concurrency.lockutils [req-7c006239-7f35-4f3f-a834-36d1967821ed req-53a15a66-2650-458f-bf5c-1e0b784a15dd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "refresh_cache-67db1f19-3436-4e1e-bf63-266846e1380d" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Sep 30 18:50:04 compute-0 nova_compute[265391]: 2025-09-30 18:50:04.010 2 DEBUG oslo_concurrency.lockutils [req-7c006239-7f35-4f3f-a834-36d1967821ed req-53a15a66-2650-458f-bf5c-1e0b784a15dd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquired lock "refresh_cache-67db1f19-3436-4e1e-bf63-266846e1380d" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Sep 30 18:50:04 compute-0 nova_compute[265391]: 2025-09-30 18:50:04.010 2 DEBUG nova.network.neutron [req-7c006239-7f35-4f3f-a834-36d1967821ed req-53a15a66-2650-458f-bf5c-1e0b784a15dd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Refreshing network info cache for port 2dab1b31-affc-4fc3-9d5e-698d2cd44d6e _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Sep 30 18:50:04 compute-0 nova_compute[265391]: 2025-09-30 18:50:04.013 2 DEBUG nova.compute.manager [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Sep 30 18:50:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:50:04.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:50:04 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3710904971' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:50:04 compute-0 nova_compute[265391]: 2025-09-30 18:50:04.277 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:50:04 compute-0 nova_compute[265391]: 2025-09-30 18:50:04.282 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:50:04 compute-0 nova_compute[265391]: 2025-09-30 18:50:04.521 2 WARNING neutronclient.v2_0.client [req-7c006239-7f35-4f3f-a834-36d1967821ed req-53a15a66-2650-458f-bf5c-1e0b784a15dd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:50:04 compute-0 nova_compute[265391]: 2025-09-30 18:50:04.526 2 DEBUG nova.compute.manager [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=37888,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp_10ptqj0',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='67db1f19-3436-4e1e-bf63-266846e1380d',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(fea3269e-5d28-46ac-b65f-701e0a6ebefa),old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9659
Sep 30 18:50:04 compute-0 nova_compute[265391]: 2025-09-30 18:50:04.529 2 DEBUG nova.objects.instance [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lazy-loading 'migration_context' on Instance uuid 67db1f19-3436-4e1e-bf63-266846e1380d obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Sep 30 18:50:04 compute-0 nova_compute[265391]: 2025-09-30 18:50:04.530 2 DEBUG nova.virt.libvirt.driver [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Starting monitoring of live migration _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11543
Sep 30 18:50:04 compute-0 nova_compute[265391]: 2025-09-30 18:50:04.532 2 DEBUG nova.virt.libvirt.driver [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Sep 30 18:50:04 compute-0 nova_compute[265391]: 2025-09-30 18:50:04.532 2 DEBUG nova.virt.libvirt.driver [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Sep 30 18:50:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:50:04.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:04 compute-0 nova_compute[265391]: 2025-09-30 18:50:04.791 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:50:05 compute-0 ceph-mon[73755]: pgmap v2152: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 244 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Sep 30 18:50:05 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3710904971' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:50:05 compute-0 nova_compute[265391]: 2025-09-30 18:50:05.035 2 DEBUG nova.virt.libvirt.driver [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Sep 30 18:50:05 compute-0 nova_compute[265391]: 2025-09-30 18:50:05.035 2 DEBUG nova.virt.libvirt.driver [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Sep 30 18:50:05 compute-0 nova_compute[265391]: 2025-09-30 18:50:05.042 2 DEBUG nova.virt.libvirt.vif [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:48:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteZoneMigrationStrategy-server-983808530',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutezonemigrationstrategy-server-983808530',id=38,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:49:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3359c464e0344756a39ce5c7088b9eba',ramdisk_id='',reservation_id='r-wltr50iq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteZoneMigrationStrategy-613400940',owner_user_name='tempest-TestExecuteZoneMigrationStrategy-613400940-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:49:16Z,user_data=None,user_id='f560266d133f4f1ba4a908e3cdcfa59d',uuid=67db1f19-3436-4e1e-bf63-266846e1380d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2dab1b31-affc-4fc3-9d5e-698d2cd44d6e", "address": "fa:16:3e:2d:d1:75", "network": {"id": "f4658d55-a8f9-48f1-846d-61df3d830821", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2093820932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67cbb3b670e445a4b97abcc92749d126", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap2dab1b31-af", "ovs_interfaceid": "2dab1b31-affc-4fc3-9d5e-698d2cd44d6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Sep 30 18:50:05 compute-0 nova_compute[265391]: 2025-09-30 18:50:05.042 2 DEBUG nova.network.os_vif_util [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "2dab1b31-affc-4fc3-9d5e-698d2cd44d6e", "address": "fa:16:3e:2d:d1:75", "network": {"id": "f4658d55-a8f9-48f1-846d-61df3d830821", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2093820932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67cbb3b670e445a4b97abcc92749d126", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap2dab1b31-af", "ovs_interfaceid": "2dab1b31-affc-4fc3-9d5e-698d2cd44d6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:50:05 compute-0 nova_compute[265391]: 2025-09-30 18:50:05.043 2 DEBUG nova.network.os_vif_util [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:d1:75,bridge_name='br-int',has_traffic_filtering=True,id=2dab1b31-affc-4fc3-9d5e-698d2cd44d6e,network=Network(f4658d55-a8f9-48f1-846d-61df3d830821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2dab1b31-af') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:50:05 compute-0 nova_compute[265391]: 2025-09-30 18:50:05.043 2 DEBUG nova.virt.libvirt.migration [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Updating guest XML with vif config: <interface type="ethernet">
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <mac address="fa:16:3e:2d:d1:75"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <model type="virtio"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <mtu size="1442"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <target dev="tap2dab1b31-af"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]: </interface>
Sep 30 18:50:05 compute-0 nova_compute[265391]:  _update_vif_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:534
Sep 30 18:50:05 compute-0 nova_compute[265391]: 2025-09-30 18:50:05.044 2 DEBUG nova.virt.libvirt.migration [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _remove_cpu_shared_set_xml input xml=<domain type="kvm">
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <name>instance-00000026</name>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <uuid>67db1f19-3436-4e1e-bf63-266846e1380d</uuid>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteZoneMigrationStrategy-server-983808530</nova:name>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:49:09</nova:creationTime>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:50:05 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:50:05 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:user uuid="f560266d133f4f1ba4a908e3cdcfa59d">tempest-TestExecuteZoneMigrationStrategy-613400940-project-admin</nova:user>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:project uuid="3359c464e0344756a39ce5c7088b9eba">tempest-TestExecuteZoneMigrationStrategy-613400940</nova:project>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:port uuid="2dab1b31-affc-4fc3-9d5e-698d2cd44d6e">
Sep 30 18:50:05 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <system>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <entry name="serial">67db1f19-3436-4e1e-bf63-266846e1380d</entry>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <entry name="uuid">67db1f19-3436-4e1e-bf63-266846e1380d</entry>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </system>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <os>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   </os>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <features>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   </features>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/67db1f19-3436-4e1e-bf63-266846e1380d_disk">
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       </source>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/67db1f19-3436-4e1e-bf63-266846e1380d_disk.config">
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       </source>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <interface type="ethernet"><mac address="fa:16:3e:2d:d1:75"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2dab1b31-af"/><address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </interface><serial type="pty">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/67db1f19-3436-4e1e-bf63-266846e1380d/console.log" append="off"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       </target>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/67db1f19-3436-4e1e-bf63-266846e1380d/console.log" append="off"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </console>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </input>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <video>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </video>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]: </domain>
Sep 30 18:50:05 compute-0 nova_compute[265391]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:241
Sep 30 18:50:05 compute-0 nova_compute[265391]: 2025-09-30 18:50:05.046 2 DEBUG nova.virt.libvirt.migration [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _remove_cpu_shared_set_xml output xml=<domain type="kvm">
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <name>instance-00000026</name>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <uuid>67db1f19-3436-4e1e-bf63-266846e1380d</uuid>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteZoneMigrationStrategy-server-983808530</nova:name>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:49:09</nova:creationTime>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:50:05 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:50:05 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:user uuid="f560266d133f4f1ba4a908e3cdcfa59d">tempest-TestExecuteZoneMigrationStrategy-613400940-project-admin</nova:user>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:project uuid="3359c464e0344756a39ce5c7088b9eba">tempest-TestExecuteZoneMigrationStrategy-613400940</nova:project>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:port uuid="2dab1b31-affc-4fc3-9d5e-698d2cd44d6e">
Sep 30 18:50:05 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <system>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <entry name="serial">67db1f19-3436-4e1e-bf63-266846e1380d</entry>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <entry name="uuid">67db1f19-3436-4e1e-bf63-266846e1380d</entry>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </system>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <os>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   </os>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <features>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   </features>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/67db1f19-3436-4e1e-bf63-266846e1380d_disk">
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       </source>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/67db1f19-3436-4e1e-bf63-266846e1380d_disk.config">
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       </source>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:2d:d1:75"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target dev="tap2dab1b31-af"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/67db1f19-3436-4e1e-bf63-266846e1380d/console.log" append="off"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       </target>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/67db1f19-3436-4e1e-bf63-266846e1380d/console.log" append="off"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </console>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </input>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <video>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </video>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]: </domain>
Sep 30 18:50:05 compute-0 nova_compute[265391]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:250
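The debug dumps above capture the guest XML around nova's _remove_cpu_shared_set_xml step of live-migration pre-processing; in the output shown, <vcpu placement="static"> carries no pinning attributes, so there was nothing to strip in this case. As a rough illustration only (an assumption about the intent, not nova's actual implementation), the step amounts to dropping host-specific CPU pinning hints, such as a cpuset attribute on <vcpu> or a <cputune> block, before the XML is handed to the destination:

    # Minimal sketch (assumption, not nova's code): strip CPU shared-set hints
    # from a libvirt domain XML string before sending it to the destination host.
    import xml.etree.ElementTree as ET

    def remove_cpu_shared_set(domain_xml: str) -> str:
        root = ET.fromstring(domain_xml)
        vcpu = root.find("vcpu")
        if vcpu is not None and "cpuset" in vcpu.attrib:
            del vcpu.attrib["cpuset"]   # drop the source host's pinning hint
        cputune = root.find("cputune")
        if cputune is not None:
            root.remove(cputune)        # destination may expose a different shared set
        return ET.tostring(root, encoding="unicode")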
Sep 30 18:50:05 compute-0 nova_compute[265391]: 2025-09-30 18:50:05.047 2 DEBUG nova.virt.libvirt.migration [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] _update_pci_xml output xml=<domain type="kvm">
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <name>instance-00000026</name>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <uuid>67db1f19-3436-4e1e-bf63-266846e1380d</uuid>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <metadata>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <nova:package version="32.1.0-0.20250919142712.b99a882.el10"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <nova:name>tempest-TestExecuteZoneMigrationStrategy-server-983808530</nova:name>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <nova:creationTime>2025-09-30 18:49:09</nova:creationTime>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <nova:flavor name="m1.nano" id="c83dc7f1-0795-47db-adcb-fb90be11684a">
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:memory>128</nova:memory>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:disk>1</nova:disk>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:swap>0</nova:swap>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:vcpus>1</nova:vcpus>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:extraSpecs>
Sep 30 18:50:05 compute-0 nova_compute[265391]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         </nova:extraSpecs>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       </nova:flavor>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <nova:image uuid="5b99cbca-b655-4be5-8343-cf504005c42e">
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:containerFormat>bare</nova:containerFormat>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:diskFormat>qcow2</nova:diskFormat>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:minDisk>1</nova:minDisk>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:minRam>0</nova:minRam>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:properties>
Sep 30 18:50:05 compute-0 nova_compute[265391]:           <nova:property name="hw_rng_model">virtio</nova:property>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         </nova:properties>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       </nova:image>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <nova:owner>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:user uuid="f560266d133f4f1ba4a908e3cdcfa59d">tempest-TestExecuteZoneMigrationStrategy-613400940-project-admin</nova:user>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:project uuid="3359c464e0344756a39ce5c7088b9eba">tempest-TestExecuteZoneMigrationStrategy-613400940</nova:project>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       </nova:owner>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <nova:root type="image" uuid="5b99cbca-b655-4be5-8343-cf504005c42e"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <nova:ports>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <nova:port uuid="2dab1b31-affc-4fc3-9d5e-698d2cd44d6e">
Sep 30 18:50:05 compute-0 nova_compute[265391]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         </nova:port>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       </nova:ports>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </nova:instance>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   </metadata>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <memory unit="KiB">131072</memory>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <currentMemory unit="KiB">131072</currentMemory>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <vcpu placement="static">1</vcpu>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <resource>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <partition>/machine</partition>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   </resource>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <sysinfo type="smbios">
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <system>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <entry name="manufacturer">RDO</entry>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <entry name="product">OpenStack Compute</entry>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <entry name="version">32.1.0-0.20250919142712.b99a882.el10</entry>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <entry name="serial">67db1f19-3436-4e1e-bf63-266846e1380d</entry>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <entry name="uuid">67db1f19-3436-4e1e-bf63-266846e1380d</entry>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <entry name="family">Virtual Machine</entry>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </system>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   </sysinfo>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <os>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <boot dev="hd"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <smbios mode="sysinfo"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   </os>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <features>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <acpi/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <apic/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <vmcoreinfo state="on"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   </features>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <cpu mode="host-model" check="partial">
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   </cpu>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <clock offset="utc">
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <timer name="hpet" present="no"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   </clock>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <on_poweroff>destroy</on_poweroff>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <on_reboot>restart</on_reboot>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <on_crash>destroy</on_crash>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <devices>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <disk type="network" device="disk">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/67db1f19-3436-4e1e-bf63-266846e1380d_disk">
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       </source>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target dev="vda" bus="virtio"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <disk type="network" device="cdrom">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <driver name="qemu" type="raw" cache="none"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <auth username="openstack">
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <secret type="ceph" uuid="63d32c6a-fa18-54ed-8711-9a3915cc367b"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       </auth>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <source protocol="rbd" name="vms/67db1f19-3436-4e1e-bf63-266846e1380d_disk.config">
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <host name="192.168.122.100" port="6789"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <host name="192.168.122.101" port="6789"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       </source>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target dev="sda" bus="sata"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <readonly/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </disk>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="0" model="pcie-root"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="1" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="1" port="0x10"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="2" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="2" port="0x11"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="3" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="3" port="0x12"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="4" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="4" port="0x13"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="5" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="5" port="0x14"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="6" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="6" port="0x15"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="7" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="7" port="0x16"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="8" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="8" port="0x17"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="9" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="9" port="0x18"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="10" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="10" port="0x19"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="11" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="11" port="0x1a"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="12" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="12" port="0x1b"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="13" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="13" port="0x1c"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="14" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="14" port="0x1d"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="15" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="15" port="0x1e"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="16" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="16" port="0x1f"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="17" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="17" port="0x20"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="18" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="18" port="0x21"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="19" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="19" port="0x22"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="20" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="20" port="0x23"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="21" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="21" port="0x24"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="22" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="22" port="0x25"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="23" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="23" port="0x26"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="24" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="24" port="0x27"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="25" model="pcie-root-port">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-root-port"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target chassis="25" port="0x28"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model name="pcie-pci-bridge"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="usb" index="0" model="piix3-uhci">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <controller type="sata" index="0">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </controller>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <interface type="ethernet">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <mac address="fa:16:3e:2d:d1:75"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model type="virtio"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <mtu size="1442"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target dev="tap2dab1b31-af"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </interface>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <serial type="pty">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/67db1f19-3436-4e1e-bf63-266846e1380d/console.log" append="off"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target type="isa-serial" port="0">
Sep 30 18:50:05 compute-0 nova_compute[265391]:         <model name="isa-serial"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       </target>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </serial>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <console type="pty">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <log file="/var/lib/nova/instances/67db1f19-3436-4e1e-bf63-266846e1380d/console.log" append="off"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <target type="serial" port="0"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </console>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <input type="tablet" bus="usb">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="usb" bus="0" port="1"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </input>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <input type="mouse" bus="ps2"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <listen type="address" address="::"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </graphics>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <video>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <model type="virtio" heads="1" primary="yes"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </video>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <stats period="10"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </memballoon>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     <rng model="virtio">
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <backend model="random">/dev/urandom</backend>
Sep 30 18:50:05 compute-0 nova_compute[265391]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]:     </rng>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   </devices>
Sep 30 18:50:05 compute-0 nova_compute[265391]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Sep 30 18:50:05 compute-0 nova_compute[265391]: </domain>
Sep 30 18:50:05 compute-0 nova_compute[265391]:  _update_pci_dev_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:166
Sep 30 18:50:05 compute-0 nova_compute[265391]: 2025-09-30 18:50:05.048 2 DEBUG nova.virt.libvirt.driver [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] About to invoke the migrate API _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11175
Sep 30 18:50:05 compute-0 nova_compute[265391]: 2025-09-30 18:50:05.101 2 WARNING neutronclient.v2_0.client [req-7c006239-7f35-4f3f-a834-36d1967821ed req-53a15a66-2650-458f-bf5c-1e0b784a15dd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:50:05 compute-0 nova_compute[265391]: 2025-09-30 18:50:05.230 2 DEBUG nova.network.neutron [req-7c006239-7f35-4f3f-a834-36d1967821ed req-53a15a66-2650-458f-bf5c-1e0b784a15dd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Updated VIF entry in instance network info cache for port 2dab1b31-affc-4fc3-9d5e-698d2cd44d6e. _build_network_info_model /usr/lib/python3.12/site-packages/nova/network/neutron.py:3542
Sep 30 18:50:05 compute-0 nova_compute[265391]: 2025-09-30 18:50:05.230 2 DEBUG nova.network.neutron [req-7c006239-7f35-4f3f-a834-36d1967821ed req-53a15a66-2650-458f-bf5c-1e0b784a15dd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Updating instance_info_cache with network_info: [{"id": "2dab1b31-affc-4fc3-9d5e-698d2cd44d6e", "address": "fa:16:3e:2d:d1:75", "network": {"id": "f4658d55-a8f9-48f1-846d-61df3d830821", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2093820932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67cbb3b670e445a4b97abcc92749d126", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dab1b31-af", "ovs_interfaceid": "2dab1b31-affc-4fc3-9d5e-698d2cd44d6e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
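The instance_info_cache update above embeds a plain JSON list in the log line. For illustration only (a standalone snippet, not a nova API), the fixed IPv4 address and the live-migration target recorded in such an entry could be pulled out like this, assuming network_info_json holds that JSON text:

    # Sketch: extract the fixed IPv4 address and the migration target from one
    # network_info entry shaped like the JSON logged above.
    # network_info_json is assumed to hold that JSON list as a string.
    import json

    entry = json.loads(network_info_json)[0]
    fixed_ips = [ip["address"]
                 for subnet in entry["network"]["subnets"]
                 for ip in subnet["ips"]
                 if ip["type"] == "fixed"]
    target = entry["profile"].get("migrating_to")
    print(fixed_ips, target)   # ['10.100.0.6'] compute-1.ctlplane.example.com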
Sep 30 18:50:05 compute-0 nova_compute[265391]: 2025-09-30 18:50:05.302 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:50:05 compute-0 nova_compute[265391]: 2025-09-30 18:50:05.302 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.635s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:50:05 compute-0 nova_compute[265391]: 2025-09-30 18:50:05.537 2 DEBUG nova.virt.libvirt.migration [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Current None elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Sep 30 18:50:05 compute-0 nova_compute[265391]: 2025-09-30 18:50:05.538 2 INFO nova.virt.libvirt.migration [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Increasing downtime to 50 ms after 0 sec elapsed time
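The update_downtime entries above list the downtime schedule as (elapsed seconds, allowed downtime in ms) pairs from (0, 50) to (1500, 500). Those numbers are consistent with a linear ramp built from a 500 ms maximum, 10 steps, and 150 s between steps; the sketch below reproduces the logged table (the three values are inferred from the log output, not read from nova.conf):

    # Sketch: reproduce the downtime step table seen in the log above.
    # max_downtime/steps/delay are inferred from the logged pairs, not from config.
    def downtime_steps(max_downtime=500, steps=10, delay=150):
        base = max_downtime / steps                 # first allowed downtime: 50 ms
        offset = (max_downtime - base) / steps      # per-step increase: 45 ms
        for i in range(steps + 1):
            yield int(delay * i), int(base + offset * i)

    print(list(downtime_steps()))
    # [(0, 50), (150, 95), (300, 140), ..., (1350, 455), (1500, 500)]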
Sep 30 18:50:05 compute-0 nova_compute[265391]: 2025-09-30 18:50:05.736 2 DEBUG oslo_concurrency.lockutils [req-7c006239-7f35-4f3f-a834-36d1967821ed req-53a15a66-2650-458f-bf5c-1e0b784a15dd 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Releasing lock "refresh_cache-67db1f19-3436-4e1e-bf63-266846e1380d" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Sep 30 18:50:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2153: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:50:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:50:06.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:06 compute-0 nova_compute[265391]: 2025-09-30 18:50:06.302 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:50:06 compute-0 nova_compute[265391]: 2025-09-30 18:50:06.303 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:50:06 compute-0 nova_compute[265391]: 2025-09-30 18:50:06.303 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:50:06 compute-0 nova_compute[265391]: 2025-09-30 18:50:06.303 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:50:06 compute-0 nova_compute[265391]: 2025-09-30 18:50:06.303 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:50:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:50:06 compute-0 nova_compute[265391]: 2025-09-30 18:50:06.556 2 INFO nova.virt.libvirt.driver [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Migration running for 1 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
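The progress line above reports memory and disk "100% remaining" with processed, remaining, and total all zero, which is plausible this early in the migration, before libvirt has published non-zero job statistics. A hedged sketch of deriving such a percentage with a guard for the all-zero case:

    # Sketch: percent-remaining from libvirt job counters, guarding the
    # all-zero case visible in the log line above.
    def percent_remaining(processed: int, remaining: int, total: int) -> int:
        if total == 0:
            return 100          # no stats yet -> treat everything as remaining
        return int(remaining * 100 / total)

    print(percent_remaining(0, 0, 0))   # 100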
Sep 30 18:50:06 compute-0 nova_compute[265391]: 2025-09-30 18:50:06.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:50:06.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:07 compute-0 ceph-mon[73755]: pgmap v2153: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:50:07 compute-0 nova_compute[265391]: 2025-09-30 18:50:07.059 2 DEBUG nova.virt.libvirt.migration [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Sep 30 18:50:07 compute-0 nova_compute[265391]: 2025-09-30 18:50:07.059 2 DEBUG nova.virt.libvirt.migration [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Downtime does not need to change update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:671
Sep 30 18:50:07 compute-0 nova_compute[265391]: 2025-09-30 18:50:07.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:50:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:50:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:50:07.401Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:50:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:50:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:50:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:50:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:50:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:50:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:50:07 compute-0 nova_compute[265391]: 2025-09-30 18:50:07.563 2 DEBUG nova.virt.libvirt.migration [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Current 50 elapsed 3 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Sep 30 18:50:07 compute-0 nova_compute[265391]: 2025-09-30 18:50:07.563 2 DEBUG nova.virt.libvirt.migration [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Downtime does not need to change update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:671
Sep 30 18:50:07 compute-0 kernel: tap2dab1b31-af (unregistering): left promiscuous mode
Sep 30 18:50:07 compute-0 NetworkManager[45059]: <info>  [1759258207.7012] device (tap2dab1b31-af): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 18:50:07 compute-0 nova_compute[265391]: 2025-09-30 18:50:07.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:07 compute-0 ovn_controller[156242]: 2025-09-30T18:50:07Z|00322|binding|INFO|Releasing lport 2dab1b31-affc-4fc3-9d5e-698d2cd44d6e from this chassis (sb_readonly=0)
Sep 30 18:50:07 compute-0 ovn_controller[156242]: 2025-09-30T18:50:07Z|00323|binding|INFO|Setting lport 2dab1b31-affc-4fc3-9d5e-698d2cd44d6e down in Southbound
Sep 30 18:50:07 compute-0 ovn_controller[156242]: 2025-09-30T18:50:07Z|00324|binding|INFO|Removing iface tap2dab1b31-af ovn-installed in OVS
Sep 30 18:50:07 compute-0 nova_compute[265391]: 2025-09-30 18:50:07.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:07 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:50:07.717 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:d1:75 10.100.0.6'], port_security=['fa:16:3e:2d:d1:75 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '81ab3fff-d6d4-4262-9f24-1b212876e52c'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '67db1f19-3436-4e1e-bf63-266846e1380d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f4658d55-a8f9-48f1-846d-61df3d830821', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3359c464e0344756a39ce5c7088b9eba', 'neutron:revision_number': '10', 'neutron:security_group_ids': '3a57c776-d79c-4096-859e-411dcf78cfa5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0884332-fe68-47c8-9c8c-5c6a7c53f7f5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>], logical_port=2dab1b31-affc-4fc3-9d5e-698d2cd44d6e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa11b328530>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:50:07 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:50:07.717 166158 INFO neutron.agent.ovn.metadata.agent [-] Port 2dab1b31-affc-4fc3-9d5e-698d2cd44d6e in datapath f4658d55-a8f9-48f1-846d-61df3d830821 unbound from our chassis
Sep 30 18:50:07 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:50:07.719 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f4658d55-a8f9-48f1-846d-61df3d830821, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:50:07 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:50:07.720 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[1fbc5fe1-a614-4661-8d62-9072ee2be6dd]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:50:07 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:50:07.720 166158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821 namespace which is not needed anymore
Sep 30 18:50:07 compute-0 nova_compute[265391]: 2025-09-30 18:50:07.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:07 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000026.scope: Deactivated successfully.
Sep 30 18:50:07 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000026.scope: Consumed 14.474s CPU time.
Sep 30 18:50:07 compute-0 systemd-machined[219917]: Machine qemu-29-instance-00000026 terminated.
Sep 30 18:50:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2154: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:50:07 compute-0 neutron-haproxy-ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821[361532]: [NOTICE]   (361536) : haproxy version is 3.0.5-8e879a5
Sep 30 18:50:07 compute-0 neutron-haproxy-ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821[361532]: [NOTICE]   (361536) : path to executable is /usr/sbin/haproxy
Sep 30 18:50:07 compute-0 neutron-haproxy-ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821[361532]: [WARNING]  (361536) : Exiting Master process...
Sep 30 18:50:07 compute-0 podman[362706]: 2025-09-30 18:50:07.837439987 +0000 UTC m=+0.035632054 container kill 7d86d798a50dff7b2f760fd8aed9f9c370fcce14e2bc420d19c47338d1a05267 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Sep 30 18:50:07 compute-0 neutron-haproxy-ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821[361532]: [ALERT]    (361536) : Current worker (361538) exited with code 143 (Terminated)
Sep 30 18:50:07 compute-0 neutron-haproxy-ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821[361532]: [WARNING]  (361536) : All workers exited. Exiting... (0)
Sep 30 18:50:07 compute-0 systemd[1]: libpod-7d86d798a50dff7b2f760fd8aed9f9c370fcce14e2bc420d19c47338d1a05267.scope: Deactivated successfully.
Sep 30 18:50:07 compute-0 virtqemud[265263]: Unable to get XATTR trusted.libvirt.security.ref_selinux on 67db1f19-3436-4e1e-bf63-266846e1380d_disk: No such file or directory
Sep 30 18:50:07 compute-0 virtqemud[265263]: Unable to get XATTR trusted.libvirt.security.ref_dac on 67db1f19-3436-4e1e-bf63-266846e1380d_disk: No such file or directory
Sep 30 18:50:07 compute-0 nova_compute[265391]: 2025-09-30 18:50:07.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:07 compute-0 nova_compute[265391]: 2025-09-30 18:50:07.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:07 compute-0 podman[362721]: 2025-09-30 18:50:07.883356277 +0000 UTC m=+0.027781721 container died 7d86d798a50dff7b2f760fd8aed9f9c370fcce14e2bc420d19c47338d1a05267 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.build-date=20250930)
Sep 30 18:50:07 compute-0 nova_compute[265391]: 2025-09-30 18:50:07.884 2 DEBUG nova.compute.manager [req-e4202884-1b19-4d64-9141-f197aa4f2218 req-9cb55d4f-4b63-49ec-9966-efb8a8bad0af 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Received event network-vif-unplugged-2dab1b31-affc-4fc3-9d5e-698d2cd44d6e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:50:07 compute-0 nova_compute[265391]: 2025-09-30 18:50:07.884 2 DEBUG oslo_concurrency.lockutils [req-e4202884-1b19-4d64-9141-f197aa4f2218 req-9cb55d4f-4b63-49ec-9966-efb8a8bad0af 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:50:07 compute-0 nova_compute[265391]: 2025-09-30 18:50:07.884 2 DEBUG oslo_concurrency.lockutils [req-e4202884-1b19-4d64-9141-f197aa4f2218 req-9cb55d4f-4b63-49ec-9966-efb8a8bad0af 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:50:07 compute-0 nova_compute[265391]: 2025-09-30 18:50:07.884 2 DEBUG oslo_concurrency.lockutils [req-e4202884-1b19-4d64-9141-f197aa4f2218 req-9cb55d4f-4b63-49ec-9966-efb8a8bad0af 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:50:07 compute-0 nova_compute[265391]: 2025-09-30 18:50:07.885 2 DEBUG nova.compute.manager [req-e4202884-1b19-4d64-9141-f197aa4f2218 req-9cb55d4f-4b63-49ec-9966-efb8a8bad0af 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] No waiting events found dispatching network-vif-unplugged-2dab1b31-affc-4fc3-9d5e-698d2cd44d6e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:50:07 compute-0 nova_compute[265391]: 2025-09-30 18:50:07.885 2 DEBUG nova.compute.manager [req-e4202884-1b19-4d64-9141-f197aa4f2218 req-9cb55d4f-4b63-49ec-9966-efb8a8bad0af 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Received event network-vif-unplugged-2dab1b31-affc-4fc3-9d5e-698d2cd44d6e for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:50:07 compute-0 virtqemud[265263]: Cannot recv data: Input/output error
Sep 30 18:50:07 compute-0 nova_compute[265391]: 2025-09-30 18:50:07.888 2 DEBUG nova.virt.libvirt.driver [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Migrate API has completed _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11182
Sep 30 18:50:07 compute-0 nova_compute[265391]: 2025-09-30 18:50:07.888 2 DEBUG nova.virt.libvirt.driver [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Migration operation thread has finished _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11230
Sep 30 18:50:07 compute-0 nova_compute[265391]: 2025-09-30 18:50:07.889 2 DEBUG nova.virt.libvirt.driver [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Migration operation thread notification thread_finished /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11533
Sep 30 18:50:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-7447b4c959a18d109737b2675c88120c32aaba9c1b61fe493cd5c11d7f324576-merged.mount: Deactivated successfully.
Sep 30 18:50:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7d86d798a50dff7b2f760fd8aed9f9c370fcce14e2bc420d19c47338d1a05267-userdata-shm.mount: Deactivated successfully.
Sep 30 18:50:07 compute-0 podman[362721]: 2025-09-30 18:50:07.925686825 +0000 UTC m=+0.070112269 container cleanup 7d86d798a50dff7b2f760fd8aed9f9c370fcce14e2bc420d19c47338d1a05267 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Sep 30 18:50:07 compute-0 systemd[1]: libpod-conmon-7d86d798a50dff7b2f760fd8aed9f9c370fcce14e2bc420d19c47338d1a05267.scope: Deactivated successfully.
Sep 30 18:50:07 compute-0 podman[362723]: 2025-09-30 18:50:07.943014994 +0000 UTC m=+0.081148205 container remove 7d86d798a50dff7b2f760fd8aed9f9c370fcce14e2bc420d19c47338d1a05267 (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:50:07 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:50:07.949 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[74ca7435-4bd9-43e1-96c6-f1b25459d63c]: (4, ("Tue Sep 30 06:50:07 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821 (7d86d798a50dff7b2f760fd8aed9f9c370fcce14e2bc420d19c47338d1a05267)\n7d86d798a50dff7b2f760fd8aed9f9c370fcce14e2bc420d19c47338d1a05267\nTue Sep 30 06:50:07 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821 (7d86d798a50dff7b2f760fd8aed9f9c370fcce14e2bc420d19c47338d1a05267)\n7d86d798a50dff7b2f760fd8aed9f9c370fcce14e2bc420d19c47338d1a05267\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:50:07 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:50:07.950 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[a8940a0a-a343-414c-8f0a-be3f56ef6b61]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:50:07 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:50:07.951 166158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f4658d55-a8f9-48f1-846d-61df3d830821.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f4658d55-a8f9-48f1-846d-61df3d830821.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Sep 30 18:50:07 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:50:07.951 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[b538cc86-95fe-45c5-8862-578e3b511cc1]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:50:07 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:50:07.952 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4658d55-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:50:07 compute-0 nova_compute[265391]: 2025-09-30 18:50:07.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:07 compute-0 kernel: tapf4658d55-a0: left promiscuous mode
Sep 30 18:50:07 compute-0 nova_compute[265391]: 2025-09-30 18:50:07.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:07 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:50:07.973 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[3a8542c9-e12e-48c1-9ab4-44336b8004e3]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:50:08 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:50:08.003 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[949316e9-7350-49b5-8d02-2c543f68fe33]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:50:08 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:50:08.004 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[d8856292-8522-4703-86d3-baac2ccff0f0]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:50:08 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:50:08.021 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[53c17c24-d06b-4a3b-929e-8d5df2a41c33]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675311, 'reachable_time': 33720, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362762, 'error': None, 'target': 'ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:50:08 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:50:08.023 166383 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f4658d55-a8f9-48f1-846d-61df3d830821 deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Sep 30 18:50:08 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:50:08.023 166383 DEBUG oslo.privsep.daemon [-] privsep: reply[2b306379-3305-42c0-bb3e-4c97609fc549]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:50:08 compute-0 systemd[1]: run-netns-ovnmeta\x2df4658d55\x2da8f9\x2d48f1\x2d846d\x2d61df3d830821.mount: Deactivated successfully.
Sep 30 18:50:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:50:08 compute-0 nova_compute[265391]: 2025-09-30 18:50:08.065 2 DEBUG nova.virt.libvirt.guest [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid '67db1f19-3436-4e1e-bf63-266846e1380d' (instance-00000026) get_job_info /usr/lib/python3.12/site-packages/nova/virt/libvirt/guest.py:687
Sep 30 18:50:08 compute-0 nova_compute[265391]: 2025-09-30 18:50:08.065 2 INFO nova.virt.libvirt.driver [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Migration operation has completed
Sep 30 18:50:08 compute-0 nova_compute[265391]: 2025-09-30 18:50:08.066 2 INFO nova.compute.manager [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] _post_live_migration() is started..
Sep 30 18:50:08 compute-0 nova_compute[265391]: 2025-09-30 18:50:08.080 2 WARNING neutronclient.v2_0.client [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:50:08 compute-0 nova_compute[265391]: 2025-09-30 18:50:08.080 2 WARNING neutronclient.v2_0.client [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Sep 30 18:50:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:50:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:50:08.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:50:08
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['volumes', '.nfs', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'images', 'default.rgw.meta', '.mgr', 'default.rgw.log']
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002276006818958781 of space, bias 1.0, pg target 0.45520136379175624 quantized to 32 (current 32)
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:50:08 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:50:08.353 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:50:08 compute-0 nova_compute[265391]: 2025-09-30 18:50:08.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:08 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:50:08.355 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:50:08 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:50:08.356 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:50:08 compute-0 nova_compute[265391]: 2025-09-30 18:50:08.424 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:50:08 compute-0 nova_compute[265391]: 2025-09-30 18:50:08.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:50:08 compute-0 nova_compute[265391]: 2025-09-30 18:50:08.428 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11909
Sep 30 18:50:08 compute-0 nova_compute[265391]: 2025-09-30 18:50:08.570 2 DEBUG nova.network.neutron [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Activated binding for port 2dab1b31-affc-4fc3-9d5e-698d2cd44d6e and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.12/site-packages/nova/network/neutron.py:3241
Sep 30 18:50:08 compute-0 nova_compute[265391]: 2025-09-30 18:50:08.571 2 DEBUG nova.compute.manager [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "2dab1b31-affc-4fc3-9d5e-698d2cd44d6e", "address": "fa:16:3e:2d:d1:75", "network": {"id": "f4658d55-a8f9-48f1-846d-61df3d830821", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2093820932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67cbb3b670e445a4b97abcc92749d126", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dab1b31-af", "ovs_interfaceid": "2dab1b31-affc-4fc3-9d5e-698d2cd44d6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10059
Sep 30 18:50:08 compute-0 nova_compute[265391]: 2025-09-30 18:50:08.572 2 DEBUG nova.virt.libvirt.vif [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-09-30T18:48:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteZoneMigrationStrategy-server-983808530',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutezonemigrationstrategy-server-983808530',id=38,image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-09-30T18:49:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3359c464e0344756a39ce5c7088b9eba',ramdisk_id='',reservation_id='r-wltr50iq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,manager,reader,admin',image_base_image_ref='5b99cbca-b655-4be5-8343-cf504005c42e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteZoneMigrationStrategy-613400940',owner_user_name='tempest-TestExecuteZoneMigrationStrategy-613400940-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T18:49:48Z,user_data=None,user_id='f560266d133f4f1ba4a908e3cdcfa59d',uuid=67db1f19-3436-4e1e-bf63-266846e1380d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2dab1b31-affc-4fc3-9d5e-698d2cd44d6e", "address": "fa:16:3e:2d:d1:75", "network": {"id": "f4658d55-a8f9-48f1-846d-61df3d830821", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2093820932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67cbb3b670e445a4b97abcc92749d126", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dab1b31-af", "ovs_interfaceid": "2dab1b31-affc-4fc3-9d5e-698d2cd44d6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug 
/usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Sep 30 18:50:08 compute-0 nova_compute[265391]: 2025-09-30 18:50:08.572 2 DEBUG nova.network.os_vif_util [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converting VIF {"id": "2dab1b31-affc-4fc3-9d5e-698d2cd44d6e", "address": "fa:16:3e:2d:d1:75", "network": {"id": "f4658d55-a8f9-48f1-846d-61df3d830821", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2093820932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67cbb3b670e445a4b97abcc92749d126", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dab1b31-af", "ovs_interfaceid": "2dab1b31-affc-4fc3-9d5e-698d2cd44d6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Sep 30 18:50:08 compute-0 nova_compute[265391]: 2025-09-30 18:50:08.573 2 DEBUG nova.network.os_vif_util [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:d1:75,bridge_name='br-int',has_traffic_filtering=True,id=2dab1b31-affc-4fc3-9d5e-698d2cd44d6e,network=Network(f4658d55-a8f9-48f1-846d-61df3d830821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2dab1b31-af') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Sep 30 18:50:08 compute-0 nova_compute[265391]: 2025-09-30 18:50:08.573 2 DEBUG os_vif [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:d1:75,bridge_name='br-int',has_traffic_filtering=True,id=2dab1b31-affc-4fc3-9d5e-698d2cd44d6e,network=Network(f4658d55-a8f9-48f1-846d-61df3d830821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2dab1b31-af') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Sep 30 18:50:08 compute-0 nova_compute[265391]: 2025-09-30 18:50:08.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:08 compute-0 nova_compute[265391]: 2025-09-30 18:50:08.576 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2dab1b31-af, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:50:08 compute-0 nova_compute[265391]: 2025-09-30 18:50:08.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:08 compute-0 nova_compute[265391]: 2025-09-30 18:50:08.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:50:08 compute-0 nova_compute[265391]: 2025-09-30 18:50:08.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:08 compute-0 nova_compute[265391]: 2025-09-30 18:50:08.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:08 compute-0 nova_compute[265391]: 2025-09-30 18:50:08.613 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=412dca28-9826-4b12-b810-8d317e295575) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:50:08 compute-0 nova_compute[265391]: 2025-09-30 18:50:08.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:08 compute-0 nova_compute[265391]: 2025-09-30 18:50:08.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Sep 30 18:50:08 compute-0 nova_compute[265391]: 2025-09-30 18:50:08.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:08 compute-0 nova_compute[265391]: 2025-09-30 18:50:08.617 2 INFO os_vif [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:d1:75,bridge_name='br-int',has_traffic_filtering=True,id=2dab1b31-affc-4fc3-9d5e-698d2cd44d6e,network=Network(f4658d55-a8f9-48f1-846d-61df3d830821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2dab1b31-af')
Sep 30 18:50:08 compute-0 nova_compute[265391]: 2025-09-30 18:50:08.618 2 DEBUG oslo_concurrency.lockutils [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:50:08 compute-0 nova_compute[265391]: 2025-09-30 18:50:08.618 2 DEBUG oslo_concurrency.lockutils [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:50:08 compute-0 nova_compute[265391]: 2025-09-30 18:50:08.618 2 DEBUG oslo_concurrency.lockutils [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:50:08 compute-0 nova_compute[265391]: 2025-09-30 18:50:08.618 2 DEBUG nova.compute.manager [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10082
Sep 30 18:50:08 compute-0 nova_compute[265391]: 2025-09-30 18:50:08.619 2 INFO nova.virt.libvirt.driver [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Deleting instance files /var/lib/nova/instances/67db1f19-3436-4e1e-bf63-266846e1380d_del
Sep 30 18:50:08 compute-0 nova_compute[265391]: 2025-09-30 18:50:08.619 2 INFO nova.virt.libvirt.driver [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Deletion of /var/lib/nova/instances/67db1f19-3436-4e1e-bf63-266846e1380d_del complete
Sep 30 18:50:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:50:08.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:50:08] "GET /metrics HTTP/1.1" 200 46724 "" "Prometheus/2.51.0"
Sep 30 18:50:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:50:08] "GET /metrics HTTP/1.1" 200 46724 "" "Prometheus/2.51.0"
Sep 30 18:50:08 compute-0 nova_compute[265391]: 2025-09-30 18:50:08.935 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11918
Sep 30 18:50:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:50:08.942Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:50:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:50:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:50:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:50:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:50:09 compute-0 ceph-mon[73755]: pgmap v2154: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 1.2 KiB/s rd, 15 KiB/s wr, 1 op/s
Sep 30 18:50:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2155: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 5.8 KiB/s rd, 15 KiB/s wr, 7 op/s
Sep 30 18:50:09 compute-0 nova_compute[265391]: 2025-09-30 18:50:09.976 2 DEBUG nova.compute.manager [req-95d277d3-9fb7-4746-92ca-c2ed1715e8dc req-a67d556a-aa01-4207-a155-94addba26729 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Received event network-vif-plugged-2dab1b31-affc-4fc3-9d5e-698d2cd44d6e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:50:09 compute-0 nova_compute[265391]: 2025-09-30 18:50:09.976 2 DEBUG oslo_concurrency.lockutils [req-95d277d3-9fb7-4746-92ca-c2ed1715e8dc req-a67d556a-aa01-4207-a155-94addba26729 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:50:09 compute-0 nova_compute[265391]: 2025-09-30 18:50:09.976 2 DEBUG oslo_concurrency.lockutils [req-95d277d3-9fb7-4746-92ca-c2ed1715e8dc req-a67d556a-aa01-4207-a155-94addba26729 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:50:09 compute-0 nova_compute[265391]: 2025-09-30 18:50:09.976 2 DEBUG oslo_concurrency.lockutils [req-95d277d3-9fb7-4746-92ca-c2ed1715e8dc req-a67d556a-aa01-4207-a155-94addba26729 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:50:09 compute-0 nova_compute[265391]: 2025-09-30 18:50:09.977 2 DEBUG nova.compute.manager [req-95d277d3-9fb7-4746-92ca-c2ed1715e8dc req-a67d556a-aa01-4207-a155-94addba26729 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] No waiting events found dispatching network-vif-plugged-2dab1b31-affc-4fc3-9d5e-698d2cd44d6e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:50:09 compute-0 nova_compute[265391]: 2025-09-30 18:50:09.977 2 WARNING nova.compute.manager [req-95d277d3-9fb7-4746-92ca-c2ed1715e8dc req-a67d556a-aa01-4207-a155-94addba26729 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Received unexpected event network-vif-plugged-2dab1b31-affc-4fc3-9d5e-698d2cd44d6e for instance with vm_state active and task_state migrating.
Sep 30 18:50:09 compute-0 nova_compute[265391]: 2025-09-30 18:50:09.977 2 DEBUG nova.compute.manager [req-95d277d3-9fb7-4746-92ca-c2ed1715e8dc req-a67d556a-aa01-4207-a155-94addba26729 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Received event network-vif-unplugged-2dab1b31-affc-4fc3-9d5e-698d2cd44d6e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:50:09 compute-0 nova_compute[265391]: 2025-09-30 18:50:09.977 2 DEBUG oslo_concurrency.lockutils [req-95d277d3-9fb7-4746-92ca-c2ed1715e8dc req-a67d556a-aa01-4207-a155-94addba26729 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:50:09 compute-0 nova_compute[265391]: 2025-09-30 18:50:09.977 2 DEBUG oslo_concurrency.lockutils [req-95d277d3-9fb7-4746-92ca-c2ed1715e8dc req-a67d556a-aa01-4207-a155-94addba26729 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:50:09 compute-0 nova_compute[265391]: 2025-09-30 18:50:09.978 2 DEBUG oslo_concurrency.lockutils [req-95d277d3-9fb7-4746-92ca-c2ed1715e8dc req-a67d556a-aa01-4207-a155-94addba26729 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:50:09 compute-0 nova_compute[265391]: 2025-09-30 18:50:09.978 2 DEBUG nova.compute.manager [req-95d277d3-9fb7-4746-92ca-c2ed1715e8dc req-a67d556a-aa01-4207-a155-94addba26729 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] No waiting events found dispatching network-vif-unplugged-2dab1b31-affc-4fc3-9d5e-698d2cd44d6e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:50:09 compute-0 nova_compute[265391]: 2025-09-30 18:50:09.978 2 DEBUG nova.compute.manager [req-95d277d3-9fb7-4746-92ca-c2ed1715e8dc req-a67d556a-aa01-4207-a155-94addba26729 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Received event network-vif-unplugged-2dab1b31-affc-4fc3-9d5e-698d2cd44d6e for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:50:09 compute-0 nova_compute[265391]: 2025-09-30 18:50:09.978 2 DEBUG nova.compute.manager [req-95d277d3-9fb7-4746-92ca-c2ed1715e8dc req-a67d556a-aa01-4207-a155-94addba26729 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Received event network-vif-unplugged-2dab1b31-affc-4fc3-9d5e-698d2cd44d6e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:50:09 compute-0 nova_compute[265391]: 2025-09-30 18:50:09.978 2 DEBUG oslo_concurrency.lockutils [req-95d277d3-9fb7-4746-92ca-c2ed1715e8dc req-a67d556a-aa01-4207-a155-94addba26729 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:50:09 compute-0 nova_compute[265391]: 2025-09-30 18:50:09.979 2 DEBUG oslo_concurrency.lockutils [req-95d277d3-9fb7-4746-92ca-c2ed1715e8dc req-a67d556a-aa01-4207-a155-94addba26729 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:50:09 compute-0 nova_compute[265391]: 2025-09-30 18:50:09.979 2 DEBUG oslo_concurrency.lockutils [req-95d277d3-9fb7-4746-92ca-c2ed1715e8dc req-a67d556a-aa01-4207-a155-94addba26729 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:50:09 compute-0 nova_compute[265391]: 2025-09-30 18:50:09.979 2 DEBUG nova.compute.manager [req-95d277d3-9fb7-4746-92ca-c2ed1715e8dc req-a67d556a-aa01-4207-a155-94addba26729 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] No waiting events found dispatching network-vif-unplugged-2dab1b31-affc-4fc3-9d5e-698d2cd44d6e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:50:09 compute-0 nova_compute[265391]: 2025-09-30 18:50:09.979 2 DEBUG nova.compute.manager [req-95d277d3-9fb7-4746-92ca-c2ed1715e8dc req-a67d556a-aa01-4207-a155-94addba26729 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Received event network-vif-unplugged-2dab1b31-affc-4fc3-9d5e-698d2cd44d6e for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Sep 30 18:50:09 compute-0 nova_compute[265391]: 2025-09-30 18:50:09.979 2 DEBUG nova.compute.manager [req-95d277d3-9fb7-4746-92ca-c2ed1715e8dc req-a67d556a-aa01-4207-a155-94addba26729 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Received event network-vif-plugged-2dab1b31-affc-4fc3-9d5e-698d2cd44d6e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:50:09 compute-0 nova_compute[265391]: 2025-09-30 18:50:09.979 2 DEBUG oslo_concurrency.lockutils [req-95d277d3-9fb7-4746-92ca-c2ed1715e8dc req-a67d556a-aa01-4207-a155-94addba26729 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:50:09 compute-0 nova_compute[265391]: 2025-09-30 18:50:09.980 2 DEBUG oslo_concurrency.lockutils [req-95d277d3-9fb7-4746-92ca-c2ed1715e8dc req-a67d556a-aa01-4207-a155-94addba26729 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:50:09 compute-0 nova_compute[265391]: 2025-09-30 18:50:09.980 2 DEBUG oslo_concurrency.lockutils [req-95d277d3-9fb7-4746-92ca-c2ed1715e8dc req-a67d556a-aa01-4207-a155-94addba26729 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:50:09 compute-0 nova_compute[265391]: 2025-09-30 18:50:09.980 2 DEBUG nova.compute.manager [req-95d277d3-9fb7-4746-92ca-c2ed1715e8dc req-a67d556a-aa01-4207-a155-94addba26729 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] No waiting events found dispatching network-vif-plugged-2dab1b31-affc-4fc3-9d5e-698d2cd44d6e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:50:09 compute-0 nova_compute[265391]: 2025-09-30 18:50:09.980 2 WARNING nova.compute.manager [req-95d277d3-9fb7-4746-92ca-c2ed1715e8dc req-a67d556a-aa01-4207-a155-94addba26729 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Received unexpected event network-vif-plugged-2dab1b31-affc-4fc3-9d5e-698d2cd44d6e for instance with vm_state active and task_state migrating.
Sep 30 18:50:09 compute-0 nova_compute[265391]: 2025-09-30 18:50:09.980 2 DEBUG nova.compute.manager [req-95d277d3-9fb7-4746-92ca-c2ed1715e8dc req-a67d556a-aa01-4207-a155-94addba26729 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Received event network-vif-plugged-2dab1b31-affc-4fc3-9d5e-698d2cd44d6e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Sep 30 18:50:09 compute-0 nova_compute[265391]: 2025-09-30 18:50:09.981 2 DEBUG oslo_concurrency.lockutils [req-95d277d3-9fb7-4746-92ca-c2ed1715e8dc req-a67d556a-aa01-4207-a155-94addba26729 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:50:09 compute-0 nova_compute[265391]: 2025-09-30 18:50:09.981 2 DEBUG oslo_concurrency.lockutils [req-95d277d3-9fb7-4746-92ca-c2ed1715e8dc req-a67d556a-aa01-4207-a155-94addba26729 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:50:09 compute-0 nova_compute[265391]: 2025-09-30 18:50:09.981 2 DEBUG oslo_concurrency.lockutils [req-95d277d3-9fb7-4746-92ca-c2ed1715e8dc req-a67d556a-aa01-4207-a155-94addba26729 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:50:09 compute-0 nova_compute[265391]: 2025-09-30 18:50:09.981 2 DEBUG nova.compute.manager [req-95d277d3-9fb7-4746-92ca-c2ed1715e8dc req-a67d556a-aa01-4207-a155-94addba26729 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] No waiting events found dispatching network-vif-plugged-2dab1b31-affc-4fc3-9d5e-698d2cd44d6e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Sep 30 18:50:09 compute-0 nova_compute[265391]: 2025-09-30 18:50:09.981 2 WARNING nova.compute.manager [req-95d277d3-9fb7-4746-92ca-c2ed1715e8dc req-a67d556a-aa01-4207-a155-94addba26729 44d5b77b33de420994797a8a6d5d109d faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Received unexpected event network-vif-plugged-2dab1b31-affc-4fc3-9d5e-698d2cd44d6e for instance with vm_state active and task_state migrating.
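
Annotation: the repeated "Acquiring lock / acquired / released" DEBUG lines above come from oslo.concurrency's synchronized decorator wrapping a locally defined _pop_event, as the lock name "<instance-uuid>-events" and the dotted path InstanceEvents.pop_instance_event.<locals>._pop_event indicate. A minimal sketch of that pattern follows; it is not Nova's actual implementation, and the shape of the event store is assumed for illustration only.

    from oslo_concurrency import lockutils

    def pop_instance_event(events, instance_uuid, event_name):
        # Lock name mirrors the log, e.g. "67db1f19-3436-4e1e-bf63-266846e1380d-events".
        lock_name = '%s-events' % instance_uuid

        @lockutils.synchronized(lock_name)
        def _pop_event():
            # Return a registered waiter for this event, or None. When None,
            # Nova logs "No waiting events found dispatching <event>" and, for a
            # vif-plugged event on a migrating instance, the WARNING seen above.
            return events.get(instance_uuid, {}).pop(event_name, None)

        return _pop_event()
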
Sep 30 18:50:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:50:10.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:50:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:50:10.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:50:11 compute-0 ceph-mon[73755]: pgmap v2155: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 5.8 KiB/s rd, 15 KiB/s wr, 7 op/s
Sep 30 18:50:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:50:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2156: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 0 B/s wr, 5 op/s
Sep 30 18:50:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:50:12.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:12 compute-0 nova_compute[265391]: 2025-09-30 18:50:12.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:50:12.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:13 compute-0 ceph-mon[73755]: pgmap v2156: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 4.9 KiB/s rd, 0 B/s wr, 5 op/s
Sep 30 18:50:13 compute-0 nova_compute[265391]: 2025-09-30 18:50:13.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:50:13 compute-0 nova_compute[265391]: 2025-09-30 18:50:13.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2157: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 13 KiB/s rd, 1023 B/s wr, 19 op/s
Sep 30 18:50:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:50:13.935Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:50:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:50:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:50:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:50:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:50:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:50:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:50:14.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:50:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:50:14.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:15 compute-0 ceph-mon[73755]: pgmap v2157: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 13 KiB/s rd, 1023 B/s wr, 19 op/s
Sep 30 18:50:15 compute-0 podman[362773]: 2025-09-30 18:50:15.581790268 +0000 UTC m=+0.093793252 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:50:15 compute-0 podman[362771]: 2025-09-30 18:50:15.617390021 +0000 UTC m=+0.140518043 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Sep 30 18:50:15 compute-0 podman[362772]: 2025-09-30 18:50:15.628563151 +0000 UTC m=+0.147573056 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=watcher_latest, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Sep 30 18:50:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2158: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 13 KiB/s rd, 1023 B/s wr, 19 op/s
Sep 30 18:50:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:50:16.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:50:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:50:16.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:17 compute-0 ceph-mon[73755]: pgmap v2158: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 13 KiB/s rd, 1023 B/s wr, 19 op/s
Sep 30 18:50:17 compute-0 nova_compute[265391]: 2025-09-30 18:50:17.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:50:17.402Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:50:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2159: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 13 KiB/s rd, 1023 B/s wr, 19 op/s
Sep 30 18:50:18 compute-0 nova_compute[265391]: 2025-09-30 18:50:18.156 2 DEBUG oslo_concurrency.lockutils [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:50:18 compute-0 nova_compute[265391]: 2025-09-30 18:50:18.156 2 DEBUG oslo_concurrency.lockutils [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:50:18 compute-0 nova_compute[265391]: 2025-09-30 18:50:18.156 2 DEBUG oslo_concurrency.lockutils [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "67db1f19-3436-4e1e-bf63-266846e1380d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:50:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:50:18.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:18 compute-0 nova_compute[265391]: 2025-09-30 18:50:18.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:18 compute-0 nova_compute[265391]: 2025-09-30 18:50:18.670 2 DEBUG oslo_concurrency.lockutils [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:50:18 compute-0 nova_compute[265391]: 2025-09-30 18:50:18.670 2 DEBUG oslo_concurrency.lockutils [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:50:18 compute-0 nova_compute[265391]: 2025-09-30 18:50:18.671 2 DEBUG oslo_concurrency.lockutils [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:50:18 compute-0 nova_compute[265391]: 2025-09-30 18:50:18.671 2 DEBUG nova.compute.resource_tracker [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:50:18 compute-0 nova_compute[265391]: 2025-09-30 18:50:18.672 2 DEBUG oslo_concurrency.processutils [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:50:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:50:18.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:50:18] "GET /metrics HTTP/1.1" 200 46724 "" "Prometheus/2.51.0"
Sep 30 18:50:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:50:18] "GET /metrics HTTP/1.1" 200 46724 "" "Prometheus/2.51.0"
Sep 30 18:50:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:50:18.944Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:50:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:50:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:50:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:50:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:50:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:50:19 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1831620129' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:50:19 compute-0 nova_compute[265391]: 2025-09-30 18:50:19.141 2 DEBUG oslo_concurrency.processutils [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
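
Annotation: the "Running cmd (subprocess)" / "CMD ... returned: 0" pair above is oslo_concurrency.processutils shelling out to `ceph df` during the resource audit, most likely to gather Ceph pool capacity for the disk figures reported below. A minimal sketch of the same call through processutils (command arguments copied from the log; JSON key names are as found in recent Ceph releases and should be treated as an assumption):

    import json
    from oslo_concurrency import processutils

    # execute() returns (stdout, stderr) and raises ProcessExecutionError on a
    # non-zero exit code, which is why the log records "returned: 0".
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    # "total_avail_bytes" is the cluster-wide free space in `ceph df` JSON output
    # (key name assumed from recent Ceph versions).
    print(stats['stats']['total_avail_bytes'])
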
Sep 30 18:50:19 compute-0 ceph-mon[73755]: pgmap v2159: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 13 KiB/s rd, 1023 B/s wr, 19 op/s
Sep 30 18:50:19 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1831620129' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:50:19 compute-0 nova_compute[265391]: 2025-09-30 18:50:19.302 2 WARNING nova.virt.libvirt.driver [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:50:19 compute-0 nova_compute[265391]: 2025-09-30 18:50:19.303 2 DEBUG oslo_concurrency.processutils [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:50:19 compute-0 nova_compute[265391]: 2025-09-30 18:50:19.325 2 DEBUG oslo_concurrency.processutils [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.022s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:50:19 compute-0 nova_compute[265391]: 2025-09-30 18:50:19.326 2 DEBUG nova.compute.resource_tracker [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4265MB free_disk=39.901153564453125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:50:19 compute-0 nova_compute[265391]: 2025-09-30 18:50:19.326 2 DEBUG oslo_concurrency.lockutils [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:50:19 compute-0 nova_compute[265391]: 2025-09-30 18:50:19.327 2 DEBUG oslo_concurrency.lockutils [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:50:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2160: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 77 KiB/s rd, 9.1 KiB/s wr, 127 op/s
Sep 30 18:50:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:50:20.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:20 compute-0 nova_compute[265391]: 2025-09-30 18:50:20.348 2 DEBUG nova.compute.resource_tracker [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Migration for instance 67db1f19-3436-4e1e-bf63-266846e1380d refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Sep 30 18:50:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:50:20.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:20 compute-0 nova_compute[265391]: 2025-09-30 18:50:20.856 2 DEBUG nova.compute.resource_tracker [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1596
Sep 30 18:50:20 compute-0 nova_compute[265391]: 2025-09-30 18:50:20.880 2 DEBUG nova.compute.resource_tracker [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Migration fea3269e-5d28-46ac-b65f-701e0a6ebefa is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Sep 30 18:50:20 compute-0 nova_compute[265391]: 2025-09-30 18:50:20.881 2 DEBUG nova.compute.resource_tracker [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:50:20 compute-0 nova_compute[265391]: 2025-09-30 18:50:20.881 2 DEBUG nova.compute.resource_tracker [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:50:19 up  1:53,  0 user,  load average: 0.64, 0.75, 0.81\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:50:20 compute-0 nova_compute[265391]: 2025-09-30 18:50:20.916 2 DEBUG oslo_concurrency.processutils [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:50:21 compute-0 ceph-mon[73755]: pgmap v2160: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 77 KiB/s rd, 9.1 KiB/s wr, 127 op/s
Sep 30 18:50:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:50:21 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3968907927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:50:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:50:21 compute-0 nova_compute[265391]: 2025-09-30 18:50:21.352 2 DEBUG oslo_concurrency.processutils [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:50:21 compute-0 nova_compute[265391]: 2025-09-30 18:50:21.357 2 DEBUG nova.compute.provider_tree [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:50:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2161: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 72 KiB/s rd, 9.1 KiB/s wr, 121 op/s
Sep 30 18:50:21 compute-0 nova_compute[265391]: 2025-09-30 18:50:21.865 2 DEBUG nova.scheduler.client.report [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
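
Annotation: the inventory data above pairs with the "Final resource view" logged a few lines earlier. Assuming placement's usual capacity formula, capacity = (total - reserved) * allocation_ratio, the allocatable totals for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 work out as follows; this is an illustrative calculation, not Nova code.

    # Values copied from the inventory line above; the formula is my reading of
    # how placement derives capacity from total/reserved/allocation_ratio.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 39,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 34.2
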
Sep 30 18:50:22 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3968907927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:50:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:50:22.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:22 compute-0 nova_compute[265391]: 2025-09-30 18:50:22.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:50:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:50:22 compute-0 nova_compute[265391]: 2025-09-30 18:50:22.376 2 DEBUG nova.compute.resource_tracker [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:50:22 compute-0 nova_compute[265391]: 2025-09-30 18:50:22.378 2 DEBUG oslo_concurrency.lockutils [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.051s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:50:22 compute-0 nova_compute[265391]: 2025-09-30 18:50:22.398 2 INFO nova.compute.manager [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Sep 30 18:50:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:50:22.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:22 compute-0 sudo[362893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:50:22 compute-0 sudo[362893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:50:22 compute-0 sudo[362893]: pam_unix(sudo:session): session closed for user root
Sep 30 18:50:23 compute-0 ceph-mon[73755]: pgmap v2161: 353 pgs: 353 active+clean; 200 MiB data, 489 MiB used, 40 GiB / 40 GiB avail; 72 KiB/s rd, 9.1 KiB/s wr, 121 op/s
Sep 30 18:50:23 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:50:23 compute-0 nova_compute[265391]: 2025-09-30 18:50:23.458 2 INFO nova.scheduler.client.report [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] Deleted allocation for migration fea3269e-5d28-46ac-b65f-701e0a6ebefa
Sep 30 18:50:23 compute-0 nova_compute[265391]: 2025-09-30 18:50:23.461 2 DEBUG nova.virt.libvirt.driver [None req-e39aaae2-dba1-4794-ad95-6450eb5961cb 23f48a6ade4f417e868e98b2d7f359bb faf843bff6f4482db6a9e1e7c4cd2a24 - - default default] [instance: 67db1f19-3436-4e1e-bf63-266846e1380d] Live migration monitoring is all done _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11566
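
Annotation: with the migration to compute-1.ctlplane.example.com reported finished and the migration allocation fea3269e-5d28-46ac-b65f-701e0a6ebefa deleted, an operator could confirm where instance 67db1f19-3436-4e1e-bf63-266846e1380d now runs. A minimal openstacksdk sketch; the cloud name and the compute_host attribute mapping are assumptions on my part, not taken from this log.

    import openstack

    # "overcloud" is a hypothetical clouds.yaml entry with admin credentials.
    conn = openstack.connect(cloud='overcloud')
    server = conn.compute.get_server('67db1f19-3436-4e1e-bf63-266846e1380d')
    # compute_host maps to OS-EXT-SRV-ATTR:host (admin-only field) in openstacksdk,
    # per my understanding; after the migration above it should report compute-1.
    print(server.status, server.compute_host)
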
Sep 30 18:50:23 compute-0 nova_compute[265391]: 2025-09-30 18:50:23.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2162: 353 pgs: 353 active+clean; 121 MiB data, 444 MiB used, 40 GiB / 40 GiB avail; 91 KiB/s rd, 10 KiB/s wr, 149 op/s
Sep 30 18:50:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:50:23.935Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:50:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:50:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:50:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:50:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:50:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:50:24.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:50:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:50:24.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:50:25 compute-0 ceph-mon[73755]: pgmap v2162: 353 pgs: 353 active+clean; 121 MiB data, 444 MiB used, 40 GiB / 40 GiB avail; 91 KiB/s rd, 10 KiB/s wr, 149 op/s
Sep 30 18:50:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2163: 353 pgs: 353 active+clean; 121 MiB data, 444 MiB used, 40 GiB / 40 GiB avail; 83 KiB/s rd, 9.2 KiB/s wr, 135 op/s
Sep 30 18:50:25 compute-0 nova_compute[265391]: 2025-09-30 18:50:25.935 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:50:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:50:26.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:26 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/268221078' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:50:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:50:26 compute-0 nova_compute[265391]: 2025-09-30 18:50:26.449 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:50:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:50:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:50:26.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:50:27 compute-0 ceph-mon[73755]: pgmap v2163: 353 pgs: 353 active+clean; 121 MiB data, 444 MiB used, 40 GiB / 40 GiB avail; 83 KiB/s rd, 9.2 KiB/s wr, 135 op/s
Sep 30 18:50:27 compute-0 nova_compute[265391]: 2025-09-30 18:50:27.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:50:27.404Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:50:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2164: 353 pgs: 353 active+clean; 121 MiB data, 444 MiB used, 40 GiB / 40 GiB avail; 83 KiB/s rd, 9.2 KiB/s wr, 135 op/s
Sep 30 18:50:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:50:28.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:28 compute-0 nova_compute[265391]: 2025-09-30 18:50:28.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:50:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:50:28.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:50:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:50:28] "GET /metrics HTTP/1.1" 200 46728 "" "Prometheus/2.51.0"
Sep 30 18:50:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:50:28] "GET /metrics HTTP/1.1" 200 46728 "" "Prometheus/2.51.0"
Sep 30 18:50:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:50:28.945Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:50:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:50:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:50:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:50:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:50:29 compute-0 ceph-mon[73755]: pgmap v2164: 353 pgs: 353 active+clean; 121 MiB data, 444 MiB used, 40 GiB / 40 GiB avail; 83 KiB/s rd, 9.2 KiB/s wr, 135 op/s
Sep 30 18:50:29 compute-0 podman[276673]: time="2025-09-30T18:50:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:50:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:50:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:50:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:50:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10320 "" "Go-http-client/1.1"
Sep 30 18:50:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2165: 353 pgs: 353 active+clean; 121 MiB data, 444 MiB used, 40 GiB / 40 GiB avail; 83 KiB/s rd, 9.2 KiB/s wr, 136 op/s
Sep 30 18:50:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:50:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:50:30.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:50:30 compute-0 podman[362927]: 2025-09-30 18:50:30.555830188 +0000 UTC m=+0.078283180 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4)
Sep 30 18:50:30 compute-0 podman[362926]: 2025-09-30 18:50:30.558323073 +0000 UTC m=+0.084742017 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:50:30 compute-0 podman[362928]: 2025-09-30 18:50:30.564662657 +0000 UTC m=+0.087825367 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm)
Sep 30 18:50:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:50:30.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:31 compute-0 ceph-mon[73755]: pgmap v2165: 353 pgs: 353 active+clean; 121 MiB data, 444 MiB used, 40 GiB / 40 GiB avail; 83 KiB/s rd, 9.2 KiB/s wr, 136 op/s
Sep 30 18:50:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:50:31 compute-0 openstack_network_exporter[279566]: ERROR   18:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:50:31 compute-0 openstack_network_exporter[279566]: ERROR   18:50:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:50:31 compute-0 openstack_network_exporter[279566]: ERROR   18:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:50:31 compute-0 openstack_network_exporter[279566]: ERROR   18:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:50:31 compute-0 openstack_network_exporter[279566]: ERROR   18:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:50:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2166: 353 pgs: 353 active+clean; 121 MiB data, 444 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:50:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:50:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:50:32.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:50:32 compute-0 nova_compute[265391]: 2025-09-30 18:50:32.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:50:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:50:32.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:50:33 compute-0 ceph-mon[73755]: pgmap v2166: 353 pgs: 353 active+clean; 121 MiB data, 444 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:50:33 compute-0 nova_compute[265391]: 2025-09-30 18:50:33.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2167: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 56 op/s
Sep 30 18:50:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:50:33.935Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:50:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:50:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:50:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:50:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:50:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:50:34.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:50:34.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:35 compute-0 ceph-mon[73755]: pgmap v2167: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 56 op/s
Sep 30 18:50:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2168: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:50:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:50:36.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:50:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:50:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2920653784' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:50:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:50:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2920653784' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
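The df and osd pool get-quota commands dispatched by client.openstack above are ordinary mon commands. A minimal sketch of issuing the same query through the python-rados bindings, where the conffile path and keyring lookup are assumptions; only the client name and command body are taken from the audit entries:

import json

import rados  # python3-rados bindings shipped with Ceph

# Client name matches the entity in the audit log; the conffile path is an assumption.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
cluster.connect()
try:
    cmd = json.dumps({"prefix": "osd pool get-quota",
                      "pool": "volumes", "format": "json"})
    ret, outbuf, errs = cluster.mon_command(cmd, b"")
    if ret == 0:
        print(json.loads(outbuf))
    else:
        print("mon_command failed:", ret, errs)
finally:
    cluster.shutdown()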
Sep 30 18:50:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:50:36.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:37 compute-0 nova_compute[265391]: 2025-09-30 18:50:37.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:37 compute-0 ceph-mon[73755]: pgmap v2168: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:50:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2920653784' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:50:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2920653784' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:50:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:50:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:50:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:50:37.405Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:50:37 compute-0 nova_compute[265391]: 2025-09-30 18:50:37.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:50:37 compute-0 nova_compute[265391]: 2025-09-30 18:50:37.428 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.12/site-packages/nova/compute/manager.py:11947
Sep 30 18:50:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:50:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:50:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:50:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:50:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:50:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:50:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2169: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:50:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:50:38.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:50:38 compute-0 nova_compute[265391]: 2025-09-30 18:50:38.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:50:38.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:50:38] "GET /metrics HTTP/1.1" 200 46708 "" "Prometheus/2.51.0"
Sep 30 18:50:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:50:38] "GET /metrics HTTP/1.1" 200 46708 "" "Prometheus/2.51.0"
Sep 30 18:50:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:50:38.946Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:50:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:50:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:50:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:50:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:50:39 compute-0 ceph-mon[73755]: pgmap v2169: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:50:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2170: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:50:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:50:40.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:50:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:50:40.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:50:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:50:41 compute-0 ceph-mon[73755]: pgmap v2170: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:50:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2171: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:50:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:50:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:50:42.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:50:42 compute-0 nova_compute[265391]: 2025-09-30 18:50:42.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:50:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:50:42.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:50:42 compute-0 sudo[363001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:50:42 compute-0 sudo[363001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:50:42 compute-0 sudo[363001]: pam_unix(sudo:session): session closed for user root
Sep 30 18:50:43 compute-0 ceph-mon[73755]: pgmap v2171: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:50:43 compute-0 nova_compute[265391]: 2025-09-30 18:50:43.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2172: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:50:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:50:43.937Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:50:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:50:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:50:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:50:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:50:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:50:44.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:50:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:50:44.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:50:45 compute-0 ceph-mon[73755]: pgmap v2172: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 18:50:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2173: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:50:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:50:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:50:46.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:50:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:50:46 compute-0 podman[363032]: 2025-09-30 18:50:46.564421274 +0000 UTC m=+0.077950372 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:50:46 compute-0 podman[363030]: 2025-09-30 18:50:46.590817558 +0000 UTC m=+0.114398326 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0)
Sep 30 18:50:46 compute-0 podman[363031]: 2025-09-30 18:50:46.614205684 +0000 UTC m=+0.131418477 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930)
Sep 30 18:50:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:50:46.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:47 compute-0 nova_compute[265391]: 2025-09-30 18:50:47.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:50:47.405Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:50:47 compute-0 ceph-mon[73755]: pgmap v2173: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:50:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2174: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:50:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:50:48.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:48 compute-0 nova_compute[265391]: 2025-09-30 18:50:48.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:50:48.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:50:48] "GET /metrics HTTP/1.1" 200 46708 "" "Prometheus/2.51.0"
Sep 30 18:50:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:50:48] "GET /metrics HTTP/1.1" 200 46708 "" "Prometheus/2.51.0"
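The GET /metrics hits above are Prometheus scraping the ceph-mgr prometheus module. A hedged sketch of pulling the same endpoint by hand; port 9283 is the module's default and an assumption here, since the log records only the access entry, not the port the scrape used:

import urllib.request

# Host is taken from the access log above; the 9283 port is an assumption (module default).
with urllib.request.urlopen("http://192.168.122.100:9283/metrics", timeout=5) as resp:
    body = resp.read().decode()

# Print the cluster health gauge exported by the prometheus module.
for line in body.splitlines():
    if line.startswith("ceph_health_status"):
        print(line)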
Sep 30 18:50:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:50:48.947Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:50:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:50:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:50:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:50:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:50:49 compute-0 sudo[363097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:50:49 compute-0 sudo[363097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:50:49 compute-0 sudo[363097]: pam_unix(sudo:session): session closed for user root
Sep 30 18:50:49 compute-0 sudo[363122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:50:49 compute-0 sudo[363122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:50:49 compute-0 ceph-mon[73755]: pgmap v2174: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:50:49 compute-0 sudo[363122]: pam_unix(sudo:session): session closed for user root
Sep 30 18:50:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2175: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:50:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:50:49 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:50:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:50:49 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:50:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2176: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 607 B/s rd, 0 op/s
Sep 30 18:50:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:50:49 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:50:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:50:49 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:50:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:50:49 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:50:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:50:49 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:50:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:50:49 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:50:50 compute-0 sudo[363181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:50:50 compute-0 sudo[363181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:50:50 compute-0 sudo[363181]: pam_unix(sudo:session): session closed for user root
Sep 30 18:50:50 compute-0 sudo[363206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:50:50 compute-0 sudo[363206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:50:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:50:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:50:50.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:50:50 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:50:50 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:50:50 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:50:50 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:50:50 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:50:50 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:50:50 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:50:50 compute-0 podman[363273]: 2025-09-30 18:50:50.500924252 +0000 UTC m=+0.056222938 container create 23dd06f5d346cc4573efa9ab09b489ef25b7c053b76b457cf5c8c49760c7b624 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_perlman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:50:50 compute-0 systemd[1]: Started libpod-conmon-23dd06f5d346cc4573efa9ab09b489ef25b7c053b76b457cf5c8c49760c7b624.scope.
Sep 30 18:50:50 compute-0 podman[363273]: 2025-09-30 18:50:50.470132174 +0000 UTC m=+0.025430900 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:50:50 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:50:50 compute-0 podman[363273]: 2025-09-30 18:50:50.611922638 +0000 UTC m=+0.167221314 container init 23dd06f5d346cc4573efa9ab09b489ef25b7c053b76b457cf5c8c49760c7b624 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:50:50 compute-0 podman[363273]: 2025-09-30 18:50:50.61892038 +0000 UTC m=+0.174219036 container start 23dd06f5d346cc4573efa9ab09b489ef25b7c053b76b457cf5c8c49760c7b624 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_perlman, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:50:50 compute-0 podman[363273]: 2025-09-30 18:50:50.622553684 +0000 UTC m=+0.177852360 container attach 23dd06f5d346cc4573efa9ab09b489ef25b7c053b76b457cf5c8c49760c7b624 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_perlman, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Sep 30 18:50:50 compute-0 tender_perlman[363289]: 167 167
Sep 30 18:50:50 compute-0 systemd[1]: libpod-23dd06f5d346cc4573efa9ab09b489ef25b7c053b76b457cf5c8c49760c7b624.scope: Deactivated successfully.
Sep 30 18:50:50 compute-0 conmon[363289]: conmon 23dd06f5d346cc4573ef <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-23dd06f5d346cc4573efa9ab09b489ef25b7c053b76b457cf5c8c49760c7b624.scope/container/memory.events
Sep 30 18:50:50 compute-0 podman[363273]: 2025-09-30 18:50:50.630306325 +0000 UTC m=+0.185605541 container died 23dd06f5d346cc4573efa9ab09b489ef25b7c053b76b457cf5c8c49760c7b624 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:50:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecd0e1885840da808c188268c79cd426db88c927b28003a8893fba4ed911e7a2-merged.mount: Deactivated successfully.
Sep 30 18:50:50 compute-0 podman[363273]: 2025-09-30 18:50:50.672839757 +0000 UTC m=+0.228138453 container remove 23dd06f5d346cc4573efa9ab09b489ef25b7c053b76b457cf5c8c49760c7b624 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_perlman, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Sep 30 18:50:50 compute-0 systemd[1]: libpod-conmon-23dd06f5d346cc4573efa9ab09b489ef25b7c053b76b457cf5c8c49760c7b624.scope: Deactivated successfully.
Sep 30 18:50:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:50:50.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:50 compute-0 podman[363313]: 2025-09-30 18:50:50.862809692 +0000 UTC m=+0.070138630 container create 8f28adc385d8fc41ce38c9443b73e3e720d8d04c2d6ea7b2313fe7b56b057c67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Sep 30 18:50:50 compute-0 systemd[1]: Started libpod-conmon-8f28adc385d8fc41ce38c9443b73e3e720d8d04c2d6ea7b2313fe7b56b057c67.scope.
Sep 30 18:50:50 compute-0 podman[363313]: 2025-09-30 18:50:50.836244173 +0000 UTC m=+0.043573211 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:50:50 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:50:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2fa9354f90f637638a61da41d810734ae53ef4afa6ca70ad1914bb7e8069015/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:50:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2fa9354f90f637638a61da41d810734ae53ef4afa6ca70ad1914bb7e8069015/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:50:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2fa9354f90f637638a61da41d810734ae53ef4afa6ca70ad1914bb7e8069015/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:50:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2fa9354f90f637638a61da41d810734ae53ef4afa6ca70ad1914bb7e8069015/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:50:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2fa9354f90f637638a61da41d810734ae53ef4afa6ca70ad1914bb7e8069015/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:50:50 compute-0 podman[363313]: 2025-09-30 18:50:50.957556638 +0000 UTC m=+0.164885596 container init 8f28adc385d8fc41ce38c9443b73e3e720d8d04c2d6ea7b2313fe7b56b057c67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_margulis, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Sep 30 18:50:50 compute-0 podman[363313]: 2025-09-30 18:50:50.973657255 +0000 UTC m=+0.180986193 container start 8f28adc385d8fc41ce38c9443b73e3e720d8d04c2d6ea7b2313fe7b56b057c67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_margulis, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Sep 30 18:50:50 compute-0 podman[363313]: 2025-09-30 18:50:50.977091414 +0000 UTC m=+0.184420372 container attach 8f28adc385d8fc41ce38c9443b73e3e720d8d04c2d6ea7b2313fe7b56b057c67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_margulis, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:50:51 compute-0 flamboyant_margulis[363330]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:50:51 compute-0 flamboyant_margulis[363330]: --> All data devices are unavailable
Sep 30 18:50:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:50:51 compute-0 systemd[1]: libpod-8f28adc385d8fc41ce38c9443b73e3e720d8d04c2d6ea7b2313fe7b56b057c67.scope: Deactivated successfully.
Sep 30 18:50:51 compute-0 podman[363313]: 2025-09-30 18:50:51.339144148 +0000 UTC m=+0.546473096 container died 8f28adc385d8fc41ce38c9443b73e3e720d8d04c2d6ea7b2313fe7b56b057c67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:50:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2fa9354f90f637638a61da41d810734ae53ef4afa6ca70ad1914bb7e8069015-merged.mount: Deactivated successfully.
Sep 30 18:50:51 compute-0 podman[363313]: 2025-09-30 18:50:51.387639685 +0000 UTC m=+0.594968643 container remove 8f28adc385d8fc41ce38c9443b73e3e720d8d04c2d6ea7b2313fe7b56b057c67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_margulis, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:50:51 compute-0 systemd[1]: libpod-conmon-8f28adc385d8fc41ce38c9443b73e3e720d8d04c2d6ea7b2313fe7b56b057c67.scope: Deactivated successfully.
Sep 30 18:50:51 compute-0 sudo[363206]: pam_unix(sudo:session): session closed for user root
Sep 30 18:50:51 compute-0 ceph-mon[73755]: pgmap v2175: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:50:51 compute-0 ceph-mon[73755]: pgmap v2176: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 607 B/s rd, 0 op/s
Sep 30 18:50:51 compute-0 sudo[363357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:50:51 compute-0 sudo[363357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:50:51 compute-0 sudo[363357]: pam_unix(sudo:session): session closed for user root
Sep 30 18:50:51 compute-0 sudo[363382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:50:51 compute-0 sudo[363382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:50:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2177: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 607 B/s rd, 0 op/s
Sep 30 18:50:52 compute-0 podman[363449]: 2025-09-30 18:50:52.016843772 +0000 UTC m=+0.049537615 container create 96d2c3b2821a145fc06a65c9526534dff3b717935900693a35c114bc7a3dfc2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_wilbur, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:50:52 compute-0 systemd[1]: Started libpod-conmon-96d2c3b2821a145fc06a65c9526534dff3b717935900693a35c114bc7a3dfc2d.scope.
Sep 30 18:50:52 compute-0 podman[363449]: 2025-09-30 18:50:51.994577385 +0000 UTC m=+0.027271278 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:50:52 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:50:52 compute-0 podman[363449]: 2025-09-30 18:50:52.109088973 +0000 UTC m=+0.141782896 container init 96d2c3b2821a145fc06a65c9526534dff3b717935900693a35c114bc7a3dfc2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_wilbur, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:50:52 compute-0 podman[363449]: 2025-09-30 18:50:52.115394526 +0000 UTC m=+0.148088369 container start 96d2c3b2821a145fc06a65c9526534dff3b717935900693a35c114bc7a3dfc2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_wilbur, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 18:50:52 compute-0 podman[363449]: 2025-09-30 18:50:52.118731673 +0000 UTC m=+0.151425566 container attach 96d2c3b2821a145fc06a65c9526534dff3b717935900693a35c114bc7a3dfc2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:50:52 compute-0 upbeat_wilbur[363465]: 167 167
Sep 30 18:50:52 compute-0 systemd[1]: libpod-96d2c3b2821a145fc06a65c9526534dff3b717935900693a35c114bc7a3dfc2d.scope: Deactivated successfully.
Sep 30 18:50:52 compute-0 podman[363470]: 2025-09-30 18:50:52.161118961 +0000 UTC m=+0.021975230 container died 96d2c3b2821a145fc06a65c9526534dff3b717935900693a35c114bc7a3dfc2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_wilbur, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:50:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef66ef8970644bd0c8c3607ad4cb780ad60013b6f8e33092dfa306e4cf822d91-merged.mount: Deactivated successfully.
Sep 30 18:50:52 compute-0 podman[363470]: 2025-09-30 18:50:52.198452639 +0000 UTC m=+0.059308888 container remove 96d2c3b2821a145fc06a65c9526534dff3b717935900693a35c114bc7a3dfc2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_wilbur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 18:50:52 compute-0 systemd[1]: libpod-conmon-96d2c3b2821a145fc06a65c9526534dff3b717935900693a35c114bc7a3dfc2d.scope: Deactivated successfully.
Sep 30 18:50:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:50:52.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:52 compute-0 nova_compute[265391]: 2025-09-30 18:50:52.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:50:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:50:52 compute-0 podman[363492]: 2025-09-30 18:50:52.437431733 +0000 UTC m=+0.060548810 container create 64911173a1adf77db8c7dee7bf495c6e189bb49e2050782b47796e4b6c9bf363 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_hofstadter, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid)
Sep 30 18:50:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:50:52 compute-0 systemd[1]: Started libpod-conmon-64911173a1adf77db8c7dee7bf495c6e189bb49e2050782b47796e4b6c9bf363.scope.
Sep 30 18:50:52 compute-0 podman[363492]: 2025-09-30 18:50:52.416545312 +0000 UTC m=+0.039662469 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:50:52 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:50:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de7605e1080df42800eda5c071314eeaca3f3a053081b77df5f45ef10f1e0052/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:50:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de7605e1080df42800eda5c071314eeaca3f3a053081b77df5f45ef10f1e0052/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:50:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de7605e1080df42800eda5c071314eeaca3f3a053081b77df5f45ef10f1e0052/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:50:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de7605e1080df42800eda5c071314eeaca3f3a053081b77df5f45ef10f1e0052/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:50:52 compute-0 podman[363492]: 2025-09-30 18:50:52.542058895 +0000 UTC m=+0.165176052 container init 64911173a1adf77db8c7dee7bf495c6e189bb49e2050782b47796e4b6c9bf363 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_hofstadter, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:50:52 compute-0 podman[363492]: 2025-09-30 18:50:52.547927897 +0000 UTC m=+0.171045004 container start 64911173a1adf77db8c7dee7bf495c6e189bb49e2050782b47796e4b6c9bf363 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_hofstadter, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:50:52 compute-0 podman[363492]: 2025-09-30 18:50:52.552263509 +0000 UTC m=+0.175380616 container attach 64911173a1adf77db8c7dee7bf495c6e189bb49e2050782b47796e4b6c9bf363 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_hofstadter, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Sep 30 18:50:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:50:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:50:52.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:50:52 compute-0 ecstatic_hofstadter[363508]: {
Sep 30 18:50:52 compute-0 ecstatic_hofstadter[363508]:     "0": [
Sep 30 18:50:52 compute-0 ecstatic_hofstadter[363508]:         {
Sep 30 18:50:52 compute-0 ecstatic_hofstadter[363508]:             "devices": [
Sep 30 18:50:52 compute-0 ecstatic_hofstadter[363508]:                 "/dev/loop3"
Sep 30 18:50:52 compute-0 ecstatic_hofstadter[363508]:             ],
Sep 30 18:50:52 compute-0 ecstatic_hofstadter[363508]:             "lv_name": "ceph_lv0",
Sep 30 18:50:52 compute-0 ecstatic_hofstadter[363508]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:50:52 compute-0 ecstatic_hofstadter[363508]:             "lv_size": "21470642176",
Sep 30 18:50:52 compute-0 ecstatic_hofstadter[363508]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:50:52 compute-0 ecstatic_hofstadter[363508]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:50:52 compute-0 ecstatic_hofstadter[363508]:             "name": "ceph_lv0",
Sep 30 18:50:52 compute-0 ecstatic_hofstadter[363508]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:50:52 compute-0 ecstatic_hofstadter[363508]:             "tags": {
Sep 30 18:50:52 compute-0 ecstatic_hofstadter[363508]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:50:52 compute-0 ecstatic_hofstadter[363508]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:50:52 compute-0 ecstatic_hofstadter[363508]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:50:52 compute-0 ecstatic_hofstadter[363508]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:50:52 compute-0 ecstatic_hofstadter[363508]:                 "ceph.cluster_name": "ceph",
Sep 30 18:50:52 compute-0 ecstatic_hofstadter[363508]:                 "ceph.crush_device_class": "",
Sep 30 18:50:52 compute-0 ecstatic_hofstadter[363508]:                 "ceph.encrypted": "0",
Sep 30 18:50:52 compute-0 ecstatic_hofstadter[363508]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:50:52 compute-0 ecstatic_hofstadter[363508]:                 "ceph.osd_id": "0",
Sep 30 18:50:52 compute-0 ecstatic_hofstadter[363508]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:50:52 compute-0 ecstatic_hofstadter[363508]:                 "ceph.type": "block",
Sep 30 18:50:52 compute-0 ecstatic_hofstadter[363508]:                 "ceph.vdo": "0",
Sep 30 18:50:52 compute-0 ecstatic_hofstadter[363508]:                 "ceph.with_tpm": "0"
Sep 30 18:50:52 compute-0 ecstatic_hofstadter[363508]:             },
Sep 30 18:50:52 compute-0 ecstatic_hofstadter[363508]:             "type": "block",
Sep 30 18:50:52 compute-0 ecstatic_hofstadter[363508]:             "vg_name": "ceph_vg0"
Sep 30 18:50:52 compute-0 ecstatic_hofstadter[363508]:         }
Sep 30 18:50:52 compute-0 ecstatic_hofstadter[363508]:     ]
Sep 30 18:50:52 compute-0 ecstatic_hofstadter[363508]: }
Sep 30 18:50:52 compute-0 systemd[1]: libpod-64911173a1adf77db8c7dee7bf495c6e189bb49e2050782b47796e4b6c9bf363.scope: Deactivated successfully.
Sep 30 18:50:52 compute-0 podman[363492]: 2025-09-30 18:50:52.879960143 +0000 UTC m=+0.503077320 container died 64911173a1adf77db8c7dee7bf495c6e189bb49e2050782b47796e4b6c9bf363 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_hofstadter, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Sep 30 18:50:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-de7605e1080df42800eda5c071314eeaca3f3a053081b77df5f45ef10f1e0052-merged.mount: Deactivated successfully.
Sep 30 18:50:52 compute-0 podman[363492]: 2025-09-30 18:50:52.931809186 +0000 UTC m=+0.554926293 container remove 64911173a1adf77db8c7dee7bf495c6e189bb49e2050782b47796e4b6c9bf363 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Sep 30 18:50:52 compute-0 systemd[1]: libpod-conmon-64911173a1adf77db8c7dee7bf495c6e189bb49e2050782b47796e4b6c9bf363.scope: Deactivated successfully.
Sep 30 18:50:52 compute-0 sudo[363382]: pam_unix(sudo:session): session closed for user root
Sep 30 18:50:53 compute-0 sudo[363531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:50:53 compute-0 sudo[363531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:50:53 compute-0 sudo[363531]: pam_unix(sudo:session): session closed for user root
Sep 30 18:50:53 compute-0 sudo[363556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:50:53 compute-0 sudo[363556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:50:53 compute-0 nova_compute[265391]: 2025-09-30 18:50:53.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:53 compute-0 ceph-mon[73755]: pgmap v2177: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 607 B/s rd, 0 op/s
Sep 30 18:50:53 compute-0 podman[363622]: 2025-09-30 18:50:53.50835312 +0000 UTC m=+0.043536930 container create 31e33e6dcedaa7ef23f1285cbac55b055bba6fd8908fd4bf29e964f7aaabe1b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_herschel, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:50:53 compute-0 systemd[1]: Started libpod-conmon-31e33e6dcedaa7ef23f1285cbac55b055bba6fd8908fd4bf29e964f7aaabe1b0.scope.
Sep 30 18:50:53 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:50:53 compute-0 podman[363622]: 2025-09-30 18:50:53.492442867 +0000 UTC m=+0.027626697 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:50:53 compute-0 podman[363622]: 2025-09-30 18:50:53.588874077 +0000 UTC m=+0.124057897 container init 31e33e6dcedaa7ef23f1285cbac55b055bba6fd8908fd4bf29e964f7aaabe1b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_herschel, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:50:53 compute-0 podman[363622]: 2025-09-30 18:50:53.596674799 +0000 UTC m=+0.131858619 container start 31e33e6dcedaa7ef23f1285cbac55b055bba6fd8908fd4bf29e964f7aaabe1b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:50:53 compute-0 podman[363622]: 2025-09-30 18:50:53.600738214 +0000 UTC m=+0.135922034 container attach 31e33e6dcedaa7ef23f1285cbac55b055bba6fd8908fd4bf29e964f7aaabe1b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:50:53 compute-0 beautiful_herschel[363639]: 167 167
Sep 30 18:50:53 compute-0 systemd[1]: libpod-31e33e6dcedaa7ef23f1285cbac55b055bba6fd8908fd4bf29e964f7aaabe1b0.scope: Deactivated successfully.
Sep 30 18:50:53 compute-0 podman[363622]: 2025-09-30 18:50:53.60366271 +0000 UTC m=+0.138846520 container died 31e33e6dcedaa7ef23f1285cbac55b055bba6fd8908fd4bf29e964f7aaabe1b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 18:50:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ee2329510d2561f590dcfcb5ad702da5e943015f5e9070e3694752129eb7b19-merged.mount: Deactivated successfully.
Sep 30 18:50:53 compute-0 nova_compute[265391]: 2025-09-30 18:50:53.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:53 compute-0 podman[363622]: 2025-09-30 18:50:53.643532163 +0000 UTC m=+0.178715973 container remove 31e33e6dcedaa7ef23f1285cbac55b055bba6fd8908fd4bf29e964f7aaabe1b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Sep 30 18:50:53 compute-0 systemd[1]: libpod-conmon-31e33e6dcedaa7ef23f1285cbac55b055bba6fd8908fd4bf29e964f7aaabe1b0.scope: Deactivated successfully.
Sep 30 18:50:53 compute-0 podman[363664]: 2025-09-30 18:50:53.857174501 +0000 UTC m=+0.050691085 container create ae222a917245cf119e7aed25d1010f9fee3e79617fdf343755bcdb0a04562c87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_poitras, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Sep 30 18:50:53 compute-0 systemd[1]: Started libpod-conmon-ae222a917245cf119e7aed25d1010f9fee3e79617fdf343755bcdb0a04562c87.scope.
Sep 30 18:50:53 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:50:53 compute-0 podman[363664]: 2025-09-30 18:50:53.834665177 +0000 UTC m=+0.028181811 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:50:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/553fd6ad98be00a1071a6f7eaec0e2a4db51c293a2386e91a71f52325e595961/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:50:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/553fd6ad98be00a1071a6f7eaec0e2a4db51c293a2386e91a71f52325e595961/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:50:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/553fd6ad98be00a1071a6f7eaec0e2a4db51c293a2386e91a71f52325e595961/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:50:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/553fd6ad98be00a1071a6f7eaec0e2a4db51c293a2386e91a71f52325e595961/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:50:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:50:53.938Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:50:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2178: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 607 B/s rd, 0 op/s
Sep 30 18:50:53 compute-0 podman[363664]: 2025-09-30 18:50:53.947872722 +0000 UTC m=+0.141389316 container init ae222a917245cf119e7aed25d1010f9fee3e79617fdf343755bcdb0a04562c87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Sep 30 18:50:53 compute-0 podman[363664]: 2025-09-30 18:50:53.96208637 +0000 UTC m=+0.155602954 container start ae222a917245cf119e7aed25d1010f9fee3e79617fdf343755bcdb0a04562c87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:50:53 compute-0 podman[363664]: 2025-09-30 18:50:53.965836947 +0000 UTC m=+0.159353541 container attach ae222a917245cf119e7aed25d1010f9fee3e79617fdf343755bcdb0a04562c87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325)
Sep 30 18:50:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:50:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:50:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:50:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:50:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:50:54.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:50:54.347 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:50:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:50:54.347 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:50:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:50:54.347 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:50:54 compute-0 lvm[363755]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:50:54 compute-0 lvm[363755]: VG ceph_vg0 finished
Sep 30 18:50:54 compute-0 relaxed_poitras[363680]: {}
Sep 30 18:50:54 compute-0 systemd[1]: libpod-ae222a917245cf119e7aed25d1010f9fee3e79617fdf343755bcdb0a04562c87.scope: Deactivated successfully.
Sep 30 18:50:54 compute-0 podman[363664]: 2025-09-30 18:50:54.67395715 +0000 UTC m=+0.867473734 container died ae222a917245cf119e7aed25d1010f9fee3e79617fdf343755bcdb0a04562c87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_poitras, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:50:54 compute-0 systemd[1]: libpod-ae222a917245cf119e7aed25d1010f9fee3e79617fdf343755bcdb0a04562c87.scope: Consumed 1.120s CPU time.
Sep 30 18:50:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-553fd6ad98be00a1071a6f7eaec0e2a4db51c293a2386e91a71f52325e595961-merged.mount: Deactivated successfully.
Sep 30 18:50:54 compute-0 podman[363664]: 2025-09-30 18:50:54.713969267 +0000 UTC m=+0.907485851 container remove ae222a917245cf119e7aed25d1010f9fee3e79617fdf343755bcdb0a04562c87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_poitras, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Sep 30 18:50:54 compute-0 systemd[1]: libpod-conmon-ae222a917245cf119e7aed25d1010f9fee3e79617fdf343755bcdb0a04562c87.scope: Deactivated successfully.
Sep 30 18:50:54 compute-0 sudo[363556]: pam_unix(sudo:session): session closed for user root
Sep 30 18:50:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:50:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:50:54.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:54 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:50:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:50:54 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:50:54 compute-0 sudo[363767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:50:54 compute-0 sudo[363767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:50:54 compute-0 sudo[363767]: pam_unix(sudo:session): session closed for user root
Sep 30 18:50:55 compute-0 ceph-mon[73755]: pgmap v2178: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 607 B/s rd, 0 op/s
Sep 30 18:50:55 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:50:55 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:50:55 compute-0 nova_compute[265391]: 2025-09-30 18:50:55.936 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:50:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2179: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 607 B/s rd, 0 op/s
Sep 30 18:50:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:50:56.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:50:56 compute-0 nova_compute[265391]: 2025-09-30 18:50:56.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:50:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:50:56.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:57 compute-0 nova_compute[265391]: 2025-09-30 18:50:57.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:50:57.406Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:50:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:50:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/912244844' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:50:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:50:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/912244844' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:50:57 compute-0 ceph-mon[73755]: pgmap v2179: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 607 B/s rd, 0 op/s
Sep 30 18:50:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/912244844' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:50:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/912244844' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:50:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2180: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 607 B/s rd, 0 op/s
Sep 30 18:50:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:50:58.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:58 compute-0 nova_compute[265391]: 2025-09-30 18:50:58.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:50:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:50:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:50:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:50:58.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:50:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:50:58] "GET /metrics HTTP/1.1" 200 46705 "" "Prometheus/2.51.0"
Sep 30 18:50:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:50:58] "GET /metrics HTTP/1.1" 200 46705 "" "Prometheus/2.51.0"
Sep 30 18:50:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:50:58.948Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:50:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:50:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:50:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:50:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:50:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:50:59 compute-0 podman[276673]: time="2025-09-30T18:50:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:50:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:50:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:50:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:50:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10311 "" "Go-http-client/1.1"
Sep 30 18:50:59 compute-0 ceph-mon[73755]: pgmap v2180: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 607 B/s rd, 0 op/s
Sep 30 18:50:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2181: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 607 B/s rd, 0 op/s
Sep 30 18:51:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:51:00.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:00 compute-0 nova_compute[265391]: 2025-09-30 18:51:00.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:51:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:51:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:51:00.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:51:00 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2988664038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:51:00 compute-0 nova_compute[265391]: 2025-09-30 18:51:00.941 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:51:00 compute-0 nova_compute[265391]: 2025-09-30 18:51:00.942 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:51:00 compute-0 nova_compute[265391]: 2025-09-30 18:51:00.942 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:51:00 compute-0 nova_compute[265391]: 2025-09-30 18:51:00.942 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:51:00 compute-0 nova_compute[265391]: 2025-09-30 18:51:00.942 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:51:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:51:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:51:01 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3634599146' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:51:01 compute-0 nova_compute[265391]: 2025-09-30 18:51:01.401 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:51:01 compute-0 openstack_network_exporter[279566]: ERROR   18:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:51:01 compute-0 openstack_network_exporter[279566]: ERROR   18:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:51:01 compute-0 openstack_network_exporter[279566]: ERROR   18:51:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:51:01 compute-0 openstack_network_exporter[279566]: ERROR   18:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:51:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:51:01 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Sep 30 18:51:01 compute-0 openstack_network_exporter[279566]: ERROR   18:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:51:01 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Sep 30 18:51:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:51:01 compute-0 podman[363822]: 2025-09-30 18:51:01.552235825 +0000 UTC m=+0.075355984 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4)
Sep 30 18:51:01 compute-0 podman[363821]: 2025-09-30 18:51:01.570302964 +0000 UTC m=+0.093483944 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250930)
Sep 30 18:51:01 compute-0 podman[363823]: 2025-09-30 18:51:01.587005636 +0000 UTC m=+0.112945898 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, vcs-type=git, version=9.6, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Sep 30 18:51:01 compute-0 nova_compute[265391]: 2025-09-30 18:51:01.600 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:51:01 compute-0 nova_compute[265391]: 2025-09-30 18:51:01.601 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:51:01 compute-0 nova_compute[265391]: 2025-09-30 18:51:01.635 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.034s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:51:01 compute-0 nova_compute[265391]: 2025-09-30 18:51:01.635 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4266MB free_disk=39.992183685302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:51:01 compute-0 nova_compute[265391]: 2025-09-30 18:51:01.635 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:51:01 compute-0 nova_compute[265391]: 2025-09-30 18:51:01.635 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:51:01 compute-0 ceph-mon[73755]: pgmap v2181: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 607 B/s rd, 0 op/s
Sep 30 18:51:01 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3634599146' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:51:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2182: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:51:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:51:02.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:02 compute-0 nova_compute[265391]: 2025-09-30 18:51:02.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:51:02 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:51:02.508 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:48:c5 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-cee64377-b6b9-46f2-8d77-c7978d4cc7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cee64377-b6b9-46f2-8d77-c7978d4cc7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2790c8e9fb6a48debd443ac79e2e12ba', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a3bc33b-b1e3-4a2f-8784-2e8238744730, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b27ac1cf-5e47-45c4-b2a7-c18dab16f3aa) old=Port_Binding(mac=['fa:16:3e:28:48:c5'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-cee64377-b6b9-46f2-8d77-c7978d4cc7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cee64377-b6b9-46f2-8d77-c7978d4cc7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2790c8e9fb6a48debd443ac79e2e12ba', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:51:02 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:51:02.509 166158 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b27ac1cf-5e47-45c4-b2a7-c18dab16f3aa in datapath cee64377-b6b9-46f2-8d77-c7978d4cc7a0 updated
Sep 30 18:51:02 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:51:02.510 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cee64377-b6b9-46f2-8d77-c7978d4cc7a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:51:02 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:51:02.511 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[ec885788-3575-4303-bc64-1d2daa06740a]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:51:02 compute-0 nova_compute[265391]: 2025-09-30 18:51:02.740 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:51:02 compute-0 nova_compute[265391]: 2025-09-30 18:51:02.741 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:51:01 up  1:54,  0 user,  load average: 0.45, 0.69, 0.79\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:51:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:51:02.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:02 compute-0 nova_compute[265391]: 2025-09-30 18:51:02.805 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:51:02 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2618615078' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:51:02 compute-0 sudo[363885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:51:02 compute-0 sudo[363885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:51:02 compute-0 sudo[363885]: pam_unix(sudo:session): session closed for user root
Sep 30 18:51:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:51:03 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/653443428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:51:03 compute-0 nova_compute[265391]: 2025-09-30 18:51:03.277 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:51:03 compute-0 nova_compute[265391]: 2025-09-30 18:51:03.284 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:51:03 compute-0 nova_compute[265391]: 2025-09-30 18:51:03.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:51:03 compute-0 nova_compute[265391]: 2025-09-30 18:51:03.795 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:51:03 compute-0 ceph-mon[73755]: pgmap v2182: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:51:03 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/653443428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:51:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:51:03.939Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:51:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2183: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:51:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:51:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:51:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:51:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:51:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:51:04.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:04 compute-0 nova_compute[265391]: 2025-09-30 18:51:04.309 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:51:04 compute-0 nova_compute[265391]: 2025-09-30 18:51:04.309 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.674s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:51:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:51:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:51:04.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:51:04 compute-0 ceph-mon[73755]: pgmap v2183: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:51:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2184: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:51:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:51:06.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:51:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:51:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:51:06.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:51:07 compute-0 ceph-mon[73755]: pgmap v2184: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:51:07 compute-0 nova_compute[265391]: 2025-09-30 18:51:07.311 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:51:07 compute-0 nova_compute[265391]: 2025-09-30 18:51:07.312 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:51:07 compute-0 nova_compute[265391]: 2025-09-30 18:51:07.312 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:51:07 compute-0 nova_compute[265391]: 2025-09-30 18:51:07.312 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:51:07 compute-0 nova_compute[265391]: 2025-09-30 18:51:07.312 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:51:07 compute-0 nova_compute[265391]: 2025-09-30 18:51:07.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:51:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:51:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:51:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:51:07.407Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:51:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:51:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:51:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:51:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:51:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:51:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:51:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2185: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:51:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:51:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:51:08.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:51:08
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', '.nfs', 'images', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'volumes', 'default.rgw.control', '.mgr']
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:51:08 compute-0 nova_compute[265391]: 2025-09-30 18:51:08.425 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:51:08 compute-0 nova_compute[265391]: 2025-09-30 18:51:08.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:51:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:51:08.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:51:08] "GET /metrics HTTP/1.1" 200 46709 "" "Prometheus/2.51.0"
Sep 30 18:51:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:51:08] "GET /metrics HTTP/1.1" 200 46709 "" "Prometheus/2.51.0"
Sep 30 18:51:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:51:08.949Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:51:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:51:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:51:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:51:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:51:09 compute-0 ceph-mon[73755]: pgmap v2185: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:51:09 compute-0 ceph-mgr[74051]: [devicehealth INFO root] Check health
Sep 30 18:51:09 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:51:09.775 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1c:84:e3 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-22f29271-35aa-4de9-8453-2dab07456294', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-22f29271-35aa-4de9-8453-2dab07456294', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '127ca83529de45efa0a76aa8ceefcd3d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=661e58a3-2ec2-44de-9298-150cee8d1105, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=7c3de4a3-3879-465a-88ae-6faa2c17e570) old=Port_Binding(mac=['fa:16:3e:1c:84:e3'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-22f29271-35aa-4de9-8453-2dab07456294', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-22f29271-35aa-4de9-8453-2dab07456294', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '127ca83529de45efa0a76aa8ceefcd3d', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:51:09 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:51:09.776 166158 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 7c3de4a3-3879-465a-88ae-6faa2c17e570 in datapath 22f29271-35aa-4de9-8453-2dab07456294 updated
Sep 30 18:51:09 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:51:09.777 166158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 22f29271-35aa-4de9-8453-2dab07456294, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Sep 30 18:51:09 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:51:09.778 292173 DEBUG oslo.privsep.daemon [-] privsep: reply[a81f4582-fd0c-4e3a-8f95-db095821d497]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Sep 30 18:51:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2186: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:51:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:51:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:51:10.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:51:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:51:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:51:10.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:51:11 compute-0 ceph-mon[73755]: pgmap v2186: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:51:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:51:11 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Sep 30 18:51:11 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:51:11.351894) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 18:51:11 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Sep 30 18:51:11 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258271351956, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 2025, "num_deletes": 258, "total_data_size": 3744637, "memory_usage": 3816832, "flush_reason": "Manual Compaction"}
Sep 30 18:51:11 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Sep 30 18:51:11 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258271368910, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 3620999, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52480, "largest_seqno": 54504, "table_properties": {"data_size": 3612026, "index_size": 5531, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18943, "raw_average_key_size": 20, "raw_value_size": 3593844, "raw_average_value_size": 3811, "num_data_blocks": 241, "num_entries": 943, "num_filter_entries": 943, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759258074, "oldest_key_time": 1759258074, "file_creation_time": 1759258271, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:51:11 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 17156 microseconds, and 10872 cpu microseconds.
Sep 30 18:51:11 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:51:11 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:51:11.369058) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 3620999 bytes OK
Sep 30 18:51:11 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:51:11.369102) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Sep 30 18:51:11 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:51:11.370716) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Sep 30 18:51:11 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:51:11.370729) EVENT_LOG_v1 {"time_micros": 1759258271370724, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 18:51:11 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:51:11.370744) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 18:51:11 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 3736204, prev total WAL file size 3736204, number of live WAL files 2.
Sep 30 18:51:11 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:51:11 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:51:11.372011) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303037' seq:72057594037927935, type:22 .. '6C6F676D0032323631' seq:0, type:0; will stop at (end)
Sep 30 18:51:11 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 18:51:11 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(3536KB)], [122(11MB)]
Sep 30 18:51:11 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258271372068, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 15220651, "oldest_snapshot_seqno": -1}
Sep 30 18:51:11 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 7868 keys, 15073650 bytes, temperature: kUnknown
Sep 30 18:51:11 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258271433171, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 15073650, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15022920, "index_size": 29946, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19717, "raw_key_size": 207127, "raw_average_key_size": 26, "raw_value_size": 14884110, "raw_average_value_size": 1891, "num_data_blocks": 1176, "num_entries": 7868, "num_filter_entries": 7868, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759258271, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:51:11 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:51:11 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:51:11.433458) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 15073650 bytes
Sep 30 18:51:11 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:51:11.434891) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 249.1 rd, 246.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 11.1 +0.0 blob) out(14.4 +0.0 blob), read-write-amplify(8.4) write-amplify(4.2) OK, records in: 8400, records dropped: 532 output_compression: NoCompression
Sep 30 18:51:11 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:51:11.434909) EVENT_LOG_v1 {"time_micros": 1759258271434901, "job": 74, "event": "compaction_finished", "compaction_time_micros": 61100, "compaction_time_cpu_micros": 28938, "output_level": 6, "num_output_files": 1, "total_output_size": 15073650, "num_input_records": 8400, "num_output_records": 7868, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 18:51:11 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:51:11 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258271435832, "job": 74, "event": "table_file_deletion", "file_number": 124}
Sep 30 18:51:11 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:51:11 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258271438046, "job": 74, "event": "table_file_deletion", "file_number": 122}
Sep 30 18:51:11 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:51:11.371935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:51:11 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:51:11.438190) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:51:11 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:51:11.438196) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:51:11 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:51:11.438198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:51:11 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:51:11.438199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:51:11 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:51:11.438201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:51:11 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:51:11.471 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:51:11 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:51:11.472 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:51:11 compute-0 nova_compute[265391]: 2025-09-30 18:51:11.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:51:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2187: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:51:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:51:12.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:12 compute-0 nova_compute[265391]: 2025-09-30 18:51:12.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:51:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:51:12.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:13 compute-0 ceph-mon[73755]: pgmap v2187: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:51:13 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:51:13.474 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:51:13 compute-0 nova_compute[265391]: 2025-09-30 18:51:13.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:51:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:51:13.939Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:51:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2188: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:51:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:51:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:51:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:51:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:51:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:51:14.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:14 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Sep 30 18:51:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:51:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:51:14.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:51:15 compute-0 ceph-mon[73755]: pgmap v2188: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:51:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2189: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:51:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:51:16.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:51:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:51:16.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:17 compute-0 nova_compute[265391]: 2025-09-30 18:51:17.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:51:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:51:17.408Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:51:17 compute-0 ceph-mon[73755]: pgmap v2189: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:51:17 compute-0 podman[363948]: 2025-09-30 18:51:17.521520788 +0000 UTC m=+0.056042973 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20250930, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.4)
Sep 30 18:51:17 compute-0 podman[363949]: 2025-09-30 18:51:17.55514391 +0000 UTC m=+0.084136512 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=ovn_controller)
Sep 30 18:51:17 compute-0 podman[363950]: 2025-09-30 18:51:17.559078692 +0000 UTC m=+0.087015826 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Sep 30 18:51:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2190: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:51:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:51:18.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:18 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Sep 30 18:51:18 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e144 e144: 2 total, 2 up, 2 in
Sep 30 18:51:18 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e144: 2 total, 2 up, 2 in
Sep 30 18:51:18 compute-0 nova_compute[265391]: 2025-09-30 18:51:18.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:51:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:51:18] "GET /metrics HTTP/1.1" 200 46709 "" "Prometheus/2.51.0"
Sep 30 18:51:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:51:18] "GET /metrics HTTP/1.1" 200 46709 "" "Prometheus/2.51.0"
Sep 30 18:51:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:51:18.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:51:18.950Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:51:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:51:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:51:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:51:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:51:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Sep 30 18:51:19 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e145 e145: 2 total, 2 up, 2 in
Sep 30 18:51:19 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e145: 2 total, 2 up, 2 in
Sep 30 18:51:19 compute-0 ceph-mon[73755]: pgmap v2190: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:51:19 compute-0 ceph-mon[73755]: osdmap e144: 2 total, 2 up, 2 in
Sep 30 18:51:19 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Sep 30 18:51:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:51:19.499037) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 18:51:19 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Sep 30 18:51:19 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258279499068, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 334, "num_deletes": 251, "total_data_size": 178427, "memory_usage": 185984, "flush_reason": "Manual Compaction"}
Sep 30 18:51:19 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Sep 30 18:51:19 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258279503306, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 176245, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54505, "largest_seqno": 54838, "table_properties": {"data_size": 174092, "index_size": 317, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5417, "raw_average_key_size": 18, "raw_value_size": 169834, "raw_average_value_size": 583, "num_data_blocks": 14, "num_entries": 291, "num_filter_entries": 291, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759258272, "oldest_key_time": 1759258272, "file_creation_time": 1759258279, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:51:19 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 4337 microseconds, and 974 cpu microseconds.
Sep 30 18:51:19 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:51:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:51:19.503372) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 176245 bytes OK
Sep 30 18:51:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:51:19.503389) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Sep 30 18:51:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:51:19.504924) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Sep 30 18:51:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:51:19.504972) EVENT_LOG_v1 {"time_micros": 1759258279504962, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 18:51:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:51:19.504996) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 18:51:19 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 176138, prev total WAL file size 176138, number of live WAL files 2.
Sep 30 18:51:19 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:51:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:51:19.505468) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Sep 30 18:51:19 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 18:51:19 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(172KB)], [125(14MB)]
Sep 30 18:51:19 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258279505503, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 15249895, "oldest_snapshot_seqno": -1}
Sep 30 18:51:19 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 7645 keys, 13225395 bytes, temperature: kUnknown
Sep 30 18:51:19 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258279555222, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 13225395, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13177503, "index_size": 27683, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19141, "raw_key_size": 203182, "raw_average_key_size": 26, "raw_value_size": 13043860, "raw_average_value_size": 1706, "num_data_blocks": 1074, "num_entries": 7645, "num_filter_entries": 7645, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759258279, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:51:19 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:51:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:51:19.555472) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 13225395 bytes
Sep 30 18:51:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:51:19.557351) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 306.3 rd, 265.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 14.4 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(161.6) write-amplify(75.0) OK, records in: 8159, records dropped: 514 output_compression: NoCompression
Sep 30 18:51:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:51:19.557369) EVENT_LOG_v1 {"time_micros": 1759258279557361, "job": 76, "event": "compaction_finished", "compaction_time_micros": 49782, "compaction_time_cpu_micros": 26871, "output_level": 6, "num_output_files": 1, "total_output_size": 13225395, "num_input_records": 8159, "num_output_records": 7645, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 18:51:19 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:51:19 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258279557491, "job": 76, "event": "table_file_deletion", "file_number": 127}
Sep 30 18:51:19 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:51:19 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258279559570, "job": 76, "event": "table_file_deletion", "file_number": 125}
Sep 30 18:51:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:51:19.505387) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:51:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:51:19.559605) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:51:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:51:19.559609) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:51:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:51:19.559610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:51:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:51:19.559612) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:51:19 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:51:19.559613) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:51:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2193: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:51:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:51:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:51:20.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:51:20 compute-0 ceph-mon[73755]: osdmap e145: 2 total, 2 up, 2 in
Sep 30 18:51:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:51:20.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:51:21 compute-0 ceph-mon[73755]: pgmap v2193: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:51:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2194: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 383 B/s rd, 0 op/s
Sep 30 18:51:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:51:22.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:51:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:51:22 compute-0 nova_compute[265391]: 2025-09-30 18:51:22.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:51:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:51:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:51:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:51:22.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:51:23 compute-0 sudo[364019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:51:23 compute-0 sudo[364019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:51:23 compute-0 sudo[364019]: pam_unix(sudo:session): session closed for user root
Sep 30 18:51:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=cleanup t=2025-09-30T18:51:23.512426723Z level=info msg="Completed cleanup jobs" duration=15.243075ms
Sep 30 18:51:23 compute-0 ceph-mon[73755]: pgmap v2194: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 383 B/s rd, 0 op/s
Sep 30 18:51:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=grafana.update.checker t=2025-09-30T18:51:23.621033398Z level=info msg="Update check succeeded" duration=53.271891ms
Sep 30 18:51:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=plugins.update.checker t=2025-09-30T18:51:23.627843024Z level=info msg="Update check succeeded" duration=64.323487ms
Sep 30 18:51:23 compute-0 nova_compute[265391]: 2025-09-30 18:51:23.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:51:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:51:23.940Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:51:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:51:23.941Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:51:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2195: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 1.1 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 18:51:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:51:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:51:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:51:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:51:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:51:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:51:24.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:51:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:51:24.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:25 compute-0 ceph-mon[73755]: pgmap v2195: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 1.1 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 18:51:25 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1106233728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:51:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2196: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 1.1 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 18:51:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:51:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:51:26.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:51:26 compute-0 ovn_controller[156242]: 2025-09-30T18:51:26Z|00325|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Sep 30 18:51:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:51:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Sep 30 18:51:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 e146: 2 total, 2 up, 2 in
Sep 30 18:51:26 compute-0 ceph-mon[73755]: log_channel(cluster) log [DBG] : osdmap e146: 2 total, 2 up, 2 in
Sep 30 18:51:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:51:26.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:27 compute-0 nova_compute[265391]: 2025-09-30 18:51:27.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:51:27 compute-0 ceph-mon[73755]: pgmap v2196: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 1.1 KiB/s rd, 767 B/s wr, 2 op/s
Sep 30 18:51:27 compute-0 ceph-mon[73755]: osdmap e146: 2 total, 2 up, 2 in
Sep 30 18:51:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:51:27.409Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:51:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2198: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 724 B/s rd, 724 B/s wr, 1 op/s
Sep 30 18:51:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:51:28.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:28 compute-0 nova_compute[265391]: 2025-09-30 18:51:28.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:51:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:51:28] "GET /metrics HTTP/1.1" 200 46712 "" "Prometheus/2.51.0"
Sep 30 18:51:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:51:28] "GET /metrics HTTP/1.1" 200 46712 "" "Prometheus/2.51.0"
Sep 30 18:51:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:51:28.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:51:28.951Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:51:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:51:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:51:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:51:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:51:29 compute-0 ceph-mon[73755]: pgmap v2198: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 724 B/s rd, 724 B/s wr, 1 op/s
Sep 30 18:51:29 compute-0 podman[276673]: time="2025-09-30T18:51:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:51:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:51:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:51:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:51:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10317 "" "Go-http-client/1.1"
Sep 30 18:51:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2199: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 921 B/s rd, 614 B/s wr, 1 op/s
Sep 30 18:51:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:51:30.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:51:30.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:51:31 compute-0 ceph-mon[73755]: pgmap v2199: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 921 B/s rd, 614 B/s wr, 1 op/s
Sep 30 18:51:31 compute-0 openstack_network_exporter[279566]: ERROR   18:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:51:31 compute-0 openstack_network_exporter[279566]: ERROR   18:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:51:31 compute-0 openstack_network_exporter[279566]: ERROR   18:51:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:51:31 compute-0 openstack_network_exporter[279566]: ERROR   18:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:51:31 compute-0 openstack_network_exporter[279566]: ERROR   18:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:51:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2200: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 921 B/s rd, 614 B/s wr, 1 op/s
Sep 30 18:51:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:51:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:51:32.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:51:32 compute-0 nova_compute[265391]: 2025-09-30 18:51:32.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:51:32 compute-0 podman[364054]: 2025-09-30 18:51:32.521792437 +0000 UTC m=+0.063587479 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible)
Sep 30 18:51:32 compute-0 podman[364055]: 2025-09-30 18:51:32.52226976 +0000 UTC m=+0.059307658 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, tcib_managed=true)
Sep 30 18:51:32 compute-0 podman[364056]: 2025-09-30 18:51:32.536142539 +0000 UTC m=+0.068046424 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, distribution-scope=public)
Sep 30 18:51:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:51:32.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:33 compute-0 ceph-mon[73755]: pgmap v2200: 353 pgs: 353 active+clean; 41 MiB data, 396 MiB used, 40 GiB / 40 GiB avail; 921 B/s rd, 614 B/s wr, 1 op/s
Sep 30 18:51:33 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/374989348' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:51:33 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2330939330' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:51:33 compute-0 nova_compute[265391]: 2025-09-30 18:51:33.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:51:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:51:33.942Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:51:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2201: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 33 op/s
Sep 30 18:51:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:51:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:51:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:51:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:51:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:51:34.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:51:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:51:34.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:51:35 compute-0 ceph-mon[73755]: pgmap v2201: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 33 op/s
Sep 30 18:51:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2202: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 33 op/s
Sep 30 18:51:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:51:36.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:51:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:51:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3397859791' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:51:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:51:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3397859791' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:51:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:51:36.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:51:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:51:37 compute-0 nova_compute[265391]: 2025-09-30 18:51:37.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:51:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:51:37.410Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:51:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:51:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:51:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:51:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:51:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:51:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:51:37 compute-0 ceph-mon[73755]: pgmap v2202: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 33 op/s
Sep 30 18:51:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3397859791' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:51:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3397859791' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:51:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:51:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2203: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Sep 30 18:51:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:51:38.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:38 compute-0 nova_compute[265391]: 2025-09-30 18:51:38.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:51:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:51:38] "GET /metrics HTTP/1.1" 200 46760 "" "Prometheus/2.51.0"
Sep 30 18:51:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:51:38] "GET /metrics HTTP/1.1" 200 46760 "" "Prometheus/2.51.0"
Sep 30 18:51:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:51:38.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:51:38.952Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:51:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:51:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:51:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:51:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:51:39 compute-0 ceph-mon[73755]: pgmap v2203: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Sep 30 18:51:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2204: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Sep 30 18:51:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:51:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:51:40.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:51:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:51:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:51:40.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:51:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:51:41 compute-0 ceph-mon[73755]: pgmap v2204: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Sep 30 18:51:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2205: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Sep 30 18:51:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:51:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:51:42.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:51:42 compute-0 nova_compute[265391]: 2025-09-30 18:51:42.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:51:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:51:42.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:43 compute-0 sudo[364124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:51:43 compute-0 sudo[364124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:51:43 compute-0 sudo[364124]: pam_unix(sudo:session): session closed for user root
Sep 30 18:51:43 compute-0 ceph-mon[73755]: pgmap v2205: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Sep 30 18:51:43 compute-0 nova_compute[265391]: 2025-09-30 18:51:43.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:51:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:51:43.943Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:51:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2206: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Sep 30 18:51:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:51:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:51:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:51:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:51:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:51:44.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:51:44.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:45 compute-0 ceph-mon[73755]: pgmap v2206: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Sep 30 18:51:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2207: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 75 op/s
Sep 30 18:51:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:51:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:51:46.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:51:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:51:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:51:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:51:46.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:51:47 compute-0 nova_compute[265391]: 2025-09-30 18:51:47.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:51:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:51:47.411Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:51:47 compute-0 ceph-mon[73755]: pgmap v2207: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 75 op/s
Sep 30 18:51:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2208: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 75 op/s
Sep 30 18:51:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:51:48.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:48 compute-0 podman[364156]: 2025-09-30 18:51:48.520060177 +0000 UTC m=+0.057174393 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Sep 30 18:51:48 compute-0 podman[364158]: 2025-09-30 18:51:48.534093531 +0000 UTC m=+0.055037558 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Sep 30 18:51:48 compute-0 podman[364157]: 2025-09-30 18:51:48.54641564 +0000 UTC m=+0.081878803 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_id=ovn_controller, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 18:51:48 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:51:48 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2461365287' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:51:48 compute-0 nova_compute[265391]: 2025-09-30 18:51:48.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:51:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:51:48] "GET /metrics HTTP/1.1" 200 46760 "" "Prometheus/2.51.0"
Sep 30 18:51:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:51:48] "GET /metrics HTTP/1.1" 200 46760 "" "Prometheus/2.51.0"
Sep 30 18:51:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:51:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:51:48.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:51:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:51:48.953Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:51:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:51:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:51:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:51:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:51:49 compute-0 ceph-mon[73755]: pgmap v2208: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 75 op/s
Sep 30 18:51:49 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2461365287' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:51:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2209: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 75 op/s
Sep 30 18:51:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:51:50.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:51:50.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:51:51 compute-0 ceph-mon[73755]: pgmap v2209: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 75 op/s
Sep 30 18:51:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2210: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 65 op/s
Sep 30 18:51:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:51:52.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:51:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:51:52 compute-0 nova_compute[265391]: 2025-09-30 18:51:52.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:51:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:51:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:51:52.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:53 compute-0 ceph-mon[73755]: pgmap v2210: 353 pgs: 353 active+clean; 88 MiB data, 417 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 65 op/s
Sep 30 18:51:53 compute-0 nova_compute[265391]: 2025-09-30 18:51:53.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:51:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:51:53.944Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:51:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2211: 353 pgs: 353 active+clean; 121 MiB data, 450 MiB used, 40 GiB / 40 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Sep 30 18:51:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:51:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:51:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:51:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:51:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:51:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:51:54.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:51:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:51:54.349 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:51:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:51:54.350 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:51:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:51:54.350 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:51:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:51:54.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:55 compute-0 sudo[364231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:51:55 compute-0 sudo[364231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:51:55 compute-0 sudo[364231]: pam_unix(sudo:session): session closed for user root
Sep 30 18:51:55 compute-0 sudo[364256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:51:55 compute-0 sudo[364256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:51:55 compute-0 nova_compute[265391]: 2025-09-30 18:51:55.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:51:55 compute-0 ceph-mon[73755]: pgmap v2211: 353 pgs: 353 active+clean; 121 MiB data, 450 MiB used, 40 GiB / 40 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Sep 30 18:51:55 compute-0 sudo[364256]: pam_unix(sudo:session): session closed for user root
Sep 30 18:51:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Sep 30 18:51:55 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Sep 30 18:51:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2212: 353 pgs: 353 active+clean; 121 MiB data, 450 MiB used, 40 GiB / 40 GiB avail; 405 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Sep 30 18:51:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:51:56.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:51:56 compute-0 nova_compute[265391]: 2025-09-30 18:51:56.429 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:51:56 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Sep 30 18:51:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:51:56.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:57 compute-0 sshd-session[364153]: error: kex_exchange_identification: read: Connection timed out
Sep 30 18:51:57 compute-0 sshd-session[364153]: banner exchange: Connection from 115.190.39.222 port 42842: Connection timed out
Sep 30 18:51:57 compute-0 nova_compute[265391]: 2025-09-30 18:51:57.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:51:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:51:57.412Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:51:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:51:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3734450267' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:51:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:51:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3734450267' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:51:57 compute-0 ceph-mon[73755]: pgmap v2212: 353 pgs: 353 active+clean; 121 MiB data, 450 MiB used, 40 GiB / 40 GiB avail; 405 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Sep 30 18:51:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3734450267' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:51:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3734450267' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:51:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2213: 353 pgs: 353 active+clean; 121 MiB data, 450 MiB used, 40 GiB / 40 GiB avail; 405 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Sep 30 18:51:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:51:58.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 18:51:58 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:51:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 18:51:58 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:51:58 compute-0 nova_compute[265391]: 2025-09-30 18:51:58.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:51:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:51:58] "GET /metrics HTTP/1.1" 200 46758 "" "Prometheus/2.51.0"
Sep 30 18:51:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:51:58] "GET /metrics HTTP/1.1" 200 46758 "" "Prometheus/2.51.0"
Sep 30 18:51:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:51:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:51:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:51:58.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:51:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:51:58.954Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:51:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:51:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:51:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:51:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:51:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:51:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Sep 30 18:51:59 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Sep 30 18:51:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:51:59 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:51:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:51:59 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:51:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:51:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2214: 353 pgs: 353 active+clean; 121 MiB data, 450 MiB used, 40 GiB / 40 GiB avail; 419 KiB/s rd, 2.2 MiB/s wr, 73 op/s
Sep 30 18:51:59 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:51:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:51:59 compute-0 ceph-mon[73755]: pgmap v2213: 353 pgs: 353 active+clean; 121 MiB data, 450 MiB used, 40 GiB / 40 GiB avail; 405 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Sep 30 18:51:59 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:51:59 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:51:59 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Sep 30 18:51:59 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:51:59 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:51:59 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:51:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:51:59 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:51:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:51:59 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:51:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:51:59 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:51:59 compute-0 sudo[364317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:51:59 compute-0 sudo[364317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:51:59 compute-0 sudo[364317]: pam_unix(sudo:session): session closed for user root
Sep 30 18:51:59 compute-0 podman[276673]: time="2025-09-30T18:51:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:51:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:51:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:51:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:51:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10316 "" "Go-http-client/1.1"
Sep 30 18:51:59 compute-0 sudo[364342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:51:59 compute-0 sudo[364342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:52:00 compute-0 podman[364408]: 2025-09-30 18:52:00.238117719 +0000 UTC m=+0.050228843 container create 96a91249d55d6bd1c480b5bdf247042f11235dc2f351ef1a7e2a63342926cd44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_pare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:52:00 compute-0 systemd[1]: Started libpod-conmon-96a91249d55d6bd1c480b5bdf247042f11235dc2f351ef1a7e2a63342926cd44.scope.
Sep 30 18:52:00 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:52:00 compute-0 podman[364408]: 2025-09-30 18:52:00.214415885 +0000 UTC m=+0.026527049 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:52:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:52:00.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:00 compute-0 podman[364408]: 2025-09-30 18:52:00.334201679 +0000 UTC m=+0.146312853 container init 96a91249d55d6bd1c480b5bdf247042f11235dc2f351ef1a7e2a63342926cd44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_pare, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 18:52:00 compute-0 podman[364408]: 2025-09-30 18:52:00.342535105 +0000 UTC m=+0.154646259 container start 96a91249d55d6bd1c480b5bdf247042f11235dc2f351ef1a7e2a63342926cd44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_pare, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:52:00 compute-0 podman[364408]: 2025-09-30 18:52:00.346555209 +0000 UTC m=+0.158666423 container attach 96a91249d55d6bd1c480b5bdf247042f11235dc2f351ef1a7e2a63342926cd44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_pare, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:52:00 compute-0 happy_pare[364425]: 167 167
Sep 30 18:52:00 compute-0 systemd[1]: libpod-96a91249d55d6bd1c480b5bdf247042f11235dc2f351ef1a7e2a63342926cd44.scope: Deactivated successfully.
Sep 30 18:52:00 compute-0 podman[364408]: 2025-09-30 18:52:00.349797914 +0000 UTC m=+0.161909048 container died 96a91249d55d6bd1c480b5bdf247042f11235dc2f351ef1a7e2a63342926cd44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_pare, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Sep 30 18:52:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-d376013ad1c6003d8ce33eb401f18630de439a794bbe5b0ad4e043ed2d168f8d-merged.mount: Deactivated successfully.
Sep 30 18:52:00 compute-0 podman[364408]: 2025-09-30 18:52:00.432107327 +0000 UTC m=+0.244218451 container remove 96a91249d55d6bd1c480b5bdf247042f11235dc2f351ef1a7e2a63342926cd44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_pare, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:52:00 compute-0 systemd[1]: libpod-conmon-96a91249d55d6bd1c480b5bdf247042f11235dc2f351ef1a7e2a63342926cd44.scope: Deactivated successfully.
Sep 30 18:52:00 compute-0 podman[364450]: 2025-09-30 18:52:00.639533083 +0000 UTC m=+0.063083126 container create 901abc110b7400055719e137a01d5538c149a2c3779824505c1b0b7036d7af56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_diffie, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:52:00 compute-0 ceph-mon[73755]: pgmap v2214: 353 pgs: 353 active+clean; 121 MiB data, 450 MiB used, 40 GiB / 40 GiB avail; 419 KiB/s rd, 2.2 MiB/s wr, 73 op/s
Sep 30 18:52:00 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:52:00 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:52:00 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:52:00 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:52:00 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:52:00 compute-0 systemd[1]: Started libpod-conmon-901abc110b7400055719e137a01d5538c149a2c3779824505c1b0b7036d7af56.scope.
Sep 30 18:52:00 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:52:00 compute-0 podman[364450]: 2025-09-30 18:52:00.616550567 +0000 UTC m=+0.040100700 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:52:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e6c78dca7484db6de4b487d39db76eadf25b6a31363f1846f274d84cbfca5c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:52:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e6c78dca7484db6de4b487d39db76eadf25b6a31363f1846f274d84cbfca5c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:52:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e6c78dca7484db6de4b487d39db76eadf25b6a31363f1846f274d84cbfca5c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:52:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e6c78dca7484db6de4b487d39db76eadf25b6a31363f1846f274d84cbfca5c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:52:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e6c78dca7484db6de4b487d39db76eadf25b6a31363f1846f274d84cbfca5c4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:52:00 compute-0 podman[364450]: 2025-09-30 18:52:00.734008232 +0000 UTC m=+0.157558305 container init 901abc110b7400055719e137a01d5538c149a2c3779824505c1b0b7036d7af56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_diffie, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Sep 30 18:52:00 compute-0 podman[364450]: 2025-09-30 18:52:00.749525304 +0000 UTC m=+0.173075357 container start 901abc110b7400055719e137a01d5538c149a2c3779824505c1b0b7036d7af56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:52:00 compute-0 podman[364450]: 2025-09-30 18:52:00.753734353 +0000 UTC m=+0.177284486 container attach 901abc110b7400055719e137a01d5538c149a2c3779824505c1b0b7036d7af56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:52:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:52:00.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:01 compute-0 serene_diffie[364466]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:52:01 compute-0 serene_diffie[364466]: --> All data devices are unavailable
Sep 30 18:52:01 compute-0 systemd[1]: libpod-901abc110b7400055719e137a01d5538c149a2c3779824505c1b0b7036d7af56.scope: Deactivated successfully.
Sep 30 18:52:01 compute-0 podman[364450]: 2025-09-30 18:52:01.115236512 +0000 UTC m=+0.538786545 container died 901abc110b7400055719e137a01d5538c149a2c3779824505c1b0b7036d7af56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:52:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e6c78dca7484db6de4b487d39db76eadf25b6a31363f1846f274d84cbfca5c4-merged.mount: Deactivated successfully.
Sep 30 18:52:01 compute-0 podman[364450]: 2025-09-30 18:52:01.221516717 +0000 UTC m=+0.645066750 container remove 901abc110b7400055719e137a01d5538c149a2c3779824505c1b0b7036d7af56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_diffie, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default)
Sep 30 18:52:01 compute-0 systemd[1]: libpod-conmon-901abc110b7400055719e137a01d5538c149a2c3779824505c1b0b7036d7af56.scope: Deactivated successfully.
Sep 30 18:52:01 compute-0 sudo[364342]: pam_unix(sudo:session): session closed for user root
Sep 30 18:52:01 compute-0 sudo[364495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:52:01 compute-0 sudo[364495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:52:01 compute-0 sudo[364495]: pam_unix(sudo:session): session closed for user root
Sep 30 18:52:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:52:01 compute-0 sudo[364520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:52:01 compute-0 sudo[364520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:52:01 compute-0 openstack_network_exporter[279566]: ERROR   18:52:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:52:01 compute-0 openstack_network_exporter[279566]: ERROR   18:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:52:01 compute-0 openstack_network_exporter[279566]: ERROR   18:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:52:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:52:01 compute-0 openstack_network_exporter[279566]: ERROR   18:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:52:01 compute-0 openstack_network_exporter[279566]: ERROR   18:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:52:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:52:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2215: 353 pgs: 353 active+clean; 121 MiB data, 450 MiB used, 40 GiB / 40 GiB avail; 419 KiB/s rd, 2.2 MiB/s wr, 73 op/s
Sep 30 18:52:01 compute-0 podman[364589]: 2025-09-30 18:52:01.752253023 +0000 UTC m=+0.043077368 container create ebeaab28fb25e8359e08338c30e47eec75a9461fdc9e23e6b26cac61613c8792 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_antonelli, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Sep 30 18:52:01 compute-0 systemd[1]: Started libpod-conmon-ebeaab28fb25e8359e08338c30e47eec75a9461fdc9e23e6b26cac61613c8792.scope.
Sep 30 18:52:01 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:52:01 compute-0 podman[364589]: 2025-09-30 18:52:01.825265025 +0000 UTC m=+0.116089450 container init ebeaab28fb25e8359e08338c30e47eec75a9461fdc9e23e6b26cac61613c8792 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:52:01 compute-0 podman[364589]: 2025-09-30 18:52:01.735363305 +0000 UTC m=+0.026187680 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:52:01 compute-0 podman[364589]: 2025-09-30 18:52:01.83237884 +0000 UTC m=+0.123203195 container start ebeaab28fb25e8359e08338c30e47eec75a9461fdc9e23e6b26cac61613c8792 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_antonelli, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:52:01 compute-0 podman[364589]: 2025-09-30 18:52:01.836141597 +0000 UTC m=+0.126966022 container attach ebeaab28fb25e8359e08338c30e47eec75a9461fdc9e23e6b26cac61613c8792 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Sep 30 18:52:01 compute-0 condescending_antonelli[364606]: 167 167
Sep 30 18:52:01 compute-0 systemd[1]: libpod-ebeaab28fb25e8359e08338c30e47eec75a9461fdc9e23e6b26cac61613c8792.scope: Deactivated successfully.
Sep 30 18:52:01 compute-0 podman[364589]: 2025-09-30 18:52:01.838198941 +0000 UTC m=+0.129023286 container died ebeaab28fb25e8359e08338c30e47eec75a9461fdc9e23e6b26cac61613c8792 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_antonelli, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 18:52:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-f28c02b7bd6c564753d49d845f6080cf4bc27921be11c0dd393264b41f5f6551-merged.mount: Deactivated successfully.
Sep 30 18:52:01 compute-0 podman[364589]: 2025-09-30 18:52:01.885461765 +0000 UTC m=+0.176286110 container remove ebeaab28fb25e8359e08338c30e47eec75a9461fdc9e23e6b26cac61613c8792 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_antonelli, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:52:01 compute-0 systemd[1]: libpod-conmon-ebeaab28fb25e8359e08338c30e47eec75a9461fdc9e23e6b26cac61613c8792.scope: Deactivated successfully.
Sep 30 18:52:02 compute-0 podman[364630]: 2025-09-30 18:52:02.059093926 +0000 UTC m=+0.041640471 container create 1a7cf14b949052a523ccfc9b6b6230cb3e394d147c2ab8d85cf6ff81e078fd87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_villani, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:52:02 compute-0 systemd[1]: Started libpod-conmon-1a7cf14b949052a523ccfc9b6b6230cb3e394d147c2ab8d85cf6ff81e078fd87.scope.
Sep 30 18:52:02 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:52:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a88b551cfda722853c6cfd4fd6525dca165282ec9f5935103fb530d7acb6b5c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:52:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a88b551cfda722853c6cfd4fd6525dca165282ec9f5935103fb530d7acb6b5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:52:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a88b551cfda722853c6cfd4fd6525dca165282ec9f5935103fb530d7acb6b5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:52:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a88b551cfda722853c6cfd4fd6525dca165282ec9f5935103fb530d7acb6b5c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:52:02 compute-0 podman[364630]: 2025-09-30 18:52:02.041775607 +0000 UTC m=+0.024322162 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:52:02 compute-0 podman[364630]: 2025-09-30 18:52:02.150031963 +0000 UTC m=+0.132578518 container init 1a7cf14b949052a523ccfc9b6b6230cb3e394d147c2ab8d85cf6ff81e078fd87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_villani, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:52:02 compute-0 podman[364630]: 2025-09-30 18:52:02.157385843 +0000 UTC m=+0.139932378 container start 1a7cf14b949052a523ccfc9b6b6230cb3e394d147c2ab8d85cf6ff81e078fd87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_villani, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:52:02 compute-0 podman[364630]: 2025-09-30 18:52:02.161432808 +0000 UTC m=+0.143979373 container attach 1a7cf14b949052a523ccfc9b6b6230cb3e394d147c2ab8d85cf6ff81e078fd87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_villani, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:52:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:52:02.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:02 compute-0 nova_compute[265391]: 2025-09-30 18:52:02.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:52:02 compute-0 relaxed_villani[364646]: {
Sep 30 18:52:02 compute-0 relaxed_villani[364646]:     "0": [
Sep 30 18:52:02 compute-0 relaxed_villani[364646]:         {
Sep 30 18:52:02 compute-0 relaxed_villani[364646]:             "devices": [
Sep 30 18:52:02 compute-0 relaxed_villani[364646]:                 "/dev/loop3"
Sep 30 18:52:02 compute-0 relaxed_villani[364646]:             ],
Sep 30 18:52:02 compute-0 relaxed_villani[364646]:             "lv_name": "ceph_lv0",
Sep 30 18:52:02 compute-0 relaxed_villani[364646]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:52:02 compute-0 relaxed_villani[364646]:             "lv_size": "21470642176",
Sep 30 18:52:02 compute-0 relaxed_villani[364646]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:52:02 compute-0 relaxed_villani[364646]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:52:02 compute-0 relaxed_villani[364646]:             "name": "ceph_lv0",
Sep 30 18:52:02 compute-0 relaxed_villani[364646]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:52:02 compute-0 relaxed_villani[364646]:             "tags": {
Sep 30 18:52:02 compute-0 relaxed_villani[364646]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:52:02 compute-0 relaxed_villani[364646]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:52:02 compute-0 relaxed_villani[364646]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:52:02 compute-0 relaxed_villani[364646]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:52:02 compute-0 relaxed_villani[364646]:                 "ceph.cluster_name": "ceph",
Sep 30 18:52:02 compute-0 relaxed_villani[364646]:                 "ceph.crush_device_class": "",
Sep 30 18:52:02 compute-0 relaxed_villani[364646]:                 "ceph.encrypted": "0",
Sep 30 18:52:02 compute-0 relaxed_villani[364646]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:52:02 compute-0 relaxed_villani[364646]:                 "ceph.osd_id": "0",
Sep 30 18:52:02 compute-0 relaxed_villani[364646]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:52:02 compute-0 relaxed_villani[364646]:                 "ceph.type": "block",
Sep 30 18:52:02 compute-0 relaxed_villani[364646]:                 "ceph.vdo": "0",
Sep 30 18:52:02 compute-0 relaxed_villani[364646]:                 "ceph.with_tpm": "0"
Sep 30 18:52:02 compute-0 relaxed_villani[364646]:             },
Sep 30 18:52:02 compute-0 relaxed_villani[364646]:             "type": "block",
Sep 30 18:52:02 compute-0 relaxed_villani[364646]:             "vg_name": "ceph_vg0"
Sep 30 18:52:02 compute-0 relaxed_villani[364646]:         }
Sep 30 18:52:02 compute-0 relaxed_villani[364646]:     ]
Sep 30 18:52:02 compute-0 relaxed_villani[364646]: }
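The JSON block above, emitted by the short-lived relaxed_villani container, has the shape of a ceph-volume LVM listing: one key per OSD id, each entry carrying the logical volume, its backing devices and the ceph.* tags. A minimal sketch of consuming such a listing, assuming only the field names visible in the log, could look like:

    # Hypothetical helper: map OSD ids to backing devices from a ceph-volume
    # LVM listing shaped like the JSON logged above. Field names come from the
    # log; everything else is illustrative.
    import json

    def osd_devices(listing: str) -> dict[str, list[str]]:
        data = json.loads(listing)                # e.g. {"0": [{...}]}
        out: dict[str, list[str]] = {}
        for osd_id, lvs in data.items():
            devices: list[str] = []
            for lv in lvs:
                if lv.get("type") == "block":     # ceph.type=block in lv_tags
                    devices.extend(lv.get("devices", []))
            out[osd_id] = devices
        return out

    # For the listing above this would return {"0": ["/dev/loop3"]}.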
Sep 30 18:52:02 compute-0 nova_compute[265391]: 2025-09-30 18:52:02.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:52:02 compute-0 systemd[1]: libpod-1a7cf14b949052a523ccfc9b6b6230cb3e394d147c2ab8d85cf6ff81e078fd87.scope: Deactivated successfully.
Sep 30 18:52:02 compute-0 podman[364630]: 2025-09-30 18:52:02.440937233 +0000 UTC m=+0.423483788 container died 1a7cf14b949052a523ccfc9b6b6230cb3e394d147c2ab8d85cf6ff81e078fd87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:52:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a88b551cfda722853c6cfd4fd6525dca165282ec9f5935103fb530d7acb6b5c-merged.mount: Deactivated successfully.
Sep 30 18:52:02 compute-0 podman[364630]: 2025-09-30 18:52:02.553201722 +0000 UTC m=+0.535748257 container remove 1a7cf14b949052a523ccfc9b6b6230cb3e394d147c2ab8d85cf6ff81e078fd87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:52:02 compute-0 systemd[1]: libpod-conmon-1a7cf14b949052a523ccfc9b6b6230cb3e394d147c2ab8d85cf6ff81e078fd87.scope: Deactivated successfully.
Sep 30 18:52:02 compute-0 sudo[364520]: pam_unix(sudo:session): session closed for user root
Sep 30 18:52:02 compute-0 sudo[364685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:52:02 compute-0 podman[364667]: 2025-09-30 18:52:02.673220713 +0000 UTC m=+0.072109530 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4)
Sep 30 18:52:02 compute-0 sudo[364685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:52:02 compute-0 ceph-mon[73755]: pgmap v2215: 353 pgs: 353 active+clean; 121 MiB data, 450 MiB used, 40 GiB / 40 GiB avail; 419 KiB/s rd, 2.2 MiB/s wr, 73 op/s
Sep 30 18:52:02 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2324469393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:52:02 compute-0 sudo[364685]: pam_unix(sudo:session): session closed for user root
Sep 30 18:52:02 compute-0 podman[364668]: 2025-09-30 18:52:02.677538445 +0000 UTC m=+0.076837143 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, config_id=iscsid, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=watcher_latest)
Sep 30 18:52:02 compute-0 podman[364669]: 2025-09-30 18:52:02.687280327 +0000 UTC m=+0.075603110 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, release=1755695350, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc.)
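The three health_status events above are podman's periodic healthchecks for the multipathd, iscsid and openstack_network_exporter containers, each running the /openstack/healthcheck script mounted per its config_data. As a rough illustration (container names taken from the log), the same checks can be triggered on demand with podman's healthcheck subcommand:

    # Sketch: run the containers' healthchecks on demand; exit code 0 means healthy.
    import subprocess

    for name in ("multipathd", "iscsid", "openstack_network_exporter"):
        r = subprocess.run(["podman", "healthcheck", "run", name])
        print(name, "healthy" if r.returncode == 0 else "unhealthy")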
Sep 30 18:52:02 compute-0 sudo[364749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:52:02 compute-0 sudo[364749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:52:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:52:02.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:02 compute-0 nova_compute[265391]: 2025-09-30 18:52:02.940 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:52:02 compute-0 nova_compute[265391]: 2025-09-30 18:52:02.942 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:52:02 compute-0 nova_compute[265391]: 2025-09-30 18:52:02.942 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:52:02 compute-0 nova_compute[265391]: 2025-09-30 18:52:02.942 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:52:02 compute-0 nova_compute[265391]: 2025-09-30 18:52:02.942 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:52:03 compute-0 podman[364817]: 2025-09-30 18:52:03.165101742 +0000 UTC m=+0.070544350 container create e5dbf1428a2494d85a8537b29e1d2a70b447b195cf332f06a95187be5f4984f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_davinci, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True)
Sep 30 18:52:03 compute-0 podman[364817]: 2025-09-30 18:52:03.115105956 +0000 UTC m=+0.020548584 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:52:03 compute-0 systemd[1]: Started libpod-conmon-e5dbf1428a2494d85a8537b29e1d2a70b447b195cf332f06a95187be5f4984f6.scope.
Sep 30 18:52:03 compute-0 sudo[364850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:52:03 compute-0 sudo[364850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:52:03 compute-0 sudo[364850]: pam_unix(sudo:session): session closed for user root
Sep 30 18:52:03 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:52:03 compute-0 podman[364817]: 2025-09-30 18:52:03.344788669 +0000 UTC m=+0.250231297 container init e5dbf1428a2494d85a8537b29e1d2a70b447b195cf332f06a95187be5f4984f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_davinci, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:52:03 compute-0 podman[364817]: 2025-09-30 18:52:03.351123933 +0000 UTC m=+0.256566541 container start e5dbf1428a2494d85a8537b29e1d2a70b447b195cf332f06a95187be5f4984f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Sep 30 18:52:03 compute-0 determined_davinci[364875]: 167 167
Sep 30 18:52:03 compute-0 systemd[1]: libpod-e5dbf1428a2494d85a8537b29e1d2a70b447b195cf332f06a95187be5f4984f6.scope: Deactivated successfully.
Sep 30 18:52:03 compute-0 conmon[364875]: conmon e5dbf1428a2494d85a85 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e5dbf1428a2494d85a8537b29e1d2a70b447b195cf332f06a95187be5f4984f6.scope/container/memory.events
Sep 30 18:52:03 compute-0 podman[364817]: 2025-09-30 18:52:03.357999662 +0000 UTC m=+0.263442300 container attach e5dbf1428a2494d85a8537b29e1d2a70b447b195cf332f06a95187be5f4984f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_davinci, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:52:03 compute-0 podman[364817]: 2025-09-30 18:52:03.359044119 +0000 UTC m=+0.264486757 container died e5dbf1428a2494d85a8537b29e1d2a70b447b195cf332f06a95187be5f4984f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_davinci, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Sep 30 18:52:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:52:03 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1594717304' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:52:03 compute-0 nova_compute[265391]: 2025-09-30 18:52:03.438 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
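The ceph df call that nova's resource tracker logs above (and times at 0.496s) can be reproduced outside nova. A minimal sketch, using exactly the --id and --conf values shown in the log and the usual top-level keys of ceph df JSON output (key names may vary slightly between Ceph releases):

    # Sketch of the command logged by nova_compute above.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out).get("stats", {})
    print("total_bytes:", stats.get("total_bytes"),
          "avail_bytes:", stats.get("total_avail_bytes"))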
Sep 30 18:52:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee5116ae56f544f3db354e58bfa76ca0185956950de788e1cc2fcd98e378d8cc-merged.mount: Deactivated successfully.
Sep 30 18:52:03 compute-0 podman[364817]: 2025-09-30 18:52:03.549175625 +0000 UTC m=+0.454618233 container remove e5dbf1428a2494d85a8537b29e1d2a70b447b195cf332f06a95187be5f4984f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Sep 30 18:52:03 compute-0 systemd[1]: libpod-conmon-e5dbf1428a2494d85a8537b29e1d2a70b447b195cf332f06a95187be5f4984f6.scope: Deactivated successfully.
Sep 30 18:52:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2216: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 419 KiB/s rd, 2.2 MiB/s wr, 74 op/s
Sep 30 18:52:03 compute-0 nova_compute[265391]: 2025-09-30 18:52:03.588 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:52:03 compute-0 nova_compute[265391]: 2025-09-30 18:52:03.589 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:52:03 compute-0 nova_compute[265391]: 2025-09-30 18:52:03.609 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.020s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:52:03 compute-0 nova_compute[265391]: 2025-09-30 18:52:03.609 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4278MB free_disk=39.946693420410156GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:52:03 compute-0 nova_compute[265391]: 2025-09-30 18:52:03.609 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:52:03 compute-0 nova_compute[265391]: 2025-09-30 18:52:03.610 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:52:03 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1594717304' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:52:03 compute-0 podman[364905]: 2025-09-30 18:52:03.753533202 +0000 UTC m=+0.099579882 container create 42c3291d274cd2a098b9e83b9d64661157b7599256c710bee76ffb2c5ca18d80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_lewin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:52:03 compute-0 podman[364905]: 2025-09-30 18:52:03.675316035 +0000 UTC m=+0.021362735 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:52:03 compute-0 nova_compute[265391]: 2025-09-30 18:52:03.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:52:03 compute-0 systemd[1]: Started libpod-conmon-42c3291d274cd2a098b9e83b9d64661157b7599256c710bee76ffb2c5ca18d80.scope.
Sep 30 18:52:03 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:52:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c66ed0439f79a80dfe0c8b7bf9bd1c973ab76c9b7e5ac95167d2c3448d18a68/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:52:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c66ed0439f79a80dfe0c8b7bf9bd1c973ab76c9b7e5ac95167d2c3448d18a68/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:52:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c66ed0439f79a80dfe0c8b7bf9bd1c973ab76c9b7e5ac95167d2c3448d18a68/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:52:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c66ed0439f79a80dfe0c8b7bf9bd1c973ab76c9b7e5ac95167d2c3448d18a68/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:52:03 compute-0 podman[364905]: 2025-09-30 18:52:03.849837538 +0000 UTC m=+0.195884238 container init 42c3291d274cd2a098b9e83b9d64661157b7599256c710bee76ffb2c5ca18d80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_lewin, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:52:03 compute-0 podman[364905]: 2025-09-30 18:52:03.85762352 +0000 UTC m=+0.203670200 container start 42c3291d274cd2a098b9e83b9d64661157b7599256c710bee76ffb2c5ca18d80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_lewin, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Sep 30 18:52:03 compute-0 podman[364905]: 2025-09-30 18:52:03.863890762 +0000 UTC m=+0.209937452 container attach 42c3291d274cd2a098b9e83b9d64661157b7599256c710bee76ffb2c5ca18d80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Sep 30 18:52:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:52:03.945Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:52:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:52:03.945Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:52:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:52:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:52:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:52:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:52:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:52:04.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:52:04 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4121484505' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:52:04 compute-0 lvm[364997]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:52:04 compute-0 lvm[364997]: VG ceph_vg0 finished
Sep 30 18:52:04 compute-0 beautiful_lewin[364921]: {}
Sep 30 18:52:04 compute-0 systemd[1]: libpod-42c3291d274cd2a098b9e83b9d64661157b7599256c710bee76ffb2c5ca18d80.scope: Deactivated successfully.
Sep 30 18:52:04 compute-0 systemd[1]: libpod-42c3291d274cd2a098b9e83b9d64661157b7599256c710bee76ffb2c5ca18d80.scope: Consumed 1.104s CPU time.
Sep 30 18:52:04 compute-0 podman[364905]: 2025-09-30 18:52:04.620560594 +0000 UTC m=+0.966607294 container died 42c3291d274cd2a098b9e83b9d64661157b7599256c710bee76ffb2c5ca18d80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Sep 30 18:52:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c66ed0439f79a80dfe0c8b7bf9bd1c973ab76c9b7e5ac95167d2c3448d18a68-merged.mount: Deactivated successfully.
Sep 30 18:52:04 compute-0 podman[364905]: 2025-09-30 18:52:04.672903761 +0000 UTC m=+1.018950441 container remove 42c3291d274cd2a098b9e83b9d64661157b7599256c710bee76ffb2c5ca18d80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_lewin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:52:04 compute-0 systemd[1]: libpod-conmon-42c3291d274cd2a098b9e83b9d64661157b7599256c710bee76ffb2c5ca18d80.scope: Deactivated successfully.
Sep 30 18:52:04 compute-0 sudo[364749]: pam_unix(sudo:session): session closed for user root
Sep 30 18:52:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:52:04 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:52:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:52:04 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:52:04 compute-0 ceph-mon[73755]: pgmap v2216: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 419 KiB/s rd, 2.2 MiB/s wr, 74 op/s
Sep 30 18:52:04 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/4121484505' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:52:04 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:52:04 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:52:04 compute-0 sudo[365015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:52:04 compute-0 sudo[365015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:52:04 compute-0 sudo[365015]: pam_unix(sudo:session): session closed for user root
Sep 30 18:52:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:52:04.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:04 compute-0 nova_compute[265391]: 2025-09-30 18:52:04.887 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:52:04 compute-0 nova_compute[265391]: 2025-09-30 18:52:04.888 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:52:03 up  1:55,  0 user,  load average: 0.68, 0.66, 0.77\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:52:04 compute-0 nova_compute[265391]: 2025-09-30 18:52:04.905 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:52:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:52:05 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1119088799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:52:05 compute-0 nova_compute[265391]: 2025-09-30 18:52:05.320 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:52:05 compute-0 nova_compute[265391]: 2025-09-30 18:52:05.327 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:52:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2217: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 969 B/s rd, 16 KiB/s wr, 2 op/s
Sep 30 18:52:05 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1119088799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:52:05 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2537126038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:52:05 compute-0 nova_compute[265391]: 2025-09-30 18:52:05.838 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
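The inventory nova reports above pairs each resource class with a reserved amount and an allocation ratio; under the usual placement rule, schedulable capacity is (total - reserved) * allocation_ratio, which for these numbers works out to 32 VCPUs, 7167 MB of RAM and 34.2 GB of disk. A small sketch of that arithmetic, using only the values in the log line:

    # Capacity implied by the inventory logged above, assuming the standard
    # placement rule capacity = (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 39,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 34.2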
Sep 30 18:52:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:52:06.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:06 compute-0 nova_compute[265391]: 2025-09-30 18:52:06.356 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:52:06 compute-0 nova_compute[265391]: 2025-09-30 18:52:06.357 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.747s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:52:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:52:06 compute-0 ceph-mon[73755]: pgmap v2217: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 969 B/s rd, 16 KiB/s wr, 2 op/s
Sep 30 18:52:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:52:06.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:52:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:52:07 compute-0 nova_compute[265391]: 2025-09-30 18:52:07.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:52:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:52:07.413Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:52:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:52:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:52:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:52:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:52:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:52:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:52:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2218: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 969 B/s rd, 16 KiB/s wr, 2 op/s
Sep 30 18:52:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:52:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:52:08.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011386234188806407 of space, bias 1.0, pg target 0.22772468377612815 quantized to 32 (current 32)
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.677024321156475e-07 of space, bias 1.0, pg target 0.0001335404864231295 quantized to 32 (current 32)
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:52:08
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', '.mgr', 'volumes', 'images', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', '.nfs', '.rgw.root', 'backups']
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:52:08 compute-0 nova_compute[265391]: 2025-09-30 18:52:08.357 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:52:08 compute-0 nova_compute[265391]: 2025-09-30 18:52:08.357 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:52:08 compute-0 nova_compute[265391]: 2025-09-30 18:52:08.358 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:52:08 compute-0 nova_compute[265391]: 2025-09-30 18:52:08.358 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:52:08 compute-0 nova_compute[265391]: 2025-09-30 18:52:08.358 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:52:08 compute-0 nova_compute[265391]: 2025-09-30 18:52:08.424 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:52:08 compute-0 nova_compute[265391]: 2025-09-30 18:52:08.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:52:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:52:08] "GET /metrics HTTP/1.1" 200 46762 "" "Prometheus/2.51.0"
Sep 30 18:52:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:52:08] "GET /metrics HTTP/1.1" 200 46762 "" "Prometheus/2.51.0"
Sep 30 18:52:08 compute-0 ceph-mon[73755]: pgmap v2218: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 969 B/s rd, 16 KiB/s wr, 2 op/s
Sep 30 18:52:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:52:08.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:52:08.955Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:52:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:52:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:52:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:52:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:52:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2219: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 14 KiB/s rd, 16 KiB/s wr, 20 op/s
Sep 30 18:52:09 compute-0 ceph-mon[73755]: pgmap v2219: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 14 KiB/s rd, 16 KiB/s wr, 20 op/s
Sep 30 18:52:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:52:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:52:10.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:52:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:52:10.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:52:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2220: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 14 KiB/s rd, 4.2 KiB/s wr, 18 op/s
Sep 30 18:52:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:52:12.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:12 compute-0 nova_compute[265391]: 2025-09-30 18:52:12.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:52:12 compute-0 ceph-mon[73755]: pgmap v2220: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 14 KiB/s rd, 4.2 KiB/s wr, 18 op/s
Sep 30 18:52:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:52:12.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2221: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 14 KiB/s rd, 5.2 KiB/s wr, 19 op/s
Sep 30 18:52:13 compute-0 nova_compute[265391]: 2025-09-30 18:52:13.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:52:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:52:13.949Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:52:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:52:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:52:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:52:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:52:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:52:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:52:14.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:52:14 compute-0 ceph-mon[73755]: pgmap v2221: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 14 KiB/s rd, 5.2 KiB/s wr, 19 op/s
Sep 30 18:52:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:52:14.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2222: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 17 op/s
Sep 30 18:52:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:52:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:52:16.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:52:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:52:16 compute-0 ceph-mon[73755]: pgmap v2222: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 17 op/s
Sep 30 18:52:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 18:52:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:52:16.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 18:52:17 compute-0 nova_compute[265391]: 2025-09-30 18:52:17.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:52:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:52:17.415Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:52:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2223: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 17 op/s
Sep 30 18:52:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:52:18.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:18 compute-0 ceph-mon[73755]: pgmap v2223: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 17 op/s
Sep 30 18:52:18 compute-0 nova_compute[265391]: 2025-09-30 18:52:18.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:52:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:52:18] "GET /metrics HTTP/1.1" 200 46762 "" "Prometheus/2.51.0"
Sep 30 18:52:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:52:18] "GET /metrics HTTP/1.1" 200 46762 "" "Prometheus/2.51.0"
Sep 30 18:52:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:52:18.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:52:18.956Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:52:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:52:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:52:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:52:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:52:19 compute-0 podman[365076]: 2025-09-30 18:52:19.518886073 +0000 UTC m=+0.051653399 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4)
Sep 30 18:52:19 compute-0 podman[365078]: 2025-09-30 18:52:19.524003806 +0000 UTC m=+0.055398957 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Sep 30 18:52:19 compute-0 podman[365077]: 2025-09-30 18:52:19.554206589 +0000 UTC m=+0.086789391 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20250930, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.4)
Sep 30 18:52:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2224: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 18 op/s
Sep 30 18:52:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:52:20.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:20 compute-0 ceph-mon[73755]: pgmap v2224: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 18 op/s
Sep 30 18:52:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:52:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:52:20.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:52:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:52:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2225: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 1023 B/s wr, 0 op/s
Sep 30 18:52:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:52:22.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:52:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:52:22 compute-0 nova_compute[265391]: 2025-09-30 18:52:22.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:52:22 compute-0 ceph-mon[73755]: pgmap v2225: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 1023 B/s wr, 0 op/s
Sep 30 18:52:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:52:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:52:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:52:22.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:52:23 compute-0 sudo[365144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:52:23 compute-0 sudo[365144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:52:23 compute-0 sudo[365144]: pam_unix(sudo:session): session closed for user root
Sep 30 18:52:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2226: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 3.3 KiB/s wr, 1 op/s
Sep 30 18:52:23 compute-0 nova_compute[265391]: 2025-09-30 18:52:23.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:52:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:52:23.950Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:52:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:52:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:52:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:52:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:52:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:52:24.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:24 compute-0 ceph-mon[73755]: pgmap v2226: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 3.3 KiB/s wr, 1 op/s
Sep 30 18:52:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:52:24.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:25 compute-0 nova_compute[265391]: 2025-09-30 18:52:25.423 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:52:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2227: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 2.3 KiB/s wr, 0 op/s
Sep 30 18:52:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:52:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:52:26.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:52:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:52:26 compute-0 ceph-mon[73755]: pgmap v2227: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 2.3 KiB/s wr, 0 op/s
Sep 30 18:52:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:52:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:52:26.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:52:27 compute-0 nova_compute[265391]: 2025-09-30 18:52:27.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:52:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:52:27.416Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:52:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2228: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 2.3 KiB/s wr, 0 op/s
Sep 30 18:52:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:52:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:52:28.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:52:28 compute-0 nova_compute[265391]: 2025-09-30 18:52:28.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:52:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:52:28] "GET /metrics HTTP/1.1" 200 46763 "" "Prometheus/2.51.0"
Sep 30 18:52:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:52:28] "GET /metrics HTTP/1.1" 200 46763 "" "Prometheus/2.51.0"
Sep 30 18:52:28 compute-0 ceph-mon[73755]: pgmap v2228: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 2.3 KiB/s wr, 0 op/s
Sep 30 18:52:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:52:28.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:52:28.957Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:52:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:52:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:52:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:52:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:52:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2229: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 2.3 KiB/s wr, 1 op/s
Sep 30 18:52:29 compute-0 podman[276673]: time="2025-09-30T18:52:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:52:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:52:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:52:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:52:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10309 "" "Go-http-client/1.1"
Sep 30 18:52:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:52:30.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:30 compute-0 ceph-mon[73755]: pgmap v2229: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 2.3 KiB/s wr, 1 op/s
Sep 30 18:52:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:52:30.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:52:31 compute-0 openstack_network_exporter[279566]: ERROR   18:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:52:31 compute-0 openstack_network_exporter[279566]: ERROR   18:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:52:31 compute-0 openstack_network_exporter[279566]: ERROR   18:52:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:52:31 compute-0 openstack_network_exporter[279566]: ERROR   18:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:52:31 compute-0 openstack_network_exporter[279566]: ERROR   18:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:52:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2230: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 682 B/s rd, 2.3 KiB/s wr, 1 op/s
Sep 30 18:52:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:52:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:52:32.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:52:32 compute-0 nova_compute[265391]: 2025-09-30 18:52:32.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:52:32 compute-0 ceph-mon[73755]: pgmap v2230: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 682 B/s rd, 2.3 KiB/s wr, 1 op/s
Sep 30 18:52:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:52:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:52:32.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:52:33 compute-0 podman[365180]: 2025-09-30 18:52:33.516180578 +0000 UTC m=+0.059093122 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=iscsid)
Sep 30 18:52:33 compute-0 podman[365181]: 2025-09-30 18:52:33.545584621 +0000 UTC m=+0.076483434 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, architecture=x86_64, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, version=9.6, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, release=1755695350, build-date=2025-08-20T13:12:41)
Sep 30 18:52:33 compute-0 podman[365179]: 2025-09-30 18:52:33.554981354 +0000 UTC m=+0.099343116 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Sep 30 18:52:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2231: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 2.3 KiB/s wr, 1 op/s
Sep 30 18:52:33 compute-0 nova_compute[265391]: 2025-09-30 18:52:33.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:52:33 compute-0 ceph-mon[73755]: pgmap v2231: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 2.3 KiB/s wr, 1 op/s
Sep 30 18:52:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:52:33.951Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:52:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:52:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:52:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:52:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:52:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:52:34.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:52:34.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2232: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 682 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:52:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:52:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:52:36.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:52:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:52:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:52:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3635020971' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:52:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:52:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3635020971' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:52:36 compute-0 ceph-mon[73755]: pgmap v2232: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 682 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:52:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3635020971' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:52:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3635020971' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:52:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:52:36.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:52:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:52:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:52:37.417Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:52:37 compute-0 nova_compute[265391]: 2025-09-30 18:52:37.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:52:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:52:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:52:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:52:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:52:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:52:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:52:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2233: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 682 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:52:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:52:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:52:38.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:38 compute-0 ceph-mon[73755]: pgmap v2233: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 682 B/s rd, 0 B/s wr, 0 op/s
Sep 30 18:52:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:52:38] "GET /metrics HTTP/1.1" 200 46764 "" "Prometheus/2.51.0"
Sep 30 18:52:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:52:38] "GET /metrics HTTP/1.1" 200 46764 "" "Prometheus/2.51.0"
Sep 30 18:52:38 compute-0 nova_compute[265391]: 2025-09-30 18:52:38.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:52:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:52:38.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:52:38.957Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:52:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:52:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:52:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:52:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:52:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2234: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 8.1 KiB/s wr, 2 op/s
Sep 30 18:52:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:52:40.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:40 compute-0 ceph-mon[73755]: pgmap v2234: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 938 B/s rd, 8.1 KiB/s wr, 2 op/s
Sep 30 18:52:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:52:40.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:52:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2235: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 8.1 KiB/s wr, 1 op/s
Sep 30 18:52:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:52:42.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:42 compute-0 nova_compute[265391]: 2025-09-30 18:52:42.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:52:42 compute-0 ceph-mon[73755]: pgmap v2235: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 8.1 KiB/s wr, 1 op/s
Sep 30 18:52:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:52:42.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:43 compute-0 sudo[365248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:52:43 compute-0 sudo[365248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:52:43 compute-0 sudo[365248]: pam_unix(sudo:session): session closed for user root
Sep 30 18:52:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2236: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 8.1 KiB/s wr, 1 op/s
Sep 30 18:52:43 compute-0 nova_compute[265391]: 2025-09-30 18:52:43.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:52:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:52:43.952Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:52:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:52:43.952Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:52:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:52:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:52:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:52:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:52:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:52:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:52:44.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:52:44 compute-0 ceph-mon[73755]: pgmap v2236: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 8.1 KiB/s wr, 1 op/s
Sep 30 18:52:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:52:44.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2237: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 8.1 KiB/s wr, 1 op/s
Sep 30 18:52:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:52:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:52:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:52:46.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:52:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:52:46 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3596888471' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:52:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:52:46 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3596888471' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:52:46 compute-0 ceph-mon[73755]: pgmap v2237: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 8.1 KiB/s wr, 1 op/s
Sep 30 18:52:46 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3596888471' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:52:46 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3596888471' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:52:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:52:46.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:52:47.418Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:52:47 compute-0 nova_compute[265391]: 2025-09-30 18:52:47.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:52:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2238: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 8.1 KiB/s wr, 1 op/s
Sep 30 18:52:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:52:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:52:48.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:52:48 compute-0 ceph-mon[73755]: pgmap v2238: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 8.1 KiB/s wr, 1 op/s
Sep 30 18:52:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:52:48] "GET /metrics HTTP/1.1" 200 46764 "" "Prometheus/2.51.0"
Sep 30 18:52:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:52:48] "GET /metrics HTTP/1.1" 200 46764 "" "Prometheus/2.51.0"
Sep 30 18:52:48 compute-0 nova_compute[265391]: 2025-09-30 18:52:48.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:52:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:52:48.958Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:52:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:52:48.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:52:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:52:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:52:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:52:49 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:52:49.014 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:52:49 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:52:49.014 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:52:49 compute-0 nova_compute[265391]: 2025-09-30 18:52:49.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:52:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2239: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 11 KiB/s rd, 8.3 KiB/s wr, 15 op/s
Sep 30 18:52:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:52:50.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:50 compute-0 podman[365284]: 2025-09-30 18:52:50.522293007 +0000 UTC m=+0.058154538 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Sep 30 18:52:50 compute-0 podman[365282]: 2025-09-30 18:52:50.538038025 +0000 UTC m=+0.080502907 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.4)
Sep 30 18:52:50 compute-0 podman[365283]: 2025-09-30 18:52:50.554527633 +0000 UTC m=+0.091570645 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest)
Sep 30 18:52:50 compute-0 ceph-mon[73755]: pgmap v2239: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 11 KiB/s rd, 8.3 KiB/s wr, 15 op/s
Sep 30 18:52:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:52:50.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:52:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2240: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 11 KiB/s rd, 255 B/s wr, 14 op/s
Sep 30 18:52:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:52:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:52:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:52:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:52:52.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:52:52 compute-0 nova_compute[265391]: 2025-09-30 18:52:52.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:52:52 compute-0 ceph-mon[73755]: pgmap v2240: 353 pgs: 353 active+clean; 121 MiB data, 468 MiB used, 40 GiB / 40 GiB avail; 11 KiB/s rd, 255 B/s wr, 14 op/s
Sep 30 18:52:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:52:52 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2495173031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:52:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:52:52.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2241: 353 pgs: 353 active+clean; 41 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 30 KiB/s rd, 1.4 KiB/s wr, 41 op/s
Sep 30 18:52:53 compute-0 nova_compute[265391]: 2025-09-30 18:52:53.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:52:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:52:53.952Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:52:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:52:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:52:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:52:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:52:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:52:54.350 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:52:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:52:54.351 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:52:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:52:54.351 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:52:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:52:54.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:54 compute-0 ceph-mon[73755]: pgmap v2241: 353 pgs: 353 active+clean; 41 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 30 KiB/s rd, 1.4 KiB/s wr, 41 op/s
Sep 30 18:52:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:52:54.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:52:55 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3487887483' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:52:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:52:55 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3487887483' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:52:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2242: 353 pgs: 353 active+clean; 41 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 30 KiB/s rd, 1.4 KiB/s wr, 41 op/s
Sep 30 18:52:55 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3487887483' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:52:55 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3487887483' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:52:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:52:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:52:56.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:56 compute-0 ceph-mon[73755]: pgmap v2242: 353 pgs: 353 active+clean; 41 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 30 KiB/s rd, 1.4 KiB/s wr, 41 op/s
Sep 30 18:52:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:52:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:52:56.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:52:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:52:57.419Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:52:57 compute-0 nova_compute[265391]: 2025-09-30 18:52:57.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:52:57 compute-0 nova_compute[265391]: 2025-09-30 18:52:57.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:52:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2243: 353 pgs: 353 active+clean; 41 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 30 KiB/s rd, 1.4 KiB/s wr, 41 op/s
Sep 30 18:52:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1694317258' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:52:57 compute-0 ceph-mon[73755]: pgmap v2243: 353 pgs: 353 active+clean; 41 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 30 KiB/s rd, 1.4 KiB/s wr, 41 op/s
Sep 30 18:52:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1694317258' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:52:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:52:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:52:58.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:52:58 compute-0 nova_compute[265391]: 2025-09-30 18:52:58.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:52:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:52:58] "GET /metrics HTTP/1.1" 200 46767 "" "Prometheus/2.51.0"
Sep 30 18:52:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:52:58] "GET /metrics HTTP/1.1" 200 46767 "" "Prometheus/2.51.0"
Sep 30 18:52:58 compute-0 nova_compute[265391]: 2025-09-30 18:52:58.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:52:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:52:58.958Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:52:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:52:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:52:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:52:58.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:52:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:52:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:52:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:52:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:52:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:52:59 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:52:59.016 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:52:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2244: 353 pgs: 353 active+clean; 41 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 40 KiB/s rd, 2.0 KiB/s wr, 56 op/s
Sep 30 18:52:59 compute-0 podman[276673]: time="2025-09-30T18:52:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:52:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:52:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:52:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:52:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10315 "" "Go-http-client/1.1"
Sep 30 18:53:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:53:00.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:00 compute-0 ceph-mon[73755]: pgmap v2244: 353 pgs: 353 active+clean; 41 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 40 KiB/s rd, 2.0 KiB/s wr, 56 op/s
Sep 30 18:53:00 compute-0 sshd-session[365354]: Connection closed by 154.125.120.7 port 55606 [preauth]
Sep 30 18:53:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:53:00.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:53:01 compute-0 openstack_network_exporter[279566]: ERROR   18:53:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:53:01 compute-0 openstack_network_exporter[279566]: ERROR   18:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:53:01 compute-0 openstack_network_exporter[279566]: ERROR   18:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:53:01 compute-0 openstack_network_exporter[279566]: ERROR   18:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:53:01 compute-0 openstack_network_exporter[279566]: ERROR   18:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:53:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2245: 353 pgs: 353 active+clean; 41 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 30 KiB/s rd, 1.7 KiB/s wr, 42 op/s
Sep 30 18:53:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:53:02.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:02 compute-0 nova_compute[265391]: 2025-09-30 18:53:02.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:53:02 compute-0 nova_compute[265391]: 2025-09-30 18:53:02.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:53:02 compute-0 ceph-mon[73755]: pgmap v2245: 353 pgs: 353 active+clean; 41 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 30 KiB/s rd, 1.7 KiB/s wr, 42 op/s
Sep 30 18:53:02 compute-0 nova_compute[265391]: 2025-09-30 18:53:02.940 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:53:02 compute-0 nova_compute[265391]: 2025-09-30 18:53:02.940 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:53:02 compute-0 nova_compute[265391]: 2025-09-30 18:53:02.941 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:53:02 compute-0 nova_compute[265391]: 2025-09-30 18:53:02.941 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:53:02 compute-0 nova_compute[265391]: 2025-09-30 18:53:02.941 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:53:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:53:02.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:53:03 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1395640280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:53:03 compute-0 nova_compute[265391]: 2025-09-30 18:53:03.381 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:53:03 compute-0 nova_compute[265391]: 2025-09-30 18:53:03.517 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:53:03 compute-0 nova_compute[265391]: 2025-09-30 18:53:03.518 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:53:03 compute-0 nova_compute[265391]: 2025-09-30 18:53:03.550 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.032s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:53:03 compute-0 nova_compute[265391]: 2025-09-30 18:53:03.551 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4320MB free_disk=39.992183685302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:53:03 compute-0 nova_compute[265391]: 2025-09-30 18:53:03.551 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:53:03 compute-0 nova_compute[265391]: 2025-09-30 18:53:03.551 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:53:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2246: 353 pgs: 353 active+clean; 41 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 30 KiB/s rd, 1.7 KiB/s wr, 42 op/s
Sep 30 18:53:03 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1153435571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:53:03 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1395640280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:53:03 compute-0 sudo[365389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:53:03 compute-0 sudo[365389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:53:03 compute-0 sudo[365389]: pam_unix(sudo:session): session closed for user root
Sep 30 18:53:03 compute-0 nova_compute[265391]: 2025-09-30 18:53:03.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:53:03 compute-0 podman[365413]: 2025-09-30 18:53:03.816221563 +0000 UTC m=+0.081055052 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2)
Sep 30 18:53:03 compute-0 podman[365415]: 2025-09-30 18:53:03.825058592 +0000 UTC m=+0.078222908 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, architecture=x86_64, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Sep 30 18:53:03 compute-0 podman[365414]: 2025-09-30 18:53:03.82730298 +0000 UTC m=+0.081591265 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 18:53:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:53:03.953Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:53:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:53:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:53:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:53:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:53:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:53:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:53:04.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:53:04 compute-0 nova_compute[265391]: 2025-09-30 18:53:04.594 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:53:04 compute-0 nova_compute[265391]: 2025-09-30 18:53:04.594 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:53:03 up  1:56,  0 user,  load average: 0.35, 0.58, 0.74\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:53:04 compute-0 nova_compute[265391]: 2025-09-30 18:53:04.614 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:53:04 compute-0 ceph-mon[73755]: pgmap v2246: 353 pgs: 353 active+clean; 41 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 30 KiB/s rd, 1.7 KiB/s wr, 42 op/s
Sep 30 18:53:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:53:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:53:04.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:53:05 compute-0 sudo[365496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:53:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:53:05 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2652299208' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:53:05 compute-0 sudo[365496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:53:05 compute-0 sudo[365496]: pam_unix(sudo:session): session closed for user root
Sep 30 18:53:05 compute-0 nova_compute[265391]: 2025-09-30 18:53:05.087 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:53:05 compute-0 nova_compute[265391]: 2025-09-30 18:53:05.095 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:53:05 compute-0 sudo[365523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:53:05 compute-0 sudo[365523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:53:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2247: 353 pgs: 353 active+clean; 41 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 11 KiB/s rd, 597 B/s wr, 14 op/s
Sep 30 18:53:05 compute-0 nova_compute[265391]: 2025-09-30 18:53:05.607 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
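The inventory line above carries this node's placement arithmetic. Assuming the usual Placement rule that allocatable capacity is (total - reserved) * allocation_ratio (an assumption about Placement internals, not something this log states), the reported numbers work out as in the short illustrative sketch below:

    # Hedged sketch, not nova code: reproduce the allocatable capacity Placement
    # would derive from the inventory reported for provider 5403d2fc-... above,
    # assuming capacity = (total - reserved) * allocation_ratio.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 39,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = int((inv['total'] - inv['reserved']) * inv['allocation_ratio'])
        print(rc, capacity)
    # VCPU 32, MEMORY_MB 7167, DISK_GB 34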
Sep 30 18:53:05 compute-0 sudo[365523]: pam_unix(sudo:session): session closed for user root
Sep 30 18:53:05 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2652299208' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:53:05 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3024213687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:53:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:53:05 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:53:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:53:05 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:53:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2248: 353 pgs: 353 active+clean; 41 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 13 KiB/s rd, 695 B/s wr, 17 op/s
Sep 30 18:53:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:53:05 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:53:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:53:05 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:53:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:53:05 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:53:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:53:05 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:53:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:53:05 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:53:06 compute-0 sudo[365583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:53:06 compute-0 sudo[365583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:53:06 compute-0 sudo[365583]: pam_unix(sudo:session): session closed for user root
Sep 30 18:53:06 compute-0 sudo[365608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:53:06 compute-0 sudo[365608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:53:06 compute-0 nova_compute[265391]: 2025-09-30 18:53:06.120 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:53:06 compute-0 nova_compute[265391]: 2025-09-30 18:53:06.120 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.569s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:53:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:53:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:53:06.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
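The anonymous "HEAD / HTTP/1.0" requests logged every couple of seconds from 192.168.122.100 and 192.168.122.101 look like load-balancer health probes against the RGW endpoint. If the beast access-log layout matches the lines above (an assumption inferred only from these lines), a small parser such as the following can pull client, status and latency out of each entry for quick aggregation:

    import re

    # Field layout assumed from the beast lines in this log; adjust if the
    # access-log format differs in other releases.
    BEAST_RE = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f25f11485d0: 192.168.122.100 - anonymous '
            '[30/Sep/2025:18:53:06.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST_RE.search(line)
    print(m.group('client'), m.group('status'), m.group('latency'))
    # 192.168.122.100 200 0.000000000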
Sep 30 18:53:06 compute-0 podman[365673]: 2025-09-30 18:53:06.50764411 +0000 UTC m=+0.051197978 container create 1dcc735c437ede42be4520606c4b192eccc897b6e419f8b7687ab2051ad1ed80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_euler, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:53:06 compute-0 systemd[1]: Started libpod-conmon-1dcc735c437ede42be4520606c4b192eccc897b6e419f8b7687ab2051ad1ed80.scope.
Sep 30 18:53:06 compute-0 podman[365673]: 2025-09-30 18:53:06.483988457 +0000 UTC m=+0.027542405 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:53:06 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:53:06 compute-0 podman[365673]: 2025-09-30 18:53:06.601043631 +0000 UTC m=+0.144597559 container init 1dcc735c437ede42be4520606c4b192eccc897b6e419f8b7687ab2051ad1ed80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Sep 30 18:53:06 compute-0 podman[365673]: 2025-09-30 18:53:06.616930832 +0000 UTC m=+0.160484710 container start 1dcc735c437ede42be4520606c4b192eccc897b6e419f8b7687ab2051ad1ed80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_euler, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:53:06 compute-0 podman[365673]: 2025-09-30 18:53:06.620903845 +0000 UTC m=+0.164457733 container attach 1dcc735c437ede42be4520606c4b192eccc897b6e419f8b7687ab2051ad1ed80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_euler, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 18:53:06 compute-0 sleepy_euler[365690]: 167 167
Sep 30 18:53:06 compute-0 systemd[1]: libpod-1dcc735c437ede42be4520606c4b192eccc897b6e419f8b7687ab2051ad1ed80.scope: Deactivated successfully.
Sep 30 18:53:06 compute-0 conmon[365690]: conmon 1dcc735c437ede42be45 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1dcc735c437ede42be4520606c4b192eccc897b6e419f8b7687ab2051ad1ed80.scope/container/memory.events
Sep 30 18:53:06 compute-0 podman[365673]: 2025-09-30 18:53:06.627705312 +0000 UTC m=+0.171259190 container died 1dcc735c437ede42be4520606c4b192eccc897b6e419f8b7687ab2051ad1ed80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_euler, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 18:53:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-522eb866284b789584a6404a6579fc86532362f47aabb130a36674cf608d28be-merged.mount: Deactivated successfully.
Sep 30 18:53:06 compute-0 podman[365673]: 2025-09-30 18:53:06.678006645 +0000 UTC m=+0.221560533 container remove 1dcc735c437ede42be4520606c4b192eccc897b6e419f8b7687ab2051ad1ed80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Sep 30 18:53:06 compute-0 systemd[1]: libpod-conmon-1dcc735c437ede42be4520606c4b192eccc897b6e419f8b7687ab2051ad1ed80.scope: Deactivated successfully.
Sep 30 18:53:06 compute-0 ceph-mon[73755]: pgmap v2247: 353 pgs: 353 active+clean; 41 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 11 KiB/s rd, 597 B/s wr, 14 op/s
Sep 30 18:53:06 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:53:06 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:53:06 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:53:06 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:53:06 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:53:06 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:53:06 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:53:06 compute-0 podman[365713]: 2025-09-30 18:53:06.933247011 +0000 UTC m=+0.069167074 container create 8432c3e556d1ec356cea1089a8f2ee9f8a5a27f0ec3eb693c7b233289116ae0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hoover, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 18:53:06 compute-0 systemd[1]: Started libpod-conmon-8432c3e556d1ec356cea1089a8f2ee9f8a5a27f0ec3eb693c7b233289116ae0a.scope.
Sep 30 18:53:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:53:06.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:06 compute-0 podman[365713]: 2025-09-30 18:53:06.904505996 +0000 UTC m=+0.040426099 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:53:07 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:53:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ea25370986cbd7a4b10ffedf3edd8b6c27b3313f0a61d50fa8841eeb7c933d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:53:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ea25370986cbd7a4b10ffedf3edd8b6c27b3313f0a61d50fa8841eeb7c933d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:53:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ea25370986cbd7a4b10ffedf3edd8b6c27b3313f0a61d50fa8841eeb7c933d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:53:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ea25370986cbd7a4b10ffedf3edd8b6c27b3313f0a61d50fa8841eeb7c933d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:53:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ea25370986cbd7a4b10ffedf3edd8b6c27b3313f0a61d50fa8841eeb7c933d7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:53:07 compute-0 podman[365713]: 2025-09-30 18:53:07.051736562 +0000 UTC m=+0.187656645 container init 8432c3e556d1ec356cea1089a8f2ee9f8a5a27f0ec3eb693c7b233289116ae0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Sep 30 18:53:07 compute-0 podman[365713]: 2025-09-30 18:53:07.063439305 +0000 UTC m=+0.199359368 container start 8432c3e556d1ec356cea1089a8f2ee9f8a5a27f0ec3eb693c7b233289116ae0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hoover, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Sep 30 18:53:07 compute-0 podman[365713]: 2025-09-30 18:53:07.06747179 +0000 UTC m=+0.203391873 container attach 8432c3e556d1ec356cea1089a8f2ee9f8a5a27f0ec3eb693c7b233289116ae0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hoover, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:53:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:53:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:53:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:53:07.420Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:53:07 compute-0 amazing_hoover[365730]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:53:07 compute-0 amazing_hoover[365730]: --> All data devices are unavailable
Sep 30 18:53:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:53:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:53:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:53:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:53:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:53:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:53:07 compute-0 nova_compute[265391]: 2025-09-30 18:53:07.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:53:07 compute-0 systemd[1]: libpod-8432c3e556d1ec356cea1089a8f2ee9f8a5a27f0ec3eb693c7b233289116ae0a.scope: Deactivated successfully.
Sep 30 18:53:07 compute-0 podman[365713]: 2025-09-30 18:53:07.492913356 +0000 UTC m=+0.628833440 container died 8432c3e556d1ec356cea1089a8f2ee9f8a5a27f0ec3eb693c7b233289116ae0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hoover, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:53:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ea25370986cbd7a4b10ffedf3edd8b6c27b3313f0a61d50fa8841eeb7c933d7-merged.mount: Deactivated successfully.
Sep 30 18:53:07 compute-0 podman[365713]: 2025-09-30 18:53:07.540450589 +0000 UTC m=+0.676370652 container remove 8432c3e556d1ec356cea1089a8f2ee9f8a5a27f0ec3eb693c7b233289116ae0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hoover, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Sep 30 18:53:07 compute-0 systemd[1]: libpod-conmon-8432c3e556d1ec356cea1089a8f2ee9f8a5a27f0ec3eb693c7b233289116ae0a.scope: Deactivated successfully.
Sep 30 18:53:07 compute-0 sudo[365608]: pam_unix(sudo:session): session closed for user root
Sep 30 18:53:07 compute-0 sudo[365759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:53:07 compute-0 sudo[365759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:53:07 compute-0 sudo[365759]: pam_unix(sudo:session): session closed for user root
Sep 30 18:53:07 compute-0 sudo[365784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:53:07 compute-0 sudo[365784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:53:07 compute-0 ceph-mon[73755]: pgmap v2248: 353 pgs: 353 active+clean; 41 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 13 KiB/s rd, 695 B/s wr, 17 op/s
Sep 30 18:53:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:53:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2249: 353 pgs: 353 active+clean; 41 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 13 KiB/s rd, 695 B/s wr, 17 op/s
Sep 30 18:53:08 compute-0 podman[365850]: 2025-09-30 18:53:08.146083965 +0000 UTC m=+0.054093633 container create b21d77a4be0b31d2ee98df37a8314b940d8f3e39d97edc3165234ba94e3e7025 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:53:08 compute-0 systemd[1]: Started libpod-conmon-b21d77a4be0b31d2ee98df37a8314b940d8f3e39d97edc3165234ba94e3e7025.scope.
Sep 30 18:53:08 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:53:08 compute-0 podman[365850]: 2025-09-30 18:53:08.127072752 +0000 UTC m=+0.035082450 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:53:08 compute-0 podman[365850]: 2025-09-30 18:53:08.235920803 +0000 UTC m=+0.143930461 container init b21d77a4be0b31d2ee98df37a8314b940d8f3e39d97edc3165234ba94e3e7025 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_brattain, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:53:08 compute-0 podman[365850]: 2025-09-30 18:53:08.247836422 +0000 UTC m=+0.155846150 container start b21d77a4be0b31d2ee98df37a8314b940d8f3e39d97edc3165234ba94e3e7025 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_brattain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Sep 30 18:53:08 compute-0 podman[365850]: 2025-09-30 18:53:08.252025841 +0000 UTC m=+0.160035499 container attach b21d77a4be0b31d2ee98df37a8314b940d8f3e39d97edc3165234ba94e3e7025 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:53:08 compute-0 optimistic_brattain[365867]: 167 167
Sep 30 18:53:08 compute-0 systemd[1]: libpod-b21d77a4be0b31d2ee98df37a8314b940d8f3e39d97edc3165234ba94e3e7025.scope: Deactivated successfully.
Sep 30 18:53:08 compute-0 conmon[365867]: conmon b21d77a4be0b31d2ee98 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b21d77a4be0b31d2ee98df37a8314b940d8f3e39d97edc3165234ba94e3e7025.scope/container/memory.events
Sep 30 18:53:08 compute-0 podman[365850]: 2025-09-30 18:53:08.255141091 +0000 UTC m=+0.163150759 container died b21d77a4be0b31d2ee98df37a8314b940d8f3e39d97edc3165234ba94e3e7025 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Sep 30 18:53:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b9e37cc328b5ff73d938b861432edcbeed0a30ff552ae14aa0a925e721d35b0-merged.mount: Deactivated successfully.
Sep 30 18:53:08 compute-0 podman[365850]: 2025-09-30 18:53:08.293111565 +0000 UTC m=+0.201121223 container remove b21d77a4be0b31d2ee98df37a8314b940d8f3e39d97edc3165234ba94e3e7025 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:53:08 compute-0 systemd[1]: libpod-conmon-b21d77a4be0b31d2ee98df37a8314b940d8f3e39d97edc3165234ba94e3e7025.scope: Deactivated successfully.
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 2.861581851924204e-07 of space, bias 1.0, pg target 5.7231637038484085e-05 quantized to 32 (current 32)
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
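The pg_autoscaler lines above are internally consistent: each raw pg target equals capacity_ratio * bias * 200, so the PG budget implied for root_id -1 on this cluster is 200 (in general that budget is derived from mon_target_pg_per_osd and the OSDs under the root; 200 here is simply read off the logged numbers). What the log calls "quantized" appears to fold in power-of-two rounding plus per-pool minimums, which is presumably why the near-empty pools stay at 32 and .mgr at 1; only the multiplicative part is reproduced below, using values copied from the log:

    # Hedged sketch, not Ceph's autoscaler code: check that the raw pg targets
    # logged above equal capacity_ratio * bias * ROOT_PG_BUDGET.
    ROOT_PG_BUDGET = 200  # implied by the logged numbers for root_id -1
    pools = {
        '.mgr':               (1.0778624975581169e-05, 1.0),
        'images':             (0.000998787452383278,   1.0),
        'cephfs.cephfs.meta': (7.630884938464543e-07,  4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * ROOT_PG_BUDGET)
    # .mgr ~0.0021557, images ~0.1997575, cephfs.cephfs.meta ~0.0006105,
    # matching the "pg target" values in the log lines above.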
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:53:08
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['volumes', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'default.rgw.log', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', '.nfs']
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:53:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:53:08.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:08 compute-0 podman[365893]: 2025-09-30 18:53:08.504140615 +0000 UTC m=+0.055355556 container create 7b02d7855e39ef872dd98cfe967ce398a14f4dcc25991af9f331bd78aa0b2622 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:53:08 compute-0 systemd[1]: Started libpod-conmon-7b02d7855e39ef872dd98cfe967ce398a14f4dcc25991af9f331bd78aa0b2622.scope.
Sep 30 18:53:08 compute-0 podman[365893]: 2025-09-30 18:53:08.47499657 +0000 UTC m=+0.026211561 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:53:08 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:53:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad908fc549a0691c2e49a8b651effbf1875c8ac7af1c1017fd8873e97a455e74/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:53:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad908fc549a0691c2e49a8b651effbf1875c8ac7af1c1017fd8873e97a455e74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:53:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad908fc549a0691c2e49a8b651effbf1875c8ac7af1c1017fd8873e97a455e74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:53:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad908fc549a0691c2e49a8b651effbf1875c8ac7af1c1017fd8873e97a455e74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:53:08 compute-0 podman[365893]: 2025-09-30 18:53:08.626225749 +0000 UTC m=+0.177440670 container init 7b02d7855e39ef872dd98cfe967ce398a14f4dcc25991af9f331bd78aa0b2622 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:53:08 compute-0 podman[365893]: 2025-09-30 18:53:08.64131707 +0000 UTC m=+0.192531971 container start 7b02d7855e39ef872dd98cfe967ce398a14f4dcc25991af9f331bd78aa0b2622 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_hellman, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:53:08 compute-0 podman[365893]: 2025-09-30 18:53:08.646041913 +0000 UTC m=+0.197256804 container attach 7b02d7855e39ef872dd98cfe967ce398a14f4dcc25991af9f331bd78aa0b2622 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 18:53:08 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1519403048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:53:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:53:08] "GET /metrics HTTP/1.1" 200 46749 "" "Prometheus/2.51.0"
Sep 30 18:53:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:53:08] "GET /metrics HTTP/1.1" 200 46749 "" "Prometheus/2.51.0"
Sep 30 18:53:08 compute-0 nova_compute[265391]: 2025-09-30 18:53:08.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:53:08 compute-0 serene_hellman[365910]: {
Sep 30 18:53:08 compute-0 serene_hellman[365910]:     "0": [
Sep 30 18:53:08 compute-0 serene_hellman[365910]:         {
Sep 30 18:53:08 compute-0 serene_hellman[365910]:             "devices": [
Sep 30 18:53:08 compute-0 serene_hellman[365910]:                 "/dev/loop3"
Sep 30 18:53:08 compute-0 serene_hellman[365910]:             ],
Sep 30 18:53:08 compute-0 serene_hellman[365910]:             "lv_name": "ceph_lv0",
Sep 30 18:53:08 compute-0 serene_hellman[365910]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:53:08 compute-0 serene_hellman[365910]:             "lv_size": "21470642176",
Sep 30 18:53:08 compute-0 serene_hellman[365910]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:53:08 compute-0 serene_hellman[365910]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:53:08 compute-0 serene_hellman[365910]:             "name": "ceph_lv0",
Sep 30 18:53:08 compute-0 serene_hellman[365910]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:53:08 compute-0 serene_hellman[365910]:             "tags": {
Sep 30 18:53:08 compute-0 serene_hellman[365910]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:53:08 compute-0 serene_hellman[365910]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:53:08 compute-0 serene_hellman[365910]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:53:08 compute-0 serene_hellman[365910]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:53:08 compute-0 serene_hellman[365910]:                 "ceph.cluster_name": "ceph",
Sep 30 18:53:08 compute-0 serene_hellman[365910]:                 "ceph.crush_device_class": "",
Sep 30 18:53:08 compute-0 serene_hellman[365910]:                 "ceph.encrypted": "0",
Sep 30 18:53:08 compute-0 serene_hellman[365910]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:53:08 compute-0 serene_hellman[365910]:                 "ceph.osd_id": "0",
Sep 30 18:53:08 compute-0 serene_hellman[365910]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:53:08 compute-0 serene_hellman[365910]:                 "ceph.type": "block",
Sep 30 18:53:08 compute-0 serene_hellman[365910]:                 "ceph.vdo": "0",
Sep 30 18:53:08 compute-0 serene_hellman[365910]:                 "ceph.with_tpm": "0"
Sep 30 18:53:08 compute-0 serene_hellman[365910]:             },
Sep 30 18:53:08 compute-0 serene_hellman[365910]:             "type": "block",
Sep 30 18:53:08 compute-0 serene_hellman[365910]:             "vg_name": "ceph_vg0"
Sep 30 18:53:08 compute-0 serene_hellman[365910]:         }
Sep 30 18:53:08 compute-0 serene_hellman[365910]:     ]
Sep 30 18:53:08 compute-0 serene_hellman[365910]: }
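The JSON block above is the output of the containerized "ceph-volume ... lvm list --format json" run started by sudo[365784]; it maps osd.0 to the LV /dev/ceph_vg0/ceph_lv0 backed by /dev/loop3, with the cluster and OSD fsids recorded in the LV tags. A minimal sketch of walking that structure, trimmed to the fields it actually touches:

    import json

    # Trimmed copy of the `ceph-volume lvm list --format json` output logged
    # above, reduced to the fields this sketch reads.
    output = '''
    {"0": [{"devices": ["/dev/loop3"],
            "lv_path": "/dev/ceph_vg0/ceph_lv0",
            "type": "block",
            "tags": {"ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f"}}]}
    '''
    for osd_id, lvs in json.loads(output).items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(fsid {lv['tags']['ceph.osd_fsid']})")
    # osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (fsid a6bb176f-a2ce-4022-8226-399d42b79f3f)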
Sep 30 18:53:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:53:08.960Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:53:08 compute-0 systemd[1]: libpod-7b02d7855e39ef872dd98cfe967ce398a14f4dcc25991af9f331bd78aa0b2622.scope: Deactivated successfully.
Sep 30 18:53:08 compute-0 podman[365893]: 2025-09-30 18:53:08.980955603 +0000 UTC m=+0.532170504 container died 7b02d7855e39ef872dd98cfe967ce398a14f4dcc25991af9f331bd78aa0b2622 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Sep 30 18:53:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:53:08.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:53:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:53:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:53:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:53:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad908fc549a0691c2e49a8b651effbf1875c8ac7af1c1017fd8873e97a455e74-merged.mount: Deactivated successfully.
Sep 30 18:53:09 compute-0 podman[365893]: 2025-09-30 18:53:09.037189121 +0000 UTC m=+0.588404022 container remove 7b02d7855e39ef872dd98cfe967ce398a14f4dcc25991af9f331bd78aa0b2622 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Sep 30 18:53:09 compute-0 systemd[1]: libpod-conmon-7b02d7855e39ef872dd98cfe967ce398a14f4dcc25991af9f331bd78aa0b2622.scope: Deactivated successfully.
Sep 30 18:53:09 compute-0 sudo[365784]: pam_unix(sudo:session): session closed for user root
Sep 30 18:53:09 compute-0 nova_compute[265391]: 2025-09-30 18:53:09.120 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:53:09 compute-0 nova_compute[265391]: 2025-09-30 18:53:09.121 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:53:09 compute-0 nova_compute[265391]: 2025-09-30 18:53:09.121 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:53:09 compute-0 nova_compute[265391]: 2025-09-30 18:53:09.121 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:53:09 compute-0 nova_compute[265391]: 2025-09-30 18:53:09.121 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
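The last periodic-task line reports that deferred reclaim of soft-deleted instances is off because reclaim_instance_interval is at its default of 0. A hedged paraphrase of that guard, illustrative only and not nova's actual implementation:

    # reclaim_instance_interval is a nova.conf [DEFAULT] option; the log line
    # above implies it is 0 on this host, so the reclaim task exits immediately.
    reclaim_instance_interval = 0

    def _reclaim_queued_deletes(interval):
        if interval <= 0:
            print("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # With a positive interval, nova would instead look up SOFT_DELETED
        # instances older than `interval` seconds and purge them here.

    _reclaim_queued_deletes(reclaim_instance_interval)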
Sep 30 18:53:09 compute-0 sudo[365929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:53:09 compute-0 sudo[365929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:53:09 compute-0 sudo[365929]: pam_unix(sudo:session): session closed for user root
Sep 30 18:53:09 compute-0 sudo[365954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:53:09 compute-0 sudo[365954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:53:09 compute-0 nova_compute[265391]: 2025-09-30 18:53:09.424 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:53:09 compute-0 podman[366019]: 2025-09-30 18:53:09.597027981 +0000 UTC m=+0.048523689 container create 9077ec25128ea016d7b97219687532bf38b0e2ce44e1d256c8347fb621e059fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_kare, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:53:09 compute-0 systemd[1]: Started libpod-conmon-9077ec25128ea016d7b97219687532bf38b0e2ce44e1d256c8347fb621e059fc.scope.
Sep 30 18:53:09 compute-0 podman[366019]: 2025-09-30 18:53:09.576796957 +0000 UTC m=+0.028292675 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:53:09 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:53:09 compute-0 podman[366019]: 2025-09-30 18:53:09.696690174 +0000 UTC m=+0.148185982 container init 9077ec25128ea016d7b97219687532bf38b0e2ce44e1d256c8347fb621e059fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_kare, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 18:53:09 compute-0 podman[366019]: 2025-09-30 18:53:09.709489196 +0000 UTC m=+0.160984924 container start 9077ec25128ea016d7b97219687532bf38b0e2ce44e1d256c8347fb621e059fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Sep 30 18:53:09 compute-0 podman[366019]: 2025-09-30 18:53:09.713844959 +0000 UTC m=+0.165340747 container attach 9077ec25128ea016d7b97219687532bf38b0e2ce44e1d256c8347fb621e059fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 18:53:09 compute-0 dreamy_kare[366036]: 167 167
Sep 30 18:53:09 compute-0 systemd[1]: libpod-9077ec25128ea016d7b97219687532bf38b0e2ce44e1d256c8347fb621e059fc.scope: Deactivated successfully.
Sep 30 18:53:09 compute-0 podman[366019]: 2025-09-30 18:53:09.717459052 +0000 UTC m=+0.168954820 container died 9077ec25128ea016d7b97219687532bf38b0e2ce44e1d256c8347fb621e059fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_kare, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:53:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7fc76ecd7b91195da7e23617c010f0e299ddf0afaeac8fee598a5acc2c12864-merged.mount: Deactivated successfully.
Sep 30 18:53:09 compute-0 podman[366019]: 2025-09-30 18:53:09.776298087 +0000 UTC m=+0.227793825 container remove 9077ec25128ea016d7b97219687532bf38b0e2ce44e1d256c8347fb621e059fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_kare, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:53:09 compute-0 ceph-mon[73755]: pgmap v2249: 353 pgs: 353 active+clean; 41 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 13 KiB/s rd, 695 B/s wr, 17 op/s
Sep 30 18:53:09 compute-0 systemd[1]: libpod-conmon-9077ec25128ea016d7b97219687532bf38b0e2ce44e1d256c8347fb621e059fc.scope: Deactivated successfully.
Sep 30 18:53:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2250: 353 pgs: 353 active+clean; 41 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 596 B/s rd, 0 op/s
Sep 30 18:53:10 compute-0 podman[366064]: 2025-09-30 18:53:10.04029007 +0000 UTC m=+0.076718020 container create 18b48f097226ad4ad959f3ecf6b123609459da23a25c6b919a3ac94e7d03b6ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_liskov, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Sep 30 18:53:10 compute-0 systemd[1]: Started libpod-conmon-18b48f097226ad4ad959f3ecf6b123609459da23a25c6b919a3ac94e7d03b6ee.scope.
Sep 30 18:53:10 compute-0 podman[366064]: 2025-09-30 18:53:10.011487723 +0000 UTC m=+0.047915713 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:53:10 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:53:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11026d66474239d8ca8118f51cb2c10c2112dcc73095cebb2d17463b1f44395f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:53:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11026d66474239d8ca8118f51cb2c10c2112dcc73095cebb2d17463b1f44395f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:53:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11026d66474239d8ca8118f51cb2c10c2112dcc73095cebb2d17463b1f44395f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:53:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11026d66474239d8ca8118f51cb2c10c2112dcc73095cebb2d17463b1f44395f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:53:10 compute-0 podman[366064]: 2025-09-30 18:53:10.147664323 +0000 UTC m=+0.184092253 container init 18b48f097226ad4ad959f3ecf6b123609459da23a25c6b919a3ac94e7d03b6ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:53:10 compute-0 podman[366064]: 2025-09-30 18:53:10.158459012 +0000 UTC m=+0.194886932 container start 18b48f097226ad4ad959f3ecf6b123609459da23a25c6b919a3ac94e7d03b6ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:53:10 compute-0 podman[366064]: 2025-09-30 18:53:10.162688492 +0000 UTC m=+0.199116442 container attach 18b48f097226ad4ad959f3ecf6b123609459da23a25c6b919a3ac94e7d03b6ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_liskov, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Sep 30 18:53:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:53:10.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:10 compute-0 lvm[366155]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:53:10 compute-0 lvm[366155]: VG ceph_vg0 finished
Sep 30 18:53:10 compute-0 nice_liskov[366081]: {}
Sep 30 18:53:10 compute-0 systemd[1]: libpod-18b48f097226ad4ad959f3ecf6b123609459da23a25c6b919a3ac94e7d03b6ee.scope: Deactivated successfully.
Sep 30 18:53:10 compute-0 podman[366064]: 2025-09-30 18:53:10.842077751 +0000 UTC m=+0.878505681 container died 18b48f097226ad4ad959f3ecf6b123609459da23a25c6b919a3ac94e7d03b6ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Sep 30 18:53:10 compute-0 systemd[1]: libpod-18b48f097226ad4ad959f3ecf6b123609459da23a25c6b919a3ac94e7d03b6ee.scope: Consumed 1.042s CPU time.
Sep 30 18:53:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-11026d66474239d8ca8118f51cb2c10c2112dcc73095cebb2d17463b1f44395f-merged.mount: Deactivated successfully.
Sep 30 18:53:10 compute-0 podman[366064]: 2025-09-30 18:53:10.886481052 +0000 UTC m=+0.922908962 container remove 18b48f097226ad4ad959f3ecf6b123609459da23a25c6b919a3ac94e7d03b6ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_liskov, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:53:10 compute-0 systemd[1]: libpod-conmon-18b48f097226ad4ad959f3ecf6b123609459da23a25c6b919a3ac94e7d03b6ee.scope: Deactivated successfully.
Sep 30 18:53:10 compute-0 sudo[365954]: pam_unix(sudo:session): session closed for user root
Sep 30 18:53:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:53:10 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:53:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:53:10 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:53:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:53:10.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:11 compute-0 sudo[366170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:53:11 compute-0 sudo[366170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:53:11 compute-0 sudo[366170]: pam_unix(sudo:session): session closed for user root
Sep 30 18:53:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:53:11 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Sep 30 18:53:11 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:53:11.384692) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 18:53:11 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Sep 30 18:53:11 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258391384734, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 1270, "num_deletes": 250, "total_data_size": 2185623, "memory_usage": 2226024, "flush_reason": "Manual Compaction"}
Sep 30 18:53:11 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Sep 30 18:53:11 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258391392197, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 1349943, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54839, "largest_seqno": 56108, "table_properties": {"data_size": 1345236, "index_size": 2102, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12891, "raw_average_key_size": 21, "raw_value_size": 1334772, "raw_average_value_size": 2191, "num_data_blocks": 93, "num_entries": 609, "num_filter_entries": 609, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759258280, "oldest_key_time": 1759258280, "file_creation_time": 1759258391, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:53:11 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 7550 microseconds, and 4208 cpu microseconds.
Sep 30 18:53:11 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:53:11 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:53:11.392239) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 1349943 bytes OK
Sep 30 18:53:11 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:53:11.392258) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Sep 30 18:53:11 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:53:11.394388) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Sep 30 18:53:11 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:53:11.394405) EVENT_LOG_v1 {"time_micros": 1759258391394400, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 18:53:11 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:53:11.394422) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 18:53:11 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 2179915, prev total WAL file size 2179915, number of live WAL files 2.
Sep 30 18:53:11 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:53:11 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:53:11.395543) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303037' seq:72057594037927935, type:22 .. '6D6772737461740032323538' seq:0, type:0; will stop at (end)
Sep 30 18:53:11 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 18:53:11 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(1318KB)], [128(12MB)]
Sep 30 18:53:11 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258391395589, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 14575338, "oldest_snapshot_seqno": -1}
Sep 30 18:53:11 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 7785 keys, 11497049 bytes, temperature: kUnknown
Sep 30 18:53:11 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258391449807, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 11497049, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11451410, "index_size": 25046, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19525, "raw_key_size": 206399, "raw_average_key_size": 26, "raw_value_size": 11318613, "raw_average_value_size": 1453, "num_data_blocks": 966, "num_entries": 7785, "num_filter_entries": 7785, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759258391, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:53:11 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:53:11 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:53:11.450292) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 11497049 bytes
Sep 30 18:53:11 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:53:11.451754) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 267.3 rd, 210.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 12.6 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(19.3) write-amplify(8.5) OK, records in: 8254, records dropped: 469 output_compression: NoCompression
Sep 30 18:53:11 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:53:11.451776) EVENT_LOG_v1 {"time_micros": 1759258391451766, "job": 78, "event": "compaction_finished", "compaction_time_micros": 54535, "compaction_time_cpu_micros": 37066, "output_level": 6, "num_output_files": 1, "total_output_size": 11497049, "num_input_records": 8254, "num_output_records": 7785, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 18:53:11 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:53:11 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258391452373, "job": 78, "event": "table_file_deletion", "file_number": 130}
Sep 30 18:53:11 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:53:11 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258391455836, "job": 78, "event": "table_file_deletion", "file_number": 128}
Sep 30 18:53:11 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:53:11.395463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:53:11 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:53:11.455959) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:53:11 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:53:11.455968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:53:11 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:53:11.455971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:53:11 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:53:11.455974) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:53:11 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:53:11.455982) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:53:11 compute-0 ceph-mon[73755]: pgmap v2250: 353 pgs: 353 active+clean; 41 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 596 B/s rd, 0 op/s
Sep 30 18:53:11 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:53:11 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:53:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2251: 353 pgs: 353 active+clean; 41 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 596 B/s rd, 0 op/s
Sep 30 18:53:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:53:12.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:12 compute-0 nova_compute[265391]: 2025-09-30 18:53:12.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:53:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:53:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:53:12.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:53:13 compute-0 nova_compute[265391]: 2025-09-30 18:53:13.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:53:13 compute-0 ceph-mon[73755]: pgmap v2251: 353 pgs: 353 active+clean; 41 MiB data, 422 MiB used, 40 GiB / 40 GiB avail; 596 B/s rd, 0 op/s
Sep 30 18:53:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2252: 353 pgs: 353 active+clean; 88 MiB data, 433 MiB used, 40 GiB / 40 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Sep 30 18:53:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:53:13.954Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:53:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:53:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:53:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:53:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:53:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:53:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:53:14.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:53:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:53:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:53:15.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:53:15 compute-0 ceph-mon[73755]: pgmap v2252: 353 pgs: 353 active+clean; 88 MiB data, 433 MiB used, 40 GiB / 40 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Sep 30 18:53:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2253: 353 pgs: 353 active+clean; 88 MiB data, 433 MiB used, 40 GiB / 40 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Sep 30 18:53:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:53:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:53:16.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:53:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:53:17.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:53:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:53:17.422Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:53:17 compute-0 nova_compute[265391]: 2025-09-30 18:53:17.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:53:17 compute-0 ceph-mon[73755]: pgmap v2253: 353 pgs: 353 active+clean; 88 MiB data, 433 MiB used, 40 GiB / 40 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Sep 30 18:53:17 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/4194731416' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:53:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2254: 353 pgs: 353 active+clean; 88 MiB data, 433 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:53:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:53:18.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:53:18] "GET /metrics HTTP/1.1" 200 46749 "" "Prometheus/2.51.0"
Sep 30 18:53:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:53:18] "GET /metrics HTTP/1.1" 200 46749 "" "Prometheus/2.51.0"
Sep 30 18:53:18 compute-0 nova_compute[265391]: 2025-09-30 18:53:18.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:53:18 compute-0 ceph-mon[73755]: pgmap v2254: 353 pgs: 353 active+clean; 88 MiB data, 433 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:53:18 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3814294932' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:53:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:53:18.961Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:53:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:53:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:53:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:53:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:53:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:53:19.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2255: 353 pgs: 353 active+clean; 88 MiB data, 443 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:53:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:53:20.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:20 compute-0 ceph-mon[73755]: pgmap v2255: 353 pgs: 353 active+clean; 88 MiB data, 443 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:53:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:53:21.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:53:21 compute-0 podman[366206]: 2025-09-30 18:53:21.526182143 +0000 UTC m=+0.058224170 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest)
Sep 30 18:53:21 compute-0 podman[366208]: 2025-09-30 18:53:21.534102088 +0000 UTC m=+0.067651514 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:53:21 compute-0 podman[366207]: 2025-09-30 18:53:21.571802816 +0000 UTC m=+0.109048498 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Sep 30 18:53:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2256: 353 pgs: 353 active+clean; 88 MiB data, 443 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:53:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:53:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:53:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:53:22.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:22 compute-0 nova_compute[265391]: 2025-09-30 18:53:22.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:53:22 compute-0 ceph-mon[73755]: pgmap v2256: 353 pgs: 353 active+clean; 88 MiB data, 443 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 18:53:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:53:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:53:23.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:23 compute-0 sudo[366277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:53:23 compute-0 sudo[366277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:53:23 compute-0 nova_compute[265391]: 2025-09-30 18:53:23.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:53:23 compute-0 sudo[366277]: pam_unix(sudo:session): session closed for user root
Sep 30 18:53:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2257: 353 pgs: 353 active+clean; 88 MiB data, 443 MiB used, 40 GiB / 40 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Sep 30 18:53:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:53:23.955Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:53:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:53:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:53:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:53:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:53:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:53:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:53:24.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:53:24 compute-0 ceph-mon[73755]: pgmap v2257: 353 pgs: 353 active+clean; 88 MiB data, 443 MiB used, 40 GiB / 40 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Sep 30 18:53:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:53:25.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2258: 353 pgs: 353 active+clean; 88 MiB data, 443 MiB used, 40 GiB / 40 GiB avail; 7.7 KiB/s rd, 13 KiB/s wr, 10 op/s
Sep 30 18:53:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:53:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:53:26.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:27 compute-0 ceph-mon[73755]: pgmap v2258: 353 pgs: 353 active+clean; 88 MiB data, 443 MiB used, 40 GiB / 40 GiB avail; 7.7 KiB/s rd, 13 KiB/s wr, 10 op/s
Sep 30 18:53:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:53:27.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:53:27.423Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:53:27 compute-0 nova_compute[265391]: 2025-09-30 18:53:27.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:53:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2259: 353 pgs: 353 active+clean; 88 MiB data, 443 MiB used, 40 GiB / 40 GiB avail; 7.7 KiB/s rd, 13 KiB/s wr, 10 op/s
Sep 30 18:53:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:53:28.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:53:28] "GET /metrics HTTP/1.1" 200 46765 "" "Prometheus/2.51.0"
Sep 30 18:53:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:53:28] "GET /metrics HTTP/1.1" 200 46765 "" "Prometheus/2.51.0"
Sep 30 18:53:28 compute-0 nova_compute[265391]: 2025-09-30 18:53:28.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:53:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:53:28.963Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:53:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:53:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:53:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:53:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:53:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:53:29.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:29 compute-0 ceph-mon[73755]: pgmap v2259: 353 pgs: 353 active+clean; 88 MiB data, 443 MiB used, 40 GiB / 40 GiB avail; 7.7 KiB/s rd, 13 KiB/s wr, 10 op/s
Sep 30 18:53:29 compute-0 podman[276673]: time="2025-09-30T18:53:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:53:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:53:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:53:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:53:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10326 "" "Go-http-client/1.1"
Sep 30 18:53:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2260: 353 pgs: 353 active+clean; 88 MiB data, 443 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 75 op/s
Sep 30 18:53:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:53:30.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:53:31.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:31 compute-0 ceph-mon[73755]: pgmap v2260: 353 pgs: 353 active+clean; 88 MiB data, 443 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 75 op/s
Sep 30 18:53:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:53:31 compute-0 openstack_network_exporter[279566]: ERROR   18:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:53:31 compute-0 openstack_network_exporter[279566]: ERROR   18:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:53:31 compute-0 openstack_network_exporter[279566]: ERROR   18:53:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:53:31 compute-0 openstack_network_exporter[279566]: ERROR   18:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:53:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:53:31 compute-0 openstack_network_exporter[279566]: ERROR   18:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:53:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:53:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2261: 353 pgs: 353 active+clean; 88 MiB data, 443 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 75 op/s
Sep 30 18:53:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:53:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:53:32.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:53:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 18:53:32 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3151049741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:53:32 compute-0 nova_compute[265391]: 2025-09-30 18:53:32.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:53:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:53:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:53:33.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:53:33 compute-0 ceph-mon[73755]: pgmap v2261: 353 pgs: 353 active+clean; 88 MiB data, 443 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 75 op/s
Sep 30 18:53:33 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3151049741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 18:53:33 compute-0 nova_compute[265391]: 2025-09-30 18:53:33.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:53:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2262: 353 pgs: 353 active+clean; 88 MiB data, 443 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 77 op/s
Sep 30 18:53:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:53:33.956Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:53:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:53:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:53:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:53:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:53:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:53:34.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:34 compute-0 podman[366314]: 2025-09-30 18:53:34.542320577 +0000 UTC m=+0.064527413 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid)
Sep 30 18:53:34 compute-0 podman[366315]: 2025-09-30 18:53:34.565305573 +0000 UTC m=+0.079052410 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.openshift.expose-services=, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, name=ubi9-minimal, version=9.6, container_name=openstack_network_exporter, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7)
Sep 30 18:53:34 compute-0 podman[366313]: 2025-09-30 18:53:34.568745752 +0000 UTC m=+0.092285633 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:53:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:53:35.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:35 compute-0 ceph-mon[73755]: pgmap v2262: 353 pgs: 353 active+clean; 88 MiB data, 443 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 77 op/s
Sep 30 18:53:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2263: 353 pgs: 353 active+clean; 88 MiB data, 443 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 67 op/s
Sep 30 18:53:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:53:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:53:36.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:53:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2235796451' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:53:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:53:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2235796451' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:53:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:53:37.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:37 compute-0 ceph-mon[73755]: pgmap v2263: 353 pgs: 353 active+clean; 88 MiB data, 443 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 67 op/s
Sep 30 18:53:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2235796451' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:53:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2235796451' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:53:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:53:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:53:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:53:37.424Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:53:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:53:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:53:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:53:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:53:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:53:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:53:37 compute-0 nova_compute[265391]: 2025-09-30 18:53:37.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:53:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2264: 353 pgs: 353 active+clean; 88 MiB data, 443 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 67 op/s
Sep 30 18:53:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:53:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:53:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:53:38.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:53:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:53:38] "GET /metrics HTTP/1.1" 200 46761 "" "Prometheus/2.51.0"
Sep 30 18:53:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:53:38] "GET /metrics HTTP/1.1" 200 46761 "" "Prometheus/2.51.0"
Sep 30 18:53:38 compute-0 nova_compute[265391]: 2025-09-30 18:53:38.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:53:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:53:38.963Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:53:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:53:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:53:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:53:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:53:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:53:39.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:39 compute-0 ceph-mon[73755]: pgmap v2264: 353 pgs: 353 active+clean; 88 MiB data, 443 MiB used, 40 GiB / 40 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 67 op/s
Sep 30 18:53:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2265: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Sep 30 18:53:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:53:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:53:40.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:53:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:53:41.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:41 compute-0 ceph-mon[73755]: pgmap v2265: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Sep 30 18:53:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:53:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2266: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 309 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Sep 30 18:53:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:53:42 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3891315952' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:53:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:53:42 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3891315952' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:53:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:53:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:53:42.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:53:42 compute-0 nova_compute[265391]: 2025-09-30 18:53:42.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:53:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:53:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:53:43.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:53:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:53:43 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2854564858' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:53:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:53:43 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2854564858' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:53:43 compute-0 ceph-mon[73755]: pgmap v2266: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 309 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Sep 30 18:53:43 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3891315952' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:53:43 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3891315952' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:53:43 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2854564858' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:53:43 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2854564858' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:53:43 compute-0 nova_compute[265391]: 2025-09-30 18:53:43.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:53:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2267: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 309 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Sep 30 18:53:43 compute-0 sudo[366385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:53:43 compute-0 sudo[366385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:53:43 compute-0 sudo[366385]: pam_unix(sudo:session): session closed for user root
Sep 30 18:53:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:53:43.957Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:53:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:53:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:53:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:53:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:53:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:53:44.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:53:45.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:45 compute-0 ceph-mon[73755]: pgmap v2267: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 309 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Sep 30 18:53:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2268: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:53:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:53:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:53:46.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:53:47.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:47 compute-0 ceph-mon[73755]: pgmap v2268: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:53:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:53:47.425Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:53:47 compute-0 nova_compute[265391]: 2025-09-30 18:53:47.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:53:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2269: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:53:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:53:48.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:53:48] "GET /metrics HTTP/1.1" 200 46761 "" "Prometheus/2.51.0"
Sep 30 18:53:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:53:48] "GET /metrics HTTP/1.1" 200 46761 "" "Prometheus/2.51.0"
Sep 30 18:53:48 compute-0 nova_compute[265391]: 2025-09-30 18:53:48.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:53:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:53:48.964Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:53:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:53:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:53:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:53:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:53:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:53:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:53:49.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:53:49 compute-0 ceph-mon[73755]: pgmap v2269: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:53:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2270: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:53:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:53:50.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:53:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:53:51.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:53:51 compute-0 ceph-mon[73755]: pgmap v2270: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 18:53:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:53:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2271: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 12 KiB/s wr, 0 op/s
Sep 30 18:53:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:53:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:53:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:53:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:53:52.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:53:52 compute-0 nova_compute[265391]: 2025-09-30 18:53:52.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:53:52 compute-0 podman[366418]: 2025-09-30 18:53:52.511327266 +0000 UTC m=+0.051797983 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:53:52 compute-0 podman[366420]: 2025-09-30 18:53:52.543585402 +0000 UTC m=+0.077465359 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Sep 30 18:53:52 compute-0 podman[366419]: 2025-09-30 18:53:52.559014572 +0000 UTC m=+0.097082417 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Sep 30 18:53:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:53:53.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:53 compute-0 ceph-mon[73755]: pgmap v2271: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 12 KiB/s wr, 0 op/s
Sep 30 18:53:53 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:53:53 compute-0 nova_compute[265391]: 2025-09-30 18:53:53.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:53:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2272: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 12 KiB/s wr, 1 op/s
Sep 30 18:53:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:53:53.958Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:53:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:53:53.958Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:53:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:53:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:53:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:53:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:53:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:53:54.352 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:53:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:53:54.352 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:53:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:53:54.352 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:53:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:53:54.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:53:55.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:55 compute-0 ceph-mon[73755]: pgmap v2272: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 12 KiB/s wr, 1 op/s
Sep 30 18:53:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2273: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:53:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:53:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:53:56.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:53:57.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:57 compute-0 ceph-mon[73755]: pgmap v2273: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:53:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:53:57.426Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:53:57 compute-0 nova_compute[265391]: 2025-09-30 18:53:57.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:53:57 compute-0 nova_compute[265391]: 2025-09-30 18:53:57.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:53:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:53:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2282182362' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:53:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:53:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2282182362' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:53:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2274: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:53:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2282182362' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:53:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2282182362' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:53:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:53:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:53:58.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:53:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:53:58] "GET /metrics HTTP/1.1" 200 46762 "" "Prometheus/2.51.0"
Sep 30 18:53:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:53:58] "GET /metrics HTTP/1.1" 200 46762 "" "Prometheus/2.51.0"
Sep 30 18:53:58 compute-0 nova_compute[265391]: 2025-09-30 18:53:58.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:53:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:53:58.964Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:53:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:53:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:53:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:53:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:53:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:53:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:53:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:53:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:53:59.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:53:59 compute-0 ceph-mon[73755]: pgmap v2274: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:53:59 compute-0 nova_compute[265391]: 2025-09-30 18:53:59.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:53:59 compute-0 podman[276673]: time="2025-09-30T18:53:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:53:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:53:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:53:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:53:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10318 "" "Go-http-client/1.1"
Sep 30 18:53:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2275: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 1023 B/s wr, 0 op/s
Sep 30 18:54:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:54:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:54:00.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:54:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:54:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:54:01.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:54:01 compute-0 ceph-mon[73755]: pgmap v2275: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 1023 B/s wr, 0 op/s
Sep 30 18:54:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:54:01 compute-0 openstack_network_exporter[279566]: ERROR   18:54:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:54:01 compute-0 openstack_network_exporter[279566]: ERROR   18:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:54:01 compute-0 openstack_network_exporter[279566]: ERROR   18:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:54:01 compute-0 openstack_network_exporter[279566]: ERROR   18:54:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:54:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:54:01 compute-0 openstack_network_exporter[279566]: ERROR   18:54:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:54:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:54:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2276: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 1023 B/s wr, 0 op/s
Sep 30 18:54:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:54:02.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:02 compute-0 nova_compute[265391]: 2025-09-30 18:54:02.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:54:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:54:03.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:03 compute-0 ceph-mon[73755]: pgmap v2276: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 1023 B/s wr, 0 op/s
Sep 30 18:54:03 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/335007788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:54:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2277: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 1023 B/s wr, 0 op/s
Sep 30 18:54:03 compute-0 nova_compute[265391]: 2025-09-30 18:54:03.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:54:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:54:03.959Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:54:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:54:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:54:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:54:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:54:04 compute-0 sudo[366499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:54:04 compute-0 sudo[366499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:54:04 compute-0 sudo[366499]: pam_unix(sudo:session): session closed for user root
Sep 30 18:54:04 compute-0 nova_compute[265391]: 2025-09-30 18:54:04.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:54:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:54:04.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:04 compute-0 nova_compute[265391]: 2025-09-30 18:54:04.940 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:54:04 compute-0 nova_compute[265391]: 2025-09-30 18:54:04.940 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:54:04 compute-0 nova_compute[265391]: 2025-09-30 18:54:04.940 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:54:04 compute-0 nova_compute[265391]: 2025-09-30 18:54:04.940 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:54:04 compute-0 nova_compute[265391]: 2025-09-30 18:54:04.940 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:54:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:54:05.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:54:05 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1895805963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:54:05 compute-0 ceph-mon[73755]: pgmap v2277: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 1023 B/s wr, 0 op/s
Sep 30 18:54:05 compute-0 nova_compute[265391]: 2025-09-30 18:54:05.390 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:54:05 compute-0 podman[366546]: 2025-09-30 18:54:05.541220441 +0000 UTC m=+0.085649221 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Sep 30 18:54:05 compute-0 podman[366547]: 2025-09-30 18:54:05.5681972 +0000 UTC m=+0.100998829 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=iscsid, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20250930)
Sep 30 18:54:05 compute-0 podman[366553]: 2025-09-30 18:54:05.568702923 +0000 UTC m=+0.098055853 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, name=ubi9-minimal, container_name=openstack_network_exporter, release=1755695350, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers)
Sep 30 18:54:05 compute-0 nova_compute[265391]: 2025-09-30 18:54:05.584 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:54:05 compute-0 nova_compute[265391]: 2025-09-30 18:54:05.585 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:54:05 compute-0 nova_compute[265391]: 2025-09-30 18:54:05.616 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.031s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:54:05 compute-0 nova_compute[265391]: 2025-09-30 18:54:05.616 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4297MB free_disk=39.94666290283203GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:54:05 compute-0 nova_compute[265391]: 2025-09-30 18:54:05.617 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:54:05 compute-0 nova_compute[265391]: 2025-09-30 18:54:05.617 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:54:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2278: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 1023 B/s wr, 0 op/s
Sep 30 18:54:06 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1895805963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:54:06 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/509061425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:54:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:54:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:54:06.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:06 compute-0 nova_compute[265391]: 2025-09-30 18:54:06.661 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:54:06 compute-0 nova_compute[265391]: 2025-09-30 18:54:06.662 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:54:05 up  1:57,  0 user,  load average: 0.13, 0.47, 0.69\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:54:06 compute-0 nova_compute[265391]: 2025-09-30 18:54:06.682 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:54:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:54:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/20938895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:54:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:54:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:54:07.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:54:07 compute-0 nova_compute[265391]: 2025-09-30 18:54:07.124 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:54:07 compute-0 nova_compute[265391]: 2025-09-30 18:54:07.130 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:54:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:54:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:54:07 compute-0 ceph-mon[73755]: pgmap v2278: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 1023 B/s wr, 0 op/s
Sep 30 18:54:07 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/20938895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:54:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:54:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:54:07.427Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:54:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:54:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:54:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:54:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:54:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:54:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:54:07 compute-0 nova_compute[265391]: 2025-09-30 18:54:07.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:54:07 compute-0 nova_compute[265391]: 2025-09-30 18:54:07.639 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:54:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2279: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 1023 B/s wr, 0 op/s
Sep 30 18:54:08 compute-0 nova_compute[265391]: 2025-09-30 18:54:08.152 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:54:08 compute-0 nova_compute[265391]: 2025-09-30 18:54:08.152 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.536s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011383372606954484 of space, bias 1.0, pg target 0.22766745213908968 quantized to 32 (current 32)
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 4.76930308654034e-07 of space, bias 1.0, pg target 9.538606173080681e-05 quantized to 32 (current 32)
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:54:08
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'images', 'default.rgw.control', '.nfs', 'backups']
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:54:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:54:08.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:54:08] "GET /metrics HTTP/1.1" 200 46764 "" "Prometheus/2.51.0"
Sep 30 18:54:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:54:08] "GET /metrics HTTP/1.1" 200 46764 "" "Prometheus/2.51.0"
Sep 30 18:54:08 compute-0 nova_compute[265391]: 2025-09-30 18:54:08.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:54:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:54:08.965Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:54:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:54:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:54:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:54:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:54:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:54:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:54:09.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:54:09 compute-0 ceph-mon[73755]: pgmap v2279: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 1023 B/s wr, 0 op/s
Sep 30 18:54:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2280: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 3.3 KiB/s wr, 1 op/s
Sep 30 18:54:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:54:10.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:54:11.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:11 compute-0 nova_compute[265391]: 2025-09-30 18:54:11.153 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:54:11 compute-0 nova_compute[265391]: 2025-09-30 18:54:11.154 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:54:11 compute-0 nova_compute[265391]: 2025-09-30 18:54:11.154 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:54:11 compute-0 nova_compute[265391]: 2025-09-30 18:54:11.154 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:54:11 compute-0 nova_compute[265391]: 2025-09-30 18:54:11.154 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:54:11 compute-0 nova_compute[265391]: 2025-09-30 18:54:11.154 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:54:11 compute-0 sudo[366634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:54:11 compute-0 sudo[366634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:54:11 compute-0 sudo[366634]: pam_unix(sudo:session): session closed for user root
Sep 30 18:54:11 compute-0 sudo[366659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Sep 30 18:54:11 compute-0 sudo[366659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:54:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:54:11 compute-0 ceph-mon[73755]: pgmap v2280: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 3.3 KiB/s wr, 1 op/s
Sep 30 18:54:11 compute-0 podman[366757]: 2025-09-30 18:54:11.857048148 +0000 UTC m=+0.056570638 container exec 28cb2903608cec8e7c5a39aa3f398cadea7d807beef2cb8cca333795d18a1341 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:54:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2281: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 2.3 KiB/s wr, 0 op/s
Sep 30 18:54:11 compute-0 podman[366757]: 2025-09-30 18:54:11.949966546 +0000 UTC m=+0.149489006 container exec_died 28cb2903608cec8e7c5a39aa3f398cadea7d807beef2cb8cca333795d18a1341 (image=quay.io/ceph/ceph:v19, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:54:12 compute-0 podman[366878]: 2025-09-30 18:54:12.413010536 +0000 UTC m=+0.056259759 container exec 0a64c261699b14850432928cca94e24743e97435fdc1d2cca08edcee1f870789 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:54:12 compute-0 podman[366878]: 2025-09-30 18:54:12.448757143 +0000 UTC m=+0.092006376 container exec_died 0a64c261699b14850432928cca94e24743e97435fdc1d2cca08edcee1f870789 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:54:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:54:12.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:12 compute-0 nova_compute[265391]: 2025-09-30 18:54:12.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:54:12 compute-0 podman[366969]: 2025-09-30 18:54:12.874839406 +0000 UTC m=+0.167452541 container exec 96c1a4d1476c3fe56b2b4855037bb3aa81f60f8974668b12bc71055b46c71430 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:54:12 compute-0 podman[366969]: 2025-09-30 18:54:12.894868085 +0000 UTC m=+0.187481170 container exec_died 96c1a4d1476c3fe56b2b4855037bb3aa81f60f8974668b12bc71055b46c71430 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Sep 30 18:54:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:54:13.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:13 compute-0 podman[367034]: 2025-09-30 18:54:13.287727567 +0000 UTC m=+0.131238612 container exec e1cdc51276e16f535030605d2a18f629e9fe31bf73cf0a7411e18214526b68e5 (image=quay.io/ceph/haproxy:2.3, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha)
Sep 30 18:54:13 compute-0 podman[367034]: 2025-09-30 18:54:13.327747104 +0000 UTC m=+0.171258129 container exec_died e1cdc51276e16f535030605d2a18f629e9fe31bf73cf0a7411e18214526b68e5 (image=quay.io/ceph/haproxy:2.3, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-haproxy-nfs-cephfs-compute-0-jcdnha)
Sep 30 18:54:13 compute-0 ceph-mon[73755]: pgmap v2281: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 2.3 KiB/s wr, 0 op/s
Sep 30 18:54:13 compute-0 podman[367101]: 2025-09-30 18:54:13.635974914 +0000 UTC m=+0.099862010 container exec b5b34e9f9a1e5f8a3081bfbcbc3f8d4401b0873cf75447793762597a4ea89ff4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, name=keepalived, build-date=2023-02-22T09:23:20, architecture=x86_64, vcs-type=git, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public)
Sep 30 18:54:13 compute-0 podman[367101]: 2025-09-30 18:54:13.674629605 +0000 UTC m=+0.138516681 container exec_died b5b34e9f9a1e5f8a3081bfbcbc3f8d4401b0873cf75447793762597a4ea89ff4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-keepalived-nfs-cephfs-compute-0-miadhc, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, distribution-scope=public, name=keepalived, release=1793, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, description=keepalived for Ceph)
Sep 30 18:54:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2282: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 2.3 KiB/s wr, 1 op/s
Sep 30 18:54:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:54:13.960Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:54:13 compute-0 nova_compute[265391]: 2025-09-30 18:54:13.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:54:13 compute-0 podman[367169]: 2025-09-30 18:54:13.990300307 +0000 UTC m=+0.081878853 container exec 316307af09cabd347a032cf06e9544ea621ab1be46fe52d1c226ab05d1d9e331 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:54:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:54:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:54:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:54:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:54:14 compute-0 podman[367169]: 2025-09-30 18:54:14.007755089 +0000 UTC m=+0.099333615 container exec_died 316307af09cabd347a032cf06e9544ea621ab1be46fe52d1c226ab05d1d9e331 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:54:14 compute-0 podman[367244]: 2025-09-30 18:54:14.27592395 +0000 UTC m=+0.076628577 container exec cb74c136339a4f8b8601bbbbeb0c8f6158501bcf17f6fc4369db4a5ec1b442a2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 18:54:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:54:14.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:14 compute-0 podman[367244]: 2025-09-30 18:54:14.485318467 +0000 UTC m=+0.286023134 container exec_died cb74c136339a4f8b8601bbbbeb0c8f6158501bcf17f6fc4369db4a5ec1b442a2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 18:54:14 compute-0 podman[367354]: 2025-09-30 18:54:14.94765015 +0000 UTC m=+0.065465878 container exec 262a601a228f752d0c9dbf2b0c2d1b05fe66f88f5c16c2fc8adb324868709b53 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:54:14 compute-0 podman[367354]: 2025-09-30 18:54:14.988121569 +0000 UTC m=+0.105937307 container exec_died 262a601a228f752d0c9dbf2b0c2d1b05fe66f88f5c16c2fc8adb324868709b53 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 18:54:15 compute-0 sudo[366659]: pam_unix(sudo:session): session closed for user root
Sep 30 18:54:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:54:15 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:54:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:54:15 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:54:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:54:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:54:15.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:54:15 compute-0 sudo[367397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:54:15 compute-0 sudo[367397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:54:15 compute-0 sudo[367397]: pam_unix(sudo:session): session closed for user root
Sep 30 18:54:15 compute-0 sudo[367422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:54:15 compute-0 sudo[367422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:54:15 compute-0 ceph-mon[73755]: pgmap v2282: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 2.3 KiB/s wr, 1 op/s
Sep 30 18:54:15 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:54:15 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:54:15 compute-0 sudo[367422]: pam_unix(sudo:session): session closed for user root
Sep 30 18:54:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:54:15 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:54:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:54:15 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:54:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2283: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 515 B/s rd, 2.3 KiB/s wr, 1 op/s
Sep 30 18:54:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:54:15 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:54:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:54:15 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:54:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:54:15 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:54:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:54:15 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:54:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:54:15 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:54:15 compute-0 sudo[367482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:54:15 compute-0 sudo[367482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:54:15 compute-0 sudo[367482]: pam_unix(sudo:session): session closed for user root
Sep 30 18:54:15 compute-0 sudo[367507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:54:15 compute-0 sudo[367507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:54:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:54:16 compute-0 podman[367574]: 2025-09-30 18:54:16.443313064 +0000 UTC m=+0.041485716 container create 153a562d4f0cd2f6716f16c25496b00bb3ec2c035b6906edcecbc5a0b87c047c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_lederberg, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:54:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:54:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:54:16.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:54:16 compute-0 systemd[1]: Started libpod-conmon-153a562d4f0cd2f6716f16c25496b00bb3ec2c035b6906edcecbc5a0b87c047c.scope.
Sep 30 18:54:16 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:54:16 compute-0 podman[367574]: 2025-09-30 18:54:16.427568326 +0000 UTC m=+0.025741008 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:54:16 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:54:16 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:54:16 compute-0 ceph-mon[73755]: pgmap v2283: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 515 B/s rd, 2.3 KiB/s wr, 1 op/s
Sep 30 18:54:16 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:54:16 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:54:16 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:54:16 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:54:16 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:54:16 compute-0 podman[367574]: 2025-09-30 18:54:16.536545261 +0000 UTC m=+0.134717923 container init 153a562d4f0cd2f6716f16c25496b00bb3ec2c035b6906edcecbc5a0b87c047c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Sep 30 18:54:16 compute-0 podman[367574]: 2025-09-30 18:54:16.544563698 +0000 UTC m=+0.142736350 container start 153a562d4f0cd2f6716f16c25496b00bb3ec2c035b6906edcecbc5a0b87c047c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_lederberg, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:54:16 compute-0 podman[367574]: 2025-09-30 18:54:16.548515021 +0000 UTC m=+0.146687713 container attach 153a562d4f0cd2f6716f16c25496b00bb3ec2c035b6906edcecbc5a0b87c047c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_lederberg, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Sep 30 18:54:16 compute-0 zealous_lederberg[367590]: 167 167
Sep 30 18:54:16 compute-0 systemd[1]: libpod-153a562d4f0cd2f6716f16c25496b00bb3ec2c035b6906edcecbc5a0b87c047c.scope: Deactivated successfully.
Sep 30 18:54:16 compute-0 podman[367574]: 2025-09-30 18:54:16.551795676 +0000 UTC m=+0.149968358 container died 153a562d4f0cd2f6716f16c25496b00bb3ec2c035b6906edcecbc5a0b87c047c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_lederberg, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:54:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f13e66b37db670a0cd9fa03106e04b05e5b5859bdc760f0a82dc997de22c2d0-merged.mount: Deactivated successfully.
Sep 30 18:54:16 compute-0 podman[367574]: 2025-09-30 18:54:16.591671069 +0000 UTC m=+0.189843721 container remove 153a562d4f0cd2f6716f16c25496b00bb3ec2c035b6906edcecbc5a0b87c047c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_lederberg, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:54:16 compute-0 systemd[1]: libpod-conmon-153a562d4f0cd2f6716f16c25496b00bb3ec2c035b6906edcecbc5a0b87c047c.scope: Deactivated successfully.
Sep 30 18:54:16 compute-0 podman[367615]: 2025-09-30 18:54:16.747647262 +0000 UTC m=+0.043839037 container create 77e0a842a367753a5ca8a2a46b46b7389a9b3774e20ed1c90adac45653d06275 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_vaughan, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:54:16 compute-0 systemd[1]: Started libpod-conmon-77e0a842a367753a5ca8a2a46b46b7389a9b3774e20ed1c90adac45653d06275.scope.
Sep 30 18:54:16 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:54:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2cf7d0c1ab4222b60b8f44534e7162a0a4a496f26b70c60b6ae663891a0edc3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:54:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2cf7d0c1ab4222b60b8f44534e7162a0a4a496f26b70c60b6ae663891a0edc3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:54:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2cf7d0c1ab4222b60b8f44534e7162a0a4a496f26b70c60b6ae663891a0edc3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:54:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2cf7d0c1ab4222b60b8f44534e7162a0a4a496f26b70c60b6ae663891a0edc3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:54:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2cf7d0c1ab4222b60b8f44534e7162a0a4a496f26b70c60b6ae663891a0edc3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:54:16 compute-0 podman[367615]: 2025-09-30 18:54:16.729696907 +0000 UTC m=+0.025888702 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:54:16 compute-0 podman[367615]: 2025-09-30 18:54:16.841931266 +0000 UTC m=+0.138123071 container init 77e0a842a367753a5ca8a2a46b46b7389a9b3774e20ed1c90adac45653d06275 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:54:16 compute-0 podman[367615]: 2025-09-30 18:54:16.850260422 +0000 UTC m=+0.146452197 container start 77e0a842a367753a5ca8a2a46b46b7389a9b3774e20ed1c90adac45653d06275 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_vaughan, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Sep 30 18:54:16 compute-0 podman[367615]: 2025-09-30 18:54:16.853055004 +0000 UTC m=+0.149246779 container attach 77e0a842a367753a5ca8a2a46b46b7389a9b3774e20ed1c90adac45653d06275 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Sep 30 18:54:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:54:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:54:17.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:54:17 compute-0 blissful_vaughan[367631]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:54:17 compute-0 blissful_vaughan[367631]: --> All data devices are unavailable
Sep 30 18:54:17 compute-0 systemd[1]: libpod-77e0a842a367753a5ca8a2a46b46b7389a9b3774e20ed1c90adac45653d06275.scope: Deactivated successfully.
Sep 30 18:54:17 compute-0 podman[367615]: 2025-09-30 18:54:17.215743124 +0000 UTC m=+0.511934909 container died 77e0a842a367753a5ca8a2a46b46b7389a9b3774e20ed1c90adac45653d06275 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Sep 30 18:54:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2cf7d0c1ab4222b60b8f44534e7162a0a4a496f26b70c60b6ae663891a0edc3-merged.mount: Deactivated successfully.
Sep 30 18:54:17 compute-0 podman[367615]: 2025-09-30 18:54:17.250333081 +0000 UTC m=+0.546524846 container remove 77e0a842a367753a5ca8a2a46b46b7389a9b3774e20ed1c90adac45653d06275 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_vaughan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Sep 30 18:54:17 compute-0 systemd[1]: libpod-conmon-77e0a842a367753a5ca8a2a46b46b7389a9b3774e20ed1c90adac45653d06275.scope: Deactivated successfully.
Sep 30 18:54:17 compute-0 sudo[367507]: pam_unix(sudo:session): session closed for user root
Sep 30 18:54:17 compute-0 sudo[367658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:54:17 compute-0 sudo[367658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:54:17 compute-0 sudo[367658]: pam_unix(sudo:session): session closed for user root
Sep 30 18:54:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:54:17.428Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:54:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:54:17.430Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:54:17 compute-0 sudo[367683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:54:17 compute-0 sudo[367683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:54:17 compute-0 nova_compute[265391]: 2025-09-30 18:54:17.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:54:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2284: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 515 B/s rd, 2.3 KiB/s wr, 1 op/s
Sep 30 18:54:17 compute-0 ceph-mon[73755]: pgmap v2284: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 515 B/s rd, 2.3 KiB/s wr, 1 op/s
Sep 30 18:54:17 compute-0 podman[367753]: 2025-09-30 18:54:17.935800167 +0000 UTC m=+0.048840087 container create 86cba4ebdf297d2bac1197de03df674dc7e53f5fb3917f8e116795a111af7149 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:54:17 compute-0 systemd[1]: Started libpod-conmon-86cba4ebdf297d2bac1197de03df674dc7e53f5fb3917f8e116795a111af7149.scope.
Sep 30 18:54:17 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:54:18 compute-0 podman[367753]: 2025-09-30 18:54:17.914985568 +0000 UTC m=+0.028025488 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:54:18 compute-0 podman[367753]: 2025-09-30 18:54:18.012664069 +0000 UTC m=+0.125703999 container init 86cba4ebdf297d2bac1197de03df674dc7e53f5fb3917f8e116795a111af7149 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:54:18 compute-0 podman[367753]: 2025-09-30 18:54:18.019861126 +0000 UTC m=+0.132901036 container start 86cba4ebdf297d2bac1197de03df674dc7e53f5fb3917f8e116795a111af7149 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_varahamihira, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:54:18 compute-0 podman[367753]: 2025-09-30 18:54:18.023753957 +0000 UTC m=+0.136793887 container attach 86cba4ebdf297d2bac1197de03df674dc7e53f5fb3917f8e116795a111af7149 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Sep 30 18:54:18 compute-0 charming_varahamihira[367770]: 167 167
Sep 30 18:54:18 compute-0 systemd[1]: libpod-86cba4ebdf297d2bac1197de03df674dc7e53f5fb3917f8e116795a111af7149.scope: Deactivated successfully.
Sep 30 18:54:18 compute-0 conmon[367770]: conmon 86cba4ebdf297d2bac11 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-86cba4ebdf297d2bac1197de03df674dc7e53f5fb3917f8e116795a111af7149.scope/container/memory.events
Sep 30 18:54:18 compute-0 podman[367753]: 2025-09-30 18:54:18.029131376 +0000 UTC m=+0.142171296 container died 86cba4ebdf297d2bac1197de03df674dc7e53f5fb3917f8e116795a111af7149 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_varahamihira, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:54:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd65285de1e18296ed4785512d8ca8ea198a14099fe1cff7eb30a0622e6c5ab7-merged.mount: Deactivated successfully.
Sep 30 18:54:18 compute-0 podman[367753]: 2025-09-30 18:54:18.067065159 +0000 UTC m=+0.180105069 container remove 86cba4ebdf297d2bac1197de03df674dc7e53f5fb3917f8e116795a111af7149 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_varahamihira, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:54:18 compute-0 systemd[1]: libpod-conmon-86cba4ebdf297d2bac1197de03df674dc7e53f5fb3917f8e116795a111af7149.scope: Deactivated successfully.
Sep 30 18:54:18 compute-0 podman[367793]: 2025-09-30 18:54:18.228101143 +0000 UTC m=+0.049464193 container create 6b8869d9a0807804bc088d85a6fe245e079ead6317ae4f19ea838ba3867f3f06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bartik, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Sep 30 18:54:18 compute-0 systemd[1]: Started libpod-conmon-6b8869d9a0807804bc088d85a6fe245e079ead6317ae4f19ea838ba3867f3f06.scope.
Sep 30 18:54:18 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:54:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7fc9ac3075f9e1da095fa865985d2950090b3828613f307f6504d6b0153e593/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:54:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7fc9ac3075f9e1da095fa865985d2950090b3828613f307f6504d6b0153e593/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:54:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7fc9ac3075f9e1da095fa865985d2950090b3828613f307f6504d6b0153e593/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:54:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7fc9ac3075f9e1da095fa865985d2950090b3828613f307f6504d6b0153e593/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:54:18 compute-0 podman[367793]: 2025-09-30 18:54:18.208912576 +0000 UTC m=+0.030275636 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:54:18 compute-0 podman[367793]: 2025-09-30 18:54:18.305445188 +0000 UTC m=+0.126808238 container init 6b8869d9a0807804bc088d85a6fe245e079ead6317ae4f19ea838ba3867f3f06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bartik, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:54:18 compute-0 podman[367793]: 2025-09-30 18:54:18.313898007 +0000 UTC m=+0.135261047 container start 6b8869d9a0807804bc088d85a6fe245e079ead6317ae4f19ea838ba3867f3f06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bartik, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 18:54:18 compute-0 podman[367793]: 2025-09-30 18:54:18.317022158 +0000 UTC m=+0.138385188 container attach 6b8869d9a0807804bc088d85a6fe245e079ead6317ae4f19ea838ba3867f3f06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:54:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:54:18.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:18 compute-0 priceless_bartik[367810]: {
Sep 30 18:54:18 compute-0 priceless_bartik[367810]:     "0": [
Sep 30 18:54:18 compute-0 priceless_bartik[367810]:         {
Sep 30 18:54:18 compute-0 priceless_bartik[367810]:             "devices": [
Sep 30 18:54:18 compute-0 priceless_bartik[367810]:                 "/dev/loop3"
Sep 30 18:54:18 compute-0 priceless_bartik[367810]:             ],
Sep 30 18:54:18 compute-0 priceless_bartik[367810]:             "lv_name": "ceph_lv0",
Sep 30 18:54:18 compute-0 priceless_bartik[367810]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:54:18 compute-0 priceless_bartik[367810]:             "lv_size": "21470642176",
Sep 30 18:54:18 compute-0 priceless_bartik[367810]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:54:18 compute-0 priceless_bartik[367810]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:54:18 compute-0 priceless_bartik[367810]:             "name": "ceph_lv0",
Sep 30 18:54:18 compute-0 priceless_bartik[367810]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:54:18 compute-0 priceless_bartik[367810]:             "tags": {
Sep 30 18:54:18 compute-0 priceless_bartik[367810]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:54:18 compute-0 priceless_bartik[367810]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:54:18 compute-0 priceless_bartik[367810]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:54:18 compute-0 priceless_bartik[367810]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:54:18 compute-0 priceless_bartik[367810]:                 "ceph.cluster_name": "ceph",
Sep 30 18:54:18 compute-0 priceless_bartik[367810]:                 "ceph.crush_device_class": "",
Sep 30 18:54:18 compute-0 priceless_bartik[367810]:                 "ceph.encrypted": "0",
Sep 30 18:54:18 compute-0 priceless_bartik[367810]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:54:18 compute-0 priceless_bartik[367810]:                 "ceph.osd_id": "0",
Sep 30 18:54:18 compute-0 priceless_bartik[367810]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:54:18 compute-0 priceless_bartik[367810]:                 "ceph.type": "block",
Sep 30 18:54:18 compute-0 priceless_bartik[367810]:                 "ceph.vdo": "0",
Sep 30 18:54:18 compute-0 priceless_bartik[367810]:                 "ceph.with_tpm": "0"
Sep 30 18:54:18 compute-0 priceless_bartik[367810]:             },
Sep 30 18:54:18 compute-0 priceless_bartik[367810]:             "type": "block",
Sep 30 18:54:18 compute-0 priceless_bartik[367810]:             "vg_name": "ceph_vg0"
Sep 30 18:54:18 compute-0 priceless_bartik[367810]:         }
Sep 30 18:54:18 compute-0 priceless_bartik[367810]:     ]
Sep 30 18:54:18 compute-0 priceless_bartik[367810]: }
Sep 30 18:54:18 compute-0 systemd[1]: libpod-6b8869d9a0807804bc088d85a6fe245e079ead6317ae4f19ea838ba3867f3f06.scope: Deactivated successfully.
Sep 30 18:54:18 compute-0 podman[367793]: 2025-09-30 18:54:18.606997304 +0000 UTC m=+0.428360364 container died 6b8869d9a0807804bc088d85a6fe245e079ead6317ae4f19ea838ba3867f3f06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325)
Sep 30 18:54:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7fc9ac3075f9e1da095fa865985d2950090b3828613f307f6504d6b0153e593-merged.mount: Deactivated successfully.
Sep 30 18:54:18 compute-0 podman[367793]: 2025-09-30 18:54:18.655279275 +0000 UTC m=+0.476642315 container remove 6b8869d9a0807804bc088d85a6fe245e079ead6317ae4f19ea838ba3867f3f06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:54:18 compute-0 systemd[1]: libpod-conmon-6b8869d9a0807804bc088d85a6fe245e079ead6317ae4f19ea838ba3867f3f06.scope: Deactivated successfully.
Sep 30 18:54:18 compute-0 sudo[367683]: pam_unix(sudo:session): session closed for user root
Sep 30 18:54:18 compute-0 sudo[367832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:54:18 compute-0 sudo[367832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:54:18 compute-0 sudo[367832]: pam_unix(sudo:session): session closed for user root
Sep 30 18:54:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:54:18] "GET /metrics HTTP/1.1" 200 46764 "" "Prometheus/2.51.0"
Sep 30 18:54:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:54:18] "GET /metrics HTTP/1.1" 200 46764 "" "Prometheus/2.51.0"
Sep 30 18:54:18 compute-0 sudo[367857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:54:18 compute-0 sudo[367857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:54:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:54:18.965Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:54:18 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:54:18 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1344533578' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:54:18 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:54:18 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1344533578' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:54:18 compute-0 nova_compute[265391]: 2025-09-30 18:54:18.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:54:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:54:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:54:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:54:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:54:19 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1344533578' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:54:19 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1344533578' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:54:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:54:19.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:19 compute-0 podman[367923]: 2025-09-30 18:54:19.313498195 +0000 UTC m=+0.055187381 container create c0261fc12f2652d5865f36a94e296e4e9d40e196bbef3e983eefc8702ac6c24c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Sep 30 18:54:19 compute-0 systemd[1]: Started libpod-conmon-c0261fc12f2652d5865f36a94e296e4e9d40e196bbef3e983eefc8702ac6c24c.scope.
Sep 30 18:54:19 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:54:19 compute-0 podman[367923]: 2025-09-30 18:54:19.293840836 +0000 UTC m=+0.035530062 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:54:19 compute-0 podman[367923]: 2025-09-30 18:54:19.38892014 +0000 UTC m=+0.130609336 container init c0261fc12f2652d5865f36a94e296e4e9d40e196bbef3e983eefc8702ac6c24c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:54:19 compute-0 podman[367923]: 2025-09-30 18:54:19.39741277 +0000 UTC m=+0.139101946 container start c0261fc12f2652d5865f36a94e296e4e9d40e196bbef3e983eefc8702ac6c24c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:54:19 compute-0 podman[367923]: 2025-09-30 18:54:19.400964662 +0000 UTC m=+0.142653828 container attach c0261fc12f2652d5865f36a94e296e4e9d40e196bbef3e983eefc8702ac6c24c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:54:19 compute-0 hopeful_satoshi[367939]: 167 167
Sep 30 18:54:19 compute-0 podman[367923]: 2025-09-30 18:54:19.403863737 +0000 UTC m=+0.145552923 container died c0261fc12f2652d5865f36a94e296e4e9d40e196bbef3e983eefc8702ac6c24c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_satoshi, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:54:19 compute-0 systemd[1]: libpod-c0261fc12f2652d5865f36a94e296e4e9d40e196bbef3e983eefc8702ac6c24c.scope: Deactivated successfully.
Sep 30 18:54:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-961b44051dade147f83d015595db4c4ebce4f2b35e636b40781f658fc6e464ae-merged.mount: Deactivated successfully.
Sep 30 18:54:19 compute-0 podman[367923]: 2025-09-30 18:54:19.438018902 +0000 UTC m=+0.179708068 container remove c0261fc12f2652d5865f36a94e296e4e9d40e196bbef3e983eefc8702ac6c24c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_satoshi, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:54:19 compute-0 systemd[1]: libpod-conmon-c0261fc12f2652d5865f36a94e296e4e9d40e196bbef3e983eefc8702ac6c24c.scope: Deactivated successfully.
Sep 30 18:54:19 compute-0 podman[367964]: 2025-09-30 18:54:19.600515253 +0000 UTC m=+0.037467402 container create f8646de670da7494d70b255661c32694f8a28e9360b55406e52f0a64ff8d0585 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 18:54:19 compute-0 systemd[1]: Started libpod-conmon-f8646de670da7494d70b255661c32694f8a28e9360b55406e52f0a64ff8d0585.scope.
Sep 30 18:54:19 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:54:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb9ddbae360d0606c8de59d83c9e1c2768cb7ce8f5510d565ee72f272480abf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:54:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb9ddbae360d0606c8de59d83c9e1c2768cb7ce8f5510d565ee72f272480abf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:54:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb9ddbae360d0606c8de59d83c9e1c2768cb7ce8f5510d565ee72f272480abf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:54:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb9ddbae360d0606c8de59d83c9e1c2768cb7ce8f5510d565ee72f272480abf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:54:19 compute-0 podman[367964]: 2025-09-30 18:54:19.584429556 +0000 UTC m=+0.021381725 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:54:19 compute-0 podman[367964]: 2025-09-30 18:54:19.691798099 +0000 UTC m=+0.128750268 container init f8646de670da7494d70b255661c32694f8a28e9360b55406e52f0a64ff8d0585 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_galileo, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:54:19 compute-0 podman[367964]: 2025-09-30 18:54:19.698436201 +0000 UTC m=+0.135388350 container start f8646de670da7494d70b255661c32694f8a28e9360b55406e52f0a64ff8d0585 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_galileo, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Sep 30 18:54:19 compute-0 podman[367964]: 2025-09-30 18:54:19.701689325 +0000 UTC m=+0.138641504 container attach f8646de670da7494d70b255661c32694f8a28e9360b55406e52f0a64ff8d0585 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Sep 30 18:54:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2285: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 2.4 KiB/s rd, 2.3 KiB/s wr, 3 op/s
Sep 30 18:54:20 compute-0 ceph-mon[73755]: pgmap v2285: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 2.4 KiB/s rd, 2.3 KiB/s wr, 3 op/s
Sep 30 18:54:20 compute-0 lvm[368055]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:54:20 compute-0 lvm[368055]: VG ceph_vg0 finished
Sep 30 18:54:20 compute-0 nice_galileo[367980]: {}
Sep 30 18:54:20 compute-0 systemd[1]: libpod-f8646de670da7494d70b255661c32694f8a28e9360b55406e52f0a64ff8d0585.scope: Deactivated successfully.
Sep 30 18:54:20 compute-0 systemd[1]: libpod-f8646de670da7494d70b255661c32694f8a28e9360b55406e52f0a64ff8d0585.scope: Consumed 1.162s CPU time.
Sep 30 18:54:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:54:20.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:20 compute-0 podman[368059]: 2025-09-30 18:54:20.498180449 +0000 UTC m=+0.029765132 container died f8646de670da7494d70b255661c32694f8a28e9360b55406e52f0a64ff8d0585 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_galileo, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:54:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-feb9ddbae360d0606c8de59d83c9e1c2768cb7ce8f5510d565ee72f272480abf-merged.mount: Deactivated successfully.
Sep 30 18:54:20 compute-0 podman[368059]: 2025-09-30 18:54:20.54529205 +0000 UTC m=+0.076876683 container remove f8646de670da7494d70b255661c32694f8a28e9360b55406e52f0a64ff8d0585 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_galileo, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Sep 30 18:54:20 compute-0 systemd[1]: libpod-conmon-f8646de670da7494d70b255661c32694f8a28e9360b55406e52f0a64ff8d0585.scope: Deactivated successfully.
Sep 30 18:54:20 compute-0 sudo[367857]: pam_unix(sudo:session): session closed for user root
Sep 30 18:54:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:54:20 compute-0 auditd[705]: Audit daemon rotating log files
Sep 30 18:54:20 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:54:20 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:54:20 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:54:20 compute-0 sudo[368074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:54:20 compute-0 sudo[368074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:54:20 compute-0 sudo[368074]: pam_unix(sudo:session): session closed for user root
Sep 30 18:54:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:54:21.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:21 compute-0 nova_compute[265391]: 2025-09-30 18:54:21.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:54:21 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:54:21.387 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 18:54:21 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:54:21.390 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 18:54:21 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:54:21.392 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 18:54:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:54:21 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:54:21 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:54:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2286: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 2.2 KiB/s rd, 2 op/s
Sep 30 18:54:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:54:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:54:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:54:22.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:22 compute-0 nova_compute[265391]: 2025-09-30 18:54:22.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:54:22 compute-0 ceph-mon[73755]: pgmap v2286: 353 pgs: 353 active+clean; 121 MiB data, 452 MiB used, 40 GiB / 40 GiB avail; 2.2 KiB/s rd, 2 op/s
Sep 30 18:54:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:54:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:54:23.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:23 compute-0 podman[368102]: 2025-09-30 18:54:23.508307247 +0000 UTC m=+0.052702477 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Sep 30 18:54:23 compute-0 podman[368104]: 2025-09-30 18:54:23.515951736 +0000 UTC m=+0.055584662 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:54:23 compute-0 podman[368103]: 2025-09-30 18:54:23.608674829 +0000 UTC m=+0.138538892 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS)
Sep 30 18:54:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2287: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 30 KiB/s rd, 1.4 KiB/s wr, 42 op/s
Sep 30 18:54:23 compute-0 ceph-mon[73755]: pgmap v2287: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 30 KiB/s rd, 1.4 KiB/s wr, 42 op/s
Sep 30 18:54:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:54:23.960Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:54:23 compute-0 nova_compute[265391]: 2025-09-30 18:54:23.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:54:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:54:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:54:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:54:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:54:24 compute-0 sudo[368171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:54:24 compute-0 sudo[368171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:54:24 compute-0 sudo[368171]: pam_unix(sudo:session): session closed for user root
Sep 30 18:54:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:54:24.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:54:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:54:25.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:54:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2288: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 30 KiB/s rd, 1.4 KiB/s wr, 41 op/s
Sep 30 18:54:25 compute-0 ceph-mon[73755]: pgmap v2288: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 30 KiB/s rd, 1.4 KiB/s wr, 41 op/s
Sep 30 18:54:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:54:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:54:26.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:26 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1484437556' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:54:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:54:27.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:54:27.431Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:54:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:54:27.432Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:54:27 compute-0 nova_compute[265391]: 2025-09-30 18:54:27.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:54:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2289: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 29 KiB/s rd, 1.4 KiB/s wr, 41 op/s
Sep 30 18:54:27 compute-0 ceph-mon[73755]: pgmap v2289: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 29 KiB/s rd, 1.4 KiB/s wr, 41 op/s
Sep 30 18:54:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:54:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:54:28.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:54:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:54:28] "GET /metrics HTTP/1.1" 200 46762 "" "Prometheus/2.51.0"
Sep 30 18:54:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:54:28] "GET /metrics HTTP/1.1" 200 46762 "" "Prometheus/2.51.0"
Sep 30 18:54:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:54:28.967Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:54:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:54:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:54:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:54:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:54:29 compute-0 nova_compute[265391]: 2025-09-30 18:54:29.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:54:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:54:29.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:54:29 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3446365784' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:54:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:54:29 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3446365784' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:54:29 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3446365784' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:54:29 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3446365784' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:54:29 compute-0 podman[276673]: time="2025-09-30T18:54:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:54:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:54:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:54:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:54:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10323 "" "Go-http-client/1.1"
Sep 30 18:54:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2290: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 30 KiB/s rd, 1.4 KiB/s wr, 41 op/s
Sep 30 18:54:30 compute-0 nova_compute[265391]: 2025-09-30 18:54:30.424 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:54:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:54:30.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:30 compute-0 ceph-mon[73755]: pgmap v2290: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 30 KiB/s rd, 1.4 KiB/s wr, 41 op/s
Sep 30 18:54:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:54:31.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:54:31 compute-0 openstack_network_exporter[279566]: ERROR   18:54:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:54:31 compute-0 openstack_network_exporter[279566]: ERROR   18:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:54:31 compute-0 openstack_network_exporter[279566]: ERROR   18:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:54:31 compute-0 openstack_network_exporter[279566]: ERROR   18:54:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:54:31 compute-0 openstack_network_exporter[279566]: ERROR   18:54:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:54:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2291: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 28 KiB/s rd, 1.4 KiB/s wr, 39 op/s
Sep 30 18:54:31 compute-0 ceph-mon[73755]: pgmap v2291: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 28 KiB/s rd, 1.4 KiB/s wr, 39 op/s
Sep 30 18:54:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:54:32.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:32 compute-0 nova_compute[265391]: 2025-09-30 18:54:32.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:54:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:54:33.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2292: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 1.7 KiB/s wr, 53 op/s
Sep 30 18:54:33 compute-0 ceph-mon[73755]: pgmap v2292: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 38 KiB/s rd, 1.7 KiB/s wr, 53 op/s
Sep 30 18:54:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:54:33.961Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:54:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:54:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:54:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:54:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:54:34 compute-0 nova_compute[265391]: 2025-09-30 18:54:34.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:54:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:54:34.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:54:35.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2293: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Sep 30 18:54:35 compute-0 ceph-mon[73755]: pgmap v2293: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Sep 30 18:54:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:54:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:54:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:54:36.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:54:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:54:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3518998138' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:54:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:54:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3518998138' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:54:36 compute-0 podman[368208]: 2025-09-30 18:54:36.552009959 +0000 UTC m=+0.088192257 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930)
Sep 30 18:54:36 compute-0 podman[368210]: 2025-09-30 18:54:36.56207476 +0000 UTC m=+0.098649868 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-type=git, version=9.6, release=1755695350, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.buildah.version=1.33.7, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., config_id=edpm, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Sep 30 18:54:36 compute-0 podman[368209]: 2025-09-30 18:54:36.57328002 +0000 UTC m=+0.104961601 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0)
Sep 30 18:54:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3518998138' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:54:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3518998138' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:54:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:54:37.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:54:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:54:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:54:37.432Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:54:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:54:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:54:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:54:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:54:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:54:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:54:37 compute-0 nova_compute[265391]: 2025-09-30 18:54:37.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:54:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2294: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Sep 30 18:54:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:54:37 compute-0 ceph-mon[73755]: pgmap v2294: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Sep 30 18:54:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:54:38.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:54:38] "GET /metrics HTTP/1.1" 200 46744 "" "Prometheus/2.51.0"
Sep 30 18:54:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:54:38] "GET /metrics HTTP/1.1" 200 46744 "" "Prometheus/2.51.0"
Sep 30 18:54:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:54:38.968Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:54:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:54:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:54:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:54:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:54:39 compute-0 nova_compute[265391]: 2025-09-30 18:54:39.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:54:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:54:39.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2295: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 11 KiB/s rd, 255 B/s wr, 14 op/s
Sep 30 18:54:39 compute-0 ceph-mon[73755]: pgmap v2295: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 11 KiB/s rd, 255 B/s wr, 14 op/s
Sep 30 18:54:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:54:40.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:54:41.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:54:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2296: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Sep 30 18:54:41 compute-0 ceph-mon[73755]: pgmap v2296: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Sep 30 18:54:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:54:42.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:42 compute-0 nova_compute[265391]: 2025-09-30 18:54:42.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:54:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:54:43.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2297: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 11 KiB/s rd, 255 B/s wr, 14 op/s
Sep 30 18:54:43 compute-0 ceph-mon[73755]: pgmap v2297: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 11 KiB/s rd, 255 B/s wr, 14 op/s
Sep 30 18:54:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:54:43.962Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:54:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:54:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:54:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:54:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:54:44 compute-0 nova_compute[265391]: 2025-09-30 18:54:44.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:54:44 compute-0 sudo[368274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:54:44 compute-0 sudo[368274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:54:44 compute-0 sudo[368274]: pam_unix(sudo:session): session closed for user root
Sep 30 18:54:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:54:44.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:54:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:54:45.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:54:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2298: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:54:45 compute-0 ceph-mon[73755]: pgmap v2298: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:54:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:54:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:54:46.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:54:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:54:47.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:54:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:54:47.433Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:54:47 compute-0 nova_compute[265391]: 2025-09-30 18:54:47.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:54:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2299: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:54:47 compute-0 ceph-mon[73755]: pgmap v2299: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:54:47 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Sep 30 18:54:47 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:54:47.942636) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 18:54:47 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Sep 30 18:54:47 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258487942676, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 1092, "num_deletes": 251, "total_data_size": 1827270, "memory_usage": 1861264, "flush_reason": "Manual Compaction"}
Sep 30 18:54:47 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Sep 30 18:54:47 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258487956000, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 1776786, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56109, "largest_seqno": 57200, "table_properties": {"data_size": 1771557, "index_size": 2688, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11675, "raw_average_key_size": 19, "raw_value_size": 1760943, "raw_average_value_size": 3015, "num_data_blocks": 119, "num_entries": 584, "num_filter_entries": 584, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759258391, "oldest_key_time": 1759258391, "file_creation_time": 1759258487, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:54:47 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 13427 microseconds, and 8376 cpu microseconds.
Sep 30 18:54:47 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:54:47 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:54:47.956058) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 1776786 bytes OK
Sep 30 18:54:47 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:54:47.956088) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Sep 30 18:54:47 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:54:47.957780) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Sep 30 18:54:47 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:54:47.957803) EVENT_LOG_v1 {"time_micros": 1759258487957795, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 18:54:47 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:54:47.957826) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 18:54:47 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 1822231, prev total WAL file size 1822231, number of live WAL files 2.
Sep 30 18:54:47 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:54:47 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:54:47.958826) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Sep 30 18:54:47 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 18:54:47 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(1735KB)], [131(10MB)]
Sep 30 18:54:47 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258487958867, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 13273835, "oldest_snapshot_seqno": -1}
Sep 30 18:54:48 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 7853 keys, 11260929 bytes, temperature: kUnknown
Sep 30 18:54:48 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258488024307, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 11260929, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11215084, "index_size": 25124, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19653, "raw_key_size": 208562, "raw_average_key_size": 26, "raw_value_size": 11081329, "raw_average_value_size": 1411, "num_data_blocks": 964, "num_entries": 7853, "num_filter_entries": 7853, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759258487, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:54:48 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:54:48 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:54:48.024589) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 11260929 bytes
Sep 30 18:54:48 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:54:48.025802) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.5 rd, 171.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 11.0 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(13.8) write-amplify(6.3) OK, records in: 8369, records dropped: 516 output_compression: NoCompression
Sep 30 18:54:48 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:54:48.025822) EVENT_LOG_v1 {"time_micros": 1759258488025813, "job": 80, "event": "compaction_finished", "compaction_time_micros": 65543, "compaction_time_cpu_micros": 46901, "output_level": 6, "num_output_files": 1, "total_output_size": 11260929, "num_input_records": 8369, "num_output_records": 7853, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 18:54:48 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:54:48 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258488026444, "job": 80, "event": "table_file_deletion", "file_number": 133}
Sep 30 18:54:48 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:54:48 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258488029516, "job": 80, "event": "table_file_deletion", "file_number": 131}
Sep 30 18:54:48 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:54:47.958728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:54:48 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:54:48.029584) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:54:48 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:54:48.029591) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:54:48 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:54:48.029594) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:54:48 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:54:48.029597) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:54:48 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:54:48.029601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:54:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:54:48.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:54:48] "GET /metrics HTTP/1.1" 200 46744 "" "Prometheus/2.51.0"
Sep 30 18:54:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:54:48] "GET /metrics HTTP/1.1" 200 46744 "" "Prometheus/2.51.0"
Sep 30 18:54:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:54:48.968Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:54:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:54:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:54:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:54:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:54:49 compute-0 nova_compute[265391]: 2025-09-30 18:54:49.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:54:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 18:54:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:54:49.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 18:54:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2300: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:54:49 compute-0 ceph-mon[73755]: pgmap v2300: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:54:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:54:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:54:50.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:54:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:54:51.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:54:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2301: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:54:52 compute-0 ceph-mon[73755]: pgmap v2301: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:54:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:54:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:54:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:54:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:54:52.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:54:52 compute-0 nova_compute[265391]: 2025-09-30 18:54:52.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:54:53 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:54:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:54:53.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2302: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:54:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:54:53.963Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:54:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:54:53.963Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:54:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:54:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:54:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:54:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:54:54 compute-0 ceph-mon[73755]: pgmap v2302: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:54:54 compute-0 nova_compute[265391]: 2025-09-30 18:54:54.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:54:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:54:54.353 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:54:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:54:54.353 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:54:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:54:54.353 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:54:54 compute-0 podman[368312]: 2025-09-30 18:54:54.514146682 +0000 UTC m=+0.052420900 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Sep 30 18:54:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:54:54.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:54 compute-0 podman[368310]: 2025-09-30 18:54:54.529332365 +0000 UTC m=+0.074557783 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Sep 30 18:54:54 compute-0 podman[368311]: 2025-09-30 18:54:54.566811477 +0000 UTC m=+0.109139740 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Sep 30 18:54:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:54:55.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2303: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:54:55 compute-0 ceph-mon[73755]: pgmap v2303: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:54:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:54:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:54:56.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:54:57.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:54:57.434Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:54:57 compute-0 nova_compute[265391]: 2025-09-30 18:54:57.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:54:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:54:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3136854427' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:54:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:54:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3136854427' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:54:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3136854427' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:54:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3136854427' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:54:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2304: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:54:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:54:58.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:58 compute-0 ceph-mon[73755]: pgmap v2304: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:54:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:54:58] "GET /metrics HTTP/1.1" 200 46745 "" "Prometheus/2.51.0"
Sep 30 18:54:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:54:58] "GET /metrics HTTP/1.1" 200 46745 "" "Prometheus/2.51.0"
Sep 30 18:54:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:54:58.970Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:54:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:54:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:54:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:54:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:54:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:54:59 compute-0 nova_compute[265391]: 2025-09-30 18:54:59.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:54:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:54:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:54:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:54:59.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:54:59 compute-0 nova_compute[265391]: 2025-09-30 18:54:59.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:54:59 compute-0 nova_compute[265391]: 2025-09-30 18:54:59.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:54:59 compute-0 podman[276673]: time="2025-09-30T18:54:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:54:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:54:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:54:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:54:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10319 "" "Go-http-client/1.1"
Sep 30 18:54:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2305: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:54:59 compute-0 ceph-mon[73755]: pgmap v2305: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:55:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:55:00.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:55:01.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:55:01 compute-0 openstack_network_exporter[279566]: ERROR   18:55:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:55:01 compute-0 openstack_network_exporter[279566]: ERROR   18:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:55:01 compute-0 openstack_network_exporter[279566]: ERROR   18:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:55:01 compute-0 openstack_network_exporter[279566]: ERROR   18:55:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:55:01 compute-0 openstack_network_exporter[279566]: ERROR   18:55:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:55:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2306: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:55:01 compute-0 ceph-mon[73755]: pgmap v2306: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:55:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:55:02.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:02 compute-0 nova_compute[265391]: 2025-09-30 18:55:02.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:55:03 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2940139615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:55:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:55:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:55:03.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:55:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2307: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:55:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:55:03.964Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:55:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:55:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:55:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:55:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:55:04 compute-0 nova_compute[265391]: 2025-09-30 18:55:04.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:55:04 compute-0 ceph-mon[73755]: pgmap v2307: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:55:04 compute-0 sudo[368387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:55:04 compute-0 sudo[368387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:55:04 compute-0 sudo[368387]: pam_unix(sudo:session): session closed for user root
Sep 30 18:55:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:55:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:55:04.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:55:05 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2993550184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:55:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:55:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:55:05.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:55:05 compute-0 nova_compute[265391]: 2025-09-30 18:55:05.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:55:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2308: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:55:05 compute-0 nova_compute[265391]: 2025-09-30 18:55:05.938 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:55:05 compute-0 nova_compute[265391]: 2025-09-30 18:55:05.938 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:55:05 compute-0 nova_compute[265391]: 2025-09-30 18:55:05.938 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:55:05 compute-0 nova_compute[265391]: 2025-09-30 18:55:05.938 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:55:05 compute-0 nova_compute[265391]: 2025-09-30 18:55:05.939 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:55:06 compute-0 ceph-mon[73755]: pgmap v2308: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:55:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:55:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:55:06 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3536301855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:55:06 compute-0 nova_compute[265391]: 2025-09-30 18:55:06.447 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
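The resource tracker above shells out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` and, a few lines later, reports free_disk close to the cluster's 40 GiB avail, so the JSON output is evidently the source of the hypervisor disk figures. A minimal sketch of running and parsing that same command; the key names ("stats", "total_bytes", "total_avail_bytes") are assumed from the standard `ceph df --format=json` layout, not taken from this log:

```python
# Run the same "ceph df" command the resource tracker logs above and pull
# cluster-wide capacity from its JSON output.
import json
import subprocess

cmd = ["ceph", "df", "--format=json", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

stats = json.loads(out)["stats"]
total_gib = stats["total_bytes"] / 1024 ** 3
avail_gib = stats["total_avail_bytes"] / 1024 ** 3
print(f"cluster: {total_gib:.1f} GiB total, {avail_gib:.1f} GiB avail")
```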
Sep 30 18:55:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:55:06.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:06 compute-0 nova_compute[265391]: 2025-09-30 18:55:06.625 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:55:06 compute-0 nova_compute[265391]: 2025-09-30 18:55:06.626 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:55:06 compute-0 nova_compute[265391]: 2025-09-30 18:55:06.645 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.019s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:55:06 compute-0 nova_compute[265391]: 2025-09-30 18:55:06.646 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4315MB free_disk=39.992183685302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:55:06 compute-0 nova_compute[265391]: 2025-09-30 18:55:06.646 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:55:06 compute-0 nova_compute[265391]: 2025-09-30 18:55:06.646 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:55:07 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3536301855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:55:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:55:07.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:55:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:55:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:55:07.435Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:55:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:55:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:55:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:55:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:55:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:55:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:55:07 compute-0 podman[368438]: 2025-09-30 18:55:07.557474171 +0000 UTC m=+0.090297631 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250930)
Sep 30 18:55:07 compute-0 podman[368437]: 2025-09-30 18:55:07.568297422 +0000 UTC m=+0.104001897 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Sep 30 18:55:07 compute-0 nova_compute[265391]: 2025-09-30 18:55:07.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:55:07 compute-0 podman[368439]: 2025-09-30 18:55:07.577251684 +0000 UTC m=+0.104135690 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, version=9.6, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Sep 30 18:55:07 compute-0 nova_compute[265391]: 2025-09-30 18:55:07.696 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:55:07 compute-0 nova_compute[265391]: 2025-09-30 18:55:07.696 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:55:06 up  1:58,  0 user,  load average: 0.27, 0.45, 0.66\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:55:07 compute-0 nova_compute[265391]: 2025-09-30 18:55:07.719 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Refreshing inventories for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:822
Sep 30 18:55:07 compute-0 nova_compute[265391]: 2025-09-30 18:55:07.739 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Updating ProviderTree inventory for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:786
Sep 30 18:55:07 compute-0 nova_compute[265391]: 2025-09-30 18:55:07.739 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Updating inventory in ProviderTree for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Sep 30 18:55:07 compute-0 nova_compute[265391]: 2025-09-30 18:55:07.757 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Refreshing aggregate associations for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2, aggregates: None _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:831
Sep 30 18:55:07 compute-0 nova_compute[265391]: 2025-09-30 18:55:07.785 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Refreshing trait associations for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2, traits: COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SOUND_MODEL_SB16,COMPUTE_ARCH_X86_64,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIRTIO_PACKED,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_TIS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOUND_MODEL_ICH9,COMPUTE_SOUND_MODEL_USB,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOUND_MODEL_PCSPK,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ADDRESS_SPACE_EMULATED,COMPUTE_RESCUE_BFV,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_STATELESS_FIRMWARE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_IGB,HW_ARCH_X86_64,COMPUTE_ACCELERATORS,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOUND_MODEL_ES1370,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_CRB,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_VIRTIO_FS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ADDRESS_SPACE_PASSTHROUGH,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_F16C,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOUND_MODEL_ICH6,COMPUTE_SOUND_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SOUND_MODEL_AC97,HW_CPU_X86_SSE42 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:843
Sep 30 18:55:07 compute-0 nova_compute[265391]: 2025-09-30 18:55:07.803 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:55:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2309: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:55:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:55:08 compute-0 ceph-mon[73755]: pgmap v2309: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:55:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:55:08 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1414025965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:55:08 compute-0 nova_compute[265391]: 2025-09-30 18:55:08.250 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:55:08 compute-0 nova_compute[265391]: 2025-09-30 18:55:08.257 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:55:08
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'default.rgw.log', 'backups', '.nfs', '.mgr', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'images']
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:55:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:55:08.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:08 compute-0 nova_compute[265391]: 2025-09-30 18:55:08.768 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:55:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:55:08] "GET /metrics HTTP/1.1" 200 46747 "" "Prometheus/2.51.0"
Sep 30 18:55:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:55:08] "GET /metrics HTTP/1.1" 200 46747 "" "Prometheus/2.51.0"
Sep 30 18:55:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:55:08.971Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:55:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:55:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:55:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:55:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:55:09 compute-0 nova_compute[265391]: 2025-09-30 18:55:09.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:55:09 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1414025965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:55:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:55:09.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:09 compute-0 nova_compute[265391]: 2025-09-30 18:55:09.279 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:55:09 compute-0 nova_compute[265391]: 2025-09-30 18:55:09.280 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.634s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:55:09 compute-0 nova_compute[265391]: 2025-09-30 18:55:09.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:55:09 compute-0 nova_compute[265391]: 2025-09-30 18:55:09.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:55:09 compute-0 nova_compute[265391]: 2025-09-30 18:55:09.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:55:09 compute-0 nova_compute[265391]: 2025-09-30 18:55:09.429 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:55:09 compute-0 nova_compute[265391]: 2025-09-30 18:55:09.429 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:55:09 compute-0 nova_compute[265391]: 2025-09-30 18:55:09.429 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11909
Sep 30 18:55:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2310: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:55:09 compute-0 nova_compute[265391]: 2025-09-30 18:55:09.940 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11918
Sep 30 18:55:10 compute-0 ceph-mon[73755]: pgmap v2310: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:55:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:55:10.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:55:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:55:11.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:55:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:55:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2311: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:55:11 compute-0 ceph-mon[73755]: pgmap v2311: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:55:11 compute-0 nova_compute[265391]: 2025-09-30 18:55:11.940 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:55:12 compute-0 nova_compute[265391]: 2025-09-30 18:55:12.423 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:55:12 compute-0 nova_compute[265391]: 2025-09-30 18:55:12.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:55:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:55:12.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:55:13.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2312: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:55:13 compute-0 ceph-mon[73755]: pgmap v2312: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:55:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:55:13.965Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:55:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:55:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:55:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:55:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:55:14 compute-0 nova_compute[265391]: 2025-09-30 18:55:14.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:55:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:55:14.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:55:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:55:15.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:55:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2313: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:55:15 compute-0 ceph-mon[73755]: pgmap v2313: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:55:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:55:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:55:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:55:16.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:55:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:55:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:55:17.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:55:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:55:17.436Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:55:17 compute-0 nova_compute[265391]: 2025-09-30 18:55:17.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:55:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2314: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:55:17 compute-0 ceph-mon[73755]: pgmap v2314: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:55:18 compute-0 nova_compute[265391]: 2025-09-30 18:55:18.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:55:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:55:18.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:55:18] "GET /metrics HTTP/1.1" 200 46747 "" "Prometheus/2.51.0"
Sep 30 18:55:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:55:18] "GET /metrics HTTP/1.1" 200 46747 "" "Prometheus/2.51.0"
Sep 30 18:55:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:55:18.972Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:55:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:55:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:55:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:55:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:55:19 compute-0 nova_compute[265391]: 2025-09-30 18:55:19.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:55:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:55:19.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2315: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:55:19 compute-0 ceph-mon[73755]: pgmap v2315: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:55:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:55:20.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:20 compute-0 sudo[368534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:55:20 compute-0 sudo[368534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:55:20 compute-0 sudo[368534]: pam_unix(sudo:session): session closed for user root
Sep 30 18:55:21 compute-0 sudo[368559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:55:21 compute-0 sudo[368559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:55:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:55:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:55:21.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:55:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:55:21 compute-0 sudo[368559]: pam_unix(sudo:session): session closed for user root
Sep 30 18:55:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:55:21 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:55:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:55:21 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:55:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:55:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2316: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 512 B/s rd, 0 op/s
Sep 30 18:55:21 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:55:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:55:21 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:55:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:55:21 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:55:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:55:21 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:55:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:55:21 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:55:21 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:55:21 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:55:21 compute-0 ceph-mon[73755]: pgmap v2316: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 512 B/s rd, 0 op/s
Sep 30 18:55:21 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:55:21 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:55:21 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:55:21 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:55:21 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:55:21 compute-0 sudo[368617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:55:21 compute-0 sudo[368617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:55:21 compute-0 sudo[368617]: pam_unix(sudo:session): session closed for user root
Sep 30 18:55:22 compute-0 sudo[368642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:55:22 compute-0 sudo[368642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:55:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:55:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:55:22 compute-0 podman[368707]: 2025-09-30 18:55:22.403105975 +0000 UTC m=+0.038484369 container create 5f28738c1ba6b839eb936e71cc53d3b91105eacc5074fc4d131ffad06f1720d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_borg, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Sep 30 18:55:22 compute-0 systemd[1]: Started libpod-conmon-5f28738c1ba6b839eb936e71cc53d3b91105eacc5074fc4d131ffad06f1720d0.scope.
Sep 30 18:55:22 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:55:22 compute-0 podman[368707]: 2025-09-30 18:55:22.481165978 +0000 UTC m=+0.116544382 container init 5f28738c1ba6b839eb936e71cc53d3b91105eacc5074fc4d131ffad06f1720d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_borg, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:55:22 compute-0 podman[368707]: 2025-09-30 18:55:22.386205637 +0000 UTC m=+0.021584071 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:55:22 compute-0 podman[368707]: 2025-09-30 18:55:22.487601795 +0000 UTC m=+0.122980199 container start 5f28738c1ba6b839eb936e71cc53d3b91105eacc5074fc4d131ffad06f1720d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_borg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:55:22 compute-0 podman[368707]: 2025-09-30 18:55:22.490310905 +0000 UTC m=+0.125689309 container attach 5f28738c1ba6b839eb936e71cc53d3b91105eacc5074fc4d131ffad06f1720d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_borg, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:55:22 compute-0 wizardly_borg[368723]: 167 167
Sep 30 18:55:22 compute-0 systemd[1]: libpod-5f28738c1ba6b839eb936e71cc53d3b91105eacc5074fc4d131ffad06f1720d0.scope: Deactivated successfully.
Sep 30 18:55:22 compute-0 podman[368707]: 2025-09-30 18:55:22.49357346 +0000 UTC m=+0.128951864 container died 5f28738c1ba6b839eb936e71cc53d3b91105eacc5074fc4d131ffad06f1720d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_borg, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 18:55:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffd91254ec61bf5284e9efb3b69c213c257d3d4266bdc3d8467c6ed769888813-merged.mount: Deactivated successfully.
Sep 30 18:55:22 compute-0 podman[368707]: 2025-09-30 18:55:22.523820763 +0000 UTC m=+0.159199167 container remove 5f28738c1ba6b839eb936e71cc53d3b91105eacc5074fc4d131ffad06f1720d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_borg, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:55:22 compute-0 systemd[1]: libpod-conmon-5f28738c1ba6b839eb936e71cc53d3b91105eacc5074fc4d131ffad06f1720d0.scope: Deactivated successfully.
Sep 30 18:55:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:55:22.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:22 compute-0 nova_compute[265391]: 2025-09-30 18:55:22.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:55:22 compute-0 podman[368748]: 2025-09-30 18:55:22.737084641 +0000 UTC m=+0.063156818 container create 7eb0efe12e934df685c5caa57bf7d17a08a814917d4e60f6acd2d194d8fc7eec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:55:22 compute-0 systemd[1]: Started libpod-conmon-7eb0efe12e934df685c5caa57bf7d17a08a814917d4e60f6acd2d194d8fc7eec.scope.
Sep 30 18:55:22 compute-0 podman[368748]: 2025-09-30 18:55:22.71237554 +0000 UTC m=+0.038447797 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:55:22 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:55:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/addb52f9961b545803a8e3f182762eda31a0375ce13c3dd9c06b8f9397bd2cba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:55:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/addb52f9961b545803a8e3f182762eda31a0375ce13c3dd9c06b8f9397bd2cba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:55:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/addb52f9961b545803a8e3f182762eda31a0375ce13c3dd9c06b8f9397bd2cba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:55:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/addb52f9961b545803a8e3f182762eda31a0375ce13c3dd9c06b8f9397bd2cba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:55:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/addb52f9961b545803a8e3f182762eda31a0375ce13c3dd9c06b8f9397bd2cba/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:55:22 compute-0 podman[368748]: 2025-09-30 18:55:22.849750961 +0000 UTC m=+0.175823158 container init 7eb0efe12e934df685c5caa57bf7d17a08a814917d4e60f6acd2d194d8fc7eec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_hertz, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Sep 30 18:55:22 compute-0 podman[368748]: 2025-09-30 18:55:22.868207749 +0000 UTC m=+0.194279926 container start 7eb0efe12e934df685c5caa57bf7d17a08a814917d4e60f6acd2d194d8fc7eec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_hertz, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:55:22 compute-0 podman[368748]: 2025-09-30 18:55:22.87130845 +0000 UTC m=+0.197380647 container attach 7eb0efe12e934df685c5caa57bf7d17a08a814917d4e60f6acd2d194d8fc7eec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_hertz, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Sep 30 18:55:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:55:23 compute-0 keen_hertz[368764]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:55:23 compute-0 keen_hertz[368764]: --> All data devices are unavailable
Sep 30 18:55:23 compute-0 systemd[1]: libpod-7eb0efe12e934df685c5caa57bf7d17a08a814917d4e60f6acd2d194d8fc7eec.scope: Deactivated successfully.
Sep 30 18:55:23 compute-0 podman[368748]: 2025-09-30 18:55:23.218839057 +0000 UTC m=+0.544911244 container died 7eb0efe12e934df685c5caa57bf7d17a08a814917d4e60f6acd2d194d8fc7eec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_hertz, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:55:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:55:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:55:23.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:55:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-addb52f9961b545803a8e3f182762eda31a0375ce13c3dd9c06b8f9397bd2cba-merged.mount: Deactivated successfully.
Sep 30 18:55:23 compute-0 podman[368748]: 2025-09-30 18:55:23.263400352 +0000 UTC m=+0.589472529 container remove 7eb0efe12e934df685c5caa57bf7d17a08a814917d4e60f6acd2d194d8fc7eec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_hertz, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:55:23 compute-0 systemd[1]: libpod-conmon-7eb0efe12e934df685c5caa57bf7d17a08a814917d4e60f6acd2d194d8fc7eec.scope: Deactivated successfully.
Sep 30 18:55:23 compute-0 sudo[368642]: pam_unix(sudo:session): session closed for user root
Sep 30 18:55:23 compute-0 sudo[368790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:55:23 compute-0 sudo[368790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:55:23 compute-0 sudo[368790]: pam_unix(sudo:session): session closed for user root
Sep 30 18:55:23 compute-0 sudo[368815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:55:23 compute-0 sudo[368815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:55:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2317: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 768 B/s rd, 0 op/s
Sep 30 18:55:23 compute-0 podman[368884]: 2025-09-30 18:55:23.866855603 +0000 UTC m=+0.043351645 container create 9b293b9942fbd96281466e61b06be7995813b18d57300202d443cede3e7619e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Sep 30 18:55:23 compute-0 systemd[1]: Started libpod-conmon-9b293b9942fbd96281466e61b06be7995813b18d57300202d443cede3e7619e3.scope.
Sep 30 18:55:23 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:55:23 compute-0 podman[368884]: 2025-09-30 18:55:23.84436957 +0000 UTC m=+0.020865632 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:55:23 compute-0 ceph-mon[73755]: pgmap v2317: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 768 B/s rd, 0 op/s
Sep 30 18:55:23 compute-0 podman[368884]: 2025-09-30 18:55:23.958828496 +0000 UTC m=+0.135324608 container init 9b293b9942fbd96281466e61b06be7995813b18d57300202d443cede3e7619e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_banzai, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Sep 30 18:55:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:55:23.966Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:55:23 compute-0 podman[368884]: 2025-09-30 18:55:23.968941588 +0000 UTC m=+0.145437630 container start 9b293b9942fbd96281466e61b06be7995813b18d57300202d443cede3e7619e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_banzai, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Sep 30 18:55:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:55:23.968Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:55:23 compute-0 modest_banzai[368902]: 167 167
Sep 30 18:55:23 compute-0 podman[368884]: 2025-09-30 18:55:23.973385893 +0000 UTC m=+0.149881945 container attach 9b293b9942fbd96281466e61b06be7995813b18d57300202d443cede3e7619e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_banzai, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid)
Sep 30 18:55:23 compute-0 systemd[1]: libpod-9b293b9942fbd96281466e61b06be7995813b18d57300202d443cede3e7619e3.scope: Deactivated successfully.
Sep 30 18:55:23 compute-0 conmon[368902]: conmon 9b293b9942fbd9628146 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9b293b9942fbd96281466e61b06be7995813b18d57300202d443cede3e7619e3.scope/container/memory.events
Sep 30 18:55:23 compute-0 podman[368884]: 2025-09-30 18:55:23.975864877 +0000 UTC m=+0.152360939 container died 9b293b9942fbd96281466e61b06be7995813b18d57300202d443cede3e7619e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_banzai, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:55:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e0b3e89e858a9cb8a4efa69b54268871c67e16a6e132a93d742d8eb343104a4-merged.mount: Deactivated successfully.
Sep 30 18:55:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:55:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:55:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:55:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:55:24 compute-0 podman[368884]: 2025-09-30 18:55:24.01107696 +0000 UTC m=+0.187573002 container remove 9b293b9942fbd96281466e61b06be7995813b18d57300202d443cede3e7619e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_banzai, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Sep 30 18:55:24 compute-0 systemd[1]: libpod-conmon-9b293b9942fbd96281466e61b06be7995813b18d57300202d443cede3e7619e3.scope: Deactivated successfully.
Sep 30 18:55:24 compute-0 nova_compute[265391]: 2025-09-30 18:55:24.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:55:24 compute-0 podman[368927]: 2025-09-30 18:55:24.180913142 +0000 UTC m=+0.043438567 container create d4a934f87a90fab32e4fccb21a513699a9119d528c3b36004049cc12718d54c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:55:24 compute-0 systemd[1]: Started libpod-conmon-d4a934f87a90fab32e4fccb21a513699a9119d528c3b36004049cc12718d54c0.scope.
Sep 30 18:55:24 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:55:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25948be6d2ab26182fb01079e97d8e03bccea9ebc18f0a117bb3c525cbda9bcc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:55:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25948be6d2ab26182fb01079e97d8e03bccea9ebc18f0a117bb3c525cbda9bcc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:55:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25948be6d2ab26182fb01079e97d8e03bccea9ebc18f0a117bb3c525cbda9bcc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:55:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25948be6d2ab26182fb01079e97d8e03bccea9ebc18f0a117bb3c525cbda9bcc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:55:24 compute-0 podman[368927]: 2025-09-30 18:55:24.162203837 +0000 UTC m=+0.024729282 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:55:24 compute-0 podman[368927]: 2025-09-30 18:55:24.259895038 +0000 UTC m=+0.122420493 container init d4a934f87a90fab32e4fccb21a513699a9119d528c3b36004049cc12718d54c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:55:24 compute-0 podman[368927]: 2025-09-30 18:55:24.268629085 +0000 UTC m=+0.131154500 container start d4a934f87a90fab32e4fccb21a513699a9119d528c3b36004049cc12718d54c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_elgamal, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:55:24 compute-0 podman[368927]: 2025-09-30 18:55:24.272660679 +0000 UTC m=+0.135186094 container attach d4a934f87a90fab32e4fccb21a513699a9119d528c3b36004049cc12718d54c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_elgamal, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Sep 30 18:55:24 compute-0 interesting_elgamal[368944]: {
Sep 30 18:55:24 compute-0 interesting_elgamal[368944]:     "0": [
Sep 30 18:55:24 compute-0 interesting_elgamal[368944]:         {
Sep 30 18:55:24 compute-0 interesting_elgamal[368944]:             "devices": [
Sep 30 18:55:24 compute-0 interesting_elgamal[368944]:                 "/dev/loop3"
Sep 30 18:55:24 compute-0 interesting_elgamal[368944]:             ],
Sep 30 18:55:24 compute-0 interesting_elgamal[368944]:             "lv_name": "ceph_lv0",
Sep 30 18:55:24 compute-0 interesting_elgamal[368944]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:55:24 compute-0 interesting_elgamal[368944]:             "lv_size": "21470642176",
Sep 30 18:55:24 compute-0 interesting_elgamal[368944]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:55:24 compute-0 interesting_elgamal[368944]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:55:24 compute-0 interesting_elgamal[368944]:             "name": "ceph_lv0",
Sep 30 18:55:24 compute-0 interesting_elgamal[368944]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:55:24 compute-0 interesting_elgamal[368944]:             "tags": {
Sep 30 18:55:24 compute-0 interesting_elgamal[368944]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:55:24 compute-0 interesting_elgamal[368944]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:55:24 compute-0 interesting_elgamal[368944]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:55:24 compute-0 interesting_elgamal[368944]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:55:24 compute-0 interesting_elgamal[368944]:                 "ceph.cluster_name": "ceph",
Sep 30 18:55:24 compute-0 interesting_elgamal[368944]:                 "ceph.crush_device_class": "",
Sep 30 18:55:24 compute-0 interesting_elgamal[368944]:                 "ceph.encrypted": "0",
Sep 30 18:55:24 compute-0 interesting_elgamal[368944]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:55:24 compute-0 interesting_elgamal[368944]:                 "ceph.osd_id": "0",
Sep 30 18:55:24 compute-0 interesting_elgamal[368944]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:55:24 compute-0 interesting_elgamal[368944]:                 "ceph.type": "block",
Sep 30 18:55:24 compute-0 interesting_elgamal[368944]:                 "ceph.vdo": "0",
Sep 30 18:55:24 compute-0 interesting_elgamal[368944]:                 "ceph.with_tpm": "0"
Sep 30 18:55:24 compute-0 interesting_elgamal[368944]:             },
Sep 30 18:55:24 compute-0 interesting_elgamal[368944]:             "type": "block",
Sep 30 18:55:24 compute-0 interesting_elgamal[368944]:             "vg_name": "ceph_vg0"
Sep 30 18:55:24 compute-0 interesting_elgamal[368944]:         }
Sep 30 18:55:24 compute-0 interesting_elgamal[368944]:     ]
Sep 30 18:55:24 compute-0 interesting_elgamal[368944]: }
Sep 30 18:55:24 compute-0 sudo[368953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:55:24 compute-0 sudo[368953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:55:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:55:24.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:24 compute-0 systemd[1]: libpod-d4a934f87a90fab32e4fccb21a513699a9119d528c3b36004049cc12718d54c0.scope: Deactivated successfully.
Sep 30 18:55:24 compute-0 sudo[368953]: pam_unix(sudo:session): session closed for user root
Sep 30 18:55:24 compute-0 podman[368927]: 2025-09-30 18:55:24.60681813 +0000 UTC m=+0.469343555 container died d4a934f87a90fab32e4fccb21a513699a9119d528c3b36004049cc12718d54c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:55:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-25948be6d2ab26182fb01079e97d8e03bccea9ebc18f0a117bb3c525cbda9bcc-merged.mount: Deactivated successfully.
Sep 30 18:55:24 compute-0 podman[368927]: 2025-09-30 18:55:24.646749715 +0000 UTC m=+0.509275140 container remove d4a934f87a90fab32e4fccb21a513699a9119d528c3b36004049cc12718d54c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_elgamal, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:55:24 compute-0 systemd[1]: libpod-conmon-d4a934f87a90fab32e4fccb21a513699a9119d528c3b36004049cc12718d54c0.scope: Deactivated successfully.
Sep 30 18:55:24 compute-0 sudo[368815]: pam_unix(sudo:session): session closed for user root
Sep 30 18:55:24 compute-0 podman[368978]: 2025-09-30 18:55:24.696911525 +0000 UTC m=+0.093089353 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Sep 30 18:55:24 compute-0 podman[368977]: 2025-09-30 18:55:24.700671773 +0000 UTC m=+0.096537803 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.build-date=20250930)
Sep 30 18:55:24 compute-0 podman[368979]: 2025-09-30 18:55:24.72140914 +0000 UTC m=+0.108701708 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Sep 30 18:55:24 compute-0 sudo[369052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:55:24 compute-0 sudo[369052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:55:24 compute-0 sudo[369052]: pam_unix(sudo:session): session closed for user root
Sep 30 18:55:24 compute-0 sudo[369082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:55:24 compute-0 sudo[369082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:55:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:55:25.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:25 compute-0 podman[369148]: 2025-09-30 18:55:25.264212429 +0000 UTC m=+0.051575408 container create 2f65463fa5f416ebf679ba7e14b5358a95dd7a0997e7a8a21f4c61becbd9c9ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_matsumoto, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Sep 30 18:55:25 compute-0 systemd[1]: Started libpod-conmon-2f65463fa5f416ebf679ba7e14b5358a95dd7a0997e7a8a21f4c61becbd9c9ab.scope.
Sep 30 18:55:25 compute-0 podman[369148]: 2025-09-30 18:55:25.240439263 +0000 UTC m=+0.027802272 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:55:25 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:55:25 compute-0 podman[369148]: 2025-09-30 18:55:25.358928324 +0000 UTC m=+0.146291333 container init 2f65463fa5f416ebf679ba7e14b5358a95dd7a0997e7a8a21f4c61becbd9c9ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_matsumoto, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Sep 30 18:55:25 compute-0 podman[369148]: 2025-09-30 18:55:25.366011927 +0000 UTC m=+0.153374906 container start 2f65463fa5f416ebf679ba7e14b5358a95dd7a0997e7a8a21f4c61becbd9c9ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_matsumoto, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:55:25 compute-0 podman[369148]: 2025-09-30 18:55:25.369702143 +0000 UTC m=+0.157065132 container attach 2f65463fa5f416ebf679ba7e14b5358a95dd7a0997e7a8a21f4c61becbd9c9ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:55:25 compute-0 bold_matsumoto[369164]: 167 167
Sep 30 18:55:25 compute-0 systemd[1]: libpod-2f65463fa5f416ebf679ba7e14b5358a95dd7a0997e7a8a21f4c61becbd9c9ab.scope: Deactivated successfully.
Sep 30 18:55:25 compute-0 podman[369148]: 2025-09-30 18:55:25.373399019 +0000 UTC m=+0.160762038 container died 2f65463fa5f416ebf679ba7e14b5358a95dd7a0997e7a8a21f4c61becbd9c9ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:55:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bda77ba398f2c6562e97fddb1c29528b38af01211edbffac6f98eb60caaa857-merged.mount: Deactivated successfully.
Sep 30 18:55:25 compute-0 podman[369148]: 2025-09-30 18:55:25.4139433 +0000 UTC m=+0.201306279 container remove 2f65463fa5f416ebf679ba7e14b5358a95dd7a0997e7a8a21f4c61becbd9c9ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Sep 30 18:55:25 compute-0 systemd[1]: libpod-conmon-2f65463fa5f416ebf679ba7e14b5358a95dd7a0997e7a8a21f4c61becbd9c9ab.scope: Deactivated successfully.
Sep 30 18:55:25 compute-0 podman[369188]: 2025-09-30 18:55:25.626383416 +0000 UTC m=+0.060251153 container create b82779d3c28bfb2ad8c5e246a2cf30273226c82c127b7264610e993c794bcd4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Sep 30 18:55:25 compute-0 systemd[1]: Started libpod-conmon-b82779d3c28bfb2ad8c5e246a2cf30273226c82c127b7264610e993c794bcd4f.scope.
Sep 30 18:55:25 compute-0 podman[369188]: 2025-09-30 18:55:25.605221667 +0000 UTC m=+0.039089444 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:55:25 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73bb98092e5dfa32ae14f645ac7775910f63798778de8d599555ab042ee06d87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73bb98092e5dfa32ae14f645ac7775910f63798778de8d599555ab042ee06d87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73bb98092e5dfa32ae14f645ac7775910f63798778de8d599555ab042ee06d87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73bb98092e5dfa32ae14f645ac7775910f63798778de8d599555ab042ee06d87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:55:25 compute-0 podman[369188]: 2025-09-30 18:55:25.725921005 +0000 UTC m=+0.159788762 container init b82779d3c28bfb2ad8c5e246a2cf30273226c82c127b7264610e993c794bcd4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_kare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Sep 30 18:55:25 compute-0 podman[369188]: 2025-09-30 18:55:25.734727714 +0000 UTC m=+0.168595451 container start b82779d3c28bfb2ad8c5e246a2cf30273226c82c127b7264610e993c794bcd4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_kare, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:55:25 compute-0 podman[369188]: 2025-09-30 18:55:25.744015054 +0000 UTC m=+0.177882811 container attach b82779d3c28bfb2ad8c5e246a2cf30273226c82c127b7264610e993c794bcd4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_kare, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:55:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2318: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 512 B/s rd, 0 op/s
Sep 30 18:55:25 compute-0 ceph-mon[73755]: pgmap v2318: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 512 B/s rd, 0 op/s
Sep 30 18:55:26 compute-0 lvm[369281]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:55:26 compute-0 lvm[369281]: VG ceph_vg0 finished
Sep 30 18:55:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:55:26 compute-0 happy_kare[369206]: {}
Sep 30 18:55:26 compute-0 systemd[1]: libpod-b82779d3c28bfb2ad8c5e246a2cf30273226c82c127b7264610e993c794bcd4f.scope: Deactivated successfully.
Sep 30 18:55:26 compute-0 systemd[1]: libpod-b82779d3c28bfb2ad8c5e246a2cf30273226c82c127b7264610e993c794bcd4f.scope: Consumed 1.141s CPU time.
Sep 30 18:55:26 compute-0 podman[369188]: 2025-09-30 18:55:26.515526361 +0000 UTC m=+0.949394098 container died b82779d3c28bfb2ad8c5e246a2cf30273226c82c127b7264610e993c794bcd4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_kare, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Sep 30 18:55:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-73bb98092e5dfa32ae14f645ac7775910f63798778de8d599555ab042ee06d87-merged.mount: Deactivated successfully.
Sep 30 18:55:26 compute-0 podman[369188]: 2025-09-30 18:55:26.570430274 +0000 UTC m=+1.004298011 container remove b82779d3c28bfb2ad8c5e246a2cf30273226c82c127b7264610e993c794bcd4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_kare, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Sep 30 18:55:26 compute-0 systemd[1]: libpod-conmon-b82779d3c28bfb2ad8c5e246a2cf30273226c82c127b7264610e993c794bcd4f.scope: Deactivated successfully.
Sep 30 18:55:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:55:26.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:26 compute-0 sudo[369082]: pam_unix(sudo:session): session closed for user root
Sep 30 18:55:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:55:26 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:55:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:55:26 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:55:26 compute-0 sudo[369298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:55:26 compute-0 sudo[369298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:55:26 compute-0 sudo[369298]: pam_unix(sudo:session): session closed for user root
Sep 30 18:55:26 compute-0 sshd-session[369323]: Accepted publickey for zuul from 192.168.122.10 port 33270 ssh2: ECDSA SHA256:COqAmbdBUMx+UyDa5XAop4Bi6qvwvYhJQ16rCseuHkQ
Sep 30 18:55:26 compute-0 systemd-logind[811]: New session 59 of user zuul.
Sep 30 18:55:26 compute-0 systemd[1]: Started Session 59 of User zuul.
Sep 30 18:55:26 compute-0 sshd-session[369323]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 18:55:26 compute-0 sudo[369327]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Sep 30 18:55:26 compute-0 sudo[369327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:55:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:55:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:55:27.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:55:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:55:27.437Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:55:27 compute-0 nova_compute[265391]: 2025-09-30 18:55:27.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:55:27 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:55:27 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:55:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2319: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 512 B/s rd, 0 op/s
Sep 30 18:55:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:55:28.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:28 compute-0 ceph-mon[73755]: pgmap v2319: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 512 B/s rd, 0 op/s
Sep 30 18:55:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:55:28] "GET /metrics HTTP/1.1" 200 46741 "" "Prometheus/2.51.0"
Sep 30 18:55:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:55:28] "GET /metrics HTTP/1.1" 200 46741 "" "Prometheus/2.51.0"
Sep 30 18:55:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:55:28.972Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:55:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:55:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:55:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:55:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:55:29 compute-0 nova_compute[265391]: 2025-09-30 18:55:29.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:55:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:55:29.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:29 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.18956 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:29 compute-0 podman[276673]: time="2025-09-30T18:55:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:55:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:55:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:55:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:55:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10321 "" "Go-http-client/1.1"
Sep 30 18:55:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2320: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 768 B/s rd, 0 op/s
Sep 30 18:55:29 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27345 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:29 compute-0 ceph-mon[73755]: from='client.18956 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:29 compute-0 ceph-mon[73755]: pgmap v2320: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 768 B/s rd, 0 op/s
Sep 30 18:55:30 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.18960 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:30 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27349 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "status"} v 0)
Sep 30 18:55:30 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/21381944' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Sep 30 18:55:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:55:30.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:30 compute-0 ceph-mon[73755]: from='client.27345 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:30 compute-0 ceph-mon[73755]: from='client.18960 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:30 compute-0 ceph-mon[73755]: from='client.27349 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:30 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/21381944' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Sep 30 18:55:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:55:31.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:55:31 compute-0 openstack_network_exporter[279566]: ERROR   18:55:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:55:31 compute-0 openstack_network_exporter[279566]: ERROR   18:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:55:31 compute-0 openstack_network_exporter[279566]: ERROR   18:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:55:31 compute-0 openstack_network_exporter[279566]: ERROR   18:55:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:55:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:55:31 compute-0 openstack_network_exporter[279566]: ERROR   18:55:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:55:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:55:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2321: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 512 B/s rd, 0 op/s
Sep 30 18:55:31 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2986207997' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Sep 30 18:55:31 compute-0 ceph-mon[73755]: pgmap v2321: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 512 B/s rd, 0 op/s
Sep 30 18:55:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:55:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:55:32.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:55:32 compute-0 nova_compute[265391]: 2025-09-30 18:55:32.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:55:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:55:33.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2322: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:55:33 compute-0 ceph-mon[73755]: pgmap v2322: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:55:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:55:33.970Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:55:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:55:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:55:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:55:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:55:34 compute-0 nova_compute[265391]: 2025-09-30 18:55:34.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:55:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:55:34.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:55:35.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2323: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:55:35 compute-0 ceph-mon[73755]: pgmap v2323: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:55:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:55:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:55:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1109390619' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:55:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:55:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1109390619' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:55:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:55:36.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1109390619' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:55:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1109390619' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:55:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:55:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:55:37.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:55:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:55:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:55:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:55:37.440Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:55:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:55:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:55:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:55:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:55:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:55:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:55:37 compute-0 nova_compute[265391]: 2025-09-30 18:55:37.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:55:37 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27359 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2324: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:55:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "report"} v 0)
Sep 30 18:55:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Sep 30 18:55:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:55:37 compute-0 ceph-mon[73755]: from='client.27359 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:37 compute-0 ceph-mon[73755]: pgmap v2324: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:55:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3782359562' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Sep 30 18:55:37 compute-0 ceph-mon[73755]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Sep 30 18:55:38 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27367 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:38 compute-0 sshd-session[369695]: Connection closed by 147.185.132.192 port 54516
Sep 30 18:55:38 compute-0 podman[369701]: 2025-09-30 18:55:38.553984366 +0000 UTC m=+0.077315515 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, name=ubi9-minimal, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, release=1755695350, config_id=edpm, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Sep 30 18:55:38 compute-0 podman[369700]: 2025-09-30 18:55:38.558906464 +0000 UTC m=+0.078924257 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Sep 30 18:55:38 compute-0 podman[369699]: 2025-09-30 18:55:38.576977442 +0000 UTC m=+0.110228658 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Sep 30 18:55:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:55:38.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:38 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27375 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:55:38] "GET /metrics HTTP/1.1" 200 46745 "" "Prometheus/2.51.0"
Sep 30 18:55:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:55:38] "GET /metrics HTTP/1.1" 200 46745 "" "Prometheus/2.51.0"
Sep 30 18:55:38 compute-0 ceph-mon[73755]: from='client.27367 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:38 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3577071448' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:55:38 compute-0 ceph-mon[73755]: from='client.27375 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:38 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/4270488756' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Sep 30 18:55:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:55:38.974Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
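Alertmanager repeatedly fails to deliver the ceph-dashboard webhook to compute-1 with a context-deadline timeout; the same error recurs throughout this window. A minimal diagnostic sketch, assuming only the URL quoted in the error (the 5-second timeout and empty alert list are illustrative):

```python
import json
import urllib.request

# URL taken verbatim from the Alertmanager error above; timeout and body are assumptions.
url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
req = urllib.request.Request(
    url,
    data=json.dumps({"alerts": []}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        print("receiver answered:", resp.status)
except Exception as exc:
    # A timeout or connection error here mirrors the "context deadline exceeded"
    # that Alertmanager reports after its two delivery attempts.
    print("webhook unreachable:", exc)
```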
Sep 30 18:55:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:55:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:55:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:55:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:55:39 compute-0 nova_compute[265391]: 2025-09-30 18:55:39.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:55:39 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27383 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:55:39.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:39 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27395 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2325: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:55:39 compute-0 ceph-mon[73755]: from='client.27383 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:39 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1463308040' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Sep 30 18:55:39 compute-0 ceph-mon[73755]: from='client.27395 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:39 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/664134983' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Sep 30 18:55:39 compute-0 ceph-mon[73755]: pgmap v2325: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:55:40 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27399 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "features"} v 0)
Sep 30 18:55:40 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Sep 30 18:55:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:55:40.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:41 compute-0 ceph-mon[73755]: from='client.27399 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:41 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3321485735' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Sep 30 18:55:41 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/196665390' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Sep 30 18:55:41 compute-0 ceph-mon[73755]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Sep 30 18:55:41 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/844676723' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Sep 30 18:55:41 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1698590819' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Sep 30 18:55:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:55:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:55:41.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:55:41 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27423 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:41 compute-0 ceph-mgr[74051]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Sep 30 18:55:41 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T18:55:41.394+0000 7faab05c1640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
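The mgr rejects the `insights` command because the module is not loaded, and the reply itself names the remedy (`ceph mgr module enable insights`). A minimal sketch of applying and then verifying that from a host with an admin keyring, offered only as one way to run the command the log already suggests:

```python
import subprocess

# Run the exact command suggested in the mgr reply above, then confirm the
# module is listed as enabled. Assumes the ceph CLI and client.admin keyring
# are available on this host (as the audit entries imply).
subprocess.run(["ceph", "mgr", "module", "enable", "insights"], check=True)
subprocess.run(["ceph", "mgr", "module", "ls"], check=True)
```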
Sep 30 18:55:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:55:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2326: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:55:42 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1737613640' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Sep 30 18:55:42 compute-0 ceph-mon[73755]: from='client.27423 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:42 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3597903295' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 18:55:42 compute-0 ceph-mon[73755]: pgmap v2326: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:55:42 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2751400103' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Sep 30 18:55:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr versions"} v 0)
Sep 30 18:55:42 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4247926951' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Sep 30 18:55:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:55:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:55:42.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:55:42 compute-0 nova_compute[265391]: 2025-09-30 18:55:42.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:55:42 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27445 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:43 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3416236125' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Sep 30 18:55:43 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/4247926951' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Sep 30 18:55:43 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/601627967' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Sep 30 18:55:43 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1231668273' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Sep 30 18:55:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:55:43.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:43 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27453 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:43 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19004 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2327: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:55:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:55:43.971Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:55:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:55:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:55:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:55:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:55:44 compute-0 nova_compute[265391]: 2025-09-30 18:55:44.003 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:55:44 compute-0 nova_compute[265391]: 2025-09-30 18:55:44.003 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.12/site-packages/nova/compute/manager.py:11947
Sep 30 18:55:44 compute-0 nova_compute[265391]: 2025-09-30 18:55:44.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:55:44 compute-0 ceph-mon[73755]: from='client.27445 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:44 compute-0 ceph-mon[73755]: from='client.27453 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:44 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2571273096' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Sep 30 18:55:44 compute-0 ceph-mon[73755]: from='client.19004 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:44 compute-0 ceph-mon[73755]: pgmap v2327: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:55:44 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2007948442' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Sep 30 18:55:44 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27469 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0)
Sep 30 18:55:44 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3374520111' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 18:55:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:55:44.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:44 compute-0 sudo[369816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:55:44 compute-0 sudo[369816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:55:44 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27477 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:44 compute-0 sudo[369816]: pam_unix(sudo:session): session closed for user root
Sep 30 18:55:45 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27485 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:45 compute-0 ceph-mon[73755]: from='client.27469 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:45 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3374520111' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 18:55:45 compute-0 ceph-mon[73755]: from='client.27477 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:45 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2364943987' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Sep 30 18:55:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:55:45.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:45 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27493 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2328: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:55:45 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27499 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:55:46 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27507 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:55:46 compute-0 ceph-mon[73755]: from='client.27485 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:46 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/4222235085' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Sep 30 18:55:46 compute-0 ceph-mon[73755]: from='client.27493 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:46 compute-0 ceph-mon[73755]: pgmap v2328: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:55:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:55:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:55:46.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:46 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27511 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:55:47 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27519 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:55:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:55:47.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:47 compute-0 ceph-mon[73755]: from='client.27499 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:55:47 compute-0 ceph-mon[73755]: from='client.27507 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:55:47 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2913047677' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Sep 30 18:55:47 compute-0 ceph-mon[73755]: from='client.27511 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:55:47 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/682867562' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Sep 30 18:55:47 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3173099720' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Sep 30 18:55:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:55:47.440Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:55:47 compute-0 nova_compute[265391]: 2025-09-30 18:55:47.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:55:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2329: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:55:48 compute-0 ceph-mon[73755]: from='client.27519 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:55:48 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1775577588' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Sep 30 18:55:48 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3224008869' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Sep 30 18:55:48 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1305591873' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Sep 30 18:55:48 compute-0 ceph-mon[73755]: pgmap v2329: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:55:48 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/768375213' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Sep 30 18:55:48 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2191495725' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Sep 30 18:55:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:55:48.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:55:48] "GET /metrics HTTP/1.1" 200 46745 "" "Prometheus/2.51.0"
Sep 30 18:55:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:55:48] "GET /metrics HTTP/1.1" 200 46745 "" "Prometheus/2.51.0"
Sep 30 18:55:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:55:48.975Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:55:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:55:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:55:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:55:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:55:49 compute-0 nova_compute[265391]: 2025-09-30 18:55:49.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:55:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:55:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:55:49.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:55:49 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/50140661' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Sep 30 18:55:49 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2754314125' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Sep 30 18:55:49 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/4178499829' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Sep 30 18:55:49 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3593172710' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Sep 30 18:55:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2330: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:55:49 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27575 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:50 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27583 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:50 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2675601677' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Sep 30 18:55:50 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/4061543543' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Sep 30 18:55:50 compute-0 ceph-mon[73755]: pgmap v2330: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:55:50 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2041355492' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Sep 30 18:55:50 compute-0 ceph-mon[73755]: from='client.27575 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:50 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2717751240' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Sep 30 18:55:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.003000078s ======
Sep 30 18:55:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:55:50.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000078s
Sep 30 18:55:50 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27587 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:55:51 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27595 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:55:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:55:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:55:51.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:55:51 compute-0 ceph-mon[73755]: from='client.27583 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:51 compute-0 ceph-mon[73755]: from='client.27587 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:55:51 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/201153571' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Sep 30 18:55:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:55:51 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27603 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:55:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2331: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:55:51 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27611 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:55:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:55:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:55:52 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27619 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:55:52 compute-0 ceph-mon[73755]: from='client.27595 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:55:52 compute-0 ceph-mon[73755]: from='client.27603 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:55:52 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/572766346' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Sep 30 18:55:52 compute-0 ceph-mon[73755]: pgmap v2331: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:55:52 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/768507186' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Sep 30 18:55:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:55:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:55:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:55:52.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:55:52 compute-0 nova_compute[265391]: 2025-09-30 18:55:52.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:55:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 18:55:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 18:55:52 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27627 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:55:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:55:53.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:53 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27641 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:55:53 compute-0 ceph-mon[73755]: from='client.27611 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:55:53 compute-0 ceph-mon[73755]: from='client.27619 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:55:53 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2866731057' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Sep 30 18:55:53 compute-0 ceph-mon[73755]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 18:55:53 compute-0 ceph-mon[73755]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 18:55:53 compute-0 ceph-mon[73755]: from='client.27627 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:55:53 compute-0 ceph-mon[73755]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 18:55:53 compute-0 ceph-mon[73755]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 18:55:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2332: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:55:53 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27651 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:55:53.971Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:55:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:55:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:55:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:55:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:55:54 compute-0 nova_compute[265391]: 2025-09-30 18:55:54.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:55:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:55:54.355 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:55:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:55:54.355 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:55:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:55:54.355 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
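The three DEBUG lines above are oslo_concurrency's lock tracing around the metadata agent's child-process check. A minimal sketch of the pattern that produces them, assuming the standard lockutils.synchronized decorator (the class and method names are copied from the log, the body is illustrative):

```python
from oslo_concurrency import lockutils


class ProcessMonitor:
    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes(self):
        # With oslo debug logging enabled, entering this method logs
        # "Acquiring lock ..." and "Lock ... acquired", and returning logs
        # "Lock ... released", matching the three journal lines above.
        pass


ProcessMonitor()._check_child_processes()
```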
Sep 30 18:55:54 compute-0 ceph-mon[73755]: from='client.27641 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:55:54 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1041952759' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Sep 30 18:55:54 compute-0 ceph-mon[73755]: pgmap v2332: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:55:54 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2106535500' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Sep 30 18:55:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:55:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:55:54.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:55:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:55:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:55:55.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:55:55 compute-0 ceph-mon[73755]: from='client.27651 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:55 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2244064885' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Sep 30 18:55:55 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2174272043' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Sep 30 18:55:55 compute-0 podman[369941]: 2025-09-30 18:55:55.525092058 +0000 UTC m=+0.058223180 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent)
Sep 30 18:55:55 compute-0 podman[369942]: 2025-09-30 18:55:55.557794475 +0000 UTC m=+0.089759087 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest)
Sep 30 18:55:55 compute-0 podman[369943]: 2025-09-30 18:55:55.563265037 +0000 UTC m=+0.090424254 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:55:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2333: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:55:56 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27671 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:55:56 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/265782300' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Sep 30 18:55:56 compute-0 ceph-mon[73755]: pgmap v2333: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:55:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:55:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:55:56.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:55:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:55:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:55:57.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:55:57 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27683 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:55:57.442Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:55:57 compute-0 ceph-mon[73755]: from='client.27671 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/828699626' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Sep 30 18:55:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1557476775' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Sep 30 18:55:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:55:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3677158116' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:55:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:55:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3677158116' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:55:57 compute-0 nova_compute[265391]: 2025-09-30 18:55:57.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:55:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2334: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:55:58 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27699 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:55:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:55:58.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:55:58 compute-0 ceph-mon[73755]: from='client.27683 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3677158116' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:55:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3677158116' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:55:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3639933351' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Sep 30 18:55:58 compute-0 ceph-mon[73755]: pgmap v2334: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:55:58 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27703 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:55:58] "GET /metrics HTTP/1.1" 200 46742 "" "Prometheus/2.51.0"
Sep 30 18:55:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:55:58] "GET /metrics HTTP/1.1" 200 46742 "" "Prometheus/2.51.0"
Sep 30 18:55:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:55:58.975Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:55:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:55:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:55:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:55:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:55:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:55:59 compute-0 nova_compute[265391]: 2025-09-30 18:55:59.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:55:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd dump"} v 0)
Sep 30 18:55:59 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/249781741' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Sep 30 18:55:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:55:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:55:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:55:59.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:55:59 compute-0 podman[276673]: time="2025-09-30T18:55:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:55:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:55:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:55:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:55:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10314 "" "Go-http-client/1.1"
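The two access-log lines above show the libpod REST API being queried over the podman socket (the podman_exporter container earlier in this window is configured with CONTAINER_HOST=unix:///run/podman/podman.sock). A minimal sketch issuing the same container-list call over that socket; the socket path and API version come from the log, everything else is illustrative:

```python
import http.client
import json
import socket


class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTP over a unix domain socket, enough for the libpod API."""

    def __init__(self, path):
        super().__init__("localhost")
        self._path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self._path)
        self.sock = sock


conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
for ctr in json.loads(resp.read()):
    print(ctr.get("Names"), ctr.get("State"))
```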
Sep 30 18:55:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2335: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:55:59 compute-0 ceph-mon[73755]: from='client.27699 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:59 compute-0 ceph-mon[73755]: from='client.27703 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:55:59 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/249781741' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Sep 30 18:55:59 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2497781289' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Sep 30 18:55:59 compute-0 nova_compute[265391]: 2025-09-30 18:55:59.942 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:55:59 compute-0 nova_compute[265391]: 2025-09-30 18:55:59.943 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:56:00 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19048 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:00 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27717 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:56:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:56:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:56:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:56:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:56:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:56:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:56:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:56:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:56:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:56:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:56:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:56:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:56:00.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:00 compute-0 ceph-mon[73755]: pgmap v2335: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:56:00 compute-0 ceph-mon[73755]: from='client.19048 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:00 compute-0 ceph-mon[73755]: from='client.27717 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Sep 30 18:56:00 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3242465448' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Sep 30 18:56:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:56:01.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:56:01 compute-0 openstack_network_exporter[279566]: ERROR   18:56:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:56:01 compute-0 openstack_network_exporter[279566]: ERROR   18:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:56:01 compute-0 openstack_network_exporter[279566]: ERROR   18:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:56:01 compute-0 openstack_network_exporter[279566]: ERROR   18:56:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:56:01 compute-0 openstack_network_exporter[279566]: ERROR   18:56:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:56:01 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27727 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2336: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:56:01 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3242465448' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Sep 30 18:56:01 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/674035074' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Sep 30 18:56:01 compute-0 ceph-mon[73755]: from='client.27727 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:01 compute-0 ceph-mon[73755]: pgmap v2336: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:56:02 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19056 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:56:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:56:02.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:56:02 compute-0 nova_compute[265391]: 2025-09-30 18:56:02.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:56:02 compute-0 ceph-mon[73755]: from='client.19056 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:02 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3044538243' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Sep 30 18:56:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:56:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:56:03.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:56:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Sep 30 18:56:03 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/399368709' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Sep 30 18:56:03 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19062 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2337: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:56:03 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2190048606' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Sep 30 18:56:03 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/399368709' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Sep 30 18:56:03 compute-0 ceph-mon[73755]: from='client.19062 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:03 compute-0 ceph-mon[73755]: pgmap v2337: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:56:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:56:03.972Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:56:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:56:03.973Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:56:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:56:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:56:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:56:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:56:04 compute-0 nova_compute[265391]: 2025-09-30 18:56:04.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:56:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:56:04.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:04 compute-0 sudo[370051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:56:04 compute-0 sudo[370051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:56:04 compute-0 sudo[370051]: pam_unix(sudo:session): session closed for user root
Sep 30 18:56:04 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/4021390343' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Sep 30 18:56:04 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2632073261' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Sep 30 18:56:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:56:05.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2338: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:56:05 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2157336329' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Sep 30 18:56:05 compute-0 ceph-mon[73755]: pgmap v2338: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:56:05 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2821228972' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Sep 30 18:56:06 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27767 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:56:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:56:06.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:07 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2724288933' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:56:07 compute-0 ceph-mon[73755]: from='client.27767 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:07 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1638146919' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Sep 30 18:56:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:56:07.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:56:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:56:07 compute-0 nova_compute[265391]: 2025-09-30 18:56:07.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:56:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:56:07.442Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:56:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:56:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:56:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:56:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:56:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:56:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:56:07 compute-0 nova_compute[265391]: 2025-09-30 18:56:07.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:56:07 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19076 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2339: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:56:07 compute-0 nova_compute[265391]: 2025-09-30 18:56:07.984 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:56:07 compute-0 nova_compute[265391]: 2025-09-30 18:56:07.985 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:56:07 compute-0 nova_compute[265391]: 2025-09-30 18:56:07.986 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:56:07 compute-0 nova_compute[265391]: 2025-09-30 18:56:07.986 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:56:07 compute-0 nova_compute[265391]: 2025-09-30 18:56:07.987 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:56:08 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2011182209' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Sep 30 18:56:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:56:08 compute-0 ceph-mon[73755]: from='client.19076 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:08 compute-0 ceph-mon[73755]: pgmap v2339: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:56:08
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', '.nfs', '.mgr', 'vms', '.rgw.root', 'default.rgw.control']
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:56:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:56:08 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3085274472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:56:08 compute-0 nova_compute[265391]: 2025-09-30 18:56:08.458 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27789 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:08 compute-0 nova_compute[265391]: 2025-09-30 18:56:08.624 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:56:08 compute-0 nova_compute[265391]: 2025-09-30 18:56:08.625 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:56:08 compute-0 nova_compute[265391]: 2025-09-30 18:56:08.642 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.017s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:56:08 compute-0 nova_compute[265391]: 2025-09-30 18:56:08.645 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4182MB free_disk=39.992183685302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:56:08 compute-0 nova_compute[265391]: 2025-09-30 18:56:08.646 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:56:08 compute-0 nova_compute[265391]: 2025-09-30 18:56:08.646 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:56:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:56:08.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:56:08] "GET /metrics HTTP/1.1" 200 46746 "" "Prometheus/2.51.0"
Sep 30 18:56:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:56:08] "GET /metrics HTTP/1.1" 200 46746 "" "Prometheus/2.51.0"
Sep 30 18:56:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:56:08.976Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:56:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:56:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:56:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:56:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:56:09 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27793 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:09 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/227151964' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Sep 30 18:56:09 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/4084825127' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:56:09 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3085274472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:56:09 compute-0 ceph-mon[73755]: from='client.27789 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:09 compute-0 nova_compute[265391]: 2025-09-30 18:56:09.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:56:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:56:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:56:09.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:56:09 compute-0 podman[370152]: 2025-09-30 18:56:09.551183493 +0000 UTC m=+0.092314194 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Sep 30 18:56:09 compute-0 podman[370154]: 2025-09-30 18:56:09.552011275 +0000 UTC m=+0.082098939 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.tags=minimal rhel9, version=9.6, release=1755695350)
Sep 30 18:56:09 compute-0 podman[370153]: 2025-09-30 18:56:09.590252006 +0000 UTC m=+0.121163522 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true)
Sep 30 18:56:09 compute-0 nova_compute[265391]: 2025-09-30 18:56:09.749 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:56:09 compute-0 nova_compute[265391]: 2025-09-30 18:56:09.749 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:56:08 up  1:59,  0 user,  load average: 0.43, 0.48, 0.66\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:56:09 compute-0 nova_compute[265391]: 2025-09-30 18:56:09.852 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:56:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2340: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:56:10 compute-0 ceph-mon[73755]: from='client.27793 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:10 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/4202097990' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Sep 30 18:56:10 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3940667694' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Sep 30 18:56:10 compute-0 ceph-mon[73755]: pgmap v2340: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:56:10 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27805 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:56:10 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1600127837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:56:10 compute-0 nova_compute[265391]: 2025-09-30 18:56:10.313 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:56:10 compute-0 nova_compute[265391]: 2025-09-30 18:56:10.321 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:56:10 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27809 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:10 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:10 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:56:10 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:10 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:56:10 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:10 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:56:10 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:10 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:56:10 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:10 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:56:10 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:10 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:56:10 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:10 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:56:10 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:10 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:56:10 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:10 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:56:10 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:10 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:56:10 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:10 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:56:10 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:10 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:56:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:56:10.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:10 compute-0 nova_compute[265391]: 2025-09-30 18:56:10.830 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:56:11 compute-0 ceph-mon[73755]: from='client.27805 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:11 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1600127837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:56:11 compute-0 ceph-mon[73755]: from='client.27809 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:11 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/794127967' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Sep 30 18:56:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:56:11.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:11 compute-0 nova_compute[265391]: 2025-09-30 18:56:11.347 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:56:11 compute-0 nova_compute[265391]: 2025-09-30 18:56:11.347 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.701s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:56:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:56:11 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27821 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2341: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:56:12 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3392779860' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Sep 30 18:56:12 compute-0 ceph-mon[73755]: from='client.27821 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:12 compute-0 ceph-mon[73755]: pgmap v2341: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:56:12 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27825 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:56:12.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:12 compute-0 nova_compute[265391]: 2025-09-30 18:56:12.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:56:13 compute-0 ceph-mon[73755]: from='client.27825 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:13 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1487946791' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Sep 30 18:56:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:56:13.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2342: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:56:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:56:13.973Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:56:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:56:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:56:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:56:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:56:14 compute-0 nova_compute[265391]: 2025-09-30 18:56:14.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:56:14 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/514176730' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Sep 30 18:56:14 compute-0 ceph-mon[73755]: pgmap v2342: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:56:14 compute-0 nova_compute[265391]: 2025-09-30 18:56:14.348 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:56:14 compute-0 nova_compute[265391]: 2025-09-30 18:56:14.349 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:56:14 compute-0 nova_compute[265391]: 2025-09-30 18:56:14.349 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:56:14 compute-0 nova_compute[265391]: 2025-09-30 18:56:14.349 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:56:14 compute-0 nova_compute[265391]: 2025-09-30 18:56:14.349 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:56:14 compute-0 nova_compute[265391]: 2025-09-30 18:56:14.349 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:56:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:56:14.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:56:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:56:15.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:56:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2343: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:56:15 compute-0 ceph-mon[73755]: pgmap v2343: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:56:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:56:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:56:16.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:56:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:56:17.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:56:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:56:17.444Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:56:17 compute-0 nova_compute[265391]: 2025-09-30 18:56:17.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:56:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2344: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:56:17 compute-0 ceph-mon[73755]: pgmap v2344: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:56:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:56:18.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:56:18] "GET /metrics HTTP/1.1" 200 46746 "" "Prometheus/2.51.0"
Sep 30 18:56:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:56:18] "GET /metrics HTTP/1.1" 200 46746 "" "Prometheus/2.51.0"
Sep 30 18:56:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:56:18.977Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:56:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:56:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:56:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:56:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:56:19 compute-0 nova_compute[265391]: 2025-09-30 18:56:19.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:56:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:56:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:56:19.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:56:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2345: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:56:19 compute-0 ceph-mon[73755]: pgmap v2345: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:56:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:56:20.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:56:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:56:21.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:56:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:56:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2346: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:56:21 compute-0 ceph-mon[73755]: pgmap v2346: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:56:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:56:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:56:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:56:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:56:22.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:56:22 compute-0 nova_compute[265391]: 2025-09-30 18:56:22.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:56:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:56:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:56:23.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:23 compute-0 sshd-session[370009]: Received disconnect from 154.125.120.7 port 45264:11: Bye Bye [preauth]
Sep 30 18:56:23 compute-0 sshd-session[370009]: Disconnected from 154.125.120.7 port 45264 [preauth]
Sep 30 18:56:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2347: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:56:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:56:23.974Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:56:23 compute-0 ceph-mon[73755]: pgmap v2347: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:56:23 compute-0 ceph-mgr[74051]: [dashboard INFO request] [192.168.122.100:36618] [POST] [200] [0.002s] [4.0B] [d77fe671-b831-402d-bfd8-6c0b15168a42] /api/prometheus_receiver
Sep 30 18:56:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:56:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:56:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:56:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:56:24 compute-0 nova_compute[265391]: 2025-09-30 18:56:24.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:56:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:56:24.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:24 compute-0 sudo[370302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:56:24 compute-0 sudo[370302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:56:24 compute-0 sudo[370302]: pam_unix(sudo:session): session closed for user root
Sep 30 18:56:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:56:25.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2348: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:56:25 compute-0 ceph-mon[73755]: pgmap v2348: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:56:26 compute-0 podman[370334]: 2025-09-30 18:56:26.108131699 +0000 UTC m=+0.056503915 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:56:26 compute-0 podman[370332]: 2025-09-30 18:56:26.125895669 +0000 UTC m=+0.081513893 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true)
Sep 30 18:56:26 compute-0 podman[370333]: 2025-09-30 18:56:26.130206891 +0000 UTC m=+0.082692854 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2)
Sep 30 18:56:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:56:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:56:26.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:26 compute-0 sudo[370417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:56:26 compute-0 sudo[370417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:56:26 compute-0 sudo[370417]: pam_unix(sudo:session): session closed for user root
Sep 30 18:56:26 compute-0 ovs-vsctl[370468]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Sep 30 18:56:27 compute-0 sudo[370448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:56:27 compute-0 sudo[370448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:56:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:56:27.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:56:27.445Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:56:27 compute-0 sudo[370448]: pam_unix(sudo:session): session closed for user root
Sep 30 18:56:27 compute-0 nova_compute[265391]: 2025-09-30 18:56:27.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:56:27 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:56:27 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:56:27 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:56:27 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:56:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2349: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 516 B/s rd, 0 op/s
Sep 30 18:56:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2350: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 621 B/s rd, 0 op/s
Sep 30 18:56:27 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:56:27 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:56:27 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:56:27 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:56:27 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:56:27 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:56:27 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:56:27 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:56:27 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:56:27 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:56:27 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:56:27 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:56:27 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:56:27 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:56:27 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:56:27 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:56:27 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:56:27 compute-0 sudo[370596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:56:27 compute-0 sudo[370596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:56:27 compute-0 sudo[370596]: pam_unix(sudo:session): session closed for user root
Sep 30 18:56:27 compute-0 sudo[370654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:56:27 compute-0 sudo[370654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:56:27 compute-0 virtqemud[265263]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Sep 30 18:56:27 compute-0 virtqemud[265263]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Sep 30 18:56:27 compute-0 virtqemud[265263]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Sep 30 18:56:28 compute-0 podman[370821]: 2025-09-30 18:56:28.352626672 +0000 UTC m=+0.041356523 container create ec72ac28ddff946a927c5d69b0c0f7cb04b61dcc98141e293722d50c1aa2adab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325)
Sep 30 18:56:28 compute-0 systemd[1]: Started libpod-conmon-ec72ac28ddff946a927c5d69b0c0f7cb04b61dcc98141e293722d50c1aa2adab.scope.
Sep 30 18:56:28 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:56:28 compute-0 podman[370821]: 2025-09-30 18:56:28.423704114 +0000 UTC m=+0.112433985 container init ec72ac28ddff946a927c5d69b0c0f7cb04b61dcc98141e293722d50c1aa2adab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_vaughan, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:56:28 compute-0 podman[370821]: 2025-09-30 18:56:28.334747188 +0000 UTC m=+0.023477059 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:56:28 compute-0 podman[370821]: 2025-09-30 18:56:28.43202696 +0000 UTC m=+0.120756821 container start ec72ac28ddff946a927c5d69b0c0f7cb04b61dcc98141e293722d50c1aa2adab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_vaughan, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Sep 30 18:56:28 compute-0 podman[370821]: 2025-09-30 18:56:28.437298836 +0000 UTC m=+0.126028777 container attach ec72ac28ddff946a927c5d69b0c0f7cb04b61dcc98141e293722d50c1aa2adab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Sep 30 18:56:28 compute-0 wonderful_vaughan[370872]: 167 167
Sep 30 18:56:28 compute-0 systemd[1]: libpod-ec72ac28ddff946a927c5d69b0c0f7cb04b61dcc98141e293722d50c1aa2adab.scope: Deactivated successfully.
Sep 30 18:56:28 compute-0 podman[370821]: 2025-09-30 18:56:28.438505588 +0000 UTC m=+0.127235459 container died ec72ac28ddff946a927c5d69b0c0f7cb04b61dcc98141e293722d50c1aa2adab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_vaughan, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:56:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-a13430a5f1d0b1585c4439fc0114f251648d1b4db5f3dd7e79c0e19abfd3b488-merged.mount: Deactivated successfully.
Sep 30 18:56:28 compute-0 podman[370821]: 2025-09-30 18:56:28.488268828 +0000 UTC m=+0.176998679 container remove ec72ac28ddff946a927c5d69b0c0f7cb04b61dcc98141e293722d50c1aa2adab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Sep 30 18:56:28 compute-0 ceph-mds[97072]: mds.cephfs.compute-0.vrwlru asok_command: cache status {prefix=cache status} (starting...)
Sep 30 18:56:28 compute-0 systemd[1]: libpod-conmon-ec72ac28ddff946a927c5d69b0c0f7cb04b61dcc98141e293722d50c1aa2adab.scope: Deactivated successfully.
Sep 30 18:56:28 compute-0 ceph-mds[97072]: mds.cephfs.compute-0.vrwlru asok_command: client ls {prefix=client ls} (starting...)
Sep 30 18:56:28 compute-0 podman[370949]: 2025-09-30 18:56:28.656044196 +0000 UTC m=+0.047136393 container create 51d07957def47e4ae941154cc89420b20ee8c90ae68adf047045e5bdf235bb81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_raman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:56:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:56:28.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:28 compute-0 lvm[370976]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:56:28 compute-0 lvm[370976]: VG ceph_vg0 finished
Sep 30 18:56:28 compute-0 systemd[1]: Started libpod-conmon-51d07957def47e4ae941154cc89420b20ee8c90ae68adf047045e5bdf235bb81.scope.
Sep 30 18:56:28 compute-0 podman[370949]: 2025-09-30 18:56:28.630846153 +0000 UTC m=+0.021938370 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:56:28 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2335dc85d15f06e76a1e5135ef6faf3db5c9547348031dee5631b0074b74e1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2335dc85d15f06e76a1e5135ef6faf3db5c9547348031dee5631b0074b74e1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2335dc85d15f06e76a1e5135ef6faf3db5c9547348031dee5631b0074b74e1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2335dc85d15f06e76a1e5135ef6faf3db5c9547348031dee5631b0074b74e1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2335dc85d15f06e76a1e5135ef6faf3db5c9547348031dee5631b0074b74e1e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:56:28 compute-0 podman[370949]: 2025-09-30 18:56:28.761593092 +0000 UTC m=+0.152685309 container init 51d07957def47e4ae941154cc89420b20ee8c90ae68adf047045e5bdf235bb81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_raman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:56:28 compute-0 podman[370949]: 2025-09-30 18:56:28.769439355 +0000 UTC m=+0.160531552 container start 51d07957def47e4ae941154cc89420b20ee8c90ae68adf047045e5bdf235bb81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_raman, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:56:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:56:28] "GET /metrics HTTP/1.1" 200 46746 "" "Prometheus/2.51.0"
Sep 30 18:56:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:56:28] "GET /metrics HTTP/1.1" 200 46746 "" "Prometheus/2.51.0"
Sep 30 18:56:28 compute-0 kernel: block dm-0: the capability attribute has been deprecated.
Sep 30 18:56:28 compute-0 podman[370949]: 2025-09-30 18:56:28.80665473 +0000 UTC m=+0.197746947 container attach 51d07957def47e4ae941154cc89420b20ee8c90ae68adf047045e5bdf235bb81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_raman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:56:28 compute-0 ceph-mon[73755]: pgmap v2349: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 516 B/s rd, 0 op/s
Sep 30 18:56:28 compute-0 ceph-mon[73755]: pgmap v2350: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 621 B/s rd, 0 op/s
Sep 30 18:56:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:56:28.978Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:56:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:56:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:56:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:56:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:56:29 compute-0 dazzling_raman[370989]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:56:29 compute-0 dazzling_raman[370989]: --> All data devices are unavailable
Sep 30 18:56:29 compute-0 systemd[1]: libpod-51d07957def47e4ae941154cc89420b20ee8c90ae68adf047045e5bdf235bb81.scope: Deactivated successfully.
Sep 30 18:56:29 compute-0 podman[370949]: 2025-09-30 18:56:29.105939337 +0000 UTC m=+0.497031534 container died 51d07957def47e4ae941154cc89420b20ee8c90ae68adf047045e5bdf235bb81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_raman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Sep 30 18:56:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2335dc85d15f06e76a1e5135ef6faf3db5c9547348031dee5631b0074b74e1e-merged.mount: Deactivated successfully.
Sep 30 18:56:29 compute-0 podman[370949]: 2025-09-30 18:56:29.171459175 +0000 UTC m=+0.562551372 container remove 51d07957def47e4ae941154cc89420b20ee8c90ae68adf047045e5bdf235bb81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_raman, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Sep 30 18:56:29 compute-0 systemd[1]: libpod-conmon-51d07957def47e4ae941154cc89420b20ee8c90ae68adf047045e5bdf235bb81.scope: Deactivated successfully.
Sep 30 18:56:29 compute-0 nova_compute[265391]: 2025-09-30 18:56:29.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:56:29 compute-0 sudo[370654]: pam_unix(sudo:session): session closed for user root
Sep 30 18:56:29 compute-0 sudo[371169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:56:29 compute-0 sudo[371169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:56:29 compute-0 sudo[371169]: pam_unix(sudo:session): session closed for user root
Sep 30 18:56:29 compute-0 ceph-mds[97072]: mds.cephfs.compute-0.vrwlru asok_command: damage ls {prefix=damage ls} (starting...)
Sep 30 18:56:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:56:29.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:29 compute-0 sudo[371194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:56:29 compute-0 sudo[371194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:56:29 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19090 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:29 compute-0 ceph-mds[97072]: mds.cephfs.compute-0.vrwlru asok_command: dump loads {prefix=dump loads} (starting...)
Sep 30 18:56:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "report"} v 0)
Sep 30 18:56:29 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3409039046' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Sep 30 18:56:29 compute-0 ceph-mds[97072]: mds.cephfs.compute-0.vrwlru asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Sep 30 18:56:29 compute-0 ceph-mds[97072]: mds.cephfs.compute-0.vrwlru asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Sep 30 18:56:29 compute-0 podman[371296]: 2025-09-30 18:56:29.745872672 +0000 UTC m=+0.045798818 container create 94d52aa7a84270b772c899417774cd9a3643c6b217f62b03dc64d0b67a9bab9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_rosalind, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Sep 30 18:56:29 compute-0 podman[276673]: time="2025-09-30T18:56:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:56:29 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19098 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2351: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 621 B/s rd, 0 op/s
Sep 30 18:56:29 compute-0 podman[371296]: 2025-09-30 18:56:29.721319586 +0000 UTC m=+0.021245752 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:56:29 compute-0 systemd[1]: Started libpod-conmon-94d52aa7a84270b772c899417774cd9a3643c6b217f62b03dc64d0b67a9bab9b.scope.
Sep 30 18:56:29 compute-0 ceph-mds[97072]: mds.cephfs.compute-0.vrwlru asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Sep 30 18:56:29 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:56:29 compute-0 podman[371296]: 2025-09-30 18:56:29.918786844 +0000 UTC m=+0.218712990 container init 94d52aa7a84270b772c899417774cd9a3643c6b217f62b03dc64d0b67a9bab9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:56:29 compute-0 podman[371296]: 2025-09-30 18:56:29.927278864 +0000 UTC m=+0.227205010 container start 94d52aa7a84270b772c899417774cd9a3643c6b217f62b03dc64d0b67a9bab9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:56:29 compute-0 clever_rosalind[371336]: 167 167
Sep 30 18:56:29 compute-0 systemd[1]: libpod-94d52aa7a84270b772c899417774cd9a3643c6b217f62b03dc64d0b67a9bab9b.scope: Deactivated successfully.
Sep 30 18:56:29 compute-0 podman[371296]: 2025-09-30 18:56:29.939624924 +0000 UTC m=+0.239551070 container attach 94d52aa7a84270b772c899417774cd9a3643c6b217f62b03dc64d0b67a9bab9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Sep 30 18:56:29 compute-0 podman[371296]: 2025-09-30 18:56:29.939963933 +0000 UTC m=+0.239890079 container died 94d52aa7a84270b772c899417774cd9a3643c6b217f62b03dc64d0b67a9bab9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_rosalind, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Sep 30 18:56:29 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3409039046' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Sep 30 18:56:29 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:56:29 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2863328886' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:56:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-44806f7511f953d41aa1f6b7de8bb3bf0027dac536d36c3f6c0185d9e98d252f-merged.mount: Deactivated successfully.
Sep 30 18:56:29 compute-0 podman[371296]: 2025-09-30 18:56:29.981735586 +0000 UTC m=+0.281661732 container remove 94d52aa7a84270b772c899417774cd9a3643c6b217f62b03dc64d0b67a9bab9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_rosalind, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:56:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:56:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42751 "" "Go-http-client/1.1"
Sep 30 18:56:30 compute-0 podman[276673]: @ - - [30/Sep/2025:18:56:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10319 "" "Go-http-client/1.1"
Sep 30 18:56:30 compute-0 systemd[1]: libpod-conmon-94d52aa7a84270b772c899417774cd9a3643c6b217f62b03dc64d0b67a9bab9b.scope: Deactivated successfully.
Sep 30 18:56:30 compute-0 ceph-mds[97072]: mds.cephfs.compute-0.vrwlru asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Sep 30 18:56:30 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19106 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:30 compute-0 podman[371395]: 2025-09-30 18:56:30.157867961 +0000 UTC m=+0.047824941 container create b74d5a4e65c7600f565fca4a42a1e6e1dc8d2504fc3daf2b69ac8bcc90bc62f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_chebyshev, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:56:30 compute-0 systemd[1]: Started libpod-conmon-b74d5a4e65c7600f565fca4a42a1e6e1dc8d2504fc3daf2b69ac8bcc90bc62f3.scope.
Sep 30 18:56:30 compute-0 podman[371395]: 2025-09-30 18:56:30.132021451 +0000 UTC m=+0.021978481 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:56:30 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:56:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a6b7782543bf5406a0b93cfbc04582b8882f166fae4bcac7d68ab172ff13126/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:56:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a6b7782543bf5406a0b93cfbc04582b8882f166fae4bcac7d68ab172ff13126/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:56:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a6b7782543bf5406a0b93cfbc04582b8882f166fae4bcac7d68ab172ff13126/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:56:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a6b7782543bf5406a0b93cfbc04582b8882f166fae4bcac7d68ab172ff13126/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:56:30 compute-0 podman[371395]: 2025-09-30 18:56:30.261161068 +0000 UTC m=+0.151118058 container init b74d5a4e65c7600f565fca4a42a1e6e1dc8d2504fc3daf2b69ac8bcc90bc62f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_chebyshev, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Sep 30 18:56:30 compute-0 podman[371395]: 2025-09-30 18:56:30.267375629 +0000 UTC m=+0.157332609 container start b74d5a4e65c7600f565fca4a42a1e6e1dc8d2504fc3daf2b69ac8bcc90bc62f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Sep 30 18:56:30 compute-0 podman[371395]: 2025-09-30 18:56:30.270642984 +0000 UTC m=+0.160599964 container attach b74d5a4e65c7600f565fca4a42a1e6e1dc8d2504fc3daf2b69ac8bcc90bc62f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_chebyshev, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:56:30 compute-0 ceph-mds[97072]: mds.cephfs.compute-0.vrwlru asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Sep 30 18:56:30 compute-0 nova_compute[265391]: 2025-09-30 18:56:30.424 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:56:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config log"} v 0)
Sep 30 18:56:30 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3506331521' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Sep 30 18:56:30 compute-0 ceph-mds[97072]: mds.cephfs.compute-0.vrwlru asok_command: get subtrees {prefix=get subtrees} (starting...)
Sep 30 18:56:30 compute-0 objective_chebyshev[371427]: {
Sep 30 18:56:30 compute-0 objective_chebyshev[371427]:     "0": [
Sep 30 18:56:30 compute-0 objective_chebyshev[371427]:         {
Sep 30 18:56:30 compute-0 objective_chebyshev[371427]:             "devices": [
Sep 30 18:56:30 compute-0 objective_chebyshev[371427]:                 "/dev/loop3"
Sep 30 18:56:30 compute-0 objective_chebyshev[371427]:             ],
Sep 30 18:56:30 compute-0 objective_chebyshev[371427]:             "lv_name": "ceph_lv0",
Sep 30 18:56:30 compute-0 objective_chebyshev[371427]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:56:30 compute-0 objective_chebyshev[371427]:             "lv_size": "21470642176",
Sep 30 18:56:30 compute-0 objective_chebyshev[371427]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:56:30 compute-0 objective_chebyshev[371427]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:56:30 compute-0 objective_chebyshev[371427]:             "name": "ceph_lv0",
Sep 30 18:56:30 compute-0 objective_chebyshev[371427]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:56:30 compute-0 objective_chebyshev[371427]:             "tags": {
Sep 30 18:56:30 compute-0 objective_chebyshev[371427]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:56:30 compute-0 objective_chebyshev[371427]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:56:30 compute-0 objective_chebyshev[371427]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:56:30 compute-0 objective_chebyshev[371427]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:56:30 compute-0 objective_chebyshev[371427]:                 "ceph.cluster_name": "ceph",
Sep 30 18:56:30 compute-0 objective_chebyshev[371427]:                 "ceph.crush_device_class": "",
Sep 30 18:56:30 compute-0 objective_chebyshev[371427]:                 "ceph.encrypted": "0",
Sep 30 18:56:30 compute-0 objective_chebyshev[371427]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:56:30 compute-0 objective_chebyshev[371427]:                 "ceph.osd_id": "0",
Sep 30 18:56:30 compute-0 objective_chebyshev[371427]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:56:30 compute-0 objective_chebyshev[371427]:                 "ceph.type": "block",
Sep 30 18:56:30 compute-0 objective_chebyshev[371427]:                 "ceph.vdo": "0",
Sep 30 18:56:30 compute-0 objective_chebyshev[371427]:                 "ceph.with_tpm": "0"
Sep 30 18:56:30 compute-0 objective_chebyshev[371427]:             },
Sep 30 18:56:30 compute-0 objective_chebyshev[371427]:             "type": "block",
Sep 30 18:56:30 compute-0 objective_chebyshev[371427]:             "vg_name": "ceph_vg0"
Sep 30 18:56:30 compute-0 objective_chebyshev[371427]:         }
Sep 30 18:56:30 compute-0 objective_chebyshev[371427]:     ]
Sep 30 18:56:30 compute-0 objective_chebyshev[371427]: }
Sep 30 18:56:30 compute-0 systemd[1]: libpod-b74d5a4e65c7600f565fca4a42a1e6e1dc8d2504fc3daf2b69ac8bcc90bc62f3.scope: Deactivated successfully.
Sep 30 18:56:30 compute-0 podman[371395]: 2025-09-30 18:56:30.587896717 +0000 UTC m=+0.477853697 container died b74d5a4e65c7600f565fca4a42a1e6e1dc8d2504fc3daf2b69ac8bcc90bc62f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Sep 30 18:56:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a6b7782543bf5406a0b93cfbc04582b8882f166fae4bcac7d68ab172ff13126-merged.mount: Deactivated successfully.
Sep 30 18:56:30 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19114 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:30 compute-0 podman[371395]: 2025-09-30 18:56:30.638822727 +0000 UTC m=+0.528779707 container remove b74d5a4e65c7600f565fca4a42a1e6e1dc8d2504fc3daf2b69ac8bcc90bc62f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Sep 30 18:56:30 compute-0 systemd[1]: libpod-conmon-b74d5a4e65c7600f565fca4a42a1e6e1dc8d2504fc3daf2b69ac8bcc90bc62f3.scope: Deactivated successfully.
Sep 30 18:56:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:56:30.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:30 compute-0 sudo[371194]: pam_unix(sudo:session): session closed for user root
Sep 30 18:56:30 compute-0 ceph-mds[97072]: mds.cephfs.compute-0.vrwlru asok_command: ops {prefix=ops} (starting...)
Sep 30 18:56:30 compute-0 sudo[371501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:56:30 compute-0 sudo[371501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:56:30 compute-0 sudo[371501]: pam_unix(sudo:session): session closed for user root
Sep 30 18:56:30 compute-0 sudo[371534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:56:30 compute-0 sudo[371534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:56:30 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config-key dump"} v 0)
Sep 30 18:56:30 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2756469354' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Sep 30 18:56:30 compute-0 ceph-mon[73755]: from='client.19090 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:30 compute-0 ceph-mon[73755]: from='client.19098 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:30 compute-0 ceph-mon[73755]: pgmap v2351: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 621 B/s rd, 0 op/s
Sep 30 18:56:30 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2863328886' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:56:30 compute-0 ceph-mon[73755]: from='client.19106 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:30 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3506331521' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Sep 30 18:56:30 compute-0 ceph-mon[73755]: from='client.19114 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:30 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2756469354' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Sep 30 18:56:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Sep 30 18:56:31 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2133251732' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Sep 30 18:56:31 compute-0 podman[371644]: 2025-09-30 18:56:31.258614751 +0000 UTC m=+0.046437555 container create 5dd0285da856da702ee0b22ffbe516975fbd74adcb54e584907f2178747e4791 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_galileo, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:56:31 compute-0 systemd[1]: Started libpod-conmon-5dd0285da856da702ee0b22ffbe516975fbd74adcb54e584907f2178747e4791.scope.
Sep 30 18:56:31 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19126 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:31 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:56:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:56:31.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:31 compute-0 podman[371644]: 2025-09-30 18:56:31.242185545 +0000 UTC m=+0.030008369 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:56:31 compute-0 podman[371644]: 2025-09-30 18:56:31.337807223 +0000 UTC m=+0.125630027 container init 5dd0285da856da702ee0b22ffbe516975fbd74adcb54e584907f2178747e4791 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_galileo, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:56:31 compute-0 podman[371644]: 2025-09-30 18:56:31.34615842 +0000 UTC m=+0.133981234 container start 5dd0285da856da702ee0b22ffbe516975fbd74adcb54e584907f2178747e4791 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True)
Sep 30 18:56:31 compute-0 podman[371644]: 2025-09-30 18:56:31.36393409 +0000 UTC m=+0.151756914 container attach 5dd0285da856da702ee0b22ffbe516975fbd74adcb54e584907f2178747e4791 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Sep 30 18:56:31 compute-0 keen_galileo[371681]: 167 167
Sep 30 18:56:31 compute-0 systemd[1]: libpod-5dd0285da856da702ee0b22ffbe516975fbd74adcb54e584907f2178747e4791.scope: Deactivated successfully.
Sep 30 18:56:31 compute-0 podman[371644]: 2025-09-30 18:56:31.368801796 +0000 UTC m=+0.156624600 container died 5dd0285da856da702ee0b22ffbe516975fbd74adcb54e584907f2178747e4791 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_galileo, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Sep 30 18:56:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-17722479af596740da7ae27572abad50d6af82f78759953d4cc2e02929470d41-merged.mount: Deactivated successfully.
Sep 30 18:56:31 compute-0 ceph-mds[97072]: mds.cephfs.compute-0.vrwlru asok_command: session ls {prefix=session ls} (starting...)
Sep 30 18:56:31 compute-0 podman[371644]: 2025-09-30 18:56:31.405324163 +0000 UTC m=+0.193146957 container remove 5dd0285da856da702ee0b22ffbe516975fbd74adcb54e584907f2178747e4791 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_galileo, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default)
Sep 30 18:56:31 compute-0 systemd[1]: libpod-conmon-5dd0285da856da702ee0b22ffbe516975fbd74adcb54e584907f2178747e4791.scope: Deactivated successfully.
Sep 30 18:56:31 compute-0 openstack_network_exporter[279566]: ERROR   18:56:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:56:31 compute-0 openstack_network_exporter[279566]: ERROR   18:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:56:31 compute-0 openstack_network_exporter[279566]: ERROR   18:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:56:31 compute-0 openstack_network_exporter[279566]: ERROR   18:56:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:56:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:56:31 compute-0 openstack_network_exporter[279566]: ERROR   18:56:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:56:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:56:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:56:31 compute-0 ceph-mds[97072]: mds.cephfs.compute-0.vrwlru asok_command: status {prefix=status} (starting...)
Sep 30 18:56:31 compute-0 podman[371730]: 2025-09-30 18:56:31.564707044 +0000 UTC m=+0.047430820 container create a50d322346986193b14e18ac5f7ff445040b3c933e867b70b8f1f8627e13909a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_jepsen, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Sep 30 18:56:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr dump"} v 0)
Sep 30 18:56:31 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1442968888' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Sep 30 18:56:31 compute-0 systemd[1]: Started libpod-conmon-a50d322346986193b14e18ac5f7ff445040b3c933e867b70b8f1f8627e13909a.scope.
Sep 30 18:56:31 compute-0 podman[371730]: 2025-09-30 18:56:31.54142191 +0000 UTC m=+0.024145706 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:56:31 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:56:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/786fb05e5b2863559b200977118b1cf4bc060439b2e6610683de235c35f77a21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:56:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/786fb05e5b2863559b200977118b1cf4bc060439b2e6610683de235c35f77a21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:56:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/786fb05e5b2863559b200977118b1cf4bc060439b2e6610683de235c35f77a21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:56:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/786fb05e5b2863559b200977118b1cf4bc060439b2e6610683de235c35f77a21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:56:31 compute-0 podman[371730]: 2025-09-30 18:56:31.666865772 +0000 UTC m=+0.149589578 container init a50d322346986193b14e18ac5f7ff445040b3c933e867b70b8f1f8627e13909a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Sep 30 18:56:31 compute-0 podman[371730]: 2025-09-30 18:56:31.675357412 +0000 UTC m=+0.158081188 container start a50d322346986193b14e18ac5f7ff445040b3c933e867b70b8f1f8627e13909a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Sep 30 18:56:31 compute-0 podman[371730]: 2025-09-30 18:56:31.680090715 +0000 UTC m=+0.162814511 container attach a50d322346986193b14e18ac5f7ff445040b3c933e867b70b8f1f8627e13909a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Sep 30 18:56:31 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19134 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2352: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 621 B/s rd, 0 op/s
Sep 30 18:56:31 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2133251732' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Sep 30 18:56:31 compute-0 ceph-mon[73755]: from='client.19126 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:31 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1442968888' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Sep 30 18:56:31 compute-0 ceph-mon[73755]: from='client.19134 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:31 compute-0 ceph-mon[73755]: pgmap v2352: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 621 B/s rd, 0 op/s
Sep 30 18:56:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Sep 30 18:56:32 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3709307474' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Sep 30 18:56:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "features"} v 0)
Sep 30 18:56:32 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2790081530' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Sep 30 18:56:32 compute-0 lvm[371916]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:56:32 compute-0 lvm[371916]: VG ceph_vg0 finished
Sep 30 18:56:32 compute-0 friendly_jepsen[371751]: {}
Sep 30 18:56:32 compute-0 podman[371730]: 2025-09-30 18:56:32.412869386 +0000 UTC m=+0.895593162 container died a50d322346986193b14e18ac5f7ff445040b3c933e867b70b8f1f8627e13909a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid)
Sep 30 18:56:32 compute-0 systemd[1]: libpod-a50d322346986193b14e18ac5f7ff445040b3c933e867b70b8f1f8627e13909a.scope: Deactivated successfully.
Sep 30 18:56:32 compute-0 systemd[1]: libpod-a50d322346986193b14e18ac5f7ff445040b3c933e867b70b8f1f8627e13909a.scope: Consumed 1.095s CPU time.
Sep 30 18:56:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-786fb05e5b2863559b200977118b1cf4bc060439b2e6610683de235c35f77a21-merged.mount: Deactivated successfully.
Sep 30 18:56:32 compute-0 podman[371730]: 2025-09-30 18:56:32.454148906 +0000 UTC m=+0.936872672 container remove a50d322346986193b14e18ac5f7ff445040b3c933e867b70b8f1f8627e13909a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:56:32 compute-0 systemd[1]: libpod-conmon-a50d322346986193b14e18ac5f7ff445040b3c933e867b70b8f1f8627e13909a.scope: Deactivated successfully.
Sep 30 18:56:32 compute-0 sudo[371534]: pam_unix(sudo:session): session closed for user root
Sep 30 18:56:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:56:32 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:56:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:56:32 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:56:32 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Sep 30 18:56:32 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/448556682' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Sep 30 18:56:32 compute-0 sudo[371955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:56:32 compute-0 sudo[371955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:56:32 compute-0 sudo[371955]: pam_unix(sudo:session): session closed for user root
Sep 30 18:56:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:56:32.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:32 compute-0 nova_compute[265391]: 2025-09-30 18:56:32.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:56:32 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3709307474' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Sep 30 18:56:32 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2790081530' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Sep 30 18:56:32 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:56:32 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:56:32 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/448556682' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Sep 30 18:56:32 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/321118810' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Sep 30 18:56:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0)
Sep 30 18:56:33 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/308662736' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 18:56:33 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19156 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:33 compute-0 ceph-mgr[74051]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Sep 30 18:56:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T18:56:33.171+0000 7faab05c1640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Sep 30 18:56:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:56:33.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr stat"} v 0)
Sep 30 18:56:33 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1596230471' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Sep 30 18:56:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Sep 30 18:56:33 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/30544332' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Sep 30 18:56:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2353: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 621 B/s rd, 0 op/s
Sep 30 18:56:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr versions"} v 0)
Sep 30 18:56:33 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4002619199' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Sep 30 18:56:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:56:33.976Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:56:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:56:33.976Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:56:33 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/308662736' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 18:56:33 compute-0 ceph-mon[73755]: from='client.19156 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:33 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1596230471' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Sep 30 18:56:33 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/30544332' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Sep 30 18:56:33 compute-0 ceph-mon[73755]: pgmap v2353: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 621 B/s rd, 0 op/s
Sep 30 18:56:33 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/4002619199' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Sep 30 18:56:33 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 18:56:33 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4801.3 total, 600.0 interval
                                           Cumulative writes: 12K writes, 58K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s
                                           Cumulative WAL: 12K writes, 12K syncs, 1.00 writes per sync, written: 0.09 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1543 writes, 6840 keys, 1543 commit groups, 1.0 writes per commit group, ingest: 10.92 MB, 0.02 MB/s
                                           Interval WAL: 1543 writes, 1543 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    107.9      0.77              0.25        40    0.019       0      0       0.0       0.0
                                             L6      1/0   10.74 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.4    205.5    177.0      2.52              1.24        39    0.065    253K    21K       0.0       0.0
                                            Sum      1/0   10.74 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.4    157.6    160.9      3.28              1.49        79    0.042    253K    21K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.2    203.6    205.8      0.34              0.20        10    0.034     41K   2547       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    205.5    177.0      2.52              1.24        39    0.065    253K    21K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    157.4      0.52              0.25        39    0.013       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.24              0.00         1    0.242       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4801.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.081, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.52 GB write, 0.11 MB/s write, 0.51 GB read, 0.11 MB/s read, 3.3 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.11 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e76de37350#2 capacity: 304.00 MB usage: 51.77 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000373 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3029,50.02 MB,16.4544%) FilterBlock(80,705.30 KB,0.226568%) IndexBlock(80,1.06 MB,0.348568%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Sep 30 18:56:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:56:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:56:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:56:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:56:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Sep 30 18:56:34 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1357592871' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Sep 30 18:56:34 compute-0 nova_compute[265391]: 2025-09-30 18:56:34.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:56:34 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:34 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr dump"} v 0)
Sep 30 18:56:34 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3931900365' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Sep 30 18:56:34 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19184 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:56:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:56:34.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:56:35 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1357592871' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Sep 30 18:56:35 compute-0 ceph-mon[73755]: from='client.19176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:35 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3931900365' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Sep 30 18:56:35 compute-0 ceph-mon[73755]: from='client.19184 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:35 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19192 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Sep 30 18:56:35 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3094080132' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:16.518577+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168804352 unmapped: 41017344 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:17.518754+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168804352 unmapped: 41017344 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:18.518929+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168804352 unmapped: 41017344 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:19.519078+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2044257 data_alloc: 234881024 data_used: 21544960
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168804352 unmapped: 41017344 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:20.519233+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168812544 unmapped: 41009152 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:21.519458+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5841000/0x0/0x4ffc00000, data 0x3e02dc1/0x3edb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168812544 unmapped: 41009152 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:22.519648+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168820736 unmapped: 41000960 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:23.519793+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.945285797s of 19.968526840s, submitted: 17
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168828928 unmapped: 40992768 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:24.519951+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2043049 data_alloc: 234881024 data_used: 21544960
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168828928 unmapped: 40992768 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f583f000/0x0/0x4ffc00000, data 0x3e03dc1/0x3edc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:25.520159+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168828928 unmapped: 40992768 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:26.520405+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168828928 unmapped: 40992768 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f583f000/0x0/0x4ffc00000, data 0x3e03dc1/0x3edc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:27.520588+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168828928 unmapped: 40992768 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:28.520742+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b7849c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b58ea400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168837120 unmapped: 40984576 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:29.520894+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2043181 data_alloc: 234881024 data_used: 21544960
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168837120 unmapped: 40984576 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b7848c00 session 0x5631b69d0f00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b7849000 session 0x5631b592de00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:30.521069+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168845312 unmapped: 40976384 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5840000/0x0/0x4ffc00000, data 0x3e03dc1/0x3edc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:31.521280+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168845312 unmapped: 40976384 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:32.521502+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168845312 unmapped: 40976384 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:33.521604+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168845312 unmapped: 40976384 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5840000/0x0/0x4ffc00000, data 0x3e03dc1/0x3edc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:34.521768+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2041877 data_alloc: 234881024 data_used: 21544960
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168845312 unmapped: 40976384 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:35.521985+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168845312 unmapped: 40976384 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:36.522165+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168845312 unmapped: 40976384 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5840000/0x0/0x4ffc00000, data 0x3e03dc1/0x3edc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:37.522298+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5840000/0x0/0x4ffc00000, data 0x3e03dc1/0x3edc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168853504 unmapped: 40968192 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:38.522495+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168861696 unmapped: 40960000 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.419866562s of 15.447829247s, submitted: 10
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b520c000 session 0x5631b5f67e00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b7848c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b520b000 session 0x5631b3ff23c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b520c800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:39.522672+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2042041 data_alloc: 234881024 data_used: 21544960
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168861696 unmapped: 40960000 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:40.522829+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168861696 unmapped: 40960000 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5840000/0x0/0x4ffc00000, data 0x3e03dc1/0x3edc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:41.522987+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168861696 unmapped: 40960000 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b7849c00 session 0x5631b3368f00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b58ea400 session 0x5631b36703c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:42.523165+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168861696 unmapped: 40960000 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b520b000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:43.523295+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b520b000 session 0x5631b6bf2d20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 162627584 unmapped: 47194112 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:44.523492+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1798541 data_alloc: 234881024 data_used: 13398016
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 162627584 unmapped: 47194112 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:45.523627+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 162627584 unmapped: 47194112 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:46.523969+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f6f6f000/0x0/0x4ffc00000, data 0x23c8d2c/0x249e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 162627584 unmapped: 47194112 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:47.524152+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 162627584 unmapped: 47194112 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:48.524297+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 162627584 unmapped: 47194112 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.125593185s of 10.303235054s, submitted: 65
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73df400 session 0x5631b59d94a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b2b7b800 session 0x5631b33683c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f6f6f000/0x0/0x4ffc00000, data 0x23c8d2c/0x249e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:49.526460+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1797753 data_alloc: 234881024 data_used: 13398016
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 162627584 unmapped: 47194112 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73df400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:50.526585+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73df400 session 0x5631b6968960
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 152838144 unmapped: 56983552 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:51.526720+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 152838144 unmapped: 56983552 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:52.526928+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f8c8c000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 152838144 unmapped: 56983552 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:53.530547+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 152838144 unmapped: 56983552 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:54.530766+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1518463 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 152838144 unmapped: 56983552 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f8c8c000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:55.530920+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 152838144 unmapped: 56983552 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f8c8c000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:56.531099+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 152838144 unmapped: 56983552 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:57.531287+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 152838144 unmapped: 56983552 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:58.531423+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 152838144 unmapped: 56983552 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:25:59.531570+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1518463 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 152838144 unmapped: 56983552 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:00.531714+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 152838144 unmapped: 56983552 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:01.531843+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 152838144 unmapped: 56983552 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f8c8c000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:02.531983+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 152838144 unmapped: 56983552 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:03.532151+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 152838144 unmapped: 56983552 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:04.532384+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1518463 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 152838144 unmapped: 56983552 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:05.532484+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 152838144 unmapped: 56983552 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:06.532625+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 152838144 unmapped: 56983552 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:07.532774+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f8c8c000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f8c8c000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 152838144 unmapped: 56983552 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:08.532937+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f8c8c000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 152838144 unmapped: 56983552 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:09.533072+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1518463 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 152838144 unmapped: 56983552 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f8c8c000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:10.533197+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 152838144 unmapped: 56983552 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:11.533450+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 152838144 unmapped: 56983552 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:12.533607+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 152838144 unmapped: 56983552 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:13.533847+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 152838144 unmapped: 56983552 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:14.533990+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1518463 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 152838144 unmapped: 56983552 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:15.534113+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f8c8c000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 152838144 unmapped: 56983552 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:16.534276+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 152838144 unmapped: 56983552 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:17.534447+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 152838144 unmapped: 56983552 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:18.534608+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 152838144 unmapped: 56983552 heap: 209821696 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b520b000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 29.706655502s of 29.865404129s, submitted: 61
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b520b000 session 0x5631b33725a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b58ea400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b58ea400 session 0x5631b3670d20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:19.534722+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b7849000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b7849000 session 0x5631b337a000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b7849000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b7849000 session 0x5631b5f67680
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b2b7b800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b2b7b800 session 0x5631b6b38f00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1598758 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 153321472 unmapped: 60702720 heap: 214024192 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:20.537438+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f835b000/0x0/0x4ffc00000, data 0x12eec97/0x13c1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 153321472 unmapped: 60702720 heap: 214024192 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:21.537582+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f835b000/0x0/0x4ffc00000, data 0x12eec97/0x13c1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 153321472 unmapped: 60702720 heap: 214024192 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:22.537771+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b520b000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b520b000 session 0x5631b6b534a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 153321472 unmapped: 60702720 heap: 214024192 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:23.537938+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 153321472 unmapped: 60702720 heap: 214024192 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:24.538097+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1598758 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 153321472 unmapped: 60702720 heap: 214024192 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b58ea400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b58ea400 session 0x5631b5396f00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:25.538246+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 153321472 unmapped: 60702720 heap: 214024192 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:26.538435+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73df400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73df400 session 0x5631b33625a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73df400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f835b000/0x0/0x4ffc00000, data 0x12eec97/0x13c1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73df400 session 0x5631b5f62000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b2b7b800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b520b000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 153321472 unmapped: 60702720 heap: 214024192 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:27.538582+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 153444352 unmapped: 60579840 heap: 214024192 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:28.538728+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f835b000/0x0/0x4ffc00000, data 0x12eec97/0x13c1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 153444352 unmapped: 60579840 heap: 214024192 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f835b000/0x0/0x4ffc00000, data 0x12eec97/0x13c1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:29.538867+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1666026 data_alloc: 234881024 data_used: 11710464
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 153444352 unmapped: 60579840 heap: 214024192 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:30.538995+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 153444352 unmapped: 60579840 heap: 214024192 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:31.539157+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 153444352 unmapped: 60579840 heap: 214024192 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f835b000/0x0/0x4ffc00000, data 0x12eec97/0x13c1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:32.539408+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 153444352 unmapped: 60579840 heap: 214024192 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:33.539552+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 153444352 unmapped: 60579840 heap: 214024192 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:34.539965+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1666026 data_alloc: 234881024 data_used: 11710464
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 153444352 unmapped: 60579840 heap: 214024192 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:35.540115+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 153444352 unmapped: 60579840 heap: 214024192 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f835b000/0x0/0x4ffc00000, data 0x12eec97/0x13c1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:36.540248+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f835b000/0x0/0x4ffc00000, data 0x12eec97/0x13c1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 153444352 unmapped: 60579840 heap: 214024192 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:37.540401+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 153444352 unmapped: 60579840 heap: 214024192 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.972042084s of 19.176801682s, submitted: 27
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:38.540534+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 162070528 unmapped: 51953664 heap: 214024192 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:39.540688+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1772822 data_alloc: 234881024 data_used: 13733888
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 163127296 unmapped: 50896896 heap: 214024192 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:40.540875+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 163127296 unmapped: 50896896 heap: 214024192 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b7849400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b7849400 session 0x5631b6c06780
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:41.541023+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6caa000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6caa000 session 0x5631b6bf2960
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bc6b5000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bc6b5000 session 0x5631b6bf23c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bc6b5800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f76a7000/0x0/0x4ffc00000, data 0x1fa2c97/0x2075000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bc6b5800 session 0x5631b6bf2f00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bc6b5800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bc6b5800 session 0x5631b65e10e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f6aa7000/0x0/0x4ffc00000, data 0x2ba1cf9/0x2c75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 163414016 unmapped: 55934976 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:42.541334+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 163414016 unmapped: 55934976 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:43.541494+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 163414016 unmapped: 55934976 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:44.541639+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1873727 data_alloc: 234881024 data_used: 13791232
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 163414016 unmapped: 55934976 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:45.541793+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f6aa7000/0x0/0x4ffc00000, data 0x2ba1cf9/0x2c75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 163414016 unmapped: 55934976 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:46.542046+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 163414016 unmapped: 55934976 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:47.542199+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6caa000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6caa000 session 0x5631b65e14a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 163414016 unmapped: 55934976 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:48.542439+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 163414016 unmapped: 55934976 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:49.542607+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f6aa7000/0x0/0x4ffc00000, data 0x2ba1cf9/0x2c75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1873743 data_alloc: 234881024 data_used: 13791232
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 163422208 unmapped: 55926784 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73df400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73df400 session 0x5631b6b52000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:50.542766+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f6aa7000/0x0/0x4ffc00000, data 0x2ba1cf9/0x2c75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 163422208 unmapped: 55926784 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f6aa7000/0x0/0x4ffc00000, data 0x2ba1cf9/0x2c75000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:51.542893+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b7849400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b7849400 session 0x5631b6b52780
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bc6b5000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.899291039s of 13.251012802s, submitted: 171
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bc6b5000 session 0x5631b6429a40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 163446784 unmapped: 55902208 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f6aa6000/0x0/0x4ffc00000, data 0x2ba1d1c/0x2c76000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:52.543121+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6caa000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73df400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 163446784 unmapped: 55902208 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:53.543272+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 163749888 unmapped: 55599104 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:54.543405+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1938729 data_alloc: 234881024 data_used: 23416832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 170369024 unmapped: 48979968 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:55.543544+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 170369024 unmapped: 48979968 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:56.543692+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 170369024 unmapped: 48979968 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:57.543849+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 170369024 unmapped: 48979968 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f6aa6000/0x0/0x4ffc00000, data 0x2ba1d1c/0x2c76000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:58.544018+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 170369024 unmapped: 48979968 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:26:59.544175+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1949521 data_alloc: 234881024 data_used: 25030656
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 170369024 unmapped: 48979968 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:00.544404+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f6aa6000/0x0/0x4ffc00000, data 0x2ba1d1c/0x2c76000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 170369024 unmapped: 48979968 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:01.544581+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 170369024 unmapped: 48979968 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:02.544776+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 170369024 unmapped: 48979968 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:03.544947+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.591268539s of 12.626220703s, submitted: 11
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 171220992 unmapped: 48128000 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:04.545099+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2059849 data_alloc: 234881024 data_used: 25866240
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 173006848 unmapped: 46342144 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:05.545291+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5d13000/0x0/0x4ffc00000, data 0x392ed1c/0x3a03000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176078848 unmapped: 43270144 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:06.545466+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176078848 unmapped: 43270144 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:07.545638+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5c66000/0x0/0x4ffc00000, data 0x39e1d1c/0x3ab6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176095232 unmapped: 43253760 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:08.545791+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5c66000/0x0/0x4ffc00000, data 0x39e1d1c/0x3ab6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176111616 unmapped: 43237376 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:09.546006+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2077763 data_alloc: 234881024 data_used: 26071040
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176111616 unmapped: 43237376 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:10.546140+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176111616 unmapped: 43237376 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:11.546279+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176111616 unmapped: 43237376 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:12.546405+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176111616 unmapped: 43237376 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5c40000/0x0/0x4ffc00000, data 0x3a06d1c/0x3adb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:13.546546+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176111616 unmapped: 43237376 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:14.546819+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2077419 data_alloc: 234881024 data_used: 26071040
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176111616 unmapped: 43237376 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:15.547044+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176111616 unmapped: 43237376 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:16.547240+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176111616 unmapped: 43237376 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:17.547401+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176111616 unmapped: 43237376 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:18.547562+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5c40000/0x0/0x4ffc00000, data 0x3a06d1c/0x3adb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176111616 unmapped: 43237376 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:19.547735+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2077419 data_alloc: 234881024 data_used: 26071040
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176111616 unmapped: 43237376 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:20.547890+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176111616 unmapped: 43237376 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:21.548031+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176119808 unmapped: 43229184 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:22.548212+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176119808 unmapped: 43229184 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:23.548379+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176119808 unmapped: 43229184 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:24.548567+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.902381897s of 20.219509125s, submitted: 157
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5c40000/0x0/0x4ffc00000, data 0x3a06d1c/0x3adb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2077259 data_alloc: 234881024 data_used: 26071040
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5c3e000/0x0/0x4ffc00000, data 0x3a09d1c/0x3ade000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176119808 unmapped: 43229184 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:25.548713+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176119808 unmapped: 43229184 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:26.548860+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b543f800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6e72800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176119808 unmapped: 43229184 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:27.549021+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b2b7b800 session 0x5631b5770960
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b520b000 session 0x5631b6b52f00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176128000 unmapped: 43220992 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:28.549195+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176128000 unmapped: 43220992 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:29.549371+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2076815 data_alloc: 234881024 data_used: 26058752
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176136192 unmapped: 43212800 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:30.549505+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5c3e000/0x0/0x4ffc00000, data 0x3a09d1c/0x3ade000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176152576 unmapped: 43196416 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:31.549651+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176152576 unmapped: 43196416 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:32.549815+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176152576 unmapped: 43196416 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:33.549974+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176152576 unmapped: 43196416 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:34.550144+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2076815 data_alloc: 234881024 data_used: 26058752
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176152576 unmapped: 43196416 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:35.550323+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5c3e000/0x0/0x4ffc00000, data 0x3a09d1c/0x3ade000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176177152 unmapped: 43171840 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:36.550599+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.996477127s of 12.032160759s, submitted: 20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5c3e000/0x0/0x4ffc00000, data 0x3a09d1c/0x3ade000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176193536 unmapped: 43155456 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:37.550846+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176201728 unmapped: 43147264 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:38.550994+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5c3e000/0x0/0x4ffc00000, data 0x3a09d1c/0x3ade000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176218112 unmapped: 43130880 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:39.551129+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5c3e000/0x0/0x4ffc00000, data 0x3a09d1c/0x3ade000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2077319 data_alloc: 234881024 data_used: 26058752
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176234496 unmapped: 43114496 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:40.551311+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176234496 unmapped: 43114496 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:41.551452+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5c3e000/0x0/0x4ffc00000, data 0x3a09d1c/0x3ade000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5c3e000/0x0/0x4ffc00000, data 0x3a09d1c/0x3ade000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:42.551670+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176267264 unmapped: 43081728 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b51c6800 session 0x5631b526f0e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7800 session 0x5631b5f630e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5c3e000/0x0/0x4ffc00000, data 0x3a09d1c/0x3ade000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x64df9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:43.551846+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176283648 unmapped: 43065344 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6caa000 session 0x5631b6429680
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73df400 session 0x5631b69683c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:44.551983+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176291840 unmapped: 43057152 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6caa000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1795577 data_alloc: 234881024 data_used: 13783040
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6caa000 session 0x5631b6bf25a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:45.552144+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168714240 unmapped: 50634752 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:46.552291+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168714240 unmapped: 50634752 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f7294000/0x0/0x4ffc00000, data 0x1fa3c97/0x2076000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x68ef9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:47.552451+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168714240 unmapped: 50634752 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:48.552620+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168714240 unmapped: 50634752 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:49.552794+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168714240 unmapped: 50634752 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1794429 data_alloc: 234881024 data_used: 13778944
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:50.552948+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168714240 unmapped: 50634752 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:51.553093+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168714240 unmapped: 50634752 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.516593933s of 15.824592590s, submitted: 64
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b543f800 session 0x5631b402a5a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6e72800 session 0x5631b6c070e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:52.553444+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168714240 unmapped: 50634752 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f7294000/0x0/0x4ffc00000, data 0x1fa3c97/0x2076000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x68ef9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b2b7b800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:53.553580+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 159072256 unmapped: 60276736 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b2b7b800 session 0x5631b6428d20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:54.553791+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 157859840 unmapped: 61489152 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1551079 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:55.554040+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 157859840 unmapped: 61489152 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f88e0000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x68ef9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:56.554286+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 157859840 unmapped: 61489152 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:57.554427+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 157859840 unmapped: 61489152 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 18K writes, 66K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 18K writes, 5870 syncs, 3.10 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3854 writes, 14K keys, 3854 commit groups, 1.0 writes per commit group, ingest: 16.96 MB, 0.03 MB/s
                                           Interval WAL: 3854 writes, 1517 syncs, 2.54 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:58.554602+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 157859840 unmapped: 61489152 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:27:59.554751+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 157859840 unmapped: 61489152 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1551079 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:00.555000+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 157859840 unmapped: 61489152 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:01.555158+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 157859840 unmapped: 61489152 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f88e0000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x68ef9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:02.555378+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 157859840 unmapped: 61489152 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:03.555524+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 157859840 unmapped: 61489152 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:04.555668+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 157859840 unmapped: 61489152 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1551079 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f88e0000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x68ef9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:05.555817+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 157859840 unmapped: 61489152 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f88e0000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x68ef9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:06.555989+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 157859840 unmapped: 61489152 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f88e0000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x68ef9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:07.556164+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 157859840 unmapped: 61489152 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b543f800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b543f800 session 0x5631b59d9c20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6caa000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6caa000 session 0x5631b363f860
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6e72800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6e72800 session 0x5631b592c3c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73df400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73df400 session 0x5631b639a780
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:08.556317+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b51c6800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.014631271s of 16.160272598s, submitted: 39
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 158261248 unmapped: 61087744 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b51c6800 session 0x5631b6b38f00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b543f800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b543f800 session 0x5631b6b392c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6caa000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6caa000 session 0x5631b5f62f00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6e72800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6e72800 session 0x5631b6bf3c20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73df400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73df400 session 0x5631b6b38d20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:09.556423+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 158261248 unmapped: 61087744 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f815a000/0x0/0x4ffc00000, data 0x10ddd09/0x11b2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x68ef9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616320 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:10.556576+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 158261248 unmapped: 61087744 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b520b000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b520b000 session 0x5631b526e1e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:11.556711+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 158261248 unmapped: 61087744 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f815a000/0x0/0x4ffc00000, data 0x10ddd09/0x11b2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x68ef9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:12.556884+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 158261248 unmapped: 61087744 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:13.557434+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b543f800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b543f800 session 0x5631b3ff23c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 158261248 unmapped: 61087744 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:14.557598+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6caa000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6caa000 session 0x5631b64172c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 158261248 unmapped: 61087744 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6e72800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6e72800 session 0x5631b592da40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73df400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b665cc00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1619402 data_alloc: 218103808 data_used: 1847296
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:15.557770+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 158539776 unmapped: 60809216 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f8132000/0x0/0x4ffc00000, data 0x1104d19/0x11da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x68ef9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:16.557938+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 158515200 unmapped: 60833792 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f8132000/0x0/0x4ffc00000, data 0x1104d19/0x11da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x68ef9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:17.558100+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 158547968 unmapped: 60801024 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:18.558262+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 158547968 unmapped: 60801024 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:19.558451+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 158547968 unmapped: 60801024 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1663482 data_alloc: 218103808 data_used: 8310784
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:20.558610+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 158547968 unmapped: 60801024 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f8132000/0x0/0x4ffc00000, data 0x1104d19/0x11da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x68ef9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:21.558861+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 158547968 unmapped: 60801024 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:22.559071+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 158547968 unmapped: 60801024 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:23.559208+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 158547968 unmapped: 60801024 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _send_mon_message to mon.compute-1 at v2:192.168.122.101:3300/0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:24.559334+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 158547968 unmapped: 60801024 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1663482 data_alloc: 218103808 data_used: 8310784
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f8132000/0x0/0x4ffc00000, data 0x1104d19/0x11da000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x68ef9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:25.559485+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 158547968 unmapped: 60801024 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b33625a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b520b800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b520b800 session 0x5631b33630e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.562770844s of 17.719614029s, submitted: 39
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b66f9c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b66f9c00 session 0x5631b639a780
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b639ab40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:26.559611+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6caa000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 163282944 unmapped: 56066048 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6caa000 session 0x5631b6bf30e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6e72800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6e72800 session 0x5631b592d0e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b520a000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b520a000 session 0x5631b65e01e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b5f62000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b66f9c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b66f9c00 session 0x5631b6429680
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:27.559765+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 165060608 unmapped: 54288384 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f717d000/0x0/0x4ffc00000, data 0x20afd29/0x2186000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x68ef9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:28.559920+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 164356096 unmapped: 54992896 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:29.560117+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 164356096 unmapped: 54992896 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1820172 data_alloc: 218103808 data_used: 8708096
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:30.560275+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 164356096 unmapped: 54992896 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:31.560419+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 164356096 unmapped: 54992896 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f70ea000/0x0/0x4ffc00000, data 0x214bd29/0x2222000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x68ef9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:32.560680+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6caa000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6caa000 session 0x5631b6429a40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 164364288 unmapped: 54984704 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:33.560876+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 164364288 unmapped: 54984704 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f70c9000/0x0/0x4ffc00000, data 0x216cd29/0x2243000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x68ef9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:34.561036+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 164339712 unmapped: 55009280 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1817988 data_alloc: 218103808 data_used: 8712192
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6e72800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6e72800 session 0x5631b59d94a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:35.561206+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 164339712 unmapped: 55009280 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f70c9000/0x0/0x4ffc00000, data 0x216cd29/0x2243000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x68ef9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f70c9000/0x0/0x4ffc00000, data 0x216cd29/0x2243000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x68ef9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:36.561385+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f4400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f4400 session 0x5631b331cf00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 164339712 unmapped: 55009280 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.019994736s of 10.569559097s, submitted: 134
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b63a9e00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b66f9c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6caa000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:37.561534+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 164339712 unmapped: 55009280 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:38.561727+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 163799040 unmapped: 55549952 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:39.561890+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 167108608 unmapped: 52240384 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f70c7000/0x0/0x4ffc00000, data 0x216cd5c/0x2245000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x68ef9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1868135 data_alloc: 234881024 data_used: 15470592
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:40.562073+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 167141376 unmapped: 52207616 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:41.562252+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 167141376 unmapped: 52207616 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:42.563155+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 167141376 unmapped: 52207616 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:43.563308+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 167141376 unmapped: 52207616 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:44.563498+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 167141376 unmapped: 52207616 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1868135 data_alloc: 234881024 data_used: 15470592
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:45.563676+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f70bc000/0x0/0x4ffc00000, data 0x2177d5c/0x2250000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x68ef9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 167141376 unmapped: 52207616 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:46.563936+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 167141376 unmapped: 52207616 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:47.564112+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 167141376 unmapped: 52207616 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:48.564251+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 167141376 unmapped: 52207616 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.977916718s of 12.032196999s, submitted: 11
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:49.564386+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 173375488 unmapped: 45973504 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1962157 data_alloc: 234881024 data_used: 17321984
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:50.564524+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 175349760 unmapped: 43999232 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f538f000/0x0/0x4ffc00000, data 0x2d04d5c/0x2ddd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x7a8f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:51.564733+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 175349760 unmapped: 43999232 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:52.564966+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 175349760 unmapped: 43999232 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:53.565181+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 175349760 unmapped: 43999232 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f538f000/0x0/0x4ffc00000, data 0x2d04d5c/0x2ddd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x7a8f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:54.565438+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 175349760 unmapped: 43999232 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1971209 data_alloc: 234881024 data_used: 17428480
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:55.566082+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f538f000/0x0/0x4ffc00000, data 0x2d04d5c/0x2ddd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x7a8f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 175349760 unmapped: 43999232 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:56.566304+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 175349760 unmapped: 43999232 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:57.566499+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 175349760 unmapped: 43999232 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:58.566641+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 175349760 unmapped: 43999232 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:28:59.566775+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 175349760 unmapped: 43999232 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1972721 data_alloc: 234881024 data_used: 17428480
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:00.566958+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 175349760 unmapped: 43999232 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f536c000/0x0/0x4ffc00000, data 0x2d26d5c/0x2dff000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x7a8f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.251595497s of 12.573555946s, submitted: 153
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:01.567140+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 175357952 unmapped: 43991040 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:02.567323+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 175357952 unmapped: 43991040 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:03.567541+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f536a000/0x0/0x4ffc00000, data 0x2d29d5c/0x2e02000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x7a8f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 175357952 unmapped: 43991040 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:04.567674+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 175357952 unmapped: 43991040 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1972897 data_alloc: 234881024 data_used: 17428480
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:05.567840+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 175357952 unmapped: 43991040 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:06.567995+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6e72800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f5c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 175357952 unmapped: 43991040 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f536a000/0x0/0x4ffc00000, data 0x2d29d5c/0x2e02000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x7a8f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:07.568126+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f536a000/0x0/0x4ffc00000, data 0x2d29d5c/0x2e02000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x7a8f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 175357952 unmapped: 43991040 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73df400 session 0x5631b6428b40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b665cc00 session 0x5631b6c06d20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:08.568268+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 175210496 unmapped: 44138496 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:09.568389+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 175210496 unmapped: 44138496 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1973017 data_alloc: 234881024 data_used: 17436672
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:10.568548+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5367000/0x0/0x4ffc00000, data 0x2d2cd5c/0x2e05000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x7a8f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 175210496 unmapped: 44138496 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5367000/0x0/0x4ffc00000, data 0x2d2cd5c/0x2e05000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x7a8f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:11.568702+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 175210496 unmapped: 44138496 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:12.568858+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 175218688 unmapped: 44130304 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5367000/0x0/0x4ffc00000, data 0x2d2cd5c/0x2e05000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x7a8f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:13.569018+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 175218688 unmapped: 44130304 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.715902328s of 12.738705635s, submitted: 5
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:14.569161+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 175226880 unmapped: 44122112 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1973089 data_alloc: 234881024 data_used: 17436672
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:15.569316+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 175226880 unmapped: 44122112 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:16.569493+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 175226880 unmapped: 44122112 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5367000/0x0/0x4ffc00000, data 0x2d2cd5c/0x2e05000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x7a8f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:17.569706+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 175226880 unmapped: 44122112 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:18.569859+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 175226880 unmapped: 44122112 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:19.569991+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 175226880 unmapped: 44122112 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:20.570147+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1973089 data_alloc: 234881024 data_used: 17436672
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 175235072 unmapped: 44113920 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5367000/0x0/0x4ffc00000, data 0x2d2cd5c/0x2e05000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x7a8f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:21.570259+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 175243264 unmapped: 44105728 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b66f9c00 session 0x5631b6c06f00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6caa000 session 0x5631b6968780
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:22.570421+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6caa000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 170352640 unmapped: 48996352 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6caa000 session 0x5631b639af00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5c65000/0x0/0x4ffc00000, data 0x1999d3c/0x1a70000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x7a8f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:23.570585+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 169435136 unmapped: 49913856 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:24.570711+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 169435136 unmapped: 49913856 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5c65000/0x0/0x4ffc00000, data 0x1999d19/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x7a8f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:25.570844+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1764116 data_alloc: 218103808 data_used: 8720384
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 169435136 unmapped: 49913856 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:26.570994+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 169435136 unmapped: 49913856 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:27.571178+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 169435136 unmapped: 49913856 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:28.571320+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 169435136 unmapped: 49913856 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:29.571513+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.325087547s of 15.416718483s, submitted: 41
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6e72800 session 0x5631b33683c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f5c00 session 0x5631b59d85a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5c65000/0x0/0x4ffc00000, data 0x1999d19/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x7a8f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 169443328 unmapped: 49905664 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:30.571630+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1675576 data_alloc: 218103808 data_used: 4321280
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b636f2c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 166486016 unmapped: 52862976 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:31.571747+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 166486016 unmapped: 52862976 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:32.571882+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 166486016 unmapped: 52862976 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:33.572029+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 166486016 unmapped: 52862976 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:34.572186+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f773f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x7a8f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 166486016 unmapped: 52862976 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:35.572476+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1585135 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 166486016 unmapped: 52862976 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:36.572695+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f773f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x7a8f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 166486016 unmapped: 52862976 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:37.572839+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 166486016 unmapped: 52862976 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:38.572982+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 166486016 unmapped: 52862976 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:39.573129+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 166486016 unmapped: 52862976 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:40.573311+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1585135 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.338741302s of 11.458028793s, submitted: 37
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 166486016 unmapped: 52862976 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:41.573487+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 166526976 unmapped: 52822016 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f7740000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x7a8f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:42.573781+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 166567936 unmapped: 52781056 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:43.573865+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 167723008 unmapped: 51625984 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f7740000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x7a8f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:44.573994+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 167731200 unmapped: 51617792 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:45.574137+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1584843 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 167731200 unmapped: 51617792 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:46.574280+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 167731200 unmapped: 51617792 heap: 219348992 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b665cc00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b665cc00 session 0x5631b53970e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b6b53e00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f5c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f5c00 session 0x5631b592c780
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b665cc00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b665cc00 session 0x5631b331d680
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6caa000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6caa000 session 0x5631b636fc20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:47.574404+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 167870464 unmapped: 61980672 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:48.574533+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 167870464 unmapped: 61980672 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:49.574749+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 167870464 unmapped: 61980672 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f6cb1000/0x0/0x4ffc00000, data 0x13e8c97/0x14bb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x7a8f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:50.574931+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1671846 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 167870464 unmapped: 61980672 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:51.575083+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6e72800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6e72800 session 0x5631b33705a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 167870464 unmapped: 61980672 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:52.575247+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 167870464 unmapped: 61980672 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:53.575408+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 167870464 unmapped: 61980672 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b526fe00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f6cb1000/0x0/0x4ffc00000, data 0x13e8c97/0x14bb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x7a8f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:54.575551+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 167878656 unmapped: 61972480 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:55.575727+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1671846 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f5c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f5c00 session 0x5631b526e780
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b665cc00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.577814102s of 14.764905930s, submitted: 332
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b665cc00 session 0x5631b3368f00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 167886848 unmapped: 61964288 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6caa000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:56.575878+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6e72800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 167886848 unmapped: 61964288 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f6caf000/0x0/0x4ffc00000, data 0x13e8cca/0x14bd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x7a8f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:57.576018+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 166756352 unmapped: 63094784 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:58.576174+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168165376 unmapped: 61685760 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:29:59.576422+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168165376 unmapped: 61685760 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:00.576638+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1742394 data_alloc: 234881024 data_used: 11681792
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f6caf000/0x0/0x4ffc00000, data 0x13e8cca/0x14bd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x7a8f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168165376 unmapped: 61685760 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:01.576852+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168165376 unmapped: 61685760 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:02.577096+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168165376 unmapped: 61685760 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:03.577468+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f6caf000/0x0/0x4ffc00000, data 0x13e8cca/0x14bd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x7a8f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168165376 unmapped: 61685760 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:04.577950+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f6caf000/0x0/0x4ffc00000, data 0x13e8cca/0x14bd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x7a8f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168165376 unmapped: 61685760 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:05.578092+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1742394 data_alloc: 234881024 data_used: 11681792
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168165376 unmapped: 61685760 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:06.578473+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 168165376 unmapped: 61685760 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f6caf000/0x0/0x4ffc00000, data 0x13e8cca/0x14bd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x7a8f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.641175270s of 11.688083649s, submitted: 13
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:07.578588+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 172695552 unmapped: 57155584 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b66f9c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b66f9c00 session 0x5631b3370780
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73df400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73df400 session 0x5631b64170e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b5f62960
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:08.579006+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f5c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f5c00 session 0x5631b33681e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b665cc00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b665cc00 session 0x5631b63a90e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b66f9c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b66f9c00 session 0x5631b3362000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55d2400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55d2400 session 0x5631b57712c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b5394000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f5c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f5c00 session 0x5631b363e5a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 174178304 unmapped: 55672832 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:09.579390+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 174186496 unmapped: 55664640 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:10.579572+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1932660 data_alloc: 234881024 data_used: 12902400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 174194688 unmapped: 55656448 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f44f4000/0x0/0x4ffc00000, data 0x2a02d2c/0x2ad8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:11.579888+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 174194688 unmapped: 55656448 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:12.580044+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 174194688 unmapped: 55656448 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:13.580278+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f44f4000/0x0/0x4ffc00000, data 0x2a02d2c/0x2ad8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 174194688 unmapped: 55656448 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:14.580491+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 174194688 unmapped: 55656448 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:15.580757+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1932420 data_alloc: 234881024 data_used: 12902400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f44f2000/0x0/0x4ffc00000, data 0x2a04d2c/0x2ada000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 174202880 unmapped: 55648256 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:16.580926+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 174202880 unmapped: 55648256 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:17.581158+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f44f2000/0x0/0x4ffc00000, data 0x2a04d2c/0x2ada000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b665cc00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b665cc00 session 0x5631b386ed20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 174211072 unmapped: 55640064 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:18.581288+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 174211072 unmapped: 55640064 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:19.581457+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.260471344s of 12.649647713s, submitted: 165
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 174211072 unmapped: 55640064 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b66f9c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b66f9c00 session 0x5631b592cb40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f44f2000/0x0/0x4ffc00000, data 0x2a04d2c/0x2ada000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:20.581655+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1932596 data_alloc: 234881024 data_used: 12902400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 174211072 unmapped: 55640064 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:21.581881+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55d3400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55d3400 session 0x5631b331da40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b5395a40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f44c8000/0x0/0x4ffc00000, data 0x2a2cd5f/0x2b04000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 174546944 unmapped: 55304192 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:22.582097+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f5c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b665cc00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 174546944 unmapped: 55304192 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:23.582291+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 175800320 unmapped: 54050816 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:24.582451+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 178528256 unmapped: 51322880 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:25.582677+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1991563 data_alloc: 234881024 data_used: 20652032
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 178536448 unmapped: 51314688 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:26.582970+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 178536448 unmapped: 51314688 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f44c8000/0x0/0x4ffc00000, data 0x2a2cd5f/0x2b04000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:27.583156+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 178552832 unmapped: 51298304 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:28.583408+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 178552832 unmapped: 51298304 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:29.583529+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 178552832 unmapped: 51298304 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:30.583666+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1991563 data_alloc: 234881024 data_used: 20652032
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 178552832 unmapped: 51298304 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:31.583919+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 178561024 unmapped: 51290112 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:32.584385+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f44c8000/0x0/0x4ffc00000, data 0x2a2cd5f/0x2b04000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 178561024 unmapped: 51290112 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:33.584610+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.891810417s of 13.925239563s, submitted: 9
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 187252736 unmapped: 42598400 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f44c8000/0x0/0x4ffc00000, data 0x2a2cd5f/0x2b04000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:34.584768+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186875904 unmapped: 42975232 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:35.584948+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2116005 data_alloc: 234881024 data_used: 21164032
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186236928 unmapped: 43614208 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:36.585126+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f34e9000/0x0/0x4ffc00000, data 0x3a0bd5f/0x3ae3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186236928 unmapped: 43614208 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:37.585409+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186236928 unmapped: 43614208 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:38.585648+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f34e9000/0x0/0x4ffc00000, data 0x3a0bd5f/0x3ae3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186236928 unmapped: 43614208 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:39.585824+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186245120 unmapped: 43606016 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:40.586011+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f34e7000/0x0/0x4ffc00000, data 0x3a0cd5f/0x3ae4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2115605 data_alloc: 234881024 data_used: 21168128
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186245120 unmapped: 43606016 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:41.586156+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186253312 unmapped: 43597824 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:42.586397+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186253312 unmapped: 43597824 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:43.586545+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186253312 unmapped: 43597824 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:44.586685+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186253312 unmapped: 43597824 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:45.586863+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2115773 data_alloc: 234881024 data_used: 21168128
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186253312 unmapped: 43597824 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:46.586995+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f34e7000/0x0/0x4ffc00000, data 0x3a0cd5f/0x3ae4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186253312 unmapped: 43597824 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:47.587144+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186261504 unmapped: 43589632 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:48.587309+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186261504 unmapped: 43589632 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:49.587436+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f34e7000/0x0/0x4ffc00000, data 0x3a0cd5f/0x3ae4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186269696 unmapped: 43581440 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:50.587578+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2115773 data_alloc: 234881024 data_used: 21168128
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186269696 unmapped: 43581440 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:51.587713+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186269696 unmapped: 43581440 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:52.587905+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186269696 unmapped: 43581440 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:53.588027+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f34e7000/0x0/0x4ffc00000, data 0x3a0cd5f/0x3ae4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186269696 unmapped: 43581440 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:54.588150+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186269696 unmapped: 43581440 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:55.588332+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2115773 data_alloc: 234881024 data_used: 21168128
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186277888 unmapped: 43573248 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:56.588567+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186277888 unmapped: 43573248 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:57.588794+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186286080 unmapped: 43565056 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f34e7000/0x0/0x4ffc00000, data 0x3a0cd5f/0x3ae4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:58.588950+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186286080 unmapped: 43565056 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:30:59.589133+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186294272 unmapped: 43556864 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:00.589392+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2115773 data_alloc: 234881024 data_used: 21168128
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f34e7000/0x0/0x4ffc00000, data 0x3a0cd5f/0x3ae4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186294272 unmapped: 43556864 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:01.589670+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186294272 unmapped: 43556864 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:02.589894+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186294272 unmapped: 43556864 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:03.590046+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186302464 unmapped: 43548672 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:04.590211+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186302464 unmapped: 43548672 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:05.590393+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2115773 data_alloc: 234881024 data_used: 21168128
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.817647934s of 32.097312927s, submitted: 136
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186318848 unmapped: 43532288 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:06.590535+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f34e6000/0x0/0x4ffc00000, data 0x3a0dd5f/0x3ae5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186318848 unmapped: 43532288 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:07.590810+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f34e6000/0x0/0x4ffc00000, data 0x3a0dd5f/0x3ae5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186318848 unmapped: 43532288 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:08.591025+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f34e6000/0x0/0x4ffc00000, data 0x3a0dd5f/0x3ae5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186318848 unmapped: 43532288 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:09.591170+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186335232 unmapped: 43515904 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:10.591385+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2117909 data_alloc: 234881024 data_used: 21155840
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186335232 unmapped: 43515904 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:11.591593+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186343424 unmapped: 43507712 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:12.591807+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b66f9c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73de800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186343424 unmapped: 43507712 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:13.591969+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186351616 unmapped: 43499520 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f5c00 session 0x5631b526f4a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b665cc00 session 0x5631b6bf23c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:14.592136+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f34e7000/0x0/0x4ffc00000, data 0x3a0dd5f/0x3ae5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186351616 unmapped: 43499520 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:15.592281+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2115697 data_alloc: 234881024 data_used: 21164032
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186351616 unmapped: 43499520 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:16.592496+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186351616 unmapped: 43499520 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:17.592643+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f34e7000/0x0/0x4ffc00000, data 0x3a0dd5f/0x3ae5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186351616 unmapped: 43499520 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:18.592794+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186351616 unmapped: 43499520 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:19.592956+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f34e7000/0x0/0x4ffc00000, data 0x3a0dd5f/0x3ae5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186359808 unmapped: 43491328 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:20.593089+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2115697 data_alloc: 234881024 data_used: 21164032
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186359808 unmapped: 43491328 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:21.593212+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f34e7000/0x0/0x4ffc00000, data 0x3a0dd5f/0x3ae5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186368000 unmapped: 43483136 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:22.593397+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186368000 unmapped: 43483136 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:23.593573+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186368000 unmapped: 43483136 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f34e7000/0x0/0x4ffc00000, data 0x3a0dd5f/0x3ae5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:24.593704+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186368000 unmapped: 43483136 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:25.593925+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2115697 data_alloc: 234881024 data_used: 21164032
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186368000 unmapped: 43483136 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:26.594059+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186368000 unmapped: 43483136 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:27.594218+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186384384 unmapped: 43466752 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:28.594415+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186384384 unmapped: 43466752 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:29.594531+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f34e7000/0x0/0x4ffc00000, data 0x3a0dd5f/0x3ae5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.677854538s of 23.720617294s, submitted: 22
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b66f9c00 session 0x5631b5f661e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73de800 session 0x5631b36703c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186392576 unmapped: 43458560 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:30.594684+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b33683c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1878902 data_alloc: 234881024 data_used: 12890112
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 181780480 unmapped: 48070656 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:31.594838+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 181780480 unmapped: 48070656 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:32.594997+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 181780480 unmapped: 48070656 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:33.595141+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 181780480 unmapped: 48070656 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:34.595302+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4b41000/0x0/0x4ffc00000, data 0x2165cca/0x223a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 181780480 unmapped: 48070656 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:35.595466+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4b41000/0x0/0x4ffc00000, data 0x2165cca/0x223a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1878902 data_alloc: 234881024 data_used: 12890112
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:36.595681+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 181780480 unmapped: 48070656 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6caa000 session 0x5631b337b0e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6e72800 session 0x5631b6c070e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:37.595989+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 181780480 unmapped: 48070656 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f5c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4d92000/0x0/0x4ffc00000, data 0x2165cca/0x223a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f5c00 session 0x5631b6416b40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:38.596244+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 174833664 unmapped: 55017472 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:39.596769+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 174833664 unmapped: 55017472 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:40.597867+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 174833664 unmapped: 55017472 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1620411 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:41.598417+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 174833664 unmapped: 55017472 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:42.599275+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 174833664 unmapped: 55017472 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:43.599677+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 174833664 unmapped: 55017472 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f659f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:44.600049+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 174833664 unmapped: 55017472 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:45.600284+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 174833664 unmapped: 55017472 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f659f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1620411 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:46.600704+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 174833664 unmapped: 55017472 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f659f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:47.600833+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 174833664 unmapped: 55017472 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:48.600961+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 174833664 unmapped: 55017472 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f659f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:49.601151+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 174833664 unmapped: 55017472 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:50.601420+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 174833664 unmapped: 55017472 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1620411 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:51.601595+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 174833664 unmapped: 55017472 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:52.601813+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 174833664 unmapped: 55017472 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:53.601968+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 174833664 unmapped: 55017472 heap: 229851136 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b665cc00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b665cc00 session 0x5631b6b525a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b63a8960
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f5c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f5c00 session 0x5631b526f0e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6caa000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6caa000 session 0x5631b6b39680
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6e72800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.831064224s of 24.153444290s, submitted: 122
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6e72800 session 0x5631b6c06d20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b66f9c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b66f9c00 session 0x5631b5396960
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b64161e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f5c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f5c00 session 0x5631b6b53e00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6caa000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6caa000 session 0x5631b636eb40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:54.602277+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 64798720 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f659f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:55.602531+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 64798720 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1746780 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:56.602673+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 64798720 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:57.602831+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 64798720 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5551000/0x0/0x4ffc00000, data 0x19a7cf9/0x1a7b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:58.602963+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 64798720 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:59.603118+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6e72800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6e72800 session 0x5631b6968b40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 64798720 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:00.603253+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 64798720 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1746780 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:01.603391+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 64798720 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73de800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73de800 session 0x5631b639b680
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:02.603543+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 64798720 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b5f66000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:03.603681+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f5c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f5c00 session 0x5631b5f663c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176635904 unmapped: 64765952 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f554f000/0x0/0x4ffc00000, data 0x19a7d2c/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6caa000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6e72800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:04.603821+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176635904 unmapped: 64765952 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:05.603952+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 175554560 unmapped: 65847296 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1854215 data_alloc: 234881024 data_used: 17510400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f554f000/0x0/0x4ffc00000, data 0x19a7d2c/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:06.604142+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 178962432 unmapped: 62439424 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:07.604315+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 178962432 unmapped: 62439424 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:08.604523+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 178962432 unmapped: 62439424 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f554f000/0x0/0x4ffc00000, data 0x19a7d2c/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:09.604679+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 178962432 unmapped: 62439424 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:10.604980+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 178962432 unmapped: 62439424 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1854215 data_alloc: 234881024 data_used: 17510400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:11.605199+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 178962432 unmapped: 62439424 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:12.605449+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 178962432 unmapped: 62439424 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:13.605595+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 178962432 unmapped: 62439424 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f554f000/0x0/0x4ffc00000, data 0x19a7d2c/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:14.605917+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 178962432 unmapped: 62439424 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.179763794s of 21.302459717s, submitted: 51
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:15.606102+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185974784 unmapped: 55427072 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1932627 data_alloc: 234881024 data_used: 18014208
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:16.606235+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185122816 unmapped: 56279040 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73df000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73df000 session 0x5631b386e960
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73de000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73de000 session 0x5631b69d05a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55cc400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55cc400 session 0x5631b6bf25a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b592dc20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f5c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f5c00 session 0x5631b639ad20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73de000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73de000 session 0x5631b3372000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73df000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73df000 session 0x5631b363ef00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b3364400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b3364400 session 0x5631b331c960
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b6c063c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:17.606405+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4ad0000/0x0/0x4ffc00000, data 0x2423d65/0x24fb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186703872 unmapped: 58376192 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:18.606611+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186712064 unmapped: 58368000 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:19.606843+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186712064 unmapped: 58368000 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:20.607200+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186712064 unmapped: 58368000 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2101013 data_alloc: 234881024 data_used: 18665472
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:21.607356+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186712064 unmapped: 58368000 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:22.607536+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f5c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f5c00 session 0x5631b53961e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186712064 unmapped: 58368000 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f32f9000/0x0/0x4ffc00000, data 0x37ebd9e/0x38c3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:23.607709+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186712064 unmapped: 58368000 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:24.607918+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186712064 unmapped: 58368000 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73de000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73de000 session 0x5631b3362b40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:25.608114+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f32f9000/0x0/0x4ffc00000, data 0x37ebd9e/0x38c3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186712064 unmapped: 58368000 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2096613 data_alloc: 234881024 data_used: 18669568
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:26.608252+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73df000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73df000 session 0x5631b33734a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186712064 unmapped: 58368000 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.141558647s of 11.516283989s, submitted: 178
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0800 session 0x5631b6428960
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f5c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:27.608378+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186744832 unmapped: 58335232 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:28.608579+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192094208 unmapped: 52985856 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:29.608748+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201973760 unmapped: 43106304 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:30.608867+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201973760 unmapped: 43106304 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2237908 data_alloc: 251658240 data_used: 38023168
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:31.608992+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f32f7000/0x0/0x4ffc00000, data 0x37ebdd1/0x38c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201973760 unmapped: 43106304 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f32f7000/0x0/0x4ffc00000, data 0x37ebdd1/0x38c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:32.609190+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201973760 unmapped: 43106304 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:33.609316+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201973760 unmapped: 43106304 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:34.609447+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201973760 unmapped: 43106304 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:35.609704+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201998336 unmapped: 43081728 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2238540 data_alloc: 251658240 data_used: 38027264
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:36.609914+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201998336 unmapped: 43081728 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f32f4000/0x0/0x4ffc00000, data 0x37eedd1/0x38c8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:37.610132+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201998336 unmapped: 43081728 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.325328827s of 11.357563019s, submitted: 9
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:38.610290+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 207372288 unmapped: 37707776 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:39.610406+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:40.610581+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2067000/0x0/0x4ffc00000, data 0x4a73dd1/0x4b4d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2398254 data_alloc: 251658240 data_used: 40370176
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:41.610808+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:42.610989+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:43.611122+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:44.611293+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:45.611434+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204e000/0x0/0x4ffc00000, data 0x4a94dd1/0x4b6e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2392102 data_alloc: 251658240 data_used: 40374272
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:46.611550+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:47.611692+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204e000/0x0/0x4ffc00000, data 0x4a94dd1/0x4b6e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:48.611848+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:49.611972+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:50.612102+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204e000/0x0/0x4ffc00000, data 0x4a94dd1/0x4b6e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2392774 data_alloc: 251658240 data_used: 40374272
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:51.612245+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:52.612462+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:53.612577+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:54.612700+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:55.612901+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204e000/0x0/0x4ffc00000, data 0x4a94dd1/0x4b6e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:56.613025+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2392774 data_alloc: 251658240 data_used: 40374272
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:57.613177+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.077375412s of 19.483062744s, submitted: 198
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:58.613438+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:59.613622+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:00.613758+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:01.614086+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2392782 data_alloc: 251658240 data_used: 40374272
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:02.614227+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:03.614464+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:04.614598+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:05.614719+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:06.614826+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2392782 data_alloc: 251658240 data_used: 40374272
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:07.614966+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:08.615106+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:09.615294+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.534425735s of 12.544439316s, submitted: 2
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:10.615596+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:11.615740+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2392798 data_alloc: 251658240 data_used: 40370176
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209436672 unmapped: 35643392 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:12.615944+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209436672 unmapped: 35643392 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:13.616086+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209444864 unmapped: 35635200 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:14.616665+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209453056 unmapped: 35627008 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:15.617536+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209453056 unmapped: 35627008 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:16.617697+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2392798 data_alloc: 251658240 data_used: 40370176
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209453056 unmapped: 35627008 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:17.617846+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209453056 unmapped: 35627008 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:18.618002+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209453056 unmapped: 35627008 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:19.618139+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209469440 unmapped: 35610624 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73de000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73df000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:20.618848+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209469440 unmapped: 35610624 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:21.618998+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.372642517s of 11.383035660s, submitted: 4
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6caa000 session 0x5631b526e960
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2393030 data_alloc: 251658240 data_used: 40378368
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6e72800 session 0x5631b526f0e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209469440 unmapped: 35610624 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:22.619656+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209477632 unmapped: 35602432 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:23.620179+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209477632 unmapped: 35602432 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:24.620613+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209477632 unmapped: 35602432 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _send_mon_message to mon.compute-1 at v2:192.168.122.101:3300/0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:25.620738+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209477632 unmapped: 35602432 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:26.621065+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2393030 data_alloc: 251658240 data_used: 40378368
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209477632 unmapped: 35602432 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:27.621361+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209477632 unmapped: 35602432 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:28.621587+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209477632 unmapped: 35602432 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:29.621735+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209494016 unmapped: 35586048 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:30.621906+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209494016 unmapped: 35586048 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:31.622119+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2393030 data_alloc: 251658240 data_used: 40378368
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209494016 unmapped: 35586048 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:32.622305+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209494016 unmapped: 35586048 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:33.622415+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209494016 unmapped: 35586048 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:34.622683+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209494016 unmapped: 35586048 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:35.622805+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209502208 unmapped: 35577856 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:36.622944+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2393030 data_alloc: 251658240 data_used: 40378368
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209510400 unmapped: 35569664 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:37.623061+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209510400 unmapped: 35569664 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:38.623135+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209518592 unmapped: 35561472 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:39.623256+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209518592 unmapped: 35561472 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:40.623424+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.331880569s of 19.336585999s, submitted: 1
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209534976 unmapped: 35545088 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:41.623619+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2394862 data_alloc: 251658240 data_used: 40366080
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209534976 unmapped: 35545088 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:42.623857+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209534976 unmapped: 35545088 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:43.623993+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209543168 unmapped: 35536896 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:44.624171+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209543168 unmapped: 35536896 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:45.624320+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209551360 unmapped: 35528704 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:46.624454+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2394862 data_alloc: 251658240 data_used: 40366080
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209551360 unmapped: 35528704 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:47.624634+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209559552 unmapped: 35520512 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:48.624763+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b3670d20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f5c00 session 0x5631b386f0e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209567744 unmapped: 35512320 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:49.624885+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209567744 unmapped: 35512320 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:50.625021+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209567744 unmapped: 35512320 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:51.625221+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2392498 data_alloc: 251658240 data_used: 40374272
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209567744 unmapped: 35512320 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:52.625367+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209567744 unmapped: 35512320 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:53.625474+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209584128 unmapped: 35495936 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:54.625623+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209584128 unmapped: 35495936 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:55.625746+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209584128 unmapped: 35495936 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:56.625883+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2392498 data_alloc: 251658240 data_used: 40374272
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209584128 unmapped: 35495936 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:57.626032+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209584128 unmapped: 35495936 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:58.626149+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209584128 unmapped: 35495936 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:59.626303+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209584128 unmapped: 35495936 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:00.626434+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209584128 unmapped: 35495936 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:01.626596+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2392498 data_alloc: 251658240 data_used: 40374272
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209592320 unmapped: 35487744 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:02.626753+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209592320 unmapped: 35487744 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:03.626843+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209600512 unmapped: 35479552 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.428400040s of 23.454566956s, submitted: 19
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0800 session 0x5631b3362000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0400 session 0x5631b644da40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:04.626973+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209633280 unmapped: 35446784 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:05.627130+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b5771e00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 197443584 unmapped: 47636480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:06.627307+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1965385 data_alloc: 234881024 data_used: 17801216
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 197443584 unmapped: 47636480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4697000/0x0/0x4ffc00000, data 0x2447d2c/0x251d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:07.627505+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 197443584 unmapped: 47636480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:08.627671+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 197443584 unmapped: 47636480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:09.627865+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 197443584 unmapped: 47636480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:10.628014+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 197443584 unmapped: 47636480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73de000 session 0x5631b592cb40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73df000 session 0x5631b6198960
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:11.628208+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4697000/0x0/0x4ffc00000, data 0x2447d2c/0x251d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1965385 data_alloc: 234881024 data_used: 17801216
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f5c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 197443584 unmapped: 47636480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:12.628383+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f5c00 session 0x5631b6b52960
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:13.628654+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618a000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:14.628782+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:15.628924+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:16.629059+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1660288 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:17.629176+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:18.629330+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618a000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:19.629487+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618a000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:20.629594+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618a000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:21.629753+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1660288 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:22.629943+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:23.630071+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:24.630287+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:25.630437+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:26.630561+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618a000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1660288 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618a000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:27.630693+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:28.630824+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:29.630998+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:30.631167+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b6968960
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73de000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:31.631294+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73de000 session 0x5631b592de00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73df000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73df000 session 0x5631b6921a40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0400 session 0x5631b3372d20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6caa000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.032833099s of 27.336183548s, submitted: 106
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6caa000 session 0x5631b6bf2d20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1749474 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b6b52960
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73de000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73de000 session 0x5631b3362000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73df000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73df000 session 0x5631b526e960
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0400 session 0x5631b33734a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185155584 unmapped: 59924480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:32.631453+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5782000/0x0/0x4ffc00000, data 0x1365d09/0x143a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185155584 unmapped: 59924480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:33.631619+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185155584 unmapped: 59924480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:34.631723+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185155584 unmapped: 59924480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:35.631831+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185155584 unmapped: 59924480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:36.632054+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1749490 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5782000/0x0/0x4ffc00000, data 0x1365d09/0x143a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185155584 unmapped: 59924480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:37.632172+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185155584 unmapped: 59924480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:38.632333+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6e72800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6e72800 session 0x5631b53961e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185155584 unmapped: 59924480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:39.632528+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5782000/0x0/0x4ffc00000, data 0x1365d09/0x143a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185155584 unmapped: 59924480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:40.632655+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b331c960
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185155584 unmapped: 59924480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:41.632774+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1749490 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5782000/0x0/0x4ffc00000, data 0x1365d09/0x143a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185155584 unmapped: 59924480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:42.632989+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73de000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73de000 session 0x5631b363ef00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73df000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.996141434s of 11.116294861s, submitted: 52
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73df000 session 0x5631b3372000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185073664 unmapped: 60006400 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:43.633139+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185090048 unmapped: 59990016 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:44.633274+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 183861248 unmapped: 61218816 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:45.633398+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 183861248 unmapped: 61218816 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:46.633514+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1815118 data_alloc: 218103808 data_used: 10960896
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5780000/0x0/0x4ffc00000, data 0x1365d3c/0x143c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 183861248 unmapped: 61218816 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:47.633660+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5780000/0x0/0x4ffc00000, data 0x1365d3c/0x143c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 183861248 unmapped: 61218816 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:48.633857+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 183861248 unmapped: 61218816 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:49.634002+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 183861248 unmapped: 61218816 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:50.634126+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 183861248 unmapped: 61218816 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:51.634215+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1815118 data_alloc: 218103808 data_used: 10960896
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 183861248 unmapped: 61218816 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:52.634396+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 183861248 unmapped: 61218816 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5780000/0x0/0x4ffc00000, data 0x1365d3c/0x143c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:53.634533+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 183861248 unmapped: 61218816 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:54.634671+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5780000/0x0/0x4ffc00000, data 0x1365d3c/0x143c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5780000/0x0/0x4ffc00000, data 0x1365d3c/0x143c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.228917122s of 12.266461372s, submitted: 13
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194617344 unmapped: 50462720 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:55.634801+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 190840832 unmapped: 54239232 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:56.634981+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1922448 data_alloc: 234881024 data_used: 11800576
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0000 session 0x5631b6b530e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb1400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb1400 session 0x5631b63a8780
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b363eb40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73de000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73de000 session 0x5631b6968960
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 190906368 unmapped: 54173696 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73df000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:57.635098+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73df000 session 0x5631b3670d20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0000 session 0x5631b63a8000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192290816 unmapped: 52789248 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f1000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f1000 session 0x5631b5770000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:58.635236+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b5f62d20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73de000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73de000 session 0x5631b526fe00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f40f5000/0x0/0x4ffc00000, data 0x29ecd75/0x2ac5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192241664 unmapped: 52838400 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:59.635403+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f40a4000/0x0/0x4ffc00000, data 0x2a3fdae/0x2b18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192241664 unmapped: 52838400 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:00.635516+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192241664 unmapped: 52838400 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:01.635641+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2024138 data_alloc: 234881024 data_used: 12627968
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192241664 unmapped: 52838400 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:02.635852+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192241664 unmapped: 52838400 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:03.636016+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192249856 unmapped: 52830208 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73df000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:04.636147+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73df000 session 0x5631b592dc20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192249856 unmapped: 52830208 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:05.636566+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4080000/0x0/0x4ffc00000, data 0x2a63dae/0x2b3c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192249856 unmapped: 52830208 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:06.636680+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2024850 data_alloc: 234881024 data_used: 12627968
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0000 session 0x5631b33692c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192249856 unmapped: 52830208 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:07.636785+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4080000/0x0/0x4ffc00000, data 0x2a63dae/0x2b3c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192249856 unmapped: 52830208 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:08.636997+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b58d7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b58d7000 session 0x5631b65e0960
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.912911415s of 13.777595520s, submitted: 216
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b636e780
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192258048 unmapped: 52822016 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:09.637110+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f407f000/0x0/0x4ffc00000, data 0x2a63dbe/0x2b3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73de000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192258048 unmapped: 52822016 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:10.637225+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73df000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f407f000/0x0/0x4ffc00000, data 0x2a63dbe/0x2b3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192266240 unmapped: 52813824 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:11.637425+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2068033 data_alloc: 234881024 data_used: 18526208
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193249280 unmapped: 51830784 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:12.637591+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193249280 unmapped: 51830784 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:13.637703+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193249280 unmapped: 51830784 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:14.637808+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193249280 unmapped: 51830784 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:15.637939+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193249280 unmapped: 51830784 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:16.638089+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f407f000/0x0/0x4ffc00000, data 0x2a63dbe/0x2b3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2082697 data_alloc: 234881024 data_used: 20631552
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f407f000/0x0/0x4ffc00000, data 0x2a63dbe/0x2b3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:17.638265+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193249280 unmapped: 51830784 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:18.638407+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193249280 unmapped: 51830784 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:19.638552+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193249280 unmapped: 51830784 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:20.638696+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193249280 unmapped: 51830784 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:21.638820+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193249280 unmapped: 51830784 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.899151802s of 12.938499451s, submitted: 11
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2203335 data_alloc: 234881024 data_used: 20676608
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:22.638974+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199327744 unmapped: 45752320 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f30eb000/0x0/0x4ffc00000, data 0x39efdbe/0x3ac9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:23.639186+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199262208 unmapped: 45817856 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:24.639325+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199262208 unmapped: 45817856 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:25.639485+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199262208 unmapped: 45817856 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:26.639660+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199262208 unmapped: 45817856 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2207655 data_alloc: 234881024 data_used: 20905984
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:27.639788+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199262208 unmapped: 45817856 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3051000/0x0/0x4ffc00000, data 0x3a91dbe/0x3b6b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:28.639956+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199262208 unmapped: 45817856 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:29.640094+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199278592 unmapped: 45801472 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:30.640205+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199278592 unmapped: 45801472 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:31.640408+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199278592 unmapped: 45801472 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f302b000/0x0/0x4ffc00000, data 0x3ab6dbe/0x3b90000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2209631 data_alloc: 234881024 data_used: 20905984
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:32.640577+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199278592 unmapped: 45801472 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f302b000/0x0/0x4ffc00000, data 0x3ab6dbe/0x3b90000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:33.640657+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199278592 unmapped: 45801472 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.130858421s of 12.511350632s, submitted: 149
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:34.640810+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199278592 unmapped: 45801472 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:35.640896+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199278592 unmapped: 45801472 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:36.641009+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199278592 unmapped: 45801472 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2208935 data_alloc: 234881024 data_used: 20905984
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:37.641134+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199278592 unmapped: 45801472 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3029000/0x0/0x4ffc00000, data 0x3ab9dbe/0x3b93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:38.641244+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199278592 unmapped: 45801472 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:39.641369+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199278592 unmapped: 45801472 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3029000/0x0/0x4ffc00000, data 0x3ab9dbe/0x3b93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:40.641491+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199278592 unmapped: 45801472 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:41.641646+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199278592 unmapped: 45801472 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2208991 data_alloc: 234881024 data_used: 20905984
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:42.641826+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199278592 unmapped: 45801472 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:43.642022+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199278592 unmapped: 45801472 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.078755379s of 10.091617584s, submitted: 4
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:44.642289+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199286784 unmapped: 45793280 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:45.642422+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199286784 unmapped: 45793280 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3026000/0x0/0x4ffc00000, data 0x3abcdbe/0x3b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:46.642621+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199286784 unmapped: 45793280 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2211159 data_alloc: 234881024 data_used: 20893696
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:47.642781+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199286784 unmapped: 45793280 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:48.642917+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199286784 unmapped: 45793280 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3026000/0x0/0x4ffc00000, data 0x3abcdbe/0x3b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:49.643076+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199286784 unmapped: 45793280 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:50.643213+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199286784 unmapped: 45793280 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:51.643378+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199286784 unmapped: 45793280 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2211159 data_alloc: 234881024 data_used: 20893696
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:52.643532+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199294976 unmapped: 45785088 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:53.643647+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199303168 unmapped: 45776896 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:54.644137+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3026000/0x0/0x4ffc00000, data 0x3abcdbe/0x3b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199303168 unmapped: 45776896 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:55.644916+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199303168 unmapped: 45776896 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:56.645106+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199303168 unmapped: 45776896 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2209311 data_alloc: 234881024 data_used: 20893696
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:57.645366+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.115762711s of 13.137649536s, submitted: 17
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199311360 unmapped: 45768704 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:58.645644+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3026000/0x0/0x4ffc00000, data 0x3abcdbe/0x3b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199311360 unmapped: 45768704 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73de000 session 0x5631b636ef00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73df000 session 0x5631b6bf30e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:59.646013+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199319552 unmapped: 45760512 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:00.646281+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199319552 unmapped: 45760512 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:01.646442+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199319552 unmapped: 45760512 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2208947 data_alloc: 234881024 data_used: 20901888
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:02.646655+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199327744 unmapped: 45752320 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:03.646890+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199327744 unmapped: 45752320 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:04.647193+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3026000/0x0/0x4ffc00000, data 0x3abcdbe/0x3b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199327744 unmapped: 45752320 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:05.647534+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199327744 unmapped: 45752320 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3026000/0x0/0x4ffc00000, data 0x3abcdbe/0x3b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:06.647790+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199327744 unmapped: 45752320 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2208947 data_alloc: 234881024 data_used: 20901888
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:07.647953+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199327744 unmapped: 45752320 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:08.648378+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199344128 unmapped: 45735936 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.913313866s of 11.937891960s, submitted: 6
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:09.648594+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199344128 unmapped: 45735936 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3026000/0x0/0x4ffc00000, data 0x3abcdbe/0x3b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:10.649017+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199352320 unmapped: 45727744 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:11.649273+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199352320 unmapped: 45727744 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2209111 data_alloc: 234881024 data_used: 20901888
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:12.649485+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199352320 unmapped: 45727744 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:13.649778+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199352320 unmapped: 45727744 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:14.649932+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199352320 unmapped: 45727744 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:15.650097+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199352320 unmapped: 45727744 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3026000/0x0/0x4ffc00000, data 0x3abcdbe/0x3b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:16.650249+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199368704 unmapped: 45711360 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2209111 data_alloc: 234881024 data_used: 20901888
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3026000/0x0/0x4ffc00000, data 0x3abcdbe/0x3b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:17.650585+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199368704 unmapped: 45711360 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:18.650855+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199376896 unmapped: 45703168 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3026000/0x0/0x4ffc00000, data 0x3abcdbe/0x3b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:19.651079+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199376896 unmapped: 45703168 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:20.651289+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199376896 unmapped: 45703168 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:21.651517+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199376896 unmapped: 45703168 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2209111 data_alloc: 234881024 data_used: 20901888
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:22.651752+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3026000/0x0/0x4ffc00000, data 0x3abcdbe/0x3b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199376896 unmapped: 45703168 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:23.651894+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199376896 unmapped: 45703168 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:24.652139+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199385088 unmapped: 45694976 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:25.652373+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199385088 unmapped: 45694976 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:26.652558+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3026000/0x0/0x4ffc00000, data 0x3abcdbe/0x3b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.307643890s of 17.318552017s, submitted: 3
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199401472 unmapped: 45678592 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b665d800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2209679 data_alloc: 234881024 data_used: 20910080
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:27.652673+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199401472 unmapped: 45678592 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:28.652868+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0400 session 0x5631b386f0e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0800 session 0x5631b636fc20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199417856 unmapped: 45662208 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:29.653772+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199417856 unmapped: 45662208 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:30.654211+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199417856 unmapped: 45662208 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:31.654871+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199417856 unmapped: 45662208 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2209251 data_alloc: 234881024 data_used: 20910080
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:32.656097+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3026000/0x0/0x4ffc00000, data 0x3abcdbe/0x3b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199426048 unmapped: 45654016 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:33.657383+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199426048 unmapped: 45654016 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:34.657790+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199434240 unmapped: 45645824 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:35.658867+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199434240 unmapped: 45645824 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:36.659777+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199434240 unmapped: 45645824 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2209251 data_alloc: 234881024 data_used: 20910080
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:37.660524+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3026000/0x0/0x4ffc00000, data 0x3abcdbe/0x3b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199434240 unmapped: 45645824 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:38.661222+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199434240 unmapped: 45645824 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:39.661848+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199434240 unmapped: 45645824 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:40.662402+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199434240 unmapped: 45645824 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3026000/0x0/0x4ffc00000, data 0x3abcdbe/0x3b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:41.662734+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199434240 unmapped: 45645824 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2209251 data_alloc: 234881024 data_used: 20910080
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:42.663092+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199442432 unmapped: 45637632 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.144123077s of 16.171749115s, submitted: 8
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0000 session 0x5631b402ab40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199400 session 0x5631b3379680
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3026000/0x0/0x4ffc00000, data 0x3abcdbe/0x3b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:43.663240+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3026000/0x0/0x4ffc00000, data 0x3abcdbe/0x3b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199450624 unmapped: 45629440 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b3369e00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:44.664028+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192577536 unmapped: 52502528 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:45.664775+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192577536 unmapped: 52502528 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4916000/0x0/0x4ffc00000, data 0x2168d3c/0x223f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:46.664949+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192577536 unmapped: 52502528 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1963754 data_alloc: 234881024 data_used: 12619776
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:47.665320+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192577536 unmapped: 52502528 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:48.665881+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192577536 unmapped: 52502528 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:49.666461+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192577536 unmapped: 52502528 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:50.666835+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4916000/0x0/0x4ffc00000, data 0x2168d3c/0x223f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192577536 unmapped: 52502528 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:51.667278+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192577536 unmapped: 52502528 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1963754 data_alloc: 234881024 data_used: 12619776
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:52.667566+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199800 session 0x5631b6c074a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b665d800 session 0x5631b69d0f00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192577536 unmapped: 52502528 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:53.667852+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.496988297s of 10.720852852s, submitted: 70
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185229312 unmapped: 59850752 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b6428780
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:54.668009+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:55.668142+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:56.668274+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1702789 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:57.668406+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:58.668517+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:59.668774+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:00.668918+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:01.669116+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1702789 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:02.669434+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:03.669553+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:04.669757+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:05.669955+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:06.670085+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1702789 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:07.670216+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:08.670454+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:09.670810+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:10.671000+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:11.671202+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1702789 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:12.671414+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:13.671583+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:14.671743+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:15.671896+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:16.672131+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1702789 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:17.672279+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:18.672541+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:19.672740+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:20.672899+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:21.673073+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1702789 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:22.673279+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:23.673507+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:24.673720+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:25.673922+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:26.674136+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1702789 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:27.674320+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:28.674544+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:29.674753+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:30.674911+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:31.675055+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1702789 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:32.675203+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:33.675369+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:34.675496+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:35.675673+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:36.675858+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:37.676034+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1702789 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:38.676223+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:39.676357+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 45.602958679s of 45.905929565s, submitted: 76
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184254464 unmapped: 60825600 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199400 session 0x5631b636e5a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199800 session 0x5631b33692c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0000 session 0x5631b526fe00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73de000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73de000 session 0x5631b5f62d20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:40.676539+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b5770000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184254464 unmapped: 60825600 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:41.676791+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f584c000/0x0/0x4ffc00000, data 0x129dc97/0x1370000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184254464 unmapped: 60825600 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:42.677744+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1781801 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184254464 unmapped: 60825600 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:43.677900+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184254464 unmapped: 60825600 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199400 session 0x5631b63a8000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:44.678106+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184254464 unmapped: 60825600 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:45.678300+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184254464 unmapped: 60825600 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:46.678491+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199800 session 0x5631b3670d20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184254464 unmapped: 60825600 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f584c000/0x0/0x4ffc00000, data 0x129dc97/0x1370000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:47.678697+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1781801 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0000 session 0x5631b6968960
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73df000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184139776 unmapped: 60940288 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:48.678848+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73df000 session 0x5631b363eb40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184467456 unmapped: 60612608 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:49.678971+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184467456 unmapped: 60612608 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:50.679208+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185196544 unmapped: 59883520 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5825000/0x0/0x4ffc00000, data 0x12c4c97/0x1397000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:51.679493+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5825000/0x0/0x4ffc00000, data 0x12c4c97/0x1397000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185196544 unmapped: 59883520 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:52.679722+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1848846 data_alloc: 234881024 data_used: 11169792
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185196544 unmapped: 59883520 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:53.679889+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5825000/0x0/0x4ffc00000, data 0x12c4c97/0x1397000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185196544 unmapped: 59883520 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:54.680053+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185196544 unmapped: 59883520 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5825000/0x0/0x4ffc00000, data 0x12c4c97/0x1397000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:55.680224+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185196544 unmapped: 59883520 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5825000/0x0/0x4ffc00000, data 0x12c4c97/0x1397000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:56.680435+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185196544 unmapped: 59883520 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:57.680826+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1848846 data_alloc: 234881024 data_used: 11169792
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 22K writes, 81K keys, 22K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 22K writes, 7463 syncs, 2.98 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4034 writes, 15K keys, 4034 commit groups, 1.0 writes per commit group, ingest: 17.63 MB, 0.03 MB/s
                                           Interval WAL: 4034 writes, 1593 syncs, 2.53 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185196544 unmapped: 59883520 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:58.681005+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5825000/0x0/0x4ffc00000, data 0x12c4c97/0x1397000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185196544 unmapped: 59883520 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:59.681164+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185196544 unmapped: 59883520 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.734361649s of 20.849309921s, submitted: 29
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:00.681361+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192380928 unmapped: 52699136 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:01.681453+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194084864 unmapped: 50995200 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199800 session 0x5631b65e14a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0000 session 0x5631b35505a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0400 session 0x5631b6b52960
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:02.681572+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1970704 data_alloc: 234881024 data_used: 13291520
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5586c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5586c00 session 0x5631b65e0960
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5586400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4b40000/0x0/0x4ffc00000, data 0x1f9bc97/0x206e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5586400 session 0x5631b6b53680
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199800 session 0x5631b639ba40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194519040 unmapped: 54239232 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5586c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5586c00 session 0x5631b331c000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0000 session 0x5631b592d860
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:03.681692+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0400 session 0x5631b6c061e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194527232 unmapped: 54231040 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:04.681839+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194527232 unmapped: 54231040 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:05.682016+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f7800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f7800 session 0x5631b69d03c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194527232 unmapped: 54231040 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:06.682191+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194527232 unmapped: 54231040 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:07.682436+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2128715 data_alloc: 234881024 data_used: 13291520
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3644000/0x0/0x4ffc00000, data 0x34a5c97/0x3578000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194650112 unmapped: 54108160 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:08.682604+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199800 session 0x5631b5f65c20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194650112 unmapped: 54108160 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:09.682784+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5586c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5586c00 session 0x5631b64283c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0000 session 0x5631b38703c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194658304 unmapped: 54099968 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:10.683039+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.757110596s of 10.213796616s, submitted: 184
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fa000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194666496 unmapped: 54091776 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:11.683226+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 195723264 unmapped: 53035008 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:12.683429+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2205903 data_alloc: 234881024 data_used: 24137728
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205684736 unmapped: 43073536 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:13.683640+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3623000/0x0/0x4ffc00000, data 0x34c4cca/0x3599000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205750272 unmapped: 43008000 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:14.683812+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205750272 unmapped: 43008000 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:15.683975+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205750272 unmapped: 43008000 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:16.684108+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205750272 unmapped: 43008000 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:17.684253+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2269247 data_alloc: 251658240 data_used: 30818304
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205750272 unmapped: 43008000 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:18.684407+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3619000/0x0/0x4ffc00000, data 0x34cecca/0x35a3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205750272 unmapped: 43008000 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:19.684559+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205750272 unmapped: 43008000 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:20.684695+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.221982002s of 10.237133026s, submitted: 4
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205758464 unmapped: 42999808 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:21.684890+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205758464 unmapped: 42999808 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:22.685073+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2294221 data_alloc: 251658240 data_used: 30855168
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2481000/0x0/0x4ffc00000, data 0x4658cca/0x472d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212967424 unmapped: 35790848 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:23.685204+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:24.685442+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:25.685625+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:26.686473+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _send_mon_message to mon.compute-1 at v2:192.168.122.101:3300/0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:27.686665+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2419059 data_alloc: 251658240 data_used: 32759808
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2463000/0x0/0x4ffc00000, data 0x467bcca/0x4750000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:28.686869+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:29.687026+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2463000/0x0/0x4ffc00000, data 0x467bcca/0x4750000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:30.687165+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:31.687297+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:32.687515+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2419075 data_alloc: 251658240 data_used: 32759808
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.701073647s of 12.032704353s, submitted: 164
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:33.687696+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f246a000/0x0/0x4ffc00000, data 0x467ccca/0x4751000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:34.687897+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:35.688011+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f246a000/0x0/0x4ffc00000, data 0x467ccca/0x4751000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:36.688164+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:37.688317+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2413787 data_alloc: 251658240 data_used: 32759808
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:38.688489+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:39.688696+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:40.688856+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f246a000/0x0/0x4ffc00000, data 0x467ccca/0x4751000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:41.689123+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:42.689456+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2413787 data_alloc: 251658240 data_used: 32759808
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:43.689643+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:44.689840+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:45.689995+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f246a000/0x0/0x4ffc00000, data 0x467ccca/0x4751000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:46.690163+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:47.690462+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2413787 data_alloc: 251658240 data_used: 32759808
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.345106125s of 15.359903336s, submitted: 5
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212434944 unmapped: 36323328 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:48.690628+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212434944 unmapped: 36323328 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:49.690783+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212434944 unmapped: 36323328 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:50.690974+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212434944 unmapped: 36323328 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:51.691179+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f246b000/0x0/0x4ffc00000, data 0x467ccca/0x4751000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212434944 unmapped: 36323328 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:52.691393+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2415451 data_alloc: 251658240 data_used: 32747520
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212443136 unmapped: 36315136 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:53.691548+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212443136 unmapped: 36315136 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:54.691705+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212451328 unmapped: 36306944 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:55.691869+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2469000/0x0/0x4ffc00000, data 0x467dcca/0x4752000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212451328 unmapped: 36306944 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:56.692041+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fa400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212451328 unmapped: 36306944 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fa800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:57.692205+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2413367 data_alloc: 251658240 data_used: 32747520
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212459520 unmapped: 36298752 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:58.692418+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b3372000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.486456871s of 10.526856422s, submitted: 18
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199400 session 0x5631b592c3c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f246a000/0x0/0x4ffc00000, data 0x467dcca/0x4752000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212467712 unmapped: 36290560 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:59.692618+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212467712 unmapped: 36290560 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:00.692782+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212467712 unmapped: 36290560 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:01.692998+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212467712 unmapped: 36290560 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:02.693232+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2413235 data_alloc: 251658240 data_used: 32747520
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f246a000/0x0/0x4ffc00000, data 0x467dcca/0x4752000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212467712 unmapped: 36290560 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:03.693436+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212475904 unmapped: 36282368 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:04.693645+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212475904 unmapped: 36282368 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:05.693787+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212484096 unmapped: 36274176 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:06.693926+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f246a000/0x0/0x4ffc00000, data 0x467dcca/0x4752000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212484096 unmapped: 36274176 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:07.694078+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f246a000/0x0/0x4ffc00000, data 0x467dcca/0x4752000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2413235 data_alloc: 251658240 data_used: 32747520
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f246a000/0x0/0x4ffc00000, data 0x467dcca/0x4752000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212500480 unmapped: 36257792 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:08.694205+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212500480 unmapped: 36257792 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:09.694427+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:10.694588+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212500480 unmapped: 36257792 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f246a000/0x0/0x4ffc00000, data 0x467dcca/0x4752000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:11.694726+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212500480 unmapped: 36257792 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.050309181s of 13.052925110s, submitted: 1
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0400 session 0x5631b6c07a40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fa000 session 0x5631b592cb40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:12.694869+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212508672 unmapped: 36249600 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1994465 data_alloc: 218103808 data_used: 10506240
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b639a780
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:13.695000+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200032256 unmapped: 48726016 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:14.695155+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200032256 unmapped: 48726016 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:15.695383+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200032256 unmapped: 48726016 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a89000/0x0/0x4ffc00000, data 0x205fc97/0x2132000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:16.695585+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200032256 unmapped: 48726016 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:17.695723+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200032256 unmapped: 48726016 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1992476 data_alloc: 218103808 data_used: 10502144
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:18.695861+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200032256 unmapped: 48726016 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:19.696038+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200032256 unmapped: 48726016 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:20.696186+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a89000/0x0/0x4ffc00000, data 0x205fc97/0x2132000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200032256 unmapped: 48726016 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:21.696472+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200032256 unmapped: 48726016 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:22.696726+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200032256 unmapped: 48726016 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a89000/0x0/0x4ffc00000, data 0x205fc97/0x2132000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1992476 data_alloc: 218103808 data_used: 10502144
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fa400 session 0x5631b6428d20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.049098969s of 11.203976631s, submitted: 69
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fa800 session 0x5631b5394960
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:23.696861+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200032256 unmapped: 48726016 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b6417680
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:24.697080+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193298432 unmapped: 55459840 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:25.697287+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193298432 unmapped: 55459840 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:26.697466+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193298432 unmapped: 55459840 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:27.697833+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f6190000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193298432 unmapped: 55459840 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1742138 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:28.698098+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193298432 unmapped: 55459840 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:29.698308+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193298432 unmapped: 55459840 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:30.698515+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193298432 unmapped: 55459840 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:31.698732+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193298432 unmapped: 55459840 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f6190000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:32.698939+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193298432 unmapped: 55459840 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1742138 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:33.699073+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193298432 unmapped: 55459840 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73df800 session 0x5631b5f641e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:34.699329+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f6190000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193298432 unmapped: 55459840 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:35.699538+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193298432 unmapped: 55459840 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:36.699668+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193298432 unmapped: 55459840 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:37.699838+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193298432 unmapped: 55459840 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1742138 data_alloc: 218103808 data_used: 1839104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f6190000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:38.699988+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193298432 unmapped: 55459840 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:39.700121+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193298432 unmapped: 55459840 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:40.700383+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.434934616s of 17.548748016s, submitted: 42
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194355200 unmapped: 54403072 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:41.700548+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194428928 unmapped: 54329344 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:42.700787+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194428928 unmapped: 54329344 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1744114 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:43.700919+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194486272 unmapped: 54272000 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4be0000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa5ef9c5), peers [1] op hist [0,0,1])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:44.701170+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194584576 unmapped: 54173696 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:45.701439+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194600960 unmapped: 54157312 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:46.701640+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194600960 unmapped: 54157312 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5c20000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:47.701790+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194600960 unmapped: 54157312 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1744114 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:48.702027+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194600960 unmapped: 54157312 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:49.702227+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194600960 unmapped: 54157312 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:50.702383+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194600960 unmapped: 54157312 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:51.702562+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194600960 unmapped: 54157312 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5c20000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:52.702885+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194600960 unmapped: 54157312 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1744114 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:53.703060+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194600960 unmapped: 54157312 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5c20000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:54.703251+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194600960 unmapped: 54157312 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:55.703425+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194600960 unmapped: 54157312 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:56.703626+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194600960 unmapped: 54157312 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:57.703786+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194600960 unmapped: 54157312 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1744114 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:58.703995+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194600960 unmapped: 54157312 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5c20000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:59.704225+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194600960 unmapped: 54157312 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:00.704432+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194600960 unmapped: 54157312 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:01.704626+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194600960 unmapped: 54157312 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:02.704868+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194609152 unmapped: 54149120 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1744114 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5c20000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:03.705009+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194609152 unmapped: 54149120 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5c20000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5c20000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:04.705140+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194609152 unmapped: 54149120 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:05.705275+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194609152 unmapped: 54149120 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:06.705452+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194609152 unmapped: 54149120 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5c20000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:07.705688+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194609152 unmapped: 54149120 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1744114 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:08.705813+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194609152 unmapped: 54149120 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:09.705987+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194609152 unmapped: 54149120 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:10.706117+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194617344 unmapped: 54140928 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:11.706301+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194617344 unmapped: 54140928 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5c20000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:12.706510+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194625536 unmapped: 54132736 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1744114 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:13.706665+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194625536 unmapped: 54132736 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:14.706818+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194625536 unmapped: 54132736 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fa000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 32.943851471s of 34.339908600s, submitted: 376
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fa000 session 0x5631b644d680
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fa400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fa400 session 0x5631b5771e00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199400 session 0x5631b386eb40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199800 session 0x5631b5f66f00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b6416f00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:15.706935+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 58302464 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:16.707049+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194142208 unmapped: 58294272 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:17.707151+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4b99000/0x0/0x4ffc00000, data 0x19e0c97/0x1ab3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194142208 unmapped: 58294272 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1872710 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:18.707283+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199400 session 0x5631b64294a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 58286080 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:19.707509+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 58286080 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:20.707648+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 58286080 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fa000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fa000 session 0x5631b3372d20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:21.707950+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 58286080 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4b99000/0x0/0x4ffc00000, data 0x19e0c97/0x1ab3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:22.708123+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fa400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fa400 session 0x5631b63a9c20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 58286080 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5586c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5586c00 session 0x5631b33734a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1873012 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:23.708243+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 58286080 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:24.708390+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193929216 unmapped: 58507264 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:25.708504+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 197828608 unmapped: 54607872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4b98000/0x0/0x4ffc00000, data 0x19e0ca7/0x1ab4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4b98000/0x0/0x4ffc00000, data 0x19e0ca7/0x1ab4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:26.708690+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 197836800 unmapped: 54599680 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:27.708833+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 197836800 unmapped: 54599680 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1981844 data_alloc: 234881024 data_used: 17993728
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:28.708988+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 197836800 unmapped: 54599680 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:29.709119+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 197836800 unmapped: 54599680 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4b98000/0x0/0x4ffc00000, data 0x19e0ca7/0x1ab4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:30.709275+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 197836800 unmapped: 54599680 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:31.709434+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 197836800 unmapped: 54599680 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:32.709615+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4b98000/0x0/0x4ffc00000, data 0x19e0ca7/0x1ab4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 197844992 unmapped: 54591488 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1981844 data_alloc: 234881024 data_used: 17993728
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:33.709752+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 197844992 unmapped: 54591488 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.195119858s of 19.356214523s, submitted: 25
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:34.709895+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204718080 unmapped: 47718400 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:35.710033+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204718080 unmapped: 47718400 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:36.710168+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205357056 unmapped: 47079424 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:37.710311+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fa000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212074496 unmapped: 40361984 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fa000 session 0x5631b69d0f00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: mgrc ms_handle_reset ms_handle_reset con 0x5631b66f8c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2285351161
Sep 30 18:56:35 compute-0 ceph-osd[82241]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2285351161,v1:192.168.122.100:6801/2285351161]
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: get_auth_request con 0x5631bb8fa800 auth_method 0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: mgrc handle_mgr_configure stats_period=5
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fa400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fa400 session 0x5631b636f680
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0000 session 0x5631b526e780
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fac00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fac00 session 0x5631b6b381e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fb000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fb000 session 0x5631b69d0b40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2132737 data_alloc: 234881024 data_used: 18255872
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:38.711623+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f398a000/0x0/0x4ffc00000, data 0x2be4d09/0x2cb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 206659584 unmapped: 45776896 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b7848c00 session 0x5631b5f62780
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b520c800 session 0x5631b644cd20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73dfc00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:39.711798+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 206659584 unmapped: 45776896 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:40.711964+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 206659584 unmapped: 45776896 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f398a000/0x0/0x4ffc00000, data 0x2be4d09/0x2cb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:41.712182+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 206659584 unmapped: 45776896 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:42.712638+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 206659584 unmapped: 45776896 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2125649 data_alloc: 234881024 data_used: 18255872
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:43.712828+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 206659584 unmapped: 45776896 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b520c800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b520c800 session 0x5631b592c000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:44.713052+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 206659584 unmapped: 45776896 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:45.713178+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 206659584 unmapped: 45776896 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:46.713290+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 206659584 unmapped: 45776896 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3990000/0x0/0x4ffc00000, data 0x2be7d09/0x2cbc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fa400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fa400 session 0x5631b64170e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:47.713442+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 206667776 unmapped: 45768704 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2125649 data_alloc: 234881024 data_used: 18255872
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:48.713588+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fac00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fac00 session 0x5631b337b0e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fb400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.909849167s of 14.555604935s, submitted: 146
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fb400 session 0x5631b64283c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 206684160 unmapped: 45752320 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fb800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fbc00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:49.713739+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 206684160 unmapped: 45752320 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f398e000/0x0/0x4ffc00000, data 0x2be7d3c/0x2cbe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:50.713984+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 206692352 unmapped: 45744128 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:51.714105+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208068608 unmapped: 44367872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f398e000/0x0/0x4ffc00000, data 0x2be7d3c/0x2cbe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:52.714256+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208068608 unmapped: 44367872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f398e000/0x0/0x4ffc00000, data 0x2be7d3c/0x2cbe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2172193 data_alloc: 234881024 data_used: 24395776
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:53.714374+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208068608 unmapped: 44367872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:54.714544+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208068608 unmapped: 44367872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:55.714675+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208076800 unmapped: 44359680 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:56.714782+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208076800 unmapped: 44359680 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:57.715104+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208076800 unmapped: 44359680 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2172193 data_alloc: 234881024 data_used: 24395776
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:58.715288+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f398e000/0x0/0x4ffc00000, data 0x2be7d3c/0x2cbe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 44351488 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:59.715422+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 44351488 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.638042450s of 11.702260971s, submitted: 12
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:00.715640+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213581824 unmapped: 38854656 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:01.715754+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213827584 unmapped: 38608896 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1cdd000/0x0/0x4ffc00000, data 0x36f2d3c/0x37c9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:02.715875+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213827584 unmapped: 38608896 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2278387 data_alloc: 234881024 data_used: 25186304
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:03.716028+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1caf000/0x0/0x4ffc00000, data 0x3717d3c/0x37ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213835776 unmapped: 38600704 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:04.716174+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213835776 unmapped: 38600704 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:05.716394+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213835776 unmapped: 38600704 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:06.716572+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1caf000/0x0/0x4ffc00000, data 0x3717d3c/0x37ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213843968 unmapped: 38592512 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:07.716711+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1caf000/0x0/0x4ffc00000, data 0x3717d3c/0x37ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213843968 unmapped: 38592512 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2278555 data_alloc: 234881024 data_used: 25186304
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:08.716892+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213843968 unmapped: 38592512 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1caf000/0x0/0x4ffc00000, data 0x3717d3c/0x37ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:09.717017+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213843968 unmapped: 38592512 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:10.717151+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213843968 unmapped: 38592512 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:11.717311+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1caf000/0x0/0x4ffc00000, data 0x3717d3c/0x37ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213860352 unmapped: 38576128 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:12.717497+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213860352 unmapped: 38576128 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2278555 data_alloc: 234881024 data_used: 25186304
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:13.717844+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213860352 unmapped: 38576128 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:14.717982+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1caf000/0x0/0x4ffc00000, data 0x3717d3c/0x37ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213868544 unmapped: 38567936 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:15.718171+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213868544 unmapped: 38567936 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:16.718301+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213868544 unmapped: 38567936 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:17.718516+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213868544 unmapped: 38567936 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2278555 data_alloc: 234881024 data_used: 25186304
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:18.718684+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213868544 unmapped: 38567936 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1caf000/0x0/0x4ffc00000, data 0x3717d3c/0x37ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:19.718847+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213876736 unmapped: 38559744 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:20.718971+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213876736 unmapped: 38559744 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:21.719111+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213893120 unmapped: 38543360 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55cc400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1caf000/0x0/0x4ffc00000, data 0x3717d3c/0x37ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.734886169s of 21.945653915s, submitted: 102
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:22.719290+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1caf000/0x0/0x4ffc00000, data 0x3717d3c/0x37ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215007232 unmapped: 37429248 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2272331 data_alloc: 234881024 data_used: 25186304
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:23.719555+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b5f62000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199400 session 0x5631b526fe00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215007232 unmapped: 37429248 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:24.719661+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215007232 unmapped: 37429248 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:25.719782+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215007232 unmapped: 37429248 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1cbe000/0x0/0x4ffc00000, data 0x3717d3c/0x37ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:26.719949+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215007232 unmapped: 37429248 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:27.720196+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215015424 unmapped: 37421056 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2272331 data_alloc: 234881024 data_used: 25186304
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:28.720369+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215015424 unmapped: 37421056 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:29.720517+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215023616 unmapped: 37412864 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1cbe000/0x0/0x4ffc00000, data 0x3717d3c/0x37ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:30.720642+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1cbe000/0x0/0x4ffc00000, data 0x3717d3c/0x37ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215023616 unmapped: 37412864 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:31.720815+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215031808 unmapped: 37404672 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.074160576s of 10.085861206s, submitted: 13
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:32.720974+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215031808 unmapped: 37404672 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1cbe000/0x0/0x4ffc00000, data 0x3717d3c/0x37ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2272635 data_alloc: 234881024 data_used: 25186304
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:33.721108+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215031808 unmapped: 37404672 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:34.721298+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215031808 unmapped: 37404672 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:35.721745+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215031808 unmapped: 37404672 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:36.721948+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215031808 unmapped: 37404672 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1cbc000/0x0/0x4ffc00000, data 0x3718d3c/0x37ef000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:37.722068+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215040000 unmapped: 37396480 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2272635 data_alloc: 234881024 data_used: 25186304
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:38.722243+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1cbc000/0x0/0x4ffc00000, data 0x3718d3c/0x37ef000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215040000 unmapped: 37396480 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:39.722409+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215040000 unmapped: 37396480 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:40.722543+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215040000 unmapped: 37396480 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:41.722693+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215040000 unmapped: 37396480 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fb800 session 0x5631b2b3b0e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fbc00 session 0x5631b6969a40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:42.722857+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215048192 unmapped: 37388288 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.691148758s of 10.702739716s, submitted: 3
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2087680 data_alloc: 234881024 data_used: 18255872
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:43.723037+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1cbd000/0x0/0x4ffc00000, data 0x3718d3c/0x37ef000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [0,1])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b5f65860
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213442560 unmapped: 38993920 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:44.723197+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213442560 unmapped: 38993920 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:45.723411+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2b13000/0x0/0x4ffc00000, data 0x24f9ca7/0x25cd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213442560 unmapped: 38993920 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:46.723535+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2b13000/0x0/0x4ffc00000, data 0x24f9ca7/0x25cd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213450752 unmapped: 38985728 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:47.723690+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213450752 unmapped: 38985728 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2085691 data_alloc: 234881024 data_used: 18251776
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:48.723842+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213450752 unmapped: 38985728 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:49.723972+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213450752 unmapped: 38985728 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:50.724131+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2b13000/0x0/0x4ffc00000, data 0x24f9ca7/0x25cd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55cc400 session 0x5631b363eb40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197c00 session 0x5631b33692c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213450752 unmapped: 38985728 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:51.724293+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b59d85a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 50511872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:52.724460+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 50511872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1775507 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:53.724598+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 50511872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:54.724776+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 50511872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:55.724938+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 50511872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a80000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:56.725098+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 50511872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:57.725252+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 50511872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1775507 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:58.725396+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a80000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 50511872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:59.725574+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 50511872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a80000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:00.725707+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 50511872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:01.725863+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 50511872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:02.726022+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 50511872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1775507 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:03.726185+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 50511872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:04.726360+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a80000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 50511872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:05.726478+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199400 session 0x5631b6b534a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fb800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fb800 session 0x5631b33625a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fbc00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fbc00 session 0x5631b639a780
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b3670b40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 50511872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.629732132s of 22.947402954s, submitted: 81
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197c00 session 0x5631b331c780
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199400 session 0x5631b6b38780
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fb800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fb800 session 0x5631b5f674a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b520c800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:06.726610+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b520c800 session 0x5631b3372000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b6c07a40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200196096 unmapped: 52240384 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:07.726749+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4137000/0x0/0x4ffc00000, data 0x12a0d09/0x1375000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200196096 unmapped: 52240384 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:08.726972+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1851177 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200196096 unmapped: 52240384 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:09.727124+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200196096 unmapped: 52240384 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:10.727429+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200196096 unmapped: 52240384 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:11.727641+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200196096 unmapped: 52240384 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:12.727913+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197c00 session 0x5631b526f860
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200196096 unmapped: 52240384 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4137000/0x0/0x4ffc00000, data 0x12a0d09/0x1375000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:13.728109+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1851177 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200196096 unmapped: 52240384 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:14.728295+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200196096 unmapped: 52240384 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199400 session 0x5631b5f66780
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:15.728420+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200196096 unmapped: 52240384 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:16.728593+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fb800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fb800 session 0x5631b35514a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fa400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.606972694s of 10.775773048s, submitted: 43
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fa400 session 0x5631b5f66960
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200523776 unmapped: 51912704 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:17.728789+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f410f000/0x0/0x4ffc00000, data 0x12c7d19/0x139d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200523776 unmapped: 51912704 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:18.728927+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1855363 data_alloc: 218103808 data_used: 1900544
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200851456 unmapped: 51585024 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:19.729047+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f410f000/0x0/0x4ffc00000, data 0x12c7d19/0x139d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 203472896 unmapped: 48963584 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f410f000/0x0/0x4ffc00000, data 0x12c7d19/0x139d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:20.729334+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f410f000/0x0/0x4ffc00000, data 0x12c7d19/0x139d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 203472896 unmapped: 48963584 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f410f000/0x0/0x4ffc00000, data 0x12c7d19/0x139d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:21.729527+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 203472896 unmapped: 48963584 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:22.729710+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 203472896 unmapped: 48963584 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:23.729834+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1920115 data_alloc: 234881024 data_used: 11452416
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 203481088 unmapped: 48955392 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:24.729978+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 203481088 unmapped: 48955392 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:25.730109+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f410f000/0x0/0x4ffc00000, data 0x12c7d19/0x139d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 203489280 unmapped: 48947200 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:26.730224+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 203489280 unmapped: 48947200 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:27.730441+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f410f000/0x0/0x4ffc00000, data 0x12c7d19/0x139d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 203489280 unmapped: 48947200 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:28.730597+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.547014236s of 11.612545967s, submitted: 4
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1921175 data_alloc: 234881024 data_used: 11481088
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 211107840 unmapped: 41328640 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:29.730692+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 211107840 unmapped: 41328640 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:30.730783+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210124800 unmapped: 42311680 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:31.730917+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f309a000/0x0/0x4ffc00000, data 0x233cd19/0x2412000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 211247104 unmapped: 41189376 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:32.731053+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 211247104 unmapped: 41189376 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:33.731189+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2054427 data_alloc: 234881024 data_used: 11579392
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199400 session 0x5631b639ab40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fb800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fb800 session 0x5631b3870f00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fac00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fac00 session 0x5631b65e14a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fb400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fb400 session 0x5631b5f665a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b6b38d20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b57714a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199400 session 0x5631b402ab40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fac00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 211591168 unmapped: 51347456 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fac00 session 0x5631b5770f00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fb400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fb400 session 0x5631b402a780
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:34.731329+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 211591168 unmapped: 51347456 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:35.731540+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 211591168 unmapped: 51347456 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:36.731678+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f269e000/0x0/0x4ffc00000, data 0x2d38d19/0x2e0e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 211591168 unmapped: 51347456 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:37.731825+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 211607552 unmapped: 51331072 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:38.731980+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2124507 data_alloc: 234881024 data_used: 11583488
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 211607552 unmapped: 51331072 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:39.732078+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f269b000/0x0/0x4ffc00000, data 0x2d3bd19/0x2e11000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 211623936 unmapped: 51314688 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:40.732261+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f269b000/0x0/0x4ffc00000, data 0x2d3bd19/0x2e11000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 211623936 unmapped: 51314688 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:41.732403+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 211632128 unmapped: 51306496 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:42.732562+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fb800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.989665985s of 14.372500420s, submitted: 158
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fb800 session 0x5631b5771680
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 211968000 unmapped: 50970624 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2673000/0x0/0x4ffc00000, data 0x2d62d3c/0x2e39000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:43.732695+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2130209 data_alloc: 234881024 data_used: 11583488
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 211968000 unmapped: 50970624 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:44.732819+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2673000/0x0/0x4ffc00000, data 0x2d62d3c/0x2e39000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212303872 unmapped: 50634752 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:45.732977+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213106688 unmapped: 49831936 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:46.733126+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213106688 unmapped: 49831936 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:47.733268+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213114880 unmapped: 49823744 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:48.733416+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2673000/0x0/0x4ffc00000, data 0x2d62d3c/0x2e39000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2199045 data_alloc: 234881024 data_used: 21688320
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213114880 unmapped: 49823744 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:49.733536+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213123072 unmapped: 49815552 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:50.733654+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213123072 unmapped: 49815552 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:51.733766+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2673000/0x0/0x4ffc00000, data 0x2d62d3c/0x2e39000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213123072 unmapped: 49815552 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:52.733930+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213123072 unmapped: 49815552 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:53.734048+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2199045 data_alloc: 234881024 data_used: 21688320
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213123072 unmapped: 49815552 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:54.734190+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.245738029s of 12.282820702s, submitted: 12
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214769664 unmapped: 48168960 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:55.734299+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214876160 unmapped: 48062464 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:56.734409+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214892544 unmapped: 48046080 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:57.734555+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f211f000/0x0/0x4ffc00000, data 0x32aed3c/0x3385000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214900736 unmapped: 48037888 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:58.734724+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2253889 data_alloc: 234881024 data_used: 21815296
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f211f000/0x0/0x4ffc00000, data 0x32aed3c/0x3385000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214900736 unmapped: 48037888 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:59.734887+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214900736 unmapped: 48037888 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:00.735058+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214900736 unmapped: 48037888 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:01.735234+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214245376 unmapped: 48693248 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:02.735414+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2125000/0x0/0x4ffc00000, data 0x32afd3c/0x3386000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214245376 unmapped: 48693248 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:03.735535+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2247201 data_alloc: 234881024 data_used: 21815296
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214253568 unmapped: 48685056 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:04.735704+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214253568 unmapped: 48685056 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2125000/0x0/0x4ffc00000, data 0x32afd3c/0x3386000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:05.735858+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2125000/0x0/0x4ffc00000, data 0x32afd3c/0x3386000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214261760 unmapped: 48676864 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2125000/0x0/0x4ffc00000, data 0x32afd3c/0x3386000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:06.736001+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214261760 unmapped: 48676864 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:07.736326+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214261760 unmapped: 48676864 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:08.736487+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2125000/0x0/0x4ffc00000, data 0x32afd3c/0x3386000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2247201 data_alloc: 234881024 data_used: 21815296
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214261760 unmapped: 48676864 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:09.736594+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214261760 unmapped: 48676864 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:10.736780+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2125000/0x0/0x4ffc00000, data 0x32afd3c/0x3386000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214261760 unmapped: 48676864 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:11.736955+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.766283035s of 17.103271484s, submitted: 66
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214269952 unmapped: 48668672 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:12.737172+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2125000/0x0/0x4ffc00000, data 0x32afd3c/0x3386000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214269952 unmapped: 48668672 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:13.737397+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2247201 data_alloc: 234881024 data_used: 21815296
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214286336 unmapped: 48652288 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:14.737516+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214286336 unmapped: 48652288 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:15.737680+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214286336 unmapped: 48652288 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:16.737861+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214286336 unmapped: 48652288 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:17.738065+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214286336 unmapped: 48652288 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:18.738281+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2125000/0x0/0x4ffc00000, data 0x32afd3c/0x3386000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2247201 data_alloc: 234881024 data_used: 21815296
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214286336 unmapped: 48652288 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:19.738426+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fac00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fb400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214294528 unmapped: 48644096 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:20.738605+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214294528 unmapped: 48644096 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:21.738832+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2126000/0x0/0x4ffc00000, data 0x32afd3c/0x3386000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b3369860
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197c00 session 0x5631b6969e00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214310912 unmapped: 48627712 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:22.739042+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214310912 unmapped: 48627712 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:23.739225+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2246349 data_alloc: 234881024 data_used: 21819392
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214310912 unmapped: 48627712 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:24.739404+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214310912 unmapped: 48627712 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:25.739538+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214310912 unmapped: 48627712 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:26.739654+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2126000/0x0/0x4ffc00000, data 0x32afd3c/0x3386000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.018495560s of 15.045900345s, submitted: 6
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214310912 unmapped: 48627712 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:27.739784+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _send_mon_message to mon.compute-1 at v2:192.168.122.101:3300/0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214327296 unmapped: 48611328 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:28.739913+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2246989 data_alloc: 234881024 data_used: 21819392
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2124000/0x0/0x4ffc00000, data 0x32b0d3c/0x3387000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214335488 unmapped: 48603136 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:29.740056+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214335488 unmapped: 48603136 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:30.740274+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2124000/0x0/0x4ffc00000, data 0x32b0d3c/0x3387000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214343680 unmapped: 48594944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:31.740431+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214343680 unmapped: 48594944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:32.740629+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214343680 unmapped: 48594944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:33.740756+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2246989 data_alloc: 234881024 data_used: 21819392
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2124000/0x0/0x4ffc00000, data 0x32b0d3c/0x3387000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214343680 unmapped: 48594944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:34.740893+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2124000/0x0/0x4ffc00000, data 0x32b0d3c/0x3387000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214343680 unmapped: 48594944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:35.741016+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2124000/0x0/0x4ffc00000, data 0x32b0d3c/0x3387000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214343680 unmapped: 48594944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:36.741147+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b6417a40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.072799683s of 10.080914497s, submitted: 3
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199400 session 0x5631b337af00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214343680 unmapped: 48594944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:37.741289+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b68d1000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209379328 unmapped: 53559296 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:38.741421+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b68d1000 session 0x5631b6b385a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2062857 data_alloc: 234881024 data_used: 11587584
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209395712 unmapped: 53542912 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:39.741555+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3066000/0x0/0x4ffc00000, data 0x236fd19/0x2445000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:40.741754+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209395712 unmapped: 53542912 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:41.741900+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209395712 unmapped: 53542912 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3066000/0x0/0x4ffc00000, data 0x236fd19/0x2445000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:42.742062+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209395712 unmapped: 53542912 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:43.749460+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209395712 unmapped: 53542912 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2062857 data_alloc: 234881024 data_used: 11587584
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3066000/0x0/0x4ffc00000, data 0x236fd19/0x2445000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:44.749613+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209395712 unmapped: 53542912 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:45.749738+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209395712 unmapped: 53542912 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fac00 session 0x5631b6c06b40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fb400 session 0x5631b6b38000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:46.749885+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 53510144 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b337b860
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:47.750032+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:48.750170+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1805942 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:49.750306+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a7f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a7f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:50.750425+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:51.750561+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:52.750796+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a7f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:53.751020+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1805942 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:54.751180+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:55.751330+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:56.751484+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a7f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:57.751630+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:58.751774+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1805942 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:59.751941+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:00.752138+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:01.752330+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:02.752828+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a7f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:03.752962+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a7f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1805942 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:04.753118+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:05.753298+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:06.753422+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:07.753541+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:08.753697+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1805942 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:09.753863+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a7f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:10.753980+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:11.754140+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:12.754327+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a7f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:13.754504+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1805942 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:14.754619+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:15.754819+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a7f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:16.755051+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:17.755257+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:18.755417+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1805942 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a7f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:19.755549+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 58286080 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:20.755689+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 58286080 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:21.755856+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 58286080 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:22.756057+0000)
Sep 30 18:56:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 58286080 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:56:35.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:23.756194+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 58286080 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1805942 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:24.756334+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 58286080 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a7f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:25.756494+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 58286080 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:26.756624+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 58286080 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:27.756740+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 58286080 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:28.756868+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 58286080 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a7f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1805942 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:29.757032+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 58286080 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:30.757160+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 58286080 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:31.757275+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 58286080 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:32.757428+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 58286080 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:33.757666+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 58286080 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1805942 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a7f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:34.757858+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 58286080 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 57.559127808s of 58.033184052s, submitted: 95
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:35.758032+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b592cb40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197c00 session 0x5631b6969e00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b402a780
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205922304 unmapped: 57016320 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b6b38d20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197c00 session 0x5631b69690e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:36.758159+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205938688 unmapped: 56999936 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:37.758441+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205938688 unmapped: 56999936 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3784000/0x0/0x4ffc00000, data 0x1c54cf9/0x1d28000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:38.758582+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fac00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fac00 session 0x5631b3870f00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205938688 unmapped: 56999936 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3784000/0x0/0x4ffc00000, data 0x1c54cf9/0x1d28000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1946190 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:39.758725+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205938688 unmapped: 56999936 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:40.758896+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205938688 unmapped: 56999936 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fb400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fb400 session 0x5631b639ab40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:41.759064+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205946880 unmapped: 56991744 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:42.759185+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205946880 unmapped: 56991744 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b35514a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b5f66780
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:43.759540+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205946880 unmapped: 56991744 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fac00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1946648 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3783000/0x0/0x4ffc00000, data 0x1c54d09/0x1d29000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:44.759654+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205963264 unmapped: 56975360 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:45.759760+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208691200 unmapped: 54247424 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:46.759840+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208691200 unmapped: 54247424 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:47.759911+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208691200 unmapped: 54247424 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3783000/0x0/0x4ffc00000, data 0x1c54d09/0x1d29000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:48.760086+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208691200 unmapped: 54247424 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2072504 data_alloc: 234881024 data_used: 20656128
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:49.760222+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208699392 unmapped: 54239232 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:50.760333+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3783000/0x0/0x4ffc00000, data 0x1c54d09/0x1d29000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208707584 unmapped: 54231040 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:51.760495+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208707584 unmapped: 54231040 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:52.760633+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208707584 unmapped: 54231040 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3783000/0x0/0x4ffc00000, data 0x1c54d09/0x1d29000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3783000/0x0/0x4ffc00000, data 0x1c54d09/0x1d29000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:53.760715+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208707584 unmapped: 54231040 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2072960 data_alloc: 234881024 data_used: 20668416
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:54.760823+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.133584976s of 19.291557312s, submitted: 52
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 52740096 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:55.760955+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2940000/0x0/0x4ffc00000, data 0x2a97d09/0x2b6c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213065728 unmapped: 49872896 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2940000/0x0/0x4ffc00000, data 0x2a97d09/0x2b6c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199400 session 0x5631b402b0e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f3800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f3800 session 0x5631b5f65680
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f3400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f3400 session 0x5631b636fe00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:56.761082+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b644d680
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b386f0e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213319680 unmapped: 49618944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:57.761234+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213319680 unmapped: 49618944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:58.761406+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213319680 unmapped: 49618944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2306968 data_alloc: 234881024 data_used: 21622784
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:59.761554+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213319680 unmapped: 49618944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:00.761697+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213319680 unmapped: 49618944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:01.761818+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213319680 unmapped: 49618944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1be7000/0x0/0x4ffc00000, data 0x37eed6b/0x38c4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:02.762006+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199400 session 0x5631b592dc20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213319680 unmapped: 49618944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:03.762172+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213319680 unmapped: 49618944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2306968 data_alloc: 234881024 data_used: 21622784
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:04.762381+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213319680 unmapped: 49618944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f3800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f3800 session 0x5631b592d4a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:05.762549+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213319680 unmapped: 49618944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1be7000/0x0/0x4ffc00000, data 0x37eed6b/0x38c4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:06.762685+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213319680 unmapped: 49618944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5587800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5587800 session 0x5631b6b53680
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.046493530s of 12.419847488s, submitted: 145
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b63a94a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:07.762861+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213336064 unmapped: 49602560 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:08.763017+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213581824 unmapped: 49356800 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2327905 data_alloc: 234881024 data_used: 25317376
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:09.763134+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1be8000/0x0/0x4ffc00000, data 0x37eed6b/0x38c4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223395840 unmapped: 39542784 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:10.763289+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223395840 unmapped: 39542784 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:11.763463+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223395840 unmapped: 39542784 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1be8000/0x0/0x4ffc00000, data 0x37eed6b/0x38c4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:12.763646+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223412224 unmapped: 39526400 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:13.763762+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223412224 unmapped: 39526400 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2388401 data_alloc: 251658240 data_used: 34312192
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:14.763917+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223412224 unmapped: 39526400 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:15.764113+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223412224 unmapped: 39526400 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:16.764254+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1be8000/0x0/0x4ffc00000, data 0x37eed6b/0x38c4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223428608 unmapped: 39510016 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:17.764434+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223428608 unmapped: 39510016 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:18.764565+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.702298164s of 11.732785225s, submitted: 8
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223870976 unmapped: 39067648 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2481871 data_alloc: 251658240 data_used: 34750464
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:19.764674+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229236736 unmapped: 33701888 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f0fe7000/0x0/0x4ffc00000, data 0x43e1d6b/0x44b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:20.764801+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230006784 unmapped: 32931840 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:21.764909+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b63a9a40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199400 session 0x5631b6bf3c20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229343232 unmapped: 33595392 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:22.765101+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5587800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221446144 unmapped: 41492480 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5587800 session 0x5631b69d01e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:23.765269+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221446144 unmapped: 41492480 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2214773 data_alloc: 234881024 data_used: 21626880
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:24.765455+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221446144 unmapped: 41492480 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:25.765618+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221446144 unmapped: 41492480 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f257d000/0x0/0x4ffc00000, data 0x2aacd09/0x2b81000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:26.765769+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221446144 unmapped: 41492480 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:27.765915+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f257b000/0x0/0x4ffc00000, data 0x2aadd09/0x2b82000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221446144 unmapped: 41492480 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:28.766073+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221446144 unmapped: 41492480 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2215077 data_alloc: 234881024 data_used: 21626880
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:29.766225+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221446144 unmapped: 41492480 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:30.766415+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.652027130s of 12.008926392s, submitted: 172
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197c00 session 0x5631b3372000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fac00 session 0x5631b5771a40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221429760 unmapped: 41508864 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:31.766531+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b6b52f00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:32.766732+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:33.766906+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1841780 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:34.767048+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:35.767241+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:36.767404+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:37.767564+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:38.767693+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1841780 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:39.767865+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:40.768038+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:41.768226+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:42.768410+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:43.768612+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1841780 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:44.768793+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:45.769049+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:46.769182+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:47.769413+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:48.769559+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1841780 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:49.769680+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:50.769863+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:51.769996+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:52.770198+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:53.770327+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1841780 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:54.770519+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:55.770696+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:56.770815+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:57.770984+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:58.771132+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1841780 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:59.771272+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:00.771437+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:01.771568+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:02.771826+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:03.772257+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1841780 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:04.772404+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:05.772656+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:06.772792+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:07.772992+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:08.773160+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1841780 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:09.773324+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:10.773534+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:11.773663+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:12.773806+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:13.773922+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1841780 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:14.774071+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:15.774218+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:16.774417+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:17.774582+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:18.774766+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1841780 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:19.774926+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:20.775123+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:21.775288+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:22.775462+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:23.775998+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1841780 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:24.776274+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:25.776449+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:26.776624+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:27.776760+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:28.777017+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1841780 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:29.777121+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:30.777267+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:31.777457+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:32.777659+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:33.777816+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:34.777969+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1841780 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:35.778169+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b64172c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197c00 session 0x5631b639a5a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199400 session 0x5631b331c000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b6c07a40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 64.936187744s of 65.134979248s, submitted: 54
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b6b392c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197c00 session 0x5631b59d9c20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fac00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fac00 session 0x5631b644c780
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210714624 unmapped: 60104704 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5587800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5587800 session 0x5631b636f0e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b3870000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:36.778432+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209920000 unmapped: 60899328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:37.778575+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209920000 unmapped: 60899328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:38.779114+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3857000/0x0/0x4ffc00000, data 0x1770d09/0x1845000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209920000 unmapped: 60899328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:39.779398+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1956484 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209920000 unmapped: 60899328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:40.779842+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209920000 unmapped: 60899328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:41.779986+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3857000/0x0/0x4ffc00000, data 0x1770d09/0x1845000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209920000 unmapped: 60899328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:42.780252+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209920000 unmapped: 60899328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:43.780396+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b2b3be00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209928192 unmapped: 60891136 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fac00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:44.780495+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1959882 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209936384 unmapped: 60882944 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:45.780608+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210444288 unmapped: 60375040 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:46.780736+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210444288 unmapped: 60375040 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:47.780882+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3856000/0x0/0x4ffc00000, data 0x1770d2c/0x1846000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210444288 unmapped: 60375040 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:48.780989+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210444288 unmapped: 60375040 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:49.781115+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2058074 data_alloc: 234881024 data_used: 16293888
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210444288 unmapped: 60375040 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:50.781303+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3856000/0x0/0x4ffc00000, data 0x1770d2c/0x1846000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210444288 unmapped: 60375040 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:51.781439+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210444288 unmapped: 60375040 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:52.781599+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210444288 unmapped: 60375040 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3856000/0x0/0x4ffc00000, data 0x1770d2c/0x1846000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:53.781776+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210444288 unmapped: 60375040 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:54.781916+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2058074 data_alloc: 234881024 data_used: 16293888
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.958608627s of 19.136236191s, submitted: 64
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 220626944 unmapped: 50192384 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:55.782031+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222814208 unmapped: 48005120 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:56.782143+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f3800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f3800 session 0x5631b64174a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5586800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5586800 session 0x5631b639a1e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5843400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5843400 session 0x5631b6c06780
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2a6b000/0x0/0x4ffc00000, data 0x254dd2c/0x2623000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b6c065a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 46759936 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b6b53c20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5586800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5586800 session 0x5631b5395a40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f29c3000/0x0/0x4ffc00000, data 0x25fbd2c/0x26d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:57.782336+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f3800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f3800 session 0x5631b5394b40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5842000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5842000 session 0x5631b69d03c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b5f623c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223649792 unmapped: 47169536 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:58.782497+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1c3b000/0x0/0x4ffc00000, data 0x3383d2c/0x3459000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223649792 unmapped: 47169536 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:59.782617+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2295141 data_alloc: 234881024 data_used: 17666048
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1c3b000/0x0/0x4ffc00000, data 0x3383d2c/0x3459000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223649792 unmapped: 47169536 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:00.782759+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1c3b000/0x0/0x4ffc00000, data 0x3383d2c/0x3459000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223649792 unmapped: 47169536 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:01.782891+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223666176 unmapped: 47153152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b402b680
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:02.783023+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223666176 unmapped: 47153152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:03.783155+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1c1f000/0x0/0x4ffc00000, data 0x33a7d2c/0x347d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223666176 unmapped: 47153152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:04.783303+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2288741 data_alloc: 234881024 data_used: 17670144
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1c1f000/0x0/0x4ffc00000, data 0x33a7d2c/0x347d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5586800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5586800 session 0x5631b6bf32c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223666176 unmapped: 47153152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:05.783480+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f3800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f3800 session 0x5631b35de960
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5843000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.674231529s of 11.071226120s, submitted: 215
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5843000 session 0x5631b5f625a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223682560 unmapped: 47136768 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1c1f000/0x0/0x4ffc00000, data 0x33a7d2c/0x347d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:06.783807+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223682560 unmapped: 47136768 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:07.783956+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223682560 unmapped: 47136768 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:08.784104+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230400000 unmapped: 40419328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:09.784275+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2386331 data_alloc: 251658240 data_used: 30572544
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230948864 unmapped: 39870464 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:10.784651+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230948864 unmapped: 39870464 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:11.784799+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1c1e000/0x0/0x4ffc00000, data 0x33a7d4f/0x347e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230948864 unmapped: 39870464 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:12.784965+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230948864 unmapped: 39870464 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:13.785264+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230948864 unmapped: 39870464 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:14.785455+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1c1e000/0x0/0x4ffc00000, data 0x33a7d4f/0x347e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2387131 data_alloc: 251658240 data_used: 30576640
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1c1b000/0x0/0x4ffc00000, data 0x33aad4f/0x3481000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230948864 unmapped: 39870464 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:15.785609+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230948864 unmapped: 39870464 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:16.785926+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1c1b000/0x0/0x4ffc00000, data 0x33aad4f/0x3481000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230948864 unmapped: 39870464 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:17.786164+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1c1b000/0x0/0x4ffc00000, data 0x33aad4f/0x3481000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.891641617s of 11.933318138s, submitted: 14
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 236093440 unmapped: 34725888 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:18.786328+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237633536 unmapped: 33185792 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:19.786514+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2507695 data_alloc: 251658240 data_used: 31805440
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237559808 unmapped: 33259520 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:20.786738+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237559808 unmapped: 33259520 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:21.786908+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237559808 unmapped: 33259520 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:22.787237+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f0de9000/0x0/0x4ffc00000, data 0x41d6d4f/0x42ad000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f0de9000/0x0/0x4ffc00000, data 0x41d6d4f/0x42ad000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237559808 unmapped: 33259520 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:23.787417+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237559808 unmapped: 33259520 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:24.787579+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2507847 data_alloc: 251658240 data_used: 31809536
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237559808 unmapped: 33259520 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f0ded000/0x0/0x4ffc00000, data 0x41d8d4f/0x42af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:25.787790+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f0ded000/0x0/0x4ffc00000, data 0x41d8d4f/0x42af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237559808 unmapped: 33259520 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:26.787934+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237608960 unmapped: 33210368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:27.788134+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237608960 unmapped: 33210368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:28.788278+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237608960 unmapped: 33210368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:29.788408+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2504911 data_alloc: 251658240 data_used: 31809536
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237608960 unmapped: 33210368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:30.788581+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.494389534s of 12.792341232s, submitted: 169
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237608960 unmapped: 33210368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:31.788740+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f0dec000/0x0/0x4ffc00000, data 0x41d9d4f/0x42b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237608960 unmapped: 33210368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:32.788914+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f0dec000/0x0/0x4ffc00000, data 0x41d9d4f/0x42b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237608960 unmapped: 33210368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:33.789095+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237608960 unmapped: 33210368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:34.789257+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2505087 data_alloc: 251658240 data_used: 31809536
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237608960 unmapped: 33210368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:35.789468+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f0dec000/0x0/0x4ffc00000, data 0x41d9d4f/0x42b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:36.789606+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237617152 unmapped: 33202176 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:37.789946+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237617152 unmapped: 33202176 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5586800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f3800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:38.790309+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237641728 unmapped: 33177600 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f0dec000/0x0/0x4ffc00000, data 0x41d9d4f/0x42b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:39.790481+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237641728 unmapped: 33177600 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197c00 session 0x5631b639b680
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fac00 session 0x5631b64161e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2503915 data_alloc: 251658240 data_used: 31809536
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:40.790632+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237649920 unmapped: 33169408 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:41.790756+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237649920 unmapped: 33169408 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:42.790929+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237649920 unmapped: 33169408 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:43.791081+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237649920 unmapped: 33169408 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:44.791221+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237658112 unmapped: 33161216 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f0dec000/0x0/0x4ffc00000, data 0x41d9d4f/0x42b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2503915 data_alloc: 251658240 data_used: 31809536
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:45.791390+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237658112 unmapped: 33161216 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:46.791503+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237666304 unmapped: 33153024 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:47.791627+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237666304 unmapped: 33153024 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f0dec000/0x0/0x4ffc00000, data 0x41d9d4f/0x42b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:48.791754+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237666304 unmapped: 33153024 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f0dec000/0x0/0x4ffc00000, data 0x41d9d4f/0x42b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:49.791915+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237674496 unmapped: 33144832 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2503915 data_alloc: 251658240 data_used: 31809536
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.172214508s of 19.200347900s, submitted: 8
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f0dea000/0x0/0x4ffc00000, data 0x41dad4f/0x42b1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:50.792081+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237682688 unmapped: 33136640 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:51.792216+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237682688 unmapped: 33136640 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:52.792518+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237682688 unmapped: 33136640 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f0dea000/0x0/0x4ffc00000, data 0x41dad4f/0x42b1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:53.792658+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237682688 unmapped: 33136640 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:54.792770+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237699072 unmapped: 33120256 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2504723 data_alloc: 251658240 data_used: 31809536
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:55.792937+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237699072 unmapped: 33120256 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b5f645a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b402ab40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:56.793083+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55cb800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237699072 unmapped: 33120256 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55cb800 session 0x5631b3670f00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f0deb000/0x0/0x4ffc00000, data 0x41dad4f/0x42b1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:57.793235+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229425152 unmapped: 41394176 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 26K writes, 98K keys, 26K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s
                                           Cumulative WAL: 26K writes, 9158 syncs, 2.90 writes per sync, written: 0.09 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4299 writes, 16K keys, 4299 commit groups, 1.0 writes per commit group, ingest: 19.72 MB, 0.03 MB/s
                                           Interval WAL: 4299 writes, 1695 syncs, 2.54 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:58.793480+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230522880 unmapped: 40296448 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:59.793650+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230522880 unmapped: 40296448 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2202981 data_alloc: 234881024 data_used: 16560128
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:00.793817+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230522880 unmapped: 40296448 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:01.793985+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230522880 unmapped: 40296448 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:02.794196+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230522880 unmapped: 40296448 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f29a3000/0x0/0x4ffc00000, data 0x2622d2c/0x26f8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:03.794386+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230522880 unmapped: 40296448 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:04.794561+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230522880 unmapped: 40296448 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2202981 data_alloc: 234881024 data_used: 16560128
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.812459946s of 15.024492264s, submitted: 87
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5586800 session 0x5631b5f67a40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f3800 session 0x5631b63a9a40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:05.794751+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f29a4000/0x0/0x4ffc00000, data 0x2622d2c/0x26f8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229548032 unmapped: 41271296 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b3670960
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:06.794886+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f42e4000/0x0/0x4ffc00000, data 0x959cba/0xa2d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:07.797502+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f42e4000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:08.799112+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:09.799219+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1883301 data_alloc: 218103808 data_used: 1888256
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:10.799541+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:11.799754+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:12.799922+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:13.800045+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f42e4000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:14.800187+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1883301 data_alloc: 218103808 data_used: 1888256
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:15.800327+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f42e4000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:16.800513+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:17.800692+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:18.800835+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:19.801005+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1883301 data_alloc: 218103808 data_used: 1888256
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f42e4000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:20.801166+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f42e4000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:21.801293+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:22.801463+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:23.801676+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:24.801879+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f42e4000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1883301 data_alloc: 218103808 data_used: 1888256
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:25.802148+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f42e4000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:26.802395+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:27.802611+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f42e4000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:28.802776+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _send_mon_message to mon.compute-1 at v2:192.168.122.101:3300/0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:29.802947+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1883301 data_alloc: 218103808 data_used: 1888256
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:30.803172+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:31.803303+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:32.803455+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:33.803623+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f42e4000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:34.803776+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1883301 data_alloc: 218103808 data_used: 1888256
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f42e4000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.031751633s of 30.238092422s, submitted: 78
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:35.803910+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b38712c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197c00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197c00 session 0x5631b3379a40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b3ff3680
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b6b53860
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5586800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218546176 unmapped: 52273152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5586800 session 0x5631b35de1e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:36.804070+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218546176 unmapped: 52273152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:37.804208+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218546176 unmapped: 52273152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:38.804406+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218546176 unmapped: 52273152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:39.804684+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218546176 unmapped: 52273152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1966912 data_alloc: 218103808 data_used: 1888256
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f3800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f3800 session 0x5631b5395c20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:40.804828+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218546176 unmapped: 52273152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3c73000/0x0/0x4ffc00000, data 0x1356c97/0x1429000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:41.804990+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218546176 unmapped: 52273152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:42.805124+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218546176 unmapped: 52273152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fac00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fac00 session 0x5631b592c780
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:43.805287+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218546176 unmapped: 52273152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b6bf3c20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b5397c20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5586800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f3800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:44.805433+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218882048 unmapped: 51937280 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1970570 data_alloc: 218103808 data_used: 1892352
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:45.805639+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218882048 unmapped: 51937280 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:46.805828+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 219373568 unmapped: 51445760 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3c4b000/0x0/0x4ffc00000, data 0x137dca7/0x1451000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:47.805943+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 219373568 unmapped: 51445760 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:48.806065+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 219381760 unmapped: 51437568 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:49.806201+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 219381760 unmapped: 51437568 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2032890 data_alloc: 218103808 data_used: 10952704
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3c4b000/0x0/0x4ffc00000, data 0x137dca7/0x1451000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:50.806307+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 219381760 unmapped: 51437568 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3c4b000/0x0/0x4ffc00000, data 0x137dca7/0x1451000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:51.806421+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 219389952 unmapped: 51429376 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:52.806574+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 219389952 unmapped: 51429376 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:53.806766+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 219389952 unmapped: 51429376 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:54.806984+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 219389952 unmapped: 51429376 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3c4b000/0x0/0x4ffc00000, data 0x137dca7/0x1451000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2032890 data_alloc: 218103808 data_used: 10952704
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.829759598s of 19.939464569s, submitted: 36
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:55.807255+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 224174080 unmapped: 46645248 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:56.807496+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 224174080 unmapped: 46645248 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:57.807620+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 224894976 unmapped: 45924352 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:58.807740+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b441d400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b441d400 session 0x5631b63a8f00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b441d000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b441d000 session 0x5631b386f4a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b3364800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b3364800 session 0x5631b6b52b40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 224894976 unmapped: 45924352 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b6b53860
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b441d000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b441d000 session 0x5631b331d680
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b441d400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b441d400 session 0x5631b2b3a960
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b5770780
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6e73400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6e73400 session 0x5631b636eb40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b35ded20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:59.807869+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2d0b000/0x0/0x4ffc00000, data 0x22bbce0/0x2391000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225058816 unmapped: 45760512 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2169298 data_alloc: 234881024 data_used: 12369920
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:00.808028+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225058816 unmapped: 45760512 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:01.808221+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225058816 unmapped: 45760512 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:02.808432+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225058816 unmapped: 45760512 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b441d000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b441d000 session 0x5631b639b860
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2d0b000/0x0/0x4ffc00000, data 0x22bbd19/0x2391000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:03.808544+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225067008 unmapped: 45752320 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:04.808697+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225067008 unmapped: 45752320 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2165802 data_alloc: 234881024 data_used: 12369920
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b441d400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b441d400 session 0x5631b5f62000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:05.808857+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225067008 unmapped: 45752320 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:06.809082+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225067008 unmapped: 45752320 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b63a94a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fb000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.581088066s of 11.953786850s, submitted: 163
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2ce7000/0x0/0x4ffc00000, data 0x22dfd19/0x23b5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [1])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fb000 session 0x5631b639ab40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:07.809221+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225075200 unmapped: 45744128 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b441d000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:08.809363+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225083392 unmapped: 45735936 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:09.809588+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225083392 unmapped: 45735936 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2173129 data_alloc: 234881024 data_used: 12935168
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:10.809766+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225607680 unmapped: 45211648 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2ce6000/0x0/0x4ffc00000, data 0x22dfd29/0x23b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:11.809940+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225607680 unmapped: 45211648 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:12.810169+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225607680 unmapped: 45211648 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:13.810323+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225607680 unmapped: 45211648 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:14.810467+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225607680 unmapped: 45211648 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2194937 data_alloc: 234881024 data_used: 16080896
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2ce6000/0x0/0x4ffc00000, data 0x22dfd29/0x23b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:15.810786+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225607680 unmapped: 45211648 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2ce3000/0x0/0x4ffc00000, data 0x22e2d29/0x23b9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:16.810922+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225607680 unmapped: 45211648 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:17.811093+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2ce3000/0x0/0x4ffc00000, data 0x22e2d29/0x23b9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225607680 unmapped: 45211648 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:18.811328+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225607680 unmapped: 45211648 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:19.811605+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225607680 unmapped: 45211648 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2194937 data_alloc: 234881024 data_used: 16080896
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:20.811780+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.392265320s of 13.433295250s, submitted: 11
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 226746368 unmapped: 44072960 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:21.811962+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229335040 unmapped: 41484288 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2179000/0x0/0x4ffc00000, data 0x2e44d29/0x2f1b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:22.812142+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230481920 unmapped: 40337408 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:23.812273+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230481920 unmapped: 40337408 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:24.812429+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230481920 unmapped: 40337408 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298633 data_alloc: 234881024 data_used: 16474112
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:25.812593+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230481920 unmapped: 40337408 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:26.812842+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230481920 unmapped: 40337408 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:27.812998+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2146000/0x0/0x4ffc00000, data 0x2e76d29/0x2f4d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229810176 unmapped: 41009152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:28.813154+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229810176 unmapped: 41009152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:29.813532+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229810176 unmapped: 41009152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f214c000/0x0/0x4ffc00000, data 0x2e79d29/0x2f50000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2293017 data_alloc: 234881024 data_used: 16478208
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:30.813657+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229810176 unmapped: 41009152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f214c000/0x0/0x4ffc00000, data 0x2e79d29/0x2f50000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:31.813775+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229810176 unmapped: 41009152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:32.813937+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229810176 unmapped: 41009152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:33.814043+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229810176 unmapped: 41009152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f214c000/0x0/0x4ffc00000, data 0x2e79d29/0x2f50000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:34.814211+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229810176 unmapped: 41009152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2293017 data_alloc: 234881024 data_used: 16478208
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b441d400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.330355644s of 14.789398193s, submitted: 104
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:35.814376+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229810176 unmapped: 41009152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:36.814494+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229810176 unmapped: 41009152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5586800 session 0x5631b331cf00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f3800 session 0x5631b64e6960
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:37.814641+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229810176 unmapped: 41009152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:38.814777+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229810176 unmapped: 41009152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:39.814950+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f214c000/0x0/0x4ffc00000, data 0x2e79d29/0x2f50000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229810176 unmapped: 41009152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2291885 data_alloc: 234881024 data_used: 16486400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f214c000/0x0/0x4ffc00000, data 0x2e79d29/0x2f50000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:40.815079+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229810176 unmapped: 41009152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:41.815238+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229851136 unmapped: 40968192 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:42.815441+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229851136 unmapped: 40968192 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:43.815654+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229892096 unmapped: 40927232 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:44.815824+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230031360 unmapped: 40787968 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2291977 data_alloc: 234881024 data_used: 16486400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:45.815959+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f214c000/0x0/0x4ffc00000, data 0x2e79d29/0x2f50000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230039552 unmapped: 40779776 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:46.816096+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230039552 unmapped: 40779776 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:47.816494+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f214c000/0x0/0x4ffc00000, data 0x2e79d29/0x2f50000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230039552 unmapped: 40779776 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:48.816643+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230039552 unmapped: 40779776 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:49.816820+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230039552 unmapped: 40779776 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f214c000/0x0/0x4ffc00000, data 0x2e79d29/0x2f50000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2291977 data_alloc: 234881024 data_used: 16486400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.061479568s of 14.912302971s, submitted: 309
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b33625a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b441d000 session 0x5631b402a5a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:50.817015+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230039552 unmapped: 40779776 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fbc00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:51.817147+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fbc00 session 0x5631b3551680
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230121472 unmapped: 40697856 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:52.817397+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230121472 unmapped: 40697856 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:53.817527+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230121472 unmapped: 40697856 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:54.817654+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230121472 unmapped: 40697856 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2140436 data_alloc: 234881024 data_used: 12378112
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:55.817791+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230121472 unmapped: 40697856 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2f6e000/0x0/0x4ffc00000, data 0x1e30ca7/0x1f04000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:56.817923+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230121472 unmapped: 40697856 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:57.818052+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230121472 unmapped: 40697856 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:58.818170+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b441d400 session 0x5631b5f663c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b639b680
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230121472 unmapped: 40697856 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:59.818307+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b6417a40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4648000/0x0/0x4ffc00000, data 0x980ca7/0xa54000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1914569 data_alloc: 218103808 data_used: 1888256
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:00.818430+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:01.818560+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:02.818773+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:03.818988+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:04.819157+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1914569 data_alloc: 218103808 data_used: 1888256
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:05.819290+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4670000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:06.819454+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:07.819814+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4670000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:08.819937+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:09.820089+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1914569 data_alloc: 218103808 data_used: 1888256
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:10.820250+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4670000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4670000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:11.820412+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:12.820596+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:13.820743+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:14.820914+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4670000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1914569 data_alloc: 218103808 data_used: 1888256
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:15.821040+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:16.821173+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4670000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:17.821386+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:18.821516+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4670000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:19.821699+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221544448 unmapped: 49274880 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1914569 data_alloc: 218103808 data_used: 1888256
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:20.821826+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221544448 unmapped: 49274880 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:21.821962+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221544448 unmapped: 49274880 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:22.822126+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221544448 unmapped: 49274880 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4670000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:23.822268+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221544448 unmapped: 49274880 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:24.822394+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4670000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221544448 unmapped: 49274880 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1914569 data_alloc: 218103808 data_used: 1888256
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:25.822512+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221544448 unmapped: 49274880 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:26.822640+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221544448 unmapped: 49274880 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:27.822781+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221544448 unmapped: 49274880 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:28.822898+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221544448 unmapped: 49274880 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:29.823012+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4670000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221544448 unmapped: 49274880 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1914569 data_alloc: 218103808 data_used: 1888256
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:30.823143+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221544448 unmapped: 49274880 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:31.823268+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221544448 unmapped: 49274880 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:32.823425+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221544448 unmapped: 49274880 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:33.823636+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221544448 unmapped: 49274880 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:34.823760+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4670000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221544448 unmapped: 49274880 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:35.823892+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1914569 data_alloc: 218103808 data_used: 1888256
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221552640 unmapped: 49266688 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:36.824041+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221552640 unmapped: 49266688 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:37.824187+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221552640 unmapped: 49266688 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:38.824377+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221552640 unmapped: 49266688 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:39.824509+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221552640 unmapped: 49266688 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:40.824627+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1914569 data_alloc: 218103808 data_used: 1888256
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4670000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221552640 unmapped: 49266688 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:41.824773+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221552640 unmapped: 49266688 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:42.824940+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221552640 unmapped: 49266688 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:43.825064+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221552640 unmapped: 49266688 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:44.825248+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4670000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221552640 unmapped: 49266688 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:45.825540+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1914569 data_alloc: 218103808 data_used: 1888256
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221552640 unmapped: 49266688 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:46.825684+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221552640 unmapped: 49266688 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:47.825814+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b441d000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 57.480453491s of 57.782039642s, submitted: 106
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _renew_subs
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _send_mon_message to mon.compute-1 at v2:192.168.122.101:3300/0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 143 handle_osd_map epochs [144,144], i have 144, src has [1,144]
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221593600 unmapped: 49225728 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:48.825964+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f4670000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _renew_subs
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _send_mon_message to mon.compute-1 at v2:192.168.122.101:3300/0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f466b000/0x0/0x4ffc00000, data 0x95bba7/0xa30000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 145 ms_handle_reset con 0x5631b441d000 session 0x5631b5397c20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221593600 unmapped: 49225728 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:49.826080+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f466b000/0x0/0x4ffc00000, data 0x95bba7/0xa30000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221593600 unmapped: 49225728 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:50.826295+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1927667 data_alloc: 218103808 data_used: 1888256
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221593600 unmapped: 49225728 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:51.826489+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221593600 unmapped: 49225728 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:52.826727+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f4667000/0x0/0x4ffc00000, data 0x95db70/0xa34000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221601792 unmapped: 49217536 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:53.826947+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221601792 unmapped: 49217536 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:54.827127+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221601792 unmapped: 49217536 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:55.827270+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1927667 data_alloc: 218103808 data_used: 1888256
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f4667000/0x0/0x4ffc00000, data 0x95db70/0xa34000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5586800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 145 ms_handle_reset con 0x5631b5586800 session 0x5631b6bf2d20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221618176 unmapped: 49201152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:56.827428+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221618176 unmapped: 49201152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:57.827609+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f3800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b55f3800 session 0x5631b5f623c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b40f7000 session 0x5631b3379a40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b441d000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b441d000 session 0x5631b33705a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221618176 unmapped: 49201152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b441d400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b441d400 session 0x5631b337b0e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f4664000/0x0/0x4ffc00000, data 0x95f95e/0xa37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.240273476s of 10.334043503s, submitted: 50
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:58.827710+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b5197400 session 0x5631b636fa40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fbc00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631bb8fbc00 session 0x5631b2b3be00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b40f7000 session 0x5631b6921a40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b441d000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b441d000 session 0x5631b6bf3c20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b441d400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b441d400 session 0x5631b35dfc20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221732864 unmapped: 49086464 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f3c8f000/0x0/0x4ffc00000, data 0x133595e/0x140d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:59.827840+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221732864 unmapped: 49086464 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:00.827982+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2000623 data_alloc: 218103808 data_used: 1888256
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221732864 unmapped: 49086464 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:01.828108+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221732864 unmapped: 49086464 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:02.828258+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221732864 unmapped: 49086464 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:03.828383+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221732864 unmapped: 49086464 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:04.828604+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221732864 unmapped: 49086464 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f3c8f000/0x0/0x4ffc00000, data 0x133595e/0x140d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:05.828801+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2000623 data_alloc: 218103808 data_used: 1888256
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221741056 unmapped: 49078272 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b5197400 session 0x5631b2b3b0e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:06.828929+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221741056 unmapped: 49078272 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fa400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fb400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:07.829229+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221741056 unmapped: 49078272 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:08.829402+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f3c8e000/0x0/0x4ffc00000, data 0x1335981/0x140e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221757440 unmapped: 49061888 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:09.829505+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222748672 unmapped: 48070656 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:10.829690+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2072012 data_alloc: 234881024 data_used: 12201984
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222748672 unmapped: 48070656 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:11.829808+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b68d1000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.287197113s of 13.350550652s, submitted: 13
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b68d1000 session 0x5631b331d680
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222756864 unmapped: 48062464 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:12.830031+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222765056 unmapped: 48054272 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:13.830183+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f3c8c000/0x0/0x4ffc00000, data 0x13359f3/0x1410000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222773248 unmapped: 48046080 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:14.830409+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222773248 unmapped: 48046080 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:15.830542+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2075678 data_alloc: 234881024 data_used: 12210176
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222781440 unmapped: 48037888 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:16.830716+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f3c8c000/0x0/0x4ffc00000, data 0x13359f3/0x1410000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222781440 unmapped: 48037888 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:17.830895+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f3c8c000/0x0/0x4ffc00000, data 0x13359f3/0x1410000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222781440 unmapped: 48037888 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:18.831054+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225435648 unmapped: 45383680 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:19.831181+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225452032 unmapped: 45367296 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:20.831332+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2134968 data_alloc: 234881024 data_used: 13152256
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 226541568 unmapped: 44277760 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:21.831487+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 226549760 unmapped: 44269568 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:22.831619+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 226549760 unmapped: 44269568 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:23.831734+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f322e000/0x0/0x4ffc00000, data 0x19839f3/0x1a5e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xaf6f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 226549760 unmapped: 44269568 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:24.831877+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 226549760 unmapped: 44269568 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:25.832001+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2147658 data_alloc: 234881024 data_used: 13295616
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.303266525s of 14.531414986s, submitted: 67
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225419264 unmapped: 45400064 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:26.832141+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225419264 unmapped: 45400064 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:27.832281+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225427456 unmapped: 45391872 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:28.832421+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b441d000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b441d000 session 0x5631b53974a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225435648 unmapped: 45383680 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:29.832544+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f322a000/0x0/0x4ffc00000, data 0x1985a65/0x1a62000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xaf6f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225435648 unmapped: 45383680 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:30.832686+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2146348 data_alloc: 234881024 data_used: 13295616
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225435648 unmapped: 45383680 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:31.832807+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225435648 unmapped: 45383680 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:32.833027+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225435648 unmapped: 45383680 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:33.833144+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225435648 unmapped: 45383680 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:34.833311+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b441d400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f2089000/0x0/0x4ffc00000, data 0x1986a65/0x1a63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b40f7000 session 0x5631b386f4a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f2089000/0x0/0x4ffc00000, data 0x1986a65/0x1a63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225435648 unmapped: 45383680 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:35.833490+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2146426 data_alloc: 234881024 data_used: 13303808
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225443840 unmapped: 45375488 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:36.833660+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b5197400 session 0x5631b402a5a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b51c6800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.851206779s of 10.898889542s, submitted: 9
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225452032 unmapped: 45367296 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b51c6800 session 0x5631b6b381e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:37.833792+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225452032 unmapped: 45367296 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:38.833918+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225452032 unmapped: 45367296 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:39.834051+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f208b000/0x0/0x4ffc00000, data 0x19869f3/0x1a61000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225460224 unmapped: 45359104 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:40.834197+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2146472 data_alloc: 234881024 data_used: 13303808
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225460224 unmapped: 45359104 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:41.834321+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225460224 unmapped: 45359104 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:42.834560+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225460224 unmapped: 45359104 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:43.834683+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f208b000/0x0/0x4ffc00000, data 0x19869f3/0x1a61000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225460224 unmapped: 45359104 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:44.834833+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225460224 unmapped: 45359104 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:45.834981+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2146472 data_alloc: 234881024 data_used: 13303808
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f208b000/0x0/0x4ffc00000, data 0x19869f3/0x1a61000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:46.835159+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225460224 unmapped: 45359104 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:47.835294+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225460224 unmapped: 45359104 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:48.835420+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225460224 unmapped: 45359104 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f208b000/0x0/0x4ffc00000, data 0x19869f3/0x1a61000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:49.835572+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225460224 unmapped: 45359104 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f208b000/0x0/0x4ffc00000, data 0x19869f3/0x1a61000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:50.835816+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225460224 unmapped: 45359104 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2146472 data_alloc: 234881024 data_used: 13303808
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.099124908s of 14.182600975s, submitted: 13
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:51.835967+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225460224 unmapped: 45359104 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:52.836200+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225460224 unmapped: 45359104 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:53.836394+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225460224 unmapped: 45359104 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b441d400 session 0x5631b6c072c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:54.836572+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:55.836752+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f2089000/0x0/0x4ffc00000, data 0x19879f3/0x1a62000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2146872 data_alloc: 234881024 data_used: 13303808
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:56.836942+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:57.837075+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:58.837244+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:59.837413+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f2089000/0x0/0x4ffc00000, data 0x19879f3/0x1a62000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:00.837574+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2146872 data_alloc: 234881024 data_used: 13303808
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:01.838451+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:02.838670+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:03.838808+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f2089000/0x0/0x4ffc00000, data 0x19879f3/0x1a62000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:04.838952+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:05.839077+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2146872 data_alloc: 234881024 data_used: 13303808
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:06.839228+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f2089000/0x0/0x4ffc00000, data 0x19879f3/0x1a62000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:07.839377+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:08.839557+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:09.839690+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:10.839845+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2146872 data_alloc: 234881024 data_used: 13303808
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:11.840702+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:12.840877+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f2089000/0x0/0x4ffc00000, data 0x19879f3/0x1a62000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225476608 unmapped: 45342720 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:13.841038+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225476608 unmapped: 45342720 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:14.841122+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225476608 unmapped: 45342720 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f2089000/0x0/0x4ffc00000, data 0x19879f3/0x1a62000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:15.841283+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b40f7000 session 0x5631b64e74a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b3365400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.129930496s of 24.143545151s, submitted: 3
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225476608 unmapped: 45342720 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b3365400 session 0x5631b53970e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2144984 data_alloc: 234881024 data_used: 13299712
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:16.841403+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225476608 unmapped: 45342720 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:17.841580+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225476608 unmapped: 45342720 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631bb8fa400 session 0x5631b63a9a40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631bb8fb400 session 0x5631b3670b40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f208c000/0x0/0x4ffc00000, data 0x1987981/0x1a60000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:18.841791+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225476608 unmapped: 45342720 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b55f7000 session 0x5631b5394000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:19.841932+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218218496 unmapped: 52600832 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:20.842082+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218218496 unmapped: 52600832 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1946858 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:21.842250+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218218496 unmapped: 52600832 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:22.842458+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218218496 unmapped: 52600832 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:23.842587+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218218496 unmapped: 52600832 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b4000/0x0/0x4ffc00000, data 0x95f95e/0xa37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b3365400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b3365400 session 0x5631b5397e00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:24.842704+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218243072 unmapped: 52576256 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b40f7000 session 0x5631b3379a40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:25.842861+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218243072 unmapped: 52576256 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1945316 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:26.842972+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218243072 unmapped: 52576256 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:27.843146+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218243072 unmapped: 52576256 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fa400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.070165634s of 12.286311150s, submitted: 71
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631bb8fa400 session 0x5631b3ff3680
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:28.843383+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218251264 unmapped: 52568064 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b4000/0x0/0x4ffc00000, data 0x95f96d/0xa38000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:29.843574+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218251264 unmapped: 52568064 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:30.843767+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218251264 unmapped: 52568064 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1950348 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:31.843948+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218251264 unmapped: 52568064 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:32.844099+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218251264 unmapped: 52568064 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:33.844233+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218251264 unmapped: 52568064 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:34.844432+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218251264 unmapped: 52568064 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b4000/0x0/0x4ffc00000, data 0x95f96d/0xa38000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:35.844565+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218251264 unmapped: 52568064 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1950348 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:36.844711+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b4000/0x0/0x4ffc00000, data 0x95f96d/0xa38000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218251264 unmapped: 52568064 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:37.844842+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b4000/0x0/0x4ffc00000, data 0x95f96d/0xa38000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218251264 unmapped: 52568064 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:38.844968+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218251264 unmapped: 52568064 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b4000/0x0/0x4ffc00000, data 0x95f96d/0xa38000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:39.845496+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218251264 unmapped: 52568064 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:40.845590+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218251264 unmapped: 52568064 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1950348 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:41.845774+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218251264 unmapped: 52568064 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fb400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631bb8fb400 session 0x5631b644c1e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b68d1800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b68d1800 session 0x5631b64e7860
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b3365400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b3365400 session 0x5631b5397c20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b40f7000 session 0x5631b636fa40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b68d1800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.203206062s of 14.218482971s, submitted: 4
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b68d1800 session 0x5631b6921a40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fa400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631bb8fa400 session 0x5631b69d01e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fb400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631bb8fb400 session 0x5631b3362f00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b3365400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b3365400 session 0x5631b6c07680
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:42.845972+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b40f7000 session 0x5631b5f652c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218529792 unmapped: 52289536 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f26b9000/0x0/0x4ffc00000, data 0x13589df/0x1433000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:43.846117+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218529792 unmapped: 52289536 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:44.846284+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218529792 unmapped: 52289536 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:45.846459+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218529792 unmapped: 52289536 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2036782 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:46.846655+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218529792 unmapped: 52289536 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:47.846821+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218529792 unmapped: 52289536 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f26b9000/0x0/0x4ffc00000, data 0x13589df/0x1433000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:48.847025+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218537984 unmapped: 52281344 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:49.847212+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218537984 unmapped: 52281344 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:50.847470+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218546176 unmapped: 52273152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2036782 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f26b9000/0x0/0x4ffc00000, data 0x13589df/0x1433000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:51.847608+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b68d1800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f26b9000/0x0/0x4ffc00000, data 0x13589df/0x1433000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [0,0,1])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b68d1800 session 0x5631b63a94a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218882048 unmapped: 51937280 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fa400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.918356895s of 10.087418556s, submitted: 52
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b51c7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:52.847781+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218890240 unmapped: 51929088 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:53.847917+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 219987968 unmapped: 50831360 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:54.848106+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223608832 unmapped: 47210496 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:55.848231+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223608832 unmapped: 47210496 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2108895 data_alloc: 234881024 data_used: 12058624
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b2d4e400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b2d4e400 session 0x5631b64283c0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:56.848393+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223617024 unmapped: 47202304 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f2691000/0x0/0x4ffc00000, data 0x137fa02/0x145b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:57.848536+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223617024 unmapped: 47202304 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:58.848695+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223625216 unmapped: 47194112 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:59.848825+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223625216 unmapped: 47194112 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:00.848986+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223625216 unmapped: 47194112 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2112409 data_alloc: 234881024 data_used: 12062720
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:01.849188+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223625216 unmapped: 47194112 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:02.849433+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bc6b4400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223625216 unmapped: 47194112 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f268f000/0x0/0x4ffc00000, data 0x137fa74/0x145d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:03.849587+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223625216 unmapped: 47194112 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f268f000/0x0/0x4ffc00000, data 0x137fa74/0x145d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.809801102s of 11.819898605s, submitted: 4
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:04.849738+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 228433920 unmapped: 42385408 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:05.849880+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229515264 unmapped: 41304064 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2225273 data_alloc: 234881024 data_used: 12775424
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f1aff000/0x0/0x4ffc00000, data 0x1f0da74/0x1feb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:06.849961+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230375424 unmapped: 40443904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:07.850114+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230375424 unmapped: 40443904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f1aaa000/0x0/0x4ffc00000, data 0x1f5ca74/0x203a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f1aaa000/0x0/0x4ffc00000, data 0x1f5ca74/0x203a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:08.850300+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230375424 unmapped: 40443904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:09.850476+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230375424 unmapped: 40443904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:10.850629+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230375424 unmapped: 40443904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2227097 data_alloc: 234881024 data_used: 12918784
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f1aaa000/0x0/0x4ffc00000, data 0x1f5ca74/0x203a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:11.850805+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230383616 unmapped: 40435712 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:12.850978+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230383616 unmapped: 40435712 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:13.851117+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230383616 unmapped: 40435712 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:14.851279+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230391808 unmapped: 40427520 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:15.851403+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230391808 unmapped: 40427520 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2222749 data_alloc: 234881024 data_used: 12922880
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:16.851533+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230391808 unmapped: 40427520 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f1a8e000/0x0/0x4ffc00000, data 0x1f80a74/0x205e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:17.851725+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f1a8e000/0x0/0x4ffc00000, data 0x1f80a74/0x205e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230391808 unmapped: 40427520 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:18.870909+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230391808 unmapped: 40427520 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f1a8e000/0x0/0x4ffc00000, data 0x1f80a74/0x205e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:19.871070+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230400000 unmapped: 40419328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:20.871220+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230400000 unmapped: 40419328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2222749 data_alloc: 234881024 data_used: 12922880
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:21.871399+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230400000 unmapped: 40419328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:22.871531+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230408192 unmapped: 40411136 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:23.872220+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230408192 unmapped: 40411136 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.634010315s of 19.986522675s, submitted: 139
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:24.872431+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f1a8b000/0x0/0x4ffc00000, data 0x1f83a74/0x2061000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230408192 unmapped: 40411136 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:25.872591+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230408192 unmapped: 40411136 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2222489 data_alloc: 234881024 data_used: 12922880
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631bc6b4400 session 0x5631b6428b40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:26.872771+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230408192 unmapped: 40411136 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f1a8b000/0x0/0x4ffc00000, data 0x1f83a74/0x2061000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:27.873023+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230416384 unmapped: 40402944 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:28.873138+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230416384 unmapped: 40402944 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:29.873451+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230416384 unmapped: 40402944 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:30.873677+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230416384 unmapped: 40402944 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2222489 data_alloc: 234881024 data_used: 12922880
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _send_mon_message to mon.compute-1 at v2:192.168.122.101:3300/0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:31.873990+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230416384 unmapped: 40402944 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f1a8b000/0x0/0x4ffc00000, data 0x1f83a74/0x2061000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:32.874208+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230424576 unmapped: 40394752 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:33.874431+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230424576 unmapped: 40394752 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:34.874557+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230424576 unmapped: 40394752 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:35.874723+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230424576 unmapped: 40394752 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2222489 data_alloc: 234881024 data_used: 12922880
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.534486771s of 12.544783592s, submitted: 2
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:36.874839+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230424576 unmapped: 40394752 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:37.874988+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f1a8b000/0x0/0x4ffc00000, data 0x1f83a74/0x2061000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230432768 unmapped: 40386560 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:38.875147+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230432768 unmapped: 40386560 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:39.875324+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f1a8b000/0x0/0x4ffc00000, data 0x1f83a74/0x2061000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230432768 unmapped: 40386560 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:40.875548+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230432768 unmapped: 40386560 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2222657 data_alloc: 234881024 data_used: 12922880
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:41.875731+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230432768 unmapped: 40386560 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:42.875930+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f1a8b000/0x0/0x4ffc00000, data 0x1f83a74/0x2061000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230432768 unmapped: 40386560 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:43.876091+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230432768 unmapped: 40386560 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:44.876227+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230440960 unmapped: 40378368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:45.876406+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230440960 unmapped: 40378368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2222657 data_alloc: 234881024 data_used: 12922880
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:46.876656+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f1a8b000/0x0/0x4ffc00000, data 0x1f83a74/0x2061000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230449152 unmapped: 40370176 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:47.876786+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230449152 unmapped: 40370176 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b2d4e400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b2d4e400 session 0x5631b644c5a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b3365400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.738879204s of 11.748149872s, submitted: 3
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b3365400 session 0x5631b3870960
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:48.876903+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230457344 unmapped: 40361984 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:49.877076+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230457344 unmapped: 40361984 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631bb8fa400 session 0x5631b6c070e0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b51c7000 session 0x5631b5f66f00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:50.877208+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230465536 unmapped: 40353792 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2220696 data_alloc: 234881024 data_used: 12922880
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:51.877310+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f1a8d000/0x0/0x4ffc00000, data 0x1f83a02/0x205f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b40f7000 session 0x5631b6bf3c20
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222167040 unmapped: 48652288 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:52.877487+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222167040 unmapped: 48652288 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:53.877623+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222167040 unmapped: 48652288 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b2000/0x0/0x4ffc00000, data 0x95f96d/0xa38000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:54.877804+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222167040 unmapped: 48652288 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:55.877950+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222167040 unmapped: 48652288 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1972234 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:56.878125+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222167040 unmapped: 48652288 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:57.878279+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222167040 unmapped: 48652288 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b2d4e400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:58.878424+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.059069633s of 10.277308464s, submitted: 75
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b2d4e400 session 0x5631b6b52f00
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b3365400
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b3365400 session 0x5631b337b860
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:59.878592+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:00.878773+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:01.878957+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:02.879238+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:03.879414+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:04.879543+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:05.879739+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:06.879893+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:07.880028+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:08.880180+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:09.880320+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:10.880525+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:11.880728+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:12.880934+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:13.881126+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:14.881253+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:15.881448+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:16.881620+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:17.881822+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:18.881962+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:19.882137+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:20.882302+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:21.882450+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:22.882665+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:23.882828+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:24.882964+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:25.883028+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222191616 unmapped: 48627712 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:26.883157+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222191616 unmapped: 48627712 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:27.883303+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222191616 unmapped: 48627712 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:28.883461+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222191616 unmapped: 48627712 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:29.883581+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222191616 unmapped: 48627712 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:30.883706+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222191616 unmapped: 48627712 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:31.883841+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222191616 unmapped: 48627712 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:32.883999+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222191616 unmapped: 48627712 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:33.884117+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b9fb0400 session 0x5631b6bf25a0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222208000 unmapped: 48611328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:34.884281+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222208000 unmapped: 48611328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:35.884396+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222208000 unmapped: 48611328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:36.884513+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222208000 unmapped: 48611328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:37.884635+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222208000 unmapped: 48611328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:38.884776+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222208000 unmapped: 48611328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:39.884962+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222208000 unmapped: 48611328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:40.885179+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222208000 unmapped: 48611328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:41.885395+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222208000 unmapped: 48611328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:42.885587+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222208000 unmapped: 48611328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:43.885713+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222216192 unmapped: 48603136 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:44.885962+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222216192 unmapped: 48603136 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:45.886212+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222216192 unmapped: 48603136 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:46.886400+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222216192 unmapped: 48603136 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:47.886566+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222216192 unmapped: 48603136 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:48.886752+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222216192 unmapped: 48603136 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:49.886946+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222216192 unmapped: 48603136 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:50.887143+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222216192 unmapped: 48603136 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:51.887283+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222224384 unmapped: 48594944 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:52.887482+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222224384 unmapped: 48594944 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:53.887608+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222224384 unmapped: 48594944 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:54.887732+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222224384 unmapped: 48594944 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:55.887864+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222224384 unmapped: 48594944 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:56.888026+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222224384 unmapped: 48594944 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:57.888179+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222224384 unmapped: 48594944 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:58.888321+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222224384 unmapped: 48594944 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:59.888420+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222232576 unmapped: 48586752 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:00.888623+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222232576 unmapped: 48586752 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:01.888750+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222232576 unmapped: 48586752 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:02.888907+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222232576 unmapped: 48586752 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:03.889037+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222232576 unmapped: 48586752 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:04.889235+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222232576 unmapped: 48586752 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:05.889409+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222232576 unmapped: 48586752 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:06.889562+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222232576 unmapped: 48586752 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:07.889738+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222240768 unmapped: 48578560 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:08.889871+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222240768 unmapped: 48578560 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:09.890118+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222240768 unmapped: 48578560 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:10.890305+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222240768 unmapped: 48578560 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:11.890421+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222240768 unmapped: 48578560 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:12.890590+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222240768 unmapped: 48578560 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:13.890671+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222240768 unmapped: 48578560 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:14.890822+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222248960 unmapped: 48570368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:15.890942+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222248960 unmapped: 48570368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:16.891089+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222248960 unmapped: 48570368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:17.891276+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222248960 unmapped: 48570368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:18.891474+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222248960 unmapped: 48570368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:19.891608+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222248960 unmapped: 48570368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:20.891797+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222248960 unmapped: 48570368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:21.892060+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222248960 unmapped: 48570368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:22.892207+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222248960 unmapped: 48570368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:23.892387+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222248960 unmapped: 48570368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:24.892579+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222248960 unmapped: 48570368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:25.892742+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222248960 unmapped: 48570368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:26.892891+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222248960 unmapped: 48570368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:27.892991+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222257152 unmapped: 48562176 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:28.893129+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222257152 unmapped: 48562176 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:29.893248+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222257152 unmapped: 48562176 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:30.893394+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222257152 unmapped: 48562176 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:31.893527+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222257152 unmapped: 48562176 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:32.893684+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222257152 unmapped: 48562176 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:33.893826+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222257152 unmapped: 48562176 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:34.893950+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222257152 unmapped: 48562176 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:35.894107+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222257152 unmapped: 48562176 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:36.894233+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222257152 unmapped: 48562176 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:37.894368+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: mgrc ms_handle_reset ms_handle_reset con 0x5631bb8fa800
Sep 30 18:56:35 compute-0 ceph-osd[82241]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2285351161
Sep 30 18:56:35 compute-0 ceph-osd[82241]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2285351161,v1:192.168.122.100:6801/2285351161]
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: get_auth_request con 0x5631bb8fb400 auth_method 0
Sep 30 18:56:35 compute-0 ceph-osd[82241]: mgrc handle_mgr_configure stats_period=5
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222265344 unmapped: 48553984 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:38.894727+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b9fb0000 session 0x5631b59d9860
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b51c7000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b73dfc00 session 0x5631b3ff3a40
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fa000
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222265344 unmapped: 48553984 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:39.894837+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222265344 unmapped: 48553984 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:40.894974+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222265344 unmapped: 48553984 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:41.895151+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222265344 unmapped: 48553984 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:42.895322+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222265344 unmapped: 48553984 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:43.895525+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222273536 unmapped: 48545792 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:44.895699+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222273536 unmapped: 48545792 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:45.895877+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222273536 unmapped: 48545792 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:46.896047+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222273536 unmapped: 48545792 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:47.896192+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222273536 unmapped: 48545792 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:48.896395+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222273536 unmapped: 48545792 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:49.896583+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222273536 unmapped: 48545792 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:50.896746+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222273536 unmapped: 48545792 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:51.896889+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 48537600 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:52.897068+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 48537600 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:53.897232+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 48537600 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:54.897359+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 48537600 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:55.897468+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 48537600 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:56.897593+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 48537600 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:57.897710+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 48537600 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:58.897825+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 48537600 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:59.897942+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 48537600 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:00.898048+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222289920 unmapped: 48529408 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:01.898169+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 18:56:35 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 18:56:35 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 18:56:35 compute-0 ceph-osd[82241]: do_command 'config diff' '{prefix=config diff}'
Sep 30 18:56:35 compute-0 ceph-osd[82241]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Sep 30 18:56:35 compute-0 ceph-osd[82241]: do_command 'config show' '{prefix=config show}'
Sep 30 18:56:35 compute-0 ceph-osd[82241]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222019584 unmapped: 48799744 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: do_command 'counter dump' '{prefix=counter dump}'
Sep 30 18:56:35 compute-0 ceph-osd[82241]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:02.898326+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: do_command 'counter schema' '{prefix=counter schema}'
Sep 30 18:56:35 compute-0 ceph-osd[82241]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221487104 unmapped: 49332224 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:03.898458+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221806592 unmapped: 49012736 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 18:56:35 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:04.898675+0000)
Sep 30 18:56:35 compute-0 ceph-osd[82241]: do_command 'log dump' '{prefix=log dump}'
Sep 30 18:56:35 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19200 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Sep 30 18:56:35 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3483878862' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Sep 30 18:56:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2354: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 621 B/s rd, 0 op/s
Sep 30 18:56:35 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19204 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:35 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Sep 30 18:56:35 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0)
Sep 30 18:56:35 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3075206590' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 18:56:36 compute-0 ceph-mon[73755]: from='client.19192 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3094080132' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Sep 30 18:56:36 compute-0 ceph-mon[73755]: from='client.19200 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3483878862' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Sep 30 18:56:36 compute-0 ceph-mon[73755]: pgmap v2354: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 621 B/s rd, 0 op/s
Sep 30 18:56:36 compute-0 ceph-mon[73755]: from='client.19204 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3075206590' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 18:56:36 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19212 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr versions"} v 0)
Sep 30 18:56:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2855355197' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Sep 30 18:56:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:56:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:56:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2696771638' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:56:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:56:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2696771638' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:56:36 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19220 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:56:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:56:36.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:56:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon stat"} v 0)
Sep 30 18:56:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2970473258' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Sep 30 18:56:36 compute-0 crontab[372777]: (root) LIST (root)
Sep 30 18:56:37 compute-0 ceph-mon[73755]: from='client.19212 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2855355197' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Sep 30 18:56:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2696771638' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:56:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2696771638' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:56:37 compute-0 ceph-mon[73755]: from='client.19220 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:37 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2970473258' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Sep 30 18:56:37 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19236 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:56:37.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:56:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:56:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:56:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:56:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:56:37.447Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:56:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:56:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:56:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:56:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:56:37 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19244 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:37 compute-0 nova_compute[265391]: 2025-09-30 18:56:37.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:56:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2355: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 614 B/s rd, 0 op/s
Sep 30 18:56:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "node ls"} v 0)
Sep 30 18:56:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3860624165' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Sep 30 18:56:37 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19252 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:38 compute-0 ceph-mon[73755]: from='client.19236 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:56:38 compute-0 ceph-mon[73755]: from='client.19244 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:38 compute-0 ceph-mon[73755]: pgmap v2355: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 614 B/s rd, 0 op/s
Sep 30 18:56:38 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3860624165' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Sep 30 18:56:38 compute-0 ceph-mon[73755]: from='client.19252 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:38 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Sep 30 18:56:38 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3707973157' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Sep 30 18:56:38 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19260 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:38 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Sep 30 18:56:38 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2357832739' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Sep 30 18:56:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:56:38.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:38 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Sep 30 18:56:38 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1477497579' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Sep 30 18:56:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:56:38] "GET /metrics HTTP/1.1" 200 46747 "" "Prometheus/2.51.0"
Sep 30 18:56:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:56:38] "GET /metrics HTTP/1.1" 200 46747 "" "Prometheus/2.51.0"
Sep 30 18:56:38 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Sep 30 18:56:38 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2775808468' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Sep 30 18:56:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:56:38.979Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:56:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:56:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:56:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:56:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:56:39 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3707973157' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Sep 30 18:56:39 compute-0 ceph-mon[73755]: from='client.19260 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:39 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2357832739' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Sep 30 18:56:39 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1477497579' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Sep 30 18:56:39 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2775808468' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Sep 30 18:56:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Sep 30 18:56:39 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3112684509' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Sep 30 18:56:39 compute-0 nova_compute[265391]: 2025-09-30 18:56:39.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:56:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Sep 30 18:56:39 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3280220950' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Sep 30 18:56:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:56:39.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Sep 30 18:56:39 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3556323966' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Sep 30 18:56:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Sep 30 18:56:39 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/519856706' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Sep 30 18:56:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2356: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:56:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Sep 30 18:56:40 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/233717800' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Sep 30 18:56:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Sep 30 18:56:40 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1452845119' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Sep 30 18:56:40 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3112684509' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Sep 30 18:56:40 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3280220950' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Sep 30 18:56:40 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3556323966' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Sep 30 18:56:40 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/519856706' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Sep 30 18:56:40 compute-0 ceph-mon[73755]: pgmap v2356: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:56:40 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/233717800' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Sep 30 18:56:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Sep 30 18:56:40 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/74545643' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Sep 30 18:56:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata"} v 0)
Sep 30 18:56:40 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/710692951' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Sep 30 18:56:40 compute-0 podman[373252]: 2025-09-30 18:56:40.56517628 +0000 UTC m=+0.104072459 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.build-date=20250930)
Sep 30 18:56:40 compute-0 podman[373251]: 2025-09-30 18:56:40.565303513 +0000 UTC m=+0.096201205 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Sep 30 18:56:40 compute-0 podman[373253]: 2025-09-30 18:56:40.57869335 +0000 UTC m=+0.117374783 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9-minimal, version=9.6, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, architecture=x86_64, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, vcs-type=git, build-date=2025-08-20T13:12:41, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Sep 30 18:56:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:56:40.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Sep 30 18:56:40 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3063329717' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Sep 30 18:56:40 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd utilization"} v 0)
Sep 30 18:56:40 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3305935773' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Sep 30 18:56:41 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1452845119' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Sep 30 18:56:41 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/74545643' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Sep 30 18:56:41 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/710692951' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Sep 30 18:56:41 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3063329717' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Sep 30 18:56:41 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3305935773' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Sep 30 18:56:41 compute-0 systemd[1]: Starting Hostname Service...
Sep 30 18:56:41 compute-0 ceph-osd[82241]: bluestore.MempoolThread fragmentation_score=0.001074 took=0.000030s
Sep 30 18:56:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Sep 30 18:56:41 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1044403615' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Sep 30 18:56:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:56:41.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:41 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19320 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:41 compute-0 systemd[1]: Started Hostname Service.
Sep 30 18:56:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:56:41 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19324 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2357: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:56:41 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19328 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:42 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19332 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:42 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1044403615' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Sep 30 18:56:42 compute-0 ceph-mon[73755]: from='client.19320 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:42 compute-0 ceph-mon[73755]: from='client.19324 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:42 compute-0 ceph-mon[73755]: pgmap v2357: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:56:42 compute-0 ceph-mon[73755]: from='client.19328 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:42 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19340 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:42 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "quorum_status"} v 0)
Sep 30 18:56:42 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1252293978' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Sep 30 18:56:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:56:42.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:42 compute-0 nova_compute[265391]: 2025-09-30 18:56:42.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:56:42 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19348 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:43 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19356 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "versions"} v 0)
Sep 30 18:56:43 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2223569940' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Sep 30 18:56:43 compute-0 ceph-mon[73755]: from='client.19332 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:43 compute-0 ceph-mon[73755]: from='client.19340 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:43 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1252293978' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Sep 30 18:56:43 compute-0 ceph-mon[73755]: from='client.19348 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:43 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2223569940' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Sep 30 18:56:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:56:43.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:43 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19360 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:43 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Sep 30 18:56:43 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1698645699' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Sep 30 18:56:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2358: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:56:43 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19368 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:56:43.977Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:56:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:56:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:56:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:56:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:56:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Sep 30 18:56:44 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2598323234' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Sep 30 18:56:44 compute-0 nova_compute[265391]: 2025-09-30 18:56:44.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:56:44 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 18:56:44 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 18:56:44 compute-0 ceph-mon[73755]: from='client.19356 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:44 compute-0 ceph-mon[73755]: from='client.19360 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:44 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1698645699' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Sep 30 18:56:44 compute-0 ceph-mon[73755]: pgmap v2358: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:56:44 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2598323234' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Sep 30 18:56:44 compute-0 ceph-mon[73755]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 18:56:44 compute-0 ceph-mon[73755]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 18:56:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:56:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:56:44.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:56:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config dump"} v 0)
Sep 30 18:56:44 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4267779373' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Sep 30 18:56:45 compute-0 sudo[373909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:56:45 compute-0 sudo[373909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:56:45 compute-0 sudo[373909]: pam_unix(sudo:session): session closed for user root
Sep 30 18:56:45 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19392 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:45 compute-0 ceph-mon[73755]: from='client.19368 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:45 compute-0 ceph-mon[73755]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 18:56:45 compute-0 ceph-mon[73755]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 18:56:45 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/4267779373' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Sep 30 18:56:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:56:45.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Sep 30 18:56:45 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1425410265' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Sep 30 18:56:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2359: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:56:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df"} v 0)
Sep 30 18:56:45 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2660580480' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Sep 30 18:56:46 compute-0 ceph-mon[73755]: from='client.19392 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:46 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1425410265' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Sep 30 18:56:46 compute-0 ceph-mon[73755]: pgmap v2359: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:56:46 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2660580480' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Sep 30 18:56:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "fs dump"} v 0)
Sep 30 18:56:46 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3345172031' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Sep 30 18:56:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:56:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:56:46.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "fs ls"} v 0)
Sep 30 18:56:46 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1197372166' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Sep 30 18:56:47 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19412 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:56:47.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:56:47.448Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:56:47 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3345172031' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Sep 30 18:56:47 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1197372166' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Sep 30 18:56:47 compute-0 nova_compute[265391]: 2025-09-30 18:56:47.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:56:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2360: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:56:47 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mds stat"} v 0)
Sep 30 18:56:47 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1908694273' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Sep 30 18:56:48 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump"} v 0)
Sep 30 18:56:48 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1935332311' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Sep 30 18:56:48 compute-0 ceph-mon[73755]: from='client.19412 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:48 compute-0 ceph-mon[73755]: pgmap v2360: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:56:48 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1908694273' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Sep 30 18:56:48 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1935332311' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Sep 30 18:56:48 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19424 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:56:48.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:56:48] "GET /metrics HTTP/1.1" 200 46747 "" "Prometheus/2.51.0"
Sep 30 18:56:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:56:48] "GET /metrics HTTP/1.1" 200 46747 "" "Prometheus/2.51.0"
Sep 30 18:56:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:56:48.979Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:56:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:56:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:56:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:56:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:56:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Sep 30 18:56:49 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2459810991' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Sep 30 18:56:49 compute-0 nova_compute[265391]: 2025-09-30 18:56:49.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:56:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:56:49.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:49 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19432 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2361: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:56:49 compute-0 ceph-mon[73755]: from='client.19424 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:49 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2459810991' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Sep 30 18:56:49 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19436 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd dump"} v 0)
Sep 30 18:56:50 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1595750658' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Sep 30 18:56:50 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Sep 30 18:56:50 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3978494488' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Sep 30 18:56:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:56:50.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:50 compute-0 ceph-mon[73755]: from='client.19432 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:50 compute-0 ceph-mon[73755]: pgmap v2361: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:56:50 compute-0 ceph-mon[73755]: from='client.19436 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:50 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1595750658' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Sep 30 18:56:50 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3978494488' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Sep 30 18:56:51 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19448 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:56:51.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:56:51 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19452 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:51 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:51 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:56:51 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:51 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:56:51 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:51 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:56:51 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:51 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:56:51 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:51 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:56:51 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:51 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:56:51 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:51 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:56:51 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:51 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:56:51 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:51 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:56:51 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:51 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:56:51 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:51 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:56:51 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:56:51 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:56:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2362: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:56:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Sep 30 18:56:51 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1274110017' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Sep 30 18:56:51 compute-0 ceph-mon[73755]: from='client.19448 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:51 compute-0 ceph-mon[73755]: from='client.19452 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:51 compute-0 ceph-mon[73755]: pgmap v2362: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:56:51 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1274110017' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Sep 30 18:56:52 compute-0 ovs-appctl[375301]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Sep 30 18:56:52 compute-0 ovs-appctl[375316]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Sep 30 18:56:52 compute-0 ovs-appctl[375320]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Sep 30 18:56:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd stat"} v 0)
Sep 30 18:56:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2651078752' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Sep 30 18:56:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:56:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:56:52 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.27865 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:56:52.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:52 compute-0 nova_compute[265391]: 2025-09-30 18:56:52.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:56:53 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2651078752' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Sep 30 18:56:53 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:56:53 compute-0 ceph-mon[73755]: from='client.27865 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:53 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19464 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:56:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:56:53.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:56:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2363: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:56:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:56:53.977Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:56:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:56:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:56:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:56:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:56:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Sep 30 18:56:54 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/21004290' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Sep 30 18:56:54 compute-0 ceph-mon[73755]: from='client.19464 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 18:56:54 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/575942045' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Sep 30 18:56:54 compute-0 ceph-mon[73755]: pgmap v2363: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:56:54 compute-0 nova_compute[265391]: 2025-09-30 18:56:54.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:56:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:56:54.356 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:56:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:56:54.357 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:56:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:56:54.357 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:56:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Sep 30 18:56:54 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2954428967' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Sep 30 18:56:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:56:54.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:54 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19480 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:55 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/21004290' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Sep 30 18:56:55 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2954428967' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Sep 30 18:56:55 compute-0 ceph-mon[73755]: from='client.19480 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Sep 30 18:56:55 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/855589697' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Sep 30 18:56:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 18:56:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:56:55.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 18:56:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Sep 30 18:56:55 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3105430286' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Sep 30 18:56:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2364: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:56:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Sep 30 18:56:56 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2686002295' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Sep 30 18:56:56 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/855589697' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Sep 30 18:56:56 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3105430286' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Sep 30 18:56:56 compute-0 ceph-mon[73755]: pgmap v2364: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:56:56 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2686002295' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Sep 30 18:56:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Sep 30 18:56:56 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3693464253' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Sep 30 18:56:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:56:56 compute-0 podman[376776]: 2025-09-30 18:56:56.553686413 +0000 UTC m=+0.093631107 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible)
Sep 30 18:56:56 compute-0 podman[376780]: 2025-09-30 18:56:56.553889819 +0000 UTC m=+0.089749138 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 18:56:56 compute-0 podman[376779]: 2025-09-30 18:56:56.587456079 +0000 UTC m=+0.127023714 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_controller)
Sep 30 18:56:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:56:56.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:56 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19500 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3693464253' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Sep 30 18:56:57 compute-0 ceph-mon[73755]: from='client.19500 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Sep 30 18:56:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4208053099' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Sep 30 18:56:57 compute-0 sshd-session[376409]: Invalid user admin from 185.156.73.233 port 30146
Sep 30 18:56:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:56:57.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:56:57.449Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:56:57 compute-0 sshd-session[376409]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 18:56:57 compute-0 sshd-session[376409]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=185.156.73.233
Sep 30 18:56:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:56:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2810434453' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:56:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:56:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2810434453' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:56:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2365: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:56:57 compute-0 nova_compute[265391]: 2025-09-30 18:56:57.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:56:58 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19520 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/4208053099' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Sep 30 18:56:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2810434453' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:56:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2810434453' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:56:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1576960322' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Sep 30 18:56:58 compute-0 ceph-mon[73755]: pgmap v2365: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:56:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Sep 30 18:56:58 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2905651480' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Sep 30 18:56:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:56:58.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:58 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19528 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:56:58] "GET /metrics HTTP/1.1" 200 46747 "" "Prometheus/2.51.0"
Sep 30 18:56:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:56:58] "GET /metrics HTTP/1.1" 200 46747 "" "Prometheus/2.51.0"
Sep 30 18:56:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:56:58.980Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:56:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:56:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:56:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:56:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:56:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:56:59 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19532 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:59 compute-0 ceph-mon[73755]: from='client.19520 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:59 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2905651480' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Sep 30 18:56:59 compute-0 ceph-mon[73755]: from='client.19528 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:56:59 compute-0 nova_compute[265391]: 2025-09-30 18:56:59.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:56:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:56:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:56:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:56:59.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:56:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Sep 30 18:56:59 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1113064749' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Sep 30 18:56:59 compute-0 sshd-session[376409]: Failed password for invalid user admin from 185.156.73.233 port 30146 ssh2
Sep 30 18:56:59 compute-0 podman[276673]: time="2025-09-30T18:56:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:56:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:56:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:56:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2366: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:56:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:56:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10319 "" "Go-http-client/1.1"
Sep 30 18:56:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Sep 30 18:56:59 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3341084094' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Sep 30 18:57:00 compute-0 sshd-session[376409]: Connection closed by invalid user admin 185.156.73.233 port 30146 [preauth]
Sep 30 18:57:00 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19544 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:57:00 compute-0 ceph-mon[73755]: from='client.19532 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:57:00 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1113064749' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Sep 30 18:57:00 compute-0 ceph-mon[73755]: pgmap v2366: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:57:00 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3341084094' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Sep 30 18:57:00 compute-0 nova_compute[265391]: 2025-09-30 18:57:00.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:57:00 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19548 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:57:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:57:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:57:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:57:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:57:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:57:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:57:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:57:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:57:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:57:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:57:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:57:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:57:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:57:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:57:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:57:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:57:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:57:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:57:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:57:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:57:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:57:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:57:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:57:00 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:57:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:57:00.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Sep 30 18:57:00 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2381867134' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Sep 30 18:57:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:57:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:57:01.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:57:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Sep 30 18:57:01 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/525131232' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Sep 30 18:57:01 compute-0 ceph-mon[73755]: from='client.19544 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:57:01 compute-0 ceph-mon[73755]: from='client.19548 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:57:01 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2381867134' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Sep 30 18:57:01 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/525131232' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Sep 30 18:57:01 compute-0 openstack_network_exporter[279566]: ERROR   18:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:57:01 compute-0 openstack_network_exporter[279566]: ERROR   18:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:57:01 compute-0 openstack_network_exporter[279566]: ERROR   18:57:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:57:01 compute-0 openstack_network_exporter[279566]: ERROR   18:57:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:57:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:57:01 compute-0 openstack_network_exporter[279566]: ERROR   18:57:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:57:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:57:01 compute-0 nova_compute[265391]: 2025-09-30 18:57:01.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:57:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:57:01 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19560 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:57:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2367: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:57:02 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19564 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:57:02 compute-0 ceph-mon[73755]: from='client.19560 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:57:02 compute-0 ceph-mon[73755]: pgmap v2367: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:57:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Sep 30 18:57:02 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/718661898' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Sep 30 18:57:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:57:02.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:02 compute-0 nova_compute[265391]: 2025-09-30 18:57:02.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:57:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Sep 30 18:57:03 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/450687706' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Sep 30 18:57:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:57:03.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:03 compute-0 ceph-mon[73755]: from='client.19564 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 18:57:03 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/718661898' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Sep 30 18:57:03 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/450687706' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Sep 30 18:57:03 compute-0 virtqemud[265263]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Sep 30 18:57:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2368: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:57:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:57:03.978Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:57:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:57:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:57:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:57:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:57:04 compute-0 nova_compute[265391]: 2025-09-30 18:57:04.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:57:04 compute-0 ceph-mon[73755]: pgmap v2368: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:57:04 compute-0 systemd[1]: Starting Time & Date Service...
Sep 30 18:57:04 compute-0 systemd[1]: Started Time & Date Service.
Sep 30 18:57:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:57:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:57:04.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:57:05 compute-0 sudo[377786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:57:05 compute-0 sudo[377786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:57:05 compute-0 sudo[377786]: pam_unix(sudo:session): session closed for user root
Sep 30 18:57:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:57:05.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2369: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:57:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:57:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:57:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:57:06.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:57:06 compute-0 ceph-mon[73755]: pgmap v2369: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:57:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:57:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:57:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:57:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:57:07.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:57:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:57:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:57:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:57:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:57:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:57:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:57:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:57:07.451Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:57:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2370: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:57:07 compute-0 nova_compute[265391]: 2025-09-30 18:57:07.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:57:07 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3153050889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:57:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:57:08
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['.mgr', 'vms', 'volumes', 'default.rgw.log', '.nfs', 'images', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta']
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:57:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:57:08.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:57:08] "GET /metrics HTTP/1.1" 200 46745 "" "Prometheus/2.51.0"
Sep 30 18:57:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:57:08] "GET /metrics HTTP/1.1" 200 46745 "" "Prometheus/2.51.0"
Sep 30 18:57:08 compute-0 ceph-mon[73755]: pgmap v2370: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:57:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:57:08.981Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:57:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:57:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:57:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:57:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:57:09 compute-0 nova_compute[265391]: 2025-09-30 18:57:09.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:57:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:57:09.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:09 compute-0 nova_compute[265391]: 2025-09-30 18:57:09.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:57:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2371: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:57:09 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/769490593' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:57:09 compute-0 nova_compute[265391]: 2025-09-30 18:57:09.945 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:57:09 compute-0 nova_compute[265391]: 2025-09-30 18:57:09.945 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:57:09 compute-0 nova_compute[265391]: 2025-09-30 18:57:09.947 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.002s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:57:09 compute-0 nova_compute[265391]: 2025-09-30 18:57:09.948 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:57:09 compute-0 nova_compute[265391]: 2025-09-30 18:57:09.948 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:57:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:57:10 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1594932922' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:57:10 compute-0 nova_compute[265391]: 2025-09-30 18:57:10.397 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:57:10 compute-0 nova_compute[265391]: 2025-09-30 18:57:10.577 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:57:10 compute-0 nova_compute[265391]: 2025-09-30 18:57:10.578 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:57:10 compute-0 nova_compute[265391]: 2025-09-30 18:57:10.610 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.033s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:57:10 compute-0 nova_compute[265391]: 2025-09-30 18:57:10.611 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4014MB free_disk=39.992183685302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:57:10 compute-0 nova_compute[265391]: 2025-09-30 18:57:10.611 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:57:10 compute-0 nova_compute[265391]: 2025-09-30 18:57:10.611 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:57:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:57:10.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:10 compute-0 ceph-mon[73755]: pgmap v2371: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:57:10 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1594932922' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:57:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:57:11.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:57:11 compute-0 podman[377841]: 2025-09-30 18:57:11.543142633 +0000 UTC m=+0.080962990 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Sep 30 18:57:11 compute-0 podman[377842]: 2025-09-30 18:57:11.543152183 +0000 UTC m=+0.078338892 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, distribution-scope=public, maintainer=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, io.openshift.expose-services=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal)
Sep 30 18:57:11 compute-0 podman[377840]: 2025-09-30 18:57:11.544931009 +0000 UTC m=+0.081115694 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_id=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Sep 30 18:57:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2372: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:57:11 compute-0 nova_compute[265391]: 2025-09-30 18:57:11.783 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:57:11 compute-0 nova_compute[265391]: 2025-09-30 18:57:11.783 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:57:10 up  2:00,  0 user,  load average: 1.74, 0.84, 0.77\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:57:11 compute-0 nova_compute[265391]: 2025-09-30 18:57:11.803 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:57:11 compute-0 ceph-mon[73755]: pgmap v2372: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:57:12 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:57:12 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/285837713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:57:12 compute-0 nova_compute[265391]: 2025-09-30 18:57:12.249 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:57:12 compute-0 nova_compute[265391]: 2025-09-30 18:57:12.254 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:57:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:57:12.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:12 compute-0 nova_compute[265391]: 2025-09-30 18:57:12.793 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:57:12 compute-0 nova_compute[265391]: 2025-09-30 18:57:12.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:57:12 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/285837713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:57:13 compute-0 nova_compute[265391]: 2025-09-30 18:57:13.305 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:57:13 compute-0 nova_compute[265391]: 2025-09-30 18:57:13.306 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.694s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:57:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:57:13.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2373: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:57:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:57:13.978Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:57:13 compute-0 ceph-mon[73755]: pgmap v2373: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:57:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:57:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:57:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:57:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:57:14 compute-0 nova_compute[265391]: 2025-09-30 18:57:14.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:57:14 compute-0 nova_compute[265391]: 2025-09-30 18:57:14.305 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:57:14 compute-0 nova_compute[265391]: 2025-09-30 18:57:14.306 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:57:14 compute-0 nova_compute[265391]: 2025-09-30 18:57:14.306 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:57:14 compute-0 nova_compute[265391]: 2025-09-30 18:57:14.306 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:57:14 compute-0 nova_compute[265391]: 2025-09-30 18:57:14.306 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:57:14 compute-0 nova_compute[265391]: 2025-09-30 18:57:14.307 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:57:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:57:14.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:57:15.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2374: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:57:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:57:16 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Sep 30 18:57:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:57:16.449472) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 18:57:16 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Sep 30 18:57:16 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258636449506, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 1950, "num_deletes": 256, "total_data_size": 2929798, "memory_usage": 2976016, "flush_reason": "Manual Compaction"}
Sep 30 18:57:16 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Sep 30 18:57:16 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258636462156, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 2856896, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57201, "largest_seqno": 59150, "table_properties": {"data_size": 2847979, "index_size": 5218, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 22323, "raw_average_key_size": 21, "raw_value_size": 2828651, "raw_average_value_size": 2699, "num_data_blocks": 227, "num_entries": 1048, "num_filter_entries": 1048, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759258489, "oldest_key_time": 1759258489, "file_creation_time": 1759258636, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:57:16 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 12741 microseconds, and 6718 cpu microseconds.
Sep 30 18:57:16 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:57:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:57:16.462205) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 2856896 bytes OK
Sep 30 18:57:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:57:16.462233) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Sep 30 18:57:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:57:16.464155) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Sep 30 18:57:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:57:16.464171) EVENT_LOG_v1 {"time_micros": 1759258636464166, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 18:57:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:57:16.464189) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 18:57:16 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 2920968, prev total WAL file size 2920968, number of live WAL files 2.
Sep 30 18:57:16 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:57:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:57:16.465280) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323630' seq:72057594037927935, type:22 .. '6C6F676D0032353132' seq:0, type:0; will stop at (end)
Sep 30 18:57:16 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 18:57:16 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(2789KB)], [134(10MB)]
Sep 30 18:57:16 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258636465378, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 14117825, "oldest_snapshot_seqno": -1}
Sep 30 18:57:16 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 8375 keys, 13981811 bytes, temperature: kUnknown
Sep 30 18:57:16 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258636549146, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 13981811, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13929888, "index_size": 29858, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20997, "raw_key_size": 221656, "raw_average_key_size": 26, "raw_value_size": 13784536, "raw_average_value_size": 1645, "num_data_blocks": 1162, "num_entries": 8375, "num_filter_entries": 8375, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759258636, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:57:16 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:57:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:57:16.549580) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 13981811 bytes
Sep 30 18:57:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:57:16.551316) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.3 rd, 166.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 10.7 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(9.8) write-amplify(4.9) OK, records in: 8901, records dropped: 526 output_compression: NoCompression
Sep 30 18:57:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:57:16.551373) EVENT_LOG_v1 {"time_micros": 1759258636551332, "job": 82, "event": "compaction_finished", "compaction_time_micros": 83876, "compaction_time_cpu_micros": 49511, "output_level": 6, "num_output_files": 1, "total_output_size": 13981811, "num_input_records": 8901, "num_output_records": 8375, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 18:57:16 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:57:16 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258636552330, "job": 82, "event": "table_file_deletion", "file_number": 136}
Sep 30 18:57:16 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:57:16 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258636556302, "job": 82, "event": "table_file_deletion", "file_number": 134}
Sep 30 18:57:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:57:16.465126) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:57:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:57:16.556398) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:57:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:57:16.556405) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:57:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:57:16.556408) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:57:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:57:16.556411) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:57:16 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:57:16.556414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:57:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:57:16.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:16 compute-0 ceph-mon[73755]: pgmap v2374: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:57:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:57:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:57:17.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:57:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:57:17.454Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:57:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2375: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:57:17 compute-0 nova_compute[265391]: 2025-09-30 18:57:17.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:57:17 compute-0 ceph-mon[73755]: pgmap v2375: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:57:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:57:18.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:57:18] "GET /metrics HTTP/1.1" 200 46745 "" "Prometheus/2.51.0"
Sep 30 18:57:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:57:18] "GET /metrics HTTP/1.1" 200 46745 "" "Prometheus/2.51.0"
Sep 30 18:57:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:57:18.982Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:57:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:57:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:57:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:57:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:57:19 compute-0 nova_compute[265391]: 2025-09-30 18:57:19.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:57:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:57:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:57:19.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:57:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2376: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:57:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:57:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:57:20.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:57:20 compute-0 ceph-mon[73755]: pgmap v2376: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:57:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:57:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:57:21.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:57:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:57:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2377: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:57:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:57:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:57:22 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-crash-compute-0[79187]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Sep 30 18:57:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:57:22.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:22 compute-0 nova_compute[265391]: 2025-09-30 18:57:22.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:57:22 compute-0 ceph-mon[73755]: pgmap v2377: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:57:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:57:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:57:23.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2378: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:57:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:57:23.980Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:57:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:57:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:57:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:57:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:57:24 compute-0 sshd-session[377923]: error: kex_exchange_identification: read: Connection timed out
Sep 30 18:57:24 compute-0 sshd-session[377923]: banner exchange: Connection from 115.190.39.222 port 45610: Connection timed out
Sep 30 18:57:24 compute-0 nova_compute[265391]: 2025-09-30 18:57:24.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:57:24 compute-0 ceph-mon[73755]: pgmap v2378: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:57:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:57:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:57:24.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:57:25 compute-0 sudo[377935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:57:25 compute-0 sudo[377935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:57:25 compute-0 sudo[377935]: pam_unix(sudo:session): session closed for user root
Sep 30 18:57:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:57:25.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2379: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:57:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:57:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:57:26.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:26 compute-0 ceph-mon[73755]: pgmap v2379: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:57:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:57:27.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:57:27.454Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:57:27 compute-0 podman[377962]: 2025-09-30 18:57:27.56919056 +0000 UTC m=+0.098309799 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Sep 30 18:57:27 compute-0 podman[377964]: 2025-09-30 18:57:27.588265894 +0000 UTC m=+0.112509867 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Sep 30 18:57:27 compute-0 podman[377963]: 2025-09-30 18:57:27.588265614 +0000 UTC m=+0.114986061 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0)
Sep 30 18:57:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2380: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:57:27 compute-0 nova_compute[265391]: 2025-09-30 18:57:27.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:57:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:57:28.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
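The radosgw "beast" lines in this stretch are anonymous HEAD / probes arriving from 192.168.122.100 and .101 roughly every second, which is the pattern of load-balancer health checks rather than real object traffic. A small sketch for pulling client, request, status and latency out of such a line for quick filtering; the regular expression is an assumption about the beast access-log layout, verified only against the lines shown here:

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>\S+)'
    )

    line = ('beast: 0x7f25f11485d0: 192.168.122.100 - anonymous '
            '[30/Sep/2025:18:57:28.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST.search(line)
    print(m.group("client"), m.group("req"), m.group("status"), m.group("latency"))
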
Sep 30 18:57:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:57:28] "GET /metrics HTTP/1.1" 200 46745 "" "Prometheus/2.51.0"
Sep 30 18:57:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:57:28] "GET /metrics HTTP/1.1" 200 46745 "" "Prometheus/2.51.0"
Sep 30 18:57:28 compute-0 ceph-mon[73755]: pgmap v2380: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:57:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:57:28.983Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
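The Alertmanager dispatcher keeps failing to deliver ceph-dashboard webhook notifications because the prometheus receiver on compute-1 never answers within the client deadline. A quick way to tell a dead listener from a merely slow one is to post to the same endpoint with a short timeout; this is only a diagnostic sketch, and the payload below is a placeholder, not a real Alertmanager notification:

    import json
    import urllib.request

    # Endpoint taken from the Alertmanager error above.
    URL = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"

    req = urllib.request.Request(
        URL,
        data=json.dumps({"alerts": []}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("receiver answered:", resp.status)
    except OSError as exc:
        # A timeout or refused connection here matches the
        # "context deadline exceeded" that Alertmanager reports.
        print("receiver unreachable:", exc)
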
Sep 30 18:57:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:57:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:57:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:57:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:57:29 compute-0 nova_compute[265391]: 2025-09-30 18:57:29.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:57:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:57:29.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:29 compute-0 podman[276673]: time="2025-09-30T18:57:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:57:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:57:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:57:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:57:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10313 "" "Go-http-client/1.1"
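The two API hits above are prometheus-podman-exporter polling the libpod REST service over the socket named in its config_data (CONTAINER_HOST=unix:///run/podman/podman.sock). The same list call can be reproduced from the Python standard library by pointing an HTTP connection at that Unix socket (root is needed to open it); the response field names follow the libpod list-containers endpoint:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a Unix socket, enough for the libpod REST API."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    # Socket path taken from the podman_exporter config above; needs root.
    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for ctr in json.loads(conn.getresponse().read()):
        print(ctr["Names"][0], ctr["State"])
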
Sep 30 18:57:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2381: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:57:30 compute-0 ceph-mon[73755]: pgmap v2381: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:57:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:57:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:57:30.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:57:31 compute-0 openstack_network_exporter[279566]: ERROR   18:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:57:31 compute-0 openstack_network_exporter[279566]: ERROR   18:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:57:31 compute-0 openstack_network_exporter[279566]: ERROR   18:57:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:57:31 compute-0 openstack_network_exporter[279566]: ERROR   18:57:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:57:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:57:31 compute-0 openstack_network_exporter[279566]: ERROR   18:57:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:57:31 compute-0 openstack_network_exporter[279566]: 
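The openstack_network_exporter errors above are expected on a compute node: ovn-northd does not run here, and the OVS userspace (dpif-netdev) datapath it queries for PMD statistics does not exist, so the exporter finds no control sockets to call. A minimal check of which control sockets are actually present, assuming the default OVS/OVN run directories (adjust the paths if a non-default rundir is in use):

    from pathlib import Path

    # Typical control-socket locations; an assumption for this host.
    RUN_DIRS = [Path("/var/run/openvswitch"), Path("/var/run/ovn")]

    for run_dir in RUN_DIRS:
        sockets = sorted(p.name for p in run_dir.glob("*.ctl")) if run_dir.is_dir() else []
        print(f"{run_dir}: {sockets or 'no control sockets'}")
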
Sep 30 18:57:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:57:31.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:57:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2382: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:57:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:57:32.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:32 compute-0 sudo[378037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:57:32 compute-0 sudo[378037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:57:32 compute-0 sudo[378037]: pam_unix(sudo:session): session closed for user root
Sep 30 18:57:32 compute-0 ceph-mon[73755]: pgmap v2382: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:57:32 compute-0 nova_compute[265391]: 2025-09-30 18:57:32.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:57:32 compute-0 sudo[378062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:57:32 compute-0 sudo[378062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:57:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:57:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:57:33.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:57:33 compute-0 sudo[378062]: pam_unix(sudo:session): session closed for user root
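The sudo session above is cephadm's periodic `gather-facts` call, which dumps host inventory (CPU, memory, NICs, kernel) as JSON for the orchestrator. It can be run by hand the same way; a sketch that pulls a couple of fields out, assuming `cephadm` is on PATH and the caller is root (treat the field names as an assumption on other releases):

    import json
    import subprocess

    facts = json.loads(subprocess.run(
        ["cephadm", "gather-facts"],
        capture_output=True, text=True, check=True,
    ).stdout)
    print(facts.get("hostname"), facts.get("cpu_count"), facts.get("memory_total_kb"))
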
Sep 30 18:57:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:57:33 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:57:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:57:33 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:57:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:57:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2383: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 779 B/s rd, 0 op/s
Sep 30 18:57:33 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:57:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:57:33 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:57:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:57:33 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:57:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:57:33 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:57:33 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:57:33 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:57:33 compute-0 sudo[378119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:57:33 compute-0 sudo[378119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:57:33 compute-0 sudo[378119]: pam_unix(sudo:session): session closed for user root
Sep 30 18:57:33 compute-0 sudo[378144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:57:33 compute-0 sudo[378144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:57:33 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:57:33 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:57:33 compute-0 ceph-mon[73755]: pgmap v2383: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 779 B/s rd, 0 op/s
Sep 30 18:57:33 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:57:33 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:57:33 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:57:33 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:57:33 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
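The burst of mon_command dispatches above is cephadm (running inside ceph-mgr) gathering what it needs before touching the OSDs: a minimal ceph.conf, the client.admin and client.bootstrap-osd keys, and the tree of destroyed OSDs. The same queries can be issued from any node that has an admin keyring; a sketch via the ceph CLI (the presence of /etc/ceph/ceph.conf and an admin keyring is an assumption):

    import subprocess

    def ceph(*args: str) -> str:
        # Thin wrapper over the ceph CLI.
        return subprocess.run(["ceph", *args], capture_output=True,
                              text=True, check=True).stdout

    print(ceph("config", "generate-minimal-conf"))
    print(ceph("osd", "tree", "destroyed", "--format", "json"))
    print(ceph("auth", "get", "client.bootstrap-osd"))
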
Sep 30 18:57:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:57:33.980Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:57:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:57:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:57:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:57:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:57:34 compute-0 podman[378210]: 2025-09-30 18:57:34.120553739 +0000 UTC m=+0.055074368 container create 329c795d72c18eeb1a50940925d9de924b36dac76efc6f14cc0a595271abddca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:57:34 compute-0 systemd[1]: Started libpod-conmon-329c795d72c18eeb1a50940925d9de924b36dac76efc6f14cc0a595271abddca.scope.
Sep 30 18:57:34 compute-0 podman[378210]: 2025-09-30 18:57:34.088356005 +0000 UTC m=+0.022876704 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:57:34 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:57:34 compute-0 podman[378210]: 2025-09-30 18:57:34.223125318 +0000 UTC m=+0.157645977 container init 329c795d72c18eeb1a50940925d9de924b36dac76efc6f14cc0a595271abddca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Sep 30 18:57:34 compute-0 nova_compute[265391]: 2025-09-30 18:57:34.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:57:34 compute-0 podman[378210]: 2025-09-30 18:57:34.230865218 +0000 UTC m=+0.165385847 container start 329c795d72c18eeb1a50940925d9de924b36dac76efc6f14cc0a595271abddca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:57:34 compute-0 podman[378210]: 2025-09-30 18:57:34.234996165 +0000 UTC m=+0.169516814 container attach 329c795d72c18eeb1a50940925d9de924b36dac76efc6f14cc0a595271abddca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_einstein, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Sep 30 18:57:34 compute-0 stoic_einstein[378227]: 167 167
Sep 30 18:57:34 compute-0 podman[378210]: 2025-09-30 18:57:34.237763997 +0000 UTC m=+0.172284626 container died 329c795d72c18eeb1a50940925d9de924b36dac76efc6f14cc0a595271abddca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_einstein, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:57:34 compute-0 systemd[1]: libpod-329c795d72c18eeb1a50940925d9de924b36dac76efc6f14cc0a595271abddca.scope: Deactivated successfully.
Sep 30 18:57:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-f17e645ed9f61ff97e3418e5203528ce7df3dccbddf8bb1442bd116e312a9ba3-merged.mount: Deactivated successfully.
Sep 30 18:57:34 compute-0 podman[378210]: 2025-09-30 18:57:34.277502527 +0000 UTC m=+0.212023156 container remove 329c795d72c18eeb1a50940925d9de924b36dac76efc6f14cc0a595271abddca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:57:34 compute-0 systemd[1]: libpod-conmon-329c795d72c18eeb1a50940925d9de924b36dac76efc6f14cc0a595271abddca.scope: Deactivated successfully.
Sep 30 18:57:34 compute-0 podman[378250]: 2025-09-30 18:57:34.444024863 +0000 UTC m=+0.054700099 container create a05e2a936a5bb882d292bd142b8448d1977282fcd11f067cd1e54c565aff8f1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Sep 30 18:57:34 compute-0 systemd[1]: Started libpod-conmon-a05e2a936a5bb882d292bd142b8448d1977282fcd11f067cd1e54c565aff8f1d.scope.
Sep 30 18:57:34 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:57:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76f1ec71c734d9c251b682b1e898e1191d8de6800ec9386ab43fa743b4ccc9bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:57:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76f1ec71c734d9c251b682b1e898e1191d8de6800ec9386ab43fa743b4ccc9bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:57:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76f1ec71c734d9c251b682b1e898e1191d8de6800ec9386ab43fa743b4ccc9bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:57:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76f1ec71c734d9c251b682b1e898e1191d8de6800ec9386ab43fa743b4ccc9bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:57:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76f1ec71c734d9c251b682b1e898e1191d8de6800ec9386ab43fa743b4ccc9bc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:57:34 compute-0 podman[378250]: 2025-09-30 18:57:34.425827591 +0000 UTC m=+0.036502847 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:57:34 compute-0 podman[378250]: 2025-09-30 18:57:34.535139435 +0000 UTC m=+0.145814671 container init a05e2a936a5bb882d292bd142b8448d1977282fcd11f067cd1e54c565aff8f1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Sep 30 18:57:34 compute-0 podman[378250]: 2025-09-30 18:57:34.541475409 +0000 UTC m=+0.152150645 container start a05e2a936a5bb882d292bd142b8448d1977282fcd11f067cd1e54c565aff8f1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_rubin, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Sep 30 18:57:34 compute-0 podman[378250]: 2025-09-30 18:57:34.544726493 +0000 UTC m=+0.155401759 container attach a05e2a936a5bb882d292bd142b8448d1977282fcd11f067cd1e54c565aff8f1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_rubin, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Sep 30 18:57:34 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Sep 30 18:57:34 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 30 18:57:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:57:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:57:34.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:57:34 compute-0 hopeful_rubin[378266]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:57:34 compute-0 hopeful_rubin[378266]: --> All data devices are unavailable
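The hopeful_rubin container is the `ceph-volume lvm batch --no-auto /dev/ceph_vg0/ceph_lv0` run requested by cephadm a few lines earlier; it reports the lone LVM data device as unavailable, which ceph-volume does when the LV already carries OSD metadata (the lvm list report further down confirms it is osd.0). One way to check that directly is to read the LV's ceph.* tags via LVM's JSON report; the LV path below is taken from the cephadm command above:

    import json
    import subprocess

    LV = "/dev/ceph_vg0/ceph_lv0"

    out = subprocess.run(
        ["lvs", "--reportformat", "json", "-o", "lv_name,lv_tags", LV],
        capture_output=True, text=True, check=True,
    ).stdout
    lv = json.loads(out)["report"][0]["lv"][0]
    # An LV already prepared as an OSD carries ceph.osd_id=... in its tags.
    print("already prepared as an OSD" if "ceph.osd_id=" in lv["lv_tags"]
          else "free for batch")
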
Sep 30 18:57:34 compute-0 systemd[1]: libpod-a05e2a936a5bb882d292bd142b8448d1977282fcd11f067cd1e54c565aff8f1d.scope: Deactivated successfully.
Sep 30 18:57:34 compute-0 podman[378250]: 2025-09-30 18:57:34.880737602 +0000 UTC m=+0.491412848 container died a05e2a936a5bb882d292bd142b8448d1977282fcd11f067cd1e54c565aff8f1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_rubin, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 18:57:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-76f1ec71c734d9c251b682b1e898e1191d8de6800ec9386ab43fa743b4ccc9bc-merged.mount: Deactivated successfully.
Sep 30 18:57:34 compute-0 podman[378250]: 2025-09-30 18:57:34.93237312 +0000 UTC m=+0.543048366 container remove a05e2a936a5bb882d292bd142b8448d1977282fcd11f067cd1e54c565aff8f1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Sep 30 18:57:34 compute-0 systemd[1]: libpod-conmon-a05e2a936a5bb882d292bd142b8448d1977282fcd11f067cd1e54c565aff8f1d.scope: Deactivated successfully.
Sep 30 18:57:34 compute-0 sudo[378144]: pam_unix(sudo:session): session closed for user root
Sep 30 18:57:35 compute-0 sudo[378297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:57:35 compute-0 sudo[378297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:57:35 compute-0 sudo[378297]: pam_unix(sudo:session): session closed for user root
Sep 30 18:57:35 compute-0 sudo[378322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:57:35 compute-0 sudo[378322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:57:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:57:35.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:35 compute-0 podman[378388]: 2025-09-30 18:57:35.477754226 +0000 UTC m=+0.038641023 container create 5fa663ddf16a48539c306bfead65830e7b1d394e88619d2c2c909e047106543d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:57:35 compute-0 systemd[1]: Started libpod-conmon-5fa663ddf16a48539c306bfead65830e7b1d394e88619d2c2c909e047106543d.scope.
Sep 30 18:57:35 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:57:35 compute-0 podman[378388]: 2025-09-30 18:57:35.458599529 +0000 UTC m=+0.019486346 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:57:35 compute-0 podman[378388]: 2025-09-30 18:57:35.564540405 +0000 UTC m=+0.125427232 container init 5fa663ddf16a48539c306bfead65830e7b1d394e88619d2c2c909e047106543d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jang, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:57:35 compute-0 podman[378388]: 2025-09-30 18:57:35.573159339 +0000 UTC m=+0.134046136 container start 5fa663ddf16a48539c306bfead65830e7b1d394e88619d2c2c909e047106543d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 18:57:35 compute-0 podman[378388]: 2025-09-30 18:57:35.577037739 +0000 UTC m=+0.137924526 container attach 5fa663ddf16a48539c306bfead65830e7b1d394e88619d2c2c909e047106543d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jang, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Sep 30 18:57:35 compute-0 distracted_jang[378405]: 167 167
Sep 30 18:57:35 compute-0 systemd[1]: libpod-5fa663ddf16a48539c306bfead65830e7b1d394e88619d2c2c909e047106543d.scope: Deactivated successfully.
Sep 30 18:57:35 compute-0 podman[378388]: 2025-09-30 18:57:35.581238378 +0000 UTC m=+0.142125245 container died 5fa663ddf16a48539c306bfead65830e7b1d394e88619d2c2c909e047106543d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 18:57:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2384: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 519 B/s rd, 0 op/s
Sep 30 18:57:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-28facdeaecbfd0d36250056b9f1e6a41164b262bdd2ef2a5a353e558f2f9a9f3-merged.mount: Deactivated successfully.
Sep 30 18:57:35 compute-0 podman[378388]: 2025-09-30 18:57:35.634897379 +0000 UTC m=+0.195784186 container remove 5fa663ddf16a48539c306bfead65830e7b1d394e88619d2c2c909e047106543d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:57:35 compute-0 systemd[1]: libpod-conmon-5fa663ddf16a48539c306bfead65830e7b1d394e88619d2c2c909e047106543d.scope: Deactivated successfully.
Sep 30 18:57:35 compute-0 podman[378432]: 2025-09-30 18:57:35.814181765 +0000 UTC m=+0.037605475 container create ef69898a5264430a1adbdd2ce891ede5c60875940cc6b6cfc61e033c4176badf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:57:35 compute-0 systemd[1]: Started libpod-conmon-ef69898a5264430a1adbdd2ce891ede5c60875940cc6b6cfc61e033c4176badf.scope.
Sep 30 18:57:35 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:57:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae2736f92047285966f7cf0dfae7be128554e849a73d0dd0fd582a0a0325d47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:57:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae2736f92047285966f7cf0dfae7be128554e849a73d0dd0fd582a0a0325d47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:57:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae2736f92047285966f7cf0dfae7be128554e849a73d0dd0fd582a0a0325d47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:57:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae2736f92047285966f7cf0dfae7be128554e849a73d0dd0fd582a0a0325d47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:57:35 compute-0 podman[378432]: 2025-09-30 18:57:35.796692982 +0000 UTC m=+0.020116702 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:57:35 compute-0 podman[378432]: 2025-09-30 18:57:35.902683439 +0000 UTC m=+0.126107159 container init ef69898a5264430a1adbdd2ce891ede5c60875940cc6b6cfc61e033c4176badf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:57:35 compute-0 podman[378432]: 2025-09-30 18:57:35.909026494 +0000 UTC m=+0.132450184 container start ef69898a5264430a1adbdd2ce891ede5c60875940cc6b6cfc61e033c4176badf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_stonebraker, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Sep 30 18:57:35 compute-0 podman[378432]: 2025-09-30 18:57:35.913462889 +0000 UTC m=+0.136886609 container attach ef69898a5264430a1adbdd2ce891ede5c60875940cc6b6cfc61e033c4176badf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:57:36 compute-0 intelligent_stonebraker[378449]: {
Sep 30 18:57:36 compute-0 intelligent_stonebraker[378449]:     "0": [
Sep 30 18:57:36 compute-0 intelligent_stonebraker[378449]:         {
Sep 30 18:57:36 compute-0 intelligent_stonebraker[378449]:             "devices": [
Sep 30 18:57:36 compute-0 intelligent_stonebraker[378449]:                 "/dev/loop3"
Sep 30 18:57:36 compute-0 intelligent_stonebraker[378449]:             ],
Sep 30 18:57:36 compute-0 intelligent_stonebraker[378449]:             "lv_name": "ceph_lv0",
Sep 30 18:57:36 compute-0 intelligent_stonebraker[378449]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:57:36 compute-0 intelligent_stonebraker[378449]:             "lv_size": "21470642176",
Sep 30 18:57:36 compute-0 intelligent_stonebraker[378449]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:57:36 compute-0 intelligent_stonebraker[378449]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:57:36 compute-0 intelligent_stonebraker[378449]:             "name": "ceph_lv0",
Sep 30 18:57:36 compute-0 intelligent_stonebraker[378449]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:57:36 compute-0 intelligent_stonebraker[378449]:             "tags": {
Sep 30 18:57:36 compute-0 intelligent_stonebraker[378449]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:57:36 compute-0 intelligent_stonebraker[378449]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:57:36 compute-0 intelligent_stonebraker[378449]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:57:36 compute-0 intelligent_stonebraker[378449]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:57:36 compute-0 intelligent_stonebraker[378449]:                 "ceph.cluster_name": "ceph",
Sep 30 18:57:36 compute-0 intelligent_stonebraker[378449]:                 "ceph.crush_device_class": "",
Sep 30 18:57:36 compute-0 intelligent_stonebraker[378449]:                 "ceph.encrypted": "0",
Sep 30 18:57:36 compute-0 intelligent_stonebraker[378449]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:57:36 compute-0 intelligent_stonebraker[378449]:                 "ceph.osd_id": "0",
Sep 30 18:57:36 compute-0 intelligent_stonebraker[378449]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:57:36 compute-0 intelligent_stonebraker[378449]:                 "ceph.type": "block",
Sep 30 18:57:36 compute-0 intelligent_stonebraker[378449]:                 "ceph.vdo": "0",
Sep 30 18:57:36 compute-0 intelligent_stonebraker[378449]:                 "ceph.with_tpm": "0"
Sep 30 18:57:36 compute-0 intelligent_stonebraker[378449]:             },
Sep 30 18:57:36 compute-0 intelligent_stonebraker[378449]:             "type": "block",
Sep 30 18:57:36 compute-0 intelligent_stonebraker[378449]:             "vg_name": "ceph_vg0"
Sep 30 18:57:36 compute-0 intelligent_stonebraker[378449]:         }
Sep 30 18:57:36 compute-0 intelligent_stonebraker[378449]:     ]
Sep 30 18:57:36 compute-0 intelligent_stonebraker[378449]: }
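The JSON block above is the `ceph-volume lvm list --format json` report for this host: one entry keyed by OSD id, with the backing LV, its physical devices, and the ceph.* tags written at prepare time. A small parser for that shape, assuming the report has been saved to a file (the filename here is illustrative):

    import json

    # `report` is the JSON document printed by the container above.
    with open("lvm_list.json") as fh:
        report = json.load(fh)

    for osd_id, entries in report.items():
        for entry in entries:
            tags = entry["tags"]
            print(f"osd.{osd_id}: {entry['lv_path']} "
                  f"(fsid={tags['ceph.osd_fsid']}, devices={','.join(entry['devices'])})")
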
Sep 30 18:57:36 compute-0 systemd[1]: libpod-ef69898a5264430a1adbdd2ce891ede5c60875940cc6b6cfc61e033c4176badf.scope: Deactivated successfully.
Sep 30 18:57:36 compute-0 podman[378432]: 2025-09-30 18:57:36.230639739 +0000 UTC m=+0.454063439 container died ef69898a5264430a1adbdd2ce891ede5c60875940cc6b6cfc61e033c4176badf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:57:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ae2736f92047285966f7cf0dfae7be128554e849a73d0dd0fd582a0a0325d47-merged.mount: Deactivated successfully.
Sep 30 18:57:36 compute-0 podman[378432]: 2025-09-30 18:57:36.297826051 +0000 UTC m=+0.521249751 container remove ef69898a5264430a1adbdd2ce891ede5c60875940cc6b6cfc61e033c4176badf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_stonebraker, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:57:36 compute-0 systemd[1]: libpod-conmon-ef69898a5264430a1adbdd2ce891ede5c60875940cc6b6cfc61e033c4176badf.scope: Deactivated successfully.
Sep 30 18:57:36 compute-0 sudo[378322]: pam_unix(sudo:session): session closed for user root
Sep 30 18:57:36 compute-0 sudo[378472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:57:36 compute-0 sudo[378472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:57:36 compute-0 sudo[378472]: pam_unix(sudo:session): session closed for user root
Sep 30 18:57:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:57:36 compute-0 sudo[378497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:57:36 compute-0 sudo[378497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:57:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:57:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/143161206' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:57:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:57:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/143161206' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:57:36 compute-0 ceph-mon[73755]: pgmap v2384: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 519 B/s rd, 0 op/s
Sep 30 18:57:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/143161206' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:57:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/143161206' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:57:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:57:36.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:37 compute-0 podman[378562]: 2025-09-30 18:57:37.029953535 +0000 UTC m=+0.062715226 container create 0620f6d1eeebe1f9c5c738aa5fa32a9a5c5f410f5685d6d0587c0308b12a7721 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_babbage, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:57:37 compute-0 systemd[1]: Started libpod-conmon-0620f6d1eeebe1f9c5c738aa5fa32a9a5c5f410f5685d6d0587c0308b12a7721.scope.
Sep 30 18:57:37 compute-0 podman[378562]: 2025-09-30 18:57:36.993600733 +0000 UTC m=+0.026362504 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:57:37 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:57:37 compute-0 podman[378562]: 2025-09-30 18:57:37.106386256 +0000 UTC m=+0.139147957 container init 0620f6d1eeebe1f9c5c738aa5fa32a9a5c5f410f5685d6d0587c0308b12a7721 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_babbage, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:57:37 compute-0 podman[378562]: 2025-09-30 18:57:37.112265439 +0000 UTC m=+0.145027140 container start 0620f6d1eeebe1f9c5c738aa5fa32a9a5c5f410f5685d6d0587c0308b12a7721 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_babbage, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Sep 30 18:57:37 compute-0 loving_babbage[378578]: 167 167
Sep 30 18:57:37 compute-0 systemd[1]: libpod-0620f6d1eeebe1f9c5c738aa5fa32a9a5c5f410f5685d6d0587c0308b12a7721.scope: Deactivated successfully.
Sep 30 18:57:37 compute-0 podman[378562]: 2025-09-30 18:57:37.118037928 +0000 UTC m=+0.150799609 container attach 0620f6d1eeebe1f9c5c738aa5fa32a9a5c5f410f5685d6d0587c0308b12a7721 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_babbage, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:57:37 compute-0 conmon[378578]: conmon 0620f6d1eeebe1f9c5c7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0620f6d1eeebe1f9c5c738aa5fa32a9a5c5f410f5685d6d0587c0308b12a7721.scope/container/memory.events
Sep 30 18:57:37 compute-0 podman[378562]: 2025-09-30 18:57:37.118758947 +0000 UTC m=+0.151520628 container died 0620f6d1eeebe1f9c5c738aa5fa32a9a5c5f410f5685d6d0587c0308b12a7721 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 18:57:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad8b0cb5ba42b4235ece88c1084310b43c3706456ecfc51693c9020296944a15-merged.mount: Deactivated successfully.
Sep 30 18:57:37 compute-0 podman[378562]: 2025-09-30 18:57:37.168994189 +0000 UTC m=+0.201755860 container remove 0620f6d1eeebe1f9c5c738aa5fa32a9a5c5f410f5685d6d0587c0308b12a7721 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 18:57:37 compute-0 systemd[1]: libpod-conmon-0620f6d1eeebe1f9c5c738aa5fa32a9a5c5f410f5685d6d0587c0308b12a7721.scope: Deactivated successfully.
Sep 30 18:57:37 compute-0 podman[378601]: 2025-09-30 18:57:37.329821278 +0000 UTC m=+0.043665963 container create b3a44b1c971c90fa98df6dac60a8e74f8f216119ff230763d1631228b1de0d08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_carson, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Sep 30 18:57:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:57:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:57:37 compute-0 systemd[1]: Started libpod-conmon-b3a44b1c971c90fa98df6dac60a8e74f8f216119ff230763d1631228b1de0d08.scope.
Sep 30 18:57:37 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:57:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd850a14f8384af9d9fa0925ca98039808784bd0ee6bb46a9934222de8ec4e1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:57:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd850a14f8384af9d9fa0925ca98039808784bd0ee6bb46a9934222de8ec4e1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:57:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd850a14f8384af9d9fa0925ca98039808784bd0ee6bb46a9934222de8ec4e1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:57:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd850a14f8384af9d9fa0925ca98039808784bd0ee6bb46a9934222de8ec4e1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:57:37 compute-0 podman[378601]: 2025-09-30 18:57:37.309037029 +0000 UTC m=+0.022881754 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:57:37 compute-0 podman[378601]: 2025-09-30 18:57:37.406116405 +0000 UTC m=+0.119961110 container init b3a44b1c971c90fa98df6dac60a8e74f8f216119ff230763d1631228b1de0d08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Sep 30 18:57:37 compute-0 podman[378601]: 2025-09-30 18:57:37.416781841 +0000 UTC m=+0.130626506 container start b3a44b1c971c90fa98df6dac60a8e74f8f216119ff230763d1631228b1de0d08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 18:57:37 compute-0 podman[378601]: 2025-09-30 18:57:37.420421026 +0000 UTC m=+0.134265731 container attach b3a44b1c971c90fa98df6dac60a8e74f8f216119ff230763d1631228b1de0d08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_carson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:57:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:57:37.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:57:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:57:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:57:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:57:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:57:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:57:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:57:37.455Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:57:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2385: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 519 B/s rd, 0 op/s
Sep 30 18:57:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:57:37 compute-0 nova_compute[265391]: 2025-09-30 18:57:37.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:57:38 compute-0 lvm[378694]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:57:38 compute-0 lvm[378694]: VG ceph_vg0 finished
Sep 30 18:57:38 compute-0 hungry_carson[378618]: {}
Sep 30 18:57:38 compute-0 systemd[1]: libpod-b3a44b1c971c90fa98df6dac60a8e74f8f216119ff230763d1631228b1de0d08.scope: Deactivated successfully.
Sep 30 18:57:38 compute-0 systemd[1]: libpod-b3a44b1c971c90fa98df6dac60a8e74f8f216119ff230763d1631228b1de0d08.scope: Consumed 1.034s CPU time.
Sep 30 18:57:38 compute-0 podman[378601]: 2025-09-30 18:57:38.084645302 +0000 UTC m=+0.798489977 container died b3a44b1c971c90fa98df6dac60a8e74f8f216119ff230763d1631228b1de0d08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Sep 30 18:57:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd850a14f8384af9d9fa0925ca98039808784bd0ee6bb46a9934222de8ec4e1a-merged.mount: Deactivated successfully.
Sep 30 18:57:38 compute-0 podman[378601]: 2025-09-30 18:57:38.134895074 +0000 UTC m=+0.848739749 container remove b3a44b1c971c90fa98df6dac60a8e74f8f216119ff230763d1631228b1de0d08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:57:38 compute-0 systemd[1]: libpod-conmon-b3a44b1c971c90fa98df6dac60a8e74f8f216119ff230763d1631228b1de0d08.scope: Deactivated successfully.
Sep 30 18:57:38 compute-0 sudo[378497]: pam_unix(sudo:session): session closed for user root
Sep 30 18:57:38 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:57:38 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:57:38 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:57:38 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:57:38 compute-0 sudo[378712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:57:38 compute-0 sudo[378712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:57:38 compute-0 sudo[378712]: pam_unix(sudo:session): session closed for user root
Sep 30 18:57:38 compute-0 ceph-mon[73755]: pgmap v2385: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 519 B/s rd, 0 op/s
Sep 30 18:57:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:57:38 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:57:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:57:38.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:57:38] "GET /metrics HTTP/1.1" 200 46746 "" "Prometheus/2.51.0"
Sep 30 18:57:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:57:38] "GET /metrics HTTP/1.1" 200 46746 "" "Prometheus/2.51.0"
Sep 30 18:57:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:57:38.983Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:57:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:57:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:57:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:57:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:57:39 compute-0 nova_compute[265391]: 2025-09-30 18:57:39.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:57:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:57:39.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2386: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 779 B/s rd, 0 op/s
Sep 30 18:57:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:57:40.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:40 compute-0 ceph-mon[73755]: pgmap v2386: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 779 B/s rd, 0 op/s
Sep 30 18:57:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:57:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:57:41.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2387: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 519 B/s rd, 0 op/s
Sep 30 18:57:42 compute-0 ceph-mon[73755]: pgmap v2387: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 519 B/s rd, 0 op/s
Sep 30 18:57:42 compute-0 podman[378742]: 2025-09-30 18:57:42.552277995 +0000 UTC m=+0.085209650 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2)
Sep 30 18:57:42 compute-0 podman[378741]: 2025-09-30 18:57:42.558901386 +0000 UTC m=+0.100148446 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20250930, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.4)
Sep 30 18:57:42 compute-0 podman[378743]: 2025-09-30 18:57:42.571095412 +0000 UTC m=+0.103073922 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.buildah.version=1.33.7, config_id=edpm, distribution-scope=public, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Sep 30 18:57:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:57:42.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:42 compute-0 nova_compute[265391]: 2025-09-30 18:57:42.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:57:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:57:43.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2388: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 779 B/s rd, 0 op/s
Sep 30 18:57:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:57:43.981Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:57:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:57:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:57:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:57:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:57:44 compute-0 nova_compute[265391]: 2025-09-30 18:57:44.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:57:44 compute-0 ceph-mon[73755]: pgmap v2388: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 779 B/s rd, 0 op/s
Sep 30 18:57:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:57:44.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:45 compute-0 sudo[378801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:57:45 compute-0 sudo[378801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:57:45 compute-0 sudo[378801]: pam_unix(sudo:session): session closed for user root
Sep 30 18:57:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:57:45.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2389: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:57:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:57:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:57:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:57:46.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:57:46 compute-0 ceph-mon[73755]: pgmap v2389: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:57:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:57:47.457Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:57:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:57:47.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2390: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:57:47 compute-0 nova_compute[265391]: 2025-09-30 18:57:47.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:57:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:57:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:57:48.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:57:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:57:48] "GET /metrics HTTP/1.1" 200 46746 "" "Prometheus/2.51.0"
Sep 30 18:57:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:57:48] "GET /metrics HTTP/1.1" 200 46746 "" "Prometheus/2.51.0"
Sep 30 18:57:48 compute-0 ceph-mon[73755]: pgmap v2390: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:57:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:57:48.984Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:57:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:57:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:57:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:57:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:57:49 compute-0 nova_compute[265391]: 2025-09-30 18:57:49.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:57:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:57:49.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2391: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:57:50 compute-0 ceph-mon[73755]: pgmap v2391: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:57:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:57:50.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:57:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:57:51.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2392: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:57:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:57:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:57:52 compute-0 ceph-mon[73755]: pgmap v2392: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:57:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:57:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:57:52.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:52 compute-0 nova_compute[265391]: 2025-09-30 18:57:52.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:57:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:57:53.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2393: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:57:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:57:53.982Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:57:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:57:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:57:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:57:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:57:54 compute-0 nova_compute[265391]: 2025-09-30 18:57:54.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:57:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:57:54.358 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:57:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:57:54.358 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:57:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:57:54.359 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:57:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:57:54.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:54 compute-0 ceph-mon[73755]: pgmap v2393: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:57:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:57:55.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2394: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:57:56 compute-0 ceph-mon[73755]: pgmap v2394: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:57:56 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Sep 30 18:57:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:57:56.415641) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 18:57:56 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Sep 30 18:57:56 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258676415680, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 595, "num_deletes": 251, "total_data_size": 763898, "memory_usage": 775080, "flush_reason": "Manual Compaction"}
Sep 30 18:57:56 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Sep 30 18:57:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:57:56 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258676496755, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 751663, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59151, "largest_seqno": 59745, "table_properties": {"data_size": 748406, "index_size": 1164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7633, "raw_average_key_size": 19, "raw_value_size": 741873, "raw_average_value_size": 1892, "num_data_blocks": 51, "num_entries": 392, "num_filter_entries": 392, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759258636, "oldest_key_time": 1759258636, "file_creation_time": 1759258676, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:57:56 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 81176 microseconds, and 3251 cpu microseconds.
Sep 30 18:57:56 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:57:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:57:56.496812) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 751663 bytes OK
Sep 30 18:57:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:57:56.496835) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Sep 30 18:57:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:57:56.513305) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Sep 30 18:57:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:57:56.513397) EVENT_LOG_v1 {"time_micros": 1759258676513384, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 18:57:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:57:56.513425) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 18:57:56 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 760658, prev total WAL file size 760658, number of live WAL files 2.
Sep 30 18:57:56 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:57:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:57:56.514192) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Sep 30 18:57:56 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 18:57:56 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(734KB)], [137(13MB)]
Sep 30 18:57:56 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258676514237, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 14733474, "oldest_snapshot_seqno": -1}
Sep 30 18:57:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:57:56.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:56 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 8252 keys, 12744165 bytes, temperature: kUnknown
Sep 30 18:57:56 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258676812524, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 12744165, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12693992, "index_size": 28428, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20677, "raw_key_size": 219801, "raw_average_key_size": 26, "raw_value_size": 12551614, "raw_average_value_size": 1521, "num_data_blocks": 1095, "num_entries": 8252, "num_filter_entries": 8252, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759258676, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Sep 30 18:57:56 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 18:57:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:57:56.812786) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 12744165 bytes
Sep 30 18:57:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:57:56.980880) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 49.4 rd, 42.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 13.3 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(36.6) write-amplify(17.0) OK, records in: 8767, records dropped: 515 output_compression: NoCompression
Sep 30 18:57:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:57:56.980916) EVENT_LOG_v1 {"time_micros": 1759258676980903, "job": 84, "event": "compaction_finished", "compaction_time_micros": 298389, "compaction_time_cpu_micros": 52451, "output_level": 6, "num_output_files": 1, "total_output_size": 12744165, "num_input_records": 8767, "num_output_records": 8252, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 18:57:56 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:57:56 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258676981559, "job": 84, "event": "table_file_deletion", "file_number": 139}
Sep 30 18:57:56 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 18:57:56 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258676984267, "job": 84, "event": "table_file_deletion", "file_number": 137}
Sep 30 18:57:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:57:56.514100) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:57:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:57:56.984743) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:57:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:57:56.984750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:57:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:57:56.984751) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:57:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:57:56.984753) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:57:56 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-18:57:56.984754) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 18:57:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:57:57.458Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:57:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:57:57.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2395: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:57:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:57:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/906702194' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:57:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:57:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/906702194' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:57:57 compute-0 sudo[369327]: pam_unix(sudo:session): session closed for user root
Sep 30 18:57:57 compute-0 sshd-session[369326]: Received disconnect from 192.168.122.10 port 33270:11: disconnected by user
Sep 30 18:57:57 compute-0 sshd-session[369326]: Disconnected from user zuul 192.168.122.10 port 33270
Sep 30 18:57:57 compute-0 sshd-session[369323]: pam_unix(sshd:session): session closed for user zuul
Sep 30 18:57:57 compute-0 systemd[1]: session-59.scope: Deactivated successfully.
Sep 30 18:57:57 compute-0 systemd[1]: session-59.scope: Consumed 3min 18.179s CPU time, 1.0G memory peak, read 311.4M from disk, written 488.1M to disk.
Sep 30 18:57:57 compute-0 systemd-logind[811]: Session 59 logged out. Waiting for processes to exit.
Sep 30 18:57:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/906702194' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:57:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/906702194' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:57:57 compute-0 systemd-logind[811]: Removed session 59.
Sep 30 18:57:57 compute-0 podman[378841]: 2025-09-30 18:57:57.871213614 +0000 UTC m=+0.056442374 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4)
Sep 30 18:57:57 compute-0 sshd-session[378840]: Accepted publickey for zuul from 192.168.122.10 port 39690 ssh2: ECDSA SHA256:COqAmbdBUMx+UyDa5XAop4Bi6qvwvYhJQ16rCseuHkQ
Sep 30 18:57:57 compute-0 systemd-logind[811]: New session 60 of user zuul.
Sep 30 18:57:57 compute-0 systemd[1]: Started Session 60 of User zuul.
Sep 30 18:57:57 compute-0 podman[378845]: 2025-09-30 18:57:57.921795445 +0000 UTC m=+0.096633246 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Sep 30 18:57:57 compute-0 podman[378844]: 2025-09-30 18:57:57.925872661 +0000 UTC m=+0.104253763 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930)
Sep 30 18:57:57 compute-0 sshd-session[378840]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 18:57:57 compute-0 nova_compute[265391]: 2025-09-30 18:57:57.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:57:58 compute-0 sudo[378910]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2025-09-30-huixzyk.tar.xz
Sep 30 18:57:58 compute-0 sudo[378910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:57:58 compute-0 sudo[378910]: pam_unix(sudo:session): session closed for user root
Sep 30 18:57:58 compute-0 sshd-session[378909]: Received disconnect from 192.168.122.10 port 39690:11: disconnected by user
Sep 30 18:57:58 compute-0 sshd-session[378909]: Disconnected from user zuul 192.168.122.10 port 39690
Sep 30 18:57:58 compute-0 sshd-session[378840]: pam_unix(sshd:session): session closed for user zuul
Sep 30 18:57:58 compute-0 systemd-logind[811]: Session 60 logged out. Waiting for processes to exit.
Sep 30 18:57:58 compute-0 systemd[1]: session-60.scope: Deactivated successfully.
Sep 30 18:57:58 compute-0 systemd-logind[811]: Removed session 60.
Sep 30 18:57:58 compute-0 sshd-session[378935]: Accepted publickey for zuul from 192.168.122.10 port 39704 ssh2: ECDSA SHA256:COqAmbdBUMx+UyDa5XAop4Bi6qvwvYhJQ16rCseuHkQ
Sep 30 18:57:58 compute-0 systemd-logind[811]: New session 61 of user zuul.
Sep 30 18:57:58 compute-0 systemd[1]: Started Session 61 of User zuul.
Sep 30 18:57:58 compute-0 sshd-session[378935]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 18:57:58 compute-0 sudo[378939]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Sep 30 18:57:58 compute-0 sudo[378939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 18:57:58 compute-0 sudo[378939]: pam_unix(sudo:session): session closed for user root
Sep 30 18:57:58 compute-0 sshd-session[378938]: Received disconnect from 192.168.122.10 port 39704:11: disconnected by user
Sep 30 18:57:58 compute-0 sshd-session[378938]: Disconnected from user zuul 192.168.122.10 port 39704
Sep 30 18:57:58 compute-0 sshd-session[378935]: pam_unix(sshd:session): session closed for user zuul
Sep 30 18:57:58 compute-0 systemd-logind[811]: Session 61 logged out. Waiting for processes to exit.
Sep 30 18:57:58 compute-0 systemd[1]: session-61.scope: Deactivated successfully.
Sep 30 18:57:58 compute-0 systemd-logind[811]: Removed session 61.
Sep 30 18:57:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:57:58.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:57:58] "GET /metrics HTTP/1.1" 200 46747 "" "Prometheus/2.51.0"
Sep 30 18:57:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:57:58] "GET /metrics HTTP/1.1" 200 46747 "" "Prometheus/2.51.0"
Sep 30 18:57:58 compute-0 ceph-mon[73755]: pgmap v2395: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:57:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:57:58.985Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:57:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:57:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:57:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:57:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:57:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:57:59 compute-0 nova_compute[265391]: 2025-09-30 18:57:59.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:57:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:57:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:57:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:57:59.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:57:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2396: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:57:59 compute-0 podman[276673]: time="2025-09-30T18:57:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:57:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:57:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:57:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:57:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10321 "" "Go-http-client/1.1"
Sep 30 18:58:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:58:00.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:00 compute-0 ceph-mon[73755]: pgmap v2396: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:58:01 compute-0 openstack_network_exporter[279566]: ERROR   18:58:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:58:01 compute-0 openstack_network_exporter[279566]: ERROR   18:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:58:01 compute-0 openstack_network_exporter[279566]: ERROR   18:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:58:01 compute-0 openstack_network_exporter[279566]: ERROR   18:58:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:58:01 compute-0 openstack_network_exporter[279566]: ERROR   18:58:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:58:01 compute-0 nova_compute[265391]: 2025-09-30 18:58:01.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:58:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:58:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:58:01.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2397: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:58:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:58:02.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:02 compute-0 ceph-mon[73755]: pgmap v2397: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:58:02 compute-0 nova_compute[265391]: 2025-09-30 18:58:02.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:58:03 compute-0 nova_compute[265391]: 2025-09-30 18:58:03.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:58:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:58:03.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2398: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:58:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:58:03.983Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:58:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:58:03.983Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:58:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:58:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:58:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:58:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:58:04 compute-0 nova_compute[265391]: 2025-09-30 18:58:04.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:58:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:58:04.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:04 compute-0 ceph-mon[73755]: pgmap v2398: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:58:05 compute-0 sudo[378970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:58:05 compute-0 sudo[378970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:58:05 compute-0 sudo[378970]: pam_unix(sudo:session): session closed for user root
Sep 30 18:58:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:58:05.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2399: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:58:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:58:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:58:06.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:06 compute-0 ceph-mon[73755]: pgmap v2399: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:58:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:58:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:58:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:58:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:58:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:58:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:58:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:58:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:58:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:58:07.458Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:58:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:58:07.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2400: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:58:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:58:07 compute-0 nova_compute[265391]: 2025-09-30 18:58:07.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:58:08
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'vms', 'default.rgw.control', 'volumes', '.mgr', '.nfs', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta']
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:58:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:58:08] "GET /metrics HTTP/1.1" 200 46748 "" "Prometheus/2.51.0"
Sep 30 18:58:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:58:08] "GET /metrics HTTP/1.1" 200 46748 "" "Prometheus/2.51.0"
Sep 30 18:58:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:58:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:58:08.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:58:08 compute-0 ceph-mon[73755]: pgmap v2400: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:58:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:58:08.986Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:58:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:58:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:58:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:58:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:58:09 compute-0 nova_compute[265391]: 2025-09-30 18:58:09.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:58:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:58:09.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2401: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:58:09 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/4036824627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:58:10 compute-0 nova_compute[265391]: 2025-09-30 18:58:10.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:58:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:58:10.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:10 compute-0 ceph-mon[73755]: pgmap v2401: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:58:11 compute-0 nova_compute[265391]: 2025-09-30 18:58:11.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:58:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:58:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:58:11.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2402: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:58:11 compute-0 ceph-mon[73755]: pgmap v2402: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:58:11 compute-0 nova_compute[265391]: 2025-09-30 18:58:11.942 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:58:11 compute-0 nova_compute[265391]: 2025-09-30 18:58:11.942 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:58:11 compute-0 nova_compute[265391]: 2025-09-30 18:58:11.943 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:58:11 compute-0 nova_compute[265391]: 2025-09-30 18:58:11.943 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:58:11 compute-0 nova_compute[265391]: 2025-09-30 18:58:11.943 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:58:12 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:58:12 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/374295777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:58:12 compute-0 nova_compute[265391]: 2025-09-30 18:58:12.394 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:58:12 compute-0 nova_compute[265391]: 2025-09-30 18:58:12.531 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:58:12 compute-0 nova_compute[265391]: 2025-09-30 18:58:12.532 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:58:12 compute-0 nova_compute[265391]: 2025-09-30 18:58:12.549 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.016s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:58:12 compute-0 nova_compute[265391]: 2025-09-30 18:58:12.549 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4217MB free_disk=39.992183685302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:58:12 compute-0 nova_compute[265391]: 2025-09-30 18:58:12.549 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:58:12 compute-0 nova_compute[265391]: 2025-09-30 18:58:12.550 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:58:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:58:12.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:12 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2058300386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:58:12 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/374295777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:58:12 compute-0 nova_compute[265391]: 2025-09-30 18:58:12.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:58:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:58:13.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:13 compute-0 podman[379029]: 2025-09-30 18:58:13.553951813 +0000 UTC m=+0.075978460 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, architecture=x86_64, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, distribution-scope=public, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., container_name=openstack_network_exporter, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, name=ubi9-minimal)
Sep 30 18:58:13 compute-0 podman[379028]: 2025-09-30 18:58:13.574056674 +0000 UTC m=+0.087964091 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=iscsid, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0)
Sep 30 18:58:13 compute-0 podman[379027]: 2025-09-30 18:58:13.584089444 +0000 UTC m=+0.103656757 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20250930, container_name=multipathd, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:58:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2403: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:58:13 compute-0 ceph-mon[73755]: pgmap v2403: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:58:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:58:13.984Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:58:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:58:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:58:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:58:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:58:14 compute-0 nova_compute[265391]: 2025-09-30 18:58:14.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:58:14 compute-0 nova_compute[265391]: 2025-09-30 18:58:14.333 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:58:14 compute-0 nova_compute[265391]: 2025-09-30 18:58:14.334 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:58:12 up  2:01,  0 user,  load average: 1.49, 0.94, 0.81\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:58:14 compute-0 nova_compute[265391]: 2025-09-30 18:58:14.420 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:58:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:58:14.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:58:14 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3859101451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:58:14 compute-0 nova_compute[265391]: 2025-09-30 18:58:14.849 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:58:14 compute-0 nova_compute[265391]: 2025-09-30 18:58:14.855 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:58:14 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3859101451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:58:15 compute-0 nova_compute[265391]: 2025-09-30 18:58:15.372 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:58:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:58:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:58:15.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:58:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2404: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:58:15 compute-0 nova_compute[265391]: 2025-09-30 18:58:15.887 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:58:15 compute-0 nova_compute[265391]: 2025-09-30 18:58:15.887 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.338s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:58:15 compute-0 ceph-mon[73755]: pgmap v2404: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:58:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:58:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:58:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:58:16.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:58:16 compute-0 nova_compute[265391]: 2025-09-30 18:58:16.883 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:58:16 compute-0 nova_compute[265391]: 2025-09-30 18:58:16.884 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:58:16 compute-0 nova_compute[265391]: 2025-09-30 18:58:16.884 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:58:16 compute-0 nova_compute[265391]: 2025-09-30 18:58:16.884 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:58:16 compute-0 nova_compute[265391]: 2025-09-30 18:58:16.884 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:58:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:58:17.459Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:58:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:58:17.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2405: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:58:17 compute-0 nova_compute[265391]: 2025-09-30 18:58:17.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:58:18 compute-0 ceph-mon[73755]: pgmap v2405: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:58:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:58:18] "GET /metrics HTTP/1.1" 200 46748 "" "Prometheus/2.51.0"
Sep 30 18:58:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:58:18] "GET /metrics HTTP/1.1" 200 46748 "" "Prometheus/2.51.0"
Sep 30 18:58:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:58:18.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:58:18.987Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:58:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:58:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:58:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:58:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:58:19 compute-0 nova_compute[265391]: 2025-09-30 18:58:19.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:58:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:58:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:58:19.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:58:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2406: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:58:20 compute-0 ceph-mon[73755]: pgmap v2406: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:58:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:58:20.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:58:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:58:21.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2407: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:58:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:58:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:58:22 compute-0 ceph-mon[73755]: pgmap v2407: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:58:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:58:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:58:22.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:23 compute-0 nova_compute[265391]: 2025-09-30 18:58:23.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:58:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:58:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:58:23.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:58:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2408: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:58:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:58:23.985Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:58:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:58:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:58:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:58:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:58:24 compute-0 nova_compute[265391]: 2025-09-30 18:58:24.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:58:24 compute-0 ceph-mon[73755]: pgmap v2408: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:58:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:58:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:58:24.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:58:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:58:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:58:25.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:58:25 compute-0 sudo[379122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:58:25 compute-0 sudo[379122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:58:25 compute-0 sudo[379122]: pam_unix(sudo:session): session closed for user root
Sep 30 18:58:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2409: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:58:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:58:26 compute-0 ceph-mon[73755]: pgmap v2409: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:58:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:58:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:58:26.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:58:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:58:27.459Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:58:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:58:27.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2410: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:58:27 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 18:58:27 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 28K writes, 105K keys, 28K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.02 MB/s
                                           Cumulative WAL: 28K writes, 10K syncs, 2.83 writes per sync, written: 0.10 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2418 writes, 7509 keys, 2418 commit groups, 1.0 writes per commit group, ingest: 6.38 MB, 0.01 MB/s
                                           Interval WAL: 2418 writes, 1061 syncs, 2.28 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Sep 30 18:58:28 compute-0 nova_compute[265391]: 2025-09-30 18:58:28.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:58:28 compute-0 podman[379151]: 2025-09-30 18:58:28.51552066 +0000 UTC m=+0.055591152 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:58:28 compute-0 podman[379153]: 2025-09-30 18:58:28.524717158 +0000 UTC m=+0.056852285 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Sep 30 18:58:28 compute-0 podman[379152]: 2025-09-30 18:58:28.5591303 +0000 UTC m=+0.093845973 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest)
Sep 30 18:58:28 compute-0 ceph-mon[73755]: pgmap v2410: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:58:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:58:28] "GET /metrics HTTP/1.1" 200 46737 "" "Prometheus/2.51.0"
Sep 30 18:58:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:58:28] "GET /metrics HTTP/1.1" 200 46737 "" "Prometheus/2.51.0"
Sep 30 18:58:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:58:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:58:28.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:58:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:58:28.987Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:58:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:58:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:58:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:58:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:58:29 compute-0 nova_compute[265391]: 2025-09-30 18:58:29.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:58:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:58:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:58:29.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:58:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2411: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:58:29 compute-0 podman[276673]: time="2025-09-30T18:58:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:58:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:58:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:58:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:58:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10320 "" "Go-http-client/1.1"
Sep 30 18:58:30 compute-0 ceph-mon[73755]: pgmap v2411: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:58:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:58:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:58:30.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:58:31 compute-0 openstack_network_exporter[279566]: ERROR   18:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:58:31 compute-0 openstack_network_exporter[279566]: ERROR   18:58:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:58:31 compute-0 openstack_network_exporter[279566]: ERROR   18:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:58:31 compute-0 openstack_network_exporter[279566]: ERROR   18:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:58:31 compute-0 openstack_network_exporter[279566]: ERROR   18:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:58:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:58:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:58:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:58:31.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:58:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2412: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:58:32 compute-0 ceph-mon[73755]: pgmap v2412: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:58:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:58:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:58:32.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:58:33 compute-0 nova_compute[265391]: 2025-09-30 18:58:33.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:58:33 compute-0 nova_compute[265391]: 2025-09-30 18:58:33.424 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:58:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:58:33.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2413: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:58:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:58:33.985Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:58:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:58:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:58:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:58:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:58:34 compute-0 nova_compute[265391]: 2025-09-30 18:58:34.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:58:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:58:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:58:34.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:58:34 compute-0 ceph-mon[73755]: pgmap v2413: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:58:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:58:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:58:35.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:58:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2414: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:58:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:58:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:58:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4263349243' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:58:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:58:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4263349243' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:58:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:58:36.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:36 compute-0 ceph-mon[73755]: pgmap v2414: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:58:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/4263349243' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:58:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/4263349243' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:58:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:58:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:58:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:58:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:58:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:58:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:58:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:58:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:58:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:58:37.460Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:58:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:58:37.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2415: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:58:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:58:38 compute-0 nova_compute[265391]: 2025-09-30 18:58:38.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:58:38 compute-0 sudo[379230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:58:38 compute-0 sudo[379230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:58:38 compute-0 sudo[379230]: pam_unix(sudo:session): session closed for user root
Sep 30 18:58:38 compute-0 sudo[379255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:58:38 compute-0 sudo[379255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:58:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:58:38] "GET /metrics HTTP/1.1" 200 46746 "" "Prometheus/2.51.0"
Sep 30 18:58:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:58:38] "GET /metrics HTTP/1.1" 200 46746 "" "Prometheus/2.51.0"
Sep 30 18:58:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:58:38.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:38 compute-0 ceph-mon[73755]: pgmap v2415: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:58:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:58:38.988Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:58:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:58:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:58:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:58:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:58:39 compute-0 nova_compute[265391]: 2025-09-30 18:58:39.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:58:39 compute-0 sudo[379255]: pam_unix(sudo:session): session closed for user root
Sep 30 18:58:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:58:39 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:58:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:58:39 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:58:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2416: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 777 B/s rd, 0 op/s
Sep 30 18:58:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:58:39 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:58:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:58:39 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:58:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:58:39 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:58:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:58:39 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:58:39 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:58:39 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:58:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:58:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:58:39.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:58:39 compute-0 sudo[379313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:58:39 compute-0 sudo[379313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:58:39 compute-0 sudo[379313]: pam_unix(sudo:session): session closed for user root
Sep 30 18:58:39 compute-0 sudo[379339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:58:39 compute-0 sudo[379339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:58:39 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:58:39 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:58:39 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:58:39 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:58:39 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:58:39 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:58:39 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:58:40 compute-0 podman[379405]: 2025-09-30 18:58:40.134106032 +0000 UTC m=+0.053876327 container create 2c3025f2fde9cb016ecb57e0eb6356653dc528681e1b08781f8db0cfa2e05f8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_ride, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Sep 30 18:58:40 compute-0 systemd[1]: Started libpod-conmon-2c3025f2fde9cb016ecb57e0eb6356653dc528681e1b08781f8db0cfa2e05f8e.scope.
Sep 30 18:58:40 compute-0 podman[379405]: 2025-09-30 18:58:40.111139767 +0000 UTC m=+0.030910082 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:58:40 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:58:40 compute-0 podman[379405]: 2025-09-30 18:58:40.245810817 +0000 UTC m=+0.165581122 container init 2c3025f2fde9cb016ecb57e0eb6356653dc528681e1b08781f8db0cfa2e05f8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_ride, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Sep 30 18:58:40 compute-0 podman[379405]: 2025-09-30 18:58:40.256155975 +0000 UTC m=+0.175926270 container start 2c3025f2fde9cb016ecb57e0eb6356653dc528681e1b08781f8db0cfa2e05f8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_ride, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:58:40 compute-0 podman[379405]: 2025-09-30 18:58:40.259626815 +0000 UTC m=+0.179397100 container attach 2c3025f2fde9cb016ecb57e0eb6356653dc528681e1b08781f8db0cfa2e05f8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_ride, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:58:40 compute-0 busy_ride[379422]: 167 167
Sep 30 18:58:40 compute-0 systemd[1]: libpod-2c3025f2fde9cb016ecb57e0eb6356653dc528681e1b08781f8db0cfa2e05f8e.scope: Deactivated successfully.
Sep 30 18:58:40 compute-0 podman[379405]: 2025-09-30 18:58:40.264105671 +0000 UTC m=+0.183875986 container died 2c3025f2fde9cb016ecb57e0eb6356653dc528681e1b08781f8db0cfa2e05f8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Sep 30 18:58:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-62c6c16e95fd02d3a249ebb6ff55e95379f9864d5418fe46294eb7ff87ac9cdd-merged.mount: Deactivated successfully.
Sep 30 18:58:40 compute-0 podman[379405]: 2025-09-30 18:58:40.311573402 +0000 UTC m=+0.231343687 container remove 2c3025f2fde9cb016ecb57e0eb6356653dc528681e1b08781f8db0cfa2e05f8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 18:58:40 compute-0 systemd[1]: libpod-conmon-2c3025f2fde9cb016ecb57e0eb6356653dc528681e1b08781f8db0cfa2e05f8e.scope: Deactivated successfully.
Sep 30 18:58:40 compute-0 podman[379446]: 2025-09-30 18:58:40.517312354 +0000 UTC m=+0.063383464 container create d741dd1a4006a01211664b1f1f4d0bec94102abb094df31bf6d4616ce232146e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_jackson, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Sep 30 18:58:40 compute-0 systemd[1]: Started libpod-conmon-d741dd1a4006a01211664b1f1f4d0bec94102abb094df31bf6d4616ce232146e.scope.
Sep 30 18:58:40 compute-0 podman[379446]: 2025-09-30 18:58:40.482865771 +0000 UTC m=+0.028936931 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:58:40 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:58:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47ab1da425051c618009ad86bfec5764ef213b05c05613e7e74a5f5c82d41f86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:58:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47ab1da425051c618009ad86bfec5764ef213b05c05613e7e74a5f5c82d41f86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:58:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47ab1da425051c618009ad86bfec5764ef213b05c05613e7e74a5f5c82d41f86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:58:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47ab1da425051c618009ad86bfec5764ef213b05c05613e7e74a5f5c82d41f86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:58:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47ab1da425051c618009ad86bfec5764ef213b05c05613e7e74a5f5c82d41f86/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:58:40 compute-0 podman[379446]: 2025-09-30 18:58:40.637770106 +0000 UTC m=+0.183841196 container init d741dd1a4006a01211664b1f1f4d0bec94102abb094df31bf6d4616ce232146e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Sep 30 18:58:40 compute-0 podman[379446]: 2025-09-30 18:58:40.647706704 +0000 UTC m=+0.193777774 container start d741dd1a4006a01211664b1f1f4d0bec94102abb094df31bf6d4616ce232146e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_jackson, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:58:40 compute-0 podman[379446]: 2025-09-30 18:58:40.650788044 +0000 UTC m=+0.196859134 container attach d741dd1a4006a01211664b1f1f4d0bec94102abb094df31bf6d4616ce232146e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_jackson, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Sep 30 18:58:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:58:40.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:40 compute-0 ceph-mon[73755]: pgmap v2416: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 777 B/s rd, 0 op/s
Sep 30 18:58:41 compute-0 hopeful_jackson[379462]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:58:41 compute-0 hopeful_jackson[379462]: --> All data devices are unavailable
Sep 30 18:58:41 compute-0 systemd[1]: libpod-d741dd1a4006a01211664b1f1f4d0bec94102abb094df31bf6d4616ce232146e.scope: Deactivated successfully.
Sep 30 18:58:41 compute-0 podman[379479]: 2025-09-30 18:58:41.108585148 +0000 UTC m=+0.032785551 container died d741dd1a4006a01211664b1f1f4d0bec94102abb094df31bf6d4616ce232146e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_jackson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:58:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-47ab1da425051c618009ad86bfec5764ef213b05c05613e7e74a5f5c82d41f86-merged.mount: Deactivated successfully.
Sep 30 18:58:41 compute-0 podman[379479]: 2025-09-30 18:58:41.164359944 +0000 UTC m=+0.088560337 container remove d741dd1a4006a01211664b1f1f4d0bec94102abb094df31bf6d4616ce232146e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:58:41 compute-0 systemd[1]: libpod-conmon-d741dd1a4006a01211664b1f1f4d0bec94102abb094df31bf6d4616ce232146e.scope: Deactivated successfully.
Sep 30 18:58:41 compute-0 sudo[379339]: pam_unix(sudo:session): session closed for user root
Sep 30 18:58:41 compute-0 sudo[379495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:58:41 compute-0 sudo[379495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:58:41 compute-0 sudo[379495]: pam_unix(sudo:session): session closed for user root
Sep 30 18:58:41 compute-0 sudo[379521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:58:41 compute-0 sudo[379521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:58:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:58:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2417: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 518 B/s rd, 0 op/s
Sep 30 18:58:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:58:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:58:41.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:58:41 compute-0 podman[379589]: 2025-09-30 18:58:41.867719574 +0000 UTC m=+0.039445044 container create 5b8669e7f3770d514cee77d1e96661b2a5dd1631021e2b6030cccd1f2d4b151e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:58:41 compute-0 systemd[1]: Started libpod-conmon-5b8669e7f3770d514cee77d1e96661b2a5dd1631021e2b6030cccd1f2d4b151e.scope.
Sep 30 18:58:41 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:58:41 compute-0 podman[379589]: 2025-09-30 18:58:41.851934034 +0000 UTC m=+0.023659524 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:58:41 compute-0 podman[379589]: 2025-09-30 18:58:41.970230661 +0000 UTC m=+0.141956201 container init 5b8669e7f3770d514cee77d1e96661b2a5dd1631021e2b6030cccd1f2d4b151e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Sep 30 18:58:41 compute-0 ceph-mon[73755]: pgmap v2417: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 518 B/s rd, 0 op/s
Sep 30 18:58:41 compute-0 podman[379589]: 2025-09-30 18:58:41.977115239 +0000 UTC m=+0.148840749 container start 5b8669e7f3770d514cee77d1e96661b2a5dd1631021e2b6030cccd1f2d4b151e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_almeida, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Sep 30 18:58:41 compute-0 podman[379589]: 2025-09-30 18:58:41.981595085 +0000 UTC m=+0.153320575 container attach 5b8669e7f3770d514cee77d1e96661b2a5dd1631021e2b6030cccd1f2d4b151e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_almeida, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 18:58:41 compute-0 gracious_almeida[379607]: 167 167
Sep 30 18:58:41 compute-0 systemd[1]: libpod-5b8669e7f3770d514cee77d1e96661b2a5dd1631021e2b6030cccd1f2d4b151e.scope: Deactivated successfully.
Sep 30 18:58:41 compute-0 podman[379589]: 2025-09-30 18:58:41.98411115 +0000 UTC m=+0.155836620 container died 5b8669e7f3770d514cee77d1e96661b2a5dd1631021e2b6030cccd1f2d4b151e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_almeida, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 18:58:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-00bf2262db2efb92be87d64096a1616f49c3970240f132b404aa484cfddb00e5-merged.mount: Deactivated successfully.
Sep 30 18:58:42 compute-0 podman[379589]: 2025-09-30 18:58:42.034148767 +0000 UTC m=+0.205874237 container remove 5b8669e7f3770d514cee77d1e96661b2a5dd1631021e2b6030cccd1f2d4b151e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_almeida, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Sep 30 18:58:42 compute-0 systemd[1]: libpod-conmon-5b8669e7f3770d514cee77d1e96661b2a5dd1631021e2b6030cccd1f2d4b151e.scope: Deactivated successfully.
Sep 30 18:58:42 compute-0 podman[379629]: 2025-09-30 18:58:42.274439335 +0000 UTC m=+0.054974015 container create 29fafda093c95322e03f1d0d97991105e526f7282785428fce0c9b62d7d15c14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_sammet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 18:58:42 compute-0 systemd[1]: Started libpod-conmon-29fafda093c95322e03f1d0d97991105e526f7282785428fce0c9b62d7d15c14.scope.
Sep 30 18:58:42 compute-0 podman[379629]: 2025-09-30 18:58:42.245260419 +0000 UTC m=+0.025795159 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:58:42 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:58:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f02eb527242640bd9b9d7bcf45fe435f09f3b1cab3a0c75534c61dc97c38e68/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:58:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f02eb527242640bd9b9d7bcf45fe435f09f3b1cab3a0c75534c61dc97c38e68/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:58:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f02eb527242640bd9b9d7bcf45fe435f09f3b1cab3a0c75534c61dc97c38e68/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:58:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f02eb527242640bd9b9d7bcf45fe435f09f3b1cab3a0c75534c61dc97c38e68/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:58:42 compute-0 podman[379629]: 2025-09-30 18:58:42.389053716 +0000 UTC m=+0.169588416 container init 29fafda093c95322e03f1d0d97991105e526f7282785428fce0c9b62d7d15c14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_sammet, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:58:42 compute-0 podman[379629]: 2025-09-30 18:58:42.399562778 +0000 UTC m=+0.180097468 container start 29fafda093c95322e03f1d0d97991105e526f7282785428fce0c9b62d7d15c14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_sammet, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Sep 30 18:58:42 compute-0 podman[379629]: 2025-09-30 18:58:42.403756047 +0000 UTC m=+0.184290747 container attach 29fafda093c95322e03f1d0d97991105e526f7282785428fce0c9b62d7d15c14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_sammet, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 18:58:42 compute-0 angry_sammet[379645]: {
Sep 30 18:58:42 compute-0 angry_sammet[379645]:     "0": [
Sep 30 18:58:42 compute-0 angry_sammet[379645]:         {
Sep 30 18:58:42 compute-0 angry_sammet[379645]:             "devices": [
Sep 30 18:58:42 compute-0 angry_sammet[379645]:                 "/dev/loop3"
Sep 30 18:58:42 compute-0 angry_sammet[379645]:             ],
Sep 30 18:58:42 compute-0 angry_sammet[379645]:             "lv_name": "ceph_lv0",
Sep 30 18:58:42 compute-0 angry_sammet[379645]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:58:42 compute-0 angry_sammet[379645]:             "lv_size": "21470642176",
Sep 30 18:58:42 compute-0 angry_sammet[379645]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:58:42 compute-0 angry_sammet[379645]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:58:42 compute-0 angry_sammet[379645]:             "name": "ceph_lv0",
Sep 30 18:58:42 compute-0 angry_sammet[379645]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:58:42 compute-0 angry_sammet[379645]:             "tags": {
Sep 30 18:58:42 compute-0 angry_sammet[379645]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:58:42 compute-0 angry_sammet[379645]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:58:42 compute-0 angry_sammet[379645]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:58:42 compute-0 angry_sammet[379645]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:58:42 compute-0 angry_sammet[379645]:                 "ceph.cluster_name": "ceph",
Sep 30 18:58:42 compute-0 angry_sammet[379645]:                 "ceph.crush_device_class": "",
Sep 30 18:58:42 compute-0 angry_sammet[379645]:                 "ceph.encrypted": "0",
Sep 30 18:58:42 compute-0 angry_sammet[379645]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:58:42 compute-0 angry_sammet[379645]:                 "ceph.osd_id": "0",
Sep 30 18:58:42 compute-0 angry_sammet[379645]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:58:42 compute-0 angry_sammet[379645]:                 "ceph.type": "block",
Sep 30 18:58:42 compute-0 angry_sammet[379645]:                 "ceph.vdo": "0",
Sep 30 18:58:42 compute-0 angry_sammet[379645]:                 "ceph.with_tpm": "0"
Sep 30 18:58:42 compute-0 angry_sammet[379645]:             },
Sep 30 18:58:42 compute-0 angry_sammet[379645]:             "type": "block",
Sep 30 18:58:42 compute-0 angry_sammet[379645]:             "vg_name": "ceph_vg0"
Sep 30 18:58:42 compute-0 angry_sammet[379645]:         }
Sep 30 18:58:42 compute-0 angry_sammet[379645]:     ]
Sep 30 18:58:42 compute-0 angry_sammet[379645]: }
Sep 30 18:58:42 compute-0 systemd[1]: libpod-29fafda093c95322e03f1d0d97991105e526f7282785428fce0c9b62d7d15c14.scope: Deactivated successfully.
Sep 30 18:58:42 compute-0 podman[379629]: 2025-09-30 18:58:42.697794818 +0000 UTC m=+0.478329478 container died 29fafda093c95322e03f1d0d97991105e526f7282785428fce0c9b62d7d15c14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_sammet, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:58:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f02eb527242640bd9b9d7bcf45fe435f09f3b1cab3a0c75534c61dc97c38e68-merged.mount: Deactivated successfully.
Sep 30 18:58:42 compute-0 podman[379629]: 2025-09-30 18:58:42.749212701 +0000 UTC m=+0.529747351 container remove 29fafda093c95322e03f1d0d97991105e526f7282785428fce0c9b62d7d15c14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Sep 30 18:58:42 compute-0 systemd[1]: libpod-conmon-29fafda093c95322e03f1d0d97991105e526f7282785428fce0c9b62d7d15c14.scope: Deactivated successfully.
Sep 30 18:58:42 compute-0 sudo[379521]: pam_unix(sudo:session): session closed for user root
Sep 30 18:58:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:58:42.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:42 compute-0 sudo[379667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:58:42 compute-0 sudo[379667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:58:42 compute-0 sudo[379667]: pam_unix(sudo:session): session closed for user root
Sep 30 18:58:42 compute-0 sudo[379692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:58:42 compute-0 sudo[379692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:58:43 compute-0 nova_compute[265391]: 2025-09-30 18:58:43.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:58:43 compute-0 podman[379760]: 2025-09-30 18:58:43.279469224 +0000 UTC m=+0.035720737 container create 5afc1de6b27a70568e449c41ffb90e3b11eb979d2ae8ad63cee165e5dffff44f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_hamilton, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 18:58:43 compute-0 systemd[1]: Started libpod-conmon-5afc1de6b27a70568e449c41ffb90e3b11eb979d2ae8ad63cee165e5dffff44f.scope.
Sep 30 18:58:43 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:58:43 compute-0 podman[379760]: 2025-09-30 18:58:43.263800598 +0000 UTC m=+0.020052131 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:58:43 compute-0 podman[379760]: 2025-09-30 18:58:43.366361506 +0000 UTC m=+0.122613019 container init 5afc1de6b27a70568e449c41ffb90e3b11eb979d2ae8ad63cee165e5dffff44f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_hamilton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:58:43 compute-0 podman[379760]: 2025-09-30 18:58:43.374913288 +0000 UTC m=+0.131164791 container start 5afc1de6b27a70568e449c41ffb90e3b11eb979d2ae8ad63cee165e5dffff44f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_hamilton, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:58:43 compute-0 podman[379760]: 2025-09-30 18:58:43.378140852 +0000 UTC m=+0.134392385 container attach 5afc1de6b27a70568e449c41ffb90e3b11eb979d2ae8ad63cee165e5dffff44f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Sep 30 18:58:43 compute-0 brave_hamilton[379777]: 167 167
Sep 30 18:58:43 compute-0 systemd[1]: libpod-5afc1de6b27a70568e449c41ffb90e3b11eb979d2ae8ad63cee165e5dffff44f.scope: Deactivated successfully.
Sep 30 18:58:43 compute-0 conmon[379777]: conmon 5afc1de6b27a70568e44 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5afc1de6b27a70568e449c41ffb90e3b11eb979d2ae8ad63cee165e5dffff44f.scope/container/memory.events
Sep 30 18:58:43 compute-0 podman[379760]: 2025-09-30 18:58:43.381676903 +0000 UTC m=+0.137928416 container died 5afc1de6b27a70568e449c41ffb90e3b11eb979d2ae8ad63cee165e5dffff44f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:58:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-08bb310645ee2cd430b79810314c9102184060a15bb06283878394fbae8eb225-merged.mount: Deactivated successfully.
Sep 30 18:58:43 compute-0 podman[379760]: 2025-09-30 18:58:43.41741544 +0000 UTC m=+0.173666953 container remove 5afc1de6b27a70568e449c41ffb90e3b11eb979d2ae8ad63cee165e5dffff44f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_hamilton, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 18:58:43 compute-0 systemd[1]: libpod-conmon-5afc1de6b27a70568e449c41ffb90e3b11eb979d2ae8ad63cee165e5dffff44f.scope: Deactivated successfully.
Sep 30 18:58:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2418: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 518 B/s rd, 0 op/s
Sep 30 18:58:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:58:43.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:43 compute-0 podman[379801]: 2025-09-30 18:58:43.59104462 +0000 UTC m=+0.045505081 container create 889eed6e13121bafe71a99f1cd35bf8dcb5aa510b1a9ea93616edaf254d0716b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_wozniak, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:58:43 compute-0 systemd[1]: Started libpod-conmon-889eed6e13121bafe71a99f1cd35bf8dcb5aa510b1a9ea93616edaf254d0716b.scope.
Sep 30 18:58:43 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:58:43 compute-0 podman[379801]: 2025-09-30 18:58:43.570663892 +0000 UTC m=+0.025124343 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:58:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde23c83ed4c72631cec721a74d4c057f63faddf043f98226ffaad071fdba1a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:58:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde23c83ed4c72631cec721a74d4c057f63faddf043f98226ffaad071fdba1a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:58:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde23c83ed4c72631cec721a74d4c057f63faddf043f98226ffaad071fdba1a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:58:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde23c83ed4c72631cec721a74d4c057f63faddf043f98226ffaad071fdba1a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:58:43 compute-0 podman[379801]: 2025-09-30 18:58:43.684073081 +0000 UTC m=+0.138533552 container init 889eed6e13121bafe71a99f1cd35bf8dcb5aa510b1a9ea93616edaf254d0716b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_wozniak, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:58:43 compute-0 podman[379801]: 2025-09-30 18:58:43.692768716 +0000 UTC m=+0.147229157 container start 889eed6e13121bafe71a99f1cd35bf8dcb5aa510b1a9ea93616edaf254d0716b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_wozniak, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:58:43 compute-0 podman[379801]: 2025-09-30 18:58:43.699631694 +0000 UTC m=+0.154092175 container attach 889eed6e13121bafe71a99f1cd35bf8dcb5aa510b1a9ea93616edaf254d0716b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_wozniak, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:58:43 compute-0 podman[379819]: 2025-09-30 18:58:43.705438985 +0000 UTC m=+0.072041358 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.license=GPLv2, tcib_managed=true)
Sep 30 18:58:43 compute-0 podman[379820]: 2025-09-30 18:58:43.716917122 +0000 UTC m=+0.075132488 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, build-date=2025-08-20T13:12:41, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, version=9.6, architecture=x86_64)
Sep 30 18:58:43 compute-0 podman[379816]: 2025-09-30 18:58:43.730553426 +0000 UTC m=+0.098257558 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, tcib_managed=true)
Sep 30 18:58:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:58:43.986Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:58:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:58:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:58:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:58:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:58:44 compute-0 nova_compute[265391]: 2025-09-30 18:58:44.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:58:44 compute-0 lvm[379954]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:58:44 compute-0 lvm[379954]: VG ceph_vg0 finished
Sep 30 18:58:44 compute-0 thirsty_wozniak[379830]: {}
Sep 30 18:58:44 compute-0 systemd[1]: libpod-889eed6e13121bafe71a99f1cd35bf8dcb5aa510b1a9ea93616edaf254d0716b.scope: Deactivated successfully.
Sep 30 18:58:44 compute-0 podman[379801]: 2025-09-30 18:58:44.475470052 +0000 UTC m=+0.929930503 container died 889eed6e13121bafe71a99f1cd35bf8dcb5aa510b1a9ea93616edaf254d0716b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_wozniak, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:58:44 compute-0 systemd[1]: libpod-889eed6e13121bafe71a99f1cd35bf8dcb5aa510b1a9ea93616edaf254d0716b.scope: Consumed 1.176s CPU time.
Sep 30 18:58:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-dde23c83ed4c72631cec721a74d4c057f63faddf043f98226ffaad071fdba1a8-merged.mount: Deactivated successfully.
Sep 30 18:58:44 compute-0 podman[379801]: 2025-09-30 18:58:44.521448133 +0000 UTC m=+0.975908594 container remove 889eed6e13121bafe71a99f1cd35bf8dcb5aa510b1a9ea93616edaf254d0716b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 18:58:44 compute-0 systemd[1]: libpod-conmon-889eed6e13121bafe71a99f1cd35bf8dcb5aa510b1a9ea93616edaf254d0716b.scope: Deactivated successfully.
Sep 30 18:58:44 compute-0 ceph-mon[73755]: pgmap v2418: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 518 B/s rd, 0 op/s
Sep 30 18:58:44 compute-0 sudo[379692]: pam_unix(sudo:session): session closed for user root
Sep 30 18:58:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:58:44 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:58:44 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:58:44 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:58:44 compute-0 sudo[379969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:58:44 compute-0 sudo[379969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:58:44 compute-0 sudo[379969]: pam_unix(sudo:session): session closed for user root
Sep 30 18:58:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:58:44.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2419: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 518 B/s rd, 0 op/s
Sep 30 18:58:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:58:45.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:45 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:58:45 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:58:45 compute-0 sudo[379995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:58:45 compute-0 sudo[379995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:58:45 compute-0 sudo[379995]: pam_unix(sudo:session): session closed for user root
Sep 30 18:58:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:58:46 compute-0 ceph-mon[73755]: pgmap v2419: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 518 B/s rd, 0 op/s
Sep 30 18:58:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:58:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:58:46.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:58:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:58:47.462Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:58:47 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2420: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 518 B/s rd, 0 op/s
Sep 30 18:58:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:58:47.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:48 compute-0 nova_compute[265391]: 2025-09-30 18:58:48.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:58:48 compute-0 ceph-mon[73755]: pgmap v2420: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 518 B/s rd, 0 op/s
Sep 30 18:58:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:58:48] "GET /metrics HTTP/1.1" 200 46746 "" "Prometheus/2.51.0"
Sep 30 18:58:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:58:48] "GET /metrics HTTP/1.1" 200 46746 "" "Prometheus/2.51.0"
Sep 30 18:58:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:58:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:58:48.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:58:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:58:48.990Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:58:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:58:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:58:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:58:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:58:49 compute-0 nova_compute[265391]: 2025-09-30 18:58:49.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:58:49 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2421: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 777 B/s rd, 0 op/s
Sep 30 18:58:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:58:49.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:50 compute-0 ceph-mon[73755]: pgmap v2421: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 777 B/s rd, 0 op/s
Sep 30 18:58:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:58:50.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:58:51 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2422: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:58:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:58:51.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:58:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:58:52 compute-0 ceph-mon[73755]: pgmap v2422: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:58:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:58:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:58:52.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:53 compute-0 nova_compute[265391]: 2025-09-30 18:58:53.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:58:53 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2423: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:58:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:58:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:58:53.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:58:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:58:53.987Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:58:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:58:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:58:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:58:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:58:54 compute-0 nova_compute[265391]: 2025-09-30 18:58:54.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:58:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:58:54.360 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:58:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:58:54.360 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:58:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:58:54.360 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:58:54 compute-0 ceph-mon[73755]: pgmap v2423: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:58:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:58:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:58:54.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:58:55 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2424: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:58:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:58:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:58:55.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:58:56 compute-0 sshd-session[379473]: Connection closed by 199.45.155.78 port 37278 [preauth]
Sep 30 18:58:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:58:56 compute-0 ceph-mon[73755]: pgmap v2424: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:58:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:58:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:58:56.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:58:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:58:57.464Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:58:57 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2425: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:58:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:58:57.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:58:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3983763451' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:58:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:58:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3983763451' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:58:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3983763451' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:58:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3983763451' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:58:58 compute-0 nova_compute[265391]: 2025-09-30 18:58:58.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:58:58 compute-0 ceph-mon[73755]: pgmap v2425: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:58:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:58:58] "GET /metrics HTTP/1.1" 200 46746 "" "Prometheus/2.51.0"
Sep 30 18:58:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:58:58] "GET /metrics HTTP/1.1" 200 46746 "" "Prometheus/2.51.0"
Sep 30 18:58:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:58:58.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:58:58.992Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:58:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:58:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:58:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:58:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:58:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:58:59 compute-0 nova_compute[265391]: 2025-09-30 18:58:59.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:58:59 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2426: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:58:59 compute-0 podman[380034]: 2025-09-30 18:58:59.543363388 +0000 UTC m=+0.074388829 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS)
Sep 30 18:58:59 compute-0 podman[380036]: 2025-09-30 18:58:59.543582394 +0000 UTC m=+0.073689691 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Sep 30 18:58:59 compute-0 podman[380035]: 2025-09-30 18:58:59.586447715 +0000 UTC m=+0.118177524 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Sep 30 18:58:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:58:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:58:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:58:59.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:58:59 compute-0 podman[276673]: time="2025-09-30T18:58:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:58:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:58:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:58:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10321 "" "Go-http-client/1.1"
Sep 30 18:59:00 compute-0 ceph-mon[73755]: pgmap v2426: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:59:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:59:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:59:00.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:59:01 compute-0 openstack_network_exporter[279566]: ERROR   18:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:59:01 compute-0 openstack_network_exporter[279566]: ERROR   18:59:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:59:01 compute-0 openstack_network_exporter[279566]: ERROR   18:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:59:01 compute-0 openstack_network_exporter[279566]: ERROR   18:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:59:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:59:01 compute-0 openstack_network_exporter[279566]: ERROR   18:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:59:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 18:59:01 compute-0 nova_compute[265391]: 2025-09-30 18:59:01.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:59:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:59:01 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2427: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:59:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:59:01.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:02 compute-0 ceph-mon[73755]: pgmap v2427: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:59:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:59:02.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:03 compute-0 nova_compute[265391]: 2025-09-30 18:59:03.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:59:03 compute-0 nova_compute[265391]: 2025-09-30 18:59:03.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:59:03 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2428: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:59:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:59:03.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:59:03.988Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:59:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:59:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:59:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:59:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:59:04 compute-0 nova_compute[265391]: 2025-09-30 18:59:04.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:59:04 compute-0 ceph-mon[73755]: pgmap v2428: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:59:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:59:04.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:05 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2429: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:59:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:59:05.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:05 compute-0 sudo[380102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:59:05 compute-0 sudo[380102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:59:05 compute-0 sudo[380102]: pam_unix(sudo:session): session closed for user root
Sep 30 18:59:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:59:06 compute-0 ceph-mon[73755]: pgmap v2429: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:59:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:59:06.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:59:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:59:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:59:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:59:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:59:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:59:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:59:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:59:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:59:07.465Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:59:07 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2430: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:59:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:59:07.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:59:08 compute-0 nova_compute[265391]: 2025-09-30 18:59:08.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_18:59:08
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['backups', '.nfs', 'cephfs.cephfs.data', '.mgr', 'images', '.rgw.root', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'default.rgw.control']
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 18:59:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:59:08] "GET /metrics HTTP/1.1" 200 46745 "" "Prometheus/2.51.0"
Sep 30 18:59:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:59:08] "GET /metrics HTTP/1.1" 200 46745 "" "Prometheus/2.51.0"
Sep 30 18:59:08 compute-0 ceph-mon[73755]: pgmap v2430: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:59:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:59:08.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:59:08.992Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:59:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:59:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:59:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:59:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:59:09 compute-0 nova_compute[265391]: 2025-09-30 18:59:09.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:59:09 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2431: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:59:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:59:09.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:10 compute-0 nova_compute[265391]: 2025-09-30 18:59:10.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:59:10 compute-0 ceph-mon[73755]: pgmap v2431: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:59:10 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3989022614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:59:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:59:10.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:11 compute-0 nova_compute[265391]: 2025-09-30 18:59:11.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:59:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:59:11 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2432: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:59:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:59:11.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:11 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3138273325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:59:11 compute-0 nova_compute[265391]: 2025-09-30 18:59:11.941 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:59:11 compute-0 nova_compute[265391]: 2025-09-30 18:59:11.942 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:59:11 compute-0 nova_compute[265391]: 2025-09-30 18:59:11.942 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:59:11 compute-0 nova_compute[265391]: 2025-09-30 18:59:11.943 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 18:59:11 compute-0 nova_compute[265391]: 2025-09-30 18:59:11.943 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:59:12 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:59:12 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/635788212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:59:12 compute-0 nova_compute[265391]: 2025-09-30 18:59:12.402 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:59:12 compute-0 nova_compute[265391]: 2025-09-30 18:59:12.537 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 18:59:12 compute-0 nova_compute[265391]: 2025-09-30 18:59:12.538 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:59:12 compute-0 nova_compute[265391]: 2025-09-30 18:59:12.554 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.016s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:59:12 compute-0 nova_compute[265391]: 2025-09-30 18:59:12.554 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4247MB free_disk=39.992183685302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 18:59:12 compute-0 nova_compute[265391]: 2025-09-30 18:59:12.554 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:59:12 compute-0 nova_compute[265391]: 2025-09-30 18:59:12.555 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:59:12 compute-0 ceph-mon[73755]: pgmap v2432: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:59:12 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/635788212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:59:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:59:12.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:13 compute-0 nova_compute[265391]: 2025-09-30 18:59:13.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:59:13 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2433: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:59:13 compute-0 nova_compute[265391]: 2025-09-30 18:59:13.608 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 18:59:13 compute-0 nova_compute[265391]: 2025-09-30 18:59:13.608 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 18:59:12 up  2:02,  0 user,  load average: 0.60, 0.78, 0.77\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 18:59:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:59:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:59:13.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:59:13 compute-0 nova_compute[265391]: 2025-09-30 18:59:13.626 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 18:59:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:59:13.989Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:59:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:59:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:59:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:59:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:59:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 18:59:14 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3855943758' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:59:14 compute-0 nova_compute[265391]: 2025-09-30 18:59:14.060 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 18:59:14 compute-0 nova_compute[265391]: 2025-09-30 18:59:14.066 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 18:59:14 compute-0 nova_compute[265391]: 2025-09-30 18:59:14.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:59:14 compute-0 podman[380181]: 2025-09-30 18:59:14.527222416 +0000 UTC m=+0.062592423 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true)
Sep 30 18:59:14 compute-0 podman[380183]: 2025-09-30 18:59:14.534542476 +0000 UTC m=+0.067247054 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, release=1755695350, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, name=ubi9-minimal, vcs-type=git, config_id=edpm, container_name=openstack_network_exporter, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Sep 30 18:59:14 compute-0 podman[380182]: 2025-09-30 18:59:14.557139732 +0000 UTC m=+0.082136200 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Sep 30 18:59:14 compute-0 nova_compute[265391]: 2025-09-30 18:59:14.576 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 18:59:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:59:14.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:14 compute-0 ceph-mon[73755]: pgmap v2433: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:59:14 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3855943758' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 18:59:15 compute-0 nova_compute[265391]: 2025-09-30 18:59:15.085 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 18:59:15 compute-0 nova_compute[265391]: 2025-09-30 18:59:15.085 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.531s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:59:15 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2434: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:59:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:59:15.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:16 compute-0 nova_compute[265391]: 2025-09-30 18:59:16.082 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:59:16 compute-0 nova_compute[265391]: 2025-09-30 18:59:16.082 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:59:16 compute-0 nova_compute[265391]: 2025-09-30 18:59:16.083 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:59:16 compute-0 nova_compute[265391]: 2025-09-30 18:59:16.083 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:59:16 compute-0 nova_compute[265391]: 2025-09-30 18:59:16.084 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:59:16 compute-0 nova_compute[265391]: 2025-09-30 18:59:16.084 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 18:59:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:59:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:59:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:59:16.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:59:16 compute-0 ceph-mon[73755]: pgmap v2434: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:59:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:59:17.467Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:59:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:59:17.467Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:59:17 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2435: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:59:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:59:17.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:18 compute-0 nova_compute[265391]: 2025-09-30 18:59:18.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:59:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:59:18] "GET /metrics HTTP/1.1" 200 46745 "" "Prometheus/2.51.0"
Sep 30 18:59:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:59:18] "GET /metrics HTTP/1.1" 200 46745 "" "Prometheus/2.51.0"
Sep 30 18:59:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:59:18.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:18 compute-0 ceph-mon[73755]: pgmap v2435: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:59:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:59:18.993Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:59:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:59:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:59:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:59:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:59:19 compute-0 nova_compute[265391]: 2025-09-30 18:59:19.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:59:19 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2436: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:59:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:59:19.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:19 compute-0 ceph-mon[73755]: pgmap v2436: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:59:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:59:20.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:59:21 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2437: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:59:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:59:21.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:59:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:59:22 compute-0 ceph-mon[73755]: pgmap v2437: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:59:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:59:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:59:22.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:23 compute-0 nova_compute[265391]: 2025-09-30 18:59:23.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:59:23 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2438: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:59:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:59:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:59:23.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:59:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:59:23.990Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:59:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:59:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:59:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:59:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:59:24 compute-0 nova_compute[265391]: 2025-09-30 18:59:24.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:59:24 compute-0 ceph-mon[73755]: pgmap v2438: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:59:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:59:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:59:24.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:59:25 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2439: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:59:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:59:25.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:25 compute-0 sudo[380250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:59:25 compute-0 sudo[380250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:59:25 compute-0 sudo[380250]: pam_unix(sudo:session): session closed for user root
Sep 30 18:59:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:59:26 compute-0 ceph-mon[73755]: pgmap v2439: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:59:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:59:26.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:59:27.468Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:59:27 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2440: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:59:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:59:27.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:28 compute-0 nova_compute[265391]: 2025-09-30 18:59:28.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:59:28 compute-0 ceph-mon[73755]: pgmap v2440: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:59:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:59:28] "GET /metrics HTTP/1.1" 200 46745 "" "Prometheus/2.51.0"
Sep 30 18:59:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:59:28] "GET /metrics HTTP/1.1" 200 46745 "" "Prometheus/2.51.0"
Sep 30 18:59:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:59:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:59:28.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:59:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:59:28.994Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:59:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:59:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:59:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:59:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:59:29 compute-0 nova_compute[265391]: 2025-09-30 18:59:29.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:59:29 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2441: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:59:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:59:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:59:29.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:59:29 compute-0 podman[276673]: time="2025-09-30T18:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:59:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:59:29 compute-0 podman[276673]: @ - - [30/Sep/2025:18:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10320 "" "Go-http-client/1.1"
Sep 30 18:59:30 compute-0 podman[380282]: 2025-09-30 18:59:30.522168266 +0000 UTC m=+0.061908635 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Sep 30 18:59:30 compute-0 podman[380281]: 2025-09-30 18:59:30.547225696 +0000 UTC m=+0.089012158 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 18:59:30 compute-0 podman[380280]: 2025-09-30 18:59:30.547136074 +0000 UTC m=+0.090070546 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_managed=true, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:59:30 compute-0 ceph-mon[73755]: pgmap v2441: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:59:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:59:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:59:30.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:59:31 compute-0 openstack_network_exporter[279566]: ERROR   18:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:59:31 compute-0 openstack_network_exporter[279566]: ERROR   18:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 18:59:31 compute-0 openstack_network_exporter[279566]: ERROR   18:59:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 18:59:31 compute-0 openstack_network_exporter[279566]: ERROR   18:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 18:59:31 compute-0 openstack_network_exporter[279566]: ERROR   18:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 18:59:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:59:31 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2442: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:59:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:59:31.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:32 compute-0 ceph-mon[73755]: pgmap v2442: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:59:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:59:32.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:33 compute-0 nova_compute[265391]: 2025-09-30 18:59:33.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:59:33 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2443: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:59:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:59:33.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:59:33.991Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:59:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:59:33.992Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:59:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:59:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:59:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:59:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:59:34 compute-0 nova_compute[265391]: 2025-09-30 18:59:34.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:59:34 compute-0 ceph-mon[73755]: pgmap v2443: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:59:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:59:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:59:34.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:59:35 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2444: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:59:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:59:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:59:35.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:59:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:59:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:59:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/715195180' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:59:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:59:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/715195180' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:59:36 compute-0 ceph-mon[73755]: pgmap v2444: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:59:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/715195180' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:59:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/715195180' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:59:36 compute-0 nova_compute[265391]: 2025-09-30 18:59:36.808 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 18:59:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:59:36.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:59:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:59:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:59:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:59:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:59:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:59:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 18:59:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 18:59:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:59:37.469Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:59:37 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2445: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:59:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 18:59:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:59:37.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 18:59:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:59:38 compute-0 nova_compute[265391]: 2025-09-30 18:59:38.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:59:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:59:38] "GET /metrics HTTP/1.1" 200 46746 "" "Prometheus/2.51.0"
Sep 30 18:59:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:59:38] "GET /metrics HTTP/1.1" 200 46746 "" "Prometheus/2.51.0"
Sep 30 18:59:38 compute-0 ceph-mon[73755]: pgmap v2445: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:59:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:59:38.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:59:38.995Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:59:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:59:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:59:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:59:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:59:39 compute-0 nova_compute[265391]: 2025-09-30 18:59:39.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:59:39 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2446: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:59:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:59:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:59:39.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:59:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:59:40.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:40 compute-0 ceph-mon[73755]: pgmap v2446: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:59:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:59:41 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2447: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:59:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:59:41.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:42 compute-0 ceph-mon[73755]: pgmap v2447: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:59:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:59:42.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:43 compute-0 nova_compute[265391]: 2025-09-30 18:59:43.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:59:43 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2448: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:59:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:59:43.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:59:43.993Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:59:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:59:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:59:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:59:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:59:44 compute-0 nova_compute[265391]: 2025-09-30 18:59:44.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:59:44 compute-0 ceph-mon[73755]: pgmap v2448: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:59:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:59:44.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:44 compute-0 sudo[380359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:59:44 compute-0 sudo[380359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:59:45 compute-0 sudo[380359]: pam_unix(sudo:session): session closed for user root
Sep 30 18:59:45 compute-0 podman[380383]: 2025-09-30 18:59:45.064517587 +0000 UTC m=+0.068628800 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Sep 30 18:59:45 compute-0 sudo[380405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Sep 30 18:59:45 compute-0 sudo[380405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:59:45 compute-0 podman[380385]: 2025-09-30 18:59:45.082240497 +0000 UTC m=+0.074177124 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, config_id=edpm, distribution-scope=public)
Sep 30 18:59:45 compute-0 podman[380384]: 2025-09-30 18:59:45.094374261 +0000 UTC m=+0.092673163 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Sep 30 18:59:45 compute-0 sudo[380405]: pam_unix(sudo:session): session closed for user root
Sep 30 18:59:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:59:45 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:59:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:59:45 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:59:45 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2449: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:59:45 compute-0 sudo[380487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:59:45 compute-0 sudo[380487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:59:45 compute-0 sudo[380487]: pam_unix(sudo:session): session closed for user root
Sep 30 18:59:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 18:59:45 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:59:45 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 18:59:45 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:59:45 compute-0 sudo[380512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 18:59:45 compute-0 sudo[380512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:59:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:59:45.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:46 compute-0 sudo[380558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 18:59:46 compute-0 sudo[380558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:59:46 compute-0 sudo[380558]: pam_unix(sudo:session): session closed for user root
Sep 30 18:59:46 compute-0 sudo[380512]: pam_unix(sudo:session): session closed for user root
Sep 30 18:59:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:59:46 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:59:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 18:59:46 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:59:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2450: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 564 B/s rd, 0 op/s
Sep 30 18:59:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 18:59:46 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:59:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 18:59:46 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:59:46 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:59:46 compute-0 ceph-mon[73755]: pgmap v2449: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 18:59:46 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:59:46 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:59:46 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:59:46 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 18:59:46 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:59:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 18:59:46 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:59:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 18:59:46 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:59:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 18:59:46 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:59:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:59:46 compute-0 sudo[380598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:59:46 compute-0 sudo[380598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:59:46 compute-0 sudo[380598]: pam_unix(sudo:session): session closed for user root
Sep 30 18:59:46 compute-0 sudo[380623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 18:59:46 compute-0 sudo[380623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:59:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:59:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:59:46.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:59:46 compute-0 podman[380690]: 2025-09-30 18:59:46.987946678 +0000 UTC m=+0.049145214 container create ce688110821a0fc3566578f81faba9bafb00e451a92f4f00d20d79e84a1d64ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_maxwell, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Sep 30 18:59:47 compute-0 systemd[1]: Started libpod-conmon-ce688110821a0fc3566578f81faba9bafb00e451a92f4f00d20d79e84a1d64ae.scope.
Sep 30 18:59:47 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:59:47 compute-0 podman[380690]: 2025-09-30 18:59:46.965224829 +0000 UTC m=+0.026423405 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:59:47 compute-0 podman[380690]: 2025-09-30 18:59:47.074976224 +0000 UTC m=+0.136174780 container init ce688110821a0fc3566578f81faba9bafb00e451a92f4f00d20d79e84a1d64ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_maxwell, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 18:59:47 compute-0 podman[380690]: 2025-09-30 18:59:47.083667899 +0000 UTC m=+0.144866435 container start ce688110821a0fc3566578f81faba9bafb00e451a92f4f00d20d79e84a1d64ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_maxwell, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:59:47 compute-0 podman[380690]: 2025-09-30 18:59:47.088861644 +0000 UTC m=+0.150060210 container attach ce688110821a0fc3566578f81faba9bafb00e451a92f4f00d20d79e84a1d64ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Sep 30 18:59:47 compute-0 reverent_maxwell[380706]: 167 167
Sep 30 18:59:47 compute-0 systemd[1]: libpod-ce688110821a0fc3566578f81faba9bafb00e451a92f4f00d20d79e84a1d64ae.scope: Deactivated successfully.
Sep 30 18:59:47 compute-0 podman[380690]: 2025-09-30 18:59:47.091180804 +0000 UTC m=+0.152379390 container died ce688110821a0fc3566578f81faba9bafb00e451a92f4f00d20d79e84a1d64ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_maxwell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 18:59:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3d3f39f105c4e66a0f96867f6959e30dcfa09e85e1a0837d436fde4b4bd20f8-merged.mount: Deactivated successfully.
Sep 30 18:59:47 compute-0 podman[380690]: 2025-09-30 18:59:47.131450758 +0000 UTC m=+0.192649294 container remove ce688110821a0fc3566578f81faba9bafb00e451a92f4f00d20d79e84a1d64ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_maxwell, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid)
Sep 30 18:59:47 compute-0 systemd[1]: libpod-conmon-ce688110821a0fc3566578f81faba9bafb00e451a92f4f00d20d79e84a1d64ae.scope: Deactivated successfully.
Sep 30 18:59:47 compute-0 podman[380729]: 2025-09-30 18:59:47.293409055 +0000 UTC m=+0.042298777 container create 3e24744b00e9d04aef6a97f96ab83e52187b59769a21068de004f01b0c3ba343 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_cerf, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Sep 30 18:59:47 compute-0 systemd[1]: Started libpod-conmon-3e24744b00e9d04aef6a97f96ab83e52187b59769a21068de004f01b0c3ba343.scope.
Sep 30 18:59:47 compute-0 podman[380729]: 2025-09-30 18:59:47.274985688 +0000 UTC m=+0.023875440 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:59:47 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:59:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67303ace55ec068a90bca92d13c84c8b9443036670a46de8ae7dfb3d8517833b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:59:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67303ace55ec068a90bca92d13c84c8b9443036670a46de8ae7dfb3d8517833b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:59:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67303ace55ec068a90bca92d13c84c8b9443036670a46de8ae7dfb3d8517833b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:59:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67303ace55ec068a90bca92d13c84c8b9443036670a46de8ae7dfb3d8517833b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:59:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67303ace55ec068a90bca92d13c84c8b9443036670a46de8ae7dfb3d8517833b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 18:59:47 compute-0 podman[380729]: 2025-09-30 18:59:47.402269307 +0000 UTC m=+0.151159059 container init 3e24744b00e9d04aef6a97f96ab83e52187b59769a21068de004f01b0c3ba343 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_cerf, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Sep 30 18:59:47 compute-0 podman[380729]: 2025-09-30 18:59:47.410287165 +0000 UTC m=+0.159176917 container start 3e24744b00e9d04aef6a97f96ab83e52187b59769a21068de004f01b0c3ba343 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:59:47 compute-0 podman[380729]: 2025-09-30 18:59:47.414589366 +0000 UTC m=+0.163479108 container attach 3e24744b00e9d04aef6a97f96ab83e52187b59769a21068de004f01b0c3ba343 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 18:59:47 compute-0 ceph-mon[73755]: pgmap v2450: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 564 B/s rd, 0 op/s
Sep 30 18:59:47 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:59:47 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:59:47 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 18:59:47 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 18:59:47 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 18:59:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:59:47.470Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:59:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:59:47.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:47 compute-0 clever_cerf[380745]: --> passed data devices: 0 physical, 1 LVM
Sep 30 18:59:47 compute-0 clever_cerf[380745]: --> All data devices are unavailable
Sep 30 18:59:47 compute-0 systemd[1]: libpod-3e24744b00e9d04aef6a97f96ab83e52187b59769a21068de004f01b0c3ba343.scope: Deactivated successfully.
Sep 30 18:59:47 compute-0 podman[380729]: 2025-09-30 18:59:47.761044456 +0000 UTC m=+0.509934168 container died 3e24744b00e9d04aef6a97f96ab83e52187b59769a21068de004f01b0c3ba343 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:59:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-67303ace55ec068a90bca92d13c84c8b9443036670a46de8ae7dfb3d8517833b-merged.mount: Deactivated successfully.
Sep 30 18:59:47 compute-0 podman[380729]: 2025-09-30 18:59:47.810246911 +0000 UTC m=+0.559136633 container remove 3e24744b00e9d04aef6a97f96ab83e52187b59769a21068de004f01b0c3ba343 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_cerf, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:59:47 compute-0 systemd[1]: libpod-conmon-3e24744b00e9d04aef6a97f96ab83e52187b59769a21068de004f01b0c3ba343.scope: Deactivated successfully.
Sep 30 18:59:47 compute-0 sudo[380623]: pam_unix(sudo:session): session closed for user root
Sep 30 18:59:47 compute-0 sudo[380773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:59:47 compute-0 sudo[380773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:59:47 compute-0 sudo[380773]: pam_unix(sudo:session): session closed for user root
Sep 30 18:59:48 compute-0 sudo[380798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 18:59:48 compute-0 sudo[380798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:59:48 compute-0 nova_compute[265391]: 2025-09-30 18:59:48.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:59:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2451: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 564 B/s rd, 0 op/s
Sep 30 18:59:48 compute-0 podman[380865]: 2025-09-30 18:59:48.484298882 +0000 UTC m=+0.052877502 container create 71873dcab56ac7b918c49c8e7ac5ba988251cf9aa019d7897d26327c5a6e1933 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_antonelli, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Sep 30 18:59:48 compute-0 systemd[1]: Started libpod-conmon-71873dcab56ac7b918c49c8e7ac5ba988251cf9aa019d7897d26327c5a6e1933.scope.
Sep 30 18:59:48 compute-0 podman[380865]: 2025-09-30 18:59:48.457101157 +0000 UTC m=+0.025679827 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:59:48 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:59:48 compute-0 podman[380865]: 2025-09-30 18:59:48.578427211 +0000 UTC m=+0.147005841 container init 71873dcab56ac7b918c49c8e7ac5ba988251cf9aa019d7897d26327c5a6e1933 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_antonelli, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:59:48 compute-0 podman[380865]: 2025-09-30 18:59:48.58955363 +0000 UTC m=+0.158132210 container start 71873dcab56ac7b918c49c8e7ac5ba988251cf9aa019d7897d26327c5a6e1933 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_antonelli, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Sep 30 18:59:48 compute-0 podman[380865]: 2025-09-30 18:59:48.593238395 +0000 UTC m=+0.161816995 container attach 71873dcab56ac7b918c49c8e7ac5ba988251cf9aa019d7897d26327c5a6e1933 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Sep 30 18:59:48 compute-0 hopeful_antonelli[380881]: 167 167
Sep 30 18:59:48 compute-0 systemd[1]: libpod-71873dcab56ac7b918c49c8e7ac5ba988251cf9aa019d7897d26327c5a6e1933.scope: Deactivated successfully.
Sep 30 18:59:48 compute-0 podman[380865]: 2025-09-30 18:59:48.59614001 +0000 UTC m=+0.164718590 container died 71873dcab56ac7b918c49c8e7ac5ba988251cf9aa019d7897d26327c5a6e1933 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Sep 30 18:59:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad6bb486c229e5cae9190c5d05558a372c37dd1204ac5d5a769334ca66313d3c-merged.mount: Deactivated successfully.
Sep 30 18:59:48 compute-0 podman[380865]: 2025-09-30 18:59:48.640422268 +0000 UTC m=+0.209000848 container remove 71873dcab56ac7b918c49c8e7ac5ba988251cf9aa019d7897d26327c5a6e1933 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_antonelli, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:59:48 compute-0 systemd[1]: libpod-conmon-71873dcab56ac7b918c49c8e7ac5ba988251cf9aa019d7897d26327c5a6e1933.scope: Deactivated successfully.
Sep 30 18:59:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:59:48] "GET /metrics HTTP/1.1" 200 46746 "" "Prometheus/2.51.0"
Sep 30 18:59:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:59:48] "GET /metrics HTTP/1.1" 200 46746 "" "Prometheus/2.51.0"
Sep 30 18:59:48 compute-0 podman[380905]: 2025-09-30 18:59:48.861725183 +0000 UTC m=+0.066937336 container create a80a2d88b89d1c34cf352e71786c684d72cacedb9518d2d1b3d815e30785dc44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:59:48 compute-0 systemd[1]: Started libpod-conmon-a80a2d88b89d1c34cf352e71786c684d72cacedb9518d2d1b3d815e30785dc44.scope.
Sep 30 18:59:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:59:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:59:48.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:59:48 compute-0 podman[380905]: 2025-09-30 18:59:48.833761748 +0000 UTC m=+0.038973951 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:59:48 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:59:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9ab7a52199a88353fdd83373e4c0ff18ba0d8aa5af949133569a8b72a1c04e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:59:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9ab7a52199a88353fdd83373e4c0ff18ba0d8aa5af949133569a8b72a1c04e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:59:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9ab7a52199a88353fdd83373e4c0ff18ba0d8aa5af949133569a8b72a1c04e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:59:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9ab7a52199a88353fdd83373e4c0ff18ba0d8aa5af949133569a8b72a1c04e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:59:48 compute-0 podman[380905]: 2025-09-30 18:59:48.97158826 +0000 UTC m=+0.176800413 container init a80a2d88b89d1c34cf352e71786c684d72cacedb9518d2d1b3d815e30785dc44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:59:48 compute-0 podman[380905]: 2025-09-30 18:59:48.984013232 +0000 UTC m=+0.189225395 container start a80a2d88b89d1c34cf352e71786c684d72cacedb9518d2d1b3d815e30785dc44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_torvalds, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:59:48 compute-0 podman[380905]: 2025-09-30 18:59:48.988751685 +0000 UTC m=+0.193963858 container attach a80a2d88b89d1c34cf352e71786c684d72cacedb9518d2d1b3d815e30785dc44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_torvalds, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:59:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:59:48.995Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:59:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:59:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:59:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:59:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:59:49 compute-0 vigilant_torvalds[380922]: {
Sep 30 18:59:49 compute-0 vigilant_torvalds[380922]:     "0": [
Sep 30 18:59:49 compute-0 vigilant_torvalds[380922]:         {
Sep 30 18:59:49 compute-0 vigilant_torvalds[380922]:             "devices": [
Sep 30 18:59:49 compute-0 vigilant_torvalds[380922]:                 "/dev/loop3"
Sep 30 18:59:49 compute-0 vigilant_torvalds[380922]:             ],
Sep 30 18:59:49 compute-0 vigilant_torvalds[380922]:             "lv_name": "ceph_lv0",
Sep 30 18:59:49 compute-0 vigilant_torvalds[380922]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:59:49 compute-0 vigilant_torvalds[380922]:             "lv_size": "21470642176",
Sep 30 18:59:49 compute-0 vigilant_torvalds[380922]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 18:59:49 compute-0 vigilant_torvalds[380922]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:59:49 compute-0 vigilant_torvalds[380922]:             "name": "ceph_lv0",
Sep 30 18:59:49 compute-0 vigilant_torvalds[380922]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:59:49 compute-0 vigilant_torvalds[380922]:             "tags": {
Sep 30 18:59:49 compute-0 vigilant_torvalds[380922]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 18:59:49 compute-0 vigilant_torvalds[380922]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 18:59:49 compute-0 vigilant_torvalds[380922]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 18:59:49 compute-0 vigilant_torvalds[380922]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 18:59:49 compute-0 vigilant_torvalds[380922]:                 "ceph.cluster_name": "ceph",
Sep 30 18:59:49 compute-0 vigilant_torvalds[380922]:                 "ceph.crush_device_class": "",
Sep 30 18:59:49 compute-0 vigilant_torvalds[380922]:                 "ceph.encrypted": "0",
Sep 30 18:59:49 compute-0 vigilant_torvalds[380922]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 18:59:49 compute-0 vigilant_torvalds[380922]:                 "ceph.osd_id": "0",
Sep 30 18:59:49 compute-0 vigilant_torvalds[380922]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 18:59:49 compute-0 vigilant_torvalds[380922]:                 "ceph.type": "block",
Sep 30 18:59:49 compute-0 vigilant_torvalds[380922]:                 "ceph.vdo": "0",
Sep 30 18:59:49 compute-0 vigilant_torvalds[380922]:                 "ceph.with_tpm": "0"
Sep 30 18:59:49 compute-0 vigilant_torvalds[380922]:             },
Sep 30 18:59:49 compute-0 vigilant_torvalds[380922]:             "type": "block",
Sep 30 18:59:49 compute-0 vigilant_torvalds[380922]:             "vg_name": "ceph_vg0"
Sep 30 18:59:49 compute-0 vigilant_torvalds[380922]:         }
Sep 30 18:59:49 compute-0 vigilant_torvalds[380922]:     ]
Sep 30 18:59:49 compute-0 vigilant_torvalds[380922]: }
Sep 30 18:59:49 compute-0 systemd[1]: libpod-a80a2d88b89d1c34cf352e71786c684d72cacedb9518d2d1b3d815e30785dc44.scope: Deactivated successfully.
Sep 30 18:59:49 compute-0 podman[380931]: 2025-09-30 18:59:49.378704282 +0000 UTC m=+0.046456095 container died a80a2d88b89d1c34cf352e71786c684d72cacedb9518d2d1b3d815e30785dc44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_torvalds, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Sep 30 18:59:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9ab7a52199a88353fdd83373e4c0ff18ba0d8aa5af949133569a8b72a1c04e4-merged.mount: Deactivated successfully.
Sep 30 18:59:49 compute-0 podman[380931]: 2025-09-30 18:59:49.42143495 +0000 UTC m=+0.089186683 container remove a80a2d88b89d1c34cf352e71786c684d72cacedb9518d2d1b3d815e30785dc44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_torvalds, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Sep 30 18:59:49 compute-0 systemd[1]: libpod-conmon-a80a2d88b89d1c34cf352e71786c684d72cacedb9518d2d1b3d815e30785dc44.scope: Deactivated successfully.
Sep 30 18:59:49 compute-0 nova_compute[265391]: 2025-09-30 18:59:49.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:59:49 compute-0 sudo[380798]: pam_unix(sudo:session): session closed for user root
Sep 30 18:59:49 compute-0 ceph-mon[73755]: pgmap v2451: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 564 B/s rd, 0 op/s
Sep 30 18:59:49 compute-0 sudo[380946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 18:59:49 compute-0 sudo[380946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:59:49 compute-0 sudo[380946]: pam_unix(sudo:session): session closed for user root
Sep 30 18:59:49 compute-0 sudo[380971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 18:59:49 compute-0 sudo[380971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:59:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:59:49.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:49 compute-0 podman[381040]: 2025-09-30 18:59:49.986896006 +0000 UTC m=+0.053151729 container create 245f22a0b7fa4f9c49fb7aa52086ba84734e9744b3a691bfbc34421b2e8b70ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_robinson, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 18:59:50 compute-0 systemd[1]: Started libpod-conmon-245f22a0b7fa4f9c49fb7aa52086ba84734e9744b3a691bfbc34421b2e8b70ae.scope.
Sep 30 18:59:50 compute-0 podman[381040]: 2025-09-30 18:59:49.959207008 +0000 UTC m=+0.025462781 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:59:50 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:59:50 compute-0 podman[381040]: 2025-09-30 18:59:50.068894771 +0000 UTC m=+0.135150554 container init 245f22a0b7fa4f9c49fb7aa52086ba84734e9744b3a691bfbc34421b2e8b70ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_robinson, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325)
Sep 30 18:59:50 compute-0 podman[381040]: 2025-09-30 18:59:50.076739754 +0000 UTC m=+0.142995447 container start 245f22a0b7fa4f9c49fb7aa52086ba84734e9744b3a691bfbc34421b2e8b70ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_robinson, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:59:50 compute-0 podman[381040]: 2025-09-30 18:59:50.080226135 +0000 UTC m=+0.146481868 container attach 245f22a0b7fa4f9c49fb7aa52086ba84734e9744b3a691bfbc34421b2e8b70ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 18:59:50 compute-0 angry_robinson[381056]: 167 167
Sep 30 18:59:50 compute-0 systemd[1]: libpod-245f22a0b7fa4f9c49fb7aa52086ba84734e9744b3a691bfbc34421b2e8b70ae.scope: Deactivated successfully.
Sep 30 18:59:50 compute-0 podman[381040]: 2025-09-30 18:59:50.083878159 +0000 UTC m=+0.150133862 container died 245f22a0b7fa4f9c49fb7aa52086ba84734e9744b3a691bfbc34421b2e8b70ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_robinson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 18:59:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-50fbd0a5cb6ed56275ea7d3dc13be4a0209fa67d048d5dd4ae0eb54e2b024d29-merged.mount: Deactivated successfully.
Sep 30 18:59:50 compute-0 podman[381040]: 2025-09-30 18:59:50.128485446 +0000 UTC m=+0.194741139 container remove 245f22a0b7fa4f9c49fb7aa52086ba84734e9744b3a691bfbc34421b2e8b70ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Sep 30 18:59:50 compute-0 systemd[1]: libpod-conmon-245f22a0b7fa4f9c49fb7aa52086ba84734e9744b3a691bfbc34421b2e8b70ae.scope: Deactivated successfully.
Sep 30 18:59:50 compute-0 podman[381080]: 2025-09-30 18:59:50.322643378 +0000 UTC m=+0.036107017 container create 44063a5a72e08ace47d9b3e6219395437600f9d49f529c4aab5d005ea7aa3907 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Sep 30 18:59:50 compute-0 systemd[1]: Started libpod-conmon-44063a5a72e08ace47d9b3e6219395437600f9d49f529c4aab5d005ea7aa3907.scope.
Sep 30 18:59:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2452: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 564 B/s rd, 0 op/s
Sep 30 18:59:50 compute-0 systemd[1]: Started libcrun container.
Sep 30 18:59:50 compute-0 podman[381080]: 2025-09-30 18:59:50.307415993 +0000 UTC m=+0.020879652 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 18:59:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ecc736fbba64dbdc514240d5804d8626de93df38c24957192258ad709aef097/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 18:59:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ecc736fbba64dbdc514240d5804d8626de93df38c24957192258ad709aef097/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 18:59:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ecc736fbba64dbdc514240d5804d8626de93df38c24957192258ad709aef097/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 18:59:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ecc736fbba64dbdc514240d5804d8626de93df38c24957192258ad709aef097/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 18:59:50 compute-0 podman[381080]: 2025-09-30 18:59:50.438690196 +0000 UTC m=+0.152153915 container init 44063a5a72e08ace47d9b3e6219395437600f9d49f529c4aab5d005ea7aa3907 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Sep 30 18:59:50 compute-0 podman[381080]: 2025-09-30 18:59:50.446142249 +0000 UTC m=+0.159605888 container start 44063a5a72e08ace47d9b3e6219395437600f9d49f529c4aab5d005ea7aa3907 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_johnson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1)
Sep 30 18:59:50 compute-0 podman[381080]: 2025-09-30 18:59:50.449586228 +0000 UTC m=+0.163049917 container attach 44063a5a72e08ace47d9b3e6219395437600f9d49f529c4aab5d005ea7aa3907 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 18:59:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:59:50.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:51 compute-0 lvm[381172]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 18:59:51 compute-0 lvm[381172]: VG ceph_vg0 finished
Sep 30 18:59:51 compute-0 vigorous_johnson[381096]: {}
Sep 30 18:59:51 compute-0 systemd[1]: libpod-44063a5a72e08ace47d9b3e6219395437600f9d49f529c4aab5d005ea7aa3907.scope: Deactivated successfully.
Sep 30 18:59:51 compute-0 systemd[1]: libpod-44063a5a72e08ace47d9b3e6219395437600f9d49f529c4aab5d005ea7aa3907.scope: Consumed 1.300s CPU time.
Sep 30 18:59:51 compute-0 podman[381080]: 2025-09-30 18:59:51.223170488 +0000 UTC m=+0.936634127 container died 44063a5a72e08ace47d9b3e6219395437600f9d49f529c4aab5d005ea7aa3907 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_johnson, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 18:59:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ecc736fbba64dbdc514240d5804d8626de93df38c24957192258ad709aef097-merged.mount: Deactivated successfully.
Sep 30 18:59:51 compute-0 podman[381080]: 2025-09-30 18:59:51.276201522 +0000 UTC m=+0.989665161 container remove 44063a5a72e08ace47d9b3e6219395437600f9d49f529c4aab5d005ea7aa3907 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_johnson, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 18:59:51 compute-0 systemd[1]: libpod-conmon-44063a5a72e08ace47d9b3e6219395437600f9d49f529c4aab5d005ea7aa3907.scope: Deactivated successfully.
Sep 30 18:59:51 compute-0 sudo[380971]: pam_unix(sudo:session): session closed for user root
Sep 30 18:59:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 18:59:51 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:59:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 18:59:51 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:59:51 compute-0 sudo[381190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 18:59:51 compute-0 sudo[381190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 18:59:51 compute-0 sudo[381190]: pam_unix(sudo:session): session closed for user root
Sep 30 18:59:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:59:51 compute-0 ceph-mon[73755]: pgmap v2452: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 564 B/s rd, 0 op/s
Sep 30 18:59:51 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:59:51 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 18:59:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:59:51.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 18:59:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:59:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2453: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 564 B/s rd, 0 op/s
Sep 30 18:59:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 18:59:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:59:52.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:53 compute-0 nova_compute[265391]: 2025-09-30 18:59:53.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:59:53 compute-0 ceph-mon[73755]: pgmap v2453: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 564 B/s rd, 0 op/s
Sep 30 18:59:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:59:53.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:59:53.994Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 18:59:53 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:59:53.994Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:59:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:59:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:59:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:59:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:59:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:59:54.361 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 18:59:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:59:54.362 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 18:59:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 18:59:54.362 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 18:59:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2454: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 847 B/s rd, 0 op/s
Sep 30 18:59:54 compute-0 nova_compute[265391]: 2025-09-30 18:59:54.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:59:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:59:54.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:55 compute-0 ceph-mon[73755]: pgmap v2454: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 847 B/s rd, 0 op/s
Sep 30 18:59:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 18:59:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:59:55.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 18:59:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2455: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 564 B/s rd, 0 op/s
Sep 30 18:59:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 18:59:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:59:56.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:59:57.472Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:59:57 compute-0 ceph-mon[73755]: pgmap v2455: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 564 B/s rd, 0 op/s
Sep 30 18:59:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 18:59:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1234171129' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:59:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 18:59:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1234171129' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:59:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:59:57.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:58 compute-0 nova_compute[265391]: 2025-09-30 18:59:58.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:59:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2456: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:59:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1234171129' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 18:59:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1234171129' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 18:59:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:18:59:58] "GET /metrics HTTP/1.1" 200 46746 "" "Prometheus/2.51.0"
Sep 30 18:59:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:18:59:58] "GET /metrics HTTP/1.1" 200 46746 "" "Prometheus/2.51.0"
Sep 30 18:59:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:18:59:58.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T18:59:58.997Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 18:59:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 18:59:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 18:59:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 18:59:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 18:59:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 18:59:59 compute-0 nova_compute[265391]: 2025-09-30 18:59:59.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 18:59:59 compute-0 ceph-mon[73755]: pgmap v2456: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 18:59:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 18:59:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 18:59:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:18:59:59.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 18:59:59 compute-0 podman[276673]: time="2025-09-30T18:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 18:59:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 18:59:59 compute-0 podman[276673]: @ - - [30/Sep/2025:18:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10315 "" "Go-http-client/1.1"
Sep 30 19:00:00 compute-0 ceph-mon[73755]: log_channel(cluster) log [WRN] : overall HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore; 1 failed cephadm daemon(s)
Sep 30 19:00:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2457: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:00:00 compute-0 ceph-mon[73755]: overall HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore; 1 failed cephadm daemon(s)
Sep 30 19:00:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:00:00.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:01 compute-0 openstack_network_exporter[279566]: ERROR   19:00:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 19:00:01 compute-0 openstack_network_exporter[279566]: ERROR   19:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 19:00:01 compute-0 openstack_network_exporter[279566]: ERROR   19:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 19:00:01 compute-0 openstack_network_exporter[279566]: ERROR   19:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 19:00:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 19:00:01 compute-0 openstack_network_exporter[279566]: ERROR   19:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 19:00:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 19:00:01 compute-0 nova_compute[265391]: 2025-09-30 19:00:01.431 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:00:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:00:01 compute-0 podman[381226]: 2025-09-30 19:00:01.507088387 +0000 UTC m=+0.051817364 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Sep 30 19:00:01 compute-0 podman[381228]: 2025-09-30 19:00:01.514642173 +0000 UTC m=+0.053353964 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 19:00:01 compute-0 podman[381227]: 2025-09-30 19:00:01.58823499 +0000 UTC m=+0.130769770 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 19:00:01 compute-0 ceph-mon[73755]: pgmap v2457: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:00:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:00:01.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2458: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:00:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:00:02.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:03 compute-0 nova_compute[265391]: 2025-09-30 19:00:03.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:00:03 compute-0 nova_compute[265391]: 2025-09-30 19:00:03.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:00:03 compute-0 ceph-mon[73755]: pgmap v2458: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:00:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:00:03.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:03 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:00:03.995Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:00:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:00:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:00:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:00:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:00:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2459: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:00:04 compute-0 nova_compute[265391]: 2025-09-30 19:00:04.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:00:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:00:04.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:05 compute-0 ceph-mon[73755]: pgmap v2459: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:00:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:00:05.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:06 compute-0 sudo[381301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 19:00:06 compute-0 sudo[381301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:00:06 compute-0 sudo[381301]: pam_unix(sudo:session): session closed for user root
Sep 30 19:00:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2460: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:00:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:00:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:00:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:00:06.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:00:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 19:00:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 19:00:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 19:00:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 19:00:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 19:00:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 19:00:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 19:00:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 19:00:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:00:07.473Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:00:07 compute-0 ceph-mon[73755]: pgmap v2460: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:00:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 19:00:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:00:07.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:08 compute-0 nova_compute[265391]: 2025-09-30 19:00:08.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2461: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_19:00:08
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'images', 'default.rgw.control', '.rgw.root', 'vms', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', '.nfs', 'backups']
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 19:00:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:19:00:08] "GET /metrics HTTP/1.1" 200 46745 "" "Prometheus/2.51.0"
Sep 30 19:00:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:19:00:08] "GET /metrics HTTP/1.1" 200 46745 "" "Prometheus/2.51.0"
Sep 30 19:00:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 19:00:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:00:08.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 19:00:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:00:08.998Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:00:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:00:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:00:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:00:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:00:09 compute-0 nova_compute[265391]: 2025-09-30 19:00:09.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:00:09 compute-0 ceph-mon[73755]: pgmap v2461: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:00:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:00:09.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2462: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:00:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:00:10.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:11 compute-0 nova_compute[265391]: 2025-09-30 19:00:11.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:00:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:00:11 compute-0 ceph-mon[73755]: pgmap v2462: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:00:11 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3259496954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 19:00:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:00:11.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2463: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:00:12 compute-0 nova_compute[265391]: 2025-09-30 19:00:12.424 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:00:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:00:12.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:13 compute-0 nova_compute[265391]: 2025-09-30 19:00:13.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:00:13 compute-0 nova_compute[265391]: 2025-09-30 19:00:13.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:00:13 compute-0 ceph-mon[73755]: pgmap v2463: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:00:13 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2437263130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 19:00:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:00:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:00:13.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:00:13 compute-0 nova_compute[265391]: 2025-09-30 19:00:13.973 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 19:00:13 compute-0 nova_compute[265391]: 2025-09-30 19:00:13.974 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 19:00:13 compute-0 nova_compute[265391]: 2025-09-30 19:00:13.974 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 19:00:13 compute-0 nova_compute[265391]: 2025-09-30 19:00:13.974 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 19:00:13 compute-0 nova_compute[265391]: 2025-09-30 19:00:13.975 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 19:00:13 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:00:13.996Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:00:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:00:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:00:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:00:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:00:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2464: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 8.7 KiB/s rd, 0 B/s wr, 14 op/s
Sep 30 19:00:14 compute-0 nova_compute[265391]: 2025-09-30 19:00:14.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:00:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 19:00:14 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2845511336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 19:00:14 compute-0 nova_compute[265391]: 2025-09-30 19:00:14.495 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 19:00:14 compute-0 nova_compute[265391]: 2025-09-30 19:00:14.647 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 19:00:14 compute-0 nova_compute[265391]: 2025-09-30 19:00:14.648 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 19:00:14 compute-0 nova_compute[265391]: 2025-09-30 19:00:14.667 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.018s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 19:00:14 compute-0 nova_compute[265391]: 2025-09-30 19:00:14.667 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4225MB free_disk=39.992183685302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 19:00:14 compute-0 nova_compute[265391]: 2025-09-30 19:00:14.667 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 19:00:14 compute-0 nova_compute[265391]: 2025-09-30 19:00:14.668 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 19:00:14 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2845511336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 19:00:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:00:14.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:15 compute-0 podman[381357]: 2025-09-30 19:00:15.517103204 +0000 UTC m=+0.058062366 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 19:00:15 compute-0 podman[381359]: 2025-09-30 19:00:15.528188571 +0000 UTC m=+0.061289840 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, version=9.6, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, name=ubi9-minimal, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Sep 30 19:00:15 compute-0 podman[381358]: 2025-09-30 19:00:15.537573044 +0000 UTC m=+0.075867417 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Sep 30 19:00:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:00:15.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:15 compute-0 ceph-mon[73755]: pgmap v2464: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 8.7 KiB/s rd, 0 B/s wr, 14 op/s
Sep 30 19:00:15 compute-0 nova_compute[265391]: 2025-09-30 19:00:15.787 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 19:00:15 compute-0 nova_compute[265391]: 2025-09-30 19:00:15.787 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 19:00:14 up  2:03,  0 user,  load average: 0.46, 0.70, 0.74\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 19:00:15 compute-0 nova_compute[265391]: 2025-09-30 19:00:15.806 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Refreshing inventories for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:822
Sep 30 19:00:15 compute-0 nova_compute[265391]: 2025-09-30 19:00:15.879 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Updating ProviderTree inventory for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:786
Sep 30 19:00:15 compute-0 nova_compute[265391]: 2025-09-30 19:00:15.880 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Updating inventory in ProviderTree for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Sep 30 19:00:15 compute-0 nova_compute[265391]: 2025-09-30 19:00:15.893 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Refreshing aggregate associations for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2, aggregates: None _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:831
Sep 30 19:00:15 compute-0 nova_compute[265391]: 2025-09-30 19:00:15.952 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Refreshing trait associations for resource provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2, traits: COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SOUND_MODEL_SB16,COMPUTE_ARCH_X86_64,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIRTIO_PACKED,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SVM,COMPUTE_SECURITY_TPM_TIS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOUND_MODEL_ICH9,COMPUTE_SOUND_MODEL_USB,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOUND_MODEL_PCSPK,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ADDRESS_SPACE_EMULATED,COMPUTE_RESCUE_BFV,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_STATELESS_FIRMWARE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_IGB,HW_ARCH_X86_64,COMPUTE_ACCELERATORS,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOUND_MODEL_ES1370,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_CRB,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_VIRTIO_FS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ADDRESS_SPACE_PASSTHROUGH,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_F16C,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOUND_MODEL_ICH6,COMPUTE_SOUND_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SOUND_MODEL_AC97,HW_CPU_X86_SSE42 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:843
Sep 30 19:00:15 compute-0 nova_compute[265391]: 2025-09-30 19:00:15.974 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 19:00:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 19:00:16 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/741081939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 19:00:16 compute-0 nova_compute[265391]: 2025-09-30 19:00:16.387 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 19:00:16 compute-0 nova_compute[265391]: 2025-09-30 19:00:16.392 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 19:00:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2465: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 8.5 KiB/s rd, 0 B/s wr, 13 op/s
Sep 30 19:00:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:00:16 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/741081939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 19:00:16 compute-0 nova_compute[265391]: 2025-09-30 19:00:16.921 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 19:00:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:00:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:00:16.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:00:17 compute-0 nova_compute[265391]: 2025-09-30 19:00:17.443 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 19:00:17 compute-0 nova_compute[265391]: 2025-09-30 19:00:17.443 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.776s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 19:00:17 compute-0 nova_compute[265391]: 2025-09-30 19:00:17.444 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:00:17 compute-0 nova_compute[265391]: 2025-09-30 19:00:17.444 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11909
Sep 30 19:00:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:00:17.474Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:00:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:00:17.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:17 compute-0 ceph-mon[73755]: pgmap v2465: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 8.5 KiB/s rd, 0 B/s wr, 13 op/s
Sep 30 19:00:17 compute-0 nova_compute[265391]: 2025-09-30 19:00:17.953 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11918
Sep 30 19:00:18 compute-0 nova_compute[265391]: 2025-09-30 19:00:18.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:00:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2466: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 41 KiB/s rd, 0 B/s wr, 68 op/s
Sep 30 19:00:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:19:00:18] "GET /metrics HTTP/1.1" 200 46745 "" "Prometheus/2.51.0"
Sep 30 19:00:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:19:00:18] "GET /metrics HTTP/1.1" 200 46745 "" "Prometheus/2.51.0"
Sep 30 19:00:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:00:18.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:00:18.999Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:00:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:00:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:00:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:00:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:00:19 compute-0 nova_compute[265391]: 2025-09-30 19:00:19.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:00:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:00:19.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:19 compute-0 ceph-mon[73755]: pgmap v2466: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 41 KiB/s rd, 0 B/s wr, 68 op/s
Sep 30 19:00:19 compute-0 nova_compute[265391]: 2025-09-30 19:00:19.954 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:00:19 compute-0 nova_compute[265391]: 2025-09-30 19:00:19.955 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:00:19 compute-0 nova_compute[265391]: 2025-09-30 19:00:19.955 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:00:19 compute-0 nova_compute[265391]: 2025-09-30 19:00:19.955 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 19:00:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2467: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Sep 30 19:00:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 19:00:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:00:20.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 19:00:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:00:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:00:21.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:21 compute-0 ceph-mon[73755]: pgmap v2467: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Sep 30 19:00:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 19:00:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 19:00:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2468: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Sep 30 19:00:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 19:00:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:00:22.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:23 compute-0 nova_compute[265391]: 2025-09-30 19:00:23.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:00:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:00:23.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:23 compute-0 ceph-mon[73755]: pgmap v2468: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Sep 30 19:00:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:00:23.997Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:00:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:00:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:00:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:00:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:00:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2469: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Sep 30 19:00:24 compute-0 nova_compute[265391]: 2025-09-30 19:00:24.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:00:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:00:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:00:24.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:00:25 compute-0 podman[276673]: time="2025-09-30T19:00:25Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 19:00:25 compute-0 podman[276673]: @ - - [30/Sep/2025:19:00:25 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 42191 "" "Go-http-client/1.1"
Sep 30 19:00:25 compute-0 nova_compute[265391]: 2025-09-30 19:00:25.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:00:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:00:25.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:25 compute-0 ceph-mon[73755]: pgmap v2469: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Sep 30 19:00:26 compute-0 sudo[381451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 19:00:26 compute-0 sudo[381451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:00:26 compute-0 sudo[381451]: pam_unix(sudo:session): session closed for user root
Sep 30 19:00:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2470: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 64 KiB/s rd, 0 B/s wr, 106 op/s
Sep 30 19:00:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:00:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:00:26.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:00:27.475Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:00:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:00:27.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:27 compute-0 ceph-mon[73755]: pgmap v2470: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 64 KiB/s rd, 0 B/s wr, 106 op/s
Sep 30 19:00:28 compute-0 ovn_metadata_agent[166152]: 2025-09-30 19:00:28.218 166158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '9a:d4:a3', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '1e:91:34:f3:56:33'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Sep 30 19:00:28 compute-0 ovn_metadata_agent[166152]: 2025-09-30 19:00:28.219 166158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Sep 30 19:00:28 compute-0 nova_compute[265391]: 2025-09-30 19:00:28.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:00:28 compute-0 nova_compute[265391]: 2025-09-30 19:00:28.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:00:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2471: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 64 KiB/s rd, 0 B/s wr, 106 op/s
Sep 30 19:00:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:19:00:28] "GET /metrics HTTP/1.1" 200 46745 "" "Prometheus/2.51.0"
Sep 30 19:00:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:19:00:28] "GET /metrics HTTP/1.1" 200 46745 "" "Prometheus/2.51.0"
Sep 30 19:00:28 compute-0 nova_compute[265391]: 2025-09-30 19:00:28.804 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:00:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:00:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:00:28.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:00:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:00:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:00:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:00:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:00:29.000Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:00:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:00:29 compute-0 nova_compute[265391]: 2025-09-30 19:00:29.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:00:29 compute-0 podman[276673]: time="2025-09-30T19:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 19:00:29 compute-0 podman[276673]: @ - - [30/Sep/2025:19:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 19:00:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:00:29.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:29 compute-0 podman[276673]: @ - - [30/Sep/2025:19:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10318 "" "Go-http-client/1.1"
Sep 30 19:00:29 compute-0 ceph-mon[73755]: pgmap v2471: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 64 KiB/s rd, 0 B/s wr, 106 op/s
Sep 30 19:00:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2472: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s
Sep 30 19:00:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:00:30.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:31 compute-0 openstack_network_exporter[279566]: ERROR   19:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 19:00:31 compute-0 openstack_network_exporter[279566]: ERROR   19:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 19:00:31 compute-0 openstack_network_exporter[279566]: ERROR   19:00:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 19:00:31 compute-0 openstack_network_exporter[279566]: ERROR   19:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 19:00:31 compute-0 openstack_network_exporter[279566]: ERROR   19:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 19:00:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:00:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:00:31.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:31 compute-0 ceph-mon[73755]: pgmap v2472: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s
Sep 30 19:00:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2473: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:00:32 compute-0 podman[381483]: 2025-09-30 19:00:32.559657205 +0000 UTC m=+0.087592071 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Sep 30 19:00:32 compute-0 podman[381485]: 2025-09-30 19:00:32.565327632 +0000 UTC m=+0.093854854 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Sep 30 19:00:32 compute-0 podman[381484]: 2025-09-30 19:00:32.59575349 +0000 UTC m=+0.118053570 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Sep 30 19:00:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:00:32.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:33 compute-0 nova_compute[265391]: 2025-09-30 19:00:33.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:00:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:00:33.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:33 compute-0 ceph-mon[73755]: pgmap v2473: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:00:33 compute-0 nova_compute[265391]: 2025-09-30 19:00:33.931 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:00:33 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:00:33.997Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:00:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:00:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:00:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:00:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:00:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2474: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:00:34 compute-0 nova_compute[265391]: 2025-09-30 19:00:34.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:00:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:00:34.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:00:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:00:35.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:00:35 compute-0 ceph-mon[73755]: pgmap v2474: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:00:36 compute-0 ovn_metadata_agent[166152]: 2025-09-30 19:00:36.220 166158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b0398922-aff5-46ba-afa7-58d09e28293c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 19:00:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2475: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:00:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:00:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 19:00:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3465137377' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 19:00:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 19:00:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3465137377' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 19:00:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3465137377' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 19:00:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3465137377' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 19:00:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:00:36.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 19:00:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 19:00:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 19:00:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 19:00:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 19:00:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 19:00:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 19:00:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 19:00:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:00:37.476Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:00:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:00:37.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:37 compute-0 ceph-mon[73755]: pgmap v2475: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:00:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 19:00:38 compute-0 nova_compute[265391]: 2025-09-30 19:00:38.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:00:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2476: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:00:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:19:00:38] "GET /metrics HTTP/1.1" 200 46742 "" "Prometheus/2.51.0"
Sep 30 19:00:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:19:00:38] "GET /metrics HTTP/1.1" 200 46742 "" "Prometheus/2.51.0"
Sep 30 19:00:38 compute-0 ceph-mon[73755]: pgmap v2476: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:00:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:00:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:00:38.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:00:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:00:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:00:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:00:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:00:39.001Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:00:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:00:39 compute-0 nova_compute[265391]: 2025-09-30 19:00:39.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:00:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:00:39.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2477: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:00:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:00:40.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:41 compute-0 ceph-mon[73755]: pgmap v2477: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:00:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:00:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:00:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:00:41.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:00:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2478: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:00:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:00:42.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:43 compute-0 nova_compute[265391]: 2025-09-30 19:00:43.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:00:43 compute-0 ceph-mon[73755]: pgmap v2478: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:00:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:00:43.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:43 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:00:43.998Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:00:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:00:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:00:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:00:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:00:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2479: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:00:44 compute-0 nova_compute[265391]: 2025-09-30 19:00:44.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:00:44 compute-0 nova_compute[265391]: 2025-09-30 19:00:44.428 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.12/site-packages/nova/compute/manager.py:11947
Sep 30 19:00:44 compute-0 nova_compute[265391]: 2025-09-30 19:00:44.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:00:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:00:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:00:44.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:00:45 compute-0 ceph-mon[73755]: pgmap v2479: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:00:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:00:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:00:45.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:00:46 compute-0 sudo[381567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 19:00:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2480: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:00:46 compute-0 sudo[381567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:00:46 compute-0 sudo[381567]: pam_unix(sudo:session): session closed for user root
Sep 30 19:00:46 compute-0 podman[381591]: 2025-09-30 19:00:46.493873795 +0000 UTC m=+0.070158290 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, container_name=multipathd, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2)
Sep 30 19:00:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:00:46 compute-0 podman[381592]: 2025-09-30 19:00:46.500374853 +0000 UTC m=+0.063672621 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 19:00:46 compute-0 podman[381593]: 2025-09-30 19:00:46.517026495 +0000 UTC m=+0.081232217 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.buildah.version=1.33.7, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, name=ubi9-minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, distribution-scope=public, managed_by=edpm_ansible, config_id=edpm, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Sep 30 19:00:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:00:46.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:00:47.476Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:00:47 compute-0 ceph-mon[73755]: pgmap v2480: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:00:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:00:47.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:48 compute-0 nova_compute[265391]: 2025-09-30 19:00:48.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:00:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2481: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:00:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:19:00:48] "GET /metrics HTTP/1.1" 200 46742 "" "Prometheus/2.51.0"
Sep 30 19:00:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:19:00:48] "GET /metrics HTTP/1.1" 200 46742 "" "Prometheus/2.51.0"
Sep 30 19:00:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:00:48.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:00:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:00:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:00:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:00:49.002Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:00:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:00:49 compute-0 nova_compute[265391]: 2025-09-30 19:00:49.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:00:49 compute-0 ceph-mon[73755]: pgmap v2481: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:00:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:00:49.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2482: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:00:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:00:50.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:00:51 compute-0 ceph-mon[73755]: pgmap v2482: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:00:51 compute-0 sudo[381654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 19:00:51 compute-0 sudo[381654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:00:51 compute-0 sudo[381654]: pam_unix(sudo:session): session closed for user root
Sep 30 19:00:51 compute-0 sudo[381679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 19:00:51 compute-0 sudo[381679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:00:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:00:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:00:51.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:00:52 compute-0 sudo[381679]: pam_unix(sudo:session): session closed for user root
Sep 30 19:00:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 19:00:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 19:00:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2483: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:00:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 19:00:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 19:00:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 19:00:52 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 19:00:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2484: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 610 B/s rd, 0 op/s
Sep 30 19:00:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 19:00:52 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 19:00:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 19:00:52 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 19:00:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 19:00:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 19:00:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 19:00:52 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 19:00:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 19:00:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 19:00:52 compute-0 sudo[381738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 19:00:52 compute-0 sudo[381738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:00:52 compute-0 sudo[381738]: pam_unix(sudo:session): session closed for user root
Sep 30 19:00:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 19:00:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 19:00:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 19:00:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 19:00:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 19:00:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 19:00:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 19:00:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 19:00:52 compute-0 sudo[381763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 19:00:52 compute-0 sudo[381763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:00:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:00:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:00:52.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:00:53 compute-0 podman[381829]: 2025-09-30 19:00:53.104569914 +0000 UTC m=+0.053623311 container create b2ad8f1911f1f08ec5e5b517d81f1a2d8d3319c3e0d87480d19bfe0ac82e4ee7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Sep 30 19:00:53 compute-0 systemd[1]: Started libpod-conmon-b2ad8f1911f1f08ec5e5b517d81f1a2d8d3319c3e0d87480d19bfe0ac82e4ee7.scope.
Sep 30 19:00:53 compute-0 podman[381829]: 2025-09-30 19:00:53.07469241 +0000 UTC m=+0.023745857 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 19:00:53 compute-0 systemd[1]: Started libcrun container.
Sep 30 19:00:53 compute-0 podman[381829]: 2025-09-30 19:00:53.203271401 +0000 UTC m=+0.152324818 container init b2ad8f1911f1f08ec5e5b517d81f1a2d8d3319c3e0d87480d19bfe0ac82e4ee7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_galileo, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 19:00:53 compute-0 podman[381829]: 2025-09-30 19:00:53.210012286 +0000 UTC m=+0.159065683 container start b2ad8f1911f1f08ec5e5b517d81f1a2d8d3319c3e0d87480d19bfe0ac82e4ee7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_galileo, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Sep 30 19:00:53 compute-0 brave_galileo[381845]: 167 167
Sep 30 19:00:53 compute-0 systemd[1]: libpod-b2ad8f1911f1f08ec5e5b517d81f1a2d8d3319c3e0d87480d19bfe0ac82e4ee7.scope: Deactivated successfully.
Sep 30 19:00:53 compute-0 podman[381829]: 2025-09-30 19:00:53.217430788 +0000 UTC m=+0.166484215 container attach b2ad8f1911f1f08ec5e5b517d81f1a2d8d3319c3e0d87480d19bfe0ac82e4ee7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_galileo, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Sep 30 19:00:53 compute-0 podman[381829]: 2025-09-30 19:00:53.217991633 +0000 UTC m=+0.167045040 container died b2ad8f1911f1f08ec5e5b517d81f1a2d8d3319c3e0d87480d19bfe0ac82e4ee7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_galileo, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 19:00:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9275850363567c5cd607fc0b142bbdeb0f34a436d14896a94907c60fed6a9bc-merged.mount: Deactivated successfully.
Sep 30 19:00:53 compute-0 podman[381829]: 2025-09-30 19:00:53.293212692 +0000 UTC m=+0.242266109 container remove b2ad8f1911f1f08ec5e5b517d81f1a2d8d3319c3e0d87480d19bfe0ac82e4ee7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 19:00:53 compute-0 systemd[1]: libpod-conmon-b2ad8f1911f1f08ec5e5b517d81f1a2d8d3319c3e0d87480d19bfe0ac82e4ee7.scope: Deactivated successfully.
Sep 30 19:00:53 compute-0 nova_compute[265391]: 2025-09-30 19:00:53.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:00:53 compute-0 podman[381870]: 2025-09-30 19:00:53.512203928 +0000 UTC m=+0.073021653 container create 2265196fba67b81a007fb4a15b981f10d238373d5e6c62e8362b73b2f011182f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 19:00:53 compute-0 podman[381870]: 2025-09-30 19:00:53.478623998 +0000 UTC m=+0.039441713 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 19:00:53 compute-0 systemd[1]: Started libpod-conmon-2265196fba67b81a007fb4a15b981f10d238373d5e6c62e8362b73b2f011182f.scope.
Sep 30 19:00:53 compute-0 systemd[1]: Started libcrun container.
Sep 30 19:00:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/820f2ff5ad9d61d2d4a463c8fcb6a68bb61a107f6082a17631814adf4a38c3aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 19:00:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/820f2ff5ad9d61d2d4a463c8fcb6a68bb61a107f6082a17631814adf4a38c3aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 19:00:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/820f2ff5ad9d61d2d4a463c8fcb6a68bb61a107f6082a17631814adf4a38c3aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 19:00:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/820f2ff5ad9d61d2d4a463c8fcb6a68bb61a107f6082a17631814adf4a38c3aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 19:00:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/820f2ff5ad9d61d2d4a463c8fcb6a68bb61a107f6082a17631814adf4a38c3aa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 19:00:53 compute-0 podman[381870]: 2025-09-30 19:00:53.639535909 +0000 UTC m=+0.200353634 container init 2265196fba67b81a007fb4a15b981f10d238373d5e6c62e8362b73b2f011182f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_bhaskara, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Sep 30 19:00:53 compute-0 podman[381870]: 2025-09-30 19:00:53.653147721 +0000 UTC m=+0.213965456 container start 2265196fba67b81a007fb4a15b981f10d238373d5e6c62e8362b73b2f011182f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Sep 30 19:00:53 compute-0 ceph-mon[73755]: pgmap v2483: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:00:53 compute-0 ceph-mon[73755]: pgmap v2484: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 610 B/s rd, 0 op/s
Sep 30 19:00:53 compute-0 podman[381870]: 2025-09-30 19:00:53.661135408 +0000 UTC m=+0.221953273 container attach 2265196fba67b81a007fb4a15b981f10d238373d5e6c62e8362b73b2f011182f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Sep 30 19:00:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:00:53.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:00:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:00:53.999Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 19:00:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:00:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:00:54.000Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 19:00:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:00:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:00:54 compute-0 eloquent_bhaskara[381887]: --> passed data devices: 0 physical, 1 LVM
Sep 30 19:00:54 compute-0 eloquent_bhaskara[381887]: --> All data devices are unavailable
Sep 30 19:00:54 compute-0 systemd[1]: libpod-2265196fba67b81a007fb4a15b981f10d238373d5e6c62e8362b73b2f011182f.scope: Deactivated successfully.
Sep 30 19:00:54 compute-0 podman[381870]: 2025-09-30 19:00:54.072536181 +0000 UTC m=+0.633353876 container died 2265196fba67b81a007fb4a15b981f10d238373d5e6c62e8362b73b2f011182f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_bhaskara, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 19:00:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-820f2ff5ad9d61d2d4a463c8fcb6a68bb61a107f6082a17631814adf4a38c3aa-merged.mount: Deactivated successfully.
Sep 30 19:00:54 compute-0 podman[381870]: 2025-09-30 19:00:54.130576346 +0000 UTC m=+0.691394041 container remove 2265196fba67b81a007fb4a15b981f10d238373d5e6c62e8362b73b2f011182f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_bhaskara, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 19:00:54 compute-0 systemd[1]: libpod-conmon-2265196fba67b81a007fb4a15b981f10d238373d5e6c62e8362b73b2f011182f.scope: Deactivated successfully.
Sep 30 19:00:54 compute-0 sudo[381763]: pam_unix(sudo:session): session closed for user root
Sep 30 19:00:54 compute-0 sudo[381914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 19:00:54 compute-0 sudo[381914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:00:54 compute-0 sudo[381914]: pam_unix(sudo:session): session closed for user root
Sep 30 19:00:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 19:00:54.364 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 19:00:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 19:00:54.365 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 19:00:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 19:00:54.365 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 19:00:54 compute-0 sudo[381939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 19:00:54 compute-0 sudo[381939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:00:54 compute-0 nova_compute[265391]: 2025-09-30 19:00:54.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:00:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2485: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 610 B/s rd, 0 op/s
Sep 30 19:00:54 compute-0 podman[382008]: 2025-09-30 19:00:54.904963437 +0000 UTC m=+0.049078313 container create 8b986def9107736d6ced57dcf7651e2139882aab88ea46fbdc67c4f67fc49419 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_ganguly, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 19:00:54 compute-0 systemd[1]: Started libpod-conmon-8b986def9107736d6ced57dcf7651e2139882aab88ea46fbdc67c4f67fc49419.scope.
Sep 30 19:00:54 compute-0 podman[382008]: 2025-09-30 19:00:54.883368867 +0000 UTC m=+0.027483733 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 19:00:54 compute-0 systemd[1]: Started libcrun container.
Sep 30 19:00:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:00:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:00:54.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:00:54 compute-0 podman[382008]: 2025-09-30 19:00:54.99961742 +0000 UTC m=+0.143732316 container init 8b986def9107736d6ced57dcf7651e2139882aab88ea46fbdc67c4f67fc49419 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_ganguly, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 19:00:55 compute-0 podman[382008]: 2025-09-30 19:00:55.010444801 +0000 UTC m=+0.154559647 container start 8b986def9107736d6ced57dcf7651e2139882aab88ea46fbdc67c4f67fc49419 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 19:00:55 compute-0 podman[382008]: 2025-09-30 19:00:55.01428949 +0000 UTC m=+0.158404346 container attach 8b986def9107736d6ced57dcf7651e2139882aab88ea46fbdc67c4f67fc49419 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_ganguly, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 19:00:55 compute-0 trusting_ganguly[382024]: 167 167
Sep 30 19:00:55 compute-0 systemd[1]: libpod-8b986def9107736d6ced57dcf7651e2139882aab88ea46fbdc67c4f67fc49419.scope: Deactivated successfully.
Sep 30 19:00:55 compute-0 podman[382008]: 2025-09-30 19:00:55.021321172 +0000 UTC m=+0.165436048 container died 8b986def9107736d6ced57dcf7651e2139882aab88ea46fbdc67c4f67fc49419 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_ganguly, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 19:00:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-8cbdec0bf351b69fd5de34bf74a45aa30d08328210906c43783b7873553e1c33-merged.mount: Deactivated successfully.
Sep 30 19:00:55 compute-0 podman[382008]: 2025-09-30 19:00:55.076041431 +0000 UTC m=+0.220156277 container remove 8b986def9107736d6ced57dcf7651e2139882aab88ea46fbdc67c4f67fc49419 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 19:00:55 compute-0 systemd[1]: libpod-conmon-8b986def9107736d6ced57dcf7651e2139882aab88ea46fbdc67c4f67fc49419.scope: Deactivated successfully.
Sep 30 19:00:55 compute-0 podman[382048]: 2025-09-30 19:00:55.271881847 +0000 UTC m=+0.056961508 container create 5f622774474faab2553588c55ef52307e21f951e1e860359ead528ab8622c744 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_goldstine, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 19:00:55 compute-0 systemd[1]: Started libpod-conmon-5f622774474faab2553588c55ef52307e21f951e1e860359ead528ab8622c744.scope.
Sep 30 19:00:55 compute-0 podman[382048]: 2025-09-30 19:00:55.249728262 +0000 UTC m=+0.034807953 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 19:00:55 compute-0 systemd[1]: Started libcrun container.
Sep 30 19:00:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cc5d3e55bb6c5caa1bd76337fe886dc742496d1d94e2b8ccd70c2bb139b7ee7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 19:00:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cc5d3e55bb6c5caa1bd76337fe886dc742496d1d94e2b8ccd70c2bb139b7ee7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 19:00:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cc5d3e55bb6c5caa1bd76337fe886dc742496d1d94e2b8ccd70c2bb139b7ee7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 19:00:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cc5d3e55bb6c5caa1bd76337fe886dc742496d1d94e2b8ccd70c2bb139b7ee7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 19:00:55 compute-0 podman[382048]: 2025-09-30 19:00:55.37925978 +0000 UTC m=+0.164339541 container init 5f622774474faab2553588c55ef52307e21f951e1e860359ead528ab8622c744 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_goldstine, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Sep 30 19:00:55 compute-0 podman[382048]: 2025-09-30 19:00:55.386541358 +0000 UTC m=+0.171621049 container start 5f622774474faab2553588c55ef52307e21f951e1e860359ead528ab8622c744 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Sep 30 19:00:55 compute-0 podman[382048]: 2025-09-30 19:00:55.394424323 +0000 UTC m=+0.179504014 container attach 5f622774474faab2553588c55ef52307e21f951e1e860359ead528ab8622c744 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_goldstine, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 19:00:55 compute-0 ceph-mon[73755]: pgmap v2485: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 610 B/s rd, 0 op/s
Sep 30 19:00:55 compute-0 gracious_goldstine[382064]: {
Sep 30 19:00:55 compute-0 gracious_goldstine[382064]:     "0": [
Sep 30 19:00:55 compute-0 gracious_goldstine[382064]:         {
Sep 30 19:00:55 compute-0 gracious_goldstine[382064]:             "devices": [
Sep 30 19:00:55 compute-0 gracious_goldstine[382064]:                 "/dev/loop3"
Sep 30 19:00:55 compute-0 gracious_goldstine[382064]:             ],
Sep 30 19:00:55 compute-0 gracious_goldstine[382064]:             "lv_name": "ceph_lv0",
Sep 30 19:00:55 compute-0 gracious_goldstine[382064]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 19:00:55 compute-0 gracious_goldstine[382064]:             "lv_size": "21470642176",
Sep 30 19:00:55 compute-0 gracious_goldstine[382064]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 19:00:55 compute-0 gracious_goldstine[382064]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 19:00:55 compute-0 gracious_goldstine[382064]:             "name": "ceph_lv0",
Sep 30 19:00:55 compute-0 gracious_goldstine[382064]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 19:00:55 compute-0 gracious_goldstine[382064]:             "tags": {
Sep 30 19:00:55 compute-0 gracious_goldstine[382064]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 19:00:55 compute-0 gracious_goldstine[382064]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 19:00:55 compute-0 gracious_goldstine[382064]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 19:00:55 compute-0 gracious_goldstine[382064]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 19:00:55 compute-0 gracious_goldstine[382064]:                 "ceph.cluster_name": "ceph",
Sep 30 19:00:55 compute-0 gracious_goldstine[382064]:                 "ceph.crush_device_class": "",
Sep 30 19:00:55 compute-0 gracious_goldstine[382064]:                 "ceph.encrypted": "0",
Sep 30 19:00:55 compute-0 gracious_goldstine[382064]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 19:00:55 compute-0 gracious_goldstine[382064]:                 "ceph.osd_id": "0",
Sep 30 19:00:55 compute-0 gracious_goldstine[382064]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 19:00:55 compute-0 gracious_goldstine[382064]:                 "ceph.type": "block",
Sep 30 19:00:55 compute-0 gracious_goldstine[382064]:                 "ceph.vdo": "0",
Sep 30 19:00:55 compute-0 gracious_goldstine[382064]:                 "ceph.with_tpm": "0"
Sep 30 19:00:55 compute-0 gracious_goldstine[382064]:             },
Sep 30 19:00:55 compute-0 gracious_goldstine[382064]:             "type": "block",
Sep 30 19:00:55 compute-0 gracious_goldstine[382064]:             "vg_name": "ceph_vg0"
Sep 30 19:00:55 compute-0 gracious_goldstine[382064]:         }
Sep 30 19:00:55 compute-0 gracious_goldstine[382064]:     ]
Sep 30 19:00:55 compute-0 gracious_goldstine[382064]: }
Sep 30 19:00:55 compute-0 systemd[1]: libpod-5f622774474faab2553588c55ef52307e21f951e1e860359ead528ab8622c744.scope: Deactivated successfully.
Sep 30 19:00:55 compute-0 podman[382048]: 2025-09-30 19:00:55.763025136 +0000 UTC m=+0.548104807 container died 5f622774474faab2553588c55ef52307e21f951e1e860359ead528ab8622c744 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_goldstine, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Sep 30 19:00:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cc5d3e55bb6c5caa1bd76337fe886dc742496d1d94e2b8ccd70c2bb139b7ee7-merged.mount: Deactivated successfully.
Sep 30 19:00:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:00:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:00:55.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:00:55 compute-0 podman[382048]: 2025-09-30 19:00:55.83105832 +0000 UTC m=+0.616137971 container remove 5f622774474faab2553588c55ef52307e21f951e1e860359ead528ab8622c744 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_goldstine, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 19:00:55 compute-0 systemd[1]: libpod-conmon-5f622774474faab2553588c55ef52307e21f951e1e860359ead528ab8622c744.scope: Deactivated successfully.
Sep 30 19:00:55 compute-0 sudo[381939]: pam_unix(sudo:session): session closed for user root
Sep 30 19:00:55 compute-0 sudo[382085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 19:00:55 compute-0 sudo[382085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:00:55 compute-0 sudo[382085]: pam_unix(sudo:session): session closed for user root
Sep 30 19:00:56 compute-0 sudo[382110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 19:00:56 compute-0 sudo[382110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:00:56 compute-0 podman[382177]: 2025-09-30 19:00:56.42215731 +0000 UTC m=+0.052887092 container create ea34b60757a4769f0398b66fd2ffa26dd9ef2a04b03f84d3fecf9e36dbcb741d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_khorana, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Sep 30 19:00:56 compute-0 systemd[1]: Started libpod-conmon-ea34b60757a4769f0398b66fd2ffa26dd9ef2a04b03f84d3fecf9e36dbcb741d.scope.
Sep 30 19:00:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2486: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 610 B/s rd, 0 op/s
Sep 30 19:00:56 compute-0 podman[382177]: 2025-09-30 19:00:56.393646511 +0000 UTC m=+0.024376363 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 19:00:56 compute-0 systemd[1]: Started libcrun container.
Sep 30 19:00:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:00:56 compute-0 podman[382177]: 2025-09-30 19:00:56.513717233 +0000 UTC m=+0.144447005 container init ea34b60757a4769f0398b66fd2ffa26dd9ef2a04b03f84d3fecf9e36dbcb741d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 19:00:56 compute-0 podman[382177]: 2025-09-30 19:00:56.521645719 +0000 UTC m=+0.152375471 container start ea34b60757a4769f0398b66fd2ffa26dd9ef2a04b03f84d3fecf9e36dbcb741d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_khorana, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Sep 30 19:00:56 compute-0 podman[382177]: 2025-09-30 19:00:56.524813561 +0000 UTC m=+0.155543383 container attach ea34b60757a4769f0398b66fd2ffa26dd9ef2a04b03f84d3fecf9e36dbcb741d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Sep 30 19:00:56 compute-0 crazy_khorana[382193]: 167 167
Sep 30 19:00:56 compute-0 systemd[1]: libpod-ea34b60757a4769f0398b66fd2ffa26dd9ef2a04b03f84d3fecf9e36dbcb741d.scope: Deactivated successfully.
Sep 30 19:00:56 compute-0 podman[382177]: 2025-09-30 19:00:56.529853111 +0000 UTC m=+0.160582893 container died ea34b60757a4769f0398b66fd2ffa26dd9ef2a04b03f84d3fecf9e36dbcb741d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_khorana, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Sep 30 19:00:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad17cb5805bb5bf42fa2c7825273b787397842204386934da54db9cf1407af46-merged.mount: Deactivated successfully.
Sep 30 19:00:56 compute-0 podman[382177]: 2025-09-30 19:00:56.578338788 +0000 UTC m=+0.209068540 container remove ea34b60757a4769f0398b66fd2ffa26dd9ef2a04b03f84d3fecf9e36dbcb741d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_khorana, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 19:00:56 compute-0 systemd[1]: libpod-conmon-ea34b60757a4769f0398b66fd2ffa26dd9ef2a04b03f84d3fecf9e36dbcb741d.scope: Deactivated successfully.
Sep 30 19:00:56 compute-0 podman[382217]: 2025-09-30 19:00:56.758881736 +0000 UTC m=+0.043021185 container create 429f9c28e90ed173ed10c6c2ea57e353639e5e7a173df4a21f1f5e04111e43c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default)
Sep 30 19:00:56 compute-0 systemd[1]: Started libpod-conmon-429f9c28e90ed173ed10c6c2ea57e353639e5e7a173df4a21f1f5e04111e43c2.scope.
Sep 30 19:00:56 compute-0 systemd[1]: Started libcrun container.
Sep 30 19:00:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0aec1574437af7b828b0996e346936baa6219d29f6889593f784e24e2a99fce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 19:00:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0aec1574437af7b828b0996e346936baa6219d29f6889593f784e24e2a99fce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 19:00:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0aec1574437af7b828b0996e346936baa6219d29f6889593f784e24e2a99fce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 19:00:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0aec1574437af7b828b0996e346936baa6219d29f6889593f784e24e2a99fce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 19:00:56 compute-0 podman[382217]: 2025-09-30 19:00:56.825790381 +0000 UTC m=+0.109929860 container init 429f9c28e90ed173ed10c6c2ea57e353639e5e7a173df4a21f1f5e04111e43c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_jemison, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Sep 30 19:00:56 compute-0 podman[382217]: 2025-09-30 19:00:56.833233593 +0000 UTC m=+0.117373052 container start 429f9c28e90ed173ed10c6c2ea57e353639e5e7a173df4a21f1f5e04111e43c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_jemison, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid)
Sep 30 19:00:56 compute-0 podman[382217]: 2025-09-30 19:00:56.740170441 +0000 UTC m=+0.024309900 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 19:00:56 compute-0 podman[382217]: 2025-09-30 19:00:56.840234275 +0000 UTC m=+0.124373724 container attach 429f9c28e90ed173ed10c6c2ea57e353639e5e7a173df4a21f1f5e04111e43c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_jemison, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid)
Sep 30 19:00:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:00:56.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:57 compute-0 lvm[382308]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 19:00:57 compute-0 lvm[382308]: VG ceph_vg0 finished
Sep 30 19:00:57 compute-0 thirsty_jemison[382234]: {}
Sep 30 19:00:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:00:57.477Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 19:00:57 compute-0 systemd[1]: libpod-429f9c28e90ed173ed10c6c2ea57e353639e5e7a173df4a21f1f5e04111e43c2.scope: Deactivated successfully.
Sep 30 19:00:57 compute-0 systemd[1]: libpod-429f9c28e90ed173ed10c6c2ea57e353639e5e7a173df4a21f1f5e04111e43c2.scope: Consumed 1.077s CPU time.
Sep 30 19:00:57 compute-0 podman[382217]: 2025-09-30 19:00:57.505303393 +0000 UTC m=+0.789442842 container died 429f9c28e90ed173ed10c6c2ea57e353639e5e7a173df4a21f1f5e04111e43c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_jemison, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 19:00:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0aec1574437af7b828b0996e346936baa6219d29f6889593f784e24e2a99fce-merged.mount: Deactivated successfully.
Sep 30 19:00:57 compute-0 podman[382217]: 2025-09-30 19:00:57.567412152 +0000 UTC m=+0.851551601 container remove 429f9c28e90ed173ed10c6c2ea57e353639e5e7a173df4a21f1f5e04111e43c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_jemison, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 19:00:57 compute-0 systemd[1]: libpod-conmon-429f9c28e90ed173ed10c6c2ea57e353639e5e7a173df4a21f1f5e04111e43c2.scope: Deactivated successfully.
Sep 30 19:00:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 19:00:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/108451082' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 19:00:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 19:00:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/108451082' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 19:00:57 compute-0 sudo[382110]: pam_unix(sudo:session): session closed for user root
Sep 30 19:00:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 19:00:57 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 19:00:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 19:00:57 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 19:00:57 compute-0 sudo[382326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 19:00:57 compute-0 sudo[382326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:00:57 compute-0 ceph-mon[73755]: pgmap v2486: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 610 B/s rd, 0 op/s
Sep 30 19:00:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/108451082' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 19:00:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/108451082' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 19:00:57 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 19:00:57 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 19:00:57 compute-0 sudo[382326]: pam_unix(sudo:session): session closed for user root
Sep 30 19:00:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:00:57.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:58 compute-0 nova_compute[265391]: 2025-09-30 19:00:58.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:00:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2487: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 610 B/s rd, 0 op/s
Sep 30 19:00:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:19:00:58] "GET /metrics HTTP/1.1" 200 46743 "" "Prometheus/2.51.0"
Sep 30 19:00:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:19:00:58] "GET /metrics HTTP/1.1" 200 46743 "" "Prometheus/2.51.0"
Sep 30 19:00:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:00:58.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:00:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:00:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:00:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:00:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:00:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:00:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:00:59.002Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:00:59 compute-0 nova_compute[265391]: 2025-09-30 19:00:59.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:00:59 compute-0 ceph-mon[73755]: pgmap v2487: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 610 B/s rd, 0 op/s
Sep 30 19:00:59 compute-0 podman[276673]: time="2025-09-30T19:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 19:00:59 compute-0 podman[276673]: @ - - [30/Sep/2025:19:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 19:00:59 compute-0 podman[276673]: @ - - [30/Sep/2025:19:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10322 "" "Go-http-client/1.1"
Sep 30 19:00:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:00:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:00:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:00:59.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2488: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 610 B/s rd, 0 op/s
Sep 30 19:01:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:01:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:01:00.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:01:01 compute-0 openstack_network_exporter[279566]: ERROR   19:01:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 19:01:01 compute-0 openstack_network_exporter[279566]: ERROR   19:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 19:01:01 compute-0 openstack_network_exporter[279566]: ERROR   19:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 19:01:01 compute-0 openstack_network_exporter[279566]: ERROR   19:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 19:01:01 compute-0 openstack_network_exporter[279566]: ERROR   19:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 19:01:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:01:01 compute-0 CROND[382356]: (root) CMD (run-parts /etc/cron.hourly)
Sep 30 19:01:01 compute-0 run-parts[382359]: (/etc/cron.hourly) starting 0anacron
Sep 30 19:01:01 compute-0 run-parts[382365]: (/etc/cron.hourly) finished 0anacron
Sep 30 19:01:01 compute-0 CROND[382355]: (root) CMDEND (run-parts /etc/cron.hourly)
Sep 30 19:01:01 compute-0 ceph-mon[73755]: pgmap v2488: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 610 B/s rd, 0 op/s
Sep 30 19:01:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:01:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:01:01.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:01:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2489: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 610 B/s rd, 0 op/s
Sep 30 19:01:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:01:02.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:03 compute-0 nova_compute[265391]: 2025-09-30 19:01:03.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:01:03 compute-0 podman[382367]: 2025-09-30 19:01:03.527115857 +0000 UTC m=+0.064363129 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Sep 30 19:01:03 compute-0 podman[382369]: 2025-09-30 19:01:03.535162236 +0000 UTC m=+0.064834402 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Sep 30 19:01:03 compute-0 podman[382368]: 2025-09-30 19:01:03.59127168 +0000 UTC m=+0.124401175 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250930, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4)
Sep 30 19:01:03 compute-0 ceph-mon[73755]: pgmap v2489: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 610 B/s rd, 0 op/s
Sep 30 19:01:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:01:03.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:03 compute-0 nova_compute[265391]: 2025-09-30 19:01:03.937 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:01:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:01:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:01:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:01:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:01:04.000Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:01:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:01:04 compute-0 nova_compute[265391]: 2025-09-30 19:01:04.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:01:04 compute-0 nova_compute[265391]: 2025-09-30 19:01:04.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:01:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2490: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:01:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:01:04.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:01:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:01:05.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:01:05 compute-0 ceph-mon[73755]: pgmap v2490: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:01:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2491: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:01:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:01:06 compute-0 sudo[382439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 19:01:06 compute-0 sudo[382439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:01:06 compute-0 sudo[382439]: pam_unix(sudo:session): session closed for user root
Sep 30 19:01:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:01:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:01:06.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:01:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 19:01:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 19:01:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 19:01:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 19:01:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 19:01:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 19:01:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 19:01:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 19:01:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:01:07.479Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:01:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:01:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:01:07.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:01:07 compute-0 ceph-mon[73755]: pgmap v2491: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:01:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 19:01:08 compute-0 nova_compute[265391]: 2025-09-30 19:01:08.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_19:01:08
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', '.nfs', 'images', 'backups', 'default.rgw.meta', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', '.mgr']
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2492: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 19:01:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:19:01:08] "GET /metrics HTTP/1.1" 200 46742 "" "Prometheus/2.51.0"
Sep 30 19:01:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:19:01:08] "GET /metrics HTTP/1.1" 200 46742 "" "Prometheus/2.51.0"
Sep 30 19:01:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:01:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:01:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:01:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:01:09.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:01:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:01:09.003Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:01:09 compute-0 ceph-mgr[74051]: [devicehealth INFO root] Check health
Sep 30 19:01:09 compute-0 nova_compute[265391]: 2025-09-30 19:01:09.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:01:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:01:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:01:09.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:01:09 compute-0 ceph-mon[73755]: pgmap v2492: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:01:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2493: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:01:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:01:11.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:11 compute-0 nova_compute[265391]: 2025-09-30 19:01:11.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:01:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:01:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:01:11.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:11 compute-0 ceph-mon[73755]: pgmap v2493: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:01:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2494: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:01:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:01:13.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:13 compute-0 nova_compute[265391]: 2025-09-30 19:01:13.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:01:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:01:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:01:13.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:01:13 compute-0 ceph-mon[73755]: pgmap v2494: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:01:13 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3842974402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 19:01:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:01:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:01:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:01:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:01:14.001Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:01:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:01:14 compute-0 nova_compute[265391]: 2025-09-30 19:01:14.423 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:01:14 compute-0 nova_compute[265391]: 2025-09-30 19:01:14.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:01:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2495: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:01:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:01:15.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:15 compute-0 nova_compute[265391]: 2025-09-30 19:01:15.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:01:15 compute-0 nova_compute[265391]: 2025-09-30 19:01:15.429 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:01:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:01:15.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:15 compute-0 ceph-mon[73755]: pgmap v2495: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:01:15 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2711137893' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 19:01:15 compute-0 nova_compute[265391]: 2025-09-30 19:01:15.946 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 19:01:15 compute-0 nova_compute[265391]: 2025-09-30 19:01:15.946 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 19:01:15 compute-0 nova_compute[265391]: 2025-09-30 19:01:15.946 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 19:01:15 compute-0 nova_compute[265391]: 2025-09-30 19:01:15.947 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 19:01:15 compute-0 nova_compute[265391]: 2025-09-30 19:01:15.947 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 19:01:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 19:01:16 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1333483285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 19:01:16 compute-0 nova_compute[265391]: 2025-09-30 19:01:16.437 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 19:01:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2496: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:01:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:01:16 compute-0 nova_compute[265391]: 2025-09-30 19:01:16.592 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 19:01:16 compute-0 nova_compute[265391]: 2025-09-30 19:01:16.593 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 19:01:16 compute-0 nova_compute[265391]: 2025-09-30 19:01:16.612 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.018s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 19:01:16 compute-0 nova_compute[265391]: 2025-09-30 19:01:16.612 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4260MB free_disk=39.992183685302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 19:01:16 compute-0 nova_compute[265391]: 2025-09-30 19:01:16.612 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 19:01:16 compute-0 nova_compute[265391]: 2025-09-30 19:01:16.613 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 19:01:16 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1333483285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 19:01:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:01:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:01:17.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:01:17 compute-0 podman[382498]: 2025-09-30 19:01:17.312757345 +0000 UTC m=+0.088647218 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2)
Sep 30 19:01:17 compute-0 podman[382499]: 2025-09-30 19:01:17.321566773 +0000 UTC m=+0.090720612 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers)
Sep 30 19:01:17 compute-0 podman[382497]: 2025-09-30 19:01:17.335858144 +0000 UTC m=+0.117003444 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 19:01:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:01:17.481Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:01:17 compute-0 nova_compute[265391]: 2025-09-30 19:01:17.659 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 19:01:17 compute-0 nova_compute[265391]: 2025-09-30 19:01:17.660 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 19:01:16 up  2:04,  0 user,  load average: 0.23, 0.59, 0.70\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 19:01:17 compute-0 nova_compute[265391]: 2025-09-30 19:01:17.686 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 19:01:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:01:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:01:17.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:01:17 compute-0 ceph-mon[73755]: pgmap v2496: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:01:18 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 19:01:18 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1862913231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 19:01:18 compute-0 nova_compute[265391]: 2025-09-30 19:01:18.156 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 19:01:18 compute-0 nova_compute[265391]: 2025-09-30 19:01:18.161 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 19:01:18 compute-0 nova_compute[265391]: 2025-09-30 19:01:18.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:01:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2497: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:01:18 compute-0 nova_compute[265391]: 2025-09-30 19:01:18.676 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 19:01:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:19:01:18] "GET /metrics HTTP/1.1" 200 46742 "" "Prometheus/2.51.0"
Sep 30 19:01:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:19:01:18] "GET /metrics HTTP/1.1" 200 46742 "" "Prometheus/2.51.0"
Sep 30 19:01:18 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1862913231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 19:01:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:01:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:01:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:01:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:01:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:01:19.003Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 19:01:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:01:19.004Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 19:01:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:01:19.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:19 compute-0 nova_compute[265391]: 2025-09-30 19:01:19.188 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 19:01:19 compute-0 nova_compute[265391]: 2025-09-30 19:01:19.189 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.576s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 19:01:19 compute-0 nova_compute[265391]: 2025-09-30 19:01:19.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:01:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:01:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:01:19.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:01:19 compute-0 ceph-mon[73755]: pgmap v2497: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:01:20 compute-0 nova_compute[265391]: 2025-09-30 19:01:20.188 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:01:20 compute-0 nova_compute[265391]: 2025-09-30 19:01:20.188 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:01:20 compute-0 nova_compute[265391]: 2025-09-30 19:01:20.189 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 19:01:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2498: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:01:20 compute-0 ceph-mon[73755]: pgmap v2498: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:01:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:01:21.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:01:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:01:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:01:21.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:01:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 19:01:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 19:01:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 19:01:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2499: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:01:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:01:23.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:23 compute-0 nova_compute[265391]: 2025-09-30 19:01:23.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:01:23 compute-0 ceph-mon[73755]: pgmap v2499: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:01:23 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=cleanup t=2025-09-30T19:01:23.530470627Z level=info msg="Completed cleanup jobs" duration=33.456187ms
Sep 30 19:01:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:01:23.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:01:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:01:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:01:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:01:24.002Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:01:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:01:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=grafana.update.checker t=2025-09-30T19:01:24.025842766Z level=info msg="Update check succeeded" duration=457.97303ms
Sep 30 19:01:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-grafana-compute-0[104521]: logger=plugins.update.checker t=2025-09-30T19:01:24.028258269Z level=info msg="Update check succeeded" duration=465.021803ms
Sep 30 19:01:24 compute-0 nova_compute[265391]: 2025-09-30 19:01:24.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:01:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2500: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:01:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:01:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:01:25.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:01:25 compute-0 ceph-mon[73755]: pgmap v2500: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:01:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:01:25.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2501: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:01:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:01:26 compute-0 sudo[382591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 19:01:26 compute-0 sudo[382591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:01:26 compute-0 sudo[382591]: pam_unix(sudo:session): session closed for user root
Sep 30 19:01:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:01:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:01:27.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:01:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:01:27.482Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:01:27 compute-0 ceph-mon[73755]: pgmap v2501: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:01:27 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Sep 30 19:01:27 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:01:27.585695) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 19:01:27 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Sep 30 19:01:27 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258887585759, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 2087, "num_deletes": 251, "total_data_size": 3924418, "memory_usage": 3980256, "flush_reason": "Manual Compaction"}
Sep 30 19:01:27 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Sep 30 19:01:27 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258887610039, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 3797315, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59746, "largest_seqno": 61832, "table_properties": {"data_size": 3788050, "index_size": 5758, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19360, "raw_average_key_size": 20, "raw_value_size": 3769368, "raw_average_value_size": 3951, "num_data_blocks": 252, "num_entries": 954, "num_filter_entries": 954, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759258677, "oldest_key_time": 1759258677, "file_creation_time": 1759258887, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Sep 30 19:01:27 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 24387 microseconds, and 14691 cpu microseconds.
Sep 30 19:01:27 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 19:01:27 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:01:27.610087) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 3797315 bytes OK
Sep 30 19:01:27 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:01:27.610109) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Sep 30 19:01:27 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:01:27.611902) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Sep 30 19:01:27 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:01:27.611916) EVENT_LOG_v1 {"time_micros": 1759258887611911, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 19:01:27 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:01:27.611933) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 19:01:27 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 3915834, prev total WAL file size 3915834, number of live WAL files 2.
Sep 30 19:01:27 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 19:01:27 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:01:27.613100) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Sep 30 19:01:27 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 19:01:27 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(3708KB)], [140(12MB)]
Sep 30 19:01:27 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258887613131, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 16541480, "oldest_snapshot_seqno": -1}
Sep 30 19:01:27 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 8688 keys, 14492354 bytes, temperature: kUnknown
Sep 30 19:01:27 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258887676087, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 14492354, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14437933, "index_size": 31588, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21765, "raw_key_size": 229541, "raw_average_key_size": 26, "raw_value_size": 14286557, "raw_average_value_size": 1644, "num_data_blocks": 1225, "num_entries": 8688, "num_filter_entries": 8688, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759258887, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Sep 30 19:01:27 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 19:01:27 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:01:27.676304) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 14492354 bytes
Sep 30 19:01:27 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:01:27.677386) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 262.5 rd, 230.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 12.2 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(8.2) write-amplify(3.8) OK, records in: 9206, records dropped: 518 output_compression: NoCompression
Sep 30 19:01:27 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:01:27.677401) EVENT_LOG_v1 {"time_micros": 1759258887677394, "job": 86, "event": "compaction_finished", "compaction_time_micros": 63023, "compaction_time_cpu_micros": 28084, "output_level": 6, "num_output_files": 1, "total_output_size": 14492354, "num_input_records": 9206, "num_output_records": 8688, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 19:01:27 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 19:01:27 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258887678017, "job": 86, "event": "table_file_deletion", "file_number": 142}
Sep 30 19:01:27 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 19:01:27 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258887680421, "job": 86, "event": "table_file_deletion", "file_number": 140}
Sep 30 19:01:27 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:01:27.613040) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 19:01:27 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:01:27.680462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 19:01:27 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:01:27.680466) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 19:01:27 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:01:27.680468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 19:01:27 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:01:27.680469) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 19:01:27 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:01:27.680470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 19:01:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:01:27.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:28 compute-0 nova_compute[265391]: 2025-09-30 19:01:28.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:01:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2502: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:01:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:19:01:28] "GET /metrics HTTP/1.1" 200 46744 "" "Prometheus/2.51.0"
Sep 30 19:01:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:19:01:28] "GET /metrics HTTP/1.1" 200 46744 "" "Prometheus/2.51.0"
Sep 30 19:01:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:01:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:01:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:01:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:01:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:01:29.005Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:01:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:01:29.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:29 compute-0 nova_compute[265391]: 2025-09-30 19:01:29.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:01:29 compute-0 ceph-mon[73755]: pgmap v2502: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:01:29 compute-0 podman[276673]: time="2025-09-30T19:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 19:01:29 compute-0 podman[276673]: @ - - [30/Sep/2025:19:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 19:01:29 compute-0 podman[276673]: @ - - [30/Sep/2025:19:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10322 "" "Go-http-client/1.1"
Sep 30 19:01:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:01:29.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2503: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:01:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:01:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:01:31.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:01:31 compute-0 openstack_network_exporter[279566]: ERROR   19:01:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 19:01:31 compute-0 openstack_network_exporter[279566]: ERROR   19:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 19:01:31 compute-0 openstack_network_exporter[279566]: ERROR   19:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 19:01:31 compute-0 openstack_network_exporter[279566]: ERROR   19:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 19:01:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 19:01:31 compute-0 openstack_network_exporter[279566]: ERROR   19:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 19:01:31 compute-0 openstack_network_exporter[279566]: 
Sep 30 19:01:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:01:31 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Sep 30 19:01:31 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:01:31.535046) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 19:01:31 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Sep 30 19:01:31 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258891535139, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 276, "num_deletes": 250, "total_data_size": 78390, "memory_usage": 84552, "flush_reason": "Manual Compaction"}
Sep 30 19:01:31 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Sep 30 19:01:31 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258891538039, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 77444, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61833, "largest_seqno": 62108, "table_properties": {"data_size": 75515, "index_size": 156, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5343, "raw_average_key_size": 20, "raw_value_size": 71812, "raw_average_value_size": 270, "num_data_blocks": 7, "num_entries": 265, "num_filter_entries": 265, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759258888, "oldest_key_time": 1759258888, "file_creation_time": 1759258891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Sep 30 19:01:31 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 3082 microseconds, and 837 cpu microseconds.
Sep 30 19:01:31 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 19:01:31 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:01:31.538128) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 77444 bytes OK
Sep 30 19:01:31 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:01:31.538171) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Sep 30 19:01:31 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:01:31.539601) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Sep 30 19:01:31 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:01:31.539614) EVENT_LOG_v1 {"time_micros": 1759258891539610, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 19:01:31 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:01:31.539628) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 19:01:31 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 76304, prev total WAL file size 76304, number of live WAL files 2.
Sep 30 19:01:31 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 19:01:31 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:01:31.540211) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323537' seq:72057594037927935, type:22 .. '6D6772737461740032353038' seq:0, type:0; will stop at (end)
Sep 30 19:01:31 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 19:01:31 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(75KB)], [143(13MB)]
Sep 30 19:01:31 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258891540268, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 14569798, "oldest_snapshot_seqno": -1}
Sep 30 19:01:31 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 8446 keys, 10747715 bytes, temperature: kUnknown
Sep 30 19:01:31 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258891589971, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 10747715, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10699785, "index_size": 25737, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21125, "raw_key_size": 224699, "raw_average_key_size": 26, "raw_value_size": 10557433, "raw_average_value_size": 1249, "num_data_blocks": 980, "num_entries": 8446, "num_filter_entries": 8446, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759258891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Sep 30 19:01:31 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 19:01:31 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:01:31.590237) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 10747715 bytes
Sep 30 19:01:31 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:01:31.592082) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 293.6 rd, 216.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 13.8 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(326.9) write-amplify(138.8) OK, records in: 8953, records dropped: 507 output_compression: NoCompression
Sep 30 19:01:31 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:01:31.592104) EVENT_LOG_v1 {"time_micros": 1759258891592094, "job": 88, "event": "compaction_finished", "compaction_time_micros": 49633, "compaction_time_cpu_micros": 25312, "output_level": 6, "num_output_files": 1, "total_output_size": 10747715, "num_input_records": 8953, "num_output_records": 8446, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 19:01:31 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 19:01:31 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258891592238, "job": 88, "event": "table_file_deletion", "file_number": 145}
Sep 30 19:01:31 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 19:01:31 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759258891595782, "job": 88, "event": "table_file_deletion", "file_number": 143}
Sep 30 19:01:31 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:01:31.540144) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 19:01:31 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:01:31.595838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 19:01:31 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:01:31.595845) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 19:01:31 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:01:31.595848) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 19:01:31 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:01:31.595851) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 19:01:31 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:01:31.595854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 19:01:31 compute-0 ceph-mon[73755]: pgmap v2503: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:01:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:01:31.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2504: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:01:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:01:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:01:33.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:01:33 compute-0 nova_compute[265391]: 2025-09-30 19:01:33.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:01:33 compute-0 ceph-mon[73755]: pgmap v2504: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:01:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:01:33.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:01:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:01:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:01:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:01:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:01:34.003Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:01:34 compute-0 nova_compute[265391]: 2025-09-30 19:01:34.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:01:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2505: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:01:34 compute-0 podman[382626]: 2025-09-30 19:01:34.520742787 +0000 UTC m=+0.052316027 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Sep 30 19:01:34 compute-0 podman[382625]: 2025-09-30 19:01:34.557369926 +0000 UTC m=+0.089629184 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Sep 30 19:01:34 compute-0 podman[382624]: 2025-09-30 19:01:34.557424267 +0000 UTC m=+0.091905913 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 19:01:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:01:35.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:35 compute-0 ceph-mon[73755]: pgmap v2505: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:01:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:01:35.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2506: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:01:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:01:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 19:01:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1261945660' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 19:01:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 19:01:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1261945660' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 19:01:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1261945660' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 19:01:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1261945660' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 19:01:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:01:37.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 19:01:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 19:01:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 19:01:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 19:01:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 19:01:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 19:01:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 19:01:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 19:01:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:01:37.483Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:01:37 compute-0 ceph-mon[73755]: pgmap v2506: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:01:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 19:01:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:01:37.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:38 compute-0 nova_compute[265391]: 2025-09-30 19:01:38.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:01:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2507: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:01:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:19:01:38] "GET /metrics HTTP/1.1" 200 46745 "" "Prometheus/2.51.0"
Sep 30 19:01:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:19:01:38] "GET /metrics HTTP/1.1" 200 46745 "" "Prometheus/2.51.0"
Sep 30 19:01:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:01:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:01:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:01:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:01:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:01:39.006Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:01:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:01:39.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:39 compute-0 nova_compute[265391]: 2025-09-30 19:01:39.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:01:39 compute-0 ceph-mon[73755]: pgmap v2507: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:01:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:01:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:01:39.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:01:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2508: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:01:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:01:41.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:01:41 compute-0 ceph-mon[73755]: pgmap v2508: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:01:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:01:41.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2509: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:01:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:01:43.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:43 compute-0 nova_compute[265391]: 2025-09-30 19:01:43.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:01:43 compute-0 ceph-mon[73755]: pgmap v2509: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:01:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:01:43.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:01:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:01:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:01:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:01:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:01:44.004Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:01:44 compute-0 nova_compute[265391]: 2025-09-30 19:01:44.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:01:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2510: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:01:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:01:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:01:45.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:01:45 compute-0 ceph-mon[73755]: pgmap v2510: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:01:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:01:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:01:45.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:01:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2511: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:01:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:01:46 compute-0 sudo[382699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 19:01:46 compute-0 sudo[382699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:01:46 compute-0 sudo[382699]: pam_unix(sudo:session): session closed for user root
Sep 30 19:01:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:01:47.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:01:47.484Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:01:47 compute-0 podman[382724]: 2025-09-30 19:01:47.543569666 +0000 UTC m=+0.083092565 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, container_name=multipathd, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Sep 30 19:01:47 compute-0 podman[382726]: 2025-09-30 19:01:47.547514908 +0000 UTC m=+0.077670264 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=edpm, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal, vcs-type=git, version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Sep 30 19:01:47 compute-0 podman[382725]: 2025-09-30 19:01:47.552283382 +0000 UTC m=+0.075676703 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20250930, tcib_managed=true, container_name=iscsid)
Sep 30 19:01:47 compute-0 ceph-mon[73755]: pgmap v2511: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:01:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:01:47.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:48 compute-0 nova_compute[265391]: 2025-09-30 19:01:48.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:01:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2512: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:01:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:19:01:48] "GET /metrics HTTP/1.1" 200 46745 "" "Prometheus/2.51.0"
Sep 30 19:01:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:19:01:48] "GET /metrics HTTP/1.1" 200 46745 "" "Prometheus/2.51.0"
Sep 30 19:01:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:01:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:01:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:01:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:01:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:01:49.006Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:01:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 19:01:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:01:49.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 19:01:49 compute-0 nova_compute[265391]: 2025-09-30 19:01:49.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:01:49 compute-0 ceph-mon[73755]: pgmap v2512: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:01:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:01:49.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2513: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:01:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:01:51.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:01:51 compute-0 ceph-mon[73755]: pgmap v2513: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:01:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:01:51.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 19:01:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 19:01:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2514: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:01:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 19:01:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:01:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:01:53.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:01:53 compute-0 nova_compute[265391]: 2025-09-30 19:01:53.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:01:53 compute-0 ceph-mon[73755]: pgmap v2514: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:01:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:01:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:01:53.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:01:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:01:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:01:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:01:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:01:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:01:54.005Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:01:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 19:01:54.366 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 19:01:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 19:01:54.366 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 19:01:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 19:01:54.366 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 19:01:54 compute-0 nova_compute[265391]: 2025-09-30 19:01:54.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:01:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2515: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:01:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:01:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:01:55.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:01:55 compute-0 ceph-mon[73755]: pgmap v2515: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:01:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:01:55.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2516: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:01:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:01:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:01:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:01:57.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:01:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:01:57.485Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:01:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 19:01:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1893448171' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 19:01:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 19:01:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1893448171' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 19:01:57 compute-0 ceph-mon[73755]: pgmap v2516: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:01:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1893448171' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 19:01:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1893448171' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 19:01:57 compute-0 sudo[382792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 19:01:57 compute-0 sudo[382792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:01:57 compute-0 sudo[382792]: pam_unix(sudo:session): session closed for user root
Sep 30 19:01:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:01:57.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:57 compute-0 sudo[382817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 19:01:57 compute-0 sudo[382817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:01:58 compute-0 nova_compute[265391]: 2025-09-30 19:01:58.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:01:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2517: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:01:58 compute-0 sudo[382817]: pam_unix(sudo:session): session closed for user root
Sep 30 19:01:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Sep 30 19:01:58 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Sep 30 19:01:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:19:01:58] "GET /metrics HTTP/1.1" 200 46748 "" "Prometheus/2.51.0"
Sep 30 19:01:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:19:01:58] "GET /metrics HTTP/1.1" 200 46748 "" "Prometheus/2.51.0"
Sep 30 19:01:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 19:01:58 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 19:01:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 19:01:58 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 19:01:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 19:01:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2518: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 892 B/s rd, 0 op/s
Sep 30 19:01:58 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 19:01:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 19:01:58 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 19:01:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 19:01:58 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 19:01:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 19:01:58 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 19:01:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 19:01:58 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 19:01:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Sep 30 19:01:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 19:01:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 19:01:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 19:01:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 19:01:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 19:01:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 19:01:58 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 19:01:58 compute-0 sudo[382872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 19:01:58 compute-0 sudo[382872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:01:58 compute-0 sudo[382872]: pam_unix(sudo:session): session closed for user root
Sep 30 19:01:58 compute-0 sudo[382897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 19:01:58 compute-0 sudo[382897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:01:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:01:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:01:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:01:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:01:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:01:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:01:59.007Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:01:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:01:59.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:59 compute-0 podman[382962]: 2025-09-30 19:01:59.34573365 +0000 UTC m=+0.019907677 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 19:01:59 compute-0 nova_compute[265391]: 2025-09-30 19:01:59.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:01:59 compute-0 podman[382962]: 2025-09-30 19:01:59.564792637 +0000 UTC m=+0.238925173 container create 4fc4a4546c8cabee50632a633b1e1475572cc6e5f994bf979ea8eb7419100771 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_hellman, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 19:01:59 compute-0 systemd[1]: Started libpod-conmon-4fc4a4546c8cabee50632a633b1e1475572cc6e5f994bf979ea8eb7419100771.scope.
Sep 30 19:01:59 compute-0 systemd[1]: Started libcrun container.
Sep 30 19:01:59 compute-0 podman[382962]: 2025-09-30 19:01:59.658391543 +0000 UTC m=+0.332524049 container init 4fc4a4546c8cabee50632a633b1e1475572cc6e5f994bf979ea8eb7419100771 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 19:01:59 compute-0 podman[382962]: 2025-09-30 19:01:59.664684526 +0000 UTC m=+0.338817032 container start 4fc4a4546c8cabee50632a633b1e1475572cc6e5f994bf979ea8eb7419100771 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_hellman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 19:01:59 compute-0 podman[382962]: 2025-09-30 19:01:59.66752912 +0000 UTC m=+0.341661626 container attach 4fc4a4546c8cabee50632a633b1e1475572cc6e5f994bf979ea8eb7419100771 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_hellman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 19:01:59 compute-0 admiring_hellman[382979]: 167 167
Sep 30 19:01:59 compute-0 systemd[1]: libpod-4fc4a4546c8cabee50632a633b1e1475572cc6e5f994bf979ea8eb7419100771.scope: Deactivated successfully.
Sep 30 19:01:59 compute-0 podman[382962]: 2025-09-30 19:01:59.67022681 +0000 UTC m=+0.344359316 container died 4fc4a4546c8cabee50632a633b1e1475572cc6e5f994bf979ea8eb7419100771 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Sep 30 19:01:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-0097537e2ce273d54817576aa755c35fda8b8b8c12e0bf579ae5d62383a2b6e2-merged.mount: Deactivated successfully.
Sep 30 19:01:59 compute-0 podman[382962]: 2025-09-30 19:01:59.708920613 +0000 UTC m=+0.383053119 container remove 4fc4a4546c8cabee50632a633b1e1475572cc6e5f994bf979ea8eb7419100771 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Sep 30 19:01:59 compute-0 systemd[1]: libpod-conmon-4fc4a4546c8cabee50632a633b1e1475572cc6e5f994bf979ea8eb7419100771.scope: Deactivated successfully.
Sep 30 19:01:59 compute-0 podman[276673]: time="2025-09-30T19:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 19:01:59 compute-0 podman[276673]: @ - - [30/Sep/2025:19:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 19:01:59 compute-0 podman[276673]: @ - - [30/Sep/2025:19:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10313 "" "Go-http-client/1.1"
Sep 30 19:01:59 compute-0 podman[383002]: 2025-09-30 19:01:59.880440919 +0000 UTC m=+0.043940860 container create 7d98954442798c7120638b67cfa04ab35bef6464108ab28d0053c6fab710157a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_feynman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 19:01:59 compute-0 ceph-mon[73755]: pgmap v2517: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:01:59 compute-0 ceph-mon[73755]: pgmap v2518: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 892 B/s rd, 0 op/s
Sep 30 19:01:59 compute-0 systemd[1]: Started libpod-conmon-7d98954442798c7120638b67cfa04ab35bef6464108ab28d0053c6fab710157a.scope.
Sep 30 19:01:59 compute-0 systemd[1]: Started libcrun container.
Sep 30 19:01:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3de97733f6ff611cacd46763a0572ee4abe4fbc1219c9e774f09b2bdd0fe29d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 19:01:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3de97733f6ff611cacd46763a0572ee4abe4fbc1219c9e774f09b2bdd0fe29d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 19:01:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3de97733f6ff611cacd46763a0572ee4abe4fbc1219c9e774f09b2bdd0fe29d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 19:01:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3de97733f6ff611cacd46763a0572ee4abe4fbc1219c9e774f09b2bdd0fe29d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 19:01:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3de97733f6ff611cacd46763a0572ee4abe4fbc1219c9e774f09b2bdd0fe29d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 19:01:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:01:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:01:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:01:59.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:01:59 compute-0 podman[383002]: 2025-09-30 19:01:59.859623039 +0000 UTC m=+0.023123000 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 19:01:59 compute-0 podman[383002]: 2025-09-30 19:01:59.970043131 +0000 UTC m=+0.133543122 container init 7d98954442798c7120638b67cfa04ab35bef6464108ab28d0053c6fab710157a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_feynman, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 19:01:59 compute-0 podman[383002]: 2025-09-30 19:01:59.976054307 +0000 UTC m=+0.139554258 container start 7d98954442798c7120638b67cfa04ab35bef6464108ab28d0053c6fab710157a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_feynman, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 19:01:59 compute-0 podman[383002]: 2025-09-30 19:01:59.979288261 +0000 UTC m=+0.142788232 container attach 7d98954442798c7120638b67cfa04ab35bef6464108ab28d0053c6fab710157a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 19:02:00 compute-0 hopeful_feynman[383019]: --> passed data devices: 0 physical, 1 LVM
Sep 30 19:02:00 compute-0 hopeful_feynman[383019]: --> All data devices are unavailable
Sep 30 19:02:00 compute-0 systemd[1]: libpod-7d98954442798c7120638b67cfa04ab35bef6464108ab28d0053c6fab710157a.scope: Deactivated successfully.
Sep 30 19:02:00 compute-0 podman[383002]: 2025-09-30 19:02:00.331617422 +0000 UTC m=+0.495117383 container died 7d98954442798c7120638b67cfa04ab35bef6464108ab28d0053c6fab710157a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_feynman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Sep 30 19:02:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3de97733f6ff611cacd46763a0572ee4abe4fbc1219c9e774f09b2bdd0fe29d-merged.mount: Deactivated successfully.
Sep 30 19:02:00 compute-0 podman[383002]: 2025-09-30 19:02:00.375013117 +0000 UTC m=+0.538513058 container remove 7d98954442798c7120638b67cfa04ab35bef6464108ab28d0053c6fab710157a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 19:02:00 compute-0 systemd[1]: libpod-conmon-7d98954442798c7120638b67cfa04ab35bef6464108ab28d0053c6fab710157a.scope: Deactivated successfully.
Sep 30 19:02:00 compute-0 sudo[382897]: pam_unix(sudo:session): session closed for user root
Sep 30 19:02:00 compute-0 sudo[383045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 19:02:00 compute-0 sudo[383045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:02:00 compute-0 sudo[383045]: pam_unix(sudo:session): session closed for user root
Sep 30 19:02:00 compute-0 sudo[383070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 19:02:00 compute-0 sudo[383070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:02:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2519: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 595 B/s rd, 0 op/s
Sep 30 19:02:00 compute-0 podman[383135]: 2025-09-30 19:02:00.930401162 +0000 UTC m=+0.043646212 container create 9de71235259cb290e96e406deac47bf80a949e4958310e1a1308d19c84387d28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_merkle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 19:02:00 compute-0 systemd[1]: Started libpod-conmon-9de71235259cb290e96e406deac47bf80a949e4958310e1a1308d19c84387d28.scope.
Sep 30 19:02:00 compute-0 systemd[1]: Started libcrun container.
Sep 30 19:02:00 compute-0 podman[383135]: 2025-09-30 19:02:00.984912195 +0000 UTC m=+0.098157245 container init 9de71235259cb290e96e406deac47bf80a949e4958310e1a1308d19c84387d28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_merkle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Sep 30 19:02:00 compute-0 podman[383135]: 2025-09-30 19:02:00.990398377 +0000 UTC m=+0.103643417 container start 9de71235259cb290e96e406deac47bf80a949e4958310e1a1308d19c84387d28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_merkle, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 19:02:00 compute-0 confident_merkle[383151]: 167 167
Sep 30 19:02:00 compute-0 systemd[1]: libpod-9de71235259cb290e96e406deac47bf80a949e4958310e1a1308d19c84387d28.scope: Deactivated successfully.
Sep 30 19:02:00 compute-0 podman[383135]: 2025-09-30 19:02:00.995876969 +0000 UTC m=+0.109122039 container attach 9de71235259cb290e96e406deac47bf80a949e4958310e1a1308d19c84387d28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_merkle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Sep 30 19:02:00 compute-0 podman[383135]: 2025-09-30 19:02:00.996326581 +0000 UTC m=+0.109571621 container died 9de71235259cb290e96e406deac47bf80a949e4958310e1a1308d19c84387d28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_merkle, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Sep 30 19:02:01 compute-0 podman[383135]: 2025-09-30 19:02:00.91259556 +0000 UTC m=+0.025840630 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 19:02:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a7c8e6d6df302cf9ea0771c933ce383b8f382fc0ba9d02fd6ec06a5da86ffec-merged.mount: Deactivated successfully.
Sep 30 19:02:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:02:01.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:01 compute-0 podman[383135]: 2025-09-30 19:02:01.062375692 +0000 UTC m=+0.175620732 container remove 9de71235259cb290e96e406deac47bf80a949e4958310e1a1308d19c84387d28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_merkle, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Sep 30 19:02:01 compute-0 systemd[1]: libpod-conmon-9de71235259cb290e96e406deac47bf80a949e4958310e1a1308d19c84387d28.scope: Deactivated successfully.
Sep 30 19:02:01 compute-0 podman[383174]: 2025-09-30 19:02:01.210794938 +0000 UTC m=+0.037807941 container create 3a639f75119d88d0addcdf588c4fc8921b6d03d8375bb0ccd9e49143d927aa59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_feistel, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 19:02:01 compute-0 systemd[1]: Started libpod-conmon-3a639f75119d88d0addcdf588c4fc8921b6d03d8375bb0ccd9e49143d927aa59.scope.
Sep 30 19:02:01 compute-0 systemd[1]: Started libcrun container.
Sep 30 19:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c97ff40819cdbfa1f4fdba1f528e16aff6d2cd2dbb8d850e16c452d1f297013d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 19:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c97ff40819cdbfa1f4fdba1f528e16aff6d2cd2dbb8d850e16c452d1f297013d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 19:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c97ff40819cdbfa1f4fdba1f528e16aff6d2cd2dbb8d850e16c452d1f297013d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 19:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c97ff40819cdbfa1f4fdba1f528e16aff6d2cd2dbb8d850e16c452d1f297013d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 19:02:01 compute-0 podman[383174]: 2025-09-30 19:02:01.284707734 +0000 UTC m=+0.111720757 container init 3a639f75119d88d0addcdf588c4fc8921b6d03d8375bb0ccd9e49143d927aa59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_feistel, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Sep 30 19:02:01 compute-0 podman[383174]: 2025-09-30 19:02:01.194404193 +0000 UTC m=+0.021417216 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 19:02:01 compute-0 podman[383174]: 2025-09-30 19:02:01.290395971 +0000 UTC m=+0.117408974 container start 3a639f75119d88d0addcdf588c4fc8921b6d03d8375bb0ccd9e49143d927aa59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_feistel, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Sep 30 19:02:01 compute-0 podman[383174]: 2025-09-30 19:02:01.293694387 +0000 UTC m=+0.120707390 container attach 3a639f75119d88d0addcdf588c4fc8921b6d03d8375bb0ccd9e49143d927aa59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 19:02:01 compute-0 openstack_network_exporter[279566]: ERROR   19:02:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 19:02:01 compute-0 openstack_network_exporter[279566]: ERROR   19:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 19:02:01 compute-0 openstack_network_exporter[279566]: ERROR   19:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 19:02:01 compute-0 openstack_network_exporter[279566]: ERROR   19:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 19:02:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 19:02:01 compute-0 openstack_network_exporter[279566]: ERROR   19:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 19:02:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 19:02:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:02:01 compute-0 adoring_feistel[383191]: {
Sep 30 19:02:01 compute-0 adoring_feistel[383191]:     "0": [
Sep 30 19:02:01 compute-0 adoring_feistel[383191]:         {
Sep 30 19:02:01 compute-0 adoring_feistel[383191]:             "devices": [
Sep 30 19:02:01 compute-0 adoring_feistel[383191]:                 "/dev/loop3"
Sep 30 19:02:01 compute-0 adoring_feistel[383191]:             ],
Sep 30 19:02:01 compute-0 adoring_feistel[383191]:             "lv_name": "ceph_lv0",
Sep 30 19:02:01 compute-0 adoring_feistel[383191]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 19:02:01 compute-0 adoring_feistel[383191]:             "lv_size": "21470642176",
Sep 30 19:02:01 compute-0 adoring_feistel[383191]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 19:02:01 compute-0 adoring_feistel[383191]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 19:02:01 compute-0 adoring_feistel[383191]:             "name": "ceph_lv0",
Sep 30 19:02:01 compute-0 adoring_feistel[383191]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 19:02:01 compute-0 adoring_feistel[383191]:             "tags": {
Sep 30 19:02:01 compute-0 adoring_feistel[383191]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 19:02:01 compute-0 adoring_feistel[383191]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 19:02:01 compute-0 adoring_feistel[383191]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 19:02:01 compute-0 adoring_feistel[383191]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 19:02:01 compute-0 adoring_feistel[383191]:                 "ceph.cluster_name": "ceph",
Sep 30 19:02:01 compute-0 adoring_feistel[383191]:                 "ceph.crush_device_class": "",
Sep 30 19:02:01 compute-0 adoring_feistel[383191]:                 "ceph.encrypted": "0",
Sep 30 19:02:01 compute-0 adoring_feistel[383191]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 19:02:01 compute-0 adoring_feistel[383191]:                 "ceph.osd_id": "0",
Sep 30 19:02:01 compute-0 adoring_feistel[383191]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 19:02:01 compute-0 adoring_feistel[383191]:                 "ceph.type": "block",
Sep 30 19:02:01 compute-0 adoring_feistel[383191]:                 "ceph.vdo": "0",
Sep 30 19:02:01 compute-0 adoring_feistel[383191]:                 "ceph.with_tpm": "0"
Sep 30 19:02:01 compute-0 adoring_feistel[383191]:             },
Sep 30 19:02:01 compute-0 adoring_feistel[383191]:             "type": "block",
Sep 30 19:02:01 compute-0 adoring_feistel[383191]:             "vg_name": "ceph_vg0"
Sep 30 19:02:01 compute-0 adoring_feistel[383191]:         }
Sep 30 19:02:01 compute-0 adoring_feistel[383191]:     ]
Sep 30 19:02:01 compute-0 adoring_feistel[383191]: }
Sep 30 19:02:01 compute-0 systemd[1]: libpod-3a639f75119d88d0addcdf588c4fc8921b6d03d8375bb0ccd9e49143d927aa59.scope: Deactivated successfully.
Sep 30 19:02:01 compute-0 conmon[383191]: conmon 3a639f75119d88d0addc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3a639f75119d88d0addcdf588c4fc8921b6d03d8375bb0ccd9e49143d927aa59.scope/container/memory.events
Sep 30 19:02:01 compute-0 podman[383174]: 2025-09-30 19:02:01.586115096 +0000 UTC m=+0.413128099 container died 3a639f75119d88d0addcdf588c4fc8921b6d03d8375bb0ccd9e49143d927aa59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Sep 30 19:02:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-c97ff40819cdbfa1f4fdba1f528e16aff6d2cd2dbb8d850e16c452d1f297013d-merged.mount: Deactivated successfully.
Sep 30 19:02:01 compute-0 podman[383174]: 2025-09-30 19:02:01.744201773 +0000 UTC m=+0.571214816 container remove 3a639f75119d88d0addcdf588c4fc8921b6d03d8375bb0ccd9e49143d927aa59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_feistel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 19:02:01 compute-0 systemd[1]: libpod-conmon-3a639f75119d88d0addcdf588c4fc8921b6d03d8375bb0ccd9e49143d927aa59.scope: Deactivated successfully.
Sep 30 19:02:01 compute-0 sudo[383070]: pam_unix(sudo:session): session closed for user root
Sep 30 19:02:01 compute-0 sudo[383215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 19:02:01 compute-0 sudo[383215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:02:01 compute-0 sudo[383215]: pam_unix(sudo:session): session closed for user root
Sep 30 19:02:01 compute-0 ceph-mon[73755]: pgmap v2519: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 595 B/s rd, 0 op/s
Sep 30 19:02:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:02:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:02:01.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:02:01 compute-0 sudo[383241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 19:02:01 compute-0 sudo[383241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:02:02 compute-0 podman[383306]: 2025-09-30 19:02:02.292038032 +0000 UTC m=+0.033449658 container create 6592823faf2c3f54f5641491931ddfd7b9f51b3b6997e2de7a550fbd284a6350 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_feynman, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 19:02:02 compute-0 systemd[1]: Started libpod-conmon-6592823faf2c3f54f5641491931ddfd7b9f51b3b6997e2de7a550fbd284a6350.scope.
Sep 30 19:02:02 compute-0 systemd[1]: Started libcrun container.
Sep 30 19:02:02 compute-0 podman[383306]: 2025-09-30 19:02:02.369361987 +0000 UTC m=+0.110773633 container init 6592823faf2c3f54f5641491931ddfd7b9f51b3b6997e2de7a550fbd284a6350 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Sep 30 19:02:02 compute-0 podman[383306]: 2025-09-30 19:02:02.277369872 +0000 UTC m=+0.018781518 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 19:02:02 compute-0 podman[383306]: 2025-09-30 19:02:02.375606758 +0000 UTC m=+0.117018384 container start 6592823faf2c3f54f5641491931ddfd7b9f51b3b6997e2de7a550fbd284a6350 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_feynman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 19:02:02 compute-0 podman[383306]: 2025-09-30 19:02:02.378838412 +0000 UTC m=+0.120250038 container attach 6592823faf2c3f54f5641491931ddfd7b9f51b3b6997e2de7a550fbd284a6350 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_feynman, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 19:02:02 compute-0 heuristic_feynman[383324]: 167 167
Sep 30 19:02:02 compute-0 systemd[1]: libpod-6592823faf2c3f54f5641491931ddfd7b9f51b3b6997e2de7a550fbd284a6350.scope: Deactivated successfully.
Sep 30 19:02:02 compute-0 podman[383306]: 2025-09-30 19:02:02.382056316 +0000 UTC m=+0.123467952 container died 6592823faf2c3f54f5641491931ddfd7b9f51b3b6997e2de7a550fbd284a6350 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_feynman, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default)
Sep 30 19:02:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0f783218a3ad7a8046e3a79806fdeaa133fd0543b2a77f790486e42eeab62fc-merged.mount: Deactivated successfully.
Sep 30 19:02:02 compute-0 podman[383306]: 2025-09-30 19:02:02.422689069 +0000 UTC m=+0.164100705 container remove 6592823faf2c3f54f5641491931ddfd7b9f51b3b6997e2de7a550fbd284a6350 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_feynman, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 19:02:02 compute-0 systemd[1]: libpod-conmon-6592823faf2c3f54f5641491931ddfd7b9f51b3b6997e2de7a550fbd284a6350.scope: Deactivated successfully.
Sep 30 19:02:02 compute-0 podman[383348]: 2025-09-30 19:02:02.599535892 +0000 UTC m=+0.039012332 container create 19d116368abc70f21582740c1350cd1ebdc964925d5b0aa60fb06d4d4c189652 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_yalow, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 19:02:02 compute-0 systemd[1]: Started libpod-conmon-19d116368abc70f21582740c1350cd1ebdc964925d5b0aa60fb06d4d4c189652.scope.
Sep 30 19:02:02 compute-0 systemd[1]: Started libcrun container.
Sep 30 19:02:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35a3b3b821d67889cc6ac69cedc82dd21fd6848483e37cebc4fa4e828d8c1e61/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 19:02:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35a3b3b821d67889cc6ac69cedc82dd21fd6848483e37cebc4fa4e828d8c1e61/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 19:02:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35a3b3b821d67889cc6ac69cedc82dd21fd6848483e37cebc4fa4e828d8c1e61/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 19:02:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35a3b3b821d67889cc6ac69cedc82dd21fd6848483e37cebc4fa4e828d8c1e61/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 19:02:02 compute-0 podman[383348]: 2025-09-30 19:02:02.675899092 +0000 UTC m=+0.115375552 container init 19d116368abc70f21582740c1350cd1ebdc964925d5b0aa60fb06d4d4c189652 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_yalow, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 19:02:02 compute-0 podman[383348]: 2025-09-30 19:02:02.584707968 +0000 UTC m=+0.024184428 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 19:02:02 compute-0 podman[383348]: 2025-09-30 19:02:02.687222655 +0000 UTC m=+0.126699105 container start 19d116368abc70f21582740c1350cd1ebdc964925d5b0aa60fb06d4d4c189652 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_yalow, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Sep 30 19:02:02 compute-0 podman[383348]: 2025-09-30 19:02:02.690444139 +0000 UTC m=+0.129920629 container attach 19d116368abc70f21582740c1350cd1ebdc964925d5b0aa60fb06d4d4c189652 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_yalow, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 19:02:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2520: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 595 B/s rd, 0 op/s
Sep 30 19:02:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:02:03.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:03 compute-0 lvm[383439]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 19:02:03 compute-0 lvm[383439]: VG ceph_vg0 finished
Sep 30 19:02:03 compute-0 suspicious_yalow[383365]: {}
Sep 30 19:02:03 compute-0 systemd[1]: libpod-19d116368abc70f21582740c1350cd1ebdc964925d5b0aa60fb06d4d4c189652.scope: Deactivated successfully.
Sep 30 19:02:03 compute-0 systemd[1]: libpod-19d116368abc70f21582740c1350cd1ebdc964925d5b0aa60fb06d4d4c189652.scope: Consumed 1.033s CPU time.
Sep 30 19:02:03 compute-0 nova_compute[265391]: 2025-09-30 19:02:03.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:02:03 compute-0 podman[383443]: 2025-09-30 19:02:03.402940515 +0000 UTC m=+0.022695839 container died 19d116368abc70f21582740c1350cd1ebdc964925d5b0aa60fb06d4d4c189652 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Sep 30 19:02:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-35a3b3b821d67889cc6ac69cedc82dd21fd6848483e37cebc4fa4e828d8c1e61-merged.mount: Deactivated successfully.
Sep 30 19:02:03 compute-0 podman[383443]: 2025-09-30 19:02:03.434781291 +0000 UTC m=+0.054536605 container remove 19d116368abc70f21582740c1350cd1ebdc964925d5b0aa60fb06d4d4c189652 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_yalow, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Sep 30 19:02:03 compute-0 systemd[1]: libpod-conmon-19d116368abc70f21582740c1350cd1ebdc964925d5b0aa60fb06d4d4c189652.scope: Deactivated successfully.
Sep 30 19:02:03 compute-0 sudo[383241]: pam_unix(sudo:session): session closed for user root
Sep 30 19:02:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 19:02:03 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 19:02:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 19:02:03 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 19:02:03 compute-0 sudo[383458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 19:02:03 compute-0 sudo[383458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:02:03 compute-0 sudo[383458]: pam_unix(sudo:session): session closed for user root
Sep 30 19:02:03 compute-0 ceph-mon[73755]: pgmap v2520: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 595 B/s rd, 0 op/s
Sep 30 19:02:03 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 19:02:03 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 19:02:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:02:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:02:03.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:02:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:02:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:02:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:02:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:02:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:02:04.006Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 19:02:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:02:04.006Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 19:02:04 compute-0 nova_compute[265391]: 2025-09-30 19:02:04.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:02:04 compute-0 nova_compute[265391]: 2025-09-30 19:02:04.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:02:04 compute-0 nova_compute[265391]: 2025-09-30 19:02:04.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:02:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2521: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 595 B/s rd, 0 op/s
Sep 30 19:02:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:02:05.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:05 compute-0 podman[383487]: 2025-09-30 19:02:05.549649354 +0000 UTC m=+0.071414792 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 19:02:05 compute-0 podman[383485]: 2025-09-30 19:02:05.549662575 +0000 UTC m=+0.069925584 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Sep 30 19:02:05 compute-0 podman[383486]: 2025-09-30 19:02:05.584082717 +0000 UTC m=+0.104476689 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_controller, io.buildah.version=1.41.4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Sep 30 19:02:05 compute-0 ceph-mon[73755]: pgmap v2521: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 595 B/s rd, 0 op/s
Sep 30 19:02:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:02:05.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:02:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2522: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 595 B/s rd, 0 op/s
Sep 30 19:02:06 compute-0 sudo[383551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 19:02:06 compute-0 sudo[383551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:02:06 compute-0 sudo[383551]: pam_unix(sudo:session): session closed for user root
Sep 30 19:02:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:02:07.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 19:02:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 19:02:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 19:02:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 19:02:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 19:02:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 19:02:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 19:02:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 19:02:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:02:07.485Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:02:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:02:07.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:07 compute-0 ceph-mon[73755]: pgmap v2522: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 595 B/s rd, 0 op/s
Sep 30 19:02:07 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 19:02:08 compute-0 nova_compute[265391]: 2025-09-30 19:02:08.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_19:02:08
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['.rgw.root', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', '.nfs', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', 'vms', 'volumes', 'default.rgw.meta']
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 19:02:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:19:02:08] "GET /metrics HTTP/1.1" 200 46743 "" "Prometheus/2.51.0"
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:19:02:08] "GET /metrics HTTP/1.1" 200 46743 "" "Prometheus/2.51.0"
Sep 30 19:02:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2523: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 893 B/s rd, 0 op/s
Sep 30 19:02:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:02:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:02:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:02:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:02:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:02:09.009Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:02:09 compute-0 ceph-mon[73755]: pgmap v2523: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 893 B/s rd, 0 op/s
Sep 30 19:02:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:02:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:02:09.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:02:09 compute-0 nova_compute[265391]: 2025-09-30 19:02:09.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:02:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:02:09.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2524: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:02:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:02:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:02:11.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:02:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:02:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:02:11.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:12 compute-0 ceph-mon[73755]: pgmap v2524: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:02:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2525: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:02:13 compute-0 ceph-mon[73755]: pgmap v2525: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:02:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:02:13.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:13 compute-0 nova_compute[265391]: 2025-09-30 19:02:13.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:02:13 compute-0 nova_compute[265391]: 2025-09-30 19:02:13.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:02:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:02:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:02:13.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:02:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:02:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:02:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:02:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:02:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:02:14.007Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:02:14 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1699144910' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 19:02:14 compute-0 nova_compute[265391]: 2025-09-30 19:02:14.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:02:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2526: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:02:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:02:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:02:15.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:02:15 compute-0 ceph-mon[73755]: pgmap v2526: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:02:15 compute-0 nova_compute[265391]: 2025-09-30 19:02:15.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:02:15 compute-0 nova_compute[265391]: 2025-09-30 19:02:15.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:02:15 compute-0 nova_compute[265391]: 2025-09-30 19:02:15.951 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 19:02:15 compute-0 nova_compute[265391]: 2025-09-30 19:02:15.951 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 19:02:15 compute-0 nova_compute[265391]: 2025-09-30 19:02:15.951 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 19:02:15 compute-0 nova_compute[265391]: 2025-09-30 19:02:15.951 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 19:02:15 compute-0 nova_compute[265391]: 2025-09-30 19:02:15.952 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 19:02:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:02:15.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:16 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2620052722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 19:02:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 19:02:16 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1403592302' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 19:02:16 compute-0 nova_compute[265391]: 2025-09-30 19:02:16.397 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 19:02:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:02:16 compute-0 nova_compute[265391]: 2025-09-30 19:02:16.562 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 19:02:16 compute-0 nova_compute[265391]: 2025-09-30 19:02:16.563 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 19:02:16 compute-0 nova_compute[265391]: 2025-09-30 19:02:16.595 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.032s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 19:02:16 compute-0 nova_compute[265391]: 2025-09-30 19:02:16.596 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4232MB free_disk=39.992183685302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 19:02:16 compute-0 nova_compute[265391]: 2025-09-30 19:02:16.596 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 19:02:16 compute-0 nova_compute[265391]: 2025-09-30 19:02:16.596 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 19:02:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2527: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:02:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:02:17.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:17 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1403592302' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 19:02:17 compute-0 ceph-mon[73755]: pgmap v2527: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:02:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:02:17.485Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:02:17 compute-0 nova_compute[265391]: 2025-09-30 19:02:17.641 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 19:02:17 compute-0 nova_compute[265391]: 2025-09-30 19:02:17.641 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 19:02:16 up  2:05,  0 user,  load average: 0.24, 0.53, 0.67\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 19:02:17 compute-0 nova_compute[265391]: 2025-09-30 19:02:17.666 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 19:02:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:02:17.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:18 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 19:02:18 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4104838524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 19:02:18 compute-0 nova_compute[265391]: 2025-09-30 19:02:18.099 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 19:02:18 compute-0 nova_compute[265391]: 2025-09-30 19:02:18.104 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 19:02:18 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/4104838524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 19:02:18 compute-0 nova_compute[265391]: 2025-09-30 19:02:18.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:02:18 compute-0 podman[383633]: 2025-09-30 19:02:18.542105396 +0000 UTC m=+0.079275606 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 19:02:18 compute-0 podman[383635]: 2025-09-30 19:02:18.549737414 +0000 UTC m=+0.078274930 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., release=1755695350, io.buildah.version=1.33.7, version=9.6, distribution-scope=public, vendor=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Sep 30 19:02:18 compute-0 podman[383634]: 2025-09-30 19:02:18.558592623 +0000 UTC m=+0.089713576 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Sep 30 19:02:18 compute-0 nova_compute[265391]: 2025-09-30 19:02:18.613 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 19:02:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:19:02:18] "GET /metrics HTTP/1.1" 200 46743 "" "Prometheus/2.51.0"
Sep 30 19:02:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:19:02:18] "GET /metrics HTTP/1.1" 200 46743 "" "Prometheus/2.51.0"
Sep 30 19:02:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2528: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:02:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:02:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:02:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:02:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:02:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:02:19.010Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:02:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:02:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:02:19.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:02:19 compute-0 nova_compute[265391]: 2025-09-30 19:02:19.134 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 19:02:19 compute-0 nova_compute[265391]: 2025-09-30 19:02:19.134 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.538s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 19:02:19 compute-0 ceph-mon[73755]: pgmap v2528: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:02:19 compute-0 nova_compute[265391]: 2025-09-30 19:02:19.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:02:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:02:19.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:20 compute-0 nova_compute[265391]: 2025-09-30 19:02:20.129 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:02:20 compute-0 nova_compute[265391]: 2025-09-30 19:02:20.130 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:02:20 compute-0 nova_compute[265391]: 2025-09-30 19:02:20.130 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:02:20 compute-0 nova_compute[265391]: 2025-09-30 19:02:20.130 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 19:02:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2529: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:02:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:02:21.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:02:21 compute-0 ceph-mon[73755]: pgmap v2529: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:02:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:02:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:02:21.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:02:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 19:02:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 19:02:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2530: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:02:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 19:02:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:02:23.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:23 compute-0 nova_compute[265391]: 2025-09-30 19:02:23.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:02:23 compute-0 ceph-mon[73755]: pgmap v2530: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:02:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:02:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:02:23.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:02:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:02:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:02:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:02:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:02:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:02:24.008Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:02:24 compute-0 nova_compute[265391]: 2025-09-30 19:02:24.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:02:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2531: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:02:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:02:25.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:25 compute-0 ceph-mon[73755]: pgmap v2531: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:02:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:02:25.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:02:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2532: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:02:26 compute-0 sudo[383698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 19:02:26 compute-0 sudo[383698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:02:26 compute-0 sudo[383698]: pam_unix(sudo:session): session closed for user root
Sep 30 19:02:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:02:27.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:02:27.486Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:02:27 compute-0 sshd-session[383320]: Received disconnect from 154.125.120.7 port 53162:11: Bye Bye [preauth]
Sep 30 19:02:27 compute-0 sshd-session[383320]: Disconnected from 154.125.120.7 port 53162 [preauth]
Sep 30 19:02:27 compute-0 ceph-mon[73755]: pgmap v2532: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:02:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:02:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:02:27.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:02:28 compute-0 nova_compute[265391]: 2025-09-30 19:02:28.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:02:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:19:02:28] "GET /metrics HTTP/1.1" 200 46744 "" "Prometheus/2.51.0"
Sep 30 19:02:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:19:02:28] "GET /metrics HTTP/1.1" 200 46744 "" "Prometheus/2.51.0"
Sep 30 19:02:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2533: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:02:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:02:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:02:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:02:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:02:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:02:29.010Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:02:29 compute-0 ceph-mon[73755]: pgmap v2533: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:02:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:02:29.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:29 compute-0 nova_compute[265391]: 2025-09-30 19:02:29.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:02:29 compute-0 podman[276673]: time="2025-09-30T19:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 19:02:29 compute-0 podman[276673]: @ - - [30/Sep/2025:19:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 19:02:29 compute-0 podman[276673]: @ - - [30/Sep/2025:19:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10324 "" "Go-http-client/1.1"
Sep 30 19:02:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:02:29.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2534: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:02:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:02:31.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:31 compute-0 openstack_network_exporter[279566]: ERROR   19:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 19:02:31 compute-0 openstack_network_exporter[279566]: ERROR   19:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 19:02:31 compute-0 openstack_network_exporter[279566]: ERROR   19:02:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 19:02:31 compute-0 openstack_network_exporter[279566]: ERROR   19:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 19:02:31 compute-0 openstack_network_exporter[279566]: ERROR   19:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 19:02:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:02:31 compute-0 ceph-mon[73755]: pgmap v2534: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:02:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:02:31.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2535: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:02:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:02:33.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:33 compute-0 nova_compute[265391]: 2025-09-30 19:02:33.424 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:02:33 compute-0 nova_compute[265391]: 2025-09-30 19:02:33.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:02:33 compute-0 ceph-mon[73755]: pgmap v2535: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:02:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:02:33.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:02:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:02:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:02:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:02:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:02:34.009Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:02:34 compute-0 nova_compute[265391]: 2025-09-30 19:02:34.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:02:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2536: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:02:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:02:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:02:35.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:02:35 compute-0 ceph-mon[73755]: pgmap v2536: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:02:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:02:35.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:02:36 compute-0 podman[383733]: 2025-09-30 19:02:36.539980259 +0000 UTC m=+0.072974442 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true)
Sep 30 19:02:36 compute-0 podman[383735]: 2025-09-30 19:02:36.554604218 +0000 UTC m=+0.075376234 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Sep 30 19:02:36 compute-0 podman[383734]: 2025-09-30 19:02:36.568051947 +0000 UTC m=+0.102215470 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 19:02:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 19:02:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3932686305' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 19:02:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 19:02:36 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3932686305' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 19:02:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2537: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:02:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3932686305' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 19:02:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/3932686305' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 19:02:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:02:37.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 19:02:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 19:02:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 19:02:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 19:02:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 19:02:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 19:02:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 19:02:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 19:02:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:02:37.487Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:02:37 compute-0 ceph-mon[73755]: pgmap v2537: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:02:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 19:02:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:02:37.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:38 compute-0 nova_compute[265391]: 2025-09-30 19:02:38.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:02:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:19:02:38] "GET /metrics HTTP/1.1" 200 46746 "" "Prometheus/2.51.0"
Sep 30 19:02:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:19:02:38] "GET /metrics HTTP/1.1" 200 46746 "" "Prometheus/2.51.0"
Sep 30 19:02:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2538: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:02:38 compute-0 ceph-mon[73755]: pgmap v2538: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:02:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:02:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:02:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:02:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:02:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:02:39.011Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:02:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:02:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:02:39.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:02:39 compute-0 nova_compute[265391]: 2025-09-30 19:02:39.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:02:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:02:39.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2539: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:02:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:02:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:02:41.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:02:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:02:41 compute-0 ceph-mon[73755]: pgmap v2539: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:02:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:02:41.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2540: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:02:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:02:43.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:43 compute-0 nova_compute[265391]: 2025-09-30 19:02:43.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:02:43 compute-0 ceph-mon[73755]: pgmap v2540: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:02:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:02:43.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:02:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:02:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:02:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:02:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:02:44.010Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:02:44 compute-0 nova_compute[265391]: 2025-09-30 19:02:44.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:02:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2541: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:02:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.002000052s ======
Sep 30 19:02:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:02:45.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Sep 30 19:02:45 compute-0 ceph-mon[73755]: pgmap v2541: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:02:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:02:45.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:02:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2542: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:02:47 compute-0 sudo[383813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 19:02:47 compute-0 sudo[383813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:02:47 compute-0 sudo[383813]: pam_unix(sudo:session): session closed for user root
Sep 30 19:02:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:02:47.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:02:47.489Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:02:47 compute-0 ceph-mon[73755]: pgmap v2542: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:02:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:02:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:02:47.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:02:48 compute-0 nova_compute[265391]: 2025-09-30 19:02:48.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:02:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:19:02:48] "GET /metrics HTTP/1.1" 200 46746 "" "Prometheus/2.51.0"
Sep 30 19:02:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:19:02:48] "GET /metrics HTTP/1.1" 200 46746 "" "Prometheus/2.51.0"
Sep 30 19:02:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2543: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:02:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:02:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:02:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:02:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:02:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:02:49.012Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:02:49 compute-0 ceph-mon[73755]: pgmap v2543: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:02:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:02:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:02:49.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:02:49 compute-0 podman[383840]: 2025-09-30 19:02:49.542156181 +0000 UTC m=+0.075079217 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930)
Sep 30 19:02:49 compute-0 podman[383842]: 2025-09-30 19:02:49.542869199 +0000 UTC m=+0.077132120 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=openstack_network_exporter, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., release=1755695350, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal)
Sep 30 19:02:49 compute-0 nova_compute[265391]: 2025-09-30 19:02:49.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:02:49 compute-0 podman[383841]: 2025-09-30 19:02:49.578864852 +0000 UTC m=+0.108079472 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.schema-version=1.0)
Sep 30 19:02:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:02:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:02:50.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:02:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2544: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:02:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:02:51.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:02:51 compute-0 ceph-mon[73755]: pgmap v2544: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:02:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:02:52.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 19:02:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 19:02:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2545: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:02:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 19:02:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:02:53.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:53 compute-0 nova_compute[265391]: 2025-09-30 19:02:53.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:02:53 compute-0 ceph-mon[73755]: pgmap v2545: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:02:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:02:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:02:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:02:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:02:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 19:02:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:02:54.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 19:02:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:02:54.011Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:02:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 19:02:54.367 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 19:02:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 19:02:54.368 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 19:02:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 19:02:54.368 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 19:02:54 compute-0 nova_compute[265391]: 2025-09-30 19:02:54.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:02:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2546: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:02:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:02:55.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:55 compute-0 ceph-mon[73755]: pgmap v2546: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:02:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:02:56.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:02:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2547: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:02:57 compute-0 ceph-mon[73755]: pgmap v2547: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:02:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:02:57.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:02:57.489Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:02:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 19:02:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/538484799' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 19:02:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 19:02:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/538484799' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 19:02:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:02:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:02:58.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:02:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/538484799' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 19:02:58 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/538484799' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 19:02:58 compute-0 nova_compute[265391]: 2025-09-30 19:02:58.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:02:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:19:02:58] "GET /metrics HTTP/1.1" 200 46746 "" "Prometheus/2.51.0"
Sep 30 19:02:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:19:02:58] "GET /metrics HTTP/1.1" 200 46746 "" "Prometheus/2.51.0"
Sep 30 19:02:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2548: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:02:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:02:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:02:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:02:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:02:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:02:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:02:59.012Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:02:59 compute-0 ceph-mon[73755]: pgmap v2548: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:02:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:02:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:02:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:02:59.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:02:59 compute-0 nova_compute[265391]: 2025-09-30 19:02:59.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:02:59 compute-0 podman[276673]: time="2025-09-30T19:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 19:02:59 compute-0 podman[276673]: @ - - [30/Sep/2025:19:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 19:02:59 compute-0 podman[276673]: @ - - [30/Sep/2025:19:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10313 "" "Go-http-client/1.1"
Sep 30 19:03:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:03:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:03:00.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:03:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2549: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:03:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:03:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:03:01.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:03:01 compute-0 openstack_network_exporter[279566]: ERROR   19:03:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 19:03:01 compute-0 openstack_network_exporter[279566]: ERROR   19:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 19:03:01 compute-0 openstack_network_exporter[279566]: ERROR   19:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 19:03:01 compute-0 openstack_network_exporter[279566]: ERROR   19:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 19:03:01 compute-0 openstack_network_exporter[279566]: ERROR   19:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 19:03:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:03:01 compute-0 ceph-mon[73755]: pgmap v2549: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:03:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:03:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:03:02.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:03:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2550: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:03:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:03:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:03:03.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:03:03 compute-0 nova_compute[265391]: 2025-09-30 19:03:03.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:03:03 compute-0 sudo[383919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 19:03:03 compute-0 sudo[383919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:03:03 compute-0 sudo[383919]: pam_unix(sudo:session): session closed for user root
Sep 30 19:03:03 compute-0 sudo[383944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 19:03:03 compute-0 sudo[383944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:03:03 compute-0 ceph-mon[73755]: pgmap v2550: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:03:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:03:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:03:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:03:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:03:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:03:04.012Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 19:03:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:03:04.012Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:03:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:03:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:03:04.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:03:04 compute-0 sudo[383944]: pam_unix(sudo:session): session closed for user root
Sep 30 19:03:04 compute-0 nova_compute[265391]: 2025-09-30 19:03:04.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:03:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2551: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:03:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:03:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:03:05.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:03:05 compute-0 nova_compute[265391]: 2025-09-30 19:03:05.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:03:05 compute-0 nova_compute[265391]: 2025-09-30 19:03:05.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:03:05 compute-0 ceph-mon[73755]: pgmap v2551: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:03:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:03:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:03:06.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:03:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:03:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2552: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:03:07 compute-0 ceph-mon[73755]: pgmap v2552: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:03:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:03:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:03:07.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:03:07 compute-0 sudo[384003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 19:03:07 compute-0 sudo[384003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:03:07 compute-0 sudo[384003]: pam_unix(sudo:session): session closed for user root
Sep 30 19:03:07 compute-0 podman[384029]: 2025-09-30 19:03:07.289383413 +0000 UTC m=+0.060833872 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Sep 30 19:03:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 19:03:07 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 19:03:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 19:03:07 compute-0 podman[384028]: 2025-09-30 19:03:07.316404711 +0000 UTC m=+0.091730160 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Sep 30 19:03:07 compute-0 podman[384027]: 2025-09-30 19:03:07.316443892 +0000 UTC m=+0.091175776 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Sep 30 19:03:07 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 19:03:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 19:03:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 19:03:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 19:03:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 19:03:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 19:03:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 19:03:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 19:03:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 19:03:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:03:07.490Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:03:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:03:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:03:08.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:03:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Sep 30 19:03:08 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Sep 30 19:03:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 19:03:08 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 19:03:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 19:03:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 19:03:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 19:03:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 19:03:08 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2553: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 534 B/s rd, 0 op/s
Sep 30 19:03:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 19:03:08 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 19:03:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 19:03:08 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 19:03:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 19:03:08 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 19:03:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 19:03:08 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 19:03:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 19:03:08 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 19:03:08 compute-0 nova_compute[265391]: 2025-09-30 19:03:08.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 19:03:08 compute-0 sudo[384099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_19:03:08
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['default.rgw.control', '.nfs', 'backups', '.rgw.root', 'images', 'vms', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data']
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:03:08 compute-0 sudo[384099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 19:03:08 compute-0 sudo[384099]: pam_unix(sudo:session): session closed for user root
Sep 30 19:03:08 compute-0 sudo[384124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 19:03:08 compute-0 sudo[384124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:03:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:19:03:08] "GET /metrics HTTP/1.1" 200 46746 "" "Prometheus/2.51.0"
Sep 30 19:03:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:19:03:08] "GET /metrics HTTP/1.1" 200 46746 "" "Prometheus/2.51.0"
Sep 30 19:03:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:03:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:03:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:03:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:03:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:03:09.013Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:03:09 compute-0 podman[384188]: 2025-09-30 19:03:09.045381157 +0000 UTC m=+0.041689687 container create 45abc03bde9e6e537e91f4f4afe8fa559c75018ace587f617adcaac310ccc75f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_bardeen, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Sep 30 19:03:09 compute-0 systemd[1]: Started libpod-conmon-45abc03bde9e6e537e91f4f4afe8fa559c75018ace587f617adcaac310ccc75f.scope.
Sep 30 19:03:09 compute-0 systemd[1]: Started libcrun container.
Sep 30 19:03:09 compute-0 podman[384188]: 2025-09-30 19:03:09.024592821 +0000 UTC m=+0.020901321 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 19:03:09 compute-0 podman[384188]: 2025-09-30 19:03:09.129400137 +0000 UTC m=+0.125708657 container init 45abc03bde9e6e537e91f4f4afe8fa559c75018ace587f617adcaac310ccc75f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True)
Sep 30 19:03:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 19:03:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:03:09.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 19:03:09 compute-0 podman[384188]: 2025-09-30 19:03:09.139374285 +0000 UTC m=+0.135682795 container start 45abc03bde9e6e537e91f4f4afe8fa559c75018ace587f617adcaac310ccc75f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Sep 30 19:03:09 compute-0 podman[384188]: 2025-09-30 19:03:09.14307659 +0000 UTC m=+0.139385090 container attach 45abc03bde9e6e537e91f4f4afe8fa559c75018ace587f617adcaac310ccc75f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_bardeen, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Sep 30 19:03:09 compute-0 stoic_bardeen[384204]: 167 167
Sep 30 19:03:09 compute-0 systemd[1]: libpod-45abc03bde9e6e537e91f4f4afe8fa559c75018ace587f617adcaac310ccc75f.scope: Deactivated successfully.
Sep 30 19:03:09 compute-0 podman[384188]: 2025-09-30 19:03:09.144811445 +0000 UTC m=+0.141119945 container died 45abc03bde9e6e537e91f4f4afe8fa559c75018ace587f617adcaac310ccc75f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 19:03:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-2cc74eb23db361836fd1593bb7e1ec26dc19d3f6dab70df511ef019897ca2ae7-merged.mount: Deactivated successfully.
Sep 30 19:03:09 compute-0 podman[384188]: 2025-09-30 19:03:09.181196704 +0000 UTC m=+0.177505204 container remove 45abc03bde9e6e537e91f4f4afe8fa559c75018ace587f617adcaac310ccc75f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_bardeen, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Sep 30 19:03:09 compute-0 systemd[1]: libpod-conmon-45abc03bde9e6e537e91f4f4afe8fa559c75018ace587f617adcaac310ccc75f.scope: Deactivated successfully.
Sep 30 19:03:09 compute-0 podman[384226]: 2025-09-30 19:03:09.3266726 +0000 UTC m=+0.040991039 container create 08f607aee6277a7e4f3afc02aaf45ee6a0447a34bdd090ee566b34ac8599f6af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_chatterjee, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 19:03:09 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Sep 30 19:03:09 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 19:03:09 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 19:03:09 compute-0 ceph-mon[73755]: pgmap v2553: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 534 B/s rd, 0 op/s
Sep 30 19:03:09 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 19:03:09 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 19:03:09 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 19:03:09 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 19:03:09 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 19:03:09 compute-0 systemd[1]: Started libpod-conmon-08f607aee6277a7e4f3afc02aaf45ee6a0447a34bdd090ee566b34ac8599f6af.scope.
Sep 30 19:03:09 compute-0 systemd[1]: Started libcrun container.
Sep 30 19:03:09 compute-0 podman[384226]: 2025-09-30 19:03:09.307596178 +0000 UTC m=+0.021914647 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 19:03:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d1abb2ef79e4f34ec29e806934bcd73545421fc03381b0ace78dc9315322b6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 19:03:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d1abb2ef79e4f34ec29e806934bcd73545421fc03381b0ace78dc9315322b6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 19:03:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d1abb2ef79e4f34ec29e806934bcd73545421fc03381b0ace78dc9315322b6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 19:03:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d1abb2ef79e4f34ec29e806934bcd73545421fc03381b0ace78dc9315322b6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 19:03:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d1abb2ef79e4f34ec29e806934bcd73545421fc03381b0ace78dc9315322b6a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 19:03:09 compute-0 podman[384226]: 2025-09-30 19:03:09.425717028 +0000 UTC m=+0.140035507 container init 08f607aee6277a7e4f3afc02aaf45ee6a0447a34bdd090ee566b34ac8599f6af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_chatterjee, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Sep 30 19:03:09 compute-0 podman[384226]: 2025-09-30 19:03:09.434572146 +0000 UTC m=+0.148890585 container start 08f607aee6277a7e4f3afc02aaf45ee6a0447a34bdd090ee566b34ac8599f6af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_chatterjee, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Sep 30 19:03:09 compute-0 podman[384226]: 2025-09-30 19:03:09.437652556 +0000 UTC m=+0.151971005 container attach 08f607aee6277a7e4f3afc02aaf45ee6a0447a34bdd090ee566b34ac8599f6af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_chatterjee, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 19:03:09 compute-0 nova_compute[265391]: 2025-09-30 19:03:09.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:03:09 compute-0 trusting_chatterjee[384242]: --> passed data devices: 0 physical, 1 LVM
Sep 30 19:03:09 compute-0 trusting_chatterjee[384242]: --> All data devices are unavailable
Sep 30 19:03:09 compute-0 systemd[1]: libpod-08f607aee6277a7e4f3afc02aaf45ee6a0447a34bdd090ee566b34ac8599f6af.scope: Deactivated successfully.
Sep 30 19:03:09 compute-0 podman[384226]: 2025-09-30 19:03:09.88393777 +0000 UTC m=+0.598256209 container died 08f607aee6277a7e4f3afc02aaf45ee6a0447a34bdd090ee566b34ac8599f6af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_chatterjee, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 19:03:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d1abb2ef79e4f34ec29e806934bcd73545421fc03381b0ace78dc9315322b6a-merged.mount: Deactivated successfully.
Sep 30 19:03:09 compute-0 podman[384226]: 2025-09-30 19:03:09.933010918 +0000 UTC m=+0.647329377 container remove 08f607aee6277a7e4f3afc02aaf45ee6a0447a34bdd090ee566b34ac8599f6af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Sep 30 19:03:09 compute-0 systemd[1]: libpod-conmon-08f607aee6277a7e4f3afc02aaf45ee6a0447a34bdd090ee566b34ac8599f6af.scope: Deactivated successfully.
Sep 30 19:03:09 compute-0 sudo[384124]: pam_unix(sudo:session): session closed for user root
Sep 30 19:03:10 compute-0 sudo[384269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 19:03:10 compute-0 sudo[384269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:03:10 compute-0 sudo[384269]: pam_unix(sudo:session): session closed for user root
Sep 30 19:03:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:03:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:03:10.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:03:10 compute-0 sudo[384294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 19:03:10 compute-0 sudo[384294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:03:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2554: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 534 B/s rd, 0 op/s
Sep 30 19:03:10 compute-0 podman[384358]: 2025-09-30 19:03:10.47721834 +0000 UTC m=+0.035012775 container create 90a2756d30e8c2512c11007571e6e83b6eabb7c2650358f6e6ca44d1b065659d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Sep 30 19:03:10 compute-0 systemd[1]: Started libpod-conmon-90a2756d30e8c2512c11007571e6e83b6eabb7c2650358f6e6ca44d1b065659d.scope.
Sep 30 19:03:10 compute-0 systemd[1]: Started libcrun container.
Sep 30 19:03:10 compute-0 podman[384358]: 2025-09-30 19:03:10.461961087 +0000 UTC m=+0.019755502 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 19:03:10 compute-0 podman[384358]: 2025-09-30 19:03:10.56006922 +0000 UTC m=+0.117863665 container init 90a2756d30e8c2512c11007571e6e83b6eabb7c2650358f6e6ca44d1b065659d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_perlman, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Sep 30 19:03:10 compute-0 podman[384358]: 2025-09-30 19:03:10.56819023 +0000 UTC m=+0.125984655 container start 90a2756d30e8c2512c11007571e6e83b6eabb7c2650358f6e6ca44d1b065659d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_perlman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 19:03:10 compute-0 podman[384358]: 2025-09-30 19:03:10.573011534 +0000 UTC m=+0.130805949 container attach 90a2756d30e8c2512c11007571e6e83b6eabb7c2650358f6e6ca44d1b065659d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_perlman, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 19:03:10 compute-0 condescending_perlman[384374]: 167 167
Sep 30 19:03:10 compute-0 systemd[1]: libpod-90a2756d30e8c2512c11007571e6e83b6eabb7c2650358f6e6ca44d1b065659d.scope: Deactivated successfully.
Sep 30 19:03:10 compute-0 podman[384358]: 2025-09-30 19:03:10.578448585 +0000 UTC m=+0.136243040 container died 90a2756d30e8c2512c11007571e6e83b6eabb7c2650358f6e6ca44d1b065659d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_perlman, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True)
Sep 30 19:03:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-e75c0ea3f2d5ec50fa9b47d457180224550b84ee98008f56c9a3490f07d80306-merged.mount: Deactivated successfully.
Sep 30 19:03:10 compute-0 podman[384358]: 2025-09-30 19:03:10.623527819 +0000 UTC m=+0.181322274 container remove 90a2756d30e8c2512c11007571e6e83b6eabb7c2650358f6e6ca44d1b065659d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_perlman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Sep 30 19:03:10 compute-0 systemd[1]: libpod-conmon-90a2756d30e8c2512c11007571e6e83b6eabb7c2650358f6e6ca44d1b065659d.scope: Deactivated successfully.
Sep 30 19:03:10 compute-0 podman[384400]: 2025-09-30 19:03:10.783209302 +0000 UTC m=+0.048769900 container create 8b75cb702a48bd9955bfa78323cc1f2a0d66879c07abc4026fb586731966126a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_feynman, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 19:03:10 compute-0 systemd[1]: Started libpod-conmon-8b75cb702a48bd9955bfa78323cc1f2a0d66879c07abc4026fb586731966126a.scope.
Sep 30 19:03:10 compute-0 systemd[1]: Started libcrun container.
Sep 30 19:03:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e6ecb49675d0d2513fd7504d3e4a952b92790d3785cbc9ec3283ef1668ab253/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 19:03:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e6ecb49675d0d2513fd7504d3e4a952b92790d3785cbc9ec3283ef1668ab253/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 19:03:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e6ecb49675d0d2513fd7504d3e4a952b92790d3785cbc9ec3283ef1668ab253/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 19:03:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e6ecb49675d0d2513fd7504d3e4a952b92790d3785cbc9ec3283ef1668ab253/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 19:03:10 compute-0 podman[384400]: 2025-09-30 19:03:10.8582722 +0000 UTC m=+0.123832818 container init 8b75cb702a48bd9955bfa78323cc1f2a0d66879c07abc4026fb586731966126a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 19:03:10 compute-0 podman[384400]: 2025-09-30 19:03:10.764068418 +0000 UTC m=+0.029629036 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 19:03:10 compute-0 podman[384400]: 2025-09-30 19:03:10.865518038 +0000 UTC m=+0.131078636 container start 8b75cb702a48bd9955bfa78323cc1f2a0d66879c07abc4026fb586731966126a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_feynman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 19:03:10 compute-0 podman[384400]: 2025-09-30 19:03:10.869292435 +0000 UTC m=+0.134853033 container attach 8b75cb702a48bd9955bfa78323cc1f2a0d66879c07abc4026fb586731966126a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_feynman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 19:03:11 compute-0 sleepy_feynman[384417]: {
Sep 30 19:03:11 compute-0 sleepy_feynman[384417]:     "0": [
Sep 30 19:03:11 compute-0 sleepy_feynman[384417]:         {
Sep 30 19:03:11 compute-0 sleepy_feynman[384417]:             "devices": [
Sep 30 19:03:11 compute-0 sleepy_feynman[384417]:                 "/dev/loop3"
Sep 30 19:03:11 compute-0 sleepy_feynman[384417]:             ],
Sep 30 19:03:11 compute-0 sleepy_feynman[384417]:             "lv_name": "ceph_lv0",
Sep 30 19:03:11 compute-0 sleepy_feynman[384417]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 19:03:11 compute-0 sleepy_feynman[384417]:             "lv_size": "21470642176",
Sep 30 19:03:11 compute-0 sleepy_feynman[384417]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=63d32c6a-fa18-54ed-8711-9a3915cc367b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a6bb176f-a2ce-4022-8226-399d42b79f3f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 19:03:11 compute-0 sleepy_feynman[384417]:             "lv_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 19:03:11 compute-0 sleepy_feynman[384417]:             "name": "ceph_lv0",
Sep 30 19:03:11 compute-0 sleepy_feynman[384417]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 19:03:11 compute-0 sleepy_feynman[384417]:             "tags": {
Sep 30 19:03:11 compute-0 sleepy_feynman[384417]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 19:03:11 compute-0 sleepy_feynman[384417]:                 "ceph.block_uuid": "TYRjq0-fmBF-kxts-Z7YC-MdDX-o8nm-7CedAv",
Sep 30 19:03:11 compute-0 sleepy_feynman[384417]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 19:03:11 compute-0 sleepy_feynman[384417]:                 "ceph.cluster_fsid": "63d32c6a-fa18-54ed-8711-9a3915cc367b",
Sep 30 19:03:11 compute-0 sleepy_feynman[384417]:                 "ceph.cluster_name": "ceph",
Sep 30 19:03:11 compute-0 sleepy_feynman[384417]:                 "ceph.crush_device_class": "",
Sep 30 19:03:11 compute-0 sleepy_feynman[384417]:                 "ceph.encrypted": "0",
Sep 30 19:03:11 compute-0 sleepy_feynman[384417]:                 "ceph.osd_fsid": "a6bb176f-a2ce-4022-8226-399d42b79f3f",
Sep 30 19:03:11 compute-0 sleepy_feynman[384417]:                 "ceph.osd_id": "0",
Sep 30 19:03:11 compute-0 sleepy_feynman[384417]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 19:03:11 compute-0 sleepy_feynman[384417]:                 "ceph.type": "block",
Sep 30 19:03:11 compute-0 sleepy_feynman[384417]:                 "ceph.vdo": "0",
Sep 30 19:03:11 compute-0 sleepy_feynman[384417]:                 "ceph.with_tpm": "0"
Sep 30 19:03:11 compute-0 sleepy_feynman[384417]:             },
Sep 30 19:03:11 compute-0 sleepy_feynman[384417]:             "type": "block",
Sep 30 19:03:11 compute-0 sleepy_feynman[384417]:             "vg_name": "ceph_vg0"
Sep 30 19:03:11 compute-0 sleepy_feynman[384417]:         }
Sep 30 19:03:11 compute-0 sleepy_feynman[384417]:     ]
Sep 30 19:03:11 compute-0 sleepy_feynman[384417]: }
Sep 30 19:03:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:03:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:03:11.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:03:11 compute-0 systemd[1]: libpod-8b75cb702a48bd9955bfa78323cc1f2a0d66879c07abc4026fb586731966126a.scope: Deactivated successfully.
Sep 30 19:03:11 compute-0 podman[384400]: 2025-09-30 19:03:11.136618498 +0000 UTC m=+0.402179096 container died 8b75cb702a48bd9955bfa78323cc1f2a0d66879c07abc4026fb586731966126a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_feynman, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 19:03:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e6ecb49675d0d2513fd7504d3e4a952b92790d3785cbc9ec3283ef1668ab253-merged.mount: Deactivated successfully.
Sep 30 19:03:11 compute-0 podman[384400]: 2025-09-30 19:03:11.188370024 +0000 UTC m=+0.453930642 container remove 8b75cb702a48bd9955bfa78323cc1f2a0d66879c07abc4026fb586731966126a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_feynman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 19:03:11 compute-0 systemd[1]: libpod-conmon-8b75cb702a48bd9955bfa78323cc1f2a0d66879c07abc4026fb586731966126a.scope: Deactivated successfully.
Sep 30 19:03:11 compute-0 sudo[384294]: pam_unix(sudo:session): session closed for user root
Sep 30 19:03:11 compute-0 sudo[384438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 19:03:11 compute-0 sudo[384438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:03:11 compute-0 sudo[384438]: pam_unix(sudo:session): session closed for user root
Sep 30 19:03:11 compute-0 sudo[384463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- raw list --format json
Sep 30 19:03:11 compute-0 sudo[384463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:03:11 compute-0 ceph-mon[73755]: pgmap v2554: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 534 B/s rd, 0 op/s
Sep 30 19:03:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:03:11 compute-0 podman[384530]: 2025-09-30 19:03:11.799917296 +0000 UTC m=+0.043605267 container create e664796fc952a44abde98d8942e65d72bbc38dbaa435fa858ea17eadcc7e71d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_moser, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Sep 30 19:03:11 compute-0 systemd[1]: Started libpod-conmon-e664796fc952a44abde98d8942e65d72bbc38dbaa435fa858ea17eadcc7e71d7.scope.
Sep 30 19:03:11 compute-0 podman[384530]: 2025-09-30 19:03:11.781415799 +0000 UTC m=+0.025103750 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 19:03:11 compute-0 systemd[1]: Started libcrun container.
Sep 30 19:03:11 compute-0 podman[384530]: 2025-09-30 19:03:11.905119673 +0000 UTC m=+0.148807704 container init e664796fc952a44abde98d8942e65d72bbc38dbaa435fa858ea17eadcc7e71d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Sep 30 19:03:11 compute-0 podman[384530]: 2025-09-30 19:03:11.914836604 +0000 UTC m=+0.158524575 container start e664796fc952a44abde98d8942e65d72bbc38dbaa435fa858ea17eadcc7e71d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 19:03:11 compute-0 podman[384530]: 2025-09-30 19:03:11.919294949 +0000 UTC m=+0.162982970 container attach e664796fc952a44abde98d8942e65d72bbc38dbaa435fa858ea17eadcc7e71d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_moser, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 19:03:11 compute-0 silly_moser[384547]: 167 167
Sep 30 19:03:11 compute-0 systemd[1]: libpod-e664796fc952a44abde98d8942e65d72bbc38dbaa435fa858ea17eadcc7e71d7.scope: Deactivated successfully.
Sep 30 19:03:11 compute-0 podman[384530]: 2025-09-30 19:03:11.925875639 +0000 UTC m=+0.169563590 container died e664796fc952a44abde98d8942e65d72bbc38dbaa435fa858ea17eadcc7e71d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Sep 30 19:03:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa82a0e5aaaccab10774dc1b0e90d1ec922dc8b82a3af3306559e1251f2f3f33-merged.mount: Deactivated successfully.
Sep 30 19:03:11 compute-0 podman[384530]: 2025-09-30 19:03:11.986665489 +0000 UTC m=+0.230353470 container remove e664796fc952a44abde98d8942e65d72bbc38dbaa435fa858ea17eadcc7e71d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_moser, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Sep 30 19:03:11 compute-0 systemd[1]: libpod-conmon-e664796fc952a44abde98d8942e65d72bbc38dbaa435fa858ea17eadcc7e71d7.scope: Deactivated successfully.
Sep 30 19:03:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:03:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:03:12.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:03:12 compute-0 podman[384572]: 2025-09-30 19:03:12.236998733 +0000 UTC m=+0.055831673 container create a226510a1ec350dd644f90ac13fa7cf774da263480ce3d961608c9af70ed0c14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_khayyam, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Sep 30 19:03:12 compute-0 systemd[1]: Started libpod-conmon-a226510a1ec350dd644f90ac13fa7cf774da263480ce3d961608c9af70ed0c14.scope.
Sep 30 19:03:12 compute-0 systemd[1]: Started libcrun container.
Sep 30 19:03:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9414ba78beaa66d3481abc12fd0d82590f7509651bec86e47a7e236e0a2734b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 19:03:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9414ba78beaa66d3481abc12fd0d82590f7509651bec86e47a7e236e0a2734b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 19:03:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9414ba78beaa66d3481abc12fd0d82590f7509651bec86e47a7e236e0a2734b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 19:03:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9414ba78beaa66d3481abc12fd0d82590f7509651bec86e47a7e236e0a2734b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 19:03:12 compute-0 podman[384572]: 2025-09-30 19:03:12.214398879 +0000 UTC m=+0.033231839 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 19:03:12 compute-0 podman[384572]: 2025-09-30 19:03:12.312486012 +0000 UTC m=+0.131318942 container init a226510a1ec350dd644f90ac13fa7cf774da263480ce3d961608c9af70ed0c14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 19:03:12 compute-0 podman[384572]: 2025-09-30 19:03:12.321647129 +0000 UTC m=+0.140480079 container start a226510a1ec350dd644f90ac13fa7cf774da263480ce3d961608c9af70ed0c14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Sep 30 19:03:12 compute-0 podman[384572]: 2025-09-30 19:03:12.325518509 +0000 UTC m=+0.144351449 container attach a226510a1ec350dd644f90ac13fa7cf774da263480ce3d961608c9af70ed0c14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_khayyam, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Sep 30 19:03:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2555: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 534 B/s rd, 0 op/s
Sep 30 19:03:13 compute-0 lvm[384663]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 19:03:13 compute-0 lvm[384663]: VG ceph_vg0 finished
Sep 30 19:03:13 compute-0 boring_khayyam[384588]: {}
Sep 30 19:03:13 compute-0 systemd[1]: libpod-a226510a1ec350dd644f90ac13fa7cf774da263480ce3d961608c9af70ed0c14.scope: Deactivated successfully.
Sep 30 19:03:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:13 compute-0 systemd[1]: libpod-a226510a1ec350dd644f90ac13fa7cf774da263480ce3d961608c9af70ed0c14.scope: Consumed 1.358s CPU time.
Sep 30 19:03:13 compute-0 podman[384572]: 2025-09-30 19:03:13.135998717 +0000 UTC m=+0.954831667 container died a226510a1ec350dd644f90ac13fa7cf774da263480ce3d961608c9af70ed0c14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid)
Sep 30 19:03:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 19:03:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:03:13.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 19:03:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-9414ba78beaa66d3481abc12fd0d82590f7509651bec86e47a7e236e0a2734b1-merged.mount: Deactivated successfully.
Sep 30 19:03:13 compute-0 podman[384572]: 2025-09-30 19:03:13.211075595 +0000 UTC m=+1.029908545 container remove a226510a1ec350dd644f90ac13fa7cf774da263480ce3d961608c9af70ed0c14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_khayyam, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 19:03:13 compute-0 systemd[1]: libpod-conmon-a226510a1ec350dd644f90ac13fa7cf774da263480ce3d961608c9af70ed0c14.scope: Deactivated successfully.
Sep 30 19:03:13 compute-0 sudo[384463]: pam_unix(sudo:session): session closed for user root
Sep 30 19:03:13 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 19:03:13 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 19:03:13 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 19:03:13 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 19:03:13 compute-0 sudo[384679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 19:03:13 compute-0 sudo[384679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:03:13 compute-0 sudo[384679]: pam_unix(sudo:session): session closed for user root
Sep 30 19:03:13 compute-0 nova_compute[265391]: 2025-09-30 19:03:13.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:03:13 compute-0 ceph-mon[73755]: pgmap v2555: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 534 B/s rd, 0 op/s
Sep 30 19:03:13 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 19:03:13 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 19:03:13 compute-0 nova_compute[265391]: 2025-09-30 19:03:13.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:03:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:03:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:03:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:03:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:03:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:03:14.013Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:03:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:03:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:03:14.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:03:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2556: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 801 B/s rd, 0 op/s
Sep 30 19:03:14 compute-0 nova_compute[265391]: 2025-09-30 19:03:14.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:03:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:03:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:03:15.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:03:15 compute-0 nova_compute[265391]: 2025-09-30 19:03:15.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:03:15 compute-0 nova_compute[265391]: 2025-09-30 19:03:15.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:03:15 compute-0 ceph-mon[73755]: pgmap v2556: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 801 B/s rd, 0 op/s
Sep 30 19:03:15 compute-0 nova_compute[265391]: 2025-09-30 19:03:15.956 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 19:03:15 compute-0 nova_compute[265391]: 2025-09-30 19:03:15.956 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 19:03:15 compute-0 nova_compute[265391]: 2025-09-30 19:03:15.957 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 19:03:15 compute-0 nova_compute[265391]: 2025-09-30 19:03:15.957 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 19:03:15 compute-0 nova_compute[265391]: 2025-09-30 19:03:15.957 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 19:03:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:03:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:03:16.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:03:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2557: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 534 B/s rd, 0 op/s
Sep 30 19:03:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 19:03:16 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2659476507' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 19:03:16 compute-0 nova_compute[265391]: 2025-09-30 19:03:16.426 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 19:03:16 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/4272262692' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 19:03:16 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2659476507' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 19:03:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:03:16 compute-0 nova_compute[265391]: 2025-09-30 19:03:16.567 2 WARNING nova.virt.libvirt.driver [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 19:03:16 compute-0 nova_compute[265391]: 2025-09-30 19:03:16.568 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 19:03:16 compute-0 nova_compute[265391]: 2025-09-30 19:03:16.585 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.017s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 19:03:16 compute-0 nova_compute[265391]: 2025-09-30 19:03:16.585 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4198MB free_disk=39.992183685302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Sep 30 19:03:16 compute-0 nova_compute[265391]: 2025-09-30 19:03:16.585 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 19:03:16 compute-0 nova_compute[265391]: 2025-09-30 19:03:16.586 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 19:03:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:03:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:03:17.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:03:17 compute-0 ceph-mon[73755]: pgmap v2557: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 534 B/s rd, 0 op/s
Sep 30 19:03:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:03:17.491Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:03:17 compute-0 nova_compute[265391]: 2025-09-30 19:03:17.723 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Sep 30 19:03:17 compute-0 nova_compute[265391]: 2025-09-30 19:03:17.723 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=39GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 19:03:16 up  2:06,  0 user,  load average: 0.28, 0.48, 0.64\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Sep 30 19:03:17 compute-0 nova_compute[265391]: 2025-09-30 19:03:17.741 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 19:03:18 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:18 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:03:18 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:03:18.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:03:18 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 19:03:18 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3878414782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 19:03:18 compute-0 nova_compute[265391]: 2025-09-30 19:03:18.172 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 19:03:18 compute-0 nova_compute[265391]: 2025-09-30 19:03:18.177 2 DEBUG nova.compute.provider_tree [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed in ProviderTree for provider: 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Sep 30 19:03:18 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2558: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 534 B/s rd, 0 op/s
Sep 30 19:03:18 compute-0 nova_compute[265391]: 2025-09-30 19:03:18.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:03:18 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3741651663' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 19:03:18 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3878414782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 19:03:18 compute-0 nova_compute[265391]: 2025-09-30 19:03:18.685 2 DEBUG nova.scheduler.client.report [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Inventory has not changed for provider 5403d2fc-3ca9-4ee2-946b-a15032cca0c2 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 39, 'reserved': 1, 'min_unit': 1, 'max_unit': 39, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Sep 30 19:03:18 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:19:03:18] "GET /metrics HTTP/1.1" 200 46746 "" "Prometheus/2.51.0"
Sep 30 19:03:18 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:19:03:18] "GET /metrics HTTP/1.1" 200 46746 "" "Prometheus/2.51.0"
Sep 30 19:03:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:03:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:03:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:18 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:03:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:19 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:03:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:03:19.014Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 19:03:19 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:03:19.014Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:03:19 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:19 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 19:03:19 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:03:19.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 19:03:19 compute-0 nova_compute[265391]: 2025-09-30 19:03:19.198 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Sep 30 19:03:19 compute-0 nova_compute[265391]: 2025-09-30 19:03:19.198 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.613s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 19:03:19 compute-0 ceph-mon[73755]: pgmap v2558: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 534 B/s rd, 0 op/s
Sep 30 19:03:19 compute-0 nova_compute[265391]: 2025-09-30 19:03:19.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:03:20 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:20 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:03:20 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:03:20.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:03:20 compute-0 nova_compute[265391]: 2025-09-30 19:03:20.194 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:03:20 compute-0 nova_compute[265391]: 2025-09-30 19:03:20.195 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:03:20 compute-0 nova_compute[265391]: 2025-09-30 19:03:20.195 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:03:20 compute-0 nova_compute[265391]: 2025-09-30 19:03:20.195 2 DEBUG nova.compute.manager [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Sep 30 19:03:20 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2559: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:03:20 compute-0 podman[384758]: 2025-09-30 19:03:20.520371419 +0000 UTC m=+0.060325408 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Sep 30 19:03:20 compute-0 podman[384757]: 2025-09-30 19:03:20.536129466 +0000 UTC m=+0.079737800 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Sep 30 19:03:20 compute-0 podman[384759]: 2025-09-30 19:03:20.550137748 +0000 UTC m=+0.082233615 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers)
Sep 30 19:03:21 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:21 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:03:21 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:03:21.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:03:21 compute-0 ceph-mon[73755]: pgmap v2559: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:03:21 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:03:22 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:22 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:03:22 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:03:22.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:03:22 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 19:03:22 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 19:03:22 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2560: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:03:22 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 19:03:23 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:23 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:03:23 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:03:23.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:03:23 compute-0 nova_compute[265391]: 2025-09-30 19:03:23.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:03:23 compute-0 ceph-mon[73755]: pgmap v2560: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:03:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:03:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:03:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:23 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:03:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:24 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:03:24 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:03:24.014Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:03:24 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:24 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:03:24 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:03:24.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:03:24 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2561: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:03:24 compute-0 nova_compute[265391]: 2025-09-30 19:03:24.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:03:25 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:25 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 19:03:25 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:03:25.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 19:03:25 compute-0 ceph-mon[73755]: pgmap v2561: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:03:26 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:26 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:03:26 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:03:26.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:03:26 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2562: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:03:26 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:03:27 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:27 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:03:27 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:03:27.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:03:27 compute-0 sudo[384824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 19:03:27 compute-0 sudo[384824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:03:27 compute-0 sudo[384824]: pam_unix(sudo:session): session closed for user root
Sep 30 19:03:27 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:03:27.492Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:03:27 compute-0 ceph-mon[73755]: pgmap v2562: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:03:28 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:28 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:03:28 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:03:28.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:03:28 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2563: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:03:28 compute-0 nova_compute[265391]: 2025-09-30 19:03:28.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:03:28 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:19:03:28] "GET /metrics HTTP/1.1" 200 46744 "" "Prometheus/2.51.0"
Sep 30 19:03:28 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:19:03:28] "GET /metrics HTTP/1.1" 200 46744 "" "Prometheus/2.51.0"
Sep 30 19:03:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:03:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:03:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:28 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:03:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:29 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:03:29 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:03:29.014Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:03:29 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:29 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:03:29 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:03:29.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:03:29 compute-0 ceph-mon[73755]: pgmap v2563: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:03:29 compute-0 nova_compute[265391]: 2025-09-30 19:03:29.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:03:29 compute-0 podman[276673]: time="2025-09-30T19:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 19:03:29 compute-0 podman[276673]: @ - - [30/Sep/2025:19:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 19:03:29 compute-0 podman[276673]: @ - - [30/Sep/2025:19:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10316 "" "Go-http-client/1.1"
Sep 30 19:03:30 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:30 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 19:03:30 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:03:30.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 19:03:30 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2564: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:03:31 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:31 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.002000052s ======
Sep 30 19:03:31 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:03:31.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Sep 30 19:03:31 compute-0 openstack_network_exporter[279566]: ERROR   19:03:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 19:03:31 compute-0 openstack_network_exporter[279566]: ERROR   19:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 19:03:31 compute-0 openstack_network_exporter[279566]: ERROR   19:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 19:03:31 compute-0 openstack_network_exporter[279566]: ERROR   19:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 19:03:31 compute-0 openstack_network_exporter[279566]: ERROR   19:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 19:03:31 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:03:31 compute-0 ceph-mon[73755]: pgmap v2564: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:03:32 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:32 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:03:32 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:03:32.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:03:32 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2565: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:03:33 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:33 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:03:33 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:03:33.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:03:33 compute-0 nova_compute[265391]: 2025-09-30 19:03:33.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:03:33 compute-0 ceph-mon[73755]: pgmap v2565: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:03:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:03:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:03:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:33 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:03:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:34 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:03:34 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:03:34.015Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:03:34 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:34 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 19:03:34 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:03:34.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 19:03:34 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2566: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:03:34 compute-0 nova_compute[265391]: 2025-09-30 19:03:34.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:03:35 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:35 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:03:35 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:03:35.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:03:35 compute-0 ceph-mon[73755]: pgmap v2566: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:03:36 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:36 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:03:36 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:03:36.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:03:36 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2567: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:03:36 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:03:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1763888224' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 19:03:36 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/1763888224' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 19:03:37 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:37 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:03:37 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:03:37.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:03:37 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 19:03:37 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 19:03:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 19:03:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 19:03:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 19:03:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 19:03:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 19:03:37 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 19:03:37 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:03:37.493Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:03:37 compute-0 podman[384859]: 2025-09-30 19:03:37.541325274 +0000 UTC m=+0.078567679 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250930, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Sep 30 19:03:37 compute-0 podman[384866]: 2025-09-30 19:03:37.558252912 +0000 UTC m=+0.072215526 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Sep 30 19:03:37 compute-0 podman[384860]: 2025-09-30 19:03:37.572153591 +0000 UTC m=+0.099194553 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Sep 30 19:03:37 compute-0 ceph-mon[73755]: pgmap v2567: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:03:37 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 19:03:38 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:38 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:03:38 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:03:38.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:03:38 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2568: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:03:38 compute-0 nova_compute[265391]: 2025-09-30 19:03:38.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:03:38 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:19:03:38] "GET /metrics HTTP/1.1" 200 46747 "" "Prometheus/2.51.0"
Sep 30 19:03:38 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:19:03:38] "GET /metrics HTTP/1.1" 200 46747 "" "Prometheus/2.51.0"
Sep 30 19:03:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:03:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:03:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:38 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:03:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:39 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:03:39 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:03:39.016Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:03:39 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:39 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:03:39 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:03:39.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:03:39 compute-0 nova_compute[265391]: 2025-09-30 19:03:39.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:03:39 compute-0 ceph-mon[73755]: pgmap v2568: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:03:40 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:40 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:03:40 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:03:40.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:03:40 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2569: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:03:41 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:41 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:03:41 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:03:41.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:03:41 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:03:41 compute-0 ceph-mon[73755]: pgmap v2569: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:03:42 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:42 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:03:42 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:03:42.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:03:42 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2570: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:03:43 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:43 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:03:43 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:03:43.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:03:43 compute-0 nova_compute[265391]: 2025-09-30 19:03:43.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:03:43 compute-0 ceph-mon[73755]: pgmap v2570: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:03:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:03:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:03:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:43 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:03:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:44 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:03:44 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:03:44.016Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:03:44 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:44 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:03:44 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:03:44.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:03:44 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2571: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:03:44 compute-0 nova_compute[265391]: 2025-09-30 19:03:44.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:03:45 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:45 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:03:45 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:03:45.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:03:45 compute-0 ceph-mon[73755]: pgmap v2571: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:03:45 compute-0 sshd-session[384934]: Accepted publickey for zuul from 192.168.122.10 port 53292 ssh2: ECDSA SHA256:COqAmbdBUMx+UyDa5XAop4Bi6qvwvYhJQ16rCseuHkQ
Sep 30 19:03:45 compute-0 systemd-logind[811]: New session 62 of user zuul.
Sep 30 19:03:45 compute-0 systemd[1]: Started Session 62 of User zuul.
Sep 30 19:03:45 compute-0 sshd-session[384934]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 19:03:46 compute-0 sudo[384939]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Sep 30 19:03:46 compute-0 sudo[384939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 19:03:46 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:46 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 19:03:46 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:03:46.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 19:03:46 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2572: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:03:46 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:03:47 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:47 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:03:47 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:03:47.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:03:47 compute-0 sudo[385021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 19:03:47 compute-0 sudo[385021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:03:47 compute-0 sudo[385021]: pam_unix(sudo:session): session closed for user root
Sep 30 19:03:47 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:03:47.494Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:03:47 compute-0 ceph-mon[73755]: pgmap v2572: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:03:48 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:48 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:03:48 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:03:48.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:03:48 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2573: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:03:48 compute-0 nova_compute[265391]: 2025-09-30 19:03:48.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:03:48 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19748 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:03:48 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:19:03:48] "GET /metrics HTTP/1.1" 200 46747 "" "Prometheus/2.51.0"
Sep 30 19:03:48 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:19:03:48] "GET /metrics HTTP/1.1" 200 46747 "" "Prometheus/2.51.0"
Sep 30 19:03:48 compute-0 ceph-mon[73755]: pgmap v2573: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:03:48 compute-0 ceph-mon[73755]: from='client.19748 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:03:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:03:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:03:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:48 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:03:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:49 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:03:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:03:49.017Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 19:03:49 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:03:49.017Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:03:49 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19752 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:03:49 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.28023 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:03:49 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:49 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:03:49 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:03:49.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:03:49 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "status"} v 0)
Sep 30 19:03:49 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2211727363' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Sep 30 19:03:49 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.28027 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:03:49 compute-0 nova_compute[265391]: 2025-09-30 19:03:49.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:03:50 compute-0 ceph-mon[73755]: from='client.19752 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:03:50 compute-0 ceph-mon[73755]: from='client.28023 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:03:50 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2211727363' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Sep 30 19:03:50 compute-0 ceph-mon[73755]: from='client.28027 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:03:50 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:50 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:03:50 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:03:50.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:03:50 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2574: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:03:51 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2120423508' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Sep 30 19:03:51 compute-0 ceph-mon[73755]: pgmap v2574: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:03:51 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:51 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 19:03:51 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:03:51.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 19:03:51 compute-0 podman[385274]: 2025-09-30 19:03:51.537297754 +0000 UTC m=+0.069039523 container health_status 6b16716cad24b72ce5b9db5d6094fc17c7ada2e554c9cf62821659fa45ddebe1 (image=38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 19:03:51 compute-0 podman[385275]: 2025-09-30 19:03:51.537428188 +0000 UTC m=+0.068601653 container health_status 88a10098d1abf1c275e660837859008eff74c58ebcc82531e2d74d56f87d5a20 (image=38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, container_name=iscsid, io.buildah.version=1.41.4)
Sep 30 19:03:51 compute-0 podman[385276]: 2025-09-30 19:03:51.546504232 +0000 UTC m=+0.062539436 container health_status dfca32d6caed59f7f9a35743e7f1134cee5fb6cfada338e860d7e153cbf87047 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, release=1755695350, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.)
Sep 30 19:03:51 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:03:52 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:52 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:03:52 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:03:52.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:03:52 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 19:03:52 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 19:03:52 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2575: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:03:52 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 19:03:52 compute-0 ovs-vsctl[385365]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Sep 30 19:03:53 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:53 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:03:53 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:03:53.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:03:53 compute-0 virtqemud[265263]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Sep 30 19:03:53 compute-0 virtqemud[265263]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Sep 30 19:03:53 compute-0 nova_compute[265391]: 2025-09-30 19:03:53.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:03:53 compute-0 virtqemud[265263]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Sep 30 19:03:53 compute-0 ceph-mon[73755]: pgmap v2575: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:03:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:03:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:03:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:53 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:03:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:54 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:03:54 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:03:54.017Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:03:54 compute-0 ceph-mds[97072]: mds.cephfs.compute-0.vrwlru asok_command: cache status {prefix=cache status} (starting...)
Sep 30 19:03:54 compute-0 lvm[385671]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 19:03:54 compute-0 lvm[385671]: VG ceph_vg0 finished
Sep 30 19:03:54 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:54 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:03:54 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:03:54.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:03:54 compute-0 ceph-mds[97072]: mds.cephfs.compute-0.vrwlru asok_command: client ls {prefix=client ls} (starting...)
Sep 30 19:03:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 19:03:54.368 166158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 19:03:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 19:03:54.369 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 19:03:54 compute-0 ovn_metadata_agent[166152]: 2025-09-30 19:03:54.369 166158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 19:03:54 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2576: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 766 B/s rd, 0 op/s
Sep 30 19:03:54 compute-0 nova_compute[265391]: 2025-09-30 19:03:54.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:03:54 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19768 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:03:54 compute-0 ceph-mds[97072]: mds.cephfs.compute-0.vrwlru asok_command: damage ls {prefix=damage ls} (starting...)
Sep 30 19:03:54 compute-0 ceph-mds[97072]: mds.cephfs.compute-0.vrwlru asok_command: dump loads {prefix=dump loads} (starting...)
Sep 30 19:03:54 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "report"} v 0)
Sep 30 19:03:54 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/535653558' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Sep 30 19:03:55 compute-0 ceph-mds[97072]: mds.cephfs.compute-0.vrwlru asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Sep 30 19:03:55 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:55 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:03:55 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:03:55.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:03:55 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19776 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:03:55 compute-0 ceph-mds[97072]: mds.cephfs.compute-0.vrwlru asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Sep 30 19:03:55 compute-0 ceph-mds[97072]: mds.cephfs.compute-0.vrwlru asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Sep 30 19:03:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 19:03:55 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1829078619' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 19:03:55 compute-0 ceph-mds[97072]: mds.cephfs.compute-0.vrwlru asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Sep 30 19:03:55 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19784 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:03:55 compute-0 ceph-mds[97072]: mds.cephfs.compute-0.vrwlru asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Sep 30 19:03:55 compute-0 ceph-mon[73755]: pgmap v2576: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 766 B/s rd, 0 op/s
Sep 30 19:03:55 compute-0 ceph-mon[73755]: from='client.19768 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:03:55 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/535653558' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Sep 30 19:03:55 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1829078619' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 19:03:55 compute-0 ceph-mds[97072]: mds.cephfs.compute-0.vrwlru asok_command: get subtrees {prefix=get subtrees} (starting...)
Sep 30 19:03:55 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config log"} v 0)
Sep 30 19:03:55 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3934291758' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Sep 30 19:03:55 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19792 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:03:56 compute-0 ceph-mds[97072]: mds.cephfs.compute-0.vrwlru asok_command: ops {prefix=ops} (starting...)
Sep 30 19:03:56 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:56 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:03:56 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:03:56.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:03:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config-key dump"} v 0)
Sep 30 19:03:56 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1028912149' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Sep 30 19:03:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Sep 30 19:03:56 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2666808499' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Sep 30 19:03:56 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2577: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:03:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:03:56 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19806 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:03:56 compute-0 ceph-mds[97072]: mds.cephfs.compute-0.vrwlru asok_command: session ls {prefix=session ls} (starting...)
Sep 30 19:03:56 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr dump"} v 0)
Sep 30 19:03:56 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1985771736' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Sep 30 19:03:56 compute-0 ceph-mds[97072]: mds.cephfs.compute-0.vrwlru asok_command: status {prefix=status} (starting...)
Sep 30 19:03:56 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.28039 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:03:56 compute-0 ceph-mon[73755]: from='client.19776 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:03:56 compute-0 ceph-mon[73755]: from='client.19784 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:03:56 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3934291758' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Sep 30 19:03:56 compute-0 ceph-mon[73755]: from='client.19792 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:03:56 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1028912149' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Sep 30 19:03:56 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2666808499' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Sep 30 19:03:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "report"} v 0)
Sep 30 19:03:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Sep 30 19:03:57 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19814 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:03:57 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:57 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:03:57 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:03:57.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:03:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Sep 30 19:03:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3598764719' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Sep 30 19:03:57 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.28049 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:03:57 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:03:57.496Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:03:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "features"} v 0)
Sep 30 19:03:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4105764110' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Sep 30 19:03:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 19:03:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2619271459' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 19:03:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 19:03:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2619271459' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 19:03:57 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Sep 30 19:03:57 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3990649454' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Sep 30 19:03:57 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.28057 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:03:57 compute-0 ceph-mon[73755]: pgmap v2577: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:03:57 compute-0 ceph-mon[73755]: from='client.19806 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:03:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1985771736' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Sep 30 19:03:57 compute-0 ceph-mon[73755]: from='client.28039 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:03:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/183509289' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Sep 30 19:03:57 compute-0 ceph-mon[73755]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Sep 30 19:03:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3598764719' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Sep 30 19:03:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/588273531' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 19:03:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/4105764110' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Sep 30 19:03:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2619271459' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 19:03:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.10:0/2619271459' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 19:03:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3990649454' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Sep 30 19:03:57 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/297771280' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Sep 30 19:03:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Sep 30 19:03:58 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1145171778' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Sep 30 19:03:58 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:58 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:03:58 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:03:58.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:03:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0)
Sep 30 19:03:58 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2693945981' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 19:03:58 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.28071 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:03:58 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2578: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:03:58 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19852 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:03:58 compute-0 ceph-mgr[74051]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Sep 30 19:03:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T19:03:58.484+0000 7faab05c1640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Sep 30 19:03:58 compute-0 nova_compute[265391]: 2025-09-30 19:03:58.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:03:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr stat"} v 0)
Sep 30 19:03:58 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3612254954' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Sep 30 19:03:58 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.28083 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:03:58 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:19:03:58] "GET /metrics HTTP/1.1" 200 46741 "" "Prometheus/2.51.0"
Sep 30 19:03:58 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:19:03:58] "GET /metrics HTTP/1.1" 200 46741 "" "Prometheus/2.51.0"
Sep 30 19:03:58 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Sep 30 19:03:58 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2731985441' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Sep 30 19:03:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:03:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:03:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:58 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:03:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:03:59 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:03:59 compute-0 ceph-mon[73755]: from='client.19814 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:03:59 compute-0 ceph-mon[73755]: from='client.28049 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:03:59 compute-0 ceph-mon[73755]: from='client.28057 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:03:59 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1145171778' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Sep 30 19:03:59 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2693945981' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 19:03:59 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/315444950' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Sep 30 19:03:59 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3612254954' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Sep 30 19:03:59 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2590978549' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Sep 30 19:03:59 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2731985441' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Sep 30 19:03:59 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:03:59.018Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 19:03:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr versions"} v 0)
Sep 30 19:03:59 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3405482568' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Sep 30 19:03:59 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.28091 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:03:59 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:03:59 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:03:59 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:03:59.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:03:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Sep 30 19:03:59 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3409006296' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Sep 30 19:03:59 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19874 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:03:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Sep 30 19:03:59 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1973545497' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Sep 30 19:03:59 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "features"} v 0)
Sep 30 19:03:59 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Sep 30 19:03:59 compute-0 nova_compute[265391]: 2025-09-30 19:03:59.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:03:59 compute-0 podman[276673]: time="2025-09-30T19:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Sep 30 19:03:59 compute-0 podman[276673]: @ - - [30/Sep/2025:19:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 41377 "" "Go-http-client/1.1"
Sep 30 19:03:59 compute-0 podman[276673]: @ - - [30/Sep/2025:19:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10315 "" "Go-http-client/1.1"
Sep 30 19:03:59 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19882 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:00 compute-0 ceph-mon[73755]: from='client.28071 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:00 compute-0 ceph-mon[73755]: pgmap v2578: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:04:00 compute-0 ceph-mon[73755]: from='client.19852 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:00 compute-0 ceph-mon[73755]: from='client.28083 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:00 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3405482568' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Sep 30 19:04:00 compute-0 ceph-mon[73755]: from='client.28091 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:00 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3830742536' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Sep 30 19:04:00 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3409006296' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Sep 30 19:04:00 compute-0 ceph-mon[73755]: from='client.19874 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:00 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1973545497' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Sep 30 19:04:00 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3405605842' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Sep 30 19:04:00 compute-0 ceph-mon[73755]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Sep 30 19:04:00 compute-0 ceph-mon[73755]: from='client.19882 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:00 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/268009520' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Sep 30 19:04:00 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2113007865' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Sep 30 19:04:00 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:04:00 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:04:00 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:04:00.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:04:00 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19890 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Sep 30 19:04:00 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/71466818' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Sep 30 19:04:00 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2579: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 766 B/s rd, 0 op/s
Sep 30 19:04:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0)
Sep 30 19:04:00 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3851135064' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 19:04:00 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.28115 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:00 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: 2025-09-30T19:04:00.415+0000 7faab05c1640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Sep 30 19:04:00 compute-0 ceph-mgr[74051]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:54.602277+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 64798720 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f659f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:55.602531+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 64798720 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1746780 data_alloc: 218103808 data_used: 1839104
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:56.602673+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 64798720 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:57.602831+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 64798720 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5551000/0x0/0x4ffc00000, data 0x19a7cf9/0x1a7b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:58.602963+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 64798720 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:31:59.603118+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6e72800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6e72800 session 0x5631b6968b40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 64798720 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:00.603253+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 64798720 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1746780 data_alloc: 218103808 data_used: 1839104
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:01.603391+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 64798720 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73de800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73de800 session 0x5631b639b680
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:02.603543+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176603136 unmapped: 64798720 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b5f66000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:03.603681+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f5c00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f5c00 session 0x5631b5f663c0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176635904 unmapped: 64765952 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f554f000/0x0/0x4ffc00000, data 0x19a7d2c/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6caa000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6e72800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:04.603821+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 176635904 unmapped: 64765952 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:05.603952+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 175554560 unmapped: 65847296 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1854215 data_alloc: 234881024 data_used: 17510400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f554f000/0x0/0x4ffc00000, data 0x19a7d2c/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:06.604142+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 178962432 unmapped: 62439424 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:07.604315+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 178962432 unmapped: 62439424 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:08.604523+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 178962432 unmapped: 62439424 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f554f000/0x0/0x4ffc00000, data 0x19a7d2c/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:09.604679+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 178962432 unmapped: 62439424 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:10.604980+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 178962432 unmapped: 62439424 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1854215 data_alloc: 234881024 data_used: 17510400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:11.605199+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 178962432 unmapped: 62439424 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:12.605449+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 178962432 unmapped: 62439424 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:13.605595+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 178962432 unmapped: 62439424 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f554f000/0x0/0x4ffc00000, data 0x19a7d2c/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:14.605917+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 178962432 unmapped: 62439424 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.179763794s of 21.302459717s, submitted: 51
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:15.606102+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185974784 unmapped: 55427072 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1932627 data_alloc: 234881024 data_used: 18014208
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:16.606235+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185122816 unmapped: 56279040 heap: 241401856 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73df000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73df000 session 0x5631b386e960
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73de000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73de000 session 0x5631b69d05a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55cc400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55cc400 session 0x5631b6bf25a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b592dc20
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f5c00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f5c00 session 0x5631b639ad20
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73de000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73de000 session 0x5631b3372000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73df000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73df000 session 0x5631b363ef00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b3364400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b3364400 session 0x5631b331c960
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b6c063c0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:17.606405+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4ad0000/0x0/0x4ffc00000, data 0x2423d65/0x24fb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x8c2f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186703872 unmapped: 58376192 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:18.606611+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186712064 unmapped: 58368000 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:19.606843+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186712064 unmapped: 58368000 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:20.607200+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186712064 unmapped: 58368000 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2101013 data_alloc: 234881024 data_used: 18665472
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:21.607356+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186712064 unmapped: 58368000 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:22.607536+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f5c00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f5c00 session 0x5631b53961e0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186712064 unmapped: 58368000 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f32f9000/0x0/0x4ffc00000, data 0x37ebd9e/0x38c3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:23.607709+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186712064 unmapped: 58368000 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:24.607918+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186712064 unmapped: 58368000 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73de000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73de000 session 0x5631b3362b40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:25.608114+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f32f9000/0x0/0x4ffc00000, data 0x37ebd9e/0x38c3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186712064 unmapped: 58368000 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2096613 data_alloc: 234881024 data_used: 18669568
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:26.608252+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73df000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73df000 session 0x5631b33734a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186712064 unmapped: 58368000 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.141558647s of 11.516283989s, submitted: 178
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0800 session 0x5631b6428960
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f5c00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:27.608378+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 186744832 unmapped: 58335232 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:28.608579+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192094208 unmapped: 52985856 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:29.608748+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201973760 unmapped: 43106304 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:30.608867+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201973760 unmapped: 43106304 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2237908 data_alloc: 251658240 data_used: 38023168
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:31.608992+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f32f7000/0x0/0x4ffc00000, data 0x37ebdd1/0x38c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201973760 unmapped: 43106304 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f32f7000/0x0/0x4ffc00000, data 0x37ebdd1/0x38c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:32.609190+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201973760 unmapped: 43106304 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:33.609316+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201973760 unmapped: 43106304 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:34.609447+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201973760 unmapped: 43106304 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:35.609704+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201998336 unmapped: 43081728 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2238540 data_alloc: 251658240 data_used: 38027264
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:36.609914+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201998336 unmapped: 43081728 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f32f4000/0x0/0x4ffc00000, data 0x37eedd1/0x38c8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:37.610132+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201998336 unmapped: 43081728 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.325328827s of 11.357563019s, submitted: 9
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:38.610290+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 207372288 unmapped: 37707776 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:39.610406+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:40.610581+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2067000/0x0/0x4ffc00000, data 0x4a73dd1/0x4b4d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2398254 data_alloc: 251658240 data_used: 40370176
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:41.610808+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:42.610989+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:43.611122+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:44.611293+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:45.611434+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204e000/0x0/0x4ffc00000, data 0x4a94dd1/0x4b6e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2392102 data_alloc: 251658240 data_used: 40374272
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:46.611550+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:47.611692+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204e000/0x0/0x4ffc00000, data 0x4a94dd1/0x4b6e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:48.611848+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:49.611972+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:50.612102+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204e000/0x0/0x4ffc00000, data 0x4a94dd1/0x4b6e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2392774 data_alloc: 251658240 data_used: 40374272
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:51.612245+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:52.612462+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:53.612577+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:54.612700+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:55.612901+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204e000/0x0/0x4ffc00000, data 0x4a94dd1/0x4b6e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:56.613025+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2392774 data_alloc: 251658240 data_used: 40374272
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:57.613177+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.077375412s of 19.483062744s, submitted: 198
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:58.613438+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:32:59.613622+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:00.613758+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:01.614086+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2392782 data_alloc: 251658240 data_used: 40374272
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:02.614227+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:03.614464+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:04.614598+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:05.614719+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:06.614826+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2392782 data_alloc: 251658240 data_used: 40374272
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:07.614966+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:08.615106+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:09.615294+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.534425735s of 12.544439316s, submitted: 2
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:10.615596+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 35651584 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:11.615740+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2392798 data_alloc: 251658240 data_used: 40370176
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209436672 unmapped: 35643392 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:12.615944+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209436672 unmapped: 35643392 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:13.616086+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209444864 unmapped: 35635200 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:14.616665+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209453056 unmapped: 35627008 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:15.617536+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209453056 unmapped: 35627008 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:16.617697+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2392798 data_alloc: 251658240 data_used: 40370176
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209453056 unmapped: 35627008 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:17.617846+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209453056 unmapped: 35627008 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:18.618002+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209453056 unmapped: 35627008 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:19.618139+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209469440 unmapped: 35610624 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73de000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73df000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:20.618848+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209469440 unmapped: 35610624 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:21.618998+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.372642517s of 11.383035660s, submitted: 4
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6caa000 session 0x5631b526e960
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2393030 data_alloc: 251658240 data_used: 40378368
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6e72800 session 0x5631b526f0e0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209469440 unmapped: 35610624 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:22.619656+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209477632 unmapped: 35602432 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:23.620179+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209477632 unmapped: 35602432 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:24.620613+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209477632 unmapped: 35602432 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _send_mon_message to mon.compute-1 at v2:192.168.122.101:3300/0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:25.620738+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209477632 unmapped: 35602432 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:26.621065+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2393030 data_alloc: 251658240 data_used: 40378368
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209477632 unmapped: 35602432 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:27.621361+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209477632 unmapped: 35602432 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:28.621587+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209477632 unmapped: 35602432 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:29.621735+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209494016 unmapped: 35586048 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:30.621906+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209494016 unmapped: 35586048 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:31.622119+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2393030 data_alloc: 251658240 data_used: 40378368
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209494016 unmapped: 35586048 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:32.622305+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209494016 unmapped: 35586048 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:33.622415+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209494016 unmapped: 35586048 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:34.622683+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209494016 unmapped: 35586048 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:35.622805+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209502208 unmapped: 35577856 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:36.622944+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2393030 data_alloc: 251658240 data_used: 40378368
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209510400 unmapped: 35569664 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:37.623061+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209510400 unmapped: 35569664 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:38.623135+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209518592 unmapped: 35561472 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:39.623256+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209518592 unmapped: 35561472 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:40.623424+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.331880569s of 19.336585999s, submitted: 1
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209534976 unmapped: 35545088 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:41.623619+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2394862 data_alloc: 251658240 data_used: 40366080
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209534976 unmapped: 35545088 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:42.623857+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209534976 unmapped: 35545088 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:43.623993+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209543168 unmapped: 35536896 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:44.624171+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209543168 unmapped: 35536896 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:45.624320+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209551360 unmapped: 35528704 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:46.624454+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2394862 data_alloc: 251658240 data_used: 40366080
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209551360 unmapped: 35528704 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:47.624634+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209559552 unmapped: 35520512 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:48.624763+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b3670d20
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f5c00 session 0x5631b386f0e0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209567744 unmapped: 35512320 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:49.624885+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209567744 unmapped: 35512320 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:50.625021+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209567744 unmapped: 35512320 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:51.625221+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2392498 data_alloc: 251658240 data_used: 40374272
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209567744 unmapped: 35512320 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:52.625367+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209567744 unmapped: 35512320 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:53.625474+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209584128 unmapped: 35495936 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:54.625623+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209584128 unmapped: 35495936 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:55.625746+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209584128 unmapped: 35495936 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:56.625883+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2392498 data_alloc: 251658240 data_used: 40374272
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209584128 unmapped: 35495936 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:57.626032+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209584128 unmapped: 35495936 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:58.626149+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209584128 unmapped: 35495936 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:33:59.626303+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209584128 unmapped: 35495936 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:00.626434+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209584128 unmapped: 35495936 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:01.626596+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2392498 data_alloc: 251658240 data_used: 40374272
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209592320 unmapped: 35487744 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:02.626753+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209592320 unmapped: 35487744 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f204b000/0x0/0x4ffc00000, data 0x4a97dd1/0x4b71000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:03.626843+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209600512 unmapped: 35479552 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.428400040s of 23.454566956s, submitted: 19
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0800 session 0x5631b3362000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0400 session 0x5631b644da40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:04.626973+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209633280 unmapped: 35446784 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:05.627130+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b5771e00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 197443584 unmapped: 47636480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:06.627307+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1965385 data_alloc: 234881024 data_used: 17801216
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 197443584 unmapped: 47636480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4697000/0x0/0x4ffc00000, data 0x2447d2c/0x251d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:07.627505+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 197443584 unmapped: 47636480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:08.627671+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 197443584 unmapped: 47636480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:09.627865+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 197443584 unmapped: 47636480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:10.628014+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 197443584 unmapped: 47636480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73de000 session 0x5631b592cb40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73df000 session 0x5631b6198960
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:11.628208+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4697000/0x0/0x4ffc00000, data 0x2447d2c/0x251d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1965385 data_alloc: 234881024 data_used: 17801216
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f5c00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 197443584 unmapped: 47636480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:12.628383+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f5c00 session 0x5631b6b52960
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:13.628654+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618a000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:14.628782+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:15.628924+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:16.629059+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1660288 data_alloc: 218103808 data_used: 1839104
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:17.629176+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:18.629330+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618a000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:19.629487+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618a000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:20.629594+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618a000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:21.629753+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1660288 data_alloc: 218103808 data_used: 1839104
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:22.629943+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:23.630071+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:24.630287+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:25.630437+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:26.630561+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618a000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1660288 data_alloc: 218103808 data_used: 1839104
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618a000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:27.630693+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:28.630824+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:29.630998+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:30.631167+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 60375040 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b6968960
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73de000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:31.631294+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73de000 session 0x5631b592de00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73df000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73df000 session 0x5631b6921a40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0400 session 0x5631b3372d20
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6caa000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.032833099s of 27.336183548s, submitted: 106
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6caa000 session 0x5631b6bf2d20
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1749474 data_alloc: 218103808 data_used: 1839104
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b6b52960
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73de000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73de000 session 0x5631b3362000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73df000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73df000 session 0x5631b526e960
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0400 session 0x5631b33734a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185155584 unmapped: 59924480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:32.631453+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5782000/0x0/0x4ffc00000, data 0x1365d09/0x143a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185155584 unmapped: 59924480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:33.631619+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185155584 unmapped: 59924480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:34.631723+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185155584 unmapped: 59924480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:35.631831+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185155584 unmapped: 59924480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:36.632054+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1749490 data_alloc: 218103808 data_used: 1839104
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5782000/0x0/0x4ffc00000, data 0x1365d09/0x143a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185155584 unmapped: 59924480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:37.632172+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185155584 unmapped: 59924480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:38.632333+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6e72800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6e72800 session 0x5631b53961e0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185155584 unmapped: 59924480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:39.632528+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5782000/0x0/0x4ffc00000, data 0x1365d09/0x143a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185155584 unmapped: 59924480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:40.632655+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b331c960
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185155584 unmapped: 59924480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:41.632774+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1749490 data_alloc: 218103808 data_used: 1839104
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5782000/0x0/0x4ffc00000, data 0x1365d09/0x143a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185155584 unmapped: 59924480 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:42.632989+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73de000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73de000 session 0x5631b363ef00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73df000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.996141434s of 11.116294861s, submitted: 52
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73df000 session 0x5631b3372000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185073664 unmapped: 60006400 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:43.633139+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185090048 unmapped: 59990016 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:44.633274+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 183861248 unmapped: 61218816 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:45.633398+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 183861248 unmapped: 61218816 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:46.633514+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1815118 data_alloc: 218103808 data_used: 10960896
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5780000/0x0/0x4ffc00000, data 0x1365d3c/0x143c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 183861248 unmapped: 61218816 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:47.633660+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5780000/0x0/0x4ffc00000, data 0x1365d3c/0x143c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 183861248 unmapped: 61218816 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:48.633857+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 183861248 unmapped: 61218816 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:49.634002+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 183861248 unmapped: 61218816 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:50.634126+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 183861248 unmapped: 61218816 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:51.634215+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1815118 data_alloc: 218103808 data_used: 10960896
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 183861248 unmapped: 61218816 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:52.634396+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 183861248 unmapped: 61218816 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5780000/0x0/0x4ffc00000, data 0x1365d3c/0x143c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:53.634533+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 183861248 unmapped: 61218816 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:54.634671+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5780000/0x0/0x4ffc00000, data 0x1365d3c/0x143c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5780000/0x0/0x4ffc00000, data 0x1365d3c/0x143c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.228917122s of 12.266461372s, submitted: 13
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194617344 unmapped: 50462720 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:55.634801+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 190840832 unmapped: 54239232 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:56.634981+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1922448 data_alloc: 234881024 data_used: 11800576
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0000 session 0x5631b6b530e0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb1400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb1400 session 0x5631b63a8780
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b363eb40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73de000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73de000 session 0x5631b6968960
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 190906368 unmapped: 54173696 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73df000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:57.635098+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73df000 session 0x5631b3670d20
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0000 session 0x5631b63a8000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192290816 unmapped: 52789248 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f1000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f1000 session 0x5631b5770000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:58.635236+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b5f62d20
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73de000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73de000 session 0x5631b526fe00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f40f5000/0x0/0x4ffc00000, data 0x29ecd75/0x2ac5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192241664 unmapped: 52838400 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:34:59.635403+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f40a4000/0x0/0x4ffc00000, data 0x2a3fdae/0x2b18000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192241664 unmapped: 52838400 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:00.635516+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192241664 unmapped: 52838400 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:01.635641+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2024138 data_alloc: 234881024 data_used: 12627968
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192241664 unmapped: 52838400 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:02.635852+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192241664 unmapped: 52838400 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:03.636016+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192249856 unmapped: 52830208 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73df000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:04.636147+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73df000 session 0x5631b592dc20
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192249856 unmapped: 52830208 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:05.636566+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4080000/0x0/0x4ffc00000, data 0x2a63dae/0x2b3c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192249856 unmapped: 52830208 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:06.636680+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2024850 data_alloc: 234881024 data_used: 12627968
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0000 session 0x5631b33692c0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192249856 unmapped: 52830208 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:07.636785+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4080000/0x0/0x4ffc00000, data 0x2a63dae/0x2b3c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192249856 unmapped: 52830208 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:08.636997+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b58d7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b58d7000 session 0x5631b65e0960
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.912911415s of 13.777595520s, submitted: 216
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b636e780
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192258048 unmapped: 52822016 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:09.637110+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f407f000/0x0/0x4ffc00000, data 0x2a63dbe/0x2b3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73de000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192258048 unmapped: 52822016 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:10.637225+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73df000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f407f000/0x0/0x4ffc00000, data 0x2a63dbe/0x2b3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192266240 unmapped: 52813824 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:11.637425+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2068033 data_alloc: 234881024 data_used: 18526208
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193249280 unmapped: 51830784 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:12.637591+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193249280 unmapped: 51830784 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:13.637703+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193249280 unmapped: 51830784 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:14.637808+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193249280 unmapped: 51830784 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:15.637939+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193249280 unmapped: 51830784 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:16.638089+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f407f000/0x0/0x4ffc00000, data 0x2a63dbe/0x2b3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2082697 data_alloc: 234881024 data_used: 20631552
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f407f000/0x0/0x4ffc00000, data 0x2a63dbe/0x2b3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:17.638265+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193249280 unmapped: 51830784 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:18.638407+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193249280 unmapped: 51830784 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:19.638552+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193249280 unmapped: 51830784 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:20.638696+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193249280 unmapped: 51830784 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:21.638820+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193249280 unmapped: 51830784 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.899151802s of 12.938499451s, submitted: 11
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2203335 data_alloc: 234881024 data_used: 20676608
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:22.638974+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199327744 unmapped: 45752320 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f30eb000/0x0/0x4ffc00000, data 0x39efdbe/0x3ac9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:23.639186+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199262208 unmapped: 45817856 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:24.639325+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199262208 unmapped: 45817856 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:25.639485+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199262208 unmapped: 45817856 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:26.639660+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199262208 unmapped: 45817856 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2207655 data_alloc: 234881024 data_used: 20905984
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:27.639788+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199262208 unmapped: 45817856 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3051000/0x0/0x4ffc00000, data 0x3a91dbe/0x3b6b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:28.639956+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199262208 unmapped: 45817856 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:29.640094+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199278592 unmapped: 45801472 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:30.640205+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199278592 unmapped: 45801472 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:31.640408+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199278592 unmapped: 45801472 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f302b000/0x0/0x4ffc00000, data 0x3ab6dbe/0x3b90000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2209631 data_alloc: 234881024 data_used: 20905984
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:32.640577+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199278592 unmapped: 45801472 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f302b000/0x0/0x4ffc00000, data 0x3ab6dbe/0x3b90000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:33.640657+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199278592 unmapped: 45801472 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.130858421s of 12.511350632s, submitted: 149
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:34.640810+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199278592 unmapped: 45801472 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:35.640896+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199278592 unmapped: 45801472 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:36.641009+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199278592 unmapped: 45801472 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2208935 data_alloc: 234881024 data_used: 20905984
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:37.641134+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199278592 unmapped: 45801472 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3029000/0x0/0x4ffc00000, data 0x3ab9dbe/0x3b93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:38.641244+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199278592 unmapped: 45801472 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:39.641369+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199278592 unmapped: 45801472 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3029000/0x0/0x4ffc00000, data 0x3ab9dbe/0x3b93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:40.641491+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199278592 unmapped: 45801472 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:41.641646+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199278592 unmapped: 45801472 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2208991 data_alloc: 234881024 data_used: 20905984
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:42.641826+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199278592 unmapped: 45801472 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:43.642022+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199278592 unmapped: 45801472 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.078755379s of 10.091617584s, submitted: 4
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:44.642289+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199286784 unmapped: 45793280 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:45.642422+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199286784 unmapped: 45793280 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3026000/0x0/0x4ffc00000, data 0x3abcdbe/0x3b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:46.642621+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199286784 unmapped: 45793280 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2211159 data_alloc: 234881024 data_used: 20893696
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:47.642781+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199286784 unmapped: 45793280 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:48.642917+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199286784 unmapped: 45793280 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3026000/0x0/0x4ffc00000, data 0x3abcdbe/0x3b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:49.643076+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199286784 unmapped: 45793280 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:50.643213+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199286784 unmapped: 45793280 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:51.643378+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199286784 unmapped: 45793280 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2211159 data_alloc: 234881024 data_used: 20893696
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:52.643532+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199294976 unmapped: 45785088 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:53.643647+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199303168 unmapped: 45776896 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:54.644137+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3026000/0x0/0x4ffc00000, data 0x3abcdbe/0x3b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199303168 unmapped: 45776896 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:55.644916+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199303168 unmapped: 45776896 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:56.645106+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199303168 unmapped: 45776896 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2209311 data_alloc: 234881024 data_used: 20893696
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:57.645366+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.115762711s of 13.137649536s, submitted: 17
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199311360 unmapped: 45768704 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:58.645644+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3026000/0x0/0x4ffc00000, data 0x3abcdbe/0x3b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199311360 unmapped: 45768704 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73de000 session 0x5631b636ef00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73df000 session 0x5631b6bf30e0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:35:59.646013+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199319552 unmapped: 45760512 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:00.646281+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199319552 unmapped: 45760512 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:01.646442+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199319552 unmapped: 45760512 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2208947 data_alloc: 234881024 data_used: 20901888
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:02.646655+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199327744 unmapped: 45752320 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:03.646890+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199327744 unmapped: 45752320 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:04.647193+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3026000/0x0/0x4ffc00000, data 0x3abcdbe/0x3b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199327744 unmapped: 45752320 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:05.647534+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199327744 unmapped: 45752320 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3026000/0x0/0x4ffc00000, data 0x3abcdbe/0x3b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:06.647790+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199327744 unmapped: 45752320 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2208947 data_alloc: 234881024 data_used: 20901888
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:07.647953+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199327744 unmapped: 45752320 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:08.648378+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199344128 unmapped: 45735936 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.913313866s of 11.937891960s, submitted: 6
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:09.648594+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199344128 unmapped: 45735936 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3026000/0x0/0x4ffc00000, data 0x3abcdbe/0x3b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:10.649017+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199352320 unmapped: 45727744 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:11.649273+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199352320 unmapped: 45727744 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2209111 data_alloc: 234881024 data_used: 20901888
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:12.649485+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199352320 unmapped: 45727744 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:13.649778+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199352320 unmapped: 45727744 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:14.649932+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199352320 unmapped: 45727744 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:15.650097+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199352320 unmapped: 45727744 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3026000/0x0/0x4ffc00000, data 0x3abcdbe/0x3b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:16.650249+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199368704 unmapped: 45711360 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2209111 data_alloc: 234881024 data_used: 20901888
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3026000/0x0/0x4ffc00000, data 0x3abcdbe/0x3b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:17.650585+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199368704 unmapped: 45711360 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:18.650855+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199376896 unmapped: 45703168 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3026000/0x0/0x4ffc00000, data 0x3abcdbe/0x3b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:19.651079+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199376896 unmapped: 45703168 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:20.651289+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199376896 unmapped: 45703168 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:21.651517+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199376896 unmapped: 45703168 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2209111 data_alloc: 234881024 data_used: 20901888
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:22.651752+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3026000/0x0/0x4ffc00000, data 0x3abcdbe/0x3b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199376896 unmapped: 45703168 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:23.651894+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199376896 unmapped: 45703168 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:24.652139+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199385088 unmapped: 45694976 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:25.652373+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199385088 unmapped: 45694976 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:26.652558+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3026000/0x0/0x4ffc00000, data 0x3abcdbe/0x3b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.307643890s of 17.318552017s, submitted: 3
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199401472 unmapped: 45678592 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b665d800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2209679 data_alloc: 234881024 data_used: 20910080
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:27.652673+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199401472 unmapped: 45678592 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:28.652868+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0400 session 0x5631b386f0e0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0800 session 0x5631b636fc20
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199417856 unmapped: 45662208 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:29.653772+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199417856 unmapped: 45662208 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:30.654211+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199417856 unmapped: 45662208 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:31.654871+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199417856 unmapped: 45662208 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2209251 data_alloc: 234881024 data_used: 20910080
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:32.656097+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3026000/0x0/0x4ffc00000, data 0x3abcdbe/0x3b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199426048 unmapped: 45654016 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:33.657383+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199426048 unmapped: 45654016 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:34.657790+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199434240 unmapped: 45645824 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:35.658867+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199434240 unmapped: 45645824 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:36.659777+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199434240 unmapped: 45645824 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2209251 data_alloc: 234881024 data_used: 20910080
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:37.660524+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3026000/0x0/0x4ffc00000, data 0x3abcdbe/0x3b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199434240 unmapped: 45645824 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:38.661222+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199434240 unmapped: 45645824 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:39.661848+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199434240 unmapped: 45645824 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:40.662402+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199434240 unmapped: 45645824 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3026000/0x0/0x4ffc00000, data 0x3abcdbe/0x3b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:41.662734+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199434240 unmapped: 45645824 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2209251 data_alloc: 234881024 data_used: 20910080
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:42.663092+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199442432 unmapped: 45637632 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.144123077s of 16.171749115s, submitted: 8
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0000 session 0x5631b402ab40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199400 session 0x5631b3379680
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3026000/0x0/0x4ffc00000, data 0x3abcdbe/0x3b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:43.663240+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3026000/0x0/0x4ffc00000, data 0x3abcdbe/0x3b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 199450624 unmapped: 45629440 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b3369e00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:44.664028+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192577536 unmapped: 52502528 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:45.664775+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192577536 unmapped: 52502528 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4916000/0x0/0x4ffc00000, data 0x2168d3c/0x223f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:46.664949+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192577536 unmapped: 52502528 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1963754 data_alloc: 234881024 data_used: 12619776
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:47.665320+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192577536 unmapped: 52502528 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:48.665881+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192577536 unmapped: 52502528 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:49.666461+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192577536 unmapped: 52502528 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:50.666835+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4916000/0x0/0x4ffc00000, data 0x2168d3c/0x223f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192577536 unmapped: 52502528 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:51.667278+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192577536 unmapped: 52502528 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1963754 data_alloc: 234881024 data_used: 12619776
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:52.667566+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199800 session 0x5631b6c074a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b665d800 session 0x5631b69d0f00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192577536 unmapped: 52502528 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:53.667852+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.496988297s of 10.720852852s, submitted: 70
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185229312 unmapped: 59850752 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b6428780
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:54.668009+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:55.668142+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:56.668274+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1702789 data_alloc: 218103808 data_used: 1839104
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:57.668406+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:58.668517+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:36:59.668774+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:00.668918+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:01.669116+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1702789 data_alloc: 218103808 data_used: 1839104
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:02.669434+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:03.669553+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:04.669757+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:05.669955+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:06.670085+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1702789 data_alloc: 218103808 data_used: 1839104
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:07.670216+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:08.670454+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:09.670810+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:10.671000+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:11.671202+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1702789 data_alloc: 218103808 data_used: 1839104
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:12.671414+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:13.671583+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:14.671743+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:15.671896+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:16.672131+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1702789 data_alloc: 218103808 data_used: 1839104
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:17.672279+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:18.672541+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:19.672740+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:20.672899+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:21.673073+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1702789 data_alloc: 218103808 data_used: 1839104
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:22.673279+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:23.673507+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:24.673720+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:25.673922+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:26.674136+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1702789 data_alloc: 218103808 data_used: 1839104
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:27.674320+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:28.674544+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:29.674753+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:30.674911+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:31.675055+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1702789 data_alloc: 218103808 data_used: 1839104
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:32.675203+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:33.675369+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:34.675496+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:35.675673+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:36.675858+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:37.676034+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1702789 data_alloc: 218103808 data_used: 1839104
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:38.676223+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 59826176 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:39.676357+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 45.602958679s of 45.905929565s, submitted: 76
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184254464 unmapped: 60825600 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199400 session 0x5631b636e5a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199800 session 0x5631b33692c0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0000 session 0x5631b526fe00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73de000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73de000 session 0x5631b5f62d20
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:40.676539+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b5770000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f618f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184254464 unmapped: 60825600 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:41.676791+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f584c000/0x0/0x4ffc00000, data 0x129dc97/0x1370000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184254464 unmapped: 60825600 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:42.677744+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1781801 data_alloc: 218103808 data_used: 1839104
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184254464 unmapped: 60825600 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:43.677900+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184254464 unmapped: 60825600 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199400 session 0x5631b63a8000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:44.678106+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184254464 unmapped: 60825600 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:45.678300+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184254464 unmapped: 60825600 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:46.678491+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199800 session 0x5631b3670d20
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184254464 unmapped: 60825600 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f584c000/0x0/0x4ffc00000, data 0x129dc97/0x1370000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:47.678697+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1781801 data_alloc: 218103808 data_used: 1839104
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0000 session 0x5631b6968960
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73df000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184139776 unmapped: 60940288 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:48.678848+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73df000 session 0x5631b363eb40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184467456 unmapped: 60612608 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:49.678971+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 184467456 unmapped: 60612608 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:50.679208+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185196544 unmapped: 59883520 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5825000/0x0/0x4ffc00000, data 0x12c4c97/0x1397000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:51.679493+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5825000/0x0/0x4ffc00000, data 0x12c4c97/0x1397000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185196544 unmapped: 59883520 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:52.679722+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1848846 data_alloc: 234881024 data_used: 11169792
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185196544 unmapped: 59883520 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:53.679889+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5825000/0x0/0x4ffc00000, data 0x12c4c97/0x1397000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185196544 unmapped: 59883520 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:54.680053+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185196544 unmapped: 59883520 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5825000/0x0/0x4ffc00000, data 0x12c4c97/0x1397000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:55.680224+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185196544 unmapped: 59883520 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5825000/0x0/0x4ffc00000, data 0x12c4c97/0x1397000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:56.680435+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185196544 unmapped: 59883520 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:57.680826+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1848846 data_alloc: 234881024 data_used: 11169792
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 22K writes, 81K keys, 22K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 22K writes, 7463 syncs, 2.98 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4034 writes, 15K keys, 4034 commit groups, 1.0 writes per commit group, ingest: 17.63 MB, 0.03 MB/s
                                           Interval WAL: 4034 writes, 1593 syncs, 2.53 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185196544 unmapped: 59883520 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:58.681005+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5825000/0x0/0x4ffc00000, data 0x12c4c97/0x1397000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185196544 unmapped: 59883520 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:37:59.681164+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 185196544 unmapped: 59883520 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.734361649s of 20.849309921s, submitted: 29
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:00.681361+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 192380928 unmapped: 52699136 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:01.681453+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194084864 unmapped: 50995200 heap: 245080064 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199800 session 0x5631b65e14a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0000 session 0x5631b35505a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0400 session 0x5631b6b52960
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:02.681572+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1970704 data_alloc: 234881024 data_used: 13291520
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5586c00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5586c00 session 0x5631b65e0960
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5586400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4b40000/0x0/0x4ffc00000, data 0x1f9bc97/0x206e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5586400 session 0x5631b6b53680
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199800 session 0x5631b639ba40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194519040 unmapped: 54239232 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5586c00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5586c00 session 0x5631b331c000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0000 session 0x5631b592d860
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:03.681692+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0400 session 0x5631b6c061e0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194527232 unmapped: 54231040 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:04.681839+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194527232 unmapped: 54231040 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:05.682016+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f7800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f7800 session 0x5631b69d03c0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194527232 unmapped: 54231040 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:06.682191+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194527232 unmapped: 54231040 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:07.682436+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2128715 data_alloc: 234881024 data_used: 13291520
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3644000/0x0/0x4ffc00000, data 0x34a5c97/0x3578000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194650112 unmapped: 54108160 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:08.682604+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199800 session 0x5631b5f65c20
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194650112 unmapped: 54108160 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:09.682784+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5586c00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5586c00 session 0x5631b64283c0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0000 session 0x5631b38703c0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194658304 unmapped: 54099968 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:10.683039+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.757110596s of 10.213796616s, submitted: 184
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fa000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194666496 unmapped: 54091776 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:11.683226+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 195723264 unmapped: 53035008 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:12.683429+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2205903 data_alloc: 234881024 data_used: 24137728
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205684736 unmapped: 43073536 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:13.683640+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3623000/0x0/0x4ffc00000, data 0x34c4cca/0x3599000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205750272 unmapped: 43008000 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:14.683812+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205750272 unmapped: 43008000 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:15.683975+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205750272 unmapped: 43008000 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:16.684108+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205750272 unmapped: 43008000 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:17.684253+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2269247 data_alloc: 251658240 data_used: 30818304
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205750272 unmapped: 43008000 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:18.684407+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3619000/0x0/0x4ffc00000, data 0x34cecca/0x35a3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205750272 unmapped: 43008000 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:19.684559+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205750272 unmapped: 43008000 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:20.684695+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.221982002s of 10.237133026s, submitted: 4
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205758464 unmapped: 42999808 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:21.684890+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205758464 unmapped: 42999808 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:22.685073+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2294221 data_alloc: 251658240 data_used: 30855168
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2481000/0x0/0x4ffc00000, data 0x4658cca/0x472d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212967424 unmapped: 35790848 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:23.685204+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:24.685442+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:25.685625+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:26.686473+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _send_mon_message to mon.compute-1 at v2:192.168.122.101:3300/0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:27.686665+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2419059 data_alloc: 251658240 data_used: 32759808
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2463000/0x0/0x4ffc00000, data 0x467bcca/0x4750000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:28.686869+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:29.687026+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2463000/0x0/0x4ffc00000, data 0x467bcca/0x4750000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:30.687165+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:31.687297+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:32.687515+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2419075 data_alloc: 251658240 data_used: 32759808
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.701073647s of 12.032704353s, submitted: 164
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:33.687696+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f246a000/0x0/0x4ffc00000, data 0x467ccca/0x4751000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:34.687897+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:35.688011+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f246a000/0x0/0x4ffc00000, data 0x467ccca/0x4751000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:36.688164+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:37.688317+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2413787 data_alloc: 251658240 data_used: 32759808
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:38.688489+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:39.688696+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:40.688856+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f246a000/0x0/0x4ffc00000, data 0x467ccca/0x4751000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:41.689123+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:42.689456+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2413787 data_alloc: 251658240 data_used: 32759808
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:43.689643+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:44.689840+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:45.689995+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f246a000/0x0/0x4ffc00000, data 0x467ccca/0x4751000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:46.690163+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212426752 unmapped: 36331520 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:47.690462+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2413787 data_alloc: 251658240 data_used: 32759808
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.345106125s of 15.359903336s, submitted: 5
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212434944 unmapped: 36323328 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:48.690628+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212434944 unmapped: 36323328 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:49.690783+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212434944 unmapped: 36323328 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:50.690974+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212434944 unmapped: 36323328 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:51.691179+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f246b000/0x0/0x4ffc00000, data 0x467ccca/0x4751000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212434944 unmapped: 36323328 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:52.691393+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2415451 data_alloc: 251658240 data_used: 32747520
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212443136 unmapped: 36315136 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:53.691548+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212443136 unmapped: 36315136 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:54.691705+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212451328 unmapped: 36306944 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:55.691869+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2469000/0x0/0x4ffc00000, data 0x467dcca/0x4752000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212451328 unmapped: 36306944 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:56.692041+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fa400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212451328 unmapped: 36306944 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fa800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:57.692205+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2413367 data_alloc: 251658240 data_used: 32747520
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212459520 unmapped: 36298752 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:58.692418+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b3372000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.486456871s of 10.526856422s, submitted: 18
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199400 session 0x5631b592c3c0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f246a000/0x0/0x4ffc00000, data 0x467dcca/0x4752000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212467712 unmapped: 36290560 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:38:59.692618+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212467712 unmapped: 36290560 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:00.692782+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212467712 unmapped: 36290560 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:01.692998+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212467712 unmapped: 36290560 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:02.693232+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2413235 data_alloc: 251658240 data_used: 32747520
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f246a000/0x0/0x4ffc00000, data 0x467dcca/0x4752000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212467712 unmapped: 36290560 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:03.693436+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212475904 unmapped: 36282368 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:04.693645+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212475904 unmapped: 36282368 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:05.693787+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212484096 unmapped: 36274176 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:06.693926+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f246a000/0x0/0x4ffc00000, data 0x467dcca/0x4752000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212484096 unmapped: 36274176 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:07.694078+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f246a000/0x0/0x4ffc00000, data 0x467dcca/0x4752000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2413235 data_alloc: 251658240 data_used: 32747520
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f246a000/0x0/0x4ffc00000, data 0x467dcca/0x4752000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212500480 unmapped: 36257792 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:08.694205+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212500480 unmapped: 36257792 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:09.694427+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:10.694588+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212500480 unmapped: 36257792 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f246a000/0x0/0x4ffc00000, data 0x467dcca/0x4752000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:11.694726+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212500480 unmapped: 36257792 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.050309181s of 13.052925110s, submitted: 1
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0400 session 0x5631b6c07a40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fa000 session 0x5631b592cb40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:12.694869+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212508672 unmapped: 36249600 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1994465 data_alloc: 218103808 data_used: 10506240
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b639a780
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:13.695000+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200032256 unmapped: 48726016 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:14.695155+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200032256 unmapped: 48726016 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:15.695383+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200032256 unmapped: 48726016 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a89000/0x0/0x4ffc00000, data 0x205fc97/0x2132000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:16.695585+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200032256 unmapped: 48726016 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:17.695723+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200032256 unmapped: 48726016 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1992476 data_alloc: 218103808 data_used: 10502144
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:18.695861+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200032256 unmapped: 48726016 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:19.696038+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200032256 unmapped: 48726016 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:20.696186+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a89000/0x0/0x4ffc00000, data 0x205fc97/0x2132000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200032256 unmapped: 48726016 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:21.696472+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200032256 unmapped: 48726016 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:22.696726+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200032256 unmapped: 48726016 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a89000/0x0/0x4ffc00000, data 0x205fc97/0x2132000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1992476 data_alloc: 218103808 data_used: 10502144
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fa400 session 0x5631b6428d20
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.049098969s of 11.203976631s, submitted: 69
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fa800 session 0x5631b5394960
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:23.696861+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200032256 unmapped: 48726016 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b6417680
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:24.697080+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193298432 unmapped: 55459840 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:25.697287+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193298432 unmapped: 55459840 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:26.697466+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193298432 unmapped: 55459840 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:27.697833+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f6190000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193298432 unmapped: 55459840 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1742138 data_alloc: 218103808 data_used: 1839104
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:28.698098+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193298432 unmapped: 55459840 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:29.698308+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193298432 unmapped: 55459840 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:30.698515+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193298432 unmapped: 55459840 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:31.698732+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193298432 unmapped: 55459840 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f6190000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:32.698939+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193298432 unmapped: 55459840 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1742138 data_alloc: 218103808 data_used: 1839104
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:33.699073+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193298432 unmapped: 55459840 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b73df800 session 0x5631b5f641e0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:34.699329+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f6190000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193298432 unmapped: 55459840 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:35.699538+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193298432 unmapped: 55459840 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:36.699668+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193298432 unmapped: 55459840 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:37.699838+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193298432 unmapped: 55459840 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1742138 data_alloc: 218103808 data_used: 1839104
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f6190000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x903f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:38.699988+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193298432 unmapped: 55459840 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:39.700121+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193298432 unmapped: 55459840 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:40.700383+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.434934616s of 17.548748016s, submitted: 42
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194355200 unmapped: 54403072 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:41.700548+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194428928 unmapped: 54329344 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:42.700787+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194428928 unmapped: 54329344 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1744114 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:43.700919+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194486272 unmapped: 54272000 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4be0000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa5ef9c5), peers [1] op hist [0,0,1])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:44.701170+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194584576 unmapped: 54173696 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:45.701439+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194600960 unmapped: 54157312 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:46.701640+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194600960 unmapped: 54157312 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5c20000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:47.701790+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194600960 unmapped: 54157312 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1744114 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:48.702027+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194600960 unmapped: 54157312 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:49.702227+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194600960 unmapped: 54157312 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:50.702383+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194600960 unmapped: 54157312 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:51.702562+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194600960 unmapped: 54157312 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5c20000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:52.702885+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194600960 unmapped: 54157312 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1744114 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:53.703060+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194600960 unmapped: 54157312 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5c20000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:54.703251+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194600960 unmapped: 54157312 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:55.703425+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194600960 unmapped: 54157312 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:56.703626+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194600960 unmapped: 54157312 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:57.703786+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194600960 unmapped: 54157312 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1744114 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:58.703995+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194600960 unmapped: 54157312 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5c20000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:39:59.704225+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194600960 unmapped: 54157312 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:00.704432+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194600960 unmapped: 54157312 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:01.704626+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194600960 unmapped: 54157312 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:02.704868+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194609152 unmapped: 54149120 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1744114 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5c20000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:03.705009+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194609152 unmapped: 54149120 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5c20000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5c20000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:04.705140+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194609152 unmapped: 54149120 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:05.705275+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194609152 unmapped: 54149120 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:06.705452+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194609152 unmapped: 54149120 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5c20000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:07.705688+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194609152 unmapped: 54149120 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1744114 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:08.705813+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194609152 unmapped: 54149120 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:09.705987+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194609152 unmapped: 54149120 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:10.706117+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194617344 unmapped: 54140928 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:11.706301+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194617344 unmapped: 54140928 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f5c20000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:12.706510+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194625536 unmapped: 54132736 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1744114 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:13.706665+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194625536 unmapped: 54132736 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:14.706818+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194625536 unmapped: 54132736 heap: 248758272 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fa000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 32.943851471s of 34.339908600s, submitted: 376
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fa000 session 0x5631b644d680
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fa400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fa400 session 0x5631b5771e00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199400 session 0x5631b386eb40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199800 session 0x5631b5f66f00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b6416f00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:15.706935+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194134016 unmapped: 58302464 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:16.707049+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194142208 unmapped: 58294272 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:17.707151+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4b99000/0x0/0x4ffc00000, data 0x19e0c97/0x1ab3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194142208 unmapped: 58294272 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1872710 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:18.707283+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199400 session 0x5631b64294a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 58286080 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:19.707509+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 58286080 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:20.707648+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 58286080 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fa000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fa000 session 0x5631b3372d20
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:21.707950+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 58286080 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4b99000/0x0/0x4ffc00000, data 0x19e0c97/0x1ab3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:22.708123+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fa400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fa400 session 0x5631b63a9c20
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 58286080 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5586c00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5586c00 session 0x5631b33734a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1873012 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:23.708243+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 58286080 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:24.708390+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 193929216 unmapped: 58507264 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:25.708504+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 197828608 unmapped: 54607872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4b98000/0x0/0x4ffc00000, data 0x19e0ca7/0x1ab4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4b98000/0x0/0x4ffc00000, data 0x19e0ca7/0x1ab4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:26.708690+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 197836800 unmapped: 54599680 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:27.708833+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 197836800 unmapped: 54599680 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1981844 data_alloc: 234881024 data_used: 17993728
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:28.708988+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 197836800 unmapped: 54599680 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:29.709119+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 197836800 unmapped: 54599680 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4b98000/0x0/0x4ffc00000, data 0x19e0ca7/0x1ab4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:30.709275+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 197836800 unmapped: 54599680 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:31.709434+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 197836800 unmapped: 54599680 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:32.709615+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4b98000/0x0/0x4ffc00000, data 0x19e0ca7/0x1ab4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 197844992 unmapped: 54591488 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1981844 data_alloc: 234881024 data_used: 17993728
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:33.709752+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 197844992 unmapped: 54591488 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.195119858s of 19.356214523s, submitted: 25
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:34.709895+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204718080 unmapped: 47718400 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:35.710033+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204718080 unmapped: 47718400 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:36.710168+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205357056 unmapped: 47079424 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:37.710311+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fa000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212074496 unmapped: 40361984 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fa000 session 0x5631b69d0f00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: mgrc ms_handle_reset ms_handle_reset con 0x5631b66f8c00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2285351161
Sep 30 19:04:00 compute-0 ceph-osd[82241]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2285351161,v1:192.168.122.100:6801/2285351161]
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: get_auth_request con 0x5631bb8fa800 auth_method 0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: mgrc handle_mgr_configure stats_period=5
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fa400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fa400 session 0x5631b636f680
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b9fb0000 session 0x5631b526e780
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fac00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fac00 session 0x5631b6b381e0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fb000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fb000 session 0x5631b69d0b40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2132737 data_alloc: 234881024 data_used: 18255872
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:38.711623+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f398a000/0x0/0x4ffc00000, data 0x2be4d09/0x2cb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 206659584 unmapped: 45776896 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b7848c00 session 0x5631b5f62780
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b9fb0000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b520c800 session 0x5631b644cd20
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b73dfc00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:39.711798+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 206659584 unmapped: 45776896 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:40.711964+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 206659584 unmapped: 45776896 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f398a000/0x0/0x4ffc00000, data 0x2be4d09/0x2cb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:41.712182+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 206659584 unmapped: 45776896 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:42.712638+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 206659584 unmapped: 45776896 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2125649 data_alloc: 234881024 data_used: 18255872
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:43.712828+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 206659584 unmapped: 45776896 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b520c800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b520c800 session 0x5631b592c000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:44.713052+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 206659584 unmapped: 45776896 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:45.713178+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 206659584 unmapped: 45776896 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:46.713290+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 206659584 unmapped: 45776896 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3990000/0x0/0x4ffc00000, data 0x2be7d09/0x2cbc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fa400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fa400 session 0x5631b64170e0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:47.713442+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 206667776 unmapped: 45768704 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2125649 data_alloc: 234881024 data_used: 18255872
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:48.713588+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fac00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fac00 session 0x5631b337b0e0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fb400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.909849167s of 14.555604935s, submitted: 146
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fb400 session 0x5631b64283c0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 206684160 unmapped: 45752320 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fb800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fbc00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:49.713739+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 206684160 unmapped: 45752320 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f398e000/0x0/0x4ffc00000, data 0x2be7d3c/0x2cbe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:50.713984+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 206692352 unmapped: 45744128 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:51.714105+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208068608 unmapped: 44367872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f398e000/0x0/0x4ffc00000, data 0x2be7d3c/0x2cbe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:52.714256+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208068608 unmapped: 44367872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f398e000/0x0/0x4ffc00000, data 0x2be7d3c/0x2cbe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2172193 data_alloc: 234881024 data_used: 24395776
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:53.714374+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208068608 unmapped: 44367872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:54.714544+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208068608 unmapped: 44367872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:55.714675+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208076800 unmapped: 44359680 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:56.714782+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208076800 unmapped: 44359680 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:57.715104+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208076800 unmapped: 44359680 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2172193 data_alloc: 234881024 data_used: 24395776
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:58.715288+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f398e000/0x0/0x4ffc00000, data 0x2be7d3c/0x2cbe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x95af9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 44351488 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:40:59.715422+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208084992 unmapped: 44351488 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.638042450s of 11.702260971s, submitted: 12
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:00.715640+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213581824 unmapped: 38854656 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:01.715754+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213827584 unmapped: 38608896 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1cdd000/0x0/0x4ffc00000, data 0x36f2d3c/0x37c9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:02.715875+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213827584 unmapped: 38608896 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2278387 data_alloc: 234881024 data_used: 25186304
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:03.716028+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1caf000/0x0/0x4ffc00000, data 0x3717d3c/0x37ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213835776 unmapped: 38600704 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:04.716174+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213835776 unmapped: 38600704 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:05.716394+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213835776 unmapped: 38600704 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:06.716572+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1caf000/0x0/0x4ffc00000, data 0x3717d3c/0x37ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213843968 unmapped: 38592512 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:07.716711+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1caf000/0x0/0x4ffc00000, data 0x3717d3c/0x37ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213843968 unmapped: 38592512 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2278555 data_alloc: 234881024 data_used: 25186304
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:08.716892+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213843968 unmapped: 38592512 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1caf000/0x0/0x4ffc00000, data 0x3717d3c/0x37ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:09.717017+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213843968 unmapped: 38592512 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:10.717151+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213843968 unmapped: 38592512 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:11.717311+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1caf000/0x0/0x4ffc00000, data 0x3717d3c/0x37ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213860352 unmapped: 38576128 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:12.717497+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213860352 unmapped: 38576128 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2278555 data_alloc: 234881024 data_used: 25186304
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:13.717844+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213860352 unmapped: 38576128 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:14.717982+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1caf000/0x0/0x4ffc00000, data 0x3717d3c/0x37ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213868544 unmapped: 38567936 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:15.718171+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213868544 unmapped: 38567936 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:16.718301+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213868544 unmapped: 38567936 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:17.718516+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213868544 unmapped: 38567936 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2278555 data_alloc: 234881024 data_used: 25186304
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:18.718684+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213868544 unmapped: 38567936 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1caf000/0x0/0x4ffc00000, data 0x3717d3c/0x37ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:19.718847+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213876736 unmapped: 38559744 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:20.718971+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213876736 unmapped: 38559744 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:21.719111+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213893120 unmapped: 38543360 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55cc400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197c00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1caf000/0x0/0x4ffc00000, data 0x3717d3c/0x37ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.734886169s of 21.945653915s, submitted: 102
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:22.719290+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1caf000/0x0/0x4ffc00000, data 0x3717d3c/0x37ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215007232 unmapped: 37429248 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2272331 data_alloc: 234881024 data_used: 25186304
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:23.719555+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b5f62000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199400 session 0x5631b526fe00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215007232 unmapped: 37429248 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:24.719661+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215007232 unmapped: 37429248 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:25.719782+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215007232 unmapped: 37429248 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1cbe000/0x0/0x4ffc00000, data 0x3717d3c/0x37ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:26.719949+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215007232 unmapped: 37429248 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:27.720196+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215015424 unmapped: 37421056 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2272331 data_alloc: 234881024 data_used: 25186304
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:28.720369+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215015424 unmapped: 37421056 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:29.720517+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215023616 unmapped: 37412864 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1cbe000/0x0/0x4ffc00000, data 0x3717d3c/0x37ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:30.720642+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1cbe000/0x0/0x4ffc00000, data 0x3717d3c/0x37ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215023616 unmapped: 37412864 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:31.720815+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215031808 unmapped: 37404672 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.074160576s of 10.085861206s, submitted: 13
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:32.720974+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215031808 unmapped: 37404672 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1cbe000/0x0/0x4ffc00000, data 0x3717d3c/0x37ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2272635 data_alloc: 234881024 data_used: 25186304
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:33.721108+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215031808 unmapped: 37404672 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:34.721298+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215031808 unmapped: 37404672 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:35.721745+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215031808 unmapped: 37404672 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:36.721948+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215031808 unmapped: 37404672 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1cbc000/0x0/0x4ffc00000, data 0x3718d3c/0x37ef000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:37.722068+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215040000 unmapped: 37396480 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2272635 data_alloc: 234881024 data_used: 25186304
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:38.722243+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1cbc000/0x0/0x4ffc00000, data 0x3718d3c/0x37ef000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215040000 unmapped: 37396480 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:39.722409+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215040000 unmapped: 37396480 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:40.722543+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215040000 unmapped: 37396480 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:41.722693+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215040000 unmapped: 37396480 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fb800 session 0x5631b2b3b0e0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fbc00 session 0x5631b6969a40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:42.722857+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 215048192 unmapped: 37388288 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.691148758s of 10.702739716s, submitted: 3
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2087680 data_alloc: 234881024 data_used: 18255872
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:43.723037+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1cbd000/0x0/0x4ffc00000, data 0x3718d3c/0x37ef000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [0,1])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b5f65860
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213442560 unmapped: 38993920 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:44.723197+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213442560 unmapped: 38993920 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:45.723411+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2b13000/0x0/0x4ffc00000, data 0x24f9ca7/0x25cd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213442560 unmapped: 38993920 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:46.723535+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2b13000/0x0/0x4ffc00000, data 0x24f9ca7/0x25cd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213450752 unmapped: 38985728 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:47.723690+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213450752 unmapped: 38985728 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2085691 data_alloc: 234881024 data_used: 18251776
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:48.723842+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213450752 unmapped: 38985728 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:49.723972+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213450752 unmapped: 38985728 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:50.724131+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2b13000/0x0/0x4ffc00000, data 0x24f9ca7/0x25cd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55cc400 session 0x5631b363eb40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197c00 session 0x5631b33692c0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213450752 unmapped: 38985728 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:51.724293+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b59d85a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 50511872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:52.724460+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 50511872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1775507 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:53.724598+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 50511872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:54.724776+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 50511872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:55.724938+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 50511872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a80000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:56.725098+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 50511872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:57.725252+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 50511872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1775507 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:58.725396+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a80000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 50511872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:41:59.725574+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 50511872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a80000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:00.725707+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 50511872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:01.725863+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 50511872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:02.726022+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 50511872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1775507 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:03.726185+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 50511872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:04.726360+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a80000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 50511872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:05.726478+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199400 session 0x5631b6b534a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fb800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fb800 session 0x5631b33625a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fbc00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fbc00 session 0x5631b639a780
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b3670b40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197c00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 201924608 unmapped: 50511872 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.629732132s of 22.947402954s, submitted: 81
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197c00 session 0x5631b331c780
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199400 session 0x5631b6b38780
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fb800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fb800 session 0x5631b5f674a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b520c800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:06.726610+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b520c800 session 0x5631b3372000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b6c07a40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200196096 unmapped: 52240384 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:07.726749+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4137000/0x0/0x4ffc00000, data 0x12a0d09/0x1375000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200196096 unmapped: 52240384 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:08.726972+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1851177 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200196096 unmapped: 52240384 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:09.727124+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200196096 unmapped: 52240384 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:10.727429+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200196096 unmapped: 52240384 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:11.727641+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200196096 unmapped: 52240384 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:12.727913+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197c00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197c00 session 0x5631b526f860
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200196096 unmapped: 52240384 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4137000/0x0/0x4ffc00000, data 0x12a0d09/0x1375000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:13.728109+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1851177 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200196096 unmapped: 52240384 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:14.728295+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200196096 unmapped: 52240384 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199400 session 0x5631b5f66780
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:15.728420+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200196096 unmapped: 52240384 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:16.728593+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fb800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fb800 session 0x5631b35514a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fa400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.606972694s of 10.775773048s, submitted: 43
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fa400 session 0x5631b5f66960
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200523776 unmapped: 51912704 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:17.728789+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197c00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f410f000/0x0/0x4ffc00000, data 0x12c7d19/0x139d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200523776 unmapped: 51912704 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:18.728927+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1855363 data_alloc: 218103808 data_used: 1900544
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 200851456 unmapped: 51585024 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:19.729047+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f410f000/0x0/0x4ffc00000, data 0x12c7d19/0x139d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 203472896 unmapped: 48963584 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f410f000/0x0/0x4ffc00000, data 0x12c7d19/0x139d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:20.729334+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f410f000/0x0/0x4ffc00000, data 0x12c7d19/0x139d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 203472896 unmapped: 48963584 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f410f000/0x0/0x4ffc00000, data 0x12c7d19/0x139d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:21.729527+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 203472896 unmapped: 48963584 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:22.729710+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 203472896 unmapped: 48963584 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:23.729834+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1920115 data_alloc: 234881024 data_used: 11452416
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 203481088 unmapped: 48955392 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:24.729978+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 203481088 unmapped: 48955392 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:25.730109+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f410f000/0x0/0x4ffc00000, data 0x12c7d19/0x139d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 203489280 unmapped: 48947200 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:26.730224+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 203489280 unmapped: 48947200 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:27.730441+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f410f000/0x0/0x4ffc00000, data 0x12c7d19/0x139d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 203489280 unmapped: 48947200 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:28.730597+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.547014236s of 11.612545967s, submitted: 4
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1921175 data_alloc: 234881024 data_used: 11481088
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 211107840 unmapped: 41328640 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:29.730692+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 211107840 unmapped: 41328640 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:30.730783+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210124800 unmapped: 42311680 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:31.730917+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f309a000/0x0/0x4ffc00000, data 0x233cd19/0x2412000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 211247104 unmapped: 41189376 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:32.731053+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 211247104 unmapped: 41189376 heap: 252436480 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:33.731189+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2054427 data_alloc: 234881024 data_used: 11579392
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199400 session 0x5631b639ab40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fb800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fb800 session 0x5631b3870f00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fac00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fac00 session 0x5631b65e14a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fb400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fb400 session 0x5631b5f665a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b6b38d20
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b57714a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199400 session 0x5631b402ab40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fac00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 211591168 unmapped: 51347456 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fac00 session 0x5631b5770f00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fb400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fb400 session 0x5631b402a780
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:34.731329+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 211591168 unmapped: 51347456 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:35.731540+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 211591168 unmapped: 51347456 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:36.731678+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f269e000/0x0/0x4ffc00000, data 0x2d38d19/0x2e0e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 211591168 unmapped: 51347456 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:37.731825+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 211607552 unmapped: 51331072 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:38.731980+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2124507 data_alloc: 234881024 data_used: 11583488
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 211607552 unmapped: 51331072 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:39.732078+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f269b000/0x0/0x4ffc00000, data 0x2d3bd19/0x2e11000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 211623936 unmapped: 51314688 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:40.732261+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f269b000/0x0/0x4ffc00000, data 0x2d3bd19/0x2e11000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 211623936 unmapped: 51314688 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:41.732403+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 211632128 unmapped: 51306496 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:42.732562+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fb800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.989665985s of 14.372500420s, submitted: 158
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fb800 session 0x5631b5771680
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 211968000 unmapped: 50970624 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2673000/0x0/0x4ffc00000, data 0x2d62d3c/0x2e39000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:43.732695+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2130209 data_alloc: 234881024 data_used: 11583488
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 211968000 unmapped: 50970624 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:44.732819+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2673000/0x0/0x4ffc00000, data 0x2d62d3c/0x2e39000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 212303872 unmapped: 50634752 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:45.732977+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213106688 unmapped: 49831936 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:46.733126+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213106688 unmapped: 49831936 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:47.733268+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213114880 unmapped: 49823744 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:48.733416+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2673000/0x0/0x4ffc00000, data 0x2d62d3c/0x2e39000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2199045 data_alloc: 234881024 data_used: 21688320
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213114880 unmapped: 49823744 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:49.733536+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213123072 unmapped: 49815552 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:50.733654+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213123072 unmapped: 49815552 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:51.733766+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2673000/0x0/0x4ffc00000, data 0x2d62d3c/0x2e39000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213123072 unmapped: 49815552 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:52.733930+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213123072 unmapped: 49815552 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:53.734048+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2199045 data_alloc: 234881024 data_used: 21688320
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213123072 unmapped: 49815552 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:54.734190+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.245738029s of 12.282820702s, submitted: 12
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214769664 unmapped: 48168960 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:55.734299+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214876160 unmapped: 48062464 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:56.734409+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214892544 unmapped: 48046080 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:57.734555+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f211f000/0x0/0x4ffc00000, data 0x32aed3c/0x3385000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214900736 unmapped: 48037888 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:58.734724+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2253889 data_alloc: 234881024 data_used: 21815296
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f211f000/0x0/0x4ffc00000, data 0x32aed3c/0x3385000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214900736 unmapped: 48037888 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:42:59.734887+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214900736 unmapped: 48037888 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:00.735058+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214900736 unmapped: 48037888 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:01.735234+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214245376 unmapped: 48693248 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:02.735414+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2125000/0x0/0x4ffc00000, data 0x32afd3c/0x3386000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214245376 unmapped: 48693248 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:03.735535+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2247201 data_alloc: 234881024 data_used: 21815296
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214253568 unmapped: 48685056 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:04.735704+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214253568 unmapped: 48685056 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2125000/0x0/0x4ffc00000, data 0x32afd3c/0x3386000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:05.735858+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2125000/0x0/0x4ffc00000, data 0x32afd3c/0x3386000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214261760 unmapped: 48676864 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2125000/0x0/0x4ffc00000, data 0x32afd3c/0x3386000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:06.736001+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214261760 unmapped: 48676864 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:07.736326+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214261760 unmapped: 48676864 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:08.736487+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2125000/0x0/0x4ffc00000, data 0x32afd3c/0x3386000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2247201 data_alloc: 234881024 data_used: 21815296
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214261760 unmapped: 48676864 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:09.736594+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214261760 unmapped: 48676864 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:10.736780+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2125000/0x0/0x4ffc00000, data 0x32afd3c/0x3386000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214261760 unmapped: 48676864 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:11.736955+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.766283035s of 17.103271484s, submitted: 66
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214269952 unmapped: 48668672 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:12.737172+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2125000/0x0/0x4ffc00000, data 0x32afd3c/0x3386000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214269952 unmapped: 48668672 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:13.737397+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2247201 data_alloc: 234881024 data_used: 21815296
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214286336 unmapped: 48652288 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:14.737516+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214286336 unmapped: 48652288 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:15.737680+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214286336 unmapped: 48652288 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:16.737861+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214286336 unmapped: 48652288 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:17.738065+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214286336 unmapped: 48652288 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:18.738281+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2125000/0x0/0x4ffc00000, data 0x32afd3c/0x3386000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2247201 data_alloc: 234881024 data_used: 21815296
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214286336 unmapped: 48652288 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:19.738426+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fac00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fb400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214294528 unmapped: 48644096 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:20.738605+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214294528 unmapped: 48644096 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:21.738832+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2126000/0x0/0x4ffc00000, data 0x32afd3c/0x3386000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b3369860
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197c00 session 0x5631b6969e00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214310912 unmapped: 48627712 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:22.739042+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214310912 unmapped: 48627712 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:23.739225+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2246349 data_alloc: 234881024 data_used: 21819392
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214310912 unmapped: 48627712 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:24.739404+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214310912 unmapped: 48627712 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:25.739538+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214310912 unmapped: 48627712 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:26.739654+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2126000/0x0/0x4ffc00000, data 0x32afd3c/0x3386000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.018495560s of 15.045900345s, submitted: 6
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214310912 unmapped: 48627712 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:27.739784+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _send_mon_message to mon.compute-1 at v2:192.168.122.101:3300/0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214327296 unmapped: 48611328 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:28.739913+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2246989 data_alloc: 234881024 data_used: 21819392
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2124000/0x0/0x4ffc00000, data 0x32b0d3c/0x3387000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214335488 unmapped: 48603136 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:29.740056+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214335488 unmapped: 48603136 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:30.740274+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2124000/0x0/0x4ffc00000, data 0x32b0d3c/0x3387000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214343680 unmapped: 48594944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:31.740431+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214343680 unmapped: 48594944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:32.740629+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214343680 unmapped: 48594944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:33.740756+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2246989 data_alloc: 234881024 data_used: 21819392
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2124000/0x0/0x4ffc00000, data 0x32b0d3c/0x3387000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214343680 unmapped: 48594944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:34.740893+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2124000/0x0/0x4ffc00000, data 0x32b0d3c/0x3387000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214343680 unmapped: 48594944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:35.741016+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2124000/0x0/0x4ffc00000, data 0x32b0d3c/0x3387000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214343680 unmapped: 48594944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:36.741147+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b6417a40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.072799683s of 10.080914497s, submitted: 3
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199400 session 0x5631b337af00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 214343680 unmapped: 48594944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:37.741289+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b68d1000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209379328 unmapped: 53559296 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:38.741421+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b68d1000 session 0x5631b6b385a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2062857 data_alloc: 234881024 data_used: 11587584
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209395712 unmapped: 53542912 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:39.741555+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3066000/0x0/0x4ffc00000, data 0x236fd19/0x2445000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:40.741754+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209395712 unmapped: 53542912 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:41.741900+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209395712 unmapped: 53542912 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3066000/0x0/0x4ffc00000, data 0x236fd19/0x2445000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:42.742062+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209395712 unmapped: 53542912 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:43.749460+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209395712 unmapped: 53542912 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2062857 data_alloc: 234881024 data_used: 11587584
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3066000/0x0/0x4ffc00000, data 0x236fd19/0x2445000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:44.749613+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209395712 unmapped: 53542912 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:45.749738+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209395712 unmapped: 53542912 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fac00 session 0x5631b6c06b40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fb400 session 0x5631b6b38000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:46.749885+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209428480 unmapped: 53510144 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b337b860
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:47.750032+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:48.750170+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1805942 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:49.750306+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a7f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a7f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:50.750425+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:51.750561+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:52.750796+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a7f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:53.751020+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1805942 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:54.751180+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:55.751330+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:56.751484+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a7f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:57.751630+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:58.751774+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1805942 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:43:59.751941+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:00.752138+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:01.752330+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:02.752828+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a7f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:03.752962+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a7f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1805942 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:04.753118+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:05.753298+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:06.753422+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:07.753541+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:08.753697+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1805942 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:09.753863+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a7f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:10.753980+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:11.754140+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:12.754327+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a7f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:13.754504+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1805942 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:14.754619+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:15.754819+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a7f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:16.755051+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:17.755257+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:18.755417+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 58294272 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1805942 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a7f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:19.755549+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 58286080 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:20.755689+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 58286080 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:21.755856+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 58286080 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:22.756057+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 58286080 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:23.756194+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 58286080 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1805942 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:24.756334+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 58286080 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a7f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:25.756494+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 58286080 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:26.756624+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 58286080 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:27.756740+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 58286080 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:28.756868+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 58286080 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a7f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1805942 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:29.757032+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 58286080 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:30.757160+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 58286080 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:31.757275+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 58286080 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:32.757428+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 58286080 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:33.757666+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 58286080 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1805942 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4a7f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:34.757858+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 58286080 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 57.559127808s of 58.033184052s, submitted: 95
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:35.758032+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b592cb40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197c00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197c00 session 0x5631b6969e00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b402a780
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205922304 unmapped: 57016320 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b6b38d20
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197c00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197c00 session 0x5631b69690e0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:36.758159+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205938688 unmapped: 56999936 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:37.758441+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205938688 unmapped: 56999936 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3784000/0x0/0x4ffc00000, data 0x1c54cf9/0x1d28000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:38.758582+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fac00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fac00 session 0x5631b3870f00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205938688 unmapped: 56999936 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3784000/0x0/0x4ffc00000, data 0x1c54cf9/0x1d28000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1946190 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:39.758725+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205938688 unmapped: 56999936 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:40.758896+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205938688 unmapped: 56999936 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fb400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fb400 session 0x5631b639ab40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:41.759064+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205946880 unmapped: 56991744 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:42.759185+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205946880 unmapped: 56991744 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b35514a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b5f66780
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:43.759540+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205946880 unmapped: 56991744 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197c00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fac00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1946648 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3783000/0x0/0x4ffc00000, data 0x1c54d09/0x1d29000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:44.759654+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 205963264 unmapped: 56975360 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:45.759760+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208691200 unmapped: 54247424 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:46.759840+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208691200 unmapped: 54247424 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:47.759911+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208691200 unmapped: 54247424 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3783000/0x0/0x4ffc00000, data 0x1c54d09/0x1d29000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:48.760086+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208691200 unmapped: 54247424 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2072504 data_alloc: 234881024 data_used: 20656128
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:49.760222+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208699392 unmapped: 54239232 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:50.760333+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3783000/0x0/0x4ffc00000, data 0x1c54d09/0x1d29000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208707584 unmapped: 54231040 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:51.760495+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208707584 unmapped: 54231040 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:52.760633+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208707584 unmapped: 54231040 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3783000/0x0/0x4ffc00000, data 0x1c54d09/0x1d29000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3783000/0x0/0x4ffc00000, data 0x1c54d09/0x1d29000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:53.760715+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 208707584 unmapped: 54231040 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2072960 data_alloc: 234881024 data_used: 20668416
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:54.760823+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.133584976s of 19.291557312s, submitted: 52
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 52740096 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:55.760955+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2940000/0x0/0x4ffc00000, data 0x2a97d09/0x2b6c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213065728 unmapped: 49872896 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2940000/0x0/0x4ffc00000, data 0x2a97d09/0x2b6c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199400 session 0x5631b402b0e0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f3800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f3800 session 0x5631b5f65680
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f3400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f3400 session 0x5631b636fe00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:56.761082+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b644d680
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b386f0e0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213319680 unmapped: 49618944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:57.761234+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213319680 unmapped: 49618944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:58.761406+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213319680 unmapped: 49618944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2306968 data_alloc: 234881024 data_used: 21622784
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:44:59.761554+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213319680 unmapped: 49618944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:00.761697+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213319680 unmapped: 49618944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:01.761818+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213319680 unmapped: 49618944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1be7000/0x0/0x4ffc00000, data 0x37eed6b/0x38c4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:02.762006+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199400 session 0x5631b592dc20
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213319680 unmapped: 49618944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:03.762172+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213319680 unmapped: 49618944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2306968 data_alloc: 234881024 data_used: 21622784
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:04.762381+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213319680 unmapped: 49618944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f3800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f3800 session 0x5631b592d4a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:05.762549+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213319680 unmapped: 49618944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1be7000/0x0/0x4ffc00000, data 0x37eed6b/0x38c4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:06.762685+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213319680 unmapped: 49618944 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5587800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5587800 session 0x5631b6b53680
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.046493530s of 12.419847488s, submitted: 145
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b63a94a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:07.762861+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213336064 unmapped: 49602560 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:08.763017+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 213581824 unmapped: 49356800 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2327905 data_alloc: 234881024 data_used: 25317376
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:09.763134+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1be8000/0x0/0x4ffc00000, data 0x37eed6b/0x38c4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223395840 unmapped: 39542784 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:10.763289+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223395840 unmapped: 39542784 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:11.763463+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223395840 unmapped: 39542784 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1be8000/0x0/0x4ffc00000, data 0x37eed6b/0x38c4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:12.763646+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223412224 unmapped: 39526400 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:13.763762+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223412224 unmapped: 39526400 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2388401 data_alloc: 251658240 data_used: 34312192
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:14.763917+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223412224 unmapped: 39526400 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:15.764113+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223412224 unmapped: 39526400 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:16.764254+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1be8000/0x0/0x4ffc00000, data 0x37eed6b/0x38c4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223428608 unmapped: 39510016 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:17.764434+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223428608 unmapped: 39510016 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:18.764565+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.702298164s of 11.732785225s, submitted: 8
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223870976 unmapped: 39067648 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2481871 data_alloc: 251658240 data_used: 34750464
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:19.764674+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229236736 unmapped: 33701888 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f0fe7000/0x0/0x4ffc00000, data 0x43e1d6b/0x44b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:20.764801+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230006784 unmapped: 32931840 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:21.764909+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b63a9a40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199400 session 0x5631b6bf3c20
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229343232 unmapped: 33595392 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:22.765101+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5587800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221446144 unmapped: 41492480 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5587800 session 0x5631b69d01e0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:23.765269+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221446144 unmapped: 41492480 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2214773 data_alloc: 234881024 data_used: 21626880
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:24.765455+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221446144 unmapped: 41492480 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:25.765618+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221446144 unmapped: 41492480 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f257d000/0x0/0x4ffc00000, data 0x2aacd09/0x2b81000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:26.765769+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221446144 unmapped: 41492480 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:27.765915+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f257b000/0x0/0x4ffc00000, data 0x2aadd09/0x2b82000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xa74f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221446144 unmapped: 41492480 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:28.766073+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221446144 unmapped: 41492480 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2215077 data_alloc: 234881024 data_used: 21626880
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:29.766225+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221446144 unmapped: 41492480 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:30.766415+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.652027130s of 12.008926392s, submitted: 172
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197c00 session 0x5631b3372000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fac00 session 0x5631b5771a40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221429760 unmapped: 41508864 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:31.766531+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b6b52f00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:32.766732+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:33.766906+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1841780 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:34.767048+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:35.767241+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:36.767404+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:37.767564+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:38.767693+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1841780 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:39.767865+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:40.768038+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:41.768226+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:42.768410+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:43.768612+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1841780 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:44.768793+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:45.769049+0000)
Sep 30 19:04:00 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19902 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:46.769182+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:47.769413+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:48.769559+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1841780 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:49.769680+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:50.769863+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:51.769996+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:52.770198+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:53.770327+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1841780 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:54.770519+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:55.770696+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:56.770815+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:57.770984+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:58.771132+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1841780 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:45:59.771272+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:00.771437+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:01.771568+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:02.771826+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:03.772257+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1841780 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:04.772404+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:05.772656+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:06.772792+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:07.772992+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:08.773160+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1841780 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:09.773324+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:10.773534+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:11.773663+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:12.773806+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:13.773922+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1841780 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:14.774071+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:15.774218+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:16.774417+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:17.774582+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:18.774766+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1841780 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:19.774926+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:20.775123+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:21.775288+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:22.775462+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:23.775998+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1841780 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:24.776274+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:25.776449+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:26.776624+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:27.776760+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:28.777017+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1841780 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:29.777121+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:30.777267+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:31.777457+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:32.777659+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:33.777816+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:34.777969+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1841780 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f466f000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210067456 unmapped: 52871168 heap: 262938624 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:35.778169+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b64172c0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197c00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197c00 session 0x5631b639a5a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5199400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5199400 session 0x5631b331c000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b6c07a40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 64.936187744s of 65.134979248s, submitted: 54
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b6b392c0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197c00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197c00 session 0x5631b59d9c20
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fac00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fac00 session 0x5631b644c780
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210714624 unmapped: 60104704 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5587800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5587800 session 0x5631b636f0e0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b3870000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:36.778432+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209920000 unmapped: 60899328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:37.778575+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209920000 unmapped: 60899328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:38.779114+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3857000/0x0/0x4ffc00000, data 0x1770d09/0x1845000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209920000 unmapped: 60899328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:39.779398+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1956484 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209920000 unmapped: 60899328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:40.779842+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209920000 unmapped: 60899328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:41.779986+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3857000/0x0/0x4ffc00000, data 0x1770d09/0x1845000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209920000 unmapped: 60899328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:42.780252+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209920000 unmapped: 60899328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:43.780396+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b2b3be00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209928192 unmapped: 60891136 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197c00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fac00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:44.780495+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1959882 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 209936384 unmapped: 60882944 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:45.780608+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210444288 unmapped: 60375040 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:46.780736+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210444288 unmapped: 60375040 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:47.780882+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3856000/0x0/0x4ffc00000, data 0x1770d2c/0x1846000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210444288 unmapped: 60375040 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:48.780989+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210444288 unmapped: 60375040 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:49.781115+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2058074 data_alloc: 234881024 data_used: 16293888
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210444288 unmapped: 60375040 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:50.781303+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3856000/0x0/0x4ffc00000, data 0x1770d2c/0x1846000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210444288 unmapped: 60375040 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:51.781439+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210444288 unmapped: 60375040 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:52.781599+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210444288 unmapped: 60375040 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3856000/0x0/0x4ffc00000, data 0x1770d2c/0x1846000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:53.781776+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 210444288 unmapped: 60375040 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:54.781916+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2058074 data_alloc: 234881024 data_used: 16293888
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.958608627s of 19.136236191s, submitted: 64
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 220626944 unmapped: 50192384 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:55.782031+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222814208 unmapped: 48005120 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:56.782143+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f3800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f3800 session 0x5631b64174a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5586800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5586800 session 0x5631b639a1e0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5843400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5843400 session 0x5631b6c06780
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2a6b000/0x0/0x4ffc00000, data 0x254dd2c/0x2623000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b6c065a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 46759936 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b6b53c20
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5586800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5586800 session 0x5631b5395a40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f29c3000/0x0/0x4ffc00000, data 0x25fbd2c/0x26d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:57.782336+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f3800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f3800 session 0x5631b5394b40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5842000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5842000 session 0x5631b69d03c0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b5f623c0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223649792 unmapped: 47169536 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:58.782497+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1c3b000/0x0/0x4ffc00000, data 0x3383d2c/0x3459000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223649792 unmapped: 47169536 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:46:59.782617+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2295141 data_alloc: 234881024 data_used: 17666048
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1c3b000/0x0/0x4ffc00000, data 0x3383d2c/0x3459000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223649792 unmapped: 47169536 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:00.782759+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1c3b000/0x0/0x4ffc00000, data 0x3383d2c/0x3459000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223649792 unmapped: 47169536 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:01.782891+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223666176 unmapped: 47153152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b402b680
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:02.783023+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223666176 unmapped: 47153152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:03.783155+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1c1f000/0x0/0x4ffc00000, data 0x33a7d2c/0x347d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223666176 unmapped: 47153152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:04.783303+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2288741 data_alloc: 234881024 data_used: 17670144
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1c1f000/0x0/0x4ffc00000, data 0x33a7d2c/0x347d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5586800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5586800 session 0x5631b6bf32c0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223666176 unmapped: 47153152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:05.783480+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f3800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f3800 session 0x5631b35de960
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5843000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.674231529s of 11.071226120s, submitted: 215
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5843000 session 0x5631b5f625a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223682560 unmapped: 47136768 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1c1f000/0x0/0x4ffc00000, data 0x33a7d2c/0x347d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:06.783807+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223682560 unmapped: 47136768 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:07.783956+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223682560 unmapped: 47136768 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:08.784104+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230400000 unmapped: 40419328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:09.784275+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2386331 data_alloc: 251658240 data_used: 30572544
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230948864 unmapped: 39870464 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:10.784651+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230948864 unmapped: 39870464 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:11.784799+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1c1e000/0x0/0x4ffc00000, data 0x33a7d4f/0x347e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230948864 unmapped: 39870464 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:12.784965+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230948864 unmapped: 39870464 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:13.785264+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230948864 unmapped: 39870464 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:14.785455+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1c1e000/0x0/0x4ffc00000, data 0x33a7d4f/0x347e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2387131 data_alloc: 251658240 data_used: 30576640
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1c1b000/0x0/0x4ffc00000, data 0x33aad4f/0x3481000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230948864 unmapped: 39870464 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:15.785609+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230948864 unmapped: 39870464 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:16.785926+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1c1b000/0x0/0x4ffc00000, data 0x33aad4f/0x3481000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230948864 unmapped: 39870464 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:17.786164+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f1c1b000/0x0/0x4ffc00000, data 0x33aad4f/0x3481000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.891641617s of 11.933318138s, submitted: 14
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 236093440 unmapped: 34725888 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:18.786328+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237633536 unmapped: 33185792 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:19.786514+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2507695 data_alloc: 251658240 data_used: 31805440
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237559808 unmapped: 33259520 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:20.786738+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237559808 unmapped: 33259520 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:21.786908+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237559808 unmapped: 33259520 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:22.787237+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f0de9000/0x0/0x4ffc00000, data 0x41d6d4f/0x42ad000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f0de9000/0x0/0x4ffc00000, data 0x41d6d4f/0x42ad000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237559808 unmapped: 33259520 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:23.787417+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237559808 unmapped: 33259520 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:24.787579+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2507847 data_alloc: 251658240 data_used: 31809536
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237559808 unmapped: 33259520 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f0ded000/0x0/0x4ffc00000, data 0x41d8d4f/0x42af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:25.787790+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f0ded000/0x0/0x4ffc00000, data 0x41d8d4f/0x42af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237559808 unmapped: 33259520 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:26.787934+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237608960 unmapped: 33210368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:27.788134+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237608960 unmapped: 33210368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:28.788278+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237608960 unmapped: 33210368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:29.788408+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2504911 data_alloc: 251658240 data_used: 31809536
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237608960 unmapped: 33210368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:30.788581+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.494389534s of 12.792341232s, submitted: 169
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237608960 unmapped: 33210368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:31.788740+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f0dec000/0x0/0x4ffc00000, data 0x41d9d4f/0x42b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237608960 unmapped: 33210368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:32.788914+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f0dec000/0x0/0x4ffc00000, data 0x41d9d4f/0x42b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237608960 unmapped: 33210368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:33.789095+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237608960 unmapped: 33210368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:34.789257+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2505087 data_alloc: 251658240 data_used: 31809536
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237608960 unmapped: 33210368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:35.789468+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f0dec000/0x0/0x4ffc00000, data 0x41d9d4f/0x42b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:36.789606+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237617152 unmapped: 33202176 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:37.789946+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237617152 unmapped: 33202176 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5586800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f3800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:38.790309+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237641728 unmapped: 33177600 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f0dec000/0x0/0x4ffc00000, data 0x41d9d4f/0x42b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:39.790481+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237641728 unmapped: 33177600 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197c00 session 0x5631b639b680
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fac00 session 0x5631b64161e0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2503915 data_alloc: 251658240 data_used: 31809536
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:40.790632+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237649920 unmapped: 33169408 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:41.790756+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237649920 unmapped: 33169408 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:42.790929+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237649920 unmapped: 33169408 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:43.791081+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237649920 unmapped: 33169408 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:44.791221+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237658112 unmapped: 33161216 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f0dec000/0x0/0x4ffc00000, data 0x41d9d4f/0x42b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2503915 data_alloc: 251658240 data_used: 31809536
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:45.791390+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237658112 unmapped: 33161216 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:46.791503+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237666304 unmapped: 33153024 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:47.791627+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237666304 unmapped: 33153024 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f0dec000/0x0/0x4ffc00000, data 0x41d9d4f/0x42b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:48.791754+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237666304 unmapped: 33153024 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f0dec000/0x0/0x4ffc00000, data 0x41d9d4f/0x42b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:49.791915+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237674496 unmapped: 33144832 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2503915 data_alloc: 251658240 data_used: 31809536
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.172214508s of 19.200347900s, submitted: 8
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f0dea000/0x0/0x4ffc00000, data 0x41dad4f/0x42b1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:50.792081+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237682688 unmapped: 33136640 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:51.792216+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237682688 unmapped: 33136640 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:52.792518+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237682688 unmapped: 33136640 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f0dea000/0x0/0x4ffc00000, data 0x41dad4f/0x42b1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:53.792658+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237682688 unmapped: 33136640 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:54.792770+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237699072 unmapped: 33120256 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2504723 data_alloc: 251658240 data_used: 31809536
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:55.792937+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237699072 unmapped: 33120256 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b5f645a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b402ab40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:56.793083+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55cb800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 237699072 unmapped: 33120256 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55cb800 session 0x5631b3670f00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f0deb000/0x0/0x4ffc00000, data 0x41dad4f/0x42b1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:57.793235+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229425152 unmapped: 41394176 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 26K writes, 98K keys, 26K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s
                                           Cumulative WAL: 26K writes, 9158 syncs, 2.90 writes per sync, written: 0.09 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4299 writes, 16K keys, 4299 commit groups, 1.0 writes per commit group, ingest: 19.72 MB, 0.03 MB/s
                                           Interval WAL: 4299 writes, 1695 syncs, 2.54 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:58.793480+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230522880 unmapped: 40296448 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:47:59.793650+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230522880 unmapped: 40296448 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2202981 data_alloc: 234881024 data_used: 16560128
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:00.793817+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230522880 unmapped: 40296448 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:01.793985+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230522880 unmapped: 40296448 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:02.794196+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230522880 unmapped: 40296448 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f29a3000/0x0/0x4ffc00000, data 0x2622d2c/0x26f8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:03.794386+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230522880 unmapped: 40296448 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:04.794561+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230522880 unmapped: 40296448 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2202981 data_alloc: 234881024 data_used: 16560128
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.812459946s of 15.024492264s, submitted: 87
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5586800 session 0x5631b5f67a40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f3800 session 0x5631b63a9a40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:05.794751+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f29a4000/0x0/0x4ffc00000, data 0x2622d2c/0x26f8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229548032 unmapped: 41271296 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b3670960
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:06.794886+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f42e4000/0x0/0x4ffc00000, data 0x959cba/0xa2d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:07.797502+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f42e4000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:08.799112+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:09.799219+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1883301 data_alloc: 218103808 data_used: 1888256
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:10.799541+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:11.799754+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:12.799922+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:13.800045+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f42e4000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:14.800187+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1883301 data_alloc: 218103808 data_used: 1888256
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:15.800327+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f42e4000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:16.800513+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:17.800692+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:18.800835+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:19.801005+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1883301 data_alloc: 218103808 data_used: 1888256
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f42e4000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:20.801166+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f42e4000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:21.801293+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:22.801463+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:23.801676+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:24.801879+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f42e4000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1883301 data_alloc: 218103808 data_used: 1888256
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:25.802148+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f42e4000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:26.802395+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:27.802611+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f42e4000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:28.802776+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _send_mon_message to mon.compute-1 at v2:192.168.122.101:3300/0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:29.802947+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1883301 data_alloc: 218103808 data_used: 1888256
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:30.803172+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:31.803303+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:32.803455+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:33.803623+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f42e4000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:34.803776+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 52740096 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1883301 data_alloc: 218103808 data_used: 1888256
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f42e4000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.031751633s of 30.238092422s, submitted: 78
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:35.803910+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b38712c0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197c00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197c00 session 0x5631b3379a40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b3ff3680
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b6b53860
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5586800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218546176 unmapped: 52273152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5586800 session 0x5631b35de1e0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:36.804070+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218546176 unmapped: 52273152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:37.804208+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218546176 unmapped: 52273152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:38.804406+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218546176 unmapped: 52273152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:39.804684+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218546176 unmapped: 52273152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1966912 data_alloc: 218103808 data_used: 1888256
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f3800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f3800 session 0x5631b5395c20
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:40.804828+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218546176 unmapped: 52273152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3c73000/0x0/0x4ffc00000, data 0x1356c97/0x1429000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:41.804990+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218546176 unmapped: 52273152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:42.805124+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218546176 unmapped: 52273152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fac00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fac00 session 0x5631b592c780
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:43.805287+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218546176 unmapped: 52273152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b6bf3c20
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b5397c20
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5586800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f3800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:44.805433+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218882048 unmapped: 51937280 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1970570 data_alloc: 218103808 data_used: 1892352
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:45.805639+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218882048 unmapped: 51937280 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:46.805828+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 219373568 unmapped: 51445760 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3c4b000/0x0/0x4ffc00000, data 0x137dca7/0x1451000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:47.805943+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 219373568 unmapped: 51445760 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:48.806065+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 219381760 unmapped: 51437568 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:49.806201+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 219381760 unmapped: 51437568 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2032890 data_alloc: 218103808 data_used: 10952704
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3c4b000/0x0/0x4ffc00000, data 0x137dca7/0x1451000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:50.806307+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 219381760 unmapped: 51437568 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3c4b000/0x0/0x4ffc00000, data 0x137dca7/0x1451000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:51.806421+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 219389952 unmapped: 51429376 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:52.806574+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 219389952 unmapped: 51429376 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:53.806766+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 219389952 unmapped: 51429376 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:54.806984+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 219389952 unmapped: 51429376 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f3c4b000/0x0/0x4ffc00000, data 0x137dca7/0x1451000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2032890 data_alloc: 218103808 data_used: 10952704
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.829759598s of 19.939464569s, submitted: 36
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:55.807255+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 224174080 unmapped: 46645248 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:56.807496+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 224174080 unmapped: 46645248 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:57.807620+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 224894976 unmapped: 45924352 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:58.807740+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b441d400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b441d400 session 0x5631b63a8f00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b441d000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b441d000 session 0x5631b386f4a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b3364800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b3364800 session 0x5631b6b52b40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 224894976 unmapped: 45924352 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b6b53860
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b441d000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b441d000 session 0x5631b331d680
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b441d400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b441d400 session 0x5631b2b3a960
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b5770780
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b6e73400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b6e73400 session 0x5631b636eb40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b35ded20
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:48:59.807869+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2d0b000/0x0/0x4ffc00000, data 0x22bbce0/0x2391000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225058816 unmapped: 45760512 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2169298 data_alloc: 234881024 data_used: 12369920
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:00.808028+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225058816 unmapped: 45760512 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:01.808221+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225058816 unmapped: 45760512 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:02.808432+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225058816 unmapped: 45760512 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b441d000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b441d000 session 0x5631b639b860
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2d0b000/0x0/0x4ffc00000, data 0x22bbd19/0x2391000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:03.808544+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225067008 unmapped: 45752320 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:04.808697+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225067008 unmapped: 45752320 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2165802 data_alloc: 234881024 data_used: 12369920
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b441d400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b441d400 session 0x5631b5f62000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:05.808857+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225067008 unmapped: 45752320 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:06.809082+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225067008 unmapped: 45752320 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b63a94a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fb000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.581088066s of 11.953786850s, submitted: 163
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2ce7000/0x0/0x4ffc00000, data 0x22dfd19/0x23b5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [1])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fb000 session 0x5631b639ab40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:07.809221+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225075200 unmapped: 45744128 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b441d000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:08.809363+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225083392 unmapped: 45735936 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:09.809588+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225083392 unmapped: 45735936 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2173129 data_alloc: 234881024 data_used: 12935168
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:10.809766+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225607680 unmapped: 45211648 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2ce6000/0x0/0x4ffc00000, data 0x22dfd29/0x23b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:11.809940+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225607680 unmapped: 45211648 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:12.810169+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225607680 unmapped: 45211648 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:13.810323+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225607680 unmapped: 45211648 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:14.810467+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225607680 unmapped: 45211648 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2194937 data_alloc: 234881024 data_used: 16080896
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2ce6000/0x0/0x4ffc00000, data 0x22dfd29/0x23b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:15.810786+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225607680 unmapped: 45211648 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2ce3000/0x0/0x4ffc00000, data 0x22e2d29/0x23b9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:16.810922+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225607680 unmapped: 45211648 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:17.811093+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2ce3000/0x0/0x4ffc00000, data 0x22e2d29/0x23b9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225607680 unmapped: 45211648 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:18.811328+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225607680 unmapped: 45211648 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:19.811605+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225607680 unmapped: 45211648 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2194937 data_alloc: 234881024 data_used: 16080896
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:20.811780+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.392265320s of 13.433295250s, submitted: 11
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 226746368 unmapped: 44072960 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:21.811962+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229335040 unmapped: 41484288 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2179000/0x0/0x4ffc00000, data 0x2e44d29/0x2f1b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:22.812142+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230481920 unmapped: 40337408 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:23.812273+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230481920 unmapped: 40337408 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:24.812429+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230481920 unmapped: 40337408 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298633 data_alloc: 234881024 data_used: 16474112
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:25.812593+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230481920 unmapped: 40337408 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:26.812842+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230481920 unmapped: 40337408 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:27.812998+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2146000/0x0/0x4ffc00000, data 0x2e76d29/0x2f4d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229810176 unmapped: 41009152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:28.813154+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229810176 unmapped: 41009152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:29.813532+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229810176 unmapped: 41009152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f214c000/0x0/0x4ffc00000, data 0x2e79d29/0x2f50000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2293017 data_alloc: 234881024 data_used: 16478208
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:30.813657+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229810176 unmapped: 41009152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f214c000/0x0/0x4ffc00000, data 0x2e79d29/0x2f50000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:31.813775+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229810176 unmapped: 41009152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:32.813937+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229810176 unmapped: 41009152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:33.814043+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229810176 unmapped: 41009152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f214c000/0x0/0x4ffc00000, data 0x2e79d29/0x2f50000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:34.814211+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229810176 unmapped: 41009152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2293017 data_alloc: 234881024 data_used: 16478208
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b441d400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.330355644s of 14.789398193s, submitted: 104
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:35.814376+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229810176 unmapped: 41009152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:36.814494+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229810176 unmapped: 41009152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5586800 session 0x5631b331cf00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b55f3800 session 0x5631b64e6960
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:37.814641+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229810176 unmapped: 41009152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:38.814777+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229810176 unmapped: 41009152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:39.814950+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f214c000/0x0/0x4ffc00000, data 0x2e79d29/0x2f50000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229810176 unmapped: 41009152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2291885 data_alloc: 234881024 data_used: 16486400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f214c000/0x0/0x4ffc00000, data 0x2e79d29/0x2f50000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:40.815079+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229810176 unmapped: 41009152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:41.815238+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229851136 unmapped: 40968192 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:42.815441+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229851136 unmapped: 40968192 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:43.815654+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229892096 unmapped: 40927232 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:44.815824+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230031360 unmapped: 40787968 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2291977 data_alloc: 234881024 data_used: 16486400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:45.815959+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f214c000/0x0/0x4ffc00000, data 0x2e79d29/0x2f50000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230039552 unmapped: 40779776 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:46.816096+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230039552 unmapped: 40779776 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:47.816494+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f214c000/0x0/0x4ffc00000, data 0x2e79d29/0x2f50000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230039552 unmapped: 40779776 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:48.816643+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230039552 unmapped: 40779776 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:49.816820+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230039552 unmapped: 40779776 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f214c000/0x0/0x4ffc00000, data 0x2e79d29/0x2f50000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2291977 data_alloc: 234881024 data_used: 16486400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.061479568s of 14.912302971s, submitted: 309
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b33625a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b441d000 session 0x5631b402a5a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:50.817015+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230039552 unmapped: 40779776 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fbc00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:51.817147+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631bb8fbc00 session 0x5631b3551680
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230121472 unmapped: 40697856 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:52.817397+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230121472 unmapped: 40697856 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:53.817527+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230121472 unmapped: 40697856 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:54.817654+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230121472 unmapped: 40697856 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2140436 data_alloc: 234881024 data_used: 12378112
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:55.817791+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230121472 unmapped: 40697856 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f2f6e000/0x0/0x4ffc00000, data 0x1e30ca7/0x1f04000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:56.817923+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230121472 unmapped: 40697856 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:57.818052+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230121472 unmapped: 40697856 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:58.818170+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b441d400 session 0x5631b5f663c0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b5197400 session 0x5631b639b680
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230121472 unmapped: 40697856 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:49:59.818307+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 ms_handle_reset con 0x5631b40f7000 session 0x5631b6417a40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4648000/0x0/0x4ffc00000, data 0x980ca7/0xa54000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1914569 data_alloc: 218103808 data_used: 1888256
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:00.818430+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:01.818560+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:02.818773+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:03.818988+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:04.819157+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1914569 data_alloc: 218103808 data_used: 1888256
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:05.819290+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4670000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:06.819454+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:07.819814+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4670000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:08.819937+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:09.820089+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1914569 data_alloc: 218103808 data_used: 1888256
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:10.820250+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4670000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4670000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:11.820412+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:12.820596+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:13.820743+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:14.820914+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4670000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1914569 data_alloc: 218103808 data_used: 1888256
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:15.821040+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:16.821173+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4670000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:17.821386+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:18.821516+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221536256 unmapped: 49283072 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4670000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:19.821699+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221544448 unmapped: 49274880 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1914569 data_alloc: 218103808 data_used: 1888256
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:20.821826+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221544448 unmapped: 49274880 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:21.821962+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221544448 unmapped: 49274880 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:22.822126+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221544448 unmapped: 49274880 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4670000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:23.822268+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221544448 unmapped: 49274880 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:24.822394+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4670000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221544448 unmapped: 49274880 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1914569 data_alloc: 218103808 data_used: 1888256
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:25.822512+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221544448 unmapped: 49274880 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:26.822640+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221544448 unmapped: 49274880 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:27.822781+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221544448 unmapped: 49274880 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:28.822898+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221544448 unmapped: 49274880 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:29.823012+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4670000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221544448 unmapped: 49274880 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1914569 data_alloc: 218103808 data_used: 1888256
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:30.823143+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221544448 unmapped: 49274880 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:31.823268+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221544448 unmapped: 49274880 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:32.823425+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221544448 unmapped: 49274880 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:33.823636+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221544448 unmapped: 49274880 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:34.823760+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4670000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221544448 unmapped: 49274880 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:35.823892+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1914569 data_alloc: 218103808 data_used: 1888256
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221552640 unmapped: 49266688 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:36.824041+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221552640 unmapped: 49266688 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:37.824187+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221552640 unmapped: 49266688 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:38.824377+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221552640 unmapped: 49266688 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:39.824509+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221552640 unmapped: 49266688 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:40.824627+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1914569 data_alloc: 218103808 data_used: 1888256
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4670000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221552640 unmapped: 49266688 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:41.824773+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221552640 unmapped: 49266688 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:42.824940+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221552640 unmapped: 49266688 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:43.825064+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221552640 unmapped: 49266688 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:44.825248+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 heartbeat osd_stat(store_statfs(0x4f4670000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221552640 unmapped: 49266688 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:45.825540+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1914569 data_alloc: 218103808 data_used: 1888256
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221552640 unmapped: 49266688 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:46.825684+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221552640 unmapped: 49266688 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:47.825814+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b441d000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 57.480453491s of 57.782039642s, submitted: 106
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _renew_subs
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _send_mon_message to mon.compute-1 at v2:192.168.122.101:3300/0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 143 handle_osd_map epochs [144,144], i have 144, src has [1,144]
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221593600 unmapped: 49225728 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:48.825964+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 144 heartbeat osd_stat(store_statfs(0x4f4670000/0x0/0x4ffc00000, data 0x959c97/0xa2c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _renew_subs
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _send_mon_message to mon.compute-1 at v2:192.168.122.101:3300/0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f466b000/0x0/0x4ffc00000, data 0x95bba7/0xa30000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 145 ms_handle_reset con 0x5631b441d000 session 0x5631b5397c20
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221593600 unmapped: 49225728 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:49.826080+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f466b000/0x0/0x4ffc00000, data 0x95bba7/0xa30000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221593600 unmapped: 49225728 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:50.826295+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1927667 data_alloc: 218103808 data_used: 1888256
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221593600 unmapped: 49225728 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:51.826489+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221593600 unmapped: 49225728 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:52.826727+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f4667000/0x0/0x4ffc00000, data 0x95db70/0xa34000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221601792 unmapped: 49217536 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:53.826947+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221601792 unmapped: 49217536 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:54.827127+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221601792 unmapped: 49217536 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:55.827270+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1927667 data_alloc: 218103808 data_used: 1888256
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f4667000/0x0/0x4ffc00000, data 0x95db70/0xa34000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5586800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 145 ms_handle_reset con 0x5631b5586800 session 0x5631b6bf2d20
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221618176 unmapped: 49201152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:56.827428+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221618176 unmapped: 49201152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:57.827609+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f3800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b55f3800 session 0x5631b5f623c0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b40f7000 session 0x5631b3379a40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b441d000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b441d000 session 0x5631b33705a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221618176 unmapped: 49201152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b441d400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b441d400 session 0x5631b337b0e0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f4664000/0x0/0x4ffc00000, data 0x95f95e/0xa37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.240273476s of 10.334043503s, submitted: 50
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:58.827710+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b5197400 session 0x5631b636fa40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fbc00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631bb8fbc00 session 0x5631b2b3be00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b40f7000 session 0x5631b6921a40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b441d000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b441d000 session 0x5631b6bf3c20
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b441d400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b441d400 session 0x5631b35dfc20
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221732864 unmapped: 49086464 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f3c8f000/0x0/0x4ffc00000, data 0x133595e/0x140d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:50:59.827840+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221732864 unmapped: 49086464 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:00.827982+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2000623 data_alloc: 218103808 data_used: 1888256
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221732864 unmapped: 49086464 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:01.828108+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221732864 unmapped: 49086464 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:02.828258+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221732864 unmapped: 49086464 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:03.828383+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221732864 unmapped: 49086464 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:04.828604+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221732864 unmapped: 49086464 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f3c8f000/0x0/0x4ffc00000, data 0x133595e/0x140d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:05.828801+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2000623 data_alloc: 218103808 data_used: 1888256
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221741056 unmapped: 49078272 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b5197400 session 0x5631b2b3b0e0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:06.828929+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221741056 unmapped: 49078272 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fa400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fb400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:07.829229+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221741056 unmapped: 49078272 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:08.829402+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f3c8e000/0x0/0x4ffc00000, data 0x1335981/0x140e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221757440 unmapped: 49061888 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:09.829505+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222748672 unmapped: 48070656 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:10.829690+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2072012 data_alloc: 234881024 data_used: 12201984
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222748672 unmapped: 48070656 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:11.829808+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b68d1000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.287197113s of 13.350550652s, submitted: 13
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b68d1000 session 0x5631b331d680
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222756864 unmapped: 48062464 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:12.830031+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222765056 unmapped: 48054272 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:13.830183+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f3c8c000/0x0/0x4ffc00000, data 0x13359f3/0x1410000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222773248 unmapped: 48046080 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:14.830409+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222773248 unmapped: 48046080 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:15.830542+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2075678 data_alloc: 234881024 data_used: 12210176
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222781440 unmapped: 48037888 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:16.830716+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f3c8c000/0x0/0x4ffc00000, data 0x13359f3/0x1410000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222781440 unmapped: 48037888 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:17.830895+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f3c8c000/0x0/0x4ffc00000, data 0x13359f3/0x1410000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xab5f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222781440 unmapped: 48037888 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:18.831054+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225435648 unmapped: 45383680 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:19.831181+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225452032 unmapped: 45367296 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:20.831332+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2134968 data_alloc: 234881024 data_used: 13152256
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 226541568 unmapped: 44277760 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:21.831487+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 226549760 unmapped: 44269568 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:22.831619+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 226549760 unmapped: 44269568 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:23.831734+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f322e000/0x0/0x4ffc00000, data 0x19839f3/0x1a5e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xaf6f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 226549760 unmapped: 44269568 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:24.831877+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 226549760 unmapped: 44269568 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:25.832001+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2147658 data_alloc: 234881024 data_used: 13295616
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.303266525s of 14.531414986s, submitted: 67
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225419264 unmapped: 45400064 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:26.832141+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225419264 unmapped: 45400064 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:27.832281+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225427456 unmapped: 45391872 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:28.832421+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b441d000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b441d000 session 0x5631b53974a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225435648 unmapped: 45383680 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:29.832544+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f322a000/0x0/0x4ffc00000, data 0x1985a65/0x1a62000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xaf6f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225435648 unmapped: 45383680 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:30.832686+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2146348 data_alloc: 234881024 data_used: 13295616
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225435648 unmapped: 45383680 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:31.832807+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225435648 unmapped: 45383680 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:32.833027+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225435648 unmapped: 45383680 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:33.833144+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225435648 unmapped: 45383680 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:34.833311+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b441d400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f2089000/0x0/0x4ffc00000, data 0x1986a65/0x1a63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b40f7000 session 0x5631b386f4a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f2089000/0x0/0x4ffc00000, data 0x1986a65/0x1a63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225435648 unmapped: 45383680 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:35.833490+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2146426 data_alloc: 234881024 data_used: 13303808
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225443840 unmapped: 45375488 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:36.833660+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b5197400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b5197400 session 0x5631b402a5a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b51c6800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.851206779s of 10.898889542s, submitted: 9
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225452032 unmapped: 45367296 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b51c6800 session 0x5631b6b381e0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:37.833792+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225452032 unmapped: 45367296 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:38.833918+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225452032 unmapped: 45367296 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:39.834051+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f208b000/0x0/0x4ffc00000, data 0x19869f3/0x1a61000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225460224 unmapped: 45359104 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:40.834197+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2146472 data_alloc: 234881024 data_used: 13303808
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225460224 unmapped: 45359104 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:41.834321+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225460224 unmapped: 45359104 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:42.834560+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225460224 unmapped: 45359104 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:43.834683+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f208b000/0x0/0x4ffc00000, data 0x19869f3/0x1a61000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225460224 unmapped: 45359104 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:44.834833+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225460224 unmapped: 45359104 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:45.834981+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2146472 data_alloc: 234881024 data_used: 13303808
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f208b000/0x0/0x4ffc00000, data 0x19869f3/0x1a61000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:46.835159+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225460224 unmapped: 45359104 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:47.835294+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225460224 unmapped: 45359104 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:48.835420+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225460224 unmapped: 45359104 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f208b000/0x0/0x4ffc00000, data 0x19869f3/0x1a61000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:49.835572+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225460224 unmapped: 45359104 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f208b000/0x0/0x4ffc00000, data 0x19869f3/0x1a61000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:50.835816+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225460224 unmapped: 45359104 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2146472 data_alloc: 234881024 data_used: 13303808
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.099124908s of 14.182600975s, submitted: 13
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:51.835967+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225460224 unmapped: 45359104 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:52.836200+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225460224 unmapped: 45359104 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:53.836394+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225460224 unmapped: 45359104 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b441d400 session 0x5631b6c072c0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:54.836572+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:55.836752+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f2089000/0x0/0x4ffc00000, data 0x19879f3/0x1a62000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2146872 data_alloc: 234881024 data_used: 13303808
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:56.836942+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:57.837075+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:58.837244+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:51:59.837413+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f2089000/0x0/0x4ffc00000, data 0x19879f3/0x1a62000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:00.837574+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2146872 data_alloc: 234881024 data_used: 13303808
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:01.838451+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:02.838670+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:03.838808+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f2089000/0x0/0x4ffc00000, data 0x19879f3/0x1a62000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:04.838952+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:05.839077+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2146872 data_alloc: 234881024 data_used: 13303808
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:06.839228+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f2089000/0x0/0x4ffc00000, data 0x19879f3/0x1a62000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:07.839377+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:08.839557+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:09.839690+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:10.839845+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2146872 data_alloc: 234881024 data_used: 13303808
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:11.840702+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225468416 unmapped: 45350912 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:12.840877+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f2089000/0x0/0x4ffc00000, data 0x19879f3/0x1a62000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225476608 unmapped: 45342720 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:13.841038+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225476608 unmapped: 45342720 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:14.841122+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225476608 unmapped: 45342720 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f2089000/0x0/0x4ffc00000, data 0x19879f3/0x1a62000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:15.841283+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b40f7000 session 0x5631b64e74a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b3365400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.129930496s of 24.143545151s, submitted: 3
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225476608 unmapped: 45342720 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b3365400 session 0x5631b53970e0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2144984 data_alloc: 234881024 data_used: 13299712
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:16.841403+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225476608 unmapped: 45342720 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:17.841580+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225476608 unmapped: 45342720 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631bb8fa400 session 0x5631b63a9a40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631bb8fb400 session 0x5631b3670b40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f208c000/0x0/0x4ffc00000, data 0x1987981/0x1a60000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:18.841791+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b55f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 225476608 unmapped: 45342720 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b55f7000 session 0x5631b5394000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:19.841932+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218218496 unmapped: 52600832 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:20.842082+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218218496 unmapped: 52600832 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1946858 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:21.842250+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218218496 unmapped: 52600832 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:22.842458+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218218496 unmapped: 52600832 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:23.842587+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218218496 unmapped: 52600832 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b4000/0x0/0x4ffc00000, data 0x95f95e/0xa37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b3365400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b3365400 session 0x5631b5397e00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:24.842704+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218243072 unmapped: 52576256 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b40f7000 session 0x5631b3379a40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:25.842861+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218243072 unmapped: 52576256 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1945316 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:26.842972+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218243072 unmapped: 52576256 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:27.843146+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218243072 unmapped: 52576256 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fa400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.070165634s of 12.286311150s, submitted: 71
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631bb8fa400 session 0x5631b3ff3680
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:28.843383+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218251264 unmapped: 52568064 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b4000/0x0/0x4ffc00000, data 0x95f96d/0xa38000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:29.843574+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218251264 unmapped: 52568064 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:30.843767+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218251264 unmapped: 52568064 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1950348 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:31.843948+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218251264 unmapped: 52568064 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:32.844099+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218251264 unmapped: 52568064 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:33.844233+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218251264 unmapped: 52568064 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:34.844432+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218251264 unmapped: 52568064 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b4000/0x0/0x4ffc00000, data 0x95f96d/0xa38000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:35.844565+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218251264 unmapped: 52568064 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1950348 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:36.844711+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b4000/0x0/0x4ffc00000, data 0x95f96d/0xa38000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218251264 unmapped: 52568064 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:37.844842+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b4000/0x0/0x4ffc00000, data 0x95f96d/0xa38000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218251264 unmapped: 52568064 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:38.844968+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218251264 unmapped: 52568064 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b4000/0x0/0x4ffc00000, data 0x95f96d/0xa38000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:39.845496+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218251264 unmapped: 52568064 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:40.845590+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218251264 unmapped: 52568064 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1950348 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:41.845774+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218251264 unmapped: 52568064 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fb400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631bb8fb400 session 0x5631b644c1e0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b68d1800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b68d1800 session 0x5631b64e7860
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b3365400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b3365400 session 0x5631b5397c20
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b40f7000 session 0x5631b636fa40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b68d1800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.203206062s of 14.218482971s, submitted: 4
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b68d1800 session 0x5631b6921a40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fa400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631bb8fa400 session 0x5631b69d01e0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fb400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631bb8fb400 session 0x5631b3362f00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b3365400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b3365400 session 0x5631b6c07680
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:42.845972+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b40f7000 session 0x5631b5f652c0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218529792 unmapped: 52289536 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f26b9000/0x0/0x4ffc00000, data 0x13589df/0x1433000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:43.846117+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218529792 unmapped: 52289536 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:44.846284+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218529792 unmapped: 52289536 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:45.846459+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218529792 unmapped: 52289536 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2036782 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:46.846655+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218529792 unmapped: 52289536 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:47.846821+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218529792 unmapped: 52289536 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f26b9000/0x0/0x4ffc00000, data 0x13589df/0x1433000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:48.847025+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218537984 unmapped: 52281344 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:49.847212+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218537984 unmapped: 52281344 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:50.847470+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218546176 unmapped: 52273152 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2036782 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f26b9000/0x0/0x4ffc00000, data 0x13589df/0x1433000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:51.847608+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b68d1800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f26b9000/0x0/0x4ffc00000, data 0x13589df/0x1433000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [0,0,1])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b68d1800 session 0x5631b63a94a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218882048 unmapped: 51937280 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fa400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.918356895s of 10.087418556s, submitted: 52
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b51c7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:52.847781+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 218890240 unmapped: 51929088 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:53.847917+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 219987968 unmapped: 50831360 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:54.848106+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223608832 unmapped: 47210496 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:55.848231+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223608832 unmapped: 47210496 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2108895 data_alloc: 234881024 data_used: 12058624
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b2d4e400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b2d4e400 session 0x5631b64283c0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:56.848393+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223617024 unmapped: 47202304 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f2691000/0x0/0x4ffc00000, data 0x137fa02/0x145b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:57.848536+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223617024 unmapped: 47202304 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:58.848695+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223625216 unmapped: 47194112 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:52:59.848825+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223625216 unmapped: 47194112 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:00.848986+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223625216 unmapped: 47194112 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2112409 data_alloc: 234881024 data_used: 12062720
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:01.849188+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223625216 unmapped: 47194112 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:02.849433+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bc6b4400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223625216 unmapped: 47194112 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f268f000/0x0/0x4ffc00000, data 0x137fa74/0x145d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:03.849587+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223625216 unmapped: 47194112 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f268f000/0x0/0x4ffc00000, data 0x137fa74/0x145d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.809801102s of 11.819898605s, submitted: 4
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:04.849738+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 228433920 unmapped: 42385408 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:05.849880+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 229515264 unmapped: 41304064 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2225273 data_alloc: 234881024 data_used: 12775424
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f1aff000/0x0/0x4ffc00000, data 0x1f0da74/0x1feb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:06.849961+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230375424 unmapped: 40443904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:07.850114+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230375424 unmapped: 40443904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f1aaa000/0x0/0x4ffc00000, data 0x1f5ca74/0x203a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f1aaa000/0x0/0x4ffc00000, data 0x1f5ca74/0x203a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:08.850300+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230375424 unmapped: 40443904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:09.850476+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230375424 unmapped: 40443904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:10.850629+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230375424 unmapped: 40443904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2227097 data_alloc: 234881024 data_used: 12918784
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f1aaa000/0x0/0x4ffc00000, data 0x1f5ca74/0x203a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:11.850805+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230383616 unmapped: 40435712 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:12.850978+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230383616 unmapped: 40435712 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:13.851117+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230383616 unmapped: 40435712 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:14.851279+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230391808 unmapped: 40427520 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:15.851403+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230391808 unmapped: 40427520 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2222749 data_alloc: 234881024 data_used: 12922880
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:16.851533+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230391808 unmapped: 40427520 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f1a8e000/0x0/0x4ffc00000, data 0x1f80a74/0x205e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:17.851725+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f1a8e000/0x0/0x4ffc00000, data 0x1f80a74/0x205e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230391808 unmapped: 40427520 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:18.870909+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230391808 unmapped: 40427520 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f1a8e000/0x0/0x4ffc00000, data 0x1f80a74/0x205e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:19.871070+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230400000 unmapped: 40419328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:20.871220+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230400000 unmapped: 40419328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2222749 data_alloc: 234881024 data_used: 12922880
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:21.871399+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230400000 unmapped: 40419328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:22.871531+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230408192 unmapped: 40411136 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:23.872220+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230408192 unmapped: 40411136 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.634010315s of 19.986522675s, submitted: 139
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:24.872431+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f1a8b000/0x0/0x4ffc00000, data 0x1f83a74/0x2061000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230408192 unmapped: 40411136 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:25.872591+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230408192 unmapped: 40411136 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2222489 data_alloc: 234881024 data_used: 12922880
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631bc6b4400 session 0x5631b6428b40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:26.872771+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230408192 unmapped: 40411136 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f1a8b000/0x0/0x4ffc00000, data 0x1f83a74/0x2061000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:27.873023+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230416384 unmapped: 40402944 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:28.873138+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230416384 unmapped: 40402944 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:29.873451+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230416384 unmapped: 40402944 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:30.873677+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230416384 unmapped: 40402944 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2222489 data_alloc: 234881024 data_used: 12922880
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _send_mon_message to mon.compute-1 at v2:192.168.122.101:3300/0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:31.873990+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230416384 unmapped: 40402944 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f1a8b000/0x0/0x4ffc00000, data 0x1f83a74/0x2061000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:32.874208+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230424576 unmapped: 40394752 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:33.874431+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230424576 unmapped: 40394752 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:34.874557+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230424576 unmapped: 40394752 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:35.874723+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230424576 unmapped: 40394752 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2222489 data_alloc: 234881024 data_used: 12922880
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.534486771s of 12.544783592s, submitted: 2
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:36.874839+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230424576 unmapped: 40394752 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:37.874988+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f1a8b000/0x0/0x4ffc00000, data 0x1f83a74/0x2061000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230432768 unmapped: 40386560 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:38.875147+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230432768 unmapped: 40386560 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:39.875324+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f1a8b000/0x0/0x4ffc00000, data 0x1f83a74/0x2061000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230432768 unmapped: 40386560 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:40.875548+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230432768 unmapped: 40386560 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2222657 data_alloc: 234881024 data_used: 12922880
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:41.875731+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230432768 unmapped: 40386560 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:42.875930+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f1a8b000/0x0/0x4ffc00000, data 0x1f83a74/0x2061000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230432768 unmapped: 40386560 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:43.876091+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230432768 unmapped: 40386560 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:44.876227+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230440960 unmapped: 40378368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:45.876406+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230440960 unmapped: 40378368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2222657 data_alloc: 234881024 data_used: 12922880
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:46.876656+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f1a8b000/0x0/0x4ffc00000, data 0x1f83a74/0x2061000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230449152 unmapped: 40370176 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:47.876786+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230449152 unmapped: 40370176 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b2d4e400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b2d4e400 session 0x5631b644c5a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b3365400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.738879204s of 11.748149872s, submitted: 3
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b3365400 session 0x5631b3870960
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:48.876903+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230457344 unmapped: 40361984 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:49.877076+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230457344 unmapped: 40361984 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631bb8fa400 session 0x5631b6c070e0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b51c7000 session 0x5631b5f66f00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:50.877208+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 230465536 unmapped: 40353792 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2220696 data_alloc: 234881024 data_used: 12922880
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:51.877310+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f1a8d000/0x0/0x4ffc00000, data 0x1f83a02/0x205f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b40f7000 session 0x5631b6bf3c20
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222167040 unmapped: 48652288 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:52.877487+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222167040 unmapped: 48652288 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:53.877623+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222167040 unmapped: 48652288 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b2000/0x0/0x4ffc00000, data 0x95f96d/0xa38000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:54.877804+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222167040 unmapped: 48652288 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:55.877950+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222167040 unmapped: 48652288 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1972234 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:56.878125+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222167040 unmapped: 48652288 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:57.878279+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222167040 unmapped: 48652288 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b2d4e400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:58.878424+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.059069633s of 10.277308464s, submitted: 75
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b2d4e400 session 0x5631b6b52f00
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b3365400
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b3365400 session 0x5631b337b860
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:53:59.878592+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:00.878773+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:01.878957+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:02.879238+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:03.879414+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:04.879543+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:05.879739+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:06.879893+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:07.880028+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:08.880180+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:09.880320+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:10.880525+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:11.880728+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:12.880934+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:13.881126+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:14.881253+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:15.881448+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:16.881620+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:17.881822+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:18.881962+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:19.882137+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:20.882302+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:21.882450+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:22.882665+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:23.882828+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:24.882964+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222183424 unmapped: 48635904 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:25.883028+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222191616 unmapped: 48627712 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:26.883157+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222191616 unmapped: 48627712 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:27.883303+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222191616 unmapped: 48627712 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:28.883461+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222191616 unmapped: 48627712 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:29.883581+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222191616 unmapped: 48627712 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:30.883706+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222191616 unmapped: 48627712 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:31.883841+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222191616 unmapped: 48627712 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:32.883999+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222191616 unmapped: 48627712 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:33.884117+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b9fb0400 session 0x5631b6bf25a0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b40f7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222208000 unmapped: 48611328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:34.884281+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222208000 unmapped: 48611328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:35.884396+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222208000 unmapped: 48611328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:36.884513+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222208000 unmapped: 48611328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:37.884635+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222208000 unmapped: 48611328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:38.884776+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222208000 unmapped: 48611328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:39.884962+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222208000 unmapped: 48611328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:40.885179+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222208000 unmapped: 48611328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:41.885395+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222208000 unmapped: 48611328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:42.885587+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222208000 unmapped: 48611328 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:43.885713+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222216192 unmapped: 48603136 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:44.885962+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222216192 unmapped: 48603136 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:45.886212+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222216192 unmapped: 48603136 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:46.886400+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222216192 unmapped: 48603136 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:47.886566+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222216192 unmapped: 48603136 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:48.886752+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222216192 unmapped: 48603136 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:49.886946+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222216192 unmapped: 48603136 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:50.887143+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222216192 unmapped: 48603136 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:51.887283+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222224384 unmapped: 48594944 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:52.887482+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222224384 unmapped: 48594944 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:53.887608+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222224384 unmapped: 48594944 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:54.887732+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222224384 unmapped: 48594944 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:55.887864+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222224384 unmapped: 48594944 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:56.888026+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222224384 unmapped: 48594944 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:57.888179+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222224384 unmapped: 48594944 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:58.888321+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222224384 unmapped: 48594944 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:54:59.888420+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222232576 unmapped: 48586752 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:00.888623+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222232576 unmapped: 48586752 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:01.888750+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222232576 unmapped: 48586752 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:02.888907+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222232576 unmapped: 48586752 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:03.889037+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222232576 unmapped: 48586752 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:04.889235+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222232576 unmapped: 48586752 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:05.889409+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222232576 unmapped: 48586752 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:06.889562+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222232576 unmapped: 48586752 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:07.889738+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222240768 unmapped: 48578560 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:08.889871+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222240768 unmapped: 48578560 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:09.890118+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222240768 unmapped: 48578560 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:10.890305+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222240768 unmapped: 48578560 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:11.890421+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222240768 unmapped: 48578560 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:12.890590+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222240768 unmapped: 48578560 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:13.890671+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222240768 unmapped: 48578560 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:14.890822+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222248960 unmapped: 48570368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:15.890942+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222248960 unmapped: 48570368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:16.891089+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222248960 unmapped: 48570368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:17.891276+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222248960 unmapped: 48570368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:18.891474+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222248960 unmapped: 48570368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:19.891608+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222248960 unmapped: 48570368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:20.891797+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222248960 unmapped: 48570368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:21.892060+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222248960 unmapped: 48570368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:22.892207+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222248960 unmapped: 48570368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:23.892387+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222248960 unmapped: 48570368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:24.892579+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222248960 unmapped: 48570368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:25.892742+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222248960 unmapped: 48570368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:26.892891+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222248960 unmapped: 48570368 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:27.892991+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222257152 unmapped: 48562176 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:28.893129+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222257152 unmapped: 48562176 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:29.893248+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222257152 unmapped: 48562176 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:30.893394+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222257152 unmapped: 48562176 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:31.893527+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222257152 unmapped: 48562176 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:32.893684+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222257152 unmapped: 48562176 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:33.893826+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222257152 unmapped: 48562176 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:34.893950+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222257152 unmapped: 48562176 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:35.894107+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222257152 unmapped: 48562176 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:36.894233+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222257152 unmapped: 48562176 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:37.894368+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: mgrc ms_handle_reset ms_handle_reset con 0x5631bb8fa800
Sep 30 19:04:00 compute-0 ceph-osd[82241]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2285351161
Sep 30 19:04:00 compute-0 ceph-osd[82241]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2285351161,v1:192.168.122.100:6801/2285351161]
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: get_auth_request con 0x5631bb8fb400 auth_method 0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: mgrc handle_mgr_configure stats_period=5
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222265344 unmapped: 48553984 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:38.894727+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b9fb0000 session 0x5631b59d9860
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631b51c7000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 ms_handle_reset con 0x5631b73dfc00 session 0x5631b3ff3a40
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: handle_auth_request added challenge on 0x5631bb8fa000
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222265344 unmapped: 48553984 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:39.894837+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222265344 unmapped: 48553984 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:40.894974+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222265344 unmapped: 48553984 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:41.895151+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222265344 unmapped: 48553984 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:42.895322+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222265344 unmapped: 48553984 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:43.895525+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222273536 unmapped: 48545792 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:44.895699+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222273536 unmapped: 48545792 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:45.895877+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222273536 unmapped: 48545792 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:46.896047+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222273536 unmapped: 48545792 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:47.896192+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222273536 unmapped: 48545792 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:48.896395+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222273536 unmapped: 48545792 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:49.896583+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222273536 unmapped: 48545792 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:50.896746+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222273536 unmapped: 48545792 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:51.896889+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 48537600 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:52.897068+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 48537600 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:53.897232+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 48537600 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:54.897359+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 48537600 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:55.897468+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 48537600 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:56.897593+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 48537600 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:57.897710+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 48537600 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:58.897825+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 48537600 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:55:59.897942+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 48537600 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:00.898048+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222289920 unmapped: 48529408 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:01.898169+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: do_command 'config diff' '{prefix=config diff}'
Sep 30 19:04:00 compute-0 ceph-osd[82241]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Sep 30 19:04:00 compute-0 ceph-osd[82241]: do_command 'config show' '{prefix=config show}'
Sep 30 19:04:00 compute-0 ceph-osd[82241]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222019584 unmapped: 48799744 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: do_command 'counter dump' '{prefix=counter dump}'
Sep 30 19:04:00 compute-0 ceph-osd[82241]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:02.898326+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: do_command 'counter schema' '{prefix=counter schema}'
Sep 30 19:04:00 compute-0 ceph-osd[82241]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221487104 unmapped: 49332224 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:03.898458+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221806592 unmapped: 49012736 heap: 270819328 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:04.898675+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: do_command 'log dump' '{prefix=log dump}'
Sep 30 19:04:00 compute-0 ceph-osd[82241]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221806592 unmapped: 60055552 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: do_command 'perf dump' '{prefix=perf dump}'
Sep 30 19:04:00 compute-0 ceph-osd[82241]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:05.898834+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Sep 30 19:04:00 compute-0 ceph-osd[82241]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Sep 30 19:04:00 compute-0 ceph-osd[82241]: do_command 'perf schema' '{prefix=perf schema}'
Sep 30 19:04:00 compute-0 ceph-osd[82241]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221577216 unmapped: 60284928 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:06.898996+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221577216 unmapped: 60284928 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:07.899142+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221577216 unmapped: 60284928 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:08.899278+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221585408 unmapped: 60276736 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:09.899416+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221585408 unmapped: 60276736 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:10.899549+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread fragmentation_score=0.001074 took=0.000030s
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221585408 unmapped: 60276736 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:11.899672+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221585408 unmapped: 60276736 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:12.899836+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221585408 unmapped: 60276736 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:13.899993+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221585408 unmapped: 60276736 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:14.900133+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221585408 unmapped: 60276736 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:15.900267+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221585408 unmapped: 60276736 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:16.900412+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221585408 unmapped: 60276736 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:17.900892+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221593600 unmapped: 60268544 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:18.901030+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221593600 unmapped: 60268544 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:19.901148+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221593600 unmapped: 60268544 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:20.901286+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221593600 unmapped: 60268544 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:21.901386+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221593600 unmapped: 60268544 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:22.901530+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:23.901637+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221593600 unmapped: 60268544 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:24.901751+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221593600 unmapped: 60268544 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:25.901870+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221593600 unmapped: 60268544 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:26.902019+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221601792 unmapped: 60260352 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:27.902175+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221601792 unmapped: 60260352 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:28.902306+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221601792 unmapped: 60260352 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:29.902402+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221601792 unmapped: 60260352 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:30.902585+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221601792 unmapped: 60260352 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:31.902715+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221601792 unmapped: 60260352 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:32.902863+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221601792 unmapped: 60260352 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:33.902977+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221601792 unmapped: 60260352 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:34.904992+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221609984 unmapped: 60252160 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:35.905102+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221609984 unmapped: 60252160 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:36.905246+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221609984 unmapped: 60252160 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:37.905469+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221609984 unmapped: 60252160 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:38.905602+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221609984 unmapped: 60252160 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:39.905793+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221609984 unmapped: 60252160 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:40.905934+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221609984 unmapped: 60252160 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:41.906093+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221609984 unmapped: 60252160 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:42.906295+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221618176 unmapped: 60243968 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:43.906442+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221618176 unmapped: 60243968 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:44.906584+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221618176 unmapped: 60243968 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:45.906721+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221618176 unmapped: 60243968 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:46.906863+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221618176 unmapped: 60243968 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:47.907021+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221618176 unmapped: 60243968 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:48.907180+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221618176 unmapped: 60243968 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:49.907318+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221618176 unmapped: 60243968 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:50.907514+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221626368 unmapped: 60235776 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:51.907679+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221626368 unmapped: 60235776 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:52.907866+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221626368 unmapped: 60235776 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:53.908017+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221626368 unmapped: 60235776 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:54.908269+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221626368 unmapped: 60235776 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:55.908388+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221626368 unmapped: 60235776 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:56.908601+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221626368 unmapped: 60235776 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:57.908760+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221626368 unmapped: 60235776 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:58.908883+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221634560 unmapped: 60227584 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:56:59.909128+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221634560 unmapped: 60227584 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:00.909280+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221634560 unmapped: 60227584 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:01.909452+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221634560 unmapped: 60227584 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:02.909611+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221634560 unmapped: 60227584 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:03.909730+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221634560 unmapped: 60227584 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:04.909853+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221634560 unmapped: 60227584 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:05.909966+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221634560 unmapped: 60227584 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:06.910098+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221634560 unmapped: 60227584 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:07.910231+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221634560 unmapped: 60227584 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:08.910414+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221634560 unmapped: 60227584 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:09.910536+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221634560 unmapped: 60227584 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:10.910645+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221634560 unmapped: 60227584 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:11.910807+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221642752 unmapped: 60219392 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:12.910969+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221642752 unmapped: 60219392 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:13.911106+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221642752 unmapped: 60219392 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:14.911220+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221642752 unmapped: 60219392 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:15.911396+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221642752 unmapped: 60219392 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:16.911563+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221642752 unmapped: 60219392 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:17.911702+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221642752 unmapped: 60219392 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:18.911838+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221642752 unmapped: 60219392 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:19.911955+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221650944 unmapped: 60211200 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:20.912777+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221650944 unmapped: 60211200 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:21.913457+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221650944 unmapped: 60211200 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:22.914053+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221650944 unmapped: 60211200 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:23.914283+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221650944 unmapped: 60211200 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:24.914728+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221650944 unmapped: 60211200 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:25.914906+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221650944 unmapped: 60211200 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:26.915186+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221650944 unmapped: 60211200 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:27.915405+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221650944 unmapped: 60211200 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:28.915835+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221650944 unmapped: 60211200 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:29.916248+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221659136 unmapped: 60203008 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:30.916604+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221659136 unmapped: 60203008 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:31.916925+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221659136 unmapped: 60203008 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:32.917261+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221659136 unmapped: 60203008 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:33.917537+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221659136 unmapped: 60203008 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:34.917814+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221659136 unmapped: 60203008 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:35.917992+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221659136 unmapped: 60203008 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:36.918180+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221659136 unmapped: 60203008 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:37.918382+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221659136 unmapped: 60203008 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:38.918543+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221667328 unmapped: 60194816 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:39.918707+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221667328 unmapped: 60194816 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:40.918908+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221667328 unmapped: 60194816 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:41.919092+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221667328 unmapped: 60194816 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:42.919296+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221667328 unmapped: 60194816 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:43.919420+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221667328 unmapped: 60194816 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:44.919560+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221667328 unmapped: 60194816 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:45.920281+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221667328 unmapped: 60194816 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:46.920413+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221675520 unmapped: 60186624 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:47.920571+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221675520 unmapped: 60186624 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:48.920757+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221675520 unmapped: 60186624 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:49.920870+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221675520 unmapped: 60186624 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:50.921007+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221675520 unmapped: 60186624 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:51.921196+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221675520 unmapped: 60186624 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:52.921384+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221675520 unmapped: 60186624 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:53.921526+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221675520 unmapped: 60186624 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:54.921685+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221675520 unmapped: 60186624 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:55.921823+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221675520 unmapped: 60186624 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:56.921956+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221675520 unmapped: 60186624 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 28K writes, 105K keys, 28K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.02 MB/s
                                           Cumulative WAL: 28K writes, 10K syncs, 2.83 writes per sync, written: 0.10 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2418 writes, 7509 keys, 2418 commit groups, 1.0 writes per commit group, ingest: 6.38 MB, 0.01 MB/s
                                           Interval WAL: 2418 writes, 1061 syncs, 2.28 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:57.922134+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221675520 unmapped: 60186624 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:58.922274+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221675520 unmapped: 60186624 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:57:59.922395+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221683712 unmapped: 60178432 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:00.922619+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221683712 unmapped: 60178432 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:01.922862+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221683712 unmapped: 60178432 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:02.923118+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221683712 unmapped: 60178432 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:03.923293+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221683712 unmapped: 60178432 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:04.923441+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221683712 unmapped: 60178432 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:05.923584+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221683712 unmapped: 60178432 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:06.923725+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221683712 unmapped: 60178432 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:07.923861+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221691904 unmapped: 60170240 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:08.923966+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221691904 unmapped: 60170240 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:09.924065+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221691904 unmapped: 60170240 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:10.924280+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221691904 unmapped: 60170240 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:11.924434+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221691904 unmapped: 60170240 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:12.924643+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221691904 unmapped: 60170240 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:13.924960+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221691904 unmapped: 60170240 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:14.925089+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221691904 unmapped: 60170240 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:15.925436+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221700096 unmapped: 60162048 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:16.925600+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221700096 unmapped: 60162048 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:17.925738+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221700096 unmapped: 60162048 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:18.925842+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221700096 unmapped: 60162048 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:19.926025+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221700096 unmapped: 60162048 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:20.926165+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221700096 unmapped: 60162048 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:21.926307+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221700096 unmapped: 60162048 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:22.926543+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221700096 unmapped: 60162048 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:23.926651+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221700096 unmapped: 60162048 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:24.926815+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221700096 unmapped: 60162048 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:25.926937+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221708288 unmapped: 60153856 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:26.927055+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221708288 unmapped: 60153856 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:27.927213+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221708288 unmapped: 60153856 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:28.927357+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221708288 unmapped: 60153856 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:29.927528+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221708288 unmapped: 60153856 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:30.927664+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221708288 unmapped: 60153856 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:31.927805+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221708288 unmapped: 60153856 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:32.928024+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221708288 unmapped: 60153856 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _send_mon_message to mon.compute-1 at v2:192.168.122.101:3300/0
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:33.928181+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221716480 unmapped: 60145664 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:34.928307+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221716480 unmapped: 60145664 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:35.928459+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221716480 unmapped: 60145664 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:36.928597+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221716480 unmapped: 60145664 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:37.928771+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221716480 unmapped: 60145664 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:38.928892+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221716480 unmapped: 60145664 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:39.929023+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221724672 unmapped: 60137472 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:40.929153+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221724672 unmapped: 60137472 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:41.929303+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221724672 unmapped: 60137472 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:42.929509+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221724672 unmapped: 60137472 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:43.929679+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221724672 unmapped: 60137472 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:44.929817+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221724672 unmapped: 60137472 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:45.929988+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221724672 unmapped: 60137472 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:46.930153+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221724672 unmapped: 60137472 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:47.930435+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221724672 unmapped: 60137472 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:48.930704+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221724672 unmapped: 60137472 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:49.930897+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221732864 unmapped: 60129280 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:50.931095+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221732864 unmapped: 60129280 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:51.931277+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221732864 unmapped: 60129280 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:52.931475+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221732864 unmapped: 60129280 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:53.931639+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221732864 unmapped: 60129280 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:54.931788+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221732864 unmapped: 60129280 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:55.931923+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221732864 unmapped: 60129280 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:56.932102+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221732864 unmapped: 60129280 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:57.932247+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221741056 unmapped: 60121088 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:58.932408+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221741056 unmapped: 60121088 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:58:59.932575+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221741056 unmapped: 60121088 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:00.932724+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221741056 unmapped: 60121088 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:01.932863+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221741056 unmapped: 60121088 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:02.933074+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221741056 unmapped: 60121088 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:03.933231+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221741056 unmapped: 60121088 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:04.933430+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221741056 unmapped: 60121088 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:05.933559+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221749248 unmapped: 60112896 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:06.933723+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221749248 unmapped: 60112896 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:07.933854+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221749248 unmapped: 60112896 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:08.933976+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221749248 unmapped: 60112896 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:09.934088+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221749248 unmapped: 60112896 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:10.934260+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221749248 unmapped: 60112896 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:11.934419+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221749248 unmapped: 60112896 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:12.934650+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221749248 unmapped: 60112896 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:13.934781+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221749248 unmapped: 60112896 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:14.934926+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221749248 unmapped: 60112896 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:15.935066+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221749248 unmapped: 60112896 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:16.935212+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221749248 unmapped: 60112896 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:17.935393+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221749248 unmapped: 60112896 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:18.935581+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221749248 unmapped: 60112896 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:19.935717+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221757440 unmapped: 60104704 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:20.935870+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221757440 unmapped: 60104704 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:21.936014+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221757440 unmapped: 60104704 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:22.936173+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221757440 unmapped: 60104704 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:23.936318+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221757440 unmapped: 60104704 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:24.936549+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221757440 unmapped: 60104704 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:25.936737+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221757440 unmapped: 60104704 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:26.937610+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221757440 unmapped: 60104704 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:27.937730+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221765632 unmapped: 60096512 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:28.937896+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221765632 unmapped: 60096512 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:29.938100+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221765632 unmapped: 60096512 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:30.938261+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221765632 unmapped: 60096512 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:31.938391+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221765632 unmapped: 60096512 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:32.938548+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221765632 unmapped: 60096512 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:33.938694+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221765632 unmapped: 60096512 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:34.938838+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221765632 unmapped: 60096512 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:35.939007+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221773824 unmapped: 60088320 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:36.939157+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221773824 unmapped: 60088320 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:37.939292+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221773824 unmapped: 60088320 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:38.939455+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221773824 unmapped: 60088320 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:39.939612+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221773824 unmapped: 60088320 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:40.939750+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221773824 unmapped: 60088320 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 342.560913086s of 342.774291992s, submitted: 29
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:41.939874+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221790208 unmapped: 60071936 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:42.940125+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221790208 unmapped: 60071936 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:43.940265+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 221831168 unmapped: 60030976 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:44.940397+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223002624 unmapped: 58859520 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:45.940650+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223010816 unmapped: 58851328 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:46.940796+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223019008 unmapped: 58843136 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:47.940923+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223019008 unmapped: 58843136 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:48.941147+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223019008 unmapped: 58843136 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:49.941286+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223019008 unmapped: 58843136 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:50.941472+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223019008 unmapped: 58843136 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:51.941651+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223027200 unmapped: 58834944 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:52.941861+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223035392 unmapped: 58826752 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:53.942012+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223035392 unmapped: 58826752 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:54.942159+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223035392 unmapped: 58826752 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:55.942438+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223043584 unmapped: 58818560 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:56.942606+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223043584 unmapped: 58818560 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:57.942748+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223043584 unmapped: 58818560 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:58.942855+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223043584 unmapped: 58818560 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T18:59:59.943003+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223051776 unmapped: 58810368 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:00.943141+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223051776 unmapped: 58810368 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:01.943644+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223051776 unmapped: 58810368 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:02.944445+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223051776 unmapped: 58810368 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:03.944854+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223059968 unmapped: 58802176 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:04.945402+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223059968 unmapped: 58802176 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:05.945758+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223059968 unmapped: 58802176 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:06.945991+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223059968 unmapped: 58802176 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:07.946816+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223059968 unmapped: 58802176 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:08.947248+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223059968 unmapped: 58802176 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:09.947607+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223059968 unmapped: 58802176 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:10.947881+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223059968 unmapped: 58802176 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:11.948129+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223059968 unmapped: 58802176 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:12.948463+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223059968 unmapped: 58802176 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:13.948731+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223059968 unmapped: 58802176 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:14.949027+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223059968 unmapped: 58802176 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:15.949240+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:16.949456+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:17.949736+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:18.949887+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:19.950174+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:20.950460+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:21.950616+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:22.951015+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:23.951190+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:24.951416+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:25.951547+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:26.951662+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:27.951906+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:28.952135+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:29.952280+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:30.952409+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:31.952615+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:32.952866+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:33.953032+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:34.953173+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:35.953316+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:36.953493+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:37.953618+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:38.953738+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:39.953878+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:40.953994+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:41.954137+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:42.954317+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:43.954512+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:44.954654+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:45.954801+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:46.954932+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:47.955058+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:48.955175+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:49.955429+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:50.955553+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:51.955679+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:52.955865+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:53.956028+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:54.956215+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:55.956410+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:56.956538+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:57.956665+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:58.956802+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:00:59.956923+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:00.957040+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:01.957237+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:02.957449+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:03.957599+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:04.957761+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:05.957961+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:06.961525+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:07.961686+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:08.961842+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:09.962010+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:10.962156+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:11.962304+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223068160 unmapped: 58793984 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:12.962548+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223076352 unmapped: 58785792 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:13.962826+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223076352 unmapped: 58785792 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:14.962956+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223076352 unmapped: 58785792 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:15.963088+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223076352 unmapped: 58785792 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:16.963259+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223076352 unmapped: 58785792 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:17.963383+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223076352 unmapped: 58785792 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:18.963773+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223076352 unmapped: 58785792 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:19.963921+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223076352 unmapped: 58785792 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:20.964074+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223076352 unmapped: 58785792 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:21.964242+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223076352 unmapped: 58785792 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:22.964474+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223076352 unmapped: 58785792 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:23.964617+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223084544 unmapped: 58777600 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:24.964777+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223084544 unmapped: 58777600 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:25.964971+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223084544 unmapped: 58777600 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:26.965187+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223084544 unmapped: 58777600 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:27.965319+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223084544 unmapped: 58777600 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:28.965458+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223084544 unmapped: 58777600 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:29.965593+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223084544 unmapped: 58777600 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:30.965713+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223084544 unmapped: 58777600 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:31.965851+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223084544 unmapped: 58777600 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:32.966035+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223084544 unmapped: 58777600 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:33.966226+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223084544 unmapped: 58777600 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:34.966545+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223084544 unmapped: 58777600 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:35.966817+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223084544 unmapped: 58777600 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:36.966995+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223092736 unmapped: 58769408 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:37.967161+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223092736 unmapped: 58769408 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:38.967314+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223092736 unmapped: 58769408 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:39.967491+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223092736 unmapped: 58769408 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:40.967675+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223092736 unmapped: 58769408 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:41.967802+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223092736 unmapped: 58769408 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:42.968046+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223092736 unmapped: 58769408 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:43.968189+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223092736 unmapped: 58769408 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:44.968323+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223092736 unmapped: 58769408 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:45.968457+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223092736 unmapped: 58769408 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:46.968589+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223092736 unmapped: 58769408 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:47.968731+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223092736 unmapped: 58769408 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:48.968896+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223092736 unmapped: 58769408 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:49.969100+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223092736 unmapped: 58769408 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:50.969254+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223092736 unmapped: 58769408 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:51.969496+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223100928 unmapped: 58761216 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:52.969766+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223100928 unmapped: 58761216 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:53.969954+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223100928 unmapped: 58761216 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:54.970095+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223100928 unmapped: 58761216 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:55.970231+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223100928 unmapped: 58761216 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:56.970479+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223100928 unmapped: 58761216 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:57.970598+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223100928 unmapped: 58761216 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:58.970748+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223100928 unmapped: 58761216 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:01:59.970907+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223100928 unmapped: 58761216 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:00.971061+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223100928 unmapped: 58761216 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:01.971190+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223109120 unmapped: 58753024 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:02.971364+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223109120 unmapped: 58753024 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:03.971502+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223109120 unmapped: 58753024 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:04.971742+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223109120 unmapped: 58753024 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:05.971878+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223109120 unmapped: 58753024 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:06.972057+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223109120 unmapped: 58753024 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:07.972202+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223109120 unmapped: 58753024 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:08.972401+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223109120 unmapped: 58753024 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:09.972538+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223109120 unmapped: 58753024 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:10.972709+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223109120 unmapped: 58753024 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:11.972867+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223109120 unmapped: 58753024 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:12.973060+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223109120 unmapped: 58753024 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:13.973257+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223109120 unmapped: 58753024 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:14.973435+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223109120 unmapped: 58753024 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:15.973594+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223109120 unmapped: 58753024 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:16.973740+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223109120 unmapped: 58753024 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:17.974143+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223117312 unmapped: 58744832 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:18.974329+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223117312 unmapped: 58744832 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:19.974577+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223117312 unmapped: 58744832 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:20.974800+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223117312 unmapped: 58744832 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:21.974967+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223117312 unmapped: 58744832 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:22.975263+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223117312 unmapped: 58744832 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:23.975429+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223117312 unmapped: 58744832 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:24.975926+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223117312 unmapped: 58744832 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:25.976072+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223117312 unmapped: 58744832 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:26.976286+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223117312 unmapped: 58744832 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:27.976466+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223117312 unmapped: 58744832 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:28.976589+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223117312 unmapped: 58744832 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:29.976751+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223117312 unmapped: 58744832 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:30.976906+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223117312 unmapped: 58744832 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:31.977027+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223117312 unmapped: 58744832 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:32.977184+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223117312 unmapped: 58744832 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:33.977418+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223125504 unmapped: 58736640 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:34.977607+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223125504 unmapped: 58736640 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:35.977736+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223125504 unmapped: 58736640 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:36.978005+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223125504 unmapped: 58736640 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:37.978275+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223125504 unmapped: 58736640 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:38.978429+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223125504 unmapped: 58736640 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:39.978606+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223125504 unmapped: 58736640 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:40.978719+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223125504 unmapped: 58736640 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:41.978906+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223133696 unmapped: 58728448 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:42.979088+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223133696 unmapped: 58728448 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:43.979230+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223133696 unmapped: 58728448 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:44.979392+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223133696 unmapped: 58728448 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:45.979549+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223133696 unmapped: 58728448 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:46.979732+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223133696 unmapped: 58728448 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:47.979871+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223133696 unmapped: 58728448 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:48.980025+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223133696 unmapped: 58728448 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:49.980158+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223133696 unmapped: 58728448 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:50.980332+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223133696 unmapped: 58728448 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:51.980559+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223133696 unmapped: 58728448 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:52.980813+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223141888 unmapped: 58720256 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:53.980951+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223141888 unmapped: 58720256 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:54.981162+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223141888 unmapped: 58720256 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:55.981396+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223141888 unmapped: 58720256 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:56.981619+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223141888 unmapped: 58720256 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:57.981890+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223141888 unmapped: 58720256 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:58.982149+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223141888 unmapped: 58720256 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:02:59.982375+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223141888 unmapped: 58720256 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:03:00.982571+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223141888 unmapped: 58720256 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:03:01.982762+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223141888 unmapped: 58720256 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:03:02.982972+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223141888 unmapped: 58720256 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:03:03.983168+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223150080 unmapped: 58712064 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:03:04.983390+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223150080 unmapped: 58712064 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:03:05.983630+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223150080 unmapped: 58712064 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:03:06.983768+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223150080 unmapped: 58712064 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:03:07.983952+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223150080 unmapped: 58712064 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:03:08.984160+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223150080 unmapped: 58712064 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:03:09.984332+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223150080 unmapped: 58712064 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:03:10.984571+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223150080 unmapped: 58712064 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:03:11.984726+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223150080 unmapped: 58712064 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:03:12.984888+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223150080 unmapped: 58712064 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:03:13.985028+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223150080 unmapped: 58712064 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:03:14.985219+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223150080 unmapped: 58712064 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:03:15.985598+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223150080 unmapped: 58712064 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:03:16.985848+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223150080 unmapped: 58712064 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:03:17.986068+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223158272 unmapped: 58703872 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:03:18.986254+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223158272 unmapped: 58703872 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:03:19.986413+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223158272 unmapped: 58703872 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:03:20.986577+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223158272 unmapped: 58703872 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:03:21.986738+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223158272 unmapped: 58703872 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:03:22.987293+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223158272 unmapped: 58703872 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:03:23.987441+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223158272 unmapped: 58703872 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:03:24.987648+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223158272 unmapped: 58703872 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:03:25.987767+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223158272 unmapped: 58703872 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:03:26.987904+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 223158272 unmapped: 58703872 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: do_command 'config diff' '{prefix=config diff}'
Sep 30 19:04:00 compute-0 ceph-osd[82241]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:03:27.988085+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: do_command 'config show' '{prefix=config show}'
Sep 30 19:04:00 compute-0 ceph-osd[82241]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Sep 30 19:04:00 compute-0 ceph-osd[82241]: do_command 'counter dump' '{prefix=counter dump}'
Sep 30 19:04:00 compute-0 ceph-osd[82241]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Sep 30 19:04:00 compute-0 ceph-osd[82241]: do_command 'counter schema' '{prefix=counter schema}'
Sep 30 19:04:00 compute-0 ceph-osd[82241]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222945280 unmapped: 58916864 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 19:04:00 compute-0 ceph-osd[82241]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 19:04:00 compute-0 ceph-osd[82241]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969698 data_alloc: 218103808 data_used: 1896448
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:03:28.988211+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: prioritycache tune_memory target: 4294967296 mapped: 222986240 unmapped: 58875904 heap: 281862144 old mem: 2845415832 new mem: 2845415832
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: tick
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_tickets
Sep 30 19:04:00 compute-0 ceph-osd[82241]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T19:03:29.988335+0000)
Sep 30 19:04:00 compute-0 ceph-osd[82241]: osd.0 146 heartbeat osd_stat(store_statfs(0x4f30b6000/0x0/0x4ffc00000, data 0x95f8fc/0xa36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0xc10f9c5), peers [1] op hist [])
Sep 30 19:04:00 compute-0 ceph-osd[82241]: do_command 'log dump' '{prefix=log dump}'
Sep 30 19:04:00 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Sep 30 19:04:00 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1994101095' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Sep 30 19:04:00 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Sep 30 19:04:01 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/112825725' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Sep 30 19:04:01 compute-0 ceph-mon[73755]: from='client.19890 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:01 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/71466818' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Sep 30 19:04:01 compute-0 ceph-mon[73755]: pgmap v2579: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 766 B/s rd, 0 op/s
Sep 30 19:04:01 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3851135064' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 19:04:01 compute-0 ceph-mon[73755]: from='client.28115 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:01 compute-0 ceph-mon[73755]: from='client.19902 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:01 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1994101095' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Sep 30 19:04:01 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2610471172' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Sep 30 19:04:01 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/4273556960' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Sep 30 19:04:01 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19914 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:01 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:04:01 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:04:01 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:04:01.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:04:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0)
Sep 30 19:04:01 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4251933188' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 19:04:01 compute-0 openstack_network_exporter[279566]: ERROR   19:04:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Sep 30 19:04:01 compute-0 openstack_network_exporter[279566]: ERROR   19:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 19:04:01 compute-0 openstack_network_exporter[279566]: ERROR   19:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Sep 30 19:04:01 compute-0 openstack_network_exporter[279566]: ERROR   19:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Sep 30 19:04:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 19:04:01 compute-0 openstack_network_exporter[279566]: ERROR   19:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Sep 30 19:04:01 compute-0 openstack_network_exporter[279566]: 
Sep 30 19:04:01 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19924 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:04:01 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.28135 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:01 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr versions"} v 0)
Sep 30 19:04:01 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3700030127' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Sep 30 19:04:01 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19932 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:02 compute-0 ceph-mon[73755]: from='client.19914 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:02 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2147346475' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Sep 30 19:04:02 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/4251933188' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 19:04:02 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1199056701' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Sep 30 19:04:02 compute-0 ceph-mon[73755]: from='client.19924 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:02 compute-0 ceph-mon[73755]: from='client.28135 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:02 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3700030127' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Sep 30 19:04:02 compute-0 ceph-mon[73755]: from='client.19932 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:02 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3264826547' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Sep 30 19:04:02 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.28143 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:02 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon stat"} v 0)
Sep 30 19:04:02 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2916086542' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Sep 30 19:04:02 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:04:02 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:04:02 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:04:02.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:04:02 compute-0 crontab[387055]: (root) LIST (root)
Sep 30 19:04:02 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19942 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:02 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2580: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:04:02 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.28151 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:02 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19950 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:02 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.28159 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:02 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19954 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "node ls"} v 0)
Sep 30 19:04:03 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/914316547' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Sep 30 19:04:03 compute-0 ceph-mon[73755]: from='client.28143 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:03 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2916086542' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Sep 30 19:04:03 compute-0 ceph-mon[73755]: from='client.19942 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:03 compute-0 ceph-mon[73755]: pgmap v2580: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:04:03 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2996930861' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Sep 30 19:04:03 compute-0 ceph-mon[73755]: from='client.28151 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:03 compute-0 ceph-mon[73755]: from='client.19950 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:03 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/209043629' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Sep 30 19:04:03 compute-0 ceph-mon[73755]: from='client.28159 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:03 compute-0 ceph-mon[73755]: from='client.19954 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:03 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:04:03 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:04:03 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:04:03.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:04:03 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.19964 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:03 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.28169 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Sep 30 19:04:03 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2935998326' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Sep 30 19:04:03 compute-0 nova_compute[265391]: 2025-09-30 19:04:03.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:04:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Sep 30 19:04:03 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/400747515' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Sep 30 19:04:03 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.28177 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:03 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Sep 30 19:04:03 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1099182053' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Sep 30 19:04:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:04:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:04:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:04:03 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:04:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:04:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:04:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:04:04 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:04:04 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:04:04.018Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:04:04 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:04:04 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 19:04:04 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:04:04.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 19:04:04 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/914316547' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Sep 30 19:04:04 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/244187955' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 19:04:04 compute-0 ceph-mon[73755]: from='client.19964 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:04 compute-0 ceph-mon[73755]: from='client.28169 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:04 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2935998326' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Sep 30 19:04:04 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1004522143' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Sep 30 19:04:04 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/400747515' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Sep 30 19:04:04 compute-0 ceph-mon[73755]: from='client.28177 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:04 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1099182053' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Sep 30 19:04:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Sep 30 19:04:04 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1857942967' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Sep 30 19:04:04 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.28185 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Sep 30 19:04:04 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2065453627' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Sep 30 19:04:04 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2581: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:04:04 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.28193 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Sep 30 19:04:04 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1987872530' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Sep 30 19:04:04 compute-0 nova_compute[265391]: 2025-09-30 19:04:04.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:04:04 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Sep 30 19:04:04 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1862213668' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Sep 30 19:04:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Sep 30 19:04:05 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1109097292' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Sep 30 19:04:05 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.28201 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:05 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3676917808' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Sep 30 19:04:05 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1857942967' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Sep 30 19:04:05 compute-0 ceph-mon[73755]: from='client.28185 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:05 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2065453627' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Sep 30 19:04:05 compute-0 ceph-mon[73755]: pgmap v2581: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:04:05 compute-0 ceph-mon[73755]: from='client.28193 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:05 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1987872530' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Sep 30 19:04:05 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1862213668' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Sep 30 19:04:05 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1109097292' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Sep 30 19:04:05 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/280760085' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Sep 30 19:04:05 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:04:05 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:04:05 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:04:05.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:04:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Sep 30 19:04:05 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2060393709' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Sep 30 19:04:05 compute-0 nova_compute[265391]: 2025-09-30 19:04:05.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:04:05 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.28207 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Sep 30 19:04:05 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2067289253' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Sep 30 19:04:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Sep 30 19:04:05 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1866048963' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Sep 30 19:04:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Sep 30 19:04:05 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/783253440' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Sep 30 19:04:05 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.28217 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:05 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata"} v 0)
Sep 30 19:04:05 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1713804743' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Sep 30 19:04:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Sep 30 19:04:06 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1941053519' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Sep 30 19:04:06 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:04:06 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:04:06 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:04:06.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:04:06 compute-0 systemd[1]: Starting Hostname Service...
Sep 30 19:04:06 compute-0 ceph-mon[73755]: from='client.28201 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:06 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2060393709' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Sep 30 19:04:06 compute-0 ceph-mon[73755]: from='client.28207 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:06 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2196663918' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Sep 30 19:04:06 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2067289253' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Sep 30 19:04:06 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1866048963' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Sep 30 19:04:06 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/783253440' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Sep 30 19:04:06 compute-0 ceph-mon[73755]: from='client.28217 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:06 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1713804743' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Sep 30 19:04:06 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1941053519' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Sep 30 19:04:06 compute-0 systemd[1]: Started Hostname Service.
Sep 30 19:04:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Sep 30 19:04:06 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4168390125' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Sep 30 19:04:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd utilization"} v 0)
Sep 30 19:04:06 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4248394770' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Sep 30 19:04:06 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2582: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:04:06 compute-0 nova_compute[265391]: 2025-09-30 19:04:06.429 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:04:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:04:06 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Sep 30 19:04:06 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4087842539' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Sep 30 19:04:06 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Sep 30 19:04:06 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:04:06.574280) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 19:04:06 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Sep 30 19:04:06 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759259046574317, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 1723, "num_deletes": 257, "total_data_size": 2927938, "memory_usage": 2963896, "flush_reason": "Manual Compaction"}
Sep 30 19:04:06 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Sep 30 19:04:06 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759259046588459, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 2857143, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 62109, "largest_seqno": 63831, "table_properties": {"data_size": 2849148, "index_size": 4680, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 18239, "raw_average_key_size": 20, "raw_value_size": 2832508, "raw_average_value_size": 3207, "num_data_blocks": 204, "num_entries": 883, "num_filter_entries": 883, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759258891, "oldest_key_time": 1759258891, "file_creation_time": 1759259046, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Sep 30 19:04:06 compute-0 ceph-mon[73755]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 14225 microseconds, and 7489 cpu microseconds.
Sep 30 19:04:06 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 19:04:06 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:04:06.588503) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 2857143 bytes OK
Sep 30 19:04:06 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:04:06.588522) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Sep 30 19:04:06 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:04:06.591953) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Sep 30 19:04:06 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:04:06.591984) EVENT_LOG_v1 {"time_micros": 1759259046591976, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 19:04:06 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:04:06.592005) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 19:04:06 compute-0 ceph-mon[73755]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 2920324, prev total WAL file size 2920324, number of live WAL files 2.
Sep 30 19:04:06 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 19:04:06 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:04:06.592988) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353131' seq:72057594037927935, type:22 .. '6C6F676D0032373634' seq:0, type:0; will stop at (end)
Sep 30 19:04:06 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 19:04:06 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(2790KB)], [146(10MB)]
Sep 30 19:04:06 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759259046593045, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 13604858, "oldest_snapshot_seqno": -1}
Sep 30 19:04:06 compute-0 ceph-mon[73755]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 8799 keys, 13457498 bytes, temperature: kUnknown
Sep 30 19:04:06 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759259046679696, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 13457498, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13404570, "index_size": 29839, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22021, "raw_key_size": 233567, "raw_average_key_size": 26, "raw_value_size": 13253351, "raw_average_value_size": 1506, "num_data_blocks": 1153, "num_entries": 8799, "num_filter_entries": 8799, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759253787, "oldest_key_time": 0, "file_creation_time": 1759259046, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6f6abc8b-413d-4a47-b5c7-406ff28f77d6", "db_session_id": "VEKEW5JSCHGYP3QRRVQQ", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Sep 30 19:04:06 compute-0 ceph-mon[73755]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 19:04:06 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:04:06.679894) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 13457498 bytes
Sep 30 19:04:06 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:04:06.683754) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.9 rd, 155.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 10.2 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(9.5) write-amplify(4.7) OK, records in: 9329, records dropped: 530 output_compression: NoCompression
Sep 30 19:04:06 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:04:06.683775) EVENT_LOG_v1 {"time_micros": 1759259046683765, "job": 90, "event": "compaction_finished", "compaction_time_micros": 86705, "compaction_time_cpu_micros": 32427, "output_level": 6, "num_output_files": 1, "total_output_size": 13457498, "num_input_records": 9329, "num_output_records": 8799, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 19:04:06 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 19:04:06 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759259046684384, "job": 90, "event": "table_file_deletion", "file_number": 148}
Sep 30 19:04:06 compute-0 ceph-mon[73755]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 19:04:06 compute-0 ceph-mon[73755]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759259046686042, "job": 90, "event": "table_file_deletion", "file_number": 146}
Sep 30 19:04:06 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:04:06.592835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 19:04:06 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:04:06.686097) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 19:04:06 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:04:06.686109) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 19:04:06 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:04:06.686111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 19:04:06 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:04:06.686113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 19:04:06 compute-0 ceph-mon[73755]: rocksdb: (Original Log Time 2025/09/30-19:04:06.686114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 19:04:06 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.20040 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:06 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.20044 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:07 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:04:07 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:04:07 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:04:07.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:04:07 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.20048 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:07 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/4168390125' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Sep 30 19:04:07 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/4248394770' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Sep 30 19:04:07 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3466285778' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Sep 30 19:04:07 compute-0 ceph-mon[73755]: pgmap v2582: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:04:07 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/4087842539' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Sep 30 19:04:07 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1769819485' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Sep 30 19:04:07 compute-0 ceph-mon[73755]: from='client.20040 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:07 compute-0 ceph-mon[73755]: from='client.20044 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:07 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/795006569' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Sep 30 19:04:07 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2502588670' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Sep 30 19:04:07 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.20052 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:07 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 19:04:07 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 19:04:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 19:04:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 19:04:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 19:04:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 19:04:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 19:04:07 compute-0 ceph-mgr[74051]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 19:04:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:04:07.497Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 19:04:07 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:04:07.498Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:04:07 compute-0 sudo[387799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 19:04:07 compute-0 sudo[387799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:04:07 compute-0 sudo[387799]: pam_unix(sudo:session): session closed for user root
Sep 30 19:04:07 compute-0 podman[387823]: 2025-09-30 19:04:07.67149416 +0000 UTC m=+0.078745505 container health_status 422f6abf0852b3ec959e4d6dd447f6b9c428af196293837ace1326969920db4b (image=38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest)
Sep 30 19:04:07 compute-0 podman[387824]: 2025-09-30 19:04:07.71409338 +0000 UTC m=+0.113363998 container health_status ccf37146f390393469d9490f365ee9bfd68d65318532033d624eee25883d10ff (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Sep 30 19:04:07 compute-0 podman[387826]: 2025-09-30 19:04:07.716717158 +0000 UTC m=+0.121083548 container health_status c9d0ed459d0797b0ff8d28ee0917ed810b03a16fbb74f09a41dba84f844790b4 (image=38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.129.56.221:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20250930, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Sep 30 19:04:07 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.20062 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.20072 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "quorum_status"} v 0)
Sep 30 19:04:08 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2221413528' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Sep 30 19:04:08 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:04:08 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:04:08 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:04:08.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:04:08 compute-0 ceph-mon[73755]: from='client.20048 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:08 compute-0 ceph-mon[73755]: from='client.20052 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:08 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 19:04:08 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/4052718143' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Sep 30 19:04:08 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2145157587' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Sep 30 19:04:08 compute-0 ceph-mon[73755]: from='client.20062 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:08 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3814407398' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Sep 30 19:04:08 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2221413528' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Sep 30 19:04:08 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2726757586' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2583: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.20078 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Optimize plan auto_2025-09-30_19:04:08
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [balancer INFO root] do_upmap
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [balancer INFO root] pools ['default.rgw.log', '.nfs', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', '.mgr', '.rgw.root', 'volumes', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'images']
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 19:04:08 compute-0 nova_compute[265391]: 2025-09-30 19:04:08.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:04:08 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "versions"} v 0)
Sep 30 19:04:08 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/793667937' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Sep 30 19:04:08 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-mgr-compute-0-efvthf[74047]: ::ffff:192.168.122.100 - - [30/Sep/2025:19:04:08] "GET /metrics HTTP/1.1" 200 46744 "" "Prometheus/2.51.0"
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: [prometheus INFO cherrypy.access.140370575226864] ::ffff:192.168.122.100 - - [30/Sep/2025:19:04:08] "GET /metrics HTTP/1.1" 200 46744 "" "Prometheus/2.51.0"
Sep 30 19:04:08 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.20088 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:04:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:04:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:04:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:04:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:04:08 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:04:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:04:09 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:04:09 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:04:09.020Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:04:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Sep 30 19:04:09 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/643641839' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Sep 30 19:04:09 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.28269 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:09 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:04:09 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:04:09 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:04:09.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:04:09 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.20100 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:09 compute-0 ceph-mon[73755]: from='client.20072 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:09 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/950578681' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Sep 30 19:04:09 compute-0 ceph-mon[73755]: pgmap v2583: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:04:09 compute-0 ceph-mon[73755]: from='client.20078 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:09 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2325417661' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Sep 30 19:04:09 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/793667937' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Sep 30 19:04:09 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2981551538' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Sep 30 19:04:09 compute-0 ceph-mon[73755]: from='client.20088 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:09 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/643641839' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Sep 30 19:04:09 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1952010579' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Sep 30 19:04:09 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Sep 30 19:04:09 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/767955293' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Sep 30 19:04:09 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 19:04:09 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 19:04:09 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.28277 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:09 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.28281 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:09 compute-0 nova_compute[265391]: 2025-09-30 19:04:09.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:04:09 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.28291 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config dump"} v 0)
Sep 30 19:04:10 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3040008997' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Sep 30 19:04:10 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:04:10 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:04:10 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:04:10.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:04:10 compute-0 ceph-mon[73755]: from='client.28269 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:10 compute-0 ceph-mon[73755]: from='client.20100 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:10 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/767955293' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Sep 30 19:04:10 compute-0 ceph-mon[73755]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 19:04:10 compute-0 ceph-mon[73755]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 19:04:10 compute-0 ceph-mon[73755]: from='client.28277 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:10 compute-0 ceph-mon[73755]: from='client.28281 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:10 compute-0 ceph-mon[73755]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 19:04:10 compute-0 ceph-mon[73755]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 19:04:10 compute-0 ceph-mon[73755]: from='client.28291 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:10 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3040008997' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Sep 30 19:04:10 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2584: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:04:10 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.28297 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:10 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.20128 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:10 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.28305 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:10 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Sep 30 19:04:10 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3076856318' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Sep 30 19:04:11 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:04:11 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:04:11 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:04:11.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:04:11 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.28313 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:11 compute-0 ceph-mon[73755]: pgmap v2584: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:04:11 compute-0 ceph-mon[73755]: from='client.28297 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:11 compute-0 ceph-mon[73755]: from='client.20128 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:11 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/3004934123' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Sep 30 19:04:11 compute-0 ceph-mon[73755]: from='client.28305 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:11 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3076856318' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Sep 30 19:04:11 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/715233239' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Sep 30 19:04:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df"} v 0)
Sep 30 19:04:11 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/412535743' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Sep 30 19:04:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:04:11 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.28321 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:11 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "fs dump"} v 0)
Sep 30 19:04:11 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3684697327' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Sep 30 19:04:11 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 19:04:11 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 19:04:12 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.28329 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:12 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:04:12 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:04:12 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:04:12.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:04:12 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "fs ls"} v 0)
Sep 30 19:04:12 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/450248131' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Sep 30 19:04:12 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2585: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:04:12 compute-0 ceph-mon[73755]: from='client.28313 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:12 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2986511073' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Sep 30 19:04:12 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/412535743' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Sep 30 19:04:12 compute-0 ceph-mon[73755]: from='client.28321 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:12 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/1746201692' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Sep 30 19:04:12 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/3684697327' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Sep 30 19:04:12 compute-0 ceph-mon[73755]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 19:04:12 compute-0 ceph-mon[73755]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 19:04:12 compute-0 ceph-mon[73755]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 19:04:12 compute-0 ceph-mon[73755]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 19:04:12 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/450248131' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Sep 30 19:04:12 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.20156 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:12 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.28349 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:13 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mds stat"} v 0)
Sep 30 19:04:13 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2357266147' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Sep 30 19:04:13 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:04:13 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:04:13 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:04:13.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:04:13 compute-0 ceph-mon[73755]: from='client.28329 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 19:04:13 compute-0 ceph-mon[73755]: pgmap v2585: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 511 B/s rd, 0 op/s
Sep 30 19:04:13 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/18398888' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Sep 30 19:04:13 compute-0 ceph-mon[73755]: from='client.20156 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:13 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/2357266147' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Sep 30 19:04:13 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2489630526' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Sep 30 19:04:13 compute-0 nova_compute[265391]: 2025-09-30 19:04:13.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:04:13 compute-0 sudo[388554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 19:04:13 compute-0 sudo[388554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:04:13 compute-0 sudo[388554]: pam_unix(sudo:session): session closed for user root
Sep 30 19:04:13 compute-0 sudo[388581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 19:04:13 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon dump"} v 0)
Sep 30 19:04:13 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1192387500' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Sep 30 19:04:13 compute-0 sudo[388581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:04:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:04:13 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 19:04:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:04:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 19:04:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:04:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 19:04:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-nfs-cephfs-1-0-compute-0-syzvbh[327761]: 30/09/2025 19:04:14 : epoch 68dc2147 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 19:04:14 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:04:14.019Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:04:14 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.20172 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:14 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:04:14 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 19:04:14 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:04:14.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 19:04:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2586: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:04:14 compute-0 nova_compute[265391]: 2025-09-30 19:04:14.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:04:14 compute-0 ceph-mon[73755]: from='client.28349 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:14 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/1192387500' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Sep 30 19:04:14 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2642681482' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Sep 30 19:04:14 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2968927765' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Sep 30 19:04:14 compute-0 sudo[388581]: pam_unix(sudo:session): session closed for user root
Sep 30 19:04:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 19:04:14 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 19:04:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 19:04:14 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 19:04:14 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2587: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 604 B/s rd, 0 op/s
Sep 30 19:04:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 19:04:14 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 19:04:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 19:04:14 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 19:04:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 19:04:14 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 19:04:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 19:04:14 compute-0 ceph-mon[73755]: log_channel(audit) log [INF] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 19:04:14 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 19:04:14 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 19:04:14 compute-0 sudo[388870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 19:04:14 compute-0 sudo[388870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:04:14 compute-0 sudo[388870]: pam_unix(sudo:session): session closed for user root
Sep 30 19:04:14 compute-0 sudo[388911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 19:04:14 compute-0 sudo[388911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:04:14 compute-0 nova_compute[265391]: 2025-09-30 19:04:14.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Sep 30 19:04:14 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.20178 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:15 compute-0 podman[389106]: 2025-09-30 19:04:15.153463514 +0000 UTC m=+0.045187358 container create 169eed507bbd155bacce1ce16437319e037f46e7848fd8b84e039627893d9f10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_haslett, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 19:04:15 compute-0 systemd[1]: Started libpod-conmon-169eed507bbd155bacce1ce16437319e037f46e7848fd8b84e039627893d9f10.scope.
Sep 30 19:04:15 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.28375 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:15 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:04:15 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:04:15 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:04:15.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:04:15 compute-0 systemd[1]: Started libcrun container.
Sep 30 19:04:15 compute-0 podman[389106]: 2025-09-30 19:04:15.129799302 +0000 UTC m=+0.021523176 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 19:04:15 compute-0 podman[389106]: 2025-09-30 19:04:15.229717013 +0000 UTC m=+0.121440877 container init 169eed507bbd155bacce1ce16437319e037f46e7848fd8b84e039627893d9f10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_haslett, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Sep 30 19:04:15 compute-0 podman[389106]: 2025-09-30 19:04:15.235875772 +0000 UTC m=+0.127599616 container start 169eed507bbd155bacce1ce16437319e037f46e7848fd8b84e039627893d9f10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_haslett, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 19:04:15 compute-0 tender_haslett[389140]: 167 167
Sep 30 19:04:15 compute-0 systemd[1]: libpod-169eed507bbd155bacce1ce16437319e037f46e7848fd8b84e039627893d9f10.scope: Deactivated successfully.
Sep 30 19:04:15 compute-0 podman[389106]: 2025-09-30 19:04:15.242422521 +0000 UTC m=+0.134146385 container attach 169eed507bbd155bacce1ce16437319e037f46e7848fd8b84e039627893d9f10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_haslett, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 19:04:15 compute-0 podman[389106]: 2025-09-30 19:04:15.245995963 +0000 UTC m=+0.137719807 container died 169eed507bbd155bacce1ce16437319e037f46e7848fd8b84e039627893d9f10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_haslett, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 19:04:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ebe55bfb21669bf05f77cba82c7490ee6dd859e571ffe4e72c34c313ac26e12-merged.mount: Deactivated successfully.
Sep 30 19:04:15 compute-0 podman[389106]: 2025-09-30 19:04:15.284904558 +0000 UTC m=+0.176628402 container remove 169eed507bbd155bacce1ce16437319e037f46e7848fd8b84e039627893d9f10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_haslett, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 19:04:15 compute-0 systemd[1]: libpod-conmon-169eed507bbd155bacce1ce16437319e037f46e7848fd8b84e039627893d9f10.scope: Deactivated successfully.
Sep 30 19:04:15 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.20182 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:15 compute-0 podman[389206]: 2025-09-30 19:04:15.486986616 +0000 UTC m=+0.065662937 container create 4325d54e13724a38ba6f33f56e08e1a5efd22c8966ce71d4f306d3cd3929561e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_swanson, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 19:04:15 compute-0 ceph-mon[73755]: from='client.20172 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:15 compute-0 ceph-mon[73755]: pgmap v2586: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 767 B/s rd, 0 op/s
Sep 30 19:04:15 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/412147261' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Sep 30 19:04:15 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 19:04:15 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 19:04:15 compute-0 ceph-mon[73755]: pgmap v2587: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 604 B/s rd, 0 op/s
Sep 30 19:04:15 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 19:04:15 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' 
Sep 30 19:04:15 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 19:04:15 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 19:04:15 compute-0 ceph-mon[73755]: from='mgr.14526 192.168.122.100:0/3226362960' entity='mgr.compute-0.efvthf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 19:04:15 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/2257068152' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Sep 30 19:04:15 compute-0 ceph-mon[73755]: from='client.20178 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:15 compute-0 systemd[1]: Started libpod-conmon-4325d54e13724a38ba6f33f56e08e1a5efd22c8966ce71d4f306d3cd3929561e.scope.
Sep 30 19:04:15 compute-0 podman[389206]: 2025-09-30 19:04:15.460006789 +0000 UTC m=+0.038683120 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 19:04:15 compute-0 systemd[1]: Started libcrun container.
Sep 30 19:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee96ca70451a1c5f539ff16e03bc0d3334b643bb2be4814c9ccc4fd3d72f64fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 19:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee96ca70451a1c5f539ff16e03bc0d3334b643bb2be4814c9ccc4fd3d72f64fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 19:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee96ca70451a1c5f539ff16e03bc0d3334b643bb2be4814c9ccc4fd3d72f64fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 19:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee96ca70451a1c5f539ff16e03bc0d3334b643bb2be4814c9ccc4fd3d72f64fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 19:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee96ca70451a1c5f539ff16e03bc0d3334b643bb2be4814c9ccc4fd3d72f64fd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 19:04:15 compute-0 podman[389206]: 2025-09-30 19:04:15.574483795 +0000 UTC m=+0.153160146 container init 4325d54e13724a38ba6f33f56e08e1a5efd22c8966ce71d4f306d3cd3929561e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_swanson, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Sep 30 19:04:15 compute-0 podman[389206]: 2025-09-30 19:04:15.586006913 +0000 UTC m=+0.164683234 container start 4325d54e13724a38ba6f33f56e08e1a5efd22c8966ce71d4f306d3cd3929561e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_swanson, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 19:04:15 compute-0 podman[389206]: 2025-09-30 19:04:15.589454662 +0000 UTC m=+0.168131003 container attach 4325d54e13724a38ba6f33f56e08e1a5efd22c8966ce71d4f306d3cd3929561e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_swanson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 19:04:15 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd dump"} v 0)
Sep 30 19:04:15 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/377434133' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Sep 30 19:04:15 compute-0 dazzling_swanson[389266]: --> passed data devices: 0 physical, 1 LVM
Sep 30 19:04:15 compute-0 dazzling_swanson[389266]: --> All data devices are unavailable
Sep 30 19:04:15 compute-0 systemd[1]: libpod-4325d54e13724a38ba6f33f56e08e1a5efd22c8966ce71d4f306d3cd3929561e.scope: Deactivated successfully.
Sep 30 19:04:15 compute-0 podman[389206]: 2025-09-30 19:04:15.932451969 +0000 UTC m=+0.511128280 container died 4325d54e13724a38ba6f33f56e08e1a5efd22c8966ce71d4f306d3cd3929561e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_swanson, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 19:04:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee96ca70451a1c5f539ff16e03bc0d3334b643bb2be4814c9ccc4fd3d72f64fd-merged.mount: Deactivated successfully.
Sep 30 19:04:15 compute-0 podman[389206]: 2025-09-30 19:04:15.979224637 +0000 UTC m=+0.557900958 container remove 4325d54e13724a38ba6f33f56e08e1a5efd22c8966ce71d4f306d3cd3929561e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_swanson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Sep 30 19:04:15 compute-0 systemd[1]: libpod-conmon-4325d54e13724a38ba6f33f56e08e1a5efd22c8966ce71d4f306d3cd3929561e.scope: Deactivated successfully.
Sep 30 19:04:16 compute-0 sudo[388911]: pam_unix(sudo:session): session closed for user root
Sep 30 19:04:16 compute-0 sudo[389394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 19:04:16 compute-0 sudo[389394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:04:16 compute-0 sudo[389394]: pam_unix(sudo:session): session closed for user root
Sep 30 19:04:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Sep 30 19:04:16 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4218910979' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Sep 30 19:04:16 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:04:16 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:04:16 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.101 - anonymous [30/Sep/2025:19:04:16.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:04:16 compute-0 sudo[389442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/63d32c6a-fa18-54ed-8711-9a3915cc367b/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 63d32c6a-fa18-54ed-8711-9a3915cc367b -- lvm list --format json
Sep 30 19:04:16 compute-0 sudo[389442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 19:04:16 compute-0 nova_compute[265391]: 2025-09-30 19:04:16.424 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:04:16 compute-0 nova_compute[265391]: 2025-09-30 19:04:16.427 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:04:16 compute-0 nova_compute[265391]: 2025-09-30 19:04:16.428 2 DEBUG oslo_service.periodic_task [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Sep 30 19:04:16 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.20200 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:16 compute-0 ceph-mon[73755]: from='client.28375 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:16 compute-0 ceph-mon[73755]: from='client.20182 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:16 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/494616576' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Sep 30 19:04:16 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/377434133' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Sep 30 19:04:16 compute-0 ceph-mon[73755]: from='client.? 192.168.122.100:0/4218910979' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Sep 30 19:04:16 compute-0 ceph-mon[73755]: from='client.? 192.168.122.101:0/6242083' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Sep 30 19:04:16 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.28389 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:16 compute-0 ceph-mgr[74051]: log_channel(cluster) log [DBG] : pgmap v2588: 353 pgs: 353 active+clean; 41 MiB data, 405 MiB used, 40 GiB / 40 GiB avail; 604 B/s rd, 0 op/s
Sep 30 19:04:16 compute-0 ceph-mon[73755]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 19:04:16 compute-0 podman[389719]: 2025-09-30 19:04:16.602182334 +0000 UTC m=+0.057236980 container create da3e381113a656de61b7af0a03cfe0d87ffbdb41db8acb45526faf2b6e562c5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_mestorf, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 19:04:16 compute-0 systemd[1]: Started libpod-conmon-da3e381113a656de61b7af0a03cfe0d87ffbdb41db8acb45526faf2b6e562c5f.scope.
Sep 30 19:04:16 compute-0 systemd[1]: Started libcrun container.
Sep 30 19:04:16 compute-0 podman[389719]: 2025-09-30 19:04:16.572967639 +0000 UTC m=+0.028022305 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 19:04:16 compute-0 podman[389719]: 2025-09-30 19:04:16.681870031 +0000 UTC m=+0.136924707 container init da3e381113a656de61b7af0a03cfe0d87ffbdb41db8acb45526faf2b6e562c5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_mestorf, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 19:04:16 compute-0 podman[389719]: 2025-09-30 19:04:16.689672573 +0000 UTC m=+0.144727219 container start da3e381113a656de61b7af0a03cfe0d87ffbdb41db8acb45526faf2b6e562c5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_mestorf, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Sep 30 19:04:16 compute-0 serene_mestorf[389771]: 167 167
Sep 30 19:04:16 compute-0 systemd[1]: libpod-da3e381113a656de61b7af0a03cfe0d87ffbdb41db8acb45526faf2b6e562c5f.scope: Deactivated successfully.
Sep 30 19:04:16 compute-0 podman[389719]: 2025-09-30 19:04:16.764100245 +0000 UTC m=+0.219154911 container attach da3e381113a656de61b7af0a03cfe0d87ffbdb41db8acb45526faf2b6e562c5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 19:04:16 compute-0 podman[389719]: 2025-09-30 19:04:16.764437933 +0000 UTC m=+0.219492579 container died da3e381113a656de61b7af0a03cfe0d87ffbdb41db8acb45526faf2b6e562c5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_mestorf, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Sep 30 19:04:16 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.20204 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:16 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:04:16 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 19:04:16 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:04:16 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 19:04:16 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:04:16 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 19:04:16 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:04:16 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 19:04:16 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:04:16 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000998787452383278 of space, bias 1.0, pg target 0.1997574904766556 quantized to 32 (current 32)
Sep 30 19:04:16 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:04:16 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.630884938464543e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Sep 30 19:04:16 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:04:16 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 19:04:16 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:04:16 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 9.538606173080679e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 19:04:16 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:04:16 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00011446327407696817 quantized to 32 (current 32)
Sep 30 19:04:16 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:04:16 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.243126098847431e-06 of space, bias 1.0, pg target 0.0006486252197694861 quantized to 32 (current 32)
Sep 30 19:04:16 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:04:16 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 19:04:16 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 19:04:16 compute-0 ceph-mgr[74051]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.9077212346161359e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 19:04:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d894b71f241aea39c6a388ff7fc8190e67fea94135b3d502345c2ecc9cb147e-merged.mount: Deactivated successfully.
Sep 30 19:04:16 compute-0 nova_compute[265391]: 2025-09-30 19:04:16.945 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Sep 30 19:04:16 compute-0 nova_compute[265391]: 2025-09-30 19:04:16.945 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Sep 30 19:04:16 compute-0 nova_compute[265391]: 2025-09-30 19:04:16.945 2 DEBUG oslo_concurrency.lockutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Sep 30 19:04:16 compute-0 nova_compute[265391]: 2025-09-30 19:04:16.946 2 DEBUG nova.compute.resource_tracker [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Sep 30 19:04:16 compute-0 nova_compute[265391]: 2025-09-30 19:04:16.946 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Sep 30 19:04:17 compute-0 ovs-appctl[389915]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Sep 30 19:04:17 compute-0 podman[389810]: 2025-09-30 19:04:17.057202443 +0000 UTC m=+0.350878001 container remove da3e381113a656de61b7af0a03cfe0d87ffbdb41db8acb45526faf2b6e562c5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 19:04:17 compute-0 systemd[1]: libpod-conmon-da3e381113a656de61b7af0a03cfe0d87ffbdb41db8acb45526faf2b6e562c5f.scope: Deactivated successfully.
Sep 30 19:04:17 compute-0 ovs-appctl[389920]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Sep 30 19:04:17 compute-0 ovs-appctl[389926]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Sep 30 19:04:17 compute-0 radosgw[96126]: ====== starting new request req=0x7f25f11485d0 =====
Sep 30 19:04:17 compute-0 radosgw[96126]: ====== req done req=0x7f25f11485d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 19:04:17 compute-0 radosgw[96126]: beast: 0x7f25f11485d0: 192.168.122.100 - anonymous [30/Sep/2025:19:04:17.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 19:04:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Sep 30 19:04:17 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/946550237' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Sep 30 19:04:17 compute-0 podman[389966]: 2025-09-30 19:04:17.217330077 +0000 UTC m=+0.025885479 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 19:04:17 compute-0 podman[389966]: 2025-09-30 19:04:17.323567911 +0000 UTC m=+0.132123303 container create d2d87bd3209bec0b0ce0c2aeaa0d36bb32df5098369d762cab47f9a052491ac7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 19:04:17 compute-0 ceph-mgr[74051]: log_channel(audit) log [DBG] : from='client.28401 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 19:04:17 compute-0 ceph-mon[73755]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 19:04:17 compute-0 ceph-mon[73755]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3656320152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 19:04:17 compute-0 nova_compute[265391]: 2025-09-30 19:04:17.423 2 DEBUG oslo_concurrency.processutils [None req-20ac4ccb-77ec-43ee-8e92-7783418950bf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Sep 30 19:04:17 compute-0 systemd[1]: Started libpod-conmon-d2d87bd3209bec0b0ce0c2aeaa0d36bb32df5098369d762cab47f9a052491ac7.scope.
Sep 30 19:04:17 compute-0 systemd[1]: Started libcrun container.
Sep 30 19:04:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f797ccd3bcdd4a17b98c00efa5125dc46d5d4eb1b90fa9f5fd85f784c28b1c82/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 19:04:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f797ccd3bcdd4a17b98c00efa5125dc46d5d4eb1b90fa9f5fd85f784c28b1c82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 19:04:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f797ccd3bcdd4a17b98c00efa5125dc46d5d4eb1b90fa9f5fd85f784c28b1c82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 19:04:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f797ccd3bcdd4a17b98c00efa5125dc46d5d4eb1b90fa9f5fd85f784c28b1c82/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 19:04:17 compute-0 ceph-63d32c6a-fa18-54ed-8711-9a3915cc367b-alertmanager-compute-0[104212]: ts=2025-09-30T19:04:17.499Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 19:04:17 compute-0 podman[389966]: 2025-09-30 19:04:17.528974125 +0000 UTC m=+0.337529537 container init d2d87bd3209bec0b0ce0c2aeaa0d36bb32df5098369d762cab47f9a052491ac7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_ramanujan, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 19:04:17 compute-0 podman[389966]: 2025-09-30 19:04:17.539310142 +0000 UTC m=+0.347865534 container start d2d87bd3209bec0b0ce0c2aeaa0d36bb32df5098369d762cab47f9a052491ac7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_ramanujan, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)